[
  {
    "path": ".gitignore",
    "content": "out\nnode_modules\n.vscode-test/\n*.vsix\n*.bin"
  },
  {
    "path": ".vscode/launch.json",
    "content": "// A launch configuration that compiles the extension and then opens it inside a new window\n// Use IntelliSense to learn about possible attributes.\n// Hover to view descriptions of existing attributes.\n// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387\n{\n\t\"version\": \"0.2.0\",\n\t\"configurations\": [{\n\t\t\t\"name\": \"Run Extension\",\n\t\t\t\"type\": \"extensionHost\",\n\t\t\t\"request\": \"launch\",\n\t\t\t\"runtimeExecutable\": \"${execPath}\",\n\t\t\t\"args\": [\n\t\t\t\t\"--extensionDevelopmentPath=${workspaceFolder}\"\n\t\t\t],\n\t\t\t\"outFiles\": [\n\t\t\t\t\"${workspaceFolder}/out/**/*.js\"\n\t\t\t],\n\t\t\t\"preLaunchTask\": \"npm: compile\"\n\t\t},\n\t\t{\n\t\t\t\"name\": \"Extension Tests\",\n\t\t\t\"type\": \"extensionHost\",\n\t\t\t\"request\": \"launch\",\n\t\t\t\"runtimeExecutable\": \"${execPath}\",\n\t\t\t\"args\": [\n\t\t\t\t\"--extensionDevelopmentPath=${workspaceFolder}\",\n\t\t\t\t\"--extensionTestsPath=${workspaceFolder}/out/test\"\n\t\t\t],\n\t\t\t\"outFiles\": [\n\t\t\t\t\"${workspaceFolder}/out/test/**/*.js\"\n\t\t\t],\n\t\t\t\"preLaunchTask\": \"npm: compile\"\n\t\t},\n\t\t{\n\t\t\t\"name\": \"Debug tests\",\n\t\t\t\"type\": \"node\",\n\t\t\t\"request\": \"launch\",\n\t\t\t\"cwd\": \"${workspaceFolder}\",\n\t\t\t\"runtimeExecutable\": \"npm\",\n\t\t\t\"runtimeArgs\": [\n\t\t\t\t\"run-script\", \"debug\"\n\t\t\t],\n\t\t\t\"port\": 9229\n\t\t}\n\t]\n}\n"
  },
  {
    "path": ".vscode/settings.json",
    "content": "// Place your settings in this file to overwrite default and user settings.\n{\n    \"files.exclude\": {\n        \"out\": false // set this to true to hide the \"out\" folder with the compiled JS files\n    },\n    \"search.exclude\": {\n        \"out\": true // set this to false to include \"out\" folder in search results\n    },\n    // Turn off tsc task auto detection since we have the necessary tasks as npm scripts\n    \"typescript.tsc.autoDetect\": \"off\"\n}"
  },
  {
    "path": ".vscode/tasks.json",
    "content": "// See https://go.microsoft.com/fwlink/?LinkId=733558\n// for the documentation about the tasks.json format\n{\n\t\"version\": \"2.0.0\",\n\t\"tasks\": [\n\t\t{\n\t\t\t\"type\": \"npm\",\n\t\t\t\"script\": \"watch\",\n\t\t\t\"problemMatcher\": \"$tsc-watch\",\n\t\t\t\"isBackground\": true,\n\t\t\t\"presentation\": {\n\t\t\t\t\"reveal\": \"never\"\n\t\t\t},\n\t\t\t\"group\": {\n\t\t\t\t\"kind\": \"build\",\n\t\t\t\t\"isDefault\": true\n\t\t\t}\n\t\t}\n\t]\n}\n"
  },
  {
    "path": ".vscodeignore",
    "content": ".vscode/**\n.vscode-test/**\nout/test/**\nsrc/**\n.gitignore\nvsc-extension-quickstart.md\n**/tsconfig.json\n**/tslint.json\n**/*.map\n**/*.ts\nexamples/**"
  },
  {
    "path": "LICENSE.md",
    "content": "The MIT License (MIT)\n\nCopyright (c) 2016 George Fraser\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# Tree Sitter for VSCode [Deprecated]\n\n**With improving support for custom syntax coloring through language servers, this extension is no longer needed.**\n\nThis extension gives VSCode support for [tree-sitter](https://tree-sitter.github.io/tree-sitter/) syntax coloring. Examples with tree-sitter coloring on the right:\n\n## Go\n\n![Go](./screenshots/go.png)\n\n## Rust\n\n![Rust](./screenshots/rust.png)\n\n## C++\n\n![C++](./screenshots/cpp.png)\n\n## Ruby\n\n![Ruby](./screenshots/ruby.png)\n\n## JavaScript / TypeScript\n\n![TypeScript](./screenshots/typescript.png)\n\n## Contributing\n\n### Fixing colorization of an existing language\n\nIf you see something getting colored wrong, or something that should be colored but isn't, you can help! The simplest way to help is to create an issue with a simple example, a screenshot, and an explanation of what is wrong.\n\nYou are also welcome to fix the problem yourself and submit a PR. Colorization is performed by the various `colorLanguage(x, editor)` functions in `src/colors.ts`. When working on the colorization rules, please keep in mind two core principles:\n\n1. Good colorization is *consistent*. It's better not to color at all than to color inconsistently.\n2. Good colorization is *selective*. The fewer things we color, the more emphasis the color carries.\n\n### Adding a new language\n\nIt's straightforward to add any [language with a tree-sitter grammar](https://tree-sitter.github.io/tree-sitter/).\n\n1. Add a dependency on the npm package for that language: `npm install tree-sitter-yourlang`.\n2. Add a color function to `./src/colors.ts`.\n3. Add a language to the dictionary at the top of `./src/extension.ts`.\n4. Add a **simplified** TextMate grammar to `./textmate/yourlang.tmLanguage.json`. The job of this TextMate grammar is just to color keywords and simple literals; anything tricky should be left white and colored by your color function.\n5. Add a reference to the grammar to the [contributes.grammars section of package.json](https://github.com/georgewfraser/vscode-tree-sitter/blob/fb4400b78481845c6a8497d079508d28aea25c19/package.json#L26). `yourlang` must be a [VSCode language identifier](https://code.visualstudio.com/docs/languages/identifiers).\n6. Add a reference to `onLanguage:yourlang` to the [activationEvents section of package.json](https://github.com/georgewfraser/vscode-tree-sitter/blob/fb4400b78481845c6a8497d079508d28aea25c19/package.json#L18). `yourlang` must be a [VSCode language identifier](https://code.visualstudio.com/docs/languages/identifiers).\n7. Add an example to `examples/yourlang`.\n8. Hit `F5` in VSCode, with this project open, to test your changes.\n9. Take a screenshot comparing before and after, and add it to the list above.\n10. Submit a PR!\n"
  },
  {
    "path": "TODO.md",
    "content": "## Bugs\n- Tree-sitter scope colors are wrong while the user is previewing other themes\n- Put back React support for .js and .tsx\n\n## Features\n- Folding-range provider https://code.visualstudio.com/api/references/vscode-api#FoldingRangeProvider\n- Extend-selection provider https://code.visualstudio.com/api/references/vscode-api#SelectionRangeProvider\n- Document highlight provider https://code.visualstudio.com/api/references/vscode-api#DocumentHighlightProvider"
  },
  {
    "path": "azure-pipelines.yml",
    "content": "# Node.js\n# Build a general Node.js project with npm.\n# Add steps that analyze code, save build artifacts, deploy, and more:\n# https://docs.microsoft.com/azure/devops/pipelines/languages/javascript\n\ntrigger:\n- master\n\npool:\n  vmImage: ubuntu-16.04\n\nsteps:\n- task: NodeTool@0\n  inputs:\n    versionSpec: '10.x'\n  displayName: 'Install Node.js'\n- script: 'npm install'\n  displayName: 'Install npm dependencies'\n- script: 'npm run compile'\n  displayName: 'Compile TypeScript'\n- script: 'node out/test.js'\n  displayName: 'Run tests'\n  failOnStderr: true"
  },
  {
    "path": "examples/cpp/marker-index.h",
    "content": "#ifndef MARKER_INDEX_H_\n#define MARKER_INDEX_H_\n\n#include <random>\n#include <unordered_map>\n#include \"flat_set.h\"\n#include \"point.h\"\n#include \"range.h\"\n\nclass MarkerIndex {\npublic:\n  using MarkerId = unsigned;\n  using MarkerIdSet = flat_set<MarkerId>;\n\n  struct SpliceResult {\n    flat_set<MarkerId> touch;\n    flat_set<MarkerId> inside;\n    flat_set<MarkerId> overlap;\n    flat_set<MarkerId> surround;\n  };\n\n  struct Boundary {\n    Point position;\n    flat_set<MarkerId> starting;\n    flat_set<MarkerId> ending;\n  };\n\n  struct BoundaryQueryResult {\n    std::vector<MarkerId> containing_start;\n    std::vector<Boundary> boundaries;\n  };\n\n  MarkerIndex(unsigned seed = 0u);\n  ~MarkerIndex();\n  int generate_random_number();\n  void insert(MarkerId id, Point start, Point end);\n  void set_exclusive(MarkerId id, bool exclusive);\n  void remove(MarkerId id);\n  bool has(MarkerId id);\n  SpliceResult splice(Point start, Point old_extent, Point new_extent);\n  Point get_start(MarkerId id) const;\n  Point get_end(MarkerId id) const;\n  Range get_range(MarkerId id) const;\n\n  int compare(MarkerId id1, MarkerId id2) const;\n  flat_set<MarkerId> find_intersecting(Point start, Point end);\n  flat_set<MarkerId> find_containing(Point start, Point end);\n  flat_set<MarkerId> find_contained_in(Point start, Point end);\n  flat_set<MarkerId> find_starting_in(Point start, Point end);\n  flat_set<MarkerId> find_starting_at(Point position);\n  flat_set<MarkerId> find_ending_in(Point start, Point end);\n  flat_set<MarkerId> find_ending_at(Point position);\n  BoundaryQueryResult find_boundaries_after(Point start, size_t max_count);\n\n  std::unordered_map<MarkerId, Range> dump();\n\nprivate:\n  friend class Iterator;\n\n  struct Node {\n    Node *parent;\n    Node *left;\n    Node *right;\n    Point left_extent;\n    flat_set<MarkerId> left_marker_ids;\n    flat_set<MarkerId> right_marker_ids;\n    flat_set<MarkerId> start_marker_ids;\n    
flat_set<MarkerId> end_marker_ids;\n    int priority;\n\n    Node(Node *parent, Point left_extent);\n    bool is_marker_endpoint();\n  };\n\n  class Iterator {\n  public:\n    Iterator(MarkerIndex *marker_index);\n    void reset();\n    Node* insert_marker_start(const MarkerId &id, const Point &start_position, const Point &end_position);\n    Node* insert_marker_end(const MarkerId &id, const Point &start_position, const Point &end_position);\n    Node* insert_splice_boundary(const Point &position, bool is_insertion_end);\n    void find_intersecting(const Point &start, const Point &end, flat_set<MarkerId> *result);\n    void find_contained_in(const Point &start, const Point &end, flat_set<MarkerId> *result);\n    void find_starting_in(const Point &start, const Point &end, flat_set<MarkerId> *result);\n    void find_ending_in(const Point &start, const Point &end, flat_set<MarkerId> *result);\n    void find_boundaries_after(Point start, size_t max_count, BoundaryQueryResult *result);\n    std::unordered_map<MarkerId, Range> dump();\n\n  private:\n    void ascend();\n    void descend_left();\n    void descend_right();\n    void move_to_successor();\n    void seek_to_first_node_greater_than_or_equal_to(const Point &position);\n    void mark_right(const MarkerId &id, const Point &start_position, const Point &end_position);\n    void mark_left(const MarkerId &id, const Point &start_position, const Point &end_position);\n    Node* insert_left_child(const Point &position);\n    Node* insert_right_child(const Point &position);\n    void check_intersection(const Point &start, const Point &end, flat_set<MarkerId> *results);\n    void cache_node_position() const;\n\n    MarkerIndex *marker_index;\n    Node *current_node;\n    Point current_node_position;\n    Point left_ancestor_position;\n    Point right_ancestor_position;\n    std::vector<Point> left_ancestor_position_stack;\n    std::vector<Point> right_ancestor_position_stack;\n  };\n\n  Point get_node_position(const Node 
*node) const;\n  void delete_node(Node *node);\n  void delete_subtree(Node *node);\n  void bubble_node_up(Node *node);\n  void bubble_node_down(Node *node);\n  void rotate_node_left(Node *pivot);\n  void rotate_node_right(Node *pivot);\n  void get_starting_and_ending_markers_within_subtree(const Node *node, flat_set<MarkerId> *starting, flat_set<MarkerId> *ending);\n  void populate_splice_invalidation_sets(SpliceResult *invalidated, const Node *start_node, const Node *end_node, const flat_set<MarkerId> &starting_inside_splice, const flat_set<MarkerId> &ending_inside_splice);\n\n  std::default_random_engine random_engine;\n  std::uniform_int_distribution<int> random_distribution;\n  Node *root;\n  std::unordered_map<MarkerId, Node*> start_nodes_by_id;\n  std::unordered_map<MarkerId, Node*> end_nodes_by_id;\n  Iterator iterator;\n  flat_set<MarkerId> exclusive_marker_ids;\n  mutable std::unordered_map<const Node*, Point> node_position_cache;\n};\n\n#endif // MARKER_INDEX_H_\n"
  },
  {
    "path": "examples/cpp/rule.cc",
    "content": "#include \"compiler/rule.h\"\n#include \"compiler/util/hash_combine.h\"\n\nnamespace tree_sitter {\nnamespace rules {\n\nusing std::move;\nusing std::vector;\nusing util::hash_combine;\n\nRule::Rule(const Rule &other) : blank_(Blank{}), type(BlankType) {\n  *this = other;\n}\n\nRule::Rule(Rule &&other) noexcept : blank_(Blank{}), type(BlankType) {\n  *this = move(other);\n}\n\nstatic void destroy_value(Rule *rule) {\n  switch (rule->type) {\n    case Rule::BlankType: return rule->blank_.~Blank();\n    case Rule::CharacterSetType: return rule->character_set_.~CharacterSet();\n    case Rule::StringType: return rule->string_ .~String();\n    case Rule::PatternType: return rule->pattern_ .~Pattern();\n    case Rule::NamedSymbolType: return rule->named_symbol_.~NamedSymbol();\n    case Rule::SymbolType: return rule->symbol_ .~Symbol();\n    case Rule::ChoiceType: return rule->choice_ .~Choice();\n    case Rule::MetadataType: return rule->metadata_ .~Metadata();\n    case Rule::RepeatType: return rule->repeat_ .~Repeat();\n    case Rule::SeqType: return rule->seq_ .~Seq();\n  }\n}\n\nRule &Rule::operator=(const Rule &other) {\n  destroy_value(this);\n  type = other.type;\n  switch (type) {\n    case BlankType:\n      new (&blank_) Blank(other.blank_);\n      break;\n    case CharacterSetType:\n      new (&character_set_) CharacterSet(other.character_set_);\n      break;\n    case StringType:\n      new (&string_) String(other.string_);\n      break;\n    case PatternType:\n      new (&pattern_) Pattern(other.pattern_);\n      break;\n    case NamedSymbolType:\n      new (&named_symbol_) NamedSymbol(other.named_symbol_);\n      break;\n    case SymbolType:\n      new (&symbol_) Symbol(other.symbol_);\n      break;\n    case ChoiceType:\n      new (&choice_) Choice(other.choice_);\n      break;\n    case MetadataType:\n      new (&metadata_) Metadata(other.metadata_);\n      break;\n    case RepeatType:\n      new (&repeat_) Repeat(other.repeat_);\n      
break;\n    case SeqType:\n      new (&seq_) Seq(other.seq_);\n      break;\n  }\n  return *this;\n}\n\nRule &Rule::operator=(Rule &&other) noexcept {\n  destroy_value(this);\n  type = other.type;\n  switch (type) {\n    case BlankType:\n      new (&blank_) Blank(move(other.blank_));\n      break;\n    case CharacterSetType:\n      new (&character_set_) CharacterSet(move(other.character_set_));\n      break;\n    case StringType:\n      new (&string_) String(move(other.string_));\n      break;\n    case PatternType:\n      new (&pattern_) Pattern(move(other.pattern_));\n      break;\n    case NamedSymbolType:\n      new (&named_symbol_) NamedSymbol(move(other.named_symbol_));\n      break;\n    case SymbolType:\n      new (&symbol_) Symbol(move(other.symbol_));\n      break;\n    case ChoiceType:\n      new (&choice_) Choice(move(other.choice_));\n      break;\n    case MetadataType:\n      new (&metadata_) Metadata(move(other.metadata_));\n      break;\n    case RepeatType:\n      new (&repeat_) Repeat(move(other.repeat_));\n      break;\n    case SeqType:\n      new (&seq_) Seq(move(other.seq_));\n      break;\n  }\n  other.type = BlankType;\n  other.blank_ = Blank{};\n  return *this;\n}\n\nRule::~Rule() noexcept {\n  destroy_value(this);\n}\n\nbool Rule::operator==(const Rule &other) const {\n  if (type != other.type) return false;\n  switch (type) {\n    case Rule::CharacterSetType: return character_set_ == other.character_set_;\n    case Rule::StringType: return string_ == other.string_;\n    case Rule::PatternType: return pattern_ == other.pattern_;\n    case Rule::NamedSymbolType: return named_symbol_ == other.named_symbol_;\n    case Rule::SymbolType: return symbol_ == other.symbol_;\n    case Rule::ChoiceType: return choice_ == other.choice_;\n    case Rule::MetadataType: return metadata_ == other.metadata_;\n    case Rule::RepeatType: return repeat_ == other.repeat_;\n    case Rule::SeqType: return seq_ == other.seq_;\n    default: return blank_ == 
other.blank_;\n  }\n}\n\ntemplate <>\nbool Rule::is<Blank>() const { return type == BlankType; }\n\ntemplate <>\nbool Rule::is<Symbol>() const { return type == SymbolType; }\n\ntemplate <>\nbool Rule::is<Repeat>() const { return type == RepeatType; }\n\ntemplate <>\nconst Symbol & Rule::get_unchecked<Symbol>() const { return symbol_; }\n\nstatic inline void add_choice_element(std::vector<Rule> *elements, const Rule &new_rule) {\n  new_rule.match(\n    [elements](Choice choice) {\n      for (auto &element : choice.elements) {\n        add_choice_element(elements, element);\n      }\n    },\n\n    [elements](auto rule) {\n      for (auto &element : *elements) {\n        if (element == rule) return;\n      }\n      elements->push_back(rule);\n    }\n  );\n}\n\nRule Rule::choice(const vector<Rule> &rules) {\n  vector<Rule> elements;\n  for (auto &element : rules) {\n    add_choice_element(&elements, element);\n  }\n  return (elements.size() == 1) ? elements.front() : Choice{elements};\n}\n\nRule Rule::repeat(const Rule &rule) {\n  return rule.is<Repeat>() ? 
rule : Repeat{rule};\n}\n\nRule Rule::seq(const vector<Rule> &rules) {\n  Rule result;\n  for (const auto &rule : rules) {\n    rule.match(\n      [](Blank) {},\n      [&](Metadata metadata) {\n        if (!metadata.rule->is<Blank>()) {\n          result = Seq{result, rule};\n        }\n      },\n      [&](auto) {\n        if (result.is<Blank>()) {\n          result = rule;\n        } else {\n          result = Seq{result, rule};\n        }\n      }\n    );\n  }\n  return result;\n}\n\n}  // namespace rules\n}  // namespace tree_sitter\n\nnamespace std {\n\nsize_t hash<Symbol>::operator()(const Symbol &symbol) const {\n  auto result = hash<int>()(symbol.index);\n  hash_combine(&result, hash<int>()(symbol.type));\n  return result;\n}\n\nsize_t hash<NamedSymbol>::operator()(const NamedSymbol &symbol) const {\n  return hash<string>()(symbol.value);\n}\n\nsize_t hash<Pattern>::operator()(const Pattern &symbol) const {\n  return hash<string>()(symbol.value);\n}\n\nsize_t hash<String>::operator()(const String &symbol) const {\n  return hash<string>()(symbol.value);\n}\n\nsize_t hash<CharacterSet>::operator()(const CharacterSet &character_set) const {\n  size_t result = 0;\n  hash_combine(&result, character_set.includes_all);\n  hash_combine(&result, character_set.included_chars.size());\n  for (uint32_t c : character_set.included_chars) {\n    hash_combine(&result, c);\n  }\n  hash_combine(&result, character_set.excluded_chars.size());\n  for (uint32_t c : character_set.excluded_chars) {\n    hash_combine(&result, c);\n  }\n  return result;\n}\n\nsize_t hash<Blank>::operator()(const Blank &blank) const {\n  return 0;\n}\n\nsize_t hash<Choice>::operator()(const Choice &choice) const {\n  size_t result = 0;\n  for (const auto &element : choice.elements) {\n    symmetric_hash_combine(&result, element);\n  }\n  return result;\n}\n\nsize_t hash<Repeat>::operator()(const Repeat &repeat) const {\n  size_t result = 0;\n  hash_combine(&result, *repeat.rule);\n  return 
result;\n}\n\nsize_t hash<Seq>::operator()(const Seq &seq) const {\n  size_t result = 0;\n  hash_combine(&result, *seq.left);\n  hash_combine(&result, *seq.right);\n  return result;\n}\n\nsize_t hash<Metadata>::operator()(const Metadata &metadata) const {\n  size_t result = 0;\n  hash_combine(&result, *metadata.rule);\n  hash_combine(&result, metadata.params.precedence);\n  hash_combine<int>(&result, metadata.params.associativity);\n  hash_combine(&result, metadata.params.has_precedence);\n  hash_combine(&result, metadata.params.has_associativity);\n  hash_combine(&result, metadata.params.is_token);\n  hash_combine(&result, metadata.params.is_string);\n  hash_combine(&result, metadata.params.is_active);\n  hash_combine(&result, metadata.params.is_main_token);\n  return result;\n}\n\nsize_t hash<Rule>::operator()(const Rule &rule) const {\n  size_t result = hash<int>()(rule.type);\n  switch (rule.type) {\n    case Rule::CharacterSetType: return result ^ hash<CharacterSet>()(rule.character_set_);\n    case Rule::StringType: return result ^ hash<String>()(rule.string_);\n    case Rule::PatternType: return result ^ hash<Pattern>()(rule.pattern_);\n    case Rule::NamedSymbolType: return result ^ hash<NamedSymbol>()(rule.named_symbol_);\n    case Rule::SymbolType: return result ^ hash<Symbol>()(rule.symbol_);\n    case Rule::ChoiceType: return result ^ hash<Choice>()(rule.choice_);\n    case Rule::MetadataType: return result ^ hash<Metadata>()(rule.metadata_);\n    case Rule::RepeatType: return result ^ hash<Repeat>()(rule.repeat_);\n    case Rule::SeqType: return result ^ hash<Seq>()(rule.seq_);\n    default: return result ^ hash<Blank>()(rule.blank_);\n  }\n}\n\n}  // namespace std"
  },
  {
    "path": "examples/go/letter_test.go",
    "content": "// Copyright 2009 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\npackage unicode_test\n\nimport (\n\t\"flag\"\n\t\"fmt\"\n\t\"runtime\"\n\t\"sort\"\n\t\"testing\"\n\t. \"unicode\"\n)\n\nvar upperTest = []rune{\n\t0x41,\n\t0xc0,\n\t0xd8,\n\t0x100,\n\t0x139,\n\t0x14a,\n\t0x178,\n\t0x181,\n\t0x376,\n\t0x3cf,\n\t0x13bd,\n\t0x1f2a,\n\t0x2102,\n\t0x2c00,\n\t0x2c10,\n\t0x2c20,\n\t0xa650,\n\t0xa722,\n\t0xff3a,\n\t0x10400,\n\t0x1d400,\n\t0x1d7ca,\n}\n\nvar notupperTest = []rune{\n\t0x40,\n\t0x5b,\n\t0x61,\n\t0x185,\n\t0x1b0,\n\t0x377,\n\t0x387,\n\t0x2150,\n\t0xab7d,\n\t0xffff,\n\t0x10000,\n}\n\nvar letterTest = []rune{\n\t0x41,\n\t0x61,\n\t0xaa,\n\t0xba,\n\t0xc8,\n\t0xdb,\n\t0xf9,\n\t0x2ec,\n\t0x535,\n\t0x620,\n\t0x6e6,\n\t0x93d,\n\t0xa15,\n\t0xb99,\n\t0xdc0,\n\t0xedd,\n\t0x1000,\n\t0x1200,\n\t0x1312,\n\t0x1401,\n\t0x1885,\n\t0x2c00,\n\t0xa800,\n\t0xf900,\n\t0xfa30,\n\t0xffda,\n\t0xffdc,\n\t0x10000,\n\t0x10300,\n\t0x10400,\n\t0x20000,\n\t0x2f800,\n\t0x2fa1d,\n}\n\nvar notletterTest = []rune{\n\t0x20,\n\t0x35,\n\t0x375,\n\t0x619,\n\t0x700,\n\t0xfffe,\n\t0x1ffff,\n\t0x10ffff,\n}\n\n// Contains all the special cased Latin-1 chars.\nvar spaceTest = []rune{\n\t0x09,\n\t0x0a,\n\t0x0b,\n\t0x0c,\n\t0x0d,\n\t0x20,\n\t0x85,\n\t0xA0,\n\t0x2000,\n\t0x3000,\n}\n\ntype caseT struct {\n\tcas     int\n\tin, out rune\n}\n\nvar caseTest = []caseT{\n\t// errors\n\t{-1, '\\n', 0xFFFD},\n\t{UpperCase, -1, -1},\n\t{UpperCase, 1 << 30, 1 << 30},\n\n\t// ASCII (special-cased so test carefully)\n\t{UpperCase, '\\n', '\\n'},\n\t{UpperCase, 'a', 'A'},\n\t{UpperCase, 'A', 'A'},\n\t{UpperCase, '7', '7'},\n\t{LowerCase, '\\n', '\\n'},\n\t{LowerCase, 'a', 'a'},\n\t{LowerCase, 'A', 'a'},\n\t{LowerCase, '7', '7'},\n\t{TitleCase, '\\n', '\\n'},\n\t{TitleCase, 'a', 'A'},\n\t{TitleCase, 'A', 'A'},\n\t{TitleCase, '7', '7'},\n\n\t// Latin-1: easy to read the tests!\n\t{UpperCase, 0x80, 
0x80},\n\t{UpperCase, 'Å', 'Å'},\n\t{UpperCase, 'å', 'Å'},\n\t{LowerCase, 0x80, 0x80},\n\t{LowerCase, 'Å', 'å'},\n\t{LowerCase, 'å', 'å'},\n\t{TitleCase, 0x80, 0x80},\n\t{TitleCase, 'Å', 'Å'},\n\t{TitleCase, 'å', 'Å'},\n\n\t// 0131;LATIN SMALL LETTER DOTLESS I;Ll;0;L;;;;;N;;;0049;;0049\n\t{UpperCase, 0x0131, 'I'},\n\t{LowerCase, 0x0131, 0x0131},\n\t{TitleCase, 0x0131, 'I'},\n\n\t// 0133;LATIN SMALL LIGATURE IJ;Ll;0;L;<compat> 0069 006A;;;;N;LATIN SMALL LETTER I J;;0132;;0132\n\t{UpperCase, 0x0133, 0x0132},\n\t{LowerCase, 0x0133, 0x0133},\n\t{TitleCase, 0x0133, 0x0132},\n\n\t// 212A;KELVIN SIGN;Lu;0;L;004B;;;;N;DEGREES KELVIN;;;006B;\n\t{UpperCase, 0x212A, 0x212A},\n\t{LowerCase, 0x212A, 'k'},\n\t{TitleCase, 0x212A, 0x212A},\n\n\t// From an UpperLower sequence\n\t// A640;CYRILLIC CAPITAL LETTER ZEMLYA;Lu;0;L;;;;;N;;;;A641;\n\t{UpperCase, 0xA640, 0xA640},\n\t{LowerCase, 0xA640, 0xA641},\n\t{TitleCase, 0xA640, 0xA640},\n\t// A641;CYRILLIC SMALL LETTER ZEMLYA;Ll;0;L;;;;;N;;;A640;;A640\n\t{UpperCase, 0xA641, 0xA640},\n\t{LowerCase, 0xA641, 0xA641},\n\t{TitleCase, 0xA641, 0xA640},\n\t// A64E;CYRILLIC CAPITAL LETTER NEUTRAL YER;Lu;0;L;;;;;N;;;;A64F;\n\t{UpperCase, 0xA64E, 0xA64E},\n\t{LowerCase, 0xA64E, 0xA64F},\n\t{TitleCase, 0xA64E, 0xA64E},\n\t// A65F;CYRILLIC SMALL LETTER YN;Ll;0;L;;;;;N;;;A65E;;A65E\n\t{UpperCase, 0xA65F, 0xA65E},\n\t{LowerCase, 0xA65F, 0xA65F},\n\t{TitleCase, 0xA65F, 0xA65E},\n\n\t// From another UpperLower sequence\n\t// 0139;LATIN CAPITAL LETTER L WITH ACUTE;Lu;0;L;004C 0301;;;;N;LATIN CAPITAL LETTER L ACUTE;;;013A;\n\t{UpperCase, 0x0139, 0x0139},\n\t{LowerCase, 0x0139, 0x013A},\n\t{TitleCase, 0x0139, 0x0139},\n\t// 013F;LATIN CAPITAL LETTER L WITH MIDDLE DOT;Lu;0;L;<compat> 004C 00B7;;;;N;;;;0140;\n\t{UpperCase, 0x013f, 0x013f},\n\t{LowerCase, 0x013f, 0x0140},\n\t{TitleCase, 0x013f, 0x013f},\n\t// 0148;LATIN SMALL LETTER N WITH CARON;Ll;0;L;006E 030C;;;;N;LATIN SMALL LETTER N HACEK;;0147;;0147\n\t{UpperCase, 0x0148, 0x0147},\n\t{LowerCase, 
0x0148, 0x0148},\n\t{TitleCase, 0x0148, 0x0147},\n\n\t// Lowercase lower than uppercase.\n\t// AB78;CHEROKEE SMALL LETTER GE;Ll;0;L;;;;;N;;;13A8;;13A8\n\t{UpperCase, 0xab78, 0x13a8},\n\t{LowerCase, 0xab78, 0xab78},\n\t{TitleCase, 0xab78, 0x13a8},\n\t{UpperCase, 0x13a8, 0x13a8},\n\t{LowerCase, 0x13a8, 0xab78},\n\t{TitleCase, 0x13a8, 0x13a8},\n\n\t// Last block in the 5.1.0 table\n\t// 10400;DESERET CAPITAL LETTER LONG I;Lu;0;L;;;;;N;;;;10428;\n\t{UpperCase, 0x10400, 0x10400},\n\t{LowerCase, 0x10400, 0x10428},\n\t{TitleCase, 0x10400, 0x10400},\n\t// 10427;DESERET CAPITAL LETTER EW;Lu;0;L;;;;;N;;;;1044F;\n\t{UpperCase, 0x10427, 0x10427},\n\t{LowerCase, 0x10427, 0x1044F},\n\t{TitleCase, 0x10427, 0x10427},\n\t// 10428;DESERET SMALL LETTER LONG I;Ll;0;L;;;;;N;;;10400;;10400\n\t{UpperCase, 0x10428, 0x10400},\n\t{LowerCase, 0x10428, 0x10428},\n\t{TitleCase, 0x10428, 0x10400},\n\t// 1044F;DESERET SMALL LETTER EW;Ll;0;L;;;;;N;;;10427;;10427\n\t{UpperCase, 0x1044F, 0x10427},\n\t{LowerCase, 0x1044F, 0x1044F},\n\t{TitleCase, 0x1044F, 0x10427},\n\n\t// First one not in the 5.1.0 table\n\t// 10450;SHAVIAN LETTER PEEP;Lo;0;L;;;;;N;;;;;\n\t{UpperCase, 0x10450, 0x10450},\n\t{LowerCase, 0x10450, 0x10450},\n\t{TitleCase, 0x10450, 0x10450},\n\n\t// Non-letters with case.\n\t{LowerCase, 0x2161, 0x2171},\n\t{UpperCase, 0x0345, 0x0399},\n}\n\nfunc TestIsLetter(t *testing.T) {\n\tfor _, r := range upperTest {\n\t\tif !IsLetter(r) {\n\t\t\tt.Errorf(\"IsLetter(U+%04X) = false, want true\", r)\n\t\t}\n\t}\n\tfor _, r := range letterTest {\n\t\tif !IsLetter(r) {\n\t\t\tt.Errorf(\"IsLetter(U+%04X) = false, want true\", r)\n\t\t}\n\t}\n\tfor _, r := range notletterTest {\n\t\tif IsLetter(r) {\n\t\t\tt.Errorf(\"IsLetter(U+%04X) = true, want false\", r)\n\t\t}\n\t}\n}\n\nfunc TestIsUpper(t *testing.T) {\n\tfor _, r := range upperTest {\n\t\tif !IsUpper(r) {\n\t\t\tt.Errorf(\"IsUpper(U+%04X) = false, want true\", r)\n\t\t}\n\t}\n\tfor _, r := range notupperTest {\n\t\tif IsUpper(r) 
{\n\t\t\tt.Errorf(\"IsUpper(U+%04X) = true, want false\", r)\n\t\t}\n\t}\n\tfor _, r := range notletterTest {\n\t\tif IsUpper(r) {\n\t\t\tt.Errorf(\"IsUpper(U+%04X) = true, want false\", r)\n\t\t}\n\t}\n}\n\nfunc caseString(c int) string {\n\tswitch c {\n\tcase UpperCase:\n\t\treturn \"UpperCase\"\n\tcase LowerCase:\n\t\treturn \"LowerCase\"\n\tcase TitleCase:\n\t\treturn \"TitleCase\"\n\t}\n\treturn \"ErrorCase\"\n}\n\nfunc TestTo(t *testing.T) {\n\tfor _, c := range caseTest {\n\t\tr := To(c.cas, c.in)\n\t\tif c.out != r {\n\t\t\tt.Errorf(\"To(U+%04X, %s) = U+%04X want U+%04X\", c.in, caseString(c.cas), r, c.out)\n\t\t}\n\t}\n}\n\nfunc TestToUpperCase(t *testing.T) {\n\tfor _, c := range caseTest {\n\t\tif c.cas != UpperCase {\n\t\t\tcontinue\n\t\t}\n\t\tr := ToUpper(c.in)\n\t\tif c.out != r {\n\t\t\tt.Errorf(\"ToUpper(U+%04X) = U+%04X want U+%04X\", c.in, r, c.out)\n\t\t}\n\t}\n}\n\nfunc TestToLowerCase(t *testing.T) {\n\tfor _, c := range caseTest {\n\t\tif c.cas != LowerCase {\n\t\t\tcontinue\n\t\t}\n\t\tr := ToLower(c.in)\n\t\tif c.out != r {\n\t\t\tt.Errorf(\"ToLower(U+%04X) = U+%04X want U+%04X\", c.in, r, c.out)\n\t\t}\n\t}\n}\n\nfunc TestToTitleCase(t *testing.T) {\n\tfor _, c := range caseTest {\n\t\tif c.cas != TitleCase {\n\t\t\tcontinue\n\t\t}\n\t\tr := ToTitle(c.in)\n\t\tif c.out != r {\n\t\t\tt.Errorf(\"ToTitle(U+%04X) = U+%04X want U+%04X\", c.in, r, c.out)\n\t\t}\n\t}\n}\n\nfunc TestIsSpace(t *testing.T) {\n\tfor _, c := range spaceTest {\n\t\tif !IsSpace(c) {\n\t\t\tt.Errorf(\"IsSpace(U+%04X) = false; want true\", c)\n\t\t}\n\t}\n\tfor _, c := range letterTest {\n\t\tif IsSpace(c) {\n\t\t\tt.Errorf(\"IsSpace(U+%04X) = true; want false\", c)\n\t\t}\n\t}\n}\n\n// Check that the optimizations for IsLetter etc. 
agree with the tables.\n// We only need to check the Latin-1 range.\nfunc TestLetterOptimizations(t *testing.T) {\n\tfor i := rune(0); i <= MaxLatin1; i++ {\n\t\tif Is(Letter, i) != IsLetter(i) {\n\t\t\tt.Errorf(\"IsLetter(U+%04X) disagrees with Is(Letter)\", i)\n\t\t}\n\t\tif Is(Upper, i) != IsUpper(i) {\n\t\t\tt.Errorf(\"IsUpper(U+%04X) disagrees with Is(Upper)\", i)\n\t\t}\n\t\tif Is(Lower, i) != IsLower(i) {\n\t\t\tt.Errorf(\"IsLower(U+%04X) disagrees with Is(Lower)\", i)\n\t\t}\n\t\tif Is(Title, i) != IsTitle(i) {\n\t\t\tt.Errorf(\"IsTitle(U+%04X) disagrees with Is(Title)\", i)\n\t\t}\n\t\tif Is(White_Space, i) != IsSpace(i) {\n\t\t\tt.Errorf(\"IsSpace(U+%04X) disagrees with Is(White_Space)\", i)\n\t\t}\n\t\tif To(UpperCase, i) != ToUpper(i) {\n\t\t\tt.Errorf(\"ToUpper(U+%04X) disagrees with To(Upper)\", i)\n\t\t}\n\t\tif To(LowerCase, i) != ToLower(i) {\n\t\t\tt.Errorf(\"ToLower(U+%04X) disagrees with To(Lower)\", i)\n\t\t}\n\t\tif To(TitleCase, i) != ToTitle(i) {\n\t\t\tt.Errorf(\"ToTitle(U+%04X) disagrees with To(Title)\", i)\n\t\t}\n\t}\n}\n\nfunc TestTurkishCase(t *testing.T) {\n\tlower := []rune(\"abcçdefgğhıijklmnoöprsştuüvyz\")\n\tupper := []rune(\"ABCÇDEFGĞHIİJKLMNOÖPRSŞTUÜVYZ\")\n\tfor i, l := range lower {\n\t\tu := upper[i]\n\t\tif TurkishCase.ToLower(l) != l {\n\t\t\tt.Errorf(\"lower(U+%04X) is U+%04X not U+%04X\", l, TurkishCase.ToLower(l), l)\n\t\t}\n\t\tif TurkishCase.ToUpper(u) != u {\n\t\t\tt.Errorf(\"upper(U+%04X) is U+%04X not U+%04X\", u, TurkishCase.ToUpper(u), u)\n\t\t}\n\t\tif TurkishCase.ToUpper(l) != u {\n\t\t\tt.Errorf(\"upper(U+%04X) is U+%04X not U+%04X\", l, TurkishCase.ToUpper(l), u)\n\t\t}\n\t\tif TurkishCase.ToLower(u) != l {\n\t\t\tt.Errorf(\"lower(U+%04X) is U+%04X not U+%04X\", u, TurkishCase.ToLower(l), l)\n\t\t}\n\t\tif TurkishCase.ToTitle(u) != u {\n\t\t\tt.Errorf(\"title(U+%04X) is U+%04X not U+%04X\", u, TurkishCase.ToTitle(u), u)\n\t\t}\n\t\tif TurkishCase.ToTitle(l) != u {\n\t\t\tt.Errorf(\"title(U+%04X) is U+%04X not 
U+%04X\", l, TurkishCase.ToTitle(l), u)\n\t\t}\n\t}\n}\n\nvar simpleFoldTests = []string{\n\t// SimpleFold(x) returns the next equivalent rune > x or wraps\n\t// around to smaller values.\n\n\t// Easy cases.\n\t\"Aa\",\n\t\"δΔ\",\n\n\t// ASCII special cases.\n\t\"KkK\",\n\t\"Ssſ\",\n\n\t// Non-ASCII special cases.\n\t\"ρϱΡ\",\n\t\"ͅΙιι\",\n\n\t// Extra special cases: has lower/upper but no case fold.\n\t\"İ\",\n\t\"ı\",\n\n\t// Upper comes before lower (Cherokee).\n\t\"\\u13b0\\uab80\",\n}\n\nfunc TestSimpleFold(t *testing.T) {\n\tfor _, tt := range simpleFoldTests {\n\t\tcycle := []rune(tt)\n\t\tr := cycle[len(cycle)-1]\n\t\tfor _, out := range cycle {\n\t\t\tif r := SimpleFold(r); r != out {\n\t\t\t\tt.Errorf(\"SimpleFold(%#U) = %#U, want %#U\", r, r, out)\n\t\t\t}\n\t\t\tr = out\n\t\t}\n\t}\n}\n\n// Running 'go test -calibrate' runs the calibration to find a plausible\n// cutoff point for linear search of a range list vs. binary search.\n// We create a fake table and then time how long it takes to do a\n// sequence of searches within that table, for all possible inputs\n// relative to the ranges (something before all, in each, between each, after all).\n// This assumes that all possible runes are equally likely.\n// In practice most runes are ASCII so this is a conservative estimate\n// of an effective cutoff value. In practice we could probably set it higher\n// than what this function recommends.\n\nvar calibrate = flag.Bool(\"calibrate\", false, \"compute crossover for linear vs. 
binary search\")\n\nfunc TestCalibrate(t *testing.T) {\n\tif !*calibrate {\n\t\treturn\n\t}\n\n\tif runtime.GOARCH == \"amd64\" {\n\t\tfmt.Printf(\"warning: running calibration on %s\\n\", runtime.GOARCH)\n\t}\n\n\t// Find the point where binary search wins by more than 10%.\n\t// The 10% bias gives linear search an edge when they're close,\n\t// because on predominantly ASCII inputs linear search is even\n\t// better than our benchmarks measure.\n\tn := sort.Search(64, func(n int) bool {\n\t\ttab := fakeTable(n)\n\t\tblinear := func(b *testing.B) {\n\t\t\ttab := tab\n\t\t\tmax := n*5 + 20\n\t\t\tfor i := 0; i < b.N; i++ {\n\t\t\t\tfor j := 0; j <= max; j++ {\n\t\t\t\t\tlinear(tab, uint16(j))\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tbbinary := func(b *testing.B) {\n\t\t\ttab := tab\n\t\t\tmax := n*5 + 20\n\t\t\tfor i := 0; i < b.N; i++ {\n\t\t\t\tfor j := 0; j <= max; j++ {\n\t\t\t\t\tbinary(tab, uint16(j))\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tbmlinear := testing.Benchmark(blinear)\n\t\tbmbinary := testing.Benchmark(bbinary)\n\t\tfmt.Printf(\"n=%d: linear=%d binary=%d\\n\", n, bmlinear.NsPerOp(), bmbinary.NsPerOp())\n\t\treturn bmlinear.NsPerOp()*100 > bmbinary.NsPerOp()*110\n\t})\n\tfmt.Printf(\"calibration: linear cutoff = %d\\n\", n)\n}\n\nfunc fakeTable(n int) []Range16 {\n\tvar r16 []Range16\n\tfor i := 0; i < n; i++ {\n\t\tr16 = append(r16, Range16{uint16(i*5 + 10), uint16(i*5 + 12), 1})\n\t}\n\treturn r16\n}\n\nfunc linear(ranges []Range16, r uint16) bool {\n\tfor i := range ranges {\n\t\trange_ := &ranges[i]\n\t\tif r < range_.Lo {\n\t\t\treturn false\n\t\t}\n\t\tif r <= range_.Hi {\n\t\t\treturn (r-range_.Lo)%range_.Stride == 0\n\t\t}\n\t}\n\treturn false\n}\n\nfunc binary(ranges []Range16, r uint16) bool {\n\t// binary search over ranges\n\tlo := 0\n\thi := len(ranges)\n\tfor lo < hi {\n\t\tm := lo + (hi-lo)/2\n\t\trange_ := &ranges[m]\n\t\tif range_.Lo <= r && r <= range_.Hi {\n\t\t\treturn (r-range_.Lo)%range_.Stride == 0\n\t\t}\n\t\tif r < range_.Lo {\n\t\t\thi = 
m\n\t\t} else {\n\t\t\tlo = m + 1\n\t\t}\n\t}\n\treturn false\n}\n\nfunc TestLatinOffset(t *testing.T) {\n\tvar maps = []map[string]*RangeTable{\n\t\tCategories,\n\t\tFoldCategory,\n\t\tFoldScript,\n\t\tProperties,\n\t\tScripts,\n\t}\n\tfor _, m := range maps {\n\t\tfor name, tab := range m {\n\t\t\ti := 0\n\t\t\tfor i < len(tab.R16) && tab.R16[i].Hi <= MaxLatin1 {\n\t\t\t\ti++\n\t\t\t}\n\t\t\tif tab.LatinOffset != i {\n\t\t\t\tt.Errorf(\"%s: LatinOffset=%d, want %d\", name, tab.LatinOffset, i)\n\t\t\t}\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "examples/go/no_newline_at_eof.go",
    "content": "// run\n\n// Copyright 2015 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\npackage main\n\nfunc main() {\n\tx := 0\n\tfunc() {\n\t\tx = 1\n\t}()\n\tfunc() {\n\t\tif x != 1 {\n\t\t\tpanic(\"x != 1\")\n\t\t}\n\t}()\n}"
  },
  {
    "path": "examples/go/proc.go",
    "content": "// Copyright 2014 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\npackage runtime\n\nimport (\n\t\"runtime/internal/atomic\"\n\t\"runtime/internal/sys\"\n\t\"unsafe\"\n)\n\nvar buildVersion = sys.TheVersion\n\n// Goroutine scheduler\n// The scheduler's job is to distribute ready-to-run goroutines over worker threads.\n//\n// The main concepts are:\n// G - goroutine.\n// M - worker thread, or machine.\n// P - processor, a resource that is required to execute Go code.\n//     M must have an associated P to execute Go code; however, it can be\n//     blocked or in a syscall w/o an associated P.\n//\n// Design doc at https://golang.org/s/go11sched.\n\n// Worker thread parking/unparking.\n// We need to balance between keeping enough running worker threads to utilize\n// available hardware parallelism and parking excessive running worker threads\n// to conserve CPU resources and power. This is not simple for two reasons:\n// (1) scheduler state is intentionally distributed (in particular, per-P work\n// queues), so it is not possible to compute global predicates on fast paths;\n// (2) for optimal thread management we would need to know the future (don't park\n// a worker thread when a new goroutine will be readied in near future).\n//\n// Three rejected approaches that would work badly:\n// 1. Centralize all scheduler state (would inhibit scalability).\n// 2. Direct goroutine handoff. That is, when we ready a new goroutine and there\n//    is a spare P, unpark a thread and hand it the P and the goroutine.\n//    This would lead to thread state thrashing, as the thread that readied the\n//    goroutine can be out of work the very next moment, and we will need to park it.\n//    Also, it would destroy locality of computation as we want to preserve\n//    dependent goroutines on the same thread; and introduce additional latency.\n// 3. 
Unpark an additional thread whenever we ready a goroutine and there is an\n//    idle P, but don't do handoff. This would lead to excessive thread parking/\n//    unparking as the additional threads will instantly park without discovering\n//    any work to do.\n//\n// The current approach:\n// We unpark an additional thread when we ready a goroutine if (1) there is an\n// idle P and (2) there are no \"spinning\" worker threads. A worker thread is considered\n// spinning if it is out of local work and did not find work in global run queue/\n// netpoller; the spinning state is denoted in m.spinning and in sched.nmspinning.\n// Threads unparked this way are also considered spinning; we don't do goroutine\n// handoff so such threads are out of work initially. Spinning threads do some\n// spinning looking for work in per-P run queues before parking. If a spinning\n// thread finds work it takes itself out of the spinning state and proceeds to\n// execution. If it does not find work it takes itself out of the spinning state\n// and then parks.\n// If there is at least one spinning thread (sched.nmspinning>1), we don't unpark\n// new threads when readying goroutines. To compensate for that, if the last spinning\n// thread finds work and stops spinning, it must unpark a new spinning thread.\n// This approach smooths out unjustified spikes of thread unparking,\n// but at the same time guarantees eventual maximal CPU parallelism utilization.\n//\n// The main implementation complication is that we need to be very careful during\n// spinning->non-spinning thread transition. This transition can race with submission\n// of a new goroutine, and either one part or another needs to unpark another worker\n// thread. If they both fail to do that, we can end up with semi-persistent CPU\n// underutilization. 
The general pattern for goroutine readying is: submit a goroutine\n// to local work queue, #StoreLoad-style memory barrier, check sched.nmspinning.\n// The general pattern for spinning->non-spinning transition is: decrement nmspinning,\n// #StoreLoad-style memory barrier, check all per-P work queues for new work.\n// Note that all this complexity does not apply to global run queue as we are not\n// sloppy about thread unparking when submitting to global queue. Also see comments\n// for nmspinning manipulation.\n\nvar (\n\tm0 m\n\tg0 g\n)\n\n//go:linkname runtime_init runtime.init\nfunc runtime_init()\n\n//go:linkname main_init main.init\nfunc main_init()\n\n// main_init_done is a signal used by cgocallbackg that initialization\n// has been completed. It is made before _cgo_notify_runtime_init_done,\n// so all cgo calls can rely on it existing. When main_init is complete,\n// it is closed, meaning cgocallbackg can reliably receive from it.\nvar main_init_done chan bool\n\n//go:linkname main_main main.main\nfunc main_main()\n\n// runtimeInitTime is the nanotime() at which the runtime started.\nvar runtimeInitTime int64\n\n// Value to use for signal mask for newly created M's.\nvar initSigmask sigset\n\n// The main goroutine.\nfunc main() {\n\tg := getg()\n\n\t// Racectx of m0->g0 is used only as the parent of the main goroutine.\n\t// It must not be used for anything else.\n\tg.m.g0.racectx = 0\n\n\t// Max stack size is 1 GB on 64-bit, 250 MB on 32-bit.\n\t// Using decimal instead of binary GB and MB because\n\t// they look nicer in the stack overflow failure message.\n\tif sys.PtrSize == 8 {\n\t\tmaxstacksize = 1000000000\n\t} else {\n\t\tmaxstacksize = 250000000\n\t}\n\n\t// Record when the world started.\n\truntimeInitTime = nanotime()\n\n\tsystemstack(func() {\n\t\tnewm(sysmon, nil)\n\t})\n\n\t// Lock the main goroutine onto this, the main OS thread,\n\t// during initialization.  
Most programs won't care, but a few\n\t// do require certain calls to be made by the main thread.\n\t// Those can arrange for main.main to run in the main thread\n\t// by calling runtime.LockOSThread during initialization\n\t// to preserve the lock.\n\tlockOSThread()\n\n\tif g.m != &m0 {\n\t\tthrow(\"runtime.main not on m0\")\n\t}\n\n\truntime_init() // must be before defer\n\n\t// Defer unlock so that runtime.Goexit during init does the unlock too.\n\tneedUnlock := true\n\tdefer func() {\n\t\tif needUnlock {\n\t\t\tunlockOSThread()\n\t\t}\n\t}()\n\n\tgcenable()\n\n\tmain_init_done = make(chan bool)\n\tif iscgo {\n\t\tif _cgo_thread_start == nil {\n\t\t\tthrow(\"_cgo_thread_start missing\")\n\t\t}\n\t\tif _cgo_malloc == nil {\n\t\t\tthrow(\"_cgo_malloc missing\")\n\t\t}\n\t\tif _cgo_free == nil {\n\t\t\tthrow(\"_cgo_free missing\")\n\t\t}\n\t\tif GOOS != \"windows\" {\n\t\t\tif _cgo_setenv == nil {\n\t\t\t\tthrow(\"_cgo_setenv missing\")\n\t\t\t}\n\t\t\tif _cgo_unsetenv == nil {\n\t\t\t\tthrow(\"_cgo_unsetenv missing\")\n\t\t\t}\n\t\t}\n\t\tif _cgo_notify_runtime_init_done == nil {\n\t\t\tthrow(\"_cgo_notify_runtime_init_done missing\")\n\t\t}\n\t\tcgocall(_cgo_notify_runtime_init_done, nil)\n\t}\n\n\tmain_init()\n\tclose(main_init_done)\n\n\tneedUnlock = false\n\tunlockOSThread()\n\n\tif isarchive || islibrary {\n\t\t// A program compiled with -buildmode=c-archive or c-shared\n\t\t// has a main, but it is not executed.\n\t\treturn\n\t}\n\tmain_main()\n\tif raceenabled {\n\t\tracefini()\n\t}\n\n\t// Make racy client program work: if panicking on\n\t// another goroutine at the same time as main returns,\n\t// let the other goroutine finish printing the panic trace.\n\t// Once it does, it will exit. 
See issue 3934.\n\tif panicking != 0 {\n\t\tgopark(nil, nil, \"panicwait\", traceEvGoStop, 1)\n\t}\n\n\texit(0)\n\tfor {\n\t\tvar x *int32\n\t\t*x = 0\n\t}\n}\n\n// os_beforeExit is called from os.Exit(0).\n//go:linkname os_beforeExit os.runtime_beforeExit\nfunc os_beforeExit() {\n\tif raceenabled {\n\t\tracefini()\n\t}\n}\n\n// start forcegc helper goroutine\nfunc init() {\n\tgo forcegchelper()\n}\n\nfunc forcegchelper() {\n\tforcegc.g = getg()\n\tfor {\n\t\tlock(&forcegc.lock)\n\t\tif forcegc.idle != 0 {\n\t\t\tthrow(\"forcegc: phase error\")\n\t\t}\n\t\tatomic.Store(&forcegc.idle, 1)\n\t\tgoparkunlock(&forcegc.lock, \"force gc (idle)\", traceEvGoBlock, 1)\n\t\t// this goroutine is explicitly resumed by sysmon\n\t\tif debug.gctrace > 0 {\n\t\t\tprintln(\"GC forced\")\n\t\t}\n\t\tgcStart(gcBackgroundMode, true)\n\t}\n}\n\n//go:nosplit\n\n// Gosched yields the processor, allowing other goroutines to run.  It does not\n// suspend the current goroutine, so execution resumes automatically.\nfunc Gosched() {\n\tmcall(gosched_m)\n}\n\n// Puts the current goroutine into a waiting state and calls unlockf.\n// If unlockf returns false, the goroutine is resumed.\nfunc gopark(unlockf func(*g, unsafe.Pointer) bool, lock unsafe.Pointer, reason string, traceEv byte, traceskip int) {\n\tmp := acquirem()\n\tgp := mp.curg\n\tstatus := readgstatus(gp)\n\tif status != _Grunning && status != _Gscanrunning {\n\t\tthrow(\"gopark: bad g status\")\n\t}\n\tmp.waitlock = lock\n\tmp.waitunlockf = *(*unsafe.Pointer)(unsafe.Pointer(&unlockf))\n\tgp.waitreason = reason\n\tmp.waittraceev = traceEv\n\tmp.waittraceskip = traceskip\n\treleasem(mp)\n\t// can't do anything that might move the G between Ms here.\n\tmcall(park_m)\n}\n\n// Puts the current goroutine into a waiting state and unlocks the lock.\n// The goroutine can be made runnable again by calling goready(gp).\nfunc goparkunlock(lock *mutex, reason string, traceEv byte, traceskip int) {\n\tgopark(parkunlock_c, unsafe.Pointer(lock), 
reason, traceEv, traceskip)\n}\n\nfunc goready(gp *g, traceskip int) {\n\tsystemstack(func() {\n\t\tready(gp, traceskip)\n\t})\n}\n\n//go:nosplit\nfunc acquireSudog() *sudog {\n\t// Delicate dance: the semaphore implementation calls\n\t// acquireSudog, acquireSudog calls new(sudog),\n\t// new calls malloc, malloc can call the garbage collector,\n\t// and the garbage collector calls the semaphore implementation\n\t// in stopTheWorld.\n\t// Break the cycle by doing acquirem/releasem around new(sudog).\n\t// The acquirem/releasem increments m.locks during new(sudog),\n\t// which keeps the garbage collector from being invoked.\n\tmp := acquirem()\n\tpp := mp.p.ptr()\n\tif len(pp.sudogcache) == 0 {\n\t\tlock(&sched.sudoglock)\n\t\t// First, try to grab a batch from central cache.\n\t\tfor len(pp.sudogcache) < cap(pp.sudogcache)/2 && sched.sudogcache != nil {\n\t\t\ts := sched.sudogcache\n\t\t\tsched.sudogcache = s.next\n\t\t\ts.next = nil\n\t\t\tpp.sudogcache = append(pp.sudogcache, s)\n\t\t}\n\t\tunlock(&sched.sudoglock)\n\t\t// If the central cache is empty, allocate a new one.\n\t\tif len(pp.sudogcache) == 0 {\n\t\t\tpp.sudogcache = append(pp.sudogcache, new(sudog))\n\t\t}\n\t}\n\tn := len(pp.sudogcache)\n\ts := pp.sudogcache[n-1]\n\tpp.sudogcache[n-1] = nil\n\tpp.sudogcache = pp.sudogcache[:n-1]\n\tif s.elem != nil {\n\t\tthrow(\"acquireSudog: found s.elem != nil in cache\")\n\t}\n\treleasem(mp)\n\treturn s\n}\n\n//go:nosplit\nfunc releaseSudog(s *sudog) {\n\tif s.elem != nil {\n\t\tthrow(\"runtime: sudog with non-nil elem\")\n\t}\n\tif s.selectdone != nil {\n\t\tthrow(\"runtime: sudog with non-nil selectdone\")\n\t}\n\tif s.next != nil {\n\t\tthrow(\"runtime: sudog with non-nil next\")\n\t}\n\tif s.prev != nil {\n\t\tthrow(\"runtime: sudog with non-nil prev\")\n\t}\n\tif s.waitlink != nil {\n\t\tthrow(\"runtime: sudog with non-nil waitlink\")\n\t}\n\tgp := getg()\n\tif gp.param != nil {\n\t\tthrow(\"runtime: releaseSudog with non-nil gp.param\")\n\t}\n\tmp := 
acquirem() // avoid rescheduling to another P\n\tpp := mp.p.ptr()\n\tif len(pp.sudogcache) == cap(pp.sudogcache) {\n\t\t// Transfer half of local cache to the central cache.\n\t\tvar first, last *sudog\n\t\tfor len(pp.sudogcache) > cap(pp.sudogcache)/2 {\n\t\t\tn := len(pp.sudogcache)\n\t\t\tp := pp.sudogcache[n-1]\n\t\t\tpp.sudogcache[n-1] = nil\n\t\t\tpp.sudogcache = pp.sudogcache[:n-1]\n\t\t\tif first == nil {\n\t\t\t\tfirst = p\n\t\t\t} else {\n\t\t\t\tlast.next = p\n\t\t\t}\n\t\t\tlast = p\n\t\t}\n\t\tlock(&sched.sudoglock)\n\t\tlast.next = sched.sudogcache\n\t\tsched.sudogcache = first\n\t\tunlock(&sched.sudoglock)\n\t}\n\tpp.sudogcache = append(pp.sudogcache, s)\n\treleasem(mp)\n}\n\n// funcPC returns the entry PC of the function f.\n// It assumes that f is a func value. Otherwise the behavior is undefined.\n//go:nosplit\nfunc funcPC(f interface{}) uintptr {\n\treturn **(**uintptr)(add(unsafe.Pointer(&f), sys.PtrSize))\n}\n\n// called from assembly\nfunc badmcall(fn func(*g)) {\n\tthrow(\"runtime: mcall called on m->g0 stack\")\n}\n\nfunc badmcall2(fn func(*g)) {\n\tthrow(\"runtime: mcall function returned\")\n}\n\nfunc badreflectcall() {\n\tpanic(\"runtime: arg size to reflect.call more than 1GB\")\n}\n\nfunc lockedOSThread() bool {\n\tgp := getg()\n\treturn gp.lockedm != nil && gp.m.lockedg != nil\n}\n\nvar (\n\tallgs    []*g\n\tallglock mutex\n)\n\nfunc allgadd(gp *g) {\n\tif readgstatus(gp) == _Gidle {\n\t\tthrow(\"allgadd: bad status Gidle\")\n\t}\n\n\tlock(&allglock)\n\tallgs = append(allgs, gp)\n\tallglen = uintptr(len(allgs))\n\tunlock(&allglock)\n}\n\nconst (\n\t// Number of goroutine ids to grab from sched.goidgen to local per-P cache at once.\n\t// 16 seems to provide enough amortization, but other than that it's mostly arbitrary number.\n\t_GoidCacheBatch = 16\n)\n\n// The bootstrap sequence is:\n//\n//\tcall osinit\n//\tcall schedinit\n//\tmake & queue new G\n//\tcall runtime·mstart\n//\n// The new G calls runtime·main.\nfunc schedinit() {\n\t// 
raceinit must be the first call to race detector.\n\t// In particular, it must be done before mallocinit below calls racemapshadow.\n\t_g_ := getg()\n\tif raceenabled {\n\t\t_g_.racectx = raceinit()\n\t}\n\n\tsched.maxmcount = 10000\n\n\t// Cache the framepointer experiment.  This affects stack unwinding.\n\tframepointer_enabled = haveexperiment(\"framepointer\")\n\n\ttracebackinit()\n\tmoduledataverify()\n\tstackinit()\n\tmallocinit()\n\tmcommoninit(_g_.m)\n\n\tmsigsave(_g_.m)\n\tinitSigmask = _g_.m.sigmask\n\n\tgoargs()\n\tgoenvs()\n\tparsedebugvars()\n\tgcinit()\n\n\tsched.lastpoll = uint64(nanotime())\n\tprocs := int(ncpu)\n\tif n := atoi(gogetenv(\"GOMAXPROCS\")); n > 0 {\n\t\tif n > _MaxGomaxprocs {\n\t\t\tn = _MaxGomaxprocs\n\t\t}\n\t\tprocs = n\n\t}\n\tif procresize(int32(procs)) != nil {\n\t\tthrow(\"unknown runnable goroutine during bootstrap\")\n\t}\n\n\tif buildVersion == \"\" {\n\t\t// Condition should never trigger.  This code just serves\n\t\t// to ensure runtime·buildVersion is kept in the resulting binary.\n\t\tbuildVersion = \"unknown\"\n\t}\n}\n\nfunc dumpgstatus(gp *g) {\n\t_g_ := getg()\n\tprint(\"runtime: gp: gp=\", gp, \", goid=\", gp.goid, \", gp->atomicstatus=\", readgstatus(gp), \"\\n\")\n\tprint(\"runtime:  g:  g=\", _g_, \", goid=\", _g_.goid, \",  g->atomicstatus=\", readgstatus(_g_), \"\\n\")\n}\n\nfunc checkmcount() {\n\t// sched lock is held\n\tif sched.mcount > sched.maxmcount {\n\t\tprint(\"runtime: program exceeds \", sched.maxmcount, \"-thread limit\\n\")\n\t\tthrow(\"thread exhaustion\")\n\t}\n}\n\nfunc mcommoninit(mp *m) {\n\t_g_ := getg()\n\n\t// g0 stack won't make sense for user (and is not necessarily unwindable).\n\tif _g_ != _g_.m.g0 {\n\t\tcallers(1, mp.createstack[:])\n\t}\n\n\tmp.fastrand = 0x49f6428a + uint32(mp.id) + uint32(cputicks())\n\tif mp.fastrand == 0 {\n\t\tmp.fastrand = 0x49f6428a\n\t}\n\n\tlock(&sched.lock)\n\tmp.id = sched.mcount\n\tsched.mcount++\n\tcheckmcount()\n\tmpreinit(mp)\n\tif mp.gsignal != nil 
{\n\t\tmp.gsignal.stackguard1 = mp.gsignal.stack.lo + _StackGuard\n\t}\n\n\t// Add to allm so garbage collector doesn't free g->m\n\t// when it is just in a register or thread-local storage.\n\tmp.alllink = allm\n\n\t// NumCgoCall() iterates over allm w/o schedlock,\n\t// so we need to publish it safely.\n\tatomicstorep(unsafe.Pointer(&allm), unsafe.Pointer(mp))\n\tunlock(&sched.lock)\n}\n\n// Mark gp ready to run.\nfunc ready(gp *g, traceskip int) {\n\tif trace.enabled {\n\t\ttraceGoUnpark(gp, traceskip)\n\t}\n\n\tstatus := readgstatus(gp)\n\n\t// Mark runnable.\n\t_g_ := getg()\n\t_g_.m.locks++ // disable preemption because it can be holding p in a local var\n\tif status&^_Gscan != _Gwaiting {\n\t\tdumpgstatus(gp)\n\t\tthrow(\"bad g->status in ready\")\n\t}\n\n\t// status is Gwaiting or Gscanwaiting, make Grunnable and put on runq\n\tcasgstatus(gp, _Gwaiting, _Grunnable)\n\trunqput(_g_.m.p.ptr(), gp, true)\n\tif atomic.Load(&sched.npidle) != 0 && atomic.Load(&sched.nmspinning) == 0 { // TODO: fast atomic\n\t\twakep()\n\t}\n\t_g_.m.locks--\n\tif _g_.m.locks == 0 && _g_.preempt { // restore the preemption request in case we've cleared it in newstack\n\t\t_g_.stackguard0 = stackPreempt\n\t}\n}\n\nfunc gcprocs() int32 {\n\t// Figure out how many CPUs to use during GC.\n\t// Limited by gomaxprocs, number of actual CPUs, and MaxGcproc.\n\tlock(&sched.lock)\n\tn := gomaxprocs\n\tif n > ncpu {\n\t\tn = ncpu\n\t}\n\tif n > _MaxGcproc {\n\t\tn = _MaxGcproc\n\t}\n\tif n > sched.nmidle+1 { // one M is currently running\n\t\tn = sched.nmidle + 1\n\t}\n\tunlock(&sched.lock)\n\treturn n\n}\n\nfunc needaddgcproc() bool {\n\tlock(&sched.lock)\n\tn := gomaxprocs\n\tif n > ncpu {\n\t\tn = ncpu\n\t}\n\tif n > _MaxGcproc {\n\t\tn = _MaxGcproc\n\t}\n\tn -= sched.nmidle + 1 // one M is currently running\n\tunlock(&sched.lock)\n\treturn n > 0\n}\n\nfunc helpgc(nproc int32) {\n\t_g_ := getg()\n\tlock(&sched.lock)\n\tpos := 0\n\tfor n := int32(1); n < nproc; n++ { // one M is currently 
running\n\t\tif allp[pos].mcache == _g_.m.mcache {\n\t\t\tpos++\n\t\t}\n\t\tmp := mget()\n\t\tif mp == nil {\n\t\t\tthrow(\"gcprocs inconsistency\")\n\t\t}\n\t\tmp.helpgc = n\n\t\tmp.p.set(allp[pos])\n\t\tmp.mcache = allp[pos].mcache\n\t\tpos++\n\t\tnotewakeup(&mp.park)\n\t}\n\tunlock(&sched.lock)\n}\n\n// freezeStopWait is a large value that freezetheworld sets\n// sched.stopwait to in order to request that all Gs permanently stop.\nconst freezeStopWait = 0x7fffffff\n\n// Similar to stopTheWorld but best-effort and can be called several times.\n// There is no reverse operation, used during crashing.\n// This function must not lock any mutexes.\nfunc freezetheworld() {\n\t// stopwait and preemption requests can be lost\n\t// due to races with concurrently executing threads,\n\t// so try several times\n\tfor i := 0; i < 5; i++ {\n\t\t// this should tell the scheduler to not start any new goroutines\n\t\tsched.stopwait = freezeStopWait\n\t\tatomic.Store(&sched.gcwaiting, 1)\n\t\t// this should stop running goroutines\n\t\tif !preemptall() {\n\t\t\tbreak // no running goroutines\n\t\t}\n\t\tusleep(1000)\n\t}\n\t// to be sure\n\tusleep(1000)\n\tpreemptall()\n\tusleep(1000)\n}\n\nfunc isscanstatus(status uint32) bool {\n\tif status == _Gscan {\n\t\tthrow(\"isscanstatus: Bad status Gscan\")\n\t}\n\treturn status&_Gscan == _Gscan\n}\n\n// All reads and writes of g's status go through readgstatus, casgstatus\n// castogscanstatus, casfrom_Gscanstatus.\n//go:nosplit\nfunc readgstatus(gp *g) uint32 {\n\treturn atomic.Load(&gp.atomicstatus)\n}\n\n// Ownership of gscanvalid:\n//\n// If gp is running (meaning status == _Grunning or _Grunning|_Gscan),\n// then gp owns gp.gscanvalid, and other goroutines must not modify it.\n//\n// Otherwise, a second goroutine can lock the scan state by setting _Gscan\n// in the status bit and then modify gscanvalid, and then unlock the scan state.\n//\n// Note that the first condition implies an exception to the second:\n// if a second goroutine 
changes gp's status to _Grunning|_Gscan,\n// that second goroutine still does not have the right to modify gscanvalid.\n\n// The Gscanstatuses are acting like locks and this releases them.\n// If it proves to be a performance hit we should be able to make these\n// simple atomic stores but for now we are going to throw if\n// we see an inconsistent state.\nfunc casfrom_Gscanstatus(gp *g, oldval, newval uint32) {\n\tsuccess := false\n\n\t// Check that transition is valid.\n\tswitch oldval {\n\tdefault:\n\t\tprint(\"runtime: casfrom_Gscanstatus bad oldval gp=\", gp, \", oldval=\", hex(oldval), \", newval=\", hex(newval), \"\\n\")\n\t\tdumpgstatus(gp)\n\t\tthrow(\"casfrom_Gscanstatus:top gp->status is not in scan state\")\n\tcase _Gscanrunnable,\n\t\t_Gscanwaiting,\n\t\t_Gscanrunning,\n\t\t_Gscansyscall:\n\t\tif newval == oldval&^_Gscan {\n\t\t\tsuccess = atomic.Cas(&gp.atomicstatus, oldval, newval)\n\t\t}\n\tcase _Gscanenqueue:\n\t\tif newval == _Gwaiting {\n\t\t\tsuccess = atomic.Cas(&gp.atomicstatus, oldval, newval)\n\t\t}\n\t}\n\tif !success {\n\t\tprint(\"runtime: casfrom_Gscanstatus failed gp=\", gp, \", oldval=\", hex(oldval), \", newval=\", hex(newval), \"\\n\")\n\t\tdumpgstatus(gp)\n\t\tthrow(\"casfrom_Gscanstatus: gp->status is not in scan state\")\n\t}\n\tif newval == _Grunning {\n\t\tgp.gcscanvalid = false\n\t}\n}\n\n// This will return false if the gp is not in the expected status and the cas fails.\n// This acts like a lock acquire while the casfromgstatus acts like a lock release.\nfunc castogscanstatus(gp *g, oldval, newval uint32) bool {\n\tswitch oldval {\n\tcase _Grunnable,\n\t\t_Gwaiting,\n\t\t_Gsyscall:\n\t\tif newval == oldval|_Gscan {\n\t\t\treturn atomic.Cas(&gp.atomicstatus, oldval, newval)\n\t\t}\n\tcase _Grunning:\n\t\tif newval == _Gscanrunning || newval == _Gscanenqueue {\n\t\t\treturn atomic.Cas(&gp.atomicstatus, oldval, newval)\n\t\t}\n\t}\n\tprint(\"runtime: castogscanstatus oldval=\", hex(oldval), \" newval=\", hex(newval), 
\"\\n\")\n\tthrow(\"castogscanstatus\")\n\tpanic(\"not reached\")\n}\n\n// If asked to move to or from a Gscanstatus this will throw. Use the castogscanstatus\n// and casfrom_Gscanstatus instead.\n// casgstatus will loop if the g->atomicstatus is in a Gscan status until the routine that\n// put it in the Gscan state is finished.\n//go:nosplit\nfunc casgstatus(gp *g, oldval, newval uint32) {\n\tif (oldval&_Gscan != 0) || (newval&_Gscan != 0) || oldval == newval {\n\t\tsystemstack(func() {\n\t\t\tprint(\"runtime: casgstatus: oldval=\", hex(oldval), \" newval=\", hex(newval), \"\\n\")\n\t\t\tthrow(\"casgstatus: bad incoming values\")\n\t\t})\n\t}\n\n\tif oldval == _Grunning && gp.gcscanvalid {\n\t\t// If oldval == _Grunning, then the actual status must be\n\t\t// _Grunning or _Grunning|_Gscan; either way,\n\t\t// we own gp.gcscanvalid, so it's safe to read.\n\t\t// gp.gcscanvalid must not be true when we are running.\n\t\tprint(\"runtime: casgstatus \", hex(oldval), \"->\", hex(newval), \" gp.status=\", hex(gp.atomicstatus), \" gp.gcscanvalid=true\\n\")\n\t\tthrow(\"casgstatus\")\n\t}\n\n\t// loop if gp->atomicstatus is in a scan state giving\n\t// GC time to finish and change the state to oldval.\n\tfor !atomic.Cas(&gp.atomicstatus, oldval, newval) {\n\t\tif oldval == _Gwaiting && gp.atomicstatus == _Grunnable {\n\t\t\tsystemstack(func() {\n\t\t\t\tthrow(\"casgstatus: waiting for Gwaiting but is Grunnable\")\n\t\t\t})\n\t\t}\n\t\t// Help GC if needed.\n\t\t// if gp.preemptscan && !gp.gcworkdone && (oldval == _Grunning || oldval == _Gsyscall) {\n\t\t// \tgp.preemptscan = false\n\t\t// \tsystemstack(func() {\n\t\t// \t\tgcphasework(gp)\n\t\t// \t})\n\t\t// }\n\t}\n\tif newval == _Grunning {\n\t\tgp.gcscanvalid = false\n\t}\n}\n\n// casgstatus(gp, oldstatus, Gcopystack), assuming oldstatus is Gwaiting or Grunnable.\n// Returns old status. Cannot call casgstatus directly, because we are racing with an\n// async wakeup that might come in from netpoll. 
If we see Gwaiting from the readgstatus,\n// it might have become Grunnable by the time we get to the cas. If we called casgstatus,\n// it would loop waiting for the status to go back to Gwaiting, which it never will.\n//go:nosplit\nfunc casgcopystack(gp *g) uint32 {\n\tfor {\n\t\toldstatus := readgstatus(gp) &^ _Gscan\n\t\tif oldstatus != _Gwaiting && oldstatus != _Grunnable {\n\t\t\tthrow(\"copystack: bad status, not Gwaiting or Grunnable\")\n\t\t}\n\t\tif atomic.Cas(&gp.atomicstatus, oldstatus, _Gcopystack) {\n\t\t\treturn oldstatus\n\t\t}\n\t}\n}\n\n// scang blocks until gp's stack has been scanned.\n// It might be scanned by scang or it might be scanned by the goroutine itself.\n// Either way, the stack scan has completed when scang returns.\nfunc scang(gp *g) {\n\t// Invariant; we (the caller, markroot for a specific goroutine) own gp.gcscandone.\n\t// Nothing is racing with us now, but gcscandone might be set to true left over\n\t// from an earlier round of stack scanning (we scan twice per GC).\n\t// We use gcscandone to record whether the scan has been done during this round.\n\t// It is important that the scan happens exactly once: if called twice,\n\t// the installation of stack barriers will detect the double scan and die.\n\n\tgp.gcscandone = false\n\n\t// Endeavor to get gcscandone set to true,\n\t// either by doing the stack scan ourselves or by coercing gp to scan itself.\n\t// gp.gcscandone can transition from false to true when we're not looking\n\t// (if we asked for preemption), so any time we lock the status using\n\t// castogscanstatus we have to double-check that the scan is still not done.\n\tfor !gp.gcscandone {\n\t\tswitch s := readgstatus(gp); s {\n\t\tdefault:\n\t\t\tdumpgstatus(gp)\n\t\t\tthrow(\"stopg: invalid status\")\n\n\t\tcase _Gdead:\n\t\t\t// No stack.\n\t\t\tgp.gcscandone = true\n\n\t\tcase _Gcopystack:\n\t\t// Stack being switched. 
Go around again.\n\n\t\tcase _Grunnable, _Gsyscall, _Gwaiting:\n\t\t\t// Claim goroutine by setting scan bit.\n\t\t\t// Racing with execution or readying of gp.\n\t\t\t// The scan bit keeps them from running\n\t\t\t// the goroutine until we're done.\n\t\t\tif castogscanstatus(gp, s, s|_Gscan) {\n\t\t\t\tif !gp.gcscandone {\n\t\t\t\t\tscanstack(gp)\n\t\t\t\t\tgp.gcscandone = true\n\t\t\t\t}\n\t\t\t\trestartg(gp)\n\t\t\t}\n\n\t\tcase _Gscanwaiting:\n\t\t// newstack is doing a scan for us right now. Wait.\n\n\t\tcase _Grunning:\n\t\t\t// Goroutine running. Try to preempt execution so it can scan itself.\n\t\t\t// The preemption handler (in newstack) does the actual scan.\n\n\t\t\t// Optimization: if there is already a pending preemption request\n\t\t\t// (from the previous loop iteration), don't bother with the atomics.\n\t\t\tif gp.preemptscan && gp.preempt && gp.stackguard0 == stackPreempt {\n\t\t\t\tbreak\n\t\t\t}\n\n\t\t\t// Ask for preemption and self scan.\n\t\t\tif castogscanstatus(gp, _Grunning, _Gscanrunning) {\n\t\t\t\tif !gp.gcscandone {\n\t\t\t\t\tgp.preemptscan = true\n\t\t\t\t\tgp.preempt = true\n\t\t\t\t\tgp.stackguard0 = stackPreempt\n\t\t\t\t}\n\t\t\t\tcasfrom_Gscanstatus(gp, _Gscanrunning, _Grunning)\n\t\t\t}\n\t\t}\n\t}\n\n\tgp.preemptscan = false // cancel scan request if no longer needed\n}\n\n// The GC requests that this routine be moved from a scanmumble state to a mumble state.\nfunc restartg(gp *g) {\n\ts := readgstatus(gp)\n\tswitch s {\n\tdefault:\n\t\tdumpgstatus(gp)\n\t\tthrow(\"restartg: unexpected status\")\n\n\tcase _Gdead:\n\t// ok\n\n\tcase _Gscanrunnable,\n\t\t_Gscanwaiting,\n\t\t_Gscansyscall:\n\t\tcasfrom_Gscanstatus(gp, s, s&^_Gscan)\n\n\t// Scan is now completed.\n\t// Goroutine now needs to be made runnable.\n\t// We put it on the global run queue; ready blocks on the global scheduler lock.\n\tcase _Gscanenqueue:\n\t\tcasfrom_Gscanstatus(gp, _Gscanenqueue, _Gwaiting)\n\t\tif gp != getg().m.curg {\n\t\t\tthrow(\"processing 
Gscanenqueue on wrong m\")\n\t\t}\n\t\tdropg()\n\t\tready(gp, 0)\n\t}\n}\n\n// stopTheWorld stops all P's from executing goroutines, interrupting\n// all goroutines at GC safe points and records reason as the reason\n// for the stop. On return, only the current goroutine's P is running.\n// stopTheWorld must not be called from a system stack and the caller\n// must not hold worldsema. The caller must call startTheWorld when\n// other P's should resume execution.\n//\n// stopTheWorld is safe for multiple goroutines to call at the\n// same time. Each will execute its own stop, and the stops will\n// be serialized.\n//\n// This is also used by routines that do stack dumps. If the system is\n// in panic or being exited, this may not reliably stop all\n// goroutines.\nfunc stopTheWorld(reason string) {\n\tsemacquire(&worldsema, false)\n\tgetg().m.preemptoff = reason\n\tsystemstack(stopTheWorldWithSema)\n}\n\n// startTheWorld undoes the effects of stopTheWorld.\nfunc startTheWorld() {\n\tsystemstack(startTheWorldWithSema)\n\t// worldsema must be held over startTheWorldWithSema to ensure\n\t// gomaxprocs cannot change while worldsema is held.\n\tsemrelease(&worldsema)\n\tgetg().m.preemptoff = \"\"\n}\n\n// Holding worldsema grants an M the right to try to stop the world\n// and prevents gomaxprocs from changing concurrently.\nvar worldsema uint32 = 1\n\n// stopTheWorldWithSema is the core implementation of stopTheWorld.\n// The caller is responsible for acquiring worldsema and disabling\n// preemption first and then should stopTheWorldWithSema on the system\n// stack:\n//\n//\tsemacquire(&worldsema, false)\n//\tm.preemptoff = \"reason\"\n//\tsystemstack(stopTheWorldWithSema)\n//\n// When finished, the caller must either call startTheWorld or undo\n// these three operations separately:\n//\n//\tm.preemptoff = \"\"\n//\tsystemstack(startTheWorldWithSema)\n//\tsemrelease(&worldsema)\n//\n// It is allowed to acquire worldsema once and then execute multiple\n// 
startTheWorldWithSema/stopTheWorldWithSema pairs.\n// Other P's are able to execute between successive calls to\n// startTheWorldWithSema and stopTheWorldWithSema.\n// Holding worldsema causes any other goroutines invoking\n// stopTheWorld to block.\nfunc stopTheWorldWithSema() {\n\t_g_ := getg()\n\n\t// If we hold a lock, then we won't be able to stop another M\n\t// that is blocked trying to acquire the lock.\n\tif _g_.m.locks > 0 {\n\t\tthrow(\"stopTheWorld: holding locks\")\n\t}\n\n\tlock(&sched.lock)\n\tsched.stopwait = gomaxprocs\n\tatomic.Store(&sched.gcwaiting, 1)\n\tpreemptall()\n\t// stop current P\n\t_g_.m.p.ptr().status = _Pgcstop // Pgcstop is only diagnostic.\n\tsched.stopwait--\n\t// try to retake all P's in Psyscall status\n\tfor i := 0; i < int(gomaxprocs); i++ {\n\t\tp := allp[i]\n\t\ts := p.status\n\t\tif s == _Psyscall && atomic.Cas(&p.status, s, _Pgcstop) {\n\t\t\tif trace.enabled {\n\t\t\t\ttraceGoSysBlock(p)\n\t\t\t\ttraceProcStop(p)\n\t\t\t}\n\t\t\tp.syscalltick++\n\t\t\tsched.stopwait--\n\t\t}\n\t}\n\t// stop idle P's\n\tfor {\n\t\tp := pidleget()\n\t\tif p == nil {\n\t\t\tbreak\n\t\t}\n\t\tp.status = _Pgcstop\n\t\tsched.stopwait--\n\t}\n\twait := sched.stopwait > 0\n\tunlock(&sched.lock)\n\n\t// wait for remaining P's to stop voluntarily\n\tif wait {\n\t\tfor {\n\t\t\t// wait for 100us, then try to re-preempt in case of any races\n\t\t\tif notetsleep(&sched.stopnote, 100*1000) {\n\t\t\t\tnoteclear(&sched.stopnote)\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tpreemptall()\n\t\t}\n\t}\n\tif sched.stopwait != 0 {\n\t\tthrow(\"stopTheWorld: not stopped\")\n\t}\n\tfor i := 0; i < int(gomaxprocs); i++ {\n\t\tp := allp[i]\n\t\tif p.status != _Pgcstop {\n\t\t\tthrow(\"stopTheWorld: not stopped\")\n\t\t}\n\t}\n}\n\nfunc mhelpgc() {\n\t_g_ := getg()\n\t_g_.m.helpgc = -1\n}\n\nfunc startTheWorldWithSema() {\n\t_g_ := getg()\n\n\t_g_.m.locks++        // disable preemption because it can be holding p in a local var\n\tgp := netpoll(false) // 
non-blocking\n\tinjectglist(gp)\n\tadd := needaddgcproc()\n\tlock(&sched.lock)\n\n\tprocs := gomaxprocs\n\tif newprocs != 0 {\n\t\tprocs = newprocs\n\t\tnewprocs = 0\n\t}\n\tp1 := procresize(procs)\n\tsched.gcwaiting = 0\n\tif sched.sysmonwait != 0 {\n\t\tsched.sysmonwait = 0\n\t\tnotewakeup(&sched.sysmonnote)\n\t}\n\tunlock(&sched.lock)\n\n\tfor p1 != nil {\n\t\tp := p1\n\t\tp1 = p1.link.ptr()\n\t\tif p.m != 0 {\n\t\t\tmp := p.m.ptr()\n\t\t\tp.m = 0\n\t\t\tif mp.nextp != 0 {\n\t\t\t\tthrow(\"startTheWorld: inconsistent mp->nextp\")\n\t\t\t}\n\t\t\tmp.nextp.set(p)\n\t\t\tnotewakeup(&mp.park)\n\t\t} else {\n\t\t\t// Start M to run P.  Do not start another M below.\n\t\t\tnewm(nil, p)\n\t\t\tadd = false\n\t\t}\n\t}\n\n\t// Wake up an additional proc in case we have excessive runnable goroutines\n\t// in local queues or in the global queue. If we don't, the proc will park itself.\n\t// If we have lots of excessive work, resetspinning will unpark additional procs as necessary.\n\tif atomic.Load(&sched.npidle) != 0 && atomic.Load(&sched.nmspinning) == 0 {\n\t\twakep()\n\t}\n\n\tif add {\n\t\t// If GC could have used another helper proc, start one now,\n\t\t// in the hope that it will be available next time.\n\t\t// It would have been even better to start it before the collection,\n\t\t// but doing so requires allocating memory, so it's tricky to\n\t\t// coordinate.  
This lazy approach works out in practice:\n\t\t// we don't mind if the first couple gc rounds don't have quite\n\t\t// the maximum number of procs.\n\t\tnewm(mhelpgc, nil)\n\t}\n\t_g_.m.locks--\n\tif _g_.m.locks == 0 && _g_.preempt { // restore the preemption request in case we've cleared it in newstack\n\t\t_g_.stackguard0 = stackPreempt\n\t}\n}\n\n// Called to start an M.\n//go:nosplit\nfunc mstart() {\n\t_g_ := getg()\n\n\tif _g_.stack.lo == 0 {\n\t\t// Initialize stack bounds from system stack.\n\t\t// Cgo may have left stack size in stack.hi.\n\t\tsize := _g_.stack.hi\n\t\tif size == 0 {\n\t\t\tsize = 8192 * sys.StackGuardMultiplier\n\t\t}\n\t\t_g_.stack.hi = uintptr(noescape(unsafe.Pointer(&size)))\n\t\t_g_.stack.lo = _g_.stack.hi - size + 1024\n\t}\n\t// Initialize stack guards so that we can start calling\n\t// both Go and C functions with stack growth prologues.\n\t_g_.stackguard0 = _g_.stack.lo + _StackGuard\n\t_g_.stackguard1 = _g_.stackguard0\n\tmstart1()\n}\n\nfunc mstart1() {\n\t_g_ := getg()\n\n\tif _g_ != _g_.m.g0 {\n\t\tthrow(\"bad runtime·mstart\")\n\t}\n\n\t// Record top of stack for use by mcall.\n\t// Once we call schedule we're never coming back,\n\t// so other calls can reuse this stack space.\n\tgosave(&_g_.m.g0.sched)\n\t_g_.m.g0.sched.pc = ^uintptr(0) // make sure it is never used\n\tasminit()\n\tminit()\n\n\t// Install signal handlers; after minit so that minit can\n\t// prepare the thread to be able to handle the signals.\n\tif _g_.m == &m0 {\n\t\t// Create an extra M for callbacks on threads not created by Go.\n\t\tif iscgo && !cgoHasExtraM {\n\t\t\tcgoHasExtraM = true\n\t\t\tnewextram()\n\t\t}\n\t\tinitsig(false)\n\t}\n\n\tif fn := _g_.m.mstartfn; fn != nil {\n\t\tfn()\n\t}\n\n\tif _g_.m.helpgc != 0 {\n\t\t_g_.m.helpgc = 0\n\t\tstopm()\n\t} else if _g_.m != &m0 {\n\t\tacquirep(_g_.m.nextp.ptr())\n\t\t_g_.m.nextp = 0\n\t}\n\tschedule()\n}\n\n// forEachP calls fn(p) for every P p when p reaches a GC safe point.\n// If a P is currently 
executing code, this will bring the P to a GC\n// safe point and execute fn on that P. If the P is not executing code\n// (it is idle or in a syscall), this will call fn(p) directly while\n// preventing the P from exiting its state. This does not ensure that\n// fn will run on every CPU executing Go code, but it acts as a global\n// memory barrier. GC uses this as a \"ragged barrier.\"\n//\n// The caller must hold worldsema.\n//\n//go:systemstack\nfunc forEachP(fn func(*p)) {\n\tmp := acquirem()\n\t_p_ := getg().m.p.ptr()\n\n\tlock(&sched.lock)\n\tif sched.safePointWait != 0 {\n\t\tthrow(\"forEachP: sched.safePointWait != 0\")\n\t}\n\tsched.safePointWait = gomaxprocs - 1\n\tsched.safePointFn = fn\n\n\t// Ask all Ps to run the safe point function.\n\tfor _, p := range allp[:gomaxprocs] {\n\t\tif p != _p_ {\n\t\t\tatomic.Store(&p.runSafePointFn, 1)\n\t\t}\n\t}\n\tpreemptall()\n\n\t// Any P entering _Pidle or _Psyscall from now on will observe\n\t// p.runSafePointFn == 1 and will call runSafePointFn when\n\t// changing its status to _Pidle/_Psyscall.\n\n\t// Run safe point function for all idle Ps. 
sched.pidle will\n\t// not change because we hold sched.lock.\n\tfor p := sched.pidle.ptr(); p != nil; p = p.link.ptr() {\n\t\tif atomic.Cas(&p.runSafePointFn, 1, 0) {\n\t\t\tfn(p)\n\t\t\tsched.safePointWait--\n\t\t}\n\t}\n\n\twait := sched.safePointWait > 0\n\tunlock(&sched.lock)\n\n\t// Run fn for the current P.\n\tfn(_p_)\n\n\t// Force Ps currently in _Psyscall into _Pidle and hand them\n\t// off to induce safe point function execution.\n\tfor i := 0; i < int(gomaxprocs); i++ {\n\t\tp := allp[i]\n\t\ts := p.status\n\t\tif s == _Psyscall && p.runSafePointFn == 1 && atomic.Cas(&p.status, s, _Pidle) {\n\t\t\tif trace.enabled {\n\t\t\t\ttraceGoSysBlock(p)\n\t\t\t\ttraceProcStop(p)\n\t\t\t}\n\t\t\tp.syscalltick++\n\t\t\thandoffp(p)\n\t\t}\n\t}\n\n\t// Wait for remaining Ps to run fn.\n\tif wait {\n\t\tfor {\n\t\t\t// Wait for 100us, then try to re-preempt in\n\t\t\t// case of any races.\n\t\t\t//\n\t\t\t// Requires system stack.\n\t\t\tif notetsleep(&sched.safePointNote, 100*1000) {\n\t\t\t\tnoteclear(&sched.safePointNote)\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tpreemptall()\n\t\t}\n\t}\n\tif sched.safePointWait != 0 {\n\t\tthrow(\"forEachP: not done\")\n\t}\n\tfor i := 0; i < int(gomaxprocs); i++ {\n\t\tp := allp[i]\n\t\tif p.runSafePointFn != 0 {\n\t\t\tthrow(\"forEachP: P did not run fn\")\n\t\t}\n\t}\n\n\tlock(&sched.lock)\n\tsched.safePointFn = nil\n\tunlock(&sched.lock)\n\treleasem(mp)\n}\n\n// runSafePointFn runs the safe point function, if any, for this P.\n// This should be called like\n//\n//     if getg().m.p.runSafePointFn != 0 {\n//         runSafePointFn()\n//     }\n//\n// runSafePointFn must be checked on any transition in to _Pidle or\n// _Psyscall to avoid a race where forEachP sees that the P is running\n// just before the P goes into _Pidle/_Psyscall and neither forEachP\n// nor the P run the safe-point function.\nfunc runSafePointFn() {\n\tp := getg().m.p.ptr()\n\t// Resolve the race between forEachP running the safe-point\n\t// function on this P's 
behalf and this P running the\n\t// safe-point function directly.\n\tif !atomic.Cas(&p.runSafePointFn, 1, 0) {\n\t\treturn\n\t}\n\tsched.safePointFn(p)\n\tlock(&sched.lock)\n\tsched.safePointWait--\n\tif sched.safePointWait == 0 {\n\t\tnotewakeup(&sched.safePointNote)\n\t}\n\tunlock(&sched.lock)\n}\n\n// When running with cgo, we call _cgo_thread_start\n// to start threads for us so that we can play nicely with\n// foreign code.\nvar cgoThreadStart unsafe.Pointer\n\ntype cgothreadstart struct {\n\tg   guintptr\n\ttls *uint64\n\tfn  unsafe.Pointer\n}\n\n// Allocate a new m unassociated with any thread.\n// Can use p for allocation context if needed.\n// fn is recorded as the new m's m.mstartfn.\n//\n// This function is known to the compiler to inhibit the\n// go:nowritebarrierrec annotation because it uses P for allocation.\nfunc allocm(_p_ *p, fn func()) *m {\n\t_g_ := getg()\n\t_g_.m.locks++ // disable GC because it can be called from sysmon\n\tif _g_.m.p == 0 {\n\t\tacquirep(_p_) // temporarily borrow p for mallocs in this function\n\t}\n\tmp := new(m)\n\tmp.mstartfn = fn\n\tmcommoninit(mp)\n\n\t// In case of cgo or Solaris, pthread_create will make us a stack.\n\t// Windows and Plan 9 will lay out the sched stack on the OS stack.\n\tif iscgo || GOOS == \"solaris\" || GOOS == \"windows\" || GOOS == \"plan9\" {\n\t\tmp.g0 = malg(-1)\n\t} else {\n\t\tmp.g0 = malg(8192 * sys.StackGuardMultiplier)\n\t}\n\tmp.g0.m = mp\n\n\tif _p_ == _g_.m.p.ptr() {\n\t\treleasep()\n\t}\n\t_g_.m.locks--\n\tif _g_.m.locks == 0 && _g_.preempt { // restore the preemption request in case we've cleared it in newstack\n\t\t_g_.stackguard0 = stackPreempt\n\t}\n\n\treturn mp\n}\n\n// needm is called when a cgo callback happens on a\n// thread without an m (a thread not created by Go).\n// In this case, needm is expected to find an m to use\n// and return with m, g initialized correctly.\n// Since m and g are not set now (likely nil, but see below),\n// needm is limited in what routines it can call. 
In particular\n// it can only call nosplit functions (textflag 7) and cannot\n// do any scheduling that requires an m.\n//\n// In order to avoid needing heavy lifting here, we adopt\n// the following strategy: there is a stack of available m's\n// that can be stolen. Using compare-and-swap\n// to pop from the stack has ABA races, so we simulate\n// a lock by doing an exchange (via casp) to steal the stack\n// head and replace the top pointer with MLOCKED (1).\n// This serves as a simple spin lock that we can use even\n// without an m. The thread that locks the stack in this way\n// unlocks the stack by storing a valid stack head pointer.\n//\n// In order to make sure that there is always an m structure\n// available to be stolen, we maintain the invariant that there\n// is always one more than needed. At the beginning of the\n// program (if cgo is in use) the list is seeded with a single m.\n// If needm finds that it has taken the last m off the list, its job\n// is - once it has installed its own m so that it can do things like\n// allocate memory - to create a spare m and put it on the list.\n//\n// Each of these extra m's also has a g0 and a curg that are\n// pressed into service as the scheduling stack and current\n// goroutine for the duration of the cgo callback.\n//\n// When the callback is done with the m, it calls dropm to\n// put the m back on the list.\n//go:nosplit\nfunc needm(x byte) {\n\tif iscgo && !cgoHasExtraM {\n\t\t// Can happen if C/C++ code calls Go from a global ctor.\n\t\t// Can not throw, because scheduler is not initialized yet.\n\t\twrite(2, unsafe.Pointer(&earlycgocallback[0]), int32(len(earlycgocallback)))\n\t\texit(1)\n\t}\n\n\t// Lock extra list, take head, unlock popped list.\n\t// nilokay=false is safe here because of the invariant above,\n\t// that the extra list always contains or will soon contain\n\t// at least one m.\n\tmp := lockextra(false)\n\n\t// Set needextram when we've just emptied the list,\n\t// so that the eventual 
call into cgocallbackg will\n\t// allocate a new m for the extra list. We delay the\n\t// allocation until then so that it can be done\n\t// after exitsyscall makes sure it is okay to be\n\t// running at all (that is, there's no garbage collection\n\t// running right now).\n\tmp.needextram = mp.schedlink == 0\n\tunlockextra(mp.schedlink.ptr())\n\n\t// Save and block signals before installing g.\n\t// Once g is installed, any incoming signals will try to execute,\n\t// but we won't have the sigaltstack settings and other data\n\t// set up appropriately until the end of minit, which will\n\t// unblock the signals. This is the same dance as when\n\t// starting a new m to run Go code via newosproc.\n\tmsigsave(mp)\n\tsigblock()\n\n\t// Install g (= m->g0) and set the stack bounds\n\t// to match the current stack. We don't actually know\n\t// how big the stack is, like we don't know how big any\n\t// scheduling stack is, but we assume there's at least 32 kB,\n\t// which is more than enough for us.\n\tsetg(mp.g0)\n\t_g_ := getg()\n\t_g_.stack.hi = uintptr(noescape(unsafe.Pointer(&x))) + 1024\n\t_g_.stack.lo = uintptr(noescape(unsafe.Pointer(&x))) - 32*1024\n\t_g_.stackguard0 = _g_.stack.lo + _StackGuard\n\n\t// Initialize this thread to use the m.\n\tasminit()\n\tminit()\n}\n\nvar earlycgocallback = []byte(\"fatal error: cgo callback before cgo call\\n\")\n\n// newextram allocates an m and puts it on the extra list.\n// It is called with a working local m, so that it can do things\n// like call schedlock and allocate.\nfunc newextram() {\n\t// Create extra goroutine locked to extra m.\n\t// The goroutine is the context in which the cgo callback will run.\n\t// The sched.pc will never be returned to, but setting it to\n\t// goexit makes clear to the traceback routines where\n\t// the goroutine stack ends.\n\tmp := allocm(nil, nil)\n\tgp := malg(4096)\n\tgp.sched.pc = funcPC(goexit) + sys.PCQuantum\n\tgp.sched.sp = gp.stack.hi\n\tgp.sched.sp -= 4 * sys.RegSize // extra 
space in case of reads slightly beyond frame\n\tgp.sched.lr = 0\n\tgp.sched.g = guintptr(unsafe.Pointer(gp))\n\tgp.syscallpc = gp.sched.pc\n\tgp.syscallsp = gp.sched.sp\n\tgp.stktopsp = gp.sched.sp\n\t// malg returns status as Gidle, change to Gsyscall before adding to allg\n\t// where GC will see it.\n\tcasgstatus(gp, _Gidle, _Gsyscall)\n\tgp.m = mp\n\tmp.curg = gp\n\tmp.locked = _LockInternal\n\tmp.lockedg = gp\n\tgp.lockedm = mp\n\tgp.goid = int64(atomic.Xadd64(&sched.goidgen, 1))\n\tif raceenabled {\n\t\tgp.racectx = racegostart(funcPC(newextram))\n\t}\n\t// put on allg for garbage collector\n\tallgadd(gp)\n\n\t// Add m to the extra list.\n\tmnext := lockextra(true)\n\tmp.schedlink.set(mnext)\n\tunlockextra(mp)\n}\n\n// dropm is called when a cgo callback has called needm but is now\n// done with the callback and returning to the non-Go thread.\n// It puts the current m back onto the extra list.\n//\n// The main expense here is the call to signalstack to release the\n// m's signal stack, and then the call to needm on the next callback\n// from this thread. It is tempting to try to save the m for next time,\n// which would eliminate both these costs, but there might not be\n// a next time: the current thread (which Go does not control) might exit.\n// If we saved the m for that thread, there would be an m leak each time\n// such a thread exited. Instead, we acquire and release an m on each\n// call. These should typically not be scheduling operations, just a few\n// atomics, so the cost should be small.\n//\n// TODO(rsc): An alternative would be to allocate a dummy pthread per-thread\n// variable using pthread_key_create. Unlike the pthread keys we already use\n// on OS X, this dummy key would never be read by Go code. It would exist\n// only so that we could register a thread-exit-time destructor.\n// That destructor would put the m back onto the extra list.\n// This is purely a performance optimization. 
The current version,\n// in which dropm happens on each cgo call, is still correct too.\n// We may have to keep the current version on systems with cgo\n// but without pthreads, like Windows.\nfunc dropm() {\n\t// Clear m and g, and return m to the extra list.\n\t// After the call to setg we can only call nosplit functions\n\t// with no pointer manipulation.\n\tmp := getg().m\n\n\t// Block signals before unminit.\n\t// Unminit unregisters the signal handling stack (but needs g on some systems).\n\t// Setg(nil) clears g, which is the signal handler's cue not to run Go handlers.\n\t// It's important not to try to handle a signal between those two steps.\n\tsigmask := mp.sigmask\n\tsigblock()\n\tunminit()\n\n\tmnext := lockextra(true)\n\tmp.schedlink.set(mnext)\n\n\tsetg(nil)\n\n\t// Commit the release of mp.\n\tunlockextra(mp)\n\n\tmsigrestore(sigmask)\n}\n\n// A helper function for EnsureDropM.\nfunc getm() uintptr {\n\treturn uintptr(unsafe.Pointer(getg().m))\n}\n\nvar extram uintptr\n\n// lockextra locks the extra list and returns the list head.\n// The caller must unlock the list by storing a new list head\n// to extram. If nilokay is true, then lockextra will\n// return a nil list head if that's what it finds. If nilokay is false,\n// lockextra will keep waiting until the list head is no longer nil.\n//go:nosplit\nfunc lockextra(nilokay bool) *m {\n\tconst locked = 1\n\n\tfor {\n\t\told := atomic.Loaduintptr(&extram)\n\t\tif old == locked {\n\t\t\tyield := osyield\n\t\t\tyield()\n\t\t\tcontinue\n\t\t}\n\t\tif old == 0 && !nilokay {\n\t\t\tusleep(1)\n\t\t\tcontinue\n\t\t}\n\t\tif atomic.Casuintptr(&extram, old, locked) {\n\t\t\treturn (*m)(unsafe.Pointer(old))\n\t\t}\n\t\tyield := osyield\n\t\tyield()\n\t\tcontinue\n\t}\n}\n\n//go:nosplit\nfunc unlockextra(mp *m) {\n\tatomic.Storeuintptr(&extram, uintptr(unsafe.Pointer(mp)))\n}\n\n// Create a new m.  
It will start off with a call to fn, or else the scheduler.\n// fn needs to be static and not a heap allocated closure.\n// May run with m.p==nil, so write barriers are not allowed.\n//go:nowritebarrier\nfunc newm(fn func(), _p_ *p) {\n\tmp := allocm(_p_, fn)\n\tmp.nextp.set(_p_)\n\tmp.sigmask = initSigmask\n\tif iscgo {\n\t\tvar ts cgothreadstart\n\t\tif _cgo_thread_start == nil {\n\t\t\tthrow(\"_cgo_thread_start missing\")\n\t\t}\n\t\tts.g.set(mp.g0)\n\t\tts.tls = (*uint64)(unsafe.Pointer(&mp.tls[0]))\n\t\tts.fn = unsafe.Pointer(funcPC(mstart))\n\t\tif msanenabled {\n\t\t\tmsanwrite(unsafe.Pointer(&ts), unsafe.Sizeof(ts))\n\t\t}\n\t\tasmcgocall(_cgo_thread_start, unsafe.Pointer(&ts))\n\t\treturn\n\t}\n\tnewosproc(mp, unsafe.Pointer(mp.g0.stack.hi))\n}\n\n// Stops execution of the current m until new work is available.\n// Returns with acquired P.\nfunc stopm() {\n\t_g_ := getg()\n\n\tif _g_.m.locks != 0 {\n\t\tthrow(\"stopm holding locks\")\n\t}\n\tif _g_.m.p != 0 {\n\t\tthrow(\"stopm holding p\")\n\t}\n\tif _g_.m.spinning {\n\t\tthrow(\"stopm spinning\")\n\t}\n\nretry:\n\tlock(&sched.lock)\n\tmput(_g_.m)\n\tunlock(&sched.lock)\n\tnotesleep(&_g_.m.park)\n\tnoteclear(&_g_.m.park)\n\tif _g_.m.helpgc != 0 {\n\t\tgchelper()\n\t\t_g_.m.helpgc = 0\n\t\t_g_.m.mcache = nil\n\t\t_g_.m.p = 0\n\t\tgoto retry\n\t}\n\tacquirep(_g_.m.nextp.ptr())\n\t_g_.m.nextp = 0\n}\n\nfunc mspinning() {\n\t// startm's caller incremented nmspinning. 
Set the new M's spinning.\n\tgetg().m.spinning = true\n}\n\n// Schedules some M to run the p (creates an M if necessary).\n// If p==nil, tries to get an idle P; if there are no idle P's, does nothing.\n// May run with m.p==nil, so write barriers are not allowed.\n// If spinning is set, the caller has incremented nmspinning and startm will\n// either decrement nmspinning or set m.spinning in the newly started M.\n//go:nowritebarrier\nfunc startm(_p_ *p, spinning bool) {\n\tlock(&sched.lock)\n\tif _p_ == nil {\n\t\t_p_ = pidleget()\n\t\tif _p_ == nil {\n\t\t\tunlock(&sched.lock)\n\t\t\tif spinning {\n\t\t\t\t// The caller incremented nmspinning, but there are no idle Ps,\n\t\t\t\t// so it's okay to just undo the increment and give up.\n\t\t\t\tif int32(atomic.Xadd(&sched.nmspinning, -1)) < 0 {\n\t\t\t\t\tthrow(\"startm: negative nmspinning\")\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn\n\t\t}\n\t}\n\tmp := mget()\n\tunlock(&sched.lock)\n\tif mp == nil {\n\t\tvar fn func()\n\t\tif spinning {\n\t\t\t// The caller incremented nmspinning, so set m.spinning in the new M.\n\t\t\tfn = mspinning\n\t\t}\n\t\tnewm(fn, _p_)\n\t\treturn\n\t}\n\tif mp.spinning {\n\t\tthrow(\"startm: m is spinning\")\n\t}\n\tif mp.nextp != 0 {\n\t\tthrow(\"startm: m has p\")\n\t}\n\tif spinning && !runqempty(_p_) {\n\t\tthrow(\"startm: p has runnable gs\")\n\t}\n\t// The caller incremented nmspinning, so set m.spinning in the new M.\n\tmp.spinning = spinning\n\tmp.nextp.set(_p_)\n\tnotewakeup(&mp.park)\n}\n\n// Hands off P from syscall or locked M.\n// Always runs without a P, so write barriers are not allowed.\n//go:nowritebarrier\nfunc handoffp(_p_ *p) {\n\t// handoffp must start an M in any situation where\n\t// findrunnable would return a G to run on _p_.\n\n\t// if it has local work, start it straight away\n\tif !runqempty(_p_) || sched.runqsize != 0 {\n\t\tstartm(_p_, false)\n\t\treturn\n\t}\n\t// if it has GC work, start it straight away\n\tif gcBlackenEnabled != 0 && gcMarkWorkAvailable(_p_) {\n\t\tstartm(_p_, 
false)\n\t\treturn\n\t}\n\t// no local work, check that there are no spinning/idle M's,\n\t// otherwise our help is not required\n\tif atomic.Load(&sched.nmspinning)+atomic.Load(&sched.npidle) == 0 && atomic.Cas(&sched.nmspinning, 0, 1) { // TODO: fast atomic\n\t\tstartm(_p_, true)\n\t\treturn\n\t}\n\tlock(&sched.lock)\n\tif sched.gcwaiting != 0 {\n\t\t_p_.status = _Pgcstop\n\t\tsched.stopwait--\n\t\tif sched.stopwait == 0 {\n\t\t\tnotewakeup(&sched.stopnote)\n\t\t}\n\t\tunlock(&sched.lock)\n\t\treturn\n\t}\n\tif _p_.runSafePointFn != 0 && atomic.Cas(&_p_.runSafePointFn, 1, 0) {\n\t\tsched.safePointFn(_p_)\n\t\tsched.safePointWait--\n\t\tif sched.safePointWait == 0 {\n\t\t\tnotewakeup(&sched.safePointNote)\n\t\t}\n\t}\n\tif sched.runqsize != 0 {\n\t\tunlock(&sched.lock)\n\t\tstartm(_p_, false)\n\t\treturn\n\t}\n\t// If this is the last running P and nobody is polling network,\n\t// need to wakeup another M to poll network.\n\tif sched.npidle == uint32(gomaxprocs-1) && atomic.Load64(&sched.lastpoll) != 0 {\n\t\tunlock(&sched.lock)\n\t\tstartm(_p_, false)\n\t\treturn\n\t}\n\tpidleput(_p_)\n\tunlock(&sched.lock)\n}\n\n// Tries to add one more P to execute G's.\n// Called when a G is made runnable (newproc, ready).\nfunc wakep() {\n\t// be conservative about spinning threads\n\tif !atomic.Cas(&sched.nmspinning, 0, 1) {\n\t\treturn\n\t}\n\tstartm(nil, true)\n}\n\n// Stops execution of the current m that is locked to a g until the g is runnable again.\n// Returns with acquired P.\nfunc stoplockedm() {\n\t_g_ := getg()\n\n\tif _g_.m.lockedg == nil || _g_.m.lockedg.lockedm != _g_.m {\n\t\tthrow(\"stoplockedm: inconsistent locking\")\n\t}\n\tif _g_.m.p != 0 {\n\t\t// Schedule another M to run this p.\n\t\t_p_ := releasep()\n\t\thandoffp(_p_)\n\t}\n\tincidlelocked(1)\n\t// Wait until another thread schedules lockedg again.\n\tnotesleep(&_g_.m.park)\n\tnoteclear(&_g_.m.park)\n\tstatus := readgstatus(_g_.m.lockedg)\n\tif status&^_Gscan != _Grunnable 
{\n\t\tprint(\"runtime:stoplockedm: g is not Grunnable or Gscanrunnable\\n\")\n\t\tdumpgstatus(_g_)\n\t\tthrow(\"stoplockedm: not runnable\")\n\t}\n\tacquirep(_g_.m.nextp.ptr())\n\t_g_.m.nextp = 0\n}\n\n// Schedules the locked m to run the locked gp.\n// May run during STW, so write barriers are not allowed.\n//go:nowritebarrier\nfunc startlockedm(gp *g) {\n\t_g_ := getg()\n\n\tmp := gp.lockedm\n\tif mp == _g_.m {\n\t\tthrow(\"startlockedm: locked to me\")\n\t}\n\tif mp.nextp != 0 {\n\t\tthrow(\"startlockedm: m has p\")\n\t}\n\t// directly handoff current P to the locked m\n\tincidlelocked(-1)\n\t_p_ := releasep()\n\tmp.nextp.set(_p_)\n\tnotewakeup(&mp.park)\n\tstopm()\n}\n\n// Stops the current m for stopTheWorld.\n// Returns when the world is restarted.\nfunc gcstopm() {\n\t_g_ := getg()\n\n\tif sched.gcwaiting == 0 {\n\t\tthrow(\"gcstopm: not waiting for gc\")\n\t}\n\tif _g_.m.spinning {\n\t\t_g_.m.spinning = false\n\t\t// OK to just drop nmspinning here,\n\t\t// startTheWorld will unpark threads as necessary.\n\t\tif int32(atomic.Xadd(&sched.nmspinning, -1)) < 0 {\n\t\t\tthrow(\"gcstopm: negative nmspinning\")\n\t\t}\n\t}\n\t_p_ := releasep()\n\tlock(&sched.lock)\n\t_p_.status = _Pgcstop\n\tsched.stopwait--\n\tif sched.stopwait == 0 {\n\t\tnotewakeup(&sched.stopnote)\n\t}\n\tunlock(&sched.lock)\n\tstopm()\n}\n\n// Schedules gp to run on the current M.\n// If inheritTime is true, gp inherits the remaining time in the\n// current time slice. 
Otherwise, it starts a new time slice.\n// Never returns.\nfunc execute(gp *g, inheritTime bool) {\n\t_g_ := getg()\n\n\tcasgstatus(gp, _Grunnable, _Grunning)\n\tgp.waitsince = 0\n\tgp.preempt = false\n\tgp.stackguard0 = gp.stack.lo + _StackGuard\n\tif !inheritTime {\n\t\t_g_.m.p.ptr().schedtick++\n\t}\n\t_g_.m.curg = gp\n\tgp.m = _g_.m\n\n\t// Check whether the profiler needs to be turned on or off.\n\thz := sched.profilehz\n\tif _g_.m.profilehz != hz {\n\t\tresetcpuprofiler(hz)\n\t}\n\n\tif trace.enabled {\n\t\t// GoSysExit has to happen when we have a P, but before GoStart.\n\t\t// So we emit it here.\n\t\tif gp.syscallsp != 0 && gp.sysblocktraced {\n\t\t\t// Since gp.sysblocktraced is true, we must emit an event.\n\t\t\t// There is a race between the code that initializes sysexitseq\n\t\t\t// and sysexitticks (in exitsyscall, which runs without a P,\n\t\t\t// and therefore is not stopped with the rest of the world)\n\t\t\t// and the code that initializes a new trace.\n\t\t\t// The recorded sysexitseq and sysexitticks must therefore\n\t\t\t// be treated as \"best effort\". 
If they are valid for this trace,\n\t\t\t// then great, use them for greater accuracy.\n\t\t\t// But if they're not valid for this trace, assume that the\n\t\t\t// trace was started after the actual syscall exit (but before\n\t\t\t// we actually managed to start the goroutine, aka right now),\n\t\t\t// and assign a fresh time stamp to keep the log consistent.\n\t\t\tseq, ts := gp.sysexitseq, gp.sysexitticks\n\t\t\tif seq == 0 || int64(seq)-int64(trace.seqStart) < 0 {\n\t\t\t\tseq, ts = tracestamp()\n\t\t\t}\n\t\t\ttraceGoSysExit(seq, ts)\n\t\t}\n\t\ttraceGoStart()\n\t}\n\n\tgogo(&gp.sched)\n}\n\n// Finds a runnable goroutine to execute.\n// Tries to steal from other P's, get g from global queue, poll network.\nfunc findrunnable() (gp *g, inheritTime bool) {\n\t_g_ := getg()\n\n\t// The conditions here and in handoffp must agree: if\n\t// findrunnable would return a G to run, handoffp must start\n\t// an M.\n\ntop:\n\tif sched.gcwaiting != 0 {\n\t\tgcstopm()\n\t\tgoto top\n\t}\n\tif _g_.m.p.ptr().runSafePointFn != 0 {\n\t\trunSafePointFn()\n\t}\n\tif fingwait && fingwake {\n\t\tif gp := wakefing(); gp != nil {\n\t\t\tready(gp, 0)\n\t\t}\n\t}\n\n\t// local runq\n\tif gp, inheritTime := runqget(_g_.m.p.ptr()); gp != nil {\n\t\treturn gp, inheritTime\n\t}\n\n\t// global runq\n\tif sched.runqsize != 0 {\n\t\tlock(&sched.lock)\n\t\tgp := globrunqget(_g_.m.p.ptr(), 0)\n\t\tunlock(&sched.lock)\n\t\tif gp != nil {\n\t\t\treturn gp, false\n\t\t}\n\t}\n\n\t// Poll network.\n\t// This netpoll is only an optimization before we resort to stealing.\n\t// We can safely skip it if there is a thread blocked in netpoll already.\n\t// If there is any kind of logical race with that blocked thread\n\t// (e.g. 
it has already returned from netpoll, but does not set lastpoll yet),\n\t// this thread will do blocking netpoll below anyway.\n\tif netpollinited() && sched.lastpoll != 0 {\n\t\tif gp := netpoll(false); gp != nil { // non-blocking\n\t\t\t// netpoll returns list of goroutines linked by schedlink.\n\t\t\tinjectglist(gp.schedlink.ptr())\n\t\t\tcasgstatus(gp, _Gwaiting, _Grunnable)\n\t\t\tif trace.enabled {\n\t\t\t\ttraceGoUnpark(gp, 0)\n\t\t\t}\n\t\t\treturn gp, false\n\t\t}\n\t}\n\n\t// If number of spinning M's >= number of busy P's, block.\n\t// This is necessary to prevent excessive CPU consumption\n\t// when GOMAXPROCS>>1 but the program parallelism is low.\n\tif !_g_.m.spinning && 2*atomic.Load(&sched.nmspinning) >= uint32(gomaxprocs)-atomic.Load(&sched.npidle) { // TODO: fast atomic\n\t\tgoto stop\n\t}\n\tif !_g_.m.spinning {\n\t\t_g_.m.spinning = true\n\t\tatomic.Xadd(&sched.nmspinning, 1)\n\t}\n\t// random steal from other P's\n\tfor i := 0; i < int(4*gomaxprocs); i++ {\n\t\tif sched.gcwaiting != 0 {\n\t\t\tgoto top\n\t\t}\n\t\t_p_ := allp[fastrand1()%uint32(gomaxprocs)]\n\t\tvar gp *g\n\t\tif _p_ == _g_.m.p.ptr() {\n\t\t\tgp, _ = runqget(_p_)\n\t\t} else {\n\t\t\tstealRunNextG := i > 2*int(gomaxprocs) // first look for ready queues with more than 1 g\n\t\t\tgp = runqsteal(_g_.m.p.ptr(), _p_, stealRunNextG)\n\t\t}\n\t\tif gp != nil {\n\t\t\treturn gp, false\n\t\t}\n\t}\n\nstop:\n\n\t// We have nothing to do. 
If we're in the GC mark phase, can\n\t// safely scan and blacken objects, and have work to do, run\n\t// idle-time marking rather than give up the P.\n\tif _p_ := _g_.m.p.ptr(); gcBlackenEnabled != 0 && _p_.gcBgMarkWorker != 0 && gcMarkWorkAvailable(_p_) {\n\t\t_p_.gcMarkWorkerMode = gcMarkWorkerIdleMode\n\t\tgp := _p_.gcBgMarkWorker.ptr()\n\t\tcasgstatus(gp, _Gwaiting, _Grunnable)\n\t\tif trace.enabled {\n\t\t\ttraceGoUnpark(gp, 0)\n\t\t}\n\t\treturn gp, false\n\t}\n\n\t// return P and block\n\tlock(&sched.lock)\n\tif sched.gcwaiting != 0 || _g_.m.p.ptr().runSafePointFn != 0 {\n\t\tunlock(&sched.lock)\n\t\tgoto top\n\t}\n\tif sched.runqsize != 0 {\n\t\tgp := globrunqget(_g_.m.p.ptr(), 0)\n\t\tunlock(&sched.lock)\n\t\treturn gp, false\n\t}\n\t_p_ := releasep()\n\tpidleput(_p_)\n\tunlock(&sched.lock)\n\n\t// Delicate dance: thread transitions from spinning to non-spinning state,\n\t// potentially concurrently with submission of new goroutines. We must\n\t// drop nmspinning first and then check all per-P queues again (with\n\t// #StoreLoad memory barrier in between). If we do it the other way around,\n\t// another thread can submit a goroutine after we've checked all run queues\n\t// but before we drop nmspinning; as the result nobody will unpark a thread\n\t// to run the goroutine.\n\t// If we discover new work below, we need to restore m.spinning as a signal\n\t// for resetspinning to unpark a new worker thread (because there can be more\n\t// than one starving goroutine). 
However, if after discovering new work\n\t// we also observe no idle Ps, it is OK to just park the current thread:\n\t// the system is fully loaded so no spinning threads are required.\n\t// Also see \"Worker thread parking/unparking\" comment at the top of the file.\n\twasSpinning := _g_.m.spinning\n\tif _g_.m.spinning {\n\t\t_g_.m.spinning = false\n\t\tif int32(atomic.Xadd(&sched.nmspinning, -1)) < 0 {\n\t\t\tthrow(\"findrunnable: negative nmspinning\")\n\t\t}\n\t}\n\n\t// check all runqueues once again\n\tfor i := 0; i < int(gomaxprocs); i++ {\n\t\t_p_ := allp[i]\n\t\tif _p_ != nil && !runqempty(_p_) {\n\t\t\tlock(&sched.lock)\n\t\t\t_p_ = pidleget()\n\t\t\tunlock(&sched.lock)\n\t\t\tif _p_ != nil {\n\t\t\t\tacquirep(_p_)\n\t\t\t\tif wasSpinning {\n\t\t\t\t\t_g_.m.spinning = true\n\t\t\t\t\tatomic.Xadd(&sched.nmspinning, 1)\n\t\t\t\t}\n\t\t\t\tgoto top\n\t\t\t}\n\t\t\tbreak\n\t\t}\n\t}\n\n\t// poll network\n\tif netpollinited() && atomic.Xchg64(&sched.lastpoll, 0) != 0 {\n\t\tif _g_.m.p != 0 {\n\t\t\tthrow(\"findrunnable: netpoll with p\")\n\t\t}\n\t\tif _g_.m.spinning {\n\t\t\tthrow(\"findrunnable: netpoll with spinning\")\n\t\t}\n\t\tgp := netpoll(true) // block until new work is available\n\t\tatomic.Store64(&sched.lastpoll, uint64(nanotime()))\n\t\tif gp != nil {\n\t\t\tlock(&sched.lock)\n\t\t\t_p_ = pidleget()\n\t\t\tunlock(&sched.lock)\n\t\t\tif _p_ != nil {\n\t\t\t\tacquirep(_p_)\n\t\t\t\tinjectglist(gp.schedlink.ptr())\n\t\t\t\tcasgstatus(gp, _Gwaiting, _Grunnable)\n\t\t\t\tif trace.enabled {\n\t\t\t\t\ttraceGoUnpark(gp, 0)\n\t\t\t\t}\n\t\t\t\treturn gp, false\n\t\t\t}\n\t\t\tinjectglist(gp)\n\t\t}\n\t}\n\tstopm()\n\tgoto top\n}\n\nfunc resetspinning() {\n\t_g_ := getg()\n\tif !_g_.m.spinning {\n\t\tthrow(\"resetspinning: not a spinning m\")\n\t}\n\t_g_.m.spinning = false\n\tnmspinning := atomic.Xadd(&sched.nmspinning, -1)\n\tif int32(nmspinning) < 0 {\n\t\tthrow(\"findrunnable: negative nmspinning\")\n\t}\n\t// M wakeup policy is deliberately somewhat 
conservative, so check if we\n\t// need to wakeup another P here. See \"Worker thread parking/unparking\"\n\t// comment at the top of the file for details.\n\tif nmspinning == 0 && atomic.Load(&sched.npidle) > 0 {\n\t\twakep()\n\t}\n}\n\n// Injects the list of runnable G's into the scheduler.\n// Can run concurrently with GC.\nfunc injectglist(glist *g) {\n\tif glist == nil {\n\t\treturn\n\t}\n\tif trace.enabled {\n\t\tfor gp := glist; gp != nil; gp = gp.schedlink.ptr() {\n\t\t\ttraceGoUnpark(gp, 0)\n\t\t}\n\t}\n\tlock(&sched.lock)\n\tvar n int\n\tfor n = 0; glist != nil; n++ {\n\t\tgp := glist\n\t\tglist = gp.schedlink.ptr()\n\t\tcasgstatus(gp, _Gwaiting, _Grunnable)\n\t\tglobrunqput(gp)\n\t}\n\tunlock(&sched.lock)\n\tfor ; n != 0 && sched.npidle != 0; n-- {\n\t\tstartm(nil, false)\n\t}\n}\n\n// One round of scheduler: find a runnable goroutine and execute it.\n// Never returns.\nfunc schedule() {\n\t_g_ := getg()\n\n\tif _g_.m.locks != 0 {\n\t\tthrow(\"schedule: holding locks\")\n\t}\n\n\tif _g_.m.lockedg != nil {\n\t\tstoplockedm()\n\t\texecute(_g_.m.lockedg, false) // Never returns.\n\t}\n\ntop:\n\tif sched.gcwaiting != 0 {\n\t\tgcstopm()\n\t\tgoto top\n\t}\n\tif _g_.m.p.ptr().runSafePointFn != 0 {\n\t\trunSafePointFn()\n\t}\n\n\tvar gp *g\n\tvar inheritTime bool\n\tif trace.enabled || trace.shutdown {\n\t\tgp = traceReader()\n\t\tif gp != nil {\n\t\t\tcasgstatus(gp, _Gwaiting, _Grunnable)\n\t\t\ttraceGoUnpark(gp, 0)\n\t\t}\n\t}\n\tif gp == nil && gcBlackenEnabled != 0 {\n\t\tgp = gcController.findRunnableGCWorker(_g_.m.p.ptr())\n\t}\n\tif gp == nil {\n\t\t// Check the global runnable queue once in a while to ensure fairness.\n\t\t// Otherwise two goroutines can completely occupy the local runqueue\n\t\t// by constantly respawning each other.\n\t\tif _g_.m.p.ptr().schedtick%61 == 0 && sched.runqsize > 0 {\n\t\t\tlock(&sched.lock)\n\t\t\tgp = globrunqget(_g_.m.p.ptr(), 1)\n\t\t\tunlock(&sched.lock)\n\t\t}\n\t}\n\tif gp == nil {\n\t\tgp, inheritTime = 
runqget(_g_.m.p.ptr())\n\t\tif gp != nil && _g_.m.spinning {\n\t\t\tthrow(\"schedule: spinning with local work\")\n\t\t}\n\t}\n\tif gp == nil {\n\t\tgp, inheritTime = findrunnable() // blocks until work is available\n\t}\n\n\t// This thread is going to run a goroutine and is not spinning anymore,\n\t// so if it was marked as spinning we need to reset it now and potentially\n\t// start a new spinning M.\n\tif _g_.m.spinning {\n\t\tresetspinning()\n\t}\n\n\tif gp.lockedm != nil {\n\t\t// Hands off own p to the locked m,\n\t\t// then blocks waiting for a new p.\n\t\tstartlockedm(gp)\n\t\tgoto top\n\t}\n\n\texecute(gp, inheritTime)\n}\n\n// dropg removes the association between m and the current goroutine m->curg (gp for short).\n// Typically a caller sets gp's status away from Grunning and then\n// immediately calls dropg to finish the job. The caller is also responsible\n// for arranging that gp will be restarted using ready at an\n// appropriate time. After calling dropg and arranging for gp to be\n// readied later, the caller can do other work but eventually should\n// call schedule to restart the scheduling of goroutines on this m.\nfunc dropg() {\n\t_g_ := getg()\n\n\tif _g_.m.lockedg == nil {\n\t\t_g_.m.curg.m = nil\n\t\t_g_.m.curg = nil\n\t}\n}\n\nfunc parkunlock_c(gp *g, lock unsafe.Pointer) bool {\n\tunlock((*mutex)(lock))\n\treturn true\n}\n\n// park continuation on g0.\nfunc park_m(gp *g) {\n\t_g_ := getg()\n\n\tif trace.enabled {\n\t\ttraceGoPark(_g_.m.waittraceev, _g_.m.waittraceskip, gp)\n\t}\n\n\tcasgstatus(gp, _Grunning, _Gwaiting)\n\tdropg()\n\n\tif _g_.m.waitunlockf != nil {\n\t\tfn := *(*func(*g, unsafe.Pointer) bool)(unsafe.Pointer(&_g_.m.waitunlockf))\n\t\tok := fn(gp, _g_.m.waitlock)\n\t\t_g_.m.waitunlockf = nil\n\t\t_g_.m.waitlock = nil\n\t\tif !ok {\n\t\t\tif trace.enabled {\n\t\t\t\ttraceGoUnpark(gp, 2)\n\t\t\t}\n\t\t\tcasgstatus(gp, _Gwaiting, _Grunnable)\n\t\t\texecute(gp, true) // Schedule it back, never 
returns.\n\t\t}\n\t}\n\tschedule()\n}\n\nfunc goschedImpl(gp *g) {\n\tstatus := readgstatus(gp)\n\tif status&^_Gscan != _Grunning {\n\t\tdumpgstatus(gp)\n\t\tthrow(\"bad g status\")\n\t}\n\tcasgstatus(gp, _Grunning, _Grunnable)\n\tdropg()\n\tlock(&sched.lock)\n\tglobrunqput(gp)\n\tunlock(&sched.lock)\n\n\tschedule()\n}\n\n// Gosched continuation on g0.\nfunc gosched_m(gp *g) {\n\tif trace.enabled {\n\t\ttraceGoSched()\n\t}\n\tgoschedImpl(gp)\n}\n\nfunc gopreempt_m(gp *g) {\n\tif trace.enabled {\n\t\ttraceGoPreempt()\n\t}\n\tgoschedImpl(gp)\n}\n\n// Finishes execution of the current goroutine.\nfunc goexit1() {\n\tif raceenabled {\n\t\tracegoend()\n\t}\n\tif trace.enabled {\n\t\ttraceGoEnd()\n\t}\n\tmcall(goexit0)\n}\n\n// goexit continuation on g0.\nfunc goexit0(gp *g) {\n\t_g_ := getg()\n\n\tcasgstatus(gp, _Grunning, _Gdead)\n\tif isSystemGoroutine(gp) {\n\t\tatomic.Xadd(&sched.ngsys, -1)\n\t}\n\tgp.m = nil\n\tgp.lockedm = nil\n\t_g_.m.lockedg = nil\n\tgp.paniconfault = false\n\tgp._defer = nil // should be true already but just in case.\n\tgp._panic = nil // non-nil for Goexit during panic. 
points at stack-allocated data.\n\tgp.writebuf = nil\n\tgp.waitreason = \"\"\n\tgp.param = nil\n\n\tdropg()\n\n\tif _g_.m.locked&^_LockExternal != 0 {\n\t\tprint(\"invalid m->locked = \", _g_.m.locked, \"\\n\")\n\t\tthrow(\"internal lockOSThread error\")\n\t}\n\t_g_.m.locked = 0\n\tgfput(_g_.m.p.ptr(), gp)\n\tschedule()\n}\n\n//go:nosplit\n//go:nowritebarrier\nfunc save(pc, sp uintptr) {\n\t_g_ := getg()\n\n\t_g_.sched.pc = pc\n\t_g_.sched.sp = sp\n\t_g_.sched.lr = 0\n\t_g_.sched.ret = 0\n\t_g_.sched.ctxt = nil\n\t_g_.sched.g = guintptr(unsafe.Pointer(_g_))\n}\n\n// The goroutine g is about to enter a system call.\n// Record that it's not using the cpu anymore.\n// This is called only from the go syscall library and cgocall,\n// not from the low-level system calls used by the runtime.\n//\n// Entersyscall cannot split the stack: the gosave must\n// make g->sched refer to the caller's stack segment, because\n// entersyscall is going to return immediately after.\n//\n// Nothing entersyscall calls can split the stack either.\n// We cannot safely move the stack during an active call to syscall,\n// because we do not know which of the uintptr arguments are\n// really pointers (back into the stack).\n// In practice, this means that we make the fast path run through\n// entersyscall doing no-split things, and the slow path has to use systemstack\n// to run bigger things on the system stack.\n//\n// reentersyscall is the entry point used by cgo callbacks, where explicitly\n// saved SP and PC are restored. This is needed when exitsyscall will be called\n// from a function further up in the call stack than the parent, as g->syscallsp\n// must always point to a valid stack frame. 
entersyscall below is the normal\n// entry point for syscalls, which obtains the SP and PC from the caller.\n//\n// Syscall tracing:\n// At the start of a syscall we emit traceGoSysCall to capture the stack trace.\n// If the syscall does not block, that is it, we do not emit any other events.\n// If the syscall blocks (that is, P is retaken), retaker emits traceGoSysBlock;\n// when syscall returns we emit traceGoSysExit and when the goroutine starts running\n// (potentially instantly, if exitsyscallfast returns true) we emit traceGoStart.\n// To ensure that traceGoSysExit is emitted strictly after traceGoSysBlock,\n// we remember current value of syscalltick in m (_g_.m.syscalltick = _g_.m.p.ptr().syscalltick),\n// whoever emits traceGoSysBlock increments p.syscalltick afterwards;\n// and we wait for the increment before emitting traceGoSysExit.\n// Note that the increment is done even if tracing is not enabled,\n// because tracing can be enabled in the middle of syscall. We don't want the wait to hang.\n//\n//go:nosplit\nfunc reentersyscall(pc, sp uintptr) {\n\t_g_ := getg()\n\n\t// Disable preemption because during this function g is in Gsyscall status,\n\t// but can have inconsistent g->sched, do not let GC observe it.\n\t_g_.m.locks++\n\n\t// Entersyscall must not call any function that might split/grow the stack.\n\t// (See details in comment above.)\n\t// Catch calls that might, by replacing the stack guard with something that\n\t// will trip any stack check and leaving a flag to tell newstack to die.\n\t_g_.stackguard0 = stackPreempt\n\t_g_.throwsplit = true\n\n\t// Leave SP around for GC and traceback.\n\tsave(pc, sp)\n\t_g_.syscallsp = sp\n\t_g_.syscallpc = pc\n\tcasgstatus(_g_, _Grunning, _Gsyscall)\n\tif _g_.syscallsp < _g_.stack.lo || _g_.stack.hi < _g_.syscallsp {\n\t\tsystemstack(func() {\n\t\t\tprint(\"entersyscall inconsistent \", hex(_g_.syscallsp), \" [\", hex(_g_.stack.lo), \",\", hex(_g_.stack.hi), 
\"]\\n\")\n\t\t\tthrow(\"entersyscall\")\n\t\t})\n\t}\n\n\tif trace.enabled {\n\t\tsystemstack(traceGoSysCall)\n\t\t// systemstack itself clobbers g.sched.{pc,sp} and we might\n\t\t// need them later when the G is genuinely blocked in a\n\t\t// syscall\n\t\tsave(pc, sp)\n\t}\n\n\tif atomic.Load(&sched.sysmonwait) != 0 { // TODO: fast atomic\n\t\tsystemstack(entersyscall_sysmon)\n\t\tsave(pc, sp)\n\t}\n\n\tif _g_.m.p.ptr().runSafePointFn != 0 {\n\t\t// runSafePointFn may stack split if run on this stack\n\t\tsystemstack(runSafePointFn)\n\t\tsave(pc, sp)\n\t}\n\n\t_g_.m.syscalltick = _g_.m.p.ptr().syscalltick\n\t_g_.sysblocktraced = true\n\t_g_.m.mcache = nil\n\t_g_.m.p.ptr().m = 0\n\tatomic.Store(&_g_.m.p.ptr().status, _Psyscall)\n\tif sched.gcwaiting != 0 {\n\t\tsystemstack(entersyscall_gcwait)\n\t\tsave(pc, sp)\n\t}\n\n\t// Goroutines must not split stacks in Gsyscall status (it would corrupt g->sched).\n\t// We set _StackGuard to StackPreempt so that first split stack check calls morestack.\n\t// Morestack detects this case and throws.\n\t_g_.stackguard0 = stackPreempt\n\t_g_.m.locks--\n}\n\n// Standard syscall entry used by the go syscall library and normal cgo calls.\n//go:nosplit\nfunc entersyscall(dummy int32) {\n\treentersyscall(getcallerpc(unsafe.Pointer(&dummy)), getcallersp(unsafe.Pointer(&dummy)))\n}\n\nfunc entersyscall_sysmon() {\n\tlock(&sched.lock)\n\tif atomic.Load(&sched.sysmonwait) != 0 {\n\t\tatomic.Store(&sched.sysmonwait, 0)\n\t\tnotewakeup(&sched.sysmonnote)\n\t}\n\tunlock(&sched.lock)\n}\n\nfunc entersyscall_gcwait() {\n\t_g_ := getg()\n\t_p_ := _g_.m.p.ptr()\n\n\tlock(&sched.lock)\n\tif sched.stopwait > 0 && atomic.Cas(&_p_.status, _Psyscall, _Pgcstop) {\n\t\tif trace.enabled {\n\t\t\ttraceGoSysBlock(_p_)\n\t\t\ttraceProcStop(_p_)\n\t\t}\n\t\t_p_.syscalltick++\n\t\tif sched.stopwait--; sched.stopwait == 0 {\n\t\t\tnotewakeup(&sched.stopnote)\n\t\t}\n\t}\n\tunlock(&sched.lock)\n}\n\n// The same as entersyscall(), but with a hint that the 
syscall is blocking.\n//go:nosplit\nfunc entersyscallblock(dummy int32) {\n\t_g_ := getg()\n\n\t_g_.m.locks++ // see comment in entersyscall\n\t_g_.throwsplit = true\n\t_g_.stackguard0 = stackPreempt // see comment in entersyscall\n\t_g_.m.syscalltick = _g_.m.p.ptr().syscalltick\n\t_g_.sysblocktraced = true\n\t_g_.m.p.ptr().syscalltick++\n\n\t// Leave SP around for GC and traceback.\n\tpc := getcallerpc(unsafe.Pointer(&dummy))\n\tsp := getcallersp(unsafe.Pointer(&dummy))\n\tsave(pc, sp)\n\t_g_.syscallsp = _g_.sched.sp\n\t_g_.syscallpc = _g_.sched.pc\n\tif _g_.syscallsp < _g_.stack.lo || _g_.stack.hi < _g_.syscallsp {\n\t\tsp1 := sp\n\t\tsp2 := _g_.sched.sp\n\t\tsp3 := _g_.syscallsp\n\t\tsystemstack(func() {\n\t\t\tprint(\"entersyscallblock inconsistent \", hex(sp1), \" \", hex(sp2), \" \", hex(sp3), \" [\", hex(_g_.stack.lo), \",\", hex(_g_.stack.hi), \"]\\n\")\n\t\t\tthrow(\"entersyscallblock\")\n\t\t})\n\t}\n\tcasgstatus(_g_, _Grunning, _Gsyscall)\n\tif _g_.syscallsp < _g_.stack.lo || _g_.stack.hi < _g_.syscallsp {\n\t\tsystemstack(func() {\n\t\t\tprint(\"entersyscallblock inconsistent \", hex(sp), \" \", hex(_g_.sched.sp), \" \", hex(_g_.syscallsp), \" [\", hex(_g_.stack.lo), \",\", hex(_g_.stack.hi), \"]\\n\")\n\t\t\tthrow(\"entersyscallblock\")\n\t\t})\n\t}\n\n\tsystemstack(entersyscallblock_handoff)\n\n\t// Resave for traceback during blocked call.\n\tsave(getcallerpc(unsafe.Pointer(&dummy)), getcallersp(unsafe.Pointer(&dummy)))\n\n\t_g_.m.locks--\n}\n\nfunc entersyscallblock_handoff() {\n\tif trace.enabled {\n\t\ttraceGoSysCall()\n\t\ttraceGoSysBlock(getg().m.p.ptr())\n\t}\n\thandoffp(releasep())\n}\n\n// The goroutine g exited its system call.\n// Arrange for it to run on a cpu again.\n// This is called only from the go syscall library, not\n// from the low-level system calls used by the runtime.\n//go:nosplit\nfunc exitsyscall(dummy int32) {\n\t_g_ := getg()\n\n\t_g_.m.locks++ // see comment in entersyscall\n\tif getcallersp(unsafe.Pointer(&dummy)) > _g_.syscallsp 
{\n\t\tthrow(\"exitsyscall: syscall frame is no longer valid\")\n\t}\n\n\t_g_.waitsince = 0\n\toldp := _g_.m.p.ptr()\n\tif exitsyscallfast() {\n\t\tif _g_.m.mcache == nil {\n\t\t\tthrow(\"lost mcache\")\n\t\t}\n\t\tif trace.enabled {\n\t\t\tif oldp != _g_.m.p.ptr() || _g_.m.syscalltick != _g_.m.p.ptr().syscalltick {\n\t\t\t\tsystemstack(traceGoStart)\n\t\t\t}\n\t\t}\n\t\t// There's a cpu for us, so we can run.\n\t\t_g_.m.p.ptr().syscalltick++\n\t\t// We need to cas the status and scan before resuming...\n\t\tcasgstatus(_g_, _Gsyscall, _Grunning)\n\n\t\t// Garbage collector isn't running (since we are),\n\t\t// so okay to clear syscallsp.\n\t\t_g_.syscallsp = 0\n\t\t_g_.m.locks--\n\t\tif _g_.preempt {\n\t\t\t// restore the preemption request in case we've cleared it in newstack\n\t\t\t_g_.stackguard0 = stackPreempt\n\t\t} else {\n\t\t\t// otherwise restore the real _StackGuard, we've spoiled it in entersyscall/entersyscallblock\n\t\t\t_g_.stackguard0 = _g_.stack.lo + _StackGuard\n\t\t}\n\t\t_g_.throwsplit = false\n\t\treturn\n\t}\n\n\t_g_.sysexitticks = 0\n\t_g_.sysexitseq = 0\n\tif trace.enabled {\n\t\t// Wait till traceGoSysBlock event is emitted.\n\t\t// This ensures consistency of the trace (the goroutine is started after it is blocked).\n\t\tfor oldp != nil && oldp.syscalltick == _g_.m.syscalltick {\n\t\t\tosyield()\n\t\t}\n\t\t// We can't trace syscall exit right now because we don't have a P.\n\t\t// Tracing code can invoke write barriers that cannot run without a P.\n\t\t// So instead we remember the syscall exit time and emit the event\n\t\t// in execute when we have a P.\n\t\t_g_.sysexitseq, _g_.sysexitticks = tracestamp()\n\t}\n\n\t_g_.m.locks--\n\n\t// Call the scheduler.\n\tmcall(exitsyscall0)\n\n\tif _g_.m.mcache == nil {\n\t\tthrow(\"lost mcache\")\n\t}\n\n\t// Scheduler returned, so we're allowed to run now.\n\t// Delete the syscallsp information that we left for\n\t// the garbage collector during the system call.\n\t// Must wait until now because 
until gosched returns\n\t// we don't know for sure that the garbage collector\n\t// is not running.\n\t_g_.syscallsp = 0\n\t_g_.m.p.ptr().syscalltick++\n\t_g_.throwsplit = false\n}\n\n//go:nosplit\nfunc exitsyscallfast() bool {\n\t_g_ := getg()\n\n\t// Freezetheworld sets stopwait but does not retake P's.\n\tif sched.stopwait == freezeStopWait {\n\t\t_g_.m.mcache = nil\n\t\t_g_.m.p = 0\n\t\treturn false\n\t}\n\n\t// Try to re-acquire the last P.\n\tif _g_.m.p != 0 && _g_.m.p.ptr().status == _Psyscall && atomic.Cas(&_g_.m.p.ptr().status, _Psyscall, _Prunning) {\n\t\t// There's a cpu for us, so we can run.\n\t\t_g_.m.mcache = _g_.m.p.ptr().mcache\n\t\t_g_.m.p.ptr().m.set(_g_.m)\n\t\tif _g_.m.syscalltick != _g_.m.p.ptr().syscalltick {\n\t\t\tif trace.enabled {\n\t\t\t\t// The p was retaken and then enter into syscall again (since _g_.m.syscalltick has changed).\n\t\t\t\t// traceGoSysBlock for this syscall was already emitted,\n\t\t\t\t// but here we effectively retake the p from the new syscall running on the same p.\n\t\t\t\tsystemstack(func() {\n\t\t\t\t\t// Denote blocking of the new syscall.\n\t\t\t\t\ttraceGoSysBlock(_g_.m.p.ptr())\n\t\t\t\t\t// Denote completion of the current syscall.\n\t\t\t\t\ttraceGoSysExit(tracestamp())\n\t\t\t\t})\n\t\t\t}\n\t\t\t_g_.m.p.ptr().syscalltick++\n\t\t}\n\t\treturn true\n\t}\n\n\t// Try to get any other idle P.\n\toldp := _g_.m.p.ptr()\n\t_g_.m.mcache = nil\n\t_g_.m.p = 0\n\tif sched.pidle != 0 {\n\t\tvar ok bool\n\t\tsystemstack(func() {\n\t\t\tok = exitsyscallfast_pidle()\n\t\t\tif ok && trace.enabled {\n\t\t\t\tif oldp != nil {\n\t\t\t\t\t// Wait till traceGoSysBlock event is emitted.\n\t\t\t\t\t// This ensures consistency of the trace (the goroutine is started after it is blocked).\n\t\t\t\t\tfor oldp.syscalltick == _g_.m.syscalltick {\n\t\t\t\t\t\tosyield()\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\ttraceGoSysExit(tracestamp())\n\t\t\t}\n\t\t})\n\t\tif ok {\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n\nfunc 
exitsyscallfast_pidle() bool {\n\tlock(&sched.lock)\n\t_p_ := pidleget()\n\tif _p_ != nil && atomic.Load(&sched.sysmonwait) != 0 {\n\t\tatomic.Store(&sched.sysmonwait, 0)\n\t\tnotewakeup(&sched.sysmonnote)\n\t}\n\tunlock(&sched.lock)\n\tif _p_ != nil {\n\t\tacquirep(_p_)\n\t\treturn true\n\t}\n\treturn false\n}\n\n// exitsyscall slow path on g0.\n// Failed to acquire P, enqueue gp as runnable.\nfunc exitsyscall0(gp *g) {\n\t_g_ := getg()\n\n\tcasgstatus(gp, _Gsyscall, _Grunnable)\n\tdropg()\n\tlock(&sched.lock)\n\t_p_ := pidleget()\n\tif _p_ == nil {\n\t\tglobrunqput(gp)\n\t} else if atomic.Load(&sched.sysmonwait) != 0 {\n\t\tatomic.Store(&sched.sysmonwait, 0)\n\t\tnotewakeup(&sched.sysmonnote)\n\t}\n\tunlock(&sched.lock)\n\tif _p_ != nil {\n\t\tacquirep(_p_)\n\t\texecute(gp, false) // Never returns.\n\t}\n\tif _g_.m.lockedg != nil {\n\t\t// Wait until another thread schedules gp and so m again.\n\t\tstoplockedm()\n\t\texecute(gp, false) // Never returns.\n\t}\n\tstopm()\n\tschedule() // Never returns.\n}\n\nfunc beforefork() {\n\tgp := getg().m.curg\n\n\t// Fork can hang if preempted with signals frequently enough (see issue 5517).\n\t// Ensure that we stay on the same M where we disable profiling.\n\tgp.m.locks++\n\tif gp.m.profilehz != 0 {\n\t\tresetcpuprofiler(0)\n\t}\n\n\t// This function is called before fork in syscall package.\n\t// Code between fork and exec must not allocate memory nor even try to grow stack.\n\t// Here we spoil g->_StackGuard to reliably detect any attempts to grow stack.\n\t// runtime_AfterFork will undo this in parent process, but not in child.\n\tgp.stackguard0 = stackFork\n}\n\n// Called from syscall package before fork.\n//go:linkname syscall_runtime_BeforeFork syscall.runtime_BeforeFork\n//go:nosplit\nfunc syscall_runtime_BeforeFork() {\n\tsystemstack(beforefork)\n}\n\nfunc afterfork() {\n\tgp := getg().m.curg\n\n\t// See the comment in beforefork.\n\tgp.stackguard0 = gp.stack.lo + _StackGuard\n\n\thz := sched.profilehz\n\tif hz != 
0 {\n\t\tresetcpuprofiler(hz)\n\t}\n\tgp.m.locks--\n}\n\n// Called from syscall package after fork in parent.\n//go:linkname syscall_runtime_AfterFork syscall.runtime_AfterFork\n//go:nosplit\nfunc syscall_runtime_AfterFork() {\n\tsystemstack(afterfork)\n}\n\n// Allocate a new g, with a stack big enough for stacksize bytes.\nfunc malg(stacksize int32) *g {\n\tnewg := new(g)\n\tif stacksize >= 0 {\n\t\tstacksize = round2(_StackSystem + stacksize)\n\t\tsystemstack(func() {\n\t\t\tnewg.stack, newg.stkbar = stackalloc(uint32(stacksize))\n\t\t})\n\t\tnewg.stackguard0 = newg.stack.lo + _StackGuard\n\t\tnewg.stackguard1 = ^uintptr(0)\n\t\tnewg.stackAlloc = uintptr(stacksize)\n\t}\n\treturn newg\n}\n\n// Create a new g running fn with siz bytes of arguments.\n// Put it on the queue of g's waiting to run.\n// The compiler turns a go statement into a call to this.\n// Cannot split the stack because it assumes that the arguments\n// are available sequentially after &fn; they would not be\n// copied if a stack split occurred.\n//go:nosplit\nfunc newproc(siz int32, fn *funcval) {\n\targp := add(unsafe.Pointer(&fn), sys.PtrSize)\n\tpc := getcallerpc(unsafe.Pointer(&siz))\n\tsystemstack(func() {\n\t\tnewproc1(fn, (*uint8)(argp), siz, 0, pc)\n\t})\n}\n\n// Create a new g running fn with narg bytes of arguments starting\n// at argp and returning nret bytes of results.  callerpc is the\n// address of the go statement that created this.  
The new g is put\n// on the queue of g's waiting to run.\nfunc newproc1(fn *funcval, argp *uint8, narg int32, nret int32, callerpc uintptr) *g {\n\t_g_ := getg()\n\n\tif fn == nil {\n\t\t_g_.m.throwing = -1 // do not dump full stacks\n\t\tthrow(\"go of nil func value\")\n\t}\n\t_g_.m.locks++ // disable preemption because it can be holding p in a local var\n\tsiz := narg + nret\n\tsiz = (siz + 7) &^ 7\n\n\t// We could allocate a larger initial stack if necessary.\n\t// Not worth it: this is almost always an error.\n\t// 4*sizeof(uintreg): extra space added below\n\t// sizeof(uintreg): caller's LR (arm) or return address (x86, in gostartcall).\n\tif siz >= _StackMin-4*sys.RegSize-sys.RegSize {\n\t\tthrow(\"newproc: function arguments too large for new goroutine\")\n\t}\n\n\t_p_ := _g_.m.p.ptr()\n\tnewg := gfget(_p_)\n\tif newg == nil {\n\t\tnewg = malg(_StackMin)\n\t\tcasgstatus(newg, _Gidle, _Gdead)\n\t\tallgadd(newg) // publishes with a g->status of Gdead so GC scanner doesn't look at uninitialized stack.\n\t}\n\tif newg.stack.hi == 0 {\n\t\tthrow(\"newproc1: newg missing stack\")\n\t}\n\n\tif readgstatus(newg) != _Gdead {\n\t\tthrow(\"newproc1: new g is not Gdead\")\n\t}\n\n\ttotalSize := 4*sys.RegSize + uintptr(siz) + sys.MinFrameSize // extra space in case of reads slightly beyond frame\n\ttotalSize += -totalSize & (sys.SpAlign - 1)                  // align to spAlign\n\tsp := newg.stack.hi - totalSize\n\tspArg := sp\n\tif usesLR {\n\t\t// caller's LR\n\t\t*(*unsafe.Pointer)(unsafe.Pointer(sp)) = nil\n\t\tprepGoExitFrame(sp)\n\t\tspArg += sys.MinFrameSize\n\t}\n\tmemmove(unsafe.Pointer(spArg), unsafe.Pointer(argp), uintptr(narg))\n\n\tmemclr(unsafe.Pointer(&newg.sched), unsafe.Sizeof(newg.sched))\n\tnewg.sched.sp = sp\n\tnewg.stktopsp = sp\n\tnewg.sched.pc = funcPC(goexit) + sys.PCQuantum // +PCQuantum so that previous instruction is in same function\n\tnewg.sched.g = guintptr(unsafe.Pointer(newg))\n\tgostartcallfn(&newg.sched, fn)\n\tnewg.gopc = 
callerpc\n\tnewg.startpc = fn.fn\n\tif isSystemGoroutine(newg) {\n\t\tatomic.Xadd(&sched.ngsys, +1)\n\t}\n\tcasgstatus(newg, _Gdead, _Grunnable)\n\n\tif _p_.goidcache == _p_.goidcacheend {\n\t\t// Sched.goidgen is the last allocated id,\n\t\t// this batch must be [sched.goidgen+1, sched.goidgen+GoidCacheBatch].\n\t\t// At startup sched.goidgen=0, so main goroutine receives goid=1.\n\t\t_p_.goidcache = atomic.Xadd64(&sched.goidgen, _GoidCacheBatch)\n\t\t_p_.goidcache -= _GoidCacheBatch - 1\n\t\t_p_.goidcacheend = _p_.goidcache + _GoidCacheBatch\n\t}\n\tnewg.goid = int64(_p_.goidcache)\n\t_p_.goidcache++\n\tif raceenabled {\n\t\tnewg.racectx = racegostart(callerpc)\n\t}\n\tif trace.enabled {\n\t\ttraceGoCreate(newg, newg.startpc)\n\t}\n\trunqput(_p_, newg, true)\n\n\tif atomic.Load(&sched.npidle) != 0 && atomic.Load(&sched.nmspinning) == 0 && unsafe.Pointer(fn.fn) != unsafe.Pointer(funcPC(main)) { // TODO: fast atomic\n\t\twakep()\n\t}\n\t_g_.m.locks--\n\tif _g_.m.locks == 0 && _g_.preempt { // restore the preemption request in case we've cleared it in newstack\n\t\t_g_.stackguard0 = stackPreempt\n\t}\n\treturn newg\n}\n\n// Put on gfree list.\n// If local list is too long, transfer a batch to the global list.\nfunc gfput(_p_ *p, gp *g) {\n\tif readgstatus(gp) != _Gdead {\n\t\tthrow(\"gfput: bad status (not Gdead)\")\n\t}\n\n\tstksize := gp.stackAlloc\n\n\tif stksize != _FixedStack {\n\t\t// non-standard stack size - free it.\n\t\tstackfree(gp.stack, gp.stackAlloc)\n\t\tgp.stack.lo = 0\n\t\tgp.stack.hi = 0\n\t\tgp.stackguard0 = 0\n\t\tgp.stkbar = nil\n\t\tgp.stkbarPos = 0\n\t} else {\n\t\t// Reset stack barriers.\n\t\tgp.stkbar = gp.stkbar[:0]\n\t\tgp.stkbarPos = 0\n\t}\n\n\tgp.schedlink.set(_p_.gfree)\n\t_p_.gfree = gp\n\t_p_.gfreecnt++\n\tif _p_.gfreecnt >= 64 {\n\t\tlock(&sched.gflock)\n\t\tfor _p_.gfreecnt >= 32 {\n\t\t\t_p_.gfreecnt--\n\t\t\tgp = _p_.gfree\n\t\t\t_p_.gfree = gp.schedlink.ptr()\n\t\t\tgp.schedlink.set(sched.gfree)\n\t\t\tsched.gfree = 
gp\n\t\t\tsched.ngfree++\n\t\t}\n\t\tunlock(&sched.gflock)\n\t}\n}\n\n// Get from gfree list.\n// If local list is empty, grab a batch from global list.\nfunc gfget(_p_ *p) *g {\nretry:\n\tgp := _p_.gfree\n\tif gp == nil && sched.gfree != nil {\n\t\tlock(&sched.gflock)\n\t\tfor _p_.gfreecnt < 32 && sched.gfree != nil {\n\t\t\t_p_.gfreecnt++\n\t\t\tgp = sched.gfree\n\t\t\tsched.gfree = gp.schedlink.ptr()\n\t\t\tsched.ngfree--\n\t\t\tgp.schedlink.set(_p_.gfree)\n\t\t\t_p_.gfree = gp\n\t\t}\n\t\tunlock(&sched.gflock)\n\t\tgoto retry\n\t}\n\tif gp != nil {\n\t\t_p_.gfree = gp.schedlink.ptr()\n\t\t_p_.gfreecnt--\n\t\tif gp.stack.lo == 0 {\n\t\t\t// Stack was deallocated in gfput.  Allocate a new one.\n\t\t\tsystemstack(func() {\n\t\t\t\tgp.stack, gp.stkbar = stackalloc(_FixedStack)\n\t\t\t})\n\t\t\tgp.stackguard0 = gp.stack.lo + _StackGuard\n\t\t\tgp.stackAlloc = _FixedStack\n\t\t} else {\n\t\t\tif raceenabled {\n\t\t\t\tracemalloc(unsafe.Pointer(gp.stack.lo), gp.stackAlloc)\n\t\t\t}\n\t\t\tif msanenabled {\n\t\t\t\tmsanmalloc(unsafe.Pointer(gp.stack.lo), gp.stackAlloc)\n\t\t\t}\n\t\t}\n\t}\n\treturn gp\n}\n\n// Purge all cached G's from gfree list to the global list.\nfunc gfpurge(_p_ *p) {\n\tlock(&sched.gflock)\n\tfor _p_.gfreecnt != 0 {\n\t\t_p_.gfreecnt--\n\t\tgp := _p_.gfree\n\t\t_p_.gfree = gp.schedlink.ptr()\n\t\tgp.schedlink.set(sched.gfree)\n\t\tsched.gfree = gp\n\t\tsched.ngfree++\n\t}\n\tunlock(&sched.gflock)\n}\n\n// Breakpoint executes a breakpoint trap.\nfunc Breakpoint() {\n\tbreakpoint()\n}\n\n// dolockOSThread is called by LockOSThread and lockOSThread below\n// after they modify m.locked. 
Do not allow preemption during this call,\n// or else the m might be different in this function than in the caller.\n//go:nosplit\nfunc dolockOSThread() {\n\t_g_ := getg()\n\t_g_.m.lockedg = _g_\n\t_g_.lockedm = _g_.m\n}\n\n//go:nosplit\n\n// LockOSThread wires the calling goroutine to its current operating system thread.\n// Until the calling goroutine exits or calls UnlockOSThread, it will always\n// execute in that thread, and no other goroutine can.\nfunc LockOSThread() {\n\tgetg().m.locked |= _LockExternal\n\tdolockOSThread()\n}\n\n//go:nosplit\nfunc lockOSThread() {\n\tgetg().m.locked += _LockInternal\n\tdolockOSThread()\n}\n\n// dounlockOSThread is called by UnlockOSThread and unlockOSThread below\n// after they update m->locked. Do not allow preemption during this call,\n// or else the m might be different in this function than in the caller.\n//go:nosplit\nfunc dounlockOSThread() {\n\t_g_ := getg()\n\tif _g_.m.locked != 0 {\n\t\treturn\n\t}\n\t_g_.m.lockedg = nil\n\t_g_.lockedm = nil\n}\n\n//go:nosplit\n\n// UnlockOSThread unwires the calling goroutine from its fixed operating system thread.\n// If the calling goroutine has not called LockOSThread, UnlockOSThread is a no-op.\nfunc UnlockOSThread() {\n\tgetg().m.locked &^= _LockExternal\n\tdounlockOSThread()\n}\n\n//go:nosplit\nfunc unlockOSThread() {\n\t_g_ := getg()\n\tif _g_.m.locked < _LockInternal {\n\t\tsystemstack(badunlockosthread)\n\t}\n\t_g_.m.locked -= _LockInternal\n\tdounlockOSThread()\n}\n\nfunc badunlockosthread() {\n\tthrow(\"runtime: internal error: misuse of lockOSThread/unlockOSThread\")\n}\n\nfunc gcount() int32 {\n\tn := int32(allglen) - sched.ngfree - int32(atomic.Load(&sched.ngsys))\n\tfor i := 0; ; i++ {\n\t\t_p_ := allp[i]\n\t\tif _p_ == nil {\n\t\t\tbreak\n\t\t}\n\t\tn -= _p_.gfreecnt\n\t}\n\n\t// All these variables can be changed concurrently, so the result can be inconsistent.\n\t// But at least the current goroutine is running.\n\tif n < 1 {\n\t\tn = 1\n\t}\n\treturn 
n\n}\n\nfunc mcount() int32 {\n\treturn sched.mcount\n}\n\nvar prof struct {\n\tlock uint32\n\thz   int32\n}\n\nfunc _System()       { _System() }\nfunc _ExternalCode() { _ExternalCode() }\nfunc _GC()           { _GC() }\n\n// Called if we receive a SIGPROF signal.\nfunc sigprof(pc, sp, lr uintptr, gp *g, mp *m) {\n\tif prof.hz == 0 {\n\t\treturn\n\t}\n\n\t// Profiling runs concurrently with GC, so it must not allocate.\n\tmp.mallocing++\n\n\t// Define that a \"user g\" is a user-created goroutine, and a \"system g\"\n\t// is one that is m->g0 or m->gsignal.\n\t//\n\t// We might be interrupted for profiling halfway through a\n\t// goroutine switch. The switch involves updating three (or four) values:\n\t// g, PC, SP, and (on arm) LR. The PC must be the last to be updated,\n\t// because once it gets updated the new g is running.\n\t//\n\t// When switching from a user g to a system g, LR is not considered live,\n\t// so the update only affects g, SP, and PC. Since PC must be last, there\n\t// the possible partial transitions in ordinary execution are (1) g alone is updated,\n\t// (2) both g and SP are updated, and (3) SP alone is updated.\n\t// If SP or g alone is updated, we can detect the partial transition by checking\n\t// whether the SP is within g's stack bounds. (We could also require that SP\n\t// be changed only after g, but the stack bounds check is needed by other\n\t// cases, so there is no need to impose an additional requirement.)\n\t//\n\t// There is one exceptional transition to a system g, not in ordinary execution.\n\t// When a signal arrives, the operating system starts the signal handler running\n\t// with an updated PC and SP. The g is updated last, at the beginning of the\n\t// handler. There are two reasons this is okay. 
First, until g is updated the\n\t// g and SP do not match, so the stack bounds check detects the partial transition.\n\t// Second, signal handlers currently run with signals disabled, so a profiling\n\t// signal cannot arrive during the handler.\n\t//\n\t// When switching from a system g to a user g, there are three possibilities.\n\t//\n\t// First, it may be that the g switch has no PC update, because the SP\n\t// either corresponds to a user g throughout (as in asmcgocall)\n\t// or because it has been arranged to look like a user g frame\n\t// (as in cgocallback_gofunc). In this case, since the entire\n\t// transition is a g+SP update, a partial transition updating just one of\n\t// those will be detected by the stack bounds check.\n\t//\n\t// Second, when returning from a signal handler, the PC and SP updates\n\t// are performed by the operating system in an atomic update, so the g\n\t// update must be done before them. The stack bounds check detects\n\t// the partial transition here, and (again) signal handlers run with signals\n\t// disabled, so a profiling signal cannot arrive then anyway.\n\t//\n\t// Third, the common case: it may be that the switch updates g, SP, and PC\n\t// separately. If the PC is within any of the functions that does this,\n\t// we don't ask for a traceback. C.F. the function setsSP for more about this.\n\t//\n\t// There is another apparently viable approach, recorded here in case\n\t// the \"PC within setsSP function\" check turns out not to be usable.\n\t// It would be possible to delay the update of either g or SP until immediately\n\t// before the PC update instruction. Then, because of the stack bounds check,\n\t// the only problematic interrupt point is just before that PC update instruction,\n\t// and the sigprof handler can detect that instruction and simulate stepping past\n\t// it in order to reach a consistent state. 
On ARM, the update of g must be made\n\t// in two places (in R10 and also in a TLS slot), so the delayed update would\n\t// need to be the SP update. The sigprof handler must read the instruction at\n\t// the current PC and if it was the known instruction (for example, JMP BX or\n\t// MOV R2, PC), use that other register in place of the PC value.\n\t// The biggest drawback to this solution is that it requires that we can tell\n\t// whether it's safe to read from the memory pointed at by PC.\n\t// In a correct program, we can test PC == nil and otherwise read,\n\t// but if a profiling signal happens at the instant that a program executes\n\t// a bad jump (before the program manages to handle the resulting fault)\n\t// the profiling handler could fault trying to read nonexistent memory.\n\t//\n\t// To recap, there are no constraints on the assembly being used for the\n\t// transition. We simply require that g and SP match and that the PC is not\n\t// in gogo.\n\ttraceback := true\n\tif gp == nil || sp < gp.stack.lo || gp.stack.hi < sp || setsSP(pc) {\n\t\ttraceback = false\n\t}\n\tvar stk [maxCPUProfStack]uintptr\n\tvar haveStackLock *g\n\tn := 0\n\tif mp.ncgo > 0 && mp.curg != nil && mp.curg.syscallpc != 0 && mp.curg.syscallsp != 0 {\n\t\t// Cgo, we can't unwind and symbolize arbitrary C code,\n\t\t// so instead collect Go stack that leads to the cgo call.\n\t\t// This is especially important on windows, since all syscalls are cgo calls.\n\t\tif gcTryLockStackBarriers(mp.curg) {\n\t\t\thaveStackLock = mp.curg\n\t\t\tn = gentraceback(mp.curg.syscallpc, mp.curg.syscallsp, 0, mp.curg, 0, &stk[0], len(stk), nil, nil, 0)\n\t\t}\n\t} else if traceback {\n\t\tvar flags uint = _TraceTrap\n\t\tif gp.m.curg != nil && gcTryLockStackBarriers(gp.m.curg) {\n\t\t\t// It's safe to traceback the user stack.\n\t\t\thaveStackLock = gp.m.curg\n\t\t\tflags |= _TraceJumpStack\n\t\t}\n\t\t// Traceback is safe if we're on the system stack (if\n\t\t// necessary, flags will stop it before 
switching to\n\t\t// the user stack), or if we locked the user stack.\n\t\tif gp != gp.m.curg || haveStackLock != nil {\n\t\t\tn = gentraceback(pc, sp, lr, gp, 0, &stk[0], len(stk), nil, nil, flags)\n\t\t}\n\t}\n\tif haveStackLock != nil {\n\t\tgcUnlockStackBarriers(haveStackLock)\n\t}\n\n\tif n <= 0 {\n\t\t// Normal traceback is impossible or has failed.\n\t\t// See if it falls into several common cases.\n\t\tn = 0\n\t\tif GOOS == \"windows\" && mp.libcallg != 0 && mp.libcallpc != 0 && mp.libcallsp != 0 {\n\t\t\t// Libcall, i.e. runtime syscall on windows.\n\t\t\t// Collect Go stack that leads to the call.\n\t\t\tif gcTryLockStackBarriers(mp.libcallg.ptr()) {\n\t\t\t\tn = gentraceback(mp.libcallpc, mp.libcallsp, 0, mp.libcallg.ptr(), 0, &stk[0], len(stk), nil, nil, 0)\n\t\t\t\tgcUnlockStackBarriers(mp.libcallg.ptr())\n\t\t\t}\n\t\t}\n\t\tif n == 0 {\n\t\t\t// If all of the above has failed, account it against abstract \"System\" or \"GC\".\n\t\t\tn = 2\n\t\t\t// \"ExternalCode\" is better than \"etext\".\n\t\t\tif pc > firstmoduledata.etext {\n\t\t\t\tpc = funcPC(_ExternalCode) + sys.PCQuantum\n\t\t\t}\n\t\t\tstk[0] = pc\n\t\t\tif mp.preemptoff != \"\" || mp.helpgc != 0 {\n\t\t\t\tstk[1] = funcPC(_GC) + sys.PCQuantum\n\t\t\t} else {\n\t\t\t\tstk[1] = funcPC(_System) + sys.PCQuantum\n\t\t\t}\n\t\t}\n\t}\n\n\tif prof.hz != 0 {\n\t\t// Simple cas-lock to coordinate with setcpuprofilerate.\n\t\tfor !atomic.Cas(&prof.lock, 0, 1) {\n\t\t\tosyield()\n\t\t}\n\t\tif prof.hz != 0 {\n\t\t\tcpuprof.add(stk[:n])\n\t\t}\n\t\tatomic.Store(&prof.lock, 0)\n\t}\n\tmp.mallocing--\n}\n\n// Reports whether a function will set the SP\n// to an absolute value. 
Important that\n// we don't traceback when these are at the bottom\n// of the stack since we can't be sure that we will\n// find the caller.\n//\n// If the function is not on the bottom of the stack\n// we assume that it will have set it up so that traceback will be consistent,\n// either by being a traceback terminating function\n// or putting one on the stack at the right offset.\nfunc setsSP(pc uintptr) bool {\n\tf := findfunc(pc)\n\tif f == nil {\n\t\t// couldn't find the function for this PC,\n\t\t// so assume the worst and stop traceback\n\t\treturn true\n\t}\n\tswitch f.entry {\n\tcase gogoPC, systemstackPC, mcallPC, morestackPC:\n\t\treturn true\n\t}\n\treturn false\n}\n\n// Arrange to call fn with a traceback hz times a second.\nfunc setcpuprofilerate_m(hz int32) {\n\t// Force sane arguments.\n\tif hz < 0 {\n\t\thz = 0\n\t}\n\n\t// Disable preemption, otherwise we can be rescheduled to another thread\n\t// that has profiling enabled.\n\t_g_ := getg()\n\t_g_.m.locks++\n\n\t// Stop profiler on this thread so that it is safe to lock prof.\n\t// if a profiling signal came in while we had prof locked,\n\t// it would deadlock.\n\tresetcpuprofiler(0)\n\n\tfor !atomic.Cas(&prof.lock, 0, 1) {\n\t\tosyield()\n\t}\n\tprof.hz = hz\n\tatomic.Store(&prof.lock, 0)\n\n\tlock(&sched.lock)\n\tsched.profilehz = hz\n\tunlock(&sched.lock)\n\n\tif hz != 0 {\n\t\tresetcpuprofiler(hz)\n\t}\n\n\t_g_.m.locks--\n}\n\n// Change number of processors.  
The world is stopped, sched is locked.\n// gcworkbufs are not being modified by either the GC or\n// the write barrier code.\n// Returns list of Ps with local work, they need to be scheduled by the caller.\nfunc procresize(nprocs int32) *p {\n\told := gomaxprocs\n\tif old < 0 || old > _MaxGomaxprocs || nprocs <= 0 || nprocs > _MaxGomaxprocs {\n\t\tthrow(\"procresize: invalid arg\")\n\t}\n\tif trace.enabled {\n\t\ttraceGomaxprocs(nprocs)\n\t}\n\n\t// update statistics\n\tnow := nanotime()\n\tif sched.procresizetime != 0 {\n\t\tsched.totaltime += int64(old) * (now - sched.procresizetime)\n\t}\n\tsched.procresizetime = now\n\n\t// initialize new P's\n\tfor i := int32(0); i < nprocs; i++ {\n\t\tpp := allp[i]\n\t\tif pp == nil {\n\t\t\tpp = new(p)\n\t\t\tpp.id = i\n\t\t\tpp.status = _Pgcstop\n\t\t\tpp.sudogcache = pp.sudogbuf[:0]\n\t\t\tfor i := range pp.deferpool {\n\t\t\t\tpp.deferpool[i] = pp.deferpoolbuf[i][:0]\n\t\t\t}\n\t\t\tatomicstorep(unsafe.Pointer(&allp[i]), unsafe.Pointer(pp))\n\t\t}\n\t\tif pp.mcache == nil {\n\t\t\tif old == 0 && i == 0 {\n\t\t\t\tif getg().m.mcache == nil {\n\t\t\t\t\tthrow(\"missing mcache?\")\n\t\t\t\t}\n\t\t\t\tpp.mcache = getg().m.mcache // bootstrap\n\t\t\t} else {\n\t\t\t\tpp.mcache = allocmcache()\n\t\t\t}\n\t\t}\n\t}\n\n\t// free unused P's\n\tfor i := nprocs; i < old; i++ {\n\t\tp := allp[i]\n\t\tif trace.enabled {\n\t\t\tif p == getg().m.p.ptr() {\n\t\t\t\t// moving to p[0], pretend that we were descheduled\n\t\t\t\t// and then scheduled again to keep the trace sane.\n\t\t\t\ttraceGoSched()\n\t\t\t\ttraceProcStop(p)\n\t\t\t}\n\t\t}\n\t\t// move all runnable goroutines to the global queue\n\t\tfor p.runqhead != p.runqtail {\n\t\t\t// pop from tail of local queue\n\t\t\tp.runqtail--\n\t\t\tgp := p.runq[p.runqtail%uint32(len(p.runq))].ptr()\n\t\t\t// push onto head of global queue\n\t\t\tglobrunqputhead(gp)\n\t\t}\n\t\tif p.runnext != 0 {\n\t\t\tglobrunqputhead(p.runnext.ptr())\n\t\t\tp.runnext = 0\n\t\t}\n\t\t// if there's a 
background worker, make it runnable and put\n\t\t// it on the global queue so it can clean itself up\n\t\tif gp := p.gcBgMarkWorker.ptr(); gp != nil {\n\t\t\tcasgstatus(gp, _Gwaiting, _Grunnable)\n\t\t\tif trace.enabled {\n\t\t\t\ttraceGoUnpark(gp, 0)\n\t\t\t}\n\t\t\tglobrunqput(gp)\n\t\t\t// This assignment doesn't race because the\n\t\t\t// world is stopped.\n\t\t\tp.gcBgMarkWorker.set(nil)\n\t\t}\n\t\tfor i := range p.sudogbuf {\n\t\t\tp.sudogbuf[i] = nil\n\t\t}\n\t\tp.sudogcache = p.sudogbuf[:0]\n\t\tfor i := range p.deferpool {\n\t\t\tfor j := range p.deferpoolbuf[i] {\n\t\t\t\tp.deferpoolbuf[i][j] = nil\n\t\t\t}\n\t\t\tp.deferpool[i] = p.deferpoolbuf[i][:0]\n\t\t}\n\t\tfreemcache(p.mcache)\n\t\tp.mcache = nil\n\t\tgfpurge(p)\n\t\ttraceProcFree(p)\n\t\tp.status = _Pdead\n\t\t// can't free P itself because it can be referenced by an M in syscall\n\t}\n\n\t_g_ := getg()\n\tif _g_.m.p != 0 && _g_.m.p.ptr().id < nprocs {\n\t\t// continue to use the current P\n\t\t_g_.m.p.ptr().status = _Prunning\n\t} else {\n\t\t// release the current P and acquire allp[0]\n\t\tif _g_.m.p != 0 {\n\t\t\t_g_.m.p.ptr().m = 0\n\t\t}\n\t\t_g_.m.p = 0\n\t\t_g_.m.mcache = nil\n\t\tp := allp[0]\n\t\tp.m = 0\n\t\tp.status = _Pidle\n\t\tacquirep(p)\n\t\tif trace.enabled {\n\t\t\ttraceGoStart()\n\t\t}\n\t}\n\tvar runnablePs *p\n\tfor i := nprocs - 1; i >= 0; i-- {\n\t\tp := allp[i]\n\t\tif _g_.m.p.ptr() == p {\n\t\t\tcontinue\n\t\t}\n\t\tp.status = _Pidle\n\t\tif runqempty(p) {\n\t\t\tpidleput(p)\n\t\t} else {\n\t\t\tp.m.set(mget())\n\t\t\tp.link.set(runnablePs)\n\t\t\trunnablePs = p\n\t\t}\n\t}\n\tvar int32p *int32 = &gomaxprocs // make compiler check that gomaxprocs is an int32\n\tatomic.Store((*uint32)(unsafe.Pointer(int32p)), uint32(nprocs))\n\treturn runnablePs\n}\n\n// Associate p and the current m.\nfunc acquirep(_p_ *p) {\n\tacquirep1(_p_)\n\n\t// have p; write barriers now allowed\n\t_g_ := getg()\n\t_g_.m.mcache = _p_.mcache\n\n\tif trace.enabled 
{\n\t\ttraceProcStart()\n\t}\n}\n\n// May run during STW, so write barriers are not allowed.\n//go:nowritebarrier\nfunc acquirep1(_p_ *p) {\n\t_g_ := getg()\n\n\tif _g_.m.p != 0 || _g_.m.mcache != nil {\n\t\tthrow(\"acquirep: already in go\")\n\t}\n\tif _p_.m != 0 || _p_.status != _Pidle {\n\t\tid := int32(0)\n\t\tif _p_.m != 0 {\n\t\t\tid = _p_.m.ptr().id\n\t\t}\n\t\tprint(\"acquirep: p->m=\", _p_.m, \"(\", id, \") p->status=\", _p_.status, \"\\n\")\n\t\tthrow(\"acquirep: invalid p state\")\n\t}\n\t_g_.m.p.set(_p_)\n\t_p_.m.set(_g_.m)\n\t_p_.status = _Prunning\n}\n\n// Disassociate p and the current m.\nfunc releasep() *p {\n\t_g_ := getg()\n\n\tif _g_.m.p == 0 || _g_.m.mcache == nil {\n\t\tthrow(\"releasep: invalid arg\")\n\t}\n\t_p_ := _g_.m.p.ptr()\n\tif _p_.m.ptr() != _g_.m || _p_.mcache != _g_.m.mcache || _p_.status != _Prunning {\n\t\tprint(\"releasep: m=\", _g_.m, \" m->p=\", _g_.m.p.ptr(), \" p->m=\", _p_.m, \" m->mcache=\", _g_.m.mcache, \" p->mcache=\", _p_.mcache, \" p->status=\", _p_.status, \"\\n\")\n\t\tthrow(\"releasep: invalid p state\")\n\t}\n\tif trace.enabled {\n\t\ttraceProcStop(_g_.m.p.ptr())\n\t}\n\t_g_.m.p = 0\n\t_g_.m.mcache = nil\n\t_p_.m = 0\n\t_p_.status = _Pidle\n\treturn _p_\n}\n\nfunc incidlelocked(v int32) {\n\tlock(&sched.lock)\n\tsched.nmidlelocked += v\n\tif v > 0 {\n\t\tcheckdead()\n\t}\n\tunlock(&sched.lock)\n}\n\n// Check for deadlock situation.\n// The check is based on number of running M's, if 0 -> deadlock.\nfunc checkdead() {\n\t// For -buildmode=c-shared or -buildmode=c-archive it's OK if\n\t// there are no running goroutines.  
The calling program is\n\t// assumed to be running.\n\tif islibrary || isarchive {\n\t\treturn\n\t}\n\n\t// If we are dying because of a signal caught on an already idle thread,\n\t// freezetheworld will cause all running threads to block.\n\t// And runtime will essentially enter into deadlock state,\n\t// except that there is a thread that will call exit soon.\n\tif panicking > 0 {\n\t\treturn\n\t}\n\n\t// -1 for sysmon\n\trun := sched.mcount - sched.nmidle - sched.nmidlelocked - 1\n\tif run > 0 {\n\t\treturn\n\t}\n\tif run < 0 {\n\t\tprint(\"runtime: checkdead: nmidle=\", sched.nmidle, \" nmidlelocked=\", sched.nmidlelocked, \" mcount=\", sched.mcount, \"\\n\")\n\t\tthrow(\"checkdead: inconsistent counts\")\n\t}\n\n\tgrunning := 0\n\tlock(&allglock)\n\tfor i := 0; i < len(allgs); i++ {\n\t\tgp := allgs[i]\n\t\tif isSystemGoroutine(gp) {\n\t\t\tcontinue\n\t\t}\n\t\ts := readgstatus(gp)\n\t\tswitch s &^ _Gscan {\n\t\tcase _Gwaiting:\n\t\t\tgrunning++\n\t\tcase _Grunnable,\n\t\t\t_Grunning,\n\t\t\t_Gsyscall:\n\t\t\tunlock(&allglock)\n\t\t\tprint(\"runtime: checkdead: find g \", gp.goid, \" in status \", s, \"\\n\")\n\t\t\tthrow(\"checkdead: runnable g\")\n\t\t}\n\t}\n\tunlock(&allglock)\n\tif grunning == 0 { // possible if main goroutine calls runtime·Goexit()\n\t\tthrow(\"no goroutines (main called runtime.Goexit) - deadlock!\")\n\t}\n\n\t// Maybe jump time forward for playground.\n\tgp := timejump()\n\tif gp != nil {\n\t\tcasgstatus(gp, _Gwaiting, _Grunnable)\n\t\tglobrunqput(gp)\n\t\t_p_ := pidleget()\n\t\tif _p_ == nil {\n\t\t\tthrow(\"checkdead: no p for timer\")\n\t\t}\n\t\tmp := mget()\n\t\tif mp == nil {\n\t\t\t// There should always be a free M since\n\t\t\t// nothing is running.\n\t\t\tthrow(\"checkdead: no m for timer\")\n\t\t}\n\t\tmp.nextp.set(_p_)\n\t\tnotewakeup(&mp.park)\n\t\treturn\n\t}\n\n\tgetg().m.throwing = -1 // do not dump full stacks\n\tthrow(\"all goroutines are asleep - deadlock!\")\n}\n\n// forcegcperiod is the maximum time in nanoseconds 
between garbage\n// collections. If we go this long without a garbage collection, one\n// is forced to run.\n//\n// This is a variable for testing purposes. It normally doesn't change.\nvar forcegcperiod int64 = 2 * 60 * 1e9\n\n// Always runs without a P, so write barriers are not allowed.\n//\n//go:nowritebarrierrec\nfunc sysmon() {\n\t// If a heap span goes unused for 5 minutes after a garbage collection,\n\t// we hand it back to the operating system.\n\tscavengelimit := int64(5 * 60 * 1e9)\n\n\tif debug.scavenge > 0 {\n\t\t// Scavenge-a-lot for testing.\n\t\tforcegcperiod = 10 * 1e6\n\t\tscavengelimit = 20 * 1e6\n\t}\n\n\tlastscavenge := nanotime()\n\tnscavenge := 0\n\n\tlasttrace := int64(0)\n\tidle := 0 // how many cycles in succession we have not woken anybody up\n\tdelay := uint32(0)\n\tfor {\n\t\tif idle == 0 { // start with 20us sleep...\n\t\t\tdelay = 20\n\t\t} else if idle > 50 { // start doubling the sleep after 1ms...\n\t\t\tdelay *= 2\n\t\t}\n\t\tif delay > 10*1000 { // up to 10ms\n\t\t\tdelay = 10 * 1000\n\t\t}\n\t\tusleep(delay)\n\t\tif debug.schedtrace <= 0 && (sched.gcwaiting != 0 || atomic.Load(&sched.npidle) == uint32(gomaxprocs)) { // TODO: fast atomic\n\t\t\tlock(&sched.lock)\n\t\t\tif atomic.Load(&sched.gcwaiting) != 0 || atomic.Load(&sched.npidle) == uint32(gomaxprocs) {\n\t\t\t\tatomic.Store(&sched.sysmonwait, 1)\n\t\t\t\tunlock(&sched.lock)\n\t\t\t\t// Make wake-up period small enough\n\t\t\t\t// for the sampling to be correct.\n\t\t\t\tmaxsleep := forcegcperiod / 2\n\t\t\t\tif scavengelimit < forcegcperiod {\n\t\t\t\t\tmaxsleep = scavengelimit / 2\n\t\t\t\t}\n\t\t\t\tnotetsleep(&sched.sysmonnote, maxsleep)\n\t\t\t\tlock(&sched.lock)\n\t\t\t\tatomic.Store(&sched.sysmonwait, 0)\n\t\t\t\tnoteclear(&sched.sysmonnote)\n\t\t\t\tidle = 0\n\t\t\t\tdelay = 20\n\t\t\t}\n\t\t\tunlock(&sched.lock)\n\t\t}\n\t\t// poll network if not polled for more than 10ms\n\t\tlastpoll := int64(atomic.Load64(&sched.lastpoll))\n\t\tnow := nanotime()\n\t\tunixnow := 
unixnanotime()\n\t\tif lastpoll != 0 && lastpoll+10*1000*1000 < now {\n\t\t\tatomic.Cas64(&sched.lastpoll, uint64(lastpoll), uint64(now))\n\t\t\tgp := netpoll(false) // non-blocking - returns list of goroutines\n\t\t\tif gp != nil {\n\t\t\t\t// Need to decrement number of idle locked M's\n\t\t\t\t// (pretending that one more is running) before injectglist.\n\t\t\t\t// Otherwise it can lead to the following situation:\n\t\t\t\t// injectglist grabs all P's but before it starts M's to run the P's,\n\t\t\t\t// another M returns from syscall, finishes running its G,\n\t\t\t\t// observes that there is no work to do and no other running M's\n\t\t\t\t// and reports deadlock.\n\t\t\t\tincidlelocked(-1)\n\t\t\t\tinjectglist(gp)\n\t\t\t\tincidlelocked(1)\n\t\t\t}\n\t\t}\n\t\t// retake P's blocked in syscalls\n\t\t// and preempt long running G's\n\t\tif retake(now) != 0 {\n\t\t\tidle = 0\n\t\t} else {\n\t\t\tidle++\n\t\t}\n\t\t// check if we need to force a GC\n\t\tlastgc := int64(atomic.Load64(&memstats.last_gc))\n\t\tif gcphase == _GCoff && lastgc != 0 && unixnow-lastgc > forcegcperiod && atomic.Load(&forcegc.idle) != 0 {\n\t\t\tlock(&forcegc.lock)\n\t\t\tforcegc.idle = 0\n\t\t\tforcegc.g.schedlink = 0\n\t\t\tinjectglist(forcegc.g)\n\t\t\tunlock(&forcegc.lock)\n\t\t}\n\t\t// scavenge heap once in a while\n\t\tif lastscavenge+scavengelimit/2 < now {\n\t\t\tmheap_.scavenge(int32(nscavenge), uint64(now), uint64(scavengelimit))\n\t\t\tlastscavenge = now\n\t\t\tnscavenge++\n\t\t}\n\t\tif debug.schedtrace > 0 && lasttrace+int64(debug.schedtrace)*1000000 <= now {\n\t\t\tlasttrace = now\n\t\t\tschedtrace(debug.scheddetail > 0)\n\t\t}\n\t}\n}\n\nvar pdesc [_MaxGomaxprocs]struct {\n\tschedtick   uint32\n\tschedwhen   int64\n\tsyscalltick uint32\n\tsyscallwhen int64\n}\n\n// forcePreemptNS is the time slice given to a G before it is\n// preempted.\nconst forcePreemptNS = 10 * 1000 * 1000 // 10ms\n\nfunc retake(now int64) uint32 {\n\tn := 0\n\tfor i := int32(0); i < gomaxprocs; i++ 
{\n\t\t_p_ := allp[i]\n\t\tif _p_ == nil {\n\t\t\tcontinue\n\t\t}\n\t\tpd := &pdesc[i]\n\t\ts := _p_.status\n\t\tif s == _Psyscall {\n\t\t\t// Retake P from syscall if it's there for more than 1 sysmon tick (at least 20us).\n\t\t\tt := int64(_p_.syscalltick)\n\t\t\tif int64(pd.syscalltick) != t {\n\t\t\t\tpd.syscalltick = uint32(t)\n\t\t\t\tpd.syscallwhen = now\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\t// On the one hand we don't want to retake Ps if there is no other work to do,\n\t\t\t// but on the other hand we want to retake them eventually\n\t\t\t// because they can prevent the sysmon thread from deep sleep.\n\t\t\tif runqempty(_p_) && atomic.Load(&sched.nmspinning)+atomic.Load(&sched.npidle) > 0 && pd.syscallwhen+10*1000*1000 > now {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\t// Need to decrement number of idle locked M's\n\t\t\t// (pretending that one more is running) before the CAS.\n\t\t\t// Otherwise the M from which we retake can exit the syscall,\n\t\t\t// increment nmidle and report deadlock.\n\t\t\tincidlelocked(-1)\n\t\t\tif atomic.Cas(&_p_.status, s, _Pidle) {\n\t\t\t\tif trace.enabled {\n\t\t\t\t\ttraceGoSysBlock(_p_)\n\t\t\t\t\ttraceProcStop(_p_)\n\t\t\t\t}\n\t\t\t\tn++\n\t\t\t\t_p_.syscalltick++\n\t\t\t\thandoffp(_p_)\n\t\t\t}\n\t\t\tincidlelocked(1)\n\t\t} else if s == _Prunning {\n\t\t\t// Preempt G if it's running for too long.\n\t\t\tt := int64(_p_.schedtick)\n\t\t\tif int64(pd.schedtick) != t {\n\t\t\t\tpd.schedtick = uint32(t)\n\t\t\t\tpd.schedwhen = now\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif pd.schedwhen+forcePreemptNS > now {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tpreemptone(_p_)\n\t\t}\n\t}\n\treturn uint32(n)\n}\n\n// Tell all goroutines that they have been preempted and they should stop.\n// This function is purely best-effort.  
It can fail to inform a goroutine if a\n// processor just started running it.\n// No locks need to be held.\n// Returns true if preemption request was issued to at least one goroutine.\nfunc preemptall() bool {\n\tres := false\n\tfor i := int32(0); i < gomaxprocs; i++ {\n\t\t_p_ := allp[i]\n\t\tif _p_ == nil || _p_.status != _Prunning {\n\t\t\tcontinue\n\t\t}\n\t\tif preemptone(_p_) {\n\t\t\tres = true\n\t\t}\n\t}\n\treturn res\n}\n\n// Tell the goroutine running on processor P to stop.\n// This function is purely best-effort.  It can incorrectly fail to inform the\n// goroutine.  It can inform the wrong goroutine.  Even if it informs the\n// correct goroutine, that goroutine might ignore the request if it is\n// simultaneously executing newstack.\n// No lock needs to be held.\n// Returns true if preemption request was issued.\n// The actual preemption will happen at some point in the future\n// and will be indicated by the gp->status no longer being\n// Grunning\nfunc preemptone(_p_ *p) bool {\n\tmp := _p_.m.ptr()\n\tif mp == nil || mp == getg().m {\n\t\treturn false\n\t}\n\tgp := mp.curg\n\tif gp == nil || gp == mp.g0 {\n\t\treturn false\n\t}\n\n\tgp.preempt = true\n\n\t// Every call in a goroutine checks for stack overflow by\n\t// comparing the current stack pointer to gp->stackguard0.\n\t// Setting gp->stackguard0 to StackPreempt folds\n\t// preemption into the normal stack overflow check.\n\tgp.stackguard0 = stackPreempt\n\treturn true\n}\n\nvar starttime int64\n\nfunc schedtrace(detailed bool) {\n\tnow := nanotime()\n\tif starttime == 0 {\n\t\tstarttime = now\n\t}\n\n\tlock(&sched.lock)\n\tprint(\"SCHED \", (now-starttime)/1e6, \"ms: gomaxprocs=\", gomaxprocs, \" idleprocs=\", sched.npidle, \" threads=\", sched.mcount, \" spinningthreads=\", sched.nmspinning, \" idlethreads=\", sched.nmidle, \" runqueue=\", sched.runqsize)\n\tif detailed {\n\t\tprint(\" gcwaiting=\", sched.gcwaiting, \" nmidlelocked=\", sched.nmidlelocked, \" stopwait=\", 
sched.stopwait, \" sysmonwait=\", sched.sysmonwait, \"\\n\")\n\t}\n\t// We must be careful while reading data from P's, M's and G's.\n\t// Even if we hold schedlock, most data can be changed concurrently.\n\t// E.g. (p->m ? p->m->id : -1) can crash if p->m changes from non-nil to nil.\n\tfor i := int32(0); i < gomaxprocs; i++ {\n\t\t_p_ := allp[i]\n\t\tif _p_ == nil {\n\t\t\tcontinue\n\t\t}\n\t\tmp := _p_.m.ptr()\n\t\th := atomic.Load(&_p_.runqhead)\n\t\tt := atomic.Load(&_p_.runqtail)\n\t\tif detailed {\n\t\t\tid := int32(-1)\n\t\t\tif mp != nil {\n\t\t\t\tid = mp.id\n\t\t\t}\n\t\t\tprint(\"  P\", i, \": status=\", _p_.status, \" schedtick=\", _p_.schedtick, \" syscalltick=\", _p_.syscalltick, \" m=\", id, \" runqsize=\", t-h, \" gfreecnt=\", _p_.gfreecnt, \"\\n\")\n\t\t} else {\n\t\t\t// In non-detailed mode format lengths of per-P run queues as:\n\t\t\t// [len1 len2 len3 len4]\n\t\t\tprint(\" \")\n\t\t\tif i == 0 {\n\t\t\t\tprint(\"[\")\n\t\t\t}\n\t\t\tprint(t - h)\n\t\t\tif i == gomaxprocs-1 {\n\t\t\t\tprint(\"]\\n\")\n\t\t\t}\n\t\t}\n\t}\n\n\tif !detailed {\n\t\tunlock(&sched.lock)\n\t\treturn\n\t}\n\n\tfor mp := allm; mp != nil; mp = mp.alllink {\n\t\t_p_ := mp.p.ptr()\n\t\tgp := mp.curg\n\t\tlockedg := mp.lockedg\n\t\tid1 := int32(-1)\n\t\tif _p_ != nil {\n\t\t\tid1 = _p_.id\n\t\t}\n\t\tid2 := int64(-1)\n\t\tif gp != nil {\n\t\t\tid2 = gp.goid\n\t\t}\n\t\tid3 := int64(-1)\n\t\tif lockedg != nil {\n\t\t\tid3 = lockedg.goid\n\t\t}\n\t\tprint(\"  M\", mp.id, \": p=\", id1, \" curg=\", id2, \" mallocing=\", mp.mallocing, \" throwing=\", mp.throwing, \" preemptoff=\", mp.preemptoff, \"\"+\" locks=\", mp.locks, \" dying=\", mp.dying, \" helpgc=\", mp.helpgc, \" spinning=\", mp.spinning, \" blocked=\", getg().m.blocked, \" lockedg=\", id3, \"\\n\")\n\t}\n\n\tlock(&allglock)\n\tfor gi := 0; gi < len(allgs); gi++ {\n\t\tgp := allgs[gi]\n\t\tmp := gp.m\n\t\tlockedm := gp.lockedm\n\t\tid1 := int32(-1)\n\t\tif mp != nil {\n\t\t\tid1 = mp.id\n\t\t}\n\t\tid2 := 
int32(-1)\n\t\tif lockedm != nil {\n\t\t\tid2 = lockedm.id\n\t\t}\n\t\tprint(\"  G\", gp.goid, \": status=\", readgstatus(gp), \"(\", gp.waitreason, \") m=\", id1, \" lockedm=\", id2, \"\\n\")\n\t}\n\tunlock(&allglock)\n\tunlock(&sched.lock)\n}\n\n// Put mp on midle list.\n// Sched must be locked.\n// May run during STW, so write barriers are not allowed.\n//go:nowritebarrier\nfunc mput(mp *m) {\n\tmp.schedlink = sched.midle\n\tsched.midle.set(mp)\n\tsched.nmidle++\n\tcheckdead()\n}\n\n// Try to get an m from midle list.\n// Sched must be locked.\n// May run during STW, so write barriers are not allowed.\n//go:nowritebarrier\nfunc mget() *m {\n\tmp := sched.midle.ptr()\n\tif mp != nil {\n\t\tsched.midle = mp.schedlink\n\t\tsched.nmidle--\n\t}\n\treturn mp\n}\n\n// Put gp on the global runnable queue.\n// Sched must be locked.\n// May run during STW, so write barriers are not allowed.\n//go:nowritebarrier\nfunc globrunqput(gp *g) {\n\tgp.schedlink = 0\n\tif sched.runqtail != 0 {\n\t\tsched.runqtail.ptr().schedlink.set(gp)\n\t} else {\n\t\tsched.runqhead.set(gp)\n\t}\n\tsched.runqtail.set(gp)\n\tsched.runqsize++\n}\n\n// Put gp at the head of the global runnable queue.\n// Sched must be locked.\n// May run during STW, so write barriers are not allowed.\n//go:nowritebarrier\nfunc globrunqputhead(gp *g) {\n\tgp.schedlink = sched.runqhead\n\tsched.runqhead.set(gp)\n\tif sched.runqtail == 0 {\n\t\tsched.runqtail.set(gp)\n\t}\n\tsched.runqsize++\n}\n\n// Put a batch of runnable goroutines on the global runnable queue.\n// Sched must be locked.\nfunc globrunqputbatch(ghead *g, gtail *g, n int32) {\n\tgtail.schedlink = 0\n\tif sched.runqtail != 0 {\n\t\tsched.runqtail.ptr().schedlink.set(ghead)\n\t} else {\n\t\tsched.runqhead.set(ghead)\n\t}\n\tsched.runqtail.set(gtail)\n\tsched.runqsize += n\n}\n\n// Try get a batch of G's from the global runnable queue.\n// Sched must be locked.\nfunc globrunqget(_p_ *p, max int32) *g {\n\tif sched.runqsize == 0 {\n\t\treturn 
nil\n\t}\n\n\tn := sched.runqsize/gomaxprocs + 1\n\tif n > sched.runqsize {\n\t\tn = sched.runqsize\n\t}\n\tif max > 0 && n > max {\n\t\tn = max\n\t}\n\tif n > int32(len(_p_.runq))/2 {\n\t\tn = int32(len(_p_.runq)) / 2\n\t}\n\n\tsched.runqsize -= n\n\tif sched.runqsize == 0 {\n\t\tsched.runqtail = 0\n\t}\n\n\tgp := sched.runqhead.ptr()\n\tsched.runqhead = gp.schedlink\n\tn--\n\tfor ; n > 0; n-- {\n\t\tgp1 := sched.runqhead.ptr()\n\t\tsched.runqhead = gp1.schedlink\n\t\trunqput(_p_, gp1, false)\n\t}\n\treturn gp\n}\n\n// Put p on the _Pidle list.\n// Sched must be locked.\n// May run during STW, so write barriers are not allowed.\n//go:nowritebarrier\nfunc pidleput(_p_ *p) {\n\tif !runqempty(_p_) {\n\t\tthrow(\"pidleput: P has non-empty run queue\")\n\t}\n\t_p_.link = sched.pidle\n\tsched.pidle.set(_p_)\n\tatomic.Xadd(&sched.npidle, 1) // TODO: fast atomic\n}\n\n// Try to get a p from the _Pidle list.\n// Sched must be locked.\n// May run during STW, so write barriers are not allowed.\n//go:nowritebarrier\nfunc pidleget() *p {\n\t_p_ := sched.pidle.ptr()\n\tif _p_ != nil {\n\t\tsched.pidle = _p_.link\n\t\tatomic.Xadd(&sched.npidle, -1) // TODO: fast atomic\n\t}\n\treturn _p_\n}\n\n// runqempty returns true if _p_ has no Gs on its local run queue.\n// Note that this test is generally racy.\nfunc runqempty(_p_ *p) bool {\n\treturn _p_.runqhead == _p_.runqtail && _p_.runnext == 0\n}\n\n// To shake out latent assumptions about scheduling order,\n// we introduce some randomness into scheduling decisions\n// when running with the race detector.\n// The need for this was made obvious by changing the\n// (deterministic) scheduling order in Go 1.5 and breaking\n// many poorly-written tests.\n// With the randomness here, as long as the tests pass\n// consistently with -race, they shouldn't have latent scheduling\n// assumptions.\nconst randomizeScheduler = raceenabled\n\n// runqput tries to put g on the local runnable queue.\n// If next is false, runqput adds g to the tail of the 
runnable queue.\n// If next is true, runqput puts g in the _p_.runnext slot.\n// If the run queue is full, runqput puts g on the global queue.\n// Executed only by the owner P.\nfunc runqput(_p_ *p, gp *g, next bool) {\n\tif randomizeScheduler && next && fastrand1()%2 == 0 {\n\t\tnext = false\n\t}\n\n\tif next {\n\tretryNext:\n\t\toldnext := _p_.runnext\n\t\tif !_p_.runnext.cas(oldnext, guintptr(unsafe.Pointer(gp))) {\n\t\t\tgoto retryNext\n\t\t}\n\t\tif oldnext == 0 {\n\t\t\treturn\n\t\t}\n\t\t// Kick the old runnext out to the regular run queue.\n\t\tgp = oldnext.ptr()\n\t}\n\nretry:\n\th := atomic.Load(&_p_.runqhead) // load-acquire, synchronize with consumers\n\tt := _p_.runqtail\n\tif t-h < uint32(len(_p_.runq)) {\n\t\t_p_.runq[t%uint32(len(_p_.runq))].set(gp)\n\t\tatomic.Store(&_p_.runqtail, t+1) // store-release, makes the item available for consumption\n\t\treturn\n\t}\n\tif runqputslow(_p_, gp, h, t) {\n\t\treturn\n\t}\n\t// the queue is not full, now the put above must succeed\n\tgoto retry\n}\n\n// Put g and a batch of work from local runnable queue on global queue.\n// Executed only by the owner P.\nfunc runqputslow(_p_ *p, gp *g, h, t uint32) bool {\n\tvar batch [len(_p_.runq)/2 + 1]*g\n\n\t// First, grab a batch from local queue.\n\tn := t - h\n\tn = n / 2\n\tif n != uint32(len(_p_.runq)/2) {\n\t\tthrow(\"runqputslow: queue is not full\")\n\t}\n\tfor i := uint32(0); i < n; i++ {\n\t\tbatch[i] = _p_.runq[(h+i)%uint32(len(_p_.runq))].ptr()\n\t}\n\tif !atomic.Cas(&_p_.runqhead, h, h+n) { // cas-release, commits consume\n\t\treturn false\n\t}\n\tbatch[n] = gp\n\n\tif randomizeScheduler {\n\t\tfor i := uint32(1); i <= n; i++ {\n\t\t\tj := fastrand1() % (i + 1)\n\t\t\tbatch[i], batch[j] = batch[j], batch[i]\n\t\t}\n\t}\n\n\t// Link the goroutines.\n\tfor i := uint32(0); i < n; i++ {\n\t\tbatch[i].schedlink.set(batch[i+1])\n\t}\n\n\t// Now put the batch on global queue.\n\tlock(&sched.lock)\n\tglobrunqputbatch(batch[0], batch[n], 
int32(n+1))\n\tunlock(&sched.lock)\n\treturn true\n}\n\n// Get g from local runnable queue.\n// If inheritTime is true, gp should inherit the remaining time in the\n// current time slice. Otherwise, it should start a new time slice.\n// Executed only by the owner P.\nfunc runqget(_p_ *p) (gp *g, inheritTime bool) {\n\t// If there's a runnext, it's the next G to run.\n\tfor {\n\t\tnext := _p_.runnext\n\t\tif next == 0 {\n\t\t\tbreak\n\t\t}\n\t\tif _p_.runnext.cas(next, 0) {\n\t\t\treturn next.ptr(), true\n\t\t}\n\t}\n\n\tfor {\n\t\th := atomic.Load(&_p_.runqhead) // load-acquire, synchronize with other consumers\n\t\tt := _p_.runqtail\n\t\tif t == h {\n\t\t\treturn nil, false\n\t\t}\n\t\tgp := _p_.runq[h%uint32(len(_p_.runq))].ptr()\n\t\tif atomic.Cas(&_p_.runqhead, h, h+1) { // cas-release, commits consume\n\t\t\treturn gp, false\n\t\t}\n\t}\n}\n\n// Grabs a batch of goroutines from _p_'s runnable queue into batch.\n// Batch is a ring buffer starting at batchHead.\n// Returns number of grabbed goroutines.\n// Can be executed by any P.\nfunc runqgrab(_p_ *p, batch *[256]guintptr, batchHead uint32, stealRunNextG bool) uint32 {\n\tfor {\n\t\th := atomic.Load(&_p_.runqhead) // load-acquire, synchronize with other consumers\n\t\tt := atomic.Load(&_p_.runqtail) // load-acquire, synchronize with the producer\n\t\tn := t - h\n\t\tn = n - n/2\n\t\tif n == 0 {\n\t\t\tif stealRunNextG {\n\t\t\t\t// Try to steal from _p_.runnext.\n\t\t\t\tif next := _p_.runnext; next != 0 {\n\t\t\t\t\t// Sleep to ensure that _p_ isn't about to run the g we\n\t\t\t\t\t// are about to steal.\n\t\t\t\t\t// The important use case here is when the g running on _p_\n\t\t\t\t\t// ready()s another g and then almost immediately blocks.\n\t\t\t\t\t// Instead of stealing runnext in this window, back off\n\t\t\t\t\t// to give _p_ a chance to schedule runnext. 
This will avoid\n\t\t\t\t\t// thrashing gs between different Ps.\n\t\t\t\t\tusleep(100)\n\t\t\t\t\tif !_p_.runnext.cas(next, 0) {\n\t\t\t\t\t\tcontinue\n\t\t\t\t\t}\n\t\t\t\t\tbatch[batchHead%uint32(len(batch))] = next\n\t\t\t\t\treturn 1\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn 0\n\t\t}\n\t\tif n > uint32(len(_p_.runq)/2) { // read inconsistent h and t\n\t\t\tcontinue\n\t\t}\n\t\tfor i := uint32(0); i < n; i++ {\n\t\t\tg := _p_.runq[(h+i)%uint32(len(_p_.runq))]\n\t\t\tbatch[(batchHead+i)%uint32(len(batch))] = g\n\t\t}\n\t\tif atomic.Cas(&_p_.runqhead, h, h+n) { // cas-release, commits consume\n\t\t\treturn n\n\t\t}\n\t}\n}\n\n// Steal half of elements from local runnable queue of p2\n// and put onto local runnable queue of p.\n// Returns one of the stolen elements (or nil if failed).\nfunc runqsteal(_p_, p2 *p, stealRunNextG bool) *g {\n\tt := _p_.runqtail\n\tn := runqgrab(p2, &_p_.runq, t, stealRunNextG)\n\tif n == 0 {\n\t\treturn nil\n\t}\n\tn--\n\tgp := _p_.runq[(t+n)%uint32(len(_p_.runq))].ptr()\n\tif n == 0 {\n\t\treturn gp\n\t}\n\th := atomic.Load(&_p_.runqhead) // load-acquire, synchronize with consumers\n\tif t-h+n >= uint32(len(_p_.runq)) {\n\t\tthrow(\"runqsteal: runq overflow\")\n\t}\n\tatomic.Store(&_p_.runqtail, t+n) // store-release, makes the item available for consumption\n\treturn gp\n}\n\nfunc testSchedLocalQueue() {\n\t_p_ := new(p)\n\tgs := make([]g, len(_p_.runq))\n\tfor i := 0; i < len(_p_.runq); i++ {\n\t\tif g, _ := runqget(_p_); g != nil {\n\t\t\tthrow(\"runq is not empty initially\")\n\t\t}\n\t\tfor j := 0; j < i; j++ {\n\t\t\trunqput(_p_, &gs[i], false)\n\t\t}\n\t\tfor j := 0; j < i; j++ {\n\t\t\tif g, _ := runqget(_p_); g != &gs[i] {\n\t\t\t\tprint(\"bad element at iter \", i, \"/\", j, \"\\n\")\n\t\t\t\tthrow(\"bad element\")\n\t\t\t}\n\t\t}\n\t\tif g, _ := runqget(_p_); g != nil {\n\t\t\tthrow(\"runq is not empty afterwards\")\n\t\t}\n\t}\n}\n\nfunc testSchedLocalQueueSteal() {\n\tp1 := new(p)\n\tp2 := new(p)\n\tgs := make([]g, 
len(p1.runq))\n\tfor i := 0; i < len(p1.runq); i++ {\n\t\tfor j := 0; j < i; j++ {\n\t\t\tgs[j].sig = 0\n\t\t\trunqput(p1, &gs[j], false)\n\t\t}\n\t\tgp := runqsteal(p2, p1, true)\n\t\ts := 0\n\t\tif gp != nil {\n\t\t\ts++\n\t\t\tgp.sig++\n\t\t}\n\t\tfor {\n\t\t\tgp, _ = runqget(p2)\n\t\t\tif gp == nil {\n\t\t\t\tbreak\n\t\t\t}\n\t\t\ts++\n\t\t\tgp.sig++\n\t\t}\n\t\tfor {\n\t\t\tgp, _ = runqget(p1)\n\t\t\tif gp == nil {\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tgp.sig++\n\t\t}\n\t\tfor j := 0; j < i; j++ {\n\t\t\tif gs[j].sig != 1 {\n\t\t\t\tprint(\"bad element \", j, \"(\", gs[j].sig, \") at iter \", i, \"\\n\")\n\t\t\t\tthrow(\"bad element\")\n\t\t\t}\n\t\t}\n\t\tif s != i/2 && s != i/2+1 {\n\t\t\tprint(\"bad steal \", s, \", want \", i/2, \" or \", i/2+1, \", iter \", i, \"\\n\")\n\t\t\tthrow(\"bad steal\")\n\t\t}\n\t}\n}\n\n//go:linkname setMaxThreads runtime/debug.setMaxThreads\nfunc setMaxThreads(in int) (out int) {\n\tlock(&sched.lock)\n\tout = int(sched.maxmcount)\n\tsched.maxmcount = int32(in)\n\tcheckmcount()\n\tunlock(&sched.lock)\n\treturn\n}\n\nfunc haveexperiment(name string) bool {\n\tx := sys.Goexperiment\n\tfor x != \"\" {\n\t\txname := \"\"\n\t\ti := index(x, \",\")\n\t\tif i < 0 {\n\t\t\txname, x = x, \"\"\n\t\t} else {\n\t\t\txname, x = x[:i], x[i+1:]\n\t\t}\n\t\tif xname == name {\n\t\t\treturn true\n\t\t}\n\t}\n\treturn false\n}\n\n//go:nosplit\nfunc procPin() int {\n\t_g_ := getg()\n\tmp := _g_.m\n\n\tmp.locks++\n\treturn int(mp.p.ptr().id)\n}\n\n//go:nosplit\nfunc procUnpin() {\n\t_g_ := getg()\n\t_g_.m.locks--\n}\n\n//go:linkname sync_runtime_procPin sync.runtime_procPin\n//go:nosplit\nfunc sync_runtime_procPin() int {\n\treturn procPin()\n}\n\n//go:linkname sync_runtime_procUnpin sync.runtime_procUnpin\n//go:nosplit\nfunc sync_runtime_procUnpin() {\n\tprocUnpin()\n}\n\n//go:linkname sync_atomic_runtime_procPin sync/atomic.runtime_procPin\n//go:nosplit\nfunc sync_atomic_runtime_procPin() int {\n\treturn procPin()\n}\n\n//go:linkname 
sync_atomic_runtime_procUnpin sync/atomic.runtime_procUnpin\n//go:nosplit\nfunc sync_atomic_runtime_procUnpin() {\n\tprocUnpin()\n}\n\n// Active spinning for sync.Mutex.\n//go:linkname sync_runtime_canSpin sync.runtime_canSpin\n//go:nosplit\nfunc sync_runtime_canSpin(i int) bool {\n\t// sync.Mutex is cooperative, so we are conservative with spinning.\n\t// Spin only a few times and only if running on a multicore machine and\n\t// GOMAXPROCS>1 and there is at least one other running P and local runq is empty.\n\t// As opposed to runtime mutex we don't do passive spinning here,\n\t// because there can be work on global runq or on other Ps.\n\tif i >= active_spin || ncpu <= 1 || gomaxprocs <= int32(sched.npidle+sched.nmspinning)+1 {\n\t\treturn false\n\t}\n\tif p := getg().m.p.ptr(); !runqempty(p) {\n\t\treturn false\n\t}\n\treturn true\n}\n\n//go:linkname sync_runtime_doSpin sync.runtime_doSpin\n//go:nosplit\nfunc sync_runtime_doSpin() {\n\tprocyield(active_spin_cnt)\n}\n"
  },
  {
    "path": "examples/go/small.go",
    "content": "package example\n\ntype Person struct {\n\tname string\n\tmom  *Person\n}\n\nfunc NewPerson(name string, mom *Person) Person {\n\treturn Person{name: name, mom: mom}\n}\n\nfunc (self *Person) GetName() string {\n\treturn self.name\n}\n\nfunc (self *Person) GetMom() *Person {\n\treturn self.mom\n}\n\nvar pearl = Person{name: \"Pearl\"}\nvar wilma = Person{name: \"Wilma\", mom: &pearl}\n\nvar people = []Person{\n\tPerson{name: \"Pebbles\", mom: &wilma},\n\twilma,\n}\n\nfunc main() {\n\tfor _, p := range people {\n\t\tprintln(p.GetName())\n\t}\n}\n"
  },
  {
    "path": "examples/go/type_switch.go",
    "content": "package p\n\nfunc f(a interface{}) {\n\tswitch aa := a.(type) {\n\tcase *int:\n\t\tprint(aa)\n\t}\n}\n"
  },
  {
    "path": "examples/go/value.go",
    "content": "// Copyright 2009 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license that can be found in the LICENSE file.\n\npackage reflect\n\nimport (\n\t\"math\"\n\t\"runtime\"\n\t\"unsafe\"\n)\n\nconst ptrSize = 4 << (^uintptr(0) >> 63) // unsafe.Sizeof(uintptr(0)) but an ideal const\nconst cannotSet = \"cannot set value obtained from unexported struct field\"\n\n// Value is the reflection interface to a Go value.\n//\n// Not all methods apply to all kinds of values.  Restrictions,\n// if any, are noted in the documentation for each method.\n// Use the Kind method to find out the kind of value before\n// calling kind-specific methods.  Calling a method\n// inappropriate to the kind of type causes a run time panic.\n//\n// The zero Value represents no value.\n// Its IsValid method returns false, its Kind method returns Invalid,\n// its String method returns \"<invalid Value>\", and all other methods panic.\n// Most functions and methods never return an invalid value.\n// If one does, its documentation states the conditions explicitly.\n//\n// A Value can be used concurrently by multiple goroutines provided that\n// the underlying Go value can be used concurrently for the equivalent\n// direct operations.\n//\n// Using == on two Values does not compare the underlying values\n// they represent, but rather the contents of the Value structs.\n// To compare two Values, compare the results of the Interface method.\ntype Value struct {\n\t// typ holds the type of the value represented by a Value.\n\ttyp *rtype\n\n\t// Pointer-valued data or, if flagIndir is set, pointer to data.\n\t// Valid when either flagIndir is set or typ.pointers() is true.\n\tptr unsafe.Pointer\n\n\t// flag holds metadata about the value.\n\t// The lowest bits are flag bits:\n\t//\t- flagStickyRO: obtained via unexported not embedded field, so read-only\n\t//\t- flagEmbedRO: obtained via unexported embedded field, so read-only\n\t//\t- 
flagIndir: val holds a pointer to the data\n\t//\t- flagAddr: v.CanAddr is true (implies flagIndir)\n\t//\t- flagMethod: v is a method value.\n\t// The next five bits give the Kind of the value.\n\t// This repeats typ.Kind() except for method values.\n\t// The remaining 23+ bits give a method number for method values.\n\t// If flag.kind() != Func, code can assume that flagMethod is unset.\n\t// If ifaceIndir(typ), code can assume that flagIndir is set.\n\tflag\n\n\t// A method value represents a curried method invocation\n\t// like r.Read for some receiver r.  The typ+val+flag bits describe\n\t// the receiver r, but the flag's Kind bits say Func (methods are\n\t// functions), and the top bits of the flag give the method number\n\t// in r's type's method table.\n}\n\ntype flag uintptr\n\nconst (\n\tflagKindWidth        = 5 // there are 27 kinds\n\tflagKindMask    flag = 1<<flagKindWidth - 1\n\tflagStickyRO    flag = 1 << 5\n\tflagEmbedRO     flag = 1 << 6\n\tflagIndir       flag = 1 << 7\n\tflagAddr        flag = 1 << 8\n\tflagMethod      flag = 1 << 9\n\tflagMethodShift      = 10\n\tflagRO          flag = flagStickyRO | flagEmbedRO\n)\n\nfunc (f flag) kind() Kind {\n\treturn Kind(f & flagKindMask)\n}\n\n// pointer returns the underlying pointer represented by v.\n// v.Kind() must be Ptr, Map, Chan, Func, or UnsafePointer\nfunc (v Value) pointer() unsafe.Pointer {\n\tif v.typ.size != ptrSize || !v.typ.pointers() {\n\t\tpanic(\"can't call pointer on a non-pointer Value\")\n\t}\n\tif v.flag&flagIndir != 0 {\n\t\treturn *(*unsafe.Pointer)(v.ptr)\n\t}\n\treturn v.ptr\n}\n\n// packEface converts v to the empty interface.\nfunc packEface(v Value) interface{} {\n\tt := v.typ\n\tvar i interface{}\n\te := (*emptyInterface)(unsafe.Pointer(&i))\n\t// First, fill in the data portion of the interface.\n\tswitch {\n\tcase ifaceIndir(t):\n\t\tif v.flag&flagIndir == 0 {\n\t\t\tpanic(\"bad indir\")\n\t\t}\n\t\t// Value is indirect, and so is the interface we're making.\n\t\tptr := 
v.ptr\n\t\tif v.flag&flagAddr != 0 {\n\t\t\t// TODO: pass safe boolean from valueInterface so\n\t\t\t// we don't need to copy if safe==true?\n\t\t\tc := unsafe_New(t)\n\t\t\ttypedmemmove(t, c, ptr)\n\t\t\tptr = c\n\t\t}\n\t\te.word = ptr\n\tcase v.flag&flagIndir != 0:\n\t\t// Value is indirect, but interface is direct.  We need\n\t\t// to load the data at v.ptr into the interface data word.\n\t\te.word = *(*unsafe.Pointer)(v.ptr)\n\tdefault:\n\t\t// Value is direct, and so is the interface.\n\t\te.word = v.ptr\n\t}\n\t// Now, fill in the type portion.  We're very careful here not\n\t// to have any operation between the e.word and e.typ assignments\n\t// that would let the garbage collector observe the partially-built\n\t// interface value.\n\te.typ = t\n\treturn i\n}\n\n// unpackEface converts the empty interface i to a Value.\nfunc unpackEface(i interface{}) Value {\n\te := (*emptyInterface)(unsafe.Pointer(&i))\n\t// NOTE: don't read e.word until we know whether it is really a pointer or not.\n\tt := e.typ\n\tif t == nil {\n\t\treturn Value{}\n\t}\n\tf := flag(t.Kind())\n\tif ifaceIndir(t) {\n\t\tf |= flagIndir\n\t}\n\treturn Value{t, e.word, f}\n}\n\n// A ValueError occurs when a Value method is invoked on\n// a Value that does not support it.  
Such cases are documented\n// in the description of each method.\ntype ValueError struct {\n\tMethod string\n\tKind   Kind\n}\n\nfunc (e *ValueError) Error() string {\n\tif e.Kind == 0 {\n\t\treturn \"reflect: call of \" + e.Method + \" on zero Value\"\n\t}\n\treturn \"reflect: call of \" + e.Method + \" on \" + e.Kind.String() + \" Value\"\n}\n\n// methodName returns the name of the calling method,\n// assumed to be two stack frames above.\nfunc methodName() string {\n\tpc, _, _, _ := runtime.Caller(2)\n\tf := runtime.FuncForPC(pc)\n\tif f == nil {\n\t\treturn \"unknown method\"\n\t}\n\treturn f.Name()\n}\n\n// emptyInterface is the header for an interface{} value.\ntype emptyInterface struct {\n\ttyp  *rtype\n\tword unsafe.Pointer\n}\n\n// nonEmptyInterface is the header for an interface value with methods.\ntype nonEmptyInterface struct {\n\t// see ../runtime/iface.go:/Itab\n\titab *struct {\n\t\tityp   *rtype // static interface type\n\t\ttyp    *rtype // dynamic concrete type\n\t\tlink   unsafe.Pointer\n\t\tbad    int32\n\t\tunused int32\n\t\tfun    [100000]unsafe.Pointer // method table\n\t}\n\tword unsafe.Pointer\n}\n\n// mustBe panics if f's kind is not expected.\n// Making this a method on flag instead of on Value\n// (and embedding flag in Value) means that we can write\n// the very clear v.mustBe(Bool) and have it compile into\n// v.flag.mustBe(Bool), which will only bother to copy the\n// single important word for the receiver.\nfunc (f flag) mustBe(expected Kind) {\n\tif f.kind() != expected {\n\t\tpanic(&ValueError{methodName(), f.kind()})\n\t}\n}\n\n// mustBeExported panics if f records that the value was obtained using\n// an unexported field.\nfunc (f flag) mustBeExported() {\n\tif f == 0 {\n\t\tpanic(&ValueError{methodName(), 0})\n\t}\n\tif f&flagRO != 0 {\n\t\tpanic(\"reflect: \" + methodName() + \" using value obtained using unexported field\")\n\t}\n}\n\n// mustBeAssignable panics if f records that the value is not assignable,\n// which is to 
say that either it was obtained using an unexported field\n// or it is not addressable.\nfunc (f flag) mustBeAssignable() {\n\tif f == 0 {\n\t\tpanic(&ValueError{methodName(), Invalid})\n\t}\n\t// Assignable if addressable and not read-only.\n\tif f&flagRO != 0 {\n\t\tpanic(\"reflect: \" + methodName() + \" using value obtained using unexported field\")\n\t}\n\tif f&flagAddr == 0 {\n\t\tpanic(\"reflect: \" + methodName() + \" using unaddressable value\")\n\t}\n}\n\n// Addr returns a pointer value representing the address of v.\n// It panics if CanAddr() returns false.\n// Addr is typically used to obtain a pointer to a struct field\n// or slice element in order to call a method that requires a\n// pointer receiver.\nfunc (v Value) Addr() Value {\n\tif v.flag&flagAddr == 0 {\n\t\tpanic(\"reflect.Value.Addr of unaddressable value\")\n\t}\n\treturn Value{v.typ.ptrTo(), v.ptr, (v.flag & flagRO) | flag(Ptr)}\n}\n\n// Bool returns v's underlying value.\n// It panics if v's kind is not Bool.\nfunc (v Value) Bool() bool {\n\tv.mustBe(Bool)\n\treturn *(*bool)(v.ptr)\n}\n\n// Bytes returns v's underlying value.\n// It panics if v's underlying value is not a slice of bytes.\nfunc (v Value) Bytes() []byte {\n\tv.mustBe(Slice)\n\tif v.typ.Elem().Kind() != Uint8 {\n\t\tpanic(\"reflect.Value.Bytes of non-byte slice\")\n\t}\n\t// Slice is always bigger than a word; assume flagIndir.\n\treturn *(*[]byte)(v.ptr)\n}\n\n// runes returns v's underlying value.\n// It panics if v's underlying value is not a slice of runes (int32s).\nfunc (v Value) runes() []rune {\n\tv.mustBe(Slice)\n\tif v.typ.Elem().Kind() != Int32 {\n\t\tpanic(\"reflect.Value.Bytes of non-rune slice\")\n\t}\n\t// Slice is always bigger than a word; assume flagIndir.\n\treturn *(*[]rune)(v.ptr)\n}\n\n// CanAddr reports whether the value's address can be obtained with Addr.\n// Such values are called addressable.  
A value is addressable if it is\n// an element of a slice, an element of an addressable array,\n// a field of an addressable struct, or the result of dereferencing a pointer.\n// If CanAddr returns false, calling Addr will panic.\nfunc (v Value) CanAddr() bool {\n\treturn v.flag&flagAddr != 0\n}\n\n// CanSet reports whether the value of v can be changed.\n// A Value can be changed only if it is addressable and was not\n// obtained by the use of unexported struct fields.\n// If CanSet returns false, calling Set or any type-specific\n// setter (e.g., SetBool, SetInt) will panic.\nfunc (v Value) CanSet() bool {\n\treturn v.flag&(flagAddr|flagRO) == flagAddr\n}\n\n// Call calls the function v with the input arguments in.\n// For example, if len(in) == 3, v.Call(in) represents the Go call v(in[0], in[1], in[2]).\n// Call panics if v's Kind is not Func.\n// It returns the output results as Values.\n// As in Go, each input argument must be assignable to the\n// type of the function's corresponding input parameter.\n// If v is a variadic function, Call creates the variadic slice parameter\n// itself, copying in the corresponding values.\nfunc (v Value) Call(in []Value) []Value {\n\tv.mustBe(Func)\n\tv.mustBeExported()\n\treturn v.call(\"Call\", in)\n}\n\n// CallSlice calls the variadic function v with the input arguments in,\n// assigning the slice in[len(in)-1] to v's final variadic argument.\n// For example, if len(in) == 3, v.CallSlice(in) represents the Go call v(in[0], in[1], in[2]...).\n// CallSlice panics if v's Kind is not Func or if v is not variadic.\n// It returns the output results as Values.\n// As in Go, each input argument must be assignable to the\n// type of the function's corresponding input parameter.\nfunc (v Value) CallSlice(in []Value) []Value {\n\tv.mustBe(Func)\n\tv.mustBeExported()\n\treturn v.call(\"CallSlice\", in)\n}\n\nvar callGC bool // for testing; see TestCallMethodJump\n\nfunc (v Value) call(op string, in []Value) []Value {\n\t// Get 
function pointer, type.\n\tt := v.typ\n\tvar (\n\t\tfn       unsafe.Pointer\n\t\trcvr     Value\n\t\trcvrtype *rtype\n\t)\n\tif v.flag&flagMethod != 0 {\n\t\trcvr = v\n\t\trcvrtype, t, fn = methodReceiver(op, v, int(v.flag)>>flagMethodShift)\n\t} else if v.flag&flagIndir != 0 {\n\t\tfn = *(*unsafe.Pointer)(v.ptr)\n\t} else {\n\t\tfn = v.ptr\n\t}\n\n\tif fn == nil {\n\t\tpanic(\"reflect.Value.Call: call of nil function\")\n\t}\n\n\tisSlice := op == \"CallSlice\"\n\tn := t.NumIn()\n\tif isSlice {\n\t\tif !t.IsVariadic() {\n\t\t\tpanic(\"reflect: CallSlice of non-variadic function\")\n\t\t}\n\t\tif len(in) < n {\n\t\t\tpanic(\"reflect: CallSlice with too few input arguments\")\n\t\t}\n\t\tif len(in) > n {\n\t\t\tpanic(\"reflect: CallSlice with too many input arguments\")\n\t\t}\n\t} else {\n\t\tif t.IsVariadic() {\n\t\t\tn--\n\t\t}\n\t\tif len(in) < n {\n\t\t\tpanic(\"reflect: Call with too few input arguments\")\n\t\t}\n\t\tif !t.IsVariadic() && len(in) > n {\n\t\t\tpanic(\"reflect: Call with too many input arguments\")\n\t\t}\n\t}\n\tfor _, x := range in {\n\t\tif x.Kind() == Invalid {\n\t\t\tpanic(\"reflect: \" + op + \" using zero Value argument\")\n\t\t}\n\t}\n\tfor i := 0; i < n; i++ {\n\t\tif xt, targ := in[i].Type(), t.In(i); !xt.AssignableTo(targ) {\n\t\t\tpanic(\"reflect: \" + op + \" using \" + xt.String() + \" as type \" + targ.String())\n\t\t}\n\t}\n\tif !isSlice && t.IsVariadic() {\n\t\t// prepare slice for remaining values\n\t\tm := len(in) - n\n\t\tslice := MakeSlice(t.In(n), m, m)\n\t\telem := t.In(n).Elem()\n\t\tfor i := 0; i < m; i++ {\n\t\t\tx := in[n+i]\n\t\t\tif xt := x.Type(); !xt.AssignableTo(elem) {\n\t\t\t\tpanic(\"reflect: cannot use \" + xt.String() + \" as type \" + elem.String() + \" in \" + op)\n\t\t\t}\n\t\t\tslice.Index(i).Set(x)\n\t\t}\n\t\torigIn := in\n\t\tin = make([]Value, n+1)\n\t\tcopy(in[:n], origIn)\n\t\tin[n] = slice\n\t}\n\n\tnin := len(in)\n\tif nin != t.NumIn() {\n\t\tpanic(\"reflect.Value.Call: wrong argument 
count\")\n\t}\n\tnout := t.NumOut()\n\n\t// Compute frame type.\n\tframetype, _, retOffset, _, framePool := funcLayout(t, rcvrtype)\n\n\t// Allocate a chunk of memory for frame.\n\tvar args unsafe.Pointer\n\tif nout == 0 {\n\t\targs = framePool.Get().(unsafe.Pointer)\n\t} else {\n\t\t// Can't use pool if the function has return values.\n\t\t// We will leak pointer to args in ret, so its lifetime is not scoped.\n\t\targs = unsafe_New(frametype)\n\t}\n\toff := uintptr(0)\n\n\t// Copy inputs into args.\n\tif rcvrtype != nil {\n\t\tstoreRcvr(rcvr, args)\n\t\toff = ptrSize\n\t}\n\tfor i, v := range in {\n\t\tv.mustBeExported()\n\t\ttarg := t.In(i).(*rtype)\n\t\ta := uintptr(targ.align)\n\t\toff = (off + a - 1) &^ (a - 1)\n\t\tn := targ.size\n\t\taddr := unsafe.Pointer(uintptr(args) + off)\n\t\tv = v.assignTo(\"reflect.Value.Call\", targ, addr)\n\t\tif v.flag&flagIndir != 0 {\n\t\t\ttypedmemmove(targ, addr, v.ptr)\n\t\t} else {\n\t\t\t*(*unsafe.Pointer)(addr) = v.ptr\n\t\t}\n\t\toff += n\n\t}\n\n\t// Call.\n\tcall(frametype, fn, args, uint32(frametype.size), uint32(retOffset))\n\n\t// For testing; see TestCallMethodJump.\n\tif callGC {\n\t\truntime.GC()\n\t}\n\n\tvar ret []Value\n\tif nout == 0 {\n\t\tmemclr(args, frametype.size)\n\t\tframePool.Put(args)\n\t} else {\n\t\t// Zero the now unused input area of args,\n\t\t// because the Values returned by this function contain pointers to the args object,\n\t\t// and will thus keep the args object alive indefinitely.\n\t\tmemclr(args, retOffset)\n\t\t// Copy return values out of args.\n\t\tret = make([]Value, nout)\n\t\toff = retOffset\n\t\tfor i := 0; i < nout; i++ {\n\t\t\ttv := t.Out(i)\n\t\t\ta := uintptr(tv.Align())\n\t\t\toff = (off + a - 1) &^ (a - 1)\n\t\t\tfl := flagIndir | flag(tv.Kind())\n\t\t\tret[i] = Value{tv.common(), unsafe.Pointer(uintptr(args) + off), fl}\n\t\t\toff += tv.Size()\n\t\t}\n\t}\n\n\treturn ret\n}\n\n// callReflect is the call implementation used by a function\n// returned by MakeFunc. 
In many ways it is the opposite of the\n// method Value.call above. The method above converts a call using Values\n// into a call of a function with a concrete argument frame, while\n// callReflect converts a call of a function with a concrete argument\n// frame into a call using Values.\n// It is in this file so that it can be next to the call method above.\n// The remainder of the MakeFunc implementation is in makefunc.go.\n//\n// NOTE: This function must be marked as a \"wrapper\" in the generated code,\n// so that the linker can make it work correctly for panic and recover.\n// The gc compilers know to do that for the name \"reflect.callReflect\".\nfunc callReflect(ctxt *makeFuncImpl, frame unsafe.Pointer) {\n\tftyp := ctxt.typ\n\tf := ctxt.fn\n\n\t// Copy argument frame into Values.\n\tptr := frame\n\toff := uintptr(0)\n\tin := make([]Value, 0, len(ftyp.in))\n\tfor _, arg := range ftyp.in {\n\t\ttyp := arg\n\t\toff += -off & uintptr(typ.align-1)\n\t\taddr := unsafe.Pointer(uintptr(ptr) + off)\n\t\tv := Value{typ, nil, flag(typ.Kind())}\n\t\tif ifaceIndir(typ) {\n\t\t\t// value cannot be inlined in interface data.\n\t\t\t// Must make a copy, because f might keep a reference to it,\n\t\t\t// and we cannot let f keep a reference to the stack frame\n\t\t\t// after this function returns, not even a read-only reference.\n\t\t\tv.ptr = unsafe_New(typ)\n\t\t\ttypedmemmove(typ, v.ptr, addr)\n\t\t\tv.flag |= flagIndir\n\t\t} else {\n\t\t\tv.ptr = *(*unsafe.Pointer)(addr)\n\t\t}\n\t\tin = append(in, v)\n\t\toff += typ.size\n\t}\n\n\t// Call underlying function.\n\tout := f(in)\n\tif len(out) != len(ftyp.out) {\n\t\tpanic(\"reflect: wrong return count from function created by MakeFunc\")\n\t}\n\n\t// Copy results back into argument frame.\n\tif len(ftyp.out) > 0 {\n\t\toff += -off & (ptrSize - 1)\n\t\tif runtime.GOARCH == \"amd64p32\" {\n\t\t\toff = align(off, 8)\n\t\t}\n\t\tfor i, arg := range ftyp.out {\n\t\t\ttyp := arg\n\t\t\tv := out[i]\n\t\t\tif v.typ != typ 
{\n\t\t\t\tpanic(\"reflect: function created by MakeFunc using \" + funcName(f) +\n\t\t\t\t\t\" returned wrong type: have \" +\n\t\t\t\t\tout[i].typ.String() + \" for \" + typ.String())\n\t\t\t}\n\t\t\tif v.flag&flagRO != 0 {\n\t\t\t\tpanic(\"reflect: function created by MakeFunc using \" + funcName(f) +\n\t\t\t\t\t\" returned value obtained from unexported field\")\n\t\t\t}\n\t\t\toff += -off & uintptr(typ.align-1)\n\t\t\taddr := unsafe.Pointer(uintptr(ptr) + off)\n\t\t\tif v.flag&flagIndir != 0 {\n\t\t\t\ttypedmemmove(typ, addr, v.ptr)\n\t\t\t} else {\n\t\t\t\t*(*unsafe.Pointer)(addr) = v.ptr\n\t\t\t}\n\t\t\toff += typ.size\n\t\t}\n\t}\n}\n\n// methodReceiver returns information about the receiver\n// described by v. The Value v may or may not have the\n// flagMethod bit set, so the kind cached in v.flag should\n// not be used.\n// The return value rcvrtype gives the method's actual receiver type.\n// The return value t gives the method type signature (without the receiver).\n// The return value fn is a pointer to the method code.\nfunc methodReceiver(op string, v Value, methodIndex int) (rcvrtype, t *rtype, fn unsafe.Pointer) {\n\ti := methodIndex\n\tif v.typ.Kind() == Interface {\n\t\ttt := (*interfaceType)(unsafe.Pointer(v.typ))\n\t\tif uint(i) >= uint(len(tt.methods)) {\n\t\t\tpanic(\"reflect: internal error: invalid method index\")\n\t\t}\n\t\tm := &tt.methods[i]\n\t\tif m.pkgPath != nil {\n\t\t\tpanic(\"reflect: \" + op + \" of unexported method\")\n\t\t}\n\t\tiface := (*nonEmptyInterface)(v.ptr)\n\t\tif iface.itab == nil {\n\t\t\tpanic(\"reflect: \" + op + \" of method on nil interface value\")\n\t\t}\n\t\trcvrtype = iface.itab.typ\n\t\tfn = unsafe.Pointer(&iface.itab.fun[i])\n\t\tt = m.typ\n\t} else {\n\t\trcvrtype = v.typ\n\t\tut := v.typ.uncommon()\n\t\tif ut == nil || uint(i) >= uint(len(ut.methods)) {\n\t\t\tpanic(\"reflect: internal error: invalid method index\")\n\t\t}\n\t\tm := &ut.methods[i]\n\t\tif m.pkgPath != nil {\n\t\t\tpanic(\"reflect: \" + 
op + \" of unexported method\")\n\t\t}\n\t\tfn = unsafe.Pointer(&m.ifn)\n\t\tt = m.mtyp\n\t}\n\treturn\n}\n\n// v is a method receiver.  Store at p the word which is used to\n// encode that receiver at the start of the argument list.\n// Reflect uses the \"interface\" calling convention for\n// methods, which always uses one word to record the receiver.\nfunc storeRcvr(v Value, p unsafe.Pointer) {\n\tt := v.typ\n\tif t.Kind() == Interface {\n\t\t// the interface data word becomes the receiver word\n\t\tiface := (*nonEmptyInterface)(v.ptr)\n\t\t*(*unsafe.Pointer)(p) = iface.word\n\t} else if v.flag&flagIndir != 0 && !ifaceIndir(t) {\n\t\t*(*unsafe.Pointer)(p) = *(*unsafe.Pointer)(v.ptr)\n\t} else {\n\t\t*(*unsafe.Pointer)(p) = v.ptr\n\t}\n}\n\n// align returns the result of rounding x up to a multiple of n.\n// n must be a power of two.\nfunc align(x, n uintptr) uintptr {\n\treturn (x + n - 1) &^ (n - 1)\n}\n\n// callMethod is the call implementation used by a function returned\n// by makeMethodValue (used by v.Method(i).Interface()).\n// It is a streamlined version of the usual reflect call: the caller has\n// already laid out the argument frame for us, so we don't have\n// to deal with individual Values for each argument.\n// It is in this file so that it can be next to the two similar functions above.\n// The remainder of the makeMethodValue implementation is in makefunc.go.\n//\n// NOTE: This function must be marked as a \"wrapper\" in the generated code,\n// so that the linker can make it work correctly for panic and recover.\n// The gc compilers know to do that for the name \"reflect.callMethod\".\nfunc callMethod(ctxt *methodValue, frame unsafe.Pointer) {\n\trcvr := ctxt.rcvr\n\trcvrtype, t, fn := methodReceiver(\"call\", rcvr, ctxt.method)\n\tframetype, argSize, retOffset, _, framePool := funcLayout(t, rcvrtype)\n\n\t// Make a new frame that is one word bigger so we can store the receiver.\n\targs := framePool.Get().(unsafe.Pointer)\n\n\t// Copy in receiver 
and rest of args.\n\tstoreRcvr(rcvr, args)\n\ttypedmemmovepartial(frametype, unsafe.Pointer(uintptr(args)+ptrSize), frame, ptrSize, argSize-ptrSize)\n\n\t// Call.\n\tcall(frametype, fn, args, uint32(frametype.size), uint32(retOffset))\n\n\t// Copy return values. On amd64p32, the beginning of return values\n\t// is 64-bit aligned, so the caller's frame layout (which doesn't have\n\t// a receiver) is different from the layout of the fn call, which has\n\t// a receiver.\n\t// Ignore any changes to args and just copy return values.\n\tcallerRetOffset := retOffset - ptrSize\n\tif runtime.GOARCH == \"amd64p32\" {\n\t\tcallerRetOffset = align(argSize-ptrSize, 8)\n\t}\n\ttypedmemmovepartial(frametype,\n\t\tunsafe.Pointer(uintptr(frame)+callerRetOffset),\n\t\tunsafe.Pointer(uintptr(args)+retOffset),\n\t\tretOffset,\n\t\tframetype.size-retOffset)\n\n\tmemclr(args, frametype.size)\n\tframePool.Put(args)\n}\n\n// funcName returns the name of f, for use in error messages.\nfunc funcName(f func([]Value) []Value) string {\n\tpc := *(*uintptr)(unsafe.Pointer(&f))\n\trf := runtime.FuncForPC(pc)\n\tif rf != nil {\n\t\treturn rf.Name()\n\t}\n\treturn \"closure\"\n}\n\n// Cap returns v's capacity.\n// It panics if v's Kind is not Array, Chan, or Slice.\nfunc (v Value) Cap() int {\n\tk := v.kind()\n\tswitch k {\n\tcase Array:\n\t\treturn v.typ.Len()\n\tcase Chan:\n\t\treturn int(chancap(v.pointer()))\n\tcase Slice:\n\t\t// Slice is always bigger than a word; assume flagIndir.\n\t\treturn (*sliceHeader)(v.ptr).Cap\n\t}\n\tpanic(&ValueError{\"reflect.Value.Cap\", v.kind()})\n}\n\n// Close closes the channel v.\n// It panics if v's Kind is not Chan.\nfunc (v Value) Close() {\n\tv.mustBe(Chan)\n\tv.mustBeExported()\n\tchanclose(v.pointer())\n}\n\n// Complex returns v's underlying value, as a complex128.\n// It panics if v's Kind is not Complex64 or Complex128\nfunc (v Value) Complex() complex128 {\n\tk := v.kind()\n\tswitch k {\n\tcase Complex64:\n\t\treturn 
complex128(*(*complex64)(v.ptr))\n\tcase Complex128:\n\t\treturn *(*complex128)(v.ptr)\n\t}\n\tpanic(&ValueError{\"reflect.Value.Complex\", v.kind()})\n}\n\n// Elem returns the value that the interface v contains\n// or that the pointer v points to.\n// It panics if v's Kind is not Interface or Ptr.\n// It returns the zero Value if v is nil.\nfunc (v Value) Elem() Value {\n\tk := v.kind()\n\tswitch k {\n\tcase Interface:\n\t\tvar eface interface{}\n\t\tif v.typ.NumMethod() == 0 {\n\t\t\teface = *(*interface{})(v.ptr)\n\t\t} else {\n\t\t\teface = (interface{})(*(*interface {\n\t\t\t\tM()\n\t\t\t})(v.ptr))\n\t\t}\n\t\tx := unpackEface(eface)\n\t\tif x.flag != 0 {\n\t\t\tx.flag |= v.flag & flagRO\n\t\t}\n\t\treturn x\n\tcase Ptr:\n\t\tptr := v.ptr\n\t\tif v.flag&flagIndir != 0 {\n\t\t\tptr = *(*unsafe.Pointer)(ptr)\n\t\t}\n\t\t// The returned value's address is v's value.\n\t\tif ptr == nil {\n\t\t\treturn Value{}\n\t\t}\n\t\ttt := (*ptrType)(unsafe.Pointer(v.typ))\n\t\ttyp := tt.elem\n\t\tfl := v.flag&flagRO | flagIndir | flagAddr\n\t\tfl |= flag(typ.Kind())\n\t\treturn Value{typ, ptr, fl}\n\t}\n\tpanic(&ValueError{\"reflect.Value.Elem\", v.kind()})\n}\n\n// Field returns the i'th field of the struct v.\n// It panics if v's Kind is not Struct or i is out of range.\nfunc (v Value) Field(i int) Value {\n\tif v.kind() != Struct {\n\t\tpanic(&ValueError{\"reflect.Value.Field\", v.kind()})\n\t}\n\ttt := (*structType)(unsafe.Pointer(v.typ))\n\tif uint(i) >= uint(len(tt.fields)) {\n\t\tpanic(\"reflect: Field index out of range\")\n\t}\n\tfield := &tt.fields[i]\n\ttyp := field.typ\n\n\t// Inherit permission bits from v, but clear flagEmbedRO.\n\tfl := v.flag&(flagStickyRO|flagIndir|flagAddr) | flag(typ.Kind())\n\t// Using an unexported field forces flagRO.\n\tif field.pkgPath != nil {\n\t\tif field.name == nil {\n\t\t\tfl |= flagEmbedRO\n\t\t} else {\n\t\t\tfl |= flagStickyRO\n\t\t}\n\t}\n\t// Either flagIndir is set and v.ptr points at struct,\n\t// or flagIndir is not set 
and v.ptr is the actual struct data.\n\t// In the former case, we want v.ptr + offset.\n\t// In the latter case, we must have field.offset = 0,\n\t// so v.ptr + field.offset is still okay.\n\tptr := unsafe.Pointer(uintptr(v.ptr) + field.offset)\n\treturn Value{typ, ptr, fl}\n}\n\n// FieldByIndex returns the nested field corresponding to index.\n// It panics if v's Kind is not struct.\nfunc (v Value) FieldByIndex(index []int) Value {\n\tif len(index) == 1 {\n\t\treturn v.Field(index[0])\n\t}\n\tv.mustBe(Struct)\n\tfor i, x := range index {\n\t\tif i > 0 {\n\t\t\tif v.Kind() == Ptr && v.typ.Elem().Kind() == Struct {\n\t\t\t\tif v.IsNil() {\n\t\t\t\t\tpanic(\"reflect: indirection through nil pointer to embedded struct\")\n\t\t\t\t}\n\t\t\t\tv = v.Elem()\n\t\t\t}\n\t\t}\n\t\tv = v.Field(x)\n\t}\n\treturn v\n}\n\n// FieldByName returns the struct field with the given name.\n// It returns the zero Value if no field was found.\n// It panics if v's Kind is not struct.\nfunc (v Value) FieldByName(name string) Value {\n\tv.mustBe(Struct)\n\tif f, ok := v.typ.FieldByName(name); ok {\n\t\treturn v.FieldByIndex(f.Index)\n\t}\n\treturn Value{}\n}\n\n// FieldByNameFunc returns the struct field with a name\n// that satisfies the match function.\n// It panics if v's Kind is not struct.\n// It returns the zero Value if no field was found.\nfunc (v Value) FieldByNameFunc(match func(string) bool) Value {\n\tif f, ok := v.typ.FieldByNameFunc(match); ok {\n\t\treturn v.FieldByIndex(f.Index)\n\t}\n\treturn Value{}\n}\n\n// Float returns v's underlying value, as a float64.\n// It panics if v's Kind is not Float32 or Float64\nfunc (v Value) Float() float64 {\n\tk := v.kind()\n\tswitch k {\n\tcase Float32:\n\t\treturn float64(*(*float32)(v.ptr))\n\tcase Float64:\n\t\treturn *(*float64)(v.ptr)\n\t}\n\tpanic(&ValueError{\"reflect.Value.Float\", v.kind()})\n}\n\nvar uint8Type = TypeOf(uint8(0)).(*rtype)\n\n// Index returns v's i'th element.\n// It panics if v's Kind is not Array, Slice, or 
String or i is out of range.\nfunc (v Value) Index(i int) Value {\n\tswitch v.kind() {\n\tcase Array:\n\t\ttt := (*arrayType)(unsafe.Pointer(v.typ))\n\t\tif uint(i) >= uint(tt.len) {\n\t\t\tpanic(\"reflect: array index out of range\")\n\t\t}\n\t\ttyp := tt.elem\n\t\toffset := uintptr(i) * typ.size\n\n\t\t// Either flagIndir is set and v.ptr points at array,\n\t\t// or flagIndir is not set and v.ptr is the actual array data.\n\t\t// In the former case, we want v.ptr + offset.\n\t\t// In the latter case, we must be doing Index(0), so offset = 0,\n\t\t// so v.ptr + offset is still okay.\n\t\tval := unsafe.Pointer(uintptr(v.ptr) + offset)\n\t\tfl := v.flag&(flagRO|flagIndir|flagAddr) | flag(typ.Kind()) // bits same as overall array\n\t\treturn Value{typ, val, fl}\n\n\tcase Slice:\n\t\t// Element flag same as Elem of Ptr.\n\t\t// Addressable, indirect, possibly read-only.\n\t\ts := (*sliceHeader)(v.ptr)\n\t\tif uint(i) >= uint(s.Len) {\n\t\t\tpanic(\"reflect: slice index out of range\")\n\t\t}\n\t\ttt := (*sliceType)(unsafe.Pointer(v.typ))\n\t\ttyp := tt.elem\n\t\tval := arrayAt(s.Data, i, typ.size)\n\t\tfl := flagAddr | flagIndir | v.flag&flagRO | flag(typ.Kind())\n\t\treturn Value{typ, val, fl}\n\n\tcase String:\n\t\ts := (*stringHeader)(v.ptr)\n\t\tif uint(i) >= uint(s.Len) {\n\t\t\tpanic(\"reflect: string index out of range\")\n\t\t}\n\t\tp := arrayAt(s.Data, i, 1)\n\t\tfl := v.flag&flagRO | flag(Uint8) | flagIndir\n\t\treturn Value{uint8Type, p, fl}\n\t}\n\tpanic(&ValueError{\"reflect.Value.Index\", v.kind()})\n}\n\n// Int returns v's underlying value, as an int64.\n// It panics if v's Kind is not Int, Int8, Int16, Int32, or Int64.\nfunc (v Value) Int() int64 {\n\tk := v.kind()\n\tp := v.ptr\n\tswitch k {\n\tcase Int:\n\t\treturn int64(*(*int)(p))\n\tcase Int8:\n\t\treturn int64(*(*int8)(p))\n\tcase Int16:\n\t\treturn int64(*(*int16)(p))\n\tcase Int32:\n\t\treturn int64(*(*int32)(p))\n\tcase Int64:\n\t\treturn 
int64(*(*int64)(p))\n\t}\n\tpanic(&ValueError{\"reflect.Value.Int\", v.kind()})\n}\n\n// CanInterface reports whether Interface can be used without panicking.\nfunc (v Value) CanInterface() bool {\n\tif v.flag == 0 {\n\t\tpanic(&ValueError{\"reflect.Value.CanInterface\", Invalid})\n\t}\n\treturn v.flag&flagRO == 0\n}\n\n// Interface returns v's current value as an interface{}.\n// It is equivalent to:\n//\tvar i interface{} = (v's underlying value)\n// It panics if the Value was obtained by accessing\n// unexported struct fields.\nfunc (v Value) Interface() (i interface{}) {\n\treturn valueInterface(v, true)\n}\n\nfunc valueInterface(v Value, safe bool) interface{} {\n\tif v.flag == 0 {\n\t\tpanic(&ValueError{\"reflect.Value.Interface\", 0})\n\t}\n\tif safe && v.flag&flagRO != 0 {\n\t\t// Do not allow access to unexported values via Interface,\n\t\t// because they might be pointers that should not be\n\t\t// writable or methods or functions that should not be callable.\n\t\tpanic(\"reflect.Value.Interface: cannot return value obtained from unexported field or method\")\n\t}\n\tif v.flag&flagMethod != 0 {\n\t\tv = makeMethodValue(\"Interface\", v)\n\t}\n\n\tif v.kind() == Interface {\n\t\t// Special case: return the element inside the interface.\n\t\t// Empty interface has one layout, all interfaces with\n\t\t// methods have a second layout.\n\t\tif v.NumMethod() == 0 {\n\t\t\treturn *(*interface{})(v.ptr)\n\t\t}\n\t\treturn *(*interface {\n\t\t\tM()\n\t\t})(v.ptr)\n\t}\n\n\t// TODO: pass safe to packEface so we don't need to copy if safe==true?\n\treturn packEface(v)\n}\n\n// InterfaceData returns the interface v's value as a uintptr pair.\n// It panics if v's Kind is not Interface.\nfunc (v Value) InterfaceData() [2]uintptr {\n\t// TODO: deprecate this\n\tv.mustBe(Interface)\n\t// We treat this as a read operation, so we allow\n\t// it even for unexported data, because the caller\n\t// has to import \"unsafe\" to turn it into something\n\t// that can be 
abused.\n\t// Interface value is always bigger than a word; assume flagIndir.\n\treturn *(*[2]uintptr)(v.ptr)\n}\n\n// IsNil reports whether its argument v is nil. The argument must be\n// a chan, func, interface, map, pointer, or slice value; if it is\n// not, IsNil panics. Note that IsNil is not always equivalent to a\n// regular comparison with nil in Go. For example, if v was created\n// by calling ValueOf with an uninitialized interface variable i,\n// i==nil will be true but v.IsNil will panic as v will be the zero\n// Value.\nfunc (v Value) IsNil() bool {\n\tk := v.kind()\n\tswitch k {\n\tcase Chan, Func, Map, Ptr:\n\t\tif v.flag&flagMethod != 0 {\n\t\t\treturn false\n\t\t}\n\t\tptr := v.ptr\n\t\tif v.flag&flagIndir != 0 {\n\t\t\tptr = *(*unsafe.Pointer)(ptr)\n\t\t}\n\t\treturn ptr == nil\n\tcase Interface, Slice:\n\t\t// Both interface and slice are nil if first word is 0.\n\t\t// Both are always bigger than a word; assume flagIndir.\n\t\treturn *(*unsafe.Pointer)(v.ptr) == nil\n\t}\n\tpanic(&ValueError{\"reflect.Value.IsNil\", v.kind()})\n}\n\n// IsValid reports whether v represents a value.\n// It returns false if v is the zero Value.\n// If IsValid returns false, all other methods except String panic.\n// Most functions and methods never return an invalid value.\n// If one does, its documentation states the conditions explicitly.\nfunc (v Value) IsValid() bool {\n\treturn v.flag != 0\n}\n\n// Kind returns v's Kind.\n// If v is the zero Value (IsValid returns false), Kind returns Invalid.\nfunc (v Value) Kind() Kind {\n\treturn v.kind()\n}\n\n// Len returns v's length.\n// It panics if v's Kind is not Array, Chan, Map, Slice, or String.\nfunc (v Value) Len() int {\n\tk := v.kind()\n\tswitch k {\n\tcase Array:\n\t\ttt := (*arrayType)(unsafe.Pointer(v.typ))\n\t\treturn int(tt.len)\n\tcase Chan:\n\t\treturn chanlen(v.pointer())\n\tcase Map:\n\t\treturn maplen(v.pointer())\n\tcase Slice:\n\t\t// Slice is bigger than a word; assume flagIndir.\n\t\treturn 
(*sliceHeader)(v.ptr).Len\n\tcase String:\n\t\t// String is bigger than a word; assume flagIndir.\n\t\treturn (*stringHeader)(v.ptr).Len\n\t}\n\tpanic(&ValueError{\"reflect.Value.Len\", v.kind()})\n}\n\n// MapIndex returns the value associated with key in the map v.\n// It panics if v's Kind is not Map.\n// It returns the zero Value if key is not found in the map or if v represents a nil map.\n// As in Go, the key's value must be assignable to the map's key type.\nfunc (v Value) MapIndex(key Value) Value {\n\tv.mustBe(Map)\n\ttt := (*mapType)(unsafe.Pointer(v.typ))\n\n\t// Do not require key to be exported, so that DeepEqual\n\t// and other programs can use all the keys returned by\n\t// MapKeys as arguments to MapIndex.  If either the map\n\t// or the key is unexported, though, the result will be\n\t// considered unexported.  This is consistent with the\n\t// behavior for structs, which allow read but not write\n\t// of unexported fields.\n\tkey = key.assignTo(\"reflect.Value.MapIndex\", tt.key, nil)\n\n\tvar k unsafe.Pointer\n\tif key.flag&flagIndir != 0 {\n\t\tk = key.ptr\n\t} else {\n\t\tk = unsafe.Pointer(&key.ptr)\n\t}\n\te := mapaccess(v.typ, v.pointer(), k)\n\tif e == nil {\n\t\treturn Value{}\n\t}\n\ttyp := tt.elem\n\tfl := (v.flag | key.flag) & flagRO\n\tfl |= flag(typ.Kind())\n\tif ifaceIndir(typ) {\n\t\t// Copy result so future changes to the map\n\t\t// won't change the underlying value.\n\t\tc := unsafe_New(typ)\n\t\ttypedmemmove(typ, c, e)\n\t\treturn Value{typ, c, fl | flagIndir}\n\t} else {\n\t\treturn Value{typ, *(*unsafe.Pointer)(e), fl}\n\t}\n}\n\n// MapKeys returns a slice containing all the keys present in the map,\n// in unspecified order.\n// It panics if v's Kind is not Map.\n// It returns an empty slice if v represents a nil map.\nfunc (v Value) MapKeys() []Value {\n\tv.mustBe(Map)\n\ttt := (*mapType)(unsafe.Pointer(v.typ))\n\tkeyType := tt.key\n\n\tfl := v.flag&flagRO | flag(keyType.Kind())\n\n\tm := v.pointer()\n\tmlen := int(0)\n\tif m 
!= nil {\n\t\tmlen = maplen(m)\n\t}\n\tit := mapiterinit(v.typ, m)\n\ta := make([]Value, mlen)\n\tvar i int\n\tfor i = 0; i < len(a); i++ {\n\t\tkey := mapiterkey(it)\n\t\tif key == nil {\n\t\t\t// Someone deleted an entry from the map since we\n\t\t\t// called maplen above.  It's a data race, but nothing\n\t\t\t// we can do about it.\n\t\t\tbreak\n\t\t}\n\t\tif ifaceIndir(keyType) {\n\t\t\t// Copy result so future changes to the map\n\t\t\t// won't change the underlying value.\n\t\t\tc := unsafe_New(keyType)\n\t\t\ttypedmemmove(keyType, c, key)\n\t\t\ta[i] = Value{keyType, c, fl | flagIndir}\n\t\t} else {\n\t\t\ta[i] = Value{keyType, *(*unsafe.Pointer)(key), fl}\n\t\t}\n\t\tmapiternext(it)\n\t}\n\treturn a[:i]\n}\n\n// Method returns a function value corresponding to v's i'th method.\n// The arguments to a Call on the returned function should not include\n// a receiver; the returned function will always use v as the receiver.\n// Method panics if i is out of range or if v is a nil interface value.\nfunc (v Value) Method(i int) Value {\n\tif v.typ == nil {\n\t\tpanic(&ValueError{\"reflect.Value.Method\", Invalid})\n\t}\n\tif v.flag&flagMethod != 0 || uint(i) >= uint(v.typ.NumMethod()) {\n\t\tpanic(\"reflect: Method index out of range\")\n\t}\n\tif v.typ.Kind() == Interface && v.IsNil() {\n\t\tpanic(\"reflect: Method on nil interface value\")\n\t}\n\tfl := v.flag & (flagStickyRO | flagIndir) // Clear flagEmbedRO\n\tfl |= flag(Func)\n\tfl |= flag(i)<<flagMethodShift | flagMethod\n\treturn Value{v.typ, v.ptr, fl}\n}\n\n// NumMethod returns the number of methods in the value's method set.\nfunc (v Value) NumMethod() int {\n\tif v.typ == nil {\n\t\tpanic(&ValueError{\"reflect.Value.NumMethod\", Invalid})\n\t}\n\tif v.flag&flagMethod != 0 {\n\t\treturn 0\n\t}\n\treturn v.typ.NumMethod()\n}\n\n// MethodByName returns a function value corresponding to the method\n// of v with the given name.\n// The arguments to a Call on the returned function should not include\n// a 
receiver; the returned function will always use v as the receiver.\n// It returns the zero Value if no method was found.\nfunc (v Value) MethodByName(name string) Value {\n\tif v.typ == nil {\n\t\tpanic(&ValueError{\"reflect.Value.MethodByName\", Invalid})\n\t}\n\tif v.flag&flagMethod != 0 {\n\t\treturn Value{}\n\t}\n\tm, ok := v.typ.MethodByName(name)\n\tif !ok {\n\t\treturn Value{}\n\t}\n\treturn v.Method(m.Index)\n}\n\n// NumField returns the number of fields in the struct v.\n// It panics if v's Kind is not Struct.\nfunc (v Value) NumField() int {\n\tv.mustBe(Struct)\n\ttt := (*structType)(unsafe.Pointer(v.typ))\n\treturn len(tt.fields)\n}\n\n// OverflowComplex reports whether the complex128 x cannot be represented by v's type.\n// It panics if v's Kind is not Complex64 or Complex128.\nfunc (v Value) OverflowComplex(x complex128) bool {\n\tk := v.kind()\n\tswitch k {\n\tcase Complex64:\n\t\treturn overflowFloat32(real(x)) || overflowFloat32(imag(x))\n\tcase Complex128:\n\t\treturn false\n\t}\n\tpanic(&ValueError{\"reflect.Value.OverflowComplex\", v.kind()})\n}\n\n// OverflowFloat reports whether the float64 x cannot be represented by v's type.\n// It panics if v's Kind is not Float32 or Float64.\nfunc (v Value) OverflowFloat(x float64) bool {\n\tk := v.kind()\n\tswitch k {\n\tcase Float32:\n\t\treturn overflowFloat32(x)\n\tcase Float64:\n\t\treturn false\n\t}\n\tpanic(&ValueError{\"reflect.Value.OverflowFloat\", v.kind()})\n}\n\nfunc overflowFloat32(x float64) bool {\n\tif x < 0 {\n\t\tx = -x\n\t}\n\treturn math.MaxFloat32 < x && x <= math.MaxFloat64\n}\n\n// OverflowInt reports whether the int64 x cannot be represented by v's type.\n// It panics if v's Kind is not Int, Int8, Int16, Int32, or Int64.\nfunc (v Value) OverflowInt(x int64) bool {\n\tk := v.kind()\n\tswitch k {\n\tcase Int, Int8, Int16, Int32, Int64:\n\t\tbitSize := v.typ.size * 8\n\t\ttrunc := (x << (64 - bitSize)) >> (64 - bitSize)\n\t\treturn x != 
trunc\n\t}\n\tpanic(&ValueError{\"reflect.Value.OverflowInt\", v.kind()})\n}\n\n// OverflowUint reports whether the uint64 x cannot be represented by v's type.\n// It panics if v's Kind is not Uint, Uintptr, Uint8, Uint16, Uint32, or Uint64.\nfunc (v Value) OverflowUint(x uint64) bool {\n\tk := v.kind()\n\tswitch k {\n\tcase Uint, Uintptr, Uint8, Uint16, Uint32, Uint64:\n\t\tbitSize := v.typ.size * 8\n\t\ttrunc := (x << (64 - bitSize)) >> (64 - bitSize)\n\t\treturn x != trunc\n\t}\n\tpanic(&ValueError{\"reflect.Value.OverflowUint\", v.kind()})\n}\n\n// Pointer returns v's value as a uintptr.\n// It returns uintptr instead of unsafe.Pointer so that\n// code using reflect cannot obtain unsafe.Pointers\n// without importing the unsafe package explicitly.\n// It panics if v's Kind is not Chan, Func, Map, Ptr, Slice, or UnsafePointer.\n//\n// If v's Kind is Func, the returned pointer is an underlying\n// code pointer, but not necessarily enough to identify a\n// single function uniquely. The only guarantee is that the\n// result is zero if and only if v is a nil func Value.\n//\n// If v's Kind is Slice, the returned pointer is to the first\n// element of the slice.  If the slice is nil the returned value\n// is 0.  If the slice is empty but non-nil the return value is non-zero.\nfunc (v Value) Pointer() uintptr {\n\t// TODO: deprecate\n\tk := v.kind()\n\tswitch k {\n\tcase Chan, Map, Ptr, UnsafePointer:\n\t\treturn uintptr(v.pointer())\n\tcase Func:\n\t\tif v.flag&flagMethod != 0 {\n\t\t\t// As the doc comment says, the returned pointer is an\n\t\t\t// underlying code pointer but not necessarily enough to\n\t\t\t// identify a single function uniquely. All method expressions\n\t\t\t// created via reflect have the same underlying code pointer,\n\t\t\t// so their Pointers are equal. 
The function used here must\n\t\t\t// match the one used in makeMethodValue.\n\t\t\tf := methodValueCall\n\t\t\treturn **(**uintptr)(unsafe.Pointer(&f))\n\t\t}\n\t\tp := v.pointer()\n\t\t// Non-nil func value points at data block.\n\t\t// First word of data block is actual code.\n\t\tif p != nil {\n\t\t\tp = *(*unsafe.Pointer)(p)\n\t\t}\n\t\treturn uintptr(p)\n\n\tcase Slice:\n\t\treturn (*SliceHeader)(v.ptr).Data\n\t}\n\tpanic(&ValueError{\"reflect.Value.Pointer\", v.kind()})\n}\n\n// Recv receives and returns a value from the channel v.\n// It panics if v's Kind is not Chan.\n// The receive blocks until a value is ready.\n// The boolean value ok is true if the value x corresponds to a send\n// on the channel, false if it is a zero value received because the channel is closed.\nfunc (v Value) Recv() (x Value, ok bool) {\n\tv.mustBe(Chan)\n\tv.mustBeExported()\n\treturn v.recv(false)\n}\n\n// internal recv, possibly non-blocking (nb).\n// v is known to be a channel.\nfunc (v Value) recv(nb bool) (val Value, ok bool) {\n\ttt := (*chanType)(unsafe.Pointer(v.typ))\n\tif ChanDir(tt.dir)&RecvDir == 0 {\n\t\tpanic(\"reflect: recv on send-only channel\")\n\t}\n\tt := tt.elem\n\tval = Value{t, nil, flag(t.Kind())}\n\tvar p unsafe.Pointer\n\tif ifaceIndir(t) {\n\t\tp = unsafe_New(t)\n\t\tval.ptr = p\n\t\tval.flag |= flagIndir\n\t} else {\n\t\tp = unsafe.Pointer(&val.ptr)\n\t}\n\tselected, ok := chanrecv(v.typ, v.pointer(), nb, p)\n\tif !selected {\n\t\tval = Value{}\n\t}\n\treturn\n}\n\n// Send sends x on the channel v.\n// It panics if v's kind is not Chan or if x's type is not the same type as v's element type.\n// As in Go, x's value must be assignable to the channel's element type.\nfunc (v Value) Send(x Value) {\n\tv.mustBe(Chan)\n\tv.mustBeExported()\n\tv.send(x, false)\n}\n\n// internal send, possibly non-blocking.\n// v is known to be a channel.\nfunc (v Value) send(x Value, nb bool) (selected bool) {\n\ttt := (*chanType)(unsafe.Pointer(v.typ))\n\tif 
ChanDir(tt.dir)&SendDir == 0 {\n\t\tpanic(\"reflect: send on recv-only channel\")\n\t}\n\tx.mustBeExported()\n\tx = x.assignTo(\"reflect.Value.Send\", tt.elem, nil)\n\tvar p unsafe.Pointer\n\tif x.flag&flagIndir != 0 {\n\t\tp = x.ptr\n\t} else {\n\t\tp = unsafe.Pointer(&x.ptr)\n\t}\n\treturn chansend(v.typ, v.pointer(), p, nb)\n}\n\n// Set assigns x to the value v.\n// It panics if CanSet returns false.\n// As in Go, x's value must be assignable to v's type.\nfunc (v Value) Set(x Value) {\n\tv.mustBeAssignable()\n\tx.mustBeExported() // do not let unexported x leak\n\tvar target unsafe.Pointer\n\tif v.kind() == Interface {\n\t\ttarget = v.ptr\n\t}\n\tx = x.assignTo(\"reflect.Set\", v.typ, target)\n\tif x.flag&flagIndir != 0 {\n\t\ttypedmemmove(v.typ, v.ptr, x.ptr)\n\t} else {\n\t\t*(*unsafe.Pointer)(v.ptr) = x.ptr\n\t}\n}\n\n// SetBool sets v's underlying value.\n// It panics if v's Kind is not Bool or if CanSet() is false.\nfunc (v Value) SetBool(x bool) {\n\tv.mustBeAssignable()\n\tv.mustBe(Bool)\n\t*(*bool)(v.ptr) = x\n}\n\n// SetBytes sets v's underlying value.\n// It panics if v's underlying value is not a slice of bytes.\nfunc (v Value) SetBytes(x []byte) {\n\tv.mustBeAssignable()\n\tv.mustBe(Slice)\n\tif v.typ.Elem().Kind() != Uint8 {\n\t\tpanic(\"reflect.Value.SetBytes of non-byte slice\")\n\t}\n\t*(*[]byte)(v.ptr) = x\n}\n\n// setRunes sets v's underlying value.\n// It panics if v's underlying value is not a slice of runes (int32s).\nfunc (v Value) setRunes(x []rune) {\n\tv.mustBeAssignable()\n\tv.mustBe(Slice)\n\tif v.typ.Elem().Kind() != Int32 {\n\t\tpanic(\"reflect.Value.setRunes of non-rune slice\")\n\t}\n\t*(*[]rune)(v.ptr) = x\n}\n\n// SetComplex sets v's underlying value to x.\n// It panics if v's Kind is not Complex64 or Complex128, or if CanSet() is false.\nfunc (v Value) SetComplex(x complex128) {\n\tv.mustBeAssignable()\n\tswitch k := v.kind(); k {\n\tdefault:\n\t\tpanic(&ValueError{\"reflect.Value.SetComplex\", v.kind()})\n\tcase 
Complex64:\n\t\t*(*complex64)(v.ptr) = complex64(x)\n\tcase Complex128:\n\t\t*(*complex128)(v.ptr) = x\n\t}\n}\n\n// SetFloat sets v's underlying value to x.\n// It panics if v's Kind is not Float32 or Float64, or if CanSet() is false.\nfunc (v Value) SetFloat(x float64) {\n\tv.mustBeAssignable()\n\tswitch k := v.kind(); k {\n\tdefault:\n\t\tpanic(&ValueError{\"reflect.Value.SetFloat\", v.kind()})\n\tcase Float32:\n\t\t*(*float32)(v.ptr) = float32(x)\n\tcase Float64:\n\t\t*(*float64)(v.ptr) = x\n\t}\n}\n\n// SetInt sets v's underlying value to x.\n// It panics if v's Kind is not Int, Int8, Int16, Int32, or Int64, or if CanSet() is false.\nfunc (v Value) SetInt(x int64) {\n\tv.mustBeAssignable()\n\tswitch k := v.kind(); k {\n\tdefault:\n\t\tpanic(&ValueError{\"reflect.Value.SetInt\", v.kind()})\n\tcase Int:\n\t\t*(*int)(v.ptr) = int(x)\n\tcase Int8:\n\t\t*(*int8)(v.ptr) = int8(x)\n\tcase Int16:\n\t\t*(*int16)(v.ptr) = int16(x)\n\tcase Int32:\n\t\t*(*int32)(v.ptr) = int32(x)\n\tcase Int64:\n\t\t*(*int64)(v.ptr) = x\n\t}\n}\n\n// SetLen sets v's length to n.\n// It panics if v's Kind is not Slice or if n is negative or\n// greater than the capacity of the slice.\nfunc (v Value) SetLen(n int) {\n\tv.mustBeAssignable()\n\tv.mustBe(Slice)\n\ts := (*sliceHeader)(v.ptr)\n\tif uint(n) > uint(s.Cap) {\n\t\tpanic(\"reflect: slice length out of range in SetLen\")\n\t}\n\ts.Len = n\n}\n\n// SetCap sets v's capacity to n.\n// It panics if v's Kind is not Slice or if n is smaller than the length or\n// greater than the capacity of the slice.\nfunc (v Value) SetCap(n int) {\n\tv.mustBeAssignable()\n\tv.mustBe(Slice)\n\ts := (*sliceHeader)(v.ptr)\n\tif n < int(s.Len) || n > int(s.Cap) {\n\t\tpanic(\"reflect: slice capacity out of range in SetCap\")\n\t}\n\ts.Cap = n\n}\n\n// SetMapIndex sets the value associated with key in the map v to val.\n// It panics if v's Kind is not Map.\n// If val is the zero Value, SetMapIndex deletes the key from the map.\n// Otherwise if v holds a nil 
map, SetMapIndex will panic.\n// As in Go, key's value must be assignable to the map's key type,\n// and val's value must be assignable to the map's value type.\nfunc (v Value) SetMapIndex(key, val Value) {\n\tv.mustBe(Map)\n\tv.mustBeExported()\n\tkey.mustBeExported()\n\ttt := (*mapType)(unsafe.Pointer(v.typ))\n\tkey = key.assignTo(\"reflect.Value.SetMapIndex\", tt.key, nil)\n\tvar k unsafe.Pointer\n\tif key.flag&flagIndir != 0 {\n\t\tk = key.ptr\n\t} else {\n\t\tk = unsafe.Pointer(&key.ptr)\n\t}\n\tif val.typ == nil {\n\t\tmapdelete(v.typ, v.pointer(), k)\n\t\treturn\n\t}\n\tval.mustBeExported()\n\tval = val.assignTo(\"reflect.Value.SetMapIndex\", tt.elem, nil)\n\tvar e unsafe.Pointer\n\tif val.flag&flagIndir != 0 {\n\t\te = val.ptr\n\t} else {\n\t\te = unsafe.Pointer(&val.ptr)\n\t}\n\tmapassign(v.typ, v.pointer(), k, e)\n}\n\n// SetUint sets v's underlying value to x.\n// It panics if v's Kind is not Uint, Uintptr, Uint8, Uint16, Uint32, or Uint64, or if CanSet() is false.\nfunc (v Value) SetUint(x uint64) {\n\tv.mustBeAssignable()\n\tswitch k := v.kind(); k {\n\tdefault:\n\t\tpanic(&ValueError{\"reflect.Value.SetUint\", v.kind()})\n\tcase Uint:\n\t\t*(*uint)(v.ptr) = uint(x)\n\tcase Uint8:\n\t\t*(*uint8)(v.ptr) = uint8(x)\n\tcase Uint16:\n\t\t*(*uint16)(v.ptr) = uint16(x)\n\tcase Uint32:\n\t\t*(*uint32)(v.ptr) = uint32(x)\n\tcase Uint64:\n\t\t*(*uint64)(v.ptr) = x\n\tcase Uintptr:\n\t\t*(*uintptr)(v.ptr) = uintptr(x)\n\t}\n}\n\n// SetPointer sets the unsafe.Pointer value v to x.\n// It panics if v's Kind is not UnsafePointer.\nfunc (v Value) SetPointer(x unsafe.Pointer) {\n\tv.mustBeAssignable()\n\tv.mustBe(UnsafePointer)\n\t*(*unsafe.Pointer)(v.ptr) = x\n}\n\n// SetString sets v's underlying value to x.\n// It panics if v's Kind is not String or if CanSet() is false.\nfunc (v Value) SetString(x string) {\n\tv.mustBeAssignable()\n\tv.mustBe(String)\n\t*(*string)(v.ptr) = x\n}\n\n// Slice returns v[i:j].\n// It panics if v's Kind is not Array, Slice or String, 
or if v is an unaddressable array,\n// or if the indexes are out of bounds.\nfunc (v Value) Slice(i, j int) Value {\n\tvar (\n\t\tcap  int\n\t\ttyp  *sliceType\n\t\tbase unsafe.Pointer\n\t)\n\tswitch kind := v.kind(); kind {\n\tdefault:\n\t\tpanic(&ValueError{\"reflect.Value.Slice\", v.kind()})\n\n\tcase Array:\n\t\tif v.flag&flagAddr == 0 {\n\t\t\tpanic(\"reflect.Value.Slice: slice of unaddressable array\")\n\t\t}\n\t\ttt := (*arrayType)(unsafe.Pointer(v.typ))\n\t\tcap = int(tt.len)\n\t\ttyp = (*sliceType)(unsafe.Pointer(tt.slice))\n\t\tbase = v.ptr\n\n\tcase Slice:\n\t\ttyp = (*sliceType)(unsafe.Pointer(v.typ))\n\t\ts := (*sliceHeader)(v.ptr)\n\t\tbase = unsafe.Pointer(s.Data)\n\t\tcap = s.Cap\n\n\tcase String:\n\t\ts := (*stringHeader)(v.ptr)\n\t\tif i < 0 || j < i || j > s.Len {\n\t\t\tpanic(\"reflect.Value.Slice: string slice index out of bounds\")\n\t\t}\n\t\tt := stringHeader{arrayAt(s.Data, i, 1), j - i}\n\t\treturn Value{v.typ, unsafe.Pointer(&t), v.flag}\n\t}\n\n\tif i < 0 || j < i || j > cap {\n\t\tpanic(\"reflect.Value.Slice: slice index out of bounds\")\n\t}\n\n\t// Declare slice so that gc can see the base pointer in it.\n\tvar x []unsafe.Pointer\n\n\t// Reinterpret as *sliceHeader to edit.\n\ts := (*sliceHeader)(unsafe.Pointer(&x))\n\ts.Len = j - i\n\ts.Cap = cap - i\n\tif cap-i > 0 {\n\t\ts.Data = arrayAt(base, i, typ.elem.Size())\n\t} else {\n\t\t// do not advance pointer, to avoid pointing beyond end of slice\n\t\ts.Data = base\n\t}\n\n\tfl := v.flag&flagRO | flagIndir | flag(Slice)\n\treturn Value{typ.common(), unsafe.Pointer(&x), fl}\n}\n\n// Slice3 is the 3-index form of the slice operation: it returns v[i:j:k].\n// It panics if v's Kind is not Array or Slice, or if v is an unaddressable array,\n// or if the indexes are out of bounds.\nfunc (v Value) Slice3(i, j, k int) Value {\n\tvar (\n\t\tcap  int\n\t\ttyp  *sliceType\n\t\tbase unsafe.Pointer\n\t)\n\tswitch kind := v.kind(); kind {\n\tdefault:\n\t\tpanic(&ValueError{\"reflect.Value.Slice3\", 
v.kind()})\n\n\tcase Array:\n\t\tif v.flag&flagAddr == 0 {\n\t\t\tpanic(\"reflect.Value.Slice3: slice of unaddressable array\")\n\t\t}\n\t\ttt := (*arrayType)(unsafe.Pointer(v.typ))\n\t\tcap = int(tt.len)\n\t\ttyp = (*sliceType)(unsafe.Pointer(tt.slice))\n\t\tbase = v.ptr\n\n\tcase Slice:\n\t\ttyp = (*sliceType)(unsafe.Pointer(v.typ))\n\t\ts := (*sliceHeader)(v.ptr)\n\t\tbase = s.Data\n\t\tcap = s.Cap\n\t}\n\n\tif i < 0 || j < i || k < j || k > cap {\n\t\tpanic(\"reflect.Value.Slice3: slice index out of bounds\")\n\t}\n\n\t// Declare slice so that the garbage collector\n\t// can see the base pointer in it.\n\tvar x []unsafe.Pointer\n\n\t// Reinterpret as *sliceHeader to edit.\n\ts := (*sliceHeader)(unsafe.Pointer(&x))\n\ts.Len = j - i\n\ts.Cap = k - i\n\tif k-i > 0 {\n\t\ts.Data = arrayAt(base, i, typ.elem.Size())\n\t} else {\n\t\t// do not advance pointer, to avoid pointing beyond end of slice\n\t\ts.Data = base\n\t}\n\n\tfl := v.flag&flagRO | flagIndir | flag(Slice)\n\treturn Value{typ.common(), unsafe.Pointer(&x), fl}\n}\n\n// String returns the string v's underlying value, as a string.\n// String is a special case because of Go's String method convention.\n// Unlike the other getters, it does not panic if v's Kind is not String.\n// Instead, it returns a string of the form \"<T value>\" where T is v's type.\n// The fmt package treats Values specially. It does not call their String\n// method implicitly but instead prints the concrete values they hold.\nfunc (v Value) String() string {\n\tswitch k := v.kind(); k {\n\tcase Invalid:\n\t\treturn \"<invalid Value>\"\n\tcase String:\n\t\treturn *(*string)(v.ptr)\n\t}\n\t// If you call String on a reflect.Value of other type, it's better to\n\t// print something than to panic. 
Useful in debugging.\n\treturn \"<\" + v.Type().String() + \" Value>\"\n}\n\n// TryRecv attempts to receive a value from the channel v but will not block.\n// It panics if v's Kind is not Chan.\n// If the receive delivers a value, x is the transferred value and ok is true.\n// If the receive cannot finish without blocking, x is the zero Value and ok is false.\n// If the channel is closed, x is the zero value for the channel's element type and ok is false.\nfunc (v Value) TryRecv() (x Value, ok bool) {\n\tv.mustBe(Chan)\n\tv.mustBeExported()\n\treturn v.recv(true)\n}\n\n// TrySend attempts to send x on the channel v but will not block.\n// It panics if v's Kind is not Chan.\n// It reports whether the value was sent.\n// As in Go, x's value must be assignable to the channel's element type.\nfunc (v Value) TrySend(x Value) bool {\n\tv.mustBe(Chan)\n\tv.mustBeExported()\n\treturn v.send(x, true)\n}\n\n// Type returns v's type.\nfunc (v Value) Type() Type {\n\tf := v.flag\n\tif f == 0 {\n\t\tpanic(&ValueError{\"reflect.Value.Type\", Invalid})\n\t}\n\tif f&flagMethod == 0 {\n\t\t// Easy case\n\t\treturn v.typ\n\t}\n\n\t// Method value.\n\t// v.typ describes the receiver, not the method type.\n\ti := int(v.flag) >> flagMethodShift\n\tif v.typ.Kind() == Interface {\n\t\t// Method on interface.\n\t\ttt := (*interfaceType)(unsafe.Pointer(v.typ))\n\t\tif uint(i) >= uint(len(tt.methods)) {\n\t\t\tpanic(\"reflect: internal error: invalid method index\")\n\t\t}\n\t\tm := &tt.methods[i]\n\t\treturn m.typ\n\t}\n\t// Method on concrete type.\n\tut := v.typ.uncommon()\n\tif ut == nil || uint(i) >= uint(len(ut.methods)) {\n\t\tpanic(\"reflect: internal error: invalid method index\")\n\t}\n\tm := &ut.methods[i]\n\treturn m.mtyp\n}\n\n// Uint returns v's underlying value, as a uint64.\n// It panics if v's Kind is not Uint, Uintptr, Uint8, Uint16, Uint32, or Uint64.\nfunc (v Value) Uint() uint64 {\n\tk := v.kind()\n\tp := v.ptr\n\tswitch k {\n\tcase Uint:\n\t\treturn 
uint64(*(*uint)(p))\n\tcase Uint8:\n\t\treturn uint64(*(*uint8)(p))\n\tcase Uint16:\n\t\treturn uint64(*(*uint16)(p))\n\tcase Uint32:\n\t\treturn uint64(*(*uint32)(p))\n\tcase Uint64:\n\t\treturn uint64(*(*uint64)(p))\n\tcase Uintptr:\n\t\treturn uint64(*(*uintptr)(p))\n\t}\n\tpanic(&ValueError{\"reflect.Value.Uint\", v.kind()})\n}\n\n// UnsafeAddr returns a pointer to v's data.\n// It is for advanced clients that also import the \"unsafe\" package.\n// It panics if v is not addressable.\nfunc (v Value) UnsafeAddr() uintptr {\n\t// TODO: deprecate\n\tif v.typ == nil {\n\t\tpanic(&ValueError{\"reflect.Value.UnsafeAddr\", Invalid})\n\t}\n\tif v.flag&flagAddr == 0 {\n\t\tpanic(\"reflect.Value.UnsafeAddr of unaddressable value\")\n\t}\n\treturn uintptr(v.ptr)\n}\n\n// StringHeader is the runtime representation of a string.\n// It cannot be used safely or portably and its representation may\n// change in a later release.\n// Moreover, the Data field is not sufficient to guarantee the data\n// it references will not be garbage collected, so programs must keep\n// a separate, correctly typed pointer to the underlying data.\ntype StringHeader struct {\n\tData uintptr\n\tLen  int\n}\n\n// stringHeader is a safe version of StringHeader used within this package.\ntype stringHeader struct {\n\tData unsafe.Pointer\n\tLen  int\n}\n\n// SliceHeader is the runtime representation of a slice.\n// It cannot be used safely or portably and its representation may\n// change in a later release.\n// Moreover, the Data field is not sufficient to guarantee the data\n// it references will not be garbage collected, so programs must keep\n// a separate, correctly typed pointer to the underlying data.\ntype SliceHeader struct {\n\tData uintptr\n\tLen  int\n\tCap  int\n}\n\n// sliceHeader is a safe version of SliceHeader used within this package.\ntype sliceHeader struct {\n\tData unsafe.Pointer\n\tLen  int\n\tCap  int\n}\n\nfunc typesMustMatch(what string, t1, t2 Type) {\n\tif t1 != t2 
{\n\t\tpanic(what + \": \" + t1.String() + \" != \" + t2.String())\n\t}\n}\n\n// arrayAt returns the i-th element of p, a C-array whose elements are\n// eltSize wide (in bytes).\nfunc arrayAt(p unsafe.Pointer, i int, eltSize uintptr) unsafe.Pointer {\n\treturn unsafe.Pointer(uintptr(p) + uintptr(i)*eltSize)\n}\n\n// grow grows the slice s so that it can hold extra more values, allocating\n// more capacity if needed. It also returns the old and new slice lengths.\nfunc grow(s Value, extra int) (Value, int, int) {\n\ti0 := s.Len()\n\ti1 := i0 + extra\n\tif i1 < i0 {\n\t\tpanic(\"reflect.Append: slice overflow\")\n\t}\n\tm := s.Cap()\n\tif i1 <= m {\n\t\treturn s.Slice(0, i1), i0, i1\n\t}\n\tif m == 0 {\n\t\tm = extra\n\t} else {\n\t\tfor m < i1 {\n\t\t\tif i0 < 1024 {\n\t\t\t\tm += m\n\t\t\t} else {\n\t\t\t\tm += m / 4\n\t\t\t}\n\t\t}\n\t}\n\tt := MakeSlice(s.Type(), i1, m)\n\tCopy(t, s)\n\treturn t, i0, i1\n}\n\n// Append appends the values x to a slice s and returns the resulting slice.\n// As in Go, each x's value must be assignable to the slice's element type.\nfunc Append(s Value, x ...Value) Value {\n\ts.mustBe(Slice)\n\ts, i0, i1 := grow(s, len(x))\n\tfor i, j := i0, 0; i < i1; i, j = i+1, j+1 {\n\t\ts.Index(i).Set(x[j])\n\t}\n\treturn s\n}\n\n// AppendSlice appends a slice t to a slice s and returns the resulting slice.\n// The slices s and t must have the same element type.\nfunc AppendSlice(s, t Value) Value {\n\ts.mustBe(Slice)\n\tt.mustBe(Slice)\n\ttypesMustMatch(\"reflect.AppendSlice\", s.Type().Elem(), t.Type().Elem())\n\ts, i0, i1 := grow(s, t.Len())\n\tCopy(s.Slice(i0, i1), t)\n\treturn s\n}\n\n// Copy copies the contents of src into dst until either\n// dst has been filled or src has been exhausted.\n// It returns the number of elements copied.\n// Dst and src each must have kind Slice or Array, and\n// dst and src must have the same element type.\nfunc Copy(dst, src Value) int {\n\tdk := dst.kind()\n\tif dk != Array && dk != Slice 
{\n\t\tpanic(&ValueError{\"reflect.Copy\", dk})\n\t}\n\tif dk == Array {\n\t\tdst.mustBeAssignable()\n\t}\n\tdst.mustBeExported()\n\n\tsk := src.kind()\n\tif sk != Array && sk != Slice {\n\t\tpanic(&ValueError{\"reflect.Copy\", sk})\n\t}\n\tsrc.mustBeExported()\n\n\tde := dst.typ.Elem()\n\tse := src.typ.Elem()\n\ttypesMustMatch(\"reflect.Copy\", de, se)\n\n\tvar ds, ss sliceHeader\n\tif dk == Array {\n\t\tds.Data = dst.ptr\n\t\tds.Len = dst.Len()\n\t\tds.Cap = ds.Len\n\t} else {\n\t\tds = *(*sliceHeader)(dst.ptr)\n\t}\n\tif sk == Array {\n\t\tss.Data = src.ptr\n\t\tss.Len = src.Len()\n\t\tss.Cap = ss.Len\n\t} else {\n\t\tss = *(*sliceHeader)(src.ptr)\n\t}\n\n\treturn typedslicecopy(de.common(), ds, ss)\n}\n\n// A runtimeSelect is a single case passed to rselect.\n// This must match ../runtime/select.go:/runtimeSelect\ntype runtimeSelect struct {\n\tdir uintptr        // 0, SendDir, or RecvDir\n\ttyp *rtype         // channel type\n\tch  unsafe.Pointer // channel\n\tval unsafe.Pointer // ptr to data (SendDir) or ptr to receive buffer (RecvDir)\n}\n\n// rselect runs a select.  
It returns the index of the chosen case.\n// If the case was a receive, val is filled in with the received value.\n// The conventional OK bool indicates whether the receive corresponds\n// to a sent value.\n//go:noescape\nfunc rselect([]runtimeSelect) (chosen int, recvOK bool)\n\n// A SelectDir describes the communication direction of a select case.\ntype SelectDir int\n\n// NOTE: These values must match ../runtime/select.go:/selectDir.\n\nconst (\n\t_             SelectDir = iota\n\tSelectSend              // case Chan <- Send\n\tSelectRecv              // case <-Chan:\n\tSelectDefault           // default\n)\n\n// A SelectCase describes a single case in a select operation.\n// The kind of case depends on Dir, the communication direction.\n//\n// If Dir is SelectDefault, the case represents a default case.\n// Chan and Send must be zero Values.\n//\n// If Dir is SelectSend, the case represents a send operation.\n// Normally Chan's underlying value must be a channel, and Send's underlying value must be\n// assignable to the channel's element type. As a special case, if Chan is a zero Value,\n// then the case is ignored, and the field Send will also be ignored and may be either zero\n// or non-zero.\n//\n// If Dir is SelectRecv, the case represents a receive operation.\n// Normally Chan's underlying value must be a channel and Send must be a zero Value.\n// If Chan is a zero Value, then the case is ignored, but Send must still be a zero Value.\n// When a receive operation is selected, the received Value is returned by Select.\n//\ntype SelectCase struct {\n\tDir  SelectDir // direction of case\n\tChan Value     // channel to use (for send or receive)\n\tSend Value     // value to send (for send)\n}\n\n// Select executes a select operation described by the list of cases.\n// Like the Go select statement, it blocks until at least one of the cases\n// can proceed, makes a uniform pseudo-random choice,\n// and then executes that case. 
It returns the index of the chosen case\n// and, if that case was a receive operation, the value received and a\n// boolean indicating whether the value corresponds to a send on the channel\n// (as opposed to a zero value received because the channel is closed).\nfunc Select(cases []SelectCase) (chosen int, recv Value, recvOK bool) {\n\t// NOTE: Do not trust that caller is not modifying cases data underfoot.\n\t// The range is safe because the caller cannot modify our copy of the len\n\t// and each iteration makes its own copy of the value c.\n\truncases := make([]runtimeSelect, len(cases))\n\thaveDefault := false\n\tfor i, c := range cases {\n\t\trc := &runcases[i]\n\t\trc.dir = uintptr(c.Dir)\n\t\tswitch c.Dir {\n\t\tdefault:\n\t\t\tpanic(\"reflect.Select: invalid Dir\")\n\n\t\tcase SelectDefault: // default\n\t\t\tif haveDefault {\n\t\t\t\tpanic(\"reflect.Select: multiple default cases\")\n\t\t\t}\n\t\t\thaveDefault = true\n\t\t\tif c.Chan.IsValid() {\n\t\t\t\tpanic(\"reflect.Select: default case has Chan value\")\n\t\t\t}\n\t\t\tif c.Send.IsValid() {\n\t\t\t\tpanic(\"reflect.Select: default case has Send value\")\n\t\t\t}\n\n\t\tcase SelectSend:\n\t\t\tch := c.Chan\n\t\t\tif !ch.IsValid() {\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tch.mustBe(Chan)\n\t\t\tch.mustBeExported()\n\t\t\ttt := (*chanType)(unsafe.Pointer(ch.typ))\n\t\t\tif ChanDir(tt.dir)&SendDir == 0 {\n\t\t\t\tpanic(\"reflect.Select: SendDir case using recv-only channel\")\n\t\t\t}\n\t\t\trc.ch = ch.pointer()\n\t\t\trc.typ = &tt.rtype\n\t\t\tv := c.Send\n\t\t\tif !v.IsValid() {\n\t\t\t\tpanic(\"reflect.Select: SendDir case missing Send value\")\n\t\t\t}\n\t\t\tv.mustBeExported()\n\t\t\tv = v.assignTo(\"reflect.Select\", tt.elem, nil)\n\t\t\tif v.flag&flagIndir != 0 {\n\t\t\t\trc.val = v.ptr\n\t\t\t} else {\n\t\t\t\trc.val = unsafe.Pointer(&v.ptr)\n\t\t\t}\n\n\t\tcase SelectRecv:\n\t\t\tif c.Send.IsValid() {\n\t\t\t\tpanic(\"reflect.Select: RecvDir case has Send value\")\n\t\t\t}\n\t\t\tch := c.Chan\n\t\t\tif 
!ch.IsValid() {\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tch.mustBe(Chan)\n\t\t\tch.mustBeExported()\n\t\t\ttt := (*chanType)(unsafe.Pointer(ch.typ))\n\t\t\tif ChanDir(tt.dir)&RecvDir == 0 {\n\t\t\t\tpanic(\"reflect.Select: RecvDir case using send-only channel\")\n\t\t\t}\n\t\t\trc.ch = ch.pointer()\n\t\t\trc.typ = &tt.rtype\n\t\t\trc.val = unsafe_New(tt.elem)\n\t\t}\n\t}\n\n\tchosen, recvOK = rselect(runcases)\n\tif runcases[chosen].dir == uintptr(SelectRecv) {\n\t\ttt := (*chanType)(unsafe.Pointer(runcases[chosen].typ))\n\t\tt := tt.elem\n\t\tp := runcases[chosen].val\n\t\tfl := flag(t.Kind())\n\t\tif ifaceIndir(t) {\n\t\t\trecv = Value{t, p, fl | flagIndir}\n\t\t} else {\n\t\t\trecv = Value{t, *(*unsafe.Pointer)(p), fl}\n\t\t}\n\t}\n\treturn chosen, recv, recvOK\n}\n\n/*\n * constructors\n */\n\n// implemented in package runtime\nfunc unsafe_New(*rtype) unsafe.Pointer\nfunc unsafe_NewArray(*rtype, int) unsafe.Pointer\n\n// MakeSlice creates a new zero-initialized slice value\n// for the specified slice type, length, and capacity.\nfunc MakeSlice(typ Type, len, cap int) Value {\n\tif typ.Kind() != Slice {\n\t\tpanic(\"reflect.MakeSlice of non-slice type\")\n\t}\n\tif len < 0 {\n\t\tpanic(\"reflect.MakeSlice: negative len\")\n\t}\n\tif cap < 0 {\n\t\tpanic(\"reflect.MakeSlice: negative cap\")\n\t}\n\tif len > cap {\n\t\tpanic(\"reflect.MakeSlice: len > cap\")\n\t}\n\n\ts := sliceHeader{unsafe_NewArray(typ.Elem().(*rtype), cap), len, cap}\n\treturn Value{typ.common(), unsafe.Pointer(&s), flagIndir | flag(Slice)}\n}\n\n// MakeChan creates a new channel with the specified type and buffer size.\nfunc MakeChan(typ Type, buffer int) Value {\n\tif typ.Kind() != Chan {\n\t\tpanic(\"reflect.MakeChan of non-chan type\")\n\t}\n\tif buffer < 0 {\n\t\tpanic(\"reflect.MakeChan: negative buffer size\")\n\t}\n\tif typ.ChanDir() != BothDir {\n\t\tpanic(\"reflect.MakeChan: unidirectional channel type\")\n\t}\n\tch := makechan(typ.(*rtype), uint64(buffer))\n\treturn Value{typ.common(), ch, 
flag(Chan)}\n}\n\n// MakeMap creates a new map of the specified type.\nfunc MakeMap(typ Type) Value {\n\tif typ.Kind() != Map {\n\t\tpanic(\"reflect.MakeMap of non-map type\")\n\t}\n\tm := makemap(typ.(*rtype))\n\treturn Value{typ.common(), m, flag(Map)}\n}\n\n// Indirect returns the value that v points to.\n// If v is a nil pointer, Indirect returns a zero Value.\n// If v is not a pointer, Indirect returns v.\nfunc Indirect(v Value) Value {\n\tif v.Kind() != Ptr {\n\t\treturn v\n\t}\n\treturn v.Elem()\n}\n\n// ValueOf returns a new Value initialized to the concrete value\n// stored in the interface i.  ValueOf(nil) returns the zero Value.\nfunc ValueOf(i interface{}) Value {\n\tif i == nil {\n\t\treturn Value{}\n\t}\n\n\t// TODO: Maybe allow contents of a Value to live on the stack.\n\t// For now we make the contents always escape to the heap.  It\n\t// makes life easier in a few places (see chanrecv/mapassign\n\t// comment below).\n\tescapes(i)\n\n\treturn unpackEface(i)\n}\n\n// Zero returns a Value representing the zero value for the specified type.\n// The result is different from the zero value of the Value struct,\n// which represents no value at all.\n// For example, Zero(TypeOf(42)) returns a Value with Kind Int and value 0.\n// The returned value is neither addressable nor settable.\nfunc Zero(typ Type) Value {\n\tif typ == nil {\n\t\tpanic(\"reflect: Zero(nil)\")\n\t}\n\tt := typ.common()\n\tfl := flag(t.Kind())\n\tif ifaceIndir(t) {\n\t\treturn Value{t, unsafe_New(typ.(*rtype)), fl | flagIndir}\n\t}\n\treturn Value{t, nil, fl}\n}\n\n// New returns a Value representing a pointer to a new zero value\n// for the specified type.  
That is, the returned Value's Type is PtrTo(typ).\nfunc New(typ Type) Value {\n\tif typ == nil {\n\t\tpanic(\"reflect: New(nil)\")\n\t}\n\tptr := unsafe_New(typ.(*rtype))\n\tfl := flag(Ptr)\n\treturn Value{typ.common().ptrTo(), ptr, fl}\n}\n\n// NewAt returns a Value representing a pointer to a value of the\n// specified type, using p as that pointer.\nfunc NewAt(typ Type, p unsafe.Pointer) Value {\n\tfl := flag(Ptr)\n\treturn Value{typ.common().ptrTo(), p, fl}\n}\n\n// assignTo returns a value v that can be assigned directly to typ.\n// It panics if v is not assignable to typ.\n// For a conversion to an interface type, target is a suggested scratch space to use.\nfunc (v Value) assignTo(context string, dst *rtype, target unsafe.Pointer) Value {\n\tif v.flag&flagMethod != 0 {\n\t\tv = makeMethodValue(context, v)\n\t}\n\n\tswitch {\n\tcase directlyAssignable(dst, v.typ):\n\t\t// Overwrite type so that they match.\n\t\t// Same memory layout, so no harm done.\n\t\tv.typ = dst\n\t\tfl := v.flag & (flagRO | flagAddr | flagIndir)\n\t\tfl |= flag(dst.Kind())\n\t\treturn Value{dst, v.ptr, fl}\n\n\tcase implements(dst, v.typ):\n\t\tif target == nil {\n\t\t\ttarget = unsafe_New(dst)\n\t\t}\n\t\tx := valueInterface(v, false)\n\t\tif dst.NumMethod() == 0 {\n\t\t\t*(*interface{})(target) = x\n\t\t} else {\n\t\t\tifaceE2I(dst, x, target)\n\t\t}\n\t\treturn Value{dst, target, flagIndir | flag(Interface)}\n\t}\n\n\t// Failed.\n\tpanic(context + \": value of type \" + v.typ.String() + \" is not assignable to type \" + dst.String())\n}\n\n// Convert returns the value v converted to type t.\n// If the usual Go conversion rules do not allow conversion\n// of the value v to type t, Convert panics.\nfunc (v Value) Convert(t Type) Value {\n\tif v.flag&flagMethod != 0 {\n\t\tv = makeMethodValue(\"Convert\", v)\n\t}\n\top := convertOp(t.common(), v.typ)\n\tif op == nil {\n\t\tpanic(\"reflect.Value.Convert: value of type \" + v.typ.String() + \" cannot be converted to type \" + 
t.String())\n\t}\n\treturn op(v, t)\n}\n\n// convertOp returns the function to convert a value of type src\n// to a value of type dst. If the conversion is illegal, convertOp returns nil.\nfunc convertOp(dst, src *rtype) func(Value, Type) Value {\n\tswitch src.Kind() {\n\tcase Int, Int8, Int16, Int32, Int64:\n\t\tswitch dst.Kind() {\n\t\tcase Int, Int8, Int16, Int32, Int64, Uint, Uint8, Uint16, Uint32, Uint64, Uintptr:\n\t\t\treturn cvtInt\n\t\tcase Float32, Float64:\n\t\t\treturn cvtIntFloat\n\t\tcase String:\n\t\t\treturn cvtIntString\n\t\t}\n\n\tcase Uint, Uint8, Uint16, Uint32, Uint64, Uintptr:\n\t\tswitch dst.Kind() {\n\t\tcase Int, Int8, Int16, Int32, Int64, Uint, Uint8, Uint16, Uint32, Uint64, Uintptr:\n\t\t\treturn cvtUint\n\t\tcase Float32, Float64:\n\t\t\treturn cvtUintFloat\n\t\tcase String:\n\t\t\treturn cvtUintString\n\t\t}\n\n\tcase Float32, Float64:\n\t\tswitch dst.Kind() {\n\t\tcase Int, Int8, Int16, Int32, Int64:\n\t\t\treturn cvtFloatInt\n\t\tcase Uint, Uint8, Uint16, Uint32, Uint64, Uintptr:\n\t\t\treturn cvtFloatUint\n\t\tcase Float32, Float64:\n\t\t\treturn cvtFloat\n\t\t}\n\n\tcase Complex64, Complex128:\n\t\tswitch dst.Kind() {\n\t\tcase Complex64, Complex128:\n\t\t\treturn cvtComplex\n\t\t}\n\n\tcase String:\n\t\tif dst.Kind() == Slice && dst.Elem().PkgPath() == \"\" {\n\t\t\tswitch dst.Elem().Kind() {\n\t\t\tcase Uint8:\n\t\t\t\treturn cvtStringBytes\n\t\t\tcase Int32:\n\t\t\t\treturn cvtStringRunes\n\t\t\t}\n\t\t}\n\n\tcase Slice:\n\t\tif dst.Kind() == String && src.Elem().PkgPath() == \"\" {\n\t\t\tswitch src.Elem().Kind() {\n\t\t\tcase Uint8:\n\t\t\t\treturn cvtBytesString\n\t\t\tcase Int32:\n\t\t\t\treturn cvtRunesString\n\t\t\t}\n\t\t}\n\t}\n\n\t// dst and src have same underlying type.\n\tif haveIdenticalUnderlyingType(dst, src) {\n\t\treturn cvtDirect\n\t}\n\n\t// dst and src are unnamed pointer types with same underlying base type.\n\tif dst.Kind() == Ptr && dst.Name() == \"\" &&\n\t\tsrc.Kind() == Ptr && src.Name() == \"\" 
&&\n\t\thaveIdenticalUnderlyingType(dst.Elem().common(), src.Elem().common()) {\n\t\treturn cvtDirect\n\t}\n\n\tif implements(dst, src) {\n\t\tif src.Kind() == Interface {\n\t\t\treturn cvtI2I\n\t\t}\n\t\treturn cvtT2I\n\t}\n\n\treturn nil\n}\n\n// makeInt returns a Value of type t equal to bits (possibly truncated),\n// where t is a signed or unsigned int type.\nfunc makeInt(f flag, bits uint64, t Type) Value {\n\ttyp := t.common()\n\tptr := unsafe_New(typ)\n\tswitch typ.size {\n\tcase 1:\n\t\t*(*uint8)(unsafe.Pointer(ptr)) = uint8(bits)\n\tcase 2:\n\t\t*(*uint16)(unsafe.Pointer(ptr)) = uint16(bits)\n\tcase 4:\n\t\t*(*uint32)(unsafe.Pointer(ptr)) = uint32(bits)\n\tcase 8:\n\t\t*(*uint64)(unsafe.Pointer(ptr)) = bits\n\t}\n\treturn Value{typ, ptr, f | flagIndir | flag(typ.Kind())}\n}\n\n// makeFloat returns a Value of type t equal to v (possibly truncated to float32),\n// where t is a float32 or float64 type.\nfunc makeFloat(f flag, v float64, t Type) Value {\n\ttyp := t.common()\n\tptr := unsafe_New(typ)\n\tswitch typ.size {\n\tcase 4:\n\t\t*(*float32)(unsafe.Pointer(ptr)) = float32(v)\n\tcase 8:\n\t\t*(*float64)(unsafe.Pointer(ptr)) = v\n\t}\n\treturn Value{typ, ptr, f | flagIndir | flag(typ.Kind())}\n}\n\n// makeComplex returns a Value of type t equal to v (possibly truncated to complex64),\n// where t is a complex64 or complex128 type.\nfunc makeComplex(f flag, v complex128, t Type) Value {\n\ttyp := t.common()\n\tptr := unsafe_New(typ)\n\tswitch typ.size {\n\tcase 8:\n\t\t*(*complex64)(unsafe.Pointer(ptr)) = complex64(v)\n\tcase 16:\n\t\t*(*complex128)(unsafe.Pointer(ptr)) = v\n\t}\n\treturn Value{typ, ptr, f | flagIndir | flag(typ.Kind())}\n}\n\nfunc makeString(f flag, v string, t Type) Value {\n\tret := New(t).Elem()\n\tret.SetString(v)\n\tret.flag = ret.flag&^flagAddr | f\n\treturn ret\n}\n\nfunc makeBytes(f flag, v []byte, t Type) Value {\n\tret := New(t).Elem()\n\tret.SetBytes(v)\n\tret.flag = ret.flag&^flagAddr | f\n\treturn ret\n}\n\nfunc makeRunes(f 
flag, v []rune, t Type) Value {\n\tret := New(t).Elem()\n\tret.setRunes(v)\n\tret.flag = ret.flag&^flagAddr | f\n\treturn ret\n}\n\n// These conversion functions are returned by convertOp\n// for classes of conversions. For example, the first function, cvtInt,\n// takes any value v of signed int type and returns the value converted\n// to type t, where t is any signed or unsigned int type.\n\n// convertOp: intXX -> [u]intXX\nfunc cvtInt(v Value, t Type) Value {\n\treturn makeInt(v.flag&flagRO, uint64(v.Int()), t)\n}\n\n// convertOp: uintXX -> [u]intXX\nfunc cvtUint(v Value, t Type) Value {\n\treturn makeInt(v.flag&flagRO, v.Uint(), t)\n}\n\n// convertOp: floatXX -> intXX\nfunc cvtFloatInt(v Value, t Type) Value {\n\treturn makeInt(v.flag&flagRO, uint64(int64(v.Float())), t)\n}\n\n// convertOp: floatXX -> uintXX\nfunc cvtFloatUint(v Value, t Type) Value {\n\treturn makeInt(v.flag&flagRO, uint64(v.Float()), t)\n}\n\n// convertOp: intXX -> floatXX\nfunc cvtIntFloat(v Value, t Type) Value {\n\treturn makeFloat(v.flag&flagRO, float64(v.Int()), t)\n}\n\n// convertOp: uintXX -> floatXX\nfunc cvtUintFloat(v Value, t Type) Value {\n\treturn makeFloat(v.flag&flagRO, float64(v.Uint()), t)\n}\n\n// convertOp: floatXX -> floatXX\nfunc cvtFloat(v Value, t Type) Value {\n\treturn makeFloat(v.flag&flagRO, v.Float(), t)\n}\n\n// convertOp: complexXX -> complexXX\nfunc cvtComplex(v Value, t Type) Value {\n\treturn makeComplex(v.flag&flagRO, v.Complex(), t)\n}\n\n// convertOp: intXX -> string\nfunc cvtIntString(v Value, t Type) Value {\n\treturn makeString(v.flag&flagRO, string(v.Int()), t)\n}\n\n// convertOp: uintXX -> string\nfunc cvtUintString(v Value, t Type) Value {\n\treturn makeString(v.flag&flagRO, string(v.Uint()), t)\n}\n\n// convertOp: []byte -> string\nfunc cvtBytesString(v Value, t Type) Value {\n\treturn makeString(v.flag&flagRO, string(v.Bytes()), t)\n}\n\n// convertOp: string -> []byte\nfunc cvtStringBytes(v Value, t Type) Value {\n\treturn makeBytes(v.flag&flagRO, 
[]byte(v.String()), t)\n}\n\n// convertOp: []rune -> string\nfunc cvtRunesString(v Value, t Type) Value {\n\treturn makeString(v.flag&flagRO, string(v.runes()), t)\n}\n\n// convertOp: string -> []rune\nfunc cvtStringRunes(v Value, t Type) Value {\n\treturn makeRunes(v.flag&flagRO, []rune(v.String()), t)\n}\n\n// convertOp: direct copy\nfunc cvtDirect(v Value, typ Type) Value {\n\tf := v.flag\n\tt := typ.common()\n\tptr := v.ptr\n\tif f&flagAddr != 0 {\n\t\t// indirect, mutable word - make a copy\n\t\tc := unsafe_New(t)\n\t\ttypedmemmove(t, c, ptr)\n\t\tptr = c\n\t\tf &^= flagAddr\n\t}\n\treturn Value{t, ptr, v.flag&flagRO | f} // v.flag&flagRO|f == f?\n}\n\n// convertOp: concrete -> interface\nfunc cvtT2I(v Value, typ Type) Value {\n\ttarget := unsafe_New(typ.common())\n\tx := valueInterface(v, false)\n\tif typ.NumMethod() == 0 {\n\t\t*(*interface{})(target) = x\n\t} else {\n\t\tifaceE2I(typ.(*rtype), x, target)\n\t}\n\treturn Value{typ.common(), target, v.flag&flagRO | flagIndir | flag(Interface)}\n}\n\n// convertOp: interface -> interface\nfunc cvtI2I(v Value, typ Type) Value {\n\tif v.IsNil() {\n\t\tret := Zero(typ)\n\t\tret.flag |= v.flag & flagRO\n\t\treturn ret\n\t}\n\treturn cvtT2I(v.Elem(), typ)\n}\n\n// implemented in ../runtime\nfunc chancap(ch unsafe.Pointer) int\nfunc chanclose(ch unsafe.Pointer)\nfunc chanlen(ch unsafe.Pointer) int\n\n// Note: some of the noescape annotations below are technically a lie,\n// but safe in the context of this package.  
Functions like chansend\n// and mapassign don't escape the referent, but may escape anything\n// the referent points to (they do shallow copies of the referent).\n// It is safe in this package because the referent may only point\n// to something a Value may point to, and that is always in the heap\n// (due to the escapes() call in ValueOf).\n\n//go:noescape\nfunc chanrecv(t *rtype, ch unsafe.Pointer, nb bool, val unsafe.Pointer) (selected, received bool)\n\n//go:noescape\nfunc chansend(t *rtype, ch unsafe.Pointer, val unsafe.Pointer, nb bool) bool\n\nfunc makechan(typ *rtype, size uint64) (ch unsafe.Pointer)\nfunc makemap(t *rtype) (m unsafe.Pointer)\n\n//go:noescape\nfunc mapaccess(t *rtype, m unsafe.Pointer, key unsafe.Pointer) (val unsafe.Pointer)\n\n//go:noescape\nfunc mapassign(t *rtype, m unsafe.Pointer, key, val unsafe.Pointer)\n\n//go:noescape\nfunc mapdelete(t *rtype, m unsafe.Pointer, key unsafe.Pointer)\n\n// m escapes into the return value, but the caller of mapiterinit\n// doesn't let the return value escape.\n//go:noescape\nfunc mapiterinit(t *rtype, m unsafe.Pointer) unsafe.Pointer\n\n//go:noescape\nfunc mapiterkey(it unsafe.Pointer) (key unsafe.Pointer)\n\n//go:noescape\nfunc mapiternext(it unsafe.Pointer)\n\n//go:noescape\nfunc maplen(m unsafe.Pointer) int\n\n// call calls fn with a copy of the n argument bytes pointed at by arg.\n// After fn returns, reflectcall copies n-retoffset result bytes\n// back into arg+retoffset before returning. 
If copying result bytes back,\n// the caller must pass the argument frame type as argtype, so that\n// call can execute appropriate write barriers during the copy.\nfunc call(argtype *rtype, fn, arg unsafe.Pointer, n uint32, retoffset uint32)\n\nfunc ifaceE2I(t *rtype, src interface{}, dst unsafe.Pointer)\n\n// typedmemmove copies a value of type t to dst from src.\n//go:noescape\nfunc typedmemmove(t *rtype, dst, src unsafe.Pointer)\n\n// typedmemmovepartial is like typedmemmove but assumes that\n// dst and src point off bytes into the value and only copies size bytes.\n//go:noescape\nfunc typedmemmovepartial(t *rtype, dst, src unsafe.Pointer, off, size uintptr)\n\n// typedslicecopy copies a slice of elemType values from src to dst,\n// returning the number of elements copied.\n//go:noescape\nfunc typedslicecopy(elemType *rtype, dst, src sliceHeader) int\n\n//go:noescape\nfunc memclr(ptr unsafe.Pointer, n uintptr)\n\n// Dummy annotation marking that the value x escapes,\n// for use in cases where the reflect code is so clever that\n// the compiler cannot follow.\nfunc escapes(x interface{}) {\n\tif dummy.b {\n\t\tdummy.x = x\n\t}\n}\n\nvar dummy struct {\n\tb bool\n\tx interface{}\n}\n"
  },
  {
    "path": "examples/javascript/destructuring.js",
    "content": "let {a, b} = object\nlet {a, b, ...c} = object\nconst {a, b: {c, d}} = object\n\n\n\n\nfunction a ({b, c}, {d}) {}\n\n\n\n\n[a, b] = array;\n[a, b, ...c] = array;\n[,, c,, d,] = array;\n\n\n\n\nfunction a({b = true}, [c, d = false]) {}\nfunction b({c} = {}) {}\n\n\n\n"
  },
  {
    "path": "examples/javascript/expressions.js",
    "content": "\"A string with \\\"double\\\" and 'single' quotes\";\n'A string with \"double\" and \\'single\\' quotes';\n'\\\\'\n\"\\\\\"\n\n'A string with new \\\nline';\n\n\n`one line`;\n`multi\n  line`;\n\n`multi\n  ${2 + 2}\n  hello\n  ${1 + 1, 2 + 2}\n  line`;\n\n`$$$$`;\n`$$$$${ 1 }`;\n\n`(a|b)$`;\n\n`$`;\n\n`$${'$'}$$${'$'}$$$$`;\n\n`\\ `;\n\n`The command \\`git ${args.join(' ')}\\` exited with an unexpected code: ${exitCode}. The caller should either handle this error, or expect that exit code.`\n\n`\\\\`;\n\n`//`;\n\n\nf `hello`;\n\n\n101;\n3.14;\n3.14e+1;\n0x1ABCDEFabcdef;\n0o7632157312;\n0b1010101001;\n1e+3;\n\n\ntheVar;\ntheVar2;\n$_;\n\n\nvar a = b\n  , c = d\n  , e = f;\n\n\nthis;\nnull;\nundefined;\ntrue;\nfalse;\n\n\n/one\\\\/;\n/one/g;\n/one/i;\n/one/gim;\n/on\\/e/gim;\n/on[^/]afe/gim;\n/[\\]/]/;\n\n\n  foo\n    ? /* comment */bar\n    : baz\n\n\nvar x = {};\nvar x = { a: \"b\" };\nvar x = { c: \"d\", \"e\": f, 1: 2 };\nvar x = {\n  g: h\n}\n\nvar x = {\n  [methodName]() {\n  }\n}\n\n\nx = {a, b, get};\ny = {a,};\n\n\nvar x = {\n  foo: true,\n\n  add(a, b) {\n    return a + b;\n  },\n\n  get bar() { return c; },\n\n  set bar(a) { c = a; },\n\n  *barGenerator() { yield c; },\n\n  get() { return 1; }\n};\n\n\nvar x = {\n  finally() {},\n  catch() {},\n  get: function () {},\n  set: function () {},\n  static: true,\n  async: true,\n};\n\n\nclass Foo {\n  static one(a) { return a; };\n  two(b) { return b; }\n  three(c) { return c; }\n}\n\nclass Foo extends require('another-class') {\n  constructor() {\n    super()\n  }\n\n  bar() {\n    super.a()\n  }\n}\n\n\nclass Foo {\n  catch() {}\n  finally() {}\n}\n\n\nclass Foo {\n\tstatic foo = 2\n}\n\n\n@eval\nclass Foo {\n\t@foo.bar(baz) static foo() {\n\n\t}\n}\n\n\n[];\n[ \"item1\" ];\n[ \"item1\", ];\n[ \"item1\", item2 ];\n[ , item2 ];\n[ item2 = 5 ];\n\n\n[\n  function() {},\n  function(arg1, ...arg2) {\n    arg2;\n  },\n  function stuff() {},\n  function trailing(a,) {},\n  function trailing(a,b,) 
{}\n]\n\n\na => 1;\n() => 2;\n(d, e) => 3;\n(f, g) => {\n  return h;\n};\n(trailing,) => 4;\n(h, trailing,) => 5;\n(set, kv) => 2;\n\n\n[\n  function *() {},\n  function *generateStuff(arg1, arg2) {\n    yield;\n    yield arg2;\n  }\n]\n\n\nfunction a({b}, c = d, e = f) {\n}\n\n\nx.someProperty;\nx[someVariable];\nx[\"some-string\"];\n\n\nreturn returned.promise()\n  .done( newDefer.resolve )\n  .fail( newDefer.reject )\n\n\nreturn this.map(function (a) {\n  return a.b;\n})\n\n// a comment\n\n.filter(function (c) {\n  return c.d;\n})\n\n\n\nx.someMethod(arg1, \"arg2\");\nvar x = function(x, y) {\n\n}(a, b);\n\n\nnew module.Klass(1, \"two\");\nnew Thing;\n\n\nawait asyncFunction();\nawait asyncPromise;\n\n\nasync function foo() {}\n\nvar x = {\n  async bar() {\n  }\n}\n\nclass Foo {\n  async bar() {}\n}\n\nasync (a) => { return foo; };\n\n\ni++;\ni--;\ni + j * 3 - j % 5;\n2 ** i * 3;\n2 * i ** 3;\n+x;\n-x;\n\n\ni || j;\ni && j;\n!a && !b || !c && !d;\n\n\ni >> j;\ni >>> j;\ni << j;\ni & j;\ni | j;\n~i ^ ~j\n\n\nx < y;\nx <= y;\nx == y;\nx === y;\nx != y;\nx !== y;\nx >= y;\nx > y;\n\n\nx = 0;\nx.y = 0;\nx[\"y\"] = 0;\nasync = 0;\n\n\na = 1, b = 2;\nc = {d: (3, 4 + 5, 6)};\n\n\ncondition ? case1 : case2;\n\nx.y = some.condition ?\n  some.case :\n  some.other.case;\n\ntypeof x;\nx instanceof String;\n\n\ndelete thing['prop'];\ntrue ? delete thing.prop : null;\n\n\na = void b()\n\n\ns |= 1;\nt %= 2;\nw ^= 3;\nx += 4;\ny.z *= 5;\nasync += 1;\na >>= 1;\nb >>>= 1;\nc <<= 1;\n\n\na <= b && c >= d;\na.b = c ? 
d : e;\na && b(c) && d;\na && new b(c) && d;\ntypeof a == b && c instanceof d\n\n\na = <div className='b' tabIndex={1} />;\nb = <Foo.Bar>a <span>b</span> c</Foo.Bar>;\nb = <Foo.Bar.Baz.Baz></Foo.Bar.Baz.Baz>;\n\n\na = <a b c={d}> e {f} g </a>\nh = <i>{...j}</i>\nb = <Text {...j} />\nb = <Text {...j}></Text>\n\n\n\n\nfoo(...rest)\n\n\n(foo - bar) / baz\nif (foo - bar) /baz/;\n(this.a() / this.b() - 1) / 2\n\n\n⁠// Type definitions for Dexie v1.4.1\n﻿// Project: https://github.com/dfahlander/Dexie.js\n​// Definitions by: David Fahlander <http://github.com/dfahlander>\n// Definitions: https://github.com/DefinitelyTyped/DefinitelyTyped\n\n\nyield db.users.where('[endpoint+email]')\n\n\nvar a = <Foo></Foo>\nb = <Foo.Bar></Foo.Bar>\nc = <> <Foo /> </>\nd = <Bar> <Foo /> </Bar>\ne = <Foo bar/>\nf = <Foo bar=\"string\" baz={2} data-i8n=\"dialogs.welcome.heading\" bam />\ng = <Avatar userId={foo.creatorId} />\nh = <input checked={this.state.selectedNewStreetType === 'new-street-default' || !this.state.selectedNewStreetType}> </input>\ni = <Foo:Bar bar={}>{...children}</Foo:Bar>\n\n"
  },
  {
    "path": "examples/javascript/literals.js",
    "content": "04000\n400\n100n\n\nconst últimaVez = 1\nvar x = { 県: '大阪府', '': '' }\n\n\"//ok\\n//what\"\n"
  },
  {
    "path": "examples/javascript/semicolon_insertion.js",
    "content": "if (a) {\n  var b = c\n  d()\n  e()\n  return f\n}\n\n\n\n\nif (a)\n  d()\n++b\n\nif (a)\n  d()\n--b\n\n \n\nobject\n  .someProperty\n  .otherProperty\n\n\n\n\n  function x() {}\n  return z;\n\n\n\n\na\n  ? b\n  : c\n\na\n  || b\n\na\n  ^ b\n\na\n  !== b\n\na\n  !b; // standalone statement\n\n\n\n\na\n  i;\n\na\n  in b;\n\na\n  ins;\n\na\n  inst;\n\na\n  instanceof b;\n\na\n  instanceofX;\n\n\n\n\nif (a) {b} else {c}\n\n\n\n\nfunction a() {b}\nfunction c() {return d}\n\n\n\n\nvar a = new A()\n  .b({c: 'd'})\n  .e()\n\n\n\n\nif (a) { if (b) return c }\nif (d) { for (;;) break }\nif (e) { for (f in g) break }\nif (h) { for (i of j) continue }\nif (k) { while (l) break }\nif (m) { do { n; } while (o) }\nif (p) { var q }\n\n\n\n\nfunction a () { function b () {} function *c () {} class D {} return }\n\n\n\nlet a // comment at end of declaration\n\n// comment outside of declaration\nlet b /* comment between declarators */, c\n\n/** comment with *stars* **/ /* comment with /slashes/ */\n/* third comment in a row */\n\nlet d\n\n\n\n"
  },
  {
    "path": "examples/javascript/statements.js",
    "content": "#!/usr/bin/env node\n\nimport defaultMember from \"module-name\";\nimport * as name from \"module-name\";\nimport { member } from \"module-name\";\nimport { member1 , member2 } from \"module-name\";\nimport { member1 , member2 as alias2 } from \"module-name\";\nimport defaultMember, { member1, member2 as alias2 } from \"module-name\";\nimport defaultMember, * as name from \"module-name\";\nimport \"module-name\";\nimport { member1 , member2 as alias2, } from \"module-name\";\n\n\n\n\nexport { name1, name2, name3, nameN };\nexport { variable1 as name1, variable2 as name2, nameN };\nexport let name1, name2, nameN;\nexport let name1 = value1, name2 = value2, name3, nameN;\n\nexport default expression;\nexport default function () { }\nexport default function name1() { }\nexport { name1 as default };\n\nexport * from 'foo';\nexport { name1, name2, nameN } from 'foo';\nexport { import1 as name1, import2 as name2, nameN } from 'foo';\n\n\n\n\n@injectable()\nexport class Foo {\n}\n\n\n\n\nif (x)\n  log(y);\n\nif (a.b) {\n  log(c);\n  d;\n}\n\n\n\n\nif (x)\n  y;\nelse if (a)\n  b;\n\nif (a) {\n  c;\n  d;\n} else {\n  e;\n}\n\n\n\n\nfor (var a, b; c; d)\n  e;\n\nfor (i = 0, init(); i < 10; i++)\n  log(y);\n\nfor (;;) {\n  z;\n  continue;\n}\n\nfor (var i = 0\n  ; i < l\n  ; i++) {\n}\n\n\n\n\nfor (item in items)\n  item();\n\nfor (var item in items || {})\n  item();\n\nfor (const {thing} in things)\n  thing();\n\n\nfor (a of b)\n  process(a);\n\nfor (let {a, b} of items || [])\n  process(a, b);\n\n\n\n\nfor await (const chunk of stream) {\n  str += chunk;\n}\n\n\n\n\nwhile (a)\n  b();\n\n\n\n\ndo {\n  a;\n} while (b)\n\ndo a; while (b)\n\n\n\nreturn;\nreturn 5;\nreturn 1,2;\nreturn async;\nreturn a;\n\n\n\n\nvar x = 1;\nvar x, y = {}, z;\n\n\n\n\nvar x = {\n\n  // This is a property\n  aProperty: 1,\n\n  /*\n   * This is a method\n   */\n  aMethod: function() {}\n};\n\n\n\n\n// this is the beginning of the script.\n// here we go.\nvar thing = {\n\n  // this 
is a property.\n  // its value is a function.\n  key: function(x /* this is a parameter */) {\n    // this is one statement\n    one();\n    // this is another statement\n    two();\n  }\n};\n\n\n\n\n/* a */\nconst a = 1;\n\n/* b **/\nconst b = 1;\n\n/* c ***/\nconst c = 1;\n\n/* d\n\n***/\nconst d = 1;\n\n\n\n\ny // comment\n  * z;\n\n\n\nswitch (x) {\n  case 1:\n  case 2:\n    something();\n    break;\n  case \"three\":\n    somethingElse();\n    break;\n  default:\n    return 4;\n}\n\n\n\n\nthrow new Error(\"uh oh\");\n\n\n\n\nthrow f = 1, f;\nthrow g = 2, g\n\n\n\ntry { a; } catch (b) { c; }\ntry { d; } finally { e; }\ntry { f; } catch { g; } finally { h; }\n\n\n\n\nif (true) { ; };;;\n\n\n\n\ntheLoop:\nfor (;;) {\n  if (a) {\n    break theLoop;\n  } else {\n    continue theLoop;\n  }\n}\n\n\n\n\ndebugger;\ndebugger\n\n\n\n\nwith (x) { i; }\n\n\nconsole.log(\"HI\")\n\n\n\n"
  },
  {
    "path": "examples/ruby/classes.rb",
    "content": "# Class names must be capitalized.  Technically, it's a constant.\nclass Fred\n  \n  # The initialize method is the constructor.  The @val is\n  # an object value.\n  def initialize(v)\n    @val = v\n  end\n\n  # Set it and get it.\n  def set(v)\n    @val = v\n  end\n\n  def get\n    return @val\n  end\nend\n\n# Objects are created by the new method of the class object.\na = Fred.new(10)\nb = Fred.new(22)\n\nprint \"A: \", a.get, \" \", b.get,\"\\n\";\nb.set(34)\nprint \"B: \", a.get, \" \", b.get,\"\\n\";\n\n# Ruby classes are always unfinished works.  This does not\n# re-define Fred, it adds more stuff to it.\nclass Fred \n  def inc\n    @val += 1\n  end\nend\n\na.inc\nb.inc\nprint \"C: \", a.get, \" \", b.get,\"\\n\";\n\n# Objects may have methods all to themselves.\ndef b.dec\n  @val -= 1\nend\n\nbegin\n  b.dec\n  a.dec\nrescue StandardError => msg\n  print \"Error: \", msg, \"\\n\"\nend\n\nprint \"D: \", a.get, \" \", b.get,\"\\n\";\n\nx = :foo\ny = :'bar'\nz = :\"doh\"\n\nrequire 'uri'\n\nbegin\n  URI.open('https://google.com')\nrescue URI::InvalidURIError => e\n  puts \"Error: #{e}\"\nend\n\nClient.new('test')\n\nClient::Subclient.method('test')\n\nhash = {\n  key1: 'value2',\n  key2: 'value2'\n}\n\nhash2 = {\n  :key1 => 'value1',\n  :key2 => 'value2'\n}\n\nprogress_bar = ProgressBar.create(\n  total: 'test',\n  format: \"\\e[0;32m%c/%C |%b>%i| %e\\e[0m\"\n)\n\n# def and end are the same color\ndef x_to_string\n  x.to_s\nend\n\n# do should use the same color as end in this block of code\n10.times do |i|\n  puts i\nend\n\nclass Human\n  # A class variable. It is shared by all instances of this class.\n  @@species = 'Homo sapiens'\nend\n\n$global = 'this is a global'\n\n@var = \"I'm an instance var\"\ndefined? @var #=> \"instance-variable\"\ndefined @var"
  },
  {
    "path": "examples/ruby/comments.rb",
    "content": "# anything else here should be ignored\n\n=begin\n=end\n\n=begin\nwhatever\n=end\n\n=begin rdoc\n=end\n\n\n=begin\nwhatever\nmultiple lines of whatever\n=end\n\n=begin\nwhatever\nmultiple lines of whatever\n=end\n# Another comment\n\n=begin\n=e\n=en\n=end\n"
  },
  {
    "path": "examples/ruby/control-flow.rb",
    "content": "while foo do\nend\n\nwhile foo\nend\n\nwhile foo do\n  bar\nend\n\nuntil foo do\nend\n\nuntil foo do\n  bar\nend\n\nif foo\nend\n\nif foo then\nelse\nend\n\nif true then ;; 123; end\n\n\nif foo then bar else quux end\n\nif foo\n  bar\nelsif quux\n  baz\nend\n\n\nif foo\n  bar\nelsif quux\n  baz\nelse\n  bat\nend\n\n\nunless foo\nend\n\nunless foo then\nend\n\nunless foo\nelse\nend\n\nfor x in y do\n\tf\nend\n\nfor x, y in z do\n\tf\nend\n\n\nfor x in y\n  f\nend\n\nfor x in y\n  next\nend\n\nfor x in y\n  retry\nend\n\nwhile b\n  break\nend\n\nwhile b\n  redo\nend\n\nbegin\nend\n\nbegin\n\tfoo\nend\n\nbegin\n\tfoo\nelse\n  bar\nend\n\n\nbegin\n\tfoo\nensure\n  bar\nend\n\n\nbegin\nrescue\nend\n\nbegin\nrescue then\nend\n\nbegin\nrescue\n  bar\nend\n\n\nbegin\nrescue x\nend\n\nbegin\nrescue x then\nend\n\nbegin\nrescue x\n  bar\nend\n\nbegin\nrescue => x\n  bar\nend\n\nbegin\nrescue x, y\n  bar\nend\n\nbegin\nrescue Error => x\nend\n\nbegin\nrescue Error => x\n  bar\nend\n\n\nbegin\nrescue *args\nend\n\nfoo rescue nil\n\nif foo rescue nil\nelsif bar rescue nil\nend\n\nunless foo rescue nil\nend\n\n\nbegin\n\tfoo\nrescue x\n  retry\nelse\n\tquux\nensure\n  baz\nend\n\n\nreturn foo\n\nreturn\n\ncase foo\nwhen bar\nend\n\n\ncase foo\nwhen bar\nelse\nend\n\ncase key\nwhen bar\nelse; leaf\nend\n\n\ncase a\nwhen b\n  c\nwhen d\n  e\nelse\n  f\nend\n\n\ncase a\nwhen *foo\n  c\nend\n\n\nx = case foo\nwhen bar\nelse\nend\n\n\nx = case foo = bar | baz\nwhen bar\nelse\nend\n\n"
  },
  {
    "path": "examples/ruby/declarations.rb",
    "content": "def foo\nend\n\ndef foo?\nend\n\ndef foo!\nend\n\n\n\ndef foo\n  bar\nend\n\n\n\ndef foo=\nend\n\n\n\ndef `(a)\n  \"`\"\nend\n\ndef -@(a)\nend\n\ndef %(a)\nend\n\ndef ..(a)\nend\n\ndef !~(a)\nend\n\n\n\nputs /(/\n\ndef /(name)\nend\n\ndef / name\nend\n\n\n\n\ndef foo\n  super\nend\n\ndef foo\n  bar.baz { super }\nend\n\ndef foo\n  super.bar a, b\nend\n\n\n\ndef foo(bar)\nend\n\ndef foo(bar); end\ndef foo(bar) end\n\n\n\ndef foo bar\nend\n\n\n\ndef foo(bar, quux)\nend\n\n\n\ndef foo bar, quux\nend\n\n\n\ndef foo(bar: nil, baz:)\nend\n\n\n\ndef foo(bar = nil)\nend\n\ndef foo(bar=nil)\nend\n\n\n\ndef foo(*options)\nend\n\ndef foo(x, *options)\nend\n\ndef foo(x, *options, y)\nend\n\ndef foo(**options)\nend\n\ndef foo(name:, **)\nend\n\ndef foo(&block)\nend\n\n\n\ndef self.foo\nend\n\n\n\ndef self.foo\n  bar\nend\n\n\n\n\ndef self.foo(bar)\nend\n\n\n\ndef self.foo bar\nend\n\n\n\ndef self.foo(bar, baz)\nend\n\n\n\n\ndef self.foo bar, baz\nend\n\n\n\nclass Foo\nend\n\nclass Foo; end\n\nclass Foo::Bar\nend\n\nclass ::Foo::Bar\nend\n\nclass Cß\nend\n\n\n\nclass Foo < Bar\nend\n\n\n\nclass Foo < Bar::Quux\nend\n\nclass Foo < ::Bar\nend\n\nclass Foo < Bar::Baz.new(foo)\nend\n\n\n\nclass Foo\n\tdef bar\n\tend\nend\n\n\n\nclass foo()::Bar\nend\n\n\n\nclass << self\nend\n\nclass <<self\nend\n\nclass << Foo\nend\n\nclass << Foo::Bar\nend\n\n\n\n\nmodule Foo\nend\n\nmodule Foo::Bar\nend\n\n\n\nmodule Foo\n\tdef bar\n\tend\nend\n\n\n\nmodule Foo end\n\n\n\nword\n__END__\nword\nx\nab\nd\n\n\n\nmodule A\n  class B < C\n    include D::E.f.g\n\n    attr_reader :h\n\n    # i\n    def j\n      k\n    end\n\n    def self.l\n    end\n  end\nend\n\n\n\n\nBEGIN {\n\n}\n\n\n\nbaz\nBEGIN {\nfoo\n}\nbar\n\n\n\nEND {\n\n}\n\n\n\nbaz\nEND {\nfoo\n}\nbar\n\n\n"
  },
  {
    "path": "examples/ruby/expressions.rb",
    "content": "Foo::bar\n::Bar\n\nputs ::Foo::Bar\n\n\n\nfoo[bar]\nfoo[*bar]\nfoo[* bar]\nfoo[]\n\n\n\nfoo[\"bar\"]\n\n\n\nfoo[:bar]\n\n\n\nfoo[bar] = 1\n\n\n\n()\n\n\n\n;\n\n\n\nyield\n\n\n\nyield foo\nyield foo, bar\n\n\n\nnot foo\n\n\n\nfoo and bar\n\n\n\nfoo or bar\n\n\n\na or b and c\n\n\n\ndefined? foo\ndefined? Foo.bar\ndefined?(foo)\ndefined?($foo)\ndefined?(@foo)\ndefined?(@äö)\n\n\n\nx = y\nx = *args\nFALSE = \"false\"\nTRUE = \"true\"\nNIL = \"nil\"\n\n\n\nx, y = [1, 2]\nx, * = [1, 2]\nx, *args = [1, 2]\nx, y = *foo\nself.foo, self.bar = target.a?, target.b\n(x, y) = foo\n(a, b, c = 1)\n\n\n\nfoo = 1, 2\nx, y = foo, bar\n\n\n\na, (b, c), d, (e, (f, g)) = foo\n\n\n\nx = foo a, b\nx = foo a, :b => 1, :c => 2\n\n\n\nx += y\nx -= y\nx *= y\nx **= y\nx /= y\nputs \"/hi\"\n\n\n\nx ||= y\nx &&= y\nx &= y\nx |= y\nx %= y\nx >>= y\nx <<= y\nx ^= y\n\n\n\na ? b : c\n\na ? b\n  : c\n\n\n\ntrue ?\")\":\"c\"\n\n\nfoo ? true: false\nfoo ? return: false\n\n\n\na..b\n\n\n\na...b\n\n\n\na || b\n\n\n\na && b\n\n\n\na == b\na != b\na =~ b\na !~ b\n\n\n\na < b\na <= b\na > b\na >= b\n\n\n\na | b\n\n\n\na ^ b\n\n\n\na & b\n\n\n\na >> b\na << b\n\n\n\na + b\n\n\n\na * b\n\n\n\n2+2*2\n\n\n\n-a\nfoo -a, bar\nfoo(-a, bar)\n\n\n\nfoo-a\n@ivar-1\n\n\n\na ** b\n\n\n\n!a\n\n\n\nfoo\nfoo()\nprint \"hello\"\nprint(\"hello\")\n\n\n\nfoo a,\n  b, c\n\n\n\nfoo(a, b,)\nfoo(bar(a),)\n\n\n\nfoo.bar\nfoo.bar()\nfoo.bar \"hi\"\nfoo.bar \"hi\", 2\nfoo.bar(\"hi\")\nfoo.bar(\"hi\", 2)\n\n\n\nfoo[bar].()\nfoo.(1, 2)\n\n\n\na.() {}\na.(b: c) do\n  d\nend\n\n\n\nfoo.[]()\n\n\n\nfoo&.bar\n\n\n\nfoo(:a => true)\nfoo([] => 1)\nfoo(bar => 1)\nfoo :a => true, :c => 1\n\n\n\nfoo(a: true)\nfoo a: true\nfoo B: true\n\n\n\nfoo(if: true)\nfoo alias: true\nfoo and: true\nfoo begin: true\nfoo break: true\nfoo case: true\nfoo class: true\nfoo def: true\nfoo defined: true\nfoo do: true\nfoo else: true\nfoo elsif: true\nfoo end: true\nfoo ensure: true\nfoo false: true\nfoo for: true\nfoo if: true\nfoo in: 
true\nfoo module: true\nfoo next: true\nfoo nil: true\nfoo not: true\nfoo or: true\nfoo redo: true\nfoo rescue: true\nfoo retry: true\nfoo return: true\nfoo self: true\nfoo super: true\nfoo then: true\nfoo true: true\nfoo undef: true\nfoo unless: true\nfoo until: true\nfoo when: true\nfoo while: true\nfoo yield: true\n\n\n\nfoo (b), a\n\n\n\nfoo(&:sort)\nfoo(&bar)\nfoo(&bar, 1)\nfoo &bar\nfoo &bar, 1\n\n\n\nfoo(*bar)\nfoo *bar\nfoo *%w{ .. lib }\nfoo *(bar.baz)\n\n\n\nfoo :bar, -> (a) { 1 }\nfoo :bar, -> (a) { where(:c => b) }\n\n\n\nfoo :bar, -> (a) { 1 } do\nend\n\n\n\nfoo(*bar)\nfoo(*[bar, baz].quoz)\nfoo(x, *bar)\nfoo(*bar.baz)\nfoo(**baz)\n\n\n\ninclude D::E.f\n\n\n\nFoo\n  .bar\n  .baz\n\nFoo \\\n  .bar\n\n\n\nfoo do |i|\n  foo\nend\n\nfoo do\n  |i| i\nend\n\nfoo do; end\n\nfoo(a) do |i|\n  foo\nend\n\nfoo.bar a do |i|\n  foo\nend\n\nfoo(a) do |name: i, *args|\nend\n\n\n\nfoo { |i| foo }\nfoo items.any? { |i| i > 0 }\nfoo(bar, baz) { quux }\n\n\n\nfoo { |; i, j| }\n\n\n\nrequest.GET\n\n\n\n-> (d, *f, (x, y)) {}\n\ndef foo(d, *f, (x, y))\nend\n\ndef foo d, *f, (x, y)\nend\n\nfoo do |a, (c, d, *f, (x, y)), *e|\nend\n\n\n\nfoo []\nfoo [1]\nfoo[1]\n\n\n\nlambda {}\n\n\n\nlambda { foo }\nlambda(&block) { foo }\nlambda(&lambda{})\n\n\n\nlambda { |foo| 1 }\n\n\n\nlambda { |a, b, c|\n  1\n  2\n}\n\n\n\nlambda { |a, b,|\n  1\n}\n\n\n\nlambda { |a, b=nil|\n  1\n}\n\n\n\nlambda { |a, b: nil|\n  1\n}\n\n\n\nlambda do |foo|\n  1\nend\n\n\n\nproc = Proc.new\nlambda = lambda {}\nproc = proc {}\n\n\n\nfoo \\\n  a, b\n\n\"abc \\\nde\"\n\nfoo \\\n  \"abc\"\n\n\n\n10 / 5\n\n\n\nh/w\n\"#{foo}\"\n\nTime.at(timestamp/1000)\n\"#{timestamp}\"\n\n\n\nfoo /bar/\n\n\n\nfoo\n/ bar/\n\n\n\nFoo / \"bar\"\n\"/edit\"\n\n\n\n/ a\n  b/\n\n\n"
  },
  {
    "path": "examples/ruby/literals.rb",
    "content": ":foo\n:foo!\n:foo?\n:foo=\n:@foo\n:@foo_0123_bar\n:@@foo\n:$foo\n:$0\n:_bar\n:åäö\n\n\n\n\n:+\n:-\n:+@\n:-@\n:[]\n:[]=\n:&\n:!\n:`\n:^\n:|\n:~\n:/\n:%\n:*\n:**\n:==\n:===\n:=~\n:>\n:>=\n:>>\n:<\n:<=\n:<<\n:<=>\n\n\n\n\n:'foo bar'\n:'#{'\n\n\n\n\n:\"foo bar\"\n:\"#\"\n\n\n\n\n:\"foo #{bar}\"\n\n\n\n\n%s/a/\n%s\\a\\\n%s#a#\n\n\n\n\n%s{a{b}c}\n%s<a<b>c>\n%s(a(b)c)\n%s[a[b]c]\n\n\n\n\n$foo\n$$\n$!\n$@\n$&\n$`\n$'\n$+\n$~\n$=\n$/\n$\\\n$,\n$;\n$.\n$<\n$>\n$_\n$0\n$*\n$$\n$?\n$:\n$\"\n$0\n$1\n$2\n$3\n$4\n$5\n$6\n$7\n$8\n$9\n$0\n$10\n$stdin\n$stdout\n$stderr\n$DEBUG\n$FILENAME\n$LOAD_PATH\n$VERBOSE\n\n\n\n\n1234\n\n\n\n\n3.times\n\n\n\n\n1_234\n\n\n\n\n0d1_234\n0D1_234\n\n\n\n\n0xa_bcd_ef0_123_456_789\n\n\n\n\n01234567\n0o1234567\n\n\n\n\n0B1_0\n\n\n\n\n1.234_5e678_90\n1E30\n1.0e+6\n1.0e-6\n\n\n\n\n-2i\n+2i\n1+1i\n1-10i\n10+3i\n12-34i\n\n\n\n\n2/3r\n\n\n\n\ntrue\nfalse\n\n\n\n\nnil\n\n\n\n\n''\n' '\n'  '\n\n\n\n\n'\\''\n'\\\\ \\n'\n'\\x00\\x01\\x02'\n\n\n\n\n'#{hello'\n\n\n\n\n\"\"\n\" \"\n\"  \"\n\n\n\n\n\"\\\"\"\n\"\\\\\"\n\"\\d\"\n\"\\#{foo}\"\n\n\n\n\n\"#\"\n\n\n\n\n\"#{foo}\"\n\"#{':foo' unless bar}\"\n\n\n\n\n%q/a/\n%q\\a\\\n%q#a#\n\n\n\n\n%q<a<b>c>\n%q{a{b}c}\n%q[a[b]c]\n%q(a(b)c)\n\n\n\n\n%/a/\n%\\a\\\n%#a#\n\n\n\n\n%<a<b>c>\n%{a{b}c}\n%[a[b]c]\n%(a(b)c)\n\n\n\n\n%Q#a#\n%Q/a/\n%Q\\a\\\n\n\n\n\n%Q<a<b>c>\n%Q{a{b}c}\n%Q[a[b]c]\n%Q(a(b)c)\n\n\n\n\n%q(a) \"b\" \"c\"\n\"d\" \"e\"\n\n\n\n\nflash[:notice] = \"Pattern addition failed for '%s' in '%s'\", %\n                  [pattern, key]\n\nfoo(\"%s '%s' \" %\n  [a, b])\n\n\n\n\n?a\n??\n?\\n\n?\\\\\n?\\377\n?\\u{41}\n?\\M-a\n?\\C-a\n?\\M-\\C-a\n?あ\nfoo(?/)\n\n\n\n\n\"abc#{\n  %r(def(ghi#{\n    `whoami`\n  })klm)\n}nop\"\n\n\n\n\n\n<<TEXT\nheredoc \\x01 content\nTEXT\n\n<<TEXT1\n  TEXT1 ok if indented\nTEXT1\n\n<<TEXT_B\n* heredoc content\nTEXT_B\n\n<<~TEXT\ncontent\nTEXT\n\nif indentation_works?\n  <<-sql\n  heredoc content\n  sql\n\n  <<~EOF\n    content\n  EOF\nend\n\n<<'..end src/parser.c 
modeval..id7a99570e05'\nheredoc content\n..end src/parser.c modeval..id7a99570e05\n\n\n\n\n\n<<-eos\n  repositories\neos\n\n\n\n\n<<HTML\n<HTML>\n  <HEAD></HEAD><BODY></BODY>\n</HTML>\nHTML\n\n<<a\nattr_accessor\na\n\n\n\n\ndef foo\n  select(<<-SQL)\n  .\n  SQL\nend\n\n\n\n\nselect(<<-SQL)\nab\nSQL\n  .join()\n\n\njoins(<<~SQL).\n   `foo`\nSQL\nwhere(\"a\")\n\n\n\n\njoins(<<~SQL).where(<<~SQL).\n  `one`\nSQL\n  `two`\nSQL\ngroup(\"b\")\n\n\n\n\n<<TEXT\na\nb #{[1, \"c #{2} d\", 3]} e\n#{4} f #{foo.bar}\n#{a if b}\n#{foo(1, bar).baz}\ng\nTEXT\n\nreturn\n\n\n\n\nfoo.new(\n  select: <<-TEXT,\n    heredoc content,\n  TEXT\n  conditions: <<-TEXT\n    heredoc content\n  TEXT\n)\n{\n  select: <<-TEXT,\n    heredoc content,\n  TEXT\n  conditions: <<-TEXT\n    heredoc content\n  TEXT\n}\n\n[\n  <<-TEXT,\n  a\n  TEXT\n  <<-TEXT\n  b\n  TEXT\n]\n\nfoo[\n  1,\n  <<-TEXT\n  hi\n  TEXT\n  ] = 3\n\n\n\n\nfoo(<<-STR.strip_heredoc.tr()\n    content #{bar().foo}\n  STR\n)\n\n\n\n\nputs <<-ONE.size, <<-TWO.size\nfirst heredoc content\nONE\nsecond heredoc content\nTWO\n\n\n\n\n-> {\n  select(<<-SQL)\n  .\n  SQL\n}\n\n\n\n\n<<-ONE\n\n\n\n\n`/usr/bin/env test blah blah`\n\n\n\n\n`/usr/bin/env test blah \\`blah\\``\n\n\n\n\n[]\n\n\n\n\n[ foo, bar ]\n[foo, *bar]\n[foo, *@bar]\n[foo, *$bar]\n[foo, :bar => 1]\n\n\n\n\n[1, 2].any? 
{ |i| i > 1 }\n\n\n\n[ foo, ]\n\n\n\n\n%w()\n\n\n\n\n%w/one two/\n\n\n\n\n%w(word word)\n\n\n\n\n%W(a #{b} c)\n\n\n\n\n%i()\n\n\n\n\n%i/one two/\n\n\n\n\n%i(word word)\n\n\n\n\n%I(a #{b} c)\n\n\n\n\n%I{\n  *\n  /#{something}+\n  ok\n}\n\n\n\n\n{}\n\n\n\n\n{:name=>\"foo\"}\n\n\n\n\n{ \"a\" => 1, \"b\" => 2 }\n{ [] => 1 }\n{ foo => 1 }\n\n\n\n\n{\n  alias: :foo,\n  and: :foo,\n  begin: :foo,\n  break: :foo,\n  case: :foo,\n  class: :foo,\n  def: :foo,\n  defined: :foo,\n  do: :foo,\n  else: :foo,\n  elsif: :foo,\n  end: :foo,\n  ensure: :foo,\n  false: :foo,\n  for: :foo,\n  in: :foo,\n  module: :foo,\n  next: :foo,\n  nil: :foo,\n  not: :foo,\n  or: :foo,\n  redo: :foo,\n  rescue: :foo,\n  retry: :foo,\n  return: :foo,\n  self: :foo,\n  super: :foo,\n  then: :foo,\n  true: :foo,\n  undef: :foo,\n  when: :foo,\n  yield: :foo,\n  if: :foo,\n  unless: :foo,\n  while: :foo,\n  until: :foo\n}\n\n\n\n\n{ a: 1, b: 2, \"c\": 3 }\n{a:1, b:2, \"c\":3 }\n\n\n\n\n{ a: 1, }\n\n\n\n\n{a: 1, **{b: 2}}\n\n\n\n\n{\n  :pusher => pusher,\n\n  # Only warm caches if there are fewer than 10 tags and branches.\n  :should_warm_caches_after => 10,\n}\n\n\n\n\n/^(foo|bar[^_])$/i\n\n\n\n\n/word#{foo}word/\n/word#word/\n/#/\n\n\n\n\n%r/a/\n%r\\a\\\n%r#a#\n\n\n\n\n\n%r<a<b>c>\n%r{a{b}c}\n%r[a[b]c]\n%r(a(b)c)\n%r(#)\n\n\n\n\n%r/a#{b}c/\n\n\n\n\n%r(a#{b}c)\n\n\n\n\n-> {}\n\n\n\n\n-> { foo }\n\n\n\n\n-> foo { 1 }\n-> (foo) { 1 }\n-> *foo { 1 }\n-> foo: 1 { 2 }\n-> foo, bar { 2 }\n\n\n\n\n-> (a, b, c) {\n  1\n  2\n}\n\n\n\n\n-> (foo) do\n  1\nend\n\n\n\n\nCß\n@äö\n@@äö\n:äö\näö\n\n\n\n"
  },
  {
    "path": "examples/ruby/statements.rb",
    "content": "foo if bar\nreturn if false\nreturn true if foo\nreturn nil if foo\n\n\n\nfoo while bar\n\n\n\nfoo unless bar\n\n\n\nfoo until bar\n\n\n\nalias :foo :bar\nalias foo bar\nalias $FOO $&\nalias foo +\n\n\n\nundef :foo\nundef foo\nundef +\nundef :foo, :bar\n\n\n"
  },
  {
    "path": "examples/rust/ast.rs",
    "content": "// Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT\n// file at the top-level directory of this distribution and at\n// http://rust-lang.org/COPYRIGHT.\n//\n// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or\n// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license\n// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your\n// option. This file may not be copied, modified, or distributed\n// except according to those terms.\n\n// The Rust abstract syntax tree.\n\npub use self::TyParamBound::*;\npub use self::UnsafeSource::*;\npub use self::PathParameters::*;\npub use symbol::{Ident, Symbol as Name};\npub use util::ThinVec;\npub use util::parser::ExprPrecedence;\n\nuse syntax_pos::{Span, DUMMY_SP};\nuse codemap::{respan, Spanned};\nuse abi::Abi;\nuse ext::hygiene::{Mark, SyntaxContext};\nuse print::pprust;\nuse ptr::P;\nuse rustc_data_structures::indexed_vec;\nuse symbol::{Symbol, keywords};\nuse tokenstream::{ThinTokenStream, TokenStream};\n\nuse serialize::{self, Encoder, Decoder};\nuse std::collections::HashSet;\nuse std::fmt;\nuse std::rc::Rc;\nuse std::u32;\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Copy)]\npub struct Lifetime {\n    pub id: NodeId,\n    pub span: Span,\n    pub ident: Ident,\n}\n\nimpl fmt::Debug for Lifetime {\n    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {\n        write!(f, \"lifetime({}: {})\", self.id, pprust::lifetime_to_string(self))\n    }\n}\n\n/// A lifetime definition, e.g. `'a: 'b+'c+'d`\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct LifetimeDef {\n    pub attrs: ThinVec<Attribute>,\n    pub lifetime: Lifetime,\n    pub bounds: Vec<Lifetime>\n}\n\n/// A \"Path\" is essentially Rust's notion of a name.\n///\n/// It's represented as a sequence of identifiers,\n/// along with a bunch of supporting information.\n///\n/// E.g. 
`std::cmp::PartialEq`\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash)]\npub struct Path {\n    pub span: Span,\n    /// The segments in the path: the things separated by `::`.\n    /// Global paths begin with `keywords::CrateRoot`.\n    pub segments: Vec<PathSegment>,\n}\n\nimpl<'a> PartialEq<&'a str> for Path {\n    fn eq(&self, string: &&'a str) -> bool {\n        self.segments.len() == 1 && self.segments[0].identifier.name == *string\n    }\n}\n\nimpl fmt::Debug for Path {\n    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {\n        write!(f, \"path({})\", pprust::path_to_string(self))\n    }\n}\n\nimpl fmt::Display for Path {\n    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {\n        write!(f, \"{}\", pprust::path_to_string(self))\n    }\n}\n\nimpl Path {\n    // convert a span and an identifier to the corresponding\n    // 1-segment path\n    pub fn from_ident(s: Span, identifier: Ident) -> Path {\n        Path {\n            span: s,\n            segments: vec![PathSegment::from_ident(identifier, s)],\n        }\n    }\n\n    // Add starting \"crate root\" segment to all paths except those that\n    // already have it or start with `self`, `super`, `Self` or `$crate`.\n    pub fn default_to_global(mut self) -> Path {\n        if !self.is_global() {\n            let ident = self.segments[0].identifier;\n            if !::parse::token::Ident(ident).is_path_segment_keyword() ||\n               ident.name == keywords::Crate.name() {\n                self.segments.insert(0, PathSegment::crate_root(self.span));\n            }\n        }\n        self\n    }\n\n    pub fn is_global(&self) -> bool {\n        !self.segments.is_empty() && self.segments[0].identifier.name == keywords::CrateRoot.name()\n    }\n}\n\n/// A segment of a path: an identifier, an optional lifetime, and a set of types.\n///\n/// E.g. 
`std`, `String` or `Box<T>`\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct PathSegment {\n    /// The identifier portion of this path segment.\n    pub identifier: Ident,\n    /// Span of the segment identifier.\n    pub span: Span,\n\n    /// Type/lifetime parameters attached to this path. They come in\n    /// two flavors: `Path<A,B,C>` and `Path(A,B) -> C`.\n    /// `None` means that no parameter list is supplied (`Path`),\n    /// `Some` means that parameter list is supplied (`Path<X, Y>`)\n    /// but it can be empty (`Path<>`).\n    /// `P` is used as a size optimization for the common case with no parameters.\n    pub parameters: Option<P<PathParameters>>,\n}\n\nimpl PathSegment {\n    pub fn from_ident(ident: Ident, span: Span) -> Self {\n        PathSegment { identifier: ident, span: span, parameters: None }\n    }\n    pub fn crate_root(span: Span) -> Self {\n        PathSegment {\n            identifier: Ident { ctxt: span.ctxt(), ..keywords::CrateRoot.ident() },\n            span,\n            parameters: None,\n        }\n    }\n}\n\n/// Parameters of a path segment.\n///\n/// E.g. 
`<A, B>` as in `Foo<A, B>` or `(A, B)` as in `Foo(A, B)`\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum PathParameters {\n    /// The `<'a, A,B,C>` in `foo::bar::baz::<'a, A,B,C>`\n    AngleBracketed(AngleBracketedParameterData),\n    /// The `(A,B)` and `C` in `Foo(A,B) -> C`\n    Parenthesized(ParenthesizedParameterData),\n}\n\nimpl PathParameters {\n    pub fn span(&self) -> Span {\n        match *self {\n            AngleBracketed(ref data) => data.span,\n            Parenthesized(ref data) => data.span,\n        }\n    }\n}\n\n/// A path like `Foo<'a, T>`\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug, Default)]\npub struct AngleBracketedParameterData {\n    /// Overall span\n    pub span: Span,\n    /// The lifetime parameters for this path segment.\n    pub lifetimes: Vec<Lifetime>,\n    /// The type parameters for this path segment, if present.\n    pub types: Vec<P<Ty>>,\n    /// Bindings (equality constraints) on associated types, if present.\n    ///\n    /// E.g., `Foo<A=Bar>`.\n    pub bindings: Vec<TypeBinding>,\n}\n\nimpl Into<Option<P<PathParameters>>> for AngleBracketedParameterData {\n    fn into(self) -> Option<P<PathParameters>> {\n        Some(P(PathParameters::AngleBracketed(self)))\n    }\n}\n\nimpl Into<Option<P<PathParameters>>> for ParenthesizedParameterData {\n    fn into(self) -> Option<P<PathParameters>> {\n        Some(P(PathParameters::Parenthesized(self)))\n    }\n}\n\n/// A path like `Foo(A,B) -> C`\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct ParenthesizedParameterData {\n    /// Overall span\n    pub span: Span,\n\n    /// `(A,B)`\n    pub inputs: Vec<P<Ty>>,\n\n    /// `C`\n    pub output: Option<P<Ty>>,\n}\n\n#[derive(Clone, Copy, PartialEq, PartialOrd, Eq, Ord, Hash, Debug)]\npub struct NodeId(u32);\n\nimpl NodeId {\n    pub fn new(x: usize) -> NodeId {\n        assert!(x < (u32::MAX as usize));\n        NodeId(x 
as u32)\n    }\n\n    pub fn from_u32(x: u32) -> NodeId {\n        NodeId(x)\n    }\n\n    pub fn as_usize(&self) -> usize {\n        self.0 as usize\n    }\n\n    pub fn as_u32(&self) -> u32 {\n        self.0\n    }\n\n    pub fn placeholder_from_mark(mark: Mark) -> Self {\n        NodeId(mark.as_u32())\n    }\n\n    pub fn placeholder_to_mark(self) -> Mark {\n        Mark::from_u32(self.0)\n    }\n}\n\nimpl fmt::Display for NodeId {\n    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {\n        fmt::Display::fmt(&self.0, f)\n    }\n}\n\nimpl serialize::UseSpecializedEncodable for NodeId {\n    fn default_encode<S: Encoder>(&self, s: &mut S) -> Result<(), S::Error> {\n        s.emit_u32(self.0)\n    }\n}\n\nimpl serialize::UseSpecializedDecodable for NodeId {\n    fn default_decode<D: Decoder>(d: &mut D) -> Result<NodeId, D::Error> {\n        d.read_u32().map(NodeId)\n    }\n}\n\nimpl indexed_vec::Idx for NodeId {\n    fn new(idx: usize) -> Self {\n        NodeId::new(idx)\n    }\n\n    fn index(self) -> usize {\n        self.as_usize()\n    }\n}\n\n/// Node id used to represent the root of the crate.\npub const CRATE_NODE_ID: NodeId = NodeId(0);\n\n/// When parsing and doing expansions, we initially give all AST nodes this AST\n/// node value. Then later, in the renumber pass, we renumber them to have\n/// small, positive ids.\npub const DUMMY_NODE_ID: NodeId = NodeId(!0);\n\n/// The AST represents all type param bounds as types.\n/// typeck::collect::compute_bounds matches these against\n/// the \"special\" built-in traits (see middle::lang_items) and\n/// detects Copy, Send and Sync.\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum TyParamBound {\n    TraitTyParamBound(PolyTraitRef, TraitBoundModifier),\n    RegionTyParamBound(Lifetime)\n}\n\n/// A modifier on a bound, currently this is only used for `?Sized`, where the\n/// modifier is `Maybe`. 
Negative bounds should also be handled here.\n#[derive(Copy, Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum TraitBoundModifier {\n    None,\n    Maybe,\n}\n\npub type TyParamBounds = Vec<TyParamBound>;\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct TyParam {\n    pub attrs: ThinVec<Attribute>,\n    pub ident: Ident,\n    pub id: NodeId,\n    pub bounds: TyParamBounds,\n    pub default: Option<P<Ty>>,\n    pub span: Span,\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum GenericParam {\n    Lifetime(LifetimeDef),\n    Type(TyParam),\n}\n\nimpl GenericParam {\n    pub fn is_lifetime_param(&self) -> bool {\n        match *self {\n            GenericParam::Lifetime(_) => true,\n            _ => false,\n        }\n    }\n\n    pub fn is_type_param(&self) -> bool {\n        match *self {\n            GenericParam::Type(_) => true,\n            _ => false,\n        }\n    }\n}\n\n/// Represents lifetime, type and const parameters attached to a declaration of\n/// a function, enum, trait, etc.\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct Generics {\n    pub params: Vec<GenericParam>,\n    pub where_clause: WhereClause,\n    pub span: Span,\n}\n\nimpl Generics {\n    pub fn is_lt_parameterized(&self) -> bool {\n        self.params.iter().any(|param| param.is_lifetime_param())\n    }\n\n    pub fn is_type_parameterized(&self) -> bool {\n        self.params.iter().any(|param| param.is_type_param())\n    }\n\n    pub fn is_parameterized(&self) -> bool {\n        !self.params.is_empty()\n    }\n\n    pub fn span_for_name(&self, name: &str) -> Option<Span> {\n        for param in &self.params {\n            if let GenericParam::Type(ref t) = *param {\n                if t.ident.name == name {\n                    return Some(t.span);\n                }\n            }\n        }\n        None\n    }\n}\n\nimpl Default 
for Generics {\n    /// Creates an instance of `Generics`.\n    fn default() ->  Generics {\n        Generics {\n            params: Vec::new(),\n            where_clause: WhereClause {\n                id: DUMMY_NODE_ID,\n                predicates: Vec::new(),\n                span: DUMMY_SP,\n            },\n            span: DUMMY_SP,\n        }\n    }\n}\n\n/// A `where` clause in a definition\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct WhereClause {\n    pub id: NodeId,\n    pub predicates: Vec<WherePredicate>,\n    pub span: Span,\n}\n\n/// A single predicate in a `where` clause\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum WherePredicate {\n    /// A type binding, e.g. `for<'c> Foo: Send+Clone+'c`\n    BoundPredicate(WhereBoundPredicate),\n    /// A lifetime predicate, e.g. `'a: 'b+'c`\n    RegionPredicate(WhereRegionPredicate),\n    /// An equality predicate (unsupported)\n    EqPredicate(WhereEqPredicate),\n}\n\n/// A type bound.\n///\n/// E.g. `for<'c> Foo: Send+Clone+'c`\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct WhereBoundPredicate {\n    pub span: Span,\n    /// Any generics from a `for` binding\n    pub bound_generic_params: Vec<GenericParam>,\n    /// The type being bounded\n    pub bounded_ty: P<Ty>,\n    /// Trait and lifetime bounds (`Clone+Send+'static`)\n    pub bounds: TyParamBounds,\n}\n\n/// A lifetime predicate.\n///\n/// E.g. `'a: 'b+'c`\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct WhereRegionPredicate {\n    pub span: Span,\n    pub lifetime: Lifetime,\n    pub bounds: Vec<Lifetime>,\n}\n\n/// An equality predicate (unsupported).\n///\n/// E.g. 
`T=int`\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct WhereEqPredicate {\n    pub id: NodeId,\n    pub span: Span,\n    pub lhs_ty: P<Ty>,\n    pub rhs_ty: P<Ty>,\n}\n\n/// The set of MetaItems that define the compilation environment of the crate,\n/// used to drive conditional compilation\npub type CrateConfig = HashSet<(Name, Option<Symbol>)>;\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct Crate {\n    pub module: Mod,\n    pub attrs: Vec<Attribute>,\n    pub span: Span,\n}\n\n/// A spanned compile-time attribute list item.\npub type NestedMetaItem = Spanned<NestedMetaItemKind>;\n\n/// Possible values inside of compile-time attribute lists.\n///\n/// E.g. the '..' in `#[name(..)]`.\n#[derive(Clone, Eq, RustcEncodable, RustcDecodable, Hash, Debug, PartialEq)]\npub enum NestedMetaItemKind {\n    /// A full MetaItem, for recursive meta items.\n    MetaItem(MetaItem),\n    /// A literal.\n    ///\n    /// E.g. \"foo\", 64, true\n    Literal(Lit),\n}\n\n/// A spanned compile-time attribute item.\n///\n/// E.g. `#[test]`, `#[derive(..)]` or `#[feature = \"foo\"]`\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct MetaItem {\n    pub name: Name,\n    pub node: MetaItemKind,\n    pub span: Span,\n}\n\n/// A compile-time attribute item.\n///\n/// E.g. `#[test]`, `#[derive(..)]` or `#[feature = \"foo\"]`\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum MetaItemKind {\n    /// Word meta item.\n    ///\n    /// E.g. `test` as in `#[test]`\n    Word,\n    /// List meta item.\n    ///\n    /// E.g. `derive(..)` as in `#[derive(..)]`\n    List(Vec<NestedMetaItem>),\n    /// Name value meta item.\n    ///\n    /// E.g. `feature = \"foo\"` as in `#[feature = \"foo\"]`\n    NameValue(Lit)\n}\n\n/// A Block (`{ .. }`).\n///\n/// E.g. `{ .. }` as in `fn foo() { .. 
}`\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct Block {\n    /// Statements in a block\n    pub stmts: Vec<Stmt>,\n    pub id: NodeId,\n    /// Distinguishes between `unsafe { ... }` and `{ ... }`\n    pub rules: BlockCheckMode,\n    pub span: Span,\n    pub recovered: bool,\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash)]\npub struct Pat {\n    pub id: NodeId,\n    pub node: PatKind,\n    pub span: Span,\n}\n\nimpl fmt::Debug for Pat {\n    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {\n        write!(f, \"pat({}: {})\", self.id, pprust::pat_to_string(self))\n    }\n}\n\nimpl Pat {\n    pub(super) fn to_ty(&self) -> Option<P<Ty>> {\n        let node = match &self.node {\n            PatKind::Wild => TyKind::Infer,\n            PatKind::Ident(BindingMode::ByValue(Mutability::Immutable), ident, None) =>\n                TyKind::Path(None, Path::from_ident(ident.span, ident.node)),\n            PatKind::Path(qself, path) => TyKind::Path(qself.clone(), path.clone()),\n            PatKind::Mac(mac) => TyKind::Mac(mac.clone()),\n            PatKind::Ref(pat, mutbl) =>\n                pat.to_ty().map(|ty| TyKind::Rptr(None, MutTy { ty, mutbl: *mutbl }))?,\n            PatKind::Slice(pats, None, _) if pats.len() == 1 =>\n                pats[0].to_ty().map(TyKind::Slice)?,\n            PatKind::Tuple(pats, None) => {\n                let mut tys = Vec::new();\n                for pat in pats {\n                    tys.push(pat.to_ty()?);\n                }\n                TyKind::Tup(tys)\n            }\n            _ => return None,\n        };\n\n        Some(P(Ty { node, id: self.id, span: self.span }))\n    }\n\n    pub fn walk<F>(&self, it: &mut F) -> bool\n        where F: FnMut(&Pat) -> bool\n    {\n        if !it(self) {\n            return false;\n        }\n\n        match self.node {\n            PatKind::Ident(_, _, Some(ref p)) => p.walk(it),\n            PatKind::Struct(_, ref 
fields, _) => {\n                fields.iter().all(|field| field.node.pat.walk(it))\n            }\n            PatKind::TupleStruct(_, ref s, _) | PatKind::Tuple(ref s, _) => {\n                s.iter().all(|p| p.walk(it))\n            }\n            PatKind::Box(ref s) | PatKind::Ref(ref s, _) => {\n                s.walk(it)\n            }\n            PatKind::Slice(ref before, ref slice, ref after) => {\n                before.iter().all(|p| p.walk(it)) &&\n                slice.iter().all(|p| p.walk(it)) &&\n                after.iter().all(|p| p.walk(it))\n            }\n            PatKind::Wild |\n            PatKind::Lit(_) |\n            PatKind::Range(..) |\n            PatKind::Ident(..) |\n            PatKind::Path(..) |\n            PatKind::Mac(_) => {\n                true\n            }\n        }\n    }\n}\n\n/// A single field in a struct pattern\n///\n/// Patterns like the fields of Foo `{ x, ref y, ref mut z }`\n/// are treated the same as` x: x, y: ref y, z: ref mut z`,\n/// except is_shorthand is true\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct FieldPat {\n    /// The identifier for the field\n    pub ident: Ident,\n    /// The pattern the field is destructured to\n    pub pat: P<Pat>,\n    pub is_shorthand: bool,\n    pub attrs: ThinVec<Attribute>,\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug, Copy)]\npub enum BindingMode {\n    ByRef(Mutability),\n    ByValue(Mutability),\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum RangeEnd {\n    Included(RangeSyntax),\n    Excluded,\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum RangeSyntax {\n    DotDotDot,\n    DotDotEq,\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum PatKind {\n    /// Represents a wildcard pattern (`_`)\n    Wild,\n\n    /// A `PatKind::Ident` may either be a new 
bound variable (`ref mut binding @ OPT_SUBPATTERN`),\n    /// or a unit struct/variant pattern, or a const pattern (in the last two cases the third\n    /// field must be `None`). Disambiguation cannot be done with parser alone, so it happens\n    /// during name resolution.\n    Ident(BindingMode, SpannedIdent, Option<P<Pat>>),\n\n    /// A struct or struct variant pattern, e.g. `Variant {x, y, ..}`.\n    /// The `bool` is `true` in the presence of a `..`.\n    Struct(Path, Vec<Spanned<FieldPat>>, bool),\n\n    /// A tuple struct/variant pattern `Variant(x, y, .., z)`.\n    /// If the `..` pattern fragment is present, then `Option<usize>` denotes its position.\n    /// 0 <= position <= subpats.len()\n    TupleStruct(Path, Vec<P<Pat>>, Option<usize>),\n\n    /// A possibly qualified path pattern.\n    /// Unqualified path patterns `A::B::C` can legally refer to variants, structs, constants\n    /// or associated constants. Qualified path patterns `<A>::B::C`/`<A as Trait>::B::C` can\n    /// only legally refer to associated constants.\n    Path(Option<QSelf>, Path),\n\n    /// A tuple pattern `(a, b)`.\n    /// If the `..` pattern fragment is present, then `Option<usize>` denotes its position.\n    /// 0 <= position <= subpats.len()\n    Tuple(Vec<P<Pat>>, Option<usize>),\n    /// A `box` pattern\n    Box(P<Pat>),\n    /// A reference pattern, e.g. `&mut (a, b)`\n    Ref(P<Pat>, Mutability),\n    /// A literal\n    Lit(P<Expr>),\n    /// A range pattern, e.g. 
`1...2`, `1..=2` or `1..2`\n    Range(P<Expr>, P<Expr>, RangeEnd),\n    /// `[a, b, ..i, y, z]` is represented as:\n    ///     `PatKind::Slice(box [a, b], Some(i), box [y, z])`\n    Slice(Vec<P<Pat>>, Option<P<Pat>>, Vec<P<Pat>>),\n    /// A macro pattern; pre-expansion\n    Mac(Mac),\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug, Copy)]\npub enum Mutability {\n    Mutable,\n    Immutable,\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug, Copy)]\npub enum BinOpKind {\n    /// The `+` operator (addition)\n    Add,\n    /// The `-` operator (subtraction)\n    Sub,\n    /// The `*` operator (multiplication)\n    Mul,\n    /// The `/` operator (division)\n    Div,\n    /// The `%` operator (modulus)\n    Rem,\n    /// The `&&` operator (logical and)\n    And,\n    /// The `||` operator (logical or)\n    Or,\n    /// The `^` operator (bitwise xor)\n    BitXor,\n    /// The `&` operator (bitwise and)\n    BitAnd,\n    /// The `|` operator (bitwise or)\n    BitOr,\n    /// The `<<` operator (shift left)\n    Shl,\n    /// The `>>` operator (shift right)\n    Shr,\n    /// The `==` operator (equality)\n    Eq,\n    /// The `<` operator (less than)\n    Lt,\n    /// The `<=` operator (less than or equal to)\n    Le,\n    /// The `!=` operator (not equal to)\n    Ne,\n    /// The `>=` operator (greater than or equal to)\n    Ge,\n    /// The `>` operator (greater than)\n    Gt,\n}\n\nimpl BinOpKind {\n    pub fn to_string(&self) -> &'static str {\n        use self::BinOpKind::*;\n        match *self {\n            Add => \"+\",\n            Sub => \"-\",\n            Mul => \"*\",\n            Div => \"/\",\n            Rem => \"%\",\n            And => \"&&\",\n            Or => \"||\",\n            BitXor => \"^\",\n            BitAnd => \"&\",\n            BitOr => \"|\",\n            Shl => \"<<\",\n            Shr => \">>\",\n            Eq => \"==\",\n            Lt => \"<\",\n            Le => 
\"<=\",\n            Ne => \"!=\",\n            Ge => \">=\",\n            Gt => \">\",\n        }\n    }\n    pub fn lazy(&self) -> bool {\n        match *self {\n            BinOpKind::And | BinOpKind::Or => true,\n            _ => false\n        }\n    }\n\n    pub fn is_shift(&self) -> bool {\n        match *self {\n            BinOpKind::Shl | BinOpKind::Shr => true,\n            _ => false\n        }\n    }\n\n    pub fn is_comparison(&self) -> bool {\n        use self::BinOpKind::*;\n        match *self {\n            Eq | Lt | Le | Ne | Gt | Ge =>\n            true,\n            And | Or | Add | Sub | Mul | Div | Rem |\n            BitXor | BitAnd | BitOr | Shl | Shr =>\n            false,\n        }\n    }\n\n    /// Returns `true` if the binary operator takes its arguments by value\n    pub fn is_by_value(&self) -> bool {\n        !self.is_comparison()\n    }\n}\n\npub type BinOp = Spanned<BinOpKind>;\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug, Copy)]\npub enum UnOp {\n    /// The `*` operator for dereferencing\n    Deref,\n    /// The `!` operator for logical inversion\n    Not,\n    /// The `-` operator for negation\n    Neg,\n}\n\nimpl UnOp {\n    /// Returns `true` if the unary operator takes its argument by value\n    pub fn is_by_value(u: UnOp) -> bool {\n        match u {\n            UnOp::Neg | UnOp::Not => true,\n            _ => false,\n        }\n    }\n\n    pub fn to_string(op: UnOp) -> &'static str {\n        match op {\n            UnOp::Deref => \"*\",\n            UnOp::Not => \"!\",\n            UnOp::Neg => \"-\",\n        }\n    }\n}\n\n/// A statement\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash)]\npub struct Stmt {\n    pub id: NodeId,\n    pub node: StmtKind,\n    pub span: Span,\n}\n\nimpl Stmt {\n    pub fn add_trailing_semicolon(mut self) -> Self {\n        self.node = match self.node {\n            StmtKind::Expr(expr) => StmtKind::Semi(expr),\n            
StmtKind::Mac(mac) => StmtKind::Mac(mac.map(|(mac, _style, attrs)| {\n                (mac, MacStmtStyle::Semicolon, attrs)\n            })),\n            node => node,\n        };\n        self\n    }\n\n    pub fn is_item(&self) -> bool {\n        match self.node {\n            StmtKind::Local(_) => true,\n            _ => false,\n        }\n    }\n}\n\nimpl fmt::Debug for Stmt {\n    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {\n        write!(f, \"stmt({}: {})\", self.id.to_string(), pprust::stmt_to_string(self))\n    }\n}\n\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash)]\npub enum StmtKind {\n    /// A local (let) binding.\n    Local(P<Local>),\n\n    /// An item definition.\n    Item(P<Item>),\n\n    /// Expr without trailing semi-colon.\n    Expr(P<Expr>),\n    /// Expr with a trailing semi-colon.\n    Semi(P<Expr>),\n    /// Macro.\n    Mac(P<(Mac, MacStmtStyle, ThinVec<Attribute>)>),\n}\n\n#[derive(Clone, Copy, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum MacStmtStyle {\n    /// The macro statement had a trailing semicolon, e.g. `foo! { ... };`\n    /// `foo!(...);`, `foo![...];`\n    Semicolon,\n    /// The macro statement had braces; e.g. foo! { ... }\n    Braces,\n    /// The macro statement had parentheses or brackets and no semicolon; e.g.\n    /// `foo!(...)`. All of these will end up being converted into macro\n    /// expressions.\n    NoBraces,\n}\n\n/// Local represents a `let` statement, e.g., `let <pat>:<ty> = <expr>;`\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct Local {\n    pub pat: P<Pat>,\n    pub ty: Option<P<Ty>>,\n    /// Initializer expression to set the value, if any\n    pub init: Option<P<Expr>>,\n    pub id: NodeId,\n    pub span: Span,\n    pub attrs: ThinVec<Attribute>,\n}\n\n/// An arm of a 'match'.\n///\n/// E.g. 
`0...10 => { println!(\"match!\") }` as in\n///\n/// ```\n/// match 123 {\n///     0...10 => { println!(\"match!\") },\n///     _ => { println!(\"no match!\") },\n/// }\n/// ```\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct Arm {\n    pub attrs: Vec<Attribute>,\n    pub pats: Vec<P<Pat>>,\n    pub guard: Option<P<Expr>>,\n    pub body: P<Expr>,\n    pub beginning_vert: Option<Span>, // For RFC 1925 feature gate\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct Field {\n    pub ident: SpannedIdent,\n    pub expr: P<Expr>,\n    pub span: Span,\n    pub is_shorthand: bool,\n    pub attrs: ThinVec<Attribute>,\n}\n\npub type SpannedIdent = Spanned<Ident>;\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug, Copy)]\npub enum BlockCheckMode {\n    Default,\n    Unsafe(UnsafeSource),\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug, Copy)]\npub enum UnsafeSource {\n    CompilerGenerated,\n    UserProvided,\n}\n\n/// An expression\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash)]\npub struct Expr {\n    pub id: NodeId,\n    pub node: ExprKind,\n    pub span: Span,\n    pub attrs: ThinVec<Attribute>,\n}\n\nimpl Expr {\n    /// Whether this expression would be valid somewhere that expects a value, for example, an `if`\n    /// condition.\n    pub fn returns(&self) -> bool {\n        if let ExprKind::Block(ref block) = self.node {\n            match block.stmts.last().map(|last_stmt| &last_stmt.node) {\n                // implicit return\n                Some(&StmtKind::Expr(_)) => true,\n                Some(&StmtKind::Semi(ref expr)) => {\n                    if let ExprKind::Ret(_) = expr.node {\n                        // last statement is explicit return\n                        true\n                    } else {\n                        false\n                    }\n                }\n                // 
This is a block that doesn't end in either an implicit or explicit return\n                _ => false,\n            }\n        } else {\n            // This is not a block, it is a value\n            true\n        }\n    }\n\n    fn to_bound(&self) -> Option<TyParamBound> {\n        match &self.node {\n            ExprKind::Path(None, path) =>\n                Some(TraitTyParamBound(PolyTraitRef::new(Vec::new(), path.clone(), self.span),\n                                       TraitBoundModifier::None)),\n            _ => None,\n        }\n    }\n\n    pub(super) fn to_ty(&self) -> Option<P<Ty>> {\n        let node = match &self.node {\n            ExprKind::Path(qself, path) => TyKind::Path(qself.clone(), path.clone()),\n            ExprKind::Mac(mac) => TyKind::Mac(mac.clone()),\n            ExprKind::Paren(expr) => expr.to_ty().map(TyKind::Paren)?,\n            ExprKind::AddrOf(mutbl, expr) =>\n                expr.to_ty().map(|ty| TyKind::Rptr(None, MutTy { ty, mutbl: *mutbl }))?,\n            ExprKind::Repeat(expr, expr_len) =>\n                expr.to_ty().map(|ty| TyKind::Array(ty, expr_len.clone()))?,\n            ExprKind::Array(exprs) if exprs.len() == 1 =>\n                exprs[0].to_ty().map(TyKind::Slice)?,\n            ExprKind::Tup(exprs) => {\n                let mut tys = Vec::new();\n                for expr in exprs {\n                    tys.push(expr.to_ty()?);\n                }\n                TyKind::Tup(tys)\n            }\n            ExprKind::Binary(binop, lhs, rhs) if binop.node == BinOpKind::Add =>\n                if let (Some(lhs), Some(rhs)) = (lhs.to_bound(), rhs.to_bound()) {\n                    TyKind::TraitObject(vec![lhs, rhs], TraitObjectSyntax::None)\n                } else {\n                    return None;\n                }\n            _ => return None,\n        };\n\n        Some(P(Ty { node, id: self.id, span: self.span }))\n    }\n\n    pub fn precedence(&self) -> ExprPrecedence {\n        match self.node {\n       
     ExprKind::Box(_) => ExprPrecedence::Box,\n            ExprKind::InPlace(..) => ExprPrecedence::InPlace,\n            ExprKind::Array(_) => ExprPrecedence::Array,\n            ExprKind::Call(..) => ExprPrecedence::Call,\n            ExprKind::MethodCall(..) => ExprPrecedence::MethodCall,\n            ExprKind::Tup(_) => ExprPrecedence::Tup,\n            ExprKind::Binary(op, ..) => ExprPrecedence::Binary(op.node),\n            ExprKind::Unary(..) => ExprPrecedence::Unary,\n            ExprKind::Lit(_) => ExprPrecedence::Lit,\n            ExprKind::Type(..) | ExprKind::Cast(..) => ExprPrecedence::Cast,\n            ExprKind::If(..) => ExprPrecedence::If,\n            ExprKind::IfLet(..) => ExprPrecedence::IfLet,\n            ExprKind::While(..) => ExprPrecedence::While,\n            ExprKind::WhileLet(..) => ExprPrecedence::WhileLet,\n            ExprKind::ForLoop(..) => ExprPrecedence::ForLoop,\n            ExprKind::Loop(..) => ExprPrecedence::Loop,\n            ExprKind::Match(..) => ExprPrecedence::Match,\n            ExprKind::Closure(..) => ExprPrecedence::Closure,\n            ExprKind::Block(..) => ExprPrecedence::Block,\n            ExprKind::Catch(..) => ExprPrecedence::Catch,\n            ExprKind::Assign(..) => ExprPrecedence::Assign,\n            ExprKind::AssignOp(..) => ExprPrecedence::AssignOp,\n            ExprKind::Field(..) => ExprPrecedence::Field,\n            ExprKind::TupField(..) => ExprPrecedence::TupField,\n            ExprKind::Index(..) => ExprPrecedence::Index,\n            ExprKind::Range(..) => ExprPrecedence::Range,\n            ExprKind::Path(..) => ExprPrecedence::Path,\n            ExprKind::AddrOf(..) => ExprPrecedence::AddrOf,\n            ExprKind::Break(..) => ExprPrecedence::Break,\n            ExprKind::Continue(..) => ExprPrecedence::Continue,\n            ExprKind::Ret(..) => ExprPrecedence::Ret,\n            ExprKind::InlineAsm(..) => ExprPrecedence::InlineAsm,\n            ExprKind::Mac(..) 
=> ExprPrecedence::Mac,\n            ExprKind::Struct(..) => ExprPrecedence::Struct,\n            ExprKind::Repeat(..) => ExprPrecedence::Repeat,\n            ExprKind::Paren(..) => ExprPrecedence::Paren,\n            ExprKind::Try(..) => ExprPrecedence::Try,\n            ExprKind::Yield(..) => ExprPrecedence::Yield,\n        }\n    }\n}\n\nimpl fmt::Debug for Expr {\n    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {\n        write!(f, \"expr({}: {})\", self.id, pprust::expr_to_string(self))\n    }\n}\n\n/// Limit types of a range (inclusive or exclusive)\n#[derive(Copy, Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum RangeLimits {\n    /// Inclusive at the beginning, exclusive at the end\n    HalfOpen,\n    /// Inclusive at the beginning and end\n    Closed,\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum ExprKind {\n    /// A `box x` expression.\n    Box(P<Expr>),\n    /// First expr is the place; second expr is the value.\n    InPlace(P<Expr>, P<Expr>),\n    /// An array (`[a, b, c, d]`)\n    Array(Vec<P<Expr>>),\n    /// A function call\n    ///\n    /// The first field resolves to the function itself,\n    /// and the second field is the list of arguments.\n    /// This also represents calling the constructor of\n    /// tuple-like ADTs such as tuple structs and enum variants.\n    Call(P<Expr>, Vec<P<Expr>>),\n    /// A method call (`x.foo::<'static, Bar, Baz>(a, b, c, d)`)\n    ///\n    /// The `PathSegment` represents the method name and its generic arguments\n    /// (within the angle brackets).\n    /// The first element of the vector of `Expr`s is the expression that evaluates\n    /// to the object on which the method is being called on (the receiver),\n    /// and the remaining elements are the rest of the arguments.\n    /// Thus, `x.foo::<Bar, Baz>(a, b, c, d)` is represented as\n    /// `ExprKind::MethodCall(PathSegment { foo, [Bar, Baz] }, [x, a, b, c, d])`.\n    
MethodCall(PathSegment, Vec<P<Expr>>),\n    /// A tuple (`(a, b, c ,d)`)\n    Tup(Vec<P<Expr>>),\n    /// A binary operation (For example: `a + b`, `a * b`)\n    Binary(BinOp, P<Expr>, P<Expr>),\n    /// A unary operation (For example: `!x`, `*x`)\n    Unary(UnOp, P<Expr>),\n    /// A literal (For example: `1`, `\"foo\"`)\n    Lit(P<Lit>),\n    /// A cast (`foo as f64`)\n    Cast(P<Expr>, P<Ty>),\n    Type(P<Expr>, P<Ty>),\n    /// An `if` block, with an optional else block\n    ///\n    /// `if expr { block } else { expr }`\n    If(P<Expr>, P<Block>, Option<P<Expr>>),\n    /// An `if let` expression with an optional else block\n    ///\n    /// `if let pat = expr { block } else { expr }`\n    ///\n    /// This is desugared to a `match` expression.\n    IfLet(P<Pat>, P<Expr>, P<Block>, Option<P<Expr>>),\n    /// A while loop, with an optional label\n    ///\n    /// `'label: while expr { block }`\n    While(P<Expr>, P<Block>, Option<SpannedIdent>),\n    /// A while-let loop, with an optional label\n    ///\n    /// `'label: while let pat = expr { block }`\n    ///\n    /// This is desugared to a combination of `loop` and `match` expressions.\n    WhileLet(P<Pat>, P<Expr>, P<Block>, Option<SpannedIdent>),\n    /// A for loop, with an optional label\n    ///\n    /// `'label: for pat in expr { block }`\n    ///\n    /// This is desugared to a combination of `loop` and `match` expressions.\n    ForLoop(P<Pat>, P<Expr>, P<Block>, Option<SpannedIdent>),\n    /// Conditionless loop (can be exited with break, continue, or return)\n    ///\n    /// `'label: loop { block }`\n    Loop(P<Block>, Option<SpannedIdent>),\n    /// A `match` block.\n    Match(P<Expr>, Vec<Arm>),\n    /// A closure (for example, `move |a, b, c| a + b + c`)\n    ///\n    /// The final span is the span of the argument block `|...|`\n    Closure(CaptureBy, P<FnDecl>, P<Expr>, Span),\n    /// A block (`{ ... }`)\n    Block(P<Block>),\n    /// A catch block (`catch { ... 
}`)\n    Catch(P<Block>),\n\n    /// An assignment (`a = foo()`)\n    Assign(P<Expr>, P<Expr>),\n    /// An assignment with an operator\n    ///\n    /// For example, `a += 1`.\n    AssignOp(BinOp, P<Expr>, P<Expr>),\n    /// Access of a named struct field (`obj.foo`)\n    Field(P<Expr>, SpannedIdent),\n    /// Access of an unnamed field of a struct or tuple-struct\n    ///\n    /// For example, `foo.0`.\n    TupField(P<Expr>, Spanned<usize>),\n    /// An indexing operation (`foo[2]`)\n    Index(P<Expr>, P<Expr>),\n    /// A range (`1..2`, `1..`, `..2`, `1...2`, `1...`, `...2`)\n    Range(Option<P<Expr>>, Option<P<Expr>>, RangeLimits),\n\n    /// Variable reference, possibly containing `::` and/or type\n    /// parameters, e.g. foo::bar::<baz>.\n    ///\n    /// Optionally \"qualified\",\n    /// E.g. `<Vec<T> as SomeTrait>::SomeType`.\n    Path(Option<QSelf>, Path),\n\n    /// A referencing operation (`&a` or `&mut a`)\n    AddrOf(Mutability, P<Expr>),\n    /// A `break`, with an optional label to break, and an optional expression\n    Break(Option<SpannedIdent>, Option<P<Expr>>),\n    /// A `continue`, with an optional label\n    Continue(Option<SpannedIdent>),\n    /// A `return`, with an optional value to be returned\n    Ret(Option<P<Expr>>),\n\n    /// Output of the `asm!()` macro\n    InlineAsm(P<InlineAsm>),\n\n    /// A macro invocation; pre-expansion\n    Mac(Mac),\n\n    /// A struct literal expression.\n    ///\n    /// For example, `Foo {x: 1, y: 2}`, or\n    /// `Foo {x: 1, .. base}`, where `base` is the `Option<Expr>`.\n    Struct(Path, Vec<Field>, Option<P<Expr>>),\n\n    /// An array literal constructed from one repeated element.\n    ///\n    /// For example, `[1; 5]`. 
The first expression is the element\n    /// to be repeated; the second is the number of times to repeat it.\n    Repeat(P<Expr>, P<Expr>),\n\n    /// No-op: used solely so we can pretty-print faithfully\n    Paren(P<Expr>),\n\n    /// `expr?`\n    Try(P<Expr>),\n\n    /// A `yield`, with an optional value to be yielded\n    Yield(Option<P<Expr>>),\n}\n\n/// The explicit Self type in a \"qualified path\". The actual\n/// path, including the trait and the associated item, is stored\n/// separately. `position` represents the index of the associated\n/// item qualified with this Self type.\n///\n/// ```ignore (only-for-syntax-highlight)\n/// <Vec<T> as a::b::Trait>::AssociatedItem\n///  ^~~~~     ~~~~~~~~~~~~~~^\n///  ty        position = 3\n///\n/// <Vec<T>>::AssociatedItem\n///  ^~~~~    ^\n///  ty       position = 0\n/// ```\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct QSelf {\n    pub ty: P<Ty>,\n    pub position: usize\n}\n\n/// A capture clause\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug, Copy)]\npub enum CaptureBy {\n    Value,\n    Ref,\n}\n\npub type Mac = Spanned<Mac_>;\n\n/// Represents a macro invocation. The Path indicates which macro\n/// is being invoked, and the vector of token-trees contains the source\n/// of the macro invocation.\n///\n/// NB: the additional ident for a macro_rules-style macro is actually\n/// stored in the enclosing item. 
Oog.\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct Mac_ {\n    pub path: Path,\n    pub tts: ThinTokenStream,\n}\n\nimpl Mac_ {\n    pub fn stream(&self) -> TokenStream {\n        self.tts.clone().into()\n    }\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct MacroDef {\n    pub tokens: ThinTokenStream,\n    pub legacy: bool,\n}\n\nimpl MacroDef {\n    pub fn stream(&self) -> TokenStream {\n        self.tokens.clone().into()\n    }\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug, Copy)]\npub enum StrStyle {\n    /// A regular string, like `\"foo\"`\n    Cooked,\n    /// A raw string, like `r##\"foo\"##`\n    ///\n    /// The uint is the number of `#` symbols used\n    Raw(usize)\n}\n\n/// A literal\npub type Lit = Spanned<LitKind>;\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug, Copy)]\npub enum LitIntType {\n    Signed(IntTy),\n    Unsigned(UintTy),\n    Unsuffixed,\n}\n\n/// Literal kind.\n///\n/// E.g. `\"foo\"`, `42`, `12.34` or `bool`\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum LitKind {\n    /// A string literal (`\"foo\"`)\n    Str(Symbol, StrStyle),\n    /// A byte string (`b\"foo\"`)\n    ByteStr(Rc<Vec<u8>>),\n    /// A byte char (`b'f'`)\n    Byte(u8),\n    /// A character literal (`'a'`)\n    Char(char),\n    /// An integer literal (`1`)\n    Int(u128, LitIntType),\n    /// A float literal (`1f64` or `1E10f64`)\n    Float(Symbol, FloatTy),\n    /// A float literal without a suffix (`1.0 or 1.0E10`)\n    FloatUnsuffixed(Symbol),\n    /// A boolean literal\n    Bool(bool),\n}\n\nimpl LitKind {\n    /// Returns true if this literal is a string and false otherwise.\n    pub fn is_str(&self) -> bool {\n        match *self {\n            LitKind::Str(..) => true,\n            _ => false,\n        }\n    }\n\n    /// Returns true if this literal has no suffix. 
Note: this will return true\n    /// for literals with prefixes such as raw strings and byte strings.\n    pub fn is_unsuffixed(&self) -> bool {\n        match *self {\n            // unsuffixed variants\n            LitKind::Str(..) |\n            LitKind::ByteStr(..) |\n            LitKind::Byte(..) |\n            LitKind::Char(..) |\n            LitKind::Int(_, LitIntType::Unsuffixed) |\n            LitKind::FloatUnsuffixed(..) |\n            LitKind::Bool(..) => true,\n            // suffixed variants\n            LitKind::Int(_, LitIntType::Signed(..)) |\n            LitKind::Int(_, LitIntType::Unsigned(..)) |\n            LitKind::Float(..) => false,\n        }\n    }\n\n    /// Returns true if this literal has a suffix.\n    pub fn is_suffixed(&self) -> bool {\n        !self.is_unsuffixed()\n    }\n}\n\n// NB: If you change this, you'll probably want to change the corresponding\n// type structure in middle/ty.rs as well.\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct MutTy {\n    pub ty: P<Ty>,\n    pub mutbl: Mutability,\n}\n\n/// Represents a method's signature in a trait declaration,\n/// or in an implementation.\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct MethodSig {\n    pub unsafety: Unsafety,\n    pub constness: Spanned<Constness>,\n    pub abi: Abi,\n    pub decl: P<FnDecl>,\n}\n\n/// Represents an item declaration within a trait declaration,\n/// possibly including a default implementation. 
A trait item is\n/// either required (meaning it doesn't have an implementation, just a\n/// signature) or provided (meaning it has a default implementation).\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct TraitItem {\n    pub id: NodeId,\n    pub ident: Ident,\n    pub attrs: Vec<Attribute>,\n    pub generics: Generics,\n    pub node: TraitItemKind,\n    pub span: Span,\n    /// See `Item::tokens` for what this is\n    pub tokens: Option<TokenStream>,\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum TraitItemKind {\n    Const(P<Ty>, Option<P<Expr>>),\n    Method(MethodSig, Option<P<Block>>),\n    Type(TyParamBounds, Option<P<Ty>>),\n    Macro(Mac),\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct ImplItem {\n    pub id: NodeId,\n    pub ident: Ident,\n    pub vis: Visibility,\n    pub defaultness: Defaultness,\n    pub attrs: Vec<Attribute>,\n    pub generics: Generics,\n    pub node: ImplItemKind,\n    pub span: Span,\n    /// See `Item::tokens` for what this is\n    pub tokens: Option<TokenStream>,\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum ImplItemKind {\n    Const(P<Ty>, P<Expr>),\n    Method(MethodSig, P<Block>),\n    Type(P<Ty>),\n    Macro(Mac),\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Copy,\n         PartialOrd, Ord)]\npub enum IntTy {\n    Isize,\n    I8,\n    I16,\n    I32,\n    I64,\n    I128,\n}\n\nimpl fmt::Debug for IntTy {\n    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {\n        fmt::Display::fmt(self, f)\n    }\n}\n\nimpl fmt::Display for IntTy {\n    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {\n        write!(f, \"{}\", self.ty_to_string())\n    }\n}\n\nimpl IntTy {\n    pub fn ty_to_string(&self) -> &'static str {\n        match *self {\n            IntTy::Isize => \"isize\",\n            IntTy::I8 => \"i8\",\n 
           IntTy::I16 => \"i16\",\n            IntTy::I32 => \"i32\",\n            IntTy::I64 => \"i64\",\n            IntTy::I128 => \"i128\",\n        }\n    }\n\n    pub fn val_to_string(&self, val: i128) -> String {\n        // cast to a u128 so we can correctly print INT128_MIN. All integral types\n        // are parsed as u128, so we wouldn't want to print an extra negative\n        // sign.\n        format!(\"{}{}\", val as u128, self.ty_to_string())\n    }\n\n    pub fn bit_width(&self) -> Option<usize> {\n        Some(match *self {\n            IntTy::Isize => return None,\n            IntTy::I8 => 8,\n            IntTy::I16 => 16,\n            IntTy::I32 => 32,\n            IntTy::I64 => 64,\n            IntTy::I128 => 128,\n        })\n    }\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Copy,\n         PartialOrd, Ord)]\npub enum UintTy {\n    Usize,\n    U8,\n    U16,\n    U32,\n    U64,\n    U128,\n}\n\nimpl UintTy {\n    pub fn ty_to_string(&self) -> &'static str {\n        match *self {\n            UintTy::Usize => \"usize\",\n            UintTy::U8 => \"u8\",\n            UintTy::U16 => \"u16\",\n            UintTy::U32 => \"u32\",\n            UintTy::U64 => \"u64\",\n            UintTy::U128 => \"u128\",\n        }\n    }\n\n    pub fn val_to_string(&self, val: u128) -> String {\n        format!(\"{}{}\", val, self.ty_to_string())\n    }\n\n    pub fn bit_width(&self) -> Option<usize> {\n        Some(match *self {\n            UintTy::Usize => return None,\n            UintTy::U8 => 8,\n            UintTy::U16 => 16,\n            UintTy::U32 => 32,\n            UintTy::U64 => 64,\n            UintTy::U128 => 128,\n        })\n    }\n}\n\nimpl fmt::Debug for UintTy {\n    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {\n        fmt::Display::fmt(self, f)\n    }\n}\n\nimpl fmt::Display for UintTy {\n    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {\n        write!(f, \"{}\", self.ty_to_string())\n    
}\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Copy,\n         PartialOrd, Ord)]\npub enum FloatTy {\n    F32,\n    F64,\n}\n\nimpl fmt::Debug for FloatTy {\n    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {\n        fmt::Display::fmt(self, f)\n    }\n}\n\nimpl fmt::Display for FloatTy {\n    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {\n        write!(f, \"{}\", self.ty_to_string())\n    }\n}\n\nimpl FloatTy {\n    pub fn ty_to_string(&self) -> &'static str {\n        match *self {\n            FloatTy::F32 => \"f32\",\n            FloatTy::F64 => \"f64\",\n        }\n    }\n\n    pub fn bit_width(&self) -> usize {\n        match *self {\n            FloatTy::F32 => 32,\n            FloatTy::F64 => 64,\n        }\n    }\n}\n\n// Bind a type to an associated type: `A=Foo`.\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct TypeBinding {\n    pub id: NodeId,\n    pub ident: Ident,\n    pub ty: P<Ty>,\n    pub span: Span,\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash)]\npub struct Ty {\n    pub id: NodeId,\n    pub node: TyKind,\n    pub span: Span,\n}\n\nimpl fmt::Debug for Ty {\n    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {\n        write!(f, \"type({})\", pprust::ty_to_string(self))\n    }\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct BareFnTy {\n    pub unsafety: Unsafety,\n    pub abi: Abi,\n    pub generic_params: Vec<GenericParam>,\n    pub decl: P<FnDecl>\n}\n\n/// The different kinds of types recognized by the compiler\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum TyKind {\n    /// A variable-length slice (`[T]`)\n    Slice(P<Ty>),\n    /// A fixed length array (`[T; n]`)\n    Array(P<Ty>, P<Expr>),\n    /// A raw pointer (`*const T` or `*mut T`)\n    Ptr(MutTy),\n    /// A reference (`&'a T` or `&'a mut T`)\n    Rptr(Option<Lifetime>, 
MutTy),\n    /// A bare function (e.g. `fn(usize) -> bool`)\n    BareFn(P<BareFnTy>),\n    /// The never type (`!`)\n    Never,\n    /// A tuple (`(A, B, C, D,...)`)\n    Tup(Vec<P<Ty>> ),\n    /// A path (`module::module::...::Type`), optionally\n    /// \"qualified\", e.g. `<Vec<T> as SomeTrait>::SomeType`.\n    ///\n    /// Type parameters are stored in the Path itself\n    Path(Option<QSelf>, Path),\n    /// A trait object type `Bound1 + Bound2 + Bound3`\n    /// where `Bound` is a trait or a lifetime.\n    TraitObject(TyParamBounds, TraitObjectSyntax),\n    /// An `impl Bound1 + Bound2 + Bound3` type\n    /// where `Bound` is a trait or a lifetime.\n    ImplTrait(TyParamBounds),\n    /// No-op; kept solely so that we can pretty-print faithfully\n    Paren(P<Ty>),\n    /// Unused for now\n    Typeof(P<Expr>),\n    /// TyKind::Infer means the type should be inferred instead of it having been\n    /// specified. This can appear anywhere in a type.\n    Infer,\n    /// Inferred type of a `self` or `&self` argument in a method.\n    ImplicitSelf,\n    // A macro in the type position.\n    Mac(Mac),\n    /// Placeholder for a kind that has failed to be defined.\n    Err,\n}\n\n/// Syntax used to declare a trait object.\n#[derive(Clone, Copy, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum TraitObjectSyntax {\n    Dyn,\n    None,\n}\n\n/// Inline assembly dialect.\n///\n/// E.g. `\"intel\"` as in `asm!(\"mov eax, 2\" : \"={eax}\"(result) : : : \"intel\")`\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug, Copy)]\npub enum AsmDialect {\n    Att,\n    Intel,\n}\n\n/// Inline assembly.\n///\n/// E.g. 
`\"={eax}\"(result)` as in `asm!(\"mov eax, 2\" : \"={eax}\"(result) : : : \"intel\")`\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct InlineAsmOutput {\n    pub constraint: Symbol,\n    pub expr: P<Expr>,\n    pub is_rw: bool,\n    pub is_indirect: bool,\n}\n\n/// Inline assembly.\n///\n/// E.g. `asm!(\"NOP\");`\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct InlineAsm {\n    pub asm: Symbol,\n    pub asm_str_style: StrStyle,\n    pub outputs: Vec<InlineAsmOutput>,\n    pub inputs: Vec<(Symbol, P<Expr>)>,\n    pub clobbers: Vec<Symbol>,\n    pub volatile: bool,\n    pub alignstack: bool,\n    pub dialect: AsmDialect,\n    pub ctxt: SyntaxContext,\n}\n\n/// An argument in a function header.\n///\n/// E.g. `bar: usize` as in `fn foo(bar: usize)`\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct Arg {\n    pub ty: P<Ty>,\n    pub pat: P<Pat>,\n    pub id: NodeId,\n}\n\n/// Alternative representation for `Arg`s describing `self` parameter of methods.\n///\n/// E.g. 
`&mut self` as in `fn foo(&mut self)`\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum SelfKind {\n    /// `self`, `mut self`\n    Value(Mutability),\n    /// `&'lt self`, `&'lt mut self`\n    Region(Option<Lifetime>, Mutability),\n    /// `self: TYPE`, `mut self: TYPE`\n    Explicit(P<Ty>, Mutability),\n}\n\npub type ExplicitSelf = Spanned<SelfKind>;\n\nimpl Arg {\n    pub fn to_self(&self) -> Option<ExplicitSelf> {\n        if let PatKind::Ident(BindingMode::ByValue(mutbl), ident, _) = self.pat.node {\n            if ident.node.name == keywords::SelfValue.name() {\n                return match self.ty.node {\n                    TyKind::ImplicitSelf => Some(respan(self.pat.span, SelfKind::Value(mutbl))),\n                    TyKind::Rptr(lt, MutTy{ref ty, mutbl}) if ty.node == TyKind::ImplicitSelf => {\n                        Some(respan(self.pat.span, SelfKind::Region(lt, mutbl)))\n                    }\n                    _ => Some(respan(self.pat.span.to(self.ty.span),\n                                     SelfKind::Explicit(self.ty.clone(), mutbl))),\n                }\n            }\n        }\n        None\n    }\n\n    pub fn is_self(&self) -> bool {\n        if let PatKind::Ident(_, ident, _) = self.pat.node {\n            ident.node.name == keywords::SelfValue.name()\n        } else {\n            false\n        }\n    }\n\n    pub fn from_self(eself: ExplicitSelf, eself_ident: SpannedIdent) -> Arg {\n        let span = eself.span.to(eself_ident.span);\n        let infer_ty = P(Ty {\n            id: DUMMY_NODE_ID,\n            node: TyKind::ImplicitSelf,\n            span,\n        });\n        let arg = |mutbl, ty| Arg {\n            pat: P(Pat {\n                id: DUMMY_NODE_ID,\n                node: PatKind::Ident(BindingMode::ByValue(mutbl), eself_ident, None),\n                span,\n            }),\n            ty,\n            id: DUMMY_NODE_ID,\n        };\n        match eself.node {\n            
SelfKind::Explicit(ty, mutbl) => arg(mutbl, ty),\n            SelfKind::Value(mutbl) => arg(mutbl, infer_ty),\n            SelfKind::Region(lt, mutbl) => arg(Mutability::Immutable, P(Ty {\n                id: DUMMY_NODE_ID,\n                node: TyKind::Rptr(lt, MutTy { ty: infer_ty, mutbl: mutbl }),\n                span,\n            })),\n        }\n    }\n}\n\n/// Header (not the body) of a function declaration.\n///\n/// E.g. `fn foo(bar: baz)`\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct FnDecl {\n    pub inputs: Vec<Arg>,\n    pub output: FunctionRetTy,\n    pub variadic: bool\n}\n\nimpl FnDecl {\n    pub fn get_self(&self) -> Option<ExplicitSelf> {\n        self.inputs.get(0).and_then(Arg::to_self)\n    }\n    pub fn has_self(&self) -> bool {\n        self.inputs.get(0).map(Arg::is_self).unwrap_or(false)\n    }\n}\n\n/// Is the trait definition an auto trait?\n#[derive(Copy, Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum IsAuto {\n    Yes,\n    No\n}\n\n#[derive(Copy, Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum Unsafety {\n    Unsafe,\n    Normal,\n}\n\n#[derive(Copy, Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum Constness {\n    Const,\n    NotConst,\n}\n\n#[derive(Copy, Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum Defaultness {\n    Default,\n    Final,\n}\n\nimpl fmt::Display for Unsafety {\n    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {\n        fmt::Display::fmt(match *self {\n            Unsafety::Normal => \"normal\",\n            Unsafety::Unsafe => \"unsafe\",\n        }, f)\n    }\n}\n\n#[derive(Copy, Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash)]\npub enum ImplPolarity {\n    /// `impl Trait for Type`\n    Positive,\n    /// `impl !Trait for Type`\n    Negative,\n}\n\nimpl fmt::Debug for ImplPolarity {\n    fn fmt(&self, f: &mut fmt::Formatter) -> 
fmt::Result {\n        match *self {\n            ImplPolarity::Positive => \"positive\".fmt(f),\n            ImplPolarity::Negative => \"negative\".fmt(f),\n        }\n    }\n}\n\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum FunctionRetTy {\n    /// Return type is not specified.\n    ///\n    /// Functions default to `()` and\n    /// closures default to inference. Span points to where return\n    /// type would be inserted.\n    Default(Span),\n    /// Everything else\n    Ty(P<Ty>),\n}\n\nimpl FunctionRetTy {\n    pub fn span(&self) -> Span {\n        match *self {\n            FunctionRetTy::Default(span) => span,\n            FunctionRetTy::Ty(ref ty) => ty.span,\n        }\n    }\n}\n\n/// Module declaration.\n///\n/// E.g. `mod foo;` or `mod foo { .. }`\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct Mod {\n    /// A span from the first token past `{` to the last token until `}`.\n    /// For `mod foo;`, the inner span ranges from the first token\n    /// to the last token in the external file.\n    pub inner: Span,\n    pub items: Vec<P<Item>>,\n}\n\n/// Foreign module declaration.\n///\n/// E.g. `extern { .. }` or `extern \"C\" { .. 
}`\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct ForeignMod {\n    pub abi: Abi,\n    pub items: Vec<ForeignItem>,\n}\n\n/// Global inline assembly\n///\n/// aka module-level assembly or file-scoped assembly\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug, Copy)]\npub struct GlobalAsm {\n    pub asm: Symbol,\n    pub ctxt: SyntaxContext,\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct EnumDef {\n    pub variants: Vec<Variant>,\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct Variant_ {\n    pub name: Ident,\n    pub attrs: Vec<Attribute>,\n    pub data: VariantData,\n    /// Explicit discriminant, e.g. `Foo = 1`\n    pub disr_expr: Option<P<Expr>>,\n}\n\npub type Variant = Spanned<Variant_>;\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum UseTreeKind {\n    Simple(Ident),\n    Glob,\n    Nested(Vec<(UseTree, NodeId)>),\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct UseTree {\n    pub kind: UseTreeKind,\n    pub prefix: Path,\n    pub span: Span,\n}\n\n/// Distinguishes between Attributes that decorate items and Attributes that\n/// are contained as statements within items. 
These two cases need to be\n/// distinguished for pretty-printing.\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug, Copy)]\npub enum AttrStyle {\n    Outer,\n    Inner,\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug, Copy)]\npub struct AttrId(pub usize);\n\n/// Meta-data associated with an item\n/// Doc-comments are promoted to attributes that have is_sugared_doc = true\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct Attribute {\n    pub id: AttrId,\n    pub style: AttrStyle,\n    pub path: Path,\n    pub tokens: TokenStream,\n    pub is_sugared_doc: bool,\n    pub span: Span,\n}\n\n/// TraitRef's appear in impls.\n///\n/// resolve maps each TraitRef's ref_id to its defining trait; that's all\n/// that the ref_id is for. The impl_id maps to the \"self type\" of this impl.\n/// If this impl is an ItemKind::Impl, the impl_id is redundant (it could be the\n/// same as the impl's node id).\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct TraitRef {\n    pub path: Path,\n    pub ref_id: NodeId,\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct PolyTraitRef {\n    /// The `'a` in `<'a> Foo<&'a T>`\n    pub bound_generic_params: Vec<GenericParam>,\n\n    /// The `Foo<&'a T>` in `<'a> Foo<&'a T>`\n    pub trait_ref: TraitRef,\n\n    pub span: Span,\n}\n\nimpl PolyTraitRef {\n    pub fn new(generic_params: Vec<GenericParam>, path: Path, span: Span) -> Self {\n        PolyTraitRef {\n            bound_generic_params: generic_params,\n            trait_ref: TraitRef { path: path, ref_id: DUMMY_NODE_ID },\n            span,\n        }\n    }\n}\n\n#[derive(Copy, Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum CrateSugar {\n    /// Source is `pub(crate)`\n    PubCrate,\n\n    /// Source is (just) `crate`\n    JustCrate,\n}\n\n#[derive(Clone, PartialEq, Eq, 
RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum Visibility {\n    Public,\n    Crate(Span, CrateSugar),\n    Restricted { path: P<Path>, id: NodeId },\n    Inherited,\n}\n\n/// Field of a struct.\n///\n/// E.g. `bar: usize` as in `struct Foo { bar: usize }`\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct StructField {\n    pub span: Span,\n    pub ident: Option<Ident>,\n    pub vis: Visibility,\n    pub id: NodeId,\n    pub ty: P<Ty>,\n    pub attrs: Vec<Attribute>,\n}\n\n/// Fields and Ids of enum variants and structs\n///\n/// For enum variants: `NodeId` represents both an Id of the variant itself (relevant for all\n/// variant kinds) and an Id of the variant's constructor (not relevant for `Struct`-variants).\n/// One shared Id can be successfully used for these two purposes.\n/// Id of the whole enum lives in `Item`.\n///\n/// For structs: `NodeId` represents an Id of the structure's constructor, so it is not actually\n/// used for `Struct`-structs (but still presents). Structures don't have an analogue of \"Id of\n/// the variant itself\" from enum variants.\n/// Id of the whole struct lives in `Item`.\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum VariantData {\n    /// Struct variant.\n    ///\n    /// E.g. `Bar { .. }` as in `enum Foo { Bar { .. } }`\n    Struct(Vec<StructField>, NodeId),\n    /// Tuple variant.\n    ///\n    /// E.g. `Bar(..)` as in `enum Foo { Bar(..) }`\n    Tuple(Vec<StructField>, NodeId),\n    /// Unit variant.\n    ///\n    /// E.g. `Bar = ..` as in `enum Foo { Bar = .. 
}`\n    Unit(NodeId),\n}\n\nimpl VariantData {\n    pub fn fields(&self) -> &[StructField] {\n        match *self {\n            VariantData::Struct(ref fields, _) | VariantData::Tuple(ref fields, _) => fields,\n            _ => &[],\n        }\n    }\n    pub fn id(&self) -> NodeId {\n        match *self {\n            VariantData::Struct(_, id) | VariantData::Tuple(_, id) | VariantData::Unit(id) => id\n        }\n    }\n    pub fn is_struct(&self) -> bool {\n        if let VariantData::Struct(..) = *self { true } else { false }\n    }\n    pub fn is_tuple(&self) -> bool {\n        if let VariantData::Tuple(..) = *self { true } else { false }\n    }\n    pub fn is_unit(&self) -> bool {\n        if let VariantData::Unit(..) = *self { true } else { false }\n    }\n}\n\n/// An item\n///\n/// The name might be a dummy name in case of anonymous items\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct Item {\n    pub ident: Ident,\n    pub attrs: Vec<Attribute>,\n    pub id: NodeId,\n    pub node: ItemKind,\n    pub vis: Visibility,\n    pub span: Span,\n\n    /// Original tokens this item was parsed from. This isn't necessarily\n    /// available for all items, although over time more and more items should\n    /// have this be `Some`. Right now this is primarily used for procedural\n    /// macros, notably custom attributes.\n    ///\n    /// Note that the tokens here do not include the outer attributes, but will\n    /// include inner attributes.\n    pub tokens: Option<TokenStream>,\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum ItemKind {\n    /// An `extern crate` item, with optional original crate name.\n    ///\n    /// E.g. `extern crate foo` or `extern crate foo_bar as foo`\n    ExternCrate(Option<Name>),\n    /// A use declaration (`use` or `pub use`) item.\n    ///\n    /// E.g. 
`use foo;`, `use foo::bar;` or `use foo::bar as FooBar;`\n    Use(P<UseTree>),\n    /// A static item (`static` or `pub static`).\n    ///\n    /// E.g. `static FOO: i32 = 42;` or `static FOO: &'static str = \"bar\";`\n    Static(P<Ty>, Mutability, P<Expr>),\n    /// A constant item (`const` or `pub const`).\n    ///\n    /// E.g. `const FOO: i32 = 42;`\n    Const(P<Ty>, P<Expr>),\n    /// A function declaration (`fn` or `pub fn`).\n    ///\n    /// E.g. `fn foo(bar: usize) -> usize { .. }`\n    Fn(P<FnDecl>, Unsafety, Spanned<Constness>, Abi, Generics, P<Block>),\n    /// A module declaration (`mod` or `pub mod`).\n    ///\n    /// E.g. `mod foo;` or `mod foo { .. }`\n    Mod(Mod),\n    /// An external module (`extern` or `pub extern`).\n    ///\n    /// E.g. `extern {}` or `extern \"C\" {}`\n    ForeignMod(ForeignMod),\n    /// Module-level inline assembly (from `global_asm!()`)\n    GlobalAsm(P<GlobalAsm>),\n    /// A type alias (`type` or `pub type`).\n    ///\n    /// E.g. `type Foo = Bar<u8>;`\n    Ty(P<Ty>, Generics),\n    /// An enum definition (`enum` or `pub enum`).\n    ///\n    /// E.g. `enum Foo<A, B> { C<A>, D<B> }`\n    Enum(EnumDef, Generics),\n    /// A struct definition (`struct` or `pub struct`).\n    ///\n    /// E.g. `struct Foo<A> { x: A }`\n    Struct(VariantData, Generics),\n    /// A union definition (`union` or `pub union`).\n    ///\n    /// E.g. `union Foo<A, B> { x: A, y: B }`\n    Union(VariantData, Generics),\n    /// A Trait declaration (`trait` or `pub trait`).\n    ///\n    /// E.g. `trait Foo { .. }`, `trait Foo<T> { .. }` or `auto trait Foo {}`\n    Trait(IsAuto, Unsafety, Generics, TyParamBounds, Vec<TraitItem>),\n    /// Trait alias\n    ///\n    /// E.g. `trait Foo = Bar + Quux;`\n    TraitAlias(Generics, TyParamBounds),\n    /// An implementation.\n    ///\n    /// E.g. `impl<A> Foo<A> { .. }` or `impl<A> Trait for Foo<A> { .. 
}`\n    Impl(Unsafety,\n             ImplPolarity,\n             Defaultness,\n             Generics,\n             Option<TraitRef>, // (optional) trait this impl implements\n             P<Ty>, // self\n             Vec<ImplItem>),\n    /// A macro invocation.\n    ///\n    /// E.g. `macro_rules! foo { .. }` or `foo!(..)`\n    Mac(Mac),\n\n    /// A macro definition.\n    MacroDef(MacroDef),\n}\n\nimpl ItemKind {\n    pub fn descriptive_variant(&self) -> &str {\n        match *self {\n            ItemKind::ExternCrate(..) => \"extern crate\",\n            ItemKind::Use(..) => \"use\",\n            ItemKind::Static(..) => \"static item\",\n            ItemKind::Const(..) => \"constant item\",\n            ItemKind::Fn(..) => \"function\",\n            ItemKind::Mod(..) => \"module\",\n            ItemKind::ForeignMod(..) => \"foreign module\",\n            ItemKind::GlobalAsm(..) => \"global asm\",\n            ItemKind::Ty(..) => \"type alias\",\n            ItemKind::Enum(..) => \"enum\",\n            ItemKind::Struct(..) => \"struct\",\n            ItemKind::Union(..) => \"union\",\n            ItemKind::Trait(..) => \"trait\",\n            ItemKind::TraitAlias(..) => \"trait alias\",\n            ItemKind::Mac(..) |\n            ItemKind::MacroDef(..) |\n            ItemKind::Impl(..) 
=> \"item\"\n        }\n    }\n}\n\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub struct ForeignItem {\n    pub ident: Ident,\n    pub attrs: Vec<Attribute>,\n    pub node: ForeignItemKind,\n    pub id: NodeId,\n    pub span: Span,\n    pub vis: Visibility,\n}\n\n/// An item within an `extern` block\n#[derive(Clone, PartialEq, Eq, RustcEncodable, RustcDecodable, Hash, Debug)]\npub enum ForeignItemKind {\n    /// A foreign function\n    Fn(P<FnDecl>, Generics),\n    /// A foreign static item (`static ext: u8`), with optional mutability\n    /// (the boolean is true when mutable)\n    Static(P<Ty>, bool),\n    /// A foreign type\n    Ty,\n}\n\nimpl ForeignItemKind {\n    pub fn descriptive_variant(&self) -> &str {\n        match *self {\n            ForeignItemKind::Fn(..) => \"foreign function\",\n            ForeignItemKind::Static(..) => \"foreign static item\",\n            ForeignItemKind::Ty => \"foreign type\",\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use serialize;\n    use super::*;\n\n    // are ASTs encodable?\n    #[test]\n    fn check_asts_encodable() {\n        fn assert_encodable<T: serialize::Encodable>() {}\n        assert_encodable::<Crate>();\n    }\n}"
  },
  {
    "path": "examples/rust/keywords.txt",
    "content": "false\ntrue\n\nas\nasync\nawait\nbecome\nbreak\ncontinue\ndo\nelse\nfor\nif\nin\nloop\nmatch\nmove\nreturn\ntry\ntypeof\nunsafe\nuse\nwhile\nyield\n\n'static\nSelf\nabstract\nbox\nconst\ncrate\ndyn\nenum\nextern\nfinal\nfn\nimpl\nlet\nmacro\nmod\nmut\noverride\npriv\npub\nref\nself\nstatic\nstruct\nsuper\ntrait\ntype\nunion\nunsized\nvirtual\nwhere"
  },
  {
    "path": "examples/rust/scratch.rs",
    "content": "fn f() {\n    match self.node {\n        Foo::PatKind::Ident(_) => 1\n    }\n}"
  },
  {
    "path": "examples/typescript/keywords.txt",
    "content": "abstract\narguments\nclass\nconst\ndeclare\nenum\nexport\nextends\nfrom\nfunction\nimplements\nimport\ninterface\nlet\nmodule\nnamespace\npackage\nprivate\nprotected\npublic\nstatic\nsuper\nthis\ntype\nvar\nvoid\nwith\n\nawait\nbreak\ncase\ncatch\ncontinue\ndebugger\ndefault\ndelete\ndo\nin\nof\nelse\neval\nfinally\nfor\nif\ninstanceof\nnew\nreturn\nswitch\nthrow\ntry\ntypeof\nwhile\nyield\n\nnull\ntrue\nfalse\nundefined"
  },
  {
    "path": "examples/typescript/parser.ts",
    "content": "/// <reference path=\"utilities.ts\"/>\n/// <reference path=\"scanner.ts\"/>\n\nnamespace ts {\n    const enum SignatureFlags {\n        None = 0,\n        Yield = 1 << 0,\n        Await = 1 << 1,\n        Type  = 1 << 2,\n        RequireCompleteParameterList = 1 << 3,\n        IgnoreMissingOpenBrace = 1 << 4,\n        JSDoc = 1 << 5,\n    }\n\n    // tslint:disable variable-name\n    let NodeConstructor: new (kind: SyntaxKind, pos: number, end: number) => Node;\n    let TokenConstructor: new (kind: SyntaxKind, pos: number, end: number) => Node;\n    let IdentifierConstructor: new (kind: SyntaxKind, pos: number, end: number) => Node;\n    let SourceFileConstructor: new (kind: SyntaxKind, pos: number, end: number) => Node;\n    // tslint:enable variable-name\n\n    export function createNode(kind: SyntaxKind, pos?: number, end?: number): Node {\n        if (kind === SyntaxKind.SourceFile) {\n            return new (SourceFileConstructor || (SourceFileConstructor = objectAllocator.getSourceFileConstructor()))(kind, pos, end);\n        }\n        else if (kind === SyntaxKind.Identifier) {\n            return new (IdentifierConstructor || (IdentifierConstructor = objectAllocator.getIdentifierConstructor()))(kind, pos, end);\n        }\n        else if (!isNodeKind(kind)) {\n            return new (TokenConstructor || (TokenConstructor = objectAllocator.getTokenConstructor()))(kind, pos, end);\n        }\n        else {\n            return new (NodeConstructor || (NodeConstructor = objectAllocator.getNodeConstructor()))(kind, pos, end);\n        }\n    }\n\n    function visitNode<T>(cbNode: (node: Node) => T, node: Node): T | undefined {\n        return node && cbNode(node);\n    }\n\n    function visitNodes<T>(cbNode: (node: Node) => T, cbNodes: (node: NodeArray<Node>) => T | undefined, nodes: NodeArray<Node>): T | undefined {\n        if (nodes) {\n            if (cbNodes) {\n                return cbNodes(nodes);\n            }\n            for (const 
node of nodes) {\n                const result = cbNode(node);\n                if (result) {\n                    return result;\n                }\n            }\n        }\n    }\n\n    /**\n     * Invokes a callback for each child of the given node. The 'cbNode' callback is invoked for all child nodes\n     * stored in properties. If a 'cbNodes' callback is specified, it is invoked for embedded arrays; otherwise,\n     * embedded arrays are flattened and the 'cbNode' callback is invoked for each element. If a callback returns\n     * a truthy value, iteration stops and that value is returned. Otherwise, undefined is returned.\n     *\n     * @param node a given node to visit its children\n     * @param cbNode a callback to be invoked for all child nodes\n     * @param cbNodes a callback to be invoked for embedded array\n     *\n     * @remarks `forEachChild` must visit the children of a node in the order\n     * that they appear in the source code. The language service depends on this property to locate nodes by position.\n     */\n    export function forEachChild<T>(node: Node, cbNode: (node: Node) => T | undefined, cbNodes?: (nodes: NodeArray<Node>) => T | undefined): T | undefined {\n        if (!node || node.kind <= SyntaxKind.LastToken) {\n            return;\n        }\n        switch (node.kind) {\n            case SyntaxKind.QualifiedName:\n                return visitNode(cbNode, (<QualifiedName>node).left) ||\n                    visitNode(cbNode, (<QualifiedName>node).right);\n            case SyntaxKind.TypeParameter:\n                return visitNode(cbNode, (<TypeParameterDeclaration>node).name) ||\n                    visitNode(cbNode, (<TypeParameterDeclaration>node).constraint) ||\n                    visitNode(cbNode, (<TypeParameterDeclaration>node).default) ||\n                    visitNode(cbNode, (<TypeParameterDeclaration>node).expression);\n            case SyntaxKind.ShorthandPropertyAssignment:\n                return 
visitNodes(cbNode, cbNodes, node.decorators) ||\n                    visitNodes(cbNode, cbNodes, node.modifiers) ||\n                    visitNode(cbNode, (<ShorthandPropertyAssignment>node).name) ||\n                    visitNode(cbNode, (<ShorthandPropertyAssignment>node).questionToken) ||\n                    visitNode(cbNode, (<ShorthandPropertyAssignment>node).equalsToken) ||\n                    visitNode(cbNode, (<ShorthandPropertyAssignment>node).objectAssignmentInitializer);\n            case SyntaxKind.SpreadAssignment:\n                return visitNode(cbNode, (<SpreadAssignment>node).expression);\n            case SyntaxKind.Parameter:\n                return visitNodes(cbNode, cbNodes, node.decorators) ||\n                    visitNodes(cbNode, cbNodes, node.modifiers) ||\n                    visitNode(cbNode, (<ParameterDeclaration>node).dotDotDotToken) ||\n                    visitNode(cbNode, (<ParameterDeclaration>node).name) ||\n                    visitNode(cbNode, (<ParameterDeclaration>node).questionToken) ||\n                    visitNode(cbNode, (<ParameterDeclaration>node).type) ||\n                    visitNode(cbNode, (<ParameterDeclaration>node).initializer);\n            case SyntaxKind.PropertyDeclaration:\n                return visitNodes(cbNode, cbNodes, node.decorators) ||\n                    visitNodes(cbNode, cbNodes, node.modifiers) ||\n                    visitNode(cbNode, (<PropertyDeclaration>node).name) ||\n                    visitNode(cbNode, (<PropertyDeclaration>node).questionToken) ||\n                    visitNode(cbNode, (<PropertyDeclaration>node).exclamationToken) ||\n                    visitNode(cbNode, (<PropertyDeclaration>node).type) ||\n                    visitNode(cbNode, (<PropertyDeclaration>node).initializer);\n            case SyntaxKind.PropertySignature:\n                return visitNodes(cbNode, cbNodes, node.decorators) ||\n                    visitNodes(cbNode, cbNodes, node.modifiers) ||\n          
          visitNode(cbNode, (<PropertySignature>node).name) ||\n                    visitNode(cbNode, (<PropertySignature>node).questionToken) ||\n                    visitNode(cbNode, (<PropertySignature>node).type) ||\n                    visitNode(cbNode, (<PropertySignature>node).initializer);\n            case SyntaxKind.PropertyAssignment:\n                return visitNodes(cbNode, cbNodes, node.decorators) ||\n                    visitNodes(cbNode, cbNodes, node.modifiers) ||\n                    visitNode(cbNode, (<PropertyAssignment>node).name) ||\n                    visitNode(cbNode, (<PropertyAssignment>node).questionToken) ||\n                    visitNode(cbNode, (<PropertyAssignment>node).initializer);\n            case SyntaxKind.VariableDeclaration:\n                return visitNodes(cbNode, cbNodes, node.decorators) ||\n                    visitNodes(cbNode, cbNodes, node.modifiers) ||\n                    visitNode(cbNode, (<VariableDeclaration>node).name) ||\n                    visitNode(cbNode, (<VariableDeclaration>node).exclamationToken) ||\n                    visitNode(cbNode, (<VariableDeclaration>node).type) ||\n                    visitNode(cbNode, (<VariableDeclaration>node).initializer);\n            case SyntaxKind.BindingElement:\n                return visitNodes(cbNode, cbNodes, node.decorators) ||\n                    visitNodes(cbNode, cbNodes, node.modifiers) ||\n                    visitNode(cbNode, (<BindingElement>node).dotDotDotToken) ||\n                    visitNode(cbNode, (<BindingElement>node).propertyName) ||\n                    visitNode(cbNode, (<BindingElement>node).name) ||\n                    visitNode(cbNode, (<BindingElement>node).initializer);\n            case SyntaxKind.FunctionType:\n            case SyntaxKind.ConstructorType:\n            case SyntaxKind.CallSignature:\n            case SyntaxKind.ConstructSignature:\n            case SyntaxKind.IndexSignature:\n                return visitNodes(cbNode, 
cbNodes, node.decorators) ||\n                    visitNodes(cbNode, cbNodes, node.modifiers) ||\n                    visitNodes(cbNode, cbNodes, (<SignatureDeclaration>node).typeParameters) ||\n                    visitNodes(cbNode, cbNodes, (<SignatureDeclaration>node).parameters) ||\n                    visitNode(cbNode, (<SignatureDeclaration>node).type);\n            case SyntaxKind.MethodDeclaration:\n            case SyntaxKind.MethodSignature:\n            case SyntaxKind.Constructor:\n            case SyntaxKind.GetAccessor:\n            case SyntaxKind.SetAccessor:\n            case SyntaxKind.FunctionExpression:\n            case SyntaxKind.FunctionDeclaration:\n            case SyntaxKind.ArrowFunction:\n                return visitNodes(cbNode, cbNodes, node.decorators) ||\n                    visitNodes(cbNode, cbNodes, node.modifiers) ||\n                    visitNode(cbNode, (<FunctionLikeDeclaration>node).asteriskToken) ||\n                    visitNode(cbNode, (<FunctionLikeDeclaration>node).name) ||\n                    visitNode(cbNode, (<FunctionLikeDeclaration>node).questionToken) ||\n                    visitNodes(cbNode, cbNodes, (<FunctionLikeDeclaration>node).typeParameters) ||\n                    visitNodes(cbNode, cbNodes, (<FunctionLikeDeclaration>node).parameters) ||\n                    visitNode(cbNode, (<FunctionLikeDeclaration>node).type) ||\n                    visitNode(cbNode, (<ArrowFunction>node).equalsGreaterThanToken) ||\n                    visitNode(cbNode, (<FunctionLikeDeclaration>node).body);\n            case SyntaxKind.TypeReference:\n                return visitNode(cbNode, (<TypeReferenceNode>node).typeName) ||\n                    visitNodes(cbNode, cbNodes, (<TypeReferenceNode>node).typeArguments);\n            case SyntaxKind.TypePredicate:\n                return visitNode(cbNode, (<TypePredicateNode>node).parameterName) ||\n                    visitNode(cbNode, (<TypePredicateNode>node).type);\n            
case SyntaxKind.TypeQuery:\n                return visitNode(cbNode, (<TypeQueryNode>node).exprName);\n            case SyntaxKind.TypeLiteral:\n                return visitNodes(cbNode, cbNodes, (<TypeLiteralNode>node).members);\n            case SyntaxKind.ArrayType:\n                return visitNode(cbNode, (<ArrayTypeNode>node).elementType);\n            case SyntaxKind.TupleType:\n                return visitNodes(cbNode, cbNodes, (<TupleTypeNode>node).elementTypes);\n            case SyntaxKind.UnionType:\n            case SyntaxKind.IntersectionType:\n                return visitNodes(cbNode, cbNodes, (<UnionOrIntersectionTypeNode>node).types);\n            case SyntaxKind.ConditionalType:\n                return visitNode(cbNode, (<ConditionalTypeNode>node).checkType) ||\n                    visitNode(cbNode, (<ConditionalTypeNode>node).extendsType) ||\n                    visitNode(cbNode, (<ConditionalTypeNode>node).trueType) ||\n                    visitNode(cbNode, (<ConditionalTypeNode>node).falseType);\n            case SyntaxKind.InferType:\n                return visitNode(cbNode, (<InferTypeNode>node).typeParameter);\n            case SyntaxKind.ParenthesizedType:\n            case SyntaxKind.TypeOperator:\n                return visitNode(cbNode, (<ParenthesizedTypeNode | TypeOperatorNode>node).type);\n            case SyntaxKind.IndexedAccessType:\n                return visitNode(cbNode, (<IndexedAccessTypeNode>node).objectType) ||\n                    visitNode(cbNode, (<IndexedAccessTypeNode>node).indexType);\n            case SyntaxKind.MappedType:\n                return visitNode(cbNode, (<MappedTypeNode>node).readonlyToken) ||\n                    visitNode(cbNode, (<MappedTypeNode>node).typeParameter) ||\n                    visitNode(cbNode, (<MappedTypeNode>node).questionToken) ||\n                    visitNode(cbNode, (<MappedTypeNode>node).type);\n            case SyntaxKind.LiteralType:\n                return visitNode(cbNode, 
(<LiteralTypeNode>node).literal);\n            case SyntaxKind.ObjectBindingPattern:\n            case SyntaxKind.ArrayBindingPattern:\n                return visitNodes(cbNode, cbNodes, (<BindingPattern>node).elements);\n            case SyntaxKind.ArrayLiteralExpression:\n                return visitNodes(cbNode, cbNodes, (<ArrayLiteralExpression>node).elements);\n            case SyntaxKind.ObjectLiteralExpression:\n                return visitNodes(cbNode, cbNodes, (<ObjectLiteralExpression>node).properties);\n            case SyntaxKind.PropertyAccessExpression:\n                return visitNode(cbNode, (<PropertyAccessExpression>node).expression) ||\n                    visitNode(cbNode, (<PropertyAccessExpression>node).name);\n            case SyntaxKind.ElementAccessExpression:\n                return visitNode(cbNode, (<ElementAccessExpression>node).expression) ||\n                    visitNode(cbNode, (<ElementAccessExpression>node).argumentExpression);\n            case SyntaxKind.CallExpression:\n            case SyntaxKind.NewExpression:\n                return visitNode(cbNode, (<CallExpression>node).expression) ||\n                    visitNodes(cbNode, cbNodes, (<CallExpression>node).typeArguments) ||\n                    visitNodes(cbNode, cbNodes, (<CallExpression>node).arguments);\n            case SyntaxKind.TaggedTemplateExpression:\n                return visitNode(cbNode, (<TaggedTemplateExpression>node).tag) ||\n                    visitNode(cbNode, (<TaggedTemplateExpression>node).template);\n            case SyntaxKind.TypeAssertionExpression:\n                return visitNode(cbNode, (<TypeAssertion>node).type) ||\n                    visitNode(cbNode, (<TypeAssertion>node).expression);\n            case SyntaxKind.ParenthesizedExpression:\n                return visitNode(cbNode, (<ParenthesizedExpression>node).expression);\n            case SyntaxKind.DeleteExpression:\n                return visitNode(cbNode, 
(<DeleteExpression>node).expression);\n            case SyntaxKind.TypeOfExpression:\n                return visitNode(cbNode, (<TypeOfExpression>node).expression);\n            case SyntaxKind.VoidExpression:\n                return visitNode(cbNode, (<VoidExpression>node).expression);\n            case SyntaxKind.PrefixUnaryExpression:\n                return visitNode(cbNode, (<PrefixUnaryExpression>node).operand);\n            case SyntaxKind.YieldExpression:\n                return visitNode(cbNode, (<YieldExpression>node).asteriskToken) ||\n                    visitNode(cbNode, (<YieldExpression>node).expression);\n            case SyntaxKind.AwaitExpression:\n                return visitNode(cbNode, (<AwaitExpression>node).expression);\n            case SyntaxKind.PostfixUnaryExpression:\n                return visitNode(cbNode, (<PostfixUnaryExpression>node).operand);\n            case SyntaxKind.BinaryExpression:\n                return visitNode(cbNode, (<BinaryExpression>node).left) ||\n                    visitNode(cbNode, (<BinaryExpression>node).operatorToken) ||\n                    visitNode(cbNode, (<BinaryExpression>node).right);\n            case SyntaxKind.AsExpression:\n                return visitNode(cbNode, (<AsExpression>node).expression) ||\n                    visitNode(cbNode, (<AsExpression>node).type);\n            case SyntaxKind.NonNullExpression:\n                return visitNode(cbNode, (<NonNullExpression>node).expression);\n            case SyntaxKind.MetaProperty:\n                return visitNode(cbNode, (<MetaProperty>node).name);\n            case SyntaxKind.ConditionalExpression:\n                return visitNode(cbNode, (<ConditionalExpression>node).condition) ||\n                    visitNode(cbNode, (<ConditionalExpression>node).questionToken) ||\n                    visitNode(cbNode, (<ConditionalExpression>node).whenTrue) ||\n                    visitNode(cbNode, (<ConditionalExpression>node).colonToken) ||\n            
        visitNode(cbNode, (<ConditionalExpression>node).whenFalse);\n            case SyntaxKind.SpreadElement:\n                return visitNode(cbNode, (<SpreadElement>node).expression);\n            case SyntaxKind.Block:\n            case SyntaxKind.ModuleBlock:\n                return visitNodes(cbNode, cbNodes, (<Block>node).statements);\n            case SyntaxKind.SourceFile:\n                return visitNodes(cbNode, cbNodes, (<SourceFile>node).statements) ||\n                    visitNode(cbNode, (<SourceFile>node).endOfFileToken);\n            case SyntaxKind.VariableStatement:\n                return visitNodes(cbNode, cbNodes, node.decorators) ||\n                    visitNodes(cbNode, cbNodes, node.modifiers) ||\n                    visitNode(cbNode, (<VariableStatement>node).declarationList);\n            case SyntaxKind.VariableDeclarationList:\n                return visitNodes(cbNode, cbNodes, (<VariableDeclarationList>node).declarations);\n            case SyntaxKind.ExpressionStatement:\n                return visitNode(cbNode, (<ExpressionStatement>node).expression);\n            case SyntaxKind.IfStatement:\n                return visitNode(cbNode, (<IfStatement>node).expression) ||\n                    visitNode(cbNode, (<IfStatement>node).thenStatement) ||\n                    visitNode(cbNode, (<IfStatement>node).elseStatement);\n            case SyntaxKind.DoStatement:\n                return visitNode(cbNode, (<DoStatement>node).statement) ||\n                    visitNode(cbNode, (<DoStatement>node).expression);\n            case SyntaxKind.WhileStatement:\n                return visitNode(cbNode, (<WhileStatement>node).expression) ||\n                    visitNode(cbNode, (<WhileStatement>node).statement);\n            case SyntaxKind.ForStatement:\n                return visitNode(cbNode, (<ForStatement>node).initializer) ||\n                    visitNode(cbNode, (<ForStatement>node).condition) ||\n                    visitNode(cbNode, 
(<ForStatement>node).incrementor) ||\n                    visitNode(cbNode, (<ForStatement>node).statement);\n            case SyntaxKind.ForInStatement:\n                return visitNode(cbNode, (<ForInStatement>node).initializer) ||\n                    visitNode(cbNode, (<ForInStatement>node).expression) ||\n                    visitNode(cbNode, (<ForInStatement>node).statement);\n            case SyntaxKind.ForOfStatement:\n                return visitNode(cbNode, (<ForOfStatement>node).awaitModifier) ||\n                    visitNode(cbNode, (<ForOfStatement>node).initializer) ||\n                    visitNode(cbNode, (<ForOfStatement>node).expression) ||\n                    visitNode(cbNode, (<ForOfStatement>node).statement);\n            case SyntaxKind.ContinueStatement:\n            case SyntaxKind.BreakStatement:\n                return visitNode(cbNode, (<BreakOrContinueStatement>node).label);\n            case SyntaxKind.ReturnStatement:\n                return visitNode(cbNode, (<ReturnStatement>node).expression);\n            case SyntaxKind.WithStatement:\n                return visitNode(cbNode, (<WithStatement>node).expression) ||\n                    visitNode(cbNode, (<WithStatement>node).statement);\n            case SyntaxKind.SwitchStatement:\n                return visitNode(cbNode, (<SwitchStatement>node).expression) ||\n                    visitNode(cbNode, (<SwitchStatement>node).caseBlock);\n            case SyntaxKind.CaseBlock:\n                return visitNodes(cbNode, cbNodes, (<CaseBlock>node).clauses);\n            case SyntaxKind.CaseClause:\n                return visitNode(cbNode, (<CaseClause>node).expression) ||\n                    visitNodes(cbNode, cbNodes, (<CaseClause>node).statements);\n            case SyntaxKind.DefaultClause:\n                return visitNodes(cbNode, cbNodes, (<DefaultClause>node).statements);\n            case SyntaxKind.LabeledStatement:\n                return visitNode(cbNode, 
(<LabeledStatement>node).label) ||\n                    visitNode(cbNode, (<LabeledStatement>node).statement);\n            case SyntaxKind.ThrowStatement:\n                return visitNode(cbNode, (<ThrowStatement>node).expression);\n            case SyntaxKind.TryStatement:\n                return visitNode(cbNode, (<TryStatement>node).tryBlock) ||\n                    visitNode(cbNode, (<TryStatement>node).catchClause) ||\n                    visitNode(cbNode, (<TryStatement>node).finallyBlock);\n            case SyntaxKind.CatchClause:\n                return visitNode(cbNode, (<CatchClause>node).variableDeclaration) ||\n                    visitNode(cbNode, (<CatchClause>node).block);\n            case SyntaxKind.Decorator:\n                return visitNode(cbNode, (<Decorator>node).expression);\n            case SyntaxKind.ClassDeclaration:\n            case SyntaxKind.ClassExpression:\n                return visitNodes(cbNode, cbNodes, node.decorators) ||\n                    visitNodes(cbNode, cbNodes, node.modifiers) ||\n                    visitNode(cbNode, (<ClassLikeDeclaration>node).name) ||\n                    visitNodes(cbNode, cbNodes, (<ClassLikeDeclaration>node).typeParameters) ||\n                    visitNodes(cbNode, cbNodes, (<ClassLikeDeclaration>node).heritageClauses) ||\n                    visitNodes(cbNode, cbNodes, (<ClassLikeDeclaration>node).members);\n            case SyntaxKind.InterfaceDeclaration:\n                return visitNodes(cbNode, cbNodes, node.decorators) ||\n                    visitNodes(cbNode, cbNodes, node.modifiers) ||\n                    visitNode(cbNode, (<InterfaceDeclaration>node).name) ||\n                    visitNodes(cbNode, cbNodes, (<InterfaceDeclaration>node).typeParameters) ||\n                    visitNodes(cbNode, cbNodes, (<ClassDeclaration>node).heritageClauses) ||\n                    visitNodes(cbNode, cbNodes, (<InterfaceDeclaration>node).members);\n            case 
SyntaxKind.TypeAliasDeclaration:\n                return visitNodes(cbNode, cbNodes, node.decorators) ||\n                    visitNodes(cbNode, cbNodes, node.modifiers) ||\n                    visitNode(cbNode, (<TypeAliasDeclaration>node).name) ||\n                    visitNodes(cbNode, cbNodes, (<TypeAliasDeclaration>node).typeParameters) ||\n                    visitNode(cbNode, (<TypeAliasDeclaration>node).type);\n            case SyntaxKind.EnumDeclaration:\n                return visitNodes(cbNode, cbNodes, node.decorators) ||\n                    visitNodes(cbNode, cbNodes, node.modifiers) ||\n                    visitNode(cbNode, (<EnumDeclaration>node).name) ||\n                    visitNodes(cbNode, cbNodes, (<EnumDeclaration>node).members);\n            case SyntaxKind.EnumMember:\n                return visitNode(cbNode, (<EnumMember>node).name) ||\n                    visitNode(cbNode, (<EnumMember>node).initializer);\n            case SyntaxKind.ModuleDeclaration:\n                return visitNodes(cbNode, cbNodes, node.decorators) ||\n                    visitNodes(cbNode, cbNodes, node.modifiers) ||\n                    visitNode(cbNode, (<ModuleDeclaration>node).name) ||\n                    visitNode(cbNode, (<ModuleDeclaration>node).body);\n            case SyntaxKind.ImportEqualsDeclaration:\n                return visitNodes(cbNode, cbNodes, node.decorators) ||\n                    visitNodes(cbNode, cbNodes, node.modifiers) ||\n                    visitNode(cbNode, (<ImportEqualsDeclaration>node).name) ||\n                    visitNode(cbNode, (<ImportEqualsDeclaration>node).moduleReference);\n            case SyntaxKind.ImportDeclaration:\n                return visitNodes(cbNode, cbNodes, node.decorators) ||\n                    visitNodes(cbNode, cbNodes, node.modifiers) ||\n                    visitNode(cbNode, (<ImportDeclaration>node).importClause) ||\n                    visitNode(cbNode, (<ImportDeclaration>node).moduleSpecifier);\n   
         case SyntaxKind.ImportClause:\n                return visitNode(cbNode, (<ImportClause>node).name) ||\n                    visitNode(cbNode, (<ImportClause>node).namedBindings);\n            case SyntaxKind.NamespaceExportDeclaration:\n                return visitNode(cbNode, (<NamespaceExportDeclaration>node).name);\n\n            case SyntaxKind.NamespaceImport:\n                return visitNode(cbNode, (<NamespaceImport>node).name);\n            case SyntaxKind.NamedImports:\n            case SyntaxKind.NamedExports:\n                return visitNodes(cbNode, cbNodes, (<NamedImportsOrExports>node).elements);\n            case SyntaxKind.ExportDeclaration:\n                return visitNodes(cbNode, cbNodes, node.decorators) ||\n                    visitNodes(cbNode, cbNodes, node.modifiers) ||\n                    visitNode(cbNode, (<ExportDeclaration>node).exportClause) ||\n                    visitNode(cbNode, (<ExportDeclaration>node).moduleSpecifier);\n            case SyntaxKind.ImportSpecifier:\n            case SyntaxKind.ExportSpecifier:\n                return visitNode(cbNode, (<ImportOrExportSpecifier>node).propertyName) ||\n                    visitNode(cbNode, (<ImportOrExportSpecifier>node).name);\n            case SyntaxKind.ExportAssignment:\n                return visitNodes(cbNode, cbNodes, node.decorators) ||\n                    visitNodes(cbNode, cbNodes, node.modifiers) ||\n                    visitNode(cbNode, (<ExportAssignment>node).expression);\n            case SyntaxKind.TemplateExpression:\n                return visitNode(cbNode, (<TemplateExpression>node).head) || visitNodes(cbNode, cbNodes, (<TemplateExpression>node).templateSpans);\n            case SyntaxKind.TemplateSpan:\n                return visitNode(cbNode, (<TemplateSpan>node).expression) || visitNode(cbNode, (<TemplateSpan>node).literal);\n            case SyntaxKind.ComputedPropertyName:\n                return visitNode(cbNode, 
(<ComputedPropertyName>node).expression);\n            case SyntaxKind.HeritageClause:\n                return visitNodes(cbNode, cbNodes, (<HeritageClause>node).types);\n            case SyntaxKind.ExpressionWithTypeArguments:\n                return visitNode(cbNode, (<ExpressionWithTypeArguments>node).expression) ||\n                    visitNodes(cbNode, cbNodes, (<ExpressionWithTypeArguments>node).typeArguments);\n            case SyntaxKind.ExternalModuleReference:\n                return visitNode(cbNode, (<ExternalModuleReference>node).expression);\n            case SyntaxKind.MissingDeclaration:\n                return visitNodes(cbNode, cbNodes, node.decorators);\n            case SyntaxKind.CommaListExpression:\n                return visitNodes(cbNode, cbNodes, (<CommaListExpression>node).elements);\n\n            case SyntaxKind.JsxElement:\n                return visitNode(cbNode, (<JsxElement>node).openingElement) ||\n                    visitNodes(cbNode, cbNodes, (<JsxElement>node).children) ||\n                    visitNode(cbNode, (<JsxElement>node).closingElement);\n            case SyntaxKind.JsxFragment:\n                return visitNode(cbNode, (<JsxFragment>node).openingFragment) ||\n                    visitNodes(cbNode, cbNodes, (<JsxFragment>node).children) ||\n                    visitNode(cbNode, (<JsxFragment>node).closingFragment);\n            case SyntaxKind.JsxSelfClosingElement:\n            case SyntaxKind.JsxOpeningElement:\n                return visitNode(cbNode, (<JsxOpeningLikeElement>node).tagName) ||\n                    visitNode(cbNode, (<JsxOpeningLikeElement>node).attributes);\n            case SyntaxKind.JsxAttributes:\n                return visitNodes(cbNode, cbNodes, (<JsxAttributes>node).properties);\n            case SyntaxKind.JsxAttribute:\n                return visitNode(cbNode, (<JsxAttribute>node).name) ||\n                    visitNode(cbNode, (<JsxAttribute>node).initializer);\n            case 
SyntaxKind.JsxSpreadAttribute:\n                return visitNode(cbNode, (<JsxSpreadAttribute>node).expression);\n            case SyntaxKind.JsxExpression:\n                return visitNode(cbNode, (node as JsxExpression).dotDotDotToken) ||\n                    visitNode(cbNode, (node as JsxExpression).expression);\n            case SyntaxKind.JsxClosingElement:\n                return visitNode(cbNode, (<JsxClosingElement>node).tagName);\n\n            case SyntaxKind.JSDocTypeExpression:\n                return visitNode(cbNode, (<JSDocTypeExpression>node).type);\n            case SyntaxKind.JSDocNonNullableType:\n                return visitNode(cbNode, (<JSDocNonNullableType>node).type);\n            case SyntaxKind.JSDocNullableType:\n                return visitNode(cbNode, (<JSDocNullableType>node).type);\n            case SyntaxKind.JSDocOptionalType:\n                return visitNode(cbNode, (<JSDocOptionalType>node).type);\n            case SyntaxKind.JSDocFunctionType:\n                return visitNodes(cbNode, cbNodes, (<JSDocFunctionType>node).parameters) ||\n                    visitNode(cbNode, (<JSDocFunctionType>node).type);\n            case SyntaxKind.JSDocVariadicType:\n                return visitNode(cbNode, (<JSDocVariadicType>node).type);\n            case SyntaxKind.JSDocComment:\n                return visitNodes(cbNode, cbNodes, (<JSDoc>node).tags);\n            case SyntaxKind.JSDocParameterTag:\n            case SyntaxKind.JSDocPropertyTag:\n                if ((node as JSDocPropertyLikeTag).isNameFirst) {\n                    return visitNode(cbNode, (<JSDocPropertyLikeTag>node).name) ||\n                        visitNode(cbNode, (<JSDocPropertyLikeTag>node).typeExpression);\n                }\n                else {\n                    return visitNode(cbNode, (<JSDocPropertyLikeTag>node).typeExpression) ||\n                        visitNode(cbNode, (<JSDocPropertyLikeTag>node).name);\n                }\n            case 
SyntaxKind.JSDocReturnTag:\n                return visitNode(cbNode, (<JSDocReturnTag>node).typeExpression);\n            case SyntaxKind.JSDocTypeTag:\n                return visitNode(cbNode, (<JSDocTypeTag>node).typeExpression);\n            case SyntaxKind.JSDocAugmentsTag:\n                return visitNode(cbNode, (<JSDocAugmentsTag>node).class);\n            case SyntaxKind.JSDocTemplateTag:\n                return visitNodes(cbNode, cbNodes, (<JSDocTemplateTag>node).typeParameters);\n            case SyntaxKind.JSDocTypedefTag:\n                if ((node as JSDocTypedefTag).typeExpression &&\n                    (node as JSDocTypedefTag).typeExpression.kind === SyntaxKind.JSDocTypeExpression) {\n                    return visitNode(cbNode, (<JSDocTypedefTag>node).typeExpression) ||\n                        visitNode(cbNode, (<JSDocTypedefTag>node).fullName);\n                }\n                else {\n                    return visitNode(cbNode, (<JSDocTypedefTag>node).fullName) ||\n                        visitNode(cbNode, (<JSDocTypedefTag>node).typeExpression);\n                }\n            case SyntaxKind.JSDocTypeLiteral:\n                if ((node as JSDocTypeLiteral).jsDocPropertyTags) {\n                    for (const tag of (node as JSDocTypeLiteral).jsDocPropertyTags) {\n                        visitNode(cbNode, tag);\n                    }\n                }\n                return;\n            case SyntaxKind.PartiallyEmittedExpression:\n                return visitNode(cbNode, (<PartiallyEmittedExpression>node).expression);\n        }\n    }\n\n    export function createSourceFile(fileName: string, sourceText: string, languageVersion: ScriptTarget, setParentNodes = false, scriptKind?: ScriptKind): SourceFile {\n        performance.mark(\"beforeParse\");\n        const result = Parser.parseSourceFile(fileName, sourceText, languageVersion, /*syntaxCursor*/ undefined, setParentNodes, scriptKind);\n        performance.mark(\"afterParse\");\n      
  performance.measure(\"Parse\", \"beforeParse\", \"afterParse\");\n        return result;\n    }\n\n    export function parseIsolatedEntityName(text: string, languageVersion: ScriptTarget): EntityName {\n        return Parser.parseIsolatedEntityName(text, languageVersion);\n    }\n\n    /**\n     * Parses JSON text into a SyntaxTree, returning the root node and any parse errors.\n     * @param fileName The file name to associate with the resulting source file.\n     * @param sourceText The JSON text to parse.\n     */\n    export function parseJsonText(fileName: string, sourceText: string): JsonSourceFile {\n        return Parser.parseJsonText(fileName, sourceText);\n    }\n\n    // See also `isExternalOrCommonJsModule` in utilities.ts\n    export function isExternalModule(file: SourceFile): boolean {\n        return file.externalModuleIndicator !== undefined;\n    }\n\n    // Produces a new SourceFile for the 'newText' provided. The 'textChangeRange' parameter\n    // indicates what changed between the 'text' that this SourceFile has and the 'newText'.\n    // The SourceFile will be created with the compiler attempting to reuse as many nodes from\n    // this file as possible.\n    //\n    // Note: this function mutates nodes from this SourceFile. That means any existing nodes\n    // from this SourceFile that are being held onto may change as a result (including\n    // becoming detached from any SourceFile).  It is recommended that this SourceFile not\n    // be used once 'update' is called on it.\n    export function updateSourceFile(sourceFile: SourceFile, newText: string, textChangeRange: TextChangeRange, aggressiveChecks?: boolean): SourceFile {\n        const newSourceFile = IncrementalParser.updateSourceFile(sourceFile, newText, textChangeRange, aggressiveChecks);\n        // Because a new source file node is created, it may not have the NodeFlags.PossiblyContainsDynamicImport flag set. 
This is the case if no new edit added a dynamic import.\n        // We will manually port the flag to the new source file.\n        newSourceFile.flags |= (sourceFile.flags & NodeFlags.PossiblyContainsDynamicImport);\n        return newSourceFile;\n    }\n\n    /* @internal */\n    export function parseIsolatedJSDocComment(content: string, start?: number, length?: number) {\n        const result = Parser.JSDocParser.parseIsolatedJSDocComment(content, start, length);\n        if (result && result.jsDoc) {\n            // Because the jsDocComment was parsed out of the source file, it might\n            // not be covered by fixupParentReferences.\n            Parser.fixupParentReferences(result.jsDoc);\n        }\n\n        return result;\n    }\n\n    /* @internal */\n    // Exposed only for testing.\n    export function parseJSDocTypeExpressionForTests(content: string, start?: number, length?: number) {\n        return Parser.JSDocParser.parseJSDocTypeExpressionForTests(content, start, length);\n    }\n\n    // Implement the parser as a singleton module.  We do this for perf reasons because creating\n    // parser instances can actually be expensive enough to impact us on projects with many source\n    // files.\n    namespace Parser {\n        // Share a single scanner across all calls to parse a source file.  
This helps speed things\n        // up by avoiding the cost of creating/compiling scanners over and over again.\n        const scanner = createScanner(ScriptTarget.Latest, /*skipTrivia*/ true);\n        const disallowInAndDecoratorContext = NodeFlags.DisallowInContext | NodeFlags.DecoratorContext;\n\n        // capture constructors in 'initializeState' to avoid null checks\n        // tslint:disable variable-name\n        let NodeConstructor: new (kind: SyntaxKind, pos: number, end: number) => Node;\n        let TokenConstructor: new (kind: SyntaxKind, pos: number, end: number) => Node;\n        let IdentifierConstructor: new (kind: SyntaxKind, pos: number, end: number) => Node;\n        let SourceFileConstructor: new (kind: SyntaxKind, pos: number, end: number) => Node;\n        // tslint:enable variable-name\n\n        let sourceFile: SourceFile;\n        let parseDiagnostics: Diagnostic[];\n        let syntaxCursor: IncrementalParser.SyntaxCursor;\n\n        let currentToken: SyntaxKind;\n        let sourceText: string;\n        let nodeCount: number;\n        let identifiers: Map<string>;\n        let identifierCount: number;\n\n        let parsingContext: ParsingContext;\n\n        // Flags that dictate what parsing context we're in.  For example:\n        // Whether or not we are in strict parsing mode.  All that changes in strict parsing mode is\n        // that some tokens that would be considered identifiers may be considered keywords.\n        //\n        // When adding more parser context flags, consider which is the more common case that the\n        // flag will be in.  This should be the 'false' state for that flag.  The reason for this is\n        // that we don't store data in our nodes unless the value is in the *non-default* state.  So,\n        // for example, more often than not code 'allows-in' (or doesn't 'disallow-in').  We opt for\n        // 'disallow-in' set to 'false'.  
Otherwise, if we had 'allowsIn' set to 'true', then almost\n        // all nodes would need extra state on them to store this info.\n        //\n        // Note: 'allowIn' and 'allowYield' track 1:1 with the [in] and [yield] concepts in the ES6\n        // grammar specification.\n        //\n        // An important thing to note about these context concepts: by default they are effectively inherited\n        // while parsing through every grammar production.  That is, if you don't change them, then when\n        // you parse a sub-production, it will have the same context values as the parent production.\n        // This is great most of the time.  After all, consider all the 'expression' grammar productions\n        // and how nearly all of them pass along the 'in' and 'yield' context values:\n        //\n        // EqualityExpression[In, Yield] :\n        //      RelationalExpression[?In, ?Yield]\n        //      EqualityExpression[?In, ?Yield] == RelationalExpression[?In, ?Yield]\n        //      EqualityExpression[?In, ?Yield] != RelationalExpression[?In, ?Yield]\n        //      EqualityExpression[?In, ?Yield] === RelationalExpression[?In, ?Yield]\n        //      EqualityExpression[?In, ?Yield] !== RelationalExpression[?In, ?Yield]\n        //\n        // Where you have to be careful is in understanding the points in the grammar\n        // where the values are *not* passed along.  
For example:\n        //\n        // SingleNameBinding[Yield,GeneratorParameter]\n        //      [+GeneratorParameter]BindingIdentifier[Yield] Initializer[In]opt\n        //      [~GeneratorParameter]BindingIdentifier[?Yield]Initializer[In, ?Yield]opt\n        //\n        // Here this is saying that if the GeneratorParameter context flag is set, we should\n        // explicitly set the 'yield' context flag before calling into the BindingIdentifier,\n        // and we should explicitly clear the 'yield' context flag before calling into the Initializer\n        // production.  Conversely, if the GeneratorParameter context flag is not set, then we\n        // should leave the 'yield' context flag alone.\n        //\n        // Getting this all correct is tricky and requires careful reading of the grammar to\n        // understand when these values should be changed versus when they should be inherited.\n        //\n        // Note: it should not be necessary to save/restore these flags during speculative/lookahead\n        // parsing.  These context flags are naturally stored and restored through normal recursive\n        // descent parsing and unwinding.\n        let contextFlags: NodeFlags;\n\n        // Whether or not we've had a parse error since creating the last AST node.  If we have\n        // encountered an error, it will be stored on the next AST node we create.  Parse errors\n        // can be broken down into three categories:\n        //\n        // 1) An error that occurred during scanning.  For example, an unterminated literal, or a\n        //    character that was completely not understood.\n        //\n        // 2) A token was expected, but was not present.  This type of error is commonly produced\n        //    by the 'parseExpected' function.\n        //\n        // 3) A token was present that no parsing function was able to consume.  
This type of error\n        //    only occurs in the 'abortParsingListOrMoveToNextToken' function when the parser\n        //    decides to skip the token.\n        //\n        // In all of these cases, we want to mark the next node as having had an error before it.\n        // With this mark, we can know in incremental settings if this node can be reused, or if\n        // we have to reparse it.  If we don't keep this information around, we may just reuse the\n        // node.  In that event we would not produce the same errors as we did before, causing\n        // significant confusion.\n        //\n        // Note: it is necessary that this value be saved/restored during speculative/lookahead\n        // parsing.  During lookahead parsing, we will often create a node.  That node will have\n        // this value attached, and then this value will be set back to 'false'.  If we decide to\n        // rewind, we must get back to the same value we had prior to the lookahead.\n        //\n        // Note: any errors at the end of the file that do not precede a regular node should get\n        // attached to the EOF token.\n        let parseErrorBeforeNextFinishedNode = false;\n\n        export function parseSourceFile(fileName: string, sourceText: string, languageVersion: ScriptTarget, syntaxCursor: IncrementalParser.SyntaxCursor, setParentNodes?: boolean, scriptKind?: ScriptKind): SourceFile {\n            scriptKind = ensureScriptKind(fileName, scriptKind);\n\n            initializeState(sourceText, languageVersion, syntaxCursor, scriptKind);\n\n            const result = parseSourceFileWorker(fileName, languageVersion, setParentNodes, scriptKind);\n\n            clearState();\n\n            return result;\n        }\n\n        export function parseIsolatedEntityName(content: string, languageVersion: ScriptTarget): EntityName {\n            // Choice of `isDeclarationFile` should be arbitrary\n            initializeState(content, languageVersion, 
/*syntaxCursor*/ undefined, ScriptKind.JS);\n            // Prime the scanner.\n            nextToken();\n            const entityName = parseEntityName(/*allowReservedWords*/ true);\n            // The parse succeeded only if it consumed the entire text and produced no diagnostics.\n            const isValid = token() === SyntaxKind.EndOfFileToken && !parseDiagnostics.length;\n            clearState();\n            return isValid ? entityName : undefined;\n        }\n\n        export function parseJsonText(fileName: string, sourceText: string): JsonSourceFile {\n            initializeState(sourceText, ScriptTarget.ES2015, /*syntaxCursor*/ undefined, ScriptKind.JSON);\n            // Set source file so that errors will be reported with this file name\n            sourceFile = createSourceFile(fileName, ScriptTarget.ES2015, ScriptKind.JSON, /*isDeclarationFile*/ false);\n            const result = <JsonSourceFile>sourceFile;\n\n            // Prime the scanner.\n            nextToken();\n            if (token() === SyntaxKind.EndOfFileToken) {\n                sourceFile.endOfFileToken = <EndOfFileToken>parseTokenNode();\n            }\n            else if (token() === SyntaxKind.OpenBraceToken ||\n                lookAhead(() => token() === SyntaxKind.StringLiteral)) {\n                result.jsonObject = parseObjectLiteralExpression();\n                sourceFile.endOfFileToken = parseExpectedToken(SyntaxKind.EndOfFileToken, Diagnostics.Unexpected_token);\n            }\n            else {\n                parseExpected(SyntaxKind.OpenBraceToken);\n            }\n\n            sourceFile.parseDiagnostics = parseDiagnostics;\n            clearState();\n            return result;\n        }\n\n        function getLanguageVariant(scriptKind: ScriptKind) {\n            // .tsx and .jsx files (and also .js and .json files) are treated as the jsx language variant.\n            return scriptKind === ScriptKind.TSX || scriptKind === ScriptKind.JSX || scriptKind === ScriptKind.JS || scriptKind === ScriptKind.JSON ? 
LanguageVariant.JSX : LanguageVariant.Standard;\n        }\n\n        function initializeState(_sourceText: string, languageVersion: ScriptTarget, _syntaxCursor: IncrementalParser.SyntaxCursor, scriptKind: ScriptKind) {\n            NodeConstructor = objectAllocator.getNodeConstructor();\n            TokenConstructor = objectAllocator.getTokenConstructor();\n            IdentifierConstructor = objectAllocator.getIdentifierConstructor();\n            SourceFileConstructor = objectAllocator.getSourceFileConstructor();\n\n            sourceText = _sourceText;\n            syntaxCursor = _syntaxCursor;\n\n            parseDiagnostics = [];\n            parsingContext = 0;\n            identifiers = createMap<string>();\n            identifierCount = 0;\n            nodeCount = 0;\n\n            switch (scriptKind) {\n                case ScriptKind.JS:\n                case ScriptKind.JSX:\n                case ScriptKind.JSON:\n                    contextFlags = NodeFlags.JavaScriptFile;\n                    break;\n                default:\n                    contextFlags = NodeFlags.None;\n                    break;\n            }\n            parseErrorBeforeNextFinishedNode = false;\n\n            // Initialize and prime the scanner before parsing the source elements.\n            scanner.setText(sourceText);\n            scanner.setOnError(scanError);\n            scanner.setScriptTarget(languageVersion);\n            scanner.setLanguageVariant(getLanguageVariant(scriptKind));\n        }\n\n        function clearState() {\n            // Clear out the text the scanner is pointing at, so it doesn't keep anything alive unnecessarily.\n            scanner.setText(\"\");\n            scanner.setOnError(undefined);\n\n            // Clear any data.  
We don't want to accidentally hold onto it for too long.\n            parseDiagnostics = undefined;\n            sourceFile = undefined;\n            identifiers = undefined;\n            syntaxCursor = undefined;\n            sourceText = undefined;\n        }\n\n        function parseSourceFileWorker(fileName: string, languageVersion: ScriptTarget, setParentNodes: boolean, scriptKind: ScriptKind): SourceFile {\n            const isDeclarationFile = isDeclarationFileName(fileName);\n            if (isDeclarationFile) {\n                contextFlags |= NodeFlags.Ambient;\n            }\n\n            sourceFile = createSourceFile(fileName, languageVersion, scriptKind, isDeclarationFile);\n            sourceFile.flags = contextFlags;\n\n            // Prime the scanner.\n            nextToken();\n            processReferenceComments(sourceFile);\n\n            sourceFile.statements = parseList(ParsingContext.SourceElements, parseStatement);\n            Debug.assert(token() === SyntaxKind.EndOfFileToken);\n            sourceFile.endOfFileToken = addJSDocComment(parseTokenNode() as EndOfFileToken);\n\n            setExternalModuleIndicator(sourceFile);\n\n            sourceFile.nodeCount = nodeCount;\n            sourceFile.identifierCount = identifierCount;\n            sourceFile.identifiers = identifiers;\n            sourceFile.parseDiagnostics = parseDiagnostics;\n\n            if (setParentNodes) {\n                fixupParentReferences(sourceFile);\n            }\n\n            return sourceFile;\n        }\n\n        function addJSDocComment<T extends HasJSDoc>(node: T): T {\n            const comments = getJSDocCommentRanges(node, sourceFile.text);\n            if (comments) {\n                for (const comment of comments) {\n                    node.jsDoc = append(node.jsDoc, JSDocParser.parseJSDocComment(node, comment.pos, comment.end - comment.pos));\n                }\n            }\n\n            return node;\n        }\n\n        export function 
fixupParentReferences(rootNode: Node) {\n            // Normally parent references are set during binding. However, for clients that only need\n            // a syntax tree and no semantic features, the binding process is unnecessary\n            // overhead.  This function allows us to set all the parents without all the expense of\n            // binding.\n\n            let parent: Node = rootNode;\n            forEachChild(rootNode, visitNode);\n            return;\n\n            function visitNode(n: Node): void {\n                // Walk down, setting parents that differ from the parent we think each node should have.  This\n                // allows us to quickly bail out of setting parents for subtrees during incremental\n                // parsing.\n                if (n.parent !== parent) {\n                    n.parent = parent;\n\n                    const saveParent = parent;\n                    parent = n;\n                    forEachChild(n, visitNode);\n                    if (hasJSDocNodes(n)) {\n                        for (const jsDoc of n.jsDoc) {\n                            jsDoc.parent = n;\n                            parent = jsDoc;\n                            forEachChild(jsDoc, visitNode);\n                        }\n                    }\n                    parent = saveParent;\n                }\n            }\n        }\n\n        function createSourceFile(fileName: string, languageVersion: ScriptTarget, scriptKind: ScriptKind, isDeclarationFile: boolean): SourceFile {\n            // Code from createNode is inlined here so createNode won't have to deal with the special case of creating source files;\n            // this is quite rare compared to other nodes and createNode should be as fast as possible.\n            const sourceFile = <SourceFile>new SourceFileConstructor(SyntaxKind.SourceFile, /*pos*/ 0, /* end */ sourceText.length);\n            nodeCount++;\n\n            sourceFile.text = sourceText;\n            
sourceFile.bindDiagnostics = [];\n            sourceFile.languageVersion = languageVersion;\n            sourceFile.fileName = normalizePath(fileName);\n            sourceFile.languageVariant = getLanguageVariant(scriptKind);\n            sourceFile.isDeclarationFile = isDeclarationFile;\n            sourceFile.scriptKind = scriptKind;\n\n            return sourceFile;\n        }\n\n        function setContextFlag(val: boolean, flag: NodeFlags) {\n            if (val) {\n                contextFlags |= flag;\n            }\n            else {\n                contextFlags &= ~flag;\n            }\n        }\n\n        function setDisallowInContext(val: boolean) {\n            setContextFlag(val, NodeFlags.DisallowInContext);\n        }\n\n        function setYieldContext(val: boolean) {\n            setContextFlag(val, NodeFlags.YieldContext);\n        }\n\n        function setDecoratorContext(val: boolean) {\n            setContextFlag(val, NodeFlags.DecoratorContext);\n        }\n\n        function setAwaitContext(val: boolean) {\n            setContextFlag(val, NodeFlags.AwaitContext);\n        }\n\n        function doOutsideOfContext<T>(context: NodeFlags, func: () => T): T {\n            // contextFlagsToClear will contain only the context flags that are\n            // currently set that we need to temporarily clear\n            // We don't just blindly reset to the previous flags to ensure\n            // that we do not mutate cached flags for the incremental\n            // parser (ThisNodeHasError, ThisNodeOrAnySubNodesHasError, and\n            // HasAggregatedChildData).\n            const contextFlagsToClear = context & contextFlags;\n            if (contextFlagsToClear) {\n                // clear the requested context flags\n                setContextFlag(/*val*/ false, contextFlagsToClear);\n                const result = func();\n                // restore the context flags we just cleared\n                setContextFlag(/*val*/ true, 
contextFlagsToClear);\n                return result;\n            }\n\n            // no need to do anything special as we are not in any of the requested contexts\n            return func();\n        }\n\n        function doInsideOfContext<T>(context: NodeFlags, func: () => T): T {\n            // contextFlagsToSet will contain only the context flags that\n            // are not currently set that we need to temporarily enable.\n            // We don't just blindly reset to the previous flags to ensure\n            // that we do not mutate cached flags for the incremental\n            // parser (ThisNodeHasError, ThisNodeOrAnySubNodesHasError, and\n            // HasAggregatedChildData).\n            const contextFlagsToSet = context & ~contextFlags;\n            if (contextFlagsToSet) {\n                // set the requested context flags\n                setContextFlag(/*val*/ true, contextFlagsToSet);\n                const result = func();\n                // reset the context flags we just set\n                setContextFlag(/*val*/ false, contextFlagsToSet);\n                return result;\n            }\n\n            // no need to do anything special as we are already in all of the requested contexts\n            return func();\n        }\n\n        function allowInAnd<T>(func: () => T): T {\n            return doOutsideOfContext(NodeFlags.DisallowInContext, func);\n        }\n\n        function disallowInAnd<T>(func: () => T): T {\n            return doInsideOfContext(NodeFlags.DisallowInContext, func);\n        }\n\n        function doInYieldContext<T>(func: () => T): T {\n            return doInsideOfContext(NodeFlags.YieldContext, func);\n        }\n\n        function doInDecoratorContext<T>(func: () => T): T {\n            return doInsideOfContext(NodeFlags.DecoratorContext, func);\n        }\n\n        function doInAwaitContext<T>(func: () => T): T {\n            return doInsideOfContext(NodeFlags.AwaitContext, func);\n        }\n\n        function 
doOutsideOfAwaitContext<T>(func: () => T): T {\n            return doOutsideOfContext(NodeFlags.AwaitContext, func);\n        }\n\n        function doInYieldAndAwaitContext<T>(func: () => T): T {\n            return doInsideOfContext(NodeFlags.YieldContext | NodeFlags.AwaitContext, func);\n        }\n\n        function inContext(flags: NodeFlags) {\n            return (contextFlags & flags) !== 0;\n        }\n\n        function inYieldContext() {\n            return inContext(NodeFlags.YieldContext);\n        }\n\n        function inDisallowInContext() {\n            return inContext(NodeFlags.DisallowInContext);\n        }\n\n        function inDecoratorContext() {\n            return inContext(NodeFlags.DecoratorContext);\n        }\n\n        function inAwaitContext() {\n            return inContext(NodeFlags.AwaitContext);\n        }\n\n        function parseErrorAtCurrentToken(message: DiagnosticMessage, arg0?: any): void {\n            const start = scanner.getTokenPos();\n            const length = scanner.getTextPos() - start;\n\n            parseErrorAtPosition(start, length, message, arg0);\n        }\n\n        function parseErrorAtPosition(start: number, length: number, message: DiagnosticMessage, arg0?: any): void {\n            // Don't report another error if it would just be at the same position as the last error.\n            const lastError = lastOrUndefined(parseDiagnostics);\n            if (!lastError || start !== lastError.start) {\n                parseDiagnostics.push(createFileDiagnostic(sourceFile, start, length, message, arg0));\n            }\n\n            // Mark that we've encountered an error.  
We'll set an appropriate bit on the next\n            // node we finish so that it can't be reused incrementally.\n            parseErrorBeforeNextFinishedNode = true;\n        }\n\n        function scanError(message: DiagnosticMessage, length?: number) {\n            const pos = scanner.getTextPos();\n            parseErrorAtPosition(pos, length || 0, message);\n        }\n\n        function getNodePos(): number {\n            return scanner.getStartPos();\n        }\n\n        // Use this function to access the current token instead of reading the currentToken\n        // variable. Since function results aren't narrowed in control flow analysis, this ensures\n        // that the type checker doesn't make wrong assumptions about the type of the current\n        // token (e.g. a call to nextToken() changes the current token but the checker doesn't\n        // reason about this side effect).  Mainstream VMs inline simple functions like this, so\n        // there is no performance penalty.\n        function token(): SyntaxKind {\n            return currentToken;\n        }\n\n        function nextToken(): SyntaxKind {\n            return currentToken = scanner.scan();\n        }\n\n        function reScanGreaterToken(): SyntaxKind {\n            return currentToken = scanner.reScanGreaterToken();\n        }\n\n        function reScanSlashToken(): SyntaxKind {\n            return currentToken = scanner.reScanSlashToken();\n        }\n\n        function reScanTemplateToken(): SyntaxKind {\n            return currentToken = scanner.reScanTemplateToken();\n        }\n\n        function scanJsxIdentifier(): SyntaxKind {\n            return currentToken = scanner.scanJsxIdentifier();\n        }\n\n        function scanJsxText(): SyntaxKind {\n            return currentToken = scanner.scanJsxToken();\n        }\n\n        function scanJsxAttributeValue(): SyntaxKind {\n            return currentToken = scanner.scanJsxAttributeValue();\n        }\n\n        function 
speculationHelper<T>(callback: () => T, isLookAhead: boolean): T {\n            // Keep track of the state we'll need to roll back to if lookahead fails (or if the\n            // caller asked us to always reset our state).\n            const saveToken = currentToken;\n            const saveParseDiagnosticsLength = parseDiagnostics.length;\n            const saveParseErrorBeforeNextFinishedNode = parseErrorBeforeNextFinishedNode;\n\n            // Note: it is not actually necessary to save/restore the context flags here.  That's\n            // because the saving/restoring of these flags happens naturally through the recursive\n            // descent nature of our parser.  However, we still store this here just so we can\n            // assert that the invariant holds.\n            const saveContextFlags = contextFlags;\n\n            // If we're only looking ahead, then tell the scanner to only look ahead as well.\n            // Otherwise, if we're actually speculatively parsing, then tell the scanner to do the\n            // same.\n            const result = isLookAhead\n                ? scanner.lookAhead(callback)\n                : scanner.tryScan(callback);\n\n            Debug.assert(saveContextFlags === contextFlags);\n\n            // If our callback returned something 'falsy' or we're just looking ahead,\n            // then unconditionally restore us to where we were.\n            if (!result || isLookAhead) {\n                currentToken = saveToken;\n                parseDiagnostics.length = saveParseDiagnosticsLength;\n                parseErrorBeforeNextFinishedNode = saveParseErrorBeforeNextFinishedNode;\n            }\n\n            return result;\n        }\n\n        /** Invokes the provided callback then unconditionally restores the parser to the state it\n         * was in immediately prior to invoking the callback.  
The result of invoking the callback\n         * is returned from this function.\n         */\n        function lookAhead<T>(callback: () => T): T {\n            return speculationHelper(callback, /*isLookAhead*/ true);\n        }\n\n        /** Invokes the provided callback.  If the callback returns something falsy, then it restores\n         * the parser to the state it was in immediately prior to invoking the callback.  If the\n         * callback returns something truthy, then the parser state is not rolled back.  The result\n         * of invoking the callback is returned from this function.\n         */\n        function tryParse<T>(callback: () => T): T {\n            return speculationHelper(callback, /*isLookAhead*/ false);\n        }\n\n        // Ignore strict mode flag because we will report an error in type checker instead.\n        function isIdentifier(): boolean {\n            if (token() === SyntaxKind.Identifier) {\n                return true;\n            }\n\n            // If we have a 'yield' keyword, and we're in the [Yield] context, then 'yield' is\n            // considered a keyword and is not an identifier.\n            if (token() === SyntaxKind.YieldKeyword && inYieldContext()) {\n                return false;\n            }\n\n            // If we have an 'await' keyword, and we're in the [Await] context, then 'await' is\n            // considered a keyword and is not an identifier.\n            if (token() === SyntaxKind.AwaitKeyword && inAwaitContext()) {\n                return false;\n            }\n\n            return token() > SyntaxKind.LastReservedWord;\n        }\n\n        function parseExpected(kind: SyntaxKind, diagnosticMessage?: DiagnosticMessage, shouldAdvance = true): boolean {\n            if (token() === kind) {\n                if (shouldAdvance) {\n                    nextToken();\n                }\n                return true;\n            }\n\n            // Report specific message if provided with one.  
Otherwise, report generic fallback message.\n            if (diagnosticMessage) {\n                parseErrorAtCurrentToken(diagnosticMessage);\n            }\n            else {\n                parseErrorAtCurrentToken(Diagnostics._0_expected, tokenToString(kind));\n            }\n            return false;\n        }\n\n        function parseOptional(t: SyntaxKind): boolean {\n            if (token() === t) {\n                nextToken();\n                return true;\n            }\n            return false;\n        }\n\n        function parseOptionalToken<TKind extends SyntaxKind>(t: TKind): Token<TKind>;\n        function parseOptionalToken(t: SyntaxKind): Node {\n            if (token() === t) {\n                return parseTokenNode();\n            }\n            return undefined;\n        }\n\n        function parseExpectedToken<TKind extends SyntaxKind>(t: TKind, diagnosticMessage?: DiagnosticMessage, arg0?: any): Token<TKind>;\n        function parseExpectedToken(t: SyntaxKind, diagnosticMessage?: DiagnosticMessage, arg0?: any): Node {\n            return parseOptionalToken(t) ||\n                createMissingNode(t, /*reportAtCurrentPosition*/ false, diagnosticMessage || Diagnostics._0_expected, arg0 || tokenToString(t));\n        }\n\n        function parseTokenNode<T extends Node>(): T {\n            const node = <T>createNode(token());\n            nextToken();\n            return finishNode(node);\n        }\n\n        function canParseSemicolon() {\n            // If there's a real semicolon, then we can always parse it out.\n            if (token() === SyntaxKind.SemicolonToken) {\n                return true;\n            }\n\n            // We can parse out an optional semicolon in ASI cases in the following cases.\n            return token() === SyntaxKind.CloseBraceToken || token() === SyntaxKind.EndOfFileToken || scanner.hasPrecedingLineBreak();\n        }\n\n        function parseSemicolon(): boolean {\n            if (canParseSemicolon()) 
{\n                if (token() === SyntaxKind.SemicolonToken) {\n                    // consume the semicolon if it was explicitly provided.\n                    nextToken();\n                }\n\n                return true;\n            }\n            else {\n                return parseExpected(SyntaxKind.SemicolonToken);\n            }\n        }\n\n        function createNode(kind: SyntaxKind, pos?: number): Node {\n            nodeCount++;\n            const p = pos >= 0 ? pos : scanner.getStartPos();\n            return isNodeKind(kind) || kind === SyntaxKind.Unknown ? new NodeConstructor(kind, p, p) :\n                kind === SyntaxKind.Identifier ? new IdentifierConstructor(kind, p, p) :\n                new TokenConstructor(kind, p, p);\n        }\n\n        function createNodeWithJSDoc(kind: SyntaxKind): Node {\n            const node = createNode(kind);\n            if (scanner.getTokenFlags() & TokenFlags.PrecedingJSDocComment) {\n                addJSDocComment(<HasJSDoc>node);\n            }\n            return node;\n        }\n\n        function createNodeArray<T extends Node>(elements: T[], pos: number, end?: number): NodeArray<T> {\n            // Since the element list of a node array is typically created by starting with an empty array and\n            // repeatedly calling push(), the list may not have the optimal memory layout. We invoke slice() for\n            // small arrays (1 to 4 elements) to give the VM a chance to allocate an optimal representation.\n            const length = elements.length;\n            const array = <MutableNodeArray<T>>(length >= 1 && length <= 4 ? elements.slice() : elements);\n            array.pos = pos;\n            array.end = end === undefined ? scanner.getStartPos() : end;\n            return array;\n        }\n\n        function finishNode<T extends Node>(node: T, end?: number): T {\n            node.end = end === undefined ? 
scanner.getStartPos() : end;\n\n            if (contextFlags) {\n                node.flags |= contextFlags;\n            }\n\n            // Mark the node if we encountered an error while parsing it.  If we did, then\n            // we cannot reuse the node incrementally.  Once we've marked this node, clear out the\n            // flag so that we don't mark any subsequent nodes.\n            if (parseErrorBeforeNextFinishedNode) {\n                parseErrorBeforeNextFinishedNode = false;\n                node.flags |= NodeFlags.ThisNodeHasError;\n            }\n\n            return node;\n        }\n\n        function createMissingNode<T extends Node>(kind: T[\"kind\"], reportAtCurrentPosition: boolean, diagnosticMessage: DiagnosticMessage, arg0?: any): T {\n            if (reportAtCurrentPosition) {\n                parseErrorAtPosition(scanner.getStartPos(), 0, diagnosticMessage, arg0);\n            }\n            else {\n                parseErrorAtCurrentToken(diagnosticMessage, arg0);\n            }\n\n            const result = createNode(kind);\n\n            if (kind === SyntaxKind.Identifier) {\n                (result as Identifier).escapedText = \"\" as __String;\n            }\n            else if (isLiteralKind(kind) || isTemplateLiteralKind(kind)) {\n                (result as LiteralLikeNode).text = \"\";\n            }\n\n            return finishNode(result) as T;\n        }\n\n        function internIdentifier(text: string): string {\n            let identifier = identifiers.get(text);\n            if (identifier === undefined) {\n                identifiers.set(text, identifier = text);\n            }\n            return identifier;\n        }\n\n        // An identifier that starts with two underscores has an extra underscore character prepended to it to avoid issues\n        // with magic property names like '__proto__'. 
The 'identifiers' object is used to share a single string instance for\n        // each identifier in order to reduce memory consumption.\n        function createIdentifier(isIdentifier: boolean, diagnosticMessage?: DiagnosticMessage): Identifier {\n            identifierCount++;\n            if (isIdentifier) {\n                const node = <Identifier>createNode(SyntaxKind.Identifier);\n\n                // Store original token kind if it is not just an Identifier so we can report appropriate error later in type checker\n                if (token() !== SyntaxKind.Identifier) {\n                    node.originalKeywordKind = token();\n                }\n                node.escapedText = escapeLeadingUnderscores(internIdentifier(scanner.getTokenValue()));\n                nextToken();\n                return finishNode(node);\n            }\n\n            // Only for end of file because the error gets reported incorrectly on embedded script tags.\n            const reportAtCurrentPosition = token() === SyntaxKind.EndOfFileToken;\n\n            return createMissingNode<Identifier>(SyntaxKind.Identifier, reportAtCurrentPosition, diagnosticMessage || Diagnostics.Identifier_expected);\n        }\n\n        function parseIdentifier(diagnosticMessage?: DiagnosticMessage): Identifier {\n            return createIdentifier(isIdentifier(), diagnosticMessage);\n        }\n\n        function parseIdentifierName(diagnosticMessage?: DiagnosticMessage): Identifier {\n            return createIdentifier(tokenIsIdentifierOrKeyword(token()), diagnosticMessage);\n        }\n\n        function isLiteralPropertyName(): boolean {\n            return tokenIsIdentifierOrKeyword(token()) ||\n                token() === SyntaxKind.StringLiteral ||\n                token() === SyntaxKind.NumericLiteral;\n        }\n\n        function parsePropertyNameWorker(allowComputedPropertyNames: boolean): PropertyName {\n            if (token() === SyntaxKind.StringLiteral || token() === 
SyntaxKind.NumericLiteral) {\n                const node = <StringLiteral | NumericLiteral>parseLiteralNode();\n                node.text = internIdentifier(node.text);\n                return node;\n            }\n            if (allowComputedPropertyNames && token() === SyntaxKind.OpenBracketToken) {\n                return parseComputedPropertyName();\n            }\n            return parseIdentifierName();\n        }\n\n        function parsePropertyName(): PropertyName {\n            return parsePropertyNameWorker(/*allowComputedPropertyNames*/ true);\n        }\n\n        function parseComputedPropertyName(): ComputedPropertyName {\n            // PropertyName [Yield]:\n            //      LiteralPropertyName\n            //      ComputedPropertyName[?Yield]\n            const node = <ComputedPropertyName>createNode(SyntaxKind.ComputedPropertyName);\n            parseExpected(SyntaxKind.OpenBracketToken);\n\n            // We parse any expression (including a comma expression). But the grammar\n            // says that only an assignment expression is allowed, so the grammar checker\n            // will error if it sees a comma expression.\n            node.expression = allowInAnd(parseExpression);\n\n            parseExpected(SyntaxKind.CloseBracketToken);\n            return finishNode(node);\n        }\n\n        function parseContextualModifier(t: SyntaxKind): boolean {\n            return token() === t && tryParse(nextTokenCanFollowModifier);\n        }\n\n        function nextTokenIsOnSameLineAndCanFollowModifier() {\n            nextToken();\n            if (scanner.hasPrecedingLineBreak()) {\n                return false;\n            }\n            return canFollowModifier();\n        }\n\n        function nextTokenCanFollowModifier() {\n            if (token() === SyntaxKind.ConstKeyword) {\n                // 'const' is only a modifier if followed by 'enum'.\n                return nextToken() === SyntaxKind.EnumKeyword;\n            }\n           
 if (token() === SyntaxKind.ExportKeyword) {\n                nextToken();\n                if (token() === SyntaxKind.DefaultKeyword) {\n                    return lookAhead(nextTokenCanFollowDefaultKeyword);\n                }\n                return token() !== SyntaxKind.AsteriskToken && token() !== SyntaxKind.AsKeyword && token() !== SyntaxKind.OpenBraceToken && canFollowModifier();\n            }\n            if (token() === SyntaxKind.DefaultKeyword) {\n                return nextTokenCanFollowDefaultKeyword();\n            }\n            if (token() === SyntaxKind.StaticKeyword) {\n                nextToken();\n                return canFollowModifier();\n            }\n\n            return nextTokenIsOnSameLineAndCanFollowModifier();\n        }\n\n        function parseAnyContextualModifier(): boolean {\n            return isModifierKind(token()) && tryParse(nextTokenCanFollowModifier);\n        }\n\n        function canFollowModifier(): boolean {\n            return token() === SyntaxKind.OpenBracketToken\n                || token() === SyntaxKind.OpenBraceToken\n                || token() === SyntaxKind.AsteriskToken\n                || token() === SyntaxKind.DotDotDotToken\n                || isLiteralPropertyName();\n        }\n\n        function nextTokenCanFollowDefaultKeyword(): boolean {\n            nextToken();\n            return token() === SyntaxKind.ClassKeyword || token() === SyntaxKind.FunctionKeyword ||\n                token() === SyntaxKind.InterfaceKeyword ||\n                (token() === SyntaxKind.AbstractKeyword && lookAhead(nextTokenIsClassKeywordOnSameLine)) ||\n                (token() === SyntaxKind.AsyncKeyword && lookAhead(nextTokenIsFunctionKeywordOnSameLine));\n        }\n\n        // True if positioned at the start of a list element\n        function isListElement(parsingContext: ParsingContext, inErrorRecovery: boolean): boolean {\n            const node = currentNode(parsingContext);\n            if (node) {\n              
  return true;\n            }\n\n            switch (parsingContext) {\n                case ParsingContext.SourceElements:\n                case ParsingContext.BlockStatements:\n                case ParsingContext.SwitchClauseStatements:\n                    // If we're in error recovery, then we don't want to treat ';' as an empty statement.\n                    // The problem is that ';' can show up in far too many contexts, and if we see one\n                    // and assume it's a statement, then we may bail out inappropriately from whatever\n                    // we're parsing.  For example, if we have a semicolon in the middle of a class, then\n                    // we really don't want to assume the class is over and we're on a statement in the\n                    // outer module.  We just want to consume and move on.\n                    return !(token() === SyntaxKind.SemicolonToken && inErrorRecovery) && isStartOfStatement();\n                case ParsingContext.SwitchClauses:\n                    return token() === SyntaxKind.CaseKeyword || token() === SyntaxKind.DefaultKeyword;\n                case ParsingContext.TypeMembers:\n                    return lookAhead(isTypeMemberStart);\n                case ParsingContext.ClassMembers:\n                    // We allow semicolons as class elements (as specified by ES6) as long as we're\n                    // not in error recovery.  If we're in error recovery, we don't want an errant\n                    // semicolon to be treated as a class member (since they're almost always used\n                    // for statements).\n                    return lookAhead(isClassMemberStart) || (token() === SyntaxKind.SemicolonToken && !inErrorRecovery);\n                case ParsingContext.EnumMembers:\n                    // Include open bracket computed properties. 
This technically also lets in indexers,\n                    // which would be a candidate for improved error reporting.\n                    return token() === SyntaxKind.OpenBracketToken || isLiteralPropertyName();\n                case ParsingContext.ObjectLiteralMembers:\n                    return token() === SyntaxKind.OpenBracketToken || token() === SyntaxKind.AsteriskToken || token() === SyntaxKind.DotDotDotToken || isLiteralPropertyName();\n                case ParsingContext.RestProperties:\n                    return isLiteralPropertyName();\n                case ParsingContext.ObjectBindingElements:\n                    return token() === SyntaxKind.OpenBracketToken || token() === SyntaxKind.DotDotDotToken || isLiteralPropertyName();\n                case ParsingContext.HeritageClauseElement:\n                    // If we see `{ ... }` then only consume it as an expression if it is followed by `,` or `{`\n                    // That way we won't consume the body of a class in its heritage clause.\n                    if (token() === SyntaxKind.OpenBraceToken) {\n                        return lookAhead(isValidHeritageClauseObjectLiteral);\n                    }\n\n                    if (!inErrorRecovery) {\n                        return isStartOfLeftHandSideExpression() && !isHeritageClauseExtendsOrImplementsKeyword();\n                    }\n                    else {\n                        // If we're in error recovery we tighten up what we're willing to match.\n                        // That way we don't treat something like \"this\" as a valid heritage clause\n                        // element during recovery.\n                        return isIdentifier() && !isHeritageClauseExtendsOrImplementsKeyword();\n                    }\n                case ParsingContext.VariableDeclarations:\n                    return isIdentifierOrPattern();\n                case ParsingContext.ArrayBindingElements:\n                    return token() === 
SyntaxKind.CommaToken || token() === SyntaxKind.DotDotDotToken || isIdentifierOrPattern();\n                case ParsingContext.TypeParameters:\n                    return isIdentifier();\n                case ParsingContext.ArrayLiteralMembers:\n                    if (token() === SyntaxKind.CommaToken) {\n                        return true;\n                    }\n                    // falls through\n                case ParsingContext.ArgumentExpressions:\n                    return token() === SyntaxKind.DotDotDotToken || isStartOfExpression();\n                case ParsingContext.Parameters:\n                    return isStartOfParameter();\n                case ParsingContext.TypeArguments:\n                case ParsingContext.TupleElementTypes:\n                    return token() === SyntaxKind.CommaToken || isStartOfType();\n                case ParsingContext.HeritageClauses:\n                    return isHeritageClause();\n                case ParsingContext.ImportOrExportSpecifiers:\n                    return tokenIsIdentifierOrKeyword(token());\n                case ParsingContext.JsxAttributes:\n                    return tokenIsIdentifierOrKeyword(token()) || token() === SyntaxKind.OpenBraceToken;\n                case ParsingContext.JsxChildren:\n                    return true;\n            }\n\n            Debug.fail(\"Non-exhaustive case in 'isListElement'.\");\n        }\n\n        function isValidHeritageClauseObjectLiteral() {\n            Debug.assert(token() === SyntaxKind.OpenBraceToken);\n            if (nextToken() === SyntaxKind.CloseBraceToken) {\n                // if we see \"extends {}\" then only treat the {} as what we're extending (and not\n                // the class body) if we have:\n                //\n                //      extends {} {\n                //      extends {},\n                //      extends {} extends\n                //      extends {} implements\n\n                const next = nextToken();\n               
 return next === SyntaxKind.CommaToken || next === SyntaxKind.OpenBraceToken || next === SyntaxKind.ExtendsKeyword || next === SyntaxKind.ImplementsKeyword;\n            }\n\n            return true;\n        }\n\n        function nextTokenIsIdentifier() {\n            nextToken();\n            return isIdentifier();\n        }\n\n        function nextTokenIsIdentifierOrKeyword() {\n            nextToken();\n            return tokenIsIdentifierOrKeyword(token());\n        }\n\n        function nextTokenIsIdentifierOrKeywordOrGreaterThan() {\n            nextToken();\n            return tokenIsIdentifierOrKeywordOrGreaterThan(token());\n        }\n\n        function isHeritageClauseExtendsOrImplementsKeyword(): boolean {\n            if (token() === SyntaxKind.ImplementsKeyword ||\n                token() === SyntaxKind.ExtendsKeyword) {\n\n                return lookAhead(nextTokenIsStartOfExpression);\n            }\n\n            return false;\n        }\n\n        function nextTokenIsStartOfExpression() {\n            nextToken();\n            return isStartOfExpression();\n        }\n\n        function nextTokenIsStartOfType() {\n            nextToken();\n            return isStartOfType();\n        }\n\n        // True if positioned at a list terminator\n        function isListTerminator(kind: ParsingContext): boolean {\n            if (token() === SyntaxKind.EndOfFileToken) {\n                // Being at the end of the file ends all lists.\n                return true;\n            }\n\n            switch (kind) {\n                case ParsingContext.BlockStatements:\n                case ParsingContext.SwitchClauses:\n                case ParsingContext.TypeMembers:\n                case ParsingContext.ClassMembers:\n                case ParsingContext.EnumMembers:\n                case ParsingContext.ObjectLiteralMembers:\n                case ParsingContext.ObjectBindingElements:\n                case ParsingContext.ImportOrExportSpecifiers:\n              
      return token() === SyntaxKind.CloseBraceToken;\n                case ParsingContext.SwitchClauseStatements:\n                    return token() === SyntaxKind.CloseBraceToken || token() === SyntaxKind.CaseKeyword || token() === SyntaxKind.DefaultKeyword;\n                case ParsingContext.HeritageClauseElement:\n                    return token() === SyntaxKind.OpenBraceToken || token() === SyntaxKind.ExtendsKeyword || token() === SyntaxKind.ImplementsKeyword;\n                case ParsingContext.VariableDeclarations:\n                    return isVariableDeclaratorListTerminator();\n                case ParsingContext.TypeParameters:\n                    // Tokens other than '>' are here for better error recovery\n                    return token() === SyntaxKind.GreaterThanToken || token() === SyntaxKind.OpenParenToken || token() === SyntaxKind.OpenBraceToken || token() === SyntaxKind.ExtendsKeyword || token() === SyntaxKind.ImplementsKeyword;\n                case ParsingContext.ArgumentExpressions:\n                    // Tokens other than ')' are here for better error recovery\n                    return token() === SyntaxKind.CloseParenToken || token() === SyntaxKind.SemicolonToken;\n                case ParsingContext.ArrayLiteralMembers:\n                case ParsingContext.TupleElementTypes:\n                case ParsingContext.ArrayBindingElements:\n                    return token() === SyntaxKind.CloseBracketToken;\n                case ParsingContext.Parameters:\n                case ParsingContext.RestProperties:\n                    // Tokens other than ')' and ']' (the latter for index signatures) are here for better error recovery\n                    return token() === SyntaxKind.CloseParenToken || token() === SyntaxKind.CloseBracketToken /*|| token === SyntaxKind.OpenBraceToken*/;\n                case ParsingContext.TypeArguments:\n                    // All other tokens should cause the type-argument to terminate except comma token\n    
                return token() !== SyntaxKind.CommaToken;\n                case ParsingContext.HeritageClauses:\n                    return token() === SyntaxKind.OpenBraceToken || token() === SyntaxKind.CloseBraceToken;\n                case ParsingContext.JsxAttributes:\n                    return token() === SyntaxKind.GreaterThanToken || token() === SyntaxKind.SlashToken;\n                case ParsingContext.JsxChildren:\n                    return token() === SyntaxKind.LessThanToken && lookAhead(nextTokenIsSlash);\n            }\n        }\n\n        function isVariableDeclaratorListTerminator(): boolean {\n            // If we can consume a semicolon (either explicitly, or with ASI), then consider us done\n            // with parsing the list of variable declarators.\n            if (canParseSemicolon()) {\n                return true;\n            }\n\n            // in the case where we're parsing the variable declarator of a 'for-in' statement, we\n            // are done if we see an 'in' keyword in front of us. Same with for-of\n            if (isInOrOfKeyword(token())) {\n                return true;\n            }\n\n            // ERROR RECOVERY TWEAK:\n            // For better error recovery, if we see an '=>' then we just stop immediately.  
We've got an\n            // arrow function here and it's going to be very unlikely that we'll resynchronize and get\n            // another variable declaration.\n            if (token() === SyntaxKind.EqualsGreaterThanToken) {\n                return true;\n            }\n\n            // Keep trying to parse out variable declarators.\n            return false;\n        }\n\n        // True if positioned at element or terminator of the current list or any enclosing list\n        function isInSomeParsingContext(): boolean {\n            for (let kind = 0; kind < ParsingContext.Count; kind++) {\n                if (parsingContext & (1 << kind)) {\n                    if (isListElement(kind, /*inErrorRecovery*/ true) || isListTerminator(kind)) {\n                        return true;\n                    }\n                }\n            }\n\n            return false;\n        }\n\n        // Parses a list of elements\n        function parseList<T extends Node>(kind: ParsingContext, parseElement: () => T): NodeArray<T> {\n            const saveParsingContext = parsingContext;\n            parsingContext |= 1 << kind;\n            const list = [];\n            const listPos = getNodePos();\n\n            while (!isListTerminator(kind)) {\n                if (isListElement(kind, /*inErrorRecovery*/ false)) {\n                    const element = parseListElement(kind, parseElement);\n                    list.push(element);\n\n                    continue;\n                }\n\n                if (abortParsingListOrMoveToNextToken(kind)) {\n                    break;\n                }\n            }\n\n            parsingContext = saveParsingContext;\n            return createNodeArray(list, listPos);\n        }\n\n        function parseListElement<T extends Node>(parsingContext: ParsingContext, parseElement: () => T): T {\n            const node = currentNode(parsingContext);\n            if (node) {\n                return <T>consumeNode(node);\n            }\n\n      
      return parseElement();\n        }\n\n        function currentNode(parsingContext: ParsingContext): Node {\n            // If there is an outstanding parse error that we've encountered, but not attached to\n            // some node, then we cannot get a node from the old source tree.  This is because we\n            // want to mark the next node we encounter as being unusable.\n            //\n            // Note: This may be too conservative.  Perhaps we could reuse the node and set the bit\n            // on it (or its leftmost child) as having the error.  For now though, being conservative\n            // is nice and likely won't ever affect perf.\n            if (parseErrorBeforeNextFinishedNode) {\n                return undefined;\n            }\n\n            if (!syntaxCursor) {\n                // if we don't have a cursor, we could never return a node from the old tree.\n                return undefined;\n            }\n\n            const node = syntaxCursor.currentNode(scanner.getStartPos());\n\n            // Can't reuse a missing node.\n            if (nodeIsMissing(node)) {\n                return undefined;\n            }\n\n            // Can't reuse a node that intersected the change range.\n            if (node.intersectsChange) {\n                return undefined;\n            }\n\n            // Can't reuse a node that contains a parse error.  This is necessary so that we\n            // produce the same set of errors again.\n            if (containsParseError(node)) {\n                return undefined;\n            }\n\n            // We can only reuse a node if it was parsed under the same strict mode that we're\n            // currently in.  i.e. 
if we originally parsed a node in non-strict mode, but then\n            // the user added 'use strict' at the top of the file, then we can't use that node\n            // again as the presence of strict mode may cause us to parse the tokens in the file\n            // differently.\n            //\n            // Note: we *can* reuse tokens when the strict mode changes.  That's because tokens\n            // are unaffected by strict mode.  It's just that the parser will decide what to do with it\n            // differently depending on what mode it is in.\n            //\n            // This also applies to all our other context flags as well.\n            const nodeContextFlags = node.flags & NodeFlags.ContextFlags;\n            if (nodeContextFlags !== contextFlags) {\n                return undefined;\n            }\n\n            // Ok, we have a node that looks like it could be reused.  Now verify that it is valid\n            // in the list parsing context that we're currently in.\n            if (!canReuseNode(node, parsingContext)) {\n                return undefined;\n            }\n\n            if ((node as JSDocContainer).jsDocCache) {\n                // jsDocCache may include tags from parent nodes, which might have been modified.\n                (node as JSDocContainer).jsDocCache = undefined;\n            }\n\n            return node;\n        }\n\n        function consumeNode(node: Node) {\n            // Move the scanner so it is after the node we just consumed.\n            scanner.setTextPos(node.end);\n            nextToken();\n            return node;\n        }\n\n        function canReuseNode(node: Node, parsingContext: ParsingContext): boolean {\n            switch (parsingContext) {\n                case ParsingContext.ClassMembers:\n                    return isReusableClassMember(node);\n\n                case ParsingContext.SwitchClauses:\n                    return isReusableSwitchClause(node);\n\n                case ParsingContext.SourceElements:\n                case ParsingContext.BlockStatements:\n                case ParsingContext.SwitchClauseStatements:\n                    return isReusableStatement(node);\n\n                case ParsingContext.EnumMembers:\n                    return isReusableEnumMember(node);\n\n                case ParsingContext.TypeMembers:\n                    return isReusableTypeMember(node);\n\n                case ParsingContext.VariableDeclarations:\n                    return isReusableVariableDeclaration(node);\n\n                case ParsingContext.Parameters:\n                    return isReusableParameter(node);\n\n                case ParsingContext.RestProperties:\n                    return false;\n\n                // Any other lists we do not care about reusing nodes in.  But feel free to add if\n                // you can do so safely.  Danger areas involve nodes that may involve speculative\n                // parsing.  If speculative parsing is involved with the node, then the range the\n                // parser reached while looking ahead might be in the edited range (see the example\n                // in canReuseVariableDeclaratorNode for a good case of this).\n                case ParsingContext.HeritageClauses:\n                // This would probably be safe to reuse.  There is no speculative parsing with\n                // heritage clauses.\n\n                case ParsingContext.TypeParameters:\n                // This would probably be safe to reuse.  There is no speculative parsing with\n                // type parameters.  Note that that's because type *parameters* only occur in\n                // unambiguous *type* contexts, while type *arguments* occur in very ambiguous\n                // *expression* contexts.\n\n                case ParsingContext.TupleElementTypes:\n                // This would probably be safe to reuse. 
There is no speculative parsing with\n                // tuple types.\n\n                // Technically, type argument list types are probably safe to reuse.  While\n                // speculative parsing is involved with them (since type argument lists are only\n                // produced from speculative parsing a < as a type argument list), we only have\n                // the types because speculative parsing succeeded.  Thus, the lookahead never\n                // went past the end of the list and rewound.\n                case ParsingContext.TypeArguments:\n\n                // Note: these are almost certainly not safe to ever reuse.  Expressions commonly\n                // need a large amount of lookahead, and we should not reuse them as they may\n                // have actually intersected the edit.\n                case ParsingContext.ArgumentExpressions:\n\n                // This is not safe to reuse for the same reason as the 'AssignmentExpression'\n                // cases.  i.e. a property assignment may end with an expression, and thus might\n                // have lookahead far beyond its old node.\n                case ParsingContext.ObjectLiteralMembers:\n\n                // This is probably not safe to reuse.  There can be speculative parsing with\n                // type names in a heritage clause.  There can be generic names in the type\n                // name list, and there can be left hand side expressions (which can have type\n                // arguments.)\n                case ParsingContext.HeritageClauseElement:\n\n                // Perhaps safe to reuse, but it's unlikely we'd see more than a dozen attributes\n                // on any given element. 
Same for children.\n                case ParsingContext.JsxAttributes:\n                case ParsingContext.JsxChildren:\n\n            }\n\n            return false;\n        }\n\n        function isReusableClassMember(node: Node) {\n            if (node) {\n                switch (node.kind) {\n                    case SyntaxKind.Constructor:\n                    case SyntaxKind.IndexSignature:\n                    case SyntaxKind.GetAccessor:\n                    case SyntaxKind.SetAccessor:\n                    case SyntaxKind.PropertyDeclaration:\n                    case SyntaxKind.SemicolonClassElement:\n                        return true;\n                    case SyntaxKind.MethodDeclaration:\n                        // Method declarations are not necessarily reusable.  An object-literal\n                        // may have a method called \"constructor(...)\" and we must reparse that\n                        // into an actual ConstructorDeclaration.\n                        const methodDeclaration = <MethodDeclaration>node;\n                        const nameIsConstructor = methodDeclaration.name.kind === SyntaxKind.Identifier &&\n                            (<Identifier>methodDeclaration.name).originalKeywordKind === SyntaxKind.ConstructorKeyword;\n\n                        return !nameIsConstructor;\n                }\n            }\n\n            return false;\n        }\n\n        function isReusableSwitchClause(node: Node) {\n            if (node) {\n                switch (node.kind) {\n                    case SyntaxKind.CaseClause:\n                    case SyntaxKind.DefaultClause:\n                        return true;\n                }\n            }\n\n            return false;\n        }\n\n        function isReusableStatement(node: Node) {\n            if (node) {\n                switch (node.kind) {\n                    case SyntaxKind.FunctionDeclaration:\n                    case SyntaxKind.VariableStatement:\n                    case 
SyntaxKind.Block:\n                    case SyntaxKind.IfStatement:\n                    case SyntaxKind.ExpressionStatement:\n                    case SyntaxKind.ThrowStatement:\n                    case SyntaxKind.ReturnStatement:\n                    case SyntaxKind.SwitchStatement:\n                    case SyntaxKind.BreakStatement:\n                    case SyntaxKind.ContinueStatement:\n                    case SyntaxKind.ForInStatement:\n                    case SyntaxKind.ForOfStatement:\n                    case SyntaxKind.ForStatement:\n                    case SyntaxKind.WhileStatement:\n                    case SyntaxKind.WithStatement:\n                    case SyntaxKind.EmptyStatement:\n                    case SyntaxKind.TryStatement:\n                    case SyntaxKind.LabeledStatement:\n                    case SyntaxKind.DoStatement:\n                    case SyntaxKind.DebuggerStatement:\n                    case SyntaxKind.ImportDeclaration:\n                    case SyntaxKind.ImportEqualsDeclaration:\n                    case SyntaxKind.ExportDeclaration:\n                    case SyntaxKind.ExportAssignment:\n                    case SyntaxKind.ModuleDeclaration:\n                    case SyntaxKind.ClassDeclaration:\n                    case SyntaxKind.InterfaceDeclaration:\n                    case SyntaxKind.EnumDeclaration:\n                    case SyntaxKind.TypeAliasDeclaration:\n                        return true;\n                }\n            }\n\n            return false;\n        }\n\n        function isReusableEnumMember(node: Node) {\n            return node.kind === SyntaxKind.EnumMember;\n        }\n\n        function isReusableTypeMember(node: Node) {\n            if (node) {\n                switch (node.kind) {\n                    case SyntaxKind.ConstructSignature:\n                    case SyntaxKind.MethodSignature:\n                    case SyntaxKind.IndexSignature:\n                    case 
SyntaxKind.PropertySignature:\n                    case SyntaxKind.CallSignature:\n                        return true;\n                }\n            }\n\n            return false;\n        }\n\n        function isReusableVariableDeclaration(node: Node) {\n            if (node.kind !== SyntaxKind.VariableDeclaration) {\n                return false;\n            }\n\n            // Very subtle incremental parsing bug.  Consider the following code:\n            //\n            //      let v = new List < A, B\n            //\n            // This is actually legal code.  It's a list of variable declarators \"v = new List<A\"\n            // on one side and \"B\" on the other. If you then change that to:\n            //\n            //      let v = new List < A, B >()\n            //\n            // then we have a problem.  \"v = new List<A\" doesn't intersect the change range, so we\n            // start reparsing at \"B\" and we completely fail to handle this properly.\n            //\n            // In order to prevent this, we do not allow a variable declarator to be reused if it\n            // has an initializer.\n            const variableDeclarator = <VariableDeclaration>node;\n            return variableDeclarator.initializer === undefined;\n        }\n\n        function isReusableParameter(node: Node) {\n            if (node.kind !== SyntaxKind.Parameter) {\n                return false;\n            }\n\n            // See the comment in isReusableVariableDeclaration for why we do this.\n            const parameter = <ParameterDeclaration>node;\n            return parameter.initializer === undefined;\n        }\n\n        // Returns true if we should abort parsing.\n        function abortParsingListOrMoveToNextToken(kind: ParsingContext) {\n            parseErrorAtCurrentToken(parsingContextErrors(kind));\n            if (isInSomeParsingContext()) {\n                return true;\n            }\n\n            nextToken();\n            return false;\n        
}\n\n        function parsingContextErrors(context: ParsingContext): DiagnosticMessage {\n            switch (context) {\n                case ParsingContext.SourceElements: return Diagnostics.Declaration_or_statement_expected;\n                case ParsingContext.BlockStatements: return Diagnostics.Declaration_or_statement_expected;\n                case ParsingContext.SwitchClauses: return Diagnostics.case_or_default_expected;\n                case ParsingContext.SwitchClauseStatements: return Diagnostics.Statement_expected;\n                case ParsingContext.RestProperties: // fallthrough\n                case ParsingContext.TypeMembers: return Diagnostics.Property_or_signature_expected;\n                case ParsingContext.ClassMembers: return Diagnostics.Unexpected_token_A_constructor_method_accessor_or_property_was_expected;\n                case ParsingContext.EnumMembers: return Diagnostics.Enum_member_expected;\n                case ParsingContext.HeritageClauseElement: return Diagnostics.Expression_expected;\n                case ParsingContext.VariableDeclarations: return Diagnostics.Variable_declaration_expected;\n                case ParsingContext.ObjectBindingElements: return Diagnostics.Property_destructuring_pattern_expected;\n                case ParsingContext.ArrayBindingElements: return Diagnostics.Array_element_destructuring_pattern_expected;\n                case ParsingContext.ArgumentExpressions: return Diagnostics.Argument_expression_expected;\n                case ParsingContext.ObjectLiteralMembers: return Diagnostics.Property_assignment_expected;\n                case ParsingContext.ArrayLiteralMembers: return Diagnostics.Expression_or_comma_expected;\n                case ParsingContext.Parameters: return Diagnostics.Parameter_declaration_expected;\n                case ParsingContext.TypeParameters: return Diagnostics.Type_parameter_declaration_expected;\n                case ParsingContext.TypeArguments: return 
Diagnostics.Type_argument_expected;\n                case ParsingContext.TupleElementTypes: return Diagnostics.Type_expected;\n                case ParsingContext.HeritageClauses: return Diagnostics.Unexpected_token_expected;\n                case ParsingContext.ImportOrExportSpecifiers: return Diagnostics.Identifier_expected;\n                case ParsingContext.JsxAttributes: return Diagnostics.Identifier_expected;\n                case ParsingContext.JsxChildren: return Diagnostics.Identifier_expected;\n            }\n        }\n\n        // Parses a comma-delimited list of elements\n        function parseDelimitedList<T extends Node>(kind: ParsingContext, parseElement: () => T, considerSemicolonAsDelimiter?: boolean): NodeArray<T> {\n            const saveParsingContext = parsingContext;\n            parsingContext |= 1 << kind;\n            const list = [];\n            const listPos = getNodePos();\n\n            let commaStart = -1; // Meaning the previous token was not a comma\n            while (true) {\n                if (isListElement(kind, /*inErrorRecovery*/ false)) {\n                    const startPos = scanner.getStartPos();\n                    list.push(parseListElement(kind, parseElement));\n                    commaStart = scanner.getTokenPos();\n\n                    if (parseOptional(SyntaxKind.CommaToken)) {\n                        // No need to check for a zero length node since we know we parsed a comma\n                        continue;\n                    }\n\n                    commaStart = -1; // Back to the state where the last token was not a comma\n                    if (isListTerminator(kind)) {\n                        break;\n                    }\n\n                    // We didn't get a comma, and the list wasn't terminated, explicitly parse\n                    // out a comma so we give a good error message.\n                    parseExpected(SyntaxKind.CommaToken);\n\n                    // If the token was a semicolon, 
and the caller allows that, then skip it and\n                    // continue.  This ensures we get back on track and don't result in tons of\n                    // parse errors.  For example, this can happen when people do things like use\n                    // a semicolon to delimit object literal members.   Note: we'll have already\n                    // reported an error when we called parseExpected above.\n                    if (considerSemicolonAsDelimiter && token() === SyntaxKind.SemicolonToken && !scanner.hasPrecedingLineBreak()) {\n                        nextToken();\n                    }\n                    if (startPos === scanner.getStartPos()) {\n                        // What we're parsing isn't actually remotely recognizable as an element and we've consumed no tokens whatsoever\n                        // Consume a token to advance the parser in some way and avoid an infinite loop\n                        // This can happen when we're speculatively parsing parenthesized expressions which we think may be arrow functions,\n                        // or when a modifier keyword which is disallowed as a parameter name (i.e., `static` in strict mode) is supplied\n                        nextToken();\n                    }\n                    continue;\n                }\n\n                if (isListTerminator(kind)) {\n                    break;\n                }\n\n                if (abortParsingListOrMoveToNextToken(kind)) {\n                    break;\n                }\n            }\n\n            parsingContext = saveParsingContext;\n            const result = createNodeArray(list, listPos);\n            // Recording the trailing comma is deliberately done after the previous\n            // loop, and not just if we see a list terminator. 
This is because the list\n            // may have ended incorrectly, but it is still important to know if there\n            // was a trailing comma.\n            // Check if the last token was a comma.\n            if (commaStart >= 0) {\n                // Always preserve a trailing comma by marking it on the NodeArray\n                result.hasTrailingComma = true;\n            }\n            return result;\n        }\n\n        function createMissingList<T extends Node>(): NodeArray<T> {\n            return createNodeArray<T>([], getNodePos());\n        }\n\n        function parseBracketedList<T extends Node>(kind: ParsingContext, parseElement: () => T, open: SyntaxKind, close: SyntaxKind): NodeArray<T> {\n            if (parseExpected(open)) {\n                const result = parseDelimitedList(kind, parseElement);\n                parseExpected(close);\n                return result;\n            }\n\n            return createMissingList<T>();\n        }\n\n        function parseEntityName(allowReservedWords: boolean, diagnosticMessage?: DiagnosticMessage): EntityName {\n            let entity: EntityName = allowReservedWords ? 
parseIdentifierName(diagnosticMessage) : parseIdentifier(diagnosticMessage);\n            let dotPos = scanner.getStartPos();\n            while (parseOptional(SyntaxKind.DotToken)) {\n                if (token() === SyntaxKind.LessThanToken) {\n                    // the entity is part of a JSDoc-style generic, so record the trailing dot for later error reporting\n                    entity.jsdocDotPos = dotPos;\n                    break;\n                }\n                dotPos = scanner.getStartPos();\n                entity = createQualifiedName(entity, parseRightSideOfDot(allowReservedWords));\n            }\n            return entity;\n        }\n\n        function createQualifiedName(entity: EntityName, name: Identifier): QualifiedName {\n            const node = createNode(SyntaxKind.QualifiedName, entity.pos) as QualifiedName;\n            node.left = entity;\n            node.right = name;\n            return finishNode(node);\n        }\n\n        function parseRightSideOfDot(allowIdentifierNames: boolean): Identifier {\n            // Technically a keyword is valid here as all identifiers and keywords are identifier names.\n            // However, often we'll encounter this in error situations when the identifier or keyword\n            // is actually starting another valid construct.\n            //\n            // So, we check for the following specific case:\n            //\n            //      name.\n            //      identifierOrKeyword identifierNameOrKeyword\n            //\n            // Note: the newlines are important here.  For example, if that above code\n            // were rewritten into:\n            //\n            //      name.identifierOrKeyword\n            //      identifierNameOrKeyword\n            //\n            // Then we would consider it valid.  
That's because ASI would take effect and\n            // the code would be implicitly: \"name.identifierOrKeyword; identifierNameOrKeyword\".\n            // In the first case though, ASI will not take effect because there is no\n            // line terminator after the identifier or keyword.\n            if (scanner.hasPrecedingLineBreak() && tokenIsIdentifierOrKeyword(token())) {\n                const matchesPattern = lookAhead(nextTokenIsIdentifierOrKeywordOnSameLine);\n\n                if (matchesPattern) {\n                    // Report that we need an identifier.  However, report it right after the dot,\n                    // and not on the next token.  This is because the next token might actually\n                    // be an identifier and the error would be quite confusing.\n                    return createMissingNode<Identifier>(SyntaxKind.Identifier, /*reportAtCurrentPosition*/ true, Diagnostics.Identifier_expected);\n                }\n            }\n\n            return allowIdentifierNames ? 
parseIdentifierName() : parseIdentifier();\n        }\n\n        function parseTemplateExpression(): TemplateExpression {\n            const template = <TemplateExpression>createNode(SyntaxKind.TemplateExpression);\n\n            template.head = parseTemplateHead();\n            Debug.assert(template.head.kind === SyntaxKind.TemplateHead, \"Template head has wrong token kind\");\n\n            const list = [];\n            const listPos = getNodePos();\n\n            do {\n                list.push(parseTemplateSpan());\n            }\n            while (lastOrUndefined(list).literal.kind === SyntaxKind.TemplateMiddle);\n\n            template.templateSpans = createNodeArray(list, listPos);\n\n            return finishNode(template);\n        }\n\n        function parseTemplateSpan(): TemplateSpan {\n            const span = <TemplateSpan>createNode(SyntaxKind.TemplateSpan);\n            span.expression = allowInAnd(parseExpression);\n\n            let literal: TemplateMiddle | TemplateTail;\n            if (token() === SyntaxKind.CloseBraceToken) {\n                reScanTemplateToken();\n                literal = parseTemplateMiddleOrTemplateTail();\n            }\n            else {\n                literal = <TemplateTail>parseExpectedToken(SyntaxKind.TemplateTail, Diagnostics._0_expected, tokenToString(SyntaxKind.CloseBraceToken));\n            }\n\n            span.literal = literal;\n            return finishNode(span);\n        }\n\n        function parseLiteralNode(): LiteralExpression {\n            return <LiteralExpression>parseLiteralLikeNode(token());\n        }\n\n        function parseTemplateHead(): TemplateHead {\n            const fragment = parseLiteralLikeNode(token());\n            Debug.assert(fragment.kind === SyntaxKind.TemplateHead, \"Template head has wrong token kind\");\n            return <TemplateHead>fragment;\n        }\n\n        function parseTemplateMiddleOrTemplateTail(): TemplateMiddle | TemplateTail {\n            const 
fragment = parseLiteralLikeNode(token());\n            Debug.assert(fragment.kind === SyntaxKind.TemplateMiddle || fragment.kind === SyntaxKind.TemplateTail, \"Template fragment has wrong token kind\");\n            return <TemplateMiddle | TemplateTail>fragment;\n        }\n\n        function parseLiteralLikeNode(kind: SyntaxKind): LiteralExpression | LiteralLikeNode {\n            const node = <LiteralExpression>createNode(kind);\n            const text = scanner.getTokenValue();\n            node.text = text;\n\n            if (scanner.hasExtendedUnicodeEscape()) {\n                node.hasExtendedUnicodeEscape = true;\n            }\n\n            if (scanner.isUnterminated()) {\n                node.isUnterminated = true;\n            }\n\n            // Octal literals are not allowed in strict mode or ES5\n            // Note that theoretically the following condition would hold true for literals like 009,\n            // which is not octal. But because of how the scanner separates the tokens, we would\n            // never get a token like this. 
Instead, we would get 00 and 9 as two separate tokens.\n            // We also do not need to check for negatives because any prefix operator would be part of a\n            // parent unary expression.\n            if (node.kind === SyntaxKind.NumericLiteral) {\n                (<NumericLiteral>node).numericLiteralFlags = scanner.getTokenFlags() & TokenFlags.NumericLiteralFlags;\n            }\n\n            nextToken();\n            finishNode(node);\n\n            return node;\n        }\n\n        // TYPES\n\n        function parseTypeReference(): TypeReferenceNode {\n            const node = <TypeReferenceNode>createNode(SyntaxKind.TypeReference);\n            node.typeName = parseEntityName(/*allowReservedWords*/ true, Diagnostics.Type_expected);\n            if (!scanner.hasPrecedingLineBreak() && token() === SyntaxKind.LessThanToken) {\n                node.typeArguments = parseBracketedList(ParsingContext.TypeArguments, parseType, SyntaxKind.LessThanToken, SyntaxKind.GreaterThanToken);\n            }\n            return finishNode(node);\n        }\n\n        function parseThisTypePredicate(lhs: ThisTypeNode): TypePredicateNode {\n            nextToken();\n            const node = createNode(SyntaxKind.TypePredicate, lhs.pos) as TypePredicateNode;\n            node.parameterName = lhs;\n            node.type = parseType();\n            return finishNode(node);\n        }\n\n        function parseThisTypeNode(): ThisTypeNode {\n            const node = createNode(SyntaxKind.ThisType) as ThisTypeNode;\n            nextToken();\n            return finishNode(node);\n        }\n\n        function parseJSDocAllType(): JSDocAllType {\n            const result = <JSDocAllType>createNode(SyntaxKind.JSDocAllType);\n            nextToken();\n            return finishNode(result);\n        }\n\n        function parseJSDocUnknownOrNullableType(): JSDocUnknownType | JSDocNullableType {\n            const pos = scanner.getStartPos();\n            // skip the ?\n          
  nextToken();\n\n            // Need to lookahead to decide if this is a nullable or unknown type.\n\n            // Here are cases where we'll pick the unknown type:\n            //\n            //      Foo(?,\n            //      { a: ? }\n            //      Foo(?)\n            //      Foo<?>\n            //      Foo(?=\n            //      (?|\n            if (token() === SyntaxKind.CommaToken ||\n                token() === SyntaxKind.CloseBraceToken ||\n                token() === SyntaxKind.CloseParenToken ||\n                token() === SyntaxKind.GreaterThanToken ||\n                token() === SyntaxKind.EqualsToken ||\n                token() === SyntaxKind.BarToken) {\n\n                const result = <JSDocUnknownType>createNode(SyntaxKind.JSDocUnknownType, pos);\n                return finishNode(result);\n            }\n            else {\n                const result = <JSDocNullableType>createNode(SyntaxKind.JSDocNullableType, pos);\n                result.type = parseType();\n                return finishNode(result);\n            }\n        }\n\n        function parseJSDocFunctionType(): JSDocFunctionType | TypeReferenceNode {\n            if (lookAhead(nextTokenIsOpenParen)) {\n                const result = <JSDocFunctionType>createNodeWithJSDoc(SyntaxKind.JSDocFunctionType);\n                nextToken();\n                fillSignature(SyntaxKind.ColonToken, SignatureFlags.Type | SignatureFlags.JSDoc, result);\n                return finishNode(result);\n            }\n            const node = <TypeReferenceNode>createNode(SyntaxKind.TypeReference);\n            node.typeName = parseIdentifierName();\n            return finishNode(node);\n        }\n\n        function parseJSDocParameter(): ParameterDeclaration {\n            const parameter = createNode(SyntaxKind.Parameter) as ParameterDeclaration;\n            if (token() === SyntaxKind.ThisKeyword || token() === SyntaxKind.NewKeyword) {\n                parameter.name = 
parseIdentifierName();\n                parseExpected(SyntaxKind.ColonToken);\n            }\n            parameter.type = parseType();\n            return finishNode(parameter);\n        }\n\n        function parseJSDocNodeWithType(kind: SyntaxKind.JSDocVariadicType | SyntaxKind.JSDocNonNullableType): TypeNode {\n            const result = createNode(kind) as JSDocVariadicType | JSDocNonNullableType;\n            nextToken();\n            result.type = parseNonArrayType();\n            return finishNode(result);\n        }\n\n        function parseTypeQuery(): TypeQueryNode {\n            const node = <TypeQueryNode>createNode(SyntaxKind.TypeQuery);\n            parseExpected(SyntaxKind.TypeOfKeyword);\n            node.exprName = parseEntityName(/*allowReservedWords*/ true);\n            return finishNode(node);\n        }\n\n        function parseTypeParameter(): TypeParameterDeclaration {\n            const node = <TypeParameterDeclaration>createNode(SyntaxKind.TypeParameter);\n            node.name = parseIdentifier();\n            if (parseOptional(SyntaxKind.ExtendsKeyword)) {\n                // It's not uncommon for people to write improper constraints to a generic.  If the\n                // user writes a constraint that is an expression and not an actual type, then parse\n                // it out as an expression (so we can recover well), but report that a type is needed\n                // instead.\n                if (isStartOfType() || !isStartOfExpression()) {\n                    node.constraint = parseType();\n                }\n                else {\n                    // It was not a type, and it looked like an expression.  Parse out an expression\n                    // here so we recover well.  Note: it is important that we call parseUnaryExpression\n                    // and not parseExpression here.  
If the user has:\n                    //\n                    //      <T extends \"\">\n                    //\n                    // We do *not* want to consume the `>` as we're consuming the expression for \"\".\n                    node.expression = parseUnaryExpressionOrHigher();\n                }\n            }\n\n            if (parseOptional(SyntaxKind.EqualsToken)) {\n                node.default = parseType();\n            }\n\n            return finishNode(node);\n        }\n\n        function parseTypeParameters(): NodeArray<TypeParameterDeclaration> | undefined {\n            if (token() === SyntaxKind.LessThanToken) {\n                return parseBracketedList(ParsingContext.TypeParameters, parseTypeParameter, SyntaxKind.LessThanToken, SyntaxKind.GreaterThanToken);\n            }\n        }\n\n        function parseParameterType(): TypeNode {\n            if (parseOptional(SyntaxKind.ColonToken)) {\n                return parseType();\n            }\n\n            return undefined;\n        }\n\n        function isStartOfParameter(): boolean {\n            return token() === SyntaxKind.DotDotDotToken ||\n                isIdentifierOrPattern() ||\n                isModifierKind(token()) ||\n                token() === SyntaxKind.AtToken ||\n                isStartOfType(/*inStartOfParameter*/ true);\n        }\n\n        function parseParameter(): ParameterDeclaration {\n            const node = <ParameterDeclaration>createNodeWithJSDoc(SyntaxKind.Parameter);\n            if (token() === SyntaxKind.ThisKeyword) {\n                node.name = createIdentifier(/*isIdentifier*/ true);\n                node.type = parseParameterType();\n                return finishNode(node);\n            }\n\n            node.decorators = parseDecorators();\n            node.modifiers = parseModifiers();\n            node.dotDotDotToken = parseOptionalToken(SyntaxKind.DotDotDotToken);\n\n            // FormalParameter [Yield,Await]:\n            //      
BindingElement[?Yield,?Await]\n            node.name = parseIdentifierOrPattern();\n            if (getFullWidth(node.name) === 0 && !hasModifiers(node) && isModifierKind(token())) {\n                // in cases like\n                // 'use strict'\n                // function foo(static)\n                // isParameter('static') === true, because of isModifier('static')\n                // however 'static' is not a legal identifier in strict mode.\n                // so the result of this function will be ParameterDeclaration (flags = 0, name = missing, type = undefined, initializer = undefined)\n                // and the current token will not change => parsing of the enclosing parameter list will last till the end of time (or OOM)\n                // to avoid this we'll advance the cursor to the next token.\n                nextToken();\n            }\n\n            node.questionToken = parseOptionalToken(SyntaxKind.QuestionToken);\n            node.type = parseParameterType();\n            node.initializer = parseInitializer();\n\n            return finishNode(node);\n        }\n\n        function fillSignature(\n            returnToken: SyntaxKind.ColonToken | SyntaxKind.EqualsGreaterThanToken,\n            flags: SignatureFlags,\n            signature: SignatureDeclaration): void {\n            if (!(flags & SignatureFlags.JSDoc)) {\n                signature.typeParameters = parseTypeParameters();\n            }\n            signature.parameters = parseParameterList(flags);\n            signature.type = parseReturnType(returnToken, !!(flags & SignatureFlags.Type));\n        }\n\n        function parseReturnType(returnToken: SyntaxKind.ColonToken | SyntaxKind.EqualsGreaterThanToken, isType: boolean): TypeNode | undefined {\n            return shouldParseReturnType(returnToken, isType) ? 
parseTypeOrTypePredicate() : undefined;\n        }\n        function shouldParseReturnType(returnToken: SyntaxKind.ColonToken | SyntaxKind.EqualsGreaterThanToken, isType: boolean): boolean {\n            if (returnToken === SyntaxKind.EqualsGreaterThanToken) {\n                parseExpected(returnToken);\n                return true;\n            }\n            else if (parseOptional(SyntaxKind.ColonToken)) {\n                return true;\n            }\n            else if (isType && token() === SyntaxKind.EqualsGreaterThanToken) {\n                // This is easy to get backward, especially in type contexts, so parse the type anyway\n                parseErrorAtCurrentToken(Diagnostics._0_expected, tokenToString(SyntaxKind.ColonToken));\n                nextToken();\n                return true;\n            }\n            return false;\n        }\n\n        function parseParameterList(flags: SignatureFlags) {\n            // FormalParameters [Yield,Await]: (modified)\n            //      [empty]\n            //      FormalParameterList[?Yield,Await]\n            //\n            // FormalParameter[Yield,Await]: (modified)\n            //      BindingElement[?Yield,Await]\n            //\n            // BindingElement [Yield,Await]: (modified)\n            //      SingleNameBinding[?Yield,?Await]\n            //      BindingPattern[?Yield,?Await]Initializer [In, ?Yield,?Await] opt\n            //\n            // SingleNameBinding [Yield,Await]:\n            //      BindingIdentifier[?Yield,?Await]Initializer [In, ?Yield,?Await] opt\n            if (parseExpected(SyntaxKind.OpenParenToken)) {\n                const savedYieldContext = inYieldContext();\n                const savedAwaitContext = inAwaitContext();\n\n                setYieldContext(!!(flags & SignatureFlags.Yield));\n                setAwaitContext(!!(flags & SignatureFlags.Await));\n\n                const result = parseDelimitedList(ParsingContext.Parameters, flags & SignatureFlags.JSDoc ? 
parseJSDocParameter : parseParameter);

                setYieldContext(savedYieldContext);
                setAwaitContext(savedAwaitContext);

                if (!parseExpected(SyntaxKind.CloseParenToken) && (flags & SignatureFlags.RequireCompleteParameterList)) {
                    // Caller insisted that we had to end with a ')'.  We didn't, so just return
                    // undefined here.
                    return undefined;
                }

                return result;
            }

            // We didn't even have an open paren.  If the caller requires a complete parameter list,
            // we definitely can't provide that.  However, if they're ok with an incomplete one,
            // then just return an empty set of parameters.
            return (flags & SignatureFlags.RequireCompleteParameterList) ? undefined : createMissingList<ParameterDeclaration>();
        }

        function parseTypeMemberSemicolon() {
            // We allow type members to be separated by commas or (possibly ASI) semicolons.
            // First check if it was a comma.  If so, we're done with the member.
            if (parseOptional(SyntaxKind.CommaToken)) {
                return;
            }

            // Didn't have a comma.  
We must have a (possible ASI) semicolon.\n            parseSemicolon();\n        }\n\n        function parseSignatureMember(kind: SyntaxKind.CallSignature | SyntaxKind.ConstructSignature): CallSignatureDeclaration | ConstructSignatureDeclaration {\n            const node = <CallSignatureDeclaration | ConstructSignatureDeclaration>createNodeWithJSDoc(kind);\n            if (kind === SyntaxKind.ConstructSignature) {\n                parseExpected(SyntaxKind.NewKeyword);\n            }\n            fillSignature(SyntaxKind.ColonToken, SignatureFlags.Type, node);\n            parseTypeMemberSemicolon();\n            return finishNode(node);\n        }\n\n        function isIndexSignature(): boolean {\n            return token() === SyntaxKind.OpenBracketToken && lookAhead(isUnambiguouslyIndexSignature);\n        }\n\n        function isUnambiguouslyIndexSignature() {\n            // The only allowed sequence is:\n            //\n            //   [id:\n            //\n            // However, for error recovery, we also check the following cases:\n            //\n            //   [...\n            //   [id,\n            //   [id?,\n            //   [id?:\n            //   [id?]\n            //   [public id\n            //   [private id\n            //   [protected id\n            //   []\n            //\n            nextToken();\n            if (token() === SyntaxKind.DotDotDotToken || token() === SyntaxKind.CloseBracketToken) {\n                return true;\n            }\n\n            if (isModifierKind(token())) {\n                nextToken();\n                if (isIdentifier()) {\n                    return true;\n                }\n            }\n            else if (!isIdentifier()) {\n                return false;\n            }\n            else {\n                // Skip the identifier\n                nextToken();\n            }\n\n            // A colon signifies a well formed indexer\n            // A comma should be a badly formed indexer because comma 
expressions are not allowed\n            // in computed properties.\n            if (token() === SyntaxKind.ColonToken || token() === SyntaxKind.CommaToken) {\n                return true;\n            }\n\n            // Question mark could be an indexer with an optional property,\n            // or it could be a conditional expression in a computed property.\n            if (token() !== SyntaxKind.QuestionToken) {\n                return false;\n            }\n\n            // If any of the following tokens are after the question mark, it cannot\n            // be a conditional expression, so treat it as an indexer.\n            nextToken();\n            return token() === SyntaxKind.ColonToken || token() === SyntaxKind.CommaToken || token() === SyntaxKind.CloseBracketToken;\n        }\n\n        function parseIndexSignatureDeclaration(node: IndexSignatureDeclaration): IndexSignatureDeclaration {\n            node.kind = SyntaxKind.IndexSignature;\n            node.parameters = parseBracketedList(ParsingContext.Parameters, parseParameter, SyntaxKind.OpenBracketToken, SyntaxKind.CloseBracketToken);\n            node.type = parseTypeAnnotation();\n            parseTypeMemberSemicolon();\n            return finishNode(node);\n        }\n\n        function parsePropertyOrMethodSignature(node: PropertySignature | MethodSignature): PropertySignature | MethodSignature {\n            node.name = parsePropertyName();\n            node.questionToken = parseOptionalToken(SyntaxKind.QuestionToken);\n            if (token() === SyntaxKind.OpenParenToken || token() === SyntaxKind.LessThanToken) {\n                node.kind = SyntaxKind.MethodSignature;\n                // Method signatures don't exist in expression contexts.  
So they have neither
                // [Yield] nor [Await]
                fillSignature(SyntaxKind.ColonToken, SignatureFlags.Type, <MethodSignature>node);
            }
            else {
                node.kind = SyntaxKind.PropertySignature;
                node.type = parseTypeAnnotation();
                if (token() === SyntaxKind.EqualsToken) {
                    // Although type literal properties cannot have initializers, we attempt
                    // to parse an initializer so we can report in the checker that an interface
                    // property or type literal property cannot have an initializer.
                    (<PropertySignature>node).initializer = parseInitializer();
                }
            }
            parseTypeMemberSemicolon();
            return finishNode(node);
        }

        function isTypeMemberStart(): boolean {
            // Return true if we have the start of a signature member
            if (token() === SyntaxKind.OpenParenToken || token() === SyntaxKind.LessThanToken) {
                return true;
            }
            let idToken: boolean;
            // Eat up all modifiers, but hold on to the last one in case it is actually an identifier
            while (isModifierKind(token())) {
                idToken = true;
                nextToken();
            }
            // Index signatures and computed property names are type members
            if (token() === SyntaxKind.OpenBracketToken) {
                return true;
            }
            // Try to get the first property-like token following all modifiers
            if (isLiteralPropertyName()) {
                idToken = true;
                nextToken();
            }
            // If we were able to get any potential identifier, check that it is
            // the start of a member declaration
            if (idToken) {
                return token() === SyntaxKind.OpenParenToken ||
        
            token() === SyntaxKind.LessThanToken ||\n                    token() === SyntaxKind.QuestionToken ||\n                    token() === SyntaxKind.ColonToken ||\n                    token() === SyntaxKind.CommaToken ||\n                    canParseSemicolon();\n            }\n            return false;\n        }\n\n        function parseTypeMember(): TypeElement {\n            if (token() === SyntaxKind.OpenParenToken || token() === SyntaxKind.LessThanToken) {\n                return parseSignatureMember(SyntaxKind.CallSignature);\n            }\n            if (token() === SyntaxKind.NewKeyword && lookAhead(nextTokenIsOpenParenOrLessThan)) {\n                return parseSignatureMember(SyntaxKind.ConstructSignature);\n            }\n            const node = <TypeElement>createNodeWithJSDoc(SyntaxKind.Unknown);\n            node.modifiers = parseModifiers();\n            if (isIndexSignature()) {\n                return parseIndexSignatureDeclaration(<IndexSignatureDeclaration>node);\n            }\n            return parsePropertyOrMethodSignature(<PropertySignature | MethodSignature>node);\n        }\n\n        function nextTokenIsOpenParenOrLessThan() {\n            nextToken();\n            return token() === SyntaxKind.OpenParenToken || token() === SyntaxKind.LessThanToken;\n        }\n\n        function parseTypeLiteral(): TypeLiteralNode {\n            const node = <TypeLiteralNode>createNode(SyntaxKind.TypeLiteral);\n            node.members = parseObjectTypeMembers();\n            return finishNode(node);\n        }\n\n        function parseObjectTypeMembers(): NodeArray<TypeElement> {\n            let members: NodeArray<TypeElement>;\n            if (parseExpected(SyntaxKind.OpenBraceToken)) {\n                members = parseList(ParsingContext.TypeMembers, parseTypeMember);\n                parseExpected(SyntaxKind.CloseBraceToken);\n            }\n            else {\n                members = createMissingList<TypeElement>();\n            
}\n\n            return members;\n        }\n\n        function isStartOfMappedType() {\n            nextToken();\n            if (token() === SyntaxKind.PlusToken || token() === SyntaxKind.MinusToken) {\n                return nextToken() === SyntaxKind.ReadonlyKeyword;\n            }\n            if (token() === SyntaxKind.ReadonlyKeyword) {\n                nextToken();\n            }\n            return token() === SyntaxKind.OpenBracketToken && nextTokenIsIdentifier() && nextToken() === SyntaxKind.InKeyword;\n        }\n\n        function parseMappedTypeParameter() {\n            const node = <TypeParameterDeclaration>createNode(SyntaxKind.TypeParameter);\n            node.name = parseIdentifier();\n            parseExpected(SyntaxKind.InKeyword);\n            node.constraint = parseType();\n            return finishNode(node);\n        }\n\n        function parseMappedType() {\n            const node = <MappedTypeNode>createNode(SyntaxKind.MappedType);\n            parseExpected(SyntaxKind.OpenBraceToken);\n            if (token() === SyntaxKind.ReadonlyKeyword || token() === SyntaxKind.PlusToken || token() === SyntaxKind.MinusToken) {\n                node.readonlyToken = parseTokenNode();\n                if (node.readonlyToken.kind !== SyntaxKind.ReadonlyKeyword) {\n                    parseExpectedToken(SyntaxKind.ReadonlyKeyword);\n                }\n            }\n            parseExpected(SyntaxKind.OpenBracketToken);\n            node.typeParameter = parseMappedTypeParameter();\n            parseExpected(SyntaxKind.CloseBracketToken);\n            if (token() === SyntaxKind.QuestionToken || token() === SyntaxKind.PlusToken || token() === SyntaxKind.MinusToken) {\n                node.questionToken = parseTokenNode();\n                if (node.questionToken.kind !== SyntaxKind.QuestionToken) {\n                    parseExpectedToken(SyntaxKind.QuestionToken);\n                }\n            }\n            node.type = parseTypeAnnotation();\n            
parseSemicolon();\n            parseExpected(SyntaxKind.CloseBraceToken);\n            return finishNode(node);\n        }\n\n        function parseTupleType(): TupleTypeNode {\n            const node = <TupleTypeNode>createNode(SyntaxKind.TupleType);\n            node.elementTypes = parseBracketedList(ParsingContext.TupleElementTypes, parseType, SyntaxKind.OpenBracketToken, SyntaxKind.CloseBracketToken);\n            return finishNode(node);\n        }\n\n        function parseParenthesizedType(): ParenthesizedTypeNode {\n            const node = <ParenthesizedTypeNode>createNode(SyntaxKind.ParenthesizedType);\n            parseExpected(SyntaxKind.OpenParenToken);\n            node.type = parseType();\n            parseExpected(SyntaxKind.CloseParenToken);\n            return finishNode(node);\n        }\n\n        function parseFunctionOrConstructorType(kind: SyntaxKind): FunctionOrConstructorTypeNode {\n            const node = <FunctionOrConstructorTypeNode>createNodeWithJSDoc(kind);\n            if (kind === SyntaxKind.ConstructorType) {\n                parseExpected(SyntaxKind.NewKeyword);\n            }\n            fillSignature(SyntaxKind.EqualsGreaterThanToken, SignatureFlags.Type, node);\n            return finishNode(node);\n        }\n\n        function parseKeywordAndNoDot(): TypeNode | undefined {\n            const node = parseTokenNode<TypeNode>();\n            return token() === SyntaxKind.DotToken ? 
undefined : node;\n        }\n\n        function parseLiteralTypeNode(negative?: boolean): LiteralTypeNode {\n            const node = createNode(SyntaxKind.LiteralType) as LiteralTypeNode;\n            let unaryMinusExpression: PrefixUnaryExpression;\n            if (negative) {\n                unaryMinusExpression = createNode(SyntaxKind.PrefixUnaryExpression) as PrefixUnaryExpression;\n                unaryMinusExpression.operator = SyntaxKind.MinusToken;\n                nextToken();\n            }\n            let expression: BooleanLiteral | LiteralExpression | PrefixUnaryExpression = token() === SyntaxKind.TrueKeyword || token() === SyntaxKind.FalseKeyword\n                ? parseTokenNode<BooleanLiteral>()\n                : parseLiteralLikeNode(token()) as LiteralExpression;\n            if (negative) {\n                unaryMinusExpression.operand = expression;\n                finishNode(unaryMinusExpression);\n                expression = unaryMinusExpression;\n            }\n            node.literal = expression;\n            return finishNode(node);\n        }\n\n        function nextTokenIsNumericLiteral() {\n            return nextToken() === SyntaxKind.NumericLiteral;\n        }\n\n        function parseNonArrayType(): TypeNode {\n            switch (token()) {\n                case SyntaxKind.AnyKeyword:\n                case SyntaxKind.StringKeyword:\n                case SyntaxKind.NumberKeyword:\n                case SyntaxKind.SymbolKeyword:\n                case SyntaxKind.BooleanKeyword:\n                case SyntaxKind.UndefinedKeyword:\n                case SyntaxKind.NeverKeyword:\n                case SyntaxKind.ObjectKeyword:\n                    // If these are followed by a dot, then parse these out as a dotted type reference instead.\n                    return tryParse(parseKeywordAndNoDot) || parseTypeReference();\n                case SyntaxKind.AsteriskToken:\n                    return parseJSDocAllType();\n                case 
SyntaxKind.QuestionToken:\n                    return parseJSDocUnknownOrNullableType();\n                case SyntaxKind.FunctionKeyword:\n                    return parseJSDocFunctionType();\n                case SyntaxKind.ExclamationToken:\n                    return parseJSDocNodeWithType(SyntaxKind.JSDocNonNullableType);\n                case SyntaxKind.NoSubstitutionTemplateLiteral:\n                case SyntaxKind.StringLiteral:\n                case SyntaxKind.NumericLiteral:\n                case SyntaxKind.TrueKeyword:\n                case SyntaxKind.FalseKeyword:\n                    return parseLiteralTypeNode();\n                case SyntaxKind.MinusToken:\n                    return lookAhead(nextTokenIsNumericLiteral) ? parseLiteralTypeNode(/*negative*/ true) : parseTypeReference();\n                case SyntaxKind.VoidKeyword:\n                case SyntaxKind.NullKeyword:\n                    return parseTokenNode<TypeNode>();\n                case SyntaxKind.ThisKeyword: {\n                    const thisKeyword = parseThisTypeNode();\n                    if (token() === SyntaxKind.IsKeyword && !scanner.hasPrecedingLineBreak()) {\n                        return parseThisTypePredicate(thisKeyword);\n                    }\n                    else {\n                        return thisKeyword;\n                    }\n                }\n                case SyntaxKind.TypeOfKeyword:\n                    return parseTypeQuery();\n                case SyntaxKind.OpenBraceToken:\n                    return lookAhead(isStartOfMappedType) ? 
parseMappedType() : parseTypeLiteral();\n                case SyntaxKind.OpenBracketToken:\n                    return parseTupleType();\n                case SyntaxKind.OpenParenToken:\n                    return parseParenthesizedType();\n                default:\n                    return parseTypeReference();\n            }\n        }\n\n        function isStartOfType(inStartOfParameter?: boolean): boolean {\n            switch (token()) {\n                case SyntaxKind.AnyKeyword:\n                case SyntaxKind.StringKeyword:\n                case SyntaxKind.NumberKeyword:\n                case SyntaxKind.BooleanKeyword:\n                case SyntaxKind.SymbolKeyword:\n                case SyntaxKind.UniqueKeyword:\n                case SyntaxKind.VoidKeyword:\n                case SyntaxKind.UndefinedKeyword:\n                case SyntaxKind.NullKeyword:\n                case SyntaxKind.ThisKeyword:\n                case SyntaxKind.TypeOfKeyword:\n                case SyntaxKind.NeverKeyword:\n                case SyntaxKind.OpenBraceToken:\n                case SyntaxKind.OpenBracketToken:\n                case SyntaxKind.LessThanToken:\n                case SyntaxKind.BarToken:\n                case SyntaxKind.AmpersandToken:\n                case SyntaxKind.NewKeyword:\n                case SyntaxKind.StringLiteral:\n                case SyntaxKind.NumericLiteral:\n                case SyntaxKind.TrueKeyword:\n                case SyntaxKind.FalseKeyword:\n                case SyntaxKind.ObjectKeyword:\n                case SyntaxKind.AsteriskToken:\n                case SyntaxKind.QuestionToken:\n                case SyntaxKind.ExclamationToken:\n                case SyntaxKind.DotDotDotToken:\n                case SyntaxKind.InferKeyword:\n                    return true;\n                case SyntaxKind.MinusToken:\n                    return !inStartOfParameter && lookAhead(nextTokenIsNumericLiteral);\n                case 
SyntaxKind.OpenParenToken:\n                    // Only consider '(' the start of a type if followed by ')', '...', an identifier, a modifier,\n                    // or something that starts a type. We don't want to consider things like '(1)' a type.\n                    return !inStartOfParameter && lookAhead(isStartOfParenthesizedOrFunctionType);\n                default:\n                    return isIdentifier();\n            }\n        }\n\n        function isStartOfParenthesizedOrFunctionType() {\n            nextToken();\n            return token() === SyntaxKind.CloseParenToken || isStartOfParameter() || isStartOfType();\n        }\n\n        function parsePostfixTypeOrHigher(): TypeNode {\n            let type = parseNonArrayType();\n            while (!scanner.hasPrecedingLineBreak()) {\n                switch (token()) {\n                    case SyntaxKind.EqualsToken:\n                        // only parse postfix = inside jsdoc, because it's ambiguous elsewhere\n                        if (!(contextFlags & NodeFlags.JSDoc)) {\n                            return type;\n                        }\n                        type = createJSDocPostfixType(SyntaxKind.JSDocOptionalType, type);\n                        break;\n                    case SyntaxKind.ExclamationToken:\n                        type = createJSDocPostfixType(SyntaxKind.JSDocNonNullableType, type);\n                        break;\n                    case SyntaxKind.QuestionToken:\n                        // If not in JSDoc and next token is start of a type we have a conditional type\n                        if (!(contextFlags & NodeFlags.JSDoc) && lookAhead(nextTokenIsStartOfType)) {\n                            return type;\n                        }\n                        type = createJSDocPostfixType(SyntaxKind.JSDocNullableType, type);\n                        break;\n                    case SyntaxKind.OpenBracketToken:\n                        
parseExpected(SyntaxKind.OpenBracketToken);\n                        if (isStartOfType()) {\n                            const node = createNode(SyntaxKind.IndexedAccessType, type.pos) as IndexedAccessTypeNode;\n                            node.objectType = type;\n                            node.indexType = parseType();\n                            parseExpected(SyntaxKind.CloseBracketToken);\n                            type = finishNode(node);\n                        }\n                        else {\n                            const node = createNode(SyntaxKind.ArrayType, type.pos) as ArrayTypeNode;\n                            node.elementType = type;\n                            parseExpected(SyntaxKind.CloseBracketToken);\n                            type = finishNode(node);\n                        }\n                        break;\n                    default:\n                        return type;\n                }\n            }\n            return type;\n        }\n\n        function createJSDocPostfixType(kind: SyntaxKind, type: TypeNode) {\n            nextToken();\n            const postfix = createNode(kind, type.pos) as JSDocOptionalType | JSDocNonNullableType | JSDocNullableType;\n            postfix.type = type;\n            return finishNode(postfix);\n        }\n\n        function parseTypeOperator(operator: SyntaxKind.KeyOfKeyword | SyntaxKind.UniqueKeyword) {\n            const node = <TypeOperatorNode>createNode(SyntaxKind.TypeOperator);\n            parseExpected(operator);\n            node.operator = operator;\n            node.type = parseTypeOperatorOrHigher();\n            return finishNode(node);\n        }\n\n        function parseInferType(): InferTypeNode {\n            const node = <InferTypeNode>createNode(SyntaxKind.InferType);\n            parseExpected(SyntaxKind.InferKeyword);\n            const typeParameter = <TypeParameterDeclaration>createNode(SyntaxKind.TypeParameter);\n            typeParameter.name = 
parseIdentifier();\n            node.typeParameter = finishNode(typeParameter);\n            return finishNode(node);\n        }\n\n        function parseTypeOperatorOrHigher(): TypeNode {\n            const operator = token();\n            switch (operator) {\n                case SyntaxKind.KeyOfKeyword:\n                case SyntaxKind.UniqueKeyword:\n                    return parseTypeOperator(operator);\n                case SyntaxKind.InferKeyword:\n                    return parseInferType();\n                case SyntaxKind.DotDotDotToken: {\n                    const result = createNode(SyntaxKind.JSDocVariadicType) as JSDocVariadicType;\n                    nextToken();\n                    result.type = parsePostfixTypeOrHigher();\n                    return finishNode(result);\n                }\n            }\n            return parsePostfixTypeOrHigher();\n        }\n\n        function parseUnionOrIntersectionType(kind: SyntaxKind.UnionType | SyntaxKind.IntersectionType, parseConstituentType: () => TypeNode, operator: SyntaxKind.BarToken | SyntaxKind.AmpersandToken): TypeNode {\n            parseOptional(operator);\n            let type = parseConstituentType();\n            if (token() === operator) {\n                const types = [type];\n                while (parseOptional(operator)) {\n                    types.push(parseConstituentType());\n                }\n                const node = <UnionOrIntersectionTypeNode>createNode(kind, type.pos);\n                node.types = createNodeArray(types, type.pos);\n                type = finishNode(node);\n            }\n            return type;\n        }\n\n        function parseIntersectionTypeOrHigher(): TypeNode {\n            return parseUnionOrIntersectionType(SyntaxKind.IntersectionType, parseTypeOperatorOrHigher, SyntaxKind.AmpersandToken);\n        }\n\n        function parseUnionTypeOrHigher(): TypeNode {\n            return parseUnionOrIntersectionType(SyntaxKind.UnionType, 
parseIntersectionTypeOrHigher, SyntaxKind.BarToken);\n        }\n\n        function isStartOfFunctionType(): boolean {\n            if (token() === SyntaxKind.LessThanToken) {\n                return true;\n            }\n            return token() === SyntaxKind.OpenParenToken && lookAhead(isUnambiguouslyStartOfFunctionType);\n        }\n\n        function skipParameterStart(): boolean {\n            if (isModifierKind(token())) {\n                // Skip modifiers\n                parseModifiers();\n            }\n            if (isIdentifier() || token() === SyntaxKind.ThisKeyword) {\n                nextToken();\n                return true;\n            }\n            if (token() === SyntaxKind.OpenBracketToken || token() === SyntaxKind.OpenBraceToken) {\n                // Return true if we can parse an array or object binding pattern with no errors\n                const previousErrorCount = parseDiagnostics.length;\n                parseIdentifierOrPattern();\n                return previousErrorCount === parseDiagnostics.length;\n            }\n            return false;\n        }\n\n        function isUnambiguouslyStartOfFunctionType() {\n            nextToken();\n            if (token() === SyntaxKind.CloseParenToken || token() === SyntaxKind.DotDotDotToken) {\n                // ( )\n                // ( ...\n                return true;\n            }\n            if (skipParameterStart()) {\n                // We successfully skipped modifiers (if any) and an identifier or binding pattern,\n                // now see if we have something that indicates a parameter declaration\n                if (token() === SyntaxKind.ColonToken || token() === SyntaxKind.CommaToken ||\n                    token() === SyntaxKind.QuestionToken || token() === SyntaxKind.EqualsToken) {\n                    // ( xxx :\n                    // ( xxx ,\n                    // ( xxx ?\n                    // ( xxx =\n                    return true;\n                }\n       
         if (token() === SyntaxKind.CloseParenToken) {\n                    nextToken();\n                    if (token() === SyntaxKind.EqualsGreaterThanToken) {\n                        // ( xxx ) =>\n                        return true;\n                    }\n                }\n            }\n            return false;\n        }\n\n        function parseTypeOrTypePredicate(): TypeNode {\n            const typePredicateVariable = isIdentifier() && tryParse(parseTypePredicatePrefix);\n            const type = parseType();\n            if (typePredicateVariable) {\n                const node = <TypePredicateNode>createNode(SyntaxKind.TypePredicate, typePredicateVariable.pos);\n                node.parameterName = typePredicateVariable;\n                node.type = type;\n                return finishNode(node);\n            }\n            else {\n                return type;\n            }\n        }\n\n        function parseTypePredicatePrefix() {\n            const id = parseIdentifier();\n            if (token() === SyntaxKind.IsKeyword && !scanner.hasPrecedingLineBreak()) {\n                nextToken();\n                return id;\n            }\n        }\n\n        function parseType(): TypeNode {\n            // The rules about 'yield' only apply to actual code/expression contexts.  They don't\n            // apply to 'type' contexts.  
So we disable these parameters here before moving on.
            return doOutsideOfContext(NodeFlags.TypeExcludesFlags, parseTypeWorker);
        }

        function parseTypeWorker(noConditionalTypes?: boolean): TypeNode {
            if (isStartOfFunctionType()) {
                return parseFunctionOrConstructorType(SyntaxKind.FunctionType);
            }
            if (token() === SyntaxKind.NewKeyword) {
                return parseFunctionOrConstructorType(SyntaxKind.ConstructorType);
            }
            const type = parseUnionTypeOrHigher();
            if (!noConditionalTypes && !scanner.hasPrecedingLineBreak() && parseOptional(SyntaxKind.ExtendsKeyword)) {
                const node = <ConditionalTypeNode>createNode(SyntaxKind.ConditionalType, type.pos);
                node.checkType = type;
                // The type following 'extends' is not permitted to be another conditional type
                node.extendsType = parseTypeWorker(/*noConditionalTypes*/ true);
                parseExpected(SyntaxKind.QuestionToken);
                node.trueType = parseTypeWorker();
                parseExpected(SyntaxKind.ColonToken);
                node.falseType = parseTypeWorker();
                return finishNode(node);
            }
            return type;
        }

        function parseTypeAnnotation(): TypeNode | undefined {
            return parseOptional(SyntaxKind.ColonToken) ? 
parseType() : undefined;\n        }\n\n        // EXPRESSIONS\n        function isStartOfLeftHandSideExpression(): boolean {\n            switch (token()) {\n                case SyntaxKind.ThisKeyword:\n                case SyntaxKind.SuperKeyword:\n                case SyntaxKind.NullKeyword:\n                case SyntaxKind.TrueKeyword:\n                case SyntaxKind.FalseKeyword:\n                case SyntaxKind.NumericLiteral:\n                case SyntaxKind.StringLiteral:\n                case SyntaxKind.NoSubstitutionTemplateLiteral:\n                case SyntaxKind.TemplateHead:\n                case SyntaxKind.OpenParenToken:\n                case SyntaxKind.OpenBracketToken:\n                case SyntaxKind.OpenBraceToken:\n                case SyntaxKind.FunctionKeyword:\n                case SyntaxKind.ClassKeyword:\n                case SyntaxKind.NewKeyword:\n                case SyntaxKind.SlashToken:\n                case SyntaxKind.SlashEqualsToken:\n                case SyntaxKind.Identifier:\n                    return true;\n                case SyntaxKind.ImportKeyword:\n                    return lookAhead(nextTokenIsOpenParenOrLessThan);\n                default:\n                    return isIdentifier();\n            }\n        }\n\n        function isStartOfExpression(): boolean {\n            if (isStartOfLeftHandSideExpression()) {\n                return true;\n            }\n\n            switch (token()) {\n                case SyntaxKind.PlusToken:\n                case SyntaxKind.MinusToken:\n                case SyntaxKind.TildeToken:\n                case SyntaxKind.ExclamationToken:\n                case SyntaxKind.DeleteKeyword:\n                case SyntaxKind.TypeOfKeyword:\n                case SyntaxKind.VoidKeyword:\n                case SyntaxKind.PlusPlusToken:\n                case SyntaxKind.MinusMinusToken:\n                case SyntaxKind.LessThanToken:\n                case SyntaxKind.AwaitKeyword:\n               
 case SyntaxKind.YieldKeyword:\n                    // Yield/await always starts an expression.  Either it is an identifier (in which case\n                    // it is definitely an expression), or it's a keyword (because we're in a generator or\n                    // async function, or in strict mode, or both) that starts a yield or await expression.\n                    return true;\n                default:\n                    // Error tolerance.  If we see the start of some binary operator, we consider\n                    // that the start of an expression.  That way we'll parse out a missing identifier,\n                    // give a good message about an identifier being missing, and then consume the\n                    // rest of the binary expression.\n                    if (isBinaryOperator()) {\n                        return true;\n                    }\n\n                    return isIdentifier();\n            }\n        }\n\n        function isStartOfExpressionStatement(): boolean {\n            // As per the grammar, none of '{', 'function', 'class', or '@' (a decorator) can start an expression statement.\n            return token() !== SyntaxKind.OpenBraceToken &&\n                token() !== SyntaxKind.FunctionKeyword &&\n                token() !== SyntaxKind.ClassKeyword &&\n                token() !== SyntaxKind.AtToken &&\n                isStartOfExpression();\n        }\n\n        function parseExpression(): Expression {\n            // Expression[in]:\n            //      AssignmentExpression[in]\n            //      Expression[in] , AssignmentExpression[in]\n\n            // clear the decorator context when parsing Expression, as it should be unambiguous when parsing a decorator\n            const saveDecoratorContext = inDecoratorContext();\n            if (saveDecoratorContext) {\n                setDecoratorContext(/*val*/ false);\n            }\n\n            let expr = parseAssignmentExpressionOrHigher();\n            let operatorToken: 
BinaryOperatorToken;\n            while ((operatorToken = parseOptionalToken(SyntaxKind.CommaToken))) {\n                expr = makeBinaryExpression(expr, operatorToken, parseAssignmentExpressionOrHigher());\n            }\n\n            if (saveDecoratorContext) {\n                setDecoratorContext(/*val*/ true);\n            }\n            return expr;\n        }\n\n        function parseInitializer(): Expression | undefined {\n            return parseOptional(SyntaxKind.EqualsToken) ? parseAssignmentExpressionOrHigher() : undefined;\n        }\n\n        function parseAssignmentExpressionOrHigher(): Expression {\n            //  AssignmentExpression[in,yield]:\n            //      1) ConditionalExpression[?in,?yield]\n            //      2) LeftHandSideExpression = AssignmentExpression[?in,?yield]\n            //      3) LeftHandSideExpression AssignmentOperator AssignmentExpression[?in,?yield]\n            //      4) ArrowFunctionExpression[?in,?yield]\n            //      5) AsyncArrowFunctionExpression[in,yield,await]\n            //      6) [+Yield] YieldExpression[?In]\n            //\n            // Note: for ease of implementation we treat productions '2' and '3' as the same thing.\n            // (i.e. 
they're both BinaryExpressions with an assignment operator in them).\n\n            // First, do the simple check if we have a YieldExpression (production '6').\n            if (isYieldExpression()) {\n                return parseYieldExpression();\n            }\n\n            // Then, check if we have an arrow function (productions '4' and '5') that starts with a parenthesized\n            // parameter list or is an async arrow function.\n            // AsyncArrowFunctionExpression:\n            //      1) async[no LineTerminator here]AsyncArrowBindingIdentifier[?Yield][no LineTerminator here]=>AsyncConciseBody[?In]\n            //      2) CoverCallExpressionAndAsyncArrowHead[?Yield, ?Await][no LineTerminator here]=>AsyncConciseBody[?In]\n            // Production (1) of AsyncArrowFunctionExpression is parsed in \"tryParseAsyncSimpleArrowFunctionExpression\".\n            // And production (2) is parsed in \"tryParseParenthesizedArrowFunctionExpression\".\n            //\n            // If we do successfully parse an arrow function, we must *not* recurse for productions 1, 2 or 3. An ArrowFunction is\n            // not a LeftHandSideExpression, nor does it start a ConditionalExpression.  So we are done\n            // with AssignmentExpression if we see one.\n            const arrowExpression = tryParseParenthesizedArrowFunctionExpression() || tryParseAsyncSimpleArrowFunctionExpression();\n            if (arrowExpression) {\n                return arrowExpression;\n            }\n\n            // Now try to see if we're in production '1', '2' or '3'.  A conditional expression can\n            // start with a LogicalOrExpression, while the assignment productions can only start with\n            // LeftHandSideExpressions.\n            //\n            // So, first, we try to just parse out a BinaryExpression.  
If we get something that is a\n            // LeftHandSide or higher, then we can try to parse out the assignment expression part.\n            // Otherwise, we try to parse out the conditional expression bit.  We want to allow any\n            // binary expression here, so we pass in the 'lowest' precedence here so that it matches\n            // and consumes anything.\n            const expr = parseBinaryExpressionOrHigher(/*precedence*/ 0);\n\n            // To avoid a look-ahead, we did not handle the case of an arrow function with a single un-parenthesized\n            // parameter ('x => ...') above. We handle it here by checking if the parsed expression was a single\n            // identifier and the current token is an arrow.\n            if (expr.kind === SyntaxKind.Identifier && token() === SyntaxKind.EqualsGreaterThanToken) {\n                return parseSimpleArrowFunctionExpression(<Identifier>expr);\n            }\n\n            // Now see if we might be in cases '2' or '3'.\n            // If the expression was a LHS expression, and we have an assignment operator, then\n            // we're in '2' or '3'. Consume the assignment and return.\n            //\n            // Note: we call reScanGreaterToken so that we get an appropriately merged token\n            // for cases like `> > =` becoming `>>=`\n            if (isLeftHandSideExpression(expr) && isAssignmentOperator(reScanGreaterToken())) {\n                return makeBinaryExpression(expr, <BinaryOperatorToken>parseTokenNode(), parseAssignmentExpressionOrHigher());\n            }\n\n            // It wasn't an assignment or a lambda.  
This is a conditional expression:\n            return parseConditionalExpressionRest(expr);\n        }\n\n        function isYieldExpression(): boolean {\n            if (token() === SyntaxKind.YieldKeyword) {\n                // If we have a 'yield' keyword, and this is a context where yield expressions are\n                // allowed, then definitely parse out a yield expression.\n                if (inYieldContext()) {\n                    return true;\n                }\n\n                // We're in a context where 'yield expr' is not allowed.  However, if we can\n                // definitely tell that the user was trying to parse a 'yield expr' and not\n                // just a normal expr that starts with a 'yield' identifier, then parse out\n                // a 'yield expr'.  We can then report an error later that they are only\n                // allowed in generator expressions.\n                //\n                // For example, if we see 'yield(foo)', then we'll have to treat that as an\n                // invocation expression of something called 'yield'.  However, if we have\n                // 'yield foo' then that is not legal as a normal expression, so we can\n                // definitely recognize this as a yield expression.\n                //\n                // For now we just check if the next token is an identifier.  More heuristics\n                // can be added here later as necessary.  
We just need to make sure that we\n                // don't accidentally consume something legal.\n                return lookAhead(nextTokenIsIdentifierOrKeywordOrLiteralOnSameLine);\n            }\n\n            return false;\n        }\n\n        function nextTokenIsIdentifierOnSameLine() {\n            nextToken();\n            return !scanner.hasPrecedingLineBreak() && isIdentifier();\n        }\n\n        function parseYieldExpression(): YieldExpression {\n            const node = <YieldExpression>createNode(SyntaxKind.YieldExpression);\n\n            // YieldExpression[In] :\n            //      yield\n            //      yield [no LineTerminator here] [Lexical goal InputElementRegExp]AssignmentExpression[?In, Yield]\n            //      yield [no LineTerminator here] * [Lexical goal InputElementRegExp]AssignmentExpression[?In, Yield]\n            nextToken();\n\n            if (!scanner.hasPrecedingLineBreak() &&\n                (token() === SyntaxKind.AsteriskToken || isStartOfExpression())) {\n                node.asteriskToken = parseOptionalToken(SyntaxKind.AsteriskToken);\n                node.expression = parseAssignmentExpressionOrHigher();\n                return finishNode(node);\n            }\n            else {\n                // If the next token is not on the same line as yield, 
or we don't have an '*' or\n                // the start of an expression, then this is just a simple \"yield\" expression.\n                return finishNode(node);\n            }\n        }\n\n        function parseSimpleArrowFunctionExpression(identifier: Identifier, asyncModifier?: NodeArray<Modifier>): ArrowFunction {\n            Debug.assert(token() === SyntaxKind.EqualsGreaterThanToken, \"parseSimpleArrowFunctionExpression should only have been called if we had a =>\");\n\n            let node: ArrowFunction;\n            if (asyncModifier) {\n                node = <ArrowFunction>createNode(SyntaxKind.ArrowFunction, asyncModifier.pos);\n                node.modifiers = asyncModifier;\n            }\n            else {\n                node = <ArrowFunction>createNode(SyntaxKind.ArrowFunction, identifier.pos);\n            }\n\n            const parameter = <ParameterDeclaration>createNode(SyntaxKind.Parameter, identifier.pos);\n            parameter.name = identifier;\n            finishNode(parameter);\n\n            node.parameters = createNodeArray<ParameterDeclaration>([parameter], parameter.pos, parameter.end);\n\n            node.equalsGreaterThanToken = parseExpectedToken(SyntaxKind.EqualsGreaterThanToken);\n            node.body = parseArrowFunctionExpressionBody(/*isAsync*/ !!asyncModifier);\n\n            return addJSDocComment(finishNode(node));\n        }\n\n        function tryParseParenthesizedArrowFunctionExpression(): Expression | undefined {\n            const triState = isParenthesizedArrowFunctionExpression();\n            if (triState === Tristate.False) {\n                // It's definitely not a parenthesized arrow function expression.\n                return undefined;\n            }\n\n            // If we definitely have an arrow function, then we can just parse one, not requiring a\n            // following => or { token. Otherwise, we *might* have an arrow function.  
Try to parse\n            // it out, but don't allow any ambiguity, and return 'undefined' if this could be an\n            // expression instead.\n            const arrowFunction = triState === Tristate.True\n                ? parseParenthesizedArrowFunctionExpressionHead(/*allowAmbiguity*/ true)\n                : tryParse(parsePossibleParenthesizedArrowFunctionExpressionHead);\n\n            if (!arrowFunction) {\n                // Didn't appear to actually be a parenthesized arrow function.  Just bail out.\n                return undefined;\n            }\n\n            const isAsync = hasModifier(arrowFunction, ModifierFlags.Async);\n\n            // If we have an arrow, then try to parse the body. Even if not, try to parse if we\n            // have an opening brace, just in case we're in an error state.\n            const lastToken = token();\n            arrowFunction.equalsGreaterThanToken = parseExpectedToken(SyntaxKind.EqualsGreaterThanToken);\n            arrowFunction.body = (lastToken === SyntaxKind.EqualsGreaterThanToken || lastToken === SyntaxKind.OpenBraceToken)\n                ? 
parseArrowFunctionExpressionBody(isAsync)\n                : parseIdentifier();\n\n            return finishNode(arrowFunction);\n        }\n\n        //  True        -> We definitely expect a parenthesized arrow function here.\n        //  False       -> There *cannot* be a parenthesized arrow function here.\n        //  Unknown     -> There *might* be a parenthesized arrow function here.\n        //                 Speculatively look ahead to be sure, and rollback if not.\n        function isParenthesizedArrowFunctionExpression(): Tristate {\n            if (token() === SyntaxKind.OpenParenToken || token() === SyntaxKind.LessThanToken || token() === SyntaxKind.AsyncKeyword) {\n                return lookAhead(isParenthesizedArrowFunctionExpressionWorker);\n            }\n\n            if (token() === SyntaxKind.EqualsGreaterThanToken) {\n                // ERROR RECOVERY TWEAK:\n                // If we see a standalone => try to parse it as an arrow function expression as that's\n                // likely what the user intended to write.\n                return Tristate.True;\n            }\n            // Definitely not a parenthesized arrow function.\n            return Tristate.False;\n        }\n\n        function isParenthesizedArrowFunctionExpressionWorker() {\n            if (token() === SyntaxKind.AsyncKeyword) {\n                nextToken();\n                if (scanner.hasPrecedingLineBreak()) {\n                    return Tristate.False;\n                }\n                if (token() !== SyntaxKind.OpenParenToken && token() !== SyntaxKind.LessThanToken) {\n                    return Tristate.False;\n                }\n            }\n\n            const first = token();\n            const second = nextToken();\n\n            if (first === SyntaxKind.OpenParenToken) {\n                if (second === SyntaxKind.CloseParenToken) {\n                    // Simple cases: \"() =>\", \"(): \", and \"() {\".\n                    // This is an arrow function 
with no parameters.\n                    // The last one is not actually an arrow function,\n                    // but this is probably what the user intended.\n                    const third = nextToken();\n                    switch (third) {\n                        case SyntaxKind.EqualsGreaterThanToken:\n                        case SyntaxKind.ColonToken:\n                        case SyntaxKind.OpenBraceToken:\n                            return Tristate.True;\n                        default:\n                            return Tristate.False;\n                    }\n                }\n\n                // If we encounter \"([\" or \"({\", this could be the start of a binding pattern.\n                // Examples:\n                //      ([ x ]) => { }\n                //      ({ x }) => { }\n                //      ([ x ])\n                //      ({ x })\n                if (second === SyntaxKind.OpenBracketToken || second === SyntaxKind.OpenBraceToken) {\n                    return Tristate.Unknown;\n                }\n\n                // Simple case: \"(...\"\n                // This is an arrow function with a rest parameter.\n                if (second === SyntaxKind.DotDotDotToken) {\n                    return Tristate.True;\n                }\n\n                // Check for \"(xxx yyy\", where xxx is a modifier and yyy is an identifier. 
This\n                // isn't actually allowed, but we want to treat it as a lambda so we can provide\n                // a good error message.\n                if (isModifierKind(second) && second !== SyntaxKind.AsyncKeyword && lookAhead(nextTokenIsIdentifier)) {\n                    return Tristate.True;\n                }\n\n                // If we had \"(\" followed by something that's not an identifier,\n                // then this definitely doesn't look like a lambda.\n                if (!isIdentifier()) {\n                    return Tristate.False;\n                }\n\n                switch (nextToken()) {\n                    case SyntaxKind.ColonToken:\n                        // If we have something like \"(a:\", then we must have a\n                        // type-annotated parameter in an arrow function expression.\n                        return Tristate.True;\n                    case SyntaxKind.QuestionToken:\n                        nextToken();\n                        // If we have \"(a?:\" or \"(a?,\" or \"(a?=\" or \"(a?)\" then it is definitely a lambda.\n                        if (token() === SyntaxKind.ColonToken || token() === SyntaxKind.CommaToken || token() === SyntaxKind.EqualsToken || token() === SyntaxKind.CloseParenToken) {\n                            return Tristate.True;\n                        }\n                        // Otherwise it is definitely not a lambda.\n                        return Tristate.False;\n                    case SyntaxKind.CommaToken:\n                    case SyntaxKind.EqualsToken:\n                    case SyntaxKind.CloseParenToken:\n                        // If we have \"(a,\" or \"(a=\" or \"(a)\" this *could* be an arrow function\n                        return Tristate.Unknown;\n                }\n                // It is definitely not an arrow function\n                return Tristate.False;\n            }\n            else {\n                Debug.assert(first === 
SyntaxKind.LessThanToken);\n\n                // If we have \"<\" not followed by an identifier,\n                // then this definitely is not an arrow function.\n                if (!isIdentifier()) {\n                    return Tristate.False;\n                }\n\n                // JSX overrides\n                if (sourceFile.languageVariant === LanguageVariant.JSX) {\n                    const isArrowFunctionInJsx = lookAhead(() => {\n                        const third = nextToken();\n                        if (third === SyntaxKind.ExtendsKeyword) {\n                            const fourth = nextToken();\n                            switch (fourth) {\n                                case SyntaxKind.EqualsToken:\n                                case SyntaxKind.GreaterThanToken:\n                                    return false;\n                                default:\n                                    return true;\n                            }\n                        }\n                        else if (third === SyntaxKind.CommaToken) {\n                            return true;\n                        }\n                        return false;\n                    });\n\n                    if (isArrowFunctionInJsx) {\n                        return Tristate.True;\n                    }\n\n                    return Tristate.False;\n                }\n\n                // This *could* be a parenthesized arrow function.\n                return Tristate.Unknown;\n            }\n        }\n\n        function parsePossibleParenthesizedArrowFunctionExpressionHead(): ArrowFunction | undefined {\n            return parseParenthesizedArrowFunctionExpressionHead(/*allowAmbiguity*/ false);\n        }\n\n        function tryParseAsyncSimpleArrowFunctionExpression(): ArrowFunction | undefined {\n            // We do a check here so that we won't be doing an unnecessary call to \"lookAhead\"\n            if (token() === SyntaxKind.AsyncKeyword) {\n                if 
(lookAhead(isUnParenthesizedAsyncArrowFunctionWorker) === Tristate.True) {\n                    const asyncModifier = parseModifiersForArrowFunction();\n                    const expr = parseBinaryExpressionOrHigher(/*precedence*/ 0);\n                    return parseSimpleArrowFunctionExpression(<Identifier>expr, asyncModifier);\n                }\n            }\n            return undefined;\n        }\n\n        function isUnParenthesizedAsyncArrowFunctionWorker(): Tristate {\n            // AsyncArrowFunctionExpression:\n            //      1) async[no LineTerminator here]AsyncArrowBindingIdentifier[?Yield][no LineTerminator here]=>AsyncConciseBody[?In]\n            //      2) CoverCallExpressionAndAsyncArrowHead[?Yield, ?Await][no LineTerminator here]=>AsyncConciseBody[?In]\n            if (token() === SyntaxKind.AsyncKeyword) {\n                nextToken();\n                // If the \"async\" is followed by the \"=>\" token, then it is not the beginning of an async arrow function,\n                // but instead a simple arrow function, which will be parsed inside \"parseAssignmentExpressionOrHigher\"\n                if (scanner.hasPrecedingLineBreak() || token() === SyntaxKind.EqualsGreaterThanToken) {\n                    return Tristate.False;\n                }\n                // Check for un-parenthesized AsyncArrowFunction\n                const expr = parseBinaryExpressionOrHigher(/*precedence*/ 0);\n                if (!scanner.hasPrecedingLineBreak() && expr.kind === SyntaxKind.Identifier && token() === SyntaxKind.EqualsGreaterThanToken) {\n                    return Tristate.True;\n                }\n            }\n\n            return Tristate.False;\n        }\n\n        function parseParenthesizedArrowFunctionExpressionHead(allowAmbiguity: boolean): ArrowFunction | undefined {\n            const node = <ArrowFunction>createNodeWithJSDoc(SyntaxKind.ArrowFunction);\n            node.modifiers = parseModifiersForArrowFunction();\n            const isAsync = 
hasModifier(node, ModifierFlags.Async) ? SignatureFlags.Await : SignatureFlags.None;\n            // Arrow functions are never generators.\n            //\n            // If we're speculatively parsing a signature for a parenthesized arrow function, then\n            // we have to have a complete parameter list.  Otherwise we might see something like\n            // a => (b => c)\n            // And think that \"(b =>\" was actually a parenthesized arrow function with a missing\n            // close paren.\n            fillSignature(SyntaxKind.ColonToken, isAsync | (allowAmbiguity ? SignatureFlags.None : SignatureFlags.RequireCompleteParameterList), node);\n\n            // If we couldn't get parameters, we definitely could not parse out an arrow function.\n            if (!node.parameters) {\n                return undefined;\n            }\n\n            // Parsing a signature isn't enough.\n            // Parenthesized arrow signatures often look like other valid expressions.\n            // For instance:\n            //  - \"(x = 10)\" is an assignment expression parsed as a signature with a default parameter value.\n            //  - \"(x,y)\" is a comma expression parsed as a signature with two parameters.\n            //  - \"a ? (b): c\" will have \"(b):\" parsed as a signature with a return type annotation.\n            //\n            // So we need just a bit of lookahead to ensure that it can only be a signature.\n            if (!allowAmbiguity && token() !== SyntaxKind.EqualsGreaterThanToken && token() !== SyntaxKind.OpenBraceToken) {\n                // Returning undefined here will cause our caller to rewind to where we started from.\n                return undefined;\n            }\n\n            return node;\n        }\n\n        function parseArrowFunctionExpressionBody(isAsync: boolean): Block | Expression {\n            if (token() === SyntaxKind.OpenBraceToken) {\n                return parseFunctionBlock(isAsync ? 
SignatureFlags.Await : SignatureFlags.None);\n            }\n\n            if (token() !== SyntaxKind.SemicolonToken &&\n                token() !== SyntaxKind.FunctionKeyword &&\n                token() !== SyntaxKind.ClassKeyword &&\n                isStartOfStatement() &&\n                !isStartOfExpressionStatement()) {\n                // Check if we got a plain statement (i.e. no expression-statements, no function/class expressions/declarations)\n                //\n                // Here we try to recover from a potential error situation in the case where the\n                // user meant to supply a block. For example, if the user wrote:\n                //\n                //  a =>\n                //      let v = 0;\n                //  }\n                //\n                // they may be missing an open brace.  Check to see if that's the case so we can\n                // try to recover better.  If we don't do this, then the next close curly we see may end\n                // up preemptively closing the containing construct.\n                //\n                // Note: even when 'IgnoreMissingOpenBrace' is passed, parseBody will still error.\n                return parseFunctionBlock(SignatureFlags.IgnoreMissingOpenBrace | (isAsync ? SignatureFlags.Await : SignatureFlags.None));\n            }\n\n            return isAsync\n                ? 
doInAwaitContext(parseAssignmentExpressionOrHigher)\n                : doOutsideOfAwaitContext(parseAssignmentExpressionOrHigher);\n        }\n\n        function parseConditionalExpressionRest(leftOperand: Expression): Expression {\n            // Note: we are passed in an expression which was produced from parseBinaryExpressionOrHigher.\n            const questionToken = parseOptionalToken(SyntaxKind.QuestionToken);\n            if (!questionToken) {\n                return leftOperand;\n            }\n\n            // Note: we explicitly 'allowIn' in the whenTrue part of the condition expression, and\n            // we do not do that for the 'whenFalse' part.\n            const node = <ConditionalExpression>createNode(SyntaxKind.ConditionalExpression, leftOperand.pos);\n            node.condition = leftOperand;\n            node.questionToken = questionToken;\n            node.whenTrue = doOutsideOfContext(disallowInAndDecoratorContext, parseAssignmentExpressionOrHigher);\n            node.colonToken = parseExpectedToken(SyntaxKind.ColonToken);\n            node.whenFalse = nodeIsPresent(node.colonToken)\n                ? parseAssignmentExpressionOrHigher()\n                : createMissingNode(SyntaxKind.Identifier, /*reportAtCurrentPosition*/ false, Diagnostics._0_expected, tokenToString(SyntaxKind.ColonToken));\n            return finishNode(node);\n        }\n\n        function parseBinaryExpressionOrHigher(precedence: number): Expression {\n            const leftOperand = parseUnaryExpressionOrHigher();\n            return parseBinaryExpressionRest(precedence, leftOperand);\n        }\n\n        function isInOrOfKeyword(t: SyntaxKind) {\n            return t === SyntaxKind.InKeyword || t === SyntaxKind.OfKeyword;\n        }\n\n        function parseBinaryExpressionRest(precedence: number, leftOperand: Expression): Expression {\n            while (true) {\n                // We either have a binary operator here, or we're finished.  
We call\n                // reScanGreaterToken so that we merge token sequences like > and = into >=\n\n                reScanGreaterToken();\n                const newPrecedence = getBinaryOperatorPrecedence();\n\n                // Check the precedence to see if we should \"take\" this operator\n                // - For left associative operators (all operators but **), consume the operator,\n                //   recursively call the function below, and parse binaryExpression as a rightOperand\n                //   of the caller if the new precedence of the operator is strictly greater than the current precedence.\n                //   For example:\n                //      a - b - c;\n                //            ^token; leftOperand = b. Return b to the caller as a rightOperand\n                //      a * b - c\n                //            ^token; leftOperand = b. Return b to the caller as a rightOperand\n                //      a - b * c;\n                //            ^token; leftOperand = b. Return b * c to the caller as a rightOperand\n                // - For the right associative operator (**), consume the operator, recursively call the function\n                //   and parse binaryExpression as a rightOperand of the caller if the new precedence of\n                //   the operator is greater than or equal to the current precedence\n                //   For example:\n                //      a ** b ** c;\n                //             ^^token; leftOperand = b. Return b ** c to the caller as a rightOperand\n                //      a - b ** c;\n                //            ^^token; leftOperand = b. Return b ** c to the caller as a rightOperand\n                //      a ** b - c\n                //             ^token; leftOperand = b. 
Return b to the caller as a rightOperand\n                const consumeCurrentOperator = token() === SyntaxKind.AsteriskAsteriskToken ?\n                    newPrecedence >= precedence :\n                    newPrecedence > precedence;\n\n                if (!consumeCurrentOperator) {\n                    break;\n                }\n\n                if (token() === SyntaxKind.InKeyword && inDisallowInContext()) {\n                    break;\n                }\n\n                if (token() === SyntaxKind.AsKeyword) {\n                    // Make sure we *do* perform ASI for constructs like this:\n                    //    var x = foo\n                    //    as (Bar)\n                    // This should be parsed as an initialized variable, followed\n                    // by a function call to 'as' with the argument 'Bar'\n                    if (scanner.hasPrecedingLineBreak()) {\n                        break;\n                    }\n                    else {\n                        nextToken();\n                        leftOperand = makeAsExpression(leftOperand, parseType());\n                    }\n                }\n                else {\n                    leftOperand = makeBinaryExpression(leftOperand, <BinaryOperatorToken>parseTokenNode(), parseBinaryExpressionOrHigher(newPrecedence));\n                }\n            }\n\n            return leftOperand;\n        }\n\n        function isBinaryOperator() {\n            if (inDisallowInContext() && token() === SyntaxKind.InKeyword) {\n                return false;\n            }\n\n            return getBinaryOperatorPrecedence() > 0;\n        }\n\n        function getBinaryOperatorPrecedence(): number {\n            switch (token()) {\n                case SyntaxKind.BarBarToken:\n                    return 1;\n                case SyntaxKind.AmpersandAmpersandToken:\n                    return 2;\n                case SyntaxKind.BarToken:\n                    return 3;\n                case 
SyntaxKind.CaretToken:\n                    return 4;\n                case SyntaxKind.AmpersandToken:\n                    return 5;\n                case SyntaxKind.EqualsEqualsToken:\n                case SyntaxKind.ExclamationEqualsToken:\n                case SyntaxKind.EqualsEqualsEqualsToken:\n                case SyntaxKind.ExclamationEqualsEqualsToken:\n                    return 6;\n                case SyntaxKind.LessThanToken:\n                case SyntaxKind.GreaterThanToken:\n                case SyntaxKind.LessThanEqualsToken:\n                case SyntaxKind.GreaterThanEqualsToken:\n                case SyntaxKind.InstanceOfKeyword:\n                case SyntaxKind.InKeyword:\n                case SyntaxKind.AsKeyword:\n                    return 7;\n                case SyntaxKind.LessThanLessThanToken:\n                case SyntaxKind.GreaterThanGreaterThanToken:\n                case SyntaxKind.GreaterThanGreaterThanGreaterThanToken:\n                    return 8;\n                case SyntaxKind.PlusToken:\n                case SyntaxKind.MinusToken:\n                    return 9;\n                case SyntaxKind.AsteriskToken:\n                case SyntaxKind.SlashToken:\n                case SyntaxKind.PercentToken:\n                    return 10;\n                case SyntaxKind.AsteriskAsteriskToken:\n                    return 11;\n            }\n\n            // -1 is lower than all other precedences.  
Returning it will cause binary expression\n            // parsing to stop.\n            return -1;\n        }\n\n        function makeBinaryExpression(left: Expression, operatorToken: BinaryOperatorToken, right: Expression): BinaryExpression {\n            const node = <BinaryExpression>createNode(SyntaxKind.BinaryExpression, left.pos);\n            node.left = left;\n            node.operatorToken = operatorToken;\n            node.right = right;\n            return finishNode(node);\n        }\n\n        function makeAsExpression(left: Expression, right: TypeNode): AsExpression {\n            const node = <AsExpression>createNode(SyntaxKind.AsExpression, left.pos);\n            node.expression = left;\n            node.type = right;\n            return finishNode(node);\n        }\n\n        function parsePrefixUnaryExpression() {\n            const node = <PrefixUnaryExpression>createNode(SyntaxKind.PrefixUnaryExpression);\n            node.operator = <PrefixUnaryOperator>token();\n            nextToken();\n            node.operand = parseSimpleUnaryExpression();\n\n            return finishNode(node);\n        }\n\n        function parseDeleteExpression() {\n            const node = <DeleteExpression>createNode(SyntaxKind.DeleteExpression);\n            nextToken();\n            node.expression = parseSimpleUnaryExpression();\n            return finishNode(node);\n        }\n\n        function parseTypeOfExpression() {\n            const node = <TypeOfExpression>createNode(SyntaxKind.TypeOfExpression);\n            nextToken();\n            node.expression = parseSimpleUnaryExpression();\n            return finishNode(node);\n        }\n\n        function parseVoidExpression() {\n            const node = <VoidExpression>createNode(SyntaxKind.VoidExpression);\n            nextToken();\n            node.expression = parseSimpleUnaryExpression();\n            return finishNode(node);\n        }\n\n        function isAwaitExpression(): boolean {\n            if 
(token() === SyntaxKind.AwaitKeyword) {\n                if (inAwaitContext()) {\n                    return true;\n                }\n\n                // here we are using similar heuristics as 'isYieldExpression'\n                return lookAhead(nextTokenIsIdentifierOrKeywordOrLiteralOnSameLine);\n            }\n\n            return false;\n        }\n\n        function parseAwaitExpression() {\n            const node = <AwaitExpression>createNode(SyntaxKind.AwaitExpression);\n            nextToken();\n            node.expression = parseSimpleUnaryExpression();\n            return finishNode(node);\n        }\n\n        /**\n         * Parse ES7 exponential expression and await expression\n         *\n         * ES7 ExponentiationExpression:\n         *      1) UnaryExpression[?Yield]\n         *      2) UpdateExpression[?Yield] ** ExponentiationExpression[?Yield]\n         *\n         */\n        function parseUnaryExpressionOrHigher(): UnaryExpression | BinaryExpression {\n            /**\n             * ES7 UpdateExpression:\n             *      1) LeftHandSideExpression[?Yield]\n             *      2) LeftHandSideExpression[?Yield][no LineTerminator here]++\n             *      3) LeftHandSideExpression[?Yield][no LineTerminator here]--\n             *      4) ++UnaryExpression[?Yield]\n             *      5) --UnaryExpression[?Yield]\n             */\n            if (isUpdateExpression()) {\n                const updateExpression = parseUpdateExpression();\n                return token() === SyntaxKind.AsteriskAsteriskToken ?\n                    <BinaryExpression>parseBinaryExpressionRest(getBinaryOperatorPrecedence(), updateExpression) :\n                    updateExpression;\n            }\n\n            /**\n             * ES7 UnaryExpression:\n             *      1) UpdateExpression[?yield]\n             *      2) delete UpdateExpression[?yield]\n             *      3) void UpdateExpression[?yield]\n             *      4) typeof 
UpdateExpression[?yield]\n             *      5) + UpdateExpression[?yield]\n             *      6) - UpdateExpression[?yield]\n             *      7) ~ UpdateExpression[?yield]\n             *      8) ! UpdateExpression[?yield]\n             */\n            const unaryOperator = token();\n            const simpleUnaryExpression = parseSimpleUnaryExpression();\n            if (token() === SyntaxKind.AsteriskAsteriskToken) {\n                const start = skipTrivia(sourceText, simpleUnaryExpression.pos);\n                if (simpleUnaryExpression.kind === SyntaxKind.TypeAssertionExpression) {\n                    parseErrorAtPosition(start, simpleUnaryExpression.end - start, Diagnostics.A_type_assertion_expression_is_not_allowed_in_the_left_hand_side_of_an_exponentiation_expression_Consider_enclosing_the_expression_in_parentheses);\n                }\n                else {\n                    parseErrorAtPosition(start, simpleUnaryExpression.end - start, Diagnostics.An_unary_expression_with_the_0_operator_is_not_allowed_in_the_left_hand_side_of_an_exponentiation_expression_Consider_enclosing_the_expression_in_parentheses, tokenToString(unaryOperator));\n                }\n            }\n            return simpleUnaryExpression;\n        }\n\n        /**\n         * Parse ES7 simple-unary expression or higher:\n         *\n         * ES7 UnaryExpression:\n         *      1) UpdateExpression[?yield]\n         *      2) delete UnaryExpression[?yield]\n         *      3) void UnaryExpression[?yield]\n         *      4) typeof UnaryExpression[?yield]\n         *      5) + UnaryExpression[?yield]\n         *      6) - UnaryExpression[?yield]\n         *      7) ~ UnaryExpression[?yield]\n         *      8) ! 
UnaryExpression[?yield]\n         *      9) [+Await] await UnaryExpression[?yield]\n         */\n        function parseSimpleUnaryExpression(): UnaryExpression {\n            switch (token()) {\n                case SyntaxKind.PlusToken:\n                case SyntaxKind.MinusToken:\n                case SyntaxKind.TildeToken:\n                case SyntaxKind.ExclamationToken:\n                    return parsePrefixUnaryExpression();\n                case SyntaxKind.DeleteKeyword:\n                    return parseDeleteExpression();\n                case SyntaxKind.TypeOfKeyword:\n                    return parseTypeOfExpression();\n                case SyntaxKind.VoidKeyword:\n                    return parseVoidExpression();\n                case SyntaxKind.LessThanToken:\n                    // This is modified UnaryExpression grammar in TypeScript\n                    //  UnaryExpression (modified):\n                    //      < type > UnaryExpression\n                    return parseTypeAssertion();\n                case SyntaxKind.AwaitKeyword:\n                    if (isAwaitExpression()) {\n                        return parseAwaitExpression();\n                    }\n                    // falls through\n                default:\n                    return parseUpdateExpression();\n            }\n        }\n\n        /**\n         * Check if the current token can possibly be an ES7 increment expression.\n         *\n         * ES7 UpdateExpression:\n         *      LeftHandSideExpression[?Yield]\n         *      LeftHandSideExpression[?Yield][no LineTerminator here]++\n         *      LeftHandSideExpression[?Yield][no LineTerminator here]--\n         *      ++LeftHandSideExpression[?Yield]\n         *      --LeftHandSideExpression[?Yield]\n         */\n        function isUpdateExpression(): boolean {\n            // This function is called inside parseUnaryExpression to decide\n            // whether to call parseSimpleUnaryExpression or call 
parseUpdateExpression directly\n            switch (token()) {\n                case SyntaxKind.PlusToken:\n                case SyntaxKind.MinusToken:\n                case SyntaxKind.TildeToken:\n                case SyntaxKind.ExclamationToken:\n                case SyntaxKind.DeleteKeyword:\n                case SyntaxKind.TypeOfKeyword:\n                case SyntaxKind.VoidKeyword:\n                case SyntaxKind.AwaitKeyword:\n                    return false;\n                case SyntaxKind.LessThanToken:\n                    // If we are not in JSX context, we are parsing TypeAssertion which is an UnaryExpression\n                    if (sourceFile.languageVariant !== LanguageVariant.JSX) {\n                        return false;\n                    }\n                    // We are in JSX context and the token is part of JSXElement.\n                    // falls through\n                default:\n                    return true;\n            }\n        }\n\n        /**\n         * Parse ES7 UpdateExpression. UpdateExpression is used instead of ES6's PostFixExpression.\n         *\n         * ES7 UpdateExpression[yield]:\n         *      1) LeftHandSideExpression[?yield]\n         *      2) LeftHandSideExpression[?yield] [[no LineTerminator here]]++\n         *      3) LeftHandSideExpression[?yield] [[no LineTerminator here]]--\n         *      4) ++LeftHandSideExpression[?yield]\n         *      5) --LeftHandSideExpression[?yield]\n         * In TypeScript (2), (3) are parsed as PostfixUnaryExpression. 
(4), (5) are parsed as PrefixUnaryExpression\n         */\n        function parseUpdateExpression(): UpdateExpression {\n            if (token() === SyntaxKind.PlusPlusToken || token() === SyntaxKind.MinusMinusToken) {\n                const node = <PrefixUnaryExpression>createNode(SyntaxKind.PrefixUnaryExpression);\n                node.operator = <PrefixUnaryOperator>token();\n                nextToken();\n                node.operand = parseLeftHandSideExpressionOrHigher();\n                return finishNode(node);\n            }\n            else if (sourceFile.languageVariant === LanguageVariant.JSX && token() === SyntaxKind.LessThanToken && lookAhead(nextTokenIsIdentifierOrKeywordOrGreaterThan)) {\n                // JSXElement is part of primaryExpression\n                return parseJsxElementOrSelfClosingElementOrFragment(/*inExpressionContext*/ true);\n            }\n\n            const expression = parseLeftHandSideExpressionOrHigher();\n\n            Debug.assert(isLeftHandSideExpression(expression));\n            if ((token() === SyntaxKind.PlusPlusToken || token() === SyntaxKind.MinusMinusToken) && !scanner.hasPrecedingLineBreak()) {\n                const node = <PostfixUnaryExpression>createNode(SyntaxKind.PostfixUnaryExpression, expression.pos);\n                node.operand = expression;\n                node.operator = <PostfixUnaryOperator>token();\n                nextToken();\n                return finishNode(node);\n            }\n\n            return expression;\n        }\n\n        function parseLeftHandSideExpressionOrHigher(): LeftHandSideExpression {\n            // Original Ecma:\n            // LeftHandSideExpression: See 11.2\n            //      NewExpression\n            //      CallExpression\n            //\n            // Our simplification:\n            //\n            // LeftHandSideExpression: See 11.2\n            //      MemberExpression\n            //      CallExpression\n            //\n            // See comment in 
parseMemberExpressionOrHigher on how we replaced NewExpression with\n            // MemberExpression to make our lives easier.\n            //\n            // To best understand the code below, it's important to see how CallExpression expands\n            // out into its own productions:\n            //\n            // CallExpression:\n            //      MemberExpression Arguments\n            //      CallExpression Arguments\n            //      CallExpression[Expression]\n            //      CallExpression.IdentifierName\n            //      import (AssignmentExpression)\n            //      super Arguments\n            //      super.IdentifierName\n            //\n            // Because of the recursion in these calls, we need to bottom out first. There are three\n            // bottom-out states we can run into: 1) We see 'super', which must start either of\n            // the last two CallExpression productions. 2) We see 'import', which must start an import call.\n            // 3) We have a MemberExpression which either completes the LeftHandSideExpression,\n            // or starts the beginning of the first four CallExpression productions.\n            let expression: MemberExpression;\n            if (token() === SyntaxKind.ImportKeyword && lookAhead(nextTokenIsOpenParenOrLessThan)) {\n                // We don't want to eagerly consume the import keyword as an import call expression, so we look ahead to find \"(\"\n                // For example:\n                //      var foo3 = require(\"subfolder\n                //      import * as foo1 from \"module-from-node\n                // We want this import to be a statement rather than an import call expression\n                sourceFile.flags |= NodeFlags.PossiblyContainsDynamicImport;\n                expression = parseTokenNode<PrimaryExpression>();\n            }\n            else {\n                expression = token() === SyntaxKind.SuperKeyword ? 
parseSuperExpression() : parseMemberExpressionOrHigher();\n            }\n\n            // Now, we *may* be complete.  However, we might have consumed the start of a\n            // CallExpression.  As such, we need to consume the rest of it here to be complete.\n            return parseCallExpressionRest(expression);\n        }\n\n        function parseMemberExpressionOrHigher(): MemberExpression {\n            // Note: to make our lives simpler, we decompose the NewExpression productions and\n            // place ObjectCreationExpression and FunctionExpression into PrimaryExpression.\n            // like so:\n            //\n            //   PrimaryExpression : See 11.1\n            //      this\n            //      Identifier\n            //      Literal\n            //      ArrayLiteral\n            //      ObjectLiteral\n            //      (Expression)\n            //      FunctionExpression\n            //      new MemberExpression Arguments?\n            //\n            //   MemberExpression : See 11.2\n            //      PrimaryExpression\n            //      MemberExpression[Expression]\n            //      MemberExpression.IdentifierName\n            //\n            //   CallExpression : See 11.2\n            //      MemberExpression\n            //      CallExpression Arguments\n            //      CallExpression[Expression]\n            //      CallExpression.IdentifierName\n            //\n            // Technically this is ambiguous.  i.e. CallExpression defines:\n            //\n            //   CallExpression:\n            //      CallExpression Arguments\n            //\n            // If you see: \"new Foo()\"\n            //\n            // Then that could be treated as a single ObjectCreationExpression, or it could be\n            // treated as the invocation of \"new Foo\".  
We disambiguate that in code (to match\n            // the original grammar) by making sure that if we see an ObjectCreationExpression\n            // we always consume arguments if they are there. So we treat \"new Foo()\" as an\n            // object creation only, and not at all as an invocation.  Another way to think\n            // about this is that for every \"new\" that we see, we will consume an argument list if\n            // it is there as part of the *associated* object creation node.  Any additional\n            // argument lists we see, will become invocation expressions.\n            //\n            // Because there are no other places in the grammar now that refer to FunctionExpression\n            // or ObjectCreationExpression, it is safe to push down into the PrimaryExpression\n            // production.\n            //\n            // Because CallExpression and MemberExpression are left recursive, we need to bottom out\n            // of the recursion immediately.  So we parse out a primary expression to start with.\n            const expression = parsePrimaryExpression();\n            return parseMemberExpressionRest(expression);\n        }\n\n        function parseSuperExpression(): MemberExpression {\n            const expression = parseTokenNode<PrimaryExpression>();\n            if (token() === SyntaxKind.OpenParenToken || token() === SyntaxKind.DotToken || token() === SyntaxKind.OpenBracketToken) {\n                return expression;\n            }\n\n            // If we have seen \"super\" it must be followed by '(' or '.'.\n            // If it wasn't then just try to parse out a '.' 
and report an error.\n            const node = <PropertyAccessExpression>createNode(SyntaxKind.PropertyAccessExpression, expression.pos);\n            node.expression = expression;\n            parseExpectedToken(SyntaxKind.DotToken, Diagnostics.super_must_be_followed_by_an_argument_list_or_member_access);\n            node.name = parseRightSideOfDot(/*allowIdentifierNames*/ true);\n            return finishNode(node);\n        }\n\n        function tagNamesAreEquivalent(lhs: JsxTagNameExpression, rhs: JsxTagNameExpression): boolean {\n            if (lhs.kind !== rhs.kind) {\n                return false;\n            }\n\n            if (lhs.kind === SyntaxKind.Identifier) {\n                return (<Identifier>lhs).escapedText === (<Identifier>rhs).escapedText;\n            }\n\n            if (lhs.kind === SyntaxKind.ThisKeyword) {\n                return true;\n            }\n\n            // If we reach this statement, both sides must be PropertyAccessExpressions. Because a tag name in a JSX element can only\n            // take the form of a JsxTagNameExpression (an identifier, a \"this\" expression, or another PropertyAccessExpression),\n            // it is safe to cast the expression property as such. 
See parseJsxElementName for how we parse tag name in Jsx element\n            return (<PropertyAccessExpression>lhs).name.escapedText === (<PropertyAccessExpression>rhs).name.escapedText &&\n                tagNamesAreEquivalent((<PropertyAccessExpression>lhs).expression as JsxTagNameExpression, (<PropertyAccessExpression>rhs).expression as JsxTagNameExpression);\n        }\n\n\n        function parseJsxElementOrSelfClosingElementOrFragment(inExpressionContext: boolean): JsxElement | JsxSelfClosingElement | JsxFragment {\n            const opening = parseJsxOpeningOrSelfClosingElementOrOpeningFragment(inExpressionContext);\n            let result: JsxElement | JsxSelfClosingElement | JsxFragment;\n            if (opening.kind === SyntaxKind.JsxOpeningElement) {\n                const node = <JsxElement>createNode(SyntaxKind.JsxElement, opening.pos);\n                node.openingElement = opening;\n\n                node.children = parseJsxChildren(node.openingElement);\n                node.closingElement = parseJsxClosingElement(inExpressionContext);\n\n                if (!tagNamesAreEquivalent(node.openingElement.tagName, node.closingElement.tagName)) {\n                    parseErrorAtPosition(node.closingElement.pos, node.closingElement.end - node.closingElement.pos, Diagnostics.Expected_corresponding_JSX_closing_tag_for_0, getTextOfNodeFromSourceText(sourceText, node.openingElement.tagName));\n                }\n\n                result = finishNode(node);\n            }\n            else if (opening.kind === SyntaxKind.JsxOpeningFragment) {\n                const node = <JsxFragment>createNode(SyntaxKind.JsxFragment, opening.pos);\n                node.openingFragment = opening;\n                node.children = parseJsxChildren(node.openingFragment);\n                node.closingFragment = parseJsxClosingFragment(inExpressionContext);\n\n                result = finishNode(node);\n            }\n            else {\n                Debug.assert(opening.kind 
=== SyntaxKind.JsxSelfClosingElement);\n                // Nothing else to do for self-closing elements\n                result = <JsxSelfClosingElement>opening;\n            }\n\n            // If the user writes the invalid code '<div></div><div></div>' in an expression context (i.e. not wrapped in\n            // an enclosing tag), we'll naively try to parse   ^ this as a 'less than' operator and the remainder of the tag\n            // as garbage, which will cause the formatter to badly mangle the JSX. Perform a speculative parse of a JSX\n            // element if we see a < token so that we can wrap it in a synthetic binary expression so the formatter\n            // does less damage and we can report a better error.\n            // Since JSX elements are invalid < operands anyway, this lookahead parse will only occur in error scenarios\n            // of one sort or another.\n            if (inExpressionContext && token() === SyntaxKind.LessThanToken) {\n                const invalidElement = tryParse(() => parseJsxElementOrSelfClosingElementOrFragment(/*inExpressionContext*/ true));\n                if (invalidElement) {\n                    parseErrorAtCurrentToken(Diagnostics.JSX_expressions_must_have_one_parent_element);\n                    const badNode = <BinaryExpression>createNode(SyntaxKind.BinaryExpression, result.pos);\n                    badNode.end = invalidElement.end;\n                    badNode.left = result;\n                    badNode.right = invalidElement;\n                    badNode.operatorToken = <BinaryOperatorToken>createMissingNode(SyntaxKind.CommaToken, /*reportAtCurrentPosition*/ false, /*diagnosticMessage*/ undefined);\n                    badNode.operatorToken.pos = badNode.operatorToken.end = badNode.right.pos;\n                    return <JsxElement><Node>badNode;\n                }\n            }\n\n            return result;\n        }\n\n        function parseJsxText(): JsxText {\n            const node = 
<JsxText>createNode(SyntaxKind.JsxText);\n            node.containsOnlyWhiteSpaces = currentToken === SyntaxKind.JsxTextAllWhiteSpaces;\n            currentToken = scanner.scanJsxToken();\n            return finishNode(node);\n        }\n\n        function parseJsxChild(): JsxChild {\n            switch (token()) {\n                case SyntaxKind.JsxText:\n                case SyntaxKind.JsxTextAllWhiteSpaces:\n                    return parseJsxText();\n                case SyntaxKind.OpenBraceToken:\n                    return parseJsxExpression(/*inExpressionContext*/ false);\n                case SyntaxKind.LessThanToken:\n                    return parseJsxElementOrSelfClosingElementOrFragment(/*inExpressionContext*/ false);\n            }\n            Debug.fail(\"Unknown JSX child kind \" + token());\n        }\n\n        function parseJsxChildren(openingTag: JsxOpeningElement | JsxOpeningFragment): NodeArray<JsxChild> {\n            const list = [];\n            const listPos = getNodePos();\n            const saveParsingContext = parsingContext;\n            parsingContext |= 1 << ParsingContext.JsxChildren;\n\n            while (true) {\n                currentToken = scanner.reScanJsxToken();\n                if (token() === SyntaxKind.LessThanSlashToken) {\n                    // Closing tag\n                    break;\n                }\n                else if (token() === SyntaxKind.EndOfFileToken) {\n                    // If we hit EOF, issue the error at the tag that lacks the closing element\n                    // rather than at the end of the file (which is useless)\n                    if (isJsxOpeningFragment(openingTag)) {\n                        parseErrorAtPosition(openingTag.pos, openingTag.end - openingTag.pos, Diagnostics.JSX_fragment_has_no_corresponding_closing_tag);\n                    }\n                    else {\n                        const openingTagName = openingTag.tagName;\n                        
parseErrorAtPosition(openingTagName.pos, openingTagName.end - openingTagName.pos, Diagnostics.JSX_element_0_has_no_corresponding_closing_tag, getTextOfNodeFromSourceText(sourceText, openingTagName));\n                    }\n                    break;\n                }\n                else if (token() === SyntaxKind.ConflictMarkerTrivia) {\n                    break;\n                }\n                const child = parseJsxChild();\n                if (child) {\n                    list.push(child);\n                }\n            }\n\n            parsingContext = saveParsingContext;\n\n            return createNodeArray(list, listPos);\n        }\n\n        function parseJsxAttributes(): JsxAttributes {\n            const jsxAttributes = <JsxAttributes>createNode(SyntaxKind.JsxAttributes);\n            jsxAttributes.properties = parseList(ParsingContext.JsxAttributes, parseJsxAttribute);\n            return finishNode(jsxAttributes);\n        }\n\n        function parseJsxOpeningOrSelfClosingElementOrOpeningFragment(inExpressionContext: boolean): JsxOpeningElement | JsxSelfClosingElement | JsxOpeningFragment {\n            const fullStart = scanner.getStartPos();\n\n            parseExpected(SyntaxKind.LessThanToken);\n\n            if (token() === SyntaxKind.GreaterThanToken) {\n                parseExpected(SyntaxKind.GreaterThanToken);\n                const node: JsxOpeningFragment = <JsxOpeningFragment>createNode(SyntaxKind.JsxOpeningFragment, fullStart);\n                return finishNode(node);\n            }\n\n            const tagName = parseJsxElementName();\n            const attributes = parseJsxAttributes();\n\n            let node: JsxOpeningLikeElement;\n\n            if (token() === SyntaxKind.GreaterThanToken) {\n                // Closing tag, so scan the immediately-following text with the JSX scanning instead\n                // of regular scanning to avoid treating illegal characters (e.g. 
'#') as immediate\n                // scanning errors\n                node = <JsxOpeningElement>createNode(SyntaxKind.JsxOpeningElement, fullStart);\n                scanJsxText();\n            }\n            else {\n                parseExpected(SyntaxKind.SlashToken);\n                if (inExpressionContext) {\n                    parseExpected(SyntaxKind.GreaterThanToken);\n                }\n                else {\n                    parseExpected(SyntaxKind.GreaterThanToken, /*diagnostic*/ undefined, /*shouldAdvance*/ false);\n                    scanJsxText();\n                }\n                node = <JsxSelfClosingElement>createNode(SyntaxKind.JsxSelfClosingElement, fullStart);\n            }\n\n            node.tagName = tagName;\n            node.attributes = attributes;\n\n            return finishNode(node);\n        }\n\n        function parseJsxElementName(): JsxTagNameExpression {\n            scanJsxIdentifier();\n            // A JsxElement name can take the form of\n            //      propertyAccessExpression\n            //      primaryExpression in the form of an identifier or the \"this\" keyword\n            // We can't simply use parseLeftHandSideExpressionOrHigher because then we would start considering 'class', 'function', etc. as keywords\n            // We only want to consider \"this\" as a primaryExpression\n            let expression: JsxTagNameExpression = token() === SyntaxKind.ThisKeyword ?\n                parseTokenNode<PrimaryExpression>() : parseIdentifierName();\n            while (parseOptional(SyntaxKind.DotToken)) {\n                const propertyAccess: PropertyAccessExpression = <PropertyAccessExpression>createNode(SyntaxKind.PropertyAccessExpression, expression.pos);\n                propertyAccess.expression = expression;\n                propertyAccess.name = parseRightSideOfDot(/*allowIdentifierNames*/ true);\n                expression = finishNode(propertyAccess);\n            }\n            return expression;\n        
}\n\n        function parseJsxExpression(inExpressionContext: boolean): JsxExpression {\n            const node = <JsxExpression>createNode(SyntaxKind.JsxExpression);\n\n            parseExpected(SyntaxKind.OpenBraceToken);\n            if (token() !== SyntaxKind.CloseBraceToken) {\n                node.dotDotDotToken = parseOptionalToken(SyntaxKind.DotDotDotToken);\n                node.expression = parseAssignmentExpressionOrHigher();\n            }\n            if (inExpressionContext) {\n                parseExpected(SyntaxKind.CloseBraceToken);\n            }\n            else {\n                parseExpected(SyntaxKind.CloseBraceToken, /*message*/ undefined, /*shouldAdvance*/ false);\n                scanJsxText();\n            }\n\n            return finishNode(node);\n        }\n\n        function parseJsxAttribute(): JsxAttribute | JsxSpreadAttribute {\n            if (token() === SyntaxKind.OpenBraceToken) {\n                return parseJsxSpreadAttribute();\n            }\n\n            scanJsxIdentifier();\n            const node = <JsxAttribute>createNode(SyntaxKind.JsxAttribute);\n            node.name = parseIdentifierName();\n            if (token() === SyntaxKind.EqualsToken) {\n                switch (scanJsxAttributeValue()) {\n                    case SyntaxKind.StringLiteral:\n                        node.initializer = <StringLiteral>parseLiteralNode();\n                        break;\n                    default:\n                        node.initializer = parseJsxExpression(/*inExpressionContext*/ true);\n                        break;\n                }\n            }\n            return finishNode(node);\n        }\n\n        function parseJsxSpreadAttribute(): JsxSpreadAttribute {\n            const node = <JsxSpreadAttribute>createNode(SyntaxKind.JsxSpreadAttribute);\n            parseExpected(SyntaxKind.OpenBraceToken);\n            parseExpected(SyntaxKind.DotDotDotToken);\n            node.expression = parseExpression();\n            
parseExpected(SyntaxKind.CloseBraceToken);\n            return finishNode(node);\n        }\n\n        function parseJsxClosingElement(inExpressionContext: boolean): JsxClosingElement {\n            const node = <JsxClosingElement>createNode(SyntaxKind.JsxClosingElement);\n            parseExpected(SyntaxKind.LessThanSlashToken);\n            node.tagName = parseJsxElementName();\n            if (inExpressionContext) {\n                parseExpected(SyntaxKind.GreaterThanToken);\n            }\n            else {\n                parseExpected(SyntaxKind.GreaterThanToken, /*diagnostic*/ undefined, /*shouldAdvance*/ false);\n                scanJsxText();\n            }\n            return finishNode(node);\n        }\n\n        function parseJsxClosingFragment(inExpressionContext: boolean): JsxClosingFragment {\n            const node = <JsxClosingFragment>createNode(SyntaxKind.JsxClosingFragment);\n            parseExpected(SyntaxKind.LessThanSlashToken);\n            if (tokenIsIdentifierOrKeyword(token())) {\n                const unexpectedTagName = parseJsxElementName();\n                parseErrorAtPosition(unexpectedTagName.pos, unexpectedTagName.end - unexpectedTagName.pos, Diagnostics.Expected_corresponding_closing_tag_for_JSX_fragment);\n            }\n            if (inExpressionContext) {\n                parseExpected(SyntaxKind.GreaterThanToken);\n            }\n            else {\n                parseExpected(SyntaxKind.GreaterThanToken, /*diagnostic*/ undefined, /*shouldAdvance*/ false);\n                scanJsxText();\n            }\n            return finishNode(node);\n        }\n\n        function parseTypeAssertion(): TypeAssertion {\n            const node = <TypeAssertion>createNode(SyntaxKind.TypeAssertionExpression);\n            parseExpected(SyntaxKind.LessThanToken);\n            node.type = parseType();\n            parseExpected(SyntaxKind.GreaterThanToken);\n            node.expression = parseSimpleUnaryExpression();\n            
return finishNode(node);\n        }\n\n        function parseMemberExpressionRest(expression: LeftHandSideExpression): MemberExpression {\n            while (true) {\n                const dotToken = parseOptionalToken(SyntaxKind.DotToken);\n                if (dotToken) {\n                    const propertyAccess = <PropertyAccessExpression>createNode(SyntaxKind.PropertyAccessExpression, expression.pos);\n                    propertyAccess.expression = expression;\n                    propertyAccess.name = parseRightSideOfDot(/*allowIdentifierNames*/ true);\n                    expression = finishNode(propertyAccess);\n                    continue;\n                }\n\n                if (token() === SyntaxKind.ExclamationToken && !scanner.hasPrecedingLineBreak()) {\n                    nextToken();\n                    const nonNullExpression = <NonNullExpression>createNode(SyntaxKind.NonNullExpression, expression.pos);\n                    nonNullExpression.expression = expression;\n                    expression = finishNode(nonNullExpression);\n                    continue;\n                }\n\n                // when in the [Decorator] context, we do not parse ElementAccess as it could be part of a ComputedPropertyName\n                if (!inDecoratorContext() && parseOptional(SyntaxKind.OpenBracketToken)) {\n                    const indexedAccess = <ElementAccessExpression>createNode(SyntaxKind.ElementAccessExpression, expression.pos);\n                    indexedAccess.expression = expression;\n\n                    // It's not uncommon for a user to write: \"new Type[]\".\n                    // Check for that common pattern and report a better error message.\n                    if (token() !== SyntaxKind.CloseBracketToken) {\n                        indexedAccess.argumentExpression = allowInAnd(parseExpression);\n                        if (indexedAccess.argumentExpression.kind === SyntaxKind.StringLiteral || indexedAccess.argumentExpression.kind === 
SyntaxKind.NumericLiteral) {\n                            const literal = <LiteralExpression>indexedAccess.argumentExpression;\n                            literal.text = internIdentifier(literal.text);\n                        }\n                    }\n\n                    parseExpected(SyntaxKind.CloseBracketToken);\n                    expression = finishNode(indexedAccess);\n                    continue;\n                }\n\n                if (token() === SyntaxKind.NoSubstitutionTemplateLiteral || token() === SyntaxKind.TemplateHead) {\n                    const tagExpression = <TaggedTemplateExpression>createNode(SyntaxKind.TaggedTemplateExpression, expression.pos);\n                    tagExpression.tag = expression;\n                    tagExpression.template = token() === SyntaxKind.NoSubstitutionTemplateLiteral\n                        ? <NoSubstitutionTemplateLiteral>parseLiteralNode()\n                        : parseTemplateExpression();\n                    expression = finishNode(tagExpression);\n                    continue;\n                }\n\n                return <MemberExpression>expression;\n            }\n        }\n\n        function parseCallExpressionRest(expression: LeftHandSideExpression): LeftHandSideExpression {\n            while (true) {\n                expression = parseMemberExpressionRest(expression);\n                if (token() === SyntaxKind.LessThanToken) {\n                    // See if this is the start of a generic invocation.  If so, consume it and\n                    // keep checking for postfix expressions.  Otherwise, it's just a '<' that's\n                    // part of an arithmetic expression.  
Break out so we consume it higher in the\n                    // stack.\n                    const typeArguments = tryParse(parseTypeArgumentsInExpression);\n                    if (!typeArguments) {\n                        return expression;\n                    }\n\n                    const callExpr = <CallExpression>createNode(SyntaxKind.CallExpression, expression.pos);\n                    callExpr.expression = expression;\n                    callExpr.typeArguments = typeArguments;\n                    callExpr.arguments = parseArgumentList();\n                    expression = finishNode(callExpr);\n                    continue;\n                }\n                else if (token() === SyntaxKind.OpenParenToken) {\n                    const callExpr = <CallExpression>createNode(SyntaxKind.CallExpression, expression.pos);\n                    callExpr.expression = expression;\n                    callExpr.arguments = parseArgumentList();\n                    expression = finishNode(callExpr);\n                    continue;\n                }\n\n                return expression;\n            }\n        }\n\n        function parseArgumentList() {\n            parseExpected(SyntaxKind.OpenParenToken);\n            const result = parseDelimitedList(ParsingContext.ArgumentExpressions, parseArgumentExpression);\n            parseExpected(SyntaxKind.CloseParenToken);\n            return result;\n        }\n\n        function parseTypeArgumentsInExpression() {\n            if (!parseOptional(SyntaxKind.LessThanToken)) {\n                return undefined;\n            }\n\n            const typeArguments = parseDelimitedList(ParsingContext.TypeArguments, parseType);\n            if (!parseExpected(SyntaxKind.GreaterThanToken)) {\n                // If it doesn't have the closing `>` then it's definitely not a type argument list.\n                return undefined;\n            }\n\n            // If we have a '<', then only parse this as an argument list if the type 
arguments\n            // are complete and we have an open paren.  If we don't, rewind and return nothing.\n            return typeArguments && canFollowTypeArgumentsInExpression()\n                ? typeArguments\n                : undefined;\n        }\n\n        function canFollowTypeArgumentsInExpression(): boolean {\n            switch (token()) {\n                case SyntaxKind.OpenParenToken:                 // foo<x>(\n                // this case is the only case where this token can legally follow a type argument\n                // list.  So we definitely want to treat this as a type arg list.\n\n                case SyntaxKind.DotToken:                       // foo<x>.\n                case SyntaxKind.CloseParenToken:                // foo<x>)\n                case SyntaxKind.CloseBracketToken:              // foo<x>]\n                case SyntaxKind.ColonToken:                     // foo<x>:\n                case SyntaxKind.SemicolonToken:                 // foo<x>;\n                case SyntaxKind.QuestionToken:                  // foo<x>?\n                case SyntaxKind.EqualsEqualsToken:              // foo<x> ==\n                case SyntaxKind.EqualsEqualsEqualsToken:        // foo<x> ===\n                case SyntaxKind.ExclamationEqualsToken:         // foo<x> !=\n                case SyntaxKind.ExclamationEqualsEqualsToken:   // foo<x> !==\n                case SyntaxKind.AmpersandAmpersandToken:        // foo<x> &&\n                case SyntaxKind.BarBarToken:                    // foo<x> ||\n                case SyntaxKind.CaretToken:                     // foo<x> ^\n                case SyntaxKind.AmpersandToken:                 // foo<x> &\n                case SyntaxKind.BarToken:                       // foo<x> |\n                case SyntaxKind.CloseBraceToken:                // foo<x> }\n                case SyntaxKind.EndOfFileToken:                 // foo<x>\n                    // these cases can't legally follow a type arg list.  
However, they're not legal\n                    // expressions either.  The user is probably in the middle of a generic type. So\n                    // treat it as such.\n                    return true;\n\n                case SyntaxKind.CommaToken:                     // foo<x>,\n                case SyntaxKind.OpenBraceToken:                 // foo<x> {\n                // We don't want to treat these as type arguments.  Otherwise we'll parse this\n                // as an invocation expression.  Instead, we want to parse out the expression\n                // in isolation from the type arguments.\n\n                default:\n                    // Treat anything else as an expression.\n                    return false;\n            }\n        }\n\n        function parsePrimaryExpression(): PrimaryExpression {\n            switch (token()) {\n                case SyntaxKind.NumericLiteral:\n                case SyntaxKind.StringLiteral:\n                case SyntaxKind.NoSubstitutionTemplateLiteral:\n                    return parseLiteralNode();\n                case SyntaxKind.ThisKeyword:\n                case SyntaxKind.SuperKeyword:\n                case SyntaxKind.NullKeyword:\n                case SyntaxKind.TrueKeyword:\n                case SyntaxKind.FalseKeyword:\n                    return parseTokenNode<PrimaryExpression>();\n                case SyntaxKind.OpenParenToken:\n                    return parseParenthesizedExpression();\n                case SyntaxKind.OpenBracketToken:\n                    return parseArrayLiteralExpression();\n                case SyntaxKind.OpenBraceToken:\n                    return parseObjectLiteralExpression();\n                case SyntaxKind.AsyncKeyword:\n                    // Async arrow functions are parsed earlier in parseAssignmentExpressionOrHigher.\n                    // If we encounter `async [no LineTerminator here] function` then this is an async\n                    // function; otherwise, it's an 
identifier.\n                    if (!lookAhead(nextTokenIsFunctionKeywordOnSameLine)) {\n                        break;\n                    }\n\n                    return parseFunctionExpression();\n                case SyntaxKind.ClassKeyword:\n                    return parseClassExpression();\n                case SyntaxKind.FunctionKeyword:\n                    return parseFunctionExpression();\n                case SyntaxKind.NewKeyword:\n                    return parseNewExpression();\n                case SyntaxKind.SlashToken:\n                case SyntaxKind.SlashEqualsToken:\n                    if (reScanSlashToken() === SyntaxKind.RegularExpressionLiteral) {\n                        return parseLiteralNode();\n                    }\n                    break;\n                case SyntaxKind.TemplateHead:\n                    return parseTemplateExpression();\n            }\n\n            return parseIdentifier(Diagnostics.Expression_expected);\n        }\n\n        function parseParenthesizedExpression(): ParenthesizedExpression {\n            const node = <ParenthesizedExpression>createNodeWithJSDoc(SyntaxKind.ParenthesizedExpression);\n            parseExpected(SyntaxKind.OpenParenToken);\n            node.expression = allowInAnd(parseExpression);\n            parseExpected(SyntaxKind.CloseParenToken);\n            return finishNode(node);\n        }\n\n        function parseSpreadElement(): Expression {\n            const node = <SpreadElement>createNode(SyntaxKind.SpreadElement);\n            parseExpected(SyntaxKind.DotDotDotToken);\n            node.expression = parseAssignmentExpressionOrHigher();\n            return finishNode(node);\n        }\n\n        function parseArgumentOrArrayLiteralElement(): Expression {\n            return token() === SyntaxKind.DotDotDotToken ? parseSpreadElement() :\n                token() === SyntaxKind.CommaToken ? 
<Expression>createNode(SyntaxKind.OmittedExpression) :\n                    parseAssignmentExpressionOrHigher();\n        }\n\n        function parseArgumentExpression(): Expression {\n            return doOutsideOfContext(disallowInAndDecoratorContext, parseArgumentOrArrayLiteralElement);\n        }\n\n        function parseArrayLiteralExpression(): ArrayLiteralExpression {\n            const node = <ArrayLiteralExpression>createNode(SyntaxKind.ArrayLiteralExpression);\n            parseExpected(SyntaxKind.OpenBracketToken);\n            if (scanner.hasPrecedingLineBreak()) {\n                node.multiLine = true;\n            }\n            node.elements = parseDelimitedList(ParsingContext.ArrayLiteralMembers, parseArgumentOrArrayLiteralElement);\n            parseExpected(SyntaxKind.CloseBracketToken);\n            return finishNode(node);\n        }\n\n        function parseObjectLiteralElement(): ObjectLiteralElementLike {\n            const node = <ObjectLiteralElementLike>createNodeWithJSDoc(SyntaxKind.Unknown);\n\n            if (parseOptionalToken(SyntaxKind.DotDotDotToken)) {\n                node.kind = SyntaxKind.SpreadAssignment;\n                (<SpreadAssignment>node).expression = parseAssignmentExpressionOrHigher();\n                return finishNode(node);\n            }\n\n            node.decorators = parseDecorators();\n            node.modifiers = parseModifiers();\n\n            if (parseContextualModifier(SyntaxKind.GetKeyword)) {\n                return parseAccessorDeclaration(<AccessorDeclaration>node, SyntaxKind.GetAccessor);\n            }\n            if (parseContextualModifier(SyntaxKind.SetKeyword)) {\n                return parseAccessorDeclaration(<AccessorDeclaration>node, SyntaxKind.SetAccessor);\n            }\n\n            const asteriskToken = parseOptionalToken(SyntaxKind.AsteriskToken);\n            const tokenIsIdentifier = isIdentifier();\n            node.name = parsePropertyName();\n            // Disallowing of 
optional property assignments happens in the grammar checker.\n            (<MethodDeclaration>node).questionToken = parseOptionalToken(SyntaxKind.QuestionToken);\n            if (asteriskToken || token() === SyntaxKind.OpenParenToken || token() === SyntaxKind.LessThanToken) {\n                return parseMethodDeclaration(<MethodDeclaration>node, asteriskToken);\n            }\n\n            // check if it is a shorthand property assignment or a normal property assignment\n            // NOTE: if the token is EqualsToken, it is interpreted as a CoverInitializedName production\n            // CoverInitializedName[Yield] :\n            //     IdentifierReference[?Yield] Initializer[In, ?Yield]\n            // this is necessary because ObjectLiteral productions are also used to cover grammar for ObjectAssignmentPattern\n            const isShorthandPropertyAssignment =\n                tokenIsIdentifier && (token() === SyntaxKind.CommaToken || token() === SyntaxKind.CloseBraceToken || token() === SyntaxKind.EqualsToken);\n            if (isShorthandPropertyAssignment) {\n                node.kind = SyntaxKind.ShorthandPropertyAssignment;\n                const equalsToken = parseOptionalToken(SyntaxKind.EqualsToken);\n                if (equalsToken) {\n                    (<ShorthandPropertyAssignment>node).equalsToken = equalsToken;\n                    (<ShorthandPropertyAssignment>node).objectAssignmentInitializer = allowInAnd(parseAssignmentExpressionOrHigher);\n                }\n            }\n            else {\n                node.kind = SyntaxKind.PropertyAssignment;\n                parseExpected(SyntaxKind.ColonToken);\n                (<PropertyAssignment>node).initializer = allowInAnd(parseAssignmentExpressionOrHigher);\n            }\n            return finishNode(node);\n        }\n\n        function parseObjectLiteralExpression(): ObjectLiteralExpression {\n            const node = <ObjectLiteralExpression>createNode(SyntaxKind.ObjectLiteralExpression);\n     
       parseExpected(SyntaxKind.OpenBraceToken);\n            if (scanner.hasPrecedingLineBreak()) {\n                node.multiLine = true;\n            }\n\n            node.properties = parseDelimitedList(ParsingContext.ObjectLiteralMembers, parseObjectLiteralElement, /*considerSemicolonAsDelimiter*/ true);\n            parseExpected(SyntaxKind.CloseBraceToken);\n            return finishNode(node);\n        }\n\n        function parseFunctionExpression(): FunctionExpression {\n            // GeneratorExpression:\n            //      function* BindingIdentifier [Yield][opt](FormalParameters[Yield]){ GeneratorBody }\n            //\n            // FunctionExpression:\n            //      function BindingIdentifier[opt](FormalParameters){ FunctionBody }\n            const saveDecoratorContext = inDecoratorContext();\n            if (saveDecoratorContext) {\n                setDecoratorContext(/*val*/ false);\n            }\n\n            const node = <FunctionExpression>createNodeWithJSDoc(SyntaxKind.FunctionExpression);\n            node.modifiers = parseModifiers();\n            parseExpected(SyntaxKind.FunctionKeyword);\n            node.asteriskToken = parseOptionalToken(SyntaxKind.AsteriskToken);\n\n            const isGenerator = node.asteriskToken ? SignatureFlags.Yield : SignatureFlags.None;\n            const isAsync = hasModifier(node, ModifierFlags.Async) ? SignatureFlags.Await : SignatureFlags.None;\n            node.name =\n                isGenerator && isAsync ? doInYieldAndAwaitContext(parseOptionalIdentifier) :\n                    isGenerator ? doInYieldContext(parseOptionalIdentifier) :\n                        isAsync ? 
doInAwaitContext(parseOptionalIdentifier) :\n                            parseOptionalIdentifier();\n\n            fillSignature(SyntaxKind.ColonToken, isGenerator | isAsync, node);\n            node.body = parseFunctionBlock(isGenerator | isAsync);\n\n            if (saveDecoratorContext) {\n                setDecoratorContext(/*val*/ true);\n            }\n\n            return finishNode(node);\n        }\n\n        function parseOptionalIdentifier(): Identifier | undefined {\n            return isIdentifier() ? parseIdentifier() : undefined;\n        }\n\n        function parseNewExpression(): NewExpression | MetaProperty {\n            const fullStart = scanner.getStartPos();\n            parseExpected(SyntaxKind.NewKeyword);\n            if (parseOptional(SyntaxKind.DotToken)) {\n                const node = <MetaProperty>createNode(SyntaxKind.MetaProperty, fullStart);\n                node.keywordToken = SyntaxKind.NewKeyword;\n                node.name = parseIdentifierName();\n                return finishNode(node);\n            }\n\n            const node = <NewExpression>createNode(SyntaxKind.NewExpression, fullStart);\n            node.expression = parseMemberExpressionOrHigher();\n            node.typeArguments = tryParse(parseTypeArgumentsInExpression);\n            if (node.typeArguments || token() === SyntaxKind.OpenParenToken) {\n                node.arguments = parseArgumentList();\n            }\n            return finishNode(node);\n        }\n\n        // STATEMENTS\n        function parseBlock(ignoreMissingOpenBrace: boolean, diagnosticMessage?: DiagnosticMessage): Block {\n            const node = <Block>createNode(SyntaxKind.Block);\n            if (parseExpected(SyntaxKind.OpenBraceToken, diagnosticMessage) || ignoreMissingOpenBrace) {\n                if (scanner.hasPrecedingLineBreak()) {\n                    node.multiLine = true;\n                }\n\n                node.statements = parseList(ParsingContext.BlockStatements, 
parseStatement);\n                parseExpected(SyntaxKind.CloseBraceToken);\n            }\n            else {\n                node.statements = createMissingList<Statement>();\n            }\n            return finishNode(node);\n        }\n\n        function parseFunctionBlock(flags: SignatureFlags, diagnosticMessage?: DiagnosticMessage): Block {\n            const savedYieldContext = inYieldContext();\n            setYieldContext(!!(flags & SignatureFlags.Yield));\n\n            const savedAwaitContext = inAwaitContext();\n            setAwaitContext(!!(flags & SignatureFlags.Await));\n\n            // We may be in a [Decorator] context when parsing a function expression or\n            // arrow function. The body of the function is not in [Decorator] context.\n            const saveDecoratorContext = inDecoratorContext();\n            if (saveDecoratorContext) {\n                setDecoratorContext(/*val*/ false);\n            }\n\n            const block = parseBlock(!!(flags & SignatureFlags.IgnoreMissingOpenBrace), diagnosticMessage);\n\n            if (saveDecoratorContext) {\n                setDecoratorContext(/*val*/ true);\n            }\n\n            setYieldContext(savedYieldContext);\n            setAwaitContext(savedAwaitContext);\n\n            return block;\n        }\n\n        function parseEmptyStatement(): Statement {\n            const node = <Statement>createNode(SyntaxKind.EmptyStatement);\n            parseExpected(SyntaxKind.SemicolonToken);\n            return finishNode(node);\n        }\n\n        function parseIfStatement(): IfStatement {\n            const node = <IfStatement>createNode(SyntaxKind.IfStatement);\n            parseExpected(SyntaxKind.IfKeyword);\n            parseExpected(SyntaxKind.OpenParenToken);\n            node.expression = allowInAnd(parseExpression);\n            parseExpected(SyntaxKind.CloseParenToken);\n            node.thenStatement = parseStatement();\n            node.elseStatement = 
parseOptional(SyntaxKind.ElseKeyword) ? parseStatement() : undefined;\n            return finishNode(node);\n        }\n\n        function parseDoStatement(): DoStatement {\n            const node = <DoStatement>createNode(SyntaxKind.DoStatement);\n            parseExpected(SyntaxKind.DoKeyword);\n            node.statement = parseStatement();\n            parseExpected(SyntaxKind.WhileKeyword);\n            parseExpected(SyntaxKind.OpenParenToken);\n            node.expression = allowInAnd(parseExpression);\n            parseExpected(SyntaxKind.CloseParenToken);\n\n            // From: https://mail.mozilla.org/pipermail/es-discuss/2011-August/016188.html\n            // 157 min --- All allen at wirfs-brock.com CONF --- \"do{;}while(false)false\" prohibited in\n            // spec but allowed in consensus reality. Approved -- this is the de-facto standard whereby\n            //  do;while(0)x will have a semicolon inserted before x.\n            parseOptional(SyntaxKind.SemicolonToken);\n            return finishNode(node);\n        }\n\n        function parseWhileStatement(): WhileStatement {\n            const node = <WhileStatement>createNode(SyntaxKind.WhileStatement);\n            parseExpected(SyntaxKind.WhileKeyword);\n            parseExpected(SyntaxKind.OpenParenToken);\n            node.expression = allowInAnd(parseExpression);\n            parseExpected(SyntaxKind.CloseParenToken);\n            node.statement = parseStatement();\n            return finishNode(node);\n        }\n\n        function parseForOrForInOrForOfStatement(): Statement {\n            const pos = getNodePos();\n            parseExpected(SyntaxKind.ForKeyword);\n            const awaitToken = parseOptionalToken(SyntaxKind.AwaitKeyword);\n            parseExpected(SyntaxKind.OpenParenToken);\n\n            let initializer: VariableDeclarationList | Expression = undefined;\n            if (token() !== SyntaxKind.SemicolonToken) {\n                if (token() === SyntaxKind.VarKeyword || 
token() === SyntaxKind.LetKeyword || token() === SyntaxKind.ConstKeyword) {\n                    initializer = parseVariableDeclarationList(/*inForStatementInitializer*/ true);\n                }\n                else {\n                    initializer = disallowInAnd(parseExpression);\n                }\n            }\n            let forOrForInOrForOfStatement: IterationStatement;\n            if (awaitToken ? parseExpected(SyntaxKind.OfKeyword) : parseOptional(SyntaxKind.OfKeyword)) {\n                const forOfStatement = <ForOfStatement>createNode(SyntaxKind.ForOfStatement, pos);\n                forOfStatement.awaitModifier = awaitToken;\n                forOfStatement.initializer = initializer;\n                forOfStatement.expression = allowInAnd(parseAssignmentExpressionOrHigher);\n                parseExpected(SyntaxKind.CloseParenToken);\n                forOrForInOrForOfStatement = forOfStatement;\n            }\n            else if (parseOptional(SyntaxKind.InKeyword)) {\n                const forInStatement = <ForInStatement>createNode(SyntaxKind.ForInStatement, pos);\n                forInStatement.initializer = initializer;\n                forInStatement.expression = allowInAnd(parseExpression);\n                parseExpected(SyntaxKind.CloseParenToken);\n                forOrForInOrForOfStatement = forInStatement;\n            }\n            else {\n                const forStatement = <ForStatement>createNode(SyntaxKind.ForStatement, pos);\n                forStatement.initializer = initializer;\n                parseExpected(SyntaxKind.SemicolonToken);\n                if (token() !== SyntaxKind.SemicolonToken && token() !== SyntaxKind.CloseParenToken) {\n                    forStatement.condition = allowInAnd(parseExpression);\n                }\n                parseExpected(SyntaxKind.SemicolonToken);\n                if (token() !== SyntaxKind.CloseParenToken) {\n                    forStatement.incrementor = 
allowInAnd(parseExpression);\n                }\n                parseExpected(SyntaxKind.CloseParenToken);\n                forOrForInOrForOfStatement = forStatement;\n            }\n\n            forOrForInOrForOfStatement.statement = parseStatement();\n\n            return finishNode(forOrForInOrForOfStatement);\n        }\n\n        function parseBreakOrContinueStatement(kind: SyntaxKind): BreakOrContinueStatement {\n            const node = <BreakOrContinueStatement>createNode(kind);\n\n            parseExpected(kind === SyntaxKind.BreakStatement ? SyntaxKind.BreakKeyword : SyntaxKind.ContinueKeyword);\n            if (!canParseSemicolon()) {\n                node.label = parseIdentifier();\n            }\n\n            parseSemicolon();\n            return finishNode(node);\n        }\n\n        function parseReturnStatement(): ReturnStatement {\n            const node = <ReturnStatement>createNode(SyntaxKind.ReturnStatement);\n\n            parseExpected(SyntaxKind.ReturnKeyword);\n            if (!canParseSemicolon()) {\n                node.expression = allowInAnd(parseExpression);\n            }\n\n            parseSemicolon();\n            return finishNode(node);\n        }\n\n        function parseWithStatement(): WithStatement {\n            const node = <WithStatement>createNode(SyntaxKind.WithStatement);\n            parseExpected(SyntaxKind.WithKeyword);\n            parseExpected(SyntaxKind.OpenParenToken);\n            node.expression = allowInAnd(parseExpression);\n            parseExpected(SyntaxKind.CloseParenToken);\n            node.statement = doInsideOfContext(NodeFlags.InWithStatement, parseStatement);\n            return finishNode(node);\n        }\n\n        function parseCaseClause(): CaseClause {\n            const node = <CaseClause>createNode(SyntaxKind.CaseClause);\n            parseExpected(SyntaxKind.CaseKeyword);\n            node.expression = allowInAnd(parseExpression);\n            parseExpected(SyntaxKind.ColonToken);\n     
node.statements = parseList(ParsingContext.SwitchClauseStatements, parseStatement);\n            return finishNode(node);\n        }\n\n        function parseDefaultClause(): DefaultClause {\n            const node = <DefaultClause>createNode(SyntaxKind.DefaultClause);\n            parseExpected(SyntaxKind.DefaultKeyword);\n            parseExpected(SyntaxKind.ColonToken);\n            node.statements = parseList(ParsingContext.SwitchClauseStatements, parseStatement);\n            return finishNode(node);\n        }\n\n        function parseCaseOrDefaultClause(): CaseOrDefaultClause {\n            return token() === SyntaxKind.CaseKeyword ? parseCaseClause() : parseDefaultClause();\n        }\n\n        function parseSwitchStatement(): SwitchStatement {\n            const node = <SwitchStatement>createNode(SyntaxKind.SwitchStatement);\n            parseExpected(SyntaxKind.SwitchKeyword);\n            parseExpected(SyntaxKind.OpenParenToken);\n            node.expression = allowInAnd(parseExpression);\n            parseExpected(SyntaxKind.CloseParenToken);\n            const caseBlock = <CaseBlock>createNode(SyntaxKind.CaseBlock);\n            parseExpected(SyntaxKind.OpenBraceToken);\n            caseBlock.clauses = parseList(ParsingContext.SwitchClauses, parseCaseOrDefaultClause);\n            parseExpected(SyntaxKind.CloseBraceToken);\n            node.caseBlock = finishNode(caseBlock);\n            return finishNode(node);\n        }\n\n        function parseThrowStatement(): ThrowStatement {\n            // ThrowStatement[Yield] :\n            //      throw [no LineTerminator here]Expression[In, ?Yield];\n\n            // Because of automatic semicolon insertion, we need to report an error if this\n            // throw could be terminated with a semicolon.  Note: we can't call 'parseExpression'\n            // directly as that might consume an expression on the following line.\n            // We just return 'undefined' in that case.  
The actual error will be reported in the\n            // grammar walker.\n            const node = <ThrowStatement>createNode(SyntaxKind.ThrowStatement);\n            parseExpected(SyntaxKind.ThrowKeyword);\n            node.expression = scanner.hasPrecedingLineBreak() ? undefined : allowInAnd(parseExpression);\n            parseSemicolon();\n            return finishNode(node);\n        }\n\n        // TODO: Review for error recovery\n        function parseTryStatement(): TryStatement {\n            const node = <TryStatement>createNode(SyntaxKind.TryStatement);\n\n            parseExpected(SyntaxKind.TryKeyword);\n            node.tryBlock = parseBlock(/*ignoreMissingOpenBrace*/ false);\n            node.catchClause = token() === SyntaxKind.CatchKeyword ? parseCatchClause() : undefined;\n\n            // If we don't have a catch clause, then we must have a finally clause.  Try to parse\n            // one out no matter what.\n            if (!node.catchClause || token() === SyntaxKind.FinallyKeyword) {\n                parseExpected(SyntaxKind.FinallyKeyword);\n                node.finallyBlock = parseBlock(/*ignoreMissingOpenBrace*/ false);\n            }\n\n            return finishNode(node);\n        }\n\n        function parseCatchClause(): CatchClause {\n            const result = <CatchClause>createNode(SyntaxKind.CatchClause);\n            parseExpected(SyntaxKind.CatchKeyword);\n\n            if (parseOptional(SyntaxKind.OpenParenToken)) {\n                result.variableDeclaration = parseVariableDeclaration();\n                parseExpected(SyntaxKind.CloseParenToken);\n            }\n            else {\n                // Keep shape of node to avoid degrading performance.\n                result.variableDeclaration = undefined;\n            }\n\n            result.block = parseBlock(/*ignoreMissingOpenBrace*/ false);\n            return finishNode(result);\n        }\n\n        function parseDebuggerStatement(): Statement {\n            const node = 
<Statement>createNode(SyntaxKind.DebuggerStatement);\n            parseExpected(SyntaxKind.DebuggerKeyword);\n            parseSemicolon();\n            return finishNode(node);\n        }\n\n        function parseExpressionOrLabeledStatement(): ExpressionStatement | LabeledStatement {\n            // We avoid having to do the lookahead for a labeled statement by just trying to parse\n            // out an expression, seeing if it is an identifier, and then seeing if it is followed by\n            // a colon.\n            const node = <ExpressionStatement | LabeledStatement>createNodeWithJSDoc(SyntaxKind.Unknown);\n            const expression = allowInAnd(parseExpression);\n            if (expression.kind === SyntaxKind.Identifier && parseOptional(SyntaxKind.ColonToken)) {\n                node.kind = SyntaxKind.LabeledStatement;\n                (<LabeledStatement>node).label = <Identifier>expression;\n                (<LabeledStatement>node).statement = parseStatement();\n            }\n            else {\n                node.kind = SyntaxKind.ExpressionStatement;\n                (<ExpressionStatement>node).expression = expression;\n                parseSemicolon();\n            }\n            return finishNode(node);\n        }\n\n        function nextTokenIsIdentifierOrKeywordOnSameLine() {\n            nextToken();\n            return tokenIsIdentifierOrKeyword(token()) && !scanner.hasPrecedingLineBreak();\n        }\n\n        function nextTokenIsClassKeywordOnSameLine() {\n            nextToken();\n            return token() === SyntaxKind.ClassKeyword && !scanner.hasPrecedingLineBreak();\n        }\n\n        function nextTokenIsFunctionKeywordOnSameLine() {\n            nextToken();\n            return token() === SyntaxKind.FunctionKeyword && !scanner.hasPrecedingLineBreak();\n        }\n\n        function nextTokenIsIdentifierOrKeywordOrLiteralOnSameLine() {\n            nextToken();\n            return (tokenIsIdentifierOrKeyword(token()) || token() === 
SyntaxKind.NumericLiteral || token() === SyntaxKind.StringLiteral) && !scanner.hasPrecedingLineBreak();\n        }\n\n        function isDeclaration(): boolean {\n            while (true) {\n                switch (token()) {\n                    case SyntaxKind.VarKeyword:\n                    case SyntaxKind.LetKeyword:\n                    case SyntaxKind.ConstKeyword:\n                    case SyntaxKind.FunctionKeyword:\n                    case SyntaxKind.ClassKeyword:\n                    case SyntaxKind.EnumKeyword:\n                        return true;\n\n                    // 'declare', 'module', 'namespace', 'interface'* and 'type' are all legal JavaScript identifiers;\n                    // however, an identifier cannot be followed by another identifier on the same line. This is what we\n                    // count on to parse out the respective declarations. For instance, we exploit this to say that\n                    //\n                    //    namespace n\n                    //\n                    // can be none other than the beginning of a namespace declaration, but we need to respect that JavaScript sees\n                    //\n                    //    namespace\n                    //    n\n                    //\n                    // as the identifier 'namespace' on one line followed by the identifier 'n' on another.\n                    // We need to look one token ahead to see if it is permissible to try parsing a declaration.\n                    //\n                    // *Note*: 'interface' is actually a strict mode reserved word. 
So while\n                    //\n                    //   \"use strict\"\n                    //   interface\n                    //   I {}\n                    //\n                    // could be legal, it would add complexity for very little gain.\n                    case SyntaxKind.InterfaceKeyword:\n                    case SyntaxKind.TypeKeyword:\n                        return nextTokenIsIdentifierOnSameLine();\n                    case SyntaxKind.ModuleKeyword:\n                    case SyntaxKind.NamespaceKeyword:\n                        return nextTokenIsIdentifierOrStringLiteralOnSameLine();\n                    case SyntaxKind.AbstractKeyword:\n                    case SyntaxKind.AsyncKeyword:\n                    case SyntaxKind.DeclareKeyword:\n                    case SyntaxKind.PrivateKeyword:\n                    case SyntaxKind.ProtectedKeyword:\n                    case SyntaxKind.PublicKeyword:\n                    case SyntaxKind.ReadonlyKeyword:\n                        nextToken();\n                        // ASI takes effect for this modifier.\n                        if (scanner.hasPrecedingLineBreak()) {\n                            return false;\n                        }\n                        continue;\n\n                    case SyntaxKind.GlobalKeyword:\n                        nextToken();\n                        return token() === SyntaxKind.OpenBraceToken || token() === SyntaxKind.Identifier || token() === SyntaxKind.ExportKeyword;\n\n                    case SyntaxKind.ImportKeyword:\n                        nextToken();\n                        return token() === SyntaxKind.StringLiteral || token() === SyntaxKind.AsteriskToken ||\n                            token() === SyntaxKind.OpenBraceToken || tokenIsIdentifierOrKeyword(token());\n                    case SyntaxKind.ExportKeyword:\n                        nextToken();\n                        if (token() === SyntaxKind.EqualsToken || token() === SyntaxKind.AsteriskToken 
||\n                            token() === SyntaxKind.OpenBraceToken || token() === SyntaxKind.DefaultKeyword ||\n                            token() === SyntaxKind.AsKeyword) {\n                            return true;\n                        }\n                        continue;\n\n                    case SyntaxKind.StaticKeyword:\n                        nextToken();\n                        continue;\n                    default:\n                        return false;\n                }\n            }\n        }\n\n        function isStartOfDeclaration(): boolean {\n            return lookAhead(isDeclaration);\n        }\n\n        function isStartOfStatement(): boolean {\n            switch (token()) {\n                case SyntaxKind.AtToken:\n                case SyntaxKind.SemicolonToken:\n                case SyntaxKind.OpenBraceToken:\n                case SyntaxKind.VarKeyword:\n                case SyntaxKind.LetKeyword:\n                case SyntaxKind.FunctionKeyword:\n                case SyntaxKind.ClassKeyword:\n                case SyntaxKind.EnumKeyword:\n                case SyntaxKind.IfKeyword:\n                case SyntaxKind.DoKeyword:\n                case SyntaxKind.WhileKeyword:\n                case SyntaxKind.ForKeyword:\n                case SyntaxKind.ContinueKeyword:\n                case SyntaxKind.BreakKeyword:\n                case SyntaxKind.ReturnKeyword:\n                case SyntaxKind.WithKeyword:\n                case SyntaxKind.SwitchKeyword:\n                case SyntaxKind.ThrowKeyword:\n                case SyntaxKind.TryKeyword:\n                case SyntaxKind.DebuggerKeyword:\n                // 'catch' and 'finally' do not actually indicate that the code is part of a statement,\n                // however, we say they are here so that we may gracefully parse them and error later.\n                case SyntaxKind.CatchKeyword:\n                case SyntaxKind.FinallyKeyword:\n                    return true;\n\n     
           case SyntaxKind.ImportKeyword:\n                    return isStartOfDeclaration() || lookAhead(nextTokenIsOpenParenOrLessThan);\n\n                case SyntaxKind.ConstKeyword:\n                case SyntaxKind.ExportKeyword:\n                    return isStartOfDeclaration();\n\n                case SyntaxKind.AsyncKeyword:\n                case SyntaxKind.DeclareKeyword:\n                case SyntaxKind.InterfaceKeyword:\n                case SyntaxKind.ModuleKeyword:\n                case SyntaxKind.NamespaceKeyword:\n                case SyntaxKind.TypeKeyword:\n                case SyntaxKind.GlobalKeyword:\n                    // When these don't start a declaration, they're an identifier in an expression statement\n                    return true;\n\n                case SyntaxKind.PublicKeyword:\n                case SyntaxKind.PrivateKeyword:\n                case SyntaxKind.ProtectedKeyword:\n                case SyntaxKind.StaticKeyword:\n                case SyntaxKind.ReadonlyKeyword:\n                    // When these don't start a declaration, they may be the start of a class member if an identifier\n                    // immediately follows. 
Otherwise they're an identifier in an expression statement.\n                    return isStartOfDeclaration() || !lookAhead(nextTokenIsIdentifierOrKeywordOnSameLine);\n\n                default:\n                    return isStartOfExpression();\n            }\n        }\n\n        function nextTokenIsIdentifierOrStartOfDestructuring() {\n            nextToken();\n            return isIdentifier() || token() === SyntaxKind.OpenBraceToken || token() === SyntaxKind.OpenBracketToken;\n        }\n\n        function isLetDeclaration() {\n            // In ES6 'let' always starts a lexical declaration if followed by an identifier or {\n            // or [.\n            return lookAhead(nextTokenIsIdentifierOrStartOfDestructuring);\n        }\n\n        function parseStatement(): Statement {\n            switch (token()) {\n                case SyntaxKind.SemicolonToken:\n                    return parseEmptyStatement();\n                case SyntaxKind.OpenBraceToken:\n                    return parseBlock(/*ignoreMissingOpenBrace*/ false);\n                case SyntaxKind.VarKeyword:\n                    return parseVariableStatement(<VariableStatement>createNodeWithJSDoc(SyntaxKind.VariableDeclaration));\n                case SyntaxKind.LetKeyword:\n                    if (isLetDeclaration()) {\n                        return parseVariableStatement(<VariableStatement>createNodeWithJSDoc(SyntaxKind.VariableDeclaration));\n                    }\n                    break;\n                case SyntaxKind.FunctionKeyword:\n                    return parseFunctionDeclaration(<FunctionDeclaration>createNodeWithJSDoc(SyntaxKind.FunctionDeclaration));\n                case SyntaxKind.ClassKeyword:\n                    return parseClassDeclaration(<ClassDeclaration>createNodeWithJSDoc(SyntaxKind.ClassDeclaration));\n                case SyntaxKind.IfKeyword:\n                    return parseIfStatement();\n                case SyntaxKind.DoKeyword:\n                    return 
parseDoStatement();\n                case SyntaxKind.WhileKeyword:\n                    return parseWhileStatement();\n                case SyntaxKind.ForKeyword:\n                    return parseForOrForInOrForOfStatement();\n                case SyntaxKind.ContinueKeyword:\n                    return parseBreakOrContinueStatement(SyntaxKind.ContinueStatement);\n                case SyntaxKind.BreakKeyword:\n                    return parseBreakOrContinueStatement(SyntaxKind.BreakStatement);\n                case SyntaxKind.ReturnKeyword:\n                    return parseReturnStatement();\n                case SyntaxKind.WithKeyword:\n                    return parseWithStatement();\n                case SyntaxKind.SwitchKeyword:\n                    return parseSwitchStatement();\n                case SyntaxKind.ThrowKeyword:\n                    return parseThrowStatement();\n                case SyntaxKind.TryKeyword:\n                // Include 'catch' and 'finally' for error recovery.\n                case SyntaxKind.CatchKeyword:\n                case SyntaxKind.FinallyKeyword:\n                    return parseTryStatement();\n                case SyntaxKind.DebuggerKeyword:\n                    return parseDebuggerStatement();\n                case SyntaxKind.AtToken:\n                    return parseDeclaration();\n                case SyntaxKind.AsyncKeyword:\n                case SyntaxKind.InterfaceKeyword:\n                case SyntaxKind.TypeKeyword:\n                case SyntaxKind.ModuleKeyword:\n                case SyntaxKind.NamespaceKeyword:\n                case SyntaxKind.DeclareKeyword:\n                case SyntaxKind.ConstKeyword:\n                case SyntaxKind.EnumKeyword:\n                case SyntaxKind.ExportKeyword:\n                case SyntaxKind.ImportKeyword:\n                case SyntaxKind.PrivateKeyword:\n                case SyntaxKind.ProtectedKeyword:\n                case SyntaxKind.PublicKeyword:\n                case 
SyntaxKind.AbstractKeyword:\n                case SyntaxKind.StaticKeyword:\n                case SyntaxKind.ReadonlyKeyword:\n                case SyntaxKind.GlobalKeyword:\n                    if (isStartOfDeclaration()) {\n                        return parseDeclaration();\n                    }\n                    break;\n            }\n            return parseExpressionOrLabeledStatement();\n        }\n\n        function isDeclareModifier(modifier: Modifier) {\n            return modifier.kind === SyntaxKind.DeclareKeyword;\n        }\n\n        function parseDeclaration(): Statement {\n            const node = <Statement>createNodeWithJSDoc(SyntaxKind.Unknown);\n            node.decorators = parseDecorators();\n            node.modifiers = parseModifiers();\n            if (some(node.modifiers, isDeclareModifier)) {\n                for (const m of node.modifiers) {\n                    m.flags |= NodeFlags.Ambient;\n                }\n                return doInsideOfContext(NodeFlags.Ambient, () => parseDeclarationWorker(node));\n            }\n            else {\n                return parseDeclarationWorker(node);\n            }\n        }\n\n        function parseDeclarationWorker(node: Statement): Statement {\n            switch (token()) {\n                case SyntaxKind.VarKeyword:\n                case SyntaxKind.LetKeyword:\n                case SyntaxKind.ConstKeyword:\n                    return parseVariableStatement(<VariableStatement>node);\n                case SyntaxKind.FunctionKeyword:\n                    return parseFunctionDeclaration(<FunctionDeclaration>node);\n                case SyntaxKind.ClassKeyword:\n                    return parseClassDeclaration(<ClassDeclaration>node);\n                case SyntaxKind.InterfaceKeyword:\n                    return parseInterfaceDeclaration(<InterfaceDeclaration>node);\n                case SyntaxKind.TypeKeyword:\n                    return 
parseTypeAliasDeclaration(<TypeAliasDeclaration>node);\n                case SyntaxKind.EnumKeyword:\n                    return parseEnumDeclaration(<EnumDeclaration>node);\n                case SyntaxKind.GlobalKeyword:\n                case SyntaxKind.ModuleKeyword:\n                case SyntaxKind.NamespaceKeyword:\n                    return parseModuleDeclaration(<ModuleDeclaration>node);\n                case SyntaxKind.ImportKeyword:\n                    return parseImportDeclarationOrImportEqualsDeclaration(<ImportDeclaration | ImportEqualsDeclaration>node);\n                case SyntaxKind.ExportKeyword:\n                    nextToken();\n                    switch (token()) {\n                        case SyntaxKind.DefaultKeyword:\n                        case SyntaxKind.EqualsToken:\n                            return parseExportAssignment(<ExportAssignment>node);\n                        case SyntaxKind.AsKeyword:\n                            return parseNamespaceExportDeclaration(<NamespaceExportDeclaration>node);\n                        default:\n                            return parseExportDeclaration(<ExportDeclaration>node);\n                    }\n                default:\n                    if (node.decorators || node.modifiers) {\n                        // We reached this point because we encountered decorators and/or modifiers and assumed a declaration\n                        // would follow. 
For recovery and error reporting purposes, return an incomplete declaration.\n                        const missing = <Statement>createMissingNode(SyntaxKind.MissingDeclaration, /*reportAtCurrentPosition*/ true, Diagnostics.Declaration_expected);\n                        missing.pos = node.pos;\n                        missing.decorators = node.decorators;\n                        missing.modifiers = node.modifiers;\n                        return finishNode(missing);\n                    }\n            }\n        }\n\n        function nextTokenIsIdentifierOrStringLiteralOnSameLine() {\n            nextToken();\n            return !scanner.hasPrecedingLineBreak() && (isIdentifier() || token() === SyntaxKind.StringLiteral);\n        }\n\n        function parseFunctionBlockOrSemicolon(flags: SignatureFlags, diagnosticMessage?: DiagnosticMessage): Block | undefined {\n            if (token() !== SyntaxKind.OpenBraceToken && canParseSemicolon()) {\n                parseSemicolon();\n                return undefined;\n            }\n\n            return parseFunctionBlock(flags, diagnosticMessage);\n        }\n\n        // DECLARATIONS\n\n        function parseArrayBindingElement(): ArrayBindingElement {\n            if (token() === SyntaxKind.CommaToken) {\n                return <OmittedExpression>createNode(SyntaxKind.OmittedExpression);\n            }\n            const node = <BindingElement>createNode(SyntaxKind.BindingElement);\n            node.dotDotDotToken = parseOptionalToken(SyntaxKind.DotDotDotToken);\n            node.name = parseIdentifierOrPattern();\n            node.initializer = parseInitializer();\n            return finishNode(node);\n        }\n\n        function parseObjectBindingElement(): BindingElement {\n            const node = <BindingElement>createNode(SyntaxKind.BindingElement);\n            node.dotDotDotToken = parseOptionalToken(SyntaxKind.DotDotDotToken);\n            const tokenIsIdentifier = isIdentifier();\n            const propertyName = 
parsePropertyName();\n            if (tokenIsIdentifier && token() !== SyntaxKind.ColonToken) {\n                node.name = <Identifier>propertyName;\n            }\n            else {\n                parseExpected(SyntaxKind.ColonToken);\n                node.propertyName = propertyName;\n                node.name = parseIdentifierOrPattern();\n            }\n            node.initializer = parseInitializer();\n            return finishNode(node);\n        }\n\n        function parseObjectBindingPattern(): ObjectBindingPattern {\n            const node = <ObjectBindingPattern>createNode(SyntaxKind.ObjectBindingPattern);\n            parseExpected(SyntaxKind.OpenBraceToken);\n            node.elements = parseDelimitedList(ParsingContext.ObjectBindingElements, parseObjectBindingElement);\n            parseExpected(SyntaxKind.CloseBraceToken);\n            return finishNode(node);\n        }\n\n        function parseArrayBindingPattern(): ArrayBindingPattern {\n            const node = <ArrayBindingPattern>createNode(SyntaxKind.ArrayBindingPattern);\n            parseExpected(SyntaxKind.OpenBracketToken);\n            node.elements = parseDelimitedList(ParsingContext.ArrayBindingElements, parseArrayBindingElement);\n            parseExpected(SyntaxKind.CloseBracketToken);\n            return finishNode(node);\n        }\n\n        function isIdentifierOrPattern() {\n            return token() === SyntaxKind.OpenBraceToken || token() === SyntaxKind.OpenBracketToken || isIdentifier();\n        }\n\n        function parseIdentifierOrPattern(): Identifier | BindingPattern {\n            if (token() === SyntaxKind.OpenBracketToken) {\n                return parseArrayBindingPattern();\n            }\n            if (token() === SyntaxKind.OpenBraceToken) {\n                return parseObjectBindingPattern();\n            }\n            return parseIdentifier();\n        }\n\n        function parseVariableDeclarationAllowExclamation() {\n            return 
parseVariableDeclaration(/*allowExclamation*/ true);\n        }\n\n        function parseVariableDeclaration(allowExclamation?: boolean): VariableDeclaration {\n            const node = <VariableDeclaration>createNode(SyntaxKind.VariableDeclaration);\n            node.name = parseIdentifierOrPattern();\n            if (allowExclamation && node.name.kind === SyntaxKind.Identifier &&\n                token() === SyntaxKind.ExclamationToken && !scanner.hasPrecedingLineBreak()) {\n                node.exclamationToken = parseTokenNode();\n            }\n            node.type = parseTypeAnnotation();\n            if (!isInOrOfKeyword(token())) {\n                node.initializer = parseInitializer();\n            }\n            return finishNode(node);\n        }\n\n        function parseVariableDeclarationList(inForStatementInitializer: boolean): VariableDeclarationList {\n            const node = <VariableDeclarationList>createNode(SyntaxKind.VariableDeclarationList);\n\n            switch (token()) {\n                case SyntaxKind.VarKeyword:\n                    break;\n                case SyntaxKind.LetKeyword:\n                    node.flags |= NodeFlags.Let;\n                    break;\n                case SyntaxKind.ConstKeyword:\n                    node.flags |= NodeFlags.Const;\n                    break;\n                default:\n                    Debug.fail();\n            }\n\n            nextToken();\n\n            // The user may have written the following:\n            //\n            //    for (let of X) { }\n            //\n            // In this case, we want to parse an empty declaration list, and then parse 'of'\n            // as a keyword. 
The reason this is not automatic is that 'of' is a valid identifier.\n            // So we need to look ahead to determine if 'of' should be treated as a keyword in\n            // this context.\n            // The checker will then give an error that there is an empty declaration list.\n            if (token() === SyntaxKind.OfKeyword && lookAhead(canFollowContextualOfKeyword)) {\n                node.declarations = createMissingList<VariableDeclaration>();\n            }\n            else {\n                const savedDisallowIn = inDisallowInContext();\n                setDisallowInContext(inForStatementInitializer);\n\n                node.declarations = parseDelimitedList(ParsingContext.VariableDeclarations,\n                    inForStatementInitializer ? parseVariableDeclaration : parseVariableDeclarationAllowExclamation);\n\n                setDisallowInContext(savedDisallowIn);\n            }\n\n            return finishNode(node);\n        }\n\n        function canFollowContextualOfKeyword(): boolean {\n            return nextTokenIsIdentifier() && nextToken() === SyntaxKind.CloseParenToken;\n        }\n\n        function parseVariableStatement(node: VariableStatement): VariableStatement {\n            node.kind = SyntaxKind.VariableStatement;\n            node.declarationList = parseVariableDeclarationList(/*inForStatementInitializer*/ false);\n            parseSemicolon();\n            return finishNode(node);\n        }\n\n        function parseFunctionDeclaration(node: FunctionDeclaration): FunctionDeclaration {\n            node.kind = SyntaxKind.FunctionDeclaration;\n            parseExpected(SyntaxKind.FunctionKeyword);\n            node.asteriskToken = parseOptionalToken(SyntaxKind.AsteriskToken);\n            node.name = hasModifier(node, ModifierFlags.Default) ? parseOptionalIdentifier() : parseIdentifier();\n            const isGenerator = node.asteriskToken ? 
SignatureFlags.Yield : SignatureFlags.None;\n            const isAsync = hasModifier(node, ModifierFlags.Async) ? SignatureFlags.Await : SignatureFlags.None;\n            fillSignature(SyntaxKind.ColonToken, isGenerator | isAsync, node);\n            node.body = parseFunctionBlockOrSemicolon(isGenerator | isAsync, Diagnostics.or_expected);\n            return finishNode(node);\n        }\n\n        function parseConstructorDeclaration(node: ConstructorDeclaration): ConstructorDeclaration {\n            node.kind = SyntaxKind.Constructor;\n            parseExpected(SyntaxKind.ConstructorKeyword);\n            fillSignature(SyntaxKind.ColonToken, SignatureFlags.None, node);\n            node.body = parseFunctionBlockOrSemicolon(SignatureFlags.None, Diagnostics.or_expected);\n            return finishNode(node);\n        }\n\n        function parseMethodDeclaration(node: MethodDeclaration, asteriskToken: AsteriskToken, diagnosticMessage?: DiagnosticMessage): MethodDeclaration {\n            node.kind = SyntaxKind.MethodDeclaration;\n            node.asteriskToken = asteriskToken;\n            const isGenerator = asteriskToken ? SignatureFlags.Yield : SignatureFlags.None;\n            const isAsync = hasModifier(node, ModifierFlags.Async) ? 
SignatureFlags.Await : SignatureFlags.None;\n            fillSignature(SyntaxKind.ColonToken, isGenerator | isAsync, node);\n            node.body = parseFunctionBlockOrSemicolon(isGenerator | isAsync, diagnosticMessage);\n            return finishNode(node);\n        }\n\n        function parsePropertyDeclaration(node: PropertyDeclaration): PropertyDeclaration {\n            node.kind = SyntaxKind.PropertyDeclaration;\n            if (!node.questionToken && token() === SyntaxKind.ExclamationToken && !scanner.hasPrecedingLineBreak()) {\n                node.exclamationToken = parseTokenNode();\n            }\n            node.type = parseTypeAnnotation();\n\n            // For instance properties specifically, since they are evaluated inside the constructor,\n            // we do *not* want to parse yield expressions, so we specifically turn the yield context\n            // off. The grammar would look something like this:\n            //\n            //    MemberVariableDeclaration[Yield]:\n            //        AccessibilityModifier_opt PropertyName TypeAnnotation_opt Initializer_opt[In];\n            //        AccessibilityModifier_opt static_opt PropertyName TypeAnnotation_opt Initializer_opt[In, ?Yield];\n            //\n            // The checker may still error in the static case to explicitly disallow the yield expression.\n            node.initializer = hasModifier(node, ModifierFlags.Static)\n                ? allowInAnd(parseInitializer)\n                : doOutsideOfContext(NodeFlags.YieldContext | NodeFlags.DisallowInContext, parseInitializer);\n\n            parseSemicolon();\n            return finishNode(node);\n        }\n\n        function parsePropertyOrMethodDeclaration(node: PropertyDeclaration | MethodDeclaration): PropertyDeclaration | MethodDeclaration {\n            const asteriskToken = parseOptionalToken(SyntaxKind.AsteriskToken);\n            node.name = parsePropertyName();\n            // Note: this is not legal as per the grammar.  
But we allow it in the parser and\n            // report an error in the grammar checker.\n            node.questionToken = parseOptionalToken(SyntaxKind.QuestionToken);\n            if (asteriskToken || token() === SyntaxKind.OpenParenToken || token() === SyntaxKind.LessThanToken) {\n                return parseMethodDeclaration(<MethodDeclaration>node, asteriskToken, Diagnostics.or_expected);\n            }\n            return parsePropertyDeclaration(<PropertyDeclaration>node);\n        }\n\n        function parseAccessorDeclaration(node: AccessorDeclaration, kind: AccessorDeclaration[\"kind\"]): AccessorDeclaration {\n            node.kind = kind;\n            node.name = parsePropertyName();\n            fillSignature(SyntaxKind.ColonToken, SignatureFlags.None, node);\n            node.body = parseFunctionBlockOrSemicolon(SignatureFlags.None);\n            return finishNode(node);\n        }\n\n        function isClassMemberModifier(idToken: SyntaxKind) {\n            switch (idToken) {\n                case SyntaxKind.PublicKeyword:\n                case SyntaxKind.PrivateKeyword:\n                case SyntaxKind.ProtectedKeyword:\n                case SyntaxKind.StaticKeyword:\n                case SyntaxKind.ReadonlyKeyword:\n                    return true;\n                default:\n                    return false;\n            }\n        }\n\n        function isClassMemberStart(): boolean {\n            let idToken: SyntaxKind;\n\n            if (token() === SyntaxKind.AtToken) {\n                return true;\n            }\n\n            // Eat up all modifiers, but hold on to the last one in case it is actually an identifier.\n            while (isModifierKind(token())) {\n                idToken = token();\n                // If the idToken is a class modifier (protected, private, public, and static), it is\n                // certain that we are starting to parse class member. 
This allows better error recovery\n                // Example:\n                //      public foo() ...     // true\n                //      public @dec blah ... // true; we will then report an error later\n                //      export public ...    // true; we will then report an error later\n                if (isClassMemberModifier(idToken)) {\n                    return true;\n                }\n\n                nextToken();\n            }\n\n            if (token() === SyntaxKind.AsteriskToken) {\n                return true;\n            }\n\n            // Try to get the first property-like token following all modifiers.\n            // This can either be an identifier or the 'get' or 'set' keywords.\n            if (isLiteralPropertyName()) {\n                idToken = token();\n                nextToken();\n            }\n\n            // Index signatures and computed properties are class members; we can parse.\n            if (token() === SyntaxKind.OpenBracketToken) {\n                return true;\n            }\n\n            // If we were able to get any potential identifier...\n            if (idToken !== undefined) {\n                // If we have a non-keyword identifier, or if we have an accessor, then it's safe to parse.\n                if (!isKeyword(idToken) || idToken === SyntaxKind.SetKeyword || idToken === SyntaxKind.GetKeyword) {\n                    return true;\n                }\n\n                // If it *is* a keyword, but not an accessor, check a little farther along\n                // to see if it should actually be parsed as a class member.\n                switch (token()) {\n                    case SyntaxKind.OpenParenToken:     // Method declaration\n                    case SyntaxKind.LessThanToken:      // Generic Method declaration\n                    case SyntaxKind.ExclamationToken:   // Non-null assertion on property name\n                    case SyntaxKind.ColonToken:         // Type Annotation for declaration\n   
                 case SyntaxKind.EqualsToken:        // Initializer for declaration\n                    case SyntaxKind.QuestionToken:      // Not valid, but permitted so that it gets caught later on.\n                        return true;\n                    default:\n                        // Covers\n                        //  - Semicolons     (declaration termination)\n                        //  - Closing braces (end-of-class, must be declaration)\n                        //  - End-of-files   (not valid, but permitted so that it gets caught later on)\n                        //  - Line-breaks    (enabling *automatic semicolon insertion*)\n                        return canParseSemicolon();\n                }\n            }\n\n            return false;\n        }\n\n        function parseDecorators(): NodeArray<Decorator> | undefined {\n            let list: Decorator[] | undefined;\n            const listPos = getNodePos();\n            while (true) {\n                const decoratorStart = getNodePos();\n                if (!parseOptional(SyntaxKind.AtToken)) {\n                    break;\n                }\n                const decorator = <Decorator>createNode(SyntaxKind.Decorator, decoratorStart);\n                decorator.expression = doInDecoratorContext(parseLeftHandSideExpressionOrHigher);\n                finishNode(decorator);\n                (list || (list = [])).push(decorator);\n            }\n            return list && createNodeArray(list, listPos);\n        }\n\n        /*\n         * There are situations in which a modifier like 'const' will appear unexpectedly, such as on a class member.\n         * In those situations, if we are entirely sure that 'const' is not valid on its own (such as when ASI takes effect\n         * and turns it into a standalone declaration), then it is better to parse it and report an error later.\n         *\n         * In such situations, 'permitInvalidConstAsModifier' should be set to true.\n         */\n      
  function parseModifiers(permitInvalidConstAsModifier?: boolean): NodeArray<Modifier> | undefined {\n            let list: Modifier[];\n            const listPos = getNodePos();\n            while (true) {\n                const modifierStart = scanner.getStartPos();\n                const modifierKind = token();\n\n                if (token() === SyntaxKind.ConstKeyword && permitInvalidConstAsModifier) {\n                    // We need to ensure that any subsequent modifiers appear on the same line\n                    // so that when 'const' is a standalone declaration, we don't issue an error.\n                    if (!tryParse(nextTokenIsOnSameLineAndCanFollowModifier)) {\n                        break;\n                    }\n                }\n                else {\n                    if (!parseAnyContextualModifier()) {\n                        break;\n                    }\n                }\n\n                const modifier = finishNode(<Modifier>createNode(modifierKind, modifierStart));\n                (list || (list = [])).push(modifier);\n            }\n            return list && createNodeArray(list, listPos);\n        }\n\n        function parseModifiersForArrowFunction(): NodeArray<Modifier> {\n            let modifiers: NodeArray<Modifier>;\n            if (token() === SyntaxKind.AsyncKeyword) {\n                const modifierStart = scanner.getStartPos();\n                const modifierKind = token();\n                nextToken();\n                const modifier = finishNode(<Modifier>createNode(modifierKind, modifierStart));\n                modifiers = createNodeArray<Modifier>([modifier], modifierStart);\n            }\n            return modifiers;\n        }\n\n        function parseClassElement(): ClassElement {\n            if (token() === SyntaxKind.SemicolonToken) {\n                const result = <SemicolonClassElement>createNode(SyntaxKind.SemicolonClassElement);\n                nextToken();\n                return 
finishNode(result);\n            }\n\n            const node = <ClassElement>createNodeWithJSDoc(SyntaxKind.Unknown);\n            node.decorators = parseDecorators();\n            node.modifiers = parseModifiers(/*permitInvalidConstAsModifier*/ true);\n\n            if (parseContextualModifier(SyntaxKind.GetKeyword)) {\n                return parseAccessorDeclaration(<AccessorDeclaration>node, SyntaxKind.GetAccessor);\n            }\n\n            if (parseContextualModifier(SyntaxKind.SetKeyword)) {\n                return parseAccessorDeclaration(<AccessorDeclaration>node, SyntaxKind.SetAccessor);\n            }\n\n            if (token() === SyntaxKind.ConstructorKeyword) {\n                return parseConstructorDeclaration(<ConstructorDeclaration>node);\n            }\n\n            if (isIndexSignature()) {\n                return parseIndexSignatureDeclaration(<IndexSignatureDeclaration>node);\n            }\n\n            // It is very important that we check this *after* checking indexers because\n            // the [ token can start an index signature or a computed property name\n            if (tokenIsIdentifierOrKeyword(token()) ||\n                token() === SyntaxKind.StringLiteral ||\n                token() === SyntaxKind.NumericLiteral ||\n                token() === SyntaxKind.AsteriskToken ||\n                token() === SyntaxKind.OpenBracketToken) {\n\n                return parsePropertyOrMethodDeclaration(<PropertyDeclaration | MethodDeclaration>node);\n            }\n\n            if (node.decorators || node.modifiers) {\n                // treat this as a property declaration with a missing name.\n                node.name = createMissingNode<Identifier>(SyntaxKind.Identifier, /*reportAtCurrentPosition*/ true, Diagnostics.Declaration_expected);\n                return parsePropertyDeclaration(<PropertyDeclaration>node);\n            }\n\n            // 'isClassMemberStart' should have hinted not to attempt parsing.\n            
Debug.fail(\"Should not have attempted to parse class member declaration.\");\n        }\n\n        function parseClassExpression(): ClassExpression {\n            return <ClassExpression>parseClassDeclarationOrExpression(<ClassLikeDeclaration>createNodeWithJSDoc(SyntaxKind.Unknown), SyntaxKind.ClassExpression);\n        }\n\n        function parseClassDeclaration(node: ClassLikeDeclaration): ClassDeclaration {\n            return <ClassDeclaration>parseClassDeclarationOrExpression(node, SyntaxKind.ClassDeclaration);\n        }\n\n        function parseClassDeclarationOrExpression(node: ClassLikeDeclaration, kind: ClassLikeDeclaration[\"kind\"]): ClassLikeDeclaration {\n            node.kind = kind;\n            parseExpected(SyntaxKind.ClassKeyword);\n            node.name = parseNameOfClassDeclarationOrExpression();\n            node.typeParameters = parseTypeParameters();\n            node.heritageClauses = parseHeritageClauses();\n\n            if (parseExpected(SyntaxKind.OpenBraceToken)) {\n                // ClassTail[Yield,Await] : (Modified) See 14.5\n                //      ClassHeritage[?Yield,?Await]opt { ClassBody[?Yield,?Await]opt }\n                node.members = parseClassMembers();\n                parseExpected(SyntaxKind.CloseBraceToken);\n            }\n            else {\n                node.members = createMissingList<ClassElement>();\n            }\n\n            return finishNode(node);\n        }\n\n        function parseNameOfClassDeclarationOrExpression(): Identifier | undefined {\n            // implements is a future reserved word so\n            // 'class implements' might mean either\n            // - class expression with omitted name, 'implements' starts heritage clause\n            // - class with name 'implements'\n            // 'isImplementsClause' helps to disambiguate between these two cases\n            return isIdentifier() && !isImplementsClause()\n                ? 
parseIdentifier()\n                : undefined;\n        }\n\n        function isImplementsClause() {\n            return token() === SyntaxKind.ImplementsKeyword && lookAhead(nextTokenIsIdentifierOrKeyword);\n        }\n\n        function parseHeritageClauses(): NodeArray<HeritageClause> | undefined {\n            // ClassTail[Yield,Await] : (Modified) See 14.5\n            //      ClassHeritage[?Yield,?Await]opt { ClassBody[?Yield,?Await]opt }\n\n            if (isHeritageClause()) {\n                return parseList(ParsingContext.HeritageClauses, parseHeritageClause);\n            }\n\n            return undefined;\n        }\n\n        function parseHeritageClause(): HeritageClause | undefined {\n            const tok = token();\n            if (tok === SyntaxKind.ExtendsKeyword || tok === SyntaxKind.ImplementsKeyword) {\n                const node = <HeritageClause>createNode(SyntaxKind.HeritageClause);\n                node.token = tok;\n                nextToken();\n                node.types = parseDelimitedList(ParsingContext.HeritageClauseElement, parseExpressionWithTypeArguments);\n                return finishNode(node);\n            }\n\n            return undefined;\n        }\n\n        function parseExpressionWithTypeArguments(): ExpressionWithTypeArguments {\n            const node = <ExpressionWithTypeArguments>createNode(SyntaxKind.ExpressionWithTypeArguments);\n            node.expression = parseLeftHandSideExpressionOrHigher();\n            node.typeArguments = tryParseTypeArguments();\n            return finishNode(node);\n        }\n\n        function tryParseTypeArguments(): NodeArray<TypeNode> | undefined {\n            return token() === SyntaxKind.LessThanToken\n               ? 
parseBracketedList(ParsingContext.TypeArguments, parseType, SyntaxKind.LessThanToken, SyntaxKind.GreaterThanToken)\n               : undefined;\n        }\n\n        function isHeritageClause(): boolean {\n            return token() === SyntaxKind.ExtendsKeyword || token() === SyntaxKind.ImplementsKeyword;\n        }\n\n        function parseClassMembers(): NodeArray<ClassElement> {\n            return parseList(ParsingContext.ClassMembers, parseClassElement);\n        }\n\n        function parseInterfaceDeclaration(node: InterfaceDeclaration): InterfaceDeclaration {\n            node.kind = SyntaxKind.InterfaceDeclaration;\n            parseExpected(SyntaxKind.InterfaceKeyword);\n            node.name = parseIdentifier();\n            node.typeParameters = parseTypeParameters();\n            node.heritageClauses = parseHeritageClauses();\n            node.members = parseObjectTypeMembers();\n            return finishNode(node);\n        }\n\n        function parseTypeAliasDeclaration(node: TypeAliasDeclaration): TypeAliasDeclaration {\n            node.kind = SyntaxKind.TypeAliasDeclaration;\n            parseExpected(SyntaxKind.TypeKeyword);\n            node.name = parseIdentifier();\n            node.typeParameters = parseTypeParameters();\n            parseExpected(SyntaxKind.EqualsToken);\n            node.type = parseType();\n            parseSemicolon();\n            return finishNode(node);\n        }\n\n        // In an ambient declaration, the grammar only allows integer literals as initializers.\n        // In a non-ambient declaration, the grammar allows uninitialized members only in a\n        // ConstantEnumMemberSection, which starts at the beginning of an enum declaration\n        // or any time an integer literal initializer is encountered.\n        function parseEnumMember(): EnumMember {\n            const node = <EnumMember>createNodeWithJSDoc(SyntaxKind.EnumMember);\n            node.name = parsePropertyName();\n            node.initializer = 
allowInAnd(parseInitializer);\n            return finishNode(node);\n        }\n\n        function parseEnumDeclaration(node: EnumDeclaration): EnumDeclaration {\n            node.kind = SyntaxKind.EnumDeclaration;\n            parseExpected(SyntaxKind.EnumKeyword);\n            node.name = parseIdentifier();\n            if (parseExpected(SyntaxKind.OpenBraceToken)) {\n                node.members = parseDelimitedList(ParsingContext.EnumMembers, parseEnumMember);\n                parseExpected(SyntaxKind.CloseBraceToken);\n            }\n            else {\n                node.members = createMissingList<EnumMember>();\n            }\n            return finishNode(node);\n        }\n\n        function parseModuleBlock(): ModuleBlock {\n            const node = <ModuleBlock>createNode(SyntaxKind.ModuleBlock);\n            if (parseExpected(SyntaxKind.OpenBraceToken)) {\n                node.statements = parseList(ParsingContext.BlockStatements, parseStatement);\n                parseExpected(SyntaxKind.CloseBraceToken);\n            }\n            else {\n                node.statements = createMissingList<Statement>();\n            }\n            return finishNode(node);\n        }\n\n        function parseModuleOrNamespaceDeclaration(node: ModuleDeclaration, flags: NodeFlags): ModuleDeclaration {\n            node.kind = SyntaxKind.ModuleDeclaration;\n            // If we are parsing a dotted namespace name, we want to\n            // propagate the 'Namespace' flag across the names if set.\n            const namespaceFlag = flags & NodeFlags.Namespace;\n            node.flags |= flags;\n            node.name = parseIdentifier();\n            node.body = parseOptional(SyntaxKind.DotToken)\n                ? 
<NamespaceDeclaration>parseModuleOrNamespaceDeclaration(<ModuleDeclaration>createNode(SyntaxKind.Unknown), NodeFlags.NestedNamespace | namespaceFlag)\n                : parseModuleBlock();\n            return finishNode(node);\n        }\n\n        function parseAmbientExternalModuleDeclaration(node: ModuleDeclaration): ModuleDeclaration {\n            node.kind = SyntaxKind.ModuleDeclaration;\n            if (token() === SyntaxKind.GlobalKeyword) {\n                // parse 'global' as name of global scope augmentation\n                node.name = parseIdentifier();\n                node.flags |= NodeFlags.GlobalAugmentation;\n            }\n            else {\n                node.name = <StringLiteral>parseLiteralNode();\n                node.name.text = internIdentifier(node.name.text);\n            }\n            if (token() === SyntaxKind.OpenBraceToken) {\n                node.body = parseModuleBlock();\n            }\n            else {\n                parseSemicolon();\n            }\n            return finishNode(node);\n        }\n\n        function parseModuleDeclaration(node: ModuleDeclaration): ModuleDeclaration {\n            let flags: NodeFlags = 0;\n            if (token() === SyntaxKind.GlobalKeyword) {\n                // global augmentation\n                return parseAmbientExternalModuleDeclaration(node);\n            }\n            else if (parseOptional(SyntaxKind.NamespaceKeyword)) {\n                flags |= NodeFlags.Namespace;\n            }\n            else {\n                parseExpected(SyntaxKind.ModuleKeyword);\n                if (token() === SyntaxKind.StringLiteral) {\n                    return parseAmbientExternalModuleDeclaration(node);\n                }\n            }\n            return parseModuleOrNamespaceDeclaration(node, flags);\n        }\n\n        function isExternalModuleReference() {\n            return token() === SyntaxKind.RequireKeyword &&\n                lookAhead(nextTokenIsOpenParen);\n        }\n\n   
     function nextTokenIsOpenParen() {\n            return nextToken() === SyntaxKind.OpenParenToken;\n        }\n\n        function nextTokenIsSlash() {\n            return nextToken() === SyntaxKind.SlashToken;\n        }\n\n        function parseNamespaceExportDeclaration(node: NamespaceExportDeclaration): NamespaceExportDeclaration {\n            node.kind = SyntaxKind.NamespaceExportDeclaration;\n            parseExpected(SyntaxKind.AsKeyword);\n            parseExpected(SyntaxKind.NamespaceKeyword);\n            node.name = parseIdentifier();\n            parseSemicolon();\n            return finishNode(node);\n        }\n\n        function parseImportDeclarationOrImportEqualsDeclaration(node: ImportEqualsDeclaration | ImportDeclaration): ImportEqualsDeclaration | ImportDeclaration {\n            parseExpected(SyntaxKind.ImportKeyword);\n            const afterImportPos = scanner.getStartPos();\n\n            let identifier: Identifier;\n            if (isIdentifier()) {\n                identifier = parseIdentifier();\n                if (token() !== SyntaxKind.CommaToken && token() !== SyntaxKind.FromKeyword) {\n                    return parseImportEqualsDeclaration(<ImportEqualsDeclaration>node, identifier);\n                }\n            }\n\n            // Import statement\n            node.kind = SyntaxKind.ImportDeclaration;\n            // ImportDeclaration:\n            //  import ImportClause from ModuleSpecifier ;\n            //  import ModuleSpecifier;\n            if (identifier || // import id\n                token() === SyntaxKind.AsteriskToken || // import *\n                token() === SyntaxKind.OpenBraceToken) { // import {\n                (<ImportDeclaration>node).importClause = parseImportClause(identifier, afterImportPos);\n                parseExpected(SyntaxKind.FromKeyword);\n            }\n\n            (<ImportDeclaration>node).moduleSpecifier = parseModuleSpecifier();\n            parseSemicolon();\n            return 
finishNode(node);\n        }\n\n        function parseImportEqualsDeclaration(node: ImportEqualsDeclaration, identifier: ts.Identifier): ImportEqualsDeclaration {\n            node.kind = SyntaxKind.ImportEqualsDeclaration;\n            node.name = identifier;\n            parseExpected(SyntaxKind.EqualsToken);\n            node.moduleReference = parseModuleReference();\n            parseSemicolon();\n            return finishNode(node);\n        }\n\n        function parseImportClause(identifier: Identifier, fullStart: number) {\n            // ImportClause:\n            //  ImportedDefaultBinding\n            //  NameSpaceImport\n            //  NamedImports\n            //  ImportedDefaultBinding, NameSpaceImport\n            //  ImportedDefaultBinding, NamedImports\n\n            const importClause = <ImportClause>createNode(SyntaxKind.ImportClause, fullStart);\n            if (identifier) {\n                // ImportedDefaultBinding:\n                //  ImportedBinding\n                importClause.name = identifier;\n            }\n\n            // If there was no default import or if there is comma token after default import\n            // parse namespace or named imports\n            if (!importClause.name ||\n                parseOptional(SyntaxKind.CommaToken)) {\n                importClause.namedBindings = token() === SyntaxKind.AsteriskToken ? parseNamespaceImport() : parseNamedImportsOrExports(SyntaxKind.NamedImports);\n            }\n\n            return finishNode(importClause);\n        }\n\n        function parseModuleReference() {\n            return isExternalModuleReference()\n                ? 
parseExternalModuleReference()\n                : parseEntityName(/*allowReservedWords*/ false);\n        }\n\n        function parseExternalModuleReference() {\n            const node = <ExternalModuleReference>createNode(SyntaxKind.ExternalModuleReference);\n            parseExpected(SyntaxKind.RequireKeyword);\n            parseExpected(SyntaxKind.OpenParenToken);\n            node.expression = parseModuleSpecifier();\n            parseExpected(SyntaxKind.CloseParenToken);\n            return finishNode(node);\n        }\n\n        function parseModuleSpecifier(): Expression {\n            if (token() === SyntaxKind.StringLiteral) {\n                const result = parseLiteralNode();\n                result.text = internIdentifier(result.text);\n                return result;\n            }\n            else {\n                // We allow arbitrary expressions here, even though the grammar only allows string\n                // literals.  We check to ensure that it is only a string literal later in the grammar\n                // check pass.\n                return parseExpression();\n            }\n        }\n\n        function parseNamespaceImport(): NamespaceImport {\n            // NameSpaceImport:\n            //  * as ImportedBinding\n            const namespaceImport = <NamespaceImport>createNode(SyntaxKind.NamespaceImport);\n            parseExpected(SyntaxKind.AsteriskToken);\n            parseExpected(SyntaxKind.AsKeyword);\n            namespaceImport.name = parseIdentifier();\n            return finishNode(namespaceImport);\n        }\n\n        function parseNamedImportsOrExports(kind: SyntaxKind.NamedImports): NamedImports;\n        function parseNamedImportsOrExports(kind: SyntaxKind.NamedExports): NamedExports;\n        function parseNamedImportsOrExports(kind: SyntaxKind): NamedImportsOrExports {\n            const node = <NamedImports | NamedExports>createNode(kind);\n\n            // NamedImports:\n            //  { }\n            //  { 
ImportsList }\n            //  { ImportsList, }\n\n            // ImportsList:\n            //  ImportSpecifier\n            //  ImportsList, ImportSpecifier\n            node.elements = <NodeArray<ImportSpecifier> | NodeArray<ExportSpecifier>>parseBracketedList(ParsingContext.ImportOrExportSpecifiers,\n                kind === SyntaxKind.NamedImports ? parseImportSpecifier : parseExportSpecifier,\n                SyntaxKind.OpenBraceToken, SyntaxKind.CloseBraceToken);\n            return finishNode(node);\n        }\n\n        function parseExportSpecifier() {\n            return parseImportOrExportSpecifier(SyntaxKind.ExportSpecifier);\n        }\n\n        function parseImportSpecifier() {\n            return parseImportOrExportSpecifier(SyntaxKind.ImportSpecifier);\n        }\n\n        function parseImportOrExportSpecifier(kind: SyntaxKind): ImportOrExportSpecifier {\n            const node = <ImportSpecifier>createNode(kind);\n            // ImportSpecifier:\n            //   BindingIdentifier\n            //   IdentifierName as BindingIdentifier\n            // ExportSpecifier:\n            //   IdentifierName\n            //   IdentifierName as IdentifierName\n            let checkIdentifierIsKeyword = isKeyword(token()) && !isIdentifier();\n            let checkIdentifierStart = scanner.getTokenPos();\n            let checkIdentifierEnd = scanner.getTextPos();\n            const identifierName = parseIdentifierName();\n            if (token() === SyntaxKind.AsKeyword) {\n                node.propertyName = identifierName;\n                parseExpected(SyntaxKind.AsKeyword);\n                checkIdentifierIsKeyword = isKeyword(token()) && !isIdentifier();\n                checkIdentifierStart = scanner.getTokenPos();\n                checkIdentifierEnd = scanner.getTextPos();\n                node.name = parseIdentifierName();\n            }\n            else {\n                node.name = identifierName;\n            }\n            if (kind === 
SyntaxKind.ImportSpecifier && checkIdentifierIsKeyword) {\n                // Report error identifier expected\n                parseErrorAtPosition(checkIdentifierStart, checkIdentifierEnd - checkIdentifierStart, Diagnostics.Identifier_expected);\n            }\n            return finishNode(node);\n        }\n\n        function parseExportDeclaration(node: ExportDeclaration): ExportDeclaration {\n            node.kind = SyntaxKind.ExportDeclaration;\n            if (parseOptional(SyntaxKind.AsteriskToken)) {\n                parseExpected(SyntaxKind.FromKeyword);\n                node.moduleSpecifier = parseModuleSpecifier();\n            }\n            else {\n                node.exportClause = parseNamedImportsOrExports(SyntaxKind.NamedExports);\n                // It is not uncommon to accidentally omit the 'from' keyword. Additionally, in editing scenarios,\n                // the 'from' keyword can be parsed as a named export when the export clause is unterminated (i.e. `export { from \"moduleName\";`)\n                // If we don't have a 'from' keyword, see if we have a string literal such that ASI won't take effect.\n                if (token() === SyntaxKind.FromKeyword || (token() === SyntaxKind.StringLiteral && !scanner.hasPrecedingLineBreak())) {\n                    parseExpected(SyntaxKind.FromKeyword);\n                    node.moduleSpecifier = parseModuleSpecifier();\n                }\n            }\n            parseSemicolon();\n            return finishNode(node);\n        }\n\n        function parseExportAssignment(node: ExportAssignment): ExportAssignment {\n            node.kind = SyntaxKind.ExportAssignment;\n            if (parseOptional(SyntaxKind.EqualsToken)) {\n                node.isExportEquals = true;\n            }\n            else {\n                parseExpected(SyntaxKind.DefaultKeyword);\n            }\n            node.expression = parseAssignmentExpressionOrHigher();\n            parseSemicolon();\n            return 
finishNode(node);\n        }\n\n        function processReferenceComments(sourceFile: SourceFile): void {\n            const triviaScanner = createScanner(sourceFile.languageVersion, /*skipTrivia*/ false, LanguageVariant.Standard, sourceText);\n            const referencedFiles: FileReference[] = [];\n            const typeReferenceDirectives: FileReference[] = [];\n            const amdDependencies: { path: string; name: string }[] = [];\n            let amdModuleName: string;\n            let checkJsDirective: CheckJsDirective = undefined;\n\n            // Keep scanning all the leading trivia in the file until we get to something that\n            // isn't trivia.  Any single line comment will be analyzed to see if it is a\n            // reference comment.\n            while (true) {\n                const kind = triviaScanner.scan();\n                if (kind !== SyntaxKind.SingleLineCommentTrivia) {\n                    if (isTrivia(kind)) {\n                        continue;\n                    }\n                    else {\n                        break;\n                    }\n                }\n\n                const range = {\n                    kind: <SyntaxKind.SingleLineCommentTrivia | SyntaxKind.MultiLineCommentTrivia>triviaScanner.getToken(),\n                    pos: triviaScanner.getTokenPos(),\n                    end: triviaScanner.getTextPos(),\n                };\n\n                const comment = sourceText.substring(range.pos, range.end);\n                const referencePathMatchResult = getFileReferenceFromReferencePath(comment, range);\n                if (referencePathMatchResult) {\n                    const fileReference = referencePathMatchResult.fileReference;\n                    sourceFile.hasNoDefaultLib = referencePathMatchResult.isNoDefaultLib;\n                    const diagnosticMessage = referencePathMatchResult.diagnosticMessage;\n                    if (fileReference) {\n                        if 
(referencePathMatchResult.isTypeReferenceDirective) {\n                            typeReferenceDirectives.push(fileReference);\n                        }\n                        else {\n                            referencedFiles.push(fileReference);\n                        }\n                    }\n                    if (diagnosticMessage) {\n                        parseDiagnostics.push(createFileDiagnostic(sourceFile, range.pos, range.end - range.pos, diagnosticMessage));\n                    }\n                }\n                else {\n                    const amdModuleNameRegEx = /^\\/\\/\\/\\s*<amd-module\\s+name\\s*=\\s*('|\")(.+?)\\1/gim;\n                    const amdModuleNameMatchResult = amdModuleNameRegEx.exec(comment);\n                    if (amdModuleNameMatchResult) {\n                        if (amdModuleName) {\n                            parseDiagnostics.push(createFileDiagnostic(sourceFile, range.pos, range.end - range.pos, Diagnostics.An_AMD_module_cannot_have_multiple_name_assignments));\n                        }\n                        amdModuleName = amdModuleNameMatchResult[2];\n                    }\n\n                    const amdDependencyRegEx = /^\\/\\/\\/\\s*<amd-dependency\\s/gim;\n                    const pathRegex = /\\spath\\s*=\\s*('|\")(.+?)\\1/gim;\n                    const nameRegex = /\\sname\\s*=\\s*('|\")(.+?)\\1/gim;\n                    const amdDependencyMatchResult = amdDependencyRegEx.exec(comment);\n                    if (amdDependencyMatchResult) {\n                        const pathMatchResult = pathRegex.exec(comment);\n                        const nameMatchResult = nameRegex.exec(comment);\n                        if (pathMatchResult) {\n                            const amdDependency = { path: pathMatchResult[2], name: nameMatchResult ? 
nameMatchResult[2] : undefined };\n                            amdDependencies.push(amdDependency);\n                        }\n                    }\n\n                    const checkJsDirectiveRegEx = /^\\/\\/\\/?\\s*(@ts-check|@ts-nocheck)\\s*$/gim;\n                    const checkJsDirectiveMatchResult = checkJsDirectiveRegEx.exec(comment);\n                    if (checkJsDirectiveMatchResult) {\n                        checkJsDirective = {\n                            enabled: equateStringsCaseInsensitive(checkJsDirectiveMatchResult[1], \"@ts-check\"),\n                            end: range.end,\n                            pos: range.pos\n                        };\n                    }\n                }\n            }\n\n            sourceFile.referencedFiles = referencedFiles;\n            sourceFile.typeReferenceDirectives = typeReferenceDirectives;\n            sourceFile.amdDependencies = amdDependencies;\n            sourceFile.moduleName = amdModuleName;\n            sourceFile.checkJsDirective = checkJsDirective;\n        }\n\n        function setExternalModuleIndicator(sourceFile: SourceFile) {\n            sourceFile.externalModuleIndicator = forEach(sourceFile.statements, node =>\n                hasModifier(node, ModifierFlags.Export)\n                    || node.kind === SyntaxKind.ImportEqualsDeclaration && (<ImportEqualsDeclaration>node).moduleReference.kind === SyntaxKind.ExternalModuleReference\n                    || node.kind === SyntaxKind.ImportDeclaration\n                    || node.kind === SyntaxKind.ExportAssignment\n                    || node.kind === SyntaxKind.ExportDeclaration\n                    ? 
node\n                    : undefined);\n        }\n\n        const enum ParsingContext {\n            SourceElements,            // Elements in source file\n            BlockStatements,           // Statements in block\n            SwitchClauses,             // Clauses in switch statement\n            SwitchClauseStatements,    // Statements in switch clause\n            TypeMembers,               // Members in interface or type literal\n            ClassMembers,              // Members in class declaration\n            EnumMembers,               // Members in enum declaration\n            HeritageClauseElement,     // Elements in a heritage clause\n            VariableDeclarations,      // Variable declarations in variable statement\n            ObjectBindingElements,     // Binding elements in object binding list\n            ArrayBindingElements,      // Binding elements in array binding list\n            ArgumentExpressions,       // Expressions in argument list\n            ObjectLiteralMembers,      // Members in object literal\n            JsxAttributes,             // Attributes in jsx element\n            JsxChildren,               // Things between opening and closing JSX tags\n            ArrayLiteralMembers,       // Members in array literal\n            Parameters,                // Parameters in parameter list\n            RestProperties,            // Property names in a rest type list\n            TypeParameters,            // Type parameters in type parameter list\n            TypeArguments,             // Type arguments in type argument list\n            TupleElementTypes,         // Element types in tuple element type list\n            HeritageClauses,           // Heritage clauses for a class or interface declaration.\n            ImportOrExportSpecifiers,  // Named import clause's import specifier list\n            Count                      // Number of parsing contexts\n        }\n\n        const enum Tristate {\n            False,\n         
   True,\n            Unknown\n        }\n\n        export namespace JSDocParser {\n            export function parseJSDocTypeExpressionForTests(content: string, start: number, length: number): { jsDocTypeExpression: JSDocTypeExpression, diagnostics: Diagnostic[] } | undefined {\n                initializeState(content, ScriptTarget.Latest, /*_syntaxCursor:*/ undefined, ScriptKind.JS);\n                sourceFile = createSourceFile(\"file.js\", ScriptTarget.Latest, ScriptKind.JS, /*isDeclarationFile*/ false);\n                scanner.setText(content, start, length);\n                currentToken = scanner.scan();\n                const jsDocTypeExpression = parseJSDocTypeExpression();\n                const diagnostics = parseDiagnostics;\n                clearState();\n\n                return jsDocTypeExpression ? { jsDocTypeExpression, diagnostics } : undefined;\n            }\n\n            // Parses out a JSDoc type expression.\n            export function parseJSDocTypeExpression(mayOmitBraces?: boolean): JSDocTypeExpression {\n                const result = <JSDocTypeExpression>createNode(SyntaxKind.JSDocTypeExpression, scanner.getTokenPos());\n\n                const hasBrace = (mayOmitBraces ? 
parseOptional : parseExpected)(SyntaxKind.OpenBraceToken);\n                result.type = doInsideOfContext(NodeFlags.JSDoc, parseType);\n                if (!mayOmitBraces || hasBrace) {\n                    parseExpected(SyntaxKind.CloseBraceToken);\n                }\n\n                fixupParentReferences(result);\n                return finishNode(result);\n            }\n\n            export function parseIsolatedJSDocComment(content: string, start: number, length: number): { jsDoc: JSDoc, diagnostics: Diagnostic[] } | undefined {\n                initializeState(content, ScriptTarget.Latest, /*_syntaxCursor:*/ undefined, ScriptKind.JS);\n                sourceFile = <SourceFile>{ languageVariant: LanguageVariant.Standard, text: content }; // tslint:disable-line no-object-literal-type-assertion\n                const jsDoc = parseJSDocCommentWorker(start, length);\n                const diagnostics = parseDiagnostics;\n                clearState();\n\n                return jsDoc ? 
{ jsDoc, diagnostics } : undefined;\n            }\n\n            export function parseJSDocComment(parent: HasJSDoc, start: number, length: number): JSDoc {\n                const saveToken = currentToken;\n                const saveParseDiagnosticsLength = parseDiagnostics.length;\n                const saveParseErrorBeforeNextFinishedNode = parseErrorBeforeNextFinishedNode;\n\n                const comment = parseJSDocCommentWorker(start, length);\n                if (comment) {\n                    comment.parent = parent;\n                }\n\n                if (contextFlags & NodeFlags.JavaScriptFile) {\n                    if (!sourceFile.jsDocDiagnostics) {\n                        sourceFile.jsDocDiagnostics = [];\n                    }\n                    sourceFile.jsDocDiagnostics.push(...parseDiagnostics);\n                }\n                currentToken = saveToken;\n                parseDiagnostics.length = saveParseDiagnosticsLength;\n                parseErrorBeforeNextFinishedNode = saveParseErrorBeforeNextFinishedNode;\n\n                return comment;\n            }\n\n            const enum JSDocState {\n                BeginningOfLine,\n                SawAsterisk,\n                SavingComments,\n            }\n\n            const enum PropertyLikeParse {\n                Property,\n                Parameter,\n            }\n\n            export function parseJSDocCommentWorker(start: number, length: number): JSDoc {\n                const content = sourceText;\n                start = start || 0;\n                const end = length === undefined ? 
content.length : start + length;
                length = end - start;

                Debug.assert(start >= 0);
                Debug.assert(start <= end);
                Debug.assert(end <= content.length);

                let tags: JSDocTag[];
                let tagsPos: number;
                let tagsEnd: number;
                const comments: string[] = [];
                let result: JSDoc;

                // Check for /** (JSDoc opening part)
                if (!isJsDocStart(content, start)) {
                    return result;
                }

                // + 3 for leading /**, - 5 in total for /** */
                scanner.scanRange(start + 3, length - 5, () => {
                    // Initially we can parse out a tag.  We also have seen a starting asterisk.
                    // This is so that /** * @type */ doesn't parse.
                    let state = JSDocState.SawAsterisk;
                    let margin: number | undefined = undefined;
                    // + 4 for leading '/** '
                    let indent = start - Math.max(content.lastIndexOf("\n", start), 0) + 4;
                    function pushComment(text: string) {
                        if (!margin) {
                            margin = indent;
                        }
                        comments.push(text);
                        indent += text.length;
                    }

                    let t = nextJSDocToken();
                    while (t === SyntaxKind.WhitespaceTrivia) {
                        t = nextJSDocToken();
                    }
                    if (t === SyntaxKind.NewLineTrivia) {
                        state = JSDocState.BeginningOfLine;
                        indent = 0;
                        t = nextJSDocToken();
                    }
                    loop: while (true) {
                        switch (t) {
                            case SyntaxKind.AtToken:
                                if (state === JSDocState.BeginningOfLine || state === JSDocState.SawAsterisk) {
                                    removeTrailingNewlines(comments);
                                    parseTag(indent);
                                    // NOTE: According to usejsdoc.org, a tag goes to end of line, except the last tag.
                                    // Real-world comments may break this rule, so "BeginningOfLine" will not be a real line beginning
                                    // for malformed examples like `/** @param {string} x @returns {number} the length */`
                                    state = JSDocState.BeginningOfLine;
                                    margin = undefined;
                                    indent++;
                                }
                                else {
                                    pushComment(scanner.getTokenText());
                                }
                                break;
                            case SyntaxKind.NewLineTrivia:
                                comments.push(scanner.getTokenText());
                                state = JSDocState.BeginningOfLine;
                                indent = 0;
                                break;
                            case SyntaxKind.AsteriskToken:
                                const asterisk = scanner.getTokenText();
                                if (state === JSDocState.SawAsterisk || state === JSDocState.SavingComments) {
                                    // If we've already seen an asterisk, then we can no longer parse a tag on this line
                                    state = JSDocState.SavingComments;
                                    pushComment(asterisk);
                                }
                                else {
                                    // Ignore the first asterisk on a line
                                    state = JSDocState.SawAsterisk;
                                    indent += asterisk.length;
                                }
                                break;
                            case SyntaxKind.Identifier:
                                // Anything else is doc comment text. We just save it. Because it
                                // wasn't a tag, we can no longer parse a tag on this line until we hit the next
                                // line break.
                                pushComment(scanner.getTokenText());
                                state = JSDocState.SavingComments;
                                break;
                            case SyntaxKind.WhitespaceTrivia:
                                // only collect whitespace if we're already saving comments or have just crossed the comment indent margin
                                const whitespace = scanner.getTokenText();
                                if (state === JSDocState.SavingComments) {
                                    comments.push(whitespace);
                                }
                                else if (margin !== undefined && indent + whitespace.length > margin) {
                                    comments.push(whitespace.slice(margin - indent - 1));
                                }
                                indent += whitespace.length;
                                break;
                            case SyntaxKind.EndOfFileToken:
                                break loop;
                            default:
                                // anything other than whitespace or asterisk at the beginning of the line starts the comment text
                                state = JSDocState.SavingComments;
                                pushComment(scanner.getTokenText());
                                break;
                        }
                        t = nextJSDocToken();
                    }
                    removeLeadingNewlines(comments);
                    removeTrailingNewlines(comments);
                    result = createJSDocComment();

                });

                return result;

                function removeLeadingNewlines(comments: string[]) {
                    while (comments.length && (comments[0] === "\n" || comments[0] === "\r")) {
                        comments.shift();
                    }
                }

                function removeTrailingNewlines(comments: string[]) {
                    while (comments.length && (comments[comments.length - 1] === "\n" || comments[comments.length - 1] === "\r")) {
                        comments.pop();
                    }
                }

                function isJsDocStart(content: string, start: number) {
                    return content.charCodeAt(start) === CharacterCodes.slash &&
                        content.charCodeAt(start + 1) === CharacterCodes.asterisk &&
                        content.charCodeAt(start + 2) === CharacterCodes.asterisk &&
                        content.charCodeAt(start + 3) !== CharacterCodes.asterisk;
                }

                function createJSDocComment(): JSDoc {
                    const result = <JSDoc>createNode(SyntaxKind.JSDocComment, start);
                    result.tags = tags && createNodeArray(tags, tagsPos, tagsEnd);
                    result.comment = comments.length ? comments.join("") : undefined;
                    return finishNode(result, end);
                }

                function skipWhitespace(): void {
                    while (token() === SyntaxKind.WhitespaceTrivia || token() === SyntaxKind.NewLineTrivia) {
                        nextJSDocToken();
                    }
                }

                function parseTag(indent: number) {
                    Debug.assert(token() === SyntaxKind.AtToken);
                    const atToken = <AtToken>createNode(SyntaxKind.AtToken, scanner.getTokenPos());
                    atToken.end = scanner.getTextPos();
                    nextJSDocToken();

                    const tagName = parseJSDocIdentifierName();
                    skipWhitespace();
                    if (!tagName) {
                        return;
                    }

                    let tag: JSDocTag;
                    if (tagName) {
                        switch (tagName.escapedText) {
                            case "augments":
                            case "extends":
                                tag = parseAugmentsTag(atToken, tagName);
                                break;
                            case "class":
                            case "constructor":
                                tag = parseClassTag(atToken, tagName);
                                break;
                            case "arg":
                            case "argument":
                            case "param":
                                tag = parseParameterOrPropertyTag(atToken, tagName, PropertyLikeParse.Parameter);
                                break;
                            case "return":
                            case "returns":
                                tag = parseReturnTag(atToken, tagName);
                                break;
                            case "template":
                                tag = parseTemplateTag(atToken, tagName);
                                break;
                            case "type":
                                tag = parseTypeTag(atToken, tagName);
                                break;
                            case "typedef":
                                tag = parseTypedefTag(atToken, tagName);
                                break;
                            default:
                                tag = parseUnknownTag(atToken, tagName);
                                break;
                        }
                    }
                    else {
                        tag = parseUnknownTag(atToken, tagName);
                    }

                    if (!tag) {
                        // a badly malformed tag should not be added to the list of tags
                        return;
                    }
                    tag.comment = parseTagComments(indent + tag.end - tag.pos);
                    addTag(tag);
                }

                function parseTagComments(indent: number): string | undefined {
                    const comments: string[] = [];
                    let state = JSDocState.BeginningOfLine;
                    let margin: number | undefined;
                    function pushComment(text: string) {
                        if (!margin) {
                            margin = indent;
                        }
                        comments.push(text);
                        indent += text.length;
                    }
                    let tok = token() as JsDocSyntaxKind;
                    loop: while (true) {
                        switch (tok) {
                            case SyntaxKind.NewLineTrivia:
                                if (state >= JSDocState.SawAsterisk) {
                                    state = JSDocState.BeginningOfLine;
                                    comments.push(scanner.getTokenText());
                                }
                                indent = 0;
                                break;
                            case SyntaxKind.AtToken:
                                scanner.setTextPos(scanner.getTextPos() - 1);
                                // falls through
                            case SyntaxKind.EndOfFileToken:
                                // Done
                                break loop;
                            case SyntaxKind.WhitespaceTrivia:
                                if (state === JSDocState.SavingComments) {
                                    pushComment(scanner.getTokenText());
                                }
                                else {
                                    const whitespace = scanner.getTokenText();
                                    // if the whitespace crosses the margin, take only the whitespace that passes the margin
                                    if (margin !== undefined && indent + whitespace.length > margin) {
                                        comments.push(whitespace.slice(margin - indent - 1));
                                    }
                                    indent += whitespace.length;
                                }
                                break;
                            case SyntaxKind.AsteriskToken:
                                if (state === JSDocState.BeginningOfLine) {
                                    // leading asterisks start recording on the *next* (non-whitespace) token
                                    state = JSDocState.SawAsterisk;
                                    indent += 1;
                                    break;
                                }
                                // record the * as a comment
                                // falls through
                            default:
                                state = JSDocState.SavingComments; // leading identifiers start recording as well
                                pushComment(scanner.getTokenText());
                                break;
                        }
                        tok = nextJSDocToken();
                    }

                    removeLeadingNewlines(comments);
                    removeTrailingNewlines(comments);
                    return comments.length === 0 ? undefined : comments.join("");
                }

                function parseUnknownTag(atToken: AtToken, tagName: Identifier) {
                    const result = <JSDocTag>createNode(SyntaxKind.JSDocTag, atToken.pos);
                    result.atToken = atToken;
                    result.tagName = tagName;
                    return finishNode(result);
                }

                function addTag(tag: JSDocTag): void {
                    if (!tags) {
                        tags = [tag];
                        tagsPos = tag.pos;
                    }
                    else {
                        tags.push(tag);
                    }
                    tagsEnd = tag.end;
                }

                function tryParseTypeExpression(): JSDocTypeExpression | undefined {
                    skipWhitespace();
                    return token() === SyntaxKind.OpenBraceToken ? parseJSDocTypeExpression() : undefined;
                }

                function parseBracketNameInPropertyAndParamTag(): { name: EntityName, isBracketed: boolean } {
                    // Looking for something like '[foo]', 'foo', '[foo.bar]' or 'foo.bar'
                    const isBracketed = parseOptional(SyntaxKind.OpenBracketToken);
                    const name = parseJSDocEntityName();
                    if (isBracketed) {
                        skipWhitespace();

                        // May have an optional default, e.g. '[foo = 42]'
                        if (parseOptionalToken(SyntaxKind.EqualsToken)) {
                            parseExpression();
                        }

                        parseExpected(SyntaxKind.CloseBracketToken);
                    }

                    return { name, isBracketed };
                }

                function isObjectOrObjectArrayTypeReference(node: TypeNode): boolean {
                    switch (node.kind) {
                        case SyntaxKind.ObjectKeyword:
                            return true;
                        case SyntaxKind.ArrayType:
                            return isObjectOrObjectArrayTypeReference((node as ArrayTypeNode).elementType);
                        default:
                            return isTypeReferenceNode(node) && ts.isIdentifier(node.typeName) && node.typeName.escapedText === "Object";
                    }
                }

                function parseParameterOrPropertyTag(atToken: AtToken, tagName: Identifier, target: PropertyLikeParse): JSDocParameterTag | JSDocPropertyTag {
                    let typeExpression = tryParseTypeExpression();
                    let isNameFirst = !typeExpression;
                    skipWhitespace();

                    const { name, isBracketed } = parseBracketNameInPropertyAndParamTag();
                    skipWhitespace();

                    if (isNameFirst) {
                        typeExpression = tryParseTypeExpression();
                    }

                    const result = target === PropertyLikeParse.Parameter ?
                        <JSDocParameterTag>createNode(SyntaxKind.JSDocParameterTag, atToken.pos) :
                        <JSDocPropertyTag>createNode(SyntaxKind.JSDocPropertyTag, atToken.pos);
                    const nestedTypeLiteral = parseNestedTypeLiteral(typeExpression, name);
                    if (nestedTypeLiteral) {
                        typeExpression = nestedTypeLiteral;
                        isNameFirst = true;
                    }
                    result.atToken = atToken;
                    result.tagName = tagName;
                    result.typeExpression = typeExpression;
                    result.name = name;
                    result.isNameFirst = isNameFirst;
                    result.isBracketed = isBracketed;
                    return finishNode(result);
                }

                function parseNestedTypeLiteral(typeExpression: JSDocTypeExpression, name: EntityName) {
                    if (typeExpression && isObjectOrObjectArrayTypeReference(typeExpression.type)) {
                        const typeLiteralExpression = <JSDocTypeExpression>createNode(SyntaxKind.JSDocTypeExpression, scanner.getTokenPos());
                        let child: JSDocParameterTag | false;
                        let jsdocTypeLiteral: JSDocTypeLiteral;
                        const start = scanner.getStartPos();
                        let children: JSDocParameterTag[];
                        while (child = tryParse(() => parseChildParameterOrPropertyTag(PropertyLikeParse.Parameter, name))) {
                            children = append(children, child);
                        }
                        if (children) {
                            jsdocTypeLiteral = <JSDocTypeLiteral>createNode(SyntaxKind.JSDocTypeLiteral, start);
                            jsdocTypeLiteral.jsDocPropertyTags = children;
                            if (typeExpression.type.kind === SyntaxKind.ArrayType) {
                                jsdocTypeLiteral.isArrayType = true;
                            }
                            typeLiteralExpression.type = finishNode(jsdocTypeLiteral);
                            return finishNode(typeLiteralExpression);
                        }
                    }
                }

                function parseReturnTag(atToken: AtToken, tagName: Identifier): JSDocReturnTag {
                    if (forEach(tags, t => t.kind === SyntaxKind.JSDocReturnTag)) {
                        parseErrorAtPosition(tagName.pos, scanner.getTokenPos() - tagName.pos, Diagnostics._0_tag_already_specified, tagName.escapedText);
                    }

                    const result = <JSDocReturnTag>createNode(SyntaxKind.JSDocReturnTag, atToken.pos);
                    result.atToken = atToken;
                    result.tagName = tagName;
                    result.typeExpression = tryParseTypeExpression();
                    return finishNode(result);
                }

                function parseTypeTag(atToken: AtToken, tagName: Identifier): JSDocTypeTag {
                    if (forEach(tags, t => t.kind === SyntaxKind.JSDocTypeTag)) {
                        parseErrorAtPosition(tagName.pos, scanner.getTokenPos() - tagName.pos, Diagnostics._0_tag_already_specified, tagName.escapedText);
                    }

                    const result = <JSDocTypeTag>createNode(SyntaxKind.JSDocTypeTag, atToken.pos);
                    result.atToken = atToken;
                    result.tagName = tagName;
                    result.typeExpression = parseJSDocTypeExpression(/*mayOmitBraces*/ true);
                    return finishNode(result);
                }

                function parseAugmentsTag(atToken: AtToken, tagName: Identifier): JSDocAugmentsTag {
                    const result = <JSDocAugmentsTag>createNode(SyntaxKind.JSDocAugmentsTag, atToken.pos);
                    result.atToken = atToken;
                    result.tagName = tagName;
                    result.class = parseExpressionWithTypeArgumentsForAugments();
                    return finishNode(result);
                }

                function parseExpressionWithTypeArgumentsForAugments(): ExpressionWithTypeArguments & { expression: Identifier | PropertyAccessEntityNameExpression } {
                    const usedBrace = parseOptional(SyntaxKind.OpenBraceToken);
                    const node = createNode(SyntaxKind.ExpressionWithTypeArguments) as ExpressionWithTypeArguments & { expression: Identifier | PropertyAccessEntityNameExpression };
                    node.expression = parsePropertyAccessEntityNameExpression();
                    node.typeArguments = tryParseTypeArguments();
                    const res = finishNode(node);
                    if (usedBrace) {
                        parseExpected(SyntaxKind.CloseBraceToken);
                    }
                    return res;
                }

                function parsePropertyAccessEntityNameExpression() {
                    let node: Identifier | PropertyAccessEntityNameExpression = parseJSDocIdentifierName(/*createIfMissing*/ true);
                    while (parseOptional(SyntaxKind.DotToken)) {
                        const prop: PropertyAccessEntityNameExpression = createNode(SyntaxKind.PropertyAccessExpression, node.pos) as PropertyAccessEntityNameExpression;
                        prop.expression = node;
                        prop.name = parseJSDocIdentifierName();
                        node = finishNode(prop);
                    }
                    return node;
                }

                function parseClassTag(atToken: AtToken, tagName: Identifier): JSDocClassTag {
                    const tag = <JSDocClassTag>createNode(SyntaxKind.JSDocClassTag, atToken.pos);
                    tag.atToken = atToken;
                    tag.tagName = tagName;
                    return finishNode(tag);
                }

                function parseTypedefTag(atToken: AtToken, tagName: Identifier): JSDocTypedefTag {
                    const typeExpression = tryParseTypeExpression();
                    skipWhitespace();

                    const typedefTag = <JSDocTypedefTag>createNode(SyntaxKind.JSDocTypedefTag, atToken.pos);
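// `parseTypedefTag` below resolves a dotted full name such as
// `@typedef {Object} app.models.Foo` to the rightmost identifier (`Foo`).
// A minimal standalone sketch of that rule, using a hypothetical
// string-based helper (not part of the parser, which walks nested
// ModuleDeclaration bodies instead):

```typescript
// Hypothetical illustration only: model the typedef-name rule on a plain
// dotted string, returning the rightmost identifier as the name.
function rightmostTypedefName(fullName: string): string {
    const parts = fullName.split(".");
    return parts[parts.length - 1];
}

console.log(rightmostTypedefName("app.models.Foo")); // "Foo"
console.log(rightmostTypedefName("Foo"));            // "Foo"
```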
                    typedefTag.atToken = atToken;
                    typedefTag.tagName = tagName;
                    typedefTag.fullName = parseJSDocTypeNameWithNamespace(/*flags*/ 0);
                    if (typedefTag.fullName) {
                        let rightNode = typedefTag.fullName;
                        while (true) {
                            if (rightNode.kind === SyntaxKind.Identifier || !rightNode.body) {
                                // if node is identifier - use it as name
                                // otherwise use name of the rightmost part that we were able to parse
                                typedefTag.name = rightNode.kind === SyntaxKind.Identifier ? rightNode : rightNode.name;
                                break;
                            }
                            rightNode = rightNode.body;
                        }
                    }
                    skipWhitespace();

                    typedefTag.typeExpression = typeExpression;
                    if (!typeExpression || isObjectOrObjectArrayTypeReference(typeExpression.type)) {
                        let child: JSDocTypeTag | JSDocPropertyTag | false;
                        let jsdocTypeLiteral: JSDocTypeLiteral;
                        let childTypeTag: JSDocTypeTag;
                        const start = scanner.getStartPos();
                        while (child = tryParse(() => parseChildParameterOrPropertyTag(PropertyLikeParse.Property))) {
                            if (!jsdocTypeLiteral) {
                                jsdocTypeLiteral = <JSDocTypeLiteral>createNode(SyntaxKind.JSDocTypeLiteral, start);
                            }
                            if (child.kind === SyntaxKind.JSDocTypeTag) {
                                if (childTypeTag) {
                                    break;
                                }
                                else {
                                    childTypeTag = child;
                                }
                            }
                            else {
                                jsdocTypeLiteral.jsDocPropertyTags = append(jsdocTypeLiteral.jsDocPropertyTags as MutableNodeArray<JSDocPropertyTag>, child);
                            }
                        }
                        if (jsdocTypeLiteral) {
                            if (typeExpression && typeExpression.type.kind === SyntaxKind.ArrayType) {
                                jsdocTypeLiteral.isArrayType = true;
                            }
                            typedefTag.typeExpression = childTypeTag && childTypeTag.typeExpression && !isObjectOrObjectArrayTypeReference(childTypeTag.typeExpression.type) ?
                                childTypeTag.typeExpression :
                                finishNode(jsdocTypeLiteral);
                        }
                    }

                    return finishNode(typedefTag);

                    function parseJSDocTypeNameWithNamespace(flags: NodeFlags) {
                        const pos = scanner.getTokenPos();
                        const typeNameOrNamespaceName = parseJSDocIdentifierName();

                        if (typeNameOrNamespaceName && parseOptional(SyntaxKind.DotToken)) {
                            const jsDocNamespaceNode = <JSDocNamespaceDeclaration>createNode(SyntaxKind.ModuleDeclaration, pos);
                            jsDocNamespaceNode.flags |= flags;
                            jsDocNamespaceNode.name = typeNameOrNamespaceName;
                            jsDocNamespaceNode.body = parseJSDocTypeNameWithNamespace(NodeFlags.NestedNamespace);
                            return finishNode(jsDocNamespaceNode);
                        }

                        if (typeNameOrNamespaceName && flags & NodeFlags.NestedNamespace) {
                            typeNameOrNamespaceName.isInJSDocNamespace = true;
                        }
                        return typeNameOrNamespaceName;
                    }
                }

                function escapedTextsEqual(a: EntityName, b: EntityName): boolean {
                    while (!ts.isIdentifier(a) || !ts.isIdentifier(b)) {
                        if (!ts.isIdentifier(a) && !ts.isIdentifier(b) && a.right.escapedText === b.right.escapedText) {
                            a = a.left;
                            b = b.left;
                        }
                        else {
                            return false;
                        }
                    }
                    return a.escapedText === b.escapedText;
                }

                function parseChildParameterOrPropertyTag(target: PropertyLikeParse.Property): JSDocTypeTag | JSDocPropertyTag | false;
                function parseChildParameterOrPropertyTag(target: PropertyLikeParse.Parameter, name: EntityName): JSDocParameterTag | false;
                function parseChildParameterOrPropertyTag(target: PropertyLikeParse, name?: EntityName): JSDocTypeTag | JSDocPropertyTag | JSDocParameterTag | false {
                    let canParseTag = true;
                    let seenAsterisk = false;
                    while (true) {
                        switch (nextJSDocToken()) {
                            case SyntaxKind.AtToken:
                                if (canParseTag) {
                                    const child = tryParseChildTag(target);
                                    if (child && child.kind === SyntaxKind.JSDocParameterTag &&
                                        (ts.isIdentifier(child.name) || !escapedTextsEqual(name, child.name.left))) {
                                        return false;
                                    }
                                    return child;
                                }
                                seenAsterisk = false;
                                break;
                            case SyntaxKind.NewLineTrivia:
                                canParseTag = true;
                                seenAsterisk = false;
                                break;
                            case SyntaxKind.AsteriskToken:
                                if (seenAsterisk) {
                                    canParseTag = false;
                                }
                                seenAsterisk = true;
                                break;
                            case SyntaxKind.Identifier:
                                canParseTag = false;
                                break;
                            case SyntaxKind.EndOfFileToken:
                                return false;
                        }
                    }
                }

                function tryParseChildTag(target: PropertyLikeParse): JSDocTypeTag | JSDocPropertyTag | JSDocParameterTag | false {
                    Debug.assert(token() === SyntaxKind.AtToken);
                    const atToken = <AtToken>createNode(SyntaxKind.AtToken);
                    atToken.end = scanner.getTextPos();
                    nextJSDocToken();

                    const tagName = parseJSDocIdentifierName();
                    skipWhitespace();
                    if (!tagName) {
                        return false;
                    }
                    let t: PropertyLikeParse;
                    switch (tagName.escapedText) {
                        case "type":
                            return target === PropertyLikeParse.Property && parseTypeTag(atToken, tagName);
                        case "prop":
                        case "property":
                            t = PropertyLikeParse.Property;
                            break;
                        case "arg":
                        case "argument":
                        case "param":
                            t = PropertyLikeParse.Parameter;
                            break;
                        default:
                            return false;
                    }
                    if (target !== t) {
                        return false;
                    }
                    const tag = parseParameterOrPropertyTag(atToken, tagName, target);
                    tag.comment = parseTagComments(tag.end - tag.pos);
                    return tag;
                }

                function parseTemplateTag(atToken: AtToken, tagName: Identifier): JSDocTemplateTag | undefined {
                    if (some(tags, isJSDocTemplateTag)) {
                        parseErrorAtPosition(tagName.pos, scanner.getTokenPos() - tagName.pos, Diagnostics._0_tag_already_specified, tagName.escapedText);
                    }

                    // Type parameter list looks like '@template T,U,V'
                    const typeParameters = [];
                    const typeParametersPos = getNodePos();

                    while (true) {
                        const typeParameter = <TypeParameterDeclaration>createNode(SyntaxKind.TypeParameter);
                        const name = parseJSDocIdentifierNameWithOptionalBraces();
                        skipWhitespace();
                        if (!name) {
                            parseErrorAtPosition(scanner.getStartPos(), 0, Diagnostics.Identifier_expected);
                            return undefined;
                        }

                        typeParameter.name = name;
                        finishNode(typeParameter);

                        typeParameters.push(typeParameter);

                        if (token() === SyntaxKind.CommaToken) {
                            nextJSDocToken();
                            skipWhitespace();
                        }
                        else {
                            break;
                        }
                    }
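// The loop above consumes the comma-separated list form '@template T,U,V',
// producing one TypeParameterDeclaration per name. A standalone sketch of
// that list shape with a hypothetical helper (the real parser scans token
// by token rather than splitting strings):

```typescript
// Hypothetical illustration: extract type parameter names from the text
// following an @template tag, mirroring the comma-separated grammar above.
function templateParameterNames(tagText: string): string[] {
    return tagText.split(",").map(name => name.trim()).filter(name => name.length > 0);
}

console.log(templateParameterNames("T,U,V")); // ["T", "U", "V"]
console.log(templateParameterNames("T, U"));  // ["T", "U"]
```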
                    const result = <JSDocTemplateTag>createNode(SyntaxKind.JSDocTemplateTag, atToken.pos);
                    result.atToken = atToken;
                    result.tagName = tagName;
                    result.typeParameters = createNodeArray(typeParameters, typeParametersPos);
                    finishNode(result);
                    return result;
                }

                function parseJSDocIdentifierNameWithOptionalBraces(): Identifier | undefined {
                    const parsedBrace = parseOptional(SyntaxKind.OpenBraceToken);
                    const res = parseJSDocIdentifierName();
                    if (parsedBrace) {
                        parseExpected(SyntaxKind.CloseBraceToken);
                    }
                    return res;
                }

                function nextJSDocToken(): JsDocSyntaxKind {
                    return currentToken = scanner.scanJSDocToken();
                }

                function parseJSDocEntityName(): EntityName {
                    let entity: EntityName = parseJSDocIdentifierName(/*createIfMissing*/ true);
                    if (parseOptional(SyntaxKind.OpenBracketToken)) {
                        parseExpected(SyntaxKind.CloseBracketToken);
                        // Note that y[] is accepted as an entity name, but the postfix brackets are not saved for checking.
                        // Technically usejsdoc.org requires them for specifying a property of a type equivalent to Array<{ x: ...}>
                        // but it's not worth it to enforce that restriction.
                    }
                    while (parseOptional(SyntaxKind.DotToken)) {
                        const name = parseJSDocIdentifierName(/*createIfMissing*/ true);
                        if (parseOptional(SyntaxKind.OpenBracketToken)) {
                            parseExpected(SyntaxKind.CloseBracketToken);
                        }
                        entity = createQualifiedName(entity, name);
                    }
                    return entity;
                }

                function parseJSDocIdentifierName(): Identifier | undefined;
                function parseJSDocIdentifierName(createIfMissing: true): Identifier;
                function parseJSDocIdentifierName(createIfMissing = false): Identifier | undefined {
                    if (!tokenIsIdentifierOrKeyword(token())) {
                        if (createIfMissing) {
                            return createMissingNode<Identifier>(SyntaxKind.Identifier, /*reportAtCurrentPosition*/ true, Diagnostics.Identifier_expected);
                        }
                        else {
                            parseErrorAtCurrentToken(Diagnostics.Identifier_expected);
                            return undefined;
                        }
                    }

                    const pos = scanner.getTokenPos();
                    const end = scanner.getTextPos();
                    const result = <Identifier>createNode(SyntaxKind.Identifier, pos);
                    result.escapedText = escapeLeadingUnderscores(content.substring(pos, end));
                    finishNode(result, end);

                    nextJSDocToken();
                    return result;
                }
            }
        }
    }

    namespace IncrementalParser {
        export function updateSourceFile(sourceFile: SourceFile, newText: string, textChangeRange: TextChangeRange, aggressiveChecks: boolean): SourceFile {
            aggressiveChecks = aggressiveChecks || Debug.shouldAssert(AssertionLevel.Aggressive);

            checkChangeRange(sourceFile, newText, textChangeRange, aggressiveChecks);
            if (textChangeRangeIsUnchanged(textChangeRange)) {
                // if the text didn't change, then we can just return our current source file as-is.
                return sourceFile;
            }

            if (sourceFile.statements.length === 0) {
                // If we don't have any statements in the current source file, then there's no real
                // way to incrementally parse.  So just do a full parse instead.
                return Parser.parseSourceFile(sourceFile.fileName, newText, sourceFile.languageVersion, /*syntaxCursor*/ undefined, /*setParentNodes*/ true, sourceFile.scriptKind);
            }

            // Make sure we're not trying to incrementally update a source file more than once.  Once
            // we do an update the original source file is considered unusable from that point onwards.
            //
            // This is because we do incremental parsing in-place.  i.e. we take nodes from the old
            // tree and give them new positions and parents.  From that point on, trusting the old
            // tree at all is not possible as far too much of it may violate invariants.
            const incrementalSourceFile = <IncrementalNode><Node>sourceFile;
            Debug.assert(!incrementalSourceFile.hasBeenIncrementallyParsed);
            incrementalSourceFile.hasBeenIncrementallyParsed = true;

            const oldText = sourceFile.text;
            const syntaxCursor = createSyntaxCursor(sourceFile);

            // Make the actual change larger so that we know to reparse anything whose lookahead
            // might have intersected the change.
            const changeRange = extendToAffectedRange(sourceFile, textChangeRange);
            checkChangeRange(sourceFile, newText, changeRange, aggressiveChecks);

            // Ensure that extending the affected range only moved the start of the change range
            // earlier in the file.
            Debug.assert(changeRange.span.start <= textChangeRange.span.start);
            Debug.assert(textSpanEnd(changeRange.span) === textSpanEnd(textChangeRange.span));
            Debug.assert(textSpanEnd(textChangeRangeNewSpan(changeRange)) ===
textSpanEnd(textChangeRangeNewSpan(textChangeRange)));\n\n            // This is the amount the nodes after the edit range need to be adjusted.  It can be\n            // positive (if the edit added characters), negative (if the edit deleted characters)\n            // or zero (if this was a pure overwrite with nothing added/removed).\n            const delta = textChangeRangeNewSpan(changeRange).length - changeRange.span.length;\n\n            // If we added or removed characters during the edit, then we need to go and adjust all\n            // the nodes after the edit.  Those nodes may move forward (if we inserted chars) or they\n            // may move backward (if we deleted chars).\n            //\n            // Doing this helps us out in two ways.  First, it means that any nodes/tokens we want\n            // to reuse are already at the appropriate position in the new text.  That way when we\n            // reuse them, we don't have to figure out if they need to be adjusted.  Second, it makes\n            // it very easy to determine if we can reuse a node.  If the node's position is at where\n            // we are in the text, then we can reuse it.  Otherwise we can't.  If the node's position\n            // is ahead of us, then we'll need to rescan tokens.  If the node's position is behind\n            // us, then we'll need to skip it or crumble it as appropriate.\n            //\n            // We will also adjust the positions of nodes that intersect the change range.\n            // By doing this, we ensure that all the positions in the old tree are consistent, not\n            // just the positions of nodes entirely before/after the change range.  By being\n            // consistent, we can then easily map from positions to nodes in the old tree.\n            //\n            // Also, mark any syntax elements that intersect the changed span.  
We know, up front,\n            // that we cannot reuse these elements.\n            updateTokenPositionsAndMarkElements(incrementalSourceFile,\n                changeRange.span.start, textSpanEnd(changeRange.span), textSpanEnd(textChangeRangeNewSpan(changeRange)), delta, oldText, newText, aggressiveChecks);\n\n            // Now that we've set up our internal incremental state, just proceed and parse the\n            // source file in the normal fashion.  When possible the parser will retrieve and\n            // reuse nodes from the old tree.\n            //\n            // Note: passing in 'true' for setNodeParents is very important.  When incrementally\n            // parsing, we will be reusing nodes from the old tree, and placing them into new\n            // parents.  If we don't set the parents now, we'll end up with an observably\n            // inconsistent tree.  Setting the parents on the new tree should be very fast.  We\n            // will immediately bail out of walking any subtrees when we can see that their parents\n            // are already correct.\n            const result = Parser.parseSourceFile(sourceFile.fileName, newText, sourceFile.languageVersion, syntaxCursor, /*setParentNodes*/ true, sourceFile.scriptKind);\n\n            return result;\n        }\n\n        function moveElementEntirelyPastChangeRange(element: IncrementalElement, isArray: boolean, delta: number, oldText: string, newText: string, aggressiveChecks: boolean) {\n            if (isArray) {\n                visitArray(<IncrementalNodeArray>element);\n            }\n            else {\n                visitNode(<IncrementalNode>element);\n            }\n            return;\n\n            function visitNode(node: IncrementalNode) {\n                let text = \"\";\n                if (aggressiveChecks && shouldCheckNode(node)) {\n                    text = oldText.substring(node.pos, node.end);\n                }\n\n                // Ditch any existing LS children we may have 
created.  This way we can avoid\n                // moving them forward.\n                if (node._children) {\n                    node._children = undefined;\n                }\n\n                node.pos += delta;\n                node.end += delta;\n\n                if (aggressiveChecks && shouldCheckNode(node)) {\n                    Debug.assert(text === newText.substring(node.pos, node.end));\n                }\n\n                forEachChild(node, visitNode, visitArray);\n                if (hasJSDocNodes(node)) {\n                    for (const jsDocComment of node.jsDoc) {\n                        forEachChild(jsDocComment, visitNode, visitArray);\n                    }\n                }\n                checkNodePositions(node, aggressiveChecks);\n            }\n\n            function visitArray(array: IncrementalNodeArray) {\n                array._children = undefined;\n                array.pos += delta;\n                array.end += delta;\n\n                for (const node of array) {\n                    visitNode(node);\n                }\n            }\n        }\n\n        function shouldCheckNode(node: Node) {\n            switch (node.kind) {\n                case SyntaxKind.StringLiteral:\n                case SyntaxKind.NumericLiteral:\n                case SyntaxKind.Identifier:\n                    return true;\n            }\n\n            return false;\n        }\n\n        function adjustIntersectingElement(element: IncrementalElement, changeStart: number, changeRangeOldEnd: number, changeRangeNewEnd: number, delta: number) {\n            Debug.assert(element.end >= changeStart, \"Adjusting an element that was entirely before the change range\");\n            Debug.assert(element.pos <= changeRangeOldEnd, \"Adjusting an element that was entirely after the change range\");\n            Debug.assert(element.pos <= element.end);\n\n            // We have an element that intersects the change range in some way.  
It may have its\n            // start, or its end (or both) in the changed range.  We want to adjust any part\n            // that intersects such that the final tree is in a consistent state.  i.e. all\n            // children have spans within the span of their parent, and all siblings are ordered\n            // properly.\n\n            // We may need to update both the 'pos' and the 'end' of the element.\n\n            // If the 'pos' is before the start of the change, then we don't need to touch it.\n            // If it isn't, then the 'pos' must be inside the change.  How we update it will\n            // depend if delta is positive or negative. If delta is positive then we have\n            // something like:\n            //\n            //  -------------------AAA-----------------\n            //  -------------------BBBCCCCCCC-----------------\n            //\n            // In this case, we consider any node that started in the change range to still be\n            // starting at the same position.\n            //\n            // However, if the delta is negative, then we instead have something like this:\n            //\n            //  -------------------XXXYYYYYYY-----------------\n            //  -------------------ZZZ-----------------\n            //\n            // In this case, any element that started in the 'X' range will keep its position.\n            // However, any element that started after that will have its pos adjusted to be\n            // at the end of the new range.  i.e. any node that started in the 'Y' range will\n            // be adjusted to have its start at the end of the 'Z' range.\n            //\n            // The element will keep its position if possible, or move backward to the new-end\n            // if it's in the 'Y' range.\n            element.pos = Math.min(element.pos, changeRangeNewEnd);\n\n            // If the 'end' is after the change range, then we always adjust it by the delta\n            // amount.  
However, if the end is in the change range, then how we adjust it\n            // will depend on if delta is positive or negative.  If delta is positive then we\n            // have something like:\n            //\n            //  -------------------AAA-----------------\n            //  -------------------BBBCCCCCCC-----------------\n            //\n            // In this case, we consider any node that ended inside the change range to keep its\n            // end position.\n            //\n            // However, if the delta is negative, then we instead have something like this:\n            //\n            //  -------------------XXXYYYYYYY-----------------\n            //  -------------------ZZZ-----------------\n            //\n            // In this case, any element that ended in the 'X' range will keep its position.\n            // However, any element that ended after that will have its end adjusted to be\n            // at the end of the new range.  i.e. any node that ended in the 'Y' range will\n            // be adjusted to have its end at the end of the 'Z' range.\n            if (element.end >= changeRangeOldEnd) {\n                // Element ends after the change range.  Always adjust the end pos.\n                element.end += delta;\n            }\n            else {\n                // Element ends in the change range.  The element will keep its position if\n                // possible. 
Or move backward to the new-end if it's in the 'Y' range.\n                element.end = Math.min(element.end, changeRangeNewEnd);\n            }\n\n            Debug.assert(element.pos <= element.end);\n            if (element.parent) {\n                Debug.assert(element.pos >= element.parent.pos);\n                Debug.assert(element.end <= element.parent.end);\n            }\n        }\n\n        function checkNodePositions(node: Node, aggressiveChecks: boolean) {\n            if (aggressiveChecks) {\n                let pos = node.pos;\n                forEachChild(node, child => {\n                    Debug.assert(child.pos >= pos);\n                    pos = child.end;\n                });\n                Debug.assert(pos <= node.end);\n            }\n        }\n\n        function updateTokenPositionsAndMarkElements(\n            sourceFile: IncrementalNode,\n            changeStart: number,\n            changeRangeOldEnd: number,\n            changeRangeNewEnd: number,\n            delta: number,\n            oldText: string,\n            newText: string,\n            aggressiveChecks: boolean): void {\n\n            visitNode(sourceFile);\n            return;\n\n            function visitNode(child: IncrementalNode) {\n                Debug.assert(child.pos <= child.end);\n                if (child.pos > changeRangeOldEnd) {\n                    // Node is entirely past the change range.  We need to move both its pos and\n                    // end, forward or backward appropriately.\n                    moveElementEntirelyPastChangeRange(child, /*isArray*/ false, delta, oldText, newText, aggressiveChecks);\n                    return;\n                }\n\n                // Check if the element intersects the change range.  If it does, then it is not\n                // reusable.  
Also, we'll need to recurse to see what constituent portions we may\n                // be able to use.\n                const fullEnd = child.end;\n                if (fullEnd >= changeStart) {\n                    child.intersectsChange = true;\n                    child._children = undefined;\n\n                    // Adjust the pos or end (or both) of the intersecting element accordingly.\n                    adjustIntersectingElement(child, changeStart, changeRangeOldEnd, changeRangeNewEnd, delta);\n                    forEachChild(child, visitNode, visitArray);\n\n                    checkNodePositions(child, aggressiveChecks);\n                    return;\n                }\n\n                // Otherwise, the node is entirely before the change range.  No need to do anything with it.\n                Debug.assert(fullEnd < changeStart);\n            }\n\n            function visitArray(array: IncrementalNodeArray) {\n                Debug.assert(array.pos <= array.end);\n                if (array.pos > changeRangeOldEnd) {\n                    // Array is entirely after the change range.  We need to move it, and move any of\n                    // its children.\n                    moveElementEntirelyPastChangeRange(array, /*isArray*/ true, delta, oldText, newText, aggressiveChecks);\n                    return;\n                }\n\n                // Check if the element intersects the change range.  If it does, then it is not\n                // reusable.  
Also, we'll need to recurse to see what constituent portions we may\n                // be able to use.\n                const fullEnd = array.end;\n                if (fullEnd >= changeStart) {\n                    array.intersectsChange = true;\n                    array._children = undefined;\n\n                    // Adjust the pos or end (or both) of the intersecting array accordingly.\n                    adjustIntersectingElement(array, changeStart, changeRangeOldEnd, changeRangeNewEnd, delta);\n                    for (const node of array) {\n                        visitNode(node);\n                    }\n                    return;\n                }\n\n                // Otherwise, the array is entirely before the change range.  No need to do anything with it.\n                Debug.assert(fullEnd < changeStart);\n            }\n        }\n\n        function extendToAffectedRange(sourceFile: SourceFile, changeRange: TextChangeRange): TextChangeRange {\n            // Consider the following code:\n            //      void foo() { /; }\n            //\n            // If the text changes with an insertion of / just before the semicolon then we end up with:\n            //      void foo() { //; }\n            //\n            // If we were to just use the changeRange as is, then we would not rescan the { token\n            // (as it does not intersect the actual original change range).  Because an edit may\n            // change the token touching it, we actually need to look back *at least* one token so\n            // that the prior token sees that change.\n            const maxLookahead = 1;\n\n            let start = changeRange.span.start;\n\n            // The first iteration aligns us with the change start.  Subsequent iterations move us to\n            // the left by maxLookahead tokens.  
We only need to do this as long as we're not at the\n            // start of the tree.\n            for (let i = 0; start > 0 && i <= maxLookahead; i++) {\n                const nearestNode = findNearestNodeStartingBeforeOrAtPosition(sourceFile, start);\n                Debug.assert(nearestNode.pos <= start);\n                const position = nearestNode.pos;\n\n                start = Math.max(0, position - 1);\n            }\n\n            const finalSpan = createTextSpanFromBounds(start, textSpanEnd(changeRange.span));\n            const finalLength = changeRange.newLength + (changeRange.span.start - start);\n\n            return createTextChangeRange(finalSpan, finalLength);\n        }\n\n        function findNearestNodeStartingBeforeOrAtPosition(sourceFile: SourceFile, position: number): Node {\n            let bestResult: Node = sourceFile;\n            let lastNodeEntirelyBeforePosition: Node;\n\n            forEachChild(sourceFile, visit);\n\n            if (lastNodeEntirelyBeforePosition) {\n                const lastChildOfLastEntireNodeBeforePosition = getLastChild(lastNodeEntirelyBeforePosition);\n                if (lastChildOfLastEntireNodeBeforePosition.pos > bestResult.pos) {\n                    bestResult = lastChildOfLastEntireNodeBeforePosition;\n                }\n            }\n\n            return bestResult;\n\n            function getLastChild(node: Node): Node {\n                while (true) {\n                    const lastChild = getLastChildWorker(node);\n                    if (lastChild) {\n                        node = lastChild;\n                    }\n                    else {\n                        return node;\n                    }\n                }\n            }\n\n            function getLastChildWorker(node: Node): Node | undefined {\n                let last: Node = undefined;\n                forEachChild(node, child => {\n                    if (nodeIsPresent(child)) {\n                        last = child;\n         
           }\n                });\n                return last;\n            }\n\n            function visit(child: Node) {\n                if (nodeIsMissing(child)) {\n                    // Missing nodes are effectively invisible to us.  We never even consider them\n                    // when trying to find the nearest node before us.\n                    return;\n                }\n\n                // If the child intersects this position, then this node is currently the nearest\n                // node that starts before the position.\n                if (child.pos <= position) {\n                    if (child.pos >= bestResult.pos) {\n                        // This node starts before the position, and is closer to the position than\n                        // the previous best node we found.  It is now the new best node.\n                        bestResult = child;\n                    }\n\n                    // Now, the node may overlap the position, or it may end entirely before the\n                    // position.  If it overlaps with the position, then either it, or one of its\n                    // children must be the nearest node before the position.  So we can just\n                    // recurse into this child to see if we can find something better.\n                    if (position < child.end) {\n                        // The nearest node is either this child, or one of the children inside\n                        // of it.  We've already marked this child as the best so far.  
Recurse\n                        // in case one of the children is better.\n                        forEachChild(child, visit);\n\n                        // Once we look at the children of this node, then there's no need to\n                        // continue any further.\n                        return true;\n                    }\n                    else {\n                        Debug.assert(child.end <= position);\n                        // The child ends entirely before this position.  Say you have the following\n                        // (where $ is the position)\n                        //\n                        //      <complex expr 1> ? <complex expr 2> $ : <...> <...>\n                        //\n                        // We would want to find the nearest preceding node in \"complex expr 2\".\n                        // To support that, we keep track of this node, and once we're done searching\n                        // for a best node, we recurse down this node to see if we can find a good\n                        // result in it.\n                        //\n                        // This approach allows us to quickly skip over nodes that are entirely\n                        // before the position, while still allowing us to find any nodes in the\n                        // last one that might be what we want.\n                        lastNodeEntirelyBeforePosition = child;\n                    }\n                }\n                else {\n                    Debug.assert(child.pos > position);\n                    // We're now at a node that is entirely past the position we're searching for.\n                    // This node (and all following nodes) could never contribute to the result,\n                    // so just skip them by returning 'true' here.\n                    return true;\n                }\n            }\n        }\n\n        function checkChangeRange(sourceFile: SourceFile, newText: string, textChangeRange: 
TextChangeRange, aggressiveChecks: boolean) {\n            const oldText = sourceFile.text;\n            if (textChangeRange) {\n                Debug.assert((oldText.length - textChangeRange.span.length + textChangeRange.newLength) === newText.length);\n\n                if (aggressiveChecks || Debug.shouldAssert(AssertionLevel.VeryAggressive)) {\n                    const oldTextPrefix = oldText.substr(0, textChangeRange.span.start);\n                    const newTextPrefix = newText.substr(0, textChangeRange.span.start);\n                    Debug.assert(oldTextPrefix === newTextPrefix);\n\n                    const oldTextSuffix = oldText.substring(textSpanEnd(textChangeRange.span), oldText.length);\n                    const newTextSuffix = newText.substring(textSpanEnd(textChangeRangeNewSpan(textChangeRange)), newText.length);\n                    Debug.assert(oldTextSuffix === newTextSuffix);\n                }\n            }\n        }\n\n        interface IncrementalElement extends TextRange {\n            parent?: Node;\n            intersectsChange: boolean;\n            length?: number;\n            _children: Node[];\n        }\n\n        export interface IncrementalNode extends Node, IncrementalElement {\n            hasBeenIncrementallyParsed: boolean;\n        }\n\n        interface IncrementalNodeArray extends NodeArray<IncrementalNode>, IncrementalElement {\n            length: number;\n        }\n\n        // Allows finding nodes in the source file at a certain position in an efficient manner.\n        // The implementation takes advantage of the calling pattern it knows the parser will\n        // make in order to optimize finding nodes as quickly as possible.\n        export interface SyntaxCursor {\n            currentNode(position: number): IncrementalNode;\n        }\n\n        function createSyntaxCursor(sourceFile: SourceFile): SyntaxCursor {\n            let currentArray: NodeArray<Node> = sourceFile.statements;\n            let 
currentArrayIndex = 0;\n\n            Debug.assert(currentArrayIndex < currentArray.length);\n            let current = currentArray[currentArrayIndex];\n            let lastQueriedPosition = InvalidPosition.Value;\n\n            return {\n                currentNode(position: number) {\n                    // Only compute the current node if the position is different than the last time\n                    // we were asked.  The parser commonly asks for the node at the same position\n                    // twice.  Once to know if it can read an appropriate list element at a certain point,\n                    // and then to actually read and consume the node.\n                    if (position !== lastQueriedPosition) {\n                        // Much of the time the parser will need the very next node in the array that\n                        // we just returned a node from.  So just check for that case and move\n                        // forward in the array instead of searching for the node again.\n                        if (current && current.end === position && currentArrayIndex < (currentArray.length - 1)) {\n                            currentArrayIndex++;\n                            current = currentArray[currentArrayIndex];\n                        }\n\n                        // If we don't have a node, or the node we have isn't in the right position,\n                        // then try to find a viable node at the position requested.\n                        if (!current || current.pos !== position) {\n                            findHighestListElementThatStartsAtPosition(position);\n                        }\n                    }\n\n                    // Cache this query so that we don't do any extra work if the parser calls back\n                    // into us.  Note: this is very common as the parser will make pairs of calls like\n                    // 'isListElement -> parseListElement'.  
If we were unable to find a node when\n                    // called with 'isListElement', we don't want to redo the work when parseListElement\n                    // is called immediately after.\n                    lastQueriedPosition = position;\n\n                    // Either we don't have a node, or we have a node at the position being asked for.\n                    Debug.assert(!current || current.pos === position);\n                    return <IncrementalNode>current;\n                }\n            };\n\n            // Finds the highest element in the tree we can find that starts at the provided position.\n            // The element must be a direct child of some node list in the tree.  This way after we\n            // return it, we can easily return its next sibling in the list.\n            function findHighestListElementThatStartsAtPosition(position: number) {\n                // Clear out any cached state about the last node we found.\n                currentArray = undefined;\n                currentArrayIndex = InvalidPosition.Value;\n                current = undefined;\n\n                // Recurse into the source file to find the highest node at this position.\n                forEachChild(sourceFile, visitNode, visitArray);\n                return;\n\n                function visitNode(node: Node) {\n                    if (position >= node.pos && position < node.end) {\n                        // Position was within this node.  
Keep searching deeper to find the node.\n                        forEachChild(node, visitNode, visitArray);\n\n                        // don't proceed any further in the search.\n                        return true;\n                    }\n\n                    // position wasn't in this node, have to keep searching.\n                    return false;\n                }\n\n                function visitArray(array: NodeArray<Node>) {\n                    if (position >= array.pos && position < array.end) {\n                        // position was in this array.  Search through this array to see if we find a\n                        // viable element.\n                        for (let i = 0; i < array.length; i++) {\n                            const child = array[i];\n                            if (child) {\n                                if (child.pos === position) {\n                                    // Found the right node.  We're done.\n                                    currentArray = array;\n                                    currentArrayIndex = i;\n                                    current = child;\n                                    return true;\n                                }\n                                else {\n                                    if (child.pos < position && position < child.end) {\n                                        // Position is somewhere within this child.  
Search in it and\n                                        // stop searching in this array.\n                                        forEachChild(child, visitNode, visitArray);\n                                        return true;\n                                    }\n                                }\n                            }\n                        }\n                    }\n\n                    // position wasn't in this array, have to keep searching.\n                    return false;\n                }\n            }\n        }\n\n        const enum InvalidPosition {\n            Value = -1\n        }\n    }\n\n    function isDeclarationFileName(fileName: string): boolean {\n        return fileExtensionIs(fileName, Extension.Dts);\n    }\n}\n"
  },
  {
    "path": "examples/typescript/small.ts",
    "content": "class Foo {\n    constructor() {}\n}\n\nfunction foo() {\n    \n}\n\nconst s = `${foo()}`"
  },
  {
    "path": "package.json",
    "content": "{\n\t\"name\": \"vscode-tree-sitter\",\n\t\"displayName\": \"Tree Sitter [Deprecated]\",\n\t\"description\": \"Accurate syntax coloring with tree-sitter\",\n\t\"icon\": \"tree-sitter-small.png\",\n\t\"version\": \"0.1.26\",\n\t\"preview\": true,\n\t\"publisher\": \"georgewfraser\",\n\t\"repository\": {\n\t\t\"type\": \"git\",\n\t\t\"url\": \"https://github.com/georgewfraser/vscode-tree-sitter\"\n\t},\n\t\"license\": \"MIT\",\n\t\"extensionKind\": [\n\t\t\"ui\"\n\t],\n\t\"engines\": {\n\t\t\"vscode\": \"^1.34.0\"\n\t},\n\t\"categories\": [\n\t\t\"Programming Languages\",\n\t\t\"Themes\",\n\t\t\"Other\"\n\t],\n\t\"activationEvents\": [\n\t\t\"onLanguage:go\",\n\t\t\"onLanguage:cpp\",\n\t\t\"onLanguage:rust\",\n\t\t\"onLanguage:ruby\",\n\t\t\"onLanguage:typescript\",\n\t\t\"onLanguage:javascript\"\n\t],\n\t\"main\": \"./out/extension.js\",\n\t\"contributes\": {\n\t\t\"grammars\": [\n\t\t\t{\n\t\t\t\t\"language\": \"go\",\n\t\t\t\t\"scopeName\": \"source.go\",\n\t\t\t\t\"path\": \"./textmate/go.tmLanguage.json\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"language\": \"cpp\",\n\t\t\t\t\"scopeName\": \"source.cpp\",\n\t\t\t\t\"path\": \"./textmate/cpp.tmLanguage.json\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"language\": \"ruby\",\n\t\t\t\t\"scopeName\": \"source.ruby\",\n\t\t\t\t\"path\": \"./textmate/ruby.tmLanguage.json\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"language\": \"rust\",\n\t\t\t\t\"scopeName\": \"source.rust\",\n\t\t\t\t\"path\": \"./textmate/rust.tmLanguage.json\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"language\": \"typescript\",\n\t\t\t\t\"scopeName\": \"source.ts\",\n\t\t\t\t\"path\": \"./textmate/typescript.tmLanguage.json\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"language\": \"javascript\",\n\t\t\t\t\"scopeName\": \"source.ts\",\n\t\t\t\t\"path\": \"./textmate/typescript.tmLanguage.json\"\n\t\t\t}\n\t\t]\n\t},\n\t\"scripts\": {\n\t\t\"vscode:prepublish\": \"npm run compile\",\n\t\t\"compile\": \"tsc -p ./\",\n\t\t\"watch\": \"tsc -watch -p ./\",\n\t\t\"postinstall\": \"node 
./node_modules/vscode/bin/install\",\n\t\t\"test\": \"npm run compile && node ./out/test\",\n\t\t\"benchmark\": \"npm run compile && node ./out/benchmark\",\n\t\t\"debug\": \"npm run compile && node --nolazy --inspect-brk=9229 ./out/test\",\n\t\t\"build\": \"vsce package -o build.vsix\",\n\t\t\"publish\": \"vsce publish patch\"\n\t},\n\t\"devDependencies\": {\n\t\t\"@types/mocha\": \"^2.2.42\",\n\t\t\"@types/node\": \"^8.10.25\",\n\t\t\"tree-sitter-cli\": \"^0.16.5\",\n\t\t\"tree-sitter-cpp\": \"^0.16.0\",\n\t\t\"tree-sitter-go\": \"^0.16.0\",\n\t\t\"tree-sitter-javascript\": \"^0.16.0\",\n\t\t\"tree-sitter-ruby\": \"^0.16.1\",\n\t\t\"tree-sitter-rust\": \"^0.16.0\",\n\t\t\"tree-sitter-typescript\": \"^0.16.1\",\n\t\t\"tslint\": \"^6.0.0\",\n\t\t\"typescript\": \"^3.8.2\",\n\t\t\"vsce\": \"^1.73.0\",\n\t\t\"vscode\": \"^1.1.36\"\n\t},\n\t\"dependencies\": {\n\t\t\"jsonc-parser\": \"^2.1.0\",\n\t\t\"tar\": \">=4.4.2\",\n\t\t\"web-tree-sitter\": \"^0.16.2\"\n\t}\n}\n"
  },
  {
    "path": "scripts/build.sh",
    "content": "#!/bin/bash\n\nset -e\n\n# Build vsix\nnpm run-script build\n\ncode --install-extension build.vsix --force\n\necho 'Reload VSCode to update extension'\n"
  },
  {
    "path": "scripts/gen-parsers.sh",
    "content": "#!/usr/bin/env bash\n\n# TODO this still doesn't work on my mac laptop :(\n# fix it and delete parsers/*.wasm from git\n\nset -e\n\n# Build parsers\n./node_modules/.bin/tree-sitter build-wasm ./node_modules/tree-sitter-go\n./node_modules/.bin/tree-sitter build-wasm ./node_modules/tree-sitter-cpp\n./node_modules/.bin/tree-sitter build-wasm ./node_modules/tree-sitter-ruby\n./node_modules/.bin/tree-sitter build-wasm ./node_modules/tree-sitter-rust\n./node_modules/.bin/tree-sitter build-wasm ./node_modules/tree-sitter-typescript/typescript\n./node_modules/.bin/tree-sitter build-wasm ./node_modules/tree-sitter-javascript\n\nmv *.wasm parsers"
  },
  {
    "path": "src/benchmark.ts",
    "content": "// import extension = require('./extension')\nimport Parser = require('web-tree-sitter')\nimport fs = require('fs')\nimport colors = require('./colors')\n\nbenchmarkGo()\n\nasync function benchmarkGo() {\n    await Parser.init()\n    const parser = new Parser()\n    const wasm = 'parsers/tree-sitter-go.wasm'\n    const lang = await Parser.Language.load(wasm)\n    parser.setLanguage(lang)\n    const text = fs.readFileSync('examples/go/proc.go', {encoding: 'utf-8'})\n    const tree = parser.parse(text)\n    for (let i = 0; i < 10; i++) {\n        console.time('colorGo')\n        colors.colorGo(tree, [{start: 0, end: tree.rootNode.endPosition.row}])\n        console.timeEnd('colorGo')\n    }\n}"
  },
  {
    "path": "src/colors.ts",
    "content": "import * as Parser from 'web-tree-sitter'\n\nexport type Range = {start: Parser.Point, end: Parser.Point}\nexport type ColorFunction = (x: Parser.Tree, visibleRanges: {start: number, end: number}[]) => Map<string, Range[]>\n\nexport function colorGo(root: Parser.Tree, visibleRanges: {start: number, end: number}[]) {\n\tconst functions: Range[] = []\n\tconst types: Range[] = []\n\tconst variables: Range[] = []\n\tconst underlines: Range[] = []\n\t// Guess package names based on paths\n\tvar packages: {[id: string]: boolean} = {}\n\tfunction scanImport(x: Parser.SyntaxNode) {\n\t\tif (x.type == 'import_spec') {\n\t\t\tlet str = x.firstChild!.text\n\t\t\tif (str.startsWith('\"')) {\n\t\t\t\tstr = str.substring(1, str.length - 1)\n\t\t\t}\n\t\t\tconst parts = str.split('/')\n\t\t\tconst last = parts[parts.length - 1]\n\t\t\tpackages[last] = true\n\t\t}\n\t\tfor (const child of x.children) {\n\t\t\tscanImport(child)\n\t\t}\n\t}\n\t// Keep track of local vars that shadow packages\n\tconst allScopes: Scope[] = []\n\tclass Scope {\n\t\tprivate locals = new Map<string, {modified: boolean, references: Parser.SyntaxNode[]}>()\n\t\tprivate parent: Scope|null\n\t\n\t\tconstructor(parent: Scope|null) {\n\t\t\tthis.parent = parent\n\t\t\tallScopes.push(this)\n\t\t}\n\n\t\tdeclareLocal(id: string) {\n\t\t\tif (this.isRoot()) return\n\t\t\tif (this.locals.has(id)) {\n\t\t\t\tthis.locals.get(id)!.modified = true\n\t\t\t} else {\n\t\t\t\tthis.locals.set(id, {modified: false, references: []})\n\t\t\t}\n\t\t}\n\n\t\tmodifyLocal(id: string) {\n\t\t\tif (this.isRoot()) return\n\t\t\tif (this.locals.has(id)) this.locals.get(id)!.modified = true\n\t\t\telse if (this.parent) this.parent.modifyLocal(id)\n\t\t}\n\n\t\treferenceLocal(x: Parser.SyntaxNode) {\n\t\t\tif (this.isRoot()) return\n\t\t\tconst id = x.text\n\t\t\tif (this.locals.has(id)) this.locals.get(id)!.references.push(x)\n\t\t\telse if (this.parent) this.parent.referenceLocal(x)\n\t\t}\n\t\n\t\tisLocal(id: 
string): boolean {\n\t\t\tif (this.locals.has(id)) return true\n\t\t\tif (this.parent) return this.parent.isLocal(id)\n\t\t\treturn false\n\t\t}\n\n\t\tisUnknown(id: string): boolean {\n\t\t\tif (packages[id]) return false\n\t\t\tif (this.locals.has(id)) return false\n\t\t\tif (this.parent) return this.parent.isUnknown(id)\n\t\t\treturn true\n\t\t}\n\n\t\tisModified(id: string): boolean {\n\t\t\tif (this.locals.has(id)) return this.locals.get(id)!.modified\n\t\t\tif (this.parent) return this.parent.isModified(id)\n\t\t\treturn false\n\t\t}\n\n\t\tmodifiedLocals(): Parser.SyntaxNode[] {\n\t\t\tconst all = []\n\t\t\tfor (const {modified, references} of this.locals.values()) {\n\t\t\t\tif (modified) {\n\t\t\t\t\tall.push(...references)\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn all\n\t\t}\n\n\t\tisPackage(id: string): boolean {\n\t\t\treturn packages[id] && !this.isLocal(id)\n\t\t}\n\n\t\tisRoot(): boolean {\n\t\t\treturn this.parent == null\n\t\t}\n\t}\n\tconst rootScope = new Scope(null)\n\tfunction scanSourceFile() {\n\t\tfor (const top of root.rootNode.namedChildren) {\n\t\t\tscanTopLevelDeclaration(top)\n\t\t}\n\t}\n\tfunction scanTopLevelDeclaration(x: Parser.SyntaxNode) {\n\t\tswitch (x.type) {\n\t\t\tcase 'import_declaration':\n\t\t\t\tscanImport(x)\n\t\t\t\tbreak\n\t\t\tcase 'function_declaration':\n\t\t\tcase 'method_declaration':\n\t\t\t\tif (!isVisible(x, visibleRanges)) return\n\t\t\t\tscanFunctionDeclaration(x)\n\t\t\t\tbreak\n\t\t\tcase 'const_declaration':\n\t\t\tcase 'var_declaration':\n\t\t\t\tif (!isVisible(x, visibleRanges)) return\n\t\t\t\tscanVarDeclaration(x)\n\t\t\t\tbreak\n\t\t\tcase 'type_declaration':\n\t\t\t\tif (!isVisible(x, visibleRanges)) return\n\t\t\t\tscanTypeDeclaration(x)\n\t\t\t\tbreak\n\t\t}\n\t}\n\tfunction scanFunctionDeclaration(x: Parser.SyntaxNode) {\n\t\tconst scope = new Scope(rootScope)\n\t\tfor (const child of x.namedChildren) {\n\t\t\tswitch (child.type) {\n\t\t\t\tcase 'identifier':\n\t\t\t\t\tif (isVisible(child, 
visibleRanges)) {\n\t\t\t\t\t\tfunctions.push({start: child.startPosition, end: child.endPosition});\n\t\t\t\t\t}\n\t\t\t\t\tbreak\n\t\t\t\tdefault:\n\t\t\t\t\tscanExpr(child, scope)\n\t\t\t}\n\t\t}\n\t}\n\tfunction scanVarDeclaration(x: Parser.SyntaxNode) {\n\t\tfor (const varSpec of x.namedChildren) {\n\t\t\tfor (const child of varSpec.namedChildren) {\n\t\t\t\tswitch (child.type) {\n\t\t\t\t\tcase 'identifier':\n\t\t\t\t\t\tif (isVisible(child, visibleRanges)) {\n\t\t\t\t\t\t\tvariables.push({start: child.startPosition, end: child.endPosition});\n\t\t\t\t\t\t}\n\t\t\t\t\t\tbreak\n\t\t\t\t\tdefault:\n\t\t\t\t\t\tscanExpr(child, rootScope)\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\tfunction scanTypeDeclaration(x: Parser.SyntaxNode) {\n\t\tfor (const child of x.namedChildren) {\n\t\t\tscanExpr(child, rootScope)\n\t\t}\n\t}\n\tfunction scanExpr(x: Parser.SyntaxNode, scope: Scope) {\n\t\tswitch (x.type) {\n\t\t\tcase 'ERROR':\n\t\t\t\treturn\n\t\t\tcase 'func_literal':\n\t\t\tcase 'method_spec':\n\t\t\tcase 'block':\n\t\t\tcase 'expression_case':\n\t\t\tcase 'type_case':\n\t\t\tcase 'for_statement':\n\t\t\tcase 'if_statement':\n\t\t\t\tscope = new Scope(scope)\n\t\t\t\tbreak\n\t\t\tcase 'parameter_declaration':\n\t\t\tcase 'variadic_parameter_declaration':\n\t\t\tcase 'var_spec':\n\t\t\tcase 'const_spec':\n\t\t\t\tfor (const id of x.namedChildren) {\n\t\t\t\t\tif (id.type == 'identifier') {\n\t\t\t\t\t\tscope.declareLocal(id.text)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tbreak\n\t\t\tcase 'short_var_declaration': \n\t\t\tcase 'range_clause':\n\t\t\t\tfor (const id of x.firstChild!.namedChildren) {\n\t\t\t\t\tif (id.type == 'identifier') {\n\t\t\t\t\t\tscope.declareLocal(id.text)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tbreak\n\t\t\tcase 'type_switch_statement':\n\t\t\t\tscope = new Scope(scope)\n\t\t\t\tif (x.firstNamedChild!.type == 'expression_list') {\n\t\t\t\t\tfor (const id of x.firstNamedChild!.namedChildren) 
{\n\t\t\t\t\t\tscope.declareLocal(id.text)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tbreak\n\t\t\tcase 'inc_statement':\n\t\t\tcase 'dec_statement':\n\t\t\t\tscope.modifyLocal(x.firstChild!.text)\n\t\t\t\tbreak\n\t\t\tcase 'assignment_statement':\n\t\t\t\tfor (const id of x.firstChild!.namedChildren) {\n\t\t\t\t\tif (id.type == 'identifier') {\n\t\t\t\t\t\tscope.modifyLocal(id.text)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tbreak\n\t\t\tcase 'call_expression':\n\t\t\t\tscanCall(x.firstChild!, scope)\n\t\t\t\tscanExpr(x.lastChild!, scope)\n\t\t\t\treturn\n\t\t\tcase 'identifier':\n\t\t\t\tscope.referenceLocal(x)\n\t\t\t\tif (isVisible(x, visibleRanges) && scope.isUnknown(x.text)) {\n\t\t\t\t\tvariables.push({start: x.startPosition, end: x.endPosition});\n\t\t\t\t}\n\t\t\t\treturn\n\t\t\tcase 'selector_expression':\n\t\t\t\tif (isVisible(x, visibleRanges) && scope.isPackage(x.firstChild!.text)) {\n\t\t\t\t\tvariables.push({start: x.lastChild!.startPosition, end: x.lastChild!.endPosition})\n\t\t\t\t}\n\t\t\t\tscanExpr(x.firstChild!, scope)\n\t\t\t\tscanExpr(x.lastChild!, scope)\n\t\t\t\treturn\n\t\t\tcase 'type_identifier':\n\t\t\t\tif (isVisible(x, visibleRanges)) {\n\t\t\t\t\ttypes.push({start: x.startPosition, end: x.endPosition})\n\t\t\t\t}\n\t\t\t\treturn\n\t\t}\n\t\tfor (const child of x.namedChildren) {\n\t\t\tscanExpr(child, scope)\n\t\t}\n\t}\n\tfunction scanCall(x: Parser.SyntaxNode, scope: Scope) {\n\t\tswitch (x.type) {\n\t\t\tcase 'identifier':\n\t\t\t\tif (isVisible(x, visibleRanges) && scope.isUnknown(x.text)) {\n\t\t\t\t\tfunctions.push({start: x.startPosition, end: x.endPosition})\n\t\t\t\t}\n\t\t\t\tscope.referenceLocal(x)\n\t\t\t\treturn\n\t\t\tcase 'selector_expression':\n\t\t\t\tif (isVisible(x, visibleRanges) && scope.isPackage(x.firstChild!.text)) {\n\t\t\t\t\tfunctions.push({start: x.lastChild!.startPosition, end: x.lastChild!.endPosition})\n\t\t\t\t}\n\t\t\t\tscanExpr(x.firstChild!, scope)\n\t\t\t\tscanExpr(x.lastChild!, scope)\n\t\t\t\treturn\n\t\t\tcase 
'unary_expression':\n\t\t\t\tscanCall(x.firstChild!, scope)\n\t\t\t\treturn\n\t\t\tdefault:\n\t\t\t\tscanExpr(x, scope)\n\t\t}\n\t}\n\tscanSourceFile()\n\tfor (const scope of allScopes) {\n\t\tfor (const local of scope.modifiedLocals()) {\n\t\t\tunderlines.push({start: local.startPosition, end: local.endPosition})\n\t\t}\n\t}\n\n\treturn new Map([\n\t\t['entity.name.function', functions],\n\t\t['entity.name.type', types],\n\t\t['variable', variables],\n\t\t['markup.underline', underlines],\n\t])\n}\n\nexport function colorTypescript(root: Parser.Tree, visibleRanges: {start: number, end: number}[]) {\n\tconst functions: Range[] = []\n\tconst types: Range[] = []\n\tconst variables: Range[] = []\n\tconst keywords: Range[] = []\n\tlet visitedChildren = false\n\tlet cursor = root.walk()\n\tlet parents = [cursor.nodeType]\n\twhile (true) {\n\t\t// Advance cursor\n\t\tif (visitedChildren) {\n\t\t\tif (cursor.gotoNextSibling()) {\n\t\t\t\tvisitedChildren = false\n\t\t\t} else if (cursor.gotoParent()) {\n\t\t\t\tparents.pop()\n\t\t\t\tvisitedChildren = true\n\t\t\t\tcontinue\n\t\t\t} else {\n\t\t\t\tbreak\n\t\t\t}\n\t\t} else {\n\t\t\tconst parent = cursor.nodeType\n\t\t\tif (cursor.gotoFirstChild()) {\n\t\t\t\tparents.push(parent)\n\t\t\t\tvisitedChildren = false\n\t\t\t} else {\n\t\t\t\tvisitedChildren = true\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\t\t// Skip nodes that are not visible\n\t\tif (!visible(cursor, visibleRanges)) {\n\t\t\tvisitedChildren = true\n\t\t\tcontinue\n\t\t}\n\t\t// Color tokens\n\t\tconst parent = parents[parents.length - 1]\n\t\tswitch (cursor.nodeType) {\n\t\t\tcase 'identifier':\n\t\t\t\tif (parent == 'function') {\n\t\t\t\t\tfunctions.push({start: cursor.startPosition, end: cursor.endPosition})\n\t\t\t\t}\n\t\t\t\tbreak\n\t\t\tcase 'type_identifier':\n\t\t\tcase 'predefined_type':\n\t\t\t\ttypes.push({start: cursor.startPosition, end: cursor.endPosition})\n\t\t\t\tbreak\n\t\t\tcase 'property_identifier':\n\t\t\t\tvariables.push({start: 
cursor.startPosition, end: cursor.endPosition})\n\t\t\t\tbreak\n\t\t\tcase 'method_definition': \n\t\t\t\tconst firstChild = cursor.currentNode().firstChild!\n\t\t\t\tswitch (firstChild.text) {\n\t\t\t\t\tcase 'get':\n\t\t\t\t\tcase 'set':\n\t\t\t\t\t\tkeywords.push({start: firstChild.startPosition, end: firstChild.endPosition})\n\t\t\t\t}\n\t\t\t\tbreak\n\t\t\tcase 'function_declaration':\n\t\t\t\tconst functionName = cursor.currentNode().firstNamedChild!\n\t\t\t\tfunctions.push({start: functionName.startPosition, end: functionName.endPosition})\n\n\t\t}\n\t}\n\tcursor.delete()\n\treturn new Map([\n\t\t['entity.name.function', functions],\n\t\t['entity.name.type', types],\n\t\t['variable', variables],\n\t\t['keyword', keywords],\n\t])\n}\n\nexport function colorRuby(root: Parser.Tree, visibleRanges: {start: number, end: number}[]) {\n\tconst controlKeywords = new Set(['while', 'until', 'if', 'unless', 'for', 'begin', 'elsif', 'else', 'ensure', 'when', 'case', 'do_block'])\n\tconst classKeywords = new Set(['include', 'prepend', 'extend', 'private', 'protected', 'public', 'attr_reader', 'attr_writer', 'attr_accessor', 'attr', 'private_class_method', 'public_class_method'])\n\tconst moduleKeywords = new Set(['module_function', ...classKeywords])\n\tconst functions: Range[] = []\n\tconst types: Range[] = []\n\tconst variables: Range[] = []\n\tconst keywords: Range[] = []\n\tconst controls: Range[] = []\n\tconst constants: Range[] = []\n\tlet visitedChildren = false\n\tlet cursor = root.walk()\n\tlet parents = [cursor.nodeType]\n\tfunction isChildOf(ancestor: string) {\n\t\tconst parent = parents[parents.length - 1]\n\t\tconst grandparent = parents[parents.length - 2]\n\t\t// class Foo; bar; end\n\t\tif (parent == ancestor) {\n\t\t\treturn true\n\t\t}\n\t\t// class Foo; bar :thing; end\n\t\tif (parent == 'method_call' && grandparent == ancestor) {\n\t\t\treturn true\n\t\t}\n\t\treturn false\n\t}\n\twhile (true) {\n\t\t// Advance cursor\n\t\tif (visitedChildren) 
{\n\t\t\tif (cursor.gotoNextSibling()) {\n\t\t\t\tvisitedChildren = false\n\t\t\t} else if (cursor.gotoParent()) {\n\t\t\t\tparents.pop()\n\t\t\t\tvisitedChildren = true\n\t\t\t\tcontinue\n\t\t\t} else {\n\t\t\t\tbreak\n\t\t\t}\n\t\t} else {\n\t\t\tconst parent = cursor.nodeType\n\t\t\tif (cursor.gotoFirstChild()) {\n\t\t\t\tparents.push(parent)\n\t\t\t\tvisitedChildren = false\n\t\t\t} else {\n\t\t\t\tvisitedChildren = true\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\t\t// Skip nodes that are not visible\n\t\tif (!visible(cursor, visibleRanges)) {\n\t\t\tvisitedChildren = true\n\t\t\tcontinue\n\t\t}\n\t\t// Color tokens\n\t\tconst parent = parents[parents.length - 1]\n\t\tswitch (cursor.nodeType) {\n\t\t\tcase 'method':\n\t\t\t\tcursor.gotoFirstChild()\n\t\t\t\tcursor.gotoNextSibling()\n\t\t\t\tfunctions.push({start: cursor.startPosition, end: cursor.endPosition})\n\t\t\t\tcursor.gotoParent()\n\t\t\t\tbreak\n\t\t\tcase 'singleton_method':\n\t\t\t\tcursor.gotoFirstChild()\n\t\t\t\tcursor.gotoNextSibling()\n\t\t\t\tcursor.gotoNextSibling()\n\t\t\t\tcursor.gotoNextSibling()\n\t\t\t\tfunctions.push({start: cursor.startPosition, end: cursor.endPosition})\n\t\t\t\tcursor.gotoParent()\n\t\t\t\tbreak\n\t\t\tcase 'instance_variable':\n\t\t\tcase 'class_variable':\n\t\t\tcase 'global_variable':\n\t\t\t\tvariables.push({start: cursor.startPosition, end: cursor.endPosition})\n\t\t\t\tbreak\n\t\t\tcase 'end':\n\t\t\t\tif (controlKeywords.has(parent)) {\n\t\t\t\t\tcontrols.push({start: cursor.startPosition, end: cursor.endPosition})\n\t\t\t\t} else {\n\t\t\t\t\tkeywords.push({start: cursor.startPosition, end: cursor.endPosition})\n\t\t\t\t}\n\t\t\t\tbreak\n\t\t\tcase 'constant':\n\t\t\t\ttypes.push({start: cursor.startPosition, end: cursor.endPosition})\n\t\t\t\tbreak\n\t\t\tcase 'symbol':\n\t\t\t\tconstants.push({start: cursor.startPosition, end: cursor.endPosition})\n\t\t\t\tbreak\n\t\t\tcase 'method_call': {\n\t\t\t\tcursor.gotoFirstChild()\n\t\t\t\tconst text = 
cursor.currentNode().text\n\t\t\t\tif (!moduleKeywords.has(text)) {\n\t\t\t\t\tfunctions.push({start: cursor.startPosition, end: cursor.endPosition})\n\t\t\t\t}\n\t\t\t\tcursor.gotoParent()\n\t\t\t\tbreak\n\t\t\t}\n\t\t\tcase 'call':\n\t\t\t\tcursor.gotoFirstChild()\n\t\t\t\tcursor.gotoNextSibling()\n\t\t\t\tcursor.gotoNextSibling()\n\t\t\t\tfunctions.push({start: cursor.startPosition, end: cursor.endPosition})\n\t\t\t\tcursor.gotoParent()\n\t\t\t\tbreak\n\t\t\tcase 'identifier': {\n\t\t\t\tconst text = cursor.currentNode().text\n\t\t\t\tif (classKeywords.has(text) && isChildOf('class')) {\n\t\t\t\t\tkeywords.push({start: cursor.startPosition, end: cursor.endPosition})\n\t\t\t\t} else if (moduleKeywords.has(text) && isChildOf('module')) {\n\t\t\t\t\tkeywords.push({start: cursor.startPosition, end: cursor.endPosition})\n\t\t\t\t}\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t}\n\tcursor.delete()\n\treturn new Map([\n\t\t['entity.name.function', functions],\n\t\t['entity.name.type', types],\n\t\t['variable', variables],\n\t\t['keyword', keywords],\n\t\t['keyword.control', controls],\n\t\t['constant.language', constants],\n\t])\n}\n\nexport function colorRust(root: Parser.Tree, visibleRanges: {start: number, end: number}[]) {\n\tconst functions: Range[] = []\n\tconst types: Range[] = []\n\tconst variables: Range[] = []\n\tconst keywords: Range[] = []\n\tlet visitedChildren = false\n\tlet cursor = root.walk()\n\tlet parents = [cursor.nodeType]\n\twhile (true) {\n\t\t// Advance cursor\n\t\tif (visitedChildren) {\n\t\t\tif (cursor.gotoNextSibling()) {\n\t\t\t\tvisitedChildren = false\n\t\t\t} else if (cursor.gotoParent()) {\n\t\t\t\tparents.pop()\n\t\t\t\tvisitedChildren = true\n\t\t\t\tcontinue\n\t\t\t} else {\n\t\t\t\tbreak\n\t\t\t}\n\t\t} else {\n\t\t\tconst parent = cursor.nodeType\n\t\t\tif (cursor.gotoFirstChild()) {\n\t\t\t\tparents.push(parent)\n\t\t\t\tvisitedChildren = false\n\t\t\t} else {\n\t\t\t\tvisitedChildren = true\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\t\t// Skip 
nodes that are not visible\n\t\tif (!visible(cursor, visibleRanges)) {\n\t\t\tvisitedChildren = true\n\t\t\tcontinue\n\t\t}\n\t\t// Color tokens\n\t\tconst parent = parents[parents.length - 1]\n\t\tconst grandparent = parents[parents.length - 2]\n\t\tswitch (cursor.nodeType) {\n\t\t\tcase 'identifier':\n\t\t\t\tif (parent == 'function_item' && grandparent == 'declaration_list') {\n\t\t\t\t\tvariables.push({start: cursor.startPosition, end: cursor.endPosition})\n\t\t\t\t} else if (parent == 'function_item') {\n\t\t\t\t\tfunctions.push({start: cursor.startPosition, end: cursor.endPosition})\n\t\t\t\t} else if (parent == 'scoped_identifier' && grandparent == 'function_declarator') {\n\t\t\t\t\tfunctions.push({start: cursor.startPosition, end: cursor.endPosition})\n\t\t\t\t}\n\t\t\t\tbreak\n\t\t\tcase 'type_identifier':\n\t\t\tcase 'primitive_type':\n\t\t\t\ttypes.push({start: cursor.startPosition, end: cursor.endPosition})\n\t\t\t\tbreak\n\t\t\tcase 'field_identifier':\n\t\t\t\tvariables.push({start: cursor.startPosition, end: cursor.endPosition})\n\t\t\t\tbreak\n\t\t\tcase 'use_list':\n\n\t\t}\n\t}\n\tcursor.delete()\n\treturn new Map([\n\t\t['entity.name.function', functions],\n\t\t['entity.name.type', types],\n\t\t['variable', variables],\n\t\t['keyword', keywords],\n\t])\n}\n\nexport function colorCpp(root: Parser.Tree, visibleRanges: {start: number, end: number}[]) {\n\tconst functions: Range[] = []\n\tconst types: Range[] = []\n\tconst variables: Range[] = []\n\tlet visitedChildren = false\n\tlet cursor = root.walk()\n\tlet parents = [cursor.nodeType]\n\twhile (true) {\n\t\t// Advance cursor\n\t\tif (visitedChildren) {\n\t\t\tif (cursor.gotoNextSibling()) {\n\t\t\t\tvisitedChildren = false\n\t\t\t} else if (cursor.gotoParent()) {\n\t\t\t\tparents.pop()\n\t\t\t\tvisitedChildren = true\n\t\t\t\tcontinue\n\t\t\t} else {\n\t\t\t\tbreak\n\t\t\t}\n\t\t} else {\n\t\t\tconst parent = cursor.nodeType\n\t\t\tif (cursor.gotoFirstChild()) 
{\n\t\t\t\tparents.push(parent)\n\t\t\t\tvisitedChildren = false\n\t\t\t} else {\n\t\t\t\tvisitedChildren = true\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\t\t// Skip nodes that are not visible\n\t\tif (!visible(cursor, visibleRanges)) {\n\t\t\tvisitedChildren = true\n\t\t\tcontinue\n\t\t}\n\t\t// Color tokens\n\t\tconst parent = parents[parents.length - 1]\n\t\tconst grandparent = parents[parents.length - 2]\n\t\tswitch (cursor.nodeType) {\n\t\t\tcase 'identifier':\n\t\t\t\tif (parent == 'function_declarator' || parent == 'scoped_identifier' && grandparent == 'function_declarator') {\n\t\t\t\t\tfunctions.push({start: cursor.startPosition, end: cursor.endPosition})\n\t\t\t\t}\n\t\t\t\tbreak\n\t\t\tcase 'type_identifier':\n\t\t\t\ttypes.push({start: cursor.startPosition, end: cursor.endPosition})\n\t\t\t\tbreak\n\t\t\tcase 'field_identifier':\n\t\t\t\tvariables.push({start: cursor.startPosition, end: cursor.endPosition})\n\t\t\t\tbreak\n\t\t}\n\t}\n\tcursor.delete()\n\treturn new Map([\n\t\t['entity.name.function', functions],\n\t\t['entity.name.type', types],\n\t\t['variable', variables],\n\t])\n}\n\nfunction isVisible(x: Parser.SyntaxNode, visibleRanges: {start: number, end: number}[]) {\n\tfor (const {start, end} of visibleRanges) {\n\t\tconst overlap = x.startPosition.row <= end+1 && start-1 <= x.endPosition.row\n\t\tif (overlap) return true\n\t}\n\treturn false\n}\nfunction visible(x: Parser.TreeCursor, visibleRanges: { start: number, end: number }[]) {\n\tfor (const { start, end } of visibleRanges) {\n\t\tconst overlap = x.startPosition.row <= end + 1 && start - 1 <= x.endPosition.row\n\t\tif (overlap) return true\n\t}\n\treturn false\n}"
  },
  {
    "path": "src/extension.ts",
    "content": "import * as vscode from 'vscode'\nimport * as Parser from 'web-tree-sitter'\nimport * as path from 'path'\nimport * as scopes from './scopes'\nimport * as colors from './colors'\n\n// Be sure to declare the language in package.json and include a minimalist grammar.\nconst languages: {[id: string]: {module: string, color: colors.ColorFunction, parser?: Parser}} = {\n\t'go': {module: 'tree-sitter-go', color: colors.colorGo},\n\t'cpp': {module: 'tree-sitter-cpp', color: colors.colorCpp},\n\t'rust': {module: 'tree-sitter-rust', color: colors.colorRust},\n\t'ruby': {module: 'tree-sitter-ruby', color: colors.colorRuby},\n\t'typescript': {module: 'tree-sitter-typescript', color: colors.colorTypescript},\n\t// TODO there is a separate JS grammar now\n\t'javascript': {module: 'tree-sitter-javascript', color: colors.colorTypescript},\n}\n\n// Create decoration types from scopes lazily\nconst decorationCache = new Map<string, vscode.TextEditorDecorationType>()\nfunction decoration(scope: string): vscode.TextEditorDecorationType|undefined {\n\t// If we've already created a decoration for `scope`, use it\n\tif (decorationCache.has(scope)) {\n\t\treturn decorationCache.get(scope)\n\t}\n\t// If `scope` is defined in the current theme, create a decoration for it\n\tconst textmate = scopes.find(scope)\n\tif (textmate) {\n\t\tconst decoration = createDecorationFromTextmate(textmate)\n\t\tdecorationCache.set(scope, decoration)\n\t\treturn decoration\n\t}\n\t// Otherwise, give up, there is no color available for this scope\n\treturn undefined\n}\nfunction createDecorationFromTextmate(themeStyle: scopes.TextMateRuleSettings): vscode.TextEditorDecorationType {\n\tlet options: vscode.DecorationRenderOptions = {}\n\toptions.rangeBehavior = vscode.DecorationRangeBehavior.OpenOpen\n\tif (themeStyle.foreground) {\n\t\toptions.color = themeStyle.foreground\n\t}\n\tif (themeStyle.background) {\n\t\toptions.backgroundColor = themeStyle.background\n\t}\n\tif 
(themeStyle.fontStyle) {\n\t\tlet parts: string[] = themeStyle.fontStyle.split(\" \")\n\t\tparts.forEach((part) => {\n\t\t\tswitch (part) {\n\t\t\t\tcase \"italic\":\n\t\t\t\t\toptions.fontStyle = \"italic\"\n\t\t\t\t\tbreak\n\t\t\t\tcase \"bold\":\n\t\t\t\t\toptions.fontWeight = \"bold\"\n\t\t\t\t\tbreak\n\t\t\t\tcase \"underline\":\n\t\t\t\t\toptions.textDecoration = \"underline\"\n\t\t\t\t\tbreak\n\t\t\t\tdefault:\n\t\t\t\t\tbreak\n\t\t\t}\n\t\t})\n\t}\n\treturn vscode.window.createTextEditorDecorationType(options)\n}\n\n// Load styles from the current active theme\nasync function loadStyles() {\n\tawait scopes.load()\n\t// Clear old styles\n\tfor (const style of decorationCache.values()) {\n\t\tstyle.dispose()\n\t}\n\tdecorationCache.clear()\n}\n\n// For some reason this crashes if we put it inside activate\nconst initParser = Parser.init() // TODO this isn't a field, suppress package member coloring like Go\n\n// Called when the extension is first activated by user opening a file with the appropriate language\nexport async function activate(context: vscode.ExtensionContext) {\n\tconsole.log(\"Activating tree-sitter...\")\n\t// Parse of all visible documents\n\tconst trees: {[uri: string]: Parser.Tree} = {}\n\tasync function open(editor: vscode.TextEditor) {\n\t\tconst language = languages[editor.document.languageId]\n\t\tif (language == null) return\n\t\tif (language.parser == null) {\n\t\t\tconst absolute = path.join(context.extensionPath, 'parsers', language.module + '.wasm')\n\t\t\tconst wasm = path.relative(process.cwd(), absolute)\n\t\t\tconst lang = await Parser.Language.load(wasm)\n\t\t\tconst parser = new Parser()\n\t\t\tparser.setLanguage(lang)\n\t\t\tlanguage.parser = parser\n\t\t}\n\t\tconst t = language.parser.parse(editor.document.getText()) // TODO don't use getText, use Parser.Input\n\t\ttrees[editor.document.uri.toString()] = t\n\t\tcolorUri(editor.document.uri)\n\t}\n\t// NOTE: if you make this an async function, it seems to cause edit 
anomalies\n\tfunction edit(edit: vscode.TextDocumentChangeEvent) {\n\t\tconst language = languages[edit.document.languageId]\n\t\tif (language == null || language.parser == null) return\n\t\tupdateTree(language.parser, edit)\n\t\tcolorUri(edit.document.uri)\n\t}\n\tfunction updateTree(parser: Parser, edit: vscode.TextDocumentChangeEvent) {\n\t\tif (edit.contentChanges.length == 0) return\n\t\tconst old = trees[edit.document.uri.toString()]\n\t\tfor (const e of edit.contentChanges) {\n\t\t\tconst startIndex = e.rangeOffset\n\t\t\tconst oldEndIndex = e.rangeOffset + e.rangeLength\n\t\t\tconst newEndIndex = e.rangeOffset + e.text.length\n\t\t\tconst startPos = edit.document.positionAt(startIndex)\n\t\t\tconst oldEndPos = edit.document.positionAt(oldEndIndex)\n\t\t\tconst newEndPos = edit.document.positionAt(newEndIndex)\n\t\t\tconst startPosition = asPoint(startPos)\n\t\t\tconst oldEndPosition = asPoint(oldEndPos)\n\t\t\tconst newEndPosition = asPoint(newEndPos)\n\t\t\tconst delta = {startIndex, oldEndIndex, newEndIndex, startPosition, oldEndPosition, newEndPosition}\n\t\t\told.edit(delta)\n\t\t}\n\t\tconst t = parser.parse(edit.document.getText(), old) // TODO don't use getText, use Parser.Input\n\t\ttrees[edit.document.uri.toString()] = t\n\t}\n\tfunction asPoint(pos: vscode.Position): Parser.Point {\n\t\treturn {row: pos.line, column: pos.character}\n\t}\n\tfunction close(doc: vscode.TextDocument) {\n\t\tdelete trees[doc.uri.toString()]\n\t}\n\tfunction colorUri(uri: vscode.Uri) {\n\t\tfor (const editor of vscode.window.visibleTextEditors) {\n\t\t\tif (editor.document.uri == uri) {\n\t\t\t\tcolorEditor(editor)\n\t\t\t}\n\t\t}\n\t}\n\tconst warnedScopes = new Set<string>()\n\tfunction colorEditor(editor: vscode.TextEditor) {\n\t\tconst t = trees[editor.document.uri.toString()]\n\t\tif (t == null) return\n\t\tconst language = languages[editor.document.languageId]\n\t\tif (language == null) return\n\t\tconst scopes = language.color(t, visibleLines(editor))\n\t\tfor 
(const scope of scopes.keys()) {\n\t\t\tconst dec = decoration(scope)\n\t\t\tif (dec) {\n\t\t\t\tconst ranges = scopes.get(scope)!.map(range)\n\t\t\t\teditor.setDecorations(dec, ranges)\n\t\t\t} else if (!warnedScopes.has(scope)) {\n\t\t\t\tconsole.warn(scope, 'was not found in the current theme')\n\t\t\t\twarnedScopes.add(scope)\n\t\t\t}\n\t\t}\n\t\tfor (const scope of decorationCache.keys()) {\n\t\t\tif (!scopes.has(scope)) {\n\t\t\t\tconst dec = decorationCache.get(scope)!\n\t\t\t\teditor.setDecorations(dec, [])\n\t\t\t}\n\t\t}\n\t}\n\tasync function colorAllOpen() {\n\t\tfor (const editor of vscode.window.visibleTextEditors) {\n\t\t\tawait open(editor)\n\t\t}\n\t}\n\t// Load active color theme\n\tasync function onChangeConfiguration(event: vscode.ConfigurationChangeEvent) {\n        let colorizationNeedsReload: boolean = event.affectsConfiguration(\"workbench.colorTheme\")\n\t\t\t|| event.affectsConfiguration(\"editor.tokenColorCustomizations\")\n\t\tif (colorizationNeedsReload) {\n\t\t\tawait loadStyles()\n\t\t\tcolorAllOpen()\n\t\t}\n\t}\n    context.subscriptions.push(vscode.workspace.onDidChangeConfiguration(onChangeConfiguration))\n\tcontext.subscriptions.push(vscode.window.onDidChangeVisibleTextEditors(colorAllOpen))\n\tcontext.subscriptions.push(vscode.workspace.onDidChangeTextDocument(edit))\n\tcontext.subscriptions.push(vscode.workspace.onDidCloseTextDocument(close))\n\tcontext.subscriptions.push(vscode.window.onDidChangeTextEditorVisibleRanges(change => colorEditor(change.textEditor)))\n\t// Don't wait for the initial color, it takes too long to inspect the themes and causes VSCode extension host to hang\n\tasync function activateLazily() {\n\t\tawait loadStyles()\n\t\tawait initParser\n\t\tcolorAllOpen()\n\t}\n\tactivateLazily()\n}\n\nfunction visibleLines(editor: vscode.TextEditor) {\n\treturn editor.visibleRanges.map(range => {\n\t\tconst start = range.start.line\n\t\tconst end = range.end.line\n\t\treturn {start, end}\n\t})\n}\n\nfunction range(x: 
colors.Range): vscode.Range {\n\treturn new vscode.Range(x.start.row, x.start.column, x.end.row, x.end.column)\n}\n\n// this method is called when your extension is deactivated\nexport function deactivate() {}\n"
  },
  {
    "path": "src/print.ts",
    "content": "// import extension = require('./extension')\nimport Parser = require('web-tree-sitter')\nimport fs = require('fs')\n\ntestRust()\n\nasync function testRust() {\n    await Parser.init()\n    const parser = new Parser()\n    const wasm = 'parsers/tree-sitter-rust.wasm'\n    const lang = await Parser.Language.load(wasm)\n    parser.setLanguage(lang)\n    const text = fs.readFileSync('examples/rust/scratch.rs', {encoding: 'utf-8'})\n    const tree = parser.parse(text)\n    const lines = text.split('\\n')\n    const maxLine = maxWidth(lines)\n    for (let line = 0; line < lines.length; line++) {\n        const types: string[] = []\n        collectTypes(tree.rootNode, line, types)\n        let acc = lines[line]\n        for (let i = acc.length; i < maxLine + 1; i++) {\n            acc = acc + ' '\n        }\n        for (const t of types) {\n            acc = acc + ' ' + t\n        }\n        console.log(acc)\n    }\n}\n\nfunction maxWidth(lines: string[]): number {\n    let max = 0\n    for (const line of lines) {\n        if (line.length > max) max = line.length\n    }\n    return max\n}\n\nfunction collectTypes(node: Parser.SyntaxNode, line: number, types: string[]) {\n    if (node.startPosition.row == line) {\n        if (node.endPosition.row == line) {\n            types.push(node.toString())\n        } else {\n            types.push(node.type)\n            for (const child of node.children) {\n                collectTypes(child, line, types)\n            }\n        }\n    } else {\n        for (const child of node.children) {\n            collectTypes(child, line, types)\n        }\n    }\n}"
  },
  {
    "path": "src/scopes.ts",
    "content": "import * as vscode from 'vscode'\nimport * as path from 'path'\nimport * as fs from 'fs'\nimport * as jsonc from \"jsonc-parser\"\n\nexport interface TextMateRule {\n    scope: string|string[]\n    settings: TextMateRuleSettings\n}\n\nexport interface TextMateRuleSettings {\n    foreground: string | undefined\n    background: string | undefined\n    fontStyle: string | undefined\n}\n\n// Current theme colors\nconst colors = new Map<string, TextMateRuleSettings>()\n\nexport function find(scope: string): TextMateRuleSettings|undefined {\n    return colors.get(scope)\n}\n\n// Load all textmate scopes in the currently active theme\nexport async function load() {\n    // Remove any previous theme\n    colors.clear()\n    // Find out current color theme\n    const themeName = vscode.workspace.getConfiguration(\"workbench\").get(\"colorTheme\")\n    if (typeof themeName != 'string') {\n        console.warn('workbench.colorTheme is', themeName)\n        return\n    }\n    // Try to load colors from that theme\n    try {\n        await loadThemeNamed(themeName)\n    } catch(e) {\n\t\tconsole.warn('failed to load theme', themeName, e)\n\t}\n}\n\n// Find current theme on disk\nasync function loadThemeNamed(themeName: string) {\n    for (const extension of vscode.extensions.all) {\n        const extensionPath: string = extension.extensionPath\n        const extensionPackageJsonPath: string = path.join(extensionPath, \"package.json\")\n        if (!await checkFileExists(extensionPackageJsonPath)) {\n            continue\n        }\n        const packageJsonText: string = await readFileText(extensionPackageJsonPath)\n        const packageJson: any = jsonc.parse(packageJsonText)\n        if (packageJson.contributes && packageJson.contributes.themes) {\n            for (const theme of packageJson.contributes.themes) {\n                const id = theme.id || theme.label\n                if (id == themeName) {\n                    const themeRelativePath: string = 
theme.path\n                    const themeFullPath: string = path.join(extensionPath, themeRelativePath)\n                    await loadThemeFile(themeFullPath)\n                }\n            }\n        }\n    }\n}\n\nasync function loadThemeFile(themePath: string) {\n    if (await checkFileExists(themePath)) {\n        const themeContentText: string = await readFileText(themePath)\n        const themeContent: any = jsonc.parse(themeContentText)\n        if (themeContent && themeContent.tokenColors) {\n            loadColors(themeContent.tokenColors)\n            if (themeContent.include) {\n                // parse included theme file\n                const includedThemePath: string = path.join(path.dirname(themePath), themeContent.include)\n                await loadThemeFile(includedThemePath)\n            }\n        }\n    }\n}\n\nfunction loadColors(textMateRules: TextMateRule[]): void {\n    for (const rule of textMateRules) {\n        if (typeof rule.scope == 'string') {\n            if (!colors.has(rule.scope)) {\n                colors.set(rule.scope, rule.settings)\n            }\n        } else if (rule.scope instanceof Array) {\n            for (const scope of rule.scope) {\n                if (!colors.has(scope)) {\n                    colors.set(scope, rule.settings)\n                }\n            }\n        }\n    }\n}\n\nfunction checkFileExists(filePath: string): Promise<boolean> {\n    return new Promise((resolve, reject) => {\n        fs.stat(filePath, (err, stats) => {\n            if (stats && stats.isFile()) {\n                resolve(true)\n            } else {\n                console.warn('no such file', filePath)\n                resolve(false)\n            }\n        })\n    })\n}\n\nfunction readFileText(filePath: string, encoding: string = \"utf8\"): Promise<string> {\n    return new Promise<string>((resolve, reject) => {\n        fs.readFile(filePath, encoding, (err, data) => {\n            if (err) {\n                reject(err)\n  
          } else {\n                resolve(data)\n            }\n        })\n    })\n}"
  },
  {
    "path": "src/test.ts",
    "content": "import Parser = require('web-tree-sitter')\nimport colors = require('./colors')\n\ntype Assert = [string, string|{not:string}]\ntype TestCase = [string, ...Assert[]]\n\nconst goTests: TestCase[] = [\n    [\n        `package p; func f() int { }`, \n        ['f', 'entity.name.function'], ['int', 'entity.name.type']\n    ],\n    [\n        `package p; type Foo struct { x int }`, \n        ['Foo', 'entity.name.type'], ['x', {not: 'variable'}]\n    ],\n    [\n        `package p; type Foo interface { GetX() int }`, \n        ['Foo', 'entity.name.type'], ['int', 'entity.name.type'], ['GetX', {not: 'variable'}]\n    ],\n    [\n        `package p; func f() { x := 1; x := 2 }`, \n        ['x', 'markup.underline']\n    ],\n    [\n        `package p; func f(foo T) { foo.Foo() }`, \n        ['Foo', {not: 'entity.name.function'}]\n    ],\n    [\n        `package p; func f() { Foo() }`, \n        ['Foo', 'entity.name.function']\n    ],\n    [\n        `package p; import \"foo\"; func f() { foo.Foo() }`, \n        ['Foo', 'entity.name.function']\n    ],\n    [\n        `package p; import \"foo\"; func f(foo T) { foo.Foo() }`, \n        ['Foo', {not: 'entity.name.function'}]\n    ],\n    [\n        `package p; func f(x other.T) { }`,\n        ['T', 'entity.name.type'],\n    ],\n    [\n        `package p; var _ = f(Foo{})`,\n        ['Foo', 'entity.name.type'],\n    ],\n    [\n        `package p; import (foo \"foobar\"); var _ = foo.Bar()`,\n        ['foo', {not:'variable'}], ['Bar', 'entity.name.function'],\n    ],\n    [\n        `package p\n        func f(a int) int {\n            switch a {\n            case 1: \n                x := 1\n                return x\n            case 2:\n                x := 2\n                return x\n            }\n        }`,\n        ['x', {not:'markup.underline'}]\n    ],\n    [\n        `package p\n        func f(a interface{}) int {\n            switch a.(type) {\n            case *int: \n                x := 1\n               
 return x\n            case *int:\n                x := 2\n                return x\n            }\n        }`,\n        ['x', {not:'markup.underline'}]\n    ],\n    [\n        `package p\n        func f(a interface{}) int {\n            for i := range 10 {\n                print(i)\n            }\n            for i := range 10 {\n                print(i)\n            }\n        }`,\n        ['i', {not:'markup.underline'}]\n    ],\n    [\n        `package p\n        func f(a interface{}) int {\n            if i := 1; i < 10 {\n                print(i)\n            }\n            if i := 1; i < 10 {\n                print(i)\n            }\n        }`,\n        ['i', {not:'markup.underline'}]\n    ],\n    [\n        `package p\n        func f(a interface{}) {\n            if aa, ok := a.(*type); ok {\n                print(aa)\n            }\n        }`,\n        ['aa', {not:'variable'}]\n    ],\n    [\n        `package p\n        func f(a interface{}) {\n            switch aa := a.(type) {\n                case *int:\n                    print(aa)\n            }\n        }`,\n        ['aa', {not:'variable'}]\n    ],\n    [\n        `package p\n        func f() {\n            switch aa.(type) {\n                case *int:\n                    print(aa)\n            }\n        }`,\n        ['aa', 'variable']\n    ],\n    [\n        `package p\n        func f(a interface{}) {\n            switch aa := a.(type) {\n                case *int:\n                    print(aa)\n            }\n            switch aa := a.(type) {\n                case *int:\n                    print(aa)\n            }\n        }`,\n        ['aa', {not:'markup.underline'}]\n    ],\n    [\n        `package p\n        func f(a ...int) {\n            print(a)\n        }`,\n        ['a', {not:'variable'}]\n    ],\n    [\n        `package p\n        type Foo interface {\n            foo(i int)\n        }`,\n        ['i', {not: 'variable'}]\n    ],\n    [\n        `package p\n        type Foo 
interface {\n            foo(i int)\n            bar(i int)\n        }`,\n        ['i', {not: 'markup.underline'}]\n    ],\n]\ntest(goTests, 'parsers/tree-sitter-go.wasm', colors.colorGo)\n\nconst rubyTests: TestCase[] = [\n    [\n        `def x.f\n            1\n        end`,\n        ['f', 'entity.name.function'],\n    ],\n    [\n        `def f\n            1\n        end`,\n        ['f', 'entity.name.function'],\n    ],\n    [\n        `class C\n            def f\n                @x = 1\n            end\n        end`,\n        ['@x', 'variable'],\n    ],\n    [\n        `class C\n            private\n            def f\n                1\n            end\n        end`,\n        ['C', 'entity.name.type'], ['private', 'keyword'], ['f', 'entity.name.function'], ['end', 'keyword'],\n    ],\n    [\n        `class C\n            private :f\n            def f\n                1\n            end\n        end`,\n        ['C', 'entity.name.type'], ['private', 'keyword'], [':f', 'constant.language'], ['private', {not:'entity.name.function'}], ['f', 'entity.name.function'], ['end', 'keyword'],\n    ],\n    [\n        `module M\n            private\n            def f\n                1\n            end\n        end`,\n        ['M', 'entity.name.type'], ['private', 'keyword'], ['f', 'entity.name.function'], ['end', 'keyword'],\n    ],\n    [\n        `module M\n            private :f\n            def f\n                1\n            end\n        end`,\n        ['M', 'entity.name.type'], ['private', 'keyword'], ['private', {not:'entity.name.function'}], [':f', 'constant.language'], ['f', 'entity.name.function'], ['end', 'keyword'],\n    ],\n    [\n        `while true\n            puts \"Hi\"\n        end`,\n        ['end', 'keyword.control'], ['end', {not: 'keyword'}],\n    ],\n    [\n        `foo 1`,\n        ['foo', 'entity.name.function'],\n    ],\n    [\n        `foo.bar`,\n        ['bar', 'entity.name.function'],\n    ],\n]\ntest(rubyTests, 
'parsers/tree-sitter-ruby.wasm', colors.colorRuby)\n\nasync function test(testCases: TestCase[], wasm: string, color: colors.ColorFunction) {\n    await Parser.init()\n    const parser = new Parser()\n    const lang = await Parser.Language.load(wasm)\n    parser.setLanguage(lang)\n    for (const [src, ...expect] of testCases) {\n        const tree = parser.parse(src)\n        const scope2ranges = color(tree, [{start: 0, end: tree.rootNode.endPosition.row}])\n        const code2scopes = new Map<string, Set<string>>()\n        for (const [scope, ranges] of scope2ranges) {\n            for (const range of ranges) {\n                const start = index(src, range.start)\n                const end = index(src, range.end)\n                const code = src.substring(start, end)\n                if (!code2scopes.has(code)) {\n                    code2scopes.set(code, new Set<string>())\n                }\n                code2scopes.get(code)!.add(scope)\n            }\n        }\n        function printSrcAndTree() {\n            console.error('Source:\\t' + src)\n            console.error('Parsed:\\t' + tree.rootNode.toString())\n        }\n        for (const [code, assert] of expect) {\n            if (typeof assert == 'string') {\n                const scope = assert\n                if (!code2scopes.has(code)) {\n                    console.error(`Error:\\tcode (${code}) was not found in (${join(code2scopes.keys())})`)\n                    printSrcAndTree()\n                    continue\n                }\n                const foundScopes = code2scopes.get(code)!\n                if (!foundScopes.has(scope)) {\n                    console.error(`Error:\\tscope (${scope}) was not among the scopes for (${code}) (${join(foundScopes.keys())})`)\n                    printSrcAndTree()\n                    continue\n                }\n            } else {\n                const scope = assert.not\n                if (!code2scopes.has(code)) {\n                    continue\n  
              }\n                const foundScopes = code2scopes.get(code)!\n                if (foundScopes.has(scope)) {\n                    console.error(`Error:\\tbanned scope (${scope}) was among the scopes for (${code}) (${join(foundScopes.keys())})`)\n                    printSrcAndTree()\n                    continue\n                }\n            }\n        }\n    }\n}\nfunction index(code: string, point: Parser.Point): number {\n    let row = 0\n    let column = 0\n    for (let i = 0; i < code.length; i++) {\n        if (row == point.row && column == point.column) {\n            return i\n        }\n        if (code[i] == '\\n') {\n            row++\n            column = 0\n        } else {\n            column++\n        }\n    }\n    return code.length\n}\nfunction join(strings: IterableIterator<string>) {\n    var result = ''\n    for (const s of strings) {\n        result = result + s + ', '\n    }\n    return result.substring(0, result.length - 2)\n}"
  },
  {
    "path": "textmate/cpp.tmLanguage.json",
    "content": "{\n\t\"$schema\": \"https://raw.githubusercontent.com/martinring/tmlanguage/master/tmlanguage.json\",\n\t\"version\": \"https://github.com/atom/language-c/commit/3a269f88b12e512fb9495dc006a1dabf325d3d7f\",\n\t\"name\": \"C++\",\n\t\"scopeName\": \"source.cpp\",\n\t\"patterns\": [\n\t\t{\n\t\t\t\"include\": \"#keywords\"\n\t\t},\n\t\t{\n\t\t\t\"include\": \"#constants\"\n\t\t},\n\t\t{\n\t\t\t\"include\": \"#strings\"\n\t\t},\n\t\t{\n\t\t\t\"include\": \"#comments\"\n\t\t},\n\t\t{\n\t\t\t\"include\": \"#numbers\"\n\t\t},\n\t\t{\n\t\t\t\"include\": \"#preprocessor-rule-enabled\"\n\t\t},\n\t\t{\n\t\t\t\"include\": \"#preprocessor-rule-disabled\"\n\t\t},\n\t\t{\n\t\t\t\"include\": \"#preprocessor-rule-conditional\"\n\t\t},\n\t\t{\n\t\t\t\"begin\": \"(?x)\\n^\\\\s* ((\\\\#)\\\\s*define) \\\\s+    # define\\n((?<id>[a-zA-Z_$][\\\\w$]*))      # macro name\\n(?:\\n  (\\\\()\\n    (\\n      \\\\s* \\\\g<id> \\\\s*         # first argument\\n      ((,) \\\\s* \\\\g<id> \\\\s*)*  # additional arguments\\n      (?:\\\\.\\\\.\\\\.)?            
# varargs ellipsis?\\n    )\\n  (\\\\))\\n)?\",\n\t\t\t\"beginCaptures\": {\n\t\t\t\t\"1\": {\n\t\t\t\t\t\"name\": \"keyword.other.directive.define.c\"\n\t\t\t\t},\n\t\t\t\t\"2\": {\n\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t},\n\t\t\t\t\"5\": {\n\t\t\t\t\t\"name\": \"punctuation.definition.parameters.begin.c\"\n\t\t\t\t},\n\t\t\t\t\"8\": {\n\t\t\t\t\t\"name\": \"punctuation.separator.parameters.c\"\n\t\t\t\t},\n\t\t\t\t\"9\": {\n\t\t\t\t\t\"name\": \"punctuation.definition.parameters.end.c\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"end\": \"(?=(?://|/\\\\*))|(?<!\\\\\\\\)(?=\\\\n)\",\n\t\t\t\"name\": \"meta.preprocessor.macro.c\",\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#preprocessor-rule-define-line-contents\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\t\"begin\": \"^\\\\s*((#)\\\\s*(include(?:_next)?|import))\\\\b\\\\s*\",\n\t\t\t\"beginCaptures\": {\n\t\t\t\t\"1\": {\n\t\t\t\t\t\"name\": \"keyword.other.directive.$3.c\"\n\t\t\t\t},\n\t\t\t\t\"2\": {\n\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"end\": \"(?=(?://|/\\\\*))|(?<!\\\\\\\\)(?=\\\\n)\",\n\t\t\t\"name\": \"meta.preprocessor.include.c\",\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#line_continuation_character\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"\\\"\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.string.begin.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \"\\\"\",\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.string.end.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"name\": \"string.quoted.double.include.c\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"<\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.string.begin.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \">\",\n\t\t\t\t\t\"endCaptures\": 
{\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.string.end.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"name\": \"string.quoted.other.lt-gt.include.c\"\n\t\t\t\t}\n\t\t\t]\n\t\t}\n\t],\n\t\"repository\": {\n\t\t\"keywords\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\b(alignas|alignof|and|and_eq|asm|atomic_cancel|atomic_commit|atomic_noexcept|auto|bitand|bitor|bool|char|char8_t|char16_t|char32_t|class|compl|concept|const|consteval|constexpr|const_cast|decltype|double|dynamic_cast|enum|explicit|export|extern|float|friend|inline|int|long|mutable|namespace|noexcept|not|not_eq|nullptr|operator|or|or_eq|private|protected|public|reflexpr|register|reinterpret_cast|requires|short|signed|sizeof|static|static_assert|static_cast|struct|template|this|thread_local|typedef|typeid|typename|union|unsigned|using|virtual|void|volatile|wchar_t|xor|xor_eq)\\\\b\",\n\t\t\t\t\t\"captures\": {\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"keyword.other.cpp\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\b(override|final|audit|axiom|import|module|transaction_safe|transaction_safe_dynamic)\\\\b\",\n\t\t\t\t\t\"captures\": {\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"keyword.other.cpp\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\b(_Pragma)\\\\b\",\n\t\t\t\t\t\"captures\": {\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"keyword.other.cpp\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\b(break|case|catch|continue|co_await|co_return|co_yield|default|delete|do|else|for|goto|if|new|return|switch|synchronized|throw|try|while)\\\\b\",\n\t\t\t\t\t\"captures\": {\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"keyword.control.cpp\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Terminator\",\n\t\t\t\t\t\"match\": \";\",\n\t\t\t\t\t\"name\": 
\"keyword.other.semi.cpp\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"constants\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\b(NULL|true|false|TRUE|FALSE)\\\\b\",\n\t\t\t\t\t\"name\": \"constant.language.cpp\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"strings\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"(u|u8|U|L)?R\\\"(?:([^ ()\\\\\\\\\\\\t]{0,16})|([^ ()\\\\\\\\\\\\t]*))\\\\(\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.string.begin.cpp\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"meta.encoding.cpp\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"3\": {\n\t\t\t\t\t\t\t\"name\": \"invalid.illegal.delimiter-too-long.cpp\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \"\\\\)\\\\2(\\\\3)\\\"\",\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.string.end.cpp\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"invalid.illegal.delimiter-too-long.cpp\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"name\": \"string.quoted.double.raw.cpp\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"(u|u8|U|L)?\\\"\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.string.begin.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"meta.encoding.cpp\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \"\\\"\",\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.string.end.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"name\": \"string.quoted.double.c\",\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#string_escaped_char\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#string_placeholder\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#line_continuation_character\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"begin\": 
\"'\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.string.begin.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \"'\",\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.string.end.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"name\": \"string.quoted.single.c\",\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#string_escaped_char\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#line_continuation_character\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"string_escaped_char\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"(?x)\\\\\\\\ (\\n\\\\\\\\             |\\n[abefnprtv'\\\"?]   |\\n[0-3]\\\\d{,2}     |\\n[4-7]\\\\d?        |\\nx[a-fA-F0-9]{,2} |\\nu[a-fA-F0-9]{,4} |\\nU[a-fA-F0-9]{,8} )\",\n\t\t\t\t\t\"name\": \"constant.character.escape.c\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\\\\\.\",\n\t\t\t\t\t\"name\": \"invalid.illegal.unknown-escape.c\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"string_placeholder\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"(?x) %\\n(\\\\d+\\\\$)?                           # field (argument #)\\n[#0\\\\- +']*                          # flags\\n[,;:_]?                              # separator character (AltiVec)\\n((-?\\\\d+)|\\\\*(-?\\\\d+\\\\$)?)?          # minimum field width\\n(\\\\.((-?\\\\d+)|\\\\*(-?\\\\d+\\\\$)?)?)?    # precision\\n(hh|h|ll|l|j|t|z|q|L|vh|vl|v|hv|hl)? 
# length modifier\\n[diouxXDOUeEfFgGaACcSspn%]           # conversion type\",\n\t\t\t\t\t\"name\": \"constant.other.placeholder.c\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"(%)(?!\\\"\\\\s*(PRI|SCN))\",\n\t\t\t\t\t\"captures\": {\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"invalid.illegal.placeholder.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"comments\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"captures\": {\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"meta.toc-list.banner.block.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"match\": \"^/\\\\* =(\\\\s*.*?)\\\\s*= \\\\*/$\\\\n?\",\n\t\t\t\t\t\"name\": \"comment.block.c\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"/\\\\*\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.comment.begin.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \"\\\\*/\",\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.comment.end.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"name\": \"comment.block.c\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\*/.*\\\\n\",\n\t\t\t\t\t\"name\": \"invalid.illegal.stray-comment-end.c\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"captures\": {\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"meta.toc-list.banner.line.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"match\": \"^// =(\\\\s*.*?)\\\\s*=\\\\s*$\\\\n?\",\n\t\t\t\t\t\"name\": \"comment.line.banner.cpp\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"(^[ \\\\t]+)?(?=//)\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.whitespace.comment.leading.cpp\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \"(?!\\\\G)\",\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"begin\": \"//\",\n\t\t\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\t\t\"name\": 
\"punctuation.definition.comment.cpp\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"end\": \"(?=\\\\n)\",\n\t\t\t\t\t\t\t\"name\": \"comment.line.double-slash.cpp\",\n\t\t\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#line_continuation_character\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"numbers\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\b((0(x|X)[0-9a-fA-F]([0-9a-fA-F']*[0-9a-fA-F])?)|(0(b|B)[01]([01']*[01])?)|(([0-9]([0-9']*[0-9])?\\\\.?[0-9]*([0-9']*[0-9])?)|(\\\\.[0-9]([0-9']*[0-9])?))((e|E)(\\\\+|-)?[0-9]([0-9']*[0-9])?)?)(L|l|UL|ul|u|U|F|f|ll|LL|ull|ULL)?\\\\b\",\n\t\t\t\t\t\"name\": \"constant.numeric.c\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"line_continuation_character\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"(\\\\\\\\)\\\\n\",\n\t\t\t\t\t\"captures\": {\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"constant.character.escape.line-continuation.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"preprocessor-rule-conditional\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"^\\\\s*((#)\\\\s*if(?:n?def)?\\\\b)\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \"^\\\\s*((#)\\\\s*endif\\\\b)\",\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"patterns\": 
[\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"begin\": \"\\\\G(?=.)(?!//|/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))\",\n\t\t\t\t\t\t\t\"end\": \"(?=//)|(?=/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))|(?<!\\\\\\\\)(?=\\\\n)\",\n\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\",\n\t\t\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-conditional-line\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-enabled-elif\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-enabled-else\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-disabled-elif\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"begin\": \"^\\\\s*((#)\\\\s*elif\\\\b)\",\n\t\t\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"end\": \"(?=//)|(?=/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))|(?<!\\\\\\\\)(?=\\\\n)\",\n\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\",\n\t\t\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-conditional-line\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"$base\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"^\\\\s*#\\\\s*(else|elif|endif)\\\\b\",\n\t\t\t\t\t\"captures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"invalid.illegal.stray-$1.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"preprocessor-rule-conditional-block\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"^\\\\s*((#)\\\\s*if(?:n?def)?\\\\b)\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"1\": 
{\n\t\t\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \"^\\\\s*((#)\\\\s*endif\\\\b)\",\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"begin\": \"\\\\G(?=.)(?!//|/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))\",\n\t\t\t\t\t\t\t\"end\": \"(?=//)|(?=/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))|(?<!\\\\\\\\)(?=\\\\n)\",\n\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\",\n\t\t\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-conditional-line\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-enabled-elif-block\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-enabled-else-block\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-disabled-elif\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"begin\": \"^\\\\s*((#)\\\\s*elif\\\\b)\",\n\t\t\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"end\": \"(?=//)|(?=/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))|(?<!\\\\\\\\)(?=\\\\n)\",\n\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\",\n\t\t\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": 
\"#preprocessor-rule-conditional-line\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#block_innards\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"^\\\\s*#\\\\s*(else|elif|endif)\\\\b\",\n\t\t\t\t\t\"captures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"invalid.illegal.stray-$1.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"preprocessor-rule-conditional-line\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"(?:\\\\bdefined\\\\b\\\\s*$)|(?:\\\\bdefined\\\\b(?=\\\\s*\\\\(*\\\\s*(?:(?!defined\\\\b)[a-zA-Z_$][\\\\w$]*\\\\b)\\\\s*\\\\)*\\\\s*(?:\\\\n|//|/\\\\*|\\\\?|\\\\:|&&|\\\\|\\\\||\\\\\\\\\\\\s*\\\\n)))\",\n\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\bdefined\\\\b\",\n\t\t\t\t\t\"name\": \"invalid.illegal.macro-name.c\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#comments\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#strings\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#numbers\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"\\\\?\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"keyword.operator.ternary.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \":\",\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"keyword.operator.ternary.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-conditional-line\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#operators\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\b(NULL|true|false|TRUE|FALSE)\\\\b\",\n\t\t\t\t\t\"name\": \"constant.language.c\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#line_continuation_character\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"\\\\(\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"0\": 
{\n\t\t\t\t\t\t\t\"name\": \"punctuation.section.parens.begin.bracket.round.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \"\\\\)|(?=//)|(?=/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))|(?<!\\\\\\\\)(?=\\\\n)\",\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.section.parens.end.bracket.round.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-conditional-line\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"preprocessor-rule-disabled\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"^\\\\s*((#)\\\\s*if\\\\b)(?=\\\\s*\\\\(*\\\\b0+\\\\b\\\\)*\\\\s*(?:$|//|/\\\\*))\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \"^\\\\s*((#)\\\\s*endif\\\\b)\",\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"begin\": \"\\\\G(?=.)(?!//|/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))\",\n\t\t\t\t\t\t\t\"end\": \"(?=//)|(?=/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))|(?=\\\\n)\",\n\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\",\n\t\t\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-conditional-line\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#comments\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": 
\"#preprocessor-rule-enabled-elif\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-enabled-else\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-disabled-elif\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"begin\": \"^\\\\s*((#)\\\\s*elif\\\\b)\",\n\t\t\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"end\": \"(?=^\\\\s*((#)\\\\s*(?:elif|else|endif)\\\\b))\",\n\t\t\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"begin\": \"\\\\G(?=.)(?!//|/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))\",\n\t\t\t\t\t\t\t\t\t\"end\": \"(?=//)|(?=/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))|(?<!\\\\\\\\)(?=\\\\n)\",\n\t\t\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\",\n\t\t\t\t\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-conditional-line\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"$base\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"begin\": \"\\\\n\",\n\t\t\t\t\t\t\t\"end\": \"(?=^\\\\s*((#)\\\\s*(?:else|elif|endif)\\\\b))\",\n\t\t\t\t\t\t\t\"contentName\": \"comment.block.preprocessor.if-branch.c\",\n\t\t\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#disabled\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#pragma-mark\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"preprocessor-rule-disabled-block\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"begin\": 
\"^\\\\s*((#)\\\\s*if\\\\b)(?=\\\\s*\\\\(*\\\\b0+\\\\b\\\\)*\\\\s*(?:$|//|/\\\\*))\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \"^\\\\s*((#)\\\\s*endif\\\\b)\",\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"begin\": \"\\\\G(?=.)(?!//|/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))\",\n\t\t\t\t\t\t\t\"end\": \"(?=//)|(?=/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))|(?=\\\\n)\",\n\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\",\n\t\t\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-conditional-line\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#comments\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-enabled-elif-block\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-enabled-else-block\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-disabled-elif\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"begin\": \"^\\\\s*((#)\\\\s*elif\\\\b)\",\n\t\t\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\t\t\"name\": 
\"punctuation.definition.directive.c\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"end\": \"(?=^\\\\s*((#)\\\\s*(?:elif|else|endif)\\\\b))\",\n\t\t\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"begin\": \"\\\\G(?=.)(?!//|/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))\",\n\t\t\t\t\t\t\t\t\t\"end\": \"(?=//)|(?=/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))|(?<!\\\\\\\\)(?=\\\\n)\",\n\t\t\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\",\n\t\t\t\t\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-conditional-line\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#block_innards\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"begin\": \"\\\\n\",\n\t\t\t\t\t\t\t\"end\": \"(?=^\\\\s*((#)\\\\s*(?:else|elif|endif)\\\\b))\",\n\t\t\t\t\t\t\t\"contentName\": \"comment.block.preprocessor.if-branch.in-block.c\",\n\t\t\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#disabled\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#pragma-mark\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"preprocessor-rule-disabled-elif\": {\n\t\t\t\"begin\": \"^\\\\s*((#)\\\\s*elif\\\\b)(?=\\\\s*\\\\(*\\\\b0+\\\\b\\\\)*\\\\s*(?:$|//|/\\\\*))\",\n\t\t\t\"beginCaptures\": {\n\t\t\t\t\"0\": {\n\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t},\n\t\t\t\t\"1\": {\n\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t},\n\t\t\t\t\"2\": {\n\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"end\": \"(?=^\\\\s*((#)\\\\s*(?:elif|else|endif)\\\\b))\",\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"\\\\G(?=.)(?!//|/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))\",\n\t\t\t\t\t\"end\": \"(?=//)|(?=/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))|(?<!\\\\\\\\)(?=\\\\n)\",\n\t\t\t\t\t\"name\": 
\"meta.preprocessor.c\",\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-conditional-line\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#comments\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"\\\\n\",\n\t\t\t\t\t\"end\": \"(?=^\\\\s*((#)\\\\s*(?:else|elif|endif)\\\\b))\",\n\t\t\t\t\t\"contentName\": \"comment.block.preprocessor.elif-branch.c\",\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#disabled\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#pragma-mark\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"preprocessor-rule-enabled\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"^\\\\s*((#)\\\\s*if\\\\b)(?=\\\\s*\\\\(*\\\\b0*1\\\\b\\\\)*\\\\s*(?:$|//|/\\\\*))\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"3\": {\n\t\t\t\t\t\t\t\"name\": \"constant.numeric.preprocessor.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \"^\\\\s*((#)\\\\s*endif\\\\b)\",\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"begin\": \"\\\\G(?=.)(?!//|/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))\",\n\t\t\t\t\t\t\t\"end\": \"(?=//)|(?=/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))|(?=\\\\n)\",\n\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\",\n\t\t\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": 
\"#preprocessor-rule-conditional-line\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#comments\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"begin\": \"^\\\\s*((#)\\\\s*else\\\\b)\",\n\t\t\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"end\": \"(?=^\\\\s*((#)\\\\s*endif\\\\b))\",\n\t\t\t\t\t\t\t\"contentName\": \"comment.block.preprocessor.else-branch.c\",\n\t\t\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#disabled\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#pragma-mark\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"begin\": \"^\\\\s*((#)\\\\s*elif\\\\b)\",\n\t\t\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"end\": \"(?=^\\\\s*((#)\\\\s*(?:else|elif|endif)\\\\b))\",\n\t\t\t\t\t\t\t\"contentName\": \"comment.block.preprocessor.if-branch.c\",\n\t\t\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#disabled\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#pragma-mark\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"begin\": \"\\\\n\",\n\t\t\t\t\t\t\t\"end\": \"(?=^\\\\s*((#)\\\\s*(?:else|elif|endif)\\\\b))\",\n\t\t\t\t\t\t\t\"patterns\": 
[\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"$base\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"preprocessor-rule-enabled-block\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"^\\\\s*((#)\\\\s*if\\\\b)(?=\\\\s*\\\\(*\\\\b0*1\\\\b\\\\)*\\\\s*(?:$|//|/\\\\*))\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \"^\\\\s*((#)\\\\s*endif\\\\b)\",\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"begin\": \"\\\\G(?=.)(?!//|/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))\",\n\t\t\t\t\t\t\t\"end\": \"(?=//)|(?=/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))|(?=\\\\n)\",\n\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\",\n\t\t\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-conditional-line\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#comments\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"begin\": \"^\\\\s*((#)\\\\s*else\\\\b)\",\n\t\t\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\t\t\"name\": 
\"punctuation.definition.directive.c\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"end\": \"(?=^\\\\s*((#)\\\\s*endif\\\\b))\",\n\t\t\t\t\t\t\t\"contentName\": \"comment.block.preprocessor.else-branch.in-block.c\",\n\t\t\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#disabled\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#pragma-mark\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"begin\": \"^\\\\s*((#)\\\\s*elif\\\\b)\",\n\t\t\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"end\": \"(?=^\\\\s*((#)\\\\s*(?:else|elif|endif)\\\\b))\",\n\t\t\t\t\t\t\t\"contentName\": \"comment.block.preprocessor.if-branch.in-block.c\",\n\t\t\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#disabled\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#pragma-mark\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"begin\": \"\\\\n\",\n\t\t\t\t\t\t\t\"end\": \"(?=^\\\\s*((#)\\\\s*(?:else|elif|endif)\\\\b))\",\n\t\t\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#block_innards\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"preprocessor-rule-enabled-elif\": {\n\t\t\t\"begin\": \"^\\\\s*((#)\\\\s*elif\\\\b)(?=\\\\s*\\\\(*\\\\b0*1\\\\b\\\\)*\\\\s*(?:$|//|/\\\\*))\",\n\t\t\t\"beginCaptures\": {\n\t\t\t\t\"0\": {\n\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t},\n\t\t\t\t\"1\": {\n\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t},\n\t\t\t\t\"2\": 
{\n\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"end\": \"(?=^\\\\s*((#)\\\\s*endif\\\\b))\",\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"\\\\G(?=.)(?!//|/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))\",\n\t\t\t\t\t\"end\": \"(?=//)|(?=/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))|(?<!\\\\\\\\)(?=\\\\n)\",\n\t\t\t\t\t\"name\": \"meta.preprocessor.c\",\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-conditional-line\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#comments\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"\\\\n\",\n\t\t\t\t\t\"end\": \"(?=^\\\\s*((#)\\\\s*(?:endif)\\\\b))\",\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"begin\": \"^\\\\s*((#)\\\\s*(else)\\\\b)\",\n\t\t\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"end\": \"(?=^\\\\s*((#)\\\\s*endif\\\\b))\",\n\t\t\t\t\t\t\t\"contentName\": \"comment.block.preprocessor.elif-branch.c\",\n\t\t\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#disabled\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#pragma-mark\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"begin\": \"^\\\\s*((#)\\\\s*(elif)\\\\b)\",\n\t\t\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\t\t\"name\": 
\"punctuation.definition.directive.c\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"end\": \"(?=^\\\\s*((#)\\\\s*(?:else|elif|endif)\\\\b))\",\n\t\t\t\t\t\t\t\"contentName\": \"comment.block.preprocessor.elif-branch.c\",\n\t\t\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#disabled\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#pragma-mark\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"$base\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"preprocessor-rule-enabled-elif-block\": {\n\t\t\t\"begin\": \"^\\\\s*((#)\\\\s*elif\\\\b)(?=\\\\s*\\\\(*\\\\b0*1\\\\b\\\\)*\\\\s*(?:$|//|/\\\\*))\",\n\t\t\t\"beginCaptures\": {\n\t\t\t\t\"0\": {\n\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t},\n\t\t\t\t\"1\": {\n\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t},\n\t\t\t\t\"2\": {\n\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"end\": \"(?=^\\\\s*((#)\\\\s*endif\\\\b))\",\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"\\\\G(?=.)(?!//|/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))\",\n\t\t\t\t\t\"end\": \"(?=//)|(?=/\\\\*(?!.*\\\\\\\\\\\\s*\\\\n))|(?<!\\\\\\\\)(?=\\\\n)\",\n\t\t\t\t\t\"name\": \"meta.preprocessor.c\",\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-conditional-line\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#comments\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"\\\\n\",\n\t\t\t\t\t\"end\": \"(?=^\\\\s*((#)\\\\s*(?:endif)\\\\b))\",\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"begin\": \"^\\\\s*((#)\\\\s*(else)\\\\b)\",\n\t\t\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\t\t\"name\": 
\"keyword.other.directive.conditional.c\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"end\": \"(?=^\\\\s*((#)\\\\s*endif\\\\b))\",\n\t\t\t\t\t\t\t\"contentName\": \"comment.block.preprocessor.elif-branch.in-block.c\",\n\t\t\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#disabled\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#pragma-mark\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"begin\": \"^\\\\s*((#)\\\\s*(elif)\\\\b)\",\n\t\t\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"end\": \"(?=^\\\\s*((#)\\\\s*(?:else|elif|endif)\\\\b))\",\n\t\t\t\t\t\t\t\"contentName\": \"comment.block.preprocessor.elif-branch.c\",\n\t\t\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#disabled\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"include\": \"#pragma-mark\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#block_innards\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"preprocessor-rule-enabled-else\": {\n\t\t\t\"begin\": \"^\\\\s*((#)\\\\s*else\\\\b)\",\n\t\t\t\"beginCaptures\": {\n\t\t\t\t\"0\": {\n\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t},\n\t\t\t\t\"1\": {\n\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t},\n\t\t\t\t\"2\": {\n\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"end\": 
\"(?=^\\\\s*((#)\\\\s*endif\\\\b))\",\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"$base\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"preprocessor-rule-enabled-else-block\": {\n\t\t\t\"begin\": \"^\\\\s*((#)\\\\s*else\\\\b)\",\n\t\t\t\"beginCaptures\": {\n\t\t\t\t\"0\": {\n\t\t\t\t\t\"name\": \"meta.preprocessor.c\"\n\t\t\t\t},\n\t\t\t\t\"1\": {\n\t\t\t\t\t\"name\": \"keyword.other.directive.conditional.c\"\n\t\t\t\t},\n\t\t\t\t\"2\": {\n\t\t\t\t\t\"name\": \"punctuation.definition.directive.c\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"end\": \"(?=^\\\\s*((#)\\\\s*endif\\\\b))\",\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#block_innards\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"preprocessor-rule-define-line-contents\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#vararg_ellipses\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"{\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.section.block.begin.bracket.curly.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \"}|(?=\\\\s*#\\\\s*(?:elif|else|endif)\\\\b)|(?<!\\\\\\\\)(?=\\\\s*\\\\n)\",\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.section.block.end.bracket.curly.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"name\": \"meta.block.c\",\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-define-line-blocks\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\(\",\n\t\t\t\t\t\"name\": \"punctuation.section.parens.begin.bracket.round.c\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\)\",\n\t\t\t\t\t\"name\": \"punctuation.section.parens.end.bracket.round.c\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"\\\"\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.string.begin.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": 
\"\\\"|(?<!\\\\\\\\)(?=\\\\s*\\\\n)\",\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.string.end.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"name\": \"string.quoted.double.c\",\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#string_escaped_char\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#string_placeholder\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#line_continuation_character\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"'\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.string.begin.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \"'|(?<!\\\\\\\\)(?=\\\\s*\\\\n)\",\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.string.end.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"name\": \"string.quoted.single.c\",\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#string_escaped_char\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#line_continuation_character\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#access\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#libc\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"$base\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"preprocessor-rule-define-line-blocks\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"{\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.section.block.begin.bracket.curly.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \"}|(?=\\\\s*#\\\\s*(?:elif|else|endif)\\\\b)|(?<!\\\\\\\\)(?=\\\\s*\\\\n)\",\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.section.block.end.bracket.curly.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": 
\"#preprocessor-rule-define-line-blocks\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-define-line-contents\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#preprocessor-rule-define-line-contents\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"preprocessor-rule-define-line-functions\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#comments\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#vararg_ellipses\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#access\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#operators\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"(?x)\\n(?!(?:while|for|do|if|else|switch|catch|enumerate|return|typeid|alignof|alignas|sizeof|[cr]?iterate)\\\\s*\\\\()\\n(\\n(?:[A-Za-z_][A-Za-z0-9_]*+|::)++  # actual name\\n|\\n(?:(?<=operator)(?:[-*&<>=+!]+|\\\\(\\\\)|\\\\[\\\\]))\\n)\\n\\\\s*(\\\\()\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.section.arguments.begin.bracket.round.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \"(\\\\))|(?<!\\\\\\\\)(?=\\\\s*\\\\n)\",\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.section.arguments.end.bracket.round.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#preprocessor-rule-define-line-functions\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"\\\\(\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.section.parens.begin.bracket.round.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \"(\\\\))|(?<!\\\\\\\\)(?=\\\\s*\\\\n)\",\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.section.parens.end.bracket.round.c\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": 
\"#preprocessor-rule-define-line-functions\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#preprocessor-rule-define-line-contents\"\n\t\t\t\t}\n\t\t\t]\n\t\t}\n\t}\n}"
  },
  {
    "path": "textmate/go.tmLanguage.json",
    "content": "{\n\t\"$schema\": \"https://raw.githubusercontent.com/martinring/tmlanguage/master/tmlanguage.json\",\n\t\"version\": \"https://github.com/atom/language-go/commit/b6fd68f74efa109679e31fe6f4a41ac105262d0e\",\n\t\"name\": \"Go\",\n\t\"scopeName\": \"source.go\",\n\t\"comment\": \"Go language\",\n\t\"patterns\": [\n\t\t{\n\t\t\t\"include\": \"#comments\"\n\t\t},\n\t\t{\n\t\t\t\"comment\": \"Interpreted string literals\",\n\t\t\t\"begin\": \"\\\"\",\n\t\t\t\"end\": \"\\\"\",\n\t\t\t\"name\": \"string.quoted.double.go\",\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#string_escaped_char\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#string_placeholder\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\t\"comment\": \"Raw string literals\",\n\t\t\t\"begin\": \"`\",\n\t\t\t\"end\": \"`\",\n\t\t\t\"name\": \"string.quoted.raw.go\",\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#string_placeholder\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\t\"comment\": \"Syntax error receiving channels\",\n\t\t\t\"match\": \"<\\\\-([\\\\t ]+)chan\\\\b\",\n\t\t\t\"captures\": {\n\t\t\t\t\"1\": {\n\t\t\t\t\t\"name\": \"invalid.illegal.receive-channel.go\"\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"comment\": \"Syntax error sending channels\",\n\t\t\t\"match\": \"\\\\bchan([\\\\t ]+)<-\",\n\t\t\t\"captures\": {\n\t\t\t\t\"1\": {\n\t\t\t\t\t\"name\": \"invalid.illegal.send-channel.go\"\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"comment\": \"Syntax error using slices\",\n\t\t\t\"match\": \"\\\\[\\\\](\\\\s+)\",\n\t\t\t\"captures\": {\n\t\t\t\t\"1\": {\n\t\t\t\t\t\"name\": \"invalid.illegal.slice.go\"\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"comment\": \"Floating-point literals\",\n\t\t\t\"match\": \"(\\\\.\\\\d+([Ee][-+]\\\\d+)?i?)\\\\b|\\\\b\\\\d+\\\\.\\\\d*(([Ee][-+]\\\\d+)?i?\\\\b)?\",\n\t\t\t\"name\": \"constant.numeric.floating-point.go\"\n\t\t},\n\t\t{\n\t\t\t\"comment\": \"Integers\",\n\t\t\t\"match\": 
\"\\\\b((0b[0-9]+)|(0x[0-9a-fA-F]+)|(0[0-7]+i?)|(\\\\d+([Ee]\\\\d+)?i?)|(\\\\d+[Ee][-+]\\\\d+i?))\\\\b\",\n\t\t\t\"name\": \"constant.numeric.integer.go\"\n\t\t},\n\t\t{\n\t\t\t\"comment\": \"Language constants\",\n\t\t\t\"match\": \"\\\\b(true|false|nil|iota)\\\\b\",\n\t\t\t\"name\": \"constant.numeric.language.go\"\n\t\t},\n\t\t{\n\t\t\t\"comment\": \"Terminators\",\n\t\t\t\"match\": \";\",\n\t\t\t\"name\": \"keyword.other.semi.go\"\n\t\t},\n\t\t{\n\t\t\t\"include\": \"#keywords\"\n\t\t},\n\t\t{\n\t\t\t\"include\": \"#operators\"\n\t\t},\n\t\t{\n\t\t\t\"include\": \"#runes\"\n\t\t}\n\t],\n\t\"repository\": {\n\t\t\"comments\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"/\\\\*\",\n\t\t\t\t\t\"end\": \"\\\\*/\",\n\t\t\t\t\t\"name\": \"comment.block.go\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"//\",\n\t\t\t\t\t\"end\": \"$\",\n\t\t\t\t\t\"name\": \"comment.line.double-slash.go\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"keywords\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Flow control keywords\",\n\t\t\t\t\t\"match\": \"\\\\b(break|case|continue|default|defer|panic|recover|else|fallthrough|for|go|goto|if|range|return|select|switch)\\\\b\",\n\t\t\t\t\t\"name\": \"keyword.control.go\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\bchan\\\\b\",\n\t\t\t\t\t\"name\": \"keyword.channel.go\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\bconst\\\\b\",\n\t\t\t\t\t\"name\": \"keyword.const.go\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\bfunc\\\\b\",\n\t\t\t\t\t\"name\": \"keyword.function.go\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\binterface\\\\b\",\n\t\t\t\t\t\"name\": \"keyword.interface.go\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\bmap\\\\b\",\n\t\t\t\t\t\"name\": \"keyword.map.go\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\bstruct\\\\b\",\n\t\t\t\t\t\"name\": \"keyword.struct.go\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Syntax error numeric literals\",\n\t\t\t\t\t\"match\": 
\"\\\\b0[0-7]*[89]\\\\d*\\\\b\",\n\t\t\t\t\t\"name\": \"invalid.illegal.numeric.go\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Functions\",\n\t\t\t\t\t\"match\": \"\\\\bfunc\\\\b\",\n\t\t\t\t\t\"name\": \"keyword.function.go\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\bpackage\\\\b\",\n\t\t\t\t\t\"name\": \"keyword.package.go\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\btype\\\\b\",\n\t\t\t\t\t\"name\": \"keyword.type.go\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\bimport\\\\b\",\n\t\t\t\t\t\"name\": \"keyword.import.go\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\bvar\\\\b\",\n\t\t\t\t\t\"name\": \"keyword.var.go\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"operators\": {\n\t\t\t\"comment\": \"Note that the order here is very important!\",\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"(\\\\*|&)(?=\\\\w)\",\n\t\t\t\t\t\"name\": \"keyword.operator.address.go\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"<\\\\-\",\n\t\t\t\t\t\"name\": \"keyword.operator.channel.go\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\-\\\\-\",\n\t\t\t\t\t\"name\": \"keyword.operator.decrement.go\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\+\\\\+\",\n\t\t\t\t\t\"name\": \"keyword.operator.increment.go\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"(==|!=|<=|>=|<(?!<)|>(?!>))\",\n\t\t\t\t\t\"name\": \"keyword.operator.comparison.go\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"(&&|\\\\|\\\\||!)\",\n\t\t\t\t\t\"name\": \"keyword.operator.logical.go\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"(=|\\\\+=|\\\\-=|\\\\|=|\\\\^=|\\\\*=|/=|:=|%=|<<=|>>=|&\\\\^=|&=)\",\n\t\t\t\t\t\"name\": \"keyword.operator.assignment.go\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"(\\\\+|\\\\-|\\\\*|/|%)\",\n\t\t\t\t\t\"name\": \"keyword.operator.arithmetic.go\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"(&(?!\\\\^)|\\\\||\\\\^|&\\\\^|<<|>>)\",\n\t\t\t\t\t\"name\": 
\"keyword.operator.arithmetic.bitwise.go\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\.\\\\.\\\\.\",\n\t\t\t\t\t\"name\": \"keyword.operator.ellipsis.go\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"runes\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"'\",\n\t\t\t\t\t\"end\": \"'\",\n\t\t\t\t\t\"name\": \"string.quoted.rune.go\",\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": \"\\\\G(\\\\\\\\([0-7]{3}|[abfnrtv\\\\\\\\'\\\"]|x[0-9a-fA-F]{2}|u[0-9a-fA-F]{4}|U[0-9a-fA-F]{8})|.)(?=')\",\n\t\t\t\t\t\t\t\"name\": \"constant.other.rune.go\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"match\": \"[^']+\",\n\t\t\t\t\t\t\t\"name\": \"invalid.illegal.unknown-rune.go\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"string_escaped_char\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\\\\\([0-7]{3}|[abfnrtv\\\\\\\\'\\\"]|x[0-9a-fA-F]{2}|u[0-9a-fA-F]{4}|U[0-9a-fA-F]{8})\",\n\t\t\t\t\t\"name\": \"constant.character.escape.go\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\\\\\[^0-7xuUabfnrtv\\\\'\\\"]\",\n\t\t\t\t\t\"name\": \"invalid.illegal.unknown-escape.go\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"string_placeholder\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"%(\\\\[\\\\d+\\\\])?([\\\\+#\\\\-0\\\\x20]{,2}((\\\\d+|\\\\*)?(\\\\.?(\\\\d+|\\\\*|(\\\\[\\\\d+\\\\])\\\\*?)?(\\\\[\\\\d+\\\\])?)?))?[vT%tbcdoqxXUbeEfFgGsp]\",\n\t\t\t\t\t\"name\": \"constant.other.placeholder.go\"\n\t\t\t\t}\n\t\t\t]\n\t\t}\n\t}\n}"
  },
  {
    "path": "textmate/ruby.tmLanguage.json",
"content": "{\n    \"$schema\": \"https://raw.githubusercontent.com/martinring/tmlanguage/master/tmlanguage.json\",\n    \"name\": \"Ruby\",\n    \"scopeName\": \"source.ruby\",\n    \"patterns\": [\n        {\n            \"match\": \"\\\\b(__ENCODING__|__LINE__|__FILE__|alias|and|class|def|module|not|or|self|super|undef)\\\\b\",\n            \"name\": \"keyword.other.ruby\"\n        },\n        {\n            \"match\": \"\\\\b(BEGIN|END|begin|break|case|do|else|elsif|ensure|for|if|in|next|redo|rescue|retry|return|then|unless|until|when|while|yield)\\\\b\",\n            \"name\": \"keyword.control.ruby\"\n        },\n        {\n            \"match\": \"\\\\bdefined[?]\",\n            \"name\": \"keyword.control.ruby\"\n        },\n        {\n            \"comment\": \"These aren't actual keywords, but they're builtin methods that basically function as keywords\",\n            \"match\": \"\\\\b(new|loop|raise|fail|catch|throw)\\\\b\",\n            \"name\": \"keyword.control.special-method.ruby\"\n        },\n        {\n            \"comment\": \"These aren't actual keywords, but they're builtin methods that basically function as keywords\",\n            \"match\": \"\\\\b(refine|using)\\\\b(?![?!])\",\n            \"name\": \"keyword.other.special-method.ruby\"\n        },\n        {\n            \"match\": \"\\\\b(false|nil|true)\\\\b\",\n            \"name\": \"constant.language.ruby\"\n        },\n        {\n            \"match\":\n                \"\\\\b\\\\d(?>_?\\\\d)*(?=\\\\.\\\\d|[eE])(\\\\.\\\\d(?>_?\\\\d)*)?([eE][-+]?\\\\d(?>_?\\\\d)*)?r?i?\\\\b\",\n            \"name\": \"constant.numeric.float.ruby\"\n        },\n        {\n            \"match\": \"\\\\b(0|(0[dD]\\\\d|[1-9])(?>_?\\\\d)*)r?i?\\\\b\",\n            \"name\": \"constant.numeric.integer.ruby\"\n        },\n        {\n            \"match\": \"\\\\b0[xX]\\\\h(?>_?\\\\h)*r?i?\\\\b\",\n            \"name\": \"constant.numeric.hex.ruby\"\n        },\n        {\n            
\"match\": \"\\\\b0[bB][01](?>_?[01])*r?i?\\\\b\",\n            \"name\": \"constant.numeric.binary.ruby\"\n        },\n        {\n            \"match\": \"\\\\b0([oO]?[0-7](?>_?[0-7])*)?r?i?\\\\b\",\n            \"name\": \"constant.numeric.octal.ruby\"\n        },\n        {\n            \"comment\": \"Needs higher precedence than regular expressions.\",\n            \"match\": \"(?<!\\\\()/=\",\n            \"name\": \"keyword.operator.assignment.augmented.ruby\"\n        },\n        {\n            \"begin\": \"'\",\n            \"comment\": \"single quoted string (does not allow interpolation)\",\n            \"end\": \"'\",\n            \"name\": \"string.quoted.single.ruby\",\n            \"patterns\": [\n                {\n                    \"match\": \"\\\\\\\\'|\\\\\\\\\\\\\\\\\",\n                    \"name\": \"constant.character.escape.ruby\"\n                }\n            ]\n        },\n        {\n            \"begin\": \"\\\"\",\n            \"comment\": \"double quoted string (allows for interpolation)\",\n            \"end\": \"\\\"\",\n            \"name\": \"string.quoted.double.ruby\",\n            \"patterns\": [\n                {\n                    \"include\": \"#interpolated_ruby\"\n                },\n                {\n                    \"include\": \"#escaped_char\"\n                }\n            ]\n        },\n        {\n            \"begin\": \"`\",\n            \"comment\": \"execute string (allows for interpolation)\",\n            \"end\": \"`\",\n            \"name\": \"string.interpolated.ruby\",\n            \"patterns\": [\n                {\n                    \"include\": \"#interpolated_ruby\"\n                },\n                {\n                    \"include\": \"#escaped_char\"\n                }\n            ]\n        },\n        {\n            \"include\": \"#percent_literals\"\n        },\n        {\n            \"begin\":\n                \"(?x)\\n\\t\\t\\t   (?:\\n\\t\\t\\t     ^                      # 
beginning of line\\n\\t\\t\\t   | (?<=                   # or look-behind on:\\n\\t\\t\\t       [=>~(?:\\\\[,|&;]\\n\\t\\t\\t     | [\\\\s;]if\\\\s\\t\\t\\t# keywords\\n\\t\\t\\t     | [\\\\s;]elsif\\\\s\\n\\t\\t\\t     | [\\\\s;]while\\\\s\\n\\t\\t\\t     | [\\\\s;]unless\\\\s\\n\\t\\t\\t     | [\\\\s;]when\\\\s\\n\\t\\t\\t     | [\\\\s;]assert_match\\\\s\\n\\t\\t\\t     | [\\\\s;]or\\\\s\\t\\t\\t# boolean operators\\n\\t\\t\\t     | [\\\\s;]and\\\\s\\n\\t\\t\\t     | [\\\\s;]not\\\\s\\n\\t\\t\\t     | [\\\\s.]index\\\\s\\t\\t\\t# methods\\n\\t\\t\\t     | [\\\\s.]scan\\\\s\\n\\t\\t\\t     | [\\\\s.]sub\\\\s\\n\\t\\t\\t     | [\\\\s.]sub!\\\\s\\n\\t\\t\\t     | [\\\\s.]gsub\\\\s\\n\\t\\t\\t     | [\\\\s.]gsub!\\\\s\\n\\t\\t\\t     | [\\\\s.]match\\\\s\\n\\t\\t\\t     )\\n\\t\\t\\t   | (?<=                  # or a look-behind with line anchor:\\n\\t\\t\\t        ^when\\\\s            # duplication necessary due to limits of regex\\n\\t\\t\\t      | ^if\\\\s\\n\\t\\t\\t      | ^elsif\\\\s\\n\\t\\t\\t      | ^while\\\\s\\n\\t\\t\\t      | ^unless\\\\s\\n\\t\\t\\t      )\\n\\t\\t\\t   )\\n\\t\\t\\t   \\\\s*((/))(?![*+{}?])\\n\\t\\t\\t\",\n            \"captures\": {\n                \"1\": {\n                    \"name\": \"string.regexp.classic.ruby\"\n                }\n            },\n            \"comment\":\n                \"regular expressions (normal)\\n\\t\\t\\twe only start a regexp if the character before it (excluding whitespace)\\n\\t\\t\\tis what we think is before a regexp\\n\\t\\t\\t\",\n            \"contentName\": \"string.regexp.classic.ruby\",\n            \"end\": \"((/[eimnosux]*))\",\n            \"patterns\": [\n                {\n                    \"include\": \"#regex_sub\"\n                }\n            ]\n        },\n        {\n            \"begin\": \"^=begin\",\n            \"comment\": \"multiline comments\",\n            \"end\": \"^=end\",\n            \"name\": \"comment.block.documentation.ruby\"\n        },\n        {\n          
  \"begin\": \"(^[ \\\\t]+)?(?=#)\",\n            \"beginCaptures\": {\n                \"1\": {\n                    \"name\": \"punctuation.whitespace.comment.leading.ruby\"\n                }\n            },\n            \"end\": \"(?!\\\\G)\",\n            \"patterns\": [\n                {\n                    \"begin\": \"#\",\n                    \"end\": \"\\\\n\",\n                    \"name\": \"comment.line.number-sign.ruby\"\n                }\n            ]\n        },\n        {\n            \"comment\":\n                \"\\n\\t\\t\\tmatches questionmark-letters.\\n\\n\\t\\t\\texamples (1st alternation = hex):\\n\\t\\t\\t?\\\\x1     ?\\\\x61\\n\\n\\t\\t\\texamples (2nd alternation = octal):\\n\\t\\t\\t?\\\\0      ?\\\\07     ?\\\\017\\n\\n\\t\\t\\texamples (3rd alternation = escaped):\\n\\t\\t\\t?\\\\n      ?\\\\b\\n\\n\\t\\t\\texamples (4th alternation = meta-ctrl):\\n\\t\\t\\t?\\\\C-a    ?\\\\M-a    ?\\\\C-\\\\M-\\\\C-\\\\M-a\\n\\n\\t\\t\\texamples (4th alternation = normal):\\n\\t\\t\\t?a       ?A       ?0 \\n\\t\\t\\t?*       ?\\\"       ?( \\n\\t\\t\\t?.       
?#\\n\\t\\t\\t\\n\\t\\t\\t\\n\\t\\t\\tthe negative lookbehind prevents matching\\n\\t\\t\\tp(42.tainted?)\\n\\t\\t\\t\",\n            \"match\":\n                \"(?<!\\\\w)\\\\?(\\\\\\\\(x\\\\h{1,2}(?!\\\\h)\\\\b|0[0-7]{0,2}(?![0-7])\\\\b|[^x0MC])|(\\\\\\\\[MC]-)+\\\\w|[^\\\\s\\\\\\\\])\",\n            \"name\": \"constant.numeric.ruby\"\n        },\n        {\n            \"begin\": \"^__END__\\\\n\",\n            \"captures\": {\n                \"0\": {\n                    \"name\": \"string.unquoted.program-block.ruby\"\n                }\n            },\n            \"comment\": \"__END__ marker\",\n            \"contentName\": \"text.plain\",\n            \"end\": \"(?=not)impossible\",\n            \"patterns\": [\n                {\n                    \"begin\": \"(?=<?xml|<(?i:html\\\\b)|!DOCTYPE (?i:html\\\\b))\",\n                    \"end\": \"(?=not)impossible\",\n                    \"name\": \"text.html.embedded.ruby\",\n                    \"patterns\": [\n                        {\n                            \"include\": \"text.html.basic\"\n                        }\n                    ]\n                }\n            ]\n        },\n        {\n            \"begin\": \"(?=(?><<[-~](\\\"?)((?:[_\\\\w]+_|)HTML)\\\\b\\\\1))\",\n            \"comment\": \"Heredoc with embedded html\",\n            \"end\": \"(?!\\\\G)\",\n            \"name\": \"meta.embedded.block.html\",\n            \"patterns\": [\n                {\n                    \"begin\": \"(?><<[-~](\\\"?)((?:[_\\\\w]+_|)HTML)\\\\b\\\\1)\",\n                    \"beginCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.begin.ruby\"\n                        }\n                    },\n                    \"contentName\": \"text.html\",\n                    \"end\": \"\\\\s*\\\\2$\\\\n?\",\n                    \"endCaptures\": {\n                        \"0\": {\n                            \"name\": 
\"punctuation.definition.string.end.ruby\"\n                        }\n                    },\n                    \"name\": \"string.unquoted.heredoc.ruby\",\n                    \"patterns\": [\n                        {\n                            \"include\": \"#heredoc\"\n                        },\n                        {\n                            \"include\": \"#interpolated_ruby\"\n                        },\n                        {\n                            \"include\": \"text.html.basic\"\n                        },\n                        {\n                            \"include\": \"#escaped_char\"\n                        }\n                    ]\n                }\n            ]\n        },\n        {\n            \"begin\": \"(?=(?><<[-~](\\\"?)((?:[_\\\\w]+_|)XML)\\\\b\\\\1))\",\n            \"comment\": \"Heredoc with embedded xml\",\n            \"end\": \"(?!\\\\G)\",\n            \"name\": \"meta.embedded.block.xml\",\n            \"patterns\": [\n                {\n                    \"begin\": \"(?><<[-~](\\\"?)((?:[_\\\\w]+_|)XML)\\\\b\\\\1)\",\n                    \"beginCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.begin.ruby\"\n                        }\n                    },\n                    \"contentName\": \"text.xml\",\n                    \"end\": \"\\\\s*\\\\2$\\\\n?\",\n                    \"endCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.end.ruby\"\n                        }\n                    },\n                    \"name\": \"string.unquoted.heredoc.ruby\",\n                    \"patterns\": [\n                        {\n                            \"include\": \"#heredoc\"\n                        },\n                        {\n                            \"include\": \"#interpolated_ruby\"\n                        },\n                        {\n                   
         \"include\": \"text.xml\"\n                        },\n                        {\n                            \"include\": \"#escaped_char\"\n                        }\n                    ]\n                }\n            ]\n        },\n        {\n            \"begin\": \"(?=(?><<[-~](\\\"?)((?:[_\\\\w]+_|)SQL)\\\\b\\\\1))\",\n            \"comment\": \"Heredoc with embedded sql\",\n            \"end\": \"(?!\\\\G)\",\n            \"name\": \"meta.embedded.block.sql\",\n            \"patterns\": [\n                {\n                    \"begin\": \"(?><<[-~](\\\"?)((?:[_\\\\w]+_|)SQL)\\\\b\\\\1)\",\n                    \"beginCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.begin.ruby\"\n                        }\n                    },\n                    \"contentName\": \"source.sql\",\n                    \"end\": \"\\\\s*\\\\2$\\\\n?\",\n                    \"endCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.end.ruby\"\n                        }\n                    },\n                    \"name\": \"string.unquoted.heredoc.ruby\",\n                    \"patterns\": [\n                        {\n                            \"include\": \"#heredoc\"\n                        },\n                        {\n                            \"include\": \"#interpolated_ruby\"\n                        },\n                        {\n                            \"include\": \"source.sql\"\n                        },\n                        {\n                            \"include\": \"#escaped_char\"\n                        }\n                    ]\n                }\n            ]\n        },\n        {\n            \"begin\": \"(?=(?><<[-~](\\\"?)((?:[_\\\\w]+_|)CSS)\\\\b\\\\1))\",\n            \"comment\": \"Heredoc with embedded css\",\n            \"end\": \"(?!\\\\G)\",\n            \"name\": 
\"meta.embedded.block.css\",\n            \"patterns\": [\n                {\n                    \"begin\": \"(?><<[-~](\\\"?)((?:[_\\\\w]+_|)CSS)\\\\b\\\\1)\",\n                    \"beginCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.begin.ruby\"\n                        }\n                    },\n                    \"contentName\": \"source.css\",\n                    \"end\": \"\\\\s*\\\\2$\\\\n?\",\n                    \"endCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.end.ruby\"\n                        }\n                    },\n                    \"name\": \"string.unquoted.heredoc.ruby\",\n                    \"patterns\": [\n                        {\n                            \"include\": \"#heredoc\"\n                        },\n                        {\n                            \"include\": \"#interpolated_ruby\"\n                        },\n                        {\n                            \"include\": \"source.css\"\n                        },\n                        {\n                            \"include\": \"#escaped_char\"\n                        }\n                    ]\n                }\n            ]\n        },\n        {\n            \"begin\": \"(?=(?><<[-~](\\\"?)((?:[_\\\\w]+_|)CPP)\\\\b\\\\1))\",\n            \"comment\": \"Heredoc with embedded c++\",\n            \"end\": \"(?!\\\\G)\",\n            \"name\": \"meta.embedded.block.c++\",\n            \"patterns\": [\n                {\n                    \"begin\": \"(?><<[-~](\\\"?)((?:[_\\\\w]+_|)CPP)\\\\b\\\\1)\",\n                    \"beginCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.begin.ruby\"\n                        }\n                    },\n                    \"contentName\": \"source.c++\",\n                    \"end\": 
\"\\\\s*\\\\2$\\\\n?\",\n                    \"endCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.end.ruby\"\n                        }\n                    },\n                    \"name\": \"string.unquoted.heredoc.ruby\",\n                    \"patterns\": [\n                        {\n                            \"include\": \"#heredoc\"\n                        },\n                        {\n                            \"include\": \"#interpolated_ruby\"\n                        },\n                        {\n                            \"include\": \"source.c++\"\n                        },\n                        {\n                            \"include\": \"#escaped_char\"\n                        }\n                    ]\n                }\n            ]\n        },\n        {\n            \"begin\": \"(?=(?><<[-~](\\\"?)((?:[_\\\\w]+_|)C)\\\\b\\\\1))\",\n            \"comment\": \"Heredoc with embedded c\",\n            \"end\": \"(?!\\\\G)\",\n            \"name\": \"meta.embedded.block.c\",\n            \"patterns\": [\n                {\n                    \"begin\": \"(?><<[-~](\\\"?)((?:[_\\\\w]+_|)C)\\\\b\\\\1)\",\n                    \"beginCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.begin.ruby\"\n                        }\n                    },\n                    \"contentName\": \"source.c\",\n                    \"end\": \"\\\\s*\\\\2$\\\\n?\",\n                    \"endCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.end.ruby\"\n                        }\n                    },\n                    \"name\": \"string.unquoted.heredoc.ruby\",\n                    \"patterns\": [\n                        {\n                            \"include\": \"#heredoc\"\n                        },\n                        {\n               
             \"include\": \"#interpolated_ruby\"\n                        },\n                        {\n                            \"include\": \"source.c\"\n                        },\n                        {\n                            \"include\": \"#escaped_char\"\n                        }\n                    ]\n                }\n            ]\n        },\n        {\n            \"begin\": \"(?=(?><<[-~](\\\"?)((?:[_\\\\w]+_|)(?:JS|JAVASCRIPT))\\\\b\\\\1))\",\n            \"comment\": \"Heredoc with embedded javascript\",\n            \"end\": \"(?!\\\\G)\",\n            \"name\": \"meta.embedded.block.js\",\n            \"patterns\": [\n                {\n                    \"begin\": \"(?><<[-~](\\\"?)((?:[_\\\\w]+_|)(?:JS|JAVASCRIPT))\\\\b\\\\1)\",\n                    \"beginCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.begin.ruby\"\n                        }\n                    },\n                    \"contentName\": \"source.js\",\n                    \"end\": \"\\\\s*\\\\2$\\\\n?\",\n                    \"endCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.end.ruby\"\n                        }\n                    },\n                    \"name\": \"string.unquoted.heredoc.ruby\",\n                    \"patterns\": [\n                        {\n                            \"include\": \"#heredoc\"\n                        },\n                        {\n                            \"include\": \"#interpolated_ruby\"\n                        },\n                        {\n                            \"include\": \"source.js\"\n                        },\n                        {\n                            \"include\": \"#escaped_char\"\n                        }\n                    ]\n                }\n            ]\n        },\n        {\n            \"begin\": 
\"(?=(?><<[-~](\\\"?)((?:[_\\\\w]+_|)JQUERY)\\\\b\\\\1))\",\n            \"comment\": \"Heredoc with embedded jQuery javascript\",\n            \"end\": \"(?!\\\\G)\",\n            \"name\": \"meta.embedded.block.js.jquery\",\n            \"patterns\": [\n                {\n                    \"begin\": \"(?><<[-~](\\\"?)((?:[_\\\\w]+_|)JQUERY)\\\\b\\\\1)\",\n                    \"beginCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.begin.ruby\"\n                        }\n                    },\n                    \"contentName\": \"source.js.jquery\",\n                    \"end\": \"\\\\s*\\\\2$\\\\n?\",\n                    \"endCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.end.ruby\"\n                        }\n                    },\n                    \"name\": \"string.unquoted.heredoc.ruby\",\n                    \"patterns\": [\n                        {\n                            \"include\": \"#heredoc\"\n                        },\n                        {\n                            \"include\": \"#interpolated_ruby\"\n                        },\n                        {\n                            \"include\": \"source.js.jquery\"\n                        },\n                        {\n                            \"include\": \"#escaped_char\"\n                        }\n                    ]\n                }\n            ]\n        },\n        {\n            \"begin\": \"(?=(?><<[-~](\\\"?)((?:[_\\\\w]+_|)(?:SH|SHELL))\\\\b\\\\1))\",\n            \"comment\": \"Heredoc with embedded shell\",\n            \"end\": \"(?!\\\\G)\",\n            \"name\": \"meta.embedded.block.shell\",\n            \"patterns\": [\n                {\n                    \"begin\": \"(?><<[-~](\\\"?)((?:[_\\\\w]+_|)(?:SH|SHELL))\\\\b\\\\1)\",\n                    \"beginCaptures\": {\n                        \"0\": {\n   
                         \"name\": \"punctuation.definition.string.begin.ruby\"\n                        }\n                    },\n                    \"contentName\": \"source.shell\",\n                    \"end\": \"\\\\s*\\\\2$\\\\n?\",\n                    \"endCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.end.ruby\"\n                        }\n                    },\n                    \"name\": \"string.unquoted.heredoc.ruby\",\n                    \"patterns\": [\n                        {\n                            \"include\": \"#heredoc\"\n                        },\n                        {\n                            \"include\": \"#interpolated_ruby\"\n                        },\n                        {\n                            \"include\": \"source.shell\"\n                        },\n                        {\n                            \"include\": \"#escaped_char\"\n                        }\n                    ]\n                }\n            ]\n        },\n        {\n            \"begin\": \"(?=(?><<[-~](\\\"?)((?:[_\\\\w]+_|)LUA)\\\\b\\\\1))\",\n            \"comment\": \"Heredoc with embedded lua\",\n            \"end\": \"(?!\\\\G)\",\n            \"name\": \"meta.embedded.block.lua\",\n            \"patterns\": [\n                {\n                    \"begin\": \"(?><<[-~](\\\"?)((?:[_\\\\w]+_|)LUA)\\\\b\\\\1)\",\n                    \"beginCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.begin.ruby\"\n                        }\n                    },\n                    \"contentName\": \"source.lua\",\n                    \"end\": \"\\\\s*\\\\2$\\\\n?\",\n                    \"endCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.end.ruby\"\n                        }\n                    },\n                    
\"name\": \"string.unquoted.heredoc.ruby\",\n                    \"patterns\": [\n                        {\n                            \"include\": \"#heredoc\"\n                        },\n                        {\n                            \"include\": \"#interpolated_ruby\"\n                        },\n                        {\n                            \"include\": \"source.lua\"\n                        },\n                        {\n                            \"include\": \"#escaped_char\"\n                        }\n                    ]\n                }\n            ]\n        },\n        {\n            \"begin\": \"(?=(?><<[-~](\\\"?)((?:[_\\\\w]+_|)RUBY)\\\\b\\\\1))\",\n            \"comment\": \"Heredoc with embedded ruby\",\n            \"end\": \"(?!\\\\G)\",\n            \"name\": \"meta.embedded.block.ruby\",\n            \"patterns\": [\n                {\n                    \"begin\": \"(?><<[-~](\\\"?)((?:[_\\\\w]+_|)RUBY)\\\\b\\\\1)\",\n                    \"beginCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.begin.ruby\"\n                        }\n                    },\n                    \"contentName\": \"source.ruby\",\n                    \"end\": \"\\\\s*\\\\2$\\\\n?\",\n                    \"endCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.end.ruby\"\n                        }\n                    },\n                    \"name\": \"string.unquoted.heredoc.ruby\",\n                    \"patterns\": [\n                        {\n                            \"include\": \"#heredoc\"\n                        },\n                        {\n                            \"include\": \"#interpolated_ruby\"\n                        },\n                        {\n                            \"include\": \"source.ruby\"\n                        },\n                        {\n                   
         \"include\": \"#escaped_char\"\n                        }\n                    ]\n                }\n            ]\n        },\n        {\n            \"begin\": \"(?>=\\\\s*<<(\\\\w+))\",\n            \"beginCaptures\": {\n                \"0\": {\n                    \"name\": \"punctuation.definition.string.begin.ruby\"\n                }\n            },\n            \"end\": \"^\\\\1$\",\n            \"endCaptures\": {\n                \"0\": {\n                    \"name\": \"punctuation.definition.string.end.ruby\"\n                }\n            },\n            \"name\": \"string.unquoted.heredoc.ruby\",\n            \"patterns\": [\n                {\n                    \"include\": \"#heredoc\"\n                },\n                {\n                    \"include\": \"#interpolated_ruby\"\n                },\n                {\n                    \"include\": \"#escaped_char\"\n                }\n            ]\n        },\n        {\n            \"begin\": \"(?><<[-~](\\\\w+))\",\n            \"beginCaptures\": {\n                \"0\": {\n                    \"name\": \"punctuation.definition.string.begin.ruby\"\n                }\n            },\n            \"comment\": \"heredoc with indented terminator\",\n            \"end\": \"\\\\s*\\\\1$\",\n            \"endCaptures\": {\n                \"0\": {\n                    \"name\": \"punctuation.definition.string.end.ruby\"\n                }\n            },\n            \"name\": \"string.unquoted.heredoc.ruby\",\n            \"patterns\": [\n                {\n                    \"include\": \"#heredoc\"\n                },\n                {\n                    \"include\": \"#interpolated_ruby\"\n                },\n                {\n                    \"include\": \"#escaped_char\"\n                }\n            ]\n        },\n        {\n            \"begin\": \"(?<=\\\\{|do|\\\\{\\\\s|do\\\\s)(\\\\|)\",\n            \"captures\": {\n                \"1\": {\n                    
\"name\": \"punctuation.separator.arguments.ruby\"\n                }\n            },\n            \"end\": \"(?<!\\\\|)(\\\\|)(?!\\\\|)\",\n            \"patterns\": [\n                {\n                    \"include\": \"$self\"\n                },\n                {\n                    \"match\": \"[_a-zA-Z][_a-zA-Z0-9]*\",\n                    \"name\": \"variable.other.block.ruby\"\n                },\n                {\n                    \"match\": \",\",\n                    \"name\": \"punctuation.separator.variable.ruby\"\n                }\n            ]\n        },\n        {\n            \"match\": \"=>\",\n            \"name\": \"punctuation.separator.key-value\"\n        },\n        {\n            \"match\": \"->\",\n            \"name\": \"support.function.kernel.lambda.ruby\"\n        },\n        {\n            \"match\": \"<<=|%=|&{1,2}=|\\\\*=|\\\\*\\\\*=|\\\\+=|-=|\\\\^=|\\\\|{1,2}=|<<\",\n            \"name\": \"keyword.operator.assignment.augmented.ruby\"\n        },\n        {\n            \"match\": \"<=>|<(?!<|=)|>(?!<|=|>)|<=|>=|===|==|=~|!=|!~|(?<=[ \\\\t])\\\\?\",\n            \"name\": \"keyword.operator.comparison.ruby\"\n        },\n        {\n            \"match\": \"(?<!\\\\.)\\\\b(and|not|or)\\\\b(?![?!])\",\n            \"name\": \"keyword.operator.logical.ruby\"\n        },\n        {\n            \"comment\": \"Make sure this goes after assignment and comparison\",\n            \"match\": \"(?<=^|[ \\\\t])!|&&|\\\\|\\\\||\\\\^\",\n            \"name\": \"keyword.operator.logical.ruby\"\n        },\n        {\n            \"captures\": {\n                \"1\": {\n                    \"name\": \"punctuation.separator.method.ruby\"\n                }\n            },\n            \"comment\": \"Safe navigation operator - Added in 2.3\",\n            \"match\": \"(&\\\\.)\\\\s*(?![A-Z])\"\n        },\n        {\n            \"match\": \"(%|&|\\\\*\\\\*|\\\\*|\\\\+|-|/)\",\n            \"name\": 
\"keyword.operator.arithmetic.ruby\"\n        },\n        {\n            \"match\": \"=\",\n            \"name\": \"keyword.operator.assignment.ruby\"\n        },\n        {\n            \"match\": \"\\\\||~|>>\",\n            \"name\": \"keyword.operator.other.ruby\"\n        },\n        {\n            \"match\": \";\",\n            \"name\": \"punctuation.separator.statement.ruby\"\n        },\n        {\n            \"match\": \",\",\n            \"name\": \"punctuation.separator.object.ruby\"\n        },\n        {\n            \"captures\": {\n                \"1\": {\n                    \"name\": \"punctuation.separator.namespace.ruby\"\n                }\n            },\n            \"comment\": \"Mark as namespace separator if double colons followed by capital letter\",\n            \"match\": \"(::)\\\\s*(?=[A-Z])\"\n        },\n        {\n            \"captures\": {\n                \"1\": {\n                    \"name\": \"punctuation.separator.method.ruby\"\n                }\n            },\n            \"comment\": \"Mark as method separator if double colons not followed by capital letter\",\n            \"match\": \"(\\\\.|::)\\\\s*(?![A-Z])\"\n        },\n        {\n            \"comment\": \"Must come after method and constant separators to prefer double colons\",\n            \"match\": \":\",\n            \"name\": \"punctuation.separator.other.ruby\"\n        },\n        {\n            \"match\": \"\\\\{\",\n            \"name\": \"punctuation.section.scope.begin.ruby\"\n        },\n        {\n            \"match\": \"\\\\}\",\n            \"name\": \"punctuation.section.scope.end.ruby\"\n        },\n        {\n            \"match\": \"\\\\[\",\n            \"name\": \"punctuation.section.array.begin.ruby\"\n        },\n        {\n            \"match\": \"\\\\]\",\n            \"name\": \"punctuation.section.array.end.ruby\"\n        },\n        {\n            \"match\": \"\\\\(|\\\\)\",\n            \"name\": 
\"punctuation.section.function.ruby\"\n        }\n    ],\n    \"repository\": {\n        \"escaped_char\": {\n            \"match\": \"\\\\\\\\(?:[0-7]{1,3}|x[\\\\da-fA-F]{1,2}|.)\",\n            \"name\": \"constant.character.escape.ruby\"\n        },\n        \"heredoc\": {\n            \"begin\": \"^<<[-~]?\\\\w+\",\n            \"end\": \"$\",\n            \"patterns\": [\n                {\n                    \"include\": \"$self\"\n                }\n            ]\n        },\n        \"interpolated_ruby\": {\n            \"patterns\": [\n                {\n                    \"begin\": \"#\\\\{\",\n                    \"beginCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.section.embedded.begin.ruby\"\n                        }\n                    },\n                    \"contentName\": \"source.ruby\",\n                    \"end\": \"(\\\\})\",\n                    \"endCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.section.embedded.end.ruby\"\n                        },\n                        \"1\": {\n                            \"name\": \"source.ruby\"\n                        }\n                    },\n                    \"name\": \"meta.embedded.line.ruby\",\n                    \"patterns\": [\n                        {\n                            \"include\": \"#nest_curly_and_self\"\n                        },\n                        {\n                            \"include\": \"$self\"\n                        }\n                    ],\n                    \"repository\": {\n                        \"nest_curly_and_self\": {\n                            \"patterns\": [\n                                {\n                                    \"begin\": \"\\\\{\",\n                                    \"captures\": {\n                                        \"0\": {\n                                            \"name\": 
\"punctuation.section.scope.ruby\"\n                                        }\n                                    },\n                                    \"end\": \"\\\\}\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#nest_curly_and_self\"\n                                        }\n                                    ]\n                                },\n                                {\n                                    \"include\": \"$self\"\n                                }\n                            ]\n                        }\n                    }\n                },\n                {\n                    \"captures\": {\n                        \"1\": {\n                            \"name\": \"punctuation.definition.variable.ruby\"\n                        }\n                    },\n                    \"match\": \"(#@)[a-zA-Z_]\\\\w*\",\n                    \"name\": \"variable.other.readwrite.instance.ruby\"\n                },\n                {\n                    \"captures\": {\n                        \"1\": {\n                            \"name\": \"punctuation.definition.variable.ruby\"\n                        }\n                    },\n                    \"match\": \"(#@@)[a-zA-Z_]\\\\w*\",\n                    \"name\": \"variable.other.readwrite.class.ruby\"\n                },\n                {\n                    \"captures\": {\n                        \"1\": {\n                            \"name\": \"punctuation.definition.variable.ruby\"\n                        }\n                    },\n                    \"match\": \"(#\\\\$)[a-zA-Z_]\\\\w*\",\n                    \"name\": \"variable.other.readwrite.global.ruby\"\n                }\n            ]\n        },\n        \"percent_literals\": {\n            \"patterns\": [\n                {\n                    \"begin\": 
\"%i(?:([(\\\\[{<])|([^\\\\w\\\\s]|_))\",\n                    \"beginCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.section.array.begin.ruby\"\n                        }\n                    },\n                    \"end\": \"[)\\\\]}>]\\\\2|\\\\1\\\\2\",\n                    \"endCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.section.array.end.ruby\"\n                        }\n                    },\n                    \"name\": \"meta.array.symbol.ruby\",\n                    \"patterns\": [\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\()(?!\\\\))\",\n                            \"end\": \"(?=\\\\))\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#parens\"\n                                },\n                                {\n                                    \"include\": \"#symbol\"\n                                }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\[)(?!\\\\])\",\n                            \"end\": \"(?=\\\\])\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#brackets\"\n                                },\n                                {\n                                    \"include\": \"#symbol\"\n                                }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\{)(?!\\\\})\",\n                            \"end\": \"(?=\\\\})\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#braces\"\n                                },\n                     
           {\n                                    \"include\": \"#symbol\"\n                                }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=<)(?!>)\",\n                            \"end\": \"(?=>)\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#angles\"\n                                },\n                                {\n                                    \"include\": \"#symbol\"\n                                }\n                            ]\n                        },\n                        {\n                            \"include\": \"#symbol\"\n                        }\n                    ],\n                    \"repository\": {\n                        \"angles\": {\n                            \"patterns\": [\n                                {\n                                    \"captures\": {\n                                        \"0\": {\n                                            \"name\": \"constant.character.escape.ruby\"\n                                        }\n                                    },\n                                    \"match\": \"\\\\\\\\<|\\\\\\\\>\",\n                                    \"name\": \"constant.other.symbol.ruby\"\n                                },\n                                {\n                                    \"begin\": \"<\",\n                                    \"captures\": {\n                                        \"0\": {\n                                            \"name\": \"constant.other.symbol.ruby\"\n                                        }\n                                    },\n                                    \"end\": \">\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": 
\"#angles\"\n                                        },\n                                        {\n                                            \"include\": \"#symbol\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"braces\": {\n                            \"patterns\": [\n                                {\n                                    \"captures\": {\n                                        \"0\": {\n                                            \"name\": \"constant.character.escape.ruby\"\n                                        }\n                                    },\n                                    \"match\": \"\\\\\\\\\\\\{|\\\\\\\\\\\\}\",\n                                    \"name\": \"constant.other.symbol.ruby\"\n                                },\n                                {\n                                    \"begin\": \"\\\\{\",\n                                    \"captures\": {\n                                        \"0\": {\n                                            \"name\": \"constant.other.symbol.ruby\"\n                                        }\n                                    },\n                                    \"end\": \"\\\\}\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#braces\"\n                                        },\n                                        {\n                                            \"include\": \"#symbol\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"brackets\": {\n                            \"patterns\": [\n                                {\n                     
               \"captures\": {\n                                        \"0\": {\n                                            \"name\": \"constant.character.escape.ruby\"\n                                        }\n                                    },\n                                    \"match\": \"\\\\\\\\\\\\[|\\\\\\\\\\\\]\",\n                                    \"name\": \"constant.other.symbol.ruby\"\n                                },\n                                {\n                                    \"begin\": \"\\\\[\",\n                                    \"captures\": {\n                                        \"0\": {\n                                            \"name\": \"constant.other.symbol.ruby\"\n                                        }\n                                    },\n                                    \"end\": \"\\\\]\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#brackets\"\n                                        },\n                                        {\n                                            \"include\": \"#symbol\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"parens\": {\n                            \"patterns\": [\n                                {\n                                    \"captures\": {\n                                        \"0\": {\n                                            \"name\": \"constant.character.escape.ruby\"\n                                        }\n                                    },\n                                    \"match\": \"\\\\\\\\\\\\(|\\\\\\\\\\\\)\",\n                                    \"name\": \"constant.other.symbol.ruby\"\n                                },\n                                {\n 
                                   \"begin\": \"\\\\(\",\n                                    \"captures\": {\n                                        \"0\": {\n                                            \"name\": \"constant.other.symbol.ruby\"\n                                        }\n                                    },\n                                    \"end\": \"\\\\)\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#parens\"\n                                        },\n                                        {\n                                            \"include\": \"#symbol\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"symbol\": {\n                            \"patterns\": [\n                                {\n                                    \"captures\": {\n                                        \"0\": {\n                                            \"name\": \"constant.character.escape.ruby\"\n                                        }\n                                    },\n                                    \"match\": \"\\\\\\\\\\\\\\\\|\\\\\\\\[ ]\",\n                                    \"name\": \"constant.other.symbol.ruby\"\n                                },\n                                {\n                                    \"match\": \"\\\\S\\\\w*\",\n                                    \"name\": \"constant.other.symbol.ruby\"\n                                }\n                            ]\n                        }\n                    }\n                },\n                {\n                    \"begin\": \"%I(?:([(\\\\[{<])|([^\\\\w\\\\s]|_))\",\n                    \"beginCaptures\": {\n                        \"0\": {\n                            \"name\": 
\"punctuation.section.array.begin.ruby\"\n                        }\n                    },\n                    \"end\": \"[)\\\\]}>]\\\\2|\\\\1\\\\2\",\n                    \"endCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.section.array.end.ruby\"\n                        }\n                    },\n                    \"name\": \"meta.array.symbol.interpolated.ruby\",\n                    \"patterns\": [\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\()(?!\\\\))\",\n                            \"end\": \"(?=\\\\))\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#parens\"\n                                },\n                                {\n                                    \"include\": \"#symbol\"\n                                }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\[)(?!\\\\])\",\n                            \"end\": \"(?=\\\\])\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#brackets\"\n                                },\n                                {\n                                    \"include\": \"#symbol\"\n                                }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\{)(?!\\\\})\",\n                            \"end\": \"(?=\\\\})\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#braces\"\n                                },\n                                {\n                                    \"include\": \"#symbol\"\n                                }\n                            ]\n  
                      },\n                        {\n                            \"begin\": \"\\\\G(?<=<)(?!>)\",\n                            \"end\": \"(?=>)\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#angles\"\n                                },\n                                {\n                                    \"include\": \"#symbol\"\n                                }\n                            ]\n                        },\n                        {\n                            \"include\": \"#symbol\"\n                        }\n                    ],\n                    \"repository\": {\n                        \"angles\": {\n                            \"patterns\": [\n                                {\n                                    \"begin\": \"<\",\n                                    \"captures\": {\n                                        \"0\": {\n                                            \"name\": \"constant.other.symbol.ruby\"\n                                        }\n                                    },\n                                    \"end\": \">\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#angles\"\n                                        },\n                                        {\n                                            \"include\": \"#symbol\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"braces\": {\n                            \"patterns\": [\n                                {\n                                    \"begin\": \"\\\\{\",\n                                    \"captures\": {\n                                        \"0\": {\n                 
                           \"name\": \"constant.other.symbol.ruby\"\n                                        }\n                                    },\n                                    \"end\": \"\\\\}\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#braces\"\n                                        },\n                                        {\n                                            \"include\": \"#symbol\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"brackets\": {\n                            \"patterns\": [\n                                {\n                                    \"begin\": \"\\\\[\",\n                                    \"captures\": {\n                                        \"0\": {\n                                            \"name\": \"constant.other.symbol.ruby\"\n                                        }\n                                    },\n                                    \"end\": \"\\\\]\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#brackets\"\n                                        },\n                                        {\n                                            \"include\": \"#symbol\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"parens\": {\n                            \"patterns\": [\n                                {\n                                    \"begin\": \"\\\\(\",\n                                    \"captures\": {\n                                        \"0\": {\n 
                                           \"name\": \"constant.other.symbol.ruby\"\n                                        }\n                                    },\n                                    \"end\": \"\\\\)\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#parens\"\n                                        },\n                                        {\n                                            \"include\": \"#symbol\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"symbol\": {\n                            \"patterns\": [\n                                {\n                                    \"begin\": \"(?=\\\\\\\\|#\\\\{)\",\n                                    \"end\": \"(?!\\\\G)\",\n                                    \"name\": \"constant.other.symbol.ruby\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#escaped_char\"\n                                        },\n                                        {\n                                            \"include\": \"#interpolated_ruby\"\n                                        }\n                                    ]\n                                },\n                                {\n                                    \"match\": \"\\\\S\\\\w*\",\n                                    \"name\": \"constant.other.symbol.ruby\"\n                                }\n                            ]\n                        }\n                    }\n                },\n                {\n                    \"begin\": \"%q(?:([(\\\\[{<])|([^\\\\w\\\\s]|_))\",\n                    \"beginCaptures\": {\n                        \"0\": {\n   
                         \"name\": \"punctuation.definition.string.begin.ruby\"\n                        }\n                    },\n                    \"end\": \"[)\\\\]}>]\\\\2|\\\\1\\\\2\",\n                    \"endCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.end.ruby\"\n                        }\n                    },\n                    \"name\": \"string.quoted.other.ruby\",\n                    \"patterns\": [\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\()(?!\\\\))\",\n                            \"end\": \"(?=\\\\))\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#parens\"\n                                }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\[)(?!\\\\])\",\n                            \"end\": \"(?=\\\\])\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#brackets\"\n                                }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\{)(?!\\\\})\",\n                            \"end\": \"(?=\\\\})\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#braces\"\n                                }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=<)(?!>)\",\n                            \"end\": \"(?=>)\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#angles\"\n                                }\n                          
  ]\n                        }\n                    ],\n                    \"repository\": {\n                        \"angles\": {\n                            \"patterns\": [\n                                {\n                                    \"match\": \"\\\\\\\\<|\\\\\\\\>|\\\\\\\\\\\\\\\\\",\n                                    \"name\": \"constant.character.escape.ruby\"\n                                },\n                                {\n                                    \"begin\": \"<\",\n                                    \"end\": \">\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#angles\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"braces\": {\n                            \"patterns\": [\n                                {\n                                    \"match\": \"\\\\\\\\\\\\{|\\\\\\\\\\\\}|\\\\\\\\\\\\\\\\\",\n                                    \"name\": \"constant.character.escape.ruby\"\n                                },\n                                {\n                                    \"begin\": \"\\\\{\",\n                                    \"end\": \"\\\\}\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#braces\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"brackets\": {\n                            \"patterns\": [\n                                {\n                                    \"match\": \"\\\\\\\\\\\\[|\\\\\\\\\\\\]|\\\\\\\\\\\\\\\\\",\n                                    \"name\": 
\"constant.character.escape.ruby\"\n                                },\n                                {\n                                    \"begin\": \"\\\\[\",\n                                    \"end\": \"\\\\]\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#brackets\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"parens\": {\n                            \"patterns\": [\n                                {\n                                    \"match\": \"\\\\\\\\\\\\(|\\\\\\\\\\\\)|\\\\\\\\\\\\\\\\\",\n                                    \"name\": \"constant.character.escape.ruby\"\n                                },\n                                {\n                                    \"begin\": \"\\\\(\",\n                                    \"end\": \"\\\\)\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#parens\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                {\n                    \"begin\": \"%Q?(?:([(\\\\[{<])|([^\\\\w\\\\s=]|_))\",\n                    \"beginCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.begin.ruby\"\n                        }\n                    },\n                    \"end\": \"[)\\\\]}>]\\\\2|\\\\1\\\\2\",\n                    \"endCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.end.ruby\"\n                        }\n            
        },\n                    \"name\": \"string.quoted.other.interpolated.ruby\",\n                    \"patterns\": [\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\()(?!\\\\))\",\n                            \"end\": \"(?=\\\\))\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#parens\"\n                                }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\[)(?!\\\\])\",\n                            \"end\": \"(?=\\\\])\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#brackets\"\n                                }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\{)(?!\\\\})\",\n                            \"end\": \"(?=\\\\})\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#braces\"\n                                }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=<)(?!>)\",\n                            \"end\": \"(?=>)\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#angles\"\n                                }\n                            ]\n                        },\n                        {\n                            \"include\": \"#escaped_char\"\n                        },\n                        {\n                            \"include\": \"#interpolated_ruby\"\n                        }\n                    ],\n                    \"repository\": {\n                        \"angles\": {\n       
                     \"patterns\": [\n                                {\n                                    \"include\": \"#escaped_char\"\n                                },\n                                {\n                                    \"include\": \"#interpolated_ruby\"\n                                },\n                                {\n                                    \"begin\": \"<\",\n                                    \"end\": \">\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#angles\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"braces\": {\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#escaped_char\"\n                                },\n                                {\n                                    \"include\": \"#interpolated_ruby\"\n                                },\n                                {\n                                    \"begin\": \"\\\\{\",\n                                    \"end\": \"\\\\}\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#braces\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"brackets\": {\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#escaped_char\"\n                                },\n                                {\n                                    \"include\": \"#interpolated_ruby\"\n 
                               },\n                                {\n                                    \"begin\": \"\\\\[\",\n                                    \"end\": \"\\\\]\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#brackets\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"parens\": {\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#escaped_char\"\n                                },\n                                {\n                                    \"include\": \"#interpolated_ruby\"\n                                },\n                                {\n                                    \"begin\": \"\\\\(\",\n                                    \"end\": \"\\\\)\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#parens\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                {\n                    \"begin\": \"%r(?:([(\\\\[{<])|([^\\\\w\\\\s]|_))\",\n                    \"beginCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.begin.ruby\"\n                        }\n                    },\n                    \"end\": \"([)\\\\]}>]\\\\2|\\\\1\\\\2)[eimnosux]*\",\n                    \"endCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.end.ruby\"\n                        }\n      
              },\n                    \"name\": \"string.regexp.percent.ruby\",\n                    \"patterns\": [\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\()(?!\\\\))\",\n                            \"end\": \"(?=\\\\))\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#parens\"\n                                }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\[)(?!\\\\])\",\n                            \"end\": \"(?=\\\\])\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#brackets\"\n                                }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\{)(?!\\\\})\",\n                            \"end\": \"(?=\\\\})\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#braces\"\n                                }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=<)(?!>)\",\n                            \"end\": \"(?=>)\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#angles\"\n                                }\n                            ]\n                        },\n                        {\n                            \"include\": \"#regex_sub\"\n                        }\n                    ],\n                    \"repository\": {\n                        \"angles\": {\n                            \"patterns\": [\n                                {\n                                    \"include\": 
\"#regex_sub\"\n                                },\n                                {\n                                    \"begin\": \"<\",\n                                    \"end\": \">\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#angles\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"braces\": {\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#regex_sub\"\n                                },\n                                {\n                                    \"begin\": \"\\\\{\",\n                                    \"end\": \"\\\\}\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#braces\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"brackets\": {\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#regex_sub\"\n                                },\n                                {\n                                    \"begin\": \"\\\\[\",\n                                    \"end\": \"\\\\]\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#brackets\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        
\"parens\": {\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#regex_sub\"\n                                },\n                                {\n                                    \"begin\": \"\\\\(\",\n                                    \"end\": \"\\\\)\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#parens\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                {\n                    \"begin\": \"%s(?:([(\\\\[{<])|([^\\\\w\\\\s]|_))\",\n                    \"beginCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.constant.begin.ruby\"\n                        }\n                    },\n                    \"end\": \"[)\\\\]}>]\\\\2|\\\\1\\\\2\",\n                    \"endCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.constant.end.ruby\"\n                        }\n                    },\n                    \"name\": \"constant.other.symbol.percent.ruby\",\n                    \"patterns\": [\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\()(?!\\\\))\",\n                            \"end\": \"(?=\\\\))\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#parens\"\n                                }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\[)(?!\\\\])\",\n                            \"end\": \"(?=\\\\])\",\n                            \"patterns\": [\n   
                             {\n                                    \"include\": \"#brackets\"\n                                }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\{)(?!\\\\})\",\n                            \"end\": \"(?=\\\\})\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#braces\"\n                                }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=<)(?!>)\",\n                            \"end\": \"(?=>)\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#angles\"\n                                }\n                            ]\n                        }\n                    ],\n                    \"repository\": {\n                        \"angles\": {\n                            \"patterns\": [\n                                {\n                                    \"match\": \"\\\\\\\\<|\\\\\\\\>|\\\\\\\\\\\\\\\\\",\n                                    \"name\": \"constant.character.escape.ruby\"\n                                },\n                                {\n                                    \"begin\": \"<\",\n                                    \"end\": \">\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#angles\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"braces\": {\n                            \"patterns\": [\n                                {\n                                    \"match\": 
\"\\\\\\\\\\\\{|\\\\\\\\\\\\}|\\\\\\\\\\\\\\\\\",\n                                    \"name\": \"constant.character.escape.ruby\"\n                                },\n                                {\n                                    \"begin\": \"\\\\{\",\n                                    \"end\": \"\\\\}\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#braces\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"brackets\": {\n                            \"patterns\": [\n                                {\n                                    \"match\": \"\\\\\\\\\\\\[|\\\\\\\\\\\\]|\\\\\\\\\\\\\\\\\",\n                                    \"name\": \"constant.character.escape.ruby\"\n                                },\n                                {\n                                    \"begin\": \"\\\\[\",\n                                    \"end\": \"\\\\]\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#brackets\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"parens\": {\n                            \"patterns\": [\n                                {\n                                    \"match\": \"\\\\\\\\\\\\(|\\\\\\\\\\\\)|\\\\\\\\\\\\\\\\\",\n                                    \"name\": \"constant.character.escape.ruby\"\n                                },\n                                {\n                                    \"begin\": \"\\\\(\",\n                                    \"end\": \"\\\\)\",\n                    
                \"patterns\": [\n                                        {\n                                            \"include\": \"#parens\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                {\n                    \"begin\": \"%w(?:([(\\\\[{<])|([^\\\\w\\\\s]|_))\",\n                    \"beginCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.section.array.begin.ruby\"\n                        }\n                    },\n                    \"end\": \"[)\\\\]}>]\\\\2|\\\\1\\\\2\",\n                    \"endCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.section.array.end.ruby\"\n                        }\n                    },\n                    \"name\": \"meta.array.string.ruby\",\n                    \"patterns\": [\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\()(?!\\\\))\",\n                            \"end\": \"(?=\\\\))\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#parens\"\n                                },\n                                {\n                                    \"include\": \"#string\"\n                                }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\[)(?!\\\\])\",\n                            \"end\": \"(?=\\\\])\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#brackets\"\n                                },\n                                {\n                                    \"include\": \"#string\"\n                             
   }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\{)(?!\\\\})\",\n                            \"end\": \"(?=\\\\})\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#braces\"\n                                },\n                                {\n                                    \"include\": \"#string\"\n                                }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=<)(?!>)\",\n                            \"end\": \"(?=>)\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#angles\"\n                                },\n                                {\n                                    \"include\": \"#string\"\n                                }\n                            ]\n                        },\n                        {\n                            \"include\": \"#string\"\n                        }\n                    ],\n                    \"repository\": {\n                        \"angles\": {\n                            \"patterns\": [\n                                {\n                                    \"captures\": {\n                                        \"0\": {\n                                            \"name\": \"constant.character.escape.ruby\"\n                                        }\n                                    },\n                                    \"match\": \"\\\\\\\\<|\\\\\\\\>\",\n                                    \"name\": \"string.other.ruby\"\n                                },\n                                {\n                                    \"begin\": \"<\",\n                                    \"captures\": {\n                      
                  \"0\": {\n                                            \"name\": \"string.other.ruby\"\n                                        }\n                                    },\n                                    \"end\": \">\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#angles\"\n                                        },\n                                        {\n                                            \"include\": \"#string\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"braces\": {\n                            \"patterns\": [\n                                {\n                                    \"captures\": {\n                                        \"0\": {\n                                            \"name\": \"constant.character.escape.ruby\"\n                                        }\n                                    },\n                                    \"match\": \"\\\\\\\\\\\\{|\\\\\\\\\\\\}\",\n                                    \"name\": \"string.other.ruby\"\n                                },\n                                {\n                                    \"begin\": \"\\\\{\",\n                                    \"captures\": {\n                                        \"0\": {\n                                            \"name\": \"string.other.ruby\"\n                                        }\n                                    },\n                                    \"end\": \"\\\\}\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#braces\"\n                                        },\n                                        {\n            
                                \"include\": \"#string\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"brackets\": {\n                            \"patterns\": [\n                                {\n                                    \"captures\": {\n                                        \"0\": {\n                                            \"name\": \"constant.character.escape.ruby\"\n                                        }\n                                    },\n                                    \"match\": \"\\\\\\\\\\\\[|\\\\\\\\\\\\]\",\n                                    \"name\": \"string.other.ruby\"\n                                },\n                                {\n                                    \"begin\": \"\\\\[\",\n                                    \"captures\": {\n                                        \"0\": {\n                                            \"name\": \"string.other.ruby\"\n                                        }\n                                    },\n                                    \"end\": \"\\\\]\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#brackets\"\n                                        },\n                                        {\n                                            \"include\": \"#string\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"parens\": {\n                            \"patterns\": [\n                                {\n                                    \"captures\": {\n                                        \"0\": {\n                                            
\"name\": \"constant.character.escape.ruby\"\n                                        }\n                                    },\n                                    \"match\": \"\\\\\\\\\\\\(|\\\\\\\\\\\\)\",\n                                    \"name\": \"string.other.ruby\"\n                                },\n                                {\n                                    \"begin\": \"\\\\(\",\n                                    \"captures\": {\n                                        \"0\": {\n                                            \"name\": \"string.other.ruby\"\n                                        }\n                                    },\n                                    \"end\": \"\\\\)\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#parens\"\n                                        },\n                                        {\n                                            \"include\": \"#string\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"string\": {\n                            \"patterns\": [\n                                {\n                                    \"captures\": {\n                                        \"0\": {\n                                            \"name\": \"constant.character.escape.ruby\"\n                                        }\n                                    },\n                                    \"match\": \"\\\\\\\\\\\\\\\\|\\\\\\\\[ ]\",\n                                    \"name\": \"string.other.ruby\"\n                                },\n                                {\n                                    \"match\": \"\\\\S\\\\w*\",\n                                    \"name\": \"string.other.ruby\"\n                     
           }\n                            ]\n                        }\n                    }\n                },\n                {\n                    \"begin\": \"%W(?:([(\\\\[{<])|([^\\\\w\\\\s]|_))\",\n                    \"beginCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.section.array.begin.ruby\"\n                        }\n                    },\n                    \"end\": \"[)\\\\]}>]\\\\2|\\\\1\\\\2\",\n                    \"endCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.section.array.end.ruby\"\n                        }\n                    },\n                    \"name\": \"meta.array.string.interpolated.ruby\",\n                    \"patterns\": [\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\()(?!\\\\))\",\n                            \"end\": \"(?=\\\\))\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#parens\"\n                                },\n                                {\n                                    \"include\": \"#string\"\n                                }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\[)(?!\\\\])\",\n                            \"end\": \"(?=\\\\])\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#brackets\"\n                                },\n                                {\n                                    \"include\": \"#string\"\n                                }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\{)(?!\\\\})\",\n                            \"end\": \"(?=\\\\})\",\n                     
       \"patterns\": [\n                                {\n                                    \"include\": \"#braces\"\n                                },\n                                {\n                                    \"include\": \"#string\"\n                                }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=<)(?!>)\",\n                            \"end\": \"(?=>)\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#angles\"\n                                },\n                                {\n                                    \"include\": \"#string\"\n                                }\n                            ]\n                        },\n                        {\n                            \"include\": \"#string\"\n                        }\n                    ],\n                    \"repository\": {\n                        \"angles\": {\n                            \"patterns\": [\n                                {\n                                    \"begin\": \"<\",\n                                    \"captures\": {\n                                        \"0\": {\n                                            \"name\": \"string.other.ruby\"\n                                        }\n                                    },\n                                    \"end\": \">\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#angles\"\n                                        },\n                                        {\n                                            \"include\": \"#string\"\n                                        }\n                                    ]\n                                }\n                            ]\n             
           },\n                        \"braces\": {\n                            \"patterns\": [\n                                {\n                                    \"begin\": \"\\\\{\",\n                                    \"captures\": {\n                                        \"0\": {\n                                            \"name\": \"string.other.ruby\"\n                                        }\n                                    },\n                                    \"end\": \"\\\\}\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#braces\"\n                                        },\n                                        {\n                                            \"include\": \"#string\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"brackets\": {\n                            \"patterns\": [\n                                {\n                                    \"begin\": \"\\\\[\",\n                                    \"captures\": {\n                                        \"0\": {\n                                            \"name\": \"string.other.ruby\"\n                                        }\n                                    },\n                                    \"end\": \"\\\\]\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#brackets\"\n                                        },\n                                        {\n                                            \"include\": \"#string\"\n                                        }\n                                    ]\n                                }\n                            ]\n               
         },\n                        \"parens\": {\n                            \"patterns\": [\n                                {\n                                    \"begin\": \"\\\\(\",\n                                    \"captures\": {\n                                        \"0\": {\n                                            \"name\": \"string.other.ruby\"\n                                        }\n                                    },\n                                    \"end\": \"\\\\)\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#parens\"\n                                        },\n                                        {\n                                            \"include\": \"#string\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"string\": {\n                            \"patterns\": [\n                                {\n                                    \"begin\": \"(?=\\\\\\\\|#\\\\{)\",\n                                    \"end\": \"(?!\\\\G)\",\n                                    \"name\": \"string.other.ruby\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#escaped_char\"\n                                        },\n                                        {\n                                            \"include\": \"#interpolated_ruby\"\n                                        }\n                                    ]\n                                },\n                                {\n                                    \"match\": \"\\\\S\\\\w*\",\n                                    \"name\": \"string.other.ruby\"\n                                }\n     
                       ]\n                        }\n                    }\n                },\n                {\n                    \"begin\": \"%x(?:([(\\\\[{<])|([^\\\\w\\\\s]|_))\",\n                    \"beginCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.begin.ruby\"\n                        }\n                    },\n                    \"end\": \"[)\\\\]}>]\\\\2|\\\\1\\\\2\",\n                    \"endCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.string.end.ruby\"\n                        }\n                    },\n                    \"name\": \"string.interpolated.percent.ruby\",\n                    \"patterns\": [\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\()(?!\\\\))\",\n                            \"end\": \"(?=\\\\))\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#parens\"\n                                }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\[)(?!\\\\])\",\n                            \"end\": \"(?=\\\\])\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#brackets\"\n                                }\n                            ]\n                        },\n                        {\n                            \"begin\": \"\\\\G(?<=\\\\{)(?!\\\\})\",\n                            \"end\": \"(?=\\\\})\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#braces\"\n                                }\n                            ]\n                        },\n                        {\n                            
\"begin\": \"\\\\G(?<=<)(?!>)\",\n                            \"end\": \"(?=>)\",\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#angles\"\n                                }\n                            ]\n                        },\n                        {\n                            \"include\": \"#escaped_char\"\n                        },\n                        {\n                            \"include\": \"#interpolated_ruby\"\n                        }\n                    ],\n                    \"repository\": {\n                        \"angles\": {\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#escaped_char\"\n                                },\n                                {\n                                    \"include\": \"#interpolated_ruby\"\n                                },\n                                {\n                                    \"begin\": \"<\",\n                                    \"end\": \">\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#angles\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"braces\": {\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#escaped_char\"\n                                },\n                                {\n                                    \"include\": \"#interpolated_ruby\"\n                                },\n                                {\n                                    \"begin\": \"\\\\{\",\n                                    \"end\": \"\\\\}\",\n              
                      \"patterns\": [\n                                        {\n                                            \"include\": \"#braces\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"brackets\": {\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#escaped_char\"\n                                },\n                                {\n                                    \"include\": \"#interpolated_ruby\"\n                                },\n                                {\n                                    \"begin\": \"\\\\[\",\n                                    \"end\": \"\\\\]\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#brackets\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"parens\": {\n                            \"patterns\": [\n                                {\n                                    \"include\": \"#escaped_char\"\n                                },\n                                {\n                                    \"include\": \"#interpolated_ruby\"\n                                },\n                                {\n                                    \"begin\": \"\\\\(\",\n                                    \"end\": \"\\\\)\",\n                                    \"patterns\": [\n                                        {\n                                            \"include\": \"#parens\"\n                                        }\n                                    ]\n                                }\n       
                     ]\n                        }\n                    }\n                }\n            ]\n        },\n        \"regex_sub\": {\n            \"patterns\": [\n                {\n                    \"include\": \"#interpolated_ruby\"\n                },\n                {\n                    \"include\": \"#escaped_char\"\n                },\n                {\n                    \"captures\": {\n                        \"1\": {\n                            \"name\": \"punctuation.definition.quantifier.begin.ruby\"\n                        },\n                        \"3\": {\n                            \"name\": \"punctuation.definition.quantifier.end.ruby\"\n                        }\n                    },\n                    \"match\": \"(\\\\{)\\\\d+(,\\\\d+)?(\\\\})\",\n                    \"name\": \"keyword.operator.quantifier.ruby\"\n                },\n                {\n                    \"begin\": \"\\\\[\\\\^?\",\n                    \"beginCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.character-class.begin.ruby\"\n                        }\n                    },\n                    \"end\": \"\\\\]\",\n                    \"endCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.character-class.end.ruby\"\n                        }\n                    },\n                    \"name\": \"constant.other.character-class.set.ruby\",\n                    \"patterns\": [\n                        {\n                            \"include\": \"#escaped_char\"\n                        }\n                    ]\n                },\n                {\n                    \"begin\": \"\\\\(\\\\?#\",\n                    \"beginCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.comment.begin.ruby\"\n                        }\n                    },\n          
          \"end\": \"\\\\)\",\n                    \"endCaptures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.comment.end.ruby\"\n                        }\n                    },\n                    \"name\": \"comment.line.number-sign.ruby\",\n                    \"patterns\": [\n                        {\n                            \"include\": \"#escaped_char\"\n                        }\n                    ]\n                },\n                {\n                    \"begin\": \"\\\\(\",\n                    \"captures\": {\n                        \"0\": {\n                            \"name\": \"punctuation.definition.group.ruby\"\n                        }\n                    },\n                    \"end\": \"\\\\)\",\n                    \"name\": \"meta.group.regexp.ruby\",\n                    \"patterns\": [\n                        {\n                            \"include\": \"#regex_sub\"\n                        }\n                    ]\n                },\n                {\n                    \"begin\": \"(?<=^|\\\\s)(#)\\\\s(?=[[a-zA-Z0-9,. \\\\t?!-][^\\\\x{00}-\\\\x{7F}]]*$)\",\n                    \"beginCaptures\": {\n                        \"1\": {\n                            \"name\": \"punctuation.definition.comment.ruby\"\n                        }\n                    },\n                    \"comment\":\n                        \"We are restrictive in what we allow to go after the comment character to avoid false positives, since the availability of comments depends on regexp flags.\",\n                    \"end\": \"$\\\\n?\",\n                    \"name\": \"comment.line.number-sign.ruby\"\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "textmate/rust.tmLanguage.json",
    "content": "{\n\t\"$schema\": \"https://raw.githubusercontent.com/martinring/tmlanguage/master/tmlanguage.json\",\n\t\"version\": \"https://github.com/zargony/atom-language-rust/commit/5238d9834953ed7c58d9b5b9bb0c084c3c11ecd6\",\n\t\"name\": \"Rust\",\n\t\"scopeName\": \"source.rust\",\n\t\"patterns\": [\n\t\t{\n\t\t\t\"comment\": \"Keywords that have a different meaning when used at top-level\",\n\t\t\t\"match\": \"\\\\b(for)\\\\b\",\n\t\t\t\"name\": \"keyword.other.top.rust\"\n\t\t},\n\t\t{\n\t\t\t\"include\": \"#code\"\n\t\t}\n\t],\n\t\"repository\": {\n\t\t\"code\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"By entering a block, we stop matching any top-level patterns that aren't inside #code\",\n\t\t\t\t\t\"begin\": \"{\",\n\t\t\t\t\t\"end\": \"}\",\n\t\t\t\t\t\"name\": \"meta.block.rust\",\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#code\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#comments\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#attribute\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#keywords\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#literals\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"attribute\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Attribute\",\n\t\t\t\t\t\"name\": \"meta.attribute.rust\",\n\t\t\t\t\t\"begin\": \"#\\\\!?\\\\[\",\n\t\t\t\t\t\"end\": \"\\\\]\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.tag.attribute.begin.rust\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.tag.attribute.end.rust\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#metaItem\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"metaItem\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"\\\\b\\\\w+\\\\(\",\n\t\t\t\t\t\"end\": 
\"\\\\)\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.tag.attribute.metaItem.begin.rust\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"punctuation.definition.tag.attribute.metaItem.end.rust\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#metaItem\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\b\\\\w+\\\\s*=\",\n\t\t\t\t\t\"name\": \"punctuation.definition.tag.attribute.metaItem.set.rust\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#literals\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#comments\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"keywords\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Regular keywords\",\n\t\t\t\t\t\"match\": \"\\\\b(async|as|'static|Self|abstract|box|const|crate|dyn|enum|extern|final|fn|impl|let|macro|mod|mut|override|priv|pub|ref|self|static|struct|super|trait|type|union|unsized|use|virtual|where)\\\\b\",\n\t\t\t\t\t\"name\": \"keyword.other.rust\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Control keywords\",\n\t\t\t\t\t\"match\": \"\\\\b(await|become|break|continue|do|else|for|if|in|loop|match|move|return|try|typeof|unsafe|while|yield)\\\\b\",\n\t\t\t\t\t\"name\": \"keyword.control.rust\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Miscellaneous operator\",\n\t\t\t\t\t\"name\": \"keyword.operator.misc.rust\",\n\t\t\t\t\t\"match\": \"(=>|::)\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Comparison operator\",\n\t\t\t\t\t\"name\": \"keyword.operator.comparison.rust\",\n\t\t\t\t\t\"match\": \"(&&|\\\\|\\\\||==|!=)\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Assignment operator\",\n\t\t\t\t\t\"name\": \"keyword.operator.assignment.rust\",\n\t\t\t\t\t\"match\": \"(\\\\+=|-=|/=|\\\\*=|%=|\\\\^=|&=|\\\\|=|<<=|>>=|=)\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Arithmetic 
operator\",\n\t\t\t\t\t\"name\": \"keyword.operator.arithmetic.rust\",\n\t\t\t\t\t\"match\": \"(!|\\\\+|-|/|\\\\*|%|\\\\^|&|\\\\||<<|>>)\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Sigil\",\n\t\t\t\t\t\"name\": \"keyword.operator.sigil.rust\",\n\t\t\t\t\t\"match\": \"[&*](?=[a-zA-Z0-9_\\\\(\\\\[\\\\|\\\\\\\"]+)\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Comparison operator (second group because of regex precedence)\",\n\t\t\t\t\t\"name\": \"keyword.operator.comparison.rust\",\n\t\t\t\t\t\"match\": \"(<=|>=|<|>)\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Terminator\",\n\t\t\t\t\t\"match\": \";\",\n\t\t\t\t\t\"name\": \"keyword.other.semi.rust\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"literals\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Boolean literals\",\n\t\t\t\t\t\"match\": \"\\\\b(true|false)\\\\b\",\n\t\t\t\t\t\"name\": \"constant.numeric.boolean.rust\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Floating point literal (fraction)\",\n\t\t\t\t\t\"name\": \"constant.numeric.float.rust\",\n\t\t\t\t\t\"match\": \"\\\\b[0-9][0-9_]*\\\\.[0-9][0-9_]*([eE][+-]?[0-9_]+)?(f32|f64)?\\\\b\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Floating point literal (exponent)\",\n\t\t\t\t\t\"name\": \"constant.numeric.float.rust\",\n\t\t\t\t\t\"match\": \"\\\\b[0-9][0-9_]*(\\\\.[0-9][0-9_]*)?[eE][+-]?[0-9_]+(f32|f64)?\\\\b\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Floating point literal (typed)\",\n\t\t\t\t\t\"name\": \"constant.numeric.float.rust\",\n\t\t\t\t\t\"match\": \"\\\\b[0-9][0-9_]*(\\\\.[0-9][0-9_]*)?([eE][+-]?[0-9_]+)?(f32|f64)\\\\b\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Integer literal (decimal)\",\n\t\t\t\t\t\"name\": \"constant.numeric.integer.decimal.rust\",\n\t\t\t\t\t\"match\": \"\\\\b[0-9][0-9_]*([ui](8|16|32|64|128|s|size))?\\\\b\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Integer literal (hexadecimal)\",\n\t\t\t\t\t\"name\": 
\"constant.numeric.integer.hexadecimal.rust\",\n\t\t\t\t\t\"match\": \"\\\\b0x[a-fA-F0-9_]+([ui](8|16|32|64|128|s|size))?\\\\b\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Integer literal (octal)\",\n\t\t\t\t\t\"name\": \"constant.numeric.integer.octal.rust\",\n\t\t\t\t\t\"match\": \"\\\\b0o[0-7_]+([ui](8|16|32|64|128|s|size))?\\\\b\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Integer literal (binary)\",\n\t\t\t\t\t\"name\": \"constant.numeric.integer.binary.rust\",\n\t\t\t\t\t\"match\": \"\\\\b0b[01_]+([ui](8|16|32|64|128|s|size))?\\\\b\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Single-quote string literal (character)\",\n\t\t\t\t\t\"name\": \"string.quoted.single.rust\",\n\t\t\t\t\t\"match\": \"b?'([^'\\\\\\\\]|\\\\\\\\(x[0-9A-Fa-f]{2}|[0-2][0-7]{0,2}|3[0-6][0-7]?|37[0-7]?|[4-7][0-7]?|.))'\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Double-quote string literal\",\n\t\t\t\t\t\"name\": \"string.quoted.double.rust\",\n\t\t\t\t\t\"begin\": \"b?\\\"\",\n\t\t\t\t\t\"end\": \"\\\"\",\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#escaped_character\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Raw double-quote string literal\",\n\t\t\t\t\t\"name\": \"string.quoted.double.raw.rust\",\n\t\t\t\t\t\"begin\": \"b?r(#*)\\\"\",\n\t\t\t\t\t\"end\": \"\\\"\\\\1\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"comments\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#block_doc_comment\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#block_comment\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Single-line documentation comment\",\n\t\t\t\t\t\"name\": \"comment.line.documentation.rust\",\n\t\t\t\t\t\"begin\": \"//[!/](?=[^/])\",\n\t\t\t\t\t\"end\": \"$\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Single-line comment\",\n\t\t\t\t\t\"name\": \"comment.line.double-slash.rust\",\n\t\t\t\t\t\"begin\": \"//\",\n\t\t\t\t\t\"end\": 
\"$\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"block_doc_comment\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Block documentation comment\",\n\t\t\t\t\t\"name\": \"comment.block.documentation.rust\",\n\t\t\t\t\t\"begin\": \"/\\\\*[\\\\*!](?![\\\\*/])\",\n\t\t\t\t\t\"end\": \"\\\\*/\",\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#block_doc_comment\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#block_comment\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"block_comment\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Block comment\",\n\t\t\t\t\t\"name\": \"comment.block.rust\",\n\t\t\t\t\t\"begin\": \"/\\\\*\",\n\t\t\t\t\t\"end\": \"\\\\*/\",\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#block_doc_comment\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#block_comment\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"escaped_character\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"name\": \"constant.character.escape.rust\",\n\t\t\t\t\t\"match\": \"\\\\\\\\(x[0-9A-Fa-f]{2}|[0-2][0-7]{0,2}|3[0-6][0-7]?|37[0-7]?|[4-7][0-7]?|.)\"\n\t\t\t\t}\n\t\t\t]\n\t\t}\n\t}\n}"
  },
  {
    "path": "textmate/typescript.tmLanguage.json",
    "content": "{\n\t\"$schema\": \"https://raw.githubusercontent.com/martinring/tmlanguage/master/tmlanguage.json\",\n\t\"name\": \"TypeScript\",\n\t\"scopeName\": \"source.ts\",\n\t\"fileTypes\": [\n\t\t\"ts\",\n\t\t\"js\"\n\t],\n\t\"patterns\": [\n\t\t{\n\t\t\t\"include\": \"#expression\"\n\t\t}\n\t],\n\t\"repository\": {\n\t\t\"expression\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#keyword\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#decorator\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#regex\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#string\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#template\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#literal\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#comment\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"keyword\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\b(module)\\\\s+(?=[\\\\w'\\\"])\",\n\t\t\t\t\t\"captures\": {\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"keyword.other.ts\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\b(declare|namespace|interface|type)\\\\s+(?=\\\\w)\",\n\t\t\t\t\t\"captures\": {\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"keyword.other.ts\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\b(abstract|arguments|as|async|class|const|enum|export|extends|from|function|implements|import|let|package|private|protected|public|static|super|this|var|void|with)\\\\b\",\n\t\t\t\t\t\"name\": \"keyword.other.ts\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\b(await|break|case|catch|continue|debugger|default|delete|do|in|of|else|eval|finally|for|if|instanceof|new|return|switch|throw|try|typeof|while|yield)\\\\b\",\n\t\t\t\t\t\"name\": \"keyword.control.ts\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"comment\": \"Terminator\",\n\t\t\t\t\t\"match\": \";\",\n\t\t\t\t\t\"name\": \"keyword.other.semi.ts\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"decorator\": 
{\n\t\t\t\"name\": \"meta.decorator.ts\",\n\t\t\t\"begin\": \"(?<!\\\\.|\\\\$)\\\\@\",\n\t\t\t\"end\": \"(?=\\\\s)\",\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#expression\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"qstring-double\": {\n\t\t\t\"name\": \"string.quoted.double.ts\",\n\t\t\t\"begin\": \"\\\"\",\n\t\t\t\"end\": \"(\\\")|((?:[^\\\\\\\\\\\\n])$)\",\n\t\t\t\"endCaptures\": {\n\t\t\t\t\"2\": {\n\t\t\t\t\t\"name\": \"invalid.illegal.newline.ts\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#string-character-escape\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"qstring-single\": {\n\t\t\t\"name\": \"string.quoted.single.ts\",\n\t\t\t\"begin\": \"'\",\n\t\t\t\"end\": \"(\\\\')|((?:[^\\\\\\\\\\\\n])$)\",\n\t\t\t\"endCaptures\": {\n\t\t\t\t\"2\": {\n\t\t\t\t\t\"name\": \"invalid.illegal.newline.ts\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#string-character-escape\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"regex\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"name\": \"string.regex.ts\",\n\t\t\t\t\t\"begin\": \"(?<=[=(:,\\\\[?+!]|return|case|=>|&&|\\\\|\\\\||\\\\*\\\\/)\\\\s*(/)(?![/*+?])(?=.*/)\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \"(/)([gimuy]*)\",\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\"name\": \"keyword.other.ts\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#regexp\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"name\": \"string.regex.ts\",\n\t\t\t\t\t\"begin\": \"/(?![/*])(?=(?:[^/\\\\\\\\\\\\[]|\\\\\\\\.|\\\\[([^\\\\]\\\\\\\\]|\\\\\\\\.)+\\\\])+/(?![/*])[gimy]*(?!\\\\s*[a-zA-Z0-9_$]))\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \"(/)([gimuy]*)\",\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\"name\": \"keyword.other.ts\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"patterns\": 
[\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#regexp\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"regexp\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"name\": \"keyword.control.anchor.regexp\",\n\t\t\t\t\t\"match\": \"\\\\\\\\[bB]|\\\\^|\\\\$\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"name\": \"keyword.other.back-reference.regexp\",\n\t\t\t\t\t\"match\": \"\\\\\\\\[1-9]\\\\d*\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"name\": \"keyword.operator.quantifier.regexp\",\n\t\t\t\t\t\"match\": \"[?+*]|\\\\{(\\\\d+,\\\\d+|\\\\d+,|,\\\\d+|\\\\d+)\\\\}\\\\??\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"name\": \"keyword.operator.or.regexp\",\n\t\t\t\t\t\"match\": \"\\\\|\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"name\": \"meta.group.assertion.regexp\",\n\t\t\t\t\t\"begin\": \"(\\\\()((\\\\?=)|(\\\\?!))\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"3\": {\n\t\t\t\t\t\t\t\"name\": \"meta.assertion.look-ahead.regexp\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"4\": {\n\t\t\t\t\t\t\t\"name\": \"meta.assertion.negative-look-ahead.regexp\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \"(\\\\))\",\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t},\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#regexp\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"name\": \"meta.group.regexp\",\n\t\t\t\t\t\"begin\": \"\\\\((\\\\?:)?\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \"\\\\)\",\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t},\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#regexp\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"name\": \"constant.other.character-class.set.regexp\",\n\t\t\t\t\t\"begin\": \"(\\\\[)(\\\\^)?\",\n\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\"name\": \"keyword.operator.negation.regexp\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"end\": \"(\\\\])\",\n\t\t\t\t\t\"endCaptures\": {\n\t\t\t\t\t},\n\t\t\t\t\t\"patterns\": 
[\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"name\": \"constant.other.character-class.range.regexp\",\n\t\t\t\t\t\t\t\"match\": \"(?:.|(\\\\\\\\(?:[0-7]{3}|x\\\\h\\\\h|u\\\\h\\\\h\\\\h\\\\h))|(\\\\\\\\c[A-Z])|(\\\\\\\\.))\\\\-(?:[^\\\\]\\\\\\\\]|(\\\\\\\\(?:[0-7]{3}|x\\\\h\\\\h|u\\\\h\\\\h\\\\h\\\\h))|(\\\\\\\\c[A-Z])|(\\\\\\\\.))\",\n\t\t\t\t\t\t\t\"captures\": {\n\t\t\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"constant.character.numeric.regexp\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"constant.character.control.regexp\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"3\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"constant.character.escape.backslash.regexp\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"4\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"constant.character.numeric.regexp\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"5\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"constant.character.control.regexp\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"6\": {\n\t\t\t\t\t\t\t\t\t\"name\": \"constant.character.escape.backslash.regexp\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#regex-character-class\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#regex-character-class\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"regex-character-class\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"name\": \"constant.other.character-class.regexp\",\n\t\t\t\t\t\"match\": \"\\\\\\\\[wWsSdDtrnvf]|\\\\.\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"name\": \"constant.character.numeric.regexp\",\n\t\t\t\t\t\"match\": \"\\\\\\\\([0-7]{3}|x\\\\h\\\\h|u\\\\h\\\\h\\\\h\\\\h)\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"name\": \"constant.character.control.regexp\",\n\t\t\t\t\t\"match\": \"\\\\\\\\c[A-Z]\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"name\": \"constant.character.escape.backslash.regexp\",\n\t\t\t\t\t\"match\": \"\\\\\\\\.\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"string\": {\n\t\t\t\"name\": \"string.ts\",\n\t\t\t\"patterns\": 
[\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#qstring-single\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#qstring-double\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"template\": {\n\t\t\t\"name\": \"string.template.ts\",\n\t\t\t\"begin\": \"([_$[:alpha:]][_$[:alnum:]]*)?(`)\",\n\t\t\t\"beginCaptures\": {\n\t\t\t\t\"1\": {\n\t\t\t\t\t\"name\": \"entity.name.function.tagged-template.ts\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"end\": \"`\",\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#template-substitution-element\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#string-character-escape\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"string-character-escape\": {\n\t\t\t\"name\": \"constant.character.escape.ts\",\n\t\t\t\"match\": \"\\\\\\\\(x\\\\h{2}|[0-2][0-7]{0,2}|3[0-6][0-7]?|37[0-7]?|[4-7][0-7]?|.|$)\"\n\t\t},\n\t\t\"template-substitution-element\": {\n\t\t\t\"name\": \"meta.template.expression.ts\",\n\t\t\t\"begin\": \"\\\\$\\\\{\",\n\t\t\t\"end\": \"\\\\}\",\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#expression\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"literal\": {\n\t\t\t\"name\": \"literal.ts\",\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\b(true|false)\\\\b\",\n\t\t\t\t\t\"name\": \"constant.numeric.boolean.ts\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"\\\\b(null|undefined)\\\\b\",\n\t\t\t\t\t\"name\": \"constant.numeric.null.ts\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#numeric-literal\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#undefined-literal\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"include\": \"#numericConstant-literal\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"numeric-literal\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"name\": \"constant.numeric.hex.ts\",\n\t\t\t\t\t\"match\": \"\\\\b(?<!\\\\$)0(x|X)[0-9a-fA-F]+\\\\b(?!\\\\$)\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"name\": \"constant.numeric.binary.ts\",\n\t\t\t\t\t\"match\": 
\"\\\\b(?<!\\\\$)0(b|B)[01]+\\\\b(?!\\\\$)\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"name\": \"constant.numeric.octal.ts\",\n\t\t\t\t\t\"match\": \"\\\\b(?<!\\\\$)0(o|O)?[0-7]+\\\\b(?!\\\\$)\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"(?x)\\n(?<!\\\\$)(?:\\n  (?:\\\\b[0-9]+(\\\\.)[0-9]+[eE][+-]?[0-9]+\\\\b)| # 1.1E+3\\n  (?:\\\\b[0-9]+(\\\\.)[eE][+-]?[0-9]+\\\\b)|       # 1.E+3\\n  (?:\\\\B(\\\\.)[0-9]+[eE][+-]?[0-9]+\\\\b)|       # .1E+3\\n  (?:\\\\b[0-9]+[eE][+-]?[0-9]+\\\\b)|            # 1E+3\\n  (?:\\\\b[0-9]+(\\\\.)[0-9]+\\\\b)|                # 1.1\\n  (?:\\\\b[0-9]+(\\\\.)\\\\B)|                      # 1.\\n  (?:\\\\B(\\\\.)[0-9]+\\\\b)|                      # .1\\n  (?:\\\\b[0-9]+\\\\b(?!\\\\.))                     # 1\\n)(?!\\\\$)\",\n\t\t\t\t\t\"captures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"constant.numeric.decimal.ts\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"meta.delimiter.decimal.period.ts\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\"name\": \"meta.delimiter.decimal.period.ts\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"3\": {\n\t\t\t\t\t\t\t\"name\": \"meta.delimiter.decimal.period.ts\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"4\": {\n\t\t\t\t\t\t\t\"name\": \"meta.delimiter.decimal.period.ts\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"5\": {\n\t\t\t\t\t\t\t\"name\": \"meta.delimiter.decimal.period.ts\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"6\": {\n\t\t\t\t\t\t\t\"name\": \"meta.delimiter.decimal.period.ts\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"numericConstant-literal\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"name\": \"constant.language.nan.ts\",\n\t\t\t\t\t\"match\": \"(?<!\\\\.|\\\\$)\\\\bNaN\\\\b(?!\\\\$)\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"name\": \"constant.language.infinity.ts\",\n\t\t\t\t\t\"match\": \"(?<!\\\\.|\\\\$)\\\\bInfinity\\\\b(?!\\\\$)\"\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"comment\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"name\": 
\"comment.block.documentation.ts\",\n\t\t\t\t\t\"begin\": \"/\\\\*\\\\*(?!/)\",\n\t\t\t\t\t\"end\": \"\\\\*/\",\n\t\t\t\t\t\"captures\": {\n\t\t\t\t\t},\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"include\": \"#docblock\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"name\": \"comment.block.ts\",\n\t\t\t\t\t\"begin\": \"/\\\\*\",\n\t\t\t\t\t\"end\": \"\\\\*/\",\n\t\t\t\t\t\"captures\": {\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"begin\": \"(^[ \\\\t]+)?(?=//)\",\n\t\t\t\t\t\"end\": \"(?=$)\",\n\t\t\t\t\t\"patterns\": [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"name\": \"comment.line.double-slash.ts\",\n\t\t\t\t\t\t\t\"begin\": \"//\",\n\t\t\t\t\t\t\t\"beginCaptures\": {\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"end\": \"(?=$)\"\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"docblock\": {\n\t\t\t\"patterns\": [\n\t\t\t\t{\n\t\t\t\t\t\"name\": \"storage.type.class.jsdoc\",\n\t\t\t\t\t\"match\": \"(?<!\\\\w)@(abstract|access|alias|arg|argument|async|attribute|augments|author|beta|borrows|bubbles|callback|chainable|class|classdesc|code|config|const|constant|constructor|constructs|copyright|default|defaultvalue|define|deprecated|desc|description|dict|emits|enum|event|example|exports?|extends|extension|extension_for|extensionfor|external|file|fileoverview|final|fires|for|function|global|host|ignore|implements|inherit[Dd]oc|inner|instance|interface|kind|lends|license|listens|main|member|memberof|method|mixes|mixins?|module|name|namespace|nocollapse|nosideeffects|override|overview|package|param|preserve|private|prop|property|protected|public|read[Oo]nly|record|require[ds]|returns?|see|since|static|struct|submodule|summary|template|this|throws|todo|tutorial|type|typedef|unrestricted|uses|var|variation|version|virtual|writeOnce)\\\\b\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"(?x)\\n(?:(?<=@param)|(?<=@arg)|(?<=@argument)|(?<=@type))\\n\\\\s+\\n({(?:\\n  \\\\* |                                        # {*} any type\\n  \\\\? 
|                                        # {?} unknown type\\n  (?:                                         # Check for a prefix\\n    \\\\? |                                      # {?string} nullable type\\n    !   |                                     # {!string} non-nullable type\\n    \\\\.{3}                                     # {...string} variable number of parameters\\n  )?\\n  (?:\\n    \\\\(                                        # Opening bracket of multiple types with parenthesis {(string|number)}\\n      [a-zA-Z_$]+\\n      (?:\\n        (?:\\n          [\\\\w$]*\\n          (?:\\\\[\\\\])?                           # {(string[]|number)} type application, an array of strings or a number\\n        ) |\\n        \\\\.?<[\\\\w$]+(?:,\\\\s+[\\\\w$]+)*>            # {Array<string>} or {Object<string, number>} type application (optional .)\\n      )\\n      (?:\\n        [\\\\.|~]                                # {Foo.bar} namespaced, {string|number} multiple, {Foo~bar} class-specific callback\\n        [a-zA-Z_$]+\\n        (?:\\n          (?:\\n            [\\\\w$]*\\n            (?:\\\\[\\\\])?                        # {(string|number[])} type application, a string or an array of numbers\\n          ) |\\n          \\\\.?<[\\\\w$]+(?:,\\\\s+[\\\\w$]+)*>         # {Array<string>} or {Object<string, number>} type application (optional .)\\n        )\\n      )*\\n    \\\\) |\\n    [a-zA-Z_$]+\\n    (?:\\n      (?:\\n        [\\\\w$]*\\n        (?:\\\\[\\\\])?                            
# {string[]|number} type application, an array of strings or a number\\n      ) |\\n      \\\\.?<[\\\\w$]+(?:,\\\\s+[\\\\w$]+)*>             # {Array<string>} or {Object<string, number>} type application (optional .)\\n    )\\n    (?:\\n      [\\\\.|~]                                 # {Foo.bar} namespaced, {string|number} multiple, {Foo~bar} class-specific callback\\n      [a-zA-Z_$]+\\n      (?:\\n        [\\\\w$]* |\\n        \\\\.?<[\\\\w$]+(?:,\\\\s+[\\\\w$]+)*>           # {Array<string>} or {Object<string, number>} type application (optional .)\\n      )\\n    )*\\n  )\\n                                             # Check for suffix\\n  (?:\\\\[\\\\])?                                  # {string[]} type application, an array of strings\\n  =?                                         # {string=} optional parameter\\n)})\\n\\\\s+\\n(\\n  \\\\[                                         # [foo] optional parameter\\n    \\\\s*\\n    (?:\\n      [a-zA-Z_$][\\\\w$]*\\n      (?:\\n        (?:\\\\[\\\\])?                            # Foo[].bar properties within an array\\n        \\\\.                                   # Foo.Bar namespaced parameter\\n        [a-zA-Z_$][\\\\w$]*\\n      )*\\n      (?:\\n        \\\\s*\\n        =                                    # [foo=bar] Default parameter value\\n        \\\\s*\\n        [\\\\w$\\\\s]*\\n      )?\\n    )\\n    \\\\s*\\n  \\\\] |\\n  (?:\\n    [a-zA-Z_$][\\\\w$]*\\n    (?:\\n      (?:\\\\[\\\\])?                              # Foo[].bar properties within an array\\n      \\\\.                                     # Foo.Bar namespaced parameter\\n      [a-zA-Z_$][\\\\w$]*\\n    )*\\n  )?\\n)\\n\\\\s+\\n(?:-\\\\s+)?                                    
# optional hyphen before the description\\n((?:(?!\\\\*\\\\/).)*)                             # The type description\",\n\t\t\t\t\t\"captures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"other.meta.jsdoc\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\"name\": \"variable.other.jsdoc\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"3\": {\n\t\t\t\t\t\t\t\"name\": \"other.description.jsdoc\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"match\": \"(?x)\\n({(?:\\n  \\\\* |                                       # {*} any type\\n  \\\\? |                                       # {?} unknown type\\n\\n  (?:                                        # Check for a prefix\\n    \\\\? |                                     # {?string} nullable type\\n    !   |                                    # {!string} non-nullable type\\n    \\\\.{3}                                    # {...string} variable number of parameters\\n  )?\\n\\n  (?:\\n    \\\\(                                       # Opening bracket of multiple types with parenthesis {(string|number)}\\n      [a-zA-Z_$]+\\n      (?:\\n        [\\\\w$]* |\\n        \\\\.?<[\\\\w$]+(?:,\\\\s+[\\\\w$]+)*>           # {Array<string>} or {Object<string, number>} type application (optional .)\\n      )\\n      (?:\\n        [\\\\.|~]                               # {Foo.bar} namespaced, {string|number} multiple, {Foo~bar} class-specific callback\\n        [a-zA-Z_$]+\\n        (?:\\n          [\\\\w$]* |\\n          \\\\.?<[\\\\w$]+(?:,\\\\s+[\\\\w$]+)*>         # {Array<string>} or {Object<string, number>} type application (optional .)\\n        )\\n      )*\\n    \\\\) |\\n    [a-zA-Z_$]+\\n    (?:\\n      [\\\\w$]* |\\n      \\\\.?<[\\\\w$]+(?:,\\\\s+[\\\\w$]+)*>             # {Array<string>} or {Object<string, number>} type application (optional .)\\n    )\\n    (?:\\n      [\\\\.|~]                                 # {Foo.bar} namespaced, {string|number} multiple, {Foo~bar} class-specific callback\\n      
[a-zA-Z_$]+\\n      (?:\\n        [\\\\w$]* |\\n        \\\\.?<[\\\\w$]+(?:,\\\\s+[\\\\w$]+)*>           # {Array<string>} or {Object<string, number>} type application (optional .)\\n      )\\n    )*\\n  )\\n                                             # Check for suffix\\n  (?:\\\\[\\\\])?                                  # {string[]} type application, an array of strings\\n  =?                                         # {string=} optional parameter\\n)})\\n\\\\s+\\n(?:-\\\\s+)?                                    # optional hyphen before the description\\n((?:(?!\\\\*\\\\/).)*)                             # The type description\",\n\t\t\t\t\t\"captures\": {\n\t\t\t\t\t\t\"0\": {\n\t\t\t\t\t\t\t\"name\": \"other.meta.jsdoc\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"1\": {\n\t\t\t\t\t\t\t\"name\": \"entity.name.type.instance.jsdoc\"\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"2\": {\n\t\t\t\t\t\t\t\"name\": \"other.description.jsdoc\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t]\n\t\t}\n\t}\n}"
  },
  {
    "path": "tsconfig.json",
    "content": "{\n\t\"compilerOptions\": {\n\t\t\"module\": \"commonjs\",\n\t\t\"target\": \"es6\",\n\t\t\"outDir\": \"out\",\n\t\t\"lib\": [\n\t\t\t\"es6\"\n\t\t],\n\t\t\"sourceMap\": true,\n\t\t\"rootDir\": \"src\",\n\t\t/* Strict Type-Checking Option */\n\t\t\"strict\": true,   /* enable all strict type-checking options */\n\t\t/* Additional Checks */\n\t\t\"noUnusedLocals\": true /* Report errors on unused locals. */\n\t\t// \"noImplicitReturns\": true, /* Report error when not all code paths in function return a value. */\n\t\t// \"noFallthroughCasesInSwitch\": true, /* Report errors for fallthrough cases in switch statement. */\n\t\t// \"noUnusedParameters\": true,  /* Report errors on unused parameters. */\n\t},\n\t\"exclude\": [\n\t\t\"examples\",\n\t\t\"node_modules\",\n\t\t\".vscode-test\"\n\t]\n}\n"
  },
  {
    "path": "tslint.json",
    "content": "{\n\t\"rules\": {\n\t\t\"no-string-throw\": true,\n\t\t\"no-unused-expression\": true,\n\t\t\"no-duplicate-variable\": true,\n\t\t\"class-name\": true,\n\t\t\"semicolon\": [false, \"never\"]\n\t},\n\t\"defaultSeverity\": \"warning\"\n}\n"
  }
]