[
  {
    "path": ".github/ISSUE_TEMPLATE/content-question.md",
    "content": "---\nname: Content Question\nabout: Ask a question about something you read in the books?\nlabels: \n\n---\n\n**Yes, I promise I've read the [Contributions Guidelines](https://github.com/getify/You-Dont-Know-JS/blob/master/CONTRIBUTING.md)** (please feel free to remove this line).\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/foreign-translation-request.md",
    "content": "---\nname: Foreign Translation Request\nabout: Want to request a translation into a foreign language?\nlabels: \n\n---\n\nPlease check these issues first:\n\n* https://github.com/getify/You-Dont-Know-JS/issues?utf8=%E2%9C%93&q=label%3A%22foreign+language+translations%22+\n* https://github.com/getify/You-Dont-Know-JS/issues/9\n* https://github.com/getify/You-Dont-Know-JS/issues/900\n* https://github.com/getify/You-Dont-Know-JS/issues/1378\n\nTo summarize, the steps for a foreign language translation are:\n\n1. Fork this repo\n2. Make your own translation entirely in your fork, preferably of all six books, but at a minimum of one whole book\n3. File an issue asking for a branch to be made on our main repo, named for that [language's ISO code](http://www.lingoes.net/en/translator/langcode.htm)\n4. Once the branch is created, you can PR to merge your translated work in\n5. Once the merge is complete, I will promote you to a repository maintainer so you can manage any further translation maintenance work on your own branch of this repo\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/report-technical-mistake.md",
    "content": "---\nname: Report Technical Mistake\nabout: Help us fix a mistake in the code.\nlabels: \n\n---\n\n**Yes, I promise I've read the [Contributions Guidelines](https://github.com/getify/You-Dont-Know-JS/blob/master/CONTRIBUTING.md)** (please feel free to remove this line).\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/textual-grammar-typo.md",
    "content": "---\nname: Textual/Grammar Typo\nabout: Help us correct a spelling or grammar error in the text.\nlabels: \n\n---\n\n**Yes, I promise I've read the [Contributions Guidelines](https://github.com/getify/You-Dont-Know-JS/blob/master/CONTRIBUTING.md)** (please feel free to remove this line).\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "# Contributing\n\nPlease feel free to contribute to the quality of this content by submitting PR's for improvements to code snippets, explanations, etc. If there's any doubt or if you think that a word/phrase is used confusingly, **before submitting a PR, open an issue to ask about it.**\n\nHowever, if you choose to contribute content (not just typo corrections) to this repo, you agree that you're giving me a non-exclusive license to use that content for the book, as I (and my publisher) deem appropriate. You probably guessed that already, but I just have to make sure the lawyers are happy by explicitly stating it.\n\n## Reading Experience (Chapter/Section links, etc)\n\nI understand that reading one long .md file, with no relative cross links to other sections/etc, is not the preferred reading experience for most of you. As such, it's totally reasonable to want to file an issue/PR to add those kinds of features.\n\nThis topic has been brought up many times, and I've considered it. For now, I **do not** accept these kinds of changes into the repo.\n\nThe main purpose of my book repos is to track and manage the content for the purposes of publication (paid-for ebooks and print books). I do this in the open because I also care about providing free and early access to the content, to make sure there is no paywall barrier to learning.\n\nAs such, this repo **is not optimized for your reading experience.**\n\nThe primary reading experience, likely the most pleasant one for many of you, is the ebooks or print books, which [are available for sale](http://ssearch.oreilly.com/?q=%22you+don%27t+know+js%22&x=0&y=0). The balance I'm striking here is releasing the content for free, but selling the reading experience. 
Other authors make different decisions on that balance, but that's what I've come to for now.\n\nI hope you continue to enjoy and benefit from the content, and I also hope you value it enough to [purchase the best reading experience](http://ssearch.oreilly.com/?q=%22you+don%27t+know+js%22&x=0&y=0) in the ebook/print form.\n\n## Editions\n\nThe current state of this repo is the 1st Edition of the published form of these books. That means that you should have almost exactly the same content here as in the ebooks or printed books, with only minor variances in typos, formatting, etc.\n\nI generally am not accepting any changes to the current repo, as I do not want this content to diverge from what's in the published books. There are over a hundred filed issues/PRs for changes that are being collected for the 2nd Edition, but work has not yet begun on that.\n\nSo, if you find something that should be fixed, just know that it will likely sit for a while in that batch until it's time to make the 2nd Edition updates. At that time, my plan is to make separate branches to track the editions.\n\n## Typos?\n\nThese books go through official editing with the publisher, and typos are likely all caught at that stage. As such, **typos are not a big concern for this repo**.\n\nIf you're going to submit a PR for typo fixes, please be measured in doing so by collecting several small changes into a single PR (in separate commits). Or, **just don't even worry about them for now,** because we'll get to them later. I promise.\n\n## Search First!\n\nAlso, if you have any questions or concerns, please make sure to search the issues (both open and closed!) first, to keep the churn of issues to a minimum. I want to keep my focus on writing these books as much as possible.\n"
  },
  {
    "path": "ISSUE_TEMPLATE.md",
    "content": "**Yes, I promise I've read the [Contributions Guidelines](https://github.com/getify/You-Dont-Know-JS/blob/master/CONTRIBUTING.md)** (please feel free to remove this line).\n"
  },
  {
    "path": "LICENSE.txt",
    "content": "Attribution-NonCommercial-NoDerivatives 4.0 International\n\n=======================================================================\n\nCreative Commons Corporation (\"Creative Commons\") is not a law firm and\ndoes not provide legal services or legal advice. Distribution of\nCreative Commons public licenses does not create a lawyer-client or\nother relationship. Creative Commons makes its licenses and related\ninformation available on an \"as-is\" basis. Creative Commons gives no\nwarranties regarding its licenses, any material licensed under their\nterms and conditions, or any related information. Creative Commons\ndisclaims all liability for damages resulting from their use to the\nfullest extent possible.\n\nUsing Creative Commons Public Licenses\n\nCreative Commons public licenses provide a standard set of terms and\nconditions that creators and other rights holders may use to share\noriginal works of authorship and other material subject to copyright\nand certain other rights specified in the public license below. The\nfollowing considerations are for informational purposes only, are not\nexhaustive, and do not form part of our licenses.\n\n     Considerations for licensors: Our public licenses are\n     intended for use by those authorized to give the public\n     permission to use material in ways otherwise restricted by\n     copyright and certain other rights. Our licenses are\n     irrevocable. Licensors should read and understand the terms\n     and conditions of the license they choose before applying it.\n     Licensors should also secure all rights necessary before\n     applying our licenses so that the public can reuse the\n     material as expected. Licensors should clearly mark any\n     material not subject to the license. This includes other CC-\n     licensed material, or material used under an exception or\n     limitation to copyright. 
More considerations for licensors:\n    wiki.creativecommons.org/Considerations_for_licensors\n\n     Considerations for the public: By using one of our public\n     licenses, a licensor grants the public permission to use the\n     licensed material under specified terms and conditions. If\n     the licensor's permission is not necessary for any reason--for\n     example, because of any applicable exception or limitation to\n     copyright--then that use is not regulated by the license. Our\n     licenses grant only permissions under copyright and certain\n     other rights that a licensor has authority to grant. Use of\n     the licensed material may still be restricted for other\n     reasons, including because others have copyright or other\n     rights in the material. A licensor may make special requests,\n     such as asking that all changes be marked or described.\n     Although not required by our licenses, you are encouraged to\n     respect those requests where reasonable. More_considerations\n     for the public:\n    wiki.creativecommons.org/Considerations_for_licensees\n\n=======================================================================\n\nCreative Commons Attribution-NonCommercial-NoDerivatives 4.0\nInternational Public License\n\nBy exercising the Licensed Rights (defined below), You accept and agree\nto be bound by the terms and conditions of this Creative Commons\nAttribution-NonCommercial-NoDerivatives 4.0 International Public\nLicense (\"Public License\"). To the extent this Public License may be\ninterpreted as a contract, You are granted the Licensed Rights in\nconsideration of Your acceptance of these terms and conditions, and the\nLicensor grants You such rights in consideration of benefits the\nLicensor receives from making the Licensed Material available under\nthese terms and conditions.\n\n\nSection 1 -- Definitions.\n\n  a. 
Adapted Material means material subject to Copyright and Similar\n     Rights that is derived from or based upon the Licensed Material\n     and in which the Licensed Material is translated, altered,\n     arranged, transformed, or otherwise modified in a manner requiring\n     permission under the Copyright and Similar Rights held by the\n     Licensor. For purposes of this Public License, where the Licensed\n     Material is a musical work, performance, or sound recording,\n     Adapted Material is always produced where the Licensed Material is\n     synched in timed relation with a moving image.\n\n  b. Copyright and Similar Rights means copyright and/or similar rights\n     closely related to copyright including, without limitation,\n     performance, broadcast, sound recording, and Sui Generis Database\n     Rights, without regard to how the rights are labeled or\n     categorized. For purposes of this Public License, the rights\n     specified in Section 2(b)(1)-(2) are not Copyright and Similar\n     Rights.\n\n  c. Effective Technological Measures means those measures that, in the\n     absence of proper authority, may not be circumvented under laws\n     fulfilling obligations under Article 11 of the WIPO Copyright\n     Treaty adopted on December 20, 1996, and/or similar international\n     agreements.\n\n  d. Exceptions and Limitations means fair use, fair dealing, and/or\n     any other exception or limitation to Copyright and Similar Rights\n     that applies to Your use of the Licensed Material.\n\n  e. Licensed Material means the artistic or literary work, database,\n     or other material to which the Licensor applied this Public\n     License.\n\n  f. Licensed Rights means the rights granted to You subject to the\n     terms and conditions of this Public License, which are limited to\n     all Copyright and Similar Rights that apply to Your use of the\n     Licensed Material and that the Licensor has authority to license.\n\n  g. 
Licensor means the individual(s) or entity(ies) granting rights\n     under this Public License.\n\n  h. NonCommercial means not primarily intended for or directed towards\n     commercial advantage or monetary compensation. For purposes of\n     this Public License, the exchange of the Licensed Material for\n     other material subject to Copyright and Similar Rights by digital\n     file-sharing or similar means is NonCommercial provided there is\n     no payment of monetary compensation in connection with the\n     exchange.\n\n  i. Share means to provide material to the public by any means or\n     process that requires permission under the Licensed Rights, such\n     as reproduction, public display, public performance, distribution,\n     dissemination, communication, or importation, and to make material\n     available to the public including in ways that members of the\n     public may access the material from a place and at a time\n     individually chosen by them.\n\n  j. Sui Generis Database Rights means rights other than copyright\n     resulting from Directive 96/9/EC of the European Parliament and of\n     the Council of 11 March 1996 on the legal protection of databases,\n     as amended and/or succeeded, as well as other essentially\n     equivalent rights anywhere in the world.\n\n  k. You means the individual or entity exercising the Licensed Rights\n     under this Public License. Your has a corresponding meaning.\n\n\nSection 2 -- Scope.\n\n  a. License grant.\n\n       1. Subject to the terms and conditions of this Public License,\n          the Licensor hereby grants You a worldwide, royalty-free,\n          non-sublicensable, non-exclusive, irrevocable license to\n          exercise the Licensed Rights in the Licensed Material to:\n\n            a. reproduce and Share the Licensed Material, in whole or\n               in part, for NonCommercial purposes only; and\n\n            b. 
produce and reproduce, but not Share, Adapted Material\n               for NonCommercial purposes only.\n\n       2. Exceptions and Limitations. For the avoidance of doubt, where\n          Exceptions and Limitations apply to Your use, this Public\n          License does not apply, and You do not need to comply with\n          its terms and conditions.\n\n       3. Term. The term of this Public License is specified in Section\n          6(a).\n\n       4. Media and formats; technical modifications allowed. The\n          Licensor authorizes You to exercise the Licensed Rights in\n          all media and formats whether now known or hereafter created,\n          and to make technical modifications necessary to do so. The\n          Licensor waives and/or agrees not to assert any right or\n          authority to forbid You from making technical modifications\n          necessary to exercise the Licensed Rights, including\n          technical modifications necessary to circumvent Effective\n          Technological Measures. For purposes of this Public License,\n          simply making modifications authorized by this Section 2(a)\n          (4) never produces Adapted Material.\n\n       5. Downstream recipients.\n\n            a. Offer from the Licensor -- Licensed Material. Every\n               recipient of the Licensed Material automatically\n               receives an offer from the Licensor to exercise the\n               Licensed Rights under the terms and conditions of this\n               Public License.\n\n            b. No downstream restrictions. You may not offer or impose\n               any additional or different terms or conditions on, or\n               apply any Effective Technological Measures to, the\n               Licensed Material if doing so restricts exercise of the\n               Licensed Rights by any recipient of the Licensed\n               Material.\n\n       6. No endorsement. 
Nothing in this Public License constitutes or\n          may be construed as permission to assert or imply that You\n          are, or that Your use of the Licensed Material is, connected\n          with, or sponsored, endorsed, or granted official status by,\n          the Licensor or others designated to receive attribution as\n          provided in Section 3(a)(1)(A)(i).\n\n  b. Other rights.\n\n       1. Moral rights, such as the right of integrity, are not\n          licensed under this Public License, nor are publicity,\n          privacy, and/or other similar personality rights; however, to\n          the extent possible, the Licensor waives and/or agrees not to\n          assert any such rights held by the Licensor to the limited\n          extent necessary to allow You to exercise the Licensed\n          Rights, but not otherwise.\n\n       2. Patent and trademark rights are not licensed under this\n          Public License.\n\n       3. To the extent possible, the Licensor waives any right to\n          collect royalties from You for the exercise of the Licensed\n          Rights, whether directly or through a collecting society\n          under any voluntary or waivable statutory or compulsory\n          licensing scheme. In all other cases the Licensor expressly\n          reserves any right to collect such royalties, including when\n          the Licensed Material is used other than for NonCommercial\n          purposes.\n\n\nSection 3 -- License Conditions.\n\nYour exercise of the Licensed Rights is expressly made subject to the\nfollowing conditions.\n\n  a. Attribution.\n\n       1. If You Share the Licensed Material, You must:\n\n            a. retain the following if it is supplied by the Licensor\n               with the Licensed Material:\n\n                 i. 
identification of the creator(s) of the Licensed\n                    Material and any others designated to receive\n                    attribution, in any reasonable manner requested by\n                    the Licensor (including by pseudonym if\n                    designated);\n\n                ii. a copyright notice;\n\n               iii. a notice that refers to this Public License;\n\n                iv. a notice that refers to the disclaimer of\n                    warranties;\n\n                 v. a URI or hyperlink to the Licensed Material to the\n                    extent reasonably practicable;\n\n            b. indicate if You modified the Licensed Material and\n               retain an indication of any previous modifications; and\n\n            c. indicate the Licensed Material is licensed under this\n               Public License, and include the text of, or the URI or\n               hyperlink to, this Public License.\n\n          For the avoidance of doubt, You do not have permission under\n          this Public License to Share Adapted Material.\n\n       2. You may satisfy the conditions in Section 3(a)(1) in any\n          reasonable manner based on the medium, means, and context in\n          which You Share the Licensed Material. For example, it may be\n          reasonable to satisfy the conditions by providing a URI or\n          hyperlink to a resource that includes the required\n          information.\n\n       3. If requested by the Licensor, You must remove any of the\n          information required by Section 3(a)(1)(A) to the extent\n          reasonably practicable.\n\n\nSection 4 -- Sui Generis Database Rights.\n\nWhere the Licensed Rights include Sui Generis Database Rights that\napply to Your use of the Licensed Material:\n\n  a. 
for the avoidance of doubt, Section 2(a)(1) grants You the right\n     to extract, reuse, reproduce, and Share all or a substantial\n     portion of the contents of the database for NonCommercial purposes\n     only and provided You do not Share Adapted Material;\n\n  b. if You include all or a substantial portion of the database\n     contents in a database in which You have Sui Generis Database\n     Rights, then the database in which You have Sui Generis Database\n     Rights (but not its individual contents) is Adapted Material; and\n\n  c. You must comply with the conditions in Section 3(a) if You Share\n     all or a substantial portion of the contents of the database.\n\nFor the avoidance of doubt, this Section 4 supplements and does not\nreplace Your obligations under this Public License where the Licensed\nRights include other Copyright and Similar Rights.\n\n\nSection 5 -- Disclaimer of Warranties and Limitation of Liability.\n\n  a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE\n     EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS\n     AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF\n     ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,\n     IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,\n     WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR\n     PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,\n     ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT\n     KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT\n     ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.\n\n  b. 
TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE\n     TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,\n     NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,\n     INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,\n     COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR\n     USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN\n     ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR\n     DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR\n     IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.\n\n  c. The disclaimer of warranties and limitation of liability provided\n     above shall be interpreted in a manner that, to the extent\n     possible, most closely approximates an absolute disclaimer and\n     waiver of all liability.\n\n\nSection 6 -- Term and Termination.\n\n  a. This Public License applies for the term of the Copyright and\n     Similar Rights licensed here. However, if You fail to comply with\n     this Public License, then Your rights under this Public License\n     terminate automatically.\n\n  b. Where Your right to use the Licensed Material has terminated under\n     Section 6(a), it reinstates:\n\n       1. automatically as of the date the violation is cured, provided\n          it is cured within 30 days of Your discovery of the\n          violation; or\n\n       2. upon express reinstatement by the Licensor.\n\n     For the avoidance of doubt, this Section 6(b) does not affect any\n     right the Licensor may have to seek remedies for Your violations\n     of this Public License.\n\n  c. For the avoidance of doubt, the Licensor may also offer the\n     Licensed Material under separate terms or conditions or stop\n     distributing the Licensed Material at any time; however, doing so\n     will not terminate this Public License.\n\n  d. 
Sections 1, 5, 6, 7, and 8 survive termination of this Public\n     License.\n\n\nSection 7 -- Other Terms and Conditions.\n\n  a. The Licensor shall not be bound by any additional or different\n     terms or conditions communicated by You unless expressly agreed.\n\n  b. Any arrangements, understandings, or agreements regarding the\n     Licensed Material not stated herein are separate from and\n     independent of the terms and conditions of this Public License.\n\n\nSection 8 -- Interpretation.\n\n  a. For the avoidance of doubt, this Public License does not, and\n     shall not be interpreted to, reduce, limit, restrict, or impose\n     conditions on any use of the Licensed Material that could lawfully\n     be made without permission under this Public License.\n\n  b. To the extent possible, if any provision of this Public License is\n     deemed unenforceable, it shall be automatically reformed to the\n     minimum extent necessary to make it enforceable. If the provision\n     cannot be reformed, it shall be severed from this Public License\n     without affecting the enforceability of the remaining terms and\n     conditions.\n\n  c. No term or condition of this Public License will be waived and no\n     failure to comply consented to unless expressly agreed to by the\n     Licensor.\n\n  d. Nothing in this Public License constitutes or may be interpreted\n     as a limitation upon, or waiver of, any privileges and immunities\n     that apply to the Licensor or You, including from the legal\n     processes of any jurisdiction or authority.\n\n=======================================================================\n\nCreative Commons is not a party to its public\nlicenses. Notwithstanding, Creative Commons may elect to apply one of\nits public licenses to material it publishes and in those instances\nwill be considered the “Licensor.” The text of the Creative Commons\npublic licenses is dedicated to the public domain under the CC0 Public\nDomain Dedication. 
Except for the limited purpose of indicating that\nmaterial is shared under a Creative Commons public license or as\notherwise permitted by the Creative Commons policies published at\ncreativecommons.org/policies, Creative Commons does not authorize the\nuse of the trademark \"Creative Commons\" or any other trademark or logo\nof Creative Commons without its prior written consent including,\nwithout limitation, in connection with any unauthorized modifications\nto any of its public licenses or any other arrangements,\nunderstandings, or agreements concerning use of licensed material. For\nthe avoidance of doubt, this paragraph does not form part of the\npublic licenses.\n\nCreative Commons may be contacted at creativecommons.org.\n"
  },
  {
    "path": "PULL_REQUEST_TEMPLATE.md",
    "content": "**Yes, I promise I've read the [Contributions Guidelines](https://github.com/getify/You-Dont-Know-JS/blob/master/CONTRIBUTING.md)** (please feel free to remove this line).\n"
  },
  {
    "path": "README.md",
    "content": "# [You Dont Know JS中文版](https://github.com/kujian/You-Dont-Know-JS/tree/1ed-zh-CN)\n\n文章目录：\n\n1. [入门与进阶](https://github.com/kujian/You-Dont-Know-JS/blob/1ed-zh-CN/up%20&%20going/README.md#you-dont-know-js-up--going)\n2. [作用域与闭包](https://github.com/kujian/You-Dont-Know-JS/blob/1ed-zh-CN/scope%20&%20closures/README.md#you-dont-know-js-scope--closures)\n3. [this与对象原型](https://github.com/kujian/You-Dont-Know-JS/blob/1ed-zh-CN/this%20&%20object%20prototypes/README.md#you-dont-know-js-this--object-prototypes)\n4. [类型与文法](https://github.com/kujian/You-Dont-Know-JS/blob/1ed-zh-CN/types%20&%20grammar/README.md#you-dont-know-js-types--grammar)\n5. [异步与性能](https://github.com/kujian/You-Dont-Know-JS/blob/1ed-zh-CN/async%20&%20performance/README.md#you-dont-know-js-async--performance)\n6. [ES6与未来](https://github.com/kujian/You-Dont-Know-JS/blob/1ed-zh-CN/es6%20&%20beyond/README.md#you-dont-know-js-es6--beyond)\n\n\n[英文原版](https://github.com/getify/You-Dont-Know-JS)，[中文原版](https://github.com/getify/You-Dont-Know-JS/tree/1ed-zh-CN) ，感谢作者开源，感谢社区的翻译，具体介绍请点击链接查阅\n"
  },
  {
    "path": "async & performance/README.md",
    "content": "# You Don't Know JS: Async & Performance\n\n<img src=\"cover.jpg\" width=\"300\">\n\n-----\n\n**[Purchase digital/print copy from O'Reilly](http://shop.oreilly.com/product/0636920033752.do)**\n\n-----\n\n[Table of Contents](toc.md)\n\n* [Foreword](foreword.md) (by [Jake Archibald](http://jakearchibald.com))\n* [Preface](../preface.md)\n* [Chapter 1: Asynchrony: Now & Later](ch1.md)\n* [Chapter 2: Callbacks](ch2.md)\n* [Chapter 3: Promises](ch3.md)\n* [Chapter 4: Generators](ch4.md)\n* [Chapter 5: Program Performance](ch5.md)\n* [Chapter 6: Benchmarking & Tuning](ch6.md)\n* [Appendix A: Library: asynquence](apA.md)\n* [Appendix B: Advanced Async Patterns](apB.md)\n* [Appendix C: Thank You's!](apC.md)\n"
  },
  {
    "path": "async & performance/apA.md",
    "content": "# You Don't Know JS: Async & Performance\n# Appendix A: *asynquence* Library\n\nChapters 1 and 2 went into quite a bit of detail about typical asynchronous programming patterns and how they're commonly solved with callbacks. But we also saw why callbacks are fatally limited in capability, which led us to Chapters 3 and 4, with Promises and generators offering a much more solid, trustable, and reason-able base to build your asynchrony on.\n\nI referenced my own asynchronous library *asynquence* (http://github.com/getify/asynquence) -- \"async\" + \"sequence\" = \"asynquence\" -- several times in this book, and I want to now briefly explain how it works and why its unique design is important and helpful.\n\nIn the next appendix, we'll explore some advanced async patterns, but you'll probably want a library to make those palatable enough to be useful. We'll use *asynquence* to express those patterns, so you'll want to spend a little time here getting to know the library first.\n\n*asynquence* is obviously not the only option for good async coding; certainly there are many great libraries in this space. But *asynquence* provides a unique perspective by combining the best of all these patterns into a single library, and moreover is built on a single basic abstraction: the (async) sequence.\n\nMy premise is that sophisticated JS programs often need bits and pieces of various different asynchronous patterns woven together, and this is usually left entirely up to each developer to figure out. 
Instead of having to bring in two or more different async libraries that focus on different aspects of asynchrony, *asynquence* unifies them into variated sequence steps, with just one core library to learn and deploy.\n\nI believe the value is strong enough with *asynquence* to make async flow control programming with Promise-style semantics super easy to accomplish, so that's why we'll exclusively focus on that library here.\n\nTo begin, I'll explain the design principles behind *asynquence*, and then we'll illustrate how its API works with code examples.\n\n## Sequences, Abstraction Design\n\nUnderstanding *asynquence* begins with understanding a fundamental abstraction: any series of steps for a task, whether they separately are synchronous or asynchronous, can be collectively thought of as a \"sequence\". In other words, a sequence is a container that represents a task, and is comprised of individual (potentially async) steps to complete that task.\n\nEach step in the sequence is controlled under the covers by a Promise (see Chapter 3). That is, every step you add to a sequence implicitly creates a Promise that is wired to the previous end of the sequence. Because of the semantics of Promises, every single step advancement in a sequence is asynchronous, even if you synchronously complete the step.\n\nMoreover, a sequence will always proceed linearly from step to step, meaning that step 2 always comes after step 1 finishes, and so on.\n\nOf course, a new sequence can be forked off an existing sequence, meaning the fork only occurs once the main sequence reaches that point in the flow. Sequences can also be combined in various ways, including having one sequence subsumed by another sequence at a particular point in the flow.\n\nA sequence is kind of like a Promise chain. However, with Promise chains, there is no \"handle\" to grab that references the entire chain. 
Whichever Promise you have a reference to only represents the current step in the chain plus any other steps hanging off it. Essentially, you cannot hold a reference to a Promise chain unless you hold a reference to the first Promise in the chain.\n\nThere are many cases where it turns out to be quite useful to have a handle that references the entire sequence collectively. The most important of those cases is with sequence abort/cancel. As we covered extensively in Chapter 3, Promises themselves should never be able to be canceled, as this violates a fundamental design imperative: external immutability.\n\nBut sequences have no such immutability design principle, mostly because sequences are not passed around as future-value containers that need immutable value semantics. So sequences are the proper level of abstraction to handle abort/cancel behavior. *asynquence* sequences can be `abort()`ed at any time, and the sequence will stop at that point and not go on for any reason.\n\nThere are plenty more reasons to prefer a sequence abstraction on top of Promise chains, for flow control purposes.\n\nFirst, Promise chaining is a rather manual process -- one that can get pretty tedious once you start creating and chaining Promises across a wide swath of your programs -- and this tedium can act counterproductively to dissuade the developer from using Promises in places where they are quite appropriate.\n\nAbstractions are meant to reduce boilerplate and tedium, so the sequence abstraction is a good solution to this problem. With Promises, your focus is on the individual step, and there's little assumption that you will keep the chain going. 
With sequences, the opposite approach is taken, assuming the sequence will keep having more steps added indefinitely.\n\nThis abstraction complexity reduction is especially powerful when you start thinking about higher-order Promise patterns (beyond `race([..])` and `all([..])`).\n\nFor example, in the middle of a sequence, you may want to express a step that is conceptually like a `try..catch` in that the step will always result in success, either the intended main success resolution or a positive nonerror signal for the caught error. Or, you might want to express a step that is like a retry/until loop, where it keeps trying the same step over and over until success occurs.\n\nThese sorts of abstractions are quite nontrivial to express using only Promise primitives, and doing so in the middle of an existing Promise chain is not pretty. But if you abstract your thinking to a sequence, and consider a step as a wrapper around a Promise, that step wrapper can hide such details, freeing you to think about the flow control in the most sensible way without being bothered by the details.\n\nSecond, and perhaps more importantly, thinking of async flow control in terms of steps in a sequence allows you to abstract out the details of what types of asynchronicity are involved with each individual step. Under the covers, a Promise will always control the step, but above the covers, that step can look either like a continuation callback (the simple default), or like a real Promise, or like a run-to-completion generator, or ... Hopefully, you get the picture.\n\nThird, sequences can more easily be twisted to adapt to different modes of thinking, such as event-, stream-, or reactive-based coding. *asynquence* provides a pattern I call "reactive sequences" (which we'll cover later) as a variation on the "reactive observable" ideas in RxJS ("Reactive Extensions"), that lets a repeatable event fire off a new sequence instance each time. 
Promises are one-shot-only, so it's quite awkward to express repetitious asynchrony with Promises alone.\n\nAnother alternate mode of thinking inverts the resolution/control capability in a pattern I call \"iterable sequences\". Instead of each individual step internally controlling its own completion (and thus advancement of the sequence), the sequence is inverted so the advancement control is through an external iterator, and each step in the *iterable sequence* just responds to the `next(..)` *iterator* control.\n\nWe'll explore all of these different variations as we go throughout the rest of this appendix, so don't worry if we ran over those bits far too quickly just now.\n\nThe takeaway is that sequences are a more powerful and sensible abstraction for complex asynchrony than just Promises (Promise chains) or just generators, and *asynquence* is designed to express that abstraction with just the right level of sugar to make async programming more understandable and more enjoyable.\n\n## *asynquence* API\n\nTo start off, the way you create a sequence (an *asynquence* instance) is with the `ASQ(..)` function. An `ASQ()` call with no parameters creates an empty initial sequence, whereas passing one or more values or functions to `ASQ(..)` sets up the sequence with each argument representing the initial steps of the sequence.\n\n**Note:** For the purposes of all code examples here, I will use the *asynquence* top-level identifier in global browser usage: `ASQ`. If you include and use *asynquence* through a module system (browser or server), you of course can define whichever symbol you prefer, and *asynquence* won't care!\n\nMany of the API methods discussed here are built into the core of *asynquence*, but others are provided through including the optional \"contrib\" plug-ins package. 
See the documentation for *asynquence* for whether a method is built in or defined via plug-in: http://github.com/getify/asynquence\n\n### Steps\n\nIf a function represents a normal step in the sequence, that function is invoked with the first parameter being the continuation callback, and any subsequent parameters being any messages passed on from the previous step. The step will not complete until the continuation callback is called. Once it's called, any arguments you pass to it will be sent along as messages to the next step in the sequence.\n\nTo add an additional normal step to the sequence, call `then(..)` (which has essentially the exact same semantics as the `ASQ(..)` call):\n\n```js\nASQ(\n\t// step 1\n\tfunction(done){\n\t\tsetTimeout( function(){\n\t\t\tdone( \"Hello\" );\n\t\t}, 100 );\n\t},\n\t// step 2\n\tfunction(done,greeting) {\n\t\tsetTimeout( function(){\n\t\t\tdone( greeting + \" World\" );\n\t\t}, 100 );\n\t}\n)\n// step 3\n.then( function(done,msg){\n\tsetTimeout( function(){\n\t\tdone( msg.toUpperCase() );\n\t}, 100 );\n} )\n// step 4\n.then( function(done,msg){\n\tconsole.log( msg );\t\t\t// HELLO WORLD\n} );\n```\n\n**Note:** Though the name `then(..)` is identical to the native Promises API, this `then(..)` is different. You can pass as few or as many functions or values to `then(..)` as you'd like, and each is taken as a separate step. There's no two-callback fulfilled/rejected semantics involved.\n\nUnlike with Promises, where to chain one Promise to the next you have to create and `return` that Promise from a `then(..)` fulfillment handler, with *asynquence*, all you need to do is call the continuation callback -- I always call it `done()` but you can name it whatever suits you -- and optionally pass it completion messages as arguments.\n\nEach step defined by `then(..)` is assumed to be asynchronous. 
If you have a step that's synchronous, you can either just call `done(..)` right away, or you can use the simpler `val(..)` step helper:\n\n```js\n// step 1 (sync)\nASQ( function(done){\n\tdone( \"Hello\" );\t// manually synchronous\n} )\n// step 2 (sync)\n.val( function(greeting){\n\treturn greeting + \" World\";\n} )\n// step 3 (async)\n.then( function(done,msg){\n\tsetTimeout( function(){\n\t\tdone( msg.toUpperCase() );\n\t}, 100 );\n} )\n// step 4 (sync)\n.val( function(msg){\n\tconsole.log( msg );\n} );\n```\n\nAs you can see, `val(..)`-invoked steps don't receive a continuation callback, as that part is assumed for you -- and the parameter list is less cluttered as a result! To send a message along to the next step, you simply use `return`.\n\nThink of `val(..)` as representing a synchronous \"value-only\" step, which is useful for synchronous value operations, logging, and the like.\n\n### Errors\n\nOne important difference with *asynquence* compared to Promises is with error handling.\n\nWith Promises, each individual Promise (step) in a chain can have its own independent error, and each subsequent step has the ability to handle the error or not. The main reason for this semantic comes (again) from the focus on individual Promises rather than on the chain (sequence) as a whole.\n\nI believe that most of the time, an error in one part of a sequence is generally not recoverable, so the subsequent steps in the sequence are moot and should be skipped. So, by default, an error at any step of a sequence throws the entire sequence into error mode, and the rest of the normal steps are ignored.\n\nIf you *do* need to have a step where its error is recoverable, there are several different API methods that can accommodate, such as `try(..)` -- previously mentioned as a kind of `try..catch` step -- or `until(..)` -- a retry loop that keeps attempting the step until it succeeds or you manually `break()` the loop. 
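To appreciate what `until(..)` is abstracting away, here's roughly what a bare retry loop looks like with plain Promises alone (a hypothetical `retry(..)` helper, not part of *asynquence*):

```js
// re-attempt `task()` until it fulfills, up to `max` attempts;
// the final rejection propagates if every attempt fails
function retry(task,max) {
	return task().catch( function(err){
		if (max <= 1) throw err;
		return retry( task, max - 1 );
	} );
}

var attempts = 0;

retry( function(){
	attempts++;
	return (attempts < 3) ?
		Promise.reject( "not yet" ) :
		Promise.resolve( "succeeded on try " + attempts );
}, 5 )
.then( function(msg){
	console.log( msg );		// succeeded on try 3
} );
```

Even this simplified version needs recursion and careful error rethrowing; splicing such logic into the middle of an existing Promise chain is messier still.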
*asynquence* even has `pThen(..)` and `pCatch(..)` methods, which work identically to how normal Promise `then(..)` and `catch(..)` work (see Chapter 3), so you can do localized mid-sequence error handling if you so choose.\n\nThe point is, you have both options, but the more common one in my experience is the default. With Promises, to get a chain of steps to ignore all steps once an error occurs, you have to take care not to register a rejection handler at any step; otherwise, that error gets swallowed as handled, and the sequence may continue (perhaps unexpectedly). This kind of desired behavior is a bit awkward to properly and reliably handle.\n\nTo register a sequence error notification handler, *asynquence* provides an `or(..)` sequence method, which also has an alias of `onerror(..)`. You can call this method anywhere in the sequence, and you can register as many handlers as you'd like. That makes it easy for multiple different consumers to listen in on a sequence to know if it failed or not; it's kind of like an error event handler in that respect.\n\nJust like with Promises, all JS exceptions become sequence errors, or you can programmatically signal a sequence error:\n\n```js\nvar sq = ASQ( function(done){\n\tsetTimeout( function(){\n\t\t// signal an error for the sequence\n\t\tdone.fail( \"Oops\" );\n\t}, 100 );\n} )\n.then( function(done){\n\t// will never get here\n} )\n.or( function(err){\n\tconsole.log( err );\t\t\t// Oops\n} )\n.then( function(done){\n\t// won't get here either\n} );\n\n// later\n\nsq.or( function(err){\n\tconsole.log( err );\t\t\t// Oops\n} );\n```\n\nAnother really important difference with error handling in *asynquence* compared to native Promises is the default behavior of \"unhandled exceptions\". 
As we discussed at length in Chapter 3, a rejected Promise without a registered rejection handler will just silently hold (aka swallow) the error; you have to remember to always end a chain with a final `catch(..)`.\n\nIn *asynquence*, the assumption is reversed.\n\nIf an error occurs on a sequence, and it **at that moment** has no error handlers registered, the error is reported to the `console`. In other words, unhandled rejections are by default always reported so as not to be swallowed and missed.\n\nAs soon as you register an error handler against a sequence, it opts that sequence out of such reporting, to prevent duplicate noise.\n\nThere may, in fact, be cases where you want to create a sequence that may go into the error state before you have a chance to register the handler. This isn't common, but it can happen from time to time.\n\nIn those cases, you can also **opt a sequence instance out** of error reporting by calling `defer()` on the sequence. You should only opt out of error reporting if you are sure that you're going to eventually handle such errors:\n\n```js\nvar sq1 = ASQ( function(done){\n\tdoesnt.Exist();\t\t\t// will throw exception to console\n} );\n\nvar sq2 = ASQ( function(done){\n\tdoesnt.Exist();\t\t\t// will throw only a sequence error\n} )\n// opt-out of error reporting\n.defer();\n\nsetTimeout( function(){\n\tsq1.or( function(err){\n\t\tconsole.log( err );\t// ReferenceError\n\t} );\n\n\tsq2.or( function(err){\n\t\tconsole.log( err );\t// ReferenceError\n\t} );\n}, 100 );\n\n// ReferenceError (from sq1)\n```\n\nThis is better error handling behavior than Promises themselves have, because it's the Pit of Success, not the Pit of Failure (see Chapter 3).\n\n**Note:** If a sequence is piped into (aka subsumed by) another sequence -- see \"Combining Sequences\"  for a complete description -- then the source sequence is opted out of error reporting, but now the target sequence's error reporting or lack thereof must be considered.\n\n### 
Parallel Steps\n\nNot all steps in your sequences will have just a single (async) task to perform; some will need to perform multiple steps \"in parallel\" (concurrently). A step in a sequence in which multiple substeps are processing concurrently is called a `gate(..)` -- there's an `all(..)` alias if you prefer -- and is directly symmetric to native `Promise.all([..])`.\n\nIf all the steps in the `gate(..)` complete successfully, all success messages will be passed to the next sequence step. If any of them generate errors, the whole sequence immediately goes into an error state.\n\nConsider:\n\n```js\nASQ( function(done){\n\tsetTimeout( done, 100 );\n} )\n.gate(\n\tfunction(done){\n\t\tsetTimeout( function(){\n\t\t\tdone( \"Hello\" );\n\t\t}, 100 );\n\t},\n\tfunction(done){\n\t\tsetTimeout( function(){\n\t\t\tdone( \"World\", \"!\" );\n\t\t}, 100 );\n\t}\n)\n.val( function(msg1,msg2){\n\tconsole.log( msg1 );\t// Hello\n\tconsole.log( msg2 );\t// [ \"World\", \"!\" ]\n} );\n```\n\nFor illustration, let's compare that example to native Promises:\n\n```js\nnew Promise( function(resolve,reject){\n\tsetTimeout( resolve, 100 );\n} )\n.then( function(){\n\treturn Promise.all( [\n\t\tnew Promise( function(resolve,reject){\n\t\t\tsetTimeout( function(){\n\t\t\t\tresolve( \"Hello\" );\n\t\t\t}, 100 );\n\t\t} ),\n\t\tnew Promise( function(resolve,reject){\n\t\t\tsetTimeout( function(){\n\t\t\t\t// note: we need a [ ] array here\n\t\t\t\tresolve( [ \"World\", \"!\" ] );\n\t\t\t}, 100 );\n\t\t} )\n\t] );\n} )\n.then( function(msgs){\n\tconsole.log( msgs[0] );\t// Hello\n\tconsole.log( msgs[1] );\t// [ \"World\", \"!\" ]\n} );\n```\n\nYuck. Promises require a lot more boilerplate overhead to express the same asynchronous flow control. That's a great illustration of why the *asynquence* API and abstraction make dealing with Promise steps a lot nicer. 
The improvement only goes higher the more complex your asynchrony is.\n\n#### Step Variations\n\nThere are several variations in the contrib plug-ins on *asynquence*'s `gate(..)` step type that can be quite helpful:\n\n* `any(..)` is like `gate(..)`, except just one segment has to eventually succeed to proceed on the main sequence.\n* `first(..)` is like `any(..)`, except as soon as any segment succeeds, the main sequence proceeds (ignoring subsequent results from other segments).\n* `race(..)` (symmetric with `Promise.race([..])`) is like `first(..)`, except the main sequence proceeds as soon as any segment completes (either success or failure).\n* `last(..)` is like `any(..)`, except only the latest segment to complete successfully sends its message(s) along to the main sequence.\n* `none(..)` is the inverse of `gate(..)`: the main sequence proceeds only if all the segments fail (with all segment error message(s) transposed as success message(s) and vice versa).\n\nLet's first define some helpers to make illustration cleaner:\n\n```js\nfunction success1(done) {\n\tsetTimeout( function(){\n\t\tdone( 1 );\n\t}, 100 );\n}\n\nfunction success2(done) {\n\tsetTimeout( function(){\n\t\tdone( 2 );\n\t}, 100 );\n}\n\nfunction failure3(done) {\n\tsetTimeout( function(){\n\t\tdone.fail( 3 );\n\t}, 100 );\n}\n\nfunction output(msg) {\n\tconsole.log( msg );\n}\n```\n\nNow, let's demonstrate these `gate(..)` step variations:\n\n```js\nASQ().race(\n\tfailure3,\n\tsuccess1\n)\n.or( output );\t\t// 3\n\n\nASQ().any(\n\tsuccess1,\n\tfailure3,\n\tsuccess2\n)\n.val( function(){\n\tvar args = [].slice.call( arguments );\n\tconsole.log(\n\t\targs\t\t// [ 1, undefined, 2 ]\n\t);\n} );\n\n\nASQ().first(\n\tfailure3,\n\tsuccess1,\n\tsuccess2\n)\n.val( output );\t\t// 1\n\n\nASQ().last(\n\tfailure3,\n\tsuccess1,\n\tsuccess2\n)\n.val( output );\t\t// 2\n\nASQ().none(\n\tfailure3\n)\n.val( output )\t\t// 3\n.none(\n\tfailure3,\n\tsuccess1\n)\n.or( output );\t\t// 1\n```\n\nAnother step 
variation is `map(..)`, which lets you asynchronously map elements of an array to different values, and the step doesn't proceed until all the mappings are complete. `map(..)` is very similar to `gate(..)`, except it gets the initial values from an array instead of from separately specified functions, and also because you define a single function callback to operate on each value:\n\n```js\nfunction double(x,done) {\n\tsetTimeout( function(){\n\t\tdone( x * 2 );\n\t}, 100 );\n}\n\nASQ().map( [1,2,3], double )\n.val( output );\t\t\t\t\t// [2,4,6]\n```\n\nAlso, `map(..)` can receive either of its parameters (the array or the callback) from messages passed from the previous step:\n\n```js\nfunction plusOne(x,done) {\n\tsetTimeout( function(){\n\t\tdone( x + 1 );\n\t}, 100 );\n}\n\nASQ( [1,2,3] )\n.map( double )\t\t\t// message `[1,2,3]` comes in\n.map( plusOne )\t\t\t// message `[2,4,6]` comes in\n.val( output );\t\t\t// [3,5,7]\n```\n\nAnother variation is `waterfall(..)`, which is kind of like a mixture of `gate(..)`'s message collection behavior and `then(..)`'s sequential processing.\n\nStep 1 is first executed, then the success message from step 1 is given to step 2, and then both success messages go to step 3, and then all three success messages go to step 4, and so on, such that the messages sort of collect and cascade down the "waterfall".\n\nConsider:\n\n```js\nfunction double(done) {\n\tvar args = [].slice.call( arguments, 1 );\n\tconsole.log( args );\n\n\tsetTimeout( function(){\n\t\tdone( args[args.length - 1] * 2 );\n\t}, 100 );\n}\n\nASQ( 3 )\n.waterfall(\n\tdouble,\t\t\t\t\t// [ 3 ]\n\tdouble,\t\t\t\t\t// [ 6 ]\n\tdouble,\t\t\t\t\t// [ 6, 12 ]\n\tdouble\t\t\t\t\t// [ 6, 12, 24 ]\n)\n.val( function(){\n\tvar args = [].slice.call( arguments );\n\tconsole.log( args );\t// [ 6, 12, 24, 48 ]\n} );\n```\n\nIf at any point in the "waterfall" an error occurs, the whole sequence immediately goes into an error state.\n\n#### Error Tolerance\n\nSometimes 
you want to manage errors at the step level and not let them necessarily send the whole sequence into the error state. *asynquence* offers two step variations for that purpose.\n\n`try(..)` attempts a step, and if it succeeds, the sequence proceeds as normal, but if the step fails, the failure is turned into a success message formatted as `{ catch: .. }` with the error message(s) filled in:\n\n```js\nASQ()\n.try( success1 )\n.val( output )\t\t\t// 1\n.try( failure3 )\n.val( output )\t\t\t// { catch: 3 }\n.or( function(err){\n\t// never gets here\n} );\n```\n\nYou could instead set up a retry loop using `until(..)`, which tries the step and if it fails, retries the step again on the next event loop tick, and so on.\n\nThis retry loop can continue indefinitely, but if you want to break out of the loop, you can call `break()` on the completion trigger, which sends the main sequence into an error state:\n\n```js\nvar count = 0;\n\nASQ( 3 )\n.until( double )\n.val( output )\t\t\t\t\t// 6\n.until( function(done){\n\tcount++;\n\n\tsetTimeout( function(){\n\t\tif (count < 5) {\n\t\t\tdone.fail();\n\t\t}\n\t\telse {\n\t\t\t// break out of the `until(..)` retry loop\n\t\t\tdone.break( "Oops" );\n\t\t}\n\t}, 100 );\n} )\n.or( output );\t\t\t\t\t// Oops\n```\n\n#### Promise-Style Steps\n\nIf you would prefer to have, inline in your sequence, Promise-style semantics like Promises' `then(..)` and `catch(..)` (see Chapter 3), you can use the `pThen` and `pCatch` plug-ins:\n\n```js\nASQ( 21 )\n.pThen( function(msg){\n\treturn msg * 2;\n} )\n.pThen( output )\t\t\t\t// 42\n.pThen( function(){\n\t// throw an exception\n\tdoesnt.Exist();\n} )\n.pCatch( function(err){\n\t// caught the exception (rejection)\n\tconsole.log( err );\t\t\t// ReferenceError\n} )\n.val( function(){\n\t// main sequence is back in a\n\t// success state because previous\n\t// exception was caught by\n\t// `pCatch(..)`\n} );\n```\n\n`pThen(..)` and `pCatch(..)` are designed to run in the sequence, but 
behave as if it was a normal Promise chain. As such, you can either resolve genuine Promises or *asynquence* sequences from the "fulfillment" handler passed to `pThen(..)` (see Chapter 3).\n\n### Forking Sequences\n\nOne feature that can be quite useful about Promises is that you can attach multiple `then(..)` handler registrations to the same promise, effectively "forking" the flow-control at that promise:\n\n```js\nvar p = Promise.resolve( 21 );\n\n// fork 1 (from `p`)\np.then( function(msg){\n\treturn msg * 2;\n} )\n.then( function(msg){\n\tconsole.log( msg );\t\t// 42\n} );\n\n// fork 2 (from `p`)\np.then( function(msg){\n\tconsole.log( msg );\t\t// 21\n} );\n```\n\nThe same "forking" is easy in *asynquence* with `fork()`:\n\n```js\nvar sq = ASQ(..).then(..).then(..);\n\nvar sq2 = sq.fork();\n\n// fork 1\nsq.then(..)..;\n\n// fork 2\nsq2.then(..)..;\n```\n\n### Combining Sequences\n\nThe reverse of `fork()`ing is combining two sequences, by subsuming one into another, using the `seq(..)` instance method:\n\n```js\nvar sq = ASQ( function(done){\n\tsetTimeout( function(){\n\t\tdone( "Hello World" );\n\t}, 200 );\n} );\n\nASQ( function(done){\n\tsetTimeout( done, 100 );\n} )\n// subsume `sq` sequence into this sequence\n.seq( sq )\n.val( function(msg){\n\tconsole.log( msg );\t\t// Hello World\n} );\n```\n\n`seq(..)` can either accept a sequence itself, as shown here, or a function. 
If a function, it's expected that the function when called will return a sequence, so the preceding code could have been done with:\n\n```js\n// ..\n.seq( function(){\n\treturn sq;\n} )\n// ..\n```\n\nAlso, that step could instead have been accomplished with a `pipe(..)`:\n\n```js\n// ..\n.then( function(done){\n\t// pipe `sq` into the `done` continuation callback\n\tsq.pipe( done );\n} )\n// ..\n```\n\nWhen a sequence is subsumed, both its success message stream and its error stream are piped in.\n\n**Note:** As mentioned in an earlier note, piping (manually with `pipe(..)` or automatically with `seq(..)`) opts the source sequence out of error-reporting, but doesn't affect the error reporting status of the target sequence.\n\n## Value and Error Sequences\n\nIf any step of a sequence is just a normal value, that value is just mapped to that step's completion message:\n\n```js\nvar sq = ASQ( 42 );\n\nsq.val( function(msg){\n\tconsole.log( msg );\t\t// 42\n} );\n```\n\nIf you want to make a sequence that's automatically errored:\n\n```js\nvar sq = ASQ.failed( \"Oops\" );\n\nASQ()\n.seq( sq )\n.val( function(msg){\n\t// won't get here\n} )\n.or( function(err){\n\tconsole.log( err );\t\t// Oops\n} );\n```\n\nYou also may want to automatically create a delayed-value or a delayed-error sequence. 
Using the `after` and `failAfter` contrib plug-ins, this is easy:\n\n```js\nvar sq1 = ASQ.after( 100, \"Hello\", \"World\" );\nvar sq2 = ASQ.failAfter( 100, \"Oops\" );\n\nsq1.val( function(msg1,msg2){\n\tconsole.log( msg1, msg2 );\t\t// Hello World\n} );\n\nsq2.or( function(err){\n\tconsole.log( err );\t\t\t\t// Oops\n} );\n```\n\nYou can also insert a delay in the middle of a sequence using `after(..)`:\n\n```js\nASQ( 42 )\n// insert a delay into the sequence\n.after( 100 )\n.val( function(msg){\n\tconsole.log( msg );\t\t// 42\n} );\n```\n\n## Promises and Callbacks\n\nI think *asynquence* sequences provide a lot of value on top of native Promises, and for the most part you'll find it more pleasant and more powerful to work at that level of abstraction. However, integrating *asynquence* with other non-*asynquence* code will be a reality.\n\nYou can easily subsume a promise (e.g., thenable -- see Chapter 3) into a sequence using the `promise(..)` instance method:\n\n```js\nvar p = Promise.resolve( 42 );\n\nASQ()\n.promise( p )\t\t\t// could also: `function(){ return p; }`\n.val( function(msg){\n\tconsole.log( msg );\t// 42\n} );\n```\n\nAnd to go the opposite direction and fork/vend a promise from a sequence at a certain step, use the `toPromise` contrib plug-in:\n\n```js\nvar sq = ASQ.after( 100, \"Hello World\" );\n\nsq.toPromise()\n// this is a standard promise chain now\n.then( function(msg){\n\treturn msg.toUpperCase();\n} )\n.then( function(msg){\n\tconsole.log( msg );\t\t// HELLO WORLD\n} );\n```\n\nTo adapt *asynquence* to systems using callbacks, there are several helper facilities. 
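As a refresher, "error-first style" means the callback reserves its first parameter for an error and passes any results afterward (plain JS sketch; `readConfig` is a hypothetical utility, not an asynquence API):

```js
// error-first convention: cb( err, ...results )
function readConfig(name,cb) {
	setTimeout( function(){
		if (!name) {
			// failure: the error occupies the first slot
			cb( new Error( "missing name" ) );
		}
		else {
			// success: first slot is null, results follow
			cb( null, { name: name } );
		}
	}, 10 );
}

readConfig( "app", function(err,config){
	if (err) console.error( err );
	else console.log( config.name );	// app
} );
```

This is the shape of callback that the helpers below generate for you automatically.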
To automatically generate an "error-first style" callback from your sequence to wire into a callback-oriented utility, use `errfcb`:\n\n```js\nvar sq = ASQ( function(done){\n\t// note: expecting "error-first style" callback\n\tsomeAsyncFuncWithCB( 1, 2, done.errfcb );\n} )\n.val( function(msg){\n\t// ..\n} )\n.or( function(err){\n\t// ..\n} );\n\n// note: expecting "error-first style" callback\nanotherAsyncFuncWithCB( 1, 2, sq.errfcb() );\n```\n\nYou also may want to create a sequence-wrapped version of a utility -- compare to "promisory" in Chapter 3 and "thunkory" in Chapter 4 -- and *asynquence* provides `ASQ.wrap(..)` for that purpose:\n\n```js\nvar coolUtility = ASQ.wrap( someAsyncFuncWithCB );\n\ncoolUtility( 1, 2 )\n.val( function(msg){\n\t// ..\n} )\n.or( function(err){\n\t// ..\n} );\n```\n\n**Note:** For the sake of clarity (and for fun!), let's coin yet another term, for a sequence-producing function that comes from `ASQ.wrap(..)`, like `coolUtility` here. I propose "sequory" ("sequence" + "factory").\n\n## Iterable Sequences\n\nThe normal paradigm for a sequence is that each step is responsible for completing itself, which is what advances the sequence. 
Promises work the same way.\n\nThe unfortunate part is that sometimes you need external control over a Promise/step, which leads to awkward \"capability extraction\".\n\nConsider this Promises example:\n\n```js\nvar domready = new Promise( function(resolve,reject){\n\t// don't want to put this here, because\n\t// it belongs logically in another part\n\t// of the code\n\tdocument.addEventListener( \"DOMContentLoaded\", resolve );\n} );\n\n// ..\n\ndomready.then( function(){\n\t// DOM is ready!\n} );\n```\n\nThe \"capability extraction\" anti-pattern with Promises looks like this:\n\n```js\nvar ready;\n\nvar domready = new Promise( function(resolve,reject){\n\t// extract the `resolve()` capability\n\tready = resolve;\n} );\n\n// ..\n\ndomready.then( function(){\n\t// DOM is ready!\n} );\n\n// ..\n\ndocument.addEventListener( \"DOMContentLoaded\", ready );\n```\n\n**Note:** This anti-pattern is an awkward code smell, in my opinion, but some developers like it, for reasons I can't grasp.\n\n*asynquence* offers an inverted sequence type I call \"iterable sequences\", which externalizes the control capability (it's quite useful in use cases like the `domready`):\n\n```js\n// note: `domready` here is an *iterator* that\n// controls the sequence\nvar domready = ASQ.iterable();\n\n// ..\n\ndomready.val( function(){\n\t// DOM is ready\n} );\n\n// ..\n\ndocument.addEventListener( \"DOMContentLoaded\", domready.next );\n```\n\nThere's more to iterable sequences than what we see in this scenario. We'll come back to them in Appendix B.\n\n## Running Generators\n\nIn Chapter 4, we derived a utility called `run(..)` which can run generators to completion, listening for `yield`ed Promises and using them to async resume the generator. 
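A stripped-down version of that Chapter 4 idea can be sketched in a few lines (plain Promises, ignoring error handling for brevity; this is not asynquence's actual implementation):

```js
// resume the generator with the resolution value of each
// yielded promise, until the generator completes
function run(gen) {
	var it = gen();

	return (function handleNext(value){
		var next = it.next( value );

		// generator finished: settle with its return value
		if (next.done) return Promise.resolve( next.value );

		// otherwise wait on the yielded value, then resume
		return Promise.resolve( next.value ).then( handleNext );
	})();
}

run( function*(){
	var x = yield Promise.resolve( 10 );
	x = yield Promise.resolve( x * 2 );
	return x + 1;
} )
.then( function(result){
	console.log( result );	// 21
} );
```
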
*asynquence* has just such a utility built in, called `runner(..)`.\n\nLet's first set up some helpers for illustration:\n\n```js\nfunction doublePr(x) {\n\treturn new Promise( function(resolve,reject){\n\t\tsetTimeout( function(){\n\t\t\tresolve( x * 2 );\n\t\t}, 100 );\n\t} );\n}\n\nfunction doubleSeq(x) {\n\treturn ASQ( function(done){\n\t\tsetTimeout( function(){\n\t\t\tdone( x * 2 );\n\t\t}, 100 );\n\t} );\n}\n```\n\nNow, we can use `runner(..)` as a step in the middle of a sequence:\n\n```js\nASQ( 10, 11 )\n.runner( function*(token){\n\tvar x = token.messages[0] + token.messages[1];\n\n\t// yield a real promise\n\tx = yield doublePr( x );\n\n\t// yield a sequence\n\tx = yield doubleSeq( x );\n\n\treturn x;\n} )\n.val( function(msg){\n\tconsole.log( msg );\t\t\t// 84\n} );\n```\n\n### Wrapped Generators\n\nYou can also create a self-packaged generator -- that is, a normal function that runs your specified generator and returns a sequence for its completion -- by `ASQ.wrap(..)`ing it:\n\n```js\nvar foo = ASQ.wrap( function*(token){\n\tvar x = token.messages[0] + token.messages[1];\n\n\t// yield a real promise\n\tx = yield doublePr( x );\n\n\t// yield a sequence\n\tx = yield doubleSeq( x );\n\n\treturn x;\n}, { gen: true } );\n\n// ..\n\nfoo( 8, 9 )\n.val( function(msg){\n\tconsole.log( msg );\t\t\t// 68\n} );\n```\n\nThere's a lot more awesome that `runner(..)` is capable of, but we'll come back to that in Appendix B.\n\n## Review\n\n*asynquence* is a simple abstraction -- a sequence is a series of (async) steps -- on top of Promises, aimed at making working with various asynchronous patterns much easier, without any compromise in capability.\n\nThere are other goodies in the *asynquence* core API and its contrib plug-ins beyond what we saw in this appendix, but we'll leave that as an exercise for the reader to go check the rest of the capabilities out.\n\nYou've now seen the essence and spirit of *asynquence*. 
The key takeaway is that a sequence is comprised of steps, and those steps can be any of dozens of different variations on Promises, or they can be a generator-run, or... The choice is up to you; you have all the freedom to weave together whatever async flow control logic is appropriate for your tasks. No more switching libraries to cover different async patterns.\n\nIf these *asynquence* snippets have made sense to you, you're now pretty well up to speed on the library; it doesn't take that much to learn, actually!\n\nIf you're still a little fuzzy on how it works (or why!), you'll want to spend a little more time examining the previous examples and playing around with *asynquence* yourself, before going on to the next appendix. Appendix B will push *asynquence* into several more advanced and powerful async patterns.\n"
  },
  {
    "path": "async & performance/apB.md",
    "content": "# You Don't Know JS: Async & Performance\n# Appendix B: Advanced Async Patterns\n\nAppendix A introduced the *asynquence* library for sequence-oriented async flow control, primarily based on Promises and generators.\n\nNow we'll explore other advanced asynchronous patterns built on top of that existing understanding and functionality, and see how *asynquence* makes those sophisticated async techniques easy to mix and match in our programs without needing lots of separate libraries.\n\n## Iterable Sequences\n\nWe introduced *asynquence*'s iterable sequences in the previous appendix, but we want to revisit them in more detail.\n\nTo refresh, recall:\n\n```js\nvar domready = ASQ.iterable();\n\n// ..\n\ndomready.val( function(){\n\t// DOM is ready\n} );\n\n// ..\n\ndocument.addEventListener( \"DOMContentLoaded\", domready.next );\n```\n\nNow, let's define a sequence of multiple steps as an iterable sequence:\n\n```js\nvar steps = ASQ.iterable();\n\nsteps\n.then( function STEP1(x){\n\treturn x * 2;\n} )\n.then( function STEP2(x){\n\treturn x + 3;\n} )\n.then( function STEP3(x){\n\treturn x * 4;\n} );\n\nsteps.next( 8 ).value;\t// 16\nsteps.next( 16 ).value;\t// 19\nsteps.next( 19 ).value;\t// 76\nsteps.next().done;\t\t// true\n```\n\nAs you can see, an iterable sequence is a standard-compliant *iterator* (see Chapter 4). 
So, it can be iterated with an ES6 `for..of` loop, just like a generator (or any other *iterable*) can:\n\n```js\nvar steps = ASQ.iterable();\n\nsteps\n.then( function STEP1(){ return 2; } )\n.then( function STEP2(){ return 4; } )\n.then( function STEP3(){ return 6; } )\n.then( function STEP4(){ return 8; } )\n.then( function STEP5(){ return 10; } );\n\nfor (var v of steps) {\n\tconsole.log( v );\n}\n// 2 4 6 8 10\n```\n\nBeyond the event triggering example shown in the previous appendix, iterable sequences are interesting because in essence they can be seen as a stand-in for generators or Promise chains, but with even more flexibility.\n\nConsider a multiple Ajax request example -- we've seen the same scenario in Chapters 3 and 4, both as a Promise chain and as a generator, respectively -- expressed as an iterable sequence:\n\n```js\n// sequence-aware ajax\nvar request = ASQ.wrap( ajax );\n\nASQ( \"http://some.url.1\" )\n.runner(\n\tASQ.iterable()\n\n\t.then( function STEP1(token){\n\t\tvar url = token.messages[0];\n\t\treturn request( url );\n\t} )\n\n\t.then( function STEP2(resp){\n\t\treturn ASQ().gate(\n\t\t\trequest( \"http://some.url.2/?v=\" + resp ),\n\t\t\trequest( \"http://some.url.3/?v=\" + resp )\n\t\t);\n\t} )\n\n\t.then( function STEP3(r1,r2){ return r1 + r2; } )\n)\n.val( function(msg){\n\tconsole.log( msg );\n} );\n```\n\nThe iterable sequence expresses a sequential series of (sync or async) steps that looks awfully similar to a Promise chain -- in other words, it's much cleaner looking than just plain nested callbacks, but not quite as nice as the `yield`-based sequential syntax of generators.\n\nBut we pass the iterable sequence into `ASQ#runner(..)`, which runs it to completion the same as if it was a generator. 
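The equivalence is easy to sketch in plain JS: any object exposing a compliant `next(..)` can be driven by the same runner logic as a real generator's *iterator* (hand-rolled, hypothetical helpers shown here, not asynquence's internals):

```js
// drive any { next(..) } iterator to completion, resolving
// promise-like yielded values before resuming
function drive(it,input) {
	return (function step(v){
		var res = it.next( v );
		if (res.done) return Promise.resolve( res.value );
		return Promise.resolve( res.value ).then( step );
	})( input );
}

// a hand-rolled iterator over step functions (no generator syntax)
function stepsOf(fns) {
	var i = 0;
	return {
		next: function(v){
			return (i < fns.length) ?
				{ value: fns[i++]( v ), done: false } :
				{ value: v, done: true };
		}
	};
}

// drive the hand-rolled iterator...
drive( stepsOf( [
	function(x){ return Promise.resolve( x * 2 ); },
	function(x){ return x + 3; }
] ), 10 )
.then( function(result){
	console.log( result );	// 23
} );

// ...and a real generator, with the exact same driver
drive( (function*(){
	var x = yield Promise.resolve( 5 );
	return x + 1;
})() )
.then( function(result){
	console.log( result );	// 6
} );
```

The driver never knows (or cares) whether it's stepping a generator or a manually constructed iterator of steps.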
The fact that an iterable sequence behaves essentially the same as a generator is notable for a couple of reasons.\n\nFirst, iterable sequences are kind of a pre-ES6 equivalent to a certain subset of ES6 generators, which means you can either author them directly (to run anywhere), or you can author ES6 generators and transpile/convert them to iterable sequences (or Promise chains for that matter!).\n\nThinking of an async-run-to-completion generator as just syntactic sugar for a Promise chain is an important recognition of their isomorphic relationship.\n\nBefore we move on, we should note that the previous snippet could have been expressed in *asynquence* as:\n\n```js\nASQ( \"http://some.url.1\" )\n.seq( /*STEP 1*/ request )\n.seq( function STEP2(resp){\n\treturn ASQ().gate(\n\t\trequest( \"http://some.url.2/?v=\" + resp ),\n\t\trequest( \"http://some.url.3/?v=\" + resp )\n\t);\n} )\n.val( function STEP3(r1,r2){ return r1 + r2; } )\n.val( function(msg){\n\tconsole.log( msg );\n} );\n```\n\nMoreover, step 2 could have even been expressed as:\n\n```js\n.gate(\n\tfunction STEP2a(done,resp) {\n\t\trequest( \"http://some.url.2/?v=\" + resp )\n\t\t.pipe( done );\n\t},\n\tfunction STEP2b(done,resp) {\n\t\trequest( \"http://some.url.3/?v=\" + resp )\n\t\t.pipe( done );\n\t}\n)\n```\n\nSo, why would we go to the trouble of expressing our flow control as an iterable sequence in an `ASQ#runner(..)` step, when it seems like a simpler/flatter *asynquence* chain does the job well?\n\nBecause the iterable sequence form has an important trick up its sleeve that gives us more capability. 
Read on.\n\n### Extending Iterable Sequences\n\nGenerators, normal *asynquence* sequences, and Promise chains, are all **eagerly evaluated** -- whatever flow control is expressed initially *is* the fixed flow that will be followed.\n\nHowever, iterable sequences are **lazily evaluated**, which means that during execution of the iterable sequence, you can extend the sequence with more steps if desired.\n\n**Note:** You can only append to the end of an iterable sequence, not inject into the middle of the sequence.\n\nLet's first look at a simpler (synchronous) example of that capability to get familiar with it:\n\n```js\nfunction double(x) {\n\tx *= 2;\n\n\t// should we keep extending?\n\tif (x < 500) {\n\t\tisq.then( double );\n\t}\n\n\treturn x;\n}\n\n// setup single-step iterable sequence\nvar isq = ASQ.iterable().then( double );\n\nfor (var v = 10, ret;\n\t(ret = isq.next( v )) && !ret.done;\n) {\n\tv = ret.value;\n\tconsole.log( v );\n}\n```\n\nThe iterable sequence starts out with only one defined step (`isq.then(double)`), but the sequence keeps extending itself under certain conditions (`x < 500`). Both *asynquence* sequences and Promise chains technically *can* do something similar, but we'll see in a little bit why their capability is insufficient.\n\nThough this example is rather trivial and could otherwise be expressed with a `while` loop in a generator, we'll consider more sophisticated cases.\n\nFor instance, you could examine the response from an Ajax request and if it indicates that more data is needed, you conditionally insert more steps into the iterable sequence to make the additional request(s). 
Or you could conditionally add a value-formatting step to the end of your Ajax handling.\n\nConsider:\n\n```js\nvar steps = ASQ.iterable()\n\n.then( function STEP1(token){\n\tvar url = token.messages[0].url;\n\n\t// was an additional formatting step provided?\n\tif (token.messages[0].format) {\n\t\tsteps.then( token.messages[0].format );\n\t}\n\n\treturn request( url );\n} )\n\n.then( function STEP2(resp){\n\t// add another Ajax request to the sequence?\n\tif (/foobar/.test( resp )) {\n\t\tsteps.then( function STEP5(text){\n\t\t\treturn request(\n\t\t\t\t\"http://some.url.4/?v=\" + text\n\t\t\t);\n\t\t} );\n\t}\n\n\treturn ASQ().gate(\n\t\trequest( \"http://some.url.2/?v=\" + resp ),\n\t\trequest( \"http://some.url.3/?v=\" + resp )\n\t);\n} )\n\n.then( function STEP3(r1,r2){ return r1 + r2; } );\n```\n\nYou can see in two different places where we conditionally extend `steps` with `steps.then(..)`. And to run this `steps` iterable sequence, we just wire it into our main program flow with an *asynquence* sequence (called `main` here) using `ASQ#runner(..)`:\n\n```js\nvar main = ASQ( {\n\turl: \"http://some.url.1\",\n\tformat: function STEP4(text){\n\t\treturn text.toUpperCase();\n\t}\n} )\n.runner( steps )\n.val( function(msg){\n\tconsole.log( msg );\n} );\n```\n\nCan the flexibility (conditional behavior) of the `steps` iterable sequence be expressed with a generator? 
Kind of, but we have to rearrange the logic in a slightly awkward way:\n\n```js\nfunction *steps(token) {\n\t// **STEP 1**\n\tvar resp = yield request( token.messages[0].url );\n\n\t// **STEP 2**\n\tvar rvals = yield ASQ().gate(\n\t\trequest( \"http://some.url.2/?v=\" + resp ),\n\t\trequest( \"http://some.url.3/?v=\" + resp )\n\t);\n\n\t// **STEP 3**\n\tvar text = rvals[0] + rvals[1];\n\n\t// **STEP 4**\n\t// was an additional formatting step provided?\n\tif (token.messages[0].format) {\n\t\ttext = yield token.messages[0].format( text );\n\t}\n\n\t// **STEP 5**\n\t// need another Ajax request added to the sequence?\n\tif (/foobar/.test( resp )) {\n\t\ttext = yield request(\n\t\t\t\"http://some.url.4/?v=\" + text\n\t\t);\n\t}\n\n\treturn text;\n}\n\n// note: `*steps()` can be run by the same `ASQ` sequence\n// as `steps` was previously\n```\n\nSetting aside the already identified benefits of the sequential, synchronous-looking syntax of generators (see Chapter 4), the `steps` logic had to be reordered in the `*steps()` generator form, to fake the dynamicism of the extendable iterable sequence `steps`.\n\nWhat about expressing the functionality with Promises or sequences, though? You *can* do something like this:\n\n```js\nvar steps = something( .. )\n.then( .. )\n.then( function(..){\n\t// ..\n\n\t// extending the chain, right?\n\tsteps = steps.then( .. );\n\n\t// ..\n})\n.then( .. );\n```\n\nThe problem is subtle but important to grasp. So, consider trying to wire up our `steps` Promise chain into our main program flow -- this time expressed with Promises instead of *asynquence*:\n\n```js\nvar main = Promise.resolve( {\n\turl: \"http://some.url.1\",\n\tformat: function STEP4(text){\n\t\treturn text.toUpperCase();\n\t}\n} )\n.then( function(..){\n\treturn steps;\t\t\t// hint!\n} )\n.then( function(msg){\n\tconsole.log( msg );\n} );\n```\n\nCan you spot the problem now? Look closely!\n\nThere's a race condition for sequence steps ordering. 
When you `return steps`, at that moment `steps` *might* be the originally defined promise chain, or it might now point to the extended promise chain via the `steps = steps.then(..)` call, depending on what order things happen.\n\nHere are the two possible outcomes:\n\n* If `steps` is still the original promise chain, once it's later \"extended\" by `steps = steps.then(..)`, that extended promise on the end of the chain is **not** considered by the `main` flow, as it's already tapped the `steps` chain. This is the unfortunately limiting **eager evaluation**.\n* If `steps` is already the extended promise chain, it works as we expect in that the extended promise is what `main` taps.\n\nOther than the obvious fact that a race condition is intolerable, the first case is the concern; it illustrates **eager evaluation** of the promise chain. By contrast, we easily extended the iterable sequence without such issues, because iterable sequences are **lazily evaluated**.\n\nThe more dynamic you need your flow control, the more iterable sequences will shine.\n\n**Tip:** Check out more information and examples of iterable sequences on the *asynquence* site (https://github.com/getify/asynquence/blob/master/README.md#iterable-sequences).\n\n## Event Reactive\n\nIt should be obvious from (at least!) Chapter 3 that Promises are a very powerful tool in your async toolbox. But one thing that's clearly lacking is their capability to handle streams of events, as a Promise can only be resolved once. And frankly, this exact same weakness is true of plain *asynquence* sequences, as well.\n\nConsider a scenario where you want to fire off a series of steps every time a certain event is fired. A single Promise or sequence cannot represent all occurrences of that event. 
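\n\nThat single-resolution behavior is easy to verify for yourself: once a Promise has settled, any later calls to its `resolve(..)` function are simply ignored:\n\n```js\nvar resolveP;\n\nvar p = new Promise( function(resolve){\n\tresolveP = resolve;\n} );\n\np.then( function(v){\n\t// runs only once, with the first value\n\tconsole.log( v );\t// \"click #1\"\n} );\n\nresolveP( \"click #1\" );\nresolveP( \"click #2\" );\t// silently ignored\n```\n\n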
So, you have to create a whole new Promise chain (or sequence) for *each* event occurrence, such as:\n\n```js\nlistener.on( \"foobar\", function(data){\n\n\t// create a new event handling promise chain\n\tnew Promise( function(resolve,reject){\n\t\t// ..\n\t} )\n\t.then( .. )\n\t.then( .. );\n\n} );\n```\n\nThe base functionality we need is present in this approach, but it's far from a desirable way to express our intended logic. There are two separate capabilities conflated in this paradigm: the event listening, and responding to the event; separation of concerns would implore us to separate out these capabilities.\n\nThe carefully observant reader will see this problem as somewhat symmetrical to the problems we detailed with callbacks in Chapter 2; it's kind of an inversion of control problem.\n\nImagine uninverting this paradigm, like so:\n\n```js\nvar observable = listener.on( \"foobar\" );\n\n// later\nobservable\n.then( .. )\n.then( .. );\n\n// elsewhere\nobservable\n.then( .. )\n.then( .. );\n```\n\nThe `observable` value is not exactly a Promise, but you can *observe* it much like you can observe a Promise, so it's closely related. In fact, it can be observed many times, and it will send out notifications every time its event (`\"foobar\"`) occurs.\n\n**Tip:** This pattern I've just illustrated is a **massive simplification** of the concepts and motivations behind reactive programming (aka RP), which has been implemented/expounded upon by several great projects and languages. A variation on RP is functional reactive programming (FRP), which refers to applying functional programming techniques (immutability, referential integrity, etc.) to streams of data. \"Reactive\" refers to spreading this functionality out over time in response to events. 
The interested reader should consider studying \"Reactive Observables\" in the fantastic \"Reactive Extensions\" library (\"RxJS\" for JavaScript) by Microsoft (http://rxjs.codeplex.com/); it's much more sophisticated and powerful than I've just shown. Also, Andre Staltz has an excellent write-up (https://gist.github.com/staltz/868e7e9bc2a7b8c1f754) that pragmatically lays out RP in concrete examples.\n\n### ES7 Observables\n\nAt the time of this writing, there's an early ES7 proposal for a new data type called \"Observable\" (https://github.com/jhusain/asyncgenerator#introducing-observable), which in spirit is similar to what we've laid out here, but is definitely more sophisticated.\n\nThe notion of this kind of Observable is that the way you \"subscribe\" to the events from a stream is to pass in a generator -- actually the *iterator* is the interested party -- whose `next(..)` method will be called for each event.\n\nYou could imagine it sort of like this:\n\n```js\n// `someEventStream` is a stream of events, like from\n// mouse clicks, and the like.\n\nvar observer = new Observer( someEventStream, function*(){\n\tvar evt;\n\n\twhile (evt = yield) {\n\t\tconsole.log( evt );\n\t}\n} );\n```\n\nThe generator you pass in will `yield` pause the `while` loop waiting for the next event. The *iterator* attached to the generator instance will have its `next(..)` called each time `someEventStream` has a new event published, and so that event data will resume your generator/*iterator* with the `evt` data.\n\nIn the subscription to events functionality here, it's the *iterator* part that matters, not the generator. So conceptually you could pass in practically any iterable, including `ASQ.iterable()` iterable sequences.\n\nInterestingly, there are also proposed adapters to make it easy to construct Observables from certain types of streams, such as `fromEvent(..)` for DOM events. 
If you look at a suggested implementation of `fromEvent(..)` in the earlier linked ES7 proposal, it looks an awful lot like the `ASQ.react(..)` we'll see in the next section.\n\nOf course, these are all early proposals, so what shakes out may very well look/behave differently than shown here. But it's exciting to see the early alignments of concepts across different libraries and language proposals!\n\n### Reactive Sequences\n\nWith that crazy brief summary of Observables (and F/RP) as our inspiration and motivation, I will now illustrate an adaptation of a small subset of \"Reactive Observables,\" which I call \"Reactive Sequences.\"\n\nFirst, let's start with how to create an Observable, using an *asynquence* plug-in utility called `react(..)`:\n\n```js\nvar observable = ASQ.react( function setup(next){\n\tlistener.on( \"foobar\", next );\n} );\n```\n\nNow, let's see how to define a sequence that \"reacts\" -- in F/RP, this is typically called \"subscribing\" -- to that `observable`:\n\n```js\nobservable\n.seq( .. )\n.then( .. )\n.val( .. );\n```\n\nSo, you just define the sequence by chaining off the Observable. That's easy, huh?\n\nIn F/RP, the stream of events typically channels through a set of functional transforms, like `scan(..)`, `map(..)`, `reduce(..)`, and so on. With reactive sequences, each event channels through a new instance of the sequence. 
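\n\nThat per-event instantiation is the crucial difference from a one-shot chain. A toy model of the idea -- illustrative plain JS, nothing like *asynquence*'s actual `ASQ.react(..)` internals -- could be:\n\n```js\nfunction react(setup) {\n\tvar steps = [];\n\n\t// each call to `next(..)` runs a fresh pass\n\t// through the whole step list\n\tsetup( function next(v){\n\t\tsteps.forEach( function(fn){\n\t\t\tv = fn( v );\n\t\t} );\n\t} );\n\n\treturn {\n\t\tthen: function(fn){\n\t\t\tsteps.push( fn );\n\t\t\treturn this;\n\t\t}\n\t};\n}\n\nvar fireEvent;\n\nreact( function setup(next){\n\tfireEvent = next;\t// stand-in for a real event binding\n} )\n.then( function(v){ return v * 10; } )\n.then( function(v){ console.log( v ); } );\n\nfireEvent( 4 );\t\t// 40\nfireEvent( 5 );\t\t// 50\n```\n\nEvery `fireEvent(..)` call flows through all of the registered steps, start to finish, just as each real event spawns a new sequence instance.\n\n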
Let's look at a more concrete example:\n\n```js\nASQ.react( function setup(next){\n\tdocument.getElementById( \"mybtn\" )\n\t.addEventListener( \"click\", next, false );\n} )\n.seq( function(evt){\n\tvar btnID = evt.target.id;\n\treturn request(\n\t\t\"http://some.url.1/?id=\" + btnID\n\t);\n} )\n.val( function(text){\n\tconsole.log( text );\n} );\n```\n\nThe \"reactive\" portion of the reactive sequence comes from assigning one or more event handlers to invoke the event trigger (calling `next(..)`).\n\nThe \"sequence\" portion of the reactive sequence is exactly like the sequences we've already explored: each step can be whatever asynchronous technique makes sense, from continuation callback to Promise to generator.\n\nOnce you set up a reactive sequence, it will continue to initiate instances of the sequence as long as the events keep firing. If you want to stop a reactive sequence, you can call `stop()`.\n\nIf a reactive sequence is `stop()`'d, you likely want the event handler(s) to be unregistered as well; you can register a teardown handler for this purpose:\n\n```js\nvar sq = ASQ.react( function setup(next,registerTeardown){\n\tvar btn = document.getElementById( \"mybtn\" );\n\n\tbtn.addEventListener( \"click\", next, false );\n\n\t// will be called once `sq.stop()` is called\n\tregisterTeardown( function(){\n\t\tbtn.removeEventListener( \"click\", next, false );\n\t} );\n} )\n.seq( .. )\n.then( .. )\n.val( .. 
);\n\n// later\nsq.stop();\n```\n\n**Note:** The `this` binding reference inside the `setup(..)` handler is the same `sq` reactive sequence, so you can use the `this` reference to add to the reactive sequence definition, call methods like `stop()`, and so on.\n\nHere's an example from the Node.js world, using reactive sequences to handle incoming HTTP requests:\n\n```js\nvar server = http.createServer();\nserver.listen(8000);\n\n// reactive observer\nvar request = ASQ.react( function setup(next,registerTeardown){\n\tserver.addListener( \"request\", next );\n\tserver.addListener( \"close\", this.stop );\n\n\tregisterTeardown( function(){\n\t\tserver.removeListener( \"request\", next );\n\t\tserver.removeListener( \"close\", request.stop );\n\t} );\n});\n\n// respond to requests\nrequest\n.seq( pullFromDatabase )\n.val( function(data,res){\n\tres.end( data );\n} );\n\n// node teardown\nprocess.on( \"SIGINT\", request.stop );\n```\n\nThe `next(..)` trigger can also adapt to node streams easily, using `onStream(..)` and `unStream(..)`:\n\n```js\nASQ.react( function setup(next){\n\tvar fstream = fs.createReadStream( \"/some/file\" );\n\n\t// pipe the stream's \"data\" event to `next(..)`\n\tnext.onStream( fstream );\n\n\t// listen for the end of the stream\n\tfstream.on( \"end\", function(){\n\t\tnext.unStream( fstream );\n\t} );\n} )\n.seq( .. )\n.then( .. )\n.val( .. );\n```\n\nYou can also use sequence combinations to compose multiple reactive sequence streams:\n\n```js\nvar sq1 = ASQ.react( .. ).seq( .. ).then( .. );\nvar sq2 = ASQ.react( .. ).seq( .. ).then( .. );\n\nvar sq3 = ASQ.react(..)\n.gate(\n\tsq1,\n\tsq2\n)\n.then( .. 
);\n```\n\nThe main takeaway is that `ASQ.react(..)` is a lightweight adaptation of F/RP concepts, enabling the wiring of an event stream to a sequence, hence the term \"reactive sequence.\" Reactive sequences are generally capable enough for basic reactive uses.\n\n**Note:** Here's an example of using `ASQ.react(..)` in managing UI state (http://jsbin.com/rozipaki/6/edit?js,output), and another example of handling HTTP request/response streams with `ASQ.react(..)` (https://gist.github.com/getify/bba5ec0de9d6047b720e).\n\n## Generator Coroutine\n\nHopefully Chapter 4 helped you get pretty familiar with ES6 generators. In particular, we want to revisit the \"Generator Concurrency\" discussion, and push it even further.\n\nWe imagined a `runAll(..)` utility that could take two or more generators and run them concurrently, letting them cooperatively `yield` control from one to the next, with optional message passing.\n\nIn addition to being able to run a single generator to completion, the `ASQ#runner(..)` we discussed in Appendix A is a similar implementation of the concepts of `runAll(..)`, which can run multiple generators concurrently to completion.\n\nSo let's see how we can implement the concurrent Ajax scenario from Chapter 4:\n\n```js\nASQ(\n\t\"http://some.url.2\"\n)\n.runner(\n\tfunction*(token){\n\t\t// transfer control\n\t\tyield token;\n\n\t\tvar url1 = token.messages[0]; // \"http://some.url.1\"\n\n\t\t// clear out messages to start fresh\n\t\ttoken.messages = [];\n\n\t\tvar p1 = request( url1 );\n\n\t\t// transfer control\n\t\tyield token;\n\n\t\ttoken.messages.push( yield p1 );\n\t},\n\tfunction*(token){\n\t\tvar url2 = token.messages[0]; // \"http://some.url.2\"\n\n\t\t// message pass and transfer control\n\t\ttoken.messages[0] = \"http://some.url.1\";\n\t\tyield token;\n\n\t\tvar p2 = request( url2 );\n\n\t\t// transfer control\n\t\tyield token;\n\n\t\ttoken.messages.push( yield p2 );\n\n\t\t// pass along results to next sequence step\n\t\treturn 
token.messages;\n\t}\n)\n.val( function(res){\n\t// `res[0]` comes from \"http://some.url.1\"\n\t// `res[1]` comes from \"http://some.url.2\"\n} );\n```\n\nThe main differences between `ASQ#runner(..)` and `runAll(..)` are as follows:\n\n* Each generator (coroutine) is provided an argument we call `token`, which is the special value to `yield` when you want to explicitly transfer control to the next coroutine.\n* `token.messages` is an array that holds any messages passed in from the previous sequence step. It's also a data structure that you can use to share messages between coroutines.\n* `yield`ing a Promise (or sequence) value does not transfer control, but instead pauses the coroutine processing until that value is ready.\n* The last `return`ed or `yield`ed value from the coroutine processing run will be forward passed to the next step in the sequence.\n\nIt's also easy to layer helpers on top of the base `ASQ#runner(..)` functionality to suit different uses.\n\n### State Machines\n\nOne example that may be familiar to many programmers is state machines. You can, with the help of a simple cosmetic utility, create an easy-to-express state machine processor.\n\nLet's imagine such a utility. We'll call it `state(..)`, and will pass it two arguments: a state value and a generator that handles that state. 
`state(..)` will do the dirty work of creating and returning an adapter generator to pass to `ASQ#runner(..)`.\n\nConsider:\n\n```js\nfunction state(val,handler) {\n\t// make a coroutine handler for this state\n\treturn function*(token) {\n\t\t// state transition handler\n\t\tfunction transition(to) {\n\t\t\ttoken.messages[0] = to;\n\t\t}\n\n\t\t// set initial state (if none set yet)\n\t\tif (token.messages.length < 1) {\n\t\t\ttoken.messages[0] = val;\n\t\t}\n\n\t\t// keep going until final state (false) is reached\n\t\twhile (token.messages[0] !== false) {\n\t\t\t// current state matches this handler?\n\t\t\tif (token.messages[0] === val) {\n\t\t\t\t// delegate to state handler\n\t\t\t\tyield *handler( transition );\n\t\t\t}\n\n\t\t\t// transfer control to another state handler?\n\t\t\tif (token.messages[0] !== false) {\n\t\t\t\tyield token;\n\t\t\t}\n\t\t}\n\t};\n}\n```\n\nIf you look closely, you'll see that `state(..)` returns back a generator that accepts a `token`, and then it sets up a `while` loop that will run until the state machine reaches its final state (which we arbitrarily pick as the `false` value); that's exactly the kind of generator we want to pass to `ASQ#runner(..)`!\n\nWe also arbitrarily reserve the `token.messages[0]` slot as the place where the current state of our state machine will be tracked, which means we can even seed the initial state as the value passed in from the previous step in the sequence.\n\nHow do we use the `state(..)` helper along with `ASQ#runner(..)`?\n\n```js\nvar prevState;\n\nASQ(\n\t/* optional: initial state value */\n\t2\n)\n// run our state machine\n// transitions: 2 -> 3 -> 1 -> 3 -> false\n.runner(\n\t// state `1` handler\n\tstate( 1, function *stateOne(transition){\n\t\tconsole.log( \"in state 1\" );\n\n\t\tprevState = 1;\n\t\tyield transition( 3 );\t// goto state `3`\n\t} ),\n\n\t// state `2` handler\n\tstate( 2, function *stateTwo(transition){\n\t\tconsole.log( \"in state 2\" );\n\n\t\tprevState = 
2;\n\t\tyield transition( 3 );\t// goto state `3`\n\t} ),\n\n\t// state `3` handler\n\tstate( 3, function *stateThree(transition){\n\t\tconsole.log( \"in state 3\" );\n\n\t\tif (prevState === 2) {\n\t\t\tprevState = 3;\n\t\t\tyield transition( 1 ); // goto state `1`\n\t\t}\n\t\t// all done!\n\t\telse {\n\t\t\tyield \"That's all folks!\";\n\n\t\t\tprevState = 3;\n\t\t\tyield transition( false ); // terminal state\n\t\t}\n\t} )\n)\n// state machine complete, so move on\n.val( function(msg){\n\tconsole.log( msg );\t// That's all folks!\n} );\n```\n\nIt's important to note that the `*stateOne(..)`, `*stateTwo(..)`, and `*stateThree(..)` generators themselves are reinvoked each time that state is entered, and they finish when you `transition(..)` to another value. While not shown here, of course these state generator handlers can be asynchronously paused by `yield`ing Promises/sequences/thunks.\n\nThe underneath hidden generators produced by the `state(..)` helper and actually passed to `ASQ#runner(..)` are the ones that continue to run concurrently for the length of the state machine, and each of them handles cooperatively `yield`ing control to the next, and so on.\n\n**Note:** See this \"ping pong\" example (http://jsbin.com/qutabu/1/edit?js,output) for more illustration of using cooperative concurrency with generators driven by `ASQ#runner(..)`.\n\n## Communicating Sequential Processes (CSP)\n\n\"Communicating Sequential Processes\" (CSP) was first described by C. A. R. Hoare in a 1978 academic paper (http://dl.acm.org/citation.cfm?doid=359576.359585), and later in a 1985 book (http://www.usingcsp.com/) of the same name. 
CSP describes a formal method for concurrent \"processes\" to interact (aka \"communicate\") during processing.\n\nYou may recall that we examined concurrent \"processes\" back in Chapter 1, so our exploration of CSP here will build upon that understanding.\n\nLike most great concepts in computer science, CSP is heavily steeped in academic formalism, expressed as a process algebra. However, I suspect symbolic algebra theorems won't make much practical difference to the reader, so we will want to find some other way of wrapping our brains around CSP.\n\nI will leave much of the formal description and proof of CSP to Hoare's writing, and to many other fantastic writings since. Instead, we will try to just briefly explain the idea of CSP in as un-academic and hopefully intuitively understandable a way as possible.\n\n### Message Passing\n\nThe core principle in CSP is that all communication/interaction between otherwise independent processes must be through formal message passing. Perhaps counter to your expectations, CSP message passing is described as a synchronous action, where the sender process and the receiver process have to mutually be ready for the message to be passed.\n\nHow could such synchronous messaging possibly be related to asynchronous programming in JavaScript?\n\nThe concreteness of relationship comes from the nature of how ES6 generators are used to produce synchronous-looking actions that under the covers can indeed either be synchronous or (more likely) asynchronous.\n\nIn other words, two or more concurrently running generators can appear to synchronously message each other while preserving the fundamental asynchrony of the system because each generator's code is paused (aka \"blocked\") waiting on resumption of an asynchronous action.\n\nHow does this work?\n\nImagine a generator (aka \"process\") called \"A\" that wants to send a message to generator \"B.\" First, \"A\" `yield`s the message (thus pausing \"A\") to be sent to \"B.\" When \"B\" 
is ready and takes the message, \"A\" is then resumed (unblocked).\n\nSymmetrically, imagine a generator \"A\" that wants a message **from** \"B.\" \"A\" `yield`s its request (thus pausing \"A\") for the message from \"B,\" and once \"B\" sends a message, \"A\" takes the message and is resumed.\n\nOne of the more popular expressions of this CSP message passing theory comes from ClojureScript's core.async library, and also from the *go* language. These takes on CSP embody the described communication semantics in a conduit that is opened between processes called a \"channel.\"\n\n**Note:** The term *channel* is used in part because there are modes in which more than one value can be sent at once into the \"buffer\" of the channel; this is similar to what you may think of as a stream. We won't go into depth about it here, but it can be a very powerful technique for managing streams of data.\n\nIn the simplest notion of CSP, a channel that we create between \"A\" and \"B\" would have a method called `take(..)` for blocking to receive a value, and a method called `put(..)` for blocking to send a value.\n\nThis might look like:\n\n```js\nvar ch = channel();\n\nfunction *foo() {\n\tvar msg = yield take( ch );\n\n\tconsole.log( msg );\n}\n\nfunction *bar() {\n\tyield put( ch, \"Hello World\" );\n\n\tconsole.log( \"message sent\" );\n}\n\nrun( foo );\nrun( bar );\n// Hello World\n// \"message sent\"\n```\n\nCompare this structured, synchronous(-looking) message passing interaction to the informal and unstructured message sharing that `ASQ#runner(..)` provides through the `token.messages` array and cooperative `yield`ing. 
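\n\nTo demystify what `channel()`, `take(..)`, and `put(..)` would have to do, here's a toy rendezvous channel plus a naive generator runner -- strictly a sketch for intuition, as real CSP libraries also handle buffering, `alts(..)`, channel closing, and much more:\n\n```js\nfunction channel() {\n\treturn { takers: [], putters: [] };\n}\n\n// `yield`ed instruction objects; the runner below\n// interprets them and does all the blocking\nfunction put(ch,val) { return { op: \"put\", ch: ch, val: val }; }\nfunction take(ch) { return { op: \"take\", ch: ch }; }\n\nfunction run(gen) {\n\tvar it = gen();\n\n\t(function step(v){\n\t\tvar ret = it.next( v );\n\t\tif (ret.done) return;\n\t\tvar instr = ret.value;\n\n\t\tif (instr.op === \"put\") {\n\t\t\tif (instr.ch.takers.length > 0) {\n\t\t\t\t// a taker is blocked waiting: hand off the\n\t\t\t\t// value, resume the taker, then resume us\n\t\t\t\tinstr.ch.takers.shift()( instr.val );\n\t\t\t\tstep();\n\t\t\t}\n\t\t\telse {\n\t\t\t\t// block until a taker shows up\n\t\t\t\tinstr.ch.putters.push( { val: instr.val, resume: step } );\n\t\t\t}\n\t\t}\n\t\telse {\n\t\t\tif (instr.ch.putters.length > 0) {\n\t\t\t\tvar putter = instr.ch.putters.shift();\n\t\t\t\tputter.resume();\n\t\t\t\tstep( putter.val );\n\t\t\t}\n\t\t\telse {\n\t\t\t\t// block until a putter shows up\n\t\t\t\tinstr.ch.takers.push( step );\n\t\t\t}\n\t\t}\n\t})();\n}\n\nvar ch = channel();\n\nrun( function *foo(){\n\tvar msg = yield take( ch );\n\tconsole.log( msg );\n} );\n\nrun( function *bar(){\n\tyield put( ch, \"Hello World\" );\n\tconsole.log( \"message sent\" );\n} );\n// Hello World\n// message sent\n```\n\nAll of the blocking lives in the runner; the goroutine code itself stays synchronous looking, which is the whole point.\n\n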
In essence, `yield put(..)` is a single operation that both sends the value and pauses execution to transfer control, whereas in earlier examples we did those as separate steps.\n\nMoreover, CSP stresses that you don't really explicitly \"transfer control,\" but rather you design your concurrent routines to block expecting either a value received from the channel, or to block expecting to try to send a message on the channel. The blocking around receiving or sending messages is how you coordinate sequencing of behavior between the coroutines.\n\n**Note:** Fair warning: this pattern is very powerful but it's also a little mind twisting to get used to at first. You will want to practice this a bit to get used to this new way of thinking about coordinating your concurrency.\n\nThere are several great libraries that have implemented this flavor of CSP in JavaScript, most notably \"js-csp\" (https://github.com/ubolonton/js-csp), which James Long (http://twitter.com/jlongster) forked (https://github.com/jlongster/js-csp) and has written extensively about (http://jlongster.com/Taming-the-Asynchronous-Beast-with-CSP-in-JavaScript). Also, it cannot be stressed enough how amazing the many writings of David Nolen (http://twitter.com/swannodette) are on the topic of adapting ClojureScript's go-style core.async CSP into JS generators (http://swannodette.github.io/2013/08/24/es6-generators-and-csp).\n\n### asynquence CSP emulation\n\nBecause we've been discussing async patterns here in the context of my *asynquence* library, you might be interested to see that we can fairly easily add an emulation layer on top of `ASQ#runner(..)` generator handling as a nearly perfect porting of the CSP API and behavior. 
This emulation layer ships as an optional part of the \"asynquence-contrib\" package alongside *asynquence*.\n\nVery similar to the `state(..)` helper from earlier, `ASQ.csp.go(..)` takes a generator -- in go/core.async terms, it's known as a goroutine -- and adapts it to use with `ASQ#runner(..)` by returning a new generator.\n\nInstead of being passed a `token`, your goroutine receives an initially created channel (`ch` below) that all goroutines in this run will share. You can create more channels (which is often quite helpful!) with `ASQ.csp.chan(..)`.\n\nIn CSP, we model all asynchrony in terms of blocking on channel messages, rather than blocking waiting for a Promise/sequence/thunk to complete.\n\nSo, instead of `yield`ing the Promise returned from `request(..)`, `request(..)` should return a channel that you `take(..)` a value from. In other words, a single-value channel is roughly equivalent in this context/usage to a Promise/sequence.\n\nLet's first make a channel-aware version of `request(..)`:\n\n```js\nfunction request(url) {\n\tvar ch = ASQ.csp.chan();\n\tajax( url ).then( function(content){\n\t\t// `putAsync(..)` is a version of `put(..)` that\n\t\t// can be used outside of a generator. It returns\n\t\t// a promise for the operation's completion. We\n\t\t// don't use that promise here, but we could if\n\t\t// we needed to be notified when the value had\n\t\t// been `take(..)`n.\n\t\tASQ.csp.putAsync( ch, content );\n\t} );\n\treturn ch;\n}\n```\n\nFrom Chapter 3, \"promisory\" is a Promise-producing utility, \"thunkory\" from Chapter 4 is a thunk-producing utility, and finally, in Appendix A we invented \"sequory\" for a sequence-producing utility.\n\nNaturally, we need to coin a symmetric term here for a channel-producing utility. So let's unsurprisingly call it a \"chanory\" (\"channel\" + \"factory\"). 
As an exercise for the reader, try your hand at defining a `channelify(..)` utility similar to `Promise.wrap(..)`/`promisify(..)` (Chapter 3), `thunkify(..)` (Chapter 4), and `ASQ.wrap(..)` (Appendix A).\n\nNow consider the concurrent Ajax example using *asynquence*-flavored CSP:\n\n```js\nASQ()\n.runner(\n\tASQ.csp.go( function*(ch){\n\t\tyield ASQ.csp.put( ch, \"http://some.url.2\" );\n\n\t\tvar url1 = yield ASQ.csp.take( ch );\n\t\t// \"http://some.url.1\"\n\n\t\tvar res1 = yield ASQ.csp.take( request( url1 ) );\n\n\t\tyield ASQ.csp.put( ch, res1 );\n\t} ),\n\tASQ.csp.go( function*(ch){\n\t\tvar url2 = yield ASQ.csp.take( ch );\n\t\t// \"http://some.url.2\"\n\n\t\tyield ASQ.csp.put( ch, \"http://some.url.1\" );\n\n\t\tvar res2 = yield ASQ.csp.take( request( url2 ) );\n\t\tvar res1 = yield ASQ.csp.take( ch );\n\n\t\t// pass along results to next sequence step\n\t\tch.buffer_size = 2;\n\t\tASQ.csp.put( ch, res1 );\n\t\tASQ.csp.put( ch, res2 );\n\t} )\n)\n.val( function(res1,res2){\n\t// `res1` comes from \"http://some.url.1\"\n\t// `res2` comes from \"http://some.url.2\"\n} );\n```\n\nThe message passing that trades the URL strings between the two goroutines is pretty straightforward. The first goroutine makes an Ajax request to the first URL, and that response is put onto the `ch` channel. The second goroutine makes an Ajax request to the second URL, then gets the first response `res1` off the `ch` channel. At that point, both responses `res1` and `res2` are completed and ready.\n\nIf there are any remaining values in the `ch` channel at the end of the goroutine run, they will be passed along to the next step in the sequence. So, to pass out message(s) from the final goroutine, `put(..)` them into `ch`. 
As shown, to avoid the blocking of those final `put(..)`s, we switch `ch` into buffering mode by setting its `buffer_size` to `2` (default: `0`).\n\n**Note:** See many more examples of using *asynquence*-flavored CSP here (https://gist.github.com/getify/e0d04f1f5aa24b1947ae).\n\n## Review\n\nPromises and generators provide the foundational building blocks upon which we can build much more sophisticated and capable asynchrony.\n\n*asynquence* has utilities for implementing *iterable sequences*, *reactive sequences* (aka \"Observables\"), *concurrent coroutines*, and even *CSP goroutines*.\n\nThose patterns, combined with the continuation-callback and Promise capabilities, give *asynquence* a powerful mix of different asynchronous functionalities, all integrated in one clean async flow control abstraction: the sequence.\n"
  },
  {
    "path": "async & performance/apC.md",
    "content": "# You Don't Know JS: Async & Performance\n# Appendix C: Acknowledgments\n\nI have many people to thank for making this book title and the overall series happen.\n\nFirst, I must thank my wife Christen Simpson, and my two kids Ethan and Emily, for putting up with Dad always pecking away at the computer. Even when not writing books, my obsession with JavaScript glues my eyes to the screen far more than it should. That time I borrow from my family is the reason these books can so deeply and completely explain JavaScript to you, the reader. I owe my family everything.\n\nI'd like to thank my editors at O'Reilly, namely Simon St.Laurent and Brian MacDonald, as well as the rest of the editorial and marketing staff. They are fantastic to work with, and have been especially accommodating during this experiment into \"open source\" book writing, editing, and production.\n\nThank you to the many folks who have participated in making this book series better by providing editorial suggestions and corrections, including Shelley Powers, Tim Ferro, Evan Borden, Forrest L. Norvell, Jennifer Davis, Jesse Harlin, Kris Kowal, Rick Waldron, Jordan Harband, Benjamin Gruenbaum, Vyacheslav Egorov, David Nolen, and many others. A big thank you to Jake Archibald for writing the Foreword for this title.\n\nThank you to the countless folks in the community, including members of the TC39 committee, who have shared so much knowledge with the rest of us, and especially tolerated my incessant questions and explorations with patience and detail. 
John-David Dalton, Juriy \"kangax\" Zaytsev, Mathias Bynens, Axel Rauschmayer, Nicholas Zakas, Angus Croll, Reginald Braithwaite, Dave Herman, Brendan Eich, Allen Wirfs-Brock, Bradley Meck, Domenic Denicola, David Walsh, Tim Disney, Peter van der Zee, Andrea Giammarchi, Kit Cambridge, Eric Elliott, and so many others, I can't even scratch the surface.\n\nThe *You Don't Know JS* book series was born on Kickstarter, so I also wish to thank all my (nearly) 500 generous backers, without whom this book series could not have happened:\n\n> Jan Szpila, nokiko, Murali Krishnamoorthy, Ryan Joy, Craig Patchett, pdqtrader, Dale Fukami, ray hatfield, R0drigo Perez [Mx], Dan Petitt, Jack Franklin, Andrew Berry, Brian Grinstead, Rob Sutherland, Sergi Meseguer, Phillip Gourley, Mark Watson, Jeff Carouth, Alfredo Sumaran, Martin Sachse, Marcio Barrios, Dan, AimelyneM, Matt Sullivan, Delnatte Pierre-Antoine, Jake Smith, Eugen Tudorancea, Iris, David Trinh, simonstl, Ray Daly, Uros Gruber, Justin Myers, Shai Zonis, Mom & Dad, Devin Clark, Dennis Palmer, Brian Panahi Johnson, Josh Marshall, Marshall, Dennis Kerr, Matt Steele, Erik Slagter, Sacah, Justin Rainbow, Christian Nilsson, Delapouite, D.Pereira, Nicolas Hoizey, George V. 
Reilly, Dan Reeves, Bruno Laturner, Chad Jennings, Shane King, Jeremiah Lee Cohick, od3n, Stan Yamane, Marko Vucinic, Jim B, Stephen Collins, Ægir Þorsteinsson, Eric Pederson, Owain, Nathan Smith, Jeanetteurphy, Alexandre ELISÉ, Chris Peterson, Rik Watson, Luke Matthews, Justin Lowery, Morten Nielsen, Vernon Kesner, Chetan Shenoy, Paul Tregoing, Marc Grabanski, Dion Almaer, Andrew Sullivan, Keith Elsass, Tom Burke, Brian Ashenfelter, David Stuart, Karl Swedberg, Graeme, Brandon Hays, John Christopher, Gior, manoj reddy, Chad Smith, Jared Harbour, Minoru TODA, Chris Wigley, Daniel Mee, Mike, Handyface, Alex Jahraus, Carl Furrow, Rob Foulkrod, Max Shishkin, Leigh Penny Jr., Robert Ferguson, Mike van Hoenselaar, Hasse Schougaard, rajan venkataguru, Jeff Adams, Trae Robbins, Rolf Langenhuijzen, Jorge Antunes, Alex Koloskov, Hugh Greenish, Tim Jones, Jose Ochoa, Michael Brennan-White, Naga Harish Muvva, Barkóczi Dávid, Kitt Hodsden, Paul McGraw, Sascha Goldhofer, Andrew Metcalf, Markus Krogh, Michael Mathews, Matt Jared, Juanfran, Georgie Kirschner, Kenny Lee, Ted Zhang, Amit Pahwa, Inbal Sinai, Dan Raine, Schabse Laks, Michael Tervoort, Alexandre Abreu, Alan Joseph Williams, NicolasD, Cindy Wong, Reg Braithwaite, LocalPCGuy, Jon Friskics, Chris Merriman, John Pena, Jacob Katz, Sue Lockwood, Magnus Johansson, Jeremy Crapsey, Grzegorz Pawłowski, nico nuzzaci, Christine Wilks, Hans Bergren, charles montgomery, Ariel בר-לבב Fogel, Ivan Kolev, Daniel Campos, Hugh Wood, Christian Bradford, Frédéric Harper, Ionuţ Dan Popa, Jeff Trimble, Rupert Wood, Trey Carrico, Pancho Lopez, Joël kuijten, Tom A Marra, Jeff Jewiss, Jacob Rios, Paolo Di Stefano, Soledad Penades, Chris Gerber, Andrey Dolganov, Wil Moore III, Thomas Martineau, Kareem, Ben Thouret, Udi Nir, Morgan Laupies, jory carson-burson, Nathan L Smith, Eric Damon Walters, Derry Lozano-Hoyland, Geoffrey Wiseman, mkeehner, KatieK, Scott MacFarlane, Brian LaShomb, Adrien Mas, christopher ross, Ian Littman, Dan Atkinson, 
Elliot Jobe, Nick Dozier, Peter Wooley, John Hoover, dan, Martin A. Jackson, Héctor Fernando Hurtado, andy ennamorato, Paul Seltmann, Melissa Gore, Dave Pollard, Jack Smith, Philip Da Silva, Guy Israeli, @megalithic, Damian Crawford, Felix Gliesche, April Carter Grant, Heidi, jim tierney, Andrea Giammarchi, Nico Vignola, Don Jones, Chris Hartjes, Alex Howes, john gibbon, David J. Groom, BBox, Yu 'Dilys' Sun, Nate Steiner, Brandon Satrom, Brian Wyant, Wesley Hales, Ian Pouncey, Timothy Kevin Oxley, George Terezakis, sanjay raj, Jordan Harband, Marko McLion, Wolfgang Kaufmann, Pascal Peuckert, Dave Nugent, Markus Liebelt, Welling Guzman, Nick Cooley, Daniel Mesquita, Robert Syvarth, Chris Coyier, Rémy Bach, Adam Dougal, Alistair Duggin, David Loidolt, Ed Richer, Brian Chenault, GoldFire Studios, Carles Andrés, Carlos Cabo, Yuya Saito, roberto ricardo, Barnett Klane, Mike Moore, Kevin Marx, Justin Love, Joe Taylor, Paul Dijou, Michael Kohler, Rob Cassie, Mike Tierney, Cody Leroy Lindley, tofuji, Shimon Schwartz, Raymond, Luc De Brouwer, David Hayes, Rhys Brett-Bowen, Dmitry, Aziz Khoury, Dean, Scott Tolinski - Level Up, Clement Boirie, Djordje Lukic, Anton Kotenko, Rafael Corral, Philip Hurwitz, Jonathan Pidgeon, Jason Campbell, Joseph C., SwiftOne, Jan Hohner, Derick Bailey, getify, Daniel Cousineau, Chris Charlton, Eric Turner, David Turner, Joël Galeran, Dharma Vagabond, adam, Dirk van Bergen, dave ♥♫★ furf, Vedran Zakanj, Ryan McAllen, Natalie Patrice Tucker, Eric J. Bivona, Adam Spooner, Aaron Cavano, Kelly Packer, Eric J, Martin Drenovac, Emilis, Michael Pelikan, Scott F. 
Walter, Josh Freeman, Brandon Hudgeons, vijay chennupati, Bill Glennon, Robin R., Troy Forster, otaku_coder, Brad, Scott, Frederick Ostrander, Adam Brill, Seb Flippence, Michael Anderson, Jacob, Adam Randlett, Standard, Joshua Clanton, Sebastian Kouba, Chris Deck, SwordFire, Hannes Papenberg, Richard Woeber, hnzz, Rob Crowther, Jedidiah Broadbent, Sergey Chernyshev, Jay-Ar Jamon, Ben Combee, luciano bonachela, Mark Tomlinson, Kit Cambridge, Michael Melgares, Jacob Adams, Adrian Bruinhout, Bev Wieber, Scott Puleo, Thomas Herzog, April Leone, Daniel Mizieliński, Kees van Ginkel, Jon Abrams, Erwin Heiser, Avi Laviad, David newell, Jean-Francois Turcot, Niko Roberts, Erik Dana, Charles Neill, Aaron Holmes, Grzegorz Ziółkowski, Nathan Youngman, Timothy, Jacob Mather, Michael Allan, Mohit Seth, Ryan Ewing, Benjamin Van Treese, Marcelo Santos, Denis Wolf, Phil Keys, Chris Yung, Timo Tijhof, Martin Lekvall, Agendine, Greg Whitworth, Helen Humphrey, Dougal Campbell, Johannes Harth, Bruno Girin, Brian Hough, Darren Newton, Craig McPheat, Olivier Tille, Dennis Roethig, Mathias Bynens, Brendan Stromberger, sundeep, John Meyer, Ron Male, John F Croston III, gigante, Carl Bergenhem, B.J. 
May, Rebekah Tyler, Ted Foxberry, Jordan Reese, Terry Suitor, afeliz, Tom Kiefer, Darragh Duffy, Kevin Vanderbeken, Andy Pearson, Simon Mac Donald, Abid Din, Chris Joel, Tomas Theunissen, David Dick, Paul Grock, Brandon Wood, John Weis, dgrebb, Nick Jenkins, Chuck Lane, Johnny Megahan, marzsman, Tatu Tamminen, Geoffrey Knauth, Alexander Tarmolov, Jeremy Tymes, Chad Auld, Sean Parmelee, Rob Staenke, Dan Bender, Yannick derwa, Joshua Jones, Geert Plaisier, Tom LeZotte, Christen Simpson, Stefan Bruvik, Justin Falcone, Carlos Santana, Michael Weiss, Pablo Villoslada, Peter deHaan, Dimitris Iliopoulos, seyDoggy, Adam Jordens, Noah Kantrowitz, Amol M, Matthew Winnard, Dirk Ginader, Phinam Bui, David Rapson, Andrew Baxter, Florian Bougel, Michael George, Alban Escalier, Daniel Sellers, Sasha Rudan, John Green, Robert Kowalski, David I. Teixeira (@ditma, Charles Carpenter, Justin Yost, Sam S, Denis Ciccale, Kevin Sheurs, Yannick Croissant, Pau Fracés, Stephen McGowan, Shawn Searcy, Chris Ruppel, Kevin Lamping, Jessica Campbell, Christopher Schmitt, Sablons, Jonathan Reisdorf, Bunni Gek, Teddy Huff, Michael Mullany, Michael Fürstenberg, Carl Henderson, Rick Yoesting, Scott Nichols, Hernán Ciudad, Andrew Maier, Mike Stapp, Jesse Shawl, Sérgio Lopes, jsulak, Shawn Price, Joel Clermont, Chris Ridmann, Sean Timm, Jason Finch, Aiden Montgomery, Elijah Manor, Derek Gathright, Jesse Harlin, Dillon Curry, Courtney Myers, Diego Cadenas, Arne de Bree, João Paulo Dubas, James Taylor, Philipp Kraeutli, Mihai Păun, Sam Gharegozlou, joshjs, Matt Murchison, Eric Windham, Timo Behrmann, Andrew Hall, joshua price, Théophile Villard\n\nThis book series is being produced in an open source fashion, including editing and production. We owe GitHub a debt of gratitude for making that sort of thing possible for the community!\n\nThank you again to all the countless folks I didn't name but who I nonetheless owe thanks. 
May this book series be \"owned\" by all of us and serve to contribute to increasing awareness and understanding of the JavaScript language, to the benefit of all current and future community contributors.\n"
  },
  {
    "path": "async & performance/ch1.md",
    "content": "# You Don't Know JS: Async & Performance\n# Chapter 1: Asynchrony: Now & Later\n\nOne of the most important and yet often misunderstood parts of programming in a language like JavaScript is how to express and manipulate program behavior spread out over a period of time.\n\nThis is not just about what happens from the beginning of a `for` loop to the end of a `for` loop, which of course takes *some time* (microseconds to milliseconds) to complete. It's about what happens when part of your program runs *now*, and another part of your program runs *later* -- there's a gap between *now* and *later* where your program isn't actively executing.\n\nPractically all nontrivial programs ever written (especially in JS) have in some way or another had to manage this gap, whether that be in waiting for user input, requesting data from a database or file system, sending data across the network and waiting for a response, or performing a repeated task at a fixed interval of time (like animation). In all these various ways, your program has to manage state across the gap in time. As they famously say in London (of the chasm between the subway door and the platform): \"mind the gap.\"\n\nIn fact, the relationship between the *now* and *later* parts of your program is at the heart of asynchronous programming.\n\nAsynchronous programming has been around since the beginning of JS, for sure. But most JS developers have never really carefully considered exactly how and why it crops up in their programs, or explored various *other* ways to handle it. The *good enough* approach has always been the humble callback function. 
Many to this day will insist that callbacks are more than sufficient.\n\nBut as JS continues to grow in both scope and complexity, to meet the ever-widening demands of a first-class programming language that runs in browsers and servers and every conceivable device in between, the pains by which we manage asynchrony are becoming increasingly crippling, and they cry out for approaches that are both more capable and more reason-able.\n\nWhile this all may seem rather abstract right now, I assure you we'll tackle it more completely and concretely as we go on through this book. We'll explore a variety of emerging techniques for async JavaScript programming over the next several chapters.\n\nBut before we can get there, we're going to have to understand much more deeply what asynchrony is and how it operates in JS.\n\n## A Program in Chunks\n\nYou may write your JS program in one *.js* file, but your program is almost certainly comprised of several chunks, only one of which is going to execute *now*, and the rest of which will execute *later*. The most common unit of *chunk* is the `function`.\n\nThe problem most developers new to JS seem to have is that *later* doesn't happen strictly and immediately after *now*. In other words, tasks that cannot complete *now* are, by definition, going to complete asynchronously, and thus we will not have blocking behavior as you might intuitively expect or want.\n\nConsider:\n\n```js\n// ajax(..) is some arbitrary Ajax function given by a library\nvar data = ajax( \"http://some.url.1\" );\n\nconsole.log( data );\n// Oops! `data` generally won't have the Ajax results\n```\n\nYou're probably aware that standard Ajax requests don't complete synchronously, which means the `ajax(..)` function does not yet have any value to return back to be assigned to the `data` variable. If `ajax(..)` *could* block until the response came back, then the `data = ..` assignment would work fine.\n\nBut that's not how we do Ajax. 
We make an asynchronous Ajax request *now*, and we won't get the results back until *later*.\n\nThe simplest (but definitely not only, or necessarily even best!) way of \"waiting\" from *now* until *later* is to use a function, commonly called a callback function:\n\n```js\n// ajax(..) is some arbitrary Ajax function given by a library\najax( \"http://some.url.1\", function myCallbackFunction(data){\n\n\tconsole.log( data ); // Yay, I gots me some `data`!\n\n} );\n```\n\n**Warning:** You may have heard that it's possible to make synchronous Ajax requests. While that's technically true, you should never, ever do it, under any circumstances, because it locks the browser UI (buttons, menus, scrolling, etc.) and prevents any user interaction whatsoever. This is a terrible idea, and should always be avoided.\n\nBefore you protest in disagreement, no, your desire to avoid the mess of callbacks is *not* justification for blocking, synchronous Ajax.\n\nFor example, consider this code:\n\n```js\nfunction now() {\n\treturn 21;\n}\n\nfunction later() {\n\tanswer = answer * 2;\n\tconsole.log( \"Meaning of life:\", answer );\n}\n\nvar answer = now();\n\nsetTimeout( later, 1000 ); // Meaning of life: 42\n```\n\nThere are two chunks to this program: the stuff that will run *now*, and the stuff that will run *later*. It should be fairly obvious what those two chunks are, but let's be super explicit:\n\nNow:\n```js\nfunction now() {\n\treturn 21;\n}\n\nfunction later() { .. }\n\nvar answer = now();\n\nsetTimeout( later, 1000 );\n```\n\nLater:\n```js\nanswer = answer * 2;\nconsole.log( \"Meaning of life:\", answer );\n```\n\nThe *now* chunk runs right away, as soon as you execute your program. 
But `setTimeout(..)` also sets up an event (a timeout) to happen *later*, so the contents of the `later()` function will be executed at a later time (1,000 milliseconds from now).\n\nAny time you wrap a portion of code into a `function` and specify that it should be executed in response to some event (timer, mouse click, Ajax response, etc.), you are creating a *later* chunk of your code, and thus introducing asynchrony to your program.\n\n### Async Console\n\nThere is no specification or set of requirements around how the `console.*` methods work -- they are not officially part of JavaScript, but are instead added to JS by the *hosting environment* (see the *Types & Grammar* title of this book series).\n\nSo, different browsers and JS environments do as they please, which can sometimes lead to confusing behavior.\n\nIn particular, there are some browsers and some conditions in which `console.log(..)` does not actually immediately output what it's given. The main reason this may happen is that I/O is a very slow and blocking part of many programs (not just JS). So, it may perform better (from the page/UI perspective) for a browser to handle `console` I/O asynchronously in the background, without you perhaps even knowing that occurred.\n\nA not terribly common, but possible, scenario where this could be *observable* (not from code itself but from the outside):\n\n```js\nvar a = {\n\tindex: 1\n};\n\n// later\nconsole.log( a ); // ??\n\n// even later\na.index++;\n```\n\nWe'd normally expect to see the `a` object be snapshotted at the exact moment of the `console.log(..)` statement, printing something like `{ index: 1 }`, such that in the next statement when `a.index++` happens, it's modifying something different than, or just strictly after, the output of `a`.\n\nMost of the time, the preceding code will probably produce an object representation in your developer tools' console that's what you'd expect. 
But it's possible this same code could run in a situation where the browser felt it needed to defer the console I/O to the background, in which case it's *possible* that by the time the object is represented in the browser console, the `a.index++` has already happened, and it shows `{ index: 2 }`.\n\nIt's a moving target under what conditions exactly `console` I/O will be deferred, or even whether it will be observable. Just be aware of this possible asynchronicity in I/O in case you ever run into issues in debugging where objects have been modified *after* a `console.log(..)` statement and yet you see the unexpected modifications show up.\n\n**Note:** If you run into this rare scenario, the best option is to use breakpoints in your JS debugger instead of relying on `console` output. The next best option would be to force a \"snapshot\" of the object in question by serializing it to a `string`, like with `JSON.stringify(..)`.\n\n## Event Loop\n\nLet's make a (perhaps shocking) claim: despite clearly allowing asynchronous JS code (like the timeout we just looked at), up until recently (ES6), JavaScript itself has actually never had any direct notion of asynchrony built into it.\n\n**What!?** That seems like a crazy claim, right? In fact, it's quite true. The JS engine itself has never done anything more than execute a single chunk of your program at any given moment, when asked to.\n\n\"Asked to.\" By whom? That's the important part!\n\nThe JS engine doesn't run in isolation. It runs inside a *hosting environment*, which is for most developers the typical web browser. Over the last several years (but by no means exclusively), JS has expanded beyond the browser into other environments, such as servers, via things like Node.js. 
In fact, JavaScript gets embedded into all kinds of devices these days, from robots to lightbulbs.\n\nBut the one common \"thread\" (that's a not-so-subtle asynchronous joke, for what it's worth) of all these environments is that they have a mechanism in them that handles executing multiple chunks of your program *over time*, at each moment invoking the JS engine, called the \"event loop.\"\n\nIn other words, the JS engine has had no innate sense of *time*, but has instead been an on-demand execution environment for any arbitrary snippet of JS. It's the surrounding environment that has always *scheduled* \"events\" (JS code executions).\n\nSo, for example, when your JS program makes an Ajax request to fetch some data from a server, you set up the \"response\" code in a function (commonly called a \"callback\"), and the JS engine tells the hosting environment, \"Hey, I'm going to suspend execution for now, but whenever you finish with that network request, and you have some data, please *call* this function *back*.\"\n\nThe browser is then set up to listen for the response from the network, and when it has something to give you, it schedules the callback function to be executed by inserting it into the *event loop*.\n\nSo what is the *event loop*?\n\nLet's conceptualize it first through some fake-ish code:\n\n```js\n// `eventLoop` is an array that acts as a queue (first-in, first-out)\nvar eventLoop = [ ];\nvar event;\n\n// keep going \"forever\"\nwhile (true) {\n\t// perform a \"tick\"\n\tif (eventLoop.length > 0) {\n\t\t// get the next event in the queue\n\t\tevent = eventLoop.shift();\n\n\t\t// now, execute the next event\n\t\ttry {\n\t\t\tevent();\n\t\t}\n\t\tcatch (err) {\n\t\t\treportError(err);\n\t\t}\n\t}\n}\n```\n\nThis is, of course, vastly simplified pseudocode to illustrate the concepts. 
But it should be enough to help get a better understanding.\n\nAs you can see, there's a continuously running loop represented by the `while` loop, and each iteration of this loop is called a \"tick.\" For each tick, if an event is waiting on the queue, it's taken off and executed. These events are your function callbacks.\n\nIt's important to note that `setTimeout(..)` doesn't put your callback on the event loop queue. What it does is set up a timer; when the timer expires, the environment places your callback into the event loop, such that some future tick will pick it up and execute it.\n\nWhat if there are already 20 items in the event loop at that moment? Your callback waits. It gets in line behind the others -- there's not normally a path for preempting the queue and skipping ahead in line. This explains why `setTimeout(..)` timers may not fire with perfect temporal accuracy. You're guaranteed (roughly speaking) that your callback won't fire *before* the time interval you specify, but it can happen at or after that time, depending on the state of the event queue.\n\nSo, in other words, your program is generally broken up into lots of small chunks, which happen one after the other in the event loop queue. And technically, other events not related directly to your program can be interleaved within the queue as well.\n\n**Note:** We mentioned \"up until recently\" in relation to ES6 changing the nature of where the event loop queue is managed. It's mostly a formal technicality, but ES6 now specifies how the event loop works, which means technically it's within the purview of the JS engine, rather than just the *hosting environment*. 
One main reason for this change is the introduction of ES6 Promises, which we'll discuss in Chapter 3, because they require the ability to have direct, fine-grained control over scheduling operations on the event loop queue (see the discussion of `setTimeout(..0)` in the \"Cooperation\" section).\n\n## Parallel Threading\n\nIt's very common to conflate the terms \"async\" and \"parallel,\" but they are actually quite different. Remember, async is about the gap between *now* and *later*. But parallel is about things being able to occur simultaneously.\n\nThe most common tools for parallel computing are processes and threads. Processes and threads execute independently and may execute simultaneously: on separate processors, or even separate computers, but multiple threads can share the memory of a single process.\n\nAn event loop, by contrast, breaks its work into tasks and executes them in serial, disallowing parallel access and changes to shared memory. Parallelism and \"serialism\" can coexist in the form of cooperating event loops in separate threads.\n\nThe interleaving of parallel threads of execution and the interleaving of asynchronous events occur at very different levels of granularity.\n\nFor example:\n\n```js\nfunction later() {\n\tanswer = answer * 2;\n\tconsole.log( \"Meaning of life:\", answer );\n}\n```\n\nWhile the entire contents of `later()` would be regarded as a single event loop queue entry, when thinking about a thread this code would run on, there's actually perhaps a dozen different low-level operations. For example, `answer = answer * 2` requires first loading the current value of `answer`, then putting `2` somewhere, then performing the multiplication, then taking the result and storing it back into `answer`.\n\nIn a single-threaded environment, it really doesn't matter that the items in the thread queue are low-level operations, because nothing can interrupt the thread. 
But if you have a parallel system, where two different threads are operating in the same program, you could very likely have unpredictable behavior.\n\nConsider:\n\n```js\nvar a = 20;\n\nfunction foo() {\n\ta = a + 1;\n}\n\nfunction bar() {\n\ta = a * 2;\n}\n\n// ajax(..) is some arbitrary Ajax function given by a library\najax( \"http://some.url.1\", foo );\najax( \"http://some.url.2\", bar );\n```\n\nIn JavaScript's single-threaded behavior, if `foo()` runs before `bar()`, the result is that `a` has `42`, but if `bar()` runs before `foo()` the result in `a` will be `41`.\n\nIf JS events sharing the same data executed in parallel, though, the problems would be much more subtle. Consider these two lists of pseudocode tasks as the threads that could respectively run the code in `foo()` and `bar()`, and consider what happens if they are running at exactly the same time:\n\nThread 1 (`X` and `Y` are temporary memory locations):\n```\nfoo():\n  a. load value of `a` in `X`\n  b. store `1` in `Y`\n  c. add `X` and `Y`, store result in `X`\n  d. store value of `X` in `a`\n```\n\nThread 2 (`X` and `Y` are temporary memory locations):\n```\nbar():\n  a. load value of `a` in `X`\n  b. store `2` in `Y`\n  c. multiply `X` and `Y`, store result in `X`\n  d. store value of `X` in `a`\n```\n\nNow, let's say that the two threads are running truly in parallel. You can probably spot the problem, right? They use shared memory locations `X` and `Y` for their temporary steps.\n\nWhat's the end result in `a` if the steps happen like this?\n\n```\n1a  (load value of `a` in `X`   ==> `20`)\n2a  (load value of `a` in `X`   ==> `20`)\n1b  (store `1` in `Y`   ==> `1`)\n2b  (store `2` in `Y`   ==> `2`)\n1c  (add `X` and `Y`, store result in `X`   ==> `22`)\n1d  (store value of `X` in `a`   ==> `22`)\n2c  (multiply `X` and `Y`, store result in `X`   ==> `44`)\n2d  (store value of `X` in `a`   ==> `44`)\n```\n\nThe result in `a` will be `44`. 
But what about this ordering?\n\n```\n1a  (load value of `a` in `X`   ==> `20`)\n2a  (load value of `a` in `X`   ==> `20`)\n2b  (store `2` in `Y`   ==> `2`)\n1b  (store `1` in `Y`   ==> `1`)\n2c  (multiply `X` and `Y`, store result in `X`   ==> `20`)\n1c  (add `X` and `Y`, store result in `X`   ==> `21`)\n1d  (store value of `X` in `a`   ==> `21`)\n2d  (store value of `X` in `a`   ==> `21`)\n```\n\nThe result in `a` will be `21`.\n\nSo, threaded programming is very tricky, because if you don't take special steps to prevent this kind of interruption/interleaving from happening, you can get very surprising, nondeterministic behavior that frequently leads to headaches.\n\nJavaScript never shares data across threads, which means *that* level of nondeterminism isn't a concern. But that doesn't mean JS is always deterministic. Remember earlier, where the relative ordering of `foo()` and `bar()` produces two different results (`41` or `42`)?\n\n**Note:** It may not be obvious yet, but not all nondeterminism is bad. Sometimes it's irrelevant, and sometimes it's intentional. We'll see more examples of that throughout this and the next few chapters.\n\n### Run-to-Completion\n\nBecause of JavaScript's single-threading, the code inside of `foo()` (and `bar()`) is atomic, which means that once `foo()` starts running, the entirety of its code will finish before any of the code in `bar()` can run, or vice versa. This is called \"run-to-completion\" behavior.\n\nIn fact, the run-to-completion semantics are more obvious when `foo()` and `bar()` have more code in them, such as:\n\n```js\nvar a = 1;\nvar b = 2;\n\nfunction foo() {\n\ta++;\n\tb = b * a;\n\ta = b + 3;\n}\n\nfunction bar() {\n\tb--;\n\ta = 8 + b;\n\tb = a * 2;\n}\n\n// ajax(..) 
is some arbitrary Ajax function given by a library\najax( \"http://some.url.1\", foo );\najax( \"http://some.url.2\", bar );\n```\n\nBecause `foo()` can't be interrupted by `bar()`, and `bar()` can't be interrupted by `foo()`, this program only has two possible outcomes depending on which starts running first -- if threading were present, and the individual statements in `foo()` and `bar()` could be interleaved, the number of possible outcomes would be greatly increased!\n\nChunk 1 is synchronous (happens *now*), but chunks 2 and 3 are asynchronous (happen *later*), which means their execution will be separated by a gap of time.\n\nChunk 1:\n```js\nvar a = 1;\nvar b = 2;\n```\n\nChunk 2 (`foo()`):\n```js\na++;\nb = b * a;\na = b + 3;\n```\n\nChunk 3 (`bar()`):\n```js\nb--;\na = 8 + b;\nb = a * 2;\n```\n\nChunks 2 and 3 may happen in either-first order, so there are two possible outcomes for this program, as illustrated here:\n\nOutcome 1:\n```js\nvar a = 1;\nvar b = 2;\n\n// foo()\na++;\nb = b * a;\na = b + 3;\n\n// bar()\nb--;\na = 8 + b;\nb = a * 2;\n\na; // 11\nb; // 22\n```\n\nOutcome 2:\n```js\nvar a = 1;\nvar b = 2;\n\n// bar()\nb--;\na = 8 + b;\nb = a * 2;\n\n// foo()\na++;\nb = b * a;\na = b + 3;\n\na; // 183\nb; // 180\n```\n\nTwo outcomes from the same code means we still have nondeterminism! But it's at the function (event) ordering level, rather than at the statement ordering level (or, in fact, the expression operation ordering level) as it is with threads. In other words, it's *more deterministic* than threads would have been.\n\nAs applied to JavaScript's behavior, this function-ordering nondeterminism is the common term \"race condition,\" as `foo()` and `bar()` are racing against each other to see which runs first. 
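Because each function runs to completion, you can enumerate both outcomes yourself by replaying the two possible orderings directly -- a quick sketch (no Ajax involved, just the same `foo()`/`bar()` bodies from above):

```js
// Sketch: replay the two possible run-to-completion orderings
// of foo() and bar() to see both final results.
function simulate(firstIsFoo) {
	var a = 1;
	var b = 2;

	function foo() {
		a++;
		b = b * a;
		a = b + 3;
	}

	function bar() {
		b--;
		a = 8 + b;
		b = a * 2;
	}

	// run both functions atomically, in one of the two orders
	if (firstIsFoo) { foo(); bar(); }
	else { bar(); foo(); }

	return { a: a, b: b };
}

simulate( true );	// { a: 11, b: 22 }
simulate( false );	// { a: 183, b: 180 }
```

Only two orderings, so only two outcomes -- exactly the function-ordering nondeterminism described above.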
Specifically, it's a \"race condition\" because you cannot predict reliably how `a` and `b` will turn out.\n\n**Note:** If there was a function in JS that somehow did not have run-to-completion behavior, we could have many more possible outcomes, right? It turns out ES6 introduces just such a thing (see Chapter 4 \"Generators\"), but don't worry right now, we'll come back to that!\n\n## Concurrency\n\nLet's imagine a site that displays a list of status updates (like a social network news feed) that progressively loads as the user scrolls down the list. To make such a feature work correctly, (at least) two separate \"processes\" will need to be executing *simultaneously* (i.e., during the same window of time, but not necessarily at the same instant).\n\n**Note:** We're using \"process\" in quotes here because they aren't true operating system–level processes in the computer science sense. They're virtual processes, or tasks, that represent a logically connected, sequential series of operations. We'll simply prefer \"process\" over \"task\" because terminology-wise, it will match the definitions of the concepts we're exploring.\n\nThe first \"process\" will respond to `onscroll` events (making Ajax requests for new content) as they fire when the user has scrolled the page further down. The second \"process\" will receive Ajax responses back (to render content onto the page).\n\nObviously, if a user scrolls fast enough, you may see two or more `onscroll` events fired during the time it takes to get the first response back and process, and thus you're going to have `onscroll` events and Ajax response events firing rapidly, interleaved with each other.\n\nConcurrency is when two or more \"processes\" are executing simultaneously over the same period, regardless of whether their individual constituent operations happen *in parallel* (at the same instant on separate processors or cores) or not. 
You can think of concurrency then as \"process\"-level (or task-level) parallelism, as opposed to operation-level parallelism (separate-processor threads).\n\n**Note:** Concurrency also introduces an optional notion of these \"processes\" interacting with each other. We'll come back to that later.\n\nFor a given window of time (a few seconds worth of a user scrolling), let's visualize each independent \"process\" as a series of events/operations:\n\n\"Process\" 1 (`onscroll` events):\n```\nonscroll, request 1\nonscroll, request 2\nonscroll, request 3\nonscroll, request 4\nonscroll, request 5\nonscroll, request 6\nonscroll, request 7\n```\n\n\"Process\" 2 (Ajax response events):\n```\nresponse 1\nresponse 2\nresponse 3\nresponse 4\nresponse 5\nresponse 6\nresponse 7\n```\n\nIt's quite possible that an `onscroll` event and an Ajax response event could be ready to be processed at exactly the same *moment*. For example, let's visualize these events in a timeline:\n\n```\nonscroll, request 1\nonscroll, request 2          response 1\nonscroll, request 3          response 2\nresponse 3\nonscroll, request 4\nonscroll, request 5\nonscroll, request 6          response 4\nonscroll, request 7\nresponse 6\nresponse 5\nresponse 7\n```\n\nBut, going back to our notion of the event loop from earlier in the chapter, JS is only going to be able to handle one event at a time, so either `onscroll, request 2` is going to happen first or `response 1` is going to happen first, but they cannot happen at literally the same moment. 
Just like kids at a school cafeteria, no matter what crowd they form outside the doors, they'll have to merge into a single line to get their lunch!\n\nLet's visualize the interleaving of all these events onto the event loop queue.\n\nEvent Loop Queue:\n```\nonscroll, request 1   <--- Process 1 starts\nonscroll, request 2\nresponse 1            <--- Process 2 starts\nonscroll, request 3\nresponse 2\nresponse 3\nonscroll, request 4\nonscroll, request 5\nonscroll, request 6\nresponse 4\nonscroll, request 7   <--- Process 1 finishes\nresponse 6\nresponse 5\nresponse 7            <--- Process 2 finishes\n```\n\n\"Process 1\" and \"Process 2\" run concurrently (task-level parallel), but their individual events run sequentially on the event loop queue.\n\nBy the way, notice how `response 6` and `response 5` came back out of expected order?\n\nThe single-threaded event loop is one expression of concurrency (there are certainly others, which we'll come back to later).\n\n### Noninteracting\n\nAs two or more \"processes\" are interleaving their steps/events concurrently within the same program, they don't necessarily need to interact with each other if the tasks are unrelated. **If they don't interact, nondeterminism is perfectly acceptable.**\n\nFor example:\n\n```js\nvar res = {};\n\nfunction foo(results) {\n\tres.foo = results;\n}\n\nfunction bar(results) {\n\tres.bar = results;\n}\n\n// ajax(..) is some arbitrary Ajax function given by a library\najax( \"http://some.url.1\", foo );\najax( \"http://some.url.2\", bar );\n```\n\n`foo()` and `bar()` are two concurrent \"processes,\" and it's nondeterminate which order they will be fired in. 
But we've constructed the program so it doesn't matter what order they fire in, because they act independently and as such don't need to interact.\n\nThis is not a \"race condition\" bug, as the code will always work correctly, regardless of the ordering.\n\n### Interaction\n\nMore commonly, concurrent \"processes\" will by necessity interact, indirectly through scope and/or the DOM. When such interaction will occur, you need to coordinate these interactions to prevent \"race conditions,\" as described earlier.\n\nHere's a simple example of two concurrent \"processes\" that interact because of implied ordering, which is only *sometimes broken*:\n\n```js\nvar res = [];\n\nfunction response(data) {\n\tres.push( data );\n}\n\n// ajax(..) is some arbitrary Ajax function given by a library\najax( \"http://some.url.1\", response );\najax( \"http://some.url.2\", response );\n```\n\nThe concurrent \"processes\" are the two `response()` calls that will be made to handle the Ajax responses. They can happen in either-first order.\n\nLet's assume the expected behavior is that `res[0]` has the results of the `\"http://some.url.1\"` call, and `res[1]` has the results of the `\"http://some.url.2\"` call. Sometimes that will be the case, but sometimes they'll be flipped, depending on which call finishes first. There's a pretty good likelihood that this nondeterminism is a \"race condition\" bug.\n\n**Note:** Be extremely wary of assumptions you might tend to make in these situations. For example, it's not uncommon for a developer to observe that `\"http://some.url.2\"` is \"always\" much slower to respond than `\"http://some.url.1\"`, perhaps by virtue of what tasks they're doing (e.g., one performing a database task and the other just fetching a static file), so the observed ordering seems to always be as expected. 
Even if both requests go to the same server, and *it* intentionally responds in a certain order, there's no *real* guarantee of what order the responses will arrive back in the browser.\n\nSo, to address such a race condition, you can coordinate ordering interaction:\n\n```js\nvar res = [];\n\nfunction response(data) {\n\tif (data.url == \"http://some.url.1\") {\n\t\tres[0] = data;\n\t}\n\telse if (data.url == \"http://some.url.2\") {\n\t\tres[1] = data;\n\t}\n}\n\n// ajax(..) is some arbitrary Ajax function given by a library\najax( \"http://some.url.1\", response );\najax( \"http://some.url.2\", response );\n```\n\nRegardless of which Ajax response comes back first, we inspect the `data.url` (assuming one is returned from the server, of course!) to figure out which position the response data should occupy in the `res` array. `res[0]` will always hold the `\"http://some.url.1\"` results and `res[1]` will always hold the `\"http://some.url.2\"` results. Through simple coordination, we eliminated the \"race condition\" nondeterminism.\n\nThe same reasoning from this scenario would apply if multiple concurrent function calls were interacting with each other through the shared DOM, like one updating the contents of a `<div>` and the other updating the style or attributes of the `<div>` (e.g., to make the DOM element visible once it has content). You probably wouldn't want to show the DOM element before it had content, so the coordination must ensure proper ordering interaction.\n\nSome concurrency scenarios are *always broken* (not just *sometimes*) without coordinated interaction. Consider:\n\n```js\nvar a, b;\n\nfunction foo(x) {\n\ta = x * 2;\n\tbaz();\n}\n\nfunction bar(y) {\n\tb = y * 2;\n\tbaz();\n}\n\nfunction baz() {\n\tconsole.log(a + b);\n}\n\n// ajax(..) 
is some arbitrary Ajax function given by a library\najax( \"http://some.url.1\", foo );\najax( \"http://some.url.2\", bar );\n```\n\nIn this example, whether `foo()` or `bar()` fires first, it will always cause `baz()` to run too early (either `a` or `b` will still be `undefined`), but the second invocation of `baz()` will work, as both `a` and `b` will be available.\n\nThere are different ways to address such a condition. Here's one simple way:\n\n```js\nvar a, b;\n\nfunction foo(x) {\n\ta = x * 2;\n\tif (a && b) {\n\t\tbaz();\n\t}\n}\n\nfunction bar(y) {\n\tb = y * 2;\n\tif (a && b) {\n\t\tbaz();\n\t}\n}\n\nfunction baz() {\n\tconsole.log( a + b );\n}\n\n// ajax(..) is some arbitrary Ajax function given by a library\najax( \"http://some.url.1\", foo );\najax( \"http://some.url.2\", bar );\n```\n\nThe `if (a && b)` conditional around the `baz()` call is traditionally called a \"gate,\" because we're not sure what order `a` and `b` will arrive, but we wait for both of them to get there before we proceed to open the gate (call `baz()`).\n\nAnother concurrency interaction condition you may run into is sometimes called a \"race,\" but more correctly called a \"latch.\" It's characterized by \"only the first one wins\" behavior. Here, nondeterminism is acceptable, in that you are explicitly saying it's OK for the \"race\" to the finish line to have only one winner.\n\nConsider this broken code:\n\n```js\nvar a;\n\nfunction foo(x) {\n\ta = x * 2;\n\tbaz();\n}\n\nfunction bar(x) {\n\ta = x / 2;\n\tbaz();\n}\n\nfunction baz() {\n\tconsole.log( a );\n}\n\n// ajax(..) 
is some arbitrary Ajax function given by a library\najax( \"http://some.url.1\", foo );\najax( \"http://some.url.2\", bar );\n```\n\nWhichever one (`foo()` or `bar()`) fires last will not only overwrite the assigned `a` value from the other, but it will also duplicate the call to `baz()` (likely undesired).\n\nSo, we can coordinate the interaction with a simple latch, to let only the first one through:\n\n```js\nvar a;\n\nfunction foo(x) {\n\tif (a == undefined) {\n\t\ta = x * 2;\n\t\tbaz();\n\t}\n}\n\nfunction bar(x) {\n\tif (a == undefined) {\n\t\ta = x / 2;\n\t\tbaz();\n\t}\n}\n\nfunction baz() {\n\tconsole.log( a );\n}\n\n// ajax(..) is some arbitrary Ajax function given by a library\najax( \"http://some.url.1\", foo );\najax( \"http://some.url.2\", bar );\n```\n\nThe `if (a == undefined)` conditional allows only the first of `foo()` or `bar()` through, and the second (and indeed any subsequent) calls would just be ignored. There's just no virtue in coming in second place!\n\n**Note:** In all these scenarios, we've been using global variables for simplistic illustration purposes, but there's nothing about our reasoning here that requires it. As long as the functions in question can access the variables (via scope), they'll work as intended. Relying on lexically scoped variables (see the *Scope & Closures* title of this book series), and in fact global variables as in these examples, is one obvious downside to these forms of concurrency coordination. As we go through the next few chapters, we'll see other ways of coordination that are much cleaner in that respect.\n\n### Cooperation\n\nAnother expression of concurrency coordination is called \"cooperative concurrency.\" Here, the focus isn't so much on interacting via value sharing in scopes (though that's obviously still allowed!). 
The goal is to take a long-running \"process\" and break it up into steps or batches so that other concurrent \"processes\" have a chance to interleave their operations into the event loop queue.\n\nFor example, consider an Ajax response handler that needs to run through a long list of results to transform the values. We'll use `Array#map(..)` to keep the code shorter:\n\n```js\nvar res = [];\n\n// `response(..)` receives array of results from the Ajax call\nfunction response(data) {\n\t// add onto existing `res` array\n\tres = res.concat(\n\t\t// make a new transformed array with all `data` values doubled\n\t\tdata.map( function(val){\n\t\t\treturn val * 2;\n\t\t} )\n\t);\n}\n\n// ajax(..) is some arbitrary Ajax function given by a library\najax( \"http://some.url.1\", response );\najax( \"http://some.url.2\", response );\n```\n\nIf `\"http://some.url.1\"` gets its results back first, the entire list will be mapped into `res` all at once. If it's a few thousand or fewer records, this is not generally a big deal. But if it's, say, 10 million records, that can take a while to run (several seconds on a powerful laptop, much longer on a mobile device, etc.).\n\nWhile such a \"process\" is running, nothing else in the page can happen, including no other `response(..)` calls, no UI updates, not even user events like scrolling, typing, button clicking, and the like. 
That's pretty painful.\n\nSo, to make a more cooperatively concurrent system, one that's friendlier and doesn't hog the event loop queue, you can process these results in asynchronous batches, after each one \"yielding\" back to the event loop to let other waiting events happen.\n\nHere's a very simple approach:\n\n```js\nvar res = [];\n\n// `response(..)` receives array of results from the Ajax call\nfunction response(data) {\n\t// let's just do 1000 at a time\n\tvar chunk = data.splice( 0, 1000 );\n\n\t// add onto existing `res` array\n\tres = res.concat(\n\t\t// make a new transformed array with all `chunk` values doubled\n\t\tchunk.map( function(val){\n\t\t\treturn val * 2;\n\t\t} )\n\t);\n\n\t// anything left to process?\n\tif (data.length > 0) {\n\t\t// async schedule next batch\n\t\tsetTimeout( function(){\n\t\t\tresponse( data );\n\t\t}, 0 );\n\t}\n}\n\n// ajax(..) is some arbitrary Ajax function given by a library\najax( \"http://some.url.1\", response );\najax( \"http://some.url.2\", response );\n```\n\nWe process the data set in maximum-sized chunks of 1,000 items. By doing so, we ensure a short-running \"process,\" even if that means many more subsequent \"processes,\" as the interleaving onto the event loop queue will give us a much more responsive (performant) site/app.\n\nOf course, we're not interaction-coordinating the ordering of any of these \"processes,\" so the order of results in `res` won't be predictable. If ordering was required, you'd need to use interaction techniques like those we discussed earlier, or ones we will cover in later chapters of this book.\n\nWe use the `setTimeout(..0)` (hack) for async scheduling, which basically just means \"stick this function at the end of the current event loop queue.\"\n\n**Note:** `setTimeout(..0)` is not technically inserting an item directly onto the event loop queue. The timer will insert the event at its next opportunity. 
For example, two subsequent `setTimeout(..0)` calls would not be strictly guaranteed to be processed in call order, so it *is* possible to see various conditions like timer drift where the ordering of such events isn't predictable. In Node.js, a similar approach is `process.nextTick(..)`. Despite how convenient (and usually more performant) it would be, there's not a single direct way (at least yet) across all environments to ensure async event ordering. We cover this topic in more detail in the next section.\n\n## Jobs\n\nAs of ES6, there's a new concept layered on top of the event loop queue, called the \"Job queue.\" The most likely exposure you'll have to it is with the asynchronous behavior of Promises (see Chapter 3).\n\nUnfortunately, at the moment it's a mechanism without an exposed API, and thus demonstrating it is a bit more convoluted. So we're going to have to just describe it conceptually, such that when we discuss async behavior with Promises in Chapter 3, you'll understand how those actions are being scheduled and processed.\n\nSo, the best way to think about this that I've found is that the \"Job queue\" is a queue hanging off the end of every tick in the event loop queue. Certain async-implied actions that may occur during a tick of the event loop will not cause a whole new event to be added to the event loop queue, but will instead add an item (aka Job) to the end of the current tick's Job queue.\n\nIt's kinda like saying, \"oh, here's this other thing I need to do *later*, but make sure it happens right away before anything else can happen.\"\n\nOr, to use a metaphor: the event loop queue is like an amusement park ride, where once you finish the ride, you have to go to the back of the line to ride again. But the Job queue is like finishing the ride, but then cutting in line and getting right back on.\n\nA Job can also cause more Jobs to be added to the end of the same queue. 
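Since the Job queue has no exposed API in ES6, the model can only be illustrated indirectly; here's a minimal synchronous simulation of the "queue hanging off the end of every tick" idea (the `eventQueue`, `jobQueue`, and `schedule(..)` names are hypothetical, not real APIs):

```js
// hypothetical model: an event loop queue with a Job queue
// drained at the end of each tick
var eventQueue = [];
var jobQueue = [];
var log = [];

function schedule(job) {
	jobQueue.push( job );
}

// pretend two events (e.g., timer callbacks) are already queued
eventQueue.push( function(){
	log.push( "A" );

	schedule( function(){
		log.push( "C" );

		// a Job adding another Job extends the current drain
		schedule( function(){
			log.push( "D" );
		} );
	} );
} );
eventQueue.push( function(){
	log.push( "B" );
} );

// the "event loop": one event per tick, then drain all Jobs
while (eventQueue.length > 0) {
	eventQueue.shift()();

	while (jobQueue.length > 0) {
		jobQueue.shift()();
	}
}

log; // ["A","C","D","B"]
```

Note how every Job scheduled during the first tick (including the Job added by another Job) runs before the second event gets its turn -- exactly the "cutting in line" behavior of the amusement park metaphor.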
So, it's theoretically possible that a Job \"loop\" (a Job that keeps adding another Job, etc.) could spin indefinitely, thus starving the program of the ability to move on to the next event loop tick. This would conceptually be almost the same as just expressing a long-running or infinite loop (like `while (true) ..`) in your code.\n\nJobs are kind of like the spirit of the `setTimeout(..0)` hack, but implemented in such a way as to have a much more well-defined and guaranteed ordering: **later, but as soon as possible**.\n\nLet's imagine an API for scheduling Jobs (directly, without hacks), and call it `schedule(..)`. Consider:\n\n```js\nconsole.log( \"A\" );\n\nsetTimeout( function(){\n\tconsole.log( \"B\" );\n}, 0 );\n\n// theoretical \"Job API\"\nschedule( function(){\n\tconsole.log( \"C\" );\n\n\tschedule( function(){\n\t\tconsole.log( \"D\" );\n\t} );\n} );\n```\n\nYou might expect this to print out `A B C D`, but instead it would print out `A C D B`, because the Jobs happen at the end of the current event loop tick, and the timer fires to schedule for the *next* event loop tick (if available!).\n\nIn Chapter 3, we'll see that the asynchronous behavior of Promises is based on Jobs, so it's important to keep clear how that relates to event loop behavior.\n\n## Statement Ordering\n\nThe order in which we express statements in our code is not necessarily the same order as the JS engine will execute them. That may seem like quite a strange assertion to make, so we'll just briefly explore it.\n\nBut before we do, we should be crystal clear on something: the rules/grammar of the language (see the *Types & Grammar* title of this book series) dictate a very predictable and reliable behavior for statement ordering from the program point of view. 
So what we're about to discuss are **not things you should ever be able to observe** in your JS program.\n\n**Warning:** If you are ever able to *observe* compiler statement reordering like we're about to illustrate, that'd be a clear violation of the specification, and it would unquestionably be due to a bug in the JS engine in question -- one which should promptly be reported and fixed! But it's vastly more common that you *suspect* something crazy is happening in the JS engine, when in fact it's just a bug (probably a \"race condition\"!) in your own code -- so look there first, and again and again. The JS debugger, using breakpoints and stepping through code line by line, will be your most powerful tool for sniffing out such bugs in *your code*.\n\nConsider:\n\n```js\nvar a, b;\n\na = 10;\nb = 30;\n\na = a + 1;\nb = b + 1;\n\nconsole.log( a + b ); // 42\n```\n\nThis code has no expressed asynchrony to it (other than the rare `console` async I/O discussed earlier!), so the most likely assumption is that it would process line by line in top-down fashion.\n\nBut it's *possible* that the JS engine, after compiling this code (yes, JS is compiled -- see the *Scope & Closures* title of this book series!) might find opportunities to run your code faster by rearranging (safely) the order of these statements. 
Essentially, as long as you can't observe the reordering, anything's fair game.\n\nFor example, the engine might find it's faster to actually execute the code like this:\n\n```js\nvar a, b;\n\na = 10;\na++;\n\nb = 30;\nb++;\n\nconsole.log( a + b ); // 42\n```\n\nOr this:\n\n```js\nvar a, b;\n\na = 11;\nb = 31;\n\nconsole.log( a + b ); // 42\n```\n\nOr even:\n\n```js\n// because `a` and `b` aren't used anymore, we can\n// inline and don't even need them!\nconsole.log( 42 ); // 42\n```\n\nIn all these cases, the JS engine is performing safe optimizations during its compilation, as the end *observable* result will be the same.\n\nBut here's a scenario where these specific optimizations would be unsafe and thus couldn't be allowed (of course, not to say that it's not optimized at all):\n\n```js\nvar a, b;\n\na = 10;\nb = 30;\n\n// we need `a` and `b` in their preincremented state!\nconsole.log( a * b ); // 300\n\na = a + 1;\nb = b + 1;\n\nconsole.log( a + b ); // 42\n```\n\nOther examples where the compiler reordering could create observable side effects (and thus must be disallowed) would include things like any function call with side effects (even and especially getter functions), or ES6 Proxy objects (see the *ES6 & Beyond* title of this book series).\n\nConsider:\n\n```js\nfunction foo() {\n\tconsole.log( b );\n\treturn 1;\n}\n\nvar a, b, c;\n\n// ES5.1 getter literal syntax\nc = {\n\tget bar() {\n\t\tconsole.log( a );\n\t\treturn 1;\n\t}\n};\n\na = 10;\nb = 30;\n\na += foo();\t\t\t\t// 30\nb += c.bar;\t\t\t\t// 11\n\nconsole.log( a + b );\t// 42\n```\n\nIf it weren't for the `console.log(..)` statements in this snippet (just used as a convenient form of observable side effect for the illustration), the JS engine would likely have been free, if it wanted to (who knows if it would!?), to reorder the code to:\n\n```js\n// ...\n\na = 10 + foo();\nb = 30 + c.bar;\n\n// ...\n```\n\nWhile JS semantics thankfully protect us from the *observable* nightmares that compiler 
statement reordering would seem to be in danger of, it's still important to understand just how tenuous a link there is between the way source code is authored (in top-down fashion) and the way it runs after compilation.\n\nCompiler statement reordering is almost a micro-metaphor for concurrency and interaction. As a general concept, such awareness can help you understand async JS code flow issues better.\n\n## Review\n\nA JavaScript program is (practically) always broken up into two or more chunks, where the first chunk runs *now* and the next chunk runs *later*, in response to an event. Even though the program is executed chunk-by-chunk, all of them share the same access to the program scope and state, so each modification to state is made on top of the previous state.\n\nWhenever there are events to run, the *event loop* runs until the queue is empty. Each iteration of the event loop is a \"tick.\" User interaction, IO, and timers enqueue events on the event queue.\n\nAt any given moment, only one event can be processed from the queue at a time. While an event is executing, it can directly or indirectly cause one or more subsequent events.\n\nConcurrency is when two or more chains of events interleave over time, such that from a high-level perspective, they appear to be running *simultaneously* (even though at any given moment only one event is being processed).\n\nIt's often necessary to do some form of interaction coordination between these concurrent \"processes\" (as distinct from operating system processes), for instance to ensure ordering or to prevent \"race conditions.\" These \"processes\" can also *cooperate* by breaking themselves into smaller chunks to allow other \"process\" interleaving.\n"
  },
  {
    "path": "async & performance/ch2.md",
    "content": "# You Don't Know JS: Async & Performance\n# Chapter 2: Callbacks\n\nIn Chapter 1, we explored the terminology and concepts around asynchronous programming in JavaScript. Our focus was on understanding the single-threaded (one-at-a-time) event loop queue that drives all \"events\" (async function invocations). We also explored various ways that concurrency patterns explain the relationships (if any!) between *simultaneously* running chains of events, or \"processes\" (tasks, function calls, etc.).\n\nAll our examples in Chapter 1 used the function as the individual, indivisible unit of operations, whereby inside the function, statements run in predictable order (above the compiler level!), but at the function-ordering level, events (aka async function invocations) can happen in a variety of orders.\n\nIn all these cases, the function is acting as a \"callback,\" because it serves as the target for the event loop to \"call back into\" the program, whenever that item in the queue is processed.\n\nAs you no doubt have observed, callbacks are by far the most common way that asynchrony in JS programs is expressed and managed. Indeed, the callback is the most fundamental async pattern in the language.\n\nCountless JS programs, even very sophisticated and complex ones, have been written upon no other async foundation than the callback (with of course the concurrency interaction patterns we explored in Chapter 1). The callback function is the async work horse for JavaScript, and it does its job respectably.\n\nExcept... callbacks are not without their shortcomings. Many developers are excited by the *promise* (pun intended!) of better async patterns. 
But it's impossible to effectively use any abstraction if you don't understand what it's abstracting, and why.\n\nIn this chapter, we will explore a couple of those in depth, as motivation for why more sophisticated async patterns (explored in subsequent chapters of this book) are necessary and desired.\n\n## Continuations\n\nLet's go back to the async callback example we started with in Chapter 1, but let me slightly modify it to illustrate a point:\n\n```js\n// A\najax( \"..\", function(..){\n\t// C\n} );\n// B\n```\n\n`// A` and `// B` represent the first half of the program (aka the *now*), and `// C` marks the second half of the program (aka the *later*). The first half executes right away, and then there's a \"pause\" of indeterminate length. At some future moment, if the Ajax call completes, then the program will pick up where it left off, and *continue* with the second half.\n\nIn other words, the callback function wraps or encapsulates the *continuation* of the program.\n\nLet's make the code even simpler:\n\n```js\n// A\nsetTimeout( function(){\n\t// C\n}, 1000 );\n// B\n```\n\nStop for a moment and ask yourself how you'd describe (to someone else less informed about how JS works) the way that program behaves. Go ahead, try it out loud. It's a good exercise that will help my next points make more sense.\n\nMost readers just now probably thought or said something to the effect of: \"Do A, then set up a timeout to wait 1,000 milliseconds, then once that fires, do C.\" How close was your rendition?\n\nYou might have caught yourself and self-edited to: \"Do A, setup the timeout for 1,000 milliseconds, then do B, then after the timeout fires, do C.\" That's more accurate than the first version. Can you spot the difference?\n\nEven though the second version is more accurate, both versions are deficient in explaining this code in a way that matches our brains to the code, and the code to the JS engine. 
The disconnect is both subtle and monumental, and is at the very heart of understanding the shortcomings of callbacks as async expression and management.\n\nAs soon as we introduce a single continuation (or several dozen as many programs do!) in the form of a callback function, we have allowed a divergence to form between how our brains work and the way the code will operate. Any time these two diverge (and this is by far not the only place that happens, as I'm sure you know!), we run into the inevitable fact that our code becomes harder to understand, reason about, debug, and maintain.\n\n## Sequential Brain\n\nI'm pretty sure most of you readers have heard someone say (even made the claim yourself), \"I'm a multitasker.\" The effects of trying to act as a multitasker range from humorous (e.g., the silly patting-head-rubbing-stomach kids' game) to mundane (chewing gum while walking) to downright dangerous (texting while driving).\n\nBut are we multitaskers? Can we really do two conscious, intentional actions at once and think/reason about both of them at exactly the same moment? Does our highest level of brain functionality have parallel multithreading going on?\n\nThe answer may surprise you: **probably not.**\n\nThat's just not really how our brains appear to be set up. We're much more single taskers than many of us (especially A-type personalities!) would like to admit. We can really only think about one thing at any given instant.\n\nI'm not talking about all our involuntary, subconscious, automatic brain functions, such as heart beating, breathing, and eyelid blinking. Those are all vital tasks to our sustained life, but we don't intentionally allocate any brain power to them. Thankfully, while we obsess about checking social network feeds for the 15th time in three minutes, our brain carries on in the background (threads!) with all those important tasks.\n\nWe're instead talking about whatever task is at the forefront of our minds at the moment. 
For me, it's writing the text in this book right now. Am I doing any other higher level brain function at exactly this same moment? Nope, not really. I get distracted quickly and easily -- a few dozen times in these last couple of paragraphs!\n\nWhen we *fake* multitasking, such as trying to type something at the same time we're talking to a friend or family member on the phone, what we're actually most likely doing is acting as fast context switchers. In other words, we switch back and forth between two or more tasks in rapid succession, *simultaneously* progressing on each task in tiny, fast little chunks. We do it so fast that to the outside world it appears as if we're doing these things *in parallel*.\n\nDoes that sound suspiciously like async evented concurrency (like the sort that happens in JS) to you?! If not, go back and read Chapter 1 again!\n\nIn fact, one way of simplifying (i.e., abusing) the massively complex world of neurology into something I can remotely hope to discuss here is that our brains work kinda like the event loop queue.\n\nIf you think about every single letter (or word) I type as a single async event, in just this sentence alone there are several dozen opportunities for my brain to be interrupted by some other event, such as from my senses, or even just my random thoughts.\n\nI don't get interrupted and pulled to another \"process\" at every opportunity that I could be (thankfully -- or this book would never be written!). But it happens often enough that I feel my own brain is nearly constantly switching to various different contexts (aka \"processes\"). And that's an awful lot like how the JS engine would probably feel.\n\n### Doing Versus Planning\n\nOK, so our brains can be thought of as operating in single-threaded event loop queue like ways, as can the JS engine. That sounds like a good match.\n\nBut we need to be more nuanced than that in our analysis. 
There's a big, observable difference between how we plan various tasks, and how our brains actually operate those tasks.\n\nAgain, back to the writing of this text as my metaphor. My rough mental outline plan here is to keep writing and writing, going sequentially through a set of points I have ordered in my thoughts. I don't plan to have any interruptions or nonlinear activity in this writing. But yet, my brain is nevertheless switching around all the time.\n\nEven though at an operational level our brains are async evented, we seem to plan out tasks in a sequential, synchronous way. \"I need to go to the store, then buy some milk, then drop off my dry cleaning.\"\n\nYou'll notice that this higher level thinking (planning) doesn't seem very async evented in its formulation. In fact, it's kind of rare for us to deliberately think solely in terms of events. Instead, we plan things out carefully, sequentially (A then B then C), and we assume to an extent a sort of temporal blocking that forces B to wait on A, and C to wait on B.\n\nWhen a developer writes code, they are planning out a set of actions to occur. If they're any good at being a developer, they're **carefully planning** it out. \"I need to set `z` to the value of `x`, and then `x` to the value of `y`,\" and so forth.\n\nWhen we write out synchronous code, statement by statement, it works a lot like our errands to-do list:\n\n```js\n// swap `x` and `y` (via temp variable `z`)\nz = x;\nx = y;\ny = z;\n```\n\nThese three assignment statements are synchronous, so `x = y` waits for `z = x` to finish, and `y = z` in turn waits for `x = y` to finish. Another way of saying it is that these three statements are temporally bound to execute in a certain order, one right after the other. Thankfully, we don't need to be bothered with any async evented details here. 
If we did, the code gets a lot more complex, quickly!\n\nSo if synchronous brain planning maps well to synchronous code statements, how well do our brains do at planning out asynchronous code?\n\nIt turns out that how we express asynchrony (with callbacks) in our code doesn't map very well at all to that synchronous brain planning behavior.\n\nCan you actually imagine having a line of thinking that plans out your to-do errands like this?\n\n> \"I need to go to the store, but on the way I'm sure I'll get a phone call, so 'Hi, Mom', and while she starts talking, I'll be looking up the store address on GPS, but that'll take a second to load, so I'll turn down the radio so I can hear Mom better, then I'll realize I forgot to put on a jacket and it's cold outside, but no matter, keep driving and talking to Mom, and then the seatbelt ding reminds me to buckle up, so 'Yes, Mom, I am wearing my seatbelt, I always do!'. Ah, finally the GPS got the directions, now...\"\n\nAs ridiculous as that sounds as a formulation for how we plan our day out and think about what to do and in what order, nonetheless it's exactly how our brains operate at a functional level. Remember, that's not multitasking, it's just fast context switching.\n\nThe reason it's difficult for us as developers to write async evented code, especially when all we have is the callback to do it, is that stream of consciousness thinking/planning is unnatural for most of us.\n\nWe think in step-by-step terms, but the tools (callbacks) available to us in code are not expressed in a step-by-step fashion once we move from synchronous to asynchronous.\n\nAnd **that** is why it's so hard to accurately author and reason about async JS code with callbacks: because it's not how our brain planning works.\n\n**Note:** The only thing worse than not knowing why some code breaks is not knowing why it worked in the first place! 
It's the classic \"house of cards\" mentality: \"it works, but not sure why, so nobody touch it!\" You may have heard, \"Hell is other people\" (Sartre), and the programmer meme twist, \"Hell is other people's code.\" I believe truly: \"Hell is not understanding my own code.\" And callbacks are one main culprit.\n\n### Nested/Chained Callbacks\n\nConsider:\n\n```js\nlisten( \"click\", function handler(evt){\n\tsetTimeout( function request(){\n\t\tajax( \"http://some.url.1\", function response(text){\n\t\t\tif (text == \"hello\") {\n\t\t\t\thandler();\n\t\t\t}\n\t\t\telse if (text == \"world\") {\n\t\t\t\trequest();\n\t\t\t}\n\t\t} );\n\t}, 500 );\n} );\n```\n\nThere's a good chance code like that is recognizable to you. We've got a chain of three functions nested together, each one representing a step in an asynchronous series (task, \"process\").\n\nThis kind of code is often called \"callback hell,\" and sometimes also referred to as the \"pyramid of doom\" (for its sideways-facing triangular shape due to the nested indentation).\n\nBut \"callback hell\" actually has almost nothing to do with the nesting/indentation. It's a far deeper problem than that. We'll see how and why as we continue through the rest of this chapter.\n\nFirst, we're waiting for the \"click\" event, then we're waiting for the timer to fire, then we're waiting for the Ajax response to come back, at which point it might do it all again.\n\nAt first glance, this code may seem to map its asynchrony naturally to sequential brain planning.\n\nFirst (*now*), we:\n\n```js\nlisten( \"..\", function handler(..){\n\t// ..\n} );\n```\n\nThen *later*, we:\n\n```js\nsetTimeout( function request(..){\n\t// ..\n}, 500 );\n```\n\nThen still *later*, we:\n\n```js\najax( \"..\", function response(..){\n\t// ..\n} );\n```\n\nAnd finally (most *later*), we:\n\n```js\nif ( .. 
) {\n\t// ..\n}\nelse ..\n```\n\nBut there are several problems with reasoning about this code linearly in such a fashion.\n\nFirst, it's an accident of the example that our steps are on subsequent lines (1, 2, 3, and 4...). In real async JS programs, there's often a lot more noise cluttering things up, noise that we have to deftly maneuver past in our brains as we jump from one function to the next. Understanding the async flow in such callback-laden code is not impossible, but it's certainly not natural or easy, even with lots of practice.\n\nBut also, there's something deeper wrong, which isn't evident just in that code example. Let me make up another scenario (pseudocode-ish) to illustrate it:\n\n```js\ndoA( function(){\n\tdoB();\n\n\tdoC( function(){\n\t\tdoD();\n\t} );\n\n\tdoE();\n} );\n\ndoF();\n```\n\nWhile the experienced among you will correctly identify the true order of operations here, I'm betting it is more than a little confusing at first glance, and takes some concerted mental cycles to arrive at. The operations will happen in this order:\n\n* `doA()`\n* `doF()`\n* `doB()`\n* `doC()`\n* `doE()`\n* `doD()`\n\nDid you get that right the very first time you glanced at the code?\n\nOK, some of you are thinking I was unfair in my function naming, to intentionally lead you astray. I swear I was just naming in top-down appearance order. But let me try again:\n\n```js\ndoA( function(){\n\tdoC();\n\n\tdoD( function(){\n\t\tdoF();\n\t} );\n\n\tdoE();\n} );\n\ndoB();\n```\n\nNow, I've named them alphabetically in order of actual execution. But I still bet, even with experience now in this scenario, tracing through the `A -> B -> C -> D -> E -> F` order doesn't come naturally to many, if any, of you readers. Certainly, your eyes do an awful lot of jumping up and down the code snippet, right?\n\nBut even if that all comes naturally to you, there's still one more hazard that could wreak havoc. 
Can you spot what it is?\n\nWhat if `doA(..)` or `doD(..)` aren't actually async, the way we obviously assumed them to be? Uh oh, now the order is different. If they're both sync (and maybe only sometimes, depending on the conditions of the program at the time), the order is now `A -> C -> D -> F -> E -> B`.\n\nThat sound you just heard faintly in the background is the sighs of thousands of JS developers who just had a face-in-hands moment.\n\nIs nesting the problem? Is that what makes it so hard to trace the async flow? That's part of it, certainly.\n\nBut let me rewrite the previous nested event/timeout/Ajax example without using nesting:\n\n```js\nlisten( \"click\", handler );\n\nfunction handler() {\n\tsetTimeout( request, 500 );\n}\n\nfunction request(){\n\tajax( \"http://some.url.1\", response );\n}\n\nfunction response(text){\n\tif (text == \"hello\") {\n\t\thandler();\n\t}\n\telse if (text == \"world\") {\n\t\trequest();\n\t}\n}\n```\n\nThis formulation of the code hardly exhibits the nesting/indentation woes of its previous form, and yet it's every bit as susceptible to \"callback hell.\" Why?\n\nAs we go to linearly (sequentially) reason about this code, we have to skip from one function, to the next, to the next, and bounce all around the code base to \"see\" the sequence flow. And remember, this is simplified code in sort of best-case fashion. We all know that real async JS program code bases are often fantastically more jumbled, which makes such reasoning orders of magnitude more difficult.\n\nAnother thing to notice: to get steps 2, 3, and 4 linked together so they happen in succession, the only affordance callbacks alone give us is to hardcode step 2 into step 1, step 3 into step 2, step 4 into step 3, and so on. 
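In skeletal form, that hardcoding looks something like the following sketch (the `step1`..`step3` names and the logging array are hypothetical, purely for illustration; real steps would of course do async work before calling onward):

```js
var steps = [];

// each step hardcodes its successor into its own continuation;
// a real step would run a timer or an Ajax request before calling on
function step1(done) {
	steps.push( "step1" );
	step2( done );		// step 2 wired directly into step 1
}

function step2(done) {
	steps.push( "step2" );
	step3( done );		// step 3 wired directly into step 2
}

function step3(done) {
	steps.push( "step3" );
	done();
}

step1( function(){
	steps.push( "done" );	// reached only via 1 -> 2 -> 3
} );
```

Each step knows exactly (and only) the one step that comes next, so reordering, retrying, or branching the sequence means editing the bodies of the steps themselves.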
The hardcoding isn't necessarily a bad thing, if it really is a fixed condition that step 2 should always lead to step 3.\n\nBut the hardcoding definitely makes the code a bit more brittle, as it doesn't account for anything going wrong that might cause a deviation in the progression of steps. For example, if step 2 fails, step 3 never gets reached, nor does step 2 retry, or move to an alternate error handling flow, and so on.\n\nAll of these issues are things you *can* manually hardcode into each step, but that code is often very repetitive and not reusable in other steps or in other async flows in your program.\n\nEven though our brains might plan out a series of tasks in a sequential type of way (this, then this, then this), the evented nature of our brain operation makes recovery/retry/forking of flow control almost effortless. If you're out running errands, and you realize you left a shopping list at home, it doesn't end the day because you didn't plan that ahead of time. Your brain routes around this hiccup easily: you go home, get the list, then head right back out to the store.\n\nBut the brittle nature of manually hardcoded callbacks (even with hardcoded error handling) is often far less graceful. Once you end up specifying (aka pre-planning) all the various eventualities/paths, the code becomes so convoluted that it's hard to ever maintain or update it.\n\n**That** is what \"callback hell\" is all about! The nesting/indentation are basically a side show, a red herring.\n\nAnd as if all that's not enough, we haven't even touched what happens when two or more chains of these callback continuations are happening *simultaneously*, or when the third step branches out into \"parallel\" callbacks with gates or latches, or... OMG, my brain hurts, how about yours!?\n\nAre you catching the notion here that our sequential, blocking brain planning behaviors just don't map well onto callback-oriented async code? 
That's the first major deficiency to articulate about callbacks: they express asynchrony in code in ways our brains have to fight just to keep in sync with (pun intended!).\n\n## Trust Issues\n\nThe mismatch between sequential brain planning and callback-driven async JS code is only part of the problem with callbacks. There's something much deeper to be concerned about.\n\nLet's once again revisit the notion of a callback function as the continuation (aka the second half) of our program:\n\n```js\n// A\najax( \"..\", function(..){\n\t// C\n} );\n// B\n```\n\n`// A` and `// B` happen *now*, under the direct control of the main JS program. But `// C` gets deferred to happen *later*, and under the control of another party -- in this case, the `ajax(..)` function. In a basic sense, that sort of hand-off of control doesn't regularly cause lots of problems for programs.\n\nBut don't let the infrequency of such problems fool you into thinking this control switch isn't a big deal. In fact, it's one of the worst (and yet most subtle) problems about callback-driven design. It revolves around the idea that sometimes `ajax(..)` (i.e., the \"party\" you hand your callback continuation to) is not a function that you wrote, or that you directly control. Many times, it's a utility provided by some third party.\n\nWe call this \"inversion of control,\" when you take part of your program and give over control of its execution to another third party. There's an unspoken \"contract\" that exists between your code and the third-party utility -- a set of things you expect to be maintained.\n\n### Tale of Five Callbacks\n\nIt might not be terribly obvious why this is such a big deal. Let me construct an exaggerated scenario to illustrate the hazards of trust at play.\n\nImagine you're a developer tasked with building out an ecommerce checkout system for a site that sells expensive TVs. You already have all the various pages of the checkout system built out just fine. 
On the last page, when the user clicks \"confirm\" to buy the TV, you need to call a third-party function (provided say by some analytics tracking company) so that the sale can be tracked.\n\nYou notice that they've provided what looks like an async tracking utility, probably for the sake of performance best practices, which means you need to pass in a callback function. In this continuation that you pass in, you will have the final code that charges the customer's credit card and displays the thank you page.\n\nThis code might look like:\n\n```js\nanalytics.trackPurchase( purchaseData, function(){\n\tchargeCreditCard();\n\tdisplayThankyouPage();\n} );\n```\n\nEasy enough, right? You write the code, test it, everything works, and you deploy to production. Everyone's happy!\n\nSix months go by and no issues. You've almost forgotten you even wrote that code. One morning, you're at a coffee shop before work, casually enjoying your latte, when you get a panicked call from your boss insisting you drop the coffee and rush into work right away.\n\nWhen you arrive, you find out that a high-profile customer has had his credit card charged five times for the same TV, and he's understandably upset. Customer service has already issued an apology and processed a refund. But your boss demands to know how this could possibly have happened. \"Don't we have tests for stuff like this!?\"\n\nYou don't even remember the code you wrote. But you dig back in and start trying to find out what could have gone awry.\n\nAfter digging through some logs, you come to the conclusion that the only explanation is that the analytics utility somehow, for some reason, called your callback five times instead of once. Nothing in their documentation mentions anything about this.\n\nFrustrated, you contact customer support, who of course is as astonished as you are. They agree to escalate it to their developers, and promise to get back to you. 
The next day, you receive a lengthy email explaining what they found, which you promptly forward to your boss.\n\nApparently, the developers at the analytics company had been working on some experimental code that, under certain conditions, would retry the provided callback once per second, for five seconds, before failing with a timeout. They had never intended to push that into production, but somehow they did, and they're totally embarrassed and apologetic. They go into plenty of detail about how they've identified the breakdown and what they'll do to ensure it never happens again. Yadda, yadda.\n\nWhat's next?\n\nYou talk it over with your boss, but he's not feeling particularly comfortable with the state of things. He insists, and you reluctantly agree, that you can't trust *them* anymore (that's what bit you), and that you'll need to figure out how to protect the checkout code from such a vulnerability again.\n\nAfter some tinkering, you implement some simple ad hoc code like the following, which the team seems happy with:\n\n```js\nvar tracked = false;\n\nanalytics.trackPurchase( purchaseData, function(){\n\tif (!tracked) {\n\t\ttracked = true;\n\t\tchargeCreditCard();\n\t\tdisplayThankyouPage();\n\t}\n} );\n```\n\n**Note:** This should look familiar to you from Chapter 1, because we're essentially creating a latch to handle if there happen to be multiple concurrent invocations of our callback.\n\nBut then one of your QA engineers asks, \"what happens if they never call the callback?\" Oops. Neither of you had thought about that.\n\nYou begin to chase down the rabbit hole, and think of all the possible things that could go wrong with them calling your callback. 
Here's roughly the list you come up with of ways the analytics utility could misbehave:\n\n* Call the callback too early (before it's been tracked)\n* Call the callback too late (or never)\n* Call the callback too few or too many times (like the problem you encountered!)\n* Fail to pass along any necessary environment/parameters to your callback\n* Swallow any errors/exceptions that may happen\n* ...\n\nThat should feel like a troubling list, because it is. You're probably slowly starting to realize that you're going to have to invent an awful lot of ad hoc logic **in each and every single callback** that's passed to a utility you're not positive you can trust.\n\nNow you realize a bit more completely just how hellish \"callback hell\" is.\n\n### Not Just Others' Code\n\nSome of you may be skeptical at this point whether this is as big a deal as I'm making it out to be. Perhaps you don't interact with truly third-party utilities much if at all. Perhaps you use versioned APIs or self-host such libraries, so that their behavior can't be changed out from underneath you.\n\nSo, contemplate this: can you even *really* trust utilities that you do theoretically control (in your own code base)?\n\nThink of it this way: most of us agree that at least to some extent we should build our own internal functions with some defensive checks on the input parameters, to reduce/prevent unexpected issues.\n\nOverly trusting of input:\n```js\nfunction addNumbers(x,y) {\n\t// + is overloaded with coercion to also be\n\t// string concatenation, so this operation\n\t// isn't strictly safe depending on what's\n\t// passed in.\n\treturn x + y;\n}\n\naddNumbers( 21, 21 );\t// 42\naddNumbers( 21, \"21\" );\t// \"2121\"\n```\n\nDefensive against untrusted input:\n```js\nfunction addNumbers(x,y) {\n\t// ensure numerical input\n\tif (typeof x != \"number\" || typeof y != \"number\") {\n\t\tthrow Error( \"Bad parameters\" );\n\t}\n\n\t// if we get here, + will safely do numeric addition\n\treturn x 
+ y;\n}\n\naddNumbers( 21, 21 );\t// 42\naddNumbers( 21, \"21\" );\t// Error: \"Bad parameters\"\n```\n\nOr perhaps still safe but friendlier:\n```js\nfunction addNumbers(x,y) {\n\t// ensure numerical input\n\tx = Number( x );\n\ty = Number( y );\n\n\t// + will safely do numeric addition\n\treturn x + y;\n}\n\naddNumbers( 21, 21 );\t// 42\naddNumbers( 21, \"21\" );\t// 42\n```\n\nHowever you go about it, these sorts of checks/normalizations are fairly common on function inputs, even with code we theoretically entirely trust. In a crude sort of way, it's like the programming equivalent of the geopolitical principle of \"Trust But Verify.\"\n\nSo, doesn't it stand to reason that we should do the same thing about composition of async function callbacks, not just with truly external code but even with code we know is generally \"under our own control\"? **Of course we should.**\n\nBut callbacks don't really offer anything to assist us. We have to construct all that machinery ourselves, and it often ends up being a lot of boilerplate/overhead that we repeat for every single async callback.\n\nThe most troublesome problem with callbacks is *inversion of control* leading to a complete breakdown along all those trust lines.\n\nIf you have code that uses callbacks, especially but not exclusively with third-party utilities, and you're not already applying some sort of mitigation logic for all these *inversion of control* trust issues, your code *has* bugs in it right now even though they may not have bitten you yet. Latent bugs are still bugs.\n\nHell indeed.\n\n## Trying to Save Callbacks\n\nThere are several variations of callback design that have attempted to address some (not all!) of the trust issues we've just looked at. 
It's a valiant, but doomed, effort to save the callback pattern from imploding on itself.\n\nFor example, regarding more graceful error handling, some API designs provide for split callbacks (one for the success notification, one for the error notification):\n\n```js\nfunction success(data) {\n\tconsole.log( data );\n}\n\nfunction failure(err) {\n\tconsole.error( err );\n}\n\najax( \"http://some.url.1\", success, failure );\n```\n\nIn APIs of this design, often the `failure()` error handler is optional, and if not provided it will be assumed you want the errors swallowed. Ugh.\n\n**Note:** This split-callback design is what the ES6 Promise API uses. We'll cover ES6 Promises in much more detail in the next chapter.\n\nAnother common callback pattern is called \"error-first style\" (sometimes called \"Node style,\" as it's also the convention used across nearly all Node.js APIs), where the first argument of a single callback is reserved for an error object (if any). If success, this argument will be empty/falsy (and any subsequent arguments will be the success data), but if an error result is being signaled, the first argument is set/truthy (and usually nothing else is passed):\n\n```js\nfunction response(err,data) {\n\t// error?\n\tif (err) {\n\t\tconsole.error( err );\n\t}\n\t// otherwise, assume success\n\telse {\n\t\tconsole.log( data );\n\t}\n}\n\najax( \"http://some.url.1\", response );\n```\n\nIn both of these cases, several things should be observed.\n\nFirst, it has not really resolved the majority of trust issues like it may appear. There's nothing about either callback that prevents or filters unwanted repeated invocations. 
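For instance, just to filter out repeated invocations yourself, you'd have to wrap every such pair of callbacks in a guard along these lines (a sketch only; `guardCallbacks(..)` is a made-up helper name, not part of any library):

```js
function guardCallbacks(success,failure) {
	// latch: only the first signal, of either kind, gets through
	var done = false;

	return {
		success: function(){
			if (!done) {
				done = true;
				success.apply( null, arguments );
			}
		},
		failure: function(){
			if (!done) {
				done = true;
				failure.apply( null, arguments );
			}
		}
	};
}

// usage sketch, with the split-callback `ajax(..)` from above:
// var cbs = guardCallbacks( success, failure );
// ajax( "http://some.url.1", cbs.success, cbs.failure );
```

Even so, that's yet more ceremony you'd have to repeat at every single call site.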
Moreover, things are worse now, because you may get both success and error signals, or neither, and you still have to code around either of those conditions.\n\nAlso, don't miss the fact that while it's a standard pattern you can employ, it's definitely more verbose and boilerplate-ish without much reuse, so you're going to get weary of typing all that out for every single callback in your application.\n\nWhat about the trust issue of never being called? If this is a concern (and it probably should be!), you likely will need to set up a timeout that cancels the event. You could make a utility (proof-of-concept only shown) to help you with that:\n\n```js\nfunction timeoutify(fn,delay) {\n\tvar intv = setTimeout( function(){\n\t\t\tintv = null;\n\t\t\tfn( new Error( \"Timeout!\" ) );\n\t\t}, delay )\n\t;\n\n\treturn function() {\n\t\t// timeout hasn't happened yet?\n\t\tif (intv) {\n\t\t\tclearTimeout( intv );\n\t\t\tfn.apply( this, [ null ].concat( [].slice.call( arguments ) ) );\n\t\t}\n\t};\n}\n```\n\nHere's how you use it:\n\n```js\n// using \"error-first style\" callback design\nfunction foo(err,data) {\n\tif (err) {\n\t\tconsole.error( err );\n\t}\n\telse {\n\t\tconsole.log( data );\n\t}\n}\n\najax( \"http://some.url.1\", timeoutify( foo, 500 ) );\n```\n\nAnother trust issue is being called \"too early.\" In application-specific terms, this may actually involve being called before some critical task is complete. But more generally, the problem is evident in utilities that can either invoke the callback you provide *now* (synchronously), or *later* (asynchronously).\n\nThis nondeterminism around the sync-or-async behavior is almost always going to lead to very difficult to track down bugs. In some circles, the fictional insanity-inducing monster named Zalgo is used to describe the sync/async nightmares. 
\"Don't release Zalgo!\" is a common cry, and it leads to very sound advice: always invoke callbacks asynchronously, even if that's \"right away\" on the next turn of the event loop, so that all callbacks are predictably async.\n\n**Note:** For more information on Zalgo, see Oren Golan's \"Don't Release Zalgo!\" (https://github.com/oren/oren.github.io/blob/master/posts/zalgo.md) and Isaac Z. Schlueter's \"Designing APIs for Asynchrony\" (http://blog.izs.me/post/59142742143/designing-apis-for-asynchrony).\n\nConsider:\n\n```js\nfunction result(data) {\n\tconsole.log( a );\n}\n\nvar a = 0;\n\najax( \"..pre-cached-url..\", result );\na++;\n```\n\nWill this code print `0` (sync callback invocation) or `1` (async callback invocation)? Depends... on the conditions.\n\nYou can see just how quickly the unpredictability of Zalgo can threaten any JS program. So the silly-sounding \"never release Zalgo\" is actually incredibly common and solid advice. Always be asyncing.\n\nWhat if you don't know whether the API in question will always execute async? 
You could invent a utility like this `asyncify(..)` proof-of-concept:\n\n```js\nfunction asyncify(fn) {\n\tvar orig_fn = fn,\n\t\tintv = setTimeout( function(){\n\t\t\tintv = null;\n\t\t\tif (fn) fn();\n\t\t}, 0 )\n\t;\n\n\tfn = null;\n\n\treturn function() {\n\t\t// firing too quickly, before `intv` timer has fired to\n\t\t// indicate async turn has passed?\n\t\tif (intv) {\n\t\t\tfn = orig_fn.bind.apply(\n\t\t\t\torig_fn,\n\t\t\t\t// add the wrapper's `this` to the `bind(..)`\n\t\t\t\t// call parameters, as well as currying any\n\t\t\t\t// passed in parameters\n\t\t\t\t[this].concat( [].slice.call( arguments ) )\n\t\t\t);\n\t\t}\n\t\t// already async\n\t\telse {\n\t\t\t// invoke original function\n\t\t\torig_fn.apply( this, arguments );\n\t\t}\n\t};\n}\n```\n\nYou use `asyncify(..)` like this:\n\n```js\nfunction result(data) {\n\tconsole.log( a );\n}\n\nvar a = 0;\n\najax( \"..pre-cached-url..\", asyncify( result ) );\na++;\n```\n\nWhether the Ajax request is in the cache and resolves to try to call the callback right away, or must be fetched over the wire and thus complete later asynchronously, this code will always output `1` instead of `0` -- `result(..)` cannot help but be invoked asynchronously, which means the `a++` has a chance to run before `result(..)` does.\n\nYay, another trust issue \"solved\"! But it's inefficient, and yet again more bloated boilerplate to weigh your project down.\n\nThat's just the story, over and over again, with callbacks. They can do pretty much anything you want, but you have to be willing to work hard to get it, and oftentimes this effort is much more than you can or should spend on such code reasoning.\n\nYou might find yourself wishing for built-in APIs or other language mechanics to address these issues. Finally ES6 has arrived on the scene with some great answers, so keep reading!\n\n## Review\n\nCallbacks are the fundamental unit of asynchrony in JS. 
But they're not enough for the evolving landscape of async programming as JS matures.\n\nFirst, our brains plan things out in sequential, blocking, single-threaded semantic ways, but callbacks express asynchronous flow in a rather nonlinear, nonsequential way, which makes reasoning properly about such code much harder. Code that's hard to reason about is bad code that leads to bad bugs.\n\nWe need a way to express asynchrony in a more synchronous, sequential, blocking manner, just like our brains do.\n\nSecond, and more importantly, callbacks suffer from *inversion of control* in that they implicitly give control over to another party (often a third-party utility not in your control!) to invoke the *continuation* of your program. This control transfer leads us to a troubling list of trust issues, such as whether the callback is called more times than we expect.\n\nInventing ad hoc logic to solve these trust issues is possible, but it's more difficult than it should be, and it produces clunkier and harder to maintain code, as well as code that is likely insufficiently protected from these hazards until you get visibly bitten by the bugs.\n\nWe need a generalized solution to **all of the trust issues**, one that can be reused for as many callbacks as we create without all the extra boilerplate overhead.\n\nWe need something better than callbacks. They've served us well to this point, but the *future* of JavaScript demands more sophisticated and capable async patterns. The subsequent chapters in this book will dive into those emerging evolutions.\n"
  },
  {
    "path": "async & performance/ch3.md",
"content": "# You Don't Know JS: Async & Performance\n# Chapter 3: Promises\n\nIn Chapter 2, we identified two major categories of deficiencies with using callbacks to express program asynchrony and manage concurrency: lack of sequentiality and lack of trustability. Now that we understand the problems more intimately, it's time we turn our attention to patterns that can address them.\n\nThe issue we want to address first is the *inversion of control*, the trust that is so fragilely held and so easily lost.\n\nRecall that we wrap up the *continuation* of our program in a callback function, and hand that callback over to another party (potentially even external code) and just cross our fingers that it will do the right thing with the invocation of the callback.\n\nWe do this because we want to say, \"here's what happens *later*, after the current step finishes.\"\n\nBut what if we could uninvert that *inversion of control*? What if instead of handing the continuation of our program to another party, we could expect it to return us a capability to know when its task finishes, and then our code could decide what to do next?\n\nThis paradigm is called **Promises**.\n\nPromises are starting to take the JS world by storm, as developers and specification writers alike desperately seek to untangle the insanity of callback hell in their code/design. In fact, most new async APIs being added to the JS/DOM platform are being built on Promises. So it's probably a good idea to dig in and learn them, don't you think!?\n\n**Note:** The word \"immediately\" will be used frequently in this chapter, generally to refer to some Promise resolution action. 
However, in essentially all cases, \"immediately\" means in terms of the Job queue behavior (see Chapter 1), not in the strictly synchronous *now* sense.\n\n## What Is a Promise?\n\nWhen developers decide to learn a new technology or pattern, usually their first step is \"Show me the code!\" It's quite natural for us to just jump in feet first and learn as we go.\n\nBut it turns out that some abstractions get lost on the APIs alone. Promises are one of those tools where it can be painfully obvious from how someone uses it whether they understand what it's for and about versus just learning and using the API.\n\nSo before I show the Promise code, I want to fully explain what a Promise really is conceptually. I hope this will then guide you better as you explore integrating Promise theory into your own async flow.\n\nWith that in mind, let's look at two different analogies for what a Promise *is*.\n\n### Future Value\n\nImagine this scenario: I walk up to the counter at a fast-food restaurant, and place an order for a cheeseburger. I hand the cashier $1.47. By placing my order and paying for it, I've made a request for a *value* back (the cheeseburger). I've started a transaction.\n\nBut often, the cheeseburger is not immediately available for me. The cashier hands me something in place of my cheeseburger: a receipt with an order number on it. This order number is an IOU (\"I owe you\") *promise* that ensures that eventually, I should receive my cheeseburger.\n\nSo I hold onto my receipt and order number. I know it represents my *future cheeseburger*, so I don't need to worry about it anymore -- aside from being hungry!\n\nWhile I wait, I can do other things, like send a text message to a friend that says, \"Hey, can you come join me for lunch? I'm going to eat a cheeseburger.\"\n\nI am reasoning about my *future cheeseburger* already, even though I don't have it in my hands yet. 
My brain is able to do this because it's treating the order number as a placeholder for the cheeseburger. The placeholder essentially makes the value *time independent*. It's a **future value**.\n\nEventually, I hear, \"Order 113!\" and I gleefully walk back up to the counter with receipt in hand. I hand my receipt to the cashier, and I take my cheeseburger in return.\n\nIn other words, once my *future value* was ready, I exchanged my value-promise for the value itself.\n\nBut there's another possible outcome. They call my order number, but when I go to retrieve my cheeseburger, the cashier regretfully informs me, \"I'm sorry, but we appear to be all out of cheeseburgers.\" Setting aside the customer frustration of this scenario for a moment, we can see an important characteristic of *future values*: they can either indicate a success or failure.\n\nEvery time I order a cheeseburger, I know that I'll either get a cheeseburger eventually, or I'll get the sad news of the cheeseburger shortage, and I'll have to figure out something else to eat for lunch.\n\n**Note:** In code, things are not quite as simple, because metaphorically the order number may never be called, in which case we're left indefinitely in an unresolved state. We'll come back to dealing with that case later.\n\n#### Values Now and Later\n\nThis all might sound too mentally abstract to apply to your code. So let's be more concrete.\n\nHowever, before we can introduce how Promises work in this fashion, we're going to derive in code that we already understand -- callbacks! -- how to handle these *future values*.\n\nWhen you write code to reason about a value, such as performing math on a `number`, whether you realize it or not, you've been assuming something very fundamental about that value, which is that it's a concrete *now* value already:\n\n```js\nvar x, y = 2;\n\nconsole.log( x + y ); // NaN  <-- because `x` isn't set yet\n```\n\nThe `x + y` operation assumes both `x` and `y` are already set. 
In terms we'll expound on shortly, we assume the `x` and `y` values are already *resolved*.\n\nIt would be nonsense to expect that the `+` operator by itself would somehow be magically capable of detecting and waiting around until both `x` and `y` are resolved (aka ready), only then to do the operation. That would cause chaos in the program if different statements finished *now* and others finished *later*, right?\n\nHow could you possibly reason about the relationships between two statements if either one (or both) of them might not be finished yet? If statement 2 relies on statement 1 being finished, there are just two outcomes: either statement 1 finished right *now* and everything proceeds fine, or statement 1 didn't finish yet, and thus statement 2 is going to fail.\n\nIf this sort of thing sounds familiar from Chapter 1, good!\n\nLet's go back to our `x + y` math operation. Imagine if there was a way to say, \"Add `x` and `y`, but if either of them isn't ready yet, just wait until they are. Add them as soon as you can.\"\n\nYour brain might have just jumped to callbacks. 
OK, so...\n\n```js\nfunction add(getX,getY,cb) {\n\tvar x, y;\n\tgetX( function(xVal){\n\t\tx = xVal;\n\t\t// both are ready?\n\t\tif (y != undefined) {\n\t\t\tcb( x + y );\t// send along sum\n\t\t}\n\t} );\n\tgetY( function(yVal){\n\t\ty = yVal;\n\t\t// both are ready?\n\t\tif (x != undefined) {\n\t\t\tcb( x + y );\t// send along sum\n\t\t}\n\t} );\n}\n\n// `fetchX()` and `fetchY()` are sync or async\n// functions\nadd( fetchX, fetchY, function(sum){\n\tconsole.log( sum ); // that was easy, huh?\n} );\n```\n\nTake just a moment to let the beauty (or lack thereof) of that snippet sink in (whistles patiently).\n\nWhile the ugliness is undeniable, there's something very important to notice about this async pattern.\n\nIn that snippet, we treated `x` and `y` as future values, and we express an operation `add(..)` that (from the outside) does not care whether `x` or `y` or both are available right away or not. In other words, it normalizes the *now* and *later*, such that we can rely on a predictable outcome of the `add(..)` operation.\n\nBy using an `add(..)` that is temporally consistent -- it behaves the same across *now* and *later* times -- the async code is much easier to reason about.\n\nTo put it more plainly: to consistently handle both *now* and *later*, we make both of them *later*: all operations become async.\n\nOf course, this rough callbacks-based approach leaves much to be desired. It's just a first tiny step toward realizing the benefits of reasoning about *future values* without worrying about the time aspect of when it's available or not.\n\n#### Promise Value\n\nWe'll definitely go into a lot more detail about Promises later in the chapter -- so don't worry if some of this is confusing -- but let's just briefly glimpse at how we can express the `x + y` example via `Promise`s:\n\n```js\nfunction add(xPromise,yPromise) {\n\t// `Promise.all([ .. 
])` takes an array of promises,\n\t// and returns a new promise that waits on them\n\t// all to finish\n\treturn Promise.all( [xPromise, yPromise] )\n\n\t// when that promise is resolved, let's take the\n\t// received `X` and `Y` values and add them together.\n\t.then( function(values){\n\t\t// `values` is an array of the messages from the\n\t\t// previously resolved promises\n\t\treturn values[0] + values[1];\n\t} );\n}\n\n// `fetchX()` and `fetchY()` return promises for\n// their respective values, which may be ready\n// *now* or *later*.\nadd( fetchX(), fetchY() )\n\n// we get a promise back for the sum of those\n// two numbers.\n// now we chain-call `then(..)` to wait for the\n// resolution of that returned promise.\n.then( function(sum){\n\tconsole.log( sum ); // that was easier!\n} );\n```\n\nThere are two layers of Promises in this snippet.\n\n`fetchX()` and `fetchY()` are called directly, and the values they return (promises!) are passed into `add(..)`. The underlying values those promises represent may be ready *now* or *later*, but each promise normalizes the behavior to be the same regardless. We reason about `X` and `Y` values in a time-independent way. They are *future values*.\n\nThe second layer is the promise that `add(..)` creates (via `Promise.all([ .. ])`) and returns, which we wait on by calling `then(..)`. When the `add(..)` operation completes, our `sum` *future value* is ready and we can print it out. We hide inside of `add(..)` the logic for waiting on the `X` and `Y` *future values*.\n\n**Note:** Inside `add(..)`, the `Promise.all([ .. ])` call creates a promise (which is waiting on `promiseX` and `promiseY` to resolve). The chained call to `.then(..)` creates another promise, which the `return values[0] + values[1]` line immediately resolves (with the result of the addition). 
Thus, the `then(..)` call we chain off the end of the `add(..)` call -- at the end of the snippet -- is actually operating on that second promise returned, rather than the first one created by `Promise.all([ .. ])`. Also, though we are not chaining off the end of that second `then(..)`, it too has created another promise, had we chosen to observe/use it. This Promise chaining stuff will be explained in much greater detail later in this chapter.\n\nJust like with cheeseburger orders, it's possible that the resolution of a Promise is rejection instead of fulfillment. Unlike a fulfilled Promise, where the value is always programmatic, a rejection value -- commonly called a \"rejection reason\" -- can either be set directly by the program logic, or it can result implicitly from a runtime exception.\n\nWith Promises, the `then(..)` call can actually take two functions, the first for fulfillment (as shown earlier), and the second for rejection:\n\n```js\nadd( fetchX(), fetchY() )\n.then(\n\t// fulfillment handler\n\tfunction(sum) {\n\t\tconsole.log( sum );\n\t},\n\t// rejection handler\n\tfunction(err) {\n\t\tconsole.error( err ); // bummer!\n\t}\n);\n```\n\nIf something went wrong getting `X` or `Y`, or something somehow failed during the addition, the promise that `add(..)` returns is rejected, and the second callback error handler passed to `then(..)` will receive the rejection value from the promise.\n\nBecause Promises encapsulate the time-dependent state -- waiting on the fulfillment or rejection of the underlying value -- from the outside, the Promise itself is time-independent, and thus Promises can be composed (combined) in predictable ways regardless of the timing or outcome underneath.\n\nMoreover, once a Promise is resolved, it stays that way forever -- it becomes an *immutable value* at that point -- and can then be *observed* as many times as necessary.\n\n**Note:** Because a Promise is externally immutable once resolved, it's now safe to pass that value 
around to any party and know that it cannot be modified accidentally or maliciously. This is especially true in relation to multiple parties observing the resolution of a Promise. It is not possible for one party to affect another party's ability to observe Promise resolution. Immutability may sound like an academic topic, but it's actually one of the most fundamental and important aspects of Promise design, and shouldn't be casually passed over.\n\nThat's one of the most powerful and important concepts to understand about Promises. With a fair amount of work, you could ad hoc create the same effects with nothing but ugly callback composition, but that's not really an effective strategy, especially because you have to do it over and over again.\n\nPromises are an easily repeatable mechanism for encapsulating and composing *future values*.\n\n### Completion Event\n\nAs we just saw, an individual Promise behaves as a *future value*. But there's another way to think of the resolution of a Promise: as a flow-control mechanism -- a temporal this-then-that -- for two or more steps in an asynchronous task.\n\nLet's imagine calling a function `foo(..)` to perform some task. We don't know about any of its details, nor do we care. It may complete the task right away, or it may take a while.\n\nWe just simply need to know when `foo(..)` finishes so that we can move on to our next task. In other words, we'd like a way to be notified of `foo(..)`'s completion so that we can *continue*.\n\nIn typical JavaScript fashion, if you need to listen for a notification, you'd likely think of that in terms of events. So we could reframe our need for notification as a need to listen for a *completion* (or *continuation*) event emitted by `foo(..)`.\n\n**Note:** Whether you call it a \"completion event\" or a \"continuation event\" depends on your perspective. Is the focus more on what happens with `foo(..)`, or what happens *after* `foo(..)` finishes? 
Both perspectives are accurate and useful. The event notification tells us that `foo(..)` has *completed*, but also that it's OK to *continue* with the next step. Indeed, the callback you pass to be called for the event notification is itself what we've previously called a *continuation*. Because *completion event* is a bit more focused on the `foo(..)`, which has more of our attention at present, we slightly favor *completion event* for the rest of this text.

With callbacks, the "notification" would be our callback invoked by the task (`foo(..)`). But with Promises, we turn the relationship around, and expect that we can listen for an event from `foo(..)`, and when notified, proceed accordingly.

First, consider some pseudocode:

```js
foo(x) {
	// start doing something that could take a while
}

foo( 42 )

on (foo "completion") {
	// now we can do the next step!
}

on (foo "error") {
	// oops, something went wrong in `foo(..)`
}
```

We call `foo(..)` and then we set up two event listeners, one for `"completion"` and one for `"error"` -- the two possible *final* outcomes of the `foo(..)` call. In essence, `foo(..)` doesn't even appear to be aware that the calling code has subscribed to these events, which makes for a very nice *separation of concerns*.

Unfortunately, such code would require some "magic" of the JS environment that doesn't exist (and would likely be a bit impractical). 
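Nothing stops us from building such an event-subscription capability ourselves, though. Purely as an illustrative sketch -- the `createListener()`, `on(..)`, and `emit(..)` names here are invented for this example, not part of any built-in JS API -- a minimal event capability might look like:

```js
// illustrative sketch only: `createListener`, `on`, and
// `emit` are made-up names, not a built-in JS API
function createListener() {
	var handlers = {};

	return {
		// register a callback for a named event
		on: function(evtName,cb) {
			(handlers[evtName] = handlers[evtName] || [])
				.push( cb );
		},
		// notify all callbacks registered for a named event
		emit: function(evtName,data) {
			(handlers[evtName] || []).forEach( function(cb){
				cb( data );
			} );
		}
	};
}
```

A task function could create such a listener object, return it to its caller, and later `emit(..)` a `"completion"` or `"error"` event when it finishes.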
Here's the more natural way we could express that in JS:\n\n```js\nfunction foo(x) {\n\t// start doing something that could take a while\n\n\t// make a `listener` event notification\n\t// capability to return\n\n\treturn listener;\n}\n\nvar evt = foo( 42 );\n\nevt.on( \"completion\", function(){\n\t// now we can do the next step!\n} );\n\nevt.on( \"failure\", function(err){\n\t// oops, something went wrong in `foo(..)`\n} );\n```\n\n`foo(..)` expressly creates an event subscription capability to return back, and the calling code receives and registers the two event handlers against it.\n\nThe inversion from normal callback-oriented code should be obvious, and it's intentional. Instead of passing the callbacks to `foo(..)`, it returns an event capability we call `evt`, which receives the callbacks.\n\nBut if you recall from Chapter 2, callbacks themselves represent an *inversion of control*. So inverting the callback pattern is actually an *inversion of inversion*, or an *uninversion of control* -- restoring control back to the calling code where we wanted it to be in the first place.\n\nOne important benefit is that multiple separate parts of the code can be given the event listening capability, and they can all independently be notified of when `foo(..)` completes to perform subsequent steps after its completion:\n\n```js\nvar evt = foo( 42 );\n\n// let `bar(..)` listen to `foo(..)`'s completion\nbar( evt );\n\n// also, let `baz(..)` listen to `foo(..)`'s completion\nbaz( evt );\n```\n\n*Uninversion of control* enables a nicer *separation of concerns*, where `bar(..)` and `baz(..)` don't need to be involved in how `foo(..)` is called. 
Similarly, `foo(..)` doesn't need to know or care that `bar(..)` and `baz(..)` exist or are waiting to be notified when `foo(..)` completes.\n\nEssentially, this `evt` object is a neutral third-party negotiation between the separate concerns.\n\n#### Promise \"Events\"\n\nAs you may have guessed by now, the `evt` event listening capability is an analogy for a Promise.\n\nIn a Promise-based approach, the previous snippet would have `foo(..)` creating and returning a `Promise` instance, and that promise would then be passed to `bar(..)` and `baz(..)`.\n\n**Note:** The Promise resolution \"events\" we listen for aren't strictly events (though they certainly behave like events for these purposes), and they're not typically called `\"completion\"` or `\"error\"`. Instead, we use `then(..)` to register a `\"then\"` event. Or perhaps more precisely, `then(..)` registers `\"fulfillment\"` and/or `\"rejection\"` event(s), though we don't see those terms used explicitly in the code.\n\nConsider:\n\n```js\nfunction foo(x) {\n\t// start doing something that could take a while\n\n\t// construct and return a promise\n\treturn new Promise( function(resolve,reject){\n\t\t// eventually, call `resolve(..)` or `reject(..)`,\n\t\t// which are the resolution callbacks for\n\t\t// the promise.\n\t} );\n}\n\nvar p = foo( 42 );\n\nbar( p );\n\nbaz( p );\n```\n\n**Note:** The pattern shown with `new Promise( function(..){ .. } )` is generally called the [\"revealing constructor\"](http://domenic.me/2014/02/13/the-revealing-constructor-pattern/). The function passed in is executed immediately (not async deferred, as callbacks to `then(..)` are), and it's provided two parameters, which in this case we've named `resolve` and `reject`. These are the resolution functions for the promise. 
`resolve(..)` generally signals fulfillment, and `reject(..)` signals rejection.\n\nYou can probably guess what the internals of `bar(..)` and `baz(..)` might look like:\n\n```js\nfunction bar(fooPromise) {\n\t// listen for `foo(..)` to complete\n\tfooPromise.then(\n\t\tfunction(){\n\t\t\t// `foo(..)` has now finished, so\n\t\t\t// do `bar(..)`'s task\n\t\t},\n\t\tfunction(){\n\t\t\t// oops, something went wrong in `foo(..)`\n\t\t}\n\t);\n}\n\n// ditto for `baz(..)`\n```\n\nPromise resolution doesn't necessarily need to involve sending along a message, as it did when we were examining Promises as *future values*. It can just be a flow-control signal, as used in the previous snippet.\n\nAnother way to approach this is:\n\n```js\nfunction bar() {\n\t// `foo(..)` has definitely finished, so\n\t// do `bar(..)`'s task\n}\n\nfunction oopsBar() {\n\t// oops, something went wrong in `foo(..)`,\n\t// so `bar(..)` didn't run\n}\n\n// ditto for `baz()` and `oopsBaz()`\n\nvar p = foo( 42 );\n\np.then( bar, oopsBar );\n\np.then( baz, oopsBaz );\n```\n\n**Note:** If you've seen Promise-based coding before, you might be tempted to believe that the last two lines of that code could be written as `p.then( .. ).then( .. )`, using chaining, rather than `p.then(..); p.then(..)`. That would have an entirely different behavior, so be careful! The difference might not be clear right now, but it's actually a different async pattern than we've seen thus far: splitting/forking. Don't worry! We'll come back to this point later in this chapter.\n\nInstead of passing the `p` promise to `bar(..)` and `baz(..)`, we use the promise to control when `bar(..)` and `baz(..)` will get executed, if ever. The primary difference is in the error handling.\n\nIn the first snippet's approach, `bar(..)` is called regardless of whether `foo(..)` succeeds or fails, and it handles its own fallback logic if it's notified that `foo(..)` failed. 
The same is true for `baz(..)`, obviously.\n\nIn the second snippet, `bar(..)` only gets called if `foo(..)` succeeds, and otherwise `oopsBar(..)` gets called. Ditto for `baz(..)`.\n\nNeither approach is *correct* per se. There will be cases where one is preferred over the other.\n\nIn either case, the promise `p` that comes back from `foo(..)` is used to control what happens next.\n\nMoreover, the fact that both snippets end up calling `then(..)` twice against the same promise `p` illustrates the point made earlier, which is that Promises (once resolved) retain their same resolution (fulfillment or rejection) forever, and can subsequently be observed as many times as necessary.\n\nWhenever `p` is resolved, the next step will always be the same, both *now* and *later*.\n\n## Thenable Duck Typing\n\nIn Promises-land, an important detail is how to know for sure if some value is a genuine Promise or not. Or more directly, is it a value that will behave like a Promise?\n\nGiven that Promises are constructed by the `new Promise(..)` syntax, you might think that `p instanceof Promise` would be an acceptable check. But unfortunately, there are a number of reasons that's not totally sufficient.\n\nMainly, you can receive a Promise value from another browser window (iframe, etc.), which would have its own Promise different from the one in the current window/frame, and that check would fail to identify the Promise instance.\n\nMoreover, a library or framework may choose to vend its own Promises and not use the native ES6 `Promise` implementation to do so. In fact, you may very well be using Promises with libraries in older browsers that have no Promise at all.\n\nWhen we discuss Promise resolution processes later in this chapter, it will become more obvious why a non-genuine-but-Promise-like value would still be very important to be able to recognize and assimilate. 
But for now, just take my word for it that it's a critical piece of the puzzle.\n\nAs such, it was decided that the way to recognize a Promise (or something that behaves like a Promise) would be to define something called a \"thenable\" as any object or function which has a `then(..)` method on it. It is assumed that any such value is a Promise-conforming thenable.\n\nThe general term for \"type checks\" that make assumptions about a value's \"type\" based on its shape (what properties are present) is called \"duck typing\" -- \"If it looks like a duck, and quacks like a duck, it must be a duck\" (see the *Types & Grammar* title of this book series). So the duck typing check for a thenable would roughly be:\n\n```js\nif (\n\tp !== null &&\n\t(\n\t\ttypeof p === \"object\" ||\n\t\ttypeof p === \"function\"\n\t) &&\n\ttypeof p.then === \"function\"\n) {\n\t// assume it's a thenable!\n}\nelse {\n\t// not a thenable\n}\n```\n\nYuck! Setting aside the fact that this logic is a bit ugly to implement in various places, there's something deeper and more troubling going on.\n\nIf you try to fulfill a Promise with any object/function value that happens to have a `then(..)` function on it, but you weren't intending it to be treated as a Promise/thenable, you're out of luck, because it will automatically be recognized as thenable and treated with special rules (see later in the chapter).\n\nThis is even true if you didn't realize the value has a `then(..)` on it. For example:\n\n```js\nvar o = { then: function(){} };\n\n// make `v` be `[[Prototype]]`-linked to `o`\nvar v = Object.create( o );\n\nv.someStuff = \"cool\";\nv.otherStuff = \"not so cool\";\n\nv.hasOwnProperty( \"then\" );\t\t// false\n```\n\n`v` doesn't look like a Promise or thenable at all. It's just a plain object with some properties on it. 
You're probably just intending to send that value around like any other object.\n\nBut unknown to you, `v` is also `[[Prototype]]`-linked (see the *this & Object Prototypes* title of this book series) to another object `o`, which happens to have a `then(..)` on it. So the thenable duck typing checks will think and assume `v` is a thenable. Uh oh.\n\nIt doesn't even need to be something as directly intentional as that:\n\n```js\nObject.prototype.then = function(){};\nArray.prototype.then = function(){};\n\nvar v1 = { hello: \"world\" };\nvar v2 = [ \"Hello\", \"World\" ];\n```\n\nBoth `v1` and `v2` will be assumed to be thenables. You can't control or predict if any other code accidentally or maliciously adds `then(..)` to `Object.prototype`, `Array.prototype`, or any of the other native prototypes. And if what's specified is a function that doesn't call either of its parameters as callbacks, then any Promise resolved with such a value will just silently hang forever! Crazy.\n\nSound implausible or unlikely? Perhaps.\n\nBut keep in mind that there were several well-known non-Promise libraries preexisting in the community prior to ES6 that happened to already have a method on them called `then(..)`. Some of those libraries chose to rename their own methods to avoid collision (that sucks!). Others have simply been relegated to the unfortunate status of \"incompatible with Promise-based coding\" in reward for their inability to change to get out of the way.\n\nThe standards decision to hijack the previously nonreserved -- and completely general-purpose sounding -- `then` property name means that no value (or any of its delegates), either past, present, or future, can have a `then(..)` function present, either on purpose or by accident, or that value will be confused for a thenable in Promises systems, which will probably create bugs that are really hard to track down.\n\n**Warning:** I do not like how we ended up with duck typing of thenables for Promise recognition. 
There were other options, such as \"branding\" or even \"anti-branding\"; what we got seems like a worst-case compromise. But it's not all doom and gloom. Thenable duck typing can be helpful, as we'll see later. Just beware that thenable duck typing can be hazardous if it incorrectly identifies something as a Promise that isn't.\n\n## Promise Trust\n\nWe've now seen two strong analogies that explain different aspects of what Promises can do for our async code. But if we stop there, we've missed perhaps the single most important characteristic that the Promise pattern establishes: trust.\n\nWhereas the *future values* and *completion events* analogies play out explicitly in the code patterns we've explored, it won't be entirely obvious why or how Promises are designed to solve all of the *inversion of control* trust issues we laid out in the \"Trust Issues\" section of Chapter 2. But with a little digging, we can uncover some important guarantees that restore the confidence in async coding that Chapter 2 tore down!\n\nLet's start by reviewing the trust issues with callbacks-only coding. 
When you pass a callback to a utility `foo(..)`, it might:\n\n* Call the callback too early\n* Call the callback too late (or never)\n* Call the callback too few or too many times\n* Fail to pass along any necessary environment/parameters\n* Swallow any errors/exceptions that may happen\n\nThe characteristics of Promises are intentionally designed to provide useful, repeatable answers to all these concerns.\n\n### Calling Too Early\n\nPrimarily, this is a concern of whether code can introduce Zalgo-like effects (see Chapter 2), where sometimes a task finishes synchronously and sometimes asynchronously, which can lead to race conditions.\n\nPromises by definition cannot be susceptible to this concern, because even an immediately fulfilled Promise (like `new Promise(function(resolve){ resolve(42); })`) cannot be *observed* synchronously.\n\nThat is, when you call `then(..)` on a Promise, even if that Promise was already resolved, the callback you provide to `then(..)` will **always** be called asynchronously (for more on this, refer back to \"Jobs\" in Chapter 1).\n\nNo more need to insert your own `setTimeout(..,0)` hacks. Promises prevent Zalgo automatically.\n\n### Calling Too Late\n\nSimilar to the previous point, a Promise's `then(..)` registered observation callbacks are automatically scheduled when either `resolve(..)` or `reject(..)` are called by the Promise creation capability. Those scheduled callbacks will predictably be fired at the next asynchronous moment (see \"Jobs\" in Chapter 1).\n\nIt's not possible for synchronous observation, so it's not possible for a synchronous chain of tasks to run in such a way to in effect \"delay\" another callback from happening as expected. 
That is, when a Promise is resolved, all `then(..)` registered callbacks on it will be called, in order, immediately at the next asynchronous opportunity (again, see \"Jobs\" in Chapter 1), and nothing that happens inside of one of those callbacks can affect/delay the calling of the other callbacks.\n\nFor example:\n\n```js\np.then( function(){\n\tp.then( function(){\n\t\tconsole.log( \"C\" );\n\t} );\n\tconsole.log( \"A\" );\n} );\np.then( function(){\n\tconsole.log( \"B\" );\n} );\n// A B C\n```\n\nHere, `\"C\"` cannot interrupt and precede `\"B\"`, by virtue of how Promises are defined to operate.\n\n#### Promise Scheduling Quirks\n\nIt's important to note, though, that there are lots of nuances of scheduling where the relative ordering between callbacks chained off two separate Promises is not reliably predictable.\n\nIf two promises `p1` and `p2` are both already resolved, it should be true that `p1.then(..); p2.then(..)` would end up calling the callback(s) for `p1` before the ones for `p2`. But there are subtle cases where that might not be true, such as the following:\n\n```js\nvar p3 = new Promise( function(resolve,reject){\n\tresolve( \"B\" );\n} );\n\nvar p1 = new Promise( function(resolve,reject){\n\tresolve( p3 );\n} );\n\nvar p2 = new Promise( function(resolve,reject){\n\tresolve( \"A\" );\n} );\n\np1.then( function(v){\n\tconsole.log( v );\n} );\n\np2.then( function(v){\n\tconsole.log( v );\n} );\n\n// A B  <-- not  B A  as you might expect\n```\n\nWe'll cover this more later, but as you can see, `p1` is resolved not with an immediate value, but with another promise `p3` which is itself resolved with the value `\"B\"`. The specified behavior is to *unwrap* `p3` into `p1`, but asynchronously, so `p1`'s callback(s) are *behind* `p2`'s callback(s) in the asynchronous Job queue (see Chapter 1).\n\nTo avoid such nuanced nightmares, you should never rely on anything about the ordering/scheduling of callbacks across Promises. 
In fact, a good practice is not to code in such a way that the ordering of multiple callbacks matters at all. Avoid that if you can.

### Never Calling the Callback

This is a very common concern. It's addressable in several ways with Promises.

First, nothing (not even a JS error) can prevent a Promise from notifying you of its resolution (if it's resolved). If you register both fulfillment and rejection callbacks for a Promise, and the Promise gets resolved, one of the two callbacks will always be called.

Of course, if your callbacks themselves have JS errors, you may not see the outcome you expect, but the callback will in fact have been called. We'll cover later how to be notified of an error in your callback, because even those don't get swallowed.

But what if the Promise itself never gets resolved either way? Even that is a condition that Promises provide an answer for, using a higher-level abstraction called a "race":

```js
// a utility for timing out a Promise
function timeoutPromise(delay) {
	return new Promise( function(resolve,reject){
		setTimeout( function(){
			reject( "Timeout!" );
		}, delay );
	} );
}

// set up a timeout for `foo()`
Promise.race( [
	foo(),					// attempt `foo()`
	timeoutPromise( 3000 )	// give it 3 seconds
] )
.then(
	function(){
		// `foo(..)` fulfilled in time!
	},
	function(err){
		// either `foo()` rejected, or it just
		// didn't finish in time, so inspect
		// `err` to know which
	}
);
```

There are more details to consider with this Promise timeout pattern, but we'll come back to it later.

Importantly, we can ensure a signal as to the outcome of `foo()`, to prevent it from hanging our program indefinitely.

### Calling Too Few or Too Many Times

By definition, *one* is the appropriate number of times for the callback to be called. 
The \"too few\" case would be zero calls, which is the same as the \"never\" case we just examined.\n\nThe \"too many\" case is easy to explain. Promises are defined so that they can only be resolved once. If for some reason the Promise creation code tries to call `resolve(..)` or `reject(..)` multiple times, or tries to call both, the Promise will accept only the first resolution, and will silently ignore any subsequent attempts.\n\nBecause a Promise can only be resolved once, any `then(..)` registered callbacks will only ever be called once (each).\n\nOf course, if you register the same callback more than once, (e.g., `p.then(f); p.then(f);`), it'll be called as many times as it was registered.  The guarantee that a response function is called only once does not prevent you from shooting yourself in the foot.\n\n### Failing to Pass Along Any Parameters/Environment\n\nPromises can have, at most, one resolution value (fulfillment or rejection).\n\nIf you don't explicitly resolve with a value either way, the value is `undefined`, as is typical in JS. But whatever the value, it will always be passed to all registered (and appropriate: fulfillment or rejection) callbacks, either *now* or in the future.\n\nSomething to be aware of: If you call `resolve(..)` or `reject(..)` with multiple parameters, all subsequent parameters beyond the first will be silently ignored. Although that might seem a violation of the guarantee we just described, it's not exactly, because it constitutes an invalid usage of the Promise mechanism. 
Other invalid usages of the API (such as calling `resolve(..)` multiple times) are similarly *protected*, so the Promise behavior here is consistent (if not a tiny bit frustrating).\n\nIf you want to pass along multiple values, you must wrap them in another single value that you pass, such as an `array` or an `object`.\n\nAs for environment, functions in JS always retain their closure of the scope in which they're defined (see the *Scope & Closures* title of this series), so they of course would continue to have access to whatever surrounding state you provide. Of course, the same is true of callbacks-only design, so this isn't a specific augmentation of benefit from Promises -- but it's a guarantee we can rely on nonetheless.\n\n### Swallowing Any Errors/Exceptions\n\nIn the base sense, this is a restatement of the previous point. If you reject a Promise with a *reason* (aka error message), that value is passed to the rejection callback(s).\n\nBut there's something much bigger at play here. If at any point in the creation of a Promise, or in the observation of its resolution, a JS exception error occurs, such as a `TypeError` or `ReferenceError`, that exception will be caught, and it will force the Promise in question to become rejected.\n\nFor example:\n\n```js\nvar p = new Promise( function(resolve,reject){\n\tfoo.bar();\t// `foo` is not defined, so error!\n\tresolve( 42 );\t// never gets here :(\n} );\n\np.then(\n\tfunction fulfilled(){\n\t\t// never gets here :(\n\t},\n\tfunction rejected(err){\n\t\t// `err` will be a `TypeError` exception object\n\t\t// from the `foo.bar()` line.\n\t}\n);\n```\n\nThe JS exception that occurs from `foo.bar()` becomes a Promise rejection that you can catch and respond to.\n\nThis is an important detail, because it effectively solves another potential Zalgo moment, which is that errors could create a synchronous reaction whereas nonerrors would be asynchronous. 
Promises turn even JS exceptions into asynchronous behavior, thereby reducing the race condition chances greatly.\n\nBut what happens if a Promise is fulfilled, but there's a JS exception error during the observation (in a `then(..)` registered callback)? Even those aren't lost, but you may find how they're handled a bit surprising, until you dig in a little deeper:\n\n```js\nvar p = new Promise( function(resolve,reject){\n\tresolve( 42 );\n} );\n\np.then(\n\tfunction fulfilled(msg){\n\t\tfoo.bar();\n\t\tconsole.log( msg );\t// never gets here :(\n\t},\n\tfunction rejected(err){\n\t\t// never gets here either :(\n\t}\n);\n```\n\nWait, that makes it seem like the exception from `foo.bar()` really did get swallowed. Never fear, it didn't. But something deeper is wrong, which is that we've failed to listen for it. The `p.then(..)` call itself returns another promise, and it's *that* promise that will be rejected with the `TypeError` exception.\n\nWhy couldn't it just call the error handler we have defined there? Seems like a logical behavior on the surface. But it would violate the fundamental principle that Promises are **immutable** once resolved. `p` was already fulfilled to the value `42`, so it can't later be changed to a rejection just because there's an error in observing `p`'s resolution.\n\nBesides the principle violation, such behavior could wreak havoc, if say there were multiple `then(..)` registered callbacks on the promise `p`, because some would get called and others wouldn't, and it would be very opaque as to why.\n\n### Trustable Promise?\n\nThere's one last detail to examine to establish trust based on the Promise pattern.\n\nYou've no doubt noticed that Promises don't get rid of callbacks at all. They just change where the callback is passed to. 
Instead of passing a callback to `foo(..)`, we get *something* (ostensibly a genuine Promise) back from `foo(..)`, and we pass the callback to that *something* instead.\n\nBut why would this be any more trustable than just callbacks alone? How can we be sure the *something* we get back is in fact a trustable Promise? Isn't it basically all just a house of cards where we can trust only because we already trusted?\n\nOne of the most important, but often overlooked, details of Promises is that they have a solution to this issue as well. Included with the native ES6 `Promise` implementation is `Promise.resolve(..)`.\n\nIf you pass an immediate, non-Promise, non-thenable value to `Promise.resolve(..)`, you get a promise that's fulfilled with that value. In other words, these two promises `p1` and `p2` will behave basically identically:\n\n```js\nvar p1 = new Promise( function(resolve,reject){\n\tresolve( 42 );\n} );\n\nvar p2 = Promise.resolve( 42 );\n```\n\nBut if you pass a genuine Promise to `Promise.resolve(..)`, you just get the same promise back:\n\n```js\nvar p1 = Promise.resolve( 42 );\n\nvar p2 = Promise.resolve( p1 );\n\np1 === p2; // true\n```\n\nEven more importantly, if you pass a non-Promise thenable value to `Promise.resolve(..)`, it will attempt to unwrap that value, and the unwrapping will keep going until a concrete final non-Promise-like value is extracted.\n\nRecall our previous discussion of thenables?\n\nConsider:\n\n```js\nvar p = {\n\tthen: function(cb) {\n\t\tcb( 42 );\n\t}\n};\n\n// this works OK, but only by good fortune\np\n.then(\n\tfunction fulfilled(val){\n\t\tconsole.log( val ); // 42\n\t},\n\tfunction rejected(err){\n\t\t// never gets here\n\t}\n);\n```\n\nThis `p` is a thenable, but it's not a genuine Promise. Luckily, it's reasonable, as most will be. 
But what if you got back instead something that looked like:\n\n```js\nvar p = {\n\tthen: function(cb,errcb) {\n\t\tcb( 42 );\n\t\terrcb( \"evil laugh\" );\n\t}\n};\n\np\n.then(\n\tfunction fulfilled(val){\n\t\tconsole.log( val ); // 42\n\t},\n\tfunction rejected(err){\n\t\t// oops, shouldn't have run\n\t\tconsole.log( err ); // evil laugh\n\t}\n);\n```\n\nThis `p` is a thenable but it's not so well behaved of a promise. Is it malicious? Or is it just ignorant of how Promises should work? It doesn't really matter, to be honest. In either case, it's not trustable as is.\n\nNonetheless, we can pass either of these versions of `p` to `Promise.resolve(..)`, and we'll get the normalized, safe result we'd expect:\n\n```js\nPromise.resolve( p )\n.then(\n\tfunction fulfilled(val){\n\t\tconsole.log( val ); // 42\n\t},\n\tfunction rejected(err){\n\t\t// never gets here\n\t}\n);\n```\n\n`Promise.resolve(..)` will accept any thenable, and will unwrap it to its non-thenable value. But you get back from `Promise.resolve(..)` a real, genuine Promise in its place, **one that you can trust**. If what you passed in is already a genuine Promise, you just get it right back, so there's no downside at all to filtering through `Promise.resolve(..)` to gain trust.\n\nSo let's say we're calling a `foo(..)` utility and we're not sure we can trust its return value to be a well-behaving Promise, but we know it's at least a thenable. `Promise.resolve(..)` will give us a trustable Promise wrapper to chain off of:\n\n```js\n// don't just do this:\nfoo( 42 )\n.then( function(v){\n\tconsole.log( v );\n} );\n\n// instead, do this:\nPromise.resolve( foo( 42 ) )\n.then( function(v){\n\tconsole.log( v );\n} );\n```\n\n**Note:** Another beneficial side effect of wrapping `Promise.resolve(..)` around any function's return value (thenable or not) is that it's an easy way to normalize that function call into a well-behaving async task. 
If `foo(42)` returns an immediate value sometimes, or a Promise other times, `Promise.resolve( foo(42) )` makes sure it's always a Promise result. And avoiding Zalgo makes for much better code.\n\n### Trust Built\n\nHopefully the previous discussion now fully \"resolves\" (pun intended) in your mind why the Promise is trustable, and more importantly, why that trust is so critical in building robust, maintainable software.\n\nCan you write async code in JS without trust? Of course you can. We JS developers have been coding async with nothing but callbacks for nearly two decades.\n\nBut once you start questioning just how much you can trust the mechanisms you build upon to actually be predictable and reliable, you start to realize callbacks have a pretty shaky trust foundation.\n\nPromises are a pattern that augments callbacks with trustable semantics, so that the behavior is more reason-able and more reliable. By uninverting the *inversion of control* of callbacks, we place the control with a trustable system (Promises) that was designed specifically to bring sanity to our async.\n\n## Chain Flow\n\nWe've hinted at this a couple of times already, but Promises are not just a mechanism for a single-step *this-then-that* sort of operation. That's the building block, of course, but it turns out we can string multiple Promises together to represent a sequence of async steps.\n\nThe key to making this work is built on two behaviors intrinsic to Promises:\n\n* Every time you call `then(..)` on a Promise, it creates and returns a new Promise, which we can *chain* with.\n* Whatever value you return from the `then(..)` call's fulfillment callback (the first parameter) is automatically set as the fulfillment of the *chained* Promise (from the first point).\n\nLet's first illustrate what that means, and *then* we'll derive how that helps us create async sequences of flow control. 
Consider the following:\n\n```js\nvar p = Promise.resolve( 21 );\n\nvar p2 = p.then( function(v){\n\tconsole.log( v );\t// 21\n\n\t// fulfill `p2` with value `42`\n\treturn v * 2;\n} );\n\n// chain off `p2`\np2.then( function(v){\n\tconsole.log( v );\t// 42\n} );\n```\n\nBy returning `v * 2` (i.e., `42`), we fulfill the `p2` promise that the first `then(..)` call created and returned. When `p2`'s `then(..)` call runs, it's receiving the fulfillment from the `return v * 2` statement. Of course, `p2.then(..)` creates yet another promise, which we could have stored in a `p3` variable.\n\nBut it's a little annoying to have to create an intermediate variable `p2` (or `p3`, etc.). Thankfully, we can easily just chain these together:\n\n```js\nvar p = Promise.resolve( 21 );\n\np\n.then( function(v){\n\tconsole.log( v );\t// 21\n\n\t// fulfill the chained promise with value `42`\n\treturn v * 2;\n} )\n// here's the chained promise\n.then( function(v){\n\tconsole.log( v );\t// 42\n} );\n```\n\nSo now the first `then(..)` is the first step in an async sequence, and the second `then(..)` is the second step. This could keep going for as long as you needed it to extend. Just keep chaining off a previous `then(..)` with each automatically created Promise.\n\nBut there's something missing here. What if we want step 2 to wait for step 1 to do something asynchronous? We're using an immediate `return` statement, which immediately fulfills the chained promise.\n\nThe key to making a Promise sequence truly async capable at every step is to recall how `Promise.resolve(..)` operates when what you pass to it is a Promise or thenable instead of a final value. `Promise.resolve(..)` directly returns a received genuine Promise, or it unwraps the value of a received thenable -- and keeps going recursively while it keeps unwrapping thenables.\n\nThe same sort of unwrapping happens if you `return` a thenable or Promise from the fulfillment (or rejection) handler. 
Consider:\n\n```js\nvar p = Promise.resolve( 21 );\n\np.then( function(v){\n\tconsole.log( v );\t// 21\n\n\t// create a promise and return it\n\treturn new Promise( function(resolve,reject){\n\t\t// fulfill with value `42`\n\t\tresolve( v * 2 );\n\t} );\n} )\n.then( function(v){\n\tconsole.log( v );\t// 42\n} );\n```\n\nEven though we wrapped `42` up in a promise that we returned, it still got unwrapped and ended up as the resolution of the chained promise, such that the second `then(..)` still received `42`. If we introduce asynchrony to that wrapping promise, everything still nicely works the same:\n\n```js\nvar p = Promise.resolve( 21 );\n\np.then( function(v){\n\tconsole.log( v );\t// 21\n\n\t// create a promise to return\n\treturn new Promise( function(resolve,reject){\n\t\t// introduce asynchrony!\n\t\tsetTimeout( function(){\n\t\t\t// fulfill with value `42`\n\t\t\tresolve( v * 2 );\n\t\t}, 100 );\n\t} );\n} )\n.then( function(v){\n\t// runs after the 100ms delay in the previous step\n\tconsole.log( v );\t// 42\n} );\n```\n\nThat's incredibly powerful! Now we can construct a sequence of however many async steps we want, and each step can delay the next step (or not!), as necessary.\n\nOf course, the value passing from step to step in these examples is optional. If you don't return an explicit value, an implicit `undefined` is assumed, and the promises still chain together the same way. 
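As a quick sketch of that implicit `undefined` passing (the `steps` array here is just for illustration, to record what each step receives):

```js
var p = Promise.resolve( 21 );

var steps = [];

var chain = p
.then( function(v){
	steps.push( v );	// 21

	// no `return` statement, so the chained
	// promise is fulfilled with `undefined`
} )
.then( function(v){
	steps.push( v );	// undefined
} );
```

The chain still proceeds from step to step just the same; the steps simply carry no meaningful message.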
Each Promise resolution is thus just a signal to proceed to the next step.\n\nTo further the chain illustration, let's generalize a delay-Promise creation (without resolution messages) into a utility we can reuse for multiple steps:\n\n```js\nfunction delay(time) {\n\treturn new Promise( function(resolve,reject){\n\t\tsetTimeout( resolve, time );\n\t} );\n}\n\ndelay( 100 ) // step 1\n.then( function STEP2(){\n\tconsole.log( \"step 2 (after 100ms)\" );\n\treturn delay( 200 );\n} )\n.then( function STEP3(){\n\tconsole.log( \"step 3 (after another 200ms)\" );\n} )\n.then( function STEP4(){\n\tconsole.log( \"step 4 (next Job)\" );\n\treturn delay( 50 );\n} )\n.then( function STEP5(){\n\tconsole.log( \"step 5 (after another 50ms)\" );\n} )\n...\n```\n\nCalling `delay(200)` creates a promise that will fulfill in 200ms, and then we return that from the first `then(..)` fulfillment callback, which causes the second `then(..)`'s promise to wait on that 200ms promise.\n\n**Note:** As described, technically there are two promises in that interchange: the 200ms-delay promise and the chained promise that the second `then(..)` chains from. But you may find it easier to mentally combine these two promises together, because the Promise mechanism automatically merges their states for you. In that respect, you could think of `return delay(200)` as creating a promise that replaces the earlier-returned chained promise.\n\nTo be honest, though, a sequence of delays with no message passing isn't a terribly useful example of Promise flow control. 
Let's look at a scenario that's a little more practical.\n\nInstead of timers, let's consider making Ajax requests:\n\n```js\n// assume an `ajax( {url}, {callback} )` utility\n\n// Promise-aware ajax\nfunction request(url) {\n\treturn new Promise( function(resolve,reject){\n\t\t// the `ajax(..)` callback should be our\n\t\t// promise's `resolve(..)` function\n\t\tajax( url, resolve );\n\t} );\n}\n```\n\nWe first define a `request(..)` utility that constructs a promise to represent the completion of the `ajax(..)` call:\n\n```js\nrequest( \"http://some.url.1/\" )\n.then( function(response1){\n\treturn request( \"http://some.url.2/?v=\" + response1 );\n} )\n.then( function(response2){\n\tconsole.log( response2 );\n} );\n```\n\n**Note:** Developers commonly encounter situations in which they want to do Promise-aware async flow control with utilities that are not themselves Promise-enabled (like `ajax(..)` here, which expects a callback). Although the native ES6 `Promise` mechanism doesn't automatically solve this pattern for us, practically all Promise libraries *do*. They usually call this process \"lifting\" or \"promisifying\" or some variation thereof. We'll come back to this technique later.\n\nUsing the Promise-returning `request(..)`, we create the first step in our chain implicitly by calling it with the first URL, and chain off that returned promise with the first `then(..)`.\n\nOnce `response1` comes back, we use that value to construct a second URL, and make a second `request(..)` call. That second `request(..)` promise is `return`ed so that the third step in our async flow control waits for that Ajax call to complete. Finally, we print `response2` once it returns.\n\nThe Promise chain we construct is not only a flow control that expresses a multistep async sequence, but it also acts as a message channel to propagate messages from step to step.\n\nWhat if something went wrong in one of the steps of the Promise chain? 
An error/exception is on a per-Promise basis, which means it's possible to catch such an error at any point in the chain, and that catching acts to sort of \"reset\" the chain back to normal operation at that point:\n\n```js\n// step 1:\nrequest( \"http://some.url.1/\" )\n\n// step 2:\n.then( function(response1){\n\tfoo.bar(); // undefined, error!\n\n\t// never gets here\n\treturn request( \"http://some.url.2/?v=\" + response1 );\n} )\n\n// step 3:\n.then(\n\tfunction fulfilled(response2){\n\t\t// never gets here\n\t},\n\t// rejection handler to catch the error\n\tfunction rejected(err){\n\t\tconsole.log( err );\t// `TypeError` from `foo.bar()` error\n\t\treturn 42;\n\t}\n)\n\n// step 4:\n.then( function(msg){\n\tconsole.log( msg );\t\t// 42\n} );\n```\n\nWhen the error occurs in step 2, the rejection handler in step 3 catches it. The return value (`42` in this snippet), if any, from that rejection handler fulfills the promise for the next step (4), such that the chain is now back in a fulfillment state.\n\n**Note:** As we discussed earlier, when returning a promise from a fulfillment handler, it's unwrapped and can delay the next step. That's also true for returning promises from rejection handlers, such that if the `return 42` in step 3 instead returned a promise, that promise could delay step 4. 
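Here's a minimal sketch of that delaying behavior from a rejection handler (the timings and values are arbitrary, chosen just for illustration):

```js
var start = Date.now();

var result = Promise.reject( "Oops" )
.then(
	null,
	function rejected(err){
		// return a promise instead of an immediate
		// value, which delays the next step ~100ms
		return new Promise( function(resolve,reject){
			setTimeout( function(){
				resolve( 42 );
			}, 100 );
		} );
	}
)
.then( function(msg){
	// runs only after the 100ms delay
	return { msg: msg, elapsed: Date.now() - start };
} );
```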
A thrown exception inside either the fulfillment or rejection handler of a `then(..)` call causes the next (chained) promise to be immediately rejected with that exception.\n\nIf you call `then(..)` on a promise, and you only pass a fulfillment handler to it, an assumed rejection handler is substituted:\n\n```js\nvar p = new Promise( function(resolve,reject){\n\treject( \"Oops\" );\n} );\n\nvar p2 = p.then(\n\tfunction fulfilled(){\n\t\t// never gets here\n\t}\n\t// assumed rejection handler, if omitted or\n\t// any other non-function value passed\n\t// function(err) {\n\t//     throw err;\n\t// }\n);\n```\n\nAs you can see, the assumed rejection handler simply rethrows the error, which ends up forcing `p2` (the chained promise) to reject with the same error reason. In essence, this allows the error to continue propagating along a Promise chain until an explicitly defined rejection handler is encountered.\n\n**Note:** We'll cover more details of error handling with Promises a little later, because there are other nuanced details to be concerned about.\n\nIf a proper valid function is not passed as the fulfillment handler parameter to `then(..)`, there's also a default handler substituted:\n\n```js\nvar p = Promise.resolve( 42 );\n\np.then(\n\t// assumed fulfillment handler, if omitted or\n\t// any other non-function value passed\n\t// function(v) {\n\t//     return v;\n\t// }\n\tnull,\n\tfunction rejected(err){\n\t\t// never gets here\n\t}\n);\n```\n\nAs you can see, the default fulfillment handler simply passes whatever value it receives along to the next step (Promise).\n\n**Note:** The `then(null,function(err){ .. })` pattern -- only handling rejections (if any) but letting fulfillments pass through -- has a shortcut in the API: `catch(function(err){ .. })`. 
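For example, these two chains behave identically when `p` rejects:

```js
var p = Promise.reject( "Oops" );

var r1 = p.then( null, function(err){
	return "handled: " + err;
} );

// same behavior, via the shortcut
var r2 = p.catch( function(err){
	return "handled: " + err;
} );
```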
We'll cover `catch(..)` more fully in the next section.\n\nLet's review briefly the intrinsic behaviors of Promises that enable chaining flow control:\n\n* A `then(..)` call against one Promise automatically produces a new Promise to return from the call.\n* Inside the fulfillment/rejection handlers, if you return a value or an exception is thrown, the new returned (chainable) Promise is resolved accordingly.\n* If the fulfillment or rejection handler returns a Promise, it is unwrapped, so that whatever its resolution is will become the resolution of the chained Promise returned from the current `then(..)`.\n\nWhile chaining flow control is helpful, it's probably most accurate to think of it as a side benefit of how Promises compose (combine) together, rather than the main intent. As we've discussed in detail several times already, Promises normalize asynchrony and encapsulate time-dependent value state, and *that* is what lets us chain them together in this useful way.\n\nCertainly, the sequential expressiveness of the chain (this-then-this-then-this...) is a big improvement over the tangled mess of callbacks as we identified in Chapter 2. But there's still a fair amount of boilerplate (`then(..)` and `function(){ .. }`) to wade through. In the next chapter, we'll see a significantly nicer pattern for sequential flow control expressivity, with generators.\n\n### Terminology: Resolve, Fulfill, and Reject\n\nThere's some slight confusion around the terms \"resolve,\" \"fulfill,\" and \"reject\" that we need to clear up, before you get too much deeper into learning about Promises. Let's first consider the `Promise(..)` constructor:\n\n```js\nvar p = new Promise( function(X,Y){\n\t// X() for fulfillment\n\t// Y() for rejection\n} );\n```\n\nAs you can see, two callbacks (here labeled `X` and `Y`) are provided. The first is *usually* used to mark the Promise as fulfilled, and the second *always* marks the Promise as rejected. 
But what's the \"usually\" about, and what does that imply about accurately naming those parameters?\n\nUltimately, it's just your user code and the identifier names aren't interpreted by the engine to mean anything, so it doesn't *technically* matter; `foo(..)` and `bar(..)` are equally functional. But the words you use can affect not only how you are thinking about the code, but how other developers on your team will think about it. Thinking wrongly about carefully orchestrated async code is almost surely going to be worse than the spaghetti-callback alternatives.\n\nSo it actually does kind of matter what you call them.\n\nThe second parameter is easy to decide. Almost all literature uses `reject(..)` as its name, and because that's exactly (and only!) what it does, that's a very good choice for the name. I'd strongly recommend you always use `reject(..)`.\n\nBut there's a little more ambiguity around the first parameter, which in Promise literature is often labeled `resolve(..)`. That word is obviously related to \"resolution,\" which is what's used across the literature (including this book) to describe setting a final value/state to a Promise. We've already used \"resolve the Promise\" several times to mean either fulfilling or rejecting the Promise.\n\nBut if this parameter seems to be used to specifically fulfill the Promise, why shouldn't we call it `fulfill(..)` instead of `resolve(..)` to be more accurate? To answer that question, let's also take a look at two of the `Promise` API methods:\n\n```js\nvar fulfilledPr = Promise.resolve( 42 );\n\nvar rejectedPr = Promise.reject( \"Oops\" );\n```\n\n`Promise.resolve(..)` creates a Promise that's resolved to the value given to it. In this example, `42` is a normal, non-Promise, non-thenable value, so the fulfilled promise `fulfilledPr` is created for the value `42`. 
`Promise.reject(\"Oops\")` creates the rejected promise `rejectedPr` for the reason `\"Oops\"`.\n\nLet's now illustrate why the word \"resolve\" (such as in `Promise.resolve(..)`) is unambiguous and indeed more accurate, if used explicitly in a context that could result in either fulfillment or rejection:\n\n```js\nvar rejectedTh = {\n\tthen: function(resolved,rejected) {\n\t\trejected( \"Oops\" );\n\t}\n};\n\nvar rejectedPr = Promise.resolve( rejectedTh );\n```\n\nAs we discussed earlier in this chapter, `Promise.resolve(..)` will return a received genuine Promise directly, or unwrap a received thenable. If that thenable unwrapping reveals a rejected state, the Promise returned from `Promise.resolve(..)` is in fact in that same rejected state.\n\nSo `Promise.resolve(..)` is a good, accurate name for the API method, because it can actually result in either fulfillment or rejection.\n\nThe first callback parameter of the `Promise(..)` constructor will unwrap either a thenable (identically to `Promise.resolve(..)`) or a genuine Promise:\n\n```js\nvar rejectedPr = new Promise( function(resolve,reject){\n\t// resolve this promise with a rejected promise\n\tresolve( Promise.reject( \"Oops\" ) );\n} );\n\nrejectedPr.then(\n\tfunction fulfilled(){\n\t\t// never gets here\n\t},\n\tfunction rejected(err){\n\t\tconsole.log( err );\t// \"Oops\"\n\t}\n);\n```\n\nIt should be clear now that `resolve(..)` is the appropriate name for the first callback parameter of the `Promise(..)` constructor.\n\n**Warning:** The previously mentioned `reject(..)` does **not** do the unwrapping that `resolve(..)` does. If you pass a Promise/thenable value to `reject(..)`, that untouched value will be set as the rejection reason. A subsequent rejection handler would receive the actual Promise/thenable you passed to `reject(..)`, not its underlying immediate value.\n\nBut now let's turn our attention to the callbacks provided to `then(..)`. 
What should they be called (both in literature and in code)? I would suggest `fulfilled(..)` and `rejected(..)`:\n\n```js\nfunction fulfilled(msg) {\n\tconsole.log( msg );\n}\n\nfunction rejected(err) {\n\tconsole.error( err );\n}\n\np.then(\n\tfulfilled,\n\trejected\n);\n```\n\nIn the case of the first parameter to `then(..)`, it's unambiguously always the fulfillment case, so there's no need for the duality of \"resolve\" terminology. As a side note, the ES6 specification uses `onFulfilled(..)` and `onRejected(..)` to label these two callbacks, so they are accurate terms.\n\n## Error Handling\n\nWe've already seen several examples of how Promise rejection -- either intentional through calling `reject(..)` or accidental through JS exceptions -- allows saner error handling in asynchronous programming. Let's circle back though and be explicit about some of the details that we glossed over.\n\nThe most natural form of error handling for most developers is the synchronous `try..catch` construct. Unfortunately, it's synchronous-only, so it fails to help in async code patterns:\n\n```js\nfunction foo() {\n\tsetTimeout( function(){\n\t\tbaz.bar();\n\t}, 100 );\n}\n\ntry {\n\tfoo();\n\t// later throws global error from `baz.bar()`\n}\ncatch (err) {\n\t// never gets here\n}\n```\n\n`try..catch` would certainly be nice to have, but it doesn't work across async operations. 
That is, unless there's some additional environmental support, which we'll come back to with generators in Chapter 4.\n\nIn callbacks, some standards have emerged for patterned error handling, most notably the \"error-first callback\" style:\n\n```js\nfunction foo(cb) {\n\tsetTimeout( function(){\n\t\ttry {\n\t\t\tvar x = baz.bar();\n\t\t\tcb( null, x ); // success!\n\t\t}\n\t\tcatch (err) {\n\t\t\tcb( err );\n\t\t}\n\t}, 100 );\n}\n\nfoo( function(err,val){\n\tif (err) {\n\t\tconsole.error( err ); // bummer :(\n\t}\n\telse {\n\t\tconsole.log( val );\n\t}\n} );\n```\n\n**Note:** The `try..catch` here works only from the perspective that the `baz.bar()` call will either succeed or fail immediately, synchronously. If `baz.bar()` was itself its own async completing function, any async errors inside it would not be catchable.\n\nThe callback we pass to `foo(..)` expects to receive a signal of an error by the reserved first parameter `err`. If present, error is assumed. If not, success is assumed.\n\nThis sort of error handling is technically *async capable*, but it doesn't compose well at all. Multiple levels of error-first callbacks woven together with these ubiquitous `if` statement checks inevitably will lead you to the perils of callback hell (see Chapter 2).\n\nSo we come back to error handling in Promises, with the rejection handler passed to `then(..)`. 
Promises don't use the popular \"error-first callback\" design style, but instead use \"split callbacks\" style; there's one callback for fulfillment and one for rejection:\n\n```js\nvar p = Promise.reject( \"Oops\" );\n\np.then(\n\tfunction fulfilled(){\n\t\t// never gets here\n\t},\n\tfunction rejected(err){\n\t\tconsole.log( err ); // \"Oops\"\n\t}\n);\n```\n\nWhile this pattern of error handling makes fine sense on the surface, the nuances of Promise error handling are often a fair bit more difficult to fully grasp.\n\nConsider:\n\n```js\nvar p = Promise.resolve( 42 );\n\np.then(\n\tfunction fulfilled(msg){\n\t\t// numbers don't have string functions,\n\t\t// so will throw an error\n\t\tconsole.log( msg.toLowerCase() );\n\t},\n\tfunction rejected(err){\n\t\t// never gets here\n\t}\n);\n```\n\nIf the `msg.toLowerCase()` legitimately throws an error (it does!), why doesn't our error handler get notified? As we explained earlier, it's because *that* error handler is for the `p` promise, which has already been fulfilled with value `42`. The `p` promise is immutable, so the only promise that can be notified of the error is the one returned from `p.then(..)`, which in this case we don't capture.\n\nThat should paint a clear picture of why error handling with Promises is error-prone (pun intended). It's far too easy to have errors swallowed, as this is very rarely what you'd intend.\n\n**Warning:** If you use the Promise API in an invalid way and an error occurs that prevents proper Promise construction, the result will be an immediately thrown exception, **not a rejected Promise**. One example of incorrect usage that fails Promise construction is `new Promise(null)`, which throws a `TypeError` synchronously because no executor function was provided. (Note that in the final ES6 spec, invalid arguments to `Promise.all(..)` and `Promise.race(..)`, such as `Promise.all()` or `Promise.race(42)`, instead produce a rejected promise rather than a synchronously thrown exception.) 
You can't get a rejected Promise if you don't use the Promise API validly enough to actually construct a Promise in the first place!\n\n### Pit of Despair\n\nJeff Atwood noted years ago: programming languages are often set up in such a way that by default, developers fall into the \"pit of despair\" (http://blog.codinghorror.com/falling-into-the-pit-of-success/) -- where accidents are punished -- and that you have to try harder to get it right. He implored us to instead create a \"pit of success,\" where by default you fall into expected (successful) action, and thus would have to try hard to fail.\n\nPromise error handling is unquestionably \"pit of despair\" design. By default, it assumes that you want any error to be swallowed by the Promise state, and if you forget to observe that state, the error silently languishes/dies in obscurity -- usually despair.\n\nTo avoid losing an error to the silence of a forgotten/discarded Promise, some developers have claimed that a \"best practice\" for Promise chains is to always end your chain with a final `catch(..)`, like:\n\n```js\nvar p = Promise.resolve( 42 );\n\np.then(\n\tfunction fulfilled(msg){\n\t\t// numbers don't have string functions,\n\t\t// so will throw an error\n\t\tconsole.log( msg.toLowerCase() );\n\t}\n)\n.catch( handleErrors );\n```\n\nBecause we didn't pass a rejection handler to the `then(..)`, the default handler was substituted, which simply propagates the error to the next promise in the chain. As such, both errors that come into `p`, and errors that come *after* `p` in its resolution (like the `msg.toLowerCase()` one) will filter down to the final `handleErrors(..)`.\n\nProblem solved, right? Not so fast!\n\nWhat happens if `handleErrors(..)` itself also has an error in it? Who catches that? 
There's still yet another unattended promise: the one `catch(..)` returns, which we don't capture and don't register a rejection handler for.\n\nYou can't just stick another `catch(..)` on the end of that chain, because it too could fail. The last step in any Promise chain, whatever it is, always has the possibility, even decreasingly so, of dangling with an uncaught error stuck inside an unobserved Promise.\n\nSound like an impossible conundrum yet?\n\n### Uncaught Handling\n\nIt's not exactly an easy problem to solve completely. There are other ways to approach it which many would say are *better*.\n\nSome Promise libraries have added methods for registering something like a \"global unhandled rejection\" handler, which would be called instead of a globally thrown error. But their solution for how to identify an error as \"uncaught\" is to have an arbitrary-length timer, say 3 seconds, running from time of rejection. If a Promise is rejected but no error handler is registered before the timer fires, then it's assumed that you won't ever be registering a handler, so it's \"uncaught.\"\n\nIn practice, this has worked well for many libraries, as most usage patterns don't typically call for significant delay between Promise rejection and observation of that rejection. But this pattern is troublesome because 3 seconds is so arbitrary (even if empirical), and also because there are indeed some cases where you want a Promise to hold on to its rejectedness for some indefinite period of time, and you don't really want to have your \"uncaught\" handler called for all those false positives (not-yet-handled \"uncaught errors\").\n\nAnother more common suggestion is that Promises should have a `done(..)` added to them, which essentially marks the Promise chain as \"done.\" `done(..)` doesn't create and return a Promise, so the callbacks passed to `done(..)` are obviously not wired up to report problems to a chained Promise that doesn't exist.\n\nSo what happens instead? 
It's treated as you might usually expect in uncaught error conditions: any exception inside a `done(..)` rejection handler would be thrown as a global uncaught error (in the developer console, basically):\n\n```js\nvar p = Promise.resolve( 42 );\n\np.then(\n\tfunction fulfilled(msg){\n\t\t// numbers don't have string functions,\n\t\t// so will throw an error\n\t\tconsole.log( msg.toLowerCase() );\n\t}\n)\n.done( null, handleErrors );\n\n// if `handleErrors(..)` caused its own exception, it would\n// be thrown globally here\n```\n\nThis might sound more attractive than the never-ending chain or the arbitrary timeouts. But the biggest problem is that it's not part of the ES6 standard, so no matter how good it sounds, at best it's a lot longer way off from being a reliable and ubiquitous solution.\n\nAre we just stuck, then? Not entirely.\n\nBrowsers have a unique capability that our code does not have: they can track and know for sure when any object gets thrown away and garbage collected. So, browsers can track Promise objects, and whenever they get garbage collected, if they have a rejection in them, the browser knows for sure this was a legitimate \"uncaught error,\" and can thus confidently know it should report it to the developer console.\n\n**Note:** At the time of this writing, both Chrome and Firefox have early attempts at that sort of \"uncaught rejection\" capability, though support is incomplete at best.\n\nHowever, if a Promise doesn't get garbage collected -- it's exceedingly easy for that to accidentally happen through lots of different coding patterns -- the browser's garbage collection sniffing won't help you know and diagnose that you have a silently rejected Promise laying around.\n\nIs there any other alternative? Yes.\n\n### Pit of Success\n\nThe following is just theoretical, how Promises *could* be someday changed to behave. I believe it would be far superior to what we currently have. 
And I think this change would be possible even post-ES6 because I don't think it would break web compatibility with ES6 Promises. Moreover, it can be polyfilled/prollyfilled in, if you're careful. Let's take a look:\n\n* Promises could default to reporting (to the developer console) any rejection, on the next Job or event loop tick, if at that exact moment no error handler has been registered for the Promise.\n* For the cases where you want a rejected Promise to hold onto its rejected state for an indefinite amount of time before observing, you could call `defer()`, which suppresses automatic error reporting on that Promise.\n\nIf a Promise is rejected, it defaults to noisily reporting that fact to the developer console (instead of defaulting to silence). You can opt out of that reporting either implicitly (by registering an error handler before rejection), or explicitly (with `defer()`). In either case, *you* control the false positives.\n\nConsider:\n\n```js\nvar p = Promise.reject( \"Oops\" ).defer();\n\n// `foo(..)` is Promise-aware\nfoo( 42 )\n.then(\n\tfunction fulfilled(){\n\t\treturn p;\n\t},\n\tfunction rejected(err){\n\t\t// handle `foo(..)` error\n\t}\n);\n...\n```\n\nWhen we create `p`, we know we're going to wait a while to use/observe its rejection, so we call `defer()` -- thus no global reporting. `defer()` simply returns the same promise, for chaining purposes.\n\nThe promise returned from `foo(..)` gets an error handler attached *right away*, so it's implicitly opted out and no global reporting for it occurs either.\n\nBut the promise returned from the `then(..)` call has no `defer()` or error handler attached, so if it rejects (from inside either resolution handler), then *it* will be reported to the developer console as an uncaught error.\n\n**This design is a pit of success.** By default, all errors are either handled or reported -- what almost all developers in almost all cases would expect. 
You either have to register a handler or you have to intentionally opt out, and indicate you intend to defer error handling until *later*; you're opting for the extra responsibility in just that specific case.\n\nThe only real danger in this approach is if you `defer()` a Promise but then fail to actually ever observe/handle its rejection.\n\nBut you had to intentionally call `defer()` to opt into that pit of despair -- the default was the pit of success -- so there's not much else we could do to save you from your own mistakes.\n\nI think there's still hope for Promise error handling (post-ES6). I hope the powers that be will rethink the situation and consider this alternative. In the meantime, you can implement this yourself (a challenging exercise for the reader!), or use a *smarter* Promise library that does so for you!\n\n**Note:** This exact model for error handling/reporting is implemented in my *asynquence* Promise abstraction library, which will be discussed in Appendix A of this book.\n\n## Promise Patterns\n\nWe've already implicitly seen the sequence pattern with Promise chains (this-then-this-then-that flow control) but there are lots of variations on asynchronous patterns that we can build as abstractions on top of Promises. These patterns serve to simplify the expression of async flow control -- which helps make our code more reason-able and more maintainable -- even in the most complex parts of our programs.\n\nTwo such patterns are codified directly into the native ES6 `Promise` implementation, so we get them for free, to use as building blocks for other patterns.\n\n### Promise.all([ .. ])\n\nIn an async sequence (Promise chain), only one async task is being coordinated at any given moment -- step 2 strictly follows step 1, and step 3 strictly follows step 2. 
But what about doing two or more steps concurrently (aka \"in parallel\")?\n\nIn classic programming terminology, a \"gate\" is a mechanism that waits on two or more parallel/concurrent tasks to complete before continuing. It doesn't matter what order they finish in, just that all of them have to complete for the gate to open and let the flow control through.\n\nIn the Promise API, we call this pattern `all([ .. ])`.\n\nSay you wanted to make two Ajax requests at the same time, and wait for both to finish, regardless of their order, before making a third Ajax request. Consider:\n\n```js\n// `request(..)` is a Promise-aware Ajax utility,\n// like we defined earlier in the chapter\n\nvar p1 = request( \"http://some.url.1/\" );\nvar p2 = request( \"http://some.url.2/\" );\n\nPromise.all( [p1,p2] )\n.then( function(msgs){\n\t// both `p1` and `p2` fulfill and pass in\n\t// their messages here\n\treturn request(\n\t\t\"http://some.url.3/?v=\" + msgs.join(\",\")\n\t);\n} )\n.then( function(msg){\n\tconsole.log( msg );\n} );\n```\n\n`Promise.all([ .. ])` expects a single argument, an `array`, consisting generally of Promise instances. The promise returned from the `Promise.all([ .. ])` call will receive a fulfillment message (`msgs` in this snippet) that is an `array` of all the fulfillment messages from the passed in promises, in the same order as specified (regardless of fulfillment order).\n\n**Note:** Technically, the `array` of values passed into `Promise.all([ .. ])` can include Promises, thenables, or even immediate values. Each value in the list is essentially passed through `Promise.resolve(..)` to make sure it's a genuine Promise to be waited on, so an immediate value will just be normalized into a Promise for that value. If the `array` is empty, the main Promise is immediately fulfilled.\n\nThe main promise returned from `Promise.all([ .. ])` will only be fulfilled if and when all its constituent promises are fulfilled. 
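
Both behaviors from that note are easy to verify:

```js
var p = Promise.resolve( 42 );
var th = { then: function(cb){ cb( "hi" ); } };

// a genuine Promise, a thenable, and an immediate
// value are all normalized via `Promise.resolve(..)`
Promise.all( [p, th, "immediate"] )
.then( function(msgs){
	console.log( msgs );	// [42,"hi","immediate"]
} );

// an empty array fulfills right away
Promise.all( [] )
.then( function(msgs){
	console.log( msgs );	// []
} );
```
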
If any one of those promises instead is rejected, the main `Promise.all([ .. ])` promise is immediately rejected, discarding all results from any other promises.\n\nRemember to always attach a rejection/error handler to every promise, even and especially the one that comes back from `Promise.all([ .. ])`.\n\n### Promise.race([ .. ])\n\nWhile `Promise.all([ .. ])` coordinates multiple Promises concurrently and assumes all are needed for fulfillment, sometimes you only want to respond to the \"first Promise to cross the finish line,\" letting the other Promises fall away.\n\nThis pattern is classically called a \"latch,\" but in Promises it's called a \"race.\"\n\n**Warning:** While the metaphor of \"only the first across the finish line wins\" fits the behavior well, unfortunately \"race\" is kind of a loaded term, because \"race conditions\" are generally taken as bugs in programs (see Chapter 1). Don't confuse `Promise.race([ .. ])` with \"race condition.\"\n\n`Promise.race([ .. ])` also expects a single `array` argument, containing one or more Promises, thenables, or immediate values. It doesn't make much practical sense to have a race with immediate values, because the first one listed will obviously win -- like a foot race where one runner starts at the finish line!\n\nSimilar to `Promise.all([ .. ])`, `Promise.race([ .. ])` will fulfill if and when any Promise resolution is a fulfillment, and it will reject if and when any Promise resolution is a rejection.\n\n**Warning:** A \"race\" requires at least one \"runner,\" so if you pass an empty `array`, instead of immediately resolving, the main `race([..])` Promise will never resolve. This is a footgun! ES6 should have specified that it either fulfills, rejects, or just throws some sort of synchronous error. 
Unfortunately, because of precedence in Promise libraries predating ES6 `Promise`, they had to leave this gotcha in there, so be careful never to send in an empty `array`.\n\nLet's revisit our previous concurrent Ajax example, but in the context of a race between `p1` and `p2`:\n\n```js\n// `request(..)` is a Promise-aware Ajax utility,\n// like we defined earlier in the chapter\n\nvar p1 = request( \"http://some.url.1/\" );\nvar p2 = request( \"http://some.url.2/\" );\n\nPromise.race( [p1,p2] )\n.then( function(msg){\n\t// either `p1` or `p2` will win the race\n\treturn request(\n\t\t\"http://some.url.3/?v=\" + msg\n\t);\n} )\n.then( function(msg){\n\tconsole.log( msg );\n} );\n```\n\nBecause only one promise wins, the fulfillment value is a single message, not an `array` as it was for `Promise.all([ .. ])`.\n\n#### Timeout Race\n\nWe saw this example earlier, illustrating how `Promise.race([ .. ])` can be used to express the \"promise timeout\" pattern:\n\n```js\n// `foo()` is a Promise-aware function\n\n// `timeoutPromise(..)`, defined earlier, returns\n// a Promise that rejects after a specified delay\n\n// set up a timeout for `foo()`\nPromise.race( [\n\tfoo(),\t\t\t\t\t// attempt `foo()`\n\ttimeoutPromise( 3000 )\t// give it 3 seconds\n] )\n.then(\n\tfunction(){\n\t\t// `foo(..)` fulfilled in time!\n\t},\n\tfunction(err){\n\t\t// either `foo()` rejected, or it just\n\t\t// didn't finish in time, so inspect\n\t\t// `err` to know which\n\t}\n);\n```\n\nThis timeout pattern works well in most cases. But there are some nuances to consider, and frankly they apply to both `Promise.race([ .. ])` and `Promise.all([ .. ])` equally.\n\n#### \"Finally\"\n\nThe key question to ask is, \"What happens to the promises that get discarded/ignored?\" We're not asking that question from the performance perspective -- they would typically end up garbage collection eligible -- but from the behavioral perspective (side effects, etc.). 
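
To make the question concrete, notice that a "losing" promise keeps right on going after a race is decided:

```js
var loser = new Promise( function(resolve){
	setTimeout( function(){
		// still runs, even though the race was
		// already decided by `winner`
		resolve( "late" );
	}, 50 );
} );
var winner = Promise.resolve( "fast" );

Promise.race( [winner, loser] )
.then( function(msg){
	console.log( msg );		// "fast"
} );

loser.then( function(msg){
	// the discarded promise's side effects
	// still happen
	console.log( msg );		// "late"
} );
```
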
Promises cannot be canceled -- and shouldn't be as that would destroy the external immutability trust discussed in the \"Promise Uncancelable\" section later in this chapter -- so they can only be silently ignored.\n\nBut what if `foo()` in the previous example is reserving some sort of resource for usage, but the timeout fires first and causes that promise to be ignored? Is there anything in this pattern that proactively frees the reserved resource after the timeout, or otherwise cancels any side effects it may have had? What if all you wanted was to log the fact that `foo()` timed out?\n\nSome developers have proposed that Promises need a `finally(..)` callback registration, which is always called when a Promise resolves, and allows you to specify any cleanup that may be necessary. This doesn't exist in the specification at the moment, but it may come in ES7+. We'll have to wait and see.\n\nIt might look like:\n\n```js\nvar p = Promise.resolve( 42 );\n\np.then( something )\n.finally( cleanup )\n.then( another )\n.finally( cleanup );\n```\n\n**Note:** In various Promise libraries, `finally(..)` still creates and returns a new Promise (to keep the chain going). 
If the `cleanup(..)` function were to return a Promise, it would be linked into the chain, which means you could still have the unhandled rejection issues we discussed earlier.\n\nIn the meantime, we could make a static helper utility that lets us observe (without interfering) the resolution of a Promise:\n\n```js\n// polyfill-safe guard check\nif (!Promise.observe) {\n\tPromise.observe = function(pr,cb) {\n\t\t// side-observe `pr`'s resolution\n\t\tpr.then(\n\t\t\tfunction fulfilled(msg){\n\t\t\t\t// schedule callback async (as Job)\n\t\t\t\tPromise.resolve( msg ).then( cb );\n\t\t\t},\n\t\t\tfunction rejected(err){\n\t\t\t\t// schedule callback async (as Job)\n\t\t\t\tPromise.resolve( err ).then( cb );\n\t\t\t}\n\t\t);\n\n\t\t// return original promise\n\t\treturn pr;\n\t};\n}\n```\n\nHere's how we'd use it in the timeout example from before:\n\n```js\nPromise.race( [\n\tPromise.observe(\n\t\tfoo(),\t\t\t\t\t// attempt `foo()`\n\t\tfunction cleanup(msg){\n\t\t\t// clean up after `foo()`, even if it\n\t\t\t// didn't finish before the timeout\n\t\t}\n\t),\n\ttimeoutPromise( 3000 )\t// give it 3 seconds\n] )\n```\n\nThis `Promise.observe(..)` helper is just an illustration of how you could observe the completions of Promises without interfering with them. Other Promise libraries have their own solutions. Regardless of how you do it, you'll likely have places where you want to make sure your Promises aren't *just* silently ignored by accident.\n\n### Variations on all([ .. ]) and race([ .. ])\n\nWhile native ES6 Promises come with built-in `Promise.all([ .. ])` and `Promise.race([ .. ])`, there are several other commonly used patterns with variations on those semantics:\n\n* `none([ .. ])` is like `all([ .. ])`, but fulfillments and rejections are transposed. All Promises need to be rejected -- rejections become the fulfillment values and vice versa.\n* `any([ .. ])` is like `all([ .. 
])`, but it ignores any rejections, so only one needs to fulfill instead of *all* of them.\n* `first([ .. ])` is like a race with `any([ .. ])`, in that it ignores any rejections and fulfills as soon as the first Promise fulfills.\n* `last([ .. ])` is like `first([ .. ])`, but only the latest fulfillment wins.\n\nSome Promise abstraction libraries provide these, but you could also define them yourself using the mechanics of Promises, `race([ .. ])` and `all([ .. ])`.\n\nFor example, here's how we could define `first([ .. ])`:\n\n```js\n// polyfill-safe guard check\nif (!Promise.first) {\n\tPromise.first = function(prs) {\n\t\treturn new Promise( function(resolve,reject){\n\t\t\t// loop through all promises\n\t\t\tprs.forEach( function(pr){\n\t\t\t\t// normalize the value\n\t\t\t\tPromise.resolve( pr )\n\t\t\t\t// whichever one fulfills first wins, and\n\t\t\t\t// gets to resolve the main promise\n\t\t\t\t.then( resolve );\n\t\t\t} );\n\t\t} );\n\t};\n}\n```\n\n**Note:** This implementation of `first(..)` does not reject if all its promises reject; it simply hangs, much like a `Promise.race([])` does. If desired, you could add additional logic to track each promise rejection and if all reject, call `reject()` on the main promise. We'll leave that as an exercise for the reader.\n\n### Concurrent Iterations\n\nSometimes you want to iterate over a list of Promises and perform some task against all of them, much like you can do with synchronous `array`s (e.g., `forEach(..)`, `map(..)`, `some(..)`, and `every(..)`). 
If the task to perform against each Promise is fundamentally synchronous, these work fine, just as we used `forEach(..)` in the previous snippet.\n\nBut if the tasks are fundamentally asynchronous, or can/should otherwise be performed concurrently, you can use async versions of these utilities as provided by many libraries.\n\nFor example, let's consider an asynchronous `map(..)` utility that takes an `array` of values (could be Promises or anything else), plus a function (task) to perform against each. `map(..)` itself returns a promise whose fulfillment value is an `array` that holds (in the same mapping order) the async fulfillment value from each task:\n\n```js\nif (!Promise.map) {\n\tPromise.map = function(vals,cb) {\n\t\t// new promise that waits for all mapped promises\n\t\treturn Promise.all(\n\t\t\t// note: regular array `map(..)` turns\n\t\t\t// the array of values into an array of\n\t\t\t// promises\n\t\t\tvals.map( function(val){\n\t\t\t\t// replace `val` with a new promise that\n\t\t\t\t// resolves after `val` is async mapped\n\t\t\t\treturn new Promise( function(resolve){\n\t\t\t\t\tcb( val, resolve );\n\t\t\t\t} );\n\t\t\t} )\n\t\t);\n\t};\n}\n```\n\n**Note:** In this implementation of `map(..)`, you can't signal async rejection, but if a synchronous exception/error occurs inside of the mapping callback (`cb(..)`), the main `Promise.map(..)` returned promise would reject.\n\nLet's illustrate using `map(..)` with a list of Promises (instead of simple values):\n\n```js\nvar p1 = Promise.resolve( 21 );\nvar p2 = Promise.resolve( 42 );\nvar p3 = Promise.reject( \"Oops\" );\n\n// double values in list even if they're in\n// Promises\nPromise.map( [p1,p2,p3], function(pr,done){\n\t// make sure the item itself is a Promise\n\tPromise.resolve( pr )\n\t.then(\n\t\t// extract value as `v`\n\t\tfunction(v){\n\t\t\t// map fulfillment `v` to new value\n\t\t\tdone( v * 2 );\n\t\t},\n\t\t// or, map to promise rejection message\n\t\tdone\n\t);\n} )\n.then( 
function(vals){\n\tconsole.log( vals );\t// [42,84,\"Oops\"]\n} );\n```\n\n## Promise API Recap\n\nLet's review the ES6 `Promise` API that we've already seen unfold in bits and pieces throughout this chapter.\n\n**Note:** The following API is native only as of ES6, but there are specification-compliant polyfills (not just extended Promise libraries) which can define `Promise` and all its associated behavior so that you can use native Promises even in pre-ES6 browsers. One such polyfill is \"Native Promise Only\" (http://github.com/getify/native-promise-only), which I wrote!\n\n### new Promise(..) Constructor\n\nThe *revealing constructor* `Promise(..)` must be used with `new`, and must be provided a function callback that is synchronously/immediately called. This function is passed two function callbacks that act as resolution capabilities for the promise. We commonly label these `resolve(..)` and `reject(..)`:\n\n```js\nvar p = new Promise( function(resolve,reject){\n\t// `resolve(..)` to resolve/fulfill the promise\n\t// `reject(..)` to reject the promise\n} );\n```\n\n`reject(..)` simply rejects the promise, but `resolve(..)` can either fulfill the promise or reject it, depending on what it's passed. If `resolve(..)` is passed an immediate, non-Promise, non-thenable value, then the promise is fulfilled with that value.\n\nBut if `resolve(..)` is passed a genuine Promise or thenable value, that value is unwrapped recursively, and whatever its final resolution/state is will be adopted by the promise.\n\n### Promise.resolve(..) and Promise.reject(..)\n\nA shortcut for creating an already-rejected Promise is `Promise.reject(..)`, so these two promises are equivalent:\n\n```js\nvar p1 = new Promise( function(resolve,reject){\n\treject( \"Oops\" );\n} );\n\nvar p2 = Promise.reject( \"Oops\" );\n```\n\n`Promise.resolve(..)` is usually used to create an already-fulfilled Promise in a similar way to `Promise.reject(..)`. 
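
So, for an immediate value like `42`, these two promises fulfill identically:

```js
var p1 = new Promise( function(resolve,reject){
	resolve( 42 );
} );

var p2 = Promise.resolve( 42 );

// both fulfill with `42`
```
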
However, `Promise.resolve(..)` also unwraps thenable values (as discussed several times already). In that case, the Promise returned adopts the final resolution of the thenable you passed in, which could either be fulfillment or rejection:\n\n```js\nvar fulfilledTh = {\n\tthen: function(cb) { cb( 42 ); }\n};\nvar rejectedTh = {\n\tthen: function(cb,errCb) {\n\t\terrCb( \"Oops\" );\n\t}\n};\n\nvar p1 = Promise.resolve( fulfilledTh );\nvar p2 = Promise.resolve( rejectedTh );\n\n// `p1` will be a fulfilled promise\n// `p2` will be a rejected promise\n```\n\nAnd remember, `Promise.resolve(..)` doesn't do anything if what you pass is already a genuine Promise; it just returns the value directly. So there's no overhead to calling `Promise.resolve(..)` on values that you don't know the nature of, if one happens to already be a genuine Promise.\n\n### then(..) and catch(..)\n\nEach Promise instance (**not** the `Promise` API namespace) has `then(..)` and `catch(..)` methods, which allow registering of fulfillment and rejection handlers for the Promise. Once the Promise is resolved, one or the other of these handlers will be called, but not both, and it will always be called asynchronously (see \"Jobs\" in Chapter 1).\n\n`then(..)` takes one or two parameters, the first for the fulfillment callback, and the second for the rejection callback. If either is omitted or is otherwise passed as a non-function value, a default callback is substituted respectively. The default fulfillment callback simply passes the message along, while the default rejection callback simply rethrows (propagates) the error reason it receives.\n\n`catch(..)` takes only the rejection callback as a parameter, and automatically substitutes the default fulfillment callback, as just discussed. 
In other words, it's equivalent to `then(null,..)`:\n\n```js\np.then( fulfilled );\n\np.then( fulfilled, rejected );\n\np.catch( rejected ); // or `p.then( null, rejected )`\n```\n\n`then(..)` and `catch(..)` also create and return a new promise, which can be used to express Promise chain flow control. If the fulfillment or rejection callbacks have an exception thrown, the returned promise is rejected. If either callback returns an immediate, non-Promise, non-thenable value, that value is set as the fulfillment for the returned promise. If the fulfillment handler specifically returns a promise or thenable value, that value is unwrapped and becomes the resolution of the returned promise.\n\n### Promise.all([ .. ]) and Promise.race([ .. ])\n\nThe static helpers `Promise.all([ .. ])` and `Promise.race([ .. ])` on the ES6 `Promise` API both create a Promise as their return value. The resolution of that promise is controlled entirely by the array of promises that you pass in.\n\nFor `Promise.all([ .. ])`, all the promises you pass in must fulfill for the returned promise to fulfill. If any promise is rejected, the main returned promise is immediately rejected, too (discarding the results of any of the other promises). For fulfillment, you receive an `array` of all the passed in promises' fulfillment values. For rejection, you receive just the first promise rejection reason value. This pattern is classically called a \"gate\": all must arrive before the gate opens.\n\nFor `Promise.race([ .. ])`, only the first promise to resolve (fulfillment or rejection) \"wins,\" and whatever that resolution is becomes the resolution of the returned promise. This pattern is classically called a \"latch\": first one to open the latch gets through. 
Consider:\n\n```js\nvar p1 = Promise.resolve( 42 );\nvar p2 = Promise.resolve( \"Hello World\" );\nvar p3 = Promise.reject( \"Oops\" );\n\nPromise.race( [p1,p2,p3] )\n.then( function(msg){\n\tconsole.log( msg );\t\t// 42\n} );\n\nPromise.all( [p1,p2,p3] )\n.catch( function(err){\n\tconsole.error( err );\t// \"Oops\"\n} );\n\nPromise.all( [p1,p2] )\n.then( function(msgs){\n\tconsole.log( msgs );\t// [42,\"Hello World\"]\n} );\n```\n\n**Warning:** Be careful! If an empty `array` is passed to `Promise.all([ .. ])`, it will fulfill immediately, but `Promise.race([ .. ])` will hang forever and never resolve.\n\nThe ES6 `Promise` API is pretty simple and straightforward. It's at least good enough to serve the most basic of async cases, and is a good place to start when rearranging your code from callback hell to something better.\n\nBut there's a whole lot of async sophistication that apps often demand which Promises themselves will be limited in addressing. In the next section, we'll dive into those limitations as motivations for the benefit of Promise libraries.\n\n## Promise Limitations\n\nMany of the details we'll discuss in this section have already been alluded to in this chapter, but we'll just make sure to review these limitations specifically.\n\n### Sequence Error Handling\n\nWe covered Promise-flavored error handling in detail earlier in this chapter. The limitations of how Promises are designed -- how they chain, specifically -- creates a very easy pitfall where an error in a Promise chain can be silently ignored accidentally.\n\nBut there's something else to consider with Promise errors. 
Because a Promise chain is nothing more than its constituent Promises wired together, there's no entity to refer to the entire chain as a single *thing*, which means there's no external way to observe any errors that may occur.\n\nIf you construct a Promise chain that has no error handling in it, any error anywhere in the chain will propagate indefinitely down the chain, until observed (by registering a rejection handler at some step). So, in that specific case, having a reference to the *last* promise in the chain is enough (`p` in the following snippet), because you can register a rejection handler there, and it will be notified of any propagated errors:\n\n```js\n// `foo(..)`, `STEP2(..)` and `STEP3(..)` are\n// all promise-aware utilities\n\nvar p = foo( 42 )\n.then( STEP2 )\n.then( STEP3 );\n```\n\nAlthough it may seem sneakily confusing, `p` here doesn't point to the first promise in the chain (the one from the `foo(42)` call), but instead to the last promise, the one that comes from the `then(STEP3)` call.\n\nAlso, no step in the promise chain is observably doing its own error handling. That means that you could then register a rejection error handler on `p`, and it would be notified if any errors occur anywhere in the chain:\n\n```js\np.catch( handleErrors );\n```\n\nBut if any step of the chain in fact does its own error handling (perhaps hidden/abstracted away from what you can see), your `handleErrors(..)` won't be notified. This may be what you want -- it was, after all, a \"handled rejection\" -- but it also may *not* be what you want. The complete lack of ability to be notified (of \"already handled\" rejection errors) is a limitation that restricts capabilities in some use cases.\n\nIt's basically the same limitation that exists with a `try..catch` that can catch an exception and simply swallow it. 
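
The Promise version of that swallowing looks like this:

```js
var p = Promise.reject( "Oops" )
.catch( function(err){
	// rejection swallowed here; the chain
	// continues as a fulfillment
	return "recovered";
} );

p.catch( function(err){
	// never called -- upstream already
	// "handled" the rejection
} );

p.then( function(v){
	console.log( v );	// "recovered"
} );
```
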
So this isn't a limitation **unique to Promises**, but it *is* something we might wish to have a workaround for.\n\nUnfortunately, many times there is no reference kept for the intermediate steps in a Promise-chain sequence, so without such references, you cannot attach error handlers to reliably observe the errors.\n\n### Single Value\n\nPromises by definition only have a single fulfillment value or a single rejection reason. In simple examples, this isn't that big of a deal, but in more sophisticated scenarios, you may find this limiting.\n\nThe typical advice is to construct a values wrapper (such as an `object` or `array`) to contain these multiple messages. This solution works, but it can be quite awkward and tedious to wrap and unwrap your messages with every single step of your Promise chain.\n\n#### Splitting Values\n\nSometimes you can take this as a signal that you could/should decompose the problem into two or more Promises.\n\nImagine you have a utility `foo(..)` that produces two values (`x` and `y`) asynchronously:\n\n```js\nfunction getY(x) {\n\treturn new Promise( function(resolve,reject){\n\t\tsetTimeout( function(){\n\t\t\tresolve( (3 * x) - 1 );\n\t\t}, 100 );\n\t} );\n}\n\nfunction foo(bar,baz) {\n\tvar x = bar * baz;\n\n\treturn getY( x )\n\t.then( function(y){\n\t\t// wrap both values into container\n\t\treturn [x,y];\n\t} );\n}\n\nfoo( 10, 20 )\n.then( function(msgs){\n\tvar x = msgs[0];\n\tvar y = msgs[1];\n\n\tconsole.log( x, y );\t// 200 599\n} );\n```\n\nFirst, let's rearrange what `foo(..)` returns so that we don't have to wrap `x` and `y` into a single `array` value to transport through one Promise. 
Instead, we can wrap each value into its own promise:\n\n```js\nfunction foo(bar,baz) {\n\tvar x = bar * baz;\n\n\t// return both promises\n\treturn [\n\t\tPromise.resolve( x ),\n\t\tgetY( x )\n\t];\n}\n\nPromise.all(\n\tfoo( 10, 20 )\n)\n.then( function(msgs){\n\tvar x = msgs[0];\n\tvar y = msgs[1];\n\n\tconsole.log( x, y );\n} );\n```\n\nIs an `array` of promises really better than an `array` of values passed through a single promise? Syntactically, it's not much of an improvement.\n\nBut this approach more closely embraces the Promise design theory. It's now easier in the future to refactor to split the calculation of `x` and `y` into separate functions. It's cleaner and more flexible to let the calling code decide how to orchestrate the two promises -- using `Promise.all([ .. ])` here, but certainly not the only option -- rather than to abstract such details away inside of `foo(..)`.\n\n#### Unwrap/Spread Arguments\n\nThe `var x = ..` and `var y = ..` assignments are still awkward overhead. We can employ some functional trickery (hat tip to Reginald Braithwaite, @raganwald on Twitter) in a helper utility:\n\n```js\nfunction spread(fn) {\n\treturn Function.apply.bind( fn, null );\n}\n\nPromise.all(\n\tfoo( 10, 20 )\n)\n.then(\n\tspread( function(x,y){\n\t\tconsole.log( x, y );\t// 200 599\n\t} )\n)\n```\n\nThat's a bit nicer! Of course, you could inline the functional magic to avoid the extra helper:\n\n```js\nPromise.all(\n\tfoo( 10, 20 )\n)\n.then( Function.apply.bind(\n\tfunction(x,y){\n\t\tconsole.log( x, y );\t// 200 599\n\t},\n\tnull\n) );\n```\n\nThese tricks may be neat, but ES6 has an even better answer for us: destructuring. 
The array destructuring assignment form looks like this:\n\n```js\nPromise.all(\n\tfoo( 10, 20 )\n)\n.then( function(msgs){\n\tvar [x,y] = msgs;\n\n\tconsole.log( x, y );\t// 200 599\n} );\n```\n\nBut best of all, ES6 offers the array parameter destructuring form:\n\n```js\nPromise.all(\n\tfoo( 10, 20 )\n)\n.then( function([x,y]){\n\tconsole.log( x, y );\t// 200 599\n} );\n```\n\nWe've now embraced the one-value-per-Promise mantra, but kept our supporting boilerplate to a minimum!\n\n**Note:** For more information on ES6 destructuring forms, see the *ES6 & Beyond* title of this series.\n\n### Single Resolution\n\nOne of the most intrinsic behaviors of Promises is that a Promise can only be resolved once (fulfillment or rejection). For many async use cases, you're only retrieving a value once, so this works fine.\n\nBut there's also a lot of async cases that fit into a different model -- one that's more akin to events and/or streams of data. It's not clear on the surface how well Promises can fit into such use cases, if at all. Without a significant abstraction on top of Promises, they will completely fall short for handling multiple value resolution.\n\nImagine a scenario where you might want to fire off a sequence of async steps in response to a stimulus (like an event) that can in fact happen multiple times, like a button click.\n\nThis probably won't work the way you want:\n\n```js\n// `click(..)` binds the `\"click\"` event to a DOM element\n// `request(..)` is the previously defined Promise-aware Ajax\n\nvar p = new Promise( function(resolve,reject){\n\tclick( \"#mybtn\", resolve );\n} );\n\np.then( function(evt){\n\tvar btnID = evt.currentTarget.id;\n\treturn request( \"http://some.url.1/?id=\" + btnID );\n} )\n.then( function(text){\n\tconsole.log( text );\n} );\n```\n\nThe behavior here only works if your application calls for the button to be clicked just once. 
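
The underlying single-resolution rule can be seen in isolation:

```js
var p = new Promise( function(resolve,reject){
	resolve( "first" );
	resolve( "second" );	// silently ignored
	reject( "nope" );		// also ignored
} );

p.then( function(v){
	console.log( v );	// "first"
} );
```
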
If the button is clicked a second time, the `p` promise has already been resolved, so the second `resolve(..)` call would be ignored.\n\nInstead, you'd probably need to invert the paradigm, creating a whole new Promise chain for each event firing:\n\n```js\nclick( \"#mybtn\", function(evt){\n\tvar btnID = evt.currentTarget.id;\n\n\trequest( \"http://some.url.1/?id=\" + btnID )\n\t.then( function(text){\n\t\tconsole.log( text );\n\t} );\n} );\n```\n\nThis approach will *work* in that a whole new Promise sequence will be fired off for each `\"click\"` event on the button.\n\nBut beyond just the ugliness of having to define the entire Promise chain inside the event handler, this design in some respects violates the idea of separation of concerns/capabilities (SoC). You might very well want to define your event handler in a different place in your code from where you define the *response* to the event (the Promise chain). That's pretty awkward to do in this pattern, without helper mechanisms.\n\n**Note:** Another way of articulating this limitation is that it'd be nice if we could construct some sort of \"observable\" that we can subscribe a Promise chain to. There are libraries that have created these abstractions (such as RxJS -- http://rxjs.codeplex.com/), but the abstractions can seem so heavy that you can't even see the nature of Promises anymore. Such heavy abstraction brings important questions to mind such as whether (sans Promises) these mechanisms are as *trustable* as Promises themselves have been designed to be. We'll revisit the \"Observable\" pattern in Appendix B.\n\n### Inertia\n\nOne concrete barrier to starting to use Promises in your own code is all the code that currently exists which is not already Promise-aware. 
If you have lots of callback-based code, it's far easier to just keep coding in that same style.\n\n\"A code base in motion (with callbacks) will remain in motion (with callbacks) unless acted upon by a smart, Promises-aware developer.\"\n\nPromises offer a different paradigm, and as such, the approach to the code can be anywhere from just a little different to, in some cases, radically different. You have to be intentional about it, because Promises will not just naturally shake out from the same ol' ways of doing code that have served you well thus far.\n\nConsider a callback-based scenario like the following:\n\n```js\nfunction foo(x,y,cb) {\n\tajax(\n\t\t\"http://some.url.1/?x=\" + x + \"&y=\" + y,\n\t\tcb\n\t);\n}\n\nfoo( 11, 31, function(err,text) {\n\tif (err) {\n\t\tconsole.error( err );\n\t}\n\telse {\n\t\tconsole.log( text );\n\t}\n} );\n```\n\nIs it immediately obvious what the first steps are to convert this callback-based code to Promise-aware code? Depends on your experience. The more practice you have with it, the more natural it will feel. But certainly, Promises don't just advertise on the label exactly how to do it -- there's no one-size-fits-all answer -- so the responsibility is up to you.\n\nAs we've covered before, we definitely need an Ajax utility that is Promise-aware instead of callback-based, which we could call `request(..)`. You can make your own, as we have already. But the overhead of having to manually define Promise-aware wrappers for every callback-based utility makes it less likely you'll choose to refactor to Promise-aware coding at all.\n\nPromises offer no direct answer to that limitation. Most Promise libraries do offer a helper, however. 
But even without a library, imagine a helper like this:\n\n```js\n// polyfill-safe guard check\nif (!Promise.wrap) {\n\tPromise.wrap = function(fn) {\n\t\treturn function() {\n\t\t\tvar args = [].slice.call( arguments );\n\n\t\t\treturn new Promise( function(resolve,reject){\n\t\t\t\tfn.apply(\n\t\t\t\t\tnull,\n\t\t\t\t\targs.concat( function(err,v){\n\t\t\t\t\t\tif (err) {\n\t\t\t\t\t\t\treject( err );\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse {\n\t\t\t\t\t\t\tresolve( v );\n\t\t\t\t\t\t}\n\t\t\t\t\t} )\n\t\t\t\t);\n\t\t\t} );\n\t\t};\n\t};\n}\n```\n\nOK, that's more than just a tiny trivial utility. However, although it may look a bit intimidating, it's not as bad as you'd think. It takes a function that expects an error-first style callback as its last parameter, and returns a new one that automatically creates a Promise to return, and substitutes the callback for you, wired up to the Promise fulfillment/rejection.\n\nRather than waste too much time talking about *how* this `Promise.wrap(..)` helper works, let's just look at how we use it:\n\n```js\nvar request = Promise.wrap( ajax );\n\nrequest( \"http://some.url.1/\" )\n.then( .. )\n..\n```\n\nWow, that was pretty easy!\n\n`Promise.wrap(..)` does **not** produce a Promise. It produces a function that will produce Promises. In a sense, a Promise-producing function could be seen as a \"Promise factory.\" I propose \"promisory\" as the name for such a thing (\"Promise\" + \"factory\").\n\nThe act of wrapping a callback-expecting function to be a Promise-aware function is sometimes referred to as \"lifting\" or \"promisifying\". But there doesn't seem to be a standard term for what to call the resultant function other than a \"lifted function\", so I like \"promisory\" better as I think it's more descriptive.\n\n**Note:** Promisory isn't a made-up term. It's a real word, and its definition means to contain or convey a promise. 
That's exactly what these functions are doing, so it turns out to be a pretty perfect terminology match!\n\nSo, `Promise.wrap(ajax)` produces an `ajax(..)` promisory we call `request(..)`, and that promisory produces Promises for Ajax responses.\n\nIf all functions were already promisories, we wouldn't need to make them ourselves, so the extra step is a tad bit of a shame. But at least the wrapping pattern is (usually) repeatable so we can put it into a `Promise.wrap(..)` helper as shown to aid our promise coding.\n\nSo back to our earlier example, we need a promisory for both `ajax(..)` and `foo(..)`:\n\n```js\n// make a promisory for `ajax(..)`\nvar request = Promise.wrap( ajax );\n\n// refactor `foo(..)`, but keep it externally\n// callback-based for compatibility with other\n// parts of the code for now -- only use\n// `request(..)`'s promise internally.\nfunction foo(x,y,cb) {\n\trequest(\n\t\t\"http://some.url.1/?x=\" + x + \"&y=\" + y\n\t)\n\t.then(\n\t\tfunction fulfilled(text){\n\t\t\tcb( null, text );\n\t\t},\n\t\tcb\n\t);\n}\n\n// now, for this code's purposes, make a\n// promisory for `foo(..)`\nvar betterFoo = Promise.wrap( foo );\n\n// and use the promisory\nbetterFoo( 11, 31 )\n.then(\n\tfunction fulfilled(text){\n\t\tconsole.log( text );\n\t},\n\tfunction rejected(err){\n\t\tconsole.error( err );\n\t}\n);\n```\n\nOf course, while we're refactoring `foo(..)` to use our new `request(..)` promisory, we could just make `foo(..)` a promisory itself, instead of remaining callback-based and needing to make and use the subsequent `betterFoo(..)` promisory. This decision just depends on whether `foo(..)` needs to stay callback-based compatible with other parts of the code base or not.\n\nConsider:\n\n```js\n// `foo(..)` is now also a promisory because it\n// delegates to the `request(..)` promisory\nfunction foo(x,y) {\n\treturn request(\n\t\t\"http://some.url.1/?x=\" + x + \"&y=\" + y\n\t);\n}\n\nfoo( 11, 31 )\n.then( .. 
)\n..\n```\n\nWhile ES6 Promises don't natively ship with helpers for such promisory wrapping, most libraries provide them, or you can make your own. Either way, this particular limitation of Promises is addressable without too much pain (certainly compared to the pain of callback hell!).\n\n### Promise Uncancelable\n\nOnce you create a Promise and register a fulfillment and/or rejection handler for it, there's nothing external you can do to stop that progression if something else happens to make that task moot.\n\n**Note:** Many Promise abstraction libraries provide facilities to cancel Promises, but this is a terrible idea! Many developers wish Promises had natively been designed with external cancelation capability, but the problem is that it would let one consumer/observer of a Promise affect some other consumer's ability to observe that same Promise. This violates the future-value's trustability (external immutability), but moreover is the embodiment of the \"action at a distance\" anti-pattern (http://en.wikipedia.org/wiki/Action_at_a_distance_%28computer_programming%29). Regardless of how useful it seems, it will actually lead you straight back into the same nightmares as callbacks.\n\nConsider our Promise timeout scenario from earlier:\n\n```js\nvar p = foo( 42 );\n\nPromise.race( [\n\tp,\n\ttimeoutPromise( 3000 )\n] )\n.then(\n\tdoSomething,\n\thandleError\n);\n\np.then( function(){\n\t// still happens even in the timeout case :(\n} );\n```\n\nThe \"timeout\" was external to the promise `p`, so `p` itself keeps going, which we probably don't want.\n\nOne option is to invasively define your resolution callbacks:\n\n```js\nvar OK = true;\n\nvar p = foo( 42 );\n\nPromise.race( [\n\tp,\n\ttimeoutPromise( 3000 )\n\t.catch( function(err){\n\t\tOK = false;\n\t\tthrow err;\n\t} )\n] )\n.then(\n\tdoSomething,\n\thandleError\n);\n\np.then( function(){\n\tif (OK) {\n\t\t// only happens if no timeout! :)\n\t}\n} );\n```\n\nThis is ugly. 
It works, but it's far from ideal. Generally, you should try to avoid such scenarios.\n\nBut if you can't, the ugliness of this solution should be a clue that *cancelation* is a functionality that belongs at a higher level of abstraction on top of Promises. I'd recommend you look to Promise abstraction libraries for assistance rather than hacking it yourself.\n\n**Note:** My *asynquence* Promise abstraction library provides just such an abstraction and an `abort()` capability for the sequence, all of which will be discussed in Appendix A.\n\nA single Promise is not really a flow-control mechanism (at least not in a very meaningful sense), which is exactly what *cancelation* refers to; that's why Promise cancelation would feel awkward.\n\nBy contrast, a chain of Promises taken collectively together -- what I like to call a \"sequence\" -- *is* a flow control expression, and thus it's appropriate for cancelation to be defined at that level of abstraction.\n\nNo individual Promise should be cancelable, but it's sensible for a *sequence* to be cancelable, because you don't pass around a sequence as a single immutable value like you do with a Promise.\n\n### Promise Performance\n\nThis particular limitation is both simple and complex.\n\nComparing how many pieces are moving with a basic callback-based async task chain versus a Promise chain, it's clear Promises have a fair bit more going on, which means they are naturally at least a tiny bit slower. Think back to just the simple list of trust guarantees that Promises offer, as compared to the ad hoc solution code you'd have to layer on top of callbacks to achieve the same protections.\n\nMore work to do, more guards to protect, means that Promises *are* slower as compared to naked, untrustable callbacks. That much is obvious, and probably simple to wrap your brain around.\n\nBut how much slower? Well... 
that's actually proving to be an incredibly difficult question to answer absolutely, across the board.\n\nFrankly, it's kind of an apples-to-oranges comparison, so it's probably the wrong question to ask. You should actually compare whether an ad-hoc callback system with all the same protections manually layered in is faster than a Promise implementation.\n\nIf Promises have a legitimate performance limitation, it's more that they don't really offer a line-item choice as to which trustability protections you want/need or not -- you get them all, always.\n\nNevertheless, if we grant that a Promise is generally a *little bit slower* than its non-Promise, non-trustable callback equivalent -- assuming there are places where you feel you can justify the lack of trustability -- does that mean that Promises should be avoided across the board, as if your entire application is driven by nothing but must-be-utterly-the-fastest code possible?\n\nSanity check: if your code is legitimately like that, **is JavaScript even the right language for such tasks?** JavaScript can be optimized to run applications very performantly (see Chapter 5 and Chapter 6). But is obsessing over tiny performance tradeoffs with Promises, in light of all the benefits they offer, *really* appropriate?\n\nAnother subtle issue is that Promises make *everything* async, which means that some immediately (synchronously) complete steps still defer advancement of the next step to a Job (see Chapter 1). 
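For example, even an already-fulfilled Promise won't run its `then(..)` handler until the current synchronous code has finished:\n\n```js\nPromise.resolve( 42 )\n.then( function(v){\n\tconsole.log( \"handler:\", v );\n} );\n\nconsole.log( \"sync code\" );\n// sync code\n// handler: 42\n```\n\n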
That means that it's possible that a sequence of Promise tasks could complete ever-so-slightly slower than the same sequence wired up with callbacks.\n\nOf course, the question here is this: are these potential slips in tiny fractions of performance *worth* all the other articulated benefits of Promises we've laid out across this chapter?\n\nMy take is that in virtually all cases where you might think Promise performance is slow enough to be concerned, it's actually an anti-pattern to optimize away the benefits of Promise trustability and composability by avoiding them altogether.\n\nInstead, you should default to using them across the code base, and then profile and analyze your application's hot (critical) paths. Are Promises *really* a bottleneck, or are they just a theoretical slowdown? Only *then*, armed with actual valid benchmarks (see Chapter 6) is it responsible and prudent to factor out the Promises in just those identified critical areas.\n\nPromises are a little slower, but in exchange you're getting a lot of trustability, non-Zalgo predictability, and composability built in. Maybe the limitation is not actually their performance, but your lack of perception of their benefits?\n\n## Review\n\nPromises are awesome. Use them. They solve the *inversion of control* issues that plague us with callbacks-only code.\n\nThey don't get rid of callbacks, they just redirect the orchestration of those callbacks to a trustable intermediary mechanism that sits between us and another utility.\n\nPromise chains also begin to address (though certainly not perfectly) a better way of expressing async flow in sequential fashion, which helps our brains plan and maintain async JS code better. We'll see an even better solution to *that* problem in the next chapter!\n"
  },
  {
    "path": "async & performance/ch4.md",
    "content": "# You Don't Know JS: Async & Performance\n# Chapter 4: Generators\n\nIn Chapter 2, we identified two key drawbacks to expressing async flow control with callbacks:\n\n* Callback-based async doesn't fit how our brain plans out steps of a task.\n* Callbacks aren't trustable or composable because of *inversion of control*.\n\nIn Chapter 3, we detailed how Promises uninvert the *inversion of control* of callbacks, restoring trustability/composability.\n\nNow we turn our attention to expressing async flow control in a sequential, synchronous-looking fashion. The \"magic\" that makes it possible is ES6 **generators**.\n\n## Breaking Run-to-Completion\n\nIn Chapter 1, we explained an expectation that JS developers almost universally rely on in their code: once a function starts executing, it runs until it completes, and no other code can interrupt and run in between.\n\nAs bizarre as it may seem, ES6 introduces a new type of function that does not behave with the run-to-completion behavior. This new type of function is called a \"generator.\"\n\nTo understand the implications, let's consider this example:\n\n```js\nvar x = 1;\n\nfunction foo() {\n\tx++;\n\tbar();\t\t\t\t// <-- what about this line?\n\tconsole.log( \"x:\", x );\n}\n\nfunction bar() {\n\tx++;\n}\n\nfoo();\t\t\t\t\t// x: 3\n```\n\nIn this example, we know for sure that `bar()` runs in between `x++` and `console.log(x)`. But what if `bar()` wasn't there? Obviously, the result would be `2` instead of `3`.\n\nNow let's twist your brain. What if `bar()` wasn't present, but it could still somehow run between the `x++` and `console.log(x)` statements? How would that be possible?\n\nIn **preemptive** multithreaded languages, it would essentially be possible for `bar()` to \"interrupt\" and run at exactly the right moment between those two statements. But JS is not preemptive, nor is it (currently) multithreaded. 
And yet, a **cooperative** form of this \"interruption\" (concurrency) is possible, if `foo()` itself could somehow indicate a \"pause\" at that part in the code.\n\n**Note:** I use the word \"cooperative\" not only because of the connection to classical concurrency terminology (see Chapter 1), but because as you'll see in the next snippet, the ES6 syntax for indicating a pause point in code is `yield` -- suggesting a politely *cooperative* yielding of control.\n\nHere's the ES6 code to accomplish such cooperative concurrency:\n\n```js\nvar x = 1;\n\nfunction *foo() {\n\tx++;\n\tyield; // pause!\n\tconsole.log( \"x:\", x );\n}\n\nfunction bar() {\n\tx++;\n}\n```\n\n**Note:** You will likely see most other JS documentation/code that will format a generator declaration as `function* foo() { .. }` instead of as I've done here with `function *foo() { .. }` -- the only difference being the stylistic positioning of the `*`. The two forms are functionally/syntactically identical, as is a third `function*foo() { .. }` (no space) form. There are arguments for both styles, but I basically prefer `function *foo..` because it then matches when I reference a generator in writing with `*foo()`. If I said only `foo()`, you wouldn't know as clearly if I was talking about a generator or a regular function. It's purely a stylistic preference.\n\nNow, how can we run the code in that previous snippet such that `bar()` executes at the point of the `yield` inside of `*foo()`?\n\n```js\n// construct an iterator `it` to control the generator\nvar it = foo();\n\n// start `foo()` here!\nit.next();\nx;\t\t\t\t\t\t// 2\nbar();\nx;\t\t\t\t\t\t// 3\nit.next();\t\t\t\t// x: 3\n```\n\nOK, there's quite a bit of new and potentially confusing stuff in those two code snippets, so we've got plenty to wade through. But before we explain the different mechanics/syntax with ES6 generators, let's walk through the behavior flow:\n\n1. 
The `it = foo()` operation does *not* execute the `*foo()` generator yet, but it merely constructs an *iterator* that will control its execution. More on *iterators* in a bit.\n2. The first `it.next()` starts the `*foo()` generator, and runs the `x++` on the first line of `*foo()`.\n3. `*foo()` pauses at the `yield` statement, at which point that first `it.next()` call finishes. At the moment, `*foo()` is still running and active, but it's in a paused state.\n4. We inspect the value of `x`, and it's now `2`.\n5. We call `bar()`, which increments `x` again with `x++`.\n6. We inspect the value of `x` again, and it's now `3`.\n7. The final `it.next()` call resumes the `*foo()` generator from where it was paused, and runs the `console.log(..)` statement, which uses the current value of `x` of `3`.\n\nClearly, `*foo()` started, but did *not* run-to-completion -- it paused at the `yield`. We resumed `*foo()` later, and let it finish, but that wasn't even required.\n\nSo, a generator is a special kind of function that can start and stop one or more times, and doesn't necessarily ever have to finish. While it won't be terribly obvious yet why that's so powerful, as we go throughout the rest of this chapter, that will be one of the fundamental building blocks we use to construct generators-as-async-flow-control as a pattern for our code.\n\n### Input and Output\n\nA generator function is a special function with the new processing model we just alluded to. But it's still a function, which means it still has some basic tenets that haven't changed -- namely, that it still accepts arguments (aka \"input\"), and that it can still return a value (aka \"output\"):\n\n```js\nfunction *foo(x,y) {\n\treturn x * y;\n}\n\nvar it = foo( 6, 7 );\n\nvar res = it.next();\n\nres.value;\t\t// 42\n```\n\nWe pass in the arguments `6` and `7` to `*foo(..)` as the parameters `x` and `y`, respectively. 
And `*foo(..)` returns the value `42` back to the calling code.\n\nWe now see a difference with how the generator is invoked compared to a normal function. `foo(6,7)` obviously looks familiar. But subtly, the `*foo(..)` generator hasn't actually run yet as it would have with a function.\n\nInstead, we're just creating an *iterator* object, which we assign to the variable `it`, to control the `*foo(..)` generator. Then we call `it.next()`, which instructs the `*foo(..)` generator to advance from its current location, stopping either at the next `yield` or end of the generator.\n\nThe result of that `next(..)` call is an object with a `value` property on it holding whatever value (if anything) was returned from `*foo(..)`. In other words, `yield` caused a value to be sent out from the generator during the middle of its execution, kind of like an intermediate `return`.\n\nAgain, it won't be obvious yet why we need this whole indirect *iterator* object to control the generator. We'll get there, I *promise*.\n\n#### Iteration Messaging\n\nIn addition to generators accepting arguments and having return values, there's even more powerful and compelling input/output messaging capability built into them, via `yield` and `next(..)`.\n\nConsider:\n\n```js\nfunction *foo(x) {\n\tvar y = x * (yield);\n\treturn y;\n}\n\nvar it = foo( 6 );\n\n// start `foo(..)`\nit.next();\n\nvar res = it.next( 7 );\n\nres.value;\t\t// 42\n```\n\nFirst, we pass in `6` as the parameter `x`. Then we call `it.next()`, and it starts up `*foo(..)`.\n\nInside `*foo(..)`, the `var y = x ..` statement starts to be processed, but then it runs across a `yield` expression. At that point, it pauses `*foo(..)` (in the middle of the assignment statement!), and essentially requests the calling code to provide a result value for the `yield` expression. 
Next, we call `it.next( 7 )`, which is passing the `7` value back in to *be* that result of the paused `yield` expression.\n\nSo, at this point, the assignment statement is essentially `var y = 6 * 7`. Now, `return y` returns that `42` value back as the result of the `it.next( 7 )` call.\n\nNotice something very important but also easily confusing, even to seasoned JS developers: depending on your perspective, there's a mismatch between the `yield` and the `next(..)` call. In general, you're going to have one more `next(..)` call than you have `yield` statements -- the preceding snippet has one `yield` and two `next(..)` calls.\n\nWhy the mismatch?\n\nBecause the first `next(..)` always starts a generator, and runs to the first `yield`. But it's the second `next(..)` call that fulfills the first paused `yield` expression, and the third `next(..)` would fulfill the second `yield`, and so on.\n\n##### Tale of Two Questions\n\nActually, which code you're thinking about primarily will affect whether there's a perceived mismatch or not.\n\nConsider only the generator code:\n\n```js\nvar y = x * (yield);\nreturn y;\n```\n\nThis **first** `yield` is basically *asking a question*: \"What value should I insert here?\"\n\nWho's going to answer that question? Well, the **first** `next()` has already run to get the generator up to this point, so obviously *it* can't answer the question. So, the **second** `next(..)` call must answer the question *posed* by the **first** `yield`.\n\nSee the mismatch -- second-to-first?\n\nBut let's flip our perspective. Let's look at it not from the generator's point of view, but from the iterator's point of view.\n\nTo properly illustrate this perspective, we also need to explain that messages can go in both directions -- `yield ..` as an expression can send out messages in response to `next(..)` calls, and `next(..)` can send values to a paused `yield` expression. 
Consider this slightly adjusted code:\n\n```js\nfunction *foo(x) {\n\tvar y = x * (yield \"Hello\");\t// <-- yield a value!\n\treturn y;\n}\n\nvar it = foo( 6 );\n\nvar res = it.next();\t// first `next()`, don't pass anything\nres.value;\t\t\t\t// \"Hello\"\n\nres = it.next( 7 );\t\t// pass `7` to waiting `yield`\nres.value;\t\t\t\t// 42\n```\n\n`yield ..` and `next(..)` pair together as a two-way message passing system **during the execution of the generator**.\n\nSo, looking only at the *iterator* code:\n\n```js\nvar res = it.next();\t// first `next()`, don't pass anything\nres.value;\t\t\t\t// \"Hello\"\n\nres = it.next( 7 );\t\t// pass `7` to waiting `yield`\nres.value;\t\t\t\t// 42\n```\n\n**Note:** We don't pass a value to the first `next()` call, and that's on purpose. Only a paused `yield` could accept such a value passed by a `next(..)`, and at the beginning of the generator when we call the first `next()`, there **is no paused `yield`** to accept such a value. The specification and all compliant browsers just silently **discard** anything passed to the first `next()`. It's still a bad idea to pass a value, as you're just creating silently \"failing\" code that's confusing. So, always start a generator with an argument-free `next()`.\n\nThe first `next()` call (with nothing passed to it) is basically *asking a question*: \"What *next* value does the `*foo(..)` generator have to give me?\" And who answers this question? The first `yield \"Hello\"` expression.\n\nSee? No mismatch there.\n\nDepending on *who* you think about asking the question, there is either a mismatch between the `yield` and `next(..)` calls, or not.\n\nBut wait! There's still an extra `next()` compared to the number of `yield` statements. So, that final `it.next(7)` call is again asking the question about what *next* value the generator will produce. But there are no more `yield` statements left to answer, are there? 
So who answers?\n\nThe `return` statement answers the question!\n\nAnd if there **is no `return`** in your generator -- `return` is certainly not any more required in generators than in regular functions -- there's always an assumed/implicit `return;` (aka `return undefined;`), which serves the purpose of default answering the question *posed* by the final `it.next(7)` call.\n\nThese questions and answers -- the two-way message passing with `yield` and `next(..)` -- are quite powerful, but it's not obvious at all how these mechanisms are connected to async flow control. We're getting there!\n\n### Multiple Iterators\n\nIt may appear from the syntactic usage that when you use an *iterator* to control a generator, you're controlling the declared generator function itself. But there's a subtlety that's easy to miss: each time you construct an *iterator*, you are implicitly constructing an instance of the generator which that *iterator* will control.\n\nYou can have multiple instances of the same generator running at the same time, and they can even interact:\n\n```js\nfunction *foo() {\n\tvar x = yield 2;\n\tz++;\n\tvar y = yield (x * z);\n\tconsole.log( x, y, z );\n}\n\nvar z = 1;\n\nvar it1 = foo();\nvar it2 = foo();\n\nvar val1 = it1.next().value;\t\t\t// 2 <-- yield 2\nvar val2 = it2.next().value;\t\t\t// 2 <-- yield 2\n\nval1 = it1.next( val2 * 10 ).value;\t\t// 40  <-- x:20,  z:2\nval2 = it2.next( val1 * 5 ).value;\t\t// 600 <-- x:200, z:3\n\nit1.next( val2 / 2 );\t\t\t\t\t// y:300\n\t\t\t\t\t\t\t\t\t\t// 20 300 3\nit2.next( val1 / 4 );\t\t\t\t\t// y:10\n\t\t\t\t\t\t\t\t\t\t// 200 10 3\n```\n\n**Warning:** The most common usage of multiple instances of the same generator running concurrently is not such interactions, but when the generator is producing its own values without input, perhaps from some independently connected resource. We'll talk more about value production in the next section.\n\nLet's briefly walk through the processing:\n\n1. 
Both instances of `*foo()` are started at the same time, and both `next()` calls reveal a `value` of `2` from the `yield 2` statements, respectively.\n2. `val2 * 10` is `2 * 10`, which is sent into the first generator instance `it1`, so that `x` gets value `20`. `z` is incremented from `1` to `2`, and then `20 * 2` is `yield`ed out, setting `val1` to `40`.\n3. `val1 * 5` is `40 * 5`, which is sent into the second generator instance `it2`, so that `x` gets value `200`. `z` is incremented again, from `2` to `3`, and then `200 * 3` is `yield`ed out, setting `val2` to `600`.\n4. `val2 / 2` is `600 / 2`, which is sent into the first generator instance `it1`, so that `y` gets value `300`, then printing out `20 300 3` for its `x y z` values, respectively.\n5. `val1 / 4` is `40 / 4`, which is sent into the second generator instance `it2`, so that `y` gets value `10`, then printing out `200 10 3` for its `x y z` values, respectively.\n\nThat's a \"fun\" example to run through in your mind. Did you keep it straight?\n\n#### Interleaving\n\nRecall this scenario from the \"Run-to-completion\" section of Chapter 1:\n\n```js\nvar a = 1;\nvar b = 2;\n\nfunction foo() {\n\ta++;\n\tb = b * a;\n\ta = b + 3;\n}\n\nfunction bar() {\n\tb--;\n\ta = 8 + b;\n\tb = a * 2;\n}\n```\n\nWith normal JS functions, of course either `foo()` can run completely first, or `bar()` can run completely first, but `foo()` cannot interleave its individual statements with `bar()`. So, there are only two possible outcomes to the preceding program.\n\nHowever, with generators, clearly interleaving (even in the middle of statements!) is possible:\n\n```js\nvar a = 1;\nvar b = 2;\n\nfunction *foo() {\n\ta++;\n\tyield;\n\tb = b * a;\n\ta = (yield b) + 3;\n}\n\nfunction *bar() {\n\tb--;\n\tyield;\n\ta = (yield 8) + b;\n\tb = a * (yield 2);\n}\n```\n\nDepending on what respective order the *iterators* controlling `*foo()` and `*bar()` are called, the preceding program could produce several different results. 
In other words, we can actually illustrate (in a sort of fake-ish way) the theoretical \"threaded race conditions\" circumstances discussed in Chapter 1, by interleaving the two generator iterations over the same shared variables.\n\nFirst, let's make a helper called `step(..)` that controls an *iterator*:\n\n```js\nfunction step(gen) {\n\tvar it = gen();\n\tvar last;\n\n\treturn function() {\n\t\t// whatever is `yield`ed out, just\n\t\t// send it right back in the next time!\n\t\tlast = it.next( last ).value;\n\t};\n}\n```\n\n`step(..)` initializes a generator to create its `it` *iterator*, then returns a function which, when called, advances the *iterator* by one step. Additionally, the previously `yield`ed out value is sent right back in at the *next* step. So, `yield 8` will just become `8` and `yield b` will just be `b` (whatever it was at the time of `yield`).\n\nNow, just for fun, let's experiment to see the effects of interleaving these different chunks of `*foo()` and `*bar()`. We'll start with the boring base case, making sure `*foo()` totally finishes before `*bar()` (just like we did in Chapter 1):\n\n```js\n// make sure to reset `a` and `b`\na = 1;\nb = 2;\n\nvar s1 = step( foo );\nvar s2 = step( bar );\n\n// run `*foo()` completely first\ns1();\ns1();\ns1();\n\n// now run `*bar()`\ns2();\ns2();\ns2();\ns2();\n\nconsole.log( a, b );\t// 11 22\n```\n\nThe end result is `11` and `22`, just as it was in the Chapter 1 version. Now let's mix up the interleaving ordering and see how it changes the final values of `a` and `b`:\n\n```js\n// make sure to reset `a` and `b`\na = 1;\nb = 2;\n\nvar s1 = step( foo );\nvar s2 = step( bar );\n\ns2();\t\t// b--;\ns2();\t\t// yield 8\ns1();\t\t// a++;\ns2();\t\t// a = 8 + b;\n\t\t\t// yield 2\ns1();\t\t// b = b * a;\n\t\t\t// yield b\ns1();\t\t// a = b + 3;\ns2();\t\t// b = a * 2;\n```\n\nBefore I tell you the results, can you figure out what `a` and `b` are after the preceding program? 
No cheating!\n\n```js\nconsole.log( a, b );\t// 12 18\n```\n\nSurprised that `b` ended up as `18` rather than `12 * 2 = 24`? In `b = a * (yield 2)`, the left operand `a` is evaluated *before* the `yield 2` pauses `*bar()`, so the value `9` was captured at that moment; the later `a = b + 3;` step in `*foo()` updated `a` to `12` too late to matter.\n\n**Note:** As an exercise for the reader, try to see how many other combinations of results you can get back by rearranging the order of the `s1()` and `s2()` calls. Don't forget you'll always need three `s1()` calls and four `s2()` calls. Recall the discussion earlier about matching `next()` with `yield` for the reasons why.\n\nYou almost certainly won't want to intentionally create *this* level of interleaving confusion, as it creates incredibly difficult-to-understand code. But the exercise is interesting and instructive to understand more about how multiple generators can run concurrently in the same shared scope, because there will be places where this capability is quite useful.\n\nWe'll discuss generator concurrency in more detail at the end of this chapter.\n\n## Generator'ing Values\n\nIn the previous section, we mentioned an interesting use for generators, as a way to produce values. This is **not** the main focus in this chapter, but we'd be remiss if we didn't cover the basics, especially because this use case is essentially the origin of the name: generators.\n\nWe're going to take a slight diversion into the topic of *iterators* for a bit, but we'll circle back to how they relate to generators and using a generator to *generate* values.\n\n### Producers and Iterators\n\nImagine you're producing a series of values where each value has a definable relationship to the previous value. 
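For instance, consider a number series where each new value is derived from the one before it:\n\n```js\n// each step: triple the previous value, then add 6\nvar nextVal = 1;\n\nnextVal = (3 * nextVal) + 6;\t// 9\nnextVal = (3 * nextVal) + 6;\t// 33\n```\n\n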
To do this, you're going to need a stateful producer that remembers the last value it gave out.\n\nYou can implement something like that straightforwardly using a function closure (see the *Scope & Closures* title of this series):\n\n```js\nvar gimmeSomething = (function(){\n\tvar nextVal;\n\n\treturn function(){\n\t\tif (nextVal === undefined) {\n\t\t\tnextVal = 1;\n\t\t}\n\t\telse {\n\t\t\tnextVal = (3 * nextVal) + 6;\n\t\t}\n\n\t\treturn nextVal;\n\t};\n})();\n\ngimmeSomething();\t\t// 1\ngimmeSomething();\t\t// 9\ngimmeSomething();\t\t// 33\ngimmeSomething();\t\t// 105\n```\n\n**Note:** The `nextVal` computation logic here could have been simplified, but conceptually, we don't want to calculate the *next value* (aka `nextVal`) until the *next* `gimmeSomething()` call happens, because in general that could be a resource-leaky design for producers of more persistent or resource-limited values than simple `number`s.\n\nGenerating an arbitrary number series isn't a terribly realistic example. But what if you were generating records from a data source? You could imagine much the same code.\n\nIn fact, this task is a very common design pattern, usually solved by iterators. An *iterator* is a well-defined interface for stepping through a series of values from a producer. 
The JS interface for iterators, as it is in most languages, is to call `next()` each time you want the next value from the producer.\n\nWe could implement the standard *iterator* interface for our number series producer:\n\n```js\nvar something = (function(){\n\tvar nextVal;\n\n\treturn {\n\t\t// needed for `for..of` loops\n\t\t[Symbol.iterator]: function(){ return this; },\n\n\t\t// standard iterator interface method\n\t\tnext: function(){\n\t\t\tif (nextVal === undefined) {\n\t\t\t\tnextVal = 1;\n\t\t\t}\n\t\t\telse {\n\t\t\t\tnextVal = (3 * nextVal) + 6;\n\t\t\t}\n\n\t\t\treturn { done:false, value:nextVal };\n\t\t}\n\t};\n})();\n\nsomething.next().value;\t\t// 1\nsomething.next().value;\t\t// 9\nsomething.next().value;\t\t// 33\nsomething.next().value;\t\t// 105\n```\n\n**Note:** We'll explain why we need the `[Symbol.iterator]: ..` part of this code snippet in the \"Iterables\" section. Syntactically though, two ES6 features are at play. First, the `[ .. ]` syntax is called a *computed property name* (see the *this & Object Prototypes* title of this series). It's a way in an object literal definition to specify an expression and use the result of that expression as the name for the property. Next, `Symbol.iterator` is one of ES6's predefined special `Symbol` values (see the *ES6 & Beyond* title of this book series).\n\nThe `next()` call returns an object with two properties: `done` is a `boolean` value signaling the *iterator's* complete status; `value` holds the iteration value.\n\nES6 also adds the `for..of` loop, which means that a standard *iterator* can automatically be consumed with native loop syntax:\n\n```js\nfor (var v of something) {\n\tconsole.log( v );\n\n\t// don't let the loop run forever!\n\tif (v > 500) {\n\t\tbreak;\n\t}\n}\n// 1 9 33 105 321 969\n```\n\n**Note:** Because our `something` *iterator* always returns `done:false`, this `for..of` loop would run forever, which is why we put the `break` conditional in. 
It's totally OK for iterators to be never-ending, but there are also cases where the *iterator* will run over a finite set of values and eventually return a `done:true`.\n\nThe `for..of` loop automatically calls `next()` for each iteration -- it doesn't pass any values in to the `next()` -- and it will automatically terminate on receiving a `done:true`. It's quite handy for looping over a set of data.\n\nOf course, you could manually loop over iterators, calling `next()` and checking for the `done:true` condition to know when to stop:\n\n```js\nfor (\n\tvar ret;\n\t(ret = something.next()) && !ret.done;\n) {\n\tconsole.log( ret.value );\n\n\t// don't let the loop run forever!\n\tif (ret.value > 500) {\n\t\tbreak;\n\t}\n}\n// 1 9 33 105 321 969\n```\n\n**Note:** This manual `for` approach is certainly uglier than the ES6 `for..of` loop syntax, but its advantage is that it affords you the opportunity to pass in values to the `next(..)` calls if necessary.\n\nIn addition to making your own *iterators*, many built-in data structures in JS (as of ES6), like `array`s, also have default *iterators*:\n\n```js\nvar a = [1,3,5,7,9];\n\nfor (var v of a) {\n\tconsole.log( v );\n}\n// 1 3 5 7 9\n```\n\nThe `for..of` loop asks `a` for its *iterator*, and automatically uses it to iterate over `a`'s values.\n\n**Note:** It may seem a strange omission by ES6, but regular `object`s intentionally do not come with a default *iterator* the way `array`s do. The reasons go deeper than we will cover here. If all you want is to iterate over the properties of an object (with no particular guarantee of ordering), `Object.keys(..)` returns an `array`, which can then be used like `for (var k of Object.keys(obj)) { ..`. 
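For example, assuming a plain `obj` with a couple of properties:\n\n```js\nvar obj = { a: 1, b: 2 };\n\nfor (var k of Object.keys( obj )) {\n\tconsole.log( k, obj[k] );\n}\n```\n\n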
Such a `for..of` loop over an object's keys would be similar to a `for..in` loop, except that `Object.keys(..)` does not include properties from the `[[Prototype]]` chain while `for..in` does (see the *this & Object Prototypes* title of this series).\n\n### Iterables\n\nThe `something` object in our running example is called an *iterator*, as it has the `next()` method on its interface. But a closely related term is *iterable*, which is an `object` that **contains** an *iterator* that can iterate over its values.\n\nAs of ES6, the way to retrieve an *iterator* from an *iterable* is that the *iterable* must have a function on it, with the name being the special ES6 symbol value `Symbol.iterator`. When this function is called, it returns an *iterator*. Though not required, generally each call should return a fresh new *iterator*.\n\n`a` in the previous snippet is an *iterable*. The `for..of` loop automatically calls its `Symbol.iterator` function to construct an *iterator*. But we could of course call the function manually, and use the *iterator* it returns:\n\n```js\nvar a = [1,3,5,7,9];\n\nvar it = a[Symbol.iterator]();\n\nit.next().value;\t// 1\nit.next().value;\t// 3\nit.next().value;\t// 5\n..\n```\n\nIn the previous code listing that defined `something`, you may have noticed this line:\n\n```js\n[Symbol.iterator]: function(){ return this; }\n```\n\nThat little bit of confusing code is making the `something` value -- the interface of the `something` *iterator* -- also an *iterable*; it's now both an *iterable* and an *iterator*. Then, we pass `something` to the `for..of` loop:\n\n```js\nfor (var v of something) {\n\t..\n}\n```\n\nThe `for..of` loop expects `something` to be an *iterable*, so it looks for and calls its `Symbol.iterator` function. 
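Conceptually, the loop is doing something roughly like this under the covers (ignoring details like early-termination signaling, which we'll get to in a bit):\n\n```js\nvar it = something[Symbol.iterator]();\n\nfor (\n\tvar res;\n\t(res = it.next()) && !res.done;\n) {\n\tvar v = res.value;\n\t// .. the loop body runs here ..\n}\n```\n\n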
We defined that function to simply `return this`, so it just gives itself back, and the `for..of` loop is none the wiser.\n\n### Generator Iterator\n\nLet's turn our attention back to generators, in the context of *iterators*. A generator can be treated as a producer of values that we extract one at a time through an *iterator* interface's `next()` calls.\n\nSo, a generator itself is not technically an *iterable*, though it's very similar -- when you execute the generator, you get an *iterator* back:\n\n```js\nfunction *foo(){ .. }\n\nvar it = foo();\n```\n\nWe can implement the `something` infinite number series producer from earlier with a generator, like this:\n\n```js\nfunction *something() {\n\tvar nextVal;\n\n\twhile (true) {\n\t\tif (nextVal === undefined) {\n\t\t\tnextVal = 1;\n\t\t}\n\t\telse {\n\t\t\tnextVal = (3 * nextVal) + 6;\n\t\t}\n\n\t\tyield nextVal;\n\t}\n}\n```\n\n**Note:** A `while..true` loop would normally be a very bad thing to include in a real JS program, at least if it doesn't have a `break` or `return` in it, as it would likely run forever, synchronously, and block/lock-up the browser UI. However, in a generator, such a loop is generally totally OK if it has a `yield` in it, as the generator will pause at each iteration, `yield`ing back to the main program and/or to the event loop queue. To put it glibly, \"generators put the `while..true` back in JS programming!\"\n\nThat's a fair bit cleaner and simpler, right? Because the generator pauses at each `yield`, the state (scope) of the function `*something()` is kept around, meaning there's no need for the closure boilerplate to preserve variable state across calls.\n\nNot only is it simpler code -- we don't have to make our own *iterator* interface -- it actually is more reason-able code, because it more clearly expresses the intent. 
For example, the `while..true` loop tells us the generator is intended to run forever -- to keep *generating* values as long as we keep asking for them.\n\nAnd now we can use our shiny new `*something()` generator with a `for..of` loop, and you'll see it works basically identically:\n\n```js\nfor (var v of something()) {\n\tconsole.log( v );\n\n\t// don't let the loop run forever!\n\tif (v > 500) {\n\t\tbreak;\n\t}\n}\n// 1 9 33 105 321 969\n```\n\nBut don't skip over `for (var v of something()) ..`! We didn't just reference `something` as a value like in earlier examples, but instead called the `*something()` generator to get its *iterator* for the `for..of` loop to use.\n\nIf you're paying close attention, two questions may arise from this interaction between the generator and the loop:\n\n* Why couldn't we say `for (var v of something) ..`? Because `something` here is a generator, which is not an *iterable*. We have to call `something()` to construct a producer for the `for..of` loop to iterate over.\n* The `something()` call produces an *iterator*, but the `for..of` loop wants an *iterable*, right? Yep. The generator's *iterator* also has a `Symbol.iterator` function on it, which basically does a `return this`, just like the `something` *iterable* we defined earlier. In other words, a generator's *iterator* is also an *iterable*!\n\n#### Stopping the Generator\n\nIn the previous example, it would appear the *iterator* instance for the `*something()` generator was basically left in a suspended state forever after the `break` in the loop was called.\n\nBut there's a hidden behavior that takes care of that for you. \"Abnormal completion\" (i.e., \"early termination\") of the `for..of` loop -- generally caused by a `break`, `return`, or an uncaught exception -- sends a signal to the generator's *iterator* for it to terminate.\n\n**Note:** Technically, the `for..of` loop also sends this signal to the *iterator* at the normal completion of the loop. 
For a generator, that's essentially a moot operation, as the generator's *iterator* had to complete first so the `for..of` loop completed. However, custom *iterators* might desire to receive this additional signal from `for..of` loop consumers.\n\nWhile a `for..of` loop will automatically send this signal, you may wish to send the signal manually to an *iterator*; you do this by calling `return(..)`.\n\nIf you specify a `try..finally` clause inside the generator, it will always be run even when the generator is externally completed. This is useful if you need to clean up resources (database connections, etc.):\n\n```js\nfunction *something() {\n\ttry {\n\t\tvar nextVal;\n\n\t\twhile (true) {\n\t\t\tif (nextVal === undefined) {\n\t\t\t\tnextVal = 1;\n\t\t\t}\n\t\t\telse {\n\t\t\t\tnextVal = (3 * nextVal) + 6;\n\t\t\t}\n\n\t\t\tyield nextVal;\n\t\t}\n\t}\n\t// cleanup clause\n\tfinally {\n\t\tconsole.log( \"cleaning up!\" );\n\t}\n}\n```\n\nThe earlier example with `break` in the `for..of` loop will trigger the `finally` clause. But you could instead manually terminate the generator's *iterator* instance from the outside with `return(..)`:\n\n```js\nvar it = something();\nfor (var v of it) {\n\tconsole.log( v );\n\n\t// don't let the loop run forever!\n\tif (v > 500) {\n\t\tconsole.log(\n\t\t\t// complete the generator's iterator\n\t\t\tit.return( \"Hello World\" ).value\n\t\t);\n\t\t// no `break` needed here\n\t}\n}\n// 1 9 33 105 321 969\n// cleaning up!\n// Hello World\n```\n\nWhen we call `it.return(..)`, it immediately terminates the generator, which of course runs the `finally` clause. Also, it sets the returned `value` to whatever you passed in to `return(..)`, which is how `\"Hello World\"` comes right back out. We also don't need to include a `break` now because the generator's *iterator* is set to `done:true`, so the `for..of` loop will terminate on its next iteration.\n\nGenerators owe their namesake mostly to this *consuming produced values* use. 
But again, that's just one of the uses for generators, and frankly not even the main one we're concerned with in the context of this book.\n\nBut now that we more fully understand some of the mechanics of how they work, we can *next* turn our attention to how generators apply to async concurrency.\n\n## Iterating Generators Asynchronously\n\nWhat do generators have to do with async coding patterns, fixing problems with callbacks, and the like? Let's get to answering that important question.\n\nWe should revisit one of our scenarios from Chapter 3. Let's recall the callback approach:\n\n```js\nfunction foo(x,y,cb) {\n\tajax(\n\t\t\"http://some.url.1/?x=\" + x + \"&y=\" + y,\n\t\tcb\n\t);\n}\n\nfoo( 11, 31, function(err,text) {\n\tif (err) {\n\t\tconsole.error( err );\n\t}\n\telse {\n\t\tconsole.log( text );\n\t}\n} );\n```\n\nIf we wanted to express this same task flow control with a generator, we could do:\n\n```js\nfunction foo(x,y) {\n\tajax(\n\t\t\"http://some.url.1/?x=\" + x + \"&y=\" + y,\n\t\tfunction(err,data){\n\t\t\tif (err) {\n\t\t\t\t// throw an error into `*main()`\n\t\t\t\tit.throw( err );\n\t\t\t}\n\t\t\telse {\n\t\t\t\t// resume `*main()` with received `data`\n\t\t\t\tit.next( data );\n\t\t\t}\n\t\t}\n\t);\n}\n\nfunction *main() {\n\ttry {\n\t\tvar text = yield foo( 11, 31 );\n\t\tconsole.log( text );\n\t}\n\tcatch (err) {\n\t\tconsole.error( err );\n\t}\n}\n\nvar it = main();\n\n// start it all up!\nit.next();\n```\n\nAt first glance, this snippet is longer, and perhaps a little more complex looking, than the callback snippet before it. But don't let that impression get you off track. The generator snippet is actually **much** better! But there's a lot going on for us to explain.\n\nFirst, let's look at this part of the code, which is the most important:\n\n```js\nvar text = yield foo( 11, 31 );\nconsole.log( text );\n```\n\nThink about how that code works for a moment. 
We're calling a normal function `foo(..)` and we're apparently able to get back the `text` from the Ajax call, even though it's asynchronous.\n\nHow is that possible? If you recall the beginning of Chapter 1, we had almost identical code:\n\n```js\nvar data = ajax( \"..url 1..\" );\nconsole.log( data );\n```\n\nAnd that code didn't work! Can you spot the difference? It's the `yield` used in a generator.\n\nThat's the magic! That's what allows us to have what appears to be blocking, synchronous code, but it doesn't actually block the whole program; it only pauses/blocks the code in the generator itself.\n\nIn `yield foo(11,31)`, first the `foo(11,31)` call is made, which returns nothing (aka `undefined`), so we're making a call to request data, but we're actually then doing `yield undefined`. That's OK, because the code is not currently relying on a `yield`ed value to do anything interesting. We'll revisit this point later in the chapter.\n\nWe're not using `yield` in a message passing sense here, only in a flow control sense to pause/block. Actually, it will have message passing, but only in one direction, after the generator is resumed.\n\nSo, the generator pauses at the `yield`, essentially asking the question, \"what value should I return to assign to the variable `text`?\" Who's going to answer that question?\n\nLook at `foo(..)`. If the Ajax request is successful, we call:\n\n```js\nit.next( data );\n```\n\nThat's resuming the generator with the response data, which means that our paused `yield` expression receives that value directly, and then as it restarts the generator code, that value gets assigned to the local variable `text`.\n\nPretty cool, huh?\n\nTake a step back and consider the implications. 
We have totally synchronous-looking code inside the generator (other than the `yield` keyword itself), but hidden behind the scenes, inside of `foo(..)`, the operations can complete asynchronously.\n\n**That's huge!** That's a nearly perfect solution to our previously stated problem with callbacks not being able to express asynchrony in a sequential, synchronous fashion that our brains can relate to.\n\nIn essence, we are abstracting the asynchrony away as an implementation detail, so that we can reason synchronously/sequentially about our flow control: \"Make an Ajax request, and when it finishes print out the response.\" And of course, we just expressed two steps in the flow control, but this same capability extends without bounds, to let us express however many steps we need to.\n\n**Tip:** This is such an important realization, just go back and read the last three paragraphs again to let it sink in!\n\n### Synchronous Error Handling\n\nBut the preceding generator code has even more goodness to *yield* to us. Let's turn our attention to the `try..catch` inside the generator:\n\n```js\ntry {\n\tvar text = yield foo( 11, 31 );\n\tconsole.log( text );\n}\ncatch (err) {\n\tconsole.error( err );\n}\n```\n\nHow does this work? The `foo(..)` call is asynchronously completing, and doesn't `try..catch` fail to catch asynchronous errors, as we looked at in Chapter 3?\n\nWe already saw how the `yield` lets the assignment statement pause to wait for `foo(..)` to finish, so that the completed response can be assigned to `text`. The awesome part is that this `yield` pausing *also* allows the generator to `catch` an error. 
We throw that error into the generator with this part of the earlier code listing:\n\n```js\nif (err) {\n\t// throw an error into `*main()`\n\tit.throw( err );\n}\n```\n\nThe `yield`-pause nature of generators means that not only do we get synchronous-looking `return` values from async function calls, but we can also synchronously `catch` errors from those async function calls!\n\nSo we've seen we can throw errors *into* a generator, but what about throwing errors *out of* a generator? Exactly as you'd expect:\n\n```js\nfunction *main() {\n\tvar x = yield \"Hello World\";\n\n\tyield x.toLowerCase();\t// cause an exception!\n}\n\nvar it = main();\n\nit.next().value;\t\t\t// Hello World\n\ntry {\n\tit.next( 42 );\n}\ncatch (err) {\n\tconsole.error( err );\t// TypeError\n}\n```\n\nOf course, we could have manually thrown an error with `throw ..` instead of causing an exception.\n\nWe can even `catch` the same error that we `throw(..)` into the generator, essentially giving the generator a chance to handle it but if it doesn't, the *iterator* code must handle it:\n\n```js\nfunction *main() {\n\tvar x = yield \"Hello World\";\n\n\t// never gets here\n\tconsole.log( x );\n}\n\nvar it = main();\n\nit.next();\n\ntry {\n\t// will `*main()` handle this error? we'll see!\n\tit.throw( \"Oops\" );\n}\ncatch (err) {\n\t// nope, didn't handle it!\n\tconsole.error( err );\t\t\t// Oops\n}\n```\n\nSynchronous-looking error handling (via `try..catch`) with async code is a huge win for readability and reason-ability.\n\n## Generators + Promises\n\nIn our previous discussion, we showed how generators can be iterated asynchronously, which is a huge step forward in sequential reason-ability over the spaghetti mess of callbacks. But we lost something very important: the trustability and composability of Promises (see Chapter 3)!\n\nDon't worry -- we can get that back. 
The best of all worlds in ES6 is to combine generators (synchronous-looking async code) with Promises (trustable and composable).\n\nBut how?\n\nRecall from Chapter 3 the Promise-based approach to our running Ajax example:\n\n```js\nfunction foo(x,y) {\n\treturn request(\n\t\t\"http://some.url.1/?x=\" + x + \"&y=\" + y\n\t);\n}\n\nfoo( 11, 31 )\n.then(\n\tfunction(text){\n\t\tconsole.log( text );\n\t},\n\tfunction(err){\n\t\tconsole.error( err );\n\t}\n);\n```\n\nIn our earlier generator code for the running Ajax example, `foo(..)` returned nothing (`undefined`), and our *iterator* control code didn't care about that `yield`ed value.\n\nBut here the Promise-aware `foo(..)` returns a promise after making the Ajax call. That suggests that we could construct a promise with `foo(..)` and then `yield` it from the generator, and then the *iterator* control code would receive that promise.\n\nBut what should the *iterator* do with the promise?\n\nIt should listen for the promise to resolve (fulfillment or rejection), and then either resume the generator with the fulfillment message or throw an error into the generator with the rejection reason.\n\nLet me repeat that, because it's so important. The natural way to get the most out of Promises and generators is **to `yield` a Promise**, and wire that Promise to control the generator's *iterator*.\n\nLet's give it a try! 
First, we'll put the Promise-aware `foo(..)` together with the generator `*main()`:\n\n```js\nfunction foo(x,y) {\n\treturn request(\n\t\t\"http://some.url.1/?x=\" + x + \"&y=\" + y\n\t);\n}\n\nfunction *main() {\n\ttry {\n\t\tvar text = yield foo( 11, 31 );\n\t\tconsole.log( text );\n\t}\n\tcatch (err) {\n\t\tconsole.error( err );\n\t}\n}\n```\n\nThe most powerful revelation in this refactor is that the code inside `*main()` **did not have to change at all!** Inside the generator, whatever values are `yield`ed out is just an opaque implementation detail, so we're not even aware it's happening, nor do we need to worry about it.\n\nBut how are we going to run `*main()` now? We still have some of the implementation plumbing work to do, to receive and wire up the `yield`ed promise so that it resumes the generator upon resolution. We'll start by trying that manually:\n\n```js\nvar it = main();\n\nvar p = it.next().value;\n\n// wait for the `p` promise to resolve\np.then(\n\tfunction(text){\n\t\tit.next( text );\n\t},\n\tfunction(err){\n\t\tit.throw( err );\n\t}\n);\n```\n\nActually, that wasn't so painful at all, was it?\n\nThis snippet should look very similar to what we did earlier with the manually wired generator controlled by the error-first callback. Instead of an `if (err) { it.throw..`, the promise already splits fulfillment (success) and rejection (failure) for us, but otherwise the *iterator* control is identical.\n\nNow, we've glossed over some important details.\n\nMost importantly, we took advantage of the fact that we knew that `*main()` only had one Promise-aware step in it. What if we wanted to be able to Promise-drive a generator no matter how many steps it has? We certainly don't want to manually write out the Promise chain differently for each generator! 
What would be much nicer is if there was a way to repeat (aka "loop" over) the iteration control, and each time a Promise comes out, wait on its resolution before continuing.

Also, what if the generator throws out an error (intentionally or accidentally) during the `it.next(..)` call? Should we quit, or should we `catch` it and send it right back in? Similarly, what if we `it.throw(..)` a Promise rejection into the generator, but it's not handled, and comes right back out?

### Promise-Aware Generator Runner

The more you start to explore this path, the more you realize, "wow, it'd be great if there was just some utility to do it for me." And you're absolutely correct. This is such an important pattern, and you don't want to get it wrong (or exhaust yourself repeating it over and over), so your best bet is to use a utility that is specifically designed to *run* Promise-`yield`ing generators in the manner we've illustrated.

Several Promise abstraction libraries provide just such a utility, including my *asynquence* library and its `runner(..)`, which will be discussed in Appendix A of this book.

But for the sake of learning and illustration, let's just define our own standalone utility that we'll call `run(..)`:

```js
// thanks to Benjamin Gruenbaum (@benjamingr on GitHub) for
// big improvements here!
function run(gen) {
	var args = [].slice.call( arguments, 1 ), it;

	// initialize the generator in the current context
	it = gen.apply( this, args );

	// return a promise for the generator completing
	return Promise.resolve()
		.then( function handleNext(value){
			// run to the next yielded value
			var next = it.next( value );

			return (function handleResult(next){
				// generator has completed running?
				if (next.done) {
					return next.value;
				}
				// otherwise keep going
				else {
					return Promise.resolve( next.value )
						.then(
							// resume the async loop on
							// success, sending the resolved
							// value back into the generator
							handleNext,

							// if `value` is a rejected
							// promise, propagate error back
							// into the generator for its own
							// error handling
							function handleErr(err) {
								return Promise.resolve(
									it.throw( err )
								)
								.then( handleResult );
							}
						);
				}
			})(next);
		} );
}
```

As you can see, it's quite a bit more complex than you'd probably want to author yourself, and you especially wouldn't want to repeat this code for each generator you use. So, a utility/library helper is definitely the way to go. Nevertheless, I encourage you to spend a few minutes studying that code listing to get a better sense of how to manage the generator+Promise negotiation.

How would you use `run(..)` with `*main()` in our *running* Ajax example?

```js
function *main() {
	// ..
}

run( main );
```

That's it! The way we wired `run(..)`, it will automatically advance the generator you pass to it, asynchronously until completion.

**Note:** The `run(..)` we defined returns a promise which is wired to resolve once the generator is complete, or to receive an uncaught exception if the generator doesn't handle it. We don't show that capability here, but we'll come back to it later in the chapter.

#### ES7: `async` and `await`?

The preceding pattern -- generators yielding Promises that then control the generator's *iterator* to advance it to completion -- is such a powerful and useful approach, it would be nicer if we could do it without the clutter of the library utility helper (aka `run(..)`).

There's probably good news on that front.
At the time of this writing, there's early but strong support for a proposal for a further syntactic addition in this realm for the post-ES6, ES7-ish timeframe. Obviously, it's too early to guarantee the details, but there's a pretty decent chance it will shake out similar to the following:

```js
function foo(x,y) {
	return request(
		"http://some.url.1/?x=" + x + "&y=" + y
	);
}

async function main() {
	try {
		var text = await foo( 11, 31 );
		console.log( text );
	}
	catch (err) {
		console.error( err );
	}
}

main();
```

As you can see, there's no `run(..)` call (meaning no need for a library utility!) to invoke and drive `main()` -- it's just called as a normal function. Also, `main()` isn't declared as a generator function anymore; it's a new kind of function: `async function`. And finally, instead of `yield`ing a Promise, we `await` for it to resolve.

The `async function` automatically knows what to do if you `await` a Promise -- it will pause the function (just like with generators) until the Promise resolves. We didn't illustrate it in this snippet, but calling an async function like `main()` automatically returns a promise that's resolved whenever the function finishes completely.

**Tip:** The `async` / `await` syntax should look very familiar to readers with experience in C#, because it's basically identical.

The proposal essentially codifies support for the pattern we've already derived, into a syntactic mechanism: combining Promises with sync-looking flow control code. That's the best of both worlds combined, to effectively address practically all of the major concerns we outlined with callbacks.

The mere fact that such an ES7-ish proposal already exists and has early support and enthusiasm is a major vote of confidence in the future importance of this async pattern.

### Promise Concurrency in Generators

So far, all we've demonstrated is a single-step async flow with Promises+generators.
But real-world code will often have many async steps.\n\nIf you're not careful, the sync-looking style of generators may lull you into complacency with how you structure your async concurrency, leading to suboptimal performance patterns. So we want to spend a little time exploring the options.\n\nImagine a scenario where you need to fetch data from two different sources, then combine those responses to make a third request, and finally print out the last response. We explored a similar scenario with Promises in Chapter 3, but let's reconsider it in the context of generators.\n\nYour first instinct might be something like:\n\n```js\nfunction *foo() {\n\tvar r1 = yield request( \"http://some.url.1\" );\n\tvar r2 = yield request( \"http://some.url.2\" );\n\n\tvar r3 = yield request(\n\t\t\"http://some.url.3/?v=\" + r1 + \",\" + r2\n\t);\n\n\tconsole.log( r3 );\n}\n\n// use previously defined `run(..)` utility\nrun( foo );\n```\n\nThis code will work, but in the specifics of our scenario, it's not optimal. Can you spot why?\n\nBecause the `r1` and `r2` requests can -- and for performance reasons, *should* -- run concurrently, but in this code they will run sequentially; the `\"http://some.url.2\"` URL isn't Ajax fetched until after the `\"http://some.url.1\"` request is finished. These two requests are independent, so the better performance approach would likely be to have them run at the same time.\n\nBut how exactly would you do that with a generator and `yield`? 
We know that `yield` is only a single pause point in the code, so you can't really do two pauses at the same time.\n\nThe most natural and effective answer is to base the async flow on Promises, specifically on their capability to manage state in a time-independent fashion (see \"Future Value\" in Chapter 3).\n\nThe simplest approach:\n\n```js\nfunction *foo() {\n\t// make both requests \"in parallel\"\n\tvar p1 = request( \"http://some.url.1\" );\n\tvar p2 = request( \"http://some.url.2\" );\n\n\t// wait until both promises resolve\n\tvar r1 = yield p1;\n\tvar r2 = yield p2;\n\n\tvar r3 = yield request(\n\t\t\"http://some.url.3/?v=\" + r1 + \",\" + r2\n\t);\n\n\tconsole.log( r3 );\n}\n\n// use previously defined `run(..)` utility\nrun( foo );\n```\n\nWhy is this different from the previous snippet? Look at where the `yield` is and is not. `p1` and `p2` are promises for Ajax requests made concurrently (aka \"in parallel\"). It doesn't matter which one finishes first, because promises will hold onto their resolved state for as long as necessary.\n\nThen we use two subsequent `yield` statements to wait for and retrieve the resolutions from the promises (into `r1` and `r2`, respectively). If `p1` resolves first, the `yield p1` resumes first then waits on the `yield p2` to resume. If `p2` resolves first, it will just patiently hold onto that resolution value until asked, but the `yield p1` will hold on first, until `p1` resolves.\n\nEither way, both `p1` and `p2` will run concurrently, and both have to finish, in either order, before the `r3 = yield request..` Ajax request will be made.\n\nIf that flow control processing model sounds familiar, it's basically the same as what we identified in Chapter 3 as the \"gate\" pattern, enabled by the `Promise.all([ .. ])` utility. 
So, we could also express the flow control like this:\n\n```js\nfunction *foo() {\n\t// make both requests \"in parallel,\" and\n\t// wait until both promises resolve\n\tvar results = yield Promise.all( [\n\t\trequest( \"http://some.url.1\" ),\n\t\trequest( \"http://some.url.2\" )\n\t] );\n\n\tvar r1 = results[0];\n\tvar r2 = results[1];\n\n\tvar r3 = yield request(\n\t\t\"http://some.url.3/?v=\" + r1 + \",\" + r2\n\t);\n\n\tconsole.log( r3 );\n}\n\n// use previously defined `run(..)` utility\nrun( foo );\n```\n\n**Note:** As we discussed in Chapter 3, we can even use ES6 destructuring assignment to simplify the `var r1 = .. var r2 = ..` assignments, with `var [r1,r2] = results`.\n\nIn other words, all of the concurrency capabilities of Promises are available to us in the generator+Promise approach. So in any place where you need more than sequential this-then-that async flow control steps, Promises are likely your best bet.\n\n#### Promises, Hidden\n\nAs a word of stylistic caution, be careful about how much Promise logic you include **inside your generators**. 
The whole point of using generators for asynchrony in the way we've described is to create simple, sequential, sync-looking code, and to hide as much of the details of asynchrony away from that code as possible.\n\nFor example, this might be a cleaner approach:\n\n```js\n// note: normal function, not generator\nfunction bar(url1,url2) {\n\treturn Promise.all( [\n\t\trequest( url1 ),\n\t\trequest( url2 )\n\t] );\n}\n\nfunction *foo() {\n\t// hide the Promise-based concurrency details\n\t// inside `bar(..)`\n\tvar results = yield bar(\n\t\t\"http://some.url.1\",\n\t\t\"http://some.url.2\"\n\t);\n\n\tvar r1 = results[0];\n\tvar r2 = results[1];\n\n\tvar r3 = yield request(\n\t\t\"http://some.url.3/?v=\" + r1 + \",\" + r2\n\t);\n\n\tconsole.log( r3 );\n}\n\n// use previously defined `run(..)` utility\nrun( foo );\n```\n\nInside `*foo()`, it's cleaner and clearer that all we're doing is just asking `bar(..)` to get us some `results`, and we'll `yield`-wait on that to happen. We don't have to care that under the covers a `Promise.all([ .. ])` Promise composition will be used to make that happen.\n\n**We treat asynchrony, and indeed Promises, as an implementation detail.**\n\nHiding your Promise logic inside a function that you merely call from your generator is especially useful if you're going to do a sophisticated series flow-control. For example:\n\n```js\nfunction bar() {\n\treturn\tPromise.all( [\n\t\t  baz( .. )\n\t\t  .then( .. ),\n\t\t  Promise.race( [ .. ] )\n\t\t] )\n\t\t.then( .. )\n}\n```\n\nThat kind of logic is sometimes required, and if you dump it directly inside your generator(s), you've defeated most of the reason why you would want to use generators in the first place. 
We *should* intentionally abstract such details away from our generator code so that they don't clutter up the higher level task expression.\n\nBeyond creating code that is both functional and performant, you should also strive to make code that is as reason-able and maintainable as possible.\n\n**Note:** Abstraction is not *always* a healthy thing for programming -- many times it can increase complexity in exchange for terseness. But in this case, I believe it's much healthier for your generator+Promise async code than the alternatives. As with all such advice, though, pay attention to your specific situations and make proper decisions for you and your team.\n\n## Generator Delegation\n\nIn the previous section, we showed calling regular functions from inside a generator, and how that remains a useful technique for abstracting away implementation details (like async Promise flow). But the main drawback of using a normal function for this task is that it has to behave by the normal function rules, which means it cannot pause itself with `yield` like a generator can.\n\nIt may then occur to you that you might try to call one generator from another generator, using our `run(..)` helper, such as:\n\n```js\nfunction *foo() {\n\tvar r2 = yield request( \"http://some.url.2\" );\n\tvar r3 = yield request( \"http://some.url.3/?v=\" + r2 );\n\n\treturn r3;\n}\n\nfunction *bar() {\n\tvar r1 = yield request( \"http://some.url.1\" );\n\n\t// \"delegating\" to `*foo()` via `run(..)`\n\tvar r3 = yield run( foo );\n\n\tconsole.log( r3 );\n}\n\nrun( bar );\n```\n\nWe run `*foo()` inside of `*bar()` by using our `run(..)` utility again. 
We take advantage here of the fact that the `run(..)` we defined earlier returns a promise which is resolved when its generator is run to completion (or errors out). So if we `yield` out the promise from the inner `run(foo)` call, the outer `run(..)` instance automatically pauses `*bar()` until `*foo()` finishes.

But there's an even better way to integrate calling `*foo()` into `*bar()`, and it's called `yield`-delegation. The special syntax for `yield`-delegation is: `yield * __` (notice the extra `*`). Before we see it work in our previous example, let's look at a simpler scenario:

```js
function *foo() {
	console.log( "`*foo()` starting" );
	yield 3;
	yield 4;
	console.log( "`*foo()` finished" );
}

function *bar() {
	yield 1;
	yield 2;
	yield *foo();	// `yield`-delegation!
	yield 5;
}

var it = bar();

it.next().value;	// 1
it.next().value;	// 2
it.next().value;	// `*foo()` starting
					// 3
it.next().value;	// 4
it.next().value;	// `*foo()` finished
					// 5
```

**Note:** Similar to a note earlier in the chapter where I explained why I prefer `function *foo() ..` instead of `function* foo() ..`, I also prefer -- differing from most other documentation on the topic -- to say `yield *foo()` instead of `yield* foo()`. The placement of the `*` is purely stylistic and up to your best judgment. But I find the consistency of styling attractive.

How does the `yield *foo()` delegation work?

First, calling `foo()` creates an *iterator* exactly as we've already seen. Then, `yield *` delegates/transfers the *iterator* instance control (of the present `*bar()` generator) over to this other `*foo()` *iterator*.

So, the first two `it.next()` calls are controlling `*bar()`, but when we make the third `it.next()` call, now `*foo()` starts up, and now we're controlling `*foo()` instead of `*bar()`.
That's why it's called delegation -- `*bar()` delegated its iteration control to `*foo()`.\n\nAs soon as the `it` *iterator* control exhausts the entire `*foo()` *iterator*, it automatically returns to controlling `*bar()`.\n\nSo now back to the previous example with the three sequential Ajax requests:\n\n```js\nfunction *foo() {\n\tvar r2 = yield request( \"http://some.url.2\" );\n\tvar r3 = yield request( \"http://some.url.3/?v=\" + r2 );\n\n\treturn r3;\n}\n\nfunction *bar() {\n\tvar r1 = yield request( \"http://some.url.1\" );\n\n\t// \"delegating\" to `*foo()` via `yield*`\n\tvar r3 = yield *foo();\n\n\tconsole.log( r3 );\n}\n\nrun( bar );\n```\n\nThe only difference between this snippet and the version used earlier is the use of `yield *foo()` instead of the previous `yield run(foo)`.\n\n**Note:** `yield *` yields iteration control, not generator control; when you invoke the `*foo()` generator, you're now `yield`-delegating to its *iterator*. But you can actually `yield`-delegate to any *iterable*; `yield *[1,2,3]` would consume the default *iterator* for the `[1,2,3]` array value.\n\n### Why Delegation?\n\nThe purpose of `yield`-delegation is mostly code organization, and in that way is symmetrical with normal function calling.\n\nImagine two modules that respectively provide methods `foo()` and `bar()`, where `bar()` calls `foo()`. The reason the two are separate is generally because the proper organization of code for the program calls for them to be in separate functions. For example, there may be cases where `foo()` is called standalone, and other places where `bar()` calls `foo()`.\n\nFor all these exact same reasons, keeping generators separate aids in program readability, maintenance, and debuggability. 
In that respect, `yield *` is a syntactic shortcut for manually iterating over the steps of `*foo()` while inside of `*bar()`.\n\nSuch a manual approach would be especially complex if the steps in `*foo()` were asynchronous, which is why you'd probably need to use that `run(..)` utility to do it. And as we've shown, `yield *foo()` eliminates the need for a sub-instance of the `run(..)` utility (like `run(foo)`).\n\n### Delegating Messages\n\nYou may wonder how this `yield`-delegation works not just with *iterator* control but with the two-way message passing. Carefully follow the flow of messages in and out, through the `yield`-delegation:\n\n```js\nfunction *foo() {\n\tconsole.log( \"inside `*foo()`:\", yield \"B\" );\n\n\tconsole.log( \"inside `*foo()`:\", yield \"C\" );\n\n\treturn \"D\";\n}\n\nfunction *bar() {\n\tconsole.log( \"inside `*bar()`:\", yield \"A\" );\n\n\t// `yield`-delegation!\n\tconsole.log( \"inside `*bar()`:\", yield *foo() );\n\n\tconsole.log( \"inside `*bar()`:\", yield \"E\" );\n\n\treturn \"F\";\n}\n\nvar it = bar();\n\nconsole.log( \"outside:\", it.next().value );\n// outside: A\n\nconsole.log( \"outside:\", it.next( 1 ).value );\n// inside `*bar()`: 1\n// outside: B\n\nconsole.log( \"outside:\", it.next( 2 ).value );\n// inside `*foo()`: 2\n// outside: C\n\nconsole.log( \"outside:\", it.next( 3 ).value );\n// inside `*foo()`: 3\n// inside `*bar()`: D\n// outside: E\n\nconsole.log( \"outside:\", it.next( 4 ).value );\n// inside `*bar()`: 4\n// outside: F\n```\n\nPay particular attention to the processing steps after the `it.next(3)` call:\n\n1. The `3` value is passed (through the `yield`-delegation in `*bar()`) into the waiting `yield \"C\"` expression inside of `*foo()`.\n2. `*foo()` then calls `return \"D\"`, but this value doesn't get returned all the way back to the outside `it.next(3)` call.\n3. 
Instead, the `\"D\"` value is sent as the result of the waiting `yield *foo()` expression inside of `*bar()` -- this `yield`-delegation expression has essentially been paused while all of `*foo()` was exhausted. So `\"D\"` ends up inside of `*bar()` for it to print out.\n4. `yield \"E\"` is called inside of `*bar()`, and the `\"E\"` value is yielded to the outside as the result of the `it.next(3)` call.\n\nFrom the perspective of the external *iterator* (`it`), it doesn't appear any differently between controlling the initial generator or a delegated one.\n\nIn fact, `yield`-delegation doesn't even have to be directed to another generator; it can just be directed to a non-generator, general *iterable*. For example:\n\n```js\nfunction *bar() {\n\tconsole.log( \"inside `*bar()`:\", yield \"A\" );\n\n\t// `yield`-delegation to a non-generator!\n\tconsole.log( \"inside `*bar()`:\", yield *[ \"B\", \"C\", \"D\" ] );\n\n\tconsole.log( \"inside `*bar()`:\", yield \"E\" );\n\n\treturn \"F\";\n}\n\nvar it = bar();\n\nconsole.log( \"outside:\", it.next().value );\n// outside: A\n\nconsole.log( \"outside:\", it.next( 1 ).value );\n// inside `*bar()`: 1\n// outside: B\n\nconsole.log( \"outside:\", it.next( 2 ).value );\n// outside: C\n\nconsole.log( \"outside:\", it.next( 3 ).value );\n// outside: D\n\nconsole.log( \"outside:\", it.next( 4 ).value );\n// inside `*bar()`: undefined\n// outside: E\n\nconsole.log( \"outside:\", it.next( 5 ).value );\n// inside `*bar()`: 5\n// outside: F\n```\n\nNotice the differences in where the messages were received/reported between this example and the one previous.\n\nMost strikingly, the default `array` *iterator* doesn't care about any messages sent in via `next(..)` calls, so the values `2`, `3`, and `4` are essentially ignored. 
Also, because that *iterator* has no explicit `return` value (unlike the previously used `*foo()`), the `yield *` expression gets an `undefined` when it finishes.\n\n#### Exceptions Delegated, Too!\n\nIn the same way that `yield`-delegation transparently passes messages through in both directions, errors/exceptions also pass in both directions:\n\n```js\nfunction *foo() {\n\ttry {\n\t\tyield \"B\";\n\t}\n\tcatch (err) {\n\t\tconsole.log( \"error caught inside `*foo()`:\", err );\n\t}\n\n\tyield \"C\";\n\n\tthrow \"D\";\n}\n\nfunction *bar() {\n\tyield \"A\";\n\n\ttry {\n\t\tyield *foo();\n\t}\n\tcatch (err) {\n\t\tconsole.log( \"error caught inside `*bar()`:\", err );\n\t}\n\n\tyield \"E\";\n\n\tyield *baz();\n\n\t// note: can't get here!\n\tyield \"G\";\n}\n\nfunction *baz() {\n\tthrow \"F\";\n}\n\nvar it = bar();\n\nconsole.log( \"outside:\", it.next().value );\n// outside: A\n\nconsole.log( \"outside:\", it.next( 1 ).value );\n// outside: B\n\nconsole.log( \"outside:\", it.throw( 2 ).value );\n// error caught inside `*foo()`: 2\n// outside: C\n\nconsole.log( \"outside:\", it.next( 3 ).value );\n// error caught inside `*bar()`: D\n// outside: E\n\ntry {\n\tconsole.log( \"outside:\", it.next( 4 ).value );\n}\ncatch (err) {\n\tconsole.log( \"error caught outside:\", err );\n}\n// error caught outside: F\n```\n\nSome things to note from this snippet:\n\n1. When we call `it.throw(2)`, it sends the error message `2` into `*bar()`, which delegates that to `*foo()`, which then `catch`es it and handles it gracefully. Then, the `yield \"C\"` sends `\"C\"` back out as the return `value` from the `it.throw(2)` call.\n2. The `\"D\"` value that's next `throw`n from inside `*foo()` propagates out to `*bar()`, which `catch`es it and handles it gracefully. Then the `yield \"E\"` sends `\"E\"` back out as the return `value` from the `it.next(3)` call.\n3. 
Next, the exception `throw`n from `*baz()` isn't caught in `*bar()` -- though we did `catch` it outside -- so both `*baz()` and `*bar()` are set to a completed state. After this snippet, you would not be able to get the `\"G\"` value out with any subsequent `next(..)` call(s) -- they will just return `undefined` for `value`.\n\n### Delegating Asynchrony\n\nLet's finally get back to our earlier `yield`-delegation example with the multiple sequential Ajax requests:\n\n```js\nfunction *foo() {\n\tvar r2 = yield request( \"http://some.url.2\" );\n\tvar r3 = yield request( \"http://some.url.3/?v=\" + r2 );\n\n\treturn r3;\n}\n\nfunction *bar() {\n\tvar r1 = yield request( \"http://some.url.1\" );\n\n\tvar r3 = yield *foo();\n\n\tconsole.log( r3 );\n}\n\nrun( bar );\n```\n\nInstead of calling `yield run(foo)` inside of `*bar()`, we just call `yield *foo()`.\n\nIn the previous version of this example, the Promise mechanism (controlled by `run(..)`) was used to transport the value from `return r3` in `*foo()` to the local variable `r3` inside `*bar()`. Now, that value is just returned back directly via the `yield *` mechanics.\n\nOtherwise, the behavior is pretty much identical.\n\n### Delegating \"Recursion\"\n\nOf course, `yield`-delegation can keep following as many delegation steps as you wire up. You could even use `yield`-delegation for async-capable generator \"recursion\" -- a generator `yield`-delegating to itself:\n\n```js\nfunction *foo(val) {\n\tif (val > 1) {\n\t\t// generator recursion\n\t\tval = yield *foo( val - 1 );\n\t}\n\n\treturn yield request( \"http://some.url/?v=\" + val );\n}\n\nfunction *bar() {\n\tvar r1 = yield *foo( 3 );\n\tconsole.log( r1 );\n}\n\nrun( bar );\n```\n\n**Note:** Our `run(..)` utility could have been called with `run( foo, 3 )`, because it supports additional parameters being passed along to the initialization of the generator. 
However, we used a parameter-free `*bar()` here to highlight the flexibility of `yield *`.\n\nWhat processing steps follow from that code? Hang on, this is going to be quite intricate to describe in detail:\n\n1. `run(bar)` starts up the `*bar()` generator.\n2. `foo(3)` creates an *iterator* for `*foo(..)` and passes `3` as its `val` parameter.\n3. Because `3 > 1`, `foo(2)` creates another *iterator* and passes in `2` as its `val` parameter.\n4. Because `2 > 1`, `foo(1)` creates yet another *iterator* and passes in `1` as its `val` parameter.\n5. `1 > 1` is `false`, so we next call `request(..)` with the `1` value, and get a promise back for that first Ajax call.\n6. That promise is `yield`ed out, which comes back to the `*foo(2)` generator instance.\n7. The `yield *` passes that promise back out to the `*foo(3)` generator instance. Another `yield *` passes the promise out to the `*bar()` generator instance. And yet again another `yield *` passes the promise out to the `run(..)` utility, which will wait on that promise (for the first Ajax request) to proceed.\n8. When the promise resolves, its fulfillment message is sent to resume `*bar()`, which passes through the `yield *` into the `*foo(3)` instance, which then passes through the `yield *` to the `*foo(2)` generator instance, which then passes through the `yield *` to the normal `yield` that's waiting in the `*foo(1)` generator instance.\n9. That first call's Ajax response is now immediately `return`ed from the `*foo(1)` generator instance, which sends that value back as the result of the `yield *` expression in the `*foo(2)` instance, and assigned to its local `val` variable.\n10. Inside `*foo(2)`, a second Ajax request is made with `request(..)`, whose promise is `yield`ed back to the `*foo(3)` instance, and then `yield *` propagates all the way out to `run(..)` (step 7 again). 
When the promise resolves, the second Ajax response propagates all the way back into the `*foo(2)` generator instance, and is assigned to its local `val` variable.\n11. Finally, the third Ajax request is made with `request(..)`, its promise goes out to `run(..)`, and then its resolution value comes all the way back, which is then `return`ed so that it comes back to the waiting `yield *` expression in `*bar()`.\n\nPhew! A lot of crazy mental juggling, huh? You might want to read through that a few more times, and then go grab a snack to clear your head!\n\n## Generator Concurrency\n\nAs we discussed in both Chapter 1 and earlier in this chapter, two simultaneously running \"processes\" can cooperatively interleave their operations, and many times this can *yield* (pun intended) very powerful asynchrony expressions.\n\nFrankly, our earlier examples of concurrency interleaving of multiple generators showed how to make it really confusing. But we hinted that there's places where this capability is quite useful.\n\nRecall a scenario we looked at in Chapter 1, where two different simultaneous Ajax response handlers needed to coordinate with each other to make sure that the data communication was not a race condition. We slotted the responses into the `res` array like this:\n\n```js\nfunction response(data) {\n\tif (data.url == \"http://some.url.1\") {\n\t\tres[0] = data;\n\t}\n\telse if (data.url == \"http://some.url.2\") {\n\t\tres[1] = data;\n\t}\n}\n```\n\nBut how can we use multiple generators concurrently for this scenario?\n\n```js\n// `request(..)` is a Promise-aware Ajax utility\n\nvar res = [];\n\nfunction *reqData(url) {\n\tres.push(\n\t\tyield request( url )\n\t);\n}\n```\n\n**Note:** We're going to use two instances of the `*reqData(..)` generator here, but there's no difference to running a single instance of two different generators; both approaches are reasoned about identically. 
We'll see two different generators coordinating in just a bit.\n\nInstead of having to manually sort out `res[0]` and `res[1]` assignments, we'll use coordinated ordering so that `res.push(..)` properly slots the values in the expected and predictable order. The expressed logic thus should feel a bit cleaner.\n\nBut how will we actually orchestrate this interaction? First, let's just do it manually, with Promises:\n\n```js\nvar it1 = reqData( \"http://some.url.1\" );\nvar it2 = reqData( \"http://some.url.2\" );\n\nvar p1 = it1.next().value;\nvar p2 = it2.next().value;\n\np1\n.then( function(data){\n\tit1.next( data );\n\treturn p2;\n} )\n.then( function(data){\n\tit2.next( data );\n} );\n```\n\n`*reqData(..)`'s two instances are both started to make their Ajax requests, then paused with `yield`. Then we choose to resume the first instance when `p1` resolves, and then `p2`'s resolution will restart the second instance. In this way, we use Promise orchestration to ensure that `res[0]` will have the first response and `res[1]` will have the second response.\n\nBut frankly, this is awfully manual, and it doesn't really let the generators orchestrate themselves, which is where the true power can lie. 
Let's try it a different way:\n\n```js\n// `request(..)` is a Promise-aware Ajax utility\n\nvar res = [];\n\nfunction *reqData(url) {\n\tvar data = yield request( url );\n\n\t// transfer control\n\tyield;\n\n\tres.push( data );\n}\n\nvar it1 = reqData( \"http://some.url.1\" );\nvar it2 = reqData( \"http://some.url.2\" );\n\nvar p1 = it1.next().value;\nvar p2 = it2.next().value;\n\np1.then( function(data){\n\tit1.next( data );\n} );\n\np2.then( function(data){\n\tit2.next( data );\n} );\n\nPromise.all( [p1,p2] )\n.then( function(){\n\tit1.next();\n\tit2.next();\n} );\n```\n\nOK, this is a bit better (though still manual!), because now the two instances of `*reqData(..)` run truly concurrently, and (at least for the first part) independently.\n\nIn the previous snippet, the second instance was not given its data until after the first instance was totally finished. But here, both instances receive their data as soon as their respective responses come back, and then each instance does another `yield` for control transfer purposes. We then choose what order to resume them in the `Promise.all([ .. ])` handler.\n\nWhat may not be as obvious is that this approach hints at an easier form for a reusable utility, because of the symmetry. We can do even better. Let's imagine using a utility called `runAll(..)`:\n\n```js\n// `request(..)` is a Promise-aware Ajax utility\n\nvar res = [];\n\nrunAll(\n\tfunction*(){\n\t\tvar p1 = request( \"http://some.url.1\" );\n\n\t\t// transfer control\n\t\tyield;\n\n\t\tres.push( yield p1 );\n\t},\n\tfunction*(){\n\t\tvar p2 = request( \"http://some.url.2\" );\n\n\t\t// transfer control\n\t\tyield;\n\n\t\tres.push( yield p2 );\n\t}\n);\n```\n\n**Note:** We're not including a code listing for `runAll(..)` as it is not only long enough to bog down the text, but is an extension of the logic we've already implemented in `run(..)` earlier. 
So, as a good supplementary exercise for the reader, try your hand at evolving the code from `run(..)` to work like the imagined `runAll(..)`. Also, my *asynquence* library provides a previously mentioned `runner(..)` utility with this kind of capability already built in, and will be discussed in Appendix A of this book.\n\nHere's how the processing inside `runAll(..)` would operate:\n\n1. The first generator gets a promise for the first Ajax response from `\"http://some.url.1\"`, then `yield`s control back to the `runAll(..)` utility.\n2. The second generator runs and does the same for `\"http://some.url.2\"`, `yield`ing control back to the `runAll(..)` utility.\n3. The first generator resumes, and then `yield`s out its promise `p1`. The `runAll(..)` utility does the same in this case as our previous `run(..)`, in that it waits on that promise to resolve, then resumes the same generator (no control transfer!). When `p1` resolves, `runAll(..)` resumes the first generator again with that resolution value, and then `res[0]` is given its value. When the first generator then finishes, that's an implicit transfer of control.\n4. The second generator resumes, `yield`s out its promise `p2`, and waits for it to resolve. Once it does, `runAll(..)` resumes the second generator with that value, and `res[1]` is set.\n\nIn this running example, we use an outer variable called `res` to store the results of the two different Ajax responses -- that's our concurrency coordination making that possible.\n\nBut it might be quite helpful to further extend `runAll(..)` to provide an inner variable space for the multiple generator instances to *share*, such as an empty object we'll call `data` below. 
Also, it could take non-Promise values that are `yield`ed and hand them off to the next generator.\n\nConsider:\n\n```js\n// `request(..)` is a Promise-aware Ajax utility\n\nrunAll(\n\tfunction*(data){\n\t\tdata.res = [];\n\n\t\t// transfer control (and message pass)\n\t\tvar url1 = yield \"http://some.url.2\";\n\n\t\tvar p1 = request( url1 ); // \"http://some.url.1\"\n\n\t\t// transfer control\n\t\tyield;\n\n\t\tdata.res.push( yield p1 );\n\t},\n\tfunction*(data){\n\t\t// transfer control (and message pass)\n\t\tvar url2 = yield \"http://some.url.1\";\n\n\t\tvar p2 = request( url2 ); // \"http://some.url.2\"\n\n\t\t// transfer control\n\t\tyield;\n\n\t\tdata.res.push( yield p2 );\n\t}\n);\n```\n\nIn this formulation, the two generators are not just coordinating control transfer, but actually communicating with each other, both through `data.res` and the `yield`ed messages that trade `url1` and `url2` values. That's incredibly powerful!\n\nSuch a realization also serves as a conceptual base for a more sophisticated asynchrony technique called CSP (Communicating Sequential Processes), which we will cover in Appendix B of this book.\n\n## Thunks\n\nSo far, we've made the assumption that `yield`ing a Promise from a generator -- and having that Promise resume the generator via a helper utility like `run(..)` -- was the best possible way to manage asynchrony with generators. To be clear, it is.\n\nBut we skipped over another pattern that has some mildly widespread adoption, so in the interest of completeness we'll take a brief look at it.\n\nIn general computer science, there's an old pre-JS concept called a \"thunk.\" Without getting bogged down in the historical nature, a narrow expression of a thunk in JS is a function that -- without any parameters -- is wired to call another function.\n\nIn other words, you wrap a function definition around a function call -- with any parameters it needs -- to *defer* the execution of that call, and that wrapping function is a thunk. 
When you later execute the thunk, you end up calling the original function.\n\nFor example:\n\n```js\nfunction foo(x,y) {\n\treturn x + y;\n}\n\nfunction fooThunk() {\n\treturn foo( 3, 4 );\n}\n\n// later\n\nconsole.log( fooThunk() );\t// 7\n```\n\nSo, a synchronous thunk is pretty straightforward. But what about an async thunk? We can essentially extend the narrow thunk definition to include it receiving a callback.\n\nConsider:\n\n```js\nfunction foo(x,y,cb) {\n\tsetTimeout( function(){\n\t\tcb( x + y );\n\t}, 1000 );\n}\n\nfunction fooThunk(cb) {\n\tfoo( 3, 4, cb );\n}\n\n// later\n\nfooThunk( function(sum){\n\tconsole.log( sum );\t\t// 7\n} );\n```\n\nAs you can see, `fooThunk(..)` only expects a `cb(..)` parameter, as it already has values `3` and `4` (for `x` and `y`, respectively) pre-specified and ready to pass to `foo(..)`. A thunk is just waiting around patiently for the last piece it needs to do its job: the callback.\n\nYou don't want to make thunks manually, though. So, let's invent a utility that does this wrapping for us.\n\nConsider:\n\n```js\nfunction thunkify(fn) {\n\tvar args = [].slice.call( arguments, 1 );\n\treturn function(cb) {\n\t\targs.push( cb );\n\t\treturn fn.apply( null, args );\n\t};\n}\n\nvar fooThunk = thunkify( foo, 3, 4 );\n\n// later\n\nfooThunk( function(sum) {\n\tconsole.log( sum );\t\t// 7\n} );\n```\n\n**Tip:** Here we assume that the original (`foo(..)`) function signature expects its callback in the last position, with any other parameters coming before it. This is a pretty ubiquitous \"standard\" for async JS function design. You might call it \"callback-last style.\" If for some reason you had a need to handle \"callback-first style\" signatures, you would just make a utility that used `args.unshift(..)` instead of `args.push(..)`.\n\nThe preceding formulation of `thunkify(..)` takes both the `foo(..)` function reference, and any parameters it needs, and returns back the thunk itself (`fooThunk(..)`). 
However, that's not the typical approach you'll find to thunks in JS.\n\nInstead of `thunkify(..)` making the thunk itself, typically -- if not perplexingly -- the `thunkify(..)` utility would produce a function that produces thunks.\n\nUhhhh... yeah.\n\nConsider:\n\n```js\nfunction thunkify(fn) {\n\treturn function() {\n\t\tvar args = [].slice.call( arguments );\n\t\treturn function(cb) {\n\t\t\targs.push( cb );\n\t\t\treturn fn.apply( null, args );\n\t\t};\n\t};\n}\n```\n\nThe main difference here is the extra `return function() { .. }` layer. Here's how its usage differs:\n\n```js\nvar whatIsThis = thunkify( foo );\n\nvar fooThunk = whatIsThis( 3, 4 );\n\n// later\n\nfooThunk( function(sum) {\n\tconsole.log( sum );\t\t// 7\n} );\n```\n\nObviously, the big question this snippet implies is what is `whatIsThis` properly called? It's not the thunk, it's the thing that will produce thunks from `foo(..)` calls. It's kind of like a \"factory\" for \"thunks.\" There doesn't seem to be any kind of standard agreement for naming such a thing.\n\nSo, my proposal is \"thunkory\" (\"thunk\" + \"factory\").  So, `thunkify(..)` produces a thunkory, and a thunkory produces thunks. That reasoning is symmetric to my proposal for \"promisory\" in Chapter 3:\n\n```js\nvar fooThunkory = thunkify( foo );\n\nvar fooThunk1 = fooThunkory( 3, 4 );\nvar fooThunk2 = fooThunkory( 5, 6 );\n\n// later\n\nfooThunk1( function(sum) {\n\tconsole.log( sum );\t\t// 7\n} );\n\nfooThunk2( function(sum) {\n\tconsole.log( sum );\t\t// 11\n} );\n```\n\n**Note:** The running `foo(..)` example expects a style of callback that's not \"error-first style.\" Of course, \"error-first style\" is much more common. If `foo(..)` had some sort of legitimate error-producing expectation, we could change it to expect and use an error-first callback. None of the subsequent `thunkify(..)` machinery cares what style of callback is assumed. 
The only difference in usage would be `fooThunk1(function(err,sum){..`.\n\nExposing the thunkory method -- instead of how the earlier `thunkify(..)` hides this intermediary step -- may seem like unnecessary complication. But in general, it's quite useful to make thunkories at the beginning of your program to wrap existing API methods, and then be able to pass around and call those thunkories when you need thunks. The two distinct steps preserve a cleaner separation of capability.\n\nTo illustrate:\n\n```js\n// cleaner:\nvar fooThunkory = thunkify( foo );\n\nvar fooThunk1 = fooThunkory( 3, 4 );\nvar fooThunk2 = fooThunkory( 5, 6 );\n\n// instead of:\nvar fooThunk1 = thunkify( foo, 3, 4 );\nvar fooThunk2 = thunkify( foo, 5, 6 );\n```\n\nRegardless of whether you like to deal with the thunkories explicitly or not, the usage of thunks `fooThunk1(..)` and `fooThunk2(..)` remains the same.\n\n### s/promise/thunk/\n\nSo what's all this thunk stuff have to do with generators?\n\nComparing thunks to promises generally: they're not directly interchangeable as they're not equivalent in behavior. Promises are vastly more capable and trustable than bare thunks.\n\nBut in another sense, they both can be seen as a request for a value, which may be async in its answering.\n\nRecall from Chapter 3 we defined a utility for promisifying a function, which we called `Promise.wrap(..)` -- we could have called it `promisify(..)`, too! This Promise-wrapping utility doesn't produce Promises; it produces promisories that in turn produce Promises. 
This is completely symmetric to the thunkories and thunks presently being discussed.\n\nTo illustrate the symmetry, let's first alter the running `foo(..)` example from earlier to assume an \"error-first style\" callback:\n\n```js\nfunction foo(x,y,cb) {\n\tsetTimeout( function(){\n\t\t// assume `cb(..)` as \"error-first style\"\n\t\tcb( null, x + y );\n\t}, 1000 );\n}\n```\n\nNow, we'll compare using `thunkify(..)` and `promisify(..)` (aka `Promise.wrap(..)` from Chapter 3):\n\n```js\n// symmetrical: constructing the question asker\nvar fooThunkory = thunkify( foo );\nvar fooPromisory = promisify( foo );\n\n// symmetrical: asking the question\nvar fooThunk = fooThunkory( 3, 4 );\nvar fooPromise = fooPromisory( 3, 4 );\n\n// get the thunk answer\nfooThunk( function(err,sum){\n\tif (err) {\n\t\tconsole.error( err );\n\t}\n\telse {\n\t\tconsole.log( sum );\t\t// 7\n\t}\n} );\n\n// get the promise answer\nfooPromise\n.then(\n\tfunction(sum){\n\t\tconsole.log( sum );\t\t// 7\n\t},\n\tfunction(err){\n\t\tconsole.error( err );\n\t}\n);\n```\n\nBoth the thunkory and the promisory are essentially asking a question (for a value), and respectively the thunk `fooThunk` and promise `fooPromise` represent the future answers to that question. Presented in that light, the symmetry is clear.\n\nWith that perspective in mind, we can see that generators which `yield` Promises for asynchrony could instead `yield` thunks for asynchrony. All we'd need is a smarter `run(..)` utility (like from before) that can not only look for and wire up to a `yield`ed Promise but also to provide a callback to a `yield`ed thunk.\n\nConsider:\n\n```js\nfunction *foo() {\n\tvar val = yield request( \"http://some.url.1\" );\n\tconsole.log( val );\n}\n\nrun( foo );\n```\n\nIn this example, `request(..)` could either be a promisory that returns a promise, or a thunkory that returns a thunk. 
From the perspective of what's going on inside the generator code logic, we don't care about that implementation detail, which is quite powerful!\n\nSo, `request(..)` could be either:\n\n```js\n// promisory `request(..)` (see Chapter 3)\nvar request = Promise.wrap( ajax );\n\n// vs.\n\n// thunkory `request(..)`\nvar request = thunkify( ajax );\n```\n\nFinally, as a thunk-aware patch to our earlier `run(..)` utility, we would need logic like this:\n\n```js\n// ..\n// did we receive a thunk back?\nelse if (typeof next.value == \"function\") {\n\treturn new Promise( function(resolve,reject){\n\t\t// call the thunk with an error-first callback\n\t\tnext.value( function(err,msg) {\n\t\t\tif (err) {\n\t\t\t\treject( err );\n\t\t\t}\n\t\t\telse {\n\t\t\t\tresolve( msg );\n\t\t\t}\n\t\t} );\n\t} )\n\t.then(\n\t\thandleNext,\n\t\tfunction handleErr(err) {\n\t\t\treturn Promise.resolve(\n\t\t\t\tit.throw( err )\n\t\t\t)\n\t\t\t.then( handleResult );\n\t\t}\n\t);\n}\n```\n\nNow, our generators can either call promisories to `yield` Promises, or call thunkories to `yield` thunks, and in either case, `run(..)` would handle that value and use it to wait for the completion to resume the generator.\n\nSymmetry-wise, these two approaches look identical. However, we should point out that's true only from the perspective of Promises or thunks representing the future value continuation of a generator.\n\nFrom the larger perspective, thunks in and of themselves have hardly any of the trustability or composability guarantees that Promises are designed with. Using a thunk as a stand-in for a Promise in this particular generator asynchrony pattern is workable but should be seen as less than ideal when compared to all the benefits that Promises offer (see Chapter 3).\n\nIf you have the option, prefer `yield pr` rather than `yield th`. 
But there's nothing wrong with having a `run(..)` utility which can handle both value types.\n\n**Note:** The `runner(..)` utility in my *asynquence* library, which will be discussed in Appendix A, handles `yield`s of Promises, thunks and *asynquence* sequences.\n\n## Pre-ES6 Generators\n\nYou're hopefully convinced now that generators are a very important addition to the async programming toolbox. But it's a new syntax in ES6, which means you can't just polyfill generators like you can Promises (which are just a new API). So what can we do to bring generators to our browser JS if we don't have the luxury of ignoring pre-ES6 browsers?\n\nFor all new syntax extensions in ES6, there are tools -- the most common term for them is transpilers, for trans-compilers -- which can take your ES6 syntax and transform it into equivalent (but obviously uglier!) pre-ES6 code. So, generators can be transpiled into code that will have the same behavior but work in ES5 and below.\n\nBut how? The \"magic\" of `yield` doesn't obviously sound like code that's easy to transpile. We actually hinted at a solution in our earlier discussion of closure-based *iterators*.\n\n### Manual Transformation\n\nBefore we discuss the transpilers, let's derive how manual transpilation would work in the case of generators. This isn't just an academic exercise, because doing so will actually help further reinforce how they work.\n\nConsider:\n\n```js\n// `request(..)` is a Promise-aware Ajax utility\n\nfunction *foo(url) {\n\ttry {\n\t\tconsole.log( \"requesting:\", url );\n\t\tvar val = yield request( url );\n\t\tconsole.log( val );\n\t}\n\tcatch (err) {\n\t\tconsole.log( \"Oops:\", err );\n\t\treturn false;\n\t}\n}\n\nvar it = foo( \"http://some.url.1\" );\n```\n\nThe first thing to observe is that we'll still need a normal `foo()` function that can be called, and it will still need to return an *iterator*. 
So, let's sketch out the non-generator transformation:\n\n```js\nfunction foo(url) {\n\n\t// ..\n\n\t// make and return an iterator\n\treturn {\n\t\tnext: function(v) {\n\t\t\t// ..\n\t\t},\n\t\tthrow: function(e) {\n\t\t\t// ..\n\t\t}\n\t};\n}\n\nvar it = foo( \"http://some.url.1\" );\n```\n\nThe next thing to observe is that a generator does its \"magic\" by suspending its scope/state, but we can emulate that with function closure (see the *Scope & Closures* title of this series). To understand how to write such code, we'll first annotate different parts of our generator with state values:\n\n```js\n// `request(..)` is a Promise-aware Ajax utility\n\nfunction *foo(url) {\n\t// STATE *1*\n\n\ttry {\n\t\tconsole.log( \"requesting:\", url );\n\t\tvar TMP1 = request( url );\n\n\t\t// STATE *2*\n\t\tvar val = yield TMP1;\n\t\tconsole.log( val );\n\t}\n\tcatch (err) {\n\t\t// STATE *3*\n\t\tconsole.log( \"Oops:\", err );\n\t\treturn false;\n\t}\n}\n```\n\n**Note:** For more accurate illustration, we split up the `val = yield request..` statement into two parts, using the temporary `TMP1` variable. `request(..)` happens in state `*1*`, and the assignment of its completion value to `val` happens in state `*2*`. We'll get rid of that intermediate `TMP1` when we convert the code to its non-generator equivalent.\n\nIn other words, `*1*` is the beginning state, `*2*` is the state if the `request(..)` succeeds, and `*3*` is the state if the `request(..)` fails. 
You can probably imagine how any extra `yield` steps would just be encoded as extra states.\n\nBack to our transpiled generator, let's define a variable `state` in the closure we can use to keep track of the state:\n\n```js\nfunction foo(url) {\n\t// manage generator state\n\tvar state;\n\n\t// ..\n}\n```\n\nNow, let's define an inner function called `process(..)` inside the closure which handles each state, using a `switch` statement:\n\n```js\n// `request(..)` is a Promise-aware Ajax utility\n\nfunction foo(url) {\n\t// manage generator state\n\tvar state;\n\n\t// generator-wide variable declarations\n\tvar val;\n\n\tfunction process(v) {\n\t\tswitch (state) {\n\t\t\tcase 1:\n\t\t\t\tconsole.log( \"requesting:\", url );\n\t\t\t\treturn request( url );\n\t\t\tcase 2:\n\t\t\t\tval = v;\n\t\t\t\tconsole.log( val );\n\t\t\t\treturn;\n\t\t\tcase 3:\n\t\t\t\tvar err = v;\n\t\t\t\tconsole.log( \"Oops:\", err );\n\t\t\t\treturn false;\n\t\t}\n\t}\n\n\t// ..\n}\n```\n\nEach state in our generator is represented by its own `case` in the `switch` statement. `process(..)` will be called each time we need to process a new state. We'll come back to how that works in just a moment.\n\nFor any generator-wide variable declarations (`val`), we move those to a `var` declaration outside of `process(..)` so they can survive multiple calls to `process(..)`. But the \"block scoped\" `err` variable is only needed for the `*3*` state, so we leave it in place.\n\nIn state `*1*`, instead of `yield request(..)`, we did `return request(..)`. In terminal state `*2*`, there was no explicit `return`, so we just do a `return;` which is the same as `return undefined`. 
In terminal state `*3*`, there was a `return false`, so we preserve that.\n\nNow we need to define the code in the *iterator* functions so they call `process(..)` appropriately:\n\n```js\nfunction foo(url) {\n\t// manage generator state\n\tvar state;\n\n\t// generator-wide variable declarations\n\tvar val;\n\n\tfunction process(v) {\n\t\tswitch (state) {\n\t\t\tcase 1:\n\t\t\t\tconsole.log( \"requesting:\", url );\n\t\t\t\treturn request( url );\n\t\t\tcase 2:\n\t\t\t\tval = v;\n\t\t\t\tconsole.log( val );\n\t\t\t\treturn;\n\t\t\tcase 3:\n\t\t\t\tvar err = v;\n\t\t\t\tconsole.log( \"Oops:\", err );\n\t\t\t\treturn false;\n\t\t}\n\t}\n\n\t// make and return an iterator\n\treturn {\n\t\tnext: function(v) {\n\t\t\t// initial state\n\t\t\tif (!state) {\n\t\t\t\tstate = 1;\n\t\t\t\treturn {\n\t\t\t\t\tdone: false,\n\t\t\t\t\tvalue: process()\n\t\t\t\t};\n\t\t\t}\n\t\t\t// yield resumed successfully\n\t\t\telse if (state == 1) {\n\t\t\t\tstate = 2;\n\t\t\t\treturn {\n\t\t\t\t\tdone: true,\n\t\t\t\t\tvalue: process( v )\n\t\t\t\t};\n\t\t\t}\n\t\t\t// generator already completed\n\t\t\telse {\n\t\t\t\treturn {\n\t\t\t\t\tdone: true,\n\t\t\t\t\tvalue: undefined\n\t\t\t\t};\n\t\t\t}\n\t\t},\n\t\t\"throw\": function(e) {\n\t\t\t// the only explicit error handling is in\n\t\t\t// state *1*\n\t\t\tif (state == 1) {\n\t\t\t\tstate = 3;\n\t\t\t\treturn {\n\t\t\t\t\tdone: true,\n\t\t\t\t\tvalue: process( e )\n\t\t\t\t};\n\t\t\t}\n\t\t\t// otherwise, an error won't be handled,\n\t\t\t// so just throw it right back out\n\t\t\telse {\n\t\t\t\tthrow e;\n\t\t\t}\n\t\t}\n\t};\n}\n```\n\nHow does this code work?\n\n1. The first call to the *iterator*'s `next()` call would move the generator from the uninitialized state to state `1`, and then call `process()` to handle that state. The return value from `request(..)`, which is the promise for the Ajax response, is returned back as the `value` property from the `next()` call.\n2. 
If the Ajax request succeeds, the second call to `next(..)` should send in the Ajax response value, which moves our state to `2`. `process(..)` is again called (this time with the passed in Ajax response value), and the `value` property returned from `next(..)` will be `undefined`.\n3. However, if the Ajax request fails, `throw(..)` should be called with the error, which would move the state from `1` to `3` (instead of `2`). Again `process(..)` is called, this time with the error value. That `case` returns `false`, which is set as the `value` property returned from the `throw(..)` call.\n\nFrom the outside -- that is, interacting only with the *iterator* -- this `foo(..)` normal function works pretty much the same as the `*foo(..)` generator would have worked. So we've effectively \"transpiled\" our ES6 generator to pre-ES6 compatibility!\n\nWe could then manually instantiate our generator and control its iterator -- calling `var it = foo(\"..\")` and `it.next(..)` and such -- or better, we could pass it to our previously defined `run(..)` utility as `run(foo,\"..\")`.\n\n### Automatic Transpilation\n\nThe preceding exercise of manually deriving a transformation of our ES6 generator to pre-ES6 equivalent teaches us how generators work conceptually. But that transformation was really intricate and very non-portable to other generators in our code. It would be quite impractical to do this work by hand, and would completely obviate all the benefit of generators.\n\nBut luckily, several tools already exist that can automatically convert ES6 generators to things like what we derived in the previous section. 
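It's worth pausing on what the `run(..)` utility referenced above actually does with such an *iterator*. Here's a minimal sketch of that kind of utility -- the exact shape shown is an assumption; the `run(..)` defined earlier in this chapter is the authoritative version:

```js
// a minimal `run(..)` sketch: drives an *iterator* to
// completion, wiring each yielded promise into the
// *iterator*'s `next(..)` / `throw(..)` calls
function run(gen) {
	var args = [].slice.call( arguments, 1 );

	// initialize the generator (or transpiled equivalent)
	var it = gen.apply( this, args );

	// return a promise for the sequence completing
	return Promise.resolve()
		.then( function handleNext(value){
			// run to the next yielded value
			var next = it.next( value );

			return (function handleResult(next){
				// sequence finished?
				if (next.done) {
					return next.value;
				}
				// otherwise wait on the yielded promise
				else {
					return Promise.resolve( next.value )
						.then(
							// success: send the resolved value
							// back in and keep going
							handleNext,

							// failure: throw the error back in
							// for the sequence's own handling
							function handleErr(err){
								return Promise.resolve(
									it.throw( err )
								)
								.then( handleResult );
							}
						);
				}
			})( next );
		} );
}
```

Whether the *iterator* comes from a real generator or from a hand-transpiled function like our `foo(..)`, `run( foo, "http://some.url.1" )` drives it the same way.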
Not only do they do the heavy lifting work for us, but they also handle several complications that we glossed over.\n\nOne such tool is regenerator (https://facebook.github.io/regenerator/), from the smart folks at Facebook.\n\nIf we use regenerator to transpile our previous generator, here's the code produced (at the time of this writing):\n\n```js\n// `request(..)` is a Promise-aware Ajax utility\n\nvar foo = regeneratorRuntime.mark(function foo(url) {\n    var val;\n\n    return regeneratorRuntime.wrap(function foo$(context$1$0) {\n        while (1) switch (context$1$0.prev = context$1$0.next) {\n        case 0:\n            context$1$0.prev = 0;\n            console.log( \"requesting:\", url );\n            context$1$0.next = 4;\n            return request( url );\n        case 4:\n            val = context$1$0.sent;\n            console.log( val );\n            context$1$0.next = 12;\n            break;\n        case 8:\n            context$1$0.prev = 8;\n            context$1$0.t0 = context$1$0.catch(0);\n            console.log(\"Oops:\", context$1$0.t0);\n            return context$1$0.abrupt(\"return\", false);\n        case 12:\n        case \"end\":\n            return context$1$0.stop();\n        }\n    }, foo, this, [[0, 8]]);\n});\n```\n\nThere's some obvious similarities here to our manual derivation, such as the `switch` / `case` statements, and we even see `val` pulled out of the closure just as we did.\n\nOf course, one trade-off is that regenerator's transpilation requires a helper library `regeneratorRuntime` that holds all the reusable logic for managing a general generator / *iterator*. A lot of that boilerplate looks different than our version, but even then, the concepts can be seen, like with `context$1$0.next = 4` keeping track of the next state for the generator.\n\nThe main takeaway is that generators are not restricted to only being useful in ES6+ environments. 
Once you understand the concepts, you can employ them throughout your code, and use tools to transform the code to be compatible with older environments.\n\nThis is more work than just using a `Promise` API polyfill for pre-ES6 Promises, but the effort is totally worth it, because generators are so much better at expressing async flow control in a reason-able, sensible, synchronous-looking, sequential fashion.\n\nOnce you get hooked on generators, you'll never want to go back to the hell of async spaghetti callbacks!\n\n## Review\n\nGenerators are a new ES6 function type that does not run-to-completion like normal functions. Instead, the generator can be paused in mid-completion (entirely preserving its state), and it can later be resumed from where it left off.\n\nThis pause/resume interchange is cooperative rather than preemptive, which means that the generator has the sole capability to pause itself, using the `yield` keyword, and yet the *iterator* that controls the generator has the sole capability (via `next(..)`) to resume the generator.\n\nThe `yield` / `next(..)` duality is not just a control mechanism, it's actually a two-way message passing mechanism. A `yield ..` expression essentially pauses waiting for a value, and the next `next(..)` call passes a value (or implicit `undefined`) back to that paused `yield` expression.\n\nThe key benefit of generators related to async flow control is that the code inside a generator expresses a sequence of steps for the task in a naturally sync/sequential fashion. The trick is that we essentially hide potential asynchrony behind the `yield` keyword -- moving the asynchrony to the code where the generator's *iterator* is controlled.\n\nIn other words, generators preserve a sequential, synchronous, blocking code pattern for async code, which lets our brains reason about the code much more naturally, addressing one of the two key drawbacks of callback-based async.\n"
  },
  {
    "path": "async & performance/ch5.md",
    "content": "# You Don't Know JS: Async & Performance\n# Chapter 5: Program Performance\n\nThis book so far has been all about how to leverage asynchrony patterns more effectively. But we haven't directly addressed why asynchrony really matters to JS. The most obvious explicit reason is **performance**.\n\nFor example, if you have two Ajax requests to make, and they're independent, but you need to wait on them both to finish before doing the next task, you have two options for modeling that interaction: serial and concurrent.\n\nYou could make the first request and wait to start the second request until the first finishes. Or, as we've seen both with promises and generators, you could make both requests \"in parallel,\" and express the \"gate\" to wait on both of them before moving on.\n\nClearly, the latter is usually going to be more performant than the former. And better performance generally leads to better user experience.\n\nIt's even possible that asynchrony (interleaved concurrency) can improve just the perception of performance, even if the overall program still takes the same amount of time to complete. User perception of performance is every bit -- if not more! -- as important as actual measurable performance.\n\nWe want to now move beyond localized asynchrony patterns to talk about some bigger picture performance details at the program level.\n\n**Note:** You may be wondering about micro-performance issues like if `a++` or `++a` is faster. We'll look at those sorts of performance details in the next chapter on \"Benchmarking & Tuning.\"\n\n## Web Workers\n\nIf you have processing-intensive tasks but you don't want them to run on the main thread (which may slow down the browser/UI), you might have wished that JavaScript could operate in a multithreaded manner.\n\nIn Chapter 1, we talked in detail about how JavaScript is single threaded. And that's still true. 
But a single thread isn't the only way to organize the execution of your program.\n\nImagine splitting your program into two pieces, and running one of those pieces on the main UI thread, and running the other piece on an entirely separate thread.\n\nWhat kinds of concerns would such an architecture bring up?\n\nFor one, you'd want to know if running on a separate thread meant that it ran in parallel (on systems with multiple CPUs/cores) such that a long-running process on that second thread would **not** block the main program thread. Otherwise, \"virtual threading\" wouldn't be of much benefit over what we already have in JS with async concurrency.\n\nAnd you'd want to know if these two pieces of the program have access to the same shared scope/resources. If they do, then you have all the questions that multithreaded languages (Java, C++, etc.) deal with, such as needing cooperative or preemptive locking (mutexes, etc.). That's a lot of extra work, and shouldn't be undertaken lightly.\n\nAlternatively, you'd want to know how these two pieces could \"communicate\" if they couldn't share scope/resources.\n\nAll these are great questions to consider as we explore a feature added to the web platform circa HTML5 called \"Web Workers.\" This is a feature of the browser (aka host environment) and actually has almost nothing to do with the JS language itself. That is, JavaScript does not *currently* have any features that support threaded execution.\n\nBut an environment like your browser can easily provide multiple instances of the JavaScript engine, each on its own thread, and let you run a different program in each thread. 
Each of those separate threaded pieces of your program is called a \"(Web) Worker.\" This type of parallelism is called \"task parallelism,\" as the emphasis is on splitting up chunks of your program to run in parallel.\n\nFrom your main JS program (or another Worker), you instantiate a Worker like so:\n\n```js\nvar w1 = new Worker( \"http://some.url.1/mycoolworker.js\" );\n```\n\nThe URL should point to the location of a JS file (not an HTML page!) which is intended to be loaded into a Worker. The browser will then spin up a separate thread and let that file run as an independent program in that thread.\n\n**Note:** The kind of Worker created with such a URL is called a \"Dedicated Worker.\" But instead of providing a URL to an external file, you can also create an \"Inline Worker\" by providing a Blob URL (another HTML5 feature); essentially it's an inline file stored in a single (binary) value. However, Blobs are beyond the scope of what we'll discuss here.\n\nWorkers do not share any scope or resources with each other or the main program -- that would bring all the nightmares of threaded programming to the forefront -- but instead have a basic event messaging mechanism connecting them.\n\nThe `w1` Worker object is an event listener and trigger, which lets you subscribe to events sent by the Worker as well as send events to the Worker.\n\nHere's how to listen for events (actually, the fixed `\"message\"` event):\n\n```js\nw1.addEventListener( \"message\", function(evt){\n\t// evt.data\n} );\n```\n\nAnd you can send the `\"message\"` event to the Worker:\n\n```js\nw1.postMessage( \"something cool to say\" );\n```\n\nInside the Worker, the messaging is totally symmetrical:\n\n```js\n// \"mycoolworker.js\"\n\naddEventListener( \"message\", function(evt){\n\t// evt.data\n} );\n\npostMessage( \"a really cool reply\" );\n```\n\nNotice that a dedicated Worker is in a one-to-one relationship with the program that created it. 
That is, the `\"message\"` event doesn't need any disambiguation here, because we're sure that it could only have come from this one-to-one relationship -- either it came from the Worker or the main page.\n\nUsually the main page application creates the Workers, but a Worker can instantiate its own child Worker(s) -- known as subworkers -- as necessary. Sometimes this is useful to delegate such details to a sort of \"master\" Worker that spawns other Workers to process parts of a task. Unfortunately, at the time of this writing, Chrome still does not support subworkers, while Firefox does.\n\nTo kill a Worker immediately from the program that created it, call `terminate()` on the Worker object (like `w1` in the previous snippets). Abruptly terminating a Worker thread does not give it any chance to finish up its work or clean up any resources. It's akin to you closing a browser tab to kill a page.\n\nIf you have two or more pages (or multiple tabs with the same page!) in the browser that try to create a Worker from the same file URL, those will actually end up as completely separate Workers. Shortly, we'll discuss a way to \"share\" a Worker.\n\n**Note:** It may seem like a malicious or ignorant JS program could easily perform a denial-of-service attack on a system by spawning hundreds of Workers, seemingly each with their own thread. While it's true that it's somewhat of a guarantee that a Worker will end up on a separate thread, this guarantee is not unlimited. The system is free to decide how many actual threads/CPUs/cores it really wants to create. There's no way to predict or guarantee how many you'll have access to, though many people assume it's at least as many as the number of CPUs/cores available. I think the safest assumption is that there's at least one other thread besides the main UI thread, but that's about it.\n\n### Worker Environment\n\nInside the Worker, you do not have access to any of the main program's resources. 
That means you cannot access any of its global variables, nor can you access the page's DOM or other resources. Remember: it's a totally separate thread.\n\nYou can, however, perform network operations (Ajax, WebSockets) and set timers. Also, the Worker has access to its own copy of several important global variables/features, including `navigator`, `location`, `JSON`, and `applicationCache`.\n\nYou can also load extra JS scripts into your Worker, using `importScripts(..)`:\n\n```js\n// inside the Worker\nimportScripts( \"foo.js\", \"bar.js\" );\n```\n\nThese scripts are loaded synchronously, which means the `importScripts(..)` call will block the rest of the Worker's execution until the file(s) are finished loading and executing.\n\n**Note:** There have also been some discussions about exposing the `<canvas>` API to Workers, which combined with having canvases be Transferables (see the \"Data Transfer\" section), would allow Workers to perform more sophisticated off-thread graphics processing, which can be useful for high-performance gaming (WebGL) and other similar applications. Although this doesn't exist yet in any browsers, it's likely to happen in the near future.\n\nWhat are some common uses for Web Workers?\n\n* Processing intensive math calculations\n* Sorting large data sets\n* Data operations (compression, audio analysis, image pixel manipulations, etc.)\n* High-traffic network communications\n\n### Data Transfer\n\nYou may notice a common characteristic of most of those uses, which is that they require a large amount of information to be transferred across the barrier between threads using the event mechanism, perhaps in both directions.\n\nIn the early days of Workers, serializing all data to a string value was the only option. 
In addition to the speed penalty of the two-way serializations, the other major negative was that the data was being copied, which meant a doubling of memory usage (and the subsequent churn of garbage collection).\n\nThankfully, we now have a few better options.\n\nIf you pass an object, a so-called \"Structured Cloning Algorithm\" (https://developer.mozilla.org/en-US/docs/Web/Guide/API/DOM/The_structured_clone_algorithm) is used to copy/duplicate the object on the other side. This algorithm is fairly sophisticated and can even handle duplicating objects with circular references. The to-string/from-string performance penalty is not paid, but we still have duplication of memory using this approach. There is support for this in IE10 and above, as well as all the other major browsers.\n\nAn even better option, especially for larger data sets, is \"Transferable Objects\" (http://updates.html5rocks.com/2011/12/Transferable-Objects-Lightning-Fast). What happens is that the object's \"ownership\" is transferred, but the data itself is not moved. Once you transfer away an object to a Worker, it's empty or inaccessible in the originating location -- that eliminates the hazards of threaded programming over a shared scope. 
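That "emptying" is directly observable. As a quick sketch -- using the HTML5 channel-messaging `MessageChannel` API here is just a convenient assumption for demonstrating outside a Worker, but it shares the same transfer-list semantics as a Worker's `postMessage(..)`:

```js
var channel = new MessageChannel();
var buf = new ArrayBuffer( 8 );

console.log( buf.byteLength );		// 8

// the second argument lists values to transfer
// (rather than copy)
channel.port1.postMessage( buf, [ buf ] );

// the sender's handle is now "neutered" (zero length)
console.log( buf.byteLength );		// 0

// close the ports so nothing is left listening
channel.port1.close();
channel.port2.close();
```

The same thing happens with `w1.postMessage( buf, [ buf ] )` to a Worker: after the call, the Worker owns the buffer's data and the sending side keeps only an empty husk.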
Of course, transfer of ownership can go in both directions.\n\nThere really isn't much you need to do to opt into a Transferable Object; any data structure that implements the Transferable interface (https://developer.mozilla.org/en-US/docs/Web/API/Transferable) will automatically be transferred this way (supported in Firefox and Chrome).\n\nFor example, typed arrays like `Uint8Array` (see the *ES6 & Beyond* title of this series) are \"Transferables.\" This is how you'd send a Transferable Object using `postMessage(..)`:\n\n```js\n// `foo` is a `Uint8Array` for instance\n\npostMessage( foo.buffer, [ foo.buffer ] );\n```\n\nThe first parameter is the raw buffer and the second parameter is a list of what to transfer.\n\nBrowsers that don't support Transferable Objects simply degrade to structured cloning, which means performance reduction rather than outright feature breakage.\n\n### Shared Workers\n\nIf your site or app allows for loading multiple tabs of the same page (a common feature), you may very well want to reduce the resource usage of your users' systems by preventing duplicate dedicated Workers; the most common limited resource in this respect is a socket network connection, as browsers limit the number of simultaneous connections to a single host. Of course, limiting multiple connections from a client also eases your server resource requirements.\n\nIn this case, creating a single centralized Worker that all the page instances of your site or app can *share* is quite useful.\n\nThat's called a `SharedWorker`, which you create like so (support for this is limited to Firefox and Chrome):\n\n```js\nvar w1 = new SharedWorker( \"http://some.url.1/mycoolworker.js\" );\n```\n\nBecause a shared Worker can be connected to or from more than one program instance or page on your site, the Worker needs a way to know which program a message comes from. This unique identification is called a \"port\" -- think network socket ports. 
So the calling program must use the `port` object of the Worker for communication:\n\n```js\nw1.port.addEventListener( \"message\", handleMessages );\n\n// ..\n\nw1.port.postMessage( \"something cool\" );\n```\n\nAlso, the port connection must be initialized, as:\n\n```js\nw1.port.start();\n```\n\nInside the shared Worker, an extra event must be handled: `\"connect\"`. This event provides the `port` object for that particular connection. The most convenient way to keep multiple connections separate is to use closure (see the *Scope & Closures* title of this series) over the `port`, as shown next, with the event listening and transmitting for that connection defined inside the handler for the `\"connect\"` event:\n\n```js\n// inside the shared Worker\naddEventListener( \"connect\", function(evt){\n\t// the assigned port for this connection\n\tvar port = evt.ports[0];\n\n\tport.addEventListener( \"message\", function(evt){\n\t\t// ..\n\n\t\tport.postMessage( .. );\n\n\t\t// ..\n\t} );\n\n\t// initialize the port connection\n\tport.start();\n} );\n```\n\nOther than that difference, shared and dedicated Workers have the same capabilities and semantics.\n\n**Note:** Shared Workers survive the termination of a port connection if other port connections are still alive, whereas dedicated Workers are terminated whenever the connection to their initiating program is terminated.\n\n### Polyfilling Web Workers\n\nWeb Workers are very attractive performance-wise for running JS programs in parallel. However, you may be in a position where your code needs to run in older browsers that lack support. Because Workers are an API and not a syntax, they can be polyfilled, to an extent.\n\nIf a browser doesn't support Workers, there's simply no way to fake multithreading from the performance perspective. 
Iframes are commonly thought of as providing a parallel environment, but in all modern browsers they actually run on the same thread as the main page, so they're not sufficient for faking parallelism.\n\nAs we detailed in Chapter 1, JS's asynchronicity (not parallelism) comes from the event loop queue, so you can force faked Workers to be asynchronous using timers (`setTimeout(..)`, etc.). Then you just need to provide a polyfill for the Worker API. There are some listed here (https://github.com/Modernizr/Modernizr/wiki/HTML5-Cross-Browser-Polyfills#web-workers), but frankly none of them look great.\n\nI've written a sketch of a polyfill for `Worker` here (https://gist.github.com/getify/1b26accb1a09aa53ad25). It's basic, but it should get the job done for simple `Worker` support, given that the two-way messaging works correctly as well as `\"onerror\"` handling. You could probably also extend it with more features, such as `terminate()` or faked Shared Workers, as you see fit.\n\n**Note:** You can't fake synchronous blocking, so this polyfill just disallows use of `importScripts(..)`. Another option might have been to parse and transform the Worker's code (once Ajax loaded) to handle rewriting to some asynchronous form of an `importScripts(..)` polyfill, perhaps with a promise-aware interface.\n\n## SIMD\n\nSingle instruction, multiple data (SIMD) is a form of \"data parallelism,\" as contrasted to \"task parallelism\" with Web Workers, because the emphasis is not really on program logic chunks being parallelized, but rather multiple bits of data being processed in parallel.\n\nWith SIMD, threads don't provide the parallelism. 
Instead, modern CPUs provide SIMD capability with \"vectors\" of numbers -- think: type specialized arrays -- as well as instructions that can operate in parallel across all the numbers; these are low-level operations leveraging instruction-level parallelism.\n\nThe effort to expose SIMD capability to JavaScript is primarily spearheaded by Intel (https://01.org/node/1495), namely by Mohammad Haghighat (at the time of this writing), in cooperation with Firefox and Chrome teams. SIMD is on an early standards track with a good chance of making it into a future revision of JavaScript, likely in the ES7 timeframe.\n\nSIMD JavaScript proposes to expose short vector types and APIs to JS code, which on those SIMD-enabled systems would map the operations directly through to the CPU equivalents, with fallback to non-parallelized operation \"shims\" on non-SIMD systems.\n\nThe performance benefits for data-intensive applications (signal analysis, matrix operations on graphics, etc.) with such parallel math processing are quite obvious!\n\nEarly proposal forms of the SIMD API at the time of this writing look like this:\n\n```js\nvar v1 = SIMD.float32x4( 3.14159, 21.0, 32.3, 55.55 );\nvar v2 = SIMD.float32x4( 2.1, 3.2, 4.3, 5.4 );\n\nvar v3 = SIMD.int32x4( 10, 101, 1001, 10001 );\nvar v4 = SIMD.int32x4( 10, 20, 30, 40 );\n\nSIMD.float32x4.mul( v1, v2 );\t// [ 6.597339, 67.2, 138.89, 299.97 ]\nSIMD.int32x4.add( v3, v4 );\t\t// [ 20, 121, 1031, 10041 ]\n```\n\nShown here are two different vector data types, 32-bit floating-point numbers and 32-bit integer numbers. You can see that these vectors are sized exactly to four 32-bit elements, as this matches the SIMD vector sizes (128-bit) available in most modern CPUs. It's also possible we may see an `x8` (or larger!) 
version of these APIs in the future.\n\nBesides `mul()` and `add()`, many other operations are likely to be included, such as `sub()`, `div()`, `abs()`, `neg()`, `sqrt()`, `reciprocal()`, `reciprocalSqrt()` (arithmetic), `shuffle()` (rearrange vector elements), `and()`, `or()`, `xor()`, `not()` (logical), `equal()`, `greaterThan()`, `lessThan()` (comparison), `shiftLeft()`, `shiftRightLogical()`, `shiftRightArithmetic()` (shifts), `fromFloat32x4()`, and `fromInt32x4()` (conversions).\n\n**Note:** There's an official \"prollyfill\" (hopeful, expectant, future-leaning polyfill) for the SIMD functionality available (https://github.com/johnmccutchan/ecmascript_simd), which illustrates a lot more of the planned SIMD capability than we've illustrated in this section.\n\n## asm.js\n\n\"asm.js\" (http://asmjs.org/) is a label for a highly optimizable subset of the JavaScript language. By carefully avoiding certain mechanisms and patterns that are *hard* to optimize (garbage collection, coercion, etc.), asm.js-styled code can be recognized by the JS engine and given special attention with aggressive low-level optimizations.\n\nDistinct from other program performance mechanisms discussed in this chapter, asm.js isn't necessarily something that needs to be adopted into the JS language specification. There *is* an asm.js specification (http://asmjs.org/spec/latest/), but it's mostly for tracking an agreed upon set of candidate inferences for optimization rather than a set of requirements of JS engines.\n\nThere's not currently any new syntax being proposed. Instead, asm.js suggests ways to recognize existing standard JS syntax that conforms to the rules of asm.js and let engines implement their own optimizations accordingly.\n\nThere's been some disagreement between browser vendors over exactly how asm.js should be activated in a program. 
Early versions of the asm.js experiment required a `\"use asm\";` pragma (similar to strict mode's `\"use strict\";`) to help clue the JS engine to look for asm.js optimization opportunities and hints. Others have asserted that asm.js should just be a set of heuristics that engines automatically recognize without the author having to do anything extra, meaning that existing programs could theoretically benefit from asm.js-style optimizations without doing anything special.\n\n### How to Optimize with asm.js\n\nThe first thing to understand about asm.js optimizations is around types and coercion (see the *Types & Grammar* title of this series). If the JS engine has to track multiple different types of values in a variable through various operations, so that it can handle coercions between types as necessary, that's a lot of extra work that keeps the program optimization suboptimal.\n\n**Note:** We're going to use asm.js-style code here for illustration purposes, but be aware that it's not commonly expected that you'll author such code by hand. asm.js is intended more as a compilation target for other tools, such as Emscripten (https://github.com/kripken/emscripten/wiki). It's of course possible to write your own asm.js code, but that's usually a bad idea because the code is very low level and managing it can be very time consuming and error prone. Nevertheless, there may be cases where you'd want to hand tweak your code for asm.js optimization purposes.\n\nThere are some \"tricks\" you can use to hint to an asm.js-aware JS engine what the intended type is for variables/operations, so that it can skip these coercion tracking steps.\n\nFor example:\n\n```js\nvar a = 42;\n\n// ..\n\nvar b = a;\n```\n\nIn that program, the `b = a` assignment leaves the door open for type divergence in variables. 
However, it could instead be written as:\n\n```js\nvar a = 42;\n\n// ..\n\nvar b = a | 0;\n```\n\nHere, we've used the `|` (\"binary OR\") with value `0`, which has no effect on the value other than to make sure it's a 32-bit integer. That code run in a normal JS engine works just fine, but when run in an asm.js-aware JS engine it *can* signal that `b` should always be treated as a 32-bit integer, so the coercion tracking can be skipped.\n\nSimilarly, the addition operation between two variables can be restricted to a more performant integer addition (instead of floating point):\n\n```js\n(a + b) | 0\n```\n\nAgain, the asm.js-aware JS engine can see that hint and infer that the `+` operation should be 32-bit integer addition because the end result of the whole expression would automatically be 32-bit integer conformed anyway.\n\n### asm.js Modules\n\nOne of the biggest detractors to performance in JS is around memory allocation, garbage collection, and scope access. asm.js suggests one of the ways around these issues is to declare a more formalized asm.js \"module\" -- do not confuse these with ES6 modules; see the *ES6 & Beyond* title of this series.\n\nFor an asm.js module, you need to explicitly pass in a tightly conformed namespace -- this is referred to in the spec as `stdlib`, as it should represent standard libraries needed -- to import necessary symbols, rather than just using globals via lexical scope. 
In the base case, the `window` object is an acceptable `stdlib` object for asm.js module purposes, but you could and perhaps should construct an even more restricted one.\n\nYou also must declare a \"heap\" -- which is just a fancy term for a reserved spot in memory where variables can already be used without asking for more memory or releasing previously used memory -- and pass that in, so that the asm.js module won't need to do anything that would cause memory churn; it can just use the pre-reserved space.\n\nA \"heap\" is likely a typed `ArrayBuffer`, such as:\n\n```js\nvar heap = new ArrayBuffer( 0x10000 );\t// 64k heap\n```\n\nUsing that pre-reserved 64k of binary space, an asm.js module can store and retrieve values in that buffer without any memory allocation or garbage collection penalties. For example, the `heap` buffer could be used inside the module to back an array of 64-bit float values like this:\n\n```js\nvar arr = new Float64Array( heap );\n```\n\nOK, so let's make a quick, silly example of an asm.js-styled module to illustrate how these pieces fit together. 
We'll define a `foo(..)` that takes a start (`x`) and end (`y`) integer for a range, and calculates all the inner adjacent multiplications of the values in the range, and then finally averages those values together:\n\n```js\nfunction fooASM(stdlib,foreign,heap) {\n\t\"use asm\";\n\n\tvar arr = new stdlib.Int32Array( heap );\n\n\tfunction foo(x,y) {\n\t\tx = x | 0;\n\t\ty = y | 0;\n\n\t\tvar i = 0;\n\t\tvar p = 0;\n\t\tvar sum = 0;\n\t\tvar count = ((y|0) - (x|0)) | 0;\n\n\t\t// calculate all the inner adjacent multiplications\n\t\tfor (i = x | 0;\n\t\t\t(i | 0) < (y | 0);\n\t\t\tp = (p + 8) | 0, i = (i + 1) | 0\n\t\t) {\n\t\t\t// store result\n\t\t\tarr[ p >> 3 ] = (i * (i + 1)) | 0;\n\t\t}\n\n\t\t// calculate average of all intermediate values\n\t\tfor (i = 0, p = 0;\n\t\t\t(i | 0) < (count | 0);\n\t\t\tp = (p + 8) | 0, i = (i + 1) | 0\n\t\t) {\n\t\t\tsum = (sum + arr[ p >> 3 ]) | 0;\n\t\t}\n\n\t\treturn +(sum / count);\n\t}\n\n\treturn {\n\t\tfoo: foo\n\t};\n}\n\nvar heap = new ArrayBuffer( 0x1000 );\nvar foo = fooASM( window, null, heap ).foo;\n\nfoo( 10, 20 );\t\t// 233\n```\n\n**Note:** This asm.js example is hand authored for illustration purposes, so it doesn't represent the same code that would be produced from a compilation tool targeting asm.js. But it does show the typical nature of asm.js code, especially the type hinting and use of the `heap` buffer for temporary variable storage.\n\nThe first call to `fooASM(..)` is what sets up our asm.js module with its `heap` allocation. The result is a `foo(..)` function we can call as many times as necessary. Those `foo(..)` calls should be specially optimized by an asm.js-aware JS engine. Importantly, the preceding code is completely standard JS and would run just fine (without special optimization) in a non-asm.js engine.\n\nObviously, the nature of restrictions that make asm.js code so optimizable reduces the possible uses for such code significantly. 
asm.js won't necessarily be a general optimization set for any given JS program. Instead, it's intended to provide an optimized way of handling specialized tasks such as intensive math operations (e.g., those used in graphics processing for games).\n\n## Review\n\nThe first four chapters of this book are based on the premise that async coding patterns give you the ability to write more performant code, which is generally a very important improvement. But async behavior only gets you so far, because it's still fundamentally bound to a single event loop thread.\n\nSo in this chapter we've covered several program-level mechanisms for improving performance even further.\n\nWeb Workers let you run a JS file (aka program) in a separate thread using async events to message between the threads. They're wonderful for offloading long-running or resource-intensive tasks to a different thread, leaving the main UI thread more responsive.\n\nSIMD proposes to map CPU-level parallel math operations to JavaScript APIs for high-performance data-parallel operations, like number processing on large data sets.\n\nFinally, asm.js describes a small subset of JavaScript that avoids the hard-to-optimize parts of JS (like garbage collection and coercion) and lets the JS engine recognize and run such code through aggressive optimizations. asm.js could be hand authored, but that's extremely tedious and error prone, akin to hand authoring assembly language (hence the name). Instead, the main intent is that asm.js would be a good target for cross-compilation from other highly optimized programming languages -- for example, Emscripten (https://github.com/kripken/emscripten/wiki) transpiling C/C++ to JavaScript.\n\nWhile not covered explicitly in this chapter, there are even more radical ideas under very early discussion for JavaScript, including approximations of direct threaded functionality (not just hidden behind data structure APIs). 
Whether that happens explicitly, or we just see more parallelism creep into JS behind the scenes, the future of more optimized program-level performance in JS looks really *promising*.\n"
  },
  {
    "path": "async & performance/ch6.md",
    "content": "# You Don't Know JS: Async & Performance\n# Chapter 6: Benchmarking & Tuning\n\nAs the first four chapters of this book were all about performance as a coding pattern (asynchrony and concurrency), and Chapter 5 was about performance at the macro program architecture level, this chapter goes after the topic of performance at the micro level, focusing on single expressions/statements.\n\nOne of the most common areas of curiosity -- indeed, some developers can get quite obsessed about it -- is in analyzing and testing various options for how to write a line or chunk of code, and which one is faster.\n\nWe're going to look at some of these issues, but it's important to understand from the outset that this chapter is **not** about feeding the obsession of micro-performance tuning, like whether some given JS engine can run `++a` faster than `a++`. The more important goal of this chapter is to figure out what kinds of JS performance matter and which ones don't, *and how to tell the difference*.\n\nBut even before we get there, we need to explore how to most accurately and reliably test JS performance, because there's tons of misconceptions and myths that have flooded our collective cult knowledge base. We've got to sift through all that junk to find some clarity.\n\n## Benchmarking\n\nOK, time to start dispelling some misconceptions. I'd wager the vast majority of JS developers, if asked to benchmark the speed (execution time) of a certain operation, would initially go about it something like this:\n\n```js\nvar start = (new Date()).getTime();\t// or `Date.now()`\n\n// do some operation\n\nvar end = (new Date()).getTime();\n\nconsole.log( \"Duration:\", (end - start) );\n```\n\nRaise your hand if that's roughly what came to your mind. Yep, I thought so. There's a lot wrong with this approach, but don't feel bad; **we've all been there.**\n\nWhat did that measurement tell you, exactly? 
Understanding what it does and doesn't say about the execution time of the operation in question is key to learning how to appropriately benchmark performance in JavaScript.\n\nIf the duration reported is `0`, you may be tempted to believe that it took less than a millisecond. But that's not very accurate. Some platforms don't have single-millisecond precision, but instead only update the timer in larger increments. For example, older versions of Windows (and thus IE) had only 15ms precision, which means the operation has to take at least that long for anything other than `0` to be reported!\n\nMoreover, whatever duration is reported, the only thing you really know is that the operation took approximately that long on that exact single run. You have near-zero confidence that it will always run at that speed. You have no idea if the engine or system had some sort of interference at that exact moment, and that at other times the operation could run faster.\n\nWhat if the duration reported is `4`? Are you more sure it took about four milliseconds? Nope. It might have taken less time, and there may have been some other delay in getting either `start` or `end` timestamps.\n\nMore troublingly, you also don't know that the circumstances of this operation test aren't overly optimistic. It's possible that the JS engine figured out a way to optimize your isolated test case, but in a more real program such optimization would be diluted or impossible, such that the operation would run slower than your test.\n\nSo... what do we know? Unfortunately, with those realizations stated, **we know very little.** Something of such low confidence isn't even remotely good enough to build your determinations on. Your \"benchmark\" is basically useless. 
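A quick aside: modern environments do expose a higher-resolution timer, `performance.now()` -- a global in browsers and in current Node.js versions (older Node.js gets it from the `perf_hooks` module). It returns fractional milliseconds, which helps with the precision problem but fixes none of the confidence problems:

```js
var start = performance.now();

// do some operation
for (var i = 0; i < 1E6; i++) {}

var end = performance.now();

console.log( "Duration:", (end - start) );	// fractional milliseconds
```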
And worse, it's dangerous in that it implies false confidence, not just to you but also to others who don't think critically about the conditions that led to those results.\n\n### Repetition\n\n\"OK,\" you now say, \"Just put a loop around it so the whole test takes longer.\" If you repeat an operation 100 times, and that whole loop reportedly takes a total of 137ms, then you can just divide by 100 and get an average duration of 1.37ms for each operation, right?\n\nWell, not exactly.\n\nA straight mathematical average by itself is definitely not sufficient for making judgments about performance which you plan to extrapolate to the breadth of your entire application. With a hundred iterations, even a couple of outliers (high or low) can skew the average, and then when you apply that conclusion repeatedly, you even further inflate the skew beyond credulity.\n\nInstead of just running for a fixed number of iterations, you can choose to run the loop of tests until a certain amount of time has passed. That might be more reliable, but how do you decide how long to run? You might guess that it should be some multiple of how long your operation should take to run once. Wrong.\n\nActually, the length of time to repeat across should be based on the accuracy of the timer you're using, specifically to minimize the chances of inaccuracy. The less precise your timer, the longer you need to run to make sure you've minimized the error percentage. A 15ms timer is pretty bad for accurate benchmarking; to minimize its uncertainty (aka \"error rate\") to less than 1%, you need to run each cycle of test iterations for 750ms. A 1ms timer only needs a cycle to run for 50ms to get the same confidence.\n\nBut then, that's just a single sample. To be sure you're factoring out the skew, you'll want lots of samples to average across. 
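By the way, that timer-resolution math can be sketched as a tiny helper. The formula is my own illustration, assuming the worst-case timing error is half the timer's resolution:

```js
// minimum duration (ms) for one cycle of test iterations, given the
// timer's resolution (ms) and a target error rate; assumes worst-case
// timing error is half the timer's resolution
function minCycleTime(timerResolutionMs,targetErrorRate) {
	return (timerResolutionMs / 2) / targetErrorRate;
}

minCycleTime( 15, 0.01 );	// 750 -- a 15ms timer needs 750ms cycles
minCycleTime( 1, 0.01 );	// 50 -- a 1ms timer needs only 50ms cycles
```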
You'll also want to understand something about just how slow the worst sample is, how fast the best sample is, how far apart those best and worst cases were, and so on. You'll want to know not just a number that tells you how fast something ran, but also to have some quantifiable measure of how trustable that number is.\n\nAlso, you probably want to combine these different techniques (as well as others), so that you get the best balance of all the possible approaches.\n\nThat's all the bare minimum just to get started. If you've been approaching performance benchmarking with anything less serious than what I just glossed over, well... \"you don't know: proper benchmarking.\"\n\n### Benchmark.js\n\nAny relevant and reliable benchmark should be based on statistically sound practices. I am not going to write a chapter on statistics here, so I'll hand wave around some terms: standard deviation, variance, margin of error. If you don't know what those terms really mean -- I took a stats class back in college and I'm still a little fuzzy on them -- you are not actually qualified to write your own benchmarking logic.\n\nLuckily, smart folks like John-David Dalton and Mathias Bynens do understand these concepts, and wrote a statistically sound benchmarking tool called Benchmark.js (http://benchmarkjs.com/). So I can end the suspense by simply saying: \"just use that tool.\"\n\nI won't repeat their whole documentation for how Benchmark.js works; they have fantastic API Docs (http://benchmarkjs.com/docs) you should read. 
Also there are some great (http://calendar.perfplanet.com/2010/bulletproof-javascript-benchmarks/) writeups (http://monsur.hossa.in/2012/12/11/benchmarkjs.html) on more of the details and methodology.\n\nBut just for quick illustration purposes, here's how you could use Benchmark.js to run a quick performance test:\n\n```js\nfunction foo() {\n\t// operation(s) to test\n}\n\nvar bench = new Benchmark(\n\t\"foo test\",\t\t\t\t// test name\n\tfoo,\t\t\t\t\t// function to test (just contents)\n\t{\n\t\t// ..\t\t\t\t// optional extra options (see docs)\n\t}\n);\n\nbench.hz;\t\t\t\t\t// number of operations per second\nbench.stats.moe;\t\t\t// margin of error\nbench.stats.variance;\t\t// variance across samples\n// ..\n```\n\nThere's *lots* more to learn about using Benchmark.js besides this glance I'm including here. But the point is that it's handling all of the complexities of setting up a fair, reliable, and valid performance benchmark for a given piece of JavaScript code. If you're going to try to test and benchmark your code, this library is the first place you should turn.\n\nWe're showing here the usage to test a single operation like X, but it's fairly common that you want to compare X to Y. This is easy to do by simply setting up two different tests in a \"Suite\" (a Benchmark.js organizational feature). Then, you run them head-to-head, and compare the statistics to conclude whether X or Y was faster.\n\nBenchmark.js can of course be used to test JavaScript in a browser (see the \"jsPerf.com\" section later in this chapter), but it can also run in non-browser environments (Node.js, etc.).\n\nOne largely untapped potential use-case for Benchmark.js is to use it in your Dev or QA environments to run automated performance regression tests against critical path parts of your application's JavaScript. 
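Such comparisons -- X-versus-Y, or old-versus-new for regression testing -- are organized as a Suite. Here's a rough sketch based on the Benchmark.js API docs; the test names and bodies are placeholders:

```js
var suite = new Benchmark.Suite();

suite
	.add( "X test", function(){
		// operation X
	} )
	.add( "Y test", function(){
		// operation Y
	} )
	.on( "cycle", function(event){
		console.log( String( event.target ) );	// per-test results
	} )
	.on( "complete", function(){
		console.log( "Fastest is: " + this.filter( "fastest" ).map( "name" ) );
	} )
	.run();
```

Consult the API docs for the full set of options (such as asynchronous runs).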
Similar to how you might run unit test suites before deployment, you can also compare the performance against previous benchmarks to monitor if you are improving or degrading application performance.\n\n#### Setup/Teardown\n\nIn the previous code snippet, we glossed over the \"extra options\" `{ .. }` object. But there are two options we should discuss: `setup` and `teardown`.\n\nThese two options let you define functions to be called before and after your test case runs.\n\nIt's incredibly important to understand that your `setup` and `teardown` code **does not run for each test iteration**. The best way to think about it is that there's an outer loop (repeating cycles), and an inner loop (repeating test iterations). `setup` and `teardown` are run at the beginning and end of each *outer* loop (aka cycle) iteration, but not inside the inner loop.\n\nWhy does this matter? Let's imagine you have a test case that looks like this:\n\n```js\na = a + \"w\";\nb = a.charAt( 1 );\n```\n\nThen, you set up your test `setup` as follows:\n\n```js\nvar a = \"x\";\n```\n\nYour temptation is probably to believe that `a` is starting out as `\"x\"` for each test iteration.\n\nBut it's not! It's starting `a` at `\"x\"` for each test cycle, and then your repeated `+ \"w\"` concatenations will be making a larger and larger `a` value, even though you're only ever accessing the character `\"w\"` at the `1` position.\n\nWhere this most commonly bites you is when you make side effect changes to something like the DOM, like appending a child element. You may think your parent element is set as empty each time, but it's actually getting lots of elements added, and that can significantly sway the results of your tests.\n\n## Context Is King\n\nDon't forget to check the context of a particular performance benchmark, especially a comparison between X and Y tasks. 
Just because your test reveals that X is faster than Y doesn't mean that the conclusion \"X is faster than Y\" is actually relevant.\n\nFor example, let's say a performance test reveals that X runs 10,000,000 operations per second, and Y runs at 8,000,000 operations per second. You could claim that Y is 20% slower than X, and you'd be mathematically correct, but your assertion doesn't hold as much water as you'd think.\n\nLet's think about the results more critically: 10,000,000 operations per second is 10,000 operations per millisecond, and 10 operations per microsecond. In other words, a single operation takes 0.1 microseconds, or 100 nanoseconds. It's hard to fathom just how small 100ns is, but for comparison, it's often cited that the human eye isn't generally capable of distinguishing anything less than 100ms, which is one million times slower than the 100ns speed of the X operation.\n\nEven recent scientific studies showing that maybe the brain can process as quickly as 13ms (about 8x faster than previously asserted) would mean that X is still running 130,000 times faster than the human brain can perceive a distinct thing happening. **X is going really, really fast.**\n\nBut more importantly, let's talk about the difference between X and Y, the 2,000,000 operations per second difference. If X takes 100ns, and Y takes 125ns, the difference is 25ns, which in the best case is still only one 520-thousandth of the interval the human brain can perceive.\n\nWhat's my point? **None of this performance difference matters, at all!**\n\nBut wait, what if this operation is going to happen a whole bunch of times in a row? Then the difference could add up, right?\n\nOK, so what we're asking then is, how likely is it that operation X is going to be run over and over again, one right after the other, and that this has to happen 520,000 times just to get a sliver of a hope the human brain could perceive it. 
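As a quick code check of all that per-operation arithmetic (`nsPerOp` is just an illustrative name of my own):

```js
// convert operations-per-second into time-per-operation, in nanoseconds
function nsPerOp(opsPerSec) {
	return 1E9 / opsPerSec;
}

nsPerOp( 10000000 );	// 100 -- X takes 100ns per operation
nsPerOp( 8000000 );		// 125 -- Y takes 125ns per operation
```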
More likely, it'd have to happen 5,000,000 to 10,000,000 times together in a tight loop to even approach relevance.\n\nWhile the computer scientist in you might protest that this is possible, the louder voice of realism in you should sanity check just how likely or unlikely that really is. Even if it is relevant in rare occasions, it's irrelevant in most situations.\n\nThe vast majority of your benchmark results on tiny operations -- like the `++x` vs `x++` myth -- **are just totally bogus** for supporting the conclusion that X should be favored over Y on a performance basis.\n\n### Engine Optimizations\n\nYou simply cannot reliably extrapolate that if X was 10 microseconds faster than Y in your isolated test, that means X is always faster than Y and should always be used. That's not how performance works. It's vastly more complicated.\n\nFor example, let's imagine (purely hypothetical) that you test some microperformance behavior such as comparing:\n\n```js\nvar twelve = \"12\";\nvar foo = \"foo\";\n\n// test 1\nvar X1 = parseInt( twelve );\nvar X2 = parseInt( foo );\n\n// test 2\nvar Y1 = Number( twelve );\nvar Y2 = Number( foo );\n```\n\nIf you understand what `parseInt(..)` does compared to `Number(..)`, you might intuit that `parseInt(..)` potentially has \"more work\" to do, especially in the `foo` case. Or you might intuit that they should have the same amount of work to do in the `foo` case, as both should be able to stop at the first character `\"f\"`.\n\nWhich intuition is correct? I honestly don't know. But I'll make the case it doesn't matter what your intuition is. What might the results be when you test it? Again, I'm making up a pure hypothetical here, I haven't actually tried, nor do I care.\n\nLet's pretend the test comes back that `X` and `Y` are statistically identical. Have you then confirmed your intuition about the `\"f\"` character thing? 
Nope.\n\nIt's possible in our hypothetical that the engine might recognize that the variables `twelve` and `foo` are only being used in one place in each test, and so it might decide to inline those values. Then it may realize that `Number( \"12\" )` can just be replaced by `12`. And maybe it comes to the same conclusion with `parseInt(..)`, or maybe not.\n\nOr an engine's dead-code removal heuristic could kick in, and it could realize that variables `X` and `Y` aren't being used, so declaring them is irrelevant, so it doesn't end up doing anything at all in either test.\n\nAnd all that's just made with the mindset of assumptions about a single test run. Modern engines are fantastically more complicated than what we're intuiting here. They do all sorts of tricks, like tracing and tracking how a piece of code behaves over a short period of time, or with a particularly constrained set of inputs.\n\nWhat if the engine optimizes a certain way because of the fixed input, but in your real program you give more varied input and the optimization decisions shake out differently (or not at all!)? Or what if the engine kicks in optimizations because it sees the code being run tens of thousands of times by the benchmarking utility, but in your real program it will only run a hundred times in near proximity, and under those conditions the engine determines the optimizations are not worth it?\n\nAnd all those optimizations we just hypothesized about might happen in our constrained test but maybe the engine wouldn't do them in a more complex program (for various reasons). Or it could be reversed -- the engine might not optimize such trivial code but may be more inclined to optimize it more aggressively when the system is already more taxed by a more sophisticated program.\n\nThe point I'm trying to make is that you really don't know for sure exactly what's going on under the covers. 
All the guesses and hypotheses you can muster amount to hardly anything concrete for really making such decisions.\n\nDoes that mean you can't really do any useful testing? **Definitely not!**\n\nWhat this boils down to is that testing *not real* code gives you *not real* results. As much as is possible and practical, you should test actual real, non-trivial snippets of your code, under the most realistic conditions you can actually hope for. Only then will the results you get have a chance to approximate reality.\n\nMicrobenchmarks like `++x` vs `x++` are so incredibly likely to be bogus, we might as well just flatly assume them as such.\n\n## jsPerf.com\n\nWhile Benchmark.js is useful for testing the performance of your code in whatever JS environment you're running, it cannot be stressed enough that you need to compile test results from lots of different environments (desktop browsers, mobile devices, etc.) if you want to have any hope of reliable test conclusions.\n\nFor example, Chrome on a high-end desktop machine is not likely to perform anywhere near the same as Chrome mobile on a smartphone. And a smartphone with a full battery charge is not likely to perform anywhere near the same as a smartphone with 2% battery life left, when the device is starting to power down the radio and processor.\n\nIf you want to make assertions like \"X is faster than Y\" in any reasonable sense across more than just a single environment, you're going to need to actually test as many of those real world environments as possible. Just because Chrome executes some X operation faster than Y doesn't mean that all browsers do. And of course you also probably will want to cross-reference the results of multiple browser test runs with the demographics of your users.\n\nThere's an awesome website for this purpose called jsPerf (http://jsperf.com). 
It uses the Benchmark.js library we talked about earlier to run statistically accurate and reliable tests, and hosts the test at an openly available URL that you can pass around to others.\n\nEach time a test is run, the results are collected and persisted with the test, and the cumulative test results are graphed on the page for anyone to see.\n\nWhen creating a test on the site, you start out with two test cases to fill in, but you can add as many as you need. You also have the ability to set up `setup` code that is run at the beginning of each test cycle and `teardown` code run at the end of each cycle.\n\n**Note:** A trick for doing just one test case (if you're benchmarking a single approach instead of a head-to-head) is to fill in the second test input boxes with placeholder text on first creation, then edit the test and leave the second test blank, which will delete it. You can always add more test cases later.\n\nYou can define the initial page setup (importing libraries, defining utility helper functions, declaring variables, etc.). There are also options for defining setup and teardown behavior if needed -- consult the \"Setup/Teardown\" section in the Benchmark.js discussion earlier.\n\n### Sanity Check\n\njsPerf is a fantastic resource, but there are an awful lot of published tests that, when you analyze them, are quite flawed or bogus, for any of a variety of reasons as outlined so far in this chapter.\n\nConsider:\n\n```js\n// Case 1\nvar x = [];\nfor (var i=0; i<10; i++) {\n\tx[i] = \"x\";\n}\n\n// Case 2\nvar x = [];\nfor (var i=0; i<10; i++) {\n\tx[x.length] = \"x\";\n}\n\n// Case 3\nvar x = [];\nfor (var i=0; i<10; i++) {\n\tx.push( \"x\" );\n}\n```\n\nSome observations to ponder about this test scenario:\n\n* It's extremely common for devs to put their own loops into test cases, and they forget that Benchmark.js already does all the repetition you need. 
There's a really strong chance that the `for` loops in these cases are totally unnecessary noise.\n* The declaring and initializing of `x` is included in each test case, possibly unnecessarily. Recall from earlier that if `x = []` were in the `setup` code, it wouldn't actually be run before each test iteration, but instead once at the beginning of each cycle. That means `x` would continue growing quite large, not just the size `10` implied by the `for` loops.\n\n   So is the intent to make sure the tests are constrained only to how the JS engine behaves with very small arrays (size `10`)? That *could* be the intent, but if it is, you have to consider if that's not focusing far too much on nuanced internal implementation details.\n\n   On the other hand, does the intent of the test embrace the context that the arrays will actually be growing quite large? Is the JS engines' behavior with larger arrays relevant and accurate when compared with the intended real world usage?\n\n* Is the intent to find out how much `x.length` or `x.push(..)` add to the performance of the operation to append to the `x` array? OK, that might be a valid thing to test. But then again, `push(..)` is a function call, so of course it's going to be slower than `[..]` access. Arguably, cases 1 and 2 are fairer than case 3.\n\nHere's another example that illustrates a common apples-to-oranges flaw:\n\n```js\n// Case 1\nvar x = [\"John\",\"Albert\",\"Sue\",\"Frank\",\"Bob\"];\nx.sort();\n\n// Case 2\nvar x = [\"John\",\"Albert\",\"Sue\",\"Frank\",\"Bob\"];\nx.sort( function mySort(a,b){\n\tif (a < b) return -1;\n\tif (a > b) return 1;\n\treturn 0;\n} );\n```\n\nHere, the obvious intent is to find out how much slower the custom `mySort(..)` comparator is than the built-in default comparator. But by specifying the function `mySort(..)` as an inline function expression, you've created an unfair/bogus test. 
Here, the second case is not only testing a custom user JS function, **but it's also testing creating a new function expression for each iteration.**\n\nWould it surprise you to find out that if you run a similar test but update it to isolate only for creating an inline function expression versus using a pre-declared function, the inline function expression creation can be from 2% to 20% slower!?\n\nUnless your intent with this test *is* to consider the inline function expression creation \"cost,\" a better/fairer test would put `mySort(..)`'s declaration in the page setup -- don't put it in the test `setup` as that's unnecessary redeclaration for each cycle -- and simply reference it by name in the test case: `x.sort(mySort)`.\n\nBuilding on the previous example, another pitfall is in opaquely avoiding or adding \"extra work\" to one test case that creates an apples-to-oranges scenario:\n\n```js\n// Case 1\nvar x = [12,-14,0,3,18,0,2.9];\nx.sort();\n\n// Case 2\nvar x = [12,-14,0,3,18,0,2.9];\nx.sort( function mySort(a,b){\n\treturn a - b;\n} );\n```\n\nSetting aside the previously mentioned inline function expression pitfall, the second case's `mySort(..)` works here because you have provided it numbers, but it would of course have failed with strings. The first case doesn't throw an error, but it actually behaves differently and has a different outcome! It should be obvious, but: **a different outcome between two test cases almost certainly invalidates the entire test!**\n\nBut beyond the different outcomes, in this case, the built-in `sort(..)`'s comparator is actually doing \"extra work\" that `mySort()` does not, in that the built-in one coerces the compared values to strings and does lexicographic comparison. 
The first snippet results in `[-14, 0, 0, 12, 18, 2.9, 3]` while the second snippet results (likely more accurately based on intent) in `[-14, 0, 0, 2.9, 3, 12, 18]`.\n\nSo that test is unfair because it's not actually doing the same task between the cases. Any results you get are bogus.\n\nThese same pitfalls can even be much more subtle:\n\n```js\n// Case 1\nvar x = false;\nvar y = x ? 1 : 2;\n\n// Case 2\nvar x;\nvar y = x ? 1 : 2;\n```\n\nHere, the intent might be to test the performance impact of the coercion to a Boolean that the `? :` operator will do if the `x` expression is not already a Boolean (see the *Types & Grammar* title of this book series). So, you're apparently OK with the fact that there is extra work to do the coercion in the second case.\n\nThe subtle problem? You're setting `x`'s value in the first case and not setting it in the other, so you're actually doing work in the first case that you're not doing in the second. To eliminate any potential (albeit minor) skew, try:\n\n```js\n// Case 1\nvar x = false;\nvar y = x ? 1 : 2;\n\n// Case 2\nvar x = undefined;\nvar y = x ? 1 : 2;\n```\n\nNow there's an assignment in both cases, so the thing you want to test -- the coercion of `x` or not -- has likely been more accurately isolated and tested.\n\n## Writing Good Tests\n\nLet me see if I can articulate the bigger point I'm trying to make here.\n\nGood test authoring requires careful analytical thinking about what differences exist between two test cases and whether the differences between them are *intentional* or *unintentional*.\n\nIntentional differences are of course normal and OK, but it's too easy to create unintentional differences that skew your results. You have to be really, really careful to avoid that skew. Moreover, you may intend a difference but it may not be obvious to other readers of your test what your intent was, so they may doubt (or trust!) your test incorrectly. 
How do you fix that?\n\n**Write better, clearer tests.** But also, take the time to document (using the jsPerf.com \"Description\" field and/or code comments) exactly what the intent of your test is, even to the nuanced detail. Call out the intentional differences, which will help others and your future self to better identify unintentional differences that could be skewing the test results.\n\nIsolate things which aren't relevant to your test by pre-declaring them in the page or test setup settings so they're outside the timed parts of the test.\n\nInstead of trying to narrow in on a tiny snippet of your real code and benchmarking just that piece out of context, tests and benchmarks are better when they include a larger (while still relevant) context. Those tests also tend to run slower, which means any differences you spot are more relevant in context.\n\n## Microperformance\n\nOK, until now we've been dancing around various microperformance issues and generally looking disfavorably upon obsessing about them. I want to take just a moment to address them directly.\n\nThe first thing you need to get more comfortable with when thinking about performance benchmarking your code is that the code you write is not always the code the engine actually runs. We briefly looked at that topic back in Chapter 1 when we discussed statement reordering by the compiler, but here we're going to suggest the compiler can sometimes decide to run different code than you wrote, not just in different orders but different in substance.\n\nLet's consider this piece of code:\n\n```js\nvar foo = 41;\n\n(function(){\n\t(function(){\n\t\t(function(baz){\n\t\t\tvar bar = foo + baz;\n\t\t\t// ..\n\t\t})(1);\n\t})();\n})();\n```\n\nYou may think about the `foo` reference in the innermost function as needing to do a three-level scope lookup. 
We covered in the *Scope & Closures* title of this book series how lexical scope works, and the fact that the compiler generally caches such lookups so that referencing `foo` from different scopes doesn't really practically \"cost\" anything extra.\n\nBut there's something deeper to consider. What if the compiler realizes that `foo` isn't referenced anywhere else but that one location, and it further notices that the value never is anything except the `41` as shown?\n\nIsn't it quite possible and acceptable that the JS compiler could decide to just remove the `foo` variable entirely, and *inline* the value, such as this:\n\n```js\n(function(){\n\t(function(){\n\t\t(function(baz){\n\t\t\tvar bar = 41 + baz;\n\t\t\t// ..\n\t\t})(1);\n\t})();\n})();\n```\n\n**Note:** Of course, the compiler could probably also do a similar analysis and rewrite with the `baz` variable here, too.\n\nWhen you begin to think about your JS code as being a hint or suggestion to the engine of what to do, rather than a literal requirement, you realize that a lot of the obsession over discrete syntactic minutia is most likely unfounded.\n\nAnother example:\n\n```js\nfunction factorial(n) {\n\tif (n < 2) return 1;\n\treturn n * factorial( n - 1 );\n}\n\nfactorial( 5 );\t\t// 120\n```\n\nAh, the good ol' fashioned \"factorial\" algorithm! You might assume that the JS engine will run that code mostly as is. And to be honest, it might -- I'm not really sure.\n\nBut as an anecdote, the same code expressed in C and compiled with advanced optimizations would result in the compiler realizing that the call `factorial(5)` can just be replaced with the constant value `120`, eliminating the function and call entirely!\n\nMoreover, some engines have a practice called \"unrolling recursion,\" where it can realize that the recursion you've expressed can actually be done \"easier\" (i.e., more optimally) with a loop. 
It's possible the preceding code could be *rewritten* by a JS engine to run as:\n\n```js\nfunction factorial(n) {\n\tif (n < 2) return 1;\n\n\tvar res = 1;\n\tfor (var i=n; i>1; i--) {\n\t\tres *= i;\n\t}\n\treturn res;\n}\n\nfactorial( 5 );\t\t// 120\n```\n\nNow, let's imagine that in the earlier snippet you had been worried about whether `n * factorial(n-1)` or `n *= factorial(--n)` runs faster. Maybe you even did a performance benchmark to try to figure out which was better. But you miss the fact that in the bigger context, the engine may not run either line of code because it may unroll the recursion!\n\nSpeaking of `--`, `--n` versus `n--` is often cited as one of those places where you can optimize by choosing the `--n` version, because theoretically it requires less effort down at the assembly level of processing.\n\nThat sort of obsession is basically nonsense in modern JavaScript. That's the kind of thing you should be letting the engine take care of. You should write the code that makes the most sense. Compare these three `for` loops:\n\n```js\n// Option 1\nfor (var i=0; i<10; i++) {\n\tconsole.log( i );\n}\n\n// Option 2\nfor (var i=0; i<10; ++i) {\n\tconsole.log( i );\n}\n\n// Option 3\nfor (var i=-1; ++i<10; ) {\n\tconsole.log( i );\n}\n```\n\nEven if you have some theory where the second or third option is more performant than the first option by a tiny bit, which is dubious at best, the third loop is more confusing because you have to start with `-1` for `i` to account for the fact that `++i` pre-increment is used. And the difference between the first and second options is really quite irrelevant.\n\nIt's entirely possible that a JS engine may see a place where `i++` is used and realize that it can safely replace it with the `++i` equivalent, which means your time spent deciding which one to pick was completely wasted and the outcome moot.\n\nHere's another common example of silly microperformance obsession:\n\n```js\nvar x = [ .. 
];\n\n// Option 1\nfor (var i=0; i < x.length; i++) {\n\t// ..\n}\n\n// Option 2\nfor (var i=0, len = x.length; i < len; i++) {\n\t// ..\n}\n```\n\nThe theory here goes that you should cache the length of the `x` array in the variable `len`, because ostensibly it doesn't change, to avoid paying the price of `x.length` being consulted for each iteration of the loop.\n\nIf you run performance benchmarks around `x.length` usage compared to caching it in a `len` variable, you'll find that while the theory sounds nice, in practice any measured differences are statistically completely irrelevant.\n\nIn fact, in some engines, like v8, it can be shown (http://mrale.ph/blog/2014/12/24/array-length-caching.html) that you could make things slightly worse by pre-caching the length instead of letting the engine figure it out for you. Don't try to outsmart your JavaScript engine; you'll probably lose when it comes to performance optimizations.\n\n### Not All Engines Are Alike\n\nThe different JS engines in various browsers can all be \"spec compliant\" while having radically different ways of handling code. The JS specification doesn't require anything performance-related -- well, except ES6's \"Tail Call Optimization\" covered later in this chapter.\n\nAn engine is free to decide that one operation will receive its optimization attention, perhaps trading off for lesser performance on another operation. It can be very tricky to find an approach for an operation that always runs faster in all browsers.\n\nThere's a movement among some in the JS dev community, especially those who work with Node.js, to analyze the specific internal implementation details of the v8 JavaScript engine and make decisions about writing JS code that is tailored to take best advantage of how v8 works. 
You can actually achieve a surprisingly high degree of performance optimization with such endeavors, so the payoff for the effort can be quite high.\n\nSome commonly cited examples (https://github.com/petkaantonov/bluebird/wiki/Optimization-killers) for v8:\n\n* Don't pass the `arguments` variable from one function to any other function, as such \"leakage\" slows down the function implementation.\n* Isolate a `try..catch` in its own function. Browsers struggle with optimizing any function with a `try..catch` in it, so moving that construct to its own function means you contain the de-optimization harm while letting the surrounding code be optimizable.\n\nBut rather than focus on those tips specifically, let's sanity check the v8-only optimization approach in a general sense.\n\nAre you genuinely writing code that only needs to run in one JS engine? Even if your code is entirely intended for Node.js *right now*, is the assumption that v8 will *always* be the used JS engine reliable? Is it possible that someday a few years from now, there's another server-side JS platform besides Node.js that you choose to run your code on? What if what you optimized for before is now a much slower way of doing that operation on the new engine?\n\nOr what if your code always stays running on v8 from here on out, but v8 decides at some point to change the way some set of operations works such that what used to be fast is now slow, and vice versa?\n\nThese scenarios aren't just theoretical, either. It used to be that it was faster to put multiple string values into an array and then call `join(\"\")` on the array to concatenate the values than to just use `+` concatenation directly with the values. 
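\n\nFor illustration, here's a minimal sketch of the two concatenation approaches being compared (hypothetical values; which approach is faster has flip-flopped over the years, which is exactly the point):\n\n```js\nvar parts = [ \"You \", \"Don't \", \"Know \", \"JS\" ];\n\n// approach 1: build up an array, then `join(..)` it\nvar s1 = parts.join( \"\" );\n\n// approach 2: direct `+` concatenation\nvar s2 = parts[0] + parts[1] + parts[2] + parts[3];\n\ns1 === s2;\t\t// true\n```\n\n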
The historical reason for this is nuanced, but it has to do with internal implementation details about how string values were stored and managed in memory.\n\nAs a result, \"best practice\" advice at the time disseminated across the industry suggesting developers always use the array `join(..)` approach. And many followed.\n\nExcept, somewhere along the way, the JS engines changed approaches for internally managing strings, and specifically put in optimizations for `+` concatenation. They didn't slow down `join(..)` per se, but they put more effort into helping `+` usage, as it was still quite a bit more widespread.\n\n**Note:** The practice of standardizing or optimizing some particular approach based mostly on its existing widespread usage is often called (metaphorically) \"paving the cowpath.\"\n\nOnce that new approach to handling strings and concatenation took hold, unfortunately all the code out in the wild that was using array `join(..)` to concatenate strings was then sub-optimal.\n\nAnother example: at one time, the Opera browser differed from other browsers in how it handled the boxing/unboxing of primitive wrapper objects (see the *Types & Grammar* title of this book series). As such, their advice to developers was to use a `String` object instead of the primitive `string` value if properties like `length` or methods like `charAt(..)` needed to be accessed. This advice may have been correct for Opera at the time, but it was literally completely opposite for other major contemporary browsers, as they had optimizations specifically for the `string` primitives and not their object wrapper counterparts.\n\nI think these various gotchas are at least possible, if not likely, for code even today. 
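\n\nAs a concrete sketch of the `try..catch` isolation tip from the v8 list earlier (a hypothetical example -- whether it helps at all depends entirely on the engine and version):\n\n```js\n// keep the `try..catch` in its own tiny function, so any\n// de-optimization is contained to just this function\nfunction tryParse(str) {\n\ttry {\n\t\treturn JSON.parse( str );\n\t}\n\tcatch (err) {\n\t\treturn null;\n\t}\n}\n\nfunction processRecords(records) {\n\t// this surrounding function stays free of `try..catch`,\n\t// so it remains optimizable\n\tvar results = [];\n\tfor (var i=0; i<records.length; i++) {\n\t\tvar parsed = tryParse( records[i] );\n\t\tif (parsed != null) results.push( parsed );\n\t}\n\treturn results;\n}\n\nprocessRecords( [ \"42\", \"not json\" ] );\t// [42]\n```\n\n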
So I'm very cautious about making wide-ranging performance optimizations in my JS code based purely on engine implementation details, **especially if those details are only true of a single engine**.\n\nThe reverse is also something to be wary of: you shouldn't necessarily change a piece of code to work around one engine's difficulty in running it with acceptable performance.\n\nHistorically, IE has borne the brunt of many such frustrations, given that there have been plenty of scenarios in older IE versions where it struggled with some performance aspect that other major browsers of the time seemed not to have much trouble with. The string concatenation discussion we just had was actually a real concern back in the IE6 and IE7 days, where it was possible to get better performance out of `join(..)` than `+`.\n\nBut it's troublesome to suggest that just one browser's trouble with performance is justification for using a code approach that quite possibly could be sub-optimal in all other browsers. Even if the browser in question has a large market share for your site's audience, it may be more practical to write the proper code and rely on the browser to update itself with better optimizations eventually.\n\n\"There is nothing more permanent than a temporary hack.\" Chances are, the code you write now to work around some performance bug will probably outlive the performance bug in the browser itself.\n\nIn the days when a browser only updated once every five years, that was a tougher call to make. But as it stands now, browsers across the board are updating at a much more rapid pace (though obviously the mobile world still lags), and they're all competing to optimize web features better and better.\n\nIf you run across a case where a browser *does* have a performance wart that others don't suffer from, make sure to report it to the browser vendor through whatever means you have available. 
Most browsers have open public bug trackers suitable for this purpose.\n\n**Tip:** I'd only suggest working around a performance issue in a browser if it was a really drastic show-stopper, not just an annoyance or frustration. And I'd be very careful to check that the performance hack didn't have noticeable negative side effects in another browser.\n\n### Big Picture\n\nInstead of worrying about all these microperformance nuances, we should be looking at big-picture types of optimizations.\n\nHow do you know what's big picture or not? You first have to understand if your code is running on a critical path or not. If it's not on the critical path, chances are your optimizations are not worth much.\n\nEver heard the admonition, \"that's premature optimization!\"? It comes from a famous quote by Donald Knuth: \"premature optimization is the root of all evil.\" Many developers cite this quote to suggest that most optimizations are \"premature\" and are thus a waste of effort. The truth is, as usual, more nuanced.\n\nHere is Knuth's quote, in context:\n\n> Programmers waste enormous amounts of time thinking about, or worrying about, the speed of **noncritical** parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that **critical** 3%. [emphasis added]\n\n(http://web.archive.org/web/20130731202547/http://pplab.snu.ac.kr/courses/adv_pl05/papers/p261-knuth.pdf, Computing Surveys, Vol 6, No 4, December 1974)\n\nI believe it's a fair paraphrasing to say that Knuth *meant*: \"non-critical path optimization is the root of all evil.\" So the key is to figure out if your code is on the critical path -- you should optimize it! 
-- or not.\n\nI'd even go so far as to say this: no amount of time spent optimizing critical paths is wasted, no matter how little is saved; but no amount of optimization on noncritical paths is justified, no matter how much is saved.\n\nIf your code is on the critical path, such as a \"hot\" piece of code that's going to be run over and over again, or in UX-critical places where users will notice, like an animation loop or CSS style updates, then you should spare no effort in trying to employ relevant, measurably significant optimizations.\n\nFor example, consider a critical path animation loop that needs to coerce a string value to a number. There are of course multiple ways to do that (see the *Types & Grammar* title of this book series), but which one, if any, is the fastest?\n\n```js\nvar x = \"42\";\t// need number `42`\n\n// Option 1: let implicit coercion automatically happen\nvar y = x / 2;\n\n// Option 2: use `parseInt(..)`\nvar y = parseInt( x, 10 ) / 2;\n\n// Option 3: use `Number(..)`\nvar y = Number( x ) / 2;\n\n// Option 4: use `+` unary operator\nvar y = +x / 2;\n\n// Option 5: use `|` binary (bitwise OR) operator\nvar y = (x | 0) / 2;\n```\n\n**Note:** I will leave it as an exercise to the reader to set up a test if you're interested in examining the minute differences in performance among these options.\n\nWhen considering these different options, as they say, \"One of these things is not like the others.\" `parseInt(..)` does the job, but it also does a lot more -- it parses the string rather than just coercing. You can probably guess, correctly, that `parseInt(..)` is a slower option, and you should probably avoid it.\n\nOf course, if `x` can ever be a value that **needs parsing**, such as \"42px\" (like from a CSS style lookup), then `parseInt(..)` really is the only suitable option!\n\n`Number(..)` is also a function call. 
From a behavioral perspective, it's identical to the `+` unary operator option, but it may in fact be a little slower, requiring more machinery to execute the function. Of course, it's also possible that the JS engine recognizes this behavioral symmetry and just handles the inlining of `Number(..)`'s behavior (aka `+x`) for you!\n\nBut remember, obsessing about `+x` versus `x | 0` is in most cases likely a waste of effort. This is a microperformance issue, and one that you shouldn't let dictate/degrade the readability of your program.\n\nWhile performance is very important in critical paths of your program, it's not the only factor. Among several options that are roughly similar in performance, readability should be another important concern.\n\n## Tail Call Optimization (TCO)\n\nAs we briefly mentioned earlier, ES6 includes a specific requirement that ventures into the world of performance. It's related to a specific form of optimization that can occur with function calls: *tail call optimization*.\n\nBriefly, a \"tail call\" is a function call that appears at the \"tail\" of another function, such that after the call finishes, there's nothing left to do (except perhaps return its result value).\n\nFor example, here's a non-recursive setup with tail calls:\n\n```js\nfunction foo(x) {\n\treturn x;\n}\n\nfunction bar(y) {\n\treturn foo( y + 1 );\t// tail call\n}\n\nfunction baz() {\n\treturn 1 + bar( 40 );\t// not tail call\n}\n\nbaz();\t\t\t\t\t\t// 42\n```\n\n`foo(y+1)` is a tail call in `bar(..)` because after `foo(..)` finishes, `bar(..)` is also finished except in this case returning the result of the `foo(..)` call. 
However, `bar(40)` is *not* a tail call because after it completes, its result value must be added to `1` before `baz()` can return it.\n\nWithout getting into too much nitty-gritty detail, calling a new function requires an extra amount of reserved memory to manage the call stack, called a \"stack frame.\" So the preceding snippet would generally require a stack frame for each of `baz()`, `bar(..)`, and `foo(..)` all at the same time.\n\nHowever, if a TCO-capable engine can realize that the `foo(y+1)` call is in *tail position* meaning `bar(..)` is basically complete, then when calling `foo(..)`, it doesn't need to create a new stack frame, but can instead reuse the existing stack frame from `bar(..)`. That's not only faster, but it also uses less memory.\n\nThat sort of optimization isn't a big deal in a simple snippet, but it becomes a *much bigger deal* when dealing with recursion, especially if the recursion could have resulted in hundreds or thousands of stack frames. With TCO the engine can perform all those calls with a single stack frame!\n\nRecursion is a hairy topic in JS because without TCO, engines have had to implement arbitrary (and different!) limits to how deep they will let the recursion stack get before they stop it, to prevent running out of memory. With TCO, recursive functions with *tail position* calls can essentially run unbounded, because there's never any extra usage of memory!\n\nConsider that recursive `factorial(..)` from before, but rewritten to make it TCO friendly:\n\n```js\nfunction factorial(n) {\n\tfunction fact(n,res) {\n\t\tif (n < 2) return res;\n\n\t\treturn fact( n - 1, n * res );\n\t}\n\n\treturn fact( n, 1 );\n}\n\nfactorial( 5 );\t\t// 120\n```\n\nThis version of `factorial(..)` is still recursive, but it's also optimizable with TCO, because both inner `fact(..)` calls are in *tail position*.\n\n**Note:** It's important to note that TCO only applies if there's actually a tail call. 
If you write recursive functions without tail calls, the performance will still fall back to normal stack frame allocation, and the engines' limits on such recursive call stacks will still apply. Many recursive functions can be rewritten as we just showed with `factorial(..)`, but it takes careful attention to detail.\n\nOne reason that ES6 requires engines to implement TCO rather than leaving it up to their discretion is that the *lack of TCO* actually tends to reduce the chances that certain algorithms will be implemented in JS using recursion, for fear of the call stack limits.\n\nIf the lack of TCO in an engine merely resulted in gracefully degraded (slower) performance in all cases, it probably wouldn't have been something that ES6 needed to *require*. But because the lack of TCO can actually make certain programs impractical, it's more an important feature of the language than just a hidden implementation detail.\n\nES6 guarantees that from now on, JS developers will be able to rely on this optimization across all ES6+ compliant browsers. That's a win for JS performance!\n\n## Review\n\nEffectively benchmarking performance of a piece of code, especially to compare it to another option for that same code to see which approach is faster, requires careful attention to detail.\n\nRather than rolling your own statistically valid benchmarking logic, just use the Benchmark.js library, which does that for you. But be careful about how you author tests, because it's far too easy to construct a test that seems valid but that's actually flawed -- even tiny differences can skew the results to be completely unreliable.\n\nIt's important to get as many test results from as many different environments as possible to eliminate hardware/device bias. jsPerf.com is a fantastic website for crowdsourcing performance benchmark test runs.\n\nMany common performance tests unfortunately obsess about irrelevant microperformance details like `x++` versus `++x`. 
Writing good tests means understanding how to focus on big picture concerns, like optimizing on the critical path, and avoiding falling into traps like different JS engines' implementation details.\n\nTail call optimization (TCO) is a required optimization as of ES6 that will make some recursive patterns practical in JS where they would have been impossible otherwise. TCO allows a function call in the *tail position* of another function to execute without needing any extra resources, which means the engine no longer needs to place arbitrary restrictions on call stack depth for recursive algorithms.\n"
  },
  {
    "path": "async & performance/foreword.md",
    "content": "# You Don't Know JS: Async & Performance\n# Foreword\n\nOver the years, my employer has trusted me enough to conduct interviews. If we're looking for someone with skills in JavaScript, my first line of questioning… actually that's not true, I first check if the candidate needs the bathroom and/or a drink, because comfort is important, but once I'm past the bit about the candidate's fluid in/out-take, I set about determining if the candidate knows JavaScript, or just jQuery.\n\nNot that there's anything wrong with jQuery. It lets you do a lot without really knowing JavaScript, and that's a feature not a bug. But if the job calls for advanced skills in JavaScript performance and maintainability, you need someone who knows how libraries such as jQuery are put together. You need to be able to harness the core of JavaScript the same way they do.\n\nIf I want to get a picture of someone's core JavaScript skill, I'm most interested in what they make of closures (you've read that book of this series already, right?) and how to get the most out of asynchronicity, which brings us to this book.\n\nFor starters, you'll be taken through callbacks, the bread and butter of asynchronous programming. Of course, bread and butter does not make for a particularly satisfying meal, but the next course is full of tasty tasty promises!\n\nIf you don't know promises, now is the time to learn. Promises are now the official way to provide async return values in both JavaScript and the DOM. All future async DOM APIs will use them, many already do, so be prepared! At the time of writing, Promises have shipped in most major browsers, with IE shipping soon. Once you've finished that, I hope you left room for the next course, Generators.\n\nGenerators snuck their way into stable versions of Chrome and Firefox without too much pomp and ceremony, because, frankly, they're more complicated than they are interesting. Or, that's what I thought until I saw them combined with promises. 
There, they become an important tool in readability and maintenance.\n\nFor dessert, well, I won't spoil the surprise, but prepare to gaze into the future of JavaScript! Features that give you more and more control over concurrency and asynchronicity.\n\nWell, I won't block your enjoyment of the book any longer, on with the show! If you've already read part of the book before reading this Foreword, give yourself 10 asynchronous points! You deserve them!\n\nJake Archibald<br>\n[jakearchibald.com](http://jakearchibald.com), [@jaffathecake](http://twitter.com/jaffathecake)<br>\nDeveloper Advocate at Google Chrome\n"
  },
  {
    "path": "async & performance/toc.md",
    "content": "# You Don't Know JS: Async & Performance\n\n## Table of Contents\n\n* Foreword\n* Preface\n* Chapter 1: Asynchrony: Now & Later\n\t* A Program In Chunks\n\t* Event Loop\n\t* Parallel Threading\n\t* Concurrency\n\t* Jobs\n\t* Statement Ordering\n* Chapter 2: Callbacks\n\t* Continuations\n\t* Sequential Brain\n\t* Trust Issues\n\t* Trying To Save Callbacks\n* Chapter 3: Promises\n\t* What is a Promise?\n\t* Thenable Duck-Typing\n\t* Promise Trust\n\t* Chain Flow\n\t* Error Handling\n\t* Promise Patterns\n\t* Promise API Recap\n\t* Promise Limitations\n* Chapter 4: Generators\n\t* Breaking Run-to-completion\n\t* Generator'ing Values\n\t* Iterating Generators Asynchronously\n\t* Generators + Promises\n\t* Generator Delegation\n\t* Generator Concurrency\n\t* Thunks\n\t* Pre-ES6 Generators\n* Chapter 5: Program Performance\n\t* Web Workers\n\t* SIMD\n\t* asm.js\n* Chapter 6: Benchmarking & Tuning\n\t* Benchmarking\n\t* Context Is King\n\t* jsPerf.com\n\t* Writing Good Tests\n\t* Microperformance\n\t* Tail Call Optimization (TCO)\n* Appendix A: *asynquence* Library\n* Appendix B: Advanced Async Patterns\n* Appendix C: Acknowledgments\n\n"
  },
  {
    "path": "es6 & beyond/README.md",
    "content": "# You Don't Know JS: ES6 & Beyond\n\n<img src=\"cover.jpg\" width=\"300\">\n\n-----\n\n**[Purchase digital/print copy from O'Reilly](http://shop.oreilly.com/product/0636920033769.do)**\n\n-----\n\n[Table of Contents](toc.md)\n\n* [Foreword](foreword.md) (by [Rick Waldron](http://bocoup.com/weblog/author/rick-waldron/))\n* [Preface](../preface.md)\n* [Chapter 1: ES? Now & Future](ch1.md)\n* [Chapter 2: Syntax](ch2.md)\n* [Chapter 3: Organization](ch3.md)\n* [Chapter 4: Async Flow Control](ch4.md)\n* [Chapter 5: Collections](ch5.md)\n* [Chapter 6: API Additions](ch6.md)\n* [Chapter 7: Meta Programming](ch7.md)\n* [Chapter 8: Beyond ES6](ch8.md)\n* [Appendix A: Thank You's!](apA.md)\n"
  },
  {
    "path": "es6 & beyond/apA.md",
    "content": "# You Don't Know JS: ES6 & Beyond\n# Appendix A: Acknowledgments\n\nI have many people to thank for making this book title and the overall series happen.\n\nFirst, I must thank my wife Christen Simpson, and my two kids Ethan and Emily, for putting up with Dad always pecking away at the computer. Even when not writing books, my obsession with JavaScript glues my eyes to the screen far more than it should. That time I borrow from my family is the reason these books can so deeply and completely explain JavaScript to you, the reader. I owe my family everything.\n\nI'd like to thank my editors at O'Reilly, namely Simon St.Laurent and Brian MacDonald, as well as the rest of the editorial and marketing staff. They are fantastic to work with, and have been especially accommodating during this experiment into \"open source\" book writing, editing, and production.\n\nThank you to the many folks who have participated in making this book series better by providing editorial suggestions and corrections, including Shelley Powers, Tim Ferro, Evan Borden, Forrest L. Norvell, Jennifer Davis, Jesse Harlin, and many others. A big thank you to Rick Waldron for writing the Foreword for this title.\n\nThank you to the countless folks in the community, including members of the TC39 committee, who have shared so much knowledge with the rest of us, and especially tolerated my incessant questions and explorations with patience and detail. 
John-David Dalton, Juriy \"kangax\" Zaytsev, Mathias Bynens, Axel Rauschmayer, Nicholas Zakas, Angus Croll, Reginald Braithwaite, Dave Herman, Brendan Eich, Allen Wirfs-Brock, Bradley Meck, Domenic Denicola, David Walsh, Tim Disney, Peter van der Zee, Andrea Giammarchi, Kit Cambridge, Eric Elliott, André Bargull, Caitlin Potter, Brian Terlson, Ingvar Stepanyan, Chris Dickinson, Luke Hoban, and so many others, I can't even scratch the surface.\n\nThe *You Don't Know JS* book series was born on Kickstarter, so I also wish to thank all my (nearly) 500 generous backers, without whom this book series could not have happened:\n\n> Jan Szpila, nokiko, Murali Krishnamoorthy, Ryan Joy, Craig Patchett, pdqtrader, Dale Fukami, ray hatfield, R0drigo Perez [Mx], Dan Petitt, Jack Franklin, Andrew Berry, Brian Grinstead, Rob Sutherland, Sergi Meseguer, Phillip Gourley, Mark Watson, Jeff Carouth, Alfredo Sumaran, Martin Sachse, Marcio Barrios, Dan, AimelyneM, Matt Sullivan, Delnatte Pierre-Antoine, Jake Smith, Eugen Tudorancea, Iris, David Trinh, simonstl, Ray Daly, Uros Gruber, Justin Myers, Shai Zonis, Mom & Dad, Devin Clark, Dennis Palmer, Brian Panahi Johnson, Josh Marshall, Marshall, Dennis Kerr, Matt Steele, Erik Slagter, Sacah, Justin Rainbow, Christian Nilsson, Delapouite, D.Pereira, Nicolas Hoizey, George V. 
Reilly, Dan Reeves, Bruno Laturner, Chad Jennings, Shane King, Jeremiah Lee Cohick, od3n, Stan Yamane, Marko Vucinic, Jim B, Stephen Collins, Ægir Þorsteinsson, Eric Pederson, Owain, Nathan Smith, Jeanetteurphy, Alexandre ELISÉ, Chris Peterson, Rik Watson, Luke Matthews, Justin Lowery, Morten Nielsen, Vernon Kesner, Chetan Shenoy, Paul Tregoing, Marc Grabanski, Dion Almaer, Andrew Sullivan, Keith Elsass, Tom Burke, Brian Ashenfelter, David Stuart, Karl Swedberg, Graeme, Brandon Hays, John Christopher, Gior, manoj reddy, Chad Smith, Jared Harbour, Minoru TODA, Chris Wigley, Daniel Mee, Mike, Handyface, Alex Jahraus, Carl Furrow, Rob Foulkrod, Max Shishkin, Leigh Penny Jr., Robert Ferguson, Mike van Hoenselaar, Hasse Schougaard, rajan venkataguru, Jeff Adams, Trae Robbins, Rolf Langenhuijzen, Jorge Antunes, Alex Koloskov, Hugh Greenish, Tim Jones, Jose Ochoa, Michael Brennan-White, Naga Harish Muvva, Barkóczi Dávid, Kitt Hodsden, Paul McGraw, Sascha Goldhofer, Andrew Metcalf, Markus Krogh, Michael Mathews, Matt Jared, Juanfran, Georgie Kirschner, Kenny Lee, Ted Zhang, Amit Pahwa, Inbal Sinai, Dan Raine, Schabse Laks, Michael Tervoort, Alexandre Abreu, Alan Joseph Williams, NicolasD, Cindy Wong, Reg Braithwaite, LocalPCGuy, Jon Friskics, Chris Merriman, John Pena, Jacob Katz, Sue Lockwood, Magnus Johansson, Jeremy Crapsey, Grzegorz Pawłowski, nico nuzzaci, Christine Wilks, Hans Bergren, charles montgomery, Ariel בר-לבב Fogel, Ivan Kolev, Daniel Campos, Hugh Wood, Christian Bradford, Frédéric Harper, Ionuţ Dan Popa, Jeff Trimble, Rupert Wood, Trey Carrico, Pancho Lopez, Joël kuijten, Tom A Marra, Jeff Jewiss, Jacob Rios, Paolo Di Stefano, Soledad Penades, Chris Gerber, Andrey Dolganov, Wil Moore III, Thomas Martineau, Kareem, Ben Thouret, Udi Nir, Morgan Laupies, jory carson-burson, Nathan L Smith, Eric Damon Walters, Derry Lozano-Hoyland, Geoffrey Wiseman, mkeehner, KatieK, Scott MacFarlane, Brian LaShomb, Adrien Mas, christopher ross, Ian Littman, Dan Atkinson, 
Elliot Jobe, Nick Dozier, Peter Wooley, John Hoover, dan, Martin A. Jackson, Héctor Fernando Hurtado, andy ennamorato, Paul Seltmann, Melissa Gore, Dave Pollard, Jack Smith, Philip Da Silva, Guy Israeli, @megalithic, Damian Crawford, Felix Gliesche, April Carter Grant, Heidi, jim tierney, Andrea Giammarchi, Nico Vignola, Don Jones, Chris Hartjes, Alex Howes, john gibbon, David J. Groom, BBox, Yu 'Dilys' Sun, Nate Steiner, Brandon Satrom, Brian Wyant, Wesley Hales, Ian Pouncey, Timothy Kevin Oxley, George Terezakis, sanjay raj, Jordan Harband, Marko McLion, Wolfgang Kaufmann, Pascal Peuckert, Dave Nugent, Markus Liebelt, Welling Guzman, Nick Cooley, Daniel Mesquita, Robert Syvarth, Chris Coyier, Rémy Bach, Adam Dougal, Alistair Duggin, David Loidolt, Ed Richer, Brian Chenault, GoldFire Studios, Carles Andrés, Carlos Cabo, Yuya Saito, roberto ricardo, Barnett Klane, Mike Moore, Kevin Marx, Justin Love, Joe Taylor, Paul Dijou, Michael Kohler, Rob Cassie, Mike Tierney, Cody Leroy Lindley, tofuji, Shimon Schwartz, Raymond, Luc De Brouwer, David Hayes, Rhys Brett-Bowen, Dmitry, Aziz Khoury, Dean, Scott Tolinski - Level Up, Clement Boirie, Djordje Lukic, Anton Kotenko, Rafael Corral, Philip Hurwitz, Jonathan Pidgeon, Jason Campbell, Joseph C., SwiftOne, Jan Hohner, Derick Bailey, getify, Daniel Cousineau, Chris Charlton, Eric Turner, David Turner, Joël Galeran, Dharma Vagabond, adam, Dirk van Bergen, dave ♥♫★ furf, Vedran Zakanj, Ryan McAllen, Natalie Patrice Tucker, Eric J. Bivona, Adam Spooner, Aaron Cavano, Kelly Packer, Eric J, Martin Drenovac, Emilis, Michael Pelikan, Scott F. 
Walter, Josh Freeman, Brandon Hudgeons, vijay chennupati, Bill Glennon, Robin R., Troy Forster, otaku_coder, Brad, Scott, Frederick Ostrander, Adam Brill, Seb Flippence, Michael Anderson, Jacob, Adam Randlett, Standard, Joshua Clanton, Sebastian Kouba, Chris Deck, SwordFire, Hannes Papenberg, Richard Woeber, hnzz, Rob Crowther, Jedidiah Broadbent, Sergey Chernyshev, Jay-Ar Jamon, Ben Combee, luciano bonachela, Mark Tomlinson, Kit Cambridge, Michael Melgares, Jacob Adams, Adrian Bruinhout, Bev Wieber, Scott Puleo, Thomas Herzog, April Leone, Daniel Mizieliński, Kees van Ginkel, Jon Abrams, Erwin Heiser, Avi Laviad, David newell, Jean-Francois Turcot, Niko Roberts, Erik Dana, Charles Neill, Aaron Holmes, Grzegorz Ziółkowski, Nathan Youngman, Timothy, Jacob Mather, Michael Allan, Mohit Seth, Ryan Ewing, Benjamin Van Treese, Marcelo Santos, Denis Wolf, Phil Keys, Chris Yung, Timo Tijhof, Martin Lekvall, Agendine, Greg Whitworth, Helen Humphrey, Dougal Campbell, Johannes Harth, Bruno Girin, Brian Hough, Darren Newton, Craig McPheat, Olivier Tille, Dennis Roethig, Mathias Bynens, Brendan Stromberger, sundeep, John Meyer, Ron Male, John F Croston III, gigante, Carl Bergenhem, B.J. 
May, Rebekah Tyler, Ted Foxberry, Jordan Reese, Terry Suitor, afeliz, Tom Kiefer, Darragh Duffy, Kevin Vanderbeken, Andy Pearson, Simon Mac Donald, Abid Din, Chris Joel, Tomas Theunissen, David Dick, Paul Grock, Brandon Wood, John Weis, dgrebb, Nick Jenkins, Chuck Lane, Johnny Megahan, marzsman, Tatu Tamminen, Geoffrey Knauth, Alexander Tarmolov, Jeremy Tymes, Chad Auld, Sean Parmelee, Rob Staenke, Dan Bender, Yannick derwa, Joshua Jones, Geert Plaisier, Tom LeZotte, Christen Simpson, Stefan Bruvik, Justin Falcone, Carlos Santana, Michael Weiss, Pablo Villoslada, Peter deHaan, Dimitris Iliopoulos, seyDoggy, Adam Jordens, Noah Kantrowitz, Amol M, Matthew Winnard, Dirk Ginader, Phinam Bui, David Rapson, Andrew Baxter, Florian Bougel, Michael George, Alban Escalier, Daniel Sellers, Sasha Rudan, John Green, Robert Kowalski, David I. Teixeira (@ditma, Charles Carpenter, Justin Yost, Sam S, Denis Ciccale, Kevin Sheurs, Yannick Croissant, Pau Fracés, Stephen McGowan, Shawn Searcy, Chris Ruppel, Kevin Lamping, Jessica Campbell, Christopher Schmitt, Sablons, Jonathan Reisdorf, Bunni Gek, Teddy Huff, Michael Mullany, Michael Fürstenberg, Carl Henderson, Rick Yoesting, Scott Nichols, Hernán Ciudad, Andrew Maier, Mike Stapp, Jesse Shawl, Sérgio Lopes, jsulak, Shawn Price, Joel Clermont, Chris Ridmann, Sean Timm, Jason Finch, Aiden Montgomery, Elijah Manor, Derek Gathright, Jesse Harlin, Dillon Curry, Courtney Myers, Diego Cadenas, Arne de Bree, João Paulo Dubas, James Taylor, Philipp Kraeutli, Mihai Păun, Sam Gharegozlou, joshjs, Matt Murchison, Eric Windham, Timo Behrmann, Andrew Hall, joshua price, Théophile Villard\n\nThis book series is being produced in an open source fashion, including editing and production. We owe GitHub a debt of gratitude for making that sort of thing possible for the community!\n\nThank you again to all the countless folks I didn't name but who I nonetheless owe thanks. 
May this book series be \"owned\" by all of us and serve to contribute to increasing awareness and understanding of the JavaScript language, to the benefit of all current and future community contributors.\n"
  },
  {
    "path": "es6 & beyond/ch1.md",
    "content": "# You Don't Know JS: ES6 & Beyond\n# Chapter 1: ES? Now & Future\n\nBefore you dive into this book, you should have a solid working proficiency over JavaScript up to the most recent standard (at the time of this writing), which is commonly called *ES5* (technically ES 5.1). Here, we plan to talk squarely about the upcoming *ES6*, as well as cast our vision beyond to understand how JS will evolve moving forward.\n\nIf you are still looking for confidence with JavaScript, I highly recommend you read the other titles in this series first:\n\n* *Up & Going*: Are you new to programming and JS? This is the roadmap you need to consult as you start your learning journey.\n* *Scope & Closures*: Did you know that JS lexical scope is based on compiler (not interpreter!) semantics? Can you explain how closures are a direct result of lexical scope and functions as values?\n* *this & Object Prototypes*: Can you recite the four simple rules for how `this` is bound? Have you been muddling through fake \"classes\" in JS instead of adopting the simpler \"behavior delegation\" design pattern? Ever heard of *objects linked to other objects* (OLOO)?\n* *Types & Grammar*: Do you know the built-in types in JS, and more importantly, do you know how to properly and safely use coercion between types? How comfortable are you with the nuances of JS grammar/syntax?\n* *Async & Performance*: Are you still using callbacks to manage your asynchrony? Can you explain what a promise is and why/how it solves \"callback hell\"? Do you know how to use generators to improve the legibility of async code? What exactly constitutes mature optimization of JS programs and individual operations?\n\nIf you've already read all those titles and you feel pretty comfortable with the topics they cover, it's time we dive into the evolution of JS to explore all the changes coming not only soon but farther over the horizon.\n\nUnlike ES5, ES6 is not just a modest set of new APIs added to the language. 
It incorporates a whole slew of new syntactic forms, some of which may take quite a bit of getting used to. There's also a variety of new organization forms and new API helpers for various data types.\n\nES6 is a radical jump forward for the language. Even if you think you know JS in ES5, ES6 is full of new stuff you *don't know yet*, so get ready! This book explores all the major themes of ES6 that you need to get up to speed on, and even gives you a glimpse of future features coming down the track that you should be aware of.\n\n**Warning:** All code in this book assumes an ES6+ environment. At the time of this writing, ES6 support varies quite a bit in browsers and JS environments (like Node.js), so your mileage may vary.\n\n## Versioning\n\nThe JavaScript standard is referred to officially as \"ECMAScript\" (abbreviated \"ES\"), and up until just recently has been versioned entirely by ordinal number (i.e., \"5\" for \"5th edition\").\n\nThe earliest versions, ES1 and ES2, were not widely known or implemented. ES3 was the first widespread baseline for JavaScript, and constitutes the JavaScript standard for browsers like IE6-8 and older Android 2.x mobile browsers. For political reasons beyond what we'll cover here, the ill-fated ES4 never came about.\n\nIn 2009, ES5 was officially finalized (later ES5.1 in 2011), and settled as the widespread standard for JS for the modern revolution and explosion of browsers, such as Firefox, Chrome, Opera, Safari, and many others.\n\nLeading up to the expected *next* version of JS (slipped from 2013 to 2014 and then 2015), the obvious and common label in discourse has been ES6.\n\nHowever, late into the ES6 specification timeline, suggestions have surfaced that versioning may in the future switch to a year-based schema, such as ES2016 (aka ES7) to refer to whatever version of the specification is finalized before the end of 2016. 
Some disagree, but ES6 will likely maintain its dominant mindshare over the late-change substitute ES2015. However, ES2016 may in fact signal the new year-based schema.\n\nIt has also been observed that the pace of JS evolution is much faster even than single-year versioning. As soon as an idea begins to progress through standards discussions, browsers start prototyping the feature, and early adopters start experimenting with the code.\n\nUsually well before there's an official stamp of approval, a feature is de facto standardized by virtue of this early engine/tooling prototyping. So it's also valid to consider the future of JS versioning to be per-feature rather than per-arbitrary-collection-of-major-features (as it is now) or even per-year (as it may become).\n\nThe takeaway is that the version labels stop being as important, and JavaScript starts to be seen more as an evergreen, living standard. The best way to cope with this is to stop thinking about your code base as being \"ES6-based,\" for instance, and instead consider it feature by feature for support.\n\n## Transpiling\n\nMade even worse by the rapid evolution of features, a problem arises for JS developers who at once may both strongly desire to use new features while at the same time being slapped with the reality that their sites/apps may need to support older browsers without such support.\n\nThe way ES5 appears to have played out in the broader industry, the typical mindset was that code bases waited to adopt ES5 until most if not all pre-ES5 environments had fallen out of their support spectrum. As a result, many are just recently (at the time of this writing) starting to adopt things like `strict` mode, which landed in ES5 over five years ago.\n\nIt's widely considered to be a harmful approach for the future of the JS ecosystem to wait around and trail the specification by so many years. 
All those responsible for evolving the language desire for developers to begin basing their code on the new features and patterns as soon as they stabilize in specification form and browsers have a chance to implement them.\n\nSo how do we resolve this seeming contradiction? The answer is tooling, specifically a technique called *transpiling* (transformation + compiling). Roughly, the idea is to use a special tool to transform your ES6 code into equivalent (or close!) matches that work in ES5 environments.\n\nFor example, consider shorthand property definitions (see \"Object Literal Extensions\" in Chapter 2). Here's the ES6 form:\n\n```js\nvar foo = [1,2,3];\n\nvar obj = {\n\tfoo\t\t// means `foo: foo`\n};\n\nobj.foo;\t// [1,2,3]\n```\n\nBut (roughly) here's how that transpiles:\n\n```js\nvar foo = [1,2,3];\n\nvar obj = {\n\tfoo: foo\n};\n\nobj.foo;\t// [1,2,3]\n```\n\nThis is a minor but pleasant transformation that lets us shorten the `foo: foo` in an object literal declaration to just `foo`, if the names are the same.\n\nTranspilers perform these transformations for you, usually in a build workflow step similar to how you perform linting, minification, and other similar operations.\n\n### Shims/Polyfills\n\nNot all new ES6 features need a transpiler. Polyfills (aka shims) are a pattern for defining equivalent behavior from a newer environment into an older environment, when possible. Syntax cannot be polyfilled, but APIs often can be.\n\nFor example, `Object.is(..)` is a new utility for checking strict equality of two values but without the nuanced exceptions that `===` has for `NaN` and `-0` values. 
The polyfill for `Object.is(..)` is pretty easy:\n\n```js\nif (!Object.is) {\n\tObject.is = function(v1, v2) {\n\t\t// test for `-0`\n\t\tif (v1 === 0 && v2 === 0) {\n\t\t\treturn 1 / v1 === 1 / v2;\n\t\t}\n\t\t// test for `NaN`\n\t\tif (v1 !== v1) {\n\t\t\treturn v2 !== v2;\n\t\t}\n\t\t// everything else\n\t\treturn v1 === v2;\n\t};\n}\n```\n\n**Tip:** Pay attention to the outer `if` statement guard wrapped around the polyfill. This is an important detail, which means the snippet only defines its fallback behavior for older environments where the API in question isn't already defined; it would be very rare that you'd want to overwrite an existing API.\n\nThere's a great collection of ES6 shims called \"ES6 Shim\" (https://github.com/paulmillr/es6-shim/) that you should definitely adopt as a standard part of any new JS project!\n\nIt is assumed that JS will continue to evolve constantly, with browsers rolling out support for features continually rather than in large chunks. So the best strategy for keeping updated as it evolves is to just introduce polyfill shims into your code base, and a transpiler step into your build workflow, right now and get used to that new reality.\n\nIf you decide to keep the status quo and just wait around for all browsers without a feature supported to go away before you start using the feature, you're always going to be way behind. You'll sadly be missing out on all the innovations designed to make writing JavaScript more effective, efficient, and robust.\n\n## Review\n\nES6 (some may try to call it ES2015) is just landing as of the time of this writing, and it has lots of new stuff you need to learn!\n\nBut it's even more important to shift your mindset to align with the new way that JavaScript is going to evolve. 
It's not just waiting around for years for some official document to get a vote of approval, as many have done in the past.\n\nNow, JavaScript features land in browsers as they become ready, and it's up to you whether you'll get on the train early or whether you'll be playing costly catch-up games years from now.\n\nWhatever labels future JavaScript adopts, it's going to move a lot quicker than it ever has before. Transpilers and shims/polyfills are important tools to keep you on the forefront of where the language is headed.\n\nIf there's any narrative important to understand about the new reality for JavaScript, it's that all JS developers are strongly implored to move from the trailing edge of the curve to the leading edge. And learning ES6 is where that all starts!\n"
  },
  {
    "path": "es6 & beyond/ch2.md",
    "content": "# You Don't Know JS: ES6 & Beyond\n# Chapter 2: Syntax\n\nIf you've been writing JS for any length of time, odds are the syntax is pretty familiar to you. There are certainly many quirks, but overall it's a fairly reasonable and straightforward syntax that draws many similarities from other languages.\n\nHowever, ES6 adds quite a few new syntactic forms that take some getting used to. In this chapter, we'll tour through them to find out what's in store.\n\n**Tip:** At the time of this writing, some of the features discussed in this book have been implemented in various browsers (Firefox, Chrome, etc.), but some have only been partially implemented and many others have not been implemented at all. Your experience may be mixed trying these examples directly. If so, try them out with transpilers, as most of these features are covered by those tools. ES6Fiddle (http://www.es6fiddle.net/) is a great, easy-to-use playground for trying out ES6, as is the online REPL for the Babel transpiler (http://babeljs.io/repl/).\n\n## Block-Scoped Declarations\n\nYou're probably aware that the fundamental unit of variable scoping in JavaScript has always been the `function`. If you needed to create a block of scope, the most prevalent way to do so other than a regular function declaration was the immediately invoked function expression (IIFE). For example:\n\n```js\nvar a = 2;\n\n(function IIFE(){\n\tvar a = 3;\n\tconsole.log( a );\t// 3\n})();\n\nconsole.log( a );\t\t// 2\n```\n\n### `let` Declarations\n\nHowever, we can now create declarations that are bound to any block, called (unsurprisingly) *block scoping*. This means all we need is a pair of `{ .. }` to create a scope. 
Instead of using `var`, which always declares variables attached to the enclosing function (or global, if top level) scope, use `let`:\n\n```js\nvar a = 2;\n\n{\n\tlet a = 3;\n\tconsole.log( a );\t// 3\n}\n\nconsole.log( a );\t\t// 2\n```\n\nIt's not very common or idiomatic thus far in JS to use a standalone `{ .. }` block, but it's always been valid. And developers from other languages that have *block scoping* will readily recognize that pattern.\n\nI believe this is the best way to create block-scoped variables, with a dedicated `{ .. }` block. Moreover, you should always put the `let` declaration(s) at the very top of that block. If you have more than one to declare, I'd recommend using just one `let`.\n\nStylistically, I even prefer to put the `let` on the same line as the opening `{`, to make it clearer that this block is only for the purpose of declaring the scope for those variables.\n\n```js\n{\tlet a = 2, b, c;\n\t// ..\n}\n```\n\nNow, that's going to look strange and it's not likely going to match the recommendations given in most other ES6 literature. But I have reasons for my madness.\n\nThere's another experimental (not standardized) form of the `let` declaration called the `let`-block, which looks like:\n\n```js\nlet (a = 2, b, c) {\n\t// ..\n}\n```\n\nThat form is what I call *explicit* block scoping, whereas the `let ..` declaration form that mirrors `var` is more *implicit*, as it kind of hijacks whatever `{ .. }` pair it's found in. Generally developers find *explicit* mechanisms a bit more preferable than *implicit* mechanisms, and I claim this is one of those cases.\n\nIf you compare the previous two snippet forms, they're very similar, and in my opinion both qualify stylistically as *explicit* block scoping. Unfortunately, the `let (..) { .. }` form, the most *explicit* of the options, was not adopted in ES6. 
That may be revisited post-ES6, but for now the former option is our best bet, I think.\n\nTo reinforce the *implicit* nature of `let ..` declarations, consider these usages:\n\n```js\nlet a = 2;\n\nif (a > 1) {\n\tlet b = a * 3;\n\tconsole.log( b );\t\t// 6\n\n\tfor (let i = a; i <= b; i++) {\n\t\tlet j = i + 10;\n\t\tconsole.log( j );\n\t}\n\t// 12 13 14 15 16\n\n\tlet c = a + b;\n\tconsole.log( c );\t\t// 8\n}\n```\n\nQuick quiz without looking back at that snippet: which variable(s) exist only inside the `if` statement, and which variable(s) exist only inside the `for` loop?\n\nThe answers: the `if` statement contains `b` and `c` block-scoped variables, and the `for` loop contains `i` and `j` block-scoped variables.\n\nDid you have to think about it for a moment? Does it surprise you that `i` isn't added to the enclosing `if` statement scope? That mental pause and questioning -- I call it a \"mental tax\" -- comes from the fact that this `let` mechanism is not only new to us, but it's also *implicit*.\n\nThere's also hazard in the `let c = ..` declaration appearing so far down in the scope. Unlike traditional `var`-declared variables, which are attached to the entire enclosing function scope regardless of where they appear, `let` declarations attach to the block scope but are not initialized until they appear in the block.\n\nAccessing a `let`-declared variable earlier than its `let ..` declaration/initialization causes an error, whereas with `var` declarations the ordering doesn't matter (except stylistically).\n\nConsider:\n\n```js\n{\n\tconsole.log( a );\t// undefined\n\tconsole.log( b );\t// ReferenceError!\n\n\tvar a;\n\tlet b;\n}\n```\n\n**Warning:** This `ReferenceError` from accessing too-early `let`-declared references is technically called a *Temporal Dead Zone (TDZ)* error -- you're accessing a variable that's been declared but not yet initialized. This will not be the only time we see TDZ errors -- they crop up in several places in ES6. 
Also, note that \"initialized\" doesn't require explicitly assigning a value in your code, as `let b;` is totally valid. A variable that's not given an assignment at declaration time is assumed to have been assigned the `undefined` value, so `let b;` is the same as `let b = undefined;`. Explicit assignment or not, you cannot access `b` until the `let b` statement is run.\n\nOne last gotcha: `typeof` behaves differently with TDZ variables than it does with undeclared (or declared!) variables. For example:\n\n```js\n{\n\t// `a` is not declared\n\tif (typeof a === \"undefined\") {\n\t\tconsole.log( \"cool\" );\n\t}\n\n\t// `b` is declared, but in its TDZ\n\tif (typeof b === \"undefined\") {\t\t// ReferenceError!\n\t\t// ..\n\t}\n\n\t// ..\n\n\tlet b;\n}\n```\n\nThe `a` is not declared, so `typeof` is the only safe way to check for its existence or not. But `typeof b` throws the TDZ error because farther down in the code there happens to be a `let b` declaration. Oops.\n\nNow it should be clearer why I insist that `let` declarations should all be at the top of their scope. That totally avoids the accidental errors of accessing too early. It also makes it more *explicit* when you look at the start of a block, any block, what variables it contains.\n\nYour blocks (`if` statements, `while` loops, etc.) don't have to share their original behavior with scoping behavior.\n\nThis explicitness on your part, which is up to you to maintain with discipline, will save you lots of refactor headaches and footguns down the line.\n\n**Note:** For more information on `let` and block scoping, see Chapter 3 of the *Scope & Closures* title of this series.\n\n#### `let` + `for`\n\nThe only exception I'd make to the preference for the *explicit* form of `let` declaration blocking is a `let` that appears in the header of a `for` loop. 
The reason may seem nuanced, but I believe it to be one of the more important ES6 features.\n\nConsider:\n\n```js\nvar funcs = [];\n\nfor (let i = 0; i < 5; i++) {\n\tfuncs.push( function(){\n\t\tconsole.log( i );\n\t} );\n}\n\nfuncs[3]();\t\t// 3\n```\n\nThe `let i` in the `for` header declares an `i` not just for the `for` loop itself, but it redeclares a new `i` for each iteration of the loop. That means that closures created inside the loop iteration close over those per-iteration variables the way you'd expect.\n\nIf you tried that same snippet but with `var i` in the `for` loop header, you'd get `5` instead of `3`, because there'd only be one `i` in the outer scope that was closed over, instead of a new `i` for each iteration's function to close over.\n\nYou could also have accomplished the same thing slightly more verbosely:\n\n```js\nvar funcs = [];\n\nfor (var i = 0; i < 5; i++) {\n\tlet j = i;\n\tfuncs.push( function(){\n\t\tconsole.log( j );\n\t} );\n}\n\nfuncs[3]();\t\t// 3\n```\n\nHere, we forcibly create a new `j` for each iteration, and then the closure works the same way. I prefer the former approach; that extra special capability is why I endorse the `for (let .. ) ..` form. It could be argued it's somewhat more *implicit*, but it's *explicit* enough, and useful enough, for my tastes.\n\n`let` also works the same way with `for..in` and `for..of` loops (see \"`for..of` Loops\").\n\n### `const` Declarations\n\nThere's one other form of block-scoped declaration to consider: the `const`, which creates *constants*.\n\nWhat exactly is a constant? It's a variable that's read-only after its initial value is set. Consider:\n\n```js\n{\n\tconst a = 2;\n\tconsole.log( a );\t// 2\n\n\ta = 3;\t\t\t\t// TypeError!\n}\n```\n\nYou are not allowed to change the value the variable holds once it's been set, at declaration time. A `const` declaration must have an explicit initialization. 
If you wanted a *constant* with the `undefined` value, you'd have to declare `const a = undefined` to get it.\n\nConstants are not a restriction on the value itself, but on the variable's assignment of that value. In other words, the value is not frozen or immutable because of `const`, just the assignment of it. If the value is complex, such as an object or array, the contents of the value can still be modified:\n\n```js\n{\n\tconst a = [1,2,3];\n\ta.push( 4 );\n\tconsole.log( a );\t\t// [1,2,3,4]\n\n\ta = 42;\t\t\t\t\t// TypeError!\n}\n```\n\nThe `a` variable doesn't actually hold a constant array; rather, it holds a constant reference to the array. The array itself is freely mutable.\n\n**Warning:** Assigning an object or array as a constant means that value will not be able to be garbage collected until that constant's lexical scope goes away, as the reference to the value can never be unset. That may be desirable, but be careful if it's not your intent!\n\nEssentially, `const` declarations enforce what we've stylistically signaled with our code for years, where we declared a variable name of all uppercase letters and assigned it some literal value that we took care never to change. There's no enforcement on a `var` assignment, but there is now with a `const` assignment, which can help you catch unintended changes.\n\n`const` *can* be used with variable declarations of `for`, `for..in`, and `for..of` loops (see \"`for..of` Loops\"). However, an error will be thrown if there's any attempt to reassign, such as the typical `i++` clause of a `for` loop.\n\n#### `const` Or Not\n\nThere are some rumored assumptions that a `const` could be more optimizable by the JS engine in certain scenarios than a `let` or `var` would be. 
Theoretically, the engine more easily knows the variable's value/type will never change, so it can eliminate some possible tracking.\n\nWhether `const` really helps here or this is just our own fantasies and intuitions, the much more important decision to make is whether you intend constant behavior or not. Remember: one of the most important roles for source code is to communicate clearly, not only to you, but your future self and other code collaborators, what your intent is.\n\nSome developers prefer to start out every variable declaration as a `const` and then relax a declaration back to a `let` if it becomes necessary for its value to change in the code. This is an interesting perspective, but it's not clear that it genuinely improves the readability or reason-ability of code.\n\nIt's not really a *protection*, as many believe, because any later developer who wants to change a value of a `const` can just blindly change `const` to `let` on the declaration. At best, it protects against accidental change. But again, other than our intuitions and sensibilities, there doesn't appear to be an objective and clear measure of what constitutes \"accidents\" or prevention thereof. Similar mindsets exist around type enforcement.\n\nMy advice: to avoid potentially confusing code, only use `const` for variables that you're intentionally and obviously signaling will not change. In other words, don't *rely on* `const` for code behavior, but instead use it as a tool for signaling intent, when intent can be signaled clearly.\n\n### Block-scoped Functions\n\nStarting with ES6, function declarations that occur inside of blocks are now specified to be scoped to that block. Prior to ES6, the specification did not call for this, but many implementations did it anyway. So now the specification meets reality.\n\nConsider:\n\n```js\n{\n\tfoo();\t\t\t\t\t// works!\n\n\tfunction foo() {\n\t\t// ..\n\t}\n}\n\nfoo();\t\t\t\t\t\t// ReferenceError\n```\n\nThe `foo()` function is declared inside the `{ .. 
}` block, and as of ES6 is block-scoped there. So it's not available outside that block. But also note that it is \"hoisted\" within the block, as opposed to `let` declarations, which suffer the TDZ error trap mentioned earlier.\n\nBlock-scoping of function declarations could be a problem if you've ever written code like this before, and relied on the old legacy non-block-scoped behavior:\n\n```js\nif (something) {\n\tfunction foo() {\n\t\tconsole.log( \"1\" );\n\t}\n}\nelse {\n\tfunction foo() {\n\t\tconsole.log( \"2\" );\n\t}\n}\n\nfoo();\t\t// ??\n```\n\nIn pre-ES6 environments, `foo()` would print `\"2\"` regardless of the value of `something`, because both function declarations were hoisted out of the blocks, and the second one always wins.\n\nIn ES6, that last line throws a `ReferenceError`.\n\n## Spread/Rest\n\nES6 introduces a new `...` operator that's typically referred to as the *spread* or *rest* operator, depending on where/how it's used. Let's take a look:\n\n```js\nfunction foo(x,y,z) {\n\tconsole.log( x, y, z );\n}\n\nfoo( ...[1,2,3] );\t\t\t\t// 1 2 3\n```\n\nWhen `...` is used in front of an array (actually, any *iterable*, which we cover in Chapter 3), it acts to \"spread\" it out into its individual values.\n\nYou'll typically see that usage as is shown in that previous snippet, when spreading out an array as a set of arguments to a function call. 
In this usage, `...` acts to give us a simpler syntactic replacement for the `apply(..)` method, which we would typically have used pre-ES6 as:\n\n```js\nfoo.apply( null, [1,2,3] );\t\t// 1 2 3\n```\n\nBut `...` can be used to spread out/expand a value in other contexts as well, such as inside another array declaration:\n\n```js\nvar a = [2,3,4];\nvar b = [ 1, ...a, 5 ];\n\nconsole.log( b );\t\t\t\t\t// [1,2,3,4,5]\n```\n\nIn this usage, `...` is basically replacing `concat(..)`, as it behaves like `[1].concat( a, [5] )` here.\n\nThe other common usage of `...` can be seen as essentially the opposite; instead of spreading a value out, the `...` *gathers* a set of values together into an array. Consider:\n\n```js\nfunction foo(x, y, ...z) {\n\tconsole.log( x, y, z );\n}\n\nfoo( 1, 2, 3, 4, 5 );\t\t\t// 1 2 [3,4,5]\n```\n\nThe `...z` in this snippet is essentially saying: \"gather the *rest* of the arguments (if any) into an array called `z`.\" Because `x` was assigned `1`, and `y` was assigned `2`, the rest of the arguments `3`, `4`, and `5` were gathered into `z`.\n\nOf course, if you don't have any named parameters, the `...` gathers all arguments:\n\n```js\nfunction foo(...args) {\n\tconsole.log( args );\n}\n\nfoo( 1, 2, 3, 4, 5);\t\t\t// [1,2,3,4,5]\n```\n\n**Note:** The `...args` in the `foo(..)` function declaration is usually called \"rest parameters,\" because you're collecting the rest of the parameters. I prefer \"gather,\" because it's more descriptive of what it does rather than what it contains.\n\nThe best part about this usage is that it provides a very solid alternative to using the long-since-deprecated `arguments` array -- actually, it's not really an array, but an array-like object. 
Because `args` (or whatever you call it -- a lot of people prefer `r` or `rest`) is a real array, we can get rid of lots of silly pre-ES6 tricks we jumped through to make `arguments` into something we can treat as an array.\n\nConsider:\n\n```js\n// doing things the new ES6 way\nfunction foo(...args) {\n\t// `args` is already a real array\n\n\t// discard first element in `args`\n\targs.shift();\n\n\t// pass along all of `args` as arguments\n\t// to `console.log(..)`\n\tconsole.log( ...args );\n}\n\n// doing things the old-school pre-ES6 way\nfunction bar() {\n\t// turn `arguments` into a real array\n\tvar args = Array.prototype.slice.call( arguments );\n\n\t// add some elements on the end\n\targs.push( 4, 5 );\n\n\t// filter out odd numbers\n\targs = args.filter( function(v){\n\t\treturn v % 2 == 0;\n\t} );\n\n\t// pass along all of `args` as arguments\n\t// to `foo(..)`\n\tfoo.apply( null, args );\n}\n\nbar( 0, 1, 2, 3 );\t\t\t\t\t// 2 4\n```\n\nThe `...args` in the `foo(..)` function declaration gathers arguments, and the `...args` in the `console.log(..)` call spreads them out. That's a good illustration of the symmetric but opposite uses of the `...` operator.\n\nBesides the `...` usage in a function declaration, there's another case where `...` is used for gathering values, and we'll look at it in the \"Too Many, Too Few, Just Enough\" section later in this chapter.\n\n## Default Parameter Values\n\nPerhaps one of the most common idioms in JavaScript relates to setting a default value for a function parameter. 
The way we've done this for years should look quite familiar:\n\n```js\nfunction foo(x,y) {\n\tx = x || 11;\n\ty = y || 31;\n\n\tconsole.log( x + y );\n}\n\nfoo();\t\t\t\t// 42\nfoo( 5, 6 );\t\t// 11\nfoo( 5 );\t\t\t// 36\nfoo( null, 6 );\t\t// 17\n```\n\nOf course, if you've used this pattern before, you know that it's both helpful and a little bit dangerous, if for example you need to be able to pass in what would otherwise be considered a falsy value for one of the parameters. Consider:\n\n```js\nfoo( 0, 42 );\t\t// 53 <-- Oops, not 42\n```\n\nWhy? Because the `0` is falsy, and so the `x || 11` results in `11`, not the directly passed in `0`.\n\nTo fix this gotcha, some people will instead write the check more verbosely like this:\n\n```js\nfunction foo(x,y) {\n\tx = (x !== undefined) ? x : 11;\n\ty = (y !== undefined) ? y : 31;\n\n\tconsole.log( x + y );\n}\n\nfoo( 0, 42 );\t\t\t// 42\nfoo( undefined, 6 );\t// 17\n```\n\nOf course, that means that any value except `undefined` can be directly passed in. However, `undefined` will be assumed to signal, \"I didn't pass this in.\" That works great unless you actually need to be able to pass `undefined` in.\n\nIn that case, you could test to see if the argument is actually omitted, by it actually not being present in the `arguments` array, perhaps like this:\n\n```js\nfunction foo(x,y) {\n\tx = (0 in arguments) ? x : 11;\n\ty = (1 in arguments) ? y : 31;\n\n\tconsole.log( x + y );\n}\n\nfoo( 5 );\t\t\t\t// 36\nfoo( 5, undefined );\t// NaN\n```\n\nBut how would you omit the first `x` argument without the ability to pass in any kind of value (not even `undefined`) that signals \"I'm omitting this argument\"?\n\n`foo(,5)` is tempting, but it's invalid syntax. 
`foo.apply(null,[,5])` seems like it should do the trick, but `apply(..)`'s quirks here mean that the arguments are treated as `[undefined,5]`, which of course doesn't omit.\n\nIf you investigate further, you'll find you can only omit arguments on the end (i.e., righthand side) by simply passing fewer arguments than \"expected,\" but you cannot omit arguments in the middle or at the beginning of the arguments list. It's just not possible.\n\nThere's a principle applied to JavaScript's design here that is important to remember: `undefined` means *missing*. That is, there's no difference between `undefined` and *missing*, at least as far as function arguments go.\n\n**Note:** There are, confusingly, other places in JS where this particular design principle doesn't apply, such as for arrays with empty slots. See the *Types & Grammar* title of this series for more information.\n\nWith all this in mind, we can now examine a nice helpful syntax added as of ES6 to streamline the assignment of default values to missing arguments:\n\n```js\nfunction foo(x = 11, y = 31) {\n\tconsole.log( x + y );\n}\n\nfoo();\t\t\t\t\t// 42\nfoo( 5, 6 );\t\t\t// 11\nfoo( 0, 42 );\t\t\t// 42\n\nfoo( 5 );\t\t\t\t// 36\nfoo( 5, undefined );\t// 36 <-- `undefined` is missing\nfoo( 5, null );\t\t\t// 5  <-- null coerces to `0`\n\nfoo( undefined, 6 );\t// 17 <-- `undefined` is missing\nfoo( null, 6 );\t\t\t// 6  <-- null coerces to `0`\n```\n\nNotice the results and how they imply both subtle differences and similarities to the earlier approaches.\n\n`x = 11` in a function declaration is more like `x !== undefined ? x : 11` than the much more common idiom `x || 11`, so you'll need to be careful in converting your pre-ES6 code to this ES6 default parameter value syntax.\n\n**Note:** A rest/gather parameter (see \"Spread/Rest\") cannot have a default value. So, while `function foo(...vals=[1,2,3]) {` might seem an intriguing capability, it's not valid syntax. 
You'll need to continue to apply that sort of logic manually if necessary.\n\n### Default Value Expressions\n\nFunction default values can be more than just simple values like `31`; they can be any valid expression, even a function call:\n\n```js\nfunction bar(val) {\n\tconsole.log( \"bar called!\" );\n\treturn y + val;\n}\n\nfunction foo(x = y + 3, z = bar( x )) {\n\tconsole.log( x, z );\n}\n\nvar y = 5;\nfoo();\t\t\t\t\t\t\t\t// \"bar called\"\n\t\t\t\t\t\t\t\t\t// 8 13\nfoo( 10 );\t\t\t\t\t\t\t// \"bar called\"\n\t\t\t\t\t\t\t\t\t// 10 15\ny = 6;\nfoo( undefined, 10 );\t\t\t\t// 9 10\n```\n\nAs you can see, the default value expressions are lazily evaluated, meaning they're only run if and when they're needed -- that is, when a parameter's argument is omitted or is `undefined`.\n\nIt's a subtle detail, but the formal parameters in a function declaration are in their own scope (think of it as a scope bubble wrapped around just the `( .. )` of the function declaration), not in the function body's scope. That means a reference to an identifier in a default value expression first matches the formal parameters' scope before looking to an outer scope. See the *Scope & Closures* title of this series for more information.\n\nConsider:\n\n```js\nvar w = 1, z = 2;\n\nfunction foo( x = w + 1, y = x + 1, z = z + 1 ) {\n\tconsole.log( x, y, z );\n}\n\nfoo();\t\t\t\t\t// ReferenceError\n```\n\nThe `w` in the `w + 1` default value expression looks for `w` in the formal parameters' scope, but does not find it, so the outer scope's `w` is used. 
Next, the `x` in the `x + 1` default value expression finds `x` in the formal parameters' scope, and luckily `x` has already been initialized, so the assignment to `y` works fine.\n\nHowever, the `z` in `z + 1` finds `z` as a not-yet-initialized-at-that-moment parameter variable, so it never tries to find the `z` from the outer scope.\n\nAs we mentioned in the \"`let` Declarations\" section earlier in this chapter, ES6 has a TDZ, which prevents a variable from being accessed in its uninitialized state. As such, the `z + 1` default value expression throws a TDZ `ReferenceError` error.\n\nThough it's not necessarily a good idea for code clarity, a default value expression can even be an inline function expression call -- commonly referred to as an immediately invoked function expression (IIFE):\n\n```js\nfunction foo( x =\n\t(function(v){ return v + 11; })( 31 )\n) {\n\tconsole.log( x );\n}\n\nfoo();\t\t\t// 42\n```\n\nThere will very rarely be any cases where an IIFE (or any other executed inline function expression) will be appropriate for default value expressions. If you find yourself tempted to do this, take a step back and reevaluate!\n\n**Warning:** If the IIFE had tried to access the `x` identifier and had not declared its own `x`, this would also have been a TDZ error, just as discussed before.\n\nThe default value expression in the previous snippet is an IIFE in the sense that it's a function that's executed right inline, via `(31)`. If we had left that part off, the default value assigned to `x` would have just been a function reference itself, perhaps like a default callback. There will probably be cases where that pattern will be quite useful, such as:\n\n```js\nfunction ajax(url, cb = function(){}) {\n\t// ..\n}\n\najax( \"http://some.url.1\" );\n```\n\nIn this case, we essentially want to default `cb` to be a no-op empty function call if not otherwise specified. 
The function expression is just a function reference, not a function call itself (no invoking `()` on the end of it), which accomplishes that goal.\n\nSince the early days of JS, there's been a little-known but useful quirk available to us: `Function.prototype` is itself an empty no-op function. So, the declaration could have been `cb = Function.prototype` and saved the inline function expression creation.\n\n## Destructuring\n\nES6 introduces a new syntactic feature called *destructuring*, which may be a little less confusing if you instead think of it as *structured assignment*. To understand this meaning, consider:\n\n```js\nfunction foo() {\n\treturn [1,2,3];\n}\n\nvar tmp = foo(),\n\ta = tmp[0], b = tmp[1], c = tmp[2];\n\nconsole.log( a, b, c );\t\t\t\t// 1 2 3\n```\n\nAs you can see, we created a manual assignment of the values in the array that `foo()` returns to individual variables `a`, `b`, and `c`, and to do so we (unfortunately) needed the `tmp` variable.\n\nSimilarly, we can do the following with objects:\n\n```js\nfunction bar() {\n\treturn {\n\t\tx: 4,\n\t\ty: 5,\n\t\tz: 6\n\t};\n}\n\nvar tmp = bar(),\n\tx = tmp.x, y = tmp.y, z = tmp.z;\n\nconsole.log( x, y, z );\t\t\t\t// 4 5 6\n```\n\nThe `tmp.x` property value is assigned to the `x` variable, and likewise for `tmp.y` to `y` and `tmp.z` to `z`.\n\nManually assigning indexed values from an array or properties from an object can be thought of as *structured assignment*. ES6 adds a dedicated syntax for *destructuring*, specifically *array destructuring* and *object destructuring*. This syntax eliminates the need for the `tmp` variable in the previous snippets, making them much cleaner. 
Consider:\n\n```js\nvar [ a, b, c ] = foo();\nvar { x: x, y: y, z: z } = bar();\n\nconsole.log( a, b, c );\t\t\t\t// 1 2 3\nconsole.log( x, y, z );\t\t\t\t// 4 5 6\n```\n\nYou're likely more accustomed to seeing syntax like `[a,b,c]` on the righthand side of an `=` assignment, as the value being assigned.\n\nDestructuring symmetrically flips that pattern, so that `[a,b,c]` on the lefthand side of the `=` assignment is treated as a kind of \"pattern\" for decomposing the righthand side array value into separate variable assignments.\n\nSimilarly, `{ x: x, y: y, z: z }` specifies a \"pattern\" to decompose the object value from `bar()` into separate variable assignments.\n\n### Object Property Assignment Pattern\n\nLet's dig into that `{ x: x, .. }` syntax from the previous snippet. If the property name being matched is the same as the variable you want to declare, you can actually shorten the syntax:\n\n```js\nvar { x, y, z } = bar();\n\nconsole.log( x, y, z );\t\t\t\t// 4 5 6\n```\n\nPretty cool, right?\n\nBut is `{ x, .. }` leaving off the `x: ` part or leaving off the `: x` part? We're actually leaving off the `x: ` part when we use the shorter syntax. That may not seem like an important detail, but you'll understand its importance in just a moment.\n\nIf you can write the shorter form, why would you ever write out the longer form? Because that longer form actually allows you to assign a property to a different variable name, which can sometimes be quite useful:\n\n```js\nvar { x: bam, y: baz, z: bap } = bar();\n\nconsole.log( bam, baz, bap );\t\t// 4 5 6\nconsole.log( x, y, z );\t\t\t\t// ReferenceError\n```\n\nThere's a subtle but super-important quirk to understand about this variation of the object destructuring form. 
To illustrate why it can be a gotcha you need to be careful of, let's consider the "pattern" of how normal object literals are specified:

```js
var X = 10, Y = 20;

var o = { a: X, b: Y };

console.log( o.a, o.b );			// 10 20
```

In `{ a: X, b: Y }`, we know that `a` is the object property, and `X` is the source value that gets assigned to it. In other words, the syntactic pattern is `target: source`, or more obviously, `property-name: value`. We intuitively understand this because it's the same as `=` assignment, where the pattern is `target = source`.

However, when you use object destructuring assignment -- that is, putting the `{ .. }` object literal-looking syntax on the lefthand side of the `=` operator -- you invert that `target: source` pattern.

Recall:

```js
var { x: bam, y: baz, z: bap } = bar();
```

The syntactic pattern here is `source: target` (or `value: variable-alias`). `x: bam` means the `x` property is the source value and `bam` is the target variable to assign to. In other words, object literals are `target <-- source`, and object destructuring assignments are `source --> target`. See how that's flipped?

There's another way to think about this syntax though, which may help ease the confusion. Consider:

```js
var aa = 10, bb = 20;

var o = { x: aa, y: bb };
var     { x: AA, y: BB } = o;

console.log( AA, BB );				// 10 20
```

In the `{ x: aa, y: bb }` line, the `x` and `y` represent the object properties. In the `{ x: AA, y: BB }` line, the `x` and the `y` *also* represent the object properties.

Recall how earlier I asserted that `{ x, .. }` was leaving off the `x: ` part?
In those two lines, if you erase the `x: ` and `y: ` parts in that snippet, you're left only with `aa, bb` and `AA, BB`, which in effect -- only conceptually, not actually -- are assignments from `aa` to `AA` and from `bb` to `BB`.\n\nSo, that symmetry may help to explain why the syntactic pattern was intentionally flipped for this ES6 feature.\n\n**Note:** I would have preferred the syntax to be `{ AA: x , BB: y }` for the destructuring assignment, as that would have preserved consistency of the more familiar `target: source` pattern for both usages. Alas, I'm having to train my brain for the inversion, as some readers may also have to do.\n\n### Not Just Declarations\n\nSo far, we've used destructuring assignment with `var` declarations (of course, they could also use `let` and `const`), but destructuring is a general assignment operation, not just a declaration.\n\nConsider:\n\n```js\nvar a, b, c, x, y, z;\n\n[a,b,c] = foo();\n( { x, y, z } = bar() );\n\nconsole.log( a, b, c );\t\t\t\t// 1 2 3\nconsole.log( x, y, z );\t\t\t\t// 4 5 6\n```\n\nThe variables can already be declared, and then the destructuring only does assignments, exactly as we've already seen.\n\n**Note:** For the object destructuring form specifically, when leaving off a `var`/`let`/`const` declarator, we had to surround the whole assignment expression in `( )`, because otherwise the `{ .. }` on the lefthand side as the first element in the statement is taken to be a block statement instead of an object.\n\nIn fact, the assignment expressions (`a`, `y`, etc.) don't actually need to be just variable identifiers. Anything that's a valid assignment expression is allowed. For example:\n\n```js\nvar o = {};\n\n[o.a, o.b, o.c] = foo();\n( { x: o.x, y: o.y, z: o.z } = bar() );\n\nconsole.log( o.a, o.b, o.c );\t\t// 1 2 3\nconsole.log( o.x, o.y, o.z );\t\t// 4 5 6\n```\n\nYou can even use computed property expressions in the destructuring. 
Consider:\n\n```js\nvar which = \"x\",\n\to = {};\n\n( { [which]: o[which] } = bar() );\n\nconsole.log( o.x );\t\t\t\t\t// 4\n```\n\nThe `[which]:` part is the computed property, which results in `x` -- the property to destructure from the object in question as the source of the assignment. The `o[which]` part is just a normal object key reference, which equates to `o.x` as the target of the assignment.\n\nYou can use the general assignments to create object mappings/transformations, such as:\n\n```js\nvar o1 = { a: 1, b: 2, c: 3 },\n\to2 = {};\n\n( { a: o2.x, b: o2.y, c: o2.z } = o1 );\n\nconsole.log( o2.x, o2.y, o2.z );\t// 1 2 3\n```\n\nOr you can map an object to an array, such as:\n\n```js\nvar o1 = { a: 1, b: 2, c: 3 },\n\ta2 = [];\n\n( { a: a2[0], b: a2[1], c: a2[2] } = o1 );\n\nconsole.log( a2 );\t\t\t\t\t// [1,2,3]\n```\n\nOr the other way around:\n\n```js\nvar a1 = [ 1, 2, 3 ],\n\to2 = {};\n\n[ o2.a, o2.b, o2.c ] = a1;\n\nconsole.log( o2.a, o2.b, o2.c );\t// 1 2 3\n```\n\nOr you could reorder one array to another:\n\n```js\nvar a1 = [ 1, 2, 3 ],\n\ta2 = [];\n\n[ a2[2], a2[0], a2[1] ] = a1;\n\nconsole.log( a2 );\t\t\t\t\t// [2,3,1]\n```\n\nYou can even solve the traditional \"swap two variables\" task without a temporary variable:\n\n```js\nvar x = 10, y = 20;\n\n[ y, x ] = [ x, y ];\n\nconsole.log( x, y );\t\t\t\t// 20 10\n```\n\n**Warning:** Be careful: you shouldn't mix in declaration with assignment unless you want all of the assignment expressions *also* to be treated as declarations. Otherwise, you'll get syntax errors. That's why in the earlier example I had to do `var a2 = []` separately from the `[ a2[0], .. ] = ..` destructuring assignment. It wouldn't make any sense to try `var [ a2[0], .. 
] = ..`, because `a2[0]` isn't a valid declaration identifier; it also obviously couldn't implicitly create a `var a2 = []` declaration to use.

### Repeated Assignments

The object destructuring form allows a source property (holding any value type) to be listed multiple times. For example:

```js
var { a: X, a: Y } = { a: 1 };

X;	// 1
Y;	// 1
```

That also means you can both destructure a sub-object/array property and also capture the sub-object/array's value itself. Consider:

```js
var { a: { x: X, x: Y }, a } = { a: { x: 1 } };

X;	// 1
Y;	// 1
a;	// { x: 1 }

( { a: X, a: Y, a: [ Z ] } = { a: [ 1 ] } );

X.push( 2 );
Y[0] = 10;

X;	// [10,2]
Y;	// [10,2]
Z;	// 1
```

A word of caution about destructuring: it may be tempting to list destructuring assignments all on a single line as has been done thus far in our discussion. However, it's a much better idea to spread destructuring assignment patterns over multiple lines, using proper indentation -- much like you would in JSON or with an object literal value -- for readability's sake.

```js
// harder to read:
var { a: { b: [ c, d ], e: { f } }, g } = obj;

// better:
var {
	a: {
		b: [ c, d ],
		e: { f }
	},
	g
} = obj;
```

Remember: **the purpose of destructuring is not just less typing, but more declarative readability.**

#### Destructuring Assignment Expressions

The assignment expression with object or array destructuring has as its completion value the full righthand object/array value. Consider:

```js
var o = { a:1, b:2, c:3 },
	a, b, c, p;

p = { a, b, c } = o;

console.log( a, b, c );			// 1 2 3
p === o;						// true
```

In the previous snippet, `p` was assigned the `o` object reference, not one of the `a`, `b`, or `c` values.
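Because the whole expression completes with the righthand object, you can feed a destructuring assignment directly into another expression, such as a function call. Here's a minimal sketch (the `record` value and the `report(..)` helper are made up just for illustration):

```js
var record = { a: 1, b: 2, c: 3 },
	a, b, c;

// hypothetical helper, just to receive the completion value
function report(obj) {
	return obj;
}

// destructure `record` and pass the full object along,
// all in one expression
var result = report( { a, b, c } = record );

console.log( a, b, c );				// 1 2 3
console.log( result === record );	// true
```

As with `p = { a, b, c } = o` above, it's the original `record` object reference that flows through, not a new object.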
The same is true of array destructuring:\n\n```js\nvar o = [1,2,3],\n\ta, b, c, p;\n\np = [ a, b, c ] = o;\n\nconsole.log( a, b, c );\t\t\t// 1 2 3\np === o;\t\t\t\t\t\t// true\n```\n\nBy carrying the object/array value through as the completion, you can chain destructuring assignment expressions together:\n\n```js\nvar o = { a:1, b:2, c:3 },\n\tp = [4,5,6],\n\ta, b, c, x, y, z;\n\n( {a} = {b,c} = o );\n[x,y] = [z] = p;\n\nconsole.log( a, b, c );\t\t\t// 1 2 3\nconsole.log( x, y, z );\t\t\t// 4 5 4\n```\n\n### Too Many, Too Few, Just Enough\n\nWith both array destructuring assignment and object destructuring assignment, you do not have to assign all the values that are present. For example:\n\n```js\nvar [,b] = foo();\nvar { x, z } = bar();\n\nconsole.log( b, x, z );\t\t\t\t// 2 4 6\n```\n\nThe `1` and `3` values that came back from `foo()` are discarded, as is the `5` value from `bar()`.\n\nSimilarly, if you try to assign more values than are present in the value you're destructuring/decomposing, you get graceful fallback to `undefined`, as you'd expect:\n\n```js\nvar [,,c,d] = foo();\nvar { w, z } = bar();\n\nconsole.log( c, z );\t\t\t\t// 3 6\nconsole.log( d, w );\t\t\t\t// undefined undefined\n```\n\nThis behavior follows symmetrically from the earlier stated \"`undefined` is missing\" principle.\n\nWe examined the `...` operator earlier in this chapter, and saw that it can sometimes be used to spread an array value out into its separate values, and sometimes it can be used to do the opposite: to gather a set of values together into an array.\n\nIn addition to the gather/rest usage in function declarations, `...` can perform the same behavior in destructuring assignments. To illustrate, let's recall a snippet from earlier in this chapter:\n\n```js\nvar a = [2,3,4];\nvar b = [ 1, ...a, 5 ];\n\nconsole.log( b );\t\t\t\t\t// [1,2,3,4,5]\n```\n\nHere we see that `...a` is spreading `a` out, because it appears in the array `[ .. ]` value position. 
If `...a` appears in an array destructuring position, it performs the gather behavior:\n\n```js\nvar a = [2,3,4];\nvar [ b, ...c ] = a;\n\nconsole.log( b, c );\t\t\t\t// 2 [3,4]\n```\n\nThe `var [ .. ] = a` destructuring assignment spreads `a` out to be assigned to the pattern described inside the `[ .. ]`. The first part names `b` for the first value in `a` (`2`). But then `...c` gathers the rest of the values (`3` and `4`) into an array and calls it `c`.\n\n**Note:** We've seen how `...` works with arrays, but what about with objects? It's not an ES6 feature, but see Chapter 8 for discussion of a possible \"beyond ES6\" feature where `...` works with spreading or gathering objects.\n\n### Default Value Assignment\n\nBoth forms of destructuring can offer a default value option for an assignment, using the `=` syntax similar to the default function argument values discussed earlier.\n\nConsider:\n\n```js\nvar [ a = 3, b = 6, c = 9, d = 12 ] = foo();\nvar { x = 5, y = 10, z = 15, w = 20 } = bar();\n\nconsole.log( a, b, c, d );\t\t\t// 1 2 3 12\nconsole.log( x, y, z, w );\t\t\t// 4 5 6 20\n```\n\nYou can combine the default value assignment with the alternative assignment expression syntax covered earlier. For example:\n\n```js\nvar { x, y, z, w: WW = 20 } = bar();\n\nconsole.log( x, y, z, WW );\t\t\t// 4 5 6 20\n```\n\nBe careful about confusing yourself (or other developers who read your code) if you use an object or array as the default value in a destructuring. You can create some really hard to understand code:\n\n```js\nvar x = 200, y = 300, z = 100;\nvar o1 = { x: { y: 42 }, z: { y: z } };\n\n( { y: x = { y: y } } = o1 );\n( { z: y = { y: z } } = o1 );\n( { x: z = { y: x } } = o1 );\n```\n\nCan you tell from that snippet what values `x`, `y`, and `z` have at the end? Takes a moment of pondering, I would imagine. 
I'll end the suspense:\n\n```js\nconsole.log( x.y, y.y, z.y );\t\t// 300 100 42\n```\n\nThe takeaway here: destructuring is great and can be very useful, but it's also a sharp sword that can cause injury (to someone's brain) if used unwisely.\n\n### Nested Destructuring\n\nIf the values you're destructuring have nested objects or arrays, you can destructure those nested values as well:\n\n```js\nvar a1 = [ 1, [2, 3, 4], 5 ];\nvar o1 = { x: { y: { z: 6 } } };\n\nvar [ a, [ b, c, d ], e ] = a1;\nvar { x: { y: { z: w } } } = o1;\n\nconsole.log( a, b, c, d, e );\t\t// 1 2 3 4 5\nconsole.log( w );\t\t\t\t\t// 6\n```\n\nNested destructuring can be a simple way to flatten out object namespaces. For example:\n\n```js\nvar App = {\n\tmodel: {\n\t\tUser: function(){ .. }\n\t}\n};\n\n// instead of:\n// var User = App.model.User;\n\nvar { model: { User } } = App;\n```\n\n### Destructuring Parameters\n\nIn the following snippet, can you spot the assignment?\n\n```js\nfunction foo(x) {\n\tconsole.log( x );\n}\n\nfoo( 42 );\n```\n\nThe assignment is kinda hidden: `42` (the argument) is assigned to `x` (the parameter) when `foo(42)` is executed. If parameter/argument pairing is an assignment, then it stands to reason that it's an assignment that could be destructured, right? Of course!\n\nConsider array destructuring for parameters:\n\n```js\nfunction foo( [ x, y ] ) {\n\tconsole.log( x, y );\n}\n\nfoo( [ 1, 2 ] );\t\t\t\t\t// 1 2\nfoo( [ 1 ] );\t\t\t\t\t\t// 1 undefined\nfoo( [] );\t\t\t\t\t\t\t// undefined undefined\n```\n\nObject destructuring for parameters works, too:\n\n```js\nfunction foo( { x, y } ) {\n\tconsole.log( x, y );\n}\n\nfoo( { y: 1, x: 2 } );\t\t\t\t// 2 1\nfoo( { y: 42 } );\t\t\t\t\t// undefined 42\nfoo( {} );\t\t\t\t\t\t\t// undefined undefined\n```\n\nThis technique is an approximation of named arguments (a long requested feature for JS!), in that the properties on the object map to the destructured parameters of the same names. 
That also means that we get optional parameters (in any position) for free, as you can see leaving off the `x` \"parameter\" worked as we'd expect.\n\nOf course, all the previously discussed variations of destructuring are available to us with parameter destructuring, including nested destructuring, default values, and more. Destructuring also mixes fine with other ES6 function parameter capabilities, like default parameter values and rest/gather parameters.\n\nConsider these quick illustrations (certainly not exhaustive of the possible variations):\n\n```js\nfunction f1([ x=2, y=3, z ]) { .. }\nfunction f2([ x, y, ...z], w) { .. }\nfunction f3([ x, y, ...z], ...w) { .. }\n\nfunction f4({ x: X, y }) { .. }\nfunction f5({ x: X = 10, y = 20 }) { .. }\nfunction f6({ x = 10 } = {}, { y } = { y: 10 }) { .. }\n```\n\nLet's take one example from this snippet and examine it, for illustration purposes:\n\n```js\nfunction f3([ x, y, ...z], ...w) {\n\tconsole.log( x, y, z, w );\n}\n\nf3( [] );\t\t\t\t\t\t\t// undefined undefined [] []\nf3( [1,2,3,4], 5, 6 );\t\t\t\t// 1 2 [3,4] [5,6]\n```\n\nThere are two `...` operators in use here, and they're both gathering values in arrays (`z` and `w`), though `...z` gathers from the rest of the values left over in the first array argument, while `...w` gathers from the rest of the main arguments left over after the first.\n\n#### Destructuring Defaults + Parameter Defaults\n\nThere's one subtle point you should be particularly careful to notice -- the difference in behavior between a destructuring default value and a function parameter default value. For example:\n\n```js\nfunction f6({ x = 10 } = {}, { y } = { y: 10 }) {\n\tconsole.log( x, y );\n}\n\nf6();\t\t\t\t\t\t\t\t// 10 10\n```\n\nAt first, it would seem that we've declared a default value of `10` for both the `x` and `y` parameters, but in two different ways. 
However, these two different approaches will behave differently in certain cases, and the difference is awfully subtle.\n\nConsider:\n\n```js\nf6( {}, {} );\t\t\t\t\t\t// 10 undefined\n```\n\nWait, why did that happen? It's pretty clear that named parameter `x` is defaulting to `10` if not passed as a property of that same name in the first argument's object.\n\nBut what about `y` being `undefined`? The `{ y: 10 }` value is an object as a function parameter default value, not a destructuring default value. As such, it only applies if the second argument is not passed at all, or is passed as `undefined`.\n\nIn the previous snippet, we *are* passing a second argument (`{}`), so the default `{ y: 10 }` value is not used, and the `{ y }` destructuring occurs against the passed in `{}` empty object value.\n\nNow, compare `{ y } = { y: 10 }` to `{ x = 10 } = {}`.\n\nFor the `x`'s form usage, if the first function argument is omitted or `undefined`, the `{}` empty object default applies. Then, whatever value is in the first argument position -- either the default `{}` or whatever you passed in -- is destructured with the `{ x = 10 }`, which checks to see if an `x` property is found, and if not found (or `undefined`), the `10` default value is applied to the `x` named parameter.\n\nDeep breath. Read back over those last few paragraphs a couple of times. Let's review via code:\n\n```js\nfunction f6({ x = 10 } = {}, { y } = { y: 10 }) {\n\tconsole.log( x, y );\n}\n\nf6();\t\t\t\t\t\t\t\t// 10 10\nf6( undefined, undefined );\t\t\t// 10 10\nf6( {}, undefined );\t\t\t\t// 10 10\n\nf6( {}, {} );\t\t\t\t\t\t// 10 undefined\nf6( undefined, {} );\t\t\t\t// 10 undefined\n\nf6( { x: 2 }, { y: 3 } );\t\t\t// 2 3\n```\n\nIt would generally seem that the defaulting behavior of the `x` parameter is probably the more desirable and sensible case compared to that of `y`. 
As such, it's important to understand why and how the `{ x = 10 } = {}` form is different from the `{ y } = { y: 10 }` form.

If that's still a bit fuzzy, go back and read it again, and play with this yourself. Your future self will thank you for taking the time to get this very subtle gotcha nuance detail straight.

#### Nested Defaults: Destructured and Restructured

Although it may at first be difficult to grasp, an interesting idiom emerges for setting defaults for a nested object's properties: using object destructuring along with what I'd call *restructuring*.

Consider a set of defaults in a nested object structure, like the following:

```js
// taken from: http://es-discourse.com/t/partial-default-arguments/120/7

var defaults = {
	options: {
		remove: true,
		enable: false,
		instance: {}
	},
	log: {
		warn: true,
		error: true
	}
};
```

Now, let's say that you have an object called `config`, which has some of these applied, but perhaps not all, and you'd like to set all the defaults into this object in the missing spots, but not override specific settings already present:

```js
var config = {
	options: {
		remove: false,
		instance: null
	}
};
```

You can of course do so manually, as you might have done in the past:

```js
config.options = config.options || {};
config.options.remove = (config.options.remove !== undefined) ?
	config.options.remove : defaults.options.remove;
config.options.enable = (config.options.enable !== undefined) ?
	config.options.enable : defaults.options.enable;
...
```

Yuck.

Others may prefer the assign-overwrite approach to this task. You might be tempted by the ES6 `Object.assign(..)` utility (see Chapter 6) to clone the properties first from `defaults` and then overwrite them with the cloned properties from `config`, like so:

```js
config = Object.assign( {}, defaults, config );
```

That looks way nicer, huh? But there's a major problem!
`Object.assign(..)` is shallow, which means when it copies `defaults.options`, it just copies that object reference, not deep cloning that object's properties to a `config.options` object. `Object.assign(..)` would need to be applied (sort of \"recursively\") at all levels of your object's tree to get the deep cloning you're expecting.\n\n**Note:** Many JS utility libraries/frameworks provide their own option for deep cloning of an object, but those approaches and their gotchas are beyond our scope to discuss here.\n\nSo let's examine if ES6 object destructuring with defaults can help at all:\n\n```js\nconfig.options = config.options || {};\nconfig.log = config.log || {};\n({\n\toptions: {\n\t\tremove: config.options.remove = defaults.options.remove,\n\t\tenable: config.options.enable = defaults.options.enable,\n\t\tinstance: config.options.instance = defaults.options.instance\n\t} = {},\n\tlog: {\n\t\twarn: config.log.warn = defaults.log.warn,\n\t\terror: config.log.error = defaults.log.error\n\t} = {}\n} = config);\n```\n\nNot as nice as the false promise of `Object.assign(..)` (being that it's shallow only), but it's better than the manual approach by a fair bit, I think. It is still unfortunately verbose and repetitive, though.\n\nThe previous snippet's approach works because I'm hacking the destructuring and defaults mechanism to do the property `=== undefined` checks and assignment decisions for me. It's a trick in that I'm destructuring `config` (see the `= config` at the end of the snippet), but I'm reassigning all the destructured values right back into `config`, with the `config.options.enable` assignment references.\n\nStill too much, though. Let's see if we can make anything better.\n\nThe following trick works best if you know that all the various properties you're destructuring are uniquely named. 
You can still do it even if that's not the case, but it's not as nice -- you'll have to do the destructuring in stages, or create unique local variables as temporary aliases.\n\nIf we fully destructure all the properties into top-level variables, we can then immediately restructure to reconstitute the original nested object structure.\n\nBut all those temporary variables hanging around would pollute scope. So, let's use block scoping (see \"Block-Scoped Declarations\" earlier in this chapter) with a general `{ }` enclosing block:\n\n```js\n// merge `defaults` into `config`\n{\n\t// destructure (with default value assignments)\n\tlet {\n\t\toptions: {\n\t\t\tremove = defaults.options.remove,\n\t\t\tenable = defaults.options.enable,\n\t\t\tinstance = defaults.options.instance\n\t\t} = {},\n\t\tlog: {\n\t\t\twarn = defaults.log.warn,\n\t\t\terror = defaults.log.error\n\t\t} = {}\n\t} = config;\n\n\t// restructure\n\tconfig = {\n\t\toptions: { remove, enable, instance },\n\t\tlog: { warn, error }\n\t};\n}\n```\n\nThat seems a fair bit nicer, huh?\n\n**Note:** You could also accomplish the scope enclosure with an arrow IIFE instead of the general `{ }` block and `let` declarations. Your destructuring assignments/defaults would be in the parameter list and your restructuring would be the `return` statement in the function body.\n\nThe `{ warn, error }` syntax in the restructuring part may look new to you; that's called \"concise properties\" and we cover it in the next section!\n\n## Object Literal Extensions\n\nES6 adds a number of important convenience extensions to the humble `{ .. }` object literal.\n\n### Concise Properties\n\nYou're certainly familiar with declaring object literals in this form:\n\n```js\nvar x = 2, y = 3,\n\to = {\n\t\tx: x,\n\t\ty: y\n\t};\n```\n\nIf it's always felt redundant to say `x: x` all over, there's good news. If you need to define a property that is the same name as a lexical identifier, you can shorten it from `x: x` to `x`. 
Consider:\n\n```js\nvar x = 2, y = 3,\n\to = {\n\t\tx,\n\t\ty\n\t};\n```\n\n### Concise Methods\n\nIn a similar spirit to concise properties we just examined, functions attached to properties in object literals also have a concise form, for convenience.\n\nThe old way:\n\n```js\nvar o = {\n\tx: function(){\n\t\t// ..\n\t},\n\ty: function(){\n\t\t// ..\n\t}\n}\n```\n\nAnd as of ES6:\n\n```js\nvar o = {\n\tx() {\n\t\t// ..\n\t},\n\ty() {\n\t\t// ..\n\t}\n}\n```\n\n**Warning:** While `x() { .. }` seems to just be shorthand for `x: function(){ .. }`, concise methods have special behaviors that their older counterparts don't; specifically, the allowance for `super` (see \"Object `super`\" later in this chapter).\n\nGenerators (see Chapter 4) also have a concise method form:\n\n```js\nvar o = {\n\t*foo() { .. }\n};\n```\n\n#### Concisely Unnamed\n\nWhile that convenience shorthand is quite attractive, there's a subtle gotcha to be aware of. To illustrate, let's examine pre-ES6 code like the following, which you might try to refactor to use concise methods:\n\n```js\nfunction runSomething(o) {\n\tvar x = Math.random(),\n\t\ty = Math.random();\n\n\treturn o.something( x, y );\n}\n\nrunSomething( {\n\tsomething: function something(x,y) {\n\t\tif (x > y) {\n\t\t\t// recursively call with `x`\n\t\t\t// and `y` swapped\n\t\t\treturn something( y, x );\n\t\t}\n\n\t\treturn y - x;\n\t}\n} );\n```\n\nThis obviously silly code just generates two random numbers and subtracts the smaller from the bigger. But what's important here isn't what it does, but rather how it's defined. Let's focus on the object literal and function definition, as we see here:\n\n```js\nrunSomething( {\n\tsomething: function something(x,y) {\n\t\t// ..\n\t}\n} );\n```\n\nWhy do we say both `something:` and `function something`? Isn't that redundant? Actually, no, both are needed for different purposes. The property `something` is how we can call `o.something(..)`, sort of like its public name. 
But the second `something` is a lexical name to refer to the function from inside itself, for recursion purposes.\n\nCan you see why the line `return something(y,x)` needs the name `something` to refer to the function? There's no lexical name for the object, such that it could have said `return o.something(y,x)` or something of that sort.\n\nThat's actually a pretty common practice when the object literal does have an identifying name, such as:\n\n```js\nvar controller = {\n\tmakeRequest: function(..){\n\t\t// ..\n\t\tcontroller.makeRequest(..);\n\t}\n};\n```\n\nIs this a good idea? Perhaps, perhaps not. You're assuming that the name `controller` will always point to the object in question. But it very well may not -- the `makeRequest(..)` function doesn't control the outer code and so can't force that to be the case. This could come back to bite you.\n\nOthers prefer to use `this` to define such things:\n\n```js\nvar controller = {\n\tmakeRequest: function(..){\n\t\t// ..\n\t\tthis.makeRequest(..);\n\t}\n};\n```\n\nThat looks fine, and should work if you always invoke the method as `controller.makeRequest(..)`. But you now have a `this` binding gotcha if you do something like:\n\n```js\nbtn.addEventListener( \"click\", controller.makeRequest, false );\n```\n\nOf course, you can solve that by passing `controller.makeRequest.bind(controller)` as the handler reference to bind the event to. But yuck -- it isn't very appealing.\n\nOr what if your inner `this.makeRequest(..)` call needs to be made from a nested function? 
You'll have another `this` binding hazard, which people will often solve with the hacky `var self = this`, such as:\n\n```js\nvar controller = {\n\tmakeRequest: function(..){\n\t\tvar self = this;\n\n\t\tbtn.addEventListener( \"click\", function(){\n\t\t\t// ..\n\t\t\tself.makeRequest(..);\n\t\t}, false );\n\t}\n};\n```\n\nMore yuck.\n\n**Note:** For more information on `this` binding rules and gotchas, see Chapters 1-2 of the *this & Object Prototypes* title of this series.\n\nOK, what does all this have to do with concise methods? Recall our `something(..)` method definition:\n\n```js\nrunSomething( {\n\tsomething: function something(x,y) {\n\t\t// ..\n\t}\n} );\n```\n\nThe second `something` here provides a super convenient lexical identifier that will always point to the function itself, giving us the perfect reference for recursion, event binding/unbinding, and so on -- no messing around with `this` or trying to use an untrustable object reference.\n\nGreat!\n\nSo, now we try to refactor that function reference to this ES6 concise method form:\n\n```js\nrunSomething( {\n\tsomething(x,y) {\n\t\tif (x > y) {\n\t\t\treturn something( y, x );\n\t\t}\n\n\t\treturn y - x;\n\t}\n} );\n```\n\nSeems fine at first glance, except this code will break. The `return something(..)` call will not find a `something` identifier, so you'll get a `ReferenceError`. Oops. But why?\n\nThe above ES6 snippet is interpreted as meaning:\n\n```js\nrunSomething( {\n\tsomething: function(x,y){\n\t\tif (x > y) {\n\t\t\treturn something( y, x );\n\t\t}\n\n\t\treturn y - x;\n\t}\n} );\n```\n\nLook closely. Do you see the problem? The concise method definition implies `something: function(x,y)`. See how the second `something` we were relying on has been omitted? 
In other words, concise methods imply anonymous function expressions.

Yeah, yuck.

**Note:** You may be tempted to think that `=>` arrow functions are a good solution here, but they're equally insufficient, as they're also anonymous function expressions. We'll cover them in "Arrow Functions" later in this chapter.

The partially redeeming news is that our `something(x,y)` concise method won't be totally anonymous. See "Function Names" in Chapter 7 for information about ES6 function name inference rules. That won't help us for our recursion, but it helps with debugging at least.

So what are we left to conclude about concise methods? They're short and sweet, and a nice convenience. But you should only use them if you're never going to need them to do recursion or event binding/unbinding. Otherwise, stick to your old-school `something: function something(..)` method definitions.

A lot of your methods are probably going to benefit from concise method definitions, so that's great news! Just be careful of the few where there's an un-naming hazard.

#### ES5 Getter/Setter

Technically, ES5 defined getter/setter literal forms, but they didn't seem to get used much, mostly due to the lack of transpilers to handle that new syntax (the only major new syntax added in ES5, really). So while it's not a new ES6 feature, we'll briefly refresh on that form, as it's probably going to be much more useful with ES6 going forward.

Consider:

```js
var o = {
	__id: 10,
	get id() { return this.__id++; },
	set id(v) { this.__id = v; }
}

o.id;			// 10
o.id;			// 11
o.id = 20;
o.id;			// 20

// and:
o.__id;			// 21
o.__id;			// 21 -- still!
```

These getter and setter literal forms are also present in classes; see Chapter 3.

**Warning:** It may not be obvious, but the setter literal must have exactly one declared parameter; omitting it or listing others is illegal syntax.
The single required parameter *can* use destructuring and defaults (e.g., `set id({ id: v = 0 }) { .. }`), but the gather/rest `...` is not allowed (`set id(...v) { .. }`).\n\n### Computed Property Names\n\nYou've probably been in a situation like the following snippet, where you have one or more property names that come from some sort of expression and thus can't be put into the object literal:\n\n```js\nvar prefix = \"user_\";\n\nvar o = {\n\tbaz: function(..){ .. }\n};\n\no[ prefix + \"foo\" ] = function(..){ .. };\no[ prefix + \"bar\" ] = function(..){ .. };\n..\n```\n\nES6 adds a syntax to the object literal definition which allows you to specify an expression that should be computed, whose result is the property name assigned. Consider:\n\n```js\nvar prefix = \"user_\";\n\nvar o = {\n\tbaz: function(..){ .. },\n\t[ prefix + \"foo\" ]: function(..){ .. },\n\t[ prefix + \"bar\" ]: function(..){ .. }\n\t..\n};\n```\n\nAny valid expression can appear inside the `[ .. ]` that sits in the property name position of the object literal definition.\n\nProbably the most common use of computed property names will be with `Symbol`s (which we cover in \"Symbols\" later in this chapter), such as:\n\n```js\nvar o = {\n\t[Symbol.toStringTag]: \"really cool thing\",\n\t..\n};\n```\n\n`Symbol.toStringTag` is a special built-in value, which we evaluate with the `[ .. ]` syntax, so we can assign the `\"really cool thing\"` value to the special property name.\n\nComputed property names can also appear as the name of a concise method or a concise generator:\n\n```js\nvar o = {\n\t[\"f\" + \"oo\"]() { .. }\t// computed concise method\n\t*[\"b\" + \"ar\"]() { .. 
}\t// computed concise generator\n};\n```\n\n### Setting `[[Prototype]]`\n\nWe won't cover prototypes in detail here, so for more information, see the *this & Object Prototypes* title of this series.\n\nSometimes it will be helpful to assign the `[[Prototype]]` of an object at the same time you're declaring its object literal. The following has been a nonstandard extension in many JS engines for a while, but is standardized as of ES6:\n\n```js\nvar o1 = {\n\t// ..\n};\n\nvar o2 = {\n\t__proto__: o1,\n\t// ..\n};\n```\n\n`o2` is declared with a normal object literal, but it's also `[[Prototype]]`-linked to `o1`. The `__proto__` property name here can also be a string `\"__proto__\"`, but note that it *cannot* be the result of a computed property name (see the previous section).\n\n`__proto__` is controversial, to say the least. It's a decades-old proprietary extension to JS that is finally standardized, somewhat begrudgingly it seems, in ES6. Many developers feel it shouldn't ever be used. In fact, it's in \"Annex B\" of ES6, which is the section that lists things JS feels it has to standardize for compatibility reasons only.\n\n**Warning:** Though I'm narrowly endorsing `__proto__` as a key in an object literal definition, I definitely do not endorse using it in its object property form, like `o.__proto__`. That form is both a getter and setter (again for compatibility reasons), but there are definitely better options. See the *this & Object Prototypes* title of this series for more information.\n\nFor setting the `[[Prototype]]` of an existing object, you can use the ES6 utility `Object.setPrototypeOf(..)`. Consider:\n\n```js\nvar o1 = {\n\t// ..\n};\n\nvar o2 = {\n\t// ..\n};\n\nObject.setPrototypeOf( o2, o1 );\n```\n\n**Note:** We'll discuss `Object` again in Chapter 6. \"`Object.setPrototypeOf(..)` Static Function\" provides additional details on `Object.setPrototypeOf(..)`. 
Also see \"`Object.assign(..)` Static Function\" for another form that relates `o2` prototypically to `o1`.\n\n### Object `super`\n\n`super` is typically thought of as being only related to classes. However, due to JS's classless-objects-with-prototypes nature, `super` is equally effective, and nearly the same in behavior, with plain objects' concise methods.\n\nConsider:\n\n```js\nvar o1 = {\n\tfoo() {\n\t\tconsole.log( \"o1:foo\" );\n\t}\n};\n\nvar o2 = {\n\tfoo() {\n\t\tsuper.foo();\n\t\tconsole.log( \"o2:foo\" );\n\t}\n};\n\nObject.setPrototypeOf( o2, o1 );\n\no2.foo();\t\t// o1:foo\n\t\t\t\t// o2:foo\n```\n\n**Warning:** `super` is only allowed in concise methods, not regular function expression properties. It also is only allowed in `super.XXX` form (for property/method access), not in `super()` form.\n\nThe `super` reference in the `o2.foo()` method is locked statically to `o2`, and specifically to the `[[Prototype]]` of `o2`. `super` here would basically be `Object.getPrototypeOf(o2)` -- resolves to `o1` of course -- which is how it finds and calls `o1.foo()`.\n\nFor complete details on `super`, see \"Classes\" in Chapter 3.\n\n## Template Literals\n\nAt the very outset of this section, I'm going to have to call out the name of this ES6 feature as being awfully... misleading, depending on your experiences with what the word *template* means.\n\nMany developers think of templates as being reusable renderable pieces of text, such as the capability provided by most template engines (Mustache, Handlebars, etc.). ES6's use of the word *template* would imply something similar, like a way to declare inline template literals that can be re-rendered. 
However, that's not at all the right way to think about this feature.

So, before we go on, I'm renaming this feature to what it should have been called: *interpolated string literals* (or *interpoliterals* for short).

You're already well aware of declaring string literals with `"` or `'` delimiters, and you also know that these are not *smart strings* (as some languages have), where the contents would be parsed for interpolation expressions.

However, ES6 introduces a new type of string literal, using the `` ` `` backtick as the delimiter. These string literals allow basic string interpolation expressions to be embedded, which are then automatically parsed and evaluated.

Here's the old pre-ES6 way:

```js
var name = "Kyle";

var greeting = "Hello " + name + "!";

console.log( greeting );			// "Hello Kyle!"
console.log( typeof greeting );		// "string"
```

Now, consider the new ES6 way:

```js
var name = "Kyle";

var greeting = `Hello ${name}!`;

console.log( greeting );			// "Hello Kyle!"
console.log( typeof greeting );		// "string"
```

As you can see, we used the `` `..` `` around a series of characters, which are interpreted as a string literal, but any expressions of the form `${..}` are parsed and evaluated inline immediately. The fancy term for such parsing and evaluating is *interpolation* (much more accurate than templating).

The result of the interpolated string literal expression is just a plain old normal string, assigned to the `greeting` variable.

**Warning:** `typeof greeting == "string"` illustrates why it's important not to think of these entities as special template values, as you cannot assign the unevaluated form of the literal to something and reuse it. The `` `..` `` string literal is more like an IIFE in the sense that it's automatically evaluated inline.
The result of a `` `..` `` string literal is, simply, just a string.\n\nOne really nice benefit of interpolated string literals is they are allowed to split across multiple lines:\n\n```js\nvar text =\n`Now is the time for all good men\nto come to the aid of their\ncountry!`;\n\nconsole.log( text );\n// Now is the time for all good men\n// to come to the aid of their\n// country!\n```\n\nThe line breaks (newlines) in the interpolated string literal were preserved in the string value.\n\nUnless appearing as explicit escape sequences in the literal value, the value of the `\\r` carriage return character (code point `U+000D`) or the value of the `\\r\\n` carriage return + line feed sequence (code points `U+000D` and `U+000A`) are both normalized to a `\\n` line feed character (code point `U+000A`). Don't worry though; this normalization is rare and would likely only happen if copy-pasting text into your JS file.\n\n### Interpolated Expressions\n\nAny valid expression is allowed to appear inside `${..}` in an interpolated string literal, including function calls, inline function expression calls, and even other interpolated string literals!\n\nConsider:\n\n```js\nfunction upper(s) {\n\treturn s.toUpperCase();\n}\n\nvar who = \"reader\";\n\nvar text =\n`A very ${upper( \"warm\" )} welcome\nto all of you ${upper( `${who}s` )}!`;\n\nconsole.log( text );\n// A very WARM welcome\n// to all of you READERS!\n```\n\nHere, the inner `` `${who}s` `` interpolated string literal was a little bit nicer convenience for us when combining the `who` variable with the `\"s\"` string, as opposed to `who + \"s\"`. 
There will be cases that nesting interpolated string literals is helpful, but be wary if you find yourself doing that kind of thing often, or if you find yourself nesting several levels deep.

If that's the case, the odds are good that your string value production could benefit from some abstractions.

**Warning:** As a word of caution, be very careful about the readability of your code with such newfound power. Just like with default value expressions and destructuring assignment expressions, just because you *can* do something doesn't mean you *should* do it. Never go so overboard with new ES6 tricks that your code becomes more clever than you or your other team members.

#### Expression Scope

One quick note about the scope that is used to resolve variables in expressions. I mentioned earlier that an interpolated string literal is kind of like an IIFE, and it turns out thinking about it like that explains the scoping behavior as well.

Consider:

```js
function foo(str) {
	var name = "foo";
	console.log( str );
}

function bar() {
	var name = "bar";
	foo( `Hello from ${name}!` );
}

var name = "global";

bar();					// "Hello from bar!"
```

At the moment the `` `..` `` string literal is expressed, inside the `bar()` function, the scope available to it finds `bar()`'s `name` variable with value `"bar"`. Neither the global `name` nor `foo(..)`'s `name` matter. In other words, an interpolated string literal is just lexically scoped where it appears, not dynamically scoped in any way.

### Tagged Template Literals

Again, renaming the feature for sanity's sake: *tagged string literals*.

To be honest, this is one of the cooler tricks that ES6 offers. It may seem a little strange, and perhaps not all that generally practical at first.
But once you've spent some time with it, tagged string literals may just surprise you in their usefulness.\n\nFor example:\n\n```js\nfunction foo(strings, ...values) {\n\tconsole.log( strings );\n\tconsole.log( values );\n}\n\nvar desc = \"awesome\";\n\nfoo`Everything is ${desc}!`;\n// [ \"Everything is \", \"!\"]\n// [ \"awesome\" ]\n```\n\nLet's take a moment to consider what's happening in the previous snippet. First, the most jarring thing that jumps out is ``foo`Everything...`;``. That doesn't look like anything we've seen before. What is it?\n\nIt's essentially a special kind of function call that doesn't need the `( .. )`. The *tag* -- the `foo` part before the `` `..` `` string literal -- is a function value that should be called. Actually, it can be any expression that results in a function, even a function call that returns another function, like:\n\n```js\nfunction bar() {\n\treturn function foo(strings, ...values) {\n\t\tconsole.log( strings );\n\t\tconsole.log( values );\n\t}\n}\n\nvar desc = \"awesome\";\n\nbar()`Everything is ${desc}!`;\n// [ \"Everything is \", \"!\"]\n// [ \"awesome\" ]\n```\n\nBut what gets passed to the `foo(..)` function when invoked as a tag for a string literal?\n\nThe first argument -- we called it `strings` -- is an array of all the plain strings (the stuff between any interpolated expressions). We get two values in the `strings` array: `\"Everything is \"` and `\"!\"`.\n\nFor convenience sake in our example, we then gather up all subsequent arguments into an array called `values` using the `...` gather/rest operator (see the \"Spread/Rest\" section earlier in this chapter), though you could of course have left them as individual named parameters following the `strings` parameter.\n\nThe argument(s) gathered into our `values` array are the results of the already-evaluated interpolation expressions found in the string literal. 
So obviously the only element in `values` in our example is `\"awesome\"`.\n\nYou can think of these two arrays as: the values in `values` are the separators if you were to splice them in between the values in `strings`, and then if you joined everything together, you'd get the complete interpolated string value.\n\nA tagged string literal is like a processing step after the interpolation expressions are evaluated but before the final string value is compiled, allowing you more control over generating the string from the literal.\n\nTypically, the string literal tag function (`foo(..)` in the previous snippets) should compute an appropriate string value and return it, so that you can use the tagged string literal as a value just like untagged string literals:\n\n```js\nfunction tag(strings, ...values) {\n\treturn strings.reduce( function(s,v,idx){\n\t\treturn s + (idx > 0 ? values[idx-1] : \"\") + v;\n\t}, \"\" );\n}\n\nvar desc = \"awesome\";\n\nvar text = tag`Everything is ${desc}!`;\n\nconsole.log( text );\t\t\t// Everything is awesome!\n```\n\nIn this snippet, `tag(..)` is a pass-through operation, in that it doesn't perform any special modifications, but just uses `reduce(..)` to loop over and splice/interleave `strings` and `values` together the same way an untagged string literal would have done.\n\nSo what are some practical uses? There are many advanced ones that are beyond our scope to discuss here. But here's a simple idea that formats numbers as U.S. 
dollars (sort of like basic localization):\n\n```js\nfunction dollabillsyall(strings, ...values) {\n\treturn strings.reduce( function(s,v,idx){\n\t\tif (idx > 0) {\n\t\t\tif (typeof values[idx-1] == \"number\") {\n\t\t\t\t// look, also using interpolated\n\t\t\t\t// string literals!\n\t\t\t\ts += `$${values[idx-1].toFixed( 2 )}`;\n\t\t\t}\n\t\t\telse {\n\t\t\t\ts += values[idx-1];\n\t\t\t}\n\t\t}\n\n\t\treturn s + v;\n\t}, \"\" );\n}\n\nvar amt1 = 11.99,\n\tamt2 = amt1 * 1.08,\n\tname = \"Kyle\";\n\nvar text = dollabillsyall\n`Thanks for your purchase, ${name}! Your\nproduct cost was ${amt1}, which with tax\ncomes out to ${amt2}.`\n\nconsole.log( text );\n// Thanks for your purchase, Kyle! Your\n// product cost was $11.99, which with tax\n// comes out to $12.95.\n```\n\nIf a `number` value is encountered in the `values` array, we put `\"$\"` in front of it and format it to two decimal places with `toFixed(2)`. Otherwise, we let the value pass-through untouched.\n\n#### Raw Strings\n\nIn the previous snippets, our tag functions receive the first argument we called `strings`, which is an array. But there's an additional bit of data included: the raw unprocessed versions of all the strings. You can access those raw string values using the `.raw` property, like this:\n\n```js\nfunction showraw(strings, ...values) {\n\tconsole.log( strings );\n\tconsole.log( strings.raw );\n}\n\nshowraw`Hello\\nWorld`;\n// [ \"Hello\n// World\" ]\n// [ \"Hello\\nWorld\" ]\n```\n\nThe raw version of the value preserves the raw escaped `\\n` sequence (the `\\` and the `n` are separate characters), while the processed version considers it a single newline character. However, the earlier mentioned line-ending normalization is applied to both values.\n\nES6 comes with a built-in function that can be used as a string literal tag: `String.raw(..)`. 
It simply passes through the raw versions of the `strings` values:

```js
console.log( `Hello\nWorld` );
// Hello
// World

console.log( String.raw`Hello\nWorld` );
// Hello\nWorld

String.raw`Hello\nWorld`.length;
// 12
```

Other uses for string literal tags include special processing for internationalization, localization, and more!

## Arrow Functions

We've touched on `this` binding complications with functions earlier in this chapter, and they're covered at length in the *this & Object Prototypes* title of this series. It's important to understand the frustrations that `this`-based programming with normal functions brings, because that is the primary motivation for the new ES6 `=>` arrow function feature.

Let's first illustrate what an arrow function looks like, as compared to normal functions:

```js
function foo(x,y) {
	return x + y;
}

// versus

var foo = (x,y) => x + y;
```

The arrow function definition consists of a parameter list (of zero or more parameters, and surrounding `( .. )` if there's not exactly one parameter), followed by the `=>` marker, followed by a function body.

So, in the previous snippet, the arrow function is just the `(x,y) => x + y` part, and that function reference happens to be assigned to the variable `foo`.

The body only needs to be enclosed by `{ .. }` if there's more than one expression, or if the body consists of a non-expression statement. If there's only one expression, and you omit the surrounding `{ .. }`, there's an implied `return` in front of the expression, as illustrated in the previous snippet.

Here are some other arrow function variations to consider:

```js
var f1 = () => 12;
var f2 = x => x * 2;
var f3 = (x,y) => {
	var z = x * 2 + y;
	y++;
	x *= 3;
	return (x + y + z) / 2;
};
```

Arrow functions are *always* function expressions; there is no arrow function declaration.
It also should be clear that they are anonymous function expressions -- they have no named reference for the purposes of recursion or event binding/unbinding -- though \"Function Names\" in Chapter 7 will describe ES6's function name inference rules for debugging purposes.\n\n**Note:** All the capabilities of normal function parameters are available to arrow functions, including default values, destructuring, rest parameters, and so on.\n\nArrow functions have a nice, shorter syntax, which makes them on the surface very attractive for writing terser code. Indeed, nearly all literature on ES6 (other than the titles in this series) seems to immediately and exclusively adopt the arrow function as \"the new function.\"\n\nIt is telling that nearly all examples in discussion of arrow functions are short single statement utilities, such as those passed as callbacks to various utilities. For example:\n\n```js\nvar a = [1,2,3,4,5];\n\na = a.map( v => v * 2 );\n\nconsole.log( a );\t\t\t\t// [2,4,6,8,10]\n```\n\nIn those cases, where you have such inline function expressions, and they fit the pattern of computing a quick calculation in a single statement and returning that result, arrow functions indeed look to be an attractive and lightweight alternative to the more verbose `function` keyword and syntax.\n\nMost people tend to *ooh and aah* at nice terse examples like that, as I imagine you just did!\n\nHowever, I would caution you that it would seem to me somewhat a misapplication of this feature to use arrow function syntax with otherwise normal, multistatement functions, especially those that would otherwise be naturally expressed as function declarations.\n\nRecall the `dollabillsyall(..)` string literal tag function from earlier in this chapter -- let's change it to use `=>` syntax:\n\n```js\nvar dollabillsyall = (strings, ...values) =>\n\tstrings.reduce( (s,v,idx) => {\n\t\tif (idx > 0) {\n\t\t\tif (typeof values[idx-1] == \"number\") {\n\t\t\t\t// look, also using 
interpolated\n\t\t\t\t// string literals!\n\t\t\t\ts += `$${values[idx-1].toFixed( 2 )}`;\n\t\t\t}\n\t\t\telse {\n\t\t\t\ts += values[idx-1];\n\t\t\t}\n\t\t}\n\n\t\treturn s + v;\n\t}, \"\" );\n```\n\nIn this example,  the only modifications I made were the removal of `function`, `return`, and some `{ .. }`, and then the insertion of `=>` and a `var`. Is this a significant improvement in the readability of the code? Meh.\n\nI'd actually argue that the lack of `return` and outer `{ .. }` partially obscures the fact that the `reduce(..)` call is the only statement in the `dollabillsyall(..)` function and that its result is the intended result of the call. Also, the trained eye that is so used to hunting for the word `function` in code to find scope boundaries now needs to look for the `=>` marker, which can definitely be harder to find in the thick of the code.\n\nWhile not a hard-and-fast rule, I'd say that the readability gains from `=>` arrow function conversion are inversely proportional to the length of the function being converted. The longer the function, the less `=>` helps; the shorter the function, the more `=>` can shine.\n\nI think it's probably more sensible and reasonable to adopt `=>` for the places in code where you do need short inline function expressions, but leave your normal-length main functions as is.\n\n### Not Just Shorter Syntax, But `this`\n\nMost of the popular attention toward `=>` has been on saving those precious keystrokes by dropping `function`, `return`, and `{ .. }` from your code.\n\nBut there's a big detail we've skipped over so far. I said at the beginning of the section that `=>` functions are closely related to `this` binding behavior. 
In fact, `=>` arrow functions are *primarily designed* to alter `this` behavior in a specific way, solving a particular and common pain point with `this`-aware coding.\n\nThe saving of keystrokes is a red herring, a misleading sideshow at best.\n\nLet's revisit another example from earlier in this chapter:\n\n```js\nvar controller = {\n\tmakeRequest: function(..){\n\t\tvar self = this;\n\n\t\tbtn.addEventListener( \"click\", function(){\n\t\t\t// ..\n\t\t\tself.makeRequest(..);\n\t\t}, false );\n\t}\n};\n```\n\nWe used the `var self = this` hack, and then referenced `self.makeRequest(..)`, because inside the callback function we're passing to `addEventListener(..)`, the `this` binding will not be the same as it is in `makeRequest(..)` itself. In other words, because `this` bindings are dynamic, we fall back to the predictability of lexical scope via the `self` variable.\n\nHerein we finally can see the primary design characteristic of `=>` arrow functions. Inside arrow functions, the `this` binding is not dynamic, but is instead lexical. In the previous snippet, if we used an arrow function for the callback, `this` will be predictably what we wanted it to be.\n\nConsider:\n\n```js\nvar controller = {\n\tmakeRequest: function(..){\n\t\tbtn.addEventListener( \"click\", () => {\n\t\t\t// ..\n\t\t\tthis.makeRequest(..);\n\t\t}, false );\n\t}\n};\n```\n\nLexical `this` in the arrow function callback in the previous snippet now points to the same value as in the enclosing `makeRequest(..)` function. In other words, `=>` is a syntactic stand-in for `var self = this`.\n\nIn cases where `var self = this` (or, alternatively, a function `.bind(this)` call) would normally be helpful, `=>` arrow functions are a nicer alternative operating on the same principle. 
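To see the two approaches side by side without any DOM involved, here's a minimal sketch; the `invoke(..)` helper is hypothetical, standing in for something like `addEventListener(..)`:

```js
// hypothetical helper standing in for `addEventListener(..)`
function invoke(cb) {
	cb();
}

var controller = {
	count: 0,
	viaBind: function(){
		invoke( function(){
			this.count++;			// works only because of `.bind(..)` below
		}.bind( this ) );
	},
	viaArrow: function(){
		invoke( () => {
			this.count++;			// lexical `this` -- no binding needed
		} );
	}
};

controller.viaBind();
controller.viaArrow();

controller.count;					// 2
```

Both callbacks end up seeing the same `this` as their enclosing method; the arrow form just gets there without the extra `.bind(this)` call.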
Sounds great, right?\n\nNot quite so simple.\n\nIf `=>` replaces `var self = this` or `.bind(this)` and it helps, guess what happens if you use `=>` with a `this`-aware function that *doesn't* need `var self = this` to work? You might be able to guess that it's going to mess things up. Yeah.\n\nConsider:\n\n```js\nvar controller = {\n\tmakeRequest: (..) => {\n\t\t// ..\n\t\tthis.helper(..);\n\t},\n\thelper: (..) => {\n\t\t// ..\n\t}\n};\n\ncontroller.makeRequest(..);\n```\n\nAlthough we invoke as `controller.makeRequest(..)`, the `this.helper` reference fails, because `this` here doesn't point to `controller` as it normally would. Where does it point? It lexically inherits `this` from the surrounding scope. In this previous snippet, that's the global scope, where `this` points to the global object. Ugh.\n\nIn addition to lexical `this`, arrow functions also have lexical `arguments` -- they don't have their own `arguments` array but instead inherit from their parent -- as well as lexical `super` and `new.target` (see \"Classes\" in Chapter 3).\n\nSo now we can conclude a more nuanced set of rules for when `=>` is appropriate and not:\n\n* If you have a short, single-statement inline function expression, where the only statement is a `return` of some computed value, *and* that function doesn't already make a `this` reference inside it, *and* there's no self-reference (recursion, event binding/unbinding), *and* you don't reasonably expect the function to ever be that way, you can probably safely refactor it to be an `=>` arrow function.\n* If you have an inner function expression that's relying on a `var self = this` hack or a `.bind(this)` call on it in the enclosing function to ensure proper `this` binding, that inner function expression can probably safely become an `=>` arrow function.\n* If you have an inner function expression that's relying on something like `var args = Array.prototype.slice.call(arguments)` in the enclosing function to make a lexical copy of 
`arguments`, that inner function expression can probably safely become an `=>` arrow function.\n* For everything else -- normal function declarations, longer multistatement function expressions, functions that need a lexical name identifier self-reference (recursion, etc.), and any other function that doesn't fit the previous characteristics -- you should probably avoid `=>` function syntax.\n\nBottom line: `=>` is about lexical binding of `this`, `arguments`, and `super`. These are intentional features designed to fix some common problems, not bugs, quirks, or mistakes in ES6.\n\nDon't believe any hype that `=>` is primarily, or even mostly, about fewer keystrokes. Whether you save keystrokes or waste them, you should know exactly what you are intentionally doing with every character typed.\n\n**Tip:** If you have a function that for any of these articulated reasons is not a good match for an `=>` arrow function, but it's being declared as part of an object literal, recall from \"Concise Methods\" earlier in this chapter that there's another option for shorter function syntax.\n\nIf you prefer a visual decision chart for how/why to pick an arrow function:\n\n<img src=\"fig1.png\">\n\n## `for..of` Loops\n\nJoining the `for` and `for..in` loops from the JavaScript we're all familiar with, ES6 adds a `for..of` loop, which loops over the set of values produced by an *iterator*.\n\nThe value you loop over with `for..of` must be an *iterable*, or it must be a value which can be coerced/boxed to an object (see the *Types & Grammar* title of this series) that is an iterable. 
An iterable is simply an object that is able to produce an iterator, which the loop then uses.\n\nLet's compare `for..of` to `for..in` to illustrate the difference:\n\n```js\nvar a = [\"a\",\"b\",\"c\",\"d\",\"e\"];\n\nfor (var idx in a) {\n\tconsole.log( idx );\n}\n// 0 1 2 3 4\n\nfor (var val of a) {\n\tconsole.log( val );\n}\n// \"a\" \"b\" \"c\" \"d\" \"e\"\n```\n\nAs you can see, `for..in` loops over the keys/indexes in the `a` array, while `for..of` loops over the values in `a`.\n\nHere's the pre-ES6 version of the `for..of` from that previous snippet:\n\n```js\nvar a = [\"a\",\"b\",\"c\",\"d\",\"e\"],\n\tk = Object.keys( a );\n\nfor (var val, i = 0; i < k.length; i++) {\n\tval = a[ k[i] ];\n\tconsole.log( val );\n}\n// \"a\" \"b\" \"c\" \"d\" \"e\"\n```\n\nAnd here's the ES6 but non-`for..of` equivalent, which also gives a glimpse at manually iterating an iterator (see \"Iterators\" in Chapter 3):\n\n```js\nvar a = [\"a\",\"b\",\"c\",\"d\",\"e\"];\n\nfor (var val, ret, it = a[Symbol.iterator]();\n\t(ret = it.next()) && !ret.done;\n) {\n\tval = ret.value;\n\tconsole.log( val );\n}\n// \"a\" \"b\" \"c\" \"d\" \"e\"\n```\n\nUnder the covers, the `for..of` loop asks the iterable for an iterator (using the built-in `Symbol.iterator`; see \"Well-Known Symbols\" in Chapter 7), then it repeatedly calls the iterator and assigns its produced value to the loop iteration variable.\n\nStandard built-in values in JavaScript that are by default iterables (or provide them) include:\n\n* Arrays\n* Strings\n* Generators (see Chapter 3)\n* Collections / TypedArrays (see Chapter 5)\n\n**Warning:** Plain objects are not by default suitable for `for..of` looping. That's because they don't have a default iterator, which is intentional, not a mistake. However, we won't go any further into those nuanced reasonings here. 
In \"Iterators\" in Chapter 3, we'll see how to define iterators for our own objects, which lets `for..of` loop over any object to get a set of values we define.\n\nHere's how to loop over the characters in a primitive string:\n\n```js\nfor (var c of \"hello\") {\n\tconsole.log( c );\n}\n// \"h\" \"e\" \"l\" \"l\" \"o\"\n```\n\nThe `\"hello\"` primitive string value is coerced/boxed to the `String` object wrapper equivalent, which is an iterable by default.\n\nIn `for (XYZ of ABC)..`, the `XYZ` clause can either be an assignment expression or a declaration, identical to that same clause in `for` and `for..in` loops. So you can do stuff like this:\n\n```js\nvar o = {};\n\nfor (o.a of [1,2,3]) {\n\tconsole.log( o.a );\n}\n// 1 2 3\n\nfor ({x: o.a} of [ {x: 1}, {x: 2}, {x: 3} ]) {\n  console.log( o.a );\n}\n// 1 2 3\n```\n\n`for..of` loops can be prematurely stopped, just like other loops, with `break`, `continue`, `return` (if in a function), and thrown exceptions. In any of these cases, the iterator's `return(..)` function is automatically called (if one exists) to let the iterator perform cleanup tasks, if necessary.\n\n**Note:** See \"Iterators\" in Chapter 3 for more complete coverage on iterables and iterators.\n\n## Regular Expressions\n\nLet's face it: regular expressions haven't changed much in JS in a long time. So it's a great thing that they've finally learned a couple of new tricks in ES6. We'll briefly cover the additions here, but the overall topic of regular expressions is so dense that you'll need to turn to chapters/books dedicated to it (of which there are many!) if you need a refresher.\n\n### Unicode Flag\n\nWe'll cover the topic of Unicode in more detail in \"Unicode\" later in this chapter. 
Here, we'll just look briefly at the new `u` flag for ES6+ regular expressions, which turns on Unicode matching for that expression.\n\nJavaScript strings are typically interpreted as sequences of 16-bit characters, which correspond to the characters in the *Basic Multilingual Plane (BMP)* (http://en.wikipedia.org/wiki/Plane_%28Unicode%29). But there are many UTF-16 characters that fall outside this range, and so strings may have these multibyte characters in them.\n\nPrior to ES6, regular expressions could only match based on BMP characters, which means that those extended characters were treated as two separate characters for matching purposes. This is often not ideal.\n\nSo, as of ES6, the `u` flag tells a regular expression to process a string with the interpretation of Unicode (UTF-16) characters, such that such an extended character will be matched as a single entity.\n\n**Warning:** Despite the name implication, \"UTF-16\" doesn't strictly mean 16 bits. Modern Unicode uses 21 bits, and standards like UTF-8 and UTF-16 refer roughly to how many bits are used in the representation of a character.\n\nAn example (straight from the ES6 specification): 𝄞 (the musical symbol G-clef) is Unicode point U+1D11E (0x1D11E).\n\nIf this character appears in a regular expression pattern (like `/𝄞/`), the standard BMP interpretation would be that it's two separate characters (0xD834 and 0xDD1E) to match with. But the new ES6 Unicode-aware mode means that `/𝄞/u` (or the escaped Unicode form `/\\u{1D11E}/u`) will match `\"𝄞\"` in a string as a single matched character.\n\nYou might be wondering why this matters? In non-Unicode BMP mode, the pattern is treated as two separate characters, but would still find the match in a string with the `\"𝄞\"` character in it, as you can see if you try:\n\n```js\n/𝄞/.test( \"𝄞-clef\" );\t\t\t// true\n```\n\nThe length of the match is what matters. 
For example:\n\n```js\n/^.-clef/ .test( \"𝄞-clef\" );\t\t// false\n/^.-clef/u.test( \"𝄞-clef\" );\t\t// true\n```\n\nThe `^.-clef` in the pattern says to match only a single character at the beginning before the normal `\"-clef\"` text. In standard BMP mode, the match fails (two characters), but with `u` Unicode mode flagged on, the match succeeds (one character).\n\nIt's also important to note that `u` makes quantifiers like `+` and `*` apply to the entire Unicode code point as a single character, not just the *lower surrogate* (aka rightmost half of the symbol) of the character. The same goes for Unicode characters appearing in character classes, like `/[💩-💫]/u`.\n\n**Note:** There's plenty more nitty-gritty details about `u` behavior in regular expressions, which Mathias Bynens (https://twitter.com/mathias) has written extensively about (https://mathiasbynens.be/notes/es6-unicode-regex).\n\n### Sticky Flag\n\nAnother flag mode added to ES6 regular expressions is `y`, which is often called \"sticky mode.\" *Sticky* essentially means the regular expression has a virtual anchor at its beginning that keeps it rooted to matching at only the position indicated by the regular expression's `lastIndex` property.\n\nTo illustrate, let's consider two regular expressions, the first without sticky mode and the second with:\n\n```js\nvar re1 = /foo/,\n\tstr = \"++foo++\";\n\nre1.lastIndex;\t\t\t// 0\nre1.test( str );\t\t// true\nre1.lastIndex;\t\t\t// 0 -- not updated\n\nre1.lastIndex = 4;\nre1.test( str );\t\t// true -- ignored `lastIndex`\nre1.lastIndex;\t\t\t// 4 -- not updated\n```\n\nThree things to observe about this snippet:\n\n* `test(..)` doesn't pay any attention to `lastIndex`'s value, and always just performs its match from the beginning of the input string.\n* Because our pattern does not have a `^` start-of-input anchor, the search for `\"foo\"` is free to move ahead through the whole string looking for a match.\n* `lastIndex` is not updated by 
`test(..)`.\n\nNow, let's try a sticky mode regular expression:\n\n```js\nvar re2 = /foo/y,\t\t// <-- notice the `y` sticky flag\n\tstr = \"++foo++\";\n\nre2.lastIndex;\t\t\t// 0\nre2.test( str );\t\t// false -- \"foo\" not found at `0`\nre2.lastIndex;\t\t\t// 0\n\nre2.lastIndex = 2;\nre2.test( str );\t\t// true\nre2.lastIndex;\t\t\t// 5 -- updated to after previous match\n\nre2.test( str );\t\t// false\nre2.lastIndex;\t\t\t// 0 -- reset after previous match failure\n```\n\nAnd so our new observations about sticky mode:\n\n* `test(..)` uses `lastIndex` as the exact and only position in `str` to look to make a match. There is no moving ahead to look for the match -- it's either there at the `lastIndex` position or not.\n* If a match is made, `test(..)` updates `lastIndex` to point to the character immediately following the match. If a match fails, `test(..)` resets `lastIndex` back to `0`.\n\nNormal non-sticky patterns that aren't otherwise `^`-rooted to the start-of-input are free to move ahead in the input string looking for a match. But sticky mode restricts the pattern to matching just at the position of `lastIndex`.\n\nAs I suggested at the beginning of this section, another way of looking at this is that `y` implies a virtual anchor at the beginning of the pattern that is relative (aka constrains the start of the match) to exactly the `lastIndex` position.\n\n**Warning:** In previous literature on the topic, it has alternatively been asserted that this behavior is like `y` implying a `^` (start-of-input) anchor in the pattern. This is inaccurate. 
We'll explain in further detail in \"Anchored Sticky\" later.\n\n#### Sticky Positioning\n\nIt may seem strangely limiting that to use `y` for repeated matches, you have to manually ensure `lastIndex` is in the exact right position, as it has no move-ahead capability for matching.\n\nHere's one possible scenario: if you know that the match you care about is always going to be at a position that's a multiple of a number (e.g., `0`, `10`, `20`, etc.), you can just construct a limited pattern matching what you care about, but then manually set `lastIndex` each time before match to those fixed positions.\n\nConsider:\n\n```js\nvar re = /f../y,\n\tstr = \"foo       far       fad\";\n\nstr.match( re );\t\t// [\"foo\"]\n\nre.lastIndex = 10;\nstr.match( re );\t\t// [\"far\"]\n\nre.lastIndex = 20;\nstr.match( re );\t\t// [\"fad\"]\n```\n\nHowever, if you're parsing a string that isn't formatted in fixed positions like that, figuring out what to set `lastIndex` to before each match is likely going to be untenable.\n\nThere's a saving nuance to consider here. `y` requires that `lastIndex` be in the exact position for a match to occur. But it doesn't strictly require that *you* manually set `lastIndex`.\n\nInstead, you can construct your expressions in such a way that they capture in each main match everything before and after the thing you care about, up to right before the next thing you'll care to match.\n\nBecause `lastIndex` will set to the next character beyond the end of a match, if you've matched everything up to that point, `lastIndex` will always be in the correct position for the `y` pattern to start from the next time.\n\n**Warning:** If you can't predict the structure of the input string in a sufficiently patterned way like that, this technique may not be suitable and you may not be able to use `y`.\n\nHaving structured string input is likely the most practical scenario where `y` will be capable of performing repeated matching throughout a string. 
Consider:\n\n```js\nvar re = /\d+\.\s(.*?)(?:\s|$)/y,\n\tstr = \"1. foo 2. bar 3. baz\";\n\nstr.match( re );\t\t// [ \"1. foo \", \"foo\" ]\n\nre.lastIndex;\t\t\t// 7 -- correct position!\nstr.match( re );\t\t// [ \"2. bar \", \"bar\" ]\n\nre.lastIndex;\t\t\t// 14 -- correct position!\nstr.match( re );\t\t// [ \"3. baz\", \"baz\" ]\n```\n\nThis works because I knew something ahead of time about the structure of the input string: there is always a numeral prefix like `\"1. \"` before the desired match (`\"foo\"`, etc.), and either a space after it, or the end of the string (`$` anchor). So the regular expression I constructed captures all of that in each main match, and then I use a matching group `( )` so that the stuff I really care about is separated out for convenience.\n\nAfter the first match (`\"1. foo \"`), the `lastIndex` is `7`, which is already the position needed to start the next match, for `\"2. bar \"`, and so on.\n\nIf you're going to use `y` sticky mode for repeated matches, you'll probably want to look for opportunities to have `lastIndex` automatically positioned as we've just demonstrated.\n\n#### Sticky Versus Global\n\nSome readers may be aware that you can emulate something like this `lastIndex`-relative matching with the `g` global match flag and the `exec(..)` method, as so:\n\n```js\nvar re = /o+./g,\t\t// <-- look, `g`!\n\tstr = \"foot book more\";\n\nre.exec( str );\t\t\t// [\"oot\"]\nre.lastIndex;\t\t\t// 4\n\nre.exec( str );\t\t\t// [\"ook\"]\nre.lastIndex;\t\t\t// 9\n\nre.exec( str );\t\t\t// [\"or\"]\nre.lastIndex;\t\t\t// 13\n\nre.exec( str );\t\t\t// null -- no more matches!\nre.lastIndex;\t\t\t// 0 -- starts over now!\n```\n\nWhile it's true that `g` pattern matches with `exec(..)` start their matching from `lastIndex`'s current value, and also update `lastIndex` after each match (or failure), this is not the same thing as `y`'s behavior.\n\nNotice in the previous snippet that `\"ook\"`, located at position `6`, was matched and 
found by the second `exec(..)` call, even though at the time, `lastIndex` was `4` (from the end of the previous match). Why? Because as we said earlier, non-sticky matches are free to move ahead in their matching. A sticky mode expression would have failed here, because it would not be allowed to move ahead.\n\nIn addition to perhaps undesired move-ahead matching behavior, another downside to just using `g` instead of `y` is that `g` changes the behavior of some matching methods, like `str.match(re)`.\n\nConsider:\n\n```js\nvar re = /o+./g,\t\t// <-- look, `g`!\n\tstr = \"foot book more\";\n\nstr.match( re );\t\t// [\"oot\",\"ook\",\"or\"]\n```\n\nSee how all the matches were returned at once? Sometimes that's OK, but sometimes that's not what you want.\n\nThe `y` sticky flag will give you one-at-a-time progressive matching with utilities like `test(..)` and `match(..)`. Just make sure the `lastIndex` is always in the right position for each match!\n\n#### Anchored Sticky\n\nAs we warned earlier, it's inaccurate to think of sticky mode as implying a pattern starts with `^`. The `^` anchor has a distinct meaning in regular expressions, which is *not altered* by sticky mode. `^` is an anchor that *always* refers to the beginning of the input, and *is not* in any way relative to `lastIndex`.\n\nBesides poor/inaccurate documentation on this topic, the confusion is unfortunately strengthened further because an older pre-ES6 experiment with sticky mode in Firefox *did* make `^` relative to `lastIndex`, so that behavior has been around for years.\n\nES6 elected not to do it that way. `^` in a pattern means start-of-input absolutely and only.\n\nAs a consequence, a pattern like `/^foo/y` will always and only find a `\"foo\"` match at the beginning of a string, *if it's allowed to match there*. If `lastIndex` is not `0`, the match will fail. 
Consider:\n\n```js\nvar re = /^foo/y,\n\tstr = \"foo\";\n\nre.test( str );\t\t\t// true\nre.test( str );\t\t\t// false\nre.lastIndex;\t\t\t// 0 -- reset after failure\n\nre.lastIndex = 1;\nre.test( str );\t\t\t// false -- failed for positioning\nre.lastIndex;\t\t\t// 0 -- reset after failure\n```\n\nBottom line: `y` plus `^` plus `lastIndex > 0` is an incompatible combination that will always cause a failed match.\n\n**Note:** While `y` does not alter the meaning of `^` in any way, the `m` multiline mode *does*, such that `^` means start-of-input *or* start of text after a newline. So, if you combine `y` and `m` flags together for a pattern, you can find multiple `^`-rooted matches in a string. But remember: because it's `y` sticky, you'll have to make sure `lastIndex` is pointing at the correct new line position (likely by matching to the end of the line) each subsequent time, or no subsequent matches will be made.\n\n### Regular Expression `flags`\n\nPrior to ES6, if you wanted to examine a regular expression object to see what flags it had applied, you needed to parse them out -- ironically, probably with another regular expression -- from the content of the `source` property, such as:\n\n```js\nvar re = /foo/ig;\n\nre.toString();\t\t\t// \"/foo/ig\"\n\nvar flags = re.toString().match( /\\/([gim]*)$/ )[1];\n\nflags;\t\t\t\t\t// \"ig\"\n```\n\nAs of ES6, you can now get these values directly, with the new `flags` property:\n\n```js\nvar re = /foo/ig;\n\nre.flags;\t\t\t\t// \"gi\"\n```\n\nIt's a small nuance, but the ES6 specification calls for the expression's flags to be listed in this order: `\"gimuy\"`, regardless of what order the original pattern was specified with. 
That's the reason for the difference between `/ig` and `\"gi\"`.\n\nNo, the order of flags specified or listed doesn't matter.\n\nAnother tweak from ES6 is that the `RegExp(..)` constructor is now `flags`-aware if you pass it an existing regular expression:\n\n```js\nvar re1 = /foo*/y;\nre1.source;\t\t\t\t\t\t\t// \"foo*\"\nre1.flags;\t\t\t\t\t\t\t// \"y\"\n\nvar re2 = new RegExp( re1 );\nre2.source;\t\t\t\t\t\t\t// \"foo*\"\nre2.flags;\t\t\t\t\t\t\t// \"y\"\n\nvar re3 = new RegExp( re1, \"ig\" );\nre3.source;\t\t\t\t\t\t\t// \"foo*\"\nre3.flags;\t\t\t\t\t\t\t// \"gi\"\n```\n\nPrior to ES6, the `re3` construction would throw an error, but as of ES6 you can override the flags when duplicating.\n\n## Number Literal Extensions\n\nPrior to ES5, number literals looked like the following -- the octal form was not officially specified, only allowed as an extension that browsers had come to de facto agreement on:\n\n```js\nvar dec = 42,\n\toct = 052,\n\thex = 0x2a;\n```\n\n**Note:** Though you are specifying a number in different bases, the number's mathematic value is what is stored, and the default output interpretation is always base-10. The three variables in the previous snippet all have the `42` value stored in them.\n\nTo further illustrate that `052` was a nonstandard form extension, consider:\n\n```js\nNumber( \"42\" );\t\t\t\t// 42\nNumber( \"052\" );\t\t\t// 52\nNumber( \"0x2a\" );\t\t\t// 42\n```\n\nES5 continued to permit the browser-extended octal form (including such inconsistencies), except that in strict mode, the octal literal (`052`) form is disallowed. This restriction was done mainly because many developers had the habit (from other languages) of seemingly innocuously prefixing otherwise base-10 numbers with `0`'s for code alignment purposes, and then running into the accidental fact that they'd changed the number value entirely!\n\nES6 continues the legacy of changes/variations to how number literals outside base-10 numbers can be represented. 
There's now an official octal form, an amended hexadecimal form, and a brand-new binary form. For web compatibility reasons, the old octal `052` form will continue to be legal (though unspecified) in non-strict mode, but should really never be used anymore.\n\nHere are the new ES6 number literal forms:\n\n```js\nvar dec = 42,\n\toct = 0o52,\t\t\t// or `0O52` :(\n\thex = 0x2a,\t\t\t// or `0X2a` :/\n\tbin = 0b101010;\t\t// or `0B101010` :/\n```\n\nThe only decimal form allowed is base-10. Octal, hexadecimal, and binary are all integer forms.\n\nAnd the string representations of these forms are all able to be coerced/converted to their number equivalent:\n\n```js\nNumber( \"42\" );\t\t\t// 42\nNumber( \"0o52\" );\t\t// 42\nNumber( \"0x2a\" );\t\t// 42\nNumber( \"0b101010\" );\t// 42\n```\n\nThough not strictly new to ES6, it's a little-known fact that you can actually go the opposite direction of conversion (well, sort of):\n\n```js\nvar a = 42;\n\na.toString();\t\t\t// \"42\" -- also `a.toString( 10 )`\na.toString( 8 );\t\t// \"52\"\na.toString( 16 );\t\t// \"2a\"\na.toString( 2 );\t\t// \"101010\"\n```\n\nIn fact, you can represent a number this way in any base from `2` to `36`, though it'd be rare that you'd go outside the standard bases: 2, 8, 10, and 16.\n\n## Unicode\n\nLet me just say that this section is not an exhaustive everything-you-ever-wanted-to-know-about-Unicode resource. I want to cover what you need to know that's *changing* for Unicode in ES6, but we won't go much deeper than that. Mathias Bynens (http://twitter.com/mathias) has written/spoken extensively and brilliantly about JS and Unicode (see https://mathiasbynens.be/notes/javascript-unicode and http://fluentconf.com/javascript-html-2015/public/content/2015/02/18-javascript-loves-unicode).\n\nThe Unicode characters that range from `0x0000` to `0xFFFF` contain all the standard printed characters (in various languages) that you're likely to have seen or interacted with. 
This group of characters is called the *Basic Multilingual Plane (BMP)*. The BMP even contains fun symbols like this cool snowman: ☃ (U+2603).\n\nThere are lots of other extended Unicode characters beyond this BMP set, which range up to `0x10FFFF`. These symbols are often referred to as *astral* symbols, as that's the name given to the set of 16 *planes* (e.g., layers/groupings) of characters beyond the BMP. Examples of astral symbols include 𝄞 (U+1D11E) and 💩 (U+1F4A9).\n\nPrior to ES6, JavaScript strings could specify Unicode characters using Unicode escaping, such as:\n\n```js\nvar snowman = \"\\u2603\";\nconsole.log( snowman );\t\t\t// \"☃\"\n```\n\nHowever, the `\\uXXXX` Unicode escaping only supports four hexadecimal characters, so you can only represent the BMP set of characters in this way. To represent an astral character using Unicode escaping prior to ES6, you need to use a *surrogate pair* -- basically two specially calculated Unicode-escaped characters side by side, which JS interprets together as a single astral character:\n\n```js\nvar gclef = \"\\uD834\\uDD1E\";\nconsole.log( gclef );\t\t\t// \"𝄞\"\n```\n\nAs of ES6, we now have a new form for Unicode escaping (in strings and regular expressions), called Unicode *code point escaping*:\n\n```js\nvar gclef = \"\\u{1D11E}\";\nconsole.log( gclef );\t\t\t// \"𝄞\"\n```\n\nAs you can see, the difference is the presence of the `{ }` in the escape sequence, which allows it to contain any number of hexadecimal characters. Because you only need six to represent the highest possible code point value in Unicode (i.e., 0x10FFFF), this is sufficient.\n\n### Unicode-Aware String Operations\n\nBy default, JavaScript string operations and methods are not sensitive to astral symbols in string values. So, they treat each BMP character individually, even the two surrogate halves that make up an otherwise single astral character. 
Consider:\n\n```js\nvar snowman = \"☃\";\nsnowman.length;\t\t\t\t\t// 1\n\nvar gclef = \"𝄞\";\ngclef.length;\t\t\t\t\t// 2\n```\n\nSo, how do we accurately calculate the length of such a string? In this scenario, the following trick will work:\n\n```js\nvar gclef = \"𝄞\";\n\n[...gclef].length;\t\t\t\t// 1\nArray.from( gclef ).length;\t\t// 1\n```\n\nRecall from the \"`for..of` Loops\" section earlier in this chapter that ES6 strings have built-in iterators. This iterator happens to be Unicode-aware, meaning it will automatically output an astral symbol as a single value. We take advantage of that using the `...` spread operator in an array literal, which creates an array of the string's symbols. Then we just inspect the length of that resultant array. ES6's `Array.from(..)` does basically the same thing as `[...XYZ]`, but we'll cover that utility in detail in Chapter 6.\n\n**Warning:** It should be noted that constructing and exhausting an iterator just to get the length of a string is quite expensive on performance, relatively speaking, compared to what a theoretically optimized native utility/property would do.\n\nUnfortunately, the full answer is not as simple or straightforward. In addition to the surrogate pairs (which the string iterator takes care of), there are special Unicode code points that behave in other special ways, which is much harder to account for. For example, there's a set of code points that modify the previous adjacent character, known as *Combining Diacritical Marks*.\n\nConsider these two string outputs:\n\n```js\nconsole.log( s1 );\t\t\t\t// \"é\"\nconsole.log( s2 );\t\t\t\t// \"é\"\n```\n\nThey look the same, but they're not! Here's how we created `s1` and `s2`:\n\n```js\nvar s1 = \"\\xE9\",\n\ts2 = \"e\\u0301\";\n```\n\nAs you can probably guess, our previous `length` trick doesn't work with `s2`:\n\n```js\n[...s1].length;\t\t\t\t\t// 1\n[...s2].length;\t\t\t\t\t// 2\n```\n\nSo what can we do? 
In this case, we can perform a *Unicode normalization* on the value before inquiring about its length, using the ES6 `String#normalize(..)` utility (which we'll cover more in Chapter 6):\n\n```js\nvar s1 = \"\\xE9\",\n\ts2 = \"e\\u0301\";\n\ns1.normalize().length;\t\t\t// 1\ns2.normalize().length;\t\t\t// 1\n\ns1 === s2;\t\t\t\t\t\t// false\ns1 === s2.normalize();\t\t\t// true\n```\n\nEssentially, `normalize(..)` takes a sequence like `\"e\\u0301\"` and normalizes it to `\"\\xE9\"`. Normalization can even combine multiple adjacent combining marks if there's a suitable Unicode character they combine to:\n\n```js\nvar s1 = \"o\\u0302\\u0300\",\n\ts2 = s1.normalize(),\n\ts3 = \"ồ\";\n\ns1.length;\t\t\t\t\t\t// 3\ns2.length;\t\t\t\t\t\t// 1\ns3.length;\t\t\t\t\t\t// 1\n\ns2 === s3;\t\t\t\t\t\t// true\n```\n\nUnfortunately, normalization isn't fully perfect here, either. If you have multiple combining marks modifying a single character, you may not get the length count you'd expect, because there may not be a single defined normalized character that represents the combination of all the marks. 
For example:\n\n```js\nvar s1 = \"e\\u0301\\u0330\";\n\nconsole.log( s1 );\t\t\t\t// \"ḛ́\"\n\ns1.normalize().length;\t\t\t// 2\n```\n\nThe further you go down this rabbit hole, the more you realize that it's difficult to get one precise definition for \"length.\" What we see visually rendered as a single character -- more precisely called a *grapheme* -- doesn't always strictly relate to a single \"character\" in the program processing sense.\n\n**Tip:** If you want to see just how deep this rabbit hole goes, check out the \"Grapheme Cluster Boundaries\" algorithm (http://www.Unicode.org/reports/tr29/#Grapheme_Cluster_Boundaries).\n\n### Character Positioning\n\nSimilar to length complications, what does it actually mean to ask, \"what is the character at position 2?\" The naive pre-ES6 answer comes from `charAt(..)`, which will not respect the atomicity of an astral character, nor will it take into account combining marks.\n\nConsider:\n\n```js\nvar s1 = \"abc\\u0301d\",\n\ts2 = \"ab\\u0107d\",\n\ts3 = \"ab\\u{1d49e}d\";\n\nconsole.log( s1 );\t\t\t\t// \"abćd\"\nconsole.log( s2 );\t\t\t\t// \"abćd\"\nconsole.log( s3 );\t\t\t\t// \"ab𝒞d\"\n\ns1.charAt( 2 );\t\t\t\t\t// \"c\"\ns2.charAt( 2 );\t\t\t\t\t// \"ć\"\ns3.charAt( 2 );\t\t\t\t\t// \"\" <-- unprintable surrogate\ns3.charAt( 3 );\t\t\t\t\t// \"\" <-- unprintable surrogate\n```\n\nSo, is ES6 giving us a Unicode-aware version of `charAt(..)`? Unfortunately, no. 
At the time of this writing, there's a proposal for such a utility that's under consideration for post-ES6.\n\nBut with what we explored in the previous section (and of course with the limitations noted thereof!), we can hack an ES6 answer:\n\n```js\nvar s1 = \"abc\\u0301d\",\n\ts2 = \"ab\\u0107d\",\n\ts3 = \"ab\\u{1d49e}d\";\n\n[...s1.normalize()][2];\t\t\t// \"ć\"\n[...s2.normalize()][2];\t\t\t// \"ć\"\n[...s3.normalize()][2];\t\t\t// \"𝒞\"\n```\n\n**Warning:** Reminder of an earlier warning: constructing and exhausting an iterator each time you want to get at a single character is... not very ideal, performance wise. Let's hope we get a built-in and optimized utility for this soon, post-ES6.\n\nWhat about a Unicode-aware version of the `charCodeAt(..)` utility? ES6 gives us `codePointAt(..)`:\n\n```js\nvar s1 = \"abc\\u0301d\",\n\ts2 = \"ab\\u0107d\",\n\ts3 = \"ab\\u{1d49e}d\";\n\ns1.normalize().codePointAt( 2 ).toString( 16 );\n// \"107\"\n\ns2.normalize().codePointAt( 2 ).toString( 16 );\n// \"107\"\n\ns3.normalize().codePointAt( 2 ).toString( 16 );\n// \"1d49e\"\n```\n\nWhat about the other direction? A Unicode-aware version of `String.fromCharCode(..)` is ES6's `String.fromCodePoint(..)`:\n\n```js\nString.fromCodePoint( 0x107 );\t\t// \"ć\"\n\nString.fromCodePoint( 0x1d49e );\t// \"𝒞\"\n```\n\nSo wait, can we just combine `String.fromCodePoint(..)` and `codePointAt(..)` to get a better version of a Unicode-aware `charAt(..)` from earlier? Yep!\n\n```js\nvar s1 = \"abc\\u0301d\",\n\ts2 = \"ab\\u0107d\",\n\ts3 = \"ab\\u{1d49e}d\";\n\nString.fromCodePoint( s1.normalize().codePointAt( 2 ) );\n// \"ć\"\n\nString.fromCodePoint( s2.normalize().codePointAt( 2 ) );\n// \"ć\"\n\nString.fromCodePoint( s3.normalize().codePointAt( 2 ) );\n// \"𝒞\"\n```\n\nThere's quite a few other string methods we haven't addressed here, including `toUpperCase()`, `toLowerCase()`, `substring(..)`, `indexOf(..)`, `slice(..)`, and a dozen others. 
None of these have been changed or augmented for full Unicode awareness, so you should be very careful -- probably just avoid them! -- when working with strings containing astral symbols.\n\nThere are also several string methods that use regular expressions for their behavior, like `replace(..)` and `match(..)`. Thankfully, ES6 brings Unicode awareness to regular expressions, as we covered in \"Unicode Flag\" earlier in this chapter.\n\nOK, there we have it! JavaScript's Unicode string support is significantly better over pre-ES6 (though still not perfect) with the various additions we've just covered.\n\n### Unicode Identifier Names\n\nUnicode can also be used in identifier names (variables, properties, etc.). Prior to ES6, you could do this with Unicode-escapes, like:\n\n```js\nvar \\u03A9 = 42;\n\n// same as: var Ω = 42;\n```\n\nAs of ES6, you can also use the earlier explained code point escape syntax:\n\n```js\nvar \\u{2B400} = 42;\n\n// same as: var 𫐀 = 42;\n```\n\nThere's a complex set of rules around exactly which Unicode characters are allowed. Furthermore, some are allowed only if they're not the first character of the identifier name.\n\n**Note:** Mathias Bynens has a great post (https://mathiasbynens.be/notes/javascript-identifiers-es6) on all the nitty-gritty details.\n\nThe reasons for using such unusual characters in identifier names are rather rare and academic. You typically won't be best served by writing code that relies on these esoteric capabilities.\n\n## Symbols\n\nWith ES6, for the first time in quite a while, a new primitive type has been added to JavaScript: the `symbol`. Unlike the other primitive types, however, symbols don't have a literal form.\n\nHere's how you create a symbol:\n\n```js\nvar sym = Symbol( \"some optional description\" );\n\ntypeof sym;\t\t// \"symbol\"\n```\n\nSome things to note:\n\n* You cannot and should not use `new` with `Symbol(..)`. 
It's not a constructor, nor are you producing an object.\n* The parameter passed to `Symbol(..)` is optional. If passed, it should be a string that gives a friendly description for the symbol's purpose.\n* The `typeof` output is a new value (`\"symbol\"`) that is the primary way to identify a symbol.\n\nThe description, if provided, is solely used for the stringification representation of the symbol:\n\n```js\nsym.toString();\t\t// \"Symbol(some optional description)\"\n```\n\nSimilar to how primitive string values are not instances of `String`, symbols are also not instances of `Symbol`. If, for some reason, you want to construct a boxed wrapper object form of a symbol value, you can do the following:\n\n```js\nsym instanceof Symbol;\t\t// false\n\nvar symObj = Object( sym );\nsymObj instanceof Symbol;\t// true\n\nsymObj.valueOf() === sym;\t// true\n```\n\n**Note:** `symObj` in this snippet is interchangeable with `sym`; either form can be used in all places symbols are utilized. There's not much reason to use the boxed wrapper object form (`symObj`) instead of the primitive form (`sym`). Keeping with similar advice for other primitives, it's probably best to prefer `sym` over `symObj`.\n\nThe internal value of a symbol itself -- referred to as its `name` -- is hidden from the code and cannot be obtained. You can think of this symbol value as an automatically generated, unique (within your application) string value.\n\nBut if the value is hidden and unobtainable, what's the point of having a symbol at all?\n\nThe main point of a symbol is to create a string-like value that can't collide with any other value. 
So, for example, consider using a symbol as a constant representing an event name:\n\n```js\nconst EVT_LOGIN = Symbol( \"event.login\" );\n```\n\nYou'd then use `EVT_LOGIN` in place of a generic string literal like `\"event.login\"`:\n\n```js\nevthub.listen( EVT_LOGIN, function(data){\n\t// ..\n} );\n```\n\nThe benefit here is that `EVT_LOGIN` holds a value that cannot be duplicated (accidentally or otherwise) by any other value, so it is impossible for there to be any confusion of which event is being dispatched or handled.\n\n**Note:** Under the covers, the `evthub` utility assumed in the previous snippet would almost certainly be using the symbol value from the `EVT_LOGIN` argument directly as the property/key in some internal object (hash) that tracks event handlers. If `evthub` instead needed to use the symbol value as a real string, it would need to explicitly coerce with `String(..)` or `toString()`, as implicit string coercion of symbols is not allowed.\n\nYou may use a symbol directly as a property name/key in an object, such as a special property that you want to treat as hidden or meta in usage. It's important to know that although you intend to treat it as such, it is not *actually* a hidden or untouchable property.\n\nConsider this module that implements the *singleton* pattern behavior -- that is, it only allows itself to be created once:\n\n```js\nconst INSTANCE = Symbol( \"instance\" );\n\nfunction HappyFace() {\n\tif (HappyFace[INSTANCE]) return HappyFace[INSTANCE];\n\n\tfunction smile() { .. }\n\n\treturn HappyFace[INSTANCE] = {\n\t\tsmile: smile\n\t};\n}\n\nvar me = HappyFace(),\n\tyou = HappyFace();\n\nme === you;\t\t\t// true\n```\n\nThe `INSTANCE` symbol value here is a special, almost hidden, meta-like property stored statically on the `HappyFace()` function object.\n\nIt could alternatively have been a plain old property like `__instance`, and the behavior would have been identical. 
The usage of a symbol simply improves the metaprogramming style, keeping this `INSTANCE` property set apart from any other normal properties.\n\n### Symbol Registry\n\nOne mild downside to using symbols as in the last few examples is that the `EVT_LOGIN` and `INSTANCE` variables had to be stored in an outer scope (perhaps even the global scope), or otherwise somehow stored in a publicly available location, so that all parts of the code that need to use the symbols can access them.\n\nTo aid in organizing code with access to these symbols, you can create symbol values with the *global symbol registry*. For example:\n\n```js\nconst EVT_LOGIN = Symbol.for( \"event.login\" );\n\nconsole.log( EVT_LOGIN );\t\t// Symbol(event.login)\n```\n\nAnd:\n\n```js\nfunction HappyFace() {\n\tconst INSTANCE = Symbol.for( \"instance\" );\n\n\tif (HappyFace[INSTANCE]) return HappyFace[INSTANCE];\n\n\t// ..\n\n\treturn HappyFace[INSTANCE] = { .. };\n}\n```\n\n`Symbol.for(..)` looks in the global symbol registry to see if a symbol is already stored with the provided description text, and returns it if so. If not, it creates one to return. In other words, the global symbol registry treats symbol values, by description text, as singletons themselves.\n\nBut that also means that any part of your application can retrieve the symbol from the registry using `Symbol.for(..)`, as long as the matching description name is used.\n\nIronically, symbols are basically intended to replace the use of *magic strings* (arbitrary string values given special meaning) in your application. But you precisely use *magic* description string values to uniquely identify/locate them in the global symbol registry!\n\nTo avoid accidental collisions, you'll probably want to make your symbol descriptions quite unique. 
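The collision risk exists precisely because of that singleton behavior. A brief sketch (again, the description strings are arbitrary):

```js
// two independent parts of a program asking for the same description...
var a = Symbol.for( "instance" ),
	b = Symbol.for( "instance" );

// ...get the very same symbol back -- a potential collision
a === b;					// true

// unregistered symbols never collide, even with matching text
a === Symbol( "instance" );	// false
```
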
One easy way of doing that is to include prefix/context/namespacing information in them.\n\nFor example, consider a utility such as the following:\n\n```js\nfunction extractValues(str) {\n\tvar key = Symbol.for( \"extractValues.parse\" ),\n\t\tre = extractValues[key] ||\n\t\t\t/[^=&]+?=([^&]+?)(?=&|$)/g,\n\t\tvalues = [], match;\n\n\twhile (match = re.exec( str )) {\n\t\tvalues.push( match[1] );\n\t}\n\n\treturn values;\n}\n```\n\nWe use the magic string value `\"extractValues.parse\"` because it's quite unlikely that any other symbol in the registry would ever collide with that description.\n\nIf a user of this utility wants to override the parsing regular expression, they can also use the symbol registry:\n\n```js\nextractValues[Symbol.for( \"extractValues.parse\" )] =\n\t/..some pattern../g;\n\nextractValues( \"..some string..\" );\n```\n\nAside from the assistance the symbol registry provides in globally storing these values, everything we're seeing here could have been done by just actually using the magic string `\"extractValues.parse\"` as the key, rather than the symbol. The improvements exist at the metaprogramming level more than the functional level.\n\nYou may have occasion to use a symbol value that has been stored in the registry to look up what description text (key) it's stored under. 
For example, you may need to signal to another part of your application how to locate a symbol in the registry because you cannot pass the symbol value itself.\n\nYou can retrieve a registered symbol's description text (key) using `Symbol.keyFor(..)`:\n\n```js\nvar s = Symbol.for( \"something cool\" );\n\nvar desc = Symbol.keyFor( s );\nconsole.log( desc );\t\t\t// \"something cool\"\n\n// get the symbol from the registry again\nvar s2 = Symbol.for( desc );\n\ns2 === s;\t\t\t\t\t\t// true\n```\n\n### Symbols as Object Properties\n\nIf a symbol is used as a property/key of an object, it's stored in a special way so that the property will not show up in a normal enumeration of the object's properties:\n\n```js\nvar o = {\n\tfoo: 42,\n\t[ Symbol( \"bar\" ) ]: \"hello world\",\n\tbaz: true\n};\n\nObject.getOwnPropertyNames( o );\t// [ \"foo\",\"baz\" ]\n```\n\nTo retrieve an object's symbol properties:\n\n```js\nObject.getOwnPropertySymbols( o );\t// [ Symbol(bar) ]\n```\n\nThis makes it clear that a property symbol is not actually hidden or inaccessible, as you can always see it in the `Object.getOwnPropertySymbols(..)` list.\n\n#### Built-In Symbols\n\nES6 comes with a number of predefined built-in symbols that expose various meta behaviors on JavaScript object values. However, these symbols are *not* registered in the global symbol registry, as one might expect.\n\nInstead, they're stored as properties on the `Symbol` function object. For example, in the \"`for..of`\" section earlier in this chapter, we introduced the `Symbol.iterator` value:\n\n```js\nvar a = [1,2,3];\n\na[Symbol.iterator];\t\t\t// native function\n```\n\nThe specification uses the `@@` prefix notation to refer to the built-in symbols, the most common ones being: `@@iterator`, `@@toStringTag`, `@@toPrimitive`. 
Several others are defined as well, though they probably won't be used as often.\n\n**Note:** See \"Well Known Symbols\" in Chapter 7 for detailed information about how these built-in symbols are used for metaprogramming purposes.\n\n## Review\n\nES6 adds a heap of new syntax forms to JavaScript, so there's plenty to learn!\n\nMost of these are designed to ease the pain points of common programming idioms, such as setting default values to function parameters and gathering the \"rest\" of the parameters into an array. Destructuring is a powerful tool for more concisely expressing assignments of values from arrays and nested objects.\n\nWhile features like `=>` arrow functions appear to also be all about shorter and nicer-looking syntax, they actually have very specific behaviors that you should intentionally use only in appropriate situations.\n\nExpanded Unicode support, new tricks for regular expressions, and even a new primitive `symbol` type round out the syntactic evolution of ES6.\n"
  },
  {
    "path": "es6 & beyond/ch3.md",
    "content": "# You Don't Know JS: ES6 & Beyond\n# Chapter 3: Organization\n\nIt's one thing to write JS code, but it's another to properly organize it. Utilizing common patterns for organization and reuse goes a long way to improving the readability and understandability of your code. Remember: code is at least as much about communicating to other developers as it is about feeding the computer instructions.\n\nES6 has several important features that help significantly improve these patterns, including: iterators, generators, modules, and classes.\n\n## Iterators\n\nAn *iterator* is a structured pattern for pulling information from a source in one-at-a-time fashion. This pattern has been around in programming for a long time. And to be sure, JS developers have been designing and implementing ad hoc iterators in JS programs since before anyone can remember, so it's not at all a new topic.\n\nWhat ES6 has done is introduce an implicit standardized interface for iterators. Many of the built-in data structures in JavaScript will now expose an iterator implementing this standard. And you can also construct your own iterators adhering to the same standard, for maximal interoperability.\n\nIterators are a way of organizing ordered, sequential, pull-based consumption of data.\n\nFor example, you may implement a utility that produces a new unique identifier each time it's requested. Or you may produce an infinite series of values that rotate through a fixed list, in round-robin fashion. Or you could attach an iterator to a database query result to pull out new rows one at a time.\n\nAlthough they have not commonly been used in JS in such a manner, iterators can also be thought of as controlling behavior one step at a time. 
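For instance, the round-robin idea just mentioned could be sketched with the iterator interface detailed in the next section; `roundRobin` here is an illustrative helper, not a built-in:

```javascript
// produce an infinite series of values that rotate
// through a fixed list, in round-robin fashion
function roundRobin(list) {
	var i = -1;

	return {
		// conforms to the iterator interface (next section)
		next() {
			i = (i + 1) % list.length;
			return { value: list[i], done: false };
		}
	};
}

var colors = roundRobin( [ "red", "green", "blue" ] );

colors.next().value;	// "red"
colors.next().value;	// "green"
colors.next().value;	// "blue"
colors.next().value;	// "red"
```

An iterator like this one produces data on demand; but remember, the same one-step-at-a-time mechanism can just as easily drive behavior.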
This can be illustrated quite clearly when considering generators (see \"Generators\" later in this chapter), though you can certainly do the same without generators.\n\n### Interfaces\n\nAt the time of this writing, ES6 section 25.1.1.2 (https://people.mozilla.org/~jorendorff/es6-draft.html#sec-iterator-interface) details the `Iterator` interface as having the following requirement:\n\n```\nIterator [required]\n\tnext() {method}: retrieves next IteratorResult\n```\n\nThere are two optional members that some iterators are extended with:\n\n```\nIterator [optional]\n\treturn() {method}: stops iterator and returns IteratorResult\n\tthrow() {method}: signals error and returns IteratorResult\n```\n\nThe `IteratorResult` interface is specified as:\n\n```\nIteratorResult\n\tvalue {property}: current iteration value or final return value\n\t\t(optional if `undefined`)\n\tdone {property}: boolean, indicates completion status\n```\n\n**Note:** I call these interfaces implicit not because they're not explicitly called out in the specification -- they are! -- but because they're not exposed as direct objects accessible to code. JavaScript does not, in ES6, support any notion of \"interfaces,\" so adherence for your own code is purely conventional. However, wherever JS expects an iterator -- a `for..of` loop, for instance -- what you provide must adhere to these interfaces or the code will fail.\n\nThere's also an `Iterable` interface, which describes objects that must be able to produce iterators:\n\n```\nIterable\n\t@@iterator() {method}: produces an Iterator\n```\n\nIf you recall from \"Built-In Symbols\" in Chapter 2, `@@iterator` is the special built-in symbol representing the method that can produce iterator(s) for the object.\n\n#### IteratorResult\n\nThe `IteratorResult` interface specifies that the return value from any iterator operation will be an object of the form:\n\n```js\n{ value: .. 
, done: true / false }\n```\n\nBuilt-in iterators will always return values of this form, but more properties are, of course, allowed to be present on the return value, as necessary.\n\nFor example, a custom iterator may add additional metadata to the result object (e.g., where the data came from, how long it took to retrieve, cache expiration length, frequency for the appropriate next request, etc.).\n\n**Note:** Technically, `value` is optional if it would otherwise be considered absent or unset, such as in the case of the value `undefined`. Because accessing `res.value` will produce `undefined` whether it's present with that value or absent entirely, the presence/absence of the property is more an implementation detail or an optimization (or both), rather than a functional issue.\n\n### `next()` Iteration\n\nLet's look at an array, which is an iterable, and the iterator it can produce to consume its values:\n\n```js\nvar arr = [1,2,3];\n\nvar it = arr[Symbol.iterator]();\n\nit.next();\t\t// { value: 1, done: false }\nit.next();\t\t// { value: 2, done: false }\nit.next();\t\t// { value: 3, done: false }\n\nit.next();\t\t// { value: undefined, done: true }\n```\n\nEach time the method located at `Symbol.iterator` (see Chapters 2 and 7) is invoked on this `arr` value, it will produce a new fresh iterator. Most structures will do the same, including all the built-in data structures in JS.\n\nHowever, a structure like an event queue consumer might only ever produce a single iterator (singleton pattern). Or a structure might only allow one unique iterator at a time, requiring the current one to be completed before a new one can be created.\n\nThe `it` iterator in the previous snippet doesn't report `done: true` when you receive the `3` value. You have to call `next()` again, in essence going beyond the end of the array's values, to get the complete signal `done: true`. 
It may not be clear why until later in this section, but that design decision will typically be considered a best practice.\n\nPrimitive string values are also iterables by default:\n\n```js\nvar greeting = \"hello world\";\n\nvar it = greeting[Symbol.iterator]();\n\nit.next();\t\t// { value: \"h\", done: false }\nit.next();\t\t// { value: \"e\", done: false }\n..\n```\n\n**Note:** Technically, the primitive value itself isn't iterable, but thanks to \"boxing\", `\"hello world\"` is coerced/converted to its `String` object wrapper form, which *is* an iterable. See the *Types & Grammar* title of this series for more information.\n\nES6 also includes several new data structures, called collections (see Chapter 5). These collections are not only iterables themselves, but they also provide API method(s) to generate an iterator, such as:\n\n```js\nvar m = new Map();\nm.set( \"foo\", 42 );\nm.set( { cool: true }, \"hello world\" );\n\nvar it1 = m[Symbol.iterator]();\nvar it2 = m.entries();\n\nit1.next();\t\t// { value: [ \"foo\", 42 ], done: false }\nit2.next();\t\t// { value: [ \"foo\", 42 ], done: false }\n..\n```\n\nThe `next(..)` method of an iterator can optionally take one or more arguments. The built-in iterators mostly do not exercise this capability, though a generator's iterator definitely does (see \"Generators\" later in this chapter).\n\nBy general convention, including all the built-in iterators, calling `next(..)` on an iterator that's already been exhausted is not an error, but will simply continue to return the result `{ value: undefined, done: true }`.\n\n### Optional: `return(..)` and `throw(..)`\n\nThe optional methods on the iterator interface -- `return(..)` and `throw(..)` -- are not implemented on most of the built-in iterators. 
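You can check for the optional methods directly. A quick sketch (results per the ES6 spec; pre-ES6 environments will differ):

```javascript
// built-in array iterators only implement `next()`
var arrIt = [1,2,3][Symbol.iterator]();

typeof arrIt.return;	// "undefined"
typeof arrIt.throw;		// "undefined"

// generator iterators (see "Generators") implement all three
function *gen() { yield 1; }
var genIt = gen();

typeof genIt.return;	// "function"
typeof genIt.throw;		// "function"
```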
However, they definitely do mean something in the context of generators, so see \"Generators\" for more specific information.\n\n`return(..)` is defined as sending a signal to an iterator that the consuming code is complete and will not be pulling any more values from it. This signal can be used to notify the producer (the iterator responding to `next(..)` calls) to perform any cleanup it may need to do, such as releasing/closing network, database, or file handle resources.\n\nIf an iterator has a `return(..)` present and any condition occurs that can automatically be interpreted as abnormal or early termination of consuming the iterator, `return(..)` will automatically be called. You can call `return(..)` manually as well.\n\n`return(..)` will return an `IteratorResult` object just like `next(..)` does. In general, the optional value you send to `return(..)` would be sent back as `value` in this `IteratorResult`, though there are nuanced cases where that might not be true.\n\n`throw(..)` is used to signal an exception/error to an iterator, which the iterator may treat differently than the completion signal implied by `return(..)`. It does not necessarily imply a complete stop of the iterator as `return(..)` generally does.\n\nFor example, with generator iterators, `throw(..)` actually injects a thrown exception into the generator's paused execution context, which can be caught with a `try..catch`. An uncaught `throw(..)` exception would end up abnormally aborting the generator's iterator.\n\n**Note:** By general convention, an iterator should not produce any more results after having called `return(..)` or `throw(..)`.\n\n### Iterator Loop\n\nAs we covered in the \"`for..of`\" section in Chapter 2, the ES6 `for..of` loop directly consumes a conforming iterable.\n\nIf an iterator is also an iterable, it can be used directly with the `for..of` loop. 
You make an iterator an iterable by giving it a `Symbol.iterator` method that simply returns the iterator itself:\n\n```js\nvar it = {\n\t// make the `it` iterator an iterable\n\t[Symbol.iterator]() { return this; },\n\n\tnext() { .. },\n\t..\n};\n\nit[Symbol.iterator]() === it;\t\t// true\n```\n\nNow we can consume the `it` iterator with a `for..of` loop:\n\n```js\nfor (var v of it) {\n\tconsole.log( v );\n}\n```\n\nTo fully understand how such a loop works, recall the `for` equivalent of a `for..of` loop from Chapter 2:\n\n```js\nfor (var v, res; (res = it.next()) && !res.done; ) {\n\tv = res.value;\n\tconsole.log( v );\n}\n```\n\nIf you look closely, you'll see that `it.next()` is called before each iteration, and then `res.done` is consulted. If `res.done` is `true`, the expression evaluates to `false` and the iteration doesn't occur.\n\nRecall earlier that we suggested iterators should in general not return `done: true` along with the final intended value from the iterator. Now you can see why.\n\nIf an iterator returned `{ done: true, value: 42 }`, the `for..of` loop would completely discard the `42` value and it'd be lost. For this reason, assuming that your iterator may be consumed by patterns like the `for..of` loop or its manual `for` equivalent, you should probably wait to return `done: true` for signaling completion until after you've already returned all relevant iteration values.\n\n**Warning:** You can, of course, intentionally design your iterator to return some relevant `value` at the same time as returning `done: true`. But don't do this unless you've documented that as the case, and thus implicitly forced consumers of your iterator to use a different pattern for iteration than is implied by `for..of` or its manual equivalent we depicted.\n\n### Custom Iterators\n\nIn addition to the standard built-in iterators, you can make your own! 
All it takes to make them interoperate with ES6's consumption facilities (e.g., the `for..of` loop and the `...` operator) is to adhere to the proper interface(s).\n\nLet's try constructing an iterator that produces the infinite series of numbers in the Fibonacci sequence:\n\n```js\nvar Fib = {\n\t[Symbol.iterator]() {\n\t\tvar n1 = 1, n2 = 1;\n\n\t\treturn {\n\t\t\t// make the iterator an iterable\n\t\t\t[Symbol.iterator]() { return this; },\n\n\t\t\tnext() {\n\t\t\t\tvar current = n2;\n\t\t\t\tn2 = n1;\n\t\t\t\tn1 = n1 + current;\n\t\t\t\treturn { value: current, done: false };\n\t\t\t},\n\n\t\t\treturn(v) {\n\t\t\t\tconsole.log(\n\t\t\t\t\t\"Fibonacci sequence abandoned.\"\n\t\t\t\t);\n\t\t\t\treturn { value: v, done: true };\n\t\t\t}\n\t\t};\n\t}\n};\n\nfor (var v of Fib) {\n\tconsole.log( v );\n\n\tif (v > 50) break;\n}\n// 1 1 2 3 5 8 13 21 34 55\n// Fibonacci sequence abandoned.\n```\n\n**Warning:** If we hadn't inserted the `break` condition, this `for..of` loop would have run forever, which is probably not the desired result in terms of breaking your program!\n\nThe `Fib[Symbol.iterator]()` method when called returns the iterator object with `next()` and `return(..)` methods on it. 
State is maintained via `n1` and `n2` variables, which are kept by the closure.\n\nLet's *next* consider an iterator that is designed to run through a series (aka a queue) of actions, one item at a time:\n\n```js\nvar tasks = {\n\t[Symbol.iterator]() {\n\t\tvar steps = this.actions.slice();\n\n\t\treturn {\n\t\t\t// make the iterator an iterable\n\t\t\t[Symbol.iterator]() { return this; },\n\n\t\t\tnext(...args) {\n\t\t\t\tif (steps.length > 0) {\n\t\t\t\t\tlet res = steps.shift()( ...args );\n\t\t\t\t\treturn { value: res, done: false };\n\t\t\t\t}\n\t\t\t\telse {\n\t\t\t\t\treturn { done: true }\n\t\t\t\t}\n\t\t\t},\n\n\t\t\treturn(v) {\n\t\t\t\tsteps.length = 0;\n\t\t\t\treturn { value: v, done: true };\n\t\t\t}\n\t\t};\n\t},\n\tactions: []\n};\n```\n\nThe iterator on `tasks` steps through functions found in the `actions` array property, if any, and executes them one at a time, passing in whatever arguments you pass to `next(..)`, and returning any return value to you in the standard `IteratorResult` object.\n\nHere's how we could use this `tasks` queue:\n\n```js\ntasks.actions.push(\n\tfunction step1(x){\n\t\tconsole.log( \"step 1:\", x );\n\t\treturn x * 2;\n\t},\n\tfunction step2(x,y){\n\t\tconsole.log( \"step 2:\", x, y );\n\t\treturn x + (y * 2);\n\t},\n\tfunction step3(x,y,z){\n\t\tconsole.log( \"step 3:\", x, y, z );\n\t\treturn (x * y) + z;\n\t}\n);\n\nvar it = tasks[Symbol.iterator]();\n\nit.next( 10 );\t\t\t// step 1: 10\n\t\t\t\t\t\t// { value:   20, done: false }\n\nit.next( 20, 50 );\t\t// step 2: 20 50\n\t\t\t\t\t\t// { value:  120, done: false }\n\nit.next( 20, 50, 120 );\t// step 3: 20 50 120\n\t\t\t\t\t\t// { value: 1120, done: false }\n\nit.next();\t\t\t\t// { done: true }\n```\n\nThis particular usage reinforces that iterators can be a pattern for organizing functionality, not just data. 
It's also reminiscent of what we'll see with generators in the next section.\n\nYou could even get creative and define an iterator that represents meta operations on a single piece of data. For example, we could define an iterator for numbers that by default ranges from `0` up to (or down to, for negative numbers) the number in question.\n\nConsider:\n\n```js\nif (!Number.prototype[Symbol.iterator]) {\n\tObject.defineProperty(\n\t\tNumber.prototype,\n\t\tSymbol.iterator,\n\t\t{\n\t\t\twritable: true,\n\t\t\tconfigurable: true,\n\t\t\tenumerable: false,\n\t\t\tvalue: function iterator(){\n\t\t\t\tvar i, inc, done = false, top = +this;\n\n\t\t\t\t// iterate positively or negatively?\n\t\t\t\tinc = 1 * (top < 0 ? -1 : 1);\n\n\t\t\t\treturn {\n\t\t\t\t\t// make the iterator itself an iterable!\n\t\t\t\t\t[Symbol.iterator](){ return this; },\n\n\t\t\t\t\tnext() {\n\t\t\t\t\t\tif (!done) {\n\t\t\t\t\t\t\t// initial iteration always 0\n\t\t\t\t\t\t\tif (i == null) {\n\t\t\t\t\t\t\t\ti = 0;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t// iterating positively\n\t\t\t\t\t\t\telse if (top >= 0) {\n\t\t\t\t\t\t\t\ti = Math.min(top,i + inc);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t// iterating negatively\n\t\t\t\t\t\t\telse {\n\t\t\t\t\t\t\t\ti = Math.max(top,i + inc);\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t// done after this iteration?\n\t\t\t\t\t\t\tif (i == top) done = true;\n\n\t\t\t\t\t\t\treturn { value: i, done: false };\n\t\t\t\t\t\t}\n\t\t\t\t\t\telse {\n\t\t\t\t\t\t\treturn { done: true };\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t};\n\t\t\t}\n\t\t}\n\t);\n}\n```\n\nNow, what tricks does this creativity afford us?\n\n```js\nfor (var i of 3) {\n\tconsole.log( i );\n}\n// 0 1 2 3\n\n[...-3];\t\t\t\t// [0,-1,-2,-3]\n```\n\nThose are some fun tricks, though the practical utility is somewhat debatable. 
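Incidentally, once generators (covered in the next section) are in your toolbelt, the same meta operation could be sketched much more compactly; this is a variation I'm improvising here, not code from earlier:

```javascript
if (!Number.prototype[Symbol.iterator]) {
	Object.defineProperty(
		Number.prototype,
		Symbol.iterator,
		{
			writable: true,
			configurable: true,
			enumerable: false,
			// a generator function makes the iterator for us
			value: function*(){
				var top = +this,
					inc = top < 0 ? -1 : 1;

				// count from 0 toward `top`, inclusive
				for (var i = 0; i != top + inc; i += inc) {
					yield i;
				}
			}
		}
	);
}

[...3];					// [0,1,2,3]
[...-2];				// [0,-1,-2]
```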
But then again, one might wonder why ES6 didn't just ship with such a minor but delightful easter egg of a feature!?\n\nI'd be remiss if I didn't at least remind you that extending native prototypes as I'm doing in the previous snippet is something you should only do with caution and awareness of potential hazards.\n\nIn this case, the chances that you'll have a collision with other code or even a future JS feature are probably exceedingly low. But just beware of the slight possibility. And document what you're doing verbosely for posterity's sake.\n\n**Note:** I've expounded on this particular technique in this blog post (http://blog.getify.com/iterating-es6-numbers/) if you want more details. And this comment (http://blog.getify.com/iterating-es6-numbers/comment-page-1/#comment-535294) even suggests a similar trick but for making string character ranges.\n\n### Iterator Consumption\n\nWe've already shown consuming an iterator item by item with the `for..of` loop. But there are other ES6 structures that can consume iterators.\n\nLet's consider the iterator attached to this array (though any iterator we choose would have the following behaviors):\n\n```js\nvar a = [1,2,3,4,5];\n```\n\nThe `...` spread operator fully exhausts an iterator. Consider:\n\n```js\nfunction foo(x,y,z,w,p) {\n\tconsole.log( x + y + z + w + p );\n}\n\nfoo( ...a );\t\t\t// 15\n```\n\n`...` can also spread an iterator inside an array:\n\n```js\nvar b = [ 0, ...a, 6 ];\nb;\t\t\t\t\t\t// [0,1,2,3,4,5,6]\n```\n\nArray destructuring (see \"Destructuring\" in Chapter 2) can partially or completely (if paired with a `...` rest/gather operator) consume an iterator:\n\n```js\nvar it = a[Symbol.iterator]();\n\nvar [x,y] = it;\t\t\t// take just the first two elements from `it`\nvar [z, ...w] = it;\t\t// take the third, then the rest all at once\n\n// is `it` fully exhausted? 
Yep.\nit.next();\t\t\t\t// { value: undefined, done: true }\n\nx;\t\t\t\t\t\t// 1\ny;\t\t\t\t\t\t// 2\nz;\t\t\t\t\t\t// 3\nw;\t\t\t\t\t\t// [4,5]\n```\n\n## Generators\n\nAll functions run to completion, right? In other words, once a function starts running, it finishes before anything else can interrupt.\n\nAt least that's how it's been for the whole history of JavaScript up to this point. As of ES6, a new somewhat exotic form of function is being introduced, called a generator. A generator can pause itself in mid-execution, and can be resumed either right away or at a later time. So it clearly does not hold the run-to-completion guarantee that normal functions do.\n\nMoreover, each pause/resume cycle in mid-execution is an opportunity for two-way message passing, where the generator can return a value, and the controlling code that resumes it can send a value back in.\n\nAs with iterators in the previous section, there are multiple ways to think about what a generator is, or rather what it's most useful for. There's no one right answer, but we'll try to consider several angles.\n\n**Note:** See the *Async & Performance* title of this series for more information about generators, and also see Chapter 4 of this current title.\n\n### Syntax\n\nThe generator function is declared with this new syntax:\n\n```js\nfunction *foo() {\n\t// ..\n}\n```\n\nThe position of the `*` is not functionally relevant. The same declaration could be written as any of the following:\n\n```js\nfunction *foo()  { .. }\nfunction* foo()  { .. }\nfunction * foo() { .. }\nfunction*foo()   { .. }\n..\n```\n\nThe *only* difference here is stylistic preference. Most other literature seems to prefer `function* foo(..) { .. }`. I prefer `function *foo(..) { .. }`, so that's how I'll present them for the rest of this title.\n\nMy reason is purely didactic in nature. In this text, when referring to a generator function, I will use `*foo(..)`, as opposed to `foo(..)` for a normal function. 
I observe that `*foo(..)` more closely matches the `*` positioning of `function *foo(..) { .. }`.\n\nMoreover, as we saw in Chapter 2 with concise methods, there's a concise generator form in object literals:\n\n```js\nvar a = {\n\t*foo() { .. }\n};\n```\n\nI would say that with concise generators, `*foo() { .. }` is rather more natural than `* foo() { .. }`. So that further argues for matching the consistency with `*foo()`.\n\nConsistency eases understanding and learning.\n\n#### Executing a Generator\n\nThough a generator is declared with `*`, you still execute it like a normal function:\n\n```js\nfoo();\n```\n\nYou can still pass it arguments, as in:\n\n```js\nfunction *foo(x,y) {\n\t// ..\n}\n\nfoo( 5, 10 );\n```\n\nThe major difference is that executing a generator, like `foo(5,10)`, doesn't actually run the code in the generator. Instead, it produces an iterator that will control the generator to execute its code.\n\nWe'll come back to this later in \"Iterator Control,\" but briefly:\n\n```js\nfunction *foo() {\n\t// ..\n}\n\nvar it = foo();\n\n// to start/advance `*foo()`, call\n// `it.next(..)`\n```\n\n#### `yield`\n\nGenerators also have a new keyword you can use inside them, to signal the pause point: `yield`. Consider:\n\n```js\nfunction *foo() {\n\tvar x = 10;\n\tvar y = 20;\n\n\tyield;\n\n\tvar z = x + y;\n}\n```\n\nIn this `*foo()` generator, the operations on the first two lines would run at the beginning, then `yield` would pause the generator. If and when resumed, the last line of `*foo()` would run. `yield` can appear any number of times (or not at all, technically!) in a generator.\n\nYou can even put `yield` inside a loop, and it can represent a repeated pause point. In fact, a loop that never completes just means a generator that never completes, which is completely valid, and sometimes entirely what you need.\n\n`yield` is not just a pause point. It's an expression that sends out a value when pausing the generator. 
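For instance, the unique-identifier utility imagined at the start of this chapter takes only a few lines as a generator; `numberIDs` is just an illustrative name:

```javascript
function *numberIDs() {
	var id = 0;

	// a loop that never completes: a generator that
	// never completes, which is exactly what we want
	while (true) {
		yield ++id;
	}
}

var ids = numberIDs();

ids.next().value;		// 1
ids.next().value;		// 2
ids.next().value;		// 3
```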
Here's a `while..true` loop in a generator that for each iteration `yield`s a new random number:\n\n```js\nfunction *foo() {\n\twhile (true) {\n\t\tyield Math.random();\n\t}\n}\n```\n\nThe `yield ..` expression not only sends a value -- `yield` without a value is the same as `yield undefined` -- but also receives (e.g., is replaced by) the eventual resumption value. Consider:\n\n```js\nfunction *foo() {\n\tvar x = yield 10;\n\tconsole.log( x );\n}\n```\n\nThis generator will first `yield` out the value `10` when pausing itself. When you resume the generator -- using the `it.next(..)` we referred to earlier -- whatever value (if any) you resume with will replace/complete the whole `yield 10` expression, meaning that value will be assigned to the `x` variable.\n\nA `yield ..` expression can appear anywhere a normal expression can. For example:\n\n```js\nfunction *foo() {\n\tvar arr = [ yield 1, yield 2, yield 3 ];\n\tconsole.log( arr, yield 4 );\n}\n```\n\n`*foo()` here has four `yield ..` expressions. Each `yield` results in the generator pausing to wait for a resumption value that's then used in the various expression contexts.\n\n`yield` is not technically an operator, though when used like `yield 1` it sure looks like it. Because `yield` can be used all by itself as in `var x = yield;`, thinking of it as an operator can sometimes be confusing.\n\nTechnically, `yield ..` is of the same \"expression precedence\" -- similar conceptually to operator precedence -- as an assignment expression like `a = 3`. That means `yield ..` can basically appear anywhere `a = 3` can validly appear.\n\nLet's illustrate the symmetry:\n\n```js\nvar a, b;\n\na = 3;\t\t\t\t\t// valid\nb = 2 + a = 3;\t\t\t// invalid\nb = 2 + (a = 3);\t\t// valid\n\nyield 3;\t\t\t\t// valid\na = 2 + yield 3;\t\t// invalid\na = 2 + (yield 3);\t\t// valid\n```\n\n**Note:** If you think about it, it makes a sort of conceptual sense that a `yield ..` expression would behave similar to an assignment expression. 
When a paused `yield` expression is resumed, it's completed/replaced by the resumption value in a way that's not terribly dissimilar from being \"assigned\" that value.\n\nThe takeaway: if you need `yield ..` to appear in a position where an assignment like `a = 3` would not itself be allowed, it needs to be wrapped in a `( )`.\n\nBecause of the low precedence of the `yield` keyword, almost any expression after a `yield ..` will be computed first before being sent with `yield`. Only the `...` spread operator and the `,` comma operator have lower precedence, meaning they'd bind after the `yield` has been evaluated.\n\nSo just like with multiple operators in normal statements, another case where `( )` might be needed is to override (elevate) the low precedence of `yield`, such as the difference between these expressions:\n\n```js\nyield 2 + 3;\t\t\t// same as `yield (2 + 3)`\n\n(yield 2) + 3;\t\t\t// `yield 2` first, then `+ 3`\n```\n\nJust like `=` assignment, `yield` is also \"right-associative,\" which means that multiple `yield` expressions in succession are treated as having been `( .. )` grouped from right to left. So, `yield yield yield 3` is treated as `yield (yield (yield 3))`. A \"left-associative\" interpretation like `((yield) yield) yield 3` would make no sense.\n\nJust like with operators, it's a good idea to use `( .. )` grouping, even if not strictly required, to disambiguate your intent if `yield` is combined with other operators or `yield`s.\n\n**Note:** See the *Types & Grammar* title of this series for more information about operator precedence and associativity.\n\n#### `yield *`\n\nIn the same way that the `*` makes a `function` declaration into `function *` generator declaration, a `*` makes `yield` into `yield *`, which is a very different mechanism, called *yield delegation*. 
Grammatically, `yield *..` will behave the same as a `yield ..`, as discussed in the previous section.\n\n`yield * ..` requires an iterable; it then invokes that iterable's iterator, and delegates its own host generator's control to that iterator until it's exhausted. Consider:\n\n```js\nfunction *foo() {\n\tyield *[1,2,3];\n}\n```\n\n**Note:** As with the `*` position in a generator's declaration (discussed earlier), the `*` positioning in `yield *` expressions is stylistically up to you. Most other literature prefers `yield* ..`, but I prefer `yield *..`, for very symmetrical reasons as already discussed.\n\nThe `[1,2,3]` value produces an iterator that will step through its values, so the `*foo()` generator will yield those values out as it's consumed. Another way to illustrate the behavior is in yield delegating to another generator:\n\n```js\nfunction *foo() {\n\tyield 1;\n\tyield 2;\n\tyield 3;\n}\n\nfunction *bar() {\n\tyield *foo();\n}\n```\n\nThe iterator produced when `*bar()` calls `*foo()` is delegated to via `yield *`, meaning whatever value(s) `*foo()` produces will be produced by `*bar()`.\n\nWhereas with `yield ..` the completion value of the expression comes from resuming the generator with `it.next(..)`, the completion value of the `yield *..` expression comes from the return value (if any) from the delegated-to iterator.\n\nBuilt-in iterators generally don't have return values, as we covered at the end of the \"Iterator Loop\" section earlier in this chapter. 
But if you define your own custom iterator (or generator), you can design it to `return` a value, which `yield *..` would capture:\n\n```js\nfunction *foo() {\n\tyield 1;\n\tyield 2;\n\tyield 3;\n\treturn 4;\n}\n\nfunction *bar() {\n\tvar x = yield *foo();\n\tconsole.log( \"x:\", x );\n}\n\nfor (var v of bar()) {\n\tconsole.log( v );\n}\n// 1 2 3\n// x: 4\n```\n\nWhile the `1`, `2`, and `3` values are `yield`ed out of `*foo()` and then out of `*bar()`, the `4` value returned from `*foo()` is the completion value of the `yield *foo()` expression, which then gets assigned to `x`.\n\nBecause `yield *` can call another generator (by way of delegating to its iterator), it can also perform a sort of generator recursion by calling itself:\n\n```js\nfunction *foo(x) {\n\tif (x < 3) {\n\t\tx = yield *foo( x + 1 );\n\t}\n\treturn x * 2;\n}\n\nfoo( 1 );\n```\n\nThe result from `foo(1)` and then calling the iterator's `next()` to run it through its recursive steps will be `24`. The first `*foo(..)` run has `x` at value `1`, which is `x < 3`. `x + 1` is passed recursively to `*foo(..)`, so `x` is then `2`. One more recursive call results in `x` of `3`.\n\nNow, because `x < 3` fails, the recursion stops, and `return 3 * 2` gives `6` back to the previous call's `yield *..` expression, which is then assigned to `x`. Another `return 6 * 2` returns `12` back to the previous call's `x`. Finally `12 * 2`, or `24`, is returned from the completed run of the `*foo(..)` generator.\n\n### Iterator Control\n\nEarlier, we briefly introduced the concept that generators are controlled by iterators. Let's fully dig into that now.\n\nRecall the recursive `*foo(..)` from the previous section. Here's how we'd run it:\n\n```js\nfunction *foo(x) {\n\tif (x < 3) {\n\t\tx = yield *foo( x + 1 );\n\t}\n\treturn x * 2;\n}\n\nvar it = foo( 1 );\nit.next();\t\t\t\t// { value: 24, done: true }\n```\n\nIn this case, the generator doesn't really ever pause, as there's no `yield ..` expression. 
Instead, `yield *` just keeps the current iteration step going via the recursive call. So, just one call to the iterator's `next()` function fully runs the generator.\n\nNow let's consider a generator that will have multiple steps and thus multiple produced values:\n\n```js\nfunction *foo() {\n\tyield 1;\n\tyield 2;\n\tyield 3;\n}\n```\n\nWe already know we can consume an iterator, even one attached to a generator like `*foo()`, with a `for..of` loop:\n\n```js\nfor (var v of foo()) {\n\tconsole.log( v );\n}\n// 1 2 3\n```\n\n**Note:** The `for..of` loop requires an iterable. A generator function reference (like `foo`) by itself is not an iterable; you must execute it with `foo()` to get the iterator (which is also an iterable, as we explained earlier in this chapter). You could theoretically extend the `GeneratorPrototype` (the prototype of all generator functions) with a `Symbol.iterator` function that essentially just does `return this()`. That would make the `foo` reference itself an iterable, which means `for (var v of foo) { .. }` (notice no `()` on `foo`) will work.\n\nLet's instead iterate the generator manually:\n\n```js\nfunction *foo() {\n\tyield 1;\n\tyield 2;\n\tyield 3;\n}\n\nvar it = foo();\n\nit.next();\t\t\t\t// { value: 1, done: false }\nit.next();\t\t\t\t// { value: 2, done: false }\nit.next();\t\t\t\t// { value: 3, done: false }\n\nit.next();\t\t\t\t// { value: undefined, done: true }\n```\n\nIf you look closely, there are three `yield` statements and four `next()` calls. That may seem like a strange mismatch. In fact, there will always be one more `next()` call than `yield` expression, assuming all are evaluated and the generator is fully run to completion.\n\nBut if you look at it from the opposite perspective (inside-out instead of outside-in), the matching between `yield` and `next()` makes more sense.\n\nRecall that the `yield ..` expression will be completed by the value you resume the generator with. 
That means the argument you pass to `next(..)` completes whatever `yield ..` expression is currently paused waiting for a completion.\n\nLet's illustrate this perspective this way:\n\n```js\nfunction *foo() {\n\tvar x = yield 1;\n\tvar y = yield 2;\n\tvar z = yield 3;\n\tconsole.log( x, y, z );\n}\n```\n\nIn this snippet, each `yield ..` is sending a value out (`1`, `2`, `3`), but more directly, it's pausing the generator to wait for a value. In other words, it's almost like asking the question, \"What value should I use here? I'll wait to hear back.\"\n\nNow, here's how we control `*foo()` to start it up:\n\n```js\nvar it = foo();\n\nit.next();\t\t\t\t// { value: 1, done: false }\n```\n\nThat first `next()` call is starting up the generator from its initial paused state, and running it to the first `yield`. At the moment you call that first `next()`, there's no `yield ..` expression waiting for a completion. If you passed a value to that first `next()` call, it would currently just be thrown away, because no `yield` is waiting to receive such a value.\n\n**Note:** An early proposal for the \"beyond ES6\" timeframe *would* let you access a value passed to an initial `next(..)` call via a separate meta property (see Chapter 7) inside the generator.\n\nNow, let's answer the currently pending question, \"What value should I assign to `x`?\" We'll answer it by sending a value to the *next* `next(..)` call:\n\n```js\nit.next( \"foo\" );\t\t// { value: 2, done: false }\n```\n\nNow, the `x` will have the value `\"foo\"`, but we've also asked a new question, \"What value should I assign to `y`?\" And we answer:\n\n```js\nit.next( \"bar\" );\t\t// { value: 3, done: false }\n```\n\nAnswer given, another question asked. 
Final answer:\n\n```js\nit.next( \"baz\" );\t\t// \"foo\" \"bar\" \"baz\"\n\t\t\t\t\t\t// { value: undefined, done: true }\n```\n\nNow it should be clearer how each `yield ..` \"question\" is answered by the *next* `next(..)` call, and so the \"extra\" `next()` call we observed is always just the initial one that starts everything going.\n\nLet's put all those steps together:\n\n```js\nvar it = foo();\n\n// start up the generator\nit.next();\t\t\t\t// { value: 1, done: false }\n\n// answer first question\nit.next( \"foo\" );\t\t// { value: 2, done: false }\n\n// answer second question\nit.next( \"bar\" );\t\t// { value: 3, done: false }\n\n// answer third question\nit.next( \"baz\" );\t\t// \"foo\" \"bar\" \"baz\"\n\t\t\t\t\t\t// { value: undefined, done: true }\n```\n\nYou can think of a generator as a producer of values, in which case each iteration is simply producing a value to be consumed.\n\nBut in a more general sense, perhaps it's appropriate to think of generators as controlled, progressive code execution, much like the `tasks` queue example from the earlier \"Custom Iterators\" section.\n\n**Note:** That perspective is exactly the motivation for how we'll revisit generators in Chapter 4. Specifically, there's no reason that `next(..)` has to be called right away after the previous `next(..)` finishes. While the generator's inner execution context is paused, the rest of the program continues unblocked, including the ability for asynchronous actions to control when the generator is resumed.\n\n### Early Completion\n\nAs we covered earlier in this chapter, the iterator attached to a generator supports the optional `return(..)` and `throw(..)` methods. 
Both of them have the effect of aborting a paused generator immediately.\n\nConsider:\n\n```js\nfunction *foo() {\n\tyield 1;\n\tyield 2;\n\tyield 3;\n}\n\nvar it = foo();\n\nit.next();\t\t\t\t// { value: 1, done: false }\n\nit.return( 42 );\t\t// { value: 42, done: true }\n\nit.next();\t\t\t\t// { value: undefined, done: true }\n```\n\n`return(x)` is kind of like forcing a `return x` to be processed at exactly that moment, such that you get the specified value right back. Once a generator is completed, either normally or early as shown, it no longer processes any code or returns any values.\n\nIn addition to `return(..)` being callable manually, it's also called automatically by the ES6 constructs that consume iterators -- such as the `for..of` loop and the `...` spread operator -- but only when the iteration is terminated early (a `break`, an uncaught exception, etc.), not at a normal, fully exhausted end of iteration.\n\nThe purpose for this capability is so the generator can be notified if the controlling code is no longer going to iterate over it anymore, so that it can perhaps do any cleanup tasks (freeing up resources, resetting status, etc.). Identical to a normal function cleanup pattern, the main way to accomplish this is to use a `finally` clause:\n\n```js\nfunction *foo() {\n\ttry {\n\t\tyield 1;\n\t\tyield 2;\n\t\tyield 3;\n\t}\n\tfinally {\n\t\tconsole.log( \"cleanup!\" );\n\t}\n}\n\nfor (var v of foo()) {\n\tconsole.log( v );\n}\n// 1 2 3\n// cleanup!\n\nvar it = foo();\n\nit.next();\t\t\t\t// { value: 1, done: false }\nit.return( 42 );\t\t// cleanup!\n\t\t\t\t\t\t// { value: 42, done: true }\n```\n\n**Warning:** Do not put a `yield` statement inside the `finally` clause! It's valid and legal, but it's a really terrible idea. It acts in a sense as deferring the completion of the `return(..)` call you made, as any `yield ..` expressions in the `finally` clause are respected to pause and send messages; you don't immediately get a completed generator as expected. 
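A minimal sketch of that deferral behavior (illustrative code, not in the original text):

```js
function *foo() {
	try {
		yield 1;
	}
	finally {
		// terrible idea: this `yield` delays the `return(..)` completion
		yield "cleanup";
	}
}

var it = foo();

it.next();				// { value: 1, done: false }

// `return(42)` runs the `finally` clause, which pauses at its
// `yield`, so the generator is *not* completed yet:
it.return( 42 );		// { value: "cleanup", done: false }

// only the next resume lets the original `return(..)` finish:
it.next();				// { value: 42, done: true }
```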
There's basically no good reason to opt in to that crazy *bad part*, so avoid doing so!\n\nIn addition to the previous snippet showing how `return(..)` aborts the generator while still triggering the `finally` clause, it also demonstrates that a generator produces a whole new iterator each time it's called. In fact, you can use multiple iterators attached to the same generator concurrently:\n\n```js\nfunction *foo() {\n\tyield 1;\n\tyield 2;\n\tyield 3;\n}\n\nvar it1 = foo();\nit1.next();\t\t\t\t// { value: 1, done: false }\nit1.next();\t\t\t\t// { value: 2, done: false }\n\nvar it2 = foo();\nit2.next();\t\t\t\t// { value: 1, done: false }\n\nit1.next();\t\t\t\t// { value: 3, done: false }\n\nit2.next();\t\t\t\t// { value: 2, done: false }\nit2.next();\t\t\t\t// { value: 3, done: false }\n\nit2.next();\t\t\t\t// { value: undefined, done: true }\nit1.next();\t\t\t\t// { value: undefined, done: true }\n```\n\n#### Early Abort\n\nInstead of calling `return(..)`, you can call `throw(..)`. Just like `return(x)` is essentially injecting a `return x` into the generator at its current pause point, calling `throw(x)` is essentially like injecting a `throw x` at the pause point.\n\nOther than the exception behavior (we cover what that means to `try` clauses in the next section), `throw(..)` produces the same sort of early completion that aborts the generator's run at its current pause point. 
For example:\n\n```js\nfunction *foo() {\n\tyield 1;\n\tyield 2;\n\tyield 3;\n}\n\nvar it = foo();\n\nit.next();\t\t\t\t// { value: 1, done: false }\n\ntry {\n\tit.throw( \"Oops!\" );\n}\ncatch (err) {\n\tconsole.log( err );\t// Exception: Oops!\n}\n\nit.next();\t\t\t\t// { value: undefined, done: true }\n```\n\nBecause `throw(..)` basically injects a `throw ..` in replacement of the `yield 1` line of the generator, and nothing handles this exception, it immediately propagates back out to the calling code, which handles it with a `try..catch`.\n\nUnlike `return(..)`, the iterator's `throw(..)` method is never called automatically.\n\nOf course, though not shown in the previous snippet, if a `try..finally` clause was waiting inside the generator when you call `throw(..)`, the `finally` clause would be given a chance to complete before the exception is propagated back to the calling code.\n\n### Error Handling\n\nAs we've already hinted, error handling with generators can be expressed with `try..catch`, which works in both inbound and outbound directions:\n\n```js\nfunction *foo() {\n\ttry {\n\t\tyield 1;\n\t}\n\tcatch (err) {\n\t\tconsole.log( err );\n\t}\n\n\tyield 2;\n\n\tthrow \"Hello!\";\n}\n\nvar it = foo();\n\nit.next();\t\t\t\t// { value: 1, done: false }\n\ntry {\n\tit.throw( \"Hi!\" );\t// Hi!\n\t\t\t\t\t\t// { value: 2, done: false }\n\tit.next();\n\n\tconsole.log( \"never gets here\" );\n}\ncatch (err) {\n\tconsole.log( err );\t// Hello!\n}\n```\n\nErrors can also propagate in both directions through `yield *` delegation:\n\n```js\nfunction *foo() {\n\ttry {\n\t\tyield 1;\n\t}\n\tcatch (err) {\n\t\tconsole.log( err );\n\t}\n\n\tyield 2;\n\n\tthrow \"foo: e2\";\n}\n\nfunction *bar() {\n\ttry {\n\t\tyield *foo();\n\n\t\tconsole.log( \"never gets here\" );\n\t}\n\tcatch (err) {\n\t\tconsole.log( err );\n\t}\n}\n\nvar it = bar();\n\ntry {\n\tit.next();\t\t\t// { value: 1, done: false }\n\n\tit.throw( \"e1\" );\t// e1\n\t\t\t\t\t\t// { value: 2, done: false 
}\n\n\tit.next();\t\t\t// foo: e2\n\t\t\t\t\t\t// { value: undefined, done: true }\n}\ncatch (err) {\n\tconsole.log( \"never gets here\" );\n}\n\nit.next();\t\t\t\t// { value: undefined, done: true }\n```\n\nWhen `*foo()` calls `yield 1`, the `1` value passes through `*bar()` untouched, as we've already seen.\n\nBut what's most interesting about this snippet is that when `*foo()` calls `throw \"foo: e2\"`, this error propagates to `*bar()` and is immediately caught by `*bar()`'s `try..catch` block. The error doesn't pass through `*bar()` like the `1` value did.\n\n`*bar()`'s `catch` then does a normal output of `err` (`\"foo: e2\"`) and then `*bar()` finishes normally, which is why the `{ value: undefined, done: true }` iterator result comes back from `it.next()`.\n\nIf `*bar()` didn't have a `try..catch` around the `yield *..` expression, the error would of course propagate all the way out, and on the way through it still would complete (abort) `*bar()`.\n\n### Transpiling a Generator\n\nIs it possible to represent a generator's capabilities prior to ES6? It turns out it is, and there are several great tools that do so, including most notably Facebook's Regenerator tool (https://facebook.github.io/regenerator/).\n\nBut just to better understand generators, let's try our hand at manually converting. Basically, we're going to create a simple closure-based state machine.\n\nWe'll keep our source generator really simple:\n\n```js\nfunction *foo() {\n\tvar x = yield 42;\n\tconsole.log( x );\n}\n```\n\nTo start, we'll need a function called `foo()` that we can execute, which needs to return an iterator:\n\n```js\nfunction foo() {\n\t// ..\n\n\treturn {\n\t\tnext: function(v) {\n\t\t\t// ..\n\t\t}\n\n\t\t// we'll skip `return(..)` and `throw(..)`\n\t};\n}\n```\n\nNow, we need some inner variable to keep track of where we are in the steps of our \"generator\"'s logic. We'll call it `state`. 
There will be three states: `0` initially, `1` while waiting to fulfill the `yield` expression, and `2` once the generator is complete.\n\nEach time `next(..)` is called, we need to process the next step, and then increment `state`. For convenience, we'll put each step into a `case` clause of a `switch` statement, and we'll hold that in an inner function called `nextState(..)` that `next(..)` can call. Also, because `x` is a variable across the overall scope of the \"generator,\" it needs to live outside the `nextState(..)` function.\n\nHere it is all together (obviously somewhat simplified, to keep the conceptual illustration clearer):\n\n```js\nfunction foo() {\n\tfunction nextState(v) {\n\t\tswitch (state) {\n\t\t\tcase 0:\n\t\t\t\tstate++;\n\n\t\t\t\t// the `yield` expression\n\t\t\t\treturn 42;\n\t\t\tcase 1:\n\t\t\t\tstate++;\n\n\t\t\t\t// `yield` expression fulfilled\n\t\t\t\tx = v;\n\t\t\t\tconsole.log( x );\n\n\t\t\t\t// the implicit `return`\n\t\t\t\treturn undefined;\n\n\t\t\t// no need to handle state `2`\n\t\t}\n\t}\n\n\tvar state = 0, x;\n\n\treturn {\n\t\tnext: function(v) {\n\t\t\tvar ret = nextState( v );\n\n\t\t\treturn { value: ret, done: (state == 2) };\n\t\t}\n\n\t\t// we'll skip `return(..)` and `throw(..)`\n\t};\n}\n```\n\nAnd finally, let's test our pre-ES6 \"generator\":\n\n```js\nvar it = foo();\n\nit.next();\t\t\t\t// { value: 42, done: false }\n\nit.next( 10 );\t\t\t// 10\n\t\t\t\t\t\t// { value: undefined, done: true }\n```\n\nNot bad, huh? Hopefully this exercise solidifies in your mind that generators are actually just simple syntax for state machine logic. 
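As a further exercise (a sketch of my own, not part of the original snippet), the skipped `return(..)` could be supported by letting it force the state machine straight into its completed state:

```js
function foo() {
	function nextState(v) {
		switch (state) {
			case 0:
				state++;

				// the `yield` expression
				return 42;
			case 1:
				state++;

				// `yield` expression fulfilled
				x = v;
				console.log( x );

				// the implicit `return`
				return undefined;

			// no need to handle state `2`
		}
	}

	var state = 0, x;

	return {
		next: function(v) {
			var ret = nextState( v );

			return { value: ret, done: (state == 2) };
		},

		// early completion: jump straight to state `2`
		"return": function(v) {
			state = 2;

			return { value: v, done: true };
		}

		// we'll still skip `throw(..)`
	};
}

var it = foo();

it.next();				// { value: 42, done: false }
it.return( 10 );		// { value: 10, done: true }
it.next();				// { value: undefined, done: true }
```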
That makes them widely applicable.\n\n### Generator Uses\n\nSo, now that we much more deeply understand how generators work, what are they useful for?\n\nWe've seen two major patterns:\n\n* *Producing a series of values:* This usage can be simple (e.g., random strings or incremented numbers), or it can represent more structured data access (e.g., iterating over rows returned from a database query).\n\n   Either way, we use the iterator to control a generator so that some logic can be invoked for each call to `next(..)`. Normal iterators on data structures merely pull values without any controlling logic.\n* *Queue of tasks to perform serially:* This usage often represents flow control for the steps in an algorithm, where each step requires retrieval of data from some external source. The fulfillment of each piece of data may be immediate, or may be asynchronously delayed.\n\n   From the perspective of the code inside the generator, the details of sync or async at a `yield` point are entirely opaque. Moreover, these details are intentionally abstracted away, such as not to obscure the natural sequential expression of steps with such implementation complications. Abstraction also means the implementations can be swapped/refactored often without touching the code in the generator at all.\n\nWhen generators are viewed in light of these uses, they become a lot more than just a different or nicer syntax for a manual state machine. They are a powerful abstraction tool for organizing and controlling orderly production and consumption of data.\n\n## Modules\n\nI don't think it's an exaggeration to suggest that the single most important code organization pattern in all of JavaScript is, and always has been, the module. 
For myself, and I think for a large cross-section of the community, the module pattern drives the vast majority of code.\n\n### The Old Way\n\nThe traditional module pattern is based on an outer function with inner variables and functions, and a returned \"public API\" with methods that have closure over the inner data and capabilities. It's often expressed like this:\n\n```js\nfunction Hello(name) {\n\tfunction greeting() {\n\t\tconsole.log( \"Hello \" + name + \"!\" );\n\t}\n\n\t// public API\n\treturn {\n\t\tgreeting: greeting\n\t};\n}\n\nvar me = Hello( \"Kyle\" );\nme.greeting();\t\t\t// Hello Kyle!\n```\n\nThis `Hello(..)` module can produce multiple instances by being called subsequent times. Sometimes, a module is only called for as a singleton (i.e., it just needs one instance), in which case a slight variation on the previous snippet, using an IIFE, is common:\n\n```js\nvar me = (function Hello(name){\n\tfunction greeting() {\n\t\tconsole.log( \"Hello \" + name + \"!\" );\n\t}\n\n\t// public API\n\treturn {\n\t\tgreeting: greeting\n\t};\n})( \"Kyle\" );\n\nme.greeting();\t\t\t// Hello Kyle!\n```\n\nThis pattern is tried and tested. It's also flexible enough to have a wide assortment of variations for a number of different scenarios.\n\nOne of the most common is the Asynchronous Module Definition (AMD), and another is the Universal Module Definition (UMD). We won't cover the particulars of these patterns and techniques here, but they're explained extensively in many places online.\n\n### Moving Forward\n\nAs of ES6, we no longer need to rely on the enclosing function and closure to provide us with module support. ES6 modules have first class syntactic and functional support.\n\nBefore we get into the specific syntax, it's important to understand some fairly significant conceptual differences with ES6 modules compared to how you may have dealt with modules in the past:\n\n* ES6 uses file-based modules, meaning one module per file. 
At this time, there is no standardized way of combining multiple modules into a single file.\n\n   That means that if you are going to load ES6 modules directly into a browser web application, you will be loading them individually, not as a large bundle in a single file as has been common in performance optimization efforts.\n\n   It's expected that the contemporaneous advent of HTTP/2 will significantly mitigate any such performance concerns, as it operates on a persistent socket connection and thus can very efficiently load many smaller files in parallel and interleaved with one another.\n* The API of an ES6 module is static. That is, you define statically what all the top-level exports are on your module's public API, and those cannot be amended later.\n\n   Some uses are accustomed to being able to provide dynamic API definitions, where methods can be added/removed/replaced in response to runtime conditions. Either these uses will have to change to fit with ES6 static APIs, or they will have to restrain the dynamic changes to properties/methods of a second-level object.\n* ES6 modules are singletons. That is, there's only one instance of the module, which maintains its state. Every time you import that module into another module, you get a reference to the one centralized instance. If you want to be able to produce multiple module instances, your module will need to provide some sort of factory to do it.\n* The properties and methods you expose on a module's public API are not just normal assignments of values or references. 
They are actual bindings (almost like pointers) to the identifiers in your inner module definition.\n\n   In pre-ES6 modules, if you put a property on your public API that holds a primitive value like a number or string, that property assignment was by value-copy, and any internal update of a corresponding variable would be separate and not affect the public copy on the API object.\n\n   With ES6, exporting a local private variable, even if it currently holds a primitive string/number/etc, exports a binding to the variable. If the module changes the  variable's value, the external import binding now resolves to that new value.\n* Importing a module is the same thing as statically requesting it to load (if it hasn't already). If you're in a browser, that implies a blocking load over the network. If you're on a server (i.e., Node.js), it's a blocking load from the filesystem.\n\n   However, don't panic about the performance implications. Because ES6 modules have static definitions, the import requirements can be statically scanned, and loads will happen preemptively, even before you've used the module.\n\n   ES6 doesn't actually specify or handle the mechanics of how these load requests work. There's a separate notion of a Module Loader, where each hosting environment (browser, Node.js, etc.) provides a default Loader appropriate to the environment. The importing of a module uses a string value to represent where to get the module (URL, file path, etc.), but this value is opaque in your program and only meaningful to the Loader itself.\n\n   You can define your own custom Loader if you want more fine-grained control than the default Loader affords -- which is basically none, as it's totally hidden from your program's code.\n\nAs you can see, ES6 modules will serve the overall use case of organizing code with encapsulation, controlling public APIs, and referencing dependency imports. 
But they have a very particular way of doing so, and that may or may not fit very closely with how you've already been doing modules for years.\n\n#### CommonJS\n\nThere's a similar, but not fully compatible, module syntax called CommonJS, which is familiar to those in the Node.js ecosystem.\n\nFor lack of a more tactful way to say this, in the long run, ES6 modules essentially are bound to supersede all previous formats and standards for modules, even CommonJS, as they are built on syntactic support in the language. This will, in time, inevitably win out as the superior approach, if for no other reason than ubiquity.\n\nWe face a fairly long road to get to that point, though. There are literally hundreds of thousands of CommonJS style modules in the server-side JavaScript world, and 10 times that many modules of varying format standards (UMD, AMD, ad hoc) in the browser world. It will take many years for the transitions to make any significant progress.\n\nIn the interim, module transpilers/converters will be an absolute necessity. You might as well just get used to that new reality. Whether you author in regular modules, AMD, UMD, CommonJS, or ES6, these tools will have to parse and convert to a format that is suitable for whatever environment your code will run in.\n\nFor Node.js, that probably means (for now) that the target is CommonJS. For the browser, it's probably UMD or AMD. Expect lots of flux on this over the next few years as these tools mature and best practices emerge.\n\nFrom here on out, my best advice on modules is this: whatever format you've been religiously attached to with strong affinity, also develop an appreciation for and understanding of ES6 modules, such as they are, and let your other module tendencies fade. They *are* the future of modules in JS, even if that reality is a bit of a ways off.\n\n### The New Way\n\nThe two main new keywords that enable ES6 modules are `import` and `export`. 
There's lots of nuance to the syntax, so let's take a deeper look.\n\n**Warning:** An important detail that's easy to overlook: both `import` and `export` must always appear in the top-level scope of their respective usage. For example, you cannot put either an `import` or `export` inside an `if` conditional; they must appear outside of all blocks and functions.\n\n#### `export`ing API Members\n\nThe `export` keyword is either put in front of a declaration, or used as an operator (of sorts) with a special list of bindings to export. Consider:\n\n```js\nexport function foo() {\n\t// ..\n}\n\nexport var awesome = 42;\n\nvar bar = [1,2,3];\nexport { bar };\n```\n\nAnother way of expressing the same exports:\n\n```js\nfunction foo() {\n\t// ..\n}\n\nvar awesome = 42;\nvar bar = [1,2,3];\n\nexport { foo, awesome, bar };\n```\n\nThese are all called *named exports*, as you are in effect exporting the name bindings of the variables/functions/etc.\n\nAnything you don't *label* with `export` stays private inside the scope of the module. That is, although something like `var bar = ..` looks like it's declaring at the top-level global scope, the top-level scope is actually the module itself; there is no global scope in modules.\n\n**Note:** Modules *do* still have access to `window` and all the \"globals\" that hang off it, just not as lexical top-level scope. However, you really should stay away from the globals in your modules if at all possible.\n\nYou can also \"rename\" (aka alias) a module member during named export:\n\n```js\nfunction foo() { .. }\n\nexport { foo as bar };\n```\n\nWhen this module is imported, only the `bar` member name is available to import; `foo` stays hidden inside the module.\n\nModule exports are not just normal assignments of values or references, as you're accustomed to with the `=` assignment operator. 
Actually, when you export something, you're exporting a binding (kinda like a pointer) to that thing (variable, etc.).\n\nWithin your module, if you change the value of a variable you already exported a binding to, even if it's already been imported (see the next section), the imported binding will resolve to the current (updated) value.\n\nConsider:\n\n```js\nvar awesome = 42;\nexport { awesome };\n\n// later\nawesome = 100;\n```\n\nWhen this module is imported, regardless of whether that's before or after the `awesome = 100` setting, once that assignment has happened, the imported binding resolves to the `100` value, not `42`.\n\nThat's because the binding is, in essence, a reference to, or a pointer to, the `awesome` variable itself, rather than a copy of its value. This is a mostly unprecedented concept for JS introduced with ES6 module bindings.\n\nThough you can clearly use `export` multiple times inside a module's definition, ES6 definitely prefers the approach that a module has a single export, which is known as a *default export*. In the words of some members of the TC39 committee, you're \"rewarded with simpler `import` syntax\" if you follow that pattern, and conversely \"penalized\" with more verbose syntax if you don't.\n\nA default export sets a particular exported binding to be the default when importing the module. The name of the binding is literally `default`. As you'll see later, when importing module bindings you can also rename them, as you commonly will with a default export.\n\nThere can only be one `default` per module definition. We'll cover `import` in the next section, and you'll see how the `import` syntax is more concise if the module has a default export.\n\nThere's a subtle nuance to default export syntax that you should pay close attention to. Compare these two snippets:\n\n```js\nfunction foo(..) {\n\t// ..\n}\n\nexport default foo;\n```\n\nAnd this one:\n\n```js\nfunction foo(..) 
{\n\t// ..\n}\n\nexport { foo as default };\n```\n\nIn the first snippet, you are exporting a binding to the function expression value at that moment, *not* to the identifier `foo`. In other words, `export default ..` takes an expression. If you later assign `foo` to a different value inside your module, the module import still reveals the function originally exported, not the new value.\n\nBy the way, the first snippet could also have been written as:\n\n```js\nexport default function foo(..) {\n\t// ..\n}\n```\n\n**Warning:** Although the `function foo..` part here is technically a function expression, for the purposes of the internal scope of the module, it's treated like a function declaration, in that the `foo` name is bound in the module's top-level scope (often called \"hoisting\"). The same is true for `export default class Foo..`. However, while you *can* do `export var foo = ..`, you currently cannot do `export default var foo = ..` (or `let` or `const`), in a frustrating case of inconsistency. At the time of this writing, there's already discussion of adding that capability in soon, post-ES6, for consistency sake.\n\nRecall the second snippet again:\n\n```js\nfunction foo(..) {\n\t// ..\n}\n\nexport { foo as default };\n```\n\nIn this version of the module export, the default export binding is actually to the `foo` identifier rather than its value, so you get the previously described binding behavior (i.e., if you later change `foo`'s value, the value seen on the import side will also be updated).\n\nBe very careful of this subtle gotcha in default export syntax, especially if your logic calls for export values to be updated. If you never plan to update a default export's value, `export default ..` is fine. If you do plan to update the value, you must use `export { .. as default }`. 
Either way, make sure to comment your code to explain your intent!\n\nBecause there can only be one `default` per module, you may be tempted to design your module with one default export of an object with all your API methods on it, such as:\n\n```js\nexport default {\n\tfoo() { .. },\n\tbar() { .. },\n\t..\n};\n```\n\nThat pattern seems to map closely to how a lot of developers have already structured their pre-ES6 modules, so it seems like a natural approach. Unfortunately, it has some downsides and is officially discouraged.\n\nIn particular, the JS engine cannot statically analyze the contents of a plain object, which means it cannot do some optimizations for static `import` performance. The advantage of having each member individually and explicitly exported is that the engine *can* do the static analysis and optimization.\n\nIf your API has more than one member already, it seems like these principles -- one default export per module, and all API members as named exports -- are in conflict, doesn't it? But you *can* have a single default export as well as other named exports; they are not mutually exclusive.\n\nSo, instead of this (discouraged) pattern:\n\n```js\nexport default function foo() { .. }\n\nfoo.bar = function() { .. };\nfoo.baz = function() { .. };\n```\n\nYou can do:\n\n```js\nexport default function foo() { .. }\n\nexport function bar() { .. }\nexport function baz() { .. }\n```\n\n**Note:** In this previous snippet, I used the name `foo` for the function that `default` labels. That `foo` name, however, is ignored for the purposes of export -- `default` is actually the exported name. When you import this default binding, you can give it whatever name you want, as you'll see in the next section.\n\nAlternatively, some will prefer:\n\n```js\nfunction foo() { .. }\nfunction bar() { .. }\nfunction baz() { .. }\n\nexport { foo as default, bar, baz, .. 
};\n```\n\nThe effects of mixing default and named exports will be more clear when we cover `import` shortly. But essentially it means that the most concise default import form would only retrieve the `foo()` function. The user could additionally manually list `bar` and `baz` as named imports, if they want them.\n\nYou can probably imagine how tedious that's going to be for consumers of your module if you have lots of named export bindings. There is a wildcard import form where you import all of a module's exports within a single namespace object, but there's no way to wildcard import to top-level bindings.\n\nAgain, the ES6 module mechanism is intentionally designed to discourage modules with lots of exports; relatively speaking, it's desired that such approaches be a little more difficult, as a sort of social engineering to encourage simple module design in favor of large/complex module design.\n\nI would probably recommend you not mix default export with named exports, especially if you have a large API and refactoring to separate modules isn't practical or desired. In that case, just use all named exports, and document that consumers of your module should probably use the `import * as ..` (namespace import, discussed in the next section) approach to bring the whole API in at once on a single namespace.\n\nWe mentioned this earlier, but let's come back to it in more detail. Other than the `export default ...` form that exports an expression value binding, all other export forms are exporting bindings to local identifiers. 
For those bindings, if you change the value of a variable inside a module after exporting, the external imported binding will access the updated value:\n\n```js\nvar foo = 42;\nexport { foo as default };\n\nexport var bar = \"hello world\";\n\nfoo = 10;\nbar = \"cool\";\n```\n\nWhen you import this module, the `default` and `bar` exports will be bound to the local variables `foo` and `bar`, meaning they will reveal the updated `10` and `\"cool\"` values. The values at time of export are irrelevant. The values at time of import are irrelevant. The bindings are live links, so all that matters is what the current value is when you access the binding.\n\n**Warning:** Two-way bindings are not allowed. If you import a `foo` from a module, and try to change the value of your imported `foo` variable, an error will be thrown! We'll revisit that in the next section.\n\nYou can also re-export another module's exports, such as:\n\n```js\nexport { foo, bar } from \"baz\";\nexport { foo as FOO, bar as BAR } from \"baz\";\nexport * from \"baz\";\n```\n\nThose forms are similar to just first importing from the `\"baz\"` module then listing its members explicitly for export from your module. However, in these forms, the members of the `\"baz\"` module are never imported to your module's local scope; they sort of pass through untouched.\n\n#### `import`ing API Members\n\nTo import a module, unsurprisingly you use the `import` statement. Just as `export` has several nuanced variations, so does `import`, so spend plenty of time considering the following issues and experimenting with your options.\n\nIf you want to import certain specific named members of a module's API into your top-level scope, you use this syntax:\n\n```js\nimport { foo, bar, baz } from \"foo\";\n```\n\n**Warning:** The `{ .. }` syntax here may look like an object literal, or even an object destructuring syntax. However, its form is special just for modules, so be careful not to confuse it with other `{ .. 
}` patterns elsewhere.\n\nThe `\"foo\"` string is called a *module specifier*. Because the whole goal is statically analyzable syntax, the module specifier must be a string literal; it cannot be a variable holding the string value.\n\nFrom the perspective of your ES6 code and the JS engine itself, the contents of this string literal are completely opaque and meaningless. The module loader will interpret this string as an instruction of where to find the desired module, either as a URL path or a local filesystem path.\n\nThe `foo`, `bar`, and `baz` identifiers listed must match named exports on the module's API (static analysis and error assertion apply). They are bound as top-level identifiers in your current scope:\n\n```js\nimport { foo } from \"foo\";\n\nfoo();\n```\n\nYou can rename the bound identifiers imported, as:\n\n```js\nimport { foo as theFooFunc } from \"foo\";\n\ntheFooFunc();\n```\n\nIf the module has just a default export that you want to import and bind to an identifier, you can opt to skip the `{ .. }` surrounding syntax for that binding. The `import` in this preferred case gets the nicest and most concise of the `import` syntax forms:\n\n```js\nimport foo from \"foo\";\n\n// or:\nimport { default as foo } from \"foo\";\n```\n\n**Note:** As explained in the previous section, the `default` keyword in a module's `export` specifies a named export where the name is actually `default`, as is illustrated by the second more verbose syntax option. The renaming from `default` to, in this case, `foo`, is explicit in the latter syntax and is identical yet implicit in the former syntax.\n\nYou can also import a default export along with other named exports, if the module has such a definition. Recall this module definition from earlier:\n\n```js\nexport default function foo() { .. }\n\nexport function bar() { .. }\nexport function baz() { .. 
}\n```\n\nTo import that module's default export and its two named exports:\n\n```js\nimport FOOFN, { bar, baz as BAZ } from \"foo\";\n\nFOOFN();\nbar();\nBAZ();\n```\n\nThe strongly suggested approach from ES6's module philosophy is that you only import the specific bindings from a module that you need. If a module provides 10 API methods, but you only need two of them, some believe it wasteful to bring in the entire set of API bindings.\n\nOne benefit, besides code being more explicit, is that narrow imports make static analysis and error detection (accidentally using the wrong binding name, for instance) more robust.\n\nOf course, that's just the standard position influenced by ES6 design philosophy; there's nothing that requires adherence to that approach.\n\nMany developers would be quick to point out that such approaches can be more tedious, requiring you to regularly revisit and update your `import` statement(s) each time you realize you need something else from a module. That tedium is the price you pay in exchange for the explicitness.\n\nIn that light, the preference might be to import everything from the module into a single namespace, rather than importing individual members, each directly into the scope. Fortunately, the `import` statement has a syntax variation that can support this style of module consumption, called *namespace import*.\n\nConsider a `\"foo\"` module exported as:\n\n```js\nexport function bar() { .. }\nexport var x = 42;\nexport function baz() { .. }\n```\n\nYou can import that entire API to a single module namespace binding:\n\n```js\nimport * as foo from \"foo\";\n\nfoo.bar();\nfoo.x;\t\t\t// 42\nfoo.baz();\n```\n\n**Note:** The `* as ..` clause requires the `*` wildcard. In other words, you cannot do something like `import { bar, x } as foo from \"foo\"` to bring in only part of the API but still bind to the `foo` namespace. 
I would have liked something like that, but for ES6 it's all or nothing with the namespace import.\n\nIf the module you're importing with `* as ..` has a default export, it is named `default` in the namespace specified. You can additionally name the default import outside of the namespace binding, as a top-level identifier. Consider a `\"world\"` module exported as:\n\n```js\nexport default function foo() { .. }\nexport function bar() { .. }\nexport function baz() { .. }\n```\n\nAnd this `import`:\n\n```js\nimport foofn, * as hello from \"world\";\n\nfoofn();\nhello.default();\nhello.bar();\nhello.baz();\n```\n\nWhile this syntax is valid, it can be rather confusing that one method of the module (the default export) is bound at the top-level of your scope, whereas the rest of the named exports (and one called `default`) are bound as properties on a differently named (`hello`) identifier namespace.\n\nAs I mentioned earlier, my suggestion would be to avoid designing your module exports in this way, to reduce the chances that your module's users will suffer these strange quirks.\n\nAll imported bindings are immutable and/or read-only. Consider the previous import; all of these subsequent assignment attempts will throw `TypeError`s:\n\n```js\nimport foofn, * as hello from \"world\";\n\nfoofn = 42;\t\t\t// (runtime) TypeError!\nhello.default = 42;\t// (runtime) TypeError!\nhello.bar = 42;\t\t// (runtime) TypeError!\nhello.baz = 42;\t\t// (runtime) TypeError!\n```\n\nRecall earlier in the \"`export`ing API Members\" section that we talked about how the `bar` and `baz` bindings are bound to the actual identifiers inside the `\"world\"` module. That means if the module changes those values, `hello.bar` and `hello.baz` now reference the updated values.\n\nBut the immutable/read-only nature of your local imported bindings enforces that you cannot change them from the imported bindings, hence the `TypeError`s. 
That's pretty important, because without those protections, your changes would end up affecting all other consumers of the module (remember: singleton), which could create some very surprising side effects!\n\nMoreover, though a module *can* change its API members from the inside, you should be very cautious of intentionally designing your modules in that fashion. ES6 modules are *intended* to be static, so deviations from that principle should be rare and should be carefully and verbosely documented.\n\n**Warning:** There are module design philosophies where you actually intend to let a consumer change the value of a property on your API, or module APIs are designed to be \"extended\" by having other \"plug-ins\" add to the API namespace. As we just asserted, ES6 module APIs should be thought of and designed as static and unchangeable, which strongly restricts and discourages these alternative module design patterns. You can get around these limitations by exporting a plain object, which of course can then be changed at will. But be careful and think twice before going down that road.\n\nDeclarations that occur as a result of an `import` are \"hoisted\" (see the *Scope & Closures* title of this series). Consider:\n\n```js\nfoo();\n\nimport { foo } from \"foo\";\n```\n\n`foo()` can run because not only did the static resolution of the `import ..` statement figure out what `foo` is during compilation, but it also \"hoisted\" the declaration to the top of the module's scope, thus making it available throughout the module.\n\nFinally, the most basic form of the `import` looks like this:\n\n```js\nimport \"foo\";\n```\n\nThis form does not actually import any of the module's bindings into your scope. It loads (if not already loaded), compiles (if not already compiled), and evaluates (if not already run) the `\"foo\"` module.\n\nIn general, that sort of import is probably not going to be terribly useful. 
There may be niche cases where a module's definition has side effects (such as assigning things to the `window`/global object). You could also envision using `import \"foo\"` as a sort of preload for a module that may be needed later.\n\n### Circular Module Dependency\n\nA imports B. B imports A. How does this actually work?\n\nI'll state off the bat that designing systems with intentional circular dependency is generally something I try to avoid. That having been said, I recognize there are reasons people do this and it can solve some sticky design situations.\n\nLet's consider how ES6 handles this. First, module `\"A\"`:\n\n```js\nimport bar from \"B\";\n\nexport default function foo(x) {\n\tif (x > 10) return bar( x - 1 );\n\treturn x * 2;\n}\n```\n\nNow, module `\"B\"`:\n\n```js\nimport foo from \"A\";\n\nexport default function bar(y) {\n\tif (y > 5) return foo( y / 2 );\n\treturn y * 3;\n}\n```\n\nThese two functions, `foo(..)` and `bar(..)`, would work as standard function declarations if they were in the same scope, because the declarations are \"hoisted\" to the whole scope and thus available to each other regardless of authoring order.\n\nWith modules, you have declarations in entirely different scopes, so ES6 has to do extra work to help make these circular references work.\n\nIn a rough conceptual sense, this is how circular `import` dependencies are validated and resolved:\n\n* If the `\"A\"` module is loaded first, the first step is to scan the file and analyze all the exports, so it can register all those bindings available for import. Then it processes the `import .. from \"B\"`, which signals that it needs to go fetch `\"B\"`.\n* Once the engine loads `\"B\"`, it does the same analysis of its export bindings. When it sees the `import .. from \"A\"`, it knows the API of `\"A\"` already, so it can verify the `import` is valid. Now that it knows the `\"B\"` API, it can also validate the `import .. 
`from \"B\"` in the waiting `\"A\"` module.\n\nIn essence, the mutual imports, along with the static verification that's done to validate both `import` statements, virtually composes the two separate module scopes (via the bindings), such that `foo(..)` can call `bar(..)` and vice versa, just as if they had originally been declared in the same scope.\n\nNow let's try using the two modules together. First, we'll try `foo(..)`:\n\n```js\nimport foo from \"A\";\nfoo( 25 );\t\t\t\t// 11\n```\n\nOr we can try `bar(..)`:\n\n```js\nimport bar from \"B\";\nbar( 25 );\t\t\t\t// 11.5\n```\n\nBy the time either the `foo(25)` or `bar(25)` calls are executed, all the analysis/compilation of all modules has completed. That means `foo(..)` internally knows directly about `bar(..)` and `bar(..)` internally knows directly about `foo(..)`.\n\nIf all we need is to interact with `foo(..)`, then we only need to import the `\"A\"` module. Likewise with `bar(..)` and the `\"B\"` module.\n\nOf course, we *can* import and use both of them if we want to:\n\n```js\nimport foo from \"A\";\nimport bar from \"B\";\n\nfoo( 25 );\t\t\t\t// 11\nbar( 25 );\t\t\t\t// 11.5\n```\n\nThe static loading semantics of the `import` statement mean that modules like `\"A\"` and `\"B\"` that mutually depend on each other via `import` will ensure that both are loaded, parsed, and compiled before either of them runs. So their circular dependency is statically resolved and this works as you'd expect.\n\n### Module Loading\n\nWe asserted at the beginning of this \"Modules\" section that the `import` statement uses a separate mechanism, provided by the hosting environment (browser, Node.js, etc.), to actually resolve the module specifier string into some useful instruction for finding and loading the desired module. 
That mechanism is the system *Module Loader*.\n\nThe default module loader provided by the environment will interpret a module specifier as a URL if in the browser, and (generally) as a local filesystem path if on a server such as Node.js. The default behavior is to assume the loaded file is authored in the ES6 standard module format.\n\nMoreover, you will be able to load a module into the browser via an HTML tag, similar to how current script programs are loaded. At the time of this writing, it's not fully clear if this tag will be `<script type=\"module\">` or `<module>`. ES6 doesn't control that decision, but discussions in the appropriate standards bodies are already well along in parallel with ES6.\n\nWhatever the tag looks like, you can be sure that under the covers it will use the default loader (or a customized one you've pre-specified, as we'll discuss in the next section).\n\nJust like the tag you'll use in markup, the module loader itself is not specified by ES6. It is a separate, parallel standard (http://whatwg.github.io/loader/) controlled currently by the WHATWG browser standards group.\n\nAt the time of this writing, the following discussions reflect an early pass at the API design, and things are likely to change.\n\n#### Loading Modules Outside of Modules\n\nOne use for interacting directly with the module loader is if a non-module needs to load a module. Consider:\n\n```js\n// normal script loaded in browser via `<script>`,\n// `import` is illegal here\n\nReflect.Loader.import( \"foo\" ) // returns a promise for `\"foo\"`\n.then( function(foo){\n\tfoo.bar();\n} );\n```\n\nThe `Reflect.Loader.import(..)` utility fulfills its returned promise with the entire module namespace -- here bound to the callback's `foo` parameter -- just like the `import * as foo ..` namespace import we discussed earlier.\n\n**Note:** The `Reflect.Loader.import(..)` utility returns a promise that is fulfilled once the module is ready. 
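For instance, building on that promise, here's a speculative sketch of loading two modules in parallel. Keep in mind this loader API is an early draft and subject to change, and the module names and API members shown are hypothetical:

```js
// speculative draft loader API; module names are placeholders
Promise.all( [
	Reflect.Loader.import( "foo" ),
	Reflect.Loader.import( "bar" )
] )
.then( function(modules){
	var foo = modules[0], bar = modules[1];

	// both module namespaces are now available
	foo.bar();
	bar.baz();		// hypothetical API member
} );
```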
To import multiple modules, you can compose promises from multiple `Reflect.Loader.import(..)` calls using `Promise.all([ .. ])`. For more information about Promises, see \"Promises\" in Chapter 4.\n\nYou can also use `Reflect.Loader.import(..)` in a real module to dynamically/conditionally load a module, where `import` itself would not work. You might, for instance, choose to load a module containing a polyfill for some ES7+ feature if a feature test reveals it's not defined by the current engine.\n\nFor performance reasons, you'll want to avoid dynamic loading whenever possible, as it hampers the ability of the JS engine to fire off early fetches from its static analysis.\n\n#### Customized Loading\n\nAnother use for directly interacting with the module loader is if you want to customize its behavior through configuration or even redefinition.\n\nAt the time of this writing, there's a polyfill for the module loader API being developed (https://github.com/ModuleLoader/es6-module-loader). While details are scarce and highly subject to change, we can explore what possibilities may eventually land.\n\nThe `Reflect.Loader.import(..)` call may support a second argument for specifying various options to customize the import/load task. For example:\n\n```js\nReflect.Loader.import( \"foo\", { address: \"/path/to/foo.js\" } )\n.then( function(foo){\n\t// ..\n} )\n```\n\nIt's also expected that a customization will be provided (through some means) for hooking into the process of loading a module, where a translation/transpilation could occur after load but before the engine compiles the module.\n\nFor example, you could load something that's not already an ES6-compliant module format (e.g., CoffeeScript, TypeScript, CommonJS, AMD). 
Your translation step could then convert it to an ES6-compliant module for the engine to then process.\n\n## Classes\n\nFrom nearly the beginning of JavaScript, syntax and development patterns have all strived (read: struggled) to put on a facade of supporting class-oriented development. With things like `new` and `instanceof` and a `.constructor` property, who couldn't help but be teased that JS had classes hidden somewhere inside its prototype system?\n\nOf course, JS \"classes\" aren't nearly the same as classical classes. The differences are well documented, so I won't belabor that point any further here.\n\n**Note:** To learn more about the patterns used in JS to fake \"classes,\" and an alternative view of prototypes called \"delegation,\" see the second half of the *this & Object Prototypes* title of this series.\n\n### `class`\n\nAlthough JS's prototype mechanism doesn't work like traditional classes, that doesn't stop the strong tide of demand on the language to extend the syntactic sugar so that expressing \"classes\" looks more like real classes. Enter the ES6 `class` keyword and its associated mechanism.\n\nThis feature is the result of a highly contentious and drawn-out debate, and represents a smaller subset compromise from several strongly opposed views on how to approach JS classes. Most developers who want full classes in JS will find parts of the new syntax quite inviting, but will find important bits still missing. Don't worry, though. TC39 is already working on additional features to augment classes in the post-ES6 timeframe.\n\nAt the heart of the new ES6 class mechanism is the `class` keyword, which identifies a *block* where the contents define the members of a function's prototype. 
Consider:\n\n```js\nclass Foo {\n\tconstructor(a,b) {\n\t\tthis.x = a;\n\t\tthis.y = b;\n\t}\n\n\tgimmeXY() {\n\t\treturn this.x * this.y;\n\t}\n}\n```\n\nSome things to note:\n\n* `class Foo` implies creating a (special) function of the name `Foo`, much like you did pre-ES6.\n* `constructor(..)` identifies the signature of that `Foo(..)` function, as well as its body contents.\n* Class methods use the same \"concise method\" syntax available to object literals, as discussed in Chapter 2. This also includes the concise generator form as discussed earlier in this chapter, as well as the ES5 getter/setter syntax. However, class methods are non-enumerable whereas object methods are by default enumerable.\n* Unlike object literals, there are no commas separating members in a `class` body! In fact, they're not even allowed.\n\nThe `class` syntax definition in the previous snippet can be roughly thought of as this pre-ES6 equivalent, which probably will look fairly familiar to those who've done prototype-style coding before:\n\n```js\nfunction Foo(a,b) {\n\tthis.x = a;\n\tthis.y = b;\n}\n\nFoo.prototype.gimmeXY = function() {\n\treturn this.x * this.y;\n}\n```\n\nIn either the pre-ES6 form or the new ES6 `class` form, this \"class\" can now be instantiated and used just as you'd expect:\n\n```js\nvar f = new Foo( 5, 15 );\n\nf.x;\t\t\t\t\t\t// 5\nf.y;\t\t\t\t\t\t// 15\nf.gimmeXY();\t\t\t\t// 75\n```\n\nCaution! 
Though `class Foo` seems much like `function Foo()`, there are important differences:\n\n* A `Foo(..)` call of `class Foo` *must* be made with `new`, as the pre-ES6 option of `Foo.call( obj )` will *not* work.\n* While `function Foo` is \"hoisted\" (see the *Scope & Closures* title of this series), `class Foo` is not; the `extends ..` clause specifies an expression that cannot be \"hoisted.\" So, you must declare a `class` before you can instantiate it.\n* `class Foo` in the top global scope creates a lexical `Foo` identifier in that scope, but unlike `function Foo` does not create a global object property of that name.\n\nThe established `instanceof` operator still works with ES6 classes, because `class` just creates a constructor function of the same name. However, ES6 introduces a way to customize how `instanceof` works, using `Symbol.hasInstance` (see \"Well-Known Symbols\" in Chapter 7).\n\nAnother way of thinking about `class`, which I find more convenient, is as a *macro* that is used to automatically populate a `prototype` object. Optionally, it also wires up the `[[Prototype]]` relationship if using `extends` (see the next section).\n\nAn ES6 `class` isn't really an entity itself, but a meta concept that wraps around other concrete entities, such as functions and properties, and ties them together.\n\n**Tip:** In addition to the declaration form, a `class` can also be an expression, as in: `var x = class Y { .. }`. 
This is primarily useful for passing a class definition (technically, the constructor itself) as a function argument or assigning it to an object property.\n\n### `extends` and `super`\n\nES6 classes also have syntactic sugar for establishing the `[[Prototype]]` delegation link between two function prototypes -- commonly mislabeled \"inheritance\" or confusingly labeled \"prototype inheritance\" -- using the class-oriented familiar terminology `extends`:\n\n```js\nclass Bar extends Foo {\n\tconstructor(a,b,c) {\n\t\tsuper( a, b );\n\t\tthis.z = c;\n\t}\n\n\tgimmeXYZ() {\n\t\treturn super.gimmeXY() * this.z;\n\t}\n}\n\nvar b = new Bar( 5, 15, 25 );\n\nb.x;\t\t\t\t\t\t// 5\nb.y;\t\t\t\t\t\t// 15\nb.z;\t\t\t\t\t\t// 25\nb.gimmeXYZ();\t\t\t\t// 1875\n```\n\nA significant new addition is `super`, which is actually something not directly possible pre-ES6 (without some unfortunate hack trade-offs). In the constructor, `super` automatically refers to the \"parent constructor,\" which in the previous example is `Foo(..)`. In a method, it refers to the \"parent object,\" such that you can then make a property/method access off it, such as `super.gimmeXY()`.\n\n`Bar extends Foo` of course means to link the `[[Prototype]]` of `Bar.prototype` to `Foo.prototype`. So, `super` in a method like `gimmeXYZ()` specifically means `Foo.prototype`, whereas `super` means `Foo` when used in the `Bar` constructor.\n\n**Note:** `super` is not limited to `class` declarations. It also works in object literals, in much the same way we're discussing here. See \"Object `super`\" in Chapter 2 for more information.\n\n#### There Be `super` Dragons\n\nIt is not insignificant to note that `super` behaves differently depending on where it appears. In fairness, most of the time, that won't be a problem. 
But surprises await if you deviate from a narrow norm.\n\nThere may be cases where in the constructor you would want to reference the `Foo.prototype`, such as to directly access one of its properties/methods. However, `super` in the constructor cannot be used in that way; `super.prototype` will not work. `super(..)` means roughly to call `new Foo(..)`, but isn't actually a usable reference to `Foo` itself.\n\nSymmetrically, you may want to reference the `Foo(..)` function from inside a non-constructor method. `super.constructor` will point at `Foo(..)` the function, but beware that this function can *only* be invoked with `new`. `new super.constructor(..)` would be valid, but it wouldn't be terribly useful in most cases, because you can't make that call use or reference the current `this` object context, which is likely what you'd want.\n\nAlso, `super` looks like it might be driven by a function's context just like `this` -- that is, that they'd both be dynamically bound. However, `super` is not dynamic like `this` is. When a constructor or method makes a `super` reference inside it at declaration time (in the `class` body), that `super` is statically bound to that specific class hierarchy, and cannot be overridden (at least in ES6).\n\nWhat does that mean? It means that if you're in the habit of taking a method from one \"class\" and \"borrowing\" it for another class by overriding its `this`, say with `call(..)` or `apply(..)`, that may very well create surprises if the method you're borrowing has a `super` in it. 
Consider this class hierarchy:\n\n```js\nclass ParentA {\n\tconstructor() { this.id = \"a\"; }\n\tfoo() { console.log( \"ParentA:\", this.id ); }\n}\n\nclass ParentB {\n\tconstructor() { this.id = \"b\"; }\n\tfoo() { console.log( \"ParentB:\", this.id ); }\n}\n\nclass ChildA extends ParentA {\n\tfoo() {\n\t\tsuper.foo();\n\t\tconsole.log( \"ChildA:\", this.id );\n\t}\n}\n\nclass ChildB extends ParentB {\n\tfoo() {\n\t\tsuper.foo();\n\t\tconsole.log( \"ChildB:\", this.id );\n\t}\n}\n\nvar a = new ChildA();\na.foo();\t\t\t\t\t// ParentA: a\n\t\t\t\t\t\t\t// ChildA: a\nvar b = new ChildB();\t\t// ParentB: b\nb.foo();\t\t\t\t\t// ChildB: b\n```\n\nAll seems fairly natural and expected in this previous snippet. However, if you try to borrow `b.foo()` and use it in the context of `a` -- by virtue of dynamic `this` binding, such borrowing is quite common and used in many different ways, including mixins most notably -- you may find this result an ugly surprise:\n\n```js\n// borrow `b.foo()` to use in `a` context\nb.foo.call( a );\t\t\t// ParentB: a\n\t\t\t\t\t\t\t// ChildB: a\n```\n\nAs you can see, the `this.id` reference was dynamically rebound so that `: a` is reported in both cases instead of `: b`. But `b.foo()`'s `super.foo()` reference wasn't dynamically rebound, so it still reported `ParentB` instead of the expected `ParentA`.\n\nBecause `b.foo()` references `super`, it is statically bound to the `ChildB`/`ParentB` hierarchy and cannot be used against the `ChildA`/`ParentA` hierarchy. There is no ES6 solution to this limitation.\n\n`super` seems to work intuitively if you have a static class hierarchy with no cross-pollination. But in all fairness, one of the main benefits of doing `this`-aware coding is exactly that sort of flexibility. 
Simply, `class` + `super` requires you to avoid such techniques.\n\nThe choice boils down to narrowing your object design to these static hierarchies -- `class`, `extends`, and `super` will be quite nice -- or dropping all attempts to \"fake\" classes and instead embrace dynamic and flexible, classless objects and `[[Prototype]]` delegation (see the *this & Object Prototypes* title of this series).\n\n#### Subclass Constructor\n\nConstructors are not required for classes or subclasses; a default constructor is substituted in both cases if omitted. However, the default substituted constructor is different for a direct class versus an extended class.\n\nSpecifically, the default subclass constructor automatically calls the parent constructor, and passes along any arguments. In other words, you could think of the default subclass constructor sort of like this:\n\n```js\nconstructor(...args) {\n\tsuper(...args);\n}\n```\n\nThis is an important detail to note. Not all class languages have the subclass constructor automatically call the parent constructor. C++ does, but Java does not. But more importantly, in pre-ES6 classes, such automatic \"parent constructor\" calling does not happen. Be careful when converting to ES6 `class` if you've been relying on such calls *not* happening.\n\nAnother perhaps surprising deviation/limitation of ES6 subclass constructors: in a constructor of a subclass, you cannot access `this` until `super(..)` has been called. The reason is nuanced and complicated, but it boils down to the fact that the parent constructor is actually the one creating/initializing your instance's `this`. Pre-ES6, it works oppositely; the `this` object is created by the \"subclass constructor,\" and then you call a \"parent constructor\" with the context of the \"subclass\" `this`.\n\nLet's illustrate. 
This works pre-ES6:\n\n```js\nfunction Foo() {\n\tthis.a = 1;\n}\n\nfunction Bar() {\n\tthis.b = 2;\n\tFoo.call( this );\n}\n\n// `Bar` \"extends\" `Foo`\nBar.prototype = Object.create( Foo.prototype );\n```\n\nBut this ES6 equivalent is not allowed:\n\n```js\nclass Foo {\n\tconstructor() { this.a = 1; }\n}\n\nclass Bar extends Foo {\n\tconstructor() {\n\t\tthis.b = 2;\t\t\t// not allowed before `super()`\n\t\tsuper();\t\t\t// to fix swap these two statements\n\t}\n}\n```\n\nIn this case, the fix is simple. Just swap the two statements in the subclass `Bar` constructor. However, if you've been relying pre-ES6 on being able to skip calling the \"parent constructor,\" beware because that won't be allowed anymore.\n\n#### `extend`ing Natives\n\nOne of the most heralded benefits to the new `class` and `extends` design is the ability to (finally!) subclass the built-in natives, like `Array`. Consider:\n\n```js\nclass MyCoolArray extends Array {\n\tfirst() { return this[0]; }\n\tlast() { return this[this.length - 1]; }\n}\n\nvar a = new MyCoolArray( 1, 2, 3 );\n\na.length;\t\t\t\t\t// 3\na;\t\t\t\t\t\t\t// [1,2,3]\n\na.first();\t\t\t\t\t// 1\na.last();\t\t\t\t\t// 3\n```\n\nPrior to ES6, a fake \"subclass\" of `Array` using manual object creation and linking to `Array.prototype` only partially worked. It missed out on the special behaviors of a real array, such as the automatically updating `length` property. ES6 subclasses should fully work with \"inherited\" and augmented behaviors as expected!\n\nAnother common pre-ES6 \"subclass\" limitation is with the `Error` object, in creating custom error \"subclasses.\" When genuine `Error` objects are created, they automatically capture special `stack` information, including the line number and file where the error is created. 
Pre-ES6 custom error \"subclasses\" have no such special behavior, which severely limits their usefulness.\n\nES6 to the rescue:\n\n```js\nclass Oops extends Error {\n\tconstructor(reason) {\n\t\tsuper(reason);\n\t\tthis.oops = reason;\n\t}\n}\n\n// later:\nvar ouch = new Oops( \"I messed up!\" );\nthrow ouch;\n```\n\nThe `ouch` custom error object in this previous snippet will behave like any other genuine error object, including capturing `stack`. That's a big improvement!\n\n### `new.target`\n\nES6 introduces a new concept called a *meta property* (see Chapter 7), in the form of `new.target`.\n\nIf that looks strange, it is; pairing a keyword with a `.` and a property name is definitely an out-of-the-ordinary pattern for JS.\n\n`new.target` is a new \"magical\" value available in all functions, though in normal functions it will always be `undefined`. In any constructor, `new.target` always points at the constructor that `new` actually directly invoked, even if the constructor is in a parent class and was delegated to by a `super(..)` call from a child constructor. Consider:\n\n```js\nclass Foo {\n\tconstructor() {\n\t\tconsole.log( \"Foo: \", new.target.name );\n\t}\n}\n\nclass Bar extends Foo {\n\tconstructor() {\n\t\tsuper();\n\t\tconsole.log( \"Bar: \", new.target.name );\n\t}\n\tbaz() {\n\t\tconsole.log( \"baz: \", new.target );\n\t}\n}\n\nvar a = new Foo();\n// Foo: Foo\n\nvar b = new Bar();\n// Foo: Bar   <-- respects the `new` call-site\n// Bar: Bar\n\nb.baz();\n// baz: undefined\n```\n\nThe `new.target` meta property doesn't have much purpose in class constructors, except accessing a static property/method (see the next section).\n\nIf `new.target` is `undefined`, you know the function was not called with `new`. You can then force a `new` invocation if that's necessary.\n\n### `static`\n\nWhen a subclass `Bar` extends a parent class `Foo`, we already observed that `Bar.prototype` is `[[Prototype]]`-linked to `Foo.prototype`. 
But additionally, `Bar()` is `[[Prototype]]`-linked to `Foo()`. That part may not have such an obvious reasoning.\n\nHowever, it's quite useful in the case where you declare `static` methods (not just properties) for a class, as these are added directly to that class's function object, not to the function object's `prototype` object. Consider:\n\n```js\nclass Foo {\n\tstatic cool() { console.log( \"cool\" ); }\n\twow() { console.log( \"wow\" ); }\n}\n\nclass Bar extends Foo {\n\tstatic awesome() {\n\t\tsuper.cool();\n\t\tconsole.log( \"awesome\" );\n\t}\n\tneat() {\n\t\tsuper.wow();\n\t\tconsole.log( \"neat\" );\n\t}\n}\n\nFoo.cool();\t\t\t\t\t// \"cool\"\nBar.cool();\t\t\t\t\t// \"cool\"\nBar.awesome();\t\t\t\t// \"cool\"\n\t\t\t\t\t\t\t// \"awesome\"\n\nvar b = new Bar();\nb.neat();\t\t\t\t\t// \"wow\"\n\t\t\t\t\t\t\t// \"neat\"\n\nb.awesome;\t\t\t\t\t// undefined\nb.cool;\t\t\t\t\t\t// undefined\n```\n\nBe careful not to mistakenly assume that `static` members are on the class's (instance) prototype chain; they're actually on the dual/parallel chain between the function constructors.\n\n#### `Symbol.species` Constructor Getter\n\nOne place where `static` can be useful is in setting the `Symbol.species` getter (known internally in the specification as `@@species`) for a derived (child) class. This capability allows a child class to signal to a parent class what constructor should be used -- when not intending the child class's constructor itself -- if any parent class method needs to vend a new instance.\n\nFor example, many methods on `Array` create and return a new `Array` instance. 
If you define a derived class from `Array`, but you want those methods to continue to vend actual `Array` instances instead of from your derived class, this works:\n\n```js\nclass MyCoolArray extends Array {\n\t// force `species` to be parent constructor\n\tstatic get [Symbol.species]() { return Array; }\n}\n\nvar a = new MyCoolArray( 1, 2, 3 ),\n\tb = a.map( function(v){ return v * 2; } );\n\nb instanceof MyCoolArray;\t// false\nb instanceof Array;\t\t\t// true\n```\n\nTo illustrate how a parent class method can use a child's species declaration somewhat like `Array#map(..)` is doing, consider:\n\n```js\nclass Foo {\n\t// defer `species` to derived constructor\n\tstatic get [Symbol.species]() { return this; }\n\tspawn() {\n\t\treturn new this.constructor[Symbol.species]();\n\t}\n}\n\nclass Bar extends Foo {\n\t// force `species` to be parent constructor\n\tstatic get [Symbol.species]() { return Foo; }\n}\n\nvar a = new Foo();\nvar b = a.spawn();\nb instanceof Foo;\t\t\t\t\t// true\n\nvar x = new Bar();\nvar y = x.spawn();\ny instanceof Bar;\t\t\t\t\t// false\ny instanceof Foo;\t\t\t\t\t// true\n```\n\nThe parent class `Symbol.species` does `return this` to defer to any derived class, as you'd normally expect. `Bar` then overrides to manually declare `Foo` to be used for such instance creation. Of course, a derived class can still vend instances of itself using `new this.constructor(..)`.\n\n## Review\n\nES6 introduces several new features that aid in code organization:\n\n* Iterators provide sequential access to data or operations. They can be consumed by new language features like `for..of` and `...`.\n* Generators are locally pause/resume capable functions controlled by an iterator. They can be used to programmatically (and interactively, through `yield`/`next(..)` message passing) *generate* values to be consumed via iteration.\n* Modules allow private encapsulation of implementation details with a publicly exported API. 
Module definitions are file-based, singleton instances, and statically resolved at compile time.\n* Classes provide cleaner syntax around prototype-based coding. The addition of `super` also solves tricky issues with relative references in the `[[Prototype]]` chain.\n\nThese new tools should be your first stop when trying to improve the architecture of your JS projects by embracing ES6.\n"
  },
  {
    "path": "es6 & beyond/ch4.md",
    "content": "# You Don't Know JS: ES6 & Beyond\n# Chapter 4: Async Flow Control\n\nIt's no secret if you've written any significant amount of JavaScript that asynchronous programming is a required skill. The primary mechanism for managing asynchrony has been the function callback.\n\nHowever, ES6 adds a new feature that helps address significant shortcomings in the callbacks-only approach to async: *Promises*. In addition, we can revisit generators (from the previous chapter) and see a pattern for combining the two that's a major step forward in async flow control programming in JavaScript.\n\n## Promises\n\nLet's clear up some misconceptions: Promises are not about replacing callbacks. Promises provide a trustable intermediary -- that is, between your calling code and the async code that will perform the task -- to manage callbacks.\n\nAnother way of thinking about a Promise is as an event listener, on which you can register to listen for an event that lets you know when a task has completed. It's an event that will only ever fire once, but it can be thought of as an event nonetheless.\n\nPromises can be chained together, which can sequence a series of asychronously completing steps. Together with higher-level abstractions like the `all(..)` method (in classic terms, a \"gate\") and the `race(..)` method (in classic terms, a \"latch\"), promise chains provide a mechanism for async flow control.\n\nYet another way of conceptualizing a Promise is that it's a *future value*, a time-independent container wrapped around a value. This container can be reasoned about identically whether the underlying value is final or not. Observing the resolution of a Promise extracts this value once available. In other words, a Promise is said to be the async version of a sync function's return value.\n\nA Promise can only have one of two possible resolution outcomes: fulfilled or rejected, with an optional single value. 
If a Promise is fulfilled, the final value is called a fulfillment. If it's rejected, the final value is called a reason (as in, a \"reason for rejection\"). Promises can only be resolved (fulfillment or rejection) *once*. Any further attempts to fulfill or reject are simply ignored. Thus, once a Promise is resolved, it's an immutable value that cannot be changed.\n\nClearly, there are several different ways to think about what a Promise is. No single perspective is fully sufficient, but each provides a separate aspect of the whole. The big takeaway is that they offer a significant improvement over callbacks-only async, namely that they provide order, predictability, and trustability.\n\n### Making and Using Promises\n\nTo construct a promise instance, use the `Promise(..)` constructor:\n\n```js\nvar p = new Promise( function pr(resolve,reject){\n\t// ..\n} );\n```\n\nThe `Promise(..)` constructor takes a single function (`pr(..)`), which is called immediately and receives two control functions as arguments, usually named `resolve(..)` and `reject(..)`. They are used as:\n\n* If you call `reject(..)`, the promise is rejected, and if any value is passed to `reject(..)`, it is set as the reason for rejection.\n* If you call `resolve(..)` with no value, or any non-promise value, the promise is fulfilled.\n* If you call `resolve(..)` and pass another promise, this promise simply adopts the state -- whether immediate or eventual -- of the passed promise (either fulfillment or rejection).\n\nHere's how you'd typically use a promise to refactor a callback-reliant function call. 
If you start out with an `ajax(..)` utility that expects to be able to call an error-first style callback:\n\n```js\nfunction ajax(url,cb) {\n\t// make request, eventually call `cb(..)`\n}\n\n// ..\n\najax( \"http://some.url.1\", function handler(err,contents){\n\tif (err) {\n\t\t// handle ajax error\n\t}\n\telse {\n\t\t// handle `contents` success\n\t}\n} );\n```\n\nYou can convert it to:\n\n```js\nfunction ajax(url) {\n\treturn new Promise( function pr(resolve,reject){\n\t\t// make request, eventually call\n\t\t// either `resolve(..)` or `reject(..)`\n\t} );\n}\n\n// ..\n\najax( \"http://some.url.1\" )\n.then(\n\tfunction fulfilled(contents){\n\t\t// handle `contents` success\n\t},\n\tfunction rejected(reason){\n\t\t// handle ajax error reason\n\t}\n);\n```\n\nPromises have a `then(..)` method that accepts one or two callback functions. The first function (if present) is treated as the handler to call if the promise is fulfilled successfully. The second function (if present) is treated as the handler to call if the promise is rejected explicitly, or if any error/exception is caught during resolution.\n\nIf one of the arguments is omitted or otherwise not a valid function -- typically you'll use `null` instead -- a default placeholder equivalent is used. The default success callback passes its fulfillment value along and the default error callback propagates its rejection reason along.\n\nThe shorthand for calling `then(null,handleRejection)` is `catch(handleRejection)`.\n\nBoth `then(..)` and `catch(..)` automatically construct and return another promise instance, which is wired to receive the resolution from whatever the return value is from the original promise's fulfillment or rejection handler (whichever is actually called). 
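These defaults are easy to see with a trivially fulfilled promise (a minimal sketch; the `42` value is just for illustration):

```js
var p = new Promise( function pr(resolve){
	resolve( 42 );
} );

// these two are equivalent:
p.catch( function rejected(reason){ /* .. */ } );
p.then( null, function rejected(reason){ /* .. */ } );

// an omitted fulfillment handler defaults to
// passing the fulfillment value along:
p.then( null ).then( function fulfilled(v){
	v;			// 42
} );
```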
Consider:\n\n```js\najax( \"http://some.url.1\" )\n.then(\n\tfunction fulfilled(contents){\n\t\treturn contents.toUpperCase();\n\t},\n\tfunction rejected(reason){\n\t\treturn \"DEFAULT VALUE\";\n\t}\n)\n.then( function fulfilled(data){\n\t// handle data from original promise's\n\t// handlers\n} );\n```\n\nIn this snippet, we're returning an immediate value from either `fulfilled(..)` or `rejected(..)`, which then is received on the next event turn in the second `then(..)`'s `fulfilled(..)`. If we instead return a new promise, that new promise is subsumed and adopted as the resolution:\n\n```js\najax( \"http://some.url.1\" )\n.then(\n\tfunction fulfilled(contents){\n\t\treturn ajax(\n\t\t\t\"http://some.url.2?v=\" + contents\n\t\t);\n\t},\n\tfunction rejected(reason){\n\t\treturn ajax(\n\t\t\t\"http://backup.url.3?err=\" + reason\n\t\t);\n\t}\n)\n.then( function fulfilled(contents){\n\t// `contents` comes from the subsequent\n\t// `ajax(..)` call, whichever it was\n} );\n```\n\nIt's important to note that an exception (or rejected promise) in the first `fulfilled(..)` will *not* result in the first `rejected(..)` being called, as that handler only responds to the resolution of the first original promise. Instead, the second promise, which the second `then(..)` is called against, receives that rejection.\n\nIn this previous snippet, we are not listening for that rejection, which means it will be silently held onto for future observation. If you never observe it by calling a `then(..)` or `catch(..)`, then it will go unhandled. Some browser developer consoles may detect these unhandled rejections and report them, but this is not reliably guaranteed; you should always observe promise rejections.\n\n**Note:** This was just a brief overview of Promise theory and behavior. For a much more in-depth exploration, see Chapter 3 of the *Async & Performance* title of this series.\n\n### Thenables\n\nPromises are genuine instances of the `Promise(..)` constructor. 
However, there are promise-like objects called *thenables* that generally can interoperate with the Promise mechanisms.\n\nAny object (or function) with a `then(..)` function on it is assumed to be a thenable. Any place where the Promise mechanisms can accept and adopt the state of a genuine promise, they can also handle a thenable.\n\nThenables are basically a general label for any promise-like value that may have been created by some other system than the actual `Promise(..)` constructor. In that perspective, a thenable is generally less trustable than a genuine Promise. Consider this misbehaving thenable, for example:\n\n```js\nvar th = {\n\tthen: function thener( fulfilled ) {\n\t\t// call `fulfilled(..)` once every 100ms forever\n\t\tsetInterval( fulfilled, 100 );\n\t}\n};\n```\n\nIf you received that thenable and chained it with `th.then(..)`, you'd likely be surprised that your fulfillment handler is called repeatedly, when normal Promises are supposed to only ever be resolved once.\n\nGenerally, if you're receiving what purports to be a promise or thenable back from some other system, you shouldn't just trust it blindly. In the next section, we'll see a utility included with ES6 Promises that helps address this trust concern.\n\nBut to further understand the perils of this issue, consider that *any* object in *any* piece of code that's ever been defined to have a method on it called `then(..)` can be potentially confused as a thenable -- if used with Promises, of course -- regardless of if that thing was ever intended to even remotely be related to Promise-style async coding.\n\nPrior to ES6, there was never any special reservation made on methods called `then(..)`, and as you can imagine there's been at least a few cases where that method name has been chosen prior to Promises ever showing up on the radar screen. 
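To illustrate, here's a contrived sketch: any value whose `then(..)` happens to call its first callback gets adopted as if it were a real promise, resolving to whatever that callback receives:

```js
// not intended as a promise at all, but it
// has a `then(..)` method...
var notAPromise = {
	val: 42,
	then: function(cb){
		// the Promise machinery calls this, assuming
		// it behaves like a real promise's `then(..)`
		cb( this.val );
	}
};

var p = new Promise( function pr(resolve){
	// resolving with a thenable adopts it,
	// whether or not it was meant as one
	resolve( notAPromise );
} );

p.then( function fulfilled(v){
	v;			// 42 -- not the `notAPromise` object!
} );
```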
The most likely case of mistaken thenable will be async libraries that use `then(..)` but which are not strictly Promises-compliant -- there are several out in the wild.\n\nThe onus will be on you to guard against directly using values with the Promise mechanism that would be incorrectly assumed to be a thenable.\n\n### `Promise` API\n\nThe `Promise` API also provides some static methods for working with Promises.\n\n`Promise.resolve(..)` creates a promise resolved to the value passed in. Let's compare how it works to the more manual approach:\n\n```js\nvar p1 = Promise.resolve( 42 );\n\nvar p2 = new Promise( function pr(resolve){\n\tresolve( 42 );\n} );\n```\n\n`p1` and `p2` will have essentially identical behavior. The same goes for resolving with a promise:\n\n```js\nvar theP = ajax( .. );\n\nvar p1 = Promise.resolve( theP );\n\nvar p2 = new Promise( function pr(resolve){\n\tresolve( theP );\n} );\n```\n\n**Tip:** `Promise.resolve(..)` is the solution to the thenable trust issue raised in the previous section. Any value that you are not already certain is a trustable promise -- even if it could be an immediate value -- can be normalized by passing it to `Promise.resolve(..)`. If the value is already a recognizable promise or thenable, its state/resolution will simply be adopted, insulating you from misbehavior. If it's instead an immediate value, it will be \"wrapped\" in a genuine promise, thereby normalizing its behavior to be async.\n\n`Promise.reject(..)` creates an immediately rejected promise, the same as its `Promise(..)` constructor counterpart:\n\n```js\nvar p1 = Promise.reject( \"Oops\" );\n\nvar p2 = new Promise( function pr(resolve,reject){\n\treject( \"Oops\" );\n} );\n```\n\nWhile `resolve(..)` and `Promise.resolve(..)` can accept a promise and adopt its state/resolution, `reject(..)` and `Promise.reject(..)` do not differentiate what value they receive. 
So, if you reject with a promise or thenable, the promise/thenable itself will be set as the rejection reason, not its underlying value.\n\n`Promise.all([ .. ])` accepts an array of one or more values (e.g., immediate values, promises, thenables). It returns a promise back that will be fulfilled if all the values fulfill, or reject immediately once the first of any of them rejects.\n\nStarting with these values/promises:\n\n```js\nvar p1 = Promise.resolve( 42 );\nvar p2 = new Promise( function pr(resolve){\n\tsetTimeout( function(){\n\t\tresolve( 43 );\n\t}, 100 );\n} );\nvar v3 = 44;\nvar p4 = new Promise( function pr(resolve,reject){\n\tsetTimeout( function(){\n\t\treject( \"Oops\" );\n\t}, 10 );\n} );\n```\n\nLet's consider how `Promise.all([ .. ])` works with combinations of those values:\n\n```js\nPromise.all( [p1,p2,v3] )\n.then( function fulfilled(vals){\n\tconsole.log( vals );\t\t\t// [42,43,44]\n} );\n\nPromise.all( [p1,p2,v3,p4] )\n.then(\n\tfunction fulfilled(vals){\n\t\t// never gets here\n\t},\n\tfunction rejected(reason){\n\t\tconsole.log( reason );\t\t// Oops\n\t}\n);\n```\n\nWhile `Promise.all([ .. ])` waits for all fulfillments (or the first rejection), `Promise.race([ .. ])` waits only for either the first fulfillment or rejection. Consider:\n\n```js\n// NOTE: re-setup all test values to\n// avoid timing issues misleading you!\n\nPromise.race( [p2,p1,v3] )\n.then( function fulfilled(val){\n\tconsole.log( val );\t\t\t\t// 42\n} );\n\nPromise.race( [p2,p4] )\n.then(\n\tfunction fulfilled(val){\n\t\t// never gets here\n\t},\n\tfunction rejected(reason){\n\t\tconsole.log( reason );\t\t// Oops\n\t}\n);\n```\n\n**Warning:** While `Promise.all([])` will fulfill right away (with no values), `Promise.race([])` will hang forever. 
This is a strange inconsistency, and speaks to the suggestion that you should never use these methods with empty arrays.\n\n## Generators + Promises\n\nIt *is* possible to express a series of promises in a chain to represent the async flow control of your program. Consider:\n\n```js\nstep1()\n.then(\n\tstep2,\n\tstep1Failed\n)\n.then(\n\tfunction step3(msg) {\n\t\treturn Promise.all( [\n\t\t\tstep3a( msg ),\n\t\t\tstep3b( msg ),\n\t\t\tstep3c( msg )\n\t\t] )\n\t}\n)\n.then(step4);\n```\n\nHowever, there's a much better option for expressing async flow control, and it will probably be much more preferable in terms of coding style than long promise chains. We can use what we learned in Chapter 3 about generators to express our async flow control.\n\nThe important pattern to recognize: a generator can yield a promise, and that promise can then be wired to resume the generator with its fulfillment value.\n\nConsider the previous snippet's async flow control expressed with a generator:\n\n```js\nfunction *main() {\n\n\ttry {\n\t\tvar ret = yield step1();\n\t}\n\tcatch (err) {\n\t\tret = yield step1Failed( err );\n\t}\n\n\tret = yield step2( ret );\n\n\t// step 3\n\tret = yield Promise.all( [\n\t\tstep3a( ret ),\n\t\tstep3b( ret ),\n\t\tstep3c( ret )\n\t] );\n\n\tyield step4( ret );\n}\n```\n\nOn the surface, this snippet may seem more verbose than the promise chain equivalent in the earlier snippet. However, it offers a much more attractive -- and more importantly, a more understandable and reason-able -- synchronous-looking coding style (with `=` assignment of \"return\" values, etc.) That's especially true in that `try..catch` error handling can be used across those hidden async boundaries.\n\nWhy are we using Promises with the generator? It's certainly possible to do async generator coding without Promises.\n\nPromises are a trustable system that uninverts the inversion of control of normal callbacks or thunks (see the *Async & Performance* title of this series). 
So, combining the trustability of Promises and the synchronicity of code in generators effectively addresses all the major deficiencies of callbacks. Also, utilities like `Promise.all([ .. ])` are a nice, clean way to express concurrency at a generator's single `yield` step.\n\nSo how does this magic work? We're going to need a *runner* that can run our generator, receive a `yield`ed promise, and wire it up to resume the generator with either the fulfillment success value, or throw an error into the generator with the rejection reason.\n\nMany async-capable utilities/libraries have such a \"runner\"; for example, `Q.spawn(..)` and my asynquence's `runner(..)` plug-in. But here's a stand-alone runner to illustrate how the process works:\n\n```js\nfunction run(gen) {\n\tvar args = [].slice.call( arguments, 1), it;\n\n\tit = gen.apply( this, args );\n\n\treturn Promise.resolve()\n\t\t.then( function handleNext(value){\n\t\t\tvar next = it.next( value );\n\n\t\t\treturn (function handleResult(next){\n\t\t\t\tif (next.done) {\n\t\t\t\t\treturn next.value;\n\t\t\t\t}\n\t\t\t\telse {\n\t\t\t\t\treturn Promise.resolve( next.value )\n\t\t\t\t\t\t.then(\n\t\t\t\t\t\t\thandleNext,\n\t\t\t\t\t\t\tfunction handleErr(err) {\n\t\t\t\t\t\t\t\treturn Promise.resolve(\n\t\t\t\t\t\t\t\t\tit.throw( err )\n\t\t\t\t\t\t\t\t)\n\t\t\t\t\t\t\t\t.then( handleResult );\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t);\n\t\t\t\t}\n\t\t\t})( next );\n\t\t} );\n}\n```\n\n**Note:** For a more prolifically commented version of this utility, see the *Async & Performance* title of this series. Also, the run utilities provided with various async libraries are often more powerful/capable than what we've shown here. 
For example, asynquence's `runner(..)` can handle `yield`ed promises, sequences, thunks, and immediate (non-promise) values, giving you ultimate flexibility.\n\nSo now running `*main()` as listed in the earlier snippet is as easy as:\n\n```js\nrun( main )\n.then(\n\tfunction fulfilled(){\n\t\t// `*main()` completed successfully\n\t},\n\tfunction rejected(reason){\n\t\t// Oops, something went wrong\n\t}\n);\n```\n\nEssentially, anywhere that you have more than two asynchronous steps of flow control logic in your program, you can *and should* use a promise-yielding generator driven by a run utility to express the flow control in a synchronous fashion. This will make for much easier to understand and maintain code.\n\nThis yield-a-promise-resume-the-generator pattern is going to be so common and so powerful, the next version of JavaScript after ES6 is almost certainly going to introduce a new function type that will do it automatically without needing the run utility. We'll cover `async function`s (as they're expected to be called) in Chapter 8.\n\n## Review\n\nAs JavaScript continues to mature and grow in its widespread adoption, asynchronous programming is more and more of a central concern. Callbacks are not fully sufficient for these tasks, and totally fall down the more sophisticated the need.\n\nThankfully, ES6 adds Promises to address one of the major shortcomings of callbacks: lack of trust in predictable behavior. Promises represent the future completion value from a potentially async task, normalizing behavior across sync and async boundaries.\n\nBut it's the combination of Promises with generators that fully realizes the benefits of rearranging our async flow control code to de-emphasize and abstract away that ugly callback soup (aka \"hell\").\n\nRight now, we can manage these interactions with the aid of various async libraries' runners, but JavaScript is eventually going to support this interaction pattern with dedicated syntax alone!\n"
  },
  {
    "path": "es6 & beyond/ch5.md",
    "content": "# You Don't Know JS: ES6 & Beyond\n# Chapter 5: Collections\n\nStructured collection and access to data is a critical component of just about any JS program. From the beginning of the language up to this point, the array and the object have been our primary mechanism for creating data structures. Of course, many higher-level data structures have been built on top of these, as user-land libraries.\n\nAs of ES6, some of the most useful (and performance-optimizing!) data structure abstractions have been added as native components of the language.\n\nWe'll start this chapter first by looking at *TypedArrays*, technically contemporary to ES5 efforts several years ago, but only standardized as companions to WebGL and not JavaScript itself. As of ES6, these have been adopted directly by the language specification, which gives them first-class status.\n\nMaps are like objects (key/value pairs), but instead of just a string for the key, you can use any value -- even another object or map! Sets are similar to arrays (lists of values), but the values are unique; if you add a duplicate, it's ignored. There are also weak (in relation to memory/garbage collection) counterparts: WeakMap and WeakSet.\n\n## TypedArrays\n\nAs we cover in the *Types & Grammar* title of this series, JS does have a set of built-in types, like `number` and `string`. It'd be tempting to look at a feature named \"typed array\" and assume it means an array of a specific type of values, like an array of only strings.\n\nHowever, typed arrays are really more about providing structured access to binary data using array-like semantics (indexed access, etc.). The \"type\" in the name refers to a \"view\" layered on type of the bucket of bits, which is essentially a mapping of whether the bits should be viewed as an array of 8-bit signed integers, 16-bit signed integers, and so on.\n\nHow do you construct such a bit-bucket? 
It's called a \"buffer,\" and you construct it most directly with the `ArrayBuffer(..)` constructor:\n\n```js\nvar buf = new ArrayBuffer( 32 );\nbuf.byteLength;\t\t\t\t\t\t\t// 32\n```\n\n`buf` is now a binary buffer that is 32-bytes long (256-bits), that's pre-initialized to all `0`s. A buffer by itself doesn't really allow you any interaction except for checking its `byteLength` property.\n\n**Tip:** Several web platform features use or return array buffers, such as `FileReader#readAsArrayBuffer(..)`, `XMLHttpRequest#send(..)`, and `ImageData` (canvas data).\n\nBut on top of this array buffer, you can then layer a \"view,\" which comes in the form of a typed array. Consider:\n\n```js\nvar arr = new Uint16Array( buf );\narr.length;\t\t\t\t\t\t\t// 16\n```\n\n`arr` is a typed array of 16-bit unsigned integers mapped over the 256-bit `buf` buffer, meaning you get 16 elements.\n\n### Endianness\n\nIt's very important to understand that the `arr` is mapped using the endian-setting (big-endian or little-endian) of the platform the JS is running on. This can be an issue if the binary data is created with one endianness but interpreted on a platform with the opposite endianness.\n\nEndian means if the low-order byte (collection of 8-bits) of a multi-byte number -- such as the 16-bit unsigned ints we created in the earlier snippet -- is on the right or the left of the number's bytes.\n\nFor example, let's imagine the base-10 number `3085`, which takes 16-bits to represent. 
If you have just one 16-bit number container, it'd be represented in binary as `0000110000001101` (hexadecimal `0c0d`) regardless of endianness.\n\nBut if `3085` was represented with two 8-bit numbers, the endianness would significantly affect its storage in memory:\n\n* `0000110000001101` / `0c0d` (big endian)\n* `0000110100001100` / `0d0c` (little endian)\n\nIf you received the bits of `3085` as `0000110100001100` from a little-endian system, but you layered a view on top of it in a big-endian system, you'd instead see value `3340` (base-10) and `0d0c` (base-16).\n\nLittle endian is the most common representation on the web these days, but there are definitely browsers where that's not true. It's important that you understand the endianness of both the producer and consumer of a chunk of binary data.\n\nFrom MDN, here's a quick way to test the endianness of your JavaScript:\n\n```js\nvar littleEndian = (function() {\n\tvar buffer = new ArrayBuffer( 2 );\n\tnew DataView( buffer ).setInt16( 0, 256, true );\n\treturn new Int16Array( buffer )[0] === 256;\n})();\n```\n\n`littleEndian` will be `true` or `false`; for most browsers, it should return `true`. This test uses `DataView(..)`, which allows more low-level, fine-grained control over accessing (setting/getting) the bits from the view you layer over the buffer. The third parameter of the `setInt16(..)` method in the previous snippet is for telling the `DataView` what endianness you're wanting it to use for that operation.\n\n**Warning:** Do not confuse endianness of underlying binary storage in array buffers with how a given number is represented when exposed in a JS program. For example, `(3085).toString(2)` returns `\"110000001101\"`, which with an assumed leading four `\"0\"`s appears to be the big-endian representation. In fact, this representation is based on a single 16-bit view, not a view of two 8-bit bytes. 
The `DataView` test above is the best way to determine endianness for your JS environment.\n\n### Multiple Views\n\nA single buffer can have multiple views attached to it, such as:\n\n```js\nvar buf = new ArrayBuffer( 2 );\n\nvar view8 = new Uint8Array( buf );\nvar view16 = new Uint16Array( buf );\n\nview16[0] = 3085;\nview8[0];\t\t\t\t\t\t// 13\nview8[1];\t\t\t\t\t\t// 12\n\nview8[0].toString( 16 );\t\t// \"d\"\nview8[1].toString( 16 );\t\t// \"c\"\n\n// swap (as if endian!)\nvar tmp = view8[0];\nview8[0] = view8[1];\nview8[1] = tmp;\n\nview16[0];\t\t\t\t\t\t// 3340\n```\n\nThe typed array constructors have multiple signature variations. We've shown so far only passing them an existing buffer. However, that form also takes two extra parameters: `byteOffset` and `length`. In other words, you can start the typed array view at a location other than `0` and you can make it span less than the full length of the buffer.\n\nIf the buffer of binary data includes data in non-uniform size/location, this technique can be quite useful.\n\nFor example, consider a binary buffer that has a 2-byte number (aka \"word\") at the beginning, followed by two 1-byte numbers, followed by a 32-bit floating point number. 
Here's how you can access that data with multiple views on the same buffer, offsets, and lengths (note that the `length` arguments count *elements* of the view's type, not bytes):\n\n```js\nvar first = new Uint16Array( buf, 0, 1 )[0],\n\tsecond = new Uint8Array( buf, 2, 1 )[0],\n\tthird = new Uint8Array( buf, 3, 1 )[0],\n\tfourth = new Float32Array( buf, 4, 1 )[0];\n```\n\n### TypedArray Constructors\n\nIn addition to the `(buffer,[offset, [length]])` form examined in the previous section, typed array constructors also support these forms:\n\n* [constructor]`(length)`: Creates a new view over a new buffer of `length` elements\n* [constructor]`(typedArr)`: Creates a new view and buffer, and copies the contents from the `typedArr` view\n* [constructor]`(obj)`: Creates a new view and buffer, and iterates over the array-like or object `obj` to copy its contents\n\nThe following typed array constructors are available as of ES6:\n\n* `Int8Array` (8-bit signed integers), `Uint8Array` (8-bit unsigned integers)\n\t- `Uint8ClampedArray` (8-bit unsigned integers, each value clamped on setting to the `0`-`255` range)\n* `Int16Array` (16-bit signed integers), `Uint16Array` (16-bit unsigned integers)\n* `Int32Array` (32-bit signed integers), `Uint32Array` (32-bit unsigned integers)\n* `Float32Array` (32-bit floating point, IEEE-754)\n* `Float64Array` (64-bit floating point, IEEE-754)\n\nInstances of typed array constructors are almost the same as regular native arrays. Some differences include having a fixed length and the values all being of the same \"type.\"\n\nHowever, they share most of the same `prototype` methods. As such, you likely will be able to use them as regular arrays without needing to convert.\n\nFor example:\n\n```js\nvar a = new Int32Array( 3 );\na[0] = 10;\na[1] = 20;\na[2] = 30;\n\na.map( function(v){\n\tconsole.log( v );\n} );\n// 10 20 30\n\na.join( \"-\" );\n// \"10-20-30\"\n```\n\n**Warning:** You can't use certain `Array.prototype` methods with TypedArrays that don't make sense, such as the mutators (`splice(..)`, `push(..)`, etc.) 
and `concat(..)`.\n\nBe aware that the elements in TypedArrays really are constrained to the declared bit sizes. If you have a `Uint8Array` and try to assign something larger than an 8-bit value into one of its elements, the value wraps around so as to stay within the bit length.\n\nThis could cause problems if you were trying to, for instance, square all the values in a TypedArray. Consider:\n\n```js\nvar a = new Uint8Array( 3 );\na[0] = 10;\na[1] = 20;\na[2] = 30;\n\nvar b = a.map( function(v){\n\treturn v * v;\n} );\n\nb;\t\t\t\t// [100, 144, 132]\n```\n\nThe `20` and `30` values, when squared, resulted in bit overflow. To get around such a limitation, you can use the `TypedArray#from(..)` function:\n\n```js\nvar a = new Uint8Array( 3 );\na[0] = 10;\na[1] = 20;\na[2] = 30;\n\nvar b = Uint16Array.from( a, function(v){\n\treturn v * v;\n} );\n\nb;\t\t\t\t// [100, 400, 900]\n```\n\nSee the \"`Array.from(..)` Static Function\" section in Chapter 6 for more information about the `Array.from(..)` that is shared with TypedArrays. Specifically, the \"Mapping\" section explains the mapping function accepted as its second argument.\n\nOne interesting behavior to consider is that TypedArrays have a `sort(..)` method much like regular arrays, but this one defaults to numeric sort comparisons instead of coercing values to strings for lexicographic comparison. For example:\n\n```js\nvar a = [ 10, 1, 2, ];\na.sort();\t\t\t\t\t\t\t\t// [1,10,2]\n\nvar b = new Uint8Array( [ 10, 1, 2 ] );\nb.sort();\t\t\t\t\t\t\t\t// [1,2,10]\n```\n\nThe `TypedArray#sort(..)` takes an optional compare function argument just like `Array#sort(..)`, which works in exactly the same way.\n\n## Maps\n\nIf you have a lot of JS experience, you know that objects are the primary mechanism for creating unordered key/value-pair data structures, otherwise known as maps. 
However, the major drawback with objects-as-maps is the inability to use a non-string value as the key.\n\nFor example, consider:\n\n```js\nvar m = {};\n\nvar x = { id: 1 },\n\ty = { id: 2 };\n\nm[x] = \"foo\";\nm[y] = \"bar\";\n\nm[x];\t\t\t\t\t\t\t// \"bar\"\nm[y];\t\t\t\t\t\t\t// \"bar\"\n```\n\nWhat's going on here? The two objects `x` and `y` both stringify to `\"[object Object]\"`, so only that one key is being set in `m`.\n\nSome have implemented fake maps by maintaining a parallel array of non-string keys alongside an array of the values, such as:\n\n```js\nvar keys = [], vals = [];\n\nvar x = { id: 1 },\n\ty = { id: 2 };\n\nkeys.push( x );\nvals.push( \"foo\" );\n\nkeys.push( y );\nvals.push( \"bar\" );\n\nkeys[0] === x;\t\t\t\t\t// true\nvals[0];\t\t\t\t\t\t// \"foo\"\n\nkeys[1] === y;\t\t\t\t\t// true\nvals[1];\t\t\t\t\t\t// \"bar\"\n```\n\nOf course, you wouldn't want to manage those parallel arrays yourself, so you could define a data structure with methods that automatically do the management under the covers. Besides having to do that work yourself, the main drawback is that access is no longer O(1) time-complexity, but instead is O(n).\n\nBut as of ES6, there's no longer any need to do this! Just use `Map(..)`:\n\n```js\nvar m = new Map();\n\nvar x = { id: 1 },\n\ty = { id: 2 };\n\nm.set( x, \"foo\" );\nm.set( y, \"bar\" );\n\nm.get( x );\t\t\t\t\t\t// \"foo\"\nm.get( y );\t\t\t\t\t\t// \"bar\"\n```\n\nThe only drawback is that you can't use the `[ ]` bracket access syntax for setting and retrieving values. But `get(..)` and `set(..)` work perfectly suitably instead.\n\nTo delete an element from a map, don't use the `delete` operator, but instead use the `delete(..)` method:\n\n```js\nm.set( x, \"foo\" );\nm.set( y, \"bar\" );\n\nm.delete( y );\n```\n\nYou can clear the entire map's contents with `clear()`. 
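Note also that `delete(..)` returns a boolean indicating whether the key was actually present, which makes it a combined check-and-remove operation. A quick self-contained sketch (with numeric values just for illustration):

```js
var m2 = new Map();

var j = { id: 1 },
	k = { id: 2 };

m2.set( j, 100 );
m2.set( k, 200 );

m2.delete( k );			// true -- `k` was present
m2.delete( k );			// false -- already removed

m2.has( j );			// true
m2.has( k );			// false
```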
To get the length of a map (i.e., the number of keys), use the `size` property (not `length`):\n\n```js\nm.set( x, \"foo\" );\nm.set( y, \"bar\" );\nm.size;\t\t\t\t\t\t\t// 2\n\nm.clear();\nm.size;\t\t\t\t\t\t\t// 0\n```\n\nThe `Map(..)` constructor can also receive an iterable (see \"Iterators\" in Chapter 3), which must produce a list of arrays, where the first item in each array is the key and the second item is the value. This format for iteration is identical to that produced by the `entries()` method, explained in the next section. That makes it easy to make a copy of a map:\n\n```js\nvar m2 = new Map( m.entries() );\n\n// same as:\nvar m2 = new Map( m );\n```\n\nBecause a map instance is an iterable, and its default iterator is the same as `entries()`, the second shorter form is more preferable.\n\nOf course, you can just manually specify an *entries* list (array of key/value arrays) in the `Map(..)` constructor form:\n\n```js\nvar x = { id: 1 },\n\ty = { id: 2 };\n\nvar m = new Map( [\n\t[ x, \"foo\" ],\n\t[ y, \"bar\" ]\n] );\n\nm.get( x );\t\t\t\t\t\t// \"foo\"\nm.get( y );\t\t\t\t\t\t// \"bar\"\n```\n\n### Map Values\n\nTo get the list of values from a map, use `values(..)`, which returns an iterator. In Chapters 2 and 3, we covered various ways to process an iterator sequentially (like an array), such as the `...` spread operator and the `for..of` loop. Also, \"Arrays\" in Chapter 6 covers the `Array.from(..)` method in detail. Consider:\n\n```js\nvar m = new Map();\n\nvar x = { id: 1 },\n\ty = { id: 2 };\n\nm.set( x, \"foo\" );\nm.set( y, \"bar\" );\n\nvar vals = [ ...m.values() ];\n\nvals;\t\t\t\t\t\t\t// [\"foo\",\"bar\"]\nArray.from( m.values() );\t\t// [\"foo\",\"bar\"]\n```\n\nAs discussed in the previous section, you can iterate over a map's entries using `entries()` (or the default map iterator). 
Consider:\n\n```js\nvar m = new Map();\n\nvar x = { id: 1 },\n\ty = { id: 2 };\n\nm.set( x, \"foo\" );\nm.set( y, \"bar\" );\n\nvar vals = [ ...m.entries() ];\n\nvals[0][0] === x;\t\t\t\t// true\nvals[0][1];\t\t\t\t\t\t// \"foo\"\n\nvals[1][0] === y;\t\t\t\t// true\nvals[1][1];\t\t\t\t\t\t// \"bar\"\n```\n\n### Map Keys\n\nTo get the list of keys, use `keys()`, which returns an iterator over the keys in the map:\n\n```js\nvar m = new Map();\n\nvar x = { id: 1 },\n\ty = { id: 2 };\n\nm.set( x, \"foo\" );\nm.set( y, \"bar\" );\n\nvar keys = [ ...m.keys() ];\n\nkeys[0] === x;\t\t\t\t\t// true\nkeys[1] === y;\t\t\t\t\t// true\n```\n\nTo determine if a map has a given key, use `has(..)`:\n\n```js\nvar m = new Map();\n\nvar x = { id: 1 },\n\ty = { id: 2 };\n\nm.set( x, \"foo\" );\n\nm.has( x );\t\t\t\t\t\t// true\nm.has( y );\t\t\t\t\t\t// false\n```\n\nMaps essentially let you associate some extra piece of information (the value) with an object (the key) without actually putting that information on the object itself.\n\nWhile you can use any kind of value as a key for a map, you typically will use objects, as strings and other primitives are already eligible as keys of normal objects. In other words, you'll probably want to continue to use normal objects for maps unless some or all of the keys need to be objects, in which case map is more appropriate.\n\n**Warning:** If you use an object as a map key and that object is later discarded (all references unset) in attempt to have garbage collection (GC) reclaim its memory, the map itself will still retain its entry. You will need to remove the entry from the map for it to be GC-eligible. In the next section, we'll see WeakMaps as a better option for object keys and GC.\n\n## WeakMaps\n\nWeakMaps are a variation on maps, which has most of the same external behavior but differs underneath in how the memory allocation (specifically its GC) works.\n\nWeakMaps take (only) objects as keys. 
Those objects are held *weakly*, which means if the object itself is GC'd, the entry in the WeakMap is also removed. This isn't observable behavior, though, as the only way an object can be GC'd is if there's no more references to it -- once there are no more references to it, you have no object reference to check if it exists in the WeakMap.\n\nOtherwise, the API for WeakMap is similar, though more limited:\n\n```js\nvar m = new WeakMap();\n\nvar x = { id: 1 },\n\ty = { id: 2 };\n\nm.set( x, \"foo\" );\n\nm.has( x );\t\t\t\t\t\t// true\nm.has( y );\t\t\t\t\t\t// false\n```\n\nWeakMaps do not have a `size` property or `clear()` method, nor do they expose any iterators over their keys, values, or entries. So even if you unset the `x` reference, which will remove its entry from `m` upon GC, there is no way to tell. You'll just have to take JavaScript's word for it!\n\nJust like Maps, WeakMaps let you soft-associate information with an object. But they are particularly useful if the object is not one you completely control, such as a DOM element. If the object you're using as a map key can be deleted and should be GC-eligible when it is, then a WeakMap is a more appropriate option.\n\nIt's important to note that a WeakMap only holds its *keys* weakly, not its values. Consider:\n\n```js\nvar m = new WeakMap();\n\nvar x = { id: 1 },\n\ty = { id: 2 },\n\tz = { id: 3 },\n\tw = { id: 4 };\n\nm.set( x, y );\n\nx = null;\t\t\t\t\t\t// { id: 1 } is GC-eligible\ny = null;\t\t\t\t\t\t// { id: 2 } is GC-eligible\n\t\t\t\t\t\t\t\t// only because { id: 1 } is\n\nm.set( z, w );\n\nw = null;\t\t\t\t\t\t// { id: 4 } is not GC-eligible\n```\n\nFor this reason, WeakMaps are in my opinion better named \"WeakKeyMaps.\"\n\n## Sets\n\nA set is a collection of unique values (duplicates are ignored).\n\nThe API for a set is similar to map. 
The `add(..)` method takes the place of the `set(..)` method (somewhat ironically), and there is no `get(..)` method.\n\nConsider:\n\n```js\nvar s = new Set();\n\nvar x = { id: 1 },\n\ty = { id: 2 };\n\ns.add( x );\ns.add( y );\ns.add( x );\n\ns.size;\t\t\t\t\t\t\t// 2\n\ns.delete( y );\ns.size;\t\t\t\t\t\t\t// 1\n\ns.clear();\ns.size;\t\t\t\t\t\t\t// 0\n```\n\nThe `Set(..)` constructor form is similar to `Map(..)`, in that it can receive an iterable, like another set or simply an array of values. However, unlike how `Map(..)` expects *entries* list (array of key/value arrays), `Set(..)` expects a *values* list (array of values):\n\n```js\nvar x = { id: 1 },\n\ty = { id: 2 };\n\nvar s = new Set( [x,y] );\n```\n\nA set doesn't need a `get(..)` because you don't retrieve a value from a set, but rather test if it is present or not, using `has(..)`:\n\n```js\nvar s = new Set();\n\nvar x = { id: 1 },\n\ty = { id: 2 };\n\ns.add( x );\n\ns.has( x );\t\t\t\t\t\t// true\ns.has( y );\t\t\t\t\t\t// false\n```\n\n**Note:** The comparison algorithm in `has(..)` is almost identical to `Object.is(..)` (see Chapter 6), except that `-0` and `0` are treated as the same rather than distinct.\n\n### Set Iterators\n\nSets have the same iterator methods as maps. Their behavior is different for sets, but symmetric with the behavior of map iterators. Consider:\n\n```js\nvar s = new Set();\n\nvar x = { id: 1 },\n\ty = { id: 2 };\n\ns.add( x ).add( y );\n\nvar keys = [ ...s.keys() ],\n\tvals = [ ...s.values() ],\n\tentries = [ ...s.entries() ];\n\nkeys[0] === x;\nkeys[1] === y;\n\nvals[0] === x;\nvals[1] === y;\n\nentries[0][0] === x;\nentries[0][1] === x;\nentries[1][0] === y;\nentries[1][1] === y;\n```\n\nThe `keys()` and `values()` iterators both yield a list of the unique values in the set. The `entries()` iterator yields a list of entry arrays, where both items of the array are the unique set value. 
The default iterator for a set is its `values()` iterator.\n\nThe inherent uniqueness of a set is its most useful trait. For example:\n\n```js\nvar s = new Set( [1,2,3,4,\"1\",2,4,\"5\"] ),\n\tuniques = [ ...s ];\n\nuniques;\t\t\t\t\t\t// [1,2,3,4,\"1\",\"5\"]\n```\n\nSet uniqueness does not allow coercion, so `1` and `\"1\"` are considered distinct values.\n\n## WeakSets\n\nWhereas a WeakMap holds its keys weakly (but its values strongly), a WeakSet holds its values weakly (there aren't really keys).\n\n```js\nvar s = new WeakSet();\n\nvar x = { id: 1 },\n\ty = { id: 2 };\n\ns.add( x );\ns.add( y );\n\nx = null;\t\t\t\t\t\t// `x` is GC-eligible\ny = null;\t\t\t\t\t\t// `y` is GC-eligible\n```\n\n**Warning:** WeakSet values must be objects, not primitive values as is allowed with sets.\n\n## Review\n\nES6 defines a number of useful collections that make working with data in structured ways more efficient and effective.\n\nTypedArrays provide \"view\"s of binary data buffers that align with various integer types, like 8-bit unsigned integers and 32-bit floats. The array access to binary data makes operations much easier to express and maintain, which enables you to more easily work with complex data like video, audio, canvas data, and so on.\n\nMaps are key-value pairs where the key can be an object instead of just a string/primitive. Sets are unique lists of values (of any type).\n\nWeakMaps are maps where the key (object) is weakly held, so that GC is free to collect the entry if it's the last reference to an object. WeakSets are sets where the value is weakly held, again so that GC can remove the entry if it's the last reference to that object.\n"
  },
  {
    "path": "es6 & beyond/ch6.md",
    "content": "# You Don't Know JS: ES6 & Beyond\n# Chapter 6: API Additions\n\nFrom conversions of values to mathematic calculations, ES6 adds many static properties and methods to various built-in natives and objects to help with common tasks. In addition, instances of some of the natives have new capabilities via various new prototype methods.\n\n**Note:** Most of these features can be faithfully polyfilled. We will not dive into such details here, but check out \"ES6 Shim\" (https://github.com/paulmillr/es6-shim/) for standards-compliant shims/polyfills.\n\n## `Array`\n\nOne of the most commonly extended features in JS by various user libraries is the Array type. It should be no surprise that ES6 adds a number of helpers to Array, both static and prototype (instance).\n\n### `Array.of(..)` Static Function\n\nThere's a well known gotcha with the `Array(..)` constructor, which is that if there's only one argument passed, and that argument is a number, instead of making an array of one element with that number value in it, it constructs an empty array with a `length` property equal to the number. This action produces the unfortunate and quirky \"empty slots\" behavior that's reviled about JS arrays.\n\n`Array.of(..)` replaces `Array(..)` as the preferred function-form constructor for arrays, because `Array.of(..)` does not have that special single-number-argument case. Consider:\n\n```js\nvar a = Array( 3 );\na.length;\t\t\t\t\t\t// 3\na[0];\t\t\t\t\t\t\t// undefined\n\nvar b = Array.of( 3 );\nb.length;\t\t\t\t\t\t// 1\nb[0];\t\t\t\t\t\t\t// 3\n\nvar c = Array.of( 1, 2, 3 );\nc.length;\t\t\t\t\t\t// 3\nc;\t\t\t\t\t\t\t\t// [1,2,3]\n```\n\nUnder what circumstances would you want to use `Array.of(..)` instead of just creating an array with literal syntax, like `c = [1,2,3]`? There's two possible cases.\n\nIf you have a callback that's supposed to wrap argument(s) passed to it in an array, `Array.of(..)` fits the bill perfectly. 
That's probably not terribly common, but it may scratch an itch for you.\n\nThe other scenario is if you subclass `Array` (see \"Classes\" in Chapter 3) and want to be able to create and initialize elements in an instance of your subclass, such as:\n\n```js\nclass MyCoolArray extends Array {\n\tsum() {\n\t\treturn this.reduce( function reducer(acc,curr){\n\t\t\treturn acc + curr;\n\t\t}, 0 );\n\t}\n}\n\nvar x = new MyCoolArray( 3 );\nx.length;\t\t\t\t\t\t// 3 -- oops!\nx.sum();\t\t\t\t\t\t// 0 -- oops!\n\nvar y = [3];\t\t\t\t\t// Array, not MyCoolArray\ny.length;\t\t\t\t\t\t// 1\ny.sum();\t\t\t\t\t\t// `sum` is not a function\n\nvar z = MyCoolArray.of( 3 );\nz.length;\t\t\t\t\t\t// 1\nz.sum();\t\t\t\t\t\t// 3\n```\n\nYou can't just (easily) create a constructor for `MyCoolArray` that overrides the behavior of the `Array` parent constructor, because that constructor is necessary to actually create a well-behaving array value (initializing the `this`). The \"inherited\" static `of(..)` method on the `MyCoolArray` subclass provides a nice solution.\n\n### `Array.from(..)` Static Function\n\nAn \"array-like object\" in JavaScript is an object that has a `length` property on it, specifically with an integer value of zero or higher.\n\nThese values have been notoriously frustrating to work with in JS; it's been quite common to need to transform them into an actual array, so that the various `Array.prototype` methods (`map(..)`, `indexOf(..)` etc.) are available to use with it. 
That process usually looks like:\n\n```js\n// array-like object\nvar arrLike = {\n\tlength: 3,\n\t0: \"foo\",\n\t1: \"bar\"\n};\n\nvar arr = Array.prototype.slice.call( arrLike );\n```\n\nAnother common task where `slice(..)` is often used is in duplicating a real array:\n\n```js\nvar arr2 = arr.slice();\n```\n\nIn both cases, the new ES6 `Array.from(..)` method can be a more understandable and graceful -- not to mention less verbose -- approach:\n\n```js\nvar arr = Array.from( arrLike );\n\nvar arrCopy = Array.from( arr );\n```\n\n`Array.from(..)` looks to see if the first argument is an iterable (see \"Iterators\" in Chapter 3), and if so, it uses the iterator to produce values to \"copy\" into the returned array. Because real arrays have an iterator for those values, that iterator is automatically used.\n\nBut if you pass an array-like object as the first argument to `Array.from(..)`, it behaves basically the same as `slice()` (no arguments!) or `apply(..)` does, in that it simply loops over the value, accessing numerically named properties from `0` up to whatever the value of `length` is.\n\nConsider:\n\n```js\nvar arrLike = {\n\tlength: 4,\n\t2: \"foo\"\n};\n\nArray.from( arrLike );\n// [ undefined, undefined, \"foo\", undefined ]\n```\n\nBecause positions `0`, `1`, and `3` didn't exist on `arrLike`, the result was the `undefined` value for each of those slots.\n\nYou could produce a similar outcome like this:\n\n```js\nvar emptySlotsArr = [];\nemptySlotsArr.length = 4;\nemptySlotsArr[2] = \"foo\";\n\nArray.from( emptySlotsArr );\n// [ undefined, undefined, \"foo\", undefined ]\n```\n\n#### Avoiding Empty Slots\n\nThere's a subtle but important difference in the previous snippet between the `emptySlotsArr` and the result of the `Array.from(..)` call. 
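\n\nYou can observe that difference directly with the `in` operator, which reports whether a slot actually exists:\n\n```js\nvar emptySlotsArr = [];\nemptySlotsArr.length = 4;\nemptySlotsArr[2] = \"foo\";\n\nvar realSlotsArr = Array.from( emptySlotsArr );\n\n1 in emptySlotsArr;\t\t\t\t// false -- empty slot\n1 in realSlotsArr;\t\t\t\t// true -- real `undefined` value\n```\n\n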
`Array.from(..)` never produces empty slots.\n\nPrior to ES6, if you wanted to produce an array initialized to a certain length with actual `undefined` values in each slot (no empty slots!), you had to do extra work:\n\n```js\nvar a = Array( 4 );\t\t\t\t\t\t\t\t// four empty slots!\n\nvar b = Array.apply( null, { length: 4 } );\t\t// four `undefined` values\n```\n\nBut `Array.from(..)` now makes this easier:\n\n```js\nvar c = Array.from( { length: 4 } );\t\t\t// four `undefined` values\n```\n\n**Warning:** Using an empty slot array like `a` in the previous snippets would work with some array functions, but others ignore empty slots (like `map(..)`, etc.). You should never intentionally work with empty slots, as it will almost certainly lead to strange/unpredictable behavior in your programs.\n\n#### Mapping\n\nThe `Array.from(..)` utility has another helpful trick up its sleeve. The second argument, if provided, is a mapping callback (almost the same as the regular `Array#map(..)` expects) which is called to map/transform each value from the source to the returned target. Consider:\n\n```js\nvar arrLike = {\n\tlength: 4,\n\t2: \"foo\"\n};\n\nArray.from( arrLike, function mapper(val,idx){\n\tif (typeof val == \"string\") {\n\t\treturn val.toUpperCase();\n\t}\n\telse {\n\t\treturn idx;\n\t}\n} );\n// [ 0, 1, \"FOO\", 3 ]\n```\n\n**Note:** As with other array methods that take callbacks, `Array.from(..)` takes an optional third argument that if set will specify the `this` binding for the callback passed as the second argument. Otherwise, `this` will be `undefined`.\n\nSee \"TypedArrays\" in Chapter 5 for an example of using `Array.from(..)` in translating values from an array of 8-bit values to an array of 16-bit values.\n\n### Creating Arrays and Subtypes\n\nIn the last couple of sections, we've discussed `Array.of(..)` and `Array.from(..)`, both of which create a new array in a similar way to a constructor. But what do they do in subclasses? 
Do they create instances of the base `Array` or the derived subclass?\n\n```js\nclass MyCoolArray extends Array {\n\t..\n}\n\nMyCoolArray.from( [1, 2] ) instanceof MyCoolArray;\t// true\n\nArray.from(\n\tMyCoolArray.from( [1, 2] )\n) instanceof MyCoolArray;\t\t\t\t\t\t\t// false\n```\n\nBoth `of(..)` and `from(..)` use the constructor that they're accessed from to construct the array. So if you use the base `Array.of(..)` you'll get an `Array` instance, but if you use `MyCoolArray.of(..)`, you'll get a `MyCoolArray` instance.\n\nIn \"Classes\" in Chapter 3, we covered the `@@species` setting which all the built-in classes (like `Array`) have defined, which is used by any prototype methods if they create a new instance. `slice(..)` is a great example:\n\n```js\nvar x = new MyCoolArray( 1, 2, 3 );\n\nx.slice( 1 ) instanceof MyCoolArray;\t\t\t\t// true\n```\n\nGenerally, that default behavior will probably be desired, but as we discussed in Chapter 3, you *can* override if you want:\n\n```js\nclass MyCoolArray extends Array {\n\t// force `species` to be parent constructor\n\tstatic get [Symbol.species]() { return Array; }\n}\n\nvar x = new MyCoolArray( 1, 2, 3 );\n\nx.slice( 1 ) instanceof MyCoolArray;\t\t\t\t// false\nx.slice( 1 ) instanceof Array;\t\t\t\t\t\t// true\n```\n\nIt's important to note that the `@@species` setting is only used for the prototype methods, like `slice(..)`. It's not used by `of(..)` and `from(..)`; they both just use the `this` binding (whatever constructor is used to make the reference). 
Consider:\n\n```js\nclass MyCoolArray extends Array {\n\t// force `species` to be parent constructor\n\tstatic get [Symbol.species]() { return Array; }\n}\n\nvar x = new MyCoolArray( 1, 2, 3 );\n\nMyCoolArray.from( x ) instanceof MyCoolArray;\t\t// true\nMyCoolArray.of( [2, 3] ) instanceof MyCoolArray;\t// true\n```\n\n### `copyWithin(..)` Prototype Method\n\n`Array#copyWithin(..)` is a new mutator method available to all arrays (including Typed Arrays; see Chapter 5). `copyWithin(..)` copies a portion of an array to another location in the same array, overwriting whatever was there before.\n\nThe arguments are *target* (the index to copy to), *start* (the inclusive index to start the copying from), and optionally *end* (the exclusive index to stop copying). If any of the arguments are negative, they're taken to be relative from the end of the array.\n\nConsider:\n\n```js\n[1,2,3,4,5].copyWithin( 3, 0 );\t\t\t// [1,2,3,1,2]\n\n[1,2,3,4,5].copyWithin( 3, 0, 1 );\t\t// [1,2,3,1,5]\n\n[1,2,3,4,5].copyWithin( 0, -2 );\t\t// [4,5,3,4,5]\n\n[1,2,3,4,5].copyWithin( 0, -2, -1 );\t// [4,2,3,4,5]\n```\n\nThe `copyWithin(..)` method does not extend the array's length, as the first example in the previous snippet shows. Copying simply stops when the end of the array is reached.\n\nContrary to what you might think, the copying doesn't always go in left-to-right (ascending index) order. It's possible this would result in repeatedly copying an already copied value if the from and target ranges overlap, which is presumably not desired behavior.\n\nSo internally, the algorithm avoids this case by copying in reverse order to avoid that gotcha. 
Consider:\n\n```js\n[1,2,3,4,5].copyWithin( 2, 1 );\t\t// ???\n```\n\nIf the algorithm was strictly moving left to right, then the `2` should be copied to overwrite the `3`, then *that* copied `2` should be copied to overwrite `4`, then *that* copied `2` should be copied to overwrite `5`, and you'd end up with `[1,2,2,2,2]`.\n\nInstead, the copying algorithm reverses direction and copies `4` to overwrite `5`, then copies `3` to overwrite `4`, then copies `2` to overwrite `3`, and the final result is `[1,2,2,3,4]`. That's probably more \"correct\" in terms of expectation, but it can be confusing if you're only thinking about the copying algorithm in a naive left-to-right fashion.\n\n### `fill(..)` Prototype Method\n\nFilling an existing array entirely (or partially) with a specified value is natively supported as of ES6 with the `Array#fill(..)` method:\n\n```js\nvar a = Array( 4 ).fill( undefined );\na;\n// [undefined,undefined,undefined,undefined]\n```\n\n`fill(..)` optionally takes *start* and *end* parameters, which indicate a subset portion of the array to fill, such as:\n\n```js\nvar a = [ null, null, null, null ].fill( 42, 1, 3 );\n\na;\t\t\t\t\t\t\t\t\t// [null,42,42,null]\n```\n\n### `find(..)` Prototype Method\n\nThe most common way to search for a value in an array has generally been the `indexOf(..)` method, which returns the index the value is found at or `-1` if not found:\n\n```js\nvar a = [1,2,3,4,5];\n\n(a.indexOf( 3 ) != -1);\t\t\t\t// true\n(a.indexOf( 7 ) != -1);\t\t\t\t// false\n\n(a.indexOf( \"2\" ) != -1);\t\t\t// false\n```\n\nThe `indexOf(..)` comparison requires a strict `===` match, so a search for `\"2\"` fails to find a value of `2`, and vice versa. There's no way to override the matching algorithm for `indexOf(..)`. 
It's also unfortunate/ungraceful to have to make the manual comparison to the `-1` value.\n\n**Tip:** See the *Types & Grammar* title of this series for an interesting (and controversially confusing) technique to work around the `-1` ugliness with the `~` operator.\n\nSince ES5, the most common workaround to have control over the matching logic has been the `some(..)` method. It works by calling a function callback for each element, until one of those calls returns a `true`/truthy value, and then it stops. Because you get to define the callback function, you have full control over how a match is made:\n\n```js\nvar a = [1,2,3,4,5];\n\na.some( function matcher(v){\n\treturn v == \"2\";\n} );\t\t\t\t\t\t\t\t// true\n\na.some( function matcher(v){\n\treturn v == 7;\n} );\t\t\t\t\t\t\t\t// false\n```\n\nBut the downside to this approach is that you only get the `true`/`false` indicating if a suitably matched value was found, but not what the actual matched value was.\n\nES6's `find(..)` addresses this. It works basically the same as `some(..)`, except that once the callback returns a `true`/truthy value, the actual array value is returned:\n\n```js\nvar a = [1,2,3,4,5];\n\na.find( function matcher(v){\n\treturn v == \"2\";\n} );\t\t\t\t\t\t\t\t// 2\n\na.find( function matcher(v){\n\treturn v == 7;\n} );\t\t\t\t\t\t\t\t// undefined\n```\n\nUsing a custom `matcher(..)` function also lets you match against complex values like objects:\n\n```js\nvar points = [\n\t{ x: 10, y: 20 },\n\t{ x: 20, y: 30 },\n\t{ x: 30, y: 40 },\n\t{ x: 40, y: 50 },\n\t{ x: 50, y: 60 }\n];\n\npoints.find( function matcher(point) {\n\treturn (\n\t\tpoint.x % 3 == 0 &&\n\t\tpoint.y % 4 == 0\n\t);\n} );\t\t\t\t\t\t\t\t// { x: 30, y: 40 }\n```\n\n**Note:** As with other array methods that take callbacks, `find(..)` takes an optional second argument that if set will specify the `this` binding for the callback passed as the first argument. 
Otherwise, `this` will be `undefined`.\n\n### `findIndex(..)` Prototype Method\n\nWhile the previous section illustrates how `some(..)` yields a boolean result for a search of an array, and `find(..)` yields the matched value itself from the array search, there's also a need for finding the positional index of the matched value.\n\n`indexOf(..)` does that, but there's no control over its matching logic; it always uses `===` strict equality. So ES6's `findIndex(..)` is the answer:\n\n```js\nvar points = [\n\t{ x: 10, y: 20 },\n\t{ x: 20, y: 30 },\n\t{ x: 30, y: 40 },\n\t{ x: 40, y: 50 },\n\t{ x: 50, y: 60 }\n];\n\npoints.findIndex( function matcher(point) {\n\treturn (\n\t\tpoint.x % 3 == 0 &&\n\t\tpoint.y % 4 == 0\n\t);\n} );\t\t\t\t\t\t\t\t// 2\n\npoints.findIndex( function matcher(point) {\n\treturn (\n\t\tpoint.x % 6 == 0 &&\n\t\tpoint.y % 7 == 0\n\t);\n} );\t\t\t\t\t\t\t\t// -1\n```\n\nDon't use `findIndex(..) != -1` (the way it's always been done with `indexOf(..)`) to get a boolean from the search, because `some(..)` already yields the `true`/`false` you want. And don't do `a[ a.findIndex(..) ]` to get the matched value, because that's what `find(..)` accomplishes. And finally, use `indexOf(..)` if you need the index of a strict match, or `findIndex(..)` if you need the index of a more customized match.\n\n**Note:** As with other array methods that take callbacks, `findIndex(..)` takes an optional second argument that if set will specify the `this` binding for the callback passed as the first argument. Otherwise, `this` will be `undefined`.\n\n### `entries()`, `values()`, `keys()` Prototype Methods\n\nIn Chapter 3, we illustrated how data structures can provide a patterned item-by-item enumeration of their values, via an iterator. We then expounded on this approach in Chapter 5, as we explored how the new ES6 collections (Map, Set, etc.) 
provide several methods for producing different kinds of iterations.\n\nBecause it's not new to ES6, `Array` might not be thought of traditionally as a \"collection,\" but it is one in the sense that it provides these same iterator methods: `entries()`, `values()`, and `keys()`. Consider:\n\n```js\nvar a = [1,2,3];\n\n[...a.values()];\t\t\t\t\t// [1,2,3]\n[...a.keys()];\t\t\t\t\t\t// [0,1,2]\n[...a.entries()];\t\t\t\t\t// [ [0,1], [1,2], [2,3] ]\n\n[...a[Symbol.iterator]()];\t\t\t// [1,2,3]\n```\n\nJust like with `Set`, the default `Array` iterator is the same as what `values()` returns.\n\nIn \"Avoiding Empty Slots\" earlier in this chapter, we illustrated how `Array.from(..)` treats empty slots in an array as just being present slots with `undefined` in them. That's actually because under the covers, the array iterators behave that way:\n\n```js\nvar a = [];\na.length = 3;\na[1] = 2;\n\n[...a.values()];\t\t// [undefined,2,undefined]\n[...a.keys()];\t\t\t// [0,1,2]\n[...a.entries()];\t\t// [ [0,undefined], [1,2], [2,undefined] ]\n```\n\n## `Object`\n\nA few additional static helpers have been added to `Object`. Traditionally, functions of this sort have been seen as focused on the behaviors/capabilities of object values.\n\nHowever, starting with ES6, `Object` static functions will also be for general-purpose global APIs of any sort that don't already belong more naturally in some other location (i.e., `Array.from(..)`).\n\n### `Object.is(..)` Static Function\n\nThe `Object.is(..)` static function makes value comparisons in an even more strict fashion than the `===` comparison.\n\n`Object.is(..)` invokes the underlying `SameValue` algorithm (ES6 spec, section 7.2.9). 
The `SameValue` algorithm is basically the same as the `===` Strict Equality Comparison Algorithm (ES6 spec, section 7.2.13), with two important exceptions.\n\nConsider:\n\n```js\nvar x = NaN, y = 0, z = -0;\n\nx === x;\t\t\t\t\t\t\t// false\ny === z;\t\t\t\t\t\t\t// true\n\nObject.is( x, x );\t\t\t\t\t// true\nObject.is( y, z );\t\t\t\t\t// false\n```\n\nYou should continue to use `===` for strict equality comparisons; `Object.is(..)` shouldn't be thought of as a replacement for the operator. However, in cases where you're trying to strictly identify a `NaN` or `-0` value, `Object.is(..)` is now the preferred option.\n\n**Note:** ES6 also adds a `Number.isNaN(..)` utility (discussed later in this chapter) which may be a slightly more convenient test; you may prefer `Number.isNaN(x)` over `Object.is(x,NaN)`. You *can* accurately test for `-0` with a clumsy `x == 0 && 1 / x === -Infinity`, but in this case `Object.is(x,-0)` is much better.\n\n### `Object.getOwnPropertySymbols(..)` Static Function\n\nThe \"Symbols\" section in Chapter 2 discusses the new Symbol primitive value type in ES6.\n\nSymbols are likely going to be mostly used as special (meta) properties on objects. So the `Object.getOwnPropertySymbols(..)` utility was introduced, which retrieves only the symbol properties directly on an object:\n\n```js\nvar o = {\n\tfoo: 42,\n\t[ Symbol( \"bar\" ) ]: \"hello world\",\n\tbaz: true\n};\n\nObject.getOwnPropertySymbols( o );\t// [ Symbol(bar) ]\n```\n\n### `Object.setPrototypeOf(..)` Static Function\n\nAlso in Chapter 2, we mentioned the `Object.setPrototypeOf(..)` utility, which (unsurprisingly) sets the `[[Prototype]]` of an object for the purposes of *behavior delegation* (see the *this & Object Prototypes* title of this series). Consider:\n\n```js\nvar o1 = {\n\tfoo() { console.log( \"foo\" ); }\n};\nvar o2 = {\n\t// .. 
o2's definition ..\n};\n\nObject.setPrototypeOf( o2, o1 );\n\n// delegates to `o1.foo()`\no2.foo();\t\t\t\t\t\t\t// foo\n```\n\nAlternatively:\n\n```js\nvar o1 = {\n\tfoo() { console.log( \"foo\" ); }\n};\n\nvar o2 = Object.setPrototypeOf( {\n\t// .. o2's definition ..\n}, o1 );\n\n// delegates to `o1.foo()`\no2.foo();\t\t\t\t\t\t\t// foo\n```\n\nIn both previous snippets, the relationship between `o2` and `o1` appears at the end of the `o2` definition. More commonly, the relationship between an `o2` and `o1` is specified at the top of the `o2` definition, as it is with classes, and also with `__proto__` in object literals (see \"Setting `[[Prototype]]`\" in Chapter 2).\n\n**Warning:** Setting a `[[Prototype]]` right after object creation is reasonable, as shown. But changing it much later is generally not a good idea and will usually lead to more confusion than clarity.\n\n### `Object.assign(..)` Static Function\n\nMany JavaScript libraries/frameworks provide utilities for copying/mixing one object's properties into another (e.g., jQuery's `extend(..)`). There are various nuanced differences between these different utilities, such as whether a property with value `undefined` is ignored or not.\n\nES6 adds `Object.assign(..)`, which is a simplified version of these algorithms. The first argument is the *target*, and any other arguments passed are the *sources*, which will be processed in listed order. For each source, its enumerable and own (e.g., not \"inherited\") keys, including symbols, are copied as if by plain `=` assignment. 
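\n\nThat \"as if by `=`\" detail is observable: if the target already has a setter defined for one of the copied keys, `Object.assign(..)` invokes that setter rather than overwriting it. A quick sketch:\n\n```js\nvar lastAssigned;\n\nvar target = {\n\t// setter-only property\n\tset a(val) { lastAssigned = val; }\n};\n\nObject.assign( target, { a: 42 } );\n\nlastAssigned;\t\t\t\t\t// 42\ntarget.a;\t\t\t\t\t\t// undefined -- `a` has only a setter\n```\n\n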
`Object.assign(..)` returns the target object.\n\nConsider this object setup:\n\n```js\nvar target = {},\n\to1 = { a: 1 }, o2 = { b: 2 },\n\to3 = { c: 3 }, o4 = { d: 4 };\n\n// setup read-only property\nObject.defineProperty( o3, \"e\", {\n\tvalue: 5,\n\tenumerable: true,\n\twritable: false,\n\tconfigurable: false\n} );\n\n// setup non-enumerable property\nObject.defineProperty( o3, \"f\", {\n\tvalue: 6,\n\tenumerable: false\n} );\n\no3[ Symbol( \"g\" ) ] = 7;\n\n// setup non-enumerable symbol\nObject.defineProperty( o3, Symbol( \"h\" ), {\n\tvalue: 8,\n\tenumerable: false\n} );\n\nObject.setPrototypeOf( o3, o4 );\n```\n\nOnly the properties `a`, `b`, `c`, `e`, and `Symbol(\"g\")` will be copied to `target`:\n\n```js\nObject.assign( target, o1, o2, o3 );\n\ntarget.a;\t\t\t\t\t\t\t// 1\ntarget.b;\t\t\t\t\t\t\t// 2\ntarget.c;\t\t\t\t\t\t\t// 3\n\nObject.getOwnPropertyDescriptor( target, \"e\" );\n// { value: 5, writable: true, enumerable: true,\n//   configurable: true }\n\nObject.getOwnPropertySymbols( target );\n// [Symbol(\"g\")]\n```\n\nThe `d`, `f`, and `Symbol(\"h\")` properties are omitted from copying; non-enumerable properties and non-owned properties are all excluded from the assignment. Also, `e` is copied as a normal property assignment, not duplicated as a read-only property.\n\nIn an earlier section, we showed using `setPrototypeOf(..)` to set up a `[[Prototype]]` relationship between an `o2` and `o1` object. There's another form that leverages `Object.assign(..)`:\n\n```js\nvar o1 = {\n\tfoo() { console.log( \"foo\" ); }\n};\n\nvar o2 = Object.assign(\n\tObject.create( o1 ),\n\t{\n\t\t// .. o2's definition ..\n\t}\n);\n\n// delegates to `o1.foo()`\no2.foo();\t\t\t\t\t\t\t// foo\n```\n\n**Note:** `Object.create(..)` is the ES5 standard utility that creates an empty object that is `[[Prototype]]`-linked. 
See the *this & Object Prototypes* title of this series for more information.\n\n## `Math`\n\nES6 adds several new mathematical utilities that fill in holes or aid with common operations. All of these can be manually calculated, but most of them are now defined natively so that in some cases the JS engine can either more optimally perform the calculations, or perform them with better decimal precision than their manual counterparts.\n\nAsm.js/transpiled JS code (see the *Async & Performance* title of this series) is the more likely consumer of many of these utilities, rather than direct developers.\n\nTrigonometry:\n\n* `cosh(..)` - Hyperbolic cosine\n* `acosh(..)` - Hyperbolic arccosine\n* `sinh(..)` - Hyperbolic sine\n* `asinh(..)` - Hyperbolic arcsine\n* `tanh(..)` - Hyperbolic tangent\n* `atanh(..)` - Hyperbolic arctangent\n* `hypot(..)` - The square root of the sum of the squares (i.e., the generalized Pythagorean theorem)\n\nArithmetic:\n\n* `cbrt(..)` - Cube root\n* `clz32(..)` - Count leading zeros in 32-bit binary representation\n* `expm1(..)` - The same as `exp(x) - 1`\n* `log2(..)` - Binary logarithm (log base 2)\n* `log10(..)` - Log base 10\n* `log1p(..)` - The same as `log(x + 1)`\n* `imul(..)` - 32-bit integer multiplication of two numbers\n\nMeta:\n\n* `sign(..)` - Returns the sign of the number\n* `trunc(..)` - Returns only the integer part of a number\n* `fround(..)` - Rounds to nearest 32-bit (single precision) floating-point value\n\n## `Number`\n\nImportantly, for your program to properly work, it must accurately handle numbers. 
ES6 adds some additional properties and functions to assist with common numeric operations.\n\nTwo additions to `Number` are just references to the preexisting globals: `Number.parseInt(..)` and `Number.parseFloat(..)`.\n\n### Static Properties\n\nES6 adds some helpful numeric constants as static properties:\n\n* `Number.EPSILON` - The difference between `1` and the smallest value greater than `1` that can actually be represented: `2^-52` (see Chapter 2 of the *Types & Grammar* title of this series regarding using this value as a tolerance for imprecision in floating-point arithmetic)\n* `Number.MAX_SAFE_INTEGER` - The highest integer that can \"safely\" be represented unambiguously in a JS number value: `2^53 - 1`\n* `Number.MIN_SAFE_INTEGER` - The lowest integer that can \"safely\" be represented unambiguously in a JS number value: `-(2^53 - 1)` or `(-2)^53 + 1`.\n\n**Note:** See Chapter 2 of the *Types & Grammar* title of this series for more information about \"safe\" integers.\n\n### `Number.isNaN(..)` Static Function\n\nThe standard global `isNaN(..)` utility has been broken since its inception, in that it returns `true` for things that are not numbers, not just for the actual `NaN` value, because it coerces the argument to a number type (which can falsely result in `NaN`). ES6 adds a fixed utility `Number.isNaN(..)` that works as it should:\n\n```js\nvar a = NaN, b = \"NaN\", c = 42;\n\nisNaN( a );\t\t\t\t\t\t\t// true\nisNaN( b );\t\t\t\t\t\t\t// true -- oops!\nisNaN( c );\t\t\t\t\t\t\t// false\n\nNumber.isNaN( a );\t\t\t\t\t// true\nNumber.isNaN( b );\t\t\t\t\t// false -- fixed!\nNumber.isNaN( c );\t\t\t\t\t// false\n```\n\n### `Number.isFinite(..)` Static Function\n\nThere's a temptation to look at a function name like `isFinite(..)` and assume it's simply \"not infinite\". That's not quite correct, though. There's more nuance to this new ES6 utility. 
Consider:\n\n```js\nvar a = NaN, b = Infinity, c = 42;\n\nNumber.isFinite( a );\t\t\t\t// false\nNumber.isFinite( b );\t\t\t\t// false\n\nNumber.isFinite( c );\t\t\t\t// true\n```\n\nThe standard global `isFinite(..)` coerces its argument, but `Number.isFinite(..)` omits the coercive behavior:\n\n```js\nvar a = \"42\";\n\nisFinite( a );\t\t\t\t\t\t// true\nNumber.isFinite( a );\t\t\t\t// false\n```\n\nYou may still prefer the coercion, in which case using the global `isFinite(..)` is a valid choice. Alternatively, and perhaps more sensibly, you can use `Number.isFinite(+x)`, which explicitly coerces `x` to a number before passing it in (see Chapter 4 of the *Types & Grammar* title of this series).\n\n### Integer-Related Static Functions\n\nJavaScript number values are always floating point (IEEE-754). So the notion of determining if a number is an \"integer\" is not about checking its type, because JS makes no such distinction.\n\nInstead, you need to check if there's any non-zero decimal portion of the value. The easiest way to do that has commonly been:\n\n```js\nx === Math.floor( x );\n```\n\nES6 adds a `Number.isInteger(..)` helper utility that potentially can determine this quality slightly more efficiently:\n\n```js\nNumber.isInteger( 4 );\t\t\t\t// true\nNumber.isInteger( 4.2 );\t\t\t// false\n```\n\n**Note:** In JavaScript, there's no difference between `4`, `4.`, `4.0`, or `4.0000`. All of these would be considered an \"integer\", and would thus yield `true` from `Number.isInteger(..)`.\n\nIn addition, `Number.isInteger(..)` filters out some clearly not-integer values that `x === Math.floor(x)` could potentially mix up:\n\n```js\nNumber.isInteger( NaN );\t\t\t// false\nNumber.isInteger( Infinity );\t\t// false\n```\n\nWorking with \"integers\" is sometimes an important bit of information, as it can simplify certain kinds of algorithms. 
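For pre-ES6 environments, a rough equivalent of `Number.isInteger(..)` can be sketched from the checks just discussed (`myIsInteger(..)` is a hypothetical name; the built-in may be implemented more efficiently):

```js
function myIsInteger(x) {
	// reject non-numbers, NaN, and +/- Infinity,
	// then check for any non-zero decimal portion
	return typeof x == "number" &&
		isFinite( x ) &&
		Math.floor( x ) === x;
}

myIsInteger( 4 );					// true
myIsInteger( 4.2 );					// false
myIsInteger( NaN );					// false
myIsInteger( Infinity );			// false
```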
JS code by itself will not run faster just from filtering for only integers, but there are optimization techniques the engine can take (e.g., asm.js) when only integers are being used.\n\nBecause of `Number.isInteger(..)`'s handling of `NaN` and `Infinity` values, defining an `isFloat(..)` utility would not be as simple as just `!Number.isInteger(..)`. You'd need to do something like:\n\n```js\nfunction isFloat(x) {\n\treturn Number.isFinite( x ) && !Number.isInteger( x );\n}\n\nisFloat( 4.2 );\t\t\t\t\t\t// true\nisFloat( 4 );\t\t\t\t\t\t// false\n\nisFloat( NaN );\t\t\t\t\t\t// false\nisFloat( Infinity );\t\t\t\t// false\n```\n\n**Note:** It may seem strange, but `Infinity` should be considered neither an integer nor a float.\n\nES6 also defines a `Number.isSafeInteger(..)` utility, which checks to make sure the value is both an integer and within the range of `Number.MIN_SAFE_INTEGER`-`Number.MAX_SAFE_INTEGER` (inclusive).\n\n```js\nvar x = Math.pow( 2, 53 ),\n\ty = Math.pow( -2, 53 );\n\nNumber.isSafeInteger( x - 1 );\t\t// true\nNumber.isSafeInteger( y + 1 );\t\t// true\n\nNumber.isSafeInteger( x );\t\t\t// false\nNumber.isSafeInteger( y );\t\t\t// false\n```\n\n## `String`\n\nStrings already have quite a few helpers prior to ES6, but even more have been added to the mix.\n\n### Unicode Functions\n\n\"Unicode-Aware String Operations\" in Chapter 2 discusses `String.fromCodePoint(..)`, `String#codePointAt(..)`, and `String#normalize(..)` in detail. 
They have been added to improve Unicode support in JS string values.\n\n```js\nString.fromCodePoint( 0x1d49e );\t\t\t// \"𝒞\"\n\n\"ab𝒞d\".codePointAt( 2 ).toString( 16 );\t\t// \"1d49e\"\n```\n\nThe `normalize(..)` string prototype method is used to perform Unicode normalizations that either combine characters with adjacent \"combining marks\" or decompose combined characters.\n\nGenerally, the normalization won't have a visible effect on how the string renders, but it does change the underlying contents of the string, which can affect how things like the `length` property are reported, as well as how character access by position behaves:\n\n```js\nvar s1 = \"e\\u0301\";\ns1.length;\t\t\t\t\t\t\t// 2\n\nvar s2 = s1.normalize();\ns2.length;\t\t\t\t\t\t\t// 1\ns2 === \"\\xE9\";\t\t\t\t\t\t// true\n```\n\n`normalize(..)` takes an optional argument that specifies the normalization form to use. This argument must be one of the following four values: `\"NFC\"` (default), `\"NFD\"`, `\"NFKC\"`, or `\"NFKD\"`.\n\n**Note:** Normalization forms and their effects on strings are well beyond the scope of what we'll discuss here. See \"Unicode Normalization Forms\" (http://www.unicode.org/reports/tr15/) for more information.\n\n### `String.raw(..)` Static Function\n\nThe `String.raw(..)` utility is provided as a built-in tag function to use with template string literals (see Chapter 2) for obtaining the raw string value without any processing of escape sequences.\n\nThis function will almost never be called manually, but will be used with tagged template literals:\n\n```js\nvar str = \"bc\";\n\nString.raw`\\ta${str}d\\xE9`;\n// \"\\tabcd\\xE9\", not \"\tabcdé\"\n```\n\nIn the resultant string, `\\` and `t` are separate raw characters, not the one escape sequence character `\\t`. 
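Comparing lengths makes the difference concrete:

```js
"\ta".length;				// 2 -- the tab character plus "a"
String.raw`\ta`.length;		// 3 -- "\", "t", and "a"
```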
The same is true with the Unicode escape sequence.\n\n### `repeat(..)` Prototype Function\n\nIn languages like Python and Ruby, you can repeat a string as:\n\n```js\n\"foo\" * 3;\t\t\t\t\t\t\t// \"foofoofoo\"\n```\n\nThat doesn't work in JS, because `*` multiplication is only defined for numbers, and thus `\"foo\"` coerces to the `NaN` number.\n\nHowever, ES6 defines a string prototype method `repeat(..)` to accomplish the task:\n\n```js\n\"foo\".repeat( 3 );\t\t\t\t\t// \"foofoofoo\"\n```\n\n### String Inspection Functions\n\nIn addition to `String#indexOf(..)` and `String#lastIndexOf(..)` from prior to ES6, three new methods for searching/inspection have been added: `startsWith(..)`, `endsWith(..)`, and `includes(..)`.\n\n```js\nvar palindrome = \"step on no pets\";\n\npalindrome.startsWith( \"step on\" );\t// true\npalindrome.startsWith( \"on\", 5 );\t// true\n\npalindrome.endsWith( \"no pets\" );\t// true\npalindrome.endsWith( \"no\", 10 );\t// true\n\npalindrome.includes( \"on\" );\t\t// true\npalindrome.includes( \"on\", 6 );\t\t// false\n```\n\nFor all the string search/inspection methods, if you look for an empty string `\"\"`, it will either be found at the beginning or the end of the string.\n\n**Warning:** These methods will not by default accept a regular expression for the search string. 
See \"Regular Expression Symbols\" in Chapter 7 for information about disabling the `isRegExp` check that is performed on this first argument.\n\n## Review\n\nES6 adds many extra API helpers on the various built-in native objects:\n\n* `Array` adds `of(..)` and `from(..)` static functions, as well as prototype functions like `copyWithin(..)` and `fill(..)`.\n* `Object` adds static functions like `is(..)` and `assign(..)`.\n* `Math` adds static functions like `acosh(..)` and `clz32(..)`.\n* `Number` adds static properties like `Number.EPSILON`, as well as static functions like `Number.isFinite(..)`.\n* `String` adds static functions like `String.fromCodePoint(..)` and `String.raw(..)`, as well as prototype functions like `repeat(..)` and `includes(..)`.\n\nMost of these additions can be polyfilled (see ES6 Shim), and were inspired by utilities in common JS libraries/frameworks.\n"
  },
  {
    "path": "es6 & beyond/ch7.md",
    "content": "# You Don't Know JS: ES6 & Beyond\n# Chapter 7: Meta Programming\n\nMeta programming is programming where the operation targets the behavior of the program itself. In other words, it's programming the programming of your program. Yeah, a mouthful, huh?\n\nFor example, if you probe the relationship between one object `a` and another `b` -- are they `[[Prototype]]` linked? -- using `a.isPrototypeOf(b)`, this is commonly referred to as introspection, a form of meta programming. Macros (which don't exist in JS, yet) --  where the code modifies itself at compile time -- are another obvious example of meta programming. Enumerating the keys of an object with a `for..in` loop, or checking if an object is an *instance of* a \"class constructor\", are other common meta programming tasks.\n\nMeta programming focuses on one or more of the following: code inspecting itself, code modifying itself, or code modifying default language behavior so other code is affected.\n\nThe goal of meta programming is to leverage the language's own intrinsic capabilities to make the rest of your code more descriptive, expressive, and/or flexible. Because of the *meta* nature of meta programming, it's somewhat difficult to put a more precise definition on it than that. The best way to understand meta programming is to see it through examples.\n\nES6 adds several new forms/features for meta programming on top of what JS already had.\n\n## Function Names\n\nThere are cases where your code may want to introspect on itself and ask what the name of some function is. If you ask what a function's name is, the answer is surprisingly somewhat ambiguous. Consider:\n\n```js\nfunction daz() {\n\t// ..\n}\n\nvar obj = {\n\tfoo: function() {\n\t\t// ..\n\t},\n\tbar: function baz() {\n\t\t// ..\n\t},\n\tbam: daz,\n\tzim() {\n\t\t// ..\n\t}\n};\n```\n\nIn this previous snippet, \"what is the name of `obj.foo()`\" is slightly nuanced. Is it `\"foo\"`, `\"\"`, or `undefined`? 
And what about `obj.bar()` -- is it named `\"bar\"` or `\"baz\"`? Is `obj.bam()` named `\"bam\"` or `\"daz\"`? What about `obj.zim()`?\n\nMoreover, what about functions which are passed as callbacks, like:\n\n```js\nfunction foo(cb) {\n\t// what is the name of `cb()` here?\n}\n\nfoo( function(){\n\t// I'm anonymous!\n} );\n```\n\nThere are quite a few ways that functions can be expressed in programs, and it's not always clear and unambiguous what the \"name\" of that function should be.\n\nMore importantly, we need to distinguish whether the \"name\" of a function refers to its `name` property -- yes, functions have a property called `name` -- or whether it refers to the lexical binding name, such as `bar` in `function bar() { .. }`.\n\nThe lexical binding name is what you use for things like recursion:\n\n```js\nfunction foo(i) {\n\tif (i < 10) return foo( i * 2 );\n\treturn i;\n}\n```\n\nThe `name` property is what you'd use for meta programming purposes, so that's what we'll focus on in this discussion.\n\nThe confusion comes because by default, the lexical name a function has (if any) is also set as its `name` property. Actually there was no official requirement for that behavior by the ES5 (and prior) specifications. The setting of the `name` property was nonstandard but still fairly reliable. As of ES6, it has been standardized.\n\n**Tip:** If a function has a `name` value assigned, that's typically the name used in stack traces in developer tools.\n\n### Inferences\n\nBut what happens to the `name` property if a function has no lexical name?\n\nAs of ES6, there are now inference rules which can determine a sensible `name` property value to assign a function even if that function doesn't have a lexical name to use.\n\nConsider:\n\n```js\nvar abc = function() {\n\t// ..\n};\n\nabc.name;\t\t\t\t// \"abc\"\n```\n\nHad we given the function a lexical name like `abc = function def() { .. }`, the `name` property would of course be `\"def\"`. 
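To illustrate:

```js
var abc = function def() {
	// ..
};

// the lexical name wins over any inference
abc.name;				// "def"
```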
But in the absence of the lexical name, intuitively the `\"abc\"` name seems appropriate.\n\nHere are other forms that will infer a name (or not) in ES6:\n\n```js\n(function(){ .. });\t\t\t\t\t// name:\n(function*(){ .. });\t\t\t\t// name:\nwindow.foo = function(){ .. };\t\t// name:\n\nclass Awesome {\n\tconstructor() { .. }\t\t\t// name: Awesome\n\tfunny() { .. }\t\t\t\t\t// name: funny\n}\n\nvar c = class Awesome { .. };\t\t// name: Awesome\n\nvar o = {\n\tfoo() { .. },\t\t\t\t\t// name: foo\n\t*bar() { .. },\t\t\t\t\t// name: bar\n\tbaz: () => { .. },\t\t\t\t// name: baz\n\tbam: function(){ .. },\t\t\t// name: bam\n\tget qux() { .. },\t\t\t\t// name: get qux\n\tset fuz(v) { .. },\t\t\t\t// name: set fuz\n\t[\"b\" + \"iz\"]:\n\t\tfunction(){ .. },\t\t\t// name: biz\n\t[Symbol( \"buz\" )]:\n\t\tfunction(){ .. }\t\t\t// name: [buz]\n};\n\nvar x = o.foo.bind( o );\t\t\t// name: bound foo\n(function(){ .. }).bind( o );\t\t// name: bound\n\nexport default function() { .. }\t// name: default\n\nvar y = new Function();\t\t\t\t// name: anonymous\nvar GeneratorFunction =\n\tfunction*(){}.__proto__.constructor;\nvar z = new GeneratorFunction();\t// name: anonymous\n```\n\nThe `name` property is not writable by default, but it is configurable, meaning you can use `Object.defineProperty(..)` to manually change it if so desired.\n\n## Meta Properties\n\nIn the \"`new.target`\" section of Chapter 3, we introduced a concept new to JS in ES6: the meta property. As the name suggests, meta properties are intended to provide special meta information in the form of a property access that would otherwise not have been possible.\n\nIn the case of `new.target`, the keyword `new` serves as the context for a property access. Clearly `new` is itself not an object, which makes this capability special. 
However, when `new.target` is used inside a constructor call (a function/method invoked with `new`), `new` becomes a virtual context, so that `new.target` can refer to the target constructor that `new` invoked.\n\nThis is a clear example of a meta programming operation, as the intent is to determine from inside a constructor call what the original `new` target was, generally for the purposes of introspection (examining typing/structure) or static property access.\n\nFor example, you may want to have different behavior in a constructor depending on if it's directly invoked or invoked via a child class:\n\n```js\nclass Parent {\n\tconstructor() {\n\t\tif (new.target === Parent) {\n\t\t\tconsole.log( \"Parent instantiated\" );\n\t\t}\n\t\telse {\n\t\t\tconsole.log( \"A child instantiated\" );\n\t\t}\n\t}\n}\n\nclass Child extends Parent {}\n\nvar a = new Parent();\n// Parent instantiated\n\nvar b = new Child();\n// A child instantiated\n```\n\nThere's a slight nuance here, which is that the `constructor()` inside the `Parent` class definition is actually given the lexical name of the class (`Parent`), even though the syntax implies that the class is a separate entity from the constructor.\n\n**Warning:** As with all meta programming techniques, be careful of creating code that's too clever for your future self or others maintaining your code to understand. Use these tricks with caution.\n\n## Well Known Symbols\n\nIn the \"Symbols\" section of Chapter 2, we covered the new ES6 primitive type `symbol`. 
In addition to symbols you can define in your own program, JS predefines a number of built-in symbols, referred to as *Well Known Symbols* (WKS).\n\nThese symbol values are defined primarily to expose special meta properties to your JS programs, giving you more control over JS's behavior.\n\nWe'll briefly introduce each and discuss their purpose.\n\n### `Symbol.iterator`\n\nIn Chapters 2 and 3, we introduced and used the `@@iterator` symbol, automatically used by `...` spreads and `for..of` loops. We also saw `@@iterator` defined on the new ES6 collections covered in Chapter 5.\n\n`Symbol.iterator` represents the special location (property) on any object where the language mechanisms automatically look to find a method that will construct an iterator instance for consuming that object's values. Many objects come with a default one defined.\n\nHowever, we can define our own iterator logic for any object value by setting the `Symbol.iterator` property, even if that's overriding the default iterator. The meta programming aspect is that we are defining behavior which other parts of JS (namely, operators and looping constructs) use when processing an object value we define.\n\nConsider:\n\n```js\nvar arr = [4,5,6,7,8,9];\n\nfor (var v of arr) {\n\tconsole.log( v );\n}\n// 4 5 6 7 8 9\n\n// define iterator that only produces values\n// from odd indexes\narr[Symbol.iterator] = function*() {\n\tvar idx = 1;\n\tdo {\n\t\tyield this[idx];\n\t} while ((idx += 2) < this.length);\n};\n\nfor (var v of arr) {\n\tconsole.log( v );\n}\n// 5 7 9\n```\n\n### `Symbol.toStringTag` and `Symbol.hasInstance`\n\nOne of the most common meta programming tasks is to introspect on a value to find out what *kind* it is, usually to decide what operations are appropriate to perform on it. 
With objects, the two most common inspection techniques are `toString()` and `instanceof`.\n\nConsider:\n\n```js\nfunction Foo() {}\n\nvar a = new Foo();\n\na.toString();\t\t\t\t// [object Object]\na instanceof Foo;\t\t\t// true\n```\n\nAs of ES6, you can control the behavior of these operations:\n\n```js\nfunction Foo(greeting) {\n\tthis.greeting = greeting;\n}\n\nFoo.prototype[Symbol.toStringTag] = \"Foo\";\n\nObject.defineProperty( Foo, Symbol.hasInstance, {\n\tvalue: function(inst) {\n\t\treturn inst.greeting == \"hello\";\n\t}\n} );\n\nvar a = new Foo( \"hello\" ),\n\tb = new Foo( \"world\" );\n\nb[Symbol.toStringTag] = \"cool\";\n\na.toString();\t\t\t\t// [object Foo]\nString( b );\t\t\t\t// [object cool]\n\na instanceof Foo;\t\t\t// true\nb instanceof Foo;\t\t\t// false\n```\n\nThe `@@toStringTag` symbol on the prototype (or instance itself) specifies a string value to use in the `[object ___]` stringification.\n\nThe `@@hasInstance` symbol is a method on the constructor function which receives the instance object value and lets you decide by returning `true` or `false` if the value should be considered an instance or not.\n\n**Note:** To set `@@hasInstance` on a function, you must use `Object.defineProperty(..)`, as the default one on `Function.prototype` is `writable: false`. See the *this & Object Prototypes* title of this series for more information.\n\n### `Symbol.species`\n\nIn \"Classes\" in Chapter 3, we introduced the `@@species` symbol, which controls which constructor is used by built-in methods of a class that needs to spawn new instances.\n\nThe most common example is when subclassing `Array` and wanting to define which constructor (`Array(..)` or your subclass) inherited methods like `slice(..)` should use. 
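For example, the default behavior with a hypothetical `MyCoolArray` subclass:

```js
class MyCoolArray extends Array {}

var a = new MyCoolArray( 10, 20, 30 );

// inherited methods consult `@@species`,
// which defaults to the derived constructor
a.slice( 1 ) instanceof MyCoolArray;	// true
a.slice( 1 ) instanceof Array;			// true
```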
By default, `slice(..)` called on an instance of a subclass of `Array` would produce a new instance of that subclass, which is frankly what you'll likely often want.\n\nHowever, you can meta program by overriding a class's default `@@species` definition:\n\n```js\nclass Cool {\n\t// defer `@@species` to derived constructor\n\tstatic get [Symbol.species]() { return this; }\n\n\tagain() {\n\t\treturn new this.constructor[Symbol.species]();\n\t}\n}\n\nclass Fun extends Cool {}\n\nclass Awesome extends Cool {\n\t// force `@@species` to be parent constructor\n\tstatic get [Symbol.species]() { return Cool; }\n}\n\nvar a = new Fun(),\n\tb = new Awesome(),\n\tc = a.again(),\n\td = b.again();\n\nc instanceof Fun;\t\t\t// true\nd instanceof Awesome;\t\t// false\nd instanceof Cool;\t\t\t// true\n```\n\nThe `Symbol.species` setting defaults on the built-in native constructors to the `return this` behavior as illustrated in the previous snippet in the `Cool` definition. It has no default on user classes, but as shown that behavior is easy to emulate.\n\nIf you need to define methods that generate new instances, use the meta programming of the `new this.constructor[Symbol.species](..)` pattern instead of the hard-wiring of `new this.constructor(..)` or `new XYZ(..)`. Derived classes will then be able to customize `Symbol.species` to control which constructor vends those instances.\n\n### `Symbol.toPrimitive`\n\nIn the *Types & Grammar* title of this series, we discussed the `ToPrimitive` abstract coercion operation, which is used when an object must be coerced to a primitive value for some operation (such as `==` comparison or `+` addition). 
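For example, the default `ToPrimitive` behavior on a plain object:

```js
var obj = {};

// default `valueOf()` returns the object itself,
// so `toString()` ends up providing the primitive
obj + "";				// "[object Object]"
Number( obj );			// NaN
String( obj );			// "[object Object]"
```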
Prior to ES6, there was no way to control this behavior.\n\nAs of ES6, the `@@toPrimitive` symbol as a property on any object value can customize that `ToPrimitive` coercion by specifying a method.\n\nConsider:\n\n```js\nvar arr = [1,2,3,4,5];\n\narr + 10;\t\t\t\t// 1,2,3,4,510\n\narr[Symbol.toPrimitive] = function(hint) {\n\tif (hint == \"default\" || hint == \"number\") {\n\t\t// sum all numbers\n\t\treturn this.reduce( function(acc,curr){\n\t\t\treturn acc + curr;\n\t\t}, 0 );\n\t}\n};\n\narr + 10;\t\t\t\t// 25\n```\n\nThe `Symbol.toPrimitive` method will be provided with a *hint* of `\"string\"`, `\"number\"`, or `\"default\"` (which should be interpreted as `\"number\"`), depending on what type the operation invoking `ToPrimitive` is expecting. In the previous snippet, the additive `+` operation has no hint (`\"default\"` is passed). A multiplicative `*` operation would hint `\"number\"` and a `String(arr)` would hint `\"string\"`.\n\n**Warning:** The `==` operator will invoke the `ToPrimitive` operation with no hint -- the `@@toPrimitive` method, if any, is called with hint `\"default\"` -- on an object if the other value being compared is not an object. However, if both comparison values are objects, the behavior of `==` is identical to `===`, which is that the references themselves are directly compared. In this case, `@@toPrimitive` is not invoked at all. See the *Types & Grammar* title of this series for more information about coercion and the abstract operations.\n\n### Regular Expression Symbols\n\nThere are four well known symbols that can be overridden for regular expression objects, which control how those regular expressions are used by the four corresponding `String.prototype` functions of the same name:\n\n* `@@match`: The `Symbol.match` value of a regular expression is the method used to match all or part of a string value with the given regular expression. 
It's used by `String.prototype.match(..)` if you pass it a regular expression for the pattern matching.\n\n   The default algorithm for matching is laid out in section 21.2.5.6 of the ES6 specification (http://www.ecma-international.org/ecma-262/6.0/#sec-regexp.prototype-@@match). You could override this default algorithm and provide extra regex features, such as look-behind assertions.\n\n   `Symbol.match` is also used by the `isRegExp` abstract operation (see the note in \"String Inspection Functions\" in Chapter 6) to determine if an object is intended to be used as a regular expression. To force this check to fail for an object so it's not treated as a regular expression, set the `Symbol.match` value to `false` (or something falsy).\n* `@@replace`: The `Symbol.replace` value of a regular expression is the method used by `String.prototype.replace(..)` to replace within a string one or all occurrences of character sequences that match the given regular expression pattern.\n\n   The default algorithm for replacing is laid out in section 21.2.5.8 of the ES6 specification (http://www.ecma-international.org/ecma-262/6.0/#sec-regexp.prototype-@@replace).\n\n   One cool use for overriding the default algorithm is to provide additional `replacer` argument options, such as supporting `\"abaca\".replace(/a/g,[1,2,3])` producing `\"1b2c3\"` by consuming the iterable for successive replacement values.\n* `@@search`: The `Symbol.search` value of a regular expression is the method used by `String.prototype.search(..)` to search for a sub-string within another string as matched by the given regular expression.\n\n   The default algorithm for searching is laid out in section 21.2.5.9 of the ES6 specification (http://www.ecma-international.org/ecma-262/6.0/#sec-regexp.prototype-@@search).\n* `@@split`: The `Symbol.split` value of a regular expression is the method used by `String.prototype.split(..)` to split a string into sub-strings at the location(s) of the delimiter as 
matched by the given regular expression.\n\n   The default algorithm for splitting is laid out in section 21.2.5.11 of the ES6 specification (http://www.ecma-international.org/ecma-262/6.0/#sec-regexp.prototype-@@split).\n\nOverriding the built-in regular expression algorithms is not for the faint of heart! JS ships with a highly optimized regular expression engine, so your own user code will likely be a lot slower. This kind of meta programming is neat and powerful, but it should only be used in cases where it's really necessary or beneficial.\n\n### `Symbol.isConcatSpreadable`\n\nThe `@@isConcatSpreadable` symbol can be defined as a boolean property (`Symbol.isConcatSpreadable`) on any object (like an array or other iterable) to indicate if it should be *spread out* if passed to an array `concat(..)`.\n\nConsider:\n\n```js\nvar a = [1,2,3],\n\tb = [4,5,6];\n\nb[Symbol.isConcatSpreadable] = false;\n\n[].concat( a, b );\t\t// [1,2,3,[4,5,6]]\n```\n\n### `Symbol.unscopables`\n\nThe `@@unscopables` symbol can be defined as an object property (`Symbol.unscopables`) on any object to indicate which properties can and cannot be exposed as lexical variables in a `with` statement.\n\nConsider:\n\n```js\nvar o = { a:1, b:2, c:3 },\n\ta = 10, b = 20, c = 30;\n\no[Symbol.unscopables] = {\n\ta: false,\n\tb: true,\n\tc: false\n};\n\nwith (o) {\n\tconsole.log( a, b, c );\t\t// 1 20 3\n}\n```\n\nA `true` in the `@@unscopables` object indicates the property should be *unscopable*, and thus filtered out from the lexical scope variables. `false` means it's OK to be included in the lexical scope variables.\n\n**Warning:** The `with` statement is disallowed entirely in `strict` mode, and as such should be considered deprecated from the language. Don't use it. See the *Scope & Closures* title of this series for more information. 
Because `with` should be avoided, the `@@unscopables` symbol is also moot.\n\n## Proxies\n\nOne of the most obviously meta programming features added to ES6 is the `Proxy` feature.\n\nA proxy is a special kind of object you create that \"wraps\" -- or sits in front of -- another normal object. You can register special handlers (aka *traps*) on the proxy object which are called when various operations are performed against the proxy. These handlers have the opportunity to perform extra logic in addition to *forwarding* the operations on to the original target/wrapped object.\n\nOne example of the kind of *trap* handler you can define on a proxy is `get` that intercepts the `[[Get]]` operation -- performed when you try to access a property on an object. Consider:\n\n```js\nvar obj = { a: 1 },\n\thandlers = {\n\t\tget(target,key,context) {\n\t\t\t// note: target === obj,\n\t\t\t// context === pobj\n\t\t\tconsole.log( \"accessing: \", key );\n\t\t\treturn Reflect.get(\n\t\t\t\ttarget, key, context\n\t\t\t);\n\t\t}\n\t},\n\tpobj = new Proxy( obj, handlers );\n\nobj.a;\n// 1\n\npobj.a;\n// accessing: a\n// 1\n```\n\nWe declare a `get(..)` handler as a named method on the *handler* object (second argument to `Proxy(..)`), which receives a reference to the *target* object (`obj`), the *key* property name (`\"a\"`), and the `self`/receiver/proxy (`pobj`).\n\nAfter the `console.log(..)` tracing statement, we \"forward\" the operation onto `obj` via `Reflect.get(..)`. We will cover the `Reflect` API in the next section, but note that each available proxy trap has a corresponding `Reflect` function of the same name.\n\nThese mappings are symmetric on purpose. The proxy handlers each intercept when a respective meta programming task is performed, and the `Reflect` utilities each perform the respective meta programming task on an object. Each proxy handler has a default definition that automatically calls the corresponding `Reflect` utility. 
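For instance, a `set(..)` trap follows the same symmetric pattern (a minimal sketch, with made-up logging):

```js
var obj = {},
	handlers = {
		set(target,key,val,context) {
			console.log( "setting:", key );
			// forward the assignment via the
			// corresponding Reflect utility
			return Reflect.set( target, key, val, context );
		}
	},
	pobj = new Proxy( obj, handlers );

pobj.a = 42;
// setting: a

obj.a;					// 42
```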
You will almost certainly use both `Proxy` and `Reflect` in tandem.\n\nHere's a list of handlers you can define on a proxy for a *target* object/function, and how/when they are triggered:\n\n* `get(..)`: via `[[Get]]`, a property is accessed on the proxy (`Reflect.get(..)`, `.` property operator, or `[ .. ]` property operator)\n* `set(..)`: via `[[Set]]`, a property value is set on the proxy (`Reflect.set(..)`, the `=` assignment operator, or destructuring assignment if it targets an object property)\n* `deleteProperty(..)`: via `[[Delete]]`, a property is deleted from the proxy (`Reflect.deleteProperty(..)` or `delete`)\n* `apply(..)` (if *target* is a function): via `[[Call]]`, the proxy is invoked as a normal function/method (`Reflect.apply(..)`, `call(..)`, `apply(..)`, or the `(..)` call operator)\n* `construct(..)` (if *target* is a constructor function): via `[[Construct]]`, the proxy is invoked as a constructor function (`Reflect.construct(..)` or `new`)\n* `getOwnPropertyDescriptor(..)`: via `[[GetOwnProperty]]`, a property descriptor is retrieved from the proxy (`Object.getOwnPropertyDescriptor(..)` or `Reflect.getOwnPropertyDescriptor(..)`)\n* `defineProperty(..)`: via `[[DefineOwnProperty]]`, a property descriptor is set on the proxy (`Object.defineProperty(..)` or `Reflect.defineProperty(..)`)\n* `getPrototypeOf(..)`: via `[[GetPrototypeOf]]`, the `[[Prototype]]` of the proxy is retrieved (`Object.getPrototypeOf(..)`, `Reflect.getPrototypeOf(..)`, `__proto__`, `Object#isPrototypeOf(..)`, or `instanceof`)\n* `setPrototypeOf(..)`: via `[[SetPrototypeOf]]`, the `[[Prototype]]` of the proxy is set (`Object.setPrototypeOf(..)`, `Reflect.setPrototypeOf(..)`, or `__proto__`)\n* `preventExtensions(..)`: via `[[PreventExtensions]]`, the proxy is made non-extensible (`Object.preventExtensions(..)` or `Reflect.preventExtensions(..)`)\n* `isExtensible(..)`: via `[[IsExtensible]]`, the extensibility of the proxy is probed (`Object.isExtensible(..)` or 
`Reflect.isExtensible(..)`)\n* `ownKeys(..)`: via `[[OwnPropertyKeys]]`, the set of owned properties and/or owned symbol properties of the proxy is retrieved (`Object.keys(..)`, `Object.getOwnPropertyNames(..)`, `Object.getOwnPropertySymbols(..)`, `Reflect.ownKeys(..)`, or `JSON.stringify(..)`)\n* `enumerate(..)`: via `[[Enumerate]]`, an iterator is requested for the proxy's enumerable owned and \"inherited\" properties (`Reflect.enumerate(..)` or `for..in`)\n* `has(..)`: via `[[HasProperty]]`, the proxy is probed to see if it has an owned or \"inherited\" property (`Reflect.has(..)`, `Object#hasOwnProperty(..)`, or `\"prop\" in obj`)\n\n**Tip:** For more information about each of these meta programming tasks, see the \"`Reflect` API\" section later in this chapter.\n\nIn addition to the notations in the preceding list about actions that will trigger the various traps, some traps are triggered indirectly by the default actions of another trap. For example:\n\n```js\nvar handlers = {\n\t\tgetOwnPropertyDescriptor(target,prop) {\n\t\t\tconsole.log(\n\t\t\t\t\"getOwnPropertyDescriptor\"\n\t\t\t);\n\t\t\treturn Object.getOwnPropertyDescriptor(\n\t\t\t\ttarget, prop\n\t\t\t);\n\t\t},\n\t\tdefineProperty(target,prop,desc){\n\t\t\tconsole.log( \"defineProperty\" );\n\t\t\treturn Object.defineProperty(\n\t\t\t\ttarget, prop, desc\n\t\t\t);\n\t\t}\n\t},\n\tproxy = new Proxy( {}, handlers );\n\nproxy.a = 2;\n// getOwnPropertyDescriptor\n// defineProperty\n```\n\nThe `getOwnPropertyDescriptor(..)` and `defineProperty(..)` handlers are triggered by the default `set(..)` handler's steps when setting a property value (whether newly adding or updating). If you also define your own `set(..)` handler, you may or may not make the corresponding calls against `context` (not `target`!) which would trigger these proxy traps.\n\n### Proxy Limitations\n\nThese meta programming handlers trap a wide array of fundamental operations you can perform against an object. 
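For instance, a `has(..)` trap can make a property appear absent to `in` checks (a sketch; the `secret` property is just made up for illustration):

```js
var obj = { a: 1, secret: 42 },
	handlers = {
		has(target,key) {
			// pretend `secret` doesn't exist
			if (key == "secret") return false;
			return Reflect.has( target, key );
		}
	},
	pobj = new Proxy( obj, handlers );

"a" in pobj;			// true
"secret" in pobj;		// false
```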
However, there are some operations which are not (yet, at least) available to intercept.\n\nFor example, none of these operations are trapped and forwarded from `pobj` proxy to `obj` target:\n\n```js\nvar obj = { a:1, b:2 },\n\thandlers = { .. },\n\tpobj = new Proxy( obj, handlers );\n\ntypeof obj;\nString( obj );\nobj + \"\";\nobj == pobj;\nobj === pobj;\n```\n\nPerhaps in the future, more of these underlying fundamental operations in the language will be interceptable, giving us even more power to extend JavaScript from within itself.\n\n**Warning:** There are certain *invariants* -- behaviors which cannot be overridden -- that apply to the use of proxy handlers. For example, the result from the `isExtensible(..)` handler is always coerced to a `boolean`. These invariants restrict some of your ability to customize behaviors with proxies, but they do so only to prevent you from creating strange and unusual (or inconsistent) behavior. The conditions for these invariants are complicated so we won't fully go into them here, but this post (http://www.2ality.com/2014/12/es6-proxies.html#invariants) does a great job of covering them.\n\n### Revocable Proxies\n\nA regular proxy always traps for the target object, and cannot be modified after creation -- as long as a reference is kept to the proxy, proxying remains possible. However, there may be cases where you want to create a proxy that can be disabled when you want to stop allowing it to proxy. 
The solution is to create a *revocable proxy*:\n\n```js\nvar obj = { a: 1 },\n\thandlers = {\n\t\tget(target,key,context) {\n\t\t\t// note: target === obj,\n\t\t\t// context === pobj\n\t\t\tconsole.log( \"accessing: \", key );\n\t\t\treturn target[key];\n\t\t}\n\t},\n\t{ proxy: pobj, revoke: prevoke } =\n\t\tProxy.revocable( obj, handlers );\n\npobj.a;\n// accessing: a\n// 1\n\n// later:\nprevoke();\n\npobj.a;\n// TypeError\n```\n\nA revocable proxy is created with `Proxy.revocable(..)`, which is a regular function, not a constructor like `Proxy(..)`. Otherwise, it takes the same two arguments: *target* and *handlers*.\n\nThe return value of `Proxy.revocable(..)` is not the proxy itself as with `new Proxy(..)`. Instead, it's an object with two properties: *proxy* and *revoke* -- we used object destructuring (see \"Destructuring\" in Chapter 2) to assign these properties to `pobj` and `prevoke()` variables, respectively.\n\nOnce a revocable proxy is revoked, any attempts to access it (trigger any of its traps) will throw a `TypeError`.\n\nAn example of using a revocable proxy might be handing out a proxy to another party in your application that manages data in your model, instead of giving them a reference to the real model object itself. If your model object changes or is replaced, you want to invalidate the proxy you handed out so the other party knows (via the errors!) to request an updated reference to the model.\n\n### Using Proxies\n\nThe meta programming benefits of these Proxy handlers should be obvious. We can almost fully intercept (and thus override) the behavior of objects, meaning we can extend object behavior beyond core JS in some very powerful ways. We'll look at a few example patterns to explore the possibilities.\n\n#### Proxy First, Proxy Last\n\nAs we mentioned earlier, you typically think of a proxy as \"wrapping\" the target object. 
In that sense, the proxy becomes the primary object that the code interfaces with, and the actual target object remains hidden/protected.\n\nYou might do this because you want to pass the object somewhere that can't be fully \"trusted,\" and so you need to enforce special rules around its access rather than passing the object itself.\n\nConsider:\n\n```js\nvar messages = [],\n\thandlers = {\n\t\tget(target,key) {\n\t\t\t// string value?\n\t\t\tif (typeof target[key] == \"string\") {\n\t\t\t\t// filter out punctuation\n\t\t\t\treturn target[key]\n\t\t\t\t\t.replace( /[^\\w]/g, \"\" );\n\t\t\t}\n\n\t\t\t// pass everything else through\n\t\t\treturn target[key];\n\t\t},\n\t\tset(target,key,val) {\n\t\t\t// only set unique strings, lowercased\n\t\t\tif (typeof val == \"string\") {\n\t\t\t\tval = val.toLowerCase();\n\t\t\t\tif (target.indexOf( val ) == -1) {\n\t\t\t\t\ttarget.push(val);\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn true;\n\t\t}\n\t},\n\tmessages_proxy =\n\t\tnew Proxy( messages, handlers );\n\n// elsewhere:\nmessages_proxy.push(\n\t\"heLLo...\", 42, \"wOrlD!!\", \"WoRld!!\"\n);\n\nmessages_proxy.forEach( function(val){\n\tconsole.log(val);\n} );\n// hello world\n\nmessages.forEach( function(val){\n\tconsole.log(val);\n} );\n// hello... world!!\n```\n\nI call this *proxy first* design, as we interact first (primarily, entirely) with the proxy.\n\nWe enforce some special rules on interacting with `messages_proxy` that aren't enforced for `messages` itself. We only add elements if the value is a string and is also unique; we also lowercase the value. When retrieving values from `messages_proxy`, we filter out any punctuation in the strings.\n\nAlternatively, we can completely invert this pattern, where the target interacts with the proxy instead of the proxy interacting with the target. Thus, code really only interacts with the main object. 
The easiest way to accomplish this fallback is to have the proxy object in the `[[Prototype]]` chain of the main object.\n\nConsider:\n\n```js\nvar handlers = {\n\t\tget(target,key,context) {\n\t\t\treturn function() {\n\t\t\t\tcontext.speak(key + \"!\");\n\t\t\t};\n\t\t}\n\t},\n\tcatchall = new Proxy( {}, handlers ),\n\tgreeter = {\n\t\tspeak(who = \"someone\") {\n\t\t\tconsole.log( \"hello\", who );\n\t\t}\n\t};\n\n// setup `greeter` to fall back to `catchall`\nObject.setPrototypeOf( greeter, catchall );\n\ngreeter.speak();\t\t\t\t// hello someone\ngreeter.speak( \"world\" );\t\t// hello world\n\ngreeter.everyone();\t\t\t\t// hello everyone!\n```\n\nWe interact directly with `greeter` instead of `catchall`. When we call `speak(..)`, it's found on `greeter` and used directly. But when we try to access a method like `everyone()`, that function doesn't exist on `greeter`.\n\nThe default object property behavior is to check up the `[[Prototype]]` chain (see the *this & Object Prototypes* title of this series), so `catchall` is consulted for an `everyone` property. The proxy `get()` handler then kicks in and returns a function that calls `speak(..)` with the name of the property being accessed (`\"everyone\"`).\n\nI call this pattern *proxy last*, as the proxy is used only as a last resort.\n\n#### \"No Such Property/Method\"\n\nA common complaint about JS is that objects aren't by default very defensive in the situation where you try to access or set a property that doesn't already exist. You may wish to predefine all the properties/methods for an object, and have an error thrown if a nonexistent property name is subsequently used.\n\nWe can accomplish this with a proxy, either in *proxy first* or *proxy last* design. 
Let's consider both.

```js
var obj = {
		a: 1,
		foo() {
			console.log( "a:", this.a );
		}
	},
	handlers = {
		get(target,key,context) {
			if (Reflect.has( target, key )) {
				return Reflect.get(
					target, key, context
				);
			}
			else {
				throw "No such property/method!";
			}
		},
		set(target,key,val,context) {
			if (Reflect.has( target, key )) {
				return Reflect.set(
					target, key, val, context
				);
			}
			else {
				throw "No such property/method!";
			}
		}
	},
	pobj = new Proxy( obj, handlers );

pobj.a = 3;
pobj.foo();			// a: 3

pobj.b = 4;			// Error: No such property/method!
pobj.bar();			// Error: No such property/method!
```

For both `get(..)` and `set(..)`, we only forward the operation if the target object's property already exists; otherwise, an error is thrown. The proxy object (`pobj`) is the main object code should interact with, as it intercepts these actions to provide the protections.

Now, let's consider inverting with *proxy last* design:

```js
var handlers = {
		get() {
			throw "No such property/method!";
		},
		set() {
			throw "No such property/method!";
		}
	},
	pobj = new Proxy( {}, handlers ),
	obj = {
		a: 1,
		foo() {
			console.log( "a:", this.a );
		}
	};

// setup `obj` to fall back to `pobj`
Object.setPrototypeOf( obj, pobj );

obj.a = 3;
obj.foo();			// a: 3

obj.b = 4;			// Error: No such property/method!
obj.bar();			// Error: No such property/method!
```

The *proxy last* design here is a fair bit simpler with respect to how the handlers are defined. 
Instead of needing to intercept the `[[Get]]` and `[[Set]]` operations and only forward them if the target property exists, we instead rely on the fact that if either `[[Get]]` or `[[Set]]` gets to our `pobj` fallback, the action has already traversed the whole `[[Prototype]]` chain and not found a matching property. We are free at that point to unconditionally throw the error. Cool, huh?

#### Proxy Hacking the `[[Prototype]]` Chain

The `[[Get]]` operation is the primary channel by which the `[[Prototype]]` mechanism is invoked. When a property is not found on the immediate object, `[[Get]]` automatically hands off the operation to the `[[Prototype]]` object.

That means you can use the `get(..)` trap of a proxy to emulate or extend the notion of this `[[Prototype]]` mechanism.

The first hack we'll consider is creating two objects which are circularly linked via `[[Prototype]]` (or, at least it appears that way!). You cannot actually create a real circular `[[Prototype]]` chain, as the engine will throw an error. 
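Here's a quick sketch of the engine refusing a real cycle:

```js
var a = {},
	b = Object.create( a );

try {
	// try to complete the cycle: a -> b -> a
	Object.setPrototypeOf( a, b );
}
catch (err) {
	console.log( err instanceof TypeError );	// true
}
```
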
But a proxy can fake it!

Consider:

```js
var handlers = {
		get(target,key,context) {
			if (Reflect.has( target, key )) {
				return Reflect.get(
					target, key, context
				);
			}
			// fake circular `[[Prototype]]`
			else {
				return Reflect.get(
					target[
						Symbol.for( "[[Prototype]]" )
					],
					key,
					context
				);
			}
		}
	},
	obj1 = new Proxy(
		{
			name: "obj-1",
			foo() {
				console.log( "foo:", this.name );
			}
		},
		handlers
	),
	obj2 = Object.assign(
		Object.create( obj1 ),
		{
			name: "obj-2",
			bar() {
				console.log( "bar:", this.name );
				this.foo();
			}
		}
	);

// fake circular `[[Prototype]]` link
obj1[ Symbol.for( "[[Prototype]]" ) ] = obj2;

obj1.bar();
// bar: obj-1 <-- through proxy faking [[Prototype]]
// foo: obj-1 <-- `this` context still preserved

obj2.foo();
// foo: obj-2 <-- through [[Prototype]]
```

**Note:** We didn't need to proxy/forward `[[Set]]` in this example, so we kept things simpler. To be fully `[[Prototype]]` emulation compliant, you'd want to implement a `set(..)` handler that searches the `[[Prototype]]` chain for a matching property and respects its descriptor behavior (e.g., set, writable). See the *this & Object Prototypes* title of this series.

In the previous snippet, `obj2` is `[[Prototype]]` linked to `obj1` by virtue of the `Object.create(..)` statement. But to create the reverse (circular) linkage, we create a property on `obj1` at the symbol location `Symbol.for("[[Prototype]]")` (see "Symbols" in Chapter 2). This symbol may look sort of special/magical, but it isn't. It just allows me a conveniently named hook that semantically appears related to the task I'm performing.

Then, the proxy's `get(..)` handler looks first to see if a requested `key` is on the proxy. 
If not, the operation is manually handed off to the object reference stored in the `Symbol.for(\"[[Prototype]]\")` location of `target`.\n\nOne important advantage of this pattern is that the definitions of `obj1` and `obj2` are mostly not intruded by the setting up of this circular relationship between them. Although the previous snippet has all the steps intertwined for brevity's sake, if you look closely, the proxy handler logic is entirely generic (doesn't know about `obj1` or `obj2` specifically). So, that logic could be pulled out into a simple helper that wires them up, like a `setCircularPrototypeOf(..)` for example. We'll leave that as an exercise for the reader.\n\nNow that we've seen how we can use `get(..)` to emulate a `[[Prototype]]` link, let's push the hackery a bit further. Instead of a circular `[[Prototype]]`, what about multiple `[[Prototype]]` linkages (aka \"multiple inheritance\")? This turns out to be fairly straightforward:\n\n```js\nvar obj1 = {\n\t\tname: \"obj-1\",\n\t\tfoo() {\n\t\t\tconsole.log( \"obj1.foo:\", this.name );\n\t\t},\n\t},\n\tobj2 = {\n\t\tname: \"obj-2\",\n\t\tfoo() {\n\t\t\tconsole.log( \"obj2.foo:\", this.name );\n\t\t},\n\t\tbar() {\n\t\t\tconsole.log( \"obj2.bar:\", this.name );\n\t\t}\n\t},\n\thandlers = {\n\t\tget(target,key,context) {\n\t\t\tif (Reflect.has( target, key )) {\n\t\t\t\treturn Reflect.get(\n\t\t\t\t\ttarget, key, context\n\t\t\t\t);\n\t\t\t}\n\t\t\t// fake multiple `[[Prototype]]`\n\t\t\telse {\n\t\t\t\tfor (var P of target[\n\t\t\t\t\tSymbol.for( \"[[Prototype]]\" )\n\t\t\t\t]) {\n\t\t\t\t\tif (Reflect.has( P, key )) {\n\t\t\t\t\t\treturn Reflect.get(\n\t\t\t\t\t\t\tP, key, context\n\t\t\t\t\t\t);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t},\n\tobj3 = new Proxy(\n\t\t{\n\t\t\tname: \"obj-3\",\n\t\t\tbaz() {\n\t\t\t\tthis.foo();\n\t\t\t\tthis.bar();\n\t\t\t}\n\t\t},\n\t\thandlers\n\t);\n\n// fake multiple `[[Prototype]]` links\nobj3[ Symbol.for( \"[[Prototype]]\" ) ] = [\n\tobj1, 
obj2\n];\n\nobj3.baz();\n// obj1.foo: obj-3\n// obj2.bar: obj-3\n```\n\n**Note:** As mentioned in the note after the earlier circular `[[Prototype]]` example, we didn't implement the `set(..)` handler, but it would be necessary for a complete solution that emulates `[[Set]]` actions as normal `[[Prototype]]`s behave.\n\n`obj3` is set up to multiple-delegate to both `obj1` and `obj2`. In `obj3.baz()`, the `this.foo()` call ends up pulling `foo()` from `obj1` (first-come, first-served, even though there's also a `foo()` on `obj2`). If we reordered the linkage as `obj2, obj1`, the `obj2.foo()` would have been found and used.\n\nBut as is, the `this.bar()` call doesn't find a `bar()` on `obj1`, so it falls over to check `obj2`, where it finds a match.\n\n`obj1` and `obj2` represent two parallel `[[Prototype]]` chains of `obj3`. `obj1` and/or `obj2` could themselves have normal `[[Prototype]]` delegation to other objects, or either could themself be a proxy (like `obj3` is) that can multiple-delegate.\n\nJust as with the circular `[[Prototype]]` example earlier, the definitions of `obj1`, `obj2`, and `obj3` are almost entirely separate from the generic proxy logic that handles the multiple-delegation. It would be trivial to define a utility like `setPrototypesOf(..)` (notice the \"s\"!) that takes a main object and a list of objects to fake the multiple `[[Prototype]]` linkage to. Again, we'll leave that as an exercise for the reader.\n\nHopefully the power of proxies is now becoming clearer after these various examples. There are many other powerful meta programming tasks that proxies enable.\n\n## `Reflect` API\n\nThe `Reflect` object is a plain object (like `Math`), not a function/constructor like the other built-in natives.\n\nIt holds static functions which correspond to various meta programming tasks that you can control. 
These functions correspond one-to-one with the handler methods (*traps*) that Proxies can define.\n\nSome of the functions will look familiar as functions of the same names on `Object`:\n\n* `Reflect.getOwnPropertyDescriptor(..)`\n* `Reflect.defineProperty(..)`\n* `Reflect.getPrototypeOf(..)`\n* `Reflect.setPrototypeOf(..)`\n* `Reflect.preventExtensions(..)`\n* `Reflect.isExtensible(..)`\n\nThese utilities in general behave the same as their `Object.*` counterparts. However, one difference is that the `Object.*` counterparts attempt to coerce their first argument (the target object) to an object if it's not already one. The `Reflect.*` methods simply throw an error in that case.\n\nAn object's keys can be accessed/inspected using these utilities:\n\n* `Reflect.ownKeys(..)`: Returns the list of all owned keys (not \"inherited\"), as returned by both `Object.getOwnPropertyNames(..)` and `Object.getOwnPropertySymbols(..)`. See the \"Property Enumeration Order\" section for information about the order of keys.\n* `Reflect.enumerate(..)`: Returns an iterator that produces the set of all non-symbol keys (owned and \"inherited\") that are *enumerable* (see the *this & Object Prototypes* title of this series). Essentially, this set of keys is the same as those processed by a `for..in` loop. See the \"Property Enumeration Order\" section for information about the order of keys.\n* `Reflect.has(..)`: Essentially the same as the `in` operator for checking if a property is on an object or its `[[Prototype]]` chain. 
For example, `Reflect.has(o,"foo")` essentially performs `"foo" in o`.

Function calls and constructor invocations can be performed manually, separate from the normal syntax (e.g., `(..)` and `new`) using these utilities:

* `Reflect.apply(..)`: For example, `Reflect.apply(foo,thisObj,[42,"bar"])` calls the `foo(..)` function with `thisObj` as its `this`, and passes in the `42` and `"bar"` arguments.
* `Reflect.construct(..)`: For example, `Reflect.construct(foo,[42,"bar"])` essentially calls `new foo(42,"bar")`.

Object property access, setting, and deletion can be performed manually using these utilities:

* `Reflect.get(..)`: For example, `Reflect.get(o,"foo")` retrieves `o.foo`.
* `Reflect.set(..)`: For example, `Reflect.set(o,"foo",42)` essentially performs `o.foo = 42`.
* `Reflect.deleteProperty(..)`: For example, `Reflect.deleteProperty(o,"foo")` essentially performs `delete o.foo`.

The meta programming capabilities of `Reflect` give you programmatic equivalents to emulate various syntactic features, exposing previously hidden-only abstract operations. For example, you can use these capabilities to extend features and APIs for *domain specific languages* (DSLs).

### Property Ordering

Prior to ES6, the order used to list an object's keys/properties was implementation dependent and undefined by the specification. Generally, most engines have enumerated them in creation order, though developers have been strongly encouraged not to ever rely on this ordering.

As of ES6, the order for listing owned properties is now defined (ES6 specification, section 9.1.12) by the `[[OwnPropertyKeys]]` algorithm, which produces all owned properties (strings or symbols), regardless of enumerability. This ordering is only guaranteed for `Reflect.ownKeys(..)` (and by extension, `Object.getOwnPropertyNames(..)` and `Object.getOwnPropertySymbols(..)`).

The ordering is:

1. 
First, enumerate any owned properties that are integer indexes, in ascending numeric order.
2. Next, enumerate the rest of the owned string property names in creation order.
3. Finally, enumerate owned symbol properties in creation order.

Consider:

```js
var o = {};

o[Symbol("c")] = "yay";
o[2] = true;
o[1] = true;
o.b = "awesome";
o.a = "cool";

Reflect.ownKeys( o );				// ["1","2","b","a",Symbol(c)]
Object.getOwnPropertyNames( o );	// ["1","2","b","a"]
Object.getOwnPropertySymbols( o );	// [Symbol(c)]
```

On the other hand, the `[[Enumerate]]` algorithm (ES6 specification, section 9.1.11) produces only enumerable properties, from the target object as well as its `[[Prototype]]` chain. It is used by both `Reflect.enumerate(..)` and `for..in`. The observable ordering is implementation dependent and not controlled by the specification.

By contrast, `Object.keys(..)` invokes the `[[OwnPropertyKeys]]` algorithm to get a list of all owned keys. However, it filters out non-enumerable properties and then reorders the list to match legacy implementation-dependent behavior, specifically with `JSON.stringify(..)` and `for..in`. So, by extension the ordering *also* matches that of `Reflect.enumerate(..)`.

In other words, all four mechanisms (`Reflect.enumerate(..)`, `Object.keys(..)`, `for..in`, and `JSON.stringify(..)`) will match with the same implementation-dependent ordering, though they technically get there in different ways.

Implementations are allowed to match these four to the ordering of `[[OwnPropertyKeys]]`, but are not required to. 
Nevertheless, you will likely observe the following ordering behavior from them:\n\n```js\nvar o = { a: 1, b: 2 };\nvar p = Object.create( o );\np.c = 3;\np.d = 4;\n\nfor (var prop of Reflect.enumerate( p )) {\n\tconsole.log( prop );\n}\n// c d a b\n\nfor (var prop in p) {\n\tconsole.log( prop );\n}\n// c d a b\n\nJSON.stringify( p );\n// {\"c\":3,\"d\":4}\n\nObject.keys( p );\n// [\"c\",\"d\"]\n```\n\nBoiling this all down: as of ES6, `Reflect.ownKeys(..)`, `Object.getOwnPropertyNames(..)`, and `Object.getOwnPropertySymbols(..)` all have predictable and reliable ordering guaranteed by the specification. So it's safe to build code that relies on this ordering.\n\n`Reflect.enumerate(..)`, `Object.keys(..)`, and `for..in` (as well as `JSON.stringify(..)` by extension) continue to share an observable ordering with each other, as they always have. But that ordering will not necessarily be the same as that of `Reflect.ownKeys(..)`. Care should still be taken in relying on their implementation-dependent ordering.\n\n## Feature Testing\n\nWhat is a feature test? It's a test that you run to determine if a feature is available or not. Sometimes, the test is not just for existence, but for conformance to specified behavior -- features can exist but be buggy.\n\nThis is a meta programming technique, to test the environment your program runs in to then determine how your program should behave.\n\nThe most common use of feature tests in JS is checking for the existence of an API and if it's not present, defining a polyfill (see Chapter 1). 
For example:

```js
if (!Number.isNaN) {
	Number.isNaN = function(x) {
		return x !== x;
	};
}
```

The `if` statement in this snippet is meta programming: we're probing our program and its runtime environment to determine if and how we should proceed.

But what about testing for features that involve new syntax?

You might try something like:

```js
try {
	a = () => {};
	ARROW_FUNCS_ENABLED = true;
}
catch (err) {
	ARROW_FUNCS_ENABLED = false;
}
```

Unfortunately, this doesn't work, because our JS programs are compiled. Thus, the engine will choke on the `() => {}` syntax if it does not already support ES6 arrow functions. Having a syntax error in your program prevents it from running, which prevents your program from subsequently responding differently based on whether the feature is supported or not.

To meta program with feature tests around syntax-related features, we need a way to insulate the test from the initial compile step our program runs through. For instance, if we could store the code for the test in a string, then the JS engine wouldn't by default try to compile the contents of that string, until we asked it to.

Did your mind just jump to using `eval(..)`?

Not so fast. See the *Scope & Closures* title of this series for why `eval(..)` is a bad idea. But there's another option with fewer downsides: the `Function(..)` constructor.

Consider:

```js
try {
	new Function( "( () => {} )" );
	ARROW_FUNCS_ENABLED = true;
}
catch (err) {
	ARROW_FUNCS_ENABLED = false;
}
```

OK, so now we're meta programming by determining if a feature like arrow functions *can* compile in the current engine or not. You might then wonder, what would we do with this information?

With existence checks for APIs, and defining fallback API polyfills, there's a clear path for what to do with either test success or failure. 
But what can we do with the information that we get from `ARROW_FUNCS_ENABLED` being `true` or `false`?\n\nBecause the syntax can't appear in a file if the engine doesn't support that feature, you can't just have different functions defined in the file with and without the syntax in question.\n\nWhat you can do is use the test to determine which of a set of JS files you should load. For example, if you had a set of these feature tests in a bootstrapper for your JS application, it could then test the environment to determine if your ES6 code can be loaded and run directly, or if you need to load a transpiled version of your code (see Chapter 1).\n\nThis technique is called *split delivery*.\n\nIt recognizes the reality that your ES6 authored JS programs will sometimes be able to entirely run \"natively\" in ES6+ browsers, but other times need transpilation to run in pre-ES6 browsers. If you always load and use the transpiled code, even in the new ES6-compliant environments, you're running suboptimal code at least some of the time. This is not ideal.\n\nSplit delivery is more complicated and sophisticated, but it represents a more mature and robust approach to bridging the gap between the code you write and the feature support in browsers your programs must run in.\n\n### FeatureTests.io\n\nDefining feature tests for all of the ES6+ syntax, as well as the semantic behaviors, is a daunting task you probably don't want to tackle yourself. 
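To get a feel for what that entails, here's a minimal hand-rolled sketch -- the test names and snippets here are just illustrative, not any kind of official list:

```js
// can this snippet of syntax compile in
// the current engine?
function testSyntax(code) {
	try {
		new Function( code );
		return true;
	}
	catch (err) {
		return false;
	}
}

// a few illustrative entries; a real suite would
// need dozens of syntax *and* semantics tests,
// kept up to date as the language evolves
var tests = {
	arrowFuncs: "( () => {} )",
	templateStrings: "``",
	defaultParams: "(function(a = 1){})",
	spreadRest: "(function(...args){})"
};

var results = {};
Object.keys( tests ).forEach( function(name){
	results[name] = testSyntax( tests[name] );
} );
```

And that sketch doesn't yet address caching results, testing semantics (not just syntax), or keeping the test list current.
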
Because these tests require dynamic compilation (`new Function(..)`), there's some unfortunate performance cost.

Moreover, running these tests every single time your app runs is probably wasteful, as on average a user's browser only updates once every several weeks at most, and even then, new features aren't necessarily showing up with every update.

Finally, managing the list of feature tests that apply to your specific code base -- rarely will your programs use the entirety of ES6 -- is unruly and error-prone.

The feature-tests-as-a-service at https://featuretests.io offers solutions to these frustrations.

You can load the service's library into your page, and it loads the latest test definitions and runs all the feature tests. It does so using background processing with Web Workers, if possible, to reduce the performance overhead. It also uses LocalStorage persistence to cache the results in a way that can be shared across all sites you visit which use the service, which drastically reduces how often the tests need to run on each browser instance.

You get runtime feature tests in each of your users' browsers, and you can use those test results dynamically to serve users the most appropriate code (no more, no less) for their environments.

Moreover, the service provides tools and APIs to scan your files to determine what features you need, so you can fully automate your split delivery build processes.

FeatureTests.io makes it practical to use feature tests for all parts of ES6 and beyond to make sure that only the best code is ever loaded and run for any given environment.

## Tail Call Optimization (TCO)

Normally, when a function call is made from inside another function, a second *stack frame* is allocated to separately manage the variables/state of that other function invocation. 
Not only does this allocation cost some processing time, but it also takes up some extra memory.

A call stack chain typically has at most 10-15 jumps from one function to another and another. In those scenarios, the memory usage is not likely any kind of practical problem.

However, when you consider recursive programming (a function calling itself repeatedly) -- or mutual recursion with two or more functions calling each other -- the call stack could easily be hundreds, thousands, or more levels deep. You can probably see the problems that could cause, if memory usage grows unbounded.

JavaScript engines have to set an arbitrary limit to prevent such programming techniques from crashing the browser and device by running them out of memory. That's why we get the frustrating "RangeError: Maximum call stack size exceeded" thrown if the limit is hit.

**Warning:** The limit of call stack depth is not controlled by the specification. It's implementation dependent, and will vary between browsers and devices. You should never code with strong assumptions of exact observable limits, as they may very well change from release to release.

Certain patterns of function calls, called *tail calls*, can be optimized in a way to avoid the extra allocation of stack frames. If the extra allocation can be avoided, there's no reason to arbitrarily limit the call stack depth, so the engines can let them run unbounded.

A tail call is a `return` statement with a function call, where nothing has to happen after the call except returning its value.

This optimization can only be applied in `strict` mode. 
Yet another reason to always be writing all your code as `strict`!\n\nHere's a function call that is *not* in tail position:\n\n```js\n\"use strict\";\n\nfunction foo(x) {\n\treturn x * 2;\n}\n\nfunction bar(x) {\n\t// not a tail call\n\treturn 1 + foo( x );\n}\n\nbar( 10 );\t\t\t\t// 21\n```\n\n`1 + ..` has to be performed after the `foo(x)` call completes, so the state of that `bar(..)` invocation needs to be preserved.\n\nBut the following snippet demonstrates calls to `foo(..)` and `bar(..)` where both *are* in tail position, as they're the last thing to happen in their code path (other than the `return`):\n\n```js\n\"use strict\";\n\nfunction foo(x) {\n\treturn x * 2;\n}\n\nfunction bar(x) {\n\tx = x + 1;\n\tif (x > 10) {\n\t\treturn foo( x );\n\t}\n\telse {\n\t\treturn bar( x + 1 );\n\t}\n}\n\nbar( 5 );\t\t\t\t// 24\nbar( 15 );\t\t\t\t// 32\n```\n\nIn this program, `bar(..)` is clearly recursive, but `foo(..)` is just a regular function call. In both cases, the function calls are in *proper tail position*. The `x + 1` is evaluated before the `bar(..)` call, and whenever that call finishes, all that happens is the `return`.\n\nProper Tail Calls (PTC) of these forms can be optimized -- called tail call optimization (TCO) -- so that the extra stack frame allocation is unnecessary. Instead of creating a new stack frame for the next function call, the engine just reuses the existing stack frame. That works because a function doesn't need to preserve any of the current state, as nothing happens with that state after the PTC.\n\nTCO means there's practically no limit to how deep the call stack can be. 
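To make the contrast concrete, here's the same summation sketched in both non-PTC and PTC forms (run at a shallow depth so it completes even in an engine without TCO):

```js
"use strict";

// not a tail call: the `+` still has to happen
// after the recursive call returns, so every
// frame must be kept alive
function sumDown(n) {
	if (n <= 0) return 0;
	return n + sumDown( n - 1 );
}

// proper tail call: the recursive call is the
// last thing to happen, so a TCO-capable engine
// can reuse the current stack frame
function sumDownPTC(n,acc = 0) {
	if (n <= 0) return acc;
	return sumDownPTC( n - 1, acc + n );
}

sumDown( 1000 );			// 500500
sumDownPTC( 1000 );			// 500500
```

Both forms produce the same answer, but only the second is eligible for the engine to run at arbitrary depth in constant stack space.
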
That trick slightly improves regular function calls in normal programs, but more importantly opens the door to using recursion for program expression even if the call stack could be tens of thousands of calls deep.

We're no longer relegated to simply theorizing about recursion for problem solving, but can actually use it in real JavaScript programs!

As of ES6, all PTC should be optimizable in this way, recursion or not.

### Tail Call Rewrite

The hitch, however, is that only PTC can be optimized; non-PTC calls will still work of course, but will cause stack frame allocation as they always did. You'll have to be careful about structuring your functions with PTC if you expect the optimizations to kick in.

If you have a function that's not written with PTC, you may find the need to manually rearrange your code to be eligible for TCO.

Consider:

```js
"use strict";

function foo(x) {
	if (x <= 1) return 1;
	return (x / 2) + foo( x - 1 );
}

foo( 123456 );			// RangeError
```

The call to `foo(x-1)` isn't a PTC because its result has to be added to `(x / 2)` before `return`ing.

However, to make this code eligible for TCO in an ES6 engine, we can rewrite it as follows:

```js
"use strict";

var foo = (function(){
	function _foo(acc,x) {
		if (x <= 1) return acc;
		return _foo( (x / 2) + acc, x - 1 );
	}

	return function(x) {
		return _foo( 1, x );
	};
})();

foo( 123456 );			// 3810376848.5
```

If you run the previous snippet in an ES6 engine that implements TCO, you'll get the `3810376848.5` answer as shown. However, it'll still fail with a `RangeError` in non-TCO engines.

### Non-TCO Optimizations

There are other techniques to rewrite the code so that the call stack isn't growing with each call.

One such technique is called *trampolining*, which amounts to having each partial result represented as a function that either returns another partial result function or the final result. 
Then you can simply loop until you stop getting a function, and you'll have the result. Consider:

```js
"use strict";

function trampoline( res ) {
	while (typeof res == "function") {
		res = res();
	}
	return res;
}

var foo = (function(){
	function _foo(acc,x) {
		if (x <= 1) return acc;
		return function partial(){
			return _foo( (x / 2) + acc, x - 1 );
		};
	}

	return function(x) {
		return trampoline( _foo( 1, x ) );
	};
})();

foo( 123456 );			// 3810376848.5
```

This reworking required minimal changes to factor out the recursion into the loop in `trampoline(..)`:

1. First, we wrapped the `return _foo ..` line in the `return function partial(){ ..` function expression.
2. Then we wrapped the `_foo(1,x)` call in the `trampoline(..)` call.

The reason this technique doesn't suffer the call stack limitation is that each of those inner `partial(..)` functions is just returned back to the `while` loop in `trampoline(..)`, which runs it and then the loop iterates again. In other words, `partial(..)` doesn't recursively call itself, it just returns another function. The stack depth remains constant, so it can run as long as it needs to.

Trampolining expressed in this way uses the closure that the inner `partial()` function has over the `x` and `acc` variables to keep the state from iteration to iteration. The advantage is that the looping logic is pulled out into a reusable `trampoline(..)` utility function, which many libraries provide versions of. You can reuse `trampoline(..)` multiple times in your program with different trampolined algorithms.

Of course, if you really wanted to deeply optimize (and the reusability wasn't a concern), you could discard the closure state and inline the state tracking of `acc` into just one function's scope along with a loop. 
This technique is generally called *recursion unrolling*:\n\n```js\n\"use strict\";\n\nfunction foo(x) {\n\tvar acc = 1;\n\twhile (x > 1) {\n\t\tacc = (x / 2) + acc;\n\t\tx = x - 1;\n\t}\n\treturn acc;\n}\n\nfoo( 123456 );\t\t\t// 3810376848.5\n```\n\nThis expression of the algorithm is simpler to read, and will likely perform the best (strictly speaking) of the various forms we've explored. That may seem like a clear winner, and you may wonder why you would ever try the other approaches.\n\nThere are some reasons why you might not want to always manually unroll your recursions:\n\n* Instead of factoring out the trampolining (loop) logic for reusability, we've inlined it. This works great when there's only one example to consider, but as soon as you have a half dozen or more of these in your program, there's a good chance you'll want some reusability to keep things shorter and more manageable.\n* The example here is deliberately simple enough to illustrate the different forms. In practice, there are many more complications in recursion algorithms, such as mutual recursion (more than just one function calling itself).\n\n   The farther you go down this rabbit hole, the more manual and intricate the *unrolling* optimizations are. You'll quickly lose all the perceived value of readability. The primary advantage of recursion, even in the PTC form, is that it preserves the algorithm readability, and offloads the performance optimization to the engine.\n\nIf you write your algorithms with PTC, the ES6 engine will apply TCO to let your code run in constant stack depth (by reusing stack frames). You get the readability of recursion with most of the performance benefits and no limitations of run length.\n\n### Meta?\n\nWhat does TCO have to do with meta programming?\n\nAs we covered in the \"Feature Testing\" section earlier, you can determine at runtime what features an engine supports. This includes TCO, though determining it is quite brute force. 
Consider:\n\n```js\n\"use strict\";\n\nvar TCO_ENABLED;\n\ntry {\n\t(function foo(x){\n\t\tif (x < 5E5) return foo( x + 1 );\n\t})( 1 );\n\n\tTCO_ENABLED = true;\n}\ncatch (err) {\n\tTCO_ENABLED = false;\n}\n```\n\nIn a non-TCO engine, the recursive loop will fail out eventually, throwing an exception caught by the `try..catch`. Otherwise, the loop completes easily thanks to TCO.\n\n**Note:** The `var TCO_ENABLED` declaration is necessary because in strict mode, assigning to an undeclared variable would itself throw a `ReferenceError`.\n\nYuck, right?\n\nBut how could meta programming around the TCO feature (or rather, the lack thereof) benefit our code? The simple answer is that you could use such a feature test to decide to load a version of your application's code that uses recursion, or an alternative one that's been converted/transpiled to not need recursion.\n\n#### Self-Adjusting Code\n\nBut here's another way of looking at the problem:\n\n```js\n\"use strict\";\n\nfunction foo(x) {\n\tfunction _foo() {\n\t\tif (x > 1) {\n\t\t\tacc = acc + (x / 2);\n\t\t\tx = x - 1;\n\t\t\treturn _foo();\n\t\t}\n\t}\n\n\tvar acc = 1;\n\n\twhile (x > 1) {\n\t\ttry {\n\t\t\t_foo();\n\t\t}\n\t\tcatch (err) { }\n\t}\n\n\treturn acc;\n}\n\nfoo( 123456 );\t\t\t// 3810376848.5\n```\n\nThis algorithm works by attempting to do as much of the work with recursion as possible, but keeping track of the progress via scoped variables `x` and `acc`. If the entire problem can be solved with recursion without an error, great. If the engine kills the recursion at some point, we simply catch that with the `try..catch` and then try again, picking up where we left off.\n\nI consider this a form of meta programming in that you are probing during runtime the ability of the engine to fully (recursively) finish the task, and working around any (non-TCO) engine limitations that may restrict you.\n\nAt first (or even second!) glance, my bet is this code seems much uglier to you compared to some of the earlier versions. 
It also runs a fair bit slower (on larger runs in a non-TCO environment).\n\nThe primary advantage, other than it being able to complete any size task even in non-TCO engines, is that this \"solution\" to the recursion stack limitation is much more flexible than the trampolining or manual unrolling techniques shown previously.\n\nEssentially, `_foo()` in this case is a sort of stand-in for practically any recursive task, even mutual recursion. The rest is the boilerplate that should work for just about any algorithm.\n\nThe only \"catch\" is that to be able to resume in the event of a recursion limit being hit, the state of the recursion must be in scoped variables that exist outside the recursive function(s). We did that by leaving `x` and `acc` outside of the `_foo()` function, instead of passing them as arguments to `_foo()` as earlier.\n\nAlmost any recursive algorithm can be adapted to work this way. That means it's the most widely applicable way of leveraging TCO with recursion in your programs, with minimal rewriting.\n\nThis approach still uses a PTC, meaning that this code will *progressively enhance* from running using the loop many times (recursion batches) in an older browser to fully leveraging TCO'd recursion in an ES6+ environment. I think that's pretty cool!\n\n## Review\n\nMeta programming is when you turn the logic of your program to focus on itself (or its runtime environment), either to inspect its own structure or to modify it. The primary value of meta programming is to extend the normal mechanisms of the language to provide additional capabilities.\n\nPrior to ES6, JavaScript already had quite a bit of meta programming capability, but ES6 significantly ramps that up with several new features.\n\nFrom function name inferences for anonymous functions to meta properties that give you information about things like how a constructor was invoked, you can inspect the program structure while it runs more than ever before. 
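\n\nFor example, here's a quick illustration of both of those capabilities (the names `f` and `Foo` are of course just illustrative):\n\n```js\nvar f = function(){};\nf.name;\t\t\t\t\t// \"f\" -- inferred from the variable it's assigned to\n\nfunction Foo() {\n\t// `new.target` tells us whether we were invoked with `new`\n\tconsole.log( new.target === Foo );\n}\n\nnew Foo();\t\t\t\t// logs: true\nFoo();\t\t\t\t\t// logs: false\n```\n\n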
Well Known Symbols let you override intrinsic behaviors, such as coercion of an object to a primitive value. Proxies can intercept and customize various low-level operations on objects, and `Reflect` provides utilities to emulate them.\n\nFeature testing, even for subtle semantic behaviors like Tail Call Optimization, shifts the meta programming focus from your program to the JS engine capabilities itself. By knowing more about what the environment can do, your programs can adjust themselves to the best fit as they run.\n\nShould you meta program? My advice is: first focus on learning how the core mechanics of the language really work. But once you fully know what JS itself can do, it's time to start leveraging these powerful meta programming capabilities to push the language further!\n"
  },
  {
    "path": "es6 & beyond/ch8.md",
    "content": "# You Don't Know JS: ES6 & Beyond\n# Chapter 8: Beyond ES6\n\nAt the time of this writing, the final draft of ES6 (*ECMAScript 2015*) is shortly headed toward its final official vote of approval by ECMA. But even as ES6 is being finalized, the TC39 committee is already hard at work on features for ES7/2016 and beyond.\n\nAs we discussed in Chapter 1, it's expected that the cadence of progress for JS is going to accelerate from updating once every several years to having an official version update once per year (hence the year-based naming). That alone is going to radically change how JS developers learn about and keep up with the language.\n\nBut even more importantly, the committee is actually going to work feature by feature. As soon as a feature is spec-complete and has its kinks worked out through implementation experiments in a few browsers, that feature will be considered stable enough to start using. We're all strongly encouraged to adopt features once they're ready instead of waiting for some official standards vote. If you haven't already learned ES6, the time is *past due* to get on board!\n\nAs the time of this writing, a list of future proposals and their status can be seen here (https://github.com/tc39/ecma262#current-proposals).\n\nTranspilers and polyfills are how we'll bridge to these new features even before all browsers we support have implemented them. Babel, Traceur, and several other major transpilers already have support for some of the post-ES6 features that are most likely to stabilize.\n\nWith that in mind, it's already time for us to look at some of them. Let's jump in!\n\n**Warning:** These features are all in various stages of development. While they're likely to land, and probably will look similar, take the contents of this chapter with more than a few grains of salt. This chapter will evolve in future editions of this title as these (and other!) 
features finalize.\n\n## `async function`s\n\nIn \"Generators + Promises\" in Chapter 4, we mentioned that there's a proposal for direct syntactic support for the pattern of generators `yield`ing promises to a runner-like utility that will resume it on promise completion. Let's take a brief look at that proposed feature, called `async function`.\n\nRecall this generator example from Chapter 4:\n\n```js\nrun( function *main() {\n\tvar ret = yield step1();\n\n\ttry {\n\t\tret = yield step2( ret );\n\t}\n\tcatch (err) {\n\t\tret = yield step2Failed( err );\n\t}\n\n\tret = yield Promise.all([\n\t\tstep3a( ret ),\n\t\tstep3b( ret ),\n\t\tstep3c( ret )\n\t]);\n\n\tyield step4( ret );\n} )\n.then(\n\tfunction fulfilled(){\n\t\t// `*main()` completed successfully\n\t},\n\tfunction rejected(reason){\n\t\t// Oops, something went wrong\n\t}\n);\n```\n\nThe proposed `async function` syntax can express this same flow control logic without needing the `run(..)` utility, because JS will automatically know how to look for promises to wait and resume. Consider:\n\n```js\nasync function main() {\n\tvar ret = await step1();\n\n\ttry {\n\t\tret = await step2( ret );\n\t}\n\tcatch (err) {\n\t\tret = await step2Failed( err );\n\t}\n\n\tret = await Promise.all( [\n\t\tstep3a( ret ),\n\t\tstep3b( ret ),\n\t\tstep3c( ret )\n\t] );\n\n\tawait step4( ret );\n}\n\nmain()\n.then(\n\tfunction fulfilled(){\n\t\t// `main()` completed successfully\n\t},\n\tfunction rejected(reason){\n\t\t// Oops, something went wrong\n\t}\n);\n```\n\nInstead of the `function *main() { ..` declaration, we declare with the `async function main() { ..` form. And instead of `yield`ing a promise, we `await` the promise. The call to run the function `main()` actually returns a promise that we can directly observe. That's the equivalent to the promise that we get back from a `run(main)` call.\n\nDo you see the symmetry? 
`async function` is essentially syntactic sugar for the generators + promises + `run(..)` pattern; under the covers, it operates the same!\n\nIf you're a C# developer and this `async`/`await` looks familiar, it's because this feature is directly inspired by C#'s feature. It's nice to see language precedence informing convergence!\n\nBabel, Traceur and other transpilers already have early support for the current status of `async function`s, so you can start using them already. However, in the next section \"Caveats\", we'll see why you perhaps shouldn't jump on that ship quite yet.\n\n**Note:** There's also a proposal for `async function*`, which would be called an \"async generator.\" You can both `yield` and `await` in the same code, and even combine those operations in the same statement: `x = await yield y`. The \"async generator\" proposal seems to be more in flux -- namely, its return value is not fully worked out yet. Some feel it should be an *observable*, which is kind of like the combination of an iterator and a promise. For now, we won't go further into that topic, but stay tuned as it evolves.\n\n### Caveats\n\nOne unresolved point of contention with `async function` is that because it only returns a promise, there's no way from the outside to *cancel* an `async function` instance that's currently running. 
This can be a problem if the async operation is resource intensive, and you want to free up the resources as soon as you're sure the result won't be needed.\n\nFor example:\n\n```js\nasync function request(url) {\n\tvar resp = await (\n\t\tnew Promise( function(resolve,reject){\n\t\t\tvar xhr = new XMLHttpRequest();\n\t\t\txhr.open( \"GET\", url );\n\t\t\txhr.onreadystatechange = function(){\n\t\t\t\tif (xhr.readyState == 4) {\n\t\t\t\t\tif (xhr.status == 200) {\n\t\t\t\t\t\tresolve( xhr );\n\t\t\t\t\t}\n\t\t\t\t\telse {\n\t\t\t\t\t\treject( xhr.statusText );\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t};\n\t\t\txhr.send();\n\t\t} )\n\t);\n\n\treturn resp.responseText;\n}\n\nvar pr = request( \"http://some.url.1\" );\n\npr.then(\n\tfunction fulfilled(responseText){\n\t\t// ajax success\n\t},\n\tfunction rejected(reason){\n\t\t// Oops, something went wrong\n\t}\n);\n```\n\nThis `request(..)` that I've conceived is somewhat like the `fetch(..)` utility that's recently been proposed for inclusion into the web platform. So the concern is, what happens if you want to use the `pr` value to somehow indicate that you want to cancel a long-running Ajax request, for example?\n\nPromises are not cancelable (at the time of writing, anyway). In my opinion, as well as many others, they never should be (see the *Async & Performance* title of this series). 
And even if a promise did have a `cancel()` method on it, does that necessarily mean that calling `pr.cancel()` should actually propagate a cancelation signal all the way back up the promise chain to the `async function`?\n\nSeveral possible resolutions to this debate have surfaced:\n\n* `async function`s won't be cancelable at all (status quo)\n* A \"cancel token\" can be passed to an async function at call time\n* Return value changes to a cancelable-promise type that's added\n* Return value changes to something else non-promise (e.g., observable, or control token with promise and cancel capabilities)\n\nAt the time of this writing, `async function`s return regular promises, so it's less likely that the return value will entirely change. But it's too early to tell where things will land. Keep an eye on this discussion.\n\n## `Object.observe(..)`\n\nOne of the holy grails of front-end web development is data binding -- listening for updates to a data object and syncing the DOM representation of that data. Most JS frameworks provide some mechanism for these sorts of operations.\n\nIt appears likely that post ES6, we'll see support added directly to the language, via a utility called `Object.observe(..)`. Essentially, the idea is that you can set up a listener to observe an object's changes, and have a callback called any time a change occurs. 
You can then update the DOM accordingly, for instance.\n\nThere are six types of changes that you can observe:\n\n* add\n* update\n* delete\n* reconfigure\n* setPrototype\n* preventExtensions\n\nBy default, you'll be notified of all these change types, but you can filter down to only the ones you care about.\n\nConsider:\n\n```js\nvar obj = { a: 1, b: 2 };\n\nObject.observe(\n\tobj,\n\tfunction(changes){\n\t\tfor (var change of changes) {\n\t\t\tconsole.log( change );\n\t\t}\n\t},\n\t[ \"add\", \"update\", \"delete\" ]\n);\n\nobj.c = 3;\n// { name: \"c\", object: obj, type: \"add\" }\n\nobj.a = 42;\n// { name: \"a\", object: obj, type: \"update\", oldValue: 1 }\n\ndelete obj.b;\n// { name: \"b\", object: obj, type: \"delete\", oldValue: 2 }\n```\n\nIn addition to the main `\"add\"`, `\"update\"`, and `\"delete\"` change types:\n\n* The `\"reconfigure\"` change event is fired if one of the object's properties is reconfigured with `Object.defineProperty(..)`, such as changing its `writable` attribute. See the *this & Object Prototypes* title of this series for more information.\n* The `\"preventExtensions\"` change event is fired if the object is made non-extensible via `Object.preventExtensions(..)`.\n\n   Because both `Object.seal(..)` and `Object.freeze(..)` also imply `Object.preventExtensions(..)`, they'll also fire its corresponding change event. In addition, `\"reconfigure\"` change events will also be fired for each property on the object.\n* The `\"setPrototype\"` change event is fired if the `[[Prototype]]` of an object is changed, either by setting it with the `__proto__` setter, or using `Object.setPrototypeOf(..)`.\n\nNotice that these change events are notified immediately after said change. Don't confuse this with proxies (see Chapter 7) where you can intercept the actions before they occur. 
Object observation lets you respond after a change (or set of changes) occurs.\n\n### Custom Change Events\n\nIn addition to the six built-in change event types, you can also listen for and fire custom change events.\n\nConsider:\n\n```js\nfunction observer(changes){\n\tfor (var change of changes) {\n\t\tif (change.type == \"recalc\") {\n\t\t\tchange.object.c =\n\t\t\t\tchange.oldValue +\n\t\t\t\tchange.object.a +\n\t\t\t\tchange.object.b;\n\t\t}\n\t}\n}\n\nfunction changeObj(a,b) {\n\tvar notifier = Object.getNotifier( obj );\n\n\tobj.a = a * 2;\n\tobj.b = b * 3;\n\n\t// queue up change events into a set\n\tnotifier.notify( {\n\t\ttype: \"recalc\",\n\t\tname: \"c\",\n\t\toldValue: obj.c\n\t} );\n}\n\nvar obj = { a: 1, b: 2, c: 3 };\n\nObject.observe(\n\tobj,\n\tobserver,\n\t[\"recalc\"]\n);\n\nchangeObj( 3, 11 );\n\nobj.a;\t\t\t// 6\nobj.b;\t\t\t// 33\nobj.c;\t\t\t// 3\n```\n\nThe change set (`\"recalc\"` custom event) has been queued for delivery to the observer, but not delivered yet, which is why `obj.c` is still `3`.\n\nThe changes are by default delivered at the end of the current event loop (see the *Async & Performance* title of this series). If you want to deliver them immediately, use `Object.deliverChangeRecords(observer)`. Once the change events are delivered, you can observe `obj.c` updated as expected:\n\n```js\nobj.c;\t\t\t// 42\n```\n\nIn the previous example, we called `notifier.notify(..)` with the complete change event record. An alternative form for queuing change records is to use `performChange(..)`, which separates specifying the type of the event from the rest of the event record's properties (via a function callback). 
Consider:\n\n```js\nnotifier.performChange( \"recalc\", function(){\n\treturn {\n\t\tname: \"c\",\n\t\t// `this` is the object under observation\n\t\toldValue: this.c\n\t};\n} );\n```\n\nIn certain circumstances, this separation of concerns may map more cleanly to your usage pattern.\n\n### Ending Observation\n\nJust like with normal event listeners, you may wish to stop observing an object's change events. For that, you use `Object.unobserve(..)`.\n\nFor example:\n\n```js\nvar obj = { a: 1, b: 2 };\n\nObject.observe( obj, function observer(changes) {\n\tfor (var change of changes) {\n\t\tif (change.type == \"setPrototype\") {\n\t\t\tObject.unobserve(\n\t\t\t\tchange.object, observer\n\t\t\t);\n\t\t\tbreak;\n\t\t}\n\t}\n} );\n```\n\nIn this trivial example, we listen for change events until we see the `\"setPrototype\"` event come through, at which time we stop observing any more change events.\n\n## Exponentiation Operator\n\nAn operator has been proposed for JavaScript to perform exponentiation in the same way that `Math.pow(..)` does. Consider:\n\n```js\nvar a = 2;\n\na ** 4;\t\t\t// Math.pow( a, 4 ) == 16\n\na **= 3;\t\t// a = Math.pow( a, 3 )\na;\t\t\t\t// 8\n```\n\n**Note:** `**` is essentially the same as it appears in Python, Ruby, Perl, and others.\n\n## Object Properties and `...`\n\nAs we saw in the \"Too Many, Too Few, Just Enough\" section of Chapter 2, the `...` operator is pretty obvious in how it relates to spreading or gathering arrays. But what about objects?\n\nSuch a feature was considered for ES6, but was deferred to be considered after ES6 (aka \"ES7\" or \"ES2016\" or ...). 
Here's how it might work in that \"beyond ES6\" timeframe:\n\n```js\nvar o1 = { a: 1, b: 2 },\n\to2 = { c: 3 },\n\to3 = { ...o1, ...o2, d: 4 };\n\nconsole.log( o3.a, o3.b, o3.c, o3.d );\n// 1 2 3 4\n```\n\nThe `...` operator might also be used to gather an object's destructured properties back into an object:\n\n```js\nvar o1 = { b: 2, c: 3, d: 4 };\nvar { b, ...o2 } = o1;\n\nconsole.log( b, o2.c, o2.d );\t\t// 2 3 4\n```\n\nHere, the `...o2` re-gathers the destructured `c` and `d` properties back into an `o2` object (`o2` does not have a `b` property like `o1` does).\n\nAgain, these are just proposals under consideration beyond ES6. But it'll be cool if they do land.\n\n## `Array#includes(..)`\n\nOne extremely common task JS developers need to perform is searching for a value inside an array of values. The way this has always been done is:\n\n```js\nvar vals = [ \"foo\", \"bar\", 42, \"baz\" ];\n\nif (vals.indexOf( 42 ) >= 0) {\n\t// found it!\n}\n```\n\nThe reason for the `>= 0` check is because `indexOf(..)` returns a numeric value of `0` or greater if found, or `-1` if not found. In other words, we're using an index-returning function in a boolean context. But because `-1` is truthy instead of falsy, we have to be more manual with our checks.\n\nIn the *Types & Grammar* title of this series, I explored another pattern that I slightly prefer:\n\n```js\nvar vals = [ \"foo\", \"bar\", 42, \"baz\" ];\n\nif (~vals.indexOf( 42 )) {\n\t// found it!\n}\n```\n\nThe `~` operator here conforms the return value of `indexOf(..)` to a value range that is suitably boolean coercible. That is, `-1` produces `0` (falsy), and anything else produces a non-zero (truthy) value, which is what we want for deciding if we found the value or not.\n\nWhile I think that's an improvement, others strongly disagree. However, no one can argue that `indexOf(..)`'s searching logic is perfect. 
It fails to find `NaN` values in the array, for example.\n\nSo a proposal has surfaced and gained a lot of support for adding a real boolean-returning array search method, called `includes(..)`:\n\n```js\nvar vals = [ \"foo\", \"bar\", 42, \"baz\" ];\n\nif (vals.includes( 42 )) {\n\t// found it!\n}\n```\n\n**Note:** `Array#includes(..)` uses matching logic that will find `NaN` values, but will not distinguish between `-0` and `0` (see the *Types & Grammar* title of this series). If you don't care about `-0` values in your programs, this will likely be exactly what you're hoping for. If you *do* care about `-0`, you'll need to do your own searching logic, likely using the `Object.is(..)` utility (see Chapter 6).\n\n## SIMD\n\nWe cover Single Instruction, Multiple Data (SIMD) in more detail in the *Async & Performance* title of this series, but it bears a brief mention here, as it's one of the next likely features to land in a future JS.\n\nThe SIMD API exposes various low-level (CPU) instructions that can operate on more than a single number value at a time. For example, you'll be able to specify two *vectors* of 4 or 8 numbers each, and multiply the respective elements all at once (data parallelism!).\n\nConsider:\n\n```js\nvar v1 = SIMD.float32x4( 3.14159, 21.0, 32.3, 55.55 );\nvar v2 = SIMD.float32x4( 2.1, 3.2, 4.3, 5.4 );\n\nSIMD.float32x4.mul( v1, v2 );\n// [ 6.597339, 67.2, 138.89, 299.97 ]\n```\n\nSIMD will include several other operations besides `mul(..)` (multiplication), such as `sub()`, `div()`, `abs()`, `neg()`, `sqrt()`, and many more.\n\nParallel math operations are critical for the next generations of high performance JS applications.\n\n## WebAssembly (WASM)\n\nBrendan Eich made a late breaking announcement near the completion of the first edition of this title that has the potential to significantly impact the future path of JavaScript: WebAssembly (WASM). 
We will not be able to cover WASM in detail here, as it's extremely early at the time of this writing. But this title would be incomplete without at least a brief mention of it.\n\nOne of the strongest pressures on the recent (and near future) design changes of the JS language has been the desire that it become a more suitable target for transpilation/cross-compilation from other languages (like C/C++, ClojureScript, etc.). Obviously, performance of code running as JavaScript has been a primary concern.\n\nAs discussed in the *Async & Performance* title of this series, a few years ago a group of developers at Mozilla introduced an idea to JavaScript called ASM.js. ASM.js is a subset of valid JS that most significantly restricts certain actions that make code hard for the JS engine to optimize. The result is that ASM.js compatible code running in an ASM-aware engine can run remarkably faster, nearly on par with native optimized C equivalents. Many viewed ASM.js as the most likely backbone on which performance-hungry applications would ride in JavaScript.\n\nIn other words, all roads to running code in the browser *lead through JavaScript*.\n\nThat is, until the WASM announcement. WASM provides an alternate path for other languages to target the browser's runtime environment without having to first pass through JavaScript. Essentially, if WASM takes off, JS engines will grow an extra capability to execute a binary format of code that can be seen as somewhat similar to a bytecode (like that which runs on the JVM).\n\nWASM proposes a format for a binary representation of a highly compressed AST (syntax tree) of code, which can then give instructions directly to the JS engine and its underpinnings, without having to be parsed by JS, or even behave by the rules of JS. 
Languages like C or C++ can be compiled directly to the WASM format instead of ASM.js, and gain an extra speed advantage by skipping the JS parsing.\n\nThe near term for WASM is to have parity with ASM.js and indeed JS. But eventually, it's expected that WASM would grow new capabilities that surpass anything JS could do. For example, the pressure for JS to evolve radical features like threads -- a change that would certainly send major shockwaves through the JS ecosystem -- has a more hopeful future as a future WASM extension, relieving the pressure to change JS.\n\nIn fact, this new roadmap opens up many new roads for many languages to target the web runtime. That's an exciting new future path for the web platform!\n\nWhat does it mean for JS? Will JS become irrelevant or \"die\"? Absolutely not. ASM.js will likely not see much of a future beyond the next couple of years, but the majority of JS is quite safely anchored in the web platform story.\n\nProponents of WASM suggest its success will mean that the design of JS will be protected from pressures that would have eventually stretched it beyond assumed breaking points of reasonability. It is projected that WASM will become the preferred target for high-performance parts of applications, as authored in any of a myriad of different languages.\n\nInterestingly, JavaScript is one of the lesser likely languages to target WASM in the future. There may be future changes that carve out subsets of JS that might be tenable for such targeting, but that path doesn't seem high on the priority list.\n\nWhile JS likely won't be much of a WASM funnel, JS code and WASM code will be able to interoperate in the most significant ways, just as naturally as current module interactions. 
You can imagine calling a JS function like `foo()` and having that actually invoke a WASM function of that name with the power to run well outside the constraints of the rest of your JS.\n\nThings which are currently written in JS will probably continue to always be written in JS, at least for the foreseeable future. Things which are transpiled to JS will probably eventually at least consider targeting WASM instead. For things which need the utmost in performance with minimal tolerance for layers of abstraction, the likely choice will be to find a suitable non-JS language to author in, then targeting WASM.\n\nThere's a good chance this shift will be slow, and will be years in the making. WASM landing in all the major browser platforms is probably a few years out at best. In the meantime, the WASM project (https://github.com/WebAssembly) has an early polyfill to demonstrate proof-of-concept for its basic tenets.\n\nBut as time goes on, and as WASM learns new non-JS tricks, it's not too much a stretch of imagination to see some currently-JS things being refactored to a WASM-targetable language. For example, the performance sensitive parts of frameworks, game engines, and other heavily used tools might very well benefit from such a shift. Developers using these tools in their web applications likely won't notice much difference in usage or integration, but will just automatically take advantage of the performance and capabilities.\n\nWhat's certain is that the more real WASM becomes over time, the more it means to the trajectory and design of JavaScript. It's perhaps one of the most important \"beyond ES6\" topics developers should keep an eye on.\n\n## Review\n\nIf all the other books in this series essentially propose this challenge, \"you (may) not know JS (as much as you thought),\" this book has instead suggested, \"you don't know JS anymore.\" The book has covered a ton of new stuff added to the language in ES6. 
It's an exciting collection of new language features and paradigms that will forever improve our JS programs.\n\nBut JS is not done with ES6! Not even close. There are already quite a few features in various stages of development for the \"beyond ES6\" timeframe. In this chapter, we briefly looked at some of the most likely candidates to land in JS very soon.\n\n`async function`s are powerful syntactic sugar on top of the generators + promises pattern (see Chapter 4). `Object.observe(..)` adds direct native support for observing object change events, which is critical for implementing data binding. The `**` exponentiation operator, `...` for object properties, and `Array#includes(..)` are all simple but helpful improvements to existing mechanisms. SIMD ushers in a new era in the evolution of high performance JS. And finally, WebAssembly (WASM) opens up a path for other languages to target the web's runtime directly, alongside JS.\n\nCliché as it sounds, the future of JS is really bright! The challenge of this series, and indeed of this book, is incumbent on every reader now. What are you waiting for? It's time to get learning and exploring!\n"
  },
  {
    "path": "es6 & beyond/foreword.md",
    "content": "# You Don't Know JS: ES6 & Beyond\n# Foreword\n\nKyle Simpson is a thorough pragmatist.\n\nI can't think of higher praise than this. To me, these are two of the most important qualities that a software developer must have. That's right: *must*, not *should*. Kyle's keen ability to tease apart layers of the JavaScript programming language and present them in understandable and meaningful portions is second to none.\n\n*ES6 & Beyond* will be familiar to readers of the *You Don't Know JS* series:  they can expect to be deeply immersed in everything from the obvious, to the very subtle -- revealing semantics that were either taken for granted or never even considered. Until now, the *You Don't Know JS* book series has covered material that has at least some degree of familiarity to its readers. They have either seen or heard about the subject matter; they may even have experience with it. This volume covers material that only a very small portion of the JavaScript developer community has been exposed to: the  evolutionary changes to the language introduced in the ECMAScript 2015 Language Specification.\n\nOver the last couple years, I've witnessed Kyle's tireless efforts to familiarize himself with this material to a level of expertise that is rivaled by only a handful of his professional peers. That's quite a feat, considering that at the time of this writing, the language specification document hasn't been formally published! But what I've said is true, and I've read every word that Kyle's written for this book. I've followed every change, and each time, the content only gets better and provides yet a deeper level of understanding.\n\nThis book is about shaking up your sense of understanding by exposing you to the new and unknown. The intention is to evolve your knowledge in step with your tools by bestowing you with new capabilities. 
It exists to give you the confidence to fully embrace the next major era of JavaScript programming.\n\nRick Waldron<br>\n[@rwaldron](http://twitter.com/rwaldron)<br>\nOpen Web Engineer at Bocoup<br>\nEcma/TC39 Representative for jQuery\n"
  },
  {
    "path": "es6 & beyond/toc.md",
    "content": "# You Don't Know JS: ES6 & Beyond\n\n## Table of Contents\n\n* Foreword\n* Preface\n* Chapter 1: ES? Now & Future\n\t* Versioning\n\t* Transpiling\n* Chapter 2: Syntax\n\t* Block-Scoped Declarations\n\t* Spread / Rest\n\t* Default Parameter Values\n\t* Destructuring\n\t* Object Literal Extensions\n\t* Template Literals\n\t* Arrow Functions\n\t* `for..of` Loops\n\t* Regular Expression Extensions\n\t* Number Literal Extensions\n\t* Unicode\n\t* Symbols\n* Chapter 3: Organization\n\t* Iterators\n\t* Generators\n\t* Modules\n\t* Classes\n* Chapter 4: Async Flow Control\n\t* Promises\n\t* Generators + Promises\n* Chapter 5: Collections\n\t* TypedArrays\n\t* Maps\n\t* WeakMaps\n\t* Sets\n\t* WeakSets\n* Chapter 6: API Additions\n\t* `Array`\n\t* `Object`\n\t* `Math`\n\t* `Number`\n\t* `String`\n* Chapter 7: Meta Programming\n\t* Function Names\n\t* Meta Properties\n\t* Well Known Symbols\n\t* Proxies\n\t* `Reflect` API\n\t* Feature Testing\n\t* Tail Call Optimization (TCO)\n* Chapter 8: Beyond ES6\n\t* `async function`s\n\t* `Object.observe(..)`\n\t* Exponentiation Operator\n\t* Object Properties and `...`\n\t* `Array#includes(..)`\n\t* SIMD\n* Appendix A: Acknowledgments\n"
  },
  {
    "path": "preface.md",
    "content": "# You Don't Know JS\n# Preface\n\nI'm sure you noticed, but \"JS\" in the book series title is not an abbreviation for words used to curse about JavaScript, though cursing at the language's quirks is something we can probably all identify with!\n\nFrom the earliest days of the web, JavaScript has been a foundational technology that drives interactive experience around the content we consume. While flickering mouse trails and annoying pop-up prompts may be where JavaScript started, nearly 2 decades later, the technology and capability of JavaScript has grown many orders of magnitude, and few doubt its importance at the heart of the world's most widely available software platform: the web.\n\nBut as a language, it has perpetually been a target for a great deal of criticism, owing partly to its heritage but even more to its design philosophy. Even the name evokes, as Brendan Eich once put it, \"dumb kid brother\" status next to its more mature older brother \"Java\". But the name is merely an accident of politics and marketing. The two languages are vastly different in many important ways. \"JavaScript\" is as related to \"Java\" as \"Carnival\" is to \"Car\".\n\nBecause JavaScript borrows concepts and syntax idioms from several languages, including proud C-style procedural roots as well as subtle, less obvious Scheme/Lisp-style functional roots, it is exceedingly approachable to a broad audience of developers, even those with just little to no programming experience. The \"Hello World\" of JavaScript is so simple that the language is inviting and easy to get comfortable with in early exposure.\n\nWhile JavaScript is perhaps one of the easiest languages to get up and running with, its eccentricities make solid mastery of the language a vastly less common occurrence than in many other languages. 
Where it takes a pretty in-depth knowledge of a language like C or C++ to write a full-scale program, full-scale production JavaScript can, and often does, barely scratch the surface of what the language can do.\n\nSophisticated concepts which are deeply rooted into the language tend instead to surface themselves in *seemingly* simplistic ways, such as passing around functions as callbacks, which encourages the JavaScript developer to just use the language as-is and not worry too much about what's going on under the hood.\n\nIt is simultaneously a simple, easy-to-use language that has broad appeal, and a complex and nuanced collection of language mechanics which without careful study will elude *true understanding* even for the most seasoned of JavaScript developers.\n\nTherein lies the paradox of JavaScript, the Achilles' Heel of the language, the challenge we are presently addressing. Because JavaScript *can* be used without understanding, the understanding of the language is often never attained.\n\n## Mission\n\nIf at every point that you encounter a surprise or frustration in JavaScript, your response is to add it to the blacklist, as some are accustomed to doing, you soon will be relegated to a hollow shell of the richness of JavaScript.\n\nWhile this subset has been famously dubbed \"The Good Parts\", I would implore you, dear reader, to instead consider it the \"The Easy Parts\", \"The Safe Parts\", or even \"The Incomplete Parts\".\n\nThis *You Don't Know JavaScript* book series offers a contrary challenge: learn and deeply understand *all* of JavaScript, even and especially \"The Tough Parts\".\n\nHere, we address head on the tendency of JS developers to learn \"just enough\" to get by, without ever forcing themselves to learn exactly how and why the language behaves the way it does. 
Furthermore, we eschew the common advice to *retreat* when the road gets rough.\n\nI am not content, nor should you be, at stopping once something *just works*, and not really knowing *why*. I gently challenge you to journey down that bumpy \"road less traveled\" and embrace all that JavaScript is and can do. With that knowledge, no technique, no framework, no popular buzzword acronym of the week, will be beyond your understanding.\n\nThese books each take on specific core parts of the language which are most commonly misunderstood or under-understood, and dive very deep and exhaustively into them. You should come away from reading with a firm confidence in your understanding, not just of the theoretical, but the practical \"what you need to know\" bits.\n\nThe JavaScript you know *right now* is probably *parts* handed down to you by others who've been burned by incomplete understanding. *That* JavaScript is but a shadow of the true language. You don't *really* know JavaScript, *yet*, but if you dig into this series, you *will*. Read on, my friends. JavaScript awaits you.\n\n## Summary\n\nJavaScript is awesome. It's easy to learn partially, and much harder to learn completely (or even *sufficiently*). When developers encounter confusion, they usually blame the language instead of their lack of understanding. These books aim to fix that, inspiring a strong appreciation for the language you can now, and *should*, deeply *know*.\n\nNote: Many of the examples in this book assume modern (and future-reaching) JavaScript engine environments, such as ES6. Some code may not work as described if run in older (pre-ES6) engines.\n"
  },
  {
    "path": "scope & closures/README.md",
    "content": "# You Don't Know JS: Scope & Closures\n\n<img src=\"cover.jpg\" width=\"300\">\n\n-----\n\n**[Purchase digital/print copy from O'Reilly](http://shop.oreilly.com/product/0636920026327.do)**\n\n-----\n\n[Table of Contents](toc.md)\n\n* [Foreword](https://shanehudson.net/2014/06/03/foreword-dont-know-js/) (by [Shane Hudson](https://github.com/shanehudson))\n* [Preface](../preface.md)\n* [Chapter 1: What is Scope?](ch1.md)\n* [Chapter 2: Lexical Scope](ch2.md)\n* [Chapter 3: Function vs. Block Scope](ch3.md)\n* [Chapter 4: Hoisting](ch4.md)\n* [Chapter 5: Scope Closures](ch5.md)\n* [Appendix A: Dynamic Scope](apA.md)\n* [Appendix B: Polyfilling Block Scope](apB.md)\n* [Appendix C: Lexical-this](apC.md)\n* [Appendix D: Thank You's!](apD.md)\n"
  },
  {
    "path": "scope & closures/apA.md",
    "content": "# You Don't Know JS: Scope & Closures\n# Appendix A: Dynamic Scope\n\nIn Chapter 2, we talked about \"Dynamic Scope\" as a contrast to the \"Lexical Scope\" model, which is how scope works in JavaScript (and in fact, most other languages).\n\nWe will briefly examine dynamic scope, to hammer home the contrast. But, more importantly, dynamic scope actually is a near cousin to another mechanism (`this`) in JavaScript, which we covered in the \"*this & Object Prototypes*\" title of this book series.\n\nAs we saw in Chapter 2, lexical scope is the set of rules about how the *Engine* can look-up a variable and where it will find it. The key characteristic of lexical scope is that it is defined at author-time, when the code is written (assuming you don't cheat with `eval()` or `with`).\n\nDynamic scope seems to imply, and for good reason, that there's a model whereby scope can be determined dynamically at runtime, rather than statically at author-time. That is in fact the case. Let's illustrate via code:\n\n```js\nfunction foo() {\n\tconsole.log( a ); // 2\n}\n\nfunction bar() {\n\tvar a = 3;\n\tfoo();\n}\n\nvar a = 2;\n\nbar();\n```\n\nLexical scope holds that the RHS reference to `a` in `foo()` will be resolved to the global variable `a`, which will result in value `2` being output.\n\nDynamic scope, by contrast, doesn't concern itself with how and where functions and scopes are declared, but rather **where they are called from**. In other words, the scope chain is based on the call-stack, not the nesting of scopes in code.\n\nSo, if JavaScript had dynamic scope, when `foo()` is executed, **theoretically** the code below would instead result in `3` as the output.\n\n```js\nfunction foo() {\n\tconsole.log( a ); // 3  (not 2!)\n}\n\nfunction bar() {\n\tvar a = 3;\n\tfoo();\n}\n\nvar a = 2;\n\nbar();\n```\n\nHow can this be? 
Because when `foo()` cannot resolve the variable reference for `a`, instead of stepping up the nested (lexical) scope chain, it walks up the call-stack, to find where `foo()` was *called from*. Since `foo()` was called from `bar()`, it checks the variables in scope for `bar()`, and finds an `a` there with value `3`.\n\nStrange? You're probably thinking so, at the moment.\n\nBut that's just because you've probably only ever worked on (or at least deeply considered) code which is lexically scoped. So dynamic scoping seems foreign. If you had only ever written code in a dynamically scoped language, it would seem natural, and lexical scope would be the odd-ball.\n\nTo be clear, JavaScript **does not, in fact, have dynamic scope**. It has lexical scope. Plain and simple. But the `this` mechanism is kind of like dynamic scope.\n\nThe key contrast: **lexical scope is write-time, whereas dynamic scope (and `this`!) are runtime**. Lexical scope cares *where a function was declared*, but dynamic scope cares where a function was *called from*.\n\nFinally: `this` cares *how a function was called*, which shows how closely related the `this` mechanism is to the idea of dynamic scoping. To dig more into `this`, read the title \"*this & Object Prototypes*\".\n"
  },
  {
    "path": "scope & closures/apB.md",
    "content": "# You Don't Know JS: Scope & Closures\n# Appendix B: Polyfilling Block Scope\n\nIn Chapter 3, we explored Block Scope. We saw that `with` and the `catch` clause are both tiny examples of block scope that have existed in JavaScript since at least the introduction of ES3.\n\nBut it's ES6's introduction of `let` that finally gives full, unfettered block-scoping capability to our code. There are many exciting things, both functionally and code-stylistically, that block scope will enable.\n\nBut what if we wanted to use block scope in pre-ES6 environments?\n\nConsider this code:\n\n```js\n{\n\tlet a = 2;\n\tconsole.log( a ); // 2\n}\n\nconsole.log( a ); // ReferenceError\n```\n\nThis will work great in ES6 environments. But can we do so pre-ES6? `catch` is the answer.\n\n```js\ntry{throw 2}catch(a){\n\tconsole.log( a ); // 2\n}\n\nconsole.log( a ); // ReferenceError\n```\n\nWhoa! That's some ugly, weird looking code. We see a `try/catch` that appears to forcibly throw an error, but the \"error\" it throws is just a value `2`, and then the variable declaration that receives it is in the `catch(a)` clause. Mind: blown.\n\nThat's right, the `catch` clause has block-scoping to it, which means it can be used as a polyfill for block scope in pre-ES6 environments.\n\n\"But...\", you say. \"...no one wants to write ugly code like that!\" That's true. No one writes (some of) the code output by the CoffeeScript compiler, either. That's not the point.\n\nThe point is that tools can transpile ES6 code to work in pre-ES6 environments. 
You can write code using block-scoping, and benefit from such functionality, and let a build-step tool take care of producing code that will actually *work* when deployed.\n\nThis is actually the preferred migration path for all (ahem, most) of ES6: to use a code transpiler to take ES6 code and produce ES5-compatible code during the transition from pre-ES6 to ES6.\n\n## Traceur\n\nGoogle maintains a project called \"Traceur\" [^note-traceur], which is exactly tasked with transpiling ES6 features into pre-ES6 (mostly ES5, but not all!) for general usage. The TC39 committee relies on this tool (and others) to test out the semantics of the features they specify.\n\nWhat does Traceur produce from our snippet? You guessed it!\n\n```js\n{\n\ttry {\n\t\tthrow undefined;\n\t} catch (a) {\n\t\ta = 2;\n\t\tconsole.log( a );\n\t}\n}\n\nconsole.log( a );\n```\n\nSo, with the use of such tools, we can start taking advantage of block scope regardless of whether we are targeting ES6 or not, because `try/catch` has been around (and worked this way) from ES3 days.\n\n## Implicit vs. Explicit Blocks\n\nIn Chapter 3, we identified some potential pitfalls to code maintainability/refactorability when we introduce block-scoping. Is there another way to take advantage of block scope while reducing this downside?\n\nConsider this alternate form of `let`, called the \"let block\" or \"let statement\" (contrasted with \"let declarations\" from before).\n\n```js\nlet (a = 2) {\n\tconsole.log( a ); // 2\n}\n\nconsole.log( a ); // ReferenceError\n```\n\nInstead of implicitly hijacking an existing block, the let-statement creates an explicit block for its scope binding. Not only does the explicit block stand out more, and perhaps fare more robustly in code refactoring, it produces somewhat cleaner code by, grammatically, forcing all the declarations to the top of the block.
This makes it easier to look at any block and know what's scoped to it and not.\n\nAs a pattern, it mirrors the approach many people take in function-scoping when they manually move/hoist all their `var` declarations to the top of the function. The let-statement puts them there at the top of the block by intent, and if you don't use `let` declarations strewn throughout, your block-scoping declarations are somewhat easier to identify and maintain.\n\nBut, there's a problem. The let-statement form is not included in ES6. Nor does the official Traceur compiler accept that form of code.\n\nWe have two options. We can format using ES6-valid syntax and a little sprinkle of code discipline:\n\n```js\n/*let*/ { let a = 2;\n\tconsole.log( a );\n}\n\nconsole.log( a ); // ReferenceError\n```\n\nBut, tools are meant to solve our problems. So the other option is to write explicit let statement blocks, and let a tool convert them to valid, working code.\n\nSo, I built a tool called \"let-er\" [^note-let_er] to address just this issue. *let-er* is a build-step code transpiler, but its only task is to find let-statement forms and transpile them. It will leave alone any of the rest of your code, including any let-declarations. You can safely use *let-er* as the first ES6 transpiler step, and then pass your code through something like Traceur if necessary.\n\nMoreover, *let-er* has a configuration flag `--es6`, which, when turned on (off by default), changes the kind of code produced.
Instead of the `try/catch` ES3 polyfill hack, *let-er* would take our snippet and produce the fully ES6-compliant, non-hacky:\n\n```js\n{\n\tlet a = 2;\n\tconsole.log( a );\n}\n\nconsole.log( a ); // ReferenceError\n```\n\nSo, you can start using *let-er* right away, and target all pre-ES6 environments, and when you only care about ES6, you can add the flag and instantly target only ES6.\n\nAnd most importantly, **you can use the preferable, more explicit let-statement form** even though it is not an official part of any ES version (yet).\n\n## Performance\n\nLet me add one last quick note on the performance of `try/catch`, and/or to address the question, \"why not just use an IIFE to create the scope?\"\n\nFirstly, the performance of `try/catch` *is* slower, but there's no reasonable assumption that it *has* to be that way, or even that it *always will be* that way. Since the official TC39-approved ES6 transpiler uses `try/catch`, the Traceur team has asked Chrome to improve the performance of `try/catch`, and they are obviously motivated to do so.\n\nSecondly, IIFE is not a fair apples-to-apples comparison with `try/catch`, because a function wrapped around any arbitrary code changes the meaning, inside of that code, of `this`, `return`, `break`, and `continue`. IIFE is not a suitable general substitute. It could only be used manually in certain cases.\n\nThe question really becomes: do you want block-scoping, or not? If you do, these tools provide you that option. If not, keep using `var` and go on about your coding!\n\n[^note-traceur]: [Google Traceur](http://google.github.io/traceur-compiler/demo/repl.html)\n\n[^note-let_er]: [let-er](https://github.com/getify/let-er)\n"
  },
  {
    "path": "scope & closures/apC.md",
    "content": "# You Don't Know JS: Scope & Closures\n# Appendix C: Lexical-this\n\nThough this title does not address the `this` mechanism in any detail, there's one ES6 topic which relates `this` to lexical scope in an important way, which we will quickly examine.\n\nES6 adds a special syntactic form of function declaration called the \"arrow function\". It looks like this:\n\n```js\nvar foo = a => {\n\tconsole.log( a );\n};\n\nfoo( 2 ); // 2\n```\n\nThe so-called \"fat arrow\" is often mentioned as a short-hand for the *tediously verbose* (sarcasm) `function` keyword.\n\nBut there's something much more important going on with arrow-functions that has nothing to do with saving keystrokes in your declaration.\n\nBriefly, this code suffers a problem:\n\n```js\n\nvar obj = {\n\tid: \"awesome\",\n\tcool: function coolFn() {\n\t\tconsole.log( this.id );\n\t}\n};\n\nvar id = \"not awesome\";\n\nobj.cool(); // awesome\n\nsetTimeout( obj.cool, 100 ); // not awesome\n```\n\nThe problem is the loss of `this` binding on the `cool()` function. There are various ways to address that problem, but one often-repeated solution is `var self = this;`.\n\nThat might look like:\n\n```js\nvar obj = {\n\tcount: 0,\n\tcool: function coolFn() {\n\t\tvar self = this;\n\n\t\tif (self.count < 1) {\n\t\t\tsetTimeout( function timer(){\n\t\t\t\tself.count++;\n\t\t\t\tconsole.log( \"awesome?\" );\n\t\t\t}, 100 );\n\t\t}\n\t}\n};\n\nobj.cool(); // awesome?\n```\n\nWithout getting too much into the weeds here, the `var self = this` \"solution\" just dispenses with the whole problem of understanding and properly using `this` binding, and instead falls back to something we're perhaps more comfortable with: lexical scope. `self` becomes just an identifier that can be resolved via lexical scope and closure, and cares not what happened to the `this` binding along the way.\n\nPeople don't like writing verbose stuff, especially when they do it over and over again. 
So, a motivation of ES6 is to help alleviate these scenarios, and indeed, *fix* common idiom problems, such as this one.\n\nThe ES6 solution, the arrow-function, introduces a behavior called \"lexical this\".\n\n```js\nvar obj = {\n\tcount: 0,\n\tcool: function coolFn() {\n\t\tif (this.count < 1) {\n\t\t\tsetTimeout( () => { // arrow-function ftw?\n\t\t\t\tthis.count++;\n\t\t\t\tconsole.log( \"awesome?\" );\n\t\t\t}, 100 );\n\t\t}\n\t}\n};\n\nobj.cool(); // awesome?\n```\n\nThe short explanation is that arrow-functions do not behave at all like normal functions when it comes to their `this` binding. They discard all the normal rules for `this` binding, and instead take on the `this` value of their immediately enclosing lexical scope, whatever it is.\n\nSo, in that snippet, the arrow-function doesn't get its `this` unbound in some unpredictable way; it just \"inherits\" the `this` binding of the `cool()` function (which is correct if we invoke it as shown!).\n\nWhile this makes for shorter code, my perspective is that arrow-functions are really just codifying into the language syntax a common *mistake* of developers, which is to confuse and conflate \"this binding\" rules with \"lexical scope\" rules.\n\nPut another way: why go to the trouble and verbosity of using the `this` style coding paradigm, only to cut it off at the knees by mixing it with lexical references? It seems natural to embrace one approach or the other for any given piece of code, and not mix them in the same piece of code.\n\n**Note:** One other detraction from arrow-functions is that they are anonymous, not named.
See Chapter 3 for the reasons why anonymous functions are less desirable than named functions.\n\nA more appropriate approach, in my perspective, to this \"problem\", is to use and embrace the `this` mechanism correctly.\n\n```js\nvar obj = {\n\tcount: 0,\n\tcool: function coolFn() {\n\t\tif (this.count < 1) {\n\t\t\tsetTimeout( function timer(){\n\t\t\t\tthis.count++; // `this` is safe because of `bind(..)`\n\t\t\t\tconsole.log( \"more awesome\" );\n\t\t\t}.bind( this ), 100 ); // look, `bind()`!\n\t\t}\n\t}\n};\n\nobj.cool(); // more awesome\n```\n\nWhether you prefer the new lexical-this behavior of arrow-functions, or you prefer the tried-and-true `bind()`, it's important to note that arrow-functions are **not** just about less typing of \"function\".\n\nThey have an *intentional behavioral difference* that we should learn and understand, and if we so choose, leverage.\n\nNow that we fully understand lexical scoping (and closure!), understanding lexical-this should be a breeze!\n"
  },
  {
    "path": "scope & closures/apD.md",
    "content": "# You Don't Know JS: Scope & Closures\n# Appendix D: Acknowledgments\n\nI have many people to thank for making this book title and the overall series happen.\n\nFirst, I must thank my wife Christen Simpson, and my two kids Ethan and Emily, for putting up with Dad always pecking away at the computer. Even when not writing books, my obsession with JavaScript glues my eyes to the screen far more than it should. That time I borrow from my family is the reason these books can so deeply and completely explain JavaScript to you, the reader. I owe my family everything.\n\nI'd like to thank my editors at O'Reilly, namely Simon St.Laurent and Brian MacDonald, as well as the rest of the editorial and marketing staff. They are fantastic to work with, and have been especially accommodating during this experiment into \"open source\" book writing, editing, and production.\n\nThank you to the many folks who have participated in making this book series better by providing editorial suggestions and corrections, including Shelley Powers, Tim Ferro, Evan Borden, Forrest L. Norvell, Jennifer Davis, Jesse Harlin, and many others. A big thank you to Shane Hudson for writing the Foreword for this title.\n\nThank you to the countless folks in the community, including members of the TC39 committee, who have shared so much knowledge with the rest of us, and especially tolerated my incessant questions and explorations with patience and detail. 
John-David Dalton, Juriy \"kangax\" Zaytsev, Mathias Bynens, Axel Rauschmayer, Nicholas Zakas, Angus Croll, Reginald Braithwaite, Dave Herman, Brendan Eich, Allen Wirfs-Brock, Bradley Meck, Domenic Denicola, David Walsh, Tim Disney, Peter van der Zee, Andrea Giammarchi, Kit Cambridge, Eric Elliott, and so many others, I can't even scratch the surface.\n\nThe *You Don't Know JS* book series was born on Kickstarter, so I also wish to thank all my (nearly) 500 generous backers, without whom this book series could not have happened:\n\n> Jan Szpila, nokiko, Murali Krishnamoorthy, Ryan Joy, Craig Patchett, pdqtrader, Dale Fukami, ray hatfield, R0drigo Perez [Mx], Dan Petitt, Jack Franklin, Andrew Berry, Brian Grinstead, Rob Sutherland, Sergi Meseguer, Phillip Gourley, Mark Watson, Jeff Carouth, Alfredo Sumaran, Martin Sachse, Marcio Barrios, Dan, AimelyneM, Matt Sullivan, Delnatte Pierre-Antoine, Jake Smith, Eugen Tudorancea, Iris, David Trinh, simonstl, Ray Daly, Uros Gruber, Justin Myers, Shai Zonis, Mom & Dad, Devin Clark, Dennis Palmer, Brian Panahi Johnson, Josh Marshall, Marshall, Dennis Kerr, Matt Steele, Erik Slagter, Sacah, Justin Rainbow, Christian Nilsson, Delapouite, D.Pereira, Nicolas Hoizey, George V. 
Reilly, Dan Reeves, Bruno Laturner, Chad Jennings, Shane King, Jeremiah Lee Cohick, od3n, Stan Yamane, Marko Vucinic, Jim B, Stephen Collins, Ægir Þorsteinsson, Eric Pederson, Owain, Nathan Smith, Jeanetteurphy, Alexandre ELISÉ, Chris Peterson, Rik Watson, Luke Matthews, Justin Lowery, Morten Nielsen, Vernon Kesner, Chetan Shenoy, Paul Tregoing, Marc Grabanski, Dion Almaer, Andrew Sullivan, Keith Elsass, Tom Burke, Brian Ashenfelter, David Stuart, Karl Swedberg, Graeme, Brandon Hays, John Christopher, Gior, manoj reddy, Chad Smith, Jared Harbour, Minoru TODA, Chris Wigley, Daniel Mee, Mike, Handyface, Alex Jahraus, Carl Furrow, Rob Foulkrod, Max Shishkin, Leigh Penny Jr., Robert Ferguson, Mike van Hoenselaar, Hasse Schougaard, rajan venkataguru, Jeff Adams, Trae Robbins, Rolf Langenhuijzen, Jorge Antunes, Alex Koloskov, Hugh Greenish, Tim Jones, Jose Ochoa, Michael Brennan-White, Naga Harish Muvva, Barkóczi Dávid, Kitt Hodsden, Paul McGraw, Sascha Goldhofer, Andrew Metcalf, Markus Krogh, Michael Mathews, Matt Jared, Juanfran, Georgie Kirschner, Kenny Lee, Ted Zhang, Amit Pahwa, Inbal Sinai, Dan Raine, Schabse Laks, Michael Tervoort, Alexandre Abreu, Alan Joseph Williams, NicolasD, Cindy Wong, Reg Braithwaite, LocalPCGuy, Jon Friskics, Chris Merriman, John Pena, Jacob Katz, Sue Lockwood, Magnus Johansson, Jeremy Crapsey, Grzegorz Pawłowski, nico nuzzaci, Christine Wilks, Hans Bergren, charles montgomery, Ariel בר-לבב Fogel, Ivan Kolev, Daniel Campos, Hugh Wood, Christian Bradford, Frédéric Harper, Ionuţ Dan Popa, Jeff Trimble, Rupert Wood, Trey Carrico, Pancho Lopez, Joël kuijten, Tom A Marra, Jeff Jewiss, Jacob Rios, Paolo Di Stefano, Soledad Penades, Chris Gerber, Andrey Dolganov, Wil Moore III, Thomas Martineau, Kareem, Ben Thouret, Udi Nir, Morgan Laupies, jory carson-burson, Nathan L Smith, Eric Damon Walters, Derry Lozano-Hoyland, Geoffrey Wiseman, mkeehner, KatieK, Scott MacFarlane, Brian LaShomb, Adrien Mas, christopher ross, Ian Littman, Dan Atkinson, 
Elliot Jobe, Nick Dozier, Peter Wooley, John Hoover, dan, Martin A. Jackson, Héctor Fernando Hurtado, andy ennamorato, Paul Seltmann, Melissa Gore, Dave Pollard, Jack Smith, Philip Da Silva, Guy Israeli, @megalithic, Damian Crawford, Felix Gliesche, April Carter Grant, Heidi, jim tierney, Andrea Giammarchi, Nico Vignola, Don Jones, Chris Hartjes, Alex Howes, john gibbon, David J. Groom, BBox, Yu 'Dilys' Sun, Nate Steiner, Brandon Satrom, Brian Wyant, Wesley Hales, Ian Pouncey, Timothy Kevin Oxley, George Terezakis, sanjay raj, Jordan Harband, Marko McLion, Wolfgang Kaufmann, Pascal Peuckert, Dave Nugent, Markus Liebelt, Welling Guzman, Nick Cooley, Daniel Mesquita, Robert Syvarth, Chris Coyier, Rémy Bach, Adam Dougal, Alistair Duggin, David Loidolt, Ed Richer, Brian Chenault, GoldFire Studios, Carles Andrés, Carlos Cabo, Yuya Saito, roberto ricardo, Barnett Klane, Mike Moore, Kevin Marx, Justin Love, Joe Taylor, Paul Dijou, Michael Kohler, Rob Cassie, Mike Tierney, Cody Leroy Lindley, tofuji, Shimon Schwartz, Raymond, Luc De Brouwer, David Hayes, Rhys Brett-Bowen, Dmitry, Aziz Khoury, Dean, Scott Tolinski - Level Up, Clement Boirie, Djordje Lukic, Anton Kotenko, Rafael Corral, Philip Hurwitz, Jonathan Pidgeon, Jason Campbell, Joseph C., SwiftOne, Jan Hohner, Derick Bailey, getify, Daniel Cousineau, Chris Charlton, Eric Turner, David Turner, Joël Galeran, Dharma Vagabond, adam, Dirk van Bergen, dave ♥♫★ furf, Vedran Zakanj, Ryan McAllen, Natalie Patrice Tucker, Eric J. Bivona, Adam Spooner, Aaron Cavano, Kelly Packer, Eric J, Martin Drenovac, Emilis, Michael Pelikan, Scott F. 
Walter, Josh Freeman, Brandon Hudgeons, vijay chennupati, Bill Glennon, Robin R., Troy Forster, otaku_coder, Brad, Scott, Frederick Ostrander, Adam Brill, Seb Flippence, Michael Anderson, Jacob, Adam Randlett, Standard, Joshua Clanton, Sebastian Kouba, Chris Deck, SwordFire, Hannes Papenberg, Richard Woeber, hnzz, Rob Crowther, Jedidiah Broadbent, Sergey Chernyshev, Jay-Ar Jamon, Ben Combee, luciano bonachela, Mark Tomlinson, Kit Cambridge, Michael Melgares, Jacob Adams, Adrian Bruinhout, Bev Wieber, Scott Puleo, Thomas Herzog, April Leone, Daniel Mizieliński, Kees van Ginkel, Jon Abrams, Erwin Heiser, Avi Laviad, David newell, Jean-Francois Turcot, Niko Roberts, Erik Dana, Charles Neill, Aaron Holmes, Grzegorz Ziółkowski, Nathan Youngman, Timothy, Jacob Mather, Michael Allan, Mohit Seth, Ryan Ewing, Benjamin Van Treese, Marcelo Santos, Denis Wolf, Phil Keys, Chris Yung, Timo Tijhof, Martin Lekvall, Agendine, Greg Whitworth, Helen Humphrey, Dougal Campbell, Johannes Harth, Bruno Girin, Brian Hough, Darren Newton, Craig McPheat, Olivier Tille, Dennis Roethig, Mathias Bynens, Brendan Stromberger, sundeep, John Meyer, Ron Male, John F Croston III, gigante, Carl Bergenhem, B.J. 
May, Rebekah Tyler, Ted Foxberry, Jordan Reese, Terry Suitor, afeliz, Tom Kiefer, Darragh Duffy, Kevin Vanderbeken, Andy Pearson, Simon Mac Donald, Abid Din, Chris Joel, Tomas Theunissen, David Dick, Paul Grock, Brandon Wood, John Weis, dgrebb, Nick Jenkins, Chuck Lane, Johnny Megahan, marzsman, Tatu Tamminen, Geoffrey Knauth, Alexander Tarmolov, Jeremy Tymes, Chad Auld, Sean Parmelee, Rob Staenke, Dan Bender, Yannick derwa, Joshua Jones, Geert Plaisier, Tom LeZotte, Christen Simpson, Stefan Bruvik, Justin Falcone, Carlos Santana, Michael Weiss, Pablo Villoslada, Peter deHaan, Dimitris Iliopoulos, seyDoggy, Adam Jordens, Noah Kantrowitz, Amol M, Matthew Winnard, Dirk Ginader, Phinam Bui, David Rapson, Andrew Baxter, Florian Bougel, Michael George, Alban Escalier, Daniel Sellers, Sasha Rudan, John Green, Robert Kowalski, David I. Teixeira (@ditma), Charles Carpenter, Justin Yost, Sam S, Denis Ciccale, Kevin Sheurs, Yannick Croissant, Pau Fracés, Stephen McGowan, Shawn Searcy, Chris Ruppel, Kevin Lamping, Jessica Campbell, Christopher Schmitt, Sablons, Jonathan Reisdorf, Bunni Gek, Teddy Huff, Michael Mullany, Michael Fürstenberg, Carl Henderson, Rick Yoesting, Scott Nichols, Hernán Ciudad, Andrew Maier, Mike Stapp, Jesse Shawl, Sérgio Lopes, jsulak, Shawn Price, Joel Clermont, Chris Ridmann, Sean Timm, Jason Finch, Aiden Montgomery, Elijah Manor, Derek Gathright, Jesse Harlin, Dillon Curry, Courtney Myers, Diego Cadenas, Arne de Bree, João Paulo Dubas, James Taylor, Philipp Kraeutli, Mihai Păun, Sam Gharegozlou, joshjs, Matt Murchison, Eric Windham, Timo Behrmann, Andrew Hall, joshua price, Théophile Villard\n\nThis book series is being produced in an open source fashion, including editing and production. We owe GitHub a debt of gratitude for making that sort of thing possible for the community!\n\nThank you again to all the countless folks I didn't name but who I nonetheless owe thanks. 
May this book series be \"owned\" by all of us and serve to contribute to increasing awareness and understanding of the JavaScript language, to the benefit of all current and future community contributors.\n"
  },
  {
    "path": "scope & closures/ch1.md",
    "content": "# You Don't Know JS: Scope & Closures\n# Chapter 1: What is Scope?\n\nOne of the most fundamental paradigms of nearly all programming languages is the ability to store values in variables, and later retrieve or modify those values. In fact, the ability to store values and pull values out of variables is what gives a program *state*.\n\nWithout such a concept, a program could perform some tasks, but they would be extremely limited and not terribly interesting.\n\nBut the inclusion of variables into our program begets the most interesting questions we will now address: where do those variables *live*? In other words, where are they stored? And, most importantly, how does our program find them when it needs them?\n\nThese questions speak to the need for a well-defined set of rules for storing variables in some location, and for finding those variables at a later time. We'll call that set of rules: *Scope*.\n\nBut, where and how do these *Scope* rules get set?\n\n## Compiler Theory\n\nIt may be self-evident, or it may be surprising, depending on your level of interaction with various languages, but despite the fact that JavaScript falls under the general category of \"dynamic\" or \"interpreted\" languages, it is in fact a compiled language. It is *not* compiled well in advance, as are many traditionally-compiled languages, nor are the results of compilation portable among various distributed systems.\n\nBut, nevertheless, the JavaScript engine performs many of the same steps, albeit in more sophisticated ways than we may commonly be aware, of any traditional language-compiler.\n\nIn a traditional compiled-language process, a chunk of source code, your program, will undergo typically three steps *before* it is executed, roughly called \"compilation\":\n\n1. **Tokenizing/Lexing:** breaking up a string of characters into meaningful (to the language) chunks, called tokens. For instance, consider the program: `var a = 2;`. 
This program would likely be broken up into the following tokens: `var`, `a`, `=`, `2`, and `;`. Whitespace may or may not be persisted as a token, depending on whether it's meaningful or not.\n\n    **Note:** The difference between tokenizing and lexing is subtle and academic, but it centers on whether or not these tokens are identified in a *stateless* or *stateful* way. Put simply, if the tokenizer were to invoke stateful parsing rules to figure out whether `a` should be considered a distinct token or just part of another token, *that* would be **lexing**.\n\n2. **Parsing:** taking a stream (array) of tokens and turning it into a tree of nested elements, which collectively represent the grammatical structure of the program. This tree is called an \"AST\" (<b>A</b>bstract <b>S</b>yntax <b>T</b>ree).\n\n    The tree for `var a = 2;` might start with a top-level node called `VariableDeclaration`, with a child node called `Identifier` (whose value is `a`), and another child called `AssignmentExpression` which itself has a child called `NumericLiteral` (whose value is `2`).\n\n3. **Code-Generation:** the process of taking an AST and turning it into executable code. This part varies greatly depending on the language, the platform it's targeting, etc.\n\n    So, rather than get mired in details, we'll just handwave and say that there's a way to take our above described AST for `var a = 2;` and turn it into a set of machine instructions to actually *create* a variable called `a` (including reserving memory, etc.), and then store a value into `a`.\n\n    **Note:** The details of how the engine manages system resources are deeper than we will dig, so we'll just take it for granted that the engine is able to create and store variables as needed.\n\nThe JavaScript engine is vastly more complex than *just* those three steps, as are most other language compilers. 
For instance, in the process of parsing and code-generation, there are certainly steps to optimize the performance of the execution, including collapsing redundant elements, etc.\n\nSo, I'm painting only with broad strokes here. But I think you'll see shortly why *these* details we *do* cover, even at a high level, are relevant.\n\nFor one thing, JavaScript engines don't get the luxury (like other language compilers) of having plenty of time to optimize, because JavaScript compilation doesn't happen in a build step ahead of time, as with other languages.\n\nFor JavaScript, the compilation that occurs happens, in many cases, mere microseconds (or less!) before the code is executed. To ensure the fastest performance, JS engines use all kinds of tricks (like JITs, which lazy compile and even hot re-compile, etc.) which are well beyond the \"scope\" of our discussion here.\n\nLet's just say, for simplicity's sake, that any snippet of JavaScript has to be compiled before (usually *right* before!) it's executed. So, the JS compiler will take the program `var a = 2;` and compile it *first*, and then be ready to execute it, usually right away.\n\n## Understanding Scope\n\nThe way we will approach learning about scope is to think of the process in terms of a conversation. But, *who* is having the conversation?\n\n### The Cast\n\nLet's meet the cast of characters that interact to process the program `var a = 2;`, so we understand their conversations that we'll listen in on shortly:\n\n1. *Engine*: responsible for start-to-finish compilation and execution of our JavaScript program.\n\n2. *Compiler*: one of *Engine*'s friends; handles all the dirty work of parsing and code-generation (see previous section).\n\n3. 
*Scope*: another friend of *Engine*; collects and maintains a look-up list of all the declared identifiers (variables), and enforces a strict set of rules as to how these are accessible to currently executing code.\n\nFor you to *fully understand* how JavaScript works, you need to begin to *think* like *Engine* (and friends) think, ask the questions they ask, and answer those questions the same.\n\n### Back & Forth\n\nWhen you see the program `var a = 2;`, you most likely think of that as one statement. But that's not how our new friend *Engine* sees it. In fact, *Engine* sees two distinct statements, one which *Compiler* will handle during compilation, and one which *Engine* will handle during execution.\n\nSo, let's break down how *Engine* and friends will approach the program `var a = 2;`.\n\nThe first thing *Compiler* will do with this program is perform lexing to break it down into tokens, which it will then parse into a tree. But when *Compiler* gets to code-generation, it will treat this program somewhat differently than perhaps assumed.\n\nA reasonable assumption would be that *Compiler* will produce code that could be summed up by this pseudo-code: \"Allocate memory for a variable, label it `a`, then stick the value `2` into that variable.\" Unfortunately, that's not quite accurate.\n\n*Compiler* will instead proceed as:\n\n1. Encountering `var a`, *Compiler* asks *Scope* to see if a variable `a` already exists for that particular scope collection. If so, *Compiler* ignores this declaration and moves on. Otherwise, *Compiler* asks *Scope* to declare a new variable called `a` for that scope collection.\n\n2. *Compiler* then produces code for *Engine* to later execute, to handle the `a = 2` assignment. The code *Engine* runs will first ask *Scope* if there is a variable called `a` accessible in the current scope collection. If so, *Engine* uses that variable. 
If not, *Engine* looks *elsewhere* (see nested *Scope* section below).\n\nIf *Engine* eventually finds a variable, it assigns the value `2` to it. If not, *Engine* will raise its hand and yell out an error!\n\nTo summarize: two distinct actions are taken for a variable assignment: First, *Compiler* declares a variable (if not previously declared in the current scope), and second, when executing, *Engine* looks up the variable in *Scope* and assigns to it, if found.\n\n### Compiler Speak\n\nWe need a little bit more compiler terminology to proceed further with understanding.\n\nWhen *Engine* executes the code that *Compiler* produced for step (2), it has to look-up the variable `a` to see if it has been declared, and this look-up is consulting *Scope*. But the type of look-up *Engine* performs affects the outcome of the look-up.\n\nIn our case, it is said that *Engine* would be performing an \"LHS\" look-up for the variable `a`. The other type of look-up is called \"RHS\".\n\nI bet you can guess what the \"L\" and \"R\" mean. These terms stand for \"Left-hand Side\" and \"Right-hand Side\".\n\nSide... of what? **Of an assignment operation.**\n\nIn other words, an LHS look-up is done when a variable appears on the left-hand side of an assignment operation, and an RHS look-up is done when a variable appears on the right-hand side of an assignment operation.\n\nActually, let's be a little more precise. An RHS look-up is indistinguishable, for our purposes, from simply a look-up of the value of some variable, whereas the LHS look-up is trying to find the variable container itself, so that it can assign. 
In this way, RHS doesn't *really* mean \"right-hand side of an assignment\" per se, it just, more accurately, means \"not left-hand side\".\n\nBeing slightly glib for a moment, you could also think \"RHS\" instead means \"retrieve his/her source (value)\", implying that RHS means \"go get the value of...\".\n\nLet's dig into that deeper.\n\nWhen I say:\n\n```js\nconsole.log( a );\n```\n\nThe reference to `a` is an RHS reference, because nothing is being assigned to `a` here. Instead, we're looking-up to retrieve the value of `a`, so that the value can be passed to `console.log(..)`.\n\nBy contrast:\n\n```js\na = 2;\n```\n\nThe reference to `a` here is an LHS reference, because we don't actually care what the current value is, we simply want to find the variable as a target for the `= 2` assignment operation.\n\n**Note:** LHS and RHS meaning \"left/right-hand side of an assignment\" doesn't necessarily literally mean \"left/right side of the `=` assignment operator\". There are several other ways that assignments happen, and so it's better to conceptually think about it as: \"who's the target of the assignment (LHS)\" and \"who's the source of the assignment (RHS)\".\n\nConsider this program, which has both LHS and RHS references:\n\n```js\nfunction foo(a) {\n\tconsole.log( a ); // 2\n}\n\nfoo( 2 );\n```\n\nThe last line that invokes `foo(..)` as a function call requires an RHS reference to `foo`, meaning, \"go look-up the value of `foo`, and give it to me.\" Moreover, `(..)` means the value of `foo` should be executed, so it'd better actually be a function!\n\nThere's a subtle but important assignment here. **Did you spot it?**\n\nYou may have missed the implied `a = 2` in this code snippet. It happens when the value `2` is passed as an argument to the `foo(..)` function, in which case the `2` value is **assigned** to the parameter `a`. 
To (implicitly) assign to parameter `a`, an LHS look-up is performed.\n\nThere's also an RHS reference for the value of `a`, and that resulting value is passed to `console.log(..)`. `console.log(..)` needs a reference to execute. It's an RHS look-up for the `console` object, then a property-resolution occurs to see if it has a method called `log`.\n\nFinally, we can conceptualize that there's an LHS/RHS exchange of passing the value `2` (by way of variable `a`'s RHS look-up) into `log(..)`. Inside of the native implementation of `log(..)`, we can assume it has parameters, the first of which (perhaps called `arg1`) has an LHS reference look-up, before assigning `2` to it.\n\n**Note:** You might be tempted to conceptualize the function declaration `function foo(a) {...` as a normal variable declaration and assignment, such as `var foo` and `foo = function(a){...`. In so doing, it would be tempting to think of this function declaration as involving an LHS look-up.\n\nHowever, the subtle but important difference is that *Compiler* handles both the declaration and the value definition during code-generation, such that when *Engine* is executing code, there's no processing necessary to \"assign\" a function value to `foo`. Thus, it's not really appropriate to think of a function declaration as an LHS look-up assignment in the way we're discussing them here.\n\n### Engine/Scope Conversation\n\n```js\nfunction foo(a) {\n\tconsole.log( a ); // 2\n}\n\nfoo( 2 );\n```\n\nLet's imagine the above exchange (which processes this code snippet) as a conversation. The conversation would go a little something like this:\n\n> ***Engine***: Hey *Scope*, I have an RHS reference for `foo`. Ever heard of it?\n\n> ***Scope***: Why yes, I have. *Compiler* declared it just a second ago. He's a function. Here you go.\n\n> ***Engine***: Great, thanks! 
OK, I'm executing `foo`.\n\n> ***Engine***: Hey, *Scope*, I've got an LHS reference for `a`, ever heard of it?\n\n> ***Scope***: Why yes, I have. *Compiler* declared it as a formal parameter to `foo` just recently. Here you go.\n\n> ***Engine***: Helpful as always, *Scope*. Thanks again. Now, time to assign `2` to `a`.\n\n> ***Engine***: Hey, *Scope*, sorry to bother you again. I need an RHS look-up for `console`. Ever heard of it?\n\n> ***Scope***: No problem, *Engine*, this is what I do all day. Yes, I've got `console`. He's built-in. Here ya go.\n\n> ***Engine***: Perfect. Looking up `log(..)`. OK, great, it's a function.\n\n> ***Engine***: Yo, *Scope*. Can you help me out with an RHS reference to `a`. I think I remember it, but just want to double-check.\n\n> ***Scope***: You're right, *Engine*. Same guy, hasn't changed. Here ya go.\n\n> ***Engine***: Cool. Passing the value of `a`, which is `2`, into `log(..)`.\n\n> ...\n\n### Quiz\n\nCheck your understanding so far. Make sure to play the part of *Engine* and have a \"conversation\" with the *Scope*:\n\n```js\nfunction foo(a) {\n\tvar b = a;\n\treturn a + b;\n}\n\nvar c = foo( 2 );\n```\n\n1. Identify all the LHS look-ups (there are 3!).\n\n2. Identify all the RHS look-ups (there are 4!).\n\n**Note:** See the chapter review for the quiz answers!\n\n## Nested Scope\n\nWe said that *Scope* is a set of rules for looking up variables by their identifier name. There's usually more than one *Scope* to consider, however.\n\nJust as a block or function is nested inside another block or function, scopes are nested inside other scopes. 
So, if a variable cannot be found in the immediate scope, *Engine* consults the next outer containing scope, continuing until found or until the outermost (aka, global) scope has been reached.\n\nConsider:\n\n```js\nfunction foo(a) {\n\tconsole.log( a + b );\n}\n\nvar b = 2;\n\nfoo( 2 ); // 4\n```\n\nThe RHS reference for `b` cannot be resolved inside the function `foo`, but it can be resolved in the *Scope* surrounding it (in this case, the global).\n\nSo, revisiting the conversations between *Engine* and *Scope*, we'd overhear:\n\n> ***Engine***: \"Hey, *Scope* of `foo`, ever heard of `b`? Got an RHS reference for it.\"\n\n> ***Scope***: \"Nope, never heard of it. Go fish.\"\n\n> ***Engine***: \"Hey, *Scope* outside of `foo`, oh you're the global *Scope*, ok cool. Ever heard of `b`? Got an RHS reference for it.\"\n\n> ***Scope***: \"Yep, sure have. Here ya go.\"\n\nThe simple rules for traversing nested *Scope*: *Engine* starts at the currently executing *Scope*, looks for the variable there, then if not found, keeps going up one level, and so on. If the outermost global scope is reached, the search stops, whether it finds the variable or not.\n\n### Building on Metaphors\n\nTo visualize the process of nested *Scope* resolution, I want you to think of this tall building.\n\n<img src=\"fig1.png\" width=\"250\">\n\nThe building represents our program's nested *Scope* rule set. The first floor of the building represents your currently executing *Scope*, wherever you are. The top level of the building is the global *Scope*.\n\nYou resolve LHS and RHS references by looking on your current floor, and if you don't find it, taking the elevator to the next floor, looking there, then the next, and so on. Once you get to the top floor (the global *Scope*), you either find what you're looking for, or you don't. 
But you have to stop regardless.\n\n## Errors\n\nWhy does it matter whether we call it LHS or RHS?\n\nBecause these two types of look-ups behave differently in the circumstance where the variable has not yet been declared (is not found in any consulted *Scope*).\n\nConsider:\n\n```js\nfunction foo(a) {\n\tconsole.log( a + b );\n\tb = a;\n}\n\nfoo( 2 );\n```\n\nWhen the RHS look-up occurs for `b` the first time, it will not be found. This is said to be an \"undeclared\" variable, because it is not found in the scope.\n\nIf an RHS look-up fails to ever find a variable, anywhere in the nested *Scope*s, this results in a `ReferenceError` being thrown by the *Engine*. It's important to note that the error is of the type `ReferenceError`.\n\nBy contrast, if the *Engine* is performing an LHS look-up and arrives at the top floor (global *Scope*) without finding it, and if the program is not running in \"Strict Mode\" [^note-strictmode], then the global *Scope* will create a new variable of that name **in the global scope**, and hand it back to *Engine*.\n\n*\"No, there wasn't one before, but I was helpful and created one for you.\"*\n\n\"Strict Mode\" [^note-strictmode], which was added in ES5, has a number of different behaviors from normal/relaxed/lazy mode. One such behavior is that it disallows the automatic/implicit global variable creation. 
In that case, there would be no global *Scope*'d variable to hand back from an LHS look-up, and *Engine* would throw a `ReferenceError` similarly to the RHS case.\n\nNow, if a variable is found for an RHS look-up, but you try to do something with its value that is impossible, such as trying to execute-as-function a non-function value, or reference a property on a `null` or `undefined` value, then *Engine* throws a different kind of error, called a `TypeError`.\n\n`ReferenceError` is *Scope* resolution-failure related, whereas `TypeError` implies that *Scope* resolution was successful, but that there was an illegal/impossible action attempted against the result.\n\n## Review (TL;DR)\n\nScope is the set of rules that determines where and how a variable (identifier) can be looked-up. This look-up may be for the purposes of assigning to the variable, which is an LHS (left-hand-side) reference, or it may be for the purposes of retrieving its value, which is an RHS (right-hand-side) reference.\n\nLHS references result from assignment operations. *Scope*-related assignments can occur either with the `=` operator or by passing arguments to (assign to) function parameters.\n\nThe JavaScript *Engine* first compiles code before it executes, and in so doing, it splits up statements like `var a = 2;` into two separate steps:\n\n1. First, `var a` to declare it in that *Scope*. This is performed at the beginning, before code execution.\n\n2. Later, `a = 2` to look up the variable (LHS reference) and assign to it if found.\n\nBoth LHS and RHS reference look-ups start at the currently executing *Scope*, and if need be (that is, they don't find what they're looking for there), they work their way up the nested *Scope*, one scope (floor) at a time, looking for the identifier, until they get to the global (top floor) and stop, and either find it, or don't.\n\nUnfulfilled RHS references result in `ReferenceError`s being thrown. 
Unfulfilled LHS references result in an automatic, implicitly-created global of that name (if not in \"Strict Mode\" [^note-strictmode]), or a `ReferenceError` (if in \"Strict Mode\" [^note-strictmode]).\n\n### Quiz Answers\n\n```js\nfunction foo(a) {\n\tvar b = a;\n\treturn a + b;\n}\n\nvar c = foo( 2 );\n```\n\n1. Identify all the LHS look-ups (there are 3!).\n\n\t**`c = ..`, `a = 2` (implicit param assignment) and `b = ..`**\n\n2. Identify all the RHS look-ups (there are 4!).\n\n    **`foo(2..`, `= a;`, `a + ..` and `.. + b`**\n\n\n[^note-strictmode]: MDN: [Strict Mode](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions_and_function_scope/Strict_mode)\n"
  },
  {
    "path": "scope & closures/ch2.md",
    "content": "# You Don't Know JS: Scope & Closures\n# Chapter 2: Lexical Scope\n\nIn Chapter 1, we defined \"scope\" as the set of rules that govern how the *Engine* can look up a variable by its identifier name and find it, either in the current *Scope*, or in any of the *Nested Scopes* it's contained within.\n\nThere are two predominant models for how scope works. The first of these is by far the most common, used by the vast majority of programming languages. It's called **Lexical Scope**, and we will examine it in-depth. The other model, which is still used by some languages (such as Bash scripting, some modes in Perl, etc.) is called **Dynamic Scope**.\n\nDynamic Scope is covered in Appendix A. I mention it here only to provide a contrast with Lexical Scope, which is the scope model that JavaScript employs.\n\n## Lex-time\n\nAs we discussed in Chapter 1, the first traditional phase of a standard language compiler is called lexing (aka, tokenizing). If you recall, the lexing process examines a string of source code characters and assigns semantic meaning to the tokens as a result of some stateful parsing.\n\nIt is this concept which provides the foundation to understand what lexical scope is and where the name comes from.\n\nTo define it somewhat circularly, lexical scope is scope that is defined at lexing time. In other words, lexical scope is based on where variables and blocks of scope are authored, by you, at write time, and thus is (mostly) set in stone by the time the lexer processes your code.\n\n**Note:** We will see in a little bit there are some ways to cheat lexical scope, thereby modifying it after the lexer has passed by, but these are frowned upon. 
It is considered best practice to treat lexical scope as, in fact, lexical-only, and thus entirely author-time in nature.\n\nLet's consider this block of code:\n\n```js\nfunction foo(a) {\n\n\tvar b = a * 2;\n\n\tfunction bar(c) {\n\t\tconsole.log( a, b, c );\n\t}\n\n\tbar(b * 3);\n}\n\nfoo( 2 ); // 2 4 12\n```\n\nThere are three nested scopes inherent in this code example. It may be helpful to think about these scopes as bubbles inside of each other.\n\n<img src=\"fig2.png\" width=\"500\">\n\n**Bubble 1** encompasses the global scope, and has just one identifier in it: `foo`.\n\n**Bubble 2** encompasses the scope of `foo`, which includes the three identifiers: `a`, `bar` and `b`.\n\n**Bubble 3** encompasses the scope of `bar`, and it includes just one identifier: `c`.\n\nScope bubbles are defined by where the blocks of scope are written, which one is nested inside the other, etc. In the next chapter, we'll discuss different units of scope, but for now, let's just assume that each function creates a new bubble of scope.\n\nThe bubble for `bar` is entirely contained within the bubble for `foo`, because (and only because) that's where we chose to define the function `bar`.\n\nNotice that these nested bubbles are strictly nested. We're not talking about Venn diagrams where the bubbles can cross boundaries. In other words, no bubble for some function can simultaneously exist (partially) inside two other outer scope bubbles, just as no function can partially be inside each of two parent functions.\n\n### Look-ups\n\nThe structure and relative placement of these scope bubbles fully explains to the *Engine* all the places it needs to look to find an identifier.\n\nIn the above code snippet, the *Engine* executes the `console.log(..)` statement and goes looking for the three referenced variables `a`, `b`, and `c`. It first starts with the innermost scope bubble, the scope of the `bar(..)` function. 
It won't find `a` there, so it goes up one level, out to the next nearest scope bubble, the scope of `foo(..)`. It finds `a` there, and so it uses that `a`. Same thing for `b`. But `c`, it does find inside of `bar(..)`.\n\nHad there been a `c` both inside of `bar(..)` and inside of `foo(..)`, the `console.log(..)` statement would have found and used the one in `bar(..)`, never getting to the one in `foo(..)`.\n\n**Scope look-up stops once it finds the first match**. The same identifier name can be specified at multiple layers of nested scope, which is called \"shadowing\" (the inner identifier \"shadows\" the outer identifier). Regardless of shadowing, scope look-up always starts at the innermost scope being executed at the time, and works its way outward/upward until the first match, and stops.\n\n**Note:** Global variables are also automatically properties of the global object (`window` in browsers, etc.), so it *is* possible to reference a global variable not directly by its lexical name, but instead indirectly as a property reference of the global object.\n\n```js\nwindow.a\n```\n\nThis technique gives access to a global variable which would otherwise be inaccessible due to it being shadowed. However, non-global shadowed variables cannot be accessed.\n\nNo matter *where* a function is invoked from, or even *how* it is invoked, its lexical scope is **only** defined by where the function was declared.\n\nThe lexical scope look-up process *only* applies to first-class identifiers, such as the `a`, `b`, and `c`. 
If you had a reference to `foo.bar.baz` in a piece of code, the lexical scope look-up would apply to finding the `foo` identifier, but once it locates that variable, object property-access rules take over to resolve the `bar` and `baz` properties, respectively.\n\n## Cheating Lexical\n\nIf lexical scope is defined only by where a function is declared, which is entirely an author-time decision, how could there possibly be a way to \"modify\" (aka, cheat) lexical scope at run-time?\n\nJavaScript has two such mechanisms. Both of them are equally frowned-upon in the wider community as bad practices to use in your code. But the typical arguments against them are often missing the most important point: **cheating lexical scope leads to poorer performance.**\n\nBefore I explain the performance issue, though, let's look at how these two mechanisms work.\n\n### `eval`\n\nThe `eval(..)` function in JavaScript takes a string as an argument, and treats the contents of the string as if it had actually been authored code at that point in the program. In other words, you can programmatically generate code inside of your authored code, and run the generated code as if it had been there at author time.\n\nEvaluating `eval(..)` (pun intended) in that light, it should be clear how `eval(..)` allows you to modify the lexical scope environment by cheating and pretending that author-time (aka, lexical) code was there all along.\n\nOn subsequent lines of code after an `eval(..)` has executed, the *Engine* will not \"know\" or \"care\" that the previous code in question was dynamically interpreted and thus modified the lexical scope environment. 
The *Engine* will simply perform its lexical scope look-ups as it always does.\n\nConsider the following code:\n\n```js\nfunction foo(str, a) {\n\teval( str ); // cheating!\n\tconsole.log( a, b );\n}\n\nvar b = 2;\n\nfoo( \"var b = 3;\", 1 ); // 1 3\n```\n\nThe string `\"var b = 3;\"` is treated, at the point of the `eval(..)` call, as code that was there all along. Because that code happens to declare a new variable `b`, it modifies the existing lexical scope of `foo(..)`. In fact, as mentioned above, this code actually creates variable `b` inside of `foo(..)` that shadows the `b` that was declared in the outer (global) scope.\n\nWhen the `console.log(..)` call occurs, it finds both `a` and `b` in the scope of `foo(..)`, and never finds the outer `b`. Thus, we print out \"1 3\" instead of \"1 2\" as would have normally been the case.\n\n**Note:** In this example, for simplicity's sake, the string of \"code\" we pass in was a fixed literal. But it could easily have been programmatically created by adding characters together based on your program's logic. `eval(..)` is usually used to execute dynamically created code, as dynamically evaluating essentially static code from a string literal would provide no real benefit to just authoring the code directly.\n\nBy default, if a string of code that `eval(..)` executes contains one or more declarations (either variables or functions), this action modifies the existing lexical scope in which the `eval(..)` resides. Technically, `eval(..)` can be invoked \"indirectly\", through various tricks (beyond our discussion here), which causes it to instead execute in the context of the global scope, thus modifying it. 
But in either case, `eval(..)` can at runtime modify an author-time lexical scope.\n\n**Note:** `eval(..)` when used in a strict-mode program operates in its own lexical scope, which means declarations made inside of the `eval()` do not actually modify the enclosing scope.\n\n```js\nfunction foo(str) {\n   \"use strict\";\n   eval( str );\n   console.log( a ); // ReferenceError: a is not defined\n}\n\nfoo( \"var a = 2\" );\n```\n\nThere are other facilities in JavaScript which amount to a very similar effect to `eval(..)`. `setTimeout(..)` and `setInterval(..)` *can* take a string for their respective first argument, the contents of which are `eval`uated as the code of a dynamically-generated function. This is old, legacy behavior and long-since deprecated. Don't do it!\n\nThe `new Function(..)` function constructor similarly takes a string of code in its **last** argument to turn into a dynamically-generated function (the first argument(s), if any, are the named parameters for the new function). This function-constructor syntax is slightly safer than `eval(..)`, but it should still be avoided in your code.\n\nThe use-cases for dynamically generating code inside your program are incredibly rare, as the performance degradations are almost never worth the capability.\n\n### `with`\n\nThe other frowned-upon (and now deprecated!) feature in JavaScript which cheats lexical scope is the `with` keyword. 
There are multiple valid ways that `with` can be explained, but I will choose here to explain it from the perspective of how it interacts with and affects lexical scope.\n\n`with` is typically explained as a short-hand for making multiple property references against an object *without* repeating the object reference itself each time.\n\nFor example:\n\n```js\nvar obj = {\n\ta: 1,\n\tb: 2,\n\tc: 3\n};\n\n// more \"tedious\" to repeat \"obj\"\nobj.a = 2;\nobj.b = 3;\nobj.c = 4;\n\n// \"easier\" short-hand\nwith (obj) {\n\ta = 3;\n\tb = 4;\n\tc = 5;\n}\n```\n\nHowever, there's much more going on here than just a convenient short-hand for object property access. Consider:\n\n```js\nfunction foo(obj) {\n\twith (obj) {\n\t\ta = 2;\n\t}\n}\n\nvar o1 = {\n\ta: 3\n};\n\nvar o2 = {\n\tb: 3\n};\n\nfoo( o1 );\nconsole.log( o1.a ); // 2\n\nfoo( o2 );\nconsole.log( o2.a ); // undefined\nconsole.log( a ); // 2 -- Oops, leaked global!\n```\n\nIn this code example, two objects `o1` and `o2` are created. One has an `a` property, and the other does not. The `foo(..)` function takes an object reference `obj` as an argument, and calls `with (obj) { .. }` on the reference. Inside the `with` block, we make what appears to be a normal lexical reference to a variable `a`, an LHS reference in fact (see Chapter 1), to assign to it the value of `2`.\n\nWhen we pass in `o1`, the `a = 2` assignment finds the property `o1.a` and assigns it the value `2`, as reflected in the subsequent `console.log(o1.a)` statement. However, when we pass in `o2`, since it does not have an `a` property, no such property is created, and `o2.a` remains `undefined`.\n\nBut then we note a peculiar side-effect, the fact that a global variable `a` was created by the `a = 2` assignment. 
How can this be?\n\nThe `with` statement takes an object, one which has zero or more properties, and **treats that object as if *it* is a wholly separate lexical scope**, and thus the object's properties are treated as lexically defined identifiers in that \"scope\".\n\n**Note:** Even though a `with` block treats an object like a lexical scope, a normal `var` declaration inside that `with` block will not be scoped to that `with` block, but instead the containing function scope.\n\nWhile the `eval(..)` function can modify existing lexical scope if it takes a string of code with one or more declarations in it, the `with` statement actually creates a **whole new lexical scope** out of thin air, from the object you pass to it.\n\nUnderstood in this way, the \"scope\" declared by the `with` statement when we passed in `o1` was `o1`, and that \"scope\" had an \"identifier\" in it which corresponds to the `o1.a` property. But when we used `o2` as the \"scope\", it had no such `a` \"identifier\" in it, and so the normal rules of LHS identifier look-up (see Chapter 1) occurred.\n\nNeither the \"scope\" of `o2`, nor the scope of `foo(..)`, nor the global scope even, has an `a` identifier to be found, so when `a = 2` is executed, it results in the automatic-global being created (since we're in non-strict mode).\n\nIt is a strange sort of mind-bending thought to see `with` turning, at runtime, an object and its properties into a \"scope\" *with* \"identifiers\". But that is the clearest explanation I can give for the results we see.\n\n**Note:** In addition to being a bad idea to use, both `eval(..)` and `with` are affected (restricted) by Strict Mode. 
`with` is outright disallowed, whereas various forms of indirect or unsafe `eval(..)` are disallowed while retaining the core functionality.\n\n### Performance\n\nBoth `eval(..)` and `with` cheat the otherwise author-time defined lexical scope by modifying or creating new lexical scope at runtime.\n\nSo, what's the big deal, you ask? If they offer more sophisticated functionality and coding flexibility, aren't these *good* features? **No.**\n\nThe JavaScript *Engine* has a number of performance optimizations that it performs during the compilation phase. Some of these boil down to being able to essentially statically analyze the code as it lexes, and pre-determine where all the variable and function declarations are, so that it takes less effort to resolve identifiers during execution.\n\nBut if the *Engine* finds an `eval(..)` or `with` in the code, it essentially has to *assume* that all its awareness of identifier location may be invalid, because it cannot know at lexing time exactly what code you may pass to `eval(..)` to modify the lexical scope, or the contents of the object you may pass to `with` to create a new lexical scope to be consulted.\n\nIn other words, in the pessimistic sense, most of those optimizations it *would* make are pointless if `eval(..)` or `with` are present, so it simply doesn't perform the optimizations *at all*.\n\nYour code will almost certainly tend to run slower simply by the fact that you include an `eval(..)` or `with` anywhere in the code. No matter how smart the *Engine* may be about trying to limit the side-effects of these pessimistic assumptions, **there's no getting around the fact that without the optimizations, code runs slower.**\n\n## Review (TL;DR)\n\nLexical scope means that scope is defined by author-time decisions of where functions are declared. 
The lexing phase of compilation is essentially able to know where and how all identifiers are declared, and thus predict how they will be looked-up during execution.\n\nTwo mechanisms in JavaScript can \"cheat\" lexical scope: `eval(..)` and `with`. The former can modify existing lexical scope (at runtime) by evaluating a string of \"code\" which has one or more declarations in it. The latter essentially creates a whole new lexical scope (again, at runtime) by treating an object reference *as* a \"scope\" and that object's properties as scoped identifiers.\n\nThe downside to these mechanisms is that they defeat the *Engine*'s ability to perform compile-time optimizations regarding scope look-up, because the *Engine* has to assume pessimistically that such optimizations will be invalid. Code *will* run slower as a result of using either feature. **Don't use them.**\n"
  },
  {
    "path": "scope & closures/ch3.md",
    "content": "# You Don't Know JS: Scope & Closures\n# Chapter 3: Function vs. Block Scope\n\nAs we explored in Chapter 2, scope consists of a series of \"bubbles\" that each act as a container or bucket, in which identifiers (variables, functions) are declared. These bubbles nest neatly inside each other, and this nesting is defined at author-time.\n\nBut what exactly makes a new bubble? Is it only the function? Can other structures in JavaScript create bubbles of scope?\n\n## Scope From Functions\n\nThe most common answer to those questions is that JavaScript has function-based scope. That is, each function you declare creates a bubble for itself, but no other structures create their own scope bubbles. As we'll see in just a little bit, this is not quite true.\n\nBut first, let's explore function scope and its implications.\n\nConsider this code:\n\n```js\nfunction foo(a) {\n\tvar b = 2;\n\n\t// some code\n\n\tfunction bar() {\n\t\t// ...\n\t}\n\n\t// more code\n\n\tvar c = 3;\n}\n```\n\nIn this snippet, the scope bubble for `foo(..)` includes identifiers `a`, `b`, `c` and `bar`. **It doesn't matter** *where* in the scope a declaration appears, the variable or function belongs to the containing scope bubble, regardless. We'll explore how exactly *that* works in the next chapter.\n\n`bar(..)` has its own scope bubble. So does the global scope, which has just one identifier attached to it: `foo`.\n\nBecause `a`, `b`, `c`, and `bar` all belong to the scope bubble of `foo(..)`, they are not accessible outside of `foo(..)`. 
That is, the following code would all result in `ReferenceError` errors, as the identifiers are not available to the global scope:\n\n```js\nbar(); // fails\n\nconsole.log( a, b, c ); // all 3 fail\n```\n\nHowever, all these identifiers (`a`, `b`, `c`, `foo`, and `bar`) are accessible *inside* of `foo(..)`, and indeed also available inside of `bar(..)` (assuming there are no shadow identifier declarations inside `bar(..)`).\n\nFunction scope encourages the idea that all variables belong to the function, and can be used and reused throughout the entirety of the function (and indeed, accessible even to nested scopes). This design approach can be quite useful, and certainly can make full use of the \"dynamic\" nature of JavaScript variables to take on values of different types as needed.\n\nOn the other hand, if you don't take careful precautions, variables existing across the entirety of a scope can lead to some unexpected pitfalls.\n\n## Hiding In Plain Scope\n\nThe traditional way of thinking about functions is that you declare a function, and then add code inside it. But the inverse thinking is equally powerful and useful: take any arbitrary section of code you've written, and wrap a function declaration around it, which in effect \"hides\" the code.\n\nThe practical result is to create a scope bubble around the code in question, which means that any declarations (variable or function) in that code will now be tied to the scope of the new wrapping function, rather than the previously enclosing scope. In other words, you can \"hide\" variables and functions by enclosing them in the scope of a function.\n\nWhy would \"hiding\" variables and functions be a useful technique?\n\nThere's a variety of reasons motivating this scope-based hiding. They tend to arise from the software design principle \"Principle of Least Privilege\" [^note-leastprivilege], also sometimes called \"Least Authority\" or \"Least Exposure\". 
This principle states that in the design of software, such as the API for a module/object, you should expose only what is minimally necessary, and \"hide\" everything else.\n\nThis principle extends to the choice of which scope to contain variables and functions. If all variables and functions were in the global scope, they would of course be accessible to any nested scope. But this would violate the \"Least...\" principle in that you are (likely) exposing many variables or functions which you should otherwise keep private, as proper use of the code would discourage access to those variables/functions.\n\nFor example:\n\n```js\nfunction doSomething(a) {\n\tb = a + doSomethingElse( a * 2 );\n\n\tconsole.log( b * 3 );\n}\n\nfunction doSomethingElse(a) {\n\treturn a - 1;\n}\n\nvar b;\n\ndoSomething( 2 ); // 15\n```\n\nIn this snippet, the `b` variable and the `doSomethingElse(..)` function are likely \"private\" details of how `doSomething(..)` does its job. Giving the enclosing scope \"access\" to `b` and `doSomethingElse(..)` is not only unnecessary but also possibly \"dangerous\", in that they may be used in unexpected ways, intentionally or not, and this may violate pre-condition assumptions of `doSomething(..)`.\n\nA more \"proper\" design would hide these private details inside the scope of `doSomething(..)`, such as:\n\n```js\nfunction doSomething(a) {\n\tfunction doSomethingElse(a) {\n\t\treturn a - 1;\n\t}\n\n\tvar b;\n\n\tb = a + doSomethingElse( a * 2 );\n\n\tconsole.log( b * 3 );\n}\n\ndoSomething( 2 ); // 15\n```\n\nNow, `b` and `doSomethingElse(..)` are not accessible to any outside influence, instead controlled only by `doSomething(..)`. 
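As a quick check (a sketch extending the snippet above, not part of the original), `typeof` safely probes undeclared identifiers, confirming from the outside that the private details really are hidden:

```js
function doSomething(a) {
	function doSomethingElse(a) {
		return a - 1;
	}

	var b;

	b = a + doSomethingElse( a * 2 );

	console.log( b * 3 ); // 15
}

doSomething( 2 ); // 15

// `typeof` on an undeclared identifier returns "undefined" instead of throwing
console.log( typeof doSomethingElse ); // "undefined" -- hidden inside `doSomething(..)`
console.log( typeof b ); // "undefined" -- likewise hidden
```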
The functionality and end-result have not been affected, but the design keeps private details private, which is usually considered better software.\n\n### Collision Avoidance\n\nAnother benefit of \"hiding\" variables and functions inside a scope is to avoid unintended collision between two different identifiers with the same name but different intended usages. Collisions often result in unexpected overwriting of values.\n\nFor example:\n\n```js\nfunction foo() {\n\tfunction bar(a) {\n\t\ti = 3; // changing the `i` in the enclosing scope's for-loop\n\t\tconsole.log( a + i );\n\t}\n\n\tfor (var i=0; i<10; i++) {\n\t\tbar( i * 2 ); // oops, infinite loop ahead!\n\t}\n}\n\nfoo();\n```\n\nThe `i = 3` assignment inside of `bar(..)` overwrites, unexpectedly, the `i` that was declared in `foo(..)` at the for-loop. In this case, it will result in an infinite loop, because `i` is set to a fixed value of `3` and that will forever remain `< 10`.\n\nThe assignment inside `bar(..)` needs to declare a local variable to use, regardless of what identifier name is chosen. `var i = 3;` would fix the problem (and would create the previously mentioned \"shadowed variable\" declaration for `i`). An *additional*, not alternate, option is to pick another identifier name entirely, such as `var j = 3;`. But your software design may naturally call for the same identifier name, so utilizing scope to \"hide\" your inner declaration is your best/only option in that case.\n\n#### Global \"Namespaces\"\n\nA particularly strong example of (likely) variable collision occurs in the global scope. Multiple libraries loaded into your program can quite easily collide with each other if they don't properly hide their internal/private functions and variables.\n\nSuch libraries typically will create a single variable declaration, often an object, with a sufficiently unique name, in the global scope. 
This object is then used as a \"namespace\" for that library, where all specific exposures of functionality are made as properties of that object (namespace), rather than as top-level lexically scoped identifiers themselves.\n\nFor example:\n\n```js\nvar MyReallyCoolLibrary = {\n\tawesome: \"stuff\",\n\tdoSomething: function() {\n\t\t// ...\n\t},\n\tdoAnotherThing: function() {\n\t\t// ...\n\t}\n};\n```\n\n#### Module Management\n\nAnother option for collision avoidance is the more modern \"module\" approach, using any of various dependency managers. Using these tools, no libraries ever add any identifiers to the global scope, but are instead required to have their identifier(s) be explicitly imported into another specific scope through usage of the dependency manager's various mechanisms.\n\nIt should be observed that these tools do not possess \"magic\" functionality that is exempt from lexical scoping rules. They simply use the rules of scoping as explained here to enforce that no identifiers are injected into any shared scope, and are instead kept in private, non-collision-susceptible scopes, which prevents any accidental scope collisions.\n\nAs such, you can code defensively and achieve the same results as the dependency managers do without actually needing to use them, if you so choose. See Chapter 5 for more information about the module pattern.\n\n## Functions As Scopes\n\nWe've seen that we can take any snippet of code and wrap a function around it, and that effectively \"hides\" any enclosed variable or function declarations from the outside scope inside that function's inner scope.\n\nFor example:\n\n```js\nvar a = 2;\n\nfunction foo() { // <-- insert this\n\n\tvar a = 3;\n\tconsole.log( a ); // 3\n\n} // <-- and this\nfoo(); // <-- and this\n\nconsole.log( a ); // 2\n```\n\nWhile this technique \"works\", it is not necessarily ideal. There are a few problems it introduces. 
The first is that we have to declare a named-function `foo()`, which means that the identifier name `foo` itself \"pollutes\" the enclosing scope (global, in this case). We also have to explicitly call the function by name (`foo()`) so that the wrapped code actually executes.\n\nIt would be more ideal if the function didn't need a name (or, rather, the name didn't pollute the enclosing scope), and if the function could automatically be executed.\n\nFortunately, JavaScript offers a solution to both problems.\n\n```js\nvar a = 2;\n\n(function foo(){ // <-- insert this\n\n\tvar a = 3;\n\tconsole.log( a ); // 3\n\n})(); // <-- and this\n\nconsole.log( a ); // 2\n```\n\nLet's break down what's happening here.\n\nFirst, notice that the wrapping function statement starts with `(function...` as opposed to just `function...`. While this may seem like a minor detail, it's actually a major change. Instead of treating the function as a standard declaration, the function is treated as a function-expression.\n\n**Note:** The easiest way to distinguish declaration vs. expression is the position of the word \"function\" in the statement (not just a line, but a distinct statement). If \"function\" is the very first thing in the statement, then it's a function declaration. Otherwise, it's a function expression.\n\nThe key difference we can observe here between a function declaration and a function expression relates to where its name is bound as an identifier.\n\nCompare the previous two snippets. In the first snippet, the name `foo` is bound in the enclosing scope, and we call it directly with `foo()`. In the second snippet, the name `foo` is not bound in the enclosing scope, but instead is bound only inside of its own function.\n\nIn other words, `(function foo(){ .. })` as an expression means the identifier `foo` is found *only* in the scope where the `..` indicates, not in the outer scope. 
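A quick sketch (not one of the original snippets) makes that name binding observable with `typeof`: the name is visible inside the expression's own scope, but not outside it:

```js
var seenInside = (function foo(){
	return typeof foo; // `foo` is visible here, inside its own scope
})();

console.log( seenInside ); // "function"
console.log( typeof foo ); // "undefined" -- `foo` did not leak out
```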
Hiding the name `foo` inside itself means it does not pollute the enclosing scope unnecessarily.\n\n### Anonymous vs. Named\n\nYou are probably most familiar with function expressions as callback parameters, such as:\n\n```js\nsetTimeout( function(){\n\tconsole.log(\"I waited 1 second!\");\n}, 1000 );\n```\n\nThis is called an \"anonymous function expression\", because `function()...` has no name identifier on it. Function expressions can be anonymous, but function declarations cannot omit the name -- that would be illegal JS grammar.\n\nAnonymous function expressions are quick and easy to type, and many libraries and tools tend to encourage this idiomatic style of code. However, they have several draw-backs to consider:\n\n1. Anonymous functions have no useful name to display in stack traces, which can make debugging more difficult.\n\n2. Without a name, if the function needs to refer to itself, for recursion, etc., the **deprecated** `arguments.callee` reference is unfortunately required. Another example of needing to self-reference is when an event handler function wants to unbind itself after it fires.\n\n3. Anonymous functions omit a name that is often helpful in providing more readable/understandable code. A descriptive name helps self-document the code in question.\n\n**Inline function expressions** are powerful and useful -- the question of anonymous vs. named doesn't detract from that. Providing a name for your function expression quite effectively addresses all these draw-backs, but has no tangible downsides. 
The best practice is to always name your function expressions:\n\n```js\nsetTimeout( function timeoutHandler(){ // <-- Look, I have a name!\n\tconsole.log( \"I waited 1 second!\" );\n}, 1000 );\n```\n\n### Invoking Function Expressions Immediately\n\n```js\nvar a = 2;\n\n(function foo(){\n\n\tvar a = 3;\n\tconsole.log( a ); // 3\n\n})();\n\nconsole.log( a ); // 2\n```\n\nNow that we have a function as an expression by virtue of wrapping it in a `( )` pair, we can execute that function by adding another `()` on the end, like `(function foo(){ .. })()`. The first enclosing `( )` pair makes the function an expression, and the second `()` executes the function.\n\nThis pattern is so common, a few years ago the community agreed on a term for it: **IIFE**, which stands for **I**mmediately **I**nvoked **F**unction **E**xpression.\n\nOf course, IIFE's don't need names, necessarily -- the most common form of IIFE is to use an anonymous function expression. While certainly less common, naming an IIFE has all the aforementioned benefits over anonymous function expressions, so it's a good practice to adopt.\n\n```js\nvar a = 2;\n\n(function IIFE(){\n\n\tvar a = 3;\n\tconsole.log( a ); // 3\n\n})();\n\nconsole.log( a ); // 2\n```\n\nThere's a slight variation on the traditional IIFE form, which some prefer: `(function(){ .. }())`. Look closely to see the difference. In the first form, the function expression is wrapped in `( )`, and then the invoking `()` pair is on the outside right after it. In the second form, the invoking `()` pair is moved to the inside of the outer `( )` wrapping pair.\n\nThese two forms are identical in functionality. 
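For instance, here's the earlier snippet rewritten (as a sketch) in the second form, with the invoking `()` pair moved inside the wrapping parentheses:

```js
var a = 2;

(function IIFE(){

	var a = 3;
	console.log( a ); // 3

}());

console.log( a ); // 2
```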
**It's purely a stylistic choice which you prefer.**\n\nAnother variation on IIFE's which is quite common is to use the fact that they are, in fact, just function calls, and pass in argument(s).\n\nFor instance:\n\n```js\nvar a = 2;\n\n(function IIFE( global ){\n\n\tvar a = 3;\n\tconsole.log( a ); // 3\n\tconsole.log( global.a ); // 2\n\n})( window );\n\nconsole.log( a ); // 2\n```\n\nWe pass in the `window` object reference, but we name the parameter `global`, so that we have a clear stylistic delineation for global vs. non-global references. Of course, you can pass in anything from an enclosing scope you want, and you can name the parameter(s) anything that suits you. This is mostly just stylistic choice.\n\nAnother application of this pattern addresses the (minor niche) concern that the default `undefined` identifier might have its value incorrectly overwritten, causing unexpected results. By naming a parameter `undefined`, but not passing any value for that argument, we can guarantee that the `undefined` identifier is in fact the undefined value in a block of code:\n\n```js\nundefined = true; // setting a land-mine for other code! avoid!\n\n(function IIFE( undefined ){\n\n\tvar a;\n\tif (a === undefined) {\n\t\tconsole.log( \"Undefined is safe here!\" );\n\t}\n\n})();\n```\n\nStill another variation of the IIFE inverts the order of things, where the function to execute is given second, *after* the invocation and parameters to pass to it. This pattern is used in the UMD (Universal Module Definition) project. Some people find it a little cleaner to understand, though it is slightly more verbose.\n\n```js\nvar a = 2;\n\n(function IIFE( def ){\n\tdef( window );\n})(function def( global ){\n\n\tvar a = 3;\n\tconsole.log( a ); // 3\n\tconsole.log( global.a ); // 2\n\n});\n```\n\nThe `def` function expression is defined in the second-half of the snippet, and then passed as a parameter (also called `def`) to the `IIFE` function defined in the first half of the snippet. 
Finally, the parameter `def` (the function) is invoked, passing `window` in as the `global` parameter.\n\n## Blocks As Scopes\n\nWhile functions are the most common unit of scope, and certainly the most wide-spread of the design approaches in the majority of JS in circulation, other units of scope are possible, and the usage of these other scope units can lead to even better, cleaner to maintain code.\n\nMany languages other than JavaScript support Block Scope, and so developers from those languages are accustomed to the mindset, whereas those who've primarily only worked in JavaScript may find the concept slightly foreign.\n\nBut even if you've never written a single line of code in block-scoped fashion, you are still probably familiar with this extremely common idiom in JavaScript:\n\n```js\nfor (var i=0; i<10; i++) {\n\tconsole.log( i );\n}\n```\n\nWe declare the variable `i` directly inside the for-loop head, most likely because our *intent* is to use `i` only within the context of that for-loop, and essentially ignore the fact that the variable actually scopes itself to the enclosing scope (function or global).\n\nThat's what block-scoping is all about. Declaring variables as close as possible, as local as possible, to where they will be used. Another example:\n\n```js\nvar foo = true;\n\nif (foo) {\n\tvar bar = foo * 2;\n\tbar = something( bar );\n\tconsole.log( bar );\n}\n```\n\nWe are using a `bar` variable only in the context of the if-statement, so it makes a kind of sense that we would declare it inside the if-block. However, where we declare variables is not relevant when using `var`, because they will always belong to the enclosing scope. 
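Running a self-contained version of that snippet (with a trivial stand-in for `something(..)`, which the original leaves undefined) makes the leak visible:

```js
// hypothetical stand-in for the book's undefined `something(..)`
function something(v) {
	return v;
}

var foo = true;

if (foo) {
	var bar = foo * 2; // `var` ignores the block...
	bar = something( bar );
	console.log( bar ); // 2
}

console.log( bar ); // 2 -- ...so `bar` belongs to the enclosing scope
```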
This snippet is essentially \"fake\" block-scoping, for stylistic reasons, and relying on self-enforcement not to accidentally use `bar` in another place in that scope.\n\nBlock scope is a tool to extend the earlier \"Principle of Least ~~Privilege~~ Exposure\" [^note-leastprivilege] from hiding information in functions to hiding information in blocks of our code.\n\nConsider the for-loop example again:\n\n```js\nfor (var i=0; i<10; i++) {\n\tconsole.log( i );\n}\n```\n\nWhy pollute the entire scope of a function with the `i` variable that is only going to be (or only *should be*, at least) used for the for-loop?\n\nBut more importantly, developers may prefer to *check* themselves against accidentally (re)using variables outside of their intended purpose, such as being issued an error about an unknown variable if you try to use it in the wrong place. Block-scoping (if it were possible) for the `i` variable would make `i` available only for the for-loop, causing an error if `i` is accessed elsewhere in the function. This helps ensure variables are not re-used in confusing or hard-to-maintain ways.\n\nBut, the sad reality is that, on the surface, JavaScript has no facility for block scope.\n\nThat is, until you dig a little further.\n\n### `with`\n\nWe learned about `with` in Chapter 2. 
While it is a frowned upon construct, it *is* an example of (a form of) block scope, in that the scope that is created from the object only exists for the lifetime of that `with` statement, and not in the enclosing scope.\n\n### `try/catch`\n\nIt's a *very* little known fact that JavaScript in ES3 specified the variable declaration in the `catch` clause of a `try/catch` to be block-scoped to the `catch` block.\n\nFor instance:\n\n```js\ntry {\n\tundefined(); // illegal operation to force an exception!\n}\ncatch (err) {\n\tconsole.log( err ); // works!\n}\n\nconsole.log( err ); // ReferenceError: `err` not found\n```\n\nAs you can see, `err` exists only in the `catch` clause, and throws an error when you try to reference it elsewhere.\n\n**Note:** While this behavior has been specified and true of practically all standard JS environments (except perhaps old IE), many linters seem to still complain if you have two or more `catch` clauses in the same scope which each declare their error variable with the same identifier name. This is not actually a re-definition, since the variables are safely block-scoped, but the linters still seem to, annoyingly, complain about this fact.\n\nTo avoid these unnecessary warnings, some devs will name their `catch` variables `err1`, `err2`, etc. Other devs will simply turn off the linting check for duplicate variable names.\n\nThe block-scoping nature of `catch` may seem like a useless academic fact, but see Appendix B for more information on just how useful it might be.\n\n### `let`\n\nThus far, we've seen that JavaScript only has some strange niche behaviors which expose block scope functionality. 
If that were all we had, and *it was* for many, many years, then block scoping would not be terribly useful to the JavaScript developer.\n\nFortunately, ES6 changes that, and introduces a new keyword `let` which sits alongside `var` as another way to declare variables.\n\nThe `let` keyword attaches the variable declaration to the scope of whatever block (commonly a `{ .. }` pair) it's contained in. In other words, `let` implicitly hijacks any block's scope for its variable declaration.\n\n```js\nvar foo = true;\n\nif (foo) {\n\tlet bar = foo * 2;\n\tbar = something( bar );\n\tconsole.log( bar );\n}\n\nconsole.log( bar ); // ReferenceError\n```\n\nUsing `let` to attach a variable to an existing block is somewhat implicit. It can confuse you if you're not paying close attention to which blocks have variables scoped to them, and are in the habit of moving blocks around, wrapping them in other blocks, etc., as you develop and evolve code.\n\nCreating explicit blocks for block-scoping can address some of these concerns, making it more obvious where variables are attached and not. Usually, explicit code is preferable over implicit or subtle code. This explicit block-scoping style is easy to achieve, and fits more naturally with how block-scoping works in other languages:\n\n```js\nvar foo = true;\n\nif (foo) {\n\t{ // <-- explicit block\n\t\tlet bar = foo * 2;\n\t\tbar = something( bar );\n\t\tconsole.log( bar );\n\t}\n}\n\nconsole.log( bar ); // ReferenceError\n```\n\nWe can create an arbitrary block for `let` to bind to by simply including a `{ .. }` pair anywhere a statement is valid grammar. 
In this case, we've made an explicit block *inside* the if-statement, which may be easier as a whole block to move around later in refactoring, without affecting the position and semantics of the enclosing if-statement.\n\n**Note:** For another way to express explicit block scopes, see Appendix B.\n\nIn Chapter 4, we will address hoisting, which talks about declarations being taken as existing for the entire scope in which they occur.\n\nHowever, declarations made with `let` will *not* hoist to the entire scope of the block they appear in. Such declarations will not observably \"exist\" in the block until the declaration statement.\n\n```js\n{\n   console.log( bar ); // ReferenceError!\n   let bar = 2;\n}\n```\n\n#### Garbage Collection\n\nAnother reason block-scoping is useful relates to closures and garbage collection to reclaim memory. We'll briefly illustrate here, but the closure mechanism is explained in detail in Chapter 5.\n\nConsider:\n\n```js\nfunction process(data) {\n\t// do something interesting\n}\n\nvar someReallyBigData = { .. };\n\nprocess( someReallyBigData );\n\nvar btn = document.getElementById( \"my_button\" );\n\nbtn.addEventListener( \"click\", function click(evt){\n\tconsole.log(\"button clicked\");\n}, /*capturingPhase=*/false );\n```\n\nThe `click` function click handler callback doesn't *need* the `someReallyBigData` variable at all. That means, theoretically, after `process(..)` runs, the big memory-heavy data structure could be garbage collected. However, it's quite likely (though implementation dependent) that the JS engine will still have to keep the structure around, since the `click` function has a closure over the entire scope.\n\nBlock-scoping can address this concern, making it clearer to the engine that it does not need to keep `someReallyBigData` around:\n\n```js\nfunction process(data) {\n\t// do something interesting\n}\n\n// anything declared inside this block can go away after!\n{\n\tlet someReallyBigData = { .. 
};\n\n\tprocess( someReallyBigData );\n}\n\nvar btn = document.getElementById( \"my_button\" );\n\nbtn.addEventListener( \"click\", function click(evt){\n\tconsole.log(\"button clicked\");\n}, /*capturingPhase=*/false );\n```\n\nDeclaring explicit blocks for variables to locally bind to is a powerful tool that you can add to your code toolbox.\n\n#### `let` Loops\n\nA particular case where `let` shines is in the for-loop case as we discussed previously.\n\n```js\nfor (let i=0; i<10; i++) {\n\tconsole.log( i );\n}\n\nconsole.log( i ); // ReferenceError\n```\n\nNot only does `let` in the for-loop header bind the `i` to the for-loop body, but in fact, it **re-binds it** to each *iteration* of the loop, making sure to re-assign it the value from the end of the previous loop iteration.\n\nHere's another way of illustrating the per-iteration binding behavior that occurs:\n\n```js\n{\n\tlet j;\n\tfor (j=0; j<10; j++) {\n\t\tlet i = j; // re-bound for each iteration!\n\t\tconsole.log( i );\n\t}\n}\n```\n\nThe reason why this per-iteration binding is interesting will become clear in Chapter 5 when we discuss closures.\n\nBecause `let` declarations attach to arbitrary blocks rather than to the enclosing function's scope (or global), there can be gotchas where existing code has a hidden reliance on function-scoped `var` declarations, and replacing the `var` with `let` may require additional care when refactoring code.\n\nConsider:\n\n```js\nvar foo = true, baz = 10;\n\nif (foo) {\n\tvar bar = 3;\n\n\tif (baz > bar) {\n\t\tconsole.log( baz );\n\t}\n\n\t// ...\n}\n```\n\nThis code is fairly easily re-factored as:\n\n```js\nvar foo = true, baz = 10;\n\nif (foo) {\n\tvar bar = 3;\n\n\t// ...\n}\n\nif (baz > bar) {\n\tconsole.log( baz );\n}\n```\n\nBut, be careful of such changes when using block-scoped variables:\n\n```js\nvar foo = true, baz = 10;\n\nif (foo) {\n\tlet bar = 3;\n\n\tif (baz > bar) { // <-- don't forget `bar` when moving!\n\t\tconsole.log( baz 
);\n\t}\n}\n```\n\nSee Appendix B for an alternate (more explicit) style of block-scoping which may provide easier to maintain/refactor code that's more robust to these scenarios.\n\n### `const`\n\nIn addition to `let`, ES6 introduces `const`, which also creates a block-scoped variable, but whose value is fixed (constant). Any attempt to change that value at a later time results in an error.\n\n```js\nvar foo = true;\n\nif (foo) {\n\tvar a = 2;\n\tconst b = 3; // block-scoped to the containing `if`\n\n\ta = 3; // just fine!\n\tb = 4; // error!\n}\n\nconsole.log( a ); // 3\nconsole.log( b ); // ReferenceError!\n```\n\n## Review (TL;DR)\n\nFunctions are the most common unit of scope in JavaScript. Variables and functions that are declared inside another function are essentially \"hidden\" from any of the enclosing \"scopes\", which is an intentional design principle of good software.\n\nBut functions are by no means the only unit of scope. Block-scope refers to the idea that variables and functions can belong to an arbitrary block (generally, any `{ .. }` pair) of code, rather than only to the enclosing function.\n\nStarting with ES3, the `try/catch` structure has block-scope in the `catch` clause.\n\nIn ES6, the `let` keyword (a cousin to the `var` keyword) is introduced to allow declarations of variables in any arbitrary block of code. `if (..) { let a = 2; }` will declare a variable `a` that essentially hijacks the scope of the `if`'s `{ .. }` block and attaches itself there.\n\nThough some seem to believe so, block scope should not be taken as an outright replacement of `var` function scope. Both functionalities co-exist, and developers can and should use both function-scope and block-scope techniques where respectively appropriate to produce better, more readable/maintainable code.\n\n[^note-leastprivilege]: [Principle of Least Privilege](http://en.wikipedia.org/wiki/Principle_of_least_privilege)\n"
  },
  {
    "path": "scope & closures/ch4.md",
    "content": "# You Don't Know JS: Scope & Closures\n# Chapter 4: Hoisting\n\nBy now, you should be fairly comfortable with the idea of scope, and how variables are attached to different levels of scope depending on where and how they are declared. Both function scope and block scope behave by the same rules in this regard: any variable declared within a scope is attached to that scope.\n\nBut there's a subtle detail of how scope attachment works with declarations that appear in various locations within a scope, and that detail is what we will examine here.\n\n## Chicken Or The Egg?\n\nThere's a temptation to think that all of the code you see in a JavaScript program is interpreted line-by-line, top-down in order, as the program executes. While that is substantially true, there's one part of that assumption which can lead to incorrect thinking about your program.\n\nConsider this code:\n\n```js\na = 2;\n\nvar a;\n\nconsole.log( a );\n```\n\nWhat do you expect to be printed in the `console.log(..)` statement?\n\nMany developers would expect `undefined`, since the `var a` statement comes after the `a = 2`, and it would seem natural to assume that the variable is re-defined, and thus assigned the default `undefined`. However, the output will be `2`.\n\nConsider another piece of code:\n\n```js\nconsole.log( a );\n\nvar a = 2;\n```\n\nYou might be tempted to assume that, since the previous snippet exhibited some less-than-top-down looking behavior, perhaps in this snippet, `2` will also be printed. Others may think that since the `a` variable is used before it is declared, this must result in a `ReferenceError` being thrown.\n\nUnfortunately, both guesses are incorrect. `undefined` is the output.\n\n**So, what's going on here?** It would appear we have a chicken-and-the-egg question. 
Which comes first, the declaration (\"egg\"), or the assignment (\"chicken\")?\n\n## The Compiler Strikes Again\n\nTo answer this question, we need to refer back to Chapter 1, and our discussion of compilers. Recall that the *Engine* actually will compile your JavaScript code before it interprets it. Part of the compilation phase was to find and associate all declarations with their appropriate scopes. Chapter 2 showed us that this is the heart of Lexical Scope.\n\nSo, the best way to think about things is that all declarations, both variables and functions, are processed first, before any part of your code is executed.\n\nWhen you see `var a = 2;`, you probably think of that as one statement. But JavaScript actually thinks of it as two statements: `var a;` and `a = 2;`. The first statement, the declaration, is processed during the compilation phase. The second statement, the assignment, is left **in place** for the execution phase.\n\nOur first snippet then should be thought of as being handled like this:\n\n```js\nvar a;\n```\n```js\na = 2;\n\nconsole.log( a );\n```\n\n...where the first part is the compilation and the second part is the execution.\n\nSimilarly, our second snippet is actually processed as:\n\n```js\nvar a;\n```\n```js\nconsole.log( a );\n\na = 2;\n```\n\nSo, one way of thinking, sort of metaphorically, about this process, is that variable and function declarations are \"moved\" from where they appear in the flow of the code to the top of the code. This gives rise to the name \"Hoisting\".\n\nIn other words, **the egg (declaration) comes before the chicken (assignment)**.\n\n**Note:** Only the declarations themselves are hoisted, while any assignments or other executable logic are left *in place*. 
If hoisting were to re-arrange the executable logic of our code, that could wreak havoc.\n\n```js\nfoo();\n\nfunction foo() {\n\tconsole.log( a ); // undefined\n\n\tvar a = 2;\n}\n```\n\nThe function `foo`'s declaration (which in this case *includes* the implied value of it as an actual function) is hoisted, such that the call on the first line is able to execute.\n\nIt's also important to note that hoisting is **per-scope**. So while our previous snippets were simplified in that they only included global scope, the `foo(..)` function we are now examining itself exhibits that `var a` is hoisted to the top of `foo(..)` (not, obviously, to the top of the program). So the program can perhaps be more accurately interpreted like this:\n\n```js\nfunction foo() {\n\tvar a;\n\n\tconsole.log( a ); // undefined\n\n\ta = 2;\n}\n\nfoo();\n```\n\nFunction declarations are hoisted, as we just saw. But function expressions are not.\n\n```js\nfoo(); // not ReferenceError, but TypeError!\n\nvar foo = function bar() {\n\t// ...\n};\n```\n\nThe variable identifier `foo` is hoisted and attached to the enclosing scope (global) of this program, so `foo()` doesn't fail as a `ReferenceError`. But `foo` has no value yet (as it would if it had been a true function declaration instead of expression). So, `foo()` is attempting to invoke the `undefined` value, which is a `TypeError` illegal operation.\n\nAlso recall that even though it's a named function expression, the name identifier is not available in the enclosing scope:\n\n```js\nfoo(); // TypeError\nbar(); // ReferenceError\n\nvar foo = function bar() {\n\t// ...\n};\n```\n\nThis snippet is more accurately interpreted (with hoisting) as:\n\n```js\nvar foo;\n\nfoo(); // TypeError\nbar(); // ReferenceError\n\nfoo = function() {\n\tvar bar = ...self...\n\t// ...\n}\n```\n\n## Functions First\n\nBoth function declarations and variable declarations are hoisted. 
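To see both kinds of hoisting side by side, here's a quick sketch (the names `foo` and `a` are just illustrative):

```js
console.log( typeof foo ); // "function" -- the whole function declaration was hoisted
console.log( typeof a );   // "undefined" -- only the declaration of `a` was hoisted

var a = 2;

function foo() {
	return a;
}
```

The function declaration hoists with its function value attached, while `var a` hoists as a bare declaration, leaving the `= 2` assignment in place for the execution phase.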
But a subtle detail (that *can* show up in code with multiple \"duplicate\" declarations) is that functions are hoisted first, and then variables.\n\nConsider:\n\n```js\nfoo(); // 1\n\nvar foo;\n\nfunction foo() {\n\tconsole.log( 1 );\n}\n\nfoo = function() {\n\tconsole.log( 2 );\n};\n```\n\n`1` is printed instead of `2`! This snippet is interpreted by the *Engine* as:\n\n```js\nfunction foo() {\n\tconsole.log( 1 );\n}\n\nfoo(); // 1\n\nfoo = function() {\n\tconsole.log( 2 );\n};\n```\n\nNotice that `var foo` was the duplicate (and thus ignored) declaration, even though it came before the `function foo()...` declaration, because function declarations are hoisted before normal variables.\n\nWhile multiple/duplicate `var` declarations are effectively ignored, subsequent function declarations *do* override previous ones.\n\n```js\nfoo(); // 3\n\nfunction foo() {\n\tconsole.log( 1 );\n}\n\nvar foo = function() {\n\tconsole.log( 2 );\n};\n\nfunction foo() {\n\tconsole.log( 3 );\n}\n```\n\nWhile this all may sound like nothing more than interesting academic trivia, it highlights the fact that duplicate definitions in the same scope are a really bad idea and will often lead to confusing results.\n\nFunction declarations that appear inside of normal blocks typically hoist to the enclosing scope, rather than being conditional as this code implies:\n\n```js\nfoo(); // \"b\"\n\nvar a = true;\nif (a) {\n   function foo() { console.log( \"a\" ); }\n}\nelse {\n   function foo() { console.log( \"b\" ); }\n}\n```\n\nHowever, it's important to note that this behavior is not reliable and is subject to change in future versions of JavaScript, so it's probably best to avoid declaring functions in blocks.\n\n## Review (TL;DR)\n\nWe can be tempted to look at `var a = 2;` as one statement, but the JavaScript *Engine* does not see it that way. 
It sees `var a` and `a = 2` as two separate statements, the first one a compiler-phase task, and the second one an execution-phase task.\n\nWhat this leads to is that all declarations in a scope, regardless of where they appear, are processed *first* before the code itself is executed. You can visualize this as declarations (variables and functions) being \"moved\" to the top of their respective scopes, which we call \"hoisting\".\n\nDeclarations themselves are hoisted, but assignments, even assignments of function expressions, are *not* hoisted.\n\nBe careful about duplicate declarations, especially mixed between normal var declarations and function declarations -- peril awaits if you do!\n"
  },
  {
    "path": "scope & closures/ch5.md",
    "content": "# You Don't Know JS: Scope & Closures\n# Chapter 5: Scope Closure\n\nWe arrive at this point with hopefully a very healthy, solid understanding of how scope works.\n\nWe turn our attention to an incredibly important, but persistently elusive, *almost mythological*, part of the language: **closure**. If you have followed our discussion of lexical scope thus far, the payoff is that closure is going to be, largely, anticlimactic, almost self-obvious. *There's a man behind the wizard's curtain, and we're about to see him*. No, his name is not Crockford!\n\nIf however you have nagging questions about lexical scope, now would be a good time to go back and review Chapter 2 before proceeding.\n\n## Enlightenment\n\nFor those who are somewhat experienced in JavaScript, but have perhaps never fully grasped the concept of closures, *understanding closure* can seem like a special nirvana that one must strive and sacrifice to attain.\n\nI recall years back when I had a firm grasp on JavaScript, but had no idea what closure was. The hint that there was *this other side* to the language, one which promised even more capability than I already possessed, teased and taunted me. I remember reading through the source code of early frameworks trying to understand how it actually worked. I remember the first time something of the \"module pattern\" began to emerge in my mind. I remember the *a-ha!* moments quite vividly.\n\nWhat I didn't know back then, what took me years to understand, and what I hope to impart to you presently, is this secret: **closure is all around you in JavaScript, you just have to recognize and embrace it.** Closures are not a special opt-in tool that you must learn new syntax and patterns for. No, closures are not even a weapon that you must learn to wield and master as Luke trained in The Force.\n\nClosures happen as a result of writing code that relies on lexical scope. They just happen. 
You do not even really have to intentionally create closures to take advantage of them. Closures are created and used for you all over your code. What you are *missing* is the proper mental context to recognize, embrace, and leverage closures for your own will.\n\nThe enlightenment moment should be: **oh, closures are already occurring all over my code, I can finally *see* them now.** Understanding closures is like when Neo sees the Matrix for the first time.\n\n## Nitty Gritty\n\nOK, enough hyperbole and shameless movie references.\n\nHere's a down-n-dirty definition of what you need to know to understand and recognize closures:\n\n> Closure is when a function is able to remember and access its lexical scope even when that function is executing outside its lexical scope.\n\nLet's jump into some code to illustrate that definition.\n\n```js\nfunction foo() {\n\tvar a = 2;\n\n\tfunction bar() {\n\t\tconsole.log( a ); // 2\n\t}\n\n\tbar();\n}\n\nfoo();\n```\n\nThis code should look familiar from our discussions of Nested Scope. Function `bar()` has *access* to the variable `a` in the outer enclosing scope because of lexical scope look-up rules (in this case, it's an RHS reference look-up).\n\nIs this \"closure\"?\n\nWell, technically... *perhaps*. But by our what-you-need-to-know definition above... *not exactly*. I think the most accurate way to explain `bar()` referencing `a` is via lexical scope look-up rules, and those rules are *only* (an important!) **part** of what closure is.\n\nFrom a purely academic perspective, what is said of the above snippet is that the function `bar()` has a *closure* over the scope of `foo()` (and indeed, even over the rest of the scopes it has access to, such as the global scope in our case). Put slightly differently, it's said that `bar()` closes over the scope of `foo()`. Why? Because `bar()` appears nested inside of `foo()`. 
Plain and simple.\n\nBut, closure defined in this way is not directly *observable*, nor do we see closure *exercised* in that snippet. We clearly see lexical scope, but closure remains sort of a mysterious shifting shadow behind the code.\n\nLet us then consider code which brings closure into full light:\n\n```js\nfunction foo() {\n\tvar a = 2;\n\n\tfunction bar() {\n\t\tconsole.log( a );\n\t}\n\n\treturn bar;\n}\n\nvar baz = foo();\n\nbaz(); // 2 -- Whoa, closure was just observed, man.\n```\n\nThe function `bar()` has lexical scope access to the inner scope of `foo()`. But then, we take `bar()`, the function itself, and pass it *as* a value. In this case, we `return` the function object itself that `bar` references.\n\nAfter we execute `foo()`, we assign the value it returned (our inner `bar()` function) to a variable called `baz`, and then we actually invoke `baz()`, which of course is invoking our inner function `bar()`, just by a different identifier reference.\n\n`bar()` is executed, for sure. But in this case, it's executed *outside* of its declared lexical scope.\n\nAfter `foo()` executed, normally we would expect that the entirety of the inner scope of `foo()` would go away, because we know that the *Engine* employs a *Garbage Collector* that comes along and frees up memory once it's no longer in use. Since it would appear that the contents of `foo()` are no longer in use, it would seem natural that they should be considered *gone*.\n\nBut the \"magic\" of closures does not let this happen. That inner scope is in fact *still* \"in use\", and thus does not go away. Who's using it? 
**The function `bar()` itself**.\n\nBy virtue of where it was declared, `bar()` has a lexical scope closure over that inner scope of `foo()`, which keeps that scope alive for `bar()` to reference at any later time.\n\n**`bar()` still has a reference to that scope, and that reference is called closure.**\n\nSo, a few microseconds later, when the variable `baz` is invoked (invoking the inner function we initially labeled `bar`), it duly has *access* to author-time lexical scope, so it can access the variable `a` just as we'd expect.\n\nThe function is being invoked well outside of its author-time lexical scope. **Closure** lets the function continue to access the lexical scope it was defined in at author-time.\n\nOf course, any of the various ways that functions can be *passed around* as values, and indeed invoked in other locations, are all examples of observing/exercising closure.\n\n```js\nfunction foo() {\n\tvar a = 2;\n\n\tfunction baz() {\n\t\tconsole.log( a ); // 2\n\t}\n\n\tbar( baz );\n}\n\nfunction bar(fn) {\n\tfn(); // look ma, I saw closure!\n}\n```\n\nWe pass the inner function `baz` over to `bar`, and call that inner function (labeled `fn` now), and when we do, its closure over the inner scope of `foo()` is observed, by accessing `a`.\n\nThese passings-around of functions can be indirect, too.\n\n```js\nvar fn;\n\nfunction foo() {\n\tvar a = 2;\n\n\tfunction baz() {\n\t\tconsole.log( a );\n\t}\n\n\tfn = baz; // assign `baz` to global variable\n}\n\nfunction bar() {\n\tfn(); // look ma, I saw closure!\n}\n\nfoo();\n\nbar(); // 2\n```\n\nWhatever facility we use to *transport* an inner function outside of its lexical scope, it will maintain a scope reference to where it was originally declared, and wherever we execute it, that closure will be exercised.\n\n## Now I Can See\n\nThe previous code snippets are somewhat academic and artificially constructed to illustrate *using closure*. But I promised you something more than just a cool new toy. 
I promised that closure was something all around you in your existing code. Let us now *see* that truth.\n\n```js\nfunction wait(message) {\n\n\tsetTimeout( function timer(){\n\t\tconsole.log( message );\n\t}, 1000 );\n\n}\n\nwait( \"Hello, closure!\" );\n```\n\nWe take an inner function (named `timer`) and pass it to `setTimeout(..)`. But `timer` has a scope closure over the scope of `wait(..)`, indeed keeping and using a reference to the variable `message`.\n\nA thousand milliseconds after we have executed `wait(..)`, and its inner scope should otherwise be long gone, that inner function `timer` still has closure over that scope.\n\nDeep down in the guts of the *Engine*, the built-in utility `setTimeout(..)` has reference to some parameter, probably called `fn` or `func` or something like that. *Engine* goes to invoke that function, which is invoking our inner `timer` function, and the lexical scope reference is still intact.\n\n**Closure.**\n\nOr, if you're of the jQuery persuasion (or any JS framework, for that matter):\n\n```js\nfunction setupBot(name,selector) {\n\t$( selector ).click( function activator(){\n\t\tconsole.log( \"Activating: \" + name );\n\t} );\n}\n\nsetupBot( \"Closure Bot 1\", \"#bot_1\" );\nsetupBot( \"Closure Bot 2\", \"#bot_2\" );\n```\n\nI am not sure what kind of code you write, but I regularly write code which is responsible for controlling an entire global drone army of closure bots, so this is totally realistic!\n\n(Some) joking aside, essentially *whenever* and *wherever* you treat functions (which access their own respective lexical scopes) as first-class values and pass them around, you are likely to see those functions exercising closure. Be that timers, event handlers, Ajax requests, cross-window messaging, web workers, or any of the other asynchronous (or synchronous!) tasks, when you pass in a *callback function*, get ready to sling some closure around!\n\n**Note:** Chapter 3 introduced the IIFE pattern. 
While it is often said that IIFE (alone) is an example of observed closure, I would somewhat disagree, by our definition above.\n\n```js\nvar a = 2;\n\n(function IIFE(){\n\tconsole.log( a );\n})();\n```\n\nThis code \"works\", but it's not strictly an observation of closure. Why? Because the function (which we named \"IIFE\" here) is not executed outside its lexical scope. It's still invoked right there in the same scope as it was declared (the enclosing/global scope that also holds `a`). `a` is found via normal lexical scope look-up, not really via closure.\n\nWhile closure might technically be happening at declaration time, it is *not* strictly observable, and so, as they say, *it's a tree falling in the forest with no one around to hear it.*\n\nThough an IIFE is not *itself* an example of closure, it absolutely creates scope, and it's one of the most common tools we use to create scope which can be closed over. So IIFEs are indeed heavily related to closure, even if not exercising closure themselves.\n\nPut this book down right now, dear reader. I have a task for you. Go open up some of your recent JavaScript code. Look for your functions-as-values and identify where you are already using closure and maybe didn't even know it before.\n\nI'll wait.\n\nNow... you see!\n\n## Loops + Closure\n\nThe most common canonical example used to illustrate closure involves the humble for-loop.\n\n```js\nfor (var i=1; i<=5; i++) {\n\tsetTimeout( function timer(){\n\t\tconsole.log( i );\n\t}, i*1000 );\n}\n```\n\n**Note:** Linters often complain when you put functions inside of loops, because the mistakes of not understanding closure are **so common among developers**. We explain how to do so properly here, leveraging the full power of closure. 
But that subtlety is often lost on linters and they will complain regardless, assuming you don't *actually* know what you're doing.\n\nThe spirit of this code snippet is that we would normally *expect* the behavior to be that the numbers \"1\", \"2\", .. \"5\" would be printed out, one at a time, one per second, respectively.\n\nIn fact, if you run this code, you get \"6\" printed out 5 times, at one-second intervals.\n\n**Huh?**\n\nFirstly, let's explain where `6` comes from. The terminating condition of the loop is when `i` is *not* `<=5`. The first time that's the case is when `i` is 6. So, the output is reflecting the final value of `i` after the loop terminates.\n\nThis actually seems obvious on second glance. The timeout function callbacks are all running well after the completion of the loop. In fact, as timers go, even if it were `setTimeout(.., 0)` on each iteration, all those function callbacks would still run strictly after the completion of the loop, and thus print `6` each time.\n\nBut there's a deeper question at play here. What's *missing* from our code to actually have it behave as we semantically have implied?\n\nWhat's missing is that we are trying to *imply* that each iteration of the loop \"captures\" its own copy of `i`, at the time of the iteration. But, the way scope works, all 5 of those functions, though they are defined separately in each loop iteration, all **are closed over the same shared global scope**, which has, in fact, only one `i` in it.\n\nPut that way, *of course* all functions share a reference to the same `i`. Something about the loop structure tends to confuse us into thinking there's something else more sophisticated at work. There is not. It's no different than if each of the 5 timeout callbacks were just declared one right after the other, with no loop at all.\n\nOK, so, back to our burning question. What's missing? We need more ~~cowbell~~ closured scope. 
Specifically, we need a new closured scope for each iteration of the loop.\n\nWe learned in Chapter 3 that the IIFE creates scope by declaring a function and immediately executing it.\n\nLet's try:\n\n```js\nfor (var i=1; i<=5; i++) {\n\t(function(){\n\t\tsetTimeout( function timer(){\n\t\t\tconsole.log( i );\n\t\t}, i*1000 );\n\t})();\n}\n```\n\nDoes that work? Try it. Again, I'll wait.\n\nI'll end the suspense for you. **Nope.** But why? We now obviously have more lexical scope. Each timeout function callback is indeed closing over its own per-iteration scope created respectively by each IIFE.\n\nIt's not enough to have a scope to close over **if that scope is empty**. Look closely. Our IIFE is just an empty do-nothing scope. It needs *something* in it to be useful to us.\n\nIt needs its own variable, with a copy of the `i` value at each iteration.\n\n```js\nfor (var i=1; i<=5; i++) {\n\t(function(){\n\t\tvar j = i;\n\t\tsetTimeout( function timer(){\n\t\t\tconsole.log( j );\n\t\t}, j*1000 );\n\t})();\n}\n```\n\n**Eureka! It works!**\n\nA slight variation some prefer is:\n\n```js\nfor (var i=1; i<=5; i++) {\n\t(function(j){\n\t\tsetTimeout( function timer(){\n\t\t\tconsole.log( j );\n\t\t}, j*1000 );\n\t})( i );\n}\n```\n\nOf course, since these IIFEs are just functions, we can pass in `i`, and we can call it `j` if we prefer, or we can even call it `i` again. Either way, the code works now.\n\nThe use of an IIFE inside each iteration created a new scope for each iteration, which gave our timeout function callbacks the opportunity to close over a new scope for each iteration, one which had a variable with the right per-iteration value in it for us to access.\n\nProblem solved!\n\n### Block Scoping Revisited\n\nLook carefully at our analysis of the previous solution. We used an IIFE to create new scope per-iteration. In other words, we actually *needed* a per-iteration **block scope**. 
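As an aside, `setTimeout(..)` itself offers one narrow escape hatch: any extra arguments after the delay are forwarded to the callback, so each call can hand over its own copy of `i` (this sketch shortens the delays, and note the extra-arguments feature is missing in some very old browsers):

```js
for (var i=1; i<=5; i++) {
	// the third argument `i` is forwarded to `timer(..)` as `j`
	setTimeout( function timer(j){
		console.log( j ); // 1, 2, .. 5 -- one per 100ms
	}, i*100, i );
}
```

But that trick is specific to `setTimeout(..)`; a per-iteration scope is the general solution.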
Chapter 3 showed us the `let` declaration, which hijacks a block and declares a variable right there in the block.\n\n**It essentially turns a block into a scope that we can close over.** So, the following awesome code \"just works\":\n\n```js\nfor (var i=1; i<=5; i++) {\n\tlet j = i; // yay, block-scope for closure!\n\tsetTimeout( function timer(){\n\t\tconsole.log( j );\n\t}, j*1000 );\n}\n```\n\n*But, that's not all!* (in my best Bob Barker voice). There's a special behavior defined for `let` declarations used in the head of a for-loop. This behavior says that the variable will be declared not just once for the loop, **but each iteration**. And, it will, helpfully, be initialized at each subsequent iteration with the value from the end of the previous iteration.\n\n```js\nfor (let i=1; i<=5; i++) {\n\tsetTimeout( function timer(){\n\t\tconsole.log( i );\n\t}, i*1000 );\n}\n```\n\nHow cool is that? Block scoping and closure working hand-in-hand, solving all the world's problems. I don't know about you, but that makes me a happy JavaScripter.\n\n## Modules\n\nThere are other code patterns which leverage the power of closure but which do not on the surface appear to be about callbacks. Let's examine the most powerful of them: *the module*.\n\n```js\nfunction foo() {\n\tvar something = \"cool\";\n\tvar another = [1, 2, 3];\n\n\tfunction doSomething() {\n\t\tconsole.log( something );\n\t}\n\n\tfunction doAnother() {\n\t\tconsole.log( another.join( \" ! \" ) );\n\t}\n}\n```\n\nAs this code stands right now, there's no observable closure going on. We simply have some private data variables `something` and `another`, and a couple of inner functions `doSomething()` and `doAnother()`, which both have lexical scope (and thus closure!) 
over the inner scope of `foo()`.\n\nBut now consider:\n\n```js\nfunction CoolModule() {\n\tvar something = \"cool\";\n\tvar another = [1, 2, 3];\n\n\tfunction doSomething() {\n\t\tconsole.log( something );\n\t}\n\n\tfunction doAnother() {\n\t\tconsole.log( another.join( \" ! \" ) );\n\t}\n\n\treturn {\n\t\tdoSomething: doSomething,\n\t\tdoAnother: doAnother\n\t};\n}\n\nvar foo = CoolModule();\n\nfoo.doSomething(); // cool\nfoo.doAnother(); // 1 ! 2 ! 3\n```\n\nThis is the pattern in JavaScript we call *module*. The most common way of implementing the module pattern is often called \"Revealing Module\", and it's the variation we present here.\n\nLet's examine some things about this code.\n\nFirstly, `CoolModule()` is just a function, but it *has to be invoked* for there to be a module instance created. Without the execution of the outer function, the creation of the inner scope and the closures would not occur.\n\nSecondly, the `CoolModule()` function returns an object, denoted by the object-literal syntax `{ key: value, ... }`. The object we return has references on it to our inner functions, but *not* to our inner data variables. We keep those hidden and private. It's appropriate to think of this object return value as essentially a **public API for our module**.\n\nThis object return value is ultimately assigned to the outer variable `foo`, and then we can access those property methods on the API, like `foo.doSomething()`.\n\n**Note:** It is not required that we return an actual object (literal) from our module. We could just return back an inner function directly. jQuery is actually a good example of this. The `jQuery` and `$` identifiers are the public API for the jQuery \"module\", but they are, themselves, just a function (which can itself have properties, since all functions are objects).\n\nThe `doSomething()` and `doAnother()` functions have closure over the inner scope of the module \"instance\" (arrived at by actually invoking `CoolModule()`). 
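To underline the privacy side of that arrangement, here's a pared-down sketch in the same shape as `CoolModule()` (re-declared here so the snippet stands alone):

```js
function CoolModule() {
	var something = "cool"; // private data

	function doSomething() {
		console.log( something );
	}

	return {
		doSomething: doSomething
	};
}

var foo = CoolModule();

foo.doSomething();            // cool -- the closure reaches the private variable
console.log( foo.something ); // undefined -- the variable itself is not exposed
```

Only what we place on the returned object is public; everything else stays sealed inside the closure.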
When we transport those functions outside of the lexical scope, by way of property references on the object we return, we have now set up a condition by which closure can be observed and exercised.\n\nTo state it more simply, there are two \"requirements\" for the module pattern to be exercised:\n\n1. There must be an outer enclosing function, and it must be invoked at least once (each time creates a new module instance).\n\n2. The enclosing function must return back at least one inner function, so that this inner function has closure over the private scope, and can access and/or modify that private state.\n\nAn object with a function property on it alone is not *really* a module. An object which is returned from a function invocation which only has data properties on it and no closured functions is not *really* a module, in the observable sense.\n\nThe code snippet above shows a standalone module creator called `CoolModule()` which can be invoked any number of times, each time creating a new module instance. A slight variation on this pattern is when you only care to have one instance, a \"singleton\" of sorts:\n\n```js\nvar foo = (function CoolModule() {\n\tvar something = \"cool\";\n\tvar another = [1, 2, 3];\n\n\tfunction doSomething() {\n\t\tconsole.log( something );\n\t}\n\n\tfunction doAnother() {\n\t\tconsole.log( another.join( \" ! \" ) );\n\t}\n\n\treturn {\n\t\tdoSomething: doSomething,\n\t\tdoAnother: doAnother\n\t};\n})();\n\nfoo.doSomething(); // cool\nfoo.doAnother(); // 1 ! 2 ! 
3\n```\n\nHere, we turned our module function into an IIFE (see Chapter 3), and we *immediately* invoked it and assigned its return value directly to our single module instance identifier `foo`.\n\nModules are just functions, so they can receive parameters:\n\n```js\nfunction CoolModule(id) {\n\tfunction identify() {\n\t\tconsole.log( id );\n\t}\n\n\treturn {\n\t\tidentify: identify\n\t};\n}\n\nvar foo1 = CoolModule( \"foo 1\" );\nvar foo2 = CoolModule( \"foo 2\" );\n\nfoo1.identify(); // \"foo 1\"\nfoo2.identify(); // \"foo 2\"\n```\n\nAnother slight but powerful variation on the module pattern is to name the object you are returning as your public API:\n\n```js\nvar foo = (function CoolModule(id) {\n\tfunction change() {\n\t\t// modifying the public API\n\t\tpublicAPI.identify = identify2;\n\t}\n\n\tfunction identify1() {\n\t\tconsole.log( id );\n\t}\n\n\tfunction identify2() {\n\t\tconsole.log( id.toUpperCase() );\n\t}\n\n\tvar publicAPI = {\n\t\tchange: change,\n\t\tidentify: identify1\n\t};\n\n\treturn publicAPI;\n})( \"foo module\" );\n\nfoo.identify(); // foo module\nfoo.change();\nfoo.identify(); // FOO MODULE\n```\n\nBy retaining an inner reference to the public API object inside your module instance, you can modify that module instance **from the inside**, including adding and removing methods, properties, *and* changing their values.\n\n### Modern Modules\n\nVarious module dependency loaders/managers essentially wrap up this pattern of module definition into a friendly API. 
Rather than examine any one particular library, let me present a *very simple* proof of concept **for illustration purposes (only)**:\n\n```js\nvar MyModules = (function Manager() {\n\tvar modules = {};\n\n\tfunction define(name, deps, impl) {\n\t\tfor (var i=0; i<deps.length; i++) {\n\t\t\tdeps[i] = modules[deps[i]];\n\t\t}\n\t\tmodules[name] = impl.apply( impl, deps );\n\t}\n\n\tfunction get(name) {\n\t\treturn modules[name];\n\t}\n\n\treturn {\n\t\tdefine: define,\n\t\tget: get\n\t};\n})();\n```\n\nThe key part of this code is `modules[name] = impl.apply(impl, deps)`. This is invoking the definition wrapper function for a module (passing in any dependencies), and storing the return value, the module's API, into an internal list of modules tracked by name.\n\nAnd here's how I might use it to define some modules:\n\n```js\nMyModules.define( \"bar\", [], function(){\n\tfunction hello(who) {\n\t\treturn \"Let me introduce: \" + who;\n\t}\n\n\treturn {\n\t\thello: hello\n\t};\n} );\n\nMyModules.define( \"foo\", [\"bar\"], function(bar){\n\tvar hungry = \"hippo\";\n\n\tfunction awesome() {\n\t\tconsole.log( bar.hello( hungry ).toUpperCase() );\n\t}\n\n\treturn {\n\t\tawesome: awesome\n\t};\n} );\n\nvar bar = MyModules.get( \"bar\" );\nvar foo = MyModules.get( \"foo\" );\n\nconsole.log(\n\tbar.hello( \"hippo\" )\n); // Let me introduce: hippo\n\nfoo.awesome(); // LET ME INTRODUCE: HIPPO\n```\n\nBoth the \"foo\" and \"bar\" modules are defined with a function that returns a public API. \"foo\" even receives the instance of \"bar\" as a dependency parameter, and can use it accordingly.\n\nSpend some time examining these code snippets to fully understand the power of closures put to use for our own good purposes. The key take-away is that there's not really any particular \"magic\" to module managers. 
They fulfill both characteristics of the module pattern I listed above: invoking a function definition wrapper, and keeping its return value as the API for that module.\n\nIn other words, modules are just modules, even if you put a friendly wrapper tool on top of them.\n\n### Future Modules\n\nES6 adds first-class syntax support for the concept of modules. When loaded via the module system, ES6 treats a file as a separate module. Each module can import other modules or specific API members, as well as export its own public API members.\n\n**Note:** Function-based modules aren't a statically recognized pattern (something the compiler knows about), so their API semantics aren't considered until run-time. That is, you can actually modify a module's API during the run-time (see earlier `publicAPI` discussion).\n\nBy contrast, ES6 Module APIs are static (the APIs don't change at run-time). Since the compiler knows *that*, it can (and does!) check during (file loading and) compilation that a reference to a member of an imported module's API *actually exists*. If the API reference doesn't exist, the compiler throws an \"early\" error at compile-time, rather than waiting for traditional dynamic run-time resolution (and errors, if any).\n\nES6 modules **do not** have an \"inline\" format; they must be defined in separate files (one per module). 
The browsers/engines have a default \"module loader\" (which is overridable, but that's well-beyond our discussion here) which asynchronously loads a module file when it's imported.\n\nConsider:\n\n**bar.js**\n```js\nfunction hello(who) {\n\treturn \"Let me introduce: \" + who;\n}\n\nexport { hello };\n```\n\n**foo.js**\n```js\n// import only `hello()` from the \"bar\" module\nimport { hello } from \"bar\";\n\nvar hungry = \"hippo\";\n\nfunction awesome() {\n\tconsole.log(\n\t\thello( hungry ).toUpperCase()\n\t);\n}\n\nexport { awesome };\n```\n\n```js\n// import the entire \"foo\" and \"bar\" modules\nimport * as foo from \"foo\";\nimport * as bar from \"bar\";\n\nconsole.log(\n\tbar.hello( \"rhino\" )\n); // Let me introduce: rhino\n\nfoo.awesome(); // LET ME INTRODUCE: HIPPO\n```\n\n**Note:** Separate files **\"foo.js\"** and **\"bar.js\"** would need to be created, with the contents as shown in the first two snippets, respectively. Then, your program would load/import those modules to use them, as shown in the third snippet.\n\n`import` imports one or more members from a module's API into the current scope, each to a bound variable (`hello` in our case). `import * as ..` imports an entire module API to a bound variable (`foo`, `bar` in our case). `export` exports an identifier (variable, function) to the public API for the current module. These statements can be used as many times in a module's definition as is necessary.\n\nThe contents inside the *module file* are treated as if enclosed in a scope closure, just like with the function-closure modules seen earlier.\n\n## Review (TL;DR)\n\nClosure seems to the un-enlightened like a mystical world set apart inside of JavaScript which only the few bravest souls can reach. 
But it's actually just a standard and almost obvious fact of how we write code in a lexically scoped environment, where functions are values and can be passed around at will.\n\n**Closure is when a function can remember and access its lexical scope even when it's invoked outside its lexical scope.**\n\nClosures can trip us up, for instance with loops, if we're not careful to recognize them and how they work. But they are also an immensely powerful tool, enabling patterns like *modules* in their various forms.\n\nModules require two key characteristics: 1) an outer wrapping function being invoked, to create the enclosing scope; and 2) the return value of the wrapping function must include a reference to at least one inner function that then has closure over the private inner scope of the wrapper.\n\nNow we can see closures all around our existing code, and we have the ability to recognize and leverage them to our own benefit!\n"
  },
  {
    "path": "scope & closures/toc.md",
    "content": "# You Don't Know JS: Scope & Closures\n\n## Table of Contents\n\n* Foreword\n* Preface\n* Chapter 1: What is Scope?\n\t* Compiler Theory\n\t* Understanding Scope\n\t* Nested Scope\n\t* Errors\n* Chapter 2: Lexical Scope\n\t* Lex-time\n\t* Cheating Lexical\n* Chapter 3: Function vs. Block Scope\n\t* Scope From Functions\n\t* Hiding In Plain Scope\n\t* Functions As Scopes\n\t* Blocks As Scopes\n* Chapter 4: Hoisting\n\t* Chicken Or The Egg?\n\t* The Compiler Strikes Again\n\t* Functions First\n* Chapter 5: Scope Closures\n\t* Enlightenment\n\t* Nitty Gritty\n\t* Now I Can See\n\t* Loops + Closure\n\t* Modules\n* Appendix A: Dynamic Scope\n* Appendix B: Polyfilling Block Scope\n* Appendix C: Lexical-this\n* Appendix D: Acknowledgments\n"
  },
  {
    "path": "this & object prototypes/README.md",
    "content": "# You Don't Know JS: *this* & Object Prototypes\n\n<img src=\"cover.jpg\" width=\"300\">\n\n-----\n\n**[Purchase digital/print copy from O'Reilly](http://shop.oreilly.com/product/0636920033738.do)**\n\n-----\n\n[Table of Contents](toc.md)\n\n* [Foreword](foreword.md) (by [Nick Berardi](https://github.com/nberardi))\n* [Preface](../preface.md)\n* [Chapter 1: *this* Or That?](ch1.md)\n* [Chapter 2: *this* All Makes Sense Now!](ch2.md)\n* [Chapter 3: Objects](ch3.md)\n* [Chapter 4: Mixing (Up) \"Class\" Objects](ch4.md)\n* [Chapter 5: Prototypes](ch5.md)\n* [Chapter 6: Behavior Delegation](ch6.md)\n* [Appendix A: ES6 *class*](apA.md)\n* [Appendix B: Thank You's!](apB.md)\n"
  },
  {
    "path": "this & object prototypes/apA.md",
    "content": "# You Don't Know JS: *this* & Object Prototypes\n# Appendix A: ES6 `class`\n\nIf there's any take-away message from the second half of this book (Chapters 4-6), it's that classes are an optional design pattern for code (not a necessary given), and that furthermore they are often quite awkward to implement in a `[[Prototype]]` language like JavaScript.\n\nThis awkwardness is *not* just about syntax, although that's a big part of it. Chapters 4 and 5 examined quite a bit of syntactic ugliness, from verbosity of `.prototype` references cluttering the code, to *explicit pseudo-polymorphism* (see Chapter 4) when you give methods the same name at different levels of the chain and try to implement a polymorphic reference from a lower-level method to a higher-level method. `.constructor` being wrongly interpreted as \"was constructed by\" and yet being unreliable for that definition is yet another syntactic ugly.\n\nBut the problems with class design are much deeper. Chapter 4 points out that classes in traditional class-oriented languages actually produce a *copy* action from parent to child to instance, whereas in `[[Prototype]]`, the action is **not** a copy, but rather the opposite -- a delegation link.\n\nWhen compared to the simplicity of OLOO-style code and behavior delegation (see Chapter 6), which embrace `[[Prototype]]` rather than hide from it, classes stand out as a sore thumb in JS.\n\n## `class`\n\nBut we *don't* need to re-argue that case again. I re-mention those issues briefly only so that you keep them fresh in your mind now that we turn our attention to the ES6 `class` mechanism. 
We'll demonstrate here how it works, and look at whether or not `class` does anything substantial to address any of those \"class\" concerns.\n\nLet's revisit the `Widget` / `Button` example from Chapter 6:\n\n```js\nclass Widget {\n\tconstructor(width,height) {\n\t\tthis.width = width || 50;\n\t\tthis.height = height || 50;\n\t\tthis.$elem = null;\n\t}\n\trender($where) {\n\t\tif (this.$elem) {\n\t\t\tthis.$elem.css( {\n\t\t\t\twidth: this.width + \"px\",\n\t\t\t\theight: this.height + \"px\"\n\t\t\t} ).appendTo( $where );\n\t\t}\n\t}\n}\n\nclass Button extends Widget {\n\tconstructor(width,height,label) {\n\t\tsuper( width, height );\n\t\tthis.label = label || \"Default\";\n\t\tthis.$elem = $( \"<button>\" ).text( this.label );\n\t}\n\trender($where) {\n\t\tsuper.render( $where );\n\t\tthis.$elem.click( this.onClick.bind( this ) );\n\t}\n\tonClick(evt) {\n\t\tconsole.log( \"Button '\" + this.label + \"' clicked!\" );\n\t}\n}\n```\n\nBeyond this syntax *looking* nicer, what problems does ES6 `class` solve?\n\n1. There are no more (well, sorta, see below!) references to `.prototype` cluttering the code.\n2. `Button` is declared directly to \"inherit from\" (aka `extends`) `Widget`, instead of needing to use `Object.create(..)` to replace a `.prototype` object that's linked, or having to set with `.__proto__` or `Object.setPrototypeOf(..)`.\n3. `super(..)` now gives us a very helpful **relative polymorphism** capability, so that any method at one level of the chain can refer relatively one level up the chain to a method of the same name. This includes a solution to the note from Chapter 4 about the weirdness of constructors not belonging to their class, and so being unrelated -- `super()` works inside constructors exactly as you'd expect.\n4. `class` literal syntax has no affordance for specifying properties (only methods). 
This might seem limiting to some, but in the vast majority of cases where a property (state) exists anywhere other than the end-of-chain \"instances\", it's usually a mistake and a surprise (as it's state that's implicitly \"shared\" among all \"instances\"). So, one *could* say the `class` syntax is protecting you from mistakes.\n5. `extends` lets you extend even built-in object (sub)types, like `Array` or `RegExp`, in a very natural way. Doing so without `class .. extends` has long been an exceedingly complex and frustrating task, one that only the most adept of framework authors have ever been able to accurately tackle. Now, it will be rather trivial!\n\nIn all fairness, those are some substantial solutions to many of the most obvious (syntactic) issues and surprises people have with classical prototype-style code.\n\n## `class` Gotchas\n\nIt's not all bubblegum and roses, though. There are still some deep and profoundly troubling issues with using \"classes\" as a design pattern in JS.\n\nFirstly, the `class` syntax may convince you a new \"class\" mechanism exists in JS as of ES6. **Not so.** `class` is, mostly, just syntactic sugar on top of the existing `[[Prototype]]` (delegation!) mechanism.\n\nThat means `class` is not actually copying definitions statically at declaration time the way it does in traditional class-oriented languages. 
If you change/replace a method (on purpose or by accident) on the parent \"class\", the child \"class\" and/or instances will still be \"affected\", in that they didn't get copies at declaration time, they are all still using the live-delegation model based on `[[Prototype]]`:\n\n```js\nclass C {\n\tconstructor() {\n\t\tthis.num = Math.random();\n\t}\n\trand() {\n\t\tconsole.log( \"Random: \" + this.num );\n\t}\n}\n\nvar c1 = new C();\nc1.rand(); // \"Random: 0.4324299...\"\n\nC.prototype.rand = function() {\n\tconsole.log( \"Random: \" + Math.round( this.num * 1000 ));\n};\n\nvar c2 = new C();\nc2.rand(); // \"Random: 867\"\n\nc1.rand(); // \"Random: 432\" -- oops!!!\n```\n\nThis only seems like reasonable behavior *if you already know* about the delegation nature of things, rather than expecting *copies* from \"real classes\". So the question to ask yourself is, why are you choosing `class` syntax for something fundamentally different from classes?\n\nDoesn't the ES6 `class` syntax **just make it harder** to see and understand the difference between traditional classes and delegated objects?\n\n`class` syntax *does not* provide a way to declare class member properties (only methods). So if you need to do that to track shared state among instances, then you end up going back to the ugly `.prototype` syntax, like this:\n\n```js\nclass C {\n\tconstructor() {\n\t\t// make sure to modify the shared state,\n\t\t// not set a shadowed property on the\n\t\t// instances!\n\t\tC.prototype.count++;\n\n\t\t// here, `this.count` works as expected\n\t\t// via delegation\n\t\tconsole.log( \"Hello: \" + this.count );\n\t}\n}\n\n// add a property for shared state directly to\n// prototype object\nC.prototype.count = 0;\n\nvar c1 = new C();\n// Hello: 1\n\nvar c2 = new C();\n// Hello: 2\n\nc1.count === 2; // true\nc1.count === c2.count; // true\n```\n\nThe biggest problem here is that it betrays the `class` syntax by exposing (leakage!) 
`.prototype` as an implementation detail.\n\nBut, we also still have the surprise gotcha that `this.count++` would implicitly create a separate shadowed `.count` property on both `c1` and `c2` objects, rather than updating the shared state. `class` offers us no consolation from that issue, except (presumably) to imply by lack of syntactic support that you shouldn't be doing that *at all*.\n\nMoreover, accidental shadowing is still a hazard:\n\n```js\nclass C {\n\tconstructor(id) {\n\t\t// oops, gotcha, we're shadowing the `id()` method\n\t\t// with a property value on the instance\n\t\tthis.id = id;\n\t}\n\tid() {\n\t\tconsole.log( \"Id: \" + this.id );\n\t}\n}\n\nvar c1 = new C( \"c1\" );\nc1.id(); // TypeError -- `c1.id` is now the string \"c1\"\n```\n\nThere are also some very subtle, nuanced issues with how `super` works. You might assume that `super` would be bound in an analogous way to how `this` gets bound (see Chapter 2), which is that `super` would always be bound to one level higher than whatever the current method's position in the `[[Prototype]]` chain is.\n\nHowever, for performance reasons (`this` binding is already expensive), `super` is not bound dynamically. It's bound sort of \"statically\", at declaration time. No big deal, right?\n\nEhh... maybe, maybe not. 
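As a point of contrast, recall (from Chapter 2) how freely a `this`-based method can be reassigned between objects, because the *implicit binding* rule late-binds `this` at each call-site. A quick illustrative sketch (the object names here are ours, not from the chapter):

```js
var obj1 = {
	name: "obj1",
	who: function() { console.log( this.name ); }
};

var obj2 = { name: "obj2" };

// share the very same function between both objects
obj2.who = obj1.who;

obj1.who(); // "obj1" -- `this` late-binds to each call-site
obj2.who(); // "obj2" -- same function, different `this`
```

It's exactly that kind of late binding which, as we're about to see, `super` does *not* get.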
If you, like most JS devs, start assigning functions around to different objects (which came from `class` definitions), in various different ways, you probably won't be very aware that in all those cases, the `super` mechanism under the covers is having to be re-bound each time.\n\nAnd depending on what sorts of syntactic approaches you take to these assignments, there may very well be cases where the `super` can't be properly bound (at least, not where you suspect), so you may (at time of writing, TC39 discussion is ongoing on the topic) have to manually bind `super` with `toMethod(..)` (kinda like you have to do `bind(..)` for `this` -- see Chapter 2).\n\nYou're used to being able to assign around methods to different objects to *automatically* take advantage of the dynamism of `this` via the *implicit binding* rule (see Chapter 2). But the same will likely not be true with methods that use `super`.\n\nConsider what `super` should do here (against `D` and `E`):\n\n```js\nclass P {\n\tfoo() { console.log( \"P.foo\" ); }\n}\n\nclass C extends P {\n\tfoo() {\n\t\tsuper.foo();\n\t}\n}\n\nvar c1 = new C();\nc1.foo(); // \"P.foo\"\n\nvar D = {\n\tfoo: function() { console.log( \"D.foo\" ); }\n};\n\nvar E = {\n\tfoo: C.prototype.foo\n};\n\n// Link E to D for delegation\nObject.setPrototypeOf( E, D );\n\nE.foo(); // \"P.foo\"\n```\n\nIf you were thinking (quite reasonably!) that `super` would be bound dynamically at call-time, you might expect that `super.foo()` would automatically recognize that `E` delegates to `D`, so `E.foo()` using `super.foo()` should call to `D.foo()`.\n\n**Not so.** For performance pragmatism reasons, `super` is not *late bound* (aka, dynamically bound) like `this` is. 
Instead it's derived at call-time from `[[HomeObject]].[[Prototype]]`, where `[[HomeObject]]` is statically bound at creation time.\n\nIn this particular case, the `super` reference is still resolving to `P.foo()`, since the method's `[[HomeObject]]` is still `C` and `C.[[Prototype]]` is `P`.\n\nThere will *probably* be ways to manually address such gotchas. Using `toMethod(..)` to bind/rebind a method's `[[HomeObject]]` (along with setting the `[[Prototype]]` of that object!) appears to work in this scenario:\n\n```js\nvar D = {\n\tfoo: function() { console.log( \"D.foo\" ); }\n};\n\n// Link E to D for delegation\nvar E = Object.create( D );\n\n// manually bind `foo`'s `[[HomeObject]]` as\n// `E`; `E.[[Prototype]]` is `D`, so\n// `super` resolves to `D.foo()`\nE.foo = C.prototype.foo.toMethod( E, \"foo\" );\n\nE.foo(); // \"D.foo\"\n```\n\n**Note:** `toMethod(..)` clones the method, and takes `homeObject` as its first parameter (which is why we pass `E`), and the second parameter (optionally) sets a `name` for the new method (which we keep as \"foo\").\n\nIt remains to be seen if there are other corner-case gotchas that devs will run into beyond this scenario. Regardless, you will have to be diligent and stay aware of which places the engine automatically figures out `super` for you, and which places you have to manually take care of it. **Ugh!**\n\n## Static > Dynamic?\n\nBut the biggest problem of all about ES6 `class` is that all these various gotchas mean `class` sorta opts you into a syntax which seems to imply (like traditional classes) that once you declare a `class`, it's a static definition of a (future instantiated) thing. You completely lose sight of the fact that `C` is an object, a concrete thing, which you can directly interact with.\n\nIn traditional class-oriented languages, you never adjust the definition of a class later, so the class design pattern doesn't suggest such capabilities. 
But **one of the most powerful parts** of JS is that it *is* dynamic, and the definition of any object is (unless you make it immutable) a fluid and mutable *thing*.\n\n`class` seems to imply you shouldn't do such things, by forcing you into the uglier `.prototype` syntax to do so, or forcing you to think about `super` gotchas, etc. It also offers *very little* support for any of the pitfalls that this dynamism can bring.\n\nIn other words, it's as if `class` is telling you: \"dynamic is too hard, so it's probably not a good idea. Here's a static-looking syntax, so code your stuff statically.\"\n\nWhat a sad commentary on JavaScript: **dynamic is too hard, let's pretend to be (but not actually be!) static**.\n\nThese are the reasons why ES6 `class` is masquerading as a nice solution to syntactic headaches, but it's actually muddying the waters further and making things worse for JS and for clear and concise understanding.\n\n**Note:** If you use the `.bind(..)` utility to make a hard-bound function (see Chapter 2), the function created is not subclassable with ES6 `extends` like normal functions are.\n\n## Review (TL;DR)\n\n`class` does a very good job of pretending to fix the problems with the class/inheritance design pattern in JS. But it actually does the opposite: **it hides many of the problems, and introduces other subtle but dangerous ones**.\n\n`class` contributes to the ongoing confusion of \"class\" in JavaScript which has plagued the language for nearly two decades. 
In some respects, it asks more questions than it answers, and it feels in totality like a very unnatural fit on top of the elegant simplicity of the `[[Prototype]]` mechanism.\n\nBottom line: if ES6 `class` makes it harder to robustly leverage `[[Prototype]]`, and hides the most important nature of the JS object mechanism -- **the live delegation links between objects** -- shouldn't we see `class` as creating more troubles than it solves, and just relegate it to an anti-pattern?\n\nI can't really answer that question for you. But I hope this book has fully explored the issue at a deeper level than you've ever gone before, and has given you the information you need *to answer it yourself*.\n"
  },
  {
    "path": "this & object prototypes/apB.md",
    "content": "# You Don't Know JS: *this* & Object Prototypes\n# Appendix B: Acknowledgments\n\nI have many people to thank for making this book title and the overall series happen.\n\nFirst, I must thank my wife Christen Simpson, and my two kids Ethan and Emily, for putting up with Dad always pecking away at the computer. Even when not writing books, my obsession with JavaScript glues my eyes to the screen far more than it should. That time I borrow from my family is the reason these books can so deeply and completely explain JavaScript to you, the reader. I owe my family everything.\n\nI'd like to thank my editors at O'Reilly, namely Simon St.Laurent and Brian MacDonald, as well as the rest of the editorial and marketing staff. They are fantastic to work with, and have been especially accommodating during this experiment into \"open source\" book writing, editing, and production.\n\nThank you to the many folks who have participated in making this book series better by providing editorial suggestions and corrections, including Shelley Powers, Tim Ferro, Evan Borden, Forrest L. Norvell, Jennifer Davis, Jesse Harlin, and many others. A big thank you to Nick Berardi for writing the Foreword for this title.\n\nThank you to the countless folks in the community, including members of the TC39 committee, who have shared so much knowledge with the rest of us, and especially tolerated my incessant questions and explorations with patience and detail. 
John-David Dalton, Juriy \"kangax\" Zaytsev, Mathias Bynens, Axel Rauschmayer, Nicholas Zakas, Angus Croll, Reginald Braithwaite, Dave Herman, Brendan Eich, Allen Wirfs-Brock, Bradley Meck, Domenic Denicola, David Walsh, Tim Disney, Peter van der Zee, Andrea Giammarchi, Kit Cambridge, Eric Elliott, and so many others, I can't even scratch the surface.\n\nThe *You Don't Know JS* book series was born on Kickstarter, so I also wish to thank all my (nearly) 500 generous backers, without whom this book series could not have happened:\n\n> Jan Szpila, nokiko, Murali Krishnamoorthy, Ryan Joy, Craig Patchett, pdqtrader, Dale Fukami, ray hatfield, R0drigo Perez [Mx], Dan Petitt, Jack Franklin, Andrew Berry, Brian Grinstead, Rob Sutherland, Sergi Meseguer, Phillip Gourley, Mark Watson, Jeff Carouth, Alfredo Sumaran, Martin Sachse, Marcio Barrios, Dan, AimelyneM, Matt Sullivan, Delnatte Pierre-Antoine, Jake Smith, Eugen Tudorancea, Iris, David Trinh, simonstl, Ray Daly, Uros Gruber, Justin Myers, Shai Zonis, Mom & Dad, Devin Clark, Dennis Palmer, Brian Panahi Johnson, Josh Marshall, Marshall, Dennis Kerr, Matt Steele, Erik Slagter, Sacah, Justin Rainbow, Christian Nilsson, Delapouite, D.Pereira, Nicolas Hoizey, George V. 
Reilly, Dan Reeves, Bruno Laturner, Chad Jennings, Shane King, Jeremiah Lee Cohick, od3n, Stan Yamane, Marko Vucinic, Jim B, Stephen Collins, Ægir Þorsteinsson, Eric Pederson, Owain, Nathan Smith, Jeanetteurphy, Alexandre ELISÉ, Chris Peterson, Rik Watson, Luke Matthews, Justin Lowery, Morten Nielsen, Vernon Kesner, Chetan Shenoy, Paul Tregoing, Marc Grabanski, Dion Almaer, Andrew Sullivan, Keith Elsass, Tom Burke, Brian Ashenfelter, David Stuart, Karl Swedberg, Graeme, Brandon Hays, John Christopher, Gior, manoj reddy, Chad Smith, Jared Harbour, Minoru TODA, Chris Wigley, Daniel Mee, Mike, Handyface, Alex Jahraus, Carl Furrow, Rob Foulkrod, Max Shishkin, Leigh Penny Jr., Robert Ferguson, Mike van Hoenselaar, Hasse Schougaard, rajan venkataguru, Jeff Adams, Trae Robbins, Rolf Langenhuijzen, Jorge Antunes, Alex Koloskov, Hugh Greenish, Tim Jones, Jose Ochoa, Michael Brennan-White, Naga Harish Muvva, Barkóczi Dávid, Kitt Hodsden, Paul McGraw, Sascha Goldhofer, Andrew Metcalf, Markus Krogh, Michael Mathews, Matt Jared, Juanfran, Georgie Kirschner, Kenny Lee, Ted Zhang, Amit Pahwa, Inbal Sinai, Dan Raine, Schabse Laks, Michael Tervoort, Alexandre Abreu, Alan Joseph Williams, NicolasD, Cindy Wong, Reg Braithwaite, LocalPCGuy, Jon Friskics, Chris Merriman, John Pena, Jacob Katz, Sue Lockwood, Magnus Johansson, Jeremy Crapsey, Grzegorz Pawłowski, nico nuzzaci, Christine Wilks, Hans Bergren, charles montgomery, Ariel בר-לבב Fogel, Ivan Kolev, Daniel Campos, Hugh Wood, Christian Bradford, Frédéric Harper, Ionuţ Dan Popa, Jeff Trimble, Rupert Wood, Trey Carrico, Pancho Lopez, Joël kuijten, Tom A Marra, Jeff Jewiss, Jacob Rios, Paolo Di Stefano, Soledad Penades, Chris Gerber, Andrey Dolganov, Wil Moore III, Thomas Martineau, Kareem, Ben Thouret, Udi Nir, Morgan Laupies, jory carson-burson, Nathan L Smith, Eric Damon Walters, Derry Lozano-Hoyland, Geoffrey Wiseman, mkeehner, KatieK, Scott MacFarlane, Brian LaShomb, Adrien Mas, christopher ross, Ian Littman, Dan Atkinson, 
Elliot Jobe, Nick Dozier, Peter Wooley, John Hoover, dan, Martin A. Jackson, Héctor Fernando Hurtado, andy ennamorato, Paul Seltmann, Melissa Gore, Dave Pollard, Jack Smith, Philip Da Silva, Guy Israeli, @megalithic, Damian Crawford, Felix Gliesche, April Carter Grant, Heidi, jim tierney, Andrea Giammarchi, Nico Vignola, Don Jones, Chris Hartjes, Alex Howes, john gibbon, David J. Groom, BBox, Yu 'Dilys' Sun, Nate Steiner, Brandon Satrom, Brian Wyant, Wesley Hales, Ian Pouncey, Timothy Kevin Oxley, George Terezakis, sanjay raj, Jordan Harband, Marko McLion, Wolfgang Kaufmann, Pascal Peuckert, Dave Nugent, Markus Liebelt, Welling Guzman, Nick Cooley, Daniel Mesquita, Robert Syvarth, Chris Coyier, Rémy Bach, Adam Dougal, Alistair Duggin, David Loidolt, Ed Richer, Brian Chenault, GoldFire Studios, Carles Andrés, Carlos Cabo, Yuya Saito, roberto ricardo, Barnett Klane, Mike Moore, Kevin Marx, Justin Love, Joe Taylor, Paul Dijou, Michael Kohler, Rob Cassie, Mike Tierney, Cody Leroy Lindley, tofuji, Shimon Schwartz, Raymond, Luc De Brouwer, David Hayes, Rhys Brett-Bowen, Dmitry, Aziz Khoury, Dean, Scott Tolinski - Level Up, Clement Boirie, Djordje Lukic, Anton Kotenko, Rafael Corral, Philip Hurwitz, Jonathan Pidgeon, Jason Campbell, Joseph C., SwiftOne, Jan Hohner, Derick Bailey, getify, Daniel Cousineau, Chris Charlton, Eric Turner, David Turner, Joël Galeran, Dharma Vagabond, adam, Dirk van Bergen, dave ♥♫★ furf, Vedran Zakanj, Ryan McAllen, Natalie Patrice Tucker, Eric J. Bivona, Adam Spooner, Aaron Cavano, Kelly Packer, Eric J, Martin Drenovac, Emilis, Michael Pelikan, Scott F. 
Walter, Josh Freeman, Brandon Hudgeons, vijay chennupati, Bill Glennon, Robin R., Troy Forster, otaku_coder, Brad, Scott, Frederick Ostrander, Adam Brill, Seb Flippence, Michael Anderson, Jacob, Adam Randlett, Standard, Joshua Clanton, Sebastian Kouba, Chris Deck, SwordFire, Hannes Papenberg, Richard Woeber, hnzz, Rob Crowther, Jedidiah Broadbent, Sergey Chernyshev, Jay-Ar Jamon, Ben Combee, luciano bonachela, Mark Tomlinson, Kit Cambridge, Michael Melgares, Jacob Adams, Adrian Bruinhout, Bev Wieber, Scott Puleo, Thomas Herzog, April Leone, Daniel Mizieliński, Kees van Ginkel, Jon Abrams, Erwin Heiser, Avi Laviad, David newell, Jean-Francois Turcot, Niko Roberts, Erik Dana, Charles Neill, Aaron Holmes, Grzegorz Ziółkowski, Nathan Youngman, Timothy, Jacob Mather, Michael Allan, Mohit Seth, Ryan Ewing, Benjamin Van Treese, Marcelo Santos, Denis Wolf, Phil Keys, Chris Yung, Timo Tijhof, Martin Lekvall, Agendine, Greg Whitworth, Helen Humphrey, Dougal Campbell, Johannes Harth, Bruno Girin, Brian Hough, Darren Newton, Craig McPheat, Olivier Tille, Dennis Roethig, Mathias Bynens, Brendan Stromberger, sundeep, John Meyer, Ron Male, John F Croston III, gigante, Carl Bergenhem, B.J. 
May, Rebekah Tyler, Ted Foxberry, Jordan Reese, Terry Suitor, afeliz, Tom Kiefer, Darragh Duffy, Kevin Vanderbeken, Andy Pearson, Simon Mac Donald, Abid Din, Chris Joel, Tomas Theunissen, David Dick, Paul Grock, Brandon Wood, John Weis, dgrebb, Nick Jenkins, Chuck Lane, Johnny Megahan, marzsman, Tatu Tamminen, Geoffrey Knauth, Alexander Tarmolov, Jeremy Tymes, Chad Auld, Sean Parmelee, Rob Staenke, Dan Bender, Yannick derwa, Joshua Jones, Geert Plaisier, Tom LeZotte, Christen Simpson, Stefan Bruvik, Justin Falcone, Carlos Santana, Michael Weiss, Pablo Villoslada, Peter deHaan, Dimitris Iliopoulos, seyDoggy, Adam Jordens, Noah Kantrowitz, Amol M, Matthew Winnard, Dirk Ginader, Phinam Bui, David Rapson, Andrew Baxter, Florian Bougel, Michael George, Alban Escalier, Daniel Sellers, Sasha Rudan, John Green, Robert Kowalski, David I. Teixeira (@ditma, Charles Carpenter, Justin Yost, Sam S, Denis Ciccale, Kevin Sheurs, Yannick Croissant, Pau Fracés, Stephen McGowan, Shawn Searcy, Chris Ruppel, Kevin Lamping, Jessica Campbell, Christopher Schmitt, Sablons, Jonathan Reisdorf, Bunni Gek, Teddy Huff, Michael Mullany, Michael Fürstenberg, Carl Henderson, Rick Yoesting, Scott Nichols, Hernán Ciudad, Andrew Maier, Mike Stapp, Jesse Shawl, Sérgio Lopes, jsulak, Shawn Price, Joel Clermont, Chris Ridmann, Sean Timm, Jason Finch, Aiden Montgomery, Elijah Manor, Derek Gathright, Jesse Harlin, Dillon Curry, Courtney Myers, Diego Cadenas, Arne de Bree, João Paulo Dubas, James Taylor, Philipp Kraeutli, Mihai Păun, Sam Gharegozlou, joshjs, Matt Murchison, Eric Windham, Timo Behrmann, Andrew Hall, joshua price, Théophile Villard\n\nThis book series is being produced in an open source fashion, including editing and production. We owe GitHub a debt of gratitude for making that sort of thing possible for the community!\n\nThank you again to all the countless folks I didn't name but who I nonetheless owe thanks. 
May this book series be \"owned\" by all of us and serve to contribute to increasing awareness and understanding of the JavaScript language, to the benefit of all current and future community contributors.\n"
  },
  {
    "path": "this & object prototypes/ch1.md",
"content": "# You Don't Know JS: *this* & Object Prototypes\n# Chapter 1: `this` Or That?\n\nOne of the most confused mechanisms in JavaScript is the `this` keyword. It's a special identifier keyword that's automatically defined in the scope of every function, but what exactly it refers to bedevils even seasoned JavaScript developers.\n\n> Any sufficiently *advanced* technology is indistinguishable from magic. -- Arthur C. Clarke\n\nJavaScript's `this` mechanism isn't actually *that* advanced, but developers often paraphrase that quote in their own mind by inserting \"complex\" or \"confusing\", and there's no question that without clear understanding, `this` can seem downright magical in *your* confusion.\n\n**Note:** The word \"this\" is a terribly common pronoun in general discourse. So, it can be very difficult, especially verbally, to determine whether we are using \"this\" as a pronoun or using it to refer to the actual keyword identifier. For clarity, I will always use `this` to refer to the special keyword, and \"this\" or *this* or this otherwise.\n\n## Why `this`?\n\nIf the `this` mechanism is so confusing, even to seasoned JavaScript developers, one may wonder why it's even useful. Is it more trouble than it's worth? Before we jump into the *how*, we should examine the *why*.\n\nLet's try to illustrate the motivation and utility of `this`:\n\n```js\nfunction identify() {\n\treturn this.name.toUpperCase();\n}\n\nfunction speak() {\n\tvar greeting = \"Hello, I'm \" + identify.call( this );\n\tconsole.log( greeting );\n}\n\nvar me = {\n\tname: \"Kyle\"\n};\n\nvar you = {\n\tname: \"Reader\"\n};\n\nidentify.call( me ); // KYLE\nidentify.call( you ); // READER\n\nspeak.call( me ); // Hello, I'm KYLE\nspeak.call( you ); // Hello, I'm READER\n```\n\nIf the *how* of this snippet confuses you, don't worry! We'll get to that shortly. 
Just set those questions aside briefly so we can look into the *why* more clearly.\n\nThis code snippet allows the `identify()` and `speak()` functions to be re-used against multiple *context* (`me` and `you`) objects, rather than needing a separate version of the function for each object.\n\nInstead of relying on `this`, you could have explicitly passed in a context object to both `identify()` and `speak()`.\n\n```js\nfunction identify(context) {\n\treturn context.name.toUpperCase();\n}\n\nfunction speak(context) {\n\tvar greeting = \"Hello, I'm \" + identify( context );\n\tconsole.log( greeting );\n}\n\nidentify( you ); // READER\nspeak( me ); // Hello, I'm KYLE\n```\n\nHowever, the `this` mechanism provides a more elegant way of implicitly \"passing along\" an object reference, leading to cleaner API design and easier re-use.\n\nThe more complex your usage pattern is, the more clearly you'll see that passing context around as an explicit parameter is often messier than passing around a `this` context. When we explore objects and prototypes, you will see the helpfulness of a collection of functions being able to automatically reference the proper context object.\n\n## Confusions\n\nWe'll soon begin to explain how `this` *actually* works, but first we must  dispel some misconceptions about how it *doesn't* actually work.\n\nThe name \"this\" creates confusion when developers try to think about it too literally. There are two meanings often assumed, but both are incorrect.\n\n### Itself\n\nThe first common temptation is to assume `this` refers to the function itself. That's a reasonable grammatical inference, at least.\n\nWhy would you want to refer to a function from inside itself? 
The most common reasons would be things like recursion (calling a function from inside itself) or having an event handler that can unbind itself when it's first called.\n\nDevelopers new to JS's mechanisms often think that referencing the function as an object (all functions in JavaScript are objects!) lets you store *state* (values in properties) between function calls. While this is certainly possible and has some limited uses, the rest of the book will expound on many other patterns for *better* places to store state besides the function object.\n\nBut for just a moment, we'll explore that pattern, to illustrate how `this` doesn't let a function get a reference to itself like we might have assumed.\n\nConsider the following code, where we attempt to track how many times a function (`foo`) was called:\n\n```js\nfunction foo(num) {\n\tconsole.log( \"foo: \" + num );\n\n\t// keep track of how many times `foo` is called\n\tthis.count++;\n}\n\nfoo.count = 0;\n\nvar i;\n\nfor (i=0; i<10; i++) {\n\tif (i > 5) {\n\t\tfoo( i );\n\t}\n}\n// foo: 6\n// foo: 7\n// foo: 8\n// foo: 9\n\n// how many times was `foo` called?\nconsole.log( foo.count ); // 0 -- WTF?\n```\n\n`foo.count` is *still* `0`, even though the four `console.log` statements clearly indicate `foo(..)` was in fact called four times. The frustration stems from a *too literal* interpretation of what `this` (in `this.count++`) means.\n\nWhen the code executes `foo.count = 0`, indeed it's adding a property `count` to the function object `foo`. 
But for the `this.count` reference inside of the function, `this` is not in fact pointing *at all* to that function object, and so even though the property names are the same, the root objects are different, and confusion ensues.\n\n**Note:** A responsible developer *should* ask at this point, \"If I was incrementing a `count` property but it wasn't the one I expected, which `count` *was* I incrementing?\" In fact, were she to dig deeper, she would find that she had accidentally created a global variable `count` (see Chapter 2 for *how* that happened!), and it currently has the value `NaN`. Of course, once she identifies this peculiar outcome, she then has a whole other set of questions: \"How was it global, and why did it end up `NaN` instead of some proper count value?\" (see Chapter 2).\n\nInstead of stopping at this point and digging into why the `this` reference doesn't seem to be behaving as *expected*, and answering those tough but important questions, many developers simply avoid the issue altogether, and hack toward some other solution, such as creating another object to hold the `count` property:\n\n```js\nfunction foo(num) {\n\tconsole.log( \"foo: \" + num );\n\n\t// keep track of how many times `foo` is called\n\tdata.count++;\n}\n\nvar data = {\n\tcount: 0\n};\n\nvar i;\n\nfor (i=0; i<10; i++) {\n\tif (i > 5) {\n\t\tfoo( i );\n\t}\n}\n// foo: 6\n// foo: 7\n// foo: 8\n// foo: 9\n\n// how many times was `foo` called?\nconsole.log( data.count ); // 4\n```\n\nWhile it is true that this approach \"solves\" the problem, unfortunately it simply ignores the real problem -- lack of understanding what `this` means and how it works -- and instead falls back to the comfort zone of a more familiar mechanism: lexical scope.\n\n**Note:** Lexical scope is a perfectly fine and useful mechanism; I am not belittling the use of it, by any means (see *\"Scope & Closures\"* title of this book series). 
But constantly *guessing* at how to use `this`, and usually being *wrong*, is not a good reason to retreat back to lexical scope and never learn *why* `this` eludes you.\n\nTo reference a function object from inside itself, `this` by itself will typically be insufficient. You generally need a reference to the function object via a lexical identifier (variable) that points at it.\n\nConsider these two functions:\n\n```js\nfunction foo() {\n\tfoo.count = 4; // `foo` refers to itself\n}\n\nsetTimeout( function(){\n\t// anonymous function (no name), cannot\n\t// refer to itself\n}, 10 );\n```\n\nIn the first function, called a \"named function\", `foo` is a reference that can be used to refer to the function from inside itself.\n\nBut in the second example, the function callback passed to `setTimeout(..)` has no name identifier (so called an \"anonymous function\"), so there's no proper way to refer to the function object itself.\n\n**Note:** The old-school but now deprecated and frowned-upon `arguments.callee` reference inside a function *also* points to the function object of the currently executing function. This reference is typically the only way to access an anonymous function's object from inside itself. The best approach, however, is to avoid the use of anonymous functions altogether, at least for those which require a self-reference, and instead use a named function (expression). 
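
A named function expression makes this concrete. Here's a minimal sketch (the `count` / `counted` names are purely illustrative) of a function expression whose name gives it a reliable self-reference, such as for recursion:

```js
// a function expression assigned to `counted`, but *named* `count`
var counted = function count(n) {
	if (n <= 1) {
		return 1;
	}
	// `count` is a lexical self-reference available inside the function
	return n * count( n - 1 );
};

console.log( counted( 4 ) ); // 24
```

The name `count` is only in scope inside the function itself, so unlike a function declaration's name, it doesn't occupy an identifier in the enclosing scope.
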

So another solution to our running example would have been to use the `foo` identifier as a function object reference in each place, and not use `this` at all, which *works*:\n\n```js\nfunction foo(num) {\n\tconsole.log( \"foo: \" + num );\n\n\t// keep track of how many times `foo` is called\n\tfoo.count++;\n}\n\nfoo.count = 0;\n\nvar i;\n\nfor (i=0; i<10; i++) {\n\tif (i > 5) {\n\t\tfoo( i );\n\t}\n}\n// foo: 6\n// foo: 7\n// foo: 8\n// foo: 9\n\n// how many times was `foo` called?\nconsole.log( foo.count ); // 4\n```\n\nHowever, that approach similarly side-steps *actual* understanding of `this` and relies entirely on the lexical scoping of variable `foo`.\n\nYet another way of approaching the issue is to force `this` to actually point at the `foo` function object:\n\n```js\nfunction foo(num) {\n\tconsole.log( \"foo: \" + num );\n\n\t// keep track of how many times `foo` is called\n\t// Note: `this` IS actually `foo` now, based on\n\t// how `foo` is called (see below)\n\tthis.count++;\n}\n\nfoo.count = 0;\n\nvar i;\n\nfor (i=0; i<10; i++) {\n\tif (i > 5) {\n\t\t// using `call(..)`, we ensure the `this`\n\t\t// points at the function object (`foo`) itself\n\t\tfoo.call( foo, i );\n\t}\n}\n// foo: 6\n// foo: 7\n// foo: 8\n// foo: 9\n\n// how many times was `foo` called?\nconsole.log( foo.count ); // 4\n```\n\n**Instead of avoiding `this`, we embrace it.** We'll explain in a little bit *how* such techniques work much more completely, so don't worry if you're still a bit confused!\n\n### Its Scope\n\nThe next most common misconception about the meaning of `this` is that it somehow refers to the function's scope. It's a tricky question, because in one sense there is some truth, but in the other sense, it's quite misguided.\n\nTo be clear, `this` does not, in any way, refer to a function's **lexical scope**. 
It is true that internally, scope is kind of like an object with properties for each of the available identifiers. But the scope \"object\" is not accessible to JavaScript code. It's an inner part of the *Engine*'s implementation.\n\nConsider code which attempts (and fails!) to cross over the boundary and use `this` to implicitly refer to a function's lexical scope:\n\n```js\nfunction foo() {\n\tvar a = 2;\n\tthis.bar();\n}\n\nfunction bar() {\n\tconsole.log( this.a );\n}\n\nfoo(); //undefined\n```\n\nThere's more than one mistake in this snippet. While it may seem contrived, the code you see is a distillation of actual real-world code that has been exchanged in public community help forums. It's a wonderful (if not sad) illustration of just how misguided `this` assumptions can be.\n\nFirstly, an attempt is made to reference the `bar()` function via `this.bar()`. It is almost certainly an *accident* that it works, but we'll explain the *how* of that shortly. The most natural way to have invoked `bar()` would have been to omit the leading `this.` and just make a lexical reference to the identifier.\n\nHowever, the developer who writes such code is attempting to use `this` to create a bridge between the lexical scopes of `foo()` and `bar()`, so that `bar()` has access to the variable `a` in the inner scope of `foo()`. **No such bridge is possible.** You cannot use a `this` reference to look something up in a lexical scope. It is not possible.\n\nEvery time you feel yourself trying to mix lexical scope look-ups with `this`, remind yourself: *there is no bridge*.\n\n## What's `this`?\n\nHaving set aside various incorrect assumptions, let us now turn our attention to how the `this` mechanism really works.\n\nWe said earlier that `this` is not an author-time binding but a runtime binding. It is contextual based on the conditions of the function's invocation. 
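
As a quick preview of what that means, here's a small sketch (the `speak` function and the two objects are illustrative) of one function producing two different `this` bindings purely from how it's called:

```js
function speak() {
	return this.sound;
}

var cat = { sound: "meow", speak: speak };
var dog = { sound: "woof", speak: speak };

// same function, two call-sites, two different `this` bindings
console.log( cat.speak() ); // "meow"
console.log( dog.speak() ); // "woof"
```
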
`this` binding has nothing to do with where a function is declared, but has instead everything to do with the manner in which the function is called.\n\nWhen a function is invoked, an activation record, otherwise known as an execution context, is created. This record contains information about where the function was called from (the call-stack), *how* the function was invoked, what parameters were passed, etc. One of the properties of this record is the `this` reference which will be used for the duration of that function's execution.\n\nIn the next chapter, we will learn to find a function's **call-site** to determine how its execution will bind `this`.\n\n## Review (TL;DR)\n\n`this` binding is a constant source of confusion for the JavaScript developer who does not take the time to learn how the mechanism actually works. Guesses, trial-and-error, and blind copy-n-paste from Stack Overflow answers is not an effective or proper way to leverage *this* important `this` mechanism.\n\nTo learn `this`, you first have to learn what `this` is *not*, despite any assumptions or misconceptions that may lead you down those paths. `this` is neither a reference to the function itself, nor is it a reference to the function's *lexical* scope.\n\n`this` is actually a binding that is made when a function is invoked, and *what* it references is determined entirely by the call-site where the function is called.\n"
  },
  {
    "path": "this & object prototypes/ch2.md",
    "content": "# You Don't Know JS: *this* & Object Prototypes\n# Chapter 2: `this` All Makes Sense Now!\n\nIn Chapter 1, we discarded various misconceptions about `this` and learned instead that `this` is a binding made for each function invocation, based entirely on its **call-site** (how the function is called).\n\n## Call-site\n\nTo understand `this` binding, we have to understand the call-site: the location in code where a function is called (**not where it's declared**). We must inspect the call-site to answer the question: what's *this* `this` a reference to?\n\nFinding the call-site is generally: \"go locate where a function is called from\", but it's not always that easy, as certain coding patterns can obscure the *true* call-site.\n\nWhat's important is to think about the **call-stack** (the stack of functions that have been called to get us to the current moment in execution). The call-site we care about is *in* the invocation *before* the currently executing function.\n\nLet's demonstrate call-stack and call-site:\n\n```js\nfunction baz() {\n    // call-stack is: `baz`\n    // so, our call-site is in the global scope\n\n    console.log( \"baz\" );\n    bar(); // <-- call-site for `bar`\n}\n\nfunction bar() {\n    // call-stack is: `baz` -> `bar`\n    // so, our call-site is in `baz`\n\n    console.log( \"bar\" );\n    foo(); // <-- call-site for `foo`\n}\n\nfunction foo() {\n    // call-stack is: `baz` -> `bar` -> `foo`\n    // so, our call-site is in `bar`\n\n    console.log( \"foo\" );\n}\n\nbaz(); // <-- call-site for `baz`\n```\n\nTake care when analyzing code to find the actual call-site (from the call-stack), because it's the only thing that matters for `this` binding.\n\n**Note:** You can visualize a call-stack in your mind by looking at the chain of function calls in order, as we did with the comments in the above snippet. But this is painstaking and error-prone. Another way of seeing the call-stack is using a debugger tool in your browser. 
Most modern desktop browsers have built-in developer tools, which includes a JS debugger. In the above snippet, you could have set a breakpoint in the tools for the first line of the `foo()` function, or simply inserted the `debugger;` statement on that first line. When you run the page, the debugger will pause at this location, and will show you a list of the functions that have been called to get to that line, which will be your call stack. So, if you're trying to diagnose `this` binding, use the developer tools to get the call-stack, then find the second item from the top, and that will show you the real call-site.\n\n## Nothing But Rules\n\nWe turn our attention now to *how* the call-site determines where `this` will point during the execution of a function.\n\nYou must inspect the call-site and determine which of 4 rules applies. We will first explain each of these 4 rules independently, and then we will illustrate their order of precedence, if multiple rules *could* apply to the call-site.\n\n### Default Binding\n\nThe first rule we will examine comes from the most common case of function calls: standalone function invocation. Think of *this* `this` rule as the default catch-all rule when none of the other rules apply.\n\nConsider this code:\n\n```js\nfunction foo() {\n\tconsole.log( this.a );\n}\n\nvar a = 2;\n\nfoo(); // 2\n```\n\nThe first thing to note, if you were not already aware, is that variables declared in the global scope, as `var a = 2` is, are synonymous with global-object properties of the same name. They're not copies of each other, they *are* each other. Think of it as two sides of the same coin.\n\nSecondly, we see that when `foo()` is called, `this.a` resolves to our global variable `a`. Why? Because in this case, the *default binding* for `this` applies to the function call, and so points `this` at the global object.\n\nHow do we know that the *default binding* rule applies here? We examine the call-site to see how `foo()` is called. 
In our snippet, `foo()` is called with a plain, un-decorated function reference. None of the other rules we will demonstrate will apply here, so the *default binding* applies instead.\n\nIf `strict mode` is in effect, the global object is not eligible for the *default binding*, so the `this` is instead set to `undefined`.\n\n```js\nfunction foo() {\n\t\"use strict\";\n\n\tconsole.log( this.a );\n}\n\nvar a = 2;\n\nfoo(); // TypeError: `this` is `undefined`\n```\n\nA subtle but important detail is: even though the overall `this` binding rules are entirely based on the call-site, the global object is **only** eligible for the *default binding* if the **contents** of `foo()` are **not** running in `strict mode`; the `strict mode` state of the call-site of `foo()` is irrelevant.\n\n```js\nfunction foo() {\n\tconsole.log( this.a );\n}\n\nvar a = 2;\n\n(function(){\n\t\"use strict\";\n\n\tfoo(); // 2\n})();\n```\n\n**Note:** Intentionally mixing `strict mode` and non-`strict mode` together in your own code is generally frowned upon. Your entire program should probably either be **Strict** or **non-Strict**. However, sometimes you include a third-party library that has different **Strict**'ness than your own code, so care must be taken over these subtle compatibility details.\n\n### Implicit Binding\n\nAnother rule to consider is: does the call-site have a context object, also referred to as an owning or containing object, though *these* alternate terms could be slightly misleading.\n\nConsider:\n\n```js\nfunction foo() {\n\tconsole.log( this.a );\n}\n\nvar obj = {\n\ta: 2,\n\tfoo: foo\n};\n\nobj.foo(); // 2\n```\n\nFirstly, notice the manner in which `foo()` is declared and then later added as a reference property onto `obj`. 
Regardless of whether `foo()` is initially declared *on* `obj`, or is added as a reference later (as this snippet shows), in neither case is the **function** really \"owned\" or \"contained\" by the `obj` object.\n\nHowever, the call-site *uses* the `obj` context to **reference** the function, so you *could* say that the `obj` object \"owns\" or \"contains\" the **function reference** at the time the function is called.\n\nWhatever you choose to call this pattern, at the point that `foo()` is called, it's preceded by an object reference to `obj`. When there is a context object for a function reference, the *implicit binding* rule says that it's *that* object which should be used for the function call's `this` binding.\n\nBecause `obj` is the `this` for the `foo()` call, `this.a` is synonymous with `obj.a`.\n\nOnly the top/last level of an object property reference chain matters to the call-site. For instance:\n\n```js\nfunction foo() {\n\tconsole.log( this.a );\n}\n\nvar obj2 = {\n\ta: 42,\n\tfoo: foo\n};\n\nvar obj1 = {\n\ta: 2,\n\tobj2: obj2\n};\n\nobj1.obj2.foo(); // 42\n```\n\n#### Implicitly Lost\n\nOne of the most common frustrations that `this` binding creates is when an *implicitly bound* function loses that binding, which usually means it falls back to the *default binding*, of either the global object or `undefined`, depending on `strict mode`.\n\nConsider:\n\n```js\nfunction foo() {\n\tconsole.log( this.a );\n}\n\nvar obj = {\n\ta: 2,\n\tfoo: foo\n};\n\nvar bar = obj.foo; // function reference/alias!\n\nvar a = \"oops, global\"; // `a` also property on global object\n\nbar(); // \"oops, global\"\n```\n\nEven though `bar` appears to be a reference to `obj.foo`, in fact, it's really just another reference to `foo` itself. 
Moreover, the call-site is what matters, and the call-site is `bar()`, which is a plain, un-decorated call and thus the *default binding* applies.\n\nThe more subtle, more common, and more unexpected way this occurs is when we consider passing a callback function:\n\n```js\nfunction foo() {\n\tconsole.log( this.a );\n}\n\nfunction doFoo(fn) {\n\t// `fn` is just another reference to `foo`\n\n\tfn(); // <-- call-site!\n}\n\nvar obj = {\n\ta: 2,\n\tfoo: foo\n};\n\nvar a = \"oops, global\"; // `a` also property on global object\n\ndoFoo( obj.foo ); // \"oops, global\"\n```\n\nParameter passing is just an implicit assignment, and since we're passing a function, it's an implicit reference assignment, so the end result is the same as the previous snippet.\n\nWhat if the function you're passing your callback to is not your own, but built-in to the language? No difference, same outcome.\n\n```js\nfunction foo() {\n\tconsole.log( this.a );\n}\n\nvar obj = {\n\ta: 2,\n\tfoo: foo\n};\n\nvar a = \"oops, global\"; // `a` also property on global object\n\nsetTimeout( obj.foo, 100 ); // \"oops, global\"\n```\n\nThink about this crude theoretical pseudo-implementation of `setTimeout()` provided as a built-in from the JavaScript environment:\n\n```js\nfunction setTimeout(fn,delay) {\n\t// wait (somehow) for `delay` milliseconds\n\tfn(); // <-- call-site!\n}\n```\n\nIt's quite common that our function callbacks *lose* their `this` binding, as we've just seen. But another way that `this` can surprise us is when the function we've passed our callback to intentionally changes the `this` for the call. Event handlers in popular JavaScript libraries are quite fond of forcing your callback to have a `this` which points to, for instance, the DOM element that triggered the event. While that may sometimes be useful, other times it can be downright infuriating. 
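
To illustrate, consider a crude sketch of how such a library might invoke your handler (the `bindEvent` utility here is hypothetical, not a real API):

```js
function bindEvent(elem, handler) {
	// the library picks the `this` for your callback,
	// forcing it to be the element:
	handler.call( elem );
}

// a stand-in for a DOM element
var fakeButton = { id: "btn1" };

bindEvent( fakeButton, function(){
	console.log( this.id ); // "btn1" -- the library's choice, not yours
} );
```
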
Unfortunately, these tools rarely let you choose.\n\nEither way the `this` is changed unexpectedly, you are not really in control of how your callback function reference will be executed, so you have no way (yet) of controlling the call-site to give your intended binding. We'll see shortly a way of \"fixing\" that problem by *fixing* the `this`.\n\n### Explicit Binding\n\nWith *implicit binding* as we just saw, we had to mutate the object in question to include a reference on itself to the function, and use this property function reference to indirectly (implicitly) bind `this` to the object.\n\nBut, what if you want to force a function call to use a particular object for the `this` binding, without putting a property function reference on the object?\n\n\"All\" functions in the language have some utilities available to them (via their `[[Prototype]]` -- more on that later) which can be useful for this task. Specifically, functions have `call(..)` and `apply(..)` methods. Technically, JavaScript host environments sometimes provide functions which are special enough (a kind way of putting it!) that they do not have such functionality. But those are few. The vast majority of functions provided, and certainly all functions you will create, do have access to `call(..)` and `apply(..)`.\n\nHow do these utilities work? They both take, as their first parameter, an object to use for the `this`, and then invoke the function with that `this` specified. 
Since you are directly stating what you want the `this` to be, we call it *explicit binding*.\n\nConsider:\n\n```js\nfunction foo() {\n\tconsole.log( this.a );\n}\n\nvar obj = {\n\ta: 2\n};\n\nfoo.call( obj ); // 2\n```\n\nInvoking `foo` with *explicit binding* by `foo.call(..)` allows us to force its `this` to be `obj`.\n\nIf you pass a simple primitive value (of type `string`, `boolean`, or `number`) as the `this` binding, the primitive value is wrapped in its object-form (`new String(..)`, `new Boolean(..)`, or `new Number(..)`, respectively). This is often referred to as \"boxing\".\n\n**Note:** With respect to `this` binding, `call(..)` and `apply(..)` are identical. They *do* behave differently with their additional parameters, but that's not something we care about presently.\n\nUnfortunately, *explicit binding* alone still doesn't offer any solution to the issue mentioned previously, of a function \"losing\" its intended `this` binding, or just having it paved over by a framework, etc.\n\n#### Hard Binding\n\nBut a variation pattern around *explicit binding* actually does the trick. Consider:\n\n```js\nfunction foo() {\n\tconsole.log( this.a );\n}\n\nvar obj = {\n\ta: 2\n};\n\nvar bar = function() {\n\tfoo.call( obj );\n};\n\nbar(); // 2\nsetTimeout( bar, 100 ); // 2\n\n// `bar` hard binds `foo`'s `this` to `obj`\n// so that it cannot be overriden\nbar.call( window ); // 2\n```\n\nLet's examine how this variation works. We create a function `bar()` which, internally, manually calls `foo.call(obj)`, thereby forcibly invoking `foo` with `obj` binding for `this`. No matter how you later invoke the function `bar`, it will always manually invoke `foo` with `obj`. 
This binding is both explicit and strong, so we call it *hard binding*.\n\nThe most typical way to wrap a function with a *hard binding* creates a pass-thru of any arguments passed and any return value received:\n\n```js\nfunction foo(something) {\n\tconsole.log( this.a, something );\n\treturn this.a + something;\n}\n\nvar obj = {\n\ta: 2\n};\n\nvar bar = function() {\n\treturn foo.apply( obj, arguments );\n};\n\nvar b = bar( 3 ); // 2 3\nconsole.log( b ); // 5\n```\n\nAnother way to express this pattern is to create a re-usable helper:\n\n```js\nfunction foo(something) {\n\tconsole.log( this.a, something );\n\treturn this.a + something;\n}\n\n// simple `bind` helper\nfunction bind(fn, obj) {\n\treturn function() {\n\t\treturn fn.apply( obj, arguments );\n\t};\n}\n\nvar obj = {\n\ta: 2\n};\n\nvar bar = bind( foo, obj );\n\nvar b = bar( 3 ); // 2 3\nconsole.log( b ); // 5\n```\n\nSince *hard binding* is such a common pattern, it's provided with a built-in utility as of ES5: `Function.prototype.bind`, and it's used like this:\n\n```js\nfunction foo(something) {\n\tconsole.log( this.a, something );\n\treturn this.a + something;\n}\n\nvar obj = {\n\ta: 2\n};\n\nvar bar = foo.bind( obj );\n\nvar b = bar( 3 ); // 2 3\nconsole.log( b ); // 5\n```\n\n`bind(..)` returns a new function that is hard-coded to call the original function with the `this` context set as you specified.\n\n**Note:** As of ES6, the hard-bound function produced by `bind(..)` has a `.name` property that derives from the original *target function*. 
For example: `bar = foo.bind(..)` should have a `bar.name` value of `\"bound foo\"`, which is the function call name that should show up in a stack trace.\n\n#### API Call \"Contexts\"\n\nMany libraries' functions, and indeed many new built-in functions in the JavaScript language and host environment, provide an optional parameter, usually called \"context\", which is designed as a work-around for you not having to use `bind(..)` to ensure your callback function uses a particular `this`.\n\nFor instance:\n\n```js\nfunction foo(el) {\n\tconsole.log( el, this.id );\n}\n\nvar obj = {\n\tid: \"awesome\"\n};\n\n// use `obj` as `this` for `foo(..)` calls\n[1, 2, 3].forEach( foo, obj ); // 1 awesome  2 awesome  3 awesome\n```\n\nInternally, these various functions almost certainly use *explicit binding* via `call(..)` or `apply(..)`, saving you the trouble.\n\n### `new` Binding\n\nThe fourth and final rule for `this` binding requires us to re-think a very common misconception about functions and objects in JavaScript.\n\nIn traditional class-oriented languages, \"constructors\" are special methods attached to classes, that when the class is instantiated with a `new` operator, the constructor of that class is called. This usually looks something like:\n\n```js\nsomething = new MyClass(..);\n```\n\nJavaScript has a `new` operator, and the code pattern to use it looks basically identical to what we see in those class-oriented languages; most developers assume that JavaScript's mechanism is doing something similar. However, there really is *no connection* to class-oriented functionality implied by `new` usage in JS.\n\nFirst, let's re-define what a \"constructor\" in JavaScript is. In JS, constructors are **just functions** that happen to be called with the `new` operator in front of them. They are not attached to classes, nor are they instantiating a class. They are not even special types of functions. 
They're just regular functions that are, in essence, hijacked by the use of `new` in their invocation.\n\nFor example, the `Number(..)` function acting as a constructor, quoting from the ES5.1 spec:\n\n> 15.7.2 The Number Constructor\n>\n> When Number is called as part of a new expression it is a constructor: it initialises the newly created object.\n\nSo, pretty much any ol' function, including the built-in object functions like `Number(..)` (see Chapter 3) can be called with `new` in front of it, and that makes that function call a *constructor call*. This is an important but subtle distinction: there's really no such thing as \"constructor functions\", but rather construction calls *of* functions.\n\nWhen a function is invoked with `new` in front of it, otherwise known as a constructor call, the following things are done automatically:\n\n1. a brand new object is created (aka, constructed) out of thin air\n2. *the newly constructed object is `[[Prototype]]`-linked*\n3. the newly constructed object is set as the `this` binding for that function call\n4. unless the function returns its own alternate **object**, the `new`-invoked function call will *automatically* return the newly constructed object.\n\nSteps 1, 3, and 4 apply to our current discussion. We'll skip over step 2 for now and come back to it in Chapter 5.\n\nConsider this code:\n\n```js\nfunction foo(a) {\n\tthis.a = a;\n}\n\nvar bar = new foo( 2 );\nconsole.log( bar.a ); // 2\n```\n\nBy calling `foo(..)` with `new` in front of it, we've constructed a new object and set that new object as the `this` for the call of `foo(..)`. **So `new` is the final way that a function call's `this` can be bound.** We'll call this *new binding*.\n\n## Everything In Order\n\nSo, now we've uncovered the 4 rules for binding `this` in function calls. *All* you need to do is find the call-site and inspect it to see which rule applies. But, what if the call-site has multiple eligible rules? 
There must be an order of precedence to these rules, and so we will next demonstrate what order to apply the rules.\n\nIt should be clear that the *default binding* is the lowest priority rule of the 4. So we'll just set that one aside.\n\nWhich is more precedent, *implicit binding* or *explicit binding*? Let's test it:\n\n```js\nfunction foo() {\n\tconsole.log( this.a );\n}\n\nvar obj1 = {\n\ta: 2,\n\tfoo: foo\n};\n\nvar obj2 = {\n\ta: 3,\n\tfoo: foo\n};\n\nobj1.foo(); // 2\nobj2.foo(); // 3\n\nobj1.foo.call( obj2 ); // 3\nobj2.foo.call( obj1 ); // 2\n```\n\nSo, *explicit binding* takes precedence over *implicit binding*, which means you should ask **first** if *explicit binding* applies before checking for *implicit binding*.\n\nNow, we just need to figure out where *new binding* fits in the precedence.\n\n```js\nfunction foo(something) {\n\tthis.a = something;\n}\n\nvar obj1 = {\n\tfoo: foo\n};\n\nvar obj2 = {};\n\nobj1.foo( 2 );\nconsole.log( obj1.a ); // 2\n\nobj1.foo.call( obj2, 3 );\nconsole.log( obj2.a ); // 3\n\nvar bar = new obj1.foo( 4 );\nconsole.log( obj1.a ); // 2\nconsole.log( bar.a ); // 4\n```\n\nOK, *new binding* is more precedent than *implicit binding*. But do you think *new binding* is more or less precedent than *explicit binding*?\n\n**Note:** `new` and `call`/`apply` cannot be used together, so `new foo.call(obj1)` is not allowed, to test *new binding* directly against *explicit binding*. 
But we can still use a *hard binding* to test the precedence of the two rules.\n\nBefore we explore that in a code listing, think back to how *hard binding* physically works, which is that `Function.prototype.bind(..)` creates a new wrapper function that is hard-coded to ignore its own `this` binding (whatever it may be), and use a manual one we provide.\n\nBy that reasoning, it would seem obvious to assume that *hard binding* (which is a form of *explicit binding*) is more precedent than *new binding*, and thus cannot be overridden with `new`.\n\nLet's check:\n\n```js\nfunction foo(something) {\n\tthis.a = something;\n}\n\nvar obj1 = {};\n\nvar bar = foo.bind( obj1 );\nbar( 2 );\nconsole.log( obj1.a ); // 2\n\nvar baz = new bar( 3 );\nconsole.log( obj1.a ); // 2\nconsole.log( baz.a ); // 3\n```\n\nWhoa! `bar` is hard-bound against `obj1`, but `new bar(3)` did **not** change `obj1.a` to be `3` as we would have expected. Instead, the *hard bound* (to `obj1`) call to `bar(..)` ***is*** able to be overridden with `new`. Since `new` was applied, we got the newly created object back, which we named `baz`, and we see in fact that  `baz.a` has the value `3`.\n\nThis should be surprising if you go back to our \"fake\" bind helper:\n\n```js\nfunction bind(fn, obj) {\n\treturn function() {\n\t\tfn.apply( obj, arguments );\n\t};\n}\n```\n\nIf you reason about how the helper's code works, it does not have a way for a `new` operator call to override the hard-binding to `obj` as we just observed.\n\nBut the built-in `Function.prototype.bind(..)` as of ES5 is more sophisticated, quite a bit so in fact. 
Here is the (slightly reformatted) polyfill provided by the MDN page for `bind(..)`:\n\n```js\nif (!Function.prototype.bind) {\n\tFunction.prototype.bind = function(oThis) {\n\t\tif (typeof this !== \"function\") {\n\t\t\t// closest thing possible to the ECMAScript 5\n\t\t\t// internal IsCallable function\n\t\t\tthrow new TypeError( \"Function.prototype.bind - what \" +\n\t\t\t\t\"is trying to be bound is not callable\"\n\t\t\t);\n\t\t}\n\n\t\tvar aArgs = Array.prototype.slice.call( arguments, 1 ),\n\t\t\tfToBind = this,\n\t\t\tfNOP = function(){},\n\t\t\tfBound = function(){\n\t\t\t\treturn fToBind.apply(\n\t\t\t\t\t(\n\t\t\t\t\t\tthis instanceof fNOP &&\n\t\t\t\t\t\toThis ? this : oThis\n\t\t\t\t\t),\n\t\t\t\t\taArgs.concat( Array.prototype.slice.call( arguments ) )\n\t\t\t\t);\n\t\t\t}\n\t\t;\n\n\t\tfNOP.prototype = this.prototype;\n\t\tfBound.prototype = new fNOP();\n\n\t\treturn fBound;\n\t};\n}\n```\n\n**Note:** The `bind(..)` polyfill shown above differs from the built-in `bind(..)` in ES5 with respect to hard-bound functions that will be used with `new` (see below for why that's useful). Because the polyfill cannot create a function without a `.prototype` as the built-in utility does, there's some nuanced indirection to approximate the same behavior. Tread carefully if you plan to use `new` with a hard-bound function and you rely on this polyfill.\n\nThe part that's allowing `new` overriding is:\n\n```js\nthis instanceof fNOP &&\noThis ? this : oThis\n\n// ... 
and:\n\nfNOP.prototype = this.prototype;\nfBound.prototype = new fNOP();\n```\n\nWe won't actually dive into explaining how this trickery works (it's complicated and beyond our scope here), but essentially the utility determines whether or not the hard-bound function has been called with `new` (resulting in a newly constructed object being its `this`), and if so, it uses *that* newly created `this` rather than the previously specified *hard binding* for `this`.\n\nWhy is `new` being able to override *hard binding* useful?\n\nThe primary reason for this behavior is to create a function (that can be used with `new` for constructing objects) that essentially ignores the `this` *hard binding* but which presets some or all of the function's arguments. One of the capabilities of `bind(..)` is that any arguments passed after the first `this` binding argument are defaulted as standard arguments to the underlying function (technically called \"partial application\", which is a subset of \"currying\").\n\nFor example:\n\n```js\nfunction foo(p1,p2) {\n\tthis.val = p1 + p2;\n}\n\n// using `null` here because we don't care about\n// the `this` hard-binding in this scenario, and\n// it will be overridden by the `new` call anyway!\nvar bar = foo.bind( null, \"p1\" );\n\nvar baz = new bar( \"p2\" );\n\nbaz.val; // p1p2\n```\n\n### Determining `this`\n\nNow, we can summarize the rules for determining `this` from a function call's call-site, in their order of precedence. Ask these questions in this order, and stop when the first rule applies.\n\n1. Is the function called with `new` (**new binding**)? If so, `this` is the newly constructed object.\n\n    `var bar = new foo()`\n\n2. Is the function called with `call` or `apply` (**explicit binding**), even hidden inside a `bind` *hard binding*? If so, `this` is the explicitly specified object.\n\n    `var bar = foo.call( obj2 )`\n\n3. 
Is the function called with a context (**implicit binding**), otherwise known as an owning or containing object? If so, `this` is *that* context object.\n\n    `var bar = obj1.foo()`\n\n4. Otherwise, default the `this` (**default binding**). If in `strict mode`, pick `undefined`, otherwise pick the `global` object.\n\n    `var bar = foo()`\n\nThat's it. That's *all it takes* to understand the rules of `this` binding for normal function calls. Well... almost.\n\n## Binding Exceptions\n\nAs usual, there are some *exceptions* to the \"rules\".\n\nThe `this`-binding behavior can in some scenarios be surprising, where you intended a different binding but you end up with binding behavior from the *default binding* rule (see previous).\n\n### Ignored `this`\n\nIf you pass `null` or `undefined` as a `this` binding parameter to `call`, `apply`, or `bind`, those values are effectively ignored, and instead the *default binding* rule applies to the invocation.\n\n```js\nfunction foo() {\n\tconsole.log( this.a );\n}\n\nvar a = 2;\n\nfoo.call( null ); // 2\n```\n\nWhy would you intentionally pass something like `null` for a `this` binding?\n\nIt's quite common to use `apply(..)` for spreading out arrays of values as parameters to a function call. Similarly, `bind(..)` can curry parameters (pre-set values), which can be very helpful.\n\n```js\nfunction foo(a,b) {\n\tconsole.log( \"a:\" + a + \", b:\" + b );\n}\n\n// spreading out array as parameters\nfoo.apply( null, [2, 3] ); // a:2, b:3\n\n// currying with `bind(..)`\nvar bar = foo.bind( null, 2 );\nbar( 3 ); // a:2, b:3\n```\n\nBoth these utilities require a `this` binding for the first parameter. 
If the functions in question don't care about `this`, you need a placeholder value, and `null` might seem like a reasonable choice as shown in this snippet.\n\n**Note:** We don't cover it in this book, but ES6 has the `...` spread operator which will let you syntactically \"spread out\" an array as parameters without needing `apply(..)`, such as `foo(...[1,2])`, which amounts to `foo(1,2)` -- syntactically avoiding a `this` binding if it's unnecessary. Unfortunately, there's no ES6 syntactic substitute for currying, so the `this` parameter of the `bind(..)` call still needs attention.\n\nHowever, there's a slight hidden \"danger\" in always using `null` when you don't care about the `this` binding. If you ever use that against a function call (for instance, a third-party library function that you don't control), and that function *does* make a `this` reference, the *default binding* rule means it might inadvertently reference (or worse, mutate!) the `global` object (`window` in the browser).\n\nObviously, such a pitfall can lead to a variety of *very difficult* to diagnose/track-down bugs.\n\n#### Safer `this`\n\nPerhaps a somewhat \"safer\" practice is to pass a specifically set up object for `this` which is guaranteed not to be an object that can create problematic side effects in your program. Borrowing terminology from networking (and the military), we can create a \"DMZ\" (de-militarized zone) object -- nothing more special than a completely empty, non-delegated (see Chapters 5 and 6) object.\n\nIf we always pass a DMZ object for ignored `this` bindings we don't think we need to care about, we're sure any hidden/unexpected usage of `this` will be restricted to the empty object, which insulates our program's `global` object from side-effects.\n\nSince this object is totally empty, I personally like to give it the variable name `ø` (the lowercase mathematical symbol for the empty set). 
On many keyboards (like US-layout on Mac), this symbol is easily typed with `⌥`+`o` (option+`o`). Some systems also let you set up hotkeys for specific symbols. If you don't like the `ø` symbol, or your keyboard doesn't make that as easy to type, you can of course call it whatever you want.\n\nWhatever you call it, the easiest way to set it up as **totally empty** is `Object.create(null)` (see Chapter 5). `Object.create(null)` is similar to `{ }`, but without the delegation to `Object.prototype`, so it's \"more empty\" than just `{ }`.\n\n```js\nfunction foo(a,b) {\n\tconsole.log( \"a:\" + a + \", b:\" + b );\n}\n\n// our DMZ empty object\nvar ø = Object.create( null );\n\n// spreading out array as parameters\nfoo.apply( ø, [2, 3] ); // a:2, b:3\n\n// currying with `bind(..)`\nvar bar = foo.bind( ø, 2 );\nbar( 3 ); // a:2, b:3\n```\n\nNot only functionally \"safer\", there's a sort of stylistic benefit to `ø`, in that it semantically conveys \"I want the `this` to be empty\" a little more clearly than `null` might. But again, name your DMZ object whatever you prefer.\n\n### Indirection\n\nAnother thing to be aware of is you can (intentionally or not!) create \"indirect references\" to functions, and in those cases,  when that function reference is invoked, the *default binding* rule also applies.\n\nOne of the most common ways that *indirect references* occur is from an assignment:\n\n```js\nfunction foo() {\n\tconsole.log( this.a );\n}\n\nvar a = 2;\nvar o = { a: 3, foo: foo };\nvar p = { a: 4 };\n\no.foo(); // 3\n(p.foo = o.foo)(); // 2\n```\n\nThe *result value* of the assignment expression `p.foo = o.foo` is a reference to just the underlying function object. As such, the effective call-site is just `foo()`, not `p.foo()` or `o.foo()` as you might expect. 
Per the rules above, the *default binding* rule applies.\n\nReminder: regardless of how you get to a function invocation using the *default binding* rule, the `strict mode` status of the **contents** of the invoked function making the `this` reference -- not the function call-site -- determines the *default binding* value: either the `global` object if in non-`strict mode` or `undefined` if in `strict mode`.\n\n### Softening Binding\n\nWe saw earlier that *hard binding* was one strategy for preventing a function call falling back to the *default binding* rule inadvertently, by forcing it to be bound to a specific `this` (unless you use `new` to override it!). The problem is, *hard-binding* greatly reduces the flexibility of a function, preventing manual `this` override with either the *implicit binding* or even subsequent *explicit binding* attempts.\n\nIt would be nice if there was a way to provide a different default for *default binding* (not `global` or `undefined`), while still leaving the function able to be manually `this` bound via *implicit binding* or *explicit binding* techniques.\n\nWe can construct a so-called *soft binding* utility which emulates our desired behavior.\n\n```js\nif (!Function.prototype.softBind) {\n\tFunction.prototype.softBind = function(obj) {\n\t\tvar fn = this,\n\t\t\tcurried = [].slice.call( arguments, 1 ),\n\t\t\tbound = function bound() {\n\t\t\t\treturn fn.apply(\n\t\t\t\t\t(!this ||\n\t\t\t\t\t\t(typeof window !== \"undefined\" &&\n\t\t\t\t\t\t\tthis === window) ||\n\t\t\t\t\t\t(typeof global !== \"undefined\" &&\n\t\t\t\t\t\t\tthis === global)\n\t\t\t\t\t) ? obj : this,\n\t\t\t\t\tcurried.concat.apply( curried, arguments )\n\t\t\t\t);\n\t\t\t};\n\t\tbound.prototype = Object.create( fn.prototype );\n\t\treturn bound;\n\t};\n}\n```\n\nThe `softBind(..)` utility provided here works similarly to the built-in ES5 `bind(..)` utility, except with our *soft binding* behavior. 
It wraps the specified function in logic that checks the `this` at call-time and if it's `global` or `undefined`, uses a pre-specified alternate *default* (`obj`). Otherwise the `this` is left untouched. It also provides optional currying (see the `bind(..)` discussion earlier).\n\nLet's demonstrate its usage:\n\n```js\nfunction foo() {\n   console.log(\"name: \" + this.name);\n}\n\nvar obj = { name: \"obj\" },\n    obj2 = { name: \"obj2\" },\n    obj3 = { name: \"obj3\" };\n\nvar fooOBJ = foo.softBind( obj );\n\nfooOBJ(); // name: obj\n\nobj2.foo = foo.softBind(obj);\nobj2.foo(); // name: obj2   <---- look!!!\n\nfooOBJ.call( obj3 ); // name: obj3   <---- look!\n\nsetTimeout( obj2.foo, 10 ); // name: obj   <---- falls back to soft-binding\n```\n\nThe soft-bound version of the `foo()` function can be manually `this`-bound to `obj2` or `obj3` as shown, but it falls back to `obj` if the *default binding* would otherwise apply.\n\n## Lexical `this`\n\nNormal functions abide by the 4 rules we just covered. But ES6 introduces a special kind of function that does not use these rules: arrow-function.\n\nArrow-functions are signified not by the `function` keyword, but by the `=>` so called \"fat arrow\" operator. Instead of using the four standard `this` rules, arrow-functions adopt the `this` binding from the enclosing (function or global) scope.\n\nLet's illustrate arrow-function lexical scope:\n\n```js\nfunction foo() {\n\t// return an arrow function\n\treturn (a) => {\n\t\t// `this` here is lexically adopted from `foo()`\n\t\tconsole.log( this.a );\n\t};\n}\n\nvar obj1 = {\n\ta: 2\n};\n\nvar obj2 = {\n\ta: 3\n};\n\nvar bar = foo.call( obj1 );\nbar.call( obj2 ); // 2, not 3!\n```\n\nThe arrow-function created in `foo()` lexically captures whatever `foo()`s `this` is at its call-time. Since `foo()` was `this`-bound to `obj1`, `bar` (a reference to the returned arrow-function) will also be `this`-bound to `obj1`. 
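This lexical capture is easy to verify directly; a minimal sketch (same shape as the snippet above) showing that later `call(..)`/`bind(..)` attempts are simply ignored:

```js
function foo() {
	// arrow-function: `this` is adopted lexically from `foo()`
	return () => this.a;
}

var obj1 = { a: 2 };
var obj2 = { a: 3 };

var bar = foo.call( obj1 );

bar.call( obj2 );    // 2 -- `call(..)` cannot re-bind the lexical `this`
bar.bind( obj2 )();  // 2 -- neither can `bind(..)`
```
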
The lexical binding of an arrow-function cannot be overridden (even with `new`!).\n\nThe most common use-case will likely be in the use of callbacks, such as event handlers or timers:\n\n```js\nfunction foo() {\n\tsetTimeout(() => {\n\t\t// `this` here is lexically adopted from `foo()`\n\t\tconsole.log( this.a );\n\t},100);\n}\n\nvar obj = {\n\ta: 2\n};\n\nfoo.call( obj ); // 2\n```\n\nWhile arrow-functions provide an alternative to using `bind(..)` on a function to ensure its `this`, which can seem attractive, it's important to note that they essentially are disabling the traditional `this` mechanism in favor of more widely-understood lexical scoping. Pre-ES6, we already have a fairly common pattern for doing so, which is basically almost indistinguishable from the spirit of ES6 arrow-functions:\n\n```js\nfunction foo() {\n\tvar self = this; // lexical capture of `this`\n\tsetTimeout( function(){\n\t\tconsole.log( self.a );\n\t}, 100 );\n}\n\nvar obj = {\n\ta: 2\n};\n\nfoo.call( obj ); // 2\n```\n\nWhile `self = this` and arrow-functions both seem like good \"solutions\" to not wanting to use `bind(..)`, they are essentially fleeing from `this` instead of understanding and embracing it.\n\nIf you find yourself writing `this`-style code, but most or all the time, you defeat the `this` mechanism with lexical `self = this` or arrow-function \"tricks\", perhaps you should either:\n\n1. Use only lexical scope and forget the false pretense of `this`-style code.\n\n2. 
Embrace `this`-style mechanisms completely, including using `bind(..)` where necessary, and try to avoid `self = this` and arrow-function \"lexical this\" tricks.\n\nA program can effectively use both styles of code (lexical and `this`), but inside of the same function, and indeed for the same sorts of look-ups, mixing the two mechanisms is usually asking for harder-to-maintain code, and probably working too hard to be clever.\n\n## Review (TL;DR)\n\nDetermining the `this` binding for an executing function requires finding the direct call-site of that function. Once examined, four rules can be applied to the call-site, in *this* order of precedence:\n\n1. Called with `new`? Use the newly constructed object.\n\n2. Called with `call` or `apply` (or `bind`)? Use the specified object.\n\n3. Called with a context object owning the call? Use that context object.\n\n4. Default: `undefined` in `strict mode`, global object otherwise.\n\nBe careful of accidental/unintentional invoking of the *default binding* rule. In cases where you want to \"safely\" ignore a `this` binding, a \"DMZ\" object like `ø = Object.create(null)` is a good placeholder value that protects the `global` object from unintended side-effects.\n\nInstead of the four standard binding rules, ES6 arrow-functions use lexical scoping for `this` binding, which means they adopt the `this` binding (whatever it is) from its enclosing function call. They are essentially a syntactic replacement of `self = this` in pre-ES6 coding.\n"
  },
  {
    "path": "this & object prototypes/ch3.md",
    "content": "# You Don't Know JS: *this* & Object Prototypes\n# Chapter 3: Objects\n\nIn Chapters 1 and 2, we explained how the `this` binding points to various objects depending on the call-site of the function invocation. But what exactly are objects, and why do we need to point to them? We will explore objects in detail in this chapter.\n\n## Syntax\n\nObjects come in two forms: the declarative (literal) form, and the constructed form.\n\nThe literal syntax for an object looks like this:\n\n```js\nvar myObj = {\n\tkey: value\n\t// ...\n};\n```\n\nThe constructed form looks like this:\n\n```js\nvar myObj = new Object();\nmyObj.key = value;\n```\n\nThe constructed form and the literal form result in exactly the same sort of object. The only difference really is that you can add one or more key/value pairs to the literal declaration, whereas with constructed-form objects, you must add the properties one-by-one.\n\n**Note:** It's extremely uncommon to use the \"constructed form\" for creating objects as just shown. You would pretty much always want to use the literal syntax form. The same will be true of most of the built-in objects (see below).\n\n## Type\n\nObjects are the general building block upon which much of JS is built. They are one of the 6 primary types (called \"language types\" in the specification) in JS:\n\n* `string`\n* `number`\n* `boolean`\n* `null`\n* `undefined`\n* `object`\n\nNote that the *simple primitives* (`string`, `number`, `boolean`, `null`, and `undefined`) are **not** themselves `objects`. `null` is sometimes referred to as an object type, but this misconception stems from a bug in the language which causes `typeof null` to return the string `\"object\"` incorrectly (and confusingly). In fact, `null` is its own primitive type.\n\n**It's a common mis-statement that \"everything in JavaScript is an object\". 
This is clearly not true.**\n\nBy contrast, there *are* a few special object sub-types, which we can refer to as *complex primitives*.\n\n`function` is a sub-type of object (technically, a \"callable object\"). Functions in JS are said to be \"first class\" in that they are basically just normal objects (with callable behavior semantics bolted on), and so they can be handled like any other plain object.\n\nArrays are also a form of objects, with extra behavior. The organization of contents in arrays is slightly more structured than for general objects.\n\n### Built-in Objects\n\nThere are several other object sub-types, usually referred to as built-in objects. For some of them, their names seem to imply they are directly related to their simple primitives counter-parts, but in fact, their relationship is more complicated, which we'll explore shortly.\n\n* `String`\n* `Number`\n* `Boolean`\n* `Object`\n* `Function`\n* `Array`\n* `Date`\n* `RegExp`\n* `Error`\n\nThese built-ins have the appearance of being actual types, even classes, if you rely on the similarity to other languages such as Java's `String` class.\n\nBut in JS, these are actually just built-in functions. Each of these built-in functions can be used as a constructor (that is, a function call with the `new` operator -- see Chapter 2), with the result being a newly *constructed* object of the sub-type in question. 
For instance:\n\n```js\nvar strPrimitive = \"I am a string\";\ntypeof strPrimitive;\t\t\t\t\t\t\t// \"string\"\nstrPrimitive instanceof String;\t\t\t\t\t// false\n\nvar strObject = new String( \"I am a string\" );\ntypeof strObject; \t\t\t\t\t\t\t\t// \"object\"\nstrObject instanceof String;\t\t\t\t\t// true\n\n// inspect the object sub-type\nObject.prototype.toString.call( strObject );\t// [object String]\n```\n\nWe'll see in detail in a later chapter exactly how the `Object.prototype.toString...` bit works, but briefly, we can inspect the internal sub-type by borrowing the base default `toString()` method, and you can see it reveals that `strObject` is an object that was in fact created by the `String` constructor.\n\nThe primitive value `\"I am a string\"` is not an object, it's a primitive literal and immutable value. To perform operations on it, such as checking its length, accessing its individual character contents, etc, a `String` object is required.\n\nLuckily, the language automatically coerces a `\"string\"` primitive to a `String` object when necessary, which means you almost never need to explicitly create the Object form. It is **strongly preferred** by the majority of the JS community to use the literal form for a value, where possible, rather than the constructed object form.\n\nConsider:\n\n```js\nvar strPrimitive = \"I am a string\";\n\nconsole.log( strPrimitive.length );\t\t\t// 13\n\nconsole.log( strPrimitive.charAt( 3 ) );\t// \"m\"\n```\n\nIn both cases, we call a property or method on a string primitive, and the engine automatically coerces it to a `String` object, so that the property/method access works.\n\nThe same sort of coercion happens between the number literal primitive `42` and the `new Number(42)` object wrapper, when using methods like `42.359.toFixed(2)`. Likewise for `Boolean` objects from `\"boolean\"` primitives.\n\n`null` and `undefined` have no object wrapper form, only their primitive values. 
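That `42.359.toFixed(2)` style of coercion is easy to observe directly; a small sketch:

```js
var num = 42.359;

// the engine temporarily boxes the primitive in a `Number` object
// just long enough for the `toFixed(..)` method call to work
num.toFixed( 2 );   // "42.36"

typeof num;         // still "number" -- the primitive itself is unchanged
```
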
By contrast, `Date` values can *only* be created with their constructed object form, as they have no literal form counter-part.\n\n`Object`s, `Array`s, `Function`s, and `RegExp`s (regular expressions) are all objects regardless of whether the literal or constructed form is used. The constructed form does offer, in some cases, more options in creation than the literal form counterpart. Since objects are created either way, the simpler literal form is almost universally preferred. **Only use the constructed form if you need the extra options.**\n\n`Error` objects are rarely created explicitly in code, but usually created automatically when exceptions are thrown. They can be created with the constructed form `new Error(..)`, but it's often unnecessary.\n\n## Contents\n\nAs mentioned earlier, the contents of an object consist of values (any type) stored at specifically named *locations*, which we call properties.\n\nIt's important to note that while we say \"contents\" which implies that these values are *actually* stored inside the object, that's merely an appearance. The engine stores values in implementation-dependent ways, and may very well not store them *in* some object container. What *is* stored in the container are these property names, which act as pointers (technically, *references*) to where the values are stored.\n\nConsider:\n\n```js\nvar myObject = {\n\ta: 2\n};\n\nmyObject.a;\t\t// 2\n\nmyObject[\"a\"];\t// 2\n```\n\nTo access the value at the *location* `a` in `myObject`, we need to use either the `.` operator or the `[ ]` operator. The `.a` syntax is usually referred to as \"property\" access, whereas the `[\"a\"]` syntax is usually referred to as \"key\" access. In reality, they both access the same *location*, and will pull out the same value, `2`, so the terms can be used interchangeably. 
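One practical difference worth sketching: a property name that isn't a valid `Identifier`, such as `"Super-Fun!"`, can only be accessed with the `["
.."]` form:

```js
var myObject = {};

// `Super-Fun!` is not a valid Identifier, so `.` access won't parse;
// the key-access form handles any string
myObject["Super-Fun!"] = "yay";

myObject["Super-Fun!"];   // "yay"
```
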
We will use the most common term, \"property access\" from here on.\n\nThe main difference between the two syntaxes is that the `.` operator requires an `Identifier` compatible property name after it, whereas the `[\"..\"]` syntax can take basically any UTF-8/unicode compatible string as the name for the property. To reference a property of the name \"Super-Fun!\", for instance, you would have to use the `[\"Super-Fun!\"]` access syntax, as `Super-Fun!` is not a valid `Identifier` property name.\n\nAlso, since the `[\"..\"]` syntax uses a string's **value** to specify the location, this means the program can programmatically build up the value of the string, such as:\n\n```js\nvar wantA = true;\nvar myObject = {\n\ta: 2\n};\n\nvar idx;\n\nif (wantA) {\n\tidx = \"a\";\n}\n\n// later\n\nconsole.log( myObject[idx] ); // 2\n```\n\nIn objects, property names are **always** strings. If you use any other value besides a `string` (primitive) as the property, it will first be converted to a string. This even includes numbers, which are commonly used as array indexes, so be careful not to confuse the use of numbers between objects and arrays.\n\n```js\nvar myObject = { };\n\nmyObject[true] = \"foo\";\nmyObject[3] = \"bar\";\nmyObject[myObject] = \"baz\";\n\nmyObject[\"true\"];\t\t\t\t// \"foo\"\nmyObject[\"3\"];\t\t\t\t\t// \"bar\"\nmyObject[\"[object Object]\"];\t// \"baz\"\n```\n\n### Computed Property Names\n\nThe `myObject[..]` property access syntax we just described is useful if you need to use a computed expression value *as* the key name, like `myObject[prefix + name]`. 
But that's not really helpful when declaring objects using the object-literal syntax.\n\nES6 adds *computed property names*, where you can specify an expression, surrounded by a `[ ]` pair, in the key-name position of an object-literal declaration:\n\n```js\nvar prefix = \"foo\";\n\nvar myObject = {\n\t[prefix + \"bar\"]: \"hello\",\n\t[prefix + \"baz\"]: \"world\"\n};\n\nmyObject[\"foobar\"]; // hello\nmyObject[\"foobaz\"]; // world\n```\n\nThe most common usage of *computed property names* will probably be for ES6 `Symbol`s, which we will not be covering in detail in this book. In short, they're a new primitive data type which has an opaque unguessable value (technically a `string` value). You will be strongly discouraged from working with the *actual value* of a `Symbol` (which can theoretically be different between different JS engines), so the name of the `Symbol`, like `Symbol.Something` (just a made up name!), will be what you use:\n\n```js\nvar myObject = {\n\t[Symbol.Something]: \"hello world\"\n};\n```\n\n### Property vs. Method\n\nSome developers like to make a distinction when talking about a property access on an object, if the value being accessed happens to be a function. Because it's tempting to think of the function as *belonging* to the object, and in other languages, functions which belong to objects (aka, \"classes\") are referred to as \"methods\", it's not uncommon to hear, \"method access\" as opposed to \"property access\".\n\n**The specification makes this same distinction**, interestingly.\n\nTechnically, functions never \"belong\" to objects, so saying that a function that just happens to be accessed on an object reference is automatically a \"method\" seems a bit of a stretch of semantics.\n\nIt *is* true that some functions have `this` references in them, and that *sometimes* these `this` references refer to the object reference at the call-site. 
But this usage really does not make that function any more a \"method\" than any other function, as `this` is dynamically bound at run-time, at the call-site, and thus its relationship to the object is indirect, at best.\n\nEvery time you access a property on an object, that is a **property access**, regardless of the type of value you get back. If you *happen* to get a function from that property access, it's not magically a \"method\" at that point. There's nothing special (outside of possible implicit `this` binding as explained earlier) about a function that comes from a property access.\n\nFor instance:\n\n```js\nfunction foo() {\n\tconsole.log( \"foo\" );\n}\n\nvar someFoo = foo;\t// variable reference to `foo`\n\nvar myObject = {\n\tsomeFoo: foo\n};\n\nfoo;\t\t\t\t// function foo(){..}\n\nsomeFoo;\t\t\t// function foo(){..}\n\nmyObject.someFoo;\t// function foo(){..}\n```\n\n`someFoo` and `myObject.someFoo` are just two separate references to the same function, and neither implies anything about the function being special or \"owned\" by any other object. If `foo()` above was defined to have a `this` reference inside it, that `myObject.someFoo` *implicit binding* would be the **only** observable difference between the two references. Neither reference really makes sense to be called a \"method\".\n\n**Perhaps one could argue** that a function *becomes a method*, not at definition time, but during run-time just for that invocation, depending on how it's called at its call-site (with an object reference context or not -- see Chapter 2 for more details). Even this interpretation is a bit of a stretch.\n\nThe safest conclusion is probably that \"function\" and \"method\" are interchangeable in JavaScript.\n\n**Note:** ES6 adds a `super` reference, which is typically going to be used with `class` (see Appendix A). 
The way `super` behaves (static binding rather than late binding as `this`) gives further weight to the idea that a function which is `super` bound somewhere is more a \"method\" than a \"function\". But again, these are just subtle semantic (and mechanical) nuances.\n\nEven when you declare a function expression as part of the object-literal, that function doesn't magically *belong* more to the object -- still just multiple references to the same function object:\n\n```js\nvar myObject = {\n\tfoo: function foo() {\n\t\tconsole.log( \"foo\" );\n\t}\n};\n\nvar someFoo = myObject.foo;\n\nsomeFoo;\t\t// function foo(){..}\n\nmyObject.foo;\t// function foo(){..}\n```\n\n**Note:** In Chapter 6, we will cover an ES6 short-hand for that `foo: function foo(){ .. }` declaration syntax in our object-literal.\n\n### Arrays\n\nArrays also use the `[ ]` access form, but as mentioned above, they have slightly more structured organization for how and where values are stored (though still no restriction on what *type* of values are stored). Arrays assume *numeric indexing*, which means that values are stored in locations, usually called *indices*, at non-negative integers, such as `0` and `42`.\n\n```js\nvar myArray = [ \"foo\", 42, \"bar\" ];\n\nmyArray.length;\t\t// 3\n\nmyArray[0];\t\t\t// \"foo\"\n\nmyArray[2];\t\t\t// \"bar\"\n```\n\nArrays *are* objects, so even though each index is a non-negative integer, you can *also* add properties onto the array:\n\n```js\nvar myArray = [ \"foo\", 42, \"bar\" ];\n\nmyArray.baz = \"baz\";\n\nmyArray.length;\t// 3\n\nmyArray.baz;\t// \"baz\"\n```\n\nNotice that adding named properties (regardless of `.` or `[ ]` operator syntax) does not change the reported `length` of the array.\n\nYou *could* use an array as a plain key/value object, and never add any numeric indices, but this is a bad idea because arrays have behavior and optimizations specific to their intended use, and likewise with plain objects. 
Use objects to store key/value pairs, and arrays to store values at numeric indices.\n\n**Be careful:** If you try to add a property to an array, but the property name *looks* like a number, it will end up instead as a numeric index (thus modifying the array contents):\n\n```js\nvar myArray = [ \"foo\", 42, \"bar\" ];\n\nmyArray[\"3\"] = \"baz\";\n\nmyArray.length;\t// 4\n\nmyArray[3];\t\t// \"baz\"\n```\n\n### Duplicating Objects\n\nOne of the most commonly requested features when developers newly take up the JavaScript language is how to duplicate an object. It would seem like there should just be a built-in `copy()` method, right? It turns out that it's a little more complicated than that, because it's not fully clear what, by default, should be the algorithm for the duplication.\n\nFor example, consider this object:\n\n```js\nfunction anotherFunction() { /*..*/ }\n\nvar anotherObject = {\n\tc: true\n};\n\nvar anotherArray = [];\n\nvar myObject = {\n\ta: 2,\n\tb: anotherObject,\t// reference, not a copy!\n\tc: anotherArray,\t// another reference!\n\td: anotherFunction\n};\n\nanotherArray.push( anotherObject, myObject );\n```\n\nWhat exactly should be the representation of a *copy* of `myObject`?\n\nFirstly, we should answer if it should be a *shallow* or *deep* copy. A *shallow copy* would end up with `a` on the new object as a copy of the value `2`, but `b`, `c`, and `d` properties as just references to the same places as the references in the original object. A *deep copy* would duplicate not only `myObject`, but `anotherObject` and `anotherArray`. But then we have issues that `anotherArray` has references to `anotherObject` and `myObject` in it, so *those* should also be duplicated rather than reference-preserved. Now we have an infinite circular duplication problem because of the circular reference.\n\nShould we detect a circular reference and just break the circular traversal (leaving the deep element not fully duplicated)? Should we error out completely? 
Something in between?\n\nMoreover, it's not really clear what \"duplicating\" a function would mean. There are some hacks like pulling out the `toString()` serialization of a function's source code (which varies across implementations and is not even reliable in all engines depending on the type of function being inspected).\n\nSo how do we resolve all these tricky questions? Various JS frameworks have each picked their own interpretations and made their own decisions. But which of these (if any) should JS adopt as *the* standard? For a long time, there was no clear answer.\n\nOne subset solution is that objects which are JSON-safe (that is, can be serialized to a JSON string and then re-parsed to an object with the same structure and values) can easily be *duplicated* with:\n\n```js\nvar newObj = JSON.parse( JSON.stringify( someObj ) );\n```\n\nOf course, that requires you to ensure your object is JSON-safe. For some situations, that's trivial. For others, it's insufficient.\n\nAt the same time, a shallow copy is fairly understandable and has far fewer issues, so ES6 has now defined `Object.assign(..)` for this task. `Object.assign(..)` takes a *target* object as its first parameter, and one or more *source* objects as its subsequent parameters. It iterates over all the *enumerable* (see below), *owned keys* (**immediately present**) on the *source* object(s) and copies them (via `=` assignment only) to *target*. It also, helpfully, returns *target*, as you can see below:\n\n```js\nvar newObj = Object.assign( {}, myObject );\n\nnewObj.a;\t\t\t\t\t\t// 2\nnewObj.b === anotherObject;\t\t// true\nnewObj.c === anotherArray;\t\t// true\nnewObj.d === anotherFunction;\t// true\n```\n\n**Note:** In the next section, we describe \"property descriptors\" (property characteristics) and show the use of `Object.defineProperty(..)`. 
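For instance, a small sketch (using `Object.defineProperty(..)`, covered in the next section) showing that a `writable:false` characteristic on a source property does not survive the copy:

```js
var src = {};

Object.defineProperty( src, "a", {
	value: 2,
	writable: false,     // read-only on the source
	configurable: true,
	enumerable: true
} );

var tgt = Object.assign( {}, src );

// the value itself is copied...
tgt.a;    // 2

// ...but via plain `=` assignment, so the copy is a normal, writable property
Object.getOwnPropertyDescriptor( tgt, "a" ).writable;    // true

tgt.a = 42;
tgt.a;    // 42
```
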
The duplication that occurs for `Object.assign(..)` however is purely `=` style assignment, so any special characteristics of a property (like `writable`) on a source object **are not preserved** on the target object.\n\n### Property Descriptors\n\nPrior to ES5, the JavaScript language gave no direct way for your code to inspect or draw any distinction between the characteristics of properties, such as whether the property was read-only or not.\n\nBut as of ES5, all properties are described in terms of a **property descriptor**.\n\nConsider this code:\n\n```js\nvar myObject = {\n\ta: 2\n};\n\nObject.getOwnPropertyDescriptor( myObject, \"a\" );\n// {\n//    value: 2,\n//    writable: true,\n//    enumerable: true,\n//    configurable: true\n// }\n```\n\nAs you can see, the property descriptor (called a \"data descriptor\" since it's only for holding a data value) for our normal object property `a` is much more than just its `value` of `2`. It includes 3 other characteristics: `writable`, `enumerable`, and `configurable`.\n\nWhile we can see what the default values for the property descriptor characteristics are when we create a normal property, we can use `Object.defineProperty(..)` to add a new property, or modify an existing one (if it's `configurable`!), with the desired characteristics.\n\nFor example:\n\n```js\nvar myObject = {};\n\nObject.defineProperty( myObject, \"a\", {\n\tvalue: 2,\n\twritable: true,\n\tconfigurable: true,\n\tenumerable: true\n} );\n\nmyObject.a; // 2\n```\n\nUsing `defineProperty(..)`, we added the plain, normal `a` property to `myObject` in a manually explicit way. 
However, you generally wouldn't use this manual approach unless you wanted to modify one of the descriptor characteristics from its normal behavior.\n\n#### Writable\n\nThe ability for you to change the value of a property is controlled by `writable`.\n\nConsider:\n\n```js\nvar myObject = {};\n\nObject.defineProperty( myObject, \"a\", {\n\tvalue: 2,\n\twritable: false, // not writable!\n\tconfigurable: true,\n\tenumerable: true\n} );\n\nmyObject.a = 3;\n\nmyObject.a; // 2\n```\n\nAs you can see, our modification of the `value` silently failed. If we try in `strict mode`, we get an error:\n\n```js\n\"use strict\";\n\nvar myObject = {};\n\nObject.defineProperty( myObject, \"a\", {\n\tvalue: 2,\n\twritable: false, // not writable!\n\tconfigurable: true,\n\tenumerable: true\n} );\n\nmyObject.a = 3; // TypeError\n```\n\nThe `TypeError` tells us we cannot change a non-writable property.\n\n**Note:** We will discuss getters/setters shortly, but briefly, you can observe that `writable:false` means a value cannot be changed, which is somewhat equivalent to if you defined a no-op setter. Actually, your no-op setter would need to throw a `TypeError` when called, to be truly conformant to `writable:false`.\n\n#### Configurable\n\nAs long as a property is currently configurable, we can modify its descriptor definition, using the same `defineProperty(..)` utility.\n\n```js\nvar myObject = {\n\ta: 2\n};\n\nmyObject.a = 3;\nmyObject.a;\t\t\t\t\t// 3\n\nObject.defineProperty( myObject, \"a\", {\n\tvalue: 4,\n\twritable: true,\n\tconfigurable: false,\t// not configurable!\n\tenumerable: true\n} );\n\nmyObject.a;\t\t\t\t\t// 4\nmyObject.a = 5;\nmyObject.a;\t\t\t\t\t// 5\n\nObject.defineProperty( myObject, \"a\", {\n\tvalue: 6,\n\twritable: true,\n\tconfigurable: true,\n\tenumerable: true\n} ); // TypeError\n```\n\nThe final `defineProperty(..)` call results in a TypeError, regardless of `strict mode`, if you attempt to change the descriptor definition of a non-configurable property. 
Be careful: as you can see, changing `configurable` to `false` is a **one-way action, and cannot be undone!**\n\n**Note:** There's a nuanced exception to be aware of: even if the property is already `configurable:false`, `writable` can always be changed from `true` to `false` without error, but not back to `true` if already `false`.\n\nAnother thing `configurable:false` prevents is the ability to use the `delete` operator to remove an existing property.\n\n```js\nvar myObject = {\n\ta: 2\n};\n\nmyObject.a;\t\t\t\t// 2\ndelete myObject.a;\nmyObject.a;\t\t\t\t// undefined\n\nObject.defineProperty( myObject, \"a\", {\n\tvalue: 2,\n\twritable: true,\n\tconfigurable: false,\n\tenumerable: true\n} );\n\nmyObject.a;\t\t\t\t// 2\ndelete myObject.a;\nmyObject.a;\t\t\t\t// 2\n```\n\nAs you can see, the last `delete` call failed (silently) because we made the `a` property non-configurable.\n\n`delete` is only used to remove object properties (which can be removed) directly from the object in question. If an object property is the last remaining *reference* to some object/function, and you `delete` it, that removes the reference and now that unreferenced object/function can be garbage collected. But, it is **not** proper to think of `delete` as a tool to free up allocated memory as it does in other languages (like C/C++). `delete` is just an object property removal operation -- nothing more.\n\n#### Enumerable\n\nThe final descriptor characteristic we will mention here (there are two others, which we deal with shortly when we discuss getter/setters) is `enumerable`.\n\nThe name probably makes it obvious, but this characteristic controls if a property will show up in certain object-property enumerations, such as the `for..in` loop. Set to `false` to keep it from showing up in such enumerations, even though it's still completely accessible. 
Set to `true` to keep it present.\n\nAll normal user-defined properties are defaulted to `enumerable`, as this is most commonly what you want. But if you have a special property you want to hide from enumeration, set it to `enumerable:false`.\n\nWe'll demonstrate enumerability in much more detail shortly, so keep a mental bookmark on this topic.\n\n### Immutability\n\nIt is sometimes desired to make properties or objects that cannot be changed (either by accident or intentionally). ES5 adds support for handling that in a variety of different nuanced ways.\n\nIt's important to note that **all** of these approaches create shallow immutability. That is, they affect only the object and its direct property characteristics. If an object has a reference to another object (array, object, function, etc), the *contents* of that object are not affected, and remain mutable.\n\n```js\nmyImmutableObject.foo; // [1,2,3]\nmyImmutableObject.foo.push( 4 );\nmyImmutableObject.foo; // [1,2,3,4]\n```\n\nWe assume in this snippet that `myImmutableObject` is already created and protected as immutable. But, to also protect the contents of `myImmutableObject.foo` (which is its own object -- array), you would also need to make `foo` immutable, using one or more of the following functionalities.\n\n**Note:** It is not terribly common to create deeply entrenched immutable objects in JS programs. 
Special cases can certainly call for it, but as a general design pattern, if you find yourself wanting to *seal* or *freeze* all your objects, you may want to take a step back and reconsider your program design to be more robust to potential changes in objects' values.\n\n#### Object Constant\n\nBy combining `writable:false` and `configurable:false`, you can essentially create a *constant* (cannot be changed, redefined or deleted) as an object property, like:\n\n```js\nvar myObject = {};\n\nObject.defineProperty( myObject, \"FAVORITE_NUMBER\", {\n\tvalue: 42,\n\twritable: false,\n\tconfigurable: false\n} );\n```\n\n#### Prevent Extensions\n\nIf you want to prevent an object from having new properties added to it, but otherwise leave the rest of the object's properties alone, call `Object.preventExtensions(..)`:\n\n```js\nvar myObject = {\n\ta: 2\n};\n\nObject.preventExtensions( myObject );\n\nmyObject.b = 3;\nmyObject.b; // undefined\n```\n\nIn `non-strict mode`, the creation of `b` fails silently. 
In `strict mode`, it throws a `TypeError`.\n\n#### Seal\n\n`Object.seal(..)` creates a \"sealed\" object, which means it takes an existing object and essentially calls `Object.preventExtensions(..)` on it, but also marks all its existing properties as `configurable:false`.\n\nSo, not only can you not add any more properties, but you also cannot reconfigure or delete any existing properties (though you *can* still modify their values).\n\n#### Freeze\n\n`Object.freeze(..)` creates a frozen object, which means it takes an existing object and essentially calls `Object.seal(..)` on it, but it also marks all \"data accessor\" properties as `writable:false`, so that their values cannot be changed.\n\nThis approach is the highest level of immutability that you can attain for an object itself, as it prevents any changes to the object or to any of its direct properties (though, as mentioned above, the contents of any referenced other objects are unaffected).\n\nYou could \"deep freeze\" an object by calling `Object.freeze(..)` on the object, and then recursively iterating over all objects it references (which would have been unaffected thus far), and calling `Object.freeze(..)` on them as well. Be careful, though, as that could affect other (shared) objects you're not intending to affect.\n\n\n### `[[Get]]`\n\nThere's a subtle, but important, detail about how property accesses are performed.\n\nConsider:\n\n```js\nvar myObject = {\n\ta: 2\n};\n\nmyObject.a; // 2\n```\n\nThe `myObject.a` is a property access, but it doesn't *just* look in `myObject` for a property of the name `a`, as it might seem.\n\nAccording to the spec, the code above actually performs a `[[Get]]` operation (kinda like a function call: `[[Get]]()`) on the `myObject`. 
The default built-in `[[Get]]` operation for an object *first* inspects the object for a property of the requested name, and if it finds it, it will return the value accordingly.\n\nHowever, the `[[Get]]` algorithm defines other important behavior if it does *not* find a property of the requested name. We will examine in Chapter 5 what happens *next* (traversal of the `[[Prototype]]` chain, if any).\n\nBut one important result of this `[[Get]]` operation is that if it cannot through any means come up with a value for the requested property, it instead returns the value `undefined`.\n\n```js\nvar myObject = {\n\ta: 2\n};\n\nmyObject.b; // undefined\n```\n\nThis behavior is different from when you reference *variables* by their identifier names. If you reference a variable that cannot be resolved within the applicable lexical scope look-up, the result is not `undefined` as it is for object properties, but instead a `ReferenceError` is thrown.\n\n```js\nvar myObject = {\n\ta: undefined\n};\n\nmyObject.a; // undefined\n\nmyObject.b; // undefined\n```\n\nFrom a *value* perspective, there is no difference between these two references -- they both result in `undefined`. However, the `[[Get]]` operation underneath, though subtle at a glance, potentially performed a bit more \"work\" for the reference `myObject.b` than for the reference `myObject.a`.\n\nInspecting only the value results, you cannot distinguish whether a property exists and holds the explicit value `undefined`, or whether the property does *not* exist and `undefined` was the default return value after `[[Get]]` failed to return something explicitly. 
However, we will see shortly how you *can* distinguish these two scenarios.\n\n### `[[Put]]`\n\nSince there's an internally defined `[[Get]]` operation for getting a value from a property, it should be obvious there's also a default `[[Put]]` operation.\n\nIt may be tempting to think that an assignment to a property on an object would just invoke `[[Put]]` to set or create that property on the object in question. But the situation is more nuanced than that.\n\nWhen invoking `[[Put]]`, how it behaves differs based on a number of factors, including (most impactfully) whether the property is already present on the object or not.\n\nIf the property is present, the `[[Put]]` algorithm will roughly check:\n\n1. Is the property an accessor descriptor (see \"Getters & Setters\" section below)? **If so, call the setter, if any.**\n2. Is the property a data descriptor with `writable` of `false`? **If so, silently fail in `non-strict mode`, or throw `TypeError` in `strict mode`.**\n3. Otherwise, set the value to the existing property as normal.\n\nIf the property is not yet present on the object in question, the `[[Put]]` operation is even more nuanced and complex. We will revisit this scenario in Chapter 5 when we discuss `[[Prototype]]` to give it more clarity.\n\n### Getters & Setters\n\nThe default `[[Put]]` and `[[Get]]` operations for objects completely control how values are set to existing or new properties, or retrieved from existing properties, respectively.\n\n**Note:** Using future/advanced capabilities of the language, it may be possible to override the default `[[Get]]` or `[[Put]]` operations for an entire object (not just per property). This is beyond the scope of our discussion in this book, but will be covered later in the \"You Don't Know JS\" series.\n\nES5 introduced a way to override part of these default operations, not on an object level but a per-property level, through the use of getters and setters. 
Getters are properties which actually call a hidden function to retrieve a value. Setters are properties which actually call a hidden function to set a value.\n\nWhen you define a property to have either a getter or a setter or both, its definition becomes an \"accessor descriptor\" (as opposed to a \"data descriptor\"). For accessor-descriptors, the `value` and `writable` characteristics of the descriptor are moot and ignored, and instead JS considers the `set` and `get` characteristics of the property (as well as `configurable` and `enumerable`).\n\nConsider:\n\n```js\nvar myObject = {\n\t// define a getter for `a`\n\tget a() {\n\t\treturn 2;\n\t}\n};\n\nObject.defineProperty(\n\tmyObject,\t// target\n\t\"b\",\t\t// property name\n\t{\t\t\t// descriptor\n\t\t// define a getter for `b`\n\t\tget: function(){ return this.a * 2 },\n\n\t\t// make sure `b` shows up as an object property\n\t\tenumerable: true\n\t}\n);\n\nmyObject.a; // 2\n\nmyObject.b; // 4\n```\n\nEither through object-literal syntax with `get a() { .. }` or through explicit definition with `defineProperty(..)`, in both cases we created a property on the object that actually doesn't hold a value, but whose access automatically results in a hidden function call to the getter function, with whatever value it returns being the result of the property access.\n\n```js\nvar myObject = {\n\t// define a getter for `a`\n\tget a() {\n\t\treturn 2;\n\t}\n};\n\nmyObject.a = 3;\n\nmyObject.a; // 2\n```\n\nSince we only defined a getter for `a`, if we try to set the value of `a` later, the set operation won't throw an error but will just silently throw the assignment away. Even if there was a valid setter, our custom getter is hard-coded to return only `2`, so the set operation would be moot.\n\nTo make this scenario more sensible, properties should also be defined with setters, which override the default `[[Put]]` operation (aka, assignment), per-property, just as you'd expect. 
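\n\n**Note:** The \"silently throw the assignment away\" behavior described above applies to `non-strict mode` code; in `strict mode`, assigning to a getter-only property throws. Here's a minimal sketch illustrating that, reusing the same getter-only object shape:\n\n```js\n\"use strict\";\n\nvar myObject = {\n\t// define a getter for `a` (no setter)\n\tget a() {\n\t\treturn 2;\n\t}\n};\n\nmyObject.a = 3; // TypeError\n```\n\n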
You will almost certainly want to always declare both getter and setter (having only one or the other often leads to unexpected/surprising behavior):\n\n```js\nvar myObject = {\n\t// define a getter for `a`\n\tget a() {\n\t\treturn this._a_;\n\t},\n\n\t// define a setter for `a`\n\tset a(val) {\n\t\tthis._a_ = val * 2;\n\t}\n};\n\nmyObject.a = 2;\n\nmyObject.a; // 4\n```\n\n**Note:** In this example, we actually store the specified value `2` of the assignment (`[[Put]]` operation) into another variable `_a_`. The `_a_` name is purely by convention for this example and implies nothing special about its behavior -- it's a normal property like any other.\n\n### Existence\n\nWe showed earlier that a property access like `myObject.a` may result in an `undefined` value if either the explicit `undefined` is stored there or the `a` property doesn't exist at all. So, if the value is the same in both cases, how else do we distinguish them?\n\nWe can ask an object if it has a certain property *without* asking to get that property's value:\n\n```js\nvar myObject = {\n\ta: 2\n};\n\n(\"a\" in myObject);\t\t\t\t// true\n(\"b\" in myObject);\t\t\t\t// false\n\nmyObject.hasOwnProperty( \"a\" );\t// true\nmyObject.hasOwnProperty( \"b\" );\t// false\n```\n\nThe `in` operator will check to see if the property is *in* the object, or if it exists at any higher level of the `[[Prototype]]` chain object traversal (see Chapter 5). By contrast, `hasOwnProperty(..)` checks to see if *only* `myObject` has the property or not, and will *not* consult the `[[Prototype]]` chain. We'll come back to the important differences between these two operations in Chapter 5 when we explore `[[Prototype]]`s in detail.\n\n`hasOwnProperty(..)` is accessible for all normal objects via delegation to `Object.prototype` (see Chapter 5). But it's possible to create an object that does not link to `Object.prototype` (via `Object.create(null)` -- see Chapter 5). 
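\n\nFor example, a quick sketch (using a hypothetical `bareObject`, purely for illustration):\n\n```js\nvar bareObject = Object.create( null );\nbareObject.a = 2;\n\n(\"a\" in bareObject);\t\t\t\t// true\nbareObject.hasOwnProperty( \"a\" );\t// TypeError -- no such method!\n```\n\n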
In this case, a method call like `myObject.hasOwnProperty(..)` would fail.\n\nIn that scenario, a more robust way of performing such a check is `Object.prototype.hasOwnProperty.call(myObject,\"a\")`, which borrows the base `hasOwnProperty(..)` method and uses *explicit `this` binding* (see Chapter 2) to apply it against our `myObject`.\n\n**Note:** The `in` operator has the appearance that it will check for the existence of a *value* inside a container, but it actually checks for the existence of a property name. This difference is important to note with respect to arrays, as the temptation to try a check like `4 in [2, 4, 6]` is strong, but this will not behave as expected.\n\n#### Enumeration\n\nPreviously, we explained briefly the idea of \"enumerability\" when we looked at the `enumerable` property descriptor characteristic. Let's revisit that and examine it in more close detail.\n\n```js\nvar myObject = { };\n\nObject.defineProperty(\n\tmyObject,\n\t\"a\",\n\t// make `a` enumerable, as normal\n\t{ enumerable: true, value: 2 }\n);\n\nObject.defineProperty(\n\tmyObject,\n\t\"b\",\n\t// make `b` NON-enumerable\n\t{ enumerable: false, value: 3 }\n);\n\nmyObject.b; // 3\n(\"b\" in myObject); // true\nmyObject.hasOwnProperty( \"b\" ); // true\n\n// .......\n\nfor (var k in myObject) {\n\tconsole.log( k, myObject[k] );\n}\n// \"a\" 2\n```\n\nYou'll notice that `myObject.b` in fact **exists** and has an accessible value, but it doesn't show up in a `for..in` loop (though, surprisingly, it **is** revealed by the `in` operator existence check). That's because \"enumerable\" basically means \"will be included if the object's properties are iterated through\".\n\n**Note:** `for..in` loops applied to arrays can give somewhat unexpected results, in that the enumeration of an array will include not only all the numeric indices, but also any enumerable properties. 
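\n\nFor example, a quick sketch of that surprise (the extra `label` property is hypothetical, just for illustration):\n\n```js\nvar myArray = [ 2, 4, 6 ];\nmyArray.label = \"evens\";\n\nfor (var k in myArray) {\n\tconsole.log( k );\n}\n// \"0\" \"1\" \"2\" \"label\"\n```\n\n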
It's a good idea to use `for..in` loops *only* on objects, and traditional `for` loops with numeric index iteration for the values stored in arrays.\n\nAnother way that enumerable and non-enumerable properties can be distinguished:\n\n```js\nvar myObject = { };\n\nObject.defineProperty(\n\tmyObject,\n\t\"a\",\n\t// make `a` enumerable, as normal\n\t{ enumerable: true, value: 2 }\n);\n\nObject.defineProperty(\n\tmyObject,\n\t\"b\",\n\t// make `b` non-enumerable\n\t{ enumerable: false, value: 3 }\n);\n\nmyObject.propertyIsEnumerable( \"a\" ); // true\nmyObject.propertyIsEnumerable( \"b\" ); // false\n\nObject.keys( myObject ); // [\"a\"]\nObject.getOwnPropertyNames( myObject ); // [\"a\", \"b\"]\n```\n\n`propertyIsEnumerable(..)` tests whether the given property name exists *directly* on the object and is also `enumerable:true`.\n\n`Object.keys(..)` returns an array of all enumerable properties, whereas `Object.getOwnPropertyNames(..)` returns an array of *all* properties, enumerable or not.\n\nWhereas `in` vs. `hasOwnProperty(..)` differ in whether they consult the `[[Prototype]]` chain or not, `Object.keys(..)` and `Object.getOwnPropertyNames(..)` both inspect *only* the direct object specified.\n\nThere's (currently) no built-in way to get a list of **all properties** which is equivalent to what the `in` operator test would consult (traversing all properties on the entire `[[Prototype]]` chain, as explained in Chapter 5). You could approximate such a utility by recursively traversing the `[[Prototype]]` chain of an object, and for each level, capturing the list from `Object.keys(..)` -- only enumerable properties.\n\n## Iteration\n\nThe `for..in` loop iterates over the list of enumerable properties on an object (including its `[[Prototype]]` chain). 
But what if you instead want to iterate over the values?\n\nWith numerically-indexed arrays, iterating over the values is typically done with a standard `for` loop, like:\n\n```js\nvar myArray = [1, 2, 3];\n\nfor (var i = 0; i < myArray.length; i++) {\n\tconsole.log( myArray[i] );\n}\n// 1 2 3\n```\n\nThis isn't iterating over the values, though, but iterating over the indices, where you then use the index to reference the value, as `myArray[i]`.\n\nES5 also added several iteration helpers for arrays, including `forEach(..)`, `every(..)`, and `some(..)`. Each of these helpers accepts a function callback to apply to each element in the array, differing only in how they respectively respond to a return value from the callback.\n\n`forEach(..)` will iterate over all values in the array, and ignores any callback return values. `every(..)` keeps going until the end *or* the callback returns a `false` (or \"falsy\") value, whereas `some(..)` keeps going until the end *or* the callback returns a `true` (or \"truthy\") value.\n\nThese special return values inside `every(..)` and `some(..)` act somewhat like a `break` statement inside a normal `for` loop, in that they stop the iteration early before it reaches the end.\n\nIf you iterate on an object with a `for..in` loop, you're also only getting at the values indirectly, because it's actually iterating only over the enumerable properties of the object, leaving you to access the properties manually to get the values.\n\n**Note:** As contrasted with iterating over an array's indices in a numerically ordered way (`for` loop or other iterators), the order of iteration over an object's properties is **not guaranteed** and may vary between different JS engines. **Do not rely** on any observed ordering for anything that requires consistency among environments, as any observed agreement is unreliable.\n\nBut what if you want to iterate over the values directly instead of the array indices (or object properties)? 
Helpfully, ES6 adds a `for..of` loop syntax for iterating over arrays (and objects, if the object defines its own custom iterator):\n\n```js\nvar myArray = [ 1, 2, 3 ];\n\nfor (var v of myArray) {\n\tconsole.log( v );\n}\n// 1\n// 2\n// 3\n```\n\nThe `for..of` loop asks for an iterator object (from a default internal function known as `@@iterator` in spec-speak) of the *thing* to be iterated, and the loop then iterates over the successive return values from calling that iterator object's `next()` method, once for each loop iteration.\n\nArrays have a built-in `@@iterator`, so `for..of` works easily on them, as shown. But let's manually iterate the array, using the built-in `@@iterator`, to see how it works:\n\n```js\nvar myArray = [ 1, 2, 3 ];\nvar it = myArray[Symbol.iterator]();\n\nit.next(); // { value:1, done:false }\nit.next(); // { value:2, done:false }\nit.next(); // { value:3, done:false }\nit.next(); // { done:true }\n```\n\n**Note:** We get at the `@@iterator` *internal property* of an object using an ES6 `Symbol`: `Symbol.iterator`. We briefly mentioned `Symbol` semantics earlier in the chapter (see \"Computed Property Names\"), so the same reasoning applies here. You'll always want to reference such special properties by `Symbol` name reference instead of by the special value it may hold. Also, despite the name's implications, `@@iterator` is **not the iterator object** itself, but a **function that returns** the iterator object -- a subtle but important detail!\n\nAs the above snippet reveals, the return value from an iterator's `next()` call is an object of the form `{ value: .. , done: .. }`, where `value` is the current iteration value, and `done` is a `boolean` that indicates if there's more to iterate.\n\nNotice the value `3` was returned with a `done:false`, which seems strange at first glance. 
You have to call the `next()` a fourth time (which the `for..of` loop in the previous snippet automatically does) to get `done:true` and know you're truly done iterating. The reason for this quirk is beyond the scope of what we'll discuss here, but it comes from the semantics of ES6 generator functions.\n\nWhile arrays do automatically iterate in `for..of` loops, regular objects **do not have a built-in `@@iterator`**. The reasons for this intentional omission are more complex than we will examine here, but in general it was better to not include some implementation that could prove troublesome for future types of objects.\n\nIt *is* possible to define your own default `@@iterator` for any object that you care to iterate over. For example:\n\n```js\nvar myObject = {\n\ta: 2,\n\tb: 3\n};\n\nObject.defineProperty( myObject, Symbol.iterator, {\n\tenumerable: false,\n\twritable: false,\n\tconfigurable: true,\n\tvalue: function() {\n\t\tvar o = this;\n\t\tvar idx = 0;\n\t\tvar ks = Object.keys( o );\n\t\treturn {\n\t\t\tnext: function() {\n\t\t\t\treturn {\n\t\t\t\t\tvalue: o[ks[idx++]],\n\t\t\t\t\tdone: (idx > ks.length)\n\t\t\t\t};\n\t\t\t}\n\t\t};\n\t}\n} );\n\n// iterate `myObject` manually\nvar it = myObject[Symbol.iterator]();\nit.next(); // { value:2, done:false }\nit.next(); // { value:3, done:false }\nit.next(); // { value:undefined, done:true }\n\n// iterate `myObject` with `for..of`\nfor (var v of myObject) {\n\tconsole.log( v );\n}\n// 2\n// 3\n```\n\n**Note:** We used `Object.defineProperty(..)` to define our custom `@@iterator` (mostly so we could make it non-enumerable), but using the `Symbol` as a *computed property name* (covered earlier in this chapter), we could have declared it directly, like `var myObject = { a:2, b:3, [Symbol.iterator]: function(){ /* .. 
*/ } }`.\n\nEach time the `for..of` loop calls `next()` on `myObject`'s iterator object, the internal pointer will advance and return back the next value from the object's properties list (see a previous note about iteration ordering on object properties/values).\n\nThe iteration we just demonstrated is a simple value-by-value iteration, but you can of course define arbitrarily complex iterations for your custom data structures, as you see fit. Custom iterators combined with ES6's `for..of` loop are a powerful new syntactic tool for manipulating user-defined objects.\n\nFor example, a list of `Pixel` objects (with `x` and `y` coordinate values) could decide to order its iteration based on the linear distance from the `(0,0)` origin, or filter out points that are \"too far away\", etc. As long as your iterator returns the expected `{ value: .. }` return values from `next()` calls, and a `{ done: true }` after the iteration is complete, ES6's `for..of` can iterate over it.\n\nIn fact, you can even generate \"infinite\" iterators which never \"finish\" and always return a new value (such as a random number, an incremented value, a unique identifier, etc), though you probably will not use such iterators with an unbounded `for..of` loop, as it would never end and would hang your program.\n\n```js\nvar randoms = {\n\t[Symbol.iterator]: function() {\n\t\treturn {\n\t\t\tnext: function() {\n\t\t\t\treturn { value: Math.random() };\n\t\t\t}\n\t\t};\n\t}\n};\n\nvar randoms_pool = [];\nfor (var n of randoms) {\n\trandoms_pool.push( n );\n\n\t// don't proceed unbounded!\n\tif (randoms_pool.length === 100) break;\n}\n```\n\nThis iterator will generate random numbers \"forever\", so we're careful to only pull out 100 values so our program doesn't hang.\n\n## Review (TL;DR)\n\nObjects in JS have both a literal form (such as `var a = { .. }`) and a constructed form (such as `var a = new Array(..)`). 
The literal form is almost always preferred, but the constructed form offers, in some cases, more creation options.\n\nMany people mistakenly claim \"everything in JavaScript is an object\", but this is incorrect. Objects are one of the 6 (or 7, depending on your perspective) primary types. Objects have sub-types, including `function`, and also can be behavior-specialized, like `[object Array]` as the internal label representing the array object sub-type.\n\nObjects are collections of key/value pairs. The values can be accessed as properties, via `.propName` or `[\"propName\"]` syntax. Whenever a property is accessed, the engine actually invokes the internal default `[[Get]]` operation (and `[[Put]]` for setting values), which not only looks for the property directly on the object, but which will traverse the `[[Prototype]]` chain (see Chapter 5) if not found.\n\nProperties have certain characteristics that can be controlled through property descriptors, such as `writable` and `configurable`. In addition, objects can have their mutability (and that of their properties) controlled to various levels of immutability using `Object.preventExtensions(..)`, `Object.seal(..)`, and `Object.freeze(..)`.\n\nProperties don't have to contain values -- they can be \"accessor properties\" as well, with getters/setters. They can also be either *enumerable* or not, which controls if they show up in `for..in` loop iterations, for instance.\n\nYou can also iterate over **the values** in data structures (arrays, objects, etc) using the ES6 `for..of` syntax, which looks for either a built-in or custom `@@iterator` object consisting of a `next()` method to advance through the data values one at a time.\n"
  },
  {
    "path": "this & object prototypes/ch4.md",
    "content": "# You Don't Know JS: *this* & Object Prototypes\n# Chapter 4: Mixing (Up) \"Class\" Objects\n\nFollowing our exploration of objects from the previous chapter, it's natural that we now turn our attention to \"object oriented (OO) programming\", with \"classes\". We'll first look at \"class orientation\" as a design pattern, before examining the mechanics of \"classes\": \"instantiation\", \"inheritance\" and \"(relative) polymorphism\".\n\nWe'll see that these concepts don't really map very naturally to the object mechanism in JS, and the lengths (mixins, etc.) many JavaScript developers go to overcome such challenges.\n\n**Note:** This chapter spends quite a bit of time (the first half!) on heavy \"object oriented programming\" theory. We eventually relate these ideas to real concrete JavaScript code in the second half, when we talk about \"Mixins\". But there's a lot of concept and pseudo-code to wade through first, so don't get lost -- just stick with it!\n\n## Class Theory\n\n\"Class/Inheritance\" describes a certain form of code organization and architecture -- a way of modeling real world problem domains in our software.\n\nOO or class oriented programming stresses that data intrinsically has associated behavior (of course, different depending on the type and nature of the data!) that operates on it, so proper design is to package up (aka, encapsulate) the data and the behavior together. This is sometimes called \"data structures\" in formal computer science.\n\nFor example, a series of characters that represents a word or phrase is usually called a \"string\". The characters are the data. But you almost never just care about the data, you usually want to *do things* with the data, so the behaviors that can apply *to* that data (calculating its length, appending data, searching, etc.) 
are all designed as methods of a `String` class.\n\nAny given string is just an instance of this class, which means that it's a neatly collected packaging of both the character data and the functionality we can perform on it.\n\nClasses also imply a way of *classifying* a certain data structure. The way we do this is to think about any given structure as a specific variation of a more general base definition.\n\nLet's explore this classification process by looking at a commonly cited example. A *car* can be described as a specific implementation of a more general \"class\" of thing, called a *vehicle*.\n\nWe model this relationship in software with classes by defining a `Vehicle` class and a `Car` class.\n\nThe definition of `Vehicle` might include things like propulsion (engines, etc.), the ability to carry people, etc., which would all be the behaviors. What we define in `Vehicle` is all the stuff that is common to all (or most of) the different types of vehicles (the \"planes, trains, and automobiles\").\n\nIt might not make sense in our software to re-define the basic essence of \"ability to carry people\" over and over again for each different type of vehicle. Instead, we define that capability once in `Vehicle`, and then when we define `Car`, we simply indicate that it \"inherits\" (or \"extends\") the base definition from `Vehicle`. The definition of `Car` is said to specialize the general `Vehicle` definition.\n\nWhile `Vehicle` and `Car` collectively define the behavior by way of methods, the data in an instance would be things like the unique VIN of a specific car, etc.\n\n**And thus, classes, inheritance, and instantiation emerge.**\n\nAnother key concept with classes is \"polymorphism\", which describes the idea that a general behavior from a parent class can be overridden in a child class to give it more specifics. 
In fact, relative polymorphism lets us reference the base behavior from the overridden behavior.\n\nClass theory strongly suggests that a parent class and a child class share the same method name for a certain behavior, so that the child overrides the parent (differentially). As we'll see later, doing so in your JavaScript code is opting into frustration and code brittleness.\n\n### \"Class\" Design Pattern\n\nYou may never have thought about classes as a \"design pattern\", since it's most common to see discussion of popular \"OO Design Patterns\", like \"Iterator\", \"Observer\", \"Factory\", \"Singleton\", etc. As presented this way, it's almost an assumption that OO classes are the lower-level mechanics by which we implement all (higher level) design patterns, as if OO is a given foundation for *all* (proper) code.\n\nDepending on your level of formal education in programming, you may have heard of \"procedural programming\" as a way of describing code which only consists of procedures (aka, functions) calling other functions, without any higher abstractions. You may have been taught that classes were the *proper* way to transform procedural-style \"spaghetti code\" into well-formed, well-organized code.\n\nOf course, if you have experience with \"functional programming\" (Monads, etc.), you know very well that classes are just one of several common design patterns. But for others, this may be the first time you've asked yourself if classes really are a fundamental foundation for code, or if they are an optional abstraction on top of code.\n\nSome languages (like Java) don't give you the choice, so it's not very *optional* at all -- everything's a class. Other languages like C/C++ or PHP give you both procedural and class-oriented syntaxes, and it's left more to the developer's choice which style or mixture of styles is appropriate.\n\n### JavaScript \"Classes\"\n\nWhere does JavaScript fall in this regard? 
JS has had *some* class-like syntactic elements (like `new` and `instanceof`) for quite a while, and more recently in ES6, some additions, like the `class` keyword (see Appendix A).\n\nBut does that mean JavaScript actually *has* classes? Plain and simple: **No.**\n\nSince classes are a design pattern, you *can*, with quite a bit of effort (as we'll see throughout the rest of this chapter), implement approximations for much of classical class functionality. JS tries to satisfy the extremely pervasive *desire* to design with classes by providing seemingly class-like syntax.\n\nWhile we may have a syntax that looks like classes, it's as if JavaScript mechanics are fighting against you using the *class design pattern*, because behind the curtain, the mechanisms that you build on are operating quite differently. Syntactic sugar and (extremely widely used) JS \"Class\" libraries go a long way toward hiding this reality from you, but sooner or later you will face the fact that the *classes* you have in other languages are not like the \"classes\" you're faking in JS.\n\nWhat this boils down to is that classes are an optional pattern in software design, and you have the choice to use them in JavaScript or not. Since many developers have a strong affinity for class-oriented software design, we'll spend the rest of this chapter exploring what it takes to maintain the illusion of classes with what JS provides, and the pain points we experience.\n\n## Class Mechanics\n\nIn many class-oriented languages, the \"standard library\" provides a \"stack\" data structure (push, pop, etc.) as a `Stack` class. 
This class would have an internal set of variables that stores the data, and it would have a set of publicly accessible behaviors (\"methods\") provided by the class, which gives your code the ability to interact with the (hidden) data (adding & removing data, etc.).\n\nBut in such languages, you don't really operate directly on `Stack` (unless making a **Static** class member reference, which is outside the scope of our discussion). The `Stack` class is merely an abstract explanation of what *any* \"stack\" should do, but it's not itself *a* \"stack\". You must **instantiate** the `Stack` class before you have a concrete data structure *thing* to operate against.\n\n### Building\n\nThe traditional metaphor for \"class\" and \"instance\" based thinking comes from building construction.\n\nAn architect plans out all the characteristics of a building: how wide, how tall, how many windows and in what locations, even what type of material to use for the walls and roof. She doesn't necessarily care, at this point, *where* the building will be built, nor does she care *how many* copies of that building will be built.\n\nShe also doesn't care very much about the contents of the building -- the furniture, wall paper, ceiling fans, etc. -- only what type of structure they will be contained by.\n\nThe architectural blue-prints she produces are only *plans* for a building. They don't actually constitute a building we can walk into and sit down. We need a builder for that task. A builder will take those plans and follow them, exactly, as he *builds* the building. In a very real sense, he is *copying* the intended characteristics from the plans to the physical building.\n\nOnce complete, the building is a physical instantiation of the blue-print plans, hopefully an essentially perfect *copy*. And then the builder can move to the open lot next door and do it all over again, creating yet another *copy*.\n\nThe relationship between building and blue-print is indirect. 
You can examine a blue-print to understand how the building was structured, for any parts where direct inspection of the building itself was insufficient. But if you want to open a door, you have to go to the building itself -- the blue-print merely has lines drawn on a page that *represent* where the door should be.\n\nA class is a blue-print. To actually *get* an object we can interact with, we must build (aka, \"instantiate\") something from the class. The end result of such \"construction\" is an object, typically called an \"instance\", which we can directly call methods on and access any public data properties from, as necessary.\n\n**This object is a *copy*** of all the characteristics described by the class.\n\nYou likely wouldn't expect to walk into a building and find, framed and hanging on the wall, a copy of the blue-prints used to plan the building, though the blue-prints are probably on file with a public records office. Similarly, you don't generally use an object instance to directly access and manipulate its class, but it is usually possible to at least determine *which class* an object instance comes from.\n\nIt's more useful to consider the direct relationship of a class to an object instance, rather than any indirect relationship between an object instance and the class it came from. **A class is instantiated into object form by a copy operation.**\n\n<img src=\"fig1.png\">\n\nAs you can see, the arrows move from left to right, and from top to bottom, which indicates the copy operations that occur, both conceptually and physically.\n\n### Constructor\n\nInstances of classes are constructed by a special method of the class, usually of the same name as the class, called a *constructor*. 
This method's explicit job is to initialize any information (state) the instance will need.\n\nFor example, consider this loose pseudo-code (invented syntax) for classes:\n\n```js\nclass CoolGuy {\n\tspecialTrick = nothing\n\n\tCoolGuy( trick ) {\n\t\tspecialTrick = trick\n\t}\n\n\tshowOff() {\n\t\toutput( \"Here's my trick: \", specialTrick )\n\t}\n}\n```\n\nTo *make* a `CoolGuy` instance, we would call the class constructor:\n\n```js\nJoe = new CoolGuy( \"jumping rope\" )\n\nJoe.showOff() // Here's my trick: jumping rope\n```\n\nNotice that the `CoolGuy` class has a constructor `CoolGuy()`, which is actually what we call when we say `new CoolGuy(..)`. We get an object back (an instance of our class) from the constructor, and we can call the method `showOff()`, which prints out that particular `CoolGuy`s special trick.\n\n*Obviously, jumping rope makes Joe a pretty cool guy.*\n\nThe constructor of a class *belongs* to the class, almost universally with the same name as the class. Also, constructors pretty much always need to be called with `new` to let the language engine know you want to construct a *new* class instance.\n\n## Class Inheritance\n\nIn class-oriented languages, not only can you define a class which can be instantiated itself, but you can define another class that **inherits** from the first class.\n\nThe second class is often said to be a \"child class\" whereas the first is the \"parent class\". These terms obviously come from the metaphor of parents and children, though the metaphors here are a bit stretched, as you'll see shortly.\n\nWhen a parent has a biological child, the genetic characteristics of the parent are copied into the child. Obviously, in most biological reproduction systems, there are two parents who co-equally contribute genes to the mix. But for the purposes of the metaphor, we'll assume just one parent.\n\nOnce the child exists, he or she is separate from the parent. 
The child was heavily influenced by the inheritance from his or her parent, but is unique and distinct. If a child ends up with red hair, that doesn't mean the parent's hair *was* or automatically *becomes* red.\n\nIn a similar way, once a child class is defined, it's separate and distinct from the parent class. The child class contains an initial copy of the behavior from the parent, but can then override any inherited behavior and even define new behavior.\n\nIt's important to remember that we're talking about parent and child **classes**, which aren't physical things. This is where the metaphor of parent and child gets a little confusing, because we actually should say that a parent class is like a parent's DNA and a child class is like a child's DNA. We have to make (aka \"instantiate\") a person out of each set of DNA to actually have a physical person to have a conversation with.\n\nLet's set aside biological parents and children, and look at inheritance through a slightly different lens: different types of vehicles. That's one of the most canonical (and often groan-worthy) metaphors to understand inheritance.\n\nLet's revisit the `Vehicle` and `Car` discussion from earlier in this chapter. 
Consider this loose pseudo-code (invented syntax) for inherited classes:\n\n```js\nclass Vehicle {\n\tengines = 1\n\n\tignition() {\n\t\toutput( \"Turning on my engine.\" )\n\t}\n\n\tdrive() {\n\t\tignition()\n\t\toutput( \"Steering and moving forward!\" )\n\t}\n}\n\nclass Car inherits Vehicle {\n\twheels = 4\n\n\tdrive() {\n\t\tinherited:drive()\n\t\toutput( \"Rolling on all \", wheels, \" wheels!\" )\n\t}\n}\n\nclass SpeedBoat inherits Vehicle {\n\tengines = 2\n\n\tignition() {\n\t\toutput( \"Turning on my \", engines, \" engines.\" )\n\t}\n\n\tpilot() {\n\t\tinherited:drive()\n\t\toutput( \"Speeding through the water with ease!\" )\n\t}\n}\n```\n\n**Note:** For clarity and brevity, constructors for these classes have been omitted.\n\nWe define the `Vehicle` class to assume an engine, a way to turn on the ignition, and a way to drive around. But you wouldn't ever manufacture just a generic \"vehicle\", so it's really just an abstract concept at this point.\n\nSo then we define two specific kinds of vehicle: `Car` and `SpeedBoat`. They each inherit the general characteristics of `Vehicle`, but then they specialize the characteristics appropriately for each kind. A car needs 4 wheels, and a speed boat needs 2 engines, which means it needs extra attention to turn on the ignition of both engines.\n\n### Polymorphism\n\n`Car` defines its own `drive()` method, which overrides the method of the same name it inherited from `Vehicle`. But then, `Car`s `drive()` method calls `inherited:drive()`, which indicates that `Car` can reference the original pre-overridden `drive()` it inherited. `SpeedBoat`s `pilot()` method also makes a reference to its inherited copy of `drive()`.\n\nThis technique is called \"polymorphism\", or \"virtual polymorphism\". 
More specifically to our current point, we'll call it \"relative polymorphism\".\n\nPolymorphism is a much broader topic than we will exhaust here, but our current \"relative\" semantics refers to one particular aspect: the idea that any method can reference another method (of the same or different name) at a higher level of the inheritance hierarchy. We say \"relative\" because we don't absolutely define which inheritance level (aka, class) we want to access, but rather relatively reference it by essentially saying \"look one level up\".\n\nIn many languages, the keyword `super` is used, in place of this example's `inherited:`, which leans on the idea that a \"super class\" is the parent/ancestor of the current class.\n\nAnother aspect of polymorphism is that a method name can have multiple definitions at different levels of the inheritance chain, and these definitions are automatically selected as appropriate when resolving which methods are being called.\n\nWe see two occurrences of that behavior in our example above: `drive()` is defined in both `Vehicle` and `Car`, and `ignition()` is defined in both `Vehicle` and `SpeedBoat`.\n\n**Note:** Another thing that traditional class-oriented languages give you via `super` is a direct way for the constructor of a child class to reference the constructor of its parent class. This is largely true because with real classes, the constructor belongs to the class. However, in JS, it's the reverse -- it's actually more appropriate to think of the \"class\" belonging to the constructor (the `Foo.prototype...` type references). 
Since in JS the relationship between child and parent exists only between the two `.prototype` objects of the respective constructors, the constructors themselves are not directly related, and thus there's no simple way to relatively reference one from the other (see Appendix A for ES6 `class` which \"solves\" this with `super`).\n\nAn interesting implication of polymorphism can be seen specifically with `ignition()`. Inside `pilot()`, a relative-polymorphic reference is made to (the inherited) `Vehicle`s version of `drive()`. But that `drive()` references an `ignition()` method just by name (no relative reference).\n\nWhich version of `ignition()` will the language engine use, the one from `Vehicle` or the one from `SpeedBoat`? **It uses the `SpeedBoat` version of `ignition()`.** If you *were* to instantiate the `Vehicle` class itself, and then call its `drive()`, the language engine would instead just use `Vehicle`s `ignition()` method definition.\n\nPut another way, the definition for the method `ignition()` *polymorphs* (changes) depending on which class (level of inheritance) you are referencing an instance of.\n\nThis may seem like overly deep academic detail. But understanding these details is necessary to properly contrast similar (but distinct) behaviors in JavaScript's `[[Prototype]]` mechanism.\n\nWhen classes are inherited, there is a way **for the classes themselves** (not the object instances created from them!) to *relatively* reference the class inherited from, and this relative reference is usually called `super`.\n\nRemember this figure from earlier:\n\n<img src=\"fig1.png\">\n\nNotice how for both instantiation (`a1`, `a2`, `b1`, and `b2`) *and* inheritance (`Bar`), the arrows indicate a copy operation.\n\nConceptually, it would seem a child class `Bar` can access behavior in its parent class `Foo` using a relative polymorphic reference (aka, `super`). 
However, in reality, the child class is merely given a copy of the inherited behavior from its parent class. If the child \"overrides\" a method it inherits, both the original and overridden versions of the method are actually maintained, so that they are both accessible.\n\nDon't let polymorphism confuse you into thinking a child class is linked to its parent class. A child class instead gets a copy of what it needs from the parent class. **Class inheritance implies copies.**\n\n### Multiple Inheritance\n\nRecall our earlier discussion of parent(s) and children and DNA? We said that the metaphor was a bit weird because biologically most offspring come from two parents. If a class could inherit from two other classes, it would more closely fit the parent/child metaphor.\n\nSome class-oriented languages allow you to specify more than one \"parent\" class to \"inherit\" from. Multiple-inheritance means that each parent class definition is copied into the child class.\n\nOn the surface, this seems like a powerful addition to class-orientation, giving us the ability to compose more functionality together. However, there are certainly some complicating questions that arise. If both parent classes provide a method called `drive()`, which version would a `drive()` reference in the child resolve to? Would you always have to manually specify which parent's `drive()` you meant, thus losing some of the gracefulness of polymorphic inheritance?\n\nThere's another variation, the so-called \"Diamond Problem\", which refers to the scenario where a child class \"D\" inherits from two parent classes (\"B\" and \"C\"), and each of those in turn inherits from a common \"A\" parent. If \"A\" provides a method `drive()`, and both \"B\" and \"C\" override (polymorph) that method, when `D` references `drive()`, which version should it use (`B:drive()` or `C:drive()`)?\n\n<img src=\"fig2.png\">\n\nThese complications go much deeper than this quick glance. 
We address them here only so we can contrast to how JavaScript's mechanisms work.\n\nJavaScript is simpler: it does not provide a native mechanism for \"multiple inheritance\". Many see this as a good thing, because the complexity savings more than make up for the \"reduced\" functionality. But this doesn't stop developers from trying to fake it in various ways, as we'll see next.\n\n## Mixins\n\nJavaScript's object mechanism does not *automatically* perform copy behavior when you \"inherit\" or \"instantiate\". Plainly, there are no \"classes\" in JavaScript to instantiate, only objects. And objects don't get copied to other objects, they get *linked together* (more on that in Chapter 5).\n\nSince observed class behaviors in other languages imply copies, let's examine how JS developers **fake** the *missing* copy behavior of classes in JavaScript: mixins. We'll look at two types of \"mixin\": **explicit** and **implicit**.\n\n### Explicit Mixins\n\nLet's again revisit our `Vehicle` and `Car` example from before. Since JavaScript will not automatically copy behavior from `Vehicle` to `Car`, we can instead create a utility that manually copies. 
Such a utility is often called `extend(..)` by many libraries/frameworks, but we will call it `mixin(..)` here for illustrative purposes.\n\n```js\n// vastly simplified `mixin(..)` example:\nfunction mixin( sourceObj, targetObj ) {\n\tfor (var key in sourceObj) {\n\t\t// only copy if not already present\n\t\tif (!(key in targetObj)) {\n\t\t\ttargetObj[key] = sourceObj[key];\n\t\t}\n\t}\n\n\treturn targetObj;\n}\n\nvar Vehicle = {\n\tengines: 1,\n\n\tignition: function() {\n\t\tconsole.log( \"Turning on my engine.\" );\n\t},\n\n\tdrive: function() {\n\t\tthis.ignition();\n\t\tconsole.log( \"Steering and moving forward!\" );\n\t}\n};\n\nvar Car = mixin( Vehicle, {\n\twheels: 4,\n\n\tdrive: function() {\n\t\tVehicle.drive.call( this );\n\t\tconsole.log( \"Rolling on all \" + this.wheels + \" wheels!\" );\n\t}\n} );\n```\n\n**Note:** Subtly but importantly, we're not dealing with classes anymore, because there are no classes in JavaScript. `Vehicle` and `Car` are just objects that we make copies from and to, respectively.\n\n`Car` now has a copy of the properties and functions from `Vehicle`. Technically, functions are not actually duplicated, but rather *references* to the functions are copied. So, `Car` now has a property called `ignition`, which is a copied reference to the `ignition()` function, as well as a property called `engines` with the copied value of `1` from `Vehicle`.\n\n`Car` *already* had a `drive` property (function), so that property reference was not overridden (see the `if` statement in `mixin(..)` above).\n\n#### \"Polymorphism\" Revisited\n\nLet's examine this statement: `Vehicle.drive.call( this )`. This is what I call \"explicit pseudo-polymorphism\". Recall in our previous pseudo-code this line was `inherited:drive()`, which we called \"relative polymorphism\".\n\nJavaScript does not have (prior to ES6; see Appendix A) a facility for relative polymorphism. 
So, **because both `Car` and `Vehicle` had a function of the same name: `drive()`**, to distinguish a call to one or the other, we must make an absolute (not relative) reference. We explicitly specify the `Vehicle` object by name, and call the `drive()` function on it.\n\nBut if we said `Vehicle.drive()`, the `this` binding for that function call would be the `Vehicle` object instead of the `Car` object (see Chapter 2), which is not what we want. So, instead we use `.call( this )` (Chapter 2) to ensure that `drive()` is executed in the context of the `Car` object.\n\n**Note:** If the function name identifier for `Car.drive()` hadn't overlapped with (aka, \"shadowed\"; see Chapter 5) `Vehicle.drive()`, we wouldn't have been exercising \"method polymorphism\". So, a reference to `Vehicle.drive()` would have been copied over by the `mixin(..)` call, and we could have accessed directly with `this.drive()`. The chosen identifier overlap **shadowing** is *why* we have to use the more complex *explicit pseudo-polymorphism* approach.\n\nIn class-oriented languages, which have relative polymorphism, the linkage between `Car` and `Vehicle` is established once, at the top of the class definition, which makes for only one place to maintain such relationships.\n\nBut because of JavaScript's peculiarities, explicit pseudo-polymorphism (because of shadowing!) creates brittle manual/explicit linkage **in every single function where you need such a (pseudo-)polymorphic reference**. This can significantly increase the maintenance cost. Moreover, while explicit pseudo-polymorphism can emulate the behavior of \"multiple inheritance\", it only increases the complexity and brittleness.\n\nThe result of such approaches is usually more complex, harder-to-read, *and* harder-to-maintain code. 
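To make that brittleness concrete, here's a hedged sketch (the `Truck` object is hypothetical, invented just for this illustration) showing how *every* overriding method must repeat the absolute `Vehicle` reference -- rename or replace `Vehicle`, and each of these call sites has to be found and fixed by hand:

```js
var Vehicle = {
	ignition: function() {
		console.log( "Turning on my engine." );
	},

	drive: function() {
		this.ignition();
		console.log( "Steering and moving forward!" );
	}
};

// hypothetical `Truck` object, for illustration only:
// each overridden method needs its own explicit
// `Vehicle.___.call( this )` linkage
var Truck = {
	ignition: function() {
		Vehicle.ignition.call( this );
		console.log( "Warming up the diesel." );
	},

	drive: function() {
		Vehicle.drive.call( this );
		console.log( "Hauling a heavy load!" );
	}
};

Truck.drive();
// Turning on my engine.
// Warming up the diesel.
// Steering and moving forward!
// Hauling a heavy load!
```

Two pseudo-polymorphic methods here means two manual linkages to maintain; a real class system would express both with a single `super`-style relative reference.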
**Explicit pseudo-polymorphism should be avoided wherever possible**, because the cost outweighs the benefit in most respects.\n\n#### Mixing Copies\n\nRecall the `mixin(..)` utility from above:\n\n```js\n// vastly simplified `mixin()` example:\nfunction mixin( sourceObj, targetObj ) {\n\tfor (var key in sourceObj) {\n\t\t// only copy if not already present\n\t\tif (!(key in targetObj)) {\n\t\t\ttargetObj[key] = sourceObj[key];\n\t\t}\n\t}\n\n\treturn targetObj;\n}\n```\n\nNow, let's examine how `mixin(..)` works. It iterates over the properties of `sourceObj` (`Vehicle` in our example) and if there's no matching property of that name in `targetObj` (`Car` in our example), it makes a copy. Since we're making the copy after the initial object exists, we are careful to not copy over a target property.\n\nIf we made the copies first, before specifying the `Car`-specific contents, we could omit this check against `targetObj`, but that's a little more clunky and less efficient, so it's generally less preferred:\n\n```js\n// alternate mixin, less \"safe\" against overwrites\nfunction mixin( sourceObj, targetObj ) {\n\tfor (var key in sourceObj) {\n\t\ttargetObj[key] = sourceObj[key];\n\t}\n\n\treturn targetObj;\n}\n\nvar Vehicle = {\n\t// ...\n};\n\n// first, create an empty object with\n// Vehicle's stuff copied in\nvar Car = mixin( Vehicle, { } );\n\n// now copy the intended contents into Car\nmixin( {\n\twheels: 4,\n\n\tdrive: function() {\n\t\t// ...\n\t}\n}, Car );\n```\n\nWith either approach, we have explicitly copied the non-overlapping contents of `Vehicle` into `Car`. The name \"mixin\" comes from an alternate way of explaining the task: `Car` has `Vehicle`s contents **mixed-in**, just like you mix chocolate chips into your favorite cookie dough.\n\nAs a result of the copy operation, `Car` will operate somewhat separately from `Vehicle`. 
If you add a property onto `Car`, it will not affect `Vehicle`, and vice versa.\n\n**Note:** A few minor details have been skimmed over here. There are still some subtle ways the two objects can \"affect\" each other even after copying, such as if they both share a reference to a common object (an array, for instance).\n\nSince the two objects also share references to their common functions, that means that **even manual copying of functions (aka, mixins) from one object to another doesn't *actually emulate* the real duplication from class to instance that occurs in class-oriented languages**.\n\nJavaScript functions can't really be duplicated (in a standard, reliable way), so what you end up with instead is a **duplicated reference** to the same shared function object (functions are objects; see Chapter 3). If you modified one of the shared **function objects** (like `ignition()`) by adding properties on top of it, for instance, both `Vehicle` and `Car` would be \"affected\" via the shared reference.\n\nExplicit mixins are a fine mechanism in JavaScript. But they appear more powerful than they really are. Not much benefit is *actually* derived from copying a property from one object to another, **as opposed to just defining the properties twice**, once on each object. And that's especially true given the function-object reference nuance we just mentioned.\n\nIf you explicitly mix in two or more objects into your target object, you can **partially emulate** the behavior of \"multiple inheritance\", but there's no direct way to handle collisions if the same method or property is being copied from more than one source. Some developers/libraries have come up with \"late binding\" techniques and other exotic work-arounds, but fundamentally these \"tricks\" are *usually* more effort (and worse performance!) 
than the pay-off.\n\nTake care only to use explicit mixins where it actually helps make more readable code, and avoid the pattern if you find it making code that's harder to trace, or if you find it creates unnecessary or unwieldy dependencies between objects.\n\n**If it starts to get *harder* to properly use mixins than before you used them**, you should probably stop using mixins. In fact, if you have to use a complex library/utility to work out all these details, it might be a sign that you're going about it the harder way, perhaps unnecessarily. In Chapter 6, we'll try to distill a simpler way that accomplishes the desired outcomes without all the fuss.\n\n#### Parasitic Inheritance\n\nA variation on this explicit mixin pattern, which is both in some ways explicit and in other ways implicit, is called \"parasitic inheritance\", popularized mainly by Douglas Crockford.\n\nHere's how it can work:\n\n```js\n// \"Traditional JS Class\" `Vehicle`\nfunction Vehicle() {\n\tthis.engines = 1;\n}\nVehicle.prototype.ignition = function() {\n\tconsole.log( \"Turning on my engine.\" );\n};\nVehicle.prototype.drive = function() {\n\tthis.ignition();\n\tconsole.log( \"Steering and moving forward!\" );\n};\n\n// \"Parasitic Class\" `Car`\nfunction Car() {\n\t// first, `car` is a `Vehicle`\n\tvar car = new Vehicle();\n\n\t// now, let's modify our `car` to specialize it\n\tcar.wheels = 4;\n\n\t// save a privileged reference to `Vehicle::drive()`\n\tvar vehDrive = car.drive;\n\n\t// override `Vehicle::drive()`\n\tcar.drive = function() {\n\t\tvehDrive.call( this );\n\t\tconsole.log( \"Rolling on all \" + this.wheels + \" wheels!\" );\n\t};\n\n\treturn car;\n}\n\nvar myCar = new Car();\n\nmyCar.drive();\n// Turning on my engine.\n// Steering and moving forward!\n// Rolling on all 4 wheels!\n```\n\nAs you can see, we initially make a copy of the definition from the `Vehicle` \"parent class\" (object), then mixin our \"child class\" (object) definition (preserving privileged 
parent-class references as needed), and pass off this composed object `car` as our child instance.\n\n**Note:** when we call `new Car()`, a new object is created and referenced by `Car`s `this` reference (see Chapter 2). But since we don't use that object, and instead return our own `car` object, the initially created object is just discarded. So, `Car()` could be called without the `new` keyword, and the functionality above would be identical, but without the wasted object creation/garbage-collection.\n\n### Implicit Mixins\n\nImplicit mixins are closely related to *explicit pseudo-polymorphism* as explained previously. As such, they come with the same caveats and warnings.\n\nConsider this code:\n\n```js\nvar Something = {\n\tcool: function() {\n\t\tthis.greeting = \"Hello World\";\n\t\tthis.count = this.count ? this.count + 1 : 1;\n\t}\n};\n\nSomething.cool();\nSomething.greeting; // \"Hello World\"\nSomething.count; // 1\n\nvar Another = {\n\tcool: function() {\n\t\t// implicit mixin of `Something` to `Another`\n\t\tSomething.cool.call( this );\n\t}\n};\n\nAnother.cool();\nAnother.greeting; // \"Hello World\"\nAnother.count; // 1 (not shared state with `Something`)\n```\n\nWith `Something.cool.call( this )`, which can happen either in a \"constructor\" call (most common) or in a method call (shown here), we essentially \"borrow\" the function `Something.cool()` and call it in the context of `Another` (via its `this` binding; see Chapter 2) instead of `Something`. The end result is that the assignments that `Something.cool()` makes are applied against the `Another` object rather than the `Something` object.\n\nSo, it is said that we \"mixed in\" `Something`s behavior with (or into) `Another`.\n\nWhile this sort of technique seems to take useful advantage of `this` rebinding functionality, it is the brittle `Something.cool.call( this )` call, which cannot be made into a relative (and thus more flexible) reference, that you should **heed with caution**. 
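The \"constructor\" call form mentioned above (the most common variety) can be sketched like so -- a hedged illustration using hypothetical `Base`/`Derived` constructors, with the same brittleness caveat, since `Base.call( this )` is still an absolute, non-relative reference:

```js
// hypothetical constructors, for illustration only
function Base() {
	this.greeting = "Hello World";
	this.count = this.count ? this.count + 1 : 1;
}

function Derived() {
	// implicit mixin: borrow `Base`'s constructor behavior
	// for each new `Derived` instance
	Base.call( this );
}

var d = new Derived();

d.greeting; // "Hello World"
d.count; // 1 (each instance gets its own, non-shared state)
```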
Generally, **avoid such constructs where possible** to keep cleaner and more maintainable code.\n\n## Review (TL;DR)\n\nClasses are a design pattern. Many languages provide syntax which enables natural class-oriented software design. JS also has a similar syntax, but it behaves **very differently** from what you're used to with classes in those other languages.\n\n**Classes mean copies.**\n\nWhen traditional classes are instantiated, a copy of behavior from class to instance occurs. When classes are inherited, a copy of behavior from parent to child also occurs.\n\nPolymorphism (having different functions at multiple levels of an inheritance chain with the same name) may seem like it implies a referential relative link from child back to parent, but it's still just a result of copy behavior.\n\nJavaScript **does not automatically** create copies (as classes imply) between objects.\n\nThe mixin pattern (both explicit and implicit) is often used to *sort of* emulate class copy behavior, but this usually leads to ugly and brittle syntax like explicit pseudo-polymorphism (`OtherObj.methodName.call(this, ...)`), which often results in harder-to-understand and harder-to-maintain code.\n\nExplicit mixins are also not exactly the same as class *copy*, since objects (and functions!) only have shared references duplicated, not the objects/functions duplicated themselves. Not paying attention to such nuance is the source of a variety of gotchas.\n\nIn general, faking classes in JS often sets more landmines for future coding than it solves present *real* problems.\n"
  },
  {
    "path": "this & object prototypes/ch5.md",
    "content": "# You Don't Know JS: *this* & Object Prototypes\n# Chapter 5: Prototypes\n\nIn Chapters 3 and 4, we mentioned the `[[Prototype]]` chain several times, but haven't said what exactly it is. We will now examine prototypes in detail.\n\n**Note:** All of the attempts to emulate class-copy behavior, as described previously in Chapter 4, labeled as variations of \"mixins\", completely circumvent the `[[Prototype]]` chain mechanism we examine here in this chapter.\n\n## `[[Prototype]]`\n\nObjects in JavaScript have an internal property, denoted in the specification as `[[Prototype]]`, which is simply a reference to another object. Almost all objects are given a non-`null` value for this property, at the time of their creation.\n\n**Note:** We will see shortly that it *is* possible for an object to have an empty `[[Prototype]]` linkage, though this is somewhat less common.\n\nConsider:\n\n```js\nvar myObject = {\n\ta: 2\n};\n\nmyObject.a; // 2\n```\n\nWhat is the `[[Prototype]]` reference used for? In Chapter 3, we examined the `[[Get]]` operation that is invoked when you reference a property on an object, such as `myObject.a`. 
For that default `[[Get]]` operation, the first step is to check if the object itself has a property `a` on it, and if so, it's used.\n\n**Note:** ES6 Proxies are outside of our discussion scope in this book (will be covered in a later book in the series!), but everything we discuss here about normal `[[Get]]` and `[[Put]]` behavior does not apply if a `Proxy` is involved.\n\nBut it's what happens if `a` **isn't** present on `myObject` that brings our attention now to the `[[Prototype]]` link of the object.\n\nThe default `[[Get]]` operation proceeds to follow the `[[Prototype]]` **link** of the object if it cannot find the requested property on the object directly.\n\n```js\nvar anotherObject = {\n\ta: 2\n};\n\n// create an object linked to `anotherObject`\nvar myObject = Object.create( anotherObject );\n\nmyObject.a; // 2\n```\n\n**Note:** We will explain what `Object.create(..)` does, and how it operates, shortly. For now, just assume it creates an object with the `[[Prototype]]` linkage we're examining to the object specified.\n\nSo, we have `myObject` that is now `[[Prototype]]` linked to `anotherObject`. Clearly `myObject.a` doesn't actually exist, but nevertheless, the property access succeeds (being found on `anotherObject` instead) and indeed finds the value `2`.\n\nBut, if `a` weren't found on `anotherObject` either, its `[[Prototype]]` chain, if non-empty, is again consulted and followed.\n\nThis process continues until either a matching property name is found, or the `[[Prototype]]` chain ends. If no matching property is *ever* found by the end of the chain, the return result from the `[[Get]]` operation is `undefined`.\n\nSimilar to this `[[Prototype]]` chain look-up process, if you use a `for..in` loop to iterate over an object, any property that can be reached via its chain (and is also `enumerable` -- see Chapter 3) will be enumerated. 
If you use the `in` operator to test for the existence of a property on an object, `in` will check the entire chain of the object (regardless of *enumerability*).\n\n```js\nvar anotherObject = {\n\ta: 2\n};\n\n// create an object linked to `anotherObject`\nvar myObject = Object.create( anotherObject );\n\nfor (var k in myObject) {\n\tconsole.log(\"found: \" + k);\n}\n// found: a\n\n(\"a\" in myObject); // true\n```\n\nSo, the `[[Prototype]]` chain is consulted, one link at a time, when you perform property look-ups in various fashions. The look-up stops once the property is found or the chain ends.\n\n### `Object.prototype`\n\nBut *where* exactly does the `[[Prototype]]` chain \"end\"?\n\nThe top-end of every *normal* `[[Prototype]]` chain is the built-in `Object.prototype`. This object includes a variety of common utilities used all over JS, because all normal (built-in, not host-specific extension) objects in JavaScript \"descend from\" (aka, have at the top of their `[[Prototype]]` chain) the `Object.prototype` object.\n\nSome utilities found here you may be familiar with include `.toString()` and `.valueOf()`. In Chapter 3, we introduced another: `.hasOwnProperty(..)`. And yet another function on `Object.prototype` you may not be familiar with, but which we'll address later in this chapter, is `.isPrototypeOf(..)`.\n\n### Setting & Shadowing Properties\n\nBack in Chapter 3, we mentioned that setting properties on an object was more nuanced than just adding a new property to the object or changing an existing property's value. We will now revisit this situation more completely.\n\n```js\nmyObject.foo = \"bar\";\n```\n\nIf the `myObject` object already has a normal data property called `foo` directly present on it, the assignment is as simple as changing the value of the existing property.\n\nIf `foo` is not already present directly on `myObject`, the `[[Prototype]]` chain is traversed, just like for the `[[Get]]` operation. 
If `foo` is not found anywhere in the chain, the property `foo` is added directly to `myObject` with the specified value, as expected.\n\nHowever, if `foo` is already present somewhere higher in the chain, nuanced (and perhaps surprising) behavior can occur with the `myObject.foo = \"bar\"` assignment. We'll examine that more in just a moment.\n\nIf the property name `foo` ends up both on `myObject` itself and at a higher level of the `[[Prototype]]` chain that starts at `myObject`, this is called *shadowing*. The `foo` property directly on `myObject` *shadows* any `foo` property which appears higher in the chain, because the `myObject.foo` look-up would always find the `foo` property that's lowest in the chain.\n\nAs we just hinted, shadowing `foo` on `myObject` is not as simple as it may seem. We will now examine three scenarios for the `myObject.foo = \"bar\"` assignment when `foo` is **not** already on `myObject` directly, but **is** at a higher level of `myObject`'s `[[Prototype]]` chain:\n\n1. If a normal data property (see Chapter 3) named `foo` is found anywhere higher on the `[[Prototype]]` chain, **and it's not marked as read-only (`writable:false`)**, then a new property called `foo` is added directly to `myObject`, resulting in a **shadowed property**.\n2. If a `foo` is found higher on the `[[Prototype]]` chain, but it's marked as **read-only (`writable:false`)**, then both the setting of that existing property as well as the creation of the shadowed property on `myObject` **are disallowed**. If the code is running in `strict mode`, an error will be thrown. Otherwise, the setting of the property value will silently be ignored. Either way, **no shadowing occurs**.\n3. If a `foo` is found higher on the `[[Prototype]]` chain and it's a setter (see Chapter 3), then the setter will always be called. 
No `foo` will be added to (aka, shadowed on) `myObject`, nor will the `foo` setter be redefined.\n\nMost developers assume that assignment of a property (`[[Put]]`) will always result in shadowing if the property already exists higher on the `[[Prototype]]` chain, but as you can see, that's only true in one (#1) of the three situations just described.\n\nIf you want to shadow `foo` in cases #2 and #3, you cannot use `=` assignment, but must instead use `Object.defineProperty(..)` (see Chapter 3) to add `foo` to `myObject`.\n\n**Note:** Case #2 may be the most surprising of the three. The presence of a *read-only* property prevents a property of the same name being implicitly created (shadowed) at a lower level of a `[[Prototype]]` chain. The reason for this restriction is primarily to reinforce the illusion of class-inherited properties. If you think of the `foo` at a higher level of the chain as having been inherited (copied down) to `myObject`, then it makes sense to enforce the non-writable nature of that `foo` property on `myObject`. If you however separate the illusion from the fact, and recognize that no such inheritance copying *actually* occurred (see Chapters 4 and 5), it's a little unnatural that `myObject` would be prevented from having a `foo` property just because some other object had a non-writable `foo` on it. It's even stranger that this restriction only applies to `=` assignment, but is not enforced when using `Object.defineProperty(..)`.\n\nShadowing with **methods** leads to ugly *explicit pseudo-polymorphism* (see Chapter 4) if you need to delegate between them. Usually, shadowing is more complicated and nuanced than it's worth, **so you should try to avoid it if possible**. See Chapter 6 for an alternative design pattern, which among other things discourages shadowing in favor of cleaner alternatives.\n\nShadowing can even occur implicitly in subtle ways, so care must be taken if trying to avoid it. 
Consider:\n\n```js\nvar anotherObject = {\n\ta: 2\n};\n\nvar myObject = Object.create( anotherObject );\n\nanotherObject.a; // 2\nmyObject.a; // 2\n\nanotherObject.hasOwnProperty( \"a\" ); // true\nmyObject.hasOwnProperty( \"a\" ); // false\n\nmyObject.a++; // oops, implicit shadowing!\n\nanotherObject.a; // 2\nmyObject.a; // 3\n\nmyObject.hasOwnProperty( \"a\" ); // true\n```\n\nThough it may appear that `myObject.a++` should (via delegation) look-up and just increment the `anotherObject.a` property itself *in place*, instead the `++` operation corresponds to `myObject.a = myObject.a + 1`. The result is `[[Get]]` looking up `a` property via `[[Prototype]]` to get the current value `2` from `anotherObject.a`, incrementing the value by one, then `[[Put]]` assigning the `3` value to a new shadowed property `a` on `myObject`. Oops!\n\nBe very careful when dealing with delegated properties that you modify. If you wanted to increment `anotherObject.a`, the only proper way is `anotherObject.a++`.\n\n## \"Class\"\n\nAt this point, you might be wondering: \"*Why* does one object need to link to another object?\" What's the real benefit? That is a very appropriate question to ask, but we must first understand what `[[Prototype]]` is **not** before we can fully understand and appreciate what it *is* and how it's useful.\n\nAs we explained in Chapter 4, in JavaScript, there are no abstract patterns/blueprints for objects called \"classes\" as there are in class-oriented languages. JavaScript **just** has objects.\n\nIn fact, JavaScript is **almost unique** among languages as perhaps the only language with the right to use the label \"object oriented\", because it's one of a very short list of languages where an object can be created directly, without a class at all.\n\nIn JavaScript, classes can't (being that they don't exist!) describe what an object can do. The object defines its own behavior directly. 
**There's *just* the object.**\n\n### \"Class\" Functions\n\nThere's a peculiar kind of behavior in JavaScript that has been shamelessly abused for years to *hack* something that *looks* like \"classes\". We'll examine this approach in detail.\n\nThe peculiar \"sort-of class\" behavior hinges on a strange characteristic of functions: all functions by default get a public, non-enumerable (see Chapter 3) property on them called `prototype`, which points at an otherwise arbitrary object.\n\n```js\nfunction Foo() {\n\t// ...\n}\n\nFoo.prototype; // { }\n```\n\nThis object is often called \"Foo's prototype\", because we access it via an unfortunately-named `Foo.prototype` property reference. However, that terminology is hopelessly destined to lead us into confusion, as we'll see shortly. Instead, I will call it \"the object formerly known as Foo's prototype\". Just kidding. How about: \"object arbitrarily labeled 'Foo dot prototype'\"?\n\nWhatever we call it, what exactly is this object?\n\nThe most direct way to explain it is that each object created from calling `new Foo()` (see Chapter 2) will end up (somewhat arbitrarily) `[[Prototype]]`-linked to this \"Foo dot prototype\" object.\n\nLet's illustrate:\n\n```js\nfunction Foo() {\n\t// ...\n}\n\nvar a = new Foo();\n\nObject.getPrototypeOf( a ) === Foo.prototype; // true\n```\n\nWhen `a` is created by calling `new Foo()`, one of the things (see Chapter 2 for all *four* steps) that happens is that `a` gets an internal `[[Prototype]]` link to the object that `Foo.prototype` is pointing at.\n\nStop for a moment and ponder the implications of that statement.\n\nIn class-oriented languages, multiple **copies** (aka, \"instances\") of a class can be made, like stamping something out from a mold. 
As we saw in Chapter 4, this happens because the process of instantiating (or inheriting from) a class means, \"copy the behavior plan from that class into a physical object\", and this is done again for each new instance.\n\nBut in JavaScript, there are no such copy-actions performed. You don't create multiple instances of a class. You can create multiple objects that `[[Prototype]]` *link* to a common object. But by default, no copying occurs, and thus these objects don't end up totally separate and disconnected from each other, but rather, quite ***linked***.\n\n`new Foo()` results in a new object (we called it `a`), and **that** new object `a` is internally `[[Prototype]]` linked to the `Foo.prototype` object.\n\n**We end up with two objects, linked to each other.** That's *it*. We didn't instantiate a class. We certainly didn't do any copying of behavior from a \"class\" into a concrete object. We just caused two objects to be linked to each other.\n\nIn fact, the secret, which eludes most JS developers, is that the `new Foo()` function calling had really almost nothing *direct* to do with the process of creating the link. **It was sort of an accidental side-effect.** `new Foo()` is an indirect, round-about way to end up with what we want: **a new object linked to another object**.\n\nCan we get what we want in a more *direct* way? **Yes!** The hero is `Object.create(..)`. But we'll get to that in a little bit.\n\n#### What's in a name?\n\nIn JavaScript, we don't make *copies* from one object (\"class\") to another (\"instance\"). We make *links* between objects. For the `[[Prototype]]` mechanism, visually, the arrows move from right to left, and from bottom to top.\n\n<img src=\"fig3.png\">\n\nThis mechanism is often called \"prototypal inheritance\" (we'll explore the code in detail shortly), which is commonly said to be the dynamic-language version of \"classical inheritance\". 
It's an attempt to piggy-back on the common understanding of what \"inheritance\" means in the class-oriented world, but *tweak* (**read: pave over**) the understood semantics, to fit dynamic scripting.\n\nThe word \"inheritance\" has a very strong meaning (see Chapter 4), with plenty of mental precedent. Merely adding \"prototypal\" in front to distinguish the *actually nearly opposite* behavior in JavaScript has left in its wake nearly two decades of miry confusion.\n\nI like to say that sticking \"prototypal\" in front of \"inheritance\" to drastically reverse its actual meaning is like holding an orange in one hand, an apple in the other, and insisting on calling the apple a \"red orange\". No matter what confusing label I put in front of it, that doesn't change the *fact* that one fruit is an apple and the other is an orange.\n\nThe better approach is to plainly call an apple an apple -- to use the most accurate and direct terminology. That makes it easier to understand both their similarities and their **many differences**, because we all have a simple, shared understanding of what \"apple\" means.\n\nBecause of the confusion and conflation of terms, I believe the label \"prototypal inheritance\" itself (and trying to mis-apply all its associated class-orientation terminology, like \"class\", \"constructor\", \"instance\", \"polymorphism\", etc) has done **more harm than good** in explaining how JavaScript's mechanism *really* works.\n\n\"Inheritance\" implies a *copy* operation, and JavaScript doesn't copy object properties (natively, by default). Instead, JS creates a link between two objects, where one object can essentially *delegate* property/function access to another object. \"Delegation\" (see Chapter 6) is a much more accurate term for JavaScript's object-linking mechanism.\n\nAnother term which is sometimes thrown around in JavaScript is \"differential inheritance\". 
The idea here is that we describe an object's behavior in terms of what is *different* from a more general descriptor. For example, you explain that a car is a kind of vehicle, but one that has exactly 4 wheels, rather than re-describing all the specifics of what makes up a general vehicle (engine, etc).\n\nIf you try to think of any given object in JS as the sum total of all behavior that is *available* via delegation, and **in your mind you flatten** all that behavior into one tangible *thing*, then you can (sorta) see how \"differential inheritance\" might fit.\n\nBut just like with \"prototypal inheritance\", \"differential inheritance\" pretends that your mental model is more important than what is physically happening in the language. It overlooks the fact that object `B` is not actually differentially constructed, but is instead built with specific characteristics defined, alongside \"holes\" where nothing is defined. It is in these \"holes\" (gaps in, or lack of, definition) that delegation *can* take over and, on the fly, \"fill them in\" with delegated behavior.\n\nThe object is not, by native default, flattened into the single differential object, **through copying**, that the mental model of \"differential inheritance\" implies. As such, \"differential inheritance\" is just not as natural a fit for describing how JavaScript's `[[Prototype]]` mechanism actually works.\n\nYou *can choose* to prefer the \"differential inheritance\" terminology and mental model, as a matter of taste, but there's no denying the fact that it *only* fits the mental acrobatics in your mind, not the physical behavior in the engine.\n\n### \"Constructors\"\n\nLet's go back to some earlier code:\n\n```js\nfunction Foo() {\n\t// ...\n}\n\nvar a = new Foo();\n```\n\nWhat exactly leads us to think `Foo` is a \"class\"?\n\nFor one, we see the use of the `new` keyword, just like class-oriented languages do when they construct class instances. 
For another, it appears that we are in fact executing a *constructor* method of a class, because `Foo()` is actually a method that gets called, just like how a real class's constructor gets called when you instantiate that class.\n\nTo further the confusion of \"constructor\" semantics, the arbitrarily labeled `Foo.prototype` object has another trick up its sleeve. Consider this code:\n\n```js\nfunction Foo() {\n\t// ...\n}\n\nFoo.prototype.constructor === Foo; // true\n\nvar a = new Foo();\na.constructor === Foo; // true\n```\n\nThe `Foo.prototype` object by default (at declaration time on line 1 of the snippet!) gets a public, non-enumerable (see Chapter 3) property called `.constructor`, and this property is a reference back to the function (`Foo` in this case) that the object is associated with. Moreover, we see that object `a` created by the \"constructor\" call `new Foo()` *seems* to also have a property on it called `.constructor` which similarly points to \"the function which created it\".\n\n**Note:** This is not actually true. `a` has no `.constructor` property on it, and though `a.constructor` does in fact resolve to the `Foo` function, \"constructor\" **does not actually mean** \"was constructed by\", as it appears. We'll explain this strangeness shortly.\n\nOh, yeah, also... by convention in the JavaScript world, \"class\"es are named with a capital letter, so the fact that it's `Foo` instead of `foo` is a strong clue that we intend it to be a \"class\". That's totally obvious to you, right!?\n\n**Note:** This convention is so strong that many JS linters actually *complain* if you call `new` on a method with a lowercase name, or if we don't call `new` on a function that happens to start with a capital letter. 
That sort of boggles the mind that we struggle so much to get (fake) \"class-orientation\" *right* in JavaScript that we create linter rules to ensure we use capital letters, even though the capital letter doesn't mean ***anything* at all** to the JS engine.\n\n#### Constructor Or Call?\n\nIn the above snippet, it's tempting to think that `Foo` is a \"constructor\", because we call it with `new` and we observe that it \"constructs\" an object.\n\nIn reality, `Foo` is no more a \"constructor\" than any other function in your program. Functions themselves are **not** constructors. However, when you put the `new` keyword in front of a normal function call, that makes that function call a \"constructor call\". In fact, `new` sort of hijacks any normal function and calls it in a fashion that constructs an object, **in addition to whatever else it was going to do**.\n\nFor example:\n\n```js\nfunction NothingSpecial() {\n\tconsole.log( \"Don't mind me!\" );\n}\n\nvar a = new NothingSpecial();\n// \"Don't mind me!\"\n\na; // {}\n```\n\n`NothingSpecial` is just a plain old normal function, but when called with `new`, it *constructs* an object, almost as a side-effect, which we happen to assign to `a`. 
The **call** was a *constructor call*, but `NothingSpecial` is not, in and of itself, a *constructor*.\n\nIn other words, in JavaScript, it's most appropriate to say that a \"constructor\" is **any function called with the `new` keyword** in front of it.\n\nFunctions aren't constructors, but function calls are \"constructor calls\" if and only if `new` is used.\n\n### Mechanics\n\nAre *those* the only common triggers for ill-fated \"class\" discussions in JavaScript?\n\n**Not quite.** JS developers have strived to simulate as much as they can of class-orientation:\n\n```js\nfunction Foo(name) {\n\tthis.name = name;\n}\n\nFoo.prototype.myName = function() {\n\treturn this.name;\n};\n\nvar a = new Foo( \"a\" );\nvar b = new Foo( \"b\" );\n\na.myName(); // \"a\"\nb.myName(); // \"b\"\n```\n\nThis snippet shows two additional \"class-orientation\" tricks in play:\n\n1. `this.name = name`: adds the `.name` property onto each object (`a` and `b`, respectively; see Chapter 2 about `this` binding), similar to how class instances encapsulate data values.\n\n2. `Foo.prototype.myName = ...`: perhaps the more interesting technique, this adds a property (function) to the `Foo.prototype` object. Now, `a.myName()` works, but perhaps surprisingly. How?\n\nIn the above snippet, it's strongly tempting to think that when `a` and `b` are created, the properties/functions on the `Foo.prototype` object are *copied* over to each of `a` and `b` objects. **However, that's not what happens.**\n\nAt the beginning of this chapter, we explained the `[[Prototype]]` link, and how it provides the fall-back look-up steps if a property reference isn't found directly on an object, as part of the default `[[Get]]` algorithm.\n\nSo, by virtue of how they are created, `a` and `b` each end up with an internal `[[Prototype]]` linkage to `Foo.prototype`. 
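We can verify that linkage, and the absence of any copying, directly. Here's a sketch repeating the snippet's setup:

```js
function Foo(name) {
	this.name = name;
}

Foo.prototype.myName = function() {
	return this.name;
};

var a = new Foo( "a" );
var b = new Foo( "b" );

// `myName` was not copied onto either object...
a.hasOwnProperty( "myName" ); // false
b.hasOwnProperty( "myName" ); // false

// ...both objects delegate to the *same* function on `Foo.prototype`
Object.getPrototypeOf( a ) === Foo.prototype; // true
a.myName === b.myName; // true
a.myName === Foo.prototype.myName; // true
```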
When `myName` is not found on `a` or `b`, respectively, it's instead found (through delegation, see Chapter 6) on `Foo.prototype`.\n\n#### \"Constructor\" Redux\n\nRecall the discussion from earlier about the `.constructor` property, and how it *seems* like `a.constructor === Foo` being true means that `a` has an actual `.constructor` property on it, pointing at `Foo`? **Not correct.**\n\nThis is just unfortunate confusion. In actuality, the `.constructor` reference is also *delegated* up to `Foo.prototype`, which **happens to**, by default, have a `.constructor` that points at `Foo`.\n\nIt *seems* awfully convenient that an object `a` \"constructed by\" `Foo` would have access to a `.constructor` property that points to `Foo`. But that's nothing more than a false sense of security. It's a happy accident, almost tangentially, that `a.constructor` *happens* to point at `Foo` via this default `[[Prototype]]` delegation. There are actually several ways that the ill-fated assumption of `.constructor` meaning \"was constructed by\" can come back to bite you.\n\nFor one, the `.constructor` property on `Foo.prototype` is only there by default on the object created when `Foo` the function is declared. If you create a new object, and replace a function's default `.prototype` object reference, the new object will not by default magically get a `.constructor` on it.\n\nConsider:\n\n```js\nfunction Foo() { /* .. */ }\n\nFoo.prototype = { /* .. */ }; // create a new prototype object\n\nvar a1 = new Foo();\na1.constructor === Foo; // false!\na1.constructor === Object; // true!\n```\n\n`Object(..)` didn't \"construct\" `a1` did it? It sure seems like `Foo()` \"constructed\" it. Many developers think of `Foo()` as doing the construction, but where everything falls apart is when you think \"constructor\" means \"was constructed by\", because by that reasoning, `a1.constructor` should be `Foo`, but it isn't!\n\nWhat's happening? 
`a1` has no `.constructor` property, so it delegates up the `[[Prototype]]` chain to `Foo.prototype`. But that object doesn't have a `.constructor` either (like the default `Foo.prototype` object would have had!), so it keeps delegating, this time up to `Object.prototype`, the top of the delegation chain. *That* object indeed has a `.constructor` on it, which points to the built-in `Object(..)` function.\n\n**Misconception, busted.**\n\nOf course, you can add `.constructor` back to the `Foo.prototype` object, but this takes manual work, especially if you want to match native behavior and have it be non-enumerable (see Chapter 3).\n\nFor example:\n\n```js\nfunction Foo() { /* .. */ }\n\nFoo.prototype = { /* .. */ }; // create a new prototype object\n\n// Need to properly \"fix\" the missing `.constructor`\n// property on the new object serving as `Foo.prototype`.\n// See Chapter 3 for `defineProperty(..)`.\nObject.defineProperty( Foo.prototype, \"constructor\" , {\n\tenumerable: false,\n\twritable: true,\n\tconfigurable: true,\n\tvalue: Foo    // point `.constructor` at `Foo`\n} );\n```\n\nThat's a lot of manual work to fix `.constructor`. Moreover, all we're really doing is perpetuating the misconception that \"constructor\" means \"was constructed by\". That's an *expensive* illusion.\n\nThe fact is, `.constructor` on an object arbitrarily points, by default, at a function who, reciprocally, has a reference back to the object -- a reference which it calls `.prototype`. The words \"constructor\" and \"prototype\" only have a loose default meaning that might or might not hold true later. The best thing to do is remind yourself, \"constructor does not mean constructed by\".\n\n`.constructor` is not a magic immutable property. 
It *is* non-enumerable (see snippet above), but its value is writable (can be changed), and moreover, you can add or overwrite (intentionally or accidentally) a property of the name `constructor` on any object in any `[[Prototype]]` chain, with any value you see fit.\n\nBy virtue of how the `[[Get]]` algorithm traverses the `[[Prototype]]` chain, a `.constructor` property reference found anywhere may resolve quite differently than you'd expect.\n\nSee how arbitrary its meaning actually is?\n\nThe result? Some arbitrary object-property reference like `a1.constructor` cannot actually be *trusted* to be the assumed default function reference. Moreover, as we'll see shortly, just by simple omission, `a1.constructor` can even end up pointing somewhere quite surprising and insensible.\n\n`.constructor` is extremely unreliable, and an unsafe reference to rely upon in your code. **Generally, such references should be avoided where possible.**\n\n## \"(Prototypal) Inheritance\"\n\nWe've seen some approximations of \"class\" mechanics as typically hacked into JavaScript programs. But JavaScript \"class\"es would be rather hollow if we didn't have an approximation of \"inheritance\".\n\nActually, we've already seen the mechanism which is commonly called \"prototypal inheritance\" at work when `a` was able to \"inherit from\" `Foo.prototype`, and thus get access to the `myName()` function. But we traditionally think of \"inheritance\" as being a relationship between two \"classes\", rather than between \"class\" and \"instance\".\n\n<img src=\"fig3.png\">\n\nRecall this figure from earlier, which shows not only delegation from an object (aka, \"instance\") `a1` to object `Foo.prototype`, but from `Bar.prototype` to `Foo.prototype`, which somewhat resembles the concept of Parent-Child class inheritance. 
*Resembles*, except of course for the direction of the arrows, which show these are delegation links rather than copy operations.\n\nAnd, here's the typical \"prototype style\" code that creates such links:\n\n```js\nfunction Foo(name) {\n\tthis.name = name;\n}\n\nFoo.prototype.myName = function() {\n\treturn this.name;\n};\n\nfunction Bar(name,label) {\n\tFoo.call( this, name );\n\tthis.label = label;\n}\n\n// here, we make a new `Bar.prototype`\n// linked to `Foo.prototype`\nBar.prototype = Object.create( Foo.prototype );\n\n// Beware! Now `Bar.prototype.constructor` is gone,\n// and might need to be manually \"fixed\" if you're\n// in the habit of relying on such properties!\n\nBar.prototype.myLabel = function() {\n\treturn this.label;\n};\n\nvar a = new Bar( \"a\", \"obj a\" );\n\na.myName(); // \"a\"\na.myLabel(); // \"obj a\"\n```\n\n**Note:** To understand why `this` points to `a` in the above code snippet, see Chapter 2.\n\nThe important part is `Bar.prototype = Object.create( Foo.prototype )`. `Object.create(..)` *creates* a \"new\" object out of thin air, and links that new object's internal `[[Prototype]]` to the object you specify (`Foo.prototype` in this case).\n\nIn other words, that line says: \"make a *new* 'Bar dot prototype' object that's linked to 'Foo dot prototype'.\"\n\nWhen `function Bar() { .. }` is declared, `Bar`, like any other function, has a `.prototype` link to its default object. But *that* object is not linked to `Foo.prototype` like we want. 
So, we create a *new* object that *is* linked as we want, effectively throwing away the original incorrectly-linked object.\n\n**Note:** A common mis-conception/confusion here is that either of the following approaches would *also* work, but they do not work as you'd expect:\n\n```js\n// doesn't work like you want!\nBar.prototype = Foo.prototype;\n\n// works kinda like you want, but with\n// side-effects you probably don't want :(\nBar.prototype = new Foo();\n```\n\n`Bar.prototype = Foo.prototype` doesn't create a new object for `Bar.prototype` to be linked to. It just makes `Bar.prototype` be another reference to `Foo.prototype`, which effectively links `Bar` directly to **the same object as** `Foo` links to: `Foo.prototype`. This means when you start assigning, like `Bar.prototype.myLabel = ...`, you're modifying **not a separate object** but *the* shared `Foo.prototype` object itself, which would affect any objects linked to `Foo.prototype`. This is almost certainly not what you want. If it *is* what you want, then you likely don't need `Bar` at all, and should just use only `Foo` and make your code simpler.\n\n`Bar.prototype = new Foo()` **does in fact** create a new object which is duly linked to `Foo.prototype` as we'd want. But, it uses the `Foo(..)` \"constructor call\" to do it. If that function has any side-effects (such as logging, changing state, registering against other objects, **adding data properties to `this`**, etc.), those side-effects happen at the time of this linking (and likely against the wrong object!), rather than only when the eventual `Bar()` \"descendants\" are created, as would likely be expected.\n\nSo, we're left with using `Object.create(..)` to make a new object that's properly linked, but without having the side-effects of calling `Foo(..)`. 
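Here's a brief sketch of that side-effect hazard, using a hypothetical `Foo` that assigns a data property to `this`:

```js
function Foo(name) {
	this.name = name; // side-effect: adds a data property to `this`
}

function Bar() {}

// the side-effect fires here, against the shared prototype object!
Bar.prototype = new Foo( "oops" );

var b1 = new Bar();
b1.name; // "oops" -- delegated from `Bar.prototype`, not set on `b1`
b1.hasOwnProperty( "name" ); // false

// `Object.create(..)` gives us the linkage without the stray property
Bar.prototype = Object.create( Foo.prototype );

var b2 = new Bar();
b2.name; // undefined
```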
The slight downside is that we have to create a new object, throwing the old one away, instead of modifying the existing default object we're provided.\n\nIt would be *nice* if there was a standard and reliable way to modify the linkage of an existing object. Prior to ES6, there's a non-standard and not fully-cross-browser way, via the `.__proto__` property, which is settable. ES6 adds a `Object.setPrototypeOf(..)` helper utility, which does the trick in a standard and predictable way.\n\nCompare the pre-ES6 and ES6-standardized techniques for linking `Bar.prototype` to `Foo.prototype`, side-by-side:\n\n```js\n// pre-ES6\n// throws away default existing `Bar.prototype`\nBar.prototype = Object.create( Foo.prototype );\n\n// ES6+\n// modifies existing `Bar.prototype`\nObject.setPrototypeOf( Bar.prototype, Foo.prototype );\n```\n\nIgnoring the slight performance disadvantage (throwing away an object that's later garbage collected) of the `Object.create(..)` approach, it's a little bit shorter and may be perhaps a little easier to read than the ES6+ approach. But it's probably a syntactic wash either way.\n\n### Inspecting \"Class\" Relationships\n\nWhat if you have an object like `a` and want to find out what object (if any) it delegates to? Inspecting an instance (just an object in JS) for its inheritance ancestry (delegation linkage in JS) is often called *introspection* (or *reflection*) in traditional class-oriented environments.\n\nConsider:\n\n```js\nfunction Foo() {\n\t// ...\n}\n\nFoo.prototype.blah = ...;\n\nvar a = new Foo();\n```\n\nHow do we then introspect `a` to find out its \"ancestry\" (delegation linkage)? The first approach embraces the \"class\" confusion:\n\n```js\na instanceof Foo; // true\n```\n\nThe `instanceof` operator takes a plain object as its left-hand operand and a **function** as its right-hand operand. 
The question `instanceof` answers is: **in the entire `[[Prototype]]` chain of `a`, does the object arbitrarily pointed to by `Foo.prototype` ever appear?**\n\nUnfortunately, this means that you can only inquire about the \"ancestry\" of some object (`a`) if you have some **function** (`Foo`, with its attached `.prototype` reference) to test with. If you have two arbitrary objects, say `a` and `b`, and want to find out if *the objects* are related to each other through a `[[Prototype]]` chain, `instanceof` alone can't help.\n\n**Note:** If you use the built-in `.bind(..)` utility to make a hard-bound function (see Chapter 2), the function created will not have a `.prototype` property. Using `instanceof` with such a function transparently substitutes the `.prototype` of the *target function* that the hard-bound function was created from.\n\nIt's fairly uncommon to use hard-bound functions as \"constructor calls\", but if you do, it will behave as if the original *target function* was invoked instead, which means that using `instanceof` with a hard-bound function also behaves according to the original function.\n\nThis snippet illustrates the ridiculousness of trying to reason about relationships between **two objects** using \"class\" semantics and `instanceof`:\n\n```js\n// helper utility to see if `o1` is\n// related to (delegates to) `o2`\nfunction isRelatedTo(o1, o2) {\n\tfunction F(){}\n\tF.prototype = o2;\n\treturn o1 instanceof F;\n}\n\nvar a = {};\nvar b = Object.create( a );\n\nisRelatedTo( b, a ); // true\n```\n\nInside `isRelatedTo(..)`, we borrow a throw-away function `F`, reassign its `.prototype` to arbitrarily point to some object `o2`, then ask if `o1` is an \"instance of\" `F`. Obviously `o1` isn't *actually* inherited or descended or even constructed from `F`, so it should be clear why this kind of exercise is silly and confusing. 
**The problem comes down to the awkwardness of class semantics forced upon JavaScript**, in this case as revealed by the indirect semantics of `instanceof`.\n\nThe second, and much cleaner, approach to `[[Prototype]]` reflection is:\n\n```js\nFoo.prototype.isPrototypeOf( a ); // true\n```\n\nNotice that in this case, we don't really care about (or even *need*) `Foo`, we just need an **object** (in our case, arbitrarily labeled `Foo.prototype`) to test against another **object**. The question `isPrototypeOf(..)` answers is: **in the entire `[[Prototype]]` chain of `a`, does `Foo.prototype` ever appear?**\n\nSame question, and exact same answer. But in this second approach, we don't actually need the indirection of referencing a **function** (`Foo`) whose `.prototype` property will automatically be consulted.\n\nWe *just need* two **objects** to inspect a relationship between them. For example:\n\n```js\n// Simply: does `b` appear anywhere in\n// `c`'s [[Prototype]] chain?\nb.isPrototypeOf( c );\n```\n\nNotice, this approach doesn't require a function ("class") at all. It just uses object references directly to `b` and `c`, and inquires about their relationship. In other words, our `isRelatedTo(..)` utility above is built in to the language, and it's called `isPrototypeOf(..)`.\n\nWe can also directly retrieve the `[[Prototype]]` of an object. As of ES5, the standard way to do this is:\n\n```js\nObject.getPrototypeOf( a );\n```\n\nAnd you'll notice that object reference is what we'd expect:\n\n```js\nObject.getPrototypeOf( a ) === Foo.prototype; // true\n```\n\nMost browsers (not all!) have also long supported a non-standard alternate way of accessing the internal `[[Prototype]]`:\n\n```js\na.__proto__ === Foo.prototype; // true\n```\n\nThe strange `.__proto__` (not standardized until ES6!) 
property \"magically\" retrieves the internal `[[Prototype]]` of an object as a reference, which is quite helpful if you want to directly inspect (or even traverse: `.__proto__.__proto__...`) the chain.\n\nJust as we saw earlier with `.constructor`, `.__proto__` doesn't actually exist on the object you're inspecting (`a` in our running example). In fact, it exists (non-enumerable; see Chapter 2) on the built-in `Object.prototype`, along with the other common utilities (`.toString()`, `.isPrototypeOf(..)`, etc).\n\nMoreover, `.__proto__` looks like a property, but it's actually more appropriate to think of it as a getter/setter (see Chapter 3).\n\nRoughly, we could envision `.__proto__` implemented (see Chapter 3 for object property definitions) like this:\n\n```js\nObject.defineProperty( Object.prototype, \"__proto__\", {\n\tget: function() {\n\t\treturn Object.getPrototypeOf( this );\n\t},\n\tset: function(o) {\n\t\t// setPrototypeOf(..) as of ES6\n\t\tObject.setPrototypeOf( this, o );\n\t\treturn o;\n\t}\n} );\n```\n\nSo, when we access (retrieve the value of) `a.__proto__`, it's like calling `a.__proto__()` (calling the getter function). *That* function call has `a` as its `this` even though the getter function exists on the `Object.prototype` object (see Chapter 2 for `this` binding rules), so it's just like saying `Object.getPrototypeOf( a )`.\n\n`.__proto__` is also a settable property, just like using ES6's `Object.setPrototypeOf(..)` shown earlier. However, generally you **should not change the `[[Prototype]]` of an existing object**.\n\nThere are some very complex, advanced techniques used deep in some frameworks that allow tricks like \"subclassing\" an `Array`, but this is commonly frowned on in general programming practice, as it usually leads to *much* harder to understand/maintain code.\n\n**Note:** As of ES6, the `class` keyword will allow something that approximates \"subclassing\" of built-in's like `Array`. 
See Appendix A for discussion of the `class` syntax added in ES6.\n\nThe only other narrow exception (as mentioned earlier) would be setting the `[[Prototype]]` of a default function's `.prototype` object to reference some other object (besides `Object.prototype`). That would avoid replacing that default object entirely with a new linked object. Otherwise, **it's best to treat object `[[Prototype]]` linkage as a read-only characteristic** for ease of reading your code later.\n\n**Note:** The JavaScript community unofficially coined a term for the double-underscore, specifically the leading one in properties like `__proto__`: \"dunder\". So, the \"cool kids\" in JavaScript would generally pronounce `__proto__` as \"dunder proto\".\n\n## Object Links\n\nAs we've now seen, the `[[Prototype]]` mechanism is an internal link that exists on one object which references some other object.\n\nThis linkage is (primarily) exercised when a property/method reference is made against the first object, and no such property/method exists. In that case, the `[[Prototype]]` linkage tells the engine to look for the property/method on the linked-to object. In turn, if that object cannot fulfill the look-up, its `[[Prototype]]` is followed, and so on. This series of links between objects forms what is called the \"prototype chain\".\n\n### `Create()`ing Links\n\nWe've thoroughly debunked why JavaScript's `[[Prototype]]` mechanism is **not** like *classes*, and we've seen how it instead creates **links** between proper objects.\n\nWhat's the point of the `[[Prototype]]` mechanism? Why is it so common for JS developers to go to so much effort (emulating classes) in their code to wire up these linkages?\n\nRemember we said much earlier in this chapter that `Object.create(..)` would be a hero? 
Now, we're ready to see how.\n\n```js\nvar foo = {\n\tsomething: function() {\n\t\tconsole.log( \"Tell me something good...\" );\n\t}\n};\n\nvar bar = Object.create( foo );\n\nbar.something(); // Tell me something good...\n```\n\n`Object.create(..)` creates a new object (`bar`) linked to the object we specified (`foo`), which gives us all the power (delegation) of the `[[Prototype]]` mechanism, but without any of the unnecessary complication of `new` functions acting as classes and constructor calls, confusing `.prototype` and `.constructor` references, or any of that extra stuff.\n\n**Note:** `Object.create(null)` creates an object that has an empty (aka, `null`) `[[Prototype]]` linkage, and thus the object can't delegate anywhere. Since such an object has no prototype chain, the `instanceof` operator (explained earlier) has nothing to check, so it will always return `false`. These special empty-`[[Prototype]]` objects are often called \"dictionaries\" as they are typically used purely for storing data in properties, mostly because they have no possible surprise effects from any delegated properties/functions on the `[[Prototype]]` chain, and are thus purely flat data storage.\n\nWe don't *need* classes to create meaningful relationships between two objects. The only thing we should **really care about** is objects linked together for delegation, and `Object.create(..)` gives us that linkage without all the class cruft.\n\n#### `Object.create()` Polyfilled\n\n`Object.create(..)` was added in ES5. 
You may need to support pre-ES5 environments (like older IE's), so let's take a look at a simple **partial** polyfill for `Object.create(..)` that gives us the capability that we need even in those older JS environments:\n\n```js\nif (!Object.create) {\n\tObject.create = function(o) {\n\t\tfunction F(){}\n\t\tF.prototype = o;\n\t\treturn new F();\n\t};\n}\n```\n\nThis polyfill works by using a throw-away `F` function and overriding its `.prototype` property to point to the object we want to link to. Then we use `new F()` construction to make a new object that will be linked as we specified.\n\nThis usage of `Object.create(..)` is by far the most common, because it's the part that *can be* polyfilled. There's an additional set of functionality that the standard ES5 built-in `Object.create(..)` provides, which is **not polyfillable** for pre-ES5. As such, this capability is far less commonly used. For completeness' sake, let's look at that additional functionality:\n\n```js\nvar anotherObject = {\n\ta: 2\n};\n\nvar myObject = Object.create( anotherObject, {\n\tb: {\n\t\tenumerable: false,\n\t\twritable: true,\n\t\tconfigurable: false,\n\t\tvalue: 3\n\t},\n\tc: {\n\t\tenumerable: true,\n\t\twritable: false,\n\t\tconfigurable: false,\n\t\tvalue: 4\n\t}\n} );\n\nmyObject.hasOwnProperty( "a" ); // false\nmyObject.hasOwnProperty( "b" ); // true\nmyObject.hasOwnProperty( "c" ); // true\n\nmyObject.a; // 2\nmyObject.b; // 3\nmyObject.c; // 4\n```\n\nThe second argument to `Object.create(..)` specifies property names to add to the newly created object, via declaring each new property's *property descriptor* (see Chapter 3). 
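Those descriptors have observable effects beyond just setting the initial values. For instance (re-using the names from the snippet above), the non-enumerability of `b` and the non-writability of `c` behave like this:

```js
var anotherObject = {
	a: 2
};

var myObject = Object.create( anotherObject, {
	b: { enumerable: false, writable: true, configurable: false, value: 3 },
	c: { enumerable: true, writable: false, configurable: false, value: 4 }
} );

// `b` is a real (own) property, but non-enumerable,
// so it's hidden from enumerations:
Object.keys( myObject );		// ["c"]
myObject.hasOwnProperty( "b" );	// true

// `c` is non-writable and non-configurable, so it's
// effectively read-only from here on: an assignment to
// it would silently fail (or throw in `strict mode`)
myObject.b; // 3
myObject.c; // 4
```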
Because polyfilling property descriptors into pre-ES5 is not possible, this additional functionality on `Object.create(..)` also cannot be polyfilled.\n\nThe vast majority of usage of `Object.create(..)` uses the polyfill-safe subset of functionality, so most developers are fine with using the **partial polyfill** in pre-ES5 environments.\n\nSome developers take a much stricter view, which is that no function should be polyfilled unless it can be *fully* polyfilled. Since `Object.create(..)` is one of those partial-polyfill'able utilities, this narrower perspective says that if you need to use any of the functionality of `Object.create(..)` in a pre-ES5 environment, instead of polyfilling, you should use a custom utility, and stay away from using the name `Object.create` entirely. You could instead define your own utility, like:\n\n```js\nfunction createAndLinkObject(o) {\n\tfunction F(){}\n\tF.prototype = o;\n\treturn new F();\n}\n\nvar anotherObject = {\n\ta: 2\n};\n\nvar myObject = createAndLinkObject( anotherObject );\n\nmyObject.a; // 2\n```\n\nI do not share this strict opinion. I fully endorse the common partial-polyfill of `Object.create(..)` as shown above, and using it in your code even in pre-ES5. I'll leave it to you to make your own decision.\n\n### Links As Fallbacks?\n\nIt may be tempting to think that these links between objects *primarily* provide a sort of fallback for \"missing\" properties or methods. 
While that may be an observed outcome, I don't think it represents the right way of thinking about `[[Prototype]]`.\n\nConsider:\n\n```js\nvar anotherObject = {\n\tcool: function() {\n\t\tconsole.log( \"cool!\" );\n\t}\n};\n\nvar myObject = Object.create( anotherObject );\n\nmyObject.cool(); // \"cool!\"\n```\n\nThat code will work by virtue of `[[Prototype]]`, but if you wrote it that way so that `anotherObject` was acting as a fallback **just in case** `myObject` couldn't handle some property/method that some developer may try to call, odds are that your software is going to be a bit more \"magical\" and harder to understand and maintain.\n\nThat's not to say there aren't cases where fallbacks are an appropriate design pattern, but it's not very common or idiomatic in JS, so if you find yourself doing so, you might want to take a step back and reconsider if that's really appropriate and sensible design.\n\n**Note:** In ES6, an advanced functionality called `Proxy` is introduced which can provide something of a \"method not found\" type of behavior. 
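For a quick taste of what that could look like (just a sketch -- the object and message names here are made up for illustration), a `Proxy` `get` trap can hand back a fallback for any missing method:

```js
var obj = {
	cool: function() { console.log( "cool!" ); }
};

var handled = new Proxy( obj, {
	get: function(target,prop) {
		if (prop in target) {
			return target[prop];
		}
		// "method not found" fallback
		return function() {
			console.log( "oops: no '" + String( prop ) + "' here!" );
		};
	}
} );

handled.cool();		// "cool!"
handled.nothing();	// oops: no 'nothing' here!
```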
`Proxy` is beyond the scope of this book, but will be covered in detail in a later book in the *\"You Don't Know JS\"* series.\n\n**Don't miss an important but nuanced point here.**\n\nDesigning software where you intend for a developer to, for instance, call `myObject.cool()` and have that work even though there is no `cool()` method on `myObject` introduces some \"magic\" into your API design that can be surprising for future developers who maintain your software.\n\nYou can however design your API with less \"magic\" to it, but still take advantage of the power of `[[Prototype]]` linkage.\n\n```js\nvar anotherObject = {\n\tcool: function() {\n\t\tconsole.log( \"cool!\" );\n\t}\n};\n\nvar myObject = Object.create( anotherObject );\n\nmyObject.doCool = function() {\n\tthis.cool(); // internal delegation!\n};\n\nmyObject.doCool(); // \"cool!\"\n```\n\nHere, we call `myObject.doCool()`, which is a method that *actually exists* on `myObject`, making our API design more explicit (less \"magical\"). *Internally*, our implementation follows the **delegation design pattern** (see Chapter 6), taking advantage of `[[Prototype]]` delegation to `anotherObject.cool()`.\n\nIn other words, delegation will tend to be less surprising/confusing if it's an internal implementation detail rather than plainly exposed in your API design. We will expound on **delegation** in great detail in the next chapter.\n\n## Review (TL;DR)\n\nWhen attempting a property access on an object that doesn't have that property, the object's internal `[[Prototype]]` linkage defines where the `[[Get]]` operation (see Chapter 3) should look next. 
This cascading linkage from object to object essentially defines a "prototype chain" (somewhat similar to a nested scope chain) of objects to traverse for property resolution.\n\nAll normal objects have the built-in `Object.prototype` as the top of the prototype chain (like the global scope in scope look-up), where property resolution will stop if not found anywhere prior in the chain. `toString()`, `valueOf()`, and several other common utilities exist on this `Object.prototype` object, explaining how all objects in the language are able to access them.\n\nThe most common way to get two objects linked to each other is using the `new` keyword with a function call, which, among its four steps (see Chapter 2), creates a new object linked to another object.\n\nThe "another object" that the new object is linked to happens to be the object referenced by the arbitrarily named `.prototype` property of the function called with `new`. Functions called with `new` are often called "constructors", despite the fact that they are not actually instantiating a class as *constructors* do in traditional class-oriented languages.\n\nWhile these JavaScript mechanisms can seem to resemble "class instantiation" and "class inheritance" from traditional class-oriented languages, the key distinction is that in JavaScript, no copies are made. Rather, objects end up linked to each other via an internal `[[Prototype]]` chain.\n\nFor a variety of reasons, not the least of which is terminology precedent, "inheritance" (and "prototypal inheritance") and all the other OO terms just do not make sense when considering how JavaScript *actually* works (not just applied to our forced mental models).\n\nInstead, "delegation" is a more appropriate term, because these relationships are not *copies* but delegation **links**.\n"
  },
  {
    "path": "this & object prototypes/ch6.md",
    "content": "# You Don't Know JS: *this* & Object Prototypes\n# Chapter 6: Behavior Delegation\n\nIn Chapter 5, we addressed the `[[Prototype]]` mechanism  in detail, and *why* it's confusing and inappropriate (despite countless attempts for nearly two decades) to describe it as \"class\" or \"inheritance\". We trudged through not only the fairly verbose syntax (`.prototype` littering the code), but the various gotchas (like surprising `.constructor` resolution or ugly pseudo-polymorphic syntax). We explored variations of the \"mixin\" approach, which many people use to attempt to smooth over such rough areas.\n\nIt's a common reaction at this point to wonder why it has to be so complex to do something seemingly so simple. Now that we've pulled back the curtain and seen just how dirty it all gets, it's not a surprise that most JS developers never dive this deep, and instead relegate such mess to a \"class\" library to handle it for them.\n\nI hope by now you're not content to just gloss over and leave such details to a \"black box\" library. Let's now dig into how we *could and should be* thinking about the object `[[Prototype]]` mechanism in JS, in a **much simpler and more straightforward way** than the confusion of classes.\n\nAs a brief review of our conclusions from Chapter 5, the `[[Prototype]]` mechanism is an internal link that exists on one object which references another object.\n\nThis linkage is exercised when a property/method reference is made against the first object, and no such property/method exists. In that case, the `[[Prototype]]` linkage tells the engine to look for the property/method on the linked-to object. In turn, if that object cannot fulfill the look-up, its `[[Prototype]]` is followed, and so on. 
This series of links between objects forms what is called the \"prototype chain\".\n\nIn other words, the actual mechanism, the essence of what's important to the functionality we can leverage in JavaScript, is **all about objects being linked to other objects.**\n\nThat single observation is fundamental and critical to understanding the motivations and approaches for the rest of this chapter!\n\n## Towards Delegation-Oriented Design\n\nTo properly focus our thoughts on how to use `[[Prototype]]` in the most straightforward way, we must recognize that it represents a fundamentally different design pattern from classes (see Chapter 4).\n\n**Note:** *Some* principles of class-oriented design are still very valid, so don't toss out everything you know (just most of it!). For example, *encapsulation* is quite powerful, and is compatible (though not as common) with delegation.\n\nWe need to try to change our thinking from the class/inheritance design pattern to the behavior delegation design pattern. If you have done most or all of your programming in your education/career thinking in classes, this may be uncomfortable or feel unnatural. You may need to try this mental exercise quite a few times to get the hang of this very different way of thinking.\n\nI'm going to walk you through some theoretical exercises first, then we'll look side-by-side at a more concrete example to give you practical context for your own code.\n\n### Class Theory\n\nLet's say we have several similar tasks (\"XYZ\", \"ABC\", etc) that we need to model in our software.\n\nWith classes, the way you design the scenario is: define a general parent (base) class like `Task`, defining shared behavior for all the \"alike\" tasks. 
Then, you define child classes `XYZ` and `ABC`, both of which inherit from `Task`, and each of which adds specialized behavior to handle their respective tasks.\n\n**Importantly,** the class design pattern will encourage you, to get the most out of inheritance, to employ method overriding (and polymorphism), where you override the definition of some general `Task` method in your `XYZ` task, perhaps even making use of `super` to call to the base version of that method while adding more behavior to it. **You'll likely find quite a few places** where you can "abstract" out general behavior to the parent class and specialize (override) it in your child classes.\n\nHere's some loose pseudo-code for that scenario:\n\n```js\nclass Task {\n\tid;\n\n\t// constructor `Task()`\n\tTask(ID) { id = ID; }\n\toutputTask() { output( id ); }\n}\n\nclass XYZ inherits Task {\n\tlabel;\n\n\t// constructor `XYZ()`\n\tXYZ(ID,Label) { super( ID ); label = Label; }\n\toutputTask() { super(); output( label ); }\n}\n\nclass ABC inherits Task {\n\t// ...\n}\n```\n\nNow, you can instantiate one or more **copies** of the `XYZ` child class, and use those instance(s) to perform task "XYZ". These instances have **copies both** of the general `Task` defined behavior as well as the specific `XYZ` defined behavior. Likewise, instances of the `ABC` class would have copies of the `Task` behavior and the specific `ABC` behavior. 
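**Note:** The pseudo-code above isn't real JS. For reference, ES6's `class` syntax (see Appendix A) can express roughly the same design -- though keep in mind that in JS, even `class` wires up `[[Prototype]]` links under the covers rather than making copies. A sketch:

```js
class Task {
	constructor(ID) { this.id = ID; }
	outputTask() { console.log( this.id ); }
}

class XYZ extends Task {
	constructor(ID,Label) {
		super( ID );
		this.label = Label;
	}
	outputTask() {
		super.outputTask();
		console.log( this.label );
	}
}

var xyz = new XYZ( 1, "XYZ task" );
xyz.outputTask();
// 1
// XYZ task
```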
After construction, you will generally only interact with these instances (and not the classes), as the instances each have copies of all the behavior you need to do the intended task.\n\n### Delegation Theory\n\nBut now let's try to think about the same problem domain, but using *behavior delegation* instead of *classes*.\n\nYou will first define an **object** (not a class, nor a `function` as most JS'rs would lead you to believe) called `Task`, and it will have concrete behavior on it that includes utility methods that various tasks can use (read: *delegate to*!). Then, for each task ("XYZ", "ABC"), you define an **object** to hold that task-specific data/behavior. You **link** your task-specific object(s) to the `Task` utility object, allowing them to delegate to it when they need to.\n\nBasically, you think about performing task "XYZ" as needing behaviors from two sibling/peer objects (`XYZ` and `Task`) to accomplish it. But rather than needing to compose them together, via class copies, we can keep them in their separate objects, and we can allow the `XYZ` object to **delegate to** `Task` when needed.\n\nHere's some simple code to suggest how you accomplish that:\n\n```js\nvar Task = {\n\tsetID: function(ID) { this.id = ID; },\n\toutputID: function() { console.log( this.id ); }\n};\n\n// make `XYZ` delegate to `Task`\nvar XYZ = Object.create( Task );\n\nXYZ.prepareTask = function(ID,Label) {\n\tthis.setID( ID );\n\tthis.label = Label;\n};\n\nXYZ.outputTaskDetails = function() {\n\tthis.outputID();\n\tconsole.log( this.label );\n};\n\n// ABC = Object.create( Task );\n// ABC ... = ...\n```\n\nIn this code, `Task` and `XYZ` are not classes (or functions), they're **just objects**. `XYZ` is set up via `Object.create(..)` to `[[Prototype]]` delegate to the `Task` object (see Chapter 5).\n\nAs compared to class-orientation (aka, OO -- object-oriented), I call this style of code **"OLOO"** (objects-linked-to-other-objects). 
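A quick sketch of actually exercising that code, which also shows where the state ends up (the setup is repeated here for completeness):

```js
// (the same `Task` / `XYZ` setup as above)
var Task = {
	setID: function(ID) { this.id = ID; },
	outputID: function() { console.log( this.id ); }
};

var XYZ = Object.create( Task );
XYZ.prepareTask = function(ID,Label) {
	this.setID( ID );
	this.label = Label;
};
XYZ.outputTaskDetails = function() {
	this.outputID();
	console.log( this.label );
};

XYZ.prepareTask( 1, "XYZ task" );
XYZ.outputTaskDetails();
// 1
// XYZ task

// the task state ends up on the delegator
// `XYZ`, not on the `Task` utility object
XYZ.hasOwnProperty( "id" );		// true
Task.hasOwnProperty( "id" );	// false
```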
All we *really* care about is that the `XYZ` object delegates to the `Task` object (as does the `ABC` object).\n\nIn JavaScript, the `[[Prototype]]` mechanism links **objects** to other **objects**. There are no abstract mechanisms like "classes", no matter how much you try to convince yourself otherwise. It's like paddling a canoe upstream: you *can* do it, but you're *choosing* to go against the natural current, so it's obviously **going to be harder to get where you're going.**\n\nSome other differences to note with **OLOO style code**:\n\n1. Both `id` and `label` data members from the previous class example are data properties directly on `XYZ` (neither is on `Task`). In general, with `[[Prototype]]` delegation involved, **you want state to be on the delegators** (`XYZ`, `ABC`), not on the delegate (`Task`).\n2. With the class design pattern, we intentionally named `outputTask` the same on both parent (`Task`) and child (`XYZ`), so that we could take advantage of overriding (polymorphism). In behavior delegation, we do the opposite: **we avoid if at all possible naming things the same** at different levels of the `[[Prototype]]` chain (called shadowing -- see Chapter 5), because having those name collisions creates awkward/brittle syntax to disambiguate references (see Chapter 4), and we want to avoid that if we can.\n\n   This design pattern calls for fewer general method names (which are prone to overriding) and instead more descriptive method names, *specific* to the type of behavior each object is doing. **This can actually create easier to understand/maintain code**, because the names of methods (not only at definition location but strewn throughout other code) are more obvious (self documenting).\n3. 
`this.setID(ID);` inside of a method on the `XYZ` object first looks on `XYZ` for `setID(..)`, but since it doesn't find a method of that name on `XYZ`, `[[Prototype]]` *delegation* means it can follow the link to `Task` to look for `setID(..)`, which it of course finds. Moreover, because of implicit call-site `this` binding rules (see Chapter 2), when `setID(..)` runs, even though the method was found on `Task`, the `this` binding for that function call is `XYZ` exactly as we'd expect and want. We see the same thing with `this.outputID()` later in the code listing.\n\n   In other words, the general utility methods that exist on `Task` are available to us while interacting with `XYZ`, because `XYZ` can delegate to `Task`.\n\n**Behavior Delegation** means: let some object (`XYZ`) provide a delegation (to `Task`) for property or method references if not found on the object (`XYZ`).\n\nThis is an *extremely powerful* design pattern, very distinct from the idea of parent and child classes, inheritance, polymorphism, etc. Rather than organizing the objects in your mind vertically, with Parents flowing down to Children, think of objects side-by-side, as peers, with any direction of delegation links between the objects as necessary.\n\n**Note:** Delegation is more properly used as an internal implementation detail rather than exposed directly in the API design. In the above example, we don't necessarily *intend* with our API design for developers to call `XYZ.setID()` (though we can, of course!). We sorta *hide* the delegation as an internal detail of our API, where `XYZ.prepareTask(..)` delegates to `Task.setID(..)`. See the \"Links As Fallbacks?\" discussion in Chapter 5 for more detail.\n\n#### Mutual Delegation (Disallowed)\n\nYou cannot create a *cycle* where two or more objects are mutually delegated (bi-directionally) to each other. 
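A quick sketch of that rejection, using ES6's `Object.setPrototypeOf(..)` to attempt the second link:

```js
var A = {};
var B = Object.create( A );	// `B` delegates to `A`

try {
	// now try to make `A` delegate back to `B`: a cycle!
	Object.setPrototypeOf( A, B );
}
catch (err) {
	// the cycle is rejected at link-time
	console.log( err instanceof TypeError );	// true
}
```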
If you make `B` linked to `A`, and then try to link `A` to `B`, you will get an error.\n\nIt's a shame (not terribly surprising, but mildly annoying) that this is disallowed. If you made a reference to a property/method which didn't exist in either place, you'd have an infinite recursion on the `[[Prototype]]` loop. But if all references were strictly present, then `B` could delegate to `A`, and vice versa, and it *could* work. This would mean you could use either object to delegate to the other, for various tasks. There are a few niche use-cases where this might be helpful.\n\nBut it's disallowed because engine implementors have observed that it's more performant to check for (and reject!) the infinite circular reference once at set-time rather than needing to have the performance hit of that guard check every time you look-up a property on an object.\n\n#### Debugged\n\nWe'll briefly cover a subtle detail that can be confusing to developers. In general, the JS specification does not control how browser developer tools should represent specific values/structures to a developer, so each browser/engine is free to interpret such things as they see fit. As such, browsers/tools *don't always agree*. Specifically, the behavior we will now examine is currently observed only in Chrome's Developer Tools.\n\nConsider this traditional \"class constructor\" style JS code, as it would appear in the *console* of Chrome Developer Tools:\n\n```js\nfunction Foo() {}\n\nvar a1 = new Foo();\n\na1; // Foo {}\n```\n\nLet's look at the last line of that snippet: the output of evaluating the `a1` expression, which prints `Foo {}`. If you try this same code in Firefox, you will likely see `Object {}`. Why the difference? What do these outputs mean?\n\nChrome is essentially saying \"{} is an empty object that was constructed by a function with name 'Foo'\". Firefox is saying \"{} is an empty object of general construction from Object\". 
The subtle difference is that Chrome is actively tracking, as an *internal property*, the name of the actual function that did the construction, whereas other browsers don't track that additional information.\n\nIt would be tempting to attempt to explain this with JavaScript mechanisms:\n\n```js\nfunction Foo() {}\n\nvar a1 = new Foo();\n\na1.constructor; // Foo(){}\na1.constructor.name; // "Foo"\n```\n\nSo, is that how Chrome is outputting "Foo", by simply examining the object's `.constructor.name`? Confusingly, the answer is both "yes" and "no".\n\nConsider this code:\n\n```js\nfunction Foo() {}\n\nvar a1 = new Foo();\n\nFoo.prototype.constructor = function Gotcha(){};\n\na1.constructor; // Gotcha(){}\na1.constructor.name; // "Gotcha"\n\na1; // Foo {}\n```\n\nEven though we change `a1.constructor.name` to legitimately be something else ("Gotcha"), Chrome's console still uses the "Foo" name.\n\nSo, it would appear the answer to the previous question (does it use `.constructor.name`?) is **no**, it must track it somewhere else, internally.\n\nBut, not so fast! Let's see how this kind of behavior works with OLOO-style code:\n\n```js\nvar Foo = {};\n\nvar a1 = Object.create( Foo );\n\na1; // Object {}\n\nObject.defineProperty( Foo, "constructor", {\n\tenumerable: false,\n\tvalue: function Gotcha(){}\n});\n\na1; // Gotcha {}\n```\n\nAh-ha! **Gotcha!** Here, Chrome's console **did** find and use the `.constructor.name`. Actually, while writing this book, this exact behavior was identified as a bug in Chrome, and by the time you're reading this, it may have already been fixed. 
So you may instead have seen the corrected `a1; // Object {}`.\n\nAside from that bug, the internal tracking (apparently only for debug output purposes) of the \"constructor name\" that Chrome does (shown in the earlier snippets) is an intentional Chrome-only extension of behavior beyond what the JS specification calls for.\n\nIf you don't use a \"constructor\" to make your objects, as we've discouraged with OLOO-style code here in this chapter, then you'll get objects that Chrome does *not* track an internal \"constructor name\" for, and such objects will correctly only be outputted as \"Object {}\", meaning \"object generated from Object() construction\".\n\n**Don't think** this represents a drawback of OLOO-style coding. When you code with OLOO and behavior delegation as your design pattern, *who* \"constructed\" (that is, *which function* was called with `new`?) some object is an irrelevant detail. Chrome's specific internal \"constructor name\" tracking is really only useful if you're fully embracing \"class-style\" coding, but is moot if you're instead embracing OLOO delegation.\n\n### Mental Models Compared\n\nNow that you can see a difference between \"class\" and \"delegation\" design patterns, at least theoretically, let's see the implications these design patterns have on the mental models we use to reason about our code.\n\nWe'll examine some more theoretical (\"Foo\", \"Bar\") code, and compare both ways (OO vs. OLOO) of implementing the code. 
The first snippet uses the classical (\"prototypal\") OO style:\n\n```js\nfunction Foo(who) {\n\tthis.me = who;\n}\nFoo.prototype.identify = function() {\n\treturn \"I am \" + this.me;\n};\n\nfunction Bar(who) {\n\tFoo.call( this, who );\n}\nBar.prototype = Object.create( Foo.prototype );\n\nBar.prototype.speak = function() {\n\talert( \"Hello, \" + this.identify() + \".\" );\n};\n\nvar b1 = new Bar( \"b1\" );\nvar b2 = new Bar( \"b2\" );\n\nb1.speak();\nb2.speak();\n```\n\nParent class `Foo`, inherited by child class `Bar`, which is then instantiated twice as `b1` and `b2`. What we have is `b1` delegating to `Bar.prototype` which delegates to `Foo.prototype`. This should look fairly familiar to you, at this point. Nothing too ground-breaking going on.\n\nNow, let's implement **the exact same functionality** using *OLOO* style code:\n\n```js\nvar Foo = {\n\tinit: function(who) {\n\t\tthis.me = who;\n\t},\n\tidentify: function() {\n\t\treturn \"I am \" + this.me;\n\t}\n};\n\nvar Bar = Object.create( Foo );\n\nBar.speak = function() {\n\talert( \"Hello, \" + this.identify() + \".\" );\n};\n\nvar b1 = Object.create( Bar );\nb1.init( \"b1\" );\nvar b2 = Object.create( Bar );\nb2.init( \"b2\" );\n\nb1.speak();\nb2.speak();\n```\n\nWe take exactly the same advantage of `[[Prototype]]` delegation from `b1` to `Bar` to `Foo` as we did in the previous snippet between `b1`, `Bar.prototype`, and `Foo.prototype`. **We still have the same 3 objects linked together**.\n\nBut, importantly, we've greatly simplified *all the other stuff* going on, because now we just set up **objects** linked to each other, without needing all the cruft and confusion of things that look (but don't behave!) 
like classes, with constructors and prototypes and `new` calls.\n\nAsk yourself: if I can get the same functionality with OLOO style code as I do with \"class\" style code, but OLOO is simpler and has fewer things to think about, **isn't OLOO better**?\n\nLet's examine the mental models involved between these two snippets.\n\nFirst, the class-style code snippet implies this mental model of entities and their relationships:\n\n<img src=\"fig4.png\">\n\nActually, that's a little unfair/misleading, because it's showing a lot of extra detail that you don't *technically* need to know at all times (though you *do* need to understand it!). One take-away is that it's quite a complex series of relationships. But another take-away: if you spend the time to follow those relationship arrows around, **there's an amazing amount of internal consistency** in JS's mechanisms.\n\nFor instance, the ability of a JS function to access `call(..)`, `apply(..)`, and `bind(..)` (see Chapter 2) is because functions themselves are objects, and function-objects also have a `[[Prototype]]` linkage, to the `Function.prototype` object, which defines those default methods that any function-object can delegate to. JS can do those things, *and you can too!*\n\nOK, let's now look at a *slightly* simplified version of that diagram which is a little more \"fair\" for comparison -- it shows only the *relevant* entities and relationships.\n\n<img src=\"fig5.png\">\n\nStill pretty complex, eh? The dotted lines are depicting the implied relationships when you set up the \"inheritance\" between `Foo.prototype` and `Bar.prototype` and haven't yet *fixed* the **missing** `.constructor` property reference (see \"Constructor Redux\" in Chapter 5). 
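That dotted-line `.constructor` relationship is easy to observe directly in code. Here's a brief runnable sketch showing both the misleading delegated reference and the manual `Object.defineProperty(..)` fix covered in Chapter 5:

```js
function Foo() { /* .. */ }
function Bar() { /* .. */ }

Bar.prototype = Object.create( Foo.prototype );

// `Bar.prototype` has no own `.constructor`, so the reference
// delegates up the chain and (misleadingly) lands on `Foo`
console.log( Bar.prototype.constructor === Foo ); // true

// the Chapter 5 fix: restore a sensible, non-enumerable `.constructor`
Object.defineProperty( Bar.prototype, "constructor", {
	enumerable: false,
	writable: true,
	configurable: true,
	value: Bar
} );

console.log( Bar.prototype.constructor === Bar ); // true
```

Of course, as argued throughout this chapter, the better question is whether you needed `.constructor` at all.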
Even with those dotted lines removed, the mental model is still an awful lot to juggle every time you work with object linkages.\n\nNow, let's look at the mental model for OLOO-style code:\n\n<img src=\"fig6.png\">\n\nAs you can see comparing them, it's quite obvious that OLOO-style code has *vastly less stuff* to worry about, because OLOO-style code embraces the **fact** that the only thing we ever really cared about was the **objects linked to other objects**.\n\nAll the other \"class\" cruft was a confusing and complex way of getting the same end result. Remove that stuff, and things get much simpler (without losing any capability).\n\n## Classes vs. Objects\n\nWe've just seen various theoretical explorations and mental models of \"classes\" vs. \"behavior delegation\". But, let's now look at more concrete code scenarios to show how you'd actually use these ideas.\n\nWe'll first examine a typical scenario in front-end web dev: creating UI widgets (buttons, drop-downs, etc).\n\n### Widget \"Classes\"\n\nBecause you're probably still so used to the OO design pattern, you'll likely immediately think of this problem domain in terms of a parent class (perhaps called `Widget`) with all the common base widget behavior, and then child derived classes for specific widget types (like `Button`).\n\n**Note:** We're going to use jQuery here for DOM and CSS manipulation, only because it's a detail we don't really care about for the purposes of our current discussion. 
None of this code cares which JS framework (jQuery, Dojo, YUI, etc), if any, you might solve such mundane tasks with.\n\nLet's examine how we'd implement the \"class\" design in classic-style pure JS without any \"class\" helper library or syntax:\n\n```js\n// Parent class\nfunction Widget(width,height) {\n\tthis.width = width || 50;\n\tthis.height = height || 50;\n\tthis.$elem = null;\n}\n\nWidget.prototype.render = function($where){\n\tif (this.$elem) {\n\t\tthis.$elem.css( {\n\t\t\twidth: this.width + \"px\",\n\t\t\theight: this.height + \"px\"\n\t\t} ).appendTo( $where );\n\t}\n};\n\n// Child class\nfunction Button(width,height,label) {\n\t// \"super\" constructor call\n\tWidget.call( this, width, height );\n\tthis.label = label || \"Default\";\n\n\tthis.$elem = $( \"<button>\" ).text( this.label );\n}\n\n// make `Button` \"inherit\" from `Widget`\nButton.prototype = Object.create( Widget.prototype );\n\n// override base \"inherited\" `render(..)`\nButton.prototype.render = function($where) {\n\t// \"super\" call\n\tWidget.prototype.render.call( this, $where );\n\tthis.$elem.click( this.onClick.bind( this ) );\n};\n\nButton.prototype.onClick = function(evt) {\n\tconsole.log( \"Button '\" + this.label + \"' clicked!\" );\n};\n\n$( document ).ready( function(){\n\tvar $body = $( document.body );\n\tvar btn1 = new Button( 125, 30, \"Hello\" );\n\tvar btn2 = new Button( 150, 40, \"World\" );\n\n\tbtn1.render( $body );\n\tbtn2.render( $body );\n} );\n```\n\nOO design patterns tell us to declare a base `render(..)` in the parent class, then override it in our child class, but not to replace it per se, rather to augment the base functionality with button-specific behavior.\n\nNotice the ugliness of *explicit pseudo-polymorphism* (see Chapter 4) with `Widget.call` and `Widget.prototype.render.call` references for faking \"super\" calls from the child \"class\" methods back up to the \"parent\" class base methods. 
Yuck.\n\n#### ES6 `class` sugar\n\nWe cover ES6 `class` syntax sugar in detail in Appendix A, but let's briefly demonstrate how we'd implement the same code using `class`:\n\n```js\nclass Widget {\n\tconstructor(width,height) {\n\t\tthis.width = width || 50;\n\t\tthis.height = height || 50;\n\t\tthis.$elem = null;\n\t}\n\trender($where){\n\t\tif (this.$elem) {\n\t\t\tthis.$elem.css( {\n\t\t\t\twidth: this.width + \"px\",\n\t\t\t\theight: this.height + \"px\"\n\t\t\t} ).appendTo( $where );\n\t\t}\n\t}\n}\n\nclass Button extends Widget {\n\tconstructor(width,height,label) {\n\t\tsuper( width, height );\n\t\tthis.label = label || \"Default\";\n\t\tthis.$elem = $( \"<button>\" ).text( this.label );\n\t}\n\trender($where) {\n\t\tsuper.render( $where );\n\t\tthis.$elem.click( this.onClick.bind( this ) );\n\t}\n\tonClick(evt) {\n\t\tconsole.log( \"Button '\" + this.label + \"' clicked!\" );\n\t}\n}\n\n$( document ).ready( function(){\n\tvar $body = $( document.body );\n\tvar btn1 = new Button( 125, 30, \"Hello\" );\n\tvar btn2 = new Button( 150, 40, \"World\" );\n\n\tbtn1.render( $body );\n\tbtn2.render( $body );\n} );\n```\n\nUndoubtedly, a number of the syntax uglies of the previous classical approach have been smoothed over with ES6's `class`. The presence of a `super(..)` in particular seems quite nice (though when you dig into it, it's not all roses!).\n\nDespite syntactic improvements, **these are not *real* classes**, as they still operate on top of the `[[Prototype]]` mechanism. They suffer from all the same mental-model mismatches we explored in Chapters 4, 5 and thus far in this chapter. Appendix A will expound on the ES6 `class` syntax and its implications in detail. 
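If you want to convince yourself of that right now, here's a quick sketch (stripped of the jQuery details, runnable in any ES6-capable engine) showing that `class` / `extends` wires up the very same `[[Prototype]]` links we've been drawing all along:

```js
class Widget { /* .. */ }
class Button extends Widget { /* .. */ }

// `extends` is just `[[Prototype]]` linkage between
// the two `.prototype` objects...
console.log(
	Object.getPrototypeOf( Button.prototype ) === Widget.prototype
); // true

// ...and instances delegate through that same chain
var btn = new Button();
console.log( Widget.prototype.isPrototypeOf( btn ) ); // true
```

Same objects, same delegation links -- only the syntax on top has changed.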
We'll see why solving syntax hiccups doesn't substantially solve our class confusions in JS, though it makes a valiant effort masquerading as a solution!\n\nWhether you use the classic prototypal syntax or the new ES6 sugar, you've still made a *choice* to model the problem domain (UI widgets) with \"classes\". And as the previous few chapters try to demonstrate, this *choice* in JavaScript is opting you into extra headaches and mental tax.\n\n### Delegating Widget Objects\n\nHere's our simpler `Widget` / `Button` example, using **OLOO style delegation**:\n\n```js\nvar Widget = {\n\tinit: function(width,height){\n\t\tthis.width = width || 50;\n\t\tthis.height = height || 50;\n\t\tthis.$elem = null;\n\t},\n\tinsert: function($where){\n\t\tif (this.$elem) {\n\t\t\tthis.$elem.css( {\n\t\t\t\twidth: this.width + \"px\",\n\t\t\t\theight: this.height + \"px\"\n\t\t\t} ).appendTo( $where );\n\t\t}\n\t}\n};\n\nvar Button = Object.create( Widget );\n\nButton.setup = function(width,height,label){\n\t// delegated call\n\tthis.init( width, height );\n\tthis.label = label || \"Default\";\n\n\tthis.$elem = $( \"<button>\" ).text( this.label );\n};\nButton.build = function($where) {\n\t// delegated call\n\tthis.insert( $where );\n\tthis.$elem.click( this.onClick.bind( this ) );\n};\nButton.onClick = function(evt) {\n\tconsole.log( \"Button '\" + this.label + \"' clicked!\" );\n};\n\n$( document ).ready( function(){\n\tvar $body = $( document.body );\n\n\tvar btn1 = Object.create( Button );\n\tbtn1.setup( 125, 30, \"Hello\" );\n\n\tvar btn2 = Object.create( Button );\n\tbtn2.setup( 150, 40, \"World\" );\n\n\tbtn1.build( $body );\n\tbtn2.build( $body );\n} );\n```\n\nWith this OLOO-style approach, we don't think of `Widget` as a parent and `Button` as a child. 
Rather, `Widget` **is just an object** and is sort of a utility collection that any specific type of widget might want to delegate to, and `Button` **is also just a stand-alone object** (with a delegation link to `Widget`, of course!).\n\nFrom a design pattern perspective, we **didn't** share the same method name `render(..)` in both objects, the way classes suggest, but instead we chose different names (`insert(..)` and `build(..)`) that were more descriptive of what task each does specifically. The *initialization* methods are called `init(..)` and `setup(..)`, respectively, for the same reasons.\n\nNot only does this delegation design pattern suggest different and more descriptive names (rather than shared and more generic names), but doing so with OLOO happens to avoid the ugliness of the explicit pseudo-polymorphic calls (`Widget.call` and `Widget.prototype.render.call`), as you can see by the simple, relative, delegated calls to `this.init(..)` and `this.insert(..)`.\n\nSyntactically, we also don't have any constructors, `.prototype` or `new` present, as they are, in fact, just unnecessary cruft.\n\nNow, if you're paying close attention, you may notice that what was previously just one call (`var btn1 = new Button(..)`) is now two calls (`var btn1 = Object.create(Button)` and `btn1.setup(..)`). Initially this may seem like a drawback (more code).\n\nHowever, even this is something that's **a pro of OLOO style code** as compared to classical prototype style code. How?\n\nWith class constructors, you are \"forced\" (not really, but strongly suggested) to do both construction and initialization in the same step. However, there are many cases where being able to do these two steps separately (as you do with OLOO!) is more flexible.\n\nFor example, let's say you create all your instances in a pool at the beginning of your program, but you wait to initialize them with specific setup until they are pulled from the pool and used. 
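As a purely hypothetical sketch (the `pool` array and its size here are made up for illustration), that separation of the two steps might look like:

```js
var Widget = {
	init: function(width,height) {
		this.width = width || 50;
		this.height = height || 50;
	}
};

// construction: allocate all the objects up front...
var pool = [];
for (var i = 0; i < 3; i++) {
	pool.push( Object.create( Widget ) );
}

// ...initialization: much later, only when one is actually needed
var w = pool.pop();
w.init( 125, 30 );

console.log( w.width ); // 125
```

A class constructor would have forced both steps into each `new` call; here they're free to happen wherever (and whenever) makes sense.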
We showed the two calls happening right next to each other, but of course they can happen at very different times and in very different parts of our code, as needed.\n\n**OLOO** supports *better* the principle of separation of concerns, where creation and initialization are not necessarily conflated into the same operation.\n\n## Simpler Design\n\nIn addition to OLOO providing ostensibly simpler (and more flexible!) code, behavior delegation as a pattern can actually lead to simpler code architecture. Let's examine one last example that illustrates how OLOO simplifies your overall design.\n\nThe scenario we'll examine is two controller objects, one for handling the login form of a web page, and another for actually handling the authentication (communication) with the server.\n\nWe'll need a utility helper for making the Ajax communication to the server. We'll use jQuery (though any framework would do fine), since it handles not only the Ajax for us, but it returns a promise-like answer so that we can listen for the response in our calling code with `.then(..)`.\n\n**Note:** We don't cover Promises here, but we will cover them in a future title of the *\"You Don't Know JS\"* series.\n\nFollowing the typical class design pattern, we'll break up the task into base functionality in a class called `Controller`, and then we'll derive two child classes, `LoginController` and `AuthController`, which both inherit from `Controller` and specialize some of those base behaviors.\n\n```js\n// Parent class\nfunction Controller() {\n\tthis.errors = [];\n}\nController.prototype.showDialog = function(title,msg) {\n\t// display title & message to user in dialog\n};\nController.prototype.success = function(msg) {\n\tthis.showDialog( \"Success\", msg );\n};\nController.prototype.failure = function(err) {\n\tthis.errors.push( err );\n\tthis.showDialog( \"Error\", err );\n};\n```\n\n```js\n// Child class\nfunction LoginController() {\n\tController.call( this );\n}\n// Link child class to 
parent\nLoginController.prototype = Object.create( Controller.prototype );\nLoginController.prototype.getUser = function() {\n\treturn document.getElementById( \"login_username\" ).value;\n};\nLoginController.prototype.getPassword = function() {\n\treturn document.getElementById( \"login_password\" ).value;\n};\nLoginController.prototype.validateEntry = function(user,pw) {\n\tuser = user || this.getUser();\n\tpw = pw || this.getPassword();\n\n\tif (!(user && pw)) {\n\t\treturn this.failure( \"Please enter a username & password!\" );\n\t}\n\telse if (pw.length < 5) {\n\t\treturn this.failure( \"Password must be 5+ characters!\" );\n\t}\n\n\t// got here? validated!\n\treturn true;\n};\n// Override to extend base `failure()`\nLoginController.prototype.failure = function(err) {\n\t// \"super\" call\n\tController.prototype.failure.call( this, \"Login invalid: \" + err );\n};\n```\n\n```js\n// Child class\nfunction AuthController(login) {\n\tController.call( this );\n\t// in addition to inheritance, we also need composition\n\tthis.login = login;\n}\n// Link child class to parent\nAuthController.prototype = Object.create( Controller.prototype );\nAuthController.prototype.server = function(url,data) {\n\treturn $.ajax( {\n\t\turl: url,\n\t\tdata: data\n\t} );\n};\nAuthController.prototype.checkAuth = function() {\n\tvar user = this.login.getUser();\n\tvar pw = this.login.getPassword();\n\n\tif (this.login.validateEntry( user, pw )) {\n\t\tthis.server( \"/check-auth\",{\n\t\t\tuser: user,\n\t\t\tpw: pw\n\t\t} )\n\t\t.then( this.success.bind( this ) )\n\t\t.fail( this.failure.bind( this ) );\n\t}\n};\n// Override to extend base `success()`\nAuthController.prototype.success = function() {\n\t// \"super\" call\n\tController.prototype.success.call( this, \"Authenticated!\" );\n};\n// Override to extend base `failure()`\nAuthController.prototype.failure = function(err) {\n\t// \"super\" call\n\tController.prototype.failure.call( this, \"Auth Failed: \" + err 
);\n};\n```\n\n```js\nvar auth = new AuthController(\n\t// in addition to inheritance, we also need composition\n\tnew LoginController()\n);\nauth.checkAuth();\n```\n\nWe have base behaviors that all controllers share, which are `success(..)`, `failure(..)` and `showDialog(..)`. Our child classes `LoginController` and `AuthController` override `failure(..)` and `success(..)` to augment the default base class behavior. Also note that `AuthController` needs an instance of `LoginController` to interact with the login form, so that becomes a member data property.\n\nThe other thing to mention is that we chose some *composition* to sprinkle in on top of the inheritance. `AuthController` needs to know about `LoginController`, so we instantiate it (`new LoginController()`) and keep a class member property called `this.login` to reference it, so that `AuthController` can invoke behavior on `LoginController`.\n\n**Note:** There *might* have been a slight temptation to make `AuthController` inherit from `LoginController`, or vice versa, such that we had *virtual composition* through the inheritance chain. But this is a strongly clear example of what's wrong with class inheritance as *the* model for the problem domain, because neither `AuthController` nor `LoginController` are specializing base behavior of the other, so inheritance between them makes little sense except if classes are your only design pattern. Instead, we layered in some simple *composition* and now they can cooperate, while still both benefiting from the inheritance from the parent base `Controller`.\n\nIf you're familiar with class-oriented (OO) design, this should all look pretty familiar and natural.\n\n### De-class-ified\n\nBut, **do we really need to model this problem** with a parent `Controller` class, two child classes, **and some composition**? Is there a way to take advantage of OLOO-style behavior delegation and have a *much* simpler design? 
**Yes!**\n\n```js\nvar LoginController = {\n\terrors: [],\n\tgetUser: function() {\n\t\treturn document.getElementById( \"login_username\" ).value;\n\t},\n\tgetPassword: function() {\n\t\treturn document.getElementById( \"login_password\" ).value;\n\t},\n\tvalidateEntry: function(user,pw) {\n\t\tuser = user || this.getUser();\n\t\tpw = pw || this.getPassword();\n\n\t\tif (!(user && pw)) {\n\t\t\treturn this.failure( \"Please enter a username & password!\" );\n\t\t}\n\t\telse if (pw.length < 5) {\n\t\t\treturn this.failure( \"Password must be 5+ characters!\" );\n\t\t}\n\n\t\t// got here? validated!\n\t\treturn true;\n\t},\n\tshowDialog: function(title,msg) {\n\t\t// display success message to user in dialog\n\t},\n\tfailure: function(err) {\n\t\tthis.errors.push( err );\n\t\tthis.showDialog( \"Error\", \"Login invalid: \" + err );\n\t}\n};\n```\n\n```js\n// Link `AuthController` to delegate to `LoginController`\nvar AuthController = Object.create( LoginController );\n\nAuthController.errors = [];\nAuthController.checkAuth = function() {\n\tvar user = this.getUser();\n\tvar pw = this.getPassword();\n\n\tif (this.validateEntry( user, pw )) {\n\t\tthis.server( \"/check-auth\",{\n\t\t\tuser: user,\n\t\t\tpw: pw\n\t\t} )\n\t\t.then( this.accepted.bind( this ) )\n\t\t.fail( this.rejected.bind( this ) );\n\t}\n};\nAuthController.server = function(url,data) {\n\treturn $.ajax( {\n\t\turl: url,\n\t\tdata: data\n\t} );\n};\nAuthController.accepted = function() {\n\tthis.showDialog( \"Success\", \"Authenticated!\" )\n};\nAuthController.rejected = function(err) {\n\tthis.failure( \"Auth Failed: \" + err );\n};\n```\n\nSince `AuthController` is just an object (so is `LoginController`), we don't need to instantiate (like `new AuthController()`) to perform our task. 
All we need to do is:\n\n```js\nAuthController.checkAuth();\n```\n\nOf course, with OLOO, if you do need to create one or more additional objects in the delegation chain, that's easy, and still doesn't require anything like class instantiation:\n\n```js\nvar controller1 = Object.create( AuthController );\nvar controller2 = Object.create( AuthController );\n```\n\nWith behavior delegation, `AuthController` and `LoginController` are **just objects**, *horizontal* peers of each other, and are not arranged or related as parents and children in class-orientation. We somewhat arbitrarily chose to have `AuthController` delegate to `LoginController` -- it would have been just as valid for the delegation to go the reverse direction.\n\nThe main takeaway from this second code listing is that we only have two entities (`LoginController` and `AuthController`), **not three** as before.\n\nWe didn't need a base `Controller` class to \"share\" behavior between the two, because delegation is a powerful enough mechanism to give us the functionality we need. We also, as noted before, don't need to instantiate our classes to work with them, because there are no classes, **just the objects themselves.** Furthermore, there's no need for *composition* as delegation gives the two objects the ability to cooperate *differentially* as needed.\n\nLastly, we avoided the polymorphism pitfalls of class-oriented design by not having the names `success(..)` and `failure(..)` be the same on both objects, which would have required ugly explicit pseudopolymorphism. Instead, we called them `accepted()` and `rejected(..)` on `AuthController` -- slightly more descriptive names for their specific tasks.\n\n**Bottom line**: we end up with the same capability, but a (significantly) simpler design. 
That's the power of OLOO-style code and the power of the *behavior delegation* design pattern.\n\n## Nicer Syntax\n\nOne of the nicer things that makes ES6's `class` so deceptively attractive (see Appendix A on why to avoid it!) is the short-hand syntax for declaring class methods:\n\n```js\nclass Foo {\n\tmethodName() { /* .. */ }\n}\n```\n\nWe get to drop the word `function` from the declaration, which makes JS developers everywhere cheer!\n\nAnd you may have noticed and been frustrated that the suggested OLOO syntax above has lots of `function` appearances, which seems like a bit of a detractor to the goal of OLOO simplification. **But it doesn't have to be that way!**\n\nAs of ES6, we can use *concise method declarations* in any object literal, so an object in OLOO style can be declared this way (same short-hand sugar as with `class` body syntax):\n\n```js\nvar LoginController = {\n\terrors: [],\n\tgetUser() { // Look ma, no `function`!\n\t\t// ...\n\t},\n\tgetPassword() {\n\t\t// ...\n\t}\n\t// ...\n};\n```\n\nAbout the only difference is that object literals will still require `,` comma separators between elements whereas `class` syntax doesn't. 
Pretty minor concession in the whole scheme of things.\n\nMoreover, as of ES6, the clunkier syntax you use (like for the `AuthController` definition), where you're assigning properties individually and not using an object literal, can be re-written using an object literal (so that you can use concise methods), and you can just modify that object's `[[Prototype]]` with `Object.setPrototypeOf(..)`, like this:\n\n```js\n// use nicer object literal syntax w/ concise methods!\nvar AuthController = {\n\terrors: [],\n\tcheckAuth() {\n\t\t// ...\n\t},\n\tserver(url,data) {\n\t\t// ...\n\t}\n\t// ...\n};\n\n// NOW, link `AuthController` to delegate to `LoginController`\nObject.setPrototypeOf( AuthController, LoginController );\n```\n\nOLOO-style as of ES6, with concise methods, **is a lot friendlier** than it was before (and even then, it was much simpler and nicer than classical prototype-style code). **You don't have to opt for class** (complexity) to get nice clean object syntax!\n\n### Unlexical\n\nThere *is* one drawback to concise methods that's subtle but important to note. Consider this code:\n\n```js\nvar Foo = {\n\tbar() { /*..*/ },\n\tbaz: function baz() { /*..*/ }\n};\n```\n\nHere's the syntactic de-sugaring that expresses how that code will operate:\n\n```js\nvar Foo = {\n\tbar: function() { /*..*/ },\n\tbaz: function baz() { /*..*/ }\n};\n```\n\nSee the difference? The `bar()` short-hand became an *anonymous function expression* (`function()..`) attached to the `bar` property, because the function object itself has no name identifier. Compare that to the manually specified *named function expression* (`function baz()..`) which has a lexical name identifier `baz` in addition to being attached to a `.baz` property.\n\nSo what? In the *\"Scope & Closures\"* title of this *\"You Don't Know JS\"* book series, we cover the three main downsides of *anonymous function expressions* in detail. 
We'll just briefly repeat them so we can compare to the concise method short-hand.\n\nLack of a `name` identifier on an anonymous function:\n\n1. makes debugging stack traces harder\n2. makes self-referencing (recursion, event (un)binding, etc) harder\n3. makes code (a little bit) harder to understand\n\nItems 1 and 3 don't apply to concise methods.\n\nEven though the de-sugaring uses an *anonymous function expression* which normally would have no `name` in stack traces, concise methods are specified to set the internal `name` property of the function object accordingly, so stack traces should be able to use it (though that's implementation dependent so not guaranteed).\n\nItem 2 is, unfortunately, **still a drawback to concise methods**. They will not have a lexical identifier to use as a self-reference. Consider:\n\n```js\nvar Foo = {\n\tbar: function(x) {\n\t\tif (x < 10) {\n\t\t\treturn Foo.bar( x * 2 );\n\t\t}\n\t\treturn x;\n\t},\n\tbaz: function baz(x) {\n\t\tif (x < 10) {\n\t\t\treturn baz( x * 2 );\n\t\t}\n\t\treturn x;\n\t}\n};\n```\n\nThe manual `Foo.bar(x*2)` reference above kind of suffices in this example, but there are many cases where a function wouldn't necessarily be able to do that, such as cases where the function is being shared in delegation across different objects, using `this` binding, etc. You would want to use a real self-reference, and the function object's `name` identifier is the best way to accomplish that.\n\nJust be aware of this caveat for concise methods, and if you run into such issues with lack of self-reference, make sure to forgo the concise method syntax **just for that declaration** in favor of the manual *named function expression* declaration form: `baz: function baz(){..}`.\n\n## Introspection\n\nIf you've spent much time with class oriented programming (either in JS or other languages), you're probably familiar with *type introspection*: inspecting an instance to find out what *kind* of object it is. 
The primary goal of *type introspection* with class instances is to reason about the structure/capabilities of the object based on *how it was created*.\n\nConsider this code which uses `instanceof` (see Chapter 5) for introspecting on an object `a1` to infer its capability:\n\n```js\nfunction Foo() {\n\t// ...\n}\nFoo.prototype.something = function(){\n\t// ...\n}\n\nvar a1 = new Foo();\n\n// later\n\nif (a1 instanceof Foo) {\n\ta1.something();\n}\n```\n\nBecause `Foo.prototype` (not `Foo`!) is in the `[[Prototype]]` chain (see Chapter 5) of `a1`, the `instanceof` operator (confusingly) pretends to tell us that `a1` is an instance of the `Foo` \"class\". With this knowledge, we then assume that `a1` has the capabilities described by the `Foo` \"class\".\n\nOf course, there is no `Foo` class, only a plain old normal function `Foo`, which happens to have a reference to an arbitrary object (`Foo.prototype`) that `a1` happens to be delegation-linked to. By its syntax, `instanceof` pretends to be inspecting the relationship between `a1` and `Foo`, but it's actually telling us whether `a1` and (the arbitrary object referenced by) `Foo.prototype` are related.\n\nThe semantic confusion (and indirection) of `instanceof` syntax means that to use `instanceof`-based introspection to ask if object `a1` is related to the capabilities object in question, you *have to* have a function that holds a reference to that object -- you can't just directly ask if the two objects are related.\n\nRecall the abstract `Foo` / `Bar` / `b1` example from earlier in this chapter, which we'll abbreviate here:\n\n```js\nfunction Foo() { /* .. */ }\nFoo.prototype...\n\nfunction Bar() { /* .. 
*/ }\nBar.prototype = Object.create( Foo.prototype );\n\nvar b1 = new Bar( \"b1\" );\n```\n\nFor *type introspection* purposes on the entities in that example, using `instanceof` and `.prototype` semantics, here are the various checks you might need to perform:\n\n```js\n// relating `Foo` and `Bar` to each other\nBar.prototype instanceof Foo; // true\nObject.getPrototypeOf( Bar.prototype ) === Foo.prototype; // true\nFoo.prototype.isPrototypeOf( Bar.prototype ); // true\n\n// relating `b1` to both `Foo` and `Bar`\nb1 instanceof Foo; // true\nb1 instanceof Bar; // true\nObject.getPrototypeOf( b1 ) === Bar.prototype; // true\nFoo.prototype.isPrototypeOf( b1 ); // true\nBar.prototype.isPrototypeOf( b1 ); // true\n```\n\nIt's fair to say that some of that kinda sucks. For instance, intuitively (with classes) you might want to be able to say something like `Bar instanceof Foo` (because it's easy to mix up what \"instance\" means to think it includes \"inheritance\"), but that's not a sensible comparison in JS. You have to do `Bar.prototype instanceof Foo` instead.\n\nAnother common, but perhaps less robust, pattern for *type introspection*, which many devs seem to prefer over `instanceof`, is called \"duck typing\". This term comes from the adage, \"if it looks like a duck, and it quacks like a duck, it must be a duck\".\n\nExample:\n\n```js\nif (a1.something) {\n\ta1.something();\n}\n```\n\nRather than inspecting for a relationship between `a1` and an object that holds the delegatable `something()` function, we assume that the test for `a1.something` passing means `a1` has the capability to call `.something()` (regardless of if it found the method directly on `a1` or delegated to some other object). 
In and of itself, that assumption isn't so risky.\n\nBut \"duck typing\" is often extended to make **other assumptions about the object's capabilities** besides what's being tested, which of course introduces more risk (aka, brittle design) into the test.\n\nOne notable example of \"duck typing\" comes with ES6 Promises (which as an earlier note explained are not being covered in this book).\n\nFor various reasons, there's a need to determine if any arbitrary object reference *is a Promise*, but the way that test is done is to check if the object happens to have a `then()` function present on it. In other words, **if any object** happens to have a `then()` method, ES6 Promises will assume unconditionally that the object **is a \"thenable\"** and therefore will expect it to behave conformantly to all standard behaviors of Promises.\n\nIf you have any non-Promise object that happens for whatever reason to have a `then()` method on it, you are strongly advised to keep it far away from the ES6 Promise mechanism to avoid broken assumptions.\n\nThat example clearly illustrates the perils of \"duck typing\". You should only use such approaches sparingly and in controlled conditions.\n\nTurning our attention once again back to OLOO-style code as presented here in this chapter, *type introspection* turns out to be much cleaner. Let's recall (and abbreviate) the `Foo` / `Bar` / `b1` OLOO example from earlier in the chapter:\n\n```js\nvar Foo = { /* .. 
*/ };\n\nvar Bar = Object.create( Foo );\nBar...\n\nvar b1 = Object.create( Bar );\n```\n\nUsing this OLOO approach, where all we have are plain objects that are related via `[[Prototype]]` delegation, here's the quite simplified *type introspection* we might use:\n\n```js\n// relating `Foo` and `Bar` to each other\nFoo.isPrototypeOf( Bar ); // true\nObject.getPrototypeOf( Bar ) === Foo; // true\n\n// relating `b1` to both `Foo` and `Bar`\nFoo.isPrototypeOf( b1 ); // true\nBar.isPrototypeOf( b1 ); // true\nObject.getPrototypeOf( b1 ) === Bar; // true\n```\n\nWe're not using `instanceof` anymore, because it's confusingly pretending to have something to do with classes. Now, we just ask the (informally stated) question, \"are you *a* prototype of me?\" There's no more indirection necessary with stuff like `Foo.prototype` or the painfully verbose `Foo.prototype.isPrototypeOf(..)`.\n\nI think it's fair to say these checks are significantly less complicated/confusing than the previous set of introspection checks. **Yet again, we see that OLOO is simpler than (but with all the same power of) class-style coding in JavaScript.**\n\n## Review (TL;DR)\n\nClasses and inheritance are a design pattern you can *choose*, or *not choose*, in your software architecture. Most developers take for granted that classes are the only (proper) way to organize code, but here we've seen there's another less-commonly talked about pattern that's actually quite powerful: **behavior delegation**.\n\nBehavior delegation suggests objects as peers of each other, which delegate amongst themselves, rather than parent and child class relationships. JavaScript's `[[Prototype]]` mechanism is, by its very designed nature, a behavior delegation mechanism. 
That means we can either choose to struggle to implement class mechanics on top of JS (see Chapters 4 and 5), or we can just embrace the natural state of `[[Prototype]]` as a delegation mechanism.\n\nWhen you design code with objects only, not only does it simplify the syntax you use, but it can actually lead to simpler code architecture design.\n\n**OLOO** (objects-linked-to-other-objects) is a code style which creates and relates objects directly without the abstraction of classes. OLOO quite naturally implements `[[Prototype]]`-based behavior delegation.\n"
  },
  {
    "path": "this & object prototypes/foreword.md",
    "content": "# You Don't Know JS: *this* & Object Prototypes\n# Foreword\n\nWhile reading this book in preparation for writing this foreword, I was forced to reflect on how I learned JavaScript and how much it has changed over the last 15 years that I have been programming and developing with it.\n\nWhen I started using JavaScript 15 years ago, the practice of using non-HTML technologies such as CSS and JS in your web pages was called DHTML or Dynamic HTML. Back then, the usefulness of JavaScript varied greatly and seemed to be tilted toward adding animated snowflakes to your web pages or dynamic clocks that told the time in the status bar. Suffice it to say, I didn’t really pay much attention to JavaScript in the early part of my career because of the novelty of the implementations that I often found on the Internet.\n\nIt wasn’t until 2005 that I first rediscovered JavaScript as a real programming language that I needed to pay closer attention to. After digging into the first beta release of Google Maps, I was hooked on the potential it had. At the time, Google Maps was a first-of-its-kind application -- it allowed you to move a map around with your mouse, zoom in and out, and make server requests without reloading the page -- all with JavaScript. It seemed like magic!\n\nWhen anything seems like magic, it is usually a good indication you are at the dawn of a new way of doing things. 
And boy, was I not wrong -- fast-forwarding to today, I would say that JavaScript is one of the primary languages I use for both client- and server-side programming, and I wouldn’t have it any other way.\n\nOne of my regrets as I look over the past 15 years is that I didn’t give JavaScript more of a chance before 2005, or more accurately, that I lacked the foresight to see JavaScript as a true programming language that is just as useful as C++, C#, Java, and many others.\n\nIf I had this *You Don’t Know JS* series of books at the start of my career, my career history would look much different than it does today. And that is one of the things I love about this series: it explains JS at a level that builds your understanding as you go through the series, but in a fun and informative way.\n\n*this & Object Prototypes* is a wonderful continuation to the series. It does a great and natural job of building on the prior book, Scope & Closures, and extending that knowledge to a very important part of the JS language, the `this` keyword and prototypes. These two simple things are pivotal for what you will learn in the future books, because they are foundational to doing real programming with JavaScript. The concept of how to create objects, relate them, and extend them to represent things in your application is necessary to create large and complex applications in JavaScript. And without them, creating complex applications (such as Google Maps) wouldn’t be possible in JavaScript.\n\nI would say that the vast majority of web developers probably have never built a JavaScript object and just treat the language as event-binding glue between buttons and AJAX requests. I was in that camp at a point in my career, but after I learned how to master prototypes and create objects in JavaScript, a world of possibilities opened up for me. 
If you fall into the category of just creating event-binding glue code, this book is a must-read; if you just need a refresher, this book will be a go-to resource for you. Either way, you will not be disappointed. Trust me!\n\nNick Berardi<br>\n[nickberardi.com](http://nickberardi.com), [@nberardi](http://twitter.com/nberardi)\n"
  },
  {
    "path": "this & object prototypes/toc.md",
    "content": "# You Don't Know JS: *this* & Object Prototypes\n\n## Table of Contents\n\n* Foreword\n* Preface\n* Chapter 1: `this` Or That?\n\t* Why `this`?\n\t* Confusions\n\t* What's `this`?\n* Chapter 2: `this` All Makes Sense Now!\n\t* Call-site\n\t* Nothing But Rules\n\t* Everything In Order\n\t* Binding Exceptions\n\t* Lexical `this`\n* Chapter 3: Objects\n\t* Syntax\n\t* Type\n\t* Contents\n\t* Iteration\n* Chapter 4: Mixing (Up) \"Class\" Objects\n\t* Class Theory\n\t* Class Mechanics\n\t* Class Inheritance\n\t* Mixins\n* Chapter 5: Prototypes\n\t* `[[Prototype]]`\n\t* \"Class\"\n\t* \"(Prototypal) Inheritance\"\n\t* Object Links\n* Chapter 6: Behavior Delegation\n\t* Towards Delegation-Oriented Design\n\t* Classes vs. Objects\n\t* Simpler Design\n\t* Nicer Syntax\n\t* Introspection\n* Appendix A: ES6 `class`\n* Appendix B: Acknowledgments\n\n"
  },
  {
    "path": "types & grammar/README.md",
    "content": "# You Don't Know JS: Types & Grammar\n\n<img src=\"cover.jpg\" width=\"300\">\n\n-----\n\n**[Purchase digital/print copy from O'Reilly](http://shop.oreilly.com/product/0636920033745.do)**\n\n-----\n\n[Table of Contents](toc.md)\n\n* [Foreword](foreword.md) (by [David Walsh](http://davidwalsh.name))\n* [Preface](../preface.md)\n* [Chapter 1: Types](ch1.md)\n* [Chapter 2: Values](ch2.md)\n* [Chapter 3: Natives](ch3.md)\n* [Chapter 4: Coercion](ch4.md)\n* [Chapter 5: Grammar](ch5.md)\n* [Appendix A: Mixed Environment JavaScript](apA.md)\n* [Appendix B: Thank You's!](apB.md)\n"
  },
  {
    "path": "types & grammar/apA.md",
    "content": "# You Don't Know JS: Types & Grammar\n# Appendix A: Mixed Environment JavaScript\n\nBeyond the core language mechanics we've fully explored in this book, there are several ways that your JS code can behave differently when it runs in the real world. If JS was executing purely inside an engine, it'd be entirely predictable based on nothing but the black-and-white of the spec. But JS pretty much always runs in the context of a hosting environment, which exposes your code to some degree of unpredictability.\n\nFor example, when your code runs alongside code from other sources, or when your code runs in different types of JS engines (not just browsers), there are some things that may behave differently.\n\nWe'll briefly explore some of these concerns.\n\n## Annex B (ECMAScript)\n\nIt's a little known fact that the official name of the language is ECMAScript (referring to the ECMA standards body that manages it). What then is \"JavaScript\"? JavaScript is the common tradename of the language, of course, but more appropriately, JavaScript is basically the browser implementation of the spec.\n\nThe official ECMAScript specification includes \"Annex B,\" which discusses specific deviations from the official spec for the purposes of JS compatibility in browsers.\n\nThe proper way to consider these deviations is that they are only reliably present/valid if your code is running in a browser. If your code always runs in browsers, you won't see any observable difference. If not (like if it can run in node.js, Rhino, etc.), or you're not sure, tread carefully.\n\nThe main compatibility differences:\n\n* Octal number literals are allowed, such as `0123` (decimal `83`) in non-`strict mode`.\n* `window.escape(..)` and `window.unescape(..)` allow you to escape or unescape strings with `%`-delimited hexadecimal escape sequences. 
For example: `window.escape( \"?foo=97%&bar=3%\" )` produces `\"%3Ffoo%3D97%25%26bar%3D3%25\"`.\n* `String.prototype.substr` is quite similar to `String.prototype.substring`, except that instead of the second parameter being the ending index (noninclusive), the second parameter is the `length` (number of characters to include).\n\n### Web ECMAScript\n\nThe Web ECMAScript specification (http://javascript.spec.whatwg.org/) covers the differences between the official ECMAScript specification and the current JavaScript implementations in browsers.\n\nIn other words, these items are \"required\" of browsers (to be compatible with each other) but are not (as of the time of writing) listed in the \"Annex B\" section of the official spec:\n\n* `<!--` and `-->` are valid single-line comment delimiters.\n* `String.prototype` additions for returning HTML-formatted strings: `anchor(..)`, `big(..)`, `blink(..)`, `bold(..)`, `fixed(..)`, `fontcolor(..)`, `fontsize(..)`, `italics(..)`, `link(..)`, `small(..)`, `strike(..)`, and `sub(..)`. **Note:** These are very rarely used in practice, and are generally discouraged in favor of other built-in DOM APIs or user-defined utilities.\n* `RegExp` extensions: `RegExp.$1` .. `RegExp.$9` (match-groups) and `RegExp.lastMatch`/`RegExp[\"$&\"]` (most recent match).\n* `Function.prototype` additions: `Function.prototype.arguments` (aliases internal `arguments` object) and `Function.caller` (aliases internal `arguments.caller`). **Note:** `arguments` and thus `arguments.caller` are deprecated, so you should avoid using them if possible. That goes doubly so for these aliases -- don't use them!\n\n**Note:** Some other minor and rarely used deviations are not included in our list here. See the external \"Annex B\" and \"Web ECMAScript\" documents for more detailed information as needed.\n\nGenerally speaking, all these differences are rarely used, so the deviations from the specification are not significant concerns. 
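For instance, the `substr(..)`/`substring(..)` deviation noted above is easy to see side by side:

```js
var s = "JavaScript";

// second argument to substring(..) is an ending index (noninclusive)
s.substring( 4, 10 );	// "Script"

// second argument to substr(..) is a length
s.substr( 4, 6 );		// "Script"
```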
**Just be careful** if you rely on any of them.\n\n## Host Objects\n\nThe well-covered rules for how variables behave in JS have exceptions to them when it comes to variables that are auto-defined, or otherwise created and provided to JS by the environment that hosts your code (browser, etc.) -- so-called \"host objects\" (which include both built-in `object`s and `function`s).\n\nFor example:\n\n```js\nvar a = document.createElement( \"div\" );\n\ntypeof a;\t\t\t\t\t\t\t\t// \"object\" -- as expected\nObject.prototype.toString.call( a );\t// \"[object HTMLDivElement]\"\n\na.tagName;\t\t\t\t\t\t\t\t// \"DIV\"\n```\n\n`a` is not just an `object`, but a special host object because it's a DOM element. It has a different internal `[[Class]]` value (`\"HTMLDivElement\"`) and comes with predefined (and often unchangeable) properties.\n\nAnother such quirk has already been covered, in the \"Falsy Objects\" section in Chapter 4: some objects can exist but when coerced to `boolean` they (confoundingly) will coerce to `false` instead of the expected `true`.\n\nOther behavior variations with host objects to be aware of can include:\n\n* not having access to normal `object` built-ins like `toString()`\n* not being overwritable\n* having certain predefined read-only properties\n* having methods that cannot be `this`-overridden to other objects\n* and more...\n\nHost objects are critical to making our JS code work with its surrounding environment. But it's important to note when you're interacting with a host object and to be careful about assuming its behaviors, as they will quite often not conform to regular JS `object`s.\n\nOne notable example of a host object that you probably interact with regularly is the `console` object and its various functions (`log(..)`, `error(..)`, etc.). 

The `console` object is provided by the *hosting environment* specifically so your code can interact with it for various development-related output tasks.\n\nIn browsers, `console` hooks up to the developer tools' console display, whereas in node.js and other server-side JS environments, `console` is generally connected to the standard-output (`stdout`) and standard-error (`stderr`) streams of the JavaScript environment system process.\n\n## Global DOM Variables\n\nYou're probably aware that declaring a variable in the global scope (with or without `var`) creates not only a global variable, but also its mirror: a property of the same name on the `global` object (`window` in the browser).\n\nBut what may be less common knowledge is that (because of legacy browser behavior) creating DOM elements with `id` attributes creates global variables of those same names. For example:\n\n```html\n<div id=\"foo\"></div>\n```\n\nAnd:\n\n```js\nif (typeof foo == \"undefined\") {\n\tfoo = 42;\t\t// will never run\n}\n\nconsole.log( foo );\t// HTML element\n```\n\nYou're perhaps used to managing global variable tests (using `typeof` or `.. in window` checks) under the assumption that only JS code creates such variables, but as you can see, the contents of your hosting HTML page can also create them, which can easily throw off your existence check logic if you're not careful.\n\nThis is yet one more reason why you should, if at all possible, avoid using global variables, and if you have to, use variables with unique names that won't likely collide. 
But you also need to make sure not to collide with the HTML content as well as any other code.\n\n## Native Prototypes\n\nOne of the most widely known and classic pieces of JavaScript *best practice* wisdom is: **never extend native prototypes**.\n\nWhatever method or property name you come up with to add to `Array.prototype` that doesn't (yet) exist, if it's a useful addition and well-designed, and properly named, there's a strong chance it *could* eventually end up being added to the spec -- in which case your extension is now in conflict.\n\nHere's a real example that actually happened to me that illustrates this point well.\n\nI was building an embeddable widget for other websites, and my widget relied on jQuery (though pretty much any framework would have suffered this gotcha). It worked on almost every site, but we ran across one where it was totally broken.\n\nAfter almost a week of analysis/debugging, I found that the site in question had, buried deep in one of its legacy files, code that looked like this:\n\n```js\n// Netscape 4 doesn't have Array.push\nArray.prototype.push = function(item) {\n\tthis[this.length] = item;\n};\n```\n\nAside from the crazy comment (who cares about Netscape 4 anymore!?), this looks reasonable, right?\n\nThe problem is, `Array.prototype.push` was added to the spec sometime subsequent to this Netscape 4 era coding, but what was added is not compatible with this code. The standard `push(..)` allows multiple items to be pushed at once. This hacked one ignores the subsequent items.\n\nBasically all JS frameworks have code that relies on `push(..)` with multiple elements. In my case, it was code around the CSS selector engine that was completely busted. But there could conceivably be dozens of other places susceptible.\n\nThe developer who originally wrote that `push(..)` hack had the right instinct to call it `push`, but didn't foresee pushing multiple elements. 
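To see the breakage concretely, here's a sketch of how that single-item hack silently swallows extra arguments (don't actually ship code like this!):

```js
// sketch only: demonstrate the legacy single-item hack, then undo it
var origPush = Array.prototype.push;

Array.prototype.push = function(item) {
	this[this.length] = item;
};

var a = [];
a.push( 1, 2, 3 );
console.log( a );	// [ 1 ] -- the 2 and 3 were silently dropped

Array.prototype.push = origPush;	// restore the native push(..)
```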
They were certainly acting in good faith, but they created a landmine that didn't go off until almost 10 years later when I unwittingly came along.\n\nThere are multiple lessons to take away on all sides.\n\nFirst, don't extend the natives unless you're absolutely sure your code is the only code that will ever run in that environment. If you can't say that 100%, then extending the natives is dangerous. You must weigh the risks.\n\nNext, don't unconditionally define extensions (because you can overwrite natives accidentally). In this particular example, had the code said this:\n\n```js\nif (!Array.prototype.push) {\n\t// Netscape 4 doesn't have Array.push\n\tArray.prototype.push = function(item) {\n\t\tthis[this.length] = item;\n\t};\n}\n```\n\nThe `if` statement guard would have only defined this hacked `push()` for JS environments where it didn't exist. In my case, that probably would have been OK. But even this approach is not without risk:\n\n1. If the site's code (for some crazy reason!) was relying on a `push(..)` that ignored multiple items, that code would have been broken years ago when the standard `push(..)` was rolled out.\n2. If any other library had come in and hacked in a `push(..)` ahead of this `if` guard, and it did so in an incompatible way, that would have broken the site at that time.\n\nWhat that highlights is an interesting question that, frankly, doesn't get enough attention from JS developers: **Should you EVER rely on native built-in behavior** if your code is running in any environment where it's not the only code present?\n\nThe strict answer is **no**, but that's awfully impractical. Your code usually can't redefine its own private untouchable versions of all built-in behavior relied on. Even if you *could*, that's pretty wasteful.\n\nSo, should you feature-test for the built-in behavior as well as compliance-testing that it does what you expect? 
And what if that test fails -- should your code just refuse to run?\n\n```js\n// don't trust Array.prototype.push\n(function(){\n\tif (Array.prototype.push) {\n\t\tvar a = [];\n\t\ta.push(1,2);\n\t\tif (a[0] === 1 && a[1] === 2) {\n\t\t\t// tests passed, safe to use!\n\t\t\treturn;\n\t\t}\n\t}\n\n\tthrow Error(\n\t\t\"Array#push() is missing/broken!\"\n\t);\n})();\n```\n\nIn theory, that sounds plausible, but it's also pretty impractical to design tests for every single built-in method.\n\nSo, what should we do? Should we *trust but verify* (feature- and compliance-test) **everything**? Should we just assume existence is compliance and let breakage (caused by others) bubble up as it will?\n\nThere's no great answer. The only fact that can be observed is that extending native prototypes is the only way these things bite you.\n\nIf you don't do it, and no one else does in the code in your application, you're safe. Otherwise, you should build in at least a little bit of skepticism, pessimism, and expectation of possible breakage.\n\nHaving a full set of unit/regression tests of your code that runs in all known environments is one way to surface some of these issues earlier, but it doesn't do anything to actually protect you from these conflicts.\n\n### Shims/Polyfills\n\nIt's usually said that the only safe place to extend a native is in an older (non-spec-compliant) environment, since that's unlikely to ever change -- new browsers with new spec features replace older browsers rather than amending them.\n\nIf you could see into the future, and know for sure what a future standard was going to be, like for `Array.prototype.foobar`, it'd be totally safe to make your own compatible version of it to use now, right?\n\n```js\nif (!Array.prototype.foobar) {\n\t// silly, silly\n\tArray.prototype.foobar = function() {\n\t\tthis.push( \"foo\", \"bar\" );\n\t};\n}\n```\n\nIf there's already a spec for `Array.prototype.foobar`, and the specified behavior is equal to this logic, 
you're pretty safe in defining such a snippet, and in that case it's generally called a \"polyfill\" (or \"shim\").\n\nSuch code is **very** useful to include in your code base to \"patch\" older browser environments that aren't updated to the newest specs. Using polyfills is a great way to create predictable code across all your supported environments.\n\n**Tip:** ES5-Shim (https://github.com/es-shims/es5-shim) is a comprehensive collection of shims/polyfills for bringing a project up to ES5 baseline, and similarly, ES6-Shim (https://github.com/es-shims/es6-shim) provides shims for new APIs added as of ES6. While APIs can be shimmed/polyfilled, new syntax generally cannot. To bridge the syntactic divide, you'll want to also use an ES6-to-ES5 transpiler like Traceur (https://github.com/google/traceur-compiler/wiki/Getting-Started).\n\nIf there's likely a coming standard, and most discussions agree what it's going to be called and how it will operate, creating the ahead-of-time polyfill for future-facing standards compliance is called \"prollyfill\" (probably-fill).\n\nThe real catch is if some new standard behavior can't be (fully) polyfilled/prollyfilled.\n\nThere's debate in the community if a partial-polyfill for the common cases is acceptable (documenting the parts that cannot be polyfilled), or if a polyfill should be avoided if it purely can't be 100% compliant to the spec.\n\nMany developers at least accept some common partial polyfills (like for instance `Object.create(..)`), because the parts that aren't covered are not parts they intend to use anyway.\n\nSome developers believe that the `if` guard around a polyfill/shim should include some form of conformance test, replacing the existing method either if it's absent or fails the tests. This extra layer of compliance testing is sometimes used to distinguish \"shim\" (compliance tested) from \"polyfill\" (existence checked).\n\nThe only absolute take-away is that there is no absolute *right* answer here. 
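As a sketch of that distinction, a conformance-tested guard in the style es5-shim uses (here for `String.prototype.trim(..)`, chosen as an illustrative example) might look like:

```js
// replace `trim(..)` if it's missing *or* fails a compliance test
// (some older engines didn't strip non-breaking spaces)
if (!String.prototype.trim ||
	"\u00A0 hi \u00A0".trim() !== "hi"
) {
	String.prototype.trim = function() {
		return this.replace( /^[\s\uFEFF\xA0]+|[\s\uFEFF\xA0]+$/g, "" );
	};
}

"  hello  ".trim();		// "hello"
```

In a compliant modern engine, the test passes and the native method is left untouched; only a broken environment gets the replacement.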
Extending natives, even when done \"safely\" in older environments, is not 100% safe. The same goes for relying upon (possibly extended) natives in the presence of others' code.\n\nEither should always be done with caution, defensive code, and lots of obvious documentation about the risks.\n\n## `<script>`s\n\nMost browser-viewed websites/applications have more than one file that contains their code, and it's common to have a few or several `<script src=..></script>` elements in the page that load these files separately, and even a few inline-code `<script> .. </script>` elements as well.\n\nBut do these separate files/code snippets constitute separate programs or are they collectively one JS program?\n\nThe (perhaps surprising) reality is they act more like independent JS programs in most, but not all, respects.\n\nThe one thing they *share* is the single `global` object (`window` in the browser), which means multiple files can append their code to that shared namespace and they can all interact.\n\nSo, if one `script` element defines a global function `foo()`, when a second `script` later runs, it can access and call `foo()` just as if it had defined the function itself.\n\nBut global variable scope *hoisting* (see the *Scope & Closures* title of this series) does not occur across these boundaries, so the following code would not work (because `foo()` hasn't been declared yet), regardless of whether they are (as shown) inline `<script> .. </script>` elements or externally loaded `<script src=..></script>` files:\n\n```html\n<script>foo();</script>\n\n<script>\n  function foo() { .. }\n</script>\n```\n\nBut either of these *would* work instead:\n\n```html\n<script>\n  foo();\n  function foo() { .. }\n</script>\n```\n\nOr:\n\n```html\n<script>\n  function foo() { .. 
}\n</script>\n\n<script>foo();</script>\n```\n\nAlso, if an error occurs in a `script` element (inline or external), as a separate standalone JS program it will fail and stop, but any subsequent `script`s will run (still with the shared `global`) unimpeded.\n\nYou can create `script` elements dynamically from your code, and inject them into the DOM of the page, and the code in them will behave basically as if loaded normally in a separate file:\n\n```js\nvar greeting = \"Hello World\";\n\nvar el = document.createElement( \"script\" );\n\nel.text = \"function foo(){ alert( greeting );\\\n } setTimeout( foo, 1000 );\";\n\ndocument.body.appendChild( el );\n```\n\n**Note:** Of course, if you tried the above snippet but set `el.src` to some file URL instead of setting `el.text` to the code contents, you'd be dynamically creating an externally loaded `<script src=..></script>` element.\n\nOne difference between code in an inline code block and that same code in an external file is that in the inline code block, the sequence of characters `</script>` cannot appear together, as (regardless of where it appears) it would be interpreted as the end of the code block. So, beware of code like:\n\n```html\n<script>\n  var code = \"<script>alert( 'Hello World' )</script>\";\n</script>\n```\n\nIt looks harmless, but the `</script>` appearing inside the `string` literal will terminate the script block abnormally, causing an error. The most common workaround is:\n\n```js\n\"</sc\" + \"ript>\";\n```\n\nAlso, beware that code inside an external file will be interpreted in the character set (UTF-8, ISO-8859-8, etc.) 
the file is served with (or the default), but that same code in an inline `script` element in your HTML page will be interpreted by the character set of the page (or its default).\n\n**Warning:** The `charset` attribute will not work on inline script elements.\n\nAnother deprecated practice with inline `script` elements is including HTML-style or X(HT)ML-style comments around inline code, like:\n\n```html\n<script>\n<!--\nalert( \"Hello\" );\n//-->\n</script>\n\n<script>\n<!--//--><![CDATA[//><!--\nalert( \"World\" );\n//--><!]]>\n</script>\n```\n\nBoth of these are totally unnecessary now, so if you're still doing that, stop it!\n\n**Note:** Both `<!--` and `-->` (HTML-style comments) are actually specified as valid single-line comment delimiters (`var x = 2; <!-- valid comment` and `--> another valid line comment`) in JavaScript (see the \"Web ECMAScript\" section earlier), purely because of this old technique. But never use them.\n\n## Reserved Words\n\nThe ES5 spec defines a set of \"reserved words\" in Section 7.6.1 that cannot be used as standalone variable names. Technically, there are four categories: \"keywords\", \"future reserved words\", the `null` literal, and the `true` / `false` boolean literals.\n\nKeywords are the obvious ones like `function` and `switch`. Future reserved words include things like `enum`, though many of the rest of them (`class`, `extends`, etc.) 
are all now actually used by ES6; there are other strict-mode only reserved words like `interface`.\n\nStackOverflow user \"art4theSould\" creatively worked all these reserved words into a fun little poem (http://stackoverflow.com/questions/26255/reserved-keywords-in-javascript/12114140#12114140):\n\n> Let this long package float,\n> Goto private class if short.\n> While protected with debugger case,\n> Continue volatile interface.\n> Instanceof super synchronized throw,\n> Extends final export throws.\n>\n> Try import double enum?\n> - False, boolean, abstract function,\n> Implements typeof transient break!\n> Void static, default do,\n> Switch int native new.\n> Else, delete null public var\n> In return for const, true, char\n> …Finally catch byte.\n\n**Note:** This poem includes words that were reserved in ES3 (`byte`, `long`, etc.) that are no longer reserved as of ES5.\n\nPrior to ES5, the reserved words also could not be property names or keys in object literals, but that restriction no longer exists.\n\nSo, this is not allowed:\n\n```js\nvar import = \"42\";\n```\n\nBut this is allowed:\n\n```js\nvar obj = { import: \"42\" };\nconsole.log( obj.import );\n```\n\nYou should be aware though that some older browser versions (mainly older IE) weren't completely consistent on applying these rules, so there are places where using reserved words in object property name locations can still cause issues. 
Carefully test all supported browser environments.\n\n## Implementation Limits\n\nThe JavaScript spec does not place arbitrary limits on things such as the number of arguments to a function or the length of a string literal, but these limits exist nonetheless, because of implementation details in different engines.\n\nFor example:\n\n```js\nfunction addAll() {\n\tvar sum = 0;\n\tfor (var i=0; i < arguments.length; i++) {\n\t\tsum += arguments[i];\n\t}\n\treturn sum;\n}\n\nvar nums = [];\nfor (var i=1; i < 100000; i++) {\n\tnums.push(i);\n}\n\naddAll( 2, 4, 6 );\t\t\t\t// 12\naddAll.apply( null, nums );\t\t// should be: 4999950000\n```\n\nIn some JS engines, you'll get the correct `4999950000` answer, but in others (like Safari 6.x), you'll get the error: \"RangeError: Maximum call stack size exceeded.\"\n\nExamples of other limits known to exist:\n\n* maximum number of characters allowed in a string literal (not just a string value)\n* size (bytes) of data that can be sent in arguments to a function call (aka stack size)\n* number of parameters in a function declaration\n* maximum depth of non-optimized call stack (i.e., with recursion): how long a chain of function calls from one to the other can be\n* number of seconds a JS program can run continuously blocking the browser\n* maximum length allowed for a variable name\n* ...\n\nIt's not very common at all to run into these limits, but you should be aware that limits can and do exist, and importantly that they vary between engines.\n\n## Review\n\nWe know and can rely upon the fact that the JS language itself has one standard and is predictably implemented by all the modern browsers/engines. This is a very good thing!\n\nBut JavaScript rarely runs in isolation. 
It runs in an environment mixed in with code from third-party libraries, and sometimes it even runs in engines/environments that differ from those found in browsers.\n\nPaying close attention to these issues improves the reliability and robustness of your code.\n"
  },
  {
    "path": "types & grammar/apB.md",
    "content": "# You Don't Know JS: Types & Grammar\n# Appendix B: Acknowledgments\n\nI have many people to thank for making this book title and the overall series happen.\n\nFirst, I must thank my wife Christen Simpson, and my two kids Ethan and Emily, for putting up with Dad always pecking away at the computer. Even when not writing books, my obsession with JavaScript glues my eyes to the screen far more than it should. That time I borrow from my family is the reason these books can so deeply and completely explain JavaScript to you, the reader. I owe my family everything.\n\nI'd like to thank my editors at O'Reilly, namely Simon St.Laurent and Brian MacDonald, as well as the rest of the editorial and marketing staff. They are fantastic to work with, and have been especially accommodating during this experiment into \"open source\" book writing, editing, and production.\n\nThank you to the many folks who have participated in making this book series better by providing editorial suggestions and corrections, including Shelley Powers, Tim Ferro, Evan Borden, Forrest L. Norvell, Jennifer Davis, Jesse Harlin, and many others. A big thank you to David Walsh for writing the Foreword for this title.\n\nThank you to the countless folks in the community, including members of the TC39 committee, who have shared so much knowledge with the rest of us, and especially tolerated my incessant questions and explorations with patience and detail. 
John-David Dalton, Juriy \"kangax\" Zaytsev, Mathias Bynens, Axel Rauschmayer, Nicholas Zakas, Angus Croll, Reginald Braithwaite, Dave Herman, Brendan Eich, Allen Wirfs-Brock, Bradley Meck, Domenic Denicola, David Walsh, Tim Disney, Peter van der Zee, Andrea Giammarchi, Kit Cambridge, Eric Elliott, and so many others, I can't even scratch the surface.\n\nThe *You Don't Know JS* book series was born on Kickstarter, so I also wish to thank all my (nearly) 500 generous backers, without whom this book series could not have happened:\n\n> Jan Szpila, nokiko, Murali Krishnamoorthy, Ryan Joy, Craig Patchett, pdqtrader, Dale Fukami, ray hatfield, R0drigo Perez [Mx], Dan Petitt, Jack Franklin, Andrew Berry, Brian Grinstead, Rob Sutherland, Sergi Meseguer, Phillip Gourley, Mark Watson, Jeff Carouth, Alfredo Sumaran, Martin Sachse, Marcio Barrios, Dan, AimelyneM, Matt Sullivan, Delnatte Pierre-Antoine, Jake Smith, Eugen Tudorancea, Iris, David Trinh, simonstl, Ray Daly, Uros Gruber, Justin Myers, Shai Zonis, Mom & Dad, Devin Clark, Dennis Palmer, Brian Panahi Johnson, Josh Marshall, Marshall, Dennis Kerr, Matt Steele, Erik Slagter, Sacah, Justin Rainbow, Christian Nilsson, Delapouite, D.Pereira, Nicolas Hoizey, George V. 
Reilly, Dan Reeves, Bruno Laturner, Chad Jennings, Shane King, Jeremiah Lee Cohick, od3n, Stan Yamane, Marko Vucinic, Jim B, Stephen Collins, Ægir Þorsteinsson, Eric Pederson, Owain, Nathan Smith, Jeanetteurphy, Alexandre ELISÉ, Chris Peterson, Rik Watson, Luke Matthews, Justin Lowery, Morten Nielsen, Vernon Kesner, Chetan Shenoy, Paul Tregoing, Marc Grabanski, Dion Almaer, Andrew Sullivan, Keith Elsass, Tom Burke, Brian Ashenfelter, David Stuart, Karl Swedberg, Graeme, Brandon Hays, John Christopher, Gior, manoj reddy, Chad Smith, Jared Harbour, Minoru TODA, Chris Wigley, Daniel Mee, Mike, Handyface, Alex Jahraus, Carl Furrow, Rob Foulkrod, Max Shishkin, Leigh Penny Jr., Robert Ferguson, Mike van Hoenselaar, Hasse Schougaard, rajan venkataguru, Jeff Adams, Trae Robbins, Rolf Langenhuijzen, Jorge Antunes, Alex Koloskov, Hugh Greenish, Tim Jones, Jose Ochoa, Michael Brennan-White, Naga Harish Muvva, Barkóczi Dávid, Kitt Hodsden, Paul McGraw, Sascha Goldhofer, Andrew Metcalf, Markus Krogh, Michael Mathews, Matt Jared, Juanfran, Georgie Kirschner, Kenny Lee, Ted Zhang, Amit Pahwa, Inbal Sinai, Dan Raine, Schabse Laks, Michael Tervoort, Alexandre Abreu, Alan Joseph Williams, NicolasD, Cindy Wong, Reg Braithwaite, LocalPCGuy, Jon Friskics, Chris Merriman, John Pena, Jacob Katz, Sue Lockwood, Magnus Johansson, Jeremy Crapsey, Grzegorz Pawłowski, nico nuzzaci, Christine Wilks, Hans Bergren, charles montgomery, Ariel בר-לבב Fogel, Ivan Kolev, Daniel Campos, Hugh Wood, Christian Bradford, Frédéric Harper, Ionuţ Dan Popa, Jeff Trimble, Rupert Wood, Trey Carrico, Pancho Lopez, Joël kuijten, Tom A Marra, Jeff Jewiss, Jacob Rios, Paolo Di Stefano, Soledad Penades, Chris Gerber, Andrey Dolganov, Wil Moore III, Thomas Martineau, Kareem, Ben Thouret, Udi Nir, Morgan Laupies, jory carson-burson, Nathan L Smith, Eric Damon Walters, Derry Lozano-Hoyland, Geoffrey Wiseman, mkeehner, KatieK, Scott MacFarlane, Brian LaShomb, Adrien Mas, christopher ross, Ian Littman, Dan Atkinson, 
Elliot Jobe, Nick Dozier, Peter Wooley, John Hoover, dan, Martin A. Jackson, Héctor Fernando Hurtado, andy ennamorato, Paul Seltmann, Melissa Gore, Dave Pollard, Jack Smith, Philip Da Silva, Guy Israeli, @megalithic, Damian Crawford, Felix Gliesche, April Carter Grant, Heidi, jim tierney, Andrea Giammarchi, Nico Vignola, Don Jones, Chris Hartjes, Alex Howes, john gibbon, David J. Groom, BBox, Yu 'Dilys' Sun, Nate Steiner, Brandon Satrom, Brian Wyant, Wesley Hales, Ian Pouncey, Timothy Kevin Oxley, George Terezakis, sanjay raj, Jordan Harband, Marko McLion, Wolfgang Kaufmann, Pascal Peuckert, Dave Nugent, Markus Liebelt, Welling Guzman, Nick Cooley, Daniel Mesquita, Robert Syvarth, Chris Coyier, Rémy Bach, Adam Dougal, Alistair Duggin, David Loidolt, Ed Richer, Brian Chenault, GoldFire Studios, Carles Andrés, Carlos Cabo, Yuya Saito, roberto ricardo, Barnett Klane, Mike Moore, Kevin Marx, Justin Love, Joe Taylor, Paul Dijou, Michael Kohler, Rob Cassie, Mike Tierney, Cody Leroy Lindley, tofuji, Shimon Schwartz, Raymond, Luc De Brouwer, David Hayes, Rhys Brett-Bowen, Dmitry, Aziz Khoury, Dean, Scott Tolinski - Level Up, Clement Boirie, Djordje Lukic, Anton Kotenko, Rafael Corral, Philip Hurwitz, Jonathan Pidgeon, Jason Campbell, Joseph C., SwiftOne, Jan Hohner, Derick Bailey, getify, Daniel Cousineau, Chris Charlton, Eric Turner, David Turner, Joël Galeran, Dharma Vagabond, adam, Dirk van Bergen, dave ♥♫★ furf, Vedran Zakanj, Ryan McAllen, Natalie Patrice Tucker, Eric J. Bivona, Adam Spooner, Aaron Cavano, Kelly Packer, Eric J, Martin Drenovac, Emilis, Michael Pelikan, Scott F. 
Walter, Josh Freeman, Brandon Hudgeons, vijay chennupati, Bill Glennon, Robin R., Troy Forster, otaku_coder, Brad, Scott, Frederick Ostrander, Adam Brill, Seb Flippence, Michael Anderson, Jacob, Adam Randlett, Standard, Joshua Clanton, Sebastian Kouba, Chris Deck, SwordFire, Hannes Papenberg, Richard Woeber, hnzz, Rob Crowther, Jedidiah Broadbent, Sergey Chernyshev, Jay-Ar Jamon, Ben Combee, luciano bonachela, Mark Tomlinson, Kit Cambridge, Michael Melgares, Jacob Adams, Adrian Bruinhout, Bev Wieber, Scott Puleo, Thomas Herzog, April Leone, Daniel Mizieliński, Kees van Ginkel, Jon Abrams, Erwin Heiser, Avi Laviad, David newell, Jean-Francois Turcot, Niko Roberts, Erik Dana, Charles Neill, Aaron Holmes, Grzegorz Ziółkowski, Nathan Youngman, Timothy, Jacob Mather, Michael Allan, Mohit Seth, Ryan Ewing, Benjamin Van Treese, Marcelo Santos, Denis Wolf, Phil Keys, Chris Yung, Timo Tijhof, Martin Lekvall, Agendine, Greg Whitworth, Helen Humphrey, Dougal Campbell, Johannes Harth, Bruno Girin, Brian Hough, Darren Newton, Craig McPheat, Olivier Tille, Dennis Roethig, Mathias Bynens, Brendan Stromberger, sundeep, John Meyer, Ron Male, John F Croston III, gigante, Carl Bergenhem, B.J. 
May, Rebekah Tyler, Ted Foxberry, Jordan Reese, Terry Suitor, afeliz, Tom Kiefer, Darragh Duffy, Kevin Vanderbeken, Andy Pearson, Simon Mac Donald, Abid Din, Chris Joel, Tomas Theunissen, David Dick, Paul Grock, Brandon Wood, John Weis, dgrebb, Nick Jenkins, Chuck Lane, Johnny Megahan, marzsman, Tatu Tamminen, Geoffrey Knauth, Alexander Tarmolov, Jeremy Tymes, Chad Auld, Sean Parmelee, Rob Staenke, Dan Bender, Yannick derwa, Joshua Jones, Geert Plaisier, Tom LeZotte, Christen Simpson, Stefan Bruvik, Justin Falcone, Carlos Santana, Michael Weiss, Pablo Villoslada, Peter deHaan, Dimitris Iliopoulos, seyDoggy, Adam Jordens, Noah Kantrowitz, Amol M, Matthew Winnard, Dirk Ginader, Phinam Bui, David Rapson, Andrew Baxter, Florian Bougel, Michael George, Alban Escalier, Daniel Sellers, Sasha Rudan, John Green, Robert Kowalski, David I. Teixeira (@ditma, Charles Carpenter, Justin Yost, Sam S, Denis Ciccale, Kevin Sheurs, Yannick Croissant, Pau Fracés, Stephen McGowan, Shawn Searcy, Chris Ruppel, Kevin Lamping, Jessica Campbell, Christopher Schmitt, Sablons, Jonathan Reisdorf, Bunni Gek, Teddy Huff, Michael Mullany, Michael Fürstenberg, Carl Henderson, Rick Yoesting, Scott Nichols, Hernán Ciudad, Andrew Maier, Mike Stapp, Jesse Shawl, Sérgio Lopes, jsulak, Shawn Price, Joel Clermont, Chris Ridmann, Sean Timm, Jason Finch, Aiden Montgomery, Elijah Manor, Derek Gathright, Jesse Harlin, Dillon Curry, Courtney Myers, Diego Cadenas, Arne de Bree, João Paulo Dubas, James Taylor, Philipp Kraeutli, Mihai Păun, Sam Gharegozlou, joshjs, Matt Murchison, Eric Windham, Timo Behrmann, Andrew Hall, joshua price, Théophile Villard\n\nThis book series is being produced in an open source fashion, including editing and production. We owe GitHub a debt of gratitude for making that sort of thing possible for the community!\n\nThank you again to all the countless folks I didn't name but who I nonetheless owe thanks. 
May this book series be \"owned\" by all of us and serve to increase awareness and understanding of the JavaScript language, to the benefit of all current and future community contributors.\n"
  },
  {
    "path": "types & grammar/ch1.md",
    "content": "# You Don't Know JS: Types & Grammar\n# Chapter 1: Types\n\nMost developers would say that a dynamic language (like JS) does not have *types*. Let's see what the ES5.1 specification (http://www.ecma-international.org/ecma-262/5.1/) has to say on the topic:\n\n> Algorithms within this specification manipulate values each of which has an associated type. The possible value types are exactly those defined in this clause. Types are further subclassified into ECMAScript language types and specification types.\n>\n> An ECMAScript language type corresponds to values that are directly manipulated by an ECMAScript programmer using the ECMAScript language. The ECMAScript language types are Undefined, Null, Boolean, String, Number, and Object.\n\nNow, if you're a fan of strongly typed (statically typed) languages, you may object to this usage of the word \"type.\" In those languages, \"type\" means a whole lot *more* than it does here in JS.\n\nSome people say JS shouldn't claim to have \"types,\" and that they should instead be called \"tags\" or perhaps \"subtypes\".\n\nBah! We're going to use this rough definition (the same one that seems to drive the wording of the spec): a *type* is an intrinsic, built-in set of characteristics that uniquely identifies the behavior of a particular value and distinguishes it from other values, both to the engine **and to the developer**.\n\nIn other words, if both the engine and the developer treat value `42` (the number) differently than they treat value `\"42\"` (the string), then those two values have different *types* -- `number` and `string`, respectively. When you use `42`, you are *intending* to do something numeric, like math. But when you use `\"42\"`, you are *intending* to do something string'ish, like outputting to the page, etc. **These two values have different types.**\n\nThat's by no means a perfect definition. But it's good enough for this discussion. 
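For instance, the engine's differing treatment of `42` and `"42"` is directly observable (a quick illustrative sketch; the variable names are just for demonstration):

```js
var num = 42;       // a `number` value
var str = "42";     // a `string` value

typeof num;         // "number"
typeof str;         // "string"

// the same `+` operator does different work depending on the type:
var numPlusOne = num + 1;   // 43 -- numeric addition
var strPlusOne = str + 1;   // "421" -- string concatenation
```
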
And it's consistent with how JS describes itself.\n\n# A Type By Any Other Name...\n\nBeyond academic definition disagreements, why does it matter if JavaScript has *types* or not?\n\nHaving a proper understanding of each *type* and its intrinsic behavior is absolutely essential to understanding how to properly and accurately convert values to different types (see Coercion, Chapter 4). Nearly every JS program ever written will need to handle value coercion in some shape or form, so it's important you do so responsibly and with confidence.\n\nIf you have the `number` value `42`, but you want to treat it like a `string`, such as pulling out the `\"2\"` as a character in position `1`, you obviously must first convert (coerce) the value from `number` to `string`.\n\nThat seems simple enough.\n\nBut there are many different ways that such coercion can happen. Some of these ways are explicit, easy to reason about, and reliable. But if you're not careful, coercion can happen in very strange and surprising ways.\n\nCoercion confusion is perhaps one of the most profound frustrations for JavaScript developers. It has often been criticized as being so *dangerous* as to be considered a flaw in the design of the language, to be shunned and avoided.\n\nArmed with a full understanding of JavaScript types, we're aiming to illustrate why coercion's *bad reputation* is largely overhyped and somewhat undeserved -- to flip your perspective, to seeing coercion's power and usefulness. 
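To give a brief taste of that distinction before Chapter 4 digs in, here's a sketch of explicit versus implicit coercion (variable names are illustrative only):

```js
var a = 42;

// explicit coercion -- the intent to convert is obvious:
var b = String( a );        // "42"
var c = Number( "3.14" );   // 3.14

// implicit coercion -- conversion happens as a side effect of an operation:
var d = a + "";             // "42" (`+` with a string concatenates)
var e = "3.14" * 1;         // 3.14 (`*` forces numeric conversion)
```
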
But first, we have to get a much better grip on values and types.\n\n## Built-in Types\n\nJavaScript defines seven built-in types:\n\n* `null`\n* `undefined`\n* `boolean`\n* `number`\n* `string`\n* `object`\n* `symbol` -- added in ES6!\n\n**Note:** All of these types except `object` are called \"primitives\".\n\nThe `typeof` operator inspects the type of the given value, and always returns one of seven string values -- surprisingly, there's not an exact 1-to-1 match with the seven built-in types we just listed.\n\n```js\ntypeof undefined     === \"undefined\"; // true\ntypeof true          === \"boolean\";   // true\ntypeof 42            === \"number\";    // true\ntypeof \"42\"          === \"string\";    // true\ntypeof { life: 42 }  === \"object\";    // true\n\n// added in ES6!\ntypeof Symbol()      === \"symbol\";    // true\n```\n\nThese six listed types have values of the corresponding type and return a string value of the same name, as shown. `Symbol` is a new data type as of ES6, and will be covered in Chapter 3.\n\nAs you may have noticed, I excluded `null` from the above listing. It's *special* -- special in the sense that it's buggy when combined with the `typeof` operator:\n\n```js\ntypeof null === \"object\"; // true\n```\n\nIt would have been nice (and correct!) if it returned `\"null\"`, but this original bug in JS has persisted for nearly two decades, and will likely never be fixed because so much existing web content relies on its buggy behavior that \"fixing\" the bug would *create* more \"bugs\" and break a lot of web software.\n\nIf you want to test for a `null` value using its type, you need a compound condition:\n\n```js\nvar a = null;\n\n(!a && typeof a === \"object\"); // true\n```\n\n`null` is the only primitive value that is \"falsy\" (aka false-like; see Chapter 4) but that also returns `\"object\"` from the `typeof` check.\n\nSo what's the seventh string value that `typeof` can return?\n\n```js\ntypeof function a(){ /* .. 
*/ } === \"function\"; // true\n```\n\nIt's easy to think that `function` would be a top-level built-in type in JS, especially given this behavior of the `typeof` operator. However, if you read the spec, you'll see it's actually a \"subtype\" of object. Specifically, a function is referred to as a \"callable object\" -- an object that has an internal `[[Call]]` property that allows it to be invoked.\n\nThe fact that functions are actually objects is quite useful. Most importantly, they can have properties. For example:\n\n```js\nfunction a(b,c) {\n\t/* .. */\n}\n```\n\nThe function object has a `length` property set to the number of formal parameters it is declared with.\n\n```js\na.length; // 2\n```\n\nSince you declared the function with two formal named parameters (`b` and `c`), the \"length of the function\" is `2`.\n\nWhat about arrays? They're native to JS, so are they a special type?\n\n```js\ntypeof [1,2,3] === \"object\"; // true\n```\n\nNope, just objects. It's most appropriate to think of them also as a \"subtype\" of object (see Chapter 3), in this case with the additional characteristics of being numerically indexed (as opposed to just being string-keyed like plain objects) and maintaining an automatically updated `.length` property.\n\n## Values as Types\n\nIn JavaScript, variables don't have types -- **values have types**. Variables can hold any value, at any time.\n\nAnother way to think about JS types is that JS doesn't have \"type enforcement,\" in that the engine doesn't insist that a *variable* always holds values of the *same initial type* that it starts out with. A variable can, in one assignment statement, hold a `string`, and in the next hold a `number`, and so on.\n\nThe *value* `42` has an intrinsic type of `number`, and its *type* cannot be changed. 
Another value, like `\"42\"` with the `string` type, can be created *from* the `number` value `42` through a process called **coercion** (see Chapter 4).\n\nIf you use `typeof` against a variable, it's not asking \"what's the type of the variable?\" as it may seem, since JS variables have no types. Instead, it's asking \"what's the type of the value *in* the variable?\"\n\n```js\nvar a = 42;\ntypeof a; // \"number\"\n\na = true;\ntypeof a; // \"boolean\"\n```\n\nThe `typeof` operator always returns a string. So:\n\n```js\ntypeof typeof 42; // \"string\"\n```\n\nThe first `typeof 42` returns `\"number\"`, and `typeof \"number\"` is `\"string\"`.\n\n### `undefined` vs \"undeclared\"\n\nVariables that have no value *currently*, actually have the `undefined` value. Calling `typeof` against such variables will return `\"undefined\"`:\n\n```js\nvar a;\n\ntypeof a; // \"undefined\"\n\nvar b = 42;\nvar c;\n\n// later\nb = c;\n\ntypeof b; // \"undefined\"\ntypeof c; // \"undefined\"\n```\n\nIt's tempting for most developers to think of the word \"undefined\" and think of it as a synonym for \"undeclared.\" However, in JS, these two concepts are quite different.\n\nAn \"undefined\" variable is one that has been declared in the accessible scope, but *at the moment* has no other value in it. By contrast, an \"undeclared\" variable is one that has not been formally declared in the accessible scope.\n\nConsider:\n\n```js\nvar a;\n\na; // undefined\nb; // ReferenceError: b is not defined\n```\n\nAn annoying confusion is the error message that browsers assign to this condition. As you can see, the message is \"b is not defined,\" which is of course very easy and reasonable to confuse with \"b is undefined.\" Yet again, \"undefined\" and \"is not defined\" are very different things. 
It'd be nice if the browsers said something like \"b is not found\" or \"b is not declared,\" to reduce the confusion!\n\nThere's also a special behavior associated with `typeof` as it relates to undeclared variables that even further reinforces the confusion. Consider:\n\n```js\nvar a;\n\ntypeof a; // \"undefined\"\n\ntypeof b; // \"undefined\"\n```\n\nThe `typeof` operator returns `\"undefined\"` even for \"undeclared\" (or \"not defined\") variables. Notice that there was no error thrown when we executed `typeof b`, even though `b` is an undeclared variable. This is a special safety guard in the behavior of `typeof`.\n\nSimilar to above, it would have been nice if `typeof` used with an undeclared variable returned \"undeclared\" instead of conflating the result value with the different \"undefined\" case.\n\n### `typeof` Undeclared\n\nNevertheless, this safety guard is a useful feature when dealing with JavaScript in the browser, where multiple script files can load variables into the shared global namespace.\n\n**Note:** Many developers believe there should never be any variables in the global namespace, and that everything should be contained in modules and private/separate namespaces. This is great in theory but nearly impossible in practicality; still it's a good goal to strive toward! Fortunately, ES6 added first-class support for modules, which will eventually make that much more practical.\n\nAs a simple example, imagine having a \"debug mode\" in your program that is controlled by a global variable (flag) called `DEBUG`. You'd want to check if that variable was declared before performing a debug task like logging a message to the console. 
A top-level global `var DEBUG = true` declaration would only be included in a \"debug.js\" file, which you only load into the browser when you're in development/testing, but not in production.\n\nHowever, you have to take care in how you check for the global `DEBUG` variable in the rest of your application code, so that you don't throw a `ReferenceError`. The safety guard on `typeof` is our friend in this case.\n\n```js\n// oops, this would throw an error!\nif (DEBUG) {\n\tconsole.log( \"Debugging is starting\" );\n}\n\n// this is a safe existence check\nif (typeof DEBUG !== \"undefined\") {\n\tconsole.log( \"Debugging is starting\" );\n}\n```\n\nThis sort of check is useful even if you're not dealing with user-defined variables (like `DEBUG`). If you are doing a feature check for a built-in API, you may also find it helpful to check without throwing an error:\n\n```js\nif (typeof atob === \"undefined\") {\n\tatob = function() { /*..*/ };\n}\n```\n\n**Note:** If you're defining a \"polyfill\" for a feature if it doesn't already exist, you probably want to avoid using `var` to make the `atob` declaration. If you declare `var atob` inside the `if` statement, this declaration is hoisted (see the *Scope & Closures* title of this series) to the top of the scope, even if the `if` condition doesn't pass (because the global `atob` already exists!). In some browsers and for some special types of global built-in variables (often called \"host objects\"), this duplicate declaration may throw an error. Omitting the `var` prevents this hoisted declaration.\n\nAnother way of doing these checks against global variables but without the safety guard feature of `typeof` is to observe that all global variables are also properties of the global object, which in the browser is basically the `window` object. 
So, the above checks could have been done (quite safely) as:\n\n```js\nif (window.DEBUG) {\n\t// ..\n}\n\nif (!window.atob) {\n\t// ..\n}\n```\n\nUnlike referencing undeclared variables, there is no `ReferenceError` thrown if you try to access an object property (even on the global `window` object) that doesn't exist.\n\nOn the other hand, manually referencing the global variable with a `window` reference is something some developers prefer to avoid, especially if your code needs to run in multiple JS environments (not just browsers, but server-side node.js, for instance), where the global object may not always be called `window`.\n\nTechnically, this safety guard on `typeof` is useful even if you're not using global variables, though these circumstances are less common, and some developers may find this design approach less desirable. Imagine a utility function that you want others to copy-and-paste into their programs or modules, in which you want to check to see if the including program has defined a certain variable (so that you can use it) or not:\n\n```js\nfunction doSomethingCool() {\n\tvar helper =\n\t\t(typeof FeatureXYZ !== \"undefined\") ?\n\t\tFeatureXYZ :\n\t\tfunction() { /*.. default feature ..*/ };\n\n\tvar val = helper();\n\t// ..\n}\n```\n\n`doSomethingCool()` tests for a variable called `FeatureXYZ`, and if found, uses it, but if not, uses its own. Now, if someone includes this utility in their module/program, it safely checks if they've defined `FeatureXYZ` or not:\n\n```js\n// an IIFE (see \"Immediately Invoked Function Expressions\"\n// discussion in the *Scope & Closures* title of this series)\n(function(){\n\tfunction FeatureXYZ() { /*.. my XYZ feature ..*/ }\n\n\t// include `doSomethingCool(..)`\n\tfunction doSomethingCool() {\n\t\tvar helper =\n\t\t\t(typeof FeatureXYZ !== \"undefined\") ?\n\t\t\tFeatureXYZ :\n\t\t\tfunction() { /*.. 
default feature ..*/ };\n\n\t\tvar val = helper();\n\t\t// ..\n\t}\n\n\tdoSomethingCool();\n})();\n```\n\nHere, `FeatureXYZ` is not at all a global variable, but we're still using the safety guard of `typeof` to make it safe to check for. And importantly, here there is *no* object we can use (like we did for global variables with `window.___`) to make the check, so `typeof` is quite helpful.\n\nOther developers would prefer a design pattern called \"dependency injection,\" where instead of `doSomethingCool()` inspecting implicitly for `FeatureXYZ` to be defined outside/around it, it would need to have the dependency explicitly passed in, like:\n\n```js\nfunction doSomethingCool(FeatureXYZ) {\n\tvar helper = FeatureXYZ ||\n\t\tfunction() { /*.. default feature ..*/ };\n\n\tvar val = helper();\n\t// ..\n}\n```\n\nThere are lots of options when designing such functionality. No one pattern here is \"correct\" or \"wrong\" -- there are various tradeoffs to each approach. But overall, it's nice that the `typeof` undeclared safety guard gives us more options.\n\n## Review\n\nJavaScript has seven built-in *types*: `null`, `undefined`,  `boolean`, `number`, `string`, `object`, `symbol`. They can be identified by the `typeof` operator.\n\nVariables don't have types, but the values in them do. These types define intrinsic behavior of the values.\n\nMany developers will assume \"undefined\" and \"undeclared\" are roughly the same thing, but in JavaScript, they're quite different. `undefined` is a value that a declared variable can hold. \"Undeclared\" means a variable has never been declared.\n\nJavaScript unfortunately kind of conflates these two terms, not only in its error messages (\"ReferenceError: a is not defined\") but also in the return values of `typeof`, which is `\"undefined\"` for both cases.\n\nHowever, the safety guard (preventing an error) on `typeof` when used against an undeclared variable can be helpful in certain cases.\n"
  },
  {
    "path": "types & grammar/ch2.md",
    "content": "# You Don't Know JS: Types & Grammar\n# Chapter 2: Values\n\n`array`s, `string`s, and `number`s are the most basic building-blocks of any program, but JavaScript has some unique characteristics with these types that may either delight or confound you.\n\nLet's look at several of the built-in value types in JS, and explore how we can more fully understand and correctly leverage their behaviors.\n\n## Arrays\n\nAs compared to other type-enforced languages, JavaScript `array`s are just containers for any type of value, from `string` to `number` to `object` to even another `array` (which is how you get multidimensional `array`s).\n\n```js\nvar a = [ 1, \"2\", [3] ];\n\na.length;\t\t// 3\na[0] === 1;\t\t// true\na[2][0] === 3;\t// true\n```\n\nYou don't need to presize your `array`s (see \"Arrays\" in Chapter 3), you can just declare them and add values as you see fit:\n\n```js\nvar a = [ ];\n\na.length;\t// 0\n\na[0] = 1;\na[1] = \"2\";\na[2] = [ 3 ];\n\na.length;\t// 3\n```\n\n**Warning:** Using `delete` on an `array` value will remove that slot from the `array`, but even if you remove the final element, it does **not** update the `length` property, so be careful! We'll cover the `delete` operator itself in more detail in Chapter 5.\n\nBe careful about creating \"sparse\" `array`s (leaving or creating empty/missing slots):\n\n```js\nvar a = [ ];\n\na[0] = 1;\n// no `a[1]` slot set here\na[2] = [ 3 ];\n\na[1];\t\t// undefined\n\na.length;\t// 3\n```\n\nWhile that works, it can lead to some confusing behavior with the \"empty slots\" you leave in between. While the slot appears to have the `undefined` value in it, it will not behave the same as if the slot is explicitly set (`a[1] = undefined`). 
See \"Arrays\" in Chapter 3 for more information.\n\n`array`s are numerically indexed (as you'd expect), but the tricky thing is that they also are objects that can have `string` keys/properties added to them (but which don't count toward the `length` of the `array`):\n\n```js\nvar a = [ ];\n\na[0] = 1;\na[\"foobar\"] = 2;\n\na.length;\t\t// 1\na[\"foobar\"];\t// 2\na.foobar;\t\t// 2\n```\n\nHowever, a gotcha to be aware of is that if a `string` value intended as a key can be coerced to a standard base-10 `number`, then it is assumed that you wanted to use it as a `number` index rather than as a `string` key!\n\n```js\nvar a = [ ];\n\na[\"13\"] = 42;\n\na.length; // 14\n```\n\nGenerally, it's not a great idea to add `string` keys/properties to `array`s. Use `object`s for holding values in keys/properties, and save `array`s for strictly numerically indexed values.\n\n### Array-Likes\n\nThere will be occasions where you need to convert an `array`-like value (a numerically indexed collection of values) into a true `array`, usually so you can call array utilities (like `indexOf(..)`, `concat(..)`, `forEach(..)`, etc.) against the collection of values.\n\nFor example, various DOM query operations return lists of DOM elements that are not true `array`s but are `array`-like enough for our conversion purposes. 
Another common example is when functions expose the `arguments` (`array`-like) object (as of ES6, deprecated) to access the arguments as a list.\n\nOne very common way to make such a conversion is to borrow the `slice(..)` utility against the value:\n\n```js\nfunction foo() {\n\tvar arr = Array.prototype.slice.call( arguments );\n\tarr.push( \"bam\" );\n\tconsole.log( arr );\n}\n\nfoo( \"bar\", \"baz\" ); // [\"bar\",\"baz\",\"bam\"]\n```\n\nIf `slice()` is called without any other parameters, as it effectively is in the above snippet, the default values for its parameters have the effect of duplicating the `array` (or, in this case, `array`-like).\n\nAs of ES6, there's also a built-in utility called `Array.from(..)` that can do the same task:\n\n```js\n...\nvar arr = Array.from( arguments );\n...\n```\n\n**Note:** `Array.from(..)` has several powerful capabilities, and will be covered in detail in the *ES6 & Beyond* title of this series.\n\n## Strings\n\nIt's a very common belief that `string`s are essentially just `array`s of characters. While the implementation under the covers may or may not use `array`s, it's important to realize that JavaScript `string`s are really not the same as `array`s of characters. 
The similarity is mostly just skin-deep.\n\nFor example, let's consider these two values:\n\n```js\nvar a = \"foo\";\nvar b = [\"f\",\"o\",\"o\"];\n```\n\nStrings do have a shallow resemblance to `array`s -- `array`-likes, as above -- for instance, both of them having a `length` property, an `indexOf(..)` method (`array` version only as of ES5), and a `concat(..)` method:\n\n```js\na.length;\t\t\t\t\t\t\t// 3\nb.length;\t\t\t\t\t\t\t// 3\n\na.indexOf( \"o\" );\t\t\t\t\t// 1\nb.indexOf( \"o\" );\t\t\t\t\t// 1\n\nvar c = a.concat( \"bar\" );\t\t\t// \"foobar\"\nvar d = b.concat( [\"b\",\"a\",\"r\"] );\t// [\"f\",\"o\",\"o\",\"b\",\"a\",\"r\"]\n\na === c;\t\t\t\t\t\t\t// false\nb === d;\t\t\t\t\t\t\t// false\n\na;\t\t\t\t\t\t\t\t\t// \"foo\"\nb;\t\t\t\t\t\t\t\t\t// [\"f\",\"o\",\"o\"]\n```\n\nSo, they're both basically just \"arrays of characters\", right? **Not exactly**:\n\n```js\na[1] = \"O\";\nb[1] = \"O\";\n\na; // \"foo\"\nb; // [\"f\",\"O\",\"o\"]\n```\n\nJavaScript `string`s are immutable, while `array`s are quite mutable. Moreover, the `a[1]` character position access form was not always widely valid JavaScript. Older versions of IE did not allow that syntax (but now they do). Instead, the *correct* approach has been `a.charAt(1)`.\n\nA further consequence of immutable `string`s is that none of the `string` methods that alter its contents can modify in-place, but rather must create and return new `string`s. 
By contrast, many of the methods that change `array` contents actually *do* modify in-place.\n\n```js\nc = a.toUpperCase();\na === c;\t// false\na;\t\t\t// \"foo\"\nc;\t\t\t// \"FOO\"\n\nb.push( \"!\" );\nb;\t\t\t// [\"f\",\"O\",\"o\",\"!\"]\n```\n\nAlso, many of the `array` methods that could be helpful when dealing with `string`s are not actually available for them, but we can \"borrow\" non-mutation `array` methods against our `string`:\n\n```js\na.join;\t\t\t// undefined\na.map;\t\t\t// undefined\n\nvar c = Array.prototype.join.call( a, \"-\" );\nvar d = Array.prototype.map.call( a, function(v){\n\treturn v.toUpperCase() + \".\";\n} ).join( \"\" );\n\nc;\t\t\t\t// \"f-o-o\"\nd;\t\t\t\t// \"F.O.O.\"\n```\n\nLet's take another example: reversing a `string` (incidentally, a common JavaScript interview trivia question!). `array`s have a `reverse()` in-place mutator method, but `string`s do not:\n\n```js\na.reverse;\t\t// undefined\n\nb.reverse();\t// [\"!\",\"o\",\"O\",\"f\"]\nb;\t\t\t\t// [\"!\",\"o\",\"O\",\"f\"]\n```\n\nUnfortunately, this \"borrowing\" doesn't work with `array` mutators, because `string`s are immutable and thus can't be modified in place:\n\n```js\nArray.prototype.reverse.call( a );\n// still returns a String object wrapper (see Chapter 3)\n// for \"foo\" :(\n```\n\nAnother workaround (aka hack) is to convert the `string` into an `array`, perform the desired operation, then convert it back to a `string`.\n\n```js\nvar c = a\n\t// split `a` into an array of characters\n\t.split( \"\" )\n\t// reverse the array of characters\n\t.reverse()\n\t// join the array of characters back to a string\n\t.join( \"\" );\n\nc; // \"oof\"\n```\n\nIf that feels ugly, it is. Nevertheless, *it works* for simple `string`s, so if you need something quick-n-dirty, often such an approach gets the job done.\n\n**Warning:** Be careful! This approach **doesn't work** for `string`s with complex (unicode) characters in them (astral symbols, multibyte characters, etc.). 
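For instance, an astral symbol occupies two "character" positions (a surrogate pair), and the naive split/reverse/join reverses those halves independently; a sketch using U+1F4A9:

```js
var a = "ab\uD83D\uDCA9";   // "ab💩" -- the emoji is a surrogate pair

var naive = a.split( "" ).reverse().join( "" );

// the pair's halves got reversed individually, producing two
// lone (invalid) surrogates at the front of the string:
naive === "\uDCA9" + "\uD83D" + "ba";   // true
naive === "\uD83D\uDCA9" + "ba";        // false -- not "💩ba"!
```
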
You need more sophisticated library utilities that are unicode-aware for such operations to be handled accurately. Consult Mathias Bynens' work on the subject: *Esrever* (https://github.com/mathiasbynens/esrever).\n\nThe other way to look at this is: if you are more commonly doing tasks on your \"strings\" that treat them as basically *arrays of characters*, perhaps it's better to just actually store them as `array`s rather than as `string`s. You'll probably save yourself a lot of hassle of converting from `string` to `array` each time. You can always call `join(\"\")` on the `array` *of characters* whenever you actually need the `string` representation.\n\n## Numbers\n\nJavaScript has just one numeric type: `number`. This type includes both \"integer\" values and fractional decimal numbers. I say \"integer\" in quotes because it's long been a criticism of JS that there are not true integers, as there are in other languages. That may change at some point in the future, but for now, we just have `number`s for everything.\n\nSo, in JS, an \"integer\" is just a value that has no fractional decimal value. That is, `42.0` is as much an \"integer\" as `42`.\n\nLike most modern languages, including practically all scripting languages, the implementation of JavaScript's `number`s is based on the \"IEEE 754\" standard, often called \"floating-point.\" JavaScript specifically uses the \"double precision\" format (aka \"64-bit binary\") of the standard.\n\nThere are many great write-ups on the Web about the nitty-gritty details of how binary floating-point numbers are stored in memory, and the implications of those choices. Because understanding bit patterns in memory is not strictly necessary to understand how to correctly use `number`s in JS, we'll leave it as an exercise for the interested reader if you'd like to dig further into IEEE 754 details.\n\n### Numeric Syntax\n\nNumber literals are expressed in JavaScript generally as base-10 decimal literals. 
For example:\n\n```js\nvar a = 42;\nvar b = 42.3;\n```\n\nThe leading portion of a decimal value, if `0`, is optional:\n\n```js\nvar a = 0.42;\nvar b = .42;\n```\n\nSimilarly, the trailing portion (the fractional) of a decimal value after the `.`, if `0`, is optional:\n\n```js\nvar a = 42.0;\nvar b = 42.;\n```\n\n**Warning:** `42.` is pretty uncommon, and probably not a great idea if you're trying to avoid confusion when other people read your code. But it is, nevertheless, valid.\n\nBy default, most `number`s will be outputted as base-10 decimals, with trailing fractional `0`s removed. So:\n\n```js\nvar a = 42.300;\nvar b = 42.0;\n\na; // 42.3\nb; // 42\n```\n\nVery large or very small `number`s will by default be outputted in exponent form, the same as the output of the `toExponential()` method, like:\n\n```js\nvar a = 5E10;\na;\t\t\t\t\t// 50000000000\na.toExponential();\t// \"5e+10\"\n\nvar b = a * a;\nb;\t\t\t\t\t// 2.5e+21\n\nvar c = 1 / a;\nc;\t\t\t\t\t// 2e-11\n```\n\nBecause `number` values can be boxed with the `Number` object wrapper (see Chapter 3), `number` values can access methods that are built into the `Number.prototype` (see Chapter 3). 
For example, the `toFixed(..)` method allows you to specify how many fractional decimal places you'd like the value to be represented with:\n\n```js\nvar a = 42.59;\n\na.toFixed( 0 ); // \"43\"\na.toFixed( 1 ); // \"42.6\"\na.toFixed( 2 ); // \"42.59\"\na.toFixed( 3 ); // \"42.590\"\na.toFixed( 4 ); // \"42.5900\"\n```\n\nNotice that the output is actually a `string` representation of the `number`, and that the value is `0`-padded on the right-hand side if you ask for more decimals than the value holds.\n\n`toPrecision(..)` is similar, but specifies how many *significant digits* should be used to represent the value:\n\n```js\nvar a = 42.59;\n\na.toPrecision( 1 ); // \"4e+1\"\na.toPrecision( 2 ); // \"43\"\na.toPrecision( 3 ); // \"42.6\"\na.toPrecision( 4 ); // \"42.59\"\na.toPrecision( 5 ); // \"42.590\"\na.toPrecision( 6 ); // \"42.5900\"\n```\n\nYou don't have to use a variable with the value in it to access these methods; you can access these methods directly on `number` literals. But you have to be careful with the `.` operator. Since `.` is a valid numeric character, it will first be interpreted as part of the `number` literal, if possible, instead of being interpreted as a property accessor.\n\n```js\n// invalid syntax:\n42.toFixed( 3 );\t// SyntaxError\n\n// these are all valid:\n(42).toFixed( 3 );\t// \"42.000\"\n0.42.toFixed( 3 );\t// \"0.420\"\n42..toFixed( 3 );\t// \"42.000\"\n```\n\n`42.toFixed(3)` is invalid syntax, because the `.` is swallowed up as part of the `42.` literal (which is valid -- see above!), and so then there's no `.` property operator present to make the `.toFixed` access.\n\n`42..toFixed(3)` works because the first `.` is part of the `number` and the second `.` is the property operator. But it probably looks strange, and indeed it's very rare to see something like that in actual JavaScript code. In fact, it's pretty uncommon to access methods directly on any of the primitive values. 
Uncommon doesn't mean *bad* or *wrong*.\n\n**Note:** There are libraries that extend the built-in `Number.prototype` (see Chapter 3) to provide extra operations on/with `number`s, and so in those cases, it's perfectly valid to use something like `10..makeItRain()` to set off a 10-second money raining animation, or something else silly like that.\n\nThis is also technically valid (notice the space):\n\n```js\n42 .toFixed(3); // \"42.000\"\n```\n\nHowever, with the `number` literal specifically, **this is particularly confusing coding style** and will serve no other purpose but to confuse other developers (and your future self). Avoid it.\n\n`number`s can also be specified in exponent form, which is common when representing larger `number`s, such as:\n\n```js\nvar onethousand = 1E3;\t\t\t\t\t\t// means 1 * 10^3\nvar onemilliononehundredthousand = 1.1E6;\t// means 1.1 * 10^6\n```\n\n`number` literals can also be expressed in other bases, like binary, octal, and hexadecimal.\n\nThese formats work in current versions of JavaScript:\n\n```js\n0xf3; // hexadecimal for: 243\n0Xf3; // ditto\n\n0363; // octal for: 243\n```\n\n**Note:** Starting with ES6 + `strict` mode, the `0363` form of octal literals is no longer allowed (see below for the new form). The `0363` form is still allowed in non-`strict` mode, but you should stop using it anyway, to be future-friendly (and because you should be using `strict` mode by now!).\n\nAs of ES6, the following new forms are also valid:\n\n```js\n0o363;\t\t// octal for: 243\n0O363;\t\t// ditto\n\n0b11110011;\t// binary for: 243\n0B11110011; // ditto\n```\n\nPlease do your fellow developers a favor: never use the `0O363` form. `0` next to capital `O` is just asking for confusion. 
Always use the lowercase prefixes `0x`, `0b`, and `0o`.\n\n### Small Decimal Values\n\nThe most (in)famous side effect of using binary floating-point numbers (which, remember, is true of **all** languages that use IEEE 754 -- not *just* JavaScript as many assume/pretend) is:\n\n```js\n0.1 + 0.2 === 0.3; // false\n```\n\nMathematically, we know that statement should be `true`. Why is it `false`?\n\nSimply put, the representations for `0.1` and `0.2` in binary floating-point are not exact, so when they are added, the result is not exactly `0.3`. It's **really** close: `0.30000000000000004`, but if your comparison fails, "close" is irrelevant.\n\n**Note:** Should JavaScript switch to a different `number` implementation that has exact representations for all values? Some think so. There have been many alternatives presented over the years. None of them have been accepted yet, and perhaps never will. As easy as it may seem to just wave a hand and say, "fix that bug already!", it's not nearly that easy. If it were, it most definitely would have been changed a long time ago.\n\nNow, the question is, if some `number`s can't be *trusted* to be exact, does that mean we can't use `number`s at all? **Of course not.**\n\nThere are some applications where you need to be more careful, especially when dealing with fractional decimal values. There are also plenty of (maybe most?) applications that only deal with whole numbers ("integers"), and moreover, only deal with numbers in the millions or trillions at maximum. These applications have been, and always will be, **perfectly safe** places to use numeric operations in JS.\n\nWhat if we *did* need to compare two `number`s, like `0.1 + 0.2` to `0.3`, knowing that the simple equality test fails?\n\nThe most commonly accepted practice is to use a tiny "rounding error" value as the *tolerance* for comparison. 
This tiny value is often called \"machine epsilon,\" which is commonly `2^-52` (`2.220446049250313e-16`) for the kind of `number`s in JavaScript.\n\nAs of ES6, `Number.EPSILON` is predefined with this tolerance value, so you'd want to use it, but you can safely polyfill the definition for pre-ES6:\n\n```js\nif (!Number.EPSILON) {\n\tNumber.EPSILON = Math.pow(2,-52);\n}\n```\n\nWe can use this `Number.EPSILON` to compare two `number`s for \"equality\" (within the rounding error tolerance):\n\n```js\nfunction numbersCloseEnoughToEqual(n1,n2) {\n\treturn Math.abs( n1 - n2 ) < Number.EPSILON;\n}\n\nvar a = 0.1 + 0.2;\nvar b = 0.3;\n\nnumbersCloseEnoughToEqual( a, b );\t\t\t\t\t// true\nnumbersCloseEnoughToEqual( 0.0000001, 0.0000002 );\t// false\n```\n\nThe maximum floating-point value that can be represented is roughly `1.798e+308` (which is really, really, really huge!), predefined for you as `Number.MAX_VALUE`. On the small end, `Number.MIN_VALUE` is roughly `5e-324`, which isn't negative but is really close to zero!\n\n### Safe Integer Ranges\n\nBecause of how `number`s are represented, there is a range of \"safe\" values for the whole `number` \"integers\", and it's significantly less than `Number.MAX_VALUE`.\n\nThe maximum integer that can \"safely\" be represented (that is, there's a guarantee that the requested value is actually representable unambiguously) is `2^53 - 1`, which is `9007199254740991`. If you insert your commas, you'll see that this is just over 9 quadrillion. So that's pretty darn big for `number`s to range up to.\n\nThis value is actually automatically predefined in ES6, as `Number.MAX_SAFE_INTEGER`. Unsurprisingly, there's a minimum value, `-9007199254740991`, and it's defined in ES6 as `Number.MIN_SAFE_INTEGER`.\n\nThe main way that JS programs are confronted with dealing with such large numbers is when dealing with 64-bit IDs from databases, etc. 
64-bit numbers cannot be represented accurately with the `number` type, so must be stored in (and transmitted to/from) JavaScript using `string` representation.\n\nNumeric operations on such large ID `number` values (besides comparison, which will be fine with `string`s) aren't all that common, thankfully. But if you *do* need to perform math on these very large values, for now you'll need to use a *big number* utility. Big numbers may get official support in a future version of JavaScript.\n\n### Testing for Integers\n\nTo test if a value is an integer, you can use the ES6-specified `Number.isInteger(..)`:\n\n```js\nNumber.isInteger( 42 );\t\t// true\nNumber.isInteger( 42.000 );\t// true\nNumber.isInteger( 42.3 );\t// false\n```\n\nTo polyfill `Number.isInteger(..)` for pre-ES6:\n\n```js\nif (!Number.isInteger) {\n\tNumber.isInteger = function(num) {\n\t\treturn typeof num == \"number\" && num % 1 == 0;\n\t};\n}\n```\n\nTo test if a value is a *safe integer*, use the ES6-specified `Number.isSafeInteger(..)`:\n\n```js\nNumber.isSafeInteger( Number.MAX_SAFE_INTEGER );\t// true\nNumber.isSafeInteger( Math.pow( 2, 53 ) );\t\t\t// false\nNumber.isSafeInteger( Math.pow( 2, 53 ) - 1 );\t\t// true\n```\n\nTo polyfill `Number.isSafeInteger(..)` in pre-ES6 browsers:\n\n```js\nif (!Number.isSafeInteger) {\n\tNumber.isSafeInteger = function(num) {\n\t\treturn Number.isInteger( num ) &&\n\t\t\tMath.abs( num ) <= Number.MAX_SAFE_INTEGER;\n\t};\n}\n```\n\n### 32-bit (Signed) Integers\n\nWhile integers can range up to roughly 9 quadrillion safely (53 bits), there are some numeric operations (like the bitwise operators) that are only defined for 32-bit `number`s, so the \"safe range\" for `number`s used in that way must be much smaller.\n\nThe range then is `Math.pow(-2,31)` (`-2147483648`, about -2.1 billion) up to `Math.pow(2,31)-1` (`2147483647`, about +2.1 billion).\n\nTo force a `number` value in `a` to a 32-bit signed integer value, use `a | 0`. 
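\n\nFor example, note how a fractional value is truncated and an out-of-range value wraps around (illustrative values):\n\n```js\n2147483647.999 | 0;\t\t// 2147483647\n3000000000 | 0;\t\t\t// -1294967296 (wrapped around!)\n```\n\n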
This works because the `|` bitwise operator only works for 32-bit integer values (meaning it can only pay attention to 32 bits and any other bits will be lost). Then, \"or'ing\" with zero is essentially a no-op bitwise speaking.\n\n**Note:** Certain special values (which we will cover in the next section) such as `NaN` and `Infinity` are not \"32-bit safe,\" in that those values when passed to a bitwise operator will pass through the abstract operation `ToInt32` (see Chapter 4) and become simply the `+0` value for the purpose of that bitwise operation.\n\n## Special Values\n\nThere are several special values spread across the various types that the *alert* JS developer needs to be aware of, and use properly.\n\n### The Non-value Values\n\nFor the `undefined` type, there is one and only one value: `undefined`. For the `null` type, there is one and only one value: `null`. So for both of them, the label is both its type and its value.\n\nBoth `undefined` and `null` are often taken to be interchangeable as either \"empty\" values or \"non\" values. Other developers prefer to distinguish between them with nuance. For example:\n\n* `null` is an empty value\n* `undefined` is a missing value\n\nOr:\n\n* `undefined` hasn't had a value yet\n* `null` had a value and doesn't anymore\n\nRegardless of how you choose to \"define\" and use these two values, `null` is a special keyword, not an identifier, and thus you cannot treat it as a variable to assign to (why would you!?). However, `undefined` *is* (unfortunately) an identifier. Uh oh.\n\n### Undefined\n\nIn non-`strict` mode, it's actually possible (though incredibly ill-advised!) 
to assign a value to the globally provided `undefined` identifier:\n\n```js\nfunction foo() {\n\tundefined = 2; // really bad idea!\n}\n\nfoo();\n```\n\nIn `strict` mode, though, that same assignment throws an error:\n\n```js\nfunction foo() {\n\t"use strict";\n\tundefined = 2; // TypeError!\n}\n\nfoo();\n```\n\nIn both non-`strict` mode and `strict` mode, however, you can create a local variable of the name `undefined`. But again, this is a terrible idea!\n\n```js\nfunction foo() {\n\t"use strict";\n\tvar undefined = 2;\n\tconsole.log( undefined ); // 2\n}\n\nfoo();\n```\n\n**Friends don't let friends override `undefined`.** Ever.\n\n#### `void` Operator\n\nWhile `undefined` is a built-in identifier that holds (unless modified -- see above!) the built-in `undefined` value, another way to get this value is the `void` operator.\n\nThe expression `void ___` "voids" out any value, so that the result of the expression is always the `undefined` value. It doesn't modify the existing value; it just ensures that no value comes back from the operator expression.\n\n```js\nvar a = 42;\n\nconsole.log( void a, a ); // undefined 42\n```\n\nBy convention (mostly from C-language programming), to represent the `undefined` value stand-alone by using `void`, you'd use `void 0` (though clearly even `void true` or any other `void` expression does the same thing). 
There's no practical difference between `void 0`, `void 1`, and `undefined`.\n\nBut the `void` operator can be useful in a few other circumstances, if you need to ensure that an expression has no result value (even if it has side effects).\n\nFor example:\n\n```js\nfunction doSomething() {\n\t// note: `APP.ready` is provided by our application\n\tif (!APP.ready) {\n\t\t// try again later\n\t\treturn void setTimeout( doSomething, 100 );\n\t}\n\n\tvar result;\n\n\t// do some other stuff\n\treturn result;\n}\n\n// were we able to do it right away?\nif (doSomething()) {\n\t// handle next tasks right away\n}\n```\n\nHere, the `setTimeout(..)` function returns a numeric value (the unique identifier of the timer interval, if you wanted to cancel it), but we want to `void` that out so that the return value of our function doesn't give a false-positive with the `if` statement.\n\nMany devs prefer to just do these actions separately, which works the same but doesn't use the `void` operator:\n\n```js\nif (!APP.ready) {\n\t// try again later\n\tsetTimeout( doSomething, 100 );\n\treturn;\n}\n```\n\nIn general, if there's ever a place where a value exists (from some expression) and you'd find it useful for the value to be `undefined` instead, use the `void` operator. That probably won't be terribly common in your programs, but in the rare cases you do need it, it can be quite helpful.\n\n### Special Numbers\n\nThe `number` type includes several special values. We'll take a look at each in detail.\n\n#### The Not Number, Number\n\nAny mathematic operation you perform without both operands being `number`s (or values that can be interpreted as regular `number`s in base 10 or base 16) will result in the operation failing to produce a valid `number`, in which case you will get the `NaN` value.\n\n`NaN` literally stands for \"not a `number`\", though this label/description is very poor and misleading, as we'll see shortly. 
It would be much more accurate to think of `NaN` as being \"invalid number,\" \"failed number,\" or even \"bad number,\" than to think of it as \"not a number.\"\n\nFor example:\n\n```js\nvar a = 2 / \"foo\";\t\t// NaN\n\ntypeof a === \"number\";\t// true\n```\n\nIn other words: \"the type of not-a-number is 'number'!\" Hooray for confusing names and semantics.\n\n`NaN` is a kind of \"sentinel value\" (an otherwise normal value that's assigned a special meaning) that represents a special kind of error condition within the `number` set. The error condition is, in essence: \"I tried to perform a mathematic operation but failed, so here's the failed `number` result instead.\"\n\nSo, if you have a value in some variable and want to test to see if it's this special failed-number `NaN`, you might think you could directly compare to `NaN` itself, as you can with any other value, like `null` or `undefined`. Nope.\n\n```js\nvar a = 2 / \"foo\";\n\na == NaN;\t// false\na === NaN;\t// false\n```\n\n`NaN` is a very special value in that it's never equal to another `NaN` value (i.e., it's never equal to itself). It's the only value, in fact, that is not reflexive (without the Identity characteristic `x === x`). So, `NaN !== NaN`. A bit strange, huh?\n\nSo how *do* we test for it, if we can't compare to `NaN` (since that comparison would always fail)?\n\n```js\nvar a = 2 / \"foo\";\n\nisNaN( a ); // true\n```\n\nEasy enough, right? We use the built-in global utility called `isNaN(..)` and it tells us if the value is `NaN` or not. Problem solved!\n\nNot so fast.\n\nThe `isNaN(..)` utility has a fatal flaw. 
It appears it tried to take the meaning of `NaN` (\"Not a Number\") too literally -- that its job is basically: \"test if the thing passed in is either not a `number` or is a `number`.\" But that's not quite accurate.\n\n```js\nvar a = 2 / \"foo\";\nvar b = \"foo\";\n\na; // NaN\nb; // \"foo\"\n\nwindow.isNaN( a ); // true\nwindow.isNaN( b ); // true -- ouch!\n```\n\nClearly, `\"foo\"` is literally *not a `number`*, but it's definitely not the `NaN` value either! This bug has been in JS since the very beginning (over 19 years of *ouch*).\n\nAs of ES6, finally a replacement utility has been provided: `Number.isNaN(..)`. A simple polyfill for it so that you can safely check `NaN` values *now* even in pre-ES6 browsers is:\n\n```js\nif (!Number.isNaN) {\n\tNumber.isNaN = function(n) {\n\t\treturn (\n\t\t\ttypeof n === \"number\" &&\n\t\t\twindow.isNaN( n )\n\t\t);\n\t};\n}\n\nvar a = 2 / \"foo\";\nvar b = \"foo\";\n\nNumber.isNaN( a ); // true\nNumber.isNaN( b ); // false -- phew!\n```\n\nActually, we can implement a `Number.isNaN(..)` polyfill even easier, by taking advantage of that peculiar fact that `NaN` isn't equal to itself. `NaN` is the *only* value in the whole language where that's true; every other value is always **equal to itself**.\n\nSo:\n\n```js\nif (!Number.isNaN) {\n\tNumber.isNaN = function(n) {\n\t\treturn n !== n;\n\t};\n}\n```\n\nWeird, huh? But it works!\n\n`NaN`s are probably a reality in a lot of real-world JS programs, either on purpose or by accident. 
It's a really good idea to use a reliable test, like `Number.isNaN(..)` as provided (or polyfilled), to recognize them properly.\n\nIf you're currently using just `isNaN(..)` in a program, the sad reality is your program *has a bug*, even if you haven't been bitten by it yet!\n\n#### Infinities\n\nDevelopers from traditional compiled languages like C are probably used to seeing either a compiler error or runtime exception, like \"Divide by zero,\" for an operation like:\n\n```js\nvar a = 1 / 0;\n```\n\nHowever, in JS, this operation is well-defined and results in the value `Infinity` (aka `Number.POSITIVE_INFINITY`). Unsurprisingly:\n\n```js\nvar a = 1 / 0;\t// Infinity\nvar b = -1 / 0;\t// -Infinity\n```\n\nAs you can see, `-Infinity` (aka `Number.NEGATIVE_INFINITY`) results from a divide-by-zero where either (but not both!) of the divide operands is negative.\n\nJS uses finite numeric representations (IEEE 754 floating-point, which we covered earlier), so contrary to pure mathematics, it seems it *is* possible to overflow even with an operation like addition or subtraction, in which case you'd get `Infinity` or `-Infinity`.\n\nFor example:\n\n```js\nvar a = Number.MAX_VALUE;\t// 1.7976931348623157e+308\na + a;\t\t\t\t\t\t// Infinity\na + Math.pow( 2, 970 );\t\t// Infinity\na + Math.pow( 2, 969 );\t\t// 1.7976931348623157e+308\n```\n\nAccording to the specification, if an operation like addition results in a value that's too big to represent, the IEEE 754 \"round-to-nearest\" mode specifies what the result should be. So, in a crude sense, `Number.MAX_VALUE + Math.pow( 2, 969 )` is closer to `Number.MAX_VALUE` than to `Infinity`, so it \"rounds down,\" whereas `Number.MAX_VALUE + Math.pow( 2, 970 )` is closer to `Infinity` so it \"rounds up\".\n\nIf you think too much about that, it's going to make your head hurt. So don't. Seriously, stop!\n\nOnce you overflow to either one of the *infinities*, however, there's no going back. 
In other words, in an almost poetic sense, you can go from finite to infinite but not from infinite back to finite.\n\nIt's almost philosophical to ask: "What is infinity divided by infinity?" Our naive brains would likely say "1" or maybe "infinity." Turns out neither is true. Both mathematically and in JavaScript, `Infinity / Infinity` is not a defined operation. In JS, this results in `NaN`.\n\nBut what about any positive finite `number` divided by `Infinity`? That's easy! `0`. And what about a negative finite `number` divided by `Infinity`? Keep reading!\n\n#### Zeros\n\nWhile it may confuse the mathematics-minded reader, JavaScript has both a normal zero `0` (otherwise known as a positive zero `+0`) *and* a negative zero `-0`. Before we explain why the `-0` exists, we should examine how JS handles it, because it can be quite confusing.\n\nBesides being specified literally as `-0`, negative zero also results from certain mathematic operations. For example:\n\n```js\nvar a = 0 / -3; // -0\nvar b = 0 * -3; // -0\n```\n\nAddition and subtraction cannot result in a negative zero.\n\nA negative zero when examined in the developer console will usually reveal `-0`, though that was not the common case until fairly recently, so some older browsers you encounter may still report it as `0`.\n\nHowever, if you try to stringify a negative zero value, it will always be reported as `"0"`, according to the spec.\n\n```js\nvar a = 0 / -3;\n\n// (some browser) consoles at least get it right\na;\t\t\t\t\t\t\t// -0\n\n// but the spec insists on lying to you!\na.toString();\t\t\t\t// "0"\na + "";\t\t\t\t\t\t// "0"\nString( a );\t\t\t\t// "0"\n\n// strangely, even JSON gets in on the deception\nJSON.stringify( a );\t\t// "0"\n```\n\nInterestingly, the reverse operations (going from `string` to `number`) don't lie:\n\n```js\n+"-0";\t\t\t\t// -0\nNumber( "-0" );\t\t// -0\nJSON.parse( "-0" );\t// -0\n```\n\n**Warning:** The `JSON.stringify( -0 )` behavior of 
`\"0\"` is particularly strange when you observe that it's inconsistent with the reverse: `JSON.parse( \"-0\" )` reports `-0` as you'd correctly expect.\n\nIn addition to stringification of negative zero being deceptive to hide its true value, the comparison operators are also (intentionally) configured to *lie*.\n\n```js\nvar a = 0;\nvar b = 0 / -3;\n\na == b;\t\t// true\n-0 == 0;\t// true\n\na === b;\t// true\n-0 === 0;\t// true\n\n0 > -0;\t\t// false\na > b;\t\t// false\n```\n\nClearly, if you want to distinguish a `-0` from a `0` in your code, you can't just rely on what the developer console outputs, so you're going to have to be a bit more clever:\n\n```js\nfunction isNegZero(n) {\n\tn = Number( n );\n\treturn (n === 0) && (1 / n === -Infinity);\n}\n\nisNegZero( -0 );\t\t// true\nisNegZero( 0 / -3 );\t// true\nisNegZero( 0 );\t\t\t// false\n```\n\nNow, why do we need a negative zero, besides academic trivia?\n\nThere are certain applications where developers use the magnitude of a value to represent one piece of information (like speed of movement per animation frame) and the sign of that `number` to represent another piece of information (like the direction of that movement).\n\nIn those applications, as one example, if a variable arrives at zero and it loses its sign, then you would lose the information of what direction it was moving in before it arrived at zero. Preserving the sign of the zero prevents potentially unwanted information loss.\n\n### Special Equality\n\nAs we saw above, the `NaN` value and the `-0` value have special behavior when it comes to equality comparison. `NaN` is never equal to itself, so you have to use ES6's `Number.isNaN(..)` (or a polyfill). 
Similarly, `-0` lies and pretends that it's equal (even `===` strict equal -- see Chapter 4) to regular positive `0`, so you have to use the somewhat hackish `isNegZero(..)` utility we suggested above.\n\nAs of ES6, there's a new utility that can be used to test two values for absolute equality, without any of these exceptions. It's called `Object.is(..)`:\n\n```js\nvar a = 2 / \"foo\";\nvar b = -3 * 0;\n\nObject.is( a, NaN );\t// true\nObject.is( b, -0 );\t\t// true\n\nObject.is( b, 0 );\t\t// false\n```\n\nThere's a pretty simple polyfill for `Object.is(..)` for pre-ES6 environments:\n\n```js\nif (!Object.is) {\n\tObject.is = function(v1, v2) {\n\t\t// test for `-0`\n\t\tif (v1 === 0 && v2 === 0) {\n\t\t\treturn 1 / v1 === 1 / v2;\n\t\t}\n\t\t// test for `NaN`\n\t\tif (v1 !== v1) {\n\t\t\treturn v2 !== v2;\n\t\t}\n\t\t// everything else\n\t\treturn v1 === v2;\n\t};\n}\n```\n\n`Object.is(..)` probably shouldn't be used in cases where `==` or `===` are known to be *safe* (see Chapter 4 \"Coercion\"), as the operators are likely much more efficient and certainly are more idiomatic/common. `Object.is(..)` is mostly for these special cases of equality.\n\n## Value vs. Reference\n\nIn many other languages, values can either be assigned/passed by value-copy or by reference-copy depending on the syntax you use.\n\nFor example, in C++ if you want to pass a `number` variable into a function and have that variable's value updated, you can declare the function parameter like `int& myNum`, and when you pass in a variable like `x`, `myNum` will be a **reference to `x`**; references are like a special form of pointers, where you obtain a pointer to another variable (like an *alias*). If you don't declare a reference parameter, the value passed in will *always* be copied, even if it's a complex object.\n\nIn JavaScript, there are no pointers, and references work a bit differently. You cannot have a reference from one JS variable to another variable. 
That's just not possible.\n\nA reference in JS points at a (shared) **value**, so if you have 10 different references, they are all always distinct references to a single shared value; **none of them are references/pointers to each other.**\n\nMoreover, in JavaScript, there are no syntactic hints that control value vs. reference assignment/passing. Instead, the *type* of the value *solely* controls whether that value will be assigned by value-copy or by reference-copy.\n\nLet's illustrate:\n\n```js\nvar a = 2;\nvar b = a; // `b` is always a copy of the value in `a`\nb++;\na; // 2\nb; // 3\n\nvar c = [1,2,3];\nvar d = c; // `d` is a reference to the shared `[1,2,3]` value\nd.push( 4 );\nc; // [1,2,3,4]\nd; // [1,2,3,4]\n```\n\nSimple values (aka scalar primitives) are *always* assigned/passed by value-copy: `null`, `undefined`, `string`, `number`, `boolean`, and ES6's `symbol`.\n\nCompound values -- `object`s (including `array`s, and all boxed object wrappers -- see Chapter 3) and `function`s -- *always* create a copy of the reference on assignment or passing.\n\nIn the above snippet, because `2` is a scalar primitive, `a` holds one initial copy of that value, and `b` is assigned another *copy* of the value. When changing `b`, you are in no way changing the value in `a`.\n\nBut **both `c` and `d`** are separate references to the same shared value `[1,2,3]`, which is a compound value. It's important to note that neither `c` nor `d` more \"owns\" the `[1,2,3]` value -- both are just equal peer references to the value. 
So, when using either reference to modify (`.push(4)`) the actual shared `array` value itself, it's affecting just the one shared value, and both references will reference the newly modified value `[1,2,3,4]`.\n\nSince references point to the values themselves and not to the variables, you cannot use one reference to change where another reference is pointed:\n\n```js\nvar a = [1,2,3];\nvar b = a;\na; // [1,2,3]\nb; // [1,2,3]\n\n// later\nb = [4,5,6];\na; // [1,2,3]\nb; // [4,5,6]\n```\n\nWhen we make the assignment `b = [4,5,6]`, we are doing absolutely nothing to affect *where* `a` is still referencing (`[1,2,3]`). To do that, `b` would have to be a pointer to `a` rather than a reference to the `array` -- but no such capability exists in JS!\n\nThe most common way such confusion happens is with function parameters:\n\n```js\nfunction foo(x) {\n\tx.push( 4 );\n\tx; // [1,2,3,4]\n\n\t// later\n\tx = [4,5,6];\n\tx.push( 7 );\n\tx; // [4,5,6,7]\n}\n\nvar a = [1,2,3];\n\nfoo( a );\n\na; // [1,2,3,4]  not  [4,5,6,7]\n```\n\nWhen we pass in the argument `a`, it assigns a copy of the `a` reference to `x`. `x` and `a` are separate references pointing at the same `[1,2,3]` value. Now, inside the function, we can use that reference to mutate the value itself (`push(4)`). But when we make the assignment `x = [4,5,6]`, this is in no way affecting where the initial reference `a` is pointing -- still points at the (now modified) `[1,2,3,4]` value.\n\nThere is no way to use the `x` reference to change where `a` is pointing. 
We could only modify the contents of the shared value that both `a` and `x` are pointing to.\n\nTo accomplish changing `a` to have the `[4,5,6,7]` value contents, you can't create a new `array` and assign -- you must modify the existing `array` value:\n\n```js\nfunction foo(x) {\n\tx.push( 4 );\n\tx; // [1,2,3,4]\n\n\t// later\n\tx.length = 0; // empty existing array in-place\n\tx.push( 4, 5, 6, 7 );\n\tx; // [4,5,6,7]\n}\n\nvar a = [1,2,3];\n\nfoo( a );\n\na; // [4,5,6,7]  not  [1,2,3,4]\n```\n\nAs you can see, `x.length = 0` and `x.push(4,5,6,7)` were not creating a new `array`, but modifying the existing shared `array`. So of course, `a` references the new `[4,5,6,7]` contents.\n\nRemember: you cannot directly control/override value-copy vs. reference -- those semantics are controlled entirely by the type of the underlying value.\n\nTo effectively pass a compound value (like an `array`) by value-copy, you need to manually make a copy of it, so that the reference passed doesn't still point to the original. For example:\n\n```js\nfoo( a.slice() );\n```\n\n`slice(..)` with no parameters by default makes an entirely new (shallow) copy of the `array`. So, we pass in a reference only to the copied `array`, and thus `foo(..)` cannot affect the contents of `a`.\n\nTo do the reverse -- pass a scalar primitive value in a way where its value updates can be seen, kinda like a reference -- you have to wrap the value in another compound value (`object`, `array`, etc) that *can* be passed by reference-copy:\n\n```js\nfunction foo(wrapper) {\n\twrapper.a = 42;\n}\n\nvar obj = {\n\ta: 2\n};\n\nfoo( obj );\n\nobj.a; // 42\n```\n\nHere, `obj` acts as a wrapper for the scalar primitive property `a`. When passed to `foo(..)`, a copy of the `obj` reference is passed in and set to the `wrapper` parameter. We now can use the `wrapper` reference to access the shared object, and update its property. 
After the function finishes, `obj.a` will see the updated value `42`.\n\nIt may occur to you that if you wanted to pass in a reference to a scalar primitive value like `2`, you could just box the value in its `Number` object wrapper (see Chapter 3).\n\nIt *is* true a copy of the reference to this `Number` object *will* be passed to the function, but unfortunately, having a reference to the shared object is not going to give you the ability to modify the shared primitive value, like you may expect:\n\n```js\nfunction foo(x) {\n\tx = x + 1;\n\tx; // 3\n}\n\nvar a = 2;\nvar b = new Number( a ); // or equivalently `Object(a)`\n\nfoo( b );\nconsole.log( b ); // 2, not 3\n```\n\nThe problem is that the underlying scalar primitive value is *not mutable* (same goes for `String` and `Boolean`). If a `Number` object holds the scalar primitive value `2`, that exact `Number` object can never be changed to hold another value; you can only create a whole new `Number` object with a different value.\n\nWhen `x` is used in the expression `x + 1`, the underlying scalar primitive value `2` is unboxed (extracted) from the `Number` object automatically, so the line `x = x + 1` very subtly changes `x` from being a shared reference to the `Number` object, to just holding the scalar primitive value `3` as a result of the addition operation `2 + 1`. Therefore, `b` on the outside still references the original unmodified/immutable `Number` object holding the value `2`.\n\nYou *can* add properties on top of the `Number` object (just not change its inner primitive value), so you could exchange information indirectly via those additional properties.\n\nThis is not all that common, however; it probably would not be considered a good practice by most developers.\n\nInstead of using the wrapper object `Number` in this way, it's probably much better to use the manual object wrapper (`obj`) approach in the earlier snippet. 
That's not to say that there are no clever uses for the boxed object wrappers like `Number` -- just that you should probably prefer the scalar primitive value form in most cases.\n\nReferences are quite powerful, but sometimes they get in your way, and sometimes you need them where they don't exist. The only control you have over reference vs. value-copy behavior is the type of the value itself, so you must indirectly influence the assignment/passing behavior by which value types you choose to use.\n\n## Review\n\nIn JavaScript, `array`s are simply numerically indexed collections of any value-type. `string`s are somewhat \"`array`-like\", but they have distinct behaviors and care must be taken if you want to treat them as `array`s. Numbers in JavaScript include both \"integers\" and floating-point values.\n\nSeveral special values are defined within the primitive types.\n\nThe `null` type has just one value: `null`, and likewise the `undefined` type has just the `undefined` value. `undefined` is basically the default value in any variable or property if no other value is present. The `void` operator lets you create the `undefined` value from any other value.\n\n`number`s include several special values, like `NaN` (supposedly \"Not a Number\", but really more appropriately \"invalid number\"); `+Infinity` and `-Infinity`; and `-0`.\n\nSimple scalar primitives (`string`s, `number`s, etc.) are assigned/passed by value-copy, but compound values (`object`s, etc.) are assigned/passed by reference-copy. References are not like references/pointers in other languages -- they're never pointed at other variables/references, only at the underlying values.\n"
  },
  {
    "path": "types & grammar/ch3.md",
    "content": "# You Don't Know JS: Types & Grammar\n# Chapter 3: Natives\n\nSeveral times in Chapters 1 and 2, we alluded to various built-ins, usually called \"natives,\" like `String` and `Number`. Let's examine those in detail now.\n\nHere's a list of the most commonly used natives:\n\n* `String()`\n* `Number()`\n* `Boolean()`\n* `Array()`\n* `Object()`\n* `Function()`\n* `RegExp()`\n* `Date()`\n* `Error()`\n* `Symbol()` -- added in ES6!\n\nAs you can see, these natives are actually built-in functions.\n\nIf you're coming to JS from a language like Java, JavaScript's `String()` will look like the `String(..)` constructor you're used to for creating string values. So, you'll quickly observe that you can do things like:\n\n```js\nvar s = new String( \"Hello World!\" );\n\nconsole.log( s.toString() ); // \"Hello World!\"\n```\n\nIt *is* true that each of these natives can be used as a native constructor. But what's being constructed may be different than you think.\n\n```js\nvar a = new String( \"abc\" );\n\ntypeof a; // \"object\" ... not \"String\"\n\na instanceof String; // true\n\nObject.prototype.toString.call( a ); // \"[object String]\"\n```\n\nThe result of the constructor form of value creation (`new String(\"abc\")`) is an object wrapper around the primitive (`\"abc\"`) value.\n\nImportantly, `typeof` shows that these objects are not their own special *types*, but more appropriately they are subtypes of the `object` type.\n\nThis object wrapper can further be observed with:\n\n```js\nconsole.log( a );\n```\n\nThe output of that statement varies depending on your browser, as developer consoles are free to choose however they feel it's appropriate to serialize the object for developer inspection.\n\n**Note:** At the time of writing, the latest Chrome prints something like this: `String {0: \"a\", 1: \"b\", 2: \"c\", length: 3, [[PrimitiveValue]]: \"abc\"}`. But older versions of Chrome used to just print this: `String {0: \"a\", 1: \"b\", 2: \"c\"}`. 
The latest Firefox currently prints `String [\"a\",\"b\",\"c\"]`, but used to print `\"abc\"` in italics, which was clickable to open the object inspector. Of course, these results are subject to rapid change and your experience may vary.\n\nThe point is, `new String(\"abc\")` creates a string wrapper object around `\"abc\"`, not just the primitive `\"abc\"` value itself.\n\n## Internal `[[Class]]`\n\nValues that are `typeof` `\"object\"` (such as an array) are additionally tagged with an internal `[[Class]]` property (think of this more as an internal *class*ification rather than related to classes from traditional class-oriented coding). This property cannot be accessed directly, but can generally be revealed indirectly by borrowing the default `Object.prototype.toString(..)` method called against the value. For example:\n\n```js\nObject.prototype.toString.call( [1,2,3] );\t\t\t// \"[object Array]\"\n\nObject.prototype.toString.call( /regex-literal/i );\t// \"[object RegExp]\"\n```\n\nSo, for the array in this example, the internal `[[Class]]` value is `\"Array\"`, and for the regular expression, it's `\"RegExp\"`. In most cases, this internal `[[Class]]` value corresponds to the built-in native constructor (see below) that's related to the value, but that's not always the case.\n\nWhat about primitive values? 
First, `null` and `undefined`:\n\n```js\nObject.prototype.toString.call( null );\t\t\t// \"[object Null]\"\nObject.prototype.toString.call( undefined );\t// \"[object Undefined]\"\n```\n\nYou'll note that there are no `Null()` or `Undefined()` native constructors, but nevertheless the `\"Null\"` and `\"Undefined\"` are the internal `[[Class]]` values exposed.\n\nBut for the other simple primitives like `string`, `number`, and `boolean`, another behavior actually kicks in, which is usually called \"boxing\" (see \"Boxing Wrappers\" section next):\n\n```js\nObject.prototype.toString.call( \"abc\" );\t// \"[object String]\"\nObject.prototype.toString.call( 42 );\t\t// \"[object Number]\"\nObject.prototype.toString.call( true );\t\t// \"[object Boolean]\"\n```\n\nIn this snippet, each of the simple primitives are automatically boxed by their respective object wrappers, which is why `\"String\"`, `\"Number\"`, and `\"Boolean\"` are revealed as the respective internal `[[Class]]` values.\n\n**Note:** The behavior of `toString()` and `[[Class]]` as illustrated here has changed a bit from ES5 to ES6, but we cover those details in the *ES6 & Beyond* title of this series.\n\n## Boxing Wrappers\n\nThese object wrappers serve a very important purpose. Primitive values don't have properties or methods, so to access `.length` or `.toString()` you need an object wrapper around the value. Thankfully, JS will automatically *box* (aka wrap) the primitive value to fulfill such accesses.\n\n```js\nvar a = \"abc\";\n\na.length; // 3\na.toUpperCase(); // \"ABC\"\n```\n\nSo, if you're going to be accessing these properties/methods on your string values regularly, like a `i < a.length` condition in a `for` loop for instance, it might seem to make sense to just have the object form of the value from the start, so the JS engine doesn't need to implicitly create it for you.\n\nBut it turns out that's a bad idea. 
Browsers long ago performance-optimized the common cases like `.length`, which means your program will *actually go slower* if you try to \"preoptimize\" by directly using the object form (which isn't on the optimized path).\n\nIn general, there's basically no reason to use the object form directly. It's better to just let the boxing happen implicitly where necessary. In other words, never do things like `new String(\"abc\")`, `new Number(42)`, etc -- always prefer using the literal primitive values `\"abc\"` and `42`.\n\n### Object Wrapper Gotchas\n\nThere are some gotchas with using the object wrappers directly that you should be aware of if you *do* choose to ever use them.\n\nFor example, consider `Boolean` wrapped values:\n\n```js\nvar a = new Boolean( false );\n\nif (!a) {\n\tconsole.log( \"Oops\" ); // never runs\n}\n```\n\nThe problem is that you've created an object wrapper around the `false` value, but objects themselves are \"truthy\" (see Chapter 4), so using the object behaves oppositely to using the underlying `false` value itself, which is quite contrary to normal expectation.\n\nIf you want to manually box a primitive value, you can use the `Object(..)` function (no `new` keyword):\n\n```js\nvar a = \"abc\";\nvar b = new String( a );\nvar c = Object( a );\n\ntypeof a; // \"string\"\ntypeof b; // \"object\"\ntypeof c; // \"object\"\n\nb instanceof String; // true\nc instanceof String; // true\n\nObject.prototype.toString.call( b ); // \"[object String]\"\nObject.prototype.toString.call( c ); // \"[object String]\"\n```\n\nAgain, using the boxed object wrapper directly (like `b` and `c` above) is usually discouraged, but there may be some rare occasions you'll run into where they may be useful.\n\n## Unboxing\n\nIf you have an object wrapper and you want to get the underlying primitive value out, you can use the `valueOf()` method:\n\n```js\nvar a = new String( \"abc\" );\nvar b = new Number( 42 );\nvar c = new Boolean( true );\n\na.valueOf(); // 
\"abc\"\nb.valueOf(); // 42\nc.valueOf(); // true\n```\n\nUnboxing can also happen implicitly, when using an object wrapper value in a way that requires the primitive value. This process (coercion) will be covered in more detail in Chapter 4, but briefly:\n\n```js\nvar a = new String( \"abc\" );\nvar b = a + \"\"; // `b` has the unboxed primitive value \"abc\"\n\ntypeof a; // \"object\"\ntypeof b; // \"string\"\n```\n\n## Natives as Constructors\n\nFor `array`, `object`, `function`, and regular-expression values, it's almost universally preferred that you use the literal form for creating the values, but the literal form creates the same sort of object as the constructor form does (that is, there is no nonwrapped value).\n\nJust as we've seen above with the other natives, these constructor forms should generally be avoided, unless you really know you need them, mostly because they introduce exceptions and gotchas that you probably don't really *want* to deal with.\n\n### `Array(..)`\n\n```js\nvar a = new Array( 1, 2, 3 );\na; // [1, 2, 3]\n\nvar b = [1, 2, 3];\nb; // [1, 2, 3]\n```\n\n**Note:** The `Array(..)` constructor does not require the `new` keyword in front of it. If you omit it, it will behave as if you have used it anyway. So `Array(1,2,3)` is the same outcome as `new Array(1,2,3)`.\n\nThe `Array` constructor has a special form where if only one `number` argument is passed, instead of providing that value as *contents* of the array, it's taken as a length to \"presize the array\" (well, sorta).\n\nThis is a terrible idea. Firstly, you can trip over that form accidentally, as it's easy to forget.\n\nBut more importantly, there's no such thing as actually presizing the array. 
Instead, what you're creating is an otherwise empty array, but setting the `length` property of the array to the numeric value specified.\n\nAn array that has no explicit values in its slots, but has a `length` property that *implies* the slots exist, is a weird exotic type of data structure in JS with some very strange and confusing behavior. The capability to create such a value comes purely from old, deprecated, historical functionalities (\"array-like objects\" like the `arguments` object).\n\n**Note:** An array with at least one \"empty slot\" in it is often called a \"sparse array.\"\n\nIt doesn't help matters that this is yet another example where browser developer consoles vary on how they represent such an object, which breeds more confusion.\n\nFor example:\n\n```js\nvar a = new Array( 3 );\n\na.length; // 3\na;\n```\n\nThe serialization of `a` in Chrome is (at the time of writing): `[ undefined x 3 ]`. **This is really unfortunate.** It implies that there are three `undefined` values in the slots of this array, when in fact the slots do not exist (so-called \"empty slots\" -- also a bad name!).\n\nTo visualize the difference, try this:\n\n```js\nvar a = new Array( 3 );\nvar b = [ undefined, undefined, undefined ];\nvar c = [];\nc.length = 3;\n\na;\nb;\nc;\n```\n\n**Note:** As you can see with `c` in this example, empty slots in an array can happen after creation of the array. By changing the `length` of an array to go beyond its number of actually-defined slot values, you implicitly introduce empty slots. In fact, you could even call `delete b[1]` in the above snippet, and it would introduce an empty slot into the middle of `b`.\n\nFor `b` (in Chrome, currently), you'll find `[ undefined, undefined, undefined ]` as the serialization, as opposed to `[ undefined x 3 ]` for `a` and `c`. Confused? Yeah, so is everyone else.\n\nWorse than that, at the time of writing, Firefox reports `[ , , , ]` for `a` and `c`. Did you catch why that's so confusing? 
Look closely. Three commas implies four slots, not three slots like we'd expect.\n\n**What!?** Firefox puts an extra `,` on the end of their serialization here because as of ES5, trailing commas in lists (array values, property lists, etc.) are allowed (and thus dropped and ignored). So if you were to type in a `[ , , , ]` value into your program or the console, you'd actually get the underlying value that's like `[ , , ]` (that is, an array with three empty slots). This choice, while confusing if reading the developer console, is defended as instead making copy-n-paste behavior accurate.\n\nIf you're shaking your head or rolling your eyes about now, you're not alone! Shrugs.\n\nUnfortunately, it gets worse. More than just confusing console output, `a` and `b` from the above code snippet actually behave the same in some cases **but differently in others**:\n\n```js\na.join( \"-\" ); // \"--\"\nb.join( \"-\" ); // \"--\"\n\na.map(function(v,i){ return i; }); // [ undefined x 3 ]\nb.map(function(v,i){ return i; }); // [ 0, 1, 2 ]\n```\n\n**Ugh.**\n\nThe `a.map(..)` call *fails* because the slots don't actually exist, so `map(..)` has nothing to iterate over. `join(..)` works differently. Basically, we can think of it implemented sort of like this:\n\n```js\nfunction fakeJoin(arr,connector) {\n\tvar str = \"\";\n\tfor (var i = 0; i < arr.length; i++) {\n\t\tif (i > 0) {\n\t\t\tstr += connector;\n\t\t}\n\t\tif (arr[i] !== undefined) {\n\t\t\tstr += arr[i];\n\t\t}\n\t}\n\treturn str;\n}\n\nvar a = new Array( 3 );\nfakeJoin( a, \"-\" ); // \"--\"\n```\n\nAs you can see, `join(..)` works by just *assuming* the slots exist and looping up to the `length` value. 
Whatever `map(..)` does internally, it (apparently) doesn't make such an assumption, so the result from the strange \"empty slots\" array is unexpected and likely to cause failure.\n\nSo, if you wanted to *actually* create an array of actual `undefined` values (not just \"empty slots\"), how could you do it (besides manually)?\n\n```js\nvar a = Array.apply( null, { length: 3 } );\na; // [ undefined, undefined, undefined ]\n```\n\nConfused? Yeah. Here's roughly how it works.\n\n`apply(..)` is a utility available to all functions, which calls the function it's used with but in a special way.\n\nThe first argument is a `this` object binding (covered in the *this & Object Prototypes* title of this series), which we don't care about here, so we set it to `null`. The second argument is supposed to be an array (or something *like* an array -- aka an \"array-like object\"). The contents of this \"array\" are \"spread\" out as arguments to the function in question.\n\nSo, `Array.apply(..)` is calling the `Array(..)` function and spreading out the values (of the `{ length: 3 }` object value) as its arguments.\n\nInside of `apply(..)`, we can envision there's another `for` loop (kinda like `join(..)` from above) that goes from `0` up to, but not including, `length` (`3` in our case).\n\nFor each index, it retrieves that key from the object. So if the array-object parameter was named `arr` internally inside of the `apply(..)` function, the property access would effectively be `arr[0]`, `arr[1]`, and `arr[2]`. 
Of course, none of those properties exist on the `{ length: 3 }` object value, so all three of those property accesses would return the value `undefined`.\n\nIn other words, it ends up calling `Array(..)` basically like this: `Array(undefined,undefined,undefined)`, which is how we end up with an array filled with `undefined` values, and not just those (crazy) empty slots.\n\nWhile `Array.apply( null, { length: 3 } )` is a strange and verbose way to create an array filled with `undefined` values, it's **vastly** better and more reliable than what you get with the footgun'ish `Array(3)` empty slots.\n\nBottom line: **never ever, under any circumstances**, should you intentionally create and use these exotic empty-slot arrays. Just don't do it. They're nuts.\n\n### `Object(..)`, `Function(..)`, and `RegExp(..)`\n\nThe `Object(..)`/`Function(..)`/`RegExp(..)` constructors are also generally optional (and thus should usually be avoided unless specifically called for):\n\n```js\nvar c = new Object();\nc.foo = \"bar\";\nc; // { foo: \"bar\" }\n\nvar d = { foo: \"bar\" };\nd; // { foo: \"bar\" }\n\nvar e = new Function( \"a\", \"return a * 2;\" );\nvar f = function(a) { return a * 2; };\nfunction g(a) { return a * 2; }\n\nvar h = new RegExp( \"^a*b+\", \"g\" );\nvar i = /^a*b+/g;\n```\n\nThere's practically no reason to ever use the `new Object()` constructor form, especially since it forces you to add properties one-by-one instead of many at once in the object literal form.\n\nThe `Function` constructor is helpful only in the rarest of cases, where you need to dynamically define a function's parameters and/or its function body. 
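As a rough sketch of that rare legitimate case -- the `getPath` name is purely illustrative, and the `path` string is assumed to come from a fully trusted source (configuration, not user input):

```js
// assumption: `path` is a trusted, known-safe property path string
var path = "a.b";

// dynamically build a function body from that string
var getPath = new Function( "obj", "return obj." + path + ";" );

getPath( { a: { b: 42 } } );	// 42
```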
**Do not just treat `Function(..)` as an alternate form of `eval(..)`.** You will almost never need to dynamically define a function in this way.\n\nRegular expressions defined in the literal form (`/^a*b+/g`) are strongly preferred, not just for ease of syntax but for performance reasons -- the JS engine precompiles and caches them before code execution. Unlike the other constructor forms we've seen so far, `RegExp(..)` has some reasonable utility: to dynamically define the pattern for a regular expression.\n\n```js\nvar name = \"Kyle\";\nvar namePattern = new RegExp( \"\\\\b(?:\" + name + \")+\\\\b\", \"ig\" );\n\nvar matches = someText.match( namePattern );\n```\n\nThis kind of scenario legitimately occurs in JS programs from time to time, so you'd need to use the `new RegExp(\"pattern\",\"flags\")` form.\n\n### `Date(..)` and `Error(..)`\n\nThe `Date(..)` and `Error(..)` native constructors are much more useful than the other natives, because there is no literal form for either.\n\nTo create a date object value, you must use `new Date()`. The `Date(..)` constructor accepts optional arguments to specify the date/time to use, but if omitted, the current date/time is assumed.\n\nBy far the most common reason you construct a date object is to get the current timestamp value (a signed integer number of milliseconds since Jan 1, 1970). You can do this by calling `getTime()` on a date object instance.\n\nBut an even easier way is to just call the static helper function defined as of ES5: `Date.now()`. And to polyfill that for pre-ES5 is pretty easy:\n\n```js\nif (!Date.now) {\n\tDate.now = function(){\n\t\treturn (new Date()).getTime();\n\t};\n}\n```\n\n**Note:** If you call `Date()` without `new`, you'll get back a string representation of the date/time at that moment. 
The exact form of this representation is not specified in the language spec, though browsers tend to agree on something close to: `\"Fri Jul 18 2014 00:31:02 GMT-0500 (CDT)\"`.\n\nThe `Error(..)` constructor (much like `Array()` above) behaves the same with the `new` keyword present or omitted.\n\nThe main reason you'd want to create an error object is that it captures the current execution stack context into the object (in most JS engines, revealed as a read-only `.stack` property once constructed). This stack context includes the function call-stack and the line-number where the error object was created, which makes debugging that error much easier.\n\nYou would typically use such an error object with the `throw` operator:\n\n```js\nfunction foo(x) {\n\tif (!x) {\n\t\tthrow new Error( \"x wasn't provided\" );\n\t}\n\t// ..\n}\n```\n\nError object instances generally have at least a `message` property, and sometimes other properties (which you should treat as read-only), like `type`. However, other than inspecting the above-mentioned `stack` property, it's usually best to just call `toString()` on the error object (either explicitly, or implicitly through coercion -- see Chapter 4) to get a friendly-formatted error message.\n\n**Tip:** Technically, in addition to the general `Error(..)` native, there are several other specific-error-type natives: `EvalError(..)`, `RangeError(..)`, `ReferenceError(..)`, `SyntaxError(..)`, `TypeError(..)`, and `URIError(..)`. But it's very rare to manually use these specific error natives. They are automatically used if your program actually suffers from a real exception (such as referencing an undeclared variable and getting a `ReferenceError` error).\n\n### `Symbol(..)`\n\nNew as of ES6, an additional primitive value type has been added, called \"Symbol\". Symbols are special \"unique\" (not strictly guaranteed!) values that can be used as properties on objects with little fear of any collision. 
They're primarily designed for special built-in behaviors of ES6 constructs, but you can also define your own symbols.\n\nSymbols can be used as property names, but you cannot see or access the actual value of a symbol from your program, nor from the developer console. If you evaluate a symbol in the developer console, what's shown looks like `Symbol(Symbol.create)`, for example.\n\nThere are several predefined symbols in ES6, accessed as static properties of the `Symbol` function object, like `Symbol.create`, `Symbol.iterator`, etc. To use them, do something like:\n\n```js\nobj[Symbol.iterator] = function(){ /*..*/ };\n```\n\nTo define your own custom symbols, use the `Symbol(..)` native. The `Symbol(..)` native \"constructor\" is unique in that you're not allowed to use `new` with it, as doing so will throw an error.\n\n```js\nvar mysym = Symbol( \"my own symbol\" );\nmysym;\t\t\t\t// Symbol(my own symbol)\nmysym.toString();\t// \"Symbol(my own symbol)\"\ntypeof mysym; \t\t// \"symbol\"\n\nvar a = { };\na[mysym] = \"foobar\";\n\nObject.getOwnPropertySymbols( a );\n// [ Symbol(my own symbol) ]\n```\n\nWhile symbols are not actually private (`Object.getOwnPropertySymbols(..)` reflects on the object and reveals the symbols quite publicly), using them for private or special properties is likely their primary use-case. 
For most developers, they may take the place of property names with `_` underscore prefixes, which are almost always by convention signals to say, \"hey, this is a private/special/internal property, so leave it alone!\"\n\n**Note:** `Symbol`s are *not* `object`s, they are simple scalar primitives.\n\n### Native Prototypes\n\nEach of the built-in native constructors has its own `.prototype` object -- `Array.prototype`, `String.prototype`, etc.\n\nThese objects contain behavior unique to their particular object subtype.\n\nFor example, all string objects, and by extension (via boxing) `string` primitives, have access to default behavior as methods defined on the `String.prototype` object.\n\n**Note:** By documentation convention, `String.prototype.XYZ` is shortened to `String#XYZ`, and likewise for all the other `.prototype`s.\n\n* `String#indexOf(..)`: find the position in the string of another substring\n* `String#charAt(..)`: access the character at a position in the string\n* `String#substr(..)`, `String#substring(..)`, and `String#slice(..)`: extract a portion of the string as a new string\n* `String#toUpperCase()` and `String#toLowerCase()`: create a new string that's converted to either uppercase or lowercase\n* `String#trim()`: create a new string that's stripped of any trailing or leading whitespace\n\nNone of the methods modify the string *in place*. Modifications (like case conversion or trimming) create a new value from the existing value.\n\nBy virtue of prototype delegation (see the *this & Object Prototypes* title in this series), any string value can access these methods:\n\n```js\nvar a = \" abc \";\n\na.indexOf( \"c\" ); // 3\na.toUpperCase(); // \" ABC \"\na.trim(); // \"abc\"\n```\n\nThe other constructor prototypes contain behaviors appropriate to their types, such as `Number#toFixed(..)` (stringifying a number with a fixed number of decimal digits) and `Array#concat(..)` (merging arrays). 
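For instance, exercising those two just-mentioned prototype methods (sample values chosen arbitrarily):

```js
var n = 42.1357;

n.toFixed( 2 );			// "42.14" -- a new string; `n` itself is untouched

[1,2].concat( [3,4] );	// [1,2,3,4] -- a new array; neither input is modified
```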
All functions have access to `apply(..)`, `call(..)`, and `bind(..)` because `Function.prototype` defines them.\n\nBut, some of the native prototypes aren't *just* plain objects:\n\n```js\ntypeof Function.prototype;\t\t\t// \"function\"\nFunction.prototype();\t\t\t\t// it's an empty function!\n\nRegExp.prototype.toString();\t\t// \"/(?:)/\" -- empty regex\n\"abc\".match( RegExp.prototype );\t// [\"\"]\n```\n\nA particularly bad idea, you can even modify these native prototypes (not just adding properties as you're probably familiar with):\n\n```js\nArray.isArray( Array.prototype );\t// true\nArray.prototype.push( 1, 2, 3 );\t// 3\nArray.prototype;\t\t\t\t\t// [1,2,3]\n\n// don't leave it that way, though, or expect weirdness!\n// reset the `Array.prototype` to empty\nArray.prototype.length = 0;\n```\n\nAs you can see, `Function.prototype` is a function, `RegExp.prototype` is a regular expression, and `Array.prototype` is an array. Interesting and cool, huh?\n\n#### Prototypes As Defaults\n\n`Function.prototype` being an empty function, `RegExp.prototype` being an \"empty\" (e.g., non-matching) regex, and `Array.prototype` being an empty array, make them all nice \"default\" values to assign to variables if those variables wouldn't already have had a value of the proper type.\n\nFor example:\n\n```js\nfunction isThisCool(vals,fn,rx) {\n\tvals = vals || Array.prototype;\n\tfn = fn || Function.prototype;\n\trx = rx || RegExp.prototype;\n\n\treturn rx.test(\n\t\tvals.map( fn ).join( \"\" )\n\t);\n}\n\nisThisCool();\t\t// true\n\nisThisCool(\n\t[\"a\",\"b\",\"c\"],\n\tfunction(v){ return v.toUpperCase(); },\n\t/D/\n);\t\t\t\t\t// false\n```\n\n**Note:** As of ES6, we don't need to use the `vals = vals || ..` default value syntax trick (see Chapter 4) anymore, because default values can be set for parameters via native syntax in the function declaration (see Chapter 5).\n\nOne minor side-benefit of this approach is that the `.prototype`s are already created and built-in, 
thus created *only once*. By contrast, using `[]`, `function(){}`, and `/(?:)/` values themselves for those defaults would (likely, depending on engine implementations) be recreating those values (and probably garbage-collecting them later) for *each call* of `isThisCool(..)`. That could be memory/CPU wasteful.\n\nAlso, be very careful not to use `Array.prototype` as a default value **that will subsequently be modified**. In this example, `vals` is used read-only, but if you were to instead make in-place changes to `vals`, you would actually be modifying `Array.prototype` itself, which would lead to the gotchas mentioned earlier!\n\n**Note:** While we're pointing out these native prototypes and some usefulness, be cautious of relying on them and even more wary of modifying them in any way. See Appendix A \"Native Prototypes\" for more discussion.\n\n## Review\n\nJavaScript provides object wrappers around primitive values, known as natives (`String`, `Number`, `Boolean`, etc). These object wrappers give the values access to behaviors appropriate for each object subtype (`String#trim()` and `Array#concat(..)`).\n\nIf you have a simple scalar primitive value like `\"abc\"` and you access its `length` property or some `String.prototype` method, JS automatically \"boxes\" the value (wraps it in its respective object wrapper) so that the property/method accesses can be fulfilled.\n"
  },
  {
    "path": "types & grammar/ch4.md",
    "content": "# You Don't Know JS: Types & Grammar\n# Chapter 4: Coercion\n\nNow that we much more fully understand JavaScript's types and values, we turn our attention to a very controversial topic: coercion.\n\nAs we mentioned in Chapter 1, the debates over whether coercion is a useful feature or a flaw in the design of the language (or somewhere in between!) have raged since day one. If you've read other popular books on JS, you know that the overwhelmingly prevalent *message* out there is that coercion is magical, evil, confusing, and just downright a bad idea.\n\nIn the same overall spirit of this book series, rather than running away from coercion because everyone else does, or because you get bitten by some quirk, I think you should run toward that which you don't understand and seek to *get it* more fully.\n\nOur goal is to fully explore the pros and cons (yes, there *are* pros!) of coercion, so that you can make an informed decision on its appropriateness in your program.\n\n## Converting Values\n\nConverting a value from one type to another is often called \"type casting,\" when done explicitly, and \"coercion\" when done implicitly (forced by the rules of how a value is used).\n\n**Note:** It may not be obvious, but JavaScript coercions always result in one of the scalar primitive (see Chapter 2) values, like `string`, `number`, or `boolean`. There is no coercion that results in a complex value like `object` or `function`. 
Chapter 3 covers \"boxing,\" which wraps scalar primitive values in their `object` counterparts, but this is not really coercion in an accurate sense.\n\nAnother way these terms are often distinguished is as follows: \"type casting\" (or \"type conversion\") occurs in statically typed languages at compile time, while \"type coercion\" is a runtime conversion for dynamically typed languages.\n\nHowever, in JavaScript, most people refer to all these types of conversions as *coercion*, so the way I prefer to distinguish is to say \"implicit coercion\" vs. \"explicit coercion.\"\n\nThe difference should be obvious: \"explicit coercion\" is when it is obvious from looking at the code that a type conversion is intentionally occurring, whereas \"implicit coercion\" is when the type conversion will occur as a less obvious side effect of some other intentional operation.\n\nFor example, consider these two approaches to coercion:\n\n```js\nvar a = 42;\n\nvar b = a + \"\";\t\t\t// implicit coercion\n\nvar c = String( a );\t// explicit coercion\n```\n\nFor `b`, the coercion that occurs happens implicitly, because the `+` operator combined with one of the operands being a `string` value (`\"\"`) will insist on the operation being a `string` concatenation (adding two strings together), which *as a (hidden) side effect* will force the `42` value in `a` to be coerced to its `string` equivalent: `\"42\"`.\n\nBy contrast, the `String(..)` function makes it pretty obvious that it's explicitly taking the value in `a` and coercing it to a `string` representation.\n\nBoth approaches accomplish the same effect: `\"42\"` comes from `42`. But it's the *how* that is at the heart of the heated debates over JavaScript coercion.\n\n**Note:** Technically, there's some nuanced behavioral difference here beyond the stylistic difference. 
We cover that in more detail later in the chapter, in the \"Implicitly: Strings <--> Numbers\" section.\n\nThe terms \"explicit\" and \"implicit,\" or \"obvious\" and \"hidden side effect,\" are *relative*.\n\nIf you know exactly what `a + \"\"` is doing and you're intentionally doing that to coerce to a `string`, you might feel the operation is sufficiently \"explicit.\" Conversely, if you've never seen the `String(..)` function used for `string` coercion, its behavior might seem hidden enough as to feel \"implicit\" to you.\n\nBut we're having this discussion of \"explicit\" vs. \"implicit\" based on the likely opinions of an *average, reasonably informed, but not expert or JS specification devotee* developer. To whatever extent you do or do not find yourself fitting neatly in that bucket, you will need to adjust your perspective on our observations here accordingly.\n\nJust remember: it's often rare that we write our code and are the only ones who ever read it. Even if you're an expert on all the ins and outs of JS, consider how a less experienced teammate of yours will feel when they read your code. Will it be \"explicit\" or \"implicit\" to them in the same way it is for you?\n\n## Abstract Value Operations\n\nBefore we can explore *explicit* vs *implicit* coercion, we need to learn the basic rules that govern how values *become* either a `string`, `number`, or `boolean`. The ES5 spec in section 9 defines several \"abstract operations\" (fancy spec-speak for \"internal-only operation\") with the rules of value conversion. 
We will specifically pay attention to: `ToString`, `ToNumber`, and `ToBoolean`, and to a lesser extent, `ToPrimitive`.\n\n### `ToString`\n\nWhen any non-`string` value is coerced to a `string` representation, the conversion is handled by the `ToString` abstract operation in section 9.8 of the specification.\n\nBuilt-in primitive values have natural stringification: `null` becomes `\"null\"`, `undefined` becomes `\"undefined\"` and `true` becomes `\"true\"`. `number`s are generally expressed in the natural way you'd expect, but as we discussed in Chapter 2, very small or very large `numbers` are represented in exponent form:\n\n```js\n// multiplying `1.07` by `1000`, seven times over\nvar a = 1.07 * 1000 * 1000 * 1000 * 1000 * 1000 * 1000 * 1000;\n\n// seven times three digits => 21 digits\na.toString(); // \"1.07e21\"\n```\n\nFor regular objects, unless you specify your own, the default `toString()` (located in `Object.prototype.toString()`) will return the *internal `[[Class]]`* (see Chapter 3), like for instance `\"[object Object]\"`.\n\nBut as shown earlier, if an object has its own `toString()` method on it, and you use that object in a `string`-like way, its `toString()` will automatically be called, and the `string` result of that call will be used instead.\n\n**Note:** The way an object is coerced to a `string` technically goes through the `ToPrimitive` abstract operation (ES5 spec, section 9.1), but those nuanced details are covered in more detail in the `ToNumber` section later in this chapter, so we will skip over them here.\n\nArrays have an overridden default `toString()` that stringifies as the (string) concatenation of all its values (each stringified themselves), with `\",\"` in between each value:\n\n```js\nvar a = [1,2,3];\n\na.toString(); // \"1,2,3\"\n```\n\nAgain, `toString()` can either be called explicitly, or it will automatically be called if a non-`string` is used in a `string` context.\n\n#### JSON Stringification\n\nAnother task that 
seems awfully related to `ToString` is when you use the `JSON.stringify(..)` utility to serialize a value to a JSON-compatible `string` value.\n\nIt's important to note that this stringification is not exactly the same thing as coercion. But since it's related to the `ToString` rules above, we'll take a slight diversion to cover JSON stringification behaviors here.\n\nFor most simple values, JSON stringification behaves basically the same as `toString()` conversions, except that the serialization result is *always a `string`*:\n\n```js\nJSON.stringify( 42 );\t// \"42\"\nJSON.stringify( \"42\" );\t// \"\"42\"\" (a string with a quoted string value in it)\nJSON.stringify( null );\t// \"null\"\nJSON.stringify( true );\t// \"true\"\n```\n\nAny *JSON-safe* value can be stringified by `JSON.stringify(..)`. But what is *JSON-safe*? Any value that can be represented validly in a JSON representation.\n\nIt may be easier to consider values that are **not** JSON-safe. Some examples: `undefined`s, `function`s, (ES6+) `symbol`s, and `object`s with circular references (where property references in an object structure create a never-ending cycle through each other). These are all illegal values for a standard JSON structure, mostly because they aren't portable to other languages that consume JSON values.\n\nThe `JSON.stringify(..)` utility will automatically omit `undefined`, `function`, and `symbol` values when it comes across them. If such a value is found in an `array`, that value is replaced by `null` (so that the array position information isn't altered). 
If found as a property of an `object`, that property will simply be excluded.\n\nConsider:\n\n```js\nJSON.stringify( undefined );\t\t\t\t\t// undefined\nJSON.stringify( function(){} );\t\t\t\t\t// undefined\n\nJSON.stringify( [1,undefined,function(){},4] );\t// \"[1,null,null,4]\"\nJSON.stringify( { a:2, b:function(){} } );\t\t// \"{\"a\":2}\"\n```\n\nBut if you try to `JSON.stringify(..)` an `object` with circular reference(s) in it, an error will be thrown.\n\nJSON stringification has the special behavior that if an `object` value has a `toJSON()` method defined, this method will be called first to get a value to use for serialization.\n\nIf you intend to JSON stringify an object that may contain illegal JSON value(s), or if you just have values in the `object` that aren't appropriate for the serialization, you should define a `toJSON()` method for it that returns a *JSON-safe* version of the `object`.\n\nFor example:\n\n```js\nvar o = { };\n\nvar a = {\n\tb: 42,\n\tc: o,\n\td: function(){}\n};\n\n// create a circular reference inside `a`\no.e = a;\n\n// would throw an error on the circular reference\n// JSON.stringify( a );\n\n// define a custom JSON value serialization\na.toJSON = function() {\n\t// only include the `b` property for serialization\n\treturn { b: this.b };\n};\n\nJSON.stringify( a ); // \"{\"b\":42}\"\n```\n\nIt's a very common misconception that `toJSON()` should return a JSON stringification representation. That's probably incorrect, unless you're wanting to actually stringify the `string` itself (usually not!). 
`toJSON()` should return the actual regular value (of whatever type) that's appropriate, and `JSON.stringify(..)` itself will handle the stringification.\n\nIn other words, `toJSON()` should be interpreted as \"to a JSON-safe value suitable for stringification,\" not \"to a JSON string\" as many developers mistakenly assume.\n\nConsider:\n\n```js\nvar a = {\n\tval: [1,2,3],\n\n\t// probably correct!\n\ttoJSON: function(){\n\t\treturn this.val.slice( 1 );\n\t}\n};\n\nvar b = {\n\tval: [1,2,3],\n\n\t// probably incorrect!\n\ttoJSON: function(){\n\t\treturn \"[\" +\n\t\t\tthis.val.slice( 1 ).join() +\n\t\t\"]\";\n\t}\n};\n\nJSON.stringify( a ); // \"[2,3]\"\n\nJSON.stringify( b ); // \"\"[2,3]\"\"\n```\n\nIn the second call, we stringified the returned `string` rather than the `array` itself, which was probably not what we wanted to do.\n\nWhile we're talking about `JSON.stringify(..)`, let's discuss some lesser-known functionalities that can still be very useful.\n\nAn optional second argument can be passed to `JSON.stringify(..)` that is called *replacer*. This argument can either be an `array` or a `function`. It's used to customize the recursive serialization of an `object` by providing a filtering mechanism for which properties should and should not be included, in a similar way to how `toJSON()` can prepare a value for serialization.\n\nIf *replacer* is an `array`, it should be an `array` of `string`s, each of which will specify a property name that is allowed to be included in the serialization of the `object`. If a property exists that isn't in this list, it will be skipped.\n\nIf *replacer* is a `function`, it will be called once for the `object` itself, and then once for each property in the `object`, and each time is passed two arguments, *key* and *value*. To skip a *key* in the serialization, return `undefined`. 
Otherwise, return the *value* provided.\n\n```js\nvar a = {\n\tb: 42,\n\tc: \"42\",\n\td: [1,2,3]\n};\n\nJSON.stringify( a, [\"b\",\"c\"] ); // \"{\"b\":42,\"c\":\"42\"}\"\n\nJSON.stringify( a, function(k,v){\n\tif (k !== \"c\") return v;\n} );\n// \"{\"b\":42,\"d\":[1,2,3]}\"\n```\n\n**Note:** In the `function` *replacer* case, the key argument `k` is the empty string `\"\"` for the first call (where the `a` object itself is being passed in). The `if` statement **filters out** the property named `\"c\"`. Stringification is recursive, so the `[1,2,3]` array has each of its values (`1`, `2`, and `3`) passed as `v` to *replacer*, with indexes (`0`, `1`, and `2`) as `k`.\n\nA third optional argument can also be passed to `JSON.stringify(..)`, called *space*, which is used as indentation for prettier human-friendly output. *space* can be a positive integer to indicate how many space characters should be used at each indentation level. Or, *space* can be a `string`, in which case up to the first ten characters of its value will be used for each indentation level.\n\n```js\nvar a = {\n\tb: 42,\n\tc: \"42\",\n\td: [1,2,3]\n};\n\nJSON.stringify( a, null, 3 );\n// \"{\n//    \"b\": 42,\n//    \"c\": \"42\",\n//    \"d\": [\n//       1,\n//       2,\n//       3\n//    ]\n// }\"\n\nJSON.stringify( a, null, \"-----\" );\n// \"{\n// -----\"b\": 42,\n// -----\"c\": \"42\",\n// -----\"d\": [\n// ----------1,\n// ----------2,\n// ----------3\n// -----]\n// }\"\n```\n\nRemember, `JSON.stringify(..)` is not directly a form of coercion. We covered it here, however, for two reasons that relate its behavior to `ToString` coercion:\n\n1. `string`, `number`, `boolean`, and `null` values all stringify for JSON basically the same as how they coerce to `string` values via the rules of the `ToString` abstract operation.\n2. 
If you pass an `object` value to `JSON.stringify(..)`, and that `object` has a `toJSON()` method on it, `toJSON()` is automatically called to (sort of) \"coerce\" the value to be *JSON-safe* before stringification.\n\n### `ToNumber`\n\nIf any non-`number` value is used in a way that requires it to be a `number`, such as a mathematical operation, the ES5 spec defines the `ToNumber` abstract operation in section 9.3.\n\nFor example, `true` becomes `1` and `false` becomes `0`. `undefined` becomes `NaN`, but (curiously) `null` becomes `0`.\n\n`ToNumber` for a `string` value essentially works for the most part like the rules/syntax for numeric literals (see Chapter 3). If it fails, the result is `NaN` (instead of a syntax error as with `number` literals). One example difference is that `0`-prefixed octal numbers are not handled as octals (just as normal base-10 decimals) in this operation, though such octals are valid as `number` literals (see Chapter 2).\n\n**Note:** The differences between `number` literal grammar and `ToNumber` on a `string` value are subtle and highly nuanced, and thus will not be covered further here. Consult section 9.3.1 of the ES5 spec for more information.\n\nObjects (and arrays) will first be converted to their primitive value equivalent, and the resulting value (if a primitive but not already a `number`) is coerced to a `number` according to the `ToNumber` rules just mentioned.\n\nTo convert to this primitive value equivalent, the `ToPrimitive` abstract operation (ES5 spec, section 9.1) will consult the value (using the internal `DefaultValue` operation -- ES5 spec, section 8.12.8) in question to see if it has a `valueOf()` method. If `valueOf()` is available and it returns a primitive value, *that* value is used for the coercion. 
If not, but `toString()` is available, it will provide the value for the coercion.\n\nIf neither operation can provide a primitive value, a `TypeError` is thrown.\n\nAs of ES5, you can create such a noncoercible object -- one without `valueOf()` and `toString()` -- if it has a `null` value for its `[[Prototype]]`, typically created with `Object.create(null)`. See the *this & Object Prototypes* title of this series for more information on `[[Prototype]]`s.\n\n**Note:** We cover how to coerce to `number`s later in this chapter in detail, but for this next code snippet, just assume the `Number(..)` function does so.\n\nConsider:\n\n```js\nvar a = {\n\tvalueOf: function(){\n\t\treturn \"42\";\n\t}\n};\n\nvar b = {\n\ttoString: function(){\n\t\treturn \"42\";\n\t}\n};\n\nvar c = [4,2];\nc.toString = function(){\n\treturn this.join( \"\" );\t// \"42\"\n};\n\nNumber( a );\t\t\t// 42\nNumber( b );\t\t\t// 42\nNumber( c );\t\t\t// 42\nNumber( \"\" );\t\t\t// 0\nNumber( [] );\t\t\t// 0\nNumber( [ \"abc\" ] );\t// NaN\n```\n\n### `ToBoolean`\n\nNext, let's have a little chat about how `boolean`s behave in JS. There's **lots of confusion and misconception** floating out there around this topic, so pay close attention!\n\nFirst and foremost, JS has actual keywords `true` and `false`, and they behave exactly as you'd expect of `boolean` values. It's a common misconception that the values `1` and `0` are identical to `true`/`false`. While that may be true in other languages, in JS the `number`s are `number`s and the `boolean`s are `boolean`s. You can coerce `1` to `true` (and vice versa) or `0` to `false` (and vice versa). But they're not the same.\n\n#### Falsy Values\n\nBut that's not the end of the story. We need to discuss how values other than the two `boolean`s behave whenever you coerce *to* their `boolean` equivalent.\n\nAll of JavaScript's values can be divided into two categories:\n\n1. values that will become `false` if coerced to `boolean`\n2. 
everything else (which will obviously become `true`)\n\nI'm not just being facetious. The JS spec defines a specific, narrow list of values that will coerce to `false` when coerced to a `boolean` value.\n\nHow do we know what the list of values is? In the ES5 spec, section 9.2 defines a `ToBoolean` abstract operation, which says exactly what happens for all the possible values when you try to coerce them \"to boolean.\"\n\nFrom that table, we get the following as the so-called \"falsy\" values list:\n\n* `undefined`\n* `null`\n* `false`\n* `+0`, `-0`, and `NaN`\n* `\"\"`\n\nThat's it. If a value is on that list, it's a \"falsy\" value, and it will coerce to `false` if you force a `boolean` coercion on it.\n\nBy logical conclusion, if a value is *not* on that list, it must be on *another list*, which we call the \"truthy\" values list. But JS doesn't really define a \"truthy\" list per se. It gives some examples, such as saying explicitly that all objects are truthy, but mostly the spec just implies: **anything not explicitly on the falsy list is therefore truthy.**\n\n#### Falsy Objects\n\nWait a minute, that section title even sounds contradictory. I literally *just said* the spec calls all objects truthy, right? There should be no such thing as a \"falsy object.\"\n\nWhat could that possibly even mean?\n\nYou might be tempted to think it means an object wrapper (see Chapter 3) around a falsy value (such as `\"\"`, `0` or `false`). But don't fall into that *trap*.\n\n**Note:** That's a subtle specification joke some of you may get.\n\nConsider:\n\n```js\nvar a = new Boolean( false );\nvar b = new Number( 0 );\nvar c = new String( \"\" );\n```\n\nWe know all three values here are objects (see Chapter 3) wrapped around obviously falsy values. But do these objects behave as `true` or as `false`? 
That's easy to answer:\n\n```js\nvar d = Boolean( a && b && c );\n\nd; // true\n```\n\nSo, all three behave as `true`, as that's the only way `d` could end up as `true`.\n\n**Tip:** Notice the `Boolean( .. )` wrapped around the `a && b && c` expression -- you might wonder why that's there. We'll come back to that later in this chapter, so make a mental note of it. For a sneak-peek (trivia-wise), try for yourself what `d` will be if you just do `d = a && b && c` without the `Boolean( .. )` call!\n\nSo, if \"falsy objects\" are **not just objects wrapped around falsy values**, what the heck are they?\n\nThe tricky part is that they can show up in your JS program, but they're not actually part of JavaScript itself.\n\n**What!?**\n\nThere are certain cases where browsers have created their own sort of *exotic* values behavior, namely this idea of \"falsy objects,\" on top of regular JS semantics.\n\nA \"falsy object\" is a value that looks and acts like a normal object (properties, etc.), but when you coerce it to a `boolean`, it coerces to a `false` value.\n\n**Why!?**\n\nThe most well-known case is `document.all`: an array-like (object) provided to your JS program *by the DOM* (not the JS engine itself), which exposes elements in your page to your JS program. It *used* to behave like a normal object--it would act truthy. But not anymore.\n\n`document.all` itself was never really \"standard\" and has long since been deprecated/abandoned.\n\n\"Can't they just remove it, then?\" Sorry, nice try. Wish they could. But there's far too many legacy JS code bases out there that rely on using it.\n\nSo, why make it act falsy? Because coercions of `document.all` to `boolean` (like in `if` statements) were almost always used as a means of detecting old, nonstandard IE.\n\nIE has long since come up to standards compliance, and in many cases is pushing the web forward as much or more than any other browser. 
But all that old `if (document.all) { /* it's IE */ }` code is still out there, and much of it is probably never going away. All this legacy code is still assuming it's running in decade-old IE, which just leads to a bad browsing experience for IE users.\n\nSo, we can't remove `document.all` completely, but IE doesn't want `if (document.all) { .. }` code to work anymore, so that users in modern IE get new, standards-compliant code logic.\n\n\"What should we do?\" **\"I've got it! Let's bastardize the JS type system and pretend that `document.all` is falsy!\"**\n\nUgh. That sucks. It's a crazy gotcha that most JS developers don't understand. But the alternative (doing nothing about the above no-win problems) sucks *just a little bit more*.\n\nSo... that's what we've got: crazy, nonstandard \"falsy objects\" added to JavaScript by the browsers. Yay!\n\n#### Truthy Values\n\nBack to the truthy list. What exactly are the truthy values? Remember: **a value is truthy if it's not on the falsy list.**\n\nConsider:\n\n```js\nvar a = \"false\";\nvar b = \"0\";\nvar c = \"''\";\n\nvar d = Boolean( a && b && c );\n\nd;\n```\n\nWhat value do you expect `d` to have here? It's gotta be either `true` or `false`.\n\nIt's `true`. Why? Because despite the contents of those `string` values looking like falsy values, the `string` values themselves are all truthy, because `\"\"` is the only `string` value on the falsy list.\n\nWhat about these?\n\n```js\nvar a = [];\t\t\t\t// empty array -- truthy or falsy?\nvar b = {};\t\t\t\t// empty object -- truthy or falsy?\nvar c = function(){};\t// empty function -- truthy or falsy?\n\nvar d = Boolean( a && b && c );\n\nd;\n```\n\nYep, you guessed it, `d` is still `true` here. Why? Same reason as before. Despite what it may seem like, `[]`, `{}`, and `function(){}` are *not* on the falsy list, and thus are truthy values.\n\nIn other words, the truthy list is infinitely long. It's impossible to make such a list. 
You can only make a finite falsy list and consult *it*.\n\nTake five minutes, write the falsy list on a post-it note for your computer monitor, or memorize it if you prefer. Either way, you'll easily be able to construct a virtual truthy list whenever you need it by simply asking if it's on the falsy list or not.\n\nThe importance of truthy and falsy is in understanding how a value will behave if you coerce it (either explicitly or implicitly) to a `boolean` value. Now that you have those two lists in mind, we can dive into coercion examples themselves.\n\n## Explicit Coercion\n\n*Explicit* coercion refers to type conversions that are obvious and explicit. There's a wide range of type conversion usage that clearly falls under the *explicit* coercion category for most developers.\n\nThe goal here is to identify patterns in our code where we can make it clear and obvious that we're converting a value from one type to another, so as to not leave potholes for future developers to trip into. The more explicit we are, the more likely someone later will be able to read our code and understand without undue effort what our intent was.\n\nIt would be hard to find any salient disagreements with *explicit* coercion, as it most closely aligns with how the commonly accepted practice of type conversion works in statically typed languages. As such, we'll take for granted (for now) that *explicit* coercion can be agreed upon to not be evil or controversial. We'll revisit this later, though.\n\n### Explicitly: Strings <--> Numbers\n\nWe'll start with the simplest and perhaps most common coercion operation: coercing values between `string` and `number` representation.\n\nTo coerce between `string`s and `number`s, we use the built-in `String(..)` and `Number(..)` functions (which we referred to as \"native constructors\" in Chapter 3), but **very importantly**, we do not use the `new` keyword in front of them. 
As such, we're not creating object wrappers.\n\nInstead, we're actually *explicitly coercing* between the two types:\n\n```js\nvar a = 42;\nvar b = String( a );\n\nvar c = \"3.14\";\nvar d = Number( c );\n\nb; // \"42\"\nd; // 3.14\n```\n\n`String(..)` coerces from any other value to a primitive `string` value, using the rules of the `ToString` operation discussed earlier. `Number(..)` coerces from any other value to a primitive `number` value, using the rules of the `ToNumber` operation discussed earlier.\n\nI call this *explicit* coercion because in general, it's pretty obvious to most developers that the end result of these operations is the applicable type conversion.\n\nIn fact, this usage actually looks a lot like it does in some other statically typed languages.\n\nFor example, in C/C++, you can say either `(int)x` or `int(x)`, and both will convert the value in `x` to an integer. Both forms are valid, but many prefer the latter, which kinda looks like a function call. In JavaScript, when you say `Number(x)`, it looks awfully similar. Does it matter that it's *actually* a function call in JS? Not really.\n\nBesides `String(..)` and `Number(..)`, there are other ways to \"explicitly\" convert these values between `string` and `number`:\n\n```js\nvar a = 42;\nvar b = a.toString();\n\nvar c = \"3.14\";\nvar d = +c;\n\nb; // \"42\"\nd; // 3.14\n```\n\nCalling `a.toString()` is ostensibly explicit (pretty clear that \"toString\" means \"to a string\"), but there's some hidden implicitness here. `toString()` cannot be called on a *primitive* value like `42`. So JS automatically \"boxes\" (see Chapter 3) `42` in an object wrapper, so that `toString()` can be called against the object. In other words, you might call it \"explicitly implicit.\"\n\n`+c` here is showing the *unary operator* form (operator with only one operand) of the `+` operator. 
Instead of performing mathematic addition (or string concatenation -- see below), the unary `+` explicitly coerces its operand (`c`) to a `number` value.\n\nIs `+c` *explicit* coercion? Depends on your experience and perspective. If you know (which you do, now!) that unary `+` is explicitly intended for `number` coercion, then it's pretty explicit and obvious. However, if you've never seen it before, it can seem awfully confusing, implicit, with hidden side effects, etc.\n\n**Note:** The generally accepted perspective in the open-source JS community is that unary `+` is an accepted form of *explicit* coercion.\n\nEven if you really like the `+c` form, there are definitely places where it can look awfully confusing. Consider:\n\n```js\nvar c = \"3.14\";\nvar d = 5+ +c;\n\nd; // 8.14\n```\n\nThe unary `-` operator also coerces like `+` does, but it also flips the sign of the number. However, you cannot put two `-` operators directly next to each other (as `--`) to unflip the sign, as that's parsed as the decrement operator. Instead, you would need to do: `- -\"3.14\"` with a space in between, and that would result in coercion to `3.14`.\n\nYou can probably dream up all sorts of hideous combinations of binary operators (like `+` for addition) next to the unary form of an operator. Here's another crazy example:\n\n```js\n1 + - + + + - + 1;\t// 2\n```\n\nYou should strongly consider avoiding unary `+` (or `-`) coercion when it's immediately adjacent to other operators. While the above works, it would almost universally be considered a bad idea. Even `d = +c` (or `d =+ c` for that matter!) can far too easily be confused for `d += c`, which is entirely different!\n\n**Note:** Another extremely confusing place for unary `+` to be used adjacent to another operator would be the `++` increment operator and `--` decrement operator. For example: `a +++b`, `a + ++b`, and `a + + +b`. 
See \"Expression Side-Effects\" in Chapter 5 for more about `++`.\n\nRemember, we're trying to be explicit and **reduce** confusion, not make it much worse!\n\n#### `Date` To `number`\n\nAnother common usage of the unary `+` operator is to coerce a `Date` object into a `number`, because the result is the unix timestamp (milliseconds elapsed since 1 January 1970 00:00:00 UTC) representation of the date/time value:\n\n```js\nvar d = new Date( \"Mon, 18 Aug 2014 08:53:06 CDT\" );\n\n+d; // 1408369986000\n```\n\nThe most common usage of this idiom is to get the current *now* moment as a timestamp, such as:\n\n```js\nvar timestamp = +new Date();\n```\n\n**Note:** Some developers are aware of a peculiar syntactic \"trick\" in JavaScript, which is that the `()` set on a constructor call (a function called with `new`) is *optional* if there are no arguments to pass. So you may run across the `var timestamp = +new Date;` form. However, not all developers agree that omitting the `()` improves readability, as it's an uncommon syntax exception that only applies to the `new fn()` call form and not the regular `fn()` call form.\n\nBut coercion is not the only way to get the timestamp out of a `Date` object. A noncoercion approach is perhaps even preferable, as it's even more explicit:\n\n```js\nvar timestamp = new Date().getTime();\n// var timestamp = (new Date()).getTime();\n// var timestamp = (new Date).getTime();\n```\n\nBut an *even more* preferable noncoercion option is to use the ES5 added `Date.now()` static function:\n\n```js\nvar timestamp = Date.now();\n```\n\nAnd if you want to polyfill `Date.now()` into older browsers, it's pretty simple:\n\n```js\nif (!Date.now) {\n\tDate.now = function() {\n\t\treturn +new Date();\n\t};\n}\n```\n\nI'd recommend skipping the coercion forms related to dates. Use `Date.now()` for current *now* timestamps, and `new Date( .. 
).getTime()` for getting a timestamp of a specific *non-now* date/time that you need to specify.\n\n#### The Curious Case of the `~`\n\nOne coercive JS operator that is often overlooked and usually very confused is the tilde `~` operator (aka \"bitwise NOT\"). Many of those who even understand what it does will often times still want to avoid it. But sticking to the spirit of our approach in this book and series, let's dig into it to find out if `~` has anything useful to give us.\n\nIn the \"32-bit (Signed) Integers\" section of Chapter 2, we covered how bitwise operators in JS are defined only for 32-bit operations, which means they force their operands to conform to 32-bit value representations. The rules for how this happens are controlled by the `ToInt32` abstract operation (ES5 spec, section 9.5).\n\n`ToInt32` first does a `ToNumber` coercion, which means if the value is `\"123\"`, it's going to first become `123` before the `ToInt32` rules are applied.\n\nWhile not *technically* coercion itself (since the type doesn't change!), using bitwise operators (like `|` or `~`) with certain special `number` values produces a coercive effect that results in a different `number` value.\n\nFor example, let's first consider the `|` \"bitwise OR\" operator used in the otherwise no-op idiom `0 | x`, which (as Chapter 2 showed) essentially only does the `ToInt32` conversion:\n\n```js\n0 | -0;\t\t\t// 0\n0 | NaN;\t\t// 0\n0 | Infinity;\t// 0\n0 | -Infinity;\t// 0\n```\n\nThese special numbers aren't 32-bit representable (since they come from the 64-bit IEEE 754 standard -- see Chapter 2), so `ToInt32` just specifies `0` as the result from these values.\n\nIt's debatable if `0 | __` is an *explicit* form of this coercive `ToInt32` operation or if it's more *implicit*. From the spec perspective, it's unquestionably *explicit*, but if you don't understand bitwise operations at this level, it can seem a bit more *implicitly* magical. 
Nevertheless, consistent with other assertions in this chapter, we will call it *explicit*.\n\nSo, let's turn our attention back to `~`. The `~` operator first \"coerces\" to a 32-bit `number` value, and then performs a bitwise negation (flipping each bit's parity).\n\n**Note:** This is very similar to how `!` not only coerces its value to `boolean` but also flips its parity (see discussion of the \"unary `!`\" later).\n\nBut... what!? Why do we care about bits being flipped? That's some pretty specialized, nuanced stuff. It's pretty rare for JS developers to need to reason about individual bits.\n\nAnother way of thinking about the definition of `~` comes from old-school computer science/discrete Mathematics: `~` performs two's-complement. Great, thanks, that's totally clearer!\n\nLet's try again: `~x` is roughly the same as `-(x+1)`. That's weird, but slightly easier to reason about. So:\n\n```js\n~42;\t// -(42+1) ==> -43\n```\n\nYou're probably still wondering what the heck all this `~` stuff is about, or why it really matters for a coercion discussion. Let's quickly get to the point.\n\nConsider `-(x+1)`. What's the only value that you can perform that operation on that will produce a `0` (or `-0` technically!) result? `-1`. In other words, `~` used with a range of `number` values will produce a falsy (easily coercible to `false`) `0` value for the `-1` input value, and any other truthy `number` otherwise.\n\nWhy is that relevant?\n\n`-1` is commonly called a \"sentinel value,\" which basically means a value that's given an arbitrary semantic meaning within the greater set of values of its same type (`number`s). 
The C-language uses `-1` sentinel values for many functions that return `>= 0` values for \"success\" and `-1` for \"failure.\"\n\nJavaScript adopted this precedent when defining the `string` operation `indexOf(..)`, which searches for a substring and if found returns its zero-based index position, or `-1` if not found.\n\nIt's pretty common to try to use `indexOf(..)` not just as an operation to get the position, but as a `boolean` check of presence/absence of a substring in another `string`. Here's how developers usually perform such checks:\n\n```js\nvar a = \"Hello World\";\n\nif (a.indexOf( \"lo\" ) >= 0) {\t// true\n\t// found it!\n}\nif (a.indexOf( \"lo\" ) != -1) {\t// true\n\t// found it\n}\n\nif (a.indexOf( \"ol\" ) < 0) {\t// true\n\t// not found!\n}\nif (a.indexOf( \"ol\" ) == -1) {\t// true\n\t// not found!\n}\n```\n\nI find it kind of gross to look at `>= 0` or `== -1`. It's basically a \"leaky abstraction,\" in that it's leaking underlying implementation behavior -- the usage of sentinel `-1` for \"failure\" -- into my code. I would prefer to hide such a detail.\n\nAnd now, finally, we see why `~` could help us! 
Using `~` with `indexOf()` \"coerces\" (actually just transforms) the value **to be appropriately `boolean`-coercible**:\n\n```js\nvar a = \"Hello World\";\n\n~a.indexOf( \"lo\" );\t\t\t// -4   <-- truthy!\n\nif (~a.indexOf( \"lo\" )) {\t// true\n\t// found it!\n}\n\n~a.indexOf( \"ol\" );\t\t\t// 0    <-- falsy!\n!~a.indexOf( \"ol\" );\t\t// true\n\nif (!~a.indexOf( \"ol\" )) {\t// true\n\t// not found!\n}\n```\n\n`~` takes the return value of `indexOf(..)` and transforms it: for the \"failure\" `-1` we get the falsy `0`, and every other value is truthy.\n\n**Note:** The `-(x+1)` pseudo-algorithm for `~` would imply that `~-1` is `-0`, but actually it produces `0` because the underlying operation is actually bitwise, not mathematic.\n\nTechnically, `if (~a.indexOf(..))` is still relying on *implicit* coercion of its resultant `0` to `false` or nonzero to `true`. But overall, `~` still feels to me more like an *explicit* coercion mechanism, as long as you know what it's intended to do in this idiom.\n\nI find this to be cleaner code than the previous `>= 0` / `== -1` clutter.\n\n##### Truncating Bits\n\nThere's one more place `~` may show up in code you run across: some developers use the double tilde `~~` to truncate the decimal part of a `number` (i.e., \"coerce\" it to a whole number \"integer\"). It's commonly (though mistakenly) said this is the same result as calling `Math.floor(..)`.\n\nHow `~~` works is that the first `~` applies the `ToInt32` \"coercion\" and does the bitwise flip, and then the second `~` does another bitwise flip, flipping all the bits back to the original state. The end result is just the `ToInt32` \"coercion\" (aka truncation).\n\n**Note:** The bitwise double-flip of `~~` is very similar to the parity double-negate `!!` behavior, explained in the \"Explicitly: * --> Boolean\" section later.\n\nHowever, `~~` needs some caution/clarification. First, it only works reliably on 32-bit values. 
But more importantly, it doesn't work the same on negative numbers as `Math.floor(..)` does!\n\n```js\nMath.floor( -49.6 );\t// -50\n~~-49.6;\t\t\t\t// -49\n```\n\nSetting the `Math.floor(..)` difference aside, `~~x` can truncate to a (32-bit) integer. But so does `x | 0`, and seemingly with (slightly) *less effort*.\n\nSo, why might you choose `~~x` over `x | 0`, then? Operator precedence (see Chapter 5):\n\n```js\n~~1E20 / 10;\t\t// 166199296\n\n1E20 | 0 / 10;\t\t// 1661992960\n(1E20 | 0) / 10;\t// 166199296\n```\n\nJust as with all other advice here, use `~` and `~~` as explicit mechanisms for \"coercion\" and value transformation only if everyone who reads/writes such code is properly aware of how these operators work!\n\n### Explicitly: Parsing Numeric Strings\n\nA similar outcome to coercing a `string` to a `number` can be achieved by parsing a `number` out of a `string`'s character contents. There are, however, distinct differences between this parsing and the type conversion we examined above.\n\nConsider:\n\n```js\nvar a = \"42\";\nvar b = \"42px\";\n\nNumber( a );\t// 42\nparseInt( a );\t// 42\n\nNumber( b );\t// NaN\nparseInt( b );\t// 42\n```\n\nParsing a numeric value out of a string is *tolerant* of non-numeric characters -- it just stops parsing left-to-right when encountered -- whereas coercion is *not tolerant* and fails resulting in the `NaN` value.\n\nParsing should not be seen as a substitute for coercion. These two tasks, while similar, have different purposes. Parse a `string` as a `number` when you don't know/care what other non-numeric characters there may be on the right-hand side. Coerce a `string` (to a `number`) when the only acceptable values are numeric and something like `\"42px\"` should be rejected as a `number`.\n\n**Tip:** `parseInt(..)` has a twin, `parseFloat(..)`, which (as it sounds) pulls out a floating-point number from a string.\n\nDon't forget that `parseInt(..)` operates on `string` values. 
It makes absolutely no sense to pass a `number` value to `parseInt(..)`. Nor would it make sense to pass any other type of value, like `true`, `function(){..}`, or `[1,2,3]`.

If you pass a non-`string`, the value you pass will automatically be coerced to a `string` first (see "`ToString`" earlier), which would clearly be a kind of hidden *implicit* coercion. It's a really bad idea to rely upon such behavior in your program, so never use `parseInt(..)` with a non-`string` value.

Prior to ES5, another gotcha existed with `parseInt(..)`, which was the source of many JS programs' bugs. If you didn't pass a second argument to indicate which numeric base (aka radix) to use for interpreting the numeric `string` contents, `parseInt(..)` would look at the beginning character(s) to make a guess.

If the first two characters were `"0x"` or `"0X"`, the guess (by convention) was that you wanted to interpret the `string` as a hexadecimal (base-16) `number`. Otherwise, if the first character was `"0"`, the guess (again, by convention) was that you wanted to interpret the `string` as an octal (base-8) `number`.

Hexadecimal `string`s (with the leading `0x` or `0X`) aren't terribly easy to get mixed up. But the octal number guessing proved devilishly common. For example:

```js
var hour = parseInt( selectedHour.value );
var minute = parseInt( selectedMinute.value );

console.log( "The time you selected was: " + hour + ":" + minute );
```

Seems harmless, right? Try selecting `08` for the hour and `09` for the minute. You'll get `0:0`. Why? Because neither `8` nor `9` is a valid character in octal (base-8).

The pre-ES5 fix was simple, but so easy to forget: **always pass `10` as the second argument**. This was totally safe:

```js
var hour = parseInt( selectedHour.value, 10 );
var minute = parseInt( selectedMinute.value, 10 );
```

As of ES5, `parseInt(..)` no longer guesses octal.
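So, in an ES5+ environment, the troublesome time-selection values from above parse as you'd expect even without the radix argument -- a quick illustrative check:

```js
parseInt( "08" );		// 8 -- no more octal guessing
parseInt( "09" );		// 9

parseInt( "08", 10 );	// 8 -- still fine to be explicit
```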
Unless you say otherwise, it assumes base-10 (or base-16 for `\"0x\"` prefixes). That's much nicer. Just be careful if your code has to run in pre-ES5 environments, in which case you still need to pass `10` for the radix.\n\n#### Parsing Non-Strings\n\nOne somewhat infamous example of `parseInt(..)`'s behavior is highlighted in a sarcastic joke post a few years ago, poking fun at this JS behavior:\n\n```js\nparseInt( 1/0, 19 ); // 18\n```\n\nThe assumptive (but totally invalid) assertion was, \"If I pass in Infinity, and parse an integer out of that, I should get Infinity back, not 18.\" Surely, JS must be crazy for this outcome, right?\n\nThough this example is obviously contrived and unreal, let's indulge the madness for a moment and examine whether JS really is that crazy.\n\nFirst off, the most obvious sin committed here is to pass a non-`string` to `parseInt(..)`. That's a no-no. Do it and you're asking for trouble. But even if you do, JS politely coerces what you pass in into a `string` that it can try to parse.\n\nSome would argue that this is unreasonable behavior, and that `parseInt(..)` should refuse to operate on a non-`string` value. Should it perhaps throw an error? That would be very Java-like, frankly. I shudder at thinking JS should start throwing errors all over the place so that `try..catch` is needed around almost every line.\n\nShould it return `NaN`? Maybe. But... what about:\n\n```js\nparseInt( new String( \"42\") );\n```\n\nShould that fail, too? It's a non-`string` value. If you want that `String` object wrapper to be unboxed to `\"42\"`, then is it really so unusual for `42` to first become `\"42\"` so that `42` can be parsed back out?\n\nI would argue that this half-*explicit*, half-*implicit* coercion that can occur can often be a very helpful thing. 
For example:\n\n```js\nvar a = {\n\tnum: 21,\n\ttoString: function() { return String( this.num * 2 ); }\n};\n\nparseInt( a ); // 42\n```\n\nThe fact that `parseInt(..)` forcibly coerces its value to a `string` to perform the parse on is quite sensible. If you pass in garbage, and you get garbage back out, don't blame the trash can -- it just did its job faithfully.\n\nSo, if you pass in a value like `Infinity` (the result of `1 / 0` obviously), what sort of `string` representation would make the most sense for its coercion? Only two reasonable choices come to mind: `\"Infinity\"` and `\"∞\"`. JS chose `\"Infinity\"`. I'm glad it did.\n\nI think it's a good thing that **all values** in JS have some sort of default `string` representation, so that they aren't mysterious black boxes that we can't debug and reason about.\n\nNow, what about base-19? Obviously, completely bogus and contrived. No real JS programs use base-19. It's absurd. But again, let's indulge the ridiculousness. In base-19, the valid numeric characters are `0` - `9` and `a` - `i` (case insensitive).\n\nSo, back to our `parseInt( 1/0, 19 )` example. It's essentially `parseInt( \"Infinity\", 19 )`. How does it parse? The first character is `\"I\"`, which is value `18` in the silly base-19. The second character `\"n\"` is not in the valid set of numeric characters, and as such the parsing simply politely stops, just like when it ran across `\"p\"` in `\"42px\"`.\n\nThe result? `18`. Exactly like it sensibly should be. 
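You can verify each step of that reasoning directly -- these checks just restate what was described; they're not new behavior:

```js
String( 1/0 );			// "Infinity" -- the coercion `parseInt(..)` performs first

parseInt( "I", 19 );	// 18 -- "I" is the highest digit in base-19
parseInt( 1/0, 19 );	// 18 -- parsing stops at the invalid "n"
```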
The behaviors involved to get us there, and not to an error or to `Infinity` itself, are **very important** to JS, and should not be so easily discarded.\n\nOther examples of this behavior with `parseInt(..)` that may be surprising but are quite sensible include:\n\n```js\nparseInt( 0.000008 );\t\t// 0   (\"0\" from \"0.000008\")\nparseInt( 0.0000008 );\t\t// 8   (\"8\" from \"8e-7\")\nparseInt( false, 16 );\t\t// 250 (\"fa\" from \"false\")\nparseInt( parseInt, 16 );\t// 15  (\"f\" from \"function..\")\n\nparseInt( \"0x10\" );\t\t\t// 16\nparseInt( \"103\", 2 );\t\t// 2\n```\n\n`parseInt(..)` is actually pretty predictable and consistent in its behavior. If you use it correctly, you'll get sensible results. If you use it incorrectly, the crazy results you get are not the fault of JavaScript.\n\n### Explicitly: * --> Boolean\n\nNow, let's examine coercing from any non-`boolean` value to a `boolean`.\n\nJust like with `String(..)` and `Number(..)` above, `Boolean(..)` (without the `new`, of course!) is an explicit way of forcing the `ToBoolean` coercion:\n\n```js\nvar a = \"0\";\nvar b = [];\nvar c = {};\n\nvar d = \"\";\nvar e = 0;\nvar f = null;\nvar g;\n\nBoolean( a ); // true\nBoolean( b ); // true\nBoolean( c ); // true\n\nBoolean( d ); // false\nBoolean( e ); // false\nBoolean( f ); // false\nBoolean( g ); // false\n```\n\nWhile `Boolean(..)` is clearly explicit, it's not at all common or idiomatic.\n\nJust like the unary `+` operator coerces a value to a `number` (see above), the unary `!` negate operator explicitly coerces a value to a `boolean`. The *problem* is that it also flips the value from truthy to falsy or vice versa. 
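For example, a single `!` produces a genuine `boolean`, but with the opposite parity from the original value's truthiness:

```js
var a = "0";	// truthy -- a non-empty string!

!a;				// false -- a real `boolean`, but flipped
!!a;			// true  -- flipped back to match `a`'s truthiness
```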
So, the most common way JS developers explicitly coerce to `boolean` is to use the `!!` double-negate operator, because the second `!` will flip the parity back to the original:\n\n```js\nvar a = \"0\";\nvar b = [];\nvar c = {};\n\nvar d = \"\";\nvar e = 0;\nvar f = null;\nvar g;\n\n!!a;\t// true\n!!b;\t// true\n!!c;\t// true\n\n!!d;\t// false\n!!e;\t// false\n!!f;\t// false\n!!g;\t// false\n```\n\nAny of these `ToBoolean` coercions would happen *implicitly* without the `Boolean(..)` or `!!`, if used in a `boolean` context such as an `if (..) ..` statement. But the goal here is to explicitly force the value to a `boolean` to make it clearer that the `ToBoolean` coercion is intended.\n\nAnother example use-case for explicit `ToBoolean` coercion is if you want to force a `true`/`false` value coercion in the JSON serialization of a data structure:\n\n```js\nvar a = [\n\t1,\n\tfunction(){ /*..*/ },\n\t2,\n\tfunction(){ /*..*/ }\n];\n\nJSON.stringify( a ); // \"[1,null,2,null]\"\n\nJSON.stringify( a, function(key,val){\n\tif (typeof val == \"function\") {\n\t\t// force `ToBoolean` coercion of the function\n\t\treturn !!val;\n\t}\n\telse {\n\t\treturn val;\n\t}\n} );\n// \"[1,true,2,true]\"\n```\n\nIf you come to JavaScript from Java, you may recognize this idiom:\n\n```js\nvar a = 42;\n\nvar b = a ? true : false;\n```\n\nThe `? :` ternary operator will test `a` for truthiness, and based on that test will either assign `true` or `false` to `b`, accordingly.\n\nOn its surface, this idiom looks like a form of *explicit* `ToBoolean`-type coercion, since it's obvious that only either `true` or `false` come out of the operation.\n\nHowever, there's a hidden *implicit* coercion, in that the `a` expression has to first be coerced to `boolean` to perform the truthiness test. I'd call this idiom \"explicitly implicit.\" Furthermore, I'd suggest **you should avoid this idiom completely** in JavaScript. 
It offers no real benefit, and worse, masquerades as something it's not.\n\n`Boolean(a)` and `!!a` are far better as *explicit* coercion options.\n\n## Implicit Coercion\n\n*Implicit* coercion refers to type conversions that are hidden, with non-obvious side-effects that implicitly occur from other actions. In other words, *implicit coercions* are any type conversions that aren't obvious (to you).\n\nWhile it's clear what the goal of *explicit* coercion is (making code explicit and more understandable), it might be *too* obvious that *implicit* coercion has the opposite goal: making code harder to understand.\n\nTaken at face value, I believe that's where much of the ire towards coercion comes from. The majority of complaints about \"JavaScript coercion\" are actually aimed (whether they realize it or not) at *implicit* coercion.\n\n**Note:** Douglas Crockford, author of *\"JavaScript: The Good Parts\"*, has claimed in many conference talks and writings that JavaScript coercion should be avoided. But what he seems to mean is that *implicit* coercion is bad (in his opinion). However, if you read his own code, you'll find plenty of examples of coercion, both *implicit* and *explicit*! In truth, his angst seems to primarily be directed at the `==` operation, but as you'll see in this chapter, that's only part of the coercion mechanism.\n\nSo, **is implicit coercion** evil? Is it dangerous? Is it a flaw in JavaScript's design? 
Should we avoid it at all costs?\n\nI bet most of you readers are inclined to enthusiastically cheer, \"Yes!\"\n\n**Not so fast.** Hear me out.\n\nLet's take a different perspective on what *implicit* coercion is, and can be, than just that it's \"the opposite of the good explicit kind of coercion.\" That's far too narrow and misses an important nuance.\n\nLet's define the goal of *implicit* coercion as: to reduce verbosity, boilerplate, and/or unnecessary implementation detail that clutters up our code with noise that distracts from the more important intent.\n\n### Simplifying Implicitly\n\nBefore we even get to JavaScript, let me suggest something pseudo-code'ish from some theoretical strongly typed language to illustrate:\n\n```js\nSomeType x = SomeType( AnotherType( y ) )\n```\n\nIn this example, I have some arbitrary type of value in `y` that I want to convert to the `SomeType` type. The problem is, this language can't go directly from whatever `y` currently is to `SomeType`. It needs an intermediate step, where it first converts to `AnotherType`, and then from `AnotherType` to `SomeType`.\n\nNow, what if that language (or definition you could create yourself with the language) *did* just let you say:\n\n```js\nSomeType x = SomeType( y )\n```\n\nWouldn't you generally agree that we simplified the type conversion here to reduce the unnecessary \"noise\" of the intermediate conversion step? I mean, is it *really* all that important, right here at this point in the code, to see and deal with the fact that `y` goes to `AnotherType` first before then going to `SomeType`?\n\nSome would argue, at least in some circumstances, yes. 
But I think an equal argument can be made of many other circumstances that here, the simplification **actually aids in the readability of the code** by abstracting or hiding away such details, either in the language itself or in our own abstractions.\n\nUndoubtedly, behind the scenes, somewhere, the intermediate conversion step is still happening. But if that detail is hidden from view here, we can just reason about getting `y` to type `SomeType` as a generic operation and hide the messy details.\n\nWhile not a perfect analogy, what I'm going to argue throughout the rest of this chapter is that JS *implicit* coercion can be thought of as providing a similar aid to your code.\n\nBut, **and this is very important**, that is not an unbounded, absolute statement. There are definitely plenty of *evils* lurking around *implicit* coercion, that will harm your code much more than any potential readability improvements. Clearly, we have to learn how to avoid such constructs so we don't poison our code with all manner of bugs.\n\nMany developers believe that if a mechanism can do some useful thing **A** but can also be abused or misused to do some awful thing **Z**, then we should throw out that mechanism altogether, just to be safe.\n\nMy encouragement to you is: don't settle for that. Don't \"throw the baby out with the bathwater.\" Don't assume *implicit* coercion is all bad because all you think you've ever seen is its \"bad parts.\" I think there are \"good parts\" here, and I want to help and inspire more of you to find and embrace them!\n\n### Implicitly: Strings <--> Numbers\n\nEarlier in this chapter, we explored *explicitly* coercing between `string` and `number` values. Now, let's explore the same task but with *implicit* coercion approaches. But before we do, we have to examine some nuances of operations that will *implicitly* force coercion.\n\nThe `+` operator is overloaded to serve the purposes of both `number` addition and `string` concatenation. 
So how does JS know which type of operation you want to use? Consider:

```js
var a = "42";
var b = "0";

var c = 42;
var d = 0;

a + b; // "420"
c + d; // 42
```

What's different that causes `"420"` vs `42`? It's a common misconception that the difference is whether one or both of the operands is a `string`, as that means `+` will assume `string` concatenation. While that's partially true, it's more complicated than that.

Consider:

```js
var a = [1,2];
var b = [3,4];

a + b; // "1,23,4"
```

Neither of these operands is a `string`, but clearly they were both coerced to `string`s and then the `string` concatenation kicked in. So what's really going on?

(**Warning:** deeply nitty-gritty spec-speak coming, so skip the next two paragraphs if that intimidates you!)

-----

According to ES5 spec section 11.6.1, the `+` algorithm (when an `object` value is an operand) will concatenate if either operand is already a `string`, or if the following steps produce a `string` representation. So, when `+` receives an `object` (including `array`) for either operand, it first calls the `ToPrimitive` abstract operation (section 9.1) on the value, which then calls the `[[DefaultValue]]` algorithm (section 8.12.8) with a context hint of `number`.

If you're paying close attention, you'll notice that this operation is now identical to how the `ToNumber` abstract operation handles `object`s (see the "`ToNumber`" section earlier). The `valueOf()` operation on the `array` will fail to produce a simple primitive, so it then falls to a `toString()` representation. The two `array`s thus become `"1,2"` and `"3,4"`, respectively. Now, `+` concatenates the two `string`s as you'd normally expect: `"1,23,4"`.

-----

Let's set aside those messy details and go back to an earlier, simplified explanation: if either operand to `+` is a `string` (or becomes one with the above steps!), the operation will be `string` concatenation.
Otherwise, it's always numeric addition.

**Note:** A commonly cited coercion gotcha is `[] + {}` vs. `{} + []`, as those two expressions result, respectively, in `"[object Object]"` and `0`. There's more to it, though, and we cover those details in "Blocks" in Chapter 5.

What does that mean for *implicit* coercion?

You can coerce a `number` to a `string` simply by "adding" the `number` and the `""` empty `string`:

```js
var a = 42;
var b = a + "";

b; // "42"
```

**Tip:** Numeric addition with the `+` operator is commutative, which means `2 + 3` is the same as `3 + 2`. String concatenation with `+` is obviously not generally commutative, **but** with the specific case of `""`, it's effectively commutative, as `a + ""` and `"" + a` will produce the same result.

It's extremely common/idiomatic to (*implicitly*) coerce `number` to `string` with a `+ ""` operation. In fact, interestingly, even some of the most vocal critics of *implicit* coercion still use that approach in their own code, instead of one of its *explicit* alternatives.

**I think this is a great example** of a useful form of *implicit* coercion, despite how frequently the mechanism gets criticized!

Comparing this *implicit* coercion of `a + ""` to our earlier example of `String(a)` *explicit* coercion, there's one additional quirk to be aware of. Because of how the `ToPrimitive` abstract operation works, `a + ""` invokes `valueOf()` on the `a` value, whose return value is then finally converted to a `string` via the internal `ToString` abstract operation.
But `String(a)` just invokes `toString()` directly.\n\nBoth approaches ultimately result in a `string`, but if you're using an `object` instead of a regular primitive `number` value, you may not necessarily get the *same* `string` value!\n\nConsider:\n\n```js\nvar a = {\n\tvalueOf: function() { return 42; },\n\ttoString: function() { return 4; }\n};\n\na + \"\";\t\t\t// \"42\"\n\nString( a );\t// \"4\"\n```\n\nGenerally, this sort of gotcha won't bite you unless you're really trying to create confusing data structures and operations, but you should be careful if you're defining both your own `valueOf()` and `toString()` methods for some `object`, as how you coerce the value could affect the outcome.\n\nWhat about the other direction? How can we *implicitly coerce* from `string` to `number`?\n\n```js\nvar a = \"3.14\";\nvar b = a - 0;\n\nb; // 3.14\n```\n\nThe `-` operator is defined only for numeric subtraction, so `a - 0` forces `a`'s value to be coerced to a `number`. While far less common, `a * 1` or `a / 1` would accomplish the same result, as those operators are also only defined for numeric operations.\n\nWhat about `object` values with the `-` operator? Similar story as for `+` above:\n\n```js\nvar a = [3];\nvar b = [1];\n\na - b; // 2\n```\n\nBoth `array` values have to become `number`s, but they end up first being coerced to `strings` (using the expected `toString()` serialization), and then are coerced to `number`s, for the `-` subtraction to perform on.\n\nSo, is *implicit* coercion of `string` and `number` values the ugly evil you've always heard horror stories about? I don't personally think so.\n\nCompare `b = String(a)` (*explicit*) to `b = a + \"\"` (*implicit*). I think cases can be made for both approaches being useful in your code. 
Certainly `b = a + \"\"` is quite a bit more common in JS programs, proving its own utility regardless of *feelings* about the merits or hazards of *implicit* coercion in general.\n\n### Implicitly: Booleans --> Numbers\n\nI think a case where *implicit* coercion can really shine is in simplifying certain types of complicated `boolean` logic into simple numeric addition. Of course, this is not a general-purpose technique, but a specific solution for specific cases.\n\nConsider:\n\n```js\nfunction onlyOne(a,b,c) {\n\treturn !!((a && !b && !c) ||\n\t\t(!a && b && !c) || (!a && !b && c));\n}\n\nvar a = true;\nvar b = false;\n\nonlyOne( a, b, b );\t// true\nonlyOne( b, a, b );\t// true\n\nonlyOne( a, b, a );\t// false\n```\n\nThis `onlyOne(..)` utility should only return `true` if exactly one of the arguments is `true` / truthy. It's using *implicit* coercion on the truthy checks and *explicit* coercion on the others, including the final return value.\n\nBut what if we needed that utility to be able to handle four, five, or twenty flags in the same way? It's pretty difficult to imagine implementing code that would handle all those permutations of comparisons.\n\nBut here's where coercing the `boolean` values to `number`s (`0` or `1`, obviously) can greatly help:\n\n```js\nfunction onlyOne() {\n\tvar sum = 0;\n\tfor (var i=0; i < arguments.length; i++) {\n\t\t// skip falsy values. same as treating\n\t\t// them as 0's, but avoids NaN's.\n\t\tif (arguments[i]) {\n\t\t\tsum += arguments[i];\n\t\t}\n\t}\n\treturn sum == 1;\n}\n\nvar a = true;\nvar b = false;\n\nonlyOne( b, a );\t\t// true\nonlyOne( b, a, b, b, b );\t// true\n\nonlyOne( b, b );\t\t// false\nonlyOne( b, a, b, b, b, a );\t// false\n```\n\n**Note:** Of course, instead of the `for` loop in `onlyOne(..)`, you could more tersely use the ES5 `reduce(..)` utility, but I didn't want to obscure the concepts.\n\nWhat we're doing here is relying on the `1` for `true`/truthy coercions, and numerically adding them all up. 
`sum += arguments[i]` uses *implicit* coercion to make that happen. If one and only one value in the `arguments` list is `true`, then the numeric sum will be `1`; otherwise, the sum will not be `1` and thus the desired condition is not met.

We could, of course, do this with *explicit* coercion instead:

```js
function onlyOne() {
	var sum = 0;
	for (var i=0; i < arguments.length; i++) {
		sum += Number( !!arguments[i] );
	}
	return sum === 1;
}
```

We first use `!!arguments[i]` to force the coercion of the value to `true` or `false`. That's so you could pass non-`boolean` values in, like `onlyOne( "42", 0 )`, and it would still work as expected (otherwise you'd end up with `string` concatenation and the logic would be incorrect).

Once we're sure it's a `boolean`, we do another *explicit* coercion with `Number(..)` to make sure the value is `0` or `1`.

Is the *explicit* coercion form of this utility "better"? It does avoid the `NaN` trap as explained in the code comments. But, ultimately, it depends on your needs. I personally think the former version, relying on *implicit* coercion, is more elegant (if you won't be passing `undefined` or `NaN`), and the *explicit* version is needlessly more verbose.

But as with almost everything we're discussing here, it's a judgment call.

**Note:** Regardless of *implicit* or *explicit* approaches, you could easily make `onlyTwo(..)` or `onlyFive(..)` variations by simply changing the final comparison from `1` to `2` or `5`, respectively. That's drastically easier than adding a bunch of `&&` and `||` expressions. So, generally, coercion is very helpful in this case.

### Implicitly: * --> Boolean

Now, let's turn our attention to *implicit* coercion to `boolean` values, as it's by far the most common and also by far the most potentially troublesome.

Remember, *implicit* coercion is what kicks in when you use a value in such a way that it forces the value to be converted.
For numeric and `string` operations, it's fairly easy to see how the coercions can occur.\n\nBut, what sort of expression operations require/force (*implicitly*) a `boolean` coercion?\n\n1. The test expression in an `if (..)` statement.\n2. The test expression (second clause) in a `for ( .. ; .. ; .. )` header.\n3. The test expression in `while (..)` and `do..while(..)` loops.\n4. The test expression (first clause) in `? :` ternary expressions.\n5. The left-hand operand (which serves as a test expression -- see below!) to the `||` (\"logical or\") and `&&` (\"logical and\") operators.\n\nAny value used in these contexts that is not already a `boolean` will be *implicitly* coerced to a `boolean` using the rules of the `ToBoolean` abstract operation covered earlier in this chapter.\n\nLet's look at some examples:\n\n```js\nvar a = 42;\nvar b = \"abc\";\nvar c;\nvar d = null;\n\nif (a) {\n\tconsole.log( \"yep\" );\t\t// yep\n}\n\nwhile (c) {\n\tconsole.log( \"nope, never runs\" );\n}\n\nc = d ? a : b;\nc;\t\t\t\t\t// \"abc\"\n\nif ((a && d) || c) {\n\tconsole.log( \"yep\" );\t\t// yep\n}\n```\n\nIn all these contexts, the non-`boolean` values are *implicitly coerced* to their `boolean` equivalents to make the test decisions.\n\n### Operators `||` and `&&`\n\nIt's quite likely that you have seen the `||` (\"logical or\") and `&&` (\"logical and\") operators in most or all other languages you've used. So it'd be natural to assume that they work basically the same in JavaScript as in other similar languages.\n\nThere's some very little known, but very important, nuance here.\n\nIn fact, I would argue these operators shouldn't even be called \"logical ___ operators\", as that name is incomplete in describing what they do. If I were to give them a more accurate (if more clumsy) name, I'd call them \"selector operators,\" or more completely, \"operand selector operators.\"\n\nWhy? 
Because they don't actually result in a *logic* value (aka `boolean`) in JavaScript, as they do in some other languages.\n\nSo what *do* they result in? They result in the value of one (and only one) of their two operands. In other words, **they select one of the two operand's values**.\n\nQuoting the ES5 spec from section 11.11:\n\n> The value produced by a && or || operator is not necessarily of type Boolean. The value produced will always be the value of one of the two operand expressions.\n\nLet's illustrate:\n\n```js\nvar a = 42;\nvar b = \"abc\";\nvar c = null;\n\na || b;\t\t// 42\na && b;\t\t// \"abc\"\n\nc || b;\t\t// \"abc\"\nc && b;\t\t// null\n```\n\n**Wait, what!?** Think about that. In languages like C and PHP, those expressions result in `true` or `false`, but in JS (and Python and Ruby, for that matter!), the result comes from the values themselves.\n\nBoth `||` and `&&` operators perform a `boolean` test on the **first operand** (`a` or `c`). If the operand is not already `boolean` (as it's not, here), a normal `ToBoolean` coercion occurs, so that the test can be performed.\n\nFor the `||` operator, if the test is `true`, the `||` expression results in the value of the *first operand* (`a` or `c`). If the test is `false`, the `||` expression results in the value of the *second operand* (`b`).\n\nInversely, for the `&&` operator, if the test is `true`, the `&&` expression results in the value of the *second operand* (`b`). If the test is `false`, the `&&` expression results in the value of the *first operand* (`a` or `c`).\n\nThe result of a `||` or `&&` expression is always the underlying value of one of the operands, **not** the (possibly coerced) result of the test. In `c && b`, `c` is `null`, and thus falsy. 
But the `&&` expression itself results in `null` (the value in `c`), not in the coerced `false` used in the test.\n\nDo you see how these operators act as \"operand selectors\", now?\n\nAnother way of thinking about these operators:\n\n```js\na || b;\n// roughly equivalent to:\na ? a : b;\n\na && b;\n// roughly equivalent to:\na ? b : a;\n```\n\n**Note:** I call `a || b` \"roughly equivalent\" to `a ? a : b` because the outcome is identical, but there's a nuanced difference. In `a ? a : b`, if `a` was a more complex expression (like for instance one that might have side effects like calling a `function`, etc.), then the `a` expression would possibly be evaluated twice (if the first evaluation was truthy). By contrast, for `a || b`, the `a` expression is evaluated only once, and that value is used both for the coercive test as well as the result value (if appropriate). The same nuance applies to the `a && b` and `a ? b : a` expressions.\n\nAn extremely common and helpful usage of this behavior, which there's a good chance you may have used before and not fully understood, is:\n\n```js\nfunction foo(a,b) {\n\ta = a || \"hello\";\n\tb = b || \"world\";\n\n\tconsole.log( a + \" \" + b );\n}\n\nfoo();\t\t\t\t\t// \"hello world\"\nfoo( \"yeah\", \"yeah!\" );\t// \"yeah yeah!\"\n```\n\nThe `a = a || \"hello\"` idiom (sometimes said to be JavaScript's version of the C# \"null coalescing operator\") acts to test `a` and if it has no value (or only an undesired falsy value), provides a backup default value (`\"hello\"`).\n\n**Be careful**, though!\n\n```js\nfoo( \"That's it!\", \"\" ); // \"That's it! world\" <-- Oops!\n```\n\nSee the problem? 
`\"\"` as the second argument is a falsy value (see `ToBoolean` earlier in this chapter), so the `b = b || \"world\"` test fails, and the `\"world\"` default value is substituted, even though the intent probably was to have the explicitly passed `\"\"` be the value assigned to `b`.\n\nThis `||` idiom is extremely common, and quite helpful, but you have to use it only in cases where *all falsy values* should be skipped. Otherwise, you'll need to be more explicit in your test, and probably use a `? :` ternary instead.\n\nThis *default value assignment* idiom is so common (and useful!) that even those who publicly and vehemently decry JavaScript coercion often use it in their own code!\n\nWhat about `&&`?\n\nThere's another idiom that is quite a bit less commonly authored manually, but which is used by JS minifiers frequently. The `&&` operator \"selects\" the second operand if and only if the first operand tests as truthy, and this usage is sometimes called the \"guard operator\" (also see \"Short Circuited\" in Chapter 5) -- the first expression test \"guards\" the second expression:\n\n```js\nfunction foo() {\n\tconsole.log( a );\n}\n\nvar a = 42;\n\na && foo(); // 42\n```\n\n`foo()` gets called only because `a` tests as truthy. If that test failed, this `a && foo()` expression statement would just silently stop -- this is known as \"short circuiting\" -- and never call `foo()`.\n\nAgain, it's not nearly as common for people to author such things. Usually, they'd do `if (a) { foo(); }` instead. But JS minifiers choose `a && foo()` because it's much shorter. 
So, now, if you ever have to decipher such code, you'll know what it's doing and why.\n\nOK, so `||` and `&&` have some neat tricks up their sleeve, as long as you're willing to allow the *implicit* coercion into the mix.\n\n**Note:** Both the `a = b || \"something\"` and `a && b()` idioms rely on short circuiting behavior, which we cover in more detail in Chapter 5.\n\nThe fact that these operators don't actually result in `true` and `false` is possibly messing with your head a little bit by now. You're probably wondering how all your `if` statements and `for` loops have been working, if they've included compound logical expressions like `a && (b || c)`.\n\nDon't worry! The sky is not falling. Your code is (probably) just fine. It's just that you probably never realized before that there was an *implicit* coercion to `boolean` going on **after** the compound expression was evaluated.\n\nConsider:\n\n```js\nvar a = 42;\nvar b = null;\nvar c = \"foo\";\n\nif (a && (b || c)) {\n\tconsole.log( \"yep\" );\n}\n```\n\nThis code still works the way you always thought it did, except for one subtle extra detail. The `a && (b || c)` expression *actually* results in `\"foo\"`, not `true`. So, the `if` statement *then* forces the `\"foo\"` value to coerce to a `boolean`, which of course will be `true`.\n\nSee? No reason to panic. Your code is probably still safe. But now you know more about how it does what it does.\n\nAnd now you also realize that such code is using *implicit* coercion. If you're in the \"avoid (implicit) coercion camp\" still, you're going to need to go back and make all of those tests *explicit*:\n\n```js\nif (!!a && (!!b || !!c)) {\n\tconsole.log( \"yep\" );\n}\n```\n\nGood luck with that! ... 
Sorry, just teasing.\n\n### Symbol Coercion\n\nUp to this point, there's been almost no observable outcome difference between *explicit* and *implicit* coercion -- only the readability of code has been at stake.\n\nBut ES6 Symbols introduce a gotcha into the coercion system that we need to discuss briefly. For reasons that go well beyond the scope of what we'll discuss in this book, *explicit* coercion of a `symbol` to a `string` is allowed, but *implicit* coercion of the same is disallowed and throws an error.\n\nConsider:\n\n```js\nvar s1 = Symbol( \"cool\" );\nString( s1 );\t\t\t\t\t// \"Symbol(cool)\"\n\nvar s2 = Symbol( \"not cool\" );\ns2 + \"\";\t\t\t\t\t\t// TypeError\n```\n\n`symbol` values cannot coerce to `number` at all (throws an error either way), but strangely they can both *explicitly* and *implicitly* coerce to `boolean` (always `true`).\n\nConsistency is always easier to learn, and exceptions are never fun to deal with, but we just need to be careful around the new ES6 `symbol` values and how we coerce them.\n\nThe good news: it's probably going to be exceedingly rare for you to need to coerce a `symbol` value. The way they're typically used (see Chapter 3) will probably not call for coercion on a normal basis.\n\n## Loose Equals vs. Strict Equals\n\nLoose equals is the `==` operator, and strict equals is the `===` operator. Both operators are used for comparing two values for \"equality,\" but the \"loose\" vs. \"strict\" indicates a **very important** difference in behavior between the two, specifically in how they decide \"equality.\"\n\nA very common misconception about these two operators is: \"`==` checks values for equality and `===` checks both values and types for equality.\" While that sounds nice and reasonable, it's inaccurate. 
Countless well-respected JavaScript books and blogs have said exactly that, but unfortunately they're all *wrong*.\n\nThe correct description is: \"`==` allows coercion in the equality comparison and `===` disallows coercion.\"\n\n### Equality Performance\n\nStop and think about the difference between the first (inaccurate) explanation and this second (accurate) one.\n\nIn the first explanation, it seems obvious that `===` is *doing more work* than `==`, because it has to *also* check the type. In the second explanation, `==` is the one *doing more work* because it has to follow through the steps of coercion if the types are different.\n\nDon't fall into the trap, as many have, of thinking this has anything to do with performance, though, as if `==` is going to be slower than `===` in any relevant way. While it's measurable that coercion does take *a little bit* of processing time, it's mere microseconds (yes, that's millionths of a second!).\n\nIf you're comparing two values of the same types, `==` and `===` use the identical algorithm, and so other than minor differences in engine implementation, they should do the same work.\n\nIf you're comparing two values of different types, the performance isn't the important factor. What you should be asking yourself is: when comparing these two values, do I want coercion or not?\n\nIf you want coercion, use `==` loose equality, but if you don't want coercion, use `===` strict equality.\n\n**Note:** The implication here then is that both `==` and `===` check the types of their operands. The difference is in how they respond if the types don't match.\n\n### Abstract Equality\n\nThe `==` operator's behavior is defined as \"The Abstract Equality Comparison Algorithm\" in section 11.9.3 of the ES5 spec. 
What's listed there is a comprehensive but simple algorithm that explicitly states every possible combination of types, and how the coercions (if necessary) should happen for each combination.

**Warning:** When (*implicit*) coercion is maligned as being too complicated and too flawed to be a *useful good part*, it is these rules of "abstract equality" that are being condemned. Generally, they are said to be too complex and too unintuitive for developers to practically learn and use, and that they are prone more to causing bugs in JS programs than to enabling greater code readability. I believe this is a flawed premise -- you readers are competent developers who write (and read and understand!) algorithms (aka code) all day long. So, what follows is a plain exposition of the "abstract equality" in simple terms. But I implore you to also read the ES5 spec section 11.9.3. I think you'll be surprised at just how reasonable it is.

Basically, the first clause (11.9.3.1) says, if the two values being compared are of the same type, they are simply and naturally compared via Identity as you'd expect. For example, `42` is only equal to `42`, and `"abc"` is only equal to `"abc"`.

Some minor exceptions to normal expectation to be aware of:

* `NaN` is never equal to itself (see Chapter 2)
* `+0` and `-0` are equal to each other (see Chapter 2)

The final provision in clause 11.9.3.1 is for `==` loose equality comparison with `object`s (including `function`s and `array`s). Two such values are only *equal* if they are both references to *the exact same value*. No coercion occurs here.

**Note:** The `===` strict equality comparison is defined identically to 11.9.3.1, including the provision about two `object` values. 
It's a very little known fact that **`==` and `===` behave identically** in the case where two `object`s are being compared!\n\nThe rest of the algorithm in 11.9.3 specifies that if you use `==` loose equality to compare two values of different types, one or both of the values will need to be *implicitly* coerced. This coercion happens so that both values eventually end up as the same type, which can then directly be compared for equality using simple value Identity.\n\n**Note:** The `!=` loose not-equality operation is defined exactly as you'd expect, in that it's literally the `==` operation comparison performed in its entirety, then the negation of the result. The same goes for the `!==` strict not-equality operation.\n\n#### Comparing: `string`s to `number`s\n\nTo illustrate `==` coercion, let's first build off the `string` and `number` examples earlier in this chapter:\n\n```js\nvar a = 42;\nvar b = \"42\";\n\na === b;\t// false\na == b;\t\t// true\n```\n\nAs we'd expect, `a === b` fails, because no coercion is allowed, and indeed the `42` and `\"42\"` values are different.\n\nHowever, the second comparison `a == b` uses loose equality, which means that if the types happen to be different, the comparison algorithm will perform *implicit* coercion on one or both values.\n\nBut exactly what kind of coercion happens here? Does the `a` value of `42` become a `string`, or does the `b` value of `\"42\"` become a `number`?\n\nIn the ES5 spec, clauses 11.9.3.4-5 say:\n\n> 4. If Type(x) is Number and Type(y) is String,\n>    return the result of the comparison x == ToNumber(y).\n> 5. If Type(x) is String and Type(y) is Number,\n>    return the result of the comparison ToNumber(x) == y.\n\n**Warning:** The spec uses `Number` and `String` as the formal names for the types, while this book prefers `number` and `string` for the primitive types. Do not let the capitalization of `Number` in the spec confuse you for the `Number()` native function. 
For our purposes, the capitalization of the type name is irrelevant -- they have basically the same meaning.\n\nClearly, the spec says the `\"42\"` value is coerced to a `number` for the comparison. The *how* of that coercion has already been covered earlier, specifically with the `ToNumber` abstract operation. In this case, it's quite obvious then that the resulting two `42` values are equal.\n\n#### Comparing: anything to `boolean`\n\nOne of the biggest gotchas with the *implicit* coercion of `==` loose equality pops up when you try to compare a value directly to `true` or `false`.\n\nConsider:\n\n```js\nvar a = \"42\";\nvar b = true;\n\na == b;\t// false\n```\n\nWait, what happened here!? We know that `\"42\"` is a truthy value (see earlier in this chapter). So, how come it's not `==` loose equal to `true`?\n\nThe reason is both simple and deceptively tricky. It's so easy to misunderstand, many JS developers never pay close enough attention to fully grasp it.\n\nLet's again quote the spec, clauses 11.9.3.6-7:\n\n> 6. If Type(x) is Boolean,\n>    return the result of the comparison ToNumber(x) == y.\n> 7. If Type(y) is Boolean,\n>    return the result of the comparison x == ToNumber(y).\n\nLet's break that down. First:\n\n```js\nvar x = true;\nvar y = \"42\";\n\nx == y; // false\n```\n\nThe `Type(x)` is indeed `Boolean`, so it performs `ToNumber(x)`, which coerces `true` to `1`. Now, `1 == \"42\"` is evaluated. The types are still different, so (essentially recursively) we reconsult the algorithm, which just as above will coerce `\"42\"` to `42`, and `1 == 42` is clearly `false`.\n\nReverse it, and we still get the same outcome:\n\n```js\nvar x = \"42\";\nvar y = false;\n\nx == y; // false\n```\n\nThe `Type(y)` is `Boolean` this time, so `ToNumber(y)` yields `0`. `\"42\" == 0` recursively becomes `42 == 0`, which is of course `false`.\n\nIn other words, **the value `\"42\"` is neither `== true` nor `== false`.** At first, that statement might seem crazy. 
How can a value be neither truthy nor falsy?\n\nBut that's the problem! You're asking the wrong question, entirely. It's not your fault, really. Your brain is tricking you.\n\n`\"42\"` is indeed truthy, but `\"42\" == true` **is not performing a boolean test/coercion** at all, no matter what your brain says. `\"42\"` *is not* being coerced to a `boolean` (`true`), but instead `true` is being coerced to a `1`, and then `\"42\"` is being coerced to `42`.\n\nWhether we like it or not, `ToBoolean` is not even involved here, so the truthiness or falsiness of `\"42\"` is irrelevant to the `==` operation!\n\nWhat *is* relevant is to understand how the `==` comparison algorithm behaves with all the different type combinations. As it regards a `boolean` value on either side of the `==`, a `boolean` always coerces to a `number` *first*.\n\nIf that seems strange to you, you're not alone. I personally would recommend to never, ever, under any circumstances, use `== true` or `== false`. Ever.\n\nBut remember, I'm only talking about `==` here. `=== true` and `=== false` wouldn't allow the coercion, so they're safe from this hidden `ToNumber` coercion.\n\nConsider:\n\n```js\nvar a = \"42\";\n\n// bad (will fail!):\nif (a == true) {\n\t// ..\n}\n\n// also bad (will fail!):\nif (a === true) {\n\t// ..\n}\n\n// good enough (works implicitly):\nif (a) {\n\t// ..\n}\n\n// better (works explicitly):\nif (!!a) {\n\t// ..\n}\n\n// also great (works explicitly):\nif (Boolean( a )) {\n\t// ..\n}\n```\n\nIf you avoid ever using `== true` or `== false` (aka loose equality with `boolean`s) in your code, you'll never have to worry about this truthiness/falsiness mental gotcha.\n\n#### Comparing: `null`s to `undefined`s\n\nAnother example of *implicit* coercion can be seen with `==` loose equality between `null` and `undefined` values. Yet again quoting the ES5 spec, clauses 11.9.3.2-3:\n\n> 2. If x is null and y is undefined, return true.\n> 3. 
If x is undefined and y is null, return true.\n\n`null` and `undefined`, when compared with `==` loose equality, equate to (aka coerce to) each other (as well as themselves, obviously), and no other values in the entire language.\n\nWhat this means is that `null` and `undefined` can be treated as indistinguishable for comparison purposes, if you use the `==` loose equality operator to allow their mutual *implicit* coercion.\n\n```js\nvar a = null;\nvar b;\n\na == b;\t\t// true\na == null;\t// true\nb == null;\t// true\n\na == false;\t// false\nb == false;\t// false\na == \"\";\t// false\nb == \"\";\t// false\na == 0;\t\t// false\nb == 0;\t\t// false\n```\n\nThe coercion between `null` and `undefined` is safe and predictable, and no other values can give false positives in such a check. I recommend using this coercion to allow `null` and `undefined` to be indistinguishable and thus treated as the same value.\n\nFor example:\n\n```js\nvar a = doSomething();\n\nif (a == null) {\n\t// ..\n}\n```\n\nThe `a == null` check will pass only if `doSomething()` returns either `null` or `undefined`, and will fail with any other value, even other falsy values like `0`, `false`, and `\"\"`.\n\nThe *explicit* form of the check, which disallows any such coercion, is (I think) unnecessarily much uglier (and perhaps a tiny bit less performant!):\n\n```js\nvar a = doSomething();\n\nif (a === undefined || a === null) {\n\t// ..\n}\n```\n\nIn my opinion, the form `a == null` is yet another example where *implicit* coercion improves code readability, but does so in a reliably safe way.\n\n#### Comparing: `object`s to non-`object`s\n\nIf an `object`/`function`/`array` is compared to a simple scalar primitive (`string`, `number`, or `boolean`), the ES5 spec says in clauses 11.9.3.8-9:\n\n> 8. If Type(x) is either String or Number and Type(y) is Object,\n>    return the result of the comparison x == ToPrimitive(y).\n> 9. 
If Type(x) is Object and Type(y) is either String or Number,\n>    return the result of the comparison ToPrimitive(x) == y.\n\n**Note:** You may notice that these clauses only mention `String` and `Number`, but not `Boolean`. That's because, as quoted earlier, clauses 11.9.3.6-7 take care of coercing any `Boolean` operand presented to a `Number` first.\n\nConsider:\n\n```js\nvar a = 42;\nvar b = [ 42 ];\n\na == b;\t// true\n```\n\nThe `[ 42 ]` value has its `ToPrimitive` abstract operation called (see the \"Abstract Value Operations\" section earlier), which results in the `\"42\"` value. From there, it's just `42 == \"42\"`, which as we've already covered becomes `42 == 42`, so `a` and `b` are found to be coercively equal.\n\n**Tip:** All the quirks of the `ToPrimitive` abstract operation that we discussed earlier in this chapter (`toString()`, `valueOf()`) apply here as you'd expect. This can be quite useful if you have a complex data structure that you want to define a custom `valueOf()` method on, to provide a simple value for equality comparison purposes.\n\nIn Chapter 3, we covered \"unboxing,\" where an `object` wrapper around a primitive value (like from `new String(\"abc\")`, for instance) is unwrapped, and the underlying primitive value (`\"abc\"`) is returned. This behavior is related to the `ToPrimitive` coercion in the `==` algorithm:\n\n```js\nvar a = \"abc\";\nvar b = Object( a );\t// same as `new String( a )`\n\na === b;\t\t\t\t// false\na == b;\t\t\t\t\t// true\n```\n\n`a == b` is `true` because `b` is coerced (aka \"unboxed,\" unwrapped) via `ToPrimitive` to its underlying `\"abc\"` simple scalar primitive value, which is the same as the value in `a`.\n\nThere are some values where this is not the case, though, because of other overriding rules in the `==` algorithm. 
Consider:\n\n```js\nvar a = null;\nvar b = Object( a );\t// same as `Object()`\na == b;\t\t\t\t\t// false\n\nvar c = undefined;\nvar d = Object( c );\t// same as `Object()`\nc == d;\t\t\t\t\t// false\n\nvar e = NaN;\nvar f = Object( e );\t// same as `new Number( e )`\ne == f;\t\t\t\t\t// false\n```\n\nThe `null` and `undefined` values cannot be boxed -- they have no object wrapper equivalent -- so `Object(null)` is just like `Object()` in that both just produce a normal object.\n\n`NaN` can be boxed to its `Number` object wrapper equivalent, but when `==` causes an unboxing, the `NaN == NaN` comparison fails because `NaN` is never equal to itself (see Chapter 2).\n\n### Edge Cases\n\nNow that we've thoroughly examined how the *implicit* coercion of `==` loose equality works (in both sensible and surprising ways), let's try to call out the worst, craziest corner cases so we can see what we need to avoid to not get bitten with coercion bugs.\n\nFirst, let's examine how modifying the built-in native prototypes can produce crazy results:\n\n#### A Number By Any Other Value Would...\n\n```js\nNumber.prototype.valueOf = function() {\n\treturn 3;\n};\n\nnew Number( 2 ) == 3;\t// true\n```\n\n**Warning:** `2 == 3` would not have fallen into this trap, because neither `2` nor `3` would have invoked the built-in `Number.prototype.valueOf()` method because both are already primitive `number` values and can be compared directly. However, `new Number(2)` must go through the `ToPrimitive` coercion, and thus invoke `valueOf()`.\n\nEvil, huh? Of course it is. No one should ever do such a thing. The fact that you *can* do this is sometimes used as a criticism of coercion and `==`. But that's misdirected frustration. JavaScript is not *bad* because you can do such things, a developer is *bad* **if they do such things**. 
Don't fall into the \"my programming language should protect me from myself\" fallacy.\n\nNext, let's consider another tricky example, which takes the evil from the previous example to another level:\n\n```js\nif (a == 2 && a == 3) {\n\t// ..\n}\n```\n\nYou might think this would be impossible, because `a` could never be equal to both `2` and `3` *at the same time*. But \"at the same time\" is inaccurate, since the first expression `a == 2` happens strictly *before* `a == 3`.\n\nSo, what if we make `a.valueOf()` have side effects each time it's called, such that the first time it returns `2` and the second time it's called it returns `3`? Pretty easy:\n\n```js\nvar i = 2;\n\nNumber.prototype.valueOf = function() {\n\treturn i++;\n};\n\nvar a = new Number( 42 );\n\nif (a == 2 && a == 3) {\n\tconsole.log( \"Yep, this happened.\" );\n}\n```\n\nAgain, these are evil tricks. Don't do them. But also don't use them as complaints against coercion. Potential abuses of a mechanism are not sufficient evidence to condemn the mechanism. 
Just avoid these crazy tricks, and stick only with valid and proper usage of coercion.\n\n#### False-y Comparisons\n\nThe most common complaint against *implicit* coercion in `==` comparisons comes from how falsy values behave surprisingly when compared to each other.\n\nTo illustrate, let's look at a list of the corner-cases around falsy value comparisons, to see which ones are reasonable and which are troublesome:\n\n```js\n\"0\" == null;\t\t\t// false\n\"0\" == undefined;\t\t// false\n\"0\" == false;\t\t\t// true -- UH OH!\n\"0\" == NaN;\t\t\t\t// false\n\"0\" == 0;\t\t\t\t// true\n\"0\" == \"\";\t\t\t\t// false\n\nfalse == null;\t\t\t// false\nfalse == undefined;\t\t// false\nfalse == NaN;\t\t\t// false\nfalse == 0;\t\t\t\t// true -- UH OH!\nfalse == \"\";\t\t\t// true -- UH OH!\nfalse == [];\t\t\t// true -- UH OH!\nfalse == {};\t\t\t// false\n\n\"\" == null;\t\t\t\t// false\n\"\" == undefined;\t\t// false\n\"\" == NaN;\t\t\t\t// false\n\"\" == 0;\t\t\t\t// true -- UH OH!\n\"\" == [];\t\t\t\t// true -- UH OH!\n\"\" == {};\t\t\t\t// false\n\n0 == null;\t\t\t\t// false\n0 == undefined;\t\t\t// false\n0 == NaN;\t\t\t\t// false\n0 == [];\t\t\t\t// true -- UH OH!\n0 == {};\t\t\t\t// false\n```\n\nIn this list of 24 comparisons, 17 of them are quite reasonable and predictable. For example, we know that `\"\"` and `NaN` are not at all equatable values, and indeed they don't coerce to be loose equals, whereas `\"0\"` and `0` are reasonably equatable and *do* coerce as loose equals.\n\nHowever, seven of the comparisons are marked with \"UH OH!\" because as false positives, they are much more likely gotchas that could trip you up. `\"\"` and `0` are definitely distinctly different values, and it's rare you'd want to treat them as equatable, so their mutual coercion is troublesome. Note that there aren't any false negatives here.\n\n#### The Crazy Ones\n\nWe don't have to stop there, though. 
We can keep looking for even more troublesome coercions:\n\n```js\n[] == ![];\t\t// true\n```\n\nOooo, that seems at a higher level of crazy, right!? Your brain may likely trick you that you're comparing a truthy to a falsy value, so the `true` result is surprising, as we *know* a value can never be truthy and falsy at the same time!\n\nBut that's not what's actually happening. Let's break it down. What do we know about the `!` unary operator? It explicitly coerces to a `boolean` using the `ToBoolean` rules (and it also flips the parity). So before `[] == ![]` is even processed, it's actually already translated to `[] == false`. We already saw that form in our above list (`false == []`), so its surprise result is *not new* to us.\n\nHow about other corner cases?\n\n```js\n2 == [2];\t\t// true\n\"\" == [null];\t// true\n```\n\nAs we said earlier in our `ToNumber` discussion, the right-hand side `[2]` and `[null]` values will go through a `ToPrimitive` coercion so they can be more readily compared to the simple primitives (`2` and `\"\"`, respectively) on the left-hand side. Since the `valueOf()` for `array` values just returns the `array` itself, coercion falls to stringifying the `array`.\n\n`[2]` will become `\"2\"`, which then is `ToNumber` coerced to `2` for the right-hand side value in the first comparison. `[null]` just straight becomes `\"\"`.\n\nSo, `2 == 2` and `\"\" == \"\"` are completely understandable.\n\nIf your instinct is to still dislike these results, your frustration is not actually with coercion like you probably think it is. It's actually a complaint against the default `array` values' `ToPrimitive` behavior of coercing to a `string` value. More likely, you'd just wish that `[2].toString()` didn't return `\"2\"`, or that `[null].toString()` didn't return `\"\"`.\n\nBut what exactly *should* these `string` coercions result in? 
I can't really think of any other appropriate `string` coercion of `[2]` than `\"2\"`, except perhaps `\"[2]\"` -- but that could be very strange in other contexts!\n\nYou could rightly make the case that since `String(null)` becomes `\"null\"`, then `String([null])` should also become `\"null\"`. That's a reasonable assertion. So, that's the real culprit.\n\n*Implicit* coercion itself isn't the evil here. Even an *explicit* coercion of `[null]` to a `string` results in `\"\"`. What's at odds is whether it's sensible at all for `array` values to stringify to the equivalent of their contents, and exactly how that happens. So, direct your frustration at the rules for `String( [..] )`, because that's where the craziness stems from. Perhaps there should be no stringification coercion of `array`s at all? But that would have lots of other downsides in other parts of the language.\n\nAnother famously cited gotcha:\n\n```js\n0 == \"\\n\";\t\t// true\n```\n\nAs we discussed earlier with empty `\"\"`, `\"\\n\"` (or `\" \"` or any other whitespace combination) is coerced via `ToNumber`, and the result is `0`. What other `number` value would you expect whitespace to coerce to? Does it bother you that *explicit* `Number(\" \")` yields `0`?\n\nReally the only other reasonable `number` value that empty strings or whitespace strings could coerce to is the `NaN`. But would that *really* be better? The comparison `\" \" == NaN` would of course fail, but it's unclear that we'd have really *fixed* any of the underlying concerns.\n\nThe chances that a real-world JS program fails because `0 == \"\\n\"` are awfully rare, and such corner cases are easy to avoid.\n\nType conversions **always** have corner cases, in any language -- nothing specific to coercion. 
The issues here are about second-guessing a certain set of corner cases (and perhaps rightly so!?), but that's not a salient argument against the overall coercion mechanism.\n\nBottom line: almost any crazy coercion between *normal values* that you're likely to run into (aside from intentionally tricky `valueOf()` or `toString()` hacks as earlier) will boil down to the short seven-item list of gotcha coercions we've identified above.\n\nTo contrast against these 24 likely suspects for coercion gotchas, consider another list like this:\n\n```js\n42 == \"43\";\t\t\t\t\t\t\t// false\n\"foo\" == 42;\t\t\t\t\t\t// false\n\"true\" == true;\t\t\t\t\t\t// false\n\n42 == \"42\";\t\t\t\t\t\t\t// true\n\"foo\" == [ \"foo\" ];\t\t\t\t\t// true\n```\n\nIn these nonfalsy, noncorner cases (and there are literally an infinite number of comparisons we could put on this list), the coercion results are totally safe, reasonable, and explainable.\n\n#### Sanity Check\n\nOK, we've definitely found some crazy stuff when we've looked deeply into *implicit* coercion. 
No wonder that most developers claim coercion is evil and should be avoided, right!?\n\nBut let's take a step back and do a sanity check.\n\nBy way of magnitude comparison, we have *a list* of seven troublesome gotcha coercions, but we have *another list* of (at least 17, but actually infinite) coercions that are totally sane and explainable.\n\nIf you're looking for a textbook example of \"throwing the baby out with the bathwater,\" this is it: discarding the entirety of coercion (the infinitely large list of safe and useful behaviors) because of a list of literally just seven gotchas.\n\nThe more prudent reaction would be to ask, \"how can I use the countless *good parts* of coercion, but avoid the few *bad parts*?\"\n\nLet's look again at the *bad* list:\n\n```js\n\"0\" == false;\t\t\t// true -- UH OH!\nfalse == 0;\t\t\t\t// true -- UH OH!\nfalse == \"\";\t\t\t// true -- UH OH!\nfalse == [];\t\t\t// true -- UH OH!\n\"\" == 0;\t\t\t\t// true -- UH OH!\n\"\" == [];\t\t\t\t// true -- UH OH!\n0 == [];\t\t\t\t// true -- UH OH!\n```\n\nFour of the seven items on this list involve `== false` comparison, which we said earlier you should **always, always** avoid. That's a pretty easy rule to remember.\n\nNow the list is down to three.\n\n```js\n\"\" == 0;\t\t\t\t// true -- UH OH!\n\"\" == [];\t\t\t\t// true -- UH OH!\n0 == [];\t\t\t\t// true -- UH OH!\n```\n\nAre these reasonable coercions you'd do in a normal JavaScript program? Under what conditions would they really happen?\n\nI don't think it's terribly likely that you'd literally use `== []` in a `boolean` test in your program, at least not if you know what you're doing. You'd probably instead be doing `== \"\"` or `== 0`, like:\n\n```js\nfunction doSomething(a) {\n\tif (a == \"\") {\n\t\t// ..\n\t}\n}\n```\n\nYou'd have an oops if you accidentally called `doSomething(0)` or `doSomething([])`. 
Another scenario:

```js
function doSomething(a,b) {
	if (a == b) {
		// ..
	}
}
```

Again, this could break if you did something like `doSomething("",0)` or `doSomething([],"")`.

So, while the situations *can* exist where these coercions will bite you, and you'll want to be careful around them, they're probably not super common on the whole of your code base.

#### Safely Using Implicit Coercion

The most important advice I can give you: examine your program and reason about what values can show up on either side of an `==` comparison. To effectively avoid issues with such comparisons, here are some heuristic rules to follow:

1. If either side of the comparison can have `true` or `false` values, don't ever, EVER use `==`.
2. If either side of the comparison can have `[]`, `""`, or `0` values, seriously consider not using `==`.

In these scenarios, it's almost certainly better to use `===` instead of `==`, to avoid unwanted coercion. Follow those two simple rules and pretty much all the coercion gotchas that could reasonably hurt you will effectively be avoided.

**Being more explicit/verbose in these cases will save you from a lot of headaches.**

The question of `==` vs. `===` is really appropriately framed as: should you allow coercion for a comparison or not?

There's lots of cases where such coercion can be helpful, allowing you to more tersely express some comparison logic (like with `null` and `undefined`, for example).

In the overall scheme of things, there's relatively few cases where *implicit* coercion is truly dangerous. But in those places, for safety's sake, definitely use `===`.

**Tip:** Another place where coercion is guaranteed *not* to bite you is with the `typeof` operator. `typeof` is always going to return you one of seven strings (see Chapter 1), and none of them are the empty `""` string. As such, there's no case where checking the type of some value is going to run afoul of *implicit* coercion. 
`typeof x == \"function\"` is 100% as safe and reliable as `typeof x === \"function\"`. Literally, the spec says the algorithm will be identical in this situation. So, don't just blindly use `===` everywhere simply because that's what your code tools tell you to do, or (worst of all) because you've been told in some book to **not think about it**. You own the quality of your code.\n\nIs *implicit* coercion evil and dangerous? In a few cases, yes, but overwhelmingly, no.\n\nBe a responsible and mature developer. Learn how to use the power of coercion (both *explicit* and *implicit*) effectively and safely. And teach those around you to do the same.\n\nHere's a handy table made by Alex Dorey (@dorey on GitHub) to visualize a variety of comparisons:\n\n<img src=\"fig1.png\" width=\"600\">\n\nSource: https://github.com/dorey/JavaScript-Equality-Table\n\n## Abstract Relational Comparison\n\nWhile this part of *implicit* coercion often gets a lot less attention, it's important nonetheless to think about what happens with `a < b` comparisons (similar to how we just examined `a == b` in depth).\n\nThe \"Abstract Relational Comparison\" algorithm in ES5 section 11.8.5 essentially divides itself into two parts: what to do if the comparison involves both `string` values (second half), or anything else (first half).\n\n**Note:** The algorithm is only defined for `a < b`. 
So, `a > b` is handled as `b < a`.

The algorithm first calls `ToPrimitive` coercion on both values, and if the return result of either call is not a `string`, then both values are coerced to `number` values using the `ToNumber` operation rules, and compared numerically.

For example:

```js
var a = [ 42 ];
var b = [ "43" ];

a < b;	// true
b < a;	// false
```

**Note:** Similar caveats for `-0` and `NaN` apply here as they did in the `==` algorithm discussed earlier.

However, if both values are `string`s for the `<` comparison, simple lexicographic (natural alphabetic) comparison on the characters is performed:

```js
var a = [ "42" ];
var b = [ "043" ];

a < b;	// false
```

`a` and `b` are *not* coerced to `number`s, because both of them end up as `string`s after the `ToPrimitive` coercion on the two `array`s. So, `"42"` is compared character by character to `"043"`, starting with the first characters `"4"` and `"0"`, respectively. Since `"0"` is lexicographically *less than* `"4"`, the comparison returns `false`.

The exact same behavior and reasoning goes for:

```js
var a = [ 4, 2 ];
var b = [ 0, 4, 3 ];

a < b;	// false
```

Here, `a` becomes `"4,2"` and `b` becomes `"0,4,3"`, and those lexicographically compare identically to the previous snippet.

What about:

```js
var a = { b: 42 };
var b = { b: 43 };

a < b;	// ??
```

`a < b` is also `false`, because `a` becomes `[object Object]` and `b` becomes `[object Object]`, and so clearly `a` is not lexicographically less than `b`.

But strangely:

```js
var a = { b: 42 };
var b = { b: 43 };

a < b;	// false
a == b;	// false
a > b;	// false

a <= b;	// true
a >= b;	// true
```

Why is `a == b` not `true`? They're the same `string` value (`"[object Object]"`), so it seems they should be equal, right? Nope. 
Recall the previous discussion about how `==` works with `object` references.\n\nBut then how are `a <= b` and `a >= b` resulting in `true`, if `a < b` **and** `a == b` **and** `a > b` are all `false`?\n\nBecause the spec says for `a <= b`, it will actually evaluate `b < a` first, and then negate that result. Since `b < a` is *also* `false`, the result of `a <= b` is `true`.\n\nThat's probably awfully contrary to how you might have explained what `<=` does up to now, which would likely have been the literal: \"less than *or* equal to.\" JS more accurately considers `<=` as \"not greater than\" (`!(a > b)`, which JS treats as `!(b < a)`). Moreover, `a >= b` is explained by first considering it as `b <= a`, and then applying the same reasoning.\n\nUnfortunately, there is no \"strict relational comparison\" as there is for equality. In other words, there's no way to prevent *implicit* coercion from occurring with relational comparisons like `a < b`, other than to ensure that `a` and `b` are of the same type explicitly before making the comparison.\n\nUse the same reasoning from our earlier `==` vs. `===` sanity check discussion. If coercion is helpful and reasonably safe, like in a `42 < \"43\"` comparison, **use it**. On the other hand, if you need to be safe about a relational comparison, *explicitly coerce* the values first, before using `<` (or its counterparts).\n\n```js\nvar a = [ 42 ];\nvar b = \"043\";\n\na < b;\t\t\t\t\t\t// false -- string comparison!\nNumber( a ) < Number( b );\t// true -- number comparison!\n```\n\n## Review\n\nIn this chapter, we turned our attention to how JavaScript type conversions happen, called **coercion**, which can be characterized as either *explicit* or *implicit*.\n\nCoercion gets a bad rap, but it's actually quite useful in many cases. 
An important task for the responsible JS developer is to take the time to learn all the ins and outs of coercion to decide which parts will help improve their code, and which parts they really should avoid.\n\n*Explicit* coercion is code where it's obvious that the intent is to convert a value from one type to another. The benefit is improvement in readability and maintainability of code by reducing confusion.\n\n*Implicit* coercion is coercion that is \"hidden\" as a side-effect of some other operation, where it's not as obvious that the type conversion will occur. While it may seem that *implicit* coercion is the opposite of *explicit* and is thus bad (and indeed, many think so!), actually *implicit* coercion is also about improving the readability of code.\n\nEspecially for *implicit*, coercion must be used responsibly and consciously. Know why you're writing the code you're writing, and how it works. Strive to write code that others will easily be able to learn from and understand as well.\n"
  },
  {
    "path": "types & grammar/ch5.md",
    "content": "# You Don't Know JS: Types & Grammar\n# Chapter 5: Grammar\n\nThe last major topic we want to tackle is how JavaScript's language syntax works (aka its grammar). You may think you know how to write JS, but there's an awful lot of nuance to various parts of the language grammar that lead to confusion and misconception, so we want to dive into those parts and clear some things up.\n\n**Note:** The term \"grammar\" may be a little less familiar to readers than the term \"syntax.\" In many ways, they are similar terms, describing the *rules* for how the language works. There are nuanced differences, but they mostly don't matter for our discussion here. The grammar for JavaScript is a structured way to describe how the syntax (operators, keywords, etc.) fits together into well-formed, valid programs. In other words, discussing syntax without grammar would leave out a lot of the important details. So our focus here in this chapter is most accurately described as *grammar*, even though the raw syntax of the language is what developers directly interact with.\n\n## Statements & Expressions\n\nIt's fairly common for developers to assume that the terms \"statement\" and \"expression\" are roughly equivalent. But here we need to distinguish between the two, because there are some very important differences in our JS programs.\n\nTo draw the distinction, let's borrow from terminology you may be more familiar with: the English language.\n\nA \"sentence\" is one complete formation of words that expresses a thought. It's comprised of one or more \"phrases,\" each of which can be connected with punctuation marks or conjunction words (\"and,\" \"or,\" etc). A phrase can itself be made up of smaller phrases. Some phrases are incomplete and don't accomplish much by themselves, while other phrases can stand on their own. These rules are collectively called the *grammar* of the English language.\n\nAnd so it goes with JavaScript grammar. 
Statements are sentences, expressions are phrases, and operators are conjunctions/punctuation.\n\nEvery expression in JS can be evaluated down to a single, specific value result. For example:\n\n```js\nvar a = 3 * 6;\nvar b = a;\nb;\n```\n\nIn this snippet, `3 * 6` is an expression (evaluates to the value `18`). But `a` on the second line is also an expression, as is `b` on the third line. The `a` and `b` expressions both evaluate to the values stored in those variables at that moment, which also happens to be `18`.\n\nMoreover, each of the three lines is a statement containing expressions. `var a = 3 * 6` and `var b = a` are called \"declaration statements\" because they each declare a variable (and optionally assign a value to it). The `a = 3 * 6` and `b = a` assignments (minus the `var`s) are called assignment expressions.\n\nThe third line contains just the expression `b`, but it's also a statement all by itself (though not a terribly interesting one!). This is generally referred to as an \"expression statement.\"\n\n### Statement Completion Values\n\nIt's a fairly little known fact that statements all have completion values (even if that value is just `undefined`).\n\nHow would you even go about seeing the completion value of a statement?\n\nThe most obvious answer is to type the statement into your browser's developer console, because when you execute it, the console by default reports the completion value of the most recent statement it executed.\n\nLet's consider `var b = a`. What's the completion value of that statement?\n\nThe `b = a` assignment expression results in the value that was assigned (`18` above), but the `var` statement itself results in `undefined`. Why? Because `var` statements are defined that way in the spec. If you put `var a = 42;` into your console, you'll see `undefined` reported back instead of `42`.\n\n**Note:** Technically, it's a little more complex than that. 
In the ES5 spec, section 12.2 \"Variable Statement,\" the `VariableDeclaration` algorithm actually *does* return a value (a `string` containing the name of the variable declared -- weird, huh!?), but that value is basically swallowed up (except for use by the `for..in` loop) by the `VariableStatement` algorithm, which forces an empty (aka `undefined`) completion value.\n\nIn fact, if you've done much code experimenting in your console (or in a JavaScript environment REPL -- read/evaluate/print/loop tool), you've probably seen `undefined` reported after many different statements, and perhaps never realized why or what that was. Put simply, the console is just reporting the statement's completion value.\n\nBut what the console prints out for the completion value isn't something we can use inside our program. So how can we capture the completion value?\n\nThat's a much more complicated task. Before we explain *how*, let's explore *why* you would want to do that.\n\nWe need to consider other types of statement completion values. For example, any regular `{ .. }` block has a completion value of the completion value of its last contained statement/expression.\n\nConsider:\n\n```js\nvar b;\n\nif (true) {\n\tb = 4 + 38;\n}\n```\n\nIf you typed that into your console/REPL, you'd probably see `42` reported, since `42` is the completion value of the `if` block, which took on the completion value of its last assignment expression statement `b = 4 + 38`.\n\nIn other words, the completion value of a block is like an *implicit return* of the last statement value in the block.\n\n**Note:** This is conceptually familiar in languages like CoffeeScript, which have implicit `return` values from `function`s that are the same as the last statement value in the function.\n\nBut there's an obvious problem. 
This kind of code doesn't work:\n\n```js\nvar a, b;\n\na = if (true) {\n\tb = 4 + 38;\n};\n```\n\nWe can't capture the completion value of a statement and assign it into another variable in any easy syntactic/grammatical way (at least not yet!).\n\nSo, what can we do?\n\n**Warning**: For demo purposes only -- don't actually do the following in your real code!\n\nWe could use the much maligned `eval(..)` (sometimes pronounced \"evil\") function to capture this completion value.\n\n```js\nvar a, b;\n\na = eval( \"if (true) { b = 4 + 38; }\" );\n\na;\t// 42\n```\n\nYeeeaaahhhh. That's terribly ugly. But it works! And it illustrates the point that statement completion values are a real thing that can be captured not just in our console but in our programs.\n\nThere's a proposal for ES7 called \"do expression.\" Here's how it might work:\n\n```js\nvar a, b;\n\na = do {\n\tif (true) {\n\t\tb = 4 + 38;\n\t}\n};\n\na;\t// 42\n```\n\nThe `do { .. }` expression executes a block (with one or many statements in it), and the final statement completion value inside the block becomes the completion value *of* the `do` expression, which can then be assigned to `a` as shown.\n\nThe general idea is to be able to treat statements as expressions -- they can show up inside other statements -- without needing to wrap them in an inline function expression and perform an explicit `return ..`.\n\nFor now, statement completion values are not much more than trivia. But they're probably going to take on more significance as JS evolves, and hopefully `do { .. }` expressions will reduce the temptation to use stuff like `eval(..)`.\n\n**Warning:** Repeating my earlier admonition: avoid `eval(..)`. Seriously. See the *Scope & Closures* title of this series for more explanation.\n\n### Expression Side Effects\n\nMost expressions don't have side effects. 
For example:\n\n```js\nvar a = 2;\nvar b = a + 3;\n```\n\nThe expression `a + 3` did not *itself* have a side effect, like for instance changing `a`. It had a result, which is `5`, and that result was assigned to `b` in the statement `b = a + 3`.\n\nThe most common example of an expression with (possible) side effects is a function call expression:\n\n```js\nfunction foo() {\n\ta = a + 1;\n}\n\nvar a = 1;\nfoo();\t\t// result: `undefined`, side effect: changed `a`\n```\n\nThere are other side-effecting expressions, though. For example:\n\n```js\nvar a = 42;\nvar b = a++;\n```\n\nThe expression `a++` has two separate behaviors. *First*, it returns the current value of `a`, which is `42` (which then gets assigned to `b`). But *next*, it changes the value of `a` itself, incrementing it by one.\n\n```js\nvar a = 42;\nvar b = a++;\n\na;\t// 43\nb;\t// 42\n```\n\nMany developers would mistakenly believe that `b` has value `43` just like `a` does. But the confusion comes from not fully considering the *when* of the side effects of the `++` operator.\n\nThe `++` increment operator and the `--` decrement operator are both unary operators (see Chapter 4), which can be used in either a postfix (\"after\") position or prefix (\"before\") position.\n\n```js\nvar a = 42;\n\na++;\t// 42\na;\t\t// 43\n\n++a;\t// 44\na;\t\t// 44\n```\n\nWhen `++` is used in the prefix position as `++a`, its side effect (incrementing `a`) happens *before* the value is returned from the expression, rather than *after* as with `a++`.\n\n**Note:** Would you think `++a++` was legal syntax? If you try it, you'll get a `ReferenceError` error, but why? Because side-effecting operators **require a variable reference** to target their side effects to. For `++a++`, the `a++` part is evaluated first (because of operator precedence -- see below), which gives back the value of `a` _before_ the increment. 
But then it tries to evaluate `++42`, which (if you try it) gives the same `ReferenceError` error, since `++` can't have a side effect directly on a value like `42`.\n\nIt is sometimes mistakenly thought that you can encapsulate the *after* side effect of `a++` by wrapping it in a `( )` pair, like:\n\n```js\nvar a = 42;\nvar b = (a++);\n\na;\t// 43\nb;\t// 42\n```\n\nUnfortunately, `( )` itself doesn't define a new wrapped expression that would be evaluated *after* the *after side effect* of the `a++` expression, as we might have hoped. In fact, even if it did, `a++` returns `42` first, and unless you have another expression that reevaluates `a` after the side effect of `++`, you're not going to get `43` from that expression, so `b` will not be assigned `43`.\n\nThere's an option, though: the `,` statement-series comma operator. This operator allows you to string together multiple standalone expression statements into a single statement:\n\n```js\nvar a = 42, b;\nb = ( a++, a );\n\na;\t// 43\nb;\t// 43\n```\n\n**Note:** The `( .. )` around `a++, a` is required here. The reason is operator precedence, which we'll cover later in this chapter.\n\nThe expression `a++, a` means that the second `a` statement expression gets evaluated *after* the *after side effects* of the first `a++` statement expression, which means it returns the `43` value for assignment to `b`.\n\nAnother example of a side-effecting operator is `delete`. As we showed in Chapter 2, `delete` is used to remove a property from an `object` or a slot from an `array`. But it's usually just called as a standalone statement:\n\n```js\nvar obj = {\n\ta: 42\n};\n\nobj.a;\t\t\t// 42\ndelete obj.a;\t// true\nobj.a;\t\t\t// undefined\n```\n\nThe result value of the `delete` operator is `true` if the requested operation is valid/allowable, or `false` otherwise. But the side effect of the operator is that it removes the property (or array slot).\n\n**Note:** What do we mean by valid/allowable? 
Nonexistent properties, or properties that exist and are configurable (see Chapter 3 of the *this & Object Prototypes* title of this series) will return `true` from the `delete` operator. Otherwise, the result will be `false` or an error.\n\nOne last example of a side-effecting operator, which may at once be both obvious and nonobvious, is the `=` assignment operator.\n\nConsider:\n\n```js\nvar a;\n\na = 42;\t\t// 42\na;\t\t\t// 42\n```\n\nIt may not seem like `=` in `a = 42` is a side-effecting operator for the expression. But if we examine the result value of the `a = 42` statement, it's the value that was just assigned (`42`), so the assignment of that same value into `a` is essentially a side effect.\n\n**Tip:** The same reasoning about side effects goes for the compound-assignment operators like `+=`, `-=`, etc. For example, `a = b += 2` is processed first as `b += 2` (which is `b = b + 2`), and the result of *that* `=` assignment is then assigned to `a`.\n\nThis behavior that an assignment expression (or statement) results in the assigned value is primarily useful for chained assignments, such as:\n\n```js\nvar a, b, c;\n\na = b = c = 42;\n```\n\nHere, `c = 42` is evaluated to `42` (with the side effect of assigning `42` to `c`), then `b = 42` is evaluated to `42` (with the side effect of assigning `42` to `b`), and finally `a = 42` is evaluated (with the side effect of assigning `42` to `a`).\n\n**Warning:** A common mistake developers make with chained assignments is like `var a = b = 42`. While this looks like the same thing, it's not. If that statement were to happen without there also being a separate `var b` (somewhere in the scope) to formally declare `b`, then `var a = b = 42` would not declare `b` directly. 
Depending on `strict` mode, that would either throw an error or create an accidental global (see the *Scope & Closures* title of this series).\n\nAnother scenario to consider:\n\n```js\nfunction vowels(str) {\n\tvar matches;\n\n\tif (str) {\n\t\t// pull out all the vowels\n\t\tmatches = str.match( /[aeiou]/g );\n\n\t\tif (matches) {\n\t\t\treturn matches;\n\t\t}\n\t}\n}\n\nvowels( \"Hello World\" ); // [\"e\",\"o\",\"o\"]\n```\n\nThis works, and many developers prefer such. But using an idiom where we take advantage of the assignment side effect, we can simplify by combining the two `if` statements into one:\n\n```js\nfunction vowels(str) {\n\tvar matches;\n\n\t// pull out all the vowels\n\tif (str && (matches = str.match( /[aeiou]/g ))) {\n\t\treturn matches;\n\t}\n}\n\nvowels( \"Hello World\" ); // [\"e\",\"o\",\"o\"]\n```\n\n**Note:** The `( .. )` around `matches = str.match..` is required. The reason is operator precedence, which we'll cover in the \"Operator Precedence\" section later in this chapter.\n\nI prefer this shorter style, as I think it makes it clearer that the two conditionals are in fact related rather than separate. But as with most stylistic choices in JS, it's purely opinion which one is *better*.\n\n### Contextual Rules\n\nThere are quite a few places in the JavaScript grammar rules where the same syntax means different things depending on where/how it's used. This kind of thing can, in isolation, cause quite a bit of confusion.\n\nWe won't exhaustively list all such cases here, but just call out a few of the common ones.\n\n#### `{ .. }` Curly Braces\n\nThere's two main places (and more coming as JS evolves!) that a pair of `{ .. }` curly braces will show up in your code. Let's take a look at each of them.\n\n##### Object Literals\n\nFirst, as an `object` literal:\n\n```js\n// assume there's a `bar()` function defined\n\nvar a = {\n\tfoo: bar()\n};\n```\n\nHow do we know this is an `object` literal? Because the `{ .. 
}` pair is a value that's getting assigned to `a`.\n\n**Note:** The `a` reference is called an \"l-value\" (aka left-hand value) since it's the target of an assignment. The `{ .. }` pair is an \"r-value\" (aka right-hand value) since it's used *just* as a value (in this case as the source of an assignment).\n\n##### Labels\n\nWhat happens if we remove the `var a =` part of the above snippet?\n\n```js\n// assume there's a `bar()` function defined\n\n{\n\tfoo: bar()\n}\n```\n\nA lot of developers assume that the `{ .. }` pair is just a standalone `object` literal that doesn't get assigned anywhere. But it's actually entirely different.\n\nHere, `{ .. }` is just a regular code block. It's not very idiomatic in JavaScript (much more so in other languages!) to have a standalone `{ .. }` block like that, but it's perfectly valid JS grammar. It can be especially helpful when combined with `let` block-scoping declarations (see the *Scope & Closures* title in this series).\n\nThe `{ .. }` code block here is functionally pretty much identical to the code block being attached to some statement, like a `for`/`while` loop, `if` conditional, etc.\n\nBut if it's a normal block of code, what's that bizarre looking `foo: bar()` syntax, and how is that legal?\n\nIt's because of a little known (and, frankly, discouraged) feature in JavaScript called \"labeled statements.\" `foo` is a label for the statement `bar()` (which has omitted its trailing `;` -- see \"Automatic Semicolons\" later in this chapter). But what's the point of a labeled statement?\n\nIf JavaScript had a `goto` statement, you'd theoretically be able to say `goto foo` and have execution jump to that location in code. `goto`s are usually considered terrible coding idioms as they make code much harder to understand (aka \"spaghetti code\"), so it's a *very good thing* that JavaScript doesn't have a general `goto`.\n\nHowever, JS *does* support a limited, special form of `goto`: labeled jumps. 
Both the `continue` and `break` statements can optionally accept a specified label, in which case the program flow \"jumps\" kind of like a `goto`. Consider:\n\n```js\n// `foo` labeled-loop\nfoo: for (var i=0; i<4; i++) {\n\tfor (var j=0; j<4; j++) {\n\t\t// whenever the loops meet, continue outer loop\n\t\tif (j == i) {\n\t\t\t// jump to the next iteration of\n\t\t\t// the `foo` labeled-loop\n\t\t\tcontinue foo;\n\t\t}\n\n\t\t// skip odd multiples\n\t\tif ((j * i) % 2 == 1) {\n\t\t\t// normal (non-labeled) `continue` of inner loop\n\t\t\tcontinue;\n\t\t}\n\n\t\tconsole.log( i, j );\n\t}\n}\n// 1 0\n// 2 0\n// 2 1\n// 3 0\n// 3 2\n```\n\n**Note:** `continue foo` does not mean \"go to the 'foo' labeled position to continue\", but rather, \"continue the loop that is labeled 'foo' with its next iteration.\" So, it's not *really* an arbitrary `goto`.\n\nAs you can see, we skipped over the odd-multiple `3 1` iteration, but the labeled-loop jump also skipped iterations `1 1` and `2 2`.\n\nPerhaps a slightly more useful form of the labeled jump is with `break __` from inside an inner loop where you want to break out of the outer loop. Without a labeled `break`, this same logic could sometimes be rather awkward to write:\n\n```js\n// `foo` labeled-loop\nfoo: for (var i=0; i<4; i++) {\n\tfor (var j=0; j<4; j++) {\n\t\tif ((i * j) >= 3) {\n\t\t\tconsole.log( \"stopping!\", i, j );\n\t\t\t// break out of the `foo` labeled loop\n\t\t\tbreak foo;\n\t\t}\n\n\t\tconsole.log( i, j );\n\t}\n}\n// 0 0\n// 0 1\n// 0 2\n// 0 3\n// 1 0\n// 1 1\n// 1 2\n// stopping! 1 3\n```\n\n**Note:** `break foo` does not mean \"go to the 'foo' labeled position to continue,\" but rather, \"break out of the loop/block that is labeled 'foo' and continue *after* it.\" Not exactly a `goto` in the traditional sense, huh?\n\nThe nonlabeled `break` alternative to the above would probably need to involve one or more functions, shared scope variable access, etc. 
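For comparison, here's one hedged sketch of what a nonlabeled rewrite might look like, using a shared `stop` flag (the flag name is just illustrative):

```js
var stop = false;

for (var i=0; i<4 && !stop; i++) {
	for (var j=0; j<4; j++) {
		if ((i * j) >= 3) {
			console.log( "stopping!", i, j );
			stop = true;	// signal the outer loop to quit
			break;			// only breaks the inner loop
		}

		console.log( i, j );
	}
}
// 0 0
// 0 1
// 0 2
// 0 3
// 1 0
// 1 1
// 1 2
// stopping! 1 3
```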
It would quite likely be more confusing than labeled `break`, so here using a labeled `break` is perhaps the better option.\n\nA label can apply to a non-loop block, but only `break` can reference such a non-loop label. You can do a labeled `break ___` out of any labeled block, but you cannot `continue ___` a non-loop label, nor can you do a non-labeled `break` out of a block.\n\n```js\nfunction foo() {\n\t// `bar` labeled-block\n\tbar: {\n\t\tconsole.log( \"Hello\" );\n\t\tbreak bar;\n\t\tconsole.log( \"never runs\" );\n\t}\n\tconsole.log( \"World\" );\n}\n\nfoo();\n// Hello\n// World\n```\n\nLabeled loops/blocks are extremely uncommon, and often frowned upon. It's best to avoid them if possible; for example using function calls instead of the loop jumps. But there are perhaps some limited cases where they might be useful. If you're going to use a labeled jump, make sure to document what you're doing with plenty of comments!\n\nIt's a very common belief that JSON is a proper subset of JS, so a string of JSON (like `{\"a\":42}` -- notice the quotes around the property name as JSON requires!) is thought to be a valid JavaScript program. **Not true!** Try putting `{\"a\":42}` into your JS console, and you'll get an error.\n\nThat's because statement labels cannot have quotes around them, so `\"a\"` is not a valid label, and thus `:` can't come right after it.\n\nSo, JSON is truly a subset of JS syntax, but JSON is not valid JS grammar by itself.\n\nOne extremely common misconception along these lines is that if you were to load a JS file into a `<script src=..>` tag that only has JSON content in it (like from an API call), the data would be read as valid JavaScript but just be inaccessible to the program. 
JSON-P (the practice of wrapping the JSON data in a function call, like `foo({\"a\":42})`) is usually said to solve this inaccessibility by sending the value to one of your program's functions.\n\n**Not true!** The totally valid JSON value `{\"a\":42}` by itself would actually throw a JS error because it'd be interpreted as a statement block with an invalid label. But `foo({\"a\":42})` is valid JS because in it, `{\"a\":42}` is an `object` literal value being passed to `foo(..)`. So, properly said, **JSON-P makes JSON into valid JS grammar!**\n\n##### Blocks\n\nAnother commonly cited JS gotcha (related to coercion -- see Chapter 4) is:\n\n```js\n[] + {}; // \"[object Object]\"\n{} + []; // 0\n```\n\nThis seems to imply the `+` operator gives different results depending on whether the first operand is the `[]` or the `{}`. But that actually has nothing to do with it!\n\nOn the first line, `{}` appears in the `+` operator's expression, and is therefore interpreted as an actual value (an empty `object`). Chapter 4 explained that `[]` is coerced to `\"\"` and thus `{}` is coerced to a `string` value as well: `\"[object Object]\"`.\n\nBut on the second line, `{}` is interpreted as a standalone `{}` empty block (which does nothing). Blocks don't need semicolons to terminate them, so the lack of one here isn't a problem. Finally, `+ []` is an expression that *explicitly coerces* (see Chapter 4) the `[]` to a `number`, which is the `0` value.\n\n##### Object Destructuring\n\nStarting with ES6, another place that you'll see `{ .. }` pairs showing up is with \"destructuring assignments\" (see the *ES6 & Beyond* title of this series for more info), specifically `object` destructuring. 
Consider:\n\n```js\nfunction getData() {\n\t// ..\n\treturn {\n\t\ta: 42,\n\t\tb: \"foo\"\n\t};\n}\n\nvar { a, b } = getData();\n\nconsole.log( a, b ); // 42 \"foo\"\n```\n\nAs you can probably tell, `var { a , b } = ..` is a form of ES6 destructuring assignment, which is roughly equivalent to:\n\n```js\nvar res = getData();\nvar a = res.a;\nvar b = res.b;\n```\n\n**Note:** `{ a, b }` is actually ES6 destructuring shorthand for `{ a: a, b: b }`, so either will work, but it's expected that the shorter `{ a, b }` will become the preferred form.\n\nObject destructuring with a `{ .. }` pair can also be used for named function arguments, which is sugar for this same sort of implicit object property assignment:\n\n```js\nfunction foo({ a, b, c }) {\n\t// no need for:\n\t// var a = obj.a, b = obj.b, c = obj.c\n\tconsole.log( a, b, c );\n}\n\nfoo( {\n\tc: [1,2,3],\n\ta: 42,\n\tb: \"foo\"\n} );\t// 42 \"foo\" [1, 2, 3]\n```\n\nSo, the context we use `{ .. }` pairs in entirely determines what they mean, which illustrates the difference between syntax and grammar. It's very important to understand these nuances to avoid unexpected interpretations by the JS engine.\n\n#### `else if` And Optional Blocks\n\nIt's a common misconception that JavaScript has an `else if` clause, because you can do:\n\n```js\nif (a) {\n\t// ..\n}\nelse if (b) {\n\t// ..\n}\nelse {\n\t// ..\n}\n```\n\nBut there's a hidden characteristic of the JS grammar here: there is no `else if`. But `if` and `else` statements are allowed to omit the `{ }` around their attached block if they only contain a single statement. 
You've seen this many times before, undoubtedly:\n\n```js\nif (a) doSomething( a );\n```\n\nMany JS style guides will insist that you always use `{ }` around a single statement block, like:\n\n```js\nif (a) { doSomething( a ); }\n```\n\nHowever, the exact same grammar rule applies to the `else` clause, so the `else if` form you've likely always coded is *actually* parsed as:\n\n```js\nif (a) {\n\t// ..\n}\nelse {\n\tif (b) {\n\t\t// ..\n\t}\n\telse {\n\t\t// ..\n\t}\n}\n```\n\nThe `if (b) { .. } else { .. }` is a single statement that follows the `else`, so you can either put the surrounding `{ }` in or not. In other words, when you use `else if`, you're technically breaking that common style guide rule and just defining your `else` with a single `if` statement.\n\nOf course, the `else if` idiom is extremely common and results in one less level of indentation, so it's attractive. Whichever way you do it, just call out explicitly in your own style guide/rules and don't assume things like `else if` are direct grammar rules.\n\n## Operator Precedence\n\nAs we covered in Chapter 4, JavaScript's version of `&&` and `||` are interesting in that they select and return one of their operands, rather than just resulting in `true` or `false`. That's easy to reason about if there are only two operands and one operator.\n\n```js\nvar a = 42;\nvar b = \"foo\";\n\na && b;\t// \"foo\"\na || b;\t// 42\n```\n\nBut what about when there's two operators involved, and three operands?\n\n```js\nvar a = 42;\nvar b = \"foo\";\nvar c = [1,2,3];\n\na && b || c; // ???\na || b && c; // ???\n```\n\nTo understand what those expressions result in, we're going to need to understand what rules govern how the operators are processed when there's more than one present in an expression.\n\nThese rules are called \"operator precedence.\"\n\nI bet most readers feel they have a decent grasp on operator precedence. 
But as with everything else we've covered in this book series, we're going to poke and prod at that understanding to see just how solid it really is, and hopefully learn a few new things along the way.\n\nRecall the example from above:\n\n```js\nvar a = 42, b;\nb = ( a++, a );\n\na;\t// 43\nb;\t// 43\n```\n\nBut what would happen if we remove the `( )`?\n\n```js\nvar a = 42, b;\nb = a++, a;\n\na;\t// 43\nb;\t// 42\n```\n\nWait! Why did that change the value assigned to `b`?\n\nBecause the `,` operator has a lower precedence than the `=` operator. So, `b = a++, a` is interpreted as `(b = a++), a`. Because (as we explained earlier) `a++` has *after side effects*, the assigned value to `b` is the value `42` before the `++` changes `a`.\n\nThis is just a simple matter of needing to understand operator precedence. If you're going to use `,` as a statement-series operator, it's important to know that it actually has the lowest precedence. Every other operator will more tightly bind than `,` will.\n\nNow, recall this example from above:\n\n```js\nif (str && (matches = str.match( /[aeiou]/g ))) {\n\t// ..\n}\n```\n\nWe said the `( )` around the assignment is required, but why? Because `&&` has higher precedence than `=`, so without the `( )` to force the binding, the expression would instead be treated as `(str && matches) = str.match..`. But this would be an error, because the result of `(str && matches)` isn't going to be a variable, but instead a value (in this case `undefined`), and so it can't be the left-hand side of an `=` assignment!\n\nOK, so you probably think you've got this operator precedence thing down.\n\nLet's move on to a more complex example (which we'll carry throughout the next several sections of this chapter) to *really* test your understanding:\n\n```js\nvar a = 42;\nvar b = \"foo\";\nvar c = false;\n\nvar d = a && b || c ? c || b ? a : c && b : a;\n\nd;\t\t// ??\n```\n\nOK, evil, I admit it. 
No one would write a string of expressions like that, right? *Probably* not, but we're going to use it to examine various issues around chaining multiple operators together, which *is* a very common task.\n\nThe result above is `42`. But that's not nearly as interesting as how we can figure out that answer without just plugging it into a JS program to let JavaScript sort it out.\n\nLet's dig in.\n\nThe first question -- it may not have even occurred to you to ask -- is, does the first part (`a && b || c`) behave like `(a && b) || c` or like `a && (b || c)`? Do you know for certain? Can you even convince yourself they are actually different?\n\n```js\n(false && true) || true;\t// true\nfalse && (true || true);\t// false\n```\n\nSo, there's proof they're different. But still, how does `false && true || true` behave? The answer:\n\n```js\nfalse && true || true;\t\t// true\n(false && true) || true;\t// true\n```\n\nSo we have our answer. The `&&` operator is evaluated first and the `||` operator is evaluated second.\n\nBut is that just because of left-to-right processing? Let's reverse the order of operators:\n\n```js\ntrue || false && false;\t\t// true\n\n(true || false) && false;\t// false -- nope\ntrue || (false && false);\t// true -- winner, winner!\n```\n\nNow we've proved that `&&` is evaluated first and then `||`, and in this case that was actually counter to generally expected left-to-right processing.\n\nSo what caused the behavior? **Operator precedence**.\n\nEvery language defines its own operator precedence list. It's dismaying, though, just how uncommon it is that JS developers have read JS's list.\n\nIf you knew it well, the above examples wouldn't have tripped you up in the slightest, because you'd already know that `&&` is more precedent than `||`. But I bet a fair amount of readers had to think about it a little bit.\n\n**Note:** Unfortunately, the JS spec doesn't really have its operator precedence list in a convenient, single location. 
You have to parse through and understand all the grammar rules. So we'll try to lay out the more common and useful bits here in a more convenient format. For a complete list of operator precedence, see \"Operator Precedence\" on the MDN site (https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Operator_Precedence).\n\n### Short Circuited\n\nIn Chapter 4, we mentioned in a side note the \"short circuiting\" nature of operators like `&&` and `||`. Let's revisit that in more detail now.\n\nFor both `&&` and `||` operators, the right-hand operand will **not be evaluated** if the left-hand operand is sufficient to determine the outcome of the operation. Hence, the name \"short circuited\" (in that if possible, it will take an early shortcut out).\n\nFor example, with `a && b`, `b` is not evaluated if `a` is falsy, because the result of the `&&` operation is already certain, so there's no point in bothering to check `b`. Likewise, with `a || b`, if `a` is truthy, the result of the operation is already certain, so there's no reason to check `b`.\n\nThis short circuiting can be very helpful and is commonly used:\n\n```js\nfunction doSomething(opts) {\n\tif (opts && opts.cool) {\n\t\t// ..\n\t}\n}\n```\n\nThe `opts` part of the `opts && opts.cool` test acts as sort of a guard, because if `opts` is unset (or is not an `object`), the expression `opts.cool` would throw an error. The `opts` test failing plus the short circuiting means that `opts.cool` won't even be evaluated, thus no error!\n\nSimilarly, you can use `||` short circuiting:\n\n```js\nfunction doSomething(opts) {\n\tif (opts.cache || primeCache()) {\n\t\t// ..\n\t}\n}\n```\n\nHere, we're checking for `opts.cache` first, and if it's present, we don't call the `primeCache()` function, thus avoiding potentially unnecessary work.\n\n### Tighter Binding\n\nBut let's turn our attention back to that earlier complex statement example with all the chained operators, specifically the `? 
:` ternary operator parts. Does the `? :` operator have more or less precedence than the `&&` and `||` operators?\n\n```js\na && b || c ? c || b ? a : c && b : a\n```\n\nIs that more like this:\n\n```js\na && b || (c ? c || (b ? a : c) && b : a)\n```\n\nor this?\n\n```js\n(a && b || c) ? (c || b) ? a : (c && b) : a\n```\n\nThe answer is the second one. But why?\n\nBecause `&&` is more precedent than `||`, and `||` is more precedent than `? :`.\n\nSo, the expression `(a && b || c)` is evaluated *first* before the `? :` it participates in. Another way this is commonly explained is that `&&` and `||` \"bind more tightly\" than `? :`. If the reverse was true, then `c ? c...` would bind more tightly, and it would behave (as the first choice) like `a && b || (c ? c..)`.\n\n### Associativity\n\nSo, the `&&` and `||` operators bind first, then the `? :` operator. But what about multiple operators of the same precedence? Do they always process left-to-right or right-to-left?\n\nIn general, operators are either left-associative or right-associative, referring to whether **grouping happens from the left or from the right**.\n\nIt's important to note that associativity is *not* the same thing as left-to-right or right-to-left processing.\n\nBut why does it matter whether processing is left-to-right or right-to-left? Because expressions can have side effects, like for instance with function calls:\n\n```js\nvar a = foo() && bar();\n```\n\nHere, `foo()` is evaluated first, and then possibly `bar()` depending on the result of the `foo()` expression. That definitely could result in different program behavior than if `bar()` was called before `foo()`.\n\nBut this behavior is *just* left-to-right processing (the default behavior in JavaScript!) -- it has nothing to do with the associativity of `&&`. 
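That left-to-right evaluation (and the short circuiting covered a moment ago) is easy to observe directly. A quick sketch, using illustrative `foo()`/`bar()` functions that log when they're called:

```js
function foo() { console.log( "foo" ); return false; }
function bar() { console.log( "bar" ); return true; }

foo() && bar();		// logs only "foo" -- `bar()` never runs
```

Swap the operand order (`bar() && foo()`) and the logged output changes accordingly -- that ordering is plain left-to-right processing at work, not associativity.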
In that example, since there's only one `&&` and thus no relevant grouping here, associativity doesn't even come into play.\n\nBut with an expression like `a && b && c`, grouping *will* happen implicitly, meaning that either `a && b` or `b && c` will be evaluated first.\n\nTechnically, `a && b && c` will be handled as `(a && b) && c`, because `&&` is left-associative (so is `||`, by the way). However, the right-associative alternative `a && (b && c)` behaves observably the same way. For the same values, the same expressions are evaluated in the same order.\n\n**Note:** If hypothetically `&&` was right-associative, it would be processed the same as if you manually used `( )` to create grouping like `a && (b && c)`. But that still **doesn't mean** that `c` would be processed before `b`. Right-associativity does **not** mean right-to-left evaluation, it means right-to-left **grouping**. Either way, regardless of the grouping/associativity, the strict ordering of evaluation will be `a`, then `b`, then `c` (aka left-to-right).\n\nSo it doesn't really matter that much that `&&` and `||` are left-associative, other than to be accurate in how we discuss their definitions.\n\nBut that's not always the case. Some operators would behave very differently depending on left-associativity vs. right-associativity.\n\nConsider the `? :` (\"ternary\" or \"conditional\") operator:\n\n```js\na ? b : c ? d : e;\n```\n\n`? :` is right-associative, so which grouping represents how it will be processed?\n\n* `a ? b : (c ? d : e)`\n* `(a ? b : c) ? d : e`\n\nThe answer is `a ? b : (c ? d : e)`. Unlike with `&&` and `||` above, the right-associativity here actually matters, as `(a ? b : c) ? d : e` *will* behave differently for some (but not all!) combinations of values.\n\nOne such example:\n\n```js\ntrue ? false : true ? true : true;\t\t// false\n\ntrue ? false : (true ? true : true);\t// false\n(true ? false : true) ? 
true : true;\t// true\n```\n\nEven more nuanced differences lurk with other value combinations, even if the end result is the same. Consider:\n\n```js\ntrue ? false : true ? true : false;\t\t// false\n\ntrue ? false : (true ? true : false);\t// false\n(true ? false : true) ? true : false;\t// false\n```\n\nFrom that scenario, the same end result implies that the grouping is moot. However:\n\n```js\nvar a = true, b = false, c = true, d = true, e = false;\n\na ? b : (c ? d : e); // false, evaluates only `a` and `b`\n(a ? b : c) ? d : e; // false, evaluates `a`, `b` AND `e`\n```\n\nSo, we've clearly proved that `? :` is right-associative, and that it actually matters with respect to how the operator behaves if chained with itself.\n\nAnother example of right-associativity (grouping) is the `=` operator. Recall the chained assignment example from earlier in the chapter:\n\n```js\nvar a, b, c;\n\na = b = c = 42;\n```\n\nWe asserted earlier that `a = b = c = 42` is processed by first evaluating the `c = 42` assignment, then `b = ..`, and finally `a = ..`. Why? Because of the right-associativity, which actually treats the statement like this: `a = (b = (c = 42))`.\n\nRemember our running complex assignment expression example from earlier in the chapter?\n\n```js\nvar a = 42;\nvar b = \"foo\";\nvar c = false;\n\nvar d = a && b || c ? c || b ? a : c && b : a;\n\nd;\t\t// 42\n```\n\nArmed with our knowledge of precedence and associativity, we should now be able to break down the code into its grouping behavior like this:\n\n```js\n((a && b) || c) ? ((c || b) ? a : (c && b)) : a\n```\n\nOr, to present it indented if that's easier to understand:\n\n```js\n(\n  (a && b)\n    ||\n  c\n)\n  ?\n(\n  (c || b)\n    ?\n  a\n    :\n  (c && b)\n)\n  :\na\n```\n\nLet's solve it now:\n\n1. `(a && b)` is `\"foo\"`.\n2. `\"foo\" || c` is `\"foo\"`.\n3. For the first `?` test, `\"foo\"` is truthy.\n4. `(c || b)` is `\"foo\"`.\n5. For the second `?` test, `\"foo\"` is truthy.\n6. 
`a` is `42`.\n\nThat's it, we're done! The answer is `42`, just as we saw earlier. That actually wasn't so hard, was it?\n\n### Disambiguation\n\nYou should now have a much better grasp on operator precedence (and associativity) and feel much more comfortable understanding how code with multiple chained operators will behave.\n\nBut an important question remains: should we all write code understanding and perfectly relying on all the rules of operator precedence/associativity? Should we only use `( )` manual grouping when it's necessary to force a different processing binding/order?\n\nOr, on the other hand, should we recognize that even though such rules *are in fact* learnable, there are enough gotchas to warrant ignoring automatic precedence/associativity? If so, should we thus always use `( )` manual grouping and remove all reliance on these automatic behaviors?\n\nThis debate is highly subjective, and strongly parallel to the debate in Chapter 4 over *implicit* coercion. Most developers feel the same way about both debates: either they accept both behaviors and code expecting them, or they discard both behaviors and stick to manual/explicit idioms.\n\nOf course, I cannot answer this question definitively for the reader here any more than I could in Chapter 4. But I've presented you the pros and cons, and hopefully encouraged enough deeper understanding that you can make informed rather than hype-driven decisions.\n\nIn my opinion, there's an important middle ground. We should mix both operator precedence/associativity *and* `( )` manual grouping into our programs -- I argue the same way in Chapter 4 for healthy/safe usage of *implicit* coercion, but certainly don't endorse it exclusively without bounds.\n\nFor example, `if (a && b && c) ..` is perfectly OK to me, and I wouldn't do `if ((a && b) && c) ..` just to explicitly call out the associativity, because I think it's overly verbose.\n\nOn the other hand, if I needed to chain two `? 
:` conditional operators together, I'd certainly use `( )` manual grouping to make it absolutely clear what my intended logic is.\n\nThus, my advice here is similar to that of Chapter 4: **use operator precedence/associativity where it leads to shorter and cleaner code, but use `( )` manual grouping in places where it helps create clarity and reduce confusion.**\n\n## Automatic Semicolons\n\nASI (Automatic Semicolon Insertion) is when JavaScript assumes a `;` in certain places in your JS program even if you didn't put one there.\n\nWhy would it do that? Because if you omit even a single required `;`, your program would fail. Not very forgiving. ASI allows JS to be tolerant of certain places where `;` aren't commonly thought to be necessary.\n\nIt's important to note that ASI will only take effect in the presence of a newline (aka line break). Semicolons are not inserted in the middle of a line.\n\nBasically, if the JS parser parses a line where a parser error would occur (a missing expected `;`), and it can reasonably insert one, it does so. What's reasonable for insertion? Only if there's nothing but whitespace and/or comments between the end of some statement and that line's newline/line break.\n\nConsider:\n\n```js\nvar a = 42, b\nc;\n```\n\nShould JS treat the `c` on the next line as part of the `var` statement? It certainly would if a `,` had come anywhere (even another line) between `b` and `c`. But since there isn't one, JS assumes instead that there's an implied `;` (at the newline) after `b`. Thus, `c;` is left as a standalone expression statement.\n\nSimilarly:\n\n```js\nvar a = 42, b = \"foo\";\n\na\nb\t// \"foo\"\n```\n\nThat's still a valid program without error, because expression statements also accept ASI.\n\nThere are certain places where ASI is helpful, like for instance:\n\n```js\nvar a = 42;\n\ndo {\n\t// ..\n} while (a)\t// <-- ; expected here!\na;\n```\n\nThe grammar requires a `;` after a `do..while` loop, but not after `while` or `for` loops. 
But most developers don't remember that! So, ASI helpfully steps in and inserts one.\n\nAs we said earlier in the chapter, statement blocks do not require `;` termination, so ASI isn't necessary:\n\n```js\nvar a = 42;\n\nwhile (a) {\n\t// ..\n} // <-- no ; expected here\na;\n```\n\nThe other major case where ASI kicks in is with the `break`, `continue`, `return`, and (ES6) `yield` keywords:\n\n```js\nfunction foo(a) {\n\tif (!a) return\n\ta *= 2;\n\t// ..\n}\n```\n\nThe `return` statement doesn't carry across the newline to the `a *= 2` expression, as ASI assumes the `;` terminating the `return` statement. Of course, `return` statements *can* easily break across multiple lines, just not when there's nothing after `return` but the newline/line break.\n\n```js\nfunction foo(a) {\n\treturn (\n\t\ta * 2 + 3 / 12\n\t);\n}\n```\n\nIdentical reasoning applies to `break`, `continue`, and `yield`.\n\n### Error Correction\n\nOne of the most hotly contested *religious wars* in the JS community (besides tabs vs. spaces) is whether to rely heavily/exclusively on ASI or not.\n\nMost, but not all, semicolons are optional, but the two `;`s in the `for ( .. ) ..` loop header are required.\n\nOn the pro side of this debate, many developers believe that ASI is a useful mechanism that allows them to write more terse (and more \"beautiful\") code by omitting all but the strictly required `;`s (which are very few). It is often asserted that ASI makes many `;`s optional, so a correctly written program *without them* is no different than a correctly written program *with them*.\n\nOn the con side of the debate, many other developers will assert that there are *too many* places that can be accidental gotchas, especially for newer, less experienced developers, where unintended `;`s being magically inserted change the meaning. Similarly, some developers will argue that if they omit a semicolon, it's a flat-out mistake, and they want their tools (linters, etc.) 
to catch it before the JS engine *corrects* the mistake under the covers.\n\nLet me just share my perspective. A strict reading of the spec implies that ASI is an \"error correction\" routine. What kind of error, you may ask? Specifically, a **parser error**. In other words, in an attempt to have the parser fail less, ASI lets it be more tolerant.\n\nBut tolerant of what? In my view, the only way a **parser error** occurs is if it's given an incorrect/errored program to parse. So, while ASI is strictly correcting parser errors, the only way it can get such errors is if there were first program authoring errors -- omitting semicolons where the grammar rules require them.\n\nSo, to put it more bluntly, when I hear someone claim that they want to omit \"optional semicolons,\" my brain translates that claim to \"I want to write the most parser-broken program I can that will still work.\"\n\nI find that to be a ludicrous position to take and the arguments of saving keystrokes and having more \"beautiful code\" to be weak at best.\n\nFurthermore, I don't agree that this is the same thing as the spaces vs tabs debate -- that it's purely cosmetic -- but rather I believe it's a fundamental question of writing code that adheres to grammar requirements vs. code that relies on grammar exceptions to just barely skate through.\n\nAnother way of looking at it is that relying on ASI is essentially considering newlines to be significant \"whitespace.\" Other languages like Python have true significant whitespace. But is it really appropriate to think of JavaScript as having significant newlines as it stands today?\n\nMy take: **use semicolons wherever you know they are \"required,\" and limit your assumptions about ASI to a minimum.**\n\nBut don't just take my word for it. 
Back in 2012, creator of JavaScript Brendan Eich said (http://brendaneich.com/2012/04/the-infernal-semicolon/) the following:\n\n> The moral of this story: ASI is (formally speaking) a syntactic error correction procedure. If you start to code as if it were a universal significant-newline rule, you will get into trouble.\n> ..\n> I wish I had made newlines more significant in JS back in those ten days in May, 1995.\n> ..\n> Be careful not to use ASI as if it gave JS significant newlines.\n\n## Errors\n\nNot only does JavaScript have different *subtypes* of errors (`TypeError`, `ReferenceError`, `SyntaxError`, etc.), but also the grammar defines certain errors to be enforced at compile time, as compared to all other errors that happen during runtime.\n\nIn particular, there have long been a number of specific conditions that should be caught and reported as \"early errors\" (during compilation). Any straight-up syntax error is an early error (e.g., `a = ,`), but also the grammar defines things that are syntactically valid but disallowed nonetheless.\n\nSince execution of your code has not begun yet, these errors are not catchable with `try..catch`; they will just fail the parsing/compilation of your program.\n\n**Tip:** There's no requirement in the spec about exactly how browsers (and developer tools) should report errors. So you may see variations across browsers in the following error examples, in what specific subtype of error is reported or what the included error message text will be.\n\nOne simple example is with syntax inside a regular expression literal. 
There's nothing wrong with the JS syntax here, but the invalid regex will throw an early error:\n\n```js\nvar a = /+foo/;\t\t// Error!\n```\n\nThe target of an assignment must be an identifier (or an ES6 destructuring expression that produces one or more identifiers), so a value like `42` in that position is illegal and can be reported right away:\n\n```js\nvar a;\n42 = a;\t\t// Error!\n```\n\nES5's `strict` mode defines even more early errors. For example, in `strict` mode, function parameter names cannot be duplicated:\n\n```js\nfunction foo(a,b,a) { }\t\t\t\t\t// just fine\n\nfunction bar(a,b,a) { \"use strict\"; }\t// Error!\n```\n\nAnother `strict` mode early error is an object literal having more than one property of the same name:\n\n```js\n(function(){\n\t\"use strict\";\n\n\tvar a = {\n\t\tb: 42,\n\t\tb: 43\n\t};\t\t\t// Error!\n})();\n```\n\n**Note:** Semantically speaking, such errors aren't technically *syntax* errors but more *grammar* errors -- the above snippets are syntactically valid. But since there is no `GrammarError` type, some browsers use `SyntaxError` instead.\n\n### Using Variables Too Early\n\nES6 defines a (frankly confusingly named) new concept called the TDZ (\"Temporal Dead Zone\").\n\nThe TDZ refers to places in code where a variable reference cannot yet be made, because it hasn't reached its required initialization.\n\nThe most clear example of this is with ES6 `let` block-scoping:\n\n```js\n{\n\ta = 2;\t\t// ReferenceError!\n\tlet a;\n}\n```\n\nThe assignment `a = 2` is accessing the `a` variable (which is indeed block-scoped to the `{ .. }` block) before it's been initialized by the `let a` declaration, so it's in the TDZ for `a` and throws an error.\n\nInterestingly, while `typeof` has an exception to be safe for undeclared variables (see Chapter 1), no such safety exception is made for TDZ references:\n\n```js\n{\n\ttypeof a;\t// undefined\n\ttypeof b;\t// ReferenceError! 
(TDZ)\n\tlet b;\n}\n```\n\n## Function Arguments\n\nAnother example of a TDZ violation can be seen with ES6 default parameter values (see the *ES6 & Beyond* title of this series):\n\n```js\nvar b = 3;\n\nfunction foo( a = 42, b = a + b + 5 ) {\n\t// ..\n}\n```\n\nThe `b` reference in the assignment would happen in the TDZ for the parameter `b` (not pull in the outer `b` reference), so it will throw an error. However, the `a` in the assignment is fine since by that time it's past the TDZ for parameter `a`.\n\nWhen using ES6's default parameter values, the default value is applied to the parameter if you either omit an argument, or you pass an `undefined` value in its place:\n\n```js\nfunction foo( a = 42, b = a + 1 ) {\n\tconsole.log( a, b );\n}\n\nfoo();\t\t\t\t\t// 42 43\nfoo( undefined );\t\t// 42 43\nfoo( 5 );\t\t\t\t// 5 6\nfoo( void 0, 7 );\t\t// 42 7\nfoo( null );\t\t\t// null 1\n```\n\n**Note:** `null` is coerced to a `0` value in the `a + 1` expression. See Chapter 4 for more info.\n\nFrom the ES6 default parameter values perspective, there's no difference between omitting an argument and passing an `undefined` value. 
However, there is a way to detect the difference in some cases:\n\n```js\nfunction foo( a = 42, b = a + 1 ) {\n\tconsole.log(\n\t\targuments.length, a, b,\n\t\targuments[0], arguments[1]\n\t);\n}\n\nfoo();\t\t\t\t\t// 0 42 43 undefined undefined\nfoo( 10 );\t\t\t\t// 1 10 11 10 undefined\nfoo( 10, undefined );\t// 2 10 11 10 undefined\nfoo( 10, null );\t\t// 2 10 null 10 null\n```\n\nEven though the default parameter values are applied to the `a` and `b` parameters, if no arguments were passed in those slots, the `arguments` array will not have entries.\n\nConversely, if you pass an `undefined` argument explicitly, an entry will exist in the `arguments` array for that argument, but it will be `undefined` and not (necessarily) the same as the default value that was applied to the named parameter for that same slot.\n\nWhile ES6 default parameter values can create divergence between the `arguments` array slot and the corresponding named parameter variable, this same disjointedness can also occur in tricky ways in ES5:\n\n```js\nfunction foo(a) {\n\ta = 42;\n\tconsole.log( arguments[0] );\n}\n\nfoo( 2 );\t// 42 (linked)\nfoo();\t\t// undefined (not linked)\n```\n\nIf you pass an argument, the `arguments` slot and the named parameter are linked to always have the same value. 
If you omit the argument, no such linkage occurs.\n\nBut in `strict` mode, the linkage doesn't exist regardless:\n\n```js\nfunction foo(a) {\n\t\"use strict\";\n\ta = 42;\n\tconsole.log( arguments[0] );\n}\n\nfoo( 2 );\t// 2 (not linked)\nfoo();\t\t// undefined (not linked)\n```\n\nIt's almost certainly a bad idea to ever rely on any such linkage, and in fact the linkage itself is a leaky abstraction that's exposing an underlying implementation detail of the engine, rather than a properly designed feature.\n\nUse of the `arguments` array has been deprecated (especially in favor of ES6 `...` rest parameters -- see the *ES6 & Beyond* title of this series), but that doesn't mean that it's all bad.\n\nPrior to ES6, `arguments` is the only way to get an array of all passed arguments to pass along to other functions, which turns out to be quite useful. You can also mix named parameters with the `arguments` array and be safe, as long as you follow one simple rule: **never refer to a named parameter *and* its corresponding `arguments` slot at the same time.** If you avoid that bad practice, you'll never expose the leaky linkage behavior.\n\n```js\nfunction foo(a) {\n\tconsole.log( a + arguments[1] ); // safe!\n}\n\nfoo( 10, 32 );\t// 42\n```\n\n## `try..finally`\n\nYou're probably familiar with how the `try..catch` block works. But have you ever stopped to consider the `finally` clause that can be paired with it? In fact, were you aware that `try` only requires either `catch` or `finally`, though both can be present if needed?\n\nThe code in the `finally` clause *always* runs (no matter what), and it always runs right after the `try` (and `catch` if present) finish, before any other code runs. In one sense, you can kind of think of the code in a `finally` clause as being in a callback function that will always be called regardless of how the rest of the block behaves.\n\nSo what happens if there's a `return` statement inside a `try` clause? 
It obviously will return a value, right? But does the calling code that receives that value run before or after the `finally`?\n\n```js\nfunction foo() {\n\ttry {\n\t\treturn 42;\n\t}\n\tfinally {\n\t\tconsole.log( \"Hello\" );\n\t}\n\n\tconsole.log( \"never runs\" );\n}\n\nconsole.log( foo() );\n// Hello\n// 42\n```\n\nThe `return 42` runs right away, which sets up the completion value from the `foo()` call. This action completes the `try` clause, and the `finally` clause immediately runs next. Only then is the `foo()` function complete, so that its completion value is returned back for the `console.log(..)` statement to use.\n\nThe exact same behavior is true of a `throw` inside `try`:\n\n```js\nfunction foo() {\n\ttry {\n\t\tthrow 42;\n\t}\n\tfinally {\n\t\tconsole.log( \"Hello\" );\n\t}\n\n\tconsole.log( \"never runs\" );\n}\n\nconsole.log( foo() );\n// Hello\n// Uncaught Exception: 42\n```\n\nNow, if an exception is thrown (accidentally or intentionally) inside a `finally` clause, it will take over as the primary completion of that function. If a previous `return` in the `try` block had set a completion value for the function, that value will be abandoned.\n\n```js\nfunction foo() {\n\ttry {\n\t\treturn 42;\n\t}\n\tfinally {\n\t\tthrow \"Oops!\";\n\t}\n\n\tconsole.log( \"never runs\" );\n}\n\nconsole.log( foo() );\n// Uncaught Exception: Oops!\n```\n\nIt shouldn't be surprising that other nonlinear control statements like `continue` and `break` exhibit similar behavior to `return` and `throw`:\n\n```js\nfor (var i=0; i<10; i++) {\n\ttry {\n\t\tcontinue;\n\t}\n\tfinally {\n\t\tconsole.log( i );\n\t}\n}\n// 0 1 2 3 4 5 6 7 8 9\n```\n\nThe `console.log(i)` statement runs at the end of the loop iteration, which is caused by the `continue` statement. 
However, it still runs before the `i++` iteration update statement, which is why the values printed are `0..9` instead of `1..10`.\n\n**Note:** ES6 adds a `yield` statement in generators (see the *Async & Performance* title of this series), which in some ways can be seen as an intermediate `return` statement. However, unlike a `return`, a `yield` isn't complete until the generator is resumed, which means a `try { .. yield .. }` has not completed. So an attached `finally` clause will not run right after the `yield` like it does with `return`.\n\nA `return` inside a `finally` has the special ability to override a previous `return` from the `try` or `catch` clause, but only if `return` is explicitly called:\n\n```js\nfunction foo() {\n\ttry {\n\t\treturn 42;\n\t}\n\tfinally {\n\t\t// no `return ..` here, so no override\n\t}\n}\n\nfunction bar() {\n\ttry {\n\t\treturn 42;\n\t}\n\tfinally {\n\t\t// override previous `return 42`\n\t\treturn;\n\t}\n}\n\nfunction baz() {\n\ttry {\n\t\treturn 42;\n\t}\n\tfinally {\n\t\t// override previous `return 42`\n\t\treturn \"Hello\";\n\t}\n}\n\nfoo();\t// 42\nbar();\t// undefined\nbaz();\t// \"Hello\"\n```\n\nNormally, the omission of `return` in a function is the same as `return;` or even `return undefined;`, but inside a `finally` block the omission of `return` does not act like an overriding `return undefined`; it just lets the previous `return` stand.\n\nIn fact, we can really up the craziness if we combine `finally` with labeled `break` (discussed earlier in the chapter):\n\n```js\nfunction foo() {\n\tbar: {\n\t\ttry {\n\t\t\treturn 42;\n\t\t}\n\t\tfinally {\n\t\t\t// break out of `bar` labeled block\n\t\t\tbreak bar;\n\t\t}\n\t}\n\n\tconsole.log( \"Crazy\" );\n\n\treturn \"Hello\";\n}\n\nconsole.log( foo() );\n// Crazy\n// Hello\n```\n\nBut... don't do this. Seriously. Using a `finally` + labeled `break` to effectively cancel a `return` is doing your best to create the most confusing code possible. 
I'd wager no amount of comments will redeem this code.\n\n## `switch`\n\nLet's briefly explore the `switch` statement, a sort-of syntactic shorthand for an `if..else if..else..` statement chain.\n\n```js\nswitch (a) {\n\tcase 2:\n\t\t// do something\n\t\tbreak;\n\tcase 42:\n\t\t// do another thing\n\t\tbreak;\n\tdefault:\n\t\t// fallback to here\n}\n```\n\nAs you can see, it evaluates `a` once, then matches the resulting value to each `case` expression (just simple value expressions here). If a match is found, execution will begin in that matched `case`, and will either go until a `break` is encountered or until the end of the `switch` block is found.\n\nThat much may not surprise you, but there are several quirks about `switch` you may not have noticed before.\n\nFirst, the matching that occurs between the `a` expression and each `case` expression is identical to the `===` algorithm (see Chapter 4). Oftentimes `switch`es are used with absolute values in `case` statements, as shown above, so strict matching is appropriate.\n\nHowever, you may wish to allow coercive equality (aka `==`, see Chapter 4), and to do so you'll need to sort of \"hack\" the `switch` statement a bit:\n\n```js\nvar a = \"42\";\n\nswitch (true) {\n\tcase a == 10:\n\t\tconsole.log( \"10 or '10'\" );\n\t\tbreak;\n\tcase a == 42:\n\t\tconsole.log( \"42 or '42'\" );\n\t\tbreak;\n\tdefault:\n\t\t// never gets here\n}\n// 42 or '42'\n```\n\nThis works because the `case` clause can have any expression (not just simple values), which means it will strictly match that expression's result to the test expression (`true`). Since `a == 42` results in `true` here, the match is made.\n\nDespite `==`, the `switch` matching itself is still strict, between `true` and `true` here. If the `case` expression resulted in something that was truthy but not strictly `true` (see Chapter 4), it wouldn't work. 
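For example, here's a sketch of that failure mode: the `case` expression below results in the truthy string `"42"`, not the boolean `true`, so the strict match against `true` fails:

```js
var a = "42";
var result;

switch (true) {
	case (a == 42 && a):	// results in "42": truthy, but not `true`
		result = "matched";
		break;
	default:
		result = "no match";
}

console.log( result );	// "no match"
```

Coercing the `case` expression with `!!` would force it to `true` and let the match succeed.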
This can bite you if, for instance, you're using a \"logical operator\" like `||` or `&&` in your expression:\n\n```js\nvar a = \"hello world\";\nvar b = 10;\n\nswitch (true) {\n\tcase (a || b == 10):\n\t\t// never gets here\n\t\tbreak;\n\tdefault:\n\t\tconsole.log( \"Oops\" );\n}\n// Oops\n```\n\nSince the result of `(a || b == 10)` is `\"hello world\"` and not `true`, the strict match fails. In this case, the fix is to force the expression explicitly to be a `true` or `false`, such as `case !!(a || b == 10):` (see Chapter 4).\n\nLastly, the `default` clause is optional, and it doesn't necessarily have to come at the end (although that's the strong convention). Even in the `default` clause, the same rules apply about encountering a `break` or not:\n\n```js\nvar a = 10;\n\nswitch (a) {\n\tcase 1:\n\tcase 2:\n\t\t// never gets here\n\tdefault:\n\t\tconsole.log( \"default\" );\n\tcase 3:\n\t\tconsole.log( \"3\" );\n\t\tbreak;\n\tcase 4:\n\t\tconsole.log( \"4\" );\n}\n// default\n// 3\n```\n\n**Note:** As discussed previously about labeled `break`s, the `break` inside a `case` clause can also be labeled.\n\nThe way this snippet processes is that it passes through all the `case` clause matching first, finds no match, then goes back up to the `default` clause and starts executing. Since there's no `break` there, it continues executing in the already skipped-over `case 3` block, before stopping once it hits that `break`.\n\nWhile this sort of roundabout logic is clearly possible in JavaScript, there's almost no chance that it's going to make for reasonable or understandable code. Be very skeptical if you find yourself wanting to create such circular logic flow, and if you really do, make sure you include plenty of code comments to explain what you're up to!\n\n## Review\n\nJavaScript grammar has plenty of nuance that we as developers should spend a little more time paying closer attention to than we typically do. 
A little bit of effort goes a long way to solidifying your deeper knowledge of the language.\n\nStatements and expressions have analogs in the English language -- statements are like sentences and expressions are like phrases. Expressions can be pure/self-contained, or they can have side effects.\n\nThe JavaScript grammar layers semantic usage rules (aka context) on top of the pure syntax. For example, `{ }` pairs used in various places in your program can mean statement blocks, `object` literals, (ES6) destructuring assignments, or (ES6) named function arguments.\n\nJavaScript operators all have well-defined rules for precedence (which operators bind before others) and associativity (how multiple operator expressions are implicitly grouped). Once you learn these rules, it's up to you to decide if precedence/associativity are *too implicit* for their own good, or if they will aid in writing shorter, clearer code.\n\nASI (Automatic Semicolon Insertion) is a parser-error-correction mechanism built into the JS engine, which allows it under certain circumstances to insert an assumed `;` in places where it is required, was omitted, *and* where insertion fixes the parser error. The debate rages over whether this behavior implies that most `;` are optional (and can/should be omitted for cleaner code) or whether it means that omitting them is making mistakes that the JS engine merely cleans up for you.\n\nJavaScript has several types of errors, but it's less known that it has two classifications for errors: \"early\" (compiler thrown, uncatchable) and \"runtime\" (`try..catch`able). All syntax errors are obviously early errors that stop the program before it runs, but there are others, too.\n\nFunction arguments have an interesting relationship to their formal declared named parameters. Specifically, the `arguments` array has a number of leaky abstraction gotchas if you're not careful. 
Avoid `arguments` if you can, but if you must use it, by all means avoid using the positional slot in `arguments` at the same time as using a named parameter for that same argument.\n\nThe `finally` clause attached to a `try` (or `try..catch`) offers some very interesting quirks in terms of execution processing order. Some of these quirks can be helpful, but it's possible to create lots of confusion, especially if combined with labeled blocks. As always, use `finally` to make code better and clearer, not more clever or confusing.\n\nThe `switch` offers some nice shorthand for `if..else if..` statements, but beware of many common simplifying assumptions about its behavior. There are several quirks that can trip you up if you're not careful, but there are also some neat hidden tricks that `switch` has up its sleeve!\n"
  },
  {
    "path": "types & grammar/foreword.md",
    "content": "# You Don't Know JS: Types & Grammar\n# Foreword\n\nIt was once said, \"JavaScript is the only language developers don't learn to use before using it.\"\n\nI laugh each time I hear that quote because it was true for me and I suspect it was for many other developers. JavaScript, and maybe even CSS and HTML, were not core computer science languages taught at college in the Internet's early days, so personal development was very much based on the budding developer's search and \"view source\" abilities to piece together these basic web languages.\n\nI still remember my first high school website project. The task was to create any type of web store, and me being a James Bond fan, I decided to create a Goldeneye store. It had everything: the Goldeneye midi theme song playing in the background, a JavaScript-powered crosshairs following the mouse around the screen, and a gunshot sound that played upon every click. Q would have been proud of this masterpiece of a website.\n\nI tell that story because I did back then what many developers are doing today: I copied and pasted chunks of JavaScript code into my project without having a clue what was actually happening. The widespread use of JavaScript toolkits like jQuery has, in its own small way, perpetuated this pattern of nonlearning of core JavaScript.\n\nI'm not disparaging JavaScript toolkit use; after all, I'm a member of the MooTools JavaScript team! But the reason JavaScript toolkits are as powerful as they are is because their developers know the fundamentals, and their \"gotchas,\" and apply them magnificently. 
As useful as these toolkits are, it's still incredibly important to know the basics of the language, and with books like Kyle Simpson's *You Don't Know JS* series, there's no excuse not to learn them.\n\n*Types and Grammar*, the third installment of the series, is an excellent look at the core JavaScript fundamentals that copy and paste and JavaScript toolkits don't and could never teach you. Coercion and its pitfalls, natives as constructors, and the whole gamut of JavaScript basics are thoroughly explained with focused code examples. Like the other books in this series, Kyle cuts straight to the point: no fluff and word-smithing -- exactly the type of tech book I love.\n\nEnjoy Types and Grammar and don't let it get too far away from your desk!\n\nDavid Walsh<br>\n[http://davidwalsh.name](http://davidwalsh.name), [@davidwalshblog](http://twitter.com/davidwalshblog)<br>\nSenior Web Developer, Mozilla\n"
  },
  {
    "path": "types & grammar/toc.md",
    "content": "# You Don't Know JS: Types & Grammar\n\n## Table of Contents\n\n* Foreword\n* Preface\n* Chapter 1: Types\n\t* A Type By Any Other Name...\n\t* Built-in Types\n\t* Values as Types\n* Chapter 2: Values\n\t* Arrays\n\t* Strings\n\t* Numbers\n\t* Special Values\n\t* Value vs Reference\n* Chapter 3: Natives\n\t* Internal `[[Class]]`\n\t* Boxing Wrappers\n\t* Unboxing\n\t* Natives as Constructors\n* Chapter 4: Coercion\n\t* Converting Values\n\t* Abstract Value Operations\n\t* Explicit Coercion\n\t* Implicit Coercion\n\t* Loose Equals vs Strict Equals\n\t* Abstract Relational Comparison\n* Chapter 5: Grammar\n\t* Statements & Expressions\n\t* Operator Precedence\n\t* Automatic Semicolons\n\t* Errors\n\t* Function Arguments\n\t* `try..finally`\n\t* `switch`\n* Appendix A: Mixed Environment JavaScript\n* Appendix B: Acknowledgments\n\n"
  },
  {
    "path": "up & going/README.md",
    "content": "# You Don't Know JS: Up & Going\n\n<img src=\"cover.jpg\" width=\"300\">\n\n-----\n\n**[Purchase digital/print copy from O'Reilly](http://shop.oreilly.com/product/0636920039303.do)**\n\n-----\n\n[Table of Contents](toc.md)\n\n* [Foreword](foreword.md) (by [Jenn Lukas](http://jennlukas.com))\n* [Preface](../preface.md)\n* [Chapter 1: Into Programming](ch1.md)\n* [Chapter 2: Into JavaScript](ch2.md)\n* [Chapter 3: Into YDKJS](ch3.md)\n* [Appendix A: Thank You's!](apA.md)\n"
  },
  {
    "path": "up & going/apA.md",
    "content": "# You Don't Know JS: Up & Going\n# Appendix A: Acknowledgments\n\nI have many people to thank for making this book title and the overall series happen.\n\nFirst, I must thank my wife Christen Simpson, and my two kids Ethan and Emily, for putting up with Dad always pecking away at the computer. Even when not writing books, my obsession with JavaScript glues my eyes to the screen far more than it should. That time I borrow from my family is the reason these books can so deeply and completely explain JavaScript to you, the reader. I owe my family everything.\n\nI'd like to thank my editors at O'Reilly, namely Simon St.Laurent and Brian MacDonald, as well as the rest of the editorial and marketing staff. They are fantastic to work with, and have been especially accommodating during this experiment into \"open source\" book writing, editing, and production.\n\nThank you to the many folks who have participated in making this book series better by providing editorial suggestions and corrections, including Shelley Powers, Tim Ferro, Evan Borden, Forrest L. Norvell, Jennifer Davis, Jesse Harlin, Kris Kowal, Rick Waldron, Jordan Harband, Benjamin Gruenbaum, Vyacheslav Egorov, David Nolen, and many others. A big thank you to Jenn Lukas for writing the Foreword for this title.\n\nThank you to the countless folks in the community, including members of the TC39 committee, who have shared so much knowledge with the rest of us, and especially tolerated my incessant questions and explorations with patience and detail. 
John-David Dalton, Juriy \"kangax\" Zaytsev, Mathias Bynens, Axel Rauschmayer, Nicholas Zakas, Angus Croll, Reginald Braithwaite, Dave Herman, Brendan Eich, Allen Wirfs-Brock, Bradley Meck, Domenic Denicola, David Walsh, Tim Disney, Peter van der Zee, Andrea Giammarchi, Kit Cambridge, Eric Elliott, and so many others, I can't even scratch the surface.\n\nSince the \"You Don't Know JS\" book series was born on Kickstarter, I also wish to thank all my (nearly) 500 generous backers, without whom this book series could not have happened:\n\n> Jan Szpila, nokiko, Murali Krishnamoorthy, Ryan Joy, Craig Patchett, pdqtrader, Dale Fukami, ray hatfield, R0drigo Perez [Mx], Dan Petitt, Jack Franklin, Andrew Berry, Brian Grinstead, Rob Sutherland, Sergi Meseguer, Phillip Gourley, Mark Watson, Jeff Carouth, Alfredo Sumaran, Martin Sachse, Marcio Barrios, Dan, AimelyneM, Matt Sullivan, Delnatte Pierre-Antoine, Jake Smith, Eugen Tudorancea, Iris, David Trinh, simonstl, Ray Daly, Uros Gruber, Justin Myers, Shai Zonis, Mom & Dad, Devin Clark, Dennis Palmer, Brian Panahi Johnson, Josh Marshall, Marshall, Dennis Kerr, Matt Steele, Erik Slagter, Sacah, Justin Rainbow, Christian Nilsson, Delapouite, D.Pereira, Nicolas Hoizey, George V. 
Reilly, Dan Reeves, Bruno Laturner, Chad Jennings, Shane King, Jeremiah Lee Cohick, od3n, Stan Yamane, Marko Vucinic, Jim B, Stephen Collins, Ægir Þorsteinsson, Eric Pederson, Owain, Nathan Smith, Jeanetteurphy, Alexandre ELISÉ, Chris Peterson, Rik Watson, Luke Matthews, Justin Lowery, Morten Nielsen, Vernon Kesner, Chetan Shenoy, Paul Tregoing, Marc Grabanski, Dion Almaer, Andrew Sullivan, Keith Elsass, Tom Burke, Brian Ashenfelter, David Stuart, Karl Swedberg, Graeme, Brandon Hays, John Christopher, Gior, manoj reddy, Chad Smith, Jared Harbour, Minoru TODA, Chris Wigley, Daniel Mee, Mike, Handyface, Alex Jahraus, Carl Furrow, Rob Foulkrod, Max Shishkin, Leigh Penny Jr., Robert Ferguson, Mike van Hoenselaar, Hasse Schougaard, rajan venkataguru, Jeff Adams, Trae Robbins, Rolf Langenhuijzen, Jorge Antunes, Alex Koloskov, Hugh Greenish, Tim Jones, Jose Ochoa, Michael Brennan-White, Naga Harish Muvva, Barkóczi Dávid, Kitt Hodsden, Paul McGraw, Sascha Goldhofer, Andrew Metcalf, Markus Krogh, Michael Mathews, Matt Jared, Juanfran, Georgie Kirschner, Kenny Lee, Ted Zhang, Amit Pahwa, Inbal Sinai, Dan Raine, Schabse Laks, Michael Tervoort, Alexandre Abreu, Alan Joseph Williams, NicolasD, Cindy Wong, Reg Braithwaite, LocalPCGuy, Jon Friskics, Chris Merriman, John Pena, Jacob Katz, Sue Lockwood, Magnus Johansson, Jeremy Crapsey, Grzegorz Pawłowski, nico nuzzaci, Christine Wilks, Hans Bergren, charles montgomery, Ariel בר-לבב Fogel, Ivan Kolev, Daniel Campos, Hugh Wood, Christian Bradford, Frédéric Harper, Ionuţ Dan Popa, Jeff Trimble, Rupert Wood, Trey Carrico, Pancho Lopez, Joël kuijten, Tom A Marra, Jeff Jewiss, Jacob Rios, Paolo Di Stefano, Soledad Penades, Chris Gerber, Andrey Dolganov, Wil Moore III, Thomas Martineau, Kareem, Ben Thouret, Udi Nir, Morgan Laupies, jory carson-burson, Nathan L Smith, Eric Damon Walters, Derry Lozano-Hoyland, Geoffrey Wiseman, mkeehner, KatieK, Scott MacFarlane, Brian LaShomb, Adrien Mas, christopher ross, Ian Littman, Dan Atkinson, 
Elliot Jobe, Nick Dozier, Peter Wooley, John Hoover, dan, Martin A. Jackson, Héctor Fernando Hurtado, andy ennamorato, Paul Seltmann, Melissa Gore, Dave Pollard, Jack Smith, Philip Da Silva, Guy Israeli, @megalithic, Damian Crawford, Felix Gliesche, April Carter Grant, Heidi, jim tierney, Andrea Giammarchi, Nico Vignola, Don Jones, Chris Hartjes, Alex Howes, john gibbon, David J. Groom, BBox, Yu 'Dilys' Sun, Nate Steiner, Brandon Satrom, Brian Wyant, Wesley Hales, Ian Pouncey, Timothy Kevin Oxley, George Terezakis, sanjay raj, Jordan Harband, Marko McLion, Wolfgang Kaufmann, Pascal Peuckert, Dave Nugent, Markus Liebelt, Welling Guzman, Nick Cooley, Daniel Mesquita, Robert Syvarth, Chris Coyier, Rémy Bach, Adam Dougal, Alistair Duggin, David Loidolt, Ed Richer, Brian Chenault, GoldFire Studios, Carles Andrés, Carlos Cabo, Yuya Saito, roberto ricardo, Barnett Klane, Mike Moore, Kevin Marx, Justin Love, Joe Taylor, Paul Dijou, Michael Kohler, Rob Cassie, Mike Tierney, Cody Leroy Lindley, tofuji, Shimon Schwartz, Raymond, Luc De Brouwer, David Hayes, Rhys Brett-Bowen, Dmitry, Aziz Khoury, Dean, Scott Tolinski - Level Up, Clement Boirie, Djordje Lukic, Anton Kotenko, Rafael Corral, Philip Hurwitz, Jonathan Pidgeon, Jason Campbell, Joseph C., SwiftOne, Jan Hohner, Derick Bailey, getify, Daniel Cousineau, Chris Charlton, Eric Turner, David Turner, Joël Galeran, Dharma Vagabond, adam, Dirk van Bergen, dave ♥♫★ furf, Vedran Zakanj, Ryan McAllen, Natalie Patrice Tucker, Eric J. Bivona, Adam Spooner, Aaron Cavano, Kelly Packer, Eric J, Martin Drenovac, Emilis, Michael Pelikan, Scott F. 
Walter, Josh Freeman, Brandon Hudgeons, vijay chennupati, Bill Glennon, Robin R., Troy Forster, otaku_coder, Brad, Scott, Frederick Ostrander, Adam Brill, Seb Flippence, Michael Anderson, Jacob, Adam Randlett, Standard, Joshua Clanton, Sebastian Kouba, Chris Deck, SwordFire, Hannes Papenberg, Richard Woeber, hnzz, Rob Crowther, Jedidiah Broadbent, Sergey Chernyshev, Jay-Ar Jamon, Ben Combee, luciano bonachela, Mark Tomlinson, Kit Cambridge, Michael Melgares, Jacob Adams, Adrian Bruinhout, Bev Wieber, Scott Puleo, Thomas Herzog, April Leone, Daniel Mizieliński, Kees van Ginkel, Jon Abrams, Erwin Heiser, Avi Laviad, David newell, Jean-Francois Turcot, Niko Roberts, Erik Dana, Charles Neill, Aaron Holmes, Grzegorz Ziółkowski, Nathan Youngman, Timothy, Jacob Mather, Michael Allan, Mohit Seth, Ryan Ewing, Benjamin Van Treese, Marcelo Santos, Denis Wolf, Phil Keys, Chris Yung, Timo Tijhof, Martin Lekvall, Agendine, Greg Whitworth, Helen Humphrey, Dougal Campbell, Johannes Harth, Bruno Girin, Brian Hough, Darren Newton, Craig McPheat, Olivier Tille, Dennis Roethig, Mathias Bynens, Brendan Stromberger, sundeep, John Meyer, Ron Male, John F Croston III, gigante, Carl Bergenhem, B.J. 
May, Rebekah Tyler, Ted Foxberry, Jordan Reese, Terry Suitor, afeliz, Tom Kiefer, Darragh Duffy, Kevin Vanderbeken, Andy Pearson, Simon Mac Donald, Abid Din, Chris Joel, Tomas Theunissen, David Dick, Paul Grock, Brandon Wood, John Weis, dgrebb, Nick Jenkins, Chuck Lane, Johnny Megahan, marzsman, Tatu Tamminen, Geoffrey Knauth, Alexander Tarmolov, Jeremy Tymes, Chad Auld, Sean Parmelee, Rob Staenke, Dan Bender, Yannick derwa, Joshua Jones, Geert Plaisier, Tom LeZotte, Christen Simpson, Stefan Bruvik, Justin Falcone, Carlos Santana, Michael Weiss, Pablo Villoslada, Peter deHaan, Dimitris Iliopoulos, seyDoggy, Adam Jordens, Noah Kantrowitz, Amol M, Matthew Winnard, Dirk Ginader, Phinam Bui, David Rapson, Andrew Baxter, Florian Bougel, Michael George, Alban Escalier, Daniel Sellers, Sasha Rudan, John Green, Robert Kowalski, David I. Teixeira (@ditma, Charles Carpenter, Justin Yost, Sam S, Denis Ciccale, Kevin Sheurs, Yannick Croissant, Pau Fracés, Stephen McGowan, Shawn Searcy, Chris Ruppel, Kevin Lamping, Jessica Campbell, Christopher Schmitt, Sablons, Jonathan Reisdorf, Bunni Gek, Teddy Huff, Michael Mullany, Michael Fürstenberg, Carl Henderson, Rick Yoesting, Scott Nichols, Hernán Ciudad, Andrew Maier, Mike Stapp, Jesse Shawl, Sérgio Lopes, jsulak, Shawn Price, Joel Clermont, Chris Ridmann, Sean Timm, Jason Finch, Aiden Montgomery, Elijah Manor, Derek Gathright, Jesse Harlin, Dillon Curry, Courtney Myers, Diego Cadenas, Arne de Bree, João Paulo Dubas, James Taylor, Philipp Kraeutli, Mihai Păun, Sam Gharegozlou, joshjs, Matt Murchison, Eric Windham, Timo Behrmann, Andrew Hall, joshua price, Théophile Villard\n\nThis book series is being produced in an open source fashion, including editing and production. We owe GitHub a debt of gratitude for making that sort of thing possible for the community!\n\nThank you again to all the countless folks I didn't name but to whom I nonetheless owe thanks. 
May this book series be \"owned\" by all of us and serve to contribute to increasing awareness and understanding of the JavaScript language, to the benefit of all current and future community contributors.\n"
  },
  {
    "path": "up & going/ch1.md",
    "content": "# You Don't Know JS: Up & Going\n# Chapter 1: Into Programming\n\nWelcome to the *You Don't Know JS* (*YDKJS*) series.\n\n*Up & Going* is an introduction to several basic concepts of programming -- of course we lean toward JavaScript (often abbreviated JS) specifically -- and how to approach and understand the rest of the titles in this series. Especially if you're just getting into programming and/or JavaScript, this book will briefly explore what you need to get *up and going*.\n\nThis book starts off explaining the basic principles of programming at a very high level. It's mostly intended if you are starting *YDKJS* with little to no prior programming experience, and are looking to these books to help get you started along a path to understanding programming through the lens of JavaScript.\n\nChapter 1 should be approached as a quick overview of the things you'll want to learn more about and practice to get *into programming*. There are also many other fantastic programming introduction resources that can help you dig into these topics further, and I encourage you to learn from them in addition to this chapter.\n\nOnce you feel comfortable with general programming basics, Chapter 2 will help guide you to a familiarity with JavaScript's flavor of programming. Chapter 2 introduces what JavaScript is about, but again, it's not a comprehensive guide -- that's what the rest of the *YDKJS* books are for!\n\nIf you're already fairly comfortable with JavaScript, first check out Chapter 3 as a brief glimpse of what to expect from *YDKJS*, then jump right in!\n\n## Code\n\nLet's start from the beginning.\n\nA program, often referred to as *source code* or just *code*, is a set of special instructions to tell the computer what tasks to perform. 
Usually code is saved in a text file, although with JavaScript you can also type code directly into a developer console in a browser, which we'll cover shortly.\n\nThe set of rules for valid format and combinations of instructions is called a *computer language*, sometimes referred to as its *syntax*, much the same as English tells you how to spell words and how to create valid sentences using words and punctuation.\n\n### Statements\n\nIn a computer language, a group of words, numbers, and operators that performs a specific task is a *statement*. In JavaScript, a statement might look as follows:\n\n```js\na = b * 2;\n```\n\nThe characters `a` and `b` are called *variables* (see \"Variables\"), which are like simple boxes you can store any of your stuff in. In programs, variables hold values (like the number `42`) to be used by the program. Think of them as symbolic placeholders for the values themselves.\n\nBy contrast, the `2` is just a value itself, called a *literal value*, because it stands alone without being stored in a variable.\n\nThe `=` and `*` characters are *operators* (see \"Operators\") -- they perform actions with the values and variables such as assignment and mathematic multiplication.\n\nMost statements in JavaScript conclude with a semicolon (`;`) at the end.\n\nThe statement `a = b * 2;` tells the computer, roughly, to get the current value stored in the variable `b`, multiply that value by `2`, then store the result back into another variable we call `a`.\n\nPrograms are just collections of many such statements, which together describe all the steps that it takes to perform your program's purpose.\n\n### Expressions\n\nStatements are made up of one or more *expressions*. 
An expression is any reference to a variable or value, or a set of variable(s) and value(s) combined with operators.\n\nFor example:\n\n```js\na = b * 2;\n```\n\nThis statement has four expressions in it:\n\n* `2` is a *literal value expression*\n* `b` is a *variable expression*, which means to retrieve its current value\n* `b * 2` is an *arithmetic expression*, which means to do the multiplication\n* `a = b * 2` is an *assignment expression*, which means to assign the result of the `b * 2` expression to the variable `a` (more on assignments later)\n\nA general expression that stands alone is also called an *expression statement*, such as the following:\n\n```js\nb * 2;\n```\n\nThis flavor of expression statement is not very common or useful, as generally it wouldn't have any effect on the running of the program -- it would retrieve the value of `b` and multiply it by `2`, but then wouldn't do anything with that result.\n\nA more common expression statement is a *call expression* statement (see \"Functions\"), as the entire statement is the function call expression itself:\n\n```js\nalert( a );\n```\n\n### Executing a Program\n\nHow do those collections of programming statements tell the computer what to do? The program needs to be *executed*, also referred to as *running the program*.\n\nStatements like `a = b * 2` are helpful for developers when reading and writing, but are not actually in a form the computer can directly understand. 
So a special utility on the computer (either an *interpreter* or a *compiler*) is used to translate the code you write into commands a computer can understand.\n\nFor some computer languages, this translation of commands is typically done from top to bottom, line by line, every time the program is run, which is usually called *interpreting* the code.\n\nFor other languages, the translation is done ahead of time, called *compiling* the code, so when the program *runs* later, what's running is actually the already compiled computer instructions ready to go.\n\nIt's typically asserted that JavaScript is *interpreted*, because your JavaScript source code is processed each time it's run. But that's not entirely accurate. The JavaScript engine actually *compiles* the program on the fly and then immediately runs the compiled code.\n\n**Note:** For more information on JavaScript compiling, see the first two chapters of the *Scope & Closures* title of this series.\n\n## Try It Yourself\n\nThis chapter is going to introduce each programming concept with simple snippets of code, all written in JavaScript (obviously!).\n\nIt cannot be emphasized enough: while you go through this chapter -- and you may need to spend the time to go over it several times -- you should practice each of these concepts by typing the code yourself. The easiest way to do that is to open up the developer tools console in your nearest browser (Firefox, Chrome, IE, etc.).\n\n**Tip:** Typically, you can launch the developer console with a keyboard shortcut or from a menu item. For more detailed information about launching and using the console in your favorite browser, see \"Mastering The Developer Tools Console\" (http://blog.teamtreehouse.com/mastering-developer-tools-console). To type multiple lines into the console at once, use `<shift> + <enter>` to move to the next new line. 
Once you hit `<enter>` by itself, the console will run everything you've just typed.\n\nLet's get familiar with the process of running code in the console. First, I suggest opening up an empty tab in your browser. I prefer to do this by typing `about:blank` into the address bar. Then, make sure your developer console is open, as we just mentioned.\n\nNow, type this code and see how it runs:\n\n```js\na = 21;\n\nb = a * 2;\n\nconsole.log( b );\n```\n\nTyping the preceding code into the console in Chrome should produce something like the following:\n\n<img src=\"fig1.png\" width=\"500\">\n\nGo on, try it. The best way to learn programming is to start coding!\n\n### Output\n\nIn the previous code snippet, we used `console.log(..)`. Briefly, let's look at what that line of code is all about.\n\nYou may have guessed, but that's exactly how we print text (aka *output* to the user) in the developer console. There are two characteristics of that statement that we should explain.\n\nFirst, the `log( b )` part is referred to as a function call (see \"Functions\"). What's happening is we're handing the `b` variable to that function, which asks it to take the value of `b` and print it to the console.\n\nSecond, the `console.` part is an object reference where the `log(..)` function is located. We cover objects and their properties in more detail in Chapter 2.\n\nAnother way of creating output that you can see is to run an `alert(..)` statement. For example:\n\n```js\nalert( b );\n```\n\nIf you run that, you'll notice that instead of printing the output to the console, it shows a popup \"OK\" box with the contents of the `b` variable. 
However, using `console.log(..)` is generally going to make learning about coding and running your programs in the console easier than using `alert(..)`, because you can output many values at once without interrupting the browser interface.\n\nFor this book, we'll use `console.log(..)` for output.\n\n### Input\n\nWhile we're discussing output, you may also wonder about *input* (i.e., receiving information from the user).\n\nThe most common way that happens is for the HTML page to show form elements (like text boxes) to a user that they can type into, and then using JS to read those values into your program's variables.\n\nBut there's an easier way to get input for simple learning and demonstration purposes such as what you'll be doing throughout this book. Use the `prompt(..)` function:\n\n```js\nage = prompt( \"Please tell me your age:\" );\n\nconsole.log( age );\n```\n\nAs you may have guessed, the message you pass to `prompt(..)` -- in this case, `\"Please tell me your age:\"` -- is printed into the popup.\n\nThis should look similar to the following:\n\n<img src=\"fig2.png\" width=\"500\">\n\nOnce you submit the input text by clicking \"OK,\" you'll observe that the value you typed is stored in the `age` variable, which we then *output* with `console.log(..)`:\n\n<img src=\"fig3.png\" width=\"500\">\n\nTo keep things simple while we're learning basic programming concepts, the examples in this book will not require input. But now that you've seen how to use `prompt(..)`, if you want to challenge yourself you can try to use input in your explorations of the examples.\n\n## Operators\n\nOperators are how we perform actions on variables and values. We've already seen two JavaScript operators, the `=` and the `*`.\n\nThe `*` operator performs mathematic multiplication. 
Simple enough, right?\n\nThe `=` equals operator is used for *assignment* -- we first calculate the value on the *right-hand side* (source value) of the `=` and then put it into the variable that we specify on the *left-hand side* (target variable).\n\n**Warning:** This may seem like a strange reverse order to specify assignment. Instead of `a = 42`, some might prefer to flip the order so the source value is on the left and the target variable is on the right, like `42 -> a` (this is not valid JavaScript!). Unfortunately, the `a = 42` ordered form, and similar variations, is quite prevalent in modern programming languages. If it feels unnatural, just spend some time rehearsing that ordering in your mind to get accustomed to it.\n\nConsider:\n\n```js\na = 2;\nb = a + 1;\n```\n\nHere, we assign the `2` value to the `a` variable. Then, we get the value of the `a` variable (still `2`), add `1` to it resulting in the value `3`, then store that value in the `b` variable.\n\nWhile not technically an operator, you'll need the keyword `var` in every program, as it's the primary way you *declare* (aka *create*) *var*iables (see \"Variables\").\n\nYou should always declare the variable by name before you use it. But you only need to declare a variable once for each *scope* (see \"Scope\"); it can be used as many times after that as needed. 
For example:\n\n```js\nvar a = 20;\n\na = a + 1;\na = a * 2;\n\nconsole.log( a );\t// 42\n```\n\nHere are some of the most common operators in JavaScript:\n\n* Assignment: `=` as in `a = 2`.\n* Math: `+` (addition), `-` (subtraction), `*` (multiplication), and `/` (division), as in `a * 3`.\n* Compound Assignment: `+=`, `-=`, `*=`, and `/=` are compound operators that combine a math operation with assignment, as in `a += 2` (same as `a = a + 2`).\n* Increment/Decrement: `++` (increment), `--` (decrement), as in `a++` (similar to `a = a + 1`).\n* Object Property Access: `.` as in `console.log()`.\n\n   Objects are values that hold other values at specific named locations called properties. `obj.a` means an object value called `obj` with a property of the name `a`. Properties can alternatively be accessed as `obj[\"a\"]`. See Chapter 2.\n* Equality: `==` (loose-equals), `===` (strict-equals), `!=` (loose not-equals), `!==` (strict not-equals), as in `a == b`.\n\n   See \"Values & Types\" and Chapter 2.\n* Comparison: `<` (less than), `>` (greater than), `<=` (less than or loose-equals), `>=` (greater than or loose-equals), as in `a <= b`.\n\n   See \"Values & Types\" and Chapter 2.\n* Logical: `&&` (and), `||` (or), as in `a || b` that selects either `a` *or* `b`.\n\n   These operators are used to express compound conditionals (see \"Conditionals\"), like if either `a` *or* `b` is true.\n\n**Note:** For much more detail, and coverage of operators not mentioned here, see the Mozilla Developer Network (MDN)'s \"Expressions and Operators\" (https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Expressions_and_Operators).\n\n## Values & Types\n\nIf you ask an employee at a phone store how much a certain phone costs, and they say \"ninety-nine, ninety-nine\" (i.e., $99.99), they're giving you an actual numeric dollar figure that represents what you'll need to pay (plus taxes) to buy it. 
If you want to buy two of those phones, you can easily do the mental math to double that value to get $199.98 for your base cost.\n\nIf that same employee picks up another similar phone but says it's \"free\" (perhaps with air quotes), they're not giving you a number, but instead another kind of representation of your expected cost ($0.00) -- the word \"free.\"\n\nWhen you later ask if the phone includes a charger, that answer could only have been either \"yes\" or \"no.\"\n\nIn very similar ways, when you express values in a program, you choose different representations for those values based on what you plan to do with them.\n\nThese different representations for values are called *types* in programming terminology. JavaScript has built-in types for each of these so called *primitive* values:\n\n* When you need to do math, you want a `number`.\n* When you need to print a value on the screen, you need a `string` (one or more characters, words, sentences).\n* When you need to make a decision in your program, you need a `boolean` (`true` or `false`).\n\nValues that are included directly in the source code are called *literals*. `string` literals are surrounded by double quotes `\"...\"` or single quotes (`'...'`) -- the only difference is stylistic preference. `number` and `boolean` literals are just presented as is (i.e., `42`, `true`, etc.).\n\nConsider:\n\n```js\n\"I am a string\";\n'I am also a string';\n\n42;\n\ntrue;\nfalse;\n```\n\nBeyond `string`/`number`/`boolean` value types, it's common for programming languages to provide *arrays*, *objects*, *functions*, and more. 
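As a quick sketch, you can ask JS which built-in type a value is with the `typeof` operator (covered properly in Chapter 2):

```js
console.log( typeof 42 );		// "number"
console.log( typeof "abc" );	// "string"
console.log( typeof true );		// "boolean"
```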
We'll cover much more about values and types throughout this chapter and the next.\n\n### Converting Between Types\n\nIf you have a `number` but need to print it on the screen, you need to convert the value to a `string`, and in JavaScript this conversion is called \"coercion.\" Similarly, if someone enters a series of numeric characters into a form on an ecommerce page, that's a `string`, but if you need to then use that value to do math operations, you need to *coerce* it to a `number`.\n\nJavaScript provides several different facilities for forcibly coercing between *types*. For example:\n\n```js\nvar a = \"42\";\nvar b = Number( a );\n\nconsole.log( a );\t// \"42\"\nconsole.log( b );\t// 42\n```\n\nUsing `Number(..)` (a built-in function) as shown is an *explicit* coercion from any other type to the `number` type. That should be pretty straightforward.\n\nBut a controversial topic is what happens when you try to compare two values that are not already of the same type, which would require *implicit* coercion.\n\nWhen comparing the string `\"99.99\"` to the number `99.99`, most people would agree they are equivalent. But they're not exactly the same, are they? It's the same value in two different representations, two different *types*. You could say they're \"loosely equal,\" couldn't you?\n\nTo help you out in these common situations, JavaScript will sometimes kick in and *implicitly* coerce values to the matching types.\n\nSo if you use the `==` loose equals operator to make the comparison `\"99.99\" == 99.99`, JavaScript will convert the left-hand side `\"99.99\"` to its `number` equivalent `99.99`. The comparison then becomes `99.99 == 99.99`, which is of course `true`.\n\nWhile designed to help you, implicit coercion can create confusion if you haven't taken the time to learn the rules that govern its behavior. 
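Here's that very comparison as runnable code, contrasted with the strict-equals `===` operator, which performs no coercion at all:

```js
console.log( "99.99" == 99.99 );	// true -- the string is coerced to the number 99.99 first
console.log( "99.99" === 99.99 );	// false -- different types, and no coercion is performed
```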
Most JS developers never have, so the common feeling is that implicit coercion is confusing and harms programs with unexpected bugs, and should thus be avoided. It's even sometimes called a flaw in the design of the language.\n\nHowever, implicit coercion is a mechanism that *can be learned*, and moreover *should be learned* by anyone wishing to take JavaScript programming seriously. Not only is it not confusing once you learn the rules, it can actually make your programs better! The effort is well worth it.\n\n**Note:** For more information on coercion, see Chapter 2 of this title and Chapter 4 of the *Types & Grammar* title of this series.\n\n## Code Comments\n\nThe phone store employee might jot down some notes on the features of a newly released phone or on the new plans her company offers. These notes are only for the employee -- they're not for customers to read. Nevertheless, these notes help the employee do her job better by documenting the hows and whys of what she should tell customers.\n\nOne of the most important lessons you can learn about writing code is that it's not just for the computer. Code is every bit as much, if not more, for the developer as it is for the compiler.\n\nYour computer only cares about machine code, a series of binary 0s and 1s, that comes from *compilation*. There's a nearly infinite number of programs you could write that yield the same series of 0s and 1s. The choices you make about how to write your program matter -- not only to you, but to your other team members and even to your future self.\n\nYou should strive not just to write programs that work correctly, but programs that make sense when examined. You can go a long way in that effort by choosing good names for your variables (see \"Variables\") and functions (see \"Functions\").\n\nBut another important part is code comments. These are bits of text in your program that are inserted purely to explain things to a human. 
The interpreter/compiler will always ignore these comments.\n\nThere are lots of opinions on what makes well-commented code; we can't really define absolute universal rules. But some observations and guidelines are quite useful:\n\n* Code without comments is suboptimal.\n* Too many comments (one per line, for example) is probably a sign of poorly written code.\n* Comments should explain *why*, not *what*. They can optionally explain *how* if that's particularly confusing.\n\nIn JavaScript, there are two types of comments possible: a single-line comment and a multiline comment.\n\nConsider:\n\n```js\n// This is a single-line comment\n\n/* But this is\n       a multiline\n             comment.\n                      */\n```\n\nThe `//` single-line comment is appropriate if you're going to put a comment right above a single statement, or even at the end of a line. Everything on the line after the `//` is treated as the comment (and thus ignored by the compiler), all the way to the end of the line. There's no restriction to what can appear inside a single-line comment.\n\nConsider:\n\n```js\nvar a = 42;\t\t// 42 is the meaning of life\n```\n\nThe `/* .. */` multiline comment is appropriate if you have several lines worth of explanation to make in your comment.\n\nHere's a common usage of multiline comments:\n\n```js\n/* The following value is used because\n   it has been shown that it answers\n   every question in the universe. */\nvar a = 42;\n```\n\nIt can also appear anywhere on a line, even in the middle of a line, because the `*/` ends it. For example:\n\n```js\nvar a = /* arbitrary value */ 42;\n\nconsole.log( a );\t// 42\n```\n\nThe only thing that cannot appear inside a multiline comment is a `*/`, because that would be interpreted to end the comment.\n\nYou will definitely want to begin your learning of programming by starting off with the habit of commenting code. 
Throughout the rest of this chapter, you'll see I use comments to explain things, so do the same in your own practice. Trust me, everyone who reads your code will thank you!\n\n## Variables\n\nMost useful programs need to track a value as it changes over the course of the program, undergoing different operations as called for by your program's intended tasks.\n\nThe easiest way to go about that in your program is to assign a value to a symbolic container, called a *variable* -- so called because the value in this container can *vary* over time as needed.\n\nIn some programming languages, you declare a variable (container) to hold a specific type of value, such as `number` or `string`. *Static typing*, otherwise known as *type enforcement*, is typically cited as a benefit for program correctness by preventing unintended value conversions.\n\nOther languages emphasize types for values instead of variables. *Dynamic typing*, sometimes imprecisely called *weak typing*, allows a variable to hold any type of value at any time. It's typically cited as a benefit for program flexibility by allowing a single variable to represent a value no matter what type form that value may take at any given moment in the program's logic flow.\n\nJavaScript uses the latter approach, *dynamic typing*, meaning variables can hold values of any *type* without any *type* enforcement.\n\nAs mentioned earlier, we declare a variable using the `var` statement -- notice there's no other *type* information in the declaration. 
Consider this simple program:\n\n```js\nvar amount = 99.99;\n\namount = amount * 2;\n\nconsole.log( amount );\t\t// 199.98\n\n// convert `amount` to a string, and\n// add \"$\" on the beginning\namount = \"$\" + String( amount );\n\nconsole.log( amount );\t\t// \"$199.98\"\n```\n\nThe `amount` variable starts out holding the number `99.99`, and then holds the `number` result of `amount * 2`, which is `199.98`.\n\nThe first `console.log(..)` command has to *implicitly* coerce that `number` value to a `string` to print it out.\n\nThen the statement `amount = \"$\" + String(amount)` *explicitly* coerces the `199.98` value to a `string` and adds a `\"$\"` character to the beginning. At this point, `amount` now holds the `string` value `\"$199.98\"`, so the second `console.log(..)` statement doesn't need to do any coercion to print it out.\n\nJavaScript developers will note the flexibility of using the `amount` variable for each of the `99.99`, `199.98`, and the `\"$199.98\"` values. Static-typing enthusiasts would prefer a separate variable like `amountStr` to hold the final `\"$199.98\"` representation of the value, because it's a different type.\n\nEither way, you'll note that `amount` holds a running value that changes over the course of the program, illustrating the primary purpose of variables: managing program *state*.\n\nIn other words, *state* is tracking the changes to values as your program runs.\n\nAnother common usage of variables is for centralizing value setting. This is more typically called *constants*, when you declare a variable with a value and intend for that value to *not change* throughout the program.\n\nYou declare these *constants*, often at the top of a program, so that it's convenient for you to have one place to go to alter a value if you need to. 
By convention, JavaScript variables used as constants are usually capitalized, with underscores `_` between multiple words.\n\nHere's a silly example:\n\n```js\nvar TAX_RATE = 0.08;\t// 8% sales tax\n\nvar amount = 99.99;\n\namount = amount * 2;\n\namount = amount + (amount * TAX_RATE);\n\nconsole.log( amount );\t\t\t\t// 215.9784\nconsole.log( amount.toFixed( 2 ) );\t// \"215.98\"\n```\n\n**Note:** Similar to how `console.log(..)` is a function `log(..)` accessed as an object property on the `console` value, `toFixed(..)` here is a function that can be accessed on `number` values. JavaScript `number`s aren't automatically formatted for dollars -- the engine doesn't know what your intent is and there's no type for currency. `toFixed(..)` lets us specify how many decimal places we'd like the `number` rounded to, and it produces the `string` as necessary.\n\nThe `TAX_RATE` variable is only *constant* by convention -- there's nothing special in this program that prevents it from being changed. But if the city raises the sales tax rate to 9%, we can still easily update our program by setting the `TAX_RATE` assigned value to `0.09` in one place, instead of finding many occurrences of the value `0.08` strewn throughout the program and updating all of them.\n\nThe newest version of JavaScript at the time of this writing (commonly called \"ES6\") includes a new way to declare *constants*, by using `const` instead of `var`:\n\n```js\n// as of ES6:\nconst TAX_RATE = 0.08;\n\nvar amount = 99.99;\n\n// ..\n```\n\nConstants are useful just like variables with unchanged values, except that constants also prevent accidentally changing the value somewhere else after the initial setting. 
If you tried to assign any different value to `TAX_RATE` after that first declaration, your program would reject the change (and in strict mode, fail with an error -- see \"Strict Mode\" in Chapter 2).\n\nBy the way, that kind of \"protection\" against mistakes is similar to the static-typing type enforcement, so you can see why static types in other languages can be attractive!\n\n**Note:** For more information about how different values in variables can be used in your programs, see the *Types & Grammar* title of this series.\n\n## Blocks\n\nThe phone store employee must go through a series of steps to complete the checkout as you buy your new phone.\n\nSimilarly, in code we often need to group a series of statements together, which we often call a *block*. In JavaScript, a block is defined by wrapping one or more statements inside a curly-brace pair `{ .. }`. Consider:\n\n```js\nvar amount = 99.99;\n\n// a general block\n{\n\tamount = amount * 2;\n\tconsole.log( amount );\t// 199.98\n}\n```\n\nThis kind of standalone `{ .. }` general block is valid, but isn't as commonly seen in JS programs. Typically, blocks are attached to some other control statement, such as an `if` statement (see \"Conditionals\") or a loop (see \"Loops\"). For example:\n\n```js\nvar amount = 99.99;\n\n// is amount big enough?\nif (amount > 10) {\t\t\t// <-- block attached to `if`\n\tamount = amount * 2;\n\tconsole.log( amount );\t// 199.98\n}\n```\n\nWe'll explain `if` statements in the next section, but as you can see, the `{ .. }` block with its two statements is attached to `if (amount > 10)`; the statements inside the block will only be processed if the conditional passes.\n\n**Note:** Unlike most other statements like `console.log(amount);`, a block statement does not need a semicolon (`;`) to conclude it.\n\n## Conditionals\n\n\"Do you want to add on the extra screen protectors to your purchase, for $9.99?\" The helpful phone store employee has asked you to make a decision. 
And you may need to first consult the current *state* of your wallet or bank account to answer that question. But obviously, this is just a simple \"yes or no\" question.\n\nThere are quite a few ways we can express *conditionals* (aka decisions) in our programs.\n\nThe most common one is the `if` statement. Essentially, you're saying, \"*If* this condition is true, do the following...\". For example:\n\n```js\nvar bank_balance = 302.13;\nvar amount = 99.99;\n\nif (amount < bank_balance) {\n\tconsole.log( \"I want to buy this phone!\" );\n}\n```\n\nThe `if` statement requires an expression in between the parentheses `( )` that can be treated as either `true` or `false`. In this program, we provided the expression `amount < bank_balance`, which indeed will either evaluate to `true` or `false` depending on the amount in the `bank_balance` variable.\n\nYou can even provide an alternative if the condition isn't true, called an `else` clause. Consider:\n\n```js\nconst ACCESSORY_PRICE = 9.99;\n\nvar bank_balance = 302.13;\nvar amount = 99.99;\n\namount = amount * 2;\n\n// can we afford the extra purchase?\nif ( amount < bank_balance ) {\n\tconsole.log( \"I'll take the accessory!\" );\n\tamount = amount + ACCESSORY_PRICE;\n}\n// otherwise:\nelse {\n\tconsole.log( \"No, thanks.\" );\n}\n```\n\nHere, if `amount < bank_balance` is `true`, we'll print out `\"I'll take the accessory!\"` and add the `9.99` to our `amount` variable. Otherwise, the `else` clause says we'll just politely respond with `\"No, thanks.\"` and leave `amount` unchanged.\n\nAs we discussed in \"Values & Types\" earlier, values that aren't already of an expected type are often coerced to that type. The `if` statement expects a `boolean`, but if you pass it something that's not already `boolean`, coercion will occur.\n\nJavaScript defines a list of specific values that are considered \"falsy\" because when coerced to a `boolean`, they become `false` -- these include values like `0` and `\"\"`. 
Any other value not on the \"falsy\" list is automatically \"truthy\" -- when coerced to a `boolean`, it becomes `true`. Truthy values include things like `99.99` and `\"free\"`. See \"Truthy & Falsy\" in Chapter 2 for more information.\n\n*Conditionals* exist in other forms besides the `if`. For example, the `switch` statement can be used as a shorthand for a series of `if..else` statements (see Chapter 2). Loops (see \"Loops\") use a *conditional* to determine if the loop should keep going or stop.\n\n**Note:** For deeper information about the coercions that can occur implicitly in the test expressions of *conditionals*, see Chapter 4 of the *Types & Grammar* title of this series.\n\n## Loops\n\nDuring busy times, there's a waiting list for customers who need to speak to the phone store employee. While there are still people on that list, she just needs to keep serving the next customer.\n\nRepeating a set of actions until a certain condition fails -- in other words, repeating only while the condition holds -- is the job of programming loops; loops can take different forms, but they all satisfy this basic behavior.\n\nA loop includes the test condition as well as a block (typically as `{ .. }`). 
Each time the loop block executes, that's called an *iteration*.\n\nFor example, the `while` loop and the `do..while` loop forms illustrate the concept of repeating a block of statements until a condition no longer evaluates to `true`:\n\n```js\nwhile (numOfCustomers > 0) {\n\tconsole.log( \"How may I help you?\" );\n\n\t// help the customer...\n\n\tnumOfCustomers = numOfCustomers - 1;\n}\n\n// versus:\n\ndo {\n\tconsole.log( \"How may I help you?\" );\n\n\t// help the customer...\n\n\tnumOfCustomers = numOfCustomers - 1;\n} while (numOfCustomers > 0);\n```\n\nThe only practical difference between these loops is whether the conditional is tested before the first iteration (`while`) or after the first iteration (`do..while`).\n\nIn either form, if the conditional tests as `false`, the next iteration will not run. That means if the condition is initially `false`, a `while` loop will never run, but a `do..while` loop will run just the first time.\n\nSometimes you are looping for the intended purpose of counting a certain set of numbers, like from `0` to `9` (ten numbers). You can do that by setting a loop iteration variable like `i` at value `0` and incrementing it by `1` each iteration.\n\n**Warning:** For a variety of historical reasons, programming languages almost always count things in a zero-based fashion, meaning starting with `0` instead of `1`. If you're not familiar with that mode of thinking, it can be quite confusing at first. Take some time to practice counting starting with `0` to become more comfortable with it!\n\nThe conditional is tested on each iteration, much as if there is an implied `if` statement inside the loop.\n\nWe can use JavaScript's `break` statement to stop a loop. 
Also, we can observe that it's awfully easy to create a loop that would otherwise run forever without a `break`ing mechanism.\n\nLet's illustrate:\n\n```js\nvar i = 0;\n\n// a `while..true` loop would run forever, right?\nwhile (true) {\n\t// stop the loop?\n\tif ((i <= 9) === false) {\n\t\tbreak;\n\t}\n\n\tconsole.log( i );\n\ti = i + 1;\n}\n// 0 1 2 3 4 5 6 7 8 9\n```\n\n**Warning:** This is not necessarily a practical form you'd want to use for your loops. It's presented here for illustration purposes only.\n\nWhile a `while` (or `do..while`) can accomplish the task manually, there's another syntactic form called a `for` loop for just that purpose:\n\n```js\nfor (var i = 0; i <= 9; i = i + 1) {\n\tconsole.log( i );\n}\n// 0 1 2 3 4 5 6 7 8 9\n```\n\nAs you can see, in both cases the conditional `i <= 9` is `true` for the first 10 iterations (`i` of values `0` through `9`) of either loop form, but becomes `false` once `i` is value `10`.\n\nThe `for` loop has three clauses: the initialization clause (`var i=0`), the conditional test clause (`i <= 9`), and the update clause (`i = i + 1`). So if you're going to do counting with your loop iterations, `for` is a more compact and often easier form to understand and write.\n\nThere are other specialized loop forms that are intended to iterate over specific values, such as the properties of an object (see Chapter 2) where the implied conditional test is just whether all the properties have been processed. The \"loop until a condition fails\" concept holds no matter what the form of the loop.\n\n## Functions\n\nThe phone store employee probably doesn't carry around a calculator to figure out the taxes and final purchase amount. That's a task she needs to define once and reuse over and over again. Odds are, the company has a checkout register (computer, tablet, etc.) 
with those \"functions\" built in.\n\nSimilarly, your program will almost certainly want to break up the code's tasks into reusable pieces, instead of repeatedly repeating yourself repetitiously (pun intended!). The way to do this is to define a `function`.\n\nA function is generally a named section of code that can be \"called\" by name, and the code inside it will be run each time. Consider:\n\n```js\nfunction printAmount() {\n\tconsole.log( amount.toFixed( 2 ) );\n}\n\nvar amount = 99.99;\n\nprintAmount(); // \"99.99\"\n\namount = amount * 2;\n\nprintAmount(); // \"199.98\"\n```\n\nFunctions can optionally take arguments (aka parameters) -- values you pass in. And they can also optionally return a value back.\n\n```js\nfunction printAmount(amt) {\n\tconsole.log( amt.toFixed( 2 ) );\n}\n\nfunction formatAmount() {\n\treturn \"$\" + amount.toFixed( 2 );\n}\n\nvar amount = 99.99;\n\nprintAmount( amount * 2 );\t\t// \"199.98\"\n\namount = formatAmount();\nconsole.log( amount );\t\t\t// \"$99.99\"\n```\n\nThe function `printAmount(..)` takes a parameter that we call `amt`. The function `formatAmount()` returns a value. Of course, you can also combine those two techniques in the same function.\n\nFunctions are often used for code that you plan to call multiple times, but they can also be useful just to organize related bits of code into named collections, even if you only plan to call them once.\n\nConsider:\n\n```js\nconst TAX_RATE = 0.08;\n\nfunction calculateFinalPurchaseAmount(amt) {\n\t// calculate the new amount with the tax\n\tamt = amt + (amt * TAX_RATE);\n\n\t// return the new amount\n\treturn amt;\n}\n\nvar amount = 99.99;\n\namount = calculateFinalPurchaseAmount( amount );\n\nconsole.log( amount.toFixed( 2 ) );\t\t// \"107.99\"\n```\n\nAlthough `calculateFinalPurchaseAmount(..)` is only called once, organizing its behavior into a separate named function makes the code that uses its logic (the `amount = calculateFinal...` statement) cleaner. 
If the function had more statements in it, the benefits would be even more pronounced.\n\n### Scope\n\nIf you ask the phone store employee for a phone model that her store doesn't carry, she will not be able to sell you the phone you want. She only has access to the phones in her store's inventory. You'll have to try another store to see if you can find the phone you're looking for.\n\nProgramming has a term for this concept: *scope* (technically called *lexical scope*). In JavaScript, each function gets its own scope. Scope is basically a collection of variables as well as the rules for how those variables are accessed by name. Only code inside that function can access that function's *scoped* variables.\n\nA variable name has to be unique within the same scope -- there can't be two different `a` variables sitting right next to each other. But the same variable name `a` could appear in different scopes.\n\n```js\nfunction one() {\n\t// this `a` only belongs to the `one()` function\n\tvar a = 1;\n\tconsole.log( a );\n}\n\nfunction two() {\n\t// this `a` only belongs to the `two()` function\n\tvar a = 2;\n\tconsole.log( a );\n}\n\none();\t\t// 1\ntwo();\t\t// 2\n```\n\nAlso, a scope can be nested inside another scope, just like if a clown at a birthday party blows up one balloon inside another balloon. 
If one scope is nested inside another, code inside the innermost scope can access variables from either scope.\n\nConsider:\n\n```js\nfunction outer() {\n\tvar a = 1;\n\n\tfunction inner() {\n\t\tvar b = 2;\n\n\t\t// we can access both `a` and `b` here\n\t\tconsole.log( a + b );\t// 3\n\t}\n\n\tinner();\n\n\t// we can only access `a` here\n\tconsole.log( a );\t\t\t// 1\n}\n\nouter();\n```\n\nLexical scope rules say that code in one scope can access variables of either that scope or any scope outside of it.\n\nSo, code inside the `inner()` function has access to both variables `a` and `b`, but code in `outer()` has access only to `a` -- it cannot access `b` because that variable is only inside `inner()`.\n\nRecall this code snippet from earlier:\n\n```js\nconst TAX_RATE = 0.08;\n\nfunction calculateFinalPurchaseAmount(amt) {\n\t// calculate the new amount with the tax\n\tamt = amt + (amt * TAX_RATE);\n\n\t// return the new amount\n\treturn amt;\n}\n```\n\nThe `TAX_RATE` constant (variable) is accessible from inside the `calculateFinalPurchaseAmount(..)` function, even though we didn't pass it in, because of lexical scope.\n\n**Note:** For more information about lexical scope, see the first three chapters of the *Scope & Closures* title of this series.\n\n## Practice\n\nThere is absolutely no substitute for practice in learning programming. No amount of articulate writing on my part is alone going to make you a programmer.\n\nWith that in mind, let's try practicing some of the concepts we learned here in this chapter. I'll give the \"requirements,\" and you try it first. Then consult the code listing below to see how I approached it.\n\n* Write a program to calculate the total price of your phone purchase. You will keep purchasing phones (hint: loop!) until you run out of money in your bank account. 
You'll also buy accessories for each phone as long as your purchase amount is below your mental spending threshold.\n* After you've calculated your purchase amount, add in the tax, then print out the calculated purchase amount, properly formatted.\n* Finally, check the amount against your bank account balance to see if you can afford it or not.\n* You should set up some constants for the \"tax rate,\" \"phone price,\" \"accessory price,\" and \"spending threshold,\" as well as a variable for your \"bank account balance.\"\n* You should define functions for calculating the tax and for formatting the price with a \"$\" and rounding to two decimal places.\n* **Bonus Challenge:** Try to incorporate input into this program, perhaps with the `prompt(..)` covered in \"Input\" earlier. You may prompt the user for their bank account balance, for example. Have fun and be creative!\n\nOK, go ahead. Try it. Don't peek at my code listing until you've given it a shot yourself!\n\n**Note:** Because this is a JavaScript book, I'm obviously going to solve the practice exercise in JavaScript. 
But you can do it in another language for now if you feel more comfortable.\n\nHere's my JavaScript solution for this exercise:\n\n```js\nconst SPENDING_THRESHOLD = 200;\nconst TAX_RATE = 0.08;\nconst PHONE_PRICE = 99.99;\nconst ACCESSORY_PRICE = 9.99;\n\nvar bank_balance = 303.91;\nvar amount = 0;\n\nfunction calculateTax(amount) {\n\treturn amount * TAX_RATE;\n}\n\nfunction formatAmount(amount) {\n\treturn \"$\" + amount.toFixed( 2 );\n}\n\n// keep buying phones while you still have money\nwhile (amount < bank_balance) {\n\t// buy a new phone!\n\tamount = amount + PHONE_PRICE;\n\n\t// can we afford the accessory?\n\tif (amount < SPENDING_THRESHOLD) {\n\t\tamount = amount + ACCESSORY_PRICE;\n\t}\n}\n\n// don't forget to pay the government, too\namount = amount + calculateTax( amount );\n\nconsole.log(\n\t\"Your purchase: \" + formatAmount( amount )\n);\n// Your purchase: $334.76\n\n// can you actually afford this purchase?\nif (amount > bank_balance) {\n\tconsole.log(\n\t\t\"You can't afford this purchase. :(\"\n\t);\n}\n// You can't afford this purchase. :(\n```\n\n**Note:** The simplest way to run this JavaScript program is to type it into the developer console of your nearest browser.\n\nHow did you do? It wouldn't hurt to try it again now that you've seen my code. And play around with changing some of the constants to see how the program runs with different values.\n\n## Review\n\nLearning programming doesn't have to be a complex and overwhelming process. There are just a few basic concepts you need to wrap your head around.\n\nThese act like building blocks. To build a tall tower, you start first by putting block on top of block on top of block. The same goes with programming. 
Here are some of the essential programming building blocks:\n\n* You need *operators* to perform actions on values.\n* You need values and *types* to perform different kinds of actions like math on `number`s or output with `string`s.\n* You need *variables* to store data (aka *state*) during your program's execution.\n* You need *conditionals* like `if` statements to make decisions.\n* You need *loops* to repeat tasks until a condition stops being true.\n* You need *functions* to organize your code into logical and reusable chunks.\n\nCode comments are one effective way to write more readable code, which makes your program easier to understand, maintain, and fix later if there are problems.\n\nFinally, don't neglect the power of practice. The best way to learn how to write code is to write code.\n\nI'm excited you're well on your way to learning how to code, now! Keep it up. Don't forget to check out other beginner programming resources (books, blogs, online training, etc.). This chapter and this book are a great start, but they're just a brief introduction.\n\nThe next chapter will review many of the concepts from this chapter, but from a more JavaScript-specific perspective, which will highlight most of the major topics that are addressed in deeper detail throughout the rest of the series.\n"
  },
  {
    "path": "up & going/ch2.md",
    "content": "# You Don't Know JS: Up & Going\n# Chapter 2: Into JavaScript\n\nIn the previous chapter, I introduced the basic building blocks of programming, such as variables, loops, conditionals, and functions. Of course, all the code shown has been in JavaScript. But in this chapter, we want to focus specifically on things you need to know about JavaScript to get up and going as a JS developer.\n\nWe will introduce quite a few concepts in this chapter that will not be fully explored until subsequent *YDKJS* books. You can think of this chapter as an overview of the topics covered in detail throughout the rest of this series.\n\nEspecially if you're new to JavaScript, you should expect to spend quite a bit of time reviewing the concepts and code examples here multiple times. Any good foundation is laid brick by brick, so don't expect that you'll immediately understand it all the first pass through.\n\nYour journey to deeply learn JavaScript starts here.\n\n**Note:** As I said in Chapter 1, you should definitely try all this code yourself as you read and work through this chapter. Be aware that some of the code here assumes capabilities introduced in the newest version of JavaScript at the time of this writing (commonly referred to as \"ES6\" for the 6th edition of ECMAScript -- the official name of the JS specification). If you happen to be using an older, pre-ES6 browser, the code may not work. A recent update of a modern browser (like Chrome, Firefox, or IE) should be used.\n\n## Values & Types\n\nAs we asserted in Chapter 1, JavaScript has typed values, not typed variables. 
The following built-in types are available:\n\n* `string`\n* `number`\n* `boolean`\n* `null` and `undefined`\n* `object`\n* `symbol` (new to ES6)\n\nJavaScript provides a `typeof` operator that can examine a value and tell you what type it is:\n\n```js\nvar a;\ntypeof a;\t\t\t\t// \"undefined\"\n\na = \"hello world\";\ntypeof a;\t\t\t\t// \"string\"\n\na = 42;\ntypeof a;\t\t\t\t// \"number\"\n\na = true;\ntypeof a;\t\t\t\t// \"boolean\"\n\na = null;\ntypeof a;\t\t\t\t// \"object\" -- weird, bug\n\na = undefined;\ntypeof a;\t\t\t\t// \"undefined\"\n\na = { b: \"c\" };\ntypeof a;\t\t\t\t// \"object\"\n```\n\nThe return value from the `typeof` operator is always one of six (seven as of ES6! - the \"symbol\" type) string values. That is, `typeof \"abc\"` returns `\"string\"`, not `string`.\n\nNotice how in this snippet the `a` variable holds every different type of value, and that despite appearances, `typeof a` is not asking for the \"type of `a`\", but rather for the \"type of the value currently in `a`.\" Only values have types in JavaScript; variables are just simple containers for those values.\n\n`typeof null` is an interesting case, because it errantly returns `\"object\"`, when you'd expect it to return `\"null\"`.\n\n**Warning:** This is a long-standing bug in JS, but one that is likely never going to be fixed. Too much code on the Web relies on the bug and thus fixing it would cause a lot more bugs!\n\nAlso, note `a = undefined`. We're explicitly setting `a` to the `undefined` value, but that is behaviorally no different from a variable that has no value set yet, like with the `var a;` line at the top of the snippet. A variable can get to this \"undefined\" value state in several different ways, including functions that return no values and usage of the `void` operator.\n\n### Objects\n\nThe `object` type refers to a compound value where you can set properties (named locations) that each hold their own values of any type. 
This is perhaps one of the most useful value types in all of JavaScript.\n\n```js\nvar obj = {\n\ta: \"hello world\",\n\tb: 42,\n\tc: true\n};\n\nobj.a;\t\t// \"hello world\"\nobj.b;\t\t// 42\nobj.c;\t\t// true\n\nobj[\"a\"];\t// \"hello world\"\nobj[\"b\"];\t// 42\nobj[\"c\"];\t// true\n```\n\nIt may be helpful to think of this `obj` value visually:\n\n<img src=\"fig4.png\">\n\nProperties can either be accessed with *dot notation* (i.e., `obj.a`) or *bracket notation* (i.e., `obj[\"a\"]`). Dot notation is shorter and generally easier to read, and is thus preferred when possible.\n\nBracket notation is useful if you have a property name that has special characters in it, like `obj[\"hello world!\"]` -- such properties are often referred to as *keys* when accessed via bracket notation. The `[ ]` notation requires either a variable (explained next) or a `string` *literal* (which needs to be wrapped in `\" .. \"` or `' .. '`).\n\nOf course, bracket notation is also useful if you want to access a property/key but the name is stored in another variable, such as:\n\n```js\nvar obj = {\n\ta: \"hello world\",\n\tb: 42\n};\n\nvar b = \"a\";\n\nobj[b];\t\t\t// \"hello world\"\nobj[\"b\"];\t\t// 42\n```\n\n**Note:** For more information on JavaScript `object`s, see the *this & Object Prototypes* title of this series, specifically Chapter 3.\n\nThere are a couple of other value types that you will commonly interact with in JavaScript programs: *array* and *function*. But rather than being proper built-in types, these should be thought of more like subtypes -- specialized versions of the `object` type.\n\n#### Arrays\n\nAn array is an `object` that holds values (of any type) not particularly in named properties/keys, but rather in numerically indexed positions. 
For example:\n\n```js\nvar arr = [\n\t\"hello world\",\n\t42,\n\ttrue\n];\n\narr[0];\t\t\t// \"hello world\"\narr[1];\t\t\t// 42\narr[2];\t\t\t// true\narr.length;\t\t// 3\n\ntypeof arr;\t\t// \"object\"\n```\n\n**Note:** Languages that start counting at zero, like JS does, use `0` as the index of the first element in the array.\n\nIt may be helpful to think of `arr` visually:\n\n<img src=\"fig5.png\">\n\nBecause arrays are special objects (as `typeof` implies), they can also have properties, including the automatically updated `length` property.\n\nYou theoretically could use an array as a normal object with your own named properties, or you could use an `object` but only give it numeric properties (`0`, `1`, etc.) similar to an array. However, this would generally be considered improper usage of the respective types.\n\nThe best and most natural approach is to use arrays for numerically positioned values and use `object`s for named properties.\n\n#### Functions\n\nThe other `object` subtype you'll use all over your JS programs is a function:\n\n```js\nfunction foo() {\n\treturn 42;\n}\n\nfoo.bar = \"hello world\";\n\ntypeof foo;\t\t\t// \"function\"\ntypeof foo();\t\t// \"number\"\ntypeof foo.bar;\t\t// \"string\"\n```\n\nAgain, functions are a subtype of `objects` -- `typeof` returns `\"function\"`, which implies that a `function` is a main type -- and can thus have properties, but you typically will only use function object properties (like `foo.bar`) in limited cases.\n\n**Note:** For more information on JS values and their types, see the first two chapters of the *Types & Grammar* title of this series.\n\n### Built-In Type Methods\n\nThe built-in types and subtypes we've just discussed have behaviors exposed as properties and methods that are quite powerful and useful.\n\nFor example:\n\n```js\nvar a = \"hello world\";\nvar b = 3.14159;\n\na.length;\t\t\t\t// 11\na.toUpperCase();\t\t// \"HELLO WORLD\"\nb.toFixed(4);\t\t\t// \"3.1416\"\n```\n\nThe \"how\" 
behind being able to call `a.toUpperCase()` is more complicated than just that method existing on the value.\n\nBriefly, there is a `String` (capital `S`) object wrapper form, typically called a \"native,\" that pairs with the primitive `string` type; it's this object wrapper that defines the `toUpperCase()` method on its prototype.\n\nWhen you use a primitive value like `\"hello world\"` as an `object` by referencing a property or method (e.g., `a.toUpperCase()` in the previous snippet), JS automatically \"boxes\" the value to its object wrapper counterpart (hidden under the covers).\n\nA `string` value can be wrapped by a `String` object, a `number` can be wrapped by a `Number` object, and a `boolean` can be wrapped by a `Boolean` object. For the most part, you don't need to worry about or directly use these object wrapper forms of the values -- prefer the primitive value forms in practically all cases and JavaScript will take care of the rest for you.\n\n**Note:** For more information on JS natives and \"boxing,\" see Chapter 3 of the *Types & Grammar* title of this series. To better understand the prototype of an object, see Chapter 5 of the *this & Object Prototypes* title of this series.\n\n### Comparing Values\n\nThere are two main types of value comparison that you will need to make in your JS programs: *equality* and *inequality*. The result of any comparison is a strictly `boolean` value (`true` or `false`), regardless of what value types are compared.\n\n#### Coercion\n\nWe talked briefly about coercion in Chapter 1, but let's revisit it here.\n\nCoercion comes in two forms in JavaScript: *explicit* and *implicit*. 
Explicit coercion is simply that you can see obviously from the code that a conversion from one type to another will occur, whereas implicit coercion is when the type conversion can happen as more of a non-obvious side effect of some other operation.\n\nYou've probably heard sentiments like \"coercion is evil\" drawn from the fact that there are clearly places where coercion can produce some surprising results. Perhaps nothing evokes frustration from developers more than when the language surprises them.\n\nCoercion is not evil, nor does it have to be surprising. In fact, the majority of cases you can construct with type coercion are quite sensible and understandable, and can even be used to *improve* the readability of your code. But we won't go much further into that debate -- Chapter 4 of the *Types & Grammar* title of this series covers all sides.\n\nHere's an example of *explicit* coercion:\n\n```js\nvar a = \"42\";\n\nvar b = Number( a );\n\na;\t\t\t\t// \"42\"\nb;\t\t\t\t// 42 -- the number!\n```\n\nAnd here's an example of *implicit* coercion:\n\n```js\nvar a = \"42\";\n\nvar b = a * 1;\t// \"42\" implicitly coerced to 42 here\n\na;\t\t\t\t// \"42\"\nb;\t\t\t\t// 42 -- the number!\n```\n\n#### Truthy & Falsy\n\nIn Chapter 1, we briefly mentioned the \"truthy\" and \"falsy\" nature of values: when a non-`boolean` value is coerced to a `boolean`, does it become `true` or `false`, respectively?\n\nThe specific list of \"falsy\" values in JavaScript is as follows:\n\n* `\"\"` (empty string)\n* `0`, `-0`, `NaN` (invalid `number`)\n* `null`, `undefined`\n* `false`\n\nAny value that's not on this \"falsy\" list is \"truthy.\" Here are some examples of those:\n\n* `\"hello\"`\n* `42`\n* `true`\n* `[ ]`, `[ 1, \"2\", 3 ]` (arrays)\n* `{ }`, `{ a: 42 }` (objects)\n* `function foo() { .. }` (functions)\n\nIt's important to remember that a non-`boolean` value only follows this \"truthy\"/\"falsy\" coercion if it's actually coerced to a `boolean`. 
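\n\nYou can see this list in action by explicitly coercing values with the `Boolean(..)` function -- a quick sketch (an `if` statement's test performs the same `boolean` coercion implicitly):\n\n```js\nBoolean( \"\" );\t\t\t// false\nBoolean( 0 );\t\t\t// false\nBoolean( NaN );\t\t\t// false\nBoolean( null );\t\t// false\nBoolean( undefined );\t// false\n\nBoolean( \"hello\" );\t\t// true\nBoolean( [] );\t\t\t// true -- even an empty array is truthy\nBoolean( {} );\t\t\t// true -- so is an empty object\n```\n\n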
It's not all that difficult to confuse yourself with a situation that seems like it's coercing a value to a `boolean` when it's not.\n\n#### Equality\n\nThere are four equality operators: `==`, `===`, `!=`, and `!==`. The `!` forms are of course the symmetric \"not equal\" versions of their counterparts; *non-equality* should not be confused with *inequality*.\n\nThe difference between `==` and `===` is usually characterized like this: `==` checks for value equality and `===` checks for both value and type equality. However, this is inaccurate. The proper way to characterize them is that `==` checks for value equality with coercion allowed, and `===` checks for value equality without allowing coercion; `===` is often called \"strict equality\" for this reason.\n\nConsider the implicit coercion that's allowed by the `==` loose-equality comparison and not allowed with the `===` strict-equality:\n\n```js\nvar a = \"42\";\nvar b = 42;\n\na == b;\t\t\t// true\na === b;\t\t// false\n```\n\nIn the `a == b` comparison, JS notices that the types do not match, so it goes through an ordered series of steps to coerce one or both values to a different type until the types match, at which point a simple value equality can be checked.\n\nIf you think about it, there are two possible ways `a == b` could give `true` via coercion. Either the comparison could end up as `42 == 42` or it could be `\"42\" == \"42\"`. So which is it?\n\nThe answer: `\"42\"` becomes `42`, to make the comparison `42 == 42`. In such a simple example, it doesn't really seem to matter which way that process goes, as the end result is the same. There are more complex cases where it matters not just what the end result of the comparison is, but *how* you get there.\n\nThe `a === b` comparison produces `false`, because the coercion is not allowed, so the simple value comparison obviously fails. Many developers feel that `===` is more predictable, so they advocate always using that form and staying away from `==`. 
I think this view is very shortsighted. I believe `==` is a powerful tool that helps your program, *if you take the time to learn how it works.*\n\nWe're not going to cover all the nitty-gritty details of how the coercion in `==` comparisons works here. Much of it is pretty sensible, but there are some important corner cases to be careful of. You can read section 11.9.3 of the ES5 specification (http://www.ecma-international.org/ecma-262/5.1/) to see the exact rules, and you'll be surprised at just how straightforward this mechanism is, compared to all the negative hype surrounding it.\n\nTo boil down a whole lot of details to a few simple takeaways, and help you know whether to use `==` or `===` in various situations, here are my simple rules:\n\n* If either value (aka side) in a comparison could be the `true` or `false` value, avoid `==` and use `===`.\n* If either value in a comparison could be one of these specific values (`0`, `\"\"`, or `[]` -- empty array), avoid `==` and use `===`.\n* In *all* other cases, you're safe to use `==`. Not only is it safe, but in many cases it simplifies your code in a way that improves readability.\n\nWhat these rules boil down to is requiring you to think critically about your code and about what kinds of values can come through variables that get compared for equality. If you can be certain about the values, and `==` is safe, use it! If you can't be certain about the values, use `===`. It's that simple.\n\nThe `!=` non-equality form pairs with `==`, and the `!==` form pairs with `===`. All the rules and observations we just discussed hold symmetrically for these non-equality comparisons.\n\nYou should take special note of the `==` and `===` comparison rules if you're comparing two non-primitive values, like `object`s (including `function` and `array`). 
Because those values are actually held by reference, both `==` and `===` comparisons will simply check whether the references match, not anything about the underlying values.\n\nFor example, `array`s are by default coerced to `string`s by simply joining all the values with commas (`,`) in between. You might think that two `array`s with the same contents would be `==` equal, but they're not:\n\n```js\nvar a = [1,2,3];\nvar b = [1,2,3];\nvar c = \"1,2,3\";\n\na == c;\t\t// true\nb == c;\t\t// true\na == b;\t\t// false\n```\n\n**Note:** For more information about the `==` equality comparison rules, see the ES5 specification (section 11.9.3) and also consult Chapter 4 of the *Types & Grammar* title of this series; see Chapter 2 for more information about values versus references.\n\n#### Inequality\n\nThe `<`, `>`, `<=`, and `>=` operators are used for inequality, referred to in the specification as \"relational comparison.\" Typically they will be used with ordinally comparable values like `number`s. It's easy to understand that `3 < 4`.\n\nBut JavaScript `string` values can also be compared for inequality, using typical alphabetic rules (`\"bar\" < \"foo\"`).\n\nWhat about coercion? Similar rules as `==` comparison (though not exactly identical!) apply to the inequality operators. Notably, there are no \"strict inequality\" operators that would disallow coercion the same way `===` \"strict equality\" does.\n\nConsider:\n\n```js\nvar a = 41;\nvar b = \"42\";\nvar c = \"43\";\n\na < b;\t\t// true\nb < c;\t\t// true\n```\n\nWhat happens here? In section 11.8.5 of the ES5 specification, it says that if both values in the `<` comparison are `string`s, as it is with `b < c`, the comparison is made lexicographically (aka alphabetically like a dictionary). 
But if one or both is not a `string`, as it is with `a < b`, then both values are coerced to be `number`s, and a typical numeric comparison occurs.\n\nThe biggest gotcha you may run into here with comparisons between potentially different value types -- remember, there are no \"strict inequality\" forms to use -- is when one of the values cannot be made into a valid number, such as:\n\n```js\nvar a = 42;\nvar b = \"foo\";\n\na < b;\t\t// false\na > b;\t\t// false\na == b;\t\t// false\n```\n\nWait, how can all three of those comparisons be `false`? Because the `b` value is being coerced to the \"invalid number value\" `NaN` in the `<` and `>` comparisons, and the specification says that `NaN` is neither greater-than nor less-than any other value.\n\nThe `==` comparison fails for a different reason. `a == b` could fail if it's interpreted either as `42 == NaN` or `\"42\" == \"foo\"` -- as we explained earlier, the former is the case.\n\n**Note:** For more information about the inequality comparison rules, see section 11.8.5 of the ES5 specification and also consult Chapter 4 of the *Types & Grammar* title of this series.\n\n## Variables\n\nIn JavaScript, variable names (including function names) must be valid *identifiers*. The strict and complete rules for valid characters in identifiers are a little complex when you consider nontraditional characters such as Unicode. If you only consider typical ASCII alphanumeric characters though, the rules are simple.\n\nAn identifier must start with `a`-`z`, `A`-`Z`, `$`, or `_`. It can then contain any of those characters plus the numerals `0`-`9`.\n\nGenerally, the same rules apply to a property name as to a variable identifier. However, certain words cannot be used as variables, but are OK as property names. These words are called \"reserved words,\" and include the JS keywords (`for`, `in`, `if`, etc.) 
as well as `null`, `true`, and `false`.\n\n**Note:** For more information about reserved words, see Appendix A of the *Types & Grammar* title of this series.\n\n### Function Scopes\n\nYou use the `var` keyword to declare a variable that will belong to the current function scope, or the global scope if at the top level outside of any function.\n\n#### Hoisting\n\nWherever a `var` appears inside a scope, that declaration is taken to belong to the entire scope and accessible everywhere throughout.\n\nMetaphorically, this behavior is called *hoisting*, when a `var` declaration is conceptually \"moved\" to the top of its enclosing scope. Technically, this process is more accurately explained by how code is compiled, but we can skip over those details for now.\n\nConsider:\n\n```js\nvar a = 2;\n\nfoo();\t\t\t\t\t// works because `foo()`\n\t\t\t\t\t\t// declaration is \"hoisted\"\n\nfunction foo() {\n\ta = 3;\n\n\tconsole.log( a );\t// 3\n\n\tvar a;\t\t\t\t// declaration is \"hoisted\"\n\t\t\t\t\t\t// to the top of `foo()`\n}\n\nconsole.log( a );\t// 2\n```\n\n**Warning:** It's not common or a good idea to rely on variable *hoisting* to use a variable earlier in its scope than its `var` declaration appears; it can be quite confusing. It's much more common and accepted to use *hoisted* function declarations, as we do with the `foo()` call appearing before its formal declaration.\n\n#### Nested Scopes\n\nWhen you declare a variable, it is available anywhere in that scope, as well as any lower/inner scopes. 
For example:\n\n```js\nfunction foo() {\n\tvar a = 1;\n\n\tfunction bar() {\n\t\tvar b = 2;\n\n\t\tfunction baz() {\n\t\t\tvar c = 3;\n\n\t\t\tconsole.log( a, b, c );\t// 1 2 3\n\t\t}\n\n\t\tbaz();\n\t\tconsole.log( a, b );\t\t// 1 2\n\t}\n\n\tbar();\n\tconsole.log( a );\t\t\t\t// 1\n}\n\nfoo();\n```\n\nNotice that `c` is not available inside of `bar()`, because it's declared only inside the inner `baz()` scope, and that `b` is not available to `foo()` for the same reason.\n\nIf you try to access a variable's value in a scope where it's not available, you'll get a `ReferenceError` thrown. If you try to set a variable that hasn't been declared, you'll either end up creating a variable in the top-level global scope (bad!) or getting an error, depending on \"strict mode\" (see \"Strict Mode\"). Let's take a look:\n\n```js\nfunction foo() {\n\ta = 1;\t// `a` not formally declared\n}\n\nfoo();\na;\t\t\t// 1 -- oops, auto global variable :(\n```\n\nThis is a very bad practice. Don't do it! Always formally declare your variables.\n\nIn addition to creating declarations for variables at the function level, ES6 *lets* you declare variables to belong to individual blocks (pairs of `{ .. }`), using the `let` keyword. Besides some nuanced details, the scoping rules will behave roughly the same as we just saw with functions:\n\n```js\nfunction foo() {\n\tvar a = 1;\n\n\tif (a >= 1) {\n\t\tlet b = 2;\n\n\t\twhile (b < 5) {\n\t\t\tlet c = b * 2;\n\t\t\tb++;\n\n\t\t\tconsole.log( a + c );\n\t\t}\n\t}\n}\n\nfoo();\n// 5 7 9\n```\n\nBecause of using `let` instead of `var`, `b` will belong only to the `if` statement and thus not to the whole `foo()` function's scope. Similarly, `c` belongs only to the `while` loop. Block scoping is very useful for managing your variable scopes in a more fine-grained fashion, which can make your code much easier to maintain over time.\n\n**Note:** For more information about scope, see the *Scope & Closures* title of this series. 
See the *ES6 & Beyond* title of this series for more information about `let` block scoping.\n\n## Conditionals\n\nIn addition to the `if` statement we introduced briefly in Chapter 1, JavaScript provides a few other conditional mechanisms that we should take a look at.\n\nSometimes you may find yourself writing a series of `if..else..if` statements like this:\n\n```js\nif (a == 2) {\n\t// do something\n}\nelse if (a == 10) {\n\t// do another thing\n}\nelse if (a == 42) {\n\t// do yet another thing\n}\nelse {\n\t// fallback to here\n}\n```\n\nThis structure works, but it's a little verbose because you need to specify the `a` test for each case. Here's another option, the `switch` statement:\n\n```js\nswitch (a) {\n\tcase 2:\n\t\t// do something\n\t\tbreak;\n\tcase 10:\n\t\t// do another thing\n\t\tbreak;\n\tcase 42:\n\t\t// do yet another thing\n\t\tbreak;\n\tdefault:\n\t\t// fallback to here\n}\n```\n\nThe `break` is important if you want only the statement(s) in one `case` to run. If you omit `break` from a `case`, and that `case` matches or runs, execution will continue with the next `case`'s statements regardless of that `case` matching. This so-called \"fall through\" is sometimes useful/desired:\n\n```js\nswitch (a) {\n\tcase 2:\n\tcase 10:\n\t\t// some cool stuff\n\t\tbreak;\n\tcase 42:\n\t\t// other stuff\n\t\tbreak;\n\tdefault:\n\t\t// fallback\n}\n```\n\nHere, if `a` is either `2` or `10`, it will execute the \"some cool stuff\" code statements.\n\nAnother form of conditional in JavaScript is the \"conditional operator,\" often called the \"ternary operator.\" It's like a more concise form of a single `if..else` statement, such as:\n\n```js\nvar a = 42;\n\nvar b = (a > 41) ? 
\"hello\" : \"world\";\n\n// similar to:\n\n// if (a > 41) {\n//    b = \"hello\";\n// }\n// else {\n//    b = \"world\";\n// }\n```\n\nIf the test expression (`a > 41` here) evaluates as `true`, the first clause (`\"hello\"`) results, otherwise the second clause (`\"world\"`) results, and whatever the result is then gets assigned to `b`.\n\nThe conditional operator doesn't have to be used in an assignment, but that's definitely the most common usage.\n\n**Note:** For more information about testing conditions and other patterns for `switch` and `? :`, see the *Types & Grammar* title of this series.\n\n## Strict Mode\n\nES5 added a \"strict mode\" to the language, which tightens the rules for certain behaviors. Generally, these restrictions are seen as keeping the code to a safer and more appropriate set of guidelines. Also, adhering to strict mode makes your code generally more optimizable by the engine. Strict mode is a big win for code, and you should use it for all your programs.\n\nYou can opt in to strict mode for an individual function, or an entire file, depending on where you put the strict mode pragma:\n\n```js\nfunction foo() {\n\t\"use strict\";\n\n\t// this code is strict mode\n\n\tfunction bar() {\n\t\t// this code is strict mode\n\t}\n}\n\n// this code is not strict mode\n```\n\nCompare that to:\n\n```js\n\"use strict\";\n\nfunction foo() {\n\t// this code is strict mode\n\n\tfunction bar() {\n\t\t// this code is strict mode\n\t}\n}\n\n// this code is strict mode\n```\n\nOne key difference (improvement!) with strict mode is disallowing the implicit auto-global variable declaration from omitting the `var`:\n\n```js\nfunction foo() {\n\t\"use strict\";\t// turn on strict mode\n\ta = 1;\t\t\t// `var` missing, ReferenceError\n}\n\nfoo();\n```\n\nIf you turn on strict mode in your code, and you get errors, or code starts behaving buggy, your temptation might be to avoid strict mode. But that instinct would be a bad idea to indulge. 
If strict mode causes issues in your program, almost certainly it's a sign that you have things in your program you should fix.\n\nNot only will strict mode keep your code to a safer path, and not only will it make your code more optimizable, but it also represents the future direction of the language. It'd be easier on you to get used to strict mode now than to keep putting it off -- it'll only get harder to convert later!\n\n**Note:** For more information about strict mode, see Chapter 5 of the *Types & Grammar* title of this series.\n\n## Functions As Values\n\nSo far, we've discussed functions as the primary mechanism of *scope* in JavaScript. You recall typical `function` declaration syntax as follows:\n\n```js\nfunction foo() {\n\t// ..\n}\n```\n\nThough it may not seem obvious from that syntax, `foo` is basically just a variable in the outer enclosing scope that's given a reference to the `function` being declared. That is, the `function` itself is a value, just like `42` or `[1,2,3]` would be.\n\nThis may sound like a strange concept at first, so take a moment to ponder it. Not only can you pass a value (argument) *to* a function, but *a function itself can be a value* that's assigned to variables, or passed to or returned from other functions.\n\nAs such, a function value should be thought of as an expression, much like any other value or expression.\n\nConsider:\n\n```js\nvar foo = function() {\n\t// ..\n};\n\nvar x = function bar(){\n\t// ..\n};\n```\n\nThe first function expression assigned to the `foo` variable is called *anonymous* because it has no `name`.\n\nThe second function expression is *named* (`bar`), even as a reference to it is also assigned to the `x` variable. 
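\n\nOne practical benefit of giving a function expression a name is that the name is a reliable reference to the function from inside itself (for recursion, for example), and it also shows up in debugging stack traces. A quick sketch:\n\n```js\n// the `bar` name is usable inside the function itself,\n// so this function expression can call itself recursively\nvar x = function bar(n) {\n\treturn n <= 1 ? 1 : n * bar( n - 1 );\n};\n\nx( 4 );\t\t// 24\n```\n\n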
*Named function expressions* are generally preferable, though *anonymous function expressions* are still extremely common.\n\nFor more information, see the *Scope & Closures* title of this series.\n\n### Immediately Invoked Function Expressions (IIFEs)\n\nIn the previous snippet, neither of the function expressions is executed -- we could have executed them by including `foo()` or `x()`, for instance.\n\nThere's another way to execute a function expression, which is typically referred to as an *immediately invoked function expression* (IIFE):\n\n```js\n(function IIFE(){\n\tconsole.log( \"Hello!\" );\n})();\n// \"Hello!\"\n```\n\nThe outer `( .. )` that surrounds the `(function IIFE(){ .. })` function expression is just a nuance of JS grammar needed to prevent it from being treated as a normal function declaration.\n\nThe final `()` on the end of the expression -- the `})();` line -- is what actually executes the function expression referenced immediately before it.\n\nThat may seem strange, but it's not as foreign as it may seem at first glance. Consider the similarities between `foo` and `IIFE` here:\n\n```js\nfunction foo() { .. }\n\n// `foo` function reference expression,\n// then `()` executes it\nfoo();\n\n// `IIFE` function expression,\n// then `()` executes it\n(function IIFE(){ .. })();\n```\n\nAs you can see, listing the `(function IIFE(){ .. 
})` before its executing `()` is essentially the same as including `foo` before its executing `()`; in both cases, the function reference is executed with `()` immediately after it.\n\nBecause an IIFE is just a function, and functions create variable *scope*, an IIFE used in this fashion is a common way to declare variables that won't affect the surrounding code outside the IIFE:\n\n```js\nvar a = 42;\n\n(function IIFE(){\n\tvar a = 10;\n\tconsole.log( a );\t// 10\n})();\n\nconsole.log( a );\t\t// 42\n```\n\nIIFEs can also have return values:\n\n```js\nvar x = (function IIFE(){\n\treturn 42;\n})();\n\nx;\t// 42\n```\n\nThe `42` value gets `return`ed from the `IIFE`-named function being executed, and is then assigned to `x`.\n\n### Closure\n\n*Closure* is one of the most important, and often least understood, concepts in JavaScript. I won't cover it in deep detail here, and instead refer you to the *Scope & Closures* title of this series. But I want to say a few things about it so you understand the general concept. It will be one of the most important techniques in your JS skillset.\n\nYou can think of closure as a way to \"remember\" and continue to access a function's scope (its variables) even once the function has finished running.\n\nConsider:\n\n```js\nfunction makeAdder(x) {\n\t// parameter `x` is an inner variable\n\n\t// inner function `add()` uses `x`, so\n\t// it has a \"closure\" over it\n\tfunction add(y) {\n\t\treturn y + x;\n\t}\n\n\treturn add;\n}\n```\n\nThe reference to the inner `add(..)` function that gets returned with each call to the outer `makeAdder(..)` is able to remember whatever `x` value was passed in to `makeAdder(..)`. 
Now, let's use `makeAdder(..)`:\n\n```js\n// `plusOne` gets a reference to the inner `add(..)`\n// function with closure over the `x` parameter of\n// the outer `makeAdder(..)`\nvar plusOne = makeAdder( 1 );\n\n// `plusTen` gets a reference to the inner `add(..)`\n// function with closure over the `x` parameter of\n// the outer `makeAdder(..)`\nvar plusTen = makeAdder( 10 );\n\nplusOne( 3 );\t\t// 4  <-- 1 + 3\nplusOne( 41 );\t\t// 42 <-- 1 + 41\n\nplusTen( 13 );\t\t// 23 <-- 10 + 13\n```\n\nMore on how this code works:\n\n1. When we call `makeAdder(1)`, we get back a reference to its inner `add(..)` that remembers `x` as `1`. We call this function reference `plusOne(..)`.\n2. When we call `makeAdder(10)`, we get back another reference to its inner `add(..)` that remembers `x` as `10`. We call this function reference `plusTen(..)`.\n3. When we call `plusOne(3)`, it adds `3` (its inner `y`) to the `1` (remembered by `x`), and we get `4` as the result.\n4. When we call `plusTen(13)`, it adds `13` (its inner `y`) to the `10` (remembered by `x`), and we get `23` as the result.\n\nDon't worry if this seems strange and confusing at first -- it can be! It'll take lots of practice to understand it fully.\n\nBut trust me, once you do, it's one of the most powerful and useful techniques in all of programming. It's definitely worth the effort to let your brain simmer on closures for a bit. In the next section, we'll get a little more practice with closure.\n\n#### Modules\n\nThe most common usage of closure in JavaScript is the module pattern. 
Modules let you define private implementation details (variables, functions) that are hidden from the outside world, as well as a public API that *is* accessible from the outside.\n\nConsider:\n\n```js\nfunction User(){\n\tvar username, password;\n\n\tfunction doLogin(user,pw) {\n\t\tusername = user;\n\t\tpassword = pw;\n\n\t\t// do the rest of the login work\n\t}\n\n\tvar publicAPI = {\n\t\tlogin: doLogin\n\t};\n\n\treturn publicAPI;\n}\n\n// create a `User` module instance\nvar fred = User();\n\nfred.login( \"fred\", \"12Battery34!\" );\n```\n\nThe `User()` function serves as an outer scope that holds the variables `username` and `password`, as well as the inner `doLogin()` function; these are all private inner details of this `User` module that cannot be accessed from the outside world.\n\n**Warning:** We are not calling `new User()` here, on purpose, despite the fact that it probably seems more common to most readers. `User()` is just a function, not a class to be instantiated, so it's just called normally. Using `new` would be inappropriate and actually waste resources.\n\nExecuting `User()` creates an *instance* of the `User` module -- a whole new scope is created, and thus a whole new copy of each of these inner variables/functions. We assign this instance to `fred`. If we run `User()` again, we'd get a new instance entirely separate from `fred`.\n\nThe inner `doLogin()` function has a closure over `username` and `password`, meaning it will retain its access to them even after the `User()` function finishes running.\n\n`publicAPI` is an object with one property/method on it, `login`, which is a reference to the inner `doLogin()` function. When we return `publicAPI` from `User()`, it becomes the instance we call `fred`.\n\nAt this point, the outer `User()` function has finished executing. Normally, you'd think the inner variables like `username` and `password` have gone away. 
But here they have not, because there's a closure in the `login()` function keeping them alive.\n\nThat's why we can call `fred.login(..)` -- the same as calling the inner `doLogin(..)` -- and it can still access `username` and `password` inner variables.\n\nThere's a good chance that with just this brief glimpse at closure and the module pattern, some of it is still a bit confusing. That's OK! It takes some work to wrap your brain around it.\n\nFrom here, go read the *Scope & Closures* title of this series for a much more in-depth exploration.\n\n## `this` Identifier\n\nAnother very commonly misunderstood concept in JavaScript is the `this` identifier. Again, there's a couple of chapters on it in the *this & Object Prototypes* title of this series, so here we'll just briefly introduce the concept.\n\nWhile it may often seem that `this` is related to \"object-oriented patterns,\" in JS `this` is a different mechanism.\n\nIf a function has a `this` reference inside it, that `this` reference usually points to an `object`. But which `object` it points to depends on how the function was called.\n\nIt's important to realize that `this` *does not* refer to the function itself, as is the most common misconception.\n\nHere's a quick illustration:\n\n```js\nfunction foo() {\n\tconsole.log( this.bar );\n}\n\nvar bar = \"global\";\n\nvar obj1 = {\n\tbar: \"obj1\",\n\tfoo: foo\n};\n\nvar obj2 = {\n\tbar: \"obj2\"\n};\n\n// --------\n\nfoo();\t\t\t\t// \"global\"\nobj1.foo();\t\t\t// \"obj1\"\nfoo.call( obj2 );\t\t// \"obj2\"\nnew foo();\t\t\t// undefined\n```\n\nThere are four rules for how `this` gets set, and they're shown in those last four lines of that snippet.\n\n1. `foo()` ends up setting `this` to the global object in non-strict mode -- in strict mode, `this` would be `undefined` and you'd get an error in accessing the `bar` property -- so `\"global\"` is the value found for `this.bar`.\n2. `obj1.foo()` sets `this` to the `obj1` object.\n3. 
`foo.call(obj2)` sets `this` to the `obj2` object.\n4. `new foo()` sets `this` to a brand new empty object.\n\nBottom line: to understand what `this` points to, you have to examine how the function in question was called. It will be one of those four ways just shown, and that will then answer what `this` is.\n\n**Note:** For more information about `this`, see Chapters 1 and 2 of the *this & Object Prototypes* title of this series.\n\n## Prototypes\n\nThe prototype mechanism in JavaScript is quite complicated. We will only glance at it here. You will want to spend plenty of time reviewing Chapters 4-6 of the *this & Object Prototypes* title of this series for all the details.\n\nWhen you reference a property on an object, if that property doesn't exist, JavaScript will automatically use that object's internal prototype reference to find another object to look for the property on. You could think of this almost as a fallback if the property is missing.\n\nThe internal prototype reference linkage from one object to its fallback happens at the time the object is created. The simplest way to illustrate it is with a built-in utility called `Object.create(..)`.\n\nConsider:\n\n```js\nvar foo = {\n\ta: 42\n};\n\n// create `bar` and link it to `foo`\nvar bar = Object.create( foo );\n\nbar.b = \"hello world\";\n\nbar.b;\t\t// \"hello world\"\nbar.a;\t\t// 42 <-- delegated to `foo`\n```\n\nIt may help to visualize the `foo` and `bar` objects and their relationship:\n\n<img src=\"fig6.png\">\n\nThe `a` property doesn't actually exist on the `bar` object, but because `bar` is prototype-linked to `foo`, JavaScript automatically falls back to looking for `a` on the `foo` object, where it's found.\n\nThis linkage may seem like a strange feature of the language. 
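\n\nYou can inspect the linkage from the previous snippet directly; the built-in `hasOwnProperty(..)` utility reports only properties that exist on the object itself, ignoring the prototype fallback. A quick sketch, repeating the setup for clarity:\n\n```js\nvar foo = {\n\ta: 42\n};\n\nvar bar = Object.create( foo );\nbar.b = \"hello world\";\n\nbar.hasOwnProperty( \"b\" );\t\t\t\t// true -- `b` lives on `bar` itself\nbar.hasOwnProperty( \"a\" );\t\t\t\t// false -- `a` is only found via `foo`\n\nObject.getPrototypeOf( bar ) === foo;\t// true\n```\n\n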
The most common way this feature is used -- and I would argue, abused -- is to try to emulate/fake a \"class\" mechanism with \"inheritance.\"\n\nBut a more natural way of applying prototypes is a pattern called \"behavior delegation,\" where you intentionally design your linked objects to be able to *delegate* from one to the other for parts of the needed behavior.\n\n**Note:** For more information about prototypes and behavior delegation, see Chapters 4-6 of the *this & Object Prototypes* title of this series.\n\n## Old & New\n\nSome of the JS features we've already covered, and certainly many of the features covered in the rest of this series, are newer additions and will not necessarily be available in older browsers. In fact, some of the newest features in the specification aren't even implemented in any stable browsers yet.\n\nSo, what do you do with the new stuff? Do you just have to wait around for years or decades for all the old browsers to fade into obscurity?\n\nThat's how many people think about the situation, but it's really not a healthy approach to JS.\n\nThere are two main techniques you can use to \"bring\" the newer JavaScript stuff to the older browsers: polyfilling and transpiling.\n\n### Polyfilling\n\nThe word \"polyfill\" is an invented term (by Remy Sharp) (https://remysharp.com/2010/10/08/what-is-a-polyfill) used to refer to taking the definition of a newer feature and producing a piece of code that's equivalent to the behavior, but is able to run in older JS environments.\n\nFor example, ES6 defines a utility called `Number.isNaN(..)` to provide an accurate non-buggy check for `NaN` values, deprecating the original `isNaN(..)` utility. 
But it's easy to polyfill that utility so that you can start using it in your code regardless of whether the end user is in an ES6 browser or not.\n\nConsider:\n\n```js\nif (!Number.isNaN) {\n\tNumber.isNaN = function isNaN(x) {\n\t\treturn x !== x;\n\t};\n}\n```\n\nThe `if` statement guards against applying the polyfill definition in ES6 browsers where it will already exist. If it's not already present, we define `Number.isNaN(..)`.\n\n**Note:** The check we do here takes advantage of a quirk with `NaN` values, which is that they're the only value in the whole language that is not equal to itself. So the `NaN` value is the only one that would make `x !== x` be `true`.\n\nNot all new features are fully polyfillable. Sometimes most of the behavior can be polyfilled, but there are still small deviations. You should be really, really careful in implementing a polyfill yourself, to make sure you are adhering to the specification as strictly as possible.\n\nOr better yet, use an already vetted set of polyfills that you can trust, such as those provided by ES5-Shim (https://github.com/es-shims/es5-shim) and ES6-Shim (https://github.com/es-shims/es6-shim).\n\n### Transpiling\n\nThere's no way to polyfill new syntax that has been added to the language. The new syntax would throw an error in the old JS engine as unrecognized/invalid.\n\nSo the better option is to use a tool that converts your newer code into older code equivalents. This process is commonly called \"transpiling,\" a term for transforming + compiling.\n\nEssentially, your source code is authored in the new syntax form, but what you deploy to the browser is the transpiled code in old syntax form. 
You typically insert the transpiler into your build process, similar to your code linter or your minifier.\n\nYou might wonder why you'd go to the trouble to write new syntax only to have it transpiled away to older code -- why not just write the older code directly?\n\nThere are several important reasons you should care about transpiling:\n\n* The new syntax added to the language is designed to make your code more readable and maintainable. The older equivalents are often much more convoluted. You should prefer writing newer and cleaner syntax, not only for yourself but for all other members of the development team.\n* If you transpile only for older browsers, but serve the new syntax to the newest browsers, you get to take advantage of browser performance optimizations with the new syntax. This also lets browser makers have more real-world code to test their implementations and optimizations on.\n* Using the new syntax earlier allows it to be tested more robustly in the real world, which provides earlier feedback to the JavaScript committee (TC39). If issues are found early enough, they can be changed/fixed before those language design mistakes become permanent.\n\nHere's a quick example of transpiling. ES6 adds a feature called \"default parameter values.\" It looks like this:\n\n```js\nfunction foo(a = 2) {\n\tconsole.log( a );\n}\n\nfoo();\t\t// 2\nfoo( 42 );\t// 42\n```\n\nSimple, right? Helpful, too! But it's new syntax that's invalid in pre-ES6 engines. So what will a transpiler do with that code to make it run in older environments?\n\n```js\nfunction foo() {\n\tvar a = arguments[0] !== (void 0) ? 
arguments[0] : 2;\n\tconsole.log( a );\n}\n```\n\nAs you can see, it checks to see if the `arguments[0]` value is `void 0` (aka `undefined`), and if so provides the `2` default value; otherwise, it assigns whatever was passed.\n\nIn addition to being able to now use the nicer syntax even in older browsers, looking at the transpiled code actually explains the intended behavior more clearly.\n\nYou may not have realized just from looking at the ES6 version that `undefined` is the only value that can't get explicitly passed in for a default-value parameter, but the transpiled code makes that much more clear.\n\nThe last important detail to emphasize about transpilers is that they should now be thought of as a standard part of the JS development ecosystem and process. JS is going to continue to evolve, much more quickly than before, so every few months new syntax and new features will be added.\n\nIf you use a transpiler by default, you'll always be able to make that switch to newer syntax whenever you find it useful, rather than always waiting for years for today's browsers to phase out.\n\nThere are quite a few great transpilers for you to choose from. Here are some good options at the time of this writing:\n\n* Babel (https://babeljs.io) (formerly 6to5): Transpiles ES6+ into ES5\n* Traceur (https://github.com/google/traceur-compiler): Transpiles ES6, ES7, and beyond into ES5\n\n## Non-JavaScript\n\nSo far, the only things we've covered are in the JS language itself. The reality is that most JS is written to run in and interact with environments like browsers. A good chunk of the stuff that you write in your code is, strictly speaking, not directly controlled by JavaScript. That probably sounds a little strange.\n\nThe most common non-JavaScript JavaScript you'll encounter is the DOM API. For example:\n\n```js\nvar el = document.getElementById( \"foo\" );\n```\n\nThe `document` variable exists as a global variable when your code is running in a browser. 
It's not provided by the JS engine, nor is it particularly controlled by the JavaScript specification. It takes the form of something that looks an awful lot like a normal JS `object`, but it's not really exactly that. It's a special `object`, often called a \"host object.\"\n\nMoreover, the `getElementById(..)` method on `document` looks like a normal JS function, but it's just a thinly exposed interface to a built-in method provided by the DOM from your browser. In some (newer-generation) browsers, this layer may also be in JS, but traditionally the DOM and its behavior are implemented in something more like C/C++.\n\nAnother example is with input/output (I/O).\n\nEveryone's favorite `alert(..)` pops up a message box in the user's browser window. `alert(..)` is provided to your JS program by the browser, not by the JS engine itself. The call you make sends the message to the browser internals and it handles drawing and displaying the message box.\n\nThe same goes with `console.log(..)`; your browser provides such mechanisms and hooks them up to the developer tools.\n\nThis book, and this whole series, focuses on JavaScript the language. That's why you don't see any substantial coverage of these non-JavaScript JavaScript mechanisms. Nevertheless, you need to be aware of them, as they'll be in every JS program you write!\n\n## Review\n\nThe first step to learning JavaScript's flavor of programming is to get a basic understanding of its core mechanisms like values, types, function closures, `this`, and prototypes.\n\nOf course, each of these topics deserves much greater coverage than you've seen here, but that's why they have chapters and books dedicated to them throughout the rest of this series. 
After you feel pretty comfortable with the concepts and code samples in this chapter, the rest of the series awaits you to really dig in and get to know the language deeply.\n\nThe final chapter of this book will briefly summarize each of the other titles in the series and the other concepts they cover besides what we've already explored.\n"
  },
  {
    "path": "up & going/ch3.md",
    "content": "# You Don't Know JS: Up & Going\n# Chapter 3: Into YDKJS\n\nWhat is this series all about? Put simply, it's about taking seriously the task of learning *all parts of JavaScript*, not just some subset of the language that someone called \"the good parts,\" and not just whatever minimal amount you need to get your job done at work.\n\nSerious developers in other languages expect to put in the effort to learn most or all of the language(s) they primarily write in, but JS developers seem to stand out from the crowd in the sense of typically not learning very much of the language. This is not a good thing, and it's not something we should continue to allow to be the norm.\n\nThe *You Don't Know JS* (*YDKJS*) series stands in stark contrast to the typical approaches to learning JS, and is unlike almost any other JS books you will read. It challenges you to go beyond your comfort zone and to ask the deeper \"why\" questions for every single behavior you encounter. Are you up for that challenge?\n\nI'm going to use this final chapter to briefly summarize what to expect from the rest of the books in the series, and how to most effectively go about building a foundation of JS learning on top of *YDKJS*.\n\n## Scope & Closures\n\nPerhaps one of the most fundamental things you'll need to quickly come to terms with is how scoping of variables really works in JavaScript. It's not enough to have anecdotal fuzzy *beliefs* about scope.\n\nThe *Scope & Closures* title starts by debunking the common misconception that JS is an \"interpreted language\" and therefore not compiled. Nope.\n\nThe JS engine compiles your code right before (and sometimes during!) execution. So we use some deeper understanding of the compiler's approach to our code to understand how it finds and deals with variable and function declarations. 
Along the way, we see the typical metaphor for JS variable scope management, \"Hoisting.\"\n\nThis critical understanding of \"lexical scope\" is what we then base our exploration of closure on for the last chapter of the book. Closure is perhaps the single most important concept in all of JS, but if you haven't first grasped firmly how scope works, closure will likely remain beyond your grasp.\n\nOne important application of closure is the module pattern, as we briefly introduced in this book in Chapter 2. The module pattern is perhaps the most prevalent code organization pattern in all of JavaScript; deep understanding of it should be one of your highest priorities.\n\n## this & Object Prototypes\n\nPerhaps one of the most widespread and persistent mistruths about JavaScript is that the `this` keyword refers to the function it appears in. Terribly mistaken.\n\nThe `this` keyword is dynamically bound based on how the function in question is executed, and it turns out there are four simple rules to understand and fully determine `this` binding.\n\nClosely related to the `this` keyword is the object prototype mechanism, which is a look-up chain for properties, similar to how lexical scope variables are found. But wrapped up in the prototypes is the other huge miscue about JS: the idea of emulating (fake) classes and (so-called \"prototypal\") inheritance.\n\nUnfortunately, the desire to bring class and inheritance design pattern thinking to JavaScript is just about the worst thing you could try to do, because while the syntax may trick you into thinking there's something like classes present, in fact the prototype mechanism is fundamentally opposite in its behavior.\n\nWhat's at issue is whether it's better to ignore the mismatch and pretend that what you're implementing is \"inheritance,\" or whether it's more appropriate to learn and embrace how the object prototype system actually works. 
The latter is more appropriately named \"behavior delegation.\"\n\nThis is more than syntactic preference. Delegation is an entirely different, and more powerful, design pattern, one that replaces the need to design with classes and inheritance. But these assertions will absolutely fly in the face of nearly every other blog post, book, and conference talk on the subject for the entirety of JavaScript's lifetime.\n\nThe claims I make regarding delegation versus inheritance come not from a dislike of the language and its syntax, but from the desire to see the true capability of the language properly leveraged and the endless confusion and frustration wiped away.\n\nBut the case I make regarding prototypes and delegation is a much more involved one than what I will indulge here. If you're ready to reconsider everything you think you know about JavaScript \"classes\" and \"inheritance,\" I offer you the chance to \"take the red pill\" (*Matrix* 1999) and check out Chapters 4-6 of the *this & Object Prototypes* title of this series.\n\n## Types & Grammar\n\nThe third title in this series primarily focuses on tackling yet another highly controversial topic: type coercion. Perhaps no topic causes more frustration with JS developers than when you talk about the confusions surrounding implicit coercion.\n\nBy far, the conventional wisdom is that implicit coercion is a \"bad part\" of the language and should be avoided at all costs. In fact, some have gone so far as to call it a \"flaw\" in the design of the language. Indeed, there are tools whose entire job is to do nothing but scan your code and complain if you're doing anything even remotely like coercion.\n\nBut is coercion really so confusing, so bad, so treacherous, that your code is doomed from the start if you use it?\n\nI say no. 
After having built up an understanding of how types and values really work in Chapters 1-3, Chapter 4 takes on this debate and fully explains how coercion works, in all its nooks and crannies. We see just what parts of coercion really are surprising and what parts actually make complete sense if given the time to learn.\n\nBut I'm not merely suggesting that coercion is sensible and learnable, I'm asserting that coercion is an incredibly useful and totally underestimated tool that *you should be using in your code.* I'm saying that coercion, when used properly, not only works, but makes your code better. All the naysayers and doubters will surely scoff at such a position, but I believe it's one of the main keys to upping your JS game.\n\nDo you want to just keep following what the crowd says, or are you willing to set all the assumptions aside and look at coercion with a fresh perspective? The *Types & Grammar* title of this series will coerce your thinking.\n\n## Async & Performance\n\nThe first three titles of this series focus on the core mechanics of the language, but the fourth title branches out slightly to cover patterns on top of the language mechanics for managing asynchronous programming. Asynchrony is not only critical to the performance of our applications, it's increasingly becoming *the* critical factor in writability and maintainability.\n\nThe book starts first by clearing up a lot of terminology and concept confusion around things like \"async,\" \"parallel,\" and \"concurrent,\" and explains in depth how such things do and do not apply to JS.\n\nThen we move into examining callbacks as the primary method of enabling asynchrony. But it's here that we quickly see that the callback alone is hopelessly insufficient for the modern demands of asynchronous programming. 
We identify two major deficiencies of callbacks-only coding: *Inversion of Control* (IoC) trust loss and lack of linear reason-ability.\n\nTo address these two major deficiencies, ES6 introduces two new mechanisms (and indeed, patterns): promises and generators.\n\nPromises are a time-independent wrapper around a \"future value,\" which lets you reason about and compose them regardless of if the value is ready or not yet. Moreover, they effectively solve the IoC trust issues by routing callbacks through a trustable and composable promise mechanism.\n\nGenerators introduce a new mode of execution for JS functions, whereby the generator can be paused at `yield` points and be resumed asynchronously later. The pause-and-resume capability enables synchronous, sequential looking code in the generator to be processed asynchronously behind the scenes. By doing so, we address the non-linear, non-local-jump confusions of callbacks and thereby make our asynchronous code sync-looking so as to be more reason-able.\n\nBut it's the combination of promises and generators that \"yields\" our most effective asynchronous coding pattern to date in JavaScript. In fact, much of the future sophistication of asynchrony coming in ES7 and later will certainly be built on this foundation. To be serious about programming effectively in an async world, you're going to need to get really comfortable with combining promises and generators.\n\nIf promises and generators are about expressing patterns that let our programs run more concurrently and thus get more processing accomplished in a shorter period, JS has many other facets of performance optimization worth exploring.\n\nChapter 5 delves into topics like program parallelism with Web Workers and data parallelism with SIMD, as well as low-level optimization techniques like ASM.js. 
Chapter 6 takes a look at performance optimization from the perspective of proper benchmarking techniques, including what kinds of performance to worry about and what to ignore.\n\nWriting JavaScript effectively means writing code that can break the constraint barriers of being run dynamically in a wide range of browsers and other environments. It requires a lot of intricate and detailed planning and effort on our parts to take a program from \"it works\" to \"it works well.\"\n\nThe *Async & Performance* title is designed to give you all the tools and skills you need to write reasonable and performant JavaScript code.\n\n## ES6 & Beyond\n\nNo matter how much you feel you've mastered JavaScript to this point, the truth is that JavaScript is never going to stop evolving, and moreover, the rate of evolution is increasing rapidly. This fact is almost a metaphor for the spirit of this series, to embrace that we'll never fully *know* every part of JS, because as soon as you master it all, there's going to be new stuff coming down the line that you'll need to learn.\n\nThis title is dedicated to both the short- and mid-term visions of where the language is headed, not just the *known* stuff like ES6 but the *likely* stuff beyond.\n\nWhile all the titles of this series embrace the state of JavaScript at the time of this writing, which is mid-way through ES6 adoption, the primary focus in the series has been more on ES5. Now, we want to turn our attention to ES6, ES7, and ...\n\nSince ES6 is nearly complete at the time of this writing, *ES6 & Beyond* starts by dividing up the concrete stuff from the ES6 landscape into several key categories, including new syntax, new data structures (collections), and new processing capabilities and APIs. 
We cover each of these new ES6 features, in varying levels of detail, including reviewing details that are touched on in other books of this series.\n\nSome exciting ES6 things to look forward to reading about: destructuring, default parameter values, symbols, concise methods, computed properties, arrow functions, block scoping, promises, generators, iterators, modules, proxies, weakmaps, and much, much more! Phew, ES6 packs quite a punch!\n\nThe first part of the book is a roadmap for all the stuff you need to learn to get ready for the new and improved JavaScript you'll be writing and exploring over the next couple of years.\n\nThe latter part of the book turns attention to briefly glance at things that we can likely expect to see in the near future of JavaScript. The most important realization here is that post-ES6, JS is likely going to evolve feature by feature rather than version by version, which means we can expect to see these near-future things coming much sooner than you might imagine.\n\nThe future for JavaScript is bright. Isn't it time we start learning it!?\n\n## Review\n\nThe *YDKJS* series is dedicated to the proposition that all JS developers can and should learn all of the parts of this great language. No person's opinion, no framework's assumptions, and no project's deadline should be the excuse for why you never learn and deeply understand JavaScript.\n\nWe take each important area of focus in the language and dedicate a short but very dense book to fully explore all the parts of it that you perhaps thought you knew but probably didn't fully.\n\n\"You Don't Know JS\" isn't a criticism or an insult. It's a realization that all of us, myself included, must come to terms with. Learning JavaScript isn't an end goal but a process. We don't know JavaScript, yet. But we will!\n"
  },
  {
    "path": "up & going/foreword.md",
    "content": "# You Don't Know JS: Up & Going\n# Foreword\n\nWhat was the last new thing you learned?\n\nPerhaps it was a foreign language, like Italian or German. Or maybe it was a graphics editor, like Photoshop. Or a cooking technique or woodworking or an exercise routine. I want you to remember that feeling when you finally got it: the lightbulb moment. When things went from blurry to crystal clear, as you mastered the table saw or understood the difference between masculine and feminine nouns in French. How did it feel? Pretty amazing, right?\n\nNow I want you to travel back a little bit further in your memory to right before you learned your new skill. How did *that* feel? Probably slightly intimidating and maybe a little bit frustrating, right? At one point, we all did not know the things that we know now and that’s totally OK; we all start somewhere. Learning new material is an exciting adventure, especially if you are looking to learn the subject efficiently.\n\nI teach a lot of beginner coding classes. The students who take my classes have often tried teaching themselves subjects like HTML or JavaScript by reading blog posts or copying and pasting code, but they haven’t been able to truly master the material that will allow them to code their desired outcome. And because they don’t truly grasp the ins and outs of certain coding topics, they can’t write powerful code or debug their own work, as they don’t really understand what is happening.\n\nI always believe in teaching my classes the proper way, meaning I teach web standards, semantic markup, well-commented code, and other best practices. I cover the subject in a thorough manner to explain the hows and whys, without just tossing out code to copy and paste. When you strive to comprehend your code, you create better work and become better at what you do. The code isn’t just your *job* anymore, it’s your *craft*. This is why I love *Up & Going*. 
Kyle takes us on a deep dive through syntax and terminology to give a great introduction to JavaScript without cutting corners. This book doesn’t skim over the surface, but really allows us to genuinely understand the concepts we will be writing.\n\nBecause it’s not enough to be able to duplicate jQuery snippets into your website, the same way it’s not enough to learn how to open, close, and save a document in Photoshop. Sure, once I learn a few basics about the program I could create and share a design I made. But without legitimately knowing the tools and what is behind them, how can I define a grid, or craft a legible type system, or optimize graphics for web use? The same goes for JavaScript. Without knowing how loops work, or how to define variables, or what scope is, we won’t be writing the best code we can. We don’t want to settle for anything less -- this is, after all, our craft.\n\nThe more you are exposed to JavaScript, the clearer it becomes. Words like closures, objects, and methods might seem out of reach to you now, but this book will help those terms come into clarity. I want you to keep those two feelings of before and after you learn something in mind as you begin this book. It might seem daunting, but you’ve picked up this book because you are starting an awesome journey to hone your knowledge. *Up & Going* is the start of our path to understanding programming. Enjoy the lightbulb moments!\n\nJenn Lukas<br>\n[jennlukas.com](http://jennlukas.com/), [@jennlukas](https://twitter.com/jennlukas)<br>\nFront-end consultant\n"
  },
  {
    "path": "up & going/toc.md",
    "content": "# You Don't Know JS: Up & Going\n\n## Table of Contents\n\n* Foreword\n* Preface\n* Chapter 1: Into Programming\n\t* Code\n\t* Try It Yourself\n\t* Operators\n\t* Values & Types\n\t* Code Comments\n\t* Variables\n\t* Blocks\n\t* Conditionals\n\t* Loops\n\t* Functions\n\t* Practice\n* Chapter 2: Into JavaScript\n\t* Values & Types\n\t* Variables\n\t* Conditionals\n\t* Strict Mode\n\t* Functions As Values\n\t* `this` Keyword\n\t* Prototypes\n\t* Old & New\n\t* Non-JavaScript\n* Chapter 3: Into YDKJS\n\t* Scope & Closures\n\t* this & Object Prototypes\n\t* Types & Grammar\n\t* Async & Performance\n\t* ES6 & Beyond\n* Appendix A: Acknowledgments\n"
  }
]