[
  {
    "path": ".circleci/config.yml",
    "content": "version: 2\n\njobs:\n  build:\n    docker:\n      - image: cimg/rust:1.70.0\n    steps:\n      - checkout\n      - run:\n          name: Version information\n          command: |\n            rustc --version\n            cargo --version\n            rustup --version\n      - run:\n          name: Calculate dependencies\n          command: cargo generate-lockfile\n      - restore_cache:\n          keys:\n            - cargo-cache-{{ arch }}-{{ checksum \"Cargo.lock\" }}\n      - run:\n          name: Check Formatting\n          command: |\n            rustfmt --version\n            cargo fmt --all -- --check --color=auto\n      - run:\n          name: Build all targets\n          command: cargo build --all --all-targets\n      - run:\n          name: Run all tests\n          command: cargo test --all\n      - save_cache:\n          paths:\n            - /usr/local/cargo/registry\n            - target/debug/.fingerprint\n            - target/debug/build\n            - target/debug/deps\n          key: cargo-cache-{{ arch }}-{{ checksum \"Cargo.lock\" }}\n"
  },
  {
    "path": ".gitignore",
    "content": "target/\n**/*.rs.bk\nCargo.lock\n.DS_Store\n.#*\n.envrc\n.direnv\nshell.nix\n.dir-locals.el\n"
  },
  {
    "path": "CHANGELOG.md",
    "content": "## [0.4]\n- Don't \"nagle\" in the reliable channel, *require* flush calls to ensure data is\n  sent.\n- [API Change]: Change length limits in message channels to be uniformly `u16`\n  and use the type system to express maximum values rather than constants.\n- Fix panics in reliable bincode channel with messages near upper limit due to\n  improper buffer size.\n- Document that all async methods are supposed to be cancel safe.\n\n## [0.3]\n- Fix the message_channels test to be less confusing, this is very important as\n  it is currently the best (hah) example.\n- Make `BufferPacketPool` derive Copy if the type it wraps is Copy.\n- Simplify `Runtime` trait to not require an explicit `Interval`.\n  `Runtime::Delay` wasn't even *used* prior to this, but it is the only timing\n  requirement now and has been renamed to `Sleep` to match tokio 0.3. Neither\n  tokio nor smol allocate as part of creating a `Sleep` / `Timer`, so having an\n  explicit `Interval` is not really necessary to avoid e.g. allocation, and the\n  way tokio's `Interval` works was not ideal anyway and we shouldn't rely on\n  how it is implemented.\n\n## [0.2]\n- Correctness fixes for unreliable message lengths\n- Performance improvements for bincode message serialization\n- Avoid unnecessary calls to SendExt::send\n- Performance improvements and fixes for internal `event_watch` events channel.\n- [API Change]: Update to bincode 1.3, no longer using the deprecated bincode API\n- [API Change]: Return `Result` in `MessageChannels` async methods on\n  disconnection, panicking is never appropriate for a network error. Instead,\n  the panicking version of methods in `MessageChannels` *only* panic on\n  unregistered message types.\n\n## [0.1.1]\n- Small bugifx for unreliable message channels, don't error with\n  `SendError::TooBig` when the message will actually fit.\n\n## [0.1.0]\n- Initial release\n"
  },
  {
    "path": "Cargo.toml",
    "content": "[package]\nname = \"turbulence\"\nversion = \"0.4.0\"\nauthors = [\"kyren <kerriganw@gmail.com>\"]\nedition = \"2021\"\ndescription = \"Tools to provide serialization, multiplexing, optional reliability, and optional compression to a game's networking.\"\nreadme = \"README.md\"\nrepository = \"https://github.com/kyren/turbulence\"\ndocumentation = \"https://docs.rs/turbulence\"\nkeywords = [\"gamedev\", \"networking\"]\nlicense = \"MIT OR Apache-2.0\"\n\n[badges]\ncircle-ci = { repository = \"kyren/turbulence\", branch = \"master\" }\n\n[dependencies]\nbincode = \"1.3\"\nbyteorder = \"1.3\"\ncache-padded = \"1.2\"\ncrossbeam-channel = \"0.5\"\nfutures = \"0.3\"\nrustc-hash = \"1.0\"\nserde = \"1.0\"\nsnap = \"1.0\"\nthiserror = \"1.0\"\n\n[dev-dependencies]\nrand = { version = \"0.8\", features = [\"small_rng\"] }\nserde = { version = \"1.0\", features = [\"derive\"] }\n"
  },
  {
    "path": "LICENSE-APACHE",
    "content": "                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      
form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. 
Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. 
You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. 
You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"{}\"\n      replaced with your own identifying information. 
(Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright {yyyy} {name of copyright owner}\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "LICENSE-CC0",
    "content": "Creative Commons Legal Code\n\nCC0 1.0 Universal\n\n    CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE\n    LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN\n    ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS\n    INFORMATION ON AN \"AS-IS\" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES\n    REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS\n    PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM\n    THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED\n    HEREUNDER.\n\nStatement of Purpose\n\nThe laws of most jurisdictions throughout the world automatically confer\nexclusive Copyright and Related Rights (defined below) upon the creator\nand subsequent owner(s) (each and all, an \"owner\") of an original work of\nauthorship and/or a database (each, a \"Work\").\n\nCertain owners wish to permanently relinquish those rights to a Work for\nthe purpose of contributing to a commons of creative, cultural and\nscientific works (\"Commons\") that the public can reliably and without fear\nof later claims of infringement build upon, modify, incorporate in other\nworks, reuse and redistribute as freely as possible in any form whatsoever\nand for any purposes, including without limitation commercial purposes.\nThese owners may contribute to the Commons to promote the ideal of a free\nculture and the further production of creative, cultural and scientific\nworks, or to gain reputation or greater distribution for their Work in\npart through the use and efforts of others.\n\nFor these and/or other purposes and motivations, and without any\nexpectation of additional consideration or compensation, the person\nassociating CC0 with a Work (the \"Affirmer\"), to the extent that he or she\nis an owner of Copyright and Related Rights in the Work, voluntarily\nelects to apply CC0 to the Work and publicly distribute the Work under its\nterms, with knowledge of his or her Copyright 
and Related Rights in the\nWork and the meaning and intended legal effect of CC0 on those rights.\n\n1. Copyright and Related Rights. A Work made available under CC0 may be\nprotected by copyright and related or neighboring rights (\"Copyright and\nRelated Rights\"). Copyright and Related Rights include, but are not\nlimited to, the following:\n\n  i. the right to reproduce, adapt, distribute, perform, display,\n     communicate, and translate a Work;\n ii. moral rights retained by the original author(s) and/or performer(s);\niii. publicity and privacy rights pertaining to a person's image or\n     likeness depicted in a Work;\n iv. rights protecting against unfair competition in regards to a Work,\n     subject to the limitations in paragraph 4(a), below;\n  v. rights protecting the extraction, dissemination, use and reuse of data\n     in a Work;\n vi. database rights (such as those arising under Directive 96/9/EC of the\n     European Parliament and of the Council of 11 March 1996 on the legal\n     protection of databases, and under any national implementation\n     thereof, including any amended or successor version of such\n     directive); and\nvii. other similar, equivalent or corresponding rights throughout the\n     world based on applicable law or treaty, and any national\n     implementations thereof.\n\n2. Waiver. 
To the greatest extent permitted by, but not in contravention\nof, applicable law, Affirmer hereby overtly, fully, permanently,\nirrevocably and unconditionally waives, abandons, and surrenders all of\nAffirmer's Copyright and Related Rights and associated claims and causes\nof action, whether now known or unknown (including existing as well as\nfuture claims and causes of action), in the Work (i) in all territories\nworldwide, (ii) for the maximum duration provided by applicable law or\ntreaty (including future time extensions), (iii) in any current or future\nmedium and for any number of copies, and (iv) for any purpose whatsoever,\nincluding without limitation commercial, advertising or promotional\npurposes (the \"Waiver\"). Affirmer makes the Waiver for the benefit of each\nmember of the public at large and to the detriment of Affirmer's heirs and\nsuccessors, fully intending that such Waiver shall not be subject to\nrevocation, rescission, cancellation, termination, or any other legal or\nequitable action to disrupt the quiet enjoyment of the Work by the public\nas contemplated by Affirmer's express Statement of Purpose.\n\n3. Public License Fallback. Should any part of the Waiver for any reason\nbe judged legally invalid or ineffective under applicable law, then the\nWaiver shall be preserved to the maximum extent permitted taking into\naccount Affirmer's express Statement of Purpose. 
In addition, to the\nextent the Waiver is so judged Affirmer hereby grants to each affected\nperson a royalty-free, non transferable, non sublicensable, non exclusive,\nirrevocable and unconditional license to exercise Affirmer's Copyright and\nRelated Rights in the Work (i) in all territories worldwide, (ii) for the\nmaximum duration provided by applicable law or treaty (including future\ntime extensions), (iii) in any current or future medium and for any number\nof copies, and (iv) for any purpose whatsoever, including without\nlimitation commercial, advertising or promotional purposes (the\n\"License\"). The License shall be deemed effective as of the date CC0 was\napplied by Affirmer to the Work. Should any part of the License for any\nreason be judged legally invalid or ineffective under applicable law, such\npartial invalidity or ineffectiveness shall not invalidate the remainder\nof the License, and in such case Affirmer hereby affirms that he or she\nwill not (i) exercise any of his or her remaining Copyright and Related\nRights in the Work or (ii) assert any associated claims and causes of\naction with respect to the Work, in either case contrary to Affirmer's\nexpress Statement of Purpose.\n\n4. Limitations and Disclaimers.\n\n a. No trademark or patent rights held by Affirmer are waived, abandoned,\n    surrendered, licensed or otherwise affected by this document.\n b. Affirmer offers the Work as-is and makes no representations or\n    warranties of any kind concerning the Work, express, implied,\n    statutory or otherwise, including without limitation warranties of\n    title, merchantability, fitness for a particular purpose, non\n    infringement, or the absence of latent or other defects, accuracy, or\n    the present or absence of errors, whether or not discoverable, all to\n    the greatest extent permissible under applicable law.\n c. 
Affirmer disclaims responsibility for clearing rights of other persons\n    that may apply to the Work or any use thereof, including without\n    limitation any person's Copyright and Related Rights in the Work.\n    Further, Affirmer disclaims responsibility for obtaining any necessary\n    consents, permissions or other rights required for any use of the\n    Work.\n d. Affirmer understands and acknowledges that Creative Commons is not a\n    party to this document and has no duty or obligation with respect to\n    this CC0 or use of the Work."
  },
  {
    "path": "LICENSE-MIT",
    "content": "Permission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\n"
  },
  {
    "path": "README.md",
    "content": "# turbulence\n\n*We'll get there, but it's gonna be a bumpy ride.*\n\n---\n\n[![Build Status](https://img.shields.io/circleci/project/github/kyren/turbulence.svg)](https://circleci.com/gh/kyren/turbulence)\n[![Latest Version](https://img.shields.io/crates/v/turbulence.svg)](https://crates.io/crates/turbulence)\n[![API Documentation](https://docs.rs/turbulence/badge.svg)](https://docs.rs/turbulence)\n\n\nMultiplexed, optionally reliable, async, transport agnostic, reactor agnostic\nnetworking library for games.\n\nThis library does not actually perform any networking itself or interact with\nplatform networking APIs in any way, it is instead a way to take some kind of\n*unreliable* and *unordered* transport layer that you provide and turn it into\na set of independent networking channels, each of which can optionally be made\n*reliable* and *ordered*.\n\nThe best way right now to understand what this library is useful for probably\nto look at the [MessageChannels test](tests/message_channels.rs). This is the\nhighest level, simplest API provided: it allows you to define N message types\nserializable with serde, define each individual channel's networking settings,\nand then gives you a set of handles for pushing packets into and taking packets\nout of this `MessageChannels` interface. The user is expected to take outgoing\npackets and send them out over UDP (or similar), and also read incoming packets\nfrom UDP (or similar) and pass them in. The only reliability requirement for\nusing this is that if a packet is received from a remote, it must be intact\nand uncorrupted, but other than this the underlying transport does not need\nto provide any reliability or order guarantees. The reason that no corruption\ncheck is performed is that many transport layers already provide this for free,\nso it would often not be useful for `turbulence` to do that itself. 
Since there\nis no requirement for reliability, simply dropping incoming packets that do not\npass a consistency check is appropriate.\n\nThis library is structured in a way that provides a lot of flexibility but does\nnot do very much to help you actually get a network connection set up between\na game server and client. Setting up a UDP game server is a complex task, and\nthis library is designed to help with one *piece* of this puzzle.\n\n---\n\n### What this library actually does\n\n`turbulence` currently contains two main protocols and builds some conveniences\non top of them:\n\n1) It has an unreliable, unordered messaging protocol that takes in messages\n   that must be less than the size of a packet and coalesces them so that\n   multiple messages are sent per packet. This is by far the simpler of the two\n   protocols, and is appropriate for per-tick updates for things like position\n   data, where resends of old data are not useful.\n   \n2) It has a reliable, ordered transport with flow control that is similar to\n   TCP, but much simpler and without automatic congestion control. Instead of\n   congestion control, the user specifies the target packet send rate as part\n   of the protocol settings.\n   \n`turbulence` then provides on top of these:\n\n3) Reliable and unreliable channels of `bincode` serialized types.\n\n4) A reliable channel of `bincode` serialized types that are automatically\n   coalesced and compressed.\n   \nAnd then finally this library also provides an API for multiplexing multiple\ninstances of these channels across a single stream of packets and some\nconvenient ways of constructing the channels and accessing them by message\ntype. 
This is what the `MessageChannels` interface provides.\n\n### Questions you might ask\n\n***Why would you ever need something like this?***\n\nYou would need this library only if most or all of the following are true:\n\n1) You have a real-time, networked game where TCP or TCP-like protocols are\n   inappropriate, and something unreliable like UDP must be used for latency\n   reasons.\n\n2) You have a game that needs to send both fast unreliable data like position\n   and also stream reliable game-related data such as terrain data or chat or\n   complex entity data that is bandwidth-intensive.\n\n3) You have several independent streams of reliable data and they need to not\n   block each other or choke off fast unreliable data.\n\n4) It is impractical or undesirable (or impossible) to use many different\n   OS-level networking sockets, or to use existing networking libraries that hook\n   deeply into the OS or even just assume the existence of UDP sockets.\n\n***Why do you need this library? Doesn't XYZ protocol already do this?*** (where\nXYZ is plain TCP, plain UDP, SCTP, QUIC, etc.)\n\nIn a way, this library is equivalent to having multiple UDP connections and\nbandwidth-limited TCP connections at one time. If you can already do exactly\nthat and that's acceptable for you, then you might consider just doing that\ninstead of using this library!\n\nThis library is also a bit similar to something like QUIC in that it gives you\nmultiple independent channels of data which do not block each other. 
If QUIC\neventually supports truly unreliable, unordered messages (AFAIK currently this\nis only a proposed extension?), AND it has an implementation that you can use,\nthen certainly using QUIC would be a viable option.\n\n***So this library contains a re-implementation of something like TCP; isn't\ntrying to implement something like that fiendishly complex and generally a bad\nidea?***\n\nProbably, but since it is designed for low-ish static bandwidth limits and\ndoesn't concern itself with congestion control, this cuts out a *lot* of the\ncomplexity. Still, this is the most complex part of this library, but it is\nwell tested and definitely at least works *in the environments I have run it in\nso far*. It's not very complicated; it could probably be described as \"the\nsimplest TCP-like thing that you could reasonably write and use\".\n\nYou should not be using the reliable streams in this library in the same way\nthat you use TCP. A good example of what probably *shouldn't* go over this\nlibrary is something like streaming asset data; you should have a separate\nchannel for data that should be streamed as fast as possible and will always be\nbandwidth rather than gameplay limited.\n\nThe reliable streams here are for things that are normally gameplay limited but\nmight be spiky, and where you *want* to limit the bandwidth so those spikes\ndon't slow down more important data or slow down other players.\n\n***Why is this library so generic? It's TOO generic: everything is based on\ntraits like `PacketPool` and `Runtime` and it's hard to use. Why can't you just\nuse tokio / async-std?***\n\nThe `PacketPool` trait exists not only to allow for custom packet types but\nalso for things like the multiplexer, so it serves double duty. 
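\n\nAs a very rough illustration of the kind of abstraction involved (the trait below is a simplified, non-async stand-in for illustration only, not the crate's exact API), the timing side of these traits boils down to something that can be backed by `std::time` on native targets:\n\n```rust\nuse std::time::{Duration, Instant};\n\n// Simplified, hypothetical stand-in for the library's timing abstraction;\n// the real trait is async (its sleep returns a future) and differs in detail.\ntrait Timer {\n    type Instant: Copy;\n    fn now(&self) -> Self::Instant;\n    fn duration_between(&self, earlier: Self::Instant, later: Self::Instant) -> Duration;\n}\n\nstruct StdTimer;\n\nimpl Timer for StdTimer {\n    type Instant = Instant;\n    fn now(&self) -> Instant {\n        Instant::now()\n    }\n    fn duration_between(&self, earlier: Instant, later: Instant) -> Duration {\n        later.duration_since(earlier)\n    }\n}\n```\n\nOn targets like the browser there is no usable `std::time::Instant`, which is part of why the abstraction exists at all.\n\n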
`Runtime`\nexists because I use this library in a web browser connecting to a remote\nserver using [webrtc-unreliable](https://github.com/kyren/webrtc-unreliable),\nand I have to implement it manually on top of web APIs, which is\ncurrently not trivial to do.\n\n### Current status / Future plans\n\nI've used this library in a real project over the real internet, and it\ndefinitely works. I've also tested it in-game using link conditioners to\nsimulate various levels of packet loss and duplication, and *as far as I can\ntell* it works as advertised.\n\nThe library is usable currently, but the API should in no way be considered\nstable; it may still see a lot of churn.\n\nIn the near future it might be useful to have other channel types that provide\nin-between guarantees, such as reliability without ordering guarantees, or vice\nversa.\n\nEventually, I'd like the reliable channels to have some sort of congestion\navoidance, but this would probably need to be cooperative between reliable\nchannels in some way.\n\nThe library desperately needs better examples, especially a fully worked\nexample using e.g. tokio and UDP, but setting up such an example is a large\ntask by itself.\n\n## License\n\n`turbulence` is licensed under any of:\n\n* MIT License [LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT\n* Apache License Version 2.0 [LICENSE-APACHE](LICENSE-APACHE) or\n  https://opensource.org/licenses/Apache-2.0\n* Creative Commons CC0 1.0 Universal Public Domain Dedication\n  [LICENSE-CC0](LICENSE-CC0) or\n  https://creativecommons.org/publicdomain/zero/1.0/\n\nat your option.\n"
  },
  {
    "path": "src/bandwidth_limiter.rs",
    "content": "use std::time::Duration;\n\nuse crate::runtime::Timer;\n\npub struct BandwidthLimiter<T: Timer> {\n    bandwidth: u32,\n    burst_bandwidth: u32,\n    bytes_available: f64,\n    last_calculation: T::Instant,\n}\n\nimpl<T: Timer> BandwidthLimiter<T> {\n    /// The `burst_bandwidth` is the maximum amount of bandwidth credit that can accumulate.\n    pub fn new(timer: &T, bandwidth: u32, burst_bandwidth: u32) -> Self {\n        let last_calculation = timer.now();\n        BandwidthLimiter {\n            bandwidth,\n            burst_bandwidth,\n            bytes_available: burst_bandwidth as f64,\n            last_calculation,\n        }\n    }\n\n    /// Delay until a time where there will be bandwidth available.\n    pub fn delay_until_available(&self, timer: &T) -> Option<T::Sleep> {\n        if self.bytes_available < 0. {\n            Some(timer.sleep(Duration::from_secs_f64(\n                (-self.bytes_available) / self.bandwidth as f64,\n            )))\n        } else {\n            None\n        }\n    }\n\n    /// Actually update the amount of available bandwidth. Additional available bytes are not added\n    /// until this method is called to add them.\n    pub fn update_available(&mut self, timer: &T) {\n        let now = timer.now();\n        self.bytes_available += timer\n            .duration_between(self.last_calculation, now)\n            .as_secs_f64()\n            * self.bandwidth as f64;\n        self.bytes_available = self.bytes_available.min(self.burst_bandwidth as f64);\n        self.last_calculation = now;\n    }\n\n    /// The bandwidth limiter only needs to limit outgoing packets being sent at all, not their\n    /// size, so this returns true if a non-negative amount of bytes is available. 
If a packet is\n    /// sent that is larger than the available bytes, the available bytes will go negative and this\n    /// will no longer return true.\n    pub fn bytes_available(&self) -> bool {\n        self.bytes_available >= 0.\n    }\n\n    /// Record that bytes were sent, possibly going into bandwidth debt.\n    pub fn take_bytes(&mut self, bytes: u32) {\n        self.bytes_available -= bytes as f64\n    }\n}\n"
  },
  {
    "path": "src/buffer.rs",
    "content": "use std::ops::{Deref, DerefMut};\n\npub use crate::packet::{Packet, PacketPool};\n\n/// A trait for implementing `PacketPool` more easily using an allocator for statically sized\n/// buffers.\npub trait BufferPool {\n    type Buffer: Deref<Target = [u8]> + DerefMut;\n\n    fn capacity(&self) -> usize;\n    fn acquire(&mut self) -> Self::Buffer;\n}\n\n/// Turns a `BufferPool` implementation into something that implements `PacketPool`.\n#[derive(Debug, Copy, Clone, Default)]\npub struct BufferPacketPool<B>(B);\n\nimpl<B> BufferPacketPool<B> {\n    pub fn new(buffer_pool: B) -> Self {\n        BufferPacketPool(buffer_pool)\n    }\n}\n\nimpl<B: BufferPool> PacketPool for BufferPacketPool<B> {\n    type Packet = BufferPacket<B::Buffer>;\n\n    fn capacity(&self) -> usize {\n        self.0.capacity()\n    }\n\n    fn acquire(&mut self) -> Self::Packet {\n        BufferPacket {\n            buffer: self.0.acquire(),\n            len: 0,\n        }\n    }\n}\n\n#[derive(Debug)]\npub struct BufferPacket<B> {\n    buffer: B,\n    len: usize,\n}\n\nimpl<B> Packet for BufferPacket<B>\nwhere\n    B: Deref<Target = [u8]> + DerefMut,\n{\n    fn resize(&mut self, len: usize, val: u8) {\n        assert!(len <= self.buffer.len());\n        for i in self.len..len {\n            self.buffer[i] = val;\n        }\n        self.len = len;\n    }\n}\n\nimpl<B> Deref for BufferPacket<B>\nwhere\n    B: Deref<Target = [u8]>,\n{\n    type Target = [u8];\n\n    fn deref(&self) -> &[u8] {\n        &self.buffer[0..self.len]\n    }\n}\n\nimpl<B> DerefMut for BufferPacket<B>\nwhere\n    B: Deref<Target = [u8]> + DerefMut,\n{\n    fn deref_mut(&mut self) -> &mut [u8] {\n        &mut self.buffer[0..self.len]\n    }\n}\n"
  },
  {
    "path": "src/compressed_bincode_channel.rs",
    "content": "use std::{\n    convert::TryInto,\n    marker::PhantomData,\n    task::{Context, Poll},\n    u16,\n};\n\nuse bincode::Options as _;\nuse byteorder::{ByteOrder, LittleEndian};\nuse futures::{future, ready, task};\nuse serde::{de::DeserializeOwned, Serialize};\nuse snap::raw::{decompress_len, max_compress_len, Decoder as SnapDecoder, Encoder as SnapEncoder};\nuse thiserror::Error;\n\nuse crate::reliable_channel::{self, ReliableChannel};\n\n/// The maximum serialized length of a `CompressedBincodeChannel` message. This also serves as\n/// the maximum size of a compressed chunk of messages, but it is guaranteed that any message <=\n/// `MAX_MESSAGE_LEN` can be sent, even if it cannot be compressed.\npub const MAX_MESSAGE_LEN: u16 = u16::MAX;\n\n#[derive(Debug, Error)]\npub enum SendError {\n    /// Fatal internal channel error.\n    #[error(\"reliable channel error error: {0}\")]\n    ReliableChannelError(#[from] reliable_channel::Error),\n    /// Non-fatal error, no message is sent.\n    #[error(\"bincode serialization error: {0}\")]\n    BincodeError(#[from] bincode::Error),\n}\n\n#[derive(Debug, Error)]\npub enum RecvError {\n    /// Fatal internal channel error.\n    #[error(\"reliable channel error error: {0}\")]\n    ReliableChannelError(#[from] reliable_channel::Error),\n    /// Fatal error, indicates corruption or protocol mismatch.\n    #[error(\"Snappy serialization error: {0}\")]\n    SnapError(#[from] snap::Error),\n    /// Fatal error, stream becomes desynchronized, individual serialized types are not length\n    /// prefixed.\n    #[error(\"bincode serialization error: {0}\")]\n    BincodeError(#[from] bincode::Error),\n}\n\n/// Wraps a `ReliableMessageChannel` and reliably sends a single message type serialized with\n/// `bincode` and compressed with `snap`.\n///\n/// Messages are written in large blocks to aid compression. 
Messages are serialized end to end, and\n/// when a block reaches the maximum configured size (or `flush` is called), the block is compressed\n/// and sent as a single message.\n///\n/// This saves space from the compression and also from the reduced message header overhead per\n/// individual message.\npub struct CompressedBincodeChannel {\n    channel: ReliableChannel,\n\n    send_chunk: Vec<u8>,\n\n    write_buffer: Vec<u8>,\n    write_pos: usize,\n\n    read_buffer: Vec<u8>,\n    read_pos: usize,\n\n    recv_chunk: Vec<u8>,\n    recv_pos: usize,\n\n    encoder: SnapEncoder,\n    decoder: SnapDecoder,\n}\n\nimpl From<ReliableChannel> for CompressedBincodeChannel {\n    fn from(channel: ReliableChannel) -> Self {\n        Self::new(channel)\n    }\n}\n\nimpl CompressedBincodeChannel {\n    pub fn new(channel: ReliableChannel) -> Self {\n        CompressedBincodeChannel {\n            channel,\n            send_chunk: Vec::new(),\n            write_buffer: Vec::new(),\n            write_pos: 0,\n            read_buffer: Vec::new(),\n            read_pos: 0,\n            recv_chunk: Vec::new(),\n            recv_pos: 0,\n            encoder: SnapEncoder::new(),\n            decoder: SnapDecoder::new(),\n        }\n    }\n\n    pub fn into_inner(self) -> ReliableChannel {\n        self.channel\n    }\n\n    /// Send the given message.\n    ///\n    /// This method is cancel safe, it will never partially send a message, and completes\n    /// immediately upon successfully queuing a message to send.\n    pub async fn send<M: Serialize>(&mut self, msg: &M) -> Result<(), SendError> {\n        future::poll_fn(|cx| self.poll_send(cx, msg)).await\n    }\n\n    pub fn try_send<M: Serialize>(&mut self, msg: &M) -> Result<bool, SendError> {\n        match self.poll_send(&mut Context::from_waker(task::noop_waker_ref()), msg) {\n            Poll::Pending => Ok(false),\n            Poll::Ready(Ok(())) => Ok(true),\n            Poll::Ready(Err(err)) => Err(err),\n        }\n    
}\n\n    /// Finish sending the current block of messages, compressing them and sending them over the\n    /// reliable channel.\n    ///\n    /// This method is cancel safe.\n    pub async fn flush(&mut self) -> Result<(), reliable_channel::Error> {\n        future::poll_fn(|cx| self.poll_flush(cx)).await\n    }\n\n    pub fn try_flush(&mut self) -> Result<bool, reliable_channel::Error> {\n        match self.poll_flush(&mut Context::from_waker(task::noop_waker_ref())) {\n            Poll::Pending => Ok(false),\n            Poll::Ready(Ok(())) => Ok(true),\n            Poll::Ready(Err(err)) => Err(err),\n        }\n    }\n\n    /// Receive a message.\n    ///\n    /// This method is cancel safe, it will never partially receive a message and will never drop a\n    /// received message.\n    pub async fn recv<M: DeserializeOwned>(&mut self) -> Result<M, RecvError> {\n        future::poll_fn(|cx| self.poll_recv_ready(cx)).await?;\n        Ok(self.recv_next()?)\n    }\n\n    pub fn try_recv<M: DeserializeOwned>(&mut self) -> Result<Option<M>, RecvError> {\n        match self.poll_recv::<M>(&mut Context::from_waker(task::noop_waker_ref())) {\n            Poll::Pending => Ok(None),\n            Poll::Ready(Ok(val)) => Ok(Some(val)),\n            Poll::Ready(Err(err)) => Err(err),\n        }\n    }\n\n    pub fn poll_send<M: Serialize>(\n        &mut self,\n        cx: &mut Context,\n        msg: &M,\n    ) -> Poll<Result<(), SendError>> {\n        let bincode_config = self.bincode_config();\n\n        let serialized_len = bincode_config.serialized_size(msg)?;\n        if self.send_chunk.len() as u64 + serialized_len > MAX_MESSAGE_LEN as u64 {\n            ready!(self.poll_write_send_chunk(cx))?;\n        }\n\n        bincode_config.serialize_into(&mut self.send_chunk, msg)?;\n\n        Poll::Ready(Ok(()))\n    }\n\n    pub fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), reliable_channel::Error>> {\n        ready!(self.poll_write_send_chunk(cx))?;\n        
ready!(self.poll_finish_write(cx))?;\n        self.channel.flush()?;\n        Poll::Ready(Ok(()))\n    }\n\n    pub fn poll_recv<M: DeserializeOwned>(\n        &mut self,\n        cx: &mut Context,\n    ) -> Poll<Result<M, RecvError>> {\n        ready!(self.poll_recv_ready(cx))?;\n        Poll::Ready(Ok(self.recv_next::<M>()?))\n    }\n\n    fn poll_recv_ready(&mut self, cx: &mut Context) -> Poll<Result<(), RecvError>> {\n        loop {\n            if self.recv_pos < self.recv_chunk.len() {\n                return Poll::Ready(Ok(()));\n            }\n\n            if self.read_pos < 3 {\n                self.read_buffer.resize(3, 0);\n                ready!(self.poll_finish_read(cx))?;\n            }\n\n            let compressed = self.read_buffer[0] != 0;\n            let chunk_len = LittleEndian::read_u16(&self.read_buffer[1..3]);\n            self.read_buffer.resize(chunk_len as usize + 3, 0);\n            ready!(self.poll_finish_read(cx))?;\n\n            if compressed {\n                let decompressed_len = decompress_len(&self.read_buffer[3..])?;\n                self.recv_chunk\n                    .resize(decompressed_len.min(MAX_MESSAGE_LEN as usize), 0);\n                self.decoder\n                    .decompress(&self.read_buffer[3..], &mut self.recv_chunk)?;\n            } else {\n                self.recv_chunk.resize(chunk_len as usize, 0);\n                self.recv_chunk.copy_from_slice(&self.read_buffer[3..]);\n            }\n\n            self.recv_pos = 0;\n            self.read_pos = 0;\n        }\n    }\n\n    fn recv_next<M: DeserializeOwned>(&mut self) -> Result<M, bincode::Error> {\n        let bincode_config = self.bincode_config();\n        let mut reader = &self.recv_chunk[self.recv_pos..];\n        let msg = bincode_config.deserialize_from(&mut reader)?;\n        self.recv_pos = self.recv_chunk.len() - reader.len();\n        Ok(msg)\n    }\n\n    fn poll_write_send_chunk(\n        &mut self,\n        cx: &mut Context,\n    ) -> 
Poll<Result<(), reliable_channel::Error>> {\n        if !self.send_chunk.is_empty() {\n            ready!(self.poll_finish_write(cx))?;\n\n            self.write_pos = 0;\n            self.write_buffer\n                .resize(max_compress_len(self.send_chunk.len()) + 3, 0);\n            // Should not error, `write_buffer` is correctly sized and is less than `2^32 - 1`\n            let compressed_len = self\n                .encoder\n                .compress(&self.send_chunk, &mut self.write_buffer[3..])\n                .expect(\"unexpected snap encoder error\");\n            self.write_buffer.truncate(compressed_len + 3);\n            if compressed_len >= self.send_chunk.len() {\n                // If our compressed size is worse than our uncompressed size, write the original\n                // chunk.\n                self.write_buffer.truncate(self.send_chunk.len() + 3);\n                self.write_buffer[3..].copy_from_slice(&self.send_chunk);\n                // An initial 0 means uncompressed\n                self.write_buffer[0] = 0;\n                LittleEndian::write_u16(\n                    &mut self.write_buffer[1..3],\n                    (self.send_chunk.len()).try_into().unwrap(),\n                );\n            } else {\n                // An initial 1 means compressed\n                self.write_buffer[0] = 1;\n                LittleEndian::write_u16(\n                    &mut self.write_buffer[1..3],\n                    (compressed_len).try_into().unwrap(),\n                );\n            }\n\n            self.send_chunk.clear();\n        }\n\n        Poll::Ready(Ok(()))\n    }\n\n    fn poll_finish_write(&mut self, cx: &mut Context) -> Poll<Result<(), reliable_channel::Error>> {\n        while self.write_pos < self.write_buffer.len() {\n            let len = ready!(self\n                .channel\n                .poll_write(cx, &self.write_buffer[self.write_pos..]))?;\n            self.write_pos += len;\n        }\n        
Poll::Ready(Ok(()))\n    }\n\n    fn poll_finish_read(&mut self, cx: &mut Context) -> Poll<Result<(), reliable_channel::Error>> {\n        while self.read_pos < self.read_buffer.len() {\n            let len = ready!(self\n                .channel\n                .poll_read(cx, &mut self.read_buffer[self.read_pos..]))?;\n            self.read_pos += len;\n        }\n        Poll::Ready(Ok(()))\n    }\n\n    fn bincode_config(&self) -> impl bincode::Options + Copy {\n        bincode::options().with_limit(MAX_MESSAGE_LEN as u64)\n    }\n}\n\n/// Wrapper over a `CompressedBincodeChannel` that only allows a single message type.\npub struct CompressedTypedChannel<M> {\n    channel: CompressedBincodeChannel,\n    _phantom: PhantomData<M>,\n}\n\nimpl<M> From<ReliableChannel> for CompressedTypedChannel<M> {\n    fn from(channel: ReliableChannel) -> Self {\n        Self::new(channel)\n    }\n}\n\nimpl<M> CompressedTypedChannel<M> {\n    pub fn new(channel: ReliableChannel) -> Self {\n        CompressedTypedChannel {\n            channel: CompressedBincodeChannel::new(channel),\n            _phantom: PhantomData,\n        }\n    }\n\n    pub fn into_inner(self) -> ReliableChannel {\n        self.channel.into_inner()\n    }\n\n    pub async fn flush(&mut self) -> Result<(), reliable_channel::Error> {\n        self.channel.flush().await\n    }\n\n    pub fn try_flush(&mut self) -> Result<bool, reliable_channel::Error> {\n        self.channel.try_flush()\n    }\n\n    pub fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), reliable_channel::Error>> {\n        self.channel.poll_flush(cx)\n    }\n}\n\nimpl<M: Serialize> CompressedTypedChannel<M> {\n    pub async fn send(&mut self, msg: &M) -> Result<(), SendError> {\n        self.channel.send(msg).await\n    }\n\n    pub fn try_send(&mut self, msg: &M) -> Result<bool, SendError> {\n        self.channel.try_send(msg)\n    }\n\n    pub fn poll_send(&mut self, cx: &mut Context, msg: &M) -> Poll<Result<(), SendError>> {\n  
      self.channel.poll_send(cx, msg)\n    }\n}\n\nimpl<M: DeserializeOwned> CompressedTypedChannel<M> {\n    pub async fn recv(&mut self) -> Result<M, RecvError> {\n        self.channel.recv::<M>().await\n    }\n\n    pub fn try_recv(&mut self) -> Result<Option<M>, RecvError> {\n        self.channel.try_recv::<M>()\n    }\n\n    pub fn poll_recv(&mut self, cx: &mut Context) -> Poll<Result<M, RecvError>> {\n        self.channel.poll_recv::<M>(cx)\n    }\n}\n"
  },
  {
    "path": "src/event_watch.rs",
    "content": "use std::{\n    sync::{\n        atomic::{self, AtomicBool},\n        Arc,\n    },\n    task::Poll,\n};\n\nuse futures::{future, task::AtomicWaker};\n\n/// Creates a multi-producer single-consumer stream of events with certain beneficial properties.\n///\n/// If a receiver is waiting on a signaled event, calling `Sender::signal` will wakeup the receiver\n/// as normal. However, if the receiver is *not* waiting on a signaled event and `Sender::signal`\n/// has been called since the last time the `Receiver::wait` was called, then calling\n/// `Receiver::wait` again will immediately resolve. In this way, the receiver is prevented from\n/// possibly missing events.\n///\n/// In other words, calling `Sender::signal` will always do one of two things:\n///   1) Wake up a currently waiting receiver\n///   2) Make the next call to `Receiver::wait` resolve immediately\n///\n/// Multiple calls to `Sender::signal` events will however *not* cause *multiple* calls to\n/// `Receiver::wait` to resolve immediately, only the very next call to `Receiver::wait`.\n///\n/// You can look at this as a specialized version a bounded channel of `()` with capacity 1.\npub fn channel() -> (Sender, Receiver) {\n    let state = Arc::new(State {\n        waker: AtomicWaker::new(),\n        signaled: AtomicBool::new(false),\n    });\n\n    let sender_state = Arc::clone(&state);\n\n    (Sender(sender_state), Receiver(state))\n}\n\n#[derive(Debug, Clone)]\npub struct Sender(Arc<State>);\n\nimpl Sender {\n    pub fn signal(&self) {\n        self.0.signaled.store(true, atomic::Ordering::SeqCst);\n        self.0.waker.wake()\n    }\n}\n\n#[derive(Debug)]\npub struct Receiver(Arc<State>);\n\nimpl Receiver {\n    pub async fn wait(&mut self) {\n        future::poll_fn(|cx| {\n            if self.0.signaled.swap(false, atomic::Ordering::SeqCst) {\n                Poll::Ready(())\n            } else {\n                self.0.waker.register(cx.waker());\n                if 
self.0.signaled.swap(false, atomic::Ordering::SeqCst) {\n                    Poll::Ready(())\n                } else {\n                    Poll::Pending\n                }\n            }\n        })\n        .await\n    }\n}\n\n#[derive(Debug)]\nstruct State {\n    waker: AtomicWaker,\n    signaled: AtomicBool,\n}\n"
  },
  {
    "path": "src/lib.rs",
    "content": "mod bandwidth_limiter;\npub mod buffer;\npub mod compressed_bincode_channel;\nmod event_watch;\npub mod message_channels;\npub mod packet;\npub mod packet_multiplexer;\npub mod reliable_bincode_channel;\npub mod reliable_channel;\nmod ring_buffer;\npub mod runtime;\npub mod spsc;\npub mod unreliable_bincode_channel;\npub mod unreliable_channel;\nmod windows;\n\npub use self::{\n    buffer::{BufferPacket, BufferPacketPool, BufferPool},\n    compressed_bincode_channel::{CompressedBincodeChannel, CompressedTypedChannel},\n    message_channels::{\n        MessageChannelMode, MessageChannelSettings, MessageChannels, MessageChannelsBuilder,\n    },\n    packet::{Packet, PacketPool, MAX_PACKET_LEN},\n    packet_multiplexer::{\n        ChannelStatistics, ChannelTotals, IncomingMultiplexedPackets, MuxPacket, MuxPacketPool,\n        OutgoingMultiplexedPackets, PacketChannel, PacketMultiplexer,\n    },\n    reliable_bincode_channel::{ReliableBincodeChannel, ReliableTypedChannel},\n    reliable_channel::ReliableChannel,\n    runtime::{Spawn, Timer},\n    unreliable_bincode_channel::{UnreliableBincodeChannel, UnreliableTypedChannel},\n    unreliable_channel::UnreliableChannel,\n};\n"
  },
  {
    "path": "src/message_channels.rs",
    "content": "use std::{\n    any::{type_name, Any, TypeId},\n    collections::{hash_map, HashMap, HashSet},\n    error::Error,\n    task::{Context, Poll},\n};\n\nuse futures::{\n    future::{self, BoxFuture, RemoteHandle},\n    ready, select,\n    stream::FuturesUnordered,\n    FutureExt, SinkExt, StreamExt, TryFutureExt,\n};\nuse rustc_hash::FxHashMap;\nuse serde::{de::DeserializeOwned, Serialize};\nuse thiserror::Error;\n\nuse crate::{\n    event_watch,\n    packet::PacketPool,\n    packet_multiplexer::{ChannelStatistics, PacketChannel, PacketMultiplexer},\n    reliable_channel,\n    runtime::{Spawn, Timer},\n    spsc::{self, TryRecvError},\n    unreliable_channel, CompressedTypedChannel, MuxPacketPool, ReliableChannel,\n    ReliableTypedChannel, UnreliableChannel, UnreliableTypedChannel,\n};\n\n// TODO: Message channels are currently always full-duplex, because the unreliable / reliable\n// channels backing them are always full-duplex. We could add configuration to limit a channel to\n// send or receive only, and to error if the remote sends to a send-only channel.\n#[derive(Debug, Clone, PartialEq)]\npub struct MessageChannelSettings {\n    pub channel: PacketChannel,\n    pub channel_mode: MessageChannelMode,\n    /// The buffer size for the spsc channel of messages that transports messages of this type to /\n    /// from the network task.\n    pub message_buffer_size: usize,\n    /// The buffer size for the spsc channel of packets for this message type that transports\n    /// packets to / from the packet multiplexer.\n    pub packet_buffer_size: usize,\n}\n\n#[derive(Debug, Clone, PartialEq)]\npub enum MessageChannelMode {\n    Unreliable(unreliable_channel::Settings),\n    Reliable(reliable_channel::Settings),\n    Compressed(reliable_channel::Settings),\n}\n\npub trait ChannelMessage: Serialize + DeserializeOwned + Send + Sync + 'static {}\n\nimpl<T: Serialize + DeserializeOwned + Send + Sync + 'static> ChannelMessage for T {}\n\n#[derive(Debug, 
Error)]\npub enum ChannelAlreadyRegistered {\n    #[error(\"message type already registered\")]\n    MessageType,\n    #[error(\"channel already registered\")]\n    Channel,\n}\n\npub type TaskError = Box<dyn Error + Send + Sync>;\n\n#[derive(Debug, Error)]\n#[error(\"network task for message type {type_name:?} has errored: {error}\")]\npub struct ChannelTaskError {\n    pub type_name: &'static str,\n    pub error: TaskError,\n}\n\npub struct MessageChannelsBuilder<S, T, P>\nwhere\n    S: Spawn,\n    T: Timer,\n    P: PacketPool,\n{\n    spawn: S,\n    timer: T,\n    pool: P,\n    channels: HashSet<PacketChannel>,\n    register_fns: HashMap<TypeId, (&'static str, MessageChannelSettings, RegisterFn<S, T, P>)>,\n}\n\nimpl<S, T, P> MessageChannelsBuilder<S, T, P>\nwhere\n    S: Spawn,\n    T: Timer,\n    P: PacketPool,\n{\n    pub fn new(spawn: S, timer: T, pool: P) -> Self {\n        MessageChannelsBuilder {\n            spawn,\n            timer,\n            pool,\n            channels: HashSet::new(),\n            register_fns: HashMap::new(),\n        }\n    }\n}\n\nimpl<S, T, P> MessageChannelsBuilder<S, T, P>\nwhere\n    S: Spawn + Clone + 'static,\n    T: Timer + Clone + 'static,\n    P: PacketPool + Clone + Send + 'static,\n    P::Packet: Send,\n{\n    /// Register this message type on the constructed `MessageChannels`, using the given channel\n    /// settings.\n    ///\n    /// Can only be called once per message type, will error if it is called with the same message\n    /// type or channel number more than once.\n    pub fn register<M: ChannelMessage>(\n        &mut self,\n        settings: MessageChannelSettings,\n    ) -> Result<(), ChannelAlreadyRegistered> {\n        if !self.channels.insert(settings.channel) {\n            return Err(ChannelAlreadyRegistered::Channel);\n        }\n\n        match self.register_fns.entry(TypeId::of::<M>()) {\n            hash_map::Entry::Occupied(_) => Err(ChannelAlreadyRegistered::MessageType),\n            
hash_map::Entry::Vacant(vacant) => {\n                vacant.insert((\n                    type_name::<M>(),\n                    settings,\n                    register_message_type::<S, T, P, M>,\n                ));\n                Ok(())\n            }\n        }\n    }\n\n    /// Build a `MessageChannels` instance that can send and receive all of the registered message\n    /// types via channels on the given packet multiplexer.\n    pub fn build(self, multiplexer: &mut PacketMultiplexer<P::Packet>) -> MessageChannels {\n        let Self {\n            spawn,\n            timer,\n            pool,\n            register_fns,\n            ..\n        } = self;\n        let mut channels_map = ChannelsMap::default();\n        let mut tasks: FuturesUnordered<_> = register_fns\n            .into_iter()\n            .map(|(_, (type_name, settings, register_fn))| {\n                register_fn(\n                    settings,\n                    spawn.clone(),\n                    timer.clone(),\n                    pool.clone(),\n                    multiplexer,\n                    &mut channels_map,\n                )\n                .map_err(move |error| ChannelTaskError { type_name, error })\n            })\n            .collect();\n\n        let (remote, remote_handle) = async move {\n            match tasks.next().await {\n                None => ChannelTaskError {\n                    type_name: \"none\",\n                    error: \"no channel tasks to run\".to_owned().into(),\n                },\n                Some(Ok(())) => panic!(\"channel tasks only return errors\"),\n                Some(Err(err)) => err,\n            }\n        }\n        .remote_handle();\n        spawn.spawn(remote);\n\n        MessageChannels {\n            disconnected: false,\n            task: remote_handle,\n            channels: channels_map,\n        }\n    }\n}\n\n#[derive(Debug, Error)]\n#[error(\"no such message type `{0}` registered\")]\npub struct 
MessageTypeUnregistered(&'static str);\n\n#[derive(Debug, Error)]\n#[error(\"`MessageChannels` instance has become disconnected\")]\npub struct MessageChannelsDisconnected;\n\n#[derive(Debug, Error)]\npub enum TryAsyncMessageError {\n    #[error(transparent)]\n    Unregistered(#[from] MessageTypeUnregistered),\n    #[error(transparent)]\n    Disconnected(#[from] MessageChannelsDisconnected),\n}\n\n/// Manages a set of channels through a packet multiplexer, where each channel is associated with\n/// exactly one message type.\n///\n/// Acts as a bridge between the sync and async worlds. Provides sync methods to send and receive\n/// messages that do not block or error. Error handling is simplified: if any of the backing\n/// tasks end in an error or if the backing packet channels are dropped, the `MessageChannels` will\n/// permanently go into a \"disconnected\" state.\n///\n/// Additionally, it provides async versions of methods to send and receive messages that share\n/// the same simplified error handling, which may be useful during startup or shutdown.\n#[derive(Debug)]\npub struct MessageChannels {\n    disconnected: bool,\n    task: RemoteHandle<ChannelTaskError>,\n    channels: ChannelsMap,\n}\n\nimpl MessageChannels {\n    /// Returns whether this `MessageChannels` has become disconnected because the backing network\n    /// task has errored.\n    ///\n    /// Once it has become disconnected, a `MessageChannels` is permanently in this errored state.\n    /// You can receive the error from the task by calling `MessageChannels::recv_err`.\n    pub fn is_connected(&self) -> bool {\n        !self.disconnected\n    }\n\n    /// Consume this `MessageChannels` and receive the networking task shutdown error.\n    ///\n    /// If this `MessageChannels` is disconnected, returns the error that caused it to become\n    /// disconnected. 
If it is not disconnected, it will become disconnected by calling this and\n    /// return that error.\n    pub async fn recv_err(self) -> ChannelTaskError {\n        drop(self.channels);\n        self.task.await\n    }\n\n    /// Send the given message on the channel associated with its message type.\n    ///\n    /// In order to ensure delivery, `flush` should be called for the same message type to\n    /// immediately send any buffered messages.\n    ///\n    /// If the spsc channel for this message type is full, will return the message that was sent\n    /// back to the caller. If the message was successfully put onto the outgoing spsc channel, will\n    /// return None.\n    ///\n    /// # Panics\n    /// Panics if this message type was not registered with the `MessageChannelsBuilder` used to\n    /// build this `MessageChannels` instance.\n    pub fn send<M: ChannelMessage>(&mut self, message: M) -> Option<M> {\n        self.try_send(message).unwrap()\n    }\n\n    /// Like `MessageChannels::send` but errors instead of panicking when the message type is\n    /// unregistered.\n    pub fn try_send<M: ChannelMessage>(\n        &mut self,\n        message: M,\n    ) -> Result<Option<M>, MessageTypeUnregistered> {\n        let channels = self.channels.get_mut::<M>()?;\n\n        Ok(if self.disconnected {\n            Some(message)\n        } else if let Err(err) = channels.outgoing_sender.try_send(message) {\n            if err.is_disconnected() {\n                self.disconnected = true;\n            }\n            Some(err.into_inner())\n        } else {\n            None\n        })\n    }\n\n    /// An async version of `MessageChannels::send`: sends the given message on the\n    /// channel associated with its message type but waits if the channel is full. 
Like\n    /// `MessageChannels::send`, `MessageChannels::flush` must still be called afterwards in order\n    /// to ensure delivery.\n    ///\n    /// This method is cancel safe, it will never partially send a message, though canceling it may\n    /// or may not buffer a message to be sent.\n    ///\n    /// # Panics\n    /// Panics if this message type is not registered.\n    pub async fn async_send<M: ChannelMessage>(\n        &mut self,\n        message: M,\n    ) -> Result<(), MessageChannelsDisconnected> {\n        self.try_async_send(message).await.map_err(|e| match e {\n            TryAsyncMessageError::Unregistered(e) => panic!(\"{}\", e),\n            TryAsyncMessageError::Disconnected(e) => e,\n        })\n    }\n\n    /// Like `MessageChannels::async_send` but errors instead of panicking when the message type is\n    /// unregistered.\n    pub async fn try_async_send<M: ChannelMessage>(\n        &mut self,\n        message: M,\n    ) -> Result<(), TryAsyncMessageError> {\n        let channels = self.channels.get_mut::<M>()?;\n\n        if self.disconnected {\n            Err(MessageChannelsDisconnected.into())\n        } else {\n            let res = channels.outgoing_sender.send(message).await;\n\n            if res.is_err() {\n                self.disconnected = true;\n                Err(MessageChannelsDisconnected.into())\n            } else {\n                Ok(())\n            }\n        }\n    }\n\n    /// Immediately send any buffered messages for this message type. 
Messages may not be delivered\n    /// unless `flush` is called after any `send` calls.\n    ///\n    /// # Panics\n    /// Panics if this message type was not registered with the `MessageChannelsBuilder` used to\n    /// build this `MessageChannels` instance.\n    pub fn flush<M: ChannelMessage>(&mut self) {\n        self.try_flush::<M>().unwrap();\n    }\n\n    /// Like `MessageChannels::flush` but errors instead of panicking when the message type is\n    /// unregistered.\n    pub fn try_flush<M: ChannelMessage>(&mut self) -> Result<(), MessageTypeUnregistered> {\n        self.channels.get_mut::<M>()?.flush_sender.signal();\n        Ok(())\n    }\n\n    /// Receive an incoming message on the channel associated with this message type, if one is\n    /// available.\n    ///\n    /// # Panics\n    /// Panics if this message type was not registered with the `MessageChannelsBuilder` used to\n    /// build this `MessageChannels` instance.\n    pub fn recv<M: ChannelMessage>(&mut self) -> Option<M> {\n        self.try_recv().unwrap()\n    }\n\n    /// Like `MessageChannels::recv` but errors instead of panicking when the message type is\n    /// unregistered.\n    pub fn try_recv<M: ChannelMessage>(&mut self) -> Result<Option<M>, MessageTypeUnregistered> {\n        let channels = self.channels.get_mut::<M>()?;\n\n        Ok(if self.disconnected {\n            None\n        } else {\n            match channels.incoming_receiver.try_recv() {\n                Ok(msg) => Some(msg),\n                Err(err) => {\n                    if err.is_disconnected() {\n                        self.disconnected = true;\n                    }\n                    None\n                }\n            }\n        })\n    }\n\n    /// An async version of `MessageChannels::recv`: receives an incoming message on the channel\n    /// associated with its message type but waits if there is no message available.\n    ///\n    /// This method is cancel safe, it will never partially read a 
message or drop received\n    /// messages.\n    ///\n    /// # Panics\n    /// Panics if this message type is not registered.\n    pub async fn async_recv<M: ChannelMessage>(\n        &mut self,\n    ) -> Result<M, MessageChannelsDisconnected> {\n        self.try_async_recv().await.map_err(|e| match e {\n            TryAsyncMessageError::Unregistered(e) => panic!(\"{}\", e),\n            TryAsyncMessageError::Disconnected(e) => e,\n        })\n    }\n\n    /// Like `MessageChannels::async_recv` but errors instead of panicking when the message type is\n    /// unregistered.\n    pub async fn try_async_recv<M: ChannelMessage>(&mut self) -> Result<M, TryAsyncMessageError> {\n        let channels = self.channels.get_mut::<M>()?;\n\n        if self.disconnected {\n            Err(MessageChannelsDisconnected.into())\n        } else if let Some(message) = channels.incoming_receiver.next().await {\n            Ok(message)\n        } else {\n            self.disconnected = true;\n            Err(MessageChannelsDisconnected.into())\n        }\n    }\n\n    pub fn statistics<M: ChannelMessage>(&self) -> &ChannelStatistics {\n        self.try_statistics::<M>().unwrap()\n    }\n\n    pub fn try_statistics<M: ChannelMessage>(\n        &self,\n    ) -> Result<&ChannelStatistics, MessageTypeUnregistered> {\n        Ok(&self.channels.get::<M>()?.statistics)\n    }\n}\n\ntype ChannelTask = BoxFuture<'static, Result<(), TaskError>>;\ntype RegisterFn<S, T, P> = fn(\n    MessageChannelSettings,\n    S,\n    T,\n    P,\n    &mut PacketMultiplexer<<P as PacketPool>::Packet>,\n    &mut ChannelsMap,\n) -> ChannelTask;\n\n#[derive(Debug, Error)]\n#[error(\"channel has been disconnected\")]\nstruct ChannelDisconnected;\n\nstruct ChannelSet<M> {\n    outgoing_sender: spsc::Sender<M>,\n    incoming_receiver: spsc::Receiver<M>,\n    flush_sender: event_watch::Sender,\n    statistics: ChannelStatistics,\n}\n\n#[derive(Debug, Default)]\nstruct ChannelsMap(FxHashMap<TypeId, Box<dyn Any + Send + 
Sync>>);\n\nimpl ChannelsMap {\n    fn insert<M: ChannelMessage>(&mut self, channel_set: ChannelSet<M>) -> bool {\n        self.0\n            .insert(TypeId::of::<M>(), Box::new(channel_set))\n            .is_none()\n    }\n\n    fn get<M: ChannelMessage>(&self) -> Result<&ChannelSet<M>, MessageTypeUnregistered> {\n        Ok(self\n            .0\n            .get(&TypeId::of::<M>())\n            .ok_or_else(|| MessageTypeUnregistered(type_name::<M>()))?\n            .downcast_ref()\n            .unwrap())\n    }\n\n    fn get_mut<M: ChannelMessage>(\n        &mut self,\n    ) -> Result<&mut ChannelSet<M>, MessageTypeUnregistered> {\n        Ok(self\n            .0\n            .get_mut(&TypeId::of::<M>())\n            .ok_or_else(|| MessageTypeUnregistered(type_name::<M>()))?\n            .downcast_mut()\n            .unwrap())\n    }\n}\n\nfn register_message_type<S, T, P, M>(\n    settings: MessageChannelSettings,\n    spawn: S,\n    timer: T,\n    packet_pool: P,\n    multiplexer: &mut PacketMultiplexer<P::Packet>,\n    channels_map: &mut ChannelsMap,\n) -> ChannelTask\nwhere\n    S: Spawn + Clone + 'static,\n    T: Timer + Clone + 'static,\n    P: PacketPool + Clone + Send + 'static,\n    P::Packet: Send,\n    M: ChannelMessage,\n{\n    let (incoming_sender, incoming_receiver) = spsc::channel::<M>(settings.message_buffer_size);\n    let (outgoing_sender, outgoing_receiver) = spsc::channel::<M>(settings.message_buffer_size);\n\n    let (flush_sender, flush_receiver) = event_watch::channel();\n\n    let (channel_sender, channel_receiver, statistics) = multiplexer\n        .open_channel(settings.channel, settings.packet_buffer_size)\n        .expect(\"duplicate packet channel\");\n\n    let packet_pool = MuxPacketPool::new(packet_pool);\n\n    let channel_task = match settings.channel_mode {\n        MessageChannelMode::Unreliable(unreliable_settings) => channel_task(\n            UnreliableTypedChannel::new(UnreliableChannel::new(\n                timer,\n      
          packet_pool,\n                unreliable_settings,\n                channel_sender,\n                channel_receiver,\n            )),\n            incoming_sender,\n            outgoing_receiver,\n            flush_receiver,\n        )\n        .boxed(),\n        MessageChannelMode::Reliable(reliable_settings) => channel_task(\n            ReliableTypedChannel::new(ReliableChannel::new(\n                spawn,\n                timer,\n                packet_pool,\n                reliable_settings,\n                channel_sender,\n                channel_receiver,\n            )),\n            incoming_sender,\n            outgoing_receiver,\n            flush_receiver,\n        )\n        .boxed(),\n        MessageChannelMode::Compressed(reliable_settings) => channel_task(\n            CompressedTypedChannel::new(ReliableChannel::new(\n                spawn,\n                timer,\n                packet_pool,\n                reliable_settings,\n                channel_sender,\n                channel_receiver,\n            )),\n            incoming_sender,\n            outgoing_receiver,\n            flush_receiver,\n        )\n        .boxed(),\n    };\n\n    channels_map.insert(ChannelSet::<M> {\n        outgoing_sender,\n        flush_sender,\n        incoming_receiver,\n        statistics,\n    });\n\n    channel_task\n}\n\ntrait MessageBincodeChannel<M: ChannelMessage> {\n    fn poll_recv(&mut self, cx: &mut Context) -> Poll<Result<M, TaskError>>;\n    fn poll_send(&mut self, cx: &mut Context, msg: &M) -> Poll<Result<(), TaskError>>;\n    fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), TaskError>>;\n}\n\nimpl<T, P, M> MessageBincodeChannel<M> for UnreliableTypedChannel<T, P, M>\nwhere\n    T: Timer,\n    P: PacketPool,\n    M: ChannelMessage,\n{\n    fn poll_recv(&mut self, cx: &mut Context) -> Poll<Result<M, TaskError>> {\n        UnreliableTypedChannel::poll_recv(self, cx).map_err(|e| e.into())\n    }\n\n    fn poll_send(&mut 
self, cx: &mut Context, msg: &M) -> Poll<Result<(), TaskError>> {\n        ready!(self.poll_send_ready(cx))?;\n        Poll::Ready(Ok(self.start_send(msg)?))\n    }\n\n    fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), TaskError>> {\n        UnreliableTypedChannel::poll_flush(self, cx).map_err(|e| e.into())\n    }\n}\n\nimpl<M: ChannelMessage> MessageBincodeChannel<M> for ReliableTypedChannel<M> {\n    fn poll_recv(&mut self, cx: &mut Context) -> Poll<Result<M, TaskError>> {\n        ReliableTypedChannel::poll_recv(self, cx).map_err(|e| e.into())\n    }\n\n    fn poll_send(&mut self, cx: &mut Context, msg: &M) -> Poll<Result<(), TaskError>> {\n        ready!(self.poll_send_ready(cx))?;\n        Poll::Ready(Ok(self.start_send(msg)?))\n    }\n\n    fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), TaskError>> {\n        ReliableTypedChannel::poll_flush(self, cx).map_err(|e| e.into())\n    }\n}\n\nimpl<M: ChannelMessage> MessageBincodeChannel<M> for CompressedTypedChannel<M> {\n    fn poll_recv(&mut self, cx: &mut Context) -> Poll<Result<M, TaskError>> {\n        CompressedTypedChannel::poll_recv(self, cx).map_err(|e| e.into())\n    }\n\n    fn poll_send(&mut self, cx: &mut Context, msg: &M) -> Poll<Result<(), TaskError>> {\n        CompressedTypedChannel::poll_send(self, cx, msg).map_err(|e| e.into())\n    }\n\n    fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), TaskError>> {\n        CompressedTypedChannel::poll_flush(self, cx).map_err(|e| e.into())\n    }\n}\n\nasync fn channel_task<M: ChannelMessage>(\n    mut channel: impl MessageBincodeChannel<M>,\n    mut incoming_message_sender: spsc::Sender<M>,\n    mut outgoing_message_receiver: spsc::Receiver<M>,\n    mut flush_receiver: event_watch::Receiver,\n) -> Result<(), TaskError> {\n    enum Next<M> {\n        Incoming(M),\n        Outgoing(M),\n        Flush,\n    }\n\n    loop {\n        let next = {\n            select! 
{\n                incoming = future::poll_fn(|cx| channel.poll_recv(cx)).fuse() => {\n                    Next::Incoming(incoming?)\n                }\n                outgoing = outgoing_message_receiver.next().fuse() => {\n                    Next::Outgoing(outgoing.ok_or(ChannelDisconnected)?)\n                }\n                _ = flush_receiver.wait().fuse() => Next::Flush,\n            }\n        };\n\n        match next {\n            Next::Incoming(incoming) => incoming_message_sender.send(incoming).await?,\n            Next::Outgoing(outgoing) => {\n                future::poll_fn(|cx| channel.poll_send(cx, &outgoing)).await?\n            }\n            Next::Flush => loop {\n                match outgoing_message_receiver.try_recv() {\n                    Ok(outgoing) => future::poll_fn(|cx| channel.poll_send(cx, &outgoing)).await?,\n                    Err(TryRecvError::Disconnected) => return Err(ChannelDisconnected.into()),\n                    Err(TryRecvError::Empty) => {\n                        future::poll_fn(|cx| channel.poll_flush(cx)).await?;\n                        break;\n                    }\n                }\n            },\n        }\n    }\n}\n"
  },
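The `Next::Flush` arm of `channel_task` above drains every message buffered in `outgoing_message_receiver` before flushing the underlying channel, so a single flush signal pushes out everything queued so far. That drain-then-flush step can be sketched in isolation, with `std::sync::mpsc` standing in for the crate's spsc channel and a `Vec` standing in for `poll_send` (`drain_and_flush` is an illustrative name, not a crate API):

```rust
use std::sync::mpsc::{channel, Receiver, TryRecvError};

// Drain all currently-buffered outgoing messages, then "flush" once,
// mirroring the `Next::Flush` arm of `channel_task`: send until the
// buffer reports `Empty`, treating `Disconnected` as a fatal error.
fn drain_and_flush(rx: &Receiver<String>, sent: &mut Vec<String>) -> Result<(), &'static str> {
    loop {
        match rx.try_recv() {
            // Stands in for `poll_send` on the underlying channel.
            Ok(msg) => sent.push(msg),
            Err(TryRecvError::Disconnected) => return Err("channel disconnected"),
            Err(TryRecvError::Empty) => {
                // Stands in for the final `poll_flush`.
                return Ok(());
            }
        }
    }
}

fn main() {
    let (tx, rx) = channel();
    tx.send("a".to_string()).unwrap();
    tx.send("b".to_string()).unwrap();
    let mut sent = Vec::new();
    drain_and_flush(&rx, &mut sent).unwrap();
    assert_eq!(sent, vec!["a".to_string(), "b".to_string()]);
}
```

Because the drain happens in the same task that receives flush signals, messages queued before the signal are always sent before the flush completes.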
  {
    "path": "src/packet.rs",
    "content": "use std::ops::{Deref, DerefMut};\n\n/// The maximum packet size usable by `turbulence`.\n///\n/// It is not useful for an implementation of `PacketPool` to return packets with larger capacity\n/// than this; `turbulence` may not be able to use the entire packet capacity otherwise.\npub const MAX_PACKET_LEN: u16 = 32768;\n\n/// A trait for packet buffers used by `turbulence`.\npub trait Packet: Deref<Target = [u8]> + DerefMut {\n    /// Resizes the packet to the given length, which must be at most the static capacity.\n    fn resize(&mut self, len: usize, val: u8);\n\n    fn extend(&mut self, other: &[u8]) {\n        let cur_len = self.len();\n        let new_len = cur_len + other.len();\n        self.resize(new_len, 0);\n        self[cur_len..new_len].copy_from_slice(other);\n    }\n\n    fn truncate(&mut self, len: usize) {\n        let len = len.min(self.len());\n        self.resize(len, 0);\n    }\n\n    fn clear(&mut self) {\n        self.resize(0, 0);\n    }\n}\n\n/// Trait for packet allocation and pooling.\n///\n/// All packets that are allocated from `turbulence` are allocated through this interface.\n///\n/// Packets must implement the `Packet` trait and should all have the same capacity: the MTU for\n/// whatever the underlying transport is, up to `MAX_PACKET_LEN` in size.\npub trait PacketPool {\n    type Packet: Packet;\n\n    /// The static maximum capacity of packets returned by this pool.\n    fn capacity(&self) -> usize;\n\n    fn acquire(&mut self) -> Self::Packet;\n}\n"
  },
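A `PacketPool` implementation only has to hand out `Packet`s of one fixed capacity. As a sketch of what an implementation might look like, the following reproduces the two trait definitions from `src/packet.rs` and implements them for a hypothetical `Vec<u8>`-backed packet (`VecPacket` and `VecPacketPool` are illustrative names, not types provided by the crate; a real pool would also recycle returned buffers rather than allocate each time):

```rust
use std::ops::{Deref, DerefMut};

// Trait definitions reproduced from `src/packet.rs` so this sketch is
// self-contained; `extend`, `truncate`, and `clear` keep their defaults.
pub trait Packet: Deref<Target = [u8]> + DerefMut {
    fn resize(&mut self, len: usize, val: u8);

    fn extend(&mut self, other: &[u8]) {
        let cur_len = self.len();
        let new_len = cur_len + other.len();
        self.resize(new_len, 0);
        self[cur_len..new_len].copy_from_slice(other);
    }

    fn truncate(&mut self, len: usize) {
        let len = len.min(self.len());
        self.resize(len, 0);
    }

    fn clear(&mut self) {
        self.resize(0, 0);
    }
}

pub trait PacketPool {
    type Packet: Packet;
    fn capacity(&self) -> usize;
    fn acquire(&mut self) -> Self::Packet;
}

// Hypothetical Vec-backed packet with a fixed capacity (the transport MTU).
pub struct VecPacket {
    data: Vec<u8>,
    capacity: usize,
}

impl Deref for VecPacket {
    type Target = [u8];
    fn deref(&self) -> &[u8] {
        &self.data
    }
}

impl DerefMut for VecPacket {
    fn deref_mut(&mut self) -> &mut [u8] {
        &mut self.data
    }
}

impl Packet for VecPacket {
    fn resize(&mut self, len: usize, val: u8) {
        // Enforce the "at most the static capacity" contract from the docs.
        assert!(len <= self.capacity, "resize past static capacity");
        self.data.resize(len, val);
    }
}

// Trivial pool: allocates a fresh buffer per `acquire`.
pub struct VecPacketPool {
    capacity: usize,
}

impl PacketPool for VecPacketPool {
    type Packet = VecPacket;

    fn capacity(&self) -> usize {
        self.capacity
    }

    fn acquire(&mut self) -> VecPacket {
        VecPacket {
            data: Vec::with_capacity(self.capacity),
            capacity: self.capacity,
        }
    }
}

fn main() {
    let mut pool = VecPacketPool { capacity: 1200 };
    let mut packet = pool.acquire();
    packet.extend(b"hello");
    assert_eq!(&packet[..], b"hello");
    packet.truncate(4);
    assert_eq!(packet.len(), 4);
    packet.clear();
    assert!(packet.is_empty());
}
```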
  {
    "path": "src/packet_multiplexer.rs",
    "content": "use std::{\n    collections::{hash_map, HashMap},\n    fmt,\n    ops::{Deref, DerefMut},\n    pin::Pin,\n    sync::{\n        atomic::{AtomicU64, Ordering},\n        Arc,\n    },\n    task::{Context, Poll},\n    u8,\n};\n\nuse futures::{stream::SelectAll, Sink, SinkExt, Stream, StreamExt};\nuse rustc_hash::{FxHashMap, FxHashSet};\nuse thiserror::Error;\n\nuse crate::{\n    packet::{Packet, PacketPool},\n    spsc,\n};\n\npub type PacketChannel = u8;\n\n/// A wrapper over a `Packet` that reserves the first byte for the channel.\n#[derive(Debug)]\npub struct MuxPacket<P>(P);\n\nimpl<P> Packet for MuxPacket<P>\nwhere\n    P: Packet,\n{\n    fn resize(&mut self, len: usize, val: u8) {\n        self.0.resize(len + 1, val);\n    }\n\n    fn extend(&mut self, other: &[u8]) {\n        self.0.extend(other);\n    }\n\n    fn truncate(&mut self, len: usize) {\n        self.0.truncate(len + 1);\n    }\n\n    fn clear(&mut self) {\n        self.0.resize(1, 0);\n    }\n}\n\nimpl<P> Deref for MuxPacket<P>\nwhere\n    P: Packet,\n{\n    type Target = [u8];\n\n    fn deref(&self) -> &[u8] {\n        &self.0[1..]\n    }\n}\n\nimpl<P> DerefMut for MuxPacket<P>\nwhere\n    P: Packet,\n{\n    fn deref_mut(&mut self) -> &mut [u8] {\n        &mut self.0[1..]\n    }\n}\n\n#[derive(Debug, Clone)]\npub struct MuxPacketPool<P>(P);\n\nimpl<P> MuxPacketPool<P> {\n    pub fn new(packet_pool: P) -> Self {\n        MuxPacketPool(packet_pool)\n    }\n}\n\nimpl<P> PacketPool for MuxPacketPool<P>\nwhere\n    P: PacketPool,\n{\n    type Packet = MuxPacket<P::Packet>;\n\n    fn capacity(&self) -> usize {\n        self.0.capacity() - 1\n    }\n\n    fn acquire(&mut self) -> MuxPacket<P::Packet> {\n        let mut packet = self.0.acquire();\n        packet.resize(1, 0);\n        MuxPacket(packet)\n    }\n}\n\nimpl<P> From<P> for MuxPacketPool<P> {\n    fn from(pool: P) -> MuxPacketPool<P> {\n        MuxPacketPool(pool)\n    }\n}\n\n#[derive(Debug, Error)]\n#[error(\"packet channel has 
already been opened\")]\npub struct DuplicateChannel;\n\n#[derive(Debug, Copy, Clone)]\npub struct ChannelTotals {\n    pub packets: u64,\n    pub bytes: u64,\n}\n\n#[derive(Debug, Clone)]\npub struct ChannelStatistics(Arc<ChannelStatisticsData>);\n\nimpl ChannelStatistics {\n    pub fn incoming_totals(&self) -> ChannelTotals {\n        ChannelTotals {\n            packets: self.0.incoming_packets.load(Ordering::Relaxed),\n            bytes: self.0.incoming_bytes.load(Ordering::Relaxed),\n        }\n    }\n\n    pub fn outgoing_totals(&self) -> ChannelTotals {\n        ChannelTotals {\n            packets: self.0.outgoing_packets.load(Ordering::Relaxed),\n            bytes: self.0.outgoing_bytes.load(Ordering::Relaxed),\n        }\n    }\n}\n\n/// Routes packets marked with a channel header from a single `Sink` / `Stream` pair to a set of\n/// `Sink` / `Stream` pairs for each channel.\n///\n/// Also monitors bandwidth on each channel independently, and returns a `ChannelStatistics` handle\n/// to query bandwidth totals for that specific channel.\npub struct PacketMultiplexer<P> {\n    incoming: HashMap<PacketChannel, ChannelSender<P>>,\n    outgoing: SelectAll<ChannelReceiver<P>>,\n}\n\nimpl<P> PacketMultiplexer<P>\nwhere\n    P: Packet,\n{\n    pub fn new() -> PacketMultiplexer<P> {\n        PacketMultiplexer {\n            incoming: HashMap::new(),\n            outgoing: SelectAll::new(),\n        }\n    }\n\n    /// Open a multiplexed packet channel, producing a sender for outgoing `MuxPacket`s on this\n    /// channel, and a receiver for incoming `MuxPacket`s on this channel.\n    ///\n    /// The `buffer_size` parameter controls the buffer size requested when creating the spsc\n    /// channels for the returned `Sender` and `Receiver`.\n    pub fn open_channel(\n        &mut self,\n        channel: PacketChannel,\n        buffer_size: usize,\n    ) -> Result<\n        (\n            spsc::Sender<MuxPacket<P>>,\n            spsc::Receiver<MuxPacket<P>>,\n       
     ChannelStatistics,\n        ),\n        DuplicateChannel,\n    > {\n        let statistics = Arc::new(ChannelStatisticsData::default());\n        match self.incoming.entry(channel) {\n            hash_map::Entry::Occupied(_) => Err(DuplicateChannel),\n            hash_map::Entry::Vacant(vacant) => {\n                let (incoming_sender, incoming_receiver) = spsc::channel(buffer_size);\n                let (outgoing_sender, outgoing_receiver) = spsc::channel(buffer_size);\n                vacant.insert(ChannelSender {\n                    sender: incoming_sender,\n                    statistics: Arc::clone(&statistics),\n                });\n                self.outgoing.push(ChannelReceiver {\n                    channel,\n                    receiver: outgoing_receiver,\n                    statistics: Arc::clone(&statistics),\n                });\n                Ok((\n                    outgoing_sender,\n                    incoming_receiver,\n                    ChannelStatistics(statistics),\n                ))\n            }\n        }\n    }\n\n    /// Start multiplexing packets to all opened channels.\n    ///\n    /// Returns an `IncomingMultiplexedPackets` which is a `Sink` for incoming packets, and an\n    /// `OutgoingMultiplexedPackets` which is a `Stream` for outgoing packets.\n    pub fn start(self) -> (IncomingMultiplexedPackets<P>, OutgoingMultiplexedPackets<P>) {\n        (\n            IncomingMultiplexedPackets {\n                incoming: self.incoming.into_iter().collect(),\n                to_send: None,\n                to_flush: FxHashSet::default(),\n            },\n            OutgoingMultiplexedPackets {\n                outgoing: self.outgoing,\n            },\n        )\n    }\n}\n\n#[derive(Debug, Error)]\npub enum IncomingError {\n    #[error(\"packet received for unopened channel\")]\n    UnknownPacketChannel,\n    #[error(\"channel receiver has been dropped\")]\n    ChannelReceiverDropped,\n}\n\n#[derive(Error)]\npub enum 
IncomingTrySendError<P> {\n    #[error(\"packet channel is full\")]\n    IsFull(P),\n    #[error(transparent)]\n    Error(#[from] IncomingError),\n}\n\nimpl<P> fmt::Debug for IncomingTrySendError<P> {\n    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n        match self {\n            IncomingTrySendError::IsFull(_) => write!(f, \"IncomingTrySendError::IsFull\"),\n            IncomingTrySendError::Error(err) => f\n                .debug_tuple(\"IncomingTrySendError::Error\")\n                .field(err)\n                .finish(),\n        }\n    }\n}\n\nimpl<P> IncomingTrySendError<P> {\n    pub fn is_full(&self) -> bool {\n        matches!(self, IncomingTrySendError::IsFull(_))\n    }\n}\n\n/// A handle to push incoming packets into the multiplexer.\npub struct IncomingMultiplexedPackets<P> {\n    incoming: FxHashMap<PacketChannel, ChannelSender<P>>,\n    to_send: Option<P>,\n    to_flush: FxHashSet<PacketChannel>,\n}\n\nimpl<P> Unpin for IncomingMultiplexedPackets<P> {}\n\nimpl<P> IncomingMultiplexedPackets<P>\nwhere\n    P: Packet,\n{\n    /// Attempt to send the given packet to the appropriate multiplexed channel without blocking.\n    ///\n    /// If the destination channel buffer is full, returns `IncomingTrySendError::IsFull`; any other\n    /// failure returns `IncomingTrySendError::Error`.\n    pub fn try_send(&mut self, packet: P) -> Result<(), IncomingTrySendError<P>> {\n        let channel = packet[0];\n        let incoming = self\n            .incoming\n            .get_mut(&channel)\n            .ok_or(IncomingError::UnknownPacketChannel)?;\n\n        let mux_packet_len = (packet.len() - 1) as u64;\n        incoming.sender.try_send(MuxPacket(packet)).map_err(|e| {\n            if e.is_full() {\n                IncomingTrySendError::IsFull(e.into_inner().0)\n            } else {\n                IncomingError::ChannelReceiverDropped.into()\n            }\n        })?;\n        
incoming.statistics.mark_incoming_packet(mux_packet_len);\n\n        Ok(())\n    }\n}\n\nimpl<P> Sink<P> for IncomingMultiplexedPackets<P>\nwhere\n    P: Packet,\n{\n    type Error = IncomingError;\n\n    fn poll_ready(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Result<(), Self::Error>> {\n        if let Some(packet) = self.to_send.take() {\n            let channel = packet[0];\n            let incoming = self\n                .incoming\n                .get_mut(&channel)\n                .ok_or(IncomingError::UnknownPacketChannel)?;\n            match incoming.sender.poll_ready_unpin(cx) {\n                Poll::Pending => {\n                    self.to_send = Some(packet);\n                    Poll::Pending\n                }\n                Poll::Ready(Ok(())) => {\n                    let mux_packet_len = (packet.len() - 1) as u64;\n                    incoming\n                        .sender\n                        .start_send_unpin(MuxPacket(packet))\n                        .map_err(|_| IncomingError::ChannelReceiverDropped)?;\n                    incoming.statistics.mark_incoming_packet(mux_packet_len);\n                    self.to_flush.insert(channel);\n                    Poll::Ready(Ok(()))\n                }\n                Poll::Ready(Err(_)) => Poll::Ready(Err(IncomingError::ChannelReceiverDropped)),\n            }\n        } else {\n            Poll::Ready(Ok(()))\n        }\n    }\n\n    fn start_send(mut self: Pin<&mut Self>, item: P) -> Result<(), Self::Error> {\n        assert!(self.to_send.is_none());\n        self.to_send = Some(item);\n        Ok(())\n    }\n\n    fn poll_flush(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Result<(), Self::Error>> {\n        if self.as_mut().poll_ready(cx)?.is_pending() {\n            return Poll::Pending;\n        }\n        while let Some(&channel) = self.to_flush.iter().next() {\n            let incoming = self\n                .incoming\n                .get_mut(&channel)\n               
 .ok_or(IncomingError::UnknownPacketChannel)?;\n            if incoming\n                .sender\n                .poll_flush_unpin(cx)\n                .map_err(|_| IncomingError::ChannelReceiverDropped)?\n                .is_pending()\n            {\n                return Poll::Pending;\n            }\n            self.to_flush.remove(&channel);\n        }\n        Poll::Ready(Ok(()))\n    }\n\n    fn poll_close(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Result<(), Self::Error>> {\n        self.poll_flush(cx)\n    }\n}\n\n/// A handle to receive outgoing packets from the multiplexer.\npub struct OutgoingMultiplexedPackets<P> {\n    outgoing: SelectAll<ChannelReceiver<P>>,\n}\n\nimpl<P> Stream for OutgoingMultiplexedPackets<P>\nwhere\n    P: Packet,\n{\n    type Item = P;\n\n    fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<Self::Item>> {\n        self.outgoing.poll_next_unpin(cx)\n    }\n}\n\nstruct ChannelSender<P> {\n    sender: spsc::Sender<MuxPacket<P>>,\n    statistics: Arc<ChannelStatisticsData>,\n}\n\nstruct ChannelReceiver<P> {\n    channel: PacketChannel,\n    receiver: spsc::Receiver<MuxPacket<P>>,\n    statistics: Arc<ChannelStatisticsData>,\n}\n\nimpl<P> Unpin for ChannelReceiver<P> {}\n\nimpl<P> Stream for ChannelReceiver<P>\nwhere\n    P: Packet,\n{\n    type Item = P;\n\n    fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<Self::Item>> {\n        match self.receiver.poll_next_unpin(cx) {\n            Poll::Ready(Some(packet)) => {\n                let mut packet = packet.0;\n                packet[0] = self.channel;\n                self.statistics\n                    .mark_outgoing_packet((packet.len() - 1) as u64);\n                Poll::Ready(Some(packet))\n            }\n            Poll::Ready(None) => Poll::Ready(None),\n            Poll::Pending => Poll::Pending,\n        }\n    }\n}\n\n#[derive(Debug, Default)]\nstruct ChannelStatisticsData {\n    incoming_packets: AtomicU64,\n    
incoming_bytes: AtomicU64,\n\n    outgoing_packets: AtomicU64,\n    outgoing_bytes: AtomicU64,\n}\n\nimpl ChannelStatisticsData {\n    fn mark_incoming_packet(&self, len: u64) {\n        self.incoming_packets.fetch_add(1, Ordering::Relaxed);\n        self.incoming_bytes.fetch_add(len, Ordering::Relaxed);\n    }\n\n    fn mark_outgoing_packet(&self, len: u64) {\n        self.outgoing_packets.fetch_add(1, Ordering::Relaxed);\n        self.outgoing_bytes.fetch_add(len, Ordering::Relaxed);\n    }\n}\n"
  },
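The multiplexer's wire format is minimal: `MuxPacket` reserves byte 0 of every packet for the `PacketChannel` id, its `Deref` impls hide that byte from channel code, and `ChannelReceiver::poll_next` writes the id back into byte 0 before emitting the packet. A dependency-free sketch of that one-byte framing, with plain `Vec<u8>` standing in for pooled packets (`frame` and `route` are illustrative helpers, not crate APIs):

```rust
// Frame a payload for a multiplexed channel: byte 0 carries the channel id,
// mirroring what `ChannelReceiver::poll_next` does on the outgoing path.
fn frame(channel: u8, payload: &[u8]) -> Vec<u8> {
    let mut packet = Vec::with_capacity(payload.len() + 1);
    packet.push(channel);
    packet.extend_from_slice(payload);
    packet
}

// Route an incoming packet, mirroring `IncomingMultiplexedPackets::try_send`:
// byte 0 selects the destination channel, the rest is the mux payload.
fn route(packet: &[u8]) -> Option<(u8, &[u8])> {
    let (&channel, payload) = packet.split_first()?;
    Some((channel, payload))
}

fn main() {
    let packet = frame(3, b"ping");
    assert_eq!(packet[0], 3);
    let (channel, payload) = route(&packet).unwrap();
    assert_eq!(channel, 3);
    assert_eq!(payload, b"ping");
}
```

This is also why `MuxPacketPool::capacity` reports one byte less than the wrapped pool: the header byte is invisible to channel code but still consumes packet capacity.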
  {
    "path": "src/reliable_bincode_channel.rs",
    "content": "use std::{\n    marker::PhantomData,\n    task::{Context, Poll},\n    u16,\n};\n\nuse bincode::Options as _;\nuse byteorder::{ByteOrder, LittleEndian};\nuse futures::{future, ready, task};\nuse serde::{Deserialize, Serialize};\nuse thiserror::Error;\n\nuse crate::reliable_channel::{self, ReliableChannel};\n\n/// The maximum serialized length of a `ReliableBincodeChannel` message.\npub const MAX_MESSAGE_LEN: u16 = u16::MAX;\n\n#[derive(Debug, Error)]\npub enum SendError {\n    /// Fatal internal channel error.\n    #[error(\"reliable channel error: {0}\")]\n    ReliableChannelError(#[from] reliable_channel::Error),\n    /// Non-fatal error, message is unsent.\n    #[error(\"bincode serialization error: {0}\")]\n    BincodeError(#[from] bincode::Error),\n}\n\n#[derive(Debug, Error)]\npub enum RecvError {\n    /// Fatal internal channel error.\n    #[error(\"reliable channel error: {0}\")]\n    ReliableChannelError(#[from] reliable_channel::Error),\n    /// Non-fatal error, message is skipped.\n    #[error(\"bincode serialization error: {0}\")]\n    BincodeError(#[from] bincode::Error),\n}\n\n/// Wraps a `ReliableChannel` together with an internal buffer to allow easily sending message types\n/// serialized with `bincode`.\n///\n/// Messages are guaranteed to arrive, and are guaranteed to be in order. 
Messages have a maximum\n/// length, but this maximum size can be larger than the size of an individual packet.\npub struct ReliableBincodeChannel {\n    channel: ReliableChannel,\n\n    write_buffer: Vec<u8>,\n    write_pos: usize,\n\n    read_buffer: Vec<u8>,\n    read_pos: usize,\n}\n\nimpl From<ReliableChannel> for ReliableBincodeChannel {\n    fn from(channel: ReliableChannel) -> Self {\n        Self::new(channel)\n    }\n}\n\nimpl ReliableBincodeChannel {\n    /// Create a new `ReliableBincodeChannel` wrapping the given `ReliableChannel`. The maximum\n    /// serialized message length is `MAX_MESSAGE_LEN`.\n    pub fn new(channel: ReliableChannel) -> Self {\n        ReliableBincodeChannel {\n            channel,\n            write_buffer: Vec::new(),\n            write_pos: 0,\n            read_buffer: Vec::new(),\n            read_pos: 0,\n        }\n    }\n\n    pub fn into_inner(self) -> ReliableChannel {\n        self.channel\n    }\n\n    /// Write the given message to the reliable channel.\n    ///\n    /// In order to ensure that messages are sent in a timely manner, `flush` must be called after\n    /// calling this method. Without calling `flush`, any pending writes will not be sent until the\n    /// next automatic sender task wakeup.\n    ///\n    /// This method is cancel safe, it will never partially send a message, and completes\n    /// immediately upon successfully queuing a message to send.\n    pub async fn send<M: Serialize>(&mut self, msg: &M) -> Result<(), SendError> {\n        future::poll_fn(|cx| self.poll_send_ready(cx)).await?;\n        self.start_send(msg)?;\n        Ok(())\n    }\n\n    pub fn try_send<M: Serialize>(&mut self, msg: &M) -> Result<bool, SendError> {\n        if self.try_send_ready()? 
{\n            self.start_send(msg)?;\n            Ok(true)\n        } else {\n            Ok(false)\n        }\n    }\n\n    /// Ensure that any previously sent messages are sent as soon as possible.\n    ///\n    /// This method is cancel safe.\n    pub async fn flush(&mut self) -> Result<(), reliable_channel::Error> {\n        future::poll_fn(|cx| self.poll_flush(cx)).await\n    }\n\n    pub fn try_flush(&mut self) -> Result<bool, reliable_channel::Error> {\n        match self.poll_flush(&mut Context::from_waker(task::noop_waker_ref())) {\n            Poll::Pending => Ok(false),\n            Poll::Ready(Ok(())) => Ok(true),\n            Poll::Ready(Err(err)) => Err(err),\n        }\n    }\n\n    /// Read the next available incoming message.\n    ///\n    /// This method is cancel safe, it will never partially read a message or drop received\n    /// messages.\n    pub async fn recv<'a, M: Deserialize<'a>>(&'a mut self) -> Result<M, RecvError> {\n        future::poll_fn(|cx| self.poll_recv_ready(cx)).await?;\n        self.recv_next::<M>()\n    }\n\n    pub fn try_recv<'a, M: Deserialize<'a>>(&'a mut self) -> Result<Option<M>, RecvError> {\n        match self.poll_recv::<M>(&mut Context::from_waker(task::noop_waker_ref())) {\n            Poll::Pending => Ok(None),\n            Poll::Ready(Ok(val)) => Ok(Some(val)),\n            Poll::Ready(Err(err)) => Err(err),\n        }\n    }\n\n    pub fn poll_send_ready(\n        &mut self,\n        cx: &mut Context,\n    ) -> Poll<Result<(), reliable_channel::Error>> {\n        while !self.write_buffer.is_empty() {\n            let len = ready!(self\n                .channel\n                .poll_write(cx, &self.write_buffer[self.write_pos..]))?;\n            self.write_pos += len;\n            if self.write_pos == self.write_buffer.len() {\n                self.write_pos = 0;\n                self.write_buffer.clear();\n            }\n        }\n        Poll::Ready(Ok(()))\n    }\n\n    pub fn try_send_ready(&mut self) -> 
Result<bool, reliable_channel::Error> {\n        match self.poll_send_ready(&mut Context::from_waker(task::noop_waker_ref())) {\n            Poll::Pending => Ok(false),\n            Poll::Ready(Ok(())) => Ok(true),\n            Poll::Ready(Err(err)) => Err(err),\n        }\n    }\n\n    pub fn start_send<M: Serialize>(&mut self, msg: &M) -> Result<(), bincode::Error> {\n        assert!(self.write_buffer.is_empty());\n        self.write_buffer.resize(2, 0);\n        let bincode_config = self.bincode_config();\n        bincode_config.serialize_into(&mut self.write_buffer, msg)?;\n        let message_len = self.write_buffer.len() - 2;\n        LittleEndian::write_u16(\n            &mut self.write_buffer[0..2],\n            message_len.try_into().unwrap(),\n        );\n        Ok(())\n    }\n\n    pub fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), reliable_channel::Error>> {\n        ready!(self.poll_send_ready(cx))?;\n        self.channel.flush()?;\n        Poll::Ready(Ok(()))\n    }\n\n    pub fn poll_recv<'a, M: Deserialize<'a>>(\n        &'a mut self,\n        cx: &mut Context,\n    ) -> Poll<Result<M, RecvError>> {\n        ready!(self.poll_recv_ready(cx))?;\n        Poll::Ready(self.recv_next::<M>())\n    }\n\n    fn poll_recv_ready(&mut self, cx: &mut Context) -> Poll<Result<(), RecvError>> {\n        if self.read_pos < 2 {\n            self.read_buffer.resize(2, 0);\n            ready!(self.poll_finish_read(cx))?;\n        }\n\n        let message_len = LittleEndian::read_u16(&self.read_buffer[0..2]);\n        self.read_buffer.resize(message_len as usize + 2, 0);\n        ready!(self.poll_finish_read(cx))?;\n\n        Poll::Ready(Ok(()))\n    }\n\n    fn recv_next<'a, M: Deserialize<'a>>(&'a mut self) -> Result<M, RecvError> {\n        let bincode_config = self.bincode_config();\n        let res = bincode_config.deserialize(&self.read_buffer[2..]);\n        self.read_pos = 0;\n        Ok(res?)\n    }\n\n    fn poll_finish_read(&mut self, cx: &mut 
Context) -> Poll<Result<(), reliable_channel::Error>> {\n        while self.read_pos < self.read_buffer.len() {\n            let len = ready!(self\n                .channel\n                .poll_read(cx, &mut self.read_buffer[self.read_pos..]))?;\n            self.read_pos += len;\n        }\n        Poll::Ready(Ok(()))\n    }\n\n    fn bincode_config(&self) -> impl bincode::Options + Copy {\n        bincode::options().with_limit(MAX_MESSAGE_LEN as u64)\n    }\n}\n\n/// Wrapper over a `ReliableBincodeChannel` that only allows a single message type.\npub struct ReliableTypedChannel<M> {\n    channel: ReliableBincodeChannel,\n    _phantom: PhantomData<M>,\n}\n\nimpl<M> From<ReliableChannel> for ReliableTypedChannel<M> {\n    fn from(channel: ReliableChannel) -> Self {\n        Self::new(channel)\n    }\n}\n\nimpl<M> ReliableTypedChannel<M> {\n    pub fn new(channel: ReliableChannel) -> Self {\n        ReliableTypedChannel {\n            channel: ReliableBincodeChannel::new(channel),\n            _phantom: PhantomData,\n        }\n    }\n\n    pub fn into_inner(self) -> ReliableChannel {\n        self.channel.into_inner()\n    }\n\n    pub async fn flush(&mut self) -> Result<(), reliable_channel::Error> {\n        self.channel.flush().await\n    }\n\n    pub fn try_flush(&mut self) -> Result<bool, reliable_channel::Error> {\n        self.channel.try_flush()\n    }\n\n    pub fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), reliable_channel::Error>> {\n        self.channel.poll_flush(cx)\n    }\n\n    pub fn poll_send_ready(\n        &mut self,\n        cx: &mut Context,\n    ) -> Poll<Result<(), reliable_channel::Error>> {\n        self.channel.poll_send_ready(cx)\n    }\n\n    pub fn try_send_ready(&mut self) -> Result<bool, reliable_channel::Error> {\n        self.channel.try_send_ready()\n    }\n}\n\nimpl<M: Serialize> ReliableTypedChannel<M> {\n    pub async fn send(&mut self, msg: &M) -> Result<(), SendError> {\n        
self.channel.send(msg).await\n    }\n\n    pub fn try_send(&mut self, msg: &M) -> Result<bool, SendError> {\n        self.channel.try_send(msg)\n    }\n\n    pub fn start_send(&mut self, msg: &M) -> Result<(), bincode::Error> {\n        self.channel.start_send(msg)\n    }\n}\n\nimpl<'a, M: Deserialize<'a>> ReliableTypedChannel<M> {\n    pub async fn recv(&'a mut self) -> Result<M, RecvError> {\n        self.channel.recv::<M>().await\n    }\n\n    pub fn try_recv(&'a mut self) -> Result<Option<M>, RecvError> {\n        self.channel.try_recv::<M>()\n    }\n\n    pub fn poll_recv(&'a mut self, cx: &mut Context) -> Poll<Result<M, RecvError>> {\n        self.channel.poll_recv::<M>(cx)\n    }\n}\n"
  },
  {
    "path": "src/reliable_channel.rs",
    "content": "use std::{\n    i16,\n    num::Wrapping,\n    pin::Pin,\n    sync::Arc,\n    task::{Context, Poll},\n    time::Duration,\n    u32,\n};\n\nuse byteorder::{ByteOrder, LittleEndian};\nuse futures::{\n    future::{self, Fuse, FusedFuture, RemoteHandle},\n    select,\n    task::AtomicWaker,\n    FutureExt, SinkExt, StreamExt,\n};\nuse rustc_hash::FxHashMap;\nuse thiserror::Error;\n\nuse crate::{\n    bandwidth_limiter::BandwidthLimiter,\n    packet::{Packet, PacketPool},\n    runtime::{Spawn, Timer},\n    spsc,\n    windows::{\n        stream_gt, AckResult, RecvWindow, RecvWindowReader, SendWindow, SendWindowWriter, StreamPos,\n    },\n};\n\n/// All reliable channel errors are fatal. Once any error is returned all further reliable channel\n/// method calls will return `Err(Error::Shutdown)`.\n#[derive(Debug, Error)]\npub enum Error {\n    #[error(\"incoming or outgoing packet channel has been disconnected\")]\n    Disconnected,\n    #[error(\"remote endpoint has violated the reliability protocol\")]\n    ProtocolError,\n    #[error(\"an error has been encountered that has caused the channel to shutdown\")]\n    Shutdown,\n}\n\n#[derive(Debug, Clone, PartialEq)]\npub struct Settings {\n    /// The target outgoing bandwidth, in bytes / sec.\n    ///\n    /// This is the target bandwidth usage for all sent packets, not the target bandwidth for the\n    /// actual underlying stream. Both sends and resends (but not currently acks) count against this\n    /// bandwidth limit, so this is designed to limit the amount of traffic this channel produces.\n    pub bandwidth: u32,\n    /// The maximum amount of bandwidth credit that can accumulate. 
This is the maximum bytes that\n    /// will be sent in a single burst.\n    pub burst_bandwidth: u32,\n    /// The size of the incoming ring buffer.\n    pub recv_window_size: u32,\n    /// The size of the outgoing ring buffer.\n    pub send_window_size: u32,\n    /// The sending side of a channel will always send a constant amount of bytes more than what\n    /// it believes the remote's recv window actually is, to avoid stalling the connection. This\n    /// controls the amount past the recv window which will be sent, and also the initial amount of\n    /// data that will be sent when the connection starts up.\n    pub init_send: u32,\n    /// The transmission task for the channel will wake up at this rate to do resends, if not woken\n    /// up to send other data.\n    pub resend_time: Duration,\n    /// The initial estimate for the RTT.\n    pub initial_rtt: Duration,\n    /// The maximum reasonable RTT which will be used as an upper bound for packet RTT values.\n    pub max_rtt: Duration,\n    /// The computed RTT for each received acknowledgment will be mixed with the RTT estimate by\n    /// this factor.\n    pub rtt_update_factor: f64,\n    /// Resends will occur if an acknowledgment is not received within this multiplicative factor of\n    /// the estimated RTT.\n    pub rtt_resend_factor: f64,\n}\n\n/// Turns a stream of unreliable, unordered packets into a reliable in-order stream of data.\npub struct ReliableChannel {\n    send_window_writer: SendWindowWriter,\n    recv_window_reader: RecvWindowReader,\n    shared: Arc<Shared>,\n    task: Fuse<RemoteHandle<Error>>,\n}\n\nimpl ReliableChannel {\n    pub fn new<S, T, P>(\n        spawn: S,\n        timer: T,\n        packet_pool: P,\n        settings: Settings,\n        sender: spsc::Sender<P::Packet>,\n        receiver: spsc::Receiver<P::Packet>,\n    ) -> Self\n    where\n        S: Spawn + 'static,\n        T: Timer + 'static,\n        P: PacketPool + Send + 'static,\n        P::Packet: Send,\n    {\n 
       assert!(settings.bandwidth != 0);\n        assert!(settings.recv_window_size != 0);\n        assert!(settings.send_window_size != 0);\n        assert!(settings.burst_bandwidth != 0);\n        assert!(settings.init_send != 0);\n        assert!(settings.rtt_update_factor > 0.);\n        assert!(settings.rtt_resend_factor > 0.);\n\n        let resend_timer = Box::pin(timer.sleep(settings.resend_time).fuse());\n\n        let (send_window, send_window_writer) =\n            SendWindow::new(settings.send_window_size, Wrapping(0));\n        let (recv_window, recv_window_reader) =\n            RecvWindow::new(settings.recv_window_size, Wrapping(0));\n\n        let shared = Arc::new(Shared::default());\n\n        let bandwidth_limiter =\n            BandwidthLimiter::new(&timer, settings.bandwidth, settings.burst_bandwidth);\n        let remote_recv_available = settings.init_send;\n        let rtt_estimate = settings.initial_rtt.as_secs_f64();\n\n        let task = Task {\n            settings,\n            timer,\n            packet_pool,\n            sender,\n            receiver,\n            shared: shared.clone(),\n            send_window,\n            recv_window,\n            resend_timer,\n            remote_recv_available,\n            unacked_ranges: FxHashMap::default(),\n            rtt_estimate,\n            bandwidth_limiter,\n        };\n        let (remote, remote_handle) =\n            { async move { task.main_loop().await.unwrap_err() } }.remote_handle();\n\n        spawn.spawn(remote);\n\n        ReliableChannel {\n            send_window_writer,\n            recv_window_reader,\n            shared,\n            task: remote_handle.fuse(),\n        }\n    }\n\n    /// Write the given data to the reliable channel and return once any nonzero amount of data has\n    /// been written.\n    ///\n    /// In order to ensure that all data will be sent, `ReliableChannel::flush` must be called after\n    /// any number of writes.\n    ///\n    /// This 
method is cancel safe, it completes immediately once any amount of data is written,\n    /// dropping an incomplete future will have no effect.\n    pub async fn write(&mut self, data: &[u8]) -> Result<usize, Error> {\n        future::poll_fn(|cx| self.poll_write(cx, data)).await\n    }\n\n    /// Ensure that any previously written data will be fully sent.\n    ///\n    /// Returns once the sending task has been notified to wake up and will send the written data\n    /// promptly. Does *not* actually wait for outgoing packets to be sent before returning.\n    pub fn flush(&mut self) -> Result<(), Error> {\n        if self.task.is_terminated() {\n            Err(Error::Shutdown)\n        } else if let Some(error) = (&mut self.task).now_or_never() {\n            Err(error)\n        } else {\n            self.shared.send_ready.wake();\n            Ok(())\n        }\n    }\n\n    /// Read any available data. Returns once at least one byte of data has been read.\n    ///\n    /// This method is cancel safe, it completes immediately once any amount of data is read,\n    /// dropping an incomplete future will have no effect.\n    pub async fn read(&mut self, data: &mut [u8]) -> Result<usize, Error> {\n        future::poll_fn(|cx| self.poll_read(cx, data)).await\n    }\n\n    pub fn poll_write(&mut self, cx: &mut Context, data: &[u8]) -> Poll<Result<usize, Error>> {\n        if self.task.is_terminated() {\n            return Poll::Ready(Err(Error::Shutdown));\n        }\n\n        if let Poll::Ready(err) = self.task.poll_unpin(cx) {\n            return Poll::Ready(Err(err));\n        }\n\n        if data.is_empty() {\n            return Poll::Ready(Ok(0));\n        }\n\n        let len = self.send_window_writer.write(data);\n        if len > 0 {\n            Poll::Ready(Ok(len as usize))\n        } else {\n            self.shared.write_ready.register(cx.waker());\n            let len = self.send_window_writer.write(data);\n            if len > 0 {\n                
Poll::Ready(Ok(len as usize))\n            } else {\n                self.shared.send_ready.wake();\n                Poll::Pending\n            }\n        }\n    }\n\n    pub fn poll_read(&mut self, cx: &mut Context, data: &mut [u8]) -> Poll<Result<usize, Error>> {\n        if self.task.is_terminated() {\n            return Poll::Ready(Err(Error::Shutdown));\n        }\n\n        if let Poll::Ready(err) = self.task.poll_unpin(cx) {\n            return Poll::Ready(Err(err));\n        }\n\n        if data.is_empty() {\n            return Poll::Ready(Ok(0));\n        }\n\n        let len = self.recv_window_reader.read(data);\n        if len > 0 {\n            Poll::Ready(Ok(len as usize))\n        } else {\n            self.shared.read_ready.register(cx.waker());\n            let len = self.recv_window_reader.read(data);\n            if len > 0 {\n                Poll::Ready(Ok(len as usize))\n            } else {\n                Poll::Pending\n            }\n        }\n    }\n\n    /// The amount of space currently available for writing without blocking.\n    pub fn write_available(&self) -> usize {\n        self.send_window_writer.write_available() as usize\n    }\n\n    /// Attempt to write data without blocking or registering wakeups.\n    pub fn try_write(&mut self, data: &[u8]) -> Result<usize, Error> {\n        if self.task.is_terminated() {\n            Err(Error::Shutdown)\n        } else {\n            Ok(self.send_window_writer.write(data) as usize)\n        }\n    }\n\n    /// Attempt to read data without blocking or registering wakeups.\n    pub fn try_read(&mut self, data: &mut [u8]) -> Result<usize, Error> {\n        if self.task.is_terminated() {\n            Err(Error::Shutdown)\n        } else {\n            Ok(self.recv_window_reader.read(data) as usize)\n        }\n    }\n}\n\n#[derive(Default)]\nstruct Shared {\n    send_ready: AtomicWaker,\n    write_ready: AtomicWaker,\n    read_ready: AtomicWaker,\n}\n\nstruct UnackedRange<I> {\n    start: 
StreamPos,\n    end: StreamPos,\n    last_sent: Option<I>,\n    retransmit: bool,\n}\n\nstruct Task<T, P>\nwhere\n    T: Timer,\n    P: PacketPool,\n{\n    timer: T,\n    settings: Settings,\n    packet_pool: P,\n    sender: spsc::Sender<P::Packet>,\n    receiver: spsc::Receiver<P::Packet>,\n\n    shared: Arc<Shared>,\n    send_window: SendWindow,\n    recv_window: RecvWindow,\n    resend_timer: Pin<Box<Fuse<T::Sleep>>>,\n    remote_recv_available: u32,\n    unacked_ranges: FxHashMap<StreamPos, UnackedRange<T::Instant>>,\n    rtt_estimate: f64,\n    bandwidth_limiter: BandwidthLimiter<T>,\n}\n\nimpl<T, P> Task<T, P>\nwhere\n    T: Timer,\n    P: PacketPool,\n{\n    async fn main_loop(mut self) -> Result<(), Error> {\n        loop {\n            enum WakeReason<P> {\n                ResendTimer,\n                IncomingPacket(P),\n                SendAvailable,\n            }\n\n            self.bandwidth_limiter.update_available(&self.timer);\n\n            let wake_reason = {\n                let bandwidth_limiter = &self.bandwidth_limiter;\n                let resend_timer = &mut self.resend_timer;\n\n                let resend_timer = async {\n                    if !resend_timer.is_terminated() {\n                        resend_timer.await;\n                    }\n                    // Don't bother waking up for the resend timer until we have bandwidth available\n                    // to do resends.\n                    if let Some(delay) = bandwidth_limiter.delay_until_available(&self.timer) {\n                        delay.await;\n                    }\n                }\n                .fuse();\n\n                let send_available = async {\n                    if self.remote_recv_available == 0 {\n                        // Don't wake up at all for sending new data if we couldn't send anything\n                        // anyway.\n                        future::pending::<()>().await;\n                    }\n\n                    // Don't wake up for 
sending new data until we have bandwidth available.\n                    if let Some(delay) = bandwidth_limiter.delay_until_available(&self.timer) {\n                        delay.await;\n                    }\n\n                    future::poll_fn(|cx| {\n                        if self.send_window.send_available() > 0 {\n                            Poll::Ready(())\n                        } else {\n                            self.shared.send_ready.register(cx.waker());\n                            if self.send_window.send_available() > 0 {\n                                Poll::Ready(())\n                            } else {\n                                Poll::Pending\n                            }\n                        }\n                    })\n                    .await\n                }\n                .fuse();\n\n                select! {\n                    _ = { resend_timer } => WakeReason::ResendTimer,\n                    incoming_packet = self.receiver.next().fuse() => {\n                        WakeReason::IncomingPacket(incoming_packet.ok_or(Error::Disconnected)?)\n                    },\n                    _ = { send_available } => WakeReason::SendAvailable,\n                }\n            };\n\n            self.bandwidth_limiter.update_available(&self.timer);\n\n            match wake_reason {\n                WakeReason::ResendTimer => {\n                    self.resend().await?;\n                    self.resend_timer\n                        .set(self.timer.sleep(self.settings.resend_time).fuse());\n                }\n                WakeReason::IncomingPacket(packet) => {\n                    self.recv_packet(packet).await?;\n                }\n                WakeReason::SendAvailable => {\n                    // We should use available bandwidth for resends before sending, to avoid\n                    // starving resends\n                    self.resend().await?;\n                    self.resend_timer\n                        
.set(self.timer.sleep(self.settings.resend_time).fuse());\n\n                    self.send().await?;\n                }\n            }\n\n            // Don't let the connection stall. If we are now out of unacked ranges to resend and\n            // we believe the remote has no recv left, we will receive no acknowledgments to let us\n            // update the remote receive window. Keep sending a small amount of data past the remote\n            // receive window, even if it is unacked, so that we are notified when the remote starts\n            // processing data again.\n            if self.unacked_ranges.is_empty() && self.remote_recv_available == 0 {\n                self.remote_recv_available = self.settings.init_send;\n            }\n        }\n    }\n\n    // Send any data available to send, if we have the bandwidth for it\n    async fn send(&mut self) -> Result<(), Error> {\n        if !self.bandwidth_limiter.bytes_available() {\n            return Ok(());\n        }\n\n        let send_amt = (self.send_window.send_available())\n            .min(self.remote_recv_available)\n            .min(i16::MAX as u32);\n\n        if send_amt == 0 {\n            return Ok(());\n        }\n\n        let send_amt = send_amt.min((self.packet_pool.capacity() - 6) as u32);\n        let mut packet = self.packet_pool.acquire();\n\n        packet.resize(6 + send_amt as usize, 0);\n\n        let (start, end) = self.send_window.send(&mut packet[6..]).unwrap();\n        assert_eq!((end - start).0, send_amt);\n\n        LittleEndian::write_i16(&mut packet[0..2], send_amt as i16);\n        LittleEndian::write_u32(&mut packet[2..6], start.0);\n\n        self.unacked_ranges.insert(\n            start,\n            UnackedRange {\n                start,\n                end,\n                last_sent: Some(self.timer.now()),\n                retransmit: false,\n            },\n        );\n\n        self.bandwidth_limiter.take_bytes(packet.len() as u32);\n        self.sender\n         
   .send(packet)\n            .await\n            .map_err(|_| Error::Disconnected)?;\n\n        self.remote_recv_available -= send_amt;\n\n        Ok(())\n    }\n\n    // Resend any data whose retransmit time has been reached, if we have the bandwidth for it\n    async fn resend(&mut self) -> Result<(), Error> {\n        for unacked in self.unacked_ranges.values_mut() {\n            if !self.bandwidth_limiter.bytes_available() {\n                break;\n            }\n\n            let resend = if let Some(last_sent) = unacked.last_sent {\n                let elapsed = self.timer.duration_between(last_sent, self.timer.now());\n                elapsed.as_secs_f64() > self.rtt_estimate * self.settings.rtt_resend_factor\n            } else {\n                true\n            };\n\n            if resend {\n                unacked.last_sent = Some(self.timer.now());\n                unacked.retransmit = true;\n\n                let len = (unacked.end - unacked.start).0;\n\n                let mut packet = self.packet_pool.acquire();\n                packet.resize(6 + len as usize, 0);\n                LittleEndian::write_i16(&mut packet[0..2], len as i16);\n                LittleEndian::write_u32(&mut packet[2..6], unacked.start.0);\n\n                self.send_window\n                    .get_unacked(unacked.start, &mut packet[6..]);\n\n                self.bandwidth_limiter.take_bytes(packet.len() as u32);\n\n                self.sender\n                    .send(packet)\n                    .await\n                    .map_err(|_| Error::Disconnected)?;\n            }\n        }\n\n        Ok(())\n    }\n\n    // Receive the given packet and respond with an acknowledgment packet, ignoring bandwidth\n    // limits.\n    async fn recv_packet(&mut self, packet: P::Packet) -> Result<(), Error> {\n        if packet.len() < 2 {\n            return Err(Error::ProtocolError);\n        }\n\n        let data_len = LittleEndian::read_i16(&packet[0..2]);\n        if data_len < 
0 {\n            if packet.len() != 10 {\n                return Err(Error::ProtocolError);\n            }\n\n            let start_pos = Wrapping(LittleEndian::read_u32(&packet[2..6]));\n            let end_pos = start_pos + Wrapping(-data_len as u32);\n            let recv_window_end = Wrapping(LittleEndian::read_u32(&packet[6..10]));\n\n            if stream_gt(recv_window_end, self.send_window.send_pos()) {\n                let old_remote_recv_available = self.remote_recv_available;\n                self.remote_recv_available = self\n                    .remote_recv_available\n                    .max((recv_window_end - self.send_window.send_pos()).0);\n\n                if self.remote_recv_available != 0 && old_remote_recv_available == 0 {\n                    // If we now believe the remote is newly ready to receive data, go ahead and\n                    // send it.\n                    self.send().await?;\n                }\n            }\n\n            let acked_range = match self.send_window.ack_range(start_pos, end_pos) {\n                AckResult::NotFound => None,\n                AckResult::Ack => {\n                    let acked = self.unacked_ranges.remove(&start_pos).unwrap();\n                    assert_eq!(acked.end, end_pos);\n                    Some(acked)\n                }\n                AckResult::PartialAck(nacked_end) => {\n                    let mut acked = self.unacked_ranges.remove(&start_pos).unwrap();\n                    assert_eq!(acked.end, nacked_end);\n                    acked.end = end_pos;\n                    self.unacked_ranges.insert(\n                        end_pos,\n                        UnackedRange {\n                            start: end_pos,\n                            end: nacked_end,\n                            last_sent: None,\n                            retransmit: true,\n                        },\n                    );\n                    Some(acked)\n                }\n            };\n\n           
 if let Some(acked_range) = acked_range {\n                // Only update the RTT estimation for acked ranges that did not need to be\n                // retransmitted, otherwise we do not know which packet is being acked and thus\n                // can't be sure of the actual RTT for this ack.\n                if !acked_range.retransmit {\n                    if let Some(last_sent) = acked_range.last_sent {\n                        let rtt = self\n                            .timer\n                            .duration_between(last_sent, self.timer.now())\n                            .min(self.settings.max_rtt)\n                            .as_secs_f64();\n                        self.rtt_estimate +=\n                            (rtt - self.rtt_estimate) * self.settings.rtt_update_factor;\n                    }\n                }\n\n                if self.send_window.write_available() > 0 {\n                    self.shared.write_ready.wake();\n                }\n            }\n        } else {\n            if packet.len() < 6 {\n                return Err(Error::ProtocolError);\n            }\n\n            let start_pos = Wrapping(LittleEndian::read_u32(&packet[2..6]));\n            if data_len as usize != packet.len() - 6 {\n                return Err(Error::ProtocolError);\n            }\n\n            if let Some(end_pos) = self.recv_window.recv(start_pos, &packet[6..]) {\n                let mut ack_packet = self.packet_pool.acquire();\n                ack_packet.resize(10, 0);\n                let ack_len = (end_pos - start_pos).0 as i16;\n                LittleEndian::write_i16(&mut ack_packet[0..2], -ack_len);\n                LittleEndian::write_u32(&mut ack_packet[2..6], start_pos.0);\n                LittleEndian::write_u32(&mut ack_packet[6..10], self.recv_window.window_end().0);\n\n                // We currently do not count acknowledgement packets against the outgoing bandwidth\n                // at all.\n                self.sender\n            
        .send(ack_packet)\n                    .await\n                    .map_err(|_| Error::Disconnected)?;\n\n                if self.recv_window.read_available() > 0 {\n                    self.shared.read_ready.wake();\n                }\n            }\n        }\n\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "src/ring_buffer.rs",
    "content": "use std::{\n    alloc::{alloc, dealloc, Layout},\n    mem::{self, MaybeUninit},\n    ptr::NonNull,\n    slice,\n    sync::{\n        atomic::{AtomicUsize, Ordering},\n        Arc,\n    },\n};\n\nuse cache_padded::CachePadded;\n\npub struct RingBuffer {\n    buffer: NonNull<MaybeUninit<u8>>,\n    capacity: usize,\n    head: CachePadded<AtomicUsize>,\n    tail: CachePadded<AtomicUsize>,\n}\n\nimpl RingBuffer {\n    pub fn new(capacity: usize) -> (Writer, Reader) {\n        assert!(capacity != 0);\n        let buffer = Arc::new(Self {\n            buffer: unsafe {\n                NonNull::new(alloc(Layout::array::<MaybeUninit<u8>>(capacity).unwrap())\n                    as *mut MaybeUninit<u8>)\n                .unwrap()\n            },\n            capacity,\n            head: CachePadded::new(AtomicUsize::new(0)),\n            tail: CachePadded::new(AtomicUsize::new(0)),\n        });\n\n        let writer = Writer(buffer.clone());\n        let reader = Reader(buffer);\n        (writer, reader)\n    }\n\n    pub fn write_available(&self) -> usize {\n        let head = self.head.load(Ordering::Acquire);\n        let tail = self.tail.load(Ordering::Acquire);\n\n        head_to_tail(self.capacity, head, tail)\n    }\n\n    pub fn read_available(&self) -> usize {\n        let head = self.head.load(Ordering::Acquire);\n        let tail = self.tail.load(Ordering::Acquire);\n\n        tail_to_head(self.capacity, tail, head)\n    }\n}\n\nimpl Drop for RingBuffer {\n    fn drop(&mut self) {\n        unsafe {\n            dealloc(\n                self.buffer.as_ptr() as *mut u8,\n                Layout::array::<MaybeUninit<u8>>(self.capacity).unwrap(),\n            );\n        }\n    }\n}\n\nunsafe impl Send for RingBuffer {}\nunsafe impl Sync for RingBuffer {}\n\npub struct Writer(Arc<RingBuffer>);\n\nimpl Writer {\n    pub fn available(&self) -> usize {\n        self.0.write_available()\n    }\n\n    pub fn write(&mut self, mut offset: usize, mut data: 
&[u8]) -> usize {\n        let head_pos = self.0.head.load(Ordering::Acquire);\n        let tail_pos = self.0.tail.load(Ordering::Acquire);\n\n        let head = collapse_position(self.0.capacity, head_pos);\n        let tail = collapse_position(self.0.capacity, tail_pos);\n\n        if head == tail && head_pos != tail_pos {\n            return 0;\n        }\n\n        let (mut left, mut right): (&mut [MaybeUninit<u8>], &mut [MaybeUninit<u8>]) = unsafe {\n            if head < tail {\n                (\n                    slice::from_raw_parts_mut(self.0.buffer.as_ptr().add(head), tail - head),\n                    &mut [],\n                )\n            } else {\n                (\n                    slice::from_raw_parts_mut(\n                        self.0.buffer.as_ptr().add(head),\n                        self.0.capacity - head,\n                    ),\n                    slice::from_raw_parts_mut(self.0.buffer.as_ptr(), tail),\n                )\n            }\n        };\n\n        let left_eat = left.len().min(offset);\n        left = &mut left[left_eat..];\n        offset -= left_eat;\n\n        let left_len = left.len().min(data.len());\n        write_slice(&mut left[0..left_len], &data[0..left_len]);\n        data = &data[left_len..];\n\n        let right_eat = right.len().min(offset);\n        right = &mut right[right_eat..];\n\n        let right_len = right.len().min(data.len());\n        write_slice(&mut right[0..right_len], &data[0..right_len]);\n\n        left_len + right_len\n    }\n\n    pub fn advance(&mut self, offset: usize) -> usize {\n        let head = self.0.head.load(Ordering::Acquire);\n        let tail = self.0.tail.load(Ordering::Acquire);\n\n        let offset = offset.min(head_to_tail(self.0.capacity, head, tail));\n        let head = increment(self.0.capacity, head, offset);\n        self.0.head.store(head, Ordering::Release);\n\n        offset\n    }\n\n    pub fn buffer(&self) -> &RingBuffer {\n        &self.0\n    }\n}\n\npub 
struct Reader(Arc<RingBuffer>);\n\nimpl Reader {\n    pub fn available(&self) -> usize {\n        self.0.read_available()\n    }\n\n    pub fn read(&self, mut offset: usize, mut data: &mut [u8]) -> usize {\n        let head_pos = self.0.head.load(Ordering::Acquire);\n        let tail_pos = self.0.tail.load(Ordering::Acquire);\n\n        let head = collapse_position(self.0.capacity, head_pos);\n        let tail = collapse_position(self.0.capacity, tail_pos);\n\n        if head == tail && head_pos == tail_pos {\n            return 0;\n        }\n\n        let (mut left, mut right): (&[u8], &[u8]) = unsafe {\n            if tail < head {\n                (\n                    slice::from_raw_parts(self.0.buffer.as_ptr().add(tail) as *mut u8, head - tail),\n                    &mut [],\n                )\n            } else {\n                (\n                    slice::from_raw_parts(\n                        self.0.buffer.as_ptr().add(tail) as *mut u8,\n                        self.0.capacity - tail,\n                    ),\n                    slice::from_raw_parts(self.0.buffer.as_ptr() as *mut u8, head),\n                )\n            }\n        };\n\n        let left_eat = left.len().min(offset);\n        left = &left[left_eat..];\n        offset -= left_eat;\n\n        let left_len = left.len().min(data.len());\n        data[0..left_len].copy_from_slice(&left[0..left_len]);\n        data = &mut data[left_len..];\n\n        let right_eat = right.len().min(offset);\n        right = &right[right_eat..];\n\n        let right_len = right.len().min(data.len());\n        data[0..right_len].copy_from_slice(&right[0..right_len]);\n\n        left_len + right_len\n    }\n\n    pub fn advance(&mut self, offset: usize) -> usize {\n        let head = self.0.head.load(Ordering::Acquire);\n        let tail = self.0.tail.load(Ordering::Acquire);\n\n        let offset = offset.min(tail_to_head(self.0.capacity, tail, head));\n        let tail = increment(self.0.capacity, tail, 
offset);\n        self.0.tail.store(tail, Ordering::Release);\n\n        offset\n    }\n\n    pub fn buffer(&self) -> &RingBuffer {\n        &self.0\n    }\n}\n\nfn collapse_position(capacity: usize, pos: usize) -> usize {\n    if pos < capacity {\n        pos\n    } else {\n        pos - capacity\n    }\n}\n\nfn tail_to_head(capacity: usize, tail: usize, head: usize) -> usize {\n    if tail <= head {\n        head - tail\n    } else {\n        capacity - (tail - capacity) + head\n    }\n}\n\nfn head_to_tail(capacity: usize, head: usize, tail: usize) -> usize {\n    capacity - tail_to_head(capacity, tail, head)\n}\n\nfn increment(capacity: usize, pos: usize, n: usize) -> usize {\n    if n == 0 {\n        return pos;\n    }\n\n    let threshold = (capacity - n) + capacity;\n    if pos < threshold {\n        pos + n\n    } else {\n        pos - threshold\n    }\n}\n\nfn write_slice(dst: &mut [MaybeUninit<u8>], src: &[u8]) {\n    let src: &[MaybeUninit<u8>] = unsafe { mem::transmute(src) };\n    dst.copy_from_slice(src);\n}\n\n#[cfg(test)]\nmod tests {\n    use std::thread;\n\n    use super::*;\n\n    #[test]\n    fn basic_read_write() {\n        let (mut writer, mut reader) = RingBuffer::new(7);\n        let mut buffer = [0; 7];\n\n        assert_eq!(writer.available(), 7);\n        assert_eq!(writer.write(0, &[0, 1, 2]), 3);\n        assert_eq!(writer.advance(3), 3);\n        assert_eq!(writer.available(), 4);\n        assert_eq!(reader.available(), 3);\n        assert_eq!(reader.read(0, &mut buffer), 3);\n        assert_eq!(buffer[0..3], [0, 1, 2]);\n        assert_eq!(writer.available(), 4);\n        assert_eq!(reader.advance(3), 3);\n        assert_eq!(writer.available(), 7);\n        assert_eq!(reader.available(), 0);\n        assert_eq!(writer.write(0, &[0, 1, 2]), 3);\n        assert_eq!(writer.advance(3), 3);\n        assert_eq!(writer.available(), 4);\n        assert_eq!(reader.read(0, &mut buffer[0..3]), 3);\n        assert_eq!(buffer[0..3], [0, 1, 2]);\n   
     assert_eq!(writer.write(0, &[3, 4, 5]), 3);\n        assert_eq!(writer.advance(3), 3);\n        assert_eq!(writer.available(), 1);\n        assert_eq!(writer.write(0, &[6, 7, 8, 9]), 1);\n        assert_eq!(writer.advance(1), 1);\n        assert_eq!(writer.available(), 0);\n        assert_eq!(reader.available(), 7);\n        assert_eq!(reader.read(4, &mut buffer[0..5]), 3);\n        assert_eq!(buffer[0..3], [4, 5, 6]);\n        assert_eq!(reader.read(0, &mut buffer[0..2]), 2);\n        assert_eq!(buffer[0..2], [0, 1]);\n        assert_eq!(reader.advance(2), 2);\n        assert_eq!(reader.available(), 5);\n        assert_eq!(writer.available(), 2);\n        assert_eq!(reader.read(0, &mut buffer[0..3]), 3);\n        assert_eq!(buffer[0..3], [2, 3, 4]);\n        assert_eq!(reader.advance(3), 3);\n        assert_eq!(reader.available(), 2);\n        assert_eq!(writer.available(), 5);\n        assert_eq!(reader.read(0, &mut buffer[0..5]), 2);\n        assert_eq!(buffer[0..2], [5, 6]);\n        assert_eq!(reader.available(), 2);\n        assert_eq!(writer.available(), 5);\n        assert_eq!(reader.advance(5), 2);\n        assert_eq!(reader.available(), 0);\n        assert_eq!(writer.available(), 7);\n        assert_eq!(writer.write(3, &[13, 14]), 2);\n        assert_eq!(writer.write(0, &[10, 11, 12]), 3);\n        assert_eq!(writer.advance(5), 5);\n        assert_eq!(writer.available(), 2);\n        assert_eq!(reader.available(), 5);\n        assert_eq!(reader.read(2, &mut buffer[0..5]), 3);\n        assert_eq!(buffer[0..3], [12, 13, 14]);\n        assert_eq!(reader.read(0, &mut buffer[0..3]), 3);\n        assert_eq!(buffer[0..3], [10, 11, 12]);\n    }\n\n    #[test]\n    fn threaded_read_write() {\n        let (mut writer, mut reader) = RingBuffer::new(64);\n\n        let a = thread::spawn(move || {\n            let mut b = [0; 32];\n            let mut i = 0;\n            loop {\n                let write = 11 + (i % 17);\n                for j in 0..write {\n     
               b[j] = ((i + j) % 256) as u8;\n                }\n                let len = writer.write(0, &b[0..write]);\n                writer.advance(len);\n                i += len;\n                if i >= 10_000 {\n                    break;\n                }\n            }\n        });\n\n        let b = thread::spawn(move || {\n            let mut b = [0; 32];\n            let mut i = 0;\n            loop {\n                let r = reader.read(0, &mut b);\n                for j in 0..r {\n                    assert_eq!(b[j], ((i + j) % 256) as u8);\n                }\n                assert_eq!(reader.advance(r), r);\n                i += r;\n                if i >= 10_000 {\n                    break;\n                }\n            }\n        });\n\n        b.join().unwrap();\n        a.join().unwrap();\n    }\n}\n"
  },
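The tests above exercise the ring buffer's split reader/writer API: `write(offset, data)` stages bytes past the committed region without consuming space, `advance` commits them, and the reader mirrors this with its own `read`/`advance`. As a rough single-threaded illustration of those semantics (not the crate's actual lock-free implementation; all names below are invented for this sketch):

```rust
// Single-threaded sketch of the ring-buffer semantics exercised by the tests:
// `write(offset, data)` stages bytes, `write_advance` commits them, and the
// reader consumes with `read` / `read_advance`.
struct SketchRing {
    buf: Vec<u8>,
    head: usize, // index of the next byte the reader will consume
    len: usize,  // number of committed, readable bytes
}

impl SketchRing {
    fn new(capacity: usize) -> Self {
        SketchRing { buf: vec![0; capacity], head: 0, len: 0 }
    }

    fn write_available(&self) -> usize {
        self.buf.len() - self.len
    }

    fn read_available(&self) -> usize {
        self.len
    }

    // Stage `data` starting `offset` bytes past the committed region, without
    // committing it; returns how many bytes fit.
    fn write(&mut self, offset: usize, data: &[u8]) -> usize {
        let start = self.len + offset;
        let n = data.len().min(self.buf.len().saturating_sub(start));
        for (i, &b) in data[..n].iter().enumerate() {
            let idx = (self.head + start + i) % self.buf.len();
            self.buf[idx] = b;
        }
        n
    }

    // Commit up to `n` previously staged bytes, making them readable.
    fn write_advance(&mut self, n: usize) -> usize {
        let n = n.min(self.write_available());
        self.len += n;
        n
    }

    // Copy readable bytes starting `offset` past the read position into `out`.
    fn read(&self, offset: usize, out: &mut [u8]) -> usize {
        let n = out.len().min(self.len.saturating_sub(offset));
        for i in 0..n {
            out[i] = self.buf[(self.head + offset + i) % self.buf.len()];
        }
        n
    }

    // Consume up to `n` readable bytes, freeing space for the writer.
    fn read_advance(&mut self, n: usize) -> usize {
        let n = n.min(self.len);
        self.head = (self.head + n) % self.buf.len();
        self.len -= n;
        n
    }
}

fn main() {
    let mut r = SketchRing::new(8);
    assert_eq!(r.write(0, &[1, 2, 3]), 3);
    assert_eq!(r.write_advance(3), 3);
    let mut out = [0u8; 8];
    assert_eq!(r.read(0, &mut out[..3]), 3);
    assert_eq!(&out[..3], &[1, 2, 3]);
    assert_eq!(r.read_advance(2), 2);
    assert_eq!(r.read_available(), 1);
    assert_eq!(r.write_available(), 7);
}
```

The real implementation shares one buffer between two handles so a producer and consumer thread can make progress concurrently, as the `threaded_read_write` test demonstrates.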
  {
    "path": "src/runtime.rs",
    "content": "//! Traits for async runtime functionality needed by `turbulence`.\n\nuse std::{future::Future, time::Duration};\n\n/// This is similar to the `futures::task::Spawn` trait, but it is generic in the spawned\n/// future, which is better for backends like tokio.\npub trait Spawn: Send + Sync {\n    fn spawn<F>(&self, future: F)\n    where\n        F: Future<Output = ()> + Send + 'static;\n}\n\n/// This is designed so that it can be implemented on multiple platforms with multiple runtimes,\n/// including `wasm32-unknown-unknown`, where `std::time::Instant` is unavailable.\npub trait Timer: Send + Sync {\n    type Instant: Send + Sync + Copy;\n    type Sleep: Future<Output = ()> + Send;\n\n    /// Return the current instant.\n    fn now(&self) -> Self::Instant;\n\n    /// Similarly to `std::time::Instant::duration_since`, may panic if `later` comes before\n    /// `earlier`.\n    fn duration_between(&self, earlier: Self::Instant, later: Self::Instant) -> Duration;\n\n    /// Create a future which resolves after the given time has passed.\n    fn sleep(&self, duration: Duration) -> Self::Sleep;\n}\n"
  },
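For illustration, here is a hypothetical std-only implementation of the shape `Timer` describes: `Instant` as `std::time::Instant`, and `Sleep` as a future that completes once its deadline has passed, driven by a minimal busy-polling executor. Real backends (tokio, smol) arrange a proper timer wakeup instead of re-polling; everything below (`StdTimer`, `StdSleep`, `block_on`) is invented for this sketch.

```rust
use std::{
    future::Future,
    pin::Pin,
    task::{Context, Poll, RawWaker, RawWakerVTable, Waker},
    time::{Duration, Instant},
};

// A sleep future that is ready once the deadline has passed.
struct StdSleep {
    deadline: Instant,
}

impl Future for StdSleep {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context) -> Poll<()> {
        if Instant::now() >= self.deadline {
            Poll::Ready(())
        } else {
            // Ask to be polled again; a real runtime would schedule a wakeup
            // at the deadline rather than waking immediately.
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}

// Std-based analogue of the `Timer` trait's methods.
struct StdTimer;

impl StdTimer {
    fn now(&self) -> Instant {
        Instant::now()
    }
    fn duration_between(&self, earlier: Instant, later: Instant) -> Duration {
        later.duration_since(earlier)
    }
    fn sleep(&self, duration: Duration) -> StdSleep {
        StdSleep { deadline: Instant::now() + duration }
    }
}

// Minimal no-op waker plus busy-poll executor, just enough to drive `StdSleep`.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    // Safety: `fut` is a local that is never moved after being pinned.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
        std::thread::yield_now();
    }
}

fn main() {
    let timer = StdTimer;
    let start = timer.now();
    block_on(timer.sleep(Duration::from_millis(10)));
    assert!(timer.duration_between(start, timer.now()) >= Duration::from_millis(10));
}
```

On `wasm32-unknown-unknown`, where `std::time::Instant` is unavailable, an implementor would pick a different `Instant` type (e.g. a millisecond timestamp from the host), which is exactly why the trait keeps it associated rather than fixed.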
  {
    "path": "src/spsc.rs",
    "content": "use std::{\n    pin::Pin,\n    sync::Arc,\n    task::{Context, Poll},\n};\n\npub use crossbeam_channel::{TryRecvError, TrySendError};\nuse futures::{task::AtomicWaker, Sink, Stream};\nuse thiserror::Error;\n\n#[derive(Default)]\nstruct Shared {\n    send_ready: AtomicWaker,\n    recv_ready: AtomicWaker,\n}\n\npub struct Receiver<T> {\n    channel: crossbeam_channel::Receiver<T>,\n    shared: Arc<Shared>,\n}\n\nimpl<T> Drop for Receiver<T> {\n    fn drop(&mut self) {\n        self.shared.send_ready.wake();\n    }\n}\n\nimpl<T> Unpin for Receiver<T> {}\n\nimpl<T> Stream for Receiver<T> {\n    type Item = T;\n\n    fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<T>> {\n        match self.try_recv() {\n            Ok(r) => Poll::Ready(Some(r)),\n            Err(TryRecvError::Disconnected) => Poll::Ready(None),\n            Err(TryRecvError::Empty) => {\n                self.shared.recv_ready.register(cx.waker());\n                match self.try_recv() {\n                    Ok(r) => Poll::Ready(Some(r)),\n                    Err(TryRecvError::Disconnected) => Poll::Ready(None),\n                    Err(TryRecvError::Empty) => Poll::Pending,\n                }\n            }\n        }\n    }\n}\n\nimpl<T> Receiver<T> {\n    pub fn try_recv(&mut self) -> Result<T, TryRecvError> {\n        let t = self.channel.try_recv()?;\n        self.shared.send_ready.wake();\n        Ok(t)\n    }\n}\n\n#[derive(Debug, Error)]\n#[error(\"spsc channel disconnected\")]\npub struct Disconnected;\n\npub struct Sender<T> {\n    channel: crossbeam_channel::Sender<T>,\n    shared: Arc<Shared>,\n    slot: Option<T>,\n}\n\nimpl<T> Drop for Sender<T> {\n    fn drop(&mut self) {\n        self.shared.recv_ready.wake()\n    }\n}\n\nimpl<T> Unpin for Sender<T> {}\n\nimpl<T> Sink<T> for Sender<T> {\n    type Error = Disconnected;\n\n    fn poll_ready(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Result<(), Self::Error>> {\n        if let Some(t) = 
self.slot.take() {\n            match self.try_send(t) {\n                Ok(()) => Poll::Ready(Ok(())),\n                Err(TrySendError::Disconnected(_)) => Poll::Ready(Err(Disconnected)),\n                Err(TrySendError::Full(t)) => {\n                    self.shared.send_ready.register(cx.waker());\n                    match self.try_send(t) {\n                        Ok(()) => Poll::Ready(Ok(())),\n                        Err(TrySendError::Disconnected(_)) => Poll::Ready(Err(Disconnected)),\n                        Err(TrySendError::Full(t)) => {\n                            self.slot = Some(t);\n                            Poll::Pending\n                        }\n                    }\n                }\n            }\n        } else {\n            Poll::Ready(Ok(()))\n        }\n    }\n\n    fn start_send(mut self: Pin<&mut Self>, item: T) -> Result<(), Self::Error> {\n        if self.slot.replace(item).is_some() {\n            panic!(\"start_send called without being ready\");\n        }\n        Ok(())\n    }\n\n    fn poll_flush(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Result<(), Self::Error>> {\n        self.poll_ready(cx)\n    }\n\n    fn poll_close(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Result<(), Self::Error>> {\n        self.poll_flush(cx)\n    }\n}\n\nimpl<T> Sender<T> {\n    pub fn try_send(&mut self, t: T) -> Result<(), TrySendError<T>> {\n        if let Some(prev) = self.slot.take() {\n            if let Err(err) = self.channel.try_send(prev) {\n                match err {\n                    TrySendError::Full(inner) => {\n                        self.slot = Some(inner);\n                        return Err(TrySendError::Full(t));\n                    }\n                    TrySendError::Disconnected(inner) => {\n                        self.slot = Some(inner);\n                        return Err(TrySendError::Disconnected(t));\n                    }\n                }\n            } else {\n                
self.shared.recv_ready.wake();\n            }\n        }\n        self.channel.try_send(t)?;\n        self.shared.recv_ready.wake();\n        Ok(())\n    }\n}\n\npub fn channel<T>(capacity: usize) -> (Sender<T>, Receiver<T>) {\n    let (sender, receiver) = crossbeam_channel::bounded(capacity);\n    let shared = Arc::new(Shared::default());\n\n    (\n        Sender {\n            channel: sender,\n            shared: shared.clone(),\n            slot: None,\n        },\n        Receiver {\n            channel: receiver,\n            shared,\n        },\n    )\n}\n"
  },
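The async `Sender`/`Receiver` above wrap a bounded channel and surface backpressure through `TrySendError::Full`, which hands the rejected value back so it can be parked in `slot` and retried after a wakeup. This std-only sketch shows the same non-blocking semantics using `std::sync::mpsc::sync_channel`; the crate itself uses `crossbeam_channel` plus a pair of `AtomicWaker`s so tasks can sleep instead of spinning.

```rust
use std::sync::mpsc::{sync_channel, TryRecvError, TrySendError};

fn main() {
    // Bounded channel with capacity 1, analogous to a small spsc buffer.
    let (tx, rx) = sync_channel::<u32>(1);

    assert!(tx.try_send(1).is_ok());
    // Channel full: the value comes back inside the error, which is what lets
    // a wrapper stash it and retry once the consumer makes room.
    assert!(matches!(tx.try_send(2), Err(TrySendError::Full(2))));

    assert_eq!(rx.try_recv(), Ok(1));
    assert_eq!(rx.try_recv(), Err(TryRecvError::Empty));

    // Dropping the receiver turns further sends into `Disconnected`, mirroring
    // the `Disconnected` error the async `Sink` implementation reports.
    drop(rx);
    assert!(matches!(tx.try_send(3), Err(TrySendError::Disconnected(3))));
}
```

The "check, register waker, check again" double-probe in `poll_next`/`poll_ready` above exists to close the race where the other side acts between the first failed probe and the waker registration.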
  {
    "path": "src/unreliable_bincode_channel.rs",
    "content": "use std::{\n    marker::PhantomData,\n    task::{Context, Poll},\n};\n\nuse bincode::Options as _;\nuse futures::{future, ready, task};\nuse serde::{Deserialize, Serialize};\nuse thiserror::Error;\n\nuse crate::{\n    packet::PacketPool,\n    runtime::Timer,\n    unreliable_channel::{self, UnreliableChannel},\n};\n\n#[derive(Debug, Error)]\npub enum SendError {\n    #[error(\"unreliable channel error: {0}\")]\n    UnreliableChannelError(#[from] unreliable_channel::SendError),\n    /// Non-fatal error, message is unsent.\n    #[error(\"bincode serialization error: {0}\")]\n    BincodeError(#[from] bincode::Error),\n}\n\n#[derive(Debug, Error)]\npub enum RecvError {\n    #[error(\"unreliable channel error: {0}\")]\n    UnreliableChannelError(#[from] unreliable_channel::RecvError),\n    /// Non-fatal error, message is skipped.\n    #[error(\"bincode serialization error: {0}\")]\n    BincodeError(#[from] bincode::Error),\n}\n\n/// Wraps an `UnreliableChannel` together with an internal buffer to allow easily sending message\n/// types serialized with `bincode`.\n///\n/// Just like the underlying channel, messages are not guaranteed to arrive, nor are they guaranteed\n/// to arrive in order.\npub struct UnreliableBincodeChannel<T, P>\nwhere\n    T: Timer,\n    P: PacketPool,\n{\n    channel: UnreliableChannel<T, P>,\n    pending_write: Vec<u8>,\n}\n\nimpl<T, P> From<UnreliableChannel<T, P>> for UnreliableBincodeChannel<T, P>\nwhere\n    T: Timer,\n    P: PacketPool,\n{\n    fn from(channel: UnreliableChannel<T, P>) -> Self {\n        Self::new(channel)\n    }\n}\n\nimpl<T, P> UnreliableBincodeChannel<T, P>\nwhere\n    T: Timer,\n    P: PacketPool,\n{\n    pub fn new(channel: UnreliableChannel<T, P>) -> Self {\n        UnreliableBincodeChannel {\n            channel,\n            pending_write: Vec::new(),\n        }\n    }\n\n    pub fn into_inner(self) -> UnreliableChannel<T, P> {\n        self.channel\n    }\n\n    /// Maximum allowed message length 
based on the packet capacity of the provided `PacketPool`.\n    ///\n    /// Will never be greater than `MAX_PACKET_LEN - 2`.\n    pub fn max_message_len(&self) -> u16 {\n        self.channel.max_message_len()\n    }\n\n    /// Write the given serializable message type to the channel.\n    ///\n    /// Messages are coalesced into larger packets before being sent, so in order to guarantee that\n    /// the message is actually sent, you must call `flush`.\n    ///\n    /// This method is cancel safe, it will never partially send a message, and completes\n    /// immediately upon successfully queuing a message to send.\n    pub async fn send<M: Serialize>(&mut self, msg: &M) -> Result<(), SendError> {\n        future::poll_fn(|cx| self.poll_send_ready(cx)).await?;\n        self.start_send(msg)?;\n        Ok(())\n    }\n\n    pub fn try_send<M: Serialize>(&mut self, msg: &M) -> Result<bool, SendError> {\n        if self.try_send_ready()? {\n            self.start_send(msg)?;\n            Ok(true)\n        } else {\n            Ok(false)\n        }\n    }\n\n    /// Finish sending any unsent coalesced packets.\n    ///\n    /// This *must* be called to guarantee that any sent messages are actually sent to the outgoing\n    /// packet stream.\n    ///\n    /// This method is cancel safe.\n    pub async fn flush(&mut self) -> Result<(), unreliable_channel::SendError> {\n        future::poll_fn(|cx| self.poll_flush(cx)).await\n    }\n\n    pub fn try_flush(&mut self) -> Result<bool, unreliable_channel::SendError> {\n        match self.poll_flush(&mut Context::from_waker(task::noop_waker_ref())) {\n            Poll::Pending => Ok(false),\n            Poll::Ready(Ok(())) => Ok(true),\n            Poll::Ready(Err(err)) => Err(err),\n        }\n    }\n\n    /// Receive a deserializable message type as soon as the next message is available.\n    ///\n    /// This method is cancel safe, it will never partially read a message or drop received\n    /// messages.\n    pub async fn 
recv<'a, M: Deserialize<'a>>(&'a mut self) -> Result<M, RecvError> {\n        let bincode_config = self.bincode_config();\n        let msg = self.channel.recv().await?;\n        Ok(bincode_config.deserialize(msg)?)\n    }\n\n    pub fn try_recv<'a, M: Deserialize<'a>>(&'a mut self) -> Result<Option<M>, RecvError> {\n        match self.poll_recv::<M>(&mut Context::from_waker(task::noop_waker_ref())) {\n            Poll::Pending => Ok(None),\n            Poll::Ready(Ok(val)) => Ok(Some(val)),\n            Poll::Ready(Err(err)) => Err(err),\n        }\n    }\n\n    pub fn poll_send_ready(\n        &mut self,\n        cx: &mut Context,\n    ) -> Poll<Result<(), unreliable_channel::SendError>> {\n        if !self.pending_write.is_empty() {\n            ready!(self.channel.poll_send(cx, &self.pending_write))?;\n            self.pending_write.clear();\n        }\n        Poll::Ready(Ok(()))\n    }\n\n    pub fn try_send_ready(&mut self) -> Result<bool, unreliable_channel::SendError> {\n        match self.poll_send_ready(&mut Context::from_waker(task::noop_waker_ref())) {\n            Poll::Pending => Ok(false),\n            Poll::Ready(Ok(())) => Ok(true),\n            Poll::Ready(Err(err)) => Err(err),\n        }\n    }\n\n    pub fn start_send<M: Serialize>(&mut self, msg: &M) -> Result<(), bincode::Error> {\n        assert!(self.pending_write.is_empty());\n\n        let bincode_config = self.bincode_config();\n        bincode_config.serialize_into(&mut self.pending_write, msg)?;\n\n        Ok(())\n    }\n\n    pub fn poll_flush(\n        &mut self,\n        cx: &mut Context,\n    ) -> Poll<Result<(), unreliable_channel::SendError>> {\n        ready!(self.poll_send_ready(cx))?;\n        ready!(self.channel.poll_flush(cx))?;\n        Poll::Ready(Ok(()))\n    }\n\n    pub fn poll_recv<'a, M: Deserialize<'a>>(\n        &'a mut self,\n        cx: &mut Context,\n    ) -> Poll<Result<M, RecvError>> {\n        let bincode_config = self.bincode_config();\n        let msg = 
ready!(self.channel.poll_recv(cx))?;\n        Poll::Ready(Ok(bincode_config.deserialize::<M>(msg)?))\n    }\n\n    fn bincode_config(&self) -> impl bincode::Options + Copy {\n        bincode::options().with_limit(self.max_message_len() as u64)\n    }\n}\n\n/// Wrapper over an `UnreliableBincodeChannel` that only allows a single message type.\npub struct UnreliableTypedChannel<T, P, M>\nwhere\n    T: Timer,\n    P: PacketPool,\n{\n    channel: UnreliableBincodeChannel<T, P>,\n    _phantom: PhantomData<M>,\n}\n\nimpl<T, P, M> From<UnreliableChannel<T, P>> for UnreliableTypedChannel<T, P, M>\nwhere\n    T: Timer,\n    P: PacketPool,\n{\n    fn from(channel: UnreliableChannel<T, P>) -> Self {\n        Self::new(channel)\n    }\n}\n\nimpl<T, P, M> UnreliableTypedChannel<T, P, M>\nwhere\n    T: Timer,\n    P: PacketPool,\n{\n    pub fn new(channel: UnreliableChannel<T, P>) -> Self {\n        Self {\n            channel: UnreliableBincodeChannel::new(channel),\n            _phantom: PhantomData,\n        }\n    }\n\n    pub fn into_inner(self) -> UnreliableChannel<T, P> {\n        self.channel.into_inner()\n    }\n\n    pub async fn flush(&mut self) -> Result<(), unreliable_channel::SendError> {\n        self.channel.flush().await\n    }\n\n    pub fn try_flush(&mut self) -> Result<bool, unreliable_channel::SendError> {\n        self.channel.try_flush()\n    }\n\n    pub fn poll_flush(\n        &mut self,\n        cx: &mut Context,\n    ) -> Poll<Result<(), unreliable_channel::SendError>> {\n        self.channel.poll_flush(cx)\n    }\n\n    pub fn poll_send_ready(\n        &mut self,\n        cx: &mut Context,\n    ) -> Poll<Result<(), unreliable_channel::SendError>> {\n        self.channel.poll_send_ready(cx)\n    }\n\n    pub fn try_send_ready(&mut self) -> Result<bool, unreliable_channel::SendError> {\n        self.channel.try_send_ready()\n    }\n}\n\nimpl<T, P, M> UnreliableTypedChannel<T, P, M>\nwhere\n    T: Timer,\n    P: PacketPool,\n    M: Serialize,\n{\n    pub 
async fn send(&mut self, msg: &M) -> Result<(), SendError> {\n        self.channel.send(msg).await\n    }\n\n    pub fn try_send(&mut self, msg: &M) -> Result<bool, SendError> {\n        self.channel.try_send(msg)\n    }\n\n    pub fn start_send(&mut self, msg: &M) -> Result<(), bincode::Error> {\n        self.channel.start_send(msg)\n    }\n}\n\nimpl<'a, T, P, M> UnreliableTypedChannel<T, P, M>\nwhere\n    T: Timer,\n    P: PacketPool,\n    M: Deserialize<'a>,\n{\n    pub async fn recv(&'a mut self) -> Result<M, RecvError> {\n        self.channel.recv::<M>().await\n    }\n\n    pub fn try_recv(&'a mut self) -> Result<Option<M>, RecvError> {\n        self.channel.try_recv::<M>()\n    }\n\n    pub fn poll_recv(&'a mut self, cx: &mut Context) -> Poll<Result<M, RecvError>> {\n        self.channel.poll_recv::<M>(cx)\n    }\n}\n"
  },
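The bincode channel above caps message size by configuring bincode with `.with_limit(max_message_len())`, so oversized values are rejected at (de)serialization time rather than failing later in the packet layer. This std-only sketch mimics that idea with a hand-rolled encoding (a `u16` id plus a byte payload) so the limit check is visible directly; it is an illustration, not the crate's wire format, and both helper names are invented.

```rust
// Encode an id + payload, refusing anything over `limit` bytes total, in the
// spirit of bincode's `with_limit` guard.
fn encode(id: u16, payload: &[u8], limit: usize) -> Result<Vec<u8>, &'static str> {
    let needed = 2 + payload.len();
    if needed > limit {
        return Err("message exceeds size limit");
    }
    let mut out = Vec::with_capacity(needed);
    out.extend_from_slice(&id.to_le_bytes());
    out.extend_from_slice(payload);
    Ok(out)
}

// Decode the same format back into its parts.
fn decode(buf: &[u8]) -> Result<(u16, &[u8]), &'static str> {
    if buf.len() < 2 {
        return Err("truncated message");
    }
    let id = u16::from_le_bytes([buf[0], buf[1]]);
    Ok((id, &buf[2..]))
}

fn main() {
    let encoded = encode(7, b"payload", 32).unwrap();
    let (id, payload) = decode(&encoded).unwrap();
    assert_eq!(id, 7);
    assert_eq!(payload, b"payload");
    // Over the limit: rejected before anything reaches the channel.
    assert!(encode(7, &[0u8; 64], 32).is_err());
}
```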
  {
    "path": "src/unreliable_channel.rs",
    "content": "use std::{\n    convert::TryInto,\n    future::Future,\n    mem,\n    pin::Pin,\n    task::{Context, Poll},\n};\n\nuse byteorder::{ByteOrder, LittleEndian};\nuse futures::{future, ready, task, SinkExt, StreamExt};\nuse thiserror::Error;\n\nuse crate::{\n    bandwidth_limiter::BandwidthLimiter,\n    packet::{Packet, PacketPool, MAX_PACKET_LEN},\n    runtime::Timer,\n    spsc,\n};\n\n#[derive(Debug, Error)]\n/// Fatal error due to channel disconnection.\n#[error(\"incoming or outgoing packet channel has been disconnected\")]\npub struct Disconnected;\n\n#[derive(Debug, Error)]\npub enum SendError {\n    #[error(transparent)]\n    Disconnected(#[from] Disconnected),\n    /// Non-fatal error, message is unsent.\n    #[error(\"message is larger than fits in the maximum packet size\")]\n    TooBig,\n}\n\n#[derive(Debug, Error)]\npub enum RecvError {\n    #[error(transparent)]\n    Disconnected(#[from] Disconnected),\n    /// Non-fatal error, the remainder of the incoming packet is dropped.\n    #[error(\"incoming packet has bad message format\")]\n    BadFormat,\n}\n\n#[derive(Debug, Clone, PartialEq)]\npub struct Settings {\n    /// The target outgoing bandwidth, in bytes / sec.\n    pub bandwidth: u32,\n    /// The maximum amount of bandwidth credit that can accumulate. 
This is the maximum bytes that\n    /// will be sent in a single burst.\n    pub burst_bandwidth: u32,\n}\n\n/// Turns a stream of unreliable, unordered packets into a stream of unreliable, unordered messages.\npub struct UnreliableChannel<T, P>\nwhere\n    T: Timer,\n    P: PacketPool,\n{\n    timer: T,\n    packet_pool: P,\n    bandwidth_limiter: BandwidthLimiter<T>,\n    sender: spsc::Sender<P::Packet>,\n    receiver: spsc::Receiver<P::Packet>,\n    out_packet: P::Packet,\n    in_packet: Option<(P::Packet, usize)>,\n    delay_until_available: Pin<Box<Option<T::Sleep>>>,\n}\n\nimpl<T, P> UnreliableChannel<T, P>\nwhere\n    T: Timer,\n    P: PacketPool,\n{\n    pub fn new(\n        timer: T,\n        mut packet_pool: P,\n        settings: Settings,\n        sender: spsc::Sender<P::Packet>,\n        receiver: spsc::Receiver<P::Packet>,\n    ) -> Self {\n        let out_packet = packet_pool.acquire();\n        let bandwidth_limiter =\n            BandwidthLimiter::new(&timer, settings.bandwidth, settings.burst_bandwidth);\n        UnreliableChannel {\n            timer,\n            packet_pool,\n            bandwidth_limiter,\n            receiver,\n            sender,\n            out_packet,\n            in_packet: None,\n            delay_until_available: Box::pin(None),\n        }\n    }\n\n    /// Maximum allowed message length based on the packet capacity of the provided `PacketPool`.\n    ///\n    /// Will never be greater than `MAX_PACKET_LEN - 2`.\n    pub fn max_message_len(&self) -> u16 {\n        self.packet_pool.capacity().min(MAX_PACKET_LEN as usize) as u16 - 2\n    }\n\n    /// Write the given message to the channel.\n    ///\n    /// Messages are coalesced into larger packets before being sent, so in order to guarantee that\n    /// the message is actually sent, you must call `flush`.\n    ///\n    /// Messages have a maximum size based on the size of the packets returned from the packet pool.\n    /// Two bytes are used to encode the length of the 
message, so the maximum message length is\n    /// `packet.len() - 2`, for whatever packet sizes are returned by the pool.\n    ///\n    /// This method is cancel safe, it will never partially send a message, and the future will\n    /// complete immediately after writing a message.\n    pub async fn send(&mut self, msg: &[u8]) -> Result<(), SendError> {\n        future::poll_fn(|cx| self.poll_send(cx, msg)).await\n    }\n\n    pub fn try_send(&mut self, msg: &[u8]) -> Result<bool, SendError> {\n        match self.poll_send(&mut Context::from_waker(task::noop_waker_ref()), msg) {\n            Poll::Pending => Ok(false),\n            Poll::Ready(Ok(())) => Ok(true),\n            Poll::Ready(Err(err)) => Err(err),\n        }\n    }\n\n    /// Finish sending any unsent coalesced packets.\n    ///\n    /// This *must* be called to guarantee that any sent messages are actually sent to the outgoing\n    /// packet stream.\n    ///\n    /// This method is cancel safe.\n    pub async fn flush(&mut self) -> Result<(), Disconnected> {\n        future::poll_fn(|cx| self.poll_flush(cx)).await\n    }\n\n    pub fn try_flush(&mut self) -> Result<bool, Disconnected> {\n        match self.poll_flush(&mut Context::from_waker(task::noop_waker_ref())) {\n            Poll::Pending => Ok(false),\n            Poll::Ready(Ok(())) => Ok(true),\n            Poll::Ready(Err(err)) => Err(err),\n        }\n    }\n\n    /// Receive the next message, returning a reference to the message data inside the currently\n    /// buffered incoming packet.\n    ///\n    /// This method is cancel safe, it will never partially read a message or drop received\n    /// messages.\n    pub async fn recv(&mut self) -> Result<&[u8], RecvError> {\n        future::poll_fn(|cx| self.poll_recv_ready(cx)).await?;\n        self.recv_next()\n    }\n\n    pub fn try_recv(&mut self) -> Result<Option<&[u8]>, RecvError> {\n        match 
self.poll_recv_ready(&mut Context::from_waker(task::noop_waker_ref())) {\n            Poll::Pending => Ok(None),\n            Poll::Ready(Ok(())) => Ok(Some(self.recv_next()?)),\n            Poll::Ready(Err(err)) => Err(err),\n        }\n    }\n\n    pub fn poll_send(&mut self, cx: &mut Context, msg: &[u8]) -> Poll<Result<(), SendError>> {\n        ready!(self.poll_send_ready(cx, msg.len()))?;\n        let mut send = self.start_send();\n        send.buffer()[0..msg.len()].copy_from_slice(msg);\n        send.finish(msg.len());\n        Poll::Ready(Ok(()))\n    }\n\n    /// Wait until we can send at least a `msg_len` length message via `start_send`.\n    ///\n    /// The available message length may be more than requested; if `msg_len` is zero, this will\n    /// return as soon as a message of any length can be sent.\n    pub fn poll_send_ready(\n        &mut self,\n        cx: &mut Context,\n        msg_len: usize,\n    ) -> Poll<Result<(), SendError>> {\n        let msg_len: u16 = msg_len.try_into().map_err(|_| SendError::TooBig)?;\n\n        let start = self.out_packet.len();\n        if self.packet_pool.capacity() - start < msg_len as usize + 2 {\n            ready!(self.poll_flush(cx))?;\n\n            if self.packet_pool.capacity() < msg_len as usize + 2 {\n                return Poll::Ready(Err(SendError::TooBig));\n            }\n        }\n\n        Poll::Ready(Ok(()))\n    }\n\n    /// Start sending a message up to the maximum remaining available message length.\n    ///\n    /// # Panics\n    /// May panic if called before `poll_send_ready` has returned `Ready` for some message length.\n    pub fn start_send(&mut self) -> StartSend<P::Packet> {\n        StartSend::new(&mut self.out_packet, self.packet_pool.capacity())\n    }\n\n    pub fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), Disconnected>> {\n        if self.out_packet.is_empty() {\n            return Poll::Ready(Ok(()));\n        }\n\n        if self.delay_until_available.is_none() 
{\n            self.bandwidth_limiter.update_available(&self.timer);\n            if let Some(delay) = self.bandwidth_limiter.delay_until_available(&self.timer) {\n                self.delay_until_available.set(Some(delay));\n            }\n        }\n\n        if let Some(delay) = self.delay_until_available.as_mut().as_pin_mut() {\n            ready!(delay.poll(cx));\n            self.delay_until_available.set(None);\n        }\n\n        ready!(self.sender.poll_ready_unpin(cx)).map_err(|_| Disconnected)?;\n\n        let out_packet = mem::replace(&mut self.out_packet, self.packet_pool.acquire());\n        self.bandwidth_limiter.take_bytes(out_packet.len() as u32);\n        self.sender\n            .start_send_unpin(out_packet)\n            .map_err(|_| Disconnected)?;\n\n        self.sender.poll_flush_unpin(cx).map_err(|_| Disconnected)\n    }\n\n    pub fn poll_recv(&mut self, cx: &mut Context) -> Poll<Result<&[u8], RecvError>> {\n        ready!(self.poll_recv_ready(cx))?;\n        Poll::Ready(self.recv_next())\n    }\n\n    fn poll_recv_ready(&mut self, cx: &mut Context) -> Poll<Result<(), RecvError>> {\n        if let Some((packet, in_pos)) = &self.in_packet {\n            if *in_pos == packet.len() {\n                self.in_packet = None;\n            }\n        }\n\n        if self.in_packet.is_none() {\n            let packet = ready!(self.receiver.poll_next_unpin(cx)).ok_or(Disconnected)?;\n            self.in_packet = Some((packet, 0));\n        }\n\n        Poll::Ready(Ok(()))\n    }\n\n    fn recv_next(&mut self) -> Result<&[u8], RecvError> {\n        let (packet, in_pos) = self.in_packet.as_mut().unwrap();\n        assert_ne!(*in_pos, packet.len());\n\n        if *in_pos + 2 > packet.len() {\n            *in_pos = packet.len();\n            return Err(RecvError::BadFormat);\n        }\n        let length = LittleEndian::read_u16(&packet[*in_pos..*in_pos + 2]) as usize;\n        *in_pos += 2;\n\n        if *in_pos + length > packet.len() {\n            
*in_pos = packet.len();\n            return Err(RecvError::BadFormat);\n        }\n\n        let msg = &packet[*in_pos..*in_pos + length];\n        *in_pos += length;\n\n        Ok(msg)\n    }\n}\n\npub struct StartSend<'a, P> {\n    packet: &'a mut P,\n    start: usize,\n    capacity: usize,\n}\n\nimpl<'a, P: Packet> StartSend<'a, P> {\n    fn new(packet: &'a mut P, capacity: usize) -> Self {\n        assert!(\n            capacity >= packet.len() + 2,\n            \"not enough room to write size header\"\n        );\n        let start = packet.len();\n        packet.resize(capacity, 0);\n        Self {\n            packet,\n            start,\n            capacity,\n        }\n    }\n\n    /// Returns the buffer to write the outgoing message into.\n    pub fn buffer(&mut self) -> &mut [u8] {\n        &mut self.packet[self.start + 2..]\n    }\n\n    /// Finish writing a message that has been written into the provided buffer.\n    ///\n    /// # Panics\n    /// Panics if called with a message length larger than the size of the provided buffer.\n    pub fn finish(self, msg_len: usize) {\n        assert!(\n            msg_len <= self.capacity - self.start - 2,\n            \"cannot send packet greater than size of provided buffer\"\n        );\n        let msg_len: u16 = msg_len.try_into().unwrap();\n        LittleEndian::write_u16(&mut self.packet[self.start..self.start + 2], msg_len);\n        self.packet.truncate(self.start + msg_len as usize + 2);\n    }\n}\n"
  },
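The wire format used by `UnreliableChannel` is simple enough to sketch standalone: each message in a coalesced packet is preceded by a 2-byte little-endian length, exactly as `StartSend::finish` writes it and `recv_next` parses it. This self-contained illustration (helper names invented) shows several messages packed into one packet and read back out, including the `BadFormat`-style rejection of a truncated body.

```rust
// Append one length-prefixed message to `packet`, refusing if it would exceed
// `capacity`; a caller would flush the packet and retry in that case.
fn write_message(packet: &mut Vec<u8>, msg: &[u8], capacity: usize) -> bool {
    if packet.len() + 2 + msg.len() > capacity {
        return false;
    }
    let len = msg.len() as u16;
    packet.extend_from_slice(&len.to_le_bytes());
    packet.extend_from_slice(msg);
    true
}

// Parse all length-prefixed messages out of a packet, erroring on a truncated
// header or body, mirroring `recv_next`'s BadFormat handling.
fn read_messages(packet: &[u8]) -> Result<Vec<&[u8]>, &'static str> {
    let mut msgs = Vec::new();
    let mut pos = 0;
    while pos < packet.len() {
        if pos + 2 > packet.len() {
            return Err("bad format: truncated length header");
        }
        let len = u16::from_le_bytes([packet[pos], packet[pos + 1]]) as usize;
        pos += 2;
        if pos + len > packet.len() {
            return Err("bad format: truncated message body");
        }
        msgs.push(&packet[pos..pos + len]);
        pos += len;
    }
    Ok(msgs)
}

fn main() {
    let mut packet = Vec::new();
    assert!(write_message(&mut packet, b"hello", 32));
    assert!(write_message(&mut packet, b"hi", 32));
    let msgs = read_messages(&packet).unwrap();
    assert_eq!(msgs, vec![b"hello".as_slice(), b"hi".as_slice()]);
    // Length header claims 3 bytes but only 1 follows.
    assert!(read_messages(&[3, 0, 1]).is_err());
}
```

The 2-byte prefix is also why `max_message_len()` is `capacity - 2`: every message pays a fixed 2-byte framing cost inside its packet.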
  {
    "path": "src/windows.rs",
    "content": "use std::{cmp::Ordering, num::Wrapping, u32};\n\nuse crate::ring_buffer::{self, RingBuffer};\n\npub type StreamPos = Wrapping<u32>;\n\n/// Compare the given wrapping stream positions.\n///\n/// A value `a` is considered less than `b` if it is faster to get to `a` from `b` by going left\n/// than by going right, and `a` is considered greater than `b` if the opposite is true.\n///\n/// Cannot be used to implement `Ord` because this operation is not transitive.\n///\n/// In the case of a tie, where `a` != `b` but `a - b == b - a` (in other words, where both values\n/// are exactly opposite each other), there is no sensible wrapping order for `a` and `b`. In\n/// order to use `stream_cmp` sensibly, we must ensure that `StreamPos` values can never be more\n/// than `u32::MAX / 2` (or 2^31 - 1) apart.\npub fn stream_cmp(a: StreamPos, b: StreamPos) -> Option<Ordering> {\n    let ord = (b - a).cmp(&(a - b));\n    if ord == Ordering::Equal && a != b {\n        None\n    } else {\n        Some(ord)\n    }\n}\n\npub fn stream_lt(a: StreamPos, b: StreamPos) -> bool {\n    stream_cmp(a, b).map(Ordering::is_lt).unwrap_or(false)\n}\n\npub fn stream_le(a: StreamPos, b: StreamPos) -> bool {\n    stream_cmp(a, b).map(Ordering::is_le).unwrap_or(false)\n}\n\npub fn stream_gt(a: StreamPos, b: StreamPos) -> bool {\n    stream_cmp(a, b).map(Ordering::is_gt).unwrap_or(false)\n}\n\npub fn stream_ge(a: StreamPos, b: StreamPos) -> bool {\n    stream_cmp(a, b).map(Ordering::is_ge).unwrap_or(false)\n}\n\n#[derive(Debug, Eq, PartialEq)]\npub enum AckResult {\n    /// This range was not found or acked more than was sent.\n    NotFound,\n    /// This range was fully acked.\n    Ack,\n    /// This range was a partial ack of a previously sent range, and the range from the end of the\n    /// provided range to this stream position should be considered nacked.\n    PartialAck(StreamPos),\n}\n\npub struct SendWindowWriter {\n    writer: ring_buffer::Writer,\n}\n\nimpl SendWindowWriter {\n 
   /// Write the given data to the end of the send buffer, up to the available amount to be\n    /// written.\n    pub fn write(&mut self, data: &[u8]) -> u32 {\n        let len = self.writer.write(0, data);\n        self.writer.advance(len);\n        len as u32\n    }\n\n    /// The amount of data available to be written\n    pub fn write_available(&self) -> u32 {\n        self.writer.buffer().write_available() as u32\n    }\n}\n\n/// Coalesces and buffers outgoing stream data up to a configured window capacity and keeps it\n/// available to resend until it is acknowledged from the remote.\npub struct SendWindow {\n    reader: ring_buffer::Reader,\n    // The stream position of the first byte of the outgoing buffer after the \"sent\" bytes.\n    send_pos: StreamPos,\n    // The number of bytes at the beginning of the outgoing buffer that have already been sent, but\n    // are being kept in case they need to be retransmitted.\n    sent: u32,\n    // The set of sent but un-acked stream ranges. 
All of these ranges should be non-empty and non-\n    // overlapping, and the list should remain sorted in wrap-around stream ordering, and all of the\n    // ranges should fall within the \"sent\" portion of the buffer.\n    unacked_ranges: Vec<(StreamPos, StreamPos)>,\n}\n\nimpl SendWindow {\n    pub fn new(capacity: u32, stream_start: StreamPos) -> (SendWindow, SendWindowWriter) {\n        // Any more than this and the unacked list might not be totally ordered.\n        assert!(capacity <= u32::MAX / 2);\n\n        let (writer, reader) = RingBuffer::new(capacity as usize);\n        (\n            SendWindow {\n                reader,\n                send_pos: stream_start,\n                sent: 0,\n                unacked_ranges: Vec::new(),\n            },\n            SendWindowWriter { writer },\n        )\n    }\n\n    /// The amount of data available to be written\n    pub fn write_available(&self) -> u32 {\n        self.reader.buffer().write_available() as u32\n    }\n\n    /// The stream position of the next byte of data that would be sent with a call to\n    /// `SendWindow::send`.\n    pub fn send_pos(&self) -> StreamPos {\n        self.send_pos\n    }\n\n    pub fn send_available(&self) -> u32 {\n        self.reader.available() as u32 - self.sent\n    }\n\n    /// Send any pending written data up to the size of the provided buffer, and add this sent range\n    /// as an unacked range.\n    ///\n    /// Returns the stream range of the sent data. Not all of the provided buffer is necessarily\n    /// written, only the data from the start of the buffer to the length of the returned stream\n    /// range is actually written. 
This will never return a zero-sized range; if no\n    /// data is available to be sent or the provided buffer is empty, `None` is returned.\n    pub fn send(&mut self, data: &mut [u8]) -> Option<(StreamPos, StreamPos)> {\n        let send_amt = (self.reader.available() - self.sent as usize).min(data.len()) as u32;\n        if send_amt == 0 {\n            None\n        } else {\n            assert_eq!(\n                self.reader\n                    .read(self.sent as usize, &mut data[0..send_amt as usize]),\n                send_amt as usize,\n            );\n            let start = self.send_pos;\n            let end = start + Wrapping(send_amt);\n\n            self.sent += send_amt;\n            self.send_pos = end;\n            self.unacked_ranges.push((start, end));\n\n            Some((start, end))\n        }\n    }\n\n    /// Returns the stream position after the last contiguously acked sent data. The stream data\n    /// from `unacked_start` to `send_pos` is sent but not yet fully acked, and is retained in the\n    /// send buffer.\n    pub fn unacked_start(&self) -> StreamPos {\n        self.send_pos - Wrapping(self.sent)\n    }\n\n    /// Fetches a portion of the unacked region of the send buffer. Range must be within\n    /// [unacked_start, send_pos].\n    pub fn get_unacked(&self, start: StreamPos, data: &mut [u8]) {\n        let unacked_start = self.unacked_start();\n        let buf_start = (start - unacked_start).0 as usize;\n        assert_eq!(self.reader.read(buf_start, data), data.len());\n    }\n\n    /// Acknowledge the receipt of the given stream range from the remote, and thus potentially free\n    /// up send buffer space.\n    ///\n    /// Acknowledged ranges are allowed to be equal to or shorter than the sent ranges, but they\n    /// *must* start with the same stream position. 
Acked ranges will be ignored if they are empty\n    /// or do not start with the same position as a previously sent, unacked range.\n    pub fn ack_range(&mut self, start: StreamPos, end: StreamPos) -> AckResult {\n        if self.unacked_ranges.is_empty() {\n            return AckResult::NotFound;\n        }\n\n        if !stream_lt(start, end) {\n            return AckResult::NotFound;\n        }\n\n        if !stream_ge(start, self.unacked_ranges.first().unwrap().0)\n            || !stream_le(end, self.unacked_ranges.last().unwrap().1)\n        {\n            return AckResult::NotFound;\n        }\n\n        match self\n            .unacked_ranges\n            .binary_search_by(|(range_start, _)| stream_cmp(*range_start, start).unwrap())\n        {\n            Ok(i) => {\n                if stream_gt(end, self.unacked_ranges[i].1) {\n                    AckResult::NotFound\n                } else {\n                    let unacked_start = self.unacked_start();\n                    if end == self.unacked_ranges[i].1 {\n                        self.unacked_ranges.remove(i);\n\n                        if start == unacked_start {\n                            assert_eq!(i, 0);\n                            if self.unacked_ranges.is_empty() {\n                                self.reader.advance(self.sent as usize);\n                                self.sent = 0;\n                            } else {\n                                let acked_amt = (self.unacked_ranges[0].0 - start).0;\n                                self.reader.advance(acked_amt as usize);\n                                self.sent -= acked_amt;\n                            }\n                        }\n                        AckResult::Ack\n                    } else {\n                        if start == unacked_start {\n                            assert_eq!(i, 0);\n                            let acked_amt = (end - start).0;\n                            self.reader.advance(acked_amt as usize);\n 
                           self.sent -= acked_amt;\n                        }\n\n                        self.unacked_ranges[i].0 = end;\n                        AckResult::PartialAck(self.unacked_ranges[i].1)\n                    }\n                }\n            }\n            Err(_) => AckResult::NotFound,\n        }\n    }\n}\n\npub struct RecvWindowReader {\n    reader: ring_buffer::Reader,\n}\n\nimpl RecvWindowReader {\n    /// Read any ready data off of the beginning of the read buffer and return the number of bytes\n    /// read.\n    pub fn read(&mut self, data: &mut [u8]) -> u32 {\n        let len = self.reader.read(0, data);\n        self.reader.advance(len);\n        len as u32\n    }\n}\n\n/// Receives stream data up to a configured window capacity, in any order, and combines it into an\n/// ordered stream.\npub struct RecvWindow {\n    writer: ring_buffer::Writer,\n    // The current stream position of the first byte of the incoming buffer after the \"ready\" bytes.\n    recv_pos: StreamPos,\n    // An ordered list (in wrap-around stream positions) of non-contiguous received regions of data\n    // in the buffer that do not connect with the \"ready\" data. This is used to receive out-of-order\n    // data and allow it to be recombined into an in-order stream.\n    //\n    // The invariants here are:\n    // 1) The list must contain non-overlapping, non-\"touching\" regions. 
In other words, the end of\n    //    unready region i cannot be equal to or greater than the start of unready region i + 1.\n    // 2) The list must contain no empty regions; the end of any unready region must be strictly\n    //    greater than the beginning.\n    // 3) The list must not contain regions spanning such a large distance that the wrap-around\n    //    ordering of the regions is no longer total.\n    unready: Vec<(StreamPos, StreamPos)>,\n}\n\nimpl RecvWindow {\n    pub fn new(capacity: u32, stream_start: StreamPos) -> (RecvWindow, RecvWindowReader) {\n        // Any more than this and the unready list might not be totally ordered.\n        assert!(capacity <= u32::MAX / 2);\n\n        let (writer, reader) = RingBuffer::new(capacity as usize);\n        (\n            RecvWindow {\n                writer,\n                recv_pos: stream_start,\n                unready: Vec::new(),\n            },\n            RecvWindowReader { reader },\n        )\n    }\n\n    /// The amount of contiguous data available to be read\n    pub fn read_available(&self) -> u32 {\n        self.writer.buffer().read_available() as u32\n    }\n\n    /// The stream position where no more data could be received. This window will move forward as\n    /// data is read.\n    pub fn window_end(&self) -> StreamPos {\n        self.recv_pos + Wrapping(self.writer.available() as u32)\n    }\n\n    /// Receive a new block of data and return the upper bound of the stream range that was\n    /// successfully stored.\n    ///\n    /// If redundant data is received, all redundant data will be returned as successfully stored,\n    /// even data that has already been read out. 
It will *not* be checked for consistency with\n    /// existing data; it will simply be ignored and assumed to be identical.\n    ///\n    /// The returned upper bound will never be beyond the current window end; any data that falls\n    /// beyond the receive window cannot be stored.\n    ///\n    /// The range formed by the start position and the returned upper bound will never be empty;\n    /// this method will either return a non-empty range of successfully received data or return\n    /// None. The range formed by the start position and the returned upper bound will also never be\n    /// larger than the provided data; it will be equal in size or smaller.\n    ///\n    /// Received data may not be made immediately available for reading if it is not contiguous with\n    /// the existing ready data.\n    pub fn recv(&mut self, start_pos: StreamPos, data: &[u8]) -> Option<StreamPos> {\n        assert!(data.len() <= u32::MAX as usize / 2);\n\n        // `recv_end_pos` is the stream position at the end of the maximum capacity of the receive\n        // buffer.\n        let recv_end_pos = self.recv_pos + Wrapping(self.writer.available() as u32);\n\n        // `end_pos` is the stream position at the end of the input data\n        let end_pos = start_pos + Wrapping(data.len() as u32);\n\n        // If stream positions were strictly ordered this would not be necessary, but this check\n        // combined with the assertions that `data.len() <= u32::MAX / 2` and `self.capacity <=\n        // u32::MAX / 2` should prevent wrapping issues.\n        if !stream_lt(start_pos, recv_end_pos) {\n            return None;\n        }\n\n        // `copy_start_pos` is the stream position at either the given `start_pos`, or the current\n        // receive position, whichever is greater. 
We do not copy data that has already been\n        // received, so this is where we will begin copying.\n        let copy_start_pos = if stream_gt(self.recv_pos, start_pos) {\n            self.recv_pos\n        } else {\n            start_pos\n        };\n\n        // We calculate the `end_pos` as being either the previous `end_pos` or the stream position\n        // at the maximum capacity of the receive buffer. We should not read more data than the\n        // requested buffer capacity can hold.\n        let end_pos = if stream_lt(end_pos, recv_end_pos) {\n            end_pos\n        } else {\n            recv_end_pos\n        };\n\n        // If we are not copying any new data (the range from `copy_start_pos` to `end_pos` is\n        // empty), then we are done.\n        if stream_ge(copy_start_pos, end_pos) {\n            // We should only return an end position if there is actually acknowledged data (it\n            // doesn't matter if the data has already been read and we skip copying it).\n            if stream_lt(start_pos, end_pos) {\n                return Some(end_pos);\n            } else {\n                return None;\n            }\n        }\n\n        // The index in the source buffer where we start copying from\n        let data_start = (copy_start_pos - start_pos).0 as usize;\n        // The index in the receive buffer where we start copying to\n        let buf_start = (copy_start_pos - self.recv_pos).0 as usize;\n        // The index in the receive buffer where we stop copying\n        let buf_end = (end_pos - self.recv_pos).0 as usize;\n\n        assert_eq!(\n            self.writer.write(\n                buf_start,\n                &data[data_start..data_start + buf_end - buf_start],\n            ),\n            buf_end - buf_start\n        );\n\n        // Very, very carefully, combine this newly received region with the existing unready\n        // regions and maintain all the invariants of the unready list.\n\n        if stream_ge(self.recv_pos, start_pos) {\n            // If this received region touches the end of the ready block, we need to combine this\n            // region with the ready block, and any unready regions that it overlaps with also need\n            // to be combined into the ready block.\n\n            let pos = match self\n                .unready\n                .binary_search_by(|(_, end)| stream_cmp(*end, end_pos).unwrap())\n            {\n                Ok(i) => i,\n                Err(i) => i,\n            };\n\n            let end = if pos == self.unready.len() {\n                self.unready.clear();\n                end_pos\n            } else if stream_ge(end_pos, self.unready[pos].0) {\n                let end = self.unready[pos].1;\n                self.unready.drain(0..=pos);\n                end\n            } else {\n                end_pos\n            };\n\n            self.writer.advance((end - self.recv_pos).0 as usize);\n            self.recv_pos = end;\n        } else {\n            // If this received region does not touch the end of the ready block, we just need to\n            // combine this with the other unready regions to maintain the invariants. 
It must be\n            // combined with any overlapping unready regions or any unready regions that are exactly\n            // next to each other.\n\n            let insert_pos = match self\n                .unready\n                .binary_search_by(|(_, end)| stream_cmp(*end, start_pos).unwrap())\n            {\n                Ok(i) => i,\n                Err(i) => i,\n            };\n\n            if insert_pos == self.unready.len() {\n                self.unready.push((start_pos, end_pos));\n            } else {\n                for i in insert_pos..self.unready.len() {\n                    if stream_lt(end_pos, self.unready[i].0) {\n                        if i == insert_pos {\n                            self.unready.insert(insert_pos, (start_pos, end_pos));\n                        } else {\n                            self.unready.drain(insert_pos + 1..i);\n                            if stream_lt(start_pos, self.unready[insert_pos].0) {\n                                self.unready[insert_pos].0 = start_pos;\n                            }\n                            self.unready[insert_pos].1 = end_pos;\n                        }\n                        break;\n                    } else if stream_lt(end_pos, self.unready[i].1) || i == self.unready.len() - 1 {\n                        let start = self.unready[insert_pos].0;\n                        self.unready.drain(insert_pos..i);\n                        self.unready[insert_pos].0 = if stream_lt(start_pos, start) {\n                            start_pos\n                        } else {\n                            start\n                        };\n                        if stream_gt(end_pos, self.unready[insert_pos].1) {\n                            self.unready[insert_pos].1 = end_pos;\n                        }\n                        break;\n                    }\n                }\n            }\n        }\n\n        Some(end_pos)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use 
super::*;\n\n    use std::u32;\n\n    #[test]\n    fn test_send_window() {\n        let stream_start = Wrapping(u32::MAX - 11);\n        let write_data = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15];\n        let mut send_data = [0; 16];\n        let (mut send_window, mut send_window_writer) = SendWindow::new(7, stream_start);\n\n        assert_eq!(send_window_writer.writer.available(), 7);\n        assert_eq!(send_window.send_pos(), stream_start);\n\n        assert_eq!(send_window_writer.write(&write_data[0..4]), 4);\n        assert_eq!(send_window_writer.write(&write_data[4..6]), 2);\n        assert_eq!(send_window_writer.write(&write_data[6..10]), 1);\n\n        assert_eq!(send_window.send_pos(), stream_start);\n\n        assert_eq!(send_window.send_available(), 7);\n        assert_eq!(\n            send_window.send(&mut send_data[0..6]),\n            Some((stream_start, stream_start + Wrapping(6)))\n        );\n        for i in 0..6 {\n            assert_eq!(send_data[i], i as u8);\n        }\n        assert_eq!(send_window.send_pos(), stream_start + Wrapping(6));\n\n        assert_eq!(send_window_writer.writer.available(), 0);\n\n        assert_eq!(\n            send_window.ack_range(stream_start, stream_start + Wrapping(4)),\n            AckResult::PartialAck(stream_start + Wrapping(6))\n        );\n\n        assert_eq!(send_window_writer.writer.available(), 4);\n        assert_eq!(send_window_writer.write(&write_data[7..16]), 4);\n\n        assert_eq!(\n            send_window.ack_range(stream_start + Wrapping(4), stream_start + Wrapping(6)),\n            AckResult::Ack\n        );\n\n        assert_eq!(send_window_writer.writer.available(), 2);\n        assert_eq!(send_window_writer.write(&write_data[11..16]), 2);\n\n        assert_eq!(send_window.send_available(), 7);\n        assert_eq!(\n            send_window.send(&mut send_data[6..9]),\n            Some((stream_start + Wrapping(6), stream_start + Wrapping(9)))\n        );\n        for i in 
6..9 {\n            assert_eq!(send_data[i], i as u8);\n        }\n        assert_eq!(send_window.send_pos(), stream_start + Wrapping(9));\n\n        assert_eq!(send_window.send_available(), 4);\n        assert_eq!(\n            send_window.send(&mut send_data[9..11]),\n            Some((stream_start + Wrapping(9), stream_start + Wrapping(11)))\n        );\n        for i in 9..11 {\n            assert_eq!(send_data[i], i as u8);\n        }\n        assert_eq!(send_window.send_pos(), stream_start + Wrapping(11));\n\n        assert_eq!(send_window.send_available(), 2);\n        assert_eq!(\n            send_window.send(&mut send_data[11..16]),\n            Some((stream_start + Wrapping(11), stream_start + Wrapping(13)))\n        );\n        for i in 11..13 {\n            assert_eq!(send_data[i], i as u8);\n        }\n        assert_eq!(send_window.send_pos(), stream_start + Wrapping(13));\n\n        // Ack ranges that error should not affect anything\n        assert_eq!(\n            send_window.ack_range(stream_start + Wrapping(10), stream_start + Wrapping(11)),\n            AckResult::NotFound\n        );\n        assert_eq!(\n            send_window.ack_range(stream_start + Wrapping(11), stream_start + Wrapping(15)),\n            AckResult::NotFound\n        );\n\n        assert_eq!(\n            send_window.ack_range(stream_start + Wrapping(11), stream_start + Wrapping(12)),\n            AckResult::PartialAck(stream_start + Wrapping(13))\n        );\n        assert_eq!(\n            send_window.ack_range(stream_start + Wrapping(6), stream_start + Wrapping(9)),\n            AckResult::Ack\n        );\n\n        assert_eq!(send_window_writer.writer.available(), 3);\n        assert_eq!(send_window.send_pos(), stream_start + Wrapping(13));\n        assert_eq!(send_window_writer.write(&write_data[14..16]), 2);\n\n        assert_eq!(\n            send_window.ack_range(stream_start + Wrapping(12), stream_start + Wrapping(13)),\n            AckResult::Ack\n        );\n   
     assert_eq!(\n            send_window.ack_range(stream_start + Wrapping(9), stream_start + Wrapping(11)),\n            AckResult::Ack\n        );\n\n        assert_eq!(send_window_writer.writer.available(), 5);\n\n        assert_eq!(send_window.send_available(), 2);\n        assert_eq!(\n            send_window.send(&mut send_data[14..16]),\n            Some((stream_start + Wrapping(13), stream_start + Wrapping(15)))\n        );\n        for i in 14..16 {\n            assert_eq!(send_data[i], i as u8);\n        }\n\n        assert_eq!(\n            send_window.ack_range(stream_start + Wrapping(13), stream_start + Wrapping(14)),\n            AckResult::PartialAck(stream_start + Wrapping(15)),\n        );\n        assert_eq!(\n            send_window.ack_range(stream_start + Wrapping(14), stream_start + Wrapping(15)),\n            AckResult::Ack,\n        );\n\n        assert_eq!(send_window_writer.writer.available(), 7);\n    }\n\n    #[test]\n    fn test_recv_window() {\n        let stream_start = Wrapping(u32::MAX - 29);\n        let recv_data = [\n            0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23,\n            24, 25, 26, 27, 28, 29, 30, 31,\n        ];\n        let mut read_data = [0; 32];\n        let (mut recv_window, mut recv_window_reader) = RecvWindow::new(7, stream_start);\n\n        assert_eq!(recv_window.window_end(), stream_start + Wrapping(7));\n        assert_eq!(\n            recv_window.recv(stream_start + Wrapping(0), &recv_data[0..4]),\n            Some(stream_start + Wrapping(4))\n        );\n        assert_eq!(recv_window.window_end(), stream_start + Wrapping(7));\n        assert_eq!(\n            recv_window.recv(stream_start + Wrapping(2), &recv_data[2..6]),\n            Some(stream_start + Wrapping(6))\n        );\n        assert_eq!(recv_window.window_end(), stream_start + Wrapping(7));\n\n        assert_eq!(recv_window_reader.read(&mut read_data[0..3]), 3);\n        
assert_eq!(recv_window_reader.read(&mut read_data[3..5]), 2);\n        for i in 0..5 {\n            assert_eq!(read_data[i], i as u8);\n        }\n\n        assert_eq!(recv_window.window_end(), stream_start + Wrapping(12));\n        assert_eq!(\n            recv_window.recv(stream_start + Wrapping(4), &recv_data[4..10]),\n            Some(stream_start + Wrapping(10))\n        );\n        assert_eq!(\n            recv_window.recv(stream_start + Wrapping(9), &recv_data[9..15]),\n            Some(stream_start + Wrapping(12))\n        );\n        assert_eq!(recv_window.window_end(), stream_start + Wrapping(12));\n        assert_eq!(recv_window_reader.reader.available(), 7);\n\n        assert_eq!(recv_window_reader.read(&mut read_data[5..10]), 5);\n        for i in 5..10 {\n            assert_eq!(read_data[i], i as u8);\n        }\n\n        assert_eq!(recv_window.window_end(), stream_start + Wrapping(17));\n        assert_eq!(\n            recv_window.recv(stream_start + Wrapping(25), &recv_data[25..30]),\n            None\n        );\n        assert_eq!(\n            recv_window.recv(stream_start + Wrapping(15), &recv_data[15..25]),\n            Some(stream_start + Wrapping(17)),\n        );\n        assert_eq!(recv_window.window_end(), stream_start + Wrapping(17));\n\n        assert_eq!(recv_window_reader.read(&mut read_data[10..20]), 2);\n        for i in 10..12 {\n            assert_eq!(read_data[i], i as u8);\n        }\n\n        assert_eq!(recv_window.window_end(), stream_start + Wrapping(19));\n        assert_eq!(\n            recv_window.recv(stream_start + Wrapping(10), &recv_data[10..25]),\n            Some(stream_start + Wrapping(19))\n        );\n\n        // Redundant receives\n        assert_eq!(\n            recv_window.recv(stream_start + Wrapping(2), &recv_data[2..10]),\n            Some(stream_start + Wrapping(10)),\n        );\n        assert_eq!(\n            recv_window.recv(stream_start + Wrapping(14), &recv_data[14..21]),\n            
Some(stream_start + Wrapping(19)),\n        );\n        assert_eq!(\n            recv_window.recv(stream_start + Wrapping(18), &recv_data[18..21]),\n            Some(stream_start + Wrapping(19)),\n        );\n\n        // receives off of end\n        assert_eq!(\n            recv_window.recv(stream_start + Wrapping(19), &recv_data[21..25]),\n            None,\n        );\n        assert_eq!(\n            recv_window.recv(stream_start + Wrapping(20), &recv_data[22..25]),\n            None,\n        );\n        assert_eq!(\n            recv_window.recv(stream_start + Wrapping(19), &recv_data[21..21]),\n            None,\n        );\n\n        assert_eq!(recv_window_reader.read(&mut read_data[12..25]), 7);\n        for i in 12..19 {\n            assert_eq!(read_data[i], i as u8);\n        }\n\n        assert_eq!(recv_window.window_end(), stream_start + Wrapping(26));\n        assert_eq!(\n            recv_window.recv(stream_start + Wrapping(24), &recv_data[24..25]),\n            Some(stream_start + Wrapping(25))\n        );\n        assert_eq!(recv_window.window_end(), stream_start + Wrapping(26));\n        assert_eq!(\n            recv_window.recv(stream_start + Wrapping(19), &recv_data[19..24]),\n            Some(stream_start + Wrapping(24))\n        );\n\n        assert_eq!(recv_window_reader.read(&mut read_data[19..25]), 6);\n        for i in 19..25 {\n            assert_eq!(read_data[i], i as u8);\n        }\n\n        assert_eq!(recv_window.window_end(), stream_start + Wrapping(32));\n        assert_eq!(\n            recv_window.recv(stream_start + Wrapping(26), &recv_data[26..27]),\n            Some(stream_start + Wrapping(27))\n        );\n        assert_eq!(recv_window_reader.read(&mut read_data[25..32]), 0);\n\n        assert_eq!(recv_window.window_end(), stream_start + Wrapping(32));\n        assert_eq!(\n            recv_window.recv(stream_start + Wrapping(28), &recv_data[28..29]),\n            Some(stream_start + Wrapping(29))\n        );\n        
assert_eq!(recv_window_reader.read(&mut read_data[25..32]), 0);\n\n        assert_eq!(recv_window.window_end(), stream_start + Wrapping(32));\n        assert_eq!(\n            recv_window.recv(stream_start + Wrapping(30), &recv_data[30..31]),\n            Some(stream_start + Wrapping(31))\n        );\n        assert_eq!(recv_window_reader.read(&mut read_data[25..32]), 0);\n\n        assert_eq!(recv_window.window_end(), stream_start + Wrapping(32));\n        assert_eq!(\n            recv_window.recv(stream_start + Wrapping(29), &recv_data[29..30]),\n            Some(stream_start + Wrapping(30))\n        );\n        assert_eq!(recv_window_reader.read(&mut read_data[25..32]), 0);\n\n        assert_eq!(recv_window.window_end(), stream_start + Wrapping(32));\n        assert_eq!(\n            recv_window.recv(stream_start + Wrapping(28), &recv_data[28..29]),\n            Some(stream_start + Wrapping(29))\n        );\n        assert_eq!(recv_window_reader.read(&mut read_data[25..32]), 0);\n\n        assert_eq!(recv_window.window_end(), stream_start + Wrapping(32));\n        assert_eq!(\n            recv_window.recv(stream_start + Wrapping(27), &recv_data[27..28]),\n            Some(stream_start + Wrapping(28))\n        );\n        assert_eq!(recv_window_reader.read(&mut read_data[25..32]), 0);\n\n        assert_eq!(recv_window.window_end(), stream_start + Wrapping(32));\n        assert_eq!(\n            recv_window.recv(stream_start + Wrapping(25), &recv_data[25..26]),\n            Some(stream_start + Wrapping(26))\n        );\n        assert_eq!(recv_window_reader.read(&mut read_data[25..31]), 6);\n        for i in 25..31 {\n            assert_eq!(read_data[i], i as u8);\n        }\n\n        assert_eq!(recv_window.window_end(), stream_start + Wrapping(38));\n    }\n}\n"
  },
  {
    "path": "tests/compressed_bincode_channel.rs",
    "content": "use std::time::Duration;\n\nuse futures::channel::oneshot;\nuse rand::{rngs::SmallRng, thread_rng, RngCore, SeedableRng};\n\nuse turbulence::{\n    buffer::BufferPacketPool,\n    compressed_bincode_channel::CompressedBincodeChannel,\n    reliable_channel::{ReliableChannel, Settings},\n    runtime::Spawn,\n    spsc,\n};\n\nmod util;\n\nuse self::util::{condition_link, LinkCondition, SimpleBufferPool, SimpleRuntime};\n\n#[test]\nfn test_compressed_bincode_channel() {\n    const SETTINGS: Settings = Settings {\n        bandwidth: 2048,\n        recv_window_size: 512,\n        send_window_size: 512,\n        burst_bandwidth: 512,\n        init_send: 256,\n        resend_time: Duration::from_millis(50),\n        initial_rtt: Duration::from_millis(100),\n        max_rtt: Duration::from_millis(2000),\n        rtt_update_factor: 0.1,\n        rtt_resend_factor: 1.5,\n    };\n\n    const CONDITION: LinkCondition = LinkCondition {\n        loss: 0.2,\n        duplicate: 0.05,\n        delay: Duration::from_millis(40),\n        jitter: Duration::from_millis(10),\n    };\n\n    let packet_pool = BufferPacketPool::new(SimpleBufferPool(1000));\n    let mut runtime = SimpleRuntime::new();\n\n    let (asend, acondrecv) = spsc::channel(2);\n    let (acondsend, arecv) = spsc::channel(2);\n    condition_link(\n        CONDITION,\n        runtime.handle(),\n        runtime.handle(),\n        packet_pool.clone(),\n        SmallRng::from_rng(thread_rng()).unwrap(),\n        acondsend,\n        acondrecv,\n    );\n\n    let (bsend, bcondrecv) = spsc::channel(2);\n    let (bcondsend, brecv) = spsc::channel(2);\n    condition_link(\n        CONDITION,\n        runtime.handle(),\n        runtime.handle(),\n        packet_pool.clone(),\n        SmallRng::from_rng(thread_rng()).unwrap(),\n        bcondsend,\n        bcondrecv,\n    );\n\n    let mut stream1 = CompressedBincodeChannel::new(ReliableChannel::new(\n        runtime.handle(),\n        runtime.handle(),\n        
packet_pool.clone(),\n        SETTINGS.clone(),\n        bsend,\n        arecv,\n    ));\n    let mut stream2 = CompressedBincodeChannel::new(ReliableChannel::new(\n        runtime.handle(),\n        runtime.handle(),\n        packet_pool.clone(),\n        SETTINGS.clone(),\n        asend,\n        brecv,\n    ));\n\n    let (a_done_send, mut a_done) = oneshot::channel();\n    runtime.spawn({\n        async move {\n            for i in 0..100 {\n                let send_val = vec![i as u8 + 13; i + 25];\n                stream1.send(&send_val).await.unwrap();\n            }\n            stream1.flush().await.unwrap();\n\n            for i in 0..100 {\n                let recv_val = stream1.recv::<Vec<u8>>().await.unwrap();\n                assert_eq!(recv_val.len(), i + 17);\n            }\n\n            let _ = a_done_send.send(stream1);\n        }\n    });\n\n    let (b_done_send, mut b_done) = oneshot::channel();\n    runtime.spawn({\n        async move {\n            for i in 0..100 {\n                let recv_val = stream2.recv::<Vec<u8>>().await.unwrap();\n                assert_eq!(recv_val, vec![i as u8 + 13; i + 25].as_slice());\n            }\n\n            for i in 0..100 {\n                let mut send_val = vec![0; i + 17];\n                rand::thread_rng().fill_bytes(&mut send_val);\n                stream2.send(&send_val).await.unwrap();\n            }\n            stream2.flush().await.unwrap();\n\n            let _ = b_done_send.send(stream2);\n        }\n    });\n\n    let mut a_done_stream = None;\n    let mut b_done_stream = None;\n    for _ in 0..100_000 {\n        a_done_stream = a_done_stream.or_else(|| a_done.try_recv().unwrap());\n        b_done_stream = b_done_stream.or_else(|| b_done.try_recv().unwrap());\n\n        if a_done_stream.is_some() && b_done_stream.is_some() {\n            return;\n        }\n\n        runtime.run_until_stalled();\n        runtime.advance_time(50);\n    }\n\n    panic!(\"didn't finish in time\");\n}\n"
  },
  {
    "path": "tests/message_channels.rs",
    "content": "use std::time::Duration;\n\nuse futures::{\n    channel::oneshot,\n    future::{self, Either},\n    SinkExt, StreamExt,\n};\nuse serde::{Deserialize, Serialize};\n\nuse turbulence::{\n    buffer::BufferPacketPool,\n    message_channels::{MessageChannelMode, MessageChannelSettings, MessageChannelsBuilder},\n    packet_multiplexer::PacketMultiplexer,\n    reliable_channel,\n    runtime::Spawn,\n    unreliable_channel,\n};\n\nmod util;\n\nuse self::util::{SimpleBufferPool, SimpleRuntime};\n\n// Define two message types, `Message1` and `Message2`\n\n// `Message1` is a reliable message on channel \"0\" that has a maximum bandwidth of 4KB/s\n\n#[derive(Serialize, Deserialize)]\nstruct Message1(i32);\n\nconst MESSAGE1_SETTINGS: MessageChannelSettings = MessageChannelSettings {\n    channel: 0,\n    channel_mode: MessageChannelMode::Reliable(reliable_channel::Settings {\n        bandwidth: 4096,\n        burst_bandwidth: 1024,\n        recv_window_size: 1024,\n        send_window_size: 1024,\n        init_send: 512,\n        resend_time: Duration::from_millis(100),\n        initial_rtt: Duration::from_millis(200),\n        max_rtt: Duration::from_secs(2),\n        rtt_update_factor: 0.1,\n        rtt_resend_factor: 1.5,\n    }),\n    message_buffer_size: 8,\n    packet_buffer_size: 8,\n};\n\n// `Message2` is an unreliable message type on channel \"1\"\n\n#[derive(Serialize, Deserialize)]\nstruct Message2(i32);\n\nconst MESSAGE2_SETTINGS: MessageChannelSettings = MessageChannelSettings {\n    channel: 1,\n    channel_mode: MessageChannelMode::Unreliable(unreliable_channel::Settings {\n        bandwidth: 4096,\n        burst_bandwidth: 1024,\n    }),\n    message_buffer_size: 8,\n    packet_buffer_size: 8,\n};\n\n#[test]\nfn test_message_channels() {\n    let mut runtime = SimpleRuntime::new();\n    let pool = BufferPacketPool::new(SimpleBufferPool(32));\n\n    // Set up two packet multiplexers, one for our sending \"A\" side and one for our receiving \"B\"\n 
   // side. They should both have exactly the same message types registered.\n\n    let mut multiplexer_a = PacketMultiplexer::new();\n    let mut builder_a = MessageChannelsBuilder::new(runtime.handle(), runtime.handle(), pool);\n    builder_a.register::<Message1>(MESSAGE1_SETTINGS).unwrap();\n    builder_a.register::<Message2>(MESSAGE2_SETTINGS).unwrap();\n    let mut channels_a = builder_a.build(&mut multiplexer_a);\n\n    let mut multiplexer_b = PacketMultiplexer::new();\n    let mut builder_b = MessageChannelsBuilder::new(runtime.handle(), runtime.handle(), pool);\n    builder_b.register::<Message1>(MESSAGE1_SETTINGS).unwrap();\n    builder_b.register::<Message2>(MESSAGE2_SETTINGS).unwrap();\n    let mut channels_b = builder_b.build(&mut multiplexer_b);\n\n    // Spawn a task that simulates a perfect network connection, and takes outgoing packets from\n    // each multiplexer and gives them to the other.\n    runtime.spawn(async move {\n        // We need to send packets bidirectionally from A -> B and B -> A, because reliable message\n        // channels must have a way to send acknowledgments.\n        let (mut a_incoming, mut a_outgoing) = multiplexer_a.start();\n        let (mut b_incoming, mut b_outgoing) = multiplexer_b.start();\n        loop {\n            // How best to send packets from the multiplexer to the internet and vice versa is\n            // somewhat complex. This is not a great example of how to do it.\n            //\n            // Calling `x_incoming.send(packet).await` here is using the `IncomingMultiplexedPackets`\n            // `Sink` implementation, which forwards to the incoming spsc channel for whatever\n            // channel this packet is for. `turbulence` *only* uses sync channels with static\n            // size, so it is expected that this buffer might be full. You might want to instead\n            // use `IncomingMultiplexedPackets::try_send` here and if the incoming buffer is full,\n            // simply drop the packet. 
A full buffer means some level of the pipeline cannot keep\n            // up, and dropping the packet rather than blocking on delivering here means that\n            // a backup on one channel will not potentially block other channels from receiving\n            // packets.\n            //\n            // On the outgoing side, since `turbulence` assumes an unreliable transport, it also\n            // assumes that the actual outgoing transport can send at more or less an arbitrary\n            // rate. For this reason, the different internal channel types *block* on sending\n            // outgoing packets. It is assumed that the outgoing packet buffer would only be full\n            // under very high, temporary CPU load on the host, and they block to let the task that\n            // actually sends packets catch up. This assumption works if the outgoing stream is only\n            // really CPU bound: that it is not harmful to block on outgoing packets because we're\n            // cooperating with a task that will send UDP packets as fast as it can anyway, so we\n            // won't be blocking for long (and it's better not to burn up even more CPU making more\n            // packets that might not be sent).\n            //\n            // So why the difference, why drop incoming packets but block on outgoing packets? Well,\n            // this again assumes that the task that sends packets is utterly simple, that it is a\n            // task that just calls `sendto` or equivalent as fast as it can. On the incoming side\n            // the pipeline is much longer, and will usually include the actual main game loop.\n            // \"Blocking\" in this case may simply mean only processing a maximum number of incoming\n            // messages per tick, or something along those lines. In that case, since \"blocking\" is\n            // not a function of purely CPU load, dropping incoming packets for fairness and latency\n            // may be reasonable. 
On the outgoing side, we're not assuming that we may have somehow\n            // accidentally *sent* too much data; we of course assume that we are following our\n            // *own* rules, so the only cause of a backup should be very high CPU load.\n            //\n            // Since this test unrealistically assumes perfect delivery over an unreliable channel,\n            // and since this is all hard to simulate in an example with no actual network involved,\n            // we just provide perfect instant delivery. None of the subtlety of doing this in a\n            // real project is captured in this simplistic example.\n            match future::select(a_outgoing.next(), b_outgoing.next()).await {\n                Either::Left((Some(packet), _)) => {\n                    b_incoming.send(packet).await.unwrap();\n                }\n                Either::Right((Some(packet), _)) => {\n                    a_incoming.send(packet).await.unwrap();\n                }\n                Either::Left((None, _)) | Either::Right((None, _)) => break,\n            }\n        }\n    });\n\n    let (is_done_send, mut is_done_recv) = oneshot::channel();\n    runtime.spawn(async move {\n        // Now send some traffic across...\n\n        // We're using the async `MessageChannels` API, but in a game you might use the sync API.\n        channels_a.async_send(Message1(42)).await.unwrap();\n        channels_a.flush::<Message1>();\n        assert_eq!(channels_b.async_recv::<Message1>().await.unwrap().0, 42);\n\n        // Since our underlying simulated network is perfect, our unreliable message will always\n        // arrive.\n        channels_a.async_send(Message2(13)).await.unwrap();\n        channels_a.flush::<Message2>();\n        assert_eq!(channels_b.async_recv::<Message2>().await.unwrap().0, 13);\n\n        // Each message channel is independent of the others, and they all have their own\n        // independent instances of message coalescing and reliability 
protocols.\n\n        channels_a.async_send(Message1(20)).await.unwrap();\n        channels_a.async_send(Message2(30)).await.unwrap();\n        channels_a.async_send(Message1(21)).await.unwrap();\n        channels_a.async_send(Message2(31)).await.unwrap();\n        channels_a.async_send(Message1(22)).await.unwrap();\n        channels_a.async_send(Message2(32)).await.unwrap();\n        channels_a.flush::<Message1>();\n        channels_a.flush::<Message2>();\n\n        assert_eq!(channels_b.async_recv::<Message1>().await.unwrap().0, 20);\n        assert_eq!(channels_b.async_recv::<Message1>().await.unwrap().0, 21);\n        assert_eq!(channels_b.async_recv::<Message1>().await.unwrap().0, 22);\n\n        assert_eq!(channels_b.async_recv::<Message2>().await.unwrap().0, 30);\n        assert_eq!(channels_b.async_recv::<Message2>().await.unwrap().0, 31);\n        assert_eq!(channels_b.async_recv::<Message2>().await.unwrap().0, 32);\n\n        is_done_send.send(()).unwrap();\n    });\n\n    for _ in 0..100_000 {\n        if is_done_recv.try_recv().unwrap().is_some() {\n            return;\n        }\n\n        runtime.run_until_stalled();\n        runtime.advance_time(50);\n    }\n\n    panic!(\"didn't finish in time\");\n}\n"
  },
  {
    "path": "tests/packet_multiplexer.rs",
    "content": "use futures::{\n    executor::LocalPool,\n    future::{self, Either},\n    task::SpawnExt,\n    SinkExt, StreamExt,\n};\n\nuse turbulence::{\n    buffer::BufferPacketPool,\n    packet::{Packet, PacketPool},\n    packet_multiplexer::{MuxPacketPool, PacketMultiplexer},\n};\n\nmod util;\n\nuse self::util::SimpleBufferPool;\n\n#[test]\nfn test_multiplexer() {\n    let mut pool = LocalPool::new();\n    let spawner = pool.spawner();\n    let mut packet_pool = MuxPacketPool::new(BufferPacketPool::new(SimpleBufferPool(32)));\n\n    let mut multiplexer_a = PacketMultiplexer::new();\n    let (mut sender4a, mut receiver4a, _) = multiplexer_a.open_channel(4, 8).unwrap();\n    let (mut sender32a, mut receiver32a, _) = multiplexer_a.open_channel(32, 8).unwrap();\n\n    let mut multiplexer_b = PacketMultiplexer::new();\n    let (mut sender4b, mut receiver4b, _) = multiplexer_b.open_channel(4, 8).unwrap();\n    let (mut sender32b, mut receiver32b, _) = multiplexer_b.open_channel(32, 8).unwrap();\n\n    spawner\n        .spawn(async move {\n            let (mut a_incoming, mut a_outgoing) = multiplexer_a.start();\n            let (mut b_incoming, mut b_outgoing) = multiplexer_b.start();\n            loop {\n                match future::select(a_outgoing.next(), b_outgoing.next()).await {\n                    Either::Left((Some(packet), _)) => {\n                        b_incoming.send(packet).await.unwrap();\n                    }\n                    Either::Right((Some(packet), _)) => {\n                        a_incoming.send(packet).await.unwrap();\n                    }\n                    Either::Left((None, _)) | Either::Right((None, _)) => break,\n                }\n            }\n        })\n        .unwrap();\n\n    spawner\n        .spawn(async move {\n            let mut packet = packet_pool.acquire();\n            packet.resize(1, 17);\n            sender4a.send(packet).await.unwrap();\n\n            let mut packet = packet_pool.acquire();\n           
 packet.resize(1, 18);\n            sender4b.send(packet).await.unwrap();\n\n            let mut packet = packet_pool.acquire();\n            packet.resize(1, 19);\n            sender32a.send(packet).await.unwrap();\n\n            let mut packet = packet_pool.acquire();\n            packet.resize(1, 20);\n            sender32b.send(packet).await.unwrap();\n\n            let packet = receiver4a.next().await.unwrap();\n            assert_eq!(packet[0], 18);\n\n            let packet = receiver4b.next().await.unwrap();\n            assert_eq!(packet[0], 17);\n\n            let packet = receiver32a.next().await.unwrap();\n            assert_eq!(packet[0], 20);\n\n            let packet = receiver32b.next().await.unwrap();\n            assert_eq!(packet[0], 19);\n        })\n        .unwrap();\n\n    pool.run();\n}\n"
  },
  {
    "path": "tests/reliable_bincode_channel.rs",
    "content": "use std::time::Duration;\n\nuse futures::channel::oneshot;\nuse rand::{rngs::SmallRng, thread_rng, SeedableRng};\n\nuse turbulence::{\n    buffer::BufferPacketPool,\n    reliable_bincode_channel::ReliableBincodeChannel,\n    reliable_channel::{ReliableChannel, Settings},\n    runtime::Spawn,\n    spsc,\n};\n\nmod util;\n\nuse self::util::{condition_link, LinkCondition, SimpleBufferPool, SimpleRuntime};\n\n#[test]\nfn test_reliable_bincode_channel() {\n    const SETTINGS: Settings = Settings {\n        bandwidth: 2048,\n        burst_bandwidth: 512,\n        recv_window_size: 512,\n        send_window_size: 512,\n        init_send: 256,\n        resend_time: Duration::from_millis(50),\n        initial_rtt: Duration::from_millis(100),\n        max_rtt: Duration::from_millis(2000),\n        rtt_update_factor: 0.1,\n        rtt_resend_factor: 1.5,\n    };\n\n    const CONDITION: LinkCondition = LinkCondition {\n        loss: 0.2,\n        duplicate: 0.05,\n        delay: Duration::from_millis(40),\n        jitter: Duration::from_millis(10),\n    };\n\n    let packet_pool = BufferPacketPool::new(SimpleBufferPool(1000));\n    let mut runtime = SimpleRuntime::new();\n\n    let (asend, acondrecv) = spsc::channel(2);\n    let (acondsend, arecv) = spsc::channel(2);\n    condition_link(\n        CONDITION,\n        runtime.handle(),\n        runtime.handle(),\n        packet_pool.clone(),\n        SmallRng::from_rng(thread_rng()).unwrap(),\n        acondsend,\n        acondrecv,\n    );\n\n    let (bsend, bcondrecv) = spsc::channel(2);\n    let (bcondsend, brecv) = spsc::channel(2);\n    condition_link(\n        CONDITION,\n        runtime.handle(),\n        runtime.handle(),\n        packet_pool.clone(),\n        SmallRng::from_rng(thread_rng()).unwrap(),\n        bcondsend,\n        bcondrecv,\n    );\n\n    let mut stream1 = ReliableBincodeChannel::new(ReliableChannel::new(\n        runtime.handle(),\n        runtime.handle(),\n        
packet_pool.clone(),\n        SETTINGS.clone(),\n        bsend,\n        arecv,\n    ));\n    let mut stream2 = ReliableBincodeChannel::new(ReliableChannel::new(\n        runtime.handle(),\n        runtime.handle(),\n        packet_pool.clone(),\n        SETTINGS.clone(),\n        asend,\n        brecv,\n    ));\n\n    let (a_done_send, mut a_done) = oneshot::channel();\n    runtime.spawn({\n        async move {\n            for i in 0..100 {\n                let send_val = vec![i as u8 + 42; i + 25];\n                stream1.send(&send_val).await.unwrap();\n            }\n            stream1.flush().await.unwrap();\n\n            for i in 0..100 {\n                let recv_val = stream1.recv::<&[u8]>().await.unwrap();\n                assert_eq!(recv_val, vec![i as u8 + 64; i + 50].as_slice());\n            }\n\n            let _ = a_done_send.send(stream1);\n        }\n    });\n\n    let (b_done_send, mut b_done) = oneshot::channel();\n    runtime.spawn({\n        async move {\n            for i in 0..100 {\n                let recv_val = stream2.recv::<&[u8]>().await.unwrap();\n                assert_eq!(recv_val, vec![i as u8 + 42; i + 25].as_slice());\n            }\n\n            for i in 0..100 {\n                let send_val = vec![i as u8 + 64; i + 50];\n                stream2.send(&send_val).await.unwrap();\n            }\n            stream2.flush().await.unwrap();\n\n            let _ = b_done_send.send(stream2);\n        }\n    });\n\n    let mut a_done_stream = None;\n    let mut b_done_stream = None;\n    for _ in 0..100_000 {\n        a_done_stream = a_done_stream.or_else(|| a_done.try_recv().unwrap());\n        b_done_stream = b_done_stream.or_else(|| b_done.try_recv().unwrap());\n\n        if a_done_stream.is_some() && b_done_stream.is_some() {\n            return;\n        }\n\n        runtime.run_until_stalled();\n        runtime.advance_time(50);\n    }\n\n    panic!(\"didn't finish in time\");\n}\n"
  },
  {
    "path": "tests/reliable_channel.rs",
    "content": "use std::time::Duration;\n\nuse futures::channel::oneshot;\nuse rand::{rngs::SmallRng, thread_rng, SeedableRng};\n\nuse turbulence::{\n    buffer::BufferPacketPool,\n    reliable_channel::{ReliableChannel, Settings},\n    runtime::{Spawn, Timer},\n    spsc,\n};\n\nmod util;\n\nuse self::util::{condition_link, LinkCondition, SimpleBufferPool, SimpleRuntime};\n\n#[test]\nfn test_reliable_stream() {\n    const SETTINGS: Settings = Settings {\n        bandwidth: 32768,\n        burst_bandwidth: 4096,\n        recv_window_size: 16384,\n        send_window_size: 16384,\n        init_send: 512,\n        resend_time: Duration::from_millis(50),\n        initial_rtt: Duration::from_millis(100),\n        max_rtt: Duration::from_millis(2000),\n        rtt_update_factor: 0.1,\n        rtt_resend_factor: 1.5,\n    };\n\n    const CONDITION: LinkCondition = LinkCondition {\n        loss: 0.4,\n        duplicate: 0.1,\n        delay: Duration::from_millis(30),\n        jitter: Duration::from_millis(20),\n    };\n\n    let packet_pool = BufferPacketPool::new(SimpleBufferPool(1000));\n    let mut runtime = SimpleRuntime::new();\n\n    let (asend, acondrecv) = spsc::channel(2);\n    let (acondsend, arecv) = spsc::channel(2);\n    condition_link(\n        CONDITION,\n        runtime.handle(),\n        runtime.handle(),\n        packet_pool.clone(),\n        SmallRng::from_rng(thread_rng()).unwrap(),\n        acondsend,\n        acondrecv,\n    );\n\n    let (bsend, bcondrecv) = spsc::channel(2);\n    let (bcondsend, brecv) = spsc::channel(2);\n    condition_link(\n        CONDITION,\n        runtime.handle(),\n        runtime.handle(),\n        packet_pool.clone(),\n        SmallRng::from_rng(thread_rng()).unwrap(),\n        bcondsend,\n        bcondrecv,\n    );\n\n    let mut stream1 = ReliableChannel::new(\n        runtime.handle(),\n        runtime.handle(),\n        packet_pool.clone(),\n        SETTINGS,\n        bsend,\n        arecv,\n    );\n    let mut 
stream2 = ReliableChannel::new(\n        runtime.handle(),\n        runtime.handle(),\n        packet_pool.clone(),\n        SETTINGS,\n        asend,\n        brecv,\n    );\n\n    const END_POS: usize = 86_753;\n    const FLUSH_EVERY: usize = 2000;\n    const SEND_DELAY_NEAR: usize = 30_000;\n    const RECV_DELAY_NEAR: usize = 70_000;\n\n    let (a_done_send, mut a_done) = oneshot::channel();\n    runtime.spawn({\n        let runtime_handle = runtime.handle();\n        async move {\n            let mut send_buffer = [0; 512];\n            let mut c = 0;\n\n            loop {\n                for i in 0..send_buffer.len() {\n                    send_buffer[i] = (c + i) as u8;\n                }\n                let len = stream1\n                    .write(&send_buffer[0..send_buffer.len().min(END_POS - c)])\n                    .await\n                    .unwrap();\n\n                if c % FLUSH_EVERY >= (c + len) % FLUSH_EVERY {\n                    stream1.flush().unwrap();\n                }\n\n                if c < SEND_DELAY_NEAR && c + len > SEND_DELAY_NEAR {\n                    runtime_handle.sleep(Duration::from_secs(1)).await;\n                }\n\n                c += len;\n\n                if c == END_POS {\n                    stream1.flush().unwrap();\n                    break;\n                }\n            }\n\n            let _ = a_done_send.send(stream1);\n        }\n    });\n\n    let (b_done_send, mut b_done) = oneshot::channel();\n    runtime.spawn({\n        let runtime_handle = runtime.handle();\n        async move {\n            let mut recv_buffer = [0; 64];\n            let mut c = 0;\n\n            loop {\n                let len = stream2.read(&mut recv_buffer).await.unwrap();\n                for i in 0..len {\n                    if recv_buffer[i] != (c + i) as u8 {\n                        panic!();\n                    }\n                }\n\n                if c < RECV_DELAY_NEAR && c + len >= RECV_DELAY_NEAR {\n             
       runtime_handle.sleep(Duration::from_secs(2)).await;\n                }\n\n                c += len;\n\n                if c == END_POS {\n                    break;\n                }\n            }\n\n            let _ = b_done_send.send(stream2);\n        }\n    });\n\n    let mut a_done_stream = None;\n    let mut b_done_stream = None;\n    for _ in 0..100_000 {\n        a_done_stream = a_done_stream.or_else(|| a_done.try_recv().unwrap());\n        b_done_stream = b_done_stream.or_else(|| b_done.try_recv().unwrap());\n\n        if a_done_stream.is_some() && b_done_stream.is_some() {\n            return;\n        }\n\n        runtime.run_until_stalled();\n        runtime.advance_time(50);\n    }\n\n    panic!(\"didn't finish in time\");\n}\n"
  },
  {
    "path": "tests/unreliable_bincode_channel.rs",
    "content": "use futures::channel::oneshot;\nuse serde::{Deserialize, Serialize};\n\nuse turbulence::{\n    buffer::BufferPacketPool,\n    runtime::Spawn,\n    spsc,\n    unreliable_bincode_channel::UnreliableTypedChannel,\n    unreliable_channel::{Settings, UnreliableChannel},\n};\n\nmod util;\n\nuse self::util::{SimpleBufferPool, SimpleRuntime, SimpleRuntimeHandle};\n\n#[test]\nfn test_unreliable_bincode_channel() {\n    const SETTINGS: Settings = Settings {\n        bandwidth: 512,\n        burst_bandwidth: 256,\n    };\n\n    let mut runtime = SimpleRuntime::new();\n    let packet_pool = BufferPacketPool::new(SimpleBufferPool(1200));\n\n    let (asend, arecv) = spsc::channel(8);\n    let (bsend, brecv) = spsc::channel(8);\n\n    let mut stream1 = UnreliableTypedChannel::new(UnreliableChannel::new(\n        runtime.handle(),\n        packet_pool.clone(),\n        SETTINGS,\n        bsend,\n        arecv,\n    ));\n    let mut stream2 = UnreliableTypedChannel::new(UnreliableChannel::new(\n        runtime.handle(),\n        packet_pool.clone(),\n        SETTINGS,\n        asend,\n        brecv,\n    ));\n\n    #[derive(Eq, PartialEq, Debug, Serialize, Deserialize)]\n    struct MyMsg {\n        a: u8,\n        b: u8,\n        c: u8,\n    }\n\n    async fn send(\n        stream: &mut UnreliableTypedChannel<\n            SimpleRuntimeHandle,\n            BufferPacketPool<SimpleBufferPool>,\n            MyMsg,\n        >,\n        val: u8,\n        len: u8,\n    ) {\n        for i in 0..len {\n            stream\n                .send(&MyMsg {\n                    a: val + i,\n                    b: val + 1 + i,\n                    c: val + 2 + i,\n                })\n                .await\n                .unwrap();\n        }\n        stream.flush().await.unwrap();\n    }\n\n    async fn recv(\n        stream: &mut UnreliableTypedChannel<\n            SimpleRuntimeHandle,\n            BufferPacketPool<SimpleBufferPool>,\n            MyMsg,\n        >,\n        
val: u8,\n        len: u8,\n    ) {\n        for i in 0..len {\n            assert_eq!(\n                stream.recv().await.unwrap(),\n                MyMsg {\n                    a: val + i,\n                    b: val + 1 + i,\n                    c: val + 2 + i,\n                }\n            );\n        }\n    }\n\n    let (a_done_send, mut a_done) = oneshot::channel();\n    runtime.spawn(async move {\n        send(&mut stream1, 42, 5).await;\n        recv(&mut stream1, 17, 80).await;\n        send(&mut stream1, 4, 70).await;\n        recv(&mut stream1, 25, 115).await;\n        recv(&mut stream1, 0, 0).await;\n        recv(&mut stream1, 0, 0).await;\n        send(&mut stream1, 64, 100).await;\n        send(&mut stream1, 0, 0).await;\n        send(&mut stream1, 64, 100).await;\n        send(&mut stream1, 0, 0).await;\n        send(&mut stream1, 0, 0).await;\n        recv(&mut stream1, 0, 0).await;\n        recv(&mut stream1, 99, 6).await;\n        send(&mut stream1, 72, 40).await;\n        send(&mut stream1, 82, 50).await;\n        send(&mut stream1, 92, 60).await;\n        let _ = a_done_send.send(stream1);\n    });\n\n    let (b_done_send, mut b_done) = oneshot::channel();\n    runtime.spawn(async move {\n        recv(&mut stream2, 42, 5).await;\n        send(&mut stream2, 17, 80).await;\n        recv(&mut stream2, 4, 70).await;\n        send(&mut stream2, 25, 115).await;\n        send(&mut stream2, 0, 0).await;\n        send(&mut stream2, 0, 0).await;\n        recv(&mut stream2, 64, 100).await;\n        recv(&mut stream2, 0, 0).await;\n        recv(&mut stream2, 64, 100).await;\n        recv(&mut stream2, 0, 0).await;\n        recv(&mut stream2, 0, 0).await;\n        send(&mut stream2, 0, 0).await;\n        send(&mut stream2, 99, 6).await;\n        recv(&mut stream2, 72, 40).await;\n        recv(&mut stream2, 82, 50).await;\n        recv(&mut stream2, 92, 60).await;\n        let _ = b_done_send.send(stream2);\n    });\n\n    let mut a_done_stream = None;\n  
  let mut b_done_stream = None;\n    for _ in 0..100_000 {\n        a_done_stream = a_done_stream.or_else(|| a_done.try_recv().unwrap());\n        b_done_stream = b_done_stream.or_else(|| b_done.try_recv().unwrap());\n\n        if a_done_stream.is_some() && b_done_stream.is_some() {\n            return;\n        }\n\n        runtime.run_until_stalled();\n        runtime.advance_time(50);\n    }\n\n    panic!(\"didn't finish in time\");\n}\n"
  },
  {
    "path": "tests/unreliable_channel.rs",
    "content": "use futures::channel::oneshot;\n\nuse turbulence::{\n    buffer::BufferPacketPool,\n    runtime::Spawn,\n    spsc,\n    unreliable_channel::{Settings, UnreliableChannel},\n};\n\nmod util;\n\nuse self::util::{SimpleBufferPool, SimpleRuntime, SimpleRuntimeHandle};\n\n#[test]\nfn test_unreliable_channel() {\n    const SETTINGS: Settings = Settings {\n        bandwidth: 512,\n        burst_bandwidth: 256,\n    };\n\n    let mut runtime = SimpleRuntime::new();\n    let packet_pool = BufferPacketPool::new(SimpleBufferPool(1200));\n\n    let (asend, arecv) = spsc::channel(8);\n    let (bsend, brecv) = spsc::channel(8);\n\n    let mut stream1 = UnreliableChannel::new(\n        runtime.handle(),\n        packet_pool.clone(),\n        SETTINGS,\n        bsend,\n        arecv,\n    );\n    let mut stream2 = UnreliableChannel::new(\n        runtime.handle(),\n        packet_pool.clone(),\n        SETTINGS,\n        asend,\n        brecv,\n    );\n\n    async fn send(\n        stream: &mut UnreliableChannel<SimpleRuntimeHandle, BufferPacketPool<SimpleBufferPool>>,\n        val: u8,\n        len: usize,\n    ) {\n        let msg1 = vec![val; len];\n        stream.send(&msg1).await.unwrap();\n        stream.flush().await.unwrap();\n    }\n\n    async fn recv(\n        stream: &mut UnreliableChannel<SimpleRuntimeHandle, BufferPacketPool<SimpleBufferPool>>,\n        val: u8,\n        len: usize,\n    ) {\n        assert_eq!(stream.recv().await.unwrap(), vec![val; len].as_slice());\n    }\n\n    let (a_done_send, mut a_done) = oneshot::channel();\n    runtime.spawn(async move {\n        send(&mut stream1, 42, 5).await;\n        recv(&mut stream1, 17, 800).await;\n        send(&mut stream1, 4, 700).await;\n        recv(&mut stream1, 25, 1150).await;\n        recv(&mut stream1, 0, 0).await;\n        recv(&mut stream1, 0, 0).await;\n        send(&mut stream1, 64, 1000).await;\n        send(&mut stream1, 0, 0).await;\n        send(&mut stream1, 64, 1000).await;\n        
send(&mut stream1, 0, 0).await;\n        send(&mut stream1, 0, 0).await;\n        recv(&mut stream1, 0, 0).await;\n        recv(&mut stream1, 99, 64).await;\n        send(&mut stream1, 72, 400).await;\n        send(&mut stream1, 82, 500).await;\n        send(&mut stream1, 92, 600).await;\n        let _ = a_done_send.send(stream1);\n    });\n\n    let (b_done_send, mut b_done) = oneshot::channel();\n    runtime.spawn(async move {\n        recv(&mut stream2, 42, 5).await;\n        send(&mut stream2, 17, 800).await;\n        recv(&mut stream2, 4, 700).await;\n        send(&mut stream2, 25, 1150).await;\n        send(&mut stream2, 0, 0).await;\n        send(&mut stream2, 0, 0).await;\n        recv(&mut stream2, 64, 1000).await;\n        recv(&mut stream2, 0, 0).await;\n        recv(&mut stream2, 64, 1000).await;\n        recv(&mut stream2, 0, 0).await;\n        recv(&mut stream2, 0, 0).await;\n        send(&mut stream2, 0, 0).await;\n        send(&mut stream2, 99, 64).await;\n        recv(&mut stream2, 72, 400).await;\n        recv(&mut stream2, 82, 500).await;\n        recv(&mut stream2, 92, 600).await;\n        let _ = b_done_send.send(stream2);\n    });\n\n    let mut a_done_stream = None;\n    let mut b_done_stream = None;\n    for _ in 0..100_000 {\n        a_done_stream = a_done_stream.or_else(|| a_done.try_recv().unwrap());\n        b_done_stream = b_done_stream.or_else(|| b_done.try_recv().unwrap());\n\n        if a_done_stream.is_some() && b_done_stream.is_some() {\n            return;\n        }\n\n        runtime.run_until_stalled();\n        runtime.advance_time(50);\n    }\n\n    panic!(\"didn't finish in time\");\n}\n"
  },
  {
    "path": "tests/util/mod.rs",
    "content": "#![allow(unused)]\n\nuse std::{\n    future::Future,\n    ops::Deref,\n    pin::Pin,\n    sync::{Arc, Mutex},\n    task::{Context, Poll, Waker},\n    time::Duration,\n};\n\nuse futures::{\n    channel::mpsc,\n    future::{self, BoxFuture},\n    stream::{self, FuturesUnordered},\n    task::noop_waker,\n    SinkExt, Stream, StreamExt,\n};\nuse rand::Rng;\n\nuse turbulence::{\n    buffer::BufferPool,\n    packet::{Packet, PacketPool},\n    packet_multiplexer::{MuxPacket, MuxPacketPool},\n    runtime::{Spawn, Timer},\n    spsc,\n};\n\n#[derive(Debug, Copy, Clone)]\npub struct SimpleBufferPool(pub usize);\n\nimpl BufferPool for SimpleBufferPool {\n    type Buffer = Box<[u8]>;\n\n    fn capacity(&self) -> usize {\n        self.0\n    }\n\n    fn acquire(&mut self) -> Self::Buffer {\n        vec![0; self.0].into_boxed_slice()\n    }\n}\n\nstruct TimeState {\n    time: u64,\n    queue: Vec<(u64, Waker)>,\n}\n\ntype IncomingTasks = Mutex<Vec<BoxFuture<'static, ()>>>;\n\nstruct HandleInner {\n    time_state: Mutex<TimeState>,\n    incoming_tasks: IncomingTasks,\n}\n\npub struct SimpleRuntime {\n    pool: FuturesUnordered<BoxFuture<'static, ()>>,\n    handle: SimpleRuntimeHandle,\n}\n\n#[derive(Clone)]\npub struct SimpleRuntimeHandle(Arc<HandleInner>);\n\nimpl SimpleRuntime {\n    pub fn new() -> Self {\n        SimpleRuntime {\n            pool: FuturesUnordered::new(),\n            handle: SimpleRuntimeHandle(Arc::new(HandleInner {\n                time_state: Mutex::new(TimeState {\n                    time: 0,\n                    queue: Vec::new(),\n                }),\n                incoming_tasks: IncomingTasks::default(),\n            })),\n        }\n    }\n\n    pub fn handle(&self) -> SimpleRuntimeHandle {\n        self.handle.clone()\n    }\n\n    pub fn advance_time(&mut self, millis: u64) {\n        let mut state = self.handle.0.time_state.lock().unwrap();\n        state.time += millis;\n\n        let mut arrived = 0;\n        for i in 
0..state.queue.len() {\n            if state.time >= state.queue[i].0 {\n                arrived = i + 1;\n            } else {\n                break;\n            }\n        }\n\n        for (_, waker) in state.queue.drain(0..arrived) {\n            waker.wake();\n        }\n    }\n\n    pub fn run_until_stalled(&mut self) -> bool {\n        let waker = noop_waker();\n        let mut cx = Context::from_waker(&waker);\n\n        loop {\n            {\n                let mut incoming = self.handle.0.incoming_tasks.lock().unwrap();\n                for task in incoming.drain(..) {\n                    self.pool.push(task);\n                }\n            }\n\n            let next = self.pool.poll_next_unpin(&mut cx);\n\n            if self.handle.0.incoming_tasks.lock().unwrap().is_empty() {\n                match next {\n                    Poll::Pending => return false,\n                    Poll::Ready(None) => return true,\n                    Poll::Ready(Some(())) => {}\n                }\n            }\n        }\n    }\n}\n\nimpl Deref for SimpleRuntime {\n    type Target = SimpleRuntimeHandle;\n\n    fn deref(&self) -> &Self::Target {\n        &self.handle\n    }\n}\n\nasync fn do_delay(state: Arc<HandleInner>, duration: Duration) -> u64 {\n    // Our timer requires manual advancing, so a delay must never start out already arrived\n    // (hence the 1ms minimum), or it could starve the code that manually advances the time.\n    let arrival = state.time_state.lock().unwrap().time + (duration.as_millis() as u64).max(1);\n    future::poll_fn(move |cx| -> Poll<u64> {\n        let mut state = state.time_state.lock().unwrap();\n        if state.time >= arrival {\n            Poll::Ready(state.time)\n        } else {\n            let i = match state.queue.binary_search_by_key(&arrival, |(t, _)| *t) {\n                Ok(i) => i,\n                Err(i) => i,\n            };\n            state.queue.insert(i, (arrival, cx.waker().clone()));\n            Poll::Pending\n        }\n    })\n    
.await\n}\n\nimpl Spawn for SimpleRuntimeHandle {\n    fn spawn<F: Future<Output = ()> + Send + 'static>(&self, f: F) {\n        self.0.incoming_tasks.lock().unwrap().push(Box::pin(f))\n    }\n}\n\nimpl Timer for SimpleRuntimeHandle {\n    type Instant = u64;\n    type Sleep = Pin<Box<dyn Future<Output = ()> + Send>>;\n\n    fn now(&self) -> Self::Instant {\n        self.0.time_state.lock().unwrap().time\n    }\n\n    fn duration_between(&self, earlier: Self::Instant, later: Self::Instant) -> Duration {\n        Duration::from_millis(later - earlier)\n    }\n\n    fn sleep(&self, duration: Duration) -> Self::Sleep {\n        let state = Arc::clone(&self.0);\n        Box::pin(async move {\n            do_delay(state, duration).await;\n        })\n    }\n}\n\n#[derive(Clone, Copy)]\npub struct LinkCondition {\n    pub loss: f64,\n    pub duplicate: f64,\n    pub delay: Duration,\n    pub jitter: Duration,\n}\n\npub fn condition_link<P>(\n    condition: LinkCondition,\n    spawn: impl Spawn + Clone + 'static,\n    timer: impl Timer + Clone + 'static,\n    mut pool: P,\n    mut rng: impl Rng + Send + 'static,\n    mut sender: spsc::Sender<P::Packet>,\n    mut receiver: spsc::Receiver<P::Packet>,\n) where\n    P: PacketPool + Send + 'static,\n    P::Packet: Send,\n{\n    // Hack to make implementing delayed packets easier\n    let (delay_outgoing, mut delay_incoming) = mpsc::unbounded();\n\n    spawn.spawn(async move {\n        while let Some(msg) = delay_incoming.next().await {\n            if sender.send(msg).await.is_err() {\n                break;\n            }\n        }\n    });\n\n    spawn.spawn({\n        let spawn = spawn.clone();\n        async move {\n            loop {\n                match receiver.next().await {\n                    Some(packet) => {\n                        if rng.gen::<f64>() > condition.loss {\n                            if rng.gen::<f64>() <= condition.duplicate {\n                                spawn.spawn({\n                     
               let timer = timer.clone();\n                                    let mut outgoing = delay_outgoing.clone();\n                                    let delay = Duration::from_secs_f64(\n                                        condition.delay.as_secs_f64()\n                                            + rng.gen::<f64>() * condition.jitter.as_secs_f64(),\n                                    );\n                                    let mut dup_packet = pool.acquire();\n                                    dup_packet.extend(&packet[..]);\n                                    async move {\n                                        timer.sleep(delay).await;\n                                        let _ = outgoing.send(dup_packet).await;\n                                    }\n                                });\n                            }\n\n                            spawn.spawn({\n                                let timer = timer.clone();\n                                let mut outgoing = delay_outgoing.clone();\n                                let delay = Duration::from_secs_f64(\n                                    condition.delay.as_secs_f64()\n                                        + rng.gen::<f64>() * condition.jitter.as_secs_f64(),\n                                );\n                                async move {\n                                    timer.sleep(delay).await;\n                                    let _ = outgoing.send(packet).await;\n                                }\n                            });\n                        }\n                    }\n                    None => break,\n                }\n            }\n        }\n    });\n}\n"
  }
]