Repository: kyren/turbulence
Branch: master
Commit: b119f20a1c39
Files: 32
Total size: 214.7 KB
Directory structure:
gitextract_lm1348nn/
├── .circleci/
│ └── config.yml
├── .gitignore
├── CHANGELOG.md
├── Cargo.toml
├── LICENSE-APACHE
├── LICENSE-CC0
├── LICENSE-MIT
├── README.md
├── src/
│ ├── bandwidth_limiter.rs
│ ├── buffer.rs
│ ├── compressed_bincode_channel.rs
│ ├── event_watch.rs
│ ├── lib.rs
│ ├── message_channels.rs
│ ├── packet.rs
│ ├── packet_multiplexer.rs
│ ├── reliable_bincode_channel.rs
│ ├── reliable_channel.rs
│ ├── ring_buffer.rs
│ ├── runtime.rs
│ ├── spsc.rs
│ ├── unreliable_bincode_channel.rs
│ ├── unreliable_channel.rs
│ └── windows.rs
└── tests/
├── compressed_bincode_channel.rs
├── message_channels.rs
├── packet_multiplexer.rs
├── reliable_bincode_channel.rs
├── reliable_channel.rs
├── unreliable_bincode_channel.rs
├── unreliable_channel.rs
└── util/
└── mod.rs
================================================
FILE CONTENTS
================================================
================================================
FILE: .circleci/config.yml
================================================
version: 2
jobs:
  build:
    docker:
      - image: cimg/rust:1.70.0
    steps:
      - checkout
      - run:
          name: Version information
          command: |
            rustc --version
            cargo --version
            rustup --version
      - run:
          name: Calculate dependencies
          command: cargo generate-lockfile
      - restore_cache:
          keys:
            - cargo-cache-{{ arch }}-{{ checksum "Cargo.lock" }}
      - run:
          name: Check Formatting
          command: |
            rustfmt --version
            cargo fmt --all -- --check --color=auto
      - run:
          name: Build all targets
          command: cargo build --all --all-targets
      - run:
          name: Run all tests
          command: cargo test --all
      - save_cache:
          paths:
            - /usr/local/cargo/registry
            - target/debug/.fingerprint
            - target/debug/build
            - target/debug/deps
          key: cargo-cache-{{ arch }}-{{ checksum "Cargo.lock" }}
================================================
FILE: .gitignore
================================================
target/
**/*.rs.bk
Cargo.lock
.DS_Store
.#*
.envrc
.direnv
shell.nix
.dir-locals.el
================================================
FILE: CHANGELOG.md
================================================
## [0.4]
- Don't "nagle" in the reliable channel, *require* flush calls to ensure data is
sent.
- [API Change]: Change length limits in message channels to be uniformly `u16`
and use the type system to express maximum values rather than constants.
- Fix panics in reliable bincode channel with messages near upper limit due to
improper buffer size.
- Document that all async methods are supposed to be cancel safe.
## [0.3]
- Fix the message_channels test to be less confusing, this is very important as
it is currently the best (hah) example.
- Make `BufferPacketPool` derive Copy if the type it wraps is Copy.
- Simplify `Runtime` trait to not require an explicit `Interval`.
`Runtime::Delay` wasn't even *used* prior to this, but it is the only timing
requirement now and has been renamed to `Sleep` to match tokio 0.3. Neither
tokio nor smol allocate as part of creating a `Sleep` / `Timer`, so having an
explicit `Interval` is not really necessary to avoid e.g. allocation, and the
way tokio's `Interval` works was not ideal anyway and we shouldn't rely on
how it is implemented.
## [0.2]
- Correctness fixes for unreliable message lengths
- Performance improvements for bincode message serialization
- Avoid unnecessary calls to SendExt::send
- Performance improvements and fixes for internal `event_watch` events channel.
- [API Change]: Update to bincode 1.3, no longer using the deprecated bincode API
- [API Change]: Return `Result` in `MessageChannels` async methods on
disconnection, panicking is never appropriate for a network error. Instead,
the panicking version of methods in `MessageChannels` *only* panic on
unregistered message types.
## [0.1.1]
- Small bugfix for unreliable message channels: don't error with
  `SendError::TooBig` when the message will actually fit.
## [0.1.0]
- Initial release
================================================
FILE: Cargo.toml
================================================
[package]
name = "turbulence"
version = "0.4.0"
authors = ["kyren <kerriganw@gmail.com>"]
edition = "2021"
description = "Tools to provide serialization, multiplexing, optional reliability, and optional compression to a game's networking."
readme = "README.md"
repository = "https://github.com/kyren/turbulence"
documentation = "https://docs.rs/turbulence"
keywords = ["gamedev", "networking"]
license = "MIT OR Apache-2.0"
[badges]
circle-ci = { repository = "kyren/turbulence", branch = "master" }
[dependencies]
bincode = "1.3"
byteorder = "1.3"
cache-padded = "1.2"
crossbeam-channel = "0.5"
futures = "0.3"
rustc-hash = "1.0"
serde = "1.0"
snap = "1.0"
thiserror = "1.0"
[dev-dependencies]
rand = { version = "0.8", features = ["small_rng"] }
serde = { version = "1.0", features = ["derive"] }
================================================
FILE: LICENSE-APACHE
================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
================================================
FILE: LICENSE-CC0
================================================
Creative Commons Legal Code
CC0 1.0 Universal
CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE
LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN
ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS
INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES
REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS
PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM
THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED
HEREUNDER.
Statement of Purpose
The laws of most jurisdictions throughout the world automatically confer
exclusive Copyright and Related Rights (defined below) upon the creator
and subsequent owner(s) (each and all, an "owner") of an original work of
authorship and/or a database (each, a "Work").
Certain owners wish to permanently relinquish those rights to a Work for
the purpose of contributing to a commons of creative, cultural and
scientific works ("Commons") that the public can reliably and without fear
of later claims of infringement build upon, modify, incorporate in other
works, reuse and redistribute as freely as possible in any form whatsoever
and for any purposes, including without limitation commercial purposes.
These owners may contribute to the Commons to promote the ideal of a free
culture and the further production of creative, cultural and scientific
works, or to gain reputation or greater distribution for their Work in
part through the use and efforts of others.
For these and/or other purposes and motivations, and without any
expectation of additional consideration or compensation, the person
associating CC0 with a Work (the "Affirmer"), to the extent that he or she
is an owner of Copyright and Related Rights in the Work, voluntarily
elects to apply CC0 to the Work and publicly distribute the Work under its
terms, with knowledge of his or her Copyright and Related Rights in the
Work and the meaning and intended legal effect of CC0 on those rights.
1. Copyright and Related Rights. A Work made available under CC0 may be
protected by copyright and related or neighboring rights ("Copyright and
Related Rights"). Copyright and Related Rights include, but are not
limited to, the following:
i. the right to reproduce, adapt, distribute, perform, display,
communicate, and translate a Work;
ii. moral rights retained by the original author(s) and/or performer(s);
iii. publicity and privacy rights pertaining to a person's image or
likeness depicted in a Work;
iv. rights protecting against unfair competition in regards to a Work,
subject to the limitations in paragraph 4(a), below;
v. rights protecting the extraction, dissemination, use and reuse of data
in a Work;
vi. database rights (such as those arising under Directive 96/9/EC of the
European Parliament and of the Council of 11 March 1996 on the legal
protection of databases, and under any national implementation
thereof, including any amended or successor version of such
directive); and
vii. other similar, equivalent or corresponding rights throughout the
world based on applicable law or treaty, and any national
implementations thereof.
2. Waiver. To the greatest extent permitted by, but not in contravention
of, applicable law, Affirmer hereby overtly, fully, permanently,
irrevocably and unconditionally waives, abandons, and surrenders all of
Affirmer's Copyright and Related Rights and associated claims and causes
of action, whether now known or unknown (including existing as well as
future claims and causes of action), in the Work (i) in all territories
worldwide, (ii) for the maximum duration provided by applicable law or
treaty (including future time extensions), (iii) in any current or future
medium and for any number of copies, and (iv) for any purpose whatsoever,
including without limitation commercial, advertising or promotional
purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each
member of the public at large and to the detriment of Affirmer's heirs and
successors, fully intending that such Waiver shall not be subject to
revocation, rescission, cancellation, termination, or any other legal or
equitable action to disrupt the quiet enjoyment of the Work by the public
as contemplated by Affirmer's express Statement of Purpose.
3. Public License Fallback. Should any part of the Waiver for any reason
be judged legally invalid or ineffective under applicable law, then the
Waiver shall be preserved to the maximum extent permitted taking into
account Affirmer's express Statement of Purpose. In addition, to the
extent the Waiver is so judged Affirmer hereby grants to each affected
person a royalty-free, non transferable, non sublicensable, non exclusive,
irrevocable and unconditional license to exercise Affirmer's Copyright and
Related Rights in the Work (i) in all territories worldwide, (ii) for the
maximum duration provided by applicable law or treaty (including future
time extensions), (iii) in any current or future medium and for any number
of copies, and (iv) for any purpose whatsoever, including without
limitation commercial, advertising or promotional purposes (the
"License"). The License shall be deemed effective as of the date CC0 was
applied by Affirmer to the Work. Should any part of the License for any
reason be judged legally invalid or ineffective under applicable law, such
partial invalidity or ineffectiveness shall not invalidate the remainder
of the License, and in such case Affirmer hereby affirms that he or she
will not (i) exercise any of his or her remaining Copyright and Related
Rights in the Work or (ii) assert any associated claims and causes of
action with respect to the Work, in either case contrary to Affirmer's
express Statement of Purpose.
4. Limitations and Disclaimers.
a. No trademark or patent rights held by Affirmer are waived, abandoned,
surrendered, licensed or otherwise affected by this document.
b. Affirmer offers the Work as-is and makes no representations or
warranties of any kind concerning the Work, express, implied,
statutory or otherwise, including without limitation warranties of
title, merchantability, fitness for a particular purpose, non
infringement, or the absence of latent or other defects, accuracy, or
the present or absence of errors, whether or not discoverable, all to
the greatest extent permissible under applicable law.
c. Affirmer disclaims responsibility for clearing rights of other persons
that may apply to the Work or any use thereof, including without
limitation any person's Copyright and Related Rights in the Work.
Further, Affirmer disclaims responsibility for obtaining any necessary
consents, permissions or other rights required for any use of the
Work.
d. Affirmer understands and acknowledges that Creative Commons is not a
party to this document and has no duty or obligation with respect to
this CC0 or use of the Work.
================================================
FILE: LICENSE-MIT
================================================
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
================================================
FILE: README.md
================================================
# turbulence
*We'll get there, but it's gonna be a bumpy ride.*
---
[Build Status](https://circleci.com/gh/kyren/turbulence)
[Latest Version](https://crates.io/crates/turbulence)
[Documentation](https://docs.rs/turbulence)
Multiplexed, optionally reliable, async, transport agnostic, reactor agnostic
networking library for games.
This library does not actually perform any networking itself or interact with
platform networking APIs in any way, it is instead a way to take some kind of
*unreliable* and *unordered* transport layer that you provide and turn it into
a set of independent networking channels, each of which can optionally be made
*reliable* and *ordered*.
The best way right now to understand what this library is useful for is
probably to look at the [MessageChannels test](tests/message_channels.rs). This is the
highest level, simplest API provided: it allows you to define N message types
serializable with serde, define each individual channel's networking settings,
and then gives you a set of handles for pushing packets into and taking packets
out of this `MessageChannels` interface. The user is expected to take outgoing
packets and send them out over UDP (or similar), and also read incoming packets
from UDP (or similar) and pass them in. The only reliability requirement for
using this is that if a packet is received from a remote, it must be intact
and uncorrupted, but other than this the underlying transport does not need
to provide any reliability or order guarantees. The reason that no corruption
check is performed is that many transport layers already provide this for free,
so it would often not be useful for `turbulence` to do that itself. Since there
is no requirement for reliability, simply dropping incoming packets that do not
pass a consistency check is appropriate.
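As a concrete illustration of the "drop anything that fails a consistency check" approach described above (this is not part of turbulence itself, and the trivial byte-sum checksum here is purely illustrative; a real transport would use something stronger like CRC32):

```rust
// Illustrative only: turbulence does not checksum packets itself, because
// many transports (e.g. WebRTC data channels, UDP on most links) already
// discard corrupted datagrams for free.

/// A toy 16-bit checksum (wrapping sum of bytes). Purely illustrative.
fn checksum(data: &[u8]) -> u16 {
    data.iter().fold(0u16, |acc, &b| acc.wrapping_add(b as u16))
}

/// Frame an outgoing packet by appending its checksum.
fn frame(payload: &[u8]) -> Vec<u8> {
    let mut packet = payload.to_vec();
    packet.extend_from_slice(&checksum(payload).to_le_bytes());
    packet
}

/// Verify an incoming packet; return the payload, or `None` to drop it.
/// Dropping is fine here because turbulence never assumes reliability.
fn verify(packet: &[u8]) -> Option<&[u8]> {
    if packet.len() < 2 {
        return None;
    }
    let (payload, check) = packet.split_at(packet.len() - 2);
    let expected = u16::from_le_bytes([check[0], check[1]]);
    (checksum(payload) == expected).then_some(payload)
}

fn main() {
    let packet = frame(b"position update");
    assert_eq!(verify(&packet), Some(&b"position update"[..]));

    let mut corrupted = packet.clone();
    corrupted[0] ^= 0xFF; // simulate in-flight corruption
    assert_eq!(verify(&corrupted), None); // simply dropped, never surfaced
}
```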
This library is structured in a way that provides a lot of flexibility but does
not do very much to help you actually get a network connection set up between
a game server and client. Setting up a UDP game server is a complex task, and
this library is designed to help with one *piece* of this puzzle.
---
### What this library actually does
`turbulence` currently contains two main protocols and builds some conveniences
on top of them:
1) It has an unreliable, unordered messaging protocol that takes in messages
that must be less than the size of a packet and coalesces them so that
multiple messages are sent per packet. This is by far the simpler of the two
protocols, and is appropriate for per-tick updates for things like position
data, where resends of old data are not useful.
2) It has a reliable, ordered transport with flow control that is similar to
TCP, but much simpler and without automatic congestion control. Instead of
congestion control, the user specifies the target packet send rate as part
of the protocol settings.
`turbulence` then provides on top of these:
3) Reliable and unreliable channels of `bincode` serialized types.
4) A reliable channel of `bincode` serialized types that are automatically
coalesced and compressed.
And then finally this library also provides an API for multiplexing multiple
instances of these channels across a single stream of packets and some
convenient ways of constructing the channels and accessing them by message
type. This is what the `MessageChannels` interface provides.
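The coalescing in (1) can be pictured as length-prefixed messages greedily packed into fixed-capacity packets. A standalone sketch of that idea (this is *not* turbulence's actual wire format, which is internal and may differ):

```rust
// Standalone sketch of message coalescing: each message gets a u16 length
// prefix, and messages are greedily packed into fixed-capacity packets.
// The capacity and framing here are illustrative, not turbulence's format.

const PACKET_CAPACITY: usize = 1200; // a typical safe UDP payload size

fn coalesce(messages: &[Vec<u8>]) -> Vec<Vec<u8>> {
    let mut packets = Vec::new();
    let mut current: Vec<u8> = Vec::new();
    for msg in messages {
        assert!(msg.len() + 2 <= PACKET_CAPACITY, "message larger than a packet");
        if current.len() + 2 + msg.len() > PACKET_CAPACITY {
            // Current packet is full; start a new one.
            packets.push(std::mem::take(&mut current));
        }
        current.extend_from_slice(&(msg.len() as u16).to_le_bytes());
        current.extend_from_slice(msg);
    }
    if !current.is_empty() {
        packets.push(current);
    }
    packets
}

fn split(packet: &[u8]) -> Vec<Vec<u8>> {
    let mut messages = Vec::new();
    let mut rest = packet;
    while rest.len() >= 2 {
        let len = u16::from_le_bytes([rest[0], rest[1]]) as usize;
        messages.push(rest[2..2 + len].to_vec());
        rest = &rest[2 + len..];
    }
    messages
}

fn main() {
    // 100 small per-tick updates fit in a handful of packets, not 100.
    let messages: Vec<Vec<u8>> = (0..100).map(|i| vec![i as u8; 50]).collect();
    let packets = coalesce(&messages);
    assert!(packets.len() < messages.len());
    assert!(packets.iter().all(|p| p.len() <= PACKET_CAPACITY));
    let roundtrip: Vec<Vec<u8>> = packets.iter().flat_map(|p| split(p)).collect();
    assert_eq!(roundtrip, messages);
}
```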
### Questions you might ask
***Why would you ever need something like this?***
You would need this library only if most or all of the following is true:
1) You have a real time, networked game where TCP or TCP-like protocols are
inappropriate, and something unreliable like UDP must be used for latency
reasons.
2) You have a game that needs to send both fast unreliable data like position
and also stream reliable game related data such as terrain data or chat or
complex entity data that is bandwidth intensive.
3) You have several independent streams of reliable data and they need to not
block each other or choke off fast unreliable data.
4) It is impractical or undesirable (or impossible) to use many different OS
level networking sockets, or to use existing networking libraries that hook
deeply into the OS or even just assume the existence of UDP sockets.
***Why do you need this library, doesn't XYZ protocol already do this*** (Where
XYZ is plain TCP, plain UDP, SCTP, QUIC, etc)
In a way, this library is equivalent to having multiple UDP connections and
bandwidth limited TCP connections at one time. If you can already do exactly
that and that's acceptable for you, then you might consider just doing that
instead of using this library!
This library is also a bit similar to something like QUIC in that it gives you
multiple independent channels of data which do not block each other. If QUIC
eventually supports truly unreliable, unordered messages (AFAIK currently this
is only a proposed extension?), AND it has an implementation that you can use,
then certainly using QUIC would be a viable option.
***So this library contains a re-implementation of something like TCP, isn't
trying to implement something like that fiendishly complex and generally a bad
idea?***
Probably, but since it is designed for low-ish static bandwidth limits and
doesn't concern itself with congestion control, this cuts out a *lot* of the
complexity. Still, this is the most complex part of this library, but it is
well tested and definitely at least works *in the environments I have run
so far*. It's not very complicated, it could probably be described as "the
simplest TCP-like thing that you could reasonably write and use".
You should not use the reliable streams in this library the same way you
would use TCP. A good example of what probably *shouldn't* go over this
library is streaming asset data: data that should be streamed as fast as
possible and will always be bandwidth rather than gameplay limited belongs
on a separate, dedicated channel.
The reliable streams here are for things that are normally gameplay limited but
might be spikey, and where you *want* to limit the bandwidth so those spikes
don't slow down more important data or slow down other players.
***Why is this library so generic? It's TOO generic, everything is based on
traits like `PacketPool` and `Runtime` and it's hard to use. Why can't you just
use tokio / async-std?***
The `PacketPool` trait exists not only to allow for custom packet types but
also for things like the multiplexer, so it serves double duty. `Runtime`
exists because I use this library in a web browser connecting to a remote
server using [webrtc-unreliable](https://github.com/kyren/webrtc-unreliable),
and I have to implement it manually on top of web APIs and that is
currently not trivial to do.
### Current status / Future plans
I've used this library in a real project over the real internet, and it
definitely works. I've also tested it in-game using link conditioners to
simulate various levels of packet loss and duplication and *as far as I can
tell* it works as advertised.
The library is usable currently, but the API should in no way be considered
stable; it may still see a lot of churn.
In the near future it might be useful to have other channel types that provide
in-between guarantees like only reliability guarantees but not in-order
guarantees or vice versa.
Eventually, I'd like the reliable channels to have some sort of congestion
avoidance, but this would probably need to be cooperative between reliable
channels in some way.
The library desperately needs better examples, especially a fully worked
example using e.g. tokio and UDP, but setting up such an example is a large
task by itself.
## License
`turbulence` is licensed under any of:
* MIT License [LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT
* Apache License Version 2.0 [LICENSE-APACHE](LICENSE-APACHE) or
https://opensource.org/licenses/Apache-2.0
* Creative Commons CC0 1.0 Universal Public Domain Dedication
[LICENSE-CC0](LICENSE-CC0) or
https://creativecommons.org/publicdomain/zero/1.0/
at your option.
================================================
FILE: src/bandwidth_limiter.rs
================================================
use std::time::Duration;
use crate::runtime::Timer;
pub struct BandwidthLimiter<T: Timer> {
bandwidth: u32,
burst_bandwidth: u32,
bytes_available: f64,
last_calculation: T::Instant,
}
impl<T: Timer> BandwidthLimiter<T> {
/// The `burst_bandwidth` is the maximum amount of bandwidth credit that can accumulate.
pub fn new(timer: &T, bandwidth: u32, burst_bandwidth: u32) -> Self {
let last_calculation = timer.now();
BandwidthLimiter {
bandwidth,
burst_bandwidth,
bytes_available: burst_bandwidth as f64,
last_calculation,
}
}
/// Returns a sleep future lasting until bandwidth will be available again, or `None` if a
/// non-negative amount of bandwidth is already available.
pub fn delay_until_available(&self, timer: &T) -> Option<T::Sleep> {
if self.bytes_available < 0. {
Some(timer.sleep(Duration::from_secs_f64(
(-self.bytes_available) / self.bandwidth as f64,
)))
} else {
None
}
}
/// Actually update the amount of available bandwidth. Additional available bytes are not added
/// until this method is called to add them.
pub fn update_available(&mut self, timer: &T) {
let now = timer.now();
self.bytes_available += timer
.duration_between(self.last_calculation, now)
.as_secs_f64()
* self.bandwidth as f64;
self.bytes_available = self.bytes_available.min(self.burst_bandwidth as f64);
self.last_calculation = now;
}
/// The bandwidth limiter only needs to gate whether a packet may be sent at all, not its size,
/// so this returns true as long as a non-negative number of bytes is available. If a packet is
/// sent that is larger than the available bytes, the available byte count goes negative and this
/// returns false until enough bandwidth accumulates again.
pub fn bytes_available(&self) -> bool {
self.bytes_available >= 0.
}
/// Record that bytes were sent, possibly going into bandwidth debt.
pub fn take_bytes(&mut self, bytes: u32) {
self.bytes_available -= bytes as f64
}
}
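The refill arithmetic above amounts to a token bucket. The following std-only sketch reproduces the accounting with the `Timer` abstraction replaced by explicit `Duration` values, so the math is easy to follow in isolation; the `TokenBucket` name and its methods are illustrative, not part of this crate.

```rust
use std::time::Duration;

// Illustrative token bucket mirroring `BandwidthLimiter`'s accounting.
struct TokenBucket {
    bandwidth: u32,       // refill rate in bytes per second
    burst: u32,           // maximum credit that can accumulate
    bytes_available: f64, // may go negative ("bandwidth debt")
}

impl TokenBucket {
    fn new(bandwidth: u32, burst: u32) -> Self {
        // Start with a full burst of credit, as `BandwidthLimiter::new` does.
        TokenBucket { bandwidth, burst, bytes_available: burst as f64 }
    }

    // Credit bandwidth for elapsed time, clamped to the burst ceiling.
    fn refill(&mut self, elapsed: Duration) {
        self.bytes_available += elapsed.as_secs_f64() * self.bandwidth as f64;
        self.bytes_available = self.bytes_available.min(self.burst as f64);
    }

    // Sending is allowed whenever credit is non-negative; a large packet may
    // push the balance negative, blocking further sends until refilled.
    fn can_send(&self) -> bool {
        self.bytes_available >= 0.0
    }

    fn take(&mut self, bytes: u32) {
        self.bytes_available -= bytes as f64;
    }
}
```

With a 1000 B/s rate and 1000 B burst, sending a 1500 B packet drives the balance to -500, and half a second of debt must be repaid before the next send is allowed.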
================================================
FILE: src/buffer.rs
================================================
use std::ops::{Deref, DerefMut};
pub use crate::packet::{Packet, PacketPool};
/// A trait for implementing `PacketPool` more easily using an allocator for statically sized
/// buffers.
pub trait BufferPool {
type Buffer: Deref<Target = [u8]> + DerefMut;
fn capacity(&self) -> usize;
fn acquire(&mut self) -> Self::Buffer;
}
/// Turns a `BufferPool` implementation into something that implements `PacketPool`.
#[derive(Debug, Copy, Clone, Default)]
pub struct BufferPacketPool<B>(B);
impl<B> BufferPacketPool<B> {
pub fn new(buffer_pool: B) -> Self {
BufferPacketPool(buffer_pool)
}
}
impl<B: BufferPool> PacketPool for BufferPacketPool<B> {
type Packet = BufferPacket<B::Buffer>;
fn capacity(&self) -> usize {
self.0.capacity()
}
fn acquire(&mut self) -> Self::Packet {
BufferPacket {
buffer: self.0.acquire(),
len: 0,
}
}
}
#[derive(Debug)]
pub struct BufferPacket<B> {
buffer: B,
len: usize,
}
impl<B> Packet for BufferPacket<B>
where
B: Deref<Target = [u8]> + DerefMut,
{
fn resize(&mut self, len: usize, val: u8) {
assert!(len <= self.buffer.len());
for i in self.len..len {
self.buffer[i] = val;
}
self.len = len;
}
}
impl<B> Deref for BufferPacket<B>
where
B: Deref<Target = [u8]>,
{
type Target = [u8];
fn deref(&self) -> &[u8] {
&self.buffer[0..self.len]
}
}
impl<B> DerefMut for BufferPacket<B>
where
B: Deref<Target = [u8]> + DerefMut,
{
fn deref_mut(&mut self) -> &mut [u8] {
&mut self.buffer[0..self.len]
}
}
================================================
FILE: src/compressed_bincode_channel.rs
================================================
use std::{
convert::TryInto,
marker::PhantomData,
task::{Context, Poll},
u16,
};
use bincode::Options as _;
use byteorder::{ByteOrder, LittleEndian};
use futures::{future, ready, task};
use serde::{de::DeserializeOwned, Serialize};
use snap::raw::{decompress_len, max_compress_len, Decoder as SnapDecoder, Encoder as SnapEncoder};
use thiserror::Error;
use crate::reliable_channel::{self, ReliableChannel};
/// The maximum serialized length of a `CompressedBincodeChannel` message. This also serves as
/// the maximum size of a compressed chunk of messages, but it is guaranteed that any message <=
/// `MAX_MESSAGE_LEN` can be sent, even if it cannot be compressed.
pub const MAX_MESSAGE_LEN: u16 = u16::MAX;
#[derive(Debug, Error)]
pub enum SendError {
/// Fatal internal channel error.
#[error("reliable channel error: {0}")]
ReliableChannelError(#[from] reliable_channel::Error),
/// Non-fatal error, no message is sent.
#[error("bincode serialization error: {0}")]
BincodeError(#[from] bincode::Error),
}
#[derive(Debug, Error)]
pub enum RecvError {
/// Fatal internal channel error.
#[error("reliable channel error: {0}")]
ReliableChannelError(#[from] reliable_channel::Error),
/// Fatal error, indicates corruption or protocol mismatch.
#[error("Snappy serialization error: {0}")]
SnapError(#[from] snap::Error),
/// Fatal error: the stream becomes desynchronized, since individual serialized messages are
/// not length prefixed.
#[error("bincode serialization error: {0}")]
BincodeError(#[from] bincode::Error),
}
/// Wraps a `ReliableChannel` and reliably sends a single message type serialized with
/// `bincode` and compressed with `snap`.
///
/// Messages are written in large blocks to aid compression. Messages are serialized end to end, and
/// when a block reaches the maximum configured size (or `flush` is called), the block is compressed
/// and sent as a single message.
///
/// This saves space from the compression and also from the reduced message header overhead per
/// individual message.
pub struct CompressedBincodeChannel {
channel: ReliableChannel,
send_chunk: Vec<u8>,
write_buffer: Vec<u8>,
write_pos: usize,
read_buffer: Vec<u8>,
read_pos: usize,
recv_chunk: Vec<u8>,
recv_pos: usize,
encoder: SnapEncoder,
decoder: SnapDecoder,
}
impl From<ReliableChannel> for CompressedBincodeChannel {
fn from(channel: ReliableChannel) -> Self {
Self::new(channel)
}
}
impl CompressedBincodeChannel {
pub fn new(channel: ReliableChannel) -> Self {
CompressedBincodeChannel {
channel,
send_chunk: Vec::new(),
write_buffer: Vec::new(),
write_pos: 0,
read_buffer: Vec::new(),
read_pos: 0,
recv_chunk: Vec::new(),
recv_pos: 0,
encoder: SnapEncoder::new(),
decoder: SnapDecoder::new(),
}
}
pub fn into_inner(self) -> ReliableChannel {
self.channel
}
/// Send the given message.
///
/// This method is cancel safe: it will never partially send a message, and it completes
/// immediately upon successfully queuing a message to send.
pub async fn send<M: Serialize>(&mut self, msg: &M) -> Result<(), SendError> {
future::poll_fn(|cx| self.poll_send(cx, msg)).await
}
pub fn try_send<M: Serialize>(&mut self, msg: &M) -> Result<bool, SendError> {
match self.poll_send(&mut Context::from_waker(task::noop_waker_ref()), msg) {
Poll::Pending => Ok(false),
Poll::Ready(Ok(())) => Ok(true),
Poll::Ready(Err(err)) => Err(err),
}
}
/// Finish sending the current block of messages, compressing them and sending them over the
/// reliable channel.
///
/// This method is cancel safe.
pub async fn flush(&mut self) -> Result<(), reliable_channel::Error> {
future::poll_fn(|cx| self.poll_flush(cx)).await
}
pub fn try_flush(&mut self) -> Result<bool, reliable_channel::Error> {
match self.poll_flush(&mut Context::from_waker(task::noop_waker_ref())) {
Poll::Pending => Ok(false),
Poll::Ready(Ok(())) => Ok(true),
Poll::Ready(Err(err)) => Err(err),
}
}
/// Receive a message.
///
/// This method is cancel safe: it will never partially receive a message and will never drop a
/// received message.
pub async fn recv<M: DeserializeOwned>(&mut self) -> Result<M, RecvError> {
future::poll_fn(|cx| self.poll_recv_ready(cx)).await?;
Ok(self.recv_next()?)
}
pub fn try_recv<M: DeserializeOwned>(&mut self) -> Result<Option<M>, RecvError> {
match self.poll_recv::<M>(&mut Context::from_waker(task::noop_waker_ref())) {
Poll::Pending => Ok(None),
Poll::Ready(Ok(val)) => Ok(Some(val)),
Poll::Ready(Err(err)) => Err(err),
}
}
pub fn poll_send<M: Serialize>(
&mut self,
cx: &mut Context,
msg: &M,
) -> Poll<Result<(), SendError>> {
let bincode_config = self.bincode_config();
let serialized_len = bincode_config.serialized_size(msg)?;
if self.send_chunk.len() as u64 + serialized_len > MAX_MESSAGE_LEN as u64 {
ready!(self.poll_write_send_chunk(cx))?;
}
bincode_config.serialize_into(&mut self.send_chunk, msg)?;
Poll::Ready(Ok(()))
}
pub fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), reliable_channel::Error>> {
ready!(self.poll_write_send_chunk(cx))?;
ready!(self.poll_finish_write(cx))?;
self.channel.flush()?;
Poll::Ready(Ok(()))
}
pub fn poll_recv<M: DeserializeOwned>(
&mut self,
cx: &mut Context,
) -> Poll<Result<M, RecvError>> {
ready!(self.poll_recv_ready(cx))?;
Poll::Ready(Ok(self.recv_next::<M>()?))
}
fn poll_recv_ready(&mut self, cx: &mut Context) -> Poll<Result<(), RecvError>> {
loop {
if self.recv_pos < self.recv_chunk.len() {
return Poll::Ready(Ok(()));
}
if self.read_pos < 3 {
self.read_buffer.resize(3, 0);
ready!(self.poll_finish_read(cx))?;
}
let compressed = self.read_buffer[0] != 0;
let chunk_len = LittleEndian::read_u16(&self.read_buffer[1..3]);
self.read_buffer.resize(chunk_len as usize + 3, 0);
ready!(self.poll_finish_read(cx))?;
if compressed {
let decompressed_len = decompress_len(&self.read_buffer[3..])?;
self.recv_chunk
.resize(decompressed_len.min(MAX_MESSAGE_LEN as usize), 0);
self.decoder
.decompress(&self.read_buffer[3..], &mut self.recv_chunk)?;
} else {
self.recv_chunk.resize(chunk_len as usize, 0);
self.recv_chunk.copy_from_slice(&self.read_buffer[3..]);
}
self.recv_pos = 0;
self.read_pos = 0;
}
}
fn recv_next<M: DeserializeOwned>(&mut self) -> Result<M, bincode::Error> {
let bincode_config = self.bincode_config();
let mut reader = &self.recv_chunk[self.recv_pos..];
let msg = bincode_config.deserialize_from(&mut reader)?;
self.recv_pos = self.recv_chunk.len() - reader.len();
Ok(msg)
}
fn poll_write_send_chunk(
&mut self,
cx: &mut Context,
) -> Poll<Result<(), reliable_channel::Error>> {
if !self.send_chunk.is_empty() {
ready!(self.poll_finish_write(cx))?;
self.write_pos = 0;
self.write_buffer
.resize(max_compress_len(self.send_chunk.len()) + 3, 0);
// Should not error, `write_buffer` is correctly sized and is less than `2^32 - 1`
let compressed_len = self
.encoder
.compress(&self.send_chunk, &mut self.write_buffer[3..])
.expect("unexpected snap encoder error");
self.write_buffer.truncate(compressed_len + 3);
if compressed_len >= self.send_chunk.len() {
// If our compressed size is worse than our uncompressed size, write the original
// chunk.
self.write_buffer.truncate(self.send_chunk.len() + 3);
self.write_buffer[3..].copy_from_slice(&self.send_chunk);
// An initial 0 means uncompressed
self.write_buffer[0] = 0;
LittleEndian::write_u16(
&mut self.write_buffer[1..3],
(self.send_chunk.len()).try_into().unwrap(),
);
} else {
// An initial 1 means compressed
self.write_buffer[0] = 1;
LittleEndian::write_u16(
&mut self.write_buffer[1..3],
(compressed_len).try_into().unwrap(),
);
}
self.send_chunk.clear();
}
Poll::Ready(Ok(()))
}
fn poll_finish_write(&mut self, cx: &mut Context) -> Poll<Result<(), reliable_channel::Error>> {
while self.write_pos < self.write_buffer.len() {
let len = ready!(self
.channel
.poll_write(cx, &self.write_buffer[self.write_pos..]))?;
self.write_pos += len;
}
Poll::Ready(Ok(()))
}
fn poll_finish_read(&mut self, cx: &mut Context) -> Poll<Result<(), reliable_channel::Error>> {
while self.read_pos < self.read_buffer.len() {
let len = ready!(self
.channel
.poll_read(cx, &mut self.read_buffer[self.read_pos..]))?;
self.read_pos += len;
}
Poll::Ready(Ok(()))
}
fn bincode_config(&self) -> impl bincode::Options + Copy {
bincode::options().with_limit(MAX_MESSAGE_LEN as u64)
}
}
/// Wrapper over a `CompressedBincodeChannel` that only allows a single message type.
pub struct CompressedTypedChannel<M> {
channel: CompressedBincodeChannel,
_phantom: PhantomData<M>,
}
impl<M> From<ReliableChannel> for CompressedTypedChannel<M> {
fn from(channel: ReliableChannel) -> Self {
Self::new(channel)
}
}
impl<M> CompressedTypedChannel<M> {
pub fn new(channel: ReliableChannel) -> Self {
CompressedTypedChannel {
channel: CompressedBincodeChannel::new(channel),
_phantom: PhantomData,
}
}
pub fn into_inner(self) -> ReliableChannel {
self.channel.into_inner()
}
pub async fn flush(&mut self) -> Result<(), reliable_channel::Error> {
self.channel.flush().await
}
pub fn try_flush(&mut self) -> Result<bool, reliable_channel::Error> {
self.channel.try_flush()
}
pub fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), reliable_channel::Error>> {
self.channel.poll_flush(cx)
}
}
impl<M: Serialize> CompressedTypedChannel<M> {
pub async fn send(&mut self, msg: &M) -> Result<(), SendError> {
self.channel.send(msg).await
}
pub fn try_send(&mut self, msg: &M) -> Result<bool, SendError> {
self.channel.try_send(msg)
}
pub fn poll_send(&mut self, cx: &mut Context, msg: &M) -> Poll<Result<(), SendError>> {
self.channel.poll_send(cx, msg)
}
}
impl<M: DeserializeOwned> CompressedTypedChannel<M> {
pub async fn recv(&mut self) -> Result<M, RecvError> {
self.channel.recv::<M>().await
}
pub fn try_recv(&mut self) -> Result<Option<M>, RecvError> {
self.channel.try_recv::<M>()
}
pub fn poll_recv(&mut self, cx: &mut Context) -> Poll<Result<M, RecvError>> {
self.channel.poll_recv::<M>(cx)
}
}
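The chunk header written by `poll_write_send_chunk` and parsed by `poll_recv_ready` is three bytes: a flag byte (1 = Snappy-compressed, 0 = stored uncompressed) followed by a little-endian `u16` payload length. Below is a std-only sketch of that framing with the compression step elided; the helper names are illustrative, not part of this crate.

```rust
// Frame a chunk with the 3-byte header: flag byte + little-endian u16 length.
fn frame_chunk(payload: &[u8], compressed: bool) -> Vec<u8> {
    assert!(payload.len() <= u16::MAX as usize, "chunk exceeds u16::MAX");
    let len = payload.len() as u16;
    let mut buf = Vec::with_capacity(payload.len() + 3);
    buf.push(compressed as u8); // 1 = compressed, 0 = stored uncompressed
    buf.extend_from_slice(&len.to_le_bytes());
    buf.extend_from_slice(payload);
    buf
}

// Parse the header back out; the payload is the `len` bytes that follow it.
fn parse_header(buf: &[u8]) -> (bool, usize) {
    let compressed = buf[0] != 0;
    let len = u16::from_le_bytes([buf[1], buf[2]]) as usize;
    (compressed, len)
}
```

The uncompressed fallback exists because Snappy can expand incompressible input; the sender compares the compressed size against the original and stores whichever is smaller, flipping the flag byte accordingly.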
================================================
FILE: src/event_watch.rs
================================================
use std::{
sync::{
atomic::{self, AtomicBool},
Arc,
},
task::Poll,
};
use futures::{future, task::AtomicWaker};
/// Creates a multi-producer single-consumer stream of events with certain beneficial properties.
///
/// If a receiver is waiting on a signaled event, calling `Sender::signal` will wake up the
/// receiver as normal. However, if the receiver is *not* currently waiting and `Sender::signal`
/// has been called since the last time `Receiver::wait` was called, then the next call to
/// `Receiver::wait` will resolve immediately. In this way, the receiver can never miss an
/// event.
///
/// In other words, calling `Sender::signal` will always do one of two things:
/// 1) Wake up a currently waiting receiver
/// 2) Make the next call to `Receiver::wait` resolve immediately
///
/// Multiple calls to `Sender::signal` will however *not* cause *multiple* calls to
/// `Receiver::wait` to resolve immediately, only the very next call to `Receiver::wait`.
///
/// You can look at this as a specialized version of a bounded channel of `()` with capacity 1.
pub fn channel() -> (Sender, Receiver) {
let state = Arc::new(State {
waker: AtomicWaker::new(),
signaled: AtomicBool::new(false),
});
let sender_state = Arc::clone(&state);
(Sender(sender_state), Receiver(state))
}
#[derive(Debug, Clone)]
pub struct Sender(Arc<State>);
impl Sender {
pub fn signal(&self) {
self.0.signaled.store(true, atomic::Ordering::SeqCst);
self.0.waker.wake()
}
}
#[derive(Debug)]
pub struct Receiver(Arc<State>);
impl Receiver {
pub async fn wait(&mut self) {
future::poll_fn(|cx| {
if self.0.signaled.swap(false, atomic::Ordering::SeqCst) {
Poll::Ready(())
} else {
self.0.waker.register(cx.waker());
if self.0.signaled.swap(false, atomic::Ordering::SeqCst) {
Poll::Ready(())
} else {
Poll::Pending
}
}
})
.await
}
}
#[derive(Debug)]
struct State {
waker: AtomicWaker,
signaled: AtomicBool,
}
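The coalescing behavior described above comes down to an `AtomicBool` that `signal` sets and `wait` consumes with `swap`. A std-only sketch of just that flag, without the waker machinery (the `Flag` name is illustrative, not part of this crate):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// The coalescing core of `event_watch`: a consumable boolean flag.
struct Flag(AtomicBool);

impl Flag {
    // Mark that at least one event has occurred since the last consume.
    fn signal(&self) {
        self.0.store(true, Ordering::SeqCst);
    }

    // Returns true if at least one signal arrived since the last consume,
    // clearing the flag; repeated signals coalesce into a single `true`.
    fn consume(&self) -> bool {
        self.0.swap(false, Ordering::SeqCst)
    }
}
```

In the real `Receiver::wait`, the flag is checked again *after* registering the waker, which closes the race where a signal lands between the first check and registration.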
================================================
FILE: src/lib.rs
================================================
mod bandwidth_limiter;
pub mod buffer;
pub mod compressed_bincode_channel;
mod event_watch;
pub mod message_channels;
pub mod packet;
pub mod packet_multiplexer;
pub mod reliable_bincode_channel;
pub mod reliable_channel;
mod ring_buffer;
pub mod runtime;
pub mod spsc;
pub mod unreliable_bincode_channel;
pub mod unreliable_channel;
mod windows;
pub use self::{
buffer::{BufferPacket, BufferPacketPool, BufferPool},
compressed_bincode_channel::{CompressedBincodeChannel, CompressedTypedChannel},
message_channels::{
MessageChannelMode, MessageChannelSettings, MessageChannels, MessageChannelsBuilder,
},
packet::{Packet, PacketPool, MAX_PACKET_LEN},
packet_multiplexer::{
ChannelStatistics, ChannelTotals, IncomingMultiplexedPackets, MuxPacket, MuxPacketPool,
OutgoingMultiplexedPackets, PacketChannel, PacketMultiplexer,
},
reliable_bincode_channel::{ReliableBincodeChannel, ReliableTypedChannel},
reliable_channel::ReliableChannel,
runtime::{Spawn, Timer},
unreliable_bincode_channel::{UnreliableBincodeChannel, UnreliableTypedChannel},
unreliable_channel::UnreliableChannel,
};
================================================
FILE: src/message_channels.rs
================================================
use std::{
any::{type_name, Any, TypeId},
collections::{hash_map, HashMap, HashSet},
error::Error,
task::{Context, Poll},
};
use futures::{
future::{self, BoxFuture, RemoteHandle},
ready, select,
stream::FuturesUnordered,
FutureExt, SinkExt, StreamExt, TryFutureExt,
};
use rustc_hash::FxHashMap;
use serde::{de::DeserializeOwned, Serialize};
use thiserror::Error;
use crate::{
event_watch,
packet::PacketPool,
packet_multiplexer::{ChannelStatistics, PacketChannel, PacketMultiplexer},
reliable_channel,
runtime::{Spawn, Timer},
spsc::{self, TryRecvError},
unreliable_channel, CompressedTypedChannel, MuxPacketPool, ReliableChannel,
ReliableTypedChannel, UnreliableChannel, UnreliableTypedChannel,
};
// TODO: Message channels are currently always full-duplex, because the unreliable / reliable
// channels backing them are always full-duplex. We could add configuration to limit a channel to
// send or receive only, and to error if the remote sends to a send-only channel.
#[derive(Debug, Clone, PartialEq)]
pub struct MessageChannelSettings {
pub channel: PacketChannel,
pub channel_mode: MessageChannelMode,
/// The buffer size for the spsc channel that transports messages of this type to / from the
/// network task.
pub message_buffer_size: usize,
/// The buffer size for the spsc channel of packets for this message type that transports
/// packets to / from the packet multiplexer.
pub packet_buffer_size: usize,
}
#[derive(Debug, Clone, PartialEq)]
pub enum MessageChannelMode {
Unreliable(unreliable_channel::Settings),
Reliable(reliable_channel::Settings),
Compressed(reliable_channel::Settings),
}
pub trait ChannelMessage: Serialize + DeserializeOwned + Send + Sync + 'static {}
impl<T: Serialize + DeserializeOwned + Send + Sync + 'static> ChannelMessage for T {}
#[derive(Debug, Error)]
pub enum ChannelAlreadyRegistered {
#[error("message type already registered")]
MessageType,
#[error("channel already registered")]
Channel,
}
pub type TaskError = Box<dyn Error + Send + Sync>;
#[derive(Debug, Error)]
#[error("network task for message type {type_name:?} has errored: {error}")]
pub struct ChannelTaskError {
pub type_name: &'static str,
pub error: TaskError,
}
pub struct MessageChannelsBuilder<S, T, P>
where
S: Spawn,
T: Timer,
P: PacketPool,
{
spawn: S,
timer: T,
pool: P,
channels: HashSet<PacketChannel>,
register_fns: HashMap<TypeId, (&'static str, MessageChannelSettings, RegisterFn<S, T, P>)>,
}
impl<S, T, P> MessageChannelsBuilder<S, T, P>
where
S: Spawn,
T: Timer,
P: PacketPool,
{
pub fn new(spawn: S, timer: T, pool: P) -> Self {
MessageChannelsBuilder {
spawn,
timer,
pool,
channels: HashSet::new(),
register_fns: HashMap::new(),
}
}
}
impl<S, T, P> MessageChannelsBuilder<S, T, P>
where
S: Spawn + Clone + 'static,
T: Timer + Clone + 'static,
P: PacketPool + Clone + Send + 'static,
P::Packet: Send,
{
/// Register this message type on the constructed `MessageChannels`, using the given channel
/// settings.
///
/// Can only be called once per message type; it will error if called with the same message
/// type or channel number more than once.
pub fn register<M: ChannelMessage>(
&mut self,
settings: MessageChannelSettings,
) -> Result<(), ChannelAlreadyRegistered> {
if !self.channels.insert(settings.channel) {
return Err(ChannelAlreadyRegistered::Channel);
}
match self.register_fns.entry(TypeId::of::<M>()) {
hash_map::Entry::Occupied(_) => Err(ChannelAlreadyRegistered::MessageType),
hash_map::Entry::Vacant(vacant) => {
vacant.insert((
type_name::<M>(),
settings,
register_message_type::<S, T, P, M>,
));
Ok(())
}
}
}
/// Build a `MessageChannels` instance that can send and receive all of the registered message
/// types via channels on the given packet multiplexer.
pub fn build(self, multiplexer: &mut PacketMultiplexer<P::Packet>) -> MessageChannels {
let Self {
spawn,
timer,
pool,
register_fns,
..
} = self;
let mut channels_map = ChannelsMap::default();
let mut tasks: FuturesUnordered<_> = register_fns
.into_iter()
.map(|(_, (type_name, settings, register_fn))| {
register_fn(
settings,
spawn.clone(),
timer.clone(),
pool.clone(),
multiplexer,
&mut channels_map,
)
.map_err(move |error| ChannelTaskError { type_name, error })
})
.collect();
let (remote, remote_handle) = async move {
match tasks.next().await {
None => ChannelTaskError {
type_name: "none",
error: "no channel tasks to run".to_owned().into(),
},
Some(Ok(())) => panic!("channel tasks only return errors"),
Some(Err(err)) => err,
}
}
.remote_handle();
spawn.spawn(remote);
MessageChannels {
disconnected: false,
task: remote_handle,
channels: channels_map,
}
}
}
#[derive(Debug, Error)]
#[error("no such message type `{0}` registered")]
pub struct MessageTypeUnregistered(&'static str);
#[derive(Debug, Error)]
#[error("`MessageChannels` instance has become disconnected")]
pub struct MessageChannelsDisconnected;
#[derive(Debug, Error)]
pub enum TryAsyncMessageError {
#[error(transparent)]
Unregistered(#[from] MessageTypeUnregistered),
#[error(transparent)]
Disconnected(#[from] MessageChannelsDisconnected),
}
/// Manages a set of channels through a packet multiplexer, where each channel is associated with
/// exactly one message type.
///
/// Acts as a bridge between the sync and async worlds. Provides sync methods to send and receive
/// messages that do not block or error. Error handling is simplified: if any of the backing
/// tasks end in an error, or if the backing packet channels are dropped, the `MessageChannels`
/// will permanently go into a "disconnected" state.
///
/// Additionally still provides async versions of methods to send and receive messages that share
/// the same simplified error handling, which may be useful during startup or shutdown.
#[derive(Debug)]
pub struct MessageChannels {
disconnected: bool,
task: RemoteHandle<ChannelTaskError>,
channels: ChannelsMap,
}
impl MessageChannels {
/// Returns whether this `MessageChannels` has become disconnected because the backing network
/// task has errored.
///
/// Once it has become disconnected, a `MessageChannels` is permanently in this errored state.
/// You can receive the error from the task by calling `MessageChannels::recv_err`.
pub fn is_connected(&self) -> bool {
!self.disconnected
}
/// Consume this `MessageChannels` and receive the networking task shutdown error.
///
/// If this `MessageChannels` is disconnected, returns the error that caused it to become
/// disconnected. If it is not disconnected, it will become disconnected by calling this and
/// return that error.
pub async fn recv_err(self) -> ChannelTaskError {
drop(self.channels);
self.task.await
}
/// Send the given message on the channel associated with its message type.
///
/// In order to ensure delivery, `flush` should be called for the same message type to
/// immediately send any buffered messages.
///
/// If the spsc channel for this message type is full, the message is returned back to the
/// caller. If the message was successfully put onto the outgoing spsc channel, this returns
/// `None`.
///
/// # Panics
/// Panics if this message type was not registered with the `MessageChannelsBuilder` used to
/// build this `MessageChannels` instance.
pub fn send<M: ChannelMessage>(&mut self, message: M) -> Option<M> {
self.try_send(message).unwrap()
}
/// Like `MessageChannels::send` but errors instead of panicking when the message type is
/// unregistered.
pub fn try_send<M: ChannelMessage>(
&mut self,
message: M,
) -> Result<Option<M>, MessageTypeUnregistered> {
let channels = self.channels.get_mut::<M>()?;
Ok(if self.disconnected {
Some(message)
} else if let Err(err) = channels.outgoing_sender.try_send(message) {
if err.is_disconnected() {
self.disconnected = true;
}
Some(err.into_inner())
} else {
None
})
}
/// An async version of `MessageChannels::send`: sends the given message on the
/// channel associated with its message type but waits if the channel is full. Like
/// `MessageChannels::send`, `MessageChannels::flush` must still be called afterwards in order
/// to ensure delivery.
///
/// This method is cancel safe: it will never partially send a message, though canceling it may
/// or may not buffer a message to be sent.
///
/// # Panics
/// Panics if this message type is not registered.
pub async fn async_send<M: ChannelMessage>(
&mut self,
message: M,
) -> Result<(), MessageChannelsDisconnected> {
self.try_async_send(message).await.map_err(|e| match e {
TryAsyncMessageError::Unregistered(e) => panic!("{}", e),
TryAsyncMessageError::Disconnected(e) => e,
})
}
/// Like `MessageChannels::async_send` but errors instead of panicking when the message type is
/// unregistered.
pub async fn try_async_send<M: ChannelMessage>(
&mut self,
message: M,
) -> Result<(), TryAsyncMessageError> {
let channels = self.channels.get_mut::<M>()?;
if self.disconnected {
Err(MessageChannelsDisconnected.into())
} else {
let res = channels.outgoing_sender.send(message).await;
if res.is_err() {
self.disconnected = true;
Err(MessageChannelsDisconnected.into())
} else {
Ok(())
}
}
}
/// Immediately send any buffered messages for this message type. Messages may not be delivered
/// unless `flush` is called after any `send` calls.
///
/// # Panics
/// Panics if this message type was not registered with the `MessageChannelsBuilder` used to
/// build this `MessageChannels` instance.
pub fn flush<M: ChannelMessage>(&mut self) {
self.try_flush::<M>().unwrap();
}
/// Like `MessageChannels::flush` but errors instead of panicking when the message type is
/// unregistered.
pub fn try_flush<M: ChannelMessage>(&mut self) -> Result<(), MessageTypeUnregistered> {
self.channels.get_mut::<M>()?.flush_sender.signal();
Ok(())
}
/// Receive an incoming message on the channel associated with this message type, if one is
/// available.
///
/// # Panics
/// Panics if this message type was not registered with the `MessageChannelsBuilder` used to
/// build this `MessageChannels` instance.
pub fn recv<M: ChannelMessage>(&mut self) -> Option<M> {
self.try_recv().unwrap()
}
/// Like `MessageChannels::recv` but errors instead of panicking when the message type is
/// unregistered.
pub fn try_recv<M: ChannelMessage>(&mut self) -> Result<Option<M>, MessageTypeUnregistered> {
let channels = self.channels.get_mut::<M>()?;
Ok(if self.disconnected {
None
} else {
match channels.incoming_receiver.try_recv() {
Ok(msg) => Some(msg),
Err(err) => {
if err.is_disconnected() {
self.disconnected = true;
}
None
}
}
})
}
/// An async version of `MessageChannels::recv`: receives an incoming message on the channel
/// associated with its message type, waiting if there is no message available.
///
/// This method is cancel safe: it will never partially read a message or drop received
/// messages.
///
/// # Panics
/// Panics if this message type is not registered.
pub async fn async_recv<M: ChannelMessage>(
&mut self,
) -> Result<M, MessageChannelsDisconnected> {
self.try_async_recv().await.map_err(|e| match e {
TryAsyncMessageError::Unregistered(e) => panic!("{}", e),
TryAsyncMessageError::Disconnected(e) => e,
})
}
/// Like `MessageChannels::async_recv` but errors instead of panicking when the message type is
/// unregistered.
pub async fn try_async_recv<M: ChannelMessage>(&mut self) -> Result<M, TryAsyncMessageError> {
let channels = self.channels.get_mut::<M>()?;
if self.disconnected {
Err(MessageChannelsDisconnected.into())
} else if let Some(message) = channels.incoming_receiver.next().await {
Ok(message)
} else {
self.disconnected = true;
Err(MessageChannelsDisconnected.into())
}
}
pub fn statistics<M: ChannelMessage>(&self) -> &ChannelStatistics {
self.try_statistics::<M>().unwrap()
}
pub fn try_statistics<M: ChannelMessage>(
&self,
) -> Result<&ChannelStatistics, MessageTypeUnregistered> {
Ok(&self.channels.get::<M>()?.statistics)
}
}
type ChannelTask = BoxFuture<'static, Result<(), TaskError>>;
type RegisterFn<S, T, P> = fn(
MessageChannelSettings,
S,
T,
P,
&mut PacketMultiplexer<<P as PacketPool>::Packet>,
&mut ChannelsMap,
) -> ChannelTask;
#[derive(Debug, Error)]
#[error("channel has been disconnected")]
struct ChannelDisconnected;
struct ChannelSet<M> {
outgoing_sender: spsc::Sender<M>,
incoming_receiver: spsc::Receiver<M>,
flush_sender: event_watch::Sender,
statistics: ChannelStatistics,
}
#[derive(Debug, Default)]
struct ChannelsMap(FxHashMap<TypeId, Box<dyn Any + Send + Sync>>);
impl ChannelsMap {
fn insert<M: ChannelMessage>(&mut self, channel_set: ChannelSet<M>) -> bool {
self.0
.insert(TypeId::of::<M>(), Box::new(channel_set))
.is_none()
}
fn get<M: ChannelMessage>(&self) -> Result<&ChannelSet<M>, MessageTypeUnregistered> {
Ok(self
.0
.get(&TypeId::of::<M>())
.ok_or_else(|| MessageTypeUnregistered(type_name::<M>()))?
.downcast_ref()
.unwrap())
}
fn get_mut<M: ChannelMessage>(
&mut self,
) -> Result<&mut ChannelSet<M>, MessageTypeUnregistered> {
Ok(self
.0
.get_mut(&TypeId::of::<M>())
.ok_or_else(|| MessageTypeUnregistered(type_name::<M>()))?
.downcast_mut()
.unwrap())
}
}
fn register_message_type<S, T, P, M>(
settings: MessageChannelSettings,
spawn: S,
timer: T,
packet_pool: P,
multiplexer: &mut PacketMultiplexer<P::Packet>,
channels_map: &mut ChannelsMap,
) -> ChannelTask
where
S: Spawn + Clone + 'static,
T: Timer + Clone + 'static,
P: PacketPool + Clone + Send + 'static,
P::Packet: Send,
M: ChannelMessage,
{
let (incoming_sender, incoming_receiver) = spsc::channel::<M>(settings.message_buffer_size);
let (outgoing_sender, outgoing_receiver) = spsc::channel::<M>(settings.message_buffer_size);
let (flush_sender, flush_receiver) = event_watch::channel();
let (channel_sender, channel_receiver, statistics) = multiplexer
.open_channel(settings.channel, settings.packet_buffer_size)
.expect("duplicate packet channel");
let packet_pool = MuxPacketPool::new(packet_pool);
let channel_task = match settings.channel_mode {
MessageChannelMode::Unreliable(unreliable_settings) => channel_task(
UnreliableTypedChannel::new(UnreliableChannel::new(
timer,
packet_pool,
unreliable_settings,
channel_sender,
channel_receiver,
)),
incoming_sender,
outgoing_receiver,
flush_receiver,
)
.boxed(),
MessageChannelMode::Reliable(reliable_settings) => channel_task(
ReliableTypedChannel::new(ReliableChannel::new(
spawn,
timer,
packet_pool,
reliable_settings,
channel_sender,
channel_receiver,
)),
incoming_sender,
outgoing_receiver,
flush_receiver,
)
.boxed(),
MessageChannelMode::Compressed(reliable_settings) => channel_task(
CompressedTypedChannel::new(ReliableChannel::new(
spawn,
timer,
packet_pool,
reliable_settings,
channel_sender,
channel_receiver,
)),
incoming_sender,
outgoing_receiver,
flush_receiver,
)
.boxed(),
};
channels_map.insert(ChannelSet::<M> {
outgoing_sender,
flush_sender,
incoming_receiver,
statistics,
});
channel_task
}
trait MessageBincodeChannel<M: ChannelMessage> {
fn poll_recv(&mut self, cx: &mut Context) -> Poll<Result<M, TaskError>>;
fn poll_send(&mut self, cx: &mut Context, msg: &M) -> Poll<Result<(), TaskError>>;
fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), TaskError>>;
}
impl<T, P, M> MessageBincodeChannel<M> for UnreliableTypedChannel<T, P, M>
where
T: Timer,
P: PacketPool,
M: ChannelMessage,
{
fn poll_recv(&mut self, cx: &mut Context) -> Poll<Result<M, TaskError>> {
UnreliableTypedChannel::poll_recv(self, cx).map_err(|e| e.into())
}
fn poll_send(&mut self, cx: &mut Context, msg: &M) -> Poll<Result<(), TaskError>> {
ready!(self.poll_send_ready(cx))?;
Poll::Ready(Ok(self.start_send(msg)?))
}
fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), TaskError>> {
UnreliableTypedChannel::poll_flush(self, cx).map_err(|e| e.into())
}
}
impl<M: ChannelMessage> MessageBincodeChannel<M> for ReliableTypedChannel<M> {
fn poll_recv(&mut self, cx: &mut Context) -> Poll<Result<M, TaskError>> {
ReliableTypedChannel::poll_recv(self, cx).map_err(|e| e.into())
}
fn poll_send(&mut self, cx: &mut Context, msg: &M) -> Poll<Result<(), TaskError>> {
ready!(self.poll_send_ready(cx))?;
Poll::Ready(Ok(self.start_send(msg)?))
}
fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), TaskError>> {
ReliableTypedChannel::poll_flush(self, cx).map_err(|e| e.into())
}
}
impl<M: ChannelMessage> MessageBincodeChannel<M> for CompressedTypedChannel<M> {
fn poll_recv(&mut self, cx: &mut Context) -> Poll<Result<M, TaskError>> {
CompressedTypedChannel::poll_recv(self, cx).map_err(|e| e.into())
}
fn poll_send(&mut self, cx: &mut Context, msg: &M) -> Poll<Result<(), TaskError>> {
CompressedTypedChannel::poll_send(self, cx, msg).map_err(|e| e.into())
}
fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), TaskError>> {
CompressedTypedChannel::poll_flush(self, cx).map_err(|e| e.into())
}
}
async fn channel_task<M: ChannelMessage>(
mut channel: impl MessageBincodeChannel<M>,
mut incoming_message_sender: spsc::Sender<M>,
mut outgoing_message_receiver: spsc::Receiver<M>,
mut flush_receiver: event_watch::Receiver,
) -> Result<(), TaskError> {
enum Next<M> {
Incoming(M),
Outgoing(M),
Flush,
}
loop {
let next = {
select! {
incoming = future::poll_fn(|cx| channel.poll_recv(cx)).fuse() => {
Next::Incoming(incoming?)
}
outgoing = outgoing_message_receiver.next().fuse() => {
Next::Outgoing(outgoing.ok_or(ChannelDisconnected)?)
}
_ = flush_receiver.wait().fuse() => Next::Flush,
}
};
match next {
Next::Incoming(incoming) => incoming_message_sender.send(incoming).await?,
Next::Outgoing(outgoing) => {
future::poll_fn(|cx| channel.poll_send(cx, &outgoing)).await?
}
Next::Flush => loop {
match outgoing_message_receiver.try_recv() {
Ok(outgoing) => future::poll_fn(|cx| channel.poll_send(cx, &outgoing)).await?,
Err(TryRecvError::Disconnected) => return Err(ChannelDisconnected.into()),
Err(TryRecvError::Empty) => {
future::poll_fn(|cx| channel.poll_flush(cx)).await?;
break;
}
}
},
}
}
}
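The `Next::Flush` arm above drains every queued outgoing message before flushing the underlying channel exactly once. That drain-then-flush ordering can be sketched synchronously with illustrative types (`SketchChannel` and `handle_flush` are not crate APIs; the real task is async and reads from an `spsc::Receiver`):

```rust
use std::collections::VecDeque;

// Synchronous sketch of the Flush arm of `channel_task`: push all queued
// outgoing messages into the channel buffer first, then flush once.
struct SketchChannel {
    buffered: Vec<String>,
    flushed: Vec<String>,
}

impl SketchChannel {
    fn send(&mut self, msg: String) {
        self.buffered.push(msg);
    }
    fn flush(&mut self) {
        self.flushed.append(&mut self.buffered);
    }
}

fn handle_flush(queue: &mut VecDeque<String>, channel: &mut SketchChannel) {
    // Drain every pending message before flushing, so nothing queued before
    // the flush request is left behind in the queue.
    while let Some(outgoing) = queue.pop_front() {
        channel.send(outgoing);
    }
    channel.flush();
}

fn main() {
    let mut queue: VecDeque<String> = ["a", "b"].into_iter().map(String::from).collect();
    let mut channel = SketchChannel { buffered: Vec::new(), flushed: Vec::new() };
    handle_flush(&mut queue, &mut channel);
    assert!(queue.is_empty());
    assert_eq!(channel.flushed, ["a", "b"]);
}
```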
================================================
FILE: src/packet.rs
================================================
use std::ops::{Deref, DerefMut};
/// The maximum usable packet size by `turbulence`.
///
/// It is not useful for an implementation of `PacketPool` to return packets with a larger
/// capacity than this, since `turbulence` may not be able to use the extra capacity.
pub const MAX_PACKET_LEN: u16 = 32768;
/// A trait for packet buffers used by `turbulence`.
pub trait Packet: Deref<Target = [u8]> + DerefMut {
/// Resizes the packet to the given length, which must be at most the static capacity.
fn resize(&mut self, len: usize, val: u8);
fn extend(&mut self, other: &[u8]) {
let cur_len = self.len();
let new_len = cur_len + other.len();
self.resize(new_len, 0);
self[cur_len..new_len].copy_from_slice(other);
}
fn truncate(&mut self, len: usize) {
let len = len.min(self.len());
self.resize(len, 0);
}
fn clear(&mut self) {
self.resize(0, 0);
}
}
/// Trait for packet allocation and pooling.
///
/// All packets that are allocated from `turbulence` are allocated through this interface.
///
/// Packets must implement the `Packet` trait and should all have the same capacity: the MTU for
/// whatever the underlying transport is, up to `MAX_PACKET_LEN` in size.
pub trait PacketPool {
type Packet: Packet;
/// Static maximum capacity of packets returned by this pool.
fn capacity(&self) -> usize;
fn acquire(&mut self) -> Self::Packet;
}
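The `Packet` trait only requires `resize` (plus `Deref`/`DerefMut`); `extend`, `truncate`, and `clear` are provided in terms of it. A minimal conforming packet might be backed by a `Vec` with a fixed cap. This `VecPacket` is an illustrative sketch, not a type provided by the crate:

```rust
use std::ops::{Deref, DerefMut};

// Illustrative fixed-capacity packet; not part of the crate.
struct VecPacket {
    buf: Vec<u8>,
    capacity: usize,
}

impl VecPacket {
    fn new(capacity: usize) -> Self {
        VecPacket { buf: Vec::with_capacity(capacity), capacity }
    }

    // Mirrors `Packet::resize`: the new length must not exceed the static capacity.
    fn resize(&mut self, len: usize, val: u8) {
        assert!(len <= self.capacity, "resize past static capacity");
        self.buf.resize(len, val);
    }

    // Mirrors the provided `Packet::extend` default method.
    fn extend(&mut self, other: &[u8]) {
        let cur_len = self.buf.len();
        let new_len = cur_len + other.len();
        self.resize(new_len, 0);
        self.buf[cur_len..new_len].copy_from_slice(other);
    }
}

impl Deref for VecPacket {
    type Target = [u8];
    fn deref(&self) -> &[u8] {
        &self.buf
    }
}

impl DerefMut for VecPacket {
    fn deref_mut(&mut self) -> &mut [u8] {
        &mut self.buf
    }
}

fn main() {
    let mut p = VecPacket::new(8);
    p.extend(b"hello");
    assert_eq!(&p[..], &b"hello"[..]);
    p.resize(2, 0);
    assert_eq!(&p[..], &b"he"[..]);
}
```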
================================================
FILE: src/packet_multiplexer.rs
================================================
use std::{
collections::{hash_map, HashMap},
fmt,
ops::{Deref, DerefMut},
pin::Pin,
sync::{
atomic::{AtomicU64, Ordering},
Arc,
},
task::{Context, Poll},
u8,
};
use futures::{stream::SelectAll, Sink, SinkExt, Stream, StreamExt};
use rustc_hash::{FxHashMap, FxHashSet};
use thiserror::Error;
use crate::{
packet::{Packet, PacketPool},
spsc,
};
pub type PacketChannel = u8;
/// A wrapper over a `Packet` that reserves the first byte for the channel.
#[derive(Debug)]
pub struct MuxPacket<P>(P);
impl<P> Packet for MuxPacket<P>
where
P: Packet,
{
fn resize(&mut self, len: usize, val: u8) {
self.0.resize(len + 1, val);
}
fn extend(&mut self, other: &[u8]) {
self.0.extend(other);
}
fn truncate(&mut self, len: usize) {
self.0.truncate(len + 1);
}
fn clear(&mut self) {
self.0.resize(1, 0);
}
}
impl<P> Deref for MuxPacket<P>
where
P: Packet,
{
type Target = [u8];
fn deref(&self) -> &[u8] {
&self.0[1..]
}
}
impl<P> DerefMut for MuxPacket<P>
where
P: Packet,
{
fn deref_mut(&mut self) -> &mut [u8] {
&mut self.0[1..]
}
}
#[derive(Debug, Clone)]
pub struct MuxPacketPool<P>(P);
impl<P> MuxPacketPool<P> {
pub fn new(packet_pool: P) -> Self {
MuxPacketPool(packet_pool)
}
}
impl<P> PacketPool for MuxPacketPool<P>
where
P: PacketPool,
{
type Packet = MuxPacket<P::Packet>;
fn capacity(&self) -> usize {
self.0.capacity() - 1
}
fn acquire(&mut self) -> MuxPacket<P::Packet> {
let mut packet = self.0.acquire();
packet.resize(1, 0);
MuxPacket(packet)
}
}
impl<P> From<P> for MuxPacketPool<P> {
fn from(pool: P) -> MuxPacketPool<P> {
MuxPacketPool(pool)
}
}
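`MuxPacket` shifts every index by one so that byte 0 of the underlying packet can carry the `PacketChannel` id (written in `ChannelReceiver::poll_next`, read in `try_send` via `packet[0]`). That one-byte header layout can be shown in isolation; `tag_channel` and `split_channel` here are illustrative helpers, not crate APIs:

```rust
// Illustrative helpers (not crate APIs) showing the one-byte channel header
// that MuxPacket reserves at the front of each underlying packet.

// Prefix a payload with its channel id, as the multiplexer does on send.
fn tag_channel(channel: u8, payload: &[u8]) -> Vec<u8> {
    let mut packet = Vec::with_capacity(payload.len() + 1);
    packet.push(channel); // byte 0: PacketChannel
    packet.extend_from_slice(payload);
    packet
}

// Split an incoming packet back into (channel, payload), as routing does.
fn split_channel(packet: &[u8]) -> (u8, &[u8]) {
    (packet[0], &packet[1..])
}

fn main() {
    let packet = tag_channel(7, b"data");
    assert_eq!(packet, [7, b'd', b'a', b't', b'a']);
    let (channel, payload) = split_channel(&packet);
    assert_eq!(channel, 7);
    assert_eq!(payload, b"data");
}
```

This is also why `MuxPacketPool::capacity` reports one byte less than the inner pool's capacity.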
#[derive(Debug, Error)]
#[error("packet channel has already been opened")]
pub struct DuplicateChannel;
#[derive(Debug, Copy, Clone)]
pub struct ChannelTotals {
pub packets: u64,
pub bytes: u64,
}
#[derive(Debug, Clone)]
pub struct ChannelStatistics(Arc<ChannelStatisticsData>);
impl ChannelStatistics {
pub fn incoming_totals(&self) -> ChannelTotals {
ChannelTotals {
packets: self.0.incoming_packets.load(Ordering::Relaxed),
bytes: self.0.incoming_bytes.load(Ordering::Relaxed),
}
}
pub fn outgoing_totals(&self) -> ChannelTotals {
ChannelTotals {
packets: self.0.outgoing_packets.load(Ordering::Relaxed),
bytes: self.0.outgoing_bytes.load(Ordering::Relaxed),
}
}
}
/// Routes packets marked with a channel header from a single `Sink` / `Stream` pair to a set of
/// `Sink` / `Stream` pairs for each channel.
///
/// Also monitors bandwidth on each channel independently, and returns a `ChannelStatistics` handle
/// to query bandwidth totals for that specific channel.
pub struct PacketMultiplexer<P> {
incoming: HashMap<PacketChannel, ChannelSender<P>>,
outgoing: SelectAll<ChannelReceiver<P>>,
}
impl<P> PacketMultiplexer<P>
where
P: Packet,
{
pub fn new() -> PacketMultiplexer<P> {
PacketMultiplexer {
incoming: HashMap::new(),
outgoing: SelectAll::new(),
}
}
/// Open a multiplexed packet channel, producing a sender for outgoing `MuxPacket`s on this
/// channel, and a receiver for incoming `MuxPacket`s on this channel.
///
/// The `buffer_size` parameter controls the buffer size requested when creating the spsc
/// channels for the returned `Sender` and `Receiver`.
pub fn open_channel(
&mut self,
channel: PacketChannel,
buffer_size: usize,
) -> Result<
(
spsc::Sender<MuxPacket<P>>,
spsc::Receiver<MuxPacket<P>>,
ChannelStatistics,
),
DuplicateChannel,
> {
let statistics = Arc::new(ChannelStatisticsData::default());
match self.incoming.entry(channel) {
hash_map::Entry::Occupied(_) => Err(DuplicateChannel),
hash_map::Entry::Vacant(vacant) => {
let (incoming_sender, incoming_receiver) = spsc::channel(buffer_size);
let (outgoing_sender, outgoing_receiver) = spsc::channel(buffer_size);
vacant.insert(ChannelSender {
sender: incoming_sender,
statistics: Arc::clone(&statistics),
});
self.outgoing.push(ChannelReceiver {
channel,
receiver: outgoing_receiver,
statistics: Arc::clone(&statistics),
});
Ok((
outgoing_sender,
incoming_receiver,
ChannelStatistics(statistics),
))
}
}
}
/// Start multiplexing packets to all opened channels.
///
/// Returns an `IncomingMultiplexedPackets` which is a `Sink` for incoming packets, and an
/// `OutgoingMultiplexedPackets` which is a `Stream` for outgoing packets.
pub fn start(self) -> (IncomingMultiplexedPackets<P>, OutgoingMultiplexedPackets<P>) {
(
IncomingMultiplexedPackets {
incoming: self.incoming.into_iter().collect(),
to_send: None,
to_flush: FxHashSet::default(),
},
OutgoingMultiplexedPackets {
outgoing: self.outgoing,
},
)
}
}
#[derive(Debug, Error)]
pub enum IncomingError {
#[error("packet received for unopened channel")]
UnknownPacketChannel,
#[error("channel receiver has been dropped")]
ChannelReceiverDropped,
}
#[derive(Error)]
pub enum IncomingTrySendError<P> {
#[error("packet channel is full")]
IsFull(P),
#[error(transparent)]
Error(#[from] IncomingError),
}
impl<P> fmt::Debug for IncomingTrySendError<P> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
IncomingTrySendError::IsFull(_) => write!(f, "IncomingTrySendError::IsFull"),
IncomingTrySendError::Error(err) => f
.debug_tuple("IncomingTrySendError::Error")
.field(err)
.finish(),
}
}
}
impl<P> IncomingTrySendError<P> {
pub fn is_full(&self) -> bool {
match self {
IncomingTrySendError::IsFull(_) => true,
_ => false,
}
}
}
/// A handle to push incoming packets into the multiplexer.
pub struct IncomingMultiplexedPackets<P> {
incoming: FxHashMap<PacketChannel, ChannelSender<P>>,
to_send: Option<P>,
to_flush: FxHashSet<PacketChannel>,
}
impl<P> Unpin for IncomingMultiplexedPackets<P> {}
impl<P> IncomingMultiplexedPackets<P>
where
P: Packet,
{
/// Attempt to send the given packet to the appropriate multiplexed channel without blocking.
///
/// If a normal error occurs, returns `IncomingTrySendError::Error`; if the destination channel
/// buffer is full, returns `IncomingTrySendError::IsFull` containing the unsent packet.
pub fn try_send(&mut self, packet: P) -> Result<(), IncomingTrySendError<P>> {
let channel = packet[0];
let incoming = self
.incoming
.get_mut(&channel)
.ok_or(IncomingError::UnknownPacketChannel)?;
let mux_packet_len = (packet.len() - 1) as u64;
incoming.sender.try_send(MuxPacket(packet)).map_err(|e| {
if e.is_full() {
IncomingTrySendError::IsFull(e.into_inner().0)
} else {
IncomingError::ChannelReceiverDropped.into()
}
})?;
incoming.statistics.mark_incoming_packet(mux_packet_len);
Ok(())
}
}
impl<P> Sink<P> for IncomingMultiplexedPackets<P>
where
P: Packet,
{
type Error = IncomingError;
fn poll_ready(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Result<(), Self::Error>> {
if let Some(packet) = self.to_send.take() {
let channel = packet[0];
let incoming = self
.incoming
.get_mut(&channel)
.ok_or(IncomingError::UnknownPacketChannel)?;
match incoming.sender.poll_ready_unpin(cx) {
Poll::Pending => {
self.to_send = Some(packet);
Poll::Pending
}
Poll::Ready(Ok(())) => {
let mux_packet_len = (packet.len() - 1) as u64;
incoming
.sender
.start_send_unpin(MuxPacket(packet))
.map_err(|_| IncomingError::ChannelReceiverDropped)?;
incoming.statistics.mark_incoming_packet(mux_packet_len);
self.to_flush.insert(channel);
Poll::Ready(Ok(()))
}
Poll::Ready(Err(_)) => Poll::Ready(Err(IncomingError::ChannelReceiverDropped)),
}
} else {
Poll::Ready(Ok(()))
}
}
fn start_send(mut self: Pin<&mut Self>, item: P) -> Result<(), Self::Error> {
assert!(self.to_send.is_none());
self.to_send = Some(item);
Ok(())
}
fn poll_flush(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Result<(), Self::Error>> {
if self.as_mut().poll_ready(cx)?.is_pending() {
return Poll::Pending;
}
while let Some(&channel) = self.to_flush.iter().next() {
let incoming = self
.incoming
.get_mut(&channel)
.ok_or(IncomingError::UnknownPacketChannel)?;
if incoming
.sender
.poll_flush_unpin(cx)
.map_err(|_| IncomingError::ChannelReceiverDropped)?
.is_pending()
{
return Poll::Pending;
}
self.to_flush.remove(&channel);
}
Poll::Ready(Ok(()))
}
fn poll_close(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Result<(), Self::Error>> {
self.poll_flush(cx)
}
}
/// A handle to receive outgoing packets from the multiplexer.
pub struct OutgoingMultiplexedPackets<P> {
outgoing: SelectAll<ChannelReceiver<P>>,
}
impl<P> Stream for OutgoingMultiplexedPackets<P>
where
P: Packet,
{
type Item = P;
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<Self::Item>> {
self.outgoing.poll_next_unpin(cx)
}
}
struct ChannelSender<P> {
sender: spsc::Sender<MuxPacket<P>>,
statistics: Arc<ChannelStatisticsData>,
}
struct ChannelReceiver<P> {
channel: PacketChannel,
receiver: spsc::Receiver<MuxPacket<P>>,
statistics: Arc<ChannelStatisticsData>,
}
impl<P> Unpin for ChannelReceiver<P> {}
impl<P> Stream for ChannelReceiver<P>
where
P: Packet,
{
type Item = P;
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<Self::Item>> {
match self.receiver.poll_next_unpin(cx) {
Poll::Ready(Some(packet)) => {
let mut packet = packet.0;
packet[0] = self.channel;
self.statistics
.mark_outgoing_packet((packet.len() - 1) as u64);
Poll::Ready(Some(packet))
}
Poll::Ready(None) => Poll::Ready(None),
Poll::Pending => Poll::Pending,
}
}
}
#[derive(Debug, Default)]
struct ChannelStatisticsData {
incoming_packets: AtomicU64,
incoming_bytes: AtomicU64,
outgoing_packets: AtomicU64,
outgoing_bytes: AtomicU64,
}
impl ChannelStatisticsData {
fn mark_incoming_packet(&self, len: u64) {
self.incoming_packets.fetch_add(1, Ordering::Relaxed);
self.incoming_bytes.fetch_add(len, Ordering::Relaxed);
}
fn mark_outgoing_packet(&self, len: u64) {
self.outgoing_packets.fetch_add(1, Ordering::Relaxed);
self.outgoing_bytes.fetch_add(len, Ordering::Relaxed);
}
}
================================================
FILE: src/reliable_bincode_channel.rs
================================================
use std::{
marker::PhantomData,
task::{Context, Poll},
u16,
};
use bincode::Options as _;
use byteorder::{ByteOrder, LittleEndian};
use futures::{future, ready, task};
use serde::{Deserialize, Serialize};
use thiserror::Error;
use crate::reliable_channel::{self, ReliableChannel};
/// The maximum serialized length of a `ReliableBincodeChannel` message.
pub const MAX_MESSAGE_LEN: u16 = u16::MAX;
#[derive(Debug, Error)]
pub enum SendError {
/// Fatal internal channel error.
#[error("reliable channel error: {0}")]
ReliableChannelError(#[from] reliable_channel::Error),
/// Non-fatal error, message is unsent.
#[error("bincode serialization error: {0}")]
BincodeError(#[from] bincode::Error),
}
#[derive(Debug, Error)]
pub enum RecvError {
/// Fatal internal channel error.
#[error("reliable channel error: {0}")]
ReliableChannelError(#[from] reliable_channel::Error),
/// Non-fatal error, message is skipped.
#[error("bincode serialization error: {0}")]
BincodeError(#[from] bincode::Error),
}
/// Wraps a `ReliableChannel` together with an internal buffer to allow easily sending message types
/// serialized with `bincode`.
///
/// Messages are guaranteed to arrive, and to arrive in order. Messages have a maximum length
/// (`MAX_MESSAGE_LEN`), which may be larger than the size of an individual packet.
pub struct ReliableBincodeChannel {
channel: ReliableChannel,
write_buffer: Vec<u8>,
write_pos: usize,
read_buffer: Vec<u8>,
read_pos: usize,
}
impl From<ReliableChannel> for ReliableBincodeChannel {
fn from(channel: ReliableChannel) -> Self {
Self::new(channel)
}
}
impl ReliableBincodeChannel {
/// Create a new `ReliableBincodeChannel` wrapping the given `ReliableChannel`. Serialized
/// messages are limited to `MAX_MESSAGE_LEN` bytes.
pub fn new(channel: ReliableChannel) -> Self {
ReliableBincodeChannel {
channel,
write_buffer: Vec::new(),
write_pos: 0,
read_buffer: Vec::new(),
read_pos: 0,
}
}
pub fn into_inner(self) -> ReliableChannel {
self.channel
}
/// Write the given message to the reliable channel.
///
/// In order to ensure that messages are sent in a timely manner, `flush` must be called after
/// calling this method. Without calling `flush`, any pending writes will not be sent until the
/// next automatic sender task wakeup.
///
/// This method is cancel safe, it will never partially send a message, and completes
/// immediately upon successfully queuing a message to send.
pub async fn send<M: Serialize>(&mut self, msg: &M) -> Result<(), SendError> {
future::poll_fn(|cx| self.poll_send_ready(cx)).await?;
self.start_send(msg)?;
Ok(())
}
pub fn try_send<M: Serialize>(&mut self, msg: &M) -> Result<bool, SendError> {
if self.try_send_ready()? {
self.start_send(msg)?;
Ok(true)
} else {
Ok(false)
}
}
/// Ensure that any previously sent messages are sent as soon as possible.
///
/// This method is cancel safe.
pub async fn flush(&mut self) -> Result<(), reliable_channel::Error> {
future::poll_fn(|cx| self.poll_flush(cx)).await
}
pub fn try_flush(&mut self) -> Result<bool, reliable_channel::Error> {
match self.poll_flush(&mut Context::from_waker(task::noop_waker_ref())) {
Poll::Pending => Ok(false),
Poll::Ready(Ok(())) => Ok(true),
Poll::Ready(Err(err)) => Err(err),
}
}
/// Read the next available incoming message.
///
/// This method is cancel safe, it will never partially read a message or drop received
/// messages.
pub async fn recv<'a, M: Deserialize<'a>>(&'a mut self) -> Result<M, RecvError> {
future::poll_fn(|cx| self.poll_recv_ready(cx)).await?;
self.recv_next::<M>()
}
pub fn try_recv<'a, M: Deserialize<'a>>(&'a mut self) -> Result<Option<M>, RecvError> {
match self.poll_recv::<M>(&mut Context::from_waker(task::noop_waker_ref())) {
Poll::Pending => Ok(None),
Poll::Ready(Ok(val)) => Ok(Some(val)),
Poll::Ready(Err(err)) => Err(err),
}
}
pub fn poll_send_ready(
&mut self,
cx: &mut Context,
) -> Poll<Result<(), reliable_channel::Error>> {
while !self.write_buffer.is_empty() {
let len = ready!(self
.channel
.poll_write(cx, &self.write_buffer[self.write_pos..]))?;
self.write_pos += len;
if self.write_pos == self.write_buffer.len() {
self.write_pos = 0;
self.write_buffer.clear();
}
}
Poll::Ready(Ok(()))
}
pub fn try_send_ready(&mut self) -> Result<bool, reliable_channel::Error> {
match self.poll_send_ready(&mut Context::from_waker(task::noop_waker_ref())) {
Poll::Pending => Ok(false),
Poll::Ready(Ok(())) => Ok(true),
Poll::Ready(Err(err)) => Err(err),
}
}
pub fn start_send<M: Serialize>(&mut self, msg: &M) -> Result<(), bincode::Error> {
assert!(self.write_buffer.is_empty());
self.write_buffer.resize(2, 0);
let bincode_config = self.bincode_config();
bincode_config.serialize_into(&mut self.write_buffer, msg)?;
let message_len = self.write_buffer.len() - 2;
LittleEndian::write_u16(
&mut self.write_buffer[0..2],
message_len.try_into().unwrap(),
);
Ok(())
}
pub fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), reliable_channel::Error>> {
ready!(self.poll_send_ready(cx))?;
self.channel.flush()?;
Poll::Ready(Ok(()))
}
pub fn poll_recv<'a, M: Deserialize<'a>>(
&'a mut self,
cx: &mut Context,
) -> Poll<Result<M, RecvError>> {
ready!(self.poll_recv_ready(cx))?;
Poll::Ready(self.recv_next::<M>())
}
fn poll_recv_ready(&mut self, cx: &mut Context) -> Poll<Result<(), RecvError>> {
if self.read_pos < 2 {
self.read_buffer.resize(2, 0);
ready!(self.poll_finish_read(cx))?;
}
let message_len = LittleEndian::read_u16(&self.read_buffer[0..2]);
self.read_buffer.resize(message_len as usize + 2, 0);
ready!(self.poll_finish_read(cx))?;
Poll::Ready(Ok(()))
}
fn recv_next<'a, M: Deserialize<'a>>(&'a mut self) -> Result<M, RecvError> {
let bincode_config = self.bincode_config();
let res = bincode_config.deserialize(&self.read_buffer[2..]);
self.read_pos = 0;
Ok(res?)
}
fn poll_finish_read(&mut self, cx: &mut Context) -> Poll<Result<(), reliable_channel::Error>> {
while self.read_pos < self.read_buffer.len() {
let len = ready!(self
.channel
.poll_read(cx, &mut self.read_buffer[self.read_pos..]))?;
self.read_pos += len;
}
Poll::Ready(Ok(()))
}
fn bincode_config(&self) -> impl bincode::Options + Copy {
bincode::options().with_limit(MAX_MESSAGE_LEN as u64)
}
}
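`start_send` and `poll_recv_ready` above frame each message on the reliable byte stream as a little-endian `u16` length prefix followed by the bincode payload, which is how a message can span multiple underlying packets while staying capped at `u16::MAX` bytes. A sketch of just the framing, with illustrative helpers (`frame`/`unframe` are not crate APIs, and bincode serialization is replaced by raw bytes):

```rust
// Sketch of the wire framing used by ReliableBincodeChannel:
// [len: u16 LE][message bytes...]. Illustration only.
fn frame(msg: &[u8]) -> Vec<u8> {
    let len = u16::try_from(msg.len()).expect("message too long");
    let mut out = len.to_le_bytes().to_vec(); // 2-byte length header
    out.extend_from_slice(msg);
    out
}

// Returns (message, rest-of-stream) once a full message has arrived,
// or None if more bytes are needed (the async poll_recv_ready analogue).
fn unframe(stream: &[u8]) -> Option<(&[u8], &[u8])> {
    let header: [u8; 2] = stream.get(..2)?.try_into().ok()?;
    let len = u16::from_le_bytes(header) as usize;
    let body = stream.get(2..2 + len)?;
    Some((body, &stream[2 + len..]))
}

fn main() {
    let wire = frame(b"hello");
    assert_eq!(&wire[..2], &5u16.to_le_bytes());
    let (msg, rest) = unframe(&wire).unwrap();
    assert_eq!(msg, b"hello");
    assert!(rest.is_empty());
    // A partial read yields None rather than a truncated message.
    assert!(unframe(&wire[..4]).is_none());
}
```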
/// Wrapper over a `ReliableBincodeChannel` that only allows a single message type.
pub struct ReliableTypedChannel<M> {
channel: ReliableBincodeChannel,
_phantom: PhantomData<M>,
}
impl<M> From<ReliableChannel> for ReliableTypedChannel<M> {
fn from(channel: ReliableChannel) -> Self {
Self::new(channel)
}
}
impl<M> ReliableTypedChannel<M> {
pub fn new(channel: ReliableChannel) -> Self {
ReliableTypedChannel {
channel: ReliableBincodeChannel::new(channel),
_phantom: PhantomData,
}
}
pub fn into_inner(self) -> ReliableChannel {
self.channel.into_inner()
}
pub async fn flush(&mut self) -> Result<(), reliable_channel::Error> {
self.channel.flush().await
}
pub fn try_flush(&mut self) -> Result<bool, reliable_channel::Error> {
self.channel.try_flush()
}
pub fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), reliable_channel::Error>> {
self.channel.poll_flush(cx)
}
pub fn poll_send_ready(
&mut self,
cx: &mut Context,
) -> Poll<Result<(), reliable_channel::Error>> {
self.channel.poll_send_ready(cx)
}
pub fn try_send_ready(&mut self) -> Result<bool, reliable_channel::Error> {
self.channel.try_send_ready()
}
}
impl<M: Serialize> ReliableTypedChannel<M> {
pub async fn send(&mut self, msg: &M) -> Result<(), SendError> {
self.channel.send(msg).await
}
pub fn try_send(&mut self, msg: &M) -> Result<bool, SendError> {
self.channel.try_send(msg)
}
pub fn start_send(&mut self, msg: &M) -> Result<(), bincode::Error> {
self.channel.start_send(msg)
}
}
impl<'a, M: Deserialize<'a>> ReliableTypedChannel<M> {
pub async fn recv(&'a mut self) -> Result<M, RecvError> {
self.channel.recv::<M>().await
}
pub fn try_recv(&'a mut self) -> Result<Option<M>, RecvError> {
self.channel.try_recv::<M>()
}
pub fn poll_recv(&'a mut self, cx: &mut Context) -> Poll<Result<M, RecvError>> {
self.channel.poll_recv::<M>(cx)
}
}
================================================
FILE: src/reliable_channel.rs
================================================
use std::{
i16,
num::Wrapping,
pin::Pin,
sync::Arc,
task::{Context, Poll},
time::Duration,
u32,
};
use byteorder::{ByteOrder, LittleEndian};
use futures::{
future::{self, Fuse, FusedFuture, RemoteHandle},
select,
task::AtomicWaker,
FutureExt, SinkExt, StreamExt,
};
use rustc_hash::FxHashMap;
use thiserror::Error;
use crate::{
bandwidth_limiter::BandwidthLimiter,
packet::{Packet, PacketPool},
runtime::{Spawn, Timer},
spsc,
windows::{
stream_gt, AckResult, RecvWindow, RecvWindowReader, SendWindow, SendWindowWriter, StreamPos,
},
};
/// All reliable channel errors are fatal. Once any error is returned all further reliable channel
/// method calls will return `Err(Error::Shutdown)`.
#[derive(Debug, Error)]
pub enum Error {
#[error("incoming or outgoing packet channel has been disconnected")]
Disconnected,
#[error("remote endpoint has violated the reliability protocol")]
ProtocolError,
#[error("an error has been encountered that has caused the channel to shutdown")]
Shutdown,
}
#[derive(Debug, Clone, PartialEq)]
pub struct Settings {
/// The target outgoing bandwidth, in bytes / sec.
///
/// This is the target bandwidth usage for all sent packets, not the target bandwidth for the
/// actual underlying stream. Both sends and resends (but not currently acks) count against this
/// bandwidth limit, so this is designed to limit the amount of traffic this channel produces.
pub bandwidth: u32,
/// The maximum amount of bandwidth credit that can accumulate. This is the maximum bytes that
/// will be sent in a single burst.
pub burst_bandwidth: u32,
/// The size of the incoming ring buffer.
pub recv_window_size: u32,
/// The size of the outgoing ring buffer.
pub send_window_size: u32,
    /// The sending side of a channel will always send a constant number of bytes more than what
    /// it believes the remote's recv window actually is, to avoid stalling the connection. This
    /// controls the amount past the recv window which will be sent, and also the initial amount of
    /// data that will be sent when the connection starts up.
pub init_send: u32,
/// The transmission task for the channel will wake up at this rate to do resends, if not woken
/// up to send other data.
pub resend_time: Duration,
/// The initial estimate for the RTT.
pub initial_rtt: Duration,
/// The maximum reasonable RTT which will be used as an upper bound for packet RTT values.
pub max_rtt: Duration,
/// The computed RTT for each received acknowledgment will be mixed with the RTT estimate by
/// this factor.
pub rtt_update_factor: f64,
/// Resends will occur if an acknowledgment is not received within this multiplicative factor of
/// the estimated RTT.
pub rtt_resend_factor: f64,
}
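The `bandwidth` / `burst_bandwidth` pair describes token-bucket style accounting: credit accrues at `bandwidth` bytes/sec, is capped at `burst_bandwidth`, and sends spend it down (possibly below zero, delaying the next send). This is a minimal sketch of that accounting only, not the crate's timer-driven `BandwidthLimiter`:

```rust
// Token-bucket sketch of the bandwidth/burst_bandwidth accounting.
// Illustration only; the crate's BandwidthLimiter is driven by a Timer.
struct TokenBucket {
    bandwidth: f64,       // credit accrual rate, bytes / sec
    burst_bandwidth: f64, // cap on accumulated credit, bytes
    available: f64,       // current credit, may go negative
}

impl TokenBucket {
    fn new(bandwidth: u32, burst_bandwidth: u32) -> Self {
        TokenBucket {
            bandwidth: bandwidth as f64,
            burst_bandwidth: burst_bandwidth as f64,
            available: burst_bandwidth as f64,
        }
    }

    // Accrue credit for `elapsed_secs`, capped at the burst limit.
    fn update_available(&mut self, elapsed_secs: f64) {
        self.available =
            (self.available + self.bandwidth * elapsed_secs).min(self.burst_bandwidth);
    }

    fn bytes_available(&self) -> bool {
        self.available > 0.0
    }

    // Spend credit for a sent packet.
    fn take_bytes(&mut self, bytes: u32) {
        self.available -= bytes as f64;
    }
}

fn main() {
    let mut bucket = TokenBucket::new(1000, 500);
    assert!(bucket.bytes_available());
    bucket.take_bytes(500);
    assert!(!bucket.bytes_available());
    bucket.update_available(0.25); // 250 bytes of credit accrue
    assert!(bucket.bytes_available());
}
```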
/// Turns a stream of unreliable, unordered packets into a reliable in-order stream of data.
pub struct ReliableChannel {
send_window_writer: SendWindowWriter,
recv_window_reader: RecvWindowReader,
shared: Arc<Shared>,
task: Fuse<RemoteHandle<Error>>,
}
impl ReliableChannel {
pub fn new<S, T, P>(
spawn: S,
timer: T,
packet_pool: P,
settings: Settings,
sender: spsc::Sender<P::Packet>,
receiver: spsc::Receiver<P::Packet>,
) -> Self
where
S: Spawn + 'static,
T: Timer + 'static,
P: PacketPool + Send + 'static,
P::Packet: Send,
{
assert!(settings.bandwidth != 0);
assert!(settings.recv_window_size != 0);
assert!(settings.send_window_size != 0);
assert!(settings.burst_bandwidth != 0);
assert!(settings.init_send != 0);
assert!(settings.rtt_update_factor > 0.);
assert!(settings.rtt_resend_factor > 0.);
let resend_timer = Box::pin(timer.sleep(settings.resend_time).fuse());
let (send_window, send_window_writer) =
SendWindow::new(settings.send_window_size, Wrapping(0));
let (recv_window, recv_window_reader) =
RecvWindow::new(settings.recv_window_size, Wrapping(0));
let shared = Arc::new(Shared::default());
let bandwidth_limiter =
BandwidthLimiter::new(&timer, settings.bandwidth, settings.burst_bandwidth);
let remote_recv_available = settings.init_send;
let rtt_estimate = settings.initial_rtt.as_secs_f64();
let task = Task {
settings,
timer,
packet_pool,
sender,
receiver,
shared: shared.clone(),
send_window,
recv_window,
resend_timer,
remote_recv_available,
unacked_ranges: FxHashMap::default(),
rtt_estimate,
bandwidth_limiter,
};
let (remote, remote_handle) =
{ async move { task.main_loop().await.unwrap_err() } }.remote_handle();
spawn.spawn(remote);
ReliableChannel {
send_window_writer,
recv_window_reader,
shared,
task: remote_handle.fuse(),
}
}
/// Write the given data to the reliable channel and return once any nonzero amount of data has
/// been written.
///
/// In order to ensure that all data will be sent, `ReliableChannel::flush` must be called after
/// any number of writes.
///
/// This method is cancel safe, it completes immediately once any amount of data is written,
/// dropping an incomplete future will have no effect.
pub async fn write(&mut self, data: &[u8]) -> Result<usize, Error> {
future::poll_fn(|cx| self.poll_write(cx, data)).await
}
/// Ensure that any previously written data will be fully sent.
///
/// Returns once the sending task has been notified to wake up and will send the written data
/// promptly. Does *not* actually wait for outgoing packets to be sent before returning.
pub fn flush(&mut self) -> Result<(), Error> {
if self.task.is_terminated() {
Err(Error::Shutdown)
} else if let Some(error) = (&mut self.task).now_or_never() {
Err(error)
} else {
self.shared.send_ready.wake();
Ok(())
}
}
/// Read any available data. Returns once at least one byte of data has been read.
///
/// This method is cancel safe, it completes immediately once any amount of data is read,
/// dropping an incomplete future will have no effect.
pub async fn read(&mut self, data: &mut [u8]) -> Result<usize, Error> {
future::poll_fn(|cx| self.poll_read(cx, data)).await
}
pub fn poll_write(&mut self, cx: &mut Context, data: &[u8]) -> Poll<Result<usize, Error>> {
if self.task.is_terminated() {
return Poll::Ready(Err(Error::Shutdown));
}
if let Poll::Ready(err) = self.task.poll_unpin(cx) {
return Poll::Ready(Err(err));
}
if data.is_empty() {
return Poll::Ready(Ok(0));
}
let len = self.send_window_writer.write(data);
if len > 0 {
Poll::Ready(Ok(len as usize))
} else {
self.shared.write_ready.register(cx.waker());
let len = self.send_window_writer.write(data);
if len > 0 {
Poll::Ready(Ok(len as usize))
} else {
self.shared.send_ready.wake();
Poll::Pending
}
}
}
pub fn poll_read(&mut self, cx: &mut Context, data: &mut [u8]) -> Poll<Result<usize, Error>> {
if self.task.is_terminated() {
return Poll::Ready(Err(Error::Shutdown));
}
if let Poll::Ready(err) = self.task.poll_unpin(cx) {
return Poll::Ready(Err(err));
}
if data.is_empty() {
return Poll::Ready(Ok(0));
}
let len = self.recv_window_reader.read(data);
if len > 0 {
Poll::Ready(Ok(len as usize))
} else {
self.shared.read_ready.register(cx.waker());
let len = self.recv_window_reader.read(data);
if len > 0 {
Poll::Ready(Ok(len as usize))
} else {
Poll::Pending
}
}
}
/// The amount of space currently available for writing without blocking.
pub fn write_available(&self) -> usize {
self.send_window_writer.write_available() as usize
}
/// Attempt to write data without blocking or registering wakeups.
pub fn try_write(&mut self, data: &[u8]) -> Result<usize, Error> {
if self.task.is_terminated() {
Err(Error::Shutdown)
} else {
Ok(self.send_window_writer.write(data) as usize)
}
}
/// Attempt to read data without blocking or registering wakeups.
pub fn try_read(&mut self, data: &mut [u8]) -> Result<usize, Error> {
if self.task.is_terminated() {
Err(Error::Shutdown)
} else {
Ok(self.recv_window_reader.read(data) as usize)
}
}
}
#[derive(Default)]
struct Shared {
send_ready: AtomicWaker,
write_ready: AtomicWaker,
read_ready: AtomicWaker,
}
struct UnackedRange<I> {
start: StreamPos,
end: StreamPos,
last_sent: Option<I>,
retransmit: bool,
}
struct Task<T, P>
where
T: Timer,
P: PacketPool,
{
timer: T,
settings: Settings,
packet_pool: P,
sender: spsc::Sender<P::Packet>,
receiver: spsc::Receiver<P::Packet>,
shared: Arc<Shared>,
send_window: SendWindow,
recv_window: RecvWindow,
resend_timer: Pin<Box<Fuse<T::Sleep>>>,
remote_recv_available: u32,
unacked_ranges: FxHashMap<StreamPos, UnackedRange<T::Instant>>,
rtt_estimate: f64,
bandwidth_limiter: BandwidthLimiter<T>,
}
impl<T, P> Task<T, P>
where
T: Timer,
P: PacketPool,
{
async fn main_loop(mut self) -> Result<(), Error> {
loop {
enum WakeReason<P> {
ResendTimer,
IncomingPacket(P),
SendAvailable,
}
self.bandwidth_limiter.update_available(&self.timer);
let wake_reason = {
let bandwidth_limiter = &self.bandwidth_limiter;
let resend_timer = &mut self.resend_timer;
let resend_timer = async {
if !resend_timer.is_terminated() {
resend_timer.await;
}
// Don't bother waking up for the resend timer until we have bandwidth available
// to do resends.
if let Some(delay) = bandwidth_limiter.delay_until_available(&self.timer) {
delay.await;
}
}
.fuse();
let send_available = async {
if self.remote_recv_available == 0 {
// Don't wake up at all for sending new data if we couldn't send anything
// anyway.
future::pending::<()>().await;
}
// Don't wake up for sending new data until we have bandwidth available.
if let Some(delay) = bandwidth_limiter.delay_until_available(&self.timer) {
delay.await;
}
future::poll_fn(|cx| {
if self.send_window.send_available() > 0 {
Poll::Ready(())
} else {
self.shared.send_ready.register(cx.waker());
if self.send_window.send_available() > 0 {
Poll::Ready(())
} else {
Poll::Pending
}
}
})
.await
}
.fuse();
select! {
_ = { resend_timer } => WakeReason::ResendTimer,
incoming_packet = self.receiver.next().fuse() => {
WakeReason::IncomingPacket(incoming_packet.ok_or(Error::Disconnected)?)
},
_ = { send_available } => WakeReason::SendAvailable,
}
};
self.bandwidth_limiter.update_available(&self.timer);
match wake_reason {
WakeReason::ResendTimer => {
self.resend().await?;
self.resend_timer
.set(self.timer.sleep(self.settings.resend_time).fuse());
}
WakeReason::IncomingPacket(packet) => {
self.recv_packet(packet).await?;
}
WakeReason::SendAvailable => {
// We should use available bandwidth for resends before sending, to avoid
// starving resends
self.resend().await?;
self.resend_timer
.set(self.timer.sleep(self.settings.resend_time).fuse());
self.send().await?;
}
}
// Don't let the connection stall. If we are now out of unacked ranges to resend and
// we believe the remote has no recv left, we will receive no acknowledgments to let us
// update the remote receive window. Keep sending a small amount of data past the remote
// receive window, even if it is unacked, so that we are notified when the remote starts
// processing data again.
if self.unacked_ranges.is_empty() && self.remote_recv_available == 0 {
self.remote_recv_available = self.settings.init_send;
}
}
}
// Send any data available to send, if we have the bandwidth for it
async fn send(&mut self) -> Result<(), Error> {
if !self.bandwidth_limiter.bytes_available() {
return Ok(());
}
let send_amt = (self.send_window.send_available())
.min(self.remote_recv_available)
.min(i16::MAX as u32);
if send_amt == 0 {
return Ok(());
}
let send_amt = send_amt.min((self.packet_pool.capacity() - 6) as u32);
let mut packet = self.packet_pool.acquire();
packet.resize(6 + send_amt as usize, 0);
let (start, end) = self.send_window.send(&mut packet[6..]).unwrap();
assert_eq!((end - start).0, send_amt);
LittleEndian::write_i16(&mut packet[0..2], send_amt as i16);
LittleEndian::write_u32(&mut packet[2..6], start.0);
self.unacked_ranges.insert(
start,
UnackedRange {
start,
end,
last_sent: Some(self.timer.now()),
retransmit: false,
},
);
self.bandwidth_limiter.take_bytes(packet.len() as u32);
self.sender
.send(packet)
.await
.map_err(|_| Error::Disconnected)?;
self.remote_recv_available -= send_amt;
Ok(())
}
// Resend any data whose retransmit time has been reached, if we have the bandwidth for it
async fn resend(&mut self) -> Result<(), Error> {
for unacked in self.unacked_ranges.values_mut() {
if !self.bandwidth_limiter.bytes_available() {
break;
}
let resend = if let Some(last_sent) = unacked.last_sent {
let elapsed = self.timer.duration_between(last_sent, self.timer.now());
elapsed.as_secs_f64() > self.rtt_estimate * self.settings.rtt_resend_factor
} else {
true
};
if resend {
unacked.last_sent = Some(self.timer.now());
unacked.retransmit = true;
let len = (unacked.end - unacked.start).0;
let mut packet = self.packet_pool.acquire();
packet.resize(6 + len as usize, 0);
LittleEndian::write_i16(&mut packet[0..2], len as i16);
LittleEndian::write_u32(&mut packet[2..6], unacked.start.0);
self.send_window
.get_unacked(unacked.start, &mut packet[6..]);
self.bandwidth_limiter.take_bytes(packet.len() as u32);
self.sender
.send(packet)
.await
.map_err(|_| Error::Disconnected)?;
}
}
Ok(())
}
// Receive the given packet and respond with an acknowledgment packet, ignoring bandwidth
// limits.
async fn recv_packet(&mut self, packet: P::Packet) -> Result<(), Error> {
if packet.len() < 2 {
return Err(Error::ProtocolError);
}
let data_len = LittleEndian::read_i16(&packet[0..2]);
if data_len < 0 {
if packet.len() != 10 {
return Err(Error::ProtocolError);
}
let start_pos = Wrapping(LittleEndian::read_u32(&packet[2..6]));
let end_pos = start_pos + Wrapping(-data_len as u32);
let recv_window_end = Wrapping(LittleEndian::read_u32(&packet[6..10]));
if stream_gt(recv_window_end, self.send_window.send_pos()) {
let old_remote_recv_available = self.remote_recv_available;
self.remote_recv_available = self
.remote_recv_available
.max((recv_window_end - self.send_window.send_pos()).0);
if self.remote_recv_available != 0 && old_remote_recv_available == 0 {
// If we now believe the remote is newly ready to receive data, go ahead and
// send it.
self.send().await?;
}
}
let acked_range = match self.send_window.ack_range(start_pos, end_pos) {
AckResult::NotFound => None,
AckResult::Ack => {
let acked = self.unacked_ranges.remove(&start_pos).unwrap();
assert_eq!(acked.end, end_pos);
Some(acked)
}
AckResult::PartialAck(nacked_end) => {
let mut acked = self.unacked_ranges.remove(&start_pos).unwrap();
assert_eq!(acked.end, nacked_end);
acked.end = end_pos;
self.unacked_ranges.insert(
end_pos,
UnackedRange {
start: end_pos,
end: nacked_end,
last_sent: None,
retransmit: true,
},
);
Some(acked)
}
};
if let Some(acked_range) = acked_range {
// Only update the RTT estimation for acked ranges that did not need to be
// retransmitted, otherwise we do not know which packet is being acked and thus
// can't be sure of the actual RTT for this ack.
if !acked_range.retransmit {
if let Some(last_sent) = acked_range.last_sent {
let rtt = self
.timer
.duration_between(last_sent, self.timer.now())
.min(self.settings.max_rtt)
.as_secs_f64();
self.rtt_estimate +=
(rtt - self.rtt_estimate) * self.settings.rtt_update_factor;
}
}
if self.send_window.write_available() > 0 {
self.shared.write_ready.wake();
}
}
} else {
if packet.len() < 6 {
return Err(Error::ProtocolError);
}
let start_pos = Wrapping(LittleEndian::read_u32(&packet[2..6]));
if data_len as usize != packet.len() - 6 {
return Err(Error::ProtocolError);
}
if let Some(end_pos) = self.recv_window.recv(start_pos, &packet[6..]) {
let mut ack_packet = self.packet_pool.acquire();
ack_packet.resize(10, 0);
let ack_len = (end_pos - start_pos).0 as i16;
LittleEndian::write_i16(&mut ack_packet[0..2], -ack_len);
LittleEndian::write_u32(&mut ack_packet[2..6], start_pos.0);
LittleEndian::write_u32(&mut ack_packet[6..10], self.recv_window.window_end().0);
// We currently do not count acknowledgement packets against the outgoing bandwidth
// at all.
self.sender
.send(ack_packet)
.await
.map_err(|_| Error::Disconnected)?;
if self.recv_window.read_available() > 0 {
self.shared.read_ready.wake();
}
}
}
Ok(())
}
}
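The header layout used by `send`, `resend`, and `recv_packet` above can be sketched standalone. This is a hedged illustration (std only; manual little-endian conversion stands in for the `byteorder` crate, and the helper names are illustrative, not part of turbulence): data packets are `[len: i16 LE][start: u32 LE][payload]` with a non-negative length, while acknowledgment packets reuse the first six bytes with the *negated* acked length followed by the receiver's current window end, for 10 bytes total.

```rust
use std::convert::{TryFrom, TryInto};

// Encode a data packet: positive i16 length, u32 stream start position, payload.
fn encode_data_packet(start: u32, payload: &[u8]) -> Vec<u8> {
    let len = i16::try_from(payload.len()).expect("payload too large for one packet");
    let mut packet = Vec::with_capacity(6 + payload.len());
    packet.extend_from_slice(&len.to_le_bytes());
    packet.extend_from_slice(&start.to_le_bytes());
    packet.extend_from_slice(payload);
    packet
}

// Encode an ack packet: negated acked length, start position, receive window end.
fn encode_ack_packet(start: u32, acked_len: i16, recv_window_end: u32) -> Vec<u8> {
    let mut packet = Vec::with_capacity(10);
    packet.extend_from_slice(&(-acked_len).to_le_bytes());
    packet.extend_from_slice(&start.to_le_bytes());
    packet.extend_from_slice(&recv_window_end.to_le_bytes());
    packet
}

enum Decoded<'a> {
    Data { start: u32, payload: &'a [u8] },
    Ack { start: u32, end: u32, recv_window_end: u32 },
}

fn decode(packet: &[u8]) -> Option<Decoded<'_>> {
    let len = i16::from_le_bytes(packet.get(0..2)?.try_into().ok()?);
    let start = u32::from_le_bytes(packet.get(2..6)?.try_into().ok()?);
    if len < 0 {
        // A negative length marks an ack; stream positions wrap, mirroring `Wrapping<u32>`.
        let end = start.wrapping_add(-(len as i32) as u32);
        let recv_window_end = u32::from_le_bytes(packet.get(6..10)?.try_into().ok()?);
        Some(Decoded::Ack { start, end, recv_window_end })
    } else {
        Some(Decoded::Data { start, payload: packet.get(6..6 + len as usize)? })
    }
}
```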
================================================
FILE: src/ring_buffer.rs
================================================
use std::{
alloc::{alloc, dealloc, Layout},
mem::{self, MaybeUninit},
ptr::NonNull,
slice,
sync::{
atomic::{AtomicUsize, Ordering},
Arc,
},
};
use cache_padded::CachePadded;
pub struct RingBuffer {
buffer: NonNull<MaybeUninit<u8>>,
capacity: usize,
head: CachePadded<AtomicUsize>,
tail: CachePadded<AtomicUsize>,
}
impl RingBuffer {
pub fn new(capacity: usize) -> (Writer, Reader) {
assert!(capacity != 0);
let buffer = Arc::new(Self {
buffer: unsafe {
NonNull::new(alloc(Layout::array::<MaybeUninit<u8>>(capacity).unwrap())
as *mut MaybeUninit<u8>)
.unwrap()
},
capacity,
head: CachePadded::new(AtomicUsize::new(0)),
tail: CachePadded::new(AtomicUsize::new(0)),
});
let writer = Writer(buffer.clone());
let reader = Reader(buffer);
(writer, reader)
}
pub fn write_available(&self) -> usize {
let head = self.head.load(Ordering::Acquire);
let tail = self.tail.load(Ordering::Acquire);
head_to_tail(self.capacity, head, tail)
}
pub fn read_available(&self) -> usize {
let head = self.head.load(Ordering::Acquire);
let tail = self.tail.load(Ordering::Acquire);
tail_to_head(self.capacity, tail, head)
}
}
impl Drop for RingBuffer {
fn drop(&mut self) {
unsafe {
dealloc(
self.buffer.as_ptr() as *mut u8,
Layout::array::<MaybeUninit<u8>>(self.capacity).unwrap(),
);
}
}
}
unsafe impl Send for RingBuffer {}
unsafe impl Sync for RingBuffer {}
pub struct Writer(Arc<RingBuffer>);
impl Writer {
pub fn available(&self) -> usize {
self.0.write_available()
}
pub fn write(&mut self, mut offset: usize, mut data: &[u8]) -> usize {
let head_pos = self.0.head.load(Ordering::Acquire);
let tail_pos = self.0.tail.load(Ordering::Acquire);
let head = collapse_position(self.0.capacity, head_pos);
let tail = collapse_position(self.0.capacity, tail_pos);
if head == tail && head_pos != tail_pos {
return 0;
}
let (mut left, mut right): (&mut [MaybeUninit<u8>], &mut [MaybeUninit<u8>]) = unsafe {
if head < tail {
(
slice::from_raw_parts_mut(self.0.buffer.as_ptr().add(head), tail - head),
&mut [],
)
} else {
(
slice::from_raw_parts_mut(
self.0.buffer.as_ptr().add(head),
self.0.capacity - head,
),
slice::from_raw_parts_mut(self.0.buffer.as_ptr(), tail),
)
}
};
let left_eat = left.len().min(offset);
left = &mut left[left_eat..];
offset -= left_eat;
let left_len = left.len().min(data.len());
write_slice(&mut left[0..left_len], &data[0..left_len]);
data = &data[left_len..];
let right_eat = right.len().min(offset);
right = &mut right[right_eat..];
let right_len = right.len().min(data.len());
write_slice(&mut right[0..right_len], &data[0..right_len]);
left_len + right_len
}
pub fn advance(&mut self, offset: usize) -> usize {
let head = self.0.head.load(Ordering::Acquire);
let tail = self.0.tail.load(Ordering::Acquire);
let offset = offset.min(head_to_tail(self.0.capacity, head, tail));
let head = increment(self.0.capacity, head, offset);
self.0.head.store(head, Ordering::Release);
offset
}
pub fn buffer(&self) -> &RingBuffer {
&self.0
}
}
pub struct Reader(Arc<RingBuffer>);
impl Reader {
pub fn available(&self) -> usize {
self.0.read_available()
}
pub fn read(&self, mut offset: usize, mut data: &mut [u8]) -> usize {
let head_pos = self.0.head.load(Ordering::Acquire);
let tail_pos = self.0.tail.load(Ordering::Acquire);
let head = collapse_position(self.0.capacity, head_pos);
let tail = collapse_position(self.0.capacity, tail_pos);
if head == tail && head_pos == tail_pos {
return 0;
}
let (mut left, mut right): (&[u8], &[u8]) = unsafe {
if tail < head {
(
slice::from_raw_parts(self.0.buffer.as_ptr().add(tail) as *mut u8, head - tail),
&mut [],
)
} else {
(
slice::from_raw_parts(
self.0.buffer.as_ptr().add(tail) as *mut u8,
self.0.capacity - tail,
),
slice::from_raw_parts(self.0.buffer.as_ptr() as *mut u8, head),
)
}
};
let left_eat = left.len().min(offset);
left = &left[left_eat..];
offset -= left_eat;
let left_len = left.len().min(data.len());
data[0..left_len].copy_from_slice(&left[0..left_len]);
data = &mut data[left_len..];
let right_eat = right.len().min(offset);
right = &right[right_eat..];
let right_len = right.len().min(data.len());
data[0..right_len].copy_from_slice(&right[0..right_len]);
left_len + right_len
}
pub fn advance(&mut self, offset: usize) -> usize {
let head = self.0.head.load(Ordering::Acquire);
let tail = self.0.tail.load(Ordering::Acquire);
let offset = offset.min(tail_to_head(self.0.capacity, tail, head));
let tail = increment(self.0.capacity, tail, offset);
self.0.tail.store(tail, Ordering::Release);
offset
}
pub fn buffer(&self) -> &RingBuffer {
&self.0
}
}
// Head and tail positions are stored in the range `[0, 2 * capacity)` rather than
// `[0, capacity)`, so that a full buffer (positions a whole lap apart) can be
// distinguished from an empty one (positions equal) without wasting a slot.
// Map a position in `[0, 2 * capacity)` to an index into the buffer.
fn collapse_position(capacity: usize, pos: usize) -> usize {
if pos < capacity {
pos
} else {
pos - capacity
}
}
// Distance from `tail` to `head`: the number of bytes available to read.
fn tail_to_head(capacity: usize, tail: usize, head: usize) -> usize {
if tail <= head {
head - tail
} else {
// `head` has wrapped around `2 * capacity` relative to `tail`.
capacity - (tail - capacity) + head
}
}
// Remaining space from `head` to `tail`: the number of bytes available to write.
fn head_to_tail(capacity: usize, head: usize, tail: usize) -> usize {
capacity - tail_to_head(capacity, tail, head)
}
// Advance `pos` by `n`, wrapping modulo `2 * capacity`. Callers never advance by more
// than `capacity` at a time, so a single wrap suffices.
fn increment(capacity: usize, pos: usize, n: usize) -> usize {
if n == 0 {
return pos;
}
let threshold = (capacity - n) + capacity;
if pos < threshold {
pos + n
} else {
pos - threshold
}
}
// Copy initialized bytes into `MaybeUninit` storage.
fn write_slice(dst: &mut [MaybeUninit<u8>], src: &[u8]) {
let src: &[MaybeUninit<u8>] = unsafe { mem::transmute(src) };
dst.copy_from_slice(src);
}
#[cfg(test)]
mod tests {
use std::thread;
use super::*;
#[test]
fn basic_read_write() {
let (mut writer, mut reader) = RingBuffer::new(7);
let mut buffer = [0; 7];
assert_eq!(writer.available(), 7);
assert_eq!(writer.write(0, &[0, 1, 2]), 3);
assert_eq!(writer.advance(3), 3);
assert_eq!(writer.available(), 4);
assert_eq!(reader.available(), 3);
assert_eq!(reader.read(0, &mut buffer), 3);
assert_eq!(buffer[0..3], [0, 1, 2]);
assert_eq!(writer.available(), 4);
assert_eq!(reader.advance(3), 3);
assert_eq!(writer.available(), 7);
assert_eq!(reader.available(), 0);
assert_eq!(writer.write(0, &[0, 1, 2]), 3);
assert_eq!(writer.advance(3), 3);
assert_eq!(writer.available(), 4);
assert_eq!(reader.read(0, &mut buffer[0..3]), 3);
assert_eq!(buffer[0..3], [0, 1, 2]);
assert_eq!(writer.write(0, &[3, 4, 5]), 3);
assert_eq!(writer.advance(3), 3);
assert_eq!(writer.available(), 1);
assert_eq!(writer.write(0, &[6, 7, 8, 9]), 1);
assert_eq!(writer.advance(1), 1);
assert_eq!(writer.available(), 0);
assert_eq!(reader.available(), 7);
assert_eq!(reader.read(4, &mut buffer[0..5]), 3);
assert_eq!(buffer[0..3], [4, 5, 6]);
assert_eq!(reader.read(0, &mut buffer[0..2]), 2);
assert_eq!(buffer[0..2], [0, 1]);
assert_eq!(reader.advance(2), 2);
assert_eq!(reader.available(), 5);
assert_eq!(writer.available(), 2);
assert_eq!(reader.read(0, &mut buffer[0..3]), 3);
assert_eq!(buffer[0..3], [2, 3, 4]);
assert_eq!(reader.advance(3), 3);
assert_eq!(reader.available(), 2);
assert_eq!(writer.available(), 5);
assert_eq!(reader.read(0, &mut buffer[0..5]), 2);
assert_eq!(buffer[0..2], [5, 6]);
assert_eq!(reader.available(), 2);
assert_eq!(writer.available(), 5);
assert_eq!(reader.advance(5), 2);
assert_eq!(reader.available(), 0);
assert_eq!(writer.available(), 7);
assert_eq!(writer.write(3, &[13, 14]), 2);
assert_eq!(writer.write(0, &[10, 11, 12]), 3);
assert_eq!(writer.advance(5), 5);
assert_eq!(writer.available(), 2);
assert_eq!(reader.available(), 5);
assert_eq!(reader.read(2, &mut buffer[0..5]), 3);
assert_eq!(buffer[0..3], [12, 13, 14]);
assert_eq!(reader.read(0, &mut buffer[0..3]), 3);
assert_eq!(buffer[0..3], [10, 11, 12]);
}
#[test]
fn threaded_read_write() {
let (mut writer, mut reader) = RingBuffer::new(64);
let a = thread::spawn(move || {
let mut b = [0; 32];
let mut i = 0;
loop {
let write = 11 + (i % 17);
for j in 0..write {
b[j] = ((i + j) % 256) as u8;
}
let len = writer.write(0, &b[0..write]);
writer.advance(len);
i += len;
if i >= 10_000 {
break;
}
}
});
let b = thread::spawn(move || {
let mut b = [0; 32];
let mut i = 0;
loop {
let r = reader.read(0, &mut b);
for j in 0..r {
assert_eq!(b[j], ((i + j) % 256) as u8);
}
assert_eq!(reader.advance(r), r);
i += r;
if i >= 10_000 {
break;
}
}
});
b.join().unwrap();
a.join().unwrap();
}
}
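The two-lap position encoding used by the helpers above can be demonstrated in isolation. This is a minimal, hedged sketch (std only): head and tail live in `[0, 2 * capacity)`, so equal positions mean "empty" while equal buffer indices on different laps mean "full", with no wasted slot or extra flag. `increment` is written here as a plain modulo for clarity; the branchy version above is equivalent for the step sizes the buffer uses.

```rust
// Map a two-lap position to an actual index into the buffer storage.
fn collapse_position(capacity: usize, pos: usize) -> usize {
    if pos < capacity { pos } else { pos - capacity }
}

// Bytes readable: the forward distance from `tail` to `head`, modulo `2 * capacity`.
fn tail_to_head(capacity: usize, tail: usize, head: usize) -> usize {
    if tail <= head {
        head - tail
    } else {
        2 * capacity - tail + head
    }
}

// Advance a position, wrapping at `2 * capacity` (simplified modulo formulation).
fn increment(capacity: usize, pos: usize, n: usize) -> usize {
    (pos + n) % (2 * capacity)
}
```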
================================================
FILE: src/runtime.rs
================================================
//! Traits for async runtime functionality needed by `turbulence`.
use std::{future::Future, time::Duration};
/// This is similar to the `futures::task::Spawn` trait, but it is generic over the spawned
/// future, which is better for backends like tokio.
pub trait Spawn: Send + Sync {
fn spawn<F>(&self, future: F)
where
F: Future<Output = ()> + Send + 'static;
}
/// This is designed so that it can be implemented on multiple platforms with multiple runtimes,
/// including `wasm32-unknown-unknown`, where `std::time::Instant` is unavailable.
pub trait Timer: Send + Sync {
type Instant: Send + Sync + Copy;
type Sleep: Future<Output = ()> + Send;
/// Return the current instant.
fn now(&self) -> Self::Instant;
/// Like `std::time::Instant::duration_since`, this may panic if `later` comes before
/// `earlier`.
fn duration_between(&self, earlier: Self::Instant, later: Self::Instant) -> Duration;
/// Create a future which resolves after the given time has passed.
fn sleep(&self, duration: Duration) -> Self::Sleep;
}
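A native backend for this trait can be sketched on top of `std::time::Instant`. This is a hedged, self-contained illustration (the trait is restated locally so the example compiles on its own, and `StdTimer`/`NaiveSleep` are illustrative names, not part of turbulence): `now` and `duration_between` map directly onto `Instant`, while `Sleep` is a naive polling future that re-wakes itself until the deadline passes; a real backend would instead arm a runtime timer such as tokio's `Sleep`.

```rust
use std::{
    future::Future,
    pin::Pin,
    task::{Context, Poll},
    time::{Duration, Instant},
};

// Local restatement of the trait so the sketch is self-contained.
trait Timer: Send + Sync {
    type Instant: Send + Sync + Copy;
    type Sleep: Future<Output = ()> + Send;
    fn now(&self) -> Self::Instant;
    fn duration_between(&self, earlier: Self::Instant, later: Self::Instant) -> Duration;
    fn sleep(&self, duration: Duration) -> Self::Sleep;
}

struct StdTimer;

struct NaiveSleep {
    deadline: Instant,
}

impl Future for NaiveSleep {
    type Output = ();
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if Instant::now() >= self.deadline {
            Poll::Ready(())
        } else {
            // Busy re-wake: correct but wasteful; shown only for self-containment.
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}

impl Timer for StdTimer {
    type Instant = Instant;
    type Sleep = NaiveSleep;

    fn now(&self) -> Instant {
        Instant::now()
    }

    fn duration_between(&self, earlier: Instant, later: Instant) -> Duration {
        // Like `Instant::duration_since`, this may panic if `later` precedes `earlier`.
        later.duration_since(earlier)
    }

    fn sleep(&self, duration: Duration) -> NaiveSleep {
        NaiveSleep {
            deadline: Instant::now() + duration,
        }
    }
}
```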
================================================
FILE: src/spsc.rs
================================================
use std::{
pin::Pin,
sync::Arc,
task::{Context, Poll},
};
pub use crossbeam_channel::{TryRecvError, TrySendError};
use futures::{task::AtomicWaker, Sink, Stream};
use thiserror::Error;
#[derive(Default)]
struct Shared {
send_ready: AtomicWaker,
recv_ready: AtomicWaker,
}
pub struct Receiver<T> {
channel: crossbeam_channel::Receiver<T>,
shared: Arc<Shared>,
}
impl<T> Drop for Receiver<T> {
fn drop(&mut self) {
self.shared.send_ready.wake();
}
}
impl<T> Unpin for Receiver<T> {}
impl<T> Stream for Receiver<T> {
type Item = T;
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<T>> {
match self.try_recv() {
Ok(r) => Poll::Ready(Some(r)),
Err(TryRecvError::Disconnected) => Poll::Ready(None),
Err(TryRecvError::Empty) => {
self.shared.recv_ready.register(cx.waker());
match self.try_recv() {
Ok(r) => Poll::Ready(Some(r)),
Err(TryRecvError::Disconnected) => Poll::Ready(None),
Err(TryRecvError::Empty) => Poll::Pending,
}
}
}
}
}
impl<T> Receiver<T> {
pub fn try_recv(&mut self) -> Result<T, TryRecvError> {
let t = self.channel.try_recv()?;
self.shared.send_ready.wake();
Ok(t)
}
}
#[derive(Debug, Error)]
#[error("spsc channel disconnected")]
pub struct Disconnected;
pub struct Sender<T> {
channel: crossbeam_channel::Sender<T>,
shared: Arc<Shared>,
slot: Option<T>,
}
impl<T> Drop for Sender<T> {
fn drop(&mut self) {
self.shared.recv_ready.wake()
}
}
impl<T> Unpin for Sender<T> {}
impl<T> Sink<T> for Sender<T> {
type Error = Disconnected;
fn poll_ready(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Result<(), Self::Error>> {
if let Some(t) = self.slot.take() {
match self.try_send(t) {
Ok(()) => Poll::Ready(Ok(())),
Err(TrySendError::Disconnected(_)) => Poll::Ready(Err(Disconnected)),
Err(TrySendError::Full(t)) => {
self.shared.send_ready.register(cx.waker());
match self.try_send(t) {
Ok(()) => Poll::Ready(Ok(())),
Err(TrySendError::Disconnected(_)) => Poll::Ready(Err(Disconnected)),
Err(TrySendError::Full(t)) => {
self.slot = Some(t);
Poll::Pending
}
}
}
}
} else {
Poll::Ready(Ok(()))
}
}
fn start_send(mut self: Pin<&mut Self>, item: T) -> Result<(), Self::Error> {
if self.slot.replace(item).is_some() {
panic!("start_send called without being ready");
}
Ok(())
}
fn poll_flush(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Result<(), Self::Error>> {
self.poll_ready(cx)
}
fn poll_close(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Result<(), Self::Error>> {
self.poll_flush(cx)
}
}
impl<T> Sender<T> {
pub fn try_send(&mut self, t: T) -> Result<(), TrySendError<T>> {
if let Some(prev) = self.slot.take() {
if let Err(err) = self.channel.try_send(prev) {
match err {
TrySendError::Full(inner) => {
self.slot = Some(inner);
return Err(TrySendError::Full(t));
}
TrySendError::Disconnected(inner) => {
self.slot = Some(inner);
return Err(TrySendError::Disconnected(t));
}
}
} else {
self.shared.recv_ready.wake();
}
}
self.channel.try_send(t)?;
self.shared.recv_ready.wake();
Ok(())
}
}
pub fn channel<T>(capacity: usize) -> (Sender<T>, Receiver<T>) {
let (sender, receiver) = crossbeam_channel::bounded(capacity);
let shared = Arc::new(Shared::default());
(
Sender {
channel: sender,
shared: shared.clone(),
slot: None,
},
Receiver {
channel: receiver,
shared,
},
)
}
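The one-item `slot` in `Sender::try_send` above preserves FIFO order when a previous send was deferred: the slotted item is always flushed before a new item is accepted, and a failure is reported against the *new* item. A hedged sketch of the same mechanism, transplanted onto `std::sync::mpsc::SyncSender` so it is std-only and testable (`SlotSender` is an illustrative name, not part of turbulence):

```rust
use std::sync::mpsc::{self, TrySendError};

struct SlotSender<T> {
    channel: mpsc::SyncSender<T>,
    // Holds at most one item that could not be sent previously; it keeps its
    // place at the front of the queue.
    slot: Option<T>,
}

impl<T> SlotSender<T> {
    fn try_send(&mut self, t: T) -> Result<(), TrySendError<T>> {
        if let Some(prev) = self.slot.take() {
            match self.channel.try_send(prev) {
                Ok(()) => {}
                Err(TrySendError::Full(prev)) => {
                    // Put the deferred item back and fail the *new* item, so
                    // ordering is preserved.
                    self.slot = Some(prev);
                    return Err(TrySendError::Full(t));
                }
                Err(TrySendError::Disconnected(prev)) => {
                    self.slot = Some(prev);
                    return Err(TrySendError::Disconnected(t));
                }
            }
        }
        self.channel.try_send(t)
    }
}
```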
================================================
FILE: src/unreliable_bincode_channel.rs
================================================
use std::{
marker::PhantomData,
task::{Context, Poll},
};
use bincode::Options as _;
use futures::{future, ready, task};
use serde::{Deserialize, Serialize};
use thiserror::Error;
use crate::{
packet::PacketPool,
runtime::Timer,
unreliable_channel::{self, UnreliableChannel},
};
#[derive(Debug, Error)]
pub enum SendError {
#[error("unreliable channel error: {0}")]
UnreliableChannelError(#[from] unreliable_channel::SendError),
/// Non-fatal error, message is unsent.
#[error("bincode serialization error: {0}")]
BincodeError(#[from] bincode::Error),
}
#[derive(Debug, Error)]
pub enum RecvError {
#[error("unreliable channel error: {0}")]
UnreliableChannelError(#[from] unreliable_channel::RecvError),
/// Non-fatal error, message is skipped.
#[error("bincode serialization error: {0}")]
BincodeError(#[from] bincode::Error),
}
/// Wraps an `UnreliableChannel` together with an internal buffer to make it easy to send
/// message types serialized with `bincode`.
///
/// Just like the underlying channel, messages are not guaranteed to arrive, nor are they guaranteed
/// to arrive in order.
pub struct UnreliableBincodeChannel<T, P>
where
T: Timer,
P: PacketPool,
{
channel: UnreliableChannel<T, P>,
pending_write: Vec<u8>,
}
impl<T, P> From<UnreliableChannel<T, P>> for UnreliableBincodeChannel<T, P>
where
T: Timer,
P: PacketPool,
{
fn from(channel: UnreliableChannel<T, P>) -> Self {
Self::new(channel)
}
}
impl<T, P> UnreliableBincodeChannel<T, P>
where
T: Timer,
P: PacketPool,
{
pub fn new(channel: UnreliableChannel<T, P>) -> Self {
UnreliableBincodeChannel {
channel,
pending_write: Vec::new(),
}
}
pub fn into_inner(self) -> UnreliableChannel<T, P> {
self.channel
}
/// Maximum allowed message length based on the packet capacity of the provided `PacketPool`.
///
/// Will never be greater than `MAX_PACKET_LEN - 2`.
pub fn max_message_len(&self) -> u16 {
self.channel.max_message_len()
}
/// Write the given serializable message type to the channel.
///
/// Messages are coalesced into larger packets before being sent, so in order to guarantee that
/// the message is actually sent, you must call `flush`.
///
/// This method is cancel safe; it will never partially send a message, and it completes
/// immediately upon successfully queuing a message to send.
pub async fn send<M: Serialize>(&mut self, msg: &M) -> Result<(), SendError> {
future::poll_fn(|cx| self.poll_send_ready(cx)).await?;
self.start_send(msg)?;
Ok(())
}
pub fn try_send<M: Serialize>(&mut self, msg: &M) -> Result<bool, SendError> {
if self.try_send_ready()? {
self.start_send(msg)?;
Ok(true)
} else {
Ok(false)
}
}
/// Finish sending any unsent coalesced packets.
///
/// This *must* be called to guarantee that any sent messages are actually sent to the outgoing
/// packet stream.
///
/// This method is cancel safe.
pub async fn flush(&mut self) -> Result<(), unreliable_channel::SendError> {
future::poll_fn(|cx| self.poll_flush(cx)).await
}
pub fn try_flush(&mut self) -> Result<bool, unreliable_channel::SendError> {
match self.poll_flush(&mut Context::from_waker(task::noop_waker_ref())) {
Poll::Pending => Ok(false),
Poll::Ready(Ok(())) => Ok(true),
Poll::Ready(Err(err)) => Err(err),
}
}
/// Receive a deserializable message type as soon as the next message is available.
///
/// This method is cancel safe; it will never partially read a message or drop received
/// messages.
pub async fn recv<'a, M: Deserialize<'a>>(&'a mut self) -> Result<M, RecvError> {
let bincode_config = self.bincode_config();
let msg = self.channel.recv().await?;
Ok(bincode_config.deserialize(msg)?)
}
pub fn try_recv<'a, M: Deserialize<'a>>(&'a mut self) -> Result<Option<M>, RecvError> {
match self.poll_recv::<M>(&mut Context::from_waker(task::noop_waker_ref())) {
Poll::Pending => Ok(None),
Poll::Ready(Ok(val)) => Ok(Some(val)),
Poll::Ready(Err(err)) => Err(err),
}
}
pub fn poll_send_ready(
&mut self,
cx: &mut Context,
) -> Poll<Result<(), unreliable_channel::SendError>> {
if !self.pending_write.is_empty() {
ready!(self.channel.poll_send(cx, &self.pending_write))?;
self.pending_write.clear();
}
Poll::Ready(Ok(()))
}
pub fn try_send_ready(&mut self) -> Result<bool, unreliable_channel::SendError> {
match self.poll_send_ready(&mut Context::from_waker(task::noop_waker_ref())) {
Poll::Pending => Ok(false),
Poll::Ready(Ok(())) => Ok(true),
Poll::Ready(Err(err)) => Err(err),
}
}
pub fn start_send<M: Serialize>(&mut self, msg: &M) -> Result<(), bincode::Error> {
assert!(self.pending_write.is_empty());
let bincode_config = self.bincode_config();
bincode_config.serialize_into(&mut self.pending_write, msg)?;
Ok(())
}
pub fn poll_flush(
&mut self,
cx: &mut Context,
) -> Poll<Result<(), unreliable_channel::SendError>> {
ready!(self.poll_send_ready(cx))?;
ready!(self.channel.poll_flush(cx))?;
Poll::Ready(Ok(()))
}
pub fn poll_recv<'a, M: Deserialize<'a>>(
&'a mut self,
cx: &mut Context,
) -> Poll<Result<M, RecvError>> {
let bincode_config = self.bincode_config();
let msg = ready!(self.channel.poll_recv(cx))?;
Poll::Ready(Ok(bincode_config.deserialize::<M>(msg)?))
}
fn bincode_config(&self) -> impl bincode::Options + Copy {
bincode::options().with_limit(self.max_message_len() as u64)
}
}
/// Wrapper over an `UnreliableBincodeChannel` that only allows a single message type.
pub struct UnreliableTypedChannel<T, P, M>
where
T: Timer,
P: PacketPool,
{
channel: UnreliableBincodeChannel<T, P>,
_phantom: PhantomData<M>,
}
impl<T, P, M> From<UnreliableChannel<T, P>> for UnreliableTypedChannel<T, P, M>
where
T: Timer,
P: PacketPool,
{
fn from(channel: UnreliableChannel<T, P>) -> Self {
Self::new(channel)
}
}
impl<T, P, M> UnreliableTypedChannel<T, P, M>
where
T: Timer,
P: PacketPool,
{
pub fn new(channel: UnreliableChannel<T, P>) -> Self {
Self {
channel: UnreliableBincodeChannel::new(channel),
_phantom: PhantomData,
}
}
pub fn into_inner(self) -> UnreliableChannel<T, P> {
self.channel.into_inner()
}
pub async fn flush(&mut self) -> Result<(), unreliable_channel::SendError> {
self.channel.flush().await
}
pub fn try_flush(&mut self) -> Result<bool, unreliable_channel::SendError> {
self.channel.try_flush()
}
pub fn poll_flush(
&mut self,
cx: &mut Context,
) -> Poll<Result<(), unreliable_channel::SendError>> {
self.channel.poll_flush(cx)
}
pub fn poll_send_ready(
&mut self,
cx: &mut Context,
) -> Poll<Result<(), unreliable_channel::SendError>> {
self.channel.poll_send_ready(cx)
}
pub fn try_send_ready(&mut self) -> Result<bool, unreliable_channel::SendError> {
self.channel.try_send_ready()
}
}
impl<T, P, M> UnreliableTypedChannel<T, P, M>
where
T: Timer,
P: PacketPool,
M: Serialize,
{
pub async fn send(&mut self, msg: &M) -> Result<(), SendError> {
self.channel.send(msg).await
}
pub fn try_send(&mut self, msg: &M) -> Result<bool, SendError> {
self.channel.try_send(msg)
}
pub fn start_send(&mut self, msg: &M) -> Result<(), bincode::Error> {
self.channel.start_send(msg)
}
}
impl<'a, T, P, M> UnreliableTypedChannel<T, P, M>
where
T: Timer,
P: PacketPool,
M: Deserialize<'a>,
{
pub async fn recv(&'a mut self) -> Result<M, RecvError> {
self.channel.recv::<M>().await
}
pub fn try_recv(&'a mut self) -> Result<Option<M>, RecvError> {
self.channel.try_recv::<M>()
}
pub fn poll_recv(&'a mut self, cx: &mut Context) -> Poll<Result<M, RecvError>> {
self.channel.poll_recv::<M>(cx)
}
}
================================================
FILE: src/unreliable_channel.rs
================================================
use std::{
convert::TryInto,
future::Future,
mem,
pin::Pin,
task::{Context, Poll},
};
use byteorder::{ByteOrder, LittleEndian};
use futures::{future, ready, task, SinkExt, StreamExt};
use thiserror::Error;
use crate::{
bandwidth_limiter::BandwidthLimiter,
packet::{Packet, PacketPool, MAX_PACKET_LEN},
runtime::Timer,
spsc,
};
#[derive(Debug, Error)]
/// Fatal error due to channel disconnection.
#[error("incoming or outgoing packet channel has been disconnected")]
pub struct Disconnected;
#[derive(Debug, Error)]
pub enum SendError {
#[error(transparent)]
Disconnected(#[from] Disconnected),
/// Non-fatal error, message is unsent.
#[error("message is larger than fits in the maximum packet size")]
TooBig,
}
#[derive(Debug, Error)]
pub enum RecvError {
#[error(transparent)]
Disconnected(#[from] Disconnected),
/// Non-fatal error, the remainder of the incoming packet is dropped.
#[error("incoming packet has bad message format")]
BadFormat,
}
#[derive(Debug, Clone, PartialEq)]
pub struct Settings {
/// The target outgoing bandwidth, in bytes / sec.
pub bandwidth: u32,
/// The maximum amount of bandwidth credit that can accumulate. This is the maximum bytes that
/// will be sent in a single burst.
pub burst_bandwidth: u32,
}
/// Turns a stream of unreliable, unordered packets into a stream of unreliable, unordered messages.
pub struct UnreliableChannel<T, P>
where
T: Timer,
P: PacketPool,
{
timer: T,
packet_pool: P,
bandwidth_limiter: BandwidthLimiter<T>,
sender: spsc::Sender<P::Packet>,
receiver: spsc::Receiver<P::Packet>,
out_packet: P::Packet,
in_packet: Option<(P::Packet, usize)>,
delay_until_available: Pin<Box<Option<T::Sleep>>>,
}
impl<T, P> UnreliableChannel<T, P>
where
T: Timer,
P: PacketPool,
{
pub fn new(
timer: T,
mut packet_pool: P,
settings: Settings,
sender: spsc::Sender<P::Packet>,
receiver: spsc::Receiver<P::Packet>,
) -> Self {
let out_packet = packet_pool.acquire();
let bandwidth_limiter =
BandwidthLimiter::new(&timer, settings.bandwidth, settings.burst_bandwidth);
UnreliableChannel {
timer,
packet_pool,
bandwidth_limiter,
receiver,
sender,
out_packet,
in_packet: None,
delay_until_available: Box::pin(None),
}
}
/// Maximum allowed message length based on the packet capacity of the provided `PacketPool`.
///
/// Will never be greater than `MAX_PACKET_LEN - 2`.
pub fn max_message_len(&self) -> u16 {
self.packet_pool.capacity().min(MAX_PACKET_LEN as usize) as u16 - 2
}
/// Write the given message to the channel.
///
/// Messages are coalesced into larger packets before being sent, so in order to guarantee that
/// the message is actually sent, you must call `flush`.
///
/// Messages have a maximum size based on the size of the packets returned from the packet pool.
/// Two bytes are used to encode the length of the message, so the maximum message length is
/// `packet.len() - 2`, for whatever packet sizes are returned by the pool.
///
/// This method is cancel safe; it will never partially send a message, and the future will
/// complete immediately after writing a message.
pub async fn send(&mut self, msg: &[u8]) -> Result<(), SendError> {
future::poll_fn(|cx| self.poll_send(cx, msg)).await
}
pub fn try_send(&mut self, msg: &[u8]) -> Result<bool, SendError> {
match self.poll_send(&mut Context::from_waker(task::noop_waker_ref()), msg) {
Poll::Pending => Ok(false),
Poll::Ready(Ok(())) => Ok(true),
Poll::Ready(Err(err)) => Err(err),
}
}
/// Finish sending any unsent coalesced packets.
///
/// This *must* be called to guarantee that any sent messages are actually sent to the outgoing
/// packet stream.
///
/// This method is cancel safe.
pub async fn flush(&mut self) -> Result<(), Disconnected> {
future::poll_fn(|cx| self.poll_flush(cx)).await
}
pub fn try_flush(&mut self) -> Result<bool, Disconnected> {
match self.poll_flush(&mut Context::from_waker(task::noop_waker_ref())) {
Poll::Pending => Ok(false),
Poll::Ready(Ok(())) => Ok(true),
Poll::Ready(Err(err)) => Err(err),
}
}
/// Receive the next available message, returning a slice borrowed from the channel's
/// internal packet buffer.
///
/// The returned slice is only valid until the next receive call.
///
/// This method is cancel safe; it will never partially read a message or drop received
/// messages.
pub async fn recv(&mut self) -> Result<&[u8], RecvError> {
future::poll_fn(|cx| self.poll_recv_ready(cx)).await?;
self.recv_next()
}
pub fn try_recv(&mut self) -> Result<Option<&[u8]>, RecvError> {
match self.poll_recv_ready(&mut Context::from_waker(task::noop_waker_ref())) {
Poll::Pending => Ok(None),
Poll::Ready(Ok(())) => Ok(Some(self.recv_next()?)),
Poll::Ready(Err(err)) => Err(err),
}
}
pub fn poll_send(&mut self, cx: &mut Context, msg: &[u8]) -> Poll<Result<(), SendError>> {
ready!(self.poll_send_ready(cx, msg.len()))?;
let mut send = self.start_send();
send.buffer()[0..msg.len()].copy_from_slice(msg);
send.finish(msg.len());
Poll::Ready(Ok(()))
}
/// Wait until we can send at least a `msg_len` length message via `start_send`.
///
/// The available message length may be more than requested; if `msg_len` is zero, this
/// will return as soon as a message of any length can be sent.
pub fn poll_send_ready(
&mut self,
cx: &mut Context,
msg_len: usize,
) -> Poll<Result<(), SendError>> {
let msg_len: u16 = msg_len.try_into().map_err(|_| SendError::TooBig)?;
let start = self.out_packet.len();
if self.packet_pool.capacity() - start < msg_len as usize + 2 {
ready!(self.poll_flush(cx))?;
if self.packet_pool.capacity() < msg_len as usize + 2 {
return Poll::Ready(Err(SendError::TooBig));
}
}
Poll::Ready(Ok(()))
}
/// Start sending a message up to the maximum remaining available message length.
///
/// # Panics
/// May panic if called before `poll_send_ready` has returned `Ready` for some message length.
pub fn start_send(&mut self) -> StartSend<P::Packet> {
StartSend::new(&mut self.out_packet, self.packet_pool.capacity())
}
pub fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), Disconnected>> {
if self.out_packet.is_empty() {
return Poll::Ready(Ok(()));
}
if self.delay_until_available.is_none() {
self.bandwidth_limiter.update_available(&self.timer);
if let Some(delay) = self.bandwidth_limiter.delay_until_available(&self.timer) {
self.delay_until_available.set(Some(delay));
}
}
if let Some(delay) = self.delay_until_available.as_mut().as_pin_mut() {
ready!(delay.poll(cx));
self.delay_until_available.set(None);
}
ready!(self.sender.poll_ready_unpin(cx)).map_err(|_| Disconnected)?;
let out_packet = mem::replace(&mut self.out_packet, self.packet_pool.acquire());
self.bandwidth_limiter.take_bytes(out_packet.len() as u32);
self.sender
.start_send_unpin(out_packet)
.map_err(|_| Disconnected)?;
self.sender.poll_flush_unpin(cx).map_err(|_| Disconnected)
}
pub fn poll_recv(&mut self, cx: &mut Context) -> Poll<Result<&[u8], RecvError>> {
ready!(self.poll_recv_ready(cx))?;
Poll::Ready(self.recv_next())
}
fn poll_recv_ready(&mut self, cx: &mut Context) -> Poll<Result<(), RecvError>> {
if let Some((packet, in_pos)) = &self.in_packet {
if *in_pos == packet.len() {
self.in_packet = None;
}
}
if self.in_packet.is_none() {
let packet = ready!(self.receiver.poll_next_unpin(cx)).ok_or(Disconnected)?;
self.in_packet = Some((packet, 0));
}
Poll::Ready(Ok(()))
}
fn recv_next(&mut self) -> Result<&[u8], RecvError> {
let (packet, in_pos) = self.in_packet.as_mut().unwrap();
assert_ne!(*in_pos, packet.len());
if *in_pos + 2 > packet.len() {
*in_pos = packet.len();
return Err(RecvError::BadFormat);
}
let length = LittleEndian::read_u16(&packet[*in_pos..*in_pos + 2]) as usize;
*in_pos += 2;
if *in_pos + length > packet.len() {
*in_pos = packet.len();
return Err(RecvError::BadFormat);
}
let msg = &packet[*in_pos..*in_pos + length];
*in_pos += length;
Ok(msg)
}
}
pub struct StartSend<'a, P> {
packet: &'a mut P,
start: usize,
capacity: usize,
}
impl<'a, P: Packet> StartSend<'a, P> {
fn new(packet: &'a mut P, capacity: usize) -> Self {
assert!(
capacity >= packet.len() + 2,
"not enough room to write size header"
);
let start = packet.len();
packet.resize(capacity, 0);
Self {
packet,
start,
capacity,
}
}
/// Returns the buffer to write the outgoing message into.
pub fn buffer(&mut self) -> &mut [u8] {
&mut self.packet[self.start + 2..]
}
/// Finish writing a message that has been written into the provided buffer.
///
/// # Panics
/// Panics if called with a message length larger than the size of the provided buffer.
pub fn finish(self, msg_len: usize) {
assert!(
msg_len <= self.capacity - self.start - 2,
"cannot send packet greater than size of provided buffer"
);
let msg_len: u16 = msg_len.try_into().unwrap();
LittleEndian::write_u16(&mut self.packet[self.start..self.start + 2], msg_len);
self.packet.truncate(self.start + msg_len as usize + 2);
}
}
================================================
FILE: src/windows.rs
================================================
use std::{cmp::Ordering, num::Wrapping, u32};
use crate::ring_buffer::{self, RingBuffer};
pub type StreamPos = Wrapping<u32>;
/// Compare the given wrapping stream positions.
///
/// A value `a` is considered less than `b` if it is faster to get to `a` from `b` by going left
/// than by going right, and `a` is considered greater than `b` if the opposite is true.
///
/// Cannot be used to implement `Ord` because this operation is not transitive.
///
/// In the case of a tie, where `a` != `b` but `a - b == b - a` (in other words, where both values
/// are exactly opposite each other), there is no sensible wrapping order for `a` and `b`. In
/// order to use `stream_cmp` sensibly, we must ensure that `StreamPos` values can never be more than
/// `u32::MAX / 2` (or 2^31 - 1) apart.
pub fn stream_cmp(a: StreamPos, b: StreamPos) -> Option<Ordering> {
let ord = (b - a).cmp(&(a - b));
if ord == Ordering::Equal && a != b {
None
} else {
Some(ord)
}
}
pub fn stream_lt(a: StreamPos, b: StreamPos) -> bool {
stream_cmp(a, b).map(Ordering::is_lt).unwrap_or(false)
}
pub fn stream_le(a: StreamPos, b: StreamPos) -> bool {
stream_cmp(a, b).map(Ordering::is_le).unwrap_or(false)
}
pub fn stream_gt(a: StreamPos, b: StreamPos) -> bool {
stream_cmp(a, b).map(Ordering::is_gt).unwrap_or(false)
}
pub fn stream_ge(a: StreamPos, b: StreamPos) -> bool {
stream_cmp(a, b).map(Ordering::is_ge).unwrap_or(false)
}
#[derive(Debug, Eq, PartialEq)]
pub enum AckResult {
/// This range was not found, or the ack covered more than was sent.
NotFound,
/// This range was fully acked.
Ack,
/// This range was a partial ack of a previously sent range, and the range from the end of the
/// provided range to this stream position should be considered nacked.
PartialAck(StreamPos),
}
pub struct SendWindowWriter {
writer: ring_buffer::Writer,
}
impl SendWindowWriter {
/// Write the given data to the end of the send buffer, up to the available amount to be
/// written.
pub fn write(&mut self, data: &[u8]) -> u32 {
let len = self.writer.write(0, data);
self.writer.advance(len);
len as u32
}
/// The amount of data available to be written
pub fn write_available(&self) -> u32 {
self.writer.buffer().write_available() as u32
}
}
/// Coalesces and buffers outgoing stream data up to a configured window capacity and keeps it
/// available to resend until it is acknowledged from the remote.
pub struct SendWindow {
reader: ring_buffer::Reader,
// The stream position of the first byte of the outgoing buffer after the "sent" bytes.
send_pos: StreamPos,
// The number of bytes at the beginning of the outgoing buffer that have already been sent, but
// are being kept in case they need to be retransmitted.
sent: u32,
// The set of sent but un-acked stream ranges. All of these ranges should be non-empty and
// non-overlapping, the list should remain sorted in wrap-around stream ordering, and all of the
// ranges should fall within the "sent" portion of the buffer.
unacked_ranges: Vec<(StreamPos, StreamPos)>,
}
impl SendWindow {
pub fn new(capacity: u32, stream_start: StreamPos) -> (SendWindow, SendWindowWriter) {
// Any more than this and the unacked list might not be totally ordered.
assert!(capacity <= u32::MAX / 2);
let (writer, reader) = RingBuffer::new(capacity as usize);
(
SendWindow {
reader,
send_pos: stream_start,
sent: 0,
unacked_ranges: Vec::new(),
},
SendWindowWriter { writer },
)
}
/// The amount of data available to be written
pub fn write_available(&self) -> u32 {
self.reader.buffer().write_available() as u32
}
/// The stream position of the next byte of data that would be sent with a call to
/// `SendWindow::send`.
pub fn send_pos(&self) -> StreamPos {
self.send_pos
}
pub fn send_available(&self) -> u32 {
self.reader.available() as u32 - self.sent
}
/// Send any pending written data up to the size of the provided buffer, and add this sent range
/// as an unacked range.
///
/// Returns the stream range of the sent data. Not all of the provided buffer is necessarily
/// written; only the data from the start of the buffer to the length of the returned stream
/// range is actually written. This will never return a zero-sized range: if no data is
/// available to be sent or the provided buffer is empty, it returns `None`.
pub fn send(&mut self, data: &mut [u8]) -> Option<(StreamPos, StreamPos)> {
let send_amt = (self.reader.available() - self.sent as usize).min(data.len()) as u32;
if send_amt == 0 {
None
} else {
assert_eq!(
self.reader
.read(self.sent as usize, &mut data[0..send_amt as usize]),
send_amt as usize,
);
let start = self.send_pos;
let end = start + Wrapping(send_amt);
self.sent += send_amt;
self.send_pos = end;
self.unacked_ranges.push((start, end));
Some((start, end))
}
}
/// Returns the stream position after the last contiguously acked sent data. The stream data
/// from `unacked_start` to `send_pos` is sent but not yet fully acked, and is retained in the
/// send buffer.
pub fn unacked_start(&self) -> StreamPos {
self.send_pos - Wrapping(self.sent)
}
/// Fetches a portion of the unacked region of the send buffer. Range must be within
/// [unacked_start, send_pos].
pub fn get_unacked(&self, start: StreamPos, data: &mut [u8]) {
let unacked_start = self.unacked_start();
let buf_start = (start - unacked_start).0 as usize;
assert_eq!(self.reader.read(buf_start as usize, data), data.len());
}
/// Acknowledge the receipt of the given stream range from the remote, and thus potentially free
/// up send buffer space.
///
/// Acknowledged ranges are allowed to be equal to or shorter than the sent ranges, but they
/// *must* start with the same stream position. Acked ranges will be ignored if they are empty
/// or do not start with the same position as a previously sent, unacked range.
pub fn ack_range(&mut self, start: StreamPos, end: StreamPos) -> AckResult {
if self.unacked_ranges.is_empty() {
return AckResult::NotFound;
}
if !stream_lt(start, end) {
return AckResult::NotFound;
}
if !stream_ge(start, self.unacked_ranges.first().unwrap().0)
|| !stream_le(end, self.unacked_ranges.last().unwrap().1)
{
return AckResult::NotFound;
}
match self
.unacked_ranges
.binary_search_by(|(range_start, _)| stream_cmp(*range_start, start).unwrap())
{
Ok(i) => {
if stream_gt(end, self.unacked_ranges[i].1) {
AckResult::NotFound
} else {
let unacked_start = self.unacked_start();
if end == self.unacked_ranges[i].1 {
self.unacked_ranges.remove(i);
if start == unacked_start {
assert_eq!(i, 0);
if self.unacked_ranges.is_empty() {
self.reader.advance(self.sent as usize);
self.sent = 0;
} else {
let acked_amt = (self.unacked_ranges[0].0 - start).0;
self.reader.advance(acked_amt as usize);
self.sent -= acked_amt;
}
}
AckResult::Ack
} else {
if start == unacked_start {
assert_eq!(i, 0);
let acked_amt = (end - start).0;
self.reader.advance(acked_amt as usize);
self.sent -= acked_amt;
}
self.unacked_ranges[i].0 = end;
AckResult::PartialAck(self.unacked_ranges[i].1)
}
}
}
Err(_) => AckResult::NotFound,
}
}
}
pub struct RecvWindowReader {
reader: ring_buffer::Reader,
}
impl RecvWindowReader {
/// Read any ready data off of the beginning of the read buffer and return the number of bytes
/// read.
pub fn read(&mut self, data: &mut [u8]) -> u32 {
let len = self.reader.read(0, data);
self.reader.advance(len);
len as u32
}
}
/// Receives stream data up to a configured window capacity, in any order, and combines it into an
/// ordered stream.
pub struct RecvWindow {
writer: ring_buffer::Writer,
// The current stream position of the first byte of the incoming buffer after the "ready" bytes.
recv_pos: StreamPos,
// An ordered list (in wrap-around stream positions) of non-contiguous received regions of data
// in the buffer that do not connect with the "ready" data. This is used to receive
// out-of-order data and allow it to be recombined into an in-order stream.
//
// The invariants here are:
// 1) The list must contain non-overlapping, non-"touching" regions. In other words, the end of
// unready region i cannot be equal to or greater than the start of unready region i + 1.
// 2) The list must contain no empty regions; the end of any unready region must be strictly
// greater than the beginning.
// 3) The list must not contain regions spanning such a large distance that the wrap-around
// ordering of the regions is no longer total.
unready: Vec<(StreamPos, StreamPos)>,
}
impl RecvWindow {
pub fn new(capacity: u32, stream_start: StreamPos) -> (RecvWindow, RecvWindowReader) {
// Any more than this and the unready list might not be totally ordered.
assert!(capacity <= u32::MAX / 2);
let (writer, reader) = RingBuffer::new(capacity as usize);
(
RecvWindow {
writer,
recv_pos: stream_start,
unready: Vec::new(),
},
RecvWindowReader { reader },
)
}
/// The amount of contiguous data available to be read
pub fn read_available(&self) -> u32 {
self.writer.buffer().read_available() as u32
}
/// The stream position past which no more data can be received. This window end will move
/// forward as data is read.
pub fn window_end(&self) -> StreamPos {
self.recv_pos + Wrapping(self.writer.available() as u32)
}
/// Receive a new block of data and return the upper bound of the stream range that was
/// successfully stored.
///
/// If redundant data is received, all redundant data will be returned as successfully stored,
/// even data that has already been read out. It will *not* be checked for consistency with
/// existing data; it will simply be ignored and assumed to be identical.
///
/// The returned upper bound will never be beyond the current window end; any data that falls
/// beyond the receive window cannot be stored.
///
/// The range formed by the start position and the returned upper bound will never be empty: it
/// will either be a non-empty range of successfully received data, or this method will return
/// None. That range will also never be larger than the provided data; it will be equal in size
/// or smaller.
///
/// Received data may not be made immediately available for read if it is not contiguous with
/// the existing ready data.
pub fn recv(&mut self, start_pos: StreamPos, data: &[u8]) -> Option<StreamPos> {
assert!(data.len() <= u32::MAX as usize / 2);
// `recv_end_pos` is the stream position at the end of the maximum capacity of the receive
// buffer.
let recv_end_pos = self.recv_pos + Wrapping(self.writer.available() as u32);
// `end_pos` is the stream position at the end of the input data
let end_pos = start_pos + Wrapping(data.len() as u32);
// If stream positions were strictly ordered this would not be necessary, but this check
// combined with the assertions that `data.len() <= u32::MAX / 2` and `self.capacity <=
// u32::MAX / 2` should prevent wrapping issues.
if !stream_lt(start_pos, recv_end_pos) {
return None;
}
// `copy_start_pos` is the stream position at either the given `start_pos`, or the current
// receive position, whichever is greater. We do not copy data that has already been
// received, so this is where we will begin copying.
let copy_start_pos = if stream_gt(self.recv_pos, start_pos) {
self.recv_pos
} else {
start_pos
};
// We calculate the `end_pos` as being either the previous `end_pos` or the stream position
// at the maximum capacity of the receive buffer. We should not read more data than the
// requested buffer capacity can hold.
let end_pos = if stream_lt(end_pos, recv_end_pos) {
end_pos
} else {
recv_end_pos
};
// If we are not copying any new data (the range from `copy_start_pos` to `end_pos` is
// empty), then we are done.
if stream_ge(copy_start_pos, end_pos) {
// We should only return an end position if there is actually acknowledged data (it
// doesn't matter if the data has already been read and we skip copying it).
if stream_lt(start_pos, end_pos) {
return Some(end_pos);
} else {
return None;
}
}
// The index in the source buffer where we start copying from
let data_start = (copy_start_pos - start_pos).0 as usize;
// The index in the receive buffer where we start copying to
let buf_start = (copy_start_pos - self.recv_pos).0 as usize;
// The index in the receive buffer where we stop copying
let buf_end = (end_pos - self.recv_pos).0 as usize;
assert_eq!(
self.writer.write(
buf_start,
&data[data_start..data_start + buf_end - buf_start],
),
buf_end - buf_start
);
// Very, very carefully, combine this newly received region with the existing unready
// regions and maintain all the invariants of the unready list.
if stream_ge(self.recv_pos, start_pos) {
// If this received region touches the end of the ready block, we need to combine this
// region with the ready block, and any unready regions that it overlaps with also need
// to be combined into the ready block.
let pos = match self
.unready
.binary_search_by(|(_, end)| stream_cmp(*end, end_pos).unwrap())
{
Ok(i) => i,
Err(i) => i,
};
let end = if pos == self.unready.len() {
self.unready.clear();
end_pos
} else if stream_ge(end_pos, self.unready[pos].0) {
let end = self.unready[pos].1;
self.unready.drain(0..=pos);
end
} else {
end_pos
};
self.writer.advance((end - self.recv_pos).0 as usize);
self.recv_pos = end;
} else {
// If this received region does not touch the end of the ready block, we just need to
// combine this with the other unready regions to maintain the invariants. It must be
// combined with any overlapping unready regions, and with any unready regions that exactly
// touch it.
let insert_pos = match self
.unready
.binary_search_by(|(_, end)| stream_cmp(*end, start_pos).unwrap())
{
Ok(i) => i,
Err(i) => i,
};
if insert_pos == self.unready.len() {
self.unready.push((start_pos, end_pos));
} else {
for i in insert_pos..self.unready.len() {
if stream_lt(end_pos, self.unready[i].0) {
if i == insert_pos {
self.unready.insert(insert_pos, (start_pos, end_pos));
} else {
self.unready.drain(insert_pos + 1..i);
if stream_lt(start_pos, self.unready[insert_pos].0) {
self.unready[insert_pos].0 = start_pos;
}
self.unready[insert_pos].1 = end_pos;
}
break;
} else if stream_lt(end_pos, self.unready[i].1) || i == self.unready.len() - 1 {
let start = self.unready[insert_pos].0;
self.unready.drain(insert_pos..i);
self.unready[insert_pos].0 = if stream_lt(start_pos, start) {
start_pos
} else {
start
};
if stream_gt(end_pos, self.unready[insert_pos].1) {
self.unready[insert_pos].1 = end_pos;
}
break;
}
}
}
}
Some(end_pos)
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::u32;
#[test]
fn test_send_window() {
let stream_start = Wrapping(u32::MAX - 11);
let write_data = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15];
let mut send_data = [0; 16];
let (mut send_window, mut send_window_writer) = SendWindow::new(7, stream_start);
assert_eq!(send_window_writer.writer.available(), 7);
assert_eq!(send_window.send_pos(), stream_start);
assert_eq!(send_window_writer.write(&write_data[0..4]), 4);
assert_eq!(send_window_writer.write(&write_data[4..6]), 2);
assert_eq!(send_window_writer.write(&write_data[6..10]), 1);
assert_eq!(send_window.send_pos(), stream_start);
assert_eq!(send_window.send_available(), 7);
assert_eq!(
send_window.send(&mut send_data[0..6]),
Some((stream_start, stream_start + Wrapping(6)))
);
for i in 0..6 {
assert_eq!(send_data[i], i as u8);
}
assert_eq!(send_window.send_pos(), stream_start + Wrapping(6));
assert_eq!(send_window_writer.writer.available(), 0);
assert_eq!(
send_window.ack_range(stream_start, stream_start + Wrapping(4)),
AckResult::PartialAck(stream_start + Wrapping(6))
);
assert_eq!(send_window_writer.writer.available(), 4);
assert_eq!(send_window_writer.write(&write_data[7..16]), 4);
assert_eq!(
send_window.ack_range(stream_start + Wrapping(4), stream_start + Wrapping(6)),
AckResult::Ack
);
assert_eq!(send_window_writer.writer.available(), 2);
assert_eq!(send_window_writer.write(&write_data[11..16]), 2);
assert_eq!(send_window.send_available(), 7);
assert_eq!(
send_window.send(&mut send_data[6..9]),
Some((stream_start + Wrapping(6), stream_start + Wrapping(9)))
);
for i in 6..9 {
assert_eq!(send_data[i], i as u8);
}
assert_eq!(send_window.send_pos(), stream_start + Wrapping(9));
assert_eq!(send_window.send_available(), 4);
assert_eq!(
send_window.send(&mut send_data[9..11]),
Some((stream_start + Wrapping(9), stream_start + Wrapping(11)))
);
for i in 9..11 {
assert_eq!(send_data[i], i as u8);
}
assert_eq!(send_window.send_pos(), stream_start + Wrapping(11));
assert_eq!(send_window.send_available(), 2);
assert_eq!(
send_window.send(&mut send_data[11..16]),
Some((stream_start + Wrapping(11), stream_start + Wrapping(13)))
);
for i in 11..13 {
assert_eq!(send_data[i], i as u8);
}
assert_eq!(send_window.send_pos(), stream_start + Wrapping(13));
// Ack ranges that error should not affect anything
assert_eq!(
send_window.ack_range(stream_start + Wrapping(10), stream_start + Wrapping(11)),
AckResult::NotFound
);
assert_eq!(
send_window.ack_range(stream_start + Wrapping(11), stream_start + Wrapping(15)),
AckResult::NotFound
);
assert_eq!(
send_window.ack_range(stream_start + Wrapping(11), stream_start + Wrapping(12)),
AckResult::PartialAck(stream_start + Wrapping(13))
);
assert_eq!(
send_window.ack_range(stream_start + Wrapping(6), stream_start + Wrapping(9)),
AckResult::Ack
);
assert_eq!(send_window_writer.writer.available(), 3);
assert_eq!(send_window.send_pos(), stream_start + Wrapping(13));
assert_eq!(send_window_writer.write(&write_data[14..16]), 2);
assert_eq!(
send_window.ack_range(stream_start + Wrapping(12), stream_start + Wrapping(13)),
AckResult::Ack
);
assert_eq!(
send_window.ack_range(stream_start + Wrapping(9), stream_start + Wrapping(11)),
AckResult::Ack
);
assert_eq!(send_window_writer.writer.available(), 5);
assert_eq!(send_window.send_available(), 2);
assert_eq!(
send_window.send(&mut send_data[14..16]),
Some((stream_start + Wrapping(13), stream_start + Wrapping(15)))
);
for i in 14..16 {
assert_eq!(send_data[i], i as u8);
}
assert_eq!(
send_window.ack_range(stream_start + Wrapping(13), stream_start + Wrapping(14)),
AckResult::PartialAck(stream_start + Wrapping(15)),
);
assert_eq!(
send_window.ack_range(stream_start + Wrapping(14), stream_start + Wrapping(15)),
AckResult::Ack,
);
assert_eq!(send_window_writer.writer.available(), 7);
}
#[test]
fn test_recv_window() {
let stream_start = Wrapping(u32::MAX - 29);
let recv_data = [
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23,
24, 25, 26, 27, 28, 29, 30, 31,
];
let mut read_data = [0; 32];
let (mut recv_window, mut recv_window_reader) = RecvWindow::new(7, stream_start);
assert_eq!(recv_window.window_end(), stream_start + Wrapping(7));
assert_eq!(
recv_window.recv(stream_start + Wrapping(0), &recv_data[0..4]),
Some(stream_start + Wrapping(4))
);
assert_eq!(recv_window.window_end(), stream_start + Wrapping(7));
assert_eq!(
recv_window.recv(stream_start + Wrapping(2), &recv_data[2..6]),
Some(stream_start + Wrapping(6))
);
assert_eq!(recv_window.window_end(), stream_start + Wrapping(7));
assert_eq!(recv_window_reader.read(&mut read_data[0..3]), 3);
assert_eq!(recv_window_reader.read(&mut read_data[3..5]), 2);
for i in 0..5 {
assert_eq!(read_data[i], i as u8);
}
assert_eq!(recv_window.window_end(), stream_start + Wrapping(12));
assert_eq!(
recv_window.recv(stream_start + Wrapping(4), &recv_data[4..10]),
Some(stream_start + Wrapping(10))
);
assert_eq!(
recv_window.recv(stream_start + Wrapping(9), &recv_data[9..15]),
Some(stream_start + Wrapping(12))
);
assert_eq!(recv_window.window_end(), stream_start + Wrapping(12));
assert_eq!(recv_window_reader.reader.available(), 7);
assert_eq!(recv_window_reader.read(&mut read_data[5..10]), 5);
for i in 5..10 {
assert_eq!(read_data[i], i as u8);
}
assert_eq!(recv_window.window_end(), stream_start + Wrapping(17));
assert_eq!(
recv_window.recv(stream_start + Wrapping(25), &recv_data[25..30]),
None
);
assert_eq!(
recv_window.recv(stream_start + Wrapping(15), &recv_data[15..25]),
Some(stream_start + Wrapping(17)),
);
assert_eq!(recv_window.window_end(), stream_start + Wrapping(17));
assert_eq!(recv_window_reader.read(&mut read_data[10..20]), 2);
for i in 10..12 {
assert_eq!(read_data[i], i as u8);
}
assert_eq!(recv_window.window_end(), stream_start + Wrapping(19));
assert_eq!(
recv_window.recv(stream_start + Wrapping(10), &recv_data[10..25]),
Some(stream_start + Wrapping(19))
);
// Redundant receives
assert_eq!(
recv_window.recv(stream_start + Wrapping(2), &recv_data[2..10]),
Some(stream_start + Wrapping(10)),
);
assert_eq!(
recv_window.recv(stream_start + Wrapping(14), &recv_data[14..21]),
Some(stream_start + Wrapping(19)),
);
assert_eq!(
recv_window.recv(stream_start + Wrapping(18), &recv_data[18..21]),
Some(stream_start + Wrapping(19)),
);
// receives off of end
assert_eq!(
recv_window.recv(stream_start + Wrapping(19), &recv_data[21..25]),
None,
);
assert_eq!(
recv_window.recv(stream_start + Wrapping(20), &recv_data[22..25]),
None,
);
assert_eq!(
recv_window.recv(stream_start + Wrapping(19), &recv_data[21..21]),
None,
);
assert_eq!(recv_window_reader.read(&mut read_data[12..25]), 7);
for i in 12..19 {
assert_eq!(read_data[i], i as u8);
}
assert_eq!(recv_window.window_end(), stream_start + Wrapping(26));
assert_eq!(
recv_window.recv(stream_start + Wrapping(24), &recv_data[24..25]),
Some(stream_start + Wrapping(25))
);
assert_eq!(recv_window.window_end(), stream_start + Wrapping(26));
assert_eq!(
recv_window.recv(stream_start + Wrapping(19), &recv_data[19..24]),
Some(stream_start + Wrapping(24))
);
assert_eq!(recv_window_reader.read(&mut read_data[19..25]), 6);
for i in 19..25 {
assert_eq!(read_data[i], i as u8);
}
assert_eq!(recv_window.window_end(), stream_start + Wrapping(32));
assert_eq!(
recv_window.recv(stream_start + Wrapping(26), &recv_data[26..27]),
Some(stream_start + Wrapping(27))
);
assert_eq!(recv_window_reader.read(&mut read_data[25..32]), 0);
assert_eq!(recv_window.window_end(), stream_start + Wrapping(32));
assert_eq!(
recv_window.recv(stream_start + Wrapping(28), &recv_data[28..29]),
Some(stream_start + Wrapping(29))
);
assert_eq!(recv_window_reader.read(&mut read_data[25..32]), 0);
assert_eq!(recv_window.window_end(), stream_start + Wrapping(32));
assert_eq!(
recv_window.recv(stream_start + Wrapping(30), &recv_data[30..31]),
Some(stream_start + Wrapping(31))
);
assert_eq!(recv_window_reader.read(&mut read_data[25..32]), 0);
assert_eq!(recv_window.window_end(), stream_start + Wrapping(32));
assert_eq!(
recv_window.recv(stream_start + Wrapping(29), &recv_data[29..30]),
Some(stream_start + Wrapping(30))
);
assert_eq!(recv_window_reader.read(&mut read_data[25..32]), 0);
assert_eq!(recv_window.window_end(), stream_start + Wrapping(32));
assert_eq!(
recv_window.recv(stream_start + Wrapping(28), &recv_data[28..29]),
Some(stream_start + Wrapping(29))
);
assert_eq!(recv_window_reader.read(&mut read_data[25..32]), 0);
assert_eq!(recv_window.window_end(), stream_start + Wrapping(32));
assert_eq!(
recv_window.recv(stream_start + Wrapping(27), &recv_data[27..28]),
Some(stream_start + Wrapping(28))
);
assert_eq!(recv_window_reader.read(&mut read_data[25..32]), 0);
assert_eq!(recv_window.window_end(), stream_start + Wrapping(32));
assert_eq!(
recv_window.recv(stream_start + Wrapping(25), &recv_data[25..26]),
Some(stream_start + Wrapping(26))
);
assert_eq!(recv_window_reader.read(&mut read_data[25..31]), 6);
for i in 25..31 {
assert_eq!(read_data[i], i as u8);
}
assert_eq!(recv_window.window_end(), stream_start + Wrapping(38));
}
}
================================================
FILE: tests/compressed_bincode_channel.rs
================================================
use std::time::Duration;
use futures::channel::oneshot;
use rand::{rngs::SmallRng, thread_rng, RngCore, SeedableRng};
use turbulence::{
buffer::BufferPacketPool,
compressed_bincode_channel::CompressedBincodeChannel,
reliable_channel::{ReliableChannel, Settings},
runtime::Spawn,
spsc,
};
mod util;
use self::util::{condition_link, LinkCondition, SimpleBufferPool, SimpleRuntime};
#[test]
fn test_compressed_bincode_channel() {
const SETTINGS: Settings = Settings {
bandwidth: 2048,
recv_window_size: 512,
send_window_size: 512,
burst_bandwidth: 512,
init_send: 256,
resend_time: Duration::from_millis(50),
initial_rtt: Duration::from_millis(100),
max_rtt: Duration::from_millis(2000),
rtt_update_factor: 0.1,
rtt_resend_factor: 1.5,
};
const CONDITION: LinkCondition = LinkCondition {
loss: 0.2,
duplicate: 0.05,
delay: Duration::from_millis(40),
jitter: Duration::from_millis(10),
};
let packet_pool = BufferPacketPool::new(SimpleBufferPool(1000));
let mut runtime = SimpleRuntime::new();
let (asend, acondrecv) = spsc::channel(2);
let (acondsend, arecv) = spsc::channel(2);
condition_link(
CONDITION,
runtime.handle(),
runtime.handle(),
packet_pool.clone(),
SmallRng::from_rng(thread_rng()).unwrap(),
acondsend,
acondrecv,
);
let (bsend, bcondrecv) = spsc::channel(2);
let (bcondsend, brecv) = spsc::channel(2);
condition_link(
CONDITION,
runtime.handle(),
runtime.handle(),
packet_pool.clone(),
SmallRng::from_rng(thread_rng()).unwrap(),
bcondsend,
bcondrecv,
);
let mut stream1 = CompressedBincodeChannel::new(ReliableChannel::new(
runtime.handle(),
runtime.handle(),
packet_pool.clone(),
SETTINGS.clone(),
bsend,
arecv,
));
let mut stream2 = CompressedBincodeChannel::new(ReliableChannel::new(
runtime.handle(),
runtime.handle(),
packet_pool.clone(),
SETTINGS.clone(),
asend,
brecv,
));
let (a_done_send, mut a_done) = oneshot::channel();
runtime.spawn({
async move {
for i in 0..100 {
let send_val = vec![i as u8 + 13; i + 25];
stream1.send(&send_val).await.unwrap();
}
stream1.flush().await.unwrap();
for i in 0..100 {
let recv_val = stream1.recv::<Vec<u8>>().await.unwrap();
assert_eq!(recv_val.len(), i + 17);
}
let _ = a_done_send.send(stream1);
}
});
let (b_done_send, mut b_done) = oneshot::channel();
runtime.spawn({
async move {
for i in 0..100 {
let recv_val = stream2.recv::<Vec<u8>>().await.unwrap();
assert_eq!(recv_val, vec![i as u8 + 13; i + 25].as_slice());
}
for i in 0..100 {
let mut send_val = vec![0; i + 17];
rand::thread_rng().fill_bytes(&mut send_val);
stream2.send(&send_val).await.unwrap();
}
stream2.flush().await.unwrap();
let _ = b_done_send.send(stream2);
}
});
let mut a_done_stream = None;
let mut b_done_stream = None;
for _ in 0..100_000 {
a_done_stream = a_done_stream.or_else(|| a_done.try_recv().unwrap());
b_done_stream = b_done_stream.or_else(|| b_done.try_recv().unwrap());
if a_done_stream.is_some() && b_done_stream.is_some() {
return;
}
runtime.run_until_stalled();
runtime.advance_time(50);
}
panic!("didn't finish in time");
}
================================================
FILE: tests/message_channels.rs
================================================
use std::time::Duration;
use futures::{
channel::oneshot,
future::{self, Either},
SinkExt, StreamExt,
};
use serde::{Deserialize, Serialize};
use turbulence::{
buffer::BufferPacketPool,
message_channels::{MessageChannelMode, MessageChannelSettings, MessageChannelsBuilder},
packet_multiplexer::PacketMultiplexer,
reliable_channel,
runtime::Spawn,
unreliable_channel,
};
mod util;
use self::util::{SimpleBufferPool, SimpleRuntime};
// Define two message types, `Message1` and `Message2`
// `Message1` is a reliable message on channel "0" that has a maximum bandwidth of 4KB/s
#[derive(Serialize, Deserialize)]
struct Message1(i32);
const MESSAGE1_SETTINGS: MessageChannelSettings = MessageChannelSettings {
channel: 0,
channel_mode: MessageChannelMode::Reliable(reliable_channel::Settings {
bandwidth: 4096,
burst_bandwidth: 1024,
recv_window_size: 1024,
send_window_size: 1024,
init_send: 512,
resend_time: Duration::from_millis(100),
initial_rtt: Duration::from_millis(200),
max_rtt: Duration::from_secs(2),
rtt_update_factor: 0.1,
rtt_resend_factor: 1.5,
}),
message_buffer_size: 8,
packet_buffer_size: 8,
};
// `Message2` is an unreliable message type on channel "1"
#[derive(Serialize, Deserialize)]
struct Message2(i32);
const MESSAGE2_SETTINGS: MessageChannelSettings = MessageChannelSettings {
channel: 1,
channel_mode: MessageChannelMode::Unreliable(unreliable_channel::Settings {
bandwidth: 4096,
burst_bandwidth: 1024,
}),
message_buffer_size: 8,
packet_buffer_size: 8,
};
#[test]
fn test_message_channels() {
let mut runtime = SimpleRuntime::new();
let pool = BufferPacketPool::new(SimpleBufferPool(32));
// Set up two packet multiplexers, one for our sending "A" side and one for our receiving "B"
// side. They should both have exactly the same message types registered.
let mut multiplexer_a = PacketMultiplexer::new();
let mut builder_a = MessageChannelsBuilder::new(runtime.handle(), runtime.handle(), pool);
builder_a.register::<Message1>(MESSAGE1_SETTINGS).unwrap();
builder_a.register::<Message2>(MESSAGE2_SETTINGS).unwrap();
let mut channels_a = builder_a.build(&mut multiplexer_a);
let mut multiplexer_b = PacketMultiplexer::new();
let mut builder_b = MessageChannelsBuilder::new(runtime.handle(), runtime.handle(), pool);
builder_b.register::<Message1>(MESSAGE1_SETTINGS).unwrap();
builder_b.register::<Message2>(MESSAGE2_SETTINGS).unwrap();
let mut channels_b = builder_b.build(&mut multiplexer_b);
// Spawn a task that simulates a perfect network connection, and takes outgoing packets from
// each multiplexer and gives it to the other.
runtime.spawn(async move {
// We need to send packets bidirectionally from A -> B and B -> A, because reliable message
// channels must have a way to send acknowledgments.
let (mut a_incoming, mut a_outgoing) = multiplexer_a.start();
let (mut b_incoming, mut b_outgoing) = multiplexer_b.start();
loop {
// How to best send packets from the multiplexer to the internet and vice versa is
// somewhat complex. This is not a great example of how to do it.
//
// Calling `x_incoming.send(packet).await` here is using `IncomingMultiplexedPackets`
// `Sink` implementation, which forwards to the incoming spsc channel for whatever
// channel this packet is for. `turbulence` *only* uses sync channels with static
// size, so it is expected that this buffer might be full. You might want to instead
// use `IncomingMultiplexedPackets::try_send` here and if the incoming buffer is full,
// simply drop the packet. A full buffer means some level of the pipeline cannot keep
// up, and dropping the packet rather than blocking on delivery here means that
// a backup on one channel will not potentially block other channels from receiving
// packets.
//
// On the outgoing side, since `turbulence` assumes an unreliable transport, it also
// assumes that the actual outgoing transport can send at more or less an arbitrary
// rate. For this reason, the different internal channel types *block* on sending
// outgoing packets. It is assumed that the outgoing packet buffer would only be full
// under very high, temporary CPU load on the host, and they block to let the task that
// actually sends packets catch up. This assumption works if the outgoing stream is only
// really CPU bound: that it is not harmful to block on outgoing packets because we're
// cooperating with a task that will send UDP packets as fast as it can anyway, so we
// won't be blocking for long (and it's better not to burn up even more CPU making more
// packets that might not be sent).
//
// So why the difference, why drop incoming packets but block on outgoing packets? Well,
// this again assumes that the task that sends packets is utterly simple, that it is a
// task that just calls `sendto` or equivalent as fast as it can. On the incoming side
// the pipeline is much longer, and will usually include the actual main game loop.
// "Blocking" in this case may simply mean only processing a maximum number of incoming
// messages per tick, or something along those lines. In that case, since "blocking" is
// not a function of purely CPU load, dropping incoming packets for fairness and latency
// may be reasonable. On the outgoing side, we're not assuming that we may have somehow
// accidentally *sent* too much data; we of course assume that we are following our
// *own* rules, so the only cause of a backup should be very high CPU load.
//
// Since this test unrealistically assumes perfect delivery of an unreliable channel,
// and since this is all hard to simulate in an example with no actual network involved,
// we just provide perfect instant delivery. None of the subtlety of doing this in a
// real project is captured in this simplistic example.
match future::select(a_outgoing.next(), b_outgoing.next()).await {
Either::Left((Some(packet), _)) => {
b_incoming.send(packet).await.unwrap();
}
Either::Right((Some(packet), _)) => {
a_incoming.send(packet).await.unwrap();
}
Either::Left((None, _)) | Either::Right((None, _)) => break,
}
}
});
let (is_done_send, mut is_done_recv) = oneshot::channel();
runtime.spawn(async move {
// Now send some traffic across...
// We're using the async `MessageChannels` API, but in a game you might use the sync API.
channels_a.async_send(Message1(42)).await.unwrap();
channels_a.flush::<Message1>();
assert_eq!(channels_b.async_recv::<Message1>().await.unwrap().0, 42);
// Since our underlying simulated network is perfect, our unreliable message will always
// arrive.
channels_a.async_send(Message2(13)).await.unwrap();
channels_a.flush::<Message2>();
assert_eq!(channels_b.async_recv::<Message2>().await.unwrap().0, 13);
// Each message channel is independent of the others, and they all have their own
// independent instances of message coalescing and reliability protocols.
channels_a.async_send(Message1(20)).await.unwrap();
channels_a.async_send(Message2(30)).await.unwrap();
channels_a.async_send(Message1(21)).await.unwrap();
channels_a.async_send(Message2(31)).await.unwrap();
channels_a.async_send(Message1(22)).await.unwrap();
channels_a.async_send(Message2(32)).await.unwrap();
channels_a.flush::<Message1>();
channels_a.flush::<Message2>();
assert_eq!(channels_b.async_recv::<Message1>().await.unwrap().0, 20);
assert_eq!(channels_b.async_recv::<Message1>().await.unwrap().0, 21);
assert_eq!(channels_b.async_recv::<Message1>().await.unwrap().0, 22);
assert_eq!(channels_b.async_recv::<Message2>().await.unwrap().0, 30);
assert_eq!(channels_b.async_recv::<Message2>().await.unwrap().0, 31);
assert_eq!(channels_b.async_recv::<Message2>().await.unwrap().0, 32);
is_done_send.send(()).unwrap();
});
for _ in 0..100_000 {
if is_done_recv.try_recv().unwrap().is_some() {
return;
}
runtime.run_until_stalled();
runtime.advance_time(50);
}
panic!("didn't finish in time");
}
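The comments in the test above spell out the policy of dropping incoming packets when a bounded buffer is full, while blocking on outgoing packets. The drop-on-full half can be sketched with nothing but the standard library, using `std::sync::mpsc::sync_channel` in place of `turbulence`'s spsc channels (`forward_lossy` and its parameters are hypothetical names for illustration):

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

/// Push `packets` packets into a bounded incoming buffer of size `cap`,
/// dropping on full rather than blocking; returns how many were dropped.
fn forward_lossy(cap: usize, packets: u8) -> usize {
    // `_rx` is kept alive so the channel stays connected for the whole loop.
    let (tx, _rx) = sync_channel::<Vec<u8>>(cap);
    let mut dropped = 0;
    for n in 0..packets {
        // A full buffer means some stage of the pipeline cannot keep up;
        // drop the packet instead of stalling the forwarding task.
        if let Err(TrySendError::Full(_)) = tx.try_send(vec![n]) {
            dropped += 1;
        }
    }
    dropped
}

fn main() {
    // Only `cap` packets fit; the rest are dropped on the floor.
    assert_eq!(forward_lossy(2, 4), 2);
    println!("ok");
}
```

A real forwarding loop would use `IncomingMultiplexedPackets::try_send` the same way, treating the full-buffer error as a dropped packet rather than an error to propagate.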
================================================
FILE: tests/packet_multiplexer.rs
================================================
use futures::{
executor::LocalPool,
future::{self, Either},
task::SpawnExt,
SinkExt, StreamExt,
};
use turbulence::{
buffer::BufferPacketPool,
packet::{Packet, PacketPool},
packet_multiplexer::{MuxPacketPool, PacketMultiplexer},
};
mod util;
use self::util::SimpleBufferPool;
#[test]
fn test_multiplexer() {
let mut pool = LocalPool::new();
let spawner = pool.spawner();
let mut packet_pool = MuxPacketPool::new(BufferPacketPool::new(SimpleBufferPool(32)));
let mut multiplexer_a = PacketMultiplexer::new();
let (mut sender4a, mut receiver4a, _) = multiplexer_a.open_channel(4, 8).unwrap();
let (mut sender32a, mut receiver32a, _) = multiplexer_a.open_channel(32, 8).unwrap();
let mut multiplexer_b = PacketMultiplexer::new();
let (mut sender4b, mut receiver4b, _) = multiplexer_b.open_channel(4, 8).unwrap();
let (mut sender32b, mut receiver32b, _) = multiplexer_b.open_channel(32, 8).unwrap();
spawner
.spawn(async move {
let (mut a_incoming, mut a_outgoing) = multiplexer_a.start();
let (mut b_incoming, mut b_outgoing) = multiplexer_b.start();
loop {
match future::select(a_outgoing.next(), b_outgoing.next()).await {
Either::Left((Some(packet), _)) => {
b_incoming.send(packet).await.unwrap();
}
Either::Right((Some(packet), _)) => {
a_incoming.send(packet).await.unwrap();
}
Either::Left((None, _)) | Either::Right((None, _)) => break,
}
}
})
.unwrap();
spawner
.spawn(async move {
let mut packet = packet_pool.acquire();
packet.resize(1, 17);
sender4a.send(packet).await.unwrap();
let mut packet = packet_pool.acquire();
packet.resize(1, 18);
sender4b.send(packet).await.unwrap();
let mut packet = packet_pool.acquire();
packet.resize(1, 19);
sender32a.send(packet).await.unwrap();
let mut packet = packet_pool.acquire();
packet.resize(1, 20);
sender32b.send(packet).await.unwrap();
let packet = receiver4a.next().await.unwrap();
assert_eq!(packet[0], 18);
let packet = receiver4b.next().await.unwrap();
assert_eq!(packet[0], 17);
let packet = receiver32a.next().await.unwrap();
assert_eq!(packet[0], 20);
let packet = receiver32b.next().await.unwrap();
assert_eq!(packet[0], 19);
})
.unwrap();
pool.run();
}
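`test_multiplexer` above routes packets by a `u8` channel id (`PacketChannel`): packets sent on channel 4 of one multiplexer come out of channel 4 on the other. A minimal std-only sketch of that routing idea, assuming a one-byte channel tag as the framing (an assumption for illustration; the real `PacketMultiplexer` wire format may differ):

```rust
use std::collections::HashMap;

/// Tag a payload with a one-byte channel id (hypothetical framing).
fn mux(channel: u8, payload: &[u8]) -> Vec<u8> {
    let mut packet = vec![channel];
    packet.extend_from_slice(payload);
    packet
}

/// Route a tagged packet to the per-channel queue named by its first byte.
fn demux(queues: &mut HashMap<u8, Vec<Vec<u8>>>, packet: Vec<u8>) {
    let channel = packet[0];
    queues.entry(channel).or_default().push(packet[1..].to_vec());
}

fn main() {
    let mut queues = HashMap::new();
    // Mirrors the test: one packet on channel 4, one on channel 32.
    demux(&mut queues, mux(4, &[17]));
    demux(&mut queues, mux(32, &[19]));
    assert_eq!(queues[&4], vec![vec![17]]);
    assert_eq!(queues[&32], vec![vec![19]]);
    println!("ok");
}
```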
================================================
FILE: tests/reliable_bincode_channel.rs
================================================
use std::time::Duration;
use futures::channel::oneshot;
use rand::{rngs::SmallRng, thread_rng, SeedableRng};
use tu
SYMBOL INDEX (386 symbols across 23 files)
FILE: src/bandwidth_limiter.rs
type BandwidthLimiter (line 5) | pub struct BandwidthLimiter<T: Timer> {
function new (line 14) | pub fn new(timer: &T, bandwidth: u32, burst_bandwidth: u32) -> Self {
function delay_until_available (line 25) | pub fn delay_until_available(&self, timer: &T) -> Option<T::Sleep> {
function update_available (line 37) | pub fn update_available(&mut self, timer: &T) {
function bytes_available (line 51) | pub fn bytes_available(&self) -> bool {
function take_bytes (line 56) | pub fn take_bytes(&mut self, bytes: u32) {
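The `BandwidthLimiter` entries above take the same `bandwidth`/`burst_bandwidth` pair used by `unreliable_channel::Settings`. A plain token-bucket sketch of those semantics — an assumption about the algorithm, with elapsed time passed in explicitly rather than read from a `Timer`, and `TokenBucket` a hypothetical stand-in name:

```rust
/// Token bucket: refills at `bandwidth` bytes/sec, capped at `burst_bandwidth`.
struct TokenBucket {
    bandwidth: u32,
    burst_bandwidth: u32,
    available: f64,
}

impl TokenBucket {
    fn new(bandwidth: u32, burst_bandwidth: u32) -> Self {
        // Start with a full burst allowance.
        TokenBucket { bandwidth, burst_bandwidth, available: burst_bandwidth as f64 }
    }

    /// Credit `elapsed_secs` worth of bandwidth, saturating at the burst cap.
    fn update_available(&mut self, elapsed_secs: f64) {
        self.available = (self.available + elapsed_secs * self.bandwidth as f64)
            .min(self.burst_bandwidth as f64);
    }

    fn bytes_available(&self) -> bool {
        self.available > 0.0
    }

    /// Spend `bytes`; the balance may go negative, delaying future sends.
    fn take_bytes(&mut self, bytes: u32) {
        self.available -= bytes as f64;
    }
}

fn main() {
    // Same numbers as MESSAGE2_SETTINGS: 4096 bytes/sec, 1024-byte burst.
    let mut bucket = TokenBucket::new(4096, 1024);
    assert!(bucket.bytes_available());
    bucket.take_bytes(1024); // spend the whole burst
    assert!(!bucket.bytes_available());
    bucket.update_available(0.5); // half a second refills 2048, capped at 1024
    assert!(bucket.bytes_available());
    println!("ok");
}
```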
FILE: src/buffer.rs
type BufferPool (line 7) | pub trait BufferPool {
method capacity (line 10) | fn capacity(&self) -> usize;
method acquire (line 11) | fn acquire(&mut self) -> Self::Buffer;
type BufferPacketPool (line 16) | pub struct BufferPacketPool<B>(B);
function new (line 19) | pub fn new(buffer_pool: B) -> Self {
type Packet (line 25) | type Packet = BufferPacket<B::Buffer>;
method capacity (line 27) | fn capacity(&self) -> usize {
method acquire (line 31) | fn acquire(&mut self) -> Self::Packet {
type BufferPacket (line 40) | pub struct BufferPacket<B> {
method resize (line 49) | fn resize(&mut self, len: usize, val: u8) {
type Target (line 62) | type Target = [u8];
method deref (line 64) | fn deref(&self) -> &[u8] {
method deref_mut (line 73) | fn deref_mut(&mut self) -> &mut [u8] {
FILE: src/compressed_bincode_channel.rs
constant MAX_MESSAGE_LEN (line 20) | pub const MAX_MESSAGE_LEN: u16 = u16::MAX;
type SendError (line 23) | pub enum SendError {
type RecvError (line 33) | pub enum RecvError {
type CompressedBincodeChannel (line 55) | pub struct CompressedBincodeChannel {
method from (line 74) | fn from(channel: ReliableChannel) -> Self {
method new (line 80) | pub fn new(channel: ReliableChannel) -> Self {
method into_inner (line 95) | pub fn into_inner(self) -> ReliableChannel {
method send (line 103) | pub async fn send<M: Serialize>(&mut self, msg: &M) -> Result<(), Send...
method try_send (line 107) | pub fn try_send<M: Serialize>(&mut self, msg: &M) -> Result<bool, Send...
method flush (line 119) | pub async fn flush(&mut self) -> Result<(), reliable_channel::Error> {
method try_flush (line 123) | pub fn try_flush(&mut self) -> Result<bool, reliable_channel::Error> {
method recv (line 135) | pub async fn recv<M: DeserializeOwned>(&mut self) -> Result<M, RecvErr...
method try_recv (line 140) | pub fn try_recv<M: DeserializeOwned>(&mut self) -> Result<Option<M>, R...
method poll_send (line 148) | pub fn poll_send<M: Serialize>(
method poll_flush (line 165) | pub fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), reli...
method poll_recv (line 172) | pub fn poll_recv<M: DeserializeOwned>(
method poll_recv_ready (line 180) | fn poll_recv_ready(&mut self, cx: &mut Context) -> Poll<Result<(), Rec...
method recv_next (line 212) | fn recv_next<M: DeserializeOwned>(&mut self) -> Result<M, bincode::Err...
method poll_write_send_chunk (line 220) | fn poll_write_send_chunk(
method poll_finish_write (line 262) | fn poll_finish_write(&mut self, cx: &mut Context) -> Poll<Result<(), r...
method poll_finish_read (line 272) | fn poll_finish_read(&mut self, cx: &mut Context) -> Poll<Result<(), re...
method bincode_config (line 282) | fn bincode_config(&self) -> impl bincode::Options + Copy {
type CompressedTypedChannel (line 288) | pub struct CompressedTypedChannel<M> {
function from (line 294) | fn from(channel: ReliableChannel) -> Self {
function new (line 300) | pub fn new(channel: ReliableChannel) -> Self {
function into_inner (line 307) | pub fn into_inner(self) -> ReliableChannel {
function flush (line 311) | pub async fn flush(&mut self) -> Result<(), reliable_channel::Error> {
function try_flush (line 315) | pub fn try_flush(&mut self) -> Result<bool, reliable_channel::Error> {
function poll_flush (line 319) | pub fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), reliab...
function send (line 325) | pub async fn send(&mut self, msg: &M) -> Result<(), SendError> {
function try_send (line 329) | pub fn try_send(&mut self, msg: &M) -> Result<bool, SendError> {
function poll_send (line 333) | pub fn poll_send(&mut self, cx: &mut Context, msg: &M) -> Poll<Result<()...
function recv (line 339) | pub async fn recv(&mut self) -> Result<M, RecvError> {
function try_recv (line 343) | pub fn try_recv(&mut self) -> Result<Option<M>, RecvError> {
function poll_recv (line 347) | pub fn poll_recv(&mut self, cx: &mut Context) -> Poll<Result<M, RecvErro...
FILE: src/event_watch.rs
function channel (line 27) | pub fn channel() -> (Sender, Receiver) {
type Sender (line 39) | pub struct Sender(Arc<State>);
method signal (line 42) | pub fn signal(&self) {
type Receiver (line 49) | pub struct Receiver(Arc<State>);
method wait (line 52) | pub async fn wait(&mut self) {
type State (line 70) | struct State {
FILE: src/message_channels.rs
type MessageChannelSettings (line 33) | pub struct MessageChannelSettings {
type MessageChannelMode (line 45) | pub enum MessageChannelMode {
type ChannelMessage (line 51) | pub trait ChannelMessage: Serialize + DeserializeOwned + Send + Sync + '...
type ChannelAlreadyRegistered (line 56) | pub enum ChannelAlreadyRegistered {
type TaskError (line 63) | pub type TaskError = Box<dyn Error + Send + Sync>;
type ChannelTaskError (line 67) | pub struct ChannelTaskError {
type MessageChannelsBuilder (line 72) | pub struct MessageChannelsBuilder<S, T, P>
function new (line 91) | pub fn new(spawn: S, timer: T, pool: P) -> Self {
function register (line 114) | pub fn register<M: ChannelMessage>(
function build (line 137) | pub fn build(self, multiplexer: &mut PacketMultiplexer<P::Packet>) -> Me...
type MessageTypeUnregistered (line 184) | pub struct MessageTypeUnregistered(&'static str);
type MessageChannelsDisconnected (line 188) | pub struct MessageChannelsDisconnected;
type TryAsyncMessageError (line 191) | pub enum TryAsyncMessageError {
type MessageChannels (line 209) | pub struct MessageChannels {
method is_connected (line 221) | pub fn is_connected(&self) -> bool {
method recv_err (line 230) | pub async fn recv_err(self) -> ChannelTaskError {
method send (line 247) | pub fn send<M: ChannelMessage>(&mut self, message: M) -> Option<M> {
method try_send (line 253) | pub fn try_send<M: ChannelMessage>(
method async_send (line 281) | pub async fn async_send<M: ChannelMessage>(
method try_async_send (line 293) | pub async fn try_async_send<M: ChannelMessage>(
method flush (line 319) | pub fn flush<M: ChannelMessage>(&mut self) {
method try_flush (line 325) | pub fn try_flush<M: ChannelMessage>(&mut self) -> Result<(), MessageTy...
method recv (line 336) | pub fn recv<M: ChannelMessage>(&mut self) -> Option<M> {
method try_recv (line 342) | pub fn try_recv<M: ChannelMessage>(&mut self) -> Result<Option<M>, Mes...
method async_recv (line 368) | pub async fn async_recv<M: ChannelMessage>(
method try_async_recv (line 379) | pub async fn try_async_recv<M: ChannelMessage>(&mut self) -> Result<M,...
method statistics (line 392) | pub fn statistics<M: ChannelMessage>(&self) -> &ChannelStatistics {
method try_statistics (line 396) | pub fn try_statistics<M: ChannelMessage>(
type ChannelTask (line 403) | type ChannelTask = BoxFuture<'static, Result<(), TaskError>>;
type RegisterFn (line 404) | type RegisterFn<S, T, P> = fn(
type ChannelDisconnected (line 415) | struct ChannelDisconnected;
type ChannelSet (line 417) | struct ChannelSet<M> {
type ChannelsMap (line 425) | struct ChannelsMap(FxHashMap<TypeId, Box<dyn Any + Send + Sync>>);
method insert (line 428) | fn insert<M: ChannelMessage>(&mut self, channel_set: ChannelSet<M>) ->...
method get (line 434) | fn get<M: ChannelMessage>(&self) -> Result<&ChannelSet<M>, MessageType...
method get_mut (line 443) | fn get_mut<M: ChannelMessage>(
function register_message_type (line 455) | fn register_message_type<S, T, P, M>(
type MessageBincodeChannel (line 535) | trait MessageBincodeChannel<M: ChannelMessage> {
method poll_recv (line 536) | fn poll_recv(&mut self, cx: &mut Context) -> Poll<Result<M, TaskError>>;
method poll_send (line 537) | fn poll_send(&mut self, cx: &mut Context, msg: &M) -> Poll<Result<(), ...
method poll_flush (line 538) | fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), TaskErro...
function poll_recv (line 547) | fn poll_recv(&mut self, cx: &mut Context) -> Poll<Result<M, TaskError>> {
function poll_send (line 551) | fn poll_send(&mut self, cx: &mut Context, msg: &M) -> Poll<Result<(), Ta...
function poll_flush (line 556) | fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), TaskError>> {
function poll_recv (line 562) | fn poll_recv(&mut self, cx: &mut Context) -> Poll<Result<M, TaskError>> {
function poll_send (line 566) | fn poll_send(&mut self, cx: &mut Context, msg: &M) -> Poll<Result<(), Ta...
function poll_flush (line 571) | fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), TaskError>> {
function poll_recv (line 577) | fn poll_recv(&mut self, cx: &mut Context) -> Poll<Result<M, TaskError>> {
function poll_send (line 581) | fn poll_send(&mut self, cx: &mut Context, msg: &M) -> Poll<Result<(), Ta...
function poll_flush (line 585) | fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), TaskError>> {
function channel_task (line 590) | async fn channel_task<M: ChannelMessage>(
FILE: src/packet.rs
constant MAX_PACKET_LEN (line 7) | pub const MAX_PACKET_LEN: u16 = 32768;
type Packet (line 10) | pub trait Packet: Deref<Target = [u8]> + DerefMut {
method resize (line 12) | fn resize(&mut self, len: usize, val: u8);
method extend (line 14) | fn extend(&mut self, other: &[u8]) {
method truncate (line 21) | fn truncate(&mut self, len: usize) {
method clear (line 26) | fn clear(&mut self) {
type PacketPool (line 37) | pub trait PacketPool {
method capacity (line 41) | fn capacity(&self) -> usize;
method acquire (line 43) | fn acquire(&mut self) -> Self::Packet;
FILE: src/packet_multiplexer.rs
type PacketChannel (line 23) | pub type PacketChannel = u8;
type MuxPacket (line 27) | pub struct MuxPacket<P>(P);
method resize (line 33) | fn resize(&mut self, len: usize, val: u8) {
method extend (line 37) | fn extend(&mut self, other: &[u8]) {
method truncate (line 41) | fn truncate(&mut self, len: usize) {
method clear (line 45) | fn clear(&mut self) {
type Target (line 54) | type Target = [u8];
method deref (line 56) | fn deref(&self) -> &[u8] {
method deref_mut (line 65) | fn deref_mut(&mut self) -> &mut [u8] {
type MuxPacketPool (line 71) | pub struct MuxPacketPool<P>(P);
function new (line 74) | pub fn new(packet_pool: P) -> Self {
type Packet (line 83) | type Packet = MuxPacket<P::Packet>;
method capacity (line 85) | fn capacity(&self) -> usize {
method acquire (line 89) | fn acquire(&mut self) -> MuxPacket<P::Packet> {
function from (line 97) | fn from(pool: P) -> MuxPacketPool<P> {
type DuplicateChannel (line 104) | pub struct DuplicateChannel;
type ChannelTotals (line 107) | pub struct ChannelTotals {
type ChannelStatistics (line 113) | pub struct ChannelStatistics(Arc<ChannelStatisticsData>);
method incoming_totals (line 116) | pub fn incoming_totals(&self) -> ChannelTotals {
method outgoing_totals (line 123) | pub fn outgoing_totals(&self) -> ChannelTotals {
type PacketMultiplexer (line 136) | pub struct PacketMultiplexer<P> {
function new (line 145) | pub fn new() -> PacketMultiplexer<P> {
function open_channel (line 157) | pub fn open_channel(
function start (line 197) | pub fn start(self) -> (IncomingMultiplexedPackets<P>, OutgoingMultiplexe...
type IncomingError (line 212) | pub enum IncomingError {
type IncomingTrySendError (line 220) | pub enum IncomingTrySendError<P> {
function fmt (line 228) | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
function is_full (line 240) | pub fn is_full(&self) -> bool {
type IncomingMultiplexedPackets (line 249) | pub struct IncomingMultiplexedPackets<P> {
function try_send (line 265) | pub fn try_send(&mut self, packet: P) -> Result<(), IncomingTrySendError...
type Error (line 290) | type Error = IncomingError;
function poll_ready (line 292) | fn poll_ready(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Result...
function start_send (line 321) | fn start_send(mut self: Pin<&mut Self>, item: P) -> Result<(), Self::Err...
function poll_flush (line 327) | fn poll_flush(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Result...
function poll_close (line 349) | fn poll_close(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Result<(),...
type OutgoingMultiplexedPackets (line 355) | pub struct OutgoingMultiplexedPackets<P> {
type Item (line 363) | type Item = P;
method poll_next (line 365) | fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<...
type ChannelSender (line 370) | struct ChannelSender<P> {
type ChannelReceiver (line 375) | struct ChannelReceiver<P> {
type Item (line 387) | type Item = P;
method poll_next (line 389) | fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<...
type ChannelStatisticsData (line 405) | struct ChannelStatisticsData {
method mark_incoming_packet (line 414) | fn mark_incoming_packet(&self, len: u64) {
method mark_outgoing_packet (line 419) | fn mark_outgoing_packet(&self, len: u64) {
FILE: src/reliable_bincode_channel.rs
constant MAX_MESSAGE_LEN (line 16) | pub const MAX_MESSAGE_LEN: u16 = u16::MAX;
type SendError (line 19) | pub enum SendError {
type RecvError (line 29) | pub enum RecvError {
type ReliableBincodeChannel (line 43) | pub struct ReliableBincodeChannel {
method from (line 54) | fn from(channel: ReliableChannel) -> Self {
method new (line 61) | pub fn new(channel: ReliableChannel) -> Self {
method into_inner (line 71) | pub fn into_inner(self) -> ReliableChannel {
method send (line 83) | pub async fn send<M: Serialize>(&mut self, msg: &M) -> Result<(), Send...
method try_send (line 89) | pub fn try_send<M: Serialize>(&mut self, msg: &M) -> Result<bool, Send...
method flush (line 101) | pub async fn flush(&mut self) -> Result<(), reliable_channel::Error> {
method try_flush (line 105) | pub fn try_flush(&mut self) -> Result<bool, reliable_channel::Error> {
method recv (line 117) | pub async fn recv<'a, M: Deserialize<'a>>(&'a mut self) -> Result<M, R...
method try_recv (line 122) | pub fn try_recv<'a, M: Deserialize<'a>>(&'a mut self) -> Result<Option...
method poll_send_ready (line 130) | pub fn poll_send_ready(
method try_send_ready (line 147) | pub fn try_send_ready(&mut self) -> Result<bool, reliable_channel::Err...
method start_send (line 155) | pub fn start_send<M: Serialize>(&mut self, msg: &M) -> Result<(), binc...
method poll_flush (line 168) | pub fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), reli...
method poll_recv (line 174) | pub fn poll_recv<'a, M: Deserialize<'a>>(
method poll_recv_ready (line 182) | fn poll_recv_ready(&mut self, cx: &mut Context) -> Poll<Result<(), Rec...
method recv_next (line 195) | fn recv_next<'a, M: Deserialize<'a>>(&'a mut self) -> Result<M, RecvEr...
method poll_finish_read (line 202) | fn poll_finish_read(&mut self, cx: &mut Context) -> Poll<Result<(), re...
method bincode_config (line 212) | fn bincode_config(&self) -> impl bincode::Options + Copy {
type ReliableTypedChannel (line 218) | pub struct ReliableTypedChannel<M> {
function from (line 224) | fn from(channel: ReliableChannel) -> Self {
function new (line 230) | pub fn new(channel: ReliableChannel) -> Self {
function into_inner (line 237) | pub fn into_inner(self) -> ReliableChannel {
function flush (line 241) | pub async fn flush(&mut self) -> Result<(), reliable_channel::Error> {
function try_flush (line 245) | pub fn try_flush(&mut self) -> Result<bool, reliable_channel::Error> {
function poll_flush (line 249) | pub fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), reliab...
function poll_send_ready (line 253) | pub fn poll_send_ready(
function try_send_ready (line 260) | pub fn try_send_ready(&mut self) -> Result<bool, reliable_channel::Error> {
function send (line 266) | pub async fn send(&mut self, msg: &M) -> Result<(), SendError> {
function try_send (line 270) | pub fn try_send(&mut self, msg: &M) -> Result<bool, SendError> {
function start_send (line 274) | pub fn start_send(&mut self, msg: &M) -> Result<(), bincode::Error> {
function recv (line 280) | pub async fn recv(&'a mut self) -> Result<M, RecvError> {
function try_recv (line 284) | pub fn try_recv(&'a mut self) -> Result<Option<M>, RecvError> {
function poll_recv (line 288) | pub fn poll_recv(&'a mut self, cx: &mut Context) -> Poll<Result<M, RecvE...
FILE: src/reliable_channel.rs
type Error (line 34) | pub enum Error {
type Settings (line 44) | pub struct Settings {
type ReliableChannel (line 79) | pub struct ReliableChannel {
method new (line 87) | pub fn new<S, T, P>(
method write (line 159) | pub async fn write(&mut self, data: &[u8]) -> Result<usize, Error> {
method flush (line 167) | pub fn flush(&mut self) -> Result<(), Error> {
method read (line 182) | pub async fn read(&mut self, data: &mut [u8]) -> Result<usize, Error> {
method poll_write (line 186) | pub fn poll_write(&mut self, cx: &mut Context, data: &[u8]) -> Poll<Re...
method poll_read (line 214) | pub fn poll_read(&mut self, cx: &mut Context, data: &mut [u8]) -> Poll...
method write_available (line 242) | pub fn write_available(&self) -> usize {
method try_write (line 247) | pub fn try_write(&mut self, data: &[u8]) -> Result<usize, Error> {
method try_read (line 256) | pub fn try_read(&mut self, data: &mut [u8]) -> Result<usize, Error> {
type Shared (line 266) | struct Shared {
type UnackedRange (line 272) | struct UnackedRange<I> {
type Task (line 279) | struct Task<T, P>
function main_loop (line 305) | async fn main_loop(mut self) -> Result<(), Error> {
function send (line 402) | async fn send(&mut self) -> Result<(), Error> {
function resend (line 448) | async fn resend(&mut self) -> Result<(), Error> {
function recv_packet (line 489) | async fn recv_packet(&mut self, packet: P::Packet) -> Result<(), Error> {
FILE: src/ring_buffer.rs
type RingBuffer (line 14) | pub struct RingBuffer {
method new (line 22) | pub fn new(capacity: usize) -> (Writer, Reader) {
method write_available (line 40) | pub fn write_available(&self) -> usize {
method read_available (line 47) | pub fn read_available(&self) -> usize {
method drop (line 56) | fn drop(&mut self) {
type Writer (line 69) | pub struct Writer(Arc<RingBuffer>);
method available (line 72) | pub fn available(&self) -> usize {
method write (line 76) | pub fn write(&mut self, mut offset: usize, mut data: &[u8]) -> usize {
method advance (line 121) | pub fn advance(&mut self, offset: usize) -> usize {
method buffer (line 132) | pub fn buffer(&self) -> &RingBuffer {
type Reader (line 137) | pub struct Reader(Arc<RingBuffer>);
method available (line 140) | pub fn available(&self) -> usize {
method read (line 144) | pub fn read(&self, mut offset: usize, mut data: &mut [u8]) -> usize {
method advance (line 189) | pub fn advance(&mut self, offset: usize) -> usize {
method buffer (line 200) | pub fn buffer(&self) -> &RingBuffer {
function collapse_position (line 205) | fn collapse_position(capacity: usize, pos: usize) -> usize {
function tail_to_head (line 213) | fn tail_to_head(capacity: usize, tail: usize, head: usize) -> usize {
function head_to_tail (line 221) | fn head_to_tail(capacity: usize, head: usize, tail: usize) -> usize {
function increment (line 225) | fn increment(capacity: usize, pos: usize, n: usize) -> usize {
function write_slice (line 238) | fn write_slice(dst: &mut [MaybeUninit<u8>], src: &[u8]) {
function basic_read_write (line 250) | fn basic_read_write() {
function threaded_read_write (line 308) | fn threaded_read_write() {
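The `ring_buffer.rs` helpers indexed above (`collapse_position`, `tail_to_head`, `increment`) suggest the common trick of tracking positions modulo `2 * capacity`, so that a full buffer (`head - tail == capacity`) is distinguishable from an empty one (`head == tail`) without wasting a slot. A sketch under that assumption — the crate's actual arithmetic may differ:

```rust
/// Advance a logical position, wrapping modulo `2 * capacity`.
fn increment(capacity: usize, pos: usize, n: usize) -> usize {
    (pos + n) % (2 * capacity)
}

/// Map a logical position to a real index into the backing buffer.
fn collapse_position(capacity: usize, pos: usize) -> usize {
    if pos < capacity { pos } else { pos - capacity }
}

/// Bytes available to read: the distance from tail up to head.
fn tail_to_head(capacity: usize, tail: usize, head: usize) -> usize {
    (head + 2 * capacity - tail) % (2 * capacity)
}

fn main() {
    let cap = 8;
    let (mut tail, mut head) = (0, 0);
    assert_eq!(tail_to_head(cap, tail, head), 0); // empty: head == tail
    head = increment(cap, head, cap); // writer fills the buffer
    assert_eq!(tail_to_head(cap, tail, head), cap); // full, yet head != tail
    tail = increment(cap, tail, 3); // reader consumes 3 bytes
    assert_eq!(tail_to_head(cap, tail, head), cap - 3);
    assert_eq!(collapse_position(cap, head), 0); // head wrapped to index 0
    println!("ok");
}
```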
FILE: src/runtime.rs
type Spawn (line 7) | pub trait Spawn: Send + Sync {
method spawn (line 8) | fn spawn<F>(&self, future: F)
type Timer (line 15) | pub trait Timer: Send + Sync {
method now (line 20) | fn now(&self) -> Self::Instant;
method duration_between (line 24) | fn duration_between(&self, earlier: Self::Instant, later: Self::Instan...
method sleep (line 27) | fn sleep(&self, duration: Duration) -> Self::Sleep;
FILE: src/spsc.rs
type Shared (line 12) | struct Shared {
type Receiver (line 17) | pub struct Receiver<T> {
method drop (line 23) | fn drop(&mut self) {
type Item (line 31) | type Item = T;
method poll_next (line 33) | fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Option<...
function try_recv (line 50) | pub fn try_recv(&mut self) -> Result<T, TryRecvError> {
type Disconnected (line 59) | pub struct Disconnected;
type Sender (line 61) | pub struct Sender<T> {
method drop (line 68) | fn drop(&mut self) {
type Error (line 76) | type Error = Disconnected;
function poll_ready (line 78) | fn poll_ready(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Result...
function start_send (line 100) | fn start_send(mut self: Pin<&mut Self>, item: T) -> Result<(), Self::Err...
function poll_flush (line 107) | fn poll_flush(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Result<(),...
function poll_close (line 111) | fn poll_close(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Result<(),...
function try_send (line 117) | pub fn try_send(&mut self, t: T) -> Result<(), TrySendError<T>> {
function channel (line 140) | pub fn channel<T>(capacity: usize) -> (Sender<T>, Receiver<T>) {
FILE: src/unreliable_bincode_channel.rs
type SendError (line 18) | pub enum SendError {
type RecvError (line 27) | pub enum RecvError {
type UnreliableBincodeChannel (line 40) | pub struct UnreliableBincodeChannel<T, P>
function from (line 54) | fn from(channel: UnreliableChannel<T, P>) -> Self {
function new (line 64) | pub fn new(channel: UnreliableChannel<T, P>) -> Self {
function into_inner (line 71) | pub fn into_inner(self) -> UnreliableChannel<T, P> {
function max_message_len (line 78) | pub fn max_message_len(&self) -> u16 {
function send (line 89) | pub async fn send<M: Serialize>(&mut self, msg: &M) -> Result<(), SendEr...
function try_send (line 95) | pub fn try_send<M: Serialize>(&mut self, msg: &M) -> Result<bool, SendEr...
function flush (line 110) | pub async fn flush(&mut self) -> Result<(), unreliable_channel::SendErro...
function try_flush (line 114) | pub fn try_flush(&mut self) -> Result<bool, unreliable_channel::SendErro...
function recv (line 126) | pub async fn recv<'a, M: Deserialize<'a>>(&'a mut self) -> Result<M, Rec...
function try_recv (line 132) | pub fn try_recv<'a, M: Deserialize<'a>>(&'a mut self) -> Result<Option<M...
function poll_send_ready (line 140) | pub fn poll_send_ready(
function try_send_ready (line 151) | pub fn try_send_ready(&mut self) -> Result<bool, unreliable_channel::Sen...
function start_send (line 159) | pub fn start_send<M: Serialize>(&mut self, msg: &M) -> Result<(), bincod...
function poll_flush (line 168) | pub fn poll_flush(
function poll_recv (line 177) | pub fn poll_recv<'a, M: Deserialize<'a>>(
function bincode_config (line 186) | fn bincode_config(&self) -> impl bincode::Options + Copy {
type UnreliableTypedChannel (line 192) | pub struct UnreliableTypedChannel<T, P, M>
function from (line 206) | fn from(channel: UnreliableChannel<T, P>) -> Self {
function new (line 216) | pub fn new(channel: UnreliableChannel<T, P>) -> Self {
function into_inner (line 223) | pub fn into_inner(self) -> UnreliableChannel<T, P> {
function flush (line 227) | pub async fn flush(&mut self) -> Result<(), unreliable_channel::SendErro...
function try_flush (line 231) | pub fn try_flush(&mut self) -> Result<bool, unreliable_channel::SendErro...
function poll_flush (line 235) | pub fn poll_flush(
function poll_send_ready (line 242) | pub fn poll_send_ready(
function try_send_ready (line 249) | pub fn try_send_ready(&mut self) -> Result<bool, unreliable_channel::Sen...
function send (line 260) | pub async fn send(&mut self, msg: &M) -> Result<(), SendError> {
function try_send (line 264) | pub fn try_send(&mut self, msg: &M) -> Result<bool, SendError> {
function start_send (line 268) | pub fn start_send(&mut self, msg: &M) -> Result<(), bincode::Error> {
function recv (line 279) | pub async fn recv(&'a mut self) -> Result<M, RecvError> {
function try_recv (line 283) | pub fn try_recv(&'a mut self) -> Result<Option<M>, RecvError> {
function poll_recv (line 287) | pub fn poll_recv(&'a mut self, cx: &mut Context) -> Poll<Result<M, RecvE...
FILE: src/unreliable_channel.rs
type Disconnected (line 23) | pub struct Disconnected;
type SendError (line 26) | pub enum SendError {
type RecvError (line 35) | pub enum RecvError {
type Settings (line 44) | pub struct Settings {
type UnreliableChannel (line 53) | pub struct UnreliableChannel<T, P>
function new (line 73) | pub fn new(
function max_message_len (line 98) | pub fn max_message_len(&self) -> u16 {
function send (line 113) | pub async fn send(&mut self, msg: &[u8]) -> Result<(), SendError> {
function try_send (line 117) | pub fn try_send(&mut self, msg: &[u8]) -> Result<bool, SendError> {
function flush (line 131) | pub async fn flush(&mut self) -> Result<(), Disconnected> {
function try_flush (line 135) | pub fn try_flush(&mut self) -> Result<bool, Disconnected> {
function recv (line 150) | pub async fn recv(&mut self) -> Result<&[u8], RecvError> {
function try_recv (line 155) | pub fn try_recv(&mut self) -> Result<Option<&[u8]>, RecvError> {
function poll_send (line 163) | pub fn poll_send(&mut self, cx: &mut Context, msg: &[u8]) -> Poll<Result...
function poll_send_ready (line 175) | pub fn poll_send_ready(
function start_send (line 198) | pub fn start_send(&mut self) -> StartSend<P::Packet> {
function poll_flush (line 202) | pub fn poll_flush(&mut self, cx: &mut Context) -> Poll<Result<(), Discon...
function poll_recv (line 230) | pub fn poll_recv(&mut self, cx: &mut Context) -> Poll<Result<&[u8], Recv...
function poll_recv_ready (line 235) | fn poll_recv_ready(&mut self, cx: &mut Context) -> Poll<Result<(), RecvE...
function recv_next (line 250) | fn recv_next(&mut self) -> Result<&[u8], RecvError> {
type StartSend (line 273) | pub struct StartSend<'a, P> {
function new (line 280) | fn new(packet: &'a mut P, capacity: usize) -> Self {
function buffer (line 295) | pub fn buffer(&mut self) -> &mut [u8] {
function finish (line 303) | pub fn finish(self, msg_len: usize) {
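The `UnreliableChannel` API above (`send` taking `&[u8]`, an explicit `flush`, `max_message_len` returning `u16`, and `StartSend::buffer`/`finish` for in-place writes) suggests messages are coalesced into fixed-size packets before being handed to the multiplexer. As a hedged sketch of that general technique — not turbulence's actual wire format, and the u16 length prefix is an assumption inferred from `max_message_len`'s return type:

```rust
// Sketch of coalescing length-prefixed messages into a fixed-capacity
// packet buffer. Hypothetical; the real UnreliableChannel framing may differ.
struct PacketCoalescer {
    packet: Vec<u8>,
    capacity: usize,
}

impl PacketCoalescer {
    fn new(capacity: usize) -> Self {
        Self { packet: Vec::new(), capacity }
    }

    /// Try to append one message with a u16 little-endian length prefix.
    /// Returns false if it no longer fits; the caller should flush and retry.
    fn try_append(&mut self, msg: &[u8]) -> bool {
        let needed = 2 + msg.len();
        if self.packet.len() + needed > self.capacity {
            return false;
        }
        self.packet
            .extend_from_slice(&(msg.len() as u16).to_le_bytes());
        self.packet.extend_from_slice(msg);
        true
    }

    /// Take the finished packet, leaving the coalescer empty for reuse.
    fn flush(&mut self) -> Vec<u8> {
        std::mem::take(&mut self.packet)
    }
}

fn main() {
    let mut c = PacketCoalescer::new(16);
    assert!(c.try_append(b"hello")); // 2 + 5 = 7 bytes used
    assert!(c.try_append(b"hi")); // 7 + 4 = 11 bytes used
    assert!(!c.try_append(b"toolongmsg")); // 11 + 12 > 16: flush first
    let pkt = c.flush();
    assert_eq!(pkt.len(), 11);
    assert!(c.try_append(b"toolongmsg")); // fits in the fresh packet
    println!("ok");
}
```

This also shows why `try_send` in the index returns `Result<bool, _>`: a `false`-style outcome ("didn't fit yet, flush and retry") is distinct from a hard error.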
FILE: src/windows.rs
type StreamPos (line 5) | pub type StreamPos = Wrapping<u32>;
function stream_cmp (line 18) | pub fn stream_cmp(a: StreamPos, b: StreamPos) -> Option<Ordering> {
function stream_lt (line 27) | pub fn stream_lt(a: StreamPos, b: StreamPos) -> bool {
function stream_le (line 31) | pub fn stream_le(a: StreamPos, b: StreamPos) -> bool {
function stream_gt (line 35) | pub fn stream_gt(a: StreamPos, b: StreamPos) -> bool {
function stream_ge (line 39) | pub fn stream_ge(a: StreamPos, b: StreamPos) -> bool {
type AckResult (line 44) | pub enum AckResult {
type SendWindowWriter (line 54) | pub struct SendWindowWriter {
method write (line 61) | pub fn write(&mut self, data: &[u8]) -> u32 {
method write_available (line 68) | pub fn write_available(&self) -> u32 {
type SendWindow (line 75) | pub struct SendWindow {
method new (line 89) | pub fn new(capacity: u32, stream_start: StreamPos) -> (SendWindow, Sen...
method write_available (line 106) | pub fn write_available(&self) -> u32 {
method send_pos (line 112) | pub fn send_pos(&self) -> StreamPos {
method send_available (line 116) | pub fn send_available(&self) -> u32 {
method send (line 127) | pub fn send(&mut self, data: &mut [u8]) -> Option<(StreamPos, StreamPo...
method unacked_start (line 151) | pub fn unacked_start(&self) -> StreamPos {
method get_unacked (line 157) | pub fn get_unacked(&self, start: StreamPos, data: &mut [u8]) {
method ack_range (line 169) | pub fn ack_range(&mut self, start: StreamPos, end: StreamPos) -> AckRe...
type RecvWindowReader (line 226) | pub struct RecvWindowReader {
method read (line 233) | pub fn read(&mut self, data: &mut [u8]) -> u32 {
type RecvWindow (line 242) | pub struct RecvWindow {
method new (line 261) | pub fn new(capacity: u32, stream_start: StreamPos) -> (RecvWindow, Rec...
method read_available (line 277) | pub fn read_available(&self) -> u32 {
method window_end (line 283) | pub fn window_end(&self) -> StreamPos {
method recv (line 304) | pub fn recv(&mut self, start_pos: StreamPos, data: &[u8]) -> Option<St...
function test_send_window (line 452) | fn test_send_window() {
function test_recv_window (line 581) | fn test_recv_window() {
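`windows.rs` defines `StreamPos` as `Wrapping<u32>` and compares positions with `stream_cmp`, which returns `Option<Ordering>`. A minimal sketch of that kind of serial-number comparison (in the style of RFC 1982), assuming positions less than half the u32 space apart are ordered and the exactly-halfway case is the ambiguous one that yields `None` — an assumption inferred from the `Option` return type, not confirmed from the source:

```rust
use std::cmp::Ordering;
use std::num::Wrapping;

type StreamPos = Wrapping<u32>;

// Hypothetical sketch of wrapping stream-position comparison: `a` is less
// than `b` when `b` is "ahead" by less than half the u32 space, so positions
// stay ordered across the u32::MAX -> 0 wrap-around.
fn stream_cmp(a: StreamPos, b: StreamPos) -> Option<Ordering> {
    let diff = (b - a).0;
    if diff == 0 {
        Some(Ordering::Equal)
    } else if diff < 1 << 31 {
        Some(Ordering::Less)
    } else if diff > 1 << 31 {
        Some(Ordering::Greater)
    } else {
        None // exactly half the space apart: order is ambiguous
    }
}

fn main() {
    assert_eq!(stream_cmp(Wrapping(5), Wrapping(10)), Some(Ordering::Less));
    // Wrap-around: u32::MAX is "just before" 0 in stream order.
    assert_eq!(
        stream_cmp(Wrapping(u32::MAX), Wrapping(0)),
        Some(Ordering::Less)
    );
    assert_eq!(stream_cmp(Wrapping(0), Wrapping(1 << 31)), None);
    println!("ok");
}
```

The helpers `stream_lt`/`stream_le`/`stream_gt`/`stream_ge` in the index would then be thin wrappers over this comparison, which is what lets `SendWindow`/`RecvWindow` track and ack ranges of a logically unbounded byte stream with only 32-bit positions.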
FILE: tests/compressed_bincode_channel.rs
function test_compressed_bincode_channel (line 19) | fn test_compressed_bincode_channel() {
FILE: tests/message_channels.rs
type Message1 (line 28) | struct Message1(i32);
constant MESSAGE1_SETTINGS (line 30) | const MESSAGE1_SETTINGS: MessageChannelSettings = MessageChannelSettings {
type Message2 (line 51) | struct Message2(i32);
constant MESSAGE2_SETTINGS (line 53) | const MESSAGE2_SETTINGS: MessageChannelSettings = MessageChannelSettings {
function test_message_channels (line 64) | fn test_message_channels() {
FILE: tests/packet_multiplexer.rs
function test_multiplexer (line 19) | fn test_multiplexer() {
FILE: tests/reliable_bincode_channel.rs
function test_reliable_bincode_channel (line 19) | fn test_reliable_bincode_channel() {
FILE: tests/reliable_channel.rs
function test_reliable_stream (line 18) | fn test_reliable_stream() {
FILE: tests/unreliable_bincode_channel.rs
function test_unreliable_bincode_channel (line 17) | fn test_unreliable_bincode_channel() {
FILE: tests/unreliable_channel.rs
function test_unreliable_channel (line 15) | fn test_unreliable_channel() {
FILE: tests/util/mod.rs
type SimpleBufferPool (line 30) | pub struct SimpleBufferPool(pub usize);
type Buffer (line 33) | type Buffer = Box<[u8]>;
method capacity (line 35) | fn capacity(&self) -> usize {
method acquire (line 39) | fn acquire(&mut self) -> Self::Buffer {
type TimeState (line 44) | struct TimeState {
type IncomingTasks (line 49) | type IncomingTasks = Mutex<Vec<BoxFuture<'static, ()>>>;
type HandleInner (line 51) | struct HandleInner {
type SimpleRuntime (line 56) | pub struct SimpleRuntime {
method new (line 65) | pub fn new() -> Self {
method handle (line 78) | pub fn handle(&self) -> SimpleRuntimeHandle {
method advance_time (line 82) | pub fn advance_time(&mut self, millis: u64) {
method run_until_stalled (line 100) | pub fn run_until_stalled(&mut self) -> bool {
type SimpleRuntimeHandle (line 62) | pub struct SimpleRuntimeHandle(Arc<HandleInner>);
type Target (line 126) | type Target = SimpleRuntimeHandle;
method deref (line 128) | fn deref(&self) -> &Self::Target {
function do_delay (line 133) | async fn do_delay(state: Arc<HandleInner>, duration: Duration) -> u64 {
method spawn (line 154) | fn spawn<F: Future<Output = ()> + Send + 'static>(&self, f: F) {
type Instant (line 160) | type Instant = u64;
type Sleep (line 161) | type Sleep = Pin<Box<dyn Future<Output = ()> + Send>>;
method now (line 163) | fn now(&self) -> Self::Instant {
method duration_between (line 167) | fn duration_between(&self, earlier: Self::Instant, later: Self::Instant)...
method sleep (line 171) | fn sleep(&self, duration: Duration) -> Self::Sleep {
type LinkCondition (line 180) | pub struct LinkCondition {
function condition_link (line 187) | pub fn condition_link<P>(