Repository: claytonwramsey/dumpster
Branch: master
Commit: 32bb3bbb67f9
Files: 34
Total size: 302.2 KB
Directory structure:
gitextract_qahat5j6/
├── .github/
│   └── workflows/
│       └── rust.yml
├── .gitignore
├── CHANGELOG.md
├── Cargo.toml
├── LICENSE-APACHE
├── LICENSE-MIT
├── LICENSE.md
├── README.md
├── dumpster/
│   ├── .gitignore
│   ├── Cargo.toml
│   └── src/
│       ├── impls.rs
│       ├── lib.rs
│       ├── ptr.rs
│       ├── sync/
│       │   ├── cell.rs
│       │   ├── collect.rs
│       │   ├── loom_ext.rs
│       │   ├── loom_tests.rs
│       │   ├── mod.rs
│       │   └── tests.rs
│       └── unsync/
│           ├── collect.rs
│           ├── mod.rs
│           └── tests.rs
├── dumpster_bench/
│   ├── .gitignore
│   ├── Cargo.toml
│   ├── scripts/
│   │   └── make_plots.py
│   └── src/
│       ├── lib.rs
│       └── main.rs
├── dumpster_derive/
│   ├── .gitignore
│   ├── Cargo.toml
│   └── src/
│       └── lib.rs
├── dumpster_test/
│   ├── .gitignore
│   ├── Cargo.toml
│   └── src/
│       └── lib.rs
└── rustfmt.toml
================================================
FILE CONTENTS
================================================
================================================
FILE: .github/workflows/rust.yml
================================================
name: Rust

on:
  push:
    branches: ["master"]
  pull_request:
    branches: ["master"]

env:
  CARGO_TERM_COLOR: always

jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os:
          - ubuntu-latest
          - windows-latest
          - macOS-latest
        toolchain:
          - nightly
          - stable
        cargo_flags:
          - "--all-features"
          - "--no-default-features"
          - ""
        exclude:
          - cargo_flags: "--all-features"
            toolchain: stable
    steps:
      - name: Checkout sources
        uses: actions/checkout@v2
      - name: Install rust toolchain
        uses: dtolnay/rust-toolchain@stable
        with:
          toolchain: ${{ matrix.toolchain }}
      - name: Generate lockfile
        run: cargo generate-lockfile
      - name: Cache
        id: cache-restore
        uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/registry
            ~/.cargo/git
            target
          key: ${{ runner.os }}-cargo-test-${{ hashFiles('**/Cargo.lock') }}-${{ matrix.cargo_flags }}
      - name: Build with tests
        uses: actions-rs/cargo@v1
        with:
          command: test
          args: --no-run --workspace ${{ matrix.cargo_flags }} --exclude dumpster_bench
      - name: Run tests
        uses: actions-rs/cargo@v1
        with:
          command: test
          args: --workspace ${{ matrix.cargo_flags }} --exclude dumpster_bench
      - name: Save cache
        id: cache-save
        uses: actions/cache/save@v4
        if: always() && steps.cache-restore.outputs.cache-hit != 'true'
        with:
          path: |
            ~/.cargo/registry
            ~/.cargo/git
            target
          key: ${{ runner.os }}-cargo-test-${{ hashFiles('**/Cargo.lock') }}-${{ matrix.cargo_flags }}

  miri:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os:
          - ubuntu-latest
          - windows-latest
          - macOS-latest
        toolchain:
          - nightly
        cargo_flags:
          - "--all-features"
    steps:
      - name: Checkout sources
        uses: actions/checkout@v2
      - name: Install rust toolchain
        uses: dtolnay/rust-toolchain@stable
        with:
          toolchain: ${{ matrix.toolchain }}
          components: miri
      - name: Generate lockfile
        run: cargo generate-lockfile
      - name: Cache
        id: cache-restore
        uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/registry
            ~/.cargo/git
            target
          key: ${{ runner.os }}-cargo-miri-${{ hashFiles('**/Cargo.lock') }}-${{ matrix.cargo_flags }}
      - name: Build miri test executables
        uses: actions-rs/cargo@v1
        with:
          command: miri
          args: test --no-run --workspace ${{ matrix.cargo_flags }} --exclude dumpster_bench
      - name: Run miri tests
        uses: actions-rs/cargo@v1
        with:
          command: miri
          args: test --workspace ${{ matrix.cargo_flags }} --exclude dumpster_bench
      - name: Save cache
        id: cache-save
        uses: actions/cache/save@v4
        if: always() && steps.cache-restore.outputs.cache-hit != 'true'
        with:
          path: |
            ~/.cargo/registry
            ~/.cargo/git
            target
          key: ${{ runner.os }}-cargo-miri-${{ hashFiles('**/Cargo.lock') }}-${{ matrix.cargo_flags }}

  loom:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout sources
        uses: actions/checkout@v2
      - name: Install rust toolchain
        uses: dtolnay/rust-toolchain@stable
        with:
          toolchain: stable
      - name: Generate lockfile
        run: cargo generate-lockfile
      - name: Cache
        id: cache-restore
        uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/registry
            ~/.cargo/git
            target
          key: ${{ runner.os }}-cargo-loom-${{ hashFiles('**/Cargo.lock') }}
      - name: Build with tests
        uses: actions-rs/cargo@v1
        env:
          RUSTFLAGS: "--cfg loom"
        with:
          command: test
          args: --lib -p dumpster loom --release --no-run
      - name: Run tests
        uses: actions-rs/cargo@v1
        env:
          RUSTFLAGS: "--cfg loom"
        with:
          command: test
          args: --lib -p dumpster loom --release
      - name: Save cache
        id: cache-save
        uses: actions/cache/save@v4
        if: always() && steps.cache-restore.outputs.cache-hit != 'true'
        with:
          path: |
            ~/.cargo/registry
            ~/.cargo/git
            target
          key: ${{ runner.os }}-cargo-loom-${{ hashFiles('**/Cargo.lock') }}
================================================
FILE: .gitignore
================================================
/target
/Cargo.lock
*.csv
.vscode
.zed
================================================
FILE: CHANGELOG.md
================================================
# `dumpster` Changelog
## 2.1.0
### New features
- Implemented `FromIterator` for `Gc<[T]>`.
## 2.0.0
### Breaking changes
- Refactored `Trace` to use `TraceWith<V>`.
### New features
- Added `sync::Gc::new_cyclic`.
## 1.2.0
### New features
- Added experimental support for testing under `loom`.
- Added `unsync::Gc::new_cyclic`.
- Implemented `Default` for `Gc`.
- Added `Gc::make_mut`.
- Added `From` implementations for `Gc`.
- Supported differing `BuildHasher` types in `Trace` implementation for `HashSet`.
- Added `sync::coerce_gc` and `unsync::coerce_gc`.
- Added `Trace` implementation to more types in the Rust standard library.
### Bug fixes
- Fixed broken references in documentation.
- Added overflow testing for `Gc` reference counts.
- `Gc`s created in a garbage-collected value's `Drop` implementation are no longer leaked.
## 1.1.1
### Bug fixes
- Using `dumpster` no longer fails under Miri as we have changed our underlying pointer model.
## 1.1.0
### New features
- Added support for [`either`](https://crates.io/crates/either).
### Bug fixes
- Derive implementations no longer erroneously refer to `heapsize`.
### Other changes
- Slight performance and code style improvements.
- Improved internal documentation on safety.
- Removed `strict-provenance` requirement as it is now stabilized.
## 1.0.0
### Breaking changes
- Rename `Collectable` to `Trace`.
## 0.2.1
### New features
- Implement `Collectable` for `std::any::TypeId`.
## 0.2.0
### New features
- Added `Gc::as_ptr`.
- Added `Gc::ptr_eq`.
- Implemented `PartialEq` and `Eq` for garbage collected pointers.
### Other
- Changed license from GNU GPLv3 or later to MPL 2.0.
- Allocations which do not contain `Gc`s will simply be reference counted.
## 0.1.2
### New features
- Implement `Collectable` for `OnceCell`, `HashMap`, and `BTreeMap`.
- Add `try_clone` and `try_deref` to `unsync::Gc` and `sync::Gc`.
- Make dereferencing `Gc` only panic on truly-dead `Gc`s.
### Bugfixes
- Prevent dead `Gc`s from escaping their `Drop` implementation, potentially causing UAFs.
- Use fully-qualified name for `Result` in derive macro, preventing some bugs.
### Other
- Improve performance in `unsync` by using `parking_lot` for concurrency primitives.
- Improve documentation of panicking behavior in `Gc`.
- Fix spelling mistakes in documentation.
## 0.1.1
### Bugfixes
- Prevent possible UAFs caused by accessing `Gc`s during `Drop` impls by panicking.
### Other
- Fix spelling mistakes in documentation.
## 0.1.0
Initial release.
================================================
FILE: Cargo.toml
================================================
[workspace]
members = [
    "dumpster",
    "dumpster_derive",
    "dumpster_test",
    "dumpster_bench",
]
resolver = "2"
[patch.crates-io]
dumpster = { path = "dumpster" }
[profile.release]
lto = true
================================================
FILE: LICENSE-APACHE
================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
================================================
FILE: LICENSE-MIT
================================================
Copyright (c) The Rust Project Contributors
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
================================================
FILE: LICENSE.md
================================================
Mozilla Public License Version 2.0
==================================
### 1. Definitions
**1.1. “Contributor”**
means each individual or legal entity that creates, contributes to
the creation of, or owns Covered Software.
**1.2. “Contributor Version”**
means the combination of the Contributions of others (if any) used
by a Contributor and that particular Contributor's Contribution.
**1.3. “Contribution”**
means Covered Software of a particular Contributor.
**1.4. “Covered Software”**
means Source Code Form to which the initial Contributor has attached
the notice in Exhibit A, the Executable Form of such Source Code
Form, and Modifications of such Source Code Form, in each case
including portions thereof.
**1.5. “Incompatible With Secondary Licenses”**
means
* **(a)** that the initial Contributor has attached the notice described
in Exhibit B to the Covered Software; or
* **(b)** that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the
terms of a Secondary License.
**1.6. “Executable Form”**
means any form of the work other than Source Code Form.
**1.7. “Larger Work”**
means a work that combines Covered Software with other material, in
a separate file or files, that is not Covered Software.
**1.8. “License”**
means this document.
**1.9. “Licensable”**
means having the right to grant, to the maximum extent possible,
whether at the time of the initial grant or subsequently, any and
all of the rights conveyed by this License.
**1.10. “Modifications”**
means any of the following:
* **(a)** any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered
Software; or
* **(b)** any new file in Source Code Form that contains any Covered
Software.
**1.11. “Patent Claims” of a Contributor**
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the
License, by the making, using, selling, offering for sale, having
made, import, or transfer of either its Contributions or its
Contributor Version.
**1.12. “Secondary License”**
means either the GNU General Public License, Version 2.0, the GNU
Lesser General Public License, Version 2.1, the GNU Affero General
Public License, Version 3.0, or any later versions of those
licenses.
**1.13. “Source Code Form”**
means the form of the work preferred for making modifications.
**1.14. “You” (or “Your”)**
means an individual or a legal entity exercising rights under this
License. For legal entities, “You” includes any entity that
controls, is controlled by, or is under common control with You. For
purposes of this definition, “control” means **(a)** the power, direct
or indirect, to cause the direction or management of such entity,
whether by contract or otherwise, or **(b)** ownership of more than
fifty percent (50%) of the outstanding shares or beneficial
ownership of such entity.
### 2. License Grants and Conditions
#### 2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
* **(a)** under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
* **(b)** under Patent Claims of such Contributor to make, use, sell, offer
for sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
#### 2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
#### 2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
* **(a)** for any code that a Contributor has removed from Covered Software;
or
* **(b)** for infringements caused by: **(i)** Your and any other third party's
modifications of Covered Software, or **(ii)** the combination of its
Contributions with other software (except as part of its Contributor
Version); or
* **(c)** under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
#### 2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
#### 2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights
to grant the rights to its Contributions conveyed by this License.
#### 2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
#### 2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted
in Section 2.1.
### 3. Responsibilities
#### 3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
#### 3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
* **(a)** such Covered Software must also be made available in Source Code
Form, as described in Section 3.1, and You must inform recipients of
the Executable Form how they can obtain a copy of such Source Code
Form by reasonable means in a timely manner, at a charge no more
than the cost of distribution to the recipient; and
* **(b)** You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter
the recipients' rights in the Source Code Form under this License.
#### 3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
#### 3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty,
or limitations of liability) contained within the Source Code Form of
the Covered Software, except that You may alter any license notices to
the extent required to remedy known factual inaccuracies.
#### 3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
### 4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this
License with respect to some or all of the Covered Software due to
statute, judicial order, or regulation then You must: **(a)** comply with
the terms of this License to the maximum extent possible; and **(b)**
describe the limitations and the code they affect. Such description must
be placed in a text file included with all distributions of the Covered
Software under this License. Except to the extent prohibited by statute
or regulation, such description must be sufficiently detailed for a
recipient of ordinary skill to be able to understand it.
### 5. Termination
**5.1.** The rights granted under this License will terminate automatically
if You fail to comply with any of its terms. However, if You become
compliant, then the rights granted under this License from a particular
Contributor are reinstated **(a)** provisionally, unless and until such
Contributor explicitly and finally terminates Your grants, and **(b)** on an
ongoing basis, if such Contributor fails to notify You of the
non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular
Contributor are reinstated on an ongoing basis if such Contributor
notifies You of the non-compliance by some reasonable means, this is the
first time You have received notice of non-compliance with this License
from such Contributor, and You become compliant prior to 30 days after
Your receipt of the notice.
**5.2.** If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
**5.3.** In the event of termination under Sections 5.1 or 5.2 above, all
end user license agreements (excluding distributors and resellers) which
have been validly granted by You or Your distributors under this License
prior to termination shall survive termination.
### 6. Disclaimer of Warranty
> Covered Software is provided under this License on an “as is”
> basis, without warranty of any kind, either expressed, implied, or
> statutory, including, without limitation, warranties that the
> Covered Software is free of defects, merchantable, fit for a
> particular purpose or non-infringing. The entire risk as to the
> quality and performance of the Covered Software is with You.
> Should any Covered Software prove defective in any respect, You
> (not any Contributor) assume the cost of any necessary servicing,
> repair, or correction. This disclaimer of warranty constitutes an
> essential part of this License. No use of any Covered Software is
> authorized under this License except under this disclaimer.
### 7. Limitation of Liability
> Under no circumstances and under no legal theory, whether tort
> (including negligence), contract, or otherwise, shall any
> Contributor, or anyone who distributes Covered Software as
> permitted above, be liable to You for any direct, indirect,
> special, incidental, or consequential damages of any character
> including, without limitation, damages for lost profits, loss of
> goodwill, work stoppage, computer failure or malfunction, or any
> and all other commercial damages or losses, even if such party
> shall have been informed of the possibility of such damages. This
> limitation of liability shall not apply to liability for death or
> personal injury resulting from such party's negligence to the
> extent applicable law prohibits such limitation. Some
> jurisdictions do not allow the exclusion or limitation of
> incidental or consequential damages, so this exclusion and
> limitation may not apply to You.
### 8. Litigation
Any litigation relating to this License may be brought only in the
courts of a jurisdiction where the defendant maintains its principal
place of business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions.
Nothing in this Section shall prevent a party's ability to bring
cross-claims or counter-claims.
### 9. Miscellaneous
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides
that the language of a contract shall be construed against the drafter
shall not be used to construe this License against a Contributor.
### 10. Versions of the License
#### 10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
#### 10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
#### 10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
#### 10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
## Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular
file, then You may include the notice in a location (such as a LICENSE
file in a relevant directory) where a recipient would be likely to look
for such a notice.
You may add additional accurate notices of copyright ownership.
## Exhibit B - “Incompatible With Secondary Licenses” Notice
This Source Code Form is "Incompatible With Secondary Licenses", as
defined by the Mozilla Public License, v. 2.0.
================================================
FILE: README.md
================================================
# `dumpster`: A cycle-tracking garbage collector for Rust
[crates.io](https://crates.io/crates/dumpster) · [docs.rs](https://docs.rs/dumpster)
`dumpster` is a cycle-detecting garbage collector for Rust.
It detects unreachable allocations and automatically frees them.
## Why should you use this crate?
In short, `dumpster` offers a great mix of usability, performance, and flexibility.
- `dumpster`'s API is a drop-in replacement for `std`'s reference-counted shared allocations
  (`Rc` and `Arc`).
- It's very performant and has built-in implementations of both thread-local and concurrent
  garbage collection.
- There are no restrictions on the reference structure within a garbage-collected allocation
  (references may point in any way you like).
- It's trivial to implement `Trace` for a custom type using the provided derive macro.
- You can even store `?Sized` data in a garbage-collected pointer!
## How it works
`dumpster` is unlike most tracing garbage collectors.
Other GCs keep track of a set of roots, which can then be used to perform a sweep and find out
which allocations are reachable and which are not.
Instead, `dumpster` extends reference-counted garbage collection (such as `std::rc::Rc`) with a
cycle-detection algorithm, enabling it to effectively clean up self-referential data structures.
For a deeper dive, check out this
[blog post](https://claytonwramsey.github.io/2023/08/14/dumpster.html).
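To see the problem `dumpster` solves, consider a minimal sketch of a self-referential structure
built with plain `Rc` (compare the `Gc` example below, which collects the same shape): the
allocation refers to itself, so its reference count never reaches zero and it is leaked.

```rust
use std::{cell::RefCell, rc::Rc};

struct Foo {
    ptr: RefCell<Option<Rc<Foo>>>,
}

let foo = Rc::new(Foo {
    ptr: RefCell::new(None),
});
// Close the cycle: `foo` now keeps itself alive.
*foo.ptr.borrow_mut() = Some(foo.clone());
drop(foo); // the count falls to 1, never 0: the allocation leaks
```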
## What this library contains
`dumpster` actually contains two garbage collector implementations: one thread-local, non-`Send`
garbage collector in the module `unsync`, and one thread-safe garbage collector in the module
`sync`.
These garbage collectors can be safely mixed and matched.
This library also comes with a derive macro that makes it easy to implement `Trace` for custom types.
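As a minimal sketch of the `sync` side (relying only on the built-in `Trace` implementation for
`String`), a `sync::Gc` can be cloned and sent across threads much like an `Arc`:

```rust
use dumpster::sync::Gc;

let gc = Gc::new(String::from("hello"));
let handle = std::thread::spawn({
    // The clone shares the same allocation as the original.
    let gc = gc.clone();
    move || assert_eq!(*gc, "hello")
});
handle.join().unwrap();
```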
## Examples
```rust
use dumpster::{Trace, unsync::Gc};
use std::cell::RefCell;

#[derive(Trace)]
struct Foo {
ptr: RefCell<Option<Gc<Foo>>>,
}
// Create a new garbage-collected Foo.
let foo = Gc::new(Foo {
ptr: RefCell::new(None),
});
// Insert a circular reference inside of the foo.
*foo.ptr.borrow_mut() = Some(foo.clone());
// Render the foo inaccessible.
// This may trigger a collection, but it's not guaranteed.
// If we had used `Rc` instead of `Gc`, this would have caused a memory leak.
drop(foo);
// Trigger a collection.
// This isn't necessary, but it guarantees that `foo` will be collected immediately (instead of
// later).
dumpster::unsync::collect();
```
## Installation
To install, simply add `dumpster` as a dependency to your project.
```toml
[dependencies]
dumpster = "2.1.0"
```
## Optional features
### `derive`
`derive` is enabled by default.
It enables the derive macro for `Trace`, which makes it easy for users to implement their
own Trace types.
```rust
use dumpster::{unsync::Gc, Trace};
use std::cell::RefCell;
#[derive(Trace)] // no manual implementation required
struct Foo(RefCell<Option<Gc<Foo>>>);
let my_foo = Gc::new(Foo(RefCell::new(None)));
*my_foo.0.borrow_mut() = Some(my_foo.clone());
drop(my_foo); // my_foo will be automatically cleaned up
```
### `either`
`either` is disabled by default. It adds support for the [`either`](https://crates.io/crates/either) crate,
specifically by implementing `Trace` for [`either::Either`](https://docs.rs/either/1.13.0/either/enum.Either.html).
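With the feature enabled, an `Either` of two traceable types is itself traceable; for instance, a
minimal sketch (leaning on the built-in `Trace` implementations for `u8` and `String`):

```rust
use dumpster::unsync::Gc;
use either::Either;

// `Either<A, B>` implements `Trace` whenever both `A` and `B` do, so it can
// be stored directly in a garbage-collected allocation.
let gc: Gc<Either<u8, String>> = Gc::new(Either::Left(5));
assert!(matches!(*gc, Either::Left(5)));
```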
### `coerce-unsized`
`coerce-unsized` is disabled by default.
This enables the implementation of `CoerceUnsized` for each garbage collector,
making it possible to use `Gc` with `!Sized` types conveniently.
```rust
use dumpster::unsync::Gc;
// this only works with "coerce-unsized" enabled while compiling on nightly Rust
let gc1: Gc<[u8]> = Gc::new([1, 2, 3]);
```
To use `coerce-unsized`, edit your `Cargo.toml` to enable the feature.
```toml
[dependencies]
dumpster = { version = "2.1.0", features = ["coerce-unsized"]}
```
## Loom support
`dumpster` has experimental support for permutation testing under [`loom`](https://github.com/tokio-rs/loom).
It is expected to be unstable and buggy.
To compile `dumpster` using `loom`, add `--cfg loom` to `RUSTFLAGS` when compiling, for example:
```sh
RUSTFLAGS='--cfg loom' cargo test
```
## License
This code is licensed under the Mozilla Public License, version 2.0.
For more information, refer to [LICENSE.md](LICENSE.md).
This project includes portions of code derived from the Rust standard library,
which is dual-licensed under the MIT and Apache 2.0 licenses.
Copyright (c) The Rust Project Developers.
================================================
FILE: dumpster/.gitignore
================================================
/target
/Cargo.lock
================================================
FILE: dumpster/Cargo.toml
================================================
[package]
name = "dumpster"
version = "2.1.0"
edition = "2021"
license = "MPL-2.0"
authors = ["Clayton Ramsey"]
description = "A concurrent cycle-tracking garbage collector."
repository = "https://github.com/claytonwramsey/dumpster"
readme = "../README.md"
keywords = ["dumpster", "garbage_collector", "gc"]
categories = ["memory-management", "data-structures"]
[features]
default = ["derive"]
coerce-unsized = []
derive = ["dep:dumpster_derive"]
either = ["dep:either"]
[dependencies]
parking_lot = "0.12.3"
dumpster_derive = { version = "2.0.0", path = "../dumpster_derive", optional = true }
either = { version = "1.13.0", optional = true }
foldhash = { version = "0.2.0", default-features = false, features = ["std"] }
[dev-dependencies]
fastrand = "2.0.0"
[target.'cfg(loom)'.dependencies]
loom = { version = "0.7.2" }
[package.metadata.playground]
features = ["derive"]
[package.metadata.docs.rs]
features = ["derive"]
targets = ["x86_64-unknown-linux-gnu"]
rustdoc-args = ["--generate-link-to-definition"]
[lints.rust]
unexpected_cfgs = { level = "warn", check-cfg = ['cfg(loom)'] }
================================================
FILE: dumpster/src/impls.rs
================================================
/*
    dumpster, a cycle-tracking garbage collector for Rust. Copyright (C) 2023 Clayton Ramsey.

    This Source Code Form is subject to the terms of the Mozilla Public
    License, v. 2.0. If a copy of the MPL was not distributed with this
    file, You can obtain one at http://mozilla.org/MPL/2.0/.
*/

//! Implementations of [`TraceWith<V>`] for common data types.

#![allow(deprecated)]

use std::{
    borrow::Cow,
    cell::{Cell, OnceCell, RefCell},
    collections::{BTreeMap, BTreeSet, BinaryHeap, HashMap, HashSet, LinkedList, VecDeque},
    convert::Infallible,
    hash::{BuildHasher, BuildHasherDefault},
    marker::PhantomData,
    num::{
        NonZeroI128, NonZeroI16, NonZeroI32, NonZeroI64, NonZeroI8, NonZeroIsize, NonZeroU128,
        NonZeroU16, NonZeroU32, NonZeroU64, NonZeroU8, NonZeroUsize,
    },
    ops::Deref,
    sync::{
        atomic::{
            AtomicI16, AtomicI32, AtomicI64, AtomicI8, AtomicIsize, AtomicU16, AtomicU32,
            AtomicU64, AtomicU8, AtomicUsize,
        },
        Mutex, MutexGuard, OnceLock, RwLock, RwLockReadGuard, TryLockError,
    },
};

use crate::{TraceWith, Visitor};
unsafe impl<V: Visitor> TraceWith<V> for Infallible {
    fn accept(&self, _: &mut V) -> Result<(), ()> {
        match *self {}
    }
}

#[cfg(feature = "either")]
unsafe impl<V: Visitor, A: TraceWith<V>, B: TraceWith<V>> TraceWith<V> for either::Either<A, B> {
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        match self {
            either::Either::Left(a) => a.accept(visitor),
            either::Either::Right(b) => b.accept(visitor),
        }
    }
}

/// Implement `TraceWith<V>` trivially for some parametric `?Sized` type.
macro_rules! param_trivial_impl_unsized {
    ($x: ty) => {
        unsafe impl<V: Visitor, T: ?Sized> TraceWith<V> for $x {
            #[inline]
            fn accept(&self, _: &mut V) -> Result<(), ()> {
                Ok(())
            }
        }
    };
}

param_trivial_impl_unsized!(MutexGuard<'static, T>);
param_trivial_impl_unsized!(RwLockReadGuard<'static, T>);
param_trivial_impl_unsized!(&'static T);
param_trivial_impl_unsized!(PhantomData<T>);

/// Implement `TraceWith<V>` trivially for some parametric `Sized` type.
macro_rules! param_trivial_impl_sized {
    ($x: ty) => {
        unsafe impl<V: Visitor, T> TraceWith<V> for $x {
            #[inline]
            fn accept(&self, _: &mut V) -> Result<(), ()> {
                Ok(())
            }
        }
    };
}

param_trivial_impl_sized!(std::future::Pending<T>);
param_trivial_impl_sized!(std::mem::Discriminant<T>);

unsafe impl<V: Visitor, T: TraceWith<V> + ?Sized> TraceWith<V> for Box<T> {
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        (**self).accept(visitor)
    }
}

unsafe impl<V: Visitor, T> TraceWith<V> for BuildHasherDefault<T> {
    fn accept(&self, _: &mut V) -> Result<(), ()> {
        Ok(())
    }
}

unsafe impl<V: Visitor, T: ToOwned> TraceWith<V> for Cow<'_, T>
where
    T::Owned: TraceWith<V>,
{
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        if let Cow::Owned(ref v) = self {
            v.accept(visitor)?;
        }
        Ok(())
    }
}

unsafe impl<V: Visitor, T: TraceWith<V> + ?Sized> TraceWith<V> for RefCell<T> {
    #[inline]
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        self.try_borrow().map_err(|_| ())?.accept(visitor)
    }
}

unsafe impl<V: Visitor, T: TraceWith<V> + ?Sized> TraceWith<V> for Mutex<T> {
    #[inline]
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        self.try_lock()
            .map_err(|e| match e {
                TryLockError::Poisoned(_) => panic!(),
                TryLockError::WouldBlock => (),
            })?
            .deref()
            .accept(visitor)
    }
}

unsafe impl<V: Visitor, T: TraceWith<V> + ?Sized> TraceWith<V> for RwLock<T> {
    #[inline]
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        self.try_read()
            .map_err(|e| match e {
                TryLockError::Poisoned(_) => panic!(),
                TryLockError::WouldBlock => (),
            })?
            .deref()
            .accept(visitor)
    }
}

unsafe impl<V: Visitor, T: TraceWith<V>> TraceWith<V> for Option<T> {
    #[inline]
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        match self {
            Some(x) => x.accept(visitor),
            None => Ok(()),
        }
    }
}

unsafe impl<V: Visitor, T: TraceWith<V>, E: TraceWith<V>> TraceWith<V> for Result<T, E> {
    #[inline]
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        match self {
            Ok(t) => t.accept(visitor),
            Err(e) => e.accept(visitor),
        }
    }
}

unsafe impl<V: Visitor, T: Copy + TraceWith<V>> TraceWith<V> for Cell<T> {
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        self.get().accept(visitor)
    }
}

unsafe impl<V: Visitor, T: TraceWith<V>> TraceWith<V> for OnceCell<T> {
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        self.get().map_or(Ok(()), |x| x.accept(visitor))
    }
}

unsafe impl<V: Visitor, T: TraceWith<V>> TraceWith<V> for OnceLock<T> {
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        self.get().map_or(Ok(()), |x| x.accept(visitor))
    }
}

unsafe impl<V: Visitor, T: TraceWith<V>> TraceWith<V> for std::cmp::Reverse<T> {
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        self.0.accept(visitor)
    }
}

unsafe impl<V: Visitor, T: TraceWith<V> + ?Sized> TraceWith<V> for std::io::BufReader<T> {
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        self.get_ref().accept(visitor)
    }
}

unsafe impl<V: Visitor, T: TraceWith<V> + std::io::Write + ?Sized> TraceWith<V>
    for std::io::BufWriter<T>
{
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        self.get_ref().accept(visitor)
    }
}

unsafe impl<V: Visitor, T: TraceWith<V>, U: TraceWith<V>> TraceWith<V> for std::io::Chain<T, U> {
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        let (t, u) = self.get_ref();
        t.accept(visitor)?;
        u.accept(visitor)
    }
}

unsafe impl<V: Visitor, T: TraceWith<V>> TraceWith<V> for std::io::Cursor<T> {
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        self.get_ref().accept(visitor)
    }
}

unsafe impl<V: Visitor, T: TraceWith<V> + std::io::Write + ?Sized> TraceWith<V>
    for std::io::LineWriter<T>
{
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        self.get_ref().accept(visitor)
    }
}

unsafe impl<V: Visitor, T: TraceWith<V>> TraceWith<V> for std::io::Take<T> {
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        self.get_ref().accept(visitor)
    }
}

unsafe impl<V: Visitor, T: TraceWith<V>> TraceWith<V> for std::mem::ManuallyDrop<T> {
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        (**self).accept(visitor)
    }
}

unsafe impl<V: Visitor, T: TraceWith<V>> TraceWith<V> for std::num::Saturating<T> {
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        self.0.accept(visitor)
    }
}

unsafe impl<V: Visitor, T: TraceWith<V>> TraceWith<V> for std::num::Wrapping<T> {
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        self.0.accept(visitor)
    }
}

unsafe impl<V: Visitor, T: TraceWith<V>> TraceWith<V> for std::ops::Range<T> {
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        self.start.accept(visitor)?;
        self.end.accept(visitor)
    }
}

unsafe impl<V: Visitor, T: TraceWith<V>> TraceWith<V> for std::ops::RangeFrom<T> {
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        self.start.accept(visitor)
    }
}

unsafe impl<V: Visitor, T: TraceWith<V>> TraceWith<V> for std::ops::RangeInclusive<T> {
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        self.start().accept(visitor)?;
        self.end().accept(visitor)
    }
}

unsafe impl<V: Visitor, T: TraceWith<V>> TraceWith<V> for std::ops::RangeTo<T> {
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        self.end.accept(visitor)
    }
}

unsafe impl<V: Visitor, T: TraceWith<V>> TraceWith<V> for std::ops::RangeToInclusive<T> {
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        self.end.accept(visitor)
    }
}

unsafe impl<V: Visitor, T: TraceWith<V>> TraceWith<V> for std::ops::Bound<T> {
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        match self {
            std::ops::Bound::Included(x) | std::ops::Bound::Excluded(x) => x.accept(visitor),
            std::ops::Bound::Unbounded => Ok(()),
        }
    }
}

unsafe impl<V: Visitor, B: TraceWith<V>, C: TraceWith<V>> TraceWith<V>
    for std::ops::ControlFlow<B, C>
{
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        match self {
            std::ops::ControlFlow::Continue(c) => c.accept(visitor),
            std::ops::ControlFlow::Break(b) => b.accept(visitor),
        }
    }
}

unsafe impl<V: Visitor, T: TraceWith<V>> TraceWith<V> for std::panic::AssertUnwindSafe<T> {
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        self.0.accept(visitor)
    }
}

unsafe impl<V: Visitor, T: TraceWith<V>> TraceWith<V> for std::task::Poll<T> {
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        match self {
            std::task::Poll::Ready(r) => r.accept(visitor),
            std::task::Poll::Pending => Ok(()),
        }
    }
}
/// Implement [`TraceWith<V>`] for a collection data structure which can be iterated over by
/// shared reference, visiting each of its elements in turn.
macro_rules! Trace_collection_impl {
    ($x: ty) => {
        unsafe impl<V: Visitor, T: TraceWith<V>> TraceWith<V> for $x {
            #[inline]
            fn accept(&self, visitor: &mut V) -> Result<(), ()> {
                for elem in self {
                    elem.accept(visitor)?;
                }
                Ok(())
            }
        }
    };
}
Trace_collection_impl!(Vec<T>);
Trace_collection_impl!(VecDeque<T>);
Trace_collection_impl!(LinkedList<T>);
Trace_collection_impl!([T]);
Trace_collection_impl!(BinaryHeap<T>);
Trace_collection_impl!(BTreeSet<T>);
unsafe impl<V: Visitor, T: TraceWith<V>> TraceWith<V> for std::vec::IntoIter<T> {
    #[inline]
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        for elem in self.as_slice() {
            elem.accept(visitor)?;
        }
        Ok(())
    }
}

unsafe impl<Z: Visitor, K: TraceWith<Z>, V: TraceWith<Z>, S: BuildHasher + TraceWith<Z>>
    TraceWith<Z> for HashMap<K, V, S>
{
    fn accept(&self, visitor: &mut Z) -> Result<(), ()> {
        for (k, v) in self {
            k.accept(visitor)?;
            v.accept(visitor)?;
        }
        self.hasher().accept(visitor)
    }
}

unsafe impl<Z: Visitor, T: TraceWith<Z>, S: BuildHasher + TraceWith<Z>> TraceWith<Z>
    for HashSet<T, S>
{
    fn accept(&self, visitor: &mut Z) -> Result<(), ()> {
        for elem in self {
            elem.accept(visitor)?;
        }
        self.hasher().accept(visitor)
    }
}

unsafe impl<Z: Visitor, K: TraceWith<Z>, V: TraceWith<Z>> TraceWith<Z> for BTreeMap<K, V> {
    fn accept(&self, visitor: &mut Z) -> Result<(), ()> {
        for (k, v) in self {
            k.accept(visitor)?;
            v.accept(visitor)?;
        }
        Ok(())
    }
}

unsafe impl<V: Visitor, T: TraceWith<V>, const N: usize> TraceWith<V> for [T; N] {
    #[inline]
    fn accept(&self, visitor: &mut V) -> Result<(), ()> {
        for elem in self {
            elem.accept(visitor)?;
        }
        Ok(())
    }
}
/// Implement [`TraceWith<V>`] for a trivially-collected type which contains no `Gc`s in its
/// fields.
macro_rules! Trace_trivial_impl {
    ($x: ty) => {
        unsafe impl<V: Visitor> TraceWith<V> for $x {
            #[inline]
            fn accept(&self, _: &mut V) -> Result<(), ()> {
                Ok(())
            }
        }
    };
}
Trace_trivial_impl!(());
Trace_trivial_impl!(u8);
Trace_trivial_impl!(u16);
Trace_trivial_impl!(u32);
Trace_trivial_impl!(u64);
Trace_trivial_impl!(u128);
Trace_trivial_impl!(usize);
Trace_trivial_impl!(i8);
Trace_trivial_impl!(i16);
Trace_trivial_impl!(i32);
Trace_trivial_impl!(i64);
Trace_trivial_impl!(i128);
Trace_trivial_impl!(isize);
Trace_trivial_impl!(bool);
Trace_trivial_impl!(char);
Trace_trivial_impl!(f32);
Trace_trivial_impl!(f64);
Trace_trivial_impl!(AtomicU8);
Trace_trivial_impl!(AtomicU16);
Trace_trivial_impl!(AtomicU32);
Trace_trivial_impl!(AtomicU64);
Trace_trivial_impl!(AtomicUsize);
Trace_trivial_impl!(AtomicI8);
Trace_trivial_impl!(AtomicI16);
Trace_trivial_impl!(AtomicI32);
Trace_trivial_impl!(AtomicI64);
Trace_trivial_impl!(AtomicIsize);
Trace_trivial_impl!(NonZeroU8);
Trace_trivial_impl!(NonZeroU16);
Trace_trivial_impl!(NonZeroU32);
Trace_trivial_impl!(NonZeroU64);
Trace_trivial_impl!(NonZeroU128);
Trace_trivial_impl!(NonZeroUsize);
Trace_trivial_impl!(NonZeroI8);
Trace_trivial_impl!(NonZeroI16);
Trace_trivial_impl!(NonZeroI32);
Trace_trivial_impl!(NonZeroI64);
Trace_trivial_impl!(NonZeroI128);
Trace_trivial_impl!(NonZeroIsize);
Trace_trivial_impl!(std::alloc::Layout);
Trace_trivial_impl!(std::alloc::LayoutError);
Trace_trivial_impl!(std::alloc::System);
Trace_trivial_impl!(std::any::TypeId);
Trace_trivial_impl!(std::ascii::EscapeDefault);
Trace_trivial_impl!(std::backtrace::Backtrace);
Trace_trivial_impl!(std::backtrace::BacktraceStatus);
Trace_trivial_impl!(std::cmp::Ordering);
Trace_trivial_impl!(std::char::CharTryFromError);
Trace_trivial_impl!(std::char::EscapeDebug);
Trace_trivial_impl!(std::char::EscapeDefault);
Trace_trivial_impl!(std::char::EscapeUnicode);
Trace_trivial_impl!(std::char::ToLowercase);
Trace_trivial_impl!(std::char::ToUppercase);
Trace_trivial_impl!(std::env::Args);
Trace_trivial_impl!(std::env::ArgsOs);
Trace_trivial_impl!(std::env::JoinPathsError);
Trace_trivial_impl!(std::env::Vars);
Trace_trivial_impl!(std::env::VarsOs);
Trace_trivial_impl!(std::env::VarError);
Trace_trivial_impl!(std::ffi::CStr);
Trace_trivial_impl!(std::ffi::CString);
Trace_trivial_impl!(std::ffi::FromBytesUntilNulError);
Trace_trivial_impl!(std::ffi::FromVecWithNulError);
Trace_trivial_impl!(std::ffi::IntoStringError);
Trace_trivial_impl!(std::ffi::NulError);
Trace_trivial_impl!(std::ffi::OsStr);
Trace_trivial_impl!(std::ffi::OsString);
Trace_trivial_impl!(std::ffi::FromBytesWithNulError);
Trace_trivial_impl!(std::ffi::c_void);
Trace_trivial_impl!(std::fmt::Error);
Trace_trivial_impl!(std::fmt::Alignment);
Trace_trivial_impl!(std::fs::DirBuilder);
Trace_trivial_impl!(std::fs::DirEntry);
Trace_trivial_impl!(std::fs::File);
Trace_trivial_impl!(std::fs::FileTimes);
Trace_trivial_impl!(std::fs::FileType);
Trace_trivial_impl!(std::fs::Metadata);
Trace_trivial_impl!(std::fs::OpenOptions);
Trace_trivial_impl!(std::fs::Permissions);
Trace_trivial_impl!(std::fs::ReadDir);
Trace_trivial_impl!(std::fs::TryLockError);
Trace_trivial_impl!(std::hash::DefaultHasher);
Trace_trivial_impl!(std::hash::RandomState);
Trace_trivial_impl!(std::hash::SipHasher);
Trace_trivial_impl!(std::io::Empty);
Trace_trivial_impl!(std::io::Error);
Trace_trivial_impl!(std::io::PipeReader);
Trace_trivial_impl!(std::io::PipeWriter);
Trace_trivial_impl!(std::io::Repeat);
Trace_trivial_impl!(std::io::Sink);
Trace_trivial_impl!(std::io::Stdin);
Trace_trivial_impl!(std::io::Stdout);
Trace_trivial_impl!(std::io::WriterPanicked);
Trace_trivial_impl!(std::io::ErrorKind);
Trace_trivial_impl!(std::io::SeekFrom);
Trace_trivial_impl!(std::marker::PhantomPinned);
Trace_trivial_impl!(std::net::AddrParseError);
Trace_trivial_impl!(std::net::Ipv4Addr);
Trace_trivial_impl!(std::net::Ipv6Addr);
Trace_trivial_impl!(std::net::SocketAddrV4);
Trace_trivial_impl!(std::net::SocketAddrV6);
Trace_trivial_impl!(std::net::TcpListener);
Trace_trivial_impl!(std::net::TcpStream);
Trace_trivial_impl!(std::net::UdpSocket);
Trace_trivial_impl!(std::net::IpAddr);
Trace_trivial_impl!(std::net::Shutdown);
Trace_trivial_impl!(std::net::SocketAddr);
Trace_trivial_impl!(std::num::ParseFloatError);
Trace_trivial_impl!(std::num::ParseIntError);
Trace_trivial_impl!(std::num::TryFromIntError);
Trace_trivial_impl!(std::num::FpCategory);
Trace_trivial_impl!(std::num::IntErrorKind);
Trace_trivial_impl!(std::ops::RangeFull);
Trace_trivial_impl!(std::path::Path);
Trace_trivial_impl!(std::path::PathBuf);
Trace_trivial_impl!(std::path::StripPrefixError);
Trace_trivial_impl!(std::process::Child);
Trace_trivial_impl!(std::process::ChildStderr);
Trace_trivial_impl!(std::process::ChildStdin);
Trace_trivial_impl!(std::process::ChildStdout);
Trace_trivial_impl!(std::process::Command);
Trace_trivial_impl!(std::process::ExitCode);
Trace_trivial_impl!(std::process::Output);
Trace_trivial_impl!(std::process::Stdio);
Trace_trivial_impl!(std::slice::GetDisjointMutError);
Trace_trivial_impl!(str);
Trace_trivial_impl!(std::rc::Rc<str>);
Trace_trivial_impl!(std::sync::Arc<str>);
Trace_trivial_impl!(std::string::FromUtf8Error);
Trace_trivial_impl!(std::string::FromUtf16Error);
Trace_trivial_impl!(std::string::String);
Trace_trivial_impl!(std::thread::AccessError);
Trace_trivial_impl!(std::thread::Builder);
Trace_trivial_impl!(std::thread::Thread);
Trace_trivial_impl!(std::thread::ThreadId);
Trace_trivial_impl!(std::time::Duration);
Trace_trivial_impl!(std::time::Instant);
Trace_trivial_impl!(std::time::SystemTime);
Trace_trivial_impl!(std::time::SystemTimeError);
Trace_trivial_impl!(std::time::TryFromFloatSecsError);
/// Implement [`TraceWith<V>`] for a tuple.
macro_rules! Trace_tuple {
    () => {}; // This case is handled above by the trivial case
    ($($args:ident),*) => {
        unsafe impl<V: Visitor, $($args: TraceWith<V>),*> TraceWith<V> for ($($args,)*) {
            fn accept(&self, visitor: &mut V) -> Result<(), ()> {
                #[expect(clippy::allow_attributes)]
                #[allow(non_snake_case)]
                let &($(ref $args,)*) = self;
                $(($args).accept(visitor)?;)*
                Ok(())
            }
        }
    }
}
Trace_tuple!();
Trace_tuple!(A);
Trace_tuple!(A, B);
Trace_tuple!(A, B, C);
Trace_tuple!(A, B, C, D);
Trace_tuple!(A, B, C, D, E);
Trace_tuple!(A, B, C, D, E, F);
Trace_tuple!(A, B, C, D, E, F, G);
Trace_tuple!(A, B, C, D, E, F, G, H);
Trace_tuple!(A, B, C, D, E, F, G, H, I);
Trace_tuple!(A, B, C, D, E, F, G, H, I, J);
/// Implement `TraceWith<V>` for one function type.
macro_rules! Trace_fn {
    ($ty:ty $(,$args:ident)*) => {
        unsafe impl<V: Visitor, Ret $(,$args)*> TraceWith<V> for $ty {
            fn accept(&self, _: &mut V) -> Result<(), ()> { Ok(()) }
        }
    }
}
/// Implement `TraceWith<V>` for all functions with a given set of args.
macro_rules! Trace_fn_group {
    () => {
        Trace_fn!(extern "Rust" fn () -> Ret);
        Trace_fn!(extern "C" fn () -> Ret);
        Trace_fn!(unsafe extern "Rust" fn () -> Ret);
        Trace_fn!(unsafe extern "C" fn () -> Ret);
    };
    ($($args:ident),*) => {
        Trace_fn!(extern "Rust" fn ($($args),*) -> Ret, $($args),*);
        Trace_fn!(extern "C" fn ($($args),*) -> Ret, $($args),*);
        Trace_fn!(extern "C" fn ($($args),*, ...) -> Ret, $($args),*);
        Trace_fn!(unsafe extern "Rust" fn ($($args),*) -> Ret, $($args),*);
        Trace_fn!(unsafe extern "C" fn ($($args),*) -> Ret, $($args),*);
        Trace_fn!(unsafe extern "C" fn ($($args),*, ...) -> Ret, $($args),*);
    }
}
Trace_fn_group!();
Trace_fn_group!(A);
Trace_fn_group!(A, B);
Trace_fn_group!(A, B, C);
Trace_fn_group!(A, B, C, D);
Trace_fn_group!(A, B, C, D, E);
Trace_fn_group!(A, B, C, D, E, F);
Trace_fn_group!(A, B, C, D, E, F, G);
Trace_fn_group!(A, B, C, D, E, F, G, H);
Trace_fn_group!(A, B, C, D, E, F, G, H, I);
Trace_fn_group!(A, B, C, D, E, F, G, H, I, J);
================================================
FILE: dumpster/src/lib.rs
================================================
/*
dumpster, a cycle-tracking garbage collector for Rust. Copyright (C) 2023 Clayton Ramsey.
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
*/
//! A cycle-tracking concurrent garbage collector with an easy-to-use API.
//!
//! Most garbage collectors are _tracing_ garbage collectors, meaning that they keep track of a set
//! of roots which are directly accessible from the stack, and then use those roots to find the set
//! of all accessible allocations.
//! However, because Rust does not allow us to hook into when a value is moved, it's quite difficult
//! to detect when a garbage-collected value stops being a root.
//!
//! `dumpster` takes a different approach.
//! It begins by using simple reference counting, then automatically detects cycles.
//! Allocations are freed when their reference count reaches zero or when they are only accessible
//! via their descendants.
//!
//! Garbage-collected pointers can be created and destroyed in _O(1)_ amortized time, though
//! destroying a garbage-collected pointer may occasionally take _O(r)_ time, where _r_ is the
//! number of existing garbage-collected references.
//! However, the cleanups that require _O(r)_ work occur with only _O(1/r)_ frequency (roughly
//! once every _r_ drops), yielding an amortized _O(1)_ runtime.
//!
//! # Why should you use this crate?
//!
//! In short, `dumpster` offers a great mix of usability, performance, and flexibility.
//!
//! - `dumpster`'s API is a drop-in replacement for `std`'s reference-counted shared allocations
//! (`Rc` and `Arc`).
//! - It's very performant and has builtin implementations of both thread-local and concurrent
//! garbage collection.
//! - There are no restrictions on the reference structure within a garbage-collected allocation
//! (references may point in any way you like).
//! - It's trivial to make a custom type `Trace` using the provided derive macro.
//! - You can even store `?Sized` data in a garbage-collected pointer!
//!
//! # Module structure
//!
//! `dumpster` contains three core modules: the root (this module), [`sync`], and [`unsync`].
//! `sync` contains an implementation of thread-safe garbage-collected pointers, while `unsync`
//! contains an implementation of thread-local garbage-collected pointers which cannot be shared
//! across threads.
//! Thread-safety requires some synchronization overhead, so for a single-threaded application,
//! it is recommended to use `unsync`.
//!
//! The project root contains common definitions across both `sync` and `unsync`.
//! Types which implement [`Trace`] can immediately be used in `unsync`, but in order to use
//! `sync`'s garbage collector, the types must also implement [`Send`] and [`Sync`].
//!
//! # Examples
//!
//! If your code is meant to run as a single thread, or if your data doesn't need to be shared
//! across threads, you should use [`unsync::Gc`] to store your allocations.
//!
//! ```
//! use dumpster::unsync::Gc;
//! use std::cell::Cell;
//!
//! let my_gc = Gc::new(Cell::new(0451));
//!
//! let other_gc = my_gc.clone(); // shallow copy
//! other_gc.set(512);
//!
//! assert_eq!(my_gc.get(), 512);
//! ```
//!
//! For data which is shared across threads, you can use [`sync::Gc`] with the exact same API.
//!
//! ```
//! use dumpster::sync::Gc;
//! use std::sync::Mutex;
//!
//! let my_shared_gc = Gc::new(Mutex::new(25));
//! let other_shared_gc = my_shared_gc.clone();
//!
//! std::thread::scope(|s| {
//! s.spawn(move || {
//! *other_shared_gc.lock().unwrap() = 35;
//! });
//! });
//!
//! println!("{}", *my_shared_gc.lock().unwrap());
//! ```
//!
//! It's trivial to use custom data structures with the provided derive macro.
//!
//! ```
//! use dumpster::{unsync::Gc, Trace};
//! use std::cell::RefCell;
//!
//! #[derive(Trace)]
//! struct Foo {
//! refs: RefCell<Vec<Gc<Foo>>>,
//! }
//!
//! let foo = Gc::new(Foo {
//! refs: RefCell::new(Vec::new()),
//! });
//!
//! foo.refs.borrow_mut().push(foo.clone());
//!
//! drop(foo);
//!
//! // even though foo had a self reference, it still got collected!
//! ```
//!
//! # Installation
//!
//! To use `dumpster`, add the following lines to your `Cargo.toml`.
//!
//! ```toml
//! [dependencies]
//! dumpster = "2.1.0"
//! ```
//!
//! # Optional features
//!
//! ## `derive`
//!
//! `derive` is enabled by default.
//! It enables the derive macro for `Trace`, which makes it easy for users to implement their
//! own Trace types.
//!
//! ```
//! use dumpster::{unsync::Gc, Trace};
//! use std::cell::RefCell;
//!
//! #[derive(Trace)] // no manual implementation required
//! struct Foo(RefCell<Option<Gc<Foo>>>);
//!
//! let my_foo = Gc::new(Foo(RefCell::new(None)));
//! *my_foo.0.borrow_mut() = Some(my_foo.clone());
//!
//! drop(my_foo); // my_foo will be automatically cleaned up
//! ```
//!
//! ## `either`
//!
//! `either` is disabled by default. It adds support for the [`either`](https://crates.io/crates/either) crate,
//! specifically by implementing [`Trace`] for [`either::Either`](https://docs.rs/either/1.13.0/either/enum.Either.html).
//!
//! ## `coerce-unsized`
//!
//! `coerce-unsized` is disabled by default.
//! This enables the implementation of [`std::ops::CoerceUnsized`] for each garbage collector,
//! making it possible to use `Gc` with `!Sized` types conveniently.
#![cfg_attr(
feature = "coerce-unsized",
doc = r#"
```
// this only works with "coerce-unsized" enabled while compiling on nightly Rust
use dumpster::unsync::Gc;
let gc1: Gc<[u8]> = Gc::new([1, 2, 3]);
```
"#
)]
//! To use `coerce-unsized`, edit your `Cargo.toml` to include the feature.
//!
//! ```toml
//! [dependencies]
//! dumpster = { version = "2.1.0", features = ["coerce-unsized"]}
//! ```
//!
//! ## Loom support
//!
//! `dumpster` has experimental support for permutation testing under [`loom`](https://github.com/tokio-rs/loom).
//! It is expected to be unstable and buggy.
//! To compile `dumpster` using `loom`, add `--cfg loom` to `RUSTFLAGS` when compiling, for example:
//!
//! ```sh
//! RUSTFLAGS='--cfg loom' cargo test
//! ```
//!
//! # License
//!
//! `dumpster` is licensed under the Mozilla Public License, version 2.0.
//! For more details, refer to
//! [LICENSE.md](https://github.com/claytonwramsey/dumpster/blob/master/LICENSE.md).
//!
//! This project includes portions of code derived from the Rust standard library,
//! which is dual-licensed under the MIT and Apache 2.0 licenses.
//! Copyright (c) The Rust Project Developers.
#![warn(clippy::pedantic)]
#![warn(clippy::cargo)]
#![warn(missing_docs)]
#![warn(clippy::missing_docs_in_private_items)]
#![warn(clippy::allow_attributes, reason = "prefer expect over allow")]
#![allow(clippy::multiple_crate_versions, clippy::result_unit_err)]
#![cfg_attr(feature = "coerce-unsized", feature(coerce_unsized))]
#![cfg_attr(feature = "coerce-unsized", feature(unsize))]
mod impls;
mod ptr;
pub mod sync;
pub mod unsync;
/// Contains the sealed trait for [`Trace`].
mod trace {
use crate::{sync::TraceSync, unsync::TraceUnsync, ContainsGcs, TraceWith};
/// The sealed trait for [`Trace`](crate::Trace),
/// hiding away the implementation details and making it
/// impossible to manually implement `Trace`.
#[expect(clippy::missing_safety_doc)]
#[expect(private_bounds)]
pub unsafe trait TraceWithV: TraceWith<ContainsGcs> + TraceSync + TraceUnsync {}
unsafe impl<T> TraceWithV for T where T: ?Sized + TraceWith<ContainsGcs> + TraceSync + TraceUnsync {}
}
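// Illustrative consequence of the seal: downstream code cannot write `impl Trace for MyType`
// directly. Instead, implementing `TraceWith<V> for MyType` for every `V: Visitor` causes the
// blanket impls above (and `impl<T> Trace for T` below) to apply automatically.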
/// The trait that any garbage-collected data must implement.
///
/// This trait should usually be implemented by using `#[derive(Trace)]`, using the provided
/// macro.
/// Only data structures using raw pointers or other magic should manually implement `Trace`.
///
/// To manually implement `Trace` you need to implement [`TraceWith<V>`].
/// Any type that implements `TraceWith` for all <code>V: [Visitor]</code>
/// automatically implements `Trace`.
///
/// # Examples
///
/// Implementing `Trace` for a scalar type which contains no garbage-collected references
/// is very easy.
/// Accepting a visitor is simply a no-op.
///
/// ```
/// use dumpster::{TraceWith, Visitor};
///
/// struct Foo(u8);
///
/// unsafe impl<V: Visitor> TraceWith<V> for Foo {
/// fn accept(&self, visitor: &mut V) -> Result<(), ()> {
/// Ok(())
/// }
/// }
/// ```
///
/// However, if a data structure contains a garbage collected pointer, it must delegate to its
/// fields in `accept`.
///
/// ```
/// use dumpster::{unsync::Gc, TraceWith, Visitor};
///
/// struct Bar(Gc<Bar>);
///
/// unsafe impl<V: Visitor> TraceWith<V> for Bar {
/// fn accept(&self, visitor: &mut V) -> Result<(), ()> {
/// self.0.accept(visitor)
/// }
/// }
/// ```
///
/// A data structure with two or more fields which could own a garbage-collected pointer should
/// delegate to both fields in a consistent order:
///
/// ```
/// use dumpster::{unsync::Gc, TraceWith, Visitor};
///
/// struct Baz {
/// a: Gc<Baz>,
/// b: Gc<Baz>,
/// }
///
/// unsafe impl<V: Visitor> TraceWith<V> for Baz {
/// fn accept(&self, visitor: &mut V) -> Result<(), ()> {
/// self.a.accept(visitor)?;
/// self.b.accept(visitor)?;
/// Ok(())
/// }
/// }
/// ```
///
/// `Trace` is dyn-compatible, so you can use it as a subtrait
/// to allocate your own trait object.
///
/// ```
/// use dumpster::{
/// unsync::{coerce_gc, Gc},
/// Trace,
/// };
///
/// trait MyTrait: Trace {}
/// impl<T: Trace> MyTrait for T {}
///
/// let gc: Gc<i32> = Gc::new(5);
/// let gc: Gc<dyn MyTrait> = coerce_gc!(gc);
/// ```
pub trait Trace: trace::TraceWithV {}
impl<T> Trace for T where T: trace::TraceWithV + ?Sized {}
/// The underlying tracing implementation powering the [`Trace`] trait.
///
/// # Safety
///
/// If the implementation of this trait is incorrect, this will result in undefined behavior,
/// typically double-frees or use-after-frees.
/// This includes [`TraceWith::accept`], even though it is a safe function, since its correctness
/// is required for safety.
///
/// The garbage collector in `dumpster` requires strong assumptions about the values inside of a
/// `Gc`; by implementing `TraceWith`, you are responsible for these assumptions.
/// Specifically, in order to be `TraceWith`, a value must have a _tree-like_ ownership structure.
/// If some type `T` implements `TraceWith`, it means that no references to a value inside `T` will
/// remain valid while `T` is moved. For instance, this means that `Rc` can never be `Trace`, as
/// moving one `Rc` will not invalidate other `Rc`s pointing to the same allocation.
/// We allow exceptions for fields of `T` that are not visited by the implementation of
/// [`TraceWith::accept`], such as borrows (see the implementation of `TraceWith` for `&T`) and
/// naturally for [`unsync::Gc`] and [`sync::Gc`].
///
/// Any structure whose implementation of `TraceWith` comes from `#[derive(Trace)]` satisfies the
/// tree-like requirement.
pub unsafe trait TraceWith<V: Visitor> {
/// Accept a visitor to this garbage-collected value.
///
/// Implementors of this function need only delegate to all fields owned by this value which
/// may contain a garbage-collected reference (either a [`sync::Gc`] or a [`unsync::Gc`]).
/// This delegation must be done in a consistent order.
///
/// A structure with more than one field should return immediately after the first `Err` is
/// returned by one of its fields.
/// To do so efficiently, we recommend using the try operator (`?`) on each delegation and
/// returning `Ok(())` at the end.
///
/// # Errors
///
/// Errors are returned from this function whenever a field of this object returns an error
/// after delegating acceptance to it, or if this value's data is inaccessible (such as
/// attempting to borrow from a [`RefCell`](std::cell::RefCell) which has already been
/// mutably borrowed).
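///
/// # Examples
///
/// A minimal sketch of a fallible `accept` (the `Cache` type here is hypothetical, not part
/// of `dumpster`):
///
/// ```
/// use dumpster::{TraceWith, Visitor};
/// use std::cell::RefCell;
///
/// struct Cache(RefCell<Vec<u8>>);
///
/// unsafe impl<V: Visitor> TraceWith<V> for Cache {
///     fn accept(&self, visitor: &mut V) -> Result<(), ()> {
///         // If the cell is mutably borrowed, its contents are inaccessible right now:
///         // report failure instead of panicking.
///         self.0.try_borrow().map_err(|_| ())?.accept(visitor)
///     }
/// }
/// ```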
fn accept(&self, visitor: &mut V) -> Result<(), ()>;
}
/// A visitor for a garbage collected value.
///
/// This visitor allows us to hide details of the implementation of the garbage-collection procedure
/// from implementors of [`Trace`].
///
/// When accepted by a `Trace`, this visitor will be delegated down until it reaches a
/// garbage-collected pointer.
/// Then, the garbage-collected pointer will call one of `visit_sync` or `visit_unsync`, depending
/// on which type of pointer it is.
///
/// In general, it's not expected for consumers of this library to write their own visitors.
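///
/// # Examples
///
/// Even so, here is a sketch of a custom visitor which counts the garbage-collected pointers
/// owned by a value (illustrative only; `CountGcs` is not part of the crate's API):
///
/// ```
/// use dumpster::{sync, unsync, Trace, TraceWith, Visitor};
///
/// struct CountGcs(usize);
///
/// impl Visitor for CountGcs {
///     fn visit_sync<T>(&mut self, _: &sync::Gc<T>)
///     where
///         T: Trace + Send + Sync + ?Sized,
///     {
///         self.0 += 1;
///     }
///
///     fn visit_unsync<T>(&mut self, _: &unsync::Gc<T>)
///     where
///         T: Trace + ?Sized,
///     {
///         self.0 += 1;
///     }
/// }
///
/// let pair = (unsync::Gc::new(1), unsync::Gc::new(2));
/// let mut v = CountGcs(0);
/// pair.accept(&mut v).unwrap();
/// assert_eq!(v.0, 2);
/// ```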
pub trait Visitor {
/// Visit a synchronized garbage-collected pointer.
///
/// This function is called for every [`sync::Gc`] owned by the value that accepted this
/// visitor.
fn visit_sync<T>(&mut self, gc: &sync::Gc<T>)
where
T: Trace + Send + Sync + ?Sized;
/// Visit a thread-local garbage-collected pointer.
///
/// This function is called for every [`unsync::Gc`] owned by the value that accepted this
/// visitor.
fn visit_unsync<T>(&mut self, gc: &unsync::Gc<T>)
where
T: Trace + ?Sized;
}
// Re-export #[derive(Trace)].
//
// The derive macro is gated behind the `derive` feature (enabled by default) so that crates
// which provide handwritten `TraceWith` impls can avoid the proc-macro dependency by disabling
// default features.
#[cfg(feature = "derive")]
extern crate dumpster_derive;
#[cfg(feature = "derive")]
/// The derive macro for implementing `Trace`.
///
/// This enables users of `dumpster` to easily store custom types inside a `Gc`.
/// To do so, simply annotate your type with `#[derive(Trace)]`.
///
/// # Examples
///
/// ```
/// use dumpster::Trace;
///
/// #[derive(Trace)]
/// struct Foo {
/// bar: Option<Box<Foo>>,
/// }
/// ```
///
/// You can specify the crate path for the `dumpster` crate using the `dumpster` attribute:
///
/// ```
/// use dumpster as dumpster_renamed;
/// use dumpster_renamed::Trace;
///
/// #[derive(Trace)]
/// #[dumpster(crate = dumpster_renamed)]
/// struct Foo {
/// bar: Option<Box<Foo>>,
/// }
/// ```
pub use dumpster_derive::Trace;
/// Determine whether some value contains a garbage-collected pointer.
///
/// This function will return one of three values:
/// - `Ok(true)`: The data structure contains a garbage-collected pointer.
/// - `Ok(false)`: The data structure contains no garbage-collected pointers.
/// - `Err(())`: The data structure was accessed while we checked it for garbage-collected pointers.
fn contains_gcs<T: Trace + ?Sized>(x: &T) -> Result<bool, ()> {
let mut visit = ContainsGcs(false);
x.accept(&mut visit)?;
Ok(visit.0)
}
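// Illustrative behavior sketch: `contains_gcs(&5u8)` yields `Ok(false)`; a value owning an
// `unsync::Gc` yields `Ok(true)`; and a value whose interior `RefCell` is mutably borrowed
// while we check it yields `Err(())`.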
/// A visitor structure used for determining whether some garbage-collected pointer contains a
/// `Gc` in its pointed-to value.
struct ContainsGcs(bool);
impl Visitor for ContainsGcs {
fn visit_sync<T>(&mut self, _: &sync::Gc<T>)
where
T: Trace + Send + Sync + ?Sized,
{
self.0 = true;
}
fn visit_unsync<T>(&mut self, _: &unsync::Gc<T>)
where
T: Trace + ?Sized,
{
self.0 = true;
}
}
/// Panics with a message that explains that the gc object has already been collected.
#[cold]
#[inline(never)]
fn panic_deref_of_collected_object() -> ! {
panic!(
"Attempt to dereference Gc to already-collected object. \
This means a Gc escaped from a Drop implementation, likely implying a bug in your code.",
);
}
================================================
FILE: dumpster/src/ptr.rs
================================================
/*
dumpster, a cycle-tracking garbage collector for Rust. Copyright (C) 2023 Clayton Ramsey.
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
*/
//! Custom pointer types used by this garbage collector.
use std::{
fmt,
mem::{size_of, MaybeUninit},
ptr::{addr_of, addr_of_mut, copy_nonoverlapping, null, NonNull},
};
#[repr(C)]
#[derive(Clone, Copy)]
/// A pointer for an allocation, extracted out as raw data.
/// This contains both the pointer and all the pointer's metadata, but hidden behind an unknown
/// interpretation.
/// We trust that all pointers (even to `?Sized` or `dyn` types) are 2 words or fewer in size.
/// This is a hack! Like, a big hack!
pub(crate) struct Erased([*const u8; 2]);
unsafe impl Send for Erased {}
unsafe impl Sync for Erased {}
impl Erased {
/// Construct a new erased pointer to some data from a reference.
///
/// # Panics
///
/// This function will panic if the size of a reference is larger than the size of an
/// `Erased`.
/// To my knowledge, there are no pointer types with this property.
pub fn new<T: ?Sized>(reference: NonNull<T>) -> Erased {
let mut ptr = Erased([null(); 2]);
let ptr_size = size_of::<NonNull<T>>();
// Extract out the pointer as raw memory
assert!(
ptr_size <= size_of::<Erased>(),
"pointers to T are too big for storage"
);
unsafe {
// SAFETY: We know that `ptr.0` has at least as much space as `ptr_size`, and that
// `reference` has size equal to `ptr_size`.
copy_nonoverlapping(
addr_of!(reference).cast::<u8>(),
addr_of_mut!(ptr.0).cast::<u8>(),
ptr_size,
);
}
ptr
}
/// Specify this pointer as a pointer to a particular type.
///
/// # Safety
///
/// This function must only be called with the same type `T` that the pointer was constructed
/// with via [`Erased::new`].
pub unsafe fn specify<T: ?Sized>(self) -> NonNull<T> {
let mut box_ref: MaybeUninit<NonNull<T>> = MaybeUninit::zeroed();
// For some reason, switching the ordering of casts causes this to create wacky undefined
// behavior. Why? I don't know. I have better things to do than pontificate on this on a
// Sunday afternoon.
copy_nonoverlapping(
addr_of!(self.0).cast::<u8>(),
addr_of_mut!(box_ref).cast::<u8>(),
size_of::<NonNull<T>>(),
);
box_ref.assume_init()
}
}
impl fmt::Debug for Erased {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "ErasedPtr({:x?})", self.0)
}
}
/// A nullable pointer to a `?Sized` type.
///
/// We need this because it's actually impossible to create a null `*mut T` if `T` is `?Sized`.
pub(crate) struct Nullable<T: ?Sized>(*mut T);
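// Illustrative note: for a fat-pointer type such as `dyn std::any::Any`, plain
// `std::ptr::null_mut::<dyn std::any::Any>()` does not compile, because only the address half
// of the pointer has a natural null value. `Nullable` instead zeroes the address with
// `with_addr(0)` while leaving the metadata half intact.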
impl<T: ?Sized> Nullable<T> {
/// Create a new nullable pointer from a non-null pointer.
pub fn new(ptr: NonNull<T>) -> Nullable<T> {
Nullable(ptr.as_ptr())
}
/// Convert this pointer to a null pointer.
pub fn as_null(self) -> Nullable<T> {
Nullable(self.0.with_addr(0))
}
/// Determine whether this pointer is null.
pub fn is_null(self) -> bool {
self.as_option().is_none()
}
/// Convert this pointer to an `Option<NonNull<T>>`.
pub fn as_option(self) -> Option<NonNull<T>> {
NonNull::new(self.0)
}
/// Convert this pointer to a `*mut T`.
pub fn as_ptr(self) -> *mut T {
self.0
}
/// Create a new nullable pointer from a pointer.
pub fn from_ptr(ptr: *mut T) -> Self {
Self(ptr)
}
/// Convert this pointer to a `NonNull<T>`, panicking if this pointer is null with message
/// `msg`.
pub fn expect(self, msg: &str) -> NonNull<T> {
self.as_option().expect(msg)
}
/// Convert this pointer to a `NonNull<T>`, panicking if this pointer is null.
pub fn unwrap(self) -> NonNull<T> {
self.as_option().unwrap()
}
/// Convert this pointer to a `NonNull<T>`.
///
/// # Safety
///
/// The pointer must not be null.
pub unsafe fn unwrap_unchecked(self) -> NonNull<T> {
self.as_option().unwrap_unchecked()
}
}
impl<T: ?Sized> Clone for Nullable<T> {
fn clone(&self) -> Self {
*self
}
}
impl<T: ?Sized> Copy for Nullable<T> {}
#[cfg(feature = "coerce-unsized")]
impl<T, U> std::ops::CoerceUnsized<Nullable<U>> for Nullable<T>
where
T: std::marker::Unsize<U> + ?Sized,
U: ?Sized,
{
}
impl<T: ?Sized> fmt::Debug for Nullable<T> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "Nullable({:x?})", self.0)
}
}
#[cfg(test)]
mod tests {
use core::any::Any;
use std::alloc::{dealloc, Layout};
use super::*;
#[test]
fn erased_alloc() {
let orig_ptr: &mut u8 = Box::leak(Box::new(7));
let erased_ptr = Erased::new(NonNull::from(orig_ptr));
unsafe {
let remade_ptr = erased_ptr.specify::<u8>();
assert_eq!(*remade_ptr.as_ref(), 7);
dealloc(remade_ptr.as_ptr(), Layout::for_value(remade_ptr.as_ref()));
}
}
#[test]
fn erased_alloc_slice() {
let orig_ptr: &mut [u8] = Box::leak(Box::new([7, 8, 9]));
let erased_ptr = Erased::new(NonNull::from(orig_ptr));
unsafe {
let remade_ptr = erased_ptr.specify::<[u8]>();
assert_eq!(remade_ptr.as_ref(), [7, 8, 9]);
dealloc(
remade_ptr.as_ptr().cast(),
Layout::for_value(remade_ptr.as_ref()),
);
}
}
#[test]
fn erased_alloc_dyn() {
let orig_ptr: &mut dyn Any = Box::leak(Box::new(7u8));
let erased_ptr = Erased::new(NonNull::from(orig_ptr));
unsafe {
let remade_ptr = erased_ptr.specify::<dyn Any>();
assert_eq!(*remade_ptr.as_ref().downcast_ref::<u8>().unwrap(), 7);
dealloc(
remade_ptr.as_ptr().cast(),
Layout::for_value(remade_ptr.as_ref()),
);
}
}
}
================================================
FILE: dumpster/src/sync/cell.rs
================================================
/*
dumpster, a cycle-tracking garbage collector for Rust. Copyright (C) 2023 Clayton Ramsey.
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
*/
//! A shim for using either Loom or the standard library in garbage-collected environments.
#[cfg(loom)]
use loom::cell::UnsafeCell;
#[cfg(not(loom))]
use std::cell::UnsafeCell;
#[derive(Debug)]
/// An unsafe cell that is agnostic over using `std` or `loom` as its backing implementation.
/// It is intended to only be used with [`Copy`] data.
pub struct UCell<T>(UnsafeCell<T>);
impl<T> UCell<T> {
/// Construct a `UCell` containing the value.
pub fn new(x: T) -> Self {
Self(UnsafeCell::new(x))
}
/// Get the value inside the `UCell`.
///
/// # Safety
///
/// This function can only be called when no other code is calling [`UCell::set`].
pub unsafe fn get(&self) -> T
where
T: Copy,
{
#[cfg(loom)]
{
*self.0.get().deref()
}
#[cfg(not(loom))]
{
*self.0.get()
}
}
/// Overwrite the value inside this cell.
///
/// # Safety
///
/// This function can only be called when no other code is calling [`UCell::set`] or
/// [`UCell::get`].
pub unsafe fn set(&self, x: T) {
#[cfg(loom)]
{
*self.0.get_mut().deref() = x;
}
#[cfg(not(loom))]
{
*self.0.get() = x;
}
}
}
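// Usage sketch (illustrative, not part of the source): all synchronization is supplied by the
// caller, e.g. by only touching the cell while holding an outer lock.
//
// let cell = UCell::new(0u8);
// unsafe {
//     // SAFETY: no other thread is calling `get` or `set` concurrently.
//     cell.set(5);
//     assert_eq!(cell.get(), 5);
// }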
#[cfg(not(loom))]
#[cfg(feature = "coerce-unsized")]
impl<T, U> std::ops::CoerceUnsized<UCell<crate::ptr::Nullable<U>>>
for UCell<crate::ptr::Nullable<T>>
where
T: std::marker::Unsize<U> + ?Sized,
U: ?Sized,
{
}
================================================
FILE: dumpster/src/sync/collect.rs
================================================
/*
dumpster, a cycle-tracking garbage collector for Rust. Copyright (C) 2023 Clayton Ramsey.
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
*/
//! A synchronized collection algorithm.
use std::{
alloc::{dealloc, Layout},
cell::{Cell, LazyCell, RefCell},
collections::hash_map::Entry,
hash::Hash,
mem::{replace, swap, take, transmute},
ptr::{drop_in_place, NonNull},
};
#[cfg(not(loom))]
use std::sync::atomic::{AtomicPtr, AtomicUsize, Ordering};
use foldhash::{HashMap, HashMapExt};
#[cfg(loom)]
use loom::{
lazy_static,
sync::atomic::{AtomicPtr, AtomicUsize, Ordering},
thread_local,
};
#[cfg(not(loom))]
use parking_lot::{Mutex, RwLock};
#[cfg(loom)]
use crate::sync::loom_ext::{Mutex, RwLock};
use crate::{ptr::Erased, Trace, Visitor};
use super::{default_collect_condition, CollectCondition, CollectInfo, Gc, GcBox, CURRENT_TAG};
/// The garbage truck, which is a global data structure containing information about allocations
/// which might need to be collected.
struct GarbageTruck {
/// The contents of the garbage truck, containing all the allocations which need to be
/// collected and have already been delivered by a [`Dumpster`].
contents: Mutex<LazyCell<HashMap<AllocationId, TrashCan>>>,
/// A lock used for synchronizing threads that are awaiting completion of a collection process.
/// This lock should be acquired for reads by threads running a collection and for writes by
/// threads awaiting collection completion.
collecting_lock: RwLock<()>,
/// The number of [`Gc`]s dropped since the last time [`GarbageTruck::collect_all()`] was
/// called.
n_gcs_dropped: AtomicUsize,
/// The number of [`Gc`]s currently existing (which have not had their internals replaced with
/// `None`).
n_gcs_existing: AtomicUsize,
/// The function which determines whether a collection should be triggered.
/// This pointer value should always be cast to a [`CollectCondition`], but since `AtomicPtr`
/// doesn't handle function pointers correctly, we just cast to `*mut ()`.
collect_condition: AtomicPtr<()>,
}
/// A structure containing the global information for the garbage collector.
pub(super) struct Dumpster {
/// A lookup table for the allocations which may need to be cleaned up later.
pub contents: RefCell<HashMap<AllocationId, TrashCan>>,
/// The number of times an allocation on this thread has been dropped.
n_drops: Cell<usize>,
}
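// Flow sketch (illustrative): dropping a `Gc` marks its allocation as dirty in the
// thread-local `DUMPSTER`; once that dumpster is full (see `Dumpster::is_full`), its contents
// are delivered to the global `GARBAGE_TRUCK`; and `GarbageTruck::collect_all` then traces
// everything in the truck to find and destroy unreachable allocations.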
#[derive(Clone, Copy, PartialEq, Eq, Debug, Hash)]
/// A unique identifier for an allocation.
pub(super) struct AllocationId(NonNull<GcBox<()>>);
#[derive(Debug)]
/// The information which describes an allocation that may need to be cleaned up later.
pub(super) struct TrashCan {
/// A pointer to the allocation to be cleaned up.
ptr: Erased,
/// The function which can be used to build a reference graph.
/// This function is safe to call on `ptr`.
dfs_fn: unsafe fn(Erased, &mut HashMap<AllocationId, AllocationInfo>),
}
#[derive(Debug)]
/// A node in the reference graph, which is constructed while searching for unreachable allocations.
struct AllocationInfo {
/// An erased pointer to the allocation.
ptr: Erased,
/// Function for dropping the allocation when its weak and strong counts hit zero.
/// Should have the same behavior as dropping a Gc normally to a reference count of zero.
weak_drop_fn: unsafe fn(Erased),
/// Information about this allocation's reachability.
reachability: Reachability,
}
#[derive(Debug)]
/// The state of whether an allocation is reachable or of unknown reachability.
enum Reachability {
/// The information describing an allocation whose accessibility is unknown.
Unknown {
/// The IDs for the allocations directly accessible from this allocation.
children: Vec<AllocationId>,
/// The number of references in the reference count for this allocation which are
/// "unaccounted," which have not been found while constructing the graph.
/// It is the difference between the allocation's indegree in the "true" reference graph vs
/// the one we are currently building.
n_unaccounted: usize,
/// A function used to destroy the allocation.
destroy_fn: unsafe fn(Erased, &HashMap<AllocationId, AllocationInfo>),
},
/// The allocation here is reachable.
/// No further information is needed.
Reachable,
}
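// Worked example (illustrative): suppose allocation X has a strong count of 3 and the DFS has
// found 2 `Gc`s pointing to X inside other allocations. Then X's `n_unaccounted` is 1: one
// strong reference was not found inside any traced allocation, so it must be held from
// outside (e.g. on a stack), and X is reachable.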
#[cfg(not(loom))]
/// The global garbage truck.
/// All [`TrashCan`]s should eventually end up in here.
static GARBAGE_TRUCK: GarbageTruck = GarbageTruck::new();
#[cfg(loom)]
lazy_static! {
static ref GARBAGE_TRUCK: GarbageTruck = GarbageTruck::new();
}
thread_local! {
/// The dumpster for this thread.
/// Allocations which are "dirty" will be transferred to this dumpster before being moved into
/// the garbage truck for final collection.
pub(super) static DUMPSTER: Dumpster = Dumpster {
contents: RefCell::new(HashMap::new()),
n_drops: Cell::new(0),
};
}
#[cfg(not(loom))]
thread_local! {
/// Whether the currently-running thread is doing a cleanup.
/// This cannot be stored in `DUMPSTER` because otherwise it would cause weird use-after-drop
/// behavior.
static CLEANING: Cell<bool> = const { Cell::new(false) };
}
#[cfg(loom)]
thread_local! {
/// Whether the currently-running thread is doing a cleanup.
/// This cannot be stored in `DUMPSTER` because otherwise it would cause weird use-after-drop
/// behavior.
static CLEANING: Cell<bool> = Cell::new(false);
}
/// Deliver this thread's dumpster to the garbage truck, collect all allocations in the garbage
/// truck (but not those still in other threads' dumpsters), then await completion of the
/// collection.
/// This ensures that all allocations dropped on the calling thread are cleaned up.
pub fn collect_all_await() {
_ = DUMPSTER.try_with(|d| d.deliver_to(&GARBAGE_TRUCK));
GARBAGE_TRUCK.collect_all();
drop(GARBAGE_TRUCK.collecting_lock.read());
}
/// Notify that a `Gc` was destroyed, and update the tracking count for the number of dropped and
/// existing `Gc`s.
///
/// This may trigger a linear-time cleanup of all allocations, but such cleanups are guaranteed
/// to occur with less-than-linear frequency, so each drop remains amortized O(1).
pub fn notify_dropped_gc() {
GARBAGE_TRUCK.n_gcs_existing.fetch_sub(1, Ordering::Relaxed);
GARBAGE_TRUCK.n_gcs_dropped.fetch_add(1, Ordering::Relaxed);
// Do not do deliver or collect if we are currently cleaning or this thread is dying.
// This prevents deadlocks.
if !CLEANING.try_with(Cell::get).is_ok_and(|x| !x) {
return;
}
_ = DUMPSTER.try_with(|dumpster| {
dumpster.n_drops.set(dumpster.n_drops.get() + 1);
if dumpster.is_full() {
dumpster.deliver_to(&GARBAGE_TRUCK);
}
});
let collect_cond = unsafe {
// SAFETY: we only ever store collection conditions in the collect-condition box
transmute::<*mut (), CollectCondition>(
GARBAGE_TRUCK.collect_condition.load(Ordering::Relaxed),
)
};
if collect_cond(&CollectInfo { _private: () }) {
GARBAGE_TRUCK.collect_all();
}
}
/// Notify that a [`Gc`] was created, and increment the number of total existing `Gc`s.
pub fn notify_created_gc() {
GARBAGE_TRUCK.n_gcs_existing.fetch_add(1, Ordering::Relaxed);
}
/// Mark an allocation as "dirty," implying that it may or may not be inaccessible and need to
/// be cleaned up.
///
/// # Safety
///
/// When calling this method, you have to ensure that `allocation`
/// is [convertible to a reference](core::ptr#pointer-to-reference-conversion).
pub(super) unsafe fn mark_dirty<T>(allocation: NonNull<GcBox<T>>)
where
T: Trace + Send + Sync + ?Sized,
{
_ = DUMPSTER.try_with(|dumpster| {
if dumpster
.contents
.borrow_mut()
.insert(
AllocationId::from(allocation),
TrashCan {
ptr: Erased::new(allocation),
dfs_fn: dfs::<T>,
},
)
.is_none()
{
// SAFETY: the caller must guarantee that `allocation` meets all the
// requirements for a reference.
unsafe { allocation.as_ref() }
.weak
.fetch_add(1, Ordering::Acquire);
}
});
}
/// Mark an allocation as "clean," implying that it has already been cleaned up and does not
/// need to be cleaned again.
pub(super) fn mark_clean<T>(allocation: &GcBox<T>)
where
T: Trace + Send + Sync + ?Sized,
{
_ = DUMPSTER.try_with(|dumpster| {
if dumpster
.contents
.borrow_mut()
.remove(&AllocationId::from(allocation))
.is_some()
{
allocation.weak.fetch_sub(1, Ordering::Release);
}
});
}
#[cfg(test)]
/// Deliver all [`TrashCan`]s from this thread's dumpster into the garbage truck.
///
/// This function is available to support testing, but currently is not part of the public API.
pub(super) fn deliver_dumpster() {
_ = DUMPSTER.try_with(|d| d.deliver_to(&GARBAGE_TRUCK));
}
/// Set the function which determines whether the garbage collector should be run.
///
/// `f` will be periodically called by the garbage collector to determine whether it should perform
/// a full traversal of the heap.
/// When `f` returns true, a traversal will begin.
///
/// # Examples
///
/// ```
/// use dumpster::sync::{set_collect_condition, CollectInfo};
///
/// /// This function will make sure a GC traversal never happens unless directly activated.
/// fn never_collect(_: &CollectInfo) -> bool {
/// false
/// }
///
/// set_collect_condition(never_collect);
/// ```
pub fn set_collect_condition(f: CollectCondition) {
GARBAGE_TRUCK
.collect_condition
.store(f as *mut (), Ordering::Relaxed);
}
/// Get the number of [`Gc`]s dropped since the last collection.
pub fn n_gcs_dropped() -> usize {
GARBAGE_TRUCK.n_gcs_dropped.load(Ordering::Relaxed)
}
/// Get the number of [`Gc`]s currently existing in the entire program.
pub fn n_gcs_existing() -> usize {
GARBAGE_TRUCK.n_gcs_existing.load(Ordering::Relaxed)
}
impl Dumpster {
/// Deliver all [`TrashCan`]s contained by this dumpster to the garbage truck, removing them
/// from the local dumpster storage and adding them to the global truck.
fn deliver_to(&self, garbage_truck: &GarbageTruck) {
let mut guard = garbage_truck.contents.lock();
self.n_drops.set(0);
self.deliver_to_contents(&mut guard);
}
/// Deliver the entries in this dumpster to `contents`.
fn deliver_to_contents(&self, contents: &mut HashMap<AllocationId, TrashCan>) {
for (id, can) in self.contents.borrow_mut().drain() {
if contents.insert(id, can).is_some() {
unsafe {
// SAFETY: an allocation can only be in the dumpster if it still exists and its
// header is valid
id.0.as_ref()
}
.weak
.fetch_sub(1, Ordering::Release);
}
}
}
/// Determine whether this dumpster is full (and therefore should have its contents delivered to
/// the garbage truck).
fn is_full(&self) -> bool {
self.contents.borrow().len() > 100_000 || self.n_drops.get() > 100_000
}
}
impl GarbageTruck {
/// Construct a new, empty garbage truck.
///
/// Since the `GarbageTruck` is meant to be a single global value, this function should only be
/// called once in the initialization of `GARBAGE_TRUCK`.
#[cfg(not(loom))]
const fn new() -> Self {
Self {
contents: Mutex::new(LazyCell::new(HashMap::new)),
collecting_lock: RwLock::new(()),
n_gcs_dropped: AtomicUsize::new(0),
n_gcs_existing: AtomicUsize::new(0),
collect_condition: AtomicPtr::new(default_collect_condition as *mut ()),
}
}
/// Construct a new, empty garbage truck.
///
/// Since the `GarbageTruck` is meant to be a single global value, this function should only be
/// called once in the initialization of `GARBAGE_TRUCK`.
#[cfg(loom)]
fn new() -> Self {
Self {
contents: Mutex::new(LazyCell::new(HashMap::new)),
collecting_lock: RwLock::new(()),
n_gcs_dropped: AtomicUsize::new(0),
n_gcs_existing: AtomicUsize::new(0),
collect_condition: AtomicPtr::new(default_collect_condition as *mut ()),
}
}
/// Search through the set of existing allocations which have been marked as possibly
/// inaccessible, and determine whether they are truly inaccessible.
/// If so, drop those allocations.
fn collect_all(&self) {
let collecting_guard = self.collecting_lock.write();
self.n_gcs_dropped.store(0, Ordering::Relaxed);
let to_collect = take(&mut **self.contents.lock());
let mut ref_graph = HashMap::with_capacity(to_collect.len());
CURRENT_TAG.fetch_add(1, Ordering::Release);
for (_, TrashCan { ptr, dfs_fn }) in to_collect {
unsafe {
// SAFETY: `ptr` may only be in `to_collect` if it was a valid pointer
// and `dfs_fn` must have been created with the intent of referring to
// the erased type of `ptr`.
dfs_fn(ptr, &mut ref_graph);
}
}
let root_ids = ref_graph
.iter()
.filter_map(|(&k, v)| match v.reachability {
Reachability::Reachable => Some(k),
Reachability::Unknown { n_unaccounted, .. } => (n_unaccounted > 0
|| unsafe {
// SAFETY: we found `k` in the reference graph,
// so it must still be an extant allocation
k.0.as_ref().weak.load(Ordering::Acquire) > 1
})
.then_some(k),
})
.collect::<Vec<_>>();
for root_id in root_ids {
mark(root_id, &mut ref_graph);
}
CLEANING.with(|c| c.set(true));
// set of allocations which must be destroyed because we were the last weak pointer to it
let mut weak_destroys = Vec::new();
for (id, node) in &ref_graph {
let header_ref = unsafe { id.0.as_ref() };
match node.reachability {
Reachability::Unknown { destroy_fn, .. } => unsafe {
// SAFETY: `destroy_fn` must have been created with `node.ptr` in mind,
// and we have proven that no other references to `node.ptr` exist
destroy_fn(node.ptr, &ref_graph);
},
Reachability::Reachable => {
if header_ref.weak.fetch_sub(1, Ordering::Release) == 1
&& header_ref.strong.load(Ordering::Acquire) == 0
{
// we are the last reference to the allocation.
// mark to be cleaned up later
// no real synchronization loss to storing the guard because we had the last
// reference anyway
weak_destroys.push((node.weak_drop_fn, node.ptr));
}
}
}
}
CLEANING.with(|c| c.set(false));
for (drop_fn, ptr) in weak_destroys {
unsafe {
// SAFETY: we have proven (via header_ref.weak = 1) that the cleaning
// process had the last reference to the allocation.
// `drop_fn` must have been created with the true value of `ptr` in mind.
drop_fn(ptr);
};
}
drop(collecting_guard);
}
}
/// Build out a part of the reference graph, making note of all allocations which are reachable from
/// the one described in `ptr`.
///
/// # Inputs
///
/// - `ptr`: A pointer to the allocation that we should start constructing from.
/// - `ref_graph`: A lookup from allocation IDs to node information about that allocation.
///
/// # Effects
///
/// `ref_graph` will be expanded to include all allocations reachable from `ptr`.
///
/// # Safety
///
/// `ptr` must have been created as a pointer to a `GcBox<T>`.
unsafe fn dfs<T: Trace + Send + Sync + ?Sized>(
ptr: Erased,
ref_graph: &mut HashMap<AllocationId, AllocationInfo>,
) {
let box_ref = unsafe {
// SAFETY: We require `ptr` to be an erased pointer to `GcBox<T>`.
ptr.specify::<GcBox<T>>().as_ref()
};
let starting_id = AllocationId::from(box_ref);
let Entry::Vacant(v) = ref_graph.entry(starting_id) else {
// the weak count was incremented by another DFS operation elsewhere.
// Decrement it to have only one from us.
box_ref.weak.fetch_sub(1, Ordering::Release);
return;
};
let strong_count = box_ref.strong.load(Ordering::Acquire);
v.insert(AllocationInfo {
ptr,
weak_drop_fn: drop_weak_zero::<T>,
reachability: Reachability::Unknown {
children: Vec::new(),
n_unaccounted: strong_count,
destroy_fn: destroy_erased::<T>,
},
});
if box_ref
.value
.accept(&mut Dfs {
ref_graph,
current_id: starting_id,
})
.is_err()
|| box_ref.generation.load(Ordering::Acquire) >= CURRENT_TAG.load(Ordering::Relaxed)
{
// box_ref.value was accessed while we worked
// mark this allocation as reachable
mark(starting_id, ref_graph);
}
}
#[derive(Debug)]
/// The visitor structure used for building the found-reference-graph of allocations.
pub(super) struct Dfs<'a> {
/// The reference graph.
/// Each allocation is assigned a node.
ref_graph: &'a mut HashMap<AllocationId, AllocationInfo>,
/// The allocation ID currently being visited.
/// Used for knowing which node is the parent of another.
current_id: AllocationId,
}
impl Visitor for Dfs<'_> {
fn visit_sync<T>(&mut self, gc: &Gc<T>)
where
T: Trace + Send + Sync + ?Sized,
{
if Gc::is_dead(gc) {
return;
}
// must not use deref operators since we don't want to update the generation
let ptr = unsafe {
// SAFETY: This is the same as the deref implementation, but avoids
// incrementing the generation count.
gc.ptr.get().unwrap()
};
let box_ref = unsafe {
// SAFETY: same as above.
ptr.as_ref()
};
let current_tag = CURRENT_TAG.load(Ordering::Relaxed);
if gc.tag.swap(current_tag, Ordering::Relaxed) >= current_tag
|| box_ref.generation.load(Ordering::Acquire) >= current_tag
{
// This pointer was already tagged by this sweep, or its allocation was accessed while we
// worked, so it must have been moved by something else. Conservatively mark it as reachable.
mark(self.current_id, self.ref_graph);
return;
}
let mut new_id = AllocationId::from(box_ref);
let Reachability::Unknown {
ref mut children, ..
} = self
.ref_graph
.get_mut(&self.current_id)
.unwrap()
.reachability
else {
// this node has been proven reachable by something higher up. No need to keep building
// its ref graph
return;
};
children.push(new_id);
match self.ref_graph.entry(new_id) {
Entry::Occupied(mut o) => match o.get_mut().reachability {
Reachability::Unknown {
ref mut n_unaccounted,
..
} => {
*n_unaccounted -= 1;
}
Reachability::Reachable => (),
},
Entry::Vacant(v) => {
// This allocation has never been visited by the reference graph builder
let strong_count = box_ref.strong.load(Ordering::Acquire);
box_ref.weak.fetch_add(1, Ordering::Acquire);
v.insert(AllocationInfo {
ptr: Erased::new(ptr),
weak_drop_fn: drop_weak_zero::<T>,
reachability: Reachability::Unknown {
children: Vec::new(),
n_unaccounted: strong_count - 1,
destroy_fn: destroy_erased::<T>,
},
});
// Save the previously visited ID, then carry on to the next one
swap(&mut new_id, &mut self.current_id);
if box_ref.value.accept(self).is_err()
|| box_ref.generation.load(Ordering::Acquire) >= current_tag
{
// On failure, this means `**gc` is accessible, and should be marked
// as such
mark(self.current_id, self.ref_graph);
}
// Restore current_id and carry on
swap(&mut new_id, &mut self.current_id);
}
}
}
fn visit_unsync<T>(&mut self, _: &crate::unsync::Gc<T>)
where
T: Trace + ?Sized,
{
unreachable!("sync Gc cannot own an unsync Gc");
}
}
/// Traverse the reference graph, marking `root` and any allocations reachable from `root` as
/// reachable.
fn mark(root: AllocationId, graph: &mut HashMap<AllocationId, AllocationInfo>) {
let node = graph.get_mut(&root).unwrap();
if let Reachability::Unknown { children, .. } =
replace(&mut node.reachability, Reachability::Reachable)
{
for child in children {
mark(child, graph);
}
}
}
/// A visitor for decrementing the reference count of pointees.
pub(super) struct PrepareForDestruction<'a> {
/// The reference graph.
/// Must have been populated with reachability already.
graph: &'a HashMap<AllocationId, AllocationInfo>,
}
impl Visitor for PrepareForDestruction<'_> {
fn visit_sync<T>(&mut self, gc: &crate::sync::Gc<T>)
where
T: Trace + Send + Sync + ?Sized,
{
if Gc::is_dead(gc) {
return;
}
let id = AllocationId::from(unsafe {
// SAFETY: This is the same as dereferencing the GC.
gc.ptr.get().unwrap()
});
if matches!(self.graph[&id].reachability, Reachability::Reachable) {
unsafe {
// SAFETY: This is the same as dereferencing the GC.
id.0.as_ref().strong.fetch_sub(1, Ordering::Release);
}
}
unsafe {
// SAFETY: we have a unique reference to `gc` as we are destroying the structure.
gc.kill();
}
}
fn visit_unsync<T>(&mut self, _: &crate::unsync::Gc<T>)
where
T: Trace + ?Sized,
{
unreachable!("no unsync members of sync Gc possible!");
}
}
/// Destroy an allocation, obliterating its GCs, dropping it, and deallocating it.
///
/// # Safety
///
/// `ptr` must have been created from a pointer to a `GcBox<T>`.
unsafe fn destroy_erased<T: Trace + Send + Sync + ?Sized>(
ptr: Erased,
graph: &HashMap<AllocationId, AllocationInfo>,
) {
let specified = ptr.specify::<GcBox<T>>().as_mut();
specified
.value
.accept(&mut PrepareForDestruction { graph })
.expect("allocation assumed to be unreachable but somehow was accessed");
let layout = Layout::for_value(specified);
drop_in_place(specified);
dealloc(std::ptr::from_mut::<GcBox<T>>(specified).cast(), layout);
}
/// Function for handling dropping an allocation when its weak and strong reference counts
/// reach zero.
///
/// # Safety
///
/// `ptr` must have been created as a pointer to a `GcBox<T>`.
unsafe fn drop_weak_zero<T: Trace + Send + Sync + ?Sized>(ptr: Erased) {
let mut specified = ptr.specify::<GcBox<T>>();
assert_eq!(specified.as_ref().weak.load(Ordering::Relaxed), 0);
assert_eq!(specified.as_ref().strong.load(Ordering::Relaxed), 0);
let layout = Layout::for_value(specified.as_ref());
drop_in_place(specified.as_mut());
dealloc(specified.as_ptr().cast(), layout);
}
unsafe impl Send for AllocationId {}
unsafe impl Sync for AllocationId {}
impl<T> From<&GcBox<T>> for AllocationId
where
T: Trace + Send + Sync + ?Sized,
{
fn from(value: &GcBox<T>) -> Self {
AllocationId(NonNull::from(value).cast())
}
}
impl<T> From<NonNull<GcBox<T>>> for AllocationId
where
T: Trace + Send + Sync + ?Sized,
{
fn from(value: NonNull<GcBox<T>>) -> Self {
AllocationId(value.cast())
}
}
#[cfg(not(loom))] // cannot access lazy static in drop
impl Drop for Dumpster {
fn drop(&mut self) {
self.deliver_to(&GARBAGE_TRUCK);
// collect_all();
}
}
================================================
FILE: dumpster/src/sync/loom_ext.rs
================================================
/*
dumpster, a cycle-tracking garbage collector for Rust. Copyright (C) 2023 Clayton Ramsey.
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
*/
//! Wrapper types and utilities (with their tests) for running under loom.
#![cfg_attr(not(test), allow(dead_code))]
use std::{
mem::MaybeUninit,
ops::Deref,
sync::{PoisonError, TryLockError},
};
use loom::{
cell::UnsafeCell,
sync::{
Mutex as MutexImpl, MutexGuard, RwLock as RwLockImpl, RwLockReadGuard, RwLockWriteGuard,
},
};
use crate::{TraceWith, Visitor};
/// Simple wrapper mutex type.
pub struct Mutex<T: ?Sized>(MutexImpl<T>);
unsafe impl<V: Visitor, T: TraceWith<V> + ?Sized> TraceWith<V> for Mutex<T> {
fn accept(&self, visitor: &mut V) -> Result<(), ()> {
self.0
.try_lock()
.map_err(|e| match e {
TryLockError::Poisoned(_) => panic!(),
TryLockError::WouldBlock => (),
})?
.deref()
.accept(visitor)
}
}
impl<T> Mutex<T> {
/// Construct a new mutex.
pub fn new(value: T) -> Self {
Self(MutexImpl::new(value))
}
/// Lock the mutex.
pub fn lock(&self) -> MutexGuard<'_, T> {
self.0.lock().unwrap_or_else(PoisonError::into_inner)
}
#[expect(dead_code)]
/// Is the mutex locked?
pub fn is_locked(&self) -> bool {
matches!(self.0.try_lock(), Err(TryLockError::WouldBlock))
}
}
/// A read-write lock
pub struct RwLock<T>(RwLockImpl<T>);
impl<T> RwLock<T> {
/// Construct a rwlock.
pub fn new(value: T) -> Self {
Self(RwLockImpl::new(value))
}
/// Get a read guard.
pub fn read(&self) -> RwLockReadGuard<'_, T> {
self.0.read().unwrap_or_else(PoisonError::into_inner)
}
/// Get a write guard.
pub fn write(&self) -> RwLockWriteGuard<'_, T> {
self.0.write().unwrap_or_else(PoisonError::into_inner)
}
}
/// A once-object.
struct Once {
/// Completed?
is_completed: Mutex<bool>,
}
impl Once {
/// Construct a once.
fn new() -> Self {
Self {
is_completed: Mutex::new(false),
}
}
/// Call a function once.
fn call_once(&self, f: impl FnOnce()) {
let mut is_completed = self.is_completed.lock();
if *is_completed {
return;
}
f();
*is_completed = true;
}
/// Determine if we are completed.
fn is_completed(&self) -> bool {
*self.is_completed.lock()
}
}
/// A once-lock.
pub struct OnceLock<T> {
/// A thing that does it once.
once: Once,
/// The data.
value: UnsafeCell<MaybeUninit<T>>,
}
unsafe impl<T: Sync + Send> Sync for OnceLock<T> {}
unsafe impl<T: Send> Send for OnceLock<T> {}
unsafe impl<V: Visitor, T: TraceWith<V>> TraceWith<V> for OnceLock<T> {
fn accept(&self, visitor: &mut V) -> Result<(), ()> {
self.with(|value| value.accept(visitor)).unwrap_or(Ok(()))
}
}
impl<T> OnceLock<T> {
/// Construct a once-lock.
pub fn new() -> Self {
Self {
once: Once::new(),
value: UnsafeCell::new(MaybeUninit::uninit()),
}
}
/// Call a function uncheckedly.
unsafe fn with_unchecked<R>(&self, f: impl FnOnce(&T) -> R) -> R {
self.value
.with(|ptr| f(unsafe { (*ptr).assume_init_ref() }))
}
/// Apply a function.
pub fn with<R>(&self, f: impl FnOnce(&T) -> R) -> Option<R> {
if self.once.is_completed() {
Some(unsafe { self.with_unchecked(f) })
} else {
None
}
}
/// Apply or initialize.
pub fn with_or_init<R>(&self, init: impl FnOnce() -> T, f: impl FnOnce(&T) -> R) -> R {
self.once.call_once(|| {
self.value.with_mut(|ptr| unsafe {
(*ptr).write(init());
});
});
unsafe { self.with_unchecked(f) }
}
/// Set the value.
pub fn set(&self, value: T) {
self.with_or_init(|| value, |_| {});
}
}
#[test]
fn test_once() {
use loom::sync::{
atomic::{AtomicUsize, Ordering},
Arc,
};
loom::model(|| {
let once = Arc::new(Once::new());
let counter = Arc::new(AtomicUsize::new(0));
let mut join_handles = vec![];
for _ in 0..2 {
let once = once.clone();
let counter = counter.clone();
join_handles.push(loom::thread::spawn(move || {
once.call_once(|| {
counter.fetch_add(1, Ordering::Relaxed);
});
}));
}
for join_handle in join_handles {
join_handle.join().unwrap();
}
assert_eq!(counter.load(Ordering::Relaxed), 1);
});
}
#[test]
fn test_once_lock() {
use loom::sync::{
atomic::{AtomicUsize, Ordering},
Arc,
};
loom::model(|| {
let once_lock = Arc::new(OnceLock::<String>::new());
let counter = Arc::new(AtomicUsize::new(0));
let mut join_handles = vec![];
for _ in 0..2 {
let once_lock = once_lock.clone();
let counter = counter.clone();
join_handles.push(loom::thread::spawn({
move || {
once_lock.with_or_init(
|| {
counter.fetch_add(1, Ordering::Relaxed);
String::from("test")
},
|value| {
assert_eq!(value, "test");
},
);
}
}));
}
for join_handle in join_handles {
join_handle.join().unwrap();
}
assert_eq!(counter.load(Ordering::Relaxed), 1);
});
}
================================================
FILE: dumpster/src/sync/loom_tests.rs
================================================
/*
dumpster, a cycle-tracking garbage collector for Rust. Copyright (C) 2023 Clayton Ramsey.
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
*/
use loom::{
lazy_static,
sync::atomic::{AtomicUsize, Ordering},
};
use loom_ext::{Mutex, OnceLock};
use crate::Visitor;
use super::*;
struct DropCount<'a>(&'a AtomicUsize);
impl Drop for DropCount<'_> {
fn drop(&mut self) {
self.0.fetch_add(1, Ordering::Release);
}
}
unsafe impl<V: Visitor> TraceWith<V> for DropCount<'_> {
fn accept(&self, _: &mut V) -> Result<(), ()> {
Ok(())
}
}
struct MultiRef {
refs: Mutex<Vec<Gc<MultiRef>>>,
#[expect(unused)]
count: DropCount<'static>,
}
unsafe impl<V: Visitor> TraceWith<V> for MultiRef {
fn accept(&self, visitor: &mut V) -> Result<(), ()> {
self.refs.accept(visitor)
}
}
#[test]
fn loom_single_alloc() {
lazy_static! {
static ref DROP_COUNT: AtomicUsize = AtomicUsize::new(0);
}
loom::model(|| {
let gc1 = Gc::new(DropCount(&DROP_COUNT));
collect();
assert_eq!(DROP_COUNT.load(Ordering::Acquire), 0);
drop(gc1);
collect();
assert_eq!(DROP_COUNT.load(Ordering::Acquire), 1);
});
}
#[test]
fn loom_self_referential() {
struct Foo(Mutex<Option<Gc<Foo>>>);
lazy_static! {
static ref DROP_COUNT: AtomicUsize = AtomicUsize::new(0);
}
unsafe impl<V: Visitor> TraceWith<V> for Foo {
fn accept(&self, visitor: &mut V) -> Result<(), ()> {
self.0.accept(visitor)
}
}
impl Drop for Foo {
fn drop(&mut self) {
// println!("begin increment of the drop count!");
DROP_COUNT.fetch_add(1, Ordering::Release);
}
}
loom::model(|| {
let gc1 = Gc::new(Foo(Mutex::new(None)));
*gc1.0.lock() = Some(Gc::clone(&gc1));
assert_eq!(DROP_COUNT.load(Ordering::Acquire), 0);
drop(gc1);
collect();
assert_eq!(DROP_COUNT.load(Ordering::Acquire), 1);
});
}
#[test]
fn loom_two_cycle() {
lazy_static! {
static ref DROP_0: AtomicUsize = AtomicUsize::new(0);
static ref DROP_1: AtomicUsize = AtomicUsize::new(0);
}
loom::model(|| {
let gc0 = Gc::new(MultiRef {
refs: Mutex::new(Vec::new()),
count: DropCount(&DROP_0),
});
let gc1 = Gc::new(MultiRef {
refs: Mutex::new(vec![Gc::clone(&gc0)]),
count: DropCount(&DROP_1),
});
gc0.refs.lock().push(Gc::clone(&gc1));
collect();
assert_eq!(DROP_0.load(Ordering::Acquire), 0);
assert_eq!(DROP_1.load(Ordering::Acquire), 0);
drop(gc0);
collect();
assert_eq!(DROP_0.load(Ordering::Acquire), 0);
assert_eq!(DROP_1.load(Ordering::Acquire), 0);
drop(gc1);
collect();
assert_eq!(DROP_0.load(Ordering::Acquire), 1);
assert_eq!(DROP_1.load(Ordering::Acquire), 1);
});
}
#[test]
#[ignore = "not going to fix this for now"]
/// Test that creating a `Gc` during a `Drop` implementation will still not leak the `Gc`.
fn loom_sync_leak_by_creation_in_drop() {
lazy_static! {
static ref BAR_DROP_COUNT: [AtomicUsize; 2] = [AtomicUsize::new(0), AtomicUsize::new(0)];
}
struct Foo(OnceLock<Gc<Self>>, usize);
struct Bar(OnceLock<Gc<Self>>, usize);
unsafe impl<V: Visitor> TraceWith<V> for Foo {
fn accept(&self, visitor: &mut V) -> Result<(), ()> {
self.0.accept(visitor)
}
}
unsafe impl<V: Visitor> TraceWith<V> for Bar {
fn accept(&self, visitor: &mut V) -> Result<(), ()> {
self.0.accept(visitor)
}
}
impl Drop for Foo {
fn drop(&mut self) {
println!("calling drop for foo");
let gcbar = Gc::new(Bar(OnceLock::new(), self.1));
gcbar.0.set(gcbar.clone());
drop(gcbar);
// MUST be included for the test to succeed (in case Foo is collected on separate
// thread)
crate::sync::collect::deliver_dumpster();
println!("drop for foo done");
}
}
impl Drop for Bar {
fn drop(&mut self) {
println!("drop Bar");
BAR_DROP_COUNT[self.1].fetch_add(1, Ordering::Relaxed);
}
}
loom::model(|| {
println!("=========== NEW MODEL ITERATION ===============");
let mut join_handles = vec![];
for i in 0..2 {
join_handles.push(loom::thread::spawn(move || {
let foo = Gc::new(Foo(OnceLock::new(), i));
foo.0.set(foo.clone());
drop(foo);
println!("===== collect from {i} number 1");
collect(); // causes Bar to be created and then leaked
println!("===== collect from {i} number 2");
collect(); // cleans up Bar (eventually)
assert_eq!(
BAR_DROP_COUNT[i].load(Ordering::Relaxed),
1,
"failed to collect on thread 0"
);
collect::DUMPSTER.with(|d| println!("{:?}", d.contents));
assert!(collect::DUMPSTER.with(|d| d.contents.borrow().is_empty()));
}));
}
for join_handle in join_handles {
join_handle.join().unwrap();
}
});
}
================================================
FILE: dumpster/src/sync/mod.rs
================================================
/*
dumpster, a cycle-tracking garbage collector for Rust. Copyright (C) 2023 Clayton Ramsey.
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
*/
//! Thread-safe shared garbage collection.
//!
//! Most users of this module will be interested in using [`Gc`] directly out of the box - this will
//! just work.
//! Those with more particular needs (such as benchmarking) should turn toward
//! [`set_collect_condition`] in order to tune exactly when the garbage collector does cleanups.
//!
//! # Examples
//!
//! ```
//! use dumpster::sync::Gc;
//!
//! let my_gc = Gc::new(100);
//! let other_gc = my_gc.clone();
//!
//! drop(my_gc);
//! drop(other_gc);
//!
//! // contents of the Gc are automatically freed
//! ```
mod cell;
mod collect;
#[cfg(loom)]
mod loom_ext;
#[cfg(all(loom, test))]
mod loom_tests;
#[cfg(all(test, not(loom)))]
mod tests;
#[cfg(loom)]
use loom::{
lazy_static,
sync::atomic::{fence, AtomicUsize, Ordering},
};
use std::fmt::Display;
#[cfg(not(loom))]
use std::sync::atomic::{fence, AtomicUsize, Ordering};
use std::{
alloc::{dealloc, handle_alloc_error, Layout},
any::TypeId,
borrow::{Borrow, Cow},
fmt::Debug,
mem::{self, ManuallyDrop, MaybeUninit},
num::NonZeroUsize,
ops::Deref,
ptr::{self, addr_of, addr_of_mut, drop_in_place, NonNull},
slice,
};
use crate::{
contains_gcs, panic_deref_of_collected_object,
ptr::Nullable,
sync::{
cell::UCell,
collect::{Dfs, PrepareForDestruction},
},
Trace, TraceWith, Visitor,
};
use self::collect::{
collect_all_await, mark_clean, mark_dirty, n_gcs_dropped, n_gcs_existing, notify_created_gc,
notify_dropped_gc,
};
/// A soft limit on the amount of references that may be made to a `Gc`.
///
/// Going above this limit will abort your program (although not
/// necessarily) at _exactly_ `MAX_STRONG_COUNT + 1` references.
///
/// See comment in `Gc::clone`.
const MAX_STRONG_COUNT: usize = (isize::MAX) as usize;
/// Allows tracing with all sync visitors.
#[expect(private_bounds)]
pub(crate) trait TraceSync:
for<'a> TraceWith<Dfs<'a>> + for<'a> TraceWith<PrepareForDestruction<'a>> + TraceWith<Rehydrate>
{
}
impl<T> TraceSync for T where
T: ?Sized
+ for<'a> TraceWith<Dfs<'a>>
+ for<'a> TraceWith<PrepareForDestruction<'a>>
+ TraceWith<Rehydrate>
{
}
/// A thread-safe garbage-collected pointer.
///
/// This pointer can be duplicated and then shared across threads.
/// Garbage collection is performed concurrently.
///
/// # Examples
///
/// ```
/// use dumpster::sync::Gc;
/// use std::sync::atomic::{AtomicUsize, Ordering};
///
/// let shared = Gc::new(AtomicUsize::new(0));
///
/// std::thread::scope(|s| {
/// s.spawn(|| {
/// let other_gc = shared.clone();
/// other_gc.store(1, Ordering::Relaxed);
/// });
///
/// shared.store(2, Ordering::Relaxed);
/// });
///
/// println!("{}", shared.load(Ordering::Relaxed));
/// ```
///
/// # Interaction with `Drop`
///
/// While collecting cycles, it's possible for a `Gc` to exist that points to some deallocated
/// object.
/// To prevent undefined behavior, these `Gc`s are marked as dead during collection and rendered
/// inaccessible.
/// Dereferencing or cloning a `Gc` during the `Drop` implementation of a `Trace` type could
/// therefore result in a panic, which keeps the program from accessing memory after freeing it.
/// If you're accessing a `Gc` during a `Drop` implementation, make sure to use the fallible
/// operations [`Gc::try_deref`] and [`Gc::try_clone`].
pub struct Gc<T: Trace + Send + Sync + ?Sized + 'static> {
/// The pointer to the allocation.
ptr: UCell<Nullable<GcBox<T>>>,
/// The tag information of this pointer, used for mutation detection when marking.
tag: AtomicUsize,
}
#[cfg(not(loom))]
/// The tag of the current sweep operation.
/// All new allocations are minted with the current tag.
static CURRENT_TAG: AtomicUsize = AtomicUsize::new(0);
#[cfg(loom)]
lazy_static! {
static ref CURRENT_TAG: AtomicUsize = AtomicUsize::new(0);
}
#[repr(C)]
// This is only public to make the `sync_coerce_gc` macro work.
#[doc(hidden)]
/// The backing allocation for a [`Gc`].
pub struct GcBox<T>
where
T: Trace + Send + Sync + ?Sized,
{
/// The "strong" count, which is the number of extant `Gc`s to this allocation.
/// If the strong count is zero, a value contained in the allocation may be dropped, but the
/// allocation itself must still be valid.
strong: AtomicUsize,
/// The "weak" count, which is the number of references to this allocation stored in to-collect
/// buffers by the collection algorithm.
/// If the weak count is zero, the allocation may be destroyed.
weak: AtomicUsize,
/// The current generation number of the allocation.
/// The generation number is set to the current global tag every time a strong reference is
/// created or destroyed or a `Gc` pointing to this allocation is dereferenced.
generation: AtomicUsize,
/// The actual data stored in the allocation.
value: T,
}
unsafe impl<T> Send for Gc<T> where T: Trace + Send + Sync + ?Sized {}
unsafe impl<T> Sync for Gc<T> where T: Trace + Send + Sync + ?Sized {}
/// Begin a collection operation of the allocations on the heap.
///
/// Due to concurrency issues, this might not collect every single unreachable allocation that
/// currently exists, but calling `collect()` will typically clean up the allocations made by
/// this thread.
///
/// # Examples
///
/// ```
/// use dumpster::sync::{collect, Gc};
///
/// let gc = Gc::new(vec![1, 2, 3]);
/// drop(gc);
///
/// collect(); // the vector originally in `gc` _might_ be dropped now, but could be dropped later
/// ```
pub fn collect() {
collect_all_await();
}
#[derive(Debug)]
/// Information passed to a [`CollectCondition`] used to determine whether the garbage collector
/// should start collecting.
///
/// A `CollectInfo` is exclusively created by being passed as an argument to the collection
/// condition.
/// To set a custom collection condition, refer to [`set_collect_condition`].
///
/// # Examples
///
/// ```
/// use dumpster::sync::{set_collect_condition, CollectInfo};
///
/// fn my_collect_condition(info: &CollectInfo) -> bool {
/// (info.n_gcs_dropped_since_last_collect() + info.n_gcs_existing()) % 2 == 0
/// }
///
/// set_collect_condition(my_collect_condition);
/// ```
pub struct CollectInfo {
/// Dummy value so this is a private structure.
_private: (),
}
/// A function which determines whether the garbage collector should start collecting.
/// This type primarily exists so that it can be used with [`set_collect_condition`].
///
/// # Examples
///
/// ```rust
/// use dumpster::sync::{set_collect_condition, CollectInfo};
///
/// fn always_collect(_: &CollectInfo) -> bool {
/// true
/// }
///
/// set_collect_condition(always_collect);
/// ```
pub type CollectCondition = fn(&CollectInfo) -> bool;
#[must_use]
/// The default collection condition used by the garbage collector.
///
/// There are no guarantees about what this function returns, other than that it will return `true`
/// with sufficient frequency to ensure that all `Gc` operations are amortized _O(1)_ in runtime.
///
/// This function isn't meant to be called directly; rather, it should be handed off to
/// [`set_collect_condition`] to return the library to its default operating mode.
///
/// This collection condition applies globally, i.e. to every thread.
///
/// # Examples
///
/// ```rust
/// use dumpster::sync::{default_collect_condition, set_collect_condition, CollectInfo};
///
/// fn other_collect_condition(info: &CollectInfo) -> bool {
/// info.n_gcs_existing() >= 25 || default_collect_condition(info)
/// }
///
/// // Use my custom collection condition.
/// set_collect_condition(other_collect_condition);
///
/// // I'm sick of the custom collection condition.
/// // Return to the original.
/// set_collect_condition(default_collect_condition);
/// ```
pub fn default_collect_condition(info: &CollectInfo) -> bool {
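// Collect whenever more `Gc`s have been dropped since the last collection than currently exist.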
info.n_gcs_dropped_since_last_collect() > info.n_gcs_existing()
}
pub use collect::set_collect_condition;
impl<T> Gc<T>
where
T: Trace + Send + Sync + ?Sized,
{
/// Construct a new garbage-collected value.
///
/// # Examples
///
/// ```
/// use dumpster::sync::Gc;
///
/// let _ = Gc::new(0);
/// ```
pub fn new(value: T) -> Gc<T>
where
T: Sized,
{
notify_created_gc();
Gc {
ptr: UCell::new(Nullable::new(NonNull::from(Box::leak(Box::new(GcBox {
strong: AtomicUsize::new(1),
weak: AtomicUsize::new(0),
generation: AtomicUsize::new(CURRENT_TAG.load(Ordering::Acquire)),
value,
}))))),
tag: AtomicUsize::new(0),
}
}
/// Construct a self-referencing `Gc`.
///
/// `new_cyclic` first allocates memory for `T`, then constructs a dead `Gc`.
/// The dead `Gc` is then passed to `data_fn` to construct a value of `T`, which
/// is stored in the allocation. Finally, `new_cyclic` will update the dead self-referential
/// `Gc`s and rehydrate them to produce the final value.
///
/// # Panics
///
/// If `data_fn` panics, the panic is propagated to the caller.
/// The allocation is cleaned up normally.
///
/// Additionally, if the visitor fails to reach a `Gc` while attempting to rehydrate the
/// `Gc` members of the newly-constructed value, this function will panic and reserve the
/// allocation to be cleaned up later.
///
/// # Notes on safety
///
/// Incorrect implementations of `data_fn` may produce unusual results.
/// Although `dumpster` guarantees memory safety and will do its best to ensure correct
/// results, it is generally unwise to allow dead `Gc`s to exist for long.
/// An incorrectly implemented `data_fn` may cause panics later on inside the collection
/// process.
///
/// # Examples
///
/// ```
/// use dumpster::{sync::Gc, Trace};
///
/// #[derive(Trace)]
/// struct Cycle {
/// this: Gc<Self>,
/// }
///
/// let gc = Gc::new_cyclic(|this| Cycle { this });
/// assert!(Gc::ptr_eq(&gc, &gc.this));
/// ```
pub fn new_cyclic<F: FnOnce(Self) -> T>(data_fn: F) -> Self
where
T: Sized,
{
/// A struct containing an uninitialized value of `T`.
/// May only be used inside `new_cyclic`.
#[repr(transparent)]
struct Uninitialized<T>(MaybeUninit<T>);
unsafe impl<V: Visitor, T> TraceWith<V> for Uninitialized<T> {
fn accept(&self, _: &mut V) -> Result<(), ()> {
Ok(())
}
}
/// Data structure for cleaning up the allocation in case we panic along the way.
struct CleanUp<T: Trace + Send + Sync + 'static> {
/// Is `true` if the [`GcBox::value`] is initialized.
initialized: bool,
/// Pointer to the `GcBox` with a maybe uninitialized value.
ptr: NonNull<GcBox<T>>,
}
impl<T: Trace + Send + Sync + 'static> Drop for CleanUp<T> {
fn drop(&mut self) {
if self.initialized {
// push this `Gc` into the destruction queue
unsafe { mark_dirty(self.ptr) };
} else {
// deallocate because this `Gc` is not initialized
unsafe {
dealloc(
self.ptr.as_ptr().cast::<u8>(),
Layout::for_value(self.ptr.as_ref()),
);
}
}
}
}
// make an uninitialized allocation
notify_created_gc();
let mut gcbox = NonNull::from(Box::leak(Box::new(GcBox {
strong: AtomicUsize::new(1),
weak: AtomicUsize::new(0),
generation: AtomicUsize::new(CURRENT_TAG.load(Ordering::Acquire)),
value: Uninitialized(MaybeUninit::<T>::uninit()),
})));
let mut cleanup = CleanUp {
ptr: gcbox,
initialized: false,
};
// nilgc is a dead Gc
let nilgc = Gc {
tag: AtomicUsize::new(0),
ptr: UCell::new(Nullable::new(gcbox.cast::<GcBox<T>>()).as_null()),
};
assert!(Gc::is_dead(&nilgc));
unsafe {
// SAFETY: `gcbox` is a valid pointer to an uninitialized datum that we have allocated.
gcbox.as_mut().value = Uninitialized(MaybeUninit::new(data_fn(nilgc)));
}
cleanup.initialized = true;
let gcbox = gcbox.cast::<GcBox<T>>();
let res = unsafe {
// SAFETY: the above unsafe block correctly constructed the Uninitialized value, so it
// is safe to cast `gcbox` and then construct a reference.
gcbox.as_ref().value.accept(&mut Rehydrate {
ptr: Nullable::new(gcbox.cast()),
type_id: TypeId::of::<T>(),
})
};
assert!(
res.is_ok(),
"visitor must be able to access all Gc fields of structure when rehydrating dead Gcs"
);
let gc = Gc {
ptr: UCell::new(Nullable::new(gcbox)),
tag: AtomicUsize::new(CURRENT_TAG.load(Ordering::Acquire)),
};
let _ = ManuallyDrop::new(cleanup);
gc
}
/// Attempt to dereference this `Gc`.
///
/// This function will return `None` if `self` is a "dead" `Gc`, which points to an
/// already-deallocated object.
/// This can only occur if a `Gc` is accessed during the `Drop` implementation of a
/// [`Trace`] object.
///
/// For a version which panics instead of returning `None`, consider using [`Deref`].
///
/// # Examples
///
/// For a still-living `Gc`, this always returns `Some`.
///
/// ```
/// use dumpster::sync::Gc;
///
/// let gc1 = Gc::new(0);
/// assert!(Gc::try_deref(&gc1).is_some());
/// ```
///
/// The only way to get a `Gc` that fails on `try_deref` is by accessing a `Gc` during its
/// `Drop` implementation.
///
/// ```
/// use dumpster::{sync::Gc, Trace};
/// use std::sync::Mutex;
///
/// #[derive(Trace)]
/// struct Cycle(Mutex<Option<Gc<Self>>>);
///
/// impl Drop for Cycle {
/// fn drop(&mut self) {
/// let guard = self.0.lock().unwrap();
/// let maybe_ref = Gc::try_deref(guard.as_ref().unwrap());
/// assert!(maybe_ref.is_none());
/// }
/// }
///
/// let gc1 = Gc::new(Cycle(Mutex::new(None)));
/// *gc1.0.lock().unwrap() = Some(gc1.clone());
/// # drop(gc1);
/// # dumpster::sync::collect();
/// ```
pub fn try_deref(gc: &Gc<T>) -> Option<&T> {
unsafe { (!gc.ptr.get().is_null()).then(|| &**gc) }
}
/// Attempt to clone this `Gc`.
///
/// This function will return `None` if `self` is a "dead" `Gc`, which does not point to an
/// existing object. For details on dead `Gc`s, refer to [`Gc::is_dead`].
///
/// For a version that simply clones the dead `Gc`, use [`Clone`].
///
/// # Examples
///
/// For a still-living `Gc`, this always returns `Some`.
///
/// ```
/// use dumpster::sync::Gc;
///
/// let gc1 = Gc::new(0);
/// let gc2 = Gc::try_clone(&gc1).unwrap();
/// ```
///
/// The only way to get a `Gc` which fails on `try_clone` is by accessing a `Gc` during its
/// `Drop` implementation.
///
/// ```
/// use dumpster::{sync::Gc, Trace};
///
/// #[derive(Trace)]
/// struct Cycle(Gc<Self>);
///
/// impl Drop for Cycle {
/// fn drop(&mut self) {
/// let cloned = Gc::try_clone(&self.0);
/// assert!(cloned.is_none());
/// }
/// }
///
/// let gc1 = Gc::new_cyclic(|gc| Cycle(gc));
/// # drop(gc1);
/// # dumpster::sync::collect();
/// ```
pub fn try_clone(gc: &Gc<T>) -> Option<Gc<T>> {
unsafe { (!gc.ptr.get().is_null()).then(|| gc.clone()) }
}
/// Provides a raw pointer to the data.
///
/// # Panics
///
/// Panics if `self` is a "dead" `Gc`, which points to an already-deallocated object.
/// This can only occur if a `Gc` is accessed during the `Drop` implementation of a
/// [`Trace`] object.
///
/// # Examples
///
/// ```
/// use dumpster::sync::Gc;
/// let x = Gc::new("hello".to_owned());
/// let y = Gc::clone(&x);
/// let x_ptr = Gc::as_ptr(&x);
/// assert_eq!(x_ptr, Gc::as_ptr(&x));
/// assert_eq!(unsafe { &*x_ptr }, "hello");
/// ```
pub fn as_ptr(gc: &Gc<T>) -> *const T {
unsafe {
let ptr = NonNull::as_ptr(gc.ptr.get().unwrap());
addr_of_mut!((*ptr).value)
}
}
/// Determine whether two `Gc`s are equivalent by reference.
/// Returns `true` if both `this` and `other` point to the same value, in the same style as
/// [`std::ptr::eq`].
///
/// # Examples
///
/// ```
/// use dumpster::sync::Gc;
///
/// let gc1 = Gc::new(0);
/// let gc2 = Gc::clone(&gc1); // points to same spot as `gc1`
/// let gc3 = Gc::new(0); // same value, but points to a different object than `gc1`
///
/// assert!(Gc::ptr_eq(&gc1, &gc2));
/// assert!(!Gc::ptr_eq(&gc1, &gc3));
/// ```
pub fn ptr_eq(this: &Gc<T>, other: &Gc<T>) -> bool {
unsafe { this.ptr.get() }.as_option() == unsafe { other.ptr.get() }.as_option()
}
/// Get the number of references to the value pointed to by this `Gc`.
///
/// This does not include internal references generated by the garbage collector.
///
/// # Panics
///
/// This function panics if `gc` is "dead" (i.e. generated through a `Drop` implementation).
/// For further reference, take a look at [`Gc::is_dead`].
///
/// # Examples
///
/// ```
/// use dumpster::sync::Gc;
///
/// let gc = Gc::new(());
/// assert_eq!(Gc::ref_count(&gc).get(), 1);
/// let gc2 = gc.clone();
/// assert_eq!(Gc::ref_count(&gc).get(), 2);
/// drop(gc);
/// drop(gc2);
/// ```
pub fn ref_count(gc: &Self) -> NonZeroUsize {
let box_ptr = unsafe { gc.ptr.get() }.expect(
"Attempt to dereference Gc to already-collected object. \
This means a Gc escaped from a Drop implementation, likely implying a bug in your code.",
);
let box_ref = unsafe { box_ptr.as_ref() };
NonZeroUsize::new(box_ref.strong.load(Ordering::Relaxed))
.expect("strong count to a GcBox may never be zero while a Gc to it exists")
}
/// Determine whether this is a dead `Gc`.
///
/// A `Gc` is dead if it is not usable as a reference to any value.
/// Currently, a dead `Gc` may only be produced by accessing a `Gc` inside of the `Drop`
/// implementation of a garbage-collected value or by using the `Gc` provided to the
/// construction function in [`Gc::new_cyclic`].
///
/// # Examples
///
/// ```
/// use dumpster::{sync::Gc, Trace};
///
/// #[derive(Trace)]
/// struct Cycle(Gc<Self>);
///
/// impl Drop for Cycle {
/// fn drop(&mut self) {
/// assert!(Gc::is_dead(&self.0));
/// }
/// }
///
/// let gc1 = Gc::new_cyclic(|gc| Cycle(gc));
/// # drop(gc1);
/// # dumpster::sync::collect();
/// ```
#[inline]
pub fn is_dead(gc: &Self) -> bool {
unsafe { gc.ptr.get() }.is_null()
}
/// Consumes the `Gc<T>`, returning the inner `GcBox<T>` pointer and tag.
#[inline]
#[must_use]
fn into_ptr(this: Self) -> (*const GcBox<T>, usize) {
let this = ManuallyDrop::new(this);
let tag = &raw const this.tag;
let ptr = unsafe { this.ptr.get().as_ptr() };
let tag = unsafe { tag.read() }.into_inner();
(ptr, tag)
}
/// Constructs a `Gc<T>` from the inner `GcBox<T>` pointer and tag.
#[inline]
#[must_use]
unsafe fn from_ptr(ptr: *const GcBox<T>, tag: usize) -> Self {
Self {
ptr: UCell::new(Nullable::from_ptr(ptr.cast_mut())),
tag: AtomicUsize::new(tag),
}
}
/// Kill this `Gc`, making it dead.
///
/// # Safety
///
/// The caller is responsible for making sure that no other code can access this `Gc` while
/// `kill` is running.
unsafe fn kill(&self) {
self.ptr.set(self.ptr.get().as_null());
}
/// Exists solely for the [`coerce_gc`] macro.
#[inline]
#[must_use]
#[doc(hidden)]
pub fn __private_into_ptr(this: Self) -> (*const GcBox<T>, usize) {
Self::into_ptr(this)
}
/// Exists solely for the [`coerce_gc`] macro.
#[inline]
#[must_use]
#[doc(hidden)]
pub unsafe fn __private_from_ptr(ptr: *const GcBox<T>, tag: usize) -> Self {
Self::from_ptr(ptr, tag)
}
}
/// A struct for converting dead `Gc`s into live ones.
///
/// This is used in [`Gc::new_cyclic`].
pub(super) struct Rehydrate {
/// The pointer to the currently hydrating [`GcBox`].
ptr: Nullable<GcBox<()>>,
/// The [`TypeId`] of `T` in `Gc<T>` to be hydrated.
type_id: TypeId,
}
impl Visitor for Rehydrate {
fn visit_sync<T>(&mut self, gc: &Gc<T>)
where
T: Trace + Send + Sync + ?Sized,
{
if Gc::is_dead(gc) && TypeId::of::<T>() == self.type_id {
unsafe {
// SAFETY: it is safe to transmute these pointers because we have checked
// that they are of the same type.
// Additionally, the `GcBox` has been fully initialized, so it is safe to
// create a reference here.
let cell_ptr = (&raw const gc.ptr).cast::<UCell<Nullable<GcBox<()>>>>();
(*cell_ptr).set(self.ptr);
let box_ref = &*self.ptr.as_ptr();
let old_strong = box_ref.strong.fetch_add(1, Ordering::Relaxed);
// Check for overflow. See implementation of clone for details.
if old_strong > MAX_STRONG_COUNT {
std::process::abort();
}
box_ref
.generation
.store(CURRENT_TAG.load(Ordering::Acquire), Ordering::Release);
notify_created_gc();
}
}
}
fn visit_unsync<T>(&mut self, _: &crate::unsync::Gc<T>)
where
T: Trace + ?Sized,
{
}
}
impl<T: Trace + Send + Sync + Clone> Gc<T> {
/// Makes a mutable reference to the given `Gc`.
///
/// If there are other `Gc` pointers to the same allocation, then `make_mut` will
/// [`clone`] the inner value to a new allocation to ensure unique ownership. This is also
/// referred to as clone-on-write.
///
/// [`clone`]: Clone::clone
///
/// # Panics
///
/// This function panics if `this` is "dead" (i.e. generated through a `Drop` implementation).
/// For further reference, take a look at [`Gc::is_dead`].
///
/// # Examples
///
/// ```
/// use dumpster::sync::Gc;
///
/// let mut data = Gc::new(5);
///
/// *Gc::make_mut(&mut data) += 1; // Won't clone anything
/// let mut other_data = Gc::clone(&data); // Won't clone inner data
/// *Gc::make_mut(&mut data) += 1; // Clones inner data
/// *Gc::make_mut(&mut data) += 1; // Won't clone anything
/// *Gc::make_mut(&mut other_data) *= 2; // Won't clone anything
///
/// // Now `data` and `other_data` point to different allocations.
/// assert_eq!(*data, 8);
/// assert_eq!(*other_data, 12);
/// ```
#[inline]
pub fn make_mut(this: &mut Self) -> &mut T {
if Gc::is_dead(this) {
panic_deref_of_collected_object();
}
// SAFETY: we checked above that the object is alive (not null)
let box_ref = unsafe { this.ptr.get().unwrap_unchecked().as_ref() };
let strong = box_ref.strong.load(Ordering::Acquire);
let weak = box_ref.weak.load(Ordering::Acquire);
if strong != 1 || weak != 0 {
// We don't have unique access to the value so we need to clone it.
*this = Gc::new(box_ref.value.clone());
}
// SAFETY: we have exclusive access to this `GcBox` because we ensured
// that we hold the only reference to this allocation.
// No other `Gc`s point to this allocation because the strong count is 1, and there are no
// loose pointers internal to the collector because the weak count is 0.
unsafe { &mut (*this.ptr.get().as_ptr()).value }
}
}
/// Allows coercing `T` of [`Gc<T>`](Gc).
///
/// This means that you can convert a `Gc` containing a statically sized type (such as `[T; N]`)
/// into a `Gc` containing its unsized counterpart (such as `[T]`), all without using
/// nightly-only features.
///
/// This is one of two easy ways to create a `Gc<[T]>`; the other method is to use [`FromIterator`].
///
/// # Examples
///
/// ```
/// use dumpster::sync::{coerce_gc, Gc};
///
/// let gc1: Gc<[u8; 3]> = Gc::new([7, 8, 9]);
/// let gc2: Gc<[u8]> = coerce_gc!(gc1);
/// assert_eq!(&gc2[..], &[7, 8, 9]);
/// ```
///
/// Note that although this macro allows for type conversion, it _cannot_ be used for converting
/// between incompatible types.
///
/// ```compile_fail
/// // This program is incorrect!
/// use dumpster::sync::{Gc, coerce_gc};
///
/// let gc1: Gc<u8> = Gc::new(1);
/// let gc2: Gc<i8> = coerce_gc!(gc1);
/// ```
#[doc(hidden)]
#[macro_export]
macro_rules! __sync_coerce_gc {
($gc:expr) => {{
// Temporarily convert the `Gc` into a raw pointer to allow for coercion to occur.
let (ptr, tag): (*const _, usize) = $crate::sync::Gc::__private_into_ptr($gc);
unsafe { $crate::sync::Gc::__private_from_ptr(ptr, tag) }
}};
}
#[doc(inline)]
pub use crate::__sync_coerce_gc as coerce_gc;
impl<T> Clone for Gc<T>
where
T: Trace + Send + Sync + ?Sized,
{
/// Clone a garbage-collected reference.
/// This does not clone the underlying data.
/// If this `Gc` is [dead](`Gc::is_dead`), this will produce another dead `Gc`.
///
/// For a fallible version, refer to [`Gc::try_clone`].
///
/// # Examples
///
/// ```
/// use dumpster::sync::Gc;
/// use std::sync::atomic::{AtomicU8, Ordering};
///
/// let gc1 = Gc::new(AtomicU8::new(0));
/// let gc2 = gc1.clone();
///
/// gc1.store(1, Ordering::Relaxed);
/// assert_eq!(gc2.load(Ordering::Relaxed), 1);
/// ```
///
/// Note that you can also clone a dead `Gc`.
///
/// ```
/// use dumpster::{sync::Gc, Trace};
/// use std::sync::Mutex;
///
/// #[derive(Trace)]
/// struct Cycle(Gc<Self>);
///
/// impl Drop for Cycle {
/// fn drop(&mut self) {
/// let gc = self.0.clone();
/// assert!(Gc::is_dead(&gc));
/// }
/// }
///
/// let gc1 = Gc::new_cyclic(|gc| Cycle(gc));
/// # drop(gc1);
/// # dumpster::sync::collect();
/// ```
fn clone(&self) -> Gc<T> {
if Gc::is_dead(self) {
// Clone dead Gcs by doing a naive copy.
return unsafe { ptr::read(self) };
}
let box_ref = unsafe { self.ptr.get().unwrap().as_ref() };
// increment strong count before generation to ensure cleanup never underestimates ref count
let old_strong = box_ref.strong.fetch_add(1, Ordering::Acquire);
// We need to guard against massive refcounts in case someone is `mem::forget`ing
// Gcs. If we don't do this the count can overflow and users will use-after free. This
// branch will never be taken in any realistic program. We abort because such a program is
// incredibly degenerate, and we don't care to support it.
//
// This check is not 100% water-proof: we error when the refcount grows beyond `isize::MAX`.
// But we do that check *after* having done the increment, so there is a chance here that
// the worst already happened and we actually do overflow the `usize` counter. However, that
// requires the counter to grow from `isize::MAX` to `usize::MAX` between the increment
// above and the `abort` below, which seems exceedingly unlikely.
if old_strong > MAX_STRONG_COUNT {
std::process::abort();
}
box_ref
.generation
.store(CURRENT_TAG.load(Ordering::Acquire), Ordering::Release);
notify_created_gc();
// mark_clean(box_ref); // causes performance drops
Gc {
ptr: UCell::new(unsafe { self.ptr.get() }),
tag: AtomicUsize::new(CURRENT_TAG.load(Ordering::Acquire)),
}
}
}
impl<T> Drop for Gc<T>
where
T: Trace + Send + Sync + ?Sized,
{
fn drop(&mut self) {
let Some(mut ptr) = unsafe { self.ptr.get() }.as_option() else {
return;
};
let box_ref = unsafe { ptr.as_ref() };
box_ref.weak.fetch_add(1, Ordering::AcqRel); // ensures that this allocation wasn't freed
// while we weren't looking
box_ref
.generation
.store(CURRENT_TAG.load(Ordering::Relaxed), Ordering::Release);
match box_ref.strong.fetch_sub(1, Ordering::AcqRel) {
0 => unreachable!("strong cannot reach zero while a Gc to it exists"),
1 => {
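// We just destroyed the last strong reference, so the value may be dropped.
// Mark the allocation as clean so the collector no longer considers it for cycle detection.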
mark_clean(box_ref);
if box_ref.weak.fetch_sub(1, Ordering::Release) == 1 {
// destroyed the last weak reference! we can safely deallocate this
let layout = Layout::for_value(box_ref);
fence(Ordering::Acquire);
unsafe {
drop_in_place(ptr.as_mut());
dealloc(ptr.as_ptr().cast(), layout);
}
}
}
_ => {
if contains_gcs(&box_ref.value).unwrap_or(true) {
// SAFETY: `ptr` is convertible to a reference
// We don't use `box_ref` here because that pointer
// only has `SharedReadOnly` permissions under the stacked borrows model
// when we need `Unique` for the `TrashCan`.
unsafe { mark_dirty(ptr) };
}
box_ref.weak.fetch_sub(1, Ordering::Release);
}
}
notify_dropped_gc();
}
}
impl CollectInfo {
#[must_use]
/// Get the number of times that a [`Gc`] has been dropped since the last time a collection
/// operation was performed.
///
/// # Examples
///
/// ```
/// use dumpster::sync::{set_collect_condition, CollectInfo};
///
/// // Collection condition for whether many Gc's have been dropped.
/// fn have_many_gcs_dropped(info: &CollectInfo) -> bool {
/// info.n_gcs_dropped_since_last_collect() > 100
/// }
///
/// set_collect_condition(have_many_gcs_dropped);
/// ```
pub fn n_gcs_dropped_since_last_collect(&self) -> usize {
n_gcs_dropped()
}
#[must_use]
/// Get the total number of [`Gc`]s which currently exist.
///
/// # Examples
///
/// ```
/// use dumpster::sync::{set_collect_condition, CollectInfo};
///
/// // Collection condition for whether many Gc's currently exist.
/// fn do_many_gcs_exist(info: &CollectInfo) -> bool {
/// info.n_gcs_existing() > 100
/// }
///
/// set_collect_condition(do_many_gcs_exist);
/// ```
pub fn n_gcs_existing(&self) -> usize {
n_gcs_existing()
}
}
impl<T: Trace + Send + Sync + ?Sized> Gc<T> {
/// Allocates a `GcBox<T>` with sufficient space for
/// a value of the provided layout.
///
/// The function `mem_to_gc_box` is called with the data pointer
/// and must return back a pointer for the `GcBox<T>`.
unsafe fn allocate_for_layout(
value_layout: Layout,
mem_to_gc_box: impl FnOnce(*mut u8) -> *mut GcBox<T>,
) -> *mut GcBox<T> {
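// Compute the layout of the full allocation: the `GcBox` header (strong, weak, and
// generation counters) followed by the value, padded to the combined alignment.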
let layout = Layout::new::<GcBox<()>>()
.extend(value_layout)
.unwrap()
.0
.pad_to_align();
Self::allocate_for_layout_of_box(layout, mem_to_gc_box)
}
/// Allocates a `GcBox<T>` with the given layout.
///
/// The function `mem_to_gc_box` is called with the data pointer
/// and must return back a pointer for the `GcBox<T>`.
unsafe fn allocate_for_layout_of_box(
layout: Layout,
mem_to_gc_box: impl FnOnce(*mut u8) -> *mut GcBox<T>,
) -> *mut GcBox<T> {
// SAFETY: the layout has non-zero size because of the `GcBox` header fields.
let ptr = unsafe { std::alloc::alloc(layout) };
if ptr.is_null() {
handle_alloc_error(layout);
}
let inner = mem_to_gc_box(ptr);
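// Initialize the header fields in place; the caller is responsible for initializing `value`.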
unsafe {
(&raw mut (*inner).strong).write(AtomicUsize::new(1));
(&raw mut (*inner).weak).write(AtomicUsize::new(0));
(&raw mut (*inner).generation).write(AtomicUsize::new(0));
}
inner
}
}
impl<T: Trace + Send + Sync> Gc<[T]> {
/// Allocates a `GcBox<[T]>` with the given length.
fn allocate_for_slice(len: usize) -> *mut GcBox<[T]> {
unsafe {
Self::allocate_for_layout(Layout::array::<T>(len).unwrap(), |mem| {
ptr::slice_from_raw_parts_mut(mem.cast::<T>(), len) as *mut GcBox<[T]>
})
}
}
}
unsafe impl<V: Visitor, T: Trace + Send + Sync + ?Sized> TraceWith<V> for Gc<T> {
fn accept(&self, visitor: &mut V) -> Result<(), ()> {
visitor.visit_sync(self);
Ok(())
}
}
impl<T: Trace + Send + Sync + ?Sized> Deref for Gc<T> {
type Target = T;
/// Dereference this pointer, creating a reference to the contained value `T`.
///
/// # Panics
///
/// This function may panic if it is called from within the implementation of `std::ops::Drop`
/// of its owning value, since returning such a reference could cause a use-after-free.
/// It is not guaranteed to panic.
///
/// # Examples
///
/// The following is a correct time to dereference a `Gc`.
///
/// ```
/// use dumpster::sync::Gc;
///
/// let my_gc = Gc::new(0u8);
/// let my_ref: &u8 = &my_gc;
/// ```
///
/// Dereferencing a `Gc` while dropping is not correct.
///
/// ```should_panic
/// // This is wrong!
/// use dumpster::{sync::Gc, Trace};
/// use std::sync::Mutex;
///
/// #[derive(Trace)]
/// struct Bad {
/// s: String,
/// cycle: Mutex<Option<Gc<Bad>>>,
/// }
///
/// impl Drop for Bad {
/// fn drop(&mut self) {
/// println!("{}", self.cycle.lock().unwrap().as_ref().unwrap().s)
/// }
/// }
///
/// let foo = Gc::new(Bad {
/// s: "foo".to_string(),
/// cycle: Mutex::new(None),
/// });
/// ```
fn deref(&self) -> &Self::Target {
let box_ref = unsafe {
self.ptr.get().expect(
"Attempting to dereference Gc to already-deallocated object.\
This is caused by accessing a Gc during a Drop implementation, likely implying a bug in your code."
).as_ref()
};
let current_tag = CURRENT_TAG.load(Ordering::Acquire);
self.tag.store(current_tag, Ordering::Release);
box_ref.generation.store(current_tag, Ordering::Release);
&box_ref.value
}
}
impl<T> PartialEq<Gc<T>> for Gc<T>
where
T: Trace + Send + Sync + ?Sized + PartialEq,
{
/// Test for equality on two `Gc`s.
///
/// Two `Gc`s are equal if their inner values are equal, even if they are stored in different
/// allocations.
/// Because `PartialEq` does not imply reflexivity, and there is no current path for trait
/// specialization, this function does not do a "fast-path" check for reference equality.
/// Therefore, if two `Gc`s point to the same allocation, the implementation of `eq` will still
/// require a direct call to `eq` on the values.
///
/// # Panics
///
/// This function may panic if it is called from within the implementation of `std::ops::Drop`
/// of its owning value, since returning such a reference could cause a use-after-free.
/// It is not guaranteed to panic.
/// Additionally, if this `Gc` is moved out of an allocation during a `Drop` implementation, it
/// could later cause a panic.
/// For further details, refer to the main documentation for `Gc`.
///
/// # Examples
///
/// ```
/// use dumpster::sync::Gc;
///
/// let gc = Gc::new(6);
/// assert!(gc == Gc::new(6));
/// ```
fn eq(&self, other: &Gc<T>) -> bool {
self.as_ref() == other.as_ref()
}
}
impl<T> Eq for Gc<T> where T: Trace + Send + Sync + ?Sized + PartialEq {}
impl<T: Trace + Send + Sync + ?Sized> AsRef<T> for Gc<T> {
fn as_ref(&self) -> &T {
self
}
}
impl<T: Trace + Send + Sync + ?Sized> Borrow<T> for Gc<T> {
fn borrow(&self) -> &T {
self
}
}
impl<T: Trace + Send + Sync + Default> Default for Gc<T> {
fn default() -> Self {
Gc::new(T::default())
}
}
impl<T: Trace + Send + Sync + ?Sized> std::fmt::Pointer for Gc<T> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
std::fmt::Pointer::fmt(&addr_of!(**self), f)
}
}
#[cfg(not(loom))]
#[cfg(feature = "coerce-unsized")]
impl<T, U> std::ops::CoerceUnsized<Gc<U>> for Gc<T>
where
T: std::marker::Unsize<U> + Trace + Send + Sync + ?Sized,
U: Trace + Send + Sync + ?Sized,
{
}
impl<T: Trace + Send + Sync + ?Sized> Debug for Gc<T> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"Gc({:?}, {})",
self.ptr,
self.tag.load(Ordering::Acquire)
)
}
}
impl<T: Trace + Send + Sync + Display + ?Sized> Display for Gc<T> {
/// Formats the value using its `Display` implementation.
///
/// # Note
///
/// If `T` contains cyclic references through `Gc` pointers and its `Display` implementation
/// attempts to traverse them, this may cause infinite recursion. Types with potential cycles
/// should implement `Display` to avoid following cyclic references.
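///
/// # Examples
///
/// A minimal sketch of acyclic usage:
///
/// ```
/// use dumpster::sync::Gc;
///
/// let gc = Gc::new(5);
/// assert_eq!(format!("{gc}"), "5");
/// ```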
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
Display::fmt(&**self, f)
}
}
impl<T: Trace + Send + Sync> From<T> for Gc<T> {
/// Converts a generic type `T` into a `Gc<T>`.
///
/// The conversion allocates on the heap and moves `value`
/// from the stack into it.
///
/// # Example
/// ```rust
/// # use dumpster::sync::Gc;
/// let x = 5;
/// let rc = Gc::new(5);
///
/// assert_eq!(Gc::from(x), rc);
/// ```
fn from(value: T) -> Self {
Gc::new(value)
}
}
impl<T: Trace + Send + Sync, const N: usize> From<[T; N]> for Gc<[T]> {
/// Converts a [`[T; N]`](prim@array) into a `Gc<[T]>`.
///
/// The conversion moves the array into a newly allocated `Gc`.
///
/// # Example
///
/// ```
/// # use dumpster::sync::Gc;
/// let original: [i32; 3] = [1, 2, 3];
/// let shared: Gc<[i32]> = Gc::from(original);
/// assert_eq!(&[1, 2, 3], &shared[..]);
/// ```
#[inline]
fn from(v: [T; N]) -> Gc<[T]> {
coerce_gc!(Gc::<[T; N]>::from(v))
}
}
impl<T: Trace + Send + Sync + Clone> From<&[T]> for Gc<[T]> {
/// Allocates a garbage-collected slice and fills it by cloning `slice`'s items.
///
/// # Example
///
/// ```
/// # use dumpster::sync::Gc;
/// let original: &[i32] = &[1, 2, 3];
/// let shared: Gc<[i32]> = Gc::from(original);
/// assert_eq!(&[1, 2, 3], &shared[..]);
/// ```
#[inline]
fn from(slice: &[T]) -> Gc<[T]> {
// Panic guard while cloning T elements.
// In the event of a panic, elements that have been written
// into the new GcBox will be dropped, then the memory freed.
struct Guard<T> {
/// pointer to `GcBox` to deallocate on panic
mem: *mut u8,
/// layout of the `GcBox` to deallocate on panic
layout: Layout,
/// pointer to the `GcBox`'s value
elems: *mut T,
/// the number of elements cloned so far
n_elems: usize,
}
impl<T> Drop for Guard<T> {
fn drop(&mut self) {
unsafe {
let slice = slice::from_raw_parts_mut(self.elems, self.n_elems);
ptr::drop_in_place(slice);
dealloc(self.mem, self.layout);
}
}
}
unsafe {
let value_layout = Layout::array::<T>(slice.len()).unwrap();
let layout = Layout::new::<GcBox<()>>()
.extend(value_layout)
.unwrap()
.0
.pad_to_align();
let ptr = Self::allocate_for_layout_of_box(layout, |mem| {
ptr::slice_from_raw_parts_mut(mem.cast::<T>(), slice.len()) as *mut GcBox<[T]>
});
// Pointer to first element
let elems = (&raw mut (*ptr).value).cast::<T>();
let mut guard = Guard {
mem: ptr.cast::<u8>(),
layout,
elems,
n_elems: 0,
};
for (i, item) in slice.iter().enumerate() {
ptr::write(elems.add(i), item.clone());
guard.n_elems += 1;
}
// All clear. Forget the guard so it doesn't free the new GcBox.
mem::forget(guard);
notify_created_gc();
Self {
ptr: UCell::new(Nullable::from_ptr(ptr)),
tag: AtomicUsize::new(0),
}
}
}
}
impl<T: Trace + Send + Sync + Clone> From<&mut [T]> for Gc<[T]> {
/// Allocates a garbage-collected slice and fills it by cloning `value`'s items.
///
/// # Example
///
/// ```
/// # use dumpster::sync::Gc;
/// let mut original = [1, 2, 3];
/// let original: &mut [i32] = &mut original;
/// let shared: Gc<[i32]> = Gc::from(original);
/// assert_eq!(&[1, 2, 3], &shared[..]);
/// ```
#[inline]
fn from(value: &mut [T]) -> Self {
Gc::from(&*value)
}
}
impl From<&str> for Gc<str> {
/// Allocates a garbage-collected string slice and copies `v` into it.
///
/// # Example
///
/// ```
/// # use dumpster::sync::Gc;
/// let shared: Gc<str> = Gc::from("statue");
/// assert_eq!("statue", &shared[..]);
/// ```
#[inline]
fn from(v: &str) -> Self {
let bytes = Gc::<[u8]>::from(v.as_bytes());
let (ptr, tag) = Gc::into_ptr(bytes);
unsafe { Gc::from_ptr(ptr as *const GcBox<str>, tag) }
}
}
impl From<&mut str> for Gc<str> {
/// Allocates a garbage-collected string slice and copies `v` into it.
///
/// # Example
///
/// ```
/// # use dumpster::sync::Gc;
/// let mut original = String::from("statue");
/// let original: &mut str = &mut original;
/// let shared: Gc<str> = Gc::from(original);
/// assert_eq!("statue", &shared[..]);
/// ```
#[inline]
fn from(v: &mut str) -> Self {
Gc::from(&*v)
}
}
impl From<Gc<str>> for Gc<[u8]> {
/// Converts a garbage-collected string slice into a byte slice.
///
/// # Example
///
/// ```
/// # use dumpster::sync::Gc;
/// let string: Gc<str> = Gc::from("eggplant");
/// let bytes: Gc<[u8]> = Gc::from(string);
/// assert_eq!("eggplant".as_bytes(), bytes.as_ref());
/// ```
#[inline]
fn from(value: Gc<str>) -> Self {
let (ptr, tag) = Gc::into_ptr(value);
unsafe { Gc::from_ptr(ptr as *const GcBox<[u8]>, tag) }
}
}
impl From<String> for Gc<str> {
/// Allocates a garbage-collected string slice and copies `value` into it.
///
/// # Example
///
/// ```
/// # use dumpster::sync::Gc;
/// let original: String = "statue".to_owned();
/// let shared: Gc<str> = Gc::from(original);
/// assert_eq!("statue", &shared[..]);
/// ```
#[inline]
fn from(value: String) -> Self {
Self::from(&value[..])
}
}
impl<T: Trace + Send + Sync> From<Box<T>> for Gc<T> {
/// Move a boxed value into a new garbage-collected allocation.
///
/// # Example
///
/// ```
/// # use dumpster::sync::Gc;
/// let original: Box<i32> = Box::new(1);
/// let shared: Gc<i32> = Gc::from(original);
/// assert_eq!(1, *shared);
/// ```
#[inline]
fn from(src: Box<T>) -> Self {
unsafe {
let layout = Layout::for_value(&*src);
let gc_ptr = Gc::allocate_for_layout(layout, <*mut u8>::cast::<GcBox<T>>);
// Copy value as bytes
ptr::copy_nonoverlapping(
(&raw const *src).cast::<u8>(),
(&raw mut (*gc_ptr).value).cast::<u8>(),
layout.size(),
);
// Free the allocation without dropping its contents
let bptr = Box::into_raw(src);
let src = Box::from_raw(bptr.cast::<mem::ManuallyDrop<T>>());
drop(src);
notify_created_gc();
Self::from_ptr(gc_ptr, 0)
}
}
}
impl<T: Trace + Send + Sync> From<Vec<T>> for Gc<[T]> {
/// Allocates a garbage-collected slice and moves `vec`'s items into it.
///
/// # Example
///
/// ```
/// # use dumpster::sync::Gc;
/// let unique: Vec<i32> = vec![1, 2, 3];
/// let shared: Gc<[i32]> = Gc::from(unique);
/// assert_eq!(&[1, 2, 3], &shared[..]);
/// ```
#[inline]
fn from(vec: Vec<T>) -> Self {
let mut vec = ManuallyDrop::new(vec);
let vec_cap = vec.capacity();
let vec_len = vec.len();
let vec_ptr = vec.as_mut_ptr();
let gc_ptr = Self::allocate_for_slice(vec_len);
unsafe {
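// Move the elements into the new allocation, then free the vector's buffer without
// dropping the moved-out elements by reconstituting it with length zero.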
let dst_ptr = (&raw mut (*gc_ptr).value).cast::<T>();
ptr::copy_nonoverlapping(vec_ptr, dst_ptr, vec_len);
let _ = Vec::from_raw_parts(vec_ptr, 0, vec_cap);
notify_created_gc();
Self::from_ptr(gc_ptr, 0)
}
}
}
impl<'a, B: Trace + Send + Sync> From<Cow<'a, B>> for Gc<B>
where
B: ToOwned + ?Sized,
Gc<B>: From<&'a B> + From<B::Owned>,
{
/// Creates a garbage-collected pointer from a clone-on-write pointer by
/// copying its content.
///
/// # Example
///
/// ```rust
/// # use dumpster::sync::Gc;
/// # use std::borrow::Cow;
/// let cow: Cow<'_, str> = Cow::Borrowed("eggplant");
/// let shared: Gc<str> = Gc::from(cow);
/// assert_eq!("eggplant", &shared[..]);
/// ```
#[inline]
fn from(cow: Cow<'a, B>) -> Gc<B> {
match cow {
Cow::Borrowed(s) => Gc::from(s),
Cow::Owned(s) => Gc::from(s),
}
}
}
impl<T> FromIterator<T> for Gc<[T]>
where
T: Trace + Send + Sync,
{
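/// Collect the items of an iterator into a newly-allocated `Gc<[T]>`.
///
/// A minimal sketch of usage:
///
/// ```
/// use dumpster::sync::Gc;
///
/// let gc: Gc<[i32]> = (0..5).collect();
/// assert_eq!(&gc[..], &[0, 1, 2, 3, 4]);
/// ```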
fn from_iter<I: IntoIterator<Item = T>>(iter: I) -> Self {
// Collect into a `Vec` for O(n) performance.
// TODO: this could be slightly optimized by writing directly into the `Gc<[T]>` allocation,
// but this is a later problem.
Self::from(iter.into_iter().collect::<Vec<_>>())
}
}
================================================
FILE: dumpster/src/sync/tests.rs
================================================
/*
dumpster, a cycle-tracking garbage collector for Rust. Copyright (C) 2023 Clayton Ramsey.
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
*/
use std::{
collections::hash_map::Entry,
mem::{swap, take, transmute, MaybeUninit},
ptr::NonNull,
sync::{
atomic::{AtomicUsize, Ordering},
Mutex, OnceLock,
},
};
use foldhash::{HashMap, HashMapExt};
use crate::{sync::coerce_gc, Visitor};
use super::*;
struct DropCount<'a>(&'a AtomicUsize);
impl Drop for DropCount<'_> {
fn drop(&mut self) {
self.0.fetch_add(1, Ordering::Release);
}
}
unsafe impl<V: Visitor> TraceWith<V> for DropCount<'_> {
fn accept(&self, _: &mut V) -> Result<(), ()> {
Ok(())
}
}
struct MultiRef {
refs: Mutex<Vec<Gc<MultiRef>>>,
#[expect(unused)]
count: DropCount<'static>,
}
unsafe impl<V: Visitor> TraceWith<V> for MultiRef {
fn accept(&self, visitor: &mut V) -> Result<(), ()> {
self.refs.accept(visitor)
}
}
#[test]
fn single_alloc() {
static DROP_COUNT: AtomicUsize = AtomicUsize::new(0);
let gc1 = Gc::new(DropCount(&DROP_COUNT));
collect();
assert_eq!(DROP_COUNT.load(Ordering::Acquire), 0);
drop(gc1);
collect();
assert_eq!(DROP_COUNT.load(Ordering::Acquire), 1);
}
#[test]
fn ref_count() {
static DROP_COUNT: AtomicUsize = AtomicUsize::new(0);
let gc1 = Gc::new(DropCount(&DROP_COUNT));
let gc2 = Gc::clone(&gc1);
assert_eq!(DROP_COUNT.load(Ordering::Acquire), 0);
drop(gc1);
assert_eq!(DROP_COUNT.load(Ordering::Acquire), 0);
drop(gc2);
assert_eq!(DROP_COUNT.load(Ordering::Acquire), 1);
}
#[test]
fn self_referential() {
struct Foo(Mutex<Option<Gc<Foo>>>);
static DROP_COUNT: AtomicUsize = AtomicUsize::new(0);
unsafe impl<V: Visitor> TraceWith<V> for Foo {
fn accept(&self, visitor: &mut V) -> Result<(), ()> {
self.0.accept(visitor)
}
}
impl Drop for Foo {
fn drop(&mut self) {
println!("begin increment of the drop count!");
DROP_COUNT.fetch_add(1, Ordering::Release);
}
}
let gc1 = Gc::new(Foo(Mutex::new(None)));
*gc1.0.lock().unwrap() = Some(Gc::clone(&gc1));
assert_eq!(DROP_COUNT.load(Ordering::Acquire), 0);
drop(gc1);
collect();
assert_eq!(DROP_COUNT.load(Ordering::Acquire), 1);
}
#[test]
fn two_cycle() {
static DROP_0: AtomicUsize = AtomicUsize::new(0);
static DROP_1: AtomicUsize = AtomicUsize::new(0);
let gc0 = Gc::new(MultiRef {
refs: Mutex::new(Vec::new()),
count: DropCount(&DROP_0),
});
let gc1 = Gc::new(MultiRef {
refs: Mutex::new(vec![Gc::clone(&gc0)]),
count: DropCount(&DROP_1),
});
gc0.refs.lock().unwrap().push(Gc::clone(&gc1));
collect();
assert_eq!(DROP_0.load(Ordering::Acquire), 0);
assert_eq!(DROP_1.load(Ordering::Acquire), 0);
drop(gc0);
collect();
assert_eq!(DROP_0.load(Ordering::Acquire), 0);
assert_eq!(DROP_1.load(Ordering::Acquire), 0);
drop(gc1);
collect();
assert_eq!(DROP_0.load(Ordering::Acquire), 1);
assert_eq!(DROP_1.load(Ordering::Acquire), 1);
}
#[test]
fn self_ref_two_cycle() {
static DROP_0: AtomicUsize = AtomicUsize::new(0);
static DROP_1: AtomicUsize = AtomicUsize::new(0);
let gc0 = Gc::new(MultiRef {
refs: Mutex::new(Vec::new()),
count: DropCount(&DROP_0),
});
let gc1 = Gc::new(MultiRef {
refs: Mutex::new(vec![Gc::clone(&gc0)]),
count: DropCount(&DROP_1),
});
gc0.refs.lock().unwrap().extend([gc0.clone(), gc1.clone()]);
gc1.refs.lock().unwrap().push(gc1.clone());
collect();
assert_eq!(DROP_0.load(Ordering::Acquire), 0);
assert_eq!(DROP_1.load(Ordering::Acquire), 0);
drop(gc0);
collect();
assert_eq!(DROP_0.load(Ordering::Acquire), 0);
assert_eq!(DROP_1.load(Ordering::Acquire), 0);
drop(gc1);
collect();
assert_eq!(DROP_0.load(Ordering::Acquire), 1);
assert_eq!(DROP_1.load(Ordering::Acquire), 1);
}
#[test]
fn parallel_loop() {
static COUNT_1: AtomicUsize = AtomicUsize::new(0);
static COUNT_2: AtomicUsize = AtomicUsize::new(0);
static COUNT_3: AtomicUsize = AtomicUsize::new(0);
static COUNT_4: AtomicUsize = AtomicUsize::new(0);
let gc1 = Gc::new(MultiRef {
count: DropCount(&COUNT_1),
refs: Mutex::new(Vec::new()),
});
let gc2 = Gc::new(MultiRef {
count: DropCount(&COUNT_2),
refs: Mutex::new(vec![Gc::clone(&gc1)]),
});
let gc3 = Gc::new(MultiRef {
count: DropCount(&COUNT_3),
refs: Mutex::new(vec![Gc::clone(&gc1)]),
});
let gc4 = Gc::new(MultiRef {
count: DropCount(&COUNT_4),
refs: Mutex::new(vec![Gc::clone(&gc2), Gc::clone(&gc3)]),
});
gc1.refs.lock().unwrap().push(Gc::clone(&gc4));
assert_eq!(COUNT_1.load(Ordering::Acquire), 0);
assert_eq!(COUNT_2.load(Ordering::Acquire), 0);
assert_eq!(COUNT_3.load(Ordering::Acquire), 0);
assert_eq!(COUNT_4.load(Ordering::Acquire), 0);
drop(gc1);
collect();
assert_eq!(COUNT_1.load(Ordering::Acquire), 0);
assert_eq!(COUNT_2.load(Ordering::Acquire), 0);
assert_eq!(COUNT_3.load(Ordering::Acquire), 0);
assert_eq!(COUNT_4.load(Ordering::Acquire), 0);
drop(gc2);
collect();
assert_eq!(COUNT_1.load(Ordering::Acquire), 0);
assert_eq!(COUNT_2.load(Ordering::Acquire), 0);
assert_eq!(COUNT_3.load(Ordering::Acquire), 0);
assert_eq!(COUNT_4.load(Ordering::Acquire), 0);
drop(gc3);
collect();
assert_eq!(COUNT_1.load(Ordering::Acquire), 0);
assert_eq!(COUNT_2.load(Ordering::Acquire), 0);
assert_eq!(COUNT_3.load(Ordering::Acquire), 0);
assert_eq!(COUNT_4.load(Ordering::Acquire), 0);
drop(gc4);
collect();
assert_eq!(COUNT_1.load(Ordering::Acquire), 1);
assert_eq!(COUNT_2.load(Ordering::Acquire), 1);
assert_eq!(COUNT_3.load(Ordering::Acquire), 1);
assert_eq!(COUNT_4.load(Ordering::Acquire), 1);
}
#[test]
/// Test that we can drop a Gc which points to some allocation with a locked Mutex inside it
// note: I tried using `ntest::timeout` but for some reason that caused this test to trivially pass.
fn deadlock() {
let gc1 = Gc::new(Mutex::new(()));
let gc2 = gc1.clone();
let guard = gc1.lock();
drop(gc2);
collect();
drop(guard);
}
#[test]
fn open_drop() {
static COUNT_1: AtomicUsize = AtomicUsize::new(0);
let gc1 = Gc::new(MultiRef {
refs: Mutex::new(Vec::new()),
count: DropCount(&COUNT_1),
});
gc1.refs.lock().unwrap().push(gc1.clone());
let guard = gc1.refs.lock();
collect();
assert_eq!(COUNT_1.load(Ordering::Acquire), 0);
drop(guard);
drop(gc1);
collect();
assert_eq!(COUNT_1.load(Ordering::Acquire), 1);
}
#[test]
#[cfg_attr(miri, ignore = "miri is too slow")]
fn eventually_collect() {
static COUNT_1: AtomicUsize = AtomicUsize::new(0);
static COUNT_2: AtomicUsize = AtomicUsize::new(0);
let gc1 = Gc::new(MultiRef {
refs: Mutex::new(Vec::new()),
count: DropCount(&COUNT_1),
});
let gc2 = Gc::new(MultiRef {
refs: Mutex::new(vec![gc1.clone()]),
count: DropCount(&COUNT_2),
});
gc1.refs.lock().unwrap().push(gc2.clone());
assert_eq!(COUNT_1.load(Ordering::Acquire), 0);
assert_eq!(COUNT_2.load(Ordering::Acquire), 0);
drop(gc1);
drop(gc2);
for _ in 0..200_000 {
let gc = Gc::new(());
drop(gc);
}
// after enough time, gc1 and gc2 should have been collected
assert_eq!(COUNT_1.load(Ordering::Acquire), 1);
assert_eq!(COUNT_2.load(Ordering::Acquire), 1);
}
#[test]
#[cfg(feature = "coerce-unsized")]
fn coerce_array() {
let gc1: Gc<[u8; 3]> = Gc::new([0, 0, 0]);
let gc2: Gc<[u8]> = gc1;
assert_eq!(gc2.len(), 3);
assert_eq!(
std::mem::size_of::<Gc<[u8]>>(),
3 * std::mem::size_of::<usize>()
);
}
#[test]
fn coerce_array_using_macro() {
let gc1: Gc<[u8; 3]> = Gc::new([0, 0, 0]);
let gc2: Gc<[u8]> = coerce_gc!(gc1);
assert_eq!(gc2.len(), 3);
assert_eq!(
std::mem::size_of::<Gc<[u8]>>(),
3 * std::mem::size_of::<usize>()
);
}
#[test]
fn malicious() {
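// Simulates a malicious `TraceWith` implementation that mutates the object graph in the
// middle of a collection; the collector must not drop `A` while it is still reachable.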
static EVIL: AtomicUsize = AtomicUsize::new(0);
static A_DROP_DETECT: AtomicUsize = AtomicUsize::new(0);
struct A {
x: Gc<X>,
y: Gc<Y>,
}
struct X {
a: Mutex<Option<Gc<A>>>,
y: NonNull<Y>,
}
struct Y {
a: Mutex<Option<Gc<A>>>,
}
unsafe impl Send for X {}
unsafe impl<V: Visitor> TraceWith<V> for A {
fn accept(&self, visitor: &mut V) -> Result<(), ()> {
self.x.accept(visitor)?;
self.y.accept(visitor)
}
}
unsafe impl<V: Visitor> TraceWith<V> for X {
fn accept(&self, visitor: &mut V) -> Result<(), ()> {
self.a.accept(visitor)?;
if EVIL.fetch_add(1, Ordering::Relaxed) == 1 {
println!("committing evil...");
// simulates a malicious thread
let y = unsafe { self.y.as_ref() };
*y.a.lock().unwrap() = (*self.a.lock().unwrap()).take();
}
Ok(())
}
}
unsafe impl<V: Visitor> TraceWith<V> for Y {
fn accept(&self, visitor: &mut V) -> Result<(), ()> {
self.a.accept(visitor)
}
}
unsafe impl Sync for X {}
impl Drop for A {
fn drop(&mut self) {
A_DROP_DETECT.fetch_add(1, Ordering::Relaxed);
}
}
let y = Gc::new(Y {
a: Mutex::new(None),
});
let x = Gc::new(X {
a: Mutex::new(None),
y: NonNull::from(y.as_ref()),
});
let a = Gc::new(A { x, y });
*a.x.a.lock().unwrap() = Some(a.clone());
collect();
drop(a.clone());
EVIL.store(1, Ordering::Relaxed);
collect();
assert_eq!(A_DROP_DETECT.load(Ordering::Relaxed), 0);
drop(a);
collect();
assert_eq!(A_DROP_DETECT.load(Ordering::Relaxed), 1);
}
#[test]
#[cfg_attr(miri, ignore = "miri is too slow")]
#[expect(clippy::too_many_lines)]
fn fuzz() {
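// Randomized stress test: build and mutate a random reference graph, then verify that every
// allocation is dropped exactly once after the graph is torn down and collected.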
const N: usize = 20_000;
static DROP_DETECTORS: [AtomicUsize; N] = {
let mut detectors: [MaybeUninit<AtomicUsize>; N] =
unsafe { transmute(MaybeUninit::<[AtomicUsize; N]>::uninit()) };
let mut i = 0;
while i < N {
detectors[i] = MaybeUninit::new(AtomicUsize::new(0));
i += 1;
}
unsafe { transmute(detectors) }
};
#[derive(Debug)]
struct Alloc {
refs: Mutex<Vec<Gc<Alloc>>>,
id: usize,
}
impl Drop for Alloc {
fn drop(&mut self) {
DROP_DETECTORS[self.id].fetch_add(1, Ordering::Relaxed);
}
}
unsafe impl<V: Visitor> TraceWith<V> for Alloc {
fn accept(&self, visitor: &mut V) -> Result<(), ()> {
self.refs.accept(visitor)
}
}
fn dfs(alloc: &Gc<Alloc>, graph: &mut HashMap<usize, Vec<usize>>) {
if let Entry::Vacant(v) = graph.entry(alloc.id) {
if alloc.id == 2822 || alloc.id == 2814 {
println!("{} - {alloc:?}", alloc.id);
}
v.insert(Vec::new());
alloc.refs.lock().unwrap().iter().for_each(|a| {
graph.get_mut(&alloc.id).unwrap().push(a.id);
dfs(a, graph);
});
}
}
fastrand::seed(12345);
let mut gcs = (0..50)
.map(|i| {
Gc::new(Alloc {
refs: Mutex::new(Vec::new()),
id: i,
})
})
.collect::<Vec<_>>();
let mut next_detector = 50;
for _ in 0..N {
if gcs.is_empty() {
gcs.push(Gc::new(Alloc {
refs: Mutex::new(Vec::new()),
id: next_detector,
}));
next_detector += 1;
}
match fastrand::u8(0..4) {
0 => {
println!("add gc {next_detector}");
gcs.push(Gc::new(Alloc {
refs: Mutex::new(Vec::new()),
id: next_detector,
}));
next_detector += 1;
}
1 => {
if gcs.len() > 1 {
let from = fastrand::usize(0..gcs.len());
let to = fastrand::usize(0..gcs.len());
println!("add ref {} -> {}", gcs[from].id, gcs[to].id);
let new_gc = gcs[to].clone();
let mut guard = gcs[from].refs.lock().unwrap();
guard.push(new_gc);
}
}
2 => {
let idx = fastrand::usize(0..gcs.len());
println!("remove gc {}", gcs[idx].id);
gcs.swap_remove(idx);
}
3 => {
let from = fastrand::usize(0..gcs.len());
let mut guard = gcs[from].refs.lock().unwrap();
if !guard.is_empty() {
let to = fastrand::usize(0..guard.len());
println!("drop ref {} -> {}", gcs[from].id, guard[to].id);
guard.swap_remove(to);
}
}
_ => unreachable!(),
}
}
let mut graph = HashMap::new();
graph.insert(9999, Vec::new());
for alloc in &gcs {
graph.get_mut(&9999).unwrap().push(alloc.id);
dfs(alloc, &mut graph);
}
println!("{graph:#?}");
drop(gcs);
collect();
let mut n_missing = 0;
for (id, count) in DROP_DETECTORS[..next_detector].iter().enumerate() {
let num = count.load(Ordering::Relaxed);
if num != 1 {
println!("expected 1 for id {id} but got {num}");
n_missing += 1;
}
}
assert_eq!(n_missing, 0);
}
#[test]
fn root_canal() {
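// Stress test in which a malicious `TraceWith` implementation smuggles references out of the
// graph mid-collection; `B` must not be dropped while the smuggled pointers still exist.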
struct A {
b: Gc<B>,
}
struct B {
a0: Mutex<Option<Gc<A>>>,
a1: Mutex<Option<Gc<A>>>,
a2: Mutex<Option<Gc<A>>>,
a3: Mutex<Option<Gc<A>>>,
}
unsafe impl<V: Visitor> TraceWith<V> for A {
fn accept(&self, visitor: &mut V) -> Result<(), ()> {
self.b.accept(visitor)
}
}
unsafe impl<V: Visitor> TraceWith<V> for B {
fn accept(&self, visitor: &mut V) -> Result<(), ()> {
let n_prior_visits = B_VISIT_COUNT.fetch_add(1, Ordering::Relaxed);
self.a0.accept(visitor)?;
self.a1.accept(visitor)?;
// simulate a malicious thread swapping things around
if n_prior_visits == 1 {
println!("committing evil...");
swap(
&mut *SMUGGLED_POINTERS[0].lock().unwrap(),
&mut *SMUGGLED_POINTERS[1]
.lock()
.unwrap()
.as_ref()
.unwrap()
.b
.a0
.lock()
.unwrap(),
);
swap(&mut *self.a0.lock().unwrap(), &mut *self.a2.lock().unwrap());
swap(
&mut *SMUGGLED_POINTERS[0].lock().unwrap(),
&mut *SMUGGLED_POINTERS[1]
.lock()
.unwrap()
.as_ref()
.unwrap()
.b
.a1
.lock()
.unwrap(),
);
swap(&mut *self.a1.lock().unwrap(), &mut *self.a3.lock().unwrap());
}
self.a2.accept(visitor)?;
self.a3.accept(visitor)?;
// smuggle out some pointers
if n_prior_visits == 0 {
println!("smuggling...");
*SMUGGLED_POINTERS[0].lock().unwrap() = take(&mut *self.a2.lock().unwrap());
*SMUGGLED_POINTERS[1].lock().unwrap() = take(&mut *self.a3.lock().unwrap());
}
Ok(())
}
}
impl Drop for B {
fn drop(&mut self) {
B_DROP_DETECT.fetch_add(1, Ordering::Relaxed);
}
}
static SMUGGLED_POINTERS: [Mutex<Option<Gc<A>>>; 2] = [Mutex::new(None), Mutex::new(None)];
static B_VISIT_COUNT: AtomicUsize = AtomicUsize::new(0);
static B_DROP_DETECT: AtomicUsize = AtomicUsize::new(0);
let a = Gc::new(A {
b: Gc::new(B {
a0: Mutex::new(None),
a1: Mutex::new(None),
a2: Mutex::new(None),
a3: Mutex::new(None),
}),
});
*a.b.a0.lock().unwrap() = Some(a.clone());
*a.b.a1.lock().unwrap() = Some(a.clone());
*a.b.a2.lock().unwrap() = Some(a.clone());
*a.b.a3.lock().unwrap() = Some(a.clone());
drop(a.clone());
collect();
println!("{}", CURRENT_TAG.load(Ordering::Relaxed));
assert!(dbg!(SMUGGLED_POINTERS[0].lock().unwrap().as_ref()).is_some());
assert!(SMUGGLED_POINTERS[1].lock().unwrap().as_ref().is_some());
println!("{}", B_VISIT_COUNT.load(Ordering::Relaxed));
assert_eq!(B_DROP_DETECT.load(Ordering::Relaxed), 0);
drop(a);
assert_eq!(B_DROP_DETECT.load(Ordering::Relaxed), 0);
collect();
println!("{}", CURRENT_TAG.load(Ordering::Relaxed));
assert_eq!(B_DROP_DETECT.load(Ordering::Relaxed), 0);
*SMUGGLED_POINTERS[0].lock().unwrap() = None;
*SMUGGLED_POINTERS[1].lock().unwrap() = None;
collect();
assert_eq!(B_DROP_DETECT.load(Ordering::Relaxed), 1);
}
#[test]
#[should_panic = "Attempting to dereference Gc to already-deallocated object.This is caused by accessing a Gc during a Drop implementation, likely implying a bug in your code."]
fn escape_dead_pointer() {
static ESCAPED: Mutex<Option<Gc<Escape>>> = Mutex::new(None);
struct Escape {
x: u8,
ptr: Mutex<Option<Gc<Escape>>>,
}
impl Drop for Escape {
fn drop(&mut self) {
let mut escaped_guard = ESCAPED.lock().unwrap();
if escaped_guard.is_none() {
*escaped_guard = self.ptr.lock().unwrap().take();
}
}
}
unsafe impl<V: Visitor> TraceWith<V> for Escape {
fn accept(&self, visitor: &mut V) -> Result<(), ()> {
self.ptr.accept(visitor)
}
}
let esc = Gc::new(Escape {
x: 0,
ptr: Mutex::new(None),
});
*(*esc).ptr.lock().unwrap() = Some(esc.clone());
drop(esc);
collect();
println!("{}", ESCAPED.lock().unwrap().as_ref().unwrap().x);
}
#[test]
fn from_box() {
let gc: Gc<String> = Gc::from(Box::new(String::from("hello")));
// The `From<Box<T>>` implementation executes a different code path to
// construct the `Gc`.
//
// Here we ensure that the metadata is initialized to a valid state.
unsafe {
let gc_box = gc.ptr.get().unwrap().as_ref();
assert_eq!(gc_box.strong.load(Ordering::SeqCst), 1);
assert_eq!(gc_box.weak.load(Ordering::SeqCst), 0);
}
assert_eq!(&*gc, "hello");
}
#[test]
fn from_slice() {
let gc: Gc<[String]> = Gc::from(&[String::from("hello"), String::from("world")][..]);
// The `From<&[T]>` implementation executes a different code path to
// construct the `Gc`.
//
// Here we ensure that the metadata is initialized to a valid state.
unsafe {
let gc_box = gc.ptr.get().unwrap().as_ref();
assert_eq!(gc_box.strong.load(Ordering::SeqCst), 1);
assert_eq!(gc_box.weak.load(Ordering::SeqCst), 0);
}
assert_eq!(&*gc, ["hello", "world"]);
}
#[test]
#[should_panic = "told you"]
fn from_slice_panic() {
struct MayPanicOnClone {
value: String,
panic: bool,
}
impl Clone for MayPanicOnClone {
fn clone(&self) -> Self {
assert!(!self.panic, "told you");
Self {
value: self.value.clone(),
panic: self.panic,
}
}
}
unsafe impl<V: Visitor> TraceWith<V> for MayPanicOnClone {
fn accept(&self, _: &mut V) -> Result<(), ()> {
Ok(())
}
}
let slice: &[MayPanicOnClone] = &[
MayPanicOnClone {
value: String::from("a"),
panic: false,
},
MayPanicOnClone {
value: String::from("b"),
panic: false,
},
MayPanicOnClone {
value: String::from("c"),
panic: true,
},
];
let _: Gc<[MayPanicOnClone]> = Gc::from(slice);
}
#[test]
fn from_vec() {
let gc: Gc<[String]> = Gc::from(vec![String::from("hello"), String::from("world")]);
// The `From<Vec<T>>` implementation executes a different code path to
// construct the `Gc`.
//
// Here we ensure that the metadata is initialized to a valid state.
unsafe {
let gc_box = gc.ptr.get().unwrap().as_ref();
assert_eq!(gc_box.strong.load(Ordering::SeqCst), 1);
assert_eq!(gc_box.weak.load(Ordering::SeqCst), 0);
}
assert_eq!(&*gc, ["hello", "world"]);
}
#[test]
fn make_mut() {
let mut a = Gc::new(42);
let mut b = a.clone();
let mut c = b.clone();
assert_eq!(*Gc::make_mut(&mut a), 42);
assert_eq!(*Gc::make_mut(&mut b), 42);
assert_eq!(*Gc::make_mut(&mut c), 42);
*Gc::make_mut(&mut a) += 1;
*Gc::make_mut(&mut b) += 2;
*Gc::make_mut(&mut c) += 3;
assert_eq!(*a, 43);
assert_eq!(*b, 44);
assert_eq!(*c, 45);
// they should all be unique
assert_eq!(Gc::ref_count(&a).get(), 1);
assert_eq!(Gc::ref_count(&b).get(), 1);
assert_eq!(Gc::ref_count(&c).get(), 1);
}
#[test]
fn make_mut_2() {
let mut a = Gc::new(42);
let b = a.clone();
let c = b.clone();
assert_eq!(*a, 42);
assert_eq!(*b, 42);
assert_eq!(*c, 42);
*Gc::make_mut(&mut a) += 1;
assert_eq!(*a, 43);
assert_eq!(*b, 42);
assert_eq!(*c, 42);
// a should be unique
// b and c should share their object
assert_eq!(Gc::ref_count(&a).get(), 1);
assert_eq!(Gc::ref_count(&b).get(), 2);
assert_eq!(Gc::ref_count(&c).get(), 2);
}
#[test]
fn make_mut_of_object_in_dumpster() {
#[derive(Clone)]
struct Foo {
// just some gc pointer so foo lands in the dumpster
something: Gc<i32>,
}
unsafe impl<V: Visitor> TraceWith<V> for Foo {
fn accept(&self, visitor: &mut V) -> Result<(), ()> {
self.something.accept(visitor)
}
}
let mut foo = Gc::new(Foo {
something: Gc::new(5),
});
drop(foo.clone());
// now foo is in the dumpster
// and its ref count is one
assert_eq!(Gc::ref_count(&foo).get(), 1);
// we get a mut reference
let foo_mut = Gc::make_mut(&mut foo);
// now we collect garbage while we're also holding onto a mutable reference to foo
// if foo is still in the dumpster then the collection will dereference it and cause UB
collect();
// we need to do something with `foo_mut` here so the mutable borrow is actually held
// during collection
assert_eq!(*foo_mut.something, 5);
}
#[test]
#[should_panic = "panic on visit"]
#[cfg_attr(miri, ignore = "intentionally leaks memory")]
fn panic_visit() {
#[expect(unused)]
struct PanicVisit(Gc<Self>);
/// We could technically make it part of the contract for `Trace` to forbid panicking impls,
/// but it is good form to tolerate them even though they are malformed.
unsafe impl<V: Visitor> TraceWith<V> for PanicVisit {
fn accept(&self, _: &mut V) -> Result<(), ()> {
panic!("panic on visit");
}
}
let gc = Gc::new_cyclic(PanicVisit);
let _ = gc.clone();
drop(gc);
collect();
}
#[test]
/// Test that creating a `Gc` during a `Drop` implementation will still not leak the `Gc`.
fn sync_leak_by_creation_in_drop() {
static BAR_DROP_COUNT: AtomicUsize = AtomicUsize::new(0);
struct Foo(OnceLock<Gc<Self>>);
struct Bar(OnceLock<Gc<Self>>);
unsafe impl<V: Visitor> TraceWith<V> for Foo {
fn accept(&self, visitor: &mut V) -> Result<(), ()> {
self.0.accept(visitor)
}
}
unsafe impl<V: Visitor> TraceWith<V> for Bar {
fn accept(&self, visitor: &mut V) -> Result<(), ()> {
self.0.accept(visitor)
}
}
impl Drop for Foo {
fn drop(&mut self) {
let gcbar = Gc::new(Bar(OnceLock::new()));
let _ = gcbar.0.set(gcbar.clone());
drop(gcbar);
            // deliver the dumpster so the new allocation is not lost in another thread
            crate::sync::collect::deliver_dumpster();
}
}
impl Drop for Bar {
fn drop(&mut self) {
BAR_DROP_COUNT.fetch_add(1, Ordering::Relaxed);
}
}
let foo = Gc::new(Foo(OnceLock::new()));
let _ = foo.0.set(foo.clone());
drop(foo);
collect(); // causes Bar to be created and then leaked
collect(); // cleans up Bar (eventually)
assert!(super::collect::DUMPSTER.with(|d| d.contents.borrow().is_empty()));
assert_eq!(BAR_DROP_COUNT.load(Ordering::Relaxed), 1);
}
#[test]
fn custom_trait_object() {
trait MyTrait: Trace + Send + Sync {}
impl<T: Trace + Send + Sync> MyTrait for T {}
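    // `coerce_gc!` unsizes the concrete `Gc<i32>` into a `Gc<dyn MyTrait>` trait
    // object, exercising coercion to a user-defined trait.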
let gc = Gc::new(5i32);
let gc: Gc<dyn MyTrait> = coerce_gc!(gc);
_ = gc;
}
#[test]
fn new_cyclic_simple() {
struct Cycle(Gc<Self>);
unsafe impl<V: Visitor> TraceWith<V> for Cycle {
fn accept(&self, visitor: &mut V) -> Result<(), ()> {
self.0.accept(visitor)
}
}
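    // `new_cyclic` passes a handle to the allocation under construction into the
    // closure; storing it in `Cycle` yields a self-loop, so the live count is 2:
    // the returned `gc` plus the copy inside the allocation itself.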
let gc = Gc::new_cyclic(Cycle);
assert_eq!(Gc::ref_count(&gc).get(), 2);
drop(gc);
}
#[test]
#[should_panic = "told you"]
fn panic_new_cyclic() {
let _ = Gc::<()>::new_cyclic(|_| panic!("told you"));
}
#[test]
fn gc_from_iter() {
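    // `FromIterator` for `Gc<[T]>` collects the iterator's items into a single
    // garbage-collected slice allocation.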
let _gc = (0..100).collect::<Gc<[_]>>();
}
#[test]
fn self_referential_from_iter() {
struct Ab {
a: Gc<Self>,
b: Gc<Self>,
}
unsafe impl<V: Visitor> TraceWith<V> for Ab {
fn accept(&self, visitor: &mut V) -> Result<(), ()> {
self.a.accept(visitor)?;
self.b.accept(visitor)?;
Ok(())
}
}
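    // Chain together mutually-referencing nodes built with `new_cyclic`, then
    // gather all of them into one `Gc<[Ab]>` slice that keeps the cycles alive.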
let mut gcs = Vec::<Gc<Ab>>::new();
gcs.push(Gc::new_cyclic(|a: Gc<Ab>| Ab { a: a.clone(), b: a }));
for _ in 0..10 {
let b = gcs.last().unwrap().clone();
gcs.push(Gc::new_cyclic(|a: Gc<Ab>| Ab { a, b }));
}
let _big_gc = gcs.into_iter().collect::<Gc<[_]>>();
}
================================================
FILE: dumpster/src/unsync/collect.rs
================================================
/*
dumpster, a cycle-tracking garbage collector for Rust. Copyright (C) 2023 Clayton Ramsey.
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
*/
//! Implementations of the single-threaded garbage-collection logic.
use std::{
alloc::{dealloc, Layout},
cell::{Cell, RefCell},
collections::hash_map::Entry,
mem::take,
num::NonZeroUsize,
ptr::{drop_in_place, NonNull},
};
use foldhash::{HashMap, HashMapExt, HashSet, HashSetExt};
use crate::{
ptr::Erased,
unsync::{default_collect_condition, CollectInfo, Gc},
Trace, Visitor,
};
use super::{CollectCondition, GcBox};
thread_local! {
/// Whether the current thread is running a cleanup process.
static COLLECTING: Cell<bool> = const { Cell::new(false) };
/// The global collection of allocation information for this thread.
pub(super) static DUMPSTER: Dumpster = Dumpster {
to_collect: RefCell::new(HashMap::new()),
n_ref_drops: Cell::new(0),
n_refs_living: Cell::new(0),
collect_condition: Cell::new(default_collect_condition),
};
}
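// Each thread's `Gc` operations report to its own `DUMPSTER`: dropped references are
// counted, possibly-cyclic allocations are queued in `to_collect`, and the configured
// `collect_condition` decides when `collect_all` actually sweeps them.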
/// A dumpster is a collection of all the garbage that may or may not need to be cleaned up.
/// It also contains information relevant to when a cleanup should be triggered.
pub(super) struct Dumpster {
    /// A map from the IDs of allocations which may need to be collected to pointers to
    /// those allocations.
to_collect: RefCell<HashMap<AllocationId, Cleanup>>,
/// The number of times a reference has been dropped since the last collection was triggered.
pub n_ref_drops: Cell<usize>,
/// The number of references that currently exist in the entire heap and stack.
pub n_refs_living: Cell<usize>,
/// The function for determining whether a collection should be run.
pub collect_condition: Cell<CollectCondition>,
}
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
/// A unique identifier for an allocated garbage-collected block.
///
/// It contains a pointer to the reference count of the allocation.
struct AllocationId(pub NonNull<Cell<NonZeroUsize>>);
impl<T> From<NonNull<GcBox<T>>> for AllocationId
where
    T: Trace + ?Sized,
================================================
SYMBOL INDEX (414 symbols across 17 files)
================================================
FILE: dumpster/src/impls.rs
method accept (line 37) | fn accept(&self, _: &mut V) -> Result<(), ()> {
function accept (line 44) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 85) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 91) | fn accept(&self, _: &mut V) -> Result<(), ()> {
function accept (line 100) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 110) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 117) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 130) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 143) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 153) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 162) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 168) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 174) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 180) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 186) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 194) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 200) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 208) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 216) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 222) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 228) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 234) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 240) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 246) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 253) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 259) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 266) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 272) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 278) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 289) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 298) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 304) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 338) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function accept (line 349) | fn accept(&self, visitor: &mut Z) -> Result<(), ()> {
function accept (line 361) | fn accept(&self, visitor: &mut Z) -> Result<(), ()> {
function accept (line 370) | fn accept(&self, visitor: &mut Z) -> Result<(), ()> {
function accept (line 381) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
FILE: dumpster/src/lib.rs
type TraceWithV (line 213) | pub unsafe trait TraceWithV: TraceWith<ContainsGcs> + TraceSync + TraceU...
type Trace (line 296) | pub trait Trace: trace::TraceWithV {}
type TraceWith (line 321) | pub unsafe trait TraceWith<V: Visitor> {
method accept (line 339) | fn accept(&self, visitor: &mut V) -> Result<(), ()>;
type Visitor (line 353) | pub trait Visitor {
method visit_sync (line 358) | fn visit_sync<T>(&mut self, gc: &sync::Gc<T>)
method visit_unsync (line 366) | fn visit_unsync<T>(&mut self, gc: &unsync::Gc<T>)
method visit_sync (line 427) | fn visit_sync<T>(&mut self, _: &sync::Gc<T>)
method visit_unsync (line 434) | fn visit_unsync<T>(&mut self, _: &unsync::Gc<T>)
function contains_gcs (line 416) | fn contains_gcs<T: Trace + ?Sized>(x: &T) -> Result<bool, ()> {
type ContainsGcs (line 424) | struct ContainsGcs(bool);
function panic_deref_of_collected_object (line 445) | fn panic_deref_of_collected_object() -> ! {
FILE: dumpster/src/ptr.rs
type Erased (line 24) | pub(crate) struct Erased([*const u8; 2]);
method new (line 37) | pub fn new<T: ?Sized>(reference: NonNull<T>) -> Erased {
method specify (line 64) | pub unsafe fn specify<T: ?Sized>(self) -> NonNull<T> {
method fmt (line 81) | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
type Nullable (line 89) | pub(crate) struct Nullable<T: ?Sized>(*mut T);
function new (line 93) | pub fn new(ptr: NonNull<T>) -> Nullable<T> {
function as_null (line 98) | pub fn as_null(self) -> Nullable<T> {
function is_null (line 103) | pub fn is_null(self) -> bool {
function as_option (line 108) | pub fn as_option(self) -> Option<NonNull<T>> {
function as_ptr (line 113) | pub fn as_ptr(self) -> *mut T {
function from_ptr (line 118) | pub fn from_ptr(ptr: *mut T) -> Self {
function expect (line 124) | pub fn expect(self, msg: &str) -> NonNull<T> {
function unwrap (line 129) | pub fn unwrap(self) -> NonNull<T> {
function unwrap_unchecked (line 138) | pub unsafe fn unwrap_unchecked(self) -> NonNull<T> {
method clone (line 144) | fn clone(&self) -> Self {
function fmt (line 159) | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
function erased_alloc (line 172) | fn erased_alloc() {
function erased_alloc_slice (line 184) | fn erased_alloc_slice() {
function erased_alloc_dyn (line 199) | fn erased_alloc_dyn() {
FILE: dumpster/src/sync/cell.rs
type UCell (line 20) | pub struct UCell<T>(UnsafeCell<T>);
function new (line 24) | pub fn new(x: T) -> Self {
function get (line 33) | pub unsafe fn get(&self) -> T
function set (line 53) | pub unsafe fn set(&self, x: T) {
FILE: dumpster/src/sync/collect.rs
type GarbageTruck (line 44) | struct GarbageTruck {
method new (line 331) | const fn new() -> Self {
method new (line 346) | fn new() -> Self {
method collect_all (line 359) | fn collect_all(&self) {
type Dumpster (line 65) | pub(super) struct Dumpster {
method deliver_to (line 297) | fn deliver_to(&self, garbage_truck: &GarbageTruck) {
method deliver_to_contents (line 304) | fn deliver_to_contents(&self, contents: &mut HashMap<AllocationId, Tra...
method is_full (line 320) | fn is_full(&self) -> bool {
type AllocationId (line 74) | pub(super) struct AllocationId(NonNull<GcBox<()>>);
method from (line 685) | fn from(value: &GcBox<T>) -> Self {
method from (line 694) | fn from(value: NonNull<GcBox<T>>) -> Self {
type TrashCan (line 78) | pub(super) struct TrashCan {
type AllocationInfo (line 88) | struct AllocationInfo {
type Reachability (line 100) | enum Reachability {
function collect_all_await (line 157) | pub fn collect_all_await() {
function notify_dropped_gc (line 168) | pub fn notify_dropped_gc() {
function notify_created_gc (line 197) | pub fn notify_created_gc() {
function mark_dirty (line 208) | pub(super) unsafe fn mark_dirty<T>(allocation: NonNull<GcBox<T>>)
function mark_clean (line 236) | pub(super) fn mark_clean<T>(allocation: &GcBox<T>)
function deliver_dumpster (line 256) | pub(super) fn deliver_dumpster() {
function set_collect_condition (line 278) | pub fn set_collect_condition(f: CollectCondition) {
function n_gcs_dropped (line 285) | pub fn n_gcs_dropped() -> usize {
function n_gcs_existing (line 290) | pub fn n_gcs_existing() -> usize {
function dfs (line 447) | unsafe fn dfs<T: Trace + Send + Sync + ?Sized>(
type Dfs (line 490) | pub(super) struct Dfs<'a> {
method visit_sync (line 500) | fn visit_sync<T>(&mut self, gc: &Gc<T>)
method visit_unsync (line 583) | fn visit_unsync<T>(&mut self, _: &crate::unsync::Gc<T>)
function mark (line 593) | fn mark(root: AllocationId, graph: &mut HashMap<AllocationId, Allocation...
type PrepareForDestruction (line 605) | pub(super) struct PrepareForDestruction<'a> {
method visit_sync (line 612) | fn visit_sync<T>(&mut self, gc: &crate::sync::Gc<T>)
method visit_unsync (line 635) | fn visit_unsync<T>(&mut self, _: &crate::unsync::Gc<T>)
function destroy_erased (line 648) | unsafe fn destroy_erased<T: Trace + Send + Sync + ?Sized>(
function drop_weak_zero (line 668) | unsafe fn drop_weak_zero<T: Trace + Send + Sync + ?Sized>(ptr: Erased) {
method drop (line 701) | fn drop(&mut self) {
FILE: dumpster/src/sync/loom_ext.rs
type Mutex (line 29) | pub struct Mutex<T: ?Sized>(MutexImpl<T>);
function accept (line 32) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function new (line 46) | pub fn new(value: T) -> Self {
function lock (line 51) | pub fn lock(&self) -> MutexGuard<'_, T> {
function is_locked (line 57) | pub fn is_locked(&self) -> bool {
type RwLock (line 63) | pub struct RwLock<T>(RwLockImpl<T>);
function new (line 67) | pub fn new(value: T) -> Self {
function read (line 72) | pub fn read(&self) -> RwLockReadGuard<'_, T> {
function write (line 77) | pub fn write(&self) -> RwLockWriteGuard<'_, T> {
type Once (line 83) | struct Once {
method new (line 90) | fn new() -> Self {
method call_once (line 97) | fn call_once(&self, f: impl FnOnce()) {
method is_completed (line 109) | fn is_completed(&self) -> bool {
type OnceLock (line 115) | pub struct OnceLock<T> {
function accept (line 126) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function new (line 133) | pub fn new() -> Self {
function with_unchecked (line 141) | unsafe fn with_unchecked<R>(&self, f: impl FnOnce(&T) -> R) -> R {
function with (line 147) | pub fn with<R>(&self, f: impl FnOnce(&T) -> R) -> Option<R> {
function with_or_init (line 156) | pub fn with_or_init<R>(&self, init: impl FnOnce() -> T, f: impl FnOnce(&...
function set (line 167) | pub fn set(&self, value: T) {
function test_once (line 173) | fn test_once() {
function test_once_lock (line 205) | fn test_once_lock() {
FILE: dumpster/src/sync/loom_tests.rs
type DropCount (line 20) | struct DropCount<'a>(&'a AtomicUsize);
method drop (line 23) | fn drop(&mut self) {
function accept (line 29) | fn accept(&self, _: &mut V) -> Result<(), ()> {
type MultiRef (line 34) | struct MultiRef {
method accept (line 41) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function loom_single_alloc (line 47) | fn loom_single_alloc() {
function loom_self_referential (line 64) | fn loom_self_referential() {
function loom_two_cycle (line 96) | fn loom_two_cycle() {
function loom_sync_leak_by_creation_in_drop (line 130) | fn loom_sync_leak_by_creation_in_drop() {
FILE: dumpster/src/sync/mod.rs
constant MAX_STRONG_COUNT (line 80) | const MAX_STRONG_COUNT: usize = (isize::MAX) as usize;
type TraceSync (line 84) | pub(crate) trait TraceSync:
type Gc (line 132) | pub struct Gc<T: Trace + Send + Sync + ?Sized + 'static> {
type GcBox (line 153) | pub struct GcBox<T>
function collect (line 191) | pub fn collect() {
type CollectInfo (line 214) | pub struct CollectInfo {
method n_gcs_dropped_since_last_collect (line 948) | pub fn n_gcs_dropped_since_last_collect(&self) -> usize {
method n_gcs_existing (line 967) | pub fn n_gcs_existing(&self) -> usize {
type CollectCondition (line 233) | pub type CollectCondition = fn(&CollectInfo) -> bool;
function default_collect_condition (line 262) | pub fn default_collect_condition(info: &CollectInfo) -> bool {
function new (line 281) | pub fn new(value: T) -> Gc<T>
function new_cyclic (line 334) | pub fn new_cyclic<F: FnOnce(Self) -> T>(data_fn: F) -> Self
function try_deref (line 465) | pub fn try_deref(gc: &Gc<T>) -> Option<&T> {
function try_clone (line 507) | pub fn try_clone(gc: &Gc<T>) -> Option<Gc<T>> {
function as_ptr (line 528) | pub fn as_ptr(gc: &Gc<T>) -> *const T {
function ptr_eq (line 551) | pub fn ptr_eq(this: &Gc<T>, other: &Gc<T>) -> bool {
function ref_count (line 577) | pub fn ref_count(gc: &Self) -> NonZeroUsize {
function is_dead (line 613) | pub fn is_dead(gc: &Self) -> bool {
function into_ptr (line 620) | fn into_ptr(this: Self) -> (*const GcBox<T>, usize) {
function from_ptr (line 631) | unsafe fn from_ptr(ptr: *const GcBox<T>, tag: usize) -> Self {
function kill (line 644) | unsafe fn kill(&self) {
function __private_into_ptr (line 652) | pub fn __private_into_ptr(this: Self) -> (*const GcBox<T>, usize) {
function __private_from_ptr (line 660) | pub unsafe fn __private_from_ptr(ptr: *const GcBox<T>, tag: usize) -> Se...
type Rehydrate (line 668) | pub(super) struct Rehydrate {
method visit_sync (line 676) | fn visit_sync<T>(&mut self, gc: &Gc<T>)
method visit_unsync (line 703) | fn visit_unsync<T>(&mut self, _: &crate::unsync::Gc<T>)
function make_mut (line 743) | pub fn make_mut(this: &mut Self) -> &mut T {
method clone (line 850) | fn clone(&self) -> Gc<T> {
method drop (line 892) | fn drop(&mut self) {
function allocate_for_layout (line 978) | unsafe fn allocate_for_layout(
function allocate_for_layout_of_box (line 995) | unsafe fn allocate_for_layout_of_box(
function allocate_for_slice (line 1020) | fn allocate_for_slice(len: usize) -> *mut GcBox<[T]> {
function accept (line 1030) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
type Target (line 1037) | type Target = T;
method deref (line 1082) | fn deref(&self) -> &Self::Target {
function eq (line 1124) | fn eq(&self, other: &Gc<T>) -> bool {
function as_ref (line 1132) | fn as_ref(&self) -> &T {
function borrow (line 1138) | fn borrow(&self) -> &T {
method default (line 1144) | fn default() -> Self {
function fmt (line 1150) | fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
method fmt (line 1165) | fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
method fmt (line 1183) | fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
function from (line 1202) | fn from(value: T) -> Self {
function from (line 1221) | fn from(v: [T; N]) -> Gc<[T]> {
function from (line 1238) | fn from(slice: &[T]) -> Gc<[T]> {
function from (line 1318) | fn from(value: &mut [T]) -> Self {
function from (line 1334) | fn from(v: &str) -> Self {
function from (line 1354) | fn from(v: &mut str) -> Self {
function from (line 1371) | fn from(value: Gc<str>) -> Self {
function from (line 1389) | fn from(value: String) -> Self {
function from (line 1406) | fn from(src: Box<T>) -> Self {
function from (line 1441) | fn from(vec: Vec<T>) -> Self {
function from (line 1479) | fn from(cow: Cow<'a, B>) -> Gc<B> {
function from_iter (line 1491) | fn from_iter<I: IntoIterator<Item = T>>(iter: I) -> Self {
FILE: dumpster/src/sync/tests.rs
type DropCount (line 25) | struct DropCount<'a>(&'a AtomicUsize);
method drop (line 28) | fn drop(&mut self) {
function accept (line 34) | fn accept(&self, _: &mut V) -> Result<(), ()> {
type MultiRef (line 39) | struct MultiRef {
method accept (line 46) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function single_alloc (line 52) | fn single_alloc() {
function ref_count (line 64) | fn ref_count() {
function self_referential (line 77) | fn self_referential() {
function two_cycle (line 104) | fn two_cycle() {
function self_ref_two_cycle (line 132) | fn self_ref_two_cycle() {
function parallel_loop (line 161) | fn parallel_loop() {
function deadlock (line 218) | fn deadlock() {
function open_drop (line 229) | fn open_drop() {
function eventually_collect (line 249) | fn eventually_collect() {
function coerce_array (line 281) | fn coerce_array() {
function coerce_array_using_macro (line 292) | fn coerce_array_using_macro() {
function malicious (line 303) | fn malicious() {
function fuzz (line 379) | fn fuzz() {
function root_canal (line 504) | fn root_canal() {
function escape_dead_pointer (line 621) | fn escape_dead_pointer() {
function from_box (line 656) | fn from_box() {
function from_slice (line 673) | fn from_slice() {
function from_slice_panic (line 691) | fn from_slice_panic() {
function from_vec (line 733) | fn from_vec() {
function make_mut (line 750) | fn make_mut() {
function make_mut_2 (line 774) | fn make_mut_2() {
function make_mut_of_object_in_dumpster (line 797) | fn make_mut_of_object_in_dumpster() {
function panic_visit (line 835) | fn panic_visit() {
function sync_leak_by_creation_in_drop (line 855) | fn sync_leak_by_creation_in_drop() {
function custom_trait_object (line 901) | fn custom_trait_object() {
function new_cyclic_simple (line 911) | fn new_cyclic_simple() {
function panic_new_cyclic (line 925) | fn panic_new_cyclic() {
function gc_from_iter (line 930) | fn gc_from_iter() {
function self_referential_from_iter (line 935) | fn self_referential_from_iter() {
FILE: dumpster/src/unsync/collect.rs
type Dumpster (line 44) | pub(super) struct Dumpster {
method collect_all (line 114) | pub fn collect_all(&self) {
method mark_dirty (line 183) | pub fn mark_dirty<T: Trace + ?Sized>(&self, box_ptr: NonNull<GcBox<T>>) {
method mark_cleaned (line 192) | pub fn mark_cleaned<T: Trace + ?Sized>(&self, box_ptr: NonNull<GcBox<T...
method notify_dropped_gc (line 201) | pub fn notify_dropped_gc(&self) {
method notify_created_gc (line 219) | pub fn notify_created_gc(&self) {
type AllocationId (line 60) | struct AllocationId(pub NonNull<Cell<NonZeroUsize>>);
method from (line 67) | fn from(value: NonNull<GcBox<T>>) -> Self {
type Cleanup (line 75) | struct Cleanup {
method new (line 102) | fn new<T: Trace + ?Sized>(box_ptr: NonNull<GcBox<T>>) -> Cleanup {
method drop (line 225) | fn drop(&mut self) {
type Dfs (line 232) | pub(super) struct Dfs {
type Reachability (line 241) | struct Reachability {
method visit_sync (line 252) | fn visit_sync<T>(&mut self, _: &crate::sync::Gc<T>)
method visit_unsync (line 260) | fn visit_unsync<T>(&mut self, gc: &Gc<T>)
type Mark (line 288) | pub(super) struct Mark {
method visit_sync (line 294) | fn visit_sync<T>(&mut self, _: &crate::sync::Gc<T>)
method visit_unsync (line 302) | fn visit_unsync<T>(&mut self, gc: &Gc<T>)
type DropAlloc (line 317) | pub(super) struct DropAlloc<'a> {
method visit_sync (line 325) | fn visit_sync<T>(&mut self, _: &crate::sync::Gc<T>)
method visit_unsync (line 332) | fn visit_unsync<T>(&mut self, gc: &Gc<T>)
function drop_assist (line 366) | unsafe fn drop_assist<T: Trace + ?Sized>(ptr: Erased, visitor: &mut Drop...
FILE: dumpster/src/unsync/mod.rs
type TraceUnsync (line 59) | pub(crate) trait TraceUnsync:
type Gc (line 101) | pub struct Gc<T: Trace + ?Sized + 'static> {
function collect (line 139) | pub fn collect() {
type CollectInfo (line 145) | pub struct CollectInfo {
method n_gcs_dropped_since_last_collect (line 1030) | pub fn n_gcs_dropped_since_last_collect(&self) -> usize {
method n_gcs_existing (line 1049) | pub fn n_gcs_existing(&self) -> usize {
type CollectCondition (line 164) | pub type CollectCondition = fn(&CollectInfo) -> bool;
function default_collect_condition (line 185) | pub fn default_collect_condition(info: &CollectInfo) -> bool {
function set_collect_condition (line 207) | pub fn set_collect_condition(f: CollectCondition) {
type GcBox (line 215) | pub struct GcBox<T: Trace + ?Sized> {
function new (line 232) | pub fn new(value: T) -> Gc<T>
function new_cyclic (line 282) | pub fn new_cyclic<F: FnOnce(Gc<T>) -> T>(data_fn: F) -> Self
function try_deref (line 406) | pub fn try_deref(gc: &Gc<T>) -> Option<&T> {
function try_clone (line 450) | pub fn try_clone(gc: &Gc<T>) -> Option<Gc<T>> {
function as_ptr (line 471) | pub fn as_ptr(gc: &Gc<T>) -> *const T {
function ptr_eq (line 492) | pub fn ptr_eq(this: &Gc<T>, other: &Gc<T>) -> bool {
function ref_count (line 518) | pub fn ref_count(gc: &Self) -> NonZeroUsize {
function is_dead (line 551) | pub fn is_dead(gc: &Self) -> bool {
function into_ptr (line 558) | fn into_ptr(this: Self) -> *const GcBox<T> {
function from_ptr (line 566) | unsafe fn from_ptr(ptr: *const GcBox<T>) -> Self {
function __private_into_ptr (line 576) | pub fn __private_into_ptr(this: Self) -> *const GcBox<T> {
function __private_from_ptr (line 584) | pub unsafe fn __private_from_ptr(ptr: *const GcBox<T>) -> Self {
function kill (line 589) | fn kill(&self) {
type Rehydrate (line 597) | pub(super) struct Rehydrate {
method visit_sync (line 605) | fn visit_sync<T>(&mut self, _: &crate::sync::Gc<T>)
method visit_unsync (line 611) | fn visit_unsync<T>(&mut self, gc: &Gc<T>)
function make_mut (line 667) | pub fn make_mut(this: &mut Self) -> &mut T {
function allocate_for_layout (line 699) | unsafe fn allocate_for_layout(
function allocate_for_layout_of_box (line 716) | unsafe fn allocate_for_layout_of_box(
function allocate_for_slice (line 740) | fn allocate_for_slice(len: usize) -> *mut GcBox<[T]> {
type Target (line 790) | type Target = T;
method deref (line 837) | fn deref(&self) -> &Self::Target {
method clone (line 884) | fn clone(&self) -> Self {
method drop (line 911) | fn drop(&mut self) {
function eq (line 1006) | fn eq(&self, other: &Gc<T>) -> bool {
function accept (line 1055) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
function as_ref (line 1062) | fn as_ref(&self) -> &T {
function borrow (line 1068) | fn borrow(&self) -> &T {
method default (line 1074) | fn default() -> Self {
function fmt (line 1080) | fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
method fmt (line 1101) | fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
function from (line 1120) | fn from(value: T) -> Self {
function from (line 1139) | fn from(v: [T; N]) -> Gc<[T]> {
function from (line 1156) | fn from(slice: &[T]) -> Gc<[T]> {
function from (line 1235) | fn from(value: &mut [T]) -> Self {
function from (line 1251) | fn from(v: &str) -> Self {
function from (line 1270) | fn from(v: &mut str) -> Self {
function from (line 1287) | fn from(value: Gc<str>) -> Self {
function from (line 1304) | fn from(value: String) -> Self {
function from (line 1321) | fn from(src: Box<T>) -> Self {
function from (line 1356) | fn from(vec: Vec<T>) -> Self {
function from (line 1394) | fn from(cow: Cow<'a, B>) -> Gc<B> {
function from_iter (line 1406) | fn from_iter<I: IntoIterator<Item = T>>(iter: I) -> Self {
FILE: dumpster/src/unsync/tests.rs
type DropCount (line 26) | struct DropCount(&'static AtomicUsize);
method accept (line 35) | fn accept(&self, _: &mut V) -> Result<(), ()> {
method drop (line 29) | fn drop(&mut self) {
function simple (line 42) | fn simple() {
type MultiRef (line 73) | struct MultiRef {
method accept (line 79) | fn accept(&self, visitor: &mut V) -> Result<(), ()> {
method drop (line 85) | fn drop(&mut self) {
function self_referential (line 91) | fn self_referential() {
function cyclic (line 118) | fn cyclic() {
function complete_graph (line 148) | fn complete_graph(detectors: &'static [AtomicUsize]) -> Vec<Gc<MultiRef>> {
function complete4 (line 166) | fn complete4() {
function parallel_loop (line 193) | fn parallel_loop() {
function double_borrow (line 246) | fn double_borrow() {
function coerce_array (line 267) | fn coerce_array() {
function coerce_array_using_macro (line 278) | fn coerce_array_using_macro() {
function escape_dead_pointer (line 290) | fn escape_dead_pointer() {
function from_box (line 327) | fn from_box() {
function from_slice (line 340) | fn from_slice() {
function from_slice_panic (line 354) | fn from_slice_panic() {
function from_vec (line 396) | fn from_vec() {
function make_mut (line 409) | fn make_mut() {
function make_mut_2 (line 433) | fn make_mut_2() {
function make_mut_of_object_in_dumpster (line 456) | fn make_mut_of_object_in_dumpster() {
function panic_visit (line 494) | fn panic_visit() {
function new_cyclic_nothing (line 513) | fn new_cyclic_nothing() {
function new_cyclic_one (line 523) | fn new_cyclic_one() {
function new_cyclic_panic (line 544) | fn new_cyclic_panic() {
function dead_inside_alive (line 549) | fn dead_inside_alive() {
function leak_by_creation_in_drop (line 583) | fn leak_by_creation_in_drop() {
function unsync_fuzz (line 625) | fn unsync_fuzz() {
function custom_trait_object (line 748) | fn custom_trait_object() {
function gc_from_iter (line 758) | fn gc_from_iter() {
function self_referential_from_iter (line 763) | fn self_referential_from_iter() {
FILE: dumpster_bench/scripts/make_plots.py
function violin (line 45) | def violin(times: dict, name: str):
FILE: dumpster_bench/src/lib.rs
type Multiref (line 21) | pub trait Multiref: Clone {
method new (line 23) | fn new(points_to: Vec<Self>) -> Self;
method apply (line 25) | fn apply(&self, f: impl FnOnce(&mut Vec<Self>));
method collect (line 27) | fn collect();
method new (line 137) | fn new(points_to: Vec<Self>) -> Self {
method apply (line 143) | fn apply(&self, f: impl FnOnce(&mut Vec<Self>)) {
method collect (line 147) | fn collect() {
method new (line 153) | fn new(points_to: Vec<Self>) -> Self {
method apply (line 159) | fn apply(&self, f: impl FnOnce(&mut Vec<Self>)) {
method collect (line 163) | fn collect() {
method new (line 169) | fn new(points_to: Vec<Self>) -> Self {
method apply (line 175) | fn apply(&self, f: impl FnOnce(&mut Vec<Self>)) {
method collect (line 179) | fn collect() {
method new (line 185) | fn new(points_to: Vec<Self>) -> Self {
method apply (line 191) | fn apply(&self, f: impl FnOnce(&mut Vec<Self>)) {
method collect (line 195) | fn collect() {
method new (line 202) | fn new(points_to: Vec<Self>) -> Self {
method apply (line 208) | fn apply(&self, f: impl FnOnce(&mut Vec<Self>)) {
method collect (line 212) | fn collect() {
method new (line 218) | fn new(points_to: Vec<Self>) -> Self {
method apply (line 224) | fn apply(&self, f: impl FnOnce(&mut Vec<Self>)) {
method collect (line 228) | fn collect() {
method new (line 234) | fn new(points_to: Vec<Self>) -> Self {
method apply (line 240) | fn apply(&self, f: impl FnOnce(&mut Vec<Self>)) {
method collect (line 244) | fn collect() {
method new (line 250) | fn new(points_to: Vec<Self>) -> Self {
method apply (line 254) | fn apply(&self, f: impl FnOnce(&mut Vec<Self>)) {
method collect (line 258) | fn collect() {
method new (line 264) | fn new(points_to: Vec<Self>) -> Self {
method apply (line 270) | fn apply(&self, f: impl FnOnce(&mut Vec<Self>)) {
method collect (line 274) | fn collect() {
method new (line 280) | fn new(points_to: Vec<Self>) -> Self {
method apply (line 286) | fn apply(&self, f: impl FnOnce(&mut Vec<Self>)) {
method collect (line 290) | fn collect() {}
method new (line 294) | fn new(points_to: Vec<Self>) -> Self {
method apply (line 300) | fn apply(&self, f: impl FnOnce(&mut Vec<Self>)) {
method collect (line 304) | fn collect() {}
type SyncMultiref (line 31) | pub trait SyncMultiref: Send + Sync + Multiref {}
type RcMultiref (line 37) | pub struct RcMultiref {
type ArcMultiref (line 43) | pub struct ArcMultiref {
type DumpsterSyncMultiref (line 48) | pub struct DumpsterSyncMultiref {
type DumpsterUnsyncMultiref (line 53) | pub struct DumpsterUnsyncMultiref {
type GcMultiref (line 57) | pub struct GcMultiref {
method trace (line 85) | unsafe fn trace(&self) {
method root (line 90) | unsafe fn root(&self) {
method unroot (line 95) | unsafe fn unroot(&self) {
method finalize_glue (line 100) | fn finalize_glue(&self) {
type BaconRajanMultiref (line 61) | pub struct BaconRajanMultiref {
method trace (line 76) | fn trace(&self, tracer: &mut bacon_rajan_cc::Tracer) {
type ShredderMultiref (line 66) | pub struct ShredderMultiref {
type ShredderSyncMultiref (line 71) | pub struct ShredderSyncMultiref {
type RustCcMultiRef (line 106) | pub struct RustCcMultiRef {
method trace (line 111) | fn trace(&self, ctx: &mut rust_cc::Context<'_>) {
type TracingRcUnsyncMultiRef (line 116) | pub struct TracingRcUnsyncMultiRef {
method visit_children (line 121) | fn visit_children(&self, visitor: &mut tracing_rc::rc::GcVisitor) {
type TracingRcSyncMultiRef (line 126) | pub struct TracingRcSyncMultiRef {
method visit_children (line 131) | fn visit_children(&self, visitor: &mut tracing_rc::sync::GcVisitor) {
FILE: dumpster_bench/src/main.rs
type BenchmarkData (line 28) | struct BenchmarkData {
method fmt (line 37) | fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
function unsync_never_collect (line 50) | fn unsync_never_collect(_: &dumpster::unsync::CollectInfo) -> bool {
function sync_never_collect (line 54) | fn sync_never_collect(_: &dumpster::sync::CollectInfo) -> bool {
function main (line 58) | fn main() {
function single_threaded (line 189) | fn single_threaded<M: Multiref>(name: &'static str, n_iters: usize) -> B...
function multi_threaded (line 249) | fn multi_threaded<M: SyncMultiref>(
FILE: dumpster_derive/src/lib.rs
function derive_trace (line 23) | pub fn derive_trace(input: proc_macro::TokenStream) -> proc_macro::Token...
function add_trait_bounds (line 88) | fn add_trait_bounds(dumpster: &Path, mut generics: Generics) -> Generics {
function delegate_methods (line 101) | fn delegate_methods(dumpster: &Path, name: &Ident, data: &Data) -> Token...
FILE: dumpster_test/src/lib.rs
type Empty (line 23) | struct Empty;
type UnitTuple (line 27) | struct UnitTuple();
type MultiRef (line 30) | struct MultiRef {
type Refs (line 37) | enum Refs {
type A (line 45) | enum A {
type B (line 51) | enum B {
type Generic (line 57) | struct Generic<T> {
method drop (line 62) | fn drop(&mut self) {
function unit (line 68) | fn unit() {
function self_referential (line 89) | fn self_referential() {
function double_loop (line 105) | fn double_loop() {
function parallel_loop (line 123) | fn parallel_loop() {
function unsync_as_ptr (line 164) | fn unsync_as_ptr() {