Repository: Amanieu/atomic-rs
Branch: master
Commit: 44c213a73cb4
Files: 12
Total size: 61.2 KB
Directory structure:
gitextract_h1dlc86c/
├── .github/
│ └── workflows/
│ └── release-plz.yml
├── .gitignore
├── .travis.yml
├── CHANGELOG.md
├── Cargo.toml
├── LICENSE-APACHE
├── LICENSE-MIT
├── README.md
└── src/
├── fallback.rs
├── lib.rs
├── ops.rs
└── serde_impl.rs
================================================
FILE CONTENTS
================================================
================================================
FILE: .github/workflows/release-plz.yml
================================================
name: Release-plz

on:
  push:
    branches:
      - master

jobs:
  release-plz-release:
    name: Release-plz release
    runs-on: ubuntu-latest
    if: ${{ github.repository_owner == 'Amanieu' }}
    permissions:
      contents: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Install Rust toolchain
        uses: dtolnay/rust-toolchain@stable
      - name: Run release-plz
        uses: release-plz/action@v0.5
        with:
          command: release
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}

  release-plz-pr:
    name: Release-plz PR
    runs-on: ubuntu-latest
    if: ${{ github.repository_owner == 'Amanieu' }}
    permissions:
      pull-requests: write
      contents: write
    concurrency:
      group: release-plz-${{ github.ref }}
      cancel-in-progress: false
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Install Rust toolchain
        uses: dtolnay/rust-toolchain@stable
      - name: Run release-plz
        uses: release-plz/action@v0.5
        with:
          command: release-pr
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
================================================
FILE: .gitignore
================================================
target
Cargo.lock
================================================
FILE: .travis.yml
================================================
language: rust
sudo: false
rust:
- nightly
- beta
- stable
- 1.45.0
script:
- cargo build
- cargo test
- cargo doc
- if [ $TRAVIS_RUST_VERSION = nightly ]; then rustup target add aarch64-unknown-none; fi
- if [ $TRAVIS_RUST_VERSION = nightly ]; then RUSTFLAGS="-Zcrate-attr=feature(integer_atomics)" cargo check --target=aarch64-unknown-none; fi
notifications:
email: false
================================================
FILE: CHANGELOG.md
================================================
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
## [0.6.1](https://github.com/Amanieu/atomic-rs/compare/v0.6.0...v0.6.1) - 2025-06-19
### Other
- Implement (de)serialization with `serde`.
================================================
FILE: Cargo.toml
================================================
[package]
name = "atomic"
version = "0.6.1"
edition = "2018"
authors = ["Amanieu d'Antras <amanieu@gmail.com>"]
description = "Generic Atomic<T> wrapper type"
license = "Apache-2.0/MIT"
repository = "https://github.com/Amanieu/atomic-rs"
readme = "README.md"
keywords = ["atomic", "no_std"]
[features]
default = ["fallback"]
std = []
fallback = []
nightly = []
serde = ["dep:serde"]
[dependencies]
bytemuck = "1.13.1"
serde = { version = "1.0.219", default-features = false, optional = true }
[dev-dependencies]
bytemuck = { version = "1.13.1", features = ["derive"] }
serde = { version = "1.0.219", default-features = false, features = ["derive"] }
serde_json = { version = "1.0.140" }
================================================
FILE: LICENSE-APACHE
================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
================================================
FILE: LICENSE-MIT
================================================
Copyright (c) 2016 The Rust Project Developers
Permission is hereby granted, free of charge, to any
person obtaining a copy of this software and associated
documentation files (the "Software"), to deal in the
Software without restriction, including without
limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software
is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice
shall be included in all copies or substantial portions
of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED
TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR
IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
================================================
FILE: README.md
================================================
Generic `Atomic<T>` for Rust
============================
[Build Status](https://travis-ci.org/Amanieu/atomic-rs) [Crates.io](https://crates.io/crates/atomic)
A Rust library which provides a generic `Atomic<T>` type for all `T: NoUninit` types, unlike the standard library which only provides a few fixed atomic types (`AtomicBool`, `AtomicIsize`, `AtomicUsize`, `AtomicPtr`). The `NoUninit` bound is from the [bytemuck] crate, and indicates that a type has no internal padding bytes. You will need to derive or implement this trait for all types used with `Atomic<T>`.
This library will use native atomic instructions if possible, and will otherwise fall back to a lock-based mechanism. You can use the `Atomic::<T>::is_lock_free()` function to check whether native atomic operations are supported for a given type. Note that a type must have a power-of-2 size and alignment in order to be used by native atomic instructions.
This crate uses `#![no_std]` and only depends on libcore.
[bytemuck]: https://docs.rs/bytemuck
[Documentation](https://docs.rs/atomic)
## Features
This crate has the following [Cargo
features](https://doc.rust-lang.org/cargo/reference/features.html):
* `fallback`: Fall back to locks when atomic instructions cannot be
used. (Enabled by default.)
* `serde`: Enables serialization and deserialization of `Atomic<T>` with
  [serde](https://docs.rs/serde/latest/serde/); see the sketch below.
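With the `serde` feature enabled, an `Atomic<T>` serializes as its current
value and deserializes into a fresh atomic. A minimal sketch (`serde_json` is
used here purely for illustration):

```rust
use atomic::{Atomic, Ordering};

fn main() -> Result<(), serde_json::Error> {
    let a = Atomic::new(3u32);

    // Serializes the value currently stored in the atomic...
    let json = serde_json::to_string(&a)?;
    assert_eq!(json, "3");

    // ...and deserializes into a new `Atomic<T>` holding that value.
    let b: Atomic<u32> = serde_json::from_str(&json)?;
    assert_eq!(b.load(Ordering::SeqCst), 3);
    Ok(())
}
```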
## Usage
Add this to your `Cargo.toml`:
```toml
[dependencies]
atomic = "0.6"
```
and this to your crate root:
```rust
extern crate atomic;
```
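A minimal usage sketch (the `Coord` type below is purely illustrative; deriving
`NoUninit` requires the `derive` feature of `bytemuck`):

```rust
use atomic::{Atomic, Ordering};
use bytemuck::NoUninit;

// A plain-data type with no padding bytes, so `NoUninit` can be derived.
#[derive(Clone, Copy, PartialEq, Debug, NoUninit)]
#[repr(C)]
struct Coord {
    x: u16,
    y: u16,
}

fn main() {
    // Reports whether this type gets native atomic instructions on the
    // current target (it needs a power-of-2 size and sufficient alignment);
    // otherwise the spinlock-based fallback is used.
    println!("lock-free: {}", Atomic::<Coord>::is_lock_free());

    let a = Atomic::new(Coord { x: 1, y: 2 });
    a.store(Coord { x: 3, y: 4 }, Ordering::SeqCst);
    assert_eq!(a.swap(Coord { x: 5, y: 6 }, Ordering::SeqCst), Coord { x: 3, y: 4 });
    assert_eq!(a.load(Ordering::SeqCst), Coord { x: 5, y: 6 });
}
```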
## License
Licensed under either of
* Apache License, Version 2.0, ([LICENSE-APACHE](LICENSE-APACHE) or http://www.apache.org/licenses/LICENSE-2.0)
* MIT license ([LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT)
at your option.
### Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any
additional terms or conditions.
================================================
FILE: src/fallback.rs
================================================
// Copyright 2016 Amanieu d'Antras
//
// Licensed under the Apache License, Version 2.0, <LICENSE-APACHE or
// http://apache.org/licenses/LICENSE-2.0> or the MIT license <LICENSE-MIT or
// http://opensource.org/licenses/MIT>, at your option. This file may not be
// copied, modified, or distributed except according to those terms.
use core::cmp;
use core::hint;
use core::num::Wrapping;
use core::ops;
use core::ptr;
use core::sync::atomic::{AtomicUsize, Ordering};
use bytemuck::NoUninit;
// We use an AtomicUsize instead of an AtomicBool because it performs better
// on architectures that don't have byte-sized atomics.
//
// We give each spinlock its own cache line to avoid false sharing.
#[repr(align(64))]
struct SpinLock(AtomicUsize);
impl SpinLock {
fn lock(&self) {
while self
.0
.compare_exchange_weak(0, 1, Ordering::Acquire, Ordering::Relaxed)
.is_err()
{
while self.0.load(Ordering::Relaxed) != 0 {
hint::spin_loop();
}
}
}
fn unlock(&self) {
self.0.store(0, Ordering::Release);
}
}
// A big array of spinlocks which we use to guard atomic accesses. A spinlock is
// chosen based on a hash of the address of the atomic object, which helps to
// reduce contention compared to a single global lock.
macro_rules! array {
(@accum (0, $($_es:expr),*) -> ($($body:tt)*))
=> {array!(@as_expr [$($body)*])};
(@accum (1, $($es:expr),*) -> ($($body:tt)*))
=> {array!(@accum (0, $($es),*) -> ($($body)* $($es,)*))};
(@accum (2, $($es:expr),*) -> ($($body:tt)*))
=> {array!(@accum (0, $($es),*) -> ($($body)* $($es,)* $($es,)*))};
(@accum (4, $($es:expr),*) -> ($($body:tt)*))
=> {array!(@accum (2, $($es,)* $($es),*) -> ($($body)*))};
(@accum (8, $($es:expr),*) -> ($($body:tt)*))
=> {array!(@accum (4, $($es,)* $($es),*) -> ($($body)*))};
(@accum (16, $($es:expr),*) -> ($($body:tt)*))
=> {array!(@accum (8, $($es,)* $($es),*) -> ($($body)*))};
(@accum (32, $($es:expr),*) -> ($($body:tt)*))
=> {array!(@accum (16, $($es,)* $($es),*) -> ($($body)*))};
(@accum (64, $($es:expr),*) -> ($($body:tt)*))
=> {array!(@accum (32, $($es,)* $($es),*) -> ($($body)*))};
(@as_expr $e:expr) => {$e};
[$e:expr; $n:tt] => { array!(@accum ($n, $e) -> ()) };
}
static SPINLOCKS: [SpinLock; 64] = array![SpinLock(AtomicUsize::new(0)); 64];
// Spinlock pointer hashing function from compiler-rt
#[inline]
fn lock_for_addr(addr: usize) -> &'static SpinLock {
// Disregard the lowest 4 bits. We want all values that may be part of the
// same memory operation to hash to the same value and therefore use the same
// lock.
let mut hash = addr >> 4;
// Use the next bits as the basis for the hash
let low = hash & (SPINLOCKS.len() - 1);
// Now use the high(er) set of bits to perturb the hash, so that we don't
// get collisions from atomic fields in a single object
hash >>= 16;
hash ^= low;
// Return a pointer to the lock to use
&SPINLOCKS[hash & (SPINLOCKS.len() - 1)]
}
#[inline]
fn lock(addr: usize) -> LockGuard {
let lock = lock_for_addr(addr);
lock.lock();
LockGuard(lock)
}
struct LockGuard(&'static SpinLock);
impl Drop for LockGuard {
#[inline]
fn drop(&mut self) {
self.0.unlock();
}
}
#[inline]
pub unsafe fn atomic_load<T>(dst: *mut T) -> T {
let _l = lock(dst as usize);
ptr::read(dst)
}
#[inline]
pub unsafe fn atomic_store<T>(dst: *mut T, val: T) {
let _l = lock(dst as usize);
ptr::write(dst, val);
}
#[inline]
pub unsafe fn atomic_swap<T>(dst: *mut T, val: T) -> T {
let _l = lock(dst as usize);
ptr::replace(dst, val)
}
#[inline]
pub unsafe fn atomic_compare_exchange<T: NoUninit>(
dst: *mut T,
current: T,
new: T,
) -> Result<T, T> {
let _l = lock(dst as usize);
let result = ptr::read(dst);
// compare_exchange compares with memcmp instead of Eq
let a = bytemuck::bytes_of(&result);
let b = bytemuck::bytes_of(&current);
if a == b {
ptr::write(dst, new);
Ok(result)
} else {
Err(result)
}
}
#[inline]
pub unsafe fn atomic_add<T: Copy>(dst: *mut T, val: T) -> T
where
Wrapping<T>: ops::Add<Output = Wrapping<T>>,
{
let _l = lock(dst as usize);
let result = ptr::read(dst);
ptr::write(dst, (Wrapping(result) + Wrapping(val)).0);
result
}
#[inline]
pub unsafe fn atomic_sub<T: Copy>(dst: *mut T, val: T) -> T
where
Wrapping<T>: ops::Sub<Output = Wrapping<T>>,
{
let _l = lock(dst as usize);
let result = ptr::read(dst);
ptr::write(dst, (Wrapping(result) - Wrapping(val)).0);
result
}
#[inline]
pub unsafe fn atomic_and<T: Copy + ops::BitAnd<Output = T>>(dst: *mut T, val: T) -> T {
let _l = lock(dst as usize);
let result = ptr::read(dst);
ptr::write(dst, result & val);
result
}
#[inline]
pub unsafe fn atomic_or<T: Copy + ops::BitOr<Output = T>>(dst: *mut T, val: T) -> T {
let _l = lock(dst as usize);
let result = ptr::read(dst);
ptr::write(dst, result | val);
result
}
#[inline]
pub unsafe fn atomic_xor<T: Copy + ops::BitXor<Output = T>>(dst: *mut T, val: T) -> T {
let _l = lock(dst as usize);
let result = ptr::read(dst);
ptr::write(dst, result ^ val);
result
}
#[inline]
pub unsafe fn atomic_min<T: Copy + cmp::Ord>(dst: *mut T, val: T) -> T {
let _l = lock(dst as usize);
let result = ptr::read(dst);
ptr::write(dst, cmp::min(result, val));
result
}
#[inline]
pub unsafe fn atomic_max<T: Copy + cmp::Ord>(dst: *mut T, val: T) -> T {
let _l = lock(dst as usize);
let result = ptr::read(dst);
ptr::write(dst, cmp::max(result, val));
result
}
================================================
FILE: src/lib.rs
================================================
// Copyright 2016 Amanieu d'Antras
//
// Licensed under the Apache License, Version 2.0, <LICENSE-APACHE or
// http://apache.org/licenses/LICENSE-2.0> or the MIT license <LICENSE-MIT or
// http://opensource.org/licenses/MIT>, at your option. This file may not be
// copied, modified, or distributed except according to those terms.
//! Generic `Atomic<T>` wrapper type
//!
//! Atomic types provide primitive shared-memory communication between
//! threads, and are the building blocks of other concurrent types.
//!
//! This library defines a generic atomic wrapper type `Atomic<T>` for all
//! `T: NoUninit` types.
//! Atomic types present operations that, when used correctly, synchronize
//! updates between threads.
//!
//! The `NoUninit` bound is from the [bytemuck] crate, and indicates that a
//! type has no internal padding bytes. You will need to derive or implement
//! this trait for all types used with `Atomic<T>`.
//!
//! Each method takes an `Ordering` which represents the strength of
//! the memory barrier for that operation. These orderings are the
//! same as [LLVM atomic orderings][1].
//!
//! [1]: http://llvm.org/docs/LangRef.html#memory-model-for-concurrent-operations
//!
//! Atomic variables are safe to share between threads (they implement `Sync`)
//! but they do not themselves provide the mechanism for sharing. The most
//! common way to share an atomic variable is to put it into an `Arc` (an
//! atomically-reference-counted shared pointer).
//!
//! Most atomic types may be stored in static variables, initialized using
//! the `const fn` constructors. Atomic statics are often used for lazy global
//! initialization.
//!
//! [bytemuck]: https://docs.rs/bytemuck
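//!
//! # Examples
//!
//! A minimal usage sketch (values chosen arbitrarily):
//!
//! ```
//! use atomic::{Atomic, Ordering};
//!
//! let a = Atomic::new(0u64);
//! a.store(7, Ordering::SeqCst);
//! assert_eq!(a.swap(10, Ordering::SeqCst), 7);
//! assert_eq!(a.load(Ordering::SeqCst), 10);
//! ```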
#![warn(missing_docs)]
#![warn(rust_2018_idioms)]
#![no_std]
#![cfg_attr(feature = "nightly", feature(integer_atomics))]
#[cfg(any(test, feature = "std"))]
#[macro_use]
extern crate std;
use core::mem::MaybeUninit;
// Re-export some useful definitions from libcore
pub use core::sync::atomic::{fence, Ordering};
use core::cell::UnsafeCell;
use core::fmt;
#[cfg(feature = "std")]
use std::panic::RefUnwindSafe;
use bytemuck::NoUninit;
#[cfg(feature = "fallback")]
mod fallback;
mod ops;
/// A generic atomic wrapper type which allows an object to be safely shared
/// between threads.
#[repr(transparent)]
pub struct Atomic<T> {
// The MaybeUninit is here to work around rust-lang/rust#87341.
v: UnsafeCell<MaybeUninit<T>>,
}
// Atomic<T> is only Sync if T is Send
unsafe impl<T: Copy + Send> Sync for Atomic<T> {}
// Given that atomicity is guaranteed, Atomic<T> is RefUnwindSafe if T is
//
// This is trivially correct for native lock-free atomic types. For those whose
// atomicity is emulated using a spinlock, it is still correct because the
// `Atomic` API does not allow doing any panic-inducing operation after writing
// to the target object.
#[cfg(feature = "std")]
impl<T: RefUnwindSafe> RefUnwindSafe for Atomic<T> {}
impl<T: Default> Default for Atomic<T> {
#[inline]
fn default() -> Self {
Self::new(Default::default())
}
}
impl<T: NoUninit + fmt::Debug> fmt::Debug for Atomic<T> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_tuple("Atomic")
.field(&self.load(Ordering::Relaxed))
.finish()
}
}
impl<T> Atomic<T> {
/// Creates a new `Atomic`.
#[inline]
pub const fn new(v: T) -> Atomic<T> {
Atomic {
v: UnsafeCell::new(MaybeUninit::new(v)),
}
}
/// Checks if `Atomic` objects of this type are lock-free.
///
/// If an `Atomic` is not lock-free then it may be implemented using locks
/// internally, which makes it unsuitable for some situations (such as
/// communicating with a signal handler).
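///
/// # Examples
///
/// A small sketch:
///
/// ```
/// use atomic::Atomic;
///
/// // `u32` has a power-of-2 size and alignment, so this reports `true` on
/// // targets with native 32-bit atomic instructions.
/// let lock_free = Atomic::<u32>::is_lock_free();
/// println!("u32 is lock-free: {}", lock_free);
/// ```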
#[inline]
pub const fn is_lock_free() -> bool {
ops::atomic_is_lock_free::<T>()
}
}
impl<T: NoUninit> Atomic<T> {
#[inline]
fn inner_ptr(&self) -> *mut T {
self.v.get() as *mut T
}
/// Returns a mutable reference to the underlying type.
///
/// This is safe because the mutable reference guarantees that no other threads are
/// concurrently accessing the atomic data.
#[inline]
pub fn get_mut(&mut self) -> &mut T {
unsafe { &mut *self.inner_ptr() }
}
/// Consumes the atomic and returns the contained value.
///
/// This is safe because passing `self` by value guarantees that no other threads are
/// concurrently accessing the atomic data.
#[inline]
pub fn into_inner(self) -> T {
unsafe { self.v.into_inner().assume_init() }
}
/// Loads a value from the `Atomic`.
///
/// `load` takes an `Ordering` argument which describes the memory ordering
/// of this operation.
///
/// # Panics
///
/// Panics if `order` is `Release` or `AcqRel`.
#[inline]
pub fn load(&self, order: Ordering) -> T {
unsafe { ops::atomic_load(self.inner_ptr(), order) }
}
/// Stores a value into the `Atomic`.
///
/// `store` takes an `Ordering` argument which describes the memory ordering
/// of this operation.
///
/// # Panics
///
/// Panics if `order` is `Acquire` or `AcqRel`.
#[inline]
pub fn store(&self, val: T, order: Ordering) {
unsafe {
ops::atomic_store(self.inner_ptr(), val, order);
}
}
/// Stores a value into the `Atomic`, returning the old value.
///
/// `swap` takes an `Ordering` argument which describes the memory ordering
/// of this operation.
#[inline]
pub fn swap(&self, val: T, order: Ordering) -> T {
unsafe { ops::atomic_swap(self.inner_ptr(), val, order) }
}
/// Stores a value into the `Atomic` if the current value is the same as the
/// `current` value.
///
/// The return value is a result indicating whether the new value was
/// written and containing the previous value. On success this value is
/// guaranteed to be equal to `current`.
///
/// `compare_exchange` takes two `Ordering` arguments to describe the memory
/// ordering of this operation. The first describes the required ordering if
/// the operation succeeds while the second describes the required ordering
/// when the operation fails. The failure ordering can't be `Release` or
/// `AcqRel` and must be equivalent or weaker than the success ordering.
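///
/// # Examples
///
/// A small sketch (values chosen arbitrarily):
///
/// ```
/// use atomic::{Atomic, Ordering};
///
/// let a = Atomic::new(5u32);
/// // The stored value matches `current`, so the exchange succeeds.
/// assert_eq!(a.compare_exchange(5, 10, Ordering::SeqCst, Ordering::SeqCst), Ok(5));
/// // The stored value is now 10, so comparing against 5 fails.
/// assert_eq!(a.compare_exchange(5, 20, Ordering::SeqCst, Ordering::SeqCst), Err(10));
/// ```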
#[inline]
pub fn compare_exchange(
&self,
current: T,
new: T,
success: Ordering,
failure: Ordering,
) -> Result<T, T> {
unsafe { ops::atomic_compare_exchange(self.inner_ptr(), current, new, success, failure) }
}
/// Stores a value into the `Atomic` if the current value is the same as the
/// `current` value.
///
/// Unlike `compare_exchange`, this function is allowed to spuriously fail
/// even when the comparison succeeds, which can result in more efficient
/// code on some platforms. The return value is a result indicating whether
/// the new value was written and containing the previous value.
///
/// `compare_exchange_weak` takes two `Ordering` arguments to describe the memory
/// ordering of this operation. The first describes the required ordering if
/// the operation succeeds while the second describes the required ordering
/// when the operation fails. The failure ordering can't be `Release` or
/// `AcqRel` and must be equivalent or weaker than the success ordering.
#[inline]
pub fn compare_exchange_weak(
&self,
current: T,
new: T,
success: Ordering,
failure: Ordering,
) -> Result<T, T> {
unsafe {
ops::atomic_compare_exchange_weak(self.inner_ptr(), current, new, success, failure)
}
}
/// Fetches the value, and applies a function to it that returns an optional
/// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
/// `Err(previous_value)`.
///
/// Note: This may call the function multiple times if the value has been changed from other threads in
/// the meantime, as long as the function returns `Some(_)`, but the function will have been applied
/// only once to the stored value.
///
/// `fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
/// The first describes the required ordering for when the operation finally succeeds while the second
/// describes the required ordering for loads. These correspond to the success and failure orderings of
/// [`compare_exchange`] respectively.
///
/// Using [`Acquire`] as success ordering makes the store part
/// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
/// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`]
/// and must be equivalent to or weaker than the success ordering.
///
/// [`compare_exchange`]: #method.compare_exchange
/// [`Ordering`]: enum.Ordering.html
/// [`Relaxed`]: enum.Ordering.html#variant.Relaxed
/// [`Release`]: enum.Ordering.html#variant.Release
/// [`Acquire`]: enum.Ordering.html#variant.Acquire
/// [`SeqCst`]: enum.Ordering.html#variant.SeqCst
///
/// # Examples
///
/// ```rust
/// use atomic::{Atomic, Ordering};
///
/// let x = Atomic::new(7);
/// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
/// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
/// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
/// assert_eq!(x.load(Ordering::SeqCst), 9);
/// ```
#[inline]
pub fn fetch_update<F>(
&self,
set_order: Ordering,
fetch_order: Ordering,
mut f: F,
) -> Result<T, T>
where
F: FnMut(T) -> Option<T>,
{
let mut prev = self.load(fetch_order);
while let Some(next) = f(prev) {
match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
x @ Ok(_) => return x,
Err(next_prev) => prev = next_prev,
}
}
Err(prev)
}
}
impl Atomic<bool> {
/// Logical "and" with a boolean value.
///
/// Performs a logical "and" operation on the current value and the argument
/// `val`, and sets the new value to the result.
///
/// Returns the previous value.
#[inline]
pub fn fetch_and(&self, val: bool, order: Ordering) -> bool {
unsafe { ops::atomic_and(self.inner_ptr(), val, order) }
}
/// Logical "or" with a boolean value.
///
/// Performs a logical "or" operation on the current value and the argument
/// `val`, and sets the new value to the result.
///
/// Returns the previous value.
#[inline]
pub fn fetch_or(&self, val: bool, order: Ordering) -> bool {
unsafe { ops::atomic_or(self.inner_ptr(), val, order) }
}
/// Logical "xor" with a boolean value.
///
/// Performs a logical "xor" operation on the current value and the argument
/// `val`, and sets the new value to the result.
///
/// Returns the previous value.
#[inline]
pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool {
unsafe { ops::atomic_xor(self.inner_ptr(), val, order) }
}
}
macro_rules! atomic_ops_common {
($($t:ty)*) => ($(
impl Atomic<$t> {
/// Add to the current value, returning the previous value.
#[inline]
pub fn fetch_add(&self, val: $t, order: Ordering) -> $t {
unsafe { ops::atomic_add(self.inner_ptr(), val, order) }
}
/// Subtract from the current value, returning the previous value.
#[inline]
pub fn fetch_sub(&self, val: $t, order: Ordering) -> $t {
unsafe { ops::atomic_sub(self.inner_ptr(), val, order) }
}
/// Bitwise and with the current value, returning the previous value.
#[inline]
pub fn fetch_and(&self, val: $t, order: Ordering) -> $t {
unsafe { ops::atomic_and(self.inner_ptr(), val, order) }
}
/// Bitwise or with the current value, returning the previous value.
#[inline]
pub fn fetch_or(&self, val: $t, order: Ordering) -> $t {
unsafe { ops::atomic_or(self.inner_ptr(), val, order) }
}
/// Bitwise xor with the current value, returning the previous value.
#[inline]
pub fn fetch_xor(&self, val: $t, order: Ordering) -> $t {
unsafe { ops::atomic_xor(self.inner_ptr(), val, order) }
}
}
)*);
}
macro_rules! atomic_ops_signed {
($($t:ty)*) => (
atomic_ops_common!{ $($t)* }
$(
impl Atomic<$t> {
/// Minimum with the current value.
#[inline]
pub fn fetch_min(&self, val: $t, order: Ordering) -> $t {
unsafe { ops::atomic_min(self.inner_ptr(), val, order) }
}
/// Maximum with the current value.
#[inline]
pub fn fetch_max(&self, val: $t, order: Ordering) -> $t {
unsafe { ops::atomic_max(self.inner_ptr(), val, order) }
}
}
)*
);
}
macro_rules! atomic_ops_unsigned {
($($t:ty)*) => (
atomic_ops_common!{ $($t)* }
$(
impl Atomic<$t> {
/// Minimum with the current value.
#[inline]
pub fn fetch_min(&self, val: $t, order: Ordering) -> $t {
unsafe { ops::atomic_umin(self.inner_ptr(), val, order) }
}
/// Maximum with the current value.
#[inline]
pub fn fetch_max(&self, val: $t, order: Ordering) -> $t {
unsafe { ops::atomic_umax(self.inner_ptr(), val, order) }
}
}
)*
);
}
atomic_ops_signed! { i8 i16 i32 i64 isize i128 }
atomic_ops_unsigned! { u8 u16 u32 u64 usize u128 }
#[cfg(feature = "serde")]
mod serde_impl;
#[cfg(test)]
mod tests {
use super::{Atomic, Ordering::*};
use bytemuck::NoUninit;
use core::mem;
#[derive(Copy, Clone, Eq, PartialEq, Debug, Default, NoUninit)]
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
#[repr(C)]
struct Foo(u8, u8);
#[derive(Copy, Clone, Eq, PartialEq, Debug, Default, NoUninit)]
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
#[repr(C)]
struct Bar(u64, u64);
#[derive(Copy, Clone, Eq, PartialEq, Debug, Default, NoUninit)]
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
#[repr(C)]
struct Quux(u32);
#[cfg(feature = "serde")]
fn assert_serde<T>(atomic: &Atomic<T>, value: T)
where
T: NoUninit
+ PartialEq
+ std::fmt::Debug
+ for<'a> serde::Deserialize<'a>
+ serde::Serialize,
{
let s = serde_json::to_string(atomic).unwrap();
assert_eq!(s, serde_json::to_string(&value).unwrap());
let x: Atomic<T> = serde_json::from_str(&s).unwrap();
assert_eq!(x.load(SeqCst), value);
}
#[test]
fn atomic_bool() {
let a = Atomic::new(false);
assert_eq!(
Atomic::<bool>::is_lock_free(),
cfg!(target_has_atomic = "8"),
);
assert_eq!(format!("{:?}", a), "Atomic(false)");
assert_eq!(a.load(SeqCst), false);
a.store(true, SeqCst);
assert_eq!(a.swap(false, SeqCst), true);
assert_eq!(a.compare_exchange(true, false, SeqCst, SeqCst), Err(false));
assert_eq!(a.compare_exchange(false, true, SeqCst, SeqCst), Ok(false));
assert_eq!(a.fetch_and(false, SeqCst), true);
assert_eq!(a.fetch_or(true, SeqCst), false);
assert_eq!(a.fetch_xor(false, SeqCst), true);
assert_eq!(a.load(SeqCst), true);
#[cfg(feature = "serde")]
assert_serde(&a, true);
}
#[test]
fn atomic_i8() {
let a = Atomic::new(0i8);
assert_eq!(Atomic::<i8>::is_lock_free(), cfg!(target_has_atomic = "8"));
assert_eq!(format!("{:?}", a), "Atomic(0)");
assert_eq!(a.load(SeqCst), 0);
a.store(1, SeqCst);
assert_eq!(a.swap(2, SeqCst), 1);
assert_eq!(a.compare_exchange(5, 45, SeqCst, SeqCst), Err(2));
assert_eq!(a.compare_exchange(2, 3, SeqCst, SeqCst), Ok(2));
assert_eq!(a.fetch_add(123, SeqCst), 3);
// Make sure overflows are handled correctly
assert_eq!(a.fetch_sub(-56, SeqCst), 126);
assert_eq!(a.fetch_and(7, SeqCst), -74);
assert_eq!(a.fetch_or(64, SeqCst), 6);
assert_eq!(a.fetch_xor(1, SeqCst), 70);
assert_eq!(a.fetch_min(30, SeqCst), 71);
assert_eq!(a.fetch_max(-25, SeqCst), 30);
assert_eq!(a.load(SeqCst), 30);
#[cfg(feature = "serde")]
assert_serde(&a, 30);
}
#[test]
fn atomic_i16() {
let a = Atomic::new(0i16);
assert_eq!(
Atomic::<i16>::is_lock_free(),
cfg!(target_has_atomic = "16")
);
assert_eq!(format!("{:?}", a), "Atomic(0)");
assert_eq!(a.load(SeqCst), 0);
a.store(1, SeqCst);
assert_eq!(a.swap(2, SeqCst), 1);
assert_eq!(a.compare_exchange(5, 45, SeqCst, SeqCst), Err(2));
assert_eq!(a.compare_exchange(2, 3, SeqCst, SeqCst), Ok(2));
assert_eq!(a.fetch_add(123, SeqCst), 3);
assert_eq!(a.fetch_sub(-56, SeqCst), 126);
assert_eq!(a.fetch_and(7, SeqCst), 182);
assert_eq!(a.fetch_or(64, SeqCst), 6);
assert_eq!(a.fetch_xor(1, SeqCst), 70);
assert_eq!(a.fetch_min(30, SeqCst), 71);
assert_eq!(a.fetch_max(-25, SeqCst), 30);
assert_eq!(a.load(SeqCst), 30);
#[cfg(feature = "serde")]
assert_serde(&a, 30);
}
#[test]
fn atomic_i32() {
let a = Atomic::new(0i32);
assert_eq!(
Atomic::<i32>::is_lock_free(),
cfg!(target_has_atomic = "32")
);
assert_eq!(format!("{:?}", a), "Atomic(0)");
assert_eq!(a.load(SeqCst), 0);
a.store(1, SeqCst);
assert_eq!(a.swap(2, SeqCst), 1);
assert_eq!(a.compare_exchange(5, 45, SeqCst, SeqCst), Err(2));
assert_eq!(a.compare_exchange(2, 3, SeqCst, SeqCst), Ok(2));
assert_eq!(a.fetch_add(123, SeqCst), 3);
assert_eq!(a.fetch_sub(-56, SeqCst), 126);
assert_eq!(a.fetch_and(7, SeqCst), 182);
assert_eq!(a.fetch_or(64, SeqCst), 6);
assert_eq!(a.fetch_xor(1, SeqCst), 70);
assert_eq!(a.fetch_min(30, SeqCst), 71);
assert_eq!(a.fetch_max(-25, SeqCst), 30);
assert_eq!(a.load(SeqCst), 30);
#[cfg(feature = "serde")]
assert_serde(&a, 30);
}
// on 32-bit x86 64 bit atomics exist, but they can't be used to implement
// atomic<i64> because AtomicI64 has a greater alignment requirement than
// i64.
#[cfg(any(
feature = "fallback",
all(target_has_atomic = "64", not(target_arch = "x86"))
))]
#[test]
fn atomic_i64() {
let a = Atomic::new(0i64);
assert_eq!(
Atomic::<i64>::is_lock_free(),
cfg!(target_has_atomic = "64") && mem::align_of::<i64>() == 8
);
assert_eq!(format!("{:?}", a), "Atomic(0)");
assert_eq!(a.load(SeqCst), 0);
a.store(1, SeqCst);
assert_eq!(a.swap(2, SeqCst), 1);
assert_eq!(a.compare_exchange(5, 45, SeqCst, SeqCst), Err(2));
assert_eq!(a.compare_exchange(2, 3, SeqCst, SeqCst), Ok(2));
assert_eq!(a.fetch_add(123, SeqCst), 3);
assert_eq!(a.fetch_sub(-56, SeqCst), 126);
assert_eq!(a.fetch_and(7, SeqCst), 182);
assert_eq!(a.fetch_or(64, SeqCst), 6);
assert_eq!(a.fetch_xor(1, SeqCst), 70);
assert_eq!(a.fetch_min(30, SeqCst), 71);
assert_eq!(a.fetch_max(-25, SeqCst), 30);
assert_eq!(a.load(SeqCst), 30);
#[cfg(feature = "serde")]
assert_serde(&a, 30);
}
#[cfg(any(feature = "fallback", target_has_atomic = "128"))]
#[test]
fn atomic_i128() {
let a = Atomic::new(0i128);
assert_eq!(
Atomic::<i128>::is_lock_free(),
cfg!(feature = "nightly") & cfg!(target_has_atomic = "128")
);
assert_eq!(format!("{:?}", a), "Atomic(0)");
assert_eq!(a.load(SeqCst), 0);
a.store(1, SeqCst);
assert_eq!(a.swap(2, SeqCst), 1);
assert_eq!(a.compare_exchange(5, 45, SeqCst, SeqCst), Err(2));
assert_eq!(a.compare_exchange(2, 3, SeqCst, SeqCst), Ok(2));
assert_eq!(a.fetch_add(123, SeqCst), 3);
assert_eq!(a.fetch_sub(-56, SeqCst), 126);
assert_eq!(a.fetch_and(7, SeqCst), 182);
assert_eq!(a.fetch_or(64, SeqCst), 6);
assert_eq!(a.fetch_xor(1, SeqCst), 70);
assert_eq!(a.fetch_min(30, SeqCst), 71);
assert_eq!(a.fetch_max(-25, SeqCst), 30);
assert_eq!(a.load(SeqCst), 30);
#[cfg(feature = "serde")]
assert_serde(&a, 30);
}
#[test]
fn atomic_isize() {
let a = Atomic::new(0isize);
assert_eq!(format!("{:?}", a), "Atomic(0)");
assert_eq!(a.load(SeqCst), 0);
a.store(1, SeqCst);
assert_eq!(a.swap(2, SeqCst), 1);
assert_eq!(a.compare_exchange(5, 45, SeqCst, SeqCst), Err(2));
assert_eq!(a.compare_exchange(2, 3, SeqCst, SeqCst), Ok(2));
assert_eq!(a.fetch_add(123, SeqCst), 3);
assert_eq!(a.fetch_sub(-56, SeqCst), 126);
assert_eq!(a.fetch_and(7, SeqCst), 182);
assert_eq!(a.fetch_or(64, SeqCst), 6);
assert_eq!(a.fetch_xor(1, SeqCst), 70);
assert_eq!(a.fetch_min(30, SeqCst), 71);
assert_eq!(a.fetch_max(-25, SeqCst), 30);
assert_eq!(a.load(SeqCst), 30);
#[cfg(feature = "serde")]
assert_serde(&a, 30);
}
#[test]
fn atomic_u8() {
let a = Atomic::new(0u8);
assert_eq!(Atomic::<u8>::is_lock_free(), cfg!(target_has_atomic = "8"));
assert_eq!(format!("{:?}", a), "Atomic(0)");
assert_eq!(a.load(SeqCst), 0);
a.store(1, SeqCst);
assert_eq!(a.swap(2, SeqCst), 1);
assert_eq!(a.compare_exchange(5, 45, SeqCst, SeqCst), Err(2));
assert_eq!(a.compare_exchange(2, 3, SeqCst, SeqCst), Ok(2));
assert_eq!(a.fetch_add(123, SeqCst), 3);
assert_eq!(a.fetch_sub(56, SeqCst), 126);
assert_eq!(a.fetch_and(7, SeqCst), 70);
assert_eq!(a.fetch_or(64, SeqCst), 6);
assert_eq!(a.fetch_xor(1, SeqCst), 70);
assert_eq!(a.fetch_min(30, SeqCst), 71);
assert_eq!(a.fetch_max(25, SeqCst), 30);
assert_eq!(a.load(SeqCst), 30);
#[cfg(feature = "serde")]
assert_serde(&a, 30);
}
#[test]
fn atomic_u16() {
let a = Atomic::new(0u16);
assert_eq!(
Atomic::<u16>::is_lock_free(),
cfg!(target_has_atomic = "16")
);
assert_eq!(format!("{:?}", a), "Atomic(0)");
assert_eq!(a.load(SeqCst), 0);
a.store(1, SeqCst);
assert_eq!(a.swap(2, SeqCst), 1);
assert_eq!(a.compare_exchange(5, 45, SeqCst, SeqCst), Err(2));
assert_eq!(a.compare_exchange(2, 3, SeqCst, SeqCst), Ok(2));
assert_eq!(a.fetch_add(123, SeqCst), 3);
assert_eq!(a.fetch_sub(56, SeqCst), 126);
assert_eq!(a.fetch_and(7, SeqCst), 70);
assert_eq!(a.fetch_or(64, SeqCst), 6);
assert_eq!(a.fetch_xor(1, SeqCst), 70);
assert_eq!(a.fetch_min(30, SeqCst), 71);
assert_eq!(a.fetch_max(25, SeqCst), 30);
assert_eq!(a.load(SeqCst), 30);
#[cfg(feature = "serde")]
assert_serde(&a, 30);
}
#[test]
fn atomic_u32() {
let a = Atomic::new(0u32);
assert_eq!(
Atomic::<u32>::is_lock_free(),
cfg!(target_has_atomic = "32")
);
assert_eq!(format!("{:?}", a), "Atomic(0)");
assert_eq!(a.load(SeqCst), 0);
a.store(1, SeqCst);
assert_eq!(a.swap(2, SeqCst), 1);
assert_eq!(a.compare_exchange(5, 45, SeqCst, SeqCst), Err(2));
assert_eq!(a.compare_exchange(2, 3, SeqCst, SeqCst), Ok(2));
assert_eq!(a.fetch_add(123, SeqCst), 3);
assert_eq!(a.fetch_sub(56, SeqCst), 126);
assert_eq!(a.fetch_and(7, SeqCst), 70);
assert_eq!(a.fetch_or(64, SeqCst), 6);
assert_eq!(a.fetch_xor(1, SeqCst), 70);
assert_eq!(a.fetch_min(30, SeqCst), 71);
assert_eq!(a.fetch_max(25, SeqCst), 30);
assert_eq!(a.load(SeqCst), 30);
#[cfg(feature = "serde")]
assert_serde(&a, 30);
}
// on 32-bit x86 64 bit atomics exist, but they can't be used to implement
// atomic<u64> because AtomicU64 has a greater alignment requirement than
// u64.
#[cfg(any(
feature = "fallback",
all(target_has_atomic = "64", not(target_arch = "x86"))
))]
#[test]
fn atomic_u64() {
let a = Atomic::new(0u64);
assert_eq!(
Atomic::<u64>::is_lock_free(),
cfg!(target_has_atomic = "64") && mem::align_of::<u64>() == 8
);
assert_eq!(format!("{:?}", a), "Atomic(0)");
assert_eq!(a.load(SeqCst), 0);
a.store(1, SeqCst);
assert_eq!(a.swap(2, SeqCst), 1);
assert_eq!(a.compare_exchange(5, 45, SeqCst, SeqCst), Err(2));
assert_eq!(a.compare_exchange(2, 3, SeqCst, SeqCst), Ok(2));
assert_eq!(a.fetch_add(123, SeqCst), 3);
assert_eq!(a.fetch_sub(56, SeqCst), 126);
assert_eq!(a.fetch_and(7, SeqCst), 70);
assert_eq!(a.fetch_or(64, SeqCst), 6);
assert_eq!(a.fetch_xor(1, SeqCst), 70);
assert_eq!(a.fetch_min(30, SeqCst), 71);
assert_eq!(a.fetch_max(25, SeqCst), 30);
assert_eq!(a.load(SeqCst), 30);
#[cfg(feature = "serde")]
assert_serde(&a, 30);
}
#[cfg(any(feature = "fallback", target_has_atomic = "128"))]
#[test]
fn atomic_u128() {
let a = Atomic::new(0u128);
assert_eq!(
Atomic::<u128>::is_lock_free(),
cfg!(feature = "nightly") & cfg!(target_has_atomic = "128")
);
assert_eq!(format!("{:?}", a), "Atomic(0)");
assert_eq!(a.load(SeqCst), 0);
a.store(1, SeqCst);
assert_eq!(a.swap(2, SeqCst), 1);
assert_eq!(a.compare_exchange(5, 45, SeqCst, SeqCst), Err(2));
assert_eq!(a.compare_exchange(2, 3, SeqCst, SeqCst), Ok(2));
assert_eq!(a.fetch_add(123, SeqCst), 3);
assert_eq!(a.fetch_sub(56, SeqCst), 126);
assert_eq!(a.fetch_and(7, SeqCst), 70);
assert_eq!(a.fetch_or(64, SeqCst), 6);
assert_eq!(a.fetch_xor(1, SeqCst), 70);
assert_eq!(a.fetch_min(30, SeqCst), 71);
assert_eq!(a.fetch_max(25, SeqCst), 30);
assert_eq!(a.load(SeqCst), 30);
#[cfg(feature = "serde")]
assert_serde(&a, 30);
}
#[test]
fn atomic_usize() {
let a = Atomic::new(0usize);
assert_eq!(format!("{:?}", a), "Atomic(0)");
assert_eq!(a.load(SeqCst), 0);
a.store(1, SeqCst);
assert_eq!(a.swap(2, SeqCst), 1);
assert_eq!(a.compare_exchange(5, 45, SeqCst, SeqCst), Err(2));
assert_eq!(a.compare_exchange(2, 3, SeqCst, SeqCst), Ok(2));
assert_eq!(a.fetch_add(123, SeqCst), 3);
assert_eq!(a.fetch_sub(56, SeqCst), 126);
assert_eq!(a.fetch_and(7, SeqCst), 70);
assert_eq!(a.fetch_or(64, SeqCst), 6);
assert_eq!(a.fetch_xor(1, SeqCst), 70);
assert_eq!(a.fetch_min(30, SeqCst), 71);
assert_eq!(a.fetch_max(25, SeqCst), 30);
assert_eq!(a.load(SeqCst), 30);
#[cfg(feature = "serde")]
assert_serde(&a, 30);
}
#[cfg(feature = "fallback")]
#[test]
fn atomic_foo() {
let a = Atomic::default();
assert_eq!(Atomic::<Foo>::is_lock_free(), false);
assert_eq!(format!("{:?}", a), "Atomic(Foo(0, 0))");
assert_eq!(a.load(SeqCst), Foo(0, 0));
a.store(Foo(1, 1), SeqCst);
assert_eq!(a.swap(Foo(2, 2), SeqCst), Foo(1, 1));
assert_eq!(
a.compare_exchange(Foo(5, 5), Foo(45, 45), SeqCst, SeqCst),
Err(Foo(2, 2))
);
assert_eq!(
a.compare_exchange(Foo(2, 2), Foo(3, 3), SeqCst, SeqCst),
Ok(Foo(2, 2))
);
assert_eq!(a.load(SeqCst), Foo(3, 3));
#[cfg(feature = "serde")]
assert_serde(&a, Foo(3, 3));
}
#[cfg(feature = "fallback")]
#[test]
fn atomic_bar() {
let a = Atomic::default();
assert_eq!(Atomic::<Bar>::is_lock_free(), false);
assert_eq!(format!("{:?}", a), "Atomic(Bar(0, 0))");
assert_eq!(a.load(SeqCst), Bar(0, 0));
a.store(Bar(1, 1), SeqCst);
assert_eq!(a.swap(Bar(2, 2), SeqCst), Bar(1, 1));
assert_eq!(
a.compare_exchange(Bar(5, 5), Bar(45, 45), SeqCst, SeqCst),
Err(Bar(2, 2))
);
assert_eq!(
a.compare_exchange(Bar(2, 2), Bar(3, 3), SeqCst, SeqCst),
Ok(Bar(2, 2))
);
assert_eq!(a.load(SeqCst), Bar(3, 3));
#[cfg(feature = "serde")]
assert_serde(&a, Bar(3, 3));
}
#[test]
fn atomic_quxx() {
let a = Atomic::default();
assert_eq!(
Atomic::<Quux>::is_lock_free(),
cfg!(target_has_atomic = "32")
);
assert_eq!(format!("{:?}", a), "Atomic(Quux(0))");
assert_eq!(a.load(SeqCst), Quux(0));
a.store(Quux(1), SeqCst);
assert_eq!(a.swap(Quux(2), SeqCst), Quux(1));
assert_eq!(
a.compare_exchange(Quux(5), Quux(45), SeqCst, SeqCst),
Err(Quux(2))
);
assert_eq!(
a.compare_exchange(Quux(2), Quux(3), SeqCst, SeqCst),
Ok(Quux(2))
);
assert_eq!(a.load(SeqCst), Quux(3));
#[cfg(feature = "serde")]
assert_serde(&a, Quux(3));
}
}
================================================
FILE: src/ops.rs
================================================
// Copyright 2016 Amanieu d'Antras
//
// Licensed under the Apache License, Version 2.0, <LICENSE-APACHE or
// http://apache.org/licenses/LICENSE-2.0> or the MIT license <LICENSE-MIT or
// http://opensource.org/licenses/MIT>, at your option. This file may not be
// copied, modified, or distributed except according to those terms.
use bytemuck::NoUninit;
#[cfg(feature = "fallback")]
use crate::fallback;
use core::cmp;
use core::mem;
use core::num::Wrapping;
use core::ops;
use core::sync::atomic::Ordering;
macro_rules! match_atomic {
($type:ident, $atomic:ident, $impl:expr, $fallback_impl:expr) => {
match mem::size_of::<$type>() {
#[cfg(target_has_atomic = "8")]
1 if mem::align_of::<$type>() >= 1 => {
type $atomic = core::sync::atomic::AtomicU8;
$impl
}
#[cfg(target_has_atomic = "16")]
2 if mem::align_of::<$type>() >= 2 => {
type $atomic = core::sync::atomic::AtomicU16;
$impl
}
#[cfg(target_has_atomic = "32")]
4 if mem::align_of::<$type>() >= 4 => {
type $atomic = core::sync::atomic::AtomicU32;
$impl
}
#[cfg(target_has_atomic = "64")]
8 if mem::align_of::<$type>() >= 8 => {
type $atomic = core::sync::atomic::AtomicU64;
$impl
}
#[cfg(all(feature = "nightly", target_has_atomic = "128"))]
16 if mem::align_of::<$type>() >= 16 => {
type $atomic = core::sync::atomic::AtomicU128;
$impl
}
#[cfg(feature = "fallback")]
_ => $fallback_impl,
#[cfg(not(feature = "fallback"))]
_ => panic!("Atomic operations for type `{}` are not available as the `fallback` feature of the `atomic` crate is disabled.", core::any::type_name::<$type>()),
}
};
}
macro_rules! match_signed_atomic {
($type:ident, $atomic:ident, $impl:expr, $fallback_impl:expr) => {
match mem::size_of::<$type>() {
#[cfg(target_has_atomic = "8")]
1 if mem::align_of::<$type>() >= 1 => {
type $atomic = core::sync::atomic::AtomicI8;
$impl
}
#[cfg(target_has_atomic = "16")]
2 if mem::align_of::<$type>() >= 2 => {
type $atomic = core::sync::atomic::AtomicI16;
$impl
}
#[cfg(target_has_atomic = "32")]
4 if mem::align_of::<$type>() >= 4 => {
type $atomic = core::sync::atomic::AtomicI32;
$impl
}
#[cfg(target_has_atomic = "64")]
8 if mem::align_of::<$type>() >= 8 => {
type $atomic = core::sync::atomic::AtomicI64;
$impl
}
#[cfg(all(feature = "nightly", target_has_atomic = "128"))]
16 if mem::align_of::<$type>() >= 16 => {
type $atomic = core::sync::atomic::AtomicI128;
$impl
}
#[cfg(feature = "fallback")]
_ => $fallback_impl,
#[cfg(not(feature = "fallback"))]
_ => panic!("Atomic operations for type `{}` are not available as the `fallback` feature of the `atomic` crate is disabled.", core::any::type_name::<$type>()),
}
};
}
#[inline]
pub const fn atomic_is_lock_free<T>() -> bool {
let size = mem::size_of::<T>();
let align = mem::align_of::<T>();
(cfg!(target_has_atomic = "8") & (size == 1) & (align >= 1))
| (cfg!(target_has_atomic = "16") & (size == 2) & (align >= 2))
| (cfg!(target_has_atomic = "32") & (size == 4) & (align >= 4))
| (cfg!(target_has_atomic = "64") & (size == 8) & (align >= 8))
| (cfg!(feature = "nightly")
& cfg!(target_has_atomic = "128")
& (size == 16)
& (align >= 16))
}
#[inline]
pub unsafe fn atomic_load<T: NoUninit>(dst: *mut T, order: Ordering) -> T {
match_atomic!(
T,
A,
mem::transmute_copy(&(*(dst as *const A)).load(order)),
fallback::atomic_load(dst)
)
}
#[inline]
pub unsafe fn atomic_store<T: NoUninit>(dst: *mut T, val: T, order: Ordering) {
match_atomic!(
T,
A,
(*(dst as *const A)).store(mem::transmute_copy(&val), order),
fallback::atomic_store(dst, val)
)
}
#[inline]
pub unsafe fn atomic_swap<T: NoUninit>(dst: *mut T, val: T, order: Ordering) -> T {
match_atomic!(
T,
A,
mem::transmute_copy(&(*(dst as *const A)).swap(mem::transmute_copy(&val), order)),
fallback::atomic_swap(dst, val)
)
}
#[inline]
unsafe fn map_result<T, U>(r: Result<T, T>) -> Result<U, U> {
match r {
Ok(x) => Ok(mem::transmute_copy(&x)),
Err(x) => Err(mem::transmute_copy(&x)),
}
}
#[inline]
pub unsafe fn atomic_compare_exchange<T: NoUninit>(
dst: *mut T,
current: T,
new: T,
success: Ordering,
failure: Ordering,
) -> Result<T, T> {
match_atomic!(
T,
A,
map_result((*(dst as *const A)).compare_exchange(
mem::transmute_copy(&current),
mem::transmute_copy(&new),
success,
failure,
)),
fallback::atomic_compare_exchange(dst, current, new)
)
}
#[inline]
pub unsafe fn atomic_compare_exchange_weak<T: NoUninit>(
dst: *mut T,
current: T,
new: T,
success: Ordering,
failure: Ordering,
) -> Result<T, T> {
match_atomic!(
T,
A,
map_result((*(dst as *const A)).compare_exchange_weak(
mem::transmute_copy(&current),
mem::transmute_copy(&new),
success,
failure,
)),
fallback::atomic_compare_exchange(dst, current, new)
)
}
#[inline]
pub unsafe fn atomic_add<T: NoUninit>(dst: *mut T, val: T, order: Ordering) -> T
where
Wrapping<T>: ops::Add<Output = Wrapping<T>>,
{
match_atomic!(
T,
A,
mem::transmute_copy(&(*(dst as *const A)).fetch_add(mem::transmute_copy(&val), order),),
fallback::atomic_add(dst, val)
)
}
#[inline]
pub unsafe fn atomic_sub<T: NoUninit>(dst: *mut T, val: T, order: Ordering) -> T
where
Wrapping<T>: ops::Sub<Output = Wrapping<T>>,
{
match_atomic!(
T,
A,
mem::transmute_copy(&(*(dst as *const A)).fetch_sub(mem::transmute_copy(&val), order),),
fallback::atomic_sub(dst, val)
)
}
#[inline]
pub unsafe fn atomic_and<T: NoUninit + ops::BitAnd<Output = T>>(
dst: *mut T,
val: T,
order: Ordering,
) -> T {
match_atomic!(
T,
A,
mem::transmute_copy(&(*(dst as *const A)).fetch_and(mem::transmute_copy(&val), order),),
fallback::atomic_and(dst, val)
)
}
#[inline]
pub unsafe fn atomic_or<T: NoUninit + ops::BitOr<Output = T>>(
dst: *mut T,
val: T,
order: Ordering,
) -> T {
match_atomic!(
T,
A,
mem::transmute_copy(&(*(dst as *const A)).fetch_or(mem::transmute_copy(&val), order),),
fallback::atomic_or(dst, val)
)
}
#[inline]
pub unsafe fn atomic_xor<T: NoUninit + ops::BitXor<Output = T>>(
dst: *mut T,
val: T,
order: Ordering,
) -> T {
match_atomic!(
T,
A,
mem::transmute_copy(&(*(dst as *const A)).fetch_xor(mem::transmute_copy(&val), order),),
fallback::atomic_xor(dst, val)
)
}
#[inline]
pub unsafe fn atomic_min<T: NoUninit + cmp::Ord>(dst: *mut T, val: T, order: Ordering) -> T {
match_signed_atomic!(
T,
A,
mem::transmute_copy(&(*(dst as *const A)).fetch_min(mem::transmute_copy(&val), order),),
fallback::atomic_min(dst, val)
)
}
#[inline]
pub unsafe fn atomic_max<T: NoUninit + cmp::Ord>(dst: *mut T, val: T, order: Ordering) -> T {
match_signed_atomic!(
T,
A,
mem::transmute_copy(&(*(dst as *const A)).fetch_max(mem::transmute_copy(&val), order),),
fallback::atomic_max(dst, val)
)
}
#[inline]
pub unsafe fn atomic_umin<T: NoUninit + cmp::Ord>(dst: *mut T, val: T, order: Ordering) -> T {
match_atomic!(
T,
A,
mem::transmute_copy(&(*(dst as *const A)).fetch_min(mem::transmute_copy(&val), order),),
fallback::atomic_min(dst, val)
)
}
#[inline]
pub unsafe fn atomic_umax<T: NoUninit + cmp::Ord>(dst: *mut T, val: T, order: Ordering) -> T {
match_atomic!(
T,
A,
mem::transmute_copy(&(*(dst as *const A)).fetch_max(mem::transmute_copy(&val), order),),
fallback::atomic_max(dst, val)
)
}
================================================
FILE: src/serde_impl.rs
================================================
use core::sync::atomic::Ordering;
use bytemuck::NoUninit;
use serde::{Deserialize, Deserializer, Serialize, Serializer};
use crate::Atomic;
impl<T> Serialize for Atomic<T>
where
T: NoUninit + Serialize,
{
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
// Matches the atomic ordering used in `Debug` for `Atomic<T>`.
self.load(Ordering::Relaxed).serialize(serializer)
}
}
impl<'de, T> Deserialize<'de> for Atomic<T>
where
T: for<'a> Deserialize<'a>,
{
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: Deserializer<'de>,
{
Deserialize::deserialize(deserializer).map(Self::new)
}
}
SYMBOL INDEX (73 symbols across 4 files)
FILE: src/fallback.rs
type SpinLock (line 22) | struct SpinLock(AtomicUsize);
method lock (line 25) | fn lock(&self) {
method unlock (line 37) | fn unlock(&self) {
function lock_for_addr (line 71) | fn lock_for_addr(addr: usize) -> &'static SpinLock {
function lock (line 87) | fn lock(addr: usize) -> LockGuard {
type LockGuard (line 93) | struct LockGuard(&'static SpinLock);
method drop (line 96) | fn drop(&mut self) {
function atomic_load (line 102) | pub unsafe fn atomic_load<T>(dst: *mut T) -> T {
function atomic_store (line 108) | pub unsafe fn atomic_store<T>(dst: *mut T, val: T) {
function atomic_swap (line 114) | pub unsafe fn atomic_swap<T>(dst: *mut T, val: T) -> T {
function atomic_compare_exchange (line 120) | pub unsafe fn atomic_compare_exchange<T: NoUninit>(
function atomic_add (line 139) | pub unsafe fn atomic_add<T: Copy>(dst: *mut T, val: T) -> T
function atomic_sub (line 150) | pub unsafe fn atomic_sub<T: Copy>(dst: *mut T, val: T) -> T
function atomic_and (line 161) | pub unsafe fn atomic_and<T: Copy + ops::BitAnd<Output = T>>(dst: *mut T,...
function atomic_or (line 169) | pub unsafe fn atomic_or<T: Copy + ops::BitOr<Output = T>>(dst: *mut T, v...
function atomic_xor (line 177) | pub unsafe fn atomic_xor<T: Copy + ops::BitXor<Output = T>>(dst: *mut T,...
function atomic_min (line 185) | pub unsafe fn atomic_min<T: Copy + cmp::Ord>(dst: *mut T, val: T) -> T {
function atomic_max (line 193) | pub unsafe fn atomic_max<T: Copy + cmp::Ord>(dst: *mut T, val: T) -> T {
FILE: src/lib.rs
type Atomic (line 67) | pub struct Atomic<T> {
method default (line 86) | fn default() -> Self {
function fmt (line 92) | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
function new (line 102) | pub const fn new(v: T) -> Atomic<T> {
function is_lock_free (line 114) | pub const fn is_lock_free() -> bool {
function inner_ptr (line 121) | fn inner_ptr(&self) -> *mut T {
function get_mut (line 130) | pub fn get_mut(&mut self) -> &mut T {
function into_inner (line 139) | pub fn into_inner(self) -> T {
function load (line 152) | pub fn load(&self, order: Ordering) -> T {
function store (line 165) | pub fn store(&self, val: T, order: Ordering) {
function swap (line 176) | pub fn swap(&self, val: T, order: Ordering) -> T {
function compare_exchange (line 193) | pub fn compare_exchange(
function compare_exchange_weak (line 218) | pub fn compare_exchange_weak(
function fetch_update (line 267) | pub fn fetch_update<F>(
function fetch_and (line 295) | pub fn fetch_and(&self, val: bool, order: Ordering) -> bool {
function fetch_or (line 306) | pub fn fetch_or(&self, val: bool, order: Ordering) -> bool {
function fetch_xor (line 317) | pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool {
type Foo (line 412) | struct Foo(u8, u8);
type Bar (line 417) | struct Bar(u64, u64);
type Quux (line 422) | struct Quux(u32);
function assert_serde (line 425) | fn assert_serde<T>(atomic: &Atomic<T>, value: T)
function atomic_bool (line 441) | fn atomic_bool() {
function atomic_i8 (line 463) | fn atomic_i8() {
function atomic_i16 (line 487) | fn atomic_i16() {
function atomic_i32 (line 513) | fn atomic_i32() {
function atomic_i64 (line 546) | fn atomic_i64() {
function atomic_i128 (line 573) | fn atomic_i128() {
function atomic_isize (line 599) | fn atomic_isize() {
function atomic_u8 (line 621) | fn atomic_u8() {
function atomic_u16 (line 644) | fn atomic_u16() {
function atomic_u32 (line 670) | fn atomic_u32() {
function atomic_u64 (line 703) | fn atomic_u64() {
function atomic_u128 (line 730) | fn atomic_u128() {
function atomic_usize (line 756) | fn atomic_usize() {
function atomic_foo (line 779) | fn atomic_foo() {
function atomic_bar (line 802) | fn atomic_bar() {
function atomic_quxx (line 824) | fn atomic_quxx() {
FILE: src/ops.rs
function atomic_is_lock_free (line 101) | pub const fn atomic_is_lock_free<T>() -> bool {
function atomic_load (line 116) | pub unsafe fn atomic_load<T: NoUninit>(dst: *mut T, order: Ordering) -> T {
function atomic_store (line 126) | pub unsafe fn atomic_store<T: NoUninit>(dst: *mut T, val: T, order: Orde...
function atomic_swap (line 136) | pub unsafe fn atomic_swap<T: NoUninit>(dst: *mut T, val: T, order: Order...
function map_result (line 146) | unsafe fn map_result<T, U>(r: Result<T, T>) -> Result<U, U> {
function atomic_compare_exchange (line 154) | pub unsafe fn atomic_compare_exchange<T: NoUninit>(
function atomic_compare_exchange_weak (line 175) | pub unsafe fn atomic_compare_exchange_weak<T: NoUninit>(
function atomic_add (line 196) | pub unsafe fn atomic_add<T: NoUninit>(dst: *mut T, val: T, order: Orderi...
function atomic_sub (line 209) | pub unsafe fn atomic_sub<T: NoUninit>(dst: *mut T, val: T, order: Orderi...
function atomic_and (line 222) | pub unsafe fn atomic_and<T: NoUninit + ops::BitAnd<Output = T>>(
function atomic_or (line 236) | pub unsafe fn atomic_or<T: NoUninit + ops::BitOr<Output = T>>(
function atomic_xor (line 250) | pub unsafe fn atomic_xor<T: NoUninit + ops::BitXor<Output = T>>(
function atomic_min (line 264) | pub unsafe fn atomic_min<T: NoUninit + cmp::Ord>(dst: *mut T, val: T, or...
function atomic_max (line 274) | pub unsafe fn atomic_max<T: NoUninit + cmp::Ord>(dst: *mut T, val: T, or...
function atomic_umin (line 284) | pub unsafe fn atomic_umin<T: NoUninit + cmp::Ord>(dst: *mut T, val: T, o...
function atomic_umax (line 294) | pub unsafe fn atomic_umax<T: NoUninit + cmp::Ord>(dst: *mut T, val: T, o...
FILE: src/serde_impl.rs
method serialize (line 12) | fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
function deserialize (line 25) | fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>