// SPDX-License-Identifier: Apache-2.0 OR MIT

/*!
<!-- Note: Document from sync-markdown-to-rustdoc:start through sync-markdown-to-rustdoc:end
     is synchronized from README.md. Any changes to that range are not preserved. -->
<!-- tidy:sync-markdown-to-rustdoc:start -->

Portable atomic types including support for 128-bit atomics, atomic float, etc.

- Provide all atomic integer types (`Atomic{I,U}{8,16,32,64}`) for all targets that can use atomic CAS. (i.e., all targets that can use `std`, and most no-std targets)
- Provide `AtomicI128` and `AtomicU128`.
- Provide `AtomicF32` and `AtomicF64`. ([optional, requires the `float` feature](#optional-features-float))
- Provide `AtomicF16` and `AtomicF128` for [unstable `f16` and `f128`](https://github.com/rust-lang/rust/issues/116909). ([optional, requires the `float` feature and unstable cfgs](#optional-features-float))
- Provide atomic load/store for targets where atomic is not available at all in the standard library. (RISC-V without A-extension, MSP430, AVR)
- Provide atomic CAS for targets where atomic CAS is not available in the standard library. (thumbv6m, pre-v6 Arm, RISC-V without A-extension, MSP430, AVR, Xtensa, etc.) (always enabled for MSP430 and AVR, [optional](#optional-features-critical-section) otherwise)
- Provide stable equivalents of the standard library's atomic types' unstable APIs, such as [`AtomicPtr::fetch_*`](https://github.com/rust-lang/rust/issues/99108).
- Make features that require newer compilers, such as [`fetch_{max,min}`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.fetch_max), [`fetch_update`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.fetch_update), [`as_ptr`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.as_ptr), [`from_ptr`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.from_ptr), [`AtomicBool::fetch_not`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicBool.html#method.fetch_not) and [stronger CAS failure ordering](https://github.com/rust-lang/rust/pull/98383) available on Rust 1.34+.
- Provide workarounds for bugs in the standard library's atomic-related APIs, such as [rust-lang/rust#100650], `fence`/`compiler_fence` on MSP430 that cause an LLVM error, etc.

<!-- TODO:
- mention Atomic{I,U}*::fetch_neg, Atomic{I*,U*,Ptr}::bit_*, etc.
- mention optimizations not available in the standard library's equivalents
-->

The portable-atomic version of `std::sync::Arc` is provided by the [portable-atomic-util](https://github.com/taiki-e/portable-atomic/tree/HEAD/portable-atomic-util) crate.
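
A minimal usage sketch (the `portable-atomic-util` version and `alloc` feature shown here are assumptions; check that crate's documentation for the current requirements):

```toml
[dependencies]
portable-atomic-util = { version = "0.2", features = ["alloc"] }
```

```rust,ignore
// Drop-in replacement for std::sync::Arc, backed by portable-atomic.
use portable_atomic_util::Arc;

let x = Arc::new(1);
let y = Arc::clone(&x);
assert_eq!(*x, *y);
```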

## Usage

Add this to your `Cargo.toml`:

```toml
[dependencies]
portable-atomic = "1"
```
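
The API is the same as `core::sync::atomic`, so switching is usually just a matter of changing imports. A minimal sketch:

```rust
use portable_atomic::{AtomicUsize, Ordering};

static COUNTER: AtomicUsize = AtomicUsize::new(0);

// Same API as core::sync::atomic::AtomicUsize; extensions such as
// fetch_update are available on all supported Rust versions.
COUNTER.fetch_add(1, Ordering::Relaxed);
assert_eq!(COUNTER.load(Ordering::Relaxed), 1);
```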

The default features are mainly for users who use atomics larger than the pointer width.
If you don't need them, disabling the default features may reduce code size and compile time slightly.

```toml
[dependencies]
portable-atomic = { version = "1", default-features = false }
```

If your crate supports no-std environments and requires atomic CAS, enabling the `require-cas` feature will allow `portable-atomic` to display a [helpful error message](https://github.com/taiki-e/portable-atomic/pull/100) to users on targets that require additional action on the user's side to provide atomic CAS.

```toml
[dependencies]
portable-atomic = { version = "1.3", default-features = false, features = ["require-cas"] }
```

## 128-bit atomics support

Native 128-bit atomic operations are available on x86_64 (Rust 1.59+), AArch64 (Rust 1.59+), riscv64 (Rust 1.59+), Arm64EC (Rust 1.84+), s390x (Rust 1.84+), and powerpc64 (nightly only), otherwise the fallback implementation is used.
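
For example, `AtomicU128` offers the same interface as the pointer-sized integer atomics (a brief sketch):

```rust
use portable_atomic::{AtomicU128, Ordering};

let a = AtomicU128::new(0);
a.store(u128::MAX, Ordering::Release);
// Uses native 128-bit instructions where available; otherwise the
// fallback implementation is used transparently.
assert_eq!(a.swap(1, Ordering::AcqRel), u128::MAX);
assert_eq!(a.load(Ordering::Acquire), 1);
```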

On x86_64, even if `cmpxchg16b` is not available at compile-time (note: the `cmpxchg16b` target feature is enabled by default only on Apple and Windows (except Windows 7) targets), run-time detection checks whether `cmpxchg16b` is available. If `cmpxchg16b` is available neither at compile-time nor by run-time detection, the fallback implementation is used. See also the [`portable_atomic_no_outline_atomics`](#optional-cfg-no-outline-atomics) cfg.

They are usually implemented using inline assembly; when using Miri or ThreadSanitizer, which do not support inline assembly, core intrinsics are used instead where possible.

See the [`atomic128` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/atomic128/README.md) for details.

## Optional features

- **`fallback`** *(enabled by default)*<br>
  Enable fallback implementations.

  Disabling this makes available only the atomic types that the platform natively supports.

- <a name="optional-features-float"></a>**`float`**<br>
  Provide `AtomicF{32,64}`.

  - When unstable `--cfg portable_atomic_unstable_f16` is also enabled, `AtomicF16` for [unstable `f16`](https://github.com/rust-lang/rust/issues/116909) is also provided.
  - When unstable `--cfg portable_atomic_unstable_f128` is also enabled, `AtomicF128` for [unstable `f128`](https://github.com/rust-lang/rust/issues/116909) is also provided.

  Note:
  - Most `fetch_*` operations of atomic floats are implemented using CAS loops, which can be slower than the equivalent operations of atomic integers. (AArch64 with FEAT_LSFE and GPU targets have atomic instructions for float, [so we plan to use these instructions for them in the future.](https://github.com/taiki-e/portable-atomic/issues/34))
  - Unstable cfgs are outside of the normal semver guarantees and minor or patch versions of portable-atomic may make breaking changes to them at any time.
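
  For example (requires the `float` feature; `fetch_add` is one of the CAS-loop-based operations mentioned above):

  ```rust,ignore
  use portable_atomic::{AtomicF32, Ordering};

  let a = AtomicF32::new(1.5);
  // Implemented as a compare-exchange loop on most platforms.
  assert_eq!(a.fetch_add(0.5, Ordering::Relaxed), 1.5);
  assert_eq!(a.load(Ordering::Relaxed), 2.0);
  ```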

- **`std`**<br>
  Use `std`.

- <a name="optional-features-require-cas"></a>**`require-cas`**<br>
  Emit a compile error if atomic CAS is not available. See the [Usage](#usage) section and [#100](https://github.com/taiki-e/portable-atomic/pull/100) for more.

- <a name="optional-features-serde"></a>**`serde`**<br>
  Implement `serde::{Serialize,Deserialize}` for atomic types.

  Note:
  - The MSRV when this feature is enabled depends on the MSRV of [serde].
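
  A brief sketch (assuming the impls serialize the contained value, with `serde_json` used purely for illustration):

  ```rust,ignore
  use portable_atomic::AtomicU32;

  let v = AtomicU32::new(42);
  // Serialized the same way as the underlying integer.
  assert_eq!(serde_json::to_string(&v).unwrap(), "42");
  ```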

- <a name="optional-features-critical-section"></a>**`critical-section`**<br>
  When this feature is enabled, this crate uses [critical-section] to provide atomic CAS for targets where
  it is not natively available. When enabling it, you should provide a suitable critical section implementation
  for the current target; see the [critical-section] documentation for details on how to do so.
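
  A minimal sketch of what providing an implementation looks like (the function bodies are placeholders; a real implementation must actually disable and restore interrupts, or take a lock suitable for the target):

  ```rust,ignore
  struct MyCriticalSection;
  critical_section::set_impl!(MyCriticalSection);

  unsafe impl critical_section::Impl for MyCriticalSection {
      unsafe fn acquire() -> critical_section::RawRestoreState {
          // e.g., save the current interrupt state and disable interrupts.
          todo!()
      }
      unsafe fn release(restore_state: critical_section::RawRestoreState) {
          // e.g., restore the previously saved interrupt state.
          todo!()
      }
  }
  ```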

  `critical-section` support is useful to get atomic CAS when the [`unsafe-assume-single-core` feature](#optional-features-unsafe-assume-single-core) can't be used,
  such as multi-core targets, unprivileged code running under some RTOS, or environments where disabling interrupts
  needs extra care due to e.g. real-time requirements.

  Note that with the `critical-section` feature, critical sections are taken for all atomic operations, while with
  the [`unsafe-assume-single-core` feature](#optional-features-unsafe-assume-single-core) some operations don't require disabling interrupts (loads and stores, but
  additionally on MSP430 `add`, `sub`, `and`, `or`, `xor`, `not`). Therefore, for better performance, if
  all the `critical-section` implementation for your target does is disable interrupts, prefer using
  the `unsafe-assume-single-core` feature instead.

  Note:
  - The MSRV when this feature is enabled depends on the MSRV of [critical-section].
  - It is usually *not* recommended to unconditionally enable this feature in a library's dependencies.

    Enabling this feature will prevent the end user from having the chance to take advantage of other (potentially) efficient implementations ([Implementations provided by `unsafe-assume-single-core` feature, default implementations on MSP430 and AVR](#optional-features-unsafe-assume-single-core), implementation proposed in [#60], etc. Other systems may also be supported in the future).

    The recommended approach for libraries is to leave it up to the end user whether or not to enable this feature. (However, it may make sense to enable this feature by default for libraries specific to a platform where other implementations are known not to work.)

    As an example, the end-user's `Cargo.toml` that uses a crate that provides a critical-section implementation and a crate that depends on portable-atomic as an option would be expected to look like this:

    ```toml
    [dependencies]
    portable-atomic = { version = "1", default-features = false, features = ["critical-section"] }
    crate-provides-critical-section-impl = "..."
    crate-uses-portable-atomic-as-feature = { version = "...", features = ["portable-atomic"] }
    ```

- <a name="optional-features-unsafe-assume-single-core"></a>**`unsafe-assume-single-core`**<br>
  Assume that the target is single-core.
  When this feature is enabled, this crate provides atomic CAS for targets where atomic CAS is not available in the standard library by disabling interrupts.

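  For example, a binary crate targeting a single-core chip might enable it like this:

  ```toml
  [dependencies]
  portable-atomic = { version = "1", default-features = false, features = ["unsafe-assume-single-core"] }
  ```
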
  This feature is `unsafe`, and note the following safety requirements:
  - Enabling this feature for multi-core systems is always **unsound**.
  - This uses privileged instructions to disable interrupts, so it usually doesn't work in unprivileged mode.
    Enabling this feature in an environment where privileged instructions are not available, or where the instructions used are not sufficient to disable interrupts in the system, is also usually considered **unsound**, although the details are system-dependent.

    The following are known cases:
    - On pre-v6 Arm, this disables only IRQs by default. For many systems (e.g., GBA) this is enough. If the system needs to disable both IRQs and FIQs, you need to enable the `disable-fiq` feature together.
    - On RISC-V without A-extension, this generates code for machine-mode (M-mode) by default. If you enable the `s-mode` feature together, this generates code for supervisor-mode (S-mode). In particular, `qemu-system-riscv*` uses [OpenSBI](https://github.com/riscv-software-src/opensbi) as the default firmware.

    See also the [`interrupt` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/interrupt/README.md).

  Consider using the [`critical-section` feature](#optional-features-critical-section) for systems that cannot use this feature.

  It is **very strongly discouraged** to enable this feature in libraries that depend on `portable-atomic`. The recommended approach for libraries is to leave it up to the end user whether or not to enable this feature. (However, it may make sense to enable this feature by default for libraries specific to a platform where it is guaranteed to always be sound, for example in a hardware abstraction layer targeting a single-core chip.)

  Armv6-M (thumbv6m), pre-v6 Arm (e.g., thumbv4t, thumbv5te), RISC-V without A-extension, and Xtensa are currently supported.

  Since all MSP430 and AVR chips are single-core, we always provide atomic CAS for them without this feature.

  Enabling this feature for targets that have atomic CAS will result in a compile error.

  Feel free to submit an issue if your target is not supported yet.

## Optional cfg

One of the ways to enable a cfg is to set [rustflags in the cargo config](https://doc.rust-lang.org/cargo/reference/config.html#targettriplerustflags):

```toml
# .cargo/config.toml
[target.<target>]
rustflags = ["--cfg", "portable_atomic_no_outline_atomics"]
```

Or set the environment variable:

```sh
RUSTFLAGS="--cfg portable_atomic_no_outline_atomics" cargo ...
```

- <a name="optional-cfg-unsafe-assume-single-core"></a>**`--cfg portable_atomic_unsafe_assume_single_core`**<br>
  Since 1.4.0, this cfg is an alias of the [`unsafe-assume-single-core` feature](#optional-features-unsafe-assume-single-core).

  Originally, we were providing these as cfgs instead of features, but based on a strong request from the embedded ecosystem, we have agreed to provide them as features as well. See [#94](https://github.com/taiki-e/portable-atomic/pull/94) for more.

- <a name="optional-cfg-no-outline-atomics"></a>**`--cfg portable_atomic_no_outline_atomics`**<br>
  Disable dynamic dispatching by run-time CPU feature detection.

  If dynamic dispatching by run-time CPU feature detection is enabled, it allows maintaining support for older CPUs while still using features not available on them, such as CMPXCHG16B (x86_64) and FEAT_LSE/FEAT_LSE2 (AArch64).

  Note:
  - Dynamic detection is currently only supported on x86_64, AArch64, Arm, RISC-V, Arm64EC, and powerpc64; elsewhere it works the same as when this cfg is set.
  - If the required target features are enabled at compile-time, the atomic operations are inlined.
  - This is compatible with no-std (as with all features except `std`).
  - On some targets, run-time detection is disabled by default, mainly for compatibility with incomplete build environments or because support for it is experimental, and can be enabled by `--cfg portable_atomic_outline_atomics`. (When both cfgs are enabled, the `*_no_*` cfg is preferred.)
  - Some AArch64 targets enable LLVM's `outline-atomics` target feature by default, so if you set this cfg, you may want to disable that as well. (portable-atomic's outline-atomics does not depend on the compiler-rt symbols, so even if you need to disable LLVM's outline-atomics, you may not need to disable portable-atomic's outline-atomics.)

  See also the [`atomic128` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/atomic128/README.md).

## Related Projects

- [atomic-maybe-uninit]: Atomic operations on potentially uninitialized integers.
- [atomic-memcpy]: Byte-wise atomic memcpy.

[#60]: https://github.com/taiki-e/portable-atomic/issues/60
[atomic-maybe-uninit]: https://github.com/taiki-e/atomic-maybe-uninit
[atomic-memcpy]: https://github.com/taiki-e/atomic-memcpy
[critical-section]: https://github.com/rust-embedded/critical-section
[rust-lang/rust#100650]: https://github.com/rust-lang/rust/issues/100650
[serde]: https://github.com/serde-rs/serde

<!-- tidy:sync-markdown-to-rustdoc:end -->
*/

#![no_std]
#![doc(test(
    no_crate_inject,
    attr(
        deny(warnings, rust_2018_idioms, single_use_lifetimes),
        allow(dead_code, unused_variables)
    )
))]
#![cfg_attr(not(portable_atomic_no_unsafe_op_in_unsafe_fn), warn(unsafe_op_in_unsafe_fn))] // unsafe_op_in_unsafe_fn requires Rust 1.52
#![cfg_attr(portable_atomic_no_unsafe_op_in_unsafe_fn, allow(unused_unsafe))]
#![warn(
    // Lints that may help when writing public library.
    missing_debug_implementations,
    // missing_docs,
    clippy::alloc_instead_of_core,
    clippy::exhaustive_enums,
    clippy::exhaustive_structs,
    clippy::impl_trait_in_params,
    clippy::missing_inline_in_public_items,
    clippy::std_instead_of_alloc,
    clippy::std_instead_of_core,
    // Code outside of cfg(feature = "float") shouldn't use float.
    clippy::float_arithmetic,
)]
#![cfg_attr(not(portable_atomic_no_asm), warn(missing_docs))] // module-level #![allow(missing_docs)] doesn't work for macros on old rustc
#![cfg_attr(portable_atomic_no_strict_provenance, allow(unstable_name_collisions))]
#![allow(clippy::inline_always, clippy::used_underscore_items)]
// asm_experimental_arch
// AVR, MSP430, and Xtensa are tier 3 platforms and require nightly anyway.
// On tier 2 platforms (powerpc64), we use cfg set by build script to
// determine whether this feature is available or not.
#![cfg_attr(
    all(
        not(portable_atomic_no_asm),
        any(
            target_arch = "avr",
            target_arch = "msp430",
            all(target_arch = "xtensa", portable_atomic_unsafe_assume_single_core),
            all(target_arch = "powerpc64", portable_atomic_unstable_asm_experimental_arch),
        ),
    ),
    feature(asm_experimental_arch)
)]
// f16/f128
// cfg is unstable and explicitly enabled by the user
#![cfg_attr(portable_atomic_unstable_f16, feature(f16))]
#![cfg_attr(portable_atomic_unstable_f128, feature(f128))]
// Old nightly only
// These features are already stabilized or have already been removed from compilers,
// and can safely be enabled for old nightly as long as version detection works.
// - cfg(target_has_atomic)
// - asm! on AArch64, Arm, RISC-V, x86, x86_64, Arm64EC, s390x
// - llvm_asm! on AVR (tier 3) and MSP430 (tier 3)
// - #[instruction_set] on non-Linux/Android pre-v6 Arm (tier 3)
// This also helps us test that our assembly code works with the minimum external
// LLVM version of the first rustc version in which inline assembly was stabilized.
#![cfg_attr(portable_atomic_unstable_cfg_target_has_atomic, feature(cfg_target_has_atomic))]
#![cfg_attr(
    all(
        portable_atomic_unstable_asm,
        any(
            target_arch = "aarch64",
            target_arch = "arm",
            target_arch = "riscv32",
            target_arch = "riscv64",
            target_arch = "x86",
            target_arch = "x86_64",
        ),
    ),
    feature(asm)
)]
#![cfg_attr(
    all(
        portable_atomic_unstable_asm_experimental_arch,
        any(target_arch = "arm64ec", target_arch = "s390x"),
    ),
    feature(asm_experimental_arch)
)]
#![cfg_attr(
    all(any(target_arch = "avr", target_arch = "msp430"), portable_atomic_no_asm),
    feature(llvm_asm)
)]
#![cfg_attr(
    all(
        target_arch = "arm",
        portable_atomic_unstable_isa_attribute,
        any(test, portable_atomic_unsafe_assume_single_core),
        not(any(target_feature = "v6", portable_atomic_target_feature = "v6")),
        not(target_has_atomic = "ptr"),
    ),
    feature(isa_attribute)
)]
// Miri and/or ThreadSanitizer only
// They do not support inline assembly, so we need to use unstable features instead.
// Since they require nightly compilers anyway, we can use the unstable features.
// This is not an ideal situation, but it is still better than always using lock-based
// fallback and causing memory ordering problems to be missed by these checkers.
#![cfg_attr(
    all(
        any(
            target_arch = "aarch64",
            target_arch = "arm64ec",
            target_arch = "powerpc64",
            target_arch = "s390x",
        ),
        any(miri, portable_atomic_sanitize_thread),
    ),
    allow(internal_features)
)]
#![cfg_attr(
    all(
        any(
            target_arch = "aarch64",
            target_arch = "arm64ec",
            target_arch = "powerpc64",
            target_arch = "s390x",
        ),
        any(miri, portable_atomic_sanitize_thread),
    ),
    feature(core_intrinsics)
)]
// docs.rs only (cfg is enabled by docs.rs, not build script)
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(
    all(
        portable_atomic_no_atomic_load_store,
        not(any(
            target_arch = "avr",
            target_arch = "bpf",
            target_arch = "msp430",
            target_arch = "riscv32",
            target_arch = "riscv64",
            feature = "critical-section",
        )),
    ),
    allow(unused_imports, unused_macros, clippy::unused_trait_names)
)]

// There are currently no 128-bit or higher builtin targets.
// (Although some of our generic code is written with the future
// addition of 128-bit targets in mind.)
// Note that Rust (and C99) pointers must be at least 16-bit (i.e., 8-bit targets are impossible): https://github.com/rust-lang/rust/pull/49305
#[cfg(not(any(
    target_pointer_width = "16",
    target_pointer_width = "32",
    target_pointer_width = "64",
)))]
compile_error!(
    "portable-atomic currently only supports targets with {16,32,64}-bit pointer width; \
     if you need support for others, \
     please submit an issue at <https://github.com/taiki-e/portable-atomic>"
);

#[cfg(portable_atomic_unsafe_assume_single_core)]
#[cfg_attr(portable_atomic_no_cfg_target_has_atomic, cfg(not(portable_atomic_no_atomic_cas)))]
#[cfg_attr(not(portable_atomic_no_cfg_target_has_atomic), cfg(target_has_atomic = "ptr"))]
compile_error!(
    "`portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature) \
     is not compatible with targets that support atomic CAS;\n\
     see also <https://github.com/taiki-e/portable-atomic/issues/148> for troubleshooting"
);
#[cfg(portable_atomic_unsafe_assume_single_core)]
#[cfg_attr(portable_atomic_no_cfg_target_has_atomic, cfg(portable_atomic_no_atomic_cas))]
#[cfg_attr(not(portable_atomic_no_cfg_target_has_atomic), cfg(not(target_has_atomic = "ptr")))]
#[cfg(not(any(
    target_arch = "arm",
    target_arch = "avr",
    target_arch = "msp430",
    target_arch = "riscv32",
    target_arch = "riscv64",
    target_arch = "xtensa",
)))]
compile_error!(
    "`portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature) \
     is not supported yet on this target;\n\
     if you need unsafe-assume-single-core support for this target,\n\
     please submit an issue at <https://github.com/taiki-e/portable-atomic>"
);

#[cfg(portable_atomic_no_outline_atomics)]
#[cfg(not(any(
    target_arch = "aarch64",
    target_arch = "arm",
    target_arch = "arm64ec",
    target_arch = "powerpc64",
    target_arch = "riscv32",
    target_arch = "riscv64",
    target_arch = "x86_64",
)))]
compile_error!("`portable_atomic_no_outline_atomics` cfg is not compatible with this target");
#[cfg(portable_atomic_outline_atomics)]
#[cfg(not(any(
    target_arch = "aarch64",
    target_arch = "powerpc64",
    target_arch = "riscv32",
    target_arch = "riscv64",
)))]
compile_error!("`portable_atomic_outline_atomics` cfg is not compatible with this target");

#[cfg(portable_atomic_disable_fiq)]
#[cfg(not(all(
    target_arch = "arm",
    not(any(target_feature = "mclass", portable_atomic_target_feature = "mclass")),
)))]
compile_error!(
    "`portable_atomic_disable_fiq` cfg (`disable-fiq` feature) is only available on pre-v6 Arm"
);
#[cfg(portable_atomic_s_mode)]
#[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64")))]
compile_error!("`portable_atomic_s_mode` cfg (`s-mode` feature) is only available on RISC-V");
#[cfg(portable_atomic_force_amo)]
#[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64")))]
compile_error!("`portable_atomic_force_amo` cfg (`force-amo` feature) is only available on RISC-V");

#[cfg(portable_atomic_disable_fiq)]
#[cfg(not(portable_atomic_unsafe_assume_single_core))]
compile_error!(
    "`portable_atomic_disable_fiq` cfg (`disable-fiq` feature) may only be used together with `portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature)"
);
#[cfg(portable_atomic_s_mode)]
#[cfg(not(portable_atomic_unsafe_assume_single_core))]
compile_error!(
    "`portable_atomic_s_mode` cfg (`s-mode` feature) may only be used together with `portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature)"
);
#[cfg(portable_atomic_force_amo)]
#[cfg(not(portable_atomic_unsafe_assume_single_core))]
compile_error!(
    "`portable_atomic_force_amo` cfg (`force-amo` feature) may only be used together with `portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature)"
);

#[cfg(all(portable_atomic_unsafe_assume_single_core, feature = "critical-section"))]
compile_error!(
    "you may not enable `critical-section` feature and `portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature) at the same time"
);

#[cfg(feature = "require-cas")]
#[cfg_attr(
    portable_atomic_no_cfg_target_has_atomic,
    cfg(not(any(
        not(portable_atomic_no_atomic_cas),
        portable_atomic_unsafe_assume_single_core,
        feature = "critical-section",
        target_arch = "avr",
        target_arch = "msp430",
    )))
)]
#[cfg_attr(
    not(portable_atomic_no_cfg_target_has_atomic),
    cfg(not(any(
        target_has_atomic = "ptr",
        portable_atomic_unsafe_assume_single_core,
        feature = "critical-section",
        target_arch = "avr",
        target_arch = "msp430",
    )))
)]
compile_error!(
    "dependents require atomic CAS but it is not available on this target by default;\n\
    consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features.\n\
    see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
);

#[cfg(any(test, feature = "std"))]
extern crate std;

#[macro_use]
mod cfgs;
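// Aliases of the sized variants for the pointer width: like
// cfg_has_atomic_{8,16,32,64,128}!, these macros expand the enclosed items
// only if the corresponding atomic type is (or is not) available, e.g.,
// cfg_has_atomic_ptr! { /* items that use AtomicUsize/AtomicPtr */ }.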
#[cfg(target_pointer_width = "16")]
pub use self::{cfg_has_atomic_16 as cfg_has_atomic_ptr, cfg_no_atomic_16 as cfg_no_atomic_ptr};
#[cfg(target_pointer_width = "32")]
pub use self::{cfg_has_atomic_32 as cfg_has_atomic_ptr, cfg_no_atomic_32 as cfg_no_atomic_ptr};
#[cfg(target_pointer_width = "64")]
pub use self::{cfg_has_atomic_64 as cfg_has_atomic_ptr, cfg_no_atomic_64 as cfg_no_atomic_ptr};
#[cfg(target_pointer_width = "128")]
pub use self::{cfg_has_atomic_128 as cfg_has_atomic_ptr, cfg_no_atomic_128 as cfg_no_atomic_ptr};

#[macro_use]
mod utils;

#[cfg(test)]
#[macro_use]
mod tests;

#[doc(no_inline)]
pub use core::sync::atomic::Ordering;

// LLVM doesn't support fence/compiler_fence for MSP430.
#[cfg(target_arch = "msp430")]
pub use self::imp::msp430::{compiler_fence, fence};
#[doc(no_inline)]
#[cfg(not(target_arch = "msp430"))]
pub use core::sync::atomic::{compiler_fence, fence};

mod imp;

pub mod hint {
    //! Re-export of the [`core::hint`] module.
    //!
    //! The only difference from the [`core::hint`] module is that [`spin_loop`]
    //! is available in all Rust versions that this crate supports.
    //!
    //! ```
    //! use portable_atomic::hint;
    //!
    //! hint::spin_loop();
    //! ```

    #[doc(no_inline)]
    pub use core::hint::*;

    /// Emits a machine instruction to signal the processor that it is running in
    /// a busy-wait spin-loop ("spin lock").
    ///
    /// Upon receiving the spin-loop signal the processor can optimize its behavior by,
    /// for example, saving power or switching hyper-threads.
    ///
    /// This function is different from [`thread::yield_now`] which directly
    /// yields to the system's scheduler, whereas `spin_loop` does not interact
    /// with the operating system.
    ///
    /// A common use case for `spin_loop` is implementing bounded optimistic
    /// spinning in a CAS loop in synchronization primitives. To avoid problems
    /// like priority inversion, it is strongly recommended that the spin loop is
    /// terminated after a finite amount of iterations and an appropriate blocking
    /// syscall is made.
    ///
    /// **Note:** On platforms that do not support receiving spin-loop hints this
    /// function does not do anything at all.
    ///
    /// [`thread::yield_now`]: https://doc.rust-lang.org/std/thread/fn.yield_now.html
    #[inline]
    pub fn spin_loop() {
        #[allow(deprecated)]
        core::sync::atomic::spin_loop_hint();
    }
}

#[cfg(doc)]
use core::sync::atomic::Ordering::{AcqRel, Acquire, Relaxed, Release, SeqCst};
use core::{fmt, ptr};

#[cfg(portable_atomic_no_strict_provenance)]
#[cfg(miri)]
use crate::utils::ptr::PtrExt as _;

cfg_has_atomic_8! {
/// A boolean type which can be safely shared between threads.
///
/// This type has the same in-memory representation as a [`bool`].
///
/// If the compiler and the platform support atomic loads and stores of `u8`,
/// this type is a wrapper for the standard library's
/// [`AtomicBool`](core::sync::atomic::AtomicBool). If the platform supports it
/// but the compiler does not, atomic operations are implemented using inline
/// assembly.
#[repr(C, align(1))]
pub struct AtomicBool {
    v: core::cell::UnsafeCell<u8>,
}

impl Default for AtomicBool {
    /// Creates an `AtomicBool` initialized to `false`.
    #[inline]
    fn default() -> Self {
        Self::new(false)
    }
}

impl From<bool> for AtomicBool {
    /// Converts a `bool` into an `AtomicBool`.
    #[inline]
    fn from(b: bool) -> Self {
        Self::new(b)
    }
}

// Send is implicitly implemented.
// SAFETY: any data races are prevented by disabling interrupts or
// atomic intrinsics (see module-level comments).
unsafe impl Sync for AtomicBool {}

// UnwindSafe is implicitly implemented.
#[cfg(not(portable_atomic_no_core_unwind_safe))]
impl core::panic::RefUnwindSafe for AtomicBool {}
#[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
impl std::panic::RefUnwindSafe for AtomicBool {}

impl_debug_and_serde!(AtomicBool);

impl AtomicBool {
    /// Creates a new `AtomicBool`.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::AtomicBool;
    ///
    /// let atomic_true = AtomicBool::new(true);
    /// let atomic_false = AtomicBool::new(false);
    /// ```
    #[inline]
    #[must_use]
    pub const fn new(v: bool) -> Self {
        static_assert_layout!(AtomicBool, bool);
        Self { v: core::cell::UnsafeCell::new(v as u8) }
    }

    // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
    const_fn! {
        const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
        /// Creates a new `AtomicBool` from a pointer.
        ///
        /// This is `const fn` on Rust 1.83+.
        ///
        /// # Safety
        ///
        /// * `ptr` must be aligned to `align_of::<AtomicBool>()` (note that on some platforms this can
        ///   be bigger than `align_of::<bool>()`).
        /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
        /// * If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
        ///   behind `ptr` must have a happens-before relationship with atomic accesses via the returned
        ///   value (or vice-versa).
        ///   * In other words, time periods where the value is accessed atomically may not overlap
        ///     with periods where the value is accessed non-atomically.
        ///   * This requirement is trivially satisfied if `ptr` is never used non-atomically for the
        ///     duration of lifetime `'a`. Most use cases should be able to follow this guideline.
        ///   * This requirement is also trivially satisfied if all accesses (atomic or not) are done
        ///     from the same thread.
        /// * If this atomic type is *not* lock-free:
        ///   * Any accesses to the value behind `ptr` must have a happens-before relationship
        ///     with accesses via the returned value (or vice-versa).
        ///   * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
        ///     be compatible with operations performed by this atomic type.
        /// * This method must not be used to create overlapping or mixed-size atomic accesses, as
        ///   these are not supported by the memory model.
        ///
        /// [valid]: core::ptr#safety
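        ///
        /// # Examples
        ///
        /// A minimal example (the safety requirements above must hold):
        ///
        /// ```
        /// use portable_atomic::{AtomicBool, Ordering};
        ///
        /// let mut data = true;
        /// let a = unsafe { AtomicBool::from_ptr(&mut data) };
        /// assert_eq!(a.load(Ordering::Relaxed), true);
        /// ```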
        #[inline]
        #[must_use]
        pub const unsafe fn from_ptr<'a>(ptr: *mut bool) -> &'a Self {
            #[allow(clippy::cast_ptr_alignment)]
            // SAFETY: guaranteed by the caller
            unsafe { &*(ptr as *mut Self) }
        }
    }

    /// Returns `true` if operations on values of this type are lock-free.
    ///
    /// If the compiler or the platform doesn't support the necessary
    /// atomic instructions, global locks for every potentially
    /// concurrent atomic operation will be used.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::AtomicBool;
    ///
    /// let is_lock_free = AtomicBool::is_lock_free();
    /// ```
    #[inline]
    #[must_use]
    pub fn is_lock_free() -> bool {
        imp::AtomicU8::is_lock_free()
    }

    /// Returns `true` if operations on values of this type are lock-free.
    ///
    /// If the compiler or the platform doesn't support the necessary
    /// atomic instructions, global locks for every potentially
    /// concurrent atomic operation will be used.
    ///
    /// **Note:** If the atomic operation relies on dynamic CPU feature detection,
    /// this type may be lock-free even if the function returns false.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::AtomicBool;
    ///
    /// const IS_ALWAYS_LOCK_FREE: bool = AtomicBool::is_always_lock_free();
    /// ```
    #[inline]
    #[must_use]
    pub const fn is_always_lock_free() -> bool {
        imp::AtomicU8::IS_ALWAYS_LOCK_FREE
    }
    #[cfg(test)]
    const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();

    const_fn! {
        const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
        /// Returns a mutable reference to the underlying [`bool`].
        ///
        /// This is safe because the mutable reference guarantees that no other threads are
        /// concurrently accessing the atomic data.
        ///
        /// This is `const fn` on Rust 1.83+.
        ///
        /// # Examples
        ///
        /// ```
        /// use portable_atomic::{AtomicBool, Ordering};
        ///
        /// let mut some_bool = AtomicBool::new(true);
        /// assert_eq!(*some_bool.get_mut(), true);
        /// *some_bool.get_mut() = false;
        /// assert_eq!(some_bool.load(Ordering::SeqCst), false);
        /// ```
        #[inline]
        pub const fn get_mut(&mut self) -> &mut bool {
            // SAFETY: the mutable reference guarantees unique ownership.
            unsafe { &mut *self.as_ptr() }
        }
    }

    // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
    // https://github.com/rust-lang/rust/issues/76314

    const_fn! {
        const_if: #[cfg(not(portable_atomic_no_const_transmute))];
        /// Consumes the atomic and returns the contained value.
        ///
        /// This is safe because passing `self` by value guarantees that no other threads are
        /// concurrently accessing the atomic data.
        ///
        /// This is `const fn` on Rust 1.56+.
        ///
        /// # Examples
        ///
        /// ```
        /// use portable_atomic::AtomicBool;
        ///
        /// let some_bool = AtomicBool::new(true);
        /// assert_eq!(some_bool.into_inner(), true);
        /// ```
        #[inline]
        pub const fn into_inner(self) -> bool {
            // SAFETY: AtomicBool and u8 have the same size and in-memory representations,
            // so they can be safely transmuted.
            // (const UnsafeCell::into_inner is unstable)
            unsafe { core::mem::transmute::<AtomicBool, u8>(self) != 0 }
        }
    }

    /// Loads a value from the bool.
    ///
    /// `load` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Release`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.load(Ordering::Relaxed), true);
    /// ```
    #[inline]
    #[cfg_attr(
        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
        track_caller
    )]
    pub fn load(&self, order: Ordering) -> bool {
        self.as_atomic_u8().load(order) != 0
    }

    /// Stores a value into the bool.
    ///
    /// `store` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// some_bool.store(false, Ordering::Relaxed);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[cfg_attr(
        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
        track_caller
    )]
    pub fn store(&self, val: bool, order: Ordering) {
        self.as_atomic_u8().store(val as u8, order);
    }

    cfg_has_atomic_cas_or_amo32! {
    /// Stores a value into the bool, returning the previous value.
    ///
    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.swap(false, Ordering::Relaxed), true);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn swap(&self, val: bool, order: Ordering) -> bool {
        #[cfg(any(target_arch = "riscv32", target_arch = "riscv64", target_arch = "loongarch64"))]
        {
            // See https://github.com/rust-lang/rust/pull/114034 for details.
            // https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L249
            // https://godbolt.org/z/ofbGGdx44
            if val { self.fetch_or(true, order) } else { self.fetch_and(false, order) }
        }
        #[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64", target_arch = "loongarch64")))]
        {
            self.as_atomic_u8().swap(val as u8, order) != 0
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// The return value is a result indicating whether the new value was written and containing
    /// the previous value. On success this value is guaranteed to be equal to `current`.
    ///
    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. `success` describes the required ordering for the
    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
    /// `failure` describes the required ordering for the load operation that takes place when
    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `failure` is [`Release`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(
    ///     some_bool.compare_exchange(true, false, Ordering::Acquire, Ordering::Relaxed),
    ///     Ok(true)
    /// );
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    ///
    /// assert_eq!(
    ///     some_bool.compare_exchange(true, true, Ordering::SeqCst, Ordering::Acquire),
    ///     Err(false)
    /// );
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
    #[inline]
    #[cfg_attr(
        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
        track_caller
    )]
    pub fn compare_exchange(
        &self,
        current: bool,
        new: bool,
        success: Ordering,
        failure: Ordering,
    ) -> Result<bool, bool> {
        #[cfg(any(target_arch = "riscv32", target_arch = "riscv64", target_arch = "loongarch64"))]
        {
            // See https://github.com/rust-lang/rust/pull/114034 for details.
            // https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L249
            // https://godbolt.org/z/ofbGGdx44
            crate::utils::assert_compare_exchange_ordering(success, failure);
            let order = crate::utils::upgrade_success_ordering(success, failure);
            let old = if current == new {
                // This is a no-op, but we still need to perform the operation
                // for memory ordering reasons.
                self.fetch_or(false, order)
            } else {
                // This sets the value to the new one and returns the old one.
                self.swap(new, order)
            };
            if old == current { Ok(old) } else { Err(old) }
        }
        #[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64", target_arch = "loongarch64")))]
        {
            match self.as_atomic_u8().compare_exchange(current as u8, new as u8, success, failure) {
                Ok(x) => Ok(x != 0),
                Err(x) => Err(x != 0),
            }
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// Unlike [`AtomicBool::compare_exchange`], this function is allowed to spuriously fail even when the
    /// comparison succeeds, which can result in more efficient code on some platforms. The
    /// return value is a result indicating whether the new value was written and containing the
    /// previous value.
    ///
    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. `success` describes the required ordering for the
    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
    /// `failure` describes the required ordering for the load operation that takes place when
    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `failure` is [`Release`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let val = AtomicBool::new(false);
    ///
    /// let new = true;
    /// let mut old = val.load(Ordering::Relaxed);
    /// loop {
    ///     match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
    ///         Ok(_) => break,
    ///         Err(x) => old = x,
    ///     }
    /// }
    /// ```
    #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
    #[inline]
    #[cfg_attr(
        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
        track_caller
    )]
    pub fn compare_exchange_weak(
        &self,
        current: bool,
        new: bool,
        success: Ordering,
        failure: Ordering,
    ) -> Result<bool, bool> {
        #[cfg(any(target_arch = "riscv32", target_arch = "riscv64", target_arch = "loongarch64"))]
        {
            // See https://github.com/rust-lang/rust/pull/114034 for details.
            // https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L249
            // https://godbolt.org/z/ofbGGdx44
            self.compare_exchange(current, new, success, failure)
        }
        #[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64", target_arch = "loongarch64")))]
        {
            match self
                .as_atomic_u8()
                .compare_exchange_weak(current as u8, new as u8, success, failure)
            {
                Ok(x) => Ok(x != 0),
                Err(x) => Err(x != 0),
            }
        }
    }

    /// Logical "and" with a boolean value.
    ///
    /// Performs a logical "and" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_and(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_and(&self, val: bool, order: Ordering) -> bool {
        self.as_atomic_u8().fetch_and(val as u8, order) != 0
    }

    /// Logical "and" with a boolean value.
    ///
    /// Performs a logical "and" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Unlike `fetch_and`, this does not return the previous value.
    ///
    /// `and` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// This function may generate more efficient code than `fetch_and` on some platforms.
    ///
    /// - x86/x86_64: `lock and` instead of `cmpxchg` loop
    /// - MSP430: `and` instead of disabling interrupts
    ///
    /// Note: On x86/x86_64, the use of either function should not usually
    /// affect the generated code, because LLVM can properly optimize the case
    /// where the result is unused.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// foo.and(false, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(true);
    /// foo.and(true, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// foo.and(false, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn and(&self, val: bool, order: Ordering) {
        self.as_atomic_u8().and(val as u8, order);
    }

    /// Logical "nand" with a boolean value.
    ///
    /// Performs a logical "nand" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_nand(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst) as usize, 0);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool {
        // https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L973-L985
        if val {
            // !(x & true) == !x
            // We must invert the bool.
            self.fetch_xor(true, order)
        } else {
            // !(x & false) == true
            // We must set the bool to true.
            self.swap(true, order)
        }
    }

1098    /// Logical "or" with a boolean value.
1099    ///
1100    /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
1101    /// new value to the result.
1102    ///
1103    /// Returns the previous value.
1104    ///
1105    /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
1106    /// of this operation. All ordering modes are possible. Note that using
1107    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1108    /// using [`Release`] makes the load part [`Relaxed`].
1109    ///
1110    /// # Examples
1111    ///
1112    /// ```
1113    /// use portable_atomic::{AtomicBool, Ordering};
1114    ///
1115    /// let foo = AtomicBool::new(true);
1116    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), true);
1117    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1118    ///
1119    /// let foo = AtomicBool::new(true);
1120    /// assert_eq!(foo.fetch_or(true, Ordering::SeqCst), true);
1121    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1122    ///
1123    /// let foo = AtomicBool::new(false);
1124    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), false);
1125    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1126    /// ```
1127    #[inline]
1128    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1129    pub fn fetch_or(&self, val: bool, order: Ordering) -> bool {
1130        self.as_atomic_u8().fetch_or(val as u8, order) != 0
1131    }
1132
1133    /// Logical "or" with a boolean value.
1134    ///
1135    /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
1136    /// new value to the result.
1137    ///
1138    /// Unlike `fetch_or`, this does not return the previous value.
1139    ///
1140    /// `or` takes an [`Ordering`] argument which describes the memory ordering
1141    /// of this operation. All ordering modes are possible. Note that using
1142    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1143    /// using [`Release`] makes the load part [`Relaxed`].
1144    ///
1145    /// This function may generate more efficient code than `fetch_or` on some platforms.
1146    ///
1147    /// - x86/x86_64: `lock or` instead of `cmpxchg` loop
1148    /// - MSP430: `bis` instead of disabling interrupts
1149    ///
1150    /// Note: On x86/x86_64, the use of either function should not usually
1151    /// affect the generated code, because LLVM can properly optimize the case
1152    /// where the result is unused.
1153    ///
1154    /// # Examples
1155    ///
1156    /// ```
1157    /// use portable_atomic::{AtomicBool, Ordering};
1158    ///
1159    /// let foo = AtomicBool::new(true);
1160    /// foo.or(false, Ordering::SeqCst);
1161    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1162    ///
1163    /// let foo = AtomicBool::new(true);
1164    /// foo.or(true, Ordering::SeqCst);
1165    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1166    ///
1167    /// let foo = AtomicBool::new(false);
1168    /// foo.or(false, Ordering::SeqCst);
1169    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1170    /// ```
1171    #[inline]
1172    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1173    pub fn or(&self, val: bool, order: Ordering) {
1174        self.as_atomic_u8().or(val as u8, order);
1175    }
1176
1177    /// Logical "xor" with a boolean value.
1178    ///
1179    /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
1180    /// the new value to the result.
1181    ///
1182    /// Returns the previous value.
1183    ///
1184    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
1185    /// of this operation. All ordering modes are possible. Note that using
1186    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1187    /// using [`Release`] makes the load part [`Relaxed`].
1188    ///
1189    /// # Examples
1190    ///
1191    /// ```
1192    /// use portable_atomic::{AtomicBool, Ordering};
1193    ///
1194    /// let foo = AtomicBool::new(true);
1195    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), true);
1196    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1197    ///
1198    /// let foo = AtomicBool::new(true);
1199    /// assert_eq!(foo.fetch_xor(true, Ordering::SeqCst), true);
1200    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1201    ///
1202    /// let foo = AtomicBool::new(false);
1203    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), false);
1204    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1205    /// ```
1206    #[inline]
1207    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1208    pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool {
1209        self.as_atomic_u8().fetch_xor(val as u8, order) != 0
1210    }
1211
1212    /// Logical "xor" with a boolean value.
1213    ///
1214    /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
1215    /// the new value to the result.
1216    ///
1217    /// Unlike `fetch_xor`, this does not return the previous value.
1218    ///
1219    /// `xor` takes an [`Ordering`] argument which describes the memory ordering
1220    /// of this operation. All ordering modes are possible. Note that using
1221    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1222    /// using [`Release`] makes the load part [`Relaxed`].
1223    ///
1224    /// This function may generate more efficient code than `fetch_xor` on some platforms.
1225    ///
1226    /// - x86/x86_64: `lock xor` instead of `cmpxchg` loop
1227    /// - MSP430: `xor` instead of disabling interrupts
1228    ///
1229    /// Note: On x86/x86_64, the use of either function should not usually
1230    /// affect the generated code, because LLVM can properly optimize the case
1231    /// where the result is unused.
1232    ///
1233    /// # Examples
1234    ///
1235    /// ```
1236    /// use portable_atomic::{AtomicBool, Ordering};
1237    ///
1238    /// let foo = AtomicBool::new(true);
1239    /// foo.xor(false, Ordering::SeqCst);
1240    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1241    ///
1242    /// let foo = AtomicBool::new(true);
1243    /// foo.xor(true, Ordering::SeqCst);
1244    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1245    ///
1246    /// let foo = AtomicBool::new(false);
1247    /// foo.xor(false, Ordering::SeqCst);
1248    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1249    /// ```
1250    #[inline]
1251    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1252    pub fn xor(&self, val: bool, order: Ordering) {
1253        self.as_atomic_u8().xor(val as u8, order);
1254    }
1255
    /// Logical "not" on the current value.
1257    ///
1258    /// Performs a logical "not" operation on the current value, and sets
1259    /// the new value to the result.
1260    ///
1261    /// Returns the previous value.
1262    ///
1263    /// `fetch_not` takes an [`Ordering`] argument which describes the memory ordering
1264    /// of this operation. All ordering modes are possible. Note that using
1265    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1266    /// using [`Release`] makes the load part [`Relaxed`].
1267    ///
1268    /// # Examples
1269    ///
1270    /// ```
1271    /// use portable_atomic::{AtomicBool, Ordering};
1272    ///
1273    /// let foo = AtomicBool::new(true);
1274    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), true);
1275    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1276    ///
1277    /// let foo = AtomicBool::new(false);
1278    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), false);
1279    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1280    /// ```
1281    #[inline]
1282    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1283    pub fn fetch_not(&self, order: Ordering) -> bool {
1284        self.fetch_xor(true, order)
1285    }
1286
    /// Logical "not" on the current value.
1288    ///
1289    /// Performs a logical "not" operation on the current value, and sets
1290    /// the new value to the result.
1291    ///
1292    /// Unlike `fetch_not`, this does not return the previous value.
1293    ///
1294    /// `not` takes an [`Ordering`] argument which describes the memory ordering
1295    /// of this operation. All ordering modes are possible. Note that using
1296    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1297    /// using [`Release`] makes the load part [`Relaxed`].
1298    ///
1299    /// This function may generate more efficient code than `fetch_not` on some platforms.
1300    ///
1301    /// - x86/x86_64: `lock xor` instead of `cmpxchg` loop
1302    /// - MSP430: `xor` instead of disabling interrupts
1303    ///
1304    /// Note: On x86/x86_64, the use of either function should not usually
1305    /// affect the generated code, because LLVM can properly optimize the case
1306    /// where the result is unused.
1307    ///
1308    /// # Examples
1309    ///
1310    /// ```
1311    /// use portable_atomic::{AtomicBool, Ordering};
1312    ///
1313    /// let foo = AtomicBool::new(true);
1314    /// foo.not(Ordering::SeqCst);
1315    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1316    ///
1317    /// let foo = AtomicBool::new(false);
1318    /// foo.not(Ordering::SeqCst);
1319    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1320    /// ```
1321    #[inline]
1322    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1323    pub fn not(&self, order: Ordering) {
1324        self.xor(true, order);
1325    }
1326
1327    /// Fetches the value, and applies a function to it that returns an optional
1328    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1329    /// returned `Some(_)`, else `Err(previous_value)`.
1330    ///
1331    /// Note: This may call the function multiple times if the value has been
1332    /// changed from other threads in the meantime, as long as the function
1333    /// returns `Some(_)`, but the function will have been applied only once to
1334    /// the stored value.
1335    ///
1336    /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1337    /// ordering of this operation. The first describes the required ordering for
1338    /// when the operation finally succeeds while the second describes the
1339    /// required ordering for loads. These correspond to the success and failure
1340    /// orderings of [`compare_exchange`](Self::compare_exchange) respectively.
1341    ///
1342    /// Using [`Acquire`] as success ordering makes the store part of this
1343    /// operation [`Relaxed`], and using [`Release`] makes the final successful
1344    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1345    /// [`Acquire`] or [`Relaxed`].
1346    ///
1347    /// # Considerations
1348    ///
1349    /// This method is not magic; it is not provided by the hardware.
1350    /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
1351    /// and suffers from the same drawbacks.
1352    /// In particular, this method will not circumvent the [ABA Problem].
1353    ///
1354    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1355    ///
1356    /// # Panics
1357    ///
    /// Panics if `fetch_order` is [`Release`] or [`AcqRel`].
1359    ///
1360    /// # Examples
1361    ///
1362    /// ```
1363    /// use portable_atomic::{AtomicBool, Ordering};
1364    ///
1365    /// let x = AtomicBool::new(false);
1366    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
1367    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
1368    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
1369    /// assert_eq!(x.load(Ordering::SeqCst), false);
1370    /// ```
1371    #[inline]
1372    #[cfg_attr(
1373        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1374        track_caller
1375    )]
1376    pub fn fetch_update<F>(
1377        &self,
1378        set_order: Ordering,
1379        fetch_order: Ordering,
1380        mut f: F,
1381    ) -> Result<bool, bool>
1382    where
1383        F: FnMut(bool) -> Option<bool>,
1384    {
1385        let mut prev = self.load(fetch_order);
1386        while let Some(next) = f(prev) {
1387            match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
1388                x @ Ok(_) => return x,
1389                Err(next_prev) => prev = next_prev,
1390            }
1391        }
1392        Err(prev)
1393    }
1394    } // cfg_has_atomic_cas_or_amo32!
1395
1396    const_fn! {
1397        // This function is actually `const fn`-compatible on Rust 1.32+,
        // but is only made `const fn` on Rust 1.58+ to match other atomic types.
1399        const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
1400        /// Returns a mutable pointer to the underlying [`bool`].
1401        ///
1402        /// Returning an `*mut` pointer from a shared reference to this atomic is
1403        /// safe because the atomic types work with interior mutability. Any use of
1404        /// the returned raw pointer requires an `unsafe` block and has to uphold
1405        /// the safety requirements. If there is concurrent access, note the following
1406        /// additional safety requirements:
1407        ///
1408        /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
1409        ///   operations on it must be atomic.
1410        /// - Otherwise, any concurrent operations on it must be compatible with
1411        ///   operations performed by this atomic type.
1412        ///
1413        /// This is `const fn` on Rust 1.58+.
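        ///
        /// # Examples
        ///
        /// A minimal sketch; real code must uphold the safety requirements
        /// above for any use of the returned pointer:
        ///
        /// ```
        /// use portable_atomic::{AtomicBool, Ordering};
        ///
        /// let atomic = AtomicBool::new(false);
        /// // SAFETY: there is no concurrent access while the raw pointer is used.
        /// unsafe { atomic.as_ptr().write(true) }
        /// assert_eq!(atomic.load(Ordering::Relaxed), true);
        /// ```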
1414        #[inline]
1415        pub const fn as_ptr(&self) -> *mut bool {
1416            self.v.get() as *mut bool
1417        }
1418    }
1419
1420    #[inline(always)]
1421    fn as_atomic_u8(&self) -> &imp::AtomicU8 {
1422        // SAFETY: AtomicBool and imp::AtomicU8 have the same layout,
1423        // and both access data in the same way.
1424        unsafe { &*(self as *const Self as *const imp::AtomicU8) }
1425    }
1426}
1427// See https://github.com/taiki-e/portable-atomic/issues/180
1428#[cfg(not(feature = "require-cas"))]
1429cfg_no_atomic_cas! {
1430#[doc(hidden)]
1431#[allow(unused_variables, clippy::unused_self, clippy::extra_unused_lifetimes)]
1432impl<'a> AtomicBool {
1433    cfg_no_atomic_cas_or_amo32! {
1434    #[inline]
1435    pub fn swap(&self, val: bool, order: Ordering) -> bool
1436    where
1437        &'a Self: HasSwap,
1438    {
1439        unimplemented!()
1440    }
1441    #[inline]
1442    pub fn compare_exchange(
1443        &self,
1444        current: bool,
1445        new: bool,
1446        success: Ordering,
1447        failure: Ordering,
1448    ) -> Result<bool, bool>
1449    where
1450        &'a Self: HasCompareExchange,
1451    {
1452        unimplemented!()
1453    }
1454    #[inline]
1455    pub fn compare_exchange_weak(
1456        &self,
1457        current: bool,
1458        new: bool,
1459        success: Ordering,
1460        failure: Ordering,
1461    ) -> Result<bool, bool>
1462    where
1463        &'a Self: HasCompareExchangeWeak,
1464    {
1465        unimplemented!()
1466    }
1467    #[inline]
1468    pub fn fetch_and(&self, val: bool, order: Ordering) -> bool
1469    where
1470        &'a Self: HasFetchAnd,
1471    {
1472        unimplemented!()
1473    }
1474    #[inline]
1475    pub fn and(&self, val: bool, order: Ordering)
1476    where
1477        &'a Self: HasAnd,
1478    {
1479        unimplemented!()
1480    }
1481    #[inline]
1482    pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool
1483    where
1484        &'a Self: HasFetchNand,
1485    {
1486        unimplemented!()
1487    }
1488    #[inline]
1489    pub fn fetch_or(&self, val: bool, order: Ordering) -> bool
1490    where
1491        &'a Self: HasFetchOr,
1492    {
1493        unimplemented!()
1494    }
1495    #[inline]
1496    pub fn or(&self, val: bool, order: Ordering)
1497    where
1498        &'a Self: HasOr,
1499    {
1500        unimplemented!()
1501    }
1502    #[inline]
1503    pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool
1504    where
1505        &'a Self: HasFetchXor,
1506    {
1507        unimplemented!()
1508    }
1509    #[inline]
1510    pub fn xor(&self, val: bool, order: Ordering)
1511    where
1512        &'a Self: HasXor,
1513    {
1514        unimplemented!()
1515    }
1516    #[inline]
1517    pub fn fetch_not(&self, order: Ordering) -> bool
1518    where
1519        &'a Self: HasFetchNot,
1520    {
1521        unimplemented!()
1522    }
1523    #[inline]
1524    pub fn not(&self, order: Ordering)
1525    where
1526        &'a Self: HasNot,
1527    {
1528        unimplemented!()
1529    }
1530    #[inline]
1531    pub fn fetch_update<F>(
1532        &self,
1533        set_order: Ordering,
1534        fetch_order: Ordering,
1535        f: F,
1536    ) -> Result<bool, bool>
1537    where
1538        F: FnMut(bool) -> Option<bool>,
1539        &'a Self: HasFetchUpdate,
1540    {
1541        unimplemented!()
1542    }
1543    } // cfg_no_atomic_cas_or_amo32!
1544}
1545} // cfg_no_atomic_cas!
1546} // cfg_has_atomic_8!
1547
1548cfg_has_atomic_ptr! {
1549/// A raw pointer type which can be safely shared between threads.
1550///
1551/// This type has the same in-memory representation as a `*mut T`.
1552///
1553/// If the compiler and the platform support atomic loads and stores of pointers,
1554/// this type is a wrapper for the standard library's
1555/// [`AtomicPtr`](core::sync::atomic::AtomicPtr). If the platform supports it
1556/// but the compiler does not, atomic operations are implemented using inline
1557/// assembly.
1558// We can use #[repr(transparent)] here, but #[repr(C, align(N))]
1559// will show clearer docs.
1560#[cfg_attr(target_pointer_width = "16", repr(C, align(2)))]
1561#[cfg_attr(target_pointer_width = "32", repr(C, align(4)))]
1562#[cfg_attr(target_pointer_width = "64", repr(C, align(8)))]
1563#[cfg_attr(target_pointer_width = "128", repr(C, align(16)))]
1564pub struct AtomicPtr<T> {
1565    inner: imp::AtomicPtr<T>,
1566}
1567
1568impl<T> Default for AtomicPtr<T> {
1569    /// Creates a null `AtomicPtr<T>`.
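    ///
    /// # Examples
    ///
    /// A minimal sketch:
    ///
    /// ```
    /// use portable_atomic::{AtomicPtr, Ordering};
    ///
    /// let atomic: AtomicPtr<i32> = AtomicPtr::default();
    /// assert!(atomic.load(Ordering::Relaxed).is_null());
    /// ```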
1570    #[inline]
1571    fn default() -> Self {
1572        Self::new(ptr::null_mut())
1573    }
1574}
1575
1576impl<T> From<*mut T> for AtomicPtr<T> {
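    /// Converts a `*mut T` into an `AtomicPtr<T>`.
    ///
    /// # Examples
    ///
    /// A minimal sketch (the stored pointer is never dereferenced here):
    ///
    /// ```
    /// use portable_atomic::AtomicPtr;
    ///
    /// let ptr: *mut i32 = &mut 5;
    /// let atomic: AtomicPtr<i32> = ptr.into();
    /// ```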
1577    #[inline]
1578    fn from(p: *mut T) -> Self {
1579        Self::new(p)
1580    }
1581}
1582
1583impl<T> fmt::Debug for AtomicPtr<T> {
1584    #[inline] // fmt is not hot path, but #[inline] on fmt seems to still be useful: https://github.com/rust-lang/rust/pull/117727
1585    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1586        // std atomic types use Relaxed in Debug::fmt: https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L2188
1587        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
1588    }
1589}
1590
1591impl<T> fmt::Pointer for AtomicPtr<T> {
1592    #[inline] // fmt is not hot path, but #[inline] on fmt seems to still be useful: https://github.com/rust-lang/rust/pull/117727
1593    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1594        // std atomic types use Relaxed in Debug::fmt: https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L2188
1595        fmt::Pointer::fmt(&self.load(Ordering::Relaxed), f)
1596    }
1597}
1598
1599// UnwindSafe is implicitly implemented.
1600#[cfg(not(portable_atomic_no_core_unwind_safe))]
1601impl<T> core::panic::RefUnwindSafe for AtomicPtr<T> {}
1602#[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
1603impl<T> std::panic::RefUnwindSafe for AtomicPtr<T> {}
1604
1605impl<T> AtomicPtr<T> {
1606    /// Creates a new `AtomicPtr`.
1607    ///
1608    /// # Examples
1609    ///
1610    /// ```
1611    /// use portable_atomic::AtomicPtr;
1612    ///
1613    /// let ptr = &mut 5;
1614    /// let atomic_ptr = AtomicPtr::new(ptr);
1615    /// ```
1616    #[inline]
1617    #[must_use]
1618    pub const fn new(p: *mut T) -> Self {
1619        static_assert_layout!(AtomicPtr<()>, *mut ());
1620        Self { inner: imp::AtomicPtr::new(p) }
1621    }
1622
1623    // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
1624    const_fn! {
1625        const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
1626        /// Creates a new `AtomicPtr` from a pointer.
1627        ///
1628        /// This is `const fn` on Rust 1.83+.
1629        ///
1630        /// # Safety
1631        ///
1632        /// * `ptr` must be aligned to `align_of::<AtomicPtr<T>>()` (note that on some platforms this
1633        ///   can be bigger than `align_of::<*mut T>()`).
1634        /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
1635        /// * If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
1636        ///   behind `ptr` must have a happens-before relationship with atomic accesses via the returned
1637        ///   value (or vice-versa).
1638        ///   * In other words, time periods where the value is accessed atomically may not overlap
1639        ///     with periods where the value is accessed non-atomically.
1640        ///   * This requirement is trivially satisfied if `ptr` is never used non-atomically for the
1641        ///     duration of lifetime `'a`. Most use cases should be able to follow this guideline.
1642        ///   * This requirement is also trivially satisfied if all accesses (atomic or not) are done
1643        ///     from the same thread.
1644        /// * If this atomic type is *not* lock-free:
1645        ///   * Any accesses to the value behind `ptr` must have a happens-before relationship
1646        ///     with accesses via the returned value (or vice-versa).
1647        ///   * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
1648        ///     be compatible with operations performed by this atomic type.
1649        /// * This method must not be used to create overlapping or mixed-size atomic accesses, as
1650        ///   these are not supported by the memory model.
1651        ///
1652        /// [valid]: core::ptr#safety
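        ///
        /// # Examples
        ///
        /// A minimal sketch; here all accesses go through the returned
        /// reference, which trivially satisfies the requirements above:
        ///
        /// ```
        /// use portable_atomic::{AtomicPtr, Ordering};
        ///
        /// let mut data = 123;
        /// let mut ptr: *mut i32 = &mut data;
        /// // SAFETY: `ptr` is valid and (assumed) suitably aligned, and the
        /// // pointee is only accessed through `atomic` for the duration of the borrow.
        /// let atomic = unsafe { AtomicPtr::from_ptr(&mut ptr) };
        /// assert_eq!(unsafe { *atomic.load(Ordering::Relaxed) }, 123);
        /// ```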
1653        #[inline]
1654        #[must_use]
1655        pub const unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a Self {
1656            #[allow(clippy::cast_ptr_alignment)]
1657            // SAFETY: guaranteed by the caller
1658            unsafe { &*(ptr as *mut Self) }
1659        }
1660    }
1661
1662    /// Returns `true` if operations on values of this type are lock-free.
1663    ///
1664    /// If the compiler or the platform doesn't support the necessary
1665    /// atomic instructions, global locks for every potentially
1666    /// concurrent atomic operation will be used.
1667    ///
1668    /// # Examples
1669    ///
1670    /// ```
1671    /// use portable_atomic::AtomicPtr;
1672    ///
1673    /// let is_lock_free = AtomicPtr::<()>::is_lock_free();
1674    /// ```
1675    #[inline]
1676    #[must_use]
1677    pub fn is_lock_free() -> bool {
1678        <imp::AtomicPtr<T>>::is_lock_free()
1679    }
1680
1681    /// Returns `true` if operations on values of this type are lock-free.
1682    ///
1683    /// If the compiler or the platform doesn't support the necessary
1684    /// atomic instructions, global locks for every potentially
1685    /// concurrent atomic operation will be used.
1686    ///
1687    /// **Note:** If the atomic operation relies on dynamic CPU feature detection,
1688    /// this type may be lock-free even if the function returns false.
1689    ///
1690    /// # Examples
1691    ///
1692    /// ```
1693    /// use portable_atomic::AtomicPtr;
1694    ///
1695    /// const IS_ALWAYS_LOCK_FREE: bool = AtomicPtr::<()>::is_always_lock_free();
1696    /// ```
1697    #[inline]
1698    #[must_use]
1699    pub const fn is_always_lock_free() -> bool {
1700        <imp::AtomicPtr<T>>::IS_ALWAYS_LOCK_FREE
1701    }
1702    #[cfg(test)]
1703    const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();
1704
1705    const_fn! {
1706        const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
1707        /// Returns a mutable reference to the underlying pointer.
1708        ///
1709        /// This is safe because the mutable reference guarantees that no other threads are
1710        /// concurrently accessing the atomic data.
1711        ///
1712        /// This is `const fn` on Rust 1.83+.
1713        ///
1714        /// # Examples
1715        ///
1716        /// ```
1717        /// use portable_atomic::{AtomicPtr, Ordering};
1718        ///
1719        /// let mut data = 10;
1720        /// let mut atomic_ptr = AtomicPtr::new(&mut data);
1721        /// let mut other_data = 5;
1722        /// *atomic_ptr.get_mut() = &mut other_data;
1723        /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5);
1724        /// ```
1725        #[inline]
1726        pub const fn get_mut(&mut self) -> &mut *mut T {
1727            // SAFETY: the mutable reference guarantees unique ownership.
1728            // (core::sync::atomic::Atomic*::get_mut is not const yet)
1729            unsafe { &mut *self.as_ptr() }
1730        }
1731    }
1732
1733    // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
1734    // https://github.com/rust-lang/rust/issues/76314
1735
1736    const_fn! {
1737        const_if: #[cfg(not(portable_atomic_no_const_transmute))];
1738        /// Consumes the atomic and returns the contained value.
1739        ///
1740        /// This is safe because passing `self` by value guarantees that no other threads are
1741        /// concurrently accessing the atomic data.
1742        ///
1743        /// This is `const fn` on Rust 1.56+.
1744        ///
1745        /// # Examples
1746        ///
1747        /// ```
1748        /// use portable_atomic::AtomicPtr;
1749        ///
1750        /// let mut data = 5;
1751        /// let atomic_ptr = AtomicPtr::new(&mut data);
1752        /// assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5);
1753        /// ```
1754        #[inline]
1755        pub const fn into_inner(self) -> *mut T {
1756            // SAFETY: AtomicPtr<T> and *mut T have the same size and in-memory representations,
1757            // so they can be safely transmuted.
1758            // (const UnsafeCell::into_inner is unstable)
1759            unsafe { core::mem::transmute(self) }
1760        }
1761    }
1762
1763    /// Loads a value from the pointer.
1764    ///
1765    /// `load` takes an [`Ordering`] argument which describes the memory ordering
1766    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
1767    ///
1768    /// # Panics
1769    ///
1770    /// Panics if `order` is [`Release`] or [`AcqRel`].
1771    ///
1772    /// # Examples
1773    ///
1774    /// ```
1775    /// use portable_atomic::{AtomicPtr, Ordering};
1776    ///
1777    /// let ptr = &mut 5;
1778    /// let some_ptr = AtomicPtr::new(ptr);
1779    ///
1780    /// let value = some_ptr.load(Ordering::Relaxed);
1781    /// ```
1782    #[inline]
1783    #[cfg_attr(
1784        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1785        track_caller
1786    )]
1787    pub fn load(&self, order: Ordering) -> *mut T {
1788        self.inner.load(order)
1789    }
1790
1791    /// Stores a value into the pointer.
1792    ///
1793    /// `store` takes an [`Ordering`] argument which describes the memory ordering
1794    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
1795    ///
1796    /// # Panics
1797    ///
1798    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
1799    ///
1800    /// # Examples
1801    ///
1802    /// ```
1803    /// use portable_atomic::{AtomicPtr, Ordering};
1804    ///
1805    /// let ptr = &mut 5;
1806    /// let some_ptr = AtomicPtr::new(ptr);
1807    ///
1808    /// let other_ptr = &mut 10;
1809    ///
1810    /// some_ptr.store(other_ptr, Ordering::Relaxed);
1811    /// ```
1812    #[inline]
1813    #[cfg_attr(
1814        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1815        track_caller
1816    )]
1817    pub fn store(&self, ptr: *mut T, order: Ordering) {
1818        self.inner.store(ptr, order);
1819    }
1820
1821    cfg_has_atomic_cas_or_amo32! {
1822    /// Stores a value into the pointer, returning the previous value.
1823    ///
1824    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
1825    /// of this operation. All ordering modes are possible. Note that using
1826    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1827    /// using [`Release`] makes the load part [`Relaxed`].
1828    ///
1829    /// # Examples
1830    ///
1831    /// ```
1832    /// use portable_atomic::{AtomicPtr, Ordering};
1833    ///
1834    /// let ptr = &mut 5;
1835    /// let some_ptr = AtomicPtr::new(ptr);
1836    ///
1837    /// let other_ptr = &mut 10;
1838    ///
1839    /// let value = some_ptr.swap(other_ptr, Ordering::Relaxed);
1840    /// ```
1841    #[inline]
1842    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1843    pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T {
1844        self.inner.swap(ptr, order)
1845    }
1846
1847    cfg_has_atomic_cas! {
1848    /// Stores a value into the pointer if the current value is the same as the `current` value.
1849    ///
1850    /// The return value is a result indicating whether the new value was written and containing
1851    /// the previous value. On success this value is guaranteed to be equal to `current`.
1852    ///
1853    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
1854    /// ordering of this operation. `success` describes the required ordering for the
1855    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1856    /// `failure` describes the required ordering for the load operation that takes place when
1857    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1858    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1859    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1860    ///
1861    /// # Panics
1862    ///
    /// Panics if `failure` is [`Release`] or [`AcqRel`].
1864    ///
1865    /// # Examples
1866    ///
1867    /// ```
1868    /// use portable_atomic::{AtomicPtr, Ordering};
1869    ///
1870    /// let ptr = &mut 5;
1871    /// let some_ptr = AtomicPtr::new(ptr);
1872    ///
1873    /// let other_ptr = &mut 10;
1874    ///
1875    /// let value = some_ptr.compare_exchange(ptr, other_ptr, Ordering::SeqCst, Ordering::Relaxed);
1876    /// ```
1877    #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
1878    #[inline]
1879    #[cfg_attr(
1880        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1881        track_caller
1882    )]
1883    pub fn compare_exchange(
1884        &self,
1885        current: *mut T,
1886        new: *mut T,
1887        success: Ordering,
1888        failure: Ordering,
1889    ) -> Result<*mut T, *mut T> {
1890        self.inner.compare_exchange(current, new, success, failure)
1891    }
1892
1893    /// Stores a value into the pointer if the current value is the same as the `current` value.
1894    ///
1895    /// Unlike [`AtomicPtr::compare_exchange`], this function is allowed to spuriously fail even when the
1896    /// comparison succeeds, which can result in more efficient code on some platforms. The
1897    /// return value is a result indicating whether the new value was written and containing the
1898    /// previous value.
1899    ///
1900    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
1901    /// ordering of this operation. `success` describes the required ordering for the
1902    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1903    /// `failure` describes the required ordering for the load operation that takes place when
1904    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1905    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1906    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1907    ///
1908    /// # Panics
1909    ///
    /// Panics if `failure` is [`Release`] or [`AcqRel`].
1911    ///
1912    /// # Examples
1913    ///
1914    /// ```
1915    /// use portable_atomic::{AtomicPtr, Ordering};
1916    ///
1917    /// let some_ptr = AtomicPtr::new(&mut 5);
1918    ///
1919    /// let new = &mut 10;
1920    /// let mut old = some_ptr.load(Ordering::Relaxed);
1921    /// loop {
1922    ///     match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
1923    ///         Ok(_) => break,
1924    ///         Err(x) => old = x,
1925    ///     }
1926    /// }
1927    /// ```
1928    #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
1929    #[inline]
1930    #[cfg_attr(
1931        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1932        track_caller
1933    )]
1934    pub fn compare_exchange_weak(
1935        &self,
1936        current: *mut T,
1937        new: *mut T,
1938        success: Ordering,
1939        failure: Ordering,
1940    ) -> Result<*mut T, *mut T> {
1941        self.inner.compare_exchange_weak(current, new, success, failure)
1942    }
1943
1944    /// Fetches the value, and applies a function to it that returns an optional
1945    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1946    /// returned `Some(_)`, else `Err(previous_value)`.
1947    ///
1948    /// Note: This may call the function multiple times if the value has been
1949    /// changed from other threads in the meantime, as long as the function
1950    /// returns `Some(_)`, but the function will have been applied only once to
1951    /// the stored value.
1952    ///
1953    /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1954    /// ordering of this operation. The first describes the required ordering for
1955    /// when the operation finally succeeds while the second describes the
1956    /// required ordering for loads. These correspond to the success and failure
1957    /// orderings of [`compare_exchange`](Self::compare_exchange) respectively.
1958    ///
1959    /// Using [`Acquire`] as success ordering makes the store part of this
1960    /// operation [`Relaxed`], and using [`Release`] makes the final successful
1961    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1962    /// [`Acquire`] or [`Relaxed`].
1963    ///
1964    /// # Panics
1965    ///
    /// Panics if `fetch_order` is [`Release`] or [`AcqRel`].
1967    ///
1968    /// # Considerations
1969    ///
1970    /// This method is not magic; it is not provided by the hardware.
1971    /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
1972    /// and suffers from the same drawbacks.
1973    /// In particular, this method will not circumvent the [ABA Problem].
1974    ///
1975    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1976    ///
1977    /// # Examples
1978    ///
1979    /// ```
1980    /// use portable_atomic::{AtomicPtr, Ordering};
1981    ///
1982    /// let ptr: *mut _ = &mut 5;
1983    /// let some_ptr = AtomicPtr::new(ptr);
1984    ///
1985    /// let new: *mut _ = &mut 10;
1986    /// assert_eq!(some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
1987    /// let result = some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
1988    ///     if x == ptr {
1989    ///         Some(new)
1990    ///     } else {
1991    ///         None
1992    ///     }
1993    /// });
1994    /// assert_eq!(result, Ok(ptr));
1995    /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
1996    /// ```
1997    #[inline]
1998    #[cfg_attr(
1999        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
2000        track_caller
2001    )]
2002    pub fn fetch_update<F>(
2003        &self,
2004        set_order: Ordering,
2005        fetch_order: Ordering,
2006        mut f: F,
2007    ) -> Result<*mut T, *mut T>
2008    where
2009        F: FnMut(*mut T) -> Option<*mut T>,
2010    {
2011        let mut prev = self.load(fetch_order);
2012        while let Some(next) = f(prev) {
2013            match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
2014                x @ Ok(_) => return x,
2015                Err(next_prev) => prev = next_prev,
2016            }
2017        }
2018        Err(prev)
2019    }
2020
2021    #[cfg(miri)]
2022    #[inline]
2023    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2024    fn fetch_update_<F>(&self, order: Ordering, mut f: F) -> *mut T
2025    where
2026        F: FnMut(*mut T) -> *mut T,
2027    {
2028        // This is a private function and all instances of `f` only operate on the value
2029        // loaded, so there is no need to synchronize the first load/failed CAS.
2030        let mut prev = self.load(Ordering::Relaxed);
2031        loop {
2032            let next = f(prev);
2033            match self.compare_exchange_weak(prev, next, order, Ordering::Relaxed) {
2034                Ok(x) => return x,
2035                Err(next_prev) => prev = next_prev,
2036            }
2037        }
2038    }
2039    } // cfg_has_atomic_cas!
2040
2041    /// Offsets the pointer's address by adding `val` (in units of `T`),
2042    /// returning the previous pointer.
2043    ///
2044    /// This is equivalent to using [`wrapping_add`] to atomically perform the
2045    /// equivalent of `ptr = ptr.wrapping_add(val);`.
2046    ///
2047    /// This method operates in units of `T`, which means that it cannot be used
2048    /// to offset the pointer by an amount which is not a multiple of
2049    /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2050    /// work with a deliberately misaligned pointer. In such cases, you may use
2051    /// the [`fetch_byte_add`](Self::fetch_byte_add) method instead.
2052    ///
2053    /// `fetch_ptr_add` takes an [`Ordering`] argument which describes the
2054    /// memory ordering of this operation. All ordering modes are possible. Note
2055    /// that using [`Acquire`] makes the store part of this operation
2056    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2057    ///
2058    /// [`wrapping_add`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_add
2059    ///
2060    /// # Examples
2061    ///
2062    /// ```
2063    /// # #![allow(unstable_name_collisions)]
2064    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2065    /// use portable_atomic::{AtomicPtr, Ordering};
2066    ///
2067    /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2068    /// assert_eq!(atom.fetch_ptr_add(1, Ordering::Relaxed).addr(), 0);
2069    /// // Note: units of `size_of::<i64>()`.
2070    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 8);
2071    /// ```
2072    #[inline]
2073    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2074    pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T {
2075        self.fetch_byte_add(val.wrapping_mul(core::mem::size_of::<T>()), order)
2076    }
2077
2078    /// Offsets the pointer's address by subtracting `val` (in units of `T`),
2079    /// returning the previous pointer.
2080    ///
2081    /// This is equivalent to using [`wrapping_sub`] to atomically perform the
2082    /// equivalent of `ptr = ptr.wrapping_sub(val);`.
2083    ///
2084    /// This method operates in units of `T`, which means that it cannot be used
2085    /// to offset the pointer by an amount which is not a multiple of
2086    /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2087    /// work with a deliberately misaligned pointer. In such cases, you may use
2088    /// the [`fetch_byte_sub`](Self::fetch_byte_sub) method instead.
2089    ///
2090    /// `fetch_ptr_sub` takes an [`Ordering`] argument which describes the memory
2091    /// ordering of this operation. All ordering modes are possible. Note that
2092    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2093    /// and using [`Release`] makes the load part [`Relaxed`].
2094    ///
2095    /// [`wrapping_sub`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_sub
2096    ///
2097    /// # Examples
2098    ///
2099    /// ```
2100    /// use portable_atomic::{AtomicPtr, Ordering};
2101    ///
2102    /// let array = [1i32, 2i32];
2103    /// let atom = AtomicPtr::new(array.as_ptr().wrapping_add(1) as *mut _);
2104    ///
2105    /// assert!(core::ptr::eq(atom.fetch_ptr_sub(1, Ordering::Relaxed), &array[1]));
2106    /// assert!(core::ptr::eq(atom.load(Ordering::Relaxed), &array[0]));
2107    /// ```
2108    #[inline]
2109    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2110    pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T {
2111        self.fetch_byte_sub(val.wrapping_mul(core::mem::size_of::<T>()), order)
2112    }
2113
2114    /// Offsets the pointer's address by adding `val` *bytes*, returning the
2115    /// previous pointer.
2116    ///
2117    /// This is equivalent to using [`wrapping_add`] and [`cast`] to atomically
2118    /// perform `ptr = ptr.cast::<u8>().wrapping_add(val).cast::<T>()`.
2119    ///
2120    /// `fetch_byte_add` takes an [`Ordering`] argument which describes the
2121    /// memory ordering of this operation. All ordering modes are possible. Note
2122    /// that using [`Acquire`] makes the store part of this operation
2123    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2124    ///
2125    /// [`wrapping_add`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_add
2126    /// [`cast`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.cast
2127    ///
2128    /// # Examples
2129    ///
2130    /// ```
2131    /// # #![allow(unstable_name_collisions)]
2132    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2133    /// use portable_atomic::{AtomicPtr, Ordering};
2134    ///
2135    /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2136    /// assert_eq!(atom.fetch_byte_add(1, Ordering::Relaxed).addr(), 0);
2137    /// // Note: in units of bytes, not `size_of::<i64>()`.
2138    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 1);
2139    /// ```
2140    #[inline]
2141    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2142    pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T {
2143        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2144        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2145        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2146        // compatible and is sound.
2147        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2148        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2149        #[cfg(miri)]
2150        {
2151            self.fetch_update_(order, |x| x.with_addr(x.addr().wrapping_add(val)))
2152        }
2153        #[cfg(not(miri))]
2154        {
            crate::utils::ptr::with_exposed_provenance_mut(
                self.as_atomic_usize().fetch_add(val, order),
            )
2158        }
2159    }
2160
2161    /// Offsets the pointer's address by subtracting `val` *bytes*, returning the
2162    /// previous pointer.
2163    ///
2164    /// This is equivalent to using [`wrapping_sub`] and [`cast`] to atomically
2165    /// perform `ptr = ptr.cast::<u8>().wrapping_sub(val).cast::<T>()`.
2166    ///
2167    /// `fetch_byte_sub` takes an [`Ordering`] argument which describes the
2168    /// memory ordering of this operation. All ordering modes are possible. Note
2169    /// that using [`Acquire`] makes the store part of this operation
2170    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2171    ///
2172    /// [`wrapping_sub`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_sub
2173    /// [`cast`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.cast
2174    ///
2175    /// # Examples
2176    ///
2177    /// ```
2178    /// # #![allow(unstable_name_collisions)]
2179    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2180    /// use portable_atomic::{AtomicPtr, Ordering};
2181    ///
2182    /// let atom = AtomicPtr::<i64>::new(sptr::invalid_mut(1));
2183    /// assert_eq!(atom.fetch_byte_sub(1, Ordering::Relaxed).addr(), 1);
2184    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 0);
2185    /// ```
2186    #[inline]
2187    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2188    pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T {
2189        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2190        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2191        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2192        // compatible and is sound.
2193        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2194        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2195        #[cfg(miri)]
2196        {
2197            self.fetch_update_(order, |x| x.with_addr(x.addr().wrapping_sub(val)))
2198        }
2199        #[cfg(not(miri))]
2200        {
            crate::utils::ptr::with_exposed_provenance_mut(
                self.as_atomic_usize().fetch_sub(val, order),
            )
2204        }
2205    }
2206
2207    /// Performs a bitwise "or" operation on the address of the current pointer,
2208    /// and the argument `val`, and stores a pointer with provenance of the
2209    /// current pointer and the resulting address.
2210    ///
2211    /// This is equivalent to using [`map_addr`] to atomically perform
2212    /// `ptr = ptr.map_addr(|a| a | val)`. This can be used in tagged
2213    /// pointer schemes to atomically set tag bits.
2214    ///
2215    /// **Caveat**: This operation returns the previous value. To compute the
2216    /// stored value without losing provenance, you may use [`map_addr`]. For
    /// example: `a.fetch_or(val, order).map_addr(|a| a | val)`.
2218    ///
2219    /// `fetch_or` takes an [`Ordering`] argument which describes the memory
2220    /// ordering of this operation. All ordering modes are possible. Note that
2221    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2222    /// and using [`Release`] makes the load part [`Relaxed`].
2223    ///
2224    /// This API and its claimed semantics are part of the Strict Provenance
2225    /// experiment, see the [module documentation for `ptr`][core::ptr] for
2226    /// details.
2227    ///
2228    /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr
2229    ///
2230    /// # Examples
2231    ///
2232    /// ```
2233    /// # #![allow(unstable_name_collisions)]
2234    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2235    /// use portable_atomic::{AtomicPtr, Ordering};
2236    ///
2237    /// let pointer = &mut 3i64 as *mut i64;
2238    ///
2239    /// let atom = AtomicPtr::<i64>::new(pointer);
2240    /// // Tag the bottom bit of the pointer.
2241    /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 0);
2242    /// // Extract and untag.
2243    /// let tagged = atom.load(Ordering::Relaxed);
2244    /// assert_eq!(tagged.addr() & 1, 1);
2245    /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
2246    /// ```
2247    #[inline]
2248    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2249    pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T {
2250        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2251        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2252        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2253        // compatible and is sound.
2254        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2255        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2256        #[cfg(miri)]
2257        {
2258            self.fetch_update_(order, |x| x.with_addr(x.addr() | val))
2259        }
2260        #[cfg(not(miri))]
2261        {
            crate::utils::ptr::with_exposed_provenance_mut(
                self.as_atomic_usize().fetch_or(val, order),
            )
2265        }
2266    }
2267
2268    /// Performs a bitwise "and" operation on the address of the current
2269    /// pointer, and the argument `val`, and stores a pointer with provenance of
2270    /// the current pointer and the resulting address.
2271    ///
2272    /// This is equivalent to using [`map_addr`] to atomically perform
2273    /// `ptr = ptr.map_addr(|a| a & val)`. This can be used in tagged
2274    /// pointer schemes to atomically unset tag bits.
2275    ///
2276    /// **Caveat**: This operation returns the previous value. To compute the
2277    /// stored value without losing provenance, you may use [`map_addr`]. For
    /// example: `a.fetch_and(val, order).map_addr(|a| a & val)`.
2279    ///
2280    /// `fetch_and` takes an [`Ordering`] argument which describes the memory
2281    /// ordering of this operation. All ordering modes are possible. Note that
2282    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2283    /// and using [`Release`] makes the load part [`Relaxed`].
2284    ///
2285    /// This API and its claimed semantics are part of the Strict Provenance
2286    /// experiment, see the [module documentation for `ptr`][core::ptr] for
2287    /// details.
2288    ///
2289    /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr
2290    ///
2291    /// # Examples
2292    ///
2293    /// ```
2294    /// # #![allow(unstable_name_collisions)]
2295    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2296    /// use portable_atomic::{AtomicPtr, Ordering};
2297    ///
2298    /// let pointer = &mut 3i64 as *mut i64;
2299    /// // A tagged pointer
2300    /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
2301    /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 1);
2302    /// // Untag, and extract the previously tagged pointer.
2303    /// let untagged = atom.fetch_and(!1, Ordering::Relaxed).map_addr(|a| a & !1);
2304    /// assert_eq!(untagged, pointer);
2305    /// ```
2306    #[inline]
2307    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2308    pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T {
2309        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2310        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2311        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2312        // compatible and is sound.
2313        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2314        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2315        #[cfg(miri)]
2316        {
2317            self.fetch_update_(order, |x| x.with_addr(x.addr() & val))
2318        }
2319        #[cfg(not(miri))]
2320        {
            crate::utils::ptr::with_exposed_provenance_mut(
                self.as_atomic_usize().fetch_and(val, order),
            )
2324        }
2325    }
2326
2327    /// Performs a bitwise "xor" operation on the address of the current
2328    /// pointer, and the argument `val`, and stores a pointer with provenance of
2329    /// the current pointer and the resulting address.
2330    ///
2331    /// This is equivalent to using [`map_addr`] to atomically perform
2332    /// `ptr = ptr.map_addr(|a| a ^ val)`. This can be used in tagged
2333    /// pointer schemes to atomically toggle tag bits.
2334    ///
2335    /// **Caveat**: This operation returns the previous value. To compute the
2336    /// stored value without losing provenance, you may use [`map_addr`]. For
    /// example: `a.fetch_xor(val, order).map_addr(|a| a ^ val)`.
2338    ///
2339    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory
2340    /// ordering of this operation. All ordering modes are possible. Note that
2341    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2342    /// and using [`Release`] makes the load part [`Relaxed`].
2343    ///
2344    /// This API and its claimed semantics are part of the Strict Provenance
2345    /// experiment, see the [module documentation for `ptr`][core::ptr] for
2346    /// details.
2347    ///
2348    /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr
2349    ///
2350    /// # Examples
2351    ///
2352    /// ```
2353    /// # #![allow(unstable_name_collisions)]
2354    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2355    /// use portable_atomic::{AtomicPtr, Ordering};
2356    ///
2357    /// let pointer = &mut 3i64 as *mut i64;
2358    /// let atom = AtomicPtr::<i64>::new(pointer);
2359    ///
2360    /// // Toggle a tag bit on the pointer.
2361    /// atom.fetch_xor(1, Ordering::Relaxed);
2362    /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
2363    /// ```
2364    #[inline]
2365    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2366    pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T {
2367        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2368        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2369        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2370        // compatible and is sound.
2371        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2372        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2373        #[cfg(miri)]
2374        {
2375            self.fetch_update_(order, |x| x.with_addr(x.addr() ^ val))
2376        }
2377        #[cfg(not(miri))]
2378        {
            crate::utils::ptr::with_exposed_provenance_mut(
                self.as_atomic_usize().fetch_xor(val, order),
            )
2382        }
2383    }
2384
2385    /// Sets the bit at the specified bit-position to 1.
2386    ///
2387    /// Returns `true` if the specified bit was previously set to 1.
2388    ///
2389    /// `bit_set` takes an [`Ordering`] argument which describes the memory ordering
2390    /// of this operation. All ordering modes are possible. Note that using
2391    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2392    /// using [`Release`] makes the load part [`Relaxed`].
2393    ///
    /// This corresponds to x86's `lock bts`, and the implementation uses it on x86/x86_64.
2395    ///
2396    /// # Examples
2397    ///
2398    /// ```
2399    /// # #![allow(unstable_name_collisions)]
2400    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2401    /// use portable_atomic::{AtomicPtr, Ordering};
2402    ///
2403    /// let pointer = &mut 3i64 as *mut i64;
2404    ///
2405    /// let atom = AtomicPtr::<i64>::new(pointer);
2406    /// // Tag the bottom bit of the pointer.
2407    /// assert!(!atom.bit_set(0, Ordering::Relaxed));
2408    /// // Extract and untag.
2409    /// let tagged = atom.load(Ordering::Relaxed);
2410    /// assert_eq!(tagged.addr() & 1, 1);
2411    /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
2412    /// ```
2413    #[inline]
2414    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2415    pub fn bit_set(&self, bit: u32, order: Ordering) -> bool {
2416        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2417        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2418        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2419        // compatible and is sound.
2420        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2421        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2422        #[cfg(miri)]
2423        {
2424            let mask = 1_usize.wrapping_shl(bit);
2425            self.fetch_or(mask, order).addr() & mask != 0
2426        }
2427        #[cfg(not(miri))]
2428        {
2429            self.as_atomic_usize().bit_set(bit, order)
2430        }
2431    }
2432
    /// Clears the bit at the specified bit-position to 0.
2434    ///
2435    /// Returns `true` if the specified bit was previously set to 1.
2436    ///
2437    /// `bit_clear` takes an [`Ordering`] argument which describes the memory ordering
2438    /// of this operation. All ordering modes are possible. Note that using
2439    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2440    /// using [`Release`] makes the load part [`Relaxed`].
2441    ///
    /// This corresponds to x86's `lock btr`, and the implementation uses it on x86/x86_64.
2443    ///
2444    /// # Examples
2445    ///
2446    /// ```
2447    /// # #![allow(unstable_name_collisions)]
2448    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2449    /// use portable_atomic::{AtomicPtr, Ordering};
2450    ///
2451    /// let pointer = &mut 3i64 as *mut i64;
2452    /// // A tagged pointer
2453    /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
2454    /// assert!(atom.bit_set(0, Ordering::Relaxed));
2455    /// // Untag
2456    /// assert!(atom.bit_clear(0, Ordering::Relaxed));
2457    /// ```
2458    #[inline]
2459    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2460    pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool {
2461        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2462        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2463        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2464        // compatible and is sound.
2465        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2466        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2467        #[cfg(miri)]
2468        {
2469            let mask = 1_usize.wrapping_shl(bit);
2470            self.fetch_and(!mask, order).addr() & mask != 0
2471        }
2472        #[cfg(not(miri))]
2473        {
2474            self.as_atomic_usize().bit_clear(bit, order)
2475        }
2476    }
2477
2478    /// Toggles the bit at the specified bit-position.
2479    ///
2480    /// Returns `true` if the specified bit was previously set to 1.
2481    ///
2482    /// `bit_toggle` takes an [`Ordering`] argument which describes the memory ordering
2483    /// of this operation. All ordering modes are possible. Note that using
2484    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2485    /// using [`Release`] makes the load part [`Relaxed`].
2486    ///
    /// This corresponds to x86's `lock btc`, and the implementation uses it on x86/x86_64.
2488    ///
2489    /// # Examples
2490    ///
2491    /// ```
2492    /// # #![allow(unstable_name_collisions)]
2493    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2494    /// use portable_atomic::{AtomicPtr, Ordering};
2495    ///
2496    /// let pointer = &mut 3i64 as *mut i64;
2497    /// let atom = AtomicPtr::<i64>::new(pointer);
2498    ///
2499    /// // Toggle a tag bit on the pointer.
2500    /// atom.bit_toggle(0, Ordering::Relaxed);
2501    /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
2502    /// ```
2503    #[inline]
2504    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2505    pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool {
2506        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2507        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2508        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2509        // compatible and is sound.
2510        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2511        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2512        #[cfg(miri)]
2513        {
2514            let mask = 1_usize.wrapping_shl(bit);
2515            self.fetch_xor(mask, order).addr() & mask != 0
2516        }
2517        #[cfg(not(miri))]
2518        {
2519            self.as_atomic_usize().bit_toggle(bit, order)
2520        }
2521    }
2522
2523    #[cfg(not(miri))]
2524    #[inline(always)]
2525    fn as_atomic_usize(&self) -> &AtomicUsize {
2526        static_assert!(
2527            core::mem::size_of::<AtomicPtr<()>>() == core::mem::size_of::<AtomicUsize>()
2528        );
2529        static_assert!(
2530            core::mem::align_of::<AtomicPtr<()>>() == core::mem::align_of::<AtomicUsize>()
2531        );
2532        // SAFETY: AtomicPtr and AtomicUsize have the same layout,
2533        // and both access data in the same way.
2534        unsafe { &*(self as *const Self as *const AtomicUsize) }
2535    }
2536    } // cfg_has_atomic_cas_or_amo32!
2537
2538    const_fn! {
2539        const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
2540        /// Returns a mutable pointer to the underlying pointer.
2541        ///
2542        /// Returning an `*mut` pointer from a shared reference to this atomic is
2543        /// safe because the atomic types work with interior mutability. Any use of
2544        /// the returned raw pointer requires an `unsafe` block and has to uphold
2545        /// the safety requirements. If there is concurrent access, note the following
2546        /// additional safety requirements:
2547        ///
2548        /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
2549        ///   operations on it must be atomic.
2550        /// - Otherwise, any concurrent operations on it must be compatible with
2551        ///   operations performed by this atomic type.
2552        ///
2553        /// This is `const fn` on Rust 1.58+.
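        ///
        /// # Examples
        ///
        /// A minimal sketch of non-concurrent use (any use of the returned raw
        /// pointer still requires `unsafe`):
        ///
        /// ```
        /// use portable_atomic::AtomicPtr;
        ///
        /// let atom = AtomicPtr::<u8>::new(core::ptr::null_mut());
        /// // SAFETY: there is no concurrent access, so a plain read is allowed.
        /// assert!(unsafe { *atom.as_ptr() }.is_null());
        /// ```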
2554        #[inline]
2555        pub const fn as_ptr(&self) -> *mut *mut T {
2556            self.inner.as_ptr()
2557        }
2558    }
2559}
2560// See https://github.com/taiki-e/portable-atomic/issues/180
2561#[cfg(not(feature = "require-cas"))]
2562cfg_no_atomic_cas! {
2563#[doc(hidden)]
2564#[allow(unused_variables, clippy::unused_self, clippy::extra_unused_lifetimes)]
2565impl<'a, T: 'a> AtomicPtr<T> {
2566    cfg_no_atomic_cas_or_amo32! {
2567    #[inline]
2568    pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T
2569    where
2570        &'a Self: HasSwap,
2571    {
2572        unimplemented!()
2573    }
2574    } // cfg_no_atomic_cas_or_amo32!
2575    #[inline]
2576    pub fn compare_exchange(
2577        &self,
2578        current: *mut T,
2579        new: *mut T,
2580        success: Ordering,
2581        failure: Ordering,
2582    ) -> Result<*mut T, *mut T>
2583    where
2584        &'a Self: HasCompareExchange,
2585    {
2586        unimplemented!()
2587    }
2588    #[inline]
2589    pub fn compare_exchange_weak(
2590        &self,
2591        current: *mut T,
2592        new: *mut T,
2593        success: Ordering,
2594        failure: Ordering,
2595    ) -> Result<*mut T, *mut T>
2596    where
2597        &'a Self: HasCompareExchangeWeak,
2598    {
2599        unimplemented!()
2600    }
2601    #[inline]
2602    pub fn fetch_update<F>(
2603        &self,
2604        set_order: Ordering,
2605        fetch_order: Ordering,
2606        f: F,
2607    ) -> Result<*mut T, *mut T>
2608    where
2609        F: FnMut(*mut T) -> Option<*mut T>,
2610        &'a Self: HasFetchUpdate,
2611    {
2612        unimplemented!()
2613    }
2614    cfg_no_atomic_cas_or_amo32! {
2615    #[inline]
2616    pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T
2617    where
2618        &'a Self: HasFetchPtrAdd,
2619    {
2620        unimplemented!()
2621    }
2622    #[inline]
2623    pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T
2624    where
2625        &'a Self: HasFetchPtrSub,
2626    {
2627        unimplemented!()
2628    }
2629    #[inline]
2630    pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T
2631    where
2632        &'a Self: HasFetchByteAdd,
2633    {
2634        unimplemented!()
2635    }
2636    #[inline]
2637    pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T
2638    where
2639        &'a Self: HasFetchByteSub,
2640    {
2641        unimplemented!()
2642    }
2643    #[inline]
2644    pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T
2645    where
2646        &'a Self: HasFetchOr,
2647    {
2648        unimplemented!()
2649    }
2650    #[inline]
2651    pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T
2652    where
2653        &'a Self: HasFetchAnd,
2654    {
2655        unimplemented!()
2656    }
2657    #[inline]
2658    pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T
2659    where
2660        &'a Self: HasFetchXor,
2661    {
2662        unimplemented!()
2663    }
2664    #[inline]
2665    pub fn bit_set(&self, bit: u32, order: Ordering) -> bool
2666    where
2667        &'a Self: HasBitSet,
2668    {
2669        unimplemented!()
2670    }
2671    #[inline]
2672    pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool
2673    where
2674        &'a Self: HasBitClear,
2675    {
2676        unimplemented!()
2677    }
2678    #[inline]
2679    pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool
2680    where
2681        &'a Self: HasBitToggle,
2682    {
2683        unimplemented!()
2684    }
2685    } // cfg_no_atomic_cas_or_amo32!
2686}
2687} // cfg_no_atomic_cas!
2688} // cfg_has_atomic_ptr!
2689
2690macro_rules! atomic_int {
2691    // Atomic{I,U}* impls
2692    ($atomic_type:ident, $int_type:ident, $align:literal,
2693        $cfg_has_atomic_cas_or_amo32_or_8:ident, $cfg_no_atomic_cas_or_amo32_or_8:ident
2694        $(, #[$cfg_float:meta] $atomic_float_type:ident, $float_type:ident)?
2695    ) => {
2696        doc_comment! {
2697            concat!("An integer type which can be safely shared between threads.
2698
2699This type has the same in-memory representation as the underlying integer type,
2700[`", stringify!($int_type), "`].
2701
2702If the compiler and the platform support atomic loads and stores of [`", stringify!($int_type),
2703"`], this type is a wrapper for the standard library's `", stringify!($atomic_type),
2704"`. If the platform supports it but the compiler does not, atomic operations are implemented using
2705inline assembly. Otherwise, it synchronizes using global locks.
2706You can call [`", stringify!($atomic_type), "::is_lock_free()`] to check whether
2707atomic instructions or locks will be used.
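
# Examples

A minimal load/store round-trip:

```
use portable_atomic::{", stringify!($atomic_type), ", Ordering};

let v = ", stringify!($atomic_type), "::new(5);
v.store(10, Ordering::Relaxed);
assert_eq!(v.load(Ordering::Relaxed), 10);
```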
2708"
2709            ),
2710            // We can use #[repr(transparent)] here, but #[repr(C, align(N))]
2711            // will show clearer docs.
2712            #[repr(C, align($align))]
2713            pub struct $atomic_type {
2714                inner: imp::$atomic_type,
2715            }
2716        }
2717
2718        impl Default for $atomic_type {
2719            #[inline]
2720            fn default() -> Self {
2721                Self::new($int_type::default())
2722            }
2723        }
2724
2725        impl From<$int_type> for $atomic_type {
2726            #[inline]
2727            fn from(v: $int_type) -> Self {
2728                Self::new(v)
2729            }
2730        }
2731
2732        // UnwindSafe is implicitly implemented.
2733        #[cfg(not(portable_atomic_no_core_unwind_safe))]
2734        impl core::panic::RefUnwindSafe for $atomic_type {}
2735        #[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
2736        impl std::panic::RefUnwindSafe for $atomic_type {}
2737
2738        impl_debug_and_serde!($atomic_type);
2739
2740        impl $atomic_type {
2741            doc_comment! {
2742                concat!(
2743                    "Creates a new atomic integer.
2744
2745# Examples
2746
2747```
2748use portable_atomic::", stringify!($atomic_type), ";
2749
2750let atomic_forty_two = ", stringify!($atomic_type), "::new(42);
2751```"
2752                ),
2753                #[inline]
2754                #[must_use]
2755                pub const fn new(v: $int_type) -> Self {
2756                    static_assert_layout!($atomic_type, $int_type);
2757                    Self { inner: imp::$atomic_type::new(v) }
2758                }
2759            }
2760
2761            // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
2762            #[cfg(not(portable_atomic_no_const_mut_refs))]
2763            doc_comment! {
2764                concat!("Creates a new reference to an atomic integer from a pointer.
2765
2766This is `const fn` on Rust 1.83+.
2767
2768# Safety
2769
2770* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
2771  can be bigger than `align_of::<", stringify!($int_type), ">()`).
2772* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
2773* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
2774  behind `ptr` must have a happens-before relationship with atomic accesses via
2775  the returned value (or vice-versa).
2776  * In other words, time periods where the value is accessed atomically may not
2777    overlap with periods where the value is accessed non-atomically.
2778  * This requirement is trivially satisfied if `ptr` is never used non-atomically
2779    for the duration of lifetime `'a`. Most use cases should be able to follow
2780    this guideline.
2781  * This requirement is also trivially satisfied if all accesses (atomic or not) are
2782    done from the same thread.
2783* If this atomic type is *not* lock-free:
2784  * Any accesses to the value behind `ptr` must have a happens-before relationship
2785    with accesses via the returned value (or vice-versa).
2786  * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
2787    be compatible with operations performed by this atomic type.
2788* This method must not be used to create overlapping or mixed-size atomic
2789  accesses, as these are not supported by the memory model.
2790
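# Examples

A minimal round-trip through a pointer obtained from [`as_ptr`](Self::as_ptr),
which trivially satisfies the alignment and validity requirements above:

```
use portable_atomic::{", stringify!($atomic_type), ", Ordering};

let some_var = ", stringify!($atomic_type), "::new(5);
// SAFETY: the pointer comes from a live `", stringify!($atomic_type), "`, so it is valid and
// properly aligned, and the value behind it is only ever accessed atomically.
let a = unsafe { ", stringify!($atomic_type), "::from_ptr(some_var.as_ptr()) };
assert_eq!(a.load(Ordering::Relaxed), 5);
```
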
2791[valid]: core::ptr#safety"),
2792                #[inline]
2793                #[must_use]
2794                pub const unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a Self {
2795                    #[allow(clippy::cast_ptr_alignment)]
2796                    // SAFETY: guaranteed by the caller
2797                    unsafe { &*(ptr as *mut Self) }
2798                }
2799            }
2800            #[cfg(portable_atomic_no_const_mut_refs)]
2801            doc_comment! {
2802                concat!("Creates a new reference to an atomic integer from a pointer.
2803
2804This is `const fn` on Rust 1.83+.
2805
2806# Safety
2807
2808* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
2809  can be bigger than `align_of::<", stringify!($int_type), ">()`).
2810* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
2811* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
2812  behind `ptr` must have a happens-before relationship with atomic accesses via
2813  the returned value (or vice-versa).
2814  * In other words, time periods where the value is accessed atomically may not
2815    overlap with periods where the value is accessed non-atomically.
2816  * This requirement is trivially satisfied if `ptr` is never used non-atomically
2817    for the duration of lifetime `'a`. Most use cases should be able to follow
2818    this guideline.
2819  * This requirement is also trivially satisfied if all accesses (atomic or not) are
2820    done from the same thread.
2821* If this atomic type is *not* lock-free:
2822  * Any accesses to the value behind `ptr` must have a happens-before relationship
2823    with accesses via the returned value (or vice-versa).
2824  * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
2825    be compatible with operations performed by this atomic type.
2826* This method must not be used to create overlapping or mixed-size atomic
2827  accesses, as these are not supported by the memory model.
2828
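# Examples

A minimal round-trip through a pointer obtained from [`as_ptr`](Self::as_ptr),
which trivially satisfies the alignment and validity requirements above:

```
use portable_atomic::{", stringify!($atomic_type), ", Ordering};

let some_var = ", stringify!($atomic_type), "::new(5);
// SAFETY: the pointer comes from a live `", stringify!($atomic_type), "`, so it is valid and
// properly aligned, and the value behind it is only ever accessed atomically.
let a = unsafe { ", stringify!($atomic_type), "::from_ptr(some_var.as_ptr()) };
assert_eq!(a.load(Ordering::Relaxed), 5);
```
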
2829[valid]: core::ptr#safety"),
2830                #[inline]
2831                #[must_use]
2832                pub unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a Self {
2833                    #[allow(clippy::cast_ptr_alignment)]
2834                    // SAFETY: guaranteed by the caller
2835                    unsafe { &*(ptr as *mut Self) }
2836                }
2837            }
2838
2839            doc_comment! {
2840                concat!("Returns `true` if operations on values of this type are lock-free.
2841
2842If the compiler or the platform doesn't support the necessary
2843atomic instructions, global locks will be used for every
2844potentially concurrent atomic operation.
2845
2846# Examples
2847
2848```
2849use portable_atomic::", stringify!($atomic_type), ";
2850
2851let is_lock_free = ", stringify!($atomic_type), "::is_lock_free();
2852```"),
2853                #[inline]
2854                #[must_use]
2855                pub fn is_lock_free() -> bool {
2856                    <imp::$atomic_type>::is_lock_free()
2857                }
2858            }
2859
2860            doc_comment! {
2861                concat!("Returns `true` if operations on values of this type are always lock-free.
2862
2863If the compiler or the platform doesn't support the necessary
2864atomic instructions, global locks will be used for every
2865potentially concurrent atomic operation.
2866
2867**Note:** If the atomic operation relies on dynamic CPU feature detection,
2868this type may be lock-free even if this function returns `false`.
2869
2870# Examples
2871
2872```
2873use portable_atomic::", stringify!($atomic_type), ";
2874
2875const IS_ALWAYS_LOCK_FREE: bool = ", stringify!($atomic_type), "::is_always_lock_free();
2876```"),
2877                #[inline]
2878                #[must_use]
2879                pub const fn is_always_lock_free() -> bool {
2880                    <imp::$atomic_type>::IS_ALWAYS_LOCK_FREE
2881                }
2882            }
2883            #[cfg(test)]
2884            const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();
2885
2886            #[cfg(not(portable_atomic_no_const_mut_refs))]
2887            doc_comment! {
2888                concat!("Returns a mutable reference to the underlying integer.\n
2889This is safe because the mutable reference guarantees that no other threads are
2890concurrently accessing the atomic data.
2891
2892This is `const fn` on Rust 1.83+.
2893
2894# Examples
2895
2896```
2897use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2898
2899let mut some_var = ", stringify!($atomic_type), "::new(10);
2900assert_eq!(*some_var.get_mut(), 10);
2901*some_var.get_mut() = 5;
2902assert_eq!(some_var.load(Ordering::SeqCst), 5);
2903```"),
2904                #[inline]
2905                pub const fn get_mut(&mut self) -> &mut $int_type {
2906                    // SAFETY: the mutable reference guarantees unique ownership.
2907                    // (core::sync::atomic::Atomic*::get_mut is not const yet)
2908                    unsafe { &mut *self.as_ptr() }
2909                }
2910            }
2911            #[cfg(portable_atomic_no_const_mut_refs)]
2912            doc_comment! {
2913                concat!("Returns a mutable reference to the underlying integer.\n
2914This is safe because the mutable reference guarantees that no other threads are
2915concurrently accessing the atomic data.
2916
2917This is `const fn` on Rust 1.83+.
2918
2919# Examples
2920
2921```
2922use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2923
2924let mut some_var = ", stringify!($atomic_type), "::new(10);
2925assert_eq!(*some_var.get_mut(), 10);
2926*some_var.get_mut() = 5;
2927assert_eq!(some_var.load(Ordering::SeqCst), 5);
2928```"),
2929                #[inline]
2930                pub fn get_mut(&mut self) -> &mut $int_type {
2931                    // SAFETY: the mutable reference guarantees unique ownership.
2932                    unsafe { &mut *self.as_ptr() }
2933                }
2934            }
2935
2936            // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
2937            // https://github.com/rust-lang/rust/issues/76314
2938
2939            #[cfg(not(portable_atomic_no_const_transmute))]
2940            doc_comment! {
2941                concat!("Consumes the atomic and returns the contained value.
2942
2943This is safe because passing `self` by value guarantees that no other threads are
2944concurrently accessing the atomic data.
2945
2946This is `const fn` on Rust 1.56+.
2947
2948# Examples
2949
2950```
2951use portable_atomic::", stringify!($atomic_type), ";
2952
2953let some_var = ", stringify!($atomic_type), "::new(5);
2954assert_eq!(some_var.into_inner(), 5);
2955```"),
2956                #[inline]
2957                pub const fn into_inner(self) -> $int_type {
2958                    // SAFETY: $atomic_type and $int_type have the same size and in-memory representations,
2959                    // so they can be safely transmuted.
2960                    // (const UnsafeCell::into_inner is unstable)
2961                    unsafe { core::mem::transmute(self) }
2962                }
2963            }
2964            #[cfg(portable_atomic_no_const_transmute)]
2965            doc_comment! {
2966                concat!("Consumes the atomic and returns the contained value.
2967
2968This is safe because passing `self` by value guarantees that no other threads are
2969concurrently accessing the atomic data.
2970
2971This is `const fn` on Rust 1.56+.
2972
2973# Examples
2974
2975```
2976use portable_atomic::", stringify!($atomic_type), ";
2977
2978let some_var = ", stringify!($atomic_type), "::new(5);
2979assert_eq!(some_var.into_inner(), 5);
2980```"),
2981                #[inline]
2982                pub fn into_inner(self) -> $int_type {
2983                    // SAFETY: $atomic_type and $int_type have the same size and in-memory representations,
2984                    // so they can be safely transmuted.
2985                    // (const UnsafeCell::into_inner is unstable)
2986                    unsafe { core::mem::transmute(self) }
2987                }
2988            }
2989
2990            doc_comment! {
2991                concat!("Loads a value from the atomic integer.
2992
2993`load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2994Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
2995
2996# Panics
2997
2998Panics if `order` is [`Release`] or [`AcqRel`].
2999
3000# Examples
3001
3002```
3003use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3004
3005let some_var = ", stringify!($atomic_type), "::new(5);
3006
3007assert_eq!(some_var.load(Ordering::Relaxed), 5);
3008```"),
3009                #[inline]
3010                #[cfg_attr(
3011                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3012                    track_caller
3013                )]
3014                pub fn load(&self, order: Ordering) -> $int_type {
3015                    self.inner.load(order)
3016                }
3017            }
3018
3019            doc_comment! {
3020                concat!("Stores a value into the atomic integer.
3021
3022`store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
3023Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
3024
3025# Panics
3026
3027Panics if `order` is [`Acquire`] or [`AcqRel`].
3028
3029# Examples
3030
3031```
3032use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3033
3034let some_var = ", stringify!($atomic_type), "::new(5);
3035
3036some_var.store(10, Ordering::Relaxed);
3037assert_eq!(some_var.load(Ordering::Relaxed), 10);
3038```"),
3039                #[inline]
3040                #[cfg_attr(
3041                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3042                    track_caller
3043                )]
3044                pub fn store(&self, val: $int_type, order: Ordering) {
3045                    self.inner.store(val, order)
3046                }
3047            }
3048
3049            cfg_has_atomic_cas_or_amo32! {
3050            $cfg_has_atomic_cas_or_amo32_or_8! {
3051            doc_comment! {
3052                concat!("Stores a value into the atomic integer, returning the previous value.
3053
3054`swap` takes an [`Ordering`] argument which describes the memory ordering
3055of this operation. All ordering modes are possible. Note that using
3056[`Acquire`] makes the store part of this operation [`Relaxed`], and
3057using [`Release`] makes the load part [`Relaxed`].
3058
3059# Examples
3060
3061```
3062use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3063
3064let some_var = ", stringify!($atomic_type), "::new(5);
3065
3066assert_eq!(some_var.swap(10, Ordering::Relaxed), 5);
3067```"),
3068                #[inline]
3069                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3070                pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type {
3071                    self.inner.swap(val, order)
3072                }
3073            }
3074            } // $cfg_has_atomic_cas_or_amo32_or_8!
3075
3076            cfg_has_atomic_cas! {
3077            doc_comment! {
3078                concat!("Stores a value into the atomic integer if the current value is the same as
3079the `current` value.
3080
3081The return value is a result indicating whether the new value was written and
3082containing the previous value. On success this value is guaranteed to be equal to
3083`current`.
3084
3085`compare_exchange` takes two [`Ordering`] arguments to describe the memory
3086ordering of this operation. `success` describes the required ordering for the
3087read-modify-write operation that takes place if the comparison with `current` succeeds.
3088`failure` describes the required ordering for the load operation that takes place when
3089the comparison fails. Using [`Acquire`] as success ordering makes the store part
3090of this operation [`Relaxed`], and using [`Release`] makes the successful load
3091[`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3092
3093# Panics
3094
3095Panics if `failure` is [`Release`] or [`AcqRel`].
3096
3097# Examples
3098
3099```
3100use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3101
3102let some_var = ", stringify!($atomic_type), "::new(5);
3103
3104assert_eq!(
3105    some_var.compare_exchange(5, 10, Ordering::Acquire, Ordering::Relaxed),
3106    Ok(5),
3107);
3108assert_eq!(some_var.load(Ordering::Relaxed), 10);
3109
3110assert_eq!(
3111    some_var.compare_exchange(6, 12, Ordering::SeqCst, Ordering::Acquire),
3112    Err(10),
3113);
3114assert_eq!(some_var.load(Ordering::Relaxed), 10);
3115```"),
3116                #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
3117                #[inline]
3118                #[cfg_attr(
3119                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3120                    track_caller
3121                )]
3122                pub fn compare_exchange(
3123                    &self,
3124                    current: $int_type,
3125                    new: $int_type,
3126                    success: Ordering,
3127                    failure: Ordering,
3128                ) -> Result<$int_type, $int_type> {
3129                    self.inner.compare_exchange(current, new, success, failure)
3130                }
3131            }
3132
3133            doc_comment! {
3134                concat!("Stores a value into the atomic integer if the current value is the same as
3135the `current` value.
3136Unlike [`compare_exchange`](Self::compare_exchange),
3137this function is allowed to spuriously fail even
3138when the comparison succeeds, which can result in more efficient code on some
3139platforms. The return value is a result indicating whether the new value was
3140written and containing the previous value.
3141
3142`compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
3143ordering of this operation. `success` describes the required ordering for the
3144read-modify-write operation that takes place if the comparison with `current` succeeds.
3145`failure` describes the required ordering for the load operation that takes place when
3146the comparison fails. Using [`Acquire`] as success ordering makes the store part
3147of this operation [`Relaxed`], and using [`Release`] makes the successful load
3148[`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3149
3150# Panics
3151
3152Panics if `failure` is [`Release`] or [`AcqRel`].
3153
3154# Examples
3155
3156```
3157use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3158
3159let val = ", stringify!($atomic_type), "::new(4);
3160
3161let mut old = val.load(Ordering::Relaxed);
3162loop {
3163    let new = old * 2;
3164    match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
3165        Ok(_) => break,
3166        Err(x) => old = x,
3167    }
3168}
3169```"),
3170                #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
3171                #[inline]
3172                #[cfg_attr(
3173                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3174                    track_caller
3175                )]
3176                pub fn compare_exchange_weak(
3177                    &self,
3178                    current: $int_type,
3179                    new: $int_type,
3180                    success: Ordering,
3181                    failure: Ordering,
3182                ) -> Result<$int_type, $int_type> {
3183                    self.inner.compare_exchange_weak(current, new, success, failure)
3184                }
3185            }
3186            } // cfg_has_atomic_cas!
3187
3188            $cfg_has_atomic_cas_or_amo32_or_8! {
3189            doc_comment! {
3190                concat!("Adds to the current value, returning the previous value.
3191
3192This operation wraps around on overflow.
3193
3194`fetch_add` takes an [`Ordering`] argument which describes the memory ordering
3195of this operation. All ordering modes are possible. Note that using
3196[`Acquire`] makes the store part of this operation [`Relaxed`], and
3197using [`Release`] makes the load part [`Relaxed`].
3198
3199# Examples
3200
3201```
3202use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3203
3204let foo = ", stringify!($atomic_type), "::new(0);
3205assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
3206assert_eq!(foo.load(Ordering::SeqCst), 10);
3207```"),
3208                #[inline]
3209                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3210                pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type {
3211                    self.inner.fetch_add(val, order)
3212                }
3213            }
3214
3215            doc_comment! {
3216                concat!("Adds to the current value.
3217
3218This operation wraps around on overflow.
3219
3220Unlike `fetch_add`, this does not return the previous value.
3221
3222`add` takes an [`Ordering`] argument which describes the memory ordering
3223of this operation. All ordering modes are possible. Note that using
3224[`Acquire`] makes the store part of this operation [`Relaxed`], and
3225using [`Release`] makes the load part [`Relaxed`].
3226
3227This function may generate more efficient code than `fetch_add` on some platforms.
3228
3229- MSP430: `add` instead of disabling interrupts ({8,16}-bit atomics)
3230
3231# Examples
3232
3233```
3234use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3235
3236let foo = ", stringify!($atomic_type), "::new(0);
3237foo.add(10, Ordering::SeqCst);
3238assert_eq!(foo.load(Ordering::SeqCst), 10);
3239```"),
3240                #[inline]
3241                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3242                pub fn add(&self, val: $int_type, order: Ordering) {
3243                    self.inner.add(val, order);
3244                }
3245            }
3246
3247            doc_comment! {
3248                concat!("Subtracts from the current value, returning the previous value.
3249
3250This operation wraps around on overflow.
3251
3252`fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
3253of this operation. All ordering modes are possible. Note that using
3254[`Acquire`] makes the store part of this operation [`Relaxed`], and
3255using [`Release`] makes the load part [`Relaxed`].
3256
3257# Examples
3258
3259```
3260use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3261
3262let foo = ", stringify!($atomic_type), "::new(20);
3263assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20);
3264assert_eq!(foo.load(Ordering::SeqCst), 10);
3265```"),
3266                #[inline]
3267                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3268                pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type {
3269                    self.inner.fetch_sub(val, order)
3270                }
3271            }
3272
3273            doc_comment! {
3274                concat!("Subtracts from the current value.
3275
3276This operation wraps around on overflow.
3277
3278Unlike `fetch_sub`, this does not return the previous value.
3279
3280`sub` takes an [`Ordering`] argument which describes the memory ordering
3281of this operation. All ordering modes are possible. Note that using
3282[`Acquire`] makes the store part of this operation [`Relaxed`], and
3283using [`Release`] makes the load part [`Relaxed`].
3284
3285This function may generate more efficient code than `fetch_sub` on some platforms.
3286
3287- MSP430: `sub` instead of disabling interrupts ({8,16}-bit atomics)
3288
3289# Examples
3290
3291```
3292use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3293
3294let foo = ", stringify!($atomic_type), "::new(20);
3295foo.sub(10, Ordering::SeqCst);
3296assert_eq!(foo.load(Ordering::SeqCst), 10);
3297```"),
3298                #[inline]
3299                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3300                pub fn sub(&self, val: $int_type, order: Ordering) {
3301                    self.inner.sub(val, order);
3302                }
3303            }
3304            } // $cfg_has_atomic_cas_or_amo32_or_8!
3305
3306            doc_comment! {
3307                concat!("Bitwise \"and\" with the current value.
3308
3309Performs a bitwise \"and\" operation on the current value and the argument `val`, and
3310sets the new value to the result.
3311
3312Returns the previous value.
3313
3314`fetch_and` takes an [`Ordering`] argument which describes the memory ordering
3315of this operation. All ordering modes are possible. Note that using
3316[`Acquire`] makes the store part of this operation [`Relaxed`], and
3317using [`Release`] makes the load part [`Relaxed`].
3318
3319# Examples
3320
3321```
3322use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3323
3324let foo = ", stringify!($atomic_type), "::new(0b101101);
3325assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
3326assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
3327```"),
3328                #[inline]
3329                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3330                pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type {
3331                    self.inner.fetch_and(val, order)
3332                }
3333            }
3334
3335            doc_comment! {
3336                concat!("Bitwise \"and\" with the current value.
3337
3338Performs a bitwise \"and\" operation on the current value and the argument `val`, and
3339sets the new value to the result.
3340
3341Unlike `fetch_and`, this does not return the previous value.
3342
3343`and` takes an [`Ordering`] argument which describes the memory ordering
3344of this operation. All ordering modes are possible. Note that using
3345[`Acquire`] makes the store part of this operation [`Relaxed`], and
3346using [`Release`] makes the load part [`Relaxed`].
3347
3348This function may generate more efficient code than `fetch_and` on some platforms.
3349
3350- x86/x86_64: `lock and` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3351- MSP430: `and` instead of disabling interrupts ({8,16}-bit atomics)
3352
3353Note: On x86/x86_64, the use of either function should not usually
3354affect the generated code, because LLVM can properly optimize the case
3355where the result is unused.
3356
3357# Examples
3358
3359```
3360use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3361
3362let foo = ", stringify!($atomic_type), "::new(0b101101);
3363foo.and(0b110011, Ordering::SeqCst);
3364assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
3365```"),
3366                #[inline]
3367                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3368                pub fn and(&self, val: $int_type, order: Ordering) {
3369                    self.inner.and(val, order);
3370                }
3371            }
3372
3373            cfg_has_atomic_cas! {
3374            doc_comment! {
3375                concat!("Bitwise \"nand\" with the current value.
3376
3377Performs a bitwise \"nand\" operation on the current value and the argument `val`, and
3378sets the new value to the result.
3379
3380Returns the previous value.
3381
3382`fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
3383of this operation. All ordering modes are possible. Note that using
3384[`Acquire`] makes the store part of this operation [`Relaxed`], and
3385using [`Release`] makes the load part [`Relaxed`].
3386
3387# Examples
3388
3389```
3390use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3391
3392let foo = ", stringify!($atomic_type), "::new(0x13);
3393assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13);
3394assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
3395```"),
3396                #[inline]
3397                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3398                pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type {
3399                    self.inner.fetch_nand(val, order)
3400                }
3401            }
3402            } // cfg_has_atomic_cas!
3403
3404            doc_comment! {
3405                concat!("Bitwise \"or\" with the current value.
3406
3407Performs a bitwise \"or\" operation on the current value and the argument `val`, and
3408sets the new value to the result.
3409
3410Returns the previous value.
3411
3412`fetch_or` takes an [`Ordering`] argument which describes the memory ordering
3413of this operation. All ordering modes are possible. Note that using
3414[`Acquire`] makes the store part of this operation [`Relaxed`], and
3415using [`Release`] makes the load part [`Relaxed`].
3416
3417# Examples
3418
3419```
3420use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3421
3422let foo = ", stringify!($atomic_type), "::new(0b101101);
3423assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
3424assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
3425```"),
3426                #[inline]
3427                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3428                pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type {
3429                    self.inner.fetch_or(val, order)
3430                }
3431            }
3432
3433            doc_comment! {
3434                concat!("Bitwise \"or\" with the current value.
3435
3436Performs a bitwise \"or\" operation on the current value and the argument `val`, and
3437sets the new value to the result.
3438
3439Unlike `fetch_or`, this does not return the previous value.
3440
3441`or` takes an [`Ordering`] argument which describes the memory ordering
3442of this operation. All ordering modes are possible. Note that using
3443[`Acquire`] makes the store part of this operation [`Relaxed`], and
3444using [`Release`] makes the load part [`Relaxed`].
3445
3446This function may generate more efficient code than `fetch_or` on some platforms.
3447
3448- x86/x86_64: `lock or` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3449- MSP430: `or` instead of disabling interrupts ({8,16}-bit atomics)
3450
3451Note: On x86/x86_64, the use of either function should not usually
3452affect the generated code, because LLVM can properly optimize the case
3453where the result is unused.
3454
3455# Examples
3456
3457```
3458use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3459
3460let foo = ", stringify!($atomic_type), "::new(0b101101);
3461foo.or(0b110011, Ordering::SeqCst);
3462assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
3463```"),
3464                #[inline]
3465                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3466                pub fn or(&self, val: $int_type, order: Ordering) {
3467                    self.inner.or(val, order);
3468                }
3469            }
3470
3471            doc_comment! {
3472                concat!("Bitwise \"xor\" with the current value.
3473
3474Performs a bitwise \"xor\" operation on the current value and the argument `val`, and
3475sets the new value to the result.
3476
3477Returns the previous value.
3478
3479`fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
3480of this operation. All ordering modes are possible. Note that using
3481[`Acquire`] makes the store part of this operation [`Relaxed`], and
3482using [`Release`] makes the load part [`Relaxed`].
3483
3484# Examples
3485
3486```
3487use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3488
3489let foo = ", stringify!($atomic_type), "::new(0b101101);
3490assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101);
3491assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
3492```"),
3493                #[inline]
3494                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3495                pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type {
3496                    self.inner.fetch_xor(val, order)
3497                }
3498            }
3499
3500            doc_comment! {
3501                concat!("Bitwise \"xor\" with the current value.
3502
3503Performs a bitwise \"xor\" operation on the current value and the argument `val`, and
3504sets the new value to the result.
3505
3506Unlike `fetch_xor`, this does not return the previous value.
3507
3508`xor` takes an [`Ordering`] argument which describes the memory ordering
3509of this operation. All ordering modes are possible. Note that using
3510[`Acquire`] makes the store part of this operation [`Relaxed`], and
3511using [`Release`] makes the load part [`Relaxed`].
3512
3513This function may generate more efficient code than `fetch_xor` on some platforms.
3514
3515- x86/x86_64: `lock xor` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3516- MSP430: `xor` instead of disabling interrupts ({8,16}-bit atomics)
3517
3518Note: On x86/x86_64, the use of either function should not usually
3519affect the generated code, because LLVM can properly optimize the case
3520where the result is unused.
3521
3522# Examples
3523
3524```
3525use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3526
3527let foo = ", stringify!($atomic_type), "::new(0b101101);
3528foo.xor(0b110011, Ordering::SeqCst);
3529assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
3530```"),
3531                #[inline]
3532                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3533                pub fn xor(&self, val: $int_type, order: Ordering) {
3534                    self.inner.xor(val, order);
3535                }
3536            }
3537
3538            cfg_has_atomic_cas! {
3539            doc_comment! {
3540                concat!("Fetches the value, and applies a function to it that returns an optional
3541new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
3542`Err(previous_value)`.
3543
3544Note: This may call the function multiple times if the value has been changed from other threads in
3545the meantime, as long as the function returns `Some(_)`, but the function will have been applied
3546only once to the stored value.
3547
3548`fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3549The first describes the required ordering for when the operation finally succeeds while the second
3550describes the required ordering for loads. These correspond to the success and failure orderings of
3551[`compare_exchange`](Self::compare_exchange) respectively.
3552
3553Using [`Acquire`] as success ordering makes the store part
3554of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3555[`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3556
3557# Panics
3558
3559Panics if `fetch_order` is [`Release`] or [`AcqRel`].
3560
3561# Considerations
3562
3563This method is not magic; it is not provided by the hardware.
3564It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
3565and suffers from the same drawbacks.
3566In particular, this method will not circumvent the [ABA Problem].
3567
3568[ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3569
3570# Examples
3571
3572```
3573use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3574
3575let x = ", stringify!($atomic_type), "::new(7);
3576assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
3577assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
3578assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
3579assert_eq!(x.load(Ordering::SeqCst), 9);
3580```"),
3581                #[inline]
3582                #[cfg_attr(
3583                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3584                    track_caller
3585                )]
3586                pub fn fetch_update<F>(
3587                    &self,
3588                    set_order: Ordering,
3589                    fetch_order: Ordering,
3590                    mut f: F,
3591                ) -> Result<$int_type, $int_type>
3592                where
3593                    F: FnMut($int_type) -> Option<$int_type>,
3594                {
3595                    let mut prev = self.load(fetch_order);
3596                    while let Some(next) = f(prev) {
3597                        match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
3598                            x @ Ok(_) => return x,
3599                            Err(next_prev) => prev = next_prev,
3600                        }
3601                    }
3602                    Err(prev)
3603                }
3604            }
3605            } // cfg_has_atomic_cas!
3606
3607            $cfg_has_atomic_cas_or_amo32_or_8! {
3608            doc_comment! {
3609                concat!("Maximum with the current value.
3610
3611Finds the maximum of the current value and the argument `val`, and
3612sets the new value to the result.
3613
3614Returns the previous value.
3615
3616`fetch_max` takes an [`Ordering`] argument which describes the memory ordering
3617of this operation. All ordering modes are possible. Note that using
3618[`Acquire`] makes the store part of this operation [`Relaxed`], and
3619using [`Release`] makes the load part [`Relaxed`].
3620
3621# Examples
3622
3623```
3624use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3625
3626let foo = ", stringify!($atomic_type), "::new(23);
3627assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
3628assert_eq!(foo.load(Ordering::SeqCst), 42);
3629```
3630
3631If you want to obtain the maximum value in one step, you can use the following:
3632
3633```
3634use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3635
3636let foo = ", stringify!($atomic_type), "::new(23);
3637let bar = 42;
3638let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
3639assert_eq!(max_foo, 42);
3640```"),
3641                #[inline]
3642                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3643                pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type {
3644                    self.inner.fetch_max(val, order)
3645                }
3646            }
3647
3648            doc_comment! {
3649                concat!("Minimum with the current value.
3650
3651Finds the minimum of the current value and the argument `val`, and
3652sets the new value to the result.
3653
3654Returns the previous value.
3655
3656`fetch_min` takes an [`Ordering`] argument which describes the memory ordering
3657of this operation. All ordering modes are possible. Note that using
3658[`Acquire`] makes the store part of this operation [`Relaxed`], and
3659using [`Release`] makes the load part [`Relaxed`].
3660
3661# Examples
3662
3663```
3664use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3665
3666let foo = ", stringify!($atomic_type), "::new(23);
3667assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
3668assert_eq!(foo.load(Ordering::Relaxed), 23);
3669assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
3670assert_eq!(foo.load(Ordering::Relaxed), 22);
3671```
3672
3673If you want to obtain the minimum value in one step, you can use the following:
3674
3675```
3676use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3677
3678let foo = ", stringify!($atomic_type), "::new(23);
3679let bar = 12;
3680let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
3681assert_eq!(min_foo, 12);
3682```"),
3683                #[inline]
3684                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3685                pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type {
3686                    self.inner.fetch_min(val, order)
3687                }
3688            }
3689            } // $cfg_has_atomic_cas_or_amo32_or_8!
3690
3691            doc_comment! {
3692                concat!("Sets the bit at the specified bit-position to 1.
3693
3694Returns `true` if the specified bit was previously set to 1.
3695
3696`bit_set` takes an [`Ordering`] argument which describes the memory ordering
3697of this operation. All ordering modes are possible. Note that using
3698[`Acquire`] makes the store part of this operation [`Relaxed`], and
3699using [`Release`] makes the load part [`Relaxed`].
3700
3701This corresponds to x86's `lock bts`, and the implementation uses it on x86/x86_64.
3702
3703# Examples
3704
3705```
3706use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3707
3708let foo = ", stringify!($atomic_type), "::new(0b0000);
3709assert!(!foo.bit_set(0, Ordering::Relaxed));
3710assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
3711assert!(foo.bit_set(0, Ordering::Relaxed));
3712assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
3713```"),
3714                #[inline]
3715                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3716                pub fn bit_set(&self, bit: u32, order: Ordering) -> bool {
3717                    self.inner.bit_set(bit, order)
3718                }
3719            }
3720
3721            doc_comment! {
3722                concat!("Clears the bit at the specified bit-position to 0.
3723
3724Returns `true` if the specified bit was previously set to 1.
3725
3726`bit_clear` takes an [`Ordering`] argument which describes the memory ordering
3727of this operation. All ordering modes are possible. Note that using
3728[`Acquire`] makes the store part of this operation [`Relaxed`], and
3729using [`Release`] makes the load part [`Relaxed`].
3730
3731This corresponds to x86's `lock btr`, and the implementation uses it on x86/x86_64.
3732
3733# Examples
3734
3735```
3736use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3737
3738let foo = ", stringify!($atomic_type), "::new(0b0001);
3739assert!(foo.bit_clear(0, Ordering::Relaxed));
3740assert_eq!(foo.load(Ordering::Relaxed), 0b0000);
3741```"),
3742                #[inline]
3743                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3744                pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool {
3745                    self.inner.bit_clear(bit, order)
3746                }
3747            }
3748
3749            doc_comment! {
3750                concat!("Toggles the bit at the specified bit-position.
3751
3752Returns `true` if the specified bit was previously set to 1.
3753
3754`bit_toggle` takes an [`Ordering`] argument which describes the memory ordering
3755of this operation. All ordering modes are possible. Note that using
3756[`Acquire`] makes the store part of this operation [`Relaxed`], and
3757using [`Release`] makes the load part [`Relaxed`].
3758
3759This corresponds to x86's `lock btc`, and the implementation uses it on x86/x86_64.
3760
3761# Examples
3762
3763```
3764use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3765
3766let foo = ", stringify!($atomic_type), "::new(0b0000);
3767assert!(!foo.bit_toggle(0, Ordering::Relaxed));
3768assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
3769assert!(foo.bit_toggle(0, Ordering::Relaxed));
3770assert_eq!(foo.load(Ordering::Relaxed), 0b0000);
3771```"),
3772                #[inline]
3773                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3774                pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool {
3775                    self.inner.bit_toggle(bit, order)
3776                }
3777            }
3778
3779            doc_comment! {
3780                concat!("Logically negates the current value, and sets the new value to the result.
3781
3782Returns the previous value.
3783
3784`fetch_not` takes an [`Ordering`] argument which describes the memory ordering
3785of this operation. All ordering modes are possible. Note that using
3786[`Acquire`] makes the store part of this operation [`Relaxed`], and
3787using [`Release`] makes the load part [`Relaxed`].
3788
3789# Examples
3790
3791```
3792use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3793
3794let foo = ", stringify!($atomic_type), "::new(0);
3795assert_eq!(foo.fetch_not(Ordering::Relaxed), 0);
3796assert_eq!(foo.load(Ordering::Relaxed), !0);
3797```"),
3798                #[inline]
3799                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3800                pub fn fetch_not(&self, order: Ordering) -> $int_type {
3801                    self.inner.fetch_not(order)
3802                }
3803            }
3804
3805            doc_comment! {
3806                concat!("Logically negates the current value, and sets the new value to the result.
3807
3808Unlike `fetch_not`, this does not return the previous value.
3809
3810`not` takes an [`Ordering`] argument which describes the memory ordering
3811of this operation. All ordering modes are possible. Note that using
3812[`Acquire`] makes the store part of this operation [`Relaxed`], and
3813using [`Release`] makes the load part [`Relaxed`].
3814
3815This function may generate more efficient code than `fetch_not` on some platforms.
3816
3817- x86/x86_64: `lock not` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3818- MSP430: `inv` instead of disabling interrupts ({8,16}-bit atomics)
3819
3820# Examples
3821
3822```
3823use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3824
3825let foo = ", stringify!($atomic_type), "::new(0);
3826foo.not(Ordering::Relaxed);
3827assert_eq!(foo.load(Ordering::Relaxed), !0);
3828```"),
3829                #[inline]
3830                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3831                pub fn not(&self, order: Ordering) {
3832                    self.inner.not(order);
3833                }
3834            }
3835
3836            cfg_has_atomic_cas! {
3837            doc_comment! {
                concat!("Negates the current value, and sets the new value to the result.

Returns the previous value.

`fetch_neg` takes an [`Ordering`] argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[`Acquire`] makes the store part of this operation [`Relaxed`], and
using [`Release`] makes the load part [`Relaxed`].

# Examples

```
use portable_atomic::{", stringify!($atomic_type), ", Ordering};

let foo = ", stringify!($atomic_type), "::new(5);
assert_eq!(foo.fetch_neg(Ordering::Relaxed), 5);
assert_eq!(foo.load(Ordering::Relaxed), 5_", stringify!($int_type), ".wrapping_neg());
assert_eq!(foo.fetch_neg(Ordering::Relaxed), 5_", stringify!($int_type), ".wrapping_neg());
assert_eq!(foo.load(Ordering::Relaxed), 5);
```"),
                #[inline]
                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
                pub fn fetch_neg(&self, order: Ordering) -> $int_type {
                    self.inner.fetch_neg(order)
                }
            }

            doc_comment! {
                concat!("Negates the current value, and sets the new value to the result.

Unlike `fetch_neg`, this does not return the previous value.

`neg` takes an [`Ordering`] argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[`Acquire`] makes the store part of this operation [`Relaxed`], and
using [`Release`] makes the load part [`Relaxed`].

This function may generate more efficient code than `fetch_neg` on some platforms.

- x86/x86_64: `lock neg` instead of a `cmpxchg` loop ({8,16,32}-bit atomics on x86, and {8,16,32,64}-bit atomics on x86_64)

# Examples

```
use portable_atomic::{", stringify!($atomic_type), ", Ordering};

let foo = ", stringify!($atomic_type), "::new(5);
foo.neg(Ordering::Relaxed);
assert_eq!(foo.load(Ordering::Relaxed), 5_", stringify!($int_type), ".wrapping_neg());
foo.neg(Ordering::Relaxed);
assert_eq!(foo.load(Ordering::Relaxed), 5);
```"),
                #[inline]
                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
                pub fn neg(&self, order: Ordering) {
                    self.inner.neg(order);
                }
            }
            } // cfg_has_atomic_cas!
            } // cfg_has_atomic_cas_or_amo32!

            const_fn! {
                const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
                /// Returns a mutable pointer to the underlying integer.
                ///
                /// Returning an `*mut` pointer from a shared reference to this atomic is
                /// safe because the atomic types work with interior mutability. Any use of
                /// the returned raw pointer requires an `unsafe` block and has to uphold
                /// the safety requirements. If there is concurrent access, note the following
                /// additional safety requirements:
                ///
                /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
                ///   operations on it must be atomic.
                /// - Otherwise, any concurrent operations on it must be compatible with
                ///   operations performed by this atomic type.
                ///
                /// This is `const fn` on Rust 1.58+.
                #[inline]
                pub const fn as_ptr(&self) -> *mut $int_type {
                    self.inner.as_ptr()
                }
            }
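            // A usage sketch for `as_ptr` (hypothetical; shown for the `AtomicI32`
            // expansion of this macro). As a doctest it would read:
            //
            //     use portable_atomic::{AtomicI32, Ordering};
            //     let a = AtomicI32::new(1);
            //     // SAFETY: there is no concurrent access to `a` at this point.
            //     unsafe { *a.as_ptr() = 2 };
            //     assert_eq!(a.load(Ordering::Relaxed), 2);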
        }
        // See https://github.com/taiki-e/portable-atomic/issues/180
        #[cfg(not(feature = "require-cas"))]
        cfg_no_atomic_cas! {
        #[doc(hidden)]
        #[allow(unused_variables, clippy::unused_self, clippy::extra_unused_lifetimes)]
        impl<'a> $atomic_type {
            $cfg_no_atomic_cas_or_amo32_or_8! {
            #[inline]
            pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type
            where
                &'a Self: HasSwap,
            {
                unimplemented!()
            }
            } // $cfg_no_atomic_cas_or_amo32_or_8!
            #[inline]
            pub fn compare_exchange(
                &self,
                current: $int_type,
                new: $int_type,
                success: Ordering,
                failure: Ordering,
            ) -> Result<$int_type, $int_type>
            where
                &'a Self: HasCompareExchange,
            {
                unimplemented!()
            }
            #[inline]
            pub fn compare_exchange_weak(
                &self,
                current: $int_type,
                new: $int_type,
                success: Ordering,
                failure: Ordering,
            ) -> Result<$int_type, $int_type>
            where
                &'a Self: HasCompareExchangeWeak,
            {
                unimplemented!()
            }
            $cfg_no_atomic_cas_or_amo32_or_8! {
            #[inline]
            pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchAdd,
            {
                unimplemented!()
            }
            #[inline]
            pub fn add(&self, val: $int_type, order: Ordering)
            where
                &'a Self: HasAdd,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchSub,
            {
                unimplemented!()
            }
            #[inline]
            pub fn sub(&self, val: $int_type, order: Ordering)
            where
                &'a Self: HasSub,
            {
                unimplemented!()
            }
            } // $cfg_no_atomic_cas_or_amo32_or_8!
            cfg_no_atomic_cas_or_amo32! {
            #[inline]
            pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchAnd,
            {
                unimplemented!()
            }
            #[inline]
            pub fn and(&self, val: $int_type, order: Ordering)
            where
                &'a Self: HasAnd,
            {
                unimplemented!()
            }
            } // cfg_no_atomic_cas_or_amo32!
            #[inline]
            pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchNand,
            {
                unimplemented!()
            }
            cfg_no_atomic_cas_or_amo32! {
            #[inline]
            pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchOr,
            {
                unimplemented!()
            }
            #[inline]
            pub fn or(&self, val: $int_type, order: Ordering)
            where
                &'a Self: HasOr,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchXor,
            {
                unimplemented!()
            }
            #[inline]
            pub fn xor(&self, val: $int_type, order: Ordering)
            where
                &'a Self: HasXor,
            {
                unimplemented!()
            }
            } // cfg_no_atomic_cas_or_amo32!
            #[inline]
            pub fn fetch_update<F>(
                &self,
                set_order: Ordering,
                fetch_order: Ordering,
                f: F,
            ) -> Result<$int_type, $int_type>
            where
                F: FnMut($int_type) -> Option<$int_type>,
                &'a Self: HasFetchUpdate,
            {
                unimplemented!()
            }
            $cfg_no_atomic_cas_or_amo32_or_8! {
            #[inline]
            pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchMax,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchMin,
            {
                unimplemented!()
            }
            } // $cfg_no_atomic_cas_or_amo32_or_8!
            cfg_no_atomic_cas_or_amo32! {
            #[inline]
            pub fn bit_set(&self, bit: u32, order: Ordering) -> bool
            where
                &'a Self: HasBitSet,
            {
                unimplemented!()
            }
            #[inline]
            pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool
            where
                &'a Self: HasBitClear,
            {
                unimplemented!()
            }
            #[inline]
            pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool
            where
                &'a Self: HasBitToggle,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_not(&self, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchNot,
            {
                unimplemented!()
            }
            #[inline]
            pub fn not(&self, order: Ordering)
            where
                &'a Self: HasNot,
            {
                unimplemented!()
            }
            } // cfg_no_atomic_cas_or_amo32!
            #[inline]
            pub fn fetch_neg(&self, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchNeg,
            {
                unimplemented!()
            }
            #[inline]
            pub fn neg(&self, order: Ordering)
            where
                &'a Self: HasNeg,
            {
                unimplemented!()
            }
        }
        } // cfg_no_atomic_cas!
        $(
            #[$cfg_float]
            atomic_int!(float,
                #[$cfg_float] $atomic_float_type, $float_type, $atomic_type, $int_type, $align
            );
        )?
    };

    // AtomicF* impls
    (float,
        #[$cfg_float:meta]
        $atomic_type:ident,
        $float_type:ident,
        $atomic_int_type:ident,
        $int_type:ident,
        $align:literal
    ) => {
        doc_comment! {
            concat!("A floating point type which can be safely shared between threads.

This type has the same in-memory representation as the underlying floating point type,
[`", stringify!($float_type), "`].
"
            ),
            #[cfg_attr(docsrs, doc($cfg_float))]
            // We can use #[repr(transparent)] here, but #[repr(C, align(N))]
            // will show clearer docs.
            #[repr(C, align($align))]
            pub struct $atomic_type {
                inner: imp::float::$atomic_type,
            }
        }
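        // A sketch illustrating the representation guarantee stated above, shown
        // for the `AtomicF32` expansion of this macro:
        //
        //     use portable_atomic::AtomicF32;
        //     assert_eq!(core::mem::size_of::<AtomicF32>(), core::mem::size_of::<f32>());
        //     // The alignment is the atomic's required alignment, which is at least
        //     // that of the underlying float.
        //     assert!(core::mem::align_of::<AtomicF32>() >= core::mem::align_of::<f32>());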

        impl Default for $atomic_type {
            #[inline]
            fn default() -> Self {
                Self::new($float_type::default())
            }
        }

        impl From<$float_type> for $atomic_type {
            #[inline]
            fn from(v: $float_type) -> Self {
                Self::new(v)
            }
        }

        // UnwindSafe is implicitly implemented.
        #[cfg(not(portable_atomic_no_core_unwind_safe))]
        impl core::panic::RefUnwindSafe for $atomic_type {}
        #[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
        impl std::panic::RefUnwindSafe for $atomic_type {}

        impl_debug_and_serde!($atomic_type);

        impl $atomic_type {
            /// Creates a new atomic float.
            #[inline]
            #[must_use]
            pub const fn new(v: $float_type) -> Self {
                static_assert_layout!($atomic_type, $float_type);
                Self { inner: imp::float::$atomic_type::new(v) }
            }

            // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
            #[cfg(not(portable_atomic_no_const_mut_refs))]
            doc_comment! {
                concat!("Creates a new reference to an atomic float from a pointer.

This is `const fn` on Rust 1.83+.

# Safety

* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
  can be bigger than `align_of::<", stringify!($float_type), ">()`).
* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
  behind `ptr` must have a happens-before relationship with atomic accesses via
  the returned value (or vice-versa).
  * In other words, time periods where the value is accessed atomically may not
    overlap with periods where the value is accessed non-atomically.
  * This requirement is trivially satisfied if `ptr` is never used non-atomically
    for the duration of lifetime `'a`. Most use cases should be able to follow
    this guideline.
  * This requirement is also trivially satisfied if all accesses (atomic or not) are
    done from the same thread.
* If this atomic type is *not* lock-free:
  * Any accesses to the value behind `ptr` must have a happens-before relationship
    with accesses via the returned value (or vice-versa).
  * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
    be compatible with operations performed by this atomic type.
* This method must not be used to create overlapping or mixed-size atomic
  accesses, as these are not supported by the memory model.

[valid]: core::ptr#safety"),
                #[inline]
                #[must_use]
                pub const unsafe fn from_ptr<'a>(ptr: *mut $float_type) -> &'a Self {
                    #[allow(clippy::cast_ptr_alignment)]
                    // SAFETY: guaranteed by the caller
                    unsafe { &*(ptr as *mut Self) }
                }
            }
            #[cfg(portable_atomic_no_const_mut_refs)]
            doc_comment! {
                concat!("Creates a new reference to an atomic float from a pointer.

This is `const fn` on Rust 1.83+.

# Safety

* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
  can be bigger than `align_of::<", stringify!($float_type), ">()`).
* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
  behind `ptr` must have a happens-before relationship with atomic accesses via
  the returned value (or vice-versa).
  * In other words, time periods where the value is accessed atomically may not
    overlap with periods where the value is accessed non-atomically.
  * This requirement is trivially satisfied if `ptr` is never used non-atomically
    for the duration of lifetime `'a`. Most use cases should be able to follow
    this guideline.
  * This requirement is also trivially satisfied if all accesses (atomic or not) are
    done from the same thread.
* If this atomic type is *not* lock-free:
  * Any accesses to the value behind `ptr` must have a happens-before relationship
    with accesses via the returned value (or vice-versa).
  * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
    be compatible with operations performed by this atomic type.
* This method must not be used to create overlapping or mixed-size atomic
  accesses, as these are not supported by the memory model.

[valid]: core::ptr#safety"),
                #[inline]
                #[must_use]
                pub unsafe fn from_ptr<'a>(ptr: *mut $float_type) -> &'a Self {
                    #[allow(clippy::cast_ptr_alignment)]
                    // SAFETY: guaranteed by the caller
                    unsafe { &*(ptr as *mut Self) }
                }
            }
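            // A `from_ptr` usage sketch, shown for the `AtomicF32` expansion of
            // this macro:
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let mut v = 1.0_f32;
            //     // SAFETY: `v` outlives the returned reference, is properly aligned
            //     // (f32 and AtomicF32 share the same alignment), and is only
            //     // accessed through the returned atomic below.
            //     let a = unsafe { AtomicF32::from_ptr(&mut v) };
            //     a.store(2.0, Ordering::Relaxed);
            //     assert_eq!(a.load(Ordering::Relaxed), 2.0);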

            /// Returns `true` if operations on values of this type are lock-free.
            ///
            /// If the compiler or the platform doesn't support the necessary
            /// atomic instructions, global locks for every potentially
            /// concurrent atomic operation will be used.
            #[inline]
            #[must_use]
            pub fn is_lock_free() -> bool {
                <imp::float::$atomic_type>::is_lock_free()
            }

            /// Returns `true` if operations on values of this type are lock-free.
            ///
            /// If the compiler or the platform doesn't support the necessary
            /// atomic instructions, global locks for every potentially
            /// concurrent atomic operation will be used.
            ///
            /// **Note:** If the atomic operation relies on dynamic CPU feature detection,
            /// this type may be lock-free even if the function returns false.
            #[inline]
            #[must_use]
            pub const fn is_always_lock_free() -> bool {
                <imp::float::$atomic_type>::IS_ALWAYS_LOCK_FREE
            }
            #[cfg(test)]
            const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();

            const_fn! {
                const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
                /// Returns a mutable reference to the underlying float.
                ///
                /// This is safe because the mutable reference guarantees that no other threads are
                /// concurrently accessing the atomic data.
                ///
                /// This is `const fn` on Rust 1.83+.
                #[inline]
                pub const fn get_mut(&mut self) -> &mut $float_type {
                    // SAFETY: the mutable reference guarantees unique ownership.
                    unsafe { &mut *self.as_ptr() }
                }
            }
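            // A `get_mut` usage sketch, shown for the `AtomicF32` expansion of
            // this macro:
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let mut a = AtomicF32::new(1.0);
            //     *a.get_mut() = 2.0; // `&mut self` proves exclusive access
            //     assert_eq!(a.load(Ordering::Relaxed), 2.0);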

            // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
            // https://github.com/rust-lang/rust/issues/76314

            const_fn! {
                const_if: #[cfg(not(portable_atomic_no_const_transmute))];
                /// Consumes the atomic and returns the contained value.
                ///
                /// This is safe because passing `self` by value guarantees that no other threads are
                /// concurrently accessing the atomic data.
                ///
                /// This is `const fn` on Rust 1.56+.
                #[inline]
                pub const fn into_inner(self) -> $float_type {
                    // SAFETY: $atomic_type and $float_type have the same size and in-memory representations,
                    // so they can be safely transmuted.
                    // (const UnsafeCell::into_inner is unstable)
                    unsafe { core::mem::transmute(self) }
                }
            }
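            // An `into_inner` usage sketch, shown for the `AtomicF32` expansion
            // of this macro:
            //
            //     use portable_atomic::AtomicF32;
            //     let a = AtomicF32::new(5.0);
            //     assert_eq!(a.into_inner(), 5.0);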

            /// Loads a value from the atomic float.
            ///
            /// `load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
            /// Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
            ///
            /// # Panics
            ///
            /// Panics if `order` is [`Release`] or [`AcqRel`].
            #[inline]
            #[cfg_attr(
                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
                track_caller
            )]
            pub fn load(&self, order: Ordering) -> $float_type {
                self.inner.load(order)
            }

            /// Stores a value into the atomic float.
            ///
            /// `store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
            /// Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
            ///
            /// # Panics
            ///
            /// Panics if `order` is [`Acquire`] or [`AcqRel`].
            #[inline]
            #[cfg_attr(
                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
                track_caller
            )]
            pub fn store(&self, val: $float_type, order: Ordering) {
                self.inner.store(val, order)
            }
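            // A `load`/`store` usage sketch with the orderings permitted above,
            // shown for the `AtomicF32` expansion of this macro:
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let a = AtomicF32::new(5.0);
            //     assert_eq!(a.load(Ordering::Acquire), 5.0);
            //     a.store(10.0, Ordering::Release);
            //     assert_eq!(a.load(Ordering::Relaxed), 10.0);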

            cfg_has_atomic_cas_or_amo32! {
            /// Stores a value into the atomic float, returning the previous value.
            ///
            /// `swap` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            #[inline]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn swap(&self, val: $float_type, order: Ordering) -> $float_type {
                self.inner.swap(val, order)
            }
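            // A `swap` usage sketch, shown for the `AtomicF32` expansion of this
            // macro:
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let a = AtomicF32::new(5.0);
            //     assert_eq!(a.swap(10.0, Ordering::AcqRel), 5.0);
            //     assert_eq!(a.load(Ordering::Relaxed), 10.0);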

            cfg_has_atomic_cas! {
            /// Stores a value into the atomic float if the current value is the same as
            /// the `current` value.
            ///
            /// The return value is a result indicating whether the new value was written and
            /// containing the previous value. On success this value is guaranteed to be equal to
            /// `current`.
            ///
            /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
            /// ordering of this operation. `success` describes the required ordering for the
            /// read-modify-write operation that takes place if the comparison with `current` succeeds.
            /// `failure` describes the required ordering for the load operation that takes place when
            /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
            /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
            /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
            ///
            /// # Panics
            ///
            /// Panics if `failure` is [`Release`] or [`AcqRel`].
            #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
            #[inline]
            #[cfg_attr(
                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
                track_caller
            )]
            pub fn compare_exchange(
                &self,
                current: $float_type,
                new: $float_type,
                success: Ordering,
                failure: Ordering,
            ) -> Result<$float_type, $float_type> {
                self.inner.compare_exchange(current, new, success, failure)
            }
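            // A `compare_exchange` usage sketch, shown for the `AtomicF32`
            // expansion of this macro:
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let a = AtomicF32::new(5.0);
            //     assert_eq!(
            //         a.compare_exchange(5.0, 10.0, Ordering::AcqRel, Ordering::Relaxed),
            //         Ok(5.0),
            //     );
            //     assert_eq!(
            //         a.compare_exchange(6.0, 12.0, Ordering::SeqCst, Ordering::Relaxed),
            //         Err(10.0),
            //     );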

            /// Stores a value into the atomic float if the current value is the same as
            /// the `current` value.
            ///
            /// Unlike [`compare_exchange`](Self::compare_exchange),
            /// this function is allowed to spuriously fail even
            /// when the comparison succeeds, which can result in more efficient code on some
            /// platforms. The return value is a result indicating whether the new value was
            /// written and containing the previous value.
            ///
            /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
            /// ordering of this operation. `success` describes the required ordering for the
            /// read-modify-write operation that takes place if the comparison with `current` succeeds.
            /// `failure` describes the required ordering for the load operation that takes place when
            /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
            /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
            /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
            ///
            /// # Panics
            ///
            /// Panics if `failure` is [`Release`] or [`AcqRel`].
            #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
            #[inline]
            #[cfg_attr(
                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
                track_caller
            )]
            pub fn compare_exchange_weak(
                &self,
                current: $float_type,
                new: $float_type,
                success: Ordering,
                failure: Ordering,
            ) -> Result<$float_type, $float_type> {
                self.inner.compare_exchange_weak(current, new, success, failure)
            }
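            // The usual retry loop around `compare_exchange_weak` (a spurious
            // failure simply goes around again), shown as a sketch for the
            // `AtomicF32` expansion of this macro:
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let val = AtomicF32::new(4.0);
            //     let mut old = val.load(Ordering::Relaxed);
            //     loop {
            //         let new = old * 2.0;
            //         match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
            //             Ok(_) => break,
            //             Err(x) => old = x,
            //         }
            //     }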

            /// Adds to the current value, returning the previous value.
            ///
            /// This operation follows IEEE 754 arithmetic: on overflow the result
            /// saturates to infinity rather than wrapping around.
            ///
            /// `fetch_add` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            #[inline]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_add(&self, val: $float_type, order: Ordering) -> $float_type {
                self.inner.fetch_add(val, order)
            }

            /// Subtracts from the current value, returning the previous value.
            ///
            /// This operation follows IEEE 754 arithmetic: on overflow the result
            /// saturates to infinity rather than wrapping around.
            ///
            /// `fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            #[inline]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_sub(&self, val: $float_type, order: Ordering) -> $float_type {
                self.inner.fetch_sub(val, order)
            }
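            // A `fetch_add`/`fetch_sub` usage sketch, shown for the `AtomicF32`
            // expansion of this macro:
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let a = AtomicF32::new(5.0);
            //     assert_eq!(a.fetch_add(10.0, Ordering::SeqCst), 5.0);
            //     assert_eq!(a.fetch_sub(3.0, Ordering::SeqCst), 15.0);
            //     assert_eq!(a.load(Ordering::Relaxed), 12.0);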

            /// Fetches the value, and applies a function to it that returns an optional
            /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
            /// `Err(previous_value)`.
            ///
            /// Note: This may call the function multiple times if the value has been changed from other threads in
            /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied
            /// only once to the stored value.
            ///
            /// `fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
            /// The first describes the required ordering for when the operation finally succeeds while the second
            /// describes the required ordering for loads. These correspond to the success and failure orderings of
            /// [`compare_exchange`](Self::compare_exchange) respectively.
            ///
            /// Using [`Acquire`] as success ordering makes the store part
            /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
            /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
            ///
            /// # Panics
            ///
            /// Panics if `fetch_order` is [`Release`] or [`AcqRel`].
            ///
            /// # Considerations
            ///
            /// This method is not magic; it is not provided by the hardware.
            /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
            /// and suffers from the same drawbacks.
            /// In particular, this method will not circumvent the [ABA Problem].
            ///
            /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
            #[inline]
            #[cfg_attr(
                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
                track_caller
            )]
            pub fn fetch_update<F>(
                &self,
                set_order: Ordering,
                fetch_order: Ordering,
                mut f: F,
            ) -> Result<$float_type, $float_type>
            where
                F: FnMut($float_type) -> Option<$float_type>,
            {
                let mut prev = self.load(fetch_order);
                while let Some(next) = f(prev) {
                    match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
                        x @ Ok(_) => return x,
                        Err(next_prev) => prev = next_prev,
                    }
                }
                Err(prev)
            }
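            // A `fetch_update` usage sketch, shown for the `AtomicF32` expansion
            // of this macro:
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let a = AtomicF32::new(7.0);
            //     assert_eq!(a.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7.0));
            //     assert_eq!(
            //         a.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1.0)),
            //         Ok(7.0),
            //     );
            //     assert_eq!(a.load(Ordering::SeqCst), 8.0);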

            /// Maximum with the current value.
            ///
            /// Finds the maximum of the current value and the argument `val`, and
            /// sets the new value to the result.
            ///
            /// Returns the previous value.
            ///
            /// `fetch_max` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            #[inline]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_max(&self, val: $float_type, order: Ordering) -> $float_type {
                self.inner.fetch_max(val, order)
            }

            /// Minimum with the current value.
            ///
            /// Finds the minimum of the current value and the argument `val`, and
            /// sets the new value to the result.
            ///
            /// Returns the previous value.
            ///
            /// `fetch_min` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            #[inline]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_min(&self, val: $float_type, order: Ordering) -> $float_type {
                self.inner.fetch_min(val, order)
            }
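            // A `fetch_max`/`fetch_min` usage sketch, shown for the `AtomicF32`
            // expansion of this macro:
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let a = AtomicF32::new(23.0);
            //     assert_eq!(a.fetch_max(42.0, Ordering::SeqCst), 23.0);
            //     assert_eq!(a.fetch_min(2.0, Ordering::SeqCst), 42.0);
            //     assert_eq!(a.load(Ordering::Relaxed), 2.0);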
            } // cfg_has_atomic_cas!

            /// Negates the current value, and sets the new value to the result.
            ///
            /// Returns the previous value.
            ///
            /// `fetch_neg` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            #[inline]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_neg(&self, order: Ordering) -> $float_type {
                self.inner.fetch_neg(order)
            }

            /// Computes the absolute value of the current value, and sets the
            /// new value to the result.
            ///
            /// Returns the previous value.
            ///
            /// `fetch_abs` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            #[inline]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_abs(&self, order: Ordering) -> $float_type {
                self.inner.fetch_abs(order)
            }
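            // A `fetch_neg`/`fetch_abs` usage sketch, shown for the `AtomicF32`
            // expansion of this macro:
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let a = AtomicF32::new(-5.0);
            //     assert_eq!(a.fetch_abs(Ordering::SeqCst), -5.0); // value is now 5.0
            //     assert_eq!(a.fetch_neg(Ordering::SeqCst), 5.0); // value is now -5.0
            //     assert_eq!(a.load(Ordering::Relaxed), -5.0);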
            } // cfg_has_atomic_cas_or_amo32!

            #[cfg(not(portable_atomic_no_const_raw_ptr_deref))]
            doc_comment! {
                concat!("Raw transmutation to `&", stringify!($atomic_int_type), "`.

See [`", stringify!($float_type), "::from_bits`] for some discussion of the
portability of this operation (there are almost no issues).

This is `const fn` on Rust 1.58+."),
                #[inline]
                pub const fn as_bits(&self) -> &$atomic_int_type {
                    self.inner.as_bits()
                }
            }
            #[cfg(portable_atomic_no_const_raw_ptr_deref)]
            doc_comment! {
                concat!("Raw transmutation to `&", stringify!($atomic_int_type), "`.

See [`", stringify!($float_type), "::from_bits`] for some discussion of the
portability of this operation (there are almost no issues).

This is `const fn` on Rust 1.58+."),
                #[inline]
                pub fn as_bits(&self) -> &$atomic_int_type {
                    self.inner.as_bits()
                }
            }
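            // An `as_bits` usage sketch, shown for the `AtomicF32` expansion of
            // this macro (where `as_bits` yields `&AtomicU32`):
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let a = AtomicF32::new(1.0);
            //     assert_eq!(a.as_bits().load(Ordering::Relaxed), 1.0_f32.to_bits());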

            const_fn! {
                const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
                /// Returns a mutable pointer to the underlying float.
                ///
                /// Returning an `*mut` pointer from a shared reference to this atomic is
                /// safe because the atomic types work with interior mutability. Any use of
                /// the returned raw pointer requires an `unsafe` block and has to uphold
                /// the safety requirements. If there is concurrent access, note the following
                /// additional safety requirements:
                ///
                /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
                ///   operations on it must be atomic.
                /// - Otherwise, any concurrent operations on it must be compatible with
                ///   operations performed by this atomic type.
                ///
                /// This is `const fn` on Rust 1.58+.
                #[inline]
                pub const fn as_ptr(&self) -> *mut $float_type {
                    self.inner.as_ptr()
                }
            }
        }
        // See https://github.com/taiki-e/portable-atomic/issues/180
        #[cfg(not(feature = "require-cas"))]
        cfg_no_atomic_cas! {
        #[doc(hidden)]
        #[allow(unused_variables, clippy::unused_self, clippy::extra_unused_lifetimes)]
        impl<'a> $atomic_type {
            cfg_no_atomic_cas_or_amo32! {
            #[inline]
            pub fn swap(&self, val: $float_type, order: Ordering) -> $float_type
            where
                &'a Self: HasSwap,
            {
                unimplemented!()
            }
            } // cfg_no_atomic_cas_or_amo32!
            #[inline]
            pub fn compare_exchange(
                &self,
                current: $float_type,
                new: $float_type,
                success: Ordering,
                failure: Ordering,
            ) -> Result<$float_type, $float_type>
            where
                &'a Self: HasCompareExchange,
            {
                unimplemented!()
            }
            #[inline]
            pub fn compare_exchange_weak(
                &self,
                current: $float_type,
                new: $float_type,
                success: Ordering,
                failure: Ordering,
            ) -> Result<$float_type, $float_type>
            where
                &'a Self: HasCompareExchangeWeak,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_add(&self, val: $float_type, order: Ordering) -> $float_type
            where
                &'a Self: HasFetchAdd,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_sub(&self, val: $float_type, order: Ordering) -> $float_type
            where
                &'a Self: HasFetchSub,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_update<F>(
                &self,
                set_order: Ordering,
                fetch_order: Ordering,
                f: F,
            ) -> Result<$float_type, $float_type>
            where
                F: FnMut($float_type) -> Option<$float_type>,
                &'a Self: HasFetchUpdate,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_max(&self, val: $float_type, order: Ordering) -> $float_type
            where
                &'a Self: HasFetchMax,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_min(&self, val: $float_type, order: Ordering) -> $float_type
            where
                &'a Self: HasFetchMin,
            {
                unimplemented!()
            }
            cfg_no_atomic_cas_or_amo32! {
            #[inline]
            pub fn fetch_neg(&self, order: Ordering) -> $float_type
            where
                &'a Self: HasFetchNeg,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_abs(&self, order: Ordering) -> $float_type
            where
                &'a Self: HasFetchAbs,
            {
                unimplemented!()
            }
            } // cfg_no_atomic_cas_or_amo32!
        }
        } // cfg_no_atomic_cas!
    };
}

cfg_has_atomic_ptr! {
    #[cfg(target_pointer_width = "16")]
    atomic_int!(AtomicIsize, isize, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
    #[cfg(target_pointer_width = "16")]
    atomic_int!(AtomicUsize, usize, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
    #[cfg(target_pointer_width = "32")]
    atomic_int!(AtomicIsize, isize, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
    #[cfg(target_pointer_width = "32")]
    atomic_int!(AtomicUsize, usize, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
    #[cfg(target_pointer_width = "64")]
    atomic_int!(AtomicIsize, isize, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
    #[cfg(target_pointer_width = "64")]
    atomic_int!(AtomicUsize, usize, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
    #[cfg(target_pointer_width = "128")]
    atomic_int!(AtomicIsize, isize, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
    #[cfg(target_pointer_width = "128")]
    atomic_int!(AtomicUsize, usize, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
}

cfg_has_atomic_8! {
    atomic_int!(AtomicI8, i8, 1, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
    atomic_int!(AtomicU8, u8, 1, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
}
cfg_has_atomic_16! {
    atomic_int!(AtomicI16, i16, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
    atomic_int!(AtomicU16, u16, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8,
        #[cfg(all(feature = "float", portable_atomic_unstable_f16))] AtomicF16, f16);
}
cfg_has_atomic_32! {
    atomic_int!(AtomicI32, i32, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
    atomic_int!(AtomicU32, u32, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32,
        #[cfg(feature = "float")] AtomicF32, f32);
}
cfg_has_atomic_64! {
    atomic_int!(AtomicI64, i64, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
    atomic_int!(AtomicU64, u64, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32,
        #[cfg(feature = "float")] AtomicF64, f64);
}
cfg_has_atomic_128! {
    atomic_int!(AtomicI128, i128, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
    atomic_int!(AtomicU128, u128, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32,
        #[cfg(all(feature = "float", portable_atomic_unstable_f128))] AtomicF128, f128);
}

// See https://github.com/taiki-e/portable-atomic/issues/180
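// How the no-CAS stubs above surface a readable error: each stub bounds
// `&'a Self` on a `Has*` trait from `diagnostic_helper` that is never
// implemented, so a call like the following sketch (on a target without
// atomic CAS, without the `critical-section` or `unsafe-assume-single-core`
// features) fails trait resolution:
//
//     use portable_atomic::{AtomicUsize, Ordering};
//     static A: AtomicUsize = AtomicUsize::new(0);
//     A.swap(1, Ordering::Relaxed); // does not compile on such targets
//
// and the compiler reports the `diagnostic::on_unimplemented` message
// declared below instead of a generic unsatisfied-trait-bound error.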
4791#[cfg(not(feature = "require-cas"))]
4792cfg_no_atomic_cas! {
4793cfg_no_atomic_cas_or_amo32! {
4794#[cfg(feature = "float")]
4795use self::diagnostic_helper::HasFetchAbs;
4796use self::diagnostic_helper::{
4797    HasAnd, HasBitClear, HasBitSet, HasBitToggle, HasFetchAnd, HasFetchByteAdd, HasFetchByteSub,
4798    HasFetchNot, HasFetchOr, HasFetchPtrAdd, HasFetchPtrSub, HasFetchXor, HasNot, HasOr, HasXor,
4799};
4800} // cfg_no_atomic_cas_or_amo32!
4801cfg_no_atomic_cas_or_amo8! {
4802use self::diagnostic_helper::{HasAdd, HasSub, HasSwap};
4803} // cfg_no_atomic_cas_or_amo8!
4804#[cfg_attr(not(feature = "float"), allow(unused_imports))]
4805use self::diagnostic_helper::{
4806    HasCompareExchange, HasCompareExchangeWeak, HasFetchAdd, HasFetchMax, HasFetchMin,
4807    HasFetchNand, HasFetchNeg, HasFetchSub, HasFetchUpdate, HasNeg,
4808};
4809#[cfg_attr(
4810    any(
4811        all(
4812            portable_atomic_no_atomic_load_store,
4813            not(any(
4814                target_arch = "avr",
4815                target_arch = "bpf",
4816                target_arch = "msp430",
4817                target_arch = "riscv32",
4818                target_arch = "riscv64",
4819                feature = "critical-section",
4820            )),
4821        ),
4822        not(feature = "float"),
4823    ),
4824    allow(dead_code, unreachable_pub)
4825)]
4826#[allow(unknown_lints, unnameable_types)] // Not public API. unnameable_types is available on Rust 1.79+
4827mod diagnostic_helper {
4828    cfg_no_atomic_cas_or_amo8! {
4829    #[doc(hidden)]
4830    #[cfg_attr(
4831        not(portable_atomic_no_diagnostic_namespace),
4832        diagnostic::on_unimplemented(
4833            message = "`swap` requires atomic CAS but not available on this target by default",
4834            label = "this associated function is not available on this target by default",
4835            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4836            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4837        )
4838    )]
4839    pub trait HasSwap {}
4840    } // cfg_no_atomic_cas_or_amo8!
4841    #[doc(hidden)]
4842    #[cfg_attr(
4843        not(portable_atomic_no_diagnostic_namespace),
4844        diagnostic::on_unimplemented(
4845            message = "`compare_exchange` requires atomic CAS but not available on this target by default",
4846            label = "this associated function is not available on this target by default",
4847            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4848            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4849        )
4850    )]
4851    pub trait HasCompareExchange {}
4852    #[doc(hidden)]
4853    #[cfg_attr(
4854        not(portable_atomic_no_diagnostic_namespace),
4855        diagnostic::on_unimplemented(
4856            message = "`compare_exchange_weak` requires atomic CAS but not available on this target by default",
4857            label = "this associated function is not available on this target by default",
4858            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4859            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4860        )
4861    )]
4862    pub trait HasCompareExchangeWeak {}
4863    #[doc(hidden)]
4864    #[cfg_attr(
4865        not(portable_atomic_no_diagnostic_namespace),
4866        diagnostic::on_unimplemented(
4867            message = "`fetch_add` requires atomic CAS but not available on this target by default",
4868            label = "this associated function is not available on this target by default",
4869            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4870            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4871        )
4872    )]
4873    pub trait HasFetchAdd {}
4874    cfg_no_atomic_cas_or_amo8! {
4875    #[doc(hidden)]
4876    #[cfg_attr(
4877        not(portable_atomic_no_diagnostic_namespace),
4878        diagnostic::on_unimplemented(
4879            message = "`add` requires atomic CAS but not available on this target by default",
4880            label = "this associated function is not available on this target by default",
4881            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4882            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4883        )
4884    )]
4885    pub trait HasAdd {}
4886    } // cfg_no_atomic_cas_or_amo8!
4887    #[doc(hidden)]
4888    #[cfg_attr(
4889        not(portable_atomic_no_diagnostic_namespace),
4890        diagnostic::on_unimplemented(
4891            message = "`fetch_sub` requires atomic CAS but not available on this target by default",
4892            label = "this associated function is not available on this target by default",
4893            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4894            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4895        )
4896    )]
4897    pub trait HasFetchSub {}
4898    cfg_no_atomic_cas_or_amo8! {
4899    #[doc(hidden)]
4900    #[cfg_attr(
4901        not(portable_atomic_no_diagnostic_namespace),
4902        diagnostic::on_unimplemented(
4903            message = "`sub` requires atomic CAS but not available on this target by default",
4904            label = "this associated function is not available on this target by default",
4905            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4906            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4907        )
4908    )]
4909    pub trait HasSub {}
4910    } // cfg_no_atomic_cas_or_amo8!
4911    cfg_no_atomic_cas_or_amo32! {
4912    #[doc(hidden)]
4913    #[cfg_attr(
4914        not(portable_atomic_no_diagnostic_namespace),
4915        diagnostic::on_unimplemented(
4916            message = "`fetch_ptr_add` requires atomic CAS but not available on this target by default",
4917            label = "this associated function is not available on this target by default",
4918            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4919            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4920        )
4921    )]
4922    pub trait HasFetchPtrAdd {}
4923    #[doc(hidden)]
4924    #[cfg_attr(
4925        not(portable_atomic_no_diagnostic_namespace),
4926        diagnostic::on_unimplemented(
4927            message = "`fetch_ptr_sub` requires atomic CAS but not available on this target by default",
4928            label = "this associated function is not available on this target by default",
4929            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4930            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4931        )
4932    )]
4933    pub trait HasFetchPtrSub {}
4934    #[doc(hidden)]
4935    #[cfg_attr(
4936        not(portable_atomic_no_diagnostic_namespace),
4937        diagnostic::on_unimplemented(
4938            message = "`fetch_byte_add` requires atomic CAS but not available on this target by default",
4939            label = "this associated function is not available on this target by default",
4940            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4941            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4942        )
4943    )]
4944    pub trait HasFetchByteAdd {}
4945    #[doc(hidden)]
4946    #[cfg_attr(
4947        not(portable_atomic_no_diagnostic_namespace),
4948        diagnostic::on_unimplemented(
4949            message = "`fetch_byte_sub` requires atomic CAS but not available on this target by default",
4950            label = "this associated function is not available on this target by default",
4951            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4952            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4953        )
4954    )]
4955    pub trait HasFetchByteSub {}
4956    #[doc(hidden)]
4957    #[cfg_attr(
4958        not(portable_atomic_no_diagnostic_namespace),
4959        diagnostic::on_unimplemented(
4960            message = "`fetch_and` requires atomic CAS but not available on this target by default",
4961            label = "this associated function is not available on this target by default",
4962            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4963            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4964        )
4965    )]
4966    pub trait HasFetchAnd {}
4967    #[doc(hidden)]
4968    #[cfg_attr(
4969        not(portable_atomic_no_diagnostic_namespace),
4970        diagnostic::on_unimplemented(
4971            message = "`and` requires atomic CAS but not available on this target by default",
4972            label = "this associated function is not available on this target by default",
4973            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4974            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4975        )
4976    )]
4977    pub trait HasAnd {}
4978    } // cfg_no_atomic_cas_or_amo32!
4979    #[doc(hidden)]
4980    #[cfg_attr(
4981        not(portable_atomic_no_diagnostic_namespace),
4982        diagnostic::on_unimplemented(
4983            message = "`fetch_nand` requires atomic CAS but not available on this target by default",
4984            label = "this associated function is not available on this target by default",
4985            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4986            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4987        )
4988    )]
4989    pub trait HasFetchNand {}
4990    cfg_no_atomic_cas_or_amo32! {
4991    #[doc(hidden)]
4992    #[cfg_attr(
4993        not(portable_atomic_no_diagnostic_namespace),
4994        diagnostic::on_unimplemented(
4995            message = "`fetch_or` requires atomic CAS but not available on this target by default",
4996            label = "this associated function is not available on this target by default",
4997            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
4998            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4999        )
5000    )]
5001    pub trait HasFetchOr {}
5002    #[doc(hidden)]
5003    #[cfg_attr(
5004        not(portable_atomic_no_diagnostic_namespace),
5005        diagnostic::on_unimplemented(
5006            message = "`or` requires atomic CAS but not available on this target by default",
5007            label = "this associated function is not available on this target by default",
5008            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5009            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5010        )
5011    )]
5012    pub trait HasOr {}
5013    #[doc(hidden)]
5014    #[cfg_attr(
5015        not(portable_atomic_no_diagnostic_namespace),
5016        diagnostic::on_unimplemented(
5017            message = "`fetch_xor` requires atomic CAS but not available on this target by default",
5018            label = "this associated function is not available on this target by default",
5019            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5020            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5021        )
5022    )]
5023    pub trait HasFetchXor {}
5024    #[doc(hidden)]
5025    #[cfg_attr(
5026        not(portable_atomic_no_diagnostic_namespace),
5027        diagnostic::on_unimplemented(
5028            message = "`xor` requires atomic CAS but not available on this target by default",
5029            label = "this associated function is not available on this target by default",
5030            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5031            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5032        )
5033    )]
5034    pub trait HasXor {}
5035    #[doc(hidden)]
5036    #[cfg_attr(
5037        not(portable_atomic_no_diagnostic_namespace),
5038        diagnostic::on_unimplemented(
5039            message = "`fetch_not` requires atomic CAS but not available on this target by default",
5040            label = "this associated function is not available on this target by default",
5041            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5042            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5043        )
5044    )]
5045    pub trait HasFetchNot {}
5046    #[doc(hidden)]
5047    #[cfg_attr(
5048        not(portable_atomic_no_diagnostic_namespace),
5049        diagnostic::on_unimplemented(
5050            message = "`not` requires atomic CAS but not available on this target by default",
5051            label = "this associated function is not available on this target by default",
5052            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5053            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5054        )
5055    )]
5056    pub trait HasNot {}
5057    } // cfg_no_atomic_cas_or_amo32!
5058    #[doc(hidden)]
5059    #[cfg_attr(
5060        not(portable_atomic_no_diagnostic_namespace),
5061        diagnostic::on_unimplemented(
5062            message = "`fetch_neg` requires atomic CAS but not available on this target by default",
5063            label = "this associated function is not available on this target by default",
5064            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5065            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5066        )
5067    )]
5068    pub trait HasFetchNeg {}
5069    #[doc(hidden)]
5070    #[cfg_attr(
5071        not(portable_atomic_no_diagnostic_namespace),
5072        diagnostic::on_unimplemented(
5073            message = "`neg` requires atomic CAS but not available on this target by default",
5074            label = "this associated function is not available on this target by default",
5075            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
5076            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5077        )
5078    )]
5079    pub trait HasNeg {}
    cfg_no_atomic_cas_or_amo32! {
    #[cfg(feature = "float")]
    #[cfg_attr(target_pointer_width = "16", allow(dead_code, unreachable_pub))]
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_abs` requires atomic CAS but is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchAbs {}
    } // cfg_no_atomic_cas_or_amo32!
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_min` requires atomic CAS but is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchMin {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_max` requires atomic CAS but is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchMax {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_update` requires atomic CAS but is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchUpdate {}
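    // `bit_set`, `bit_clear`, and `bit_toggle` are equivalent to `fetch_or`,
    // `fetch_and`, and `fetch_xor` with a single-bit mask, which presumably is
    // why they share the narrower `cfg_no_atomic_cas_or_amo32!` gate below.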
    cfg_no_atomic_cas_or_amo32! {
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`bit_set` requires atomic CAS but is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasBitSet {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`bit_clear` requires atomic CAS but is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasBitClear {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`bit_toggle` requires atomic CAS but is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasBitToggle {}
    } // cfg_no_atomic_cas_or_amo32!
}
} // cfg_no_atomic_cas!
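// A minimal sketch (not code from this crate) of how never-implemented marker
// traits like those above are typically consumed: a method is declared with a
// `where Self: HasXor`-style bound, so calling it on a target where the trait
// has no impl trips `diagnostic::on_unimplemented` and reports the tailored
// message instead of a generic "method not found" error. All names below are
// illustrative.
//
//     #[diagnostic::on_unimplemented(
//         message = "`xor` requires atomic CAS but is not available on this target by default"
//     )]
//     pub trait HasXor {}
//
//     impl AtomicU8 {
//         pub fn xor(&self, _val: u8, _order: Ordering)
//         where
//             Self: HasXor, // never implemented on targets without atomic CAS
//         {
//             unreachable!()
//         }
//     }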