heapless/pool/
mod.rs

//! A heap-less, interrupt-safe, lock-free memory pool (\*)
//!
//! NOTE: This module is not available on targets that do *not* support CAS operations and are not
//! emulated by the [`atomic_polyfill`](https://crates.io/crates/atomic-polyfill) crate (e.g.,
//! MSP430).
//!
//! (\*) Currently, the implementation is only lock-free *and* `Sync` on ARMv6, ARMv7-{A,R,M} & ARMv8-M
//! devices
//!
//! # Examples
//!
//! The most common way of using this pool is as a global singleton; the singleton mode gives you
//! automatic deallocation of memory blocks on `drop`.
//!
//! ``` ignore
//! #![no_main]
//! #![no_std]
//!
//! use cortex_m_rt::{entry, exception};
//! use heapless::{
//!     pool,
//!     pool::singleton::{Box, Pool},
//! };
//!
//! // instantiate a memory pool of `[u8; 128]` blocks as a global singleton
//! pool!(
//!     // attributes can be used here
//!     // #[link_section = ".ccram.A"]
//!     A: [u8; 128]
//! );
//!
//! #[entry]
//! fn main() -> ! {
//!     static mut MEMORY: [u8; 1024] = [0; 1024];
//!
//!     // increase the capacity of the pool by ~8 blocks
//!     A::grow(MEMORY);
//!
//!     // claim a block of memory
//!     // note that the type is `Box<A>`, and not `Box<[u8; 128]>`
//!     // `A` is the "name" of the pool
//!     let x: Box<A, _> = A::alloc().unwrap();
//!     loop {
//!         // .. do stuff with `x` ..
//!     }
//! }
//!
//! #[exception]
//! fn SysTick() {
//!     // claim a block of memory
//!     let y = A::alloc().unwrap();
//!
//!     // .. do stuff with `y` ..
//!
//!     // return the memory block to the pool
//!     drop(y);
//! }
//! ```
//!
//! # Portability
//!
//! This pool internally uses a Treiber stack, which is known to be susceptible to the ABA problem.
//! The only countermeasure against the ABA problem that this implementation currently takes is
//! relying on LL/SC (Load-link / Store-conditional) instructions being used to implement CAS loops
//! on the target architecture (see the section on ['Soundness'](#soundness) for more information).
//! For this reason, `Pool` only implements `Sync` when compiling for some ARM cores.
//!
//! This module requires CAS atomic instructions which are not available on all architectures (e.g.
//! ARMv6-M (`thumbv6m-none-eabi`) and MSP430 (`msp430-none-elf`)). These atomics can, however, be
//! emulated with [`atomic_polyfill`](https://crates.io/crates/atomic-polyfill), which is enabled
//! with the `cas` feature and is enabled by default for `thumbv6m-none-eabi` and `riscv32` targets.
//! MSP430 is currently not supported by
//! [`atomic_polyfill`](https://crates.io/crates/atomic-polyfill).
//!
//! # Soundness
//!
//! This pool uses a Treiber stack to keep a list of free memory blocks (nodes). Each of these
//! nodes has a pointer to the next node. To claim a memory block we simply pop a node from the
//! top of the stack and use it as a memory block. The pop operation consists of swapping the
//! current head (top) node with the node below it. The Rust code for the `pop` operation is shown
//! below:
//!
//! ``` ignore
//! fn pop(&self) -> Option<NonNull<Node<T>>> {
//!     let fetch_order = ..;
//!     let set_order = ..;
//!
//!     // `self.head` has type `AtomicPtr<Node<T>>`
//!     // where `struct Node<T> { next: AtomicPtr<Node<T>>, data: UnsafeCell<T> }`
//!     let mut head = self.head.load(fetch_order);
//!     loop {
//!         if let Some(nn_head) = NonNull::new(head) {
//!             let next = unsafe { (*head).next.load(Ordering::Relaxed) };
//!
//!             // <~ preempted
//!
//!             match self
//!                 .head
//!                 .compare_exchange_weak(head, next, set_order, fetch_order)
//!             {
//!                 Ok(_) => break Some(nn_head),
//!                 // head was changed by some interrupt handler / thread
//!                 Err(new_head) => head = new_head,
//!             }
//!         } else {
//!             // stack is observed as empty
//!             break None;
//!         }
//!     }
//! }
//! ```
//!
//! In general, the `pop` operation is susceptible to the ABA problem. If this operation gets
//! preempted by some interrupt handler somewhere between the `head.load` and the
//! `compare_exchange_weak`, and that handler modifies the stack in such a way that the head
//! (top) of the stack remains unchanged, then resuming the `pop` operation will corrupt the stack.
//!
//! An example: imagine we are doing a `pop` on a stack that contains these nodes: `A -> B -> C`;
//! `A` is the head (top), `B` is next to `A` and `C` is next to `B`. The `pop` operation will do a
//! `CAS(&self.head, A, B)` operation to atomically change the head to `B` iff it currently is `A`.
//! Now, let's say a handler preempts the `pop` operation before the `CAS` operation starts and it
//! `pop`s the stack twice and then `push`es back the `A` node; now the state of the stack is `A ->
//! C`. When the original `pop` operation is resumed it will succeed in doing the `CAS` operation
//! setting `B` as the head of the stack. However, `B` was used by the handler as a memory block and
//! is no longer a valid free node. As a result the stack, and thus the allocator, is in an invalid
//! state.
//!
//! However, not all is lost because ARM devices use LL/SC (Load-link / Store-conditional)
//! operations to implement CAS loops. Let's look at the actual disassembly of `pop` for the ARM
//! Cortex-M.
//!
//! ``` text
//! 08000130 <<heapless::pool::Pool<T>>::pop>:
//!  8000130:       6802            ldr     r2, [r0, #0]
//!  8000132:       e00c            b.n     800014e <<heapless::pool::Pool<T>>::pop+0x1e>
//!  8000134:       4611            mov     r1, r2
//!  8000136:       f8d2 c000       ldr.w   ip, [r2]
//!  800013a:       e850 2f00       ldrex   r2, [r0]
//!  800013e:       428a            cmp     r2, r1
//!  8000140:       d103            bne.n   800014a <<heapless::pool::Pool<T>>::pop+0x1a>
//!  8000142:       e840 c300       strex   r3, ip, [r0]
//!  8000146:       b913            cbnz    r3, 800014e <<heapless::pool::Pool<T>>::pop+0x1e>
//!  8000148:       e004            b.n     8000154 <<heapless::pool::Pool<T>>::pop+0x24>
//!  800014a:       f3bf 8f2f       clrex
//!  800014e:       2a00            cmp     r2, #0
//!  8000150:       d1f0            bne.n   8000134 <<heapless::pool::Pool<T>>::pop+0x4>
//!  8000152:       2100            movs    r1, #0
//!  8000154:       4608            mov     r0, r1
//!  8000156:       4770            bx      lr
//! ```
//!
//! LDREX ("load exclusive") is the LL instruction, and STREX ("store exclusive") is the SC
//! instruction (see [1](#references)). On the Cortex-M, STREX will always fail if the processor
//! takes an exception between it and its corresponding LDREX operation (see [2](#references)). If
//! STREX fails then the CAS loop is retried (see instruction @ `0x8000146`). On single-core
//! systems, preemption is required to run into the ABA problem and on Cortex-M devices preemption
//! always involves taking an exception. Thus the underlying LL/SC operations prevent the ABA
//! problem on Cortex-M.
//!
//! In the case of multi-core systems, if any other core successfully does a STREX op on the head
//! while the current core is somewhere between LDREX and STREX then the current core will fail its
//! STREX operation.
//!
//! # x86_64 support / limitations
//!
//! *NOTE* `Pool` is only `Sync` on `x86_64` and `x86` (`i686`) if the Cargo feature "x86-sync-pool"
//! is enabled.
//!
//! x86_64 support is a gamble. Yes, a gamble. Do you feel lucky enough to use `Pool` on x86_64?
//!
//! As it's not possible to implement *ideal* LL/SC semantics (\*) on x86_64, the architecture is
//! susceptible to the ABA problem described above. To *reduce the chances* of ABA occurring in
//! practice we use version tags (keyword: IBM ABA-prevention tags). Again, this approach does
//! *not* fix / prevent / avoid the ABA problem; it only reduces the chance of it occurring in
//! practice, but the chances of it occurring are not reduced to zero.
//!
//! How we have implemented version tags: instead of using an `AtomicPtr` to link the stack `Node`s
//! we use an `AtomicUsize` where the 64-bit `usize` is always composed of a monotonically
//! increasing 32-bit tag (the higher bits) and a 32-bit signed address offset. The address of a
//! node is computed by adding the 32-bit offset to an "anchor" address (the address of a static
//! variable that lives somewhere in the `.bss` linker section). The tag is increased every time a
//! node is popped (removed) from the stack.
//!
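//! A minimal sketch of this packing scheme (an illustration only; the names are made up and the
//! exact code lives in this module's `cas.rs` implementation):
//!
//! ``` ignore
//! /// Packs a 32-bit version tag (high bits) and a 32-bit signed offset (low bits) into one word
//! fn pack(tag: u32, offset: i32) -> u64 {
//!     ((tag as u64) << 32) | (offset as u32 as u64)
//! }
//!
//! /// Recovers the tag and the offset from a packed word
//! fn unpack(word: u64) -> (u32, i32) {
//!     ((word >> 32) as u32, word as u32 as i32)
//! }
//!
//! // a `pop` stores a word with an incremented tag, e.g. `pack(tag.wrapping_add(1), next_offset)`,
//! // so reusing a node changes the word seen by a concurrent CAS even if the address is the same
//! ```
//!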
//! To see how version tags can prevent ABA consider the example from the previous section. Let's
//! start with a stack in this state: `(~A, 0) -> (~B, 1) -> (~C, 2)`, where `~A` represents the
//! address of node A as a 32-bit offset from the "anchor" and the second tuple element (e.g. `0`)
//! indicates the version of the node. For simplicity, assume a single core system: thread T1 is
//! performing `pop` and before `CAS(&self.head, (~A, 0), (~B, 1))` is executed a context switch
//! occurs and the core resumes T2. T2 pops the stack twice and pushes A back into the stack;
//! because the `pop` operation increases the version the stack ends in the following state: `(~A,
//! 1) -> (~C, 2)`. Now if T1 is resumed the CAS operation will fail because `self.head` is `(~A,
//! 1)` and not `(~A, 0)`.
//!
//! When can version tags fail to prevent ABA? Using the previous example: if T2 performs a `push`
//! followed by a `pop` `(1 << 32) - 1` times before doing its original `pop` - `pop` - `push`
//! operation then ABA will occur because the version tag of node `A` will wrap around to its
//! original value of `0` and the CAS operation in T1 will succeed and corrupt the stack.
//!
//! It does seem unlikely that (1) a thread will perform the above operation and (2) that the above
//! operation will complete within one time slice, assuming time-sliced threads. If you have thread
//! priorities then the above operation could occur during the lifetime of many high-priority
//! threads if T1 is running at low priority.
//!
//! Other implementations of version tags use more than 32 bits in their tags (e.g. "Scalable
//! Lock-Free Dynamic Memory Allocation" uses 42-bit tags in its super blocks). In theory, one could
//! use double-word CAS on x86_64 to pack a 64-bit tag and a 64-bit pointer in a double-word but
//! this CAS operation is not exposed in the standard library (and I think it's not available on
//! older x86_64 processors?).
//!
//! (\*) Apparently one can emulate proper LL/SC semantics on x86_64 using hazard pointers (?) --
//! the technique appears to be documented in "ABA Prevention Using Single-Word Instructions", which
//! is not public AFAICT -- but hazard pointers require Thread Local Storage (TLS), which is a
//! non-starter for a `no_std` library like `heapless`.
//!
//! ## x86_64 Limitations
//!
//! *NOTE* this limitation does not apply to `x86` (32-bit address space). If you run into this
//! issue on an x86_64 processor, try running your code compiled for `x86`, e.g. `cargo run --target
//! i686-unknown-linux-musl`
//!
//! Because stack nodes must be located within +- 2 GB of the hidden `ANCHOR` variable, which
//! lives in the `.bss` section, `Pool` may not be able to manage static references created using
//! `Box::leak` -- these heap-allocated chunks of memory may live in a very different address space.
//! When the `Pool` is unable to manage a node because of its address it will simply discard it:
//! `Pool::grow*` methods return the number of new memory blocks added to the pool; if these methods
//! return `0` it means the `Pool` is unable to manage the memory given to them.
//!
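//! On x86_64 it is therefore worth checking the value returned by the `grow*` methods. A small
//! sketch (`POOL` is a hypothetical `static`, mirroring the unit tests at the bottom of this
//! module):
//!
//! ``` ignore
//! static mut MEMORY: [u8; 1024] = [0; 1024];
//! static POOL: Pool<[u8; 128]> = Pool::new();
//!
//! let added = POOL.grow(unsafe { &mut MEMORY });
//! // `added == 0` means the pool was unable to manage this memory block
//! assert_ne!(added, 0);
//! ```
//!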
//! # References
//!
//! 1. [Cortex-M3 Devices Generic User Guide (DUI 0552A)][0], Section 2.2.7 "Synchronization
//! primitives"
//!
//! [0]: http://infocenter.arm.com/help/topic/com.arm.doc.dui0552a/DUI0552A_cortex_m3_dgug.pdf
//!
//! 2. [ARMv7-M Architecture Reference Manual (DDI 0403E.b)][1], Section A3.4 "Synchronization and
//! semaphores"
//!
//! [1]: https://static.docs.arm.com/ddi0403/eb/DDI0403E_B_armv7m_arm.pdf
//!
//! 3. "Scalable Lock-Free Dynamic Memory Allocation", Michael, Maged M.
//!
//! 4. "Hazard pointers: Safe memory reclamation for lock-free objects", Michael, Maged M.

use core::{any::TypeId, mem};
use core::{
    cmp, fmt,
    hash::{Hash, Hasher},
    marker::PhantomData,
    mem::MaybeUninit,
    ops::{Deref, DerefMut},
    ptr::{self, NonNull},
};

pub use stack::Node;
use stack::{Ptr, Stack};

pub mod singleton;
#[cfg_attr(any(target_arch = "x86_64", target_arch = "x86"), path = "cas.rs")]
#[cfg_attr(
    not(any(target_arch = "x86_64", target_arch = "x86")),
    path = "llsc.rs"
)]
mod stack;

/// A lock-free memory pool
pub struct Pool<T> {
    stack: Stack<T>,

    // Current implementation is unsound on architectures that don't have LL/SC semantics so this
    // struct is not `Sync` on those platforms
    _not_send_or_sync: PhantomData<*const ()>,
}

// NOTE(any(test)) makes testing easier (no need to enable Cargo features for testing)
#[cfg(any(
    armv6m,
    armv7a,
    armv7r,
    armv7m,
    armv8m_main,
    all(
        any(target_arch = "x86_64", target_arch = "x86"),
        feature = "x86-sync-pool"
    ),
    test
))]
unsafe impl<T> Sync for Pool<T> {}

unsafe impl<T> Send for Pool<T> {}

impl<T> Pool<T> {
    /// Creates a new empty pool
    pub const fn new() -> Self {
        Pool {
            stack: Stack::new(),

            _not_send_or_sync: PhantomData,
        }
    }

    /// Claims a memory block from the pool
    ///
    /// Returns `None` when the pool is observed as exhausted
    ///
    /// *NOTE:* This method does *not* have bounded execution time because it contains a CAS loop
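    ///
    /// A minimal usage sketch (it mirrors the unit tests at the bottom of this module; the example
    /// is `ignore`d because `Pool` is only `Sync` on some targets):
    ///
    /// ``` ignore
    /// use heapless::pool::Pool;
    ///
    /// static mut MEMORY: [u8; 1024] = [0; 1024];
    /// static POOL: Pool<[u8; 128]> = Pool::new();
    ///
    /// // hand the pool some memory to manage
    /// POOL.grow(unsafe { &mut MEMORY });
    ///
    /// // `alloc` returns an *uninitialized* block; `init` moves a value into it
    /// let block = POOL.alloc().unwrap().init([0u8; 128]);
    /// assert_eq!(block[0], 0);
    ///
    /// // return the block to the pool (unlike the singleton `Box`, this one is not freed on drop)
    /// POOL.free(block);
    /// ```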
    pub fn alloc(&self) -> Option<Box<T, Uninit>> {
        if mem::size_of::<T>() == 0 {
            // NOTE: because we return a dangling pointer to a `Node`, which has a non-zero size
            // even when `T` is a ZST, the rest of the code needs to make sure that it
            // - doesn't do pointer arithmetic on this pointer, and
            // - doesn't dereference the resulting offset pointer as a ZST,
            // because Miri doesn't like that
            return Some(Box {
                node: Ptr::dangling(),
                _state: PhantomData,
            });
        }

        if let Some(node) = self.stack.try_pop() {
            Some(Box {
                node,
                _state: PhantomData,
            })
        } else {
            None
        }
    }

    /// Returns a memory block to the pool
    ///
    /// *NOTE*: `T`'s destructor (if any) will run on `value` iff `S = Init`
    ///
    /// *NOTE:* This method does *not* have bounded execution time because it contains a CAS loop
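    ///
    /// For example (a sketch, reusing the `POOL` static from the `alloc` example above):
    ///
    /// ``` ignore
    /// // freeing an uninitialized block does not run any destructor
    /// let uninit = POOL.alloc().unwrap();
    /// POOL.free(uninit);
    ///
    /// // freeing an initialized block runs `T`'s destructor, if any
    /// // (see the `destructors` unit test at the bottom of this module)
    /// let init = POOL.alloc().unwrap().init([0u8; 128]);
    /// POOL.free(init);
    /// ```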
    pub fn free<S>(&self, value: Box<T, S>)
    where
        S: 'static,
    {
        if TypeId::of::<S>() == TypeId::of::<Init>() {
            let p = if mem::size_of::<T>() == 0 {
                // any pointer will do to invoke the destructor of a ZST
                NonNull::dangling().as_ptr()
            } else {
                unsafe { value.node.as_ref().data.get() }
            };
            unsafe {
                ptr::drop_in_place(p);
            }
        }

        // ZSTs don't use memory so there is no node to return to the stack
        if mem::size_of::<T>() == 0 {
            return;
        }

        self.stack.push(value.node)
    }


    /// Increases the capacity of the pool
    ///
    /// This method might *not* fully utilize the given memory block due to alignment requirements.
    ///
    /// This method returns the number of *new* blocks that can be allocated.
    pub fn grow(&self, memory: &'static mut [u8]) -> usize {
        if mem::size_of::<T>() == 0 {
            // ZSTs use no memory so a pool of ZSTs always has maximum capacity
            return usize::max_value();
        }

        let sz = mem::size_of::<Node<T>>();
        let mut p = memory.as_mut_ptr();
        let mut len = memory.len();

        let align = mem::align_of::<Node<T>>();
        let rem = (p as usize) % align;
        if rem != 0 {
            let offset = align - rem;

            if offset >= len {
                // slice is too small
                return 0;
            }

            p = unsafe { p.add(offset) };
            len -= offset;
        }

        let mut n = 0;
        while len >= sz {
            match () {
                #[cfg(any(target_arch = "x86_64", target_arch = "x86"))]
                () => {
                    if let Some(p) = Ptr::new(p as *mut _) {
                        self.stack.push(p);
                        n += 1;
                    }
                }

                #[cfg(not(any(target_arch = "x86_64", target_arch = "x86")))]
                () => {
                    self.stack.push(unsafe { Ptr::new_unchecked(p as *mut _) });
                    n += 1;
                }
            }

            p = unsafe { p.add(sz) };
            len -= sz;
        }

        n
    }

    /// Increases the capacity of the pool
    ///
    /// Unlike [`Pool.grow`](struct.Pool.html#method.grow) this method fully utilizes the given
    /// memory block
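    ///
    /// A usage sketch (it mirrors the `grow_exact` unit test at the bottom of this module):
    ///
    /// ``` ignore
    /// use core::mem::MaybeUninit;
    /// use heapless::pool::{Node, Pool};
    ///
    /// static mut MEMORY: MaybeUninit<[Node<[u8; 128]>; 8]> = MaybeUninit::uninit();
    /// static POOL: Pool<[u8; 128]> = Pool::new();
    ///
    /// // all 8 nodes are handed to the pool; no space is lost to alignment
    /// let n = POOL.grow_exact(unsafe { &mut MEMORY });
    /// assert_eq!(n, 8);
    /// ```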
    pub fn grow_exact<A>(&self, memory: &'static mut MaybeUninit<A>) -> usize
    where
        A: AsMut<[Node<T>]>,
    {
        if mem::size_of::<T>() == 0 {
            return usize::max_value();
        }

        let nodes = unsafe { (*memory.as_mut_ptr()).as_mut() };
        let cap = nodes.len();
        for p in nodes {
            match () {
                #[cfg(any(target_arch = "x86_64", target_arch = "x86"))]
                () => {
                    if let Some(p) = Ptr::new(p) {
                        self.stack.push(p);
                    }
                }

                #[cfg(not(any(target_arch = "x86_64", target_arch = "x86")))]
                () => self.stack.push(core::ptr::NonNull::from(p)),
            }
        }
        cap
    }
}

/// A memory block
pub struct Box<T, STATE = Init> {
    _state: PhantomData<STATE>,
    node: Ptr<Node<T>>,
}

impl<T> Box<T, Uninit> {
    /// Initializes this memory block
    pub fn init(self, val: T) -> Box<T, Init> {
        if mem::size_of::<T>() == 0 {
            // no memory operation needed for ZST
            // BUT we want to avoid calling `val`'s destructor
            mem::forget(val)
        } else {
            unsafe {
                ptr::write(self.node.as_ref().data.get(), val);
            }
        }

        Box {
            node: self.node,
            _state: PhantomData,
        }
    }
}

/// Uninitialized type state
pub enum Uninit {}

/// Initialized type state
pub enum Init {}

unsafe impl<T, S> Send for Box<T, S> where T: Send {}

unsafe impl<T, S> Sync for Box<T, S> where T: Sync {}

unsafe impl<T> stable_deref_trait::StableDeref for Box<T> {}

impl<A, T> AsRef<[T]> for Box<A>
where
    A: AsRef<[T]>,
{
    fn as_ref(&self) -> &[T] {
        self.deref().as_ref()
    }
}

impl<A, T> AsMut<[T]> for Box<A>
where
    A: AsMut<[T]>,
{
    fn as_mut(&mut self) -> &mut [T] {
        self.deref_mut().as_mut()
    }
}

impl<T> Deref for Box<T> {
    type Target = T;

    fn deref(&self) -> &T {
        if mem::size_of::<T>() == 0 {
            // any pointer will do for ZST
            unsafe { &*NonNull::dangling().as_ptr() }
        } else {
            unsafe { &*self.node.as_ref().data.get() }
        }
    }
}

impl<T> DerefMut for Box<T> {
    fn deref_mut(&mut self) -> &mut T {
        if mem::size_of::<T>() == 0 {
            // any pointer will do for ZST
            unsafe { &mut *NonNull::dangling().as_ptr() }
        } else {
            unsafe { &mut *self.node.as_ref().data.get() }
        }
    }
}

impl<T> fmt::Debug for Box<T>
where
    T: fmt::Debug,
{
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        <T as fmt::Debug>::fmt(self, f)
    }
}

impl<T> fmt::Display for Box<T>
where
    T: fmt::Display,
{
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        <T as fmt::Display>::fmt(self, f)
    }
}

impl<T> PartialEq for Box<T>
where
    T: PartialEq,
{
    fn eq(&self, rhs: &Box<T>) -> bool {
        <T as PartialEq>::eq(self, rhs)
    }
}

impl<T> Eq for Box<T> where T: Eq {}

impl<T> PartialOrd for Box<T>
where
    T: PartialOrd,
{
    fn partial_cmp(&self, rhs: &Box<T>) -> Option<cmp::Ordering> {
        <T as PartialOrd>::partial_cmp(self, rhs)
    }
}

impl<T> Ord for Box<T>
where
    T: Ord,
{
    fn cmp(&self, rhs: &Box<T>) -> cmp::Ordering {
        <T as Ord>::cmp(self, rhs)
    }
}

impl<T> Hash for Box<T>
where
    T: Hash,
{
    fn hash<H>(&self, state: &mut H)
    where
        H: Hasher,
    {
        <T as Hash>::hash(self, state)
    }
}

#[cfg(test)]
mod tests {
    use core::{
        mem::{self, MaybeUninit},
        sync::atomic::{AtomicUsize, Ordering},
    };

    use super::{Node, Pool};

    #[test]
    fn grow() {
        static mut MEMORY: [u8; 1024] = [0; 1024];

        static POOL: Pool<[u8; 128]> = Pool::new();

        unsafe {
            POOL.grow(&mut MEMORY);
        }

        for _ in 0..7 {
            assert!(POOL.alloc().is_some());
        }
    }

    #[test]
    fn grow_exact() {
        const SZ: usize = 8;
        static mut MEMORY: MaybeUninit<[Node<[u8; 128]>; SZ]> = MaybeUninit::uninit();

        static POOL: Pool<[u8; 128]> = Pool::new();

        unsafe {
            POOL.grow_exact(&mut MEMORY);
        }

        for _ in 0..SZ {
            assert!(POOL.alloc().is_some());
        }
        assert!(POOL.alloc().is_none());
    }

    #[test]
    fn sanity() {
        const SZ: usize = 2 * mem::size_of::<Node<u8>>() - 1;
        static mut MEMORY: [u8; SZ] = [0; SZ];

        static POOL: Pool<u8> = Pool::new();

        // empty pool
        assert!(POOL.alloc().is_none());

        POOL.grow(unsafe { &mut MEMORY });

        let x = POOL.alloc().unwrap().init(0);
        assert_eq!(*x, 0);

        // pool exhausted
        assert!(POOL.alloc().is_none());

        POOL.free(x);

        // should be possible to allocate again
        assert_eq!(*POOL.alloc().unwrap().init(1), 1);
    }

    #[test]
    fn destructors() {
        static COUNT: AtomicUsize = AtomicUsize::new(0);

        struct X;

        impl X {
            fn new() -> X {
                COUNT.fetch_add(1, Ordering::Relaxed);
                X
            }
        }

        impl Drop for X {
            fn drop(&mut self) {
                COUNT.fetch_sub(1, Ordering::Relaxed);
            }
        }

        static mut MEMORY: [u8; 31] = [0; 31];

        static POOL: Pool<X> = Pool::new();

        POOL.grow(unsafe { &mut MEMORY });

        let x = POOL.alloc().unwrap().init(X::new());
        let y = POOL.alloc().unwrap().init(X::new());
        let z = POOL.alloc().unwrap().init(X::new());

        assert_eq!(COUNT.load(Ordering::Relaxed), 3);

        // this leaks memory
        drop(x);

        assert_eq!(COUNT.load(Ordering::Relaxed), 3);

        // this leaks memory
        mem::forget(y);

        assert_eq!(COUNT.load(Ordering::Relaxed), 3);

        // this runs `X` destructor
        POOL.free(z);

        assert_eq!(COUNT.load(Ordering::Relaxed), 2);
    }
}