nvalchemi.dynamics.GPUBuffer#
- class nvalchemi.dynamics.GPUBuffer(capacity, max_atoms, max_edges, device='cuda')[source]#
GPU-resident buffer for storing batched atomic data.
This buffer lazily pre-allocates a Batch with fixed maximum sizes for atoms and edges on the first write() call. The incoming batch serves as a template for attribute keys and dtypes, ensuring all fields are preserved (not just positions and atomic_numbers).
- Parameters:
capacity (int) – Maximum number of samples (graphs) to store.
max_atoms (int) – Maximum number of atoms per sample.
max_edges (int) – Maximum number of edges per sample.
device (torch.device | str, optional) – CUDA device to store data on. Default is “cuda”.
- capacity#
Maximum storage capacity.
- Type:
int
- device#
Target CUDA device for stored tensors.
- Type:
torch.device
Examples
>>> buffer = GPUBuffer(capacity=100, max_atoms=50, max_edges=200, device="cuda:0")
>>> buffer.write(batch)
>>> len(buffer)
2
>>> retrieved = buffer.read()
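The lazy pre-allocation behavior described above can be sketched in plain Python. This is a conceptual stand-in, not the library's implementation: ToyGPUBuffer, its list-backed storage, and the zero padding value are illustrative assumptions.

```python
class ToyGPUBuffer:
    """Conceptual sketch of lazy pre-allocation: storage is sized on the
    first write, using that batch's keys as a template for all fields."""

    def __init__(self, capacity, max_atoms):
        self.capacity = capacity
        self.max_atoms = max_atoms
        self._storage = None  # allocated lazily on the first write()
        self._count = 0

    def write(self, batch):
        if self._storage is None:
            # First write: pre-allocate padded slots for every key in the
            # template batch, not just positions and atomic_numbers.
            self._storage = {
                key: [[0] * self.max_atoms for _ in range(self.capacity)]
                for key in batch
            }
        for key, atoms in batch.items():
            # Pad each sample out to the fixed per-sample maximum.
            padded = atoms + [0] * (self.max_atoms - len(atoms))
            self._storage[key][self._count] = padded
        self._count += 1

    def __len__(self):
        return self._count


buf = ToyGPUBuffer(capacity=4, max_atoms=5)
buf.write({"positions": [1, 2, 3], "atomic_numbers": [6, 1, 1]})
print(len(buf))                     # 1
print(sorted(buf._storage.keys()))  # ['atomic_numbers', 'positions']
```

Because the first batch defines the full set of stored keys, any extra per-atom fields it carries are preserved automatically in later writes.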
- __init__(capacity, max_atoms, max_edges, device='cuda')[source]#
Initialize the GPU buffer.
- Parameters:
capacity (int) – Maximum number of samples (graphs) to store.
max_atoms (int) – Maximum number of atoms per sample.
max_edges (int) – Maximum number of edges per sample.
device (torch.device | str, optional) – CUDA device to store data on. Default is “cuda”.
- Raises:
RuntimeError – If CUDA is not available or a non-CUDA device is specified.
- Return type:
None
Methods
__init__(capacity, max_atoms, max_edges[, ...]) – Initialize the GPU buffer.
drain() – Read all stored samples and clear the sink.
read() – Retrieve stored (non-padding) data as a single Batch.
write(batch[, mask]) – Store atomic data into the buffer.
zero() – Clear all stored data and reset the buffer.
Attributes
capacity – Return the maximum storage capacity.
device – Return the storage device.
global_rank – Return the global rank of this data sink.
is_full – Check if the buffer has reached capacity.
local_rank – Return the local rank of this data sink.
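The read()/drain() distinction summarized above can be illustrated with a minimal list-backed stand-in. The semantics are inferred from the method summaries (read returns stored samples and keeps them; drain returns them and clears the sink); ToySink and its internals are illustrative, not the library's code.

```python
class ToySink:
    """Minimal stand-in illustrating read() vs. drain() semantics."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._samples = []

    def write(self, sample):
        if self.is_full:
            raise RuntimeError("buffer is full")
        self._samples.append(sample)

    @property
    def is_full(self):
        # True once the number of stored samples reaches capacity.
        return len(self._samples) >= self.capacity

    def read(self):
        # Retrieve stored data; buffer contents are kept.
        return list(self._samples)

    def drain(self):
        # Read all stored samples, then reset to empty.
        out = list(self._samples)
        self._samples.clear()
        return out

    def zero(self):
        # Clear all stored data and reset the buffer.
        self._samples.clear()


sink = ToySink(capacity=2)
sink.write("a")
sink.write("b")
print(sink.read())   # ['a', 'b'] -- contents preserved
print(sink.drain())  # ['a', 'b'] -- contents returned and cleared
print(sink.read())   # []
print(sink.is_full)  # False
```

In a training loop, read() suits periodic inspection of accumulated samples, while drain() suits handing a full batch downstream and freeing capacity for the next accumulation cycle.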