cuda.core.experimental._stream.Stream¶
- class cuda.core.experimental._stream.Stream¶
Represent a queue of GPU operations that are executed in a specific order.
Applications use streams to control the order of execution for GPU work. Work within a single stream is executed sequentially, whereas work across multiple streams can be further controlled using stream priorities and Event management. Advanced users can utilize default streams to enforce complex implicit synchronization behaviors.
Directly creating a Stream is not supported due to ambiguity. New streams should instead be created through a Device object, or directly from an existing handle using Stream.from_handle().
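For example, a minimal sketch of the Device-based path (assuming a CUDA-capable GPU is available; the device index defaults to 0 here):

    from cuda.core.experimental import Device

    dev = Device()            # Device singleton for GPU 0
    dev.set_current()         # make this device's context current on this thread
    s = dev.create_stream()   # new Stream owned by this Device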
Methods
- __init__()¶
- close()¶
Destroy the stream.
Destroy the stream if we own it. Borrowed foreign stream objects will instead have their references released.
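A hedged sketch of explicit cleanup, assuming `dev` is a Device object as in the earlier example:

    s = dev.create_stream()
    try:
        pass          # enqueue GPU work on `s` here
    finally:
        s.close()     # destroys the stream, since this Stream owns it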
- static from_handle(handle: int) → Stream¶
Create a new Stream object from a foreign stream handle.
Uses a cudaStream_t pointer address represented as a Python int to create a new Stream object.
Note
Stream lifetime is not managed; the foreign object must remain alive while this stream is active.
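A sketch of wrapping an existing handle. For illustration the foreign pointer is taken from another cuda.core stream, but it could be any valid cudaStream_t address exported by another library:

    from cuda.core.experimental import Device, Stream

    dev = Device()
    dev.set_current()
    owner = dev.create_stream()                 # stands in for a "foreign" stream owned elsewhere
    borrowed = Stream.from_handle(owner.handle) # non-owning view of the same stream
    borrowed.sync()                             # usable like any other Stream
    # `owner` must stay alive for as long as `borrowed` is used; closing `borrowed`
    # only releases the reference, it does not destroy the underlying stream.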
- record(event: Event = None, options: EventOptions = None) → Event¶
Record an event onto the stream.
Creates an Event object (or reuses the given one) by recording on the stream.
- Parameters:
event (Event, optional) – Optional event object to be reused for recording.
options (EventOptions, optional) – Customizable dataclass for event creation options.
- Returns:
Newly created event object.
- Return type:
Event
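A minimal sketch of recording and waiting on an event, assuming a stream `s` created as shown earlier; the optional EventOptions argument is omitted:

    e = s.record()   # new Event capturing all work enqueued on `s` so far
    e.sync()         # block the host until that recorded point has been reached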
- sync()¶
Synchronize the stream.
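For comparison with the event-based wait above, a one-line sketch (again assuming `s` from the earlier example):

    s.sync()   # block the host until all work previously enqueued on `s` completes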
- wait(event_or_stream: Event | Stream)¶
Wait for a CUDA event or a CUDA stream.
Waiting for an event or a stream establishes a stream order.
If a Stream is provided, then wait until the stream's work is completed. This is done by recording a new Event on the stream and then waiting on it.
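A hedged sketch of establishing a cross-stream dependency, assuming two streams created from the same Device object `dev` as before:

    s1 = dev.create_stream()
    s2 = dev.create_stream()

    # ... enqueue producer work on s1 ...
    s2.wait(s1)     # work enqueued on s2 after this point waits for s1's current work
    # equivalently, wait on an explicit event:
    e = s1.record()
    s2.wait(e)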
Attributes
- context¶
Return the Context associated with this stream.
- device¶
Return the Device singleton associated with this stream.
Note
The current context on the device may differ from this stream's context. This case occurs when a different CUDA context is set current after a stream is created.
- handle¶
Return the underlying cudaStream_t pointer address as a Python int.
- is_nonblocking¶
Return True if this is a nonblocking stream, otherwise False.
- priority¶
Return the stream priority.
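A short sketch that inspects these attributes on a stream created from a Device object (exact values depend on the driver and on the stream creation options):

    s = dev.create_stream()
    print(s.device)          # Device singleton the stream belongs to
    print(s.context)         # Context associated with the stream
    print(s.handle)          # cudaStream_t address as a Python int
    print(s.is_nonblocking)  # True if the stream is nonblocking
    print(s.priority)        # stream priority (lower numbers generally mean higher priority in CUDA)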