nvalchemi.hooks.HookContext#
- class nvalchemi.hooks.HookContext(batch, step_count, model=None, loss=None, optimizer=None, lr_scheduler=None, gradients=None, converged_mask=None, epoch=None, global_rank=0, workflow=None)[source]#
Context object passed to hooks at each stage.
- Parameters:
batch (Batch)
step_count (int)
model (BaseModelMixin | None)
loss (torch.Tensor | None)
optimizer (torch.optim.Optimizer | None)
lr_scheduler (object | None)
gradients (dict[str, torch.Tensor] | None)
converged_mask (torch.Tensor | None)
epoch (int | None)
global_rank (int)
workflow (Any)
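Hooks receive this context as their single argument, and since the stage-specific fields default to None, a hook should guard its reads. A minimal sketch (how hooks are registered is not covered on this page; the hook is assumed to be a plain callable):

```python
from nvalchemi.hooks import HookContext

def log_progress(ctx: HookContext) -> None:
    """Example hook: log the training loss every 100 steps, on rank 0 only."""
    if ctx.global_rank != 0:  # avoid duplicate output across distributed ranks
        return
    if ctx.loss is not None and ctx.step_count % 100 == 0:
        print(f"step {ctx.step_count}: loss={ctx.loss.item():.4f}")
```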
- step_count#
Current step number in the workflow.
- Type:
int
- model#
Model being used (if applicable).
- Type:
BaseModelMixin | None
- loss#
Current loss value (training only).
- Type:
torch.Tensor | None
- optimizer#
Optimizer being used (training only).
- Type:
torch.optim.Optimizer | None
- lr_scheduler#
Learning rate scheduler (training only).
- Type:
object | None
- gradients#
Parameter gradients (training only).
- Type:
dict[str, torch.Tensor] | None
- converged_mask#
Boolean mask of converged samples (dynamics only).
- Type:
torch.Tensor | None
- epoch#
Current epoch number (training only).
- Type:
int | None
- global_rank#
Distributed rank of this process.
- Type:
int
- workflow#
Back-reference to the engine running the hooks (e.g. a
BaseDynamics instance). None when the workflow does not inject itself.
- Type:
Any
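The same None-guarding applies on the dynamics side, where converged_mask is populated instead of the training-only fields. A hedged sketch:

```python
from nvalchemi.hooks import HookContext

def report_convergence(ctx: HookContext) -> None:
    """Example hook for dynamics runs: report the fraction of converged samples."""
    if ctx.converged_mask is None:  # unset outside dynamics workflows
        return
    frac = ctx.converged_mask.float().mean().item()
    print(f"step {ctx.step_count}: {frac:.1%} converged")
```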
- __init__(batch, step_count, model=None, loss=None, optimizer=None, lr_scheduler=None, gradients=None, converged_mask=None, epoch=None, global_rank=0, workflow=None)#
- Parameters:
batch (Batch)
step_count (int)
model (BaseModelMixin | None)
loss (torch.Tensor | None)
optimizer (torch.optim.Optimizer | None)
lr_scheduler (object | None)
gradients (dict[str, torch.Tensor] | None)
converged_mask (torch.Tensor | None)
epoch (int | None)
global_rank (int)
workflow (Any)
- Return type:
None
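Because everything except batch and step_count is keyword-optional, a context is easy to build by hand when unit-testing a hook. A sketch reusing log_progress from above; the object() placeholder stands in for a real Batch, whose construction is out of scope here:

```python
import torch
from nvalchemi.hooks import HookContext

batch = object()  # stand-in for a real Batch instance
ctx = HookContext(batch=batch, step_count=100, loss=torch.tensor(0.25), epoch=1)
log_progress(ctx)  # prints "step 100: loss=0.2500"
```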
Methods
__init__(batch, step_count[, model, loss, ...])