nvalchemi.hooks.HookContext

class nvalchemi.hooks.HookContext(batch, step_count, model=None, loss=None, optimizer=None, lr_scheduler=None, gradients=None, converged_mask=None, epoch=None, global_rank=0, workflow=None)

Context object passed to hooks at each stage.

Parameters:

batch (Batch) – Current batch being processed.

step_count (int) – Current step number in the workflow.

model (BaseModelMixin | None) – Model being used (if applicable).

loss (torch.Tensor | None) – Current loss value (training only).

optimizer (torch.optim.Optimizer | None) – Optimizer being used (training only).

lr_scheduler (object | None) – Learning rate scheduler (training only).

gradients (dict[str, torch.Tensor] | None) – Parameter gradients (training only).

converged_mask (torch.Tensor | None) – Boolean mask of converged samples (dynamics only).

epoch (int | None) – Current epoch number (training only).

global_rank (int) – Distributed rank of this process.

workflow (Any) – Back-reference to the engine running the hooks (e.g. a BaseDynamics instance); None when the workflow does not inject itself.
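As a concrete illustration, a hook typically reads a few of these fields at each step. The sketch below is hypothetical: it defines a minimal stand-in dataclass mirroring the attributes documented above (in practice you would import HookContext from nvalchemi.hooks), plus a loss-logging hook. The field names match this page; everything else (the hook function shape, how it is invoked) is an assumption, not taken from the library.

```python
from dataclasses import dataclass
from typing import Any, Optional

# Illustrative stand-in mirroring the HookContext fields documented
# above -- in real code, import HookContext from nvalchemi.hooks.
@dataclass
class HookContext:
    batch: Any
    step_count: int
    model: Any = None
    loss: Any = None
    optimizer: Any = None
    lr_scheduler: Any = None
    gradients: Any = None
    converged_mask: Any = None
    epoch: Optional[int] = None
    global_rank: int = 0
    workflow: Any = None

def log_loss_hook(ctx: HookContext) -> None:
    """Log the loss, but only on the rank-0 process and only when
    a loss is present (i.e. during training)."""
    if ctx.global_rank != 0 or ctx.loss is None:
        return
    print(f"step {ctx.step_count}: loss={float(ctx.loss):.4f}")

ctx = HookContext(batch=object(), step_count=10, loss=0.25)
log_loss_hook(ctx)  # prints "step 10: loss=0.2500"
```

Checking `global_rank` before logging is the usual pattern for avoiding duplicated output in distributed runs, and checking `loss is None` keeps the same hook safe to register for non-training stages, where training-only fields are left unset.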

__init__(batch, step_count, model=None, loss=None, optimizer=None, lr_scheduler=None, gradients=None, converged_mask=None, epoch=None, global_rank=0, workflow=None)

Return type:
None
