Datamodule
ESM2FineTuneDataModule
Bases: MegatronDataModule
A PyTorch Lightning DataModule for fine-tuning ESM2 models.
This DataModule handles data preparation and loading for fine-tuning ESM2 models, providing a flexible way to create and manage datasets, data loaders, and sampling strategies.
Source code in bionemo/esm2/model/finetune/datamodule.py
__init__(train_dataset=None, valid_dataset=None, predict_dataset=None, seed=42, min_seq_length=None, max_seq_length=1024, micro_batch_size=4, global_batch_size=8, num_workers=2, persistent_workers=True, pin_memory=True, rampup_batch_size=None, tokenizer=tokenizer.get_tokenizer())
Initialize the ESM2FineTuneDataModule.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`train_dataset` | `DATASET_TYPES` | The training dataset. | `None` |
`valid_dataset` | `DATASET_TYPES` | The validation dataset. | `None` |
`predict_dataset` | `DATASET_TYPES` | The prediction dataset. Should not be set together with the train/valid datasets. | `None` |
`seed` | `int` | The random seed to use for shuffling the datasets. Defaults to 42. | `42` |
`min_seq_length` | `int \| None` | The minimum sequence length for the datasets. Defaults to None. | `None` |
`max_seq_length` | `int` | The maximum sequence length for the datasets. Defaults to 1024. | `1024` |
`micro_batch_size` | `int` | The micro-batch size for the data loader. Defaults to 4. | `4` |
`global_batch_size` | `int` | The global batch size for the data loader. Defaults to 8. | `8` |
`num_workers` | `int` | The number of worker processes for the data loader. Defaults to 2. | `2` |
`persistent_workers` | `bool` | Whether to persist the worker processes. Defaults to True. | `True` |
`pin_memory` | `bool` | Whether to pin the data in memory. Defaults to True. | `True` |
`rampup_batch_size` | `list[int] \| None` | The batch size ramp-up schedule. Defaults to None. | `None` |
`tokenizer` | `BioNeMoESMTokenizer` | The tokenizer to use for tokenization. Defaults to the BioNeMoESMTokenizer. | `get_tokenizer()` |
Returns:

Type | Description |
---|---|
`None` | None |
Source code in bionemo/esm2/model/finetune/datamodule.py
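For orientation, a minimal construction sketch follows. The dataset variables are placeholders (any of the documented `DATASET_TYPES` applies), only keyword arguments documented above are used, and the import path matches the source file listed above.

```python
# Minimal sketch: constructing the DataModule for fine-tuning.
# `train_ds` and `valid_ds` are placeholders for datasets of the
# documented DATASET_TYPES; building them is outside this page's scope.
from bionemo.esm2.model.finetune.datamodule import ESM2FineTuneDataModule

train_ds = ...  # placeholder: replace with a real fine-tuning dataset
valid_ds = ...  # placeholder: replace with a real validation dataset

data_module = ESM2FineTuneDataModule(
    train_dataset=train_ds,
    valid_dataset=valid_ds,
    seed=42,              # shuffle seed (documented default)
    max_seq_length=1024,  # maximum sequence length (documented default)
    micro_batch_size=4,
    global_batch_size=8,
    num_workers=2,
)
```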
predict_dataloader()
Returns the dataloader for prediction data.
Source code in bionemo/esm2/model/finetune/datamodule.py
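Since the parameter docs note that `predict_dataset` should not be combined with the train/valid datasets, inference-only use looks roughly like this sketch (the dataset is again a placeholder):

```python
# Sketch: a prediction-only DataModule; do not also pass train/valid datasets.
predict_ds = ...  # placeholder: replace with a real prediction dataset
predict_dm = ESM2FineTuneDataModule(predict_dataset=predict_ds)
# Lightning invokes predict_dm.predict_dataloader() during trainer.predict(...).
```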
setup(stage)
Set up the ESM2FineTuneDataModule.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`stage` | `str` | Unused. | *required* |

Raises:

Type | Description |
---|---|
`RuntimeError` | If the trainer is not attached, or if the trainer's `max_steps` is not set. |
Source code in bionemo/esm2/model/finetune/datamodule.py
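`setup` is normally invoked by the Lightning `Trainer` rather than called directly, and the `RuntimeError` conditions above imply the module must be attached to a trainer with `max_steps` set before setup runs. A hedged sketch (the trainer class and arguments here are assumptions; BioNeMo workflows typically configure a NeMo/Megatron-aware trainer, not shown):

```python
# Assumption: a stock Lightning trainer for illustration only; real BioNeMo
# runs typically use nemo.lightning.Trainer with a Megatron strategy.
import lightning.pytorch as pl

trainer = pl.Trainer(max_steps=100)  # max_steps must be set, or setup() raises RuntimeError
# trainer.fit(model, datamodule=data_module)  # fit() attaches the trainer, then calls setup()
```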
test_dataloader()
Raises a `NotImplementedError`.
Source code in bionemo/esm2/model/finetune/datamodule.py
train_dataloader()
Returns the dataloader for training data.
Source code in bionemo/esm2/model/finetune/datamodule.py
val_dataloader()
Returns the dataloader for validation data.
Source code in bionemo/esm2/model/finetune/datamodule.py