Datamodule

PickledDataWDS

Bases: WebDataModule

A LightningDataModule to process pickled data into webdataset tar files.

`PickledDataWDS` is a LightningDataModule that processes pickled data into webdataset tar files and sets up the dataset and dataloader. It inherits the webdataset setup from its parent module `WebDataModule`. This data module takes a directory of pickled data files, data filename prefixes for the train/val/test splits, and a data filename suffix, and prepares the webdataset tar files by globbing the specific pickle data files `{dir_pickles}/{name_subset[split]}.{suffix_pickles}` and outputting them to webdataset tar files with the dict structure:

    {"__key__" : name.replace(".", "-"),
     suffix_pickles : pickle.dumps(data) }
NOTE: this assumes only one pickled file is processed for each sample. In its setup() function, it creates the webdataset object by chaining together the input pipeline_wds workflow. In its train/val/test_dataloader(), it creates the WebLoader object by chaining together the pipeline_prebatch_wld workflow.
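
For illustration only, the tar sample written for each pickled file is conceptually equivalent to the following sketch, which uses webdataset's `TarWriter` directly (a minimal, hypothetical example; in practice the shard creation is handled by `pickles_to_tars` inside `prepare_data()`, and the paths and names below are placeholders):

```
import pickle
import webdataset as wds

# hypothetical inputs mirroring the constructor arguments below
name = "sample1"                # one entry of names_subset[split]
suffix_pickles = "mydata.pt"    # filename suffix of the pickled files

with open(f"/path/to/my/pickles/dir/{name}.{suffix_pickles}", "rb") as f:
    data = pickle.load(f)

# each sample in the output shard is a dict keyed by "__key__" and the suffix
with wds.TarWriter("/path/to/output/tars/dir-train/myshards-000000.tar") as sink:
    sink.write({
        "__key__": name.replace(".", "-"),
        suffix_pickles: pickle.dumps(data),
    })
```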

Examples:

  1. Create the data module with a directory of pickle files and the file name prefixes thereof for the different splits, to be used by Lightning.Trainer.fit()
>>> from bionemo.core.data.datamodule import Split, PickledDataWDS

>>> dir_pickles = "/path/to/my/pickles/dir"

>>> # the following will use `sample1.mydata.pt` and `sample2.mydata.pt` as the
>>> # training dataset and `sample4.mydata.pt` and `sample5.mydata.pt` as the
>>> # validation dataset

>>> suffix_pickles = "mydata.pt"

>>> names_subset = {
>>>     Split.train: ["sample1", "sample2"],
>>>     Split.val: ["sample4", "sample5"],
>>> }

>>> # the following setting will attempt to create at least 5 tar files named
>>> # `myshards-00000{0-4}.tar` in each output tar directory

>>> n_tars_wds = 5
>>> prefix_tars_wds = "myshards"
>>> output_dir_tar_files = {
        Split.train : "/path/to/output/tars/dir-train",
        Split.val : "/path/to/output/tars/dir-val",
        Split.test : "/path/to/output/tars/dir-test",
    }

>>> # see the `WebDataModule` API doc for the definition of global_batch_size
>>> global_batch_size = 16

>>> # user can optionally customize the data processing routines and kwargs used
>>> # in the WebDataset and WebLoader (see the examples in `WebDataModule`)

>>> pipeline_wds = { Split.train: ... }

>>> pipeline_prebatch_wld = { Split.train: ... }

>>> kwargs_wds = { Split.train: ..., Split.val: ... }

>>> kwargs_wld = { Split.train: ..., Split.val: ... }

>>> # create the data module
>>> data_module = PickledDataWDS(
>>>     dir_pickles,
>>>     names_subset,
>>>     suffix_pickles, # `WebDataModule` args
>>>     output_dir_tar_files, # `WebDataModule` args
>>>     global_batch_size, # `WebDataModule` args
>>>     n_tars_wds=n_tars_wds,
>>>     prefix_tars_wds=prefix_tars_wds, # `WebDataModule` kwargs
>>>     pipeline_wds=pipeline_wds, # `WebDataModule` kwargs
>>>     pipeline_prebatch_wld=pipeline_prebatch_wld, # `WebDataModule` kwargs
>>>     kwargs_wds=kwargs_wds, # `WebDataModule` kwargs
>>>     kwargs_wld=kwargs_wld, # `WebDataModule` kwargs
>>> )
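
Once constructed, the data module is used like any other LightningDataModule. The following is a minimal sketch, assuming the `data_module` from the example above; `MyLightningModule` is a hypothetical user-defined LightningModule and is not part of this package:

```
import lightning as L

model = MyLightningModule()  # hypothetical model definition
trainer = L.Trainer(max_epochs=1, accelerator="gpu", devices=1)

# Lightning calls data_module.prepare_data() and data_module.setup("fit")
# internally, so the webdataset tar shards are created before training starts
trainer.fit(model, datamodule=data_module)
```
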
Source code in bionemo/webdatamodule/datamodule.py
class PickledDataWDS(WebDataModule):
    """A LightningDataModule to process pickled data into webdataset tar files.

    `PickledDataWDS` is a LightningDataModule that processes pickled data into webdataset tar
    files and sets up the dataset and dataloader. It inherits the webdataset setup from its
    parent module `WebDataModule`. This data module takes a directory of pickled data files,
    data filename prefixes for the train/val/test splits, and a data filename suffix, and
    prepares the webdataset tar files by globbing the specific pickle data files
    `{dir_pickles}/{name_subset[split]}.{suffix_pickles}` and outputting them to webdataset
    tar files with the dict structure:
    ```
        {"__key__" : name.replace(".", "-"),
         suffix_pickles : pickle.dumps(data) }
    ```
    NOTE: this assumes only one pickled file is processed for each sample. In
    its setup() function, it creates the webdataset object chaining up the input
    `pipeline_wds` workflow. In its train/val/test_dataloader(), it creates the
    WebLoader object chaining up the `pipeline_prebatch_wld` workflow.

    Examples:
    --------
    1. Create the data module with a directory of pickle files and the file name
    prefixes thereof for the different splits, to be used by `Lightning.Trainer.fit()`

    ```
    >>> from bionemo.core.data.datamodule import Split, PickledDataWDS

    >>> dir_pickles = "/path/to/my/pickles/dir"

    >>> # the following will use `sample1.mydata.pt` and `sample2.mydata.pt` as the
    >>> # training dataset and `sample4.mydata.pt` and `sample5.mydata.pt` as the
    >>> # validation dataset

    >>> suffix_pickles = "mydata.pt"

    >>> names_subset = {
    >>>     Split.train: ["sample1", "sample2"],
    >>>     Split.val: ["sample4", "sample5"],
    >>> }

    >>> # the following setting will attempt to create at least 5 tar files named
    >>> # `myshards-00000{0-4}.tar` in each output tar directory

    >>> n_tars_wds = 5
    >>> prefix_tars_wds = "myshards"
    >>> output_dir_tar_files = {
            Split.train : "/path/to/output/tars/dir-train",
            Split.val : "/path/to/output/tars/dir-val",
            Split.test : "/path/to/output/tars/dir-test",
        }

    >>> # see the `WebDataModule` API doc for the definition of global_batch_size
    >>> global_batch_size = 16

    >>> # user can optionally customize the data processing routines and kwargs used
    >>> # in the WebDataset and WebLoader (see the examples in `WebDataModule`)

    >>> pipeline_wds = { Split.train: ... }

    >>> pipeline_prebatch_wld = { Split.train: ... }

    >>> kwargs_wds = { Split.train: ..., Split.val: ... }

    >>> kwargs_wld = { Split.train: ..., Split.val: ... }

    >>> # create the data module
    >>> data_module = PickledDataWDS(
    >>>     dir_pickles,
    >>>     names_subset,
    >>>     suffix_pickles, # `WebDataModule` args
    >>>     output_dir_tar_files, # `WebDataModule` args
    >>>     global_batch_size, # `WebDataModule` args
    >>>     n_tars_wds=n_tars_wds,
    >>>     prefix_tars_wds=prefix_tars_wds, # `WebDataModule` kwargs
    >>>     pipeline_wds=pipeline_wds, # `WebDataModule` kwargs
    >>>     pipeline_prebatch_wld=pipeline_prebatch_wld, # `WebDataModule` kwargs
    >>>     kwargs_wds=kwargs_wds, # `WebDataModule` kwargs
    >>>     kwargs_wld=kwargs_wld, # `WebDataModule` kwargs
    >>> )
    ```
    """

    def __init__(
        self,
        dir_pickles: str,
        names_subset: Dict[Split, List[str]],
        *args,
        n_tars_wds: Optional[int] = None,
        **kwargs,
    ) -> None:
        """Constructor.

        Args:
            dir_pickles: input directory of pickled data files
            names_subset: dictionary mapping each split to a list of filename
                prefixes of the data samples to be loaded in the dataset and
                dataloader for that split
            *args: arguments passed to the parent WebDataModule after its
            `n_samples` args (where `n_samples` is deduced from the length of
            `names_subset` arg of this class)
            n_tars_wds: attempt to create at least this number of
                webdataset shards
            **kwargs: arguments passed to the parent WebDataModule
        """
        super().__init__(
            {split: len(names_subset[split]) for split in names_subset.keys()},
            *args,
            **kwargs,
        )

        self._dir_pickles = dir_pickles

        self._names_subset = names_subset

        self._n_tars_wds = n_tars_wds

    def prepare_data(self) -> None:
        """This is called only by the main process by the Lightning workflow.

        Do not rely on updating this data module object's state here, as there is no
        way to communicate the state update to other subprocesses. The nested
        `pickles_to_tars` function goes through the data name prefixes in the
        different splits, reads the corresponding pickled files and outputs a
        webdataset tar archive with the dict structure: {"__key__" :
        name.replace(".", "-"), suffix_pickles : pickle.dumps(data) }.
        """
        for split in self._names_subset.keys():
            # create webdataset shards (tar files) for each split
            pickles_to_tars(
                self._dir_pickles,
                self._names_subset[split],
                self._suffix_keys_wds,
                self._dirs_tars_wds[split],
                self._prefix_tars_wds,
                min_num_shards=self._n_tars_wds,
            )

__init__(dir_pickles, names_subset, *args, n_tars_wds=None, **kwargs)

Constructor.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `dir_pickles` | `str` | input directory of pickled data files | *required* |
| `names_subset` | `Dict[Split, List[str]]` | dictionary mapping each split to a list of filename prefixes of the data samples to be loaded in the dataset and dataloader for that split | *required* |
| `*args` | | arguments passed to the parent `WebDataModule` after its `n_samples` argument (`n_samples` is deduced from the lengths of the lists in `names_subset`) | `()` |
| `n_tars_wds` | `Optional[int]` | attempt to create at least this number of webdataset shards | `None` |
| `**kwargs` | | keyword arguments passed to the parent `WebDataModule` | `{}` |
Source code in bionemo/webdatamodule/datamodule.py
def __init__(
    self,
    dir_pickles: str,
    names_subset: Dict[Split, List[str]],
    *args,
    n_tars_wds: Optional[int] = None,
    **kwargs,
) -> None:
    """Constructor.

    Args:
        dir_pickles: input directory of pickled data files
        names_subset: dictionary mapping each split to a list of filename
            prefixes of the data samples to be loaded in the dataset and
            dataloader for that split
        *args: arguments passed to the parent WebDataModule after its
        `n_samples` args (where `n_samples` is deduced from the length of
        `names_subset` arg of this class)
        n_tars_wds: attempt to create at least this number of
            webdataset shards
        **kwargs: arguments passed to the parent WebDataModule
    """
    super().__init__(
        {split: len(names_subset[split]) for split in names_subset.keys()},
        *args,
        **kwargs,
    )

    self._dir_pickles = dir_pickles

    self._names_subset = names_subset

    self._n_tars_wds = n_tars_wds

prepare_data()

This is called only by the main process by the Lightning workflow.

Do not rely on updating this data module object's state here, as there is no way to communicate the state update to other subprocesses. The nested `pickles_to_tars` function goes through the data name prefixes in the different splits, reads the corresponding pickled files and outputs a webdataset tar archive with the dict structure: `{"__key__" : name.replace(".", "-"), suffix_pickles : pickle.dumps(data)}`.
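
If the shards are needed ahead of time, e.g., to inspect them before launching training, this method can also be invoked directly. A minimal sketch, assuming the `data_module` constructed in the class-level example above:

```
# creates the webdataset tar shards for every split listed in names_subset
data_module.prepare_data()

# afterwards each split's output directory should contain shards matching
# "{dirs_tars_wds[split]}/{prefix_tars_wds}-*.tar", e.g.
# /path/to/output/tars/dir-train/myshards-000000.tar, ...
```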

Source code in bionemo/webdatamodule/datamodule.py
def prepare_data(self) -> None:
    """This is called only by the main process by the Lightning workflow.

    Do not rely on updating this data module object's state here, as there is no
    way to communicate the state update to other subprocesses. The nested
    `pickles_to_tars` function goes through the data name prefixes in the
    different splits, reads the corresponding pickled files and outputs a
    webdataset tar archive with the dict structure: {"__key__" :
    name.replace(".", "-"), suffix_pickles : pickle.dumps(data) }.
    """
    for split in self._names_subset.keys():
        # create webdataset shards (tar files) for each split
        pickles_to_tars(
            self._dir_pickles,
            self._names_subset[split],
            self._suffix_keys_wds,
            self._dirs_tars_wds[split],
            self._prefix_tars_wds,
            min_num_shards=self._n_tars_wds,
        )

Split

Bases: Enum

Names for each data split.

Source code in bionemo/webdatamodule/datamodule.py
class Split(Enum):
    """Names for each data split."""

    train = auto()
    val = auto()
    test = auto()

WebDataModule

Bases: LightningDataModule

A LightningDataModule for using webdataset tar files.

WebDataModule is a LightningDataModule for using webdataset tar files to set up PyTorch datasets and dataloaders. This data module takes as input a dictionary mapping Split to a tar file directory, along with various webdataset config settings. In its setup() function, it creates the webdataset object by chaining together the input pipeline_wds workflow. In its train/val/test_dataloader(), it creates the WebLoader object by chaining together the pipeline_prebatch_wld workflow.

Examples:

  1. Create the data module with an input directory of webdataset tar files. Depending on which of the downstream Lightning.Trainer methods are called, e.g., Trainer.fit(), Trainer.validate(), Trainer.test() or Trainer.predict(), only a subset of the train, val and test splits need to be specified in the various input options to the data module:

     - Trainer.fit() requires the train and val splits
     - Trainer.validate() requires the val split
     - Trainer.test() requires the test split
     - Trainer.predict() requires the test split

Here is an example of constructing the data module for Trainer.fit():

>>> from bionemo.webdatamodule.datamodule import Split, WebDataModule
>>>
>>> tar_file_prefix = "shards"
>>>
>>> dirs_of_tar_files = {
>>>     Split.train: "/path/to/train/split/tars",
>>>     Split.val: "/path/to/val/split/tars",
>>> }
>>>
>>> n_samples = {
>>>     Split.train: 1000,
>>>     Split.val: 100,
>>> }
>>>
>>> # this is the string to retrieve the corresponding data object from the
>>> # webdataset file (see
>>> # https://github.com/webdataset/webdataset?tab=readme-ov-file#the-webdataset-format
>>> # for details)
>>> suffix_keys_wds = "tensor.pyd"
>>>
>>> # see the API doc for the definition of global_batch_size
>>> global_batch_size = 16
>>>
>>> seed_rng_shfl = 27193781
>>>
>>> # Specify the routines to process the samples in the WebDataset object.
>>> # The routine is a generator of an Iterable of generators that are chained
>>> # together by nested function calling. The following is equivalent of
>>> # defining a overall generator of `shuffle(untuple(...))` which
>>> # untuples the samples and shuffles them. See webdataset's Documentation
>>> # for details.
>>> # NOTE: the `untuple` is almost always necessary due to the webdataset's
>>> # file parsing rule.
>>>
>>> untuple = lambda source : (sample for (sample,) in source)
>>>
>>> from webdataset import batched, shuffle
>>> pipeline_wds = {
>>>     Split.train : [untuple, shuffle(n_samples[Split.train],
>>>                                     rng=random.Random(seed_rng_shfl))],
>>>     Split.val: untuple
>>> }
>>>
>>> # Similarly the user can optionally define the processing routine on the
>>> # WebLoader (the dataloader of webdataset).
>>> # NOTE: these routines by default take unbatched sample as input so the
>>> # user can customize their batching routines here
>>>
>>> batch = batched(local_batch_size, collation_fn=lambda
                    list_samples : torch.vstack(list_samples))
>>> pipeline_prebatch_wld = {
        Split.train: [shuffle(n_samples[Split.train],
                              rng=random.Random(seed_rng_shfl)), batch],
        Split.val : batch,
        Split.test : batch
    }
>>>
>>> # the user can optionally specify the kwargs for WebDataset and
>>> # WebLoader
>>>
>>> kwargs_wds = {
>>>     split : {'shardshuffle' : split == Split.train,
>>>              'nodesplitter' : wds.split_by_node,
>>>              'seed' : seed_rng_shfl}
>>>     for split in Split
>>>     }
>>>
>>> kwargs_wld = {
>>>     split : {"num_workers": 2} for split in Split
>>>     }
>>>
>>> # construct the data module
>>> data_module = WebDataModule(n_samples, suffix_keys_wds,
                                dirs_of_tar_files, global_batch_size,
                                prefix_tars_wds=tar_file_prefix,
                                pipeline_wds=pipeline_wds,
                                pipeline_prebatch_wld=pipeline_prebatch_wld,
                                kwargs_wds=kwargs_wds,
                                kwargs_wld=kwargs_wld)
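
After `setup()` has run, the resulting WebLoader can also be iterated directly, which is handy as a quick sanity check of the pipeline. A minimal sketch, assuming the `data_module` constructed above and a `pipeline_prebatch_wld` that stacks samples into a tensor:

```
# Lightning normally calls setup() itself; here it is called manually for inspection
data_module.setup("fit")

val_loader = data_module.val_dataloader()
batch = next(iter(val_loader))
print(type(batch), getattr(batch, "shape", None))
```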

Source code in bionemo/webdatamodule/datamodule.py
class WebDataModule(L.LightningDataModule):
    """A LightningDataModule for using webdataset tar files.

    `WebDataModule` is a `LightningDataModule` for using webdataset tar files to set up PyTorch
    datasets and dataloaders. This data module takes as input a dictionary: Split -> tar file
    directory and various webdataset config settings. In its setup() function, it creates the
    webdataset object chaining up the input `pipeline_wds` workflow. In its train/val/test_dataloader(),
    it creates the WebLoader object chaining up the `pipeline_prebatch_wld` workflow.

    Examples:
    --------
    1. Create the data module with an input directory of webdataset tar files.
    Depending on which of the downstream Lightning.Trainer methods are called,
    e.g., `Trainer.fit()`, `Trainer.validate()`, `Trainer.test()` or
    `Trainer.predict()`, only a subset of the train, val and test splits need to
    be specified in the various input options to the data module:

    - `Trainer.fit()` requires the `train` and `val` splits
    - `Trainer.validate()` requires the `val` split
    - `Trainer.test()` requires the `test` split
    - `Trainer.predict()` requires the `test` split

    Here is an example of constructing the data module for `Trainer.fit()`:
    ```
    >>> from bionemo.webdatamodule.datamodule import Split, WebDataModule
    >>>
    >>> tar_file_prefix = "shards"
    >>>
    >>> dirs_of_tar_files = {
    >>>     Split.train: "/path/to/train/split/tars",
    >>>     Split.val: "/path/to/val/split/tars",
    >>> }
    >>>
    >>> n_samples = {
    >>>     Split.train: 1000,
    >>>     Split.val: 100,
    >>> }
    >>>
    >>> # this is the string to retrieve the corresponding data object from the
    >>> # webdataset file (see
    >>> # https://github.com/webdataset/webdataset?tab=readme-ov-file#the-webdataset-format
    >>> # for details)
    >>> suffix_keys_wds = "tensor.pyd"
    >>>
    >>> # see the API doc for the definition of global_batch_size
    >>> global_batch_size = 16
    >>>
    >>> seed_rng_shfl = 27193781
    >>>
    >>> # Specify the routines to process the samples in the WebDataset object.
    >>> # The routine is a generator of an Iterable of generators that are chained
    >>> # together by nested function calling. The following is equivalent of
    >>> # defining a overall generator of `shuffle(untuple(...))` which
    >>> # untuples the samples and shuffles them. See webdataset's Documentation
    >>> # for details.
    >>> # NOTE: the `untuple` is almost always necessary due to the webdataset's
    >>> # file parsing rule.
    >>>
    >>> untuple = lambda source : (sample for (sample,) in source)
    >>>
    >>> from webdataset import batched, shuffle
    >>> pipeline_wds = {
    >>>     Split.train : [untuple, shuffle(n_samples[Split.train],
    >>>                                     rng=random.Random(seed_rng_shfl))],
    >>>     Split.val: untuple
    >>> }
    >>>
    >>> # Similarly the user can optionally define the processing routine on the
    >>> # WebLoader (the dataloader of webdataset).
    >>> # NOTE: these routines by default take unbatched sample as input so the
    >>> # user can customize their batching routines here
    >>>
    >>> batch = batched(local_batch_size, collation_fn=lambda
                        list_samples : torch.vstack(list_samples))
    >>> pipeline_prebatch_wld = {
            Split.train: [shuffle(n_samples[Split.train],
                                  rng=random.Random(seed_rng_shfl)), batch],
            Split.val : batch,
            Split.test : batch
        }
    >>>
    >>> # the user can optionally specify the kwargs for WebDataset and
    >>> # WebLoader
    >>>
    >>> kwargs_wds = {
    >>>     split : {'shardshuffle' : split == Split.train,
    >>>              'nodesplitter' : wds.split_by_node,
    >>>              'seed' : seed_rng_shfl}
    >>>     for split in Split
    >>>     }
    >>>
    >>> kwargs_wld = {
    >>>     split : {"num_workers": 2} for split in Split
    >>>     }
    >>>
    >>> # construct the data module
    >>> data_module = WebDataModule(n_samples, suffix_keys_wds,
                                    dirs_of_tar_files, global_batch_size,
                                    prefix_tars_wds=tar_file_prefix,
                                    pipeline_wds=pipeline_wds,
                                    pipeline_prebatch_wld=pipeline_prebatch_wld,
                                    kwargs_wds=kwargs_wds,
                                    kwargs_wld=kwargs_wld)
    ```

    """

    def __init__(
        self,
        n_samples: Dict[Split, int],
        suffix_keys_wds: Union[str, Iterable[str]],
        dirs_tars_wds: Dict[Split, str],
        global_batch_size: int,
        prefix_tars_wds: str = "wdshards",
        pipeline_wds: Optional[Dict[Split, Union[Iterable[Iterable[Any]], Iterable[Any]]]] = None,
        pipeline_prebatch_wld: Optional[Dict[Split, Union[Iterable[Iterable[Any]], Iterable[Any]]]] = None,
        kwargs_wds: Optional[Dict[Split, Dict[str, Any]]] = None,
        kwargs_wld: Optional[Dict[Split, Dict[str, Any]]] = None,
    ):
        """Constructor.

        Args:
            n_samples: input dictionary: Split -> number of data samples for each split
            suffix_keys_wds: a set of keys each
                corresponding to a data object in the webdataset tar file
                dictionary. The data objects of these keys will be extracted and
                tupled for each sample in the tar files
            dirs_tars_wds: input dictionary: Split -> tar file
                directory that contains the webdataset tar files for each split
            global_batch_size: size of batch summing across nodes in Data
                Distributed Parallel, i.e., local_batch_size * n_nodes. NOTE:
                this data module doesn't rely on the input `global_batch_size`
                for batching the samples. The batching is supposed to be done as
                a part of the input `pipeline_prebatch_wld`. `global_batch_size`
                is only used to compute a (pseudo-) epoch length for the data
                loader so that the loader yields approximately n_samples //
                global_batch_size batches
        Kwargs:
            prefix_tars_wds: name prefix of the input webdataset tar
                files. The input tar files are globbed by
                "{dirs_tars_wds[split]}/{prefix_tars_wds}-*.tar"
            pipeline_wds: a dictionary of webdataset composables, i.e.,
                functors that map an iterator to another iterator that
                transforms the data samples yielded from the dataset object, for
                different splits, or an iterable of a sequence of such
                iterators. For example, this can be used to transform the
                sample in the worker before sending it to the main process of
                the dataloader
            pipeline_prebatch_wld: a dictionary
                of webloader composables, i.e., functors that map an iterator to
                another iterator that transforms the data samples yielded from the
                WebLoader object, for different splits, or an iterable of a
                sequence of such iterators. For example, this can be used for
                batching the samples. NOTE: this is applied before the batch is
                yielded from the WebLoader
            kwargs_wds: kwargs for the WebDataset.__init__()
            kwargs_wld : kwargs for the WebLoader.__init__(), e.g., num_workers, of each split
        """
        super().__init__()

        self._dirs_tars_wds = dirs_tars_wds

        keys_subset = self._dirs_tars_wds.keys()

        if n_samples.keys() != keys_subset:
            raise RuntimeError(
                f"Input n_samples has different keys than " f"dirs_tars_wds: {n_samples.keys()} vs " f"{keys_subset}"
            )

        self._n_samples = n_samples

        self._global_batch_size = global_batch_size

        if not isinstance(suffix_keys_wds, get_args(Union[str, Iterable])):
            raise TypeError("suffix_keys_wds can only be str or Iterable[str]")

        self._suffix_keys_wds = suffix_keys_wds

        self._prefix_tars_wds = prefix_tars_wds
        self._pipeline_wds = pipeline_wds
        self._pipeline_prebatch_wld = pipeline_prebatch_wld

        self._kwargs_wld = kwargs_wld

        self._kwargs_wds = kwargs_wds

        # to be created later in setup
        self._dataset = {}

    def prepare_data(self) -> None:
        """This is called only by the main process by the Lightning workflow.

        Do not rely on this data module object's state update here as there is no
        way to communicate the state update to other subprocesses. Is a **no-op**.
        """
        pass

    def _setup_wds(self, split: Split) -> wds.WebDataset:
        """Setup webdataset and webloader. This is called by setup().

        Args:
            split (Split): train, val or test split

        Returns:
            WebDataset

        """
        if split not in self._dirs_tars_wds.keys():
            raise RuntimeError(f"_setup_wds() is called with {split} " f"split that doesn't have the input tar dir")
        urls = sorted(glob.glob(f"{self._dirs_tars_wds[split]}/{self._prefix_tars_wds}-*.tar"))
        kwargs = self._kwargs_wds[split] if self._kwargs_wds is not None else None
        dataset = wds.WebDataset(urls, **(kwargs if kwargs is not None else {})).decode()
        if isinstance(self._suffix_keys_wds, str):
            dataset = dataset.extract_keys(f"*.{self._suffix_keys_wds}")
        else:
            dataset = dataset.extract_keys(*[f"*.{key}" for key in self._suffix_keys_wds])

        if self._pipeline_wds is not None and self._pipeline_wds[split] is not None:
            if isinstance(self._pipeline_wds[split], Iterable):
                dataset = dataset.compose(*self._pipeline_wds[split])
            else:
                dataset = dataset.compose(self._pipeline_wds[split])
        return dataset

    def setup(self, stage: str) -> None:
        """This is called on all Lightning-managed nodes in a multi-node training session.

        Args:
            stage: "fit", "validate", "test" or "predict"
        """
        if stage == "fit":
            self._dataset[Split.train] = self._setup_wds(Split.train)
            self._dataset[Split.val] = self._setup_wds(Split.val)
        elif stage == "validate":
            self._dataset[Split.val] = self._setup_wds(Split.val)
        elif stage == "test":
            self._dataset[Split.test] = self._setup_wds(Split.test)
        elif stage == "predict":
            self._dataset[Split.test] = self._setup_wds(Split.test)
        else:
            raise NotImplementedError(f"Data setup with {stage=} is not implemented.")

    def _setup_dataloader(self, split: Split) -> wds.WebLoader:
        """Setup the dataloader for the input dataset split.

        Args:
            split (Split): input split type

        Returns:
             WebLoader object

        Raises:
            ValueError if `split` doesn't correspond to a known dataset.
        """
        if self._dataset[split] is None:
            raise ValueError(
                f"_setup_dataloader() is called with {split} split without setting up the corresponding dataset."
            )
        dataset = self._dataset[split]
        n_samples = self._n_samples[split]
        n_batches = (n_samples + self._global_batch_size - 1) // self._global_batch_size
        kwargs = self._kwargs_wld[split] if self._kwargs_wld is not None else None
        loader = wds.WebLoader(dataset, batch_size=None, **(kwargs if kwargs is not None else {}))

        if self._pipeline_prebatch_wld is not None and self._pipeline_prebatch_wld[split] is not None:
            if isinstance(self._pipeline_prebatch_wld[split], Iterable):
                loader = loader.compose(*self._pipeline_prebatch_wld[split])
            else:
                loader = loader.compose(self._pipeline_prebatch_wld[split])

        loader = loader.with_epoch(n_batches)

        return loader

    def train_dataloader(self) -> wds.WebLoader:
        """Webdataset for the training data."""
        return self._setup_dataloader(Split.train)

    def val_dataloader(self) -> wds.WebLoader:
        """Webdataset for the validation data."""
        return self._setup_dataloader(Split.val)

    def test_dataloader(self) -> wds.WebLoader:
        """Webdataset for the test data."""
        return self._setup_dataloader(Split.test)

    def predict_dataloader(self) -> wds.WebLoader:
        """Alias for :func:`test_dataloader`."""
        return self._setup_dataloader(Split.test)

__init__(n_samples, suffix_keys_wds, dirs_tars_wds, global_batch_size, prefix_tars_wds='wdshards', pipeline_wds=None, pipeline_prebatch_wld=None, kwargs_wds=None, kwargs_wld=None)

Constructor.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `n_samples` | `Dict[Split, int]` | input dictionary: Split -> number of data samples for each split | *required* |
| `suffix_keys_wds` | `Union[str, Iterable[str]]` | a set of keys, each corresponding to a data object in the webdataset tar file dictionary; the data objects of these keys will be extracted and tupled for each sample in the tar files | *required* |
| `dirs_tars_wds` | `Dict[Split, str]` | input dictionary: Split -> tar file directory that contains the webdataset tar files for each split | *required* |
| `global_batch_size` | `int` | size of the batch summed across nodes in Distributed Data Parallel, i.e., local_batch_size * n_nodes. NOTE: this data module doesn't rely on `global_batch_size` for batching the samples; batching is supposed to be done as part of the input `pipeline_prebatch_wld`. `global_batch_size` is only used to compute a (pseudo-)epoch length for the data loader so that the loader yields approximately n_samples // global_batch_size batches | *required* |

Keyword arguments:

- `prefix_tars_wds`: name prefix of the input webdataset tar files; the input tar files are globbed by `"{dirs_tars_wds[split]}/{prefix_tars_wds}-*.tar"`. Default: `'wdshards'`.
- `pipeline_wds`: a dictionary of webdataset composables, i.e., functors that map an iterator to another iterator transforming the data samples yielded from the dataset object, for different splits, or an iterable of a sequence of such iterators. For example, this can be used to transform the sample in the worker before sending it to the main process of the dataloader. Default: `None`.
- `pipeline_prebatch_wld`: a dictionary of webloader composables, i.e., functors that map an iterator to another iterator transforming the data samples yielded from the WebLoader object, for different splits, or an iterable of a sequence of such iterators. For example, this can be used for batching the samples. NOTE: this is applied before the batch is yielded from the WebLoader. Default: `None`.
- `kwargs_wds`: kwargs for `WebDataset.__init__()`, for each split. Default: `None`.
- `kwargs_wld`: kwargs for `WebLoader.__init__()`, e.g., `num_workers`, for each split. Default: `None`.
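
As a concrete illustration of the role of `global_batch_size`, the epoch length is computed with the same ceiling division used internally before the WebLoader is wrapped with `with_epoch()`; the numbers below are just an example:

```
# hypothetical numbers matching the class-level example
n_samples_train = 1000
global_batch_size = 16

# ceiling division, as done in _setup_dataloader()
n_batches = (n_samples_train + global_batch_size - 1) // global_batch_size
print(n_batches)  # 63 batches are yielded per (pseudo-)epoch
```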

Source code in bionemo/webdatamodule/datamodule.py
def __init__(
    self,
    n_samples: Dict[Split, int],
    suffix_keys_wds: Union[str, Iterable[str]],
    dirs_tars_wds: Dict[Split, str],
    global_batch_size: int,
    prefix_tars_wds: str = "wdshards",
    pipeline_wds: Optional[Dict[Split, Union[Iterable[Iterable[Any]], Iterable[Any]]]] = None,
    pipeline_prebatch_wld: Optional[Dict[Split, Union[Iterable[Iterable[Any]], Iterable[Any]]]] = None,
    kwargs_wds: Optional[Dict[Split, Dict[str, Any]]] = None,
    kwargs_wld: Optional[Dict[Split, Dict[str, Any]]] = None,
):
    """Constructor.

    Args:
        n_samples: input dictionary: Split -> number of data samples for each split
        suffix_keys_wds: a set of keys each
            corresponding to a data object in the webdataset tar file
            dictionary. The data objects of these keys will be extracted and
            tupled for each sample in the tar files
        dirs_tars_wds: input dictionary: Split -> tar file
            directory that contains the webdataset tar files for each split
        global_batch_size: size of batch summing across nodes in Data
            Distributed Parallel, i.e., local_batch_size * n_nodes. NOTE:
            this data module doesn't rely on the input `global_batch_size`
            for batching the samples. The batching is supposed to be done as
            a part of the input `pipeline_prebatch_wld`. `global_batch_size`
            is only used to compute a (pseudo-) epoch length for the data
            loader so that the loader yields approximately n_samples //
            global_batch_size batches
    Kwargs:
        prefix_tars_wds: name prefix of the input webdataset tar
            files. The input tar files are globbed by
            "{dirs_tars_wds[split]}/{prefix_tars_wds}-*.tar"
        pipeline_wds: a dictionary of webdataset composables, i.e.,
            functors that map an iterator to another iterator that
            transforms the data samples yielded from the dataset object, for
            different splits, or an iterable of a sequence of such
            iterators. For example, this can be used to transform the
            sample in the worker before sending it to the main process of
            the dataloader
        pipeline_prebatch_wld: a dictionary
            of webloader composables, i.e., functors that map an iterator to
            another iterator that transforms the data samples yielded from the
            WebLoader object, for different splits, or an iterable of a
            sequence of such iterators. For example, this can be used for
            batching the samples. NOTE: this is applied before the batch is
            yielded from the WebLoader
        kwargs_wds: kwargs for the WebDataset.__init__()
        kwargs_wld : kwargs for the WebLoader.__init__(), e.g., num_workers, of each split
    """
    super().__init__()

    self._dirs_tars_wds = dirs_tars_wds

    keys_subset = self._dirs_tars_wds.keys()

    if n_samples.keys() != keys_subset:
        raise RuntimeError(
            f"Input n_samples has different keys than " f"dirs_tars_wds: {n_samples.keys()} vs " f"{keys_subset}"
        )

    self._n_samples = n_samples

    self._global_batch_size = global_batch_size

    if not isinstance(suffix_keys_wds, get_args(Union[str, Iterable])):
        raise TypeError("suffix_keys_wds can only be str or Iterable[str]")

    self._suffix_keys_wds = suffix_keys_wds

    self._prefix_tars_wds = prefix_tars_wds
    self._pipeline_wds = pipeline_wds
    self._pipeline_prebatch_wld = pipeline_prebatch_wld

    self._kwargs_wld = kwargs_wld

    self._kwargs_wds = kwargs_wds

    # to be created later in setup
    self._dataset = {}

predict_dataloader()

Alias for `test_dataloader()`.

Source code in bionemo/webdatamodule/datamodule.py
def predict_dataloader(self) -> wds.WebLoader:
    """Alias for :func:`test_dataloader`."""
    return self._setup_dataloader(Split.test)

prepare_data()

This is called only by the main process by the Lightning workflow.

Do not rely on updating this data module object's state here, as there is no way to communicate the state update to other subprocesses. This method is a no-op.

Source code in bionemo/webdatamodule/datamodule.py
def prepare_data(self) -> None:
    """This is called only by the main process by the Lightning workflow.

    Do not rely on this data module object's state update here as there is no
    way to communicate the state update to other subprocesses. Is a **no-op**.
    """
    pass

setup(stage)

This is called on all Lightning-managed nodes in a multi-node training session.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `stage` | `str` | "fit", "validate", "test" or "predict" | *required* |
Source code in bionemo/webdatamodule/datamodule.py
def setup(self, stage: str) -> None:
    """This is called on all Lightning-managed nodes in a multi-node training session.

    Args:
        stage: "fit", "validate", "test" or "predict"
    """
    if stage == "fit":
        self._dataset[Split.train] = self._setup_wds(Split.train)
        self._dataset[Split.val] = self._setup_wds(Split.val)
    elif stage == "validate":
        self._dataset[Split.val] = self._setup_wds(Split.val)
    elif stage == "test":
        self._dataset[Split.test] = self._setup_wds(Split.test)
    elif stage == "predict":
        self._dataset[Split.test] = self._setup_wds(Split.test)
    else:
        raise NotImplementedError(f"Data setup with {stage=} is not implemented.")

test_dataloader()

Webdataset for the test data.

Source code in bionemo/webdatamodule/datamodule.py
def test_dataloader(self) -> wds.WebLoader:
    """Webdataset for the test data."""
    return self._setup_dataloader(Split.test)

train_dataloader()

Webdataset for the training data.

Source code in bionemo/webdatamodule/datamodule.py
def train_dataloader(self) -> wds.WebLoader:
    """Webdataset for the training data."""
    return self._setup_dataloader(Split.train)

val_dataloader()

Webdataset for the validation data.

Source code in bionemo/webdatamodule/datamodule.py
def val_dataloader(self) -> wds.WebLoader:
    """Webdataset for the validation data."""
    return self._setup_dataloader(Split.val)