dataset_utils
Utility functions for getting samples and forward loop functions for different datasets.
Functions
- create_forward_loop: Creates and returns a forward loop function configured for a specific model, dataset, and tokenizer.
- get_dataset_dataloader: Get a dataloader with the dataset name and tokenizer of the target model.
- create_forward_loop(model=None, dataset_name='cnn_dailymail', tokenizer=None, batch_size=1, num_samples=512, max_sample_length=512, device=None, include_labels=False, dataloader=None)
Creates and returns a forward loop function configured for a specific model, dataset, and tokenizer.
This function initializes a forward loop function tailored to process batches of data from the specified dataset using the given model and tokenizer. The forward loop function, when called, iterates over the dataset, applies the tokenizer to prepare the input data, feeds it into the model, and returns the model’s predictions.
Parameters:
- model: The PyTorch model for inference.
- dataset_name: The name of the dataset to be used.
- tokenizer: The tokenizer used to preprocess text data into a format suitable for the model.
- batch_size: Batch size of the returned dataloader. If 0 is provided, the batch size is determined automatically.
- num_samples: Number of samples from the dataset.
- max_sample_length: Maximum length of a sample.
- device: Target device for the returned dataloader.
- include_labels: Whether to include labels in the dataloader.
- dataloader: If provided, this dataloader is used instead of building one from dataset_name.
Example usage for quantization:
import modelopt.torch.quantization as mtq

# Initialize model and tokenizer
# ...

# Create forward loop for calibration
forward_loop = create_forward_loop(
    model=model, dataset_name="cnn_dailymail", tokenizer=tokenizer
)

# Quantize the model with the calibration dataset
mtq.quantize(model, quant_cfg, forward_loop=forward_loop)
Returns:
- function: A forward loop function that can be called with no arguments. When called, it iterates over the dataset specified by dataset_name.
- Parameters:
model (Module)
dataset_name (str)
tokenizer (PreTrainedTokenizer | PreTrainedTokenizerFast)
batch_size (int)
num_samples (int)
max_sample_length (int)
device (str | None)
include_labels (bool)
dataloader (DataLoader)
- Return type:
Callable
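The dataloader argument makes it possible to reuse a dataloader built elsewhere, for example one created with get_dataset_dataloader (documented below). The following is a minimal sketch, not part of this page's reference content: the model name "gpt2", the device string, and the import path (inferred from this module's name) are illustrative assumptions.

# Minimal sketch (assumptions: "gpt2" as the model, CUDA available, and the
# import path inferred from this module's name).
from transformers import AutoModelForCausalLM, AutoTokenizer

from modelopt.torch.utils.dataset_utils import (
    create_forward_loop,
    get_dataset_dataloader,
)

model = AutoModelForCausalLM.from_pretrained("gpt2").cuda()
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

# Build the calibration dataloader once so it can be inspected or reused.
calib_loader = get_dataset_dataloader(
    dataset_name="cnn_dailymail",
    tokenizer=tokenizer,
    batch_size=4,
    num_samples=128,
    max_sample_length=512,
    device="cuda",
)

# Per the parameter description above, a provided dataloader is used directly,
# so dataset_name and tokenizer are not needed here.
forward_loop = create_forward_loop(model=model, dataloader=calib_loader)

The resulting forward_loop can then be passed to mtq.quantize exactly as in the quantization example above.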
- get_dataset_dataloader(dataset_name='cnn_dailymail', tokenizer=None, batch_size=1, num_samples=512, max_sample_length=512, device=None, include_labels=False)
Get a dataloader with the dataset name and tokenizer of the target model.
- Parameters:
dataset_name (str) – Name of the dataset to load.
tokenizer (PreTrainedTokenizer | PreTrainedTokenizerFast) – Instance of a Hugging Face tokenizer.
batch_size (int) – Batch size of the returned dataloader.
num_samples (int) – Number of samples from the dataset.
max_sample_length (int) – Maximum length of a sample.
device (str | None) – Target device for the returned dataloader.
include_labels (bool) – Whether to include labels in the dataloader.
- Returns:
An instance of DataLoader.
- Return type:
DataLoader
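For a quick sanity check of what the returned dataloader yields, the sketch below iterates a single batch. It is illustrative only: the "gpt2" tokenizer, the CPU device, and the assumption that each batch is a dictionary of tensors (input_ids, attention_mask, and labels when include_labels=True) are not guaranteed by this page.

# Minimal sketch (assumptions: "gpt2" tokenizer, CPU device, and dict-of-tensor
# batches such as input_ids / attention_mask / labels).
from transformers import AutoTokenizer

from modelopt.torch.utils.dataset_utils import get_dataset_dataloader

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # make padding well defined

dataloader = get_dataset_dataloader(
    dataset_name="cnn_dailymail",
    tokenizer=tokenizer,
    batch_size=2,
    num_samples=8,
    max_sample_length=256,
    device="cpu",
    include_labels=True,  # request a "labels" field for loss computation
)

for batch in dataloader:
    # Print the tensor shapes of the first batch, then stop.
    print({name: tensor.shape for name, tensor in batch.items()})
    break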