TensorRT Model Optimizer
quant_batchnorm
Quantized batch normalization module.
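Conceptually, a quantized batch-normalization module fake-quantizes its input to an integer grid (quantize-dequantize) before applying standard batch normalization. The sketch below illustrates that idea in plain Python; the `fake_quant` and `quant_batchnorm` helpers are illustrative assumptions for exposition, not the modelopt API, which operates on `torch.nn.BatchNorm` modules with calibrated quantizers.

```python
import math

def fake_quant(values, num_bits=8):
    """Symmetric per-tensor quantize-dequantize to a signed integer grid.
    Illustrative stand-in for an input quantizer; amax is taken dynamically
    from the batch here, whereas a real calibrator tracks statistics."""
    amax = max(abs(v) for v in values) or 1e-8
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for 8-bit
    scale = amax / qmax
    return [max(-qmax, min(qmax, round(v / scale))) * scale for v in values]

def quant_batchnorm(values, gamma=1.0, beta=0.0, eps=1e-5, num_bits=8):
    """Batch normalization applied to a fake-quantized input."""
    q = fake_quant(values, num_bits)
    mean = sum(q) / len(q)
    var = sum((v - mean) ** 2 for v in q) / len(q)
    return [gamma * (v - mean) / math.sqrt(var + eps) + beta for v in q]

out = quant_batchnorm([0.5, -1.0, 2.0, 0.25])
```

Because the input is snapped to the quantization grid before normalization, training (or calibration) with this module lets the network adapt to quantization error, which is the point of inserting quantized variants of standard layers.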