TensorRT Model Optimizer
Installation
Installation for Linux
Installation for Windows