Installing on Linux
Install TensorRT-LLM (tested on Ubuntu 22.04).
sudo apt-get -y install libopenmpi-dev && pip3 install tensorrt_llm
Sanity check the installation by running the following in Python (tested on Python 3.10):
from tensorrt_llm import LLM, SamplingParams


def main():
    prompts = [
        "Hello, my name is",
        "The president of the United States is",
        "The capital of France is",
        "The future of AI is",
    ]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

    outputs = llm.generate(prompts, sampling_params)

    # Print the outputs.
    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")


# The entry point of the program needs to be protected for spawning processes.
if __name__ == '__main__':
    main()
Known limitations
There are some known limitations when you pip install the pre-built TensorRT-LLM wheel package.
C++11 ABI
The pre-built TensorRT-LLM wheel is linked against the public PyTorch hosted on PyPI, which is built with the C++11 ABI turned off. The NVIDIA-optimized PyTorch inside the NGC container nvcr.io/nvidia/pytorch:xx.xx-py3, however, is built with the C++11 ABI turned on; see the NGC PyTorch container page. We therefore recommend building from source when working inside the NGC PyTorch container rather than installing the pre-built wheel. Build-from-source instructions can be found in Build from Source Code on Linux.
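If you are unsure which ABI your installed PyTorch uses, a quick check such as the following can help you decide between the pre-built wheel and a source build. This is a minimal sketch relying on PyTorch's compiled_with_cxx11_abi() helper, not an official installation step:

# Prints True if PyTorch was built with the C++11 ABI (typical of the NGC
# container); prefer building TensorRT-LLM from source in that case.
# Prints False for the PyPI PyTorch wheel; the pre-built TensorRT-LLM wheel
# should then match.
python3 -c "import torch; print(torch.compiled_with_cxx11_abi())"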
MPI in the Slurm environment
If you encounter the following error while running TensorRT-LLM in a Slurm-managed cluster, you need to reconfigure the MPI installation to work with Slurm. The setup method depends on your Slurm configuration; please check with your cluster administrator. This is not a TensorRT-LLM-specific problem, but a general MPI + Slurm issue.
The application appears to have been direct launched using "srun", but OMPI was not built with SLURM support. This usually happens when OMPI was not configured --with-slurm and we weren't able to discover a SLURM installation in the usual places.
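One way to check whether your Open MPI build was configured with Slurm support is to inspect the components reported by ompi_info. This is only a heuristic sketch, not an official diagnostic:

# If this prints no Slurm-related components, Open MPI was likely built
# without --with-slurm and needs to be reconfigured or rebuilt.
ompi_info | grep -i slurm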
CUDA Toolkit
Running pip install tensorrt-llm does not install the CUDA Toolkit on your system, and the CUDA Toolkit is not required if you only want to deploy a TensorRT-LLM engine. TensorRT-LLM uses ModelOpt to quantize a model, and ModelOpt requires the CUDA Toolkit to JIT-compile certain kernels that are not included in PyTorch in order to run quantization effectively. Please install the CUDA Toolkit when you see the following message while running ModelOpt quantization:

/usr/local/lib/python3.10/dist-packages/modelopt/torch/utils/cpp_extension.py:65: UserWarning: CUDA_HOME environment variable is not set. Please set it to your CUDA install root. Unable to load extension modelopt_cuda_ext and falling back to CPU version.
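Before running ModelOpt quantization, you can quickly confirm that a CUDA Toolkit is visible to PyTorch's extension builder. This is a minimal, unofficial check based on the torch.utils.cpp_extension.CUDA_HOME constant:

# Prints the CUDA install root that will be used to JIT-compile kernels,
# or None if no CUDA Toolkit is found (install one and/or set CUDA_HOME).
python3 -c "from torch.utils.cpp_extension import CUDA_HOME; print(CUDA_HOME)"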
Instructions for installing the CUDA Toolkit can be found in the CUDA Toolkit Documentation.