API Introduction
The LLM API is a high-level Python API designed for LLM workflows. This API is under development and might have breaking changes in the future.
Supported Models
Llama (including variants Mistral, Mixtral, InternLM)
GPT (including variants Starcoder-1/2, Santacoder)
Gemma-1/2
Phi-1/2/3
ChatGLM (including variants glm-10b, chatglm, chatglm2, chatglm3, glm4)
QWen-1/1.5/2
Falcon
Baichuan-1/2
GPT-J
Mamba-1/2
Model Preparation
The LLM class supports input from any of the following:
Hugging Face Hub: Triggers a download from the Hugging Face model hub, such as TinyLlama/TinyLlama-1.1B-Chat-v1.0.
Local Hugging Face models: Uses a locally stored Hugging Face model.
Local TensorRT-LLM engine: Built by the trtllm-build tool or saved by the Python LLM API.
You can use any of these formats interchangeably with the LLM(model=<any-model-path>) constructor.
The following sections describe how to use these different formats for the LLM API.
Hugging Face Hub
Using the Hugging Face Hub is as simple as specifying the repo name in the LLM constructor:
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
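For context, here is a minimal end-to-end sketch built around that constructor call. The import path, SamplingParams fields, and output attributes follow the quick-start example shipped with TensorRT-LLM; the prompts and sampling values are illustrative and may need adjusting for your version and workload.
from tensorrt_llm import LLM, SamplingParams

# Downloads the model from the Hugging Face Hub on first use.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Sampling settings are illustrative; tune them for your use case.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

prompts = ["Hello, my name is", "The capital of France is"]
for output in llm.generate(prompts, sampling_params):
    print(f"Prompt: {output.prompt!r}, Generated: {output.outputs[0].text!r}")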
Local Hugging Face Models
Given the popularity of the Hugging Face model hub, the API supports the Hugging Face format as one of the starting points. To use the API with Llama 3.1 models, download the model from the Meta Llama 3.1 8B model page by using the following command:
git lfs install
git clone https://huggingface.co/meta-llama/Meta-Llama-3.1-8B
After the model download is complete, you can load the model:
llm = LLM(model=<path_to_meta_llama_from_hf>)
Using this model is subject to a particular license. Agree to the terms and authenticate with Hugging Face to begin the download.
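As a quick check that the local checkpoint loads, a minimal sketch is shown below. The relative path is a placeholder assuming you cloned the repository into the current directory, and the generation call relies on default sampling settings.
from tensorrt_llm import LLM

# Placeholder path: the directory created by the `git clone` command above.
llm = LLM(model="./Meta-Llama-3.1-8B")

# Run one short generation with default sampling to confirm the setup.
for output in llm.generate(["The capital of France is"]):
    print(output.outputs[0].text)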
Local TensorRT-LLM Engine
The LLM API can use a TensorRT-LLM engine. There are two ways to build a TensorRT-LLM engine:
You can build the TensorRT-LLM engine from the Hugging Face model directly with the trtllm-build tool and then save the engine to disk for later use. Refer to the README in the examples/llama directory of the GitHub repository. After the engine build is finished, you can load the model:
llm = LLM(model=<path_to_trt_engine>)
Alternatively, you can use an LLM instance to create the engine and persist it to local disk:
llm = LLM(<model-path>)
# Save engine to local disk
llm.save(<engine-dir>)
The engine can be loaded using the model argument, as shown in the first approach.
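Putting the second approach together, the following is a hedged sketch of the save-then-reload round trip; the model name and engine directory are placeholders.
from tensorrt_llm import LLM

# Build an engine from a Hugging Face model and persist it to disk.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
llm.save("./tinyllama_1.1b_engine")

# Later, construct the LLM directly from the saved engine directory.
llm = LLM(model="./tinyllama_1.1b_engine")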
Tips and Troubleshooting
The following tips typically assist new LLM API users who are familiar with other APIs that are part of TensorRT-LLM:
RuntimeError: only rank 0 can start multi-node session, got 1
There is no need to add an mpirun prefix for launching single-node multi-GPU inference with the LLM API. For example, you can run python llm_inference_distributed.py to perform multi-GPU inference on a single node.
Hang issue on Slurm Node
If you experience a hang or other issue on a node managed with Slurm, add the prefix mpirun -n 1 --oversubscribe --allow-run-as-root to your launch script. For example, try mpirun -n 1 --oversubscribe --allow-run-as-root python llm_inference_distributed.py.
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD with errorcode 1.
Because the LLM API relies on the mpi4py library, put the LLM class in a function and protect the main entry point of the program under the __main__ namespace to avoid a recursive spawn process in mpi4py, as sketched below. This limitation applies to multi-GPU inference only.
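The sketch below shows one way to structure such a script. The function name and model are placeholders, and the tensor_parallel_size value is illustrative (adjust it to your GPU count); the essential points are that the LLM work lives inside a function and the entry point is guarded by if __name__ == "__main__".
from tensorrt_llm import LLM, SamplingParams


def main():
    # Keep LLM construction inside a function so mpi4py worker processes
    # do not re-execute it when they import this module.
    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
              tensor_parallel_size=2)
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
    for output in llm.generate(["Hello, my name is"], sampling_params):
        print(output.outputs[0].text)


if __name__ == "__main__":
    # Protecting the entry point avoids a recursive spawn when mpi4py
    # launches additional ranks for multi-GPU inference.
    main()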