Deploying TRTorch Programs

After compiling and saving TRTorch programs there is no longer a strict dependency on the full TRTorch library. All that is required to run a compiled program is the runtime. There are therefore a couple of options to deploy your programs other than shipping the full TRTorch compiler with your applications.

TRTorch Package

Once a program is compiled, you run it using the standard PyTorch APIs. All that is required is that the trtorch package be imported in Python or linked against in C++.
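For instance, running a compiled program from Python is just a torch.jit.load followed by a normal call. This is a minimal sketch; the path "trt_ts_module.ts" is a hypothetical file assumed to have been produced earlier by trtorch.compile and torch.jit.save, and a CUDA device is assumed to be available:

```python
import torch

def load_and_run(path, example_input):
    """Load a previously compiled TRTorch program (a TorchScript file)
    and execute it through the standard PyTorch API. No trtorch import
    is needed here, as long as the TRTorch runtime is available in the
    process."""
    trt_module = torch.jit.load(path)  # deserializes the compiled program
    return trt_module(example_input)

# Hypothetical usage (requires a CUDA device and a previously saved program):
# result = load_and_run("trt_ts_module.ts", torch.randn(1, 3, 224, 224).cuda())
```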

Runtime Library

Distributed with the C++ distribution is libtrtorchrt.so. This library contains only the components necessary to run TRTorch programs. Instead of linking or importing the full trtorch library, you can link libtrtorchrt.so into your deployment programs, or load it with DL_OPEN or LD_PRELOAD. In Python you can load the runtime with torch.ops.load_library("libtrtorchrt.so"). You can then continue to use programs just as you otherwise would via the PyTorch API.
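As a sketch of the Python path, the runtime library only needs to be registered with PyTorch once before any compiled programs are loaded; the install location used below is an assumption and should be adjusted to wherever the library actually lives on your system:

```python
import torch

# Hypothetical install location for the runtime library shipped with the
# C++ distribution; adjust to where libtrtorchrt.so is on your system.
RUNTIME_LIB = "/usr/local/lib/libtrtorchrt.so"

def load_trtorch_runtime(lib_path=RUNTIME_LIB):
    """Register the TRTorch runtime ops with PyTorch so compiled
    programs can be deserialized without importing trtorch itself."""
    torch.ops.load_library(lib_path)
```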


If you are using the standard distribution of PyTorch in Python on x86, you will likely need the pre-cxx11-abi variant of libtrtorchrt.so; check the Installation documentation for more details.


If you are linking libtrtorchrt.so, the following flags will likely help: -Wl,--no-as-needed -ltrtorchrt -Wl,--as-needed, as there is no direct symbol dependency on anything in the TRTorch runtime for most TRTorch runtime applications.

An example of how to use libtrtorchrt.so can be found in the examples in the TRTorch repository.

Plugin Library

If you use TRTorch as a converter to a TensorRT engine and your engine uses plugins provided by TRTorch, TRTorch ships a plugin library containing the implementations of the TensorRT plugins used during compilation. This library can be loaded with DL_OPEN or LD_PRELOAD similarly to other TensorRT plugin libraries.
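A minimal sketch of the DL_OPEN-style loading from Python is below, using the standard-library ctypes module with RTLD_GLOBAL so the plugin creators are visible to TensorRT during engine deserialization; the library path passed in is a placeholder for wherever the plugin library is installed:

```python
import ctypes

def load_trtorch_plugins(lib_path):
    """Load the TRTorch plugin library with global symbol visibility so
    TensorRT can resolve the plugin creators when deserializing an
    engine built through TRTorch (the equivalent of DL_OPEN with
    RTLD_GLOBAL). Call this before deserializing the engine."""
    return ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)

# Hypothetical usage before engine deserialization:
# load_trtorch_plugins("/path/to/the/trtorch/plugin/library.so")
```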