Installing the NVIDIA Container Toolkit
Installation
Prerequisites
- Read this section about platform support. 
- Install the NVIDIA GPU driver for your Linux distribution. NVIDIA recommends installing the driver by using the package manager for your distribution. For information about installing the driver with a package manager, refer to the NVIDIA Driver Installation Quickstart Guide. Alternatively, you can install the driver by downloading a .run installer.
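Before continuing, you may want to confirm that the driver is loaded and working. A common check is to run nvidia-smi on the host; this is an optional sanity check and assumes the driver installation completed without errors.

```console
# Optional: the driver is working if nvidia-smi lists your GPU(s),
# the driver version, and the supported CUDA version without errors.
$ nvidia-smi
```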
With apt: Ubuntu, Debian
Note
These instructions should work for any Debian-derived distribution.
- Configure the production repository:

  ```console
  $ curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
    && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
      sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
      sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
  ```

  Optionally, configure the repository to use experimental packages:

  ```console
  $ sudo sed -i -e '/experimental/ s/^#//g' /etc/apt/sources.list.d/nvidia-container-toolkit.list
  ```
- Update the package list from the repository:

  ```console
  $ sudo apt-get update
  ```
- Install the NVIDIA Container Toolkit packages:

  ```console
  $ export NVIDIA_CONTAINER_TOOLKIT_VERSION=1.17.8-1
  $ sudo apt-get install -y \
      nvidia-container-toolkit=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
      nvidia-container-toolkit-base=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
      libnvidia-container-tools=${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
      libnvidia-container1=${NVIDIA_CONTAINER_TOOLKIT_VERSION}
  ```
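After installing, you can optionally confirm that the toolkit CLI is on your PATH and check which package version apt selected. This is a quick sanity check rather than part of the official procedure.

```console
# Optional: verify the toolkit CLI is installed and inspect the pinned package version.
$ nvidia-ctk --version
$ apt-cache policy nvidia-container-toolkit
```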
With dnf: RHEL/CentOS, Fedora, Amazon Linux
Note
These instructions should work for many RPM-based distributions.
- Configure the production repository:

  ```console
  $ curl -s -L https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo | \
      sudo tee /etc/yum.repos.d/nvidia-container-toolkit.repo
  ```

  Optionally, configure the repository to use experimental packages:

  ```console
  $ sudo dnf config-manager --enable nvidia-container-toolkit-experimental
  ```
- Install the NVIDIA Container Toolkit packages:

  ```console
  $ export NVIDIA_CONTAINER_TOOLKIT_VERSION=1.17.8-1
  $ sudo dnf install -y \
      nvidia-container-toolkit-${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
      nvidia-container-toolkit-base-${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
      libnvidia-container-tools-${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
      libnvidia-container1-${NVIDIA_CONTAINER_TOOLKIT_VERSION}
  ```
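If you want to see which toolkit versions the repository publishes before pinning NVIDIA_CONTAINER_TOOLKIT_VERSION, you can list the candidates with a standard dnf query. This is an optional check, not part of the official procedure.

```console
# Optional: list the toolkit versions available from the configured repository.
$ dnf list available nvidia-container-toolkit --showduplicates
```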
With zypper: OpenSUSE, SLE
- Configure the production repository:

  ```console
  $ sudo zypper ar https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo
  ```

  Optionally, configure the repository to use experimental packages:

  ```console
  $ sudo zypper modifyrepo --enable nvidia-container-toolkit-experimental
  ```
- Install the NVIDIA Container Toolkit packages:

  ```console
  $ export NVIDIA_CONTAINER_TOOLKIT_VERSION=1.17.8-1
  $ sudo zypper --gpg-auto-import-keys install -y \
      nvidia-container-toolkit-${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
      nvidia-container-toolkit-base-${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
      libnvidia-container-tools-${NVIDIA_CONTAINER_TOOLKIT_VERSION} \
      libnvidia-container1-${NVIDIA_CONTAINER_TOOLKIT_VERSION}
  ```
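As an optional check after installation, you can confirm the installed package versions with rpm, which behaves the same on SUSE as on other RPM-based distributions.

```console
# Optional: verify the installed toolkit packages and their versions.
$ rpm -q nvidia-container-toolkit nvidia-container-toolkit-base
```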
Configuration
Prerequisites
- You installed a supported container engine (Docker, Containerd, CRI-O, Podman). 
- You installed the NVIDIA Container Toolkit. 
Configuring Docker
- Configure the container runtime by using the `nvidia-ctk` command:

  ```console
  $ sudo nvidia-ctk runtime configure --runtime=docker
  ```

  The `nvidia-ctk` command modifies the `/etc/docker/daemon.json` file on the host. The file is updated so that Docker can use the NVIDIA Container Runtime.
- Restart the Docker daemon:

  ```console
  $ sudo systemctl restart docker
  ```
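After Docker restarts, you can optionally verify the end-to-end setup by running a test container that executes nvidia-smi. The sketch below assumes the ubuntu image can be pulled and that at least one GPU is visible to the host driver.

```console
# Optional end-to-end check: the container should print the same GPU table
# as running nvidia-smi directly on the host.
$ sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
```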
Rootless mode
To configure the container runtime for Docker running in Rootless mode, follow these steps:
- Configure the container runtime by using the `nvidia-ctk` command:

  ```console
  $ nvidia-ctk runtime configure --runtime=docker --config=$HOME/.config/docker/daemon.json
  ```
- Restart the Rootless Docker daemon:

  ```console
  $ systemctl --user restart docker
  ```
- Configure `/etc/nvidia-container-runtime/config.toml` by using the `sudo nvidia-ctk` command:

  ```console
  $ sudo nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place
  ```
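The --set nvidia-container-cli.no-cgroups option is expected to write a no-cgroups entry under the [nvidia-container-cli] section of the config file. If you want to confirm the change took effect, a simple grep works; the exact key layout is an assumption about the generated TOML, so adjust the pattern if your file differs.

```console
# Optional: confirm the cgroups setting was written for rootless operation.
# The expected entry is "no-cgroups = true" in the [nvidia-container-cli] section.
$ grep -i no-cgroups /etc/nvidia-container-runtime/config.toml
```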
Configuring containerd (for Kubernetes)
- Configure the container runtime by using the `nvidia-ctk` command:

  ```console
  $ sudo nvidia-ctk runtime configure --runtime=containerd
  ```

  The `nvidia-ctk` command modifies the `/etc/containerd/config.toml` file on the host. The file is updated so that containerd can use the NVIDIA Container Runtime.
- Restart containerd:

  ```console
  $ sudo systemctl restart containerd
  ```
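To confirm that an nvidia runtime entry was added, you can inspect the generated configuration. The exact TOML layout depends on your containerd version, so treat the grep below as an illustrative check rather than an exact match.

```console
# Optional: look for the nvidia runtime entry in the containerd CRI configuration.
$ sudo grep -n nvidia /etc/containerd/config.toml
```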
Configuring containerd (for nerdctl)
No additional configuration is needed.
You can simply run `nerdctl run --gpus=all`, with or without root.
You do not need to run the `nvidia-ctk runtime configure` command described above for Kubernetes.
See also the nerdctl documentation.
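For example, the following runs a throwaway GPU container with nerdctl; it assumes the ubuntu image is available and mirrors the Docker check shown earlier.

```console
# Example: run a GPU container with nerdctl (rootful shown; rootless also works).
$ sudo nerdctl run --rm --gpus all ubuntu nvidia-smi
```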
Configuring CRI-O
- Configure the container runtime by using the `nvidia-ctk` command:

  ```console
  $ sudo nvidia-ctk runtime configure --runtime=crio
  ```

  The `nvidia-ctk` command modifies the `/etc/crio/crio.conf` file on the host. The file is updated so that CRI-O can use the NVIDIA Container Runtime.
- Restart the CRI-O daemon:

  ```console
  $ sudo systemctl restart crio
  ```
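As an optional check, you can confirm that an nvidia runtime handler is now present in the CRI-O configuration before scheduling GPU workloads. The path and table name below follow the file named in this guide; some installations use drop-in files under /etc/crio/crio.conf.d/ instead, so treat this as an assumption to verify on your system.

```console
# Optional: look for the nvidia runtime handler registered with CRI-O.
$ sudo grep -A 3 'crio.runtime.runtimes.nvidia' /etc/crio/crio.conf
```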
Configuring Podman
For Podman, NVIDIA recommends using CDI for accessing NVIDIA devices in containers.
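A minimal sketch of the CDI workflow, assuming a toolkit release that includes the nvidia-ctk cdi subcommands, is to generate a device specification on the host and then request the devices by name with Podman:

```console
# Generate a CDI specification describing the GPUs on this host.
$ sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# List the device names defined by the generated specification.
$ nvidia-ctk cdi list

# Run a container with all GPUs requested through CDI (assumes the ubuntu image).
$ podman run --rm --device nvidia.com/gpu=all --security-opt=label=disable ubuntu nvidia-smi
```

Regenerate the CDI specification whenever you upgrade the driver or change the set of GPUs in the machine.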