# Container images
## Using CUDA images

### Description

CUDA images come in three flavors and are available through the NVIDIA public hub repository.

- `base`: starting from CUDA 9.0, contains the bare minimum (libcudart) to deploy a prebuilt CUDA application. Use this image if you want to manually select which CUDA packages to install.
- `runtime`: extends the `base` image by adding all the shared libraries from the CUDA toolkit. Use this image if you have a prebuilt application using multiple CUDA libraries.
- `devel`: extends the `runtime` image by adding the compiler toolchain, the debugging tools, the headers, and the static libraries. Use this image to compile a CUDA application from source.
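The `devel`/`runtime` split pairs naturally with a multi-stage build: compile in the larger `devel` image, then copy the binary into the smaller `runtime` image. A sketch, where the source file name and program are hypothetical:

```dockerfile
# Build stage: nvcc, headers, and static libraries are only in the devel image
FROM nvidia/cuda:9.0-devel AS build
WORKDIR /src
COPY vectoradd.cu .
RUN nvcc -O2 -o vectoradd vectoradd.cu

# Deploy stage: the runtime image carries just the shared CUDA libraries
FROM nvidia/cuda:9.0-runtime
COPY --from=build /src/vectoradd /usr/local/bin/vectoradd
CMD ["vectoradd"]
```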
### Requirements

Running a CUDA container requires a machine with at least one CUDA-capable GPU and a driver compatible with the CUDA toolkit version you are using. The machine running the CUDA container only requires the NVIDIA driver; the CUDA toolkit doesn't have to be installed.

NVIDIA drivers are backward-compatible with CUDA toolkit versions:
| CUDA toolkit version | Driver version | GPU architecture |
|---|---|---|
| 6.5 | >= 340.29 | >= 2.0 (Fermi) |
| 7.0 | >= 346.46 | >= 2.0 (Fermi) |
| 7.5 | >= 352.39 | >= 2.0 (Fermi) |
| 8.0 | == 361.93 or >= 375.51 | == 6.0 (P100) |
| 8.0 | >= 367.48 | >= 2.0 (Fermi) |
| 9.0 | >= 384.81 | >= 3.0 (Kepler) |
| 9.1 | >= 387.26 | >= 3.0 (Kepler) |
| 9.2 | >= 396.26 | >= 3.0 (Kepler) |
| 10.0 | >= 384.111, < 385.00 | |
| 10.0 | >= 410.48 | >= 3.0 (Kepler) |
| 10.1 | >= 384.111, < 385.00 | |
| 10.1 | >= 410.72, < 411.00 | |
| 10.1 | >= 418.39 | >= 3.0 (Kepler) |
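The ">=" checks in the table are plain version comparisons, which `sort -V` can perform. A small sketch of a compatibility check (the function name and version numbers are illustrative; on a real machine the installed version could come from `nvidia-smi --query-gpu=driver_version --format=csv,noheader`):

```shell
# driver_ok MIN INSTALLED: succeed if INSTALLED >= MIN (version-aware compare)
driver_ok() {
    min="$1"; installed="$2"
    # After a version sort, MIN comes first exactly when INSTALLED >= MIN
    [ "$(printf '%s\n%s\n' "$min" "$installed" | sort -V | head -n1)" = "$min" ]
}

# Example: CUDA 10.1 generally requires a driver >= 418.39
driver_ok 418.39 430.50 && echo compatible || echo incompatible  # → compatible
```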
### Examples

```shell
# Running an interactive CUDA session isolating the first GPU
docker run -ti --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0 nvidia/cuda

# Querying the CUDA 7.5 compiler version
docker run --rm --runtime=nvidia nvidia/cuda:7.5-devel nvcc --version
```
### Tags available

Container images are available through the NVIDIA Docker Hub repository. The Dockerfiles are available on GitLab.
Current supported distributions include:

- Ubuntu 14.04 LTS [DEPRECATED]
- Ubuntu 16.04 LTS
- Ubuntu 18.04 LTS
- CentOS 6
- CentOS 7
For more information about a specific image, please refer to its section:

- Driver (Experimental)
- NVIDIA Caffe (Deprecated, see using NGC images)
- DIGITS (Deprecated, see using NGC images)
## Using NGC images

From the official product page:

> Featuring a comprehensive catalog of containers, including NVIDIA optimized deep learning frameworks, third-party managed HPC applications, and NVIDIA HPC visualization tools.

Optimized container images for deep learning are available on the NVIDIA container registry: https://www.nvidia.com/en-us/gpu-cloud/deep-learning-containers/

For any NGC-related issue, please use the DevTalk forums.
## Using non-CUDA images

Setting `--gpus all` will enable GPU support for any container image:

```shell
docker run --gpus all --rm debian:stretch nvidia-smi
```
## Writing Dockerfiles

If the environment variables are set inside the Dockerfile, you don't need to set them on the `docker run` command line.

For instance, if you are creating your own custom CUDA container, you should use the following:

```dockerfile
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,utility
```
These environment variables are already set in our official images pushed to Docker Hub.
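If you start from a non-CUDA base image instead, those variables must be declared yourself for the NVIDIA runtime to inject the GPU devices and driver libraries. A minimal sketch (the base image and command are placeholders):

```dockerfile
FROM ubuntu:18.04
# Without these variables, the NVIDIA runtime exposes no GPUs to the container
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,utility
CMD ["nvidia-smi"]
```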
For a Dockerfile using the NVIDIA Video Codec SDK, you should use:

```dockerfile
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,video,utility
```