Container Device Interface Support in the GPU Operator#
About the Container Device Interface#
The Container Device Interface (CDI) is a specification for container runtimes such as CRI-O, containerd, and Podman that standardizes how those runtimes access complex devices like NVIDIA GPUs. CDI support is provided by the NVIDIA Container Toolkit, and the Operator extends that support to Kubernetes clusters.
Use of CDI is transparent to cluster administrators and application developers. The primary benefit of CDI is that it reduces the development and support burden of runtime-specific plugins.
When CDI is enabled, two runtime classes, nvidia-cdi and nvidia-legacy, become available. These two runtime classes are in addition to the default runtime class, nvidia.
If you do not set CDI as the default, the default runtime class, nvidia, resolves to the legacy mode of the NVIDIA Container Toolkit on x86_64 machines or on any architecture that has NVML libraries installed.
Optionally, you can specify the runtime class for a workload. See Optional: Specifying the Runtime Class for a Pod for an example.
Support for Multi-Instance GPU#
Configuring CDI is supported with Multi-Instance GPU (MIG).
Both the single and mixed strategies are supported.
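For example, with the mixed MIG strategy, a workload can request a MIG device and use the CDI runtime class in the same way as a full GPU. The following manifest is a sketch only: the nvidia.com/mig-1g.5gb resource name is an assumption and depends on the MIG profiles configured on your nodes, and with the single strategy MIG devices are requested as nvidia.com/gpu instead.

apiVersion: v1
kind: Pod
metadata:
  name: cuda-vectoradd-mig
spec:
  restartPolicy: OnFailure
  runtimeClassName: nvidia-cdi
  containers:
  - name: cuda-vectoradd-mig
    image: "nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.7.1-ubuntu20.04"
    # The MIG resource name below is an example; use a profile that exists on your nodes.
    resources:
      limits:
        nvidia.com/mig-1g.5gb: 1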
Limitations and Restrictions#
CDI is not supported on Red Hat OpenShift Container Platform. CDI is supported on all other platforms listed in Supported Operating Systems and Kubernetes Platforms.
Enabling CDI is not supported with Rancher Kubernetes Engine 2 (RKE2).
Enabling CDI During Installation#
Follow the instructions for installing the Operator with Helm on the Installing the NVIDIA GPU Operator page.
When you install the Operator with Helm, specify the --set cdi.enabled=true argument.
Optionally, also specify the --set cdi.default=true argument to use the CDI runtime class by default for all pods.
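For example, a Helm installation that enables CDI and makes it the default runtime mode might look like the following sketch. It assumes that you already added the NVIDIA Helm repository as described on the installation page and that you install into the gpu-operator namespace; adjust the options to match your environment.

$ helm install --wait --generate-name \
    -n gpu-operator --create-namespace \
    nvidia/gpu-operator \
    --set cdi.enabled=true \
    --set cdi.default=true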
Enabling CDI After Installation#
Prerequisites
You installed GPU Operator version 22.3.0 or newer.
(Optional) Confirm that the only runtime class is nvidia by running the following command:

$ kubectl get runtimeclasses

Example Output

NAME     HANDLER   AGE
nvidia   nvidia    47h
Procedure
To enable CDI support, perform the following steps:
Enable CDI by modifying the cluster policy:
$ kubectl patch clusterpolicies.nvidia.com/cluster-policy --type='json' \
    -p='[{"op": "replace", "path": "/spec/cdi/enabled", "value":true}]'
Example Output
clusterpolicy.nvidia.com/cluster-policy patched

(Optional) Set the default container runtime mode to CDI by modifying the cluster policy:
$ kubectl patch clusterpolicies.nvidia.com/cluster-policy --type='json' \
    -p='[{"op": "replace", "path": "/spec/cdi/default", "value":true}]'
Example Output
clusterpolicy.nvidia.com/cluster-policy patched

(Optional) Confirm that the container toolkit and device plugin pods restart:
$ kubectl get pods -n gpu-operator

Example Output
NAME                                                           READY   STATUS      RESTARTS   AGE
gpu-feature-discovery-qnw2q                                    1/1     Running     0          47h
gpu-operator-6d59774ff-hznmr                                   1/1     Running     0          2d
gpu-operator-node-feature-discovery-master-6d6649d597-7l8bj    1/1     Running     0          2d
gpu-operator-node-feature-discovery-worker-v86vj               1/1     Running     0          2d
nvidia-container-toolkit-daemonset-2768s                       1/1     Running     0          2m11s
nvidia-cuda-validator-ls4vc                                    0/1     Completed   0          47h
nvidia-dcgm-exporter-fxp9h                                     1/1     Running     0          47h
nvidia-device-plugin-daemonset-dvp4v                           1/1     Running     0          2m26s
nvidia-device-plugin-validator-kvxbs                           0/1     Completed   0          47h
nvidia-driver-daemonset-m86r7                                  1/1     Running     0          2d
nvidia-operator-validator-xg98r                                1/1     Running     0          47h
Verify that the runtime classes include nvidia-cdi and nvidia-legacy:
$ kubectl get runtimeclasses

Example Output
NAME            HANDLER         AGE
nvidia          nvidia          2d
nvidia-cdi      nvidia-cdi      5m7s
nvidia-legacy   nvidia-legacy   5m7s
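As an additional check, you can read the CDI settings back from the cluster policy. The following command is a sketch that uses a JSONPath expression against the spec.cdi fields shown in the patch commands above; it prints true when CDI is enabled.

$ kubectl get clusterpolicies.nvidia.com/cluster-policy \
    -o jsonpath='{.spec.cdi.enabled}{"\n"}'

Replace enabled with default in the expression to check whether CDI is the default container runtime mode.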
Disabling CDI#
To disable CDI support, perform the following steps:
If your nodes use the CRI-O container runtime, then temporarily disable the GPU Operator validator:
$ kubectl label nodes \
    nvidia.com/gpu.deploy.operator-validator=false \
    -l nvidia.com/gpu.present=true \
    --overwrite
Tip
You can run kubectl get nodes -o wide and view the CONTAINER-RUNTIME column to determine if your nodes use CRI-O.

Disable CDI by modifying the cluster policy:
$ kubectl patch clusterpolicies.nvidia.com/cluster-policy --type='json' \
    -p='[{"op": "replace", "path": "/spec/cdi/enabled", "value":false}]'
Example Output
clusterpolicy.nvidia.com/cluster-policy patched

If you temporarily disabled the GPU Operator validator, re-enable the validator:
$ kubectl label nodes \
    nvidia.com/gpu.deploy.operator-validator=true \
    -l nvidia.com/gpu.present=true \
    --overwrite
(Optional) Verify that the nvidia-cdi and nvidia-legacy runtime classes are no longer available:

$ kubectl get runtimeclass

Example Output

NAME     HANDLER   AGE
nvidia   nvidia    11d
Optional: Specifying the Runtime Class for a Pod#
If you enabled CDI mode for the default container runtime, then no action is required to use CDI. However, if a workload experiences problems, you can use the following procedure to specify the legacy mode for that workload.
If you did not enable CDI mode for the default container runtime, you can use the following procedure to verify that CDI is enabled and, as a routine practice, to run workloads with the CDI mode of the container runtime.
Create a file, such as cuda-vectoradd-cdi.yaml, with contents like the following example:

apiVersion: v1
kind: Pod
metadata:
  name: cuda-vectoradd
spec:
  restartPolicy: OnFailure
  runtimeClassName: nvidia-cdi
  containers:
  - name: cuda-vectoradd
    image: "nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.7.1-ubuntu20.04"
    resources:
      limits:
        nvidia.com/gpu: 1
As an alternative, specify nvidia-legacy as the runtime class name to use the legacy mode of the container runtime.

(Optional) Create a temporary namespace:
$ kubectl create ns demo

Example Output

namespace/demo created

Start the pod:
$ kubectl apply -n demo -f cuda-vectoradd-cdi.yaml

Example Output

pod/cuda-vectoradd created

View the logs from the pod:
$ kubectl logs -n demo cuda-vectoradd

Example Output

[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory
Test PASSED
Done
Delete the temporary namespace:
$ kubectl delete ns demo

Example Output

namespace "demo" deleted