Installation from Source

In most cases, you should not need to build CUDA Quantum from source. For the best experience, we recommend using a container runtime to avoid conflicts with other software tools installed on the system. Note that Singularity or Docker rootless mode address common issues or concerns that are often the motivation for avoiding the use of containers. Singularity, for example, can be installed in a user folder and its installation does not require admin permissions; see this section for more detailed instructions on how to do that. Our installation guide also contains instructions for how to connect an IDE to a running container.

If you do not want to use a container runtime, we also provide pre-built binaries for using CUDA Quantum with C++. These binaries are built following the instructions in this guide and should work for you as long as your system meets the compatibility requirements listed under Prerequisites. To install them, please follow the instructions here.

If you still want to build and install CUDA Quantum from source, you will need to ensure that all dependencies installed on the build and host systems are compatible with your CUDA Quantum installation. The rest of this guide outlines specific compatibility requirements during the build and after installation, and walks through the installation steps.

Note

The build described in this guide does not include Python support for CUDA Quantum. For more information about using CUDA Quantum from Python, please take a look at this page.

CUDA Quantum contains some components that are only included as pre-built binaries and not part of our open source repository. We are working on either open-sourcing these components or making them available as separate downloads in the future. Even without these components, almost all features of CUDA Quantum will be enabled in a source build, though some pieces may be less performant. At this time, the multi-GPU state vector simulator backend will not be included if you build CUDA Quantum from source.

Prerequisites

The following prerequisites need to be satisfied both on the build system and on the host system, that is, the system where the built CUDA Quantum binaries will be installed and used.

  • Linux operating system. The instructions in this guide have been validated with the AlmaLinux 8 image that serves as the base image for the manylinux_2_28 image, and should work for the operating systems CentOS 8, Debian 11 and 12, Fedora 38, OpenSUSE/SLED/SLES 15.5, RHEL 8 and 9, Rocky 8 and 9, and Ubuntu 22.04. Other operating systems may work, but have not been tested.

  • Bash shell. The CUDA Quantum build, install and run scripts expect to use /bin/bash.

  • GNU C library. Make sure that the version on the host system is the same as or newer than the version on the build system. Our own builds use version 2.28. A minimal sketch for checking the installed versions is included after this list.

  • CPU with either x86-64 (x86-64-v3 architecture and newer) or ARM64 architecture. Other architectures may work but are not tested and may require adjustments to the build instructions.

  • Needed only on the host system: NVIDIA GPU with Volta, Turing, Ampere, Ada, or Hopper architecture and Compute Capability 7+. Make sure you have the latest drivers installed for your GPU, and double check that the driver version listed by the nvidia-smi command is 470.57.02 or newer.
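
The following commands are a minimal sketch for checking these requirements; run the GNU C library check on both the build and the host system, and the driver check on the host system only.

# GNU C library version (build and host system):
ldd --version | head -n 1

# NVIDIA driver version (host system only):
nvidia-smi --query-gpu=driver_version --format=csv,noheader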

We strongly recommend using a virtual environment for the build that includes only the tools and dependencies listed in this guide. If you have additional software installed, you will need to make sure that the build is linking against the correct libraries and versions.

Build Dependencies

In addition to the prerequisites listed above, you will need to install the following dependencies in your build environment prior to proceeding with the build as described in the subsequent sections:

  • GNU C library: We currently statically link dependencies, in some cases including the standard libraries. We may revise that in the future. To use the current build configuration, please make sure you have the static version of the GNU C library, including the POSIX Threads library, installed on your system. The necessary package(s) can usually be obtained via package manager for your distribution.

  • Python version 3.8 or newer: The Python interpreter is required (only) for some of the LLVM build scripts, and the Python version used for the build does not have to match the version on the host system.

  • Common tools: wget, git, unzip. The commands in the rest of this guide assume that these tools are present on the build system, but they can be replaced by other alternatives (for example, manually going to a web page and downloading a file or folder).

The above prerequisites are no longer needed once CUDA Quantum is built and do not need to be present on the host system.
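
Within the tested AlmaLinux 8 environment, for example, these build dependencies can be installed as sketched below. The package names are assumptions for that distribution and may differ elsewhere; depending on your configuration, the static GNU C library may require enabling an additional repository.

# Package names are assumptions for AlmaLinux 8; add --enablerepo=powertools
# if glibc-static is not available in the default repositories.
dnf install -y --nobest --setopt=install_weak_deps=False \
    python3 wget git unzip glibc-static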

Note

The CUDA Quantum build scripts and the commands listed in the rest of this document assume you are using bash as the shell for your build.

In addition to installing the needed build dependencies listed above, make sure to set the following environment variables prior to proceeding:

export CUDAQ_INSTALL_PREFIX=/usr/local/cudaq
export CUQUANTUM_INSTALL_PREFIX=/usr/local/cuquantum
export CUTENSOR_INSTALL_PREFIX=/usr/local/cutensor
export LLVM_INSTALL_PREFIX=/usr/local/llvm
export BLAS_INSTALL_PREFIX=/usr/local/blas
export ZLIB_INSTALL_PREFIX=/usr/local/zlib
export OPENSSL_INSTALL_PREFIX=/usr/local/openssl
export CURL_INSTALL_PREFIX=/usr/local/curl

These environment variables must be set during the build. Their value can be chosen freely, but the paths specified during the build are also where the corresponding libraries will be installed on the host system. We are working on making this more flexible in the future.

Note

If you deviate from the instructions below for installing one of the dependencies and instead install it, for example, via a package manager, you will need to make sure that the installation path matches the path you set for the corresponding environment variable(s).

Please do not set LLVM_INSTALL_PREFIX to an existing directory; to avoid compatibility issues, it is important to build the LLVM/MLIR dependencies from source with the same compiler that is later used to build CUDA Quantum itself.

CUDA

Building CUDA Quantum requires a full installation of the CUDA toolkit. The instructions are tested using version 11.8, but any CUDA 11 or 12 version should work, as long as the installed driver on both the build and the host system supports that CUDA version. We recommend using the latest CUDA version that is supported by your driver.

Download a suitable CUDA version following the installation guide for your platform in the online documentation.

Within the tested AlmaLinux 8 environment, for example, the following commands install CUDA 11.8:

CUDA_VERSION=11.8
CUDA_DOWNLOAD_URL=https://developer.download.nvidia.com/compute/cuda/repos
# Go to the url above, set the variables below to a suitable distribution
# and subfolder for your platform, and uncomment the line below.
# DISTRIBUTION=rhel8 CUDA_ARCH_FOLDER=x86_64

dnf config-manager --add-repo "${CUDA_DOWNLOAD_URL}/${DISTRIBUTION}/${CUDA_ARCH_FOLDER}/cuda-${DISTRIBUTION}.repo"
dnf install -y --nobest --setopt=install_weak_deps=False \
    cuda-toolkit-$(echo ${CUDA_VERSION} | tr . -)
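
To confirm that the toolkit was installed successfully, you can query the version of the CUDA compiler; the path below assumes the default installation location, which is also used for CUDACXX later in this guide.

/usr/local/cuda/bin/nvcc --version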

cuQuantum

Each version of CUDA Quantum is compatible only with a specific cuQuantum version. As of CUDA Quantum 0.6, this is version 23.10. Newer versions of cuQuantum (if they exist) might be compatible but have not been tested.

Make sure the environment variable CUDA_ARCH_FOLDER is set to either x86_64 or sbsa (for ARM64) depending on your processor architecture, and CUDA_VERSION is set to the installed CUDA version. Install cuQuantum version 23.10 using the following commands:

CUQUANTUM_VERSION=23.10.0.6
CUQUANTUM_DOWNLOAD_URL=https://developer.download.nvidia.com/compute/cuquantum/redist/cuquantum

cuquantum_archive=cuquantum-linux-${CUDA_ARCH_FOLDER}-${CUQUANTUM_VERSION}_cuda$(echo ${CUDA_VERSION} | cut -d . -f1)-archive.tar.xz
wget "${CUQUANTUM_DOWNLOAD_URL}/linux-${CUDA_ARCH_FOLDER}/${cuquantum_archive}"
mkdir -p "${CUQUANTUM_INSTALL_PREFIX}" 
tar xf "${cuquantum_archive}" --strip-components 1 -C "${CUQUANTUM_INSTALL_PREFIX}" 
rm -rf "${cuquantum_archive}"
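
As a quick sanity check, the extracted cuQuantum libraries (custatevec and cutensornet) should now be visible under the installation prefix:

ls "${CUQUANTUM_INSTALL_PREFIX}/lib"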

cuTensor

The cuTensor library is usually not included in the CUDA installation. This library is used by some of the simulator backends. Please check the cuQuantum documentation to ensure you choose a cuTensor version that is compatible with the cuQuantum version in use, such as version 1.7.

Make sure the environment variable CUDA_ARCH_FOLDER is set to either x86_64 or sbsa (for ARM64) depending on your processor architecture, and CUDA_VERSION is set to the installed CUDA version. Install cuTensor version 1.7 using the following commands:

CUTENSOR_VERSION=1.7.0.1
CUTENSOR_DOWNLOAD_URL=https://developer.download.nvidia.com/compute/cutensor/redist/libcutensor

cutensor_archive=libcutensor-linux-${CUDA_ARCH_FOLDER}-${CUTENSOR_VERSION}-archive.tar.xz
wget "${CUTENSOR_DOWNLOAD_URL}/linux-${CUDA_ARCH_FOLDER}/${cutensor_archive}"
mkdir -p "${CUTENSOR_INSTALL_PREFIX}" && tar xf "${cutensor_archive}" --strip-components 1 -C "${CUTENSOR_INSTALL_PREFIX}"
mv "${CUTENSOR_INSTALL_PREFIX}"/lib/$(echo ${CUDA_VERSION} | cut -d . -f1)/* ${CUTENSOR_INSTALL_PREFIX}/lib/
ls -d ${CUTENSOR_INSTALL_PREFIX}/lib/*/ | xargs rm -rf && rm -rf "${cutensor_archive}"

Toolchain

The compiler toolchain used for the build needs to support C++20 and must be a supported CUDA host compiler for the installed CUDA version. The following instructions have been tested with GCC 11 as the toolchain for building CUDA Quantum. If you use a different compiler, we recommend choosing one with OpenMP support. At this time, we actively test building with GCC 11 and 12, as well as with Clang 16. Other toolchains may be supported but have not been tested.

Within the tested AlmaLinux 8 environment, for example, the following commands install GCC 11:

GCC_VERSION=11
dnf install -y --nobest --setopt=install_weak_deps=False \
    gcc-toolset-${GCC_VERSION}

Independent of which compiler toolchain you installed, set the following environment variables to point to the respective compilers on your build system:

GCC_INSTALL_PREFIX=/opt/rh/gcc-toolset-11
export CXX="${GCC_INSTALL_PREFIX}/root/usr/bin/g++"
export CC="${GCC_INSTALL_PREFIX}/root/usr/bin/gcc"
export FC="${GCC_INSTALL_PREFIX}/root/usr/bin/gfortran"
export CUDACXX=/usr/local/cuda/bin/nvcc

  • The variables CC and CXX must be set for the CUDA Quantum build.

  • A Fortran compiler is needed (only) to build the OpenSSL dependency; if you have an existing OpenSSL installation that you set the OPENSSL_INSTALL_PREFIX variable to, you can omit setting the FC environment variable.

  • To use GPU-acceleration in CUDA Quantum, make sure to set CUDACXX to your CUDA compiler. If the CUDA compiler is not found when building CUDA Quantum, some components and backends will be omitted automatically during the build.
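
As a quick sanity check, you can confirm that the configured compilers can be invoked; this assumes the environment variables set above.

"${CC}" --version
"${CXX}" --version
"${CUDACXX}" --version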

Building CUDA Quantum

This installation guide has been written for a specific version/commit of CUDA Quantum. Make sure to obtain the source code for that version. Clone the CUDA Quantum GitHub repository and check out the appropriate branch, tag, or commit. Note that the build scripts assume that they are run from within a git repository, so merely downloading the source code as a ZIP archive will not work.

From within the folder where you cloned the CUDA Quantum repository, run the following command to build CUDA Quantum:

CUDAQ_WERROR=false \
CUDAQ_PYTHON_SUPPORT=OFF \
CUDAHOSTCXX="$CXX" \
CUDAQ_ENABLE_STATIC_LINKING=true \
LDFLAGS='-static-libgcc -static-libstdc++' \
LLVM_PROJECTS='clang;lld;mlir' \
bash scripts/build_cudaq.sh -uv

The CUDA Quantum build will compile or omit optional components automatically depending on whether the necessary pre-requisites are found in the build environment. Please check the build log to confirm that all desired components have been built. If you see a message that a component has been skipped, make sure you followed the instructions for installing the necessary prerequisites and build dependencies, and have set the necessary environment variables as described in this document.
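
After a successful build, the binaries and libraries are installed into the directory set via CUDAQ_INSTALL_PREFIX. As a quick sanity check, you can list its contents; the nvq++ compiler, for example, is expected under bin.

ls "${CUDAQ_INSTALL_PREFIX}/bin" "${CUDAQ_INSTALL_PREFIX}/lib"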

Preparing the Installation

To easily migrate the built binaries to the host system, we recommend creating a self-extracting archive. To do so, download the makeself script(s) and move the files needed for the installation into a separate folder using the command

mkdir -p cuda_quantum_assets/llvm/bin && mkdir -p cuda_quantum_assets/llvm/lib && \
mv "${LLVM_INSTALL_PREFIX}/bin/"clang* cuda_quantum_assets/llvm/bin/ && \
mv "${LLVM_INSTALL_PREFIX}/lib/"clang* cuda_quantum_assets/llvm/lib/ && \
mv "${LLVM_INSTALL_PREFIX}/bin/llc" cuda_quantum_assets/llvm/bin/llc && \
mv "${LLVM_INSTALL_PREFIX}/bin/lld" cuda_quantum_assets/llvm/bin/lld && \
mv "${LLVM_INSTALL_PREFIX}/bin/ld.lld" cuda_quantum_assets/llvm/bin/ld.lld && \
mv "${CUTENSOR_INSTALL_PREFIX}" cuda_quantum_assets && \
mv "${CUQUANTUM_INSTALL_PREFIX}" cuda_quantum_assets && \
mv "${CUDAQ_INSTALL_PREFIX}/build_config.xml" cuda_quantum_assets/build_config.xml && \
mv "${CUDAQ_INSTALL_PREFIX}" cuda_quantum_assets

You can then create a self-extracting archive with the command

./makeself.sh --gzip --sha256 --license cuda_quantum_assets/cudaq/LICENSE \
    cuda_quantum_assets install_cuda_quantum.$(uname -m) \
    "CUDA Quantum toolkit for heterogeneous quantum-classical workflows" \
    bash cudaq/migrate_assets.sh -t /opt/nvidia/cudaq

Installation on the Host

Make sure your host system satisfies the Prerequisites listed above. Copy the install_cuda_quantum file that you created following the instructions in the Preparing the Installation section onto the host system, and then run the commands

sudo bash install_cuda_quantum.* --accept
. /opt/nvidia/cudaq/set_env.sh

This will extract the built assets and move them to the correct locations. The set_env.sh script in /opt/nvidia/cudaq defines the necessary environment variables to use CUDA Quantum. To avoid having to set them manually every time a new shell is opened, we highly recommend adding the following lines to the /etc/profile file:

if [ -f /opt/nvidia/cudaq/set_env.sh ]; then
  . /opt/nvidia/cudaq/set_env.sh
fi

Note

CUDA Quantum is configured to use its own linker, the LLD linker, by default. While this linker should be a drop-in replacement for system linkers, in rare cases it may be necessary to use your own linker instead. You can configure CUDA Quantum to use an external linker by setting the NVQPP_LD_PATH environment variable to point to it; for example, export NVQPP_LD_PATH=ld.

To enable C++ development in general, you should also make sure that the C++ standard library is installed and discoverable on your host system. CUDA Quantum supports the GNU C++ standard library (libstdc++), version 11 or newer. Other libraries may work but can cause issues in certain cases.
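
On Ubuntu 22.04, for example, the GNU C++ standard library headers and libraries can be installed via the package manager as sketched below; the package name is an assumption for that distribution and differs elsewhere.

sudo apt-get update && sudo apt-get install -y libstdc++-11-dev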

The remaining sections in this document list additional runtime dependencies that are not included in the migrated assets and are needed to use some of the CUDA Quantum features and components.

CUDA Runtime Libraries

To use GPU-acceleration in CUDA Quantum, you will need to install the necessary CUDA runtime libraries. While not strictly necessary, we recommend installing the complete CUDA toolkit as you did for the CUDA Quantum build. If you prefer to install only the minimal set of runtime libraries, the following commands, for example, install the necessary packages for the AlmaLinux 8 environment:

CUDA_VERSION=11.8
CUDA_DOWNLOAD_URL=https://developer.download.nvidia.com/compute/cuda/repos
# Go to the url above, set the variables below to a suitable distribution
# and subfolder for your platform, and uncomment the line below.
# DISTRIBUTION=rhel8 CUDA_ARCH_FOLDER=x86_64

version_suffix=$(echo ${CUDA_VERSION} | tr . -)
dnf config-manager --add-repo "${CUDA_DOWNLOAD_URL}/${DISTRIBUTION}/${CUDA_ARCH_FOLDER}/cuda-${DISTRIBUTION}.repo"
dnf install -y --nobest --setopt=install_weak_deps=False \
    cuda-nvtx-${version_suffix} cuda-cudart-${version_suffix} \
    libcusolver-${version_suffix} libcublas-${version_suffix}

MPI

To work with all CUDA Quantum backends, a CUDA-aware MPI installation is required. If you do not have an existing CUDA-aware MPI installation, you can build one from source. To do so, in addition to the CUDA runtime libraries listed above you will need to install the CUDA runtime development package (cuda-cudart-devel-${version_suffix} or cuda-cudart-dev-${version_suffix}, depending on your distribution).
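
Within the tested AlmaLinux 8 environment, for example, and with the CUDA repository configured as shown in the previous section, the development package can be installed with:

dnf install -y --nobest --setopt=install_weak_deps=False \
    cuda-cudart-devel-${version_suffix}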

The following commands build a sufficient CUDA-aware OpenMPI installation. To make best use of MPI, we recommend a more fully featured installation including additional configurations that fit your host system. The commands below assume you have the necessary prerequisites for the OpenMPI build installed on the build system. Within the tested AlmaLinux 8 environment, for example, the packages autoconf, libtool, flex, and make need to be installed.

OPENMPI_VERSION=4.1.4
OPENMPI_DOWNLOAD_URL=https://github.com/open-mpi/ompi

wget "${OPENMPI_DOWNLOAD_URL}/archive/v${OPENMPI_VERSION}.tar.gz" -O /tmp/openmpi.tar.gz
mkdir -p ~/.openmpi-src && tar xf /tmp/openmpi.tar.gz --strip-components 1 -C ~/.openmpi-src
rm -rf /tmp/openmpi.tar.gz && cd ~/.openmpi-src
./autogen.pl 
LDFLAGS=-Wl,--as-needed ./configure \
    --prefix=/usr/local/openmpi \
    --disable-getpwuid --disable-static \
    --disable-debug --disable-mem-debug --disable-event-debug \
    --disable-mem-profile --disable-memchecker \
    --without-verbs \
    --with-cuda=/usr/local/cuda
make -j$(nproc) 
make -j$(nproc) install
cd - && rm -rf ~/.openmpi-src

Confirm that you have a suitable MPI implementation installed. For OpenMPI and MPICH, for example, this can be done by compiling and running the following program:

// Compile and run with:
// ```
// mpic++ mpi_cuda_check.cpp -o check.x && mpiexec -np 1 ./check.x
// ```

#include "mpi.h"
#if __has_include("mpi-ext.h")
#include "mpi-ext.h"
#endif
#include <stdio.h>

int main(int argc, char *argv[]) {
  MPI_Init(&argc, &argv);
  int exit_code;
  if (MPIX_Query_cuda_support()) {
    printf("CUDA-aware MPI installation.\n");
    exit_code = 0;
  } else {
    printf("Missing CUDA support.\n");
    exit_code = 1;
  }
  MPI_Finalize();
  return exit_code;
}

Note

If you are encountering an error similar to “The value of the MCA parameter plm_rsh_agent was set to a path that could not be found”, please make sure you have an SSH client installed or update the MCA parameter to another suitable agent. MPI uses SSH or RSH to communicate with each node unless another resource manager, such as SLURM, is used.
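
On the tested AlmaLinux 8 environment, for example, an SSH client can be installed as sketched below; the package name is an assumption and differs on other distributions (for example, openssh-client on Debian-based systems).

dnf install -y openssh-clients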

Different MPI implementations are supported via a plugin infrastructure in CUDA Quantum. Once you have a CUDA-aware MPI installation on your host system, you can configure CUDA Quantum to use it by activating the necessary plugin. Plugins for OpenMPI and MPICH are included in CUDA Quantum and can be activated by setting the environment variable MPI_PATH to the MPI installation folder and then running the command

bash "${CUDA_QUANTUM_PATH}/distributed_interfaces/activate_custom_mpi.sh"

If you use a different MPI implementation than OpenMPI or MPICH, you will need to implement the necessary plugin interface yourself prior to activating the plugin with the command above.
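
As a minimal sketch, assuming the OpenMPI installation built above and assuming that the set_env.sh script has defined CUDA_QUANTUM_PATH to point to the CUDA Quantum installation, activating the OpenMPI plugin looks as follows:

export MPI_PATH=/usr/local/openmpi
bash "${CUDA_QUANTUM_PATH}/distributed_interfaces/activate_custom_mpi.sh"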