Installation from Source¶
In most cases, you should not need to build CUDA-Q from source. For the best experience, we recommend using a container runtime to avoid conflicts with other software tools installed on the system. Note that Singularity and Docker rootless mode address common issues and concerns that are often the motivation for avoiding the use of containers. Singularity, for example, can be installed in a user folder and its installation does not require admin permissions; see this section for more detailed instructions on how to do that. Our installation guide also contains instructions for how to connect an IDE to a running container.
If you do not want to use a container runtime, we also provide pre-built binaries for using CUDA-Q with C++, and Python wheels for using CUDA-Q with Python. These binaries and wheels are built following the instructions in this guide and should work for you as long as your system meets the compatibility requirements listed under Prerequisites. To install the pre-built binaries, please follow the instructions here. To install the Python wheels, please follow the instructions here.
If your system is not listed as supported by our official packages, e.g. because you would like to use CUDA-Q on an operating system that uses an older C standard library, please follow this guide carefully without skipping any steps to build and install CUDA-Q from source. The rest of this guide details system requirements during the build and after installation, and walks through the installation steps.
Note
CUDA-Q contains some components that are only included as pre-built binaries and not part of our open source repository. We are working on either open-sourcing these components or making them available as separate downloads in the future. Even without these components, almost all features of CUDA-Q will be enabled in a source build, though some pieces may be less performant. At this time, the multi-GPU state vector simulator backend will not be included if you build CUDA-Q from source.
Prerequisites¶
The following prerequisites need to be satisfied both on the build system and on the host system, that is, the system where the built CUDA-Q binaries will be installed and used.
Linux operating system. The instructions in this guide have been validated with the AlmaLinux 8 image that serves as the base image for the manylinux_2_28 image, and should work for the operating systems CentOS 8, Debian 11 and 12, Fedora 38 and 39, OpenSUSE/SLED/SLES 15.5 and 15.6, RHEL 8 and 9, Rocky 8 and 9, and Ubuntu 24.04 and 22.04. Other operating systems may work, but have not been tested.
Bash shell. The CUDA-Q build, install, and run scripts expect to use /bin/bash.
GNU C library. Make sure that the version on the host system is the same as or newer than the version on the build system. Our own builds use version 2.28.
CPU with either x86-64 (x86-64-v3 architecture and newer) or ARM64 (ARM v8-A architecture and newer). Other architectures may work but are not tested and may require adjustments to the build instructions.
Needed only on the host system: NVIDIA GPU with Volta, Turing, Ampere, Ada, or Hopper architecture and Compute Capability 7+. Make sure you have the latest drivers installed for your GPU, and double check that the driver version listed by the nvidia-smi command is 470.57.02 or newer. You do not need to have a GPU available on the build system; the CUDA compiler needed for the build can be installed and used without a GPU.
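For example, a quick way to print just the driver version (a sketch, assuming nvidia-smi is on the path):
# Prints only the installed driver version.
nvidia-smi --query-gpu=driver_version --format=csv,noheader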
We strongly recommend using a virtual environment for the build that includes only the tools and dependencies listed in this guide. If you have additional software installed, you will need to make sure that the build is linking against the correct libraries and versions.
Build Dependencies¶
In addition to the prerequisites listed above, you will need to install the following dependencies in your build environment prior to proceeding with the build as described in the subsequent sections:
Python version 3.10 or newer: If you intend to build CUDA-Q with Python support, make sure the Python version on the build system matches the version on the host system. If you intend to only build the C++ support for CUDA-Q, the Python interpreter is required only for some of the LLVM build scripts and the Python version used for the build does not have to match the version on the host system.
Common tools: wget, git, unzip. The commands in the rest of this guide assume that these tools are present on the build system, but they can be replaced by other alternatives (such as, for example, manually going to a web page and downloading a file/folder).
The above prerequisites are no longer needed once CUDA-Q is built and do not need to be present on the host system.
Note
The CUDA-Q build scripts and the commands listed in the rest of this
document assume you are using bash
as the shell for your build.
In addition to installing the needed build dependencies listed above, make sure to set the following environment variables prior to proceeding:
export CUDAQ_INSTALL_PREFIX=/usr/local/cudaq
export CUQUANTUM_INSTALL_PREFIX=/usr/local/cuquantum
export CUTENSOR_INSTALL_PREFIX=/usr/local/cutensor
export LLVM_INSTALL_PREFIX=/usr/local/llvm
export BLAS_INSTALL_PREFIX=/usr/local/blas
export ZLIB_INSTALL_PREFIX=/usr/local/zlib
export OPENSSL_INSTALL_PREFIX=/usr/local/openssl
export CURL_INSTALL_PREFIX=/usr/local/curl
export AWS_INSTALL_PREFIX=/usr/local/aws
These environment variables must be set during the build. We strongly recommend setting each of them to a path that does not already exist; this will ensure that these components are built/installed as needed when building CUDA-Q. The configured paths can be chosen freely, but the paths specified during the build are also where the corresponding libraries will be installed on the host system. We are working on making this more flexible in the future.
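A quick check that none of the configured locations exist yet (a sketch over the variables set above):
for prefix in "$CUDAQ_INSTALL_PREFIX" "$CUQUANTUM_INSTALL_PREFIX" "$CUTENSOR_INSTALL_PREFIX" \
              "$LLVM_INSTALL_PREFIX" "$BLAS_INSTALL_PREFIX" "$ZLIB_INSTALL_PREFIX" \
              "$OPENSSL_INSTALL_PREFIX" "$CURL_INSTALL_PREFIX" "$AWS_INSTALL_PREFIX"; do
  # Warn about paths that already exist; the build would reuse whatever is installed there.
  [ -e "$prefix" ] && echo "Warning: $prefix already exists"
done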
Note
Please do not set LLVM_INSTALL_PREFIX to an existing directory; to avoid compatibility issues, it is important to use the same compiler to build the LLVM/MLIR dependencies from source as is later used to build CUDA-Q itself.
Note
If you are setting the CURL_INSTALL_PREFIX
variable to an existing
CURL installation (not recommended), please make sure the command
curl --version
lists HTTP and HTTPS as supported protocols. If these
protocols are not listed, please instead set the CURL_INSTALL_PREFIX
variable to a path that does not exist. In that case, a suitable
library will be automatically built from source as part of
building CUDA-Q.
If you deviate from the instructions below for installing one of the dependencies and instead install it, for example, via package manager, you will need to make sure that the installation path matches the path you set for the corresponding environment variable(s).
CUDA¶
Building CUDA-Q requires a full installation of the CUDA toolkit. You can install the CUDA toolkit and use the CUDA compiler without having a GPU. The instructions have been tested with versions 11.8 and 12.0, but other CUDA 11 or 12 versions should work, as long as the CUDA runtime version on the host system matches the CUDA version used for the build, and the installed driver on the host system supports that CUDA version. We recommend using the latest CUDA version that is supported by the driver on the host system.
Download a suitable CUDA version for your platform, following the installation guide in the online CUDA documentation.
Within the tested AlmaLinux 8 environment, for example, the following commands install CUDA 12.0:
CUDA_VERSION=${CUDA_VERSION:-12.0}
CUDA_DOWNLOAD_URL=https://developer.download.nvidia.com/compute/cuda/repos
# Go to the url above, set the variables below to a suitable distribution
# and subfolder for your platform, and uncomment the line below.
# DISTRIBUTION=rhel8 CUDA_ARCH_FOLDER=x86_64
dnf config-manager --add-repo "${CUDA_DOWNLOAD_URL}/${DISTRIBUTION}/${CUDA_ARCH_FOLDER}/cuda-${DISTRIBUTION}.repo"
dnf install -y --nobest --setopt=install_weak_deps=False \
cuda-toolkit-$(echo ${CUDA_VERSION} | tr . -)
Toolchain¶
The compiler toolchain used for the build must be a supported CUDA host compiler for the installed CUDA version. The following instructions have been tested with GCC-11. Other toolchains may be supported but have not been tested.
Within the tested AlmaLinux 8 environment, for example, the following commands install GCC 11:
GCC_VERSION=${GCC_VERSION:-11}
dnf install -y --nobest --setopt=install_weak_deps=False \
gcc-toolset-${GCC_VERSION}
# Enabling the toolchain globally is only needed for debug builds
# to ensure that the correct assembler is picked to process debug symbols.
enable_script=`find / -path '*gcc*' -path '*'$GCC_VERSION'*' -name enable`
if [ -n "$enable_script" ]; then
. "$enable_script"
fi
Independent of which compiler toolchain you installed, set the following environment variables to point to the respective compilers on your build system:
export GCC_TOOLCHAIN=/opt/rh/gcc-toolset-11/root/usr/
export CXX="${GCC_TOOLCHAIN}/bin/g++"
export CC="${GCC_TOOLCHAIN}/bin/gcc"
export CUDACXX=/usr/local/cuda/bin/nvcc
export CUDAHOSTCXX="${GCC_TOOLCHAIN}/bin/g++"
The variables CC and CXX must be set for the CUDA-Q build.
To use GPU-acceleration in CUDA-Q, make sure to set CUDACXX to your CUDA compiler, and CUDAHOSTCXX to the CUDA-compatible host compiler you are using. If the CUDA compiler is not found when building CUDA-Q, some components and backends will be omitted automatically during the build.
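As a quick sanity check (a sketch, assuming the exports above), you can confirm that the configured compilers resolve correctly:
# Both should report the toolchain installed above (GCC 11 in this example).
"${CC}" --version && "${CXX}" --version
# Should report the installed CUDA release; if this fails, GPU components will be skipped.
"${CUDACXX}" --version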
Building CUDA-Q¶
This installation guide has been written for a specific version/commit of CUDA-Q. Make sure to obtain the source code for that version. Clone the CUDA-Q GitHub repository and check out the appropriate branch, tag, or commit. Note that the build scripts assume they are run from within a git repository; merely downloading the source code as a ZIP archive will therefore not work.
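For example (a sketch; replace <version-tag> with the branch, tag, or commit this guide was written for):
# Clone the repository and check out the desired revision.
git clone https://github.com/NVIDIA/cuda-quantum.git
cd cuda-quantum && git checkout <version-tag>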
Please follow the instructions in the respective subsection(s) to build the necessary
components for using CUDA-Q from C++ and/or Python.
After the build, check that the GPU-accelerated components have been built by confirming
that the file nvidia.config
exists in the $CUDAQ_INSTALL_PREFIX/targets
folder.
We also recommend checking the build log printed to the console to confirm that all desired
components have been built.
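For example, a quick way to confirm the nvidia.config file is present (assuming the environment variable set earlier in this guide):
ls "${CUDAQ_INSTALL_PREFIX}/targets/nvidia.config"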
Note
The CUDA-Q build will compile or omit optional components automatically depending on whether the necessary pre-requisites are found in the build environment. If you see a message that a component has been skipped, and/or the CUDA compiler is not properly detected, make sure you followed the instructions for installing the necessary prerequisites and build dependencies, and have set the necessary environment variables as described in this document.
Python Support¶
The most convenient way to enable Python support within CUDA-Q is to build a wheel that can then easily be installed using pip. To ensure the wheel can be installed on the host system, make sure to use the same Python version for the build as the one that is installed on the host system.
To build a CUDA-Q Python wheel, you will need to install the following additional
Python-specific tools:
Python development headers: The development headers for your Python version are installed in the same way as you installed Python itself. If you installed Python via the package manager for your system, you may need to install an additional package to get the development headers. The package name is usually your Python version followed by either a -dev or -devel suffix. If you are using a Conda environment, the necessary headers should already be installed.
Pip package manager: Make sure the pip module is enabled for your Python version, and that your pip version is 24 or newer. We refer to the Python documentation for more information about installing/enabling pip.
Python modules: Install the additional modules numpy, build, auditwheel, and patchelf for your Python version, e.g. python3 -m pip install numpy build auditwheel patchelf.
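A quick way to confirm that the development headers and a sufficiently recent pip are available (a sketch using standard Python tooling):
# Should report pip 24 or newer.
python3 -m pip --version
# Should list Python.h; if it is missing, install the development headers package.
ls "$(python3 -c 'import sysconfig; print(sysconfig.get_paths()["include"])')/Python.h"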
Note
The wheel build by default is configured to depend on CUDA 12. To build a wheel for CUDA 11,
you need to adjust the dependencies and project name in the pyproject.toml
file.
From within the folder where you cloned the CUDA-Q repository, run the following command to build the CUDA-Q Python wheel:
LLVM_PROJECTS='clang;flang;lld;mlir;python-bindings;openmp;runtimes' \
bash scripts/install_prerequisites.sh -t llvm && \
CC="$LLVM_INSTALL_PREFIX/bin/clang" \
CXX="$LLVM_INSTALL_PREFIX/bin/clang++" \
FC="$LLVM_INSTALL_PREFIX/bin/flang-new" \
python3 -m build --wheel
Note
A version identifier will be automatically assigned to the wheel based on the commit
history. You can manually override this detection to give a more descriptive identifier
by setting the environment variable SETUPTOOLS_SCM_PRETEND_VERSION
to the desired
value before building the wheel.
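For example, to assign an explicit version identifier before building the wheel (the value below is just an illustration):
# Hypothetical version string; pick one that matches your own versioning scheme.
export SETUPTOOLS_SCM_PRETEND_VERSION=1.0.0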
After the initial build, auditwheel is used to include dependencies in the wheel, if necessary, and to correctly label the wheel. We recommend not including the CUDA runtime libraries and instead installing them separately on the host system following the instructions in the next section. The following command builds the final wheel, not including CUDA dependencies:
CUDAQ_WHEEL="$(find . -name 'cuda_quantum*.whl')" && \
MANYLINUX_PLATFORM="$(echo ${CUDAQ_WHEEL} | grep -o '[a-z]*linux_[^\.]*' | sed -re 's/^linux_/manylinux_2_28_/')" && \
LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:$(pwd)/_skbuild/lib" \
python3 -m auditwheel -v repair ${CUDAQ_WHEEL} \
--plat ${MANYLINUX_PLATFORM} \
--exclude libcublas.so.11 \
--exclude libcublasLt.so.11 \
--exclude libcusolver.so.11 \
--exclude libcutensor.so.2 \
--exclude libcutensornet.so.2 \
--exclude libcustatevec.so.1 \
--exclude libcudart.so.11.0 \
--exclude libnvToolsExt.so.1 \
--exclude libnvidia-ml.so.1 \
--exclude libcuda.so.1
The command above will create a new wheel in the wheelhouse
folder. This wheel can be
installed on any compatible platform.
Note
You can confirm that the wheel is indeed compatible with your host platform by
checking that the wheel tag (i.e. the file name ending of the .whl
file) is listed under
“Compatible Tags” when running the command python3 -m pip debug --verbose
on the host.
C++ Support¶
From within the folder where you cloned the CUDA-Q repository, run the following command to build CUDA-Q:
CUDAQ_ENABLE_STATIC_LINKING=TRUE \
CUDAQ_REQUIRE_OPENMP=TRUE \
CUDAQ_WERROR=TRUE \
CUDAQ_PYTHON_SUPPORT=OFF \
LLVM_PROJECTS='clang;flang;lld;mlir;openmp;runtimes' \
bash scripts/build_cudaq.sh -t llvm -v
Note that lld
is primarily needed when the build or host system does not already
have an existing default linker on its path; CUDA-Q supports the same linkers as
clang
does.
To easily migrate the built binaries to the host system, we recommend creating a self-extracting archive. To do so, download the makeself script(s) and move the files to be installed into a separate folder using the command
mkdir -p cuda_quantum_assets/llvm/bin && \
mkdir -p cuda_quantum_assets/llvm/lib && \
mkdir -p cuda_quantum_assets/llvm/include && \
mv "${LLVM_INSTALL_PREFIX}/bin/"clang* cuda_quantum_assets/llvm/bin/ && \
mv cuda_quantum_assets/llvm/bin/clang-format* "${LLVM_INSTALL_PREFIX}/bin/" && \
mv "${LLVM_INSTALL_PREFIX}/bin/llc" cuda_quantum_assets/llvm/bin/llc && \
mv "${LLVM_INSTALL_PREFIX}/bin/lld" cuda_quantum_assets/llvm/bin/lld && \
mv "${LLVM_INSTALL_PREFIX}/bin/ld.lld" cuda_quantum_assets/llvm/bin/ld.lld && \
mv "${LLVM_INSTALL_PREFIX}/lib/"* cuda_quantum_assets/llvm/lib/ && \
mv "${LLVM_INSTALL_PREFIX}/include/"* cuda_quantum_assets/llvm/include/ && \
mv "${CUTENSOR_INSTALL_PREFIX}" cuda_quantum_assets && \
mv "${CUQUANTUM_INSTALL_PREFIX}" cuda_quantum_assets && \
mv "${CUDAQ_INSTALL_PREFIX}/build_config.xml" cuda_quantum_assets/build_config.xml && \
mv "${CUDAQ_INSTALL_PREFIX}" cuda_quantum_assets
You can then create a self-extracting archive with the command
./makeself.sh --gzip --sha256 --license cuda_quantum_assets/cudaq/LICENSE \
cuda_quantum_assets install_cuda_quantum.$(uname -m) \
"CUDA-Q toolkit for heterogeneous quantum-classical workflows" \
bash cudaq/migrate_assets.sh -t /opt/nvidia/cudaq
Installation on the Host¶
Make sure your host system satisfies the Prerequisites listed above.
To use CUDA-Q with Python, you should have a working Python installation on the host system, including the pip package manager.
To use CUDA-Q with C++, you should make sure that you have the necessary development headers of the C standard library installed. You can check this by searching for features.h, commonly found in /usr/include/. You can install the necessary headers via package manager (usually the package name is called something like glibc-devel or libc6-devel). These headers are also included with any installation of GCC.
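For example, a quick check for the C standard library development headers:
# If this file is missing, install the glibc development package for your distribution.
ls /usr/include/features.h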
If you followed the instructions for building the CUDA-Q Python wheel, copy the built .whl file to the host system, and install it using pip; e.g.
pip install cuda_quantum*.whl
To install the necessary CUDA and MPI dependencies for some of the components,
you can either follow the instructions on PyPI.org,
replacing pip install cudaq
with the command above, or you can follow the
instructions in the remaining sections of this document to customize and better
optimize them for your host system.
If you followed the instructions for building the
CUDA-Q C++ tools,
copy the install_cuda_quantum
file that you created to the host system,
and install it by running the commands
sudo bash install_cuda_quantum.$(uname -m) --accept
. /opt/nvidia/cudaq/set_env.sh
This will extract the built assets and move them to the correct locations.
The set_env.sh
script in /opt/nvidia/cudaq
defines the necessary environment
variables to use CUDA-Q. To avoid having to set them manually every time a
new shell is opened, we highly recommend adding the following lines to
the /etc/profile
file:
if [ -f /opt/nvidia/cudaq/set_env.sh ]; then
    . /opt/nvidia/cudaq/set_env.sh
fi
Note
CUDA-Q as built following the instructions above includes and uses the LLVM C++ standard library. This will not interfere with any other C++ standard library you may have on your system. Pre-built external libraries that you may want to use with CUDA-Q, such as specific optimizers, for example, have a C API to ensure compatibility across different versions of the C++ standard library and will work with CUDA-Q without issues. The same is true for all distributed CUDA libraries. To build your own CUDA libraries that can be used with CUDA-Q, please take a look at Using CUDA and CUDA-Q in a Project.
The remaining sections in this document list additional runtime dependencies that are not included in the migrated assets and are needed to use some of the CUDA-Q features and components.
CUDA Runtime Libraries¶
To use GPU-acceleration in CUDA-Q you will need to install the necessary CUDA runtime libraries. Their version (at least the major version) needs to match the version used for the build. While not necessary, we recommend installing the complete CUDA toolkit as you did for the CUDA-Q build. If you prefer to only install the minimal set of runtime libraries, the following commands, for example, install the necessary packages for the AlmaLinux 8 environment:
CUDA_VERSION=${CUDA_VERSION:-12.0}
CUDA_DOWNLOAD_URL=https://developer.download.nvidia.com/compute/cuda/repos
# Go to the url above, set the variables below to a suitable distribution
# and subfolder for your platform, and uncomment the line below.
# DISTRIBUTION=rhel8 CUDA_ARCH_FOLDER=x86_64
version_suffix=$(echo ${CUDA_VERSION} | tr . -)
dnf config-manager --add-repo "${CUDA_DOWNLOAD_URL}/${DISTRIBUTION}/${CUDA_ARCH_FOLDER}/cuda-${DISTRIBUTION}.repo"
dnf install -y --nobest --setopt=install_weak_deps=False \
cuda-cudart-${version_suffix} \
cuda-nvrtc-${version_suffix} \
libcusolver-${version_suffix} \
libcublas-${version_suffix}
if [ $(echo ${CUDA_VERSION} | cut -d . -f1) -gt 11 ]; then
dnf install -y --nobest --setopt=install_weak_deps=False \
libnvjitlink-${version_suffix}
fi
MPI¶
To work with all CUDA-Q backends, a CUDA-aware MPI installation is required.
If you do not have an existing CUDA-aware MPI installation, you can build one from
source. To do so, in addition to the CUDA runtime libraries listed above
you will need to install the CUDA runtime development package
(cuda-cudart-devel-${version_suffix}
or cuda-cudart-dev-${version_suffix}
,
depending on your distribution).
The following commands build a sufficient CUDA-aware OpenMPI installation.
To make best use of MPI, we recommend a more fully featured installation including
additional configurations that fit your host system.
The commands below assume you have the necessary prerequisites for the OpenMPI build installed on the build system. Within the tested AlmaLinux 8 environment, for example, the packages autoconf, libtool, flex, and make need to be installed.
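For example, these can be installed with a command analogous to the dnf commands used earlier in this guide (a sketch for AlmaLinux 8):
# Build tools needed to run autogen.pl and to configure and compile OpenMPI.
dnf install -y --nobest --setopt=install_weak_deps=False \
    autoconf libtool flex make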
OPENMPI_VERSION=4.1.4
OPENMPI_DOWNLOAD_URL=https://github.com/open-mpi/ompi
wget "${OPENMPI_DOWNLOAD_URL}/archive/v${OPENMPI_VERSION}.tar.gz" -O /tmp/openmpi.tar.gz
mkdir -p ~/.openmpi-src && tar xf /tmp/openmpi.tar.gz --strip-components 1 -C ~/.openmpi-src
rm -rf /tmp/openmpi.tar.gz && cd ~/.openmpi-src
./autogen.pl
LDFLAGS=-Wl,--as-needed ./configure \
--prefix=/usr/local/openmpi \
--disable-getpwuid --disable-static \
--disable-debug --disable-mem-debug --disable-event-debug \
--disable-mem-profile --disable-memchecker \
--without-verbs \
--with-cuda=/usr/local/cuda
make -j$(nproc)
make -j$(nproc) install
cd - && rm -rf ~/.openmpi-src
Confirm that you have a suitable MPI implementation installed. For OpenMPI and MPICH, for example, this can be done by compiling and running the following program:
// Compile and run with:
// ```
// mpic++ mpi_cuda_check.cpp -o check.x && mpiexec -np 1 ./check.x
// ```
#include "mpi.h"
#if __has_include("mpi-ext.h")
#include "mpi-ext.h"
#endif
#include <stdio.h>
int main(int argc, char *argv[]) {
  MPI_Init(&argc, &argv);
  int exit_code;
  if (MPIX_Query_cuda_support()) {
    printf("CUDA-aware MPI installation.\n");
    exit_code = 0;
  } else {
    printf("Missing CUDA support.\n");
    exit_code = 1;
  }
  MPI_Finalize();
  return exit_code;
}
Note
If you are encountering an error similar to “The value of the MCA parameter plm_rsh_agent
was set to a path that could not be found”, please make sure you have an SSH Client installed
or update the MCA parameter to another suitable agent.
MPI uses SSH or
RSH to communicate with each node
unless another resource manager, such as
SLURM, is used.
Different MPI implementations are supported via a plugin infrastructure in CUDA-Q.
Once you have a CUDA-aware MPI installation on your host system, you can
configure CUDA-Q to use it by activating the necessary plugin.
Plugins for OpenMPI and MPICH are included in CUDA-Q and can be activated by
setting the environment variable MPI_PATH
to the MPI installation folder
and then running the command
bash "${CUDA_QUANTUM_PATH}/distributed_interfaces/activate_custom_mpi.sh"
Note
To activate the MPI plugin for the Python support, replace ${CUDA_QUANTUM_PATH}
with the path that is listed under “Location” when you run the command
pip show cuda-quantum
.
If you use a different MPI implementation than OpenMPI or MPICH, you will need to implement the necessary plugin interface yourself prior to activating the plugin with the command above.