Taking Advantage of the Underlying Quantum Platform

The CUDA Quantum machine model elucidates the various devices considered in the broader quantum-classical compute node context. Programmers will have one or many host CPUs, zero or many NVIDIA GPUs, a classical QPU control space, and the quantum register itself. Moreover, the specification notes that the underlying platform may expose multiple QPUs. While this is unlikely for physical QPU instantiations in the near term, the availability of GPU-based circuit simulators on NVIDIA multi-GPU architectures already gives one an opportunity to program such a multi-QPU architecture. CUDA Quantum starts by enabling one to query information about the underlying quantum platform via the quantum_platform abstraction. This type exposes a num_qpus() method that can be used to query the number of available QPUs for asynchronous CUDA Quantum kernel and cudaq:: function invocations. Each available QPU is assigned a logical index, and programmers can launch specific asynchronous function invocations targeting a desired QPU.

NVIDIA MQPU Platform

The NVIDIA MQPU target (nvidia-mqpu) provides a simulated QPU for every available NVIDIA GPU on the underlying system. Each QPU is simulated via a cuStateVec simulator backend. For more information about using multiple GPUs to simulate each virtual QPU, or using a different backend for the virtual QPUs, please see the remote MQPU platform section below. This target enables asynchronous parallel execution of quantum kernel tasks.

Here is a simple example demonstrating its usage.

  auto kernelToBeSampled = [](int runtimeParam) __qpu__ {
    cudaq::qvector q(runtimeParam);
    h(q);
    mz(q);
  };

  // Get the quantum_platform singleton
  auto &platform = cudaq::get_platform();

  // Query the number of QPUs in the system
  auto num_qpus = platform.num_qpus();
  printf("Number of QPUs: %zu\n", num_qpus);
  // We will launch asynchronous sampling tasks
  // and will store the results immediately as a future
  // we can query at some later point
  std::vector<cudaq::async_sample_result> countFutures;
  for (std::size_t i = 0; i < num_qpus; i++) {
    countFutures.emplace_back(
        cudaq::sample_async(i, kernelToBeSampled, 5 /*runtimeParam*/));
  }

  //
  // Go do other work, asynchronous execution of sample tasks on-going
  //

  // Get the results, note future::get() will kick off a wait
  // if the results are not yet available.
  for (auto &counts : countFutures) {
    counts.get().dump();
  }

One can specify the target multi-QPU architecture (nvidia-mqpu) with the --target flag:

nvq++ sample_async.cpp --target nvidia-mqpu
./a.out

The same example in Python:

import cudaq

cudaq.set_target("nvidia-mqpu")
target = cudaq.get_target()
num_qpus = target.num_qpus()
print("Number of QPUs:", num_qpus)

kernel, runtime_param = cudaq.make_kernel(int)
qubits = kernel.qalloc(runtime_param)
# Place qubits in superposition state.
kernel.h(qubits)
# Measure.
kernel.mz(qubits)

count_futures = []
for qpu in range(num_qpus):
    count_futures.append(cudaq.sample_async(kernel, 5, qpu_id=qpu))

for counts in count_futures:
    print(counts.get())

CUDA Quantum exposes asynchronous versions of its default algorithmic primitive functions, such as sample and observe (e.g., the sample_async function used in the above code snippets).
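For instance, the asynchronous observe variant, observe_async, can pin individual expectation value computations to specific virtual QPUs. The following minimal sketch (the kernel and Hamiltonian here are illustrative, not taken from the examples above) launches one such task per available QPU and collects the results afterwards:

import cudaq
from cudaq import spin

cudaq.set_target("nvidia-mqpu")
num_qpus = cudaq.get_target().num_qpus()

# A small parameterized kernel and a two-qubit Hamiltonian for illustration.
kernel, theta = cudaq.make_kernel(float)
qubits = kernel.qalloc(2)
kernel.x(qubits[0])
kernel.ry(theta, qubits[1])

hamiltonian = spin.z(0) * spin.z(1) + 0.5 * spin.x(0)

# Launch one asynchronous expectation value computation per virtual QPU,
# each at a different parameter value; get() blocks until a task completes.
futures = [
    cudaq.observe_async(kernel, hamiltonian, 0.1 * i, qpu_id=i)
    for i in range(num_qpus)
]
for f in futures:
    print(f.get().expectation())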

The nvidia-mqpu platform creates one virtual QPU instance per GPU available on the system. For example, on a system with 4 GPUs, the sampling example above will distribute its four sampling tasks among those GPUEmulatedQPU instances.

The results might look like the following 4 different random samplings:

Number of QPUs: 4
{ 10011:28 01100:28 ... }
{ 10011:37 01100:25 ... }
{ 10011:29 01100:25 ... }
{ 10011:33 01100:30 ... }

Note

By default, the nvidia-mqpu platform will utilize all available GPUs (the number of QPU instances is equal to the number of GPUs). To specify the number of QPUs to be instantiated, one can set the CUDAQ_MQPU_NGPUS environment variable. For example, use export CUDAQ_MQPU_NGPUS=2 to specify that only 2 QPUs (GPUs) are needed.

Asynchronous expectation value computations

One typical use case of the nvidia-mqpu platform is to distribute the expectation value computations of a multi-term Hamiltonian across multiple virtual QPUs (GPUEmulatedQPU).

Here is an example.

  using namespace cudaq::spin;
  cudaq::spin_op h = 5.907 - 2.1433 * x(0) * x(1) - 2.1433 * y(0) * y(1) +
                     .21829 * z(0) - 6.125 * z(1);

  // Get the quantum_platform singleton
  auto &platform = cudaq::get_platform();

  // Query the number of QPUs in the system
  auto num_qpus = platform.num_qpus();
  printf("Number of QPUs: %zu\n", num_qpus);

  auto ansatz = [](double theta) __qpu__ {
    cudaq::qubit q, r;
    x(q);
    ry(theta, r);
    x<cudaq::ctrl>(r, q);
  };

  double result = cudaq::observe<cudaq::parallel::thread>(ansatz, h, 0.59);
  printf("Expectation value: %lf\n", result);

One can then target the nvidia-mqpu platform by executing the following commands:

nvq++ observe_mqpu.cpp --target nvidia-mqpu
./a.out

In Python:

import cudaq
from cudaq import spin

cudaq.set_target("nvidia-mqpu")
target = cudaq.get_target()
num_qpus = target.num_qpus()
print("Number of QPUs:", num_qpus)

# Define spin ansatz.
kernel, theta = cudaq.make_kernel(float)
qvec = kernel.qalloc(2)
kernel.x(qvec[0])
kernel.ry(theta, qvec[1])
kernel.cx(qvec[1], qvec[0])
# Define spin Hamiltonian.
hamiltonian = (5.907 - 2.1433 * spin.x(0) * spin.x(1) -
               2.1433 * spin.y(0) * spin.y(1) + .21829 * spin.z(0) -
               6.125 * spin.z(1))

exp_val = cudaq.observe(kernel,
                        hamiltonian,
                        0.59,
                        execution=cudaq.parallel.thread).expectation()
print("Expectation value: ", exp_val)

In the above code snippets, since the Hamiltonian contains four non-identity terms, four quantum circuits need to be executed to compute its expectation value given the quantum state prepared by the ansatz kernel. When the nvidia-mqpu platform is selected, these circuits are distributed across all available QPUs. The final expectation value is computed from all of the QPU execution results.

Parallel distribution mode

The CUDA Quantum nvidia-mqpu platform supports two modes of parallel distribution of expectation value computation:

  • MPI: distribute the expectation value computations across available MPI ranks and GPUs for each Hamiltonian term.

  • Thread: distribute the expectation value computations among available GPUs via standard C++ threads (each thread handles one GPU).

For instance, if all GPUs are available on a single node, thread-based parallel distribution (cudaq::parallel::thread in C++ or cudaq.parallel.thread in Python, as shown in the above example) is sufficient. On the other hand, if one wants to distribute the tasks across GPUs on multiple nodes, e.g., on a compute cluster, MPI distribution mode should be used.

An example of MPI distribution mode usage in both C++ and Python is given below:

  cudaq::mpi::initialize();
  using namespace cudaq::spin;
  cudaq::spin_op h = 5.907 - 2.1433 * x(0) * x(1) - 2.1433 * y(0) * y(1) +
                     .21829 * z(0) - 6.125 * z(1);

  auto ansatz = [](double theta) __qpu__ {
    cudaq::qubit q, r;
    x(q);
    ry(theta, r);
    x<cudaq::ctrl>(r, q);
  };

  double result = cudaq::observe<cudaq::parallel::mpi>(ansatz, h, 0.59);
  if (cudaq::mpi::rank() == 0)
    printf("Expectation value: %lf\n", result);
  cudaq::mpi::finalize();

The C++ example is compiled and launched with:

nvq++ file.cpp --target nvidia-mqpu
mpiexec -np <N> a.out

The equivalent Python example:

import cudaq
from cudaq import spin

cudaq.mpi.initialize()
cudaq.set_target("nvidia-mqpu")

# Define spin ansatz.
kernel, theta = cudaq.make_kernel(float)
qvec = kernel.qalloc(2)
kernel.x(qvec[0])
kernel.ry(theta, qvec[1])
kernel.cx(qvec[1], qvec[0])
# Define spin Hamiltonian.
hamiltonian = (5.907 - 2.1433 * spin.x(0) * spin.x(1) -
               2.1433 * spin.y(0) * spin.y(1) + .21829 * spin.z(0) -
               6.125 * spin.z(1))

exp_val = cudaq.observe(kernel, hamiltonian, 0.59,
                        execution=cudaq.parallel.mpi).expectation()
if cudaq.mpi.rank() == 0:
    print("Expectation value: ", exp_val)

cudaq.mpi.finalize()

The Python script is then launched with:

mpiexec -np <N> python3 file.py

In the above example, the parallel distribution mode was set to mpi using cudaq::parallel::mpi in C++ or cudaq.parallel.mpi in Python. CUDA Quantum provides MPI utility functions to initialize, finalize, or query (rank, size, etc.) the MPI runtime. Last but not least, the compiled executable (C++) or Python script needs to be launched with an appropriate MPI command, e.g., mpiexec, mpirun, srun, etc.
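For reference, a minimal sketch of these MPI utilities on the Python side (assuming the script is launched under an MPI launcher such as mpiexec):

import cudaq

cudaq.mpi.initialize()
# Query this process's rank and the total number of ranks.
print(f"Rank {cudaq.mpi.rank()} of {cudaq.mpi.num_ranks()}")
cudaq.mpi.finalize()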

Remote MQPU Platform

As shown in the above examples, the nvidia-mqpu platform enables multi-QPU distribution whereby each QPU is simulated by a single NVIDIA GPU. To run multi-QPU workloads on different simulator backends, one can use the remote-mqpu platform, which encapsulates simulated QPUs as independent HTTP REST server instances. The following code illustrates how to launch asynchronous sampling tasks on multiple virtual QPUs, each simulated by a tensornet simulator backend.

  auto [kernelToBeSampled, runtimeParam] = cudaq::make_kernel<int>();
  auto q = kernelToBeSampled.qalloc(runtimeParam);
  kernelToBeSampled.h(q);
  kernelToBeSampled.mz(q);

  // Get the quantum_platform singleton
  auto &platform = cudaq::get_platform();

  // Query the number of QPUs in the system
  // The number of QPUs is equal to the number of auto-launch server instances.
  auto num_qpus = platform.num_qpus();
  printf("Number of QPUs: %zu\n", num_qpus);
  // We will launch asynchronous sampling tasks
  // and will store the results immediately as a future
  // we can query at some later point.
  // Each QPU (indexed by a unique Id) is associated with a remote REST server.
  std::vector<cudaq::async_sample_result> countFutures;
  for (std::size_t i = 0; i < num_qpus; i++) {
    countFutures.emplace_back(cudaq::sample_async(
        /*qpuId=*/i, kernelToBeSampled, /*runtimeParam=*/5));
  }

  //
  // Go do other work, asynchronous execution of sample tasks on-going
  //

  // Get the results, note future::get() will kick off a wait
  // if the results are not yet available.
  for (auto &counts : countFutures) {
    counts.get().dump();
  }

The code above is saved in sample_async.cpp and compiled with the following command, targeting the remote-mqpu platform:

nvq++ sample_async.cpp -o sample_async.x --target remote-mqpu --remote-mqpu-backend tensornet --remote-mqpu-auto-launch 2
./sample_async.x

The same example in Python:

import cudaq

# Number of remote QPUs to launch
num_qpus = 2
cudaq.set_target("remote-mqpu",
                 remote_execution=True,
                 backend="tensornet",
                 auto_launch=str(num_qpus))
target = cudaq.get_target()
print("Number of QPUs:", target.num_qpus())

kernel, runtime_param = cudaq.make_kernel(int)
qubits = kernel.qalloc(runtime_param)
# Place qubits in superposition state.
kernel.h(qubits)
# Measure.
kernel.mz(qubits)

count_futures = []
for qpu in range(num_qpus):
    count_futures.append(cudaq.sample_async(kernel, 4 + qpu, qpu_id=qpu))

for counts in count_futures:
    print(counts.get())

In the above code snippets, the remote-mqpu platform was used in auto-launch mode, whereby a specified number of server instances, i.e., virtual QPUs, are launched on the local machine in the background. The remote QPU daemon service (cudaq-qpud) instances are also shut down automatically at the end of the session.

Note

By default, auto-launched daemon services do not support MPI parallelism. Hence, using the nvidia-mgpu backend to simulate each virtual QPU requires manually launching each server instance. How to do that is explained in the rest of this section.

To customize how many and which GPUs are used for simulating each virtual QPU, one can launch each server manually. For instance, on a machine with 8 NVIDIA GPUs, one may wish to partition those GPUs into 4 virtual QPU instances, each managing 2 GPUs. To do so, first launch a cudaq-qpud server for each virtual QPU:

CUDA_VISIBLE_DEVICES=0,1 mpiexec -np 2 cudaq-qpud --port <QPU 1 TCP/IP port number>
CUDA_VISIBLE_DEVICES=2,3 mpiexec -np 2 cudaq-qpud --port <QPU 2 TCP/IP port number>
CUDA_VISIBLE_DEVICES=4,5 mpiexec -np 2 cudaq-qpud --port <QPU 3 TCP/IP port number>
CUDA_VISIBLE_DEVICES=6,7 mpiexec -np 2 cudaq-qpud --port <QPU 4 TCP/IP port number>

In the above code snippet, four nvidia-mgpu daemons are started in an MPI context via the mpiexec launcher. This activates the MPI runtime environment required by the nvidia-mgpu backend. Each QPU daemon is assigned a unique TCP/IP port number via the --port command-line option. The CUDA_VISIBLE_DEVICES environment variable restricts the GPU devices that each QPU daemon sees, so that each daemon targets a specific pair of GPUs.

With these invocations, each virtual QPU is locally addressable at the URL localhost:<port>.

Warning

There is no authentication required to communicate with this server app. Hence, please make sure to either (1) use a non-public TCP/IP port for internal use or (2) use firewalls or other security mechanisms to manage user access.

User code can then target these QPUs for multi-QPU workloads, such as the asynchronous sample or observe calls shown above for the nvidia-mqpu platform.

nvq++ distributed.cpp --target remote-mqpu --remote-mqpu-url localhost:<port1>,localhost:<port2>,localhost:<port3>,localhost:<port4> --remote-mqpu-backend nvidia-mgpu

In Python:

cudaq.set_target("remote-mqpu", url="localhost:<port1>,localhost:<port2>,localhost:<port3>,localhost:<port4>", backend="nvidia-mgpu")

Each URL is treated as an independent QPU, hence the number of QPUs (num_qpus()) is equal to the number of URLs provided. The multi-node multi-GPU simulator backend (nvidia-mgpu) is requested via the --remote-mqpu-backend command-line option.

Note

The requested backend (nvidia-mgpu) will be executed inside the context of the QPU daemon service and thus inherits its GPU resource allocation (two GPUs per backend simulator instance in the above example).

Supported Kernel Arguments

The platform serializes kernel invocations to the QPU daemons via REST APIs. Please refer to the Open API Docs for the latest API information. Runtime arguments are serialized into a flat memory buffer (the args field of the request JSON). For more information about argument type serialization, please see the table below.

When using a remote backend to simulate each virtual QPU, passing complex data structures, such as nested vectors or class objects, or passing other kernels as arguments to the entry-point kernels is currently not supported by default. These type limitations apply only to entry-point kernels, not to arguments passed to other quantum kernels.
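For instance, the following sketch (the kernel and parameter names are illustrative only) defines an entry-point kernel whose arguments, a trivial integer and a flat list of floating-point values, fall within the supported set summarized in the table below:

import cudaq

# Entry-point arguments: a trivial int and a flat list of floats, both of
# which serialize per the kernel argument serialization table below.
kernel, num_qubits, angles = cudaq.make_kernel(int, list)
qubits = kernel.qalloc(num_qubits)
kernel.h(qubits[0])
kernel.ry(angles[0], qubits[1])
kernel.mz(qubits)

# With a remote-mqpu target configured, this kernel could be sampled asynchronously:
# counts = cudaq.sample_async(kernel, 2, [0.59], qpu_id=0).get()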

Support for the full range of argument types within CUDA Quantum can be enabled by compiling the code with the --enable-mlir option. This flag forces quantum kernels to be compiled with the CUDA Quantum MLIR-based compiler. As a result, runtime arguments can be resolved by the CUDA Quantum compiler infrastructure to support a wider range of argument types. However, certain language constructs within quantum kernels may not yet be fully supported.
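For example, the earlier remote-mqpu compile command might be extended with this flag (the flag placement shown here is illustrative):

nvq++ sample_async.cpp -o sample_async.x --target remote-mqpu --remote-mqpu-backend tensornet --remote-mqpu-auto-launch 2 --enable-mlir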

Kernel argument serialization

  • Data type: trivial type (occupies a contiguous memory area)
    Example: int, std::size_t, double, etc.
    Serialization: byte data (via memcpy)

  • Data type: std::vector of a trivial type
    Example: std::vector<int>, std::vector<double>, etc.
    Serialization: total vector size in bytes as a 64-bit integer, followed by the serialized data of all vector elements
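To make the vector layout concrete, here is a small sketch of how a std::vector<double> argument would be laid out. This is an illustration only, not the actual CUDA Quantum serializer; native byte order and tight packing are assumptions:

import struct

values = [0.1, 0.2, 0.3]  # e.g., a std::vector<double> argument
# 64-bit total size in bytes, followed by the raw bytes of all elements.
payload = struct.pack("=q", len(values) * 8) + struct.pack(f"={len(values)}d", *values)
print(len(payload))  # 8 (size prefix) + 24 (three doubles) = 32 bytes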