Noisy Simulators

Trajectory Noisy Simulation

The nvidia target supports noisy quantum circuit simulations using the quantum trajectory method across all of its configurations: single-GPU, multi-node multi-GPU, and with host memory. When simulating many trajectories with small state vectors, the simulation is batched for optimal performance.

When a noise_model is provided to CUDA-Q, the nvidia target will incorporate quantum noise into the quantum circuit simulation according to the noise model specified.

import cudaq

# Use the `nvidia` target
cudaq.set_target("nvidia")

# Let's define a simple kernel that we will add noise to.
qubit_count = 2


@cudaq.kernel
def kernel(qubit_count: int):
    qvector = cudaq.qvector(qubit_count)
    x(qvector)
    mz(qvector)


# Add a simple bit-flip noise channel to X gate
error_probability = 0.1
bit_flip = cudaq.BitFlipChannel(error_probability)

# Add noise channels to our noise model.
noise_model = cudaq.NoiseModel()
# Apply the bit-flip channel to any X-gate on any qubits
noise_model.add_all_qubit_channel("x", bit_flip)

# Due to the impact of noise, our measurements will no longer be exclusively
# in the |11> state.
noisy_counts = cudaq.sample(kernel,
                            qubit_count,
                            noise_model=noise_model,
                            shots_count=1000)

# The probability that we get the perfect result (11) should be ~ 0.9 * 0.9 = 0.81
noisy_counts.dump()
python3 program.py
{ 00:15 01:92 10:81 11:812 }
#include <cudaq.h>

struct xOp {
  void operator()(int qubit_count) __qpu__ {
    cudaq::qvector q(qubit_count);
    x(q);
    mz(q);
  }
};

int main() {
  // Add a simple bit-flip noise channel to X gate
  const double error_probability = 0.1;

  cudaq::bit_flip_channel bit_flip(error_probability);
  // Add noise channels to our noise model.
  cudaq::noise_model noise_model;
  // Apply the bit-flip channel to any X-gate on any qubits
  noise_model.add_all_qubit_channel<cudaq::types::x>(bit_flip);

  const int qubit_count = 2;
  // Due to the impact of noise, our measurements will no longer be exclusively
  // in the |11> state.
  auto counts =
      cudaq::sample({.shots = 1000, .noise = noise_model}, xOp{}, qubit_count);

  // The probability that we get the perfect result (11) should be ~ 0.9 * 0.9 =
  // 0.81
  counts.dump();
  return 0;
}
nvq++ --target nvidia program.cpp [...] -o program.x
./program.x
{ 00:15 01:92 10:81 11:812 }

In the case of bit-string measurement sampling as in the above example, each measurement ‘shot’ is executed as a trajectory, whereby Kraus operators specified in the noise model are sampled.
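
The same bit-flip channel can also be written directly in terms of its Kraus operators. The sketch below is illustrative and assumes the KrausChannel constructor that accepts a list of NumPy matrices; it is equivalent to the BitFlipChannel(0.1) used above:

import numpy as np
import cudaq

p = 0.1
# Kraus operators of a bit-flip channel: K0 = sqrt(1-p)*I, K1 = sqrt(p)*X
k0 = np.sqrt(1 - p) * np.array([[1, 0], [0, 1]], dtype=np.complex128)
k1 = np.sqrt(p) * np.array([[0, 1], [1, 0]], dtype=np.complex128)

noise_model = cudaq.NoiseModel()
# Attach the channel to every X gate, as in the example above.
noise_model.add_all_qubit_channel("x", cudaq.KrausChannel([k0, k1]))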

For observable expectation value estimation, the statistical error scales asymptotically as \(1/\sqrt{N_{trajectories}}\), where \(N_{trajectories}\) is the number of trajectories. Hence, depending on the required level of accuracy, the number of trajectories can be specified accordingly.
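
For example, because the error scales as \(1/\sqrt{N_{trajectories}}\), halving the statistical error requires roughly four times as many trajectories; increasing the trajectory count from 1024 to 8192, as in the example below, reduces the expected error by a factor of \(\sqrt{8} \approx 2.8\).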

import cudaq
from cudaq import spin

# Use the `nvidia` target
cudaq.set_target("nvidia")


@cudaq.kernel
def kernel():
    q = cudaq.qubit()
    x(q)


# Add a simple bit-flip noise channel to X gate
error_probability = 0.1
bit_flip = cudaq.BitFlipChannel(error_probability)

# Add noise channels to our noise model.
noise_model = cudaq.NoiseModel()
# Apply the bit-flip channel to any X-gate on any qubits
noise_model.add_all_qubit_channel("x", bit_flip)

noisy_exp_val = cudaq.observe(kernel,
                              spin.z(0),
                              noise_model=noise_model,
                              num_trajectories=1024).expectation()
# True expectation: 0.1 - 0.9 = -0.8 (|1> has <Z> of -1 and |0> has <Z> of +1)
print("Noisy <Z> with 1024 trajectories =", noisy_exp_val)

# Rerun with a higher number of trajectories
noisy_exp_val = cudaq.observe(kernel,
                              spin.z(0),
                              noise_model=noise_model,
                              num_trajectories=8192).expectation()
print("Noisy <Z> with 8192 trajectories =", noisy_exp_val)
python3 program.py
Noisy <Z> with 1024 trajectories = -0.810546875
Noisy <Z> with 8192 trajectories = -0.800048828125
#include <cudaq.h>

struct xOp {
  void operator()() __qpu__ {
    cudaq::qubit q;
    x(q);
  }
};

int main() {
  // Add a simple bit-flip noise channel to X gate
  const double error_probability = 0.1;

  cudaq::bit_flip_channel bit_flip(error_probability);
  // Add noise channels to our noise model.
  cudaq::noise_model noise_model;
  // Apply the bit-flip channel to any X-gate on any qubits
  noise_model.add_all_qubit_channel<cudaq::types::x>(bit_flip);

  double noisy_exp_val =
      cudaq::observe({.noise = noise_model, .num_trajectories = 1024}, xOp{},
                     cudaq::spin::z(0));

  // True expectation: 0.1 - 0.9 = -0.8 (|1> has <Z> of -1 and |0> has <Z> of
  // +1)
  std::cout << "Noisy <Z> with 1024 trajectories = " << noisy_exp_val << "\n";

  // Rerun with a higher number of trajectories
  noisy_exp_val =
      cudaq::observe({.noise = noise_model, .num_trajectories = 8192}, xOp{},
                     cudaq::spin::z(0));
  std::cout << "Noisy <Z> with 8192 trajectories = " << noisy_exp_val << "\n";
  return 0;
}
nvq++ --target nvidia program.cpp [...] -o program.x
./program.x
Noisy <Z> with 1024 trajectories = -0.810547
Noisy <Z> with 8192 trajectories = -0.800049

The following environment variable options are applicable to the nvidia target for trajectory noisy simulation. Any environment variables must be set prior to setting the target.

Additional environment variable options for trajectory simulation:

- CUDAQ_OBSERVE_NUM_TRAJECTORIES (positive integer): The default number of trajectories for observe simulation if none was provided in the observe call. The default value is 1000.
- CUDAQ_BATCH_SIZE (positive integer or NONE): The number of state vectors in the batched mode. If NONE, the batch size will be calculated based on the available device memory. Default is NONE.
- CUDAQ_BATCHED_SIM_MAX_BRANCHES (positive integer): The number of trajectory branches to be tracked simultaneously in the gate fusion. Default is 16.
- CUDAQ_BATCHED_SIM_MAX_QUBITS (positive integer): The maximum number of qubits for batching. If the qubit count in the circuit is more than this value, batched trajectory simulation will be disabled. The default value is 20.
- CUDAQ_BATCHED_SIM_MIN_BATCH_SIZE (positive integer): The minimum number of trajectories for batching. If the number of trajectories is less than this value, batched trajectory simulation will be disabled. The default value is 4.
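
As noted above, these variables must be set before the target is set. A minimal sketch (the particular values shown are arbitrary):

import os
import cudaq

# Environment variables must be set prior to setting the target.
os.environ["CUDAQ_OBSERVE_NUM_TRAJECTORIES"] = "2000"
os.environ["CUDAQ_BATCHED_SIM_MAX_QUBITS"] = "24"

cudaq.set_target("nvidia")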

Note

Batched trajectory simulation is only available on the single-GPU execution mode of the nvidia target.

If batched trajectory simulation is not activated, e.g., due to problem size, number of trajectories, or the nature of the circuit (dynamic circuits with mid-circuit measurements and conditional branching), the required number of trajectories will be executed sequentially.
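
For reference, a dynamic kernel such as the following sketch, with a mid-circuit measurement feeding a conditional gate, falls into this category and is simulated one trajectory at a time:

import cudaq


@cudaq.kernel
def dynamic_kernel():
    q = cudaq.qvector(2)
    h(q[0])
    # Mid-circuit measurement with conditional branching disables batching.
    if mz(q[0]):
        x(q[1])
    mz(q[1])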

Density Matrix

Density matrix simulation is helpful for understanding the impact of noise on quantum applications. Unlike state vector simulation, which manipulates the \(2^n\)-dimensional state vector, density matrix simulation manipulates the \(2^n \times 2^n\) density matrix, which describes an ensemble of states. To learn how you can leverage the density-matrix-cpu backend to study the impact of noise models on your applications, see the example here.

The Quantum Volume notebook also demonstrates a full application that leverages the density-matrix-cpu backend.

To execute a program on the density-matrix-cpu target, use the following commands:

python3 program.py [...] --target density-matrix-cpu

The target can also be defined in the application code by calling

cudaq.set_target('density-matrix-cpu')

If a target is set in the application code, this target will override the --target command line flag given during program invocation.

nvq++ --target density-matrix-cpu program.cpp [...] -o program.x
./program.x
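
For illustration, the bit-flip example from earlier can be run unchanged on this backend. The sketch below simply swaps the target; the noise channels are applied to the density matrix rather than sampled over trajectories:

import cudaq

cudaq.set_target("density-matrix-cpu")


@cudaq.kernel
def kernel(qubit_count: int):
    qvector = cudaq.qvector(qubit_count)
    x(qvector)
    mz(qvector)


# Same bit-flip noise model as in the trajectory example above.
noise_model = cudaq.NoiseModel()
noise_model.add_all_qubit_channel("x", cudaq.BitFlipChannel(0.1))

counts = cudaq.sample(kernel, 2, noise_model=noise_model, shots_count=1000)
counts.dump()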

Stim

This backend provides a fast simulator for circuits containing only Clifford gates. Non-Clifford gates (such as T gates and Toffoli gates) are not supported. This simulator is based on the Stim library.

To execute a program on the stim target, use the following commands:

python3 program.py [...] --target stim

The target can also be defined in the application code by calling

cudaq.set_target('stim')

If a target is set in the application code, this target will override the --target command line flag given during program invocation.

nvq++ --target stim program.cpp [...] -o program.x
./program.x

Note

By default, CUDA-Q executes kernels using a "shot-by-shot" execution approach. This allows for conditional gate execution (i.e., full control flow), but it can be slower than executing Stim a single time and generating all the shots from that single execution. Set the explicit_measurements flag with the sample API for efficient execution.
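
For example, a GHZ-state kernel built only from H and CNOT (both Clifford gates) can be sampled on the stim target. The explicit_measurements keyword below is assumed to be the sample API form of the flag mentioned in the note:

import cudaq

cudaq.set_target("stim")


@cudaq.kernel
def ghz(qubit_count: int):
    q = cudaq.qvector(qubit_count)
    h(q[0])
    for i in range(1, qubit_count):
        x.ctrl(q[i - 1], q[i])
    mz(q)


# All gates are Clifford, so this kernel is supported by the Stim backend.
# `explicit_measurements=True` lets Stim produce all shots in one execution.
counts = cudaq.sample(ghz, 25, shots_count=1000, explicit_measurements=True)
counts.dump()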