Noisy Simulators

Trajectory Noisy Simulation

The CUDA-Q GPU simulator backends nvidia, tensornet, and tensornet-mps support noisy quantum circuit simulation using the quantum trajectory method.

When a noise_model is provided to CUDA-Q, the backend target incorporates quantum noise into the quantum circuit simulation according to the specified noise model, as shown in the example below.

import cudaq

# Use the `nvidia` target
# Other targets capable of trajectory simulation are:
# - `tensornet`
# - `tensornet-mps`
cudaq.set_target("nvidia")

# Let's define a simple kernel that we will add noise to.
qubit_count = 2


@cudaq.kernel
def kernel(qubit_count: int):
    qvector = cudaq.qvector(qubit_count)
    x(qvector)
    mz(qvector)


# Add a simple bit-flip noise channel to X gate
error_probability = 0.1
bit_flip = cudaq.BitFlipChannel(error_probability)

# Add noise channels to our noise model.
noise_model = cudaq.NoiseModel()
# Apply the bit-flip channel to any X-gate on any qubits
noise_model.add_all_qubit_channel("x", bit_flip)

# Due to the impact of noise, our measurements will no longer be uniformly
# in the |11> state.
noisy_counts = cudaq.sample(kernel,
                            qubit_count,
                            noise_model=noise_model,
                            shots_count=1000)

# The probability that we get the perfect result (11) should be ~ 0.9 * 0.9 = 0.81
noisy_counts.dump()
python3 program.py
{ 00:15 01:92 10:81 11:812 }
#include <cudaq.h>

struct xOp {
  void operator()(int qubit_count) __qpu__ {
    cudaq::qvector q(qubit_count);
    x(q);
    mz(q);
  }
};

int main() {
  // Add a simple bit-flip noise channel to X gate
  const double error_probability = 0.1;

  cudaq::bit_flip_channel bit_flip(error_probability);
  // Add noise channels to our noise model.
  cudaq::noise_model noise_model;
  // Apply the bitflip channel to any X-gate on any qubits
  noise_model.add_all_qubit_channel<cudaq::types::x>(bit_flip);

  const int qubit_count = 2;
  // Due to the impact of noise, our measurements will no longer be uniformly in
  // the |11> state.
  auto counts =
      cudaq::sample({.shots = 1000, .noise = noise_model}, xOp{}, qubit_count);

  // The probability that we get the perfect result (11) should be ~ 0.9 * 0.9 =
  // 0.81
  counts.dump();
  return 0;
}
# nvidia target
nvq++ --target nvidia program.cpp [...] -o program.x
./program.x
{ 00:15 01:92 10:81 11:812 }
# tensornet target
nvq++ --target tensornet program.cpp [...] -o program.x
./program.x
{ 00:10 01:108 10:73 11:809 }
# tensornet-mps target
nvq++ --target tensornet-mps program.cpp [...] -o program.x
./program.x
{ 00:5 01:86 10:102 11:807 }

In the case of bit-string measurement sampling, as in the above example, each measurement "shot" is executed as a trajectory, in which the Kraus operators specified in the noise model are sampled.
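Concretely, for a noise channel with Kraus operators \(\{K_i\}\), each trajectory applies one operator drawn according to state-dependent probabilities; this is the standard quantum trajectory sampling rule, stated here for context rather than as a CUDA-Q-specific formula:

\[
p_i = \langle \psi | K_i^\dagger K_i | \psi \rangle, \qquad |\psi\rangle \rightarrow \frac{K_i |\psi\rangle}{\sqrt{p_i}} \quad \text{with probability } p_i .
\]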

Unitary Mixture vs. General Noise Channel

Quantum noise channels can be classified into two categories:

  1. Unitary mixture

The noise channel can be defined by a set of unitary matrices together with a list of probabilities associated with those matrices. The depolarizing channel is an example of a unitary mixture, whereby an I (no noise), X, Y, or Z unitary is applied to the quantum state with pre-defined probabilities.

  2. General noise channel

The channel is defined by a set of non-unitary Kraus matrices satisfying the completely positive and trace preserving (CPTP) condition. An example of this type of channel is the amplitude damping noise channel.

In the trajectory simulation method, simulating unitary mixture noise channels is more efficient than simulating general noise channels: the sampling probabilities of a unitary mixture are fixed in advance, whereas trajectory sampling of a general channel requires probability calculations based on the current quantum state.

Note

CUDA-Q noise channel utility automatically detects whether a list of Kraus matrices can be converted to the unitary mixture representation for more efficient simulation.
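As an illustrative sketch of the second category in Python, the snippet below builds a general channel directly from Kraus matrices (the textbook amplitude damping operators, with the decay probability chosen arbitrarily for this example); since one of the matrices is non-unitary, the channel cannot be reduced to a unitary mixture:

import numpy as np
import cudaq

decay_probability = 0.2
# Amplitude damping Kraus operators; kraus_0 is non-unitary, so this channel
# is a general noise channel rather than a unitary mixture.
kraus_0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - decay_probability)]],
                   dtype=np.complex128)
kraus_1 = np.array([[0.0, np.sqrt(decay_probability)], [0.0, 0.0]],
                   dtype=np.complex128)
amplitude_damping = cudaq.KrausChannel([kraus_0, kraus_1])

noise_model = cudaq.NoiseModel()
# Attach the channel to X gates acting on qubit 0.
noise_model.add_channel("x", [0], amplitude_damping)

Whether such a channel can be used with a given trajectory backend depends on the support summarized in the table below.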

Noise Channel Support

Backend          Unitary Mixture    General Channel
nvidia           YES                YES
tensornet        YES                NO
tensornet-mps    YES                YES (number of qubits > 1)

Trajectory Expectation Value Calculation

In the trajectory simulation method, the statistical error of the observable expectation value estimate scales asymptotically as \(1/\sqrt{N_{trajectories}}\), where \(N_{trajectories}\) is the number of trajectories. Hence, the number of trajectories can be chosen according to the required level of accuracy.
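This is the usual Monte Carlo sample-mean scaling rather than anything specific to CUDA-Q: if each trajectory yields an independent estimate of \(\langle O \rangle\) with variance \(\sigma_O^2\), the standard error of the average over \(N_{trajectories}\) trajectories is

\[
\mathrm{StdErr}\left(\widehat{\langle O \rangle}\right) \approx \frac{\sigma_O}{\sqrt{N_{trajectories}}},
\]

so halving the statistical error requires roughly four times as many trajectories.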

import cudaq
from cudaq import spin

# Use the `nvidia` target
# Other targets capable of trajectory simulation are:
# - `tensornet`
# - `tensornet-mps`
cudaq.set_target("nvidia")


@cudaq.kernel
def kernel():
    q = cudaq.qubit()
    x(q)


# Add a simple bit-flip noise channel to X gate
error_probability = 0.1
bit_flip = cudaq.BitFlipChannel(error_probability)

# Add noise channels to our noise model.
noise_model = cudaq.NoiseModel()
# Apply the bit-flip channel to any X-gate on any qubits
noise_model.add_all_qubit_channel("x", bit_flip)

noisy_exp_val = cudaq.observe(kernel,
                              spin.z(0),
                              noise_model=noise_model,
                              num_trajectories=1024).expectation()
# True expectation: 0.1 - 0.9 = -0.8 (|1> has <Z> of -1 and |0> has <Z> of +1)
print("Noisy <Z> with 1024 trajectories =", noisy_exp_val)

# Rerun with a higher number of trajectories
noisy_exp_val = cudaq.observe(kernel,
                              spin.z(0),
                              noise_model=noise_model,
                              num_trajectories=8192).expectation()
print("Noisy <Z> with 8192 trajectories =", noisy_exp_val)
python3 program.py
Noisy <Z> with 1024 trajectories = -0.810546875
Noisy <Z> with 8192 trajectories = -0.800048828125
#include <cudaq.h>

struct xOp {
  void operator()() __qpu__ {
    cudaq::qubit q;
    x(q);
  }
};

int main() {
  // Add a simple bit-flip noise channel to X gate
  const double error_probability = 0.1;

  cudaq::bit_flip_channel bit_flip(error_probability);
  // Add noise channels to our noise model.
  cudaq::noise_model noise_model;
  // Apply the bitflip channel to any X-gate on any qubits
  noise_model.add_all_qubit_channel<cudaq::types::x>(bit_flip);

  double noisy_exp_val =
      cudaq::observe({.noise = noise_model, .num_trajectories = 1024}, xOp{},
                     cudaq::spin::z(0));

  // True expectation: 0.1 - 0.9 = -0.8 (|1> has <Z> of -1 and |0> has <Z> of
  // +1)
  std::cout << "Noisy <Z> with 1024 trajectories = " << noisy_exp_val << "\n";

  // Rerun with a higher number of trajectories
  noisy_exp_val =
      cudaq::observe({.noise = noise_model, .num_trajectories = 8192}, xOp{},
                     cudaq::spin::z(0));
  std::cout << "Noisy <Z> with 8192 trajectories = " << noisy_exp_val << "\n";
  return 0;
}
# nvidia target
nvq++ --target nvidia program.cpp [...] -o program.x
./program.x
Noisy <Z> with 1024 trajectories = -0.810547
Noisy <Z> with 8192 trajectories = -0.800049

# tensornet target
nvq++ --target tensornet program.cpp [...] -o program.x
./program.x
Noisy <Z> with 1024 trajectories = -0.777344
Noisy <Z> with 8192 trajectories = -0.800537

# tensornet-mps target
nvq++ --target tensornet-mps program.cpp [...] -o program.x
./program.x
Noisy <Z> with 1024 trajectories = -0.828125
Noisy <Z> with 8192 trajectories = -0.801758

In the above example, as we increase the number of trajectories, the result of CUDA-Q observe approaches the true value.

Note

With trajectory noisy simulation, the result of CUDA-Q observe is inherently stochastic. For a small number of qubits, the exact noisy expectation value can be computed with the density matrix simulator.
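As a minimal sketch of such a check, the snippet below reuses the single-qubit kernel and bit-flip noise model from the Python example above on the density-matrix-cpu target (described later in this section), assuming the noise_model keyword is accepted there just as in the trajectory examples:

import cudaq
from cudaq import spin

# The density matrix simulator evolves the full density matrix, so the noisy
# expectation value is exact rather than trajectory-sampled.
cudaq.set_target("density-matrix-cpu")


@cudaq.kernel
def kernel():
    q = cudaq.qubit()
    x(q)


noise_model = cudaq.NoiseModel()
noise_model.add_all_qubit_channel("x", cudaq.BitFlipChannel(0.1))

exact_exp_val = cudaq.observe(kernel, spin.z(0),
                              noise_model=noise_model).expectation()
print("Exact noisy <Z> =", exact_exp_val)  # -0.8 up to numerical precision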

Batched Trajectory Simulation

On the nvidia target, when simulating many trajectories with small state vectors, the simulation is batched for optimal performance.

Note

Batched trajectory simulation is only available on the single-GPU execution mode of the nvidia target.

If batched trajectory simulation is not activated, e.g., due to problem size, number of trajectories, or the nature of the circuit (dynamic circuits with mid-circuit measurements and conditional branching), the required number of trajectories will be executed sequentially.

The following environment variable options are applicable to the nvidia target for batched trajectory noisy simulation. These environment variables must be set prior to setting the target or running "import cudaq".

Additional environment variable options for trajectory simulation

CUDAQ_BATCH_SIZE (positive integer or NONE): The number of state vectors in the batched mode. If NONE, the batch size will be calculated based on the available device memory. Default is NONE.

CUDAQ_BATCHED_SIM_MAX_BRANCHES (positive integer): The number of trajectory branches to be tracked simultaneously during gate fusion. Default is 16.

CUDAQ_BATCHED_SIM_MAX_QUBITS (positive integer): The maximum number of qubits for batching. If the circuit contains more qubits than this value, batched trajectory simulation will be disabled. Default is 20.

CUDAQ_BATCHED_SIM_MIN_BATCH_SIZE (positive integer): The minimum number of trajectories for batching. If the number of trajectories is less than this value, batched trajectory simulation will be disabled. Default is 4.

Note

The default batched trajectory simulation parameters have been chosen for optimal performance.

In the example below, we demonstrate the use of these parameters to control trajectory batching.

import time
import cudaq
# Use the `nvidia` target
cudaq.set_target("nvidia")

# Let's define a simple kernel that we will add noise to.
qubit_count = 10


@cudaq.kernel
def kernel(qubit_count: int):
    qvector = cudaq.qvector(qubit_count)
    x(qvector)
    mz(qvector)


# Add a simple bit-flip noise channel to X gate
error_probability = 0.01
bit_flip = cudaq.BitFlipChannel(error_probability)

# Add noise channels to our noise model.
noise_model = cudaq.NoiseModel()
# Apply the bit-flip channel to any X-gate on any qubits
noise_model.add_all_qubit_channel("x", bit_flip)

ideal_counts = cudaq.sample(kernel, qubit_count, shots_count=1000)

start = time.time()
# Due to the impact of noise, our measurements will no longer be uniformly
# in the |1...1> state.
noisy_counts = cudaq.sample(kernel,
                            qubit_count,
                            noise_model=noise_model,
                            shots_count=1000)
end = time.time()
noisy_counts.dump()
print(f"Simulation elapsed time: {(end - start) * 1000} ms")
# Default batching parameter
python3 program.py
Simulation elapsed time: 45.75657844543457 ms

# Disable batching by setting batch size to 1
CUDAQ_BATCH_SIZE=1 python3 program.py
Simulation elapsed time: 716.090202331543 ms
#include <chrono>
#include <cudaq.h>
#include <iostream>

struct xOp {
  void operator()(int qubit_count) __qpu__ {
    cudaq::qvector q(qubit_count);
    x(q);
    mz(q);
  }
};

int main() {
  // Add a simple bit-flip noise channel to X gate
  const double error_probability = 0.01;

  cudaq::bit_flip_channel bit_flip(error_probability);
  // Add noise channels to our noise model.
  cudaq::noise_model noise_model;
  // Apply the bitflip channel to any X-gate on any qubits
  noise_model.add_all_qubit_channel<cudaq::types::x>(bit_flip);

  const int qubit_count = 10;
  const auto start_time = std::chrono::high_resolution_clock::now();
  // Due to the impact of noise, our measurements will no longer be uniformly in
  // the |1...1> state.
  auto counts =
      cudaq::sample({.shots = 1000, .noise = noise_model}, xOp{}, qubit_count);
  const auto end_time = std::chrono::high_resolution_clock::now();
  counts.dump();
  const std::chrono::duration<double, std::milli> elapsed_time =
      end_time - start_time;
  std::cout << "Simulation elapsed time: " << elapsed_time.count() << "ms\n";
  return 0;
}
nvq++ --target nvidia program.cpp [...] -o program.x
# Default batching parameter
./program.x
Simulation elapsed time: 45.47ms
# Disable batching by setting batch size to 1
CUDAQ_BATCH_SIZE=1 ./program.x
Simulation elapsed time: 558.66ms

Note

The CUDAQ_LOG_LEVEL environment variable can be used to view detailed logs of batched trajectory simulation, e.g., the batch size.
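For example, running the program with the log level raised (assuming the info level is sufficient to surface the trajectory batching details in your build) prints the chosen batch size among other simulator diagnostics:

CUDAQ_LOG_LEVEL=info ./program.x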

Density Matrix

Density matrix simulation is helpful for understanding the impact of noise on quantum applications. Unlike state vector simulation, which manipulates the \(2^n\)-dimensional state vector, density matrix simulation manipulates the \(2^n \times 2^n\) density matrix, which describes an ensemble of states. To learn how you can leverage the density-matrix-cpu backend to study the impact of noise models on your applications, see the example here.
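In this representation, a noise channel with Kraus operators \(\{K_i\}\) acts on the density matrix deterministically via the standard Kraus form of a CPTP map, so no trajectory sampling is involved:

\[
\rho \rightarrow \sum_i K_i \rho K_i^\dagger .
\]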

The Quantum Volume notebook also demonstrates a full application that leverages the density-matrix-cpu backend.

To execute a program on the density-matrix-cpu target, use the following commands:

python3 program.py [...] --target density-matrix-cpu

The target can also be defined in the application code by calling

cudaq.set_target('density-matrix-cpu')

If a target is set in the application code, this target will override the --target command line flag given during program invocation.

nvq++ --target density-matrix-cpu program.cpp [...] -o program.x
./program.x

Stim

This backend provides a fast simulator for circuits containing only Clifford gates. Any non-Clifford gates (such as T gates and Toffoli gates) are not supported. This simulator is based on the Stim library.

To execute a program on the stim target, use the following commands:

python3 program.py [...] --target stim

The target can also be defined in the application code by calling

cudaq.set_target('stim')

If a target is set in the application code, this target will override the --target command line flag given during program invocation.

nvq++ --target stim program.cpp [...] -o program.x
./program.x

Note

By default CUDA-Q executes kernels using a "shot-by-shot" execution approach. This allows for conditional gate execution (i.e., full control flow), but it can be slower than executing Stim a single time and generating all the shots from that single execution. Set the explicit_measurements flag in the sample API for more efficient execution.
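A minimal sketch on the stim target, assuming a CUDA-Q version whose sample API accepts the explicit_measurements keyword; the Bell-pair kernel below uses only Clifford gates:

import cudaq

cudaq.set_target("stim")


@cudaq.kernel
def bell():
    qubits = cudaq.qvector(2)
    h(qubits[0])
    x.ctrl(qubits[0], qubits[1])
    mz(qubits)


# Generate all shots from a single Stim execution rather than shot-by-shot.
counts = cudaq.sample(bell, shots_count=10000, explicit_measurements=True)
counts.dump()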