Computing Expectation Values
CUDA-Q provides generic library functions enabling one to compute expectation values of quantum spin operators with respect to a parameterized CUDA-Q kernel. Let’s take a look at an example of this:
// Compile and run with:
// ```
// nvq++ expectation_values.cpp -o exp_vals.x && ./exp_vals.x
// ```
#include <cudaq.h>
#include <cudaq/algorithm.h>
// The example here shows a simple use case for the `cudaq::observe`
// function in computing expected values of provided spin_ops.
struct ansatz {
  auto operator()(double theta) __qpu__ {
    cudaq::qvector q(2);
    x(q[0]);
    ry(theta, q[1]);
    x<cudaq::ctrl>(q[1], q[0]);
  }
};

int main() {
  // Build up your spin op algebraically
  using namespace cudaq::spin;
  cudaq::spin_op h = 5.907 - 2.1433 * x(0) * x(1) - 2.1433 * y(0) * y(1) +
                     .21829 * z(0) - 6.125 * z(1);

  // Observe takes the kernel, the spin_op, and the concrete
  // parameters for the kernel
  double energy = cudaq::observe(ansatz{}, h, .59);
  printf("Energy is %lf\n", energy);
  return 0;
}
Here we define a parameterized CUDA-Q kernel, a callable type named ansatz that takes as input a single angle theta. This angle becomes the argument of a single ry rotation. In host code, we define a Hamiltonian operator via the CUDA-Q spin_op type.
CUDA-Q provides a generic function cudaq::observe. This function takes three arguments: the parameterized kernel, the spin_op whose expectation value we wish to compute, and the runtime parameters at which to evaluate the kernel. It returns a cudaq::observe_result, which contains all the data from the execution but is trivially convertible to a double, yielding the expectation value we are interested in.
To compile and execute this code, we run the following:
nvq++ expectation_values.cpp -o exp_vals.x
./exp_vals.x
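As a sanity check independent of CUDA-Q, the expectation value above can be reproduced with a small state-vector calculation in plain Python. The sketch below is not CUDA-Q API: the Pauli matrices, Kronecker product, and the closed-form ansatz state (derived by applying the kernel's three gates to |00>) are assumptions written out by hand.

```python
import math

# 2x2 single-qubit matrices as nested lists of numbers
I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

def kron(a, b):
    """Kronecker product; row index is (i, k) -> i*len(b)+k, likewise columns."""
    return [[a[i][j] * b[k][l] for j in range(len(a[0])) for l in range(len(b[0]))]
            for i in range(len(a)) for k in range(len(b))]

def expval(m, v):
    """Real part of <v|M|v> for a matrix m and state vector v."""
    w = [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]
    return sum(v[i].conjugate() * w[i] for i in range(len(v))).real

theta = 0.59
# State prepared by the ansatz kernel: x(q0); ry(theta, q1); cx(q1, q0).
# With basis ordering |q0 q1>, this is cos(theta/2)|10> + sin(theta/2)|01>.
c, s = math.cos(theta / 2), math.sin(theta / 2)
psi = [0, s, c, 0]  # amplitudes for |00>, |01>, |10>, |11>

# Hamiltonian: 5.907 - 2.1433 X0X1 - 2.1433 Y0Y1 + 0.21829 Z0 - 6.125 Z1
energy = (5.907
          - 2.1433 * expval(kron(X, X), psi)
          - 2.1433 * expval(kron(Y, Y), psi)
          + 0.21829 * expval(kron(Z, I), psi)
          - 6.125 * expval(kron(I, Z), psi))
print(energy)  # approximately -1.7488
```

The printed value agrees with what cudaq::observe reports for the same kernel, Hamiltonian, and angle.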
Parallelizing across Multiple Processors
For details on the available multi-QPU platforms, see the multi-processor platforms page.
One typical use case of multi-processor platforms is to distribute the expectation value computations of a multi-term Hamiltonian across multiple virtual QPUs.
The following shows an example using the nvidia-mqpu
platform:
import cudaq
from cudaq import spin
cudaq.set_target("nvidia", option="mqpu")
target = cudaq.get_target()
num_qpus = target.num_qpus()
print("Number of QPUs:", num_qpus)
# Define spin ansatz.
@cudaq.kernel
def kernel(angle: float):
    qvector = cudaq.qvector(2)
    x(qvector[0])
    ry(angle, qvector[1])
    x.ctrl(qvector[1], qvector[0])
# Define spin Hamiltonian.
hamiltonian = (5.907 - 2.1433 * spin.x(0) * spin.x(1) -
               2.1433 * spin.y(0) * spin.y(1) +
               .21829 * spin.z(0) - 6.125 * spin.z(1))
exp_val = cudaq.observe(kernel,
hamiltonian,
0.59,
execution=cudaq.parallel.thread).expectation()
print("Expectation value: ", exp_val)
using namespace cudaq::spin;
cudaq::spin_op h = 5.907 - 2.1433 * x(0) * x(1) - 2.1433 * y(0) * y(1) +
                   .21829 * z(0) - 6.125 * z(1);
// Get the quantum_platform singleton
auto &platform = cudaq::get_platform();
// Query the number of QPUs in the system
auto num_qpus = platform.num_qpus();
printf("Number of QPUs: %zu\n", num_qpus);
auto ansatz = [](double theta) __qpu__ {
  cudaq::qubit q, r;
  x(q);
  ry(theta, r);
  x<cudaq::ctrl>(r, q);
};
double result = cudaq::observe<cudaq::parallel::thread>(ansatz, h, 0.59);
printf("Expectation value: %lf\n", result);
One can then target the nvidia-mqpu platform by executing the following commands:
nvq++ observe_mqpu.cpp -target nvidia-mqpu
./a.out
In the above code snippets, the Hamiltonian contains four non-identity terms, so four quantum circuits must be executed to compute its expectation value with respect to the quantum state prepared by the ansatz kernel. When the nvidia-mqpu platform is selected, these circuits are distributed across all available QPUs, and the final expectation value is computed from the results of all QPU executions.
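The combination step can be sketched in plain Python. The snippet below is only an illustration of the distribution idea, not the CUDA-Q implementation: each (coefficient, term-expectation) pair stands in for one circuit that the platform would dispatch to a separate virtual QPU, and the per-term expectation values used (⟨X0X1⟩ = ⟨Y0Y1⟩ = sin θ, ⟨Z0⟩ = -cos θ, ⟨Z1⟩ = cos θ) are derived analytically for the specific state this ansatz prepares, cos(θ/2)|10> + sin(θ/2)|01>.

```python
import math
from concurrent.futures import ThreadPoolExecutor

theta = 0.59

# One entry per non-identity Hamiltonian term: (coefficient, expectation
# as a function of theta). Each entry models one circuit execution.
terms = [
    (-2.1433, lambda t: math.sin(t)),   # <X0 X1>
    (-2.1433, lambda t: math.sin(t)),   # <Y0 Y1>
    (0.21829, lambda t: -math.cos(t)),  # <Z0>
    (-6.125,  lambda t: math.cos(t)),   # <Z1>
]

# Evaluate the four weighted term expectations concurrently, mimicking
# the distribution of circuits across virtual QPUs, then combine.
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(lambda ct: ct[0] * ct[1](theta), terms))

# The identity term's coefficient is added directly, with no circuit needed.
energy = 5.907 + sum(partials)
print(energy)  # approximately -1.7488
```

The result matches the serial computation, since the expectation value of a sum of terms is the sum of the per-term expectation values regardless of where each one is evaluated.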