How-to customize simulations using model transformations and evaluation modes#
Qiskit Dynamics provides various options for configuring simulations that can impact solver performance. These options include choosing between dense and sparse array representations, different differential equation solvers, and model transformations that modify the definition of the problem. Depending on the specifics of the problem, different configurations can yield better performance.
Here we walk through some of these options, covering:
How-to set up a simulation in a rotating frame, and its potential benefits
How-to perform a rotating wave approximation, and its potential benefits
How-to use a sparse evaluation mode, and how-to appropriately set a rotating frame to preserve sparsity
Throughout this guide we work at the level of the Solver interface and consider Hamiltonian dynamics for simplicity; however, all of the considerations have analogs for Lindblad dynamics.
1. How-to set up a simulation in a rotating frame, and its potential benefits#
Here we show how to perform a simulation in a rotating frame by setting the optional rotating_frame argument when instantiating a Solver, and demonstrate how a well-chosen frame operator \(F = -iH_0\) can reduce solving time. See the Rotating frames section of the Models API documentation for details on rotating frames.
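As a brief recap of that section: entering the rotating frame of a frame operator \(F\) transforms the state and generator of the differential equation \(\dot{y}(t) = G(t)y(t)\) as

\[y'(t) = e^{-tF} y(t), \qquad G'(t) = e^{-tF} G(t) e^{tF} - F.\]

With \(G(t) = -iH(t)\) and \(F = -iH_0\), subtracting \(F\) removes the fast rotation generated by the static Hamiltonian, leaving a more slowly varying problem for the solver.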
We will simulate the unitary generated by a transmon model with Hamiltonian:

\[H(t) = 2 \pi \nu N + \pi \alpha N(N - I) + 2 \pi r s(t)(a + a^\dagger),\]

where:
\(N\), \(a\), and \(a^\dagger\) are, respectively, the number, annihilation, and creation operators.
\(\nu\) is the qubit frequency, \(\alpha\) is the anharmonicity, and \(r\) is the drive strength.
\(s(t)\) is the drive signal, which we will take to be on resonance with constant envelope \(1\).
First, construct the components of the model:
import numpy as np
from qiskit.quantum_info import Operator
from qiskit_dynamics import Solver, Signal
dim = 5
v = 5.
anharm = -0.33
r = 0.1
a = np.diag(np.sqrt(np.arange(1, dim)), 1)
adag = np.diag(np.sqrt(np.arange(1, dim)), -1)
N = np.diag(np.arange(dim))
# static part
static_hamiltonian = 2 * np.pi * v * N + np.pi * anharm * N * (N - np.eye(dim))
# drive term
drive_hamiltonian = 2 * np.pi * r * (a + adag)
# drive signal
drive_signal = Signal(1., carrier_freq=v)
# total simulation time
T = 1. / r
Construct a Solver for the model as stated, without entering a rotating frame, and solve, timing the solver.
solver = Solver(
    static_hamiltonian=static_hamiltonian,
    hamiltonian_operators=[drive_hamiltonian],
)
y0 = np.eye(dim, dtype=complex)
%time results = solver.solve(t_span=[0., T], y0=y0, signals=[drive_signal], atol=1e-10, rtol=1e-10)
CPU times: user 6.55 s, sys: 0 ns, total: 6.55 s
Wall time: 6.55 s
Next, define a Solver in the rotating frame of the static Hamiltonian by setting the rotating_frame kwarg, and solve, again timing the solver.
rf_solver = Solver(
    static_hamiltonian=static_hamiltonian,
    hamiltonian_operators=[drive_hamiltonian],
    rotating_frame=static_hamiltonian
)
y0 = np.eye(dim, dtype=complex)
%time rf_results = rf_solver.solve(t_span=[0., T], y0=y0, signals=[drive_signal], atol=1e-10, rtol=1e-10)
CPU times: user 2.65 s, sys: 0 ns, total: 2.65 s
Wall time: 2.65 s
Observe that despite the two simulation problems being mathematically equivalent, it takes less time to solve in the rotating frame.
Next, verify that the results are numerically equivalent. This requires transforming the results to a common frame, which may be done via utility functions in the RotatingFrame instance stored within Solver.model.rotating_frame.
To compare the results, we use the following fidelity function for unitary matrices:

\[f(U, V) = \left|\frac{\text{Tr}(U^\dagger V)}{d}\right|^2,\]

where \(d\) is the dimension. A value of \(1\) indicates equality of the unitaries.
def fidelity(U, V):
    # compute |Tr(U^dagger V) / d|^2 for d x d unitaries U and V
    inner_product = (U.conj() * V).sum()
    return (np.abs(inner_product) / dim) ** 2
U = results.y[-1]
# transform the results of the solver in the rotating frame out of the rotating frame
U_rf = rf_solver.model.rotating_frame.state_out_of_frame(T, rf_results.y[-1])
fidelity(U, U_rf)
0.9999999944421245
The fidelity shows that the two simulations agree to an accuracy consistent with the specified solver tolerances.
The discrepancy in solving times can be understood by examining the number of right-hand side (RHS) evaluations when solving the differential equation in each instance. The number of RHS evaluations for the first simulation (not in the rotating frame) was:
results.nfev
32366
Whereas the number of evaluations for the second simulation in the rotating frame was:
rf_results.nfev
8246
This demonstrates that the speedup from entering the rotating frame comes from reducing the number of RHS calls required to reach a given accuracy: in the lab frame the solution oscillates at frequencies of order \(\nu\) and higher, forcing the solver to take small steps, whereas in the rotating frame the remaining dynamics evolve on the much slower time scale set by the drive strength \(r\).
2. How-to perform a rotating wave approximation, and its potential benefits#
Next we show how to perform a simulation with the rotating wave approximation (RWA) by setting the rwa_cutoff_freq argument at Solver instantiation, and show how it results in further speedups at the expense of solution accuracy. See the API documentation for the rotating_wave_approximation() function for specific details about the RWA.
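As a rough sanity check on the cutoff of \(1.5\nu\) used below (a back-of-the-envelope calculation that neglects the anharmonic term for clarity): rotating the drive term by \(e^{i 2\pi \nu t N}\) and using \(s(t) = \cos(2\pi\nu t)\) gives

\[e^{i 2\pi \nu t N}\, 2\pi r \cos(2\pi \nu t)(a + a^\dagger)\, e^{-i 2\pi \nu t N} = \pi r \left(a + a^\dagger\right) + \pi r \left(a\, e^{-i 2\pi (2\nu) t} + a^\dagger e^{i 2\pi (2\nu) t}\right).\]

An RWA cutoff of \(1.5\nu\) keeps the slowly varying first term and drops the counter-rotating terms oscillating at frequency \(2\nu\).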
Construct a solver for the same problem, now specifying an RWA cutoff frequency and the carrier frequencies relative to which the cutoff should be applied:
rwa_solver = Solver(
    static_hamiltonian=static_hamiltonian,
    hamiltonian_operators=[drive_hamiltonian],
    rotating_frame=static_hamiltonian,
    rwa_cutoff_freq=1.5 * v,
    rwa_carrier_freqs=[v]
)
y0 = np.eye(dim, dtype=complex)
%time rwa_results = rwa_solver.solve(t_span=[0., T], y0=y0, signals=[drive_signal], atol=1e-10, rtol=1e-10)
CPU times: user 511 ms, sys: 0 ns, total: 511 ms
Wall time: 509 ms
We observe a further reduction in time, which is a result of the solver requiring even fewer RHS evaluations with the RWA:
rwa_results.nfev
1274
This speedup comes at the cost of reduced accuracy, since the RWA is a genuine approximation that modifies the model, and hence the solution:
U_rwa = rwa_solver.model.rotating_frame.state_out_of_frame(T, rwa_results.y[-1])
fidelity(U_rwa, U)
0.9986750508422452
3. How-to use a sparse evaluation mode, and how-to appropriately set a rotating frame to preserve sparsity#
Here we show how to perform a simulation using sparse arrays for evaluating the RHS via the evaluation_mode initialization kwarg, with extra emphasis on the following:
Note
As stated in the evaluation modes section of the Models API documentation, when using a sparse evaluation mode it is recommended to use only diagonal rotating frames in order to preserve sparsity. A diagonal frame can be specified by passing a 1d array to the rotating_frame kwarg at Solver instantiation.
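The reason can be seen with a small standalone sketch (plain NumPy/SciPy, using illustrative matrices that are not part of the model above): conjugating an operator by \(e^{tF}\) with a diagonal \(F\) only multiplies its entries by phases, preserving the sparsity pattern, while a non-diagonal \(F\) generically fills in the zeros.

import numpy as np
from scipy.linalg import expm

# A sparse-structured operator: a single off-diagonal band (illustrative only).
A = np.diag(np.ones(3), 1)

# Diagonal vs. non-diagonal anti-Hermitian frame operators, also illustrative.
F_diag = -1j * np.diag([0., 1., 2., 3.])
F_full = -1j * np.ones((4, 4))

t = 0.1
# Rotating-frame conjugation exp(-tF) A exp(tF) in each case.
A_in_diag_frame = expm(-t * F_diag) @ A @ expm(t * F_diag)
A_in_full_frame = expm(-t * F_full) @ A @ expm(t * F_full)

print(np.count_nonzero(np.abs(A_in_diag_frame) > 1e-12))  # 3: same pattern as A
print(np.count_nonzero(np.abs(A_in_full_frame) > 1e-12))  # 16: the zeros fill in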
For this section we use JAX, as it is more performant. See the user guide on using JAX for a more detailed explanation of how to work with JAX in Qiskit Dynamics.
Note
JAX sparse arrays are only recommended for use on CPU.
Start off by configuring Qiskit Dynamics to use JAX.
from qiskit_dynamics.array import Array
# configure jax to use 64 bit mode
import jax
jax.config.update("jax_enable_x64", True)
# tell JAX we are using CPU
jax.config.update('jax_platform_name', 'cpu')
# set default backend
Array.set_default_backend('jax')
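As an optional sanity check (not part of the original setup), one can confirm that 64-bit mode and the CPU platform took effect:

import jax.numpy as jnp

print(jax.devices())        # should list CPU devices only
print(jnp.ones(1).dtype)    # should be float64 with 64-bit mode enabled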
Reconstruct the model pieces at a much larger dimension to observe the benefits of using sparse arrays. Furthermore, take the initial state to be a single statevector rather than the full identity matrix, to further highlight the benefits of the sparse representation.
dim = 300
v = 5.
anharm = -0.33
r = 0.02
a = np.diag(np.sqrt(np.arange(1, dim, dtype=complex)), 1)
adag = np.diag(np.sqrt(np.arange(1, dim, dtype=complex)), -1)
N = np.diag(np.arange(dim, dtype=complex))
static_hamiltonian = 2 * np.pi * v * N + np.pi * anharm * N * (N - np.eye(dim))
drive_hamiltonian = 2 * np.pi * r * (a + adag)
drive_signal = Signal(Array(1.), carrier_freq=v)
y0 = np.zeros(dim, dtype=complex)
y0[1] = 1.
T = 1 / r
Construct standard dense solver in the rotating frame of the static Hamiltonian, define a function to solve the system for a given amplitude, and just-in-time compile it using JAX.
solver = Solver(
    static_hamiltonian=static_hamiltonian,
    hamiltonian_operators=[drive_hamiltonian],
    rotating_frame=static_hamiltonian
)
def dense_func(amp):
    drive_signal = Signal(Array(amp), carrier_freq=v)
    res = solver.solve(
        t_span=[0., T],
        y0=y0,
        signals=[drive_signal],
        method='jax_odeint',
        atol=1e-10,
        rtol=1e-10
    )
    return res.y[-1]
jitted_dense_func = jax.jit(dense_func)
Construct the sparse solver in the rotating frame of the diagonal of the static Hamiltonian, define a function to solve the system for a given amplitude, and just-in-time compile it. In this case the static Hamiltonian is already diagonal, so this frame coincides with the one used above; we nevertheless pass its diagonal as a 1d array to follow the recommendation for preserving sparsity.
sparse_solver = Solver(
    static_hamiltonian=static_hamiltonian,
    hamiltonian_operators=[drive_hamiltonian],
    rotating_frame=np.diag(static_hamiltonian),
    evaluation_mode='sparse'
)
def sparse_func(amp):
    drive_signal = Signal(Array(amp), carrier_freq=v)
    res = sparse_solver.solve(
        t_span=[0., T],
        y0=y0,
        signals=[drive_signal],
        method='jax_odeint',
        atol=1e-10,
        rtol=1e-10
    )
    return res.y[-1]
jitted_sparse_func = jax.jit(sparse_func)
Run the dense simulation (twice to see the true compiled speed).
yf = jitted_dense_func(1.).block_until_ready()
%time yf = jitted_dense_func(1.).block_until_ready()
CPU times: user 3.45 s, sys: 3.57 ms, total: 3.45 s
Wall time: 3.45 s
Run the sparse solver (twice to see the true compiled speed).
yf_sparse = jitted_sparse_func(1.).block_until_ready()
%time yf_sparse = jitted_sparse_func(1.).block_until_ready()
CPU times: user 1.45 s, sys: 0 ns, total: 1.45 s
Wall time: 1.45 s
Verify equality of the results in a common frame.
yf = solver.model.rotating_frame.state_out_of_frame(T, yf)
yf_sparse = sparse_solver.model.rotating_frame.state_out_of_frame(T, yf_sparse)
np.linalg.norm(yf - yf_sparse)
Array(3.05621703e-13)
We observe that the final states are extremely close, and that the sparse representation provides a speed advantage for this problem.
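Finally, because both solvers were compiled as functions of the drive amplitude, they can be reused across a parameter sweep without recompilation. A hypothetical usage sketch (the amplitude values here are arbitrary):

# Sweep the jitted sparse solver over a few drive amplitudes; only the first
# call above triggered compilation, so each of these reuses the compiled code.
amps = np.linspace(0.5, 1.0, 6)
final_states = [jitted_sparse_func(amp) for amp in amps]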