qrisp.vqe.VQEBenchmark.evaluate

VQEBenchmark.evaluate(cost_metric='oqv', gain_metric='approx_ratio')

Evaluates the data in terms of a cost and a gain metric.

Cost metric

The default cost metric is the overall quantum volume (OQV):

\[\text{OQV} = \text{circuit\_depth} \times \text{qubits} \times \text{shots} \times \text{iterations}\]

where \(\text{shots} = 1/\text{precision}^2\). The actual number of shots includes a scaling factor that depends on the Hamiltonian, so OQV values obtained for different Hamiltonians are not comparable.
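As an illustration, a minimal sketch of how such a metric could be computed from the run_data entries documented below (hypothetical; the actual implementation accounts for the Hamiltonian-dependent shot scaling):

def oqv(run_data):
    # estimate the shot count from the precision: shots = 1/precision^2
    shots = 1 / run_data["precision"]**2
    # OQV = circuit depth x qubit amount x shots x iterations
    return (run_data["circuit_depth"] * run_data["qubit_amount"]
            * shots * run_data["iterations"])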

Gain metric

The default gain metric is the approximation ratio, a standard quantity in approximation algorithms. It is selected by setting gain_metric = "approx_ratio".

Users can implement their own cost/gain metric by calling .evaluate with a suitable function. For more information, see the examples below.

Parameters:
cost_metric : str or callable, optional

The method to evaluate the cost of each run. The default is "oqv".

gain_metric : str or callable, optional

The method to evaluate the gain of each run. The default is "approx_ratio".

Returns:
cost_data : list[float]

A list containing the cost values of each run.

gain_data : list[float]

A list containing the gain values of each run.

Examples

We set up a Heisenberg problem instance and perform some benchmarking.

from qrisp import QuantumVariable
from qrisp.vqe.problems.heisenberg import *
from networkx import Graph

G = Graph()
G.add_edges_from([(0,1),(1,2),(2,3),(3,4)])

vqe = heisenberg_problem(G,1,0)
H = create_heisenberg_hamiltonian(G,1,0)

benchmark_data = vqe.benchmark(qarg = QuantumVariable(5),
                    depth_range = [1,2,3],
                    precision_range = [0.02,0.01],
                    iter_range = [25,50],
                    optimal_energy = H.ground_state_energy(),
                    repetitions = 2
                    )

We now evaluate the benchmark data using the default metrics.

cost_data, gain_data = benchmark_data.evaluate()

print(cost_data[:10])
#Yields: [7812500.0, 7812500.0, 15625000.0, 15625000.0, 31250000.0, 31250000.0, 62500000.0, 62500000.0, 14687500.0, 14687500.0]
print(gain_data[:10])
#Yields: [0.8611554188923896, 0.8585520978550613, 0.8581865630518749, 0.8576694650376105, 0.8589623529655623, 0.8594148020629763, 0.8591696326013233, 0.8597669545624406, 0.9715380139717106, 0.9490977432492607]
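Both lists contain one entry per benchmark run, so they can be combined directly. As a hypothetical post-processing step (not part of the Qrisp API), we find the run with the best gain-to-cost ratio:

best_run = max(range(len(cost_data)), key = lambda i: gain_data[i]/cost_data[i])
print(cost_data[best_run], gain_data[best_run])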

To set up a user-specified cost metric, we create a customized function:

def runtime(run_data):
    return run_data["runtime"]

cost_data, gain_data = benchmark_data.evaluate(cost_metric = runtime)

This function extracts the runtime (in seconds) and uses that as a cost metric. The run_data dictionary contains the following entries:

  • layer_depth: The number of ansatz layers.

  • circuit_depth: The depth of the compiled circuit as returned by the .depth method.

  • qubit_amount: The number of qubits of the compiled circuit.

  • precision: The precision with which the expectation of the Hamiltonian is evaluated.

  • iterations: The number of backend calls that the optimizer was allowed to perform.

  • energy: The energy of the problem Hamiltonian for the optimized circuit of each run.

  • runtime: The time (in seconds) that the run method of VQEProblem took.

  • optimal_energy: The exact ground state energy of the problem Hamiltonian.
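These entries can likewise be used to define custom gain metrics. As a hypothetical sketch (assuming the achieved and optimal energies share the same sign), a gain metric resembling the approximation ratio could be:

def energy_ratio(run_data):
    # ratio of the achieved energy to the exact ground state energy
    return run_data["energy"] / run_data["optimal_energy"]

cost_data, gain_data = benchmark_data.evaluate(gain_metric = energy_ratio)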