qrisp.vqe.VQEBenchmark.evaluate#

VQEBenchmark.evaluate(cost_metric='oqv', gain_metric='approx_ratio')[source]#

Evaluates the data in terms of a cost and a gain metric.

Cost metric

The default cost metric is overall quantum volume

\[\text{OQV} = \text{circuit_depth} \times \text{qubits} \times \text{shots} \times \text{iterations}\]
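The product above can be computed directly from the per-run data dictionary. A minimal sketch, using the `run_data` fields documented below (the example values are hypothetical):

```python
def oqv(run_data):
    # Overall quantum volume: circuit depth * qubits * shots * iterations
    return (run_data["circuit_depth"]
            * run_data["qubit_amount"]
            * run_data["shots"]
            * run_data["iterations"])

# Hypothetical run: depth 10, 5 qubits, 5000 shots, 25 optimizer iterations
print(oqv({"circuit_depth": 10,
           "qubit_amount": 5,
           "shots": 5000,
           "iterations": 25}))  # 6250000
```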

Gain metric

By default, the following gain metric is available.

The approximation ratio is a standard quantity in approximation algorithms and can be selected by setting gain_metric = "approx_ratio".
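For a ground-state (minimization) problem, the approximation ratio is commonly taken as the achieved energy divided by the optimal energy. A minimal sketch, assuming this definition and the `run_data` fields documented below:

```python
def approx_ratio(run_data):
    # Ratio of the achieved energy to the exact ground-state energy.
    # Values close to 1 mean the optimized circuit nearly reaches the optimum.
    return run_data["energy"] / run_data["optimal_energy"]

# Hypothetical run: achieved energy -7.0 against a ground-state energy of -8.0
print(approx_ratio({"energy": -7.0, "optimal_energy": -8.0}))  # 0.875
```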

Users can implement their own cost/gain metric by calling .evaluate with a suitable function. For more information, check the examples.

Parameters:
cost_metric : str or callable, optional

The method to evaluate the cost of each run. The default is “oqv”.

gain_metric : str or callable, optional

The method to evaluate the gain of each run. The default is “approx_ratio”.

Returns:
cost_data : list[float]

A list containing the cost values of each run.

gain_data : list[float]

A list containing the gain of each run.

Examples

We set up a Heisenberg problem instance and perform some benchmarking.

from networkx import Graph
from qrisp import QuantumVariable
from qrisp.vqe.problems.heisenberg import *

G = Graph()
G.add_edges_from([(0,1),(1,2),(2,3),(3,4)])

vqe = heisenberg_problem(G,1,0)
H = create_heisenberg_hamiltonian(G,1,0)

benchmark_data = vqe.benchmark(qarg = QuantumVariable(5),
                    depth_range = [1,2,3],
                    shot_range = [5000,10000],
                    iter_range = [25,50],
                    optimal_energy = H.ground_state_energy(),
                    repetitions = 2
                    )

We now evaluate the cost using the default metrics.

cost_data, gain_data = benchmark_data.evaluate()

print(cost_data[:10])
#Yields: [15625000, 15625000, 31250000, 31250000, 31250000, 31250000, 62500000, 62500000, 29375000, 29375000]
print(gain_data[:10])
#Yields: [0.8580900440328681, 0.8543553838641942, 0.8510356859364849, 0.8539404216232306, 0.8587643576744335, 0.8635364234455172, 0.8497389289334728, 0.8600092443973252, 0.884232665213583, 0.8656112346503356]

To set up a user-specified cost metric, we create a custom function:

def runtime(run_data):
    return run_data["runtime"]

cost_data, gain_data = benchmark_data.evaluate(cost_metric = runtime)

This function extracts the runtime (in seconds) and uses that as a cost metric. The run_data dictionary contains the following entries:

  • layer_depth: The number of layers.

  • circuit_depth: The depth of the compiled circuit, as returned by the .depth method.

  • qubit_amount: The number of qubits of the compiled circuit.

  • shots: The number of shots performed in this run.

  • iterations: The number of backend calls the optimizer was allowed to make.

  • energy: The energy of the problem Hamiltonian for the optimized circuit of this run.

  • runtime: The time (in seconds) that the run method of VQEProblem took.

  • optimal_energy: The exact ground-state energy of the problem Hamiltonian.
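In the same way, a custom gain metric can be built from these entries. A minimal sketch, assuming we want to rate runs by how far the achieved energy is from the exact optimum (the `energy_gap` name and the example values are hypothetical):

```python
def energy_gap(run_data):
    # Absolute distance of the achieved energy from the exact ground-state
    # energy; smaller values indicate a better run.
    return abs(run_data["energy"] - run_data["optimal_energy"])

# Hypothetical usage, mirroring the runtime example above:
# cost_data, gain_data = benchmark_data.evaluate(gain_metric = energy_gap)
print(energy_gap({"energy": -7.5, "optimal_energy": -8.0}))  # 0.5
```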