qrisp.qaoa.QAOABenchmark.evaluate

QAOABenchmark.evaluate(cost_metric='oqv', gain_metric='approx_ratio')

Evaluates the data in terms of a cost and a gain metric.

Cost metric

The default cost metric is the overall quantum volume (OQV):

\[\text{OQV} = \text{circuit\_depth} \times \text{qubits} \times \text{shots} \times \text{iterations}\]
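For illustration only, the same quantity could be reproduced as a custom cost metric from the run_data entries documented in the examples below; a minimal sketch, assuming those entries:

def oqv(run_data):
    # Reproduces the built-in "oqv" metric from the documented run_data entries
    return (run_data["circuit_depth"] * run_data["qubit_amount"]
            * run_data["shots"] * run_data["iterations"])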

Gain metric

By default, two gain metrics are available.

The approximation ratio is a standard quantity in approximation algorithms and can be selected by setting gain_metric = "approx_ratio".

The time-to-solution metric as used in this paper can be selected with gain_metric = "tts".

Users can implement their own cost/gain metrics by calling .evaluate with a suitable function. For more information, check the examples below.

Parameters:
cost_metric : str or callable, optional

The method to evaluate the cost of each run. The default is “oqv”.

gain_metric : str or callable, optional

The method to evaluate the gain of each run. The default is “approx_ratio”.

Returns:
cost_data : list[float]

A list containing the cost values of each run.

gain_data : list[float]

A list containing the gain values of each run.

Examples

We set up a MaxCut instance and perform some benchmarking.

from qrisp import QuantumVariable
from networkx import Graph

# Build the problem graph
G = Graph()
G.add_edges_from([[0,3],[0,4],[1,3],[1,4],[2,3],[2,4]])

from qrisp.qaoa import maxcut_problem

max_cut_instance = maxcut_problem(G)

# Benchmark every combination of layer depth, shot count and iteration count,
# repeating each configuration twice
benchmark_data = max_cut_instance.benchmark(qarg = QuantumVariable(5),
                                            depth_range = [3, 4, 5],
                                            shot_range = [5000, 10000],
                                            iter_range = [25, 50],
                                            optimal_solution = "11100",
                                            repetitions = 2
                                            )

We now evaluate cost and gain using the default metrics.

cost_data, gain_data = benchmark_data.evaluate()

print(cost_data[:10])
#Yields: [17500000, 17500000, 35000000, 35000000, 35000000, 35000000, 70000000, 70000000, 22500000, 22500000]
print(gain_data[:10])
#Yields: [0.8425333333333328, 0.9379999999999996, 0.9256666666666667, 0.8816999999999998, 0.764399999999999, 0.6228000000000001, 0.8136000000000001, 0.9213999999999997, 0.8541333333333333, 0.6424333333333333]
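Both lists contain the runs in the same order, so they can be combined directly. As a minimal usage sketch (using numpy, which the example above does not import), one might rank the runs by gain per unit of cost:

import numpy as np

# Gain per unit of OQV cost; higher means a more resource-efficient run
efficiency = np.array(gain_data) / np.array(cost_data)
best_run = int(np.argmax(efficiency))
print(best_run, efficiency[best_run])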

To set up a user-specified cost metric, we create a customized function:

def runtime(run_data):
    # Use the measured wall-clock time of the run as its cost
    return run_data["runtime"]

cost_data, gain_data = benchmark_data.evaluate(cost_metric = runtime)

This function extracts the runtime (in seconds) and uses it as the cost metric. The run_data dictionary contains the following entries:

  • layer_depth: The number of QAOA layers.

  • circuit_depth: The depth of the compiled circuit as returned by the .depth method.

  • qubit_amount: The number of qubits of the compiled circuit.

  • shots: The number of shots that have been performed in this run.

  • iterations: The number of backend calls the optimizer was allowed to make.

  • counts: The measurement results as returned by qarg.get_measurement().

  • runtime: The time (in seconds) that the run method of QAOAProblem took.

  • optimal_solution: The optimal solution of the problem.
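A custom gain metric works the same way. As a minimal sketch (the function name is our own, not part of the API), one could use the probability that the measurement assigned to the known optimal solution:

def optimal_solution_probability(run_data):
    # counts maps bitstrings to measured probabilities, so this is the
    # probability of sampling the known optimal solution
    return run_data["counts"].get(run_data["optimal_solution"], 0)

cost_data, gain_data = benchmark_data.evaluate(gain_metric = optimal_solution_probability)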