qrisp.vqe.VQEBenchmark.rank

VQEBenchmark.rank(metric='approx_ratio', print_res=False, average_repetitions=False)

Ranks the runs of the benchmark according to a given metric.

The default metric is the approximation ratio. Similar to .evaluate, the metric can be user-specified.

Parameters:
metric : str or callable, optional

The metric according to which the runs should be ranked. The default is “approx_ratio”.

Returns:
list[dict]

List of dictionaries, where the first element has the highest rank.

Examples

We create a Heisenberg problem instance and benchmark several parameters:

from networkx import Graph
G = Graph()
G.add_edges_from([(0,1),(1,2),(2,3),(3,4)])

from qrisp import QuantumVariable
from qrisp.vqe.problems.heisenberg import *

vqe = heisenberg_problem(G, 1, 0)
H = create_heisenberg_hamiltonian(G, 1, 0)

benchmark_data = vqe.benchmark(qarg = QuantumVariable(5),
                               depth_range = [1,2,3],
                               shot_range = [5000,10000],
                               iter_range = [25,50],
                               optimal_energy = H.ground_state_energy(),
                               repetitions = 2
                               )

To rank the results, we call the corresponding method:

print(benchmark_data.rank()[0])
# Yields: {'layer_depth': 3, 'circuit_depth': 69, 'qubit_amount': 5, 'shots': 10000, 'iterations': 50, 'runtime': 2.202655076980591, 'optimal_energy': -7.711545013271984, 'energy': -7.465600000000004, 'metric': 0.9681069081683767}
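
The metric can also be user-specified. The following is a minimal sketch, assuming the callable receives the dictionary of benchmark data for a single run (as printed above); this signature is an assumption and should be checked against the .evaluate documentation. The average_repetitions flag from the signature is shown as well:

# Hypothetical custom metric: ranks runs by runtime (shortest first).
# Assumes the callable is applied to the per-run dictionary shown above;
# verify the exact signature against the .evaluate documentation.
def fastest(run_data):
    return -run_data["runtime"]

print(benchmark_data.rank(metric=fastest)[0])

# With repetitions > 1, average_repetitions=True presumably averages the
# metric over the repetitions of each parameter configuration.
print(benchmark_data.rank(average_repetitions=True)[0])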