qrisp.vqe.VQEBenchmark.rank

VQEBenchmark.rank(metric='approx_ratio', print_res=False, average_repetitions=False)

Ranks the runs of the benchmark according to a given metric.

The default metric is the approximation ratio. As with .evaluate, the metric can also be user-specified as a callable.

Parameters:
metric : str or callable, optional

The metric according to which the runs are ranked. The default is “approx_ratio”.

Returns:
list[dict]

List of dictionaries, where the first element has the highest rank.

Examples

We create a Heisenberg problem instance and benchmark the VQE for several combinations of parameters:

from qrisp import QuantumVariable
from qrisp.vqe.problems.heisenberg import *
from networkx import Graph

# Chain graph with 5 sites
G = Graph()
G.add_edges_from([(0,1),(1,2),(2,3),(3,4)])

# Heisenberg problem with coupling constant J=1 and magnetic field B=0
vqe = heisenberg_problem(G,1,0)
H = create_heisenberg_hamiltonian(G,1,0)

# Benchmark every combination of ansatz depth, measurement precision and
# optimizer iterations, repeating each combination twice
benchmark_data = vqe.benchmark(qarg = QuantumVariable(5),
                    depth_range = [1,2,3],
                    precision_range = [0.02,0.01],
                    iter_range = [25,50],
                    optimal_energy = H.ground_state_energy(),
                    repetitions = 2
                    )

To rank the results, we call the corresponding method:

print(benchmark_data.rank()[0])
#Yields: {'layer_depth': 3, 'circuit_depth': 69, 'qubit_amount': 5, 'precision': 0.01, 'iterations': 50, 'runtime': 1.996392011642456, 'optimal_energy': -7.711545013271988, 'energy': -7.572235160661036, 'metric': 0.9819348973038227}
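
Since the metric can be user-specified, we can also rank by a custom quantity. The following is a minimal sketch, assuming the callable receives one run-data dictionary with the keys shown in the output above and that higher metric values are ranked first; this calling convention is an assumption based on the output format, not part of the documented API:

# Hypothetical custom metric: prefer runs that finished quickly. The
# run_data dictionary and the higher-is-better ordering are assumptions
# based on the output shown above.
def fast_runs(run_data):
    return -run_data["runtime"]

print(benchmark_data.rank(metric=fast_runs)[0])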
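
The signature also exposes print_res and average_repetitions. Judging by the parameter names alone, average_repetitions=True presumably averages the metric over the repetitions of each parameter combination before ranking, and print_res=True presumably prints the ranked results; treat the following as a sketch of that assumed behavior:

# Presumed behavior (inferred from the parameter names, not documented
# here): average the metric over the 2 repetitions per combination and
# print the ranked results.
benchmark_data.rank(average_repetitions=True, print_res=True)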