# How to calculate computational cost of an annealer based algorithm?

As far as I know, to solve a problem using the D-Wave QPU (or the SA solver or a Leap hybrid solver) we only need to construct the QUBO, and the rest is taken care of by the quantum annealer. But when we analyse an approach we also look at its "computational cost". In the circuit model of quantum computing there is also a metric called "gate count". Is there any similar (or different) way of calculating computational cost for annealer-based approaches?
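To make "construct only the QUBO" concrete, here is a minimal pure-Python sketch. The coefficients are made up purely for illustration; with the Ocean SDK you would hand the same `Q` dict to a sampler instead of brute-forcing it:

```python
from itertools import product

# A tiny QUBO as a dict: keys (i, i) are linear terms, (i, j) quadratic.
# Illustrative coefficients only; a real problem would derive these from
# its objective and constraints.
Q = {(0, 0): -1.0, (1, 1): -2.0, (0, 1): 3.0}

def qubo_energy(x, Q):
    """Energy E(x) = sum over (i, j) of Q[i, j] * x[i] * x[j] for binary x."""
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

def brute_force_ground_state(Q, n):
    """Exhaustively check all 2^n assignments (feasible only for tiny n)."""
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(x, Q))

best = brute_force_ground_state(Q, 2)
print(best, qubo_energy(best, Q))   # → (0, 1) -2.0
```

The formulation (building `Q`) is the part you do yourself; the solve step is what the annealer replaces.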

• Hello,

A broader description of QPU timing information can be found here:
https://docs.dwavesys.com/docs/latest/doc_timing.html

Of note are two of the subpages.

The first is the QPU Timing Information page from Ocean Software, which covers estimating read times and explains how to obtain timing readings through the Ocean tools:
https://docs.dwavesys.com/docs/latest/timing_cloud_client_use.html

The second is a breakdown of the timing information, which goes into detail about each of the parts of the process:
https://docs.dwavesys.com/docs/latest/timing_qa_cycle_time.html

The QPU (Hardware) Solvers page describes the properties that can be queried directly from a solver once it is instantiated:
https://docs.dwavesys.com/docs/latest/c_solver_intro.html

This community post might also be useful:
https://support.dwavesys.com/hc/en-us/articles/360043679413-How-do-the-hybrid-and-QPU-solvers-consume-time-

Please let us know if you have any further questions.
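As a rough sketch of how the components in those timing pages combine: the QPU access time is a one-time programming cost plus a per-read sampling cost. All numbers below are illustrative placeholders, not real QPU values; the actual figures are reported per solver in the `timing` info returned with a sample set:

```python
# Illustrative numbers in microseconds; real values vary by QPU and are
# reported back with each sample set by the Ocean SDK.
programming_time = 15_000.0          # one-time cost to program the QPU
anneal_time_per_sample = 20.0
readout_time_per_sample = 120.0
delay_time_per_sample = 20.0
num_reads = 1000

# Each read pays the anneal + readout + inter-sample delay.
sampling_time = num_reads * (anneal_time_per_sample
                             + readout_time_per_sample
                             + delay_time_per_sample)
qpu_access_time = programming_time + sampling_time

print(f"sampling: {sampling_time} us, total access: {qpu_access_time} us")
```

Note that none of these terms depends on the problem size, only on `num_reads` and the configured anneal schedule.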

• The resources above say that the metric depicting computational cost is mainly "run time" or run-time complexity (I have also checked the "time-to-target" metric). Is there any other metric, apart from these run-time ones, that should be taken care of when designing an annealer-based algorithm?

• Hello,

When you say other metrics, do you mean outside of time complexity?

Problem formulation can be a little costly (on any system), but it can often be reused in subsequent runs, so it would not be a major concern.

I suppose the answer to your question might also depend a bit on the nature of the problem you are formulating.

Can you give us any more information on what kind of problem you are working with?

• We are mainly working with non-NP-hard problems. In computer science the complexity of an algorithm is given in Big-O notation. Is it possible to compare an annealing-based approach in terms of Big O, if we are able to map non-NP-hard problems into Ising or QUBO form and get correct solutions using the D-Wave QPU?
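As an example of mapping a trivially easy (certainly not NP-hard) constraint into QUBO form, here is a pure-Python sketch of the standard penalty encoding of "exactly one variable is 1", i.e. the penalty (x_1 + ... + x_n - 1)^2:

```python
from itertools import product

def one_hot_qubo(n):
    """QUBO for the penalty (x_1 + ... + x_n - 1)^2, dropping the
    constant offset of +1.  Using x^2 = x for binaries, each diagonal
    entry is -1 and every off-diagonal pair gets +2."""
    Q = {(i, i): -1.0 for i in range(n)}
    Q.update({(i, j): 2.0 for i in range(n) for j in range(i + 1, n)})
    return Q

def energy(x, Q):
    return sum(c * x[i] * x[j] for (i, j), c in Q.items())

n = 3
Q = one_hot_qubo(n)
lowest = min(energy(y, Q) for y in product((0, 1), repeat=n))
ground = [x for x in product((0, 1), repeat=n) if energy(x, Q) == lowest]
print(ground)   # → [(0, 0, 1), (0, 1, 0), (1, 0, 0)]
```

The ground states are exactly the one-hot assignments, confirming the encoding; classically this check is trivial, which is the point made in the answer that follows.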

• So when we refer to time complexity, this is somewhat comparable to Big O, which gives a theoretical upper bound.
That said, it is very difficult to do Big-O analysis of the quantum annealer, since its performance is characterized empirically, and it is probabilistic, so it will sometimes fail to return the optimal solution.
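One common way to fold that failure probability into a cost metric is a time-to-solution (TTS) estimate: the per-anneal time multiplied by the number of independent repeats needed to see the ground state at least once with, say, 99% confidence. A minimal sketch with made-up numbers:

```python
import math

def time_to_solution(t_anneal_us, p_success, confidence=0.99):
    """Expected time (in microseconds) to observe the ground state at
    least once with the given confidence, assuming independent anneals
    each succeeding with probability p_success."""
    repeats = math.ceil(math.log(1 - confidence) / math.log(1 - p_success))
    return repeats * t_anneal_us

# Illustrative values: 20 us anneals with a 10% empirical success rate
# need 44 repeats, so roughly 880 us of anneal time.
print(time_to_solution(20.0, 0.10))   # → 880.0
```

Because `p_success` is measured, not derived, this yields an empirical cost estimate rather than a Big-O bound.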

For more information on the challenges faced when extrapolating outside of measured data, see this paper on benchmarking:

The gist is that there are problems with modeling empirical data.

The QPU itself basically runs in constant time per sample, O(1), since the anneal time is fixed and independent of problem size, given that the problem formulation has been done ahead of time.

Some problems can be solved more easily and quickly on classical architecture.

Most non-NP-hard problems that will fit directly on the QPU will be easy enough to solve classically.
Where the QPU is able to offer good value is when the problem is more in the NP-hard range of problems.

It is possible to formulate non-NP-hard problems, but given the above it is not really desirable.
It is very difficult to compare classical computing to quantum annealing, and any practical speedup will not be apparent for non-NP-hard problem formulations.
Even so, it is possible to do.