Can I configure the quantum annealer for time series analysis?
I would like to solve hundreds of quadratic equations simultaneously in a single pass. My application requires running a number of feature extraction equations for each set of input values. Can I configure the quantum annealer to generate a solution on each pass? I do not expect the solution to be more than an approximation of the convolution of the functions, but I am seeking answers from 0.0 to 1.0, where 0.0 is a high energy state and 1.0 is a low energy state, with varying probabilities over time. If I have to run multiple passes, then the quantum annealer will be slower than a classical computer.
My application does not demand 0 or 1 certainty; it only requires that the solutions over time reflect the probability distribution function programmed into the quantum annealer. This is a time series analysis application that samples real-time data and finds low energy states in order to determine the best times to execute a single strategy in real time.
Comments
Hello,
Although it is theoretically possible, it is probably a good idea to take a closer look at exactly how you would want to represent your problem on the QPU.
Using smaller problem sets, you can place several on the QPU and run them in parallel.
This does, however, limit the size of each problem that you can fit on the QPU.
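For example, here is a minimal sketch, assuming Ocean's dwave-system package and a Chimera-structured QPU (the specific biases and couplings are just a toy), of tiling copies of a small problem across the chip so they all anneal in parallel:

```python
# Minimal sketch, assuming Ocean's dwave-system package and a
# Chimera-structured QPU; variable indices follow Chimera numbering.
from dwave.system import DWaveSampler, TilingComposite

# Tile copies of a problem that fits in a 1x1 block of unit cells
# across the whole chip; each anneal yields one sample per tile.
sampler = TilingComposite(DWaveSampler(), sub_m=1, sub_n=1, t=4)

h = {0: -1.0}        # bias on qubit 0 of the unit cell
J = {(0, 4): -1.0}   # coupling within the unit cell

sampleset = sampler.sample_ising(h, J, num_reads=100)
print(sampleset.first)
```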
Often what users will do is program a problem onto the QPU and then do multiple reads of it to get a distribution of solution energies.
Doing multiple reads is much faster than reprogramming the QPU with subsequent problems.
The data comes from one problem, but the results returned are a set of low energy sample solutions.
Some combination of multiple samplings with various problem inputs should give the desired effect.
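As an illustration, here is a minimal sketch, assuming Ocean's dwave-system package (the two-variable QUBO is a hypothetical toy), of programming one problem and taking many reads to build up a distribution of solution energies:

```python
# Minimal sketch, assuming Ocean's dwave-system package; the QUBO is a toy.
from collections import Counter
from dwave.system import DWaveSampler, EmbeddingComposite

sampler = EmbeddingComposite(DWaveSampler())

Q = {('x', 'x'): -1, ('y', 'y'): -1, ('x', 'y'): 2}  # toy 2-variable QUBO

# One programming cycle, then 1000 anneal/read cycles of the same problem.
sampleset = sampler.sample_qubo(Q, num_reads=1000)

# Distribution of solution energies over the reads.
energies = Counter()
for datum in sampleset.data(['energy', 'num_occurrences']):
    energies[datum.energy] += datum.num_occurrences
print(energies)
```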
The best way to know for sure is to try running some problems on the QPU and see what options are available!
I hope this was helpful!
Feel free to ask for more information or for clarification.
Is it possible to construct a hardware pipeline of QPUs that I can load independently? If so, I will have a flexible way to compute complex algorithms in stages without slowing down computation. Multiple samplings with different variable values are one solution, but I don't think a QPU would offer any advantages over a classical CPU in such a case.
I am searching for a hardware solution that supports continuous processing problems: what is the point of having a co-processor that is slower than classical hardware? I need to get data in and out of the QPU quickly and in as wide a bandwidth as possible.
A pipeline architecture might give one the bandwidth necessary to handle real world problems; this would make a QPU useful outside academia. The key seems to be to have a hardware architecture that supports massively parallel computation. Otherwise, one has constructed a co-processor that actually operates as a bottleneck.
I am not sure whether any of this is feasible given the existing technology. But I do know that if one is to use a QPU, it must offer superior computational speed and comparable bandwidth to a classical CPU or GPU.
Hello,
The qubits of the QPU are actually independent variables.
It is possible to weight them individually using bias values and relate them to one another using couplings, which together allow one to solve mathematical problems using the qubits.
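For instance, a minimal sketch assuming Ocean's dimod package (the variable names and values are illustrative):

```python
# Minimal sketch, assuming Ocean's dimod package.
import dimod

h = {'a': -1.0, 'b': 1.0}   # linear biases weight individual variables
J = {('a', 'b'): -0.5}      # a coupling relates variables a and b

bqm = dimod.BinaryQuadraticModel.from_ising(h, J)
print(bqm)
```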
I'm having a bit of trouble understanding the problem you are trying to solve.
It is possible to create workflows using the dwave-hybrid library.
The data can be fed in one end, decomposed, run on a variety of solvers (including the QPU), and then reconstituted.
This can be useful for very large problems that won't fit on the QPU directly.
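Here is a minimal sketch along the lines of the dwave-hybrid documentation; the toy problem, decomposer size, and convergence settings are illustrative:

```python
# Minimal sketch along the lines of the dwave-hybrid documentation;
# the decomposer size and convergence settings are illustrative.
import dimod
import hybrid

# A toy problem standing in for a large input.
bqm = dimod.BinaryQuadraticModel({}, {'ab': 1, 'bc': -1, 'ca': 1},
                                 0, dimod.SPIN)

# Decompose, run subproblems on the QPU, and reconstitute the results,
# racing against a classical tabu sampler.
iteration = hybrid.RacingBranches(
    hybrid.InterruptableTabuSampler(),
    hybrid.EnergyImpactDecomposer(size=2)
    | hybrid.QPUSubproblemAutoEmbeddingSampler()
    | hybrid.SplatComposer()
) | hybrid.ArgMin()
workflow = hybrid.LoopUntilNoImprovement(iteration, convergence=3)

final_state = workflow.run(hybrid.State.from_problem(bqm)).result()
print(final_state.samples.first)
```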
With the problem you are describing, I feel the challenge would lie in figuring out which parts to solve on the QPU in order to take advantage of its computing strength.
This might be because I don't fully understand the problem.
What do the variables in your problem look like?
How dependent are they on one another? What's the complexity like?
How big is the problem? How many variables?
What is the value range like on the weights associated with the variables?
Are you concerned about latency from submission of the problem to receiving a sample?
Some of the latency issues can be mitigated.
It all just depends on what your workflow looks like.
There are definitely real world applications for the QPU.
Like any piece of hardware, a little engineering is required to get it functioning the way we want.
Let's try to get you on track to solving your difficult problem!
I was not clear. I want to use the QPU for problems where there is a continuous flow of data: fluid dynamics, real-time finance, and so on; applications that solve real problems in real time, not simulations or backtesting.
Is this possible with the current hardware design? If not, then what is the reality of building the hardware in stages so this kind of problem can be addressed?
I think it's still a little unclear what the definition of "real time" is.
As it stands, our system takes QUBO or Ising models in "real time", samples them, and gives a response.
In this case "real time" means that the problems are taken, processed, and returned upon request.
There are a few considerations to take into account.
One is that there is network latency, which is the case with any physical system made available over the internet.
Because our systems have a physical location, the latency over the network to communicate with them must be taken into account.
The second issue is that there are many users using the system at the same time.
This means that there might be some risk of jobs being queued, and therefore some delays.
These considerations pertain mostly to the Leap cloud offering; however, we also have different purchasing options, which allow a fridge to be installed physically at the purchaser's location and used privately on their own network.
There would be less network latency on an internal network, and problems would only queue if you were to use the system heavily.
The Leap cloud system has an API endpoint that you can query using the D-Wave Ocean Tools Library.
The results returned from this endpoint are real results produced by the QPU.
For each call, you can choose to make a single read and get a single set of results back (for a problem of up to ~2048 qubits), or do multiple reads of the QPU in one call (each read takes microseconds) and get several sets of samples back (~2048 x N samples).
Here is some documentation on the breakdown of the time used when accessing the QPU:
https://docs.dwavesys.com/docs/latest/c_qpu_timing.html
Note that the programming time would only be needed once for multiple reads of the same problem.
The properties section in this documentation contains information about timing for a given QPU:
https://docs.dwavesys.com/docs/latest/c_solver_properties.html
As you can see, the timing is on the order of microseconds.
A very large number of samples can be taken for a given API call.
I would strongly recommend testing out some problem submissions to see if the latency is within an acceptable range.
The problem submission timing is definitely overshadowed by network latency.
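One way to check this yourself, as a minimal sketch assuming Ocean's dwave-system package (the QUBO is a hypothetical toy), is to compare the wall-clock round trip against the QPU access time reported in the response:

```python
# Minimal sketch, assuming Ocean's dwave-system package; the QUBO is a toy.
import time
from dwave.system import DWaveSampler, EmbeddingComposite

sampler = EmbeddingComposite(DWaveSampler())
Q = {('x', 'x'): -1, ('y', 'y'): -1, ('x', 'y'): 2}

start = time.perf_counter()
sampleset = sampler.sample_qubo(Q, num_reads=100)
sampleset.resolve()  # block until the response arrives
elapsed_ms = (time.perf_counter() - start) * 1000

qpu_ms = sampleset.info['timing']['qpu_access_time'] / 1000  # reported in us
print(f"round trip: {elapsed_ms:.1f} ms, QPU access: {qpu_ms:.3f} ms")
```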
I hope this clears some things up.
Thanks for your interest!
I work in the Financial Industry, where access to the QPU needs to have a QoS that is very consistent. I am also solving a problem that requires continuous output of data results. Managing money in real time necessitates a pipelined hardware architecture, actual or simulated.
Your business model seems to be to offer time sharing on individual QPUs. For production, and even testing, I would need to lease or purchase a QPU to ensure QoS. Is it feasible to lease or purchase a QPU to handle sustained computation? And can your QPU provide pipelined performance, real or simulated? My solution moves petabytes of data through the processor and requires real-time results.
I think these questions might be better suited to our sales team.
I could try to answer them, but there might be alternative solutions.
You can submit an inquiry via this page:
https://www.dwavesys.com/contact
I will keep an eye on your request to make sure it gets to the appropriate team.
In the meantime, I would still strongly encourage you to try out the system and see if it has the capabilities, at least with sample problems, to do what you need it to do.
If it's a question of scale and reliability, that part can be investigated further, but it would be good to try a few sample problems out to see if the results are of the nature you are interested in.
I apologize for not being able to be of further assistance at this stage, but look forward to helping you along the way.
Feel free to also ask questions about programming the system.