Tuning DWaveSampler


I have a QUBO problem with 50 variables. The resulting graph is sparse (each vertex has at most 5 neighbors).

I solved the problem using D-Wave's simulated annealing sampler and the Kerberos hybrid sampler. I also checked the solutions using other solvers and obtained identical results, so I have reason to believe the solutions are globally optimal.

However, whenever I try using DWaveSampler with EmbeddingComposite, the quality of the solutions is really poor. The problem inspector does not issue any warnings.

While the Ocean tools documentation is absolutely superb, I unfortunately can't make much practical sense of the D-Wave systems documentation.

My question is:

How can I tune DWaveSampler so that it produces good solutions?



I tried modifying annealing_time, but it doesn't seem to affect anything.
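For context, my call looks roughly like this (a sketch with a stand-in QUBO; my actual coefficients are not shown here):

```python
# Stand-in for my sparse 50-variable QUBO (diagonal = linear terms,
# off-diagonal = couplers). The real coefficients are different.
Q = {(i, i): -1.0 for i in range(50)}
Q.update({(i, i + 1): 0.5 for i in range(49)})

# With dwave-system installed and a Leap API token configured:
# from dwave.system import DWaveSampler, EmbeddingComposite
# sampler = EmbeddingComposite(DWaveSampler())
# sampleset = sampler.sample_qubo(
#     Q,
#     num_reads=1000,      # more anneals per call improves the odds of the optimum
#     annealing_time=100,  # microseconds; longer anneals are a common first knob
#     chain_strength=2.0,  # relative to the coefficient magnitudes in Q
# )
# print(sampleset.first.energy)
```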

Thanks a bunch! 





  • Hello,

    Maybe some of these resources can help you tune your problem formulation for the QPU:





    Some of the go-to gotchas mentioned in the community posts above include broken chains (which I don't think you are seeing, since the inspector shows no warnings), frozen chains (chains of qubits getting stuck on a value early in the anneal), and problem scaling (some configurations don't allow the problem data to be strongly represented on the QPU).
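    To illustrate the first gotcha: a chain is "broken" when the physical qubits representing one logical variable disagree on a value. The fraction of broken chains can be computed by hand from an embedding and a raw sample; EmbeddingComposite reports the same quantity in the chain_break_fraction field of the sample record. The embedding and sample below are hypothetical:

    ```python
    # Hypothetical embedding (logical variable -> physical qubits) and one raw sample:
    embedding = {0: (5, 6), 1: (7,), 2: (8, 9, 10)}
    raw_sample = {5: 1, 6: 0, 7: 1, 8: 0, 9: 0, 10: 0}  # variable 0's chain disagrees

    # A chain is broken if its qubits take more than one distinct value
    broken = sum(
        1 for chain in embedding.values()
        if len({raw_sample[q] for q in chain}) > 1
    )
    chain_break_fraction = broken / len(embedding)

    # With dwave-system, EmbeddingComposite records this per sample:
    # sampleset.record.chain_break_fraction
    ```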

    I hope this is useful. 

  • Hi David,

    Thank you very much for the reply and valuable pointers.

    I tried playing around with anneal_offsets and anneal_schedule. Adjusting these parameters slightly improves the average energy of all samples but doesn't give the desired result.

    Since there are countless configurations of anneal_offsets, I can't rule out the possibility of frozen chains for sure. However, given that my maximum chain length is 3 and the offsets do not help much, I assume frozen chains are unlikely to be the issue.

    I guess this leaves me with a "bad" energy gap or unsuitable problem scaling.

  • So, just to revisit some of these techniques:

    • Have you verified that there are no broken chains by looking at the chain break frequency? The chain strength determines whether chains break. It might be worth strengthening chains to avoid breaks, or relaxing them to let the problem's own couplings be better represented.
    • The anneal offset should shift longer chains in the negative direction, so they anneal after the standard anneal. The longer the chain, the later it should be annealed. Chains of the same length can be offset the same amount. This might help improve results, if you haven't tried it already.
    • One technique that can be used for an insufficient energy gap is to scale the whole problem up, if it hasn't been scaled already.
    • Are you able to give a minimal example? These issues with results tend to be problem-specific, so it is hard to assess or improve without understanding the problem.
    • Another suggestion from one of my colleagues is to maybe try the HSS, as it is designed to optimize QPU calls.
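    As a sketch of the offset grouping in the second bullet, here is one way to assign a per-qubit offset from chain length. The embedding and the step size of -0.05 per extra qubit are hypothetical; with EmbeddingComposite the actual embedding can be retrieved by sampling with return_embedding=True and reading sampleset.info:

    ```python
    # Hypothetical embedding: logical variable -> chain of physical qubits
    embedding = {0: (12,), 1: (13, 14), 2: (15, 16, 17), 3: (18,)}

    # Longer chains get a more negative offset so they anneal later;
    # chains of equal length share the same offset (assumed step: -0.05 per extra qubit)
    offset_step = -0.05
    offsets_by_qubit = {}
    for chain in embedding.values():
        offset = offset_step * (len(chain) - 1)
        for qubit in chain:
            offsets_by_qubit[qubit] = offset

    # To submit, expand into the per-qubit anneal_offsets list (one entry per
    # qubit on the chip), filling untouched qubits with 0.0:
    # num_qubits = DWaveSampler().properties['num_qubits']
    # anneal_offsets = [offsets_by_qubit.get(q, 0.0) for q in range(num_qubits)]
    ```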

    I apologize if this is reiterating things you are already aware of or have already tried. I will continue to investigate further solutions and update as they become available.

  • Hi David,

    Thank you very much, this is very valuable info, especially on how to shift chains! Is there a way to group qubits based on their chains for easy offsetting?


    Regarding the energy gap, I use auto_scale=True. I also tried normalization with a few bias ranges.

    I'll try scale() to scale the problem up as you suggested.
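    To spell out what I mean by up-scaling, here is the idea on a plain QUBO dict with made-up coefficients (dimod's bqm.scale() and bqm.normalize() do the equivalent on a BinaryQuadraticModel):

    ```python
    # Toy QUBO whose coefficients are small relative to the QPU's bias range
    Q = {(0, 0): -0.5, (1, 1): -0.25, (0, 1): 0.125}

    # Scale so the largest coefficient magnitude maps to 1.0 (assumed target)
    max_abs = max(abs(v) for v in Q.values())
    scaled_Q = {k: v * (1.0 / max_abs) for k, v in Q.items()}

    # Note: with auto_scale=True the QPU rescales anyway, so up-scaling mainly
    # matters relative to chain_strength and any fixed terms.
    ```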


    I have not tried HSS for a number of reasons:

    1) I'm specifically interested in learning more about QPU

    2) I have the solution of the problem from D-Wave SA and Kerberos solvers as well as other mainstream solvers.


    Once I find time, I'll try to extract a minimal example to post it here.

    Thanks again for coming back to this thread!

  • No problem! I hope I can help you learn more about the QPU and help get to better solutions!

    You had mentioned adjusting annealing_time.
    Usually with more complex or connected problems like this, increasing the annealing time can help.

    There is also the option of doing a pause and/or quench.
    This can be done by creating an anneal schedule and passing it in as the anneal_schedule parameter.
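    For example, a forward anneal with a mid-anneal pause and a fast quench might look like this; each point is (time in microseconds, anneal fraction s), and the specific values below are purely illustrative:

    ```python
    # Piecewise-linear anneal schedule: ramp, 100 us pause at s = 0.4, then quench
    anneal_schedule = [
        (0.0, 0.0),    # start of anneal
        (40.0, 0.4),   # ramp up to s = 0.4
        (140.0, 0.4),  # hold (pause) at s = 0.4 for 100 us
        (180.0, 1.0),  # quench to the end of the anneal
    ]
    # sampleset = sampler.sample(bqm, anneal_schedule=anneal_schedule, num_reads=1000)
    ```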

    The anneal schedule Jupyter Notebook is useful in helping learn to use anneal schedules.
    It is under .../notebooks/leap/techniques/anneal_schedule/01-anneal-schedule.ipynb in the Jupyter Hub.

  • Hi David,

    Thanks for the link. I used that notebook as an example for my trial schedules. 

    Here is the link to the bqm.csv. It is a 49x49 NumPy matrix obtained with bqm.to_numpy_matrix(). The matrix has a lot of zeros off the diagonal. The vartype of the bqm is Binary (QUBO).

    The solution vector X of this bqm should satisfy the equality sum(X) = 40.

    This problem can be solved with SA.

    I used beta_range=[1e-3, 10] and ran the SA algorithm 100 times in a for loop. While iterating, I kept track of the best solution.
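    In outline, my restart loop looks like this. I actually used D-Wave's SA sampler on the attached matrix; the code below is a pure-Python stand-in on a toy QUBO, just to show the restart-and-track-best structure:

    ```python
    import math
    import random

    # Toy QUBO: optimum sets exactly one of the two variables (energy -1)
    Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}
    n = 2

    def energy(x):
        return sum(v * x[i] * x[j] for (i, j), v in Q.items())

    def anneal_once(rng, sweeps=200, beta_range=(1e-3, 10.0)):
        """One simulated-annealing run with a linear beta ramp."""
        x = [rng.randint(0, 1) for _ in range(n)]
        for sweep in range(sweeps):
            beta = beta_range[0] + (beta_range[1] - beta_range[0]) * sweep / (sweeps - 1)
            for i in range(n):
                flipped = x.copy()
                flipped[i] ^= 1
                delta = energy(flipped) - energy(x)
                # Metropolis acceptance rule
                if delta <= 0 or rng.random() < math.exp(-beta * delta):
                    x = flipped
        return x

    # 100 restarts, keeping the best solution seen
    rng = random.Random(0)
    best = min((anneal_once(rng) for _ in range(100)), key=energy)
    print(best, energy(best))
    ```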





  • Thank you for providing an example data set!

    Were you able to use the annealing schedule notebook to improve your results?

  • Hi David,
    I was able to slightly improve my results using the anneal schedule, but they were still far from optimal.
