• Hi Thomas,

    Have a look here and see if this answers your question:

    For the values pertaining to the QPU in the online system, see also

    Let me know if this answers your question. 


  • Hi Fiona - That is definitely what I want to adjust, but I don't know how to make the adjustments using the Python libraries. (I don't know how to set anneal_schedule parameters for the solver.)

I downloaded the Python source code and analyzed it to try to answer the question. What follows is a guess: I think the annealing schedule can be passed in the anneal_schedule parameter (to a solver) as a list or tuple containing "a series of pairs of floating-point numbers identifying points in the schedule at which to change slope. The first element in the pair is time t in microseconds; the second, normalized persistent current s in the range [0,1]. The resulting schedule is the piecewise-linear curve that connects the provided points." This matches what I expected for schedule data, but makes explicit the data types expected. (See below for the data type reference.) However, I still do not know which of these two forms is correct: [t1, s1, t2, s2, ...] or [[t1, s1], [t2, s2], ...]. I suspect the former is correct, but I will need to find out through experimentation.
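    To keep the two candidate layouts straight while experimenting, here is a small pure-Python sketch (my own helper, not anything from the D-Wave libraries) that converts the flat form into the pair form:

```python
def as_pairs(flat):
    """Convert a flat [t1, s1, t2, s2, ...] schedule into
    pair form [[t1, s1], [t2, s2], ...]."""
    if len(flat) % 2 != 0:
        raise ValueError("flat schedule must contain an even number of values")
    return [[flat[i], flat[i + 1]] for i in range(0, len(flat), 2)]

flat = [0.0, 0.0, 10.0, 0.5, 12.0, 1.0]
print(as_pairs(flat))  # [[0.0, 0.0], [10.0, 0.5], [12.0, 1.0]]
```

    Either way, both forms carry the same (t, s) points; only the nesting differs.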

    For anyone curious about what I did to figure this out (although my guess may not be correct), read on:

    Clone the repository: dwave-system

    Find references to anneal_schedule: find . -type file -exec egrep -Hi "anneal_schedule" {} \;

    ./dwave/system/composites/ 'anneal_schedule': ['parameters'],
    ./dwave/system/composites/ u'anneal_schedule': ['parameters'],
    ./dwave/system/composites/ u'anneal_schedule': ['parameters'],
    ./dwave/system/samplers/ u'anneal_schedule': ['parameters'],
    ./tests/unit/ 'anneal_schedule': '',

    [At face value, the test file looks promising because it may contain an example of usage. However, that code contains the line 'anneal_schedule': '', which is misleading since it implies the parameter takes a string. That may in fact be the case, though it would contradict later findings that the parameter should be a list or tuple.] The file under ./dwave/system/samplers/ looks like the most general class with a reference.

    There is only one reference, which is in the comments for def parameters(self):

    The code seems to pass any supplied parameters through silently. I know everything must go through SAPI at some point, so I pulled up the docs repo that defines SAPI. [This was a lucky guess -- there could have been layers of abstraction between the sampler and SAPI.]

    Global search for anneal_schedule, assuming it is passed through as-is. Top hit is QPU (Hardware) Solvers.

    There are 13 references to anneal_schedule, but no indication of which one is a "good" one to investigate. (There is a reference to "anneal_schedule" in the left nav bar index under "Solver Properties and Parameters Reference" --> "QPU (Hardware) Solvers" --> "Parameters" --> "anneal_offsets". However, I discovered this post facto since the left nav bar is collapsed by default, and the semantics of each document repository are independent.)

    I searched for each instance of "anneal_schedule" in the page to eventually find the one I was looking for, the sixth reference.

    The description gave me insight into how to set the anneal_schedule in a generic way; the Python semantics were not explained.

    I (literally) randomly scrolled around the document to see if some other text might give a clue. I did not search for Python since this document was generic and in the repository that described SAPI, which is a REST interface that accepts JSON.

    I was lucky - the page above the anneal_schedule section was titled "Parameters," and had a brief section stating "Format varies by SAPI client:" Under that section it stated "For Python, supply the values as a list or a tuple."

    I knew what kinds of values I wanted to submit, and this brief, unassuming statement informed me of the data type to pass to the solver in the Python code.

    Pop the stack and resume coding.

    At this point I will try some experiments to figure out the exact syntax. The catch, however, is that I have no way of knowing whether the semantics are correct. It is possible that something I have not discovered manipulates the values before sending them to SAPI. To verify, I can try to change the annealing schedule and observe the results, seeing if the observations back up my hypothesis that everything is passed through unchanged. Whew! It's like I have to do physics. :-)


  • Hi Thomas,

    Thank you for the detailed report, we appreciate the time and effort! There are some very good points about documentation that I will pass on. In the meantime, I will create a support ticket to see if we can answer your questions more directly.

  • Hi Thomas

    I'm sorry you had to go on such an expedition through the documentation!  We are going to be improving the search functionality shortly and hope that will help you somewhat.

    In the meantime, I didn't see you mention this document yet, which provides a reference for all solver properties:

    specifically, this section may be useful:

  • Also: you might want to use dwave.system.samplers.DWaveSampler to send the params:

    sampler = DWaveSampler()
    sampler.sample(bqm, anneal_schedule=..., etc=...)
  • Update on setting the global anneal schedule:

    The correct data structure seems to be a list of lists. For example,

    [[0.0, 0.0], [1e-03, 4.29e-03], [2e-03, 8.53e-03], ...]

    The parameter documentation references a "normalized persistent current." I am taking that to be the C values in the spreadsheet Fiona referenced, DW_2000Q_2_processor-annealing-schedule.csv, and will assume the table of values is essentially the default anneal schedule. (I am not sure how A(s) and B(s) are derived from C(s), though I don't think it's important at this time.)


  • Almost there...

    The annealing spreadsheet (referenced above) contains 1001 points. I extracted the s, C pairs (columns 1 and 4), put them into a list of lists, and fed the data to the anneal_schedule parameter of the solver. The solver fails with the error: Too many points in anneal_schedule.
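    For reference, here is roughly how the pairs can be extracted (a self-contained sketch; the CSV excerpt and column labels below are made-up stand-ins for the real 1001-row spreadsheet, which I am assuming has columns s, A(s), B(s), C):

```python
import csv
import io

# Illustrative stand-in for DW_2000Q_2_processor-annealing-schedule.csv;
# the real file has 1001 rows and these numbers are invented.
sample = """s,A(s) (GHz),B(s) (GHz),C (normalized)
0.000,5.0,0.1,0.000
0.001,4.9,0.2,0.00429
0.002,4.8,0.3,0.00853
"""

reader = csv.reader(io.StringIO(sample))
next(reader)  # skip the header row
# Columns 1 and 4: s and the normalized persistent current C.
schedule = [[float(row[0]), float(row[3])] for row in reader]
print(schedule)
```

    In the real run, `io.StringIO(sample)` would be replaced with `open(...)` on the downloaded CSV.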

    I have read through the document "QPU Properties: D-Wave 2000Q Online System (DW_2000Q_2)," but did not see an explicit reference to the maximum number of anneal points allowed.

    Am I doing something wrong? Does the spreadsheet contain the actual default annealing schedule? Do I need to modify the schedule to make it work with the Ocean SDK?


    I tried a custom, arbitrary schedule with nine points and still got the error, "Too many points in anneal_schedule." The nine-point schedule I tried is:



    I printed the solver's properties and saw that max_anneal_schedule_points is 4. I think I am wildly off course here. In general, I want to modify the annealing schedule so I can quench it at varying times before the freeze-out points and gather statistics. Now I am not so sure how to get from here to there.
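    In hindsight, a client-side check against that advertised limit would catch this before submission. A sketch, with a mock properties dict standing in for sampler.properties on a real DWaveSampler:

```python
# Mock of the relevant entry in sampler.properties; not live solver data.
properties = {"max_anneal_schedule_points": 4}

def check_point_count(schedule, properties):
    """Fail fast locally instead of waiting for SAPI's
    'Too many points in anneal_schedule' error."""
    limit = properties["max_anneal_schedule_points"]
    if len(schedule) > limit:
        raise ValueError(
            f"anneal_schedule has {len(schedule)} points; solver allows {limit}")
    return schedule

check_point_count([[0.0, 0.0], [20.0, 1.0]], properties)  # fine: 2 points
```

    A nine-point schedule would raise immediately here, which matches the server-side rejection.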

  • Yes, the max_anneal_schedule_points for the solver is 4. 

    Did you get a chance to read through this doc?


    Yes, but I don't think I am interpreting the values correctly. On some charts I see B(s) run up to 10-12 GHz, but in Figure 76 I see B(s) go to 5 GHz. In Figure 76, third panel, I see 20µs with a 2µs quench, but the time scale goes to 12µs. Table 40 shows these data as "(0.0,0.0)(10.0,0.5)(12.0,1.0)," which leaves me confused as to whether that is 10µs + 12µs, or just 12µs.

    Given the spreadsheet of values for time, A, and B, I don't understand how (or even whether) those values are affected by the time and current in the global anneal schedule. I also don't know what meaningful time values should be. For example, it seems like a standard anneal is only 20µs, but I don't see any references about whether 20µs is actually standard for this QPU, all QPUs, etc.

    Perhaps a good question to ask is where to start with the global anneal values. What points will produce the same effect as not providing a custom anneal schedule? Given those data, I should be able to experiment on my own.


  • Yes, we have used different values for B(s) when generating example plots. That's confusing!

    As you've figured out, the annealing schedule determines the mapping from time to s. So if you have the default 20 microsecond annealing schedule ((0.0,0.0) (20.0, 1.0)), you are telling the system to evolve from s = 0 at time t = 0 microseconds to s = 1.0 at t = 20 microseconds. So, s = t / 20.0. At any time t, one can calculate s and then look up A(s) and B(s) in the spreadsheet to discover the parameters of the qubits on the chip at that time. The spreadsheet has columns for s, A(s), B(s), and C(s). C(s) is needed to understand the anneal offsets feature and I won't discuss that in this answer.

    Figure 76 is also somewhat confusing --- the third panel shows a 20 us anneal where we have interrupted it at 10 microseconds with a 2 us quench. So, the schedule ((0.0, 0.0) (10.0, 0.5) (12.0, 1.0)) means that for t < 10 microseconds s is evolving as if it were a 20 microsecond anneal, that is s = t / 20.0. Then starting at 10 microseconds we have s = 0.5 + (t-10.0)/4.0.
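    That arithmetic generalizes to any piecewise-linear schedule. A small interpolation helper (my own sketch, not D-Wave code) reproduces both numbers:

```python
def s_of_t(t, schedule):
    """Linearly interpolate the anneal fraction s at time t (microseconds)
    from a piecewise-linear schedule given as [[t0, s0], [t1, s1], ...]."""
    for (t0, s0), (t1, s1) in zip(schedule, schedule[1:]):
        if t0 <= t <= t1:
            return s0 + (s1 - s0) * (t - t0) / (t1 - t0)
    raise ValueError("t is outside the schedule's time range")

quench = [[0.0, 0.0], [10.0, 0.5], [12.0, 1.0]]
print(s_of_t(5.0, quench))   # 0.25, same as a plain 20 us anneal
print(s_of_t(11.0, quench))  # 0.75, halfway through the 2 us quench
```

    Before t = 10 us the slope matches s = t / 20.0; after it, the quench segment gives s = 0.5 + (t - 10.0) / 4.0.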

    The default annealing time is given in the solver properties for the solver you are using as the variable default_annealing_time and is typically 20 microseconds. That means the default annealing schedule is ((0.0, 0.0) (20.0, 1.0)).

    Thanks - I am pretty sure I now understand. I have run a number of experiments, quenching at different points with an 8-qubit chain, and am seeing expected results. (Error rate is inversely proportional to quench time. Types of errors are also what I would expect - infrequent broken chains and occasional "fighting" between bias and couplings.)

    I apologize for being slow to pick up on the global annealing schedule. In my mind, the spreadsheet of A(s),B(s) values is the actual schedule, and the anneal_schedule parameter is the rate or scale for that fixed schedule. If I were to explain it to someone else, I would say the anneal_schedule parameter can have up to four points. The first point is always at 0 (when the fixed schedule starts) and the last point is always the time at which the fixed schedule ends. Up to two inflection points (for the rate) can be inserted between the start and end points. If this is all correct, then individual offsets make more sense, allowing one to give more or less "weight" to certain qubits in long chains (which I imagine to be altering probabilities). (I am guessing the control bias regulates or determines the energies for A(s) and B(s), which would allow individual qubit control without needing to worry about more than one master signal/schedule for the anneal.)
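    If that summary is right, the constraints could be encoded as a quick client-side sanity check; this is my own reading of the rules, not an official validation routine:

```python
def validate_schedule(schedule, max_points=4):
    """Check a [[t, s], ...] anneal schedule against the rules as I
    understand them: 2..max_points points, starting at (0, 0), ending
    at s = 1, with strictly increasing times and non-decreasing s."""
    if not 2 <= len(schedule) <= max_points:
        raise ValueError(f"schedule must have 2 to {max_points} points")
    (t0, s0), (tn, sn) = schedule[0], schedule[-1]
    if (t0, s0) != (0.0, 0.0) or sn != 1.0:
        raise ValueError("schedule must start at (0, 0) and end with s = 1")
    for (ta, sa), (tb, sb) in zip(schedule, schedule[1:]):
        if tb <= ta or sb < sa:
            raise ValueError("times must increase and s must not decrease")
    return True

print(validate_schedule([[0.0, 0.0], [10.0, 0.5], [12.0, 1.0]]))  # True
```

    With at most four points, that leaves exactly the "up to two inflection points" described above.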

    Anyways, thanks again and sorry for such an epic thread.

  • No apologies necessary - I am sure this thread will be helpful to others, and you've helped us identify a couple of areas of inconsistency that we can address.

