What makes a good embedding

I'm working on a hypergraph coloring problem formulated as a constraint satisfaction problem.

My original set of constraints only worked on the smallest sizes of the problem. Following other advice in the forums, I looked at the max and mean chain length of the embedding returned by minorminer. Working at size N=8, this produced roughly max chain = 15, mean chain = 4.45.
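
For reference, here is roughly how I compute those statistics. A minimal sketch; the complete-graph source below is just a stand-in for my actual constraint graph:

    import networkx as nx
    import dwave_networkx as dnx
    import minorminer

    # Stand-in source graph; the real problem supplies its own edge list.
    source = nx.complete_graph(10)
    target = dnx.chimera_graph(8)  # C8 Chimera topology

    # minorminer returns {logical variable: [physical qubits in its chain]}
    embedding = minorminer.find_embedding(source.edges, target.edges)

    chain_lengths = [len(chain) for chain in embedding.values()]
    print("max chain:", max(chain_lengths))
    print("mean chain:", sum(chain_lengths) / len(chain_lengths))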

I redesigned my constraints so they map better onto the Chimera graph, which dropped typical max chains into [5, 8] and typical mean chains into [3, 3.5]. Most embeddings that minorminer discovers now generate some valid solutions when run for 100 samples.

I started trying to characterize which max/mean values would admit solutions, and ran into some behavior I don't understand. I searched for different embeddings with either a low mean, a low max, or a large number of length-4 chains. The probability of solution is mixed:

From the limited and biased sample above, it seems that if the mean/max is too large or too small, there is little success. In the middle of some sort of "Goldilocks zone" for chain lengths, having more length-4 chains seems to be of benefit.

  • Is there a simple explanation for why embeddings with low max/mean chain length might not be optimal when mapped onto the system?
  • The length-4 chain observation was a blind guess based on early trends. Is there a more principled reason why that feature might be desirable in an embedding?
  • I came across a reference to annealing time offsets as a way of dealing with longer chains; however, I haven't identified how to change that in code. Does anyone know the right place to look in the docs?
  • Can anyone suggest other embedding features to look at as I try to scale this problem up?

-- Andrew Matteson

Comments
  • Hi Andrew,

    "Don't cross the streams." Egon, Ghostbusters (http://quotegeek.com/quotes-from-movies/ghostbusters/206/)

    The first bit of advice is that long chains are almost always undesirable. The way the physics crowd sees it, long chains create uncomfortably small gaps between the desirable solution (called the ground state) and other possible solutions. This can be seen visually by graphing the total energy of the system as a function of time; that graph is called the eigenspectrum, and an example is in figure 7 on this page: https://docs.dwavesys.com/docs/latest/c_gs_2.html The eigenspectrum is what you get when you combine the Hamiltonian, the annealing functions (figure 8 from the above link), and the possible configurations of the qubits. There is some internal (proprietary, I think) code that will try to compensate for long chains by altering the biases. (I cannot speak to how those biases are changed, but I expect it has to do with the specific physical characteristics of the particular D-Wave chip, which would vary across different physical installations.)
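
    On the user side, the closest knob you can reach is the chain strength applied at sampling time. Here is a minimal sketch, assuming the dimod and dwave-system packages and access to a QPU solver (the problem values are placeholders):

        import dimod
        from dwave.system import DWaveSampler, EmbeddingComposite

        # Toy Ising problem; a real h/J would come from your constraints.
        bqm = dimod.BinaryQuadraticModel.from_ising(
            {'a': -1.0}, {('a', 'b'): 1.0, ('b', 'c'): 1.0})

        sampler = EmbeddingComposite(DWaveSampler())

        # chain_strength sets the magnitude of the intra-chain couplings:
        # too weak and long chains break; too strong and the chain terms
        # drown out the problem terms.
        sampleset = sampler.sample(bqm, num_reads=100, chain_strength=2.0)
        print(sampleset.first)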

    The number one challenge (in my opinion) is finding an optimal embedding for a problem. Tools like minorminer are excellent to get started with, but as you saw, the quality of the embedding can make a big difference both in reliability (probability of a desirable solution) and in space complexity. Changing the annealing rate may or may not help for any particular embedding; I believe this is an area of active research. Probably the most popular heuristic is "slow down when the ground state gets close to the next higher state." (This insight comes from the study of adiabatic quantum computing, which is an ideal; the behavior of a real physical system does not always match it.) Knowing how best to alter the annealing rate works better if you know the solution a priori, but that's another topic. A couple of links to illustrate how to change the annealing schedule:
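
    The user-facing parameter is anneal_schedule, passed at sampling time. A minimal sketch, assuming dwave-system and a solver that supports custom schedules (the schedule points are illustrative):

        import dimod
        from dwave.system import DWaveSampler, EmbeddingComposite

        bqm = dimod.BinaryQuadraticModel.from_ising({}, {('a', 'b'): 1.0})
        sampler = EmbeddingComposite(DWaveSampler())

        # Piecewise-linear schedule as [time_us, s] pairs: anneal quickly to
        # s = 0.4, crawl through the presumed small-gap region, then finish.
        schedule = [[0.0, 0.0], [10.0, 0.4], [90.0, 0.6], [100.0, 1.0]]

        sampleset = sampler.sample(bqm, anneal_schedule=schedule, num_reads=100)

    (For the anneal offsets Andrew asked about: if I recall correctly, solvers that support them expose an anneal_offset_ranges property, and DWaveSampler's sample() accepts an anneal_offsets list with one entry per physical qubit.)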

    If you are using dwavebinarycsp, then I recommend playing with the min_classical_gap argument of the stitch() function. The default is 2.0; try higher values like 2.5, etc. I cannot speak to how this value scales because I think it depends on the ultimate embedding. In some experiments I did, values over 4 resulted in the stitch taking too long to complete. Take that as anecdote, however; I have not seriously tried to map out different values and their impact on different kinds of constraint scenarios.
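
    A minimal sketch of that, assuming the dwavebinarycsp package (the constraint is a placeholder):

        import dwavebinarycsp

        csp = dwavebinarycsp.ConstraintSatisfactionProblem(dwavebinarycsp.BINARY)
        csp.add_constraint(lambda a, b: a != b, ['a', 'b'])  # toy constraint

        # A larger min_classical_gap pushes invalid configurations further
        # above the ground state, at the cost of longer stitch times (and
        # possibly no feasible penalty model at all).
        bqm = dwavebinarycsp.stitch(csp, min_classical_gap=2.5)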

    If you are willing to embed by hand, you can try things like doubling the widths of some chains, or otherwise adding redundancy for the sake of increasing the gap between the ground state (ideal solution) and the next-higher state. Given that you are working with hypergraphs, I would presume there are all kinds of substructures in the resultant embeddings. (BTW, this is fantastic stuff for research if anyone is interested.)
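
    Here is a minimal sketch of supplying a hand-built embedding, assuming dwave-system; the qubit indices are illustrative for a single Chimera unit cell, and each chain must be a connected set on the actual working graph of your solver:

        import dimod
        from dwave.system import DWaveSampler, FixedEmbeddingComposite

        # A triangle of logical variables; a Chimera cell is bipartite, so
        # chains are needed even for this tiny problem.
        bqm = dimod.BinaryQuadraticModel.from_ising(
            {}, {('a', 'b'): 1.0, ('b', 'c'): 1.0, ('a', 'c'): 1.0})

        # Hand-picked chains: logical variable -> list of physical qubits.
        # Widening a chain adds redundancy but demands stronger chain coupling.
        embedding = {'a': [0, 4], 'b': [1, 5], 'c': [2, 6]}

        sampler = FixedEmbeddingComposite(DWaveSampler(), embedding)
        sampleset = sampler.sample(bqm, num_reads=100, chain_strength=2.0)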

    Let me know if this helps.

    Tom

  • Thanks, Tom! I appreciate you adding some context to the resources and pointing me towards some new ones. I look forward to playing around with these ideas the next time I have a large block of free time.

    -- Andrew Matteson

  • Andrew, any news about this problem?

