Use AWS to conduct a massive search for embeddings for a hard problem?

I'm using minorminer.find_embedding() to search for embeddings for increasingly hard problems, looking at max chain length and average chain length as suggested by Alex.  It took my MacBook Pro almost six hours to make 100 attempts at finding an embedding for one particular problem.  Three embeddings were found, each with an embedding length of 424; the best had a max chain length of 14 and an average chain length of 3.337.  QPU time is scarce and expensive, but CPU time is abundant and cheap, so I'd like to improve my chances of getting good results from the QPU by finding many embeddings and keeping the few with the most promising metrics.  This leads me to think of using Amazon Web Services to conduct a massive search for embeddings in a reasonable wall-clock time.
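
As a rough sketch of what each worker in such a search might do, here is a minimal illustration (the chain_metrics helper and the seed-per-attempt batching are my own invention; the parameters from my P.S. below could be passed through as well):

    import minorminer

    def chain_metrics(embedding):
        # Score an embedding by the metrics mentioned above:
        # (max chain length, average chain length).
        lengths = [len(chain) for chain in embedding.values()]
        return max(lengths), sum(lengths) / len(lengths)

    def search(source_edges, target_edges, attempts=100):
        # Attempts are fully independent, so this loop parallelizes
        # trivially across cores or machines; only the seed differs.
        results = []
        for seed in range(attempts):
            emb = minorminer.find_embedding(source_edges, target_edges,
                                            random_seed=seed)
            if emb:  # find_embedding returns {} when it fails
                results.append((chain_metrics(emb), emb))
        results.sort(key=lambda item: item[0])
        return results[:3]  # keep the few most promising embeddings

Since the attempts share no state, each AWS instance could simply run its own range of seeds and report back its best candidates.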

Does anyone have any comments about this idea?


Comments

9 comments
  • P.S.  Notes about performance:

    My five-year-old MacBook Pro has a 2.3 GHz Intel Core i7 CPU with four cores (two hyperthreads per core), 256 KiB L2 cache per core, 6 MiB L3 cache, and 16 GiB memory.  It's running macOS 10.12.6 Sierra.

    I'm using some non-default parameters with minorminer.find_embedding().

    import minorminer

    embedding = minorminer.find_embedding(
        bqm.quadratic,            # source edges: the problem's quadratic terms
        target_edgelist,          # target edges: the QPU's working graph
        max_no_improvement=40,    # default 10
        chainlength_patience=40,  # default 10
        threads=4,                # default 1
        verbose=1)                # default 0

    Does anyone have any comments about my parameter choices?

Hi Scott - My intuition is that while minorminer is a good general-purpose utility, it would be better to use more specific algorithms to match the problem space to the D-Wave. The one catch to this approach is working around inactive qubits. The run time of the minorminer algorithm is:

$O(n_{H} n_{G} e_{H} (e_{G} + n_{G} \log n_{G}))$, where $n_H, e_H$ count the nodes and edges of the problem graph $H$ and $n_G, e_G$ those of the target graph $G$.

    I recommend reading the paper on which minorminer is based:

    https://arxiv.org/pdf/1406.2741.pdf

    The approach I am taking is to map the problem space onto an intermediate form, and then look for ways to map that intermediate form onto the D-Wave.

  • Hi Thomas,

    Thank you for the suggestions.  I hope that with persistent effort I can penetrate that paper.  Do you find it helpful to use dwave_networkx to plot chimera graphs and embeddings as you're looking for a good way to map a problem to a QPU?  Do you mind saying a bit more about your intriguing intermediate form?

    Scott

  • Let me start by saying I do not necessarily have the right answers for anything.

Plotting graphs does not help me much; I tend to think more abstractly about problems, usually moving from what is possible, to what is tractable, to what I am willing to invest effort into. The D-Wave is limited by the space complexity of a problem. When fitting a problem to the D-Wave, I look for ways to solve a similar problem (or subset) while reducing the space complexity. For example, in a weighted, directed graph I can collapse nodes and edges to make a smaller, but approximately equivalent, graph. See this graph, where three edges (weight 1 each) have been replaced by a single, more heavily weighted edge (weight 3).

    Original graph: A =1=> B =1=> C =1=> D
    Compressed graph: A =3=> D
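
    A minimal sketch of that compression, assuming networkx and assuming we only collapse pass-through nodes (exactly one edge in, one edge out):

        import networkx as nx

        G = nx.DiGraph()
        G.add_weighted_edges_from([("A", "B", 1), ("B", "C", 1), ("C", "D", 1)])

        # Collapse each pass-through node, summing the weights of the
        # two edges it joined.  (A sketch only: it ignores parallel-edge
        # collisions and other corner cases.)
        for node in [n for n in G if G.in_degree(n) == 1 and G.out_degree(n) == 1]:
            (pred,) = G.predecessors(node)
            (succ,) = G.successors(node)
            weight = G[pred][node]["weight"] + G[node][succ]["weight"]
            G.remove_node(node)
            G.add_edge(pred, succ, weight=weight)

        print(list(G.edges(data=True)))  # [('A', 'D', {'weight': 3})]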

In a similar vein, Karnaugh maps can be used to reduce gate complexity. Though I am not a number theorist, I know there are techniques that can be used to reduce the problem space for factoring. This principle of decomposing and transforming a problem's components is how I have been approaching fitting problems to the D-Wave.

    Edit: changed "equivalent graph" to "approximately equivalent graph"; that is, close enough to get me to an acceptable solution.

Hello Scott,

    The minorminer embedding heuristic should run quickly for small problems, and it will of course always fail for problems that cannot be embedded at all.  For problems close to the efficiency limit (the largest problem graph that still fits in a given Chimera graph), the run time can become quite long.

    You might try passing in an adjacency matrix for a slightly larger Chimera graph to minorminer, for example a C18 instead of a C16. If this brings the run time down, you could check the resulting embedding and see how close it is to fitting into a C16 subsection.
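
    A minimal sketch of that experiment, assuming dwave_networkx is available and using a toy complete graph in place of the real problem:

        import minorminer
        import dwave_networkx as dnx

        source_edges = [(i, j) for i in range(8) for j in range(i + 1, 8)]  # toy K8

        # Embed into a slightly larger C18 target instead of a C16.
        target = dnx.chimera_graph(18)
        embedding = minorminer.find_embedding(source_edges, target.edges)

        # A qubit at Chimera coordinate (row i, column j, shore u, index k)
        # lies inside the C16 corner subsection iff i < 16 and j < 16.
        coords = dnx.chimera_coordinates(18)
        fits_c16 = all(coords.linear_to_chimera(q)[0] < 16 and
                       coords.linear_to_chimera(q)[1] < 16
                       for chain in embedding.values() for q in chain)
        print("Fits in a C16 subsection:", fits_c16)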

    Hope this helps!

  • Thomas,

    Very interesting!  A Karnaugh map might very well help me simplify my building-block logic circuit.  I have to refresh my memory of exactly how a K-map is constructed.
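
    As a quick refresher, here is a toy three-variable example (my own, not from any particular reference): for f(a, b, c) with minterms 1, 3, 5, and 7, the map is

               bc=00  bc=01  bc=11  bc=10
        a=0      0      1      1      0
        a=1      0      1      1      0

    The four adjacent 1s in the two middle columns form a single group in which only c is constant, so the whole function reduces to f = c.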

    Scott

  • Luca,

    Thank you for the idea of tweaking the adjacency matrix.  I'll experiment with that.

    Scott

  • No problem, please do let us know how your experiments go.

  • In my current project, I'm constructing a logic circuit using XOR gates and chaining multiple copies of it together.  The circuit has 16 input bits and 8 output bits, so the truth table has 64 Ki rows.  Apparently Karnaugh maps are only practical for a few variables, but there's a heuristic logic minimization algorithm known as Espresso that can handle many variables, and free software implementations are available.  I'm going to give the Espresso algorithm a whirl.
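
    As a minimal sketch of what that might look like, assuming the pyeda package (one free implementation of Espresso) and a toy two-variable function standing in for my real 16-input circuit:

        from pyeda.inter import exprvar, espresso_exprs

        a, b = map(exprvar, "ab")

        # A redundant sum-of-products cover: true everywhere except a & b.
        f = (~a & ~b) | (~a & b) | (a & ~b)

        # espresso_exprs takes DNF expressions and returns minimized covers.
        (fmin,) = espresso_exprs(f.to_dnf())
        print(fmin)  # minimal cover, equivalent to ~a | ~b

    pyeda also has an espresso_tts function for minimizing truth tables directly, which looks closer to my 64 Ki-row case.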

