What is the relationship between physical qubit consumption and problem size when using the EmbeddingComposite tools?

I read the paper "Minor-embedding in adiabatic quantum computation: II. Minor-universal graph design" by Vicky Choi.

It says that an n-variable problem's minor-embedding can be achieved using chains of (n-3)/(d-2) physical qubits each, where d is the degree of the hardware graph.

But when I try to use the EmbeddingComposite tools to embed an 80-variable random MIS problem, it sometimes costs more qubits than that number.

P.S. I generate the 80-variable random MIS problem with the code below:

import networkx as nx
import dwave_networkx as dnx
import dimod

problem_node_count = 80
G = nx.random_geometric_graph(problem_node_count, radius=0.0063*problem_node_count, seed=1)
qubo = dnx.algorithms.independent_set.maximum_weighted_independent_set_qubo(G)
bqm = dimod.BQM.from_qubo(qubo)

And I use EmbeddingComposite(DWaveSampler()) as the sampler.
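
For reference, the physical-qubit cost of an embedding can be measured offline with minorminer (a sketch; an ideal Chimera C16 graph stands in for the QPU, whose actual working graph will differ slightly):

import dwave_networkx as dnx
import minorminer

# Ideal Chimera C16 target graph, a stand-in for the QPU's working graph.
target = dnx.chimera_graph(16)

# Find a minor-embedding of the problem graph G into the target.
embedding = minorminer.find_embedding(G.edges, target.edges, random_seed=1)

# Total physical qubits used = sum of all chain lengths.
print(sum(len(chain) for chain in embedding.values()))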


Comments

4 comments
  • Hello,

    There is a certain level of randomness to the embedding.

    If one embeds a problem manually and cleverly, keeping the architecture of the QPU in mind, it is often possible to create very efficient embeddings.

    The EmbeddingComposite uses some randomness to select a different qubit configuration each time.
    One reason for this is that each qubit has some bias towards certain values, which comes from this being a physical system.
    Randomizing the embedding reduces the likelihood of incurring the same bias on every run.

    Sometimes shorter or longer chains are formed, producing results that are more or less affected by chain strength, and using varying numbers of qubits.
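
    This variability can be reproduced offline by re-running the embedding heuristic with different seeds (a sketch using minorminer against an ideal Chimera target as a stand-in for the QPU; G is the problem graph from the original post):

    import dwave_networkx as dnx
    import minorminer

    target = dnx.chimera_graph(16)
    for seed in range(3):
        emb = minorminer.find_embedding(G.edges, target.edges, random_seed=seed)
        lengths = [len(chain) for chain in emb.values()]
        # The total qubit count and the longest chain typically change with the seed.
        print(seed, sum(lengths), max(lengths))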

    Since the underlying architecture is fixed and the problems being embedded onto it are arbitrary, placing a problem onto the architecture optimally, quickly, and every time is an inherently hard problem.

    Understanding the underlying architecture and designing embeddings for individual problems is the most efficient way to utilize the full power of the QPU.
    Alternatively, the LeapHybridSampler provides a powerful, easy-to-use tool that hides these details, but it also removes a lot of the fine-grained control from the user.
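
    If you do go the manual route, one pattern is to compute an embedding once and pin it with FixedEmbeddingComposite, so every run uses the same qubits (a sketch assuming QPU access; bqm is the binary quadratic model from the original post):

    import minorminer
    from dwave.system import DWaveSampler, FixedEmbeddingComposite

    qpu = DWaveSampler()
    # Embed against the QPU's actual working graph, then reuse the result.
    embedding = minorminer.find_embedding(bqm.quadratic, qpu.edgelist, random_seed=1)
    sampler = FixedEmbeddingComposite(qpu, embedding)
    sampleset = sampler.sample(bqm, num_reads=100)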

    I hope this helps make some things a little clearer. 
    Of course, feel free to ask more questions or for clarification.

  • It helps a lot. Thanks

    I have a follow-up question:

    Are the embedding schemes used by the D-Wave system the same as the "chopping TRIAD diagram" approach proposed in the article "Minor-embedding in adiabatic quantum computation: II. Minor-universal graph design" by Vicky Choi?

  • And when I embed a problem with 80 variables, the embedding has chain_length = 26. Does that mean the embedding is bad?

    When I use the problem inspector, it warns me that the chain length is longer than 7.

    Does 7 have some special meaning? Why is 7 used as the boundary?

  • Hello,

    I am not familiar with the embedding method you shared, but it looks like a way of representing the Chimera graph (the D-Wave QPU architecture) on another kind of graph (TRIAD).

    These documents describe the method used to embed arbitrary problems onto the Chimera graph:

    https://arxiv.org/abs/1507.04774

    https://arxiv.org/abs/1406.2741

    It is possible to get good results with longer chain lengths, but this tends to require more fine-tuning to help avoid chain breaks.

    Here are some links on fine-tuning if you are interested:

    https://support.dwavesys.com/hc/en-us/community/posts/360043592114-tuning-DWaveSampler

    https://support.dwavesys.com/hc/en-us/community/posts/360034852633-High-Chain-Break-Fractions
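
    As a starting point, one of the main knobs is the chain_strength parameter; recent versions of dwave-system also accept a callable such as the uniform_torque_compensation heuristic (a sketch, assuming a recent dwave-system version):

    from dwave.embedding.chain_strength import uniform_torque_compensation
    from dwave.system import DWaveSampler, EmbeddingComposite

    sampler = EmbeddingComposite(DWaveSampler())
    # chain_strength can be a number or a callable that scales with the
    # problem's coupling magnitudes.
    sampleset = sampler.sample(bqm, num_reads=100,
                               chain_strength=uniform_torque_compensation)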

    It is difficult to say whether an embedding is good or bad just from the chain length.
    Each problem is different and will require different considerations. 
    One thing to keep in mind is that having chains that are all the same length where possible is preferable. 
    Another thing to consider is that longer chains "freeze out" (decide their output value) earlier in the anneal process than shorter chains.
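
    To check these properties yourself, recent versions of the embedding composites can return the embedding they used, so you can inspect the chain lengths directly (a sketch; the return_embedding flag is an assumption about your dwave-system version):

    sampleset = sampler.sample(bqm, num_reads=100, return_embedding=True)
    emb = sampleset.info["embedding_context"]["embedding"]
    lengths = sorted(len(chain) for chain in emb.values())
    # Uniform chain lengths are preferable; a large spread is worth investigating.
    print("min:", lengths[0], "max:", lengths[-1], "total:", sum(lengths))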

    As for the inspector's choice of 7 as the chain-length threshold, this is because chains beyond that length start to break more frequently.
    The value was determined by testing different configurations on the QPU.

    Is it possible to use test data that is more representative of the problems you will be solving on the QPU?

    Using data with an expected result to do initial testing can be really helpful in understanding the machine, more so than random data with uncertain output.
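
    For example, you could start with a tiny instance whose answer is known and can be brute-forced (a sketch; the maximum independent set of a 5-cycle has size 2):

    import networkx as nx
    import dwave_networkx as dnx
    import dimod

    G_test = nx.cycle_graph(5)
    qubo = dnx.algorithms.independent_set.maximum_weighted_independent_set_qubo(G_test)
    bqm_test = dimod.BQM.from_qubo(qubo)
    # Brute-force the 5-variable problem to establish the expected result.
    print(dimod.ExactSolver().sample(bqm_test).first)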

    I hope this was all helpful.
    Please feel free to ask more questions.

