
Factoring program / embedding metrics

Starting with code from the factoring Jupyter notebook, I cobbled together a command-line Python + Ocean SDK factoring program.  It takes three command-line parameters -- the product to be factored, the number of bits in each multiplication circuit input, and the number of QPU reads.  I ran 116 experiments on the D-Wave QPU.  See the table below summarizing the results.

I tried factoring four products -- 12, 20, 36, and 81.  For each product I tried two multiplication circuit input bit sizes.  For each combination of product and bit size, I ran several experiments with 100, 200, 500, 1000, or 2000 QPU reads.  I wanted to get a feel for how results might vary with different embeddings found by minorminer and with different numbers of reads.
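For reference, the "expected valid factors" mentioned below are the ordered factor pairs in which both factors fit in the chosen multiplier input width.  They can be enumerated with a small helper like this (hypothetical, not part of the program; pure Python):

```python
def expected_factor_pairs(product, bits):
    """Ordered pairs (a, b) with a * b == product where both a and b
    fit in the given number of multiplier-input bits."""
    limit = 2 ** bits
    return [(a, product // a) for a in range(1, limit)
            if product % a == 0 and product // a < limit]

print(expected_factor_pairs(12, 3))  # [(2, 6), (3, 4), (4, 3), (6, 2)]
print(expected_factor_pairs(20, 4))  # [(2, 10), (4, 5), (5, 4), (10, 2)]
```

These match the factor pairs reported in the table for those product/bit-size combinations.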

All expected valid factors of 12 were found easily using multiplication circuit inputs of 3 or 4 bits.

All expected valid factors of 20 were found using 4 or 5 bits.  It took thousands of QPU reads to find the factors using 5 bits.

All expected valid factors of 36 were found using 5 bits.  It took thousands of QPU reads.  No valid factors of 36 were found using 6 bits.

No valid factors of 81 were found using 6 or 7 bits.  One experiment failed because minorminer could not find an embedding.

Valid factors did not always have the lowest energies of the answers returned by the QPU.
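That observation can be checked mechanically.  A minimal sketch, assuming the decoded QPU answers are available as (factor-pair, energy) tuples (the names and sample data here are illustrative, not output from my runs):

```python
def lowest_energy_is_valid(results, product):
    """True if the lowest-energy decoded answer is a valid factorization."""
    (a, b), _energy = min(results, key=lambda r: r[1])
    return a * b == product

# Illustrative decoded results: the lowest-energy answer (1, 11) is not
# a valid factorization of 12, even though valid answers were returned.
results = [((1, 11), -49.0), ((4, 3), -48.0), ((12, 1), -42.0)]
print(lowest_energy_is_valid(results, 12))  # False
```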

Here is the table.

prod:    product to be factored
bits:    number of bits in each multiplication circuit input
reads:   number of reads from QPU
expno:   ordinal experiment number (1st, 2nd, ... nth run with given parameters)
factors: valid factors found
energy:  reported energy of solution

prod bits reads expno  factors  energy
12   3    100   1
                     2,  6   -25.5
                     4,  3   -25.5
                     3,  4   -25.5
                     6,  2   -25.5
12   3    100   2
                     4,  3   -25.5
                     3,  4   -25.5
                     2,  6   -25.5
                     6,  2   -25.5
12   3    200   1
                     4,  3   -25.5
                     6,  2   -25.5
                     2,  6   -25.5
                     3,  4   -25.5
12   3    200   2
                     3,  4   -25.5
                     2,  6   -25.5
                     4,  3   -25.5
                     6,  2   -25.5
12   3    500   1
                     3,  4   -25.5
                     4,  3   -25.5
                     6,  2   -25.5
                     2,  6   -25.5
12   3    500   2
                     4,  3   -25.5
                     6,  2   -25.5
                     3,  4   -25.5
                     2,  6   -25.5
12   3   1000   1
                     3,  4   -25.5
                     4,  3   -25.5
                     2,  6   -25.5
                     6,  2   -25.5
12   3   1000   2
                     4,  3   -25.5
                     6,  2   -25.5
                     3,  4   -25.5
                     2,  6   -25.5
12   3   1000   3
                     2,  6   -25.5
                     4,  3   -25.5
                     6,  2   -25.5
                     3,  4   -25.5
12   3   1000   4
                     3,  4   -25.5
                     4,  3   -25.5
                     2,  6   -25.5
                     6,  2   -25.5
12   4    100   1
12   4    100   2
                     1, 12   -45.0
12   4    200   1
                     4,  3   -48.0
                    12,  1   -42.0
12   4    200   2
12   4    500   1
                     3,  4   -45.0
                     4,  3   -42.0
                     6,  2   -39.0
12   4    500   2
                     6,  2   -42.0
                    12,  1   -40.0
12   4   1000   1
                     2,  6   -48.0
                     3,  4   -45.0
                     6,  2   -41.0
12   4   1000   2
                     6,  2   -44.0
12   4   1000   3
                     2,  6   -43.0
12   4   1000   4
                     2,  6   -45.0
                     4,  3   -45.0
                     3,  4   -43.0
                     1, 12   -41.0
20   4    100   1
                    10,  2   -45.0
                     5,  4   -42.0
20   4    100   2
20   4    200   1
20   4    200   2
                     5,  4   -45.0
20   4    500   1
                     4,  5   -39.0
                     2, 10   -37.0
20   4    500   2
20   4   1000   1
                     5,  4   -40.0
20   4   1000   2
                     4,  5   -46.0
                     2, 10   -42.0
                     5,  4   -41.0
                    10,  2   -40.0
20   4   1000   3
                     2, 10   -45.0
20   4   1000   4
                     2, 10   -45.0
20   5    100   1
20   5    100   2
20   5    200   1
20   5    200   2
20   5    500   1
20   5    500   2
20   5   1000   1
20   5   1000   2
20   5   1000   3
20   5   1000   4
20   5   2000   1
20   5   2000   2
                     5,  4   -61.5
20   5   2000   3
                    10,  2   -69.5
20   5   2000   4
36   5    100   1
36   5    100   2
36   5    200   1
36   5    200   2
36   5    500   1
36   5    500   2
36   5   1000   1
36   5   1000   2
36   5   1000   3
36   5   1000   4
36   5   2000   1
36   5   2000   2
                     3, 12   -58.5
                    18,  2   -51.5
36   5   2000   3
36   5   2000   4
36   5   2000   5
36   5   2000   6
                     6,  6   -68.5
                     4,  9   -65.5
36   5   2000   7
36   5   2000   8
                     3, 12   -57.5
                     9,  4   -53.5
36   6    100   1
36   6    100   2
36   6    200   1
36   6    200   2
36   6    500   1
36   6    500   2
36   6   1000   1
36   6   1000   2
36   6   1000   3
36   6   1000   4
36   6   2000   1
36   6   2000   2
36   6   2000   3
36   6   2000   4
36   6   2000   5
36   6   2000   6
36   6   2000   7
36   6   2000   8
81   6    100   1
81   6    100   2
81   6    200   1
81   6    200   2
81   6    500   1
81   6    500   2
81   6   1000   1
81   6   1000   2
81   6   1000   3
81   6   1000   4
81   6   2000   1
81   6   2000   2
81   6   2000   3
81   6   2000   4
81   6   2000   5
81   6   2000   6
81   6   2000   7
81   6   2000   8
81   7    100   1
81   7    100   2
81   7    200   1
81   7    200   2
81   7    500   1
81   7    500   2
81   7   1000   1
81   7   1000   2
81   7   1000   3
81   7   1000   4
                    No embedding found
81   7   2000   1
81   7   2000   2
81   7   2000   3
81   7   2000   4
81   7   2000   5
81   7   2000   6
81   7   2000   7
81   7   2000   8

Comments

5 comments
  • Thanks for posting this, Scott. Note that even excited states may be "correct" answers, since only a subset of the variables represents the factors sought.

  • The factoring Jupyter notebook says:

    "This example uses a pre-calculated minor-embedding. ... ... ... The stored embedding we used was selected for its good performance from many candidate embeddings generated by a heuristic algorithm from dwave-ocean-sdk called minorminer (GitHub repo). Alternatively, we can use this tool here to generate a new embedding that may or may not have as good performance. Here we will ask for more samples from the QPU by raising the num_reads parameter from 50 to 1000 to increase our chances of getting a correct answer."

    I'm glad that the notebook didn't go too far down this rabbit hole, but now I want to know more.

    How do you assess the quality of an embedding obtained from minorminer?  Is there any way to check its fitness by examining it?  Or do you just have to try it and see how well it works compared with other embeddings?

    In the little experiment I reported on above, I called minorminer to obtain an embedding every time I submitted a factoring problem to the QPU.  I regret now that I didn't save the embeddings -- but I was just getting my feet wet.  I'm wondering whether I got a different embedding every time.

    As a follow-up experiment, I think I'll do the following.  Obtain several embeddings from minorminer and store them or record their details.  For each embedding, submit the same set of factoring experiments to the QPU and record the results.  Tabulate and graph the results to attempt to discern which embedding works best.

    Is this a good idea?
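    The plan above could be sketched roughly as follows.  Each embedding returned by minorminer's find_embedding is a dict mapping each source variable to its chain (a list of target qubits), so a collection of them can be saved and reloaded with plain JSON (the helper names here are illustrative):

    ```python
    import json

    # JSON object keys must be strings, so convert variable names on the
    # way out; chains serialize naturally as lists of qubit indices.
    def save_embeddings(embeddings, path):
        with open(path, "w") as f:
            json.dump([{str(v): list(chain) for v, chain in emb.items()}
                       for emb in embeddings], f)

    def load_embeddings(path):
        with open(path) as f:
            return json.load(f)

    # e.g. embs = [minorminer.find_embedding(source_edges, target_edges)
    #              for _ in range(20)]
    embs = [{"x0": [0, 4], "x1": [1]}, {"x0": [2], "x1": [3, 7]}]
    save_embeddings(embs, "embeddings.json")
    print(load_embeddings("embeddings.json") == embs)  # True
    ```

    Each stored embedding can then be passed to the sampler (e.g. via FixedEmbeddingComposite) so the same problem runs on the same embedding across experiments.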

  • The first two metrics I would use for embedding quality are maximum chain length and average chain length.

    max(len(chain) for chain in embedding.values())                   # maximum chain length
    sum(len(chain) for chain in embedding.values()) / len(embedding)  # average chain length

    We care about maximum chain length because long chains 'break' more easily, so it makes sense to minimize the length of the longest chain.
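    On a toy embedding, those two metrics look like this (the embedding here is made up for illustration):

    ```python
    # Toy embedding: three source variables with chains of length 2, 1, and 3
    embedding = {"a": [0, 4], "b": [1], "c": [2, 5, 6]}

    max_chain = max(len(chain) for chain in embedding.values())
    avg_chain = sum(len(chain) for chain in embedding.values()) / float(len(embedding))

    print(max_chain, avg_chain)  # 3 2.0
    ```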

    Beyond that, assessing embedding quality is more of an art than a science. It is worth noting that chains do not necessarily behave like ideal qubits, and their idiosyncrasies can affect the quality of the returned solutions. See

    https://www.dwavesys.com/media/22zdsrbs/14-1020a-a_virtual_graphs_for_high_performance_embedded_topologies.pdf

    https://www.dwavesys.com/media/l0tjzis2/14-1002a_b_tr_boosting_integer_factorization_via_quantum_annealing_offsets.pdf

    for some examples/strategies.


    In practice, one can obtain good results by simply trying a large number of embeddings and seeing which gives the best answer!

  • Hi Alex,

    Thank you very much!

    In the little project I'm working on, I build 8 copies of a circuit, each with 16 inputs and 8 outputs, and chain them together.  I specify some or all of the 16 inputs to the first circuit, some or all of the 8 outputs from the last circuit, and leave some unfixed, then run the circuit forward, backward, or in both directions at once.

    It's hard for minorminer to find an embedding for this circuit.  When minorminer succeeds in finding an embedding, sometimes the QPU finds valid solutions.

    I'll try calling minorminer many times until I have a large collection of embeddings.  I'll rank the embeddings by their metrics, try the most promising ones, and see how well they do.
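    The ranking step could look something like this, using the two metrics Alex suggested (a sketch; the function name and sample embeddings are made up):

    ```python
    def rank_embeddings(embeddings):
        """Sort embeddings by (max chain length, avg chain length), best first."""
        def key(emb):
            lengths = [len(chain) for chain in emb.values()]
            return (max(lengths), sum(lengths) / float(len(lengths)))
        return sorted(embeddings, key=key)

    candidates = [
        {"a": [0, 1, 2], "b": [3]},   # max 3, avg 2.0
        {"a": [0, 1], "b": [2, 3]},   # max 2, avg 2.0
        {"a": [0], "b": [1, 2]},      # max 2, avg 1.5
    ]
    print(rank_embeddings(candidates)[0])  # {'a': [0], 'b': [1, 2]}
    ```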

    Scott

  • Alex,

    Thank you again for your guidance on metrics of embeddings.  Very helpful!

    To others who may read this: under Python 2, I suggest using float() to calculate a double-precision value for average chain length; otherwise integer division is performed.  (Under Python 3, the / operator already performs true division.)

    float(sum(len(chain) for chain in embedding.values())) / float(len(embedding))

    Scott

