
Duplicate results with high num_reads

I am setting num_reads to 10,000 for each query.

I am getting duplicate results. My guess is that the high num_reads is being broken into multiple queries, and the results are not being stitched together correctly. Otherwise, should I assume duplicate results can occur and I need to stitch things together at the application level?

Here is an example of the results (from response.data()) I am seeing from a single sample_ising() using num_reads=10000:

Sample(sample={'a': -1, 'n1': -1, 'n2': -1, 'n3': -1}, energy=-5.25, num_occurrences=6)

Sample(sample={'a': -1, 'n1': -1, 'n2': -1, 'n3': 1}, energy=-4.25, num_occurrences=1)
Sample(sample={'a': -1, 'n1': -1, 'n2': -1, 'n3': 1}, energy=-4.25, num_occurrences=5376)

Sample(sample={'a': -1, 'n1': 1, 'n2': -1, 'n3': -1}, energy=-4.25, num_occurrences=1)
Sample(sample={'a': -1, 'n1': 1, 'n2': -1, 'n3': -1}, energy=-4.25, num_occurrences=1)
Sample(sample={'a': -1, 'n1': 1, 'n2': -1, 'n3': -1}, energy=-4.25, num_occurrences=2)

Sample(sample={'a': -1, 'n1': 1, 'n2': -1, 'n3': 1}, energy=1.75, num_occurrences=4)
Sample(sample={'a': 1, 'n1': -1, 'n2': 1, 'n3': 1}, energy=-4.25, num_occurrences=4)

Sample(sample={'a': 1, 'n1': 1, 'n2': 1, 'n3': -1}, energy=-4.25, num_occurrences=2)
Sample(sample={'a': 1, 'n1': 1, 'n2': 1, 'n3': -1}, energy=-4.25, num_occurrences=4602)

Sample(sample={'a': 1, 'n1': 1, 'n2': 1, 'n3': 1}, energy=-5.25, num_occurrences=1)

(I sorted and grouped the samples so the duplicates stand out more easily.)

Comments (4)
  • Hi Thomas, which sampler are you using?

  • EmbeddingComposite(DWaveSampler()) - Solver is DW_2000Q_2.

    Sample code:

    from dwave.system import DWaveSampler, EmbeddingComposite

    sampler = EmbeddingComposite(DWaveSampler(endpoint=URL, token=TOKEN, solver=SOLVER))

    bias = +0.0
    pc = -3.00  # coupling of each n to 'a'
    nc = +1.25  # coupling among n1, n2, n3
    h = {'n1': bias, 'n2': bias, 'n3': bias, 'a': bias}
    J = {
        ('n1', 'a'): pc, ('n2', 'a'): pc, ('n3', 'a'): pc,
        ('n1', 'n2'): nc, ('n1', 'n3'): nc, ('n2', 'n3'): nc,
    }

    response = sampler.sample_ising(h, J, num_reads=10000)
    for sample in response.samples():
        print("{}".format(sample))
  • Right, this is an issue with the EmbeddingComposite.

    This happens when broken chains are resolved. As an example, let's use a very simple problem:

    h = {'a': 1}
    J = {}

    We'll now use an embedding {'a': [0, 4]}, mapping our logical variable 'a' to qubits 0 and 4. This creates a new 'embedded' problem:

    h = {0: .5, 4: .5}
    J = {(0, 4): -1}
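
    For reference, dwave-system's dwave.embedding utilities can construct this embedded problem directly. A minimal sketch, assuming only that qubits 0 and 4 share a coupler:

    from dwave.embedding import embed_ising

    h = {'a': 1}
    J = {}
    embedding = {'a': [0, 4]}     # logical 'a' -> physical qubits 0 and 4
    adjacency = {0: {4}, 4: {0}}  # the two qubits are coupled

    # chain_strength=1.0 gives the chain coupler J[(0, 4)] = -1 shown above;
    # the linear bias on 'a' is spread evenly across the chain
    target_h, target_J = embed_ising(h, J, embedding, adjacency, chain_strength=1.0)
    print(target_h)  # {0: 0.5, 4: 0.5}
    print(target_J)  # {(0, 4): -1.0}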

    We then sample 1000 times on the system. Say we get the following results for our embedded problem:

    Sample(sample={0: -1, 4: -1}, energy=-2.0, num_occurrences=496)
    Sample(sample={0: +1, 4: +1}, energy=0.0, num_occurrences=498)
    Sample(sample={0: -1, 4: +1}, energy=1.0, num_occurrences=4)
    Sample(sample={0: +1, 4: -1}, energy=1.0, num_occurrences=2)

    So in this case, 6 of our samples have a 'broken' chain (the two qubits in the chain disagree). When we unembed the samples, we need to resolve those broken chains somehow; the EmbeddingComposite specifically takes a majority vote over each chain. Transforming each sample, we get:

    Sample(sample={'a': -1}, energy=-1.0, num_occurrences=496)
    Sample(sample={'a': +1}, energy=1.0, num_occurrences=498)
    Sample(sample={'a': -1}, energy=-1.0, num_occurrences=4)
    Sample(sample={'a': +1}, energy=1.0, num_occurrences=2)

    You can now see that by resolving the broken chains, the samples are no longer unique. We could add a step to merge the non-unique samples, but (for now) we don't, because checking the uniqueness of samples is slow. If you need unique rows, you can merge them at the application level, as sketched below.
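
    Here is a minimal sketch of that application-level merge, assuming dwave-system's dwave.embedding module and a dimod version that provides SampleSet.aggregate() (which merges identical samples and sums their num_occurrences):

    import dimod
    from dwave.embedding import unembed_sampleset
    from dwave.embedding.chain_breaks import majority_vote

    # The raw results for the embedded problem on qubits 0 and 4, as above
    embedded = dimod.SampleSet.from_samples(
        [{0: -1, 4: -1}, {0: +1, 4: +1}, {0: -1, 4: +1}, {0: +1, 4: -1}],
        vartype='SPIN',
        energy=[-2.0, 0.0, 1.0, 1.0],
        num_occurrences=[496, 498, 4, 2])

    # Unembed with majority vote (what the EmbeddingComposite does); broken
    # chains collapse onto values that may duplicate other rows
    source_bqm = dimod.BinaryQuadraticModel.from_ising({'a': 1}, {})
    unembedded = unembed_sampleset(embedded, {'a': [0, 4]}, source_bqm,
                                   chain_break_method=majority_vote)

    # Merge duplicates yourself: aggregate() returns a new SampleSet with
    # identical samples merged and their num_occurrences summed
    for datum in unembedded.aggregate().data(['sample', 'energy', 'num_occurrences']):
        print(datum)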

    I hope this helped!
  • That explains it - thanks.

