Answered
Duplicate results with high num_reads
I am setting num_reads to 10,000 for each query.
I am getting duplicate results. My guess is that the high num_reads is being broken into multiple queries and the results are not being stitched back together correctly. Otherwise, should I assume duplicate results can occur and that I need to stitch them together at the application level?
Here is an example of the results (from response.data()) I am seeing from a single sample_ising() call with num_reads=10000:
Sample(sample={'a': -1, 'n1': -1, 'n2': -1, 'n3': -1}, energy=-5.25, num_occurrences=6)
Sample(sample={'a': -1, 'n1': -1, 'n2': -1, 'n3': 1}, energy=-4.25, num_occurrences=1)
Sample(sample={'a': -1, 'n1': -1, 'n2': -1, 'n3': 1}, energy=-4.25, num_occurrences=5376)
Sample(sample={'a': -1, 'n1': 1, 'n2': -1, 'n3': -1}, energy=-4.25, num_occurrences=1)
Sample(sample={'a': -1, 'n1': 1, 'n2': -1, 'n3': -1}, energy=-4.25, num_occurrences=1)
Sample(sample={'a': -1, 'n1': 1, 'n2': -1, 'n3': -1}, energy=-4.25, num_occurrences=2)
Sample(sample={'a': -1, 'n1': 1, 'n2': -1, 'n3': 1}, energy=1.75, num_occurrences=4)
Sample(sample={'a': 1, 'n1': -1, 'n2': 1, 'n3': 1}, energy=-4.25, num_occurrences=4)
Sample(sample={'a': 1, 'n1': 1, 'n2': 1, 'n3': -1}, energy=-4.25, num_occurrences=2)
Sample(sample={'a': 1, 'n1': 1, 'n2': 1, 'n3': -1}, energy=-4.25, num_occurrences=4602)
Sample(sample={'a': 1, 'n1': 1, 'n2': 1, 'n3': 1}, energy=-5.25, num_occurrences=1)
(I sorted and partitioned the samples so the duplicates stand out more easily.)
Comments
Hi Thomas, which sampler are you using?
EmbeddingComposite(DWaveSampler()) - Solver is DW_2000Q_2.
Sample code:
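(The code from the original comment wasn't preserved; below is a minimal sketch of an equivalent call. The h and J values are placeholders, not the original problem.)

from dwave.system import DWaveSampler, EmbeddingComposite

# Placeholder Ising problem over the four logical variables seen above;
# the actual biases and couplings from the original post are unknown.
h = {'a': 0.0, 'n1': 0.0, 'n2': 0.0, 'n3': 0.0}
J = {('a', 'n1'): -1.0, ('a', 'n2'): -1.0, ('a', 'n3'): -1.0}

sampler = EmbeddingComposite(DWaveSampler())  # solver: DW_2000Q_2
response = sampler.sample_ising(h, J, num_reads=10000)

for datum in response.data():
    print(datum)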
Right, this is an issue with the EmbeddingComposite.
This happens when broken chains get resolved during unembedding. As an example, let's use a very simple problem.
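Say (hypothetically, since the original example block wasn't preserved) a single logical variable biased toward +1:

h = {'a': -1}  # one logical variable, biased toward a = +1
J = {}         # no couplers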
We'll now use an embedding {'a': [0, 4]}, mapping our logical variable 'a' to the physical qubits 0 and 4. This creates a new 'embedded' problem in which the two qubits are tied together by a strong chain coupler so that, ideally, they take the same value.
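Concretely, the embedding splits the logical bias across the two qubits and adds a ferromagnetic coupler between them to keep the chain aligned. A sketch using dwave.embedding.embed_ising (an assumption; the original comment may have constructed the embedded problem differently):

from dwave.embedding import embed_ising

h = {'a': -1}                 # the simple problem from above
J = {}
embedding = {'a': [0, 4]}     # logical 'a' -> physical qubits 0 and 4
adjacency = {0: {4}, 4: {0}}  # the two qubits share a coupler

target_h, target_J = embed_ising(h, J, embedding, adjacency, chain_strength=1.0)
# target_h -> {0: -0.5, 4: -0.5}  (logical bias split across the chain)
# target_J -> {(0, 4): -1.0}      (ferromagnetic chain coupler)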
We then sample 1000 times on the system. Say we get the following results for our embedded problem:
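(Illustrative counts, since the original data block wasn't preserved; the energies follow from the embedded h and J sketched above.)

{0: +1, 4: +1}   energy=-2.0   num_occurrences=900   (chain intact)
{0: -1, 4: -1}   energy= 0.0   num_occurrences=94    (chain intact)
{0: +1, 4: -1}   energy= 1.0   num_occurrences=4     (chain broken)
{0: -1, 4: +1}   energy= 1.0   num_occurrences=2     (chain broken)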
So in this case we see that 6 of our samples have a 'broken' chain, meaning the two qubits in the chain disagree. Now, when we unembed our samples, we need some rule to resolve the broken chains. The EmbeddingComposite specifically takes a majority vote over the qubits in each chain. So transforming each embedded sample we get...
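(Illustrative, continuing the counts above. For a two-qubit chain a broken chain is an exact tie, so the majority vote has to break the tie one way or the other; say the first broken row resolves to +1 and the second to -1.)

Sample(sample={'a': 1}, energy=-1.0, num_occurrences=900)
Sample(sample={'a': -1}, energy=1.0, num_occurrences=94)
Sample(sample={'a': 1}, energy=-1.0, num_occurrences=4)    <- duplicate of the first row
Sample(sample={'a': -1}, energy=1.0, num_occurrences=2)    <- duplicate of the second row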
You can now see that by resolving the broken chains, the samples are no longer unique. We could add a step to merge the non-unique samples, but (for now) we don't, because checking samples for uniqueness is slow.
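If you do want one row per unique sample at the application level, dimod's SampleSet.aggregate() merges duplicate samples and sums their num_occurrences (check that your installed dimod version provides it):

merged = response.aggregate()  # one row per unique sample, occurrences summed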
I hope this helped!
That explains it - thanks.