How is it possible to run multiple instances in parallel, with different Ising models (h and J) and the other parameters fixed, using the multiprocessing tool?


```
ValueError                                Traceback (most recent call last)
<ipython-input-201-6f7b82ec22ab> in <module>
      1 from dwave.system import TilingComposite
      2 sample = DWaveCliqueSampler(token='xxxxxxxxxxxxxxxxxxx', endpoint='https://cloud.dwavesys.com/sapi/', solver={'topology__type': 'chimera'})
----> 3 sampler = EmbeddingComposite(TilingComposite(sample, 4, 4))

~\anaconda3\lib\site-packages\dwave\system\composites\tiling.py in __init__(self, sampler, sub_m, sub_n, t)
    111             # we could also just tile onto the unstructured sampler but in that case we would need
    112             # to know how many tiles to use
--> 113             raise ValueError("given child sampler should be structured")
    114         self.children = [sampler]
    115

ValueError: given child sampler should be structured
```

`DWaveCliqueSampler` finds the embeddings for all the supported cliques on the underlying QPU, so you use it directly, without wrapping it in `EmbeddingComposite`.

To run parallel sampling of a problem embedded in a C4 with a clique embedding, it's recommended to find the clique embedding using minorminer's `find_clique_embedding()` on an ideal C4 generated with networkx's `chimera_graph()`, and then use that with `FixedEmbeddingComposite` to tile to a particular QPU.

For example, you can use a clique embedding on a random fully connected 7-node ran1 problem. To adapt that example:

- replace `bqm = ran_r(1, 7)` with your problem and the size of the needed clique
- replace `c4 = chimera_graph(4)` with a different target tile; i.e., depending on the size and structure of the problem, find an appropriate embedding target of MxN adjacent Chimera unit cells, and use that as the block that is replicated across the QPU

Here's the previous example modified for 36 nodes:

```python
from dwave.system import DWaveSampler, TilingComposite, FixedEmbeddingComposite
from dwave_networkx import chimera_graph
from dimod.generators import ran_r
from minorminer.busclique import find_clique_embedding
```

## Comments

**Mario G:** Hello Jane,

Here is a link to a similar question and answer based on my experience: https://support.dwavesys.com/hc/en-us/community/posts/4406749973911-Run-multiple-instances-in-parallel

M.

**jane g:** Hi Mario,

I've tried to follow your instructions but I think I'm missing something.

For example, say I have a function that takes some variables and returns h and J.

Suppose I have 5 different sets of variables that I want to pass to that function.

How can I use the multiprocessing tool to solve these 5 instances on the QPU at once?

Thanks

**Mario G:** Hi Jane,

Unless I am mistaken, the QPU only solves one problem at a time (albeit very fast). Multiprocessing can be useful to prepare multiple problems in parallel on the classical hardware; however, once dispatched to the D-Wave servers, I believe each problem is scheduled sequentially.

Perhaps a D-Wave team member can pitch in on this topic. For example, the Hybrid Solver breaks problems into individual QPU sub-problems. I am assuming that these sub-problems are sent in sequence to the QPU.

Assuming I am correct, I do not think you can "solve 5 problem instances on the QPU at once".
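To illustrate the division of labour, here is a standard-library-only sketch. The `build_instance` function and its arguments are hypothetical placeholders for whatever classical work produces h and J, and the submission loop is commented out since it needs a configured sampler:

```python
from multiprocessing import Pool

def build_instance(scale):
    # Hypothetical stand-in for the classical construction of one problem:
    # a 3-spin ferromagnetic chain with couplings scaled per instance.
    h = {i: 0.0 for i in range(3)}
    J = {(0, 1): -scale, (1, 2): -scale}
    return h, J

if __name__ == "__main__":
    # Prepare 5 problem instances in parallel on the classical side.
    with Pool() as pool:
        instances = pool.map(build_instance, [0.2, 0.4, 0.6, 0.8, 1.0])

    # Submission to the QPU is still one problem at a time:
    # for h, J in instances:
    #     sampleset = sampler.sample_ising(h, J, num_reads=100)
```
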

But again, a D-Wave person could clarify.

Mario

**jane g:** Hi Mario,

As far as I understood from this paper (Section 4): https://arxiv.org/abs/2001.04014

It is possible to run multiple instances in parallel, but I'm not sure how it is done.

Regards

**Mohammad D:** An alternative to the multiprocessing tool is to run parallel samples using Ocean's built-in `TilingComposite`.

**jane g:** Hi Mohammad D,

I've tried to use it but it gives me the following error:

`ValueError: given child sampler should be structured`

**Mohammad D:** Hi Jane,

In order to investigate further, I will need a copy of the code and the exact debug error(s) you are receiving, including line numbers, etc.

**jane g:** Hi Mohammad D,

I have also tried different samplers, but I still get the same error.

Thanks

**Mohammad D:** Hi Jane,

`DWaveCliqueSampler` finds the embeddings for all the supported cliques on the underlying QPU, so you use it directly, without wrapping it in `EmbeddingComposite`.

To run parallel sampling of a problem embedded in a C4 with a clique embedding, it's recommended to find the clique embedding using minorminer's `find_clique_embedding()` on an ideal C4 generated with networkx's `chimera_graph()`, and then use that with `FixedEmbeddingComposite` to tile to a particular QPU.

For example, to use a clique embedding on a random fully connected 7-node ran1 problem:

The above should give `len(sampleset)=100` for our QPUs with high yield, or a bit less for bigger cliques in the C4.

**jane g:** Hi Mohammad D,

Thanks for the explanation!

In the case of a larger number of nodes, for example 36, could you please show me how it can be done using the same example as above?

Regards

**Mohammad D:** Hi Jane,

Here's how to modify the previous example:

- Replace `bqm = ran_r(1, 7)` with your problem and the size of the needed clique.
- Replace `c4 = chimera_graph(4)` with a different target tile; i.e., depending on the size and structure of the problem, find an appropriate embedding target of MxN adjacent Chimera unit cells, and use that as the block that is replicated across the QPU.

Here's the previous example modified for 36 nodes:

For larger graphs a non-clique embedding will work better, i.e. minorminer's `find_embedding()`; reserve clique embeddings for very dense problem graphs.
