API request times out when running hundreds of runnables in parallel using hybrid.Map

Hello D-Wave community,

I am trying to solve a binary optimization problem with 21k variables.

As this problem does not fit into a single QPU run, after creating my bqm I decompose the problem into smaller subproblems of size 100 using EnergyImpactDecomposer.

sub_size = 100
initial_state = hybrid.State.from_problem(bqm)
workflow = hybrid.Unwind(hybrid.EnergyImpactDecomposer(size=sub_size))
states_updated = workflow.run(initial_state).result() 


len(states_updated) # outputs 214
len(states_updated[212].subproblem) # outputs 100

states_updated is then a data structure of length 214, which makes sense as the number of variables is about 21k.
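The subproblem count is consistent with a ceiling division of the variable count by the subproblem size. A quick sanity check in plain Python (the exact figure of 21,400 variables is an assumption, inferred from 214 subproblems of size 100):

```python
import math

num_variables = 214 * 100  # assumed: the "21k" variables round to 21,400
sub_size = 100

# Unwind(EnergyImpactDecomposer(...)) peels off size-100 subproblems until
# the whole problem is covered, so the branch count is a ceiling division.
num_subproblems = math.ceil(num_variables / sub_size)
print(num_subproblems)  # 214
```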
Then I define a merge_substates function which will be included in the workflow.

def merge_substates(_, substates):
    print("merge substates")
    a, b = substates
    return a.updated(subsamples=hybrid.hstack_samplesets(a.subsamples, b.subsamples))
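As an illustration of what the reduction step does: horizontal stacking combines sample sets over disjoint variable sets into one sample over their union. This toy sketch mimics that effect on plain dicts; it is not the dwave-hybrid implementation of hstack_samplesets, and the variable names are hypothetical:

```python
def hstack_samples(sample_a, sample_b):
    """Toy stand-in for hybrid.hstack_samplesets on single samples:
    merge assignments over (assumed disjoint) variable sets."""
    merged = dict(sample_a)
    merged.update(sample_b)
    return merged

# hypothetical subsamples from two different subproblems
a = {"x0": 1, "x1": 0}
b = {"x2": 0, "x3": 1}
print(hstack_samples(a, b))  # {'x0': 1, 'x1': 0, 'x2': 0, 'x3': 1}
```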
Finally, I define my workflow with hybrid.Map as below.
 
qpu_subsampler = hybrid.Map(
    hybrid.QPUSubproblemAutoEmbeddingSampler()
) | hybrid.Reduce(
    hybrid.Lambda(merge_substates)
) | hybrid.SplatComposer()
 

and run the sampler.

qpu_result = qpu_subsampler.run(states_updated).result()

 

After calling qpu_subsampler.run(), I instead get a timeout error with the messages below.

TimeoutError: _ssl.c:980: The handshake operation timed out
ReadTimeout: HTTPSConnectionPool(host='na-west-1.cloud.dwavesys.com', port=443): Read timed out. (read timeout=60.0)
RequestTimeout: API request timed out

 

Instead of using hybrid.Map, I can run this problem sequentially, but due to network overhead that takes hours for the whole problem, and in that case there is no point in using the quantum annealer because classical heuristic algorithms do a better job in terms of computing time.
I suspect the timeout error occurs because the D-Wave server blocks my requests when I submit too many jobs at once, but this is just a conjecture.
Does anyone know what is causing the timeout error, and is there any way to work around it so that I can speed up the sampling process?

Comments
  • Hi Jeung,
     
    Handshake timeout errors typically occur due to network issues. In your case, states_updated has length 214, so hybrid.Map would create 214 instances of DWaveSampler and Client. Since hybrid doesn't limit how many samplers it launches in parallel, this can throttle the network as well as run into out-of-memory issues.
     
     
    On the other hand, even when the network is not overloaded, SAPI cannot handle 214 problems submitted in parallel much faster than the same problems submitted sequentially.
     
    Another approach is to reuse a single QPU sampler, like this:

    from dwave.system import DWaveSampler

    qpu = DWaveSampler()
    qpu_subsampler = hybrid.Map(
        hybrid.QPUSubproblemAutoEmbeddingSampler(qpu_sampler=qpu)
    ) | hybrid.Reduce(
        hybrid.Lambda(merge_substates)
    ) | hybrid.SplatComposer()

     
    However, it's worth noting that DWaveSampler is not thread-safe, so this may lead to other issues.
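    One generic way to guard a shared, non-thread-safe object is to serialize access with a lock. This is a general Python pattern, not a documented dwave-hybrid feature; LockedSampler and FakeSampler below are hypothetical names used only for illustration:

```python
import threading

class LockedSampler:
    """Serialize access to a (hypothetical) non-thread-safe sampler
    so concurrent callers take turns instead of overlapping."""
    def __init__(self, sampler):
        self._sampler = sampler
        self._lock = threading.Lock()

    def sample(self, problem):
        with self._lock:  # one caller holds the sampler at a time
            return self._sampler.sample(problem)

class FakeSampler:
    """Stand-in for a real sampler; counts how often it is entered."""
    def __init__(self):
        self.calls = 0
    def sample(self, problem):
        self.calls += 1
        return problem

locked = LockedSampler(FakeSampler())
threads = [threading.Thread(target=locked.sample, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(locked._sampler.calls)  # 8
```

    Note that the lock serializes submissions, so this trades parallel throughput for safety.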
     
    I hope this helps. Please do let us know if you have any questions. 
     
    With kind regards,
    Tanjid

