API request times out when running hundreds of runnables in parallel using hybrid.Map
Hello D-Wave community,
I am trying to solve a binary optimization problem with 21k variables.
Since this problem does not fit into a single QPU run, after creating my BQM I decompose it into subproblems of size 100 using EnergyImpactDecomposer:
import hybrid  # dwave-hybrid

sub_size = 100
initial_state = hybrid.State.from_problem(bqm)
workflow = hybrid.Unwind(hybrid.EnergyImpactDecomposer(size=sub_size))
states_updated = workflow.run(initial_state).result()

len(states_updated)                 # outputs 214
len(states_updated[0].subproblem)   # outputs 100
states_updated is then a States list of length 214, which makes sense given the problem has roughly 21k variables (214 subproblems of 100 variables each).
Then I define a merge_substates function to include in the workflow:

def merge_substates(_, substates):
    a, b = substates
    return a.updated(
        subsamples=hybrid.hstack_samplesets(a.subsamples, b.subsamples))

qpu_subsampler = hybrid.Map(
    hybrid.QPUSubproblemAutoEmbeddingSampler()
) | hybrid.Reduce(
    hybrid.Lambda(merge_substates)
) | hybrid.SplatComposer()
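To make the Reduce step easier to follow: conceptually it folds the per-subproblem samples into one sample over the union of variables. Here is a plain-Python toy stand-in I wrote for that folding (my own illustration, not the real hstack_samplesets API, and using dicts instead of sample sets):

```python
from functools import reduce

def hstack_toy(a, b):
    """Toy stand-in: merge two {variable: value} samples side by side."""
    merged = dict(a)
    merged.update(b)
    return merged

# Three toy subsamples over disjoint variables, folded into one sample.
subsamples = [{"x0": 1, "x1": 0}, {"x2": 1}, {"x3": 0}]
combined = reduce(hstack_toy, subsamples)
# combined now covers all variables: {"x0": 1, "x1": 0, "x2": 1, "x3": 0}
```

The real workflow does the same kind of pairwise folding, just on SampleSet objects.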
and run the sampler:
qpu_result = qpu_subsampler.run(states_updated).result()
When I call qpu_subsampler.run(), I get timeout errors with the following messages:
TimeoutError: _ssl.c:980: The handshake operation timed out
ReadTimeout: HTTPSConnectionPool(host='na-west-1.cloud.dwavesys.com', port=443): Read timed out. (read timeout=60.0)
RequestTimeout: API request timed out
Instead of using hybrid.Map, I can run the subproblems sequentially, but that takes hours for the whole problem due to network overhead, and at that point there is no benefit to using the quantum annealer: classical heuristic algorithms do a better job in terms of computing time.
I suspect the timeout occurs because the D-Wave server throttles or rejects my requests when I submit too many jobs at once, but this is just a conjecture.
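If the conjecture is right, one thing I have been considering (but have not verified) is submitting the 214 states in smaller batches rather than all at once. A plain-Python sketch of the batching; chunk() is my own helper, not part of dwave-hybrid, and the batch size of 50 is arbitrary:

```python
def chunk(seq, n):
    """Yield successive slices of length n from seq."""
    for i in range(0, len(seq), n):
        yield seq[i:i + n]

# With 214 states and batches of 50, this gives 5 batches,
# the last one holding the remaining 14 states.
batches = list(chunk(list(range(214)), 50))
len(batches)      # 5
len(batches[-1])  # 14
```

I would then call qpu_subsampler.run() once per batch, but I do not know whether this actually avoids the timeout or just slows everything down again.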
Does anyone know what is causing the timeout, and is there a way to work around it so that I can speed up the sampling process?