Stitching takes ages even for small problems. How can I scale?

I ran this code, following basic examples:

import dwavebinarycsp as dbc

def logic_circuit1(a, b, c, d, z):
    # z must equal the 4-clause CNF (a∨b∨c)∧(b∨¬c∨d)∧(¬a∨b∨d)∧(¬b∨d∨¬a)
    nota = not a
    notb = not b
    notc = not c
    c1 = a or b or c
    c2 = b or notc or d
    c3 = nota or b or d
    c4 = notb or d or nota
    f = c1 and c2 and c3 and c4
    return z == f

def logic_circuit2(a, b, c, d, z):
    not1 = not b
    or2 = b or c
    and3 = a and not1
    or4 = or2 or d
    and5 = and3 and or4
    not6 = not or4
    or7 = and5 or not6
    return z == or7

csp1 = dbc.ConstraintSatisfactionProblem(dbc.BINARY)
csp1.add_constraint(logic_circuit1, ['a', 'b', 'c', 'd', 'z'])

csp2 = dbc.ConstraintSatisfactionProblem(dbc.BINARY)
csp2.add_constraint(logic_circuit2, ['a', 'b', 'c', 'd', 'z'])

and then:

bqm1 = dbc.stitch(csp1)
bqm2 = dbc.stitch(csp2)

The second one took about 10 minutes, and I had to interrupt the first after an hour (on my brand-new laptop with 16 GB of RAM and an i7).

I mostly followed the code examples and documentation, but if it takes an hour even for examples this small (a 3-SAT instance with 4 variables and 4 clauses), the complexity of the algorithm behind stitch must be atrocious. I don't really understand how it can be that slow, even if the underlying problem is hard.

How are we supposed to use these constructions for problems at a realistic scale?

Thanks for your answer



  • Hello,

    One thing that can help speed up the stitching process is to lower the minimum classical gap.

    The default min_classical_gap is 2, so during stitching it tries to create a model in which that gap is respected, which can be tricky.

    Reducing the gap can speed up the process.

    Here is a code example of how to do this:

    bqm = dbc.stitch(csp, min_classical_gap=0.5)

    You can try experimenting with different values, but for this problem in particular, 0.5 seemed to work.
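    For intuition about what this parameter means: min_classical_gap is the minimum energy separation between assignments that satisfy a constraint and assignments that violate it. Here is a stdlib-only sketch that computes the classical gap of one penalty model; the AND gadget below is a standard textbook example used for illustration, not code from this thread or output from stitch:

```python
from itertools import product

# A common QUBO penalty model for the constraint z == (a AND b):
#   E(a, b, z) = a*b + 3*z - 2*z*(a + b)
def energy(a, b, z):
    return a * b + 3 * z - 2 * z * (a + b)

feasible = []    # energies of assignments satisfying the constraint
infeasible = []  # energies of assignments violating it
for a, b, z in product((0, 1), repeat=3):
    (feasible if z == (a and b) else infeasible).append(energy(a, b, z))

# The classical gap: how far the cheapest infeasible assignment sits
# above the cheapest feasible one.
gap = min(infeasible) - min(feasible)
print(gap)  # -> 1 for this gadget
```

    Requiring a larger gap constrains which penalty models are acceptable, which is part of why stitching with the default gap of 2 can take much longer than with a smaller one.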

    That said, it still does take a bit of time for the stitch function to form the BQM.

    I apologize for the delay in getting back. 

    I am still looking into a better solution to speed this problem up.

    One way to do this is to try to break the larger CSPs into smaller ones where possible.

    I will update with any findings I come across.

    I hope this was helpful!

  • Additionally, you can break your function down into constraints over fewer variables each.

    This will make smaller intermediate penalty model BQMs (i.e. the things that are created when you add a constraint).

    These will allow a much faster stitch.

    I was able to make a model that stitches very fast this way.

    As an example, you can break one constraint into two smaller constraints that share an intermediate variable.
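    To make the idea concrete, here is a hypothetical split of logic_circuit2; the split point and variable names are my own illustration, not the code this comment originally referenced. The intermediate wire or4 is exposed as a CSP variable shared by two smaller constraints, and a stdlib-only brute-force check confirms the two formulations agree:

```python
from itertools import product

def logic_circuit2(a, b, c, d, z):
    # Original single constraint over 5 variables.
    not1 = not b
    or2 = b or c
    and3 = a and not1
    or4 = or2 or d
    and5 = and3 and or4
    not6 = not or4
    or7 = and5 or not6
    return z == or7

# Split: expose the intermediate wire `or4` as a CSP variable.
def first_half(b, c, d, or4):
    return or4 == ((b or c) or d)

def second_half(a, b, or4, z):
    and5 = (a and not b) and or4
    or7 = and5 or (not or4)
    return z == or7

# Brute-force check: projecting the split CSP's solutions onto
# (a, b, c, d, z) recovers exactly the original solutions.
orig = {t for t in product((0, 1), repeat=5) if logic_circuit2(*t)}
split = {(a, b, c, d, z)
         for a, b, c, d, or4, z in product((0, 1), repeat=6)
         if first_half(b, c, d, or4) and second_half(a, b, or4, z)}
assert orig == split

# With dwavebinarycsp, each half would then be added separately, e.g.:
# csp.add_constraint(first_half, ['b', 'c', 'd', 'or4'])
# csp.add_constraint(second_half, ['a', 'b', 'or4', 'z'])
```

    Each smaller constraint produces a smaller intermediate penalty model, which is what makes the subsequent stitch faster.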

    I checked on a smaller problem that the results were equivalent, by using an ExactSolver and comparing the targets before and after splitting constraints into smaller ones.

    Let me know if you get stuck.

    I hope this helped!

