# Stitching takes ages even for small problems. How to scale?

I ran this code, following basic examples:

```python
import dwavebinarycsp as dbc

def logic_circuit1(a, b, c, d, z):
    nota = not a
    notb = not b
    notc = not c
    c1 = a or b or c
    c2 = b or notc or d
    c3 = not a or b or d
    c4 = notb or d or nota
    f = c1 and c2 and c3 and c4
    return (z == f)

def logic_circuit2(a, b, c, d, z):
    not1 = not b
    or2 = b or c
    and3 = a and not1
    or4 = or2 or d
    and5 = and3 and or4
    not6 = not or4
    or7 = and5 or not6
    return (z == or7)

csp1 = dbc.ConstraintSatisfactionProblem(dbc.BINARY)
csp1.add_constraint(logic_circuit1, ['a', 'b', 'c', 'd', 'z'])
csp2 = dbc.ConstraintSatisfactionProblem(dbc.BINARY)
csp2.add_constraint(logic_circuit2, ['a', 'b', 'c', 'd', 'z'])
```

and then:

```python
bqm1 = dbc.stitch(csp1)
bqm2 = dbc.stitch(csp2)
```

The second one took about 10 minutes, and I had to interrupt the first after an hour (on my brand-new laptop with 16 GB of RAM and an i7).

I mostly followed the code examples and documentation, but if it takes an hour even for such small examples (3-SAT with 4 variables and 4 constraints), the complexity of the algorithm in `stitch` must be atrocious. I don't really understand how it can be that slow, even if the problem is hard.

How are we supposed to use this kind of construction for scalable problems?

• Hello,

One thing that can help speed up the stitching process is to change the minimum classical gap.

The default value of `min_classical_gap` is 2, so during stitching the library tries to build a model that respects that gap, which can be difficult.

Reducing the gap can help speed up the process.

Here is a code example of how to do this:

`bqm = dbc.stitch(csp, min_classical_gap=0.5)`

You can try experimenting with different values, but for this problem in particular, 0.5 seemed to work.

That said, it still does take a bit of time for the stitch function to form the BQM.

I apologize for the delay in getting back.

I am still looking into a better solution to speed this problem up.

One way to do this is to try to break the larger CSPs into smaller ones where possible.

I will update with any findings I come across.

• Additionally, you can break your function down so that each constraint involves fewer variables.

This keeps the intermediate penalty-model BQMs (the objects created when you add a constraint) small.

These smaller models allow a much faster stitch.

I was able to make a model that stitches very fast this way.

This comment shows how you can break a constraint into two smaller constraints as an example:
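For illustration, here is a minimal sketch of that idea applied to `logic_circuit2` from the question. It introduces a hypothetical auxiliary variable `or4` (the output of the OR over `b`, `c`, `d`) so the single 5-variable constraint becomes two smaller constraints; the function names `or_gate` and `rest_of_circuit` and the decomposition itself are my own choices, not taken from the thread. The exhaustive check at the end verifies the split is logically equivalent to the original constraint.

```python
from itertools import product

def or_gate(b, c, d, or4):
    # Constraint 1: or4 must equal (b or c) or d
    return or4 == ((b or c) or d)

def rest_of_circuit(a, b, or4, z):
    # Constraint 2: the remainder of logic_circuit2, expressed over or4
    not1 = not b
    and3 = a and not1
    and5 = and3 and or4
    not6 = not or4
    or7 = and5 or not6
    return z == or7

def logic_circuit2(a, b, c, d, z):
    # Original single constraint, kept for comparison
    not1 = not b
    or2 = b or c
    and3 = a and not1
    or4 = or2 or d
    and5 = and3 and or4
    not6 = not or4
    or7 = and5 or not6
    return z == or7

# Sanity check: for every assignment, fixing or4 consistently with
# constraint 1 makes the pair equivalent to the original constraint.
for a, b, c, d, z in product([False, True], repeat=5):
    or4 = (b or c) or d          # the unique value satisfying or_gate
    assert or_gate(b, c, d, or4)
    assert rest_of_circuit(a, b, or4, z) == logic_circuit2(a, b, c, d, z)

# With dwavebinarycsp, the two pieces would then be added separately, e.g.:
#   csp = dbc.ConstraintSatisfactionProblem(dbc.BINARY)
#   csp.add_constraint(or_gate, ['b', 'c', 'd', 'or4'])
#   csp.add_constraint(rest_of_circuit, ['a', 'b', 'or4', 'z'])
```

Each constraint now spans at most 4 variables, so the penalty models `stitch` has to build are much smaller; the cost is one extra variable (`or4`) in the final BQM.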