# Map coloring with 2 bits?

Hello, I have modified the map coloring demo (https://docs.ocean.dwavesys.com/en/latest/examples/map_coloring.html) to encode the colors using two bits instead of four. The constraints are the following:

```python
def ab_not_equal_cd(a, b, c, d):
    # Two-bit colors (a, b) and (c, d) must differ in at least one bit
    return a != c or b != d

for v, u in neighbors:
    variables = [v + '0', v + '1', u + '0', u + '1']
    csp.add_constraint(ab_not_equal_cd, variables)
```

Here is the complete code: https://github.com/Virtual-X/bin-quad-model/blob/master/mapcoloring2.py

The CSP has fewer variables and fewer constraints, and the BQM is smaller than the original. Nonetheless, the QPU is not able to solve it, even with a large number of samples, while the SimulatedAnnealingSampler solves it consistently. Are there requirements on the constraints that are no longer respected?

Many thanks

## Comments

David J: Hi Igor,

Thomas is correct. The answers returned from the QPU are probabilistic/nondeterministic, so there will be a variety of low energy states returned.

That said, they should be ordered from lowest to highest energy state.

There are several factors that will be relevant to the effectiveness of the algorithms you run on the QPU.

Since the QPU is a physical machine with physical properties, there are some limitations to the range of constraints and also to the length and connectedness of qubit chains.

This can mean that two logically equivalent statements might not have exactly the same behaviour when translated to the QPU, as they will have different energy characteristics.

Having a better understanding of how the qubits are connected will help to formulate problems in a way that runs better on our systems.

As suggested by Alex, a good way to evaluate an embedding is to look at its longest and average chain lengths.

You can do this by creating the embedding before sending it to the solver and iterating over the chains created in the embedding (remember that a chain is a sequence of physical qubits that together represent a single logical variable).

In this case, though, the max chain length is the same (7), and the average chain length is actually slightly higher in our implementation (3.2885) than in yours (3.2051).
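The chain-length check described above can be sketched as follows. `chain_stats` is a hypothetical helper, and the toy embedding is illustrative; in a real run the embedding would come from, e.g., `minorminer.find_embedding` against the solver's qubit graph.

```python
def chain_stats(embedding):
    """Return (max, average) chain length for an embedding.

    `embedding` maps each logical variable to the iterable of
    physical qubits (the chain) that represents it.
    """
    lengths = [len(chain) for chain in embedding.values()]
    return max(lengths), sum(lengths) / len(lengths)

# Toy embedding: variable 'a' uses one qubit, 'b' a chain of three.
example = {'a': [0], 'b': [4, 8, 12]}
print(chain_stats(example))  # -> (3, 2.0)
```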

In this case we can take a look at the number of chains that are broken in the returned samples to get an idea of how effective this embedding is on the physical QPU.

If you run the solver once and take a look at the results as in the following, you can get an idea of how effective the problem formulation is.
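A minimal sketch of such a check, assuming you have the raw (unembedded) samples and the embedding in hand: a chain is "broken" when the physical qubits that make it up disagree. `count_broken_chains` and the toy data are illustrative; the Ocean SDK's `dwave.embedding` module also provides utilities for measuring chain breaks.

```python
def count_broken_chains(raw_sample, embedding):
    """Count chains whose physical qubits disagree in one raw sample.

    `raw_sample` maps physical qubit -> value; `embedding` maps each
    logical variable -> its chain of physical qubits.
    """
    broken = 0
    for chain in embedding.values():
        values = {raw_sample[q] for q in chain}
        if len(values) > 1:  # qubits in the chain disagree
            broken += 1
    return broken

# Toy example: the chain for 'b' returns both 0 and 1, so it is broken.
sample = {0: 1, 4: 0, 8: 1, 12: 0}
embedding = {'a': [0], 'b': [4, 8, 12]}
print(count_broken_chains(sample, embedding))  # -> 1
```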

Another approach to analyzing the effectiveness of the BQM is to look at how many auxiliary variables have been produced by the stitch function.

In the case of the 2-bit map coloring BQM, the stitch function produced 15 auxiliary variables.

For the original map coloring BQM, no auxiliary variables were produced.
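One way to perform that count, as a sketch: with dwavebinarycsp, the variables of the stitched BQM are a superset of the CSP's own variables, so the set difference is exactly the auxiliaries. The helper and the toy variable names below are illustrative; in the real script you would pass `csp.variables` and the variables of `dwavebinarycsp.stitch(csp)`.

```python
def auxiliary_variables(csp_variables, bqm_variables):
    """Variables the stitch step added beyond the CSP's own variables."""
    return sorted(set(bqm_variables) - set(csp_variables))

# Toy illustration: a CSP over two variables whose penalty model
# needed one extra ancilla, 'aux0'.
csp_vars = ['x0', 'x1']
bqm_vars = ['x0', 'x1', 'aux0']
print(auxiliary_variables(csp_vars, bqm_vars))  # -> ['aux0']
```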

Thomas P: There are a number of factors involved.

Making good algorithms for quantum annealing is still a fairly low-level process. The LEAP service, combined with the Ocean Tools SDK, is a major advance compared to what programming the chip used to be. Even so, if you are aiming for efficiency, it is still very important to map the problem as directly as possible to the underlying compute topology. For this specific case (map coloring, 4-bit color encoding vs 2-bit color encoding), you have injected an extra layer of encoding and deferred implementation to a general-purpose library. This rarely results in an efficient program. It is interesting from a proof-of-concept perspective, and for learning how the parts of the solution fit together, but there just isn't enough "headroom" on the D-Wave 2000 to be too inefficient.

Fitting the problem to the chip topology matters a lot. For small problems, a good embedding is not that important. However, each qubit on the chip connects to only 6 others (5 at some edges). There is some black magic involved in embedding complex problems into that topology, and determining the optimal embedding is still an open research area. For some problems it cannot be done, but for other problems, in a certain computational sweet spot, the best approach is to bypass the automated embedding and make the embedding by hand. Another approach is to run the program many times, capturing the embedding used on each run; for whichever run performs best, you record and keep the embedding it used.

The best type of problem (best suited for using the D-Wave) is a hard one that needs to be solved over and over, but the main structure of the problem does not change over time. That way you can use the same embedding and have a degree of certainty that the implementation is efficient, though not necessarily optimal. (Keep in mind, asking for an optimal algorithm for an NP-complete or NP-hard problem is asking a lot.)

All that said, after diving into the details of what is actually going on to solve the problems, it is clear that the platform is practically a science fiction reality. The next generation of chip will have many more qubits, and more importantly, significantly increased connectivity between those qubits, which will open up all sorts of new possibilities. Literally and figuratively! :-)

I hope this helps. There is definitely a learning curve in figuring out what problems work best, how best to approach them, etc. Trying experiments and asking questions is definitely a sound approach to learning about the platform.

Thomas P: Quantum annealing is probabilistic at its heart, so you can never expect deterministic behavior. It does not make sense for small or computationally easy problems (which easily submit to brute force), but it can be very effective for computationally hard ones.

Instead of using `next(response.samples())` to fetch a single result, use code along these lines:
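The original snippet appears to have been lost from the post; a hedged reconstruction of the idea is to scan every returned sample for one that satisfies the CSP (with dwavebinarycsp, `csp.check(sample)` performs that test). The helper name and toy data below are illustrative stand-ins, not the commenter's original code.

```python
def first_valid(samples, check):
    """Return the first sample that passes `check`, or None.

    In the map-coloring script, `samples` would be response.samples()
    and `check` would be csp.check.
    """
    for sample in samples:
        if check(sample):
            return sample
    return None

# Toy stand-in: accept a sample whose two bits differ.
samples = [{'a': 0, 'b': 0}, {'a': 0, 'b': 1}]
print(first_valid(samples, lambda s: s['a'] != s['b']))  # -> {'a': 0, 'b': 1}
```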

Also, I suggest using a significantly higher `num_reads`; 10000 will often yield acceptable results with this particular configuration.

Let me know if this helps.

Igor B: Unfortunately, more reads did not help. I have collected more than 30000 samples in a few runs, but no solution was ever found, while with the unary notation, 2000 to 6000 samples out of every 10000 were valid solutions. Checking all samples also did not help, as expected, since the samples are sorted by energy and all solutions have lower energy than any non-solution.

In the unary notation, the CSP defines 73 constraints (each between 2 variables) over 52 variables.

In the binary notation, the CSP defines 15 constraints (each between 4 variables) over 24 variables, producing a BQM with 15 auxiliary variables.

I thought the QPU was agnostic about the meaning of the BQM, apart from its size, the number of non-zero coefficients, and possibly the number of solutions. Does the ratio of the number of variable configurations to the number of solutions play a role as well? Is a less dense representation (such as the unary one) better suited to the QPU? It would be great if there were guidelines along these lines.

Thanks.

Igor B: Thanks for your answer. I fear it would be hopeless to send a hard problem before understanding more about the system through simple cases. It certainly wasn't my intention to add a layer of complexity; I meant to save variables instead. I'll try changing a few parameters and building the CSP by hand; let's see, hoping that the QPU will soon be competitive.
