Embedding with Lidar's repetition code to do error correction on D-Wave's QPU


I tried to run some TSP problems on D-Wave and ran into the usual problems when the problem size increases (I have read the forum comments and am aware of the minimal gap of the Hamiltonian, strategies to change the anneal time and schedule, pausing the anneal, the new lower-noise 2000Q_5 QPU, etc.).

I was wondering whether it would instead be possible to implement the repetition code with a penalty term described in Lidar's video below at 54:10.


In this video, each K_4,4 bipartite cell of 8 physical qubits is reduced to two logical qubits. A parameter alpha couples the two logical qubits together, and a penalty parameter minus beta couples the three physical qubits on one side of the bipartite graph to the single physical qubit on the other side within each logical qubit, strengthening the bond between the 4 physical qubits so that they act as one logical qubit.

To decode a solution, one takes a majority vote over the four physical qubits of each group to recover the corresponding logical qubit.
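The majority-vote decoding step can be sketched in a few lines of plain Python (a minimal sketch; the sample format, group layout, and tie-breaking rule are illustrative assumptions, not the actual Ocean/QPU data structures):

```python
from collections import Counter

def majority_vote(sample, group):
    """Decode one logical qubit: majority vote over the spins (+1/-1)
    of its physical copies. A 2-2 tie is broken toward +1 here; in
    practice ties can also be broken at random."""
    votes = Counter(sample[q] for q in group)
    spin, count = votes.most_common(1)[0]
    if 2 * count == len(group):   # 2-2 tie
        return +1
    return spin

# one physical qubit flipped by noise; the vote still recovers -1
sample = {0: -1, 1: -1, 2: +1, 3: -1}
print(majority_vote(sample, [0, 1, 2, 3]))  # -> -1
```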

Of course, the 2048 physical qubits are reduced to 512 logical ones, and the degree of the Chimera graph goes down from 6 to 3 (in the mentioned video at 1:00:12). So the embedding of a complete graph for TSP becomes even harder...

Would it be possible to implement such a method in Ocean as a new general embedding strategy available to anyone? It would be one more effective strategy for reducing noise when decoding solutions from the sampler, provided the problem is small enough to be embedded in the reduced Lidar graph.

If so, how? Should one hardcode the reduced Lidar graph, leaving out the damaged physical qubits of a particular QPU? Should one subclass EmbeddingComposite and create a LidarEmbeddingComposite? And where would the decoding step go?

Thank you







  • Hello Tiziano,

    Great question! It is possible to write an embedding that maps the physical QPU graph to Lidar's graph, where each Chimera cell contains two "encoded" qubits. Another embedding would then be required to translate between the problem graph and Lidar's graph. You might achieve this by wrapping dimod sampler classes.
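    The encode-sample-decode flow of such a wrapper can be sketched in plain Python (a minimal sketch, not an Ocean class; the copy layout, the bias/penalty scheme, and all names here are illustrative assumptions, and the real Lidar encoding on K_4,4 cells differs in detail):

```python
from collections import Counter

class LidarEncodedSampler:
    """Hypothetical wrapper: each logical spin i is copied onto four
    physical qubits 4*i .. 4*i+3, tied together by a ferromagnetic
    penalty (Lidar's beta); decoding majority-votes each group."""

    def __init__(self, child_sampler, penalty=1.0):
        self.child = child_sampler
        self.penalty = penalty

    def sample_ising(self, h, J):
        ph, pJ = {}, {}
        for i, bias in h.items():
            group = [4 * i + k for k in range(4)]
            for q in group:
                ph[q] = bias / 4.0            # spread the bias over the copies
            for a in range(4):
                for b in range(a + 1, 4):
                    pJ[(group[a], group[b])] = -self.penalty  # tie copies together
        for (i, j), coupling in J.items():
            pJ[(4 * i, 4 * j)] = coupling     # alpha: couple group representatives
        physical_samples = self.child.sample_ising(ph, pJ)
        return [self._decode(s, h) for s in physical_samples]

    def _decode(self, sample, h):
        # majority vote within each 4-qubit group
        logical = {}
        for i in h:
            votes = Counter(sample[4 * i + k] for k in range(4))
            logical[i] = votes.most_common(1)[0][0]
        return logical

# toy child sampler standing in for the QPU: returns all spins down
class AllDownSampler:
    def sample_ising(self, h, J):
        return [{q: -1 for q in h}]

sampler = LidarEncodedSampler(AllDownSampler())
print(sampler.sample_ising({0: 1.0, 1: -1.0}, {(0, 1): 0.5}))  # -> [{0: -1, 1: -1}]
```

    A real implementation would subclass dimod.ComposedSampler and return a dimod SampleSet rather than bare dicts.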

    That being said, could you elaborate on the particular issues you are having with large TSP problems? There might be other ways to get good results without sacrificing so many qubits (for example, Virtual Graph Composite). 

  • Hi Luca,

    thanks for your answer. I will look into wrapping a dimod sampler...

    For the TSP problem I am using code I found on GitHub by http://www.bohr.technology/:


    With these modifications to the code: in main.py I disabled the Forest solver (Rigetti) at the end of the main function.

    In TSP_utilities.py I changed the calculate_cost function to penalize solutions that do not visit all nodes:

    def is_hamiltonian_path(solution, n):
       # a valid path visits all n nodes exactly once
       if len(solution) != n:
          return False

       checkList = [0 for i in range(n)]
       for i in range(n):
          if checkList[solution[i]] == 1:       # node visited twice
             return False
          checkList[solution[i]] = 1

       return True

    def calculate_cost(cost_matrix, solution, n):
       # penalize solutions of the wrong length or that are not Hamiltonian
       cost = abs(n - len(solution)) * 1000
       if not is_hamiltonian_path(solution, n):
          cost += 500

       for i in range(len(solution)):
          a = i % len(solution)
          b = (i + 1) % len(solution)           # wrap around to close the tour
          cost += cost_matrix[solution[a]][solution[b]]

       return cost

    And in dwave_tsp_solver.py, I set the annealing time and the number of reads:

    self.annealing_time = 2000   # anneal time in microseconds
    self.numruns = 499           # reads per job
    self.best_score = 100000     # large sentinel starting value for the best score

    just below the maximum that can be run on D-Wave in a single job.

    And I slightly changed the two functions below to include the new D-Wave QPU and to accommodate the other changes:

    def solve_tsp(self):
       sampler = EmbeddingComposite(
          DWaveSampler(token=self.sapi_token, endpoint=self.url, solver='DW_2000Q_5'))
       response = sampler.sample_qubo(
          self.qubo_dict,
          chain_strength=self.chainstrength,
          num_reads=self.numruns,
          annealing_time=self.annealing_time,
          auto_scale=True,
          reduce_intersample_correlation=False)
       self.decode_solution(response)   # populate self.solution / self.distribution
       return self.solution, self.distribution

    def decode_solution(self, response):
       n = len(self.distance_matrix)
       distribution = {}

       for record in response.record:
          sample = record[0]                    # the binary sample array
          solution_binary = [node for node in sample]
          solution = TSP_utilities.binary_state_to_points_order(solution_binary)
          score = TSP_utilities.calculate_cost(self.distance_matrix, solution, n)
          distribution[tuple(solution)] = (record.energy, record.num_occurrences)
          if score < self.best_score:           # keep the best-scoring solution
             self.best_score = score
             self.solution = solution

       self.distribution = distribution
    The code always finds a solution for N=4 (4 cities) and solutions very close to the optimum up to N=6 (if launched several times). Starting from N=7 I get crossings in the salesman's path, which is a visual indication of non-optimality.

    The above implementation uses N^2 qubits for the TSP problem.
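    For reference, the standard position-based encoding behind the N^2 count: a binary variable is 1 when city v is visited at time step t. The index bookkeeping can be sketched as follows (a guess at what TSP_utilities.binary_state_to_points_order does; the actual implementation in the repo may differ):

```python
def qubo_index(v, t, n):
    """Flatten (city v, time step t) into a single QUBO variable index."""
    return v * n + t

def binary_state_to_points_order(bits, n):
    """Inverse of the encoding: for each time step, pick the city whose
    bit is set (assumes a feasible one-hot sample)."""
    order = []
    for t in range(n):
        for v in range(n):
            if bits[qubo_index(v, t, n)] == 1:
                order.append(v)
                break
    return order

bits = [0, 1, 0,
        0, 0, 1,
        1, 0, 0]          # city 0 at t=1, city 1 at t=2, city 2 at t=0
print(binary_state_to_points_order(bits, 3))  # -> [2, 0, 1]
```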

    I think the main reason why Lidar's scheme would not work is that the above implementation already fails to find an embedding on the D-Wave QPU at N=10 (100 logical qubits; but as I understand it, the fully connected graph is needed, which is why the embedding fails even though N^2 << 2048):

    File "/home/dm/.local/lib/python3.6/site-packages/dwave/system/composites/embedding.py", line 195, in sample
    raise ValueError("no embedding found")

    With a Lidar graph instead, the embedding would fail even earlier, at some N < 10, since the degree of the Lidar graph is 3... Pegasus will surely improve things... Or maybe I should look into some kind of Monte Carlo approach to solving TSP on D-Wave?
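    A back-of-the-envelope check makes the failure plausible: the TSP QUBO needs a clique on N^2 variables, while the largest complete graph that can be minor-embedded on the 2000Q Chimera is commonly quoted as around 64 nodes. A quick sketch (the clique limits used here are rough assumptions, not API values):

```python
def max_tsp_cities(clique_limit):
    """Largest N such that the N^2-variable fully connected TSP QUBO
    still fits inside the given clique limit."""
    n = 1
    while (n + 1) ** 2 <= clique_limit:
        n += 1
    return n

print(max_tsp_cities(64))       # full Chimera clique limit -> 8 cities
print(max_tsp_cities(64 // 2))  # rough guess for a reduced Lidar graph -> 5 cities
```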







  • Hi Tiziano,

    For larger problems (e.g. 7 city TSP) we are approaching the limit of what can be embedded onto the 2000-qubit Chimera graph. The probabilistic nature of the QPU means it will be much more difficult to find optimal (or even valid) solutions in this region. As you pointed out, using Lidar's sub-embedding method for error correction would significantly reduce the number of effective qubits available.

    I think you are on the right track to optimizing your chances of finding good solutions. D-Wave also has a Traveling Salesperson tool that you can look into - it's open source like the rest of Ocean. It allows you to tweak the relative weights of different constraints, which could also help.

    For even larger problems, I would recommend reading into hybrid approaches. A good discussion on this topic can be found in another Community post:


  • Thanks, Luca, for pointing me to hybrid methods as a general way to harness the QPU.

    I am now experimenting with embeddings ... 



  • No problem Tiziano. Hope your experiments turn out well.

