Building a QSVM (Quantum Support Vector Machine) by following Examples

Hello!

My Goal: Build a Quantum Support Vector Machine (QSVM) using the D-Wave QPU.

I've been learning a lot while using the D-Wave Ocean SDK and the Notebook examples.  Thank you!

I have a question about the QBoost example.  I'm running the qboost examples from GitHub: https://github.com/dwave-examples/qboost

The question is: what is really happening?  I've read the code and, to my surprise, nothing obvious is happening from a D-Wave perspective.  I posted a GitHub issue with more details: https://github.com/dwave-examples/qboost/issues/8

I may simply be looking at the wrong example for my goal.  The documentation and examples have been informative, but I'm not as close to a working QSVM as I'd like to be.

Comments

10 comments
  • Really my goal here is to build a Quantum Support Vector Machine.  However, the examples and documents I've been reading haven't produced results for me.

    Is there a good starting place for me in this pursuit?  I'm trying to do a basic hello world as follows:

    • Train: Training-Data -> QSVM -> Weights
    • Predict: Input -> Weights -> Predictions
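A purely classical stand-in for that hello-world shape might look like the sketch below (the `train`/`predict` names are illustrative, not from Ocean or any library; a real QSVM would replace `train` with a QPU-formulated optimization):

```python
import numpy as np

# Minimal classical stand-in for the Train/Predict pipeline above.
# Perceptron-style training: Training-Data -> weights.
def train(X, y, lr=0.1, epochs=100):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * np.dot(w, xi) <= 0:  # misclassified -> update
                w += lr * yi * xi
    return w

# Predict: Input + weights -> predictions in {-1, +1}.
def predict(X, w):
    return np.where(np.dot(X, w) >= 0, 1, -1)

# Tiny linearly separable toy data with labels in {-1, +1}
X = np.array([[1.0, 2.0], [2.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = train(X, y)
print(predict(X, w))  # recovers y: [ 1  1 -1 -1]
```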
  • Hello,

    Have you seen this Jupyter Notebook?
    https://github.com/dwavesystems/qboost/blob/master/demo.ipynb

    If we become aware of any other resources, we will be sure to share them.

  • Hi David,

    Thank you.  I will check this out.

  • Reading https://github.com/dwavesystems/qboost/blob/master/demo.ipynb was helpful. 

    David thank you! 🎉

    The notebook explains the transformation of the MNIST data into -1 and +1 labels for N > 4, generating the y_train labels array.  I'm still missing some fundamentals, though: it appears that the code is 100% classical ML and that the D-Wave QPU isn't really being used.
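That label construction can be sketched in a line of numpy (a sketch of the idea, not the notebook's exact code; the sample digits are made up):

```python
import numpy as np

# Map digit labels to {-1, +1}: digits greater than 4 become +1,
# the rest become -1, producing the y_train labels array.
digits = np.array([0, 3, 5, 9, 4, 7])
y_train = 2 * (digits > 4).astype(int) - 1
print(y_train)  # -> [-1 -1  1  1 -1  1]
```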

  • Glad to hear the resources were helpful!

    As for how the QPU gets involved in the Jupyter Notebook, if you take a look at the fit function call, you will see that a sampler can be passed in alongside the training data. In demo.ipynb, you can see that a dwave_sampler is constructed and passed in.

    Taking a look at the end of the fit function, the estimator_weights variable is set to the optimal set of weights returned from the QPU.

    Later in the predict function call, the estimator_weights variable is used.

    I hope this helps to clarify where and how the QPU is used. Please let us know if you have any more questions!
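That flow can be sketched with a classical stand-in for the sampler (a hedged sketch: `brute_force_sampler` is a hypothetical replacement for the `EmbeddingComposite(DWaveSampler())` used in demo.ipynb so this runs without QPU access, and the toy QUBO matrix is invented for illustration):

```python
import itertools
import numpy as np

# QBoost-style flow: a QUBO over binary estimator weights is handed to
# a sampler; the lowest-energy sample becomes estimator_weights.

def qubo_energy(w, Q):
    """Energy of binary weight vector w under QUBO matrix Q."""
    return w @ Q @ w

def brute_force_sampler(Q):
    """Classical stand-in for the QPU: exhaustively find the
    binary weight vector with minimum QUBO energy."""
    n = Q.shape[0]
    best = min(itertools.product([0, 1], repeat=n),
               key=lambda w: qubo_energy(np.array(w), Q))
    return np.array(best)

# Toy QUBO: diagonal terms reward keeping estimators 0 and 2;
# the off-diagonal penalty discourages keeping 1 and 2 together.
Q = np.array([[-2.0, 0.0,  0.0],
              [ 0.0, 1.0,  0.5],
              [ 0.0, 0.5, -1.0]])
estimator_weights = brute_force_sampler(Q)
print(estimator_weights)  # -> [1 0 1]
```

Exhaustive search is only feasible for tiny problems; the point of the QPU (or a classical annealer) is to sample low-energy weight vectors when enumerating all 2^n of them is impractical.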

  • David,

    Thank you!  This does help.  Running the example, we can see that the estimator_weights are updated from the results of the EmbeddingComposite(dwave_sampler).

    The weights are thirty-five 1s ([1 1 1 1 1 1 1 ...]).  Since every weight returned from the DWaveSampler is 1, applying them across the series of classifiers has no impact on the model.  This is where I get lost.

    The final predict() function uses matrix multiplication via np.dot(), like this: np.dot(self.estimator_weights, pred_all), where estimator_weights is the result from the DWaveSampler and pred_all holds the predictions from each classical ML model.  So basically all the heavy lifting is classical ML, not quantum.
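That np.dot() step can be sketched in isolation (numpy only; the arrays here are invented for illustration, not taken from the example's output):

```python
import numpy as np

# Each row of pred_all holds one weak classifier's {-1, +1} predictions
# for three samples; the binary estimator_weights decide which
# classifiers count in the final vote.
estimator_weights = np.array([1, 0, 1])       # e.g. returned by the sampler
pred_all = np.array([[ 1,  1, -1],            # weak classifier 0
                     [-1,  1, -1],            # weak classifier 1 (weight 0)
                     [ 1, -1, -1]])           # weak classifier 2

scores = np.dot(estimator_weights, pred_all)  # weighted vote per sample
predictions = np.where(scores >= 0, 1, -1)    # threshold back to {-1, +1}
print(predictions)  # -> [ 1  1 -1]
```

With every weight equal to 1, as in the run described above, the selection step keeps all classifiers and the vote is unweighted, which matches the observation that the QPU result had no visible effect on the model.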

    I'm taking a guess here:

    The results from the DWave Sampler demonstrate that the classical ML classifiers all have significance; therefore, no weak classifier should be excluded from the list of 35.

    The value of D-Wave here is as a validator, or something like an output-layer filter that decides whether weights from the classical model are used or ignored.  Basically, we can optimize for the most meaningful features/weights while ignoring the rest.  Is this the idea?  The example code didn't quite do it justice, but this is helpful 😄

  • Hi Stephan,

    > Really my goal here is to build a Quantum Support Vector Machine.  However the examples and documents I've been reading are leaving me without results.

    You might also check out this alternative approach to QSVM: https://arxiv.org/abs/1906.06283

    Best wishes,

    Dennis

  • Dennis,

    Excellent.  Thank you!

  • Hi Stephen, 

    Your intuition about the 1s having no impact on the model is correct. 

    We have actually since taken a look at this example and are working to improve on its current implementation.

    Please keep an eye on it for changes.

  • Sounds great!  I've since realized the value and intent of the example.  It inspired me to complete my task successfully, with good results.  It was great to use D-Wave to learn, and to apply the QPU to a real-world application.  Thank you!

