Reading through the descriptions of the solvers/samplers is probably a good place to start, although there are a lot of them.
Another great set of resources is the Jupyter Notebooks. They run through different samplers, show how to compare them, and examine their inner workings through examples.
The QPU Samplers page includes the DWaveSampler, which is the basic QPU sampler, as well as the LeapHybridSampler, a simplified solver we developed to help people solve very large problems (up to 10,000 variables) using only two simple parameters, the input data and a time limit:
The CPU solvers/samplers are all in dimod:
They are good for working on smaller problems, or on larger problems where you are more concerned with processing the data; they give you a good idea of how the samplers function without calling the QPU. They all behave slightly differently, so be sure to read each description carefully. I often use the ExactSolver when I want to understand a small problem or build a proof of concept.
Understanding how Samplers and Composites fit together is important:
This section explains what each part does, and also touches on hybrid solutions. The QPU has a fixed architecture (for the current QPU it's a Chimera graph), so to run a problem with an arbitrary structure of variable interactions, an EmbeddingComposite is used to map the problem onto the QPU architecture, in a process called minor embedding.
We have a library called dwave-hybrid that provides workflows for iterative sampling with the QPU, and it comes with a ready-made workflow, the KerberosSampler, out of the box.
Here are the docs about dwave-hybrid:
Here is the set of samplers that can be used in the workflows:
Finally, the aforementioned EmbeddingComposite is documented on the Composites page, along with a number of other composites, here:
The maximum cut example uses the EmbeddingComposite with the DWaveSampler and a QUBO:
The knapsack example uses the LeapHybridSampler with a BinaryQuadraticModel (BQM):
A BQM is an in-house data model that can be constructed from either Ising or QUBO data: