Add RandomSampler #24

Merged · 11 commits · Jun 23, 2022
2 changes: 2 additions & 0 deletions .gitignore
@@ -111,6 +111,8 @@ venv.bak/
# cython files
dwave/samplers/greedy/*.cpp
dwave/samplers/greedy/*.html
dwave/samplers/random/*.cpp
dwave/samplers/random/*.html
dwave/samplers/sa/*.cpp
dwave/samplers/sa/*.html
dwave/samplers/tabu/*.cpp
19 changes: 19 additions & 0 deletions README.rst
@@ -24,13 +24,32 @@ or locally on your CPU.
*dwave-samplers* implements the following classical algorithms for solving
:term:`binary quadratic model`\ s (BQM):

* Random: a sampler that draws uniform random samples.
@kevinchern (Contributor), Jun 21, 2022:

Missing backticks, i.e.,

* `Random`: a sampler that draws uniform random samples.

Member Author:

Those are for the link. In this case I am not linking to anything. Though I'm open to a good place to link to; https://en.wikipedia.org/wiki/Randomization seemed a bit too general 😄

@JoelPasvolsky (Contributor), Jun 21, 2022:

It's backticks plus an underscore to link to the new Random section in line 35.

Member Author:

Got it, I misunderstood. Will do.

* `Simulated Annealing`_: a probabilistic heuristic for optimization and approximate
Boltzmann sampling well suited to finding good solutions of large problems.
* `Steepest Descent`_: a discrete analogue of gradient descent, often used in
machine learning, that quickly finds a local minimum.
* `Tabu`_: a heuristic that employs local search with methods to escape local minima.
* `Tree Decomposition`_: an exact solver for problems with low treewidth.

Random
======

Random samplers provide a useful baseline performance comparison. The variable
assignments in each sample are chosen by a coin flip.

@kevinchern (Contributor):

Would this be the place to mention which PRNG is used? E.g., Mersenne Twister?

Member Author:

Could do, though the user can actually set that by passing in a Generator. By default we use NumPy's default, which is currently PCG64. Though I would want to just say that we use the default from NumPy in case NumPy changes.

>>> from dwave.samplers import RandomSampler
>>> sampler = RandomSampler()

Create a random binary quadratic model.

>>> import dimod
>>> bqm = dimod.generators.gnp_random_bqm(100, .5, 'BINARY')

Get the 20 best random samples found in .2 seconds of searching.

>>> sampleset = sampler.sample(bqm, num_reads=20, time_limit=.2)
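As noted in the review thread above, the sampler defers to NumPy's default generator (currently PCG64), and the user can control it by passing a seed or a Generator. A minimal pure-NumPy sketch of that reproducibility property, independent of the sampler itself:

```python
import numpy as np

# Identically seeded generators produce identical coin flips.
rng1 = np.random.default_rng(42)
rng2 = np.random.default_rng(42)
assert (rng1.integers(0, 2, size=10) == rng2.integers(0, 2, size=10)).all()

# NumPy's current default bit generator (PCG64 at the time of writing);
# code should not hard-code this name, since NumPy may change the default.
print(type(np.random.default_rng().bit_generator).__name__)
```

Passing the same seed to the sampler would therefore reproduce a sample set, while passing a pre-built Generator lets the user choose a different bit generator entirely.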

Simulated Annealing
===================

28 changes: 28 additions & 0 deletions docs/reference.rst
@@ -4,6 +4,34 @@ Reference Documentation

.. currentmodule:: dwave.samplers

Random
=======

RandomSampler
-------------

.. autoclass:: RandomSampler

Attributes
~~~~~~~~~~

.. autosummary::
:toctree: generated/

~RandomSampler.parameters
~RandomSampler.properties

Methods
~~~~~~~

.. autosummary::
:toctree: generated/

~RandomSampler.sample
~RandomSampler.sample_ising
~RandomSampler.sample_qubo


Simulated Annealing
===================

2 changes: 2 additions & 0 deletions dwave/samplers/__init__.py
@@ -16,6 +16,8 @@

from dwave.samplers.greedy import *

from dwave.samplers.random import *

from dwave.samplers.sa import *

from dwave.samplers.tabu import *
15 changes: 15 additions & 0 deletions dwave/samplers/random/__init__.py
@@ -0,0 +1,15 @@
# Copyright 2022 D-Wave Systems Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from dwave.samplers.random.sampler import *
174 changes: 174 additions & 0 deletions dwave/samplers/random/cyrandom.pyx
@@ -0,0 +1,174 @@
# distutils: language = c++
# cython: language_level = 3

# Copyright 2022 D-Wave Systems Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

cimport cython

from cpython.pycapsule cimport PyCapsule_IsValid, PyCapsule_GetPointer
from libc.time cimport time, time_t
from libcpp.algorithm cimport sort
from libcpp.vector cimport vector
from posix.time cimport clock_gettime, timespec, CLOCK_REALTIME

import dimod
cimport dimod
import numpy as np
cimport numpy as np
cimport numpy.random

cdef extern from *:
"""
#if defined(_WIN32) || defined(_WIN64)

#include <Windows.h>

double realtime_clock() {
LARGE_INTEGER frequency;
LARGE_INTEGER now;

QueryPerformanceFrequency(&frequency);
QueryPerformanceCounter(&now);

return now.QuadPart / frequency.QuadPart;
}

#else

double realtime_clock() {
struct timespec ts;
clock_gettime(CLOCK_MONOTONIC, &ts);
return ts.tv_sec + ts.tv_nsec / 1e9;
}

#endif
"""
double realtime_clock()


cdef struct state_t:
np.float64_t energy
vector[np.int8_t] sample


cdef bint comp_state(const state_t& a, const state_t& b) nogil:
return a.energy < b.energy


cdef state_t get_sample(dimod.cyBQM_float64 cybqm,
numpy.random.bitgen_t* bitgen,
bint is_spin = False,
):
# developer note: there is a bunch of potential optimization here
cdef state_t state

# generate the sample
state.sample.reserve(cybqm.num_variables())
cdef Py_ssize_t i
for i in range(cybqm.num_variables()):
state.sample.push_back(bitgen.next_uint32(bitgen.state) % 2)

if is_spin:
# go back through and convert to spin
for i in range(state.sample.size()):
state.sample[i] = 2 * state.sample[i] - 1

state.energy = cybqm.data().energy(state.sample.begin())
Contributor:

Converting spins and computing energies seem more fitting as a postprocessing step.

@arcondello (Member Author), Jun 21, 2022:

Only in the case when you're keeping every sample. Otherwise you need the energies in order to keep the population to num_reads. Because for "real uses" of the solver you need to keep the population limited, IMO it's part of sampling.

Contributor:

OK, that makes sense. Follow-up question: why do we use num_reads as a cap, as opposed to accepting one and only one parameter (one of num_reads, time_limit)? The latter seems like a more natural use case.

Member Author:

Four different combos with their uses:

  • both unspecified: I just want to use the defaults.
  • time_limit set, num_reads unset: I want the best sample I can find randomly in the given time.
  • time_limit unset, num_reads set: I want num_reads samples drawn uniformly. If I were to specify the time_limit, they would not be uniform.
  • both set: I want the best sample(s) found after time_limit, and I want to specify the number of returned samples and/or the internal memory used.

Obviously these are not inclusive of all use cases, but they should give some idea.

Member:

I would forbid the user setting both num_reads and time_limit, since time_limit takes precedence and might return fewer than num_reads.

@arcondello (Member Author), Jun 23, 2022:

947320b changed the behavior to always return num_reads. time_limit now just allows the algorithm to keep sampling.
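The post-947320b behavior described here — always return num_reads samples, with time_limit only extending the search for better ones — could be sketched as follows. The names (`sample_states`, `draw`) are illustrative, not the PR's actual code:

```python
import random
import time

def sample_states(draw, num_reads, time_limit=None):
    # Always draw at least num_reads states.
    states = [draw() for _ in range(num_reads)]
    # If a time limit is given, keep sampling and retain only the best.
    if time_limit is not None:
        deadline = time.monotonic() + time_limit
        while time.monotonic() < deadline:
            states.append(draw())
            states.sort()
            states.pop()  # drop the worst state, keeping num_reads
    return sorted(states)

random.seed(1)
out = sample_states(random.random, num_reads=5)
assert len(out) == 5                     # no time limit: exactly num_reads
out = sample_states(random.random, num_reads=5, time_limit=0.01)
assert len(out) == 5 and out == sorted(out)
```

Note that time_limit here never reduces the number of returned states; it only gives the loop more chances to improve them.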

Member:

I was looking at your comment above and missed that update, thanks.


return state
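The body of get_sample above reduces to: coin-flip each variable, optionally map {0, 1} to {-1, +1} for spin models, then compute the energy. A plain-NumPy sketch of the same idea (`draw_sample` is an illustrative name; the energy evaluation is omitted):

```python
import numpy as np

def draw_sample(num_variables, rng, is_spin=False):
    # One coin flip (0 or 1) per variable, as in the Cython loop above.
    sample = rng.integers(0, 2, size=num_variables, dtype=np.int8)
    if is_spin:
        # Same conversion as above: 2*s - 1 maps {0, 1} -> {-1, +1}.
        sample = 2 * sample - 1
    return sample

rng = np.random.default_rng(7)
assert set(np.unique(draw_sample(100, rng))) <= {0, 1}
assert set(np.unique(draw_sample(100, rng, is_spin=True))) <= {-1, 1}
```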


@cython.boundscheck(False)
@cython.wraparound(False)
def sample(bqm, Py_ssize_t num_reads, object seed, np.float64_t time_limit):

cdef double preprocessing_start_time = realtime_clock()

cdef Py_ssize_t i, j # counters for use later

# Get Cython access to the BQM. We could template to avoid the copy,
# but honestly everyone just uses float64 anyway so...
cdef dimod.cyBQM_float64 cybqm = dimod.as_bqm(bqm, dtype=float).data
cdef bint is_spin = bqm.vartype is dimod.SPIN

# Get Cython access to the rng
rng = np.random.default_rng(seed)
cdef numpy.random.bitgen_t *bitgen
cdef const char *capsule_name = "BitGenerator"
capsule = rng.bit_generator.capsule
if not PyCapsule_IsValid(capsule, capsule_name):
raise ValueError("Invalid pointer to anon_func_state")
bitgen = <numpy.random.bitgen_t *> PyCapsule_GetPointer(capsule, capsule_name)

cdef double sampling_start_time = realtime_clock()

cdef double sampling_stop_time
if time_limit < 0:
@kevinchern (Contributor), Jun 21, 2022:

Should this be <=? Quoting the docstring below: "If given and non-negative, samples are drawn until time_limit." Should this be strictly positive?

Update: Nvm, I can see why a strict inequality is an intuitive choice. I don't have an objective argument at the moment.

sampling_stop_time = float('inf')
else:
sampling_stop_time = sampling_start_time + time_limit

# try sampling
cdef Py_ssize_t num_drawn = 0
cdef vector[state_t] samples
for i in range(num_reads):
samples.push_back(get_sample(cybqm, bitgen, is_spin))
num_drawn += 1

if realtime_clock() > sampling_stop_time:
break

if time_limit >= 0:
while realtime_clock() < sampling_stop_time:

samples.push_back(get_sample(cybqm, bitgen, is_spin))
sort(samples.begin(), samples.end(), comp_state)
samples.pop_back()
Member:

Perhaps use a heap instead? (for total O(N log N) complexity)

Alternatively, you can use insertion sort (for O(N) per step here, and a total of ~O(N^2)) instead of "IntroSort" (for O(N log N) per step here, and a total of ~O(N^2 log N)).

Member Author:

Tried that; it actually hurt performance. I imagine it's the compiler being smart.

Member:

Or the constant factor is too big to notice the benefits at smaller scale.

@arcondello (Member Author), Jun 23, 2022:

Indeed, though I tried fairly large numbers. You know what, there have been a bunch of changes since then. Let me retest.

Member Author:

Ok, after the change introduced in 176c25c, I was able to get a performance boost at very large num_reads. So I will indeed make the change. Thanks, and good suggestion!

Member Author:

For posterity, I used:

bqm = dimod.generators.gnp_random_bqm(1000, .5, 'BINARY')
num_reads = 10000
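The heap suggestion from this thread — keep only the best num_reads states without re-sorting after every insertion — can be sketched with the standard library's heapq. Energies are negated so the root of the (min-)heap is the worst kept state; `best_k_energies` is an illustrative name, not the PR's code:

```python
import heapq
import random

def best_k_energies(energy_stream, k):
    """Keep the k lowest energies seen so far, using a bounded max-heap."""
    heap = []  # stores negated energies, so -heap[0] is the worst kept energy
    for e in energy_stream:
        if len(heap) < k:
            heapq.heappush(heap, -e)
        elif -heap[0] > e:  # new energy beats the worst kept one: swap it in
            heapq.heapreplace(heap, -e)
    return sorted(-x for x in heap)

random.seed(0)
stream = [random.uniform(-10.0, 10.0) for _ in range(10000)]
assert best_k_energies(stream, 20) == sorted(stream)[:20]
```

Each insertion is O(log k) rather than the O(k log k) of a full re-sort, which matches the reported speedup at very large num_reads.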


num_drawn += 1

cdef double postprocessing_start_time = realtime_clock()

if time_limit < 0:
# for consistency we sort in this case as well, though we count
# it towards postprocessing since it's not necessary
sort(samples.begin(), samples.end(), comp_state)

record = np.rec.array(np.empty(num_reads,
dtype=[('sample', np.int8, (cybqm.num_variables(),)),
('energy', float),
('num_occurrences', int)]))

record['num_occurrences'][:] = 1

cdef np.float64_t[:] energies_view = record['energy']
for i in range(num_reads):
energies_view[i] = samples[i].energy

cdef np.int8_t[:, :] sample_view = record['sample']
for i in range(num_reads):
for j in range(cybqm.num_variables()):
sample_view[i, j] = samples[i].sample[j]

sampleset = dimod.SampleSet(record, bqm.variables, info=dict(), vartype=bqm.vartype)

sampleset.info.update(
num_drawn=num_drawn,
preprocessing_time=sampling_start_time-preprocessing_start_time,
sampling_time=postprocessing_start_time-sampling_start_time,
postprocessing_time=realtime_clock()-postprocessing_start_time,
Contributor:

Return this as a dict, i.e., timing=dict(preprocessing_time=..., sampling_time=..., postprocessing_time=...), for consistency with how we output QPU timings.

Member Author:

I am open to having it be nested. Though since we're already changing the names, IMO consistency is not the main reason to do it.

Member Author:

If we do nest it, and if we're already changing the names, IMO it would look nicer to have

info = dict(timing=dict(
    preprocessing=...,
    sampling=...,
    postprocessing=...,
))

Another commenter:

IMHO consistency of timing terminology with the QPU and other solvers is a priority. Otherwise the code to plot data from different solvers on the same graph becomes very unwieldy. This may require some exceptions and compromises because different solvers have different structures, but aiming for maximum comparability would be a good thing. This would also be an argument for generic timing categories based on "once per input" and "once per output", which are natural comparators, rather than solver-specific functions.

Contributor:

+1 for nested timings!

Member:

I would also consider typing the duration variables. Perhaps it is best to use datetime.timedelta. I always need to check whether some time variable in Ocean is in seconds, ms, or us. Hate that.

Member Author:

I would also like that. I did it this way for consistency with QPU timing, but we're already being inconsistent in other places, so I think it's probably worth raising at #22?

Member:

QPU timing info could be considered encoded for transport -- and we can decode it as timedelta client-side.

)

return sampleset
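The nested-timing and timedelta suggestions from the thread above could combine as follows. The key names mirror the proposal in the comments, not a shipped API:

```python
from datetime import timedelta

# Hypothetical nested timing info, with units made explicit via timedelta.
info = dict(timing=dict(
    preprocessing=timedelta(microseconds=120),
    sampling=timedelta(milliseconds=200),
    postprocessing=timedelta(microseconds=350),
))

# timedelta makes unit bookkeeping explicit and arithmetic safe.
total = sum(info['timing'].values(), timedelta())
assert total == timedelta(microseconds=200470)
assert info['timing']['sampling'].total_seconds() == 0.2
```

A consumer no longer has to guess whether a value is seconds, ms, or us, and values from different solvers could be summed or plotted directly.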
122 changes: 122 additions & 0 deletions dwave/samplers/random/sampler.py
@@ -0,0 +1,122 @@
# Copyright 2022 D-Wave Systems Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import typing

import dimod
import numpy as np

from dwave.samplers.random.cyrandom import sample


__all__ = ['RandomSampler']


class RandomSampler(dimod.Sampler):
"""A random sampler, useful as a performance baseline and for testing.

Examples:

>>> from dwave.samplers import RandomSampler
>>> sampler = RandomSampler()

Create a random binary quadratic model.

>>> import dimod
>>> bqm = dimod.generators.gnp_random_bqm(100, .5, 'BINARY')

Get the 20 best random samples found in .2 seconds of searching.

>>> sampleset = sampler.sample(bqm, num_reads=20, time_limit=.2)

"""

parameters: typing.Mapping[str, typing.List] = dict(
num_reads=[],
seed=[],
time_limit=[],
)
"""Keyword arguments accepted by the sampling methods.

Examples:

>>> from dwave.samplers import RandomSampler
>>> sampler = RandomSampler()
>>> sampler.parameters
{'num_reads': [], 'seed': [], 'time_limit': []}

"""

properties: typing.Mapping[str, typing.Any] = dict(
)
"""Information about the solver. Empty.

Examples:

>>> from dwave.samplers import RandomSampler
>>> sampler = RandomSampler()
>>> sampler.properties
{}

"""

def sample(self,
bqm: dimod.BinaryQuadraticModel,
*,
num_reads: int = 10,
seed: typing.Union[None, int, np.random.Generator] = None,
time_limit: typing.Optional[float] = None,
**kwargs,
) -> dimod.SampleSet:
"""Return random samples for a binary quadratic model.

Args:
bqm: Binary quadratic model to be sampled from.

num_reads: The number of samples to be returned.

seed:
Seed for the random number generator.
Passed to :func:`numpy.random.default_rng()`.

time_limit:
The maximum sampling time in seconds.
If given and non-negative, samples are drawn until ``time_limit``.
Only the best ``num_reads`` (or fewer) samples are kept.
Member:

Until we figure out how best to extend/generalize the Initialized sampler interface to handle other termination criteria, I would forbid setting both num_reads and time_limit here. It's confusing because time_limit takes precedence and the sampler might return fewer than num_reads.

A case of setting time_limit but not num_reads is also confusing -- we return only one sample after num_drawn tries.

Perhaps we should introduce max_answers (or max_samples) that can be used in combination with time_limit. See dwavesystems/dwave-greedy#24 for discussion.

So:

  • num_reads set: exactly num_reads returned, time_limit not allowed, max_answers ignored.
  • time_limit set: max_answers (which can default to 1 or inf/None) is used.

With this scheme, I would set num_reads by default to None, with the interpretation of how much that actually is depending on time_limit/max_answers. Similarly to how we resolve it in the Initialized arg parser.

@arcondello (Member Author), Jun 23, 2022:

I had already changed it so num_reads has precedence. And I disagree that they should be mutually exclusive. The solver now always returns num_reads; time_limit just determines whether it keeps going.

@kevinchern (Contributor), Jun 23, 2022:

To summarize the current implementation's behaviour:

  • num_reads [x], time_limit [x]
    - Case: more num_reads than time_limit allows for: always return num_reads states and go past the time limit.
    - Case: fewer num_reads than time_limit allows for: always return num_reads states and continue sampling until time_limit.
  • num_reads [ ], time_limit [x]
    - num_reads is implicitly set to the default. See the above scenarios.
  • num_reads [x], time_limit [ ]
    - Always return num_reads.
  • num_reads [ ], time_limit [ ]
    - num_reads is set to the default. See the above scenario.

I think the bolded case could be an invitation to human error for a default num_reads > 1 (currently 10). For example, sampler.sample(time_limit=very-small-value) suggests the sampler should return 1 state, but instead it will return 10. If we want to keep this behaviour, I suggest changing the default num_reads to 1.

Edit: I reread the above, and I think the ambiguity comes from the argument name "time limit" being suggestive of a hard cutoff.

Member Author:

Sure, happy to lower the default to 1. I had it higher for backwards-compatibility reasons with dimod, but I don't think that actually matters.

Member:

Ok, so it looks like an implied AND between ==num_reads and >=time_limit:

  • the sampler will always return num_reads,
  • the sampler will run for at least time_limit?

I find this awkward and inconsistent with Initialized (num_reads adapts to initial_states if states are user-specified and reads are not) and with the termination criteria in dwave-hybrid/Kerberos (time/convergence/energy/iteration count are OR-ed -- we stop as soon as one condition is met).

Member Author:

Ok, how about I just make two samplers: one that takes time_limit and max_answers, and one that takes num_reads. If we're going to make the arguments disjoint, we might as well make that clear at the sampler level.

Personally I think that's way overkill, but it seems like no one else agrees with me 😄

@arcondello (Member Author), Jun 23, 2022:

> If the user sets time_limit (and not num_reads), they care about run time, although it's not clear what number of samples they want. But if they haven't specified the number, it makes sense to return all samples generated during the runtime. Here's where max_samples/max_answers comes into play.

The problem is that we generate a lot of samples -- way more than can easily fit in memory. (We can probably fit them in memory, but still, it can be millions for even small runtimes.) So returning all of them is not practical for many problems. Which is why I used the default num_reads, which was 10.

> In this framework, setting both num_reads and time_limit would run until one of them is satisfied.

This is what I had before. Edit: never mind, I understand. I was treating time_limit as having priority.

@kevinchern (Contributor), Jun 23, 2022:

So, we can either treat time_limit as the most important and have max_answers/max_samples as the secondary criterion, or we can treat num_reads as the most important and treat time_limit as the secondary criterion when provided.

If we're deciding between the two, I also favour the latter for similar reasons. I do think Radomir's proposal is more intuitive and gives more control over the behaviour of the sampler.

Another point to consider: if we add a similar feature to Neal, consistent behaviour across classes of samplers would be desirable. And in the case of Neal, I think it's reasonable to set a time limit and retrieve all solutions it came across.

@arcondello (Member Author), Jun 23, 2022:

Ok, I concede. I will refactor.

Member:

Extending the sampler interface with time_limit (and max_answers, or whatever we call it) would be a good candidate for a follow-up PR, just to keep concerns separated. But since we had all this discussion here, we might as well do it here and then generalize in a follow-up.


Returns:
A sample set.
Some additional information is provided in the
:attr:`~dimod.SampleSet.info` dictionary:

* **num_drawn**: The total number of samples generated.
@kevinchern (Contributor), Jun 21, 2022:

Is my understanding as follows correct? num_drawn != num_reads is true when time_limit allows for more draws than num_reads?

Member Author:

Specifically when time_limit is provided and when it allows for more reads.

* **preprocessing_time**: The time to parse the ``bqm`` and to
initialize the random number generator.
* **sampling_time**: The time used to generate the samples
and calculate the energies. This is the number controlled by
``time_limit``.
* **postprocessing_time**: The time to construct the sample
set.

"""

# we could count this towards preprocessing time but IMO it's fine to
# skip for simplicity.
self.remove_unknown_kwargs(**kwargs)

return sample(bqm,
num_reads=num_reads,
seed=seed,
time_limit=-1 if time_limit is None else time_limit,
)