
Include chain break points in returned embedding context #447

Open · kevinchern wants to merge 3 commits into master
Conversation

kevinchern (Contributor) commented:
Example of "chain break points":
a chain with +++: no break points
a chain with +--: a break point at (0, 1)
a chain with +-+: break points at (0, 1) and (1, 2)
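
A minimal sketch of that definition (not the PR's implementation, which works from the embedding's chain edges rather than a fixed ordering): a chain is treated as an ordered sequence of +/-1 spins, and a break point is any adjacent pair that disagrees.

def chain_break_points(chain_values):
    # Illustrative helper only: report the index pairs (i, i + 1) at which
    # adjacent spins in the chain take different values.
    return [(i, i + 1)
            for i in range(len(chain_values) - 1)
            if chain_values[i] != chain_values[i + 1]]

chain_break_points([+1, +1, +1])  # []                -- no break points
chain_break_points([+1, -1, -1])  # [(0, 1)]
chain_break_points([+1, -1, +1])  # [(0, 1), (1, 2)]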

@arcondello (Member) left a comment:

Thanks for this!

A few comments on the code.

Once those are sorted, there might be a few more formatting comments on the docstring.

"""Identify breakpoints in each chain.

Args:
samples (array_like):

Member:

I think this should be expanded to samples_like. You can use array, labels = dimod.as_samples(samples_like) to get the NumPy array.

The reason I say this is that in Ocean you can get embeddings that look like {'a': ['b', 'c']}. The QPU (currently) only uses integer labels for its qubits, but that might change in the future.

Should add a test for this as well.
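
For reference, a minimal sketch of the suggested normalization via dimod.as_samples; the sample values and labels here are made up:

import dimod

# as_samples accepts any samples_like (a dict, a list of dicts, a SampleSet,
# a NumPy array, ...) and returns a 2D array plus the matching variable labels,
# so break_points would not need to assume integer qubit labels.
array, labels = dimod.as_samples({'b': +1, 'c': -1})
# array -> [[ 1 -1]], labels -> ['b', 'c']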


Args:
    samples (array_like):
        Samples as a nS x nV array_like object where nS is the number of samples and nV is the
        number of variables.

Member:

This should also specify that the samples should be for the embedded problem

for sample in samples:
    bps = {}
    for node in embedding.keys():
        chain_edges = embedding.chain_edges(node)

@arcondello (Member) commented on Apr 25, 2022:

To avoid some confusing errors, you probably want

try:
    chain_edges = embedding.chain_edges(node)
except AttributeError:
    raise TypeError("'embedding' must be a dwave.embedding.EmbeddedStructure") from None

@@ -289,7 +290,8 @@ def async_unembed(response):
     if return_embedding:
         sampleset.info['embedding_context'].update(
             embedding_parameters=embedding_parameters,
-            chain_strength=embedding.chain_strength)
+            chain_strength=embedding.chain_strength,
+            break_points=break_points(response, embedding))

Member:

This is a non-trivial performance hit. IMO we either should not do this by default or we need to write a more performant implementation of break_points.

Member:

I agree, ideally this would be a lazy proxy.

But FWIW, return_embedding does default to False.

Member:

That's true, unless for instance the inspector is imported.

My inclination is to not include this in the embedding composite for now, but document how to use the TrackingComposite to get the information in the docstring of break_points().

@kevinchern (Contributor, Author) commented on May 9, 2022:

I updated the PR according to the feedback, except for this comment. Should I move the statistic to TrackingComposite instead?

I think the lazy proxy approach would require storing response. For better performance, I could wrap it in numba; any other suggestions for a more performant implementation?

Member:

I think the best thing is just to remove the statistic from the embedding composite altogether. I would then add an example to the break_points docstring showing how to calculate it, using the TrackingComposite to retrieve the relevant information.
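
A sketch of that suggestion, assuming the break_points() helper proposed in this PR (its final import path is not fixed here) and an illustrative two-chain embedding; dimod.TrackingComposite records the raw, embedded-problem sampleset that break_points() needs:

import dimod
from dwave.embedding import EmbeddedStructure
from dwave.system import DWaveSampler, FixedEmbeddingComposite

qpu = DWaveSampler()
embedding = {'a': [0, 4], 'b': [1, 5]}  # hypothetical chains; must be valid on the target graph
embedded = EmbeddedStructure(qpu.edgelist, embedding)

# TrackingComposite sits between the embedding composite and the QPU, so its
# .output attribute holds the sampleset for the embedded (target) problem.
tracker = dimod.TrackingComposite(qpu)
sampler = FixedEmbeddingComposite(tracker, embedding)

bqm = dimod.BinaryQuadraticModel({'a': 1, 'b': -1}, {('a', 'b'): 1}, 'SPIN')
sampleset = sampler.sample(bqm, num_reads=10)

# break_points() as proposed in this PR, applied to the embedded-problem samples
# and the EmbeddedStructure describing the chains.
bps = break_points(tracker.output, embedded)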

@codecov-commenter commented on May 9, 2022:

Codecov Report

Merging #447 (8135c08) into master (13bda5c) will decrease coverage by 2.95%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master     #447      +/-   ##
==========================================
- Coverage   90.52%   87.57%   -2.96%     
==========================================
  Files          22       22              
  Lines        1520     1521       +1     
==========================================
- Hits         1376     1332      -44     
- Misses        144      189      +45     
Impacted Files                                   Coverage Δ
dwave/system/composites/embedding.py            96.02% <100.00%> (-1.13%) ⬇️
dwave/system/samplers/leap_hybrid_sampler.py    61.42% <0.00%> (-14.29%) ⬇️
dwave/system/samplers/clique.py                 77.35% <0.00%> (-5.04%) ⬇️
dwave/system/samplers/dwave_sampler.py          84.14% <0.00%> (-3.05%) ⬇️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 13bda5c...8135c08.
