61 improve halo2 query calls #1
Conversation
Looks great! Added some suggestions, also:
- The diff contains some whitespace/format changes, which I think will largely be solved by running `cargo fmt` (code won't be accepted in the PSE repo without it; see the commands just below this list). Aiming for minimal diffs, with no changes that are not absolutely necessary, is always a good idea: it makes it easier to see what's actually going on.
- The modified example is great, but for the PSE PR, modifying one of the existing examples with the new API will be sufficient; otherwise there is a lot of duplicated sample code just to test this one thing out.
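For reference, these are the usual invocations (the `--check` form is what CI-style gating typically runs):

```sh
cargo fmt --all             # reformat every crate in the workspace in place
cargo fmt --all -- --check  # exit non-zero if any file would be reformatted
```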
Nice, cleaner diff! Some small suggestions, and don't forget to revert the changes to `Cargo.toml` and `primitives.rs`, and to remove `simple-example-2.rs`. Then I think we're ready to make a PR on the PSE repo to get some feedback on this approach.
halo2_proofs/src/plonk/circuit.rs (outdated)
```rust
// /// Allows the developer to include an annotation for a specific column within a `Region`.
// ///
// /// This is usually useful for debugging circuit failures.
// fn annotate_column<A, AR>(&mut self, annotation: A, column: Column<Any>)
// where
//     A: FnOnce() -> AR,
//     AR: Into<String>;
```
TODO?
Hmm, I plan to refactor this a bit upstream in zcash/halo2, and then I might modify it again in PSE. So don't worry too much about that! :)
zcash#706 is ready now. This should be closer to what we will have in the future.
Co-authored-by: Brecht Devos <[email protected]>
```diff
@@ -706,6 +792,51 @@ pub enum Expression<F> {
 }

 impl<F: Field> Expression<F> {
     /// Make side effects
     pub fn query_cells(&mut self, cells: &mut VirtualCells<'_, F>) {
```
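For intuition, here is a minimal, self-contained sketch of what a side-effecting query pass over an expression tree can look like. The `Expr` and `Cells` types below are hypothetical stand-ins, not the real `Expression<F>`/`VirtualCells` API (which has many more variants: constants, selectors, fixed/instance queries, negation, scaling, ...):

```rust
// Hypothetical, simplified stand-ins for `Expression<F>` and `VirtualCells`.
#[derive(Debug)]
enum Expr {
    // A leaf query whose concrete query index is filled in by `query_cells`.
    Advice { column: usize, rotation: i32, index: Option<usize> },
    Sum(Box<Expr>, Box<Expr>),
    Product(Box<Expr>, Box<Expr>),
}

#[derive(Default)]
struct Cells {
    // Registered (column, rotation) queries; the position is the query index.
    queries: Vec<(usize, i32)>,
}

impl Cells {
    // Deduplicate: return the existing index for this (column, rotation),
    // or register a new query and return its index.
    fn query_advice(&mut self, column: usize, rotation: i32) -> usize {
        if let Some(i) = self.queries.iter().position(|&q| q == (column, rotation)) {
            return i;
        }
        self.queries.push((column, rotation));
        self.queries.len() - 1
    }
}

impl Expr {
    // Walk the tree, registering every leaf with `cells` and recording the
    // assigned query index in place -- hence `&mut self` and "side effects".
    fn query_cells(&mut self, cells: &mut Cells) {
        match self {
            Expr::Advice { column, rotation, index } => {
                *index = Some(cells.query_advice(*column, *rotation));
            }
            Expr::Sum(a, b) | Expr::Product(a, b) => {
                a.query_cells(cells);
                b.query_cells(cells);
            }
        }
    }
}

fn main() {
    // (a[cur] + a[next]) * b[cur]
    let mut expr = Expr::Product(
        Box::new(Expr::Sum(
            Box::new(Expr::Advice { column: 0, rotation: 0, index: None }),
            Box::new(Expr::Advice { column: 0, rotation: 1, index: None }),
        )),
        Box::new(Expr::Advice { column: 1, rotation: 0, index: None }),
    );
    let mut cells = Cells::default();
    expr.query_cells(&mut cells);
    println!("{expr:?}\nqueries: {:?}", cells.queries);
}
```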
This should be decently parallelizable, no? Can't you split all the cells into chunks and digest them in parallel? Maybe `Mutex`es add too much overhead and it isn't worth it, but it might be worth a try in the future.
This is just to make the query API easier to use, but it is a good idea to parallelize when processing `Vec<Expression<F>>` in `Gate`, or even to aggregate all expressions in `ConstraintSystem` and parallelize them all. We can open an issue for that.
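A rough sketch of the chunking idea under discussion, reusing the hypothetical `Expr`/`Cells` types from the sketch above rather than the real API. Whether the shared `Mutex` eats the speedup is exactly the open question, so this would need benchmarking:

```rust
use std::sync::Mutex;

use rayon::prelude::*; // rayon = "1" in Cargo.toml

// Digest many expression trees in parallel, sharing one query registry.
fn query_all(exprs: &mut [Expr], cells: &Mutex<Cells>) {
    exprs.par_iter_mut().for_each(|expr| {
        // Holding the lock for the whole walk serializes registration, so
        // only the tree traversal itself overlaps across threads; this is
        // the `Mutex` overhead concern raised above.
        let mut guard = cells.lock().expect("registry lock poisoned");
        expr.query_cells(&mut guard);
    });
}
```

A likely next step if the lock dominates: give each chunk its own local `Cells` and merge/deduplicate the registries afterward, trading a merge pass for contention-free workers.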
Co-authored-by: Han <[email protected]>
#61