Adapt GPU feature to Supernova / non-folding of running instance #275

Closed
huitseeker opened this issue Jan 20, 2024 · 2 comments
Comments

@huitseeker
Member

huitseeker commented Jan 20, 2024

This is a consequence of the following upstream issue & PR:

This PR will be ported to Arecibo in #276

As a result of not absorbing the running instance, this PR modifies the NUM_FE_FOR_RO constant from 24 to 9, which in turn modifies the IOPattern of the Poseidon circuit:
https://github.com/lurk-lab/arecibo/blob/2858db1da990cadb704d0c20525e40ddd811416e/src/provider/poseidon.rs#L136-L143
These IOPattern parameters are also modified out-of-circuit (similarly, 24 → 9).
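A minimal sketch of why the in-circuit and out-of-circuit sides must agree on this constant. The names below (`NUM_FE_FOR_RO`, `io_pattern`, `SpongeOp`) are illustrative stand-ins, not arecibo's or neptune's actual API: the point is only that both sides derive the sponge's IO pattern from the same absorb count, so changing 24 to 9 in one place and not the other breaks the transcript.

```rust
// Hypothetical sketch (not the actual arecibo code): illustrates how an
// out-of-circuit IO pattern must mirror the in-circuit absorb count.

/// Number of field elements absorbed by the RO per step; the PR discussed
/// above changes this from 24 to 9 because the running instance is no
/// longer absorbed. Illustrative value only.
const NUM_FE_FOR_RO: usize = 9;

/// A toy IO pattern: a sequence of absorb/squeeze operations.
#[derive(Debug, PartialEq)]
enum SpongeOp {
    Absorb(usize),
    Squeeze(usize),
}

/// Build the (toy) IO pattern. Both the circuit and the native verifier
/// must build it with the same absorb count, which is why the constant
/// has to change everywhere at once.
fn io_pattern(num_absorbs: usize) -> Vec<SpongeOp> {
    vec![SpongeOp::Absorb(num_absorbs), SpongeOp::Squeeze(1)]
}

fn main() {
    let circuit_side = io_pattern(NUM_FE_FOR_RO);
    let native_side = io_pattern(NUM_FE_FOR_RO);
    // If one side were still built with 24 absorbs, this check would fail.
    assert_eq!(circuit_side, native_side);
    println!("patterns match: {:?}", circuit_side);
}
```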

There are a couple of issues with this:

  • To this day, we just import a neptune instantiation with arity 24 for the GPU, which may no longer be sufficient,
  • SuperNova already changed the number of absorbs the ROs are configured with (see the function num_ro_inputs): the number of absorbs is instance-dependent, and e.g. simple tests run with absorb counts ranging between 20 and 42, reinforcing the obsolescence of the fixed number 24.

=> should we modify the neptune features activated by the cuda feature?
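A toy illustration of the second bullet above, i.e. why no single fixed-arity instantiation can cover SuperNova. `num_ro_inputs_toy` and its parameters are hypothetical stand-ins for arecibo's `num_ro_inputs`, chosen only so the counts span the 20-to-42 range the issue mentions; the real function's formula is not reproduced here.

```rust
// Hypothetical sketch: under SuperNova, the RO absorb count depends on
// the instance (e.g. on the number of circuits), so importing one
// arity-24 hasher cannot cover all cases.

/// Illustrative stand-in for arecibo's `num_ro_inputs`: some fixed base
/// cost plus a per-circuit cost. The parameters are made up.
fn num_ro_inputs_toy(num_circuits: usize, base: usize, per_circuit: usize) -> usize {
    base + per_circuit * num_circuits
}

fn main() {
    // With these toy parameters, 1..=12 circuits yield absorb counts
    // from 20 to 42, matching the range observed in simple tests.
    let counts: Vec<usize> = (1..=12).map(|n| num_ro_inputs_toy(n, 18, 2)).collect();
    assert_eq!(counts.first(), Some(&20));
    assert_eq!(counts.last(), Some(&42));
    // A fixed arity of 24 covers only a sliver of this range.
    println!("absorb counts: {:?}", counts);
}
```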

@huitseeker huitseeker changed the title Adapt supernova to non-folding of running instance Adapt GPU feature to Supernova / non-folding of running instance Jan 20, 2024
@huitseeker
Member Author

Notes from discussion with @porcuquine: Nova and Arecibo aren't actually using GPU acceleration, which, in the case of Neptune, can only be obtained by calling into the batch_hasher functions. We can thus remove the Cargo metadata that imports this functionality.

huitseeker added a commit to huitseeker/arecibo that referenced this issue Jan 23, 2024
- Removed unused neptune cuda feature in the project's `Cargo.toml`

Closes argumentcomputer#275
samuelburnham pushed a commit to samuelburnham/arecibo that referenced this issue Jan 24, 2024
- Removed unused neptune cuda feature in the project's `Cargo.toml`

Closes argumentcomputer#275
github-merge-queue bot pushed a commit that referenced this issue Jan 24, 2024
* chore: Refactor CUDA feature dependencies in Cargo.toml

- Removed unused neptune cuda feature in the project's `Cargo.toml`

Closes #275

* Enable `grumpkin-msm` CUDA feature

---------

Co-authored-by: François Garillot <[email protected]>
@huitseeker
Member Author

Closed in #279
