Cuda array interface (clone) #333

Closed
janden wants to merge 35 commits from the cuda_array_interface branch

Conversation

@janden (Collaborator) commented Aug 22, 2023

Clone of #326 to get Jenkins runs.

@janden force-pushed the cuda_array_interface branch 3 times, most recently from ce4019b to 8b7828a on August 22, 2023 at 21:25

Leave this in a separate module for reuse in other places.

Looks like `torch.asarray` was introduced in v1.11, so for backwards compatibility, we use `as_tensor` instead.
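
A minimal sketch of the fallback described above; the helper name `to_torch` is ours for illustration, not code from the PR:

```python
import torch

def to_torch(obj):
    # Hypothetical helper: convert an array-like object to a torch.Tensor.
    # `torch.asarray` only exists in PyTorch >= 1.11, while `torch.as_tensor`
    # has been available much longer and covers the same use case here,
    # avoiding a copy when torch can consume the input's memory directly.
    return torch.as_tensor(obj)
```
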
Should be using `_data`, not `data`, since the former has been transformed to the appropriate dtype, stride, and shape.

Allows us to specify which frameworks to test, which lets us avoid some crashes when different frameworks don't interact well (e.g., Numba will fail if it's not given the primary CUDA context).
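
A hedged sketch of how such a framework-selection option could look in a pytest `conftest.py`; the option name, fixture name, and framework list are assumptions for illustration, not the PR's exact implementation:

```python
import pytest

def pytest_addoption(parser):
    # Let the test invocation choose which GPU array frameworks to exercise,
    # so incompatible combinations (e.g., Numba without the primary CUDA
    # context) can simply be left out of a run.
    parser.addoption(
        "--framework",
        action="append",
        choices=["pycuda", "cupy", "numba", "torch"],
        help="GPU array framework(s) to test; repeat to test several",
    )

def pytest_generate_tests(metafunc):
    # Parametrize any test requesting the `framework` fixture over the
    # selected frameworks, defaulting to pycuda only.
    if "framework" in metafunc.fixturenames:
        frameworks = metafunc.config.getoption("--framework") or ["pycuda"]
        metafunc.parametrize("framework", frameworks)
```

With this, `pytest --framework pycuda --framework torch` would run the parametrized tests against both frameworks and skip the rest.
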
Can't call `x.size`; we need to use the compatibility layer instead.
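
For context, `.size` is an attribute holding the element count in NumPy and CuPy but a method returning the shape in PyTorch, so it cannot be accessed uniformly. A compatibility helper along these lines (the name `num_elements` is hypothetical) sidesteps the mismatch:

```python
import math

def num_elements(x):
    # `.shape` is a plain attribute on NumPy arrays, CuPy arrays, Numba
    # device arrays, and torch tensors alike (unlike `.size`), so the
    # element count can be computed portably from it.
    return math.prod(x.shape)
```
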
Avoids having to call `util.transfer_funcs` all the time and simplifies the cases where we only want `to_gpu`.
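
A hedged sketch of the kind of simplification this describes, assuming `util.transfer_funcs(framework)` returns a `(to_gpu, from_gpu)` pair; the signature, import path, and fixture below are guesses based on this message, not code from the PR:

```python
import pytest

import util  # the PR's compatibility module; import path is an assumption

@pytest.fixture
def to_gpu(framework):
    # Unpack the transfer functions once, so tests that only move data to
    # the GPU can request `to_gpu` directly instead of calling
    # `util.transfer_funcs` at every call site.
    to_gpu_fn, _from_gpu_fn = util.transfer_funcs(framework)
    return to_gpu_fn
```
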
Only for show right now, since we only test the `pycuda` framework pending an updated interface.

Shows that these are *pycuda* examples, not ones for the other frameworks (to be added later).

Since we can run on any of the given frameworks, we no longer depend on the `pycuda` package.
@janden closed this Aug 24, 2023