
Look for nvcc in CUDA_HOME #466

Merged
merged 1 commit into from
Aug 26, 2024

Conversation

amiller27
Contributor

nvcc is not always on the PATH, but it should always be in CUDA_HOME (this is where PyTorch looks for it when building the extension anyway).

Before this, if nvcc was not on the PATH, tiny-cuda-nn would attempt to build with C++14 on CUDA versions that support C++17 (and importantly, where PyTorch has dropped support for C++14). Now tiny-cuda-nn correctly detects C++17 in those cases.
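To illustrate the failure mode described above, here is a minimal sketch (function names are hypothetical, not tiny-cuda-nn's actual code) of how the chosen C++ standard depends on being able to run `nvcc --version`. The CUDA 11 cutoff for C++17 host support is an assumption for illustration; the real threshold is whatever tiny-cuda-nn's setup.py checks.

```python
import re
import subprocess

def parse_cuda_major(version_output):
    """Extract the major CUDA release from `nvcc --version` output,
    e.g. '... release 11.8, V11.8.89' -> 11. Returns None on failure."""
    m = re.search(r"release (\d+)\.(\d+)", version_output)
    return int(m.group(1)) if m else None

def cpp_standard_for(major):
    """Pick a C++ standard from the CUDA major version.
    Assumption for illustration: CUDA 11+ supports C++17 host code."""
    return "c++17" if major is not None and major >= 11 else "c++14"

def detect_standard(nvcc_path):
    """Run nvcc and choose the standard. If nvcc cannot be found or
    its version string cannot be parsed, this degrades to c++14 --
    which is exactly the bug when nvcc is absent from PATH."""
    out = subprocess.run([nvcc_path, "--version"],
                         capture_output=True, text=True).stdout
    return cpp_standard_for(parse_cuda_major(out))
```

If `nvcc_path` cannot be resolved at all, the caller never reaches the version check and silently keeps the C++14 default, even on a toolkit where PyTorch requires C++17.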

@Tom94
Collaborator

Tom94 commented Aug 26, 2024

Hi, thanks for the PR!

Before I hit merge, I've got two requests:

  1. Could you please add back the old PATH-based behavior as fallback in case the CUDA_HOME approach fails?
  2. Please use the os.path.* family of functions and remove the pathlib dependency again. (I realize pathlib is best practice these days, but the codebase already uses os.path.* and I'd prefer path handling to stay consistent.)

Also, is torch.utils.cpp_extension.CUDA_HOME (as used here) a public API? I don't see it in the docs (https://pytorch.org/docs/stable/cpp_extension.html), so I'm hesitant to use it. If it isn't, then I'd ask you to reverse the sequence of checks in this PR: check PATH first, and only if that fails fall back to torch.utils.cpp_extension.CUDA_HOME, with appropriate import error handling in case the variable disappears in future versions.

Thanks a bunch!

@amiller27
Contributor Author

Thanks for the quick review; done, done, and done.

CUDA_HOME does seem to be undocumented, so I've flipped the logic as you suggested: we check PATH first, and then CUDA_HOME if it's importable and nonempty.
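The agreed-upon order of checks can be sketched as follows. This is a minimal illustration under the constraints discussed above (os.path.* only, guarded import of the undocumented CUDA_HOME attribute), not the PR's actual code; the function name is hypothetical.

```python
import os
import shutil

def find_nvcc():
    """Locate nvcc: check PATH first, then fall back to
    torch.utils.cpp_extension.CUDA_HOME, an undocumented attribute
    that may disappear in future PyTorch versions (hence the guard)."""
    # 1. PATH-based lookup, the original behavior.
    nvcc = shutil.which("nvcc")
    if nvcc is not None:
        return nvcc

    # 2. Fall back to CUDA_HOME, guarding against the attribute
    #    (or torch itself) being unavailable.
    try:
        from torch.utils.cpp_extension import CUDA_HOME
    except ImportError:
        return None

    if CUDA_HOME:
        # Note: on Windows the binary would be nvcc.exe; omitted
        # here to keep the sketch short.
        candidate = os.path.join(CUDA_HOME, "bin", "nvcc")
        if os.path.exists(candidate):
            return candidate
    return None
```

Returning None when both checks fail lets the caller keep the conservative C++14 default rather than raising during import.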

@Tom94
Collaborator

Tom94 commented Aug 26, 2024

LGTM, thanks for the quick turnaround!

@Tom94 Tom94 merged commit c91138b into NVlabs:master Aug 26, 2024
13 checks passed
@SomeoneSerge

Hi! A more widely compatible and reliable option would be to rely entirely on FindCUDAToolkit.cmake, which additionally supports e.g. splayed layouts, where different CUDA libraries, or different components of a CUDA library, reside in isolated prefixes. This is relevant e.g. for Nixpkgs.

Also, is torch.utils.cpp_extension.CUDA_HOME (as used here) a public API?

I'm not associated with PyTorch, but I'd caution against relying on cpp_extension in any way. It is effectively an attempt to reimplement a subset of CMake/Meson in Python in a very ad hoc way, which is a rather ambitious enterprise. As a result, cpp_extension is a source of a multitude of issues: it hard-codes all sorts of paths and constants that are wrong, and it makes cross-compilation nearly infeasible.
