Using GPUs that lack support for double-precision floating point operations #3805

Answered by njzjz
PabloPiaggi asked this question in Q&A
We suggest mixed precision, where only the neural network runs in FP32 while the other parts (the environment matrix and the output energies) remain in FP64. https://github.com/deepmodeling-activity/deepmd-kit-v2-paper/blob/main/models/04/input.json provides an example.
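A minimal sketch of the relevant input.json fragment, assuming DeepMD-kit's standard "precision" keys on the descriptor and fitting network (the "se_e2_a" descriptor type here is illustrative; see the linked file for the full configuration):

```json
{
  "model": {
    "descriptor": {
      "type": "se_e2_a",
      "precision": "float32"
    },
    "fitting_net": {
      "precision": "float32"
    }
  }
}
```

Setting "precision": "float32" restricts only the network weights and activations to single precision, so GPUs without (or with slow) FP64 support can still be used while the environment matrix and output energies are kept in double precision.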

Answer selected by PabloPiaggi