[Note: since this repository is not widely watched, I will be sending a similar message to the TG mailing list.]
A common theme in this task group from the outset has been distinguishing different degrees of BF16 support:
(1) Conversions between BF16 and "native" types (like FP32);
(2) Basic support for BF16 linear algebra, namely FMAs, dot products, and matrix multiplications more generally; and
(3) Comprehensive BF16 arithmetic, either matching existing FP support or adding AI-specific bells and whistles.
At SiFive we've seen a similar classification in the interest and requests we receive from our customers, and we think it is best to separate the concerns of these use-cases.
This PR shares SiFive's proposal for use-case (1), i.e., conversions. (It does not address the other two use-cases.)
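For concreteness, here is a minimal C sketch of the two directions of use-case (1); it is purely illustrative and not part of the proposal itself. It relies on the fact that BF16 occupies the upper 16 bits of an IEEE-754 binary32 encoding, so the widening conversion is exact while the narrowing conversion must round (round-to-nearest-even is shown here; NaN payload handling is omitted for brevity).

```c
#include <stdint.h>
#include <string.h>

/* Widen a BF16 value (carried as raw uint16_t bits) to FP32.
 * BF16 is the upper 16 bits of a binary32 encoding, so widening
 * is exact: shift into the high half and reinterpret. */
static float bf16_to_f32(uint16_t b)
{
    uint32_t bits = (uint32_t)b << 16;
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}

/* Narrow an FP32 value to BF16 with round-to-nearest-even.
 * Narrowing is inexact in general. NaN inputs are not handled
 * specially in this sketch; a real implementation must preserve
 * NaN-ness rather than let the increment carry into infinity. */
static uint16_t f32_to_bf16(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    /* Add 0x7FFF plus the LSB of the retained half so that ties
     * round to even, then truncate to the upper 16 bits. */
    uint32_t rounding = 0x7FFFu + ((bits >> 16) & 1u);
    return (uint16_t)((bits + rounding) >> 16);
}
```

The asymmetry above is the essence of why conversions deserve their own treatment: one direction is a trivial bit movement, while the other involves the same rounding-mode questions as any narrowing FP conversion.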
Your feedback is welcome.
Best,
Nick Knight (acting co-chair),
SiFive, Inc.