Implement more reduction #44
Conversation
- reduceL1 / reduceL2
- reduceLogSum / reduceLogSumExp
- reduceSumSquare

Fixes webmachinelearning#17
```diff
@@ -20,7 +20,7 @@ export function slice(input, starts, sizes, {axes} = {}) {
   const axesLen = axes.length;
   const outputShape = input.shape.slice();
   for (let i = 0; i < axesLen; ++i) {
-    const axis = axes[i] >= 0 ? axes[i] : axes[i] + rank;
+    const axis = axes[i];
```
Aah, nice, we can simplify the testing and implementations after Ningxin's removal of the negative axes policy.
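For context, a minimal sketch of where that policy would now live, assuming a higher-level framework resolves negative axes before calling into the WebNN API (`resolveAxes` is a hypothetical helper, not part of this codebase):

```js
// Hypothetical framework-side helper: resolve negative axes against the
// input rank before handing them to the WebNN API, now that the polyfill
// no longer does this itself.
function resolveAxes(axes, rank) {
  return axes.map((axis) => (axis >= 0 ? axis : axis + rank));
}

// For a rank-4 input, axis -1 maps to 3:
// resolveAxes([-1, 1], 4)  ->  [3, 1]
```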
```js
 * @return {Tensor}
 */
export function reduceSumSquare(input, options = {}) {
  return reduceSum(pow(input, new Scalar(2)), options);
}
```
Formulas all look correct, but wasn't there a GitHub issue about ReduceL2 ambiguity? The formula I recall was `ReduceL2 = sqrt(a1^2 + a2^2 + ... + an^2)`.

(btw, I always have to re-look up these formulas because I keep thinking "reduceSumSquare" means `x.sum().square()`, but it actually means `x.square().sum()`, i.e. square first and then reduce-sum.)
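To make the two readings concrete, here is a sketch in terms of the op names visible in this PR (`reduceSum`, `pow`, `Scalar`); the element-wise `sqrt` used for reduceL2 is an assumption, not confirmed from this codebase:

```js
// reduceSumSquare: square first, then reduce-sum.
//   reduceSumSquare(x) = x1^2 + x2^2 + ... + xn^2   (not (sum x)^2)
function reduceSumSquareSketch(input, options = {}) {
  return reduceSum(pow(input, new Scalar(2)), options);
}

// reduceL2: the p-norm with p = 2, i.e. sqrt of the sum of squares.
//   reduceL2(x) = sqrt(x1^2 + x2^2 + ... + xn^2)
function reduceL2Sketch(input, options = {}) {
  return sqrt(reduceSumSquareSketch(input, options)); // sqrt op assumed
}
```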
> but wasn't there a GitHub issue with ReduceL2 ambiguity?

Are there other explanations for ReduceL2 besides `ReduceL2 = sqrt(a1^2 + a2^2 + ... + an^2)`?

Reference: https://en.wikipedia.org/wiki/Norm_(mathematics)#p-norm
Oh never mind, it was actually L2 pooling, not reduction: webmachinelearning/webnn#278
```diff
@@ -20,7 +20,7 @@ export function batchNormalization(input, mean, variance, {axis=1, scale, bias,
   // The output tensor has the same shape as the input tensor.
   let output = new Tensor(input.shape);
   const shape = new Array(input.rank).fill(1);
-  shape[axis] = -1;
+  shape[axis] = null;
```
🤔 Ningxin's PR removed the special handling of negative numbers for axes, but we still have this special case with `reshape` here. I wonder if this too should be a policy resolved by higher-level frameworks first, so that they pass the already-resolved shape before it reaches the WebNN API. (Or does keeping reshape's special null-dimension handling make composing other WebNN ops simpler, like your instanceNormalization implementation, which uses reshape?) *This comment is not blocking either way.*
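For concreteness, a sketch of what the null-dimension inference amounts to; `inferNullDimension` is a hypothetical name, and the real reshape may be structured differently:

```js
// A single null entry in the requested shape is inferred so that the
// total element counts match; all other entries are taken as-is.
function inferNullDimension(inputShape, newShape) {
  const inputSize = inputShape.reduce((a, b) => a * b, 1);
  const knownSize = newShape.reduce((a, b) => (b === null ? a : a * b), 1);
  return newShape.map((d) => (d === null ? inputSize / knownSize : d));
}

// e.g. reshaping a [3]-shaped mean tensor for broadcast along axis 1
// of a rank-4 input, as batchNormalization above does:
// inferNullDimension([3], [1, null, 1, 1])  ->  [1, 3, 1, 1]
```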
> to simplify the implementation

Sorry, I don't follow your point; could you please explain how this would simplify the implementation?

Also, I submitted a PR to the spec, webmachinelearning/webnn#367. Please review it as well, thanks.
👍 Commented on the other PR.
This PR is based on #37.
@fdwr @huningxin PTAL at the third commit, thanks.