Fixing reduction ops #1673
base: main
Conversation
Just curious about the reasoning: why did you decide to make this change in TTIR instead of Forge? You had to change a bunch of code and a bunch of tests because of this attribute renaming (and missed some). If we are already renaming, I would choose
@mrakitaTT When I was talking with someone from Forge a couple of days ago, they told me they are trying to mirror torch as closely as possible with their ops, so I decided not to do it in Forge. If we were to change it in Forge then we would have to either:
But regardless of the naming, we still have an issue with the attribute type, which is not 1-1 with what Forge can lower.
Personally I thought the change was small and that it simplified things, like getting the reduce dims. Unfortunately, with these kinds of changes, test changes are inevitable. Regarding the tests, I fixed that; I didn't have the StableHLO flag on...
Yeah, I thought this too; not sure why the torch developers decided on that name.
Yeah, but this is not part of the Forge external interface (i.e. changing the Forge ops); this is part of the serialization from the Forge graph to the TTIR graph.
I am not too familiar with the Forge codebase, so please pardon my ignorance, but shouldn't this just be a simple serialization string change?
Edit: I've just checked the Forge repo and I see what you mean, Forge serialization just matches the attribute names coming from Forge. I see this pattern often where we focus too much on Forge, but I would implore everyone to always keep in mind that there are also other frontends and dialects, and to check their definitions too when deciding on something like this. For example, we've named some op
I am not sure about this change either. Imagine if some other dialect used
For example, StableHLO uses the i64 type for dimensions, so we do this during the StableHLO->TTIR conversion:
Arguably we could've also changed TTIR to use i64 instead of i32, though I can't imagine a tensor with such a large rank ever existing :)
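The narrowing described above (StableHLO's i64 dimensions down to TTIR's i32) can be sketched outside of MLIR as a checked conversion. This is a hypothetical helper, not code from the tt-mlir repo; the function name and error handling are illustrative assumptions:

```cpp
#include <cstdint>
#include <stdexcept>
#include <vector>

// Hypothetical helper (not from the actual conversion pass): narrow a list
// of 64-bit reduce dimensions, as StableHLO carries them, to the 32-bit
// form TTIR expects, failing loudly on overflow instead of silently
// truncating.
std::vector<int32_t> narrowReduceDims(const std::vector<int64_t> &dims) {
  std::vector<int32_t> out;
  out.reserve(dims.size());
  for (int64_t d : dims) {
    if (d < INT32_MIN || d > INT32_MAX)
      throw std::out_of_range("reduce dim does not fit in i32");
    out.push_back(static_cast<int32_t>(d));
  }
  return out;
}
```

Since tensor ranks are tiny in practice, the overflow branch should never fire; it exists only so a bad attribute fails at conversion time rather than being misinterpreted later.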
Currently all reduction ops are failing when lowered from Forge. Reduction ops in TTIR/TTNN have an optional `dim_arg` attribute which can be used to specify the dimension along which the reduce should be applied. Forge uses a `dim` attribute to specify the reduction dims, so when lowered to TTIR it is completely ignored by our compiler.

This PR fixes the naming of this attribute and also widens the possible values for this attribute from `OptionalAttr<I32ArrayAttr>` -> `OptionalAttr<AnyAttrOf<[SI32Attr, I32ArrayAttr]>>`.
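The widened attribute means consumers must now accept either a single scalar dim or an array of dims. A minimal sketch of that normalization, modeled with `std::variant` outside of MLIR (the type alias and function name here are illustrative assumptions, not the actual compiler code):

```cpp
#include <cstdint>
#include <variant>
#include <vector>

// Sketch only: models the OptionalAttr<AnyAttrOf<[SI32Attr, I32ArrayAttr]>>
// shape without MLIR. A reduce dim attribute is either a single i32 or an
// array of i32.
using DimAttr = std::variant<int32_t, std::vector<int32_t>>;

// Normalize both forms to a flat list of reduce dims, so downstream code
// handles one representation regardless of how the attribute was written.
std::vector<int32_t> getReduceDims(const DimAttr &attr) {
  if (const auto *single = std::get_if<int32_t>(&attr))
    return {*single};                          // scalar form, e.g. dim = 1
  return std::get<std::vector<int32_t>>(attr); // array form, e.g. dim = [1, 2]
}
```

Normalizing at one boundary like this is what keeps the rest of the lowering simple: only the attribute parser cares which of the two forms appeared in the IR.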
IR before change:
IR after change:
fixes #1574