System.Math and System.MathF should be implemented in managed code, rather than as FCALLs to the C runtime #9001
FYI @mellinoe.
I do not think that this is necessarily a good idea. It makes the runtime less portable. The C runtime implementations of these functions are a fine default implementation. If we want to make the implementation better on some platforms, that's ok - but it should not be the requirement for all platforms.
@jkotas, how does it make the runtime "less portable"? Currently we are tied to a particular implementation of the C runtime for each platform, which means both results and performance can differ from platform to platform. I would think providing a managed implementation makes it more portable, since every platform then shares a single, consistent implementation.

For all of these we should obviously ensure that codegen and perf remain on par with what we have today. Some of the functions (such as Abs, Ceil, Floor, Round, and Sqrt) are simple enough that they can be implemented in managed code today while keeping those performance characteristics. Others (such as Cos, Sin, and Tan) will need to wait until the hardware intrinsics proposal is more widely available. The remaining functions fall somewhere in between these two groups.
If porting to a new hardware platform requires implementing a ton of intrinsics in the codegen, you have added something like a man-year to the porting cost. That is what makes the runtime less portable.
It does not, strictly speaking, require this; it only requires this for producing the most performant code. It is also, strictly speaking, entirely possible to set the "intrinsic" for these functions on new hardware architectures to be the CRT implementation by default (provided it is IEEE-compliant), if the software fallback's performance is considered too poor.

That being said, with the CRT implementation we already have the case today that the majority of the functions on some platforms are considerably slower (330% slower in the worst case): https://github.com/dotnet/coreclr/issues/9373.

This proposal gives us a standard baseline of "correctness" in the software implementation, against which any hardware-specific improvements can readily be checked. It also allows us to check that the underlying CRT implementation (if it were to be used as the intrinsic) is compatible with our expectations.
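A minimal sketch of what such a compatibility check could look like, comparing bit patterns rather than tolerances (the `ManagedSin` reference here is a hypothetical stand-in for the proposed managed baseline, not anything that exists in the runtime):

```csharp
using System;

static class CrtCompatCheck
{
    // Hypothetical managed reference implementation; a real check would
    // compare the proposed managed port, not Math.Sin itself.
    static double ManagedSin(double x) => Math.Sin(x);

    static bool BitwiseEqual(double a, double b) =>
        BitConverter.DoubleToInt64Bits(a) == BitConverter.DoubleToInt64Bits(b);

    static void Main()
    {
        foreach (double x in new[] { 0.0, -0.0, 1e-300, Math.PI / 4, double.NaN })
        {
            // Math.Sin currently resolves to an FCALL into the platform's C runtime.
            Console.WriteLine($"{x,8}: bitwise match = {BitwiseEqual(Math.Sin(x), ManagedSin(x))}");
        }
    }
}
```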
A few places where this might unlock some perf: once the implementation is visible to the JIT it can be inlined and constant-folded, and the call into the C runtime (along with any register shuffling it requires) disappears.
As I have said, I am fine with using a C# implementation that depends on intrinsics on platforms where we have deeper codegen investments. The best implementation of Abs for x64 is actually just clearing the sign bit.

For bring-up of new platforms, or platforms with less codegen investment, the C runtime implementation is the default. It may not be as good, or it can have different behavior in corner cases - but it is good enough. For example, I do not ever want a software fallback to be a hard requirement for these.
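The code snippet from this comment did not survive extraction; below is a sketch of the standard sign-bit-clearing trick, offered as an assumption about what was meant rather than the comment's original code:

```csharp
using System;

static class AbsSketch
{
    // Clear the sign bit. On x64 this can compile down to a single
    // andpd/andps against a constant mask, with no call overhead.
    static double Abs(double x) =>
        BitConverter.Int64BitsToDouble(BitConverter.DoubleToInt64Bits(x) & 0x7FFF_FFFF_FFFF_FFFF);

    static float Abs(float x) =>
        BitConverter.Int32BitsToSingle(BitConverter.SingleToInt32Bits(x) & 0x7FFF_FFFF);
}
```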
Wouldn't having a managed version make porting easier? For example, suppose you want to port .NET Core to MIPS: you don't have to worry about porting Math / MathF initially, and you have time to debug other issues; optimizing Math with hardware intrinsics or calls into the C runtime could be done afterwards. Or am I missing something?
For porting, you do not have to worry about the C runtime either (porting CoreCLR to a platform without a C runtime is a non-scenario). And the C runtime functions will be better debugged and have better performance than a software-based fallback written in C# (on platforms without deeper codegen investment).
There are several problems with this proposal, but to me the major one is that it moves the maintenance and fixing from the underlying platform to us, for what is a very well established, maintained, and universally understood API.

The stated goal of "consistency across operating systems and platforms" has been troublesome in the past for things like Unicode string collation. Mono emulated the Windows behavior, while .NET Core took the native approach - and I think the approach .NET Core took, using libICU instead of trying to emulate the Windows behavior, was the right one.

Porting to a new platform already requires an advanced libc, or an equivalent, to be available - not only for the managed bridges, but because the runtimes themselves (both CoreCLR and Mono) consume these functions directly.

There is the performance issue as well, which Jan just touched on.
AMD has open sourced a snapshot of the libm implementation that is currently used by Windows x64: https://github.com/amd/win-libm (plus a few improvements that haven't been picked up yet). This, combined with other open source libm implementations such as the one from ARM: https://github.com/ARM-software/optimized-routines, should allow us to provide a portable implementation that is both IEEE compliant and deterministic.

We could do that by contributing back to the existing libraries and picking one implementation as the "de-facto" standard to pull in, or we could do that by porting the code to C#. I would lean towards the former (picking an implementation as the "de-facto" standard), but I think the latter has some interesting possibilities as well. Not only would it play better with .NET code (such as being GC-aware), but it could also take advantage of optimizations the standard C code can't (such as not needing to worry about floating-point exception handling or alternative rounding modes, since .NET doesn't currently support those).
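To give a flavor of what porting such a routine to C# looks like, here is a heavily simplified sketch of the usual libm structure (range reduction plus a polynomial kernel) for exp. The coefficients are plain Taylor terms for illustration only; the AMD/ARM production routines use carefully derived minimax coefficients and far more rigorous reduction:

```csharp
using System;

static class ExpSketch
{
    static double Exp(double x)
    {
        // Range reduction: write x = k*ln(2) + r with |r| <= ln(2)/2.
        const double InvLn2 = 1.4426950408889634;
        const double Ln2 = 0.6931471805599453;
        int k = (int)Math.Round(x * InvLn2);
        double r = x - k * Ln2;

        // Polynomial kernel approximating e^r on the reduced range
        // (degree-4 Taylor here; real ports use higher-degree minimax).
        double p = 1.0 + r * (1.0 + r * (0.5 + r * (1.0 / 6.0 + r * (1.0 / 24.0))));

        // Reconstruction: e^x = 2^k * e^r, scaled exactly via ScalB.
        return Math.ScalB(p, k);
    }
}
```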
Just a comment after many weeks of frustration. |
@tcwicks, could you please explain what problems you are having and why you are having difficulties with the design? |
As someone interested in .NET being a first-class IEEE-standards high-performance computing platform, I agree with @tannergooding: we should support a libm implementation on x64 and ARM. The "C runtime implementations", as described by @jkotas, are not suitable for anything but substandard floating-point performance. The C runtime is not good enough for today's machine learning and IEEE computing needs; it is little more than an all-else-fails compute fallback. I have no idea why the .NET team would spend all these years making an awesome high-performance cross-platform library and then drop the ball on IEEE and machine-learning compute. How many high-performance compute architectures outside of ARM and x64 do we need to support beyond the C runtime? Zero. Should we have a way for vendors to add compute libraries? Yes! We only need high-performance compute on these two platforms. Processor makers should be able to add compute libraries as plugins to .NET rather than having a narrow-minded approach to what is good enough.
Hoping to get some traction on this idea. The main reason is performance, as the libc implementations can be slow and may require register shuffling to call. Additionally, I'd love identical results on different platforms. I understand that only a subset of users has these priorities. I think the ScalB "experiment" went well. The other regularly reported problematic functions are pow, exp, sin and cos. I believe having those four in managed code would solve most problems.
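For reference, ScalB(x, n) computes x * 2^n exactly. A minimal managed sketch of the idea, adjusting the exponent bits directly and deliberately ignoring the subnormal and overflow handling the real implementation needs:

```csharp
using System;

static class ScalBSketch
{
    // x * 2^n by adjusting the biased exponent field directly.
    // NOTE: only valid while input and result stay in the normal range;
    // the real implementation handles subnormals, overflow and NaN/Inf.
    static double ScalB(double x, int n)
    {
        long bits = BitConverter.DoubleToInt64Bits(x);
        long exp = (bits >> 52) & 0x7FF;
        if (exp == 0 || exp == 0x7FF)
            return x * Math.Pow(2, n); // crude fallback for the edge cases
        return BitConverter.Int64BitsToDouble(bits + ((long)n << 52));
    }
}
```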
Still struggling with the bad performance of these functions. It seems that since .NET 9 the runtime already contains managed math implementations for the vector APIs. Given this precedent, I'd be happy to port scalar pow, cos, sin, exp and fmod implementations from AOCL to Math. Would you be interested in such a PR?
Simon - as third-party devs, we're in complete agreement with you and would be 100% interested in your porting these functions to managed code, as the C runtime is less than optimal. If the .NET team has already written managed code for these functions, we are mystified why they did not update the scalar routines to use it. We would likely want to backport this to .NET 8, just for consistency... one can dream.
Vector and scalar algorithms differ somewhat in how they handle various logic, due to the difference between processing many values simultaneously and a single value at a time. In some cases there are also notably minor accuracy differences between the two: it is acceptable for net-new vectorized algorithms to have a slightly higher amount of error, given the typical use case and the fact that they are "net new" APIs. We can then improve that accuracy and/or performance over time for the less likely edge cases.
That's unlikely to happen for many reasons.
Normally the answer would be: yes. However, I've already got much of the work done, and there's not really any need for you to re-port things. There's only so much work that I (and the broader team in general) can do (design, implement, review, document, etc.) in a given release, so not everything ends up in a single release. For .NET 9 the focus was primarily on providing vectorized versions of the "core" math APIs. In .NET 10 we plan on continuing with vectorizing the rest of the math APIs, and at least getting reviews up for ports of some of the scalar APIs (I just have to finish getting the pending community PRs reviewed/merged into .NET 10 first, before I take on even more PRs and work, so that things don't get overly stale and stay manageable).

In the interim, as of .NET 9 developers already have access to accelerated and deterministic math by functionally doing Vector128.Sin(Vector128.CreateScalarUnsafe(value)).ToScalar(), for example. It is indeed a bit more verbose, but it gets the job done in the interim.
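Spelled out as helpers, the suggested pattern looks like this (requires .NET 9, where the Vector128.Sin/Cos APIs were introduced):

```csharp
using System.Runtime.Intrinsics;

static class VectorizedScalarMath
{
    // Wrap the scalar in a vector, evaluate with the deterministic
    // vectorized implementation, then extract the scalar result.
    static float Sin(float value) =>
        Vector128.Sin(Vector128.CreateScalarUnsafe(value)).ToScalar();

    static double Cos(double value) =>
        Vector128.Cos(Vector128.CreateScalarUnsafe(value)).ToScalar();
}
```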
That is fantastic news! 👍👍👍
Unfortunately, this alternative was slower than MathF. (For others thinking about going this route: pow and fmod are not [yet?] implemented on Vector128.)
I know the leader of our ILGPU project has been unimpressed with the dotnet team's level of interest in scientific computing. Clearly the tensor library is an outgrowth of AI, which goes hand in hand with scientific computing. Microsoft Research believes AI-powered PDE solvers are the future; it would be nice to have fully baked scientific primitives in the library rather than the current assumption that everyone only codes in Python/C++ for scientific compute.
Definitely possible in some cases, since there are still some edge cases that may not be as accelerated, or which need additional handling for some inputs. The vectorized path also has to consider all lanes at once, so wrapping a single scalar in a vector can carry overhead that a dedicated scalar implementation would avoid.
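A quick way to see how this trade-off plays out on a given machine is a rough timing loop like the one below (a crude sketch; a real comparison should use BenchmarkDotNet, and it assumes .NET 9 for Vector128.Sin):

```csharp
using System;
using System.Diagnostics;
using System.Runtime.Intrinsics;

class Program
{
    static void Main()
    {
        const int N = 10_000_000;
        float acc = 0f;

        // Scalar path: MathF.Sin (currently an FCALL into the C runtime).
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++) acc += MathF.Sin(i * 1e-4f);
        Console.WriteLine($"MathF.Sin:            {sw.ElapsedMilliseconds} ms");

        // Vector round-trip: wrap the scalar, use the vectorized Sin, extract.
        sw.Restart();
        for (int i = 0; i < N; i++)
            acc += Vector128.Sin(Vector128.CreateScalarUnsafe(i * 1e-4f)).ToScalar();
        Console.WriteLine($"Vector128 round-trip: {sw.ElapsedMilliseconds} ms ({acc})");
    }
}
```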
As per the title, both System.Math and System.MathF should have most of their extern methods implemented in managed code rather than being FCALLs to the underlying C runtime. This will ensure, among other things, consistency across operating systems and platforms.

Some of the functions (such as Abs, Ceil, Floor, Round, and Sqrt) are simple enough that they can be implemented in managed code today and still maintain the performance characteristics. Other functions (such as Cos, Sin, and Tan) will need to wait until the hardware intrinsics proposal is more widely available (since maintaining perf numbers will require an implementation to call said intrinsics).
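For the "call said intrinsics" point, here is a sketch of what a hardware-backed managed Sqrt could look like on x64. The JIT already treats Math.Sqrt as an intrinsic, so this is purely illustrative of the mechanism, not the runtime's actual code; the software fallback is a naive Newton-Raphson sketch that glosses over subnormals and correct rounding:

```csharp
using System;
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

static class SqrtSketch
{
    static double Sqrt(double x) =>
        Sse2.IsSupported
            ? Sse2.SqrtScalar(Vector128.CreateScalarUnsafe(x)).ToScalar() // single sqrtsd
            : SoftwareSqrt(x); // portable fallback for other platforms

    static double SoftwareSqrt(double x)
    {
        if (double.IsNaN(x) || x < 0) return double.NaN;
        if (x == 0 || double.IsPositiveInfinity(x)) return x;

        // Rough initial guess: halve the biased exponent via bit tricks
        // (subnormal inputs would need extra care in a real implementation).
        long bits = BitConverter.DoubleToInt64Bits(x);
        double g = BitConverter.Int64BitsToDouble((bits >> 1) + 0x1FF8000000000000L);

        // A few Newton-Raphson refinements converge to near full precision.
        for (int i = 0; i < 5; i++) g = 0.5 * (g + x / g);
        return g;
    }
}
```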