
Training capabilities #3

Open

anssiko opened this issue Nov 8, 2018 · 1 comment

anssiko (Member) commented Nov 8, 2018

The current charter states in its out of scope section:

The scope is limited to development of interfaces that expose inference capabilities of the modern platforms beneficial or purpose-built for ML. Training capabilities are out of scope due to limited availability of respective platform APIs.

As a follow-up to the F2F discussions, this issue is to track and discuss the respective platform APIs that enable training capabilities, with a view to the future development needs of the Web Neural Network API. Should all major platform APIs gain similar training capabilities, the group has the option to amend its scope in the future, subject to adequate support from the group.

The group has made the following commitment in its scope of work with respect to platform support:

The APIs in scope of this group will not be tied to any particular platform and will be implementable on top of existing major platform APIs, such as Android Neural Networks API, Windows DirectML, and macOS/iOS Metal Performance Shaders and Basic Neural Network Subroutines.

@DanielMazurkiewicz commented
Moved from: #6

Back-propagation is a fairly simple training algorithm, and if it is provided only for the basic operation blocks (the "core1" domain I mentioned in the phasing issue), it shouldn't require significant effort for vendors to implement and test.

The benefits seem fairly obvious, but let's briefly list what comes to mind:

  • a lower entry barrier to the ML world for JS developers (they only need to get familiar with the API and can quickly jump into ML)
  • easier phasing and quicker shipment of the standard: there is no need to cover all possible operation blocks from different ML environments; a small basic subset of blocks is enough
  • the possibility to train or adjust NNs "on the fly", for example in the gaming industry
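To illustrate how little machinery plain back-propagation needs over the basic operation blocks (here just matrix-vector products and sigmoid), below is a minimal sketch in plain JavaScript: a hypothetical 2-2-1 network trained on XOR with stochastic gradient descent. This is illustration only, not WebNN API code; all names are made up for the example.

```javascript
// Minimal back-propagation sketch: 2 inputs, 2 hidden sigmoid units, 1 output.
const sigmoid = x => 1 / (1 + Math.exp(-x));
const rand = () => Math.random() * 2 - 1; // init weights in [-1, 1]

let w1 = [[rand(), rand()], [rand(), rand()]]; // input -> hidden weights
let b1 = [rand(), rand()];                     // hidden biases
let w2 = [rand(), rand()];                     // hidden -> output weights
let b2 = rand();                               // output bias

const data = [
  { x: [0, 0], y: 0 },
  { x: [0, 1], y: 1 },
  { x: [1, 0], y: 1 },
  { x: [1, 1], y: 0 },
];
const lr = 0.5; // learning rate

function forward(x) {
  const h = [0, 1].map(j => sigmoid(x[0] * w1[0][j] + x[1] * w1[1][j] + b1[j]));
  const o = sigmoid(h[0] * w2[0] + h[1] * w2[1] + b2);
  return { h, o };
}

const mse = () =>
  data.reduce((s, { x, y }) => s + (forward(x).o - y) ** 2, 0) / data.length;

const errBefore = mse();

for (let epoch = 0; epoch < 20000; epoch++) {
  for (const { x, y } of data) {
    const { h, o } = forward(x);
    // Output delta for squared error with sigmoid: (o - y) * o * (1 - o)
    const dO = (o - y) * o * (1 - o);
    // Hidden deltas: error propagated back through w2, times sigmoid derivative
    const dH = [0, 1].map(j => dO * w2[j] * h[j] * (1 - h[j]));
    // Gradient-descent updates
    for (let j = 0; j < 2; j++) {
      w2[j] -= lr * dO * h[j];
      for (let i = 0; i < 2; i++) w1[i][j] -= lr * dH[j] * x[i];
      b1[j] -= lr * dH[j];
    }
    b2 -= lr * dO;
  }
}

const errAfter = mse();
console.log('MSE before:', errBefore.toFixed(4), 'after:', errAfter.toFixed(4));
```

The whole training loop reduces to forward passes, element-wise derivatives, and weight updates over the same small set of blocks the inference path already needs, which is the point being made above.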
