
[CI status badges: null (unit test), DirectML, DirectMLX, OpenVINO, XNNPACK, oneDNN, and MLAS backends on Windows and/or Linux, plus Node bindings, memory leak checks, and clang format]

WebNN-native

WebNN-native is a native implementation of the Web Neural Network API.

It provides several building blocks:

  • WebNN C/C++ headers that applications and other building blocks use.
    • The webnn.h header, which is a one-to-one mapping of the WebNN IDL.
    • A C++ wrapper for webnn.h.
  • Backend implementations that use platforms' ML APIs:
    • DirectML on Windows 10
    • DirectMLX on Windows 10
    • OpenVINO on Windows 10 and Linux
    • oneDNN on Windows 10 and Linux
    • XNNPACK on Windows 10 and Linux
    • MLAS on Windows 10 and Linux
    • Other backends are to be added

WebNN-native reuses code from other open source projects:

  • The code generator and infrastructure code of the Dawn project.
  • The DirectMLX and device wrapper of the DirectML project.
  • The XNNPACK project.
  • The oneDNN project.
  • The MLAS project.

Build and Run

Install depot_tools

WebNN-native uses the Chromium build system and dependency management, so you need to install depot_tools and add it to the PATH.
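
As a minimal sketch (the checkout location is just an example; adjust it to your setup), installing depot_tools on Linux/macOS amounts to:

# Clone depot_tools and add it to the PATH (example location)
> git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git
> export PATH="$PWD/depot_tools:$PATH"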

Notes:

  • On Windows, you'll need to set the environment variable DEPOT_TOOLS_WIN_TOOLCHAIN=0. This tells depot_tools to use your locally installed version of Visual Studio (by default, depot_tools will try to download a Google-internal version).
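
For example, in a Windows Command Prompt (in PowerShell, use $env:DEPOT_TOOLS_WIN_TOOLCHAIN = "0" instead):

# Tell depot_tools to use the locally installed Visual Studio
> set DEPOT_TOOLS_WIN_TOOLCHAIN=0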

Get the code

Get the source code as follows:

# Clone the repo as "webnn-native"
> git clone https://github.com/webmachinelearning/webnn-native.git webnn-native && cd webnn-native

# Bootstrap the gclient configuration
> cp scripts/standalone.gclient .gclient

# Fetch external dependencies and toolchains with gclient
> gclient sync

Setting up the build

Generate build files using gn args out/Debug or gn args out/Release.

A text editor will open for you to set build options; the most common one is is_debug=true/false. Run gn args out/Release --list to see all available options.

To build with a backend, set the corresponding option from the following table (see the sample args.gn after the table).

Backend    Option
DirectML   webnn_enable_dml=true
DirectMLX  webnn_enable_dmlx=true
OpenVINO   webnn_enable_openvino=true
XNNPACK    webnn_enable_xnnpack=true
oneDNN     webnn_enable_onednn=true
MLAS       webnn_enable_mlas=true
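
For example, a Release build with the DirectML backend enabled could use an out/Release/args.gn along these lines (a minimal sketch; combine whichever backend flags you need):

# Sample out/Release/args.gn (sketch): Release build with the DirectML backend
is_debug = false
webnn_enable_dml = true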

Build

Then use ninja -C out/Release or ninja -C out/Debug to build WebNN-native.
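
For example, for a Release build:

# Build the Release configuration (use out/Debug for a debug build)
> ninja -C out/Release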

Notes

  • To build with the XNNPACK backend, build XNNPACK first, e.g. by ./scripts/build-local.sh. For Windows builds, supply -DCMAKE_MSVC_RUNTIME_LIBRARY="MultiThreaded$<$<CONFIG:Debug>:Debug>" to select the MSVC static runtime library.
  • To build with the oneDNN backend, build oneDNN first by following its build-from-source instructions.
  • To build with the MLAS backend, build MLAS (part of ONNX Runtime) first by following the "Build ONNX Runtime for inferencing" instructions, e.g. by .\build.bat --config Release --parallel --enable_msvc_static_runtime for a Windows build.
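
As a rough sketch of the XNNPACK prerequisite on a Windows command line (the source and build directories are placeholders, and this assumes XNNPACK's CMake honors CMAKE_MSVC_RUNTIME_LIBRARY, i.e. CMake 3.15+; on Linux, ./scripts/build-local.sh covers this step):

# Configure and build XNNPACK with the MSVC static runtime (paths are placeholders)
> cmake -S path\to\XNNPACK -B xnnpack-build -DCMAKE_MSVC_RUNTIME_LIBRARY="MultiThreaded$<$<CONFIG:Debug>:Debug>"
> cmake --build xnnpack-build --config Release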

Run tests

Run unit tests:

> ./out/Release/webnn_unittests

Run end2end tests on a default device:

> ./out/Release/webnn_end2end_tests

You can also specify a device to run the end2end tests on using the "-d" option, for example:

> ./out/Release/webnn_end2end_tests -d gpu

Currently "cpu", "gpu" and "default" are supported, more devices are to be supported in the future.

Notes:

Run examples

License

Apache 2.0 Public License, please see LICENSE.
