Create a Web Audio Worklet that uses DaisySP for signal processing.
Check out a functional example using `daisysp::Oscillator`.
- Install Node.js
- Install Emscripten

  NOTE: Emscripten packages a specific version of Node.js. If you `source ./emsdk_env.sh` as the documentation explains, it will add that version to the front of the shell `PATH`. There is a long discussion on why it does this, but if Node.js is already installed on your system, you should add `emsdk/upstream/emscripten` to your shell `PATH` rather than source the script.

- Make sure the Emscripten and Node.js executables are in the shell `PATH`
- Clone this repository

  ```sh
  git clone https://github.com/johnhooks/daisysp-wasm.git
  ```
- Enter the directory, install dependencies, and run the build script

  ```sh
  cd daisysp-wasm
  yarn install # or npm install
  yarn build # or npm run build
  ```

  There is also a watch script that will rebuild the project when source files change.

  ```sh
  yarn watch # or npm run watch
  ```
- The project has an example website. To start the dev server:

  ```sh
  yarn website:dev # or npm run website:dev
  ```
- Visit localhost:5173 to check out the example.
The C++ source code for the audio processor is located in the `worklet` directory.
A `Makefile` is generated in the `build` directory using `emcmake cmake -B ./build -S ./`. After entering the `build` directory, `make` is called, and the WASM file is built in the `build/wasm` directory. The WASM file is inlined into the wrapping JavaScript code for ease of loading: an AudioWorklet script is loaded with `addModule`, and its global scope has no straightforward way to fetch a separate `.wasm` file, so inlining keeps everything in one module.
`rollup` bundles the audio worklet processor JavaScript code. The bundled file is copied into the `example/public` directory to be loaded as an `AudioWorklet` in the example website.
An example of how to create an `AudioNode` from the worklet:

```js
let context = new AudioContext();
// Register the bundled processor script with the audio context.
await context.audioWorklet.addModule("wasm-worklet-processor.js");
// The name must match the one passed to registerProcessor in the worklet script.
let audioNode = new AudioWorkletNode(context, "wasm-worklet-processor");
```
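To hear anything, the node still has to be connected to the graph. A minimal follow-up sketch, assuming the example processor generates its own audio; the `resume` call is there because browsers require a user gesture before an `AudioContext` may produce sound (the `#start` button is hypothetical):

```js
// Hypothetical start button; browsers block audio until a user gesture.
document.querySelector("#start")?.addEventListener("click", async () => {
  await context.resume();
  audioNode.connect(context.destination);
});
```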
Google, in their example, uses raw pointers to memory allocated on the WASM heap, but first the inputs and outputs are transformed. The WASM heap is a `Uint16Array`, and `wasm-audio-helper.js` performs some magic to convert each of the `inputs` and `outputs` arguments of the `AudioWorkletProcessor#process` method into a single `Float32Array`, laying the channels out one after the other.
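Roughly the idea, as a sketch rather than the actual helper code; `heapF32` stands in for a `Float32Array` view of the module's memory and `ptr` for an offset returned by the module's allocator (both hypothetical names):

```js
// Copy planar channel data into one contiguous block on the WASM heap,
// placing each channel's samples directly after the previous channel's.
function copyToHeap(heapF32, ptr, channels, frameCount) {
  const base = ptr / Float32Array.BYTES_PER_ELEMENT;
  channels.forEach((channel, i) => {
    heapF32.set(channel, base + i * frameCount);
  });
}
```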
In the Emscripten binding, raw pointers are cast to `float*` to access the `input`/`output` data on the WASM heap in the C++ code. There is a warning about using raw pointers somewhere in the documentation; I am having some trouble finding it, but the gist was that there isn't any guarantee on the lifetime of a raw pointer. Though since it's being used in the `process` method of the audio processor and not kept past the callback's scope, I hope it's reasonably safe.
```cpp
void process(uintptr_t input_ptr, uintptr_t output_ptr, unsigned channel_count) {
  // The JavaScript side passes byte offsets into the WASM heap; cast them
  // back into float pointers to read and write the audio buffers.
  float* input_buffer = reinterpret_cast<float*>(input_ptr);
  float* output_buffer = reinterpret_cast<float*>(output_ptr);
}
```
ref: [reinterpret_cast](https://en.cppreference.com/w/cpp/language/reinterpret_cast)
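For context, the calling side might look roughly like the sketch below. This is an assumption-laden illustration, not the repo's actual code: `instance` stands for the initialized Emscripten module (exposing an embind-wrapped `Processor` class plus `_malloc` and the `HEAPF32` view), and all other names are made up:

```js
class SketchWorkletProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.kernel = new instance.Processor(); // hypothetical embind-bound class
    this.frames = 128; // current render quantum size
    // Preallocate stereo input/output blocks on the WASM heap (4 bytes/float).
    this.inputPtr = instance._malloc(2 * this.frames * 4);
    this.outputPtr = instance._malloc(2 * this.frames * 4);
  }

  process(inputs, outputs) {
    const input = inputs[0];
    const output = outputs[0];
    const heap = instance.HEAPF32; // re-read each call in case memory grew
    const inBase = this.inputPtr / 4;
    const outBase = this.outputPtr / 4;
    // Copy planar input channels onto the heap, one after the other.
    input.forEach((channel, i) => heap.set(channel, inBase + i * this.frames));
    // Hand the C++ process method raw heap offsets, as shown above.
    this.kernel.process(this.inputPtr, this.outputPtr, output.length);
    // Copy the rendered samples back into the planar output channels.
    output.forEach((channel, i) => {
      channel.set(heap.subarray(outBase + i * this.frames,
                                outBase + (i + 1) * this.frames));
    });
    return true; // keep the processor alive
  }
}

registerProcessor("sketch-worklet-processor", SketchWorkletProcessor);
```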
I am a total novice at both C++ and WASM; I have a lot to learn.
- I think DaisySP expects the samples to be interleaved, while the Web Audio API uses a planar buffer format (see the sketch after this list).
- Need to be flexible about the number of frames per call to render. Google's examples expect it to be 128, though MDN warns that it could become variable in the future.
- Changing the audio buffer/block size of the Daisy Seed
- Building LLVM
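On the first point, converting between the two layouts is just a pair of loops. A minimal sketch of the idea (plain helper functions, not part of the repo):

```js
// Planar [[L...], [R...]] -> interleaved [L, R, L, R, ...]
function interleave(channels, frameCount) {
  const out = new Float32Array(channels.length * frameCount);
  for (let frame = 0; frame < frameCount; frame++) {
    for (let ch = 0; ch < channels.length; ch++) {
      out[frame * channels.length + ch] = channels[ch][frame];
    }
  }
  return out;
}

// Interleaved [L, R, L, R, ...] -> planar [[L...], [R...]]
function deinterleave(interleaved, channelCount) {
  const frameCount = interleaved.length / channelCount;
  const channels = Array.from({ length: channelCount },
                              () => new Float32Array(frameCount));
  for (let frame = 0; frame < frameCount; frame++) {
    for (let ch = 0; ch < channelCount; ch++) {
      channels[ch][frame] = interleaved[frame * channelCount + ch];
    }
  }
  return channels;
}
```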