From 2c2f5b866ffc1b84d708f516e822259e50f491f0 Mon Sep 17 00:00:00 2001
From: Qichao Lan <35621141+chaosprint@users.noreply.github.com>
Date: Thu, 7 Apr 2022 13:30:27 +0200
Subject: [PATCH] edit readme

---
 README.md | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/README.md b/README.md
index cb006c1..b92144a 100644
--- a/README.md
+++ b/README.md
@@ -12,11 +12,12 @@ Let's consider a simple example: you want to train an agent to play the synth se
 Yet it can be very difficult and time-consuming to build a real-world environment (such as a music robot) to cover all the needs for electronic music. Another option is to use some built-in Python function to compose our `music tasks`, but still, for each task, you need to write some DSP function chains which will be unlikely for these codes to be used again in the real world.
 
 A better way is to find a commonplace between our simulation and real-world music practices. Live coding is exactly such a practice where the artist performs improvised algorithmic music by writing program code in real-time. What if we train a virtual agent to write (part of the) code to synthesis a loop for us? The architecture looks like this:
-```
-Agent
--> Play around the live coding code
--> Live coding engine does the non-real-time synthesis
--> Get the reward, observation space, etc.
+```mermaid
+flowchart TD
+  A[agent, i.e. a neural network] --> B
+  B[input current states. output params of the live coding code] --> C
+  C[live coding engine does the non-real-time synthesis] --> D
+  D[get the reward, observation space, etc.] -- update --> A
 ```
 
 This process should involve some deep neural network as the synthesised audio is much more difficult to process than the symbolic sequences.
@@ -91,4 +92,4 @@ Glicol branch (main):
 
 SuperCollider branch:
 
-> GPL-3.0 License
\ No newline at end of file
+> GPL-3.0 License
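
To make the loop in the new diagram concrete, here is a minimal, self-contained sketch. It is not RaveForce's actual API: the "engine" is a pure-Python sine oscillator rendered offline, the "agent" is plain random search over a single synthesis parameter, and the reward is the negative mean squared error against a target tone. Every name and number below (`render`, `reward`, `TARGET_FREQ`, the frequency range) is an illustrative assumption, not part of the project.

```python
# Toy version of: agent proposes params -> engine renders offline -> reward -> update agent.
import math
import random

SAMPLE_RATE = 44_100
TARGET_FREQ = 440.0  # the tone we want the agent to reproduce (illustrative)


def render(freq: float, seconds: float = 0.1) -> list[float]:
    """Stand-in for the live coding engine: non-real-time synthesis of a sine tone."""
    n = int(SAMPLE_RATE * seconds)
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]


def reward(audio: list[float], target: list[float]) -> float:
    """Negative mean squared error between the rendered audio and the target."""
    return -sum((a - b) ** 2 for a, b in zip(audio, target)) / len(target)


def main() -> None:
    target = render(TARGET_FREQ)
    best_freq, best_reward = None, -math.inf
    for _ in range(300):
        # The "agent" proposes a parameter of the live coding code (here: a frequency).
        candidate = random.uniform(100.0, 1000.0)
        r = reward(render(candidate), target)  # engine renders offline, environment scores it
        if r > best_reward:                    # the "update" edge of the loop
            best_freq, best_reward = candidate, r
    print(f"best frequency found: {best_freq:.1f} Hz (reward {best_reward:.4f})")


if __name__ == "__main__":
    main()
```

In the real setup described by the README, the offline render would come from the live coding engine (Glicol or SuperCollider) and the random search would be replaced by a deep neural network policy; the structure of the loop is the same.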