After some recent playtime with modular synthesis I stumbled upon the Web Audio API and found out that oscillators are totally a thing! So, why not write down my findings to help you create your own synth in JS.
Web Audio
The Web Audio API isn't new at all, and it's widely supported across real browsers. (IE isn't a real browser, @vandijkstef me bro!). It allows us to work with audio using modules and routing. To make it work, you will need at least one source (like an oscillator or an audio file) and route it to a destination. Optionally, you can add more sources and/or route them through one or multiple effects, like filters, delays or volume controls. Note that the HTML audio element is regarded as a source module within the Web Audio API.
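For instance, if you'd rather play a file than generate a tone, a minimal sketch could look like this (the #track id is an assumption; any audio element on the page will do):
const context = new AudioContext();
const audioEl = document.querySelector('#track'); // assumes an <audio id="track"> element exists
const source = context.createMediaElementSource(audioEl); // wrap the element as a source module
source.connect(context.destination); // route it straight to the speakers
audioEl.play();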
Before we start anything, there are two important things to know about the Web Audio API:
- Some browsers (like Chrome) will only allow new AudioContexts (see next chapter) to be created after a user action. It's wise to give the synthesizer a 'power button' to hook up your scripts to; the rest of this tutorial skips it, but the sketch after this list shows the idea.
- Any chain that is not actively doing 'something' will be garbage collected (and thus removed from memory). This can take out the complete AudioContext, should your last oscillator be stopped. The browser will warn you when you try to access objects that aren't available anymore.
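A minimal power-button sketch, assuming a <button id="power"> element (the id is made up for this example):
const powerButton = document.querySelector('#power'); // hypothetical power button element
powerButton.addEventListener('click', () => {
  const context = new AudioContext(); // created inside a user action, so the browser allows it
  // ... build your sources, effects and routing here
});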
Make some noise
To make noise in the land of JS, we first need to create a 'space' where our sound will live: a music studio within our code. We refer to this space as the AudioContext. Within this space, we can create sources and effects and route them to the destination: our sound card and speakers. You can create an oscillator with the AudioContext method .createOscillator(). Routing is done using the .connect() method on the 'from' module, passing the 'to' module as its argument. Also, the oscillator isn't generating noise when created; to do this, you need to call its .start() method. Let's rephrase this in a block of code:
WARNING: This is probably a good time to turn down your speakers or headphones.
const context = new AudioContext(); // Create the space for our audio to live in
const osc = context.createOscillator(); // Create an oscillator
osc.connect(context.destination); // Route the osc to our AudioContext. Only use '.destination' when routing to the AudioContext
osc.start();
If done right you will hear a horrible tone playing. If you happen to use headphones or have properly set up speakers, you may have noticed that the sound is only coming from the left channel. Did it come from the right only? Probably a good time to check if you swapped your wires.
To fix this we need to 'expand' our mono sound to stereo. We can do this with the ChannelMergerNode. When creating it, you can decide the number of channels it will support (2 for stereo) and route channels into it. To better understand this, we need to look at the arguments of .connect() a little better: destination, outputIndex and inputIndex. In the previous example we already used the only required argument, destination: the node where the audio will go. The outputIndex specifies which of the output channels will be used, where 0 is the left signal and 1 the right signal. The same goes for inputIndex, only this one selects the channel on the destination. In this example we are going to copy our 'left' signal to the 'right' channel. Note that we also have to change our routing to use the channelMerger.
const context = new AudioContext();
const osc = context.createOscillator();
osc.start();
const merger = context.createChannelMerger(2); // Create the channel merger with two channels (stereo)
osc.connect(merger, 0, 0); // Connect our OSC to the merger, using the left channel as source and left channel as destination
osc.connect(merger, 0, 1); // Do the same, but this time use the right channel as destination
merger.connect(context.destination); // Don't forget to route it to the speakers
Control the noise (master gain, panning and property control in Web Audio)
Are you deaf yet?!
Sorry! Now is probably a good time to add some volume controls. As before with the oscillator and channelMerger, we need to create a new module for volume control: gain. Additionally, we're going to add some panning with the stereoPanner. Let's jump in:
// [...] previous example
const gainModule = context.createGain(); // Create our gain module
const pannerModule = context.createStereoPanner(); // .. and our pan module
merger.connect(gainModule); // This line is changed from previous example
gainModule.connect(pannerModule); // Use all our new modules
pannerModule.connect(context.destination); // And update routing again
To control these values, we're using simple HTML range sliders. Note that the gain uses a value between 0 and 1, but the panner uses a scale from -1 for full left to 1 for full right. 0 is the centered sweet spot.
<input name="gain" type="range" min="0" max="1" step="0.01">
<input name="pan" type="range" min="-1" max="1" step="0.01" value="0">
Okay, the next part is going to be tricky. Unless you are fine with either no live feedback, or with grainy distortion when moving parameters, we need to do several things. We cannot depend on the change event alone, since it doesn't update while sliding, only when releasing the mouse button. Additionally, if you change something instantly in audio, it will 'cut' the waveform, leaving nasty 'pops'. First, let's take a look at our event handling, which will rely on the mousemove event:
// [...] previous examples
const sliderGain = document.querySelector('[name=gain]');
sliderGain.addEventListener('mousemove', (e) => {
  // This event fires when we hover over the element, the range slider
  if (e.buttons === 1) { // Test if the primary mouse button is held down, essentially 'dragging' the slider
    console.log(e.target.value);
    // Next example here
  }
});
// And for our panning
const sliderPan = document.querySelector('[name=pan]');
sliderPan.addEventListener('mousemove', (e) => {
  if (e.buttons === 1) {
    console.log(e.target.value);
    // Next example here
  }
});
Confirm that the console is reporting proper values for the sliders before you move on.
As I've said before, you cannot instantly change audio values without creating a nasty pop. Luckily most audio parameters have several methods to change them, one of them being .linearRampToValueAtTime. It takes a value and an endTime. With this method we can tell the parameter to change to another value at some specified point in time. I found that about .1 seconds in the future works fine. This time needs to be relative to the current time of the audio context. So our event handlers would look like this:
// Gain
gainModule.gain.linearRampToValueAtTime(parseFloat(e.target.value), context.currentTime + .1); // Use the .gain in the gainModule!
// Pan
pannerModule.pan.linearRampToValueAtTime(parseFloat(e.target.value), context.currentTime + .1); // Use the .pan on the pannerModule!
Now when executing the code, you should be able to control the master volume and panning of the sound. Warning: additional event handling is required to make this work without a mouse.
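One way to cover keyboard input is the input event, which fires on every value change, including those made with the arrow keys. A sketch for the gain slider, reusing the modules from the previous examples:
sliderGain.addEventListener('input', (e) => {
  // Fires while dragging and on keyboard changes, so no button check is needed
  gainModule.gain.linearRampToValueAtTime(parseFloat(e.target.value), context.currentTime + .1);
});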
Control the sound
So we did all that, and all we have is a basic tone. That's because we didn't look at the oscillator properties yet. An oscillator can be changed in several ways. You can detune it, alter its frequency or change the type of waveform. The first two take a numerical value and are described as an 'AudioParam'. This means they have methods like .linearRampToValueAtTime to gradually change their values. The type is a string and can be 'sine', 'square', 'sawtooth', 'triangle' or 'custom' (although you don't set 'custom' yourself; it appears when you use .setPeriodicWave()). Go ahead and play around with these properties.
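As a starting point, a few sketched tweaks, reusing the osc from the earlier examples:
osc.type = 'sawtooth'; // an instant change is fine here, type is not an AudioParam
osc.frequency.linearRampToValueAtTime(220, context.currentTime + .1); // glide down to an A3
osc.detune.linearRampToValueAtTime(50, context.currentTime + .1); // push it 50 cents sharp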
Make it music
Finally, something groovy! Let's start with our spacebar as our 'key'. We won't do anything to the frequency yet, but instead start to 'shape' the sound to create something piano-like. To do this we could simply try to start and stop our oscillator, except that it would never start again: oscillators are not made to be reused. You could create a fresh oscillator each time you hit a key, but in this case I want to use another gain to control the sound. We will place this new gain before our master gain in the chain. Now when we press our spacebar, we should set that gain to 0, let it fade in over a amount of time and then fade it out over r amount of time. Yes, I am referring to ADSR envelopes here. Please check out [this article on synthesis] if you are not sure what this means. What this will look like in code:
const gainEnvelope = context.createGain(); // Create the new gain
merger.connect(gainEnvelope); // Change this line from the previous examples
gainEnvelope.connect(gainModule); // .. to wiggle it in
// Envelope variables, feel free to hook these up to sliders or other inputs
const a = .2;
const r = .4;
document.addEventListener('keyup', (e) => {
  if (e.keyCode === 32) { // the keyCode for the spacebar
    gainEnvelope.gain.setValueAtTime(0, context.currentTime); // Reset
    gainEnvelope.gain.linearRampToValueAtTime(1, context.currentTime + a); // Attack: fade in
    gainEnvelope.gain.linearRampToValueAtTime(0, context.currentTime + a + r); // Release: fade out. Don't forget to include the attack, since you are scheduling all these future events at the same time
  }
});
Expand
These are the basics of generating a tone based on some input. Now you can expand. There are various other effects available, like filters, delays and distortion. You can use multiple oscillators, even separate ones per stereo channel, and hook them up to your keyboard, controller or any other device. Change the frequency based on the input, or try to create a sequencer in JS too.
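For example, a sketch that maps a few keys to frequencies (the keys and note values are just one possible choice) could look like this:
const notes = { a: 261.63, s: 293.66, d: 329.63, f: 349.23 }; // C4 to F4, in Hz
document.addEventListener('keydown', (e) => {
  if (notes[e.key]) { // e.key is the modern alternative to keyCode
    osc.frequency.setValueAtTime(notes[e.key], context.currentTime); // jump to the note
    // then trigger the gainEnvelope like in the previous example
  }
});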