Spatial Audio & Dynamic Music
Sound Has a Location
In your previous tutorials, sound was flat — volume was the only control. In real games, a gunshot to your left sounds different from one directly ahead. Sound sources in 3D space pan between stereo channels, attenuate with distance, and bounce off surfaces (reverb).
The Web Audio API has powerful nodes for all of this: PannerNode for 3D positioning and
ConvolverNode for room acoustics. Combined with Dynamic/Adaptive Music
techniques, you can build an audio system that reacts to gameplay in real time.
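To build intuition for what a panner actually does, here is a rough hand-rolled version for a 2D game: a StereoPannerNode pans by horizontal offset and a GainNode attenuates with distance. The 300-pixel pan range and 100-pixel reference distance are arbitrary example values; the PannerNode below replaces all of this.
// Rough intuition only: manual pan + distance attenuation for a 2D game.
// PannerNode automates all of this with better models.
function fakeSpatialize(ctx, sourceNode, listener, emitter) {
  const pan = ctx.createStereoPanner();
  const gain = ctx.createGain();

  const dx = emitter.x - listener.x;
  const dy = emitter.y - listener.y;
  const dist = Math.hypot(dx, dy);

  pan.pan.value = Math.max(-1, Math.min(1, dx / 300)); // -1 = hard left, +1 = hard right
  gain.gain.value = 100 / (100 + dist);                // simple inverse falloff

  sourceNode.connect(pan).connect(gain).connect(ctx.destination);
}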
Spatial Audio Playground
Click Start Audio, then click anywhere on the canvas to place the sound source. Move it around and hear it pan left/right and fade with distance from the listener (blue dot).
Wiring the PannerNode
The audio graph for spatial sound adds a PannerNode between your source and destination.
Moving the panner in 3D space automatically adjusts volume and stereo pan.
const ctx = new AudioContext();

// Audio graph: Source → Panner → GainNode → Speakers
const oscillator = ctx.createOscillator();
const panner = ctx.createPanner();
const masterGain = ctx.createGain();

// Configure panner model
panner.panningModel = 'HRTF';     // Head-Related Transfer Function (most realistic)
panner.distanceModel = 'inverse'; // Volume = refDist / (refDist + rolloff * (dist - refDist))
panner.refDistance = 1;
panner.maxDistance = 500;
panner.rolloffFactor = 1.5;

// Connect chain
oscillator.connect(panner);
panner.connect(masterGain);
masterGain.connect(ctx.destination);
oscillator.start(); // don't forget to start the source (the context must also be resumed by a user gesture)

// Position the listener (the "camera", usually the player position)
ctx.listener.positionX.value = 0;
ctx.listener.positionY.value = 0;
ctx.listener.positionZ.value = 0;

// Position the sound source in 3D space
// For a 2D top-down game: use X for left/right, Z for depth, keep Y = 0
function setSoundPosition(worldX, worldZ) {
  panner.positionX.value = worldX;
  panner.positionY.value = 0;
  panner.positionZ.value = worldZ;
}
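One practical note: browsers keep an AudioContext suspended until the user interacts with the page, which is why the demo has a Start Audio button. A typical unlock pattern (sketched below with a hypothetical #start-audio button id) resumes the context on the first click; after that the graph above becomes audible.
// Browsers block audio until a user gesture; resume the context on first interaction.
// '#start-audio' is a hypothetical button id; adapt to your UI.
document.querySelector('#start-audio').addEventListener('click', async () => {
  if (ctx.state === 'suspended') {
    await ctx.resume();
  }
  // The oscillator wired above is now audible; move the source around:
  setSoundPosition(3, -2); // a few units to the listener's right and ahead of it
});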
Updating Position Each Frame
// In your game loop, update the listener position to follow the player
const SCALE = 100; // world pixels per audio-space unit ("meter"); example value, tune for your game

function updateAudioListener(player) {
  const { x, y, angle } = player;
  // Listener position (player's world coords mapped to audio coords)
  ctx.listener.positionX.setValueAtTime(x / SCALE, ctx.currentTime);
  ctx.listener.positionY.setValueAtTime(0, ctx.currentTime);
  ctx.listener.positionZ.setValueAtTime(y / SCALE, ctx.currentTime);
  // Listener orientation (which direction they're facing)
  ctx.listener.forwardX.setValueAtTime(Math.sin(angle), ctx.currentTime);
  ctx.listener.forwardZ.setValueAtTime(Math.cos(angle), ctx.currentTime);
  ctx.listener.upY.setValueAtTime(1, ctx.currentTime);
}

// For a sound emitter (e.g., enemy footsteps):
function updateEnemySoundPosition(enemy, panner) {
  panner.positionX.setValueAtTime(enemy.x / SCALE, ctx.currentTime);
  panner.positionZ.setValueAtTime(enemy.y / SCALE, ctx.currentTime);
}
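For short effects like gunshots or footsteps, a common pattern is to create a fresh BufferSource and PannerNode per sound rather than repositioning a long-lived one. A minimal sketch, assuming `buffer` is an already-decoded AudioBuffer and reusing the SCALE constant from above:
// Fire-and-forget positional one-shot.
function playSoundAt(buffer, worldX, worldY, volume = 1) {
  const source = ctx.createBufferSource();
  const panner = ctx.createPanner();
  const gain = ctx.createGain();

  source.buffer = buffer;
  panner.panningModel = 'HRTF';
  panner.distanceModel = 'inverse';
  panner.positionX.value = worldX / SCALE;
  panner.positionZ.value = worldY / SCALE;
  gain.gain.value = volume;

  source.connect(panner).connect(gain).connect(ctx.destination);
  source.start();
  source.onended = () => {
    // Tidy up so the one-shot nodes can be garbage-collected
    source.disconnect();
    panner.disconnect();
    gain.disconnect();
  };
}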
Reverb: ConvolverNode
Reverb simulates sound bouncing off walls. A ConvolverNode accepts an Impulse Response (IR)
audio buffer — a recording of a real or simulated room — and applies it as a convolution filter.
// Load an Impulse Response from a URL
async function createReverb(audioCtx, irUrl) {
  const response = await fetch(irUrl);
  const arrayBuffer = await response.arrayBuffer();
  const irBuffer = await audioCtx.decodeAudioData(arrayBuffer);
  const convolver = audioCtx.createConvolver();
  convolver.buffer = irBuffer;
  return convolver;
}

// Cross-fade between dry and wet (reverb) signal
// This is called a "wet/dry mix"
function setReverbMix(dryGain, wetGain, mix) {
  // mix: 0 = fully dry, 1 = fully wet (reverb)
  const now = dryGain.context.currentTime; // get the context from the node itself
  dryGain.gain.setValueAtTime(1 - mix, now);
  wetGain.gain.setValueAtTime(mix, now);
}

// Graph: Source → (Dry Gain → Dest)
//              └→ (Convolver → Wet Gain → Dest)
const convolver = await createReverb(ctx, '/audio/hall-ir.wav');
const dryGain = ctx.createGain();
const wetGain = ctx.createGain();
// `source` is any sound-producing node (a BufferSource, oscillator, panner output, etc.)
source.connect(dryGain); dryGain.connect(ctx.destination);
source.connect(convolver); convolver.connect(wetGain);
wetGain.connect(ctx.destination);

// In a dungeon: mostly reverb
setReverbMix(dryGain, wetGain, 0.7);
// Outdoors: mostly dry
setReverbMix(dryGain, wetGain, 0.05);
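No impulse response file handy? A decaying burst of white noise makes a passable generic-room IR for prototyping. This is a rough stand-in rather than a measured response; createSyntheticIR and its parameters are just example choices.
// Synthetic impulse response: stereo white noise with a decaying envelope.
// duration ≈ perceived room size, decay ≈ how fast reflections die off.
function createSyntheticIR(audioCtx, duration = 2, decay = 3) {
  const rate = audioCtx.sampleRate;
  const length = Math.floor(rate * duration);
  const buffer = audioCtx.createBuffer(2, length, rate);
  for (let channel = 0; channel < 2; channel++) {
    const data = buffer.getChannelData(channel);
    for (let i = 0; i < length; i++) {
      data[i] = (Math.random() * 2 - 1) * Math.pow(1 - i / length, decay);
    }
  }
  return buffer;
}

// Drop it in where a loaded IR would go:
// convolver.buffer = createSyntheticIR(ctx);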
Dynamic Music: Layered Stems
Adaptive music responds to gameplay. A technique used widely in AAA games is stem mixing: the soundtrack is pre-composed as separate "stems" (percussion, bass, melody, tension), all perfectly synchronized. You fade stems in and out to match the game state.
Synthesized Stem Mixer
Click to start. Adjust stems to hear adaptive music. Use the preset buttons to simulate game states.
Implementing Stem Transitions
The key to smooth stem transitions is using linearRampToValueAtTime() instead of setting
gain instantly — this prevents audio "pops".
class AdaptiveMusicSystem {
  constructor(audioCtx) {
    this.ctx = audioCtx;
    this.stems = {}; // { name: { source, gain } }
  }

  // Load a looping stem from a URL and wire it up (but don't start it yet)
  async addStem(name, url, initialVolume = 0) {
    const response = await fetch(url);
    const buffer = await this.ctx.decodeAudioData(await response.arrayBuffer());
    const source = this.ctx.createBufferSource();
    const gain = this.ctx.createGain();
    source.buffer = buffer;
    source.loop = true;
    gain.gain.setValueAtTime(initialVolume, this.ctx.currentTime);
    source.connect(gain);
    gain.connect(this.ctx.destination);
    this.stems[name] = { source, gain };
  }

  // Start every loaded stem at the same scheduled time → perfect sync
  // (call this once, after all addStem() calls have resolved)
  start() {
    const startTime = this.ctx.currentTime + 0.1; // small offset so every start() lands on the same tick
    for (const { source } of Object.values(this.stems)) {
      source.start(startTime);
    }
  }

  // Smoothly fade a stem to a target volume over `duration` seconds
  setStemVolume(name, targetVol, duration = 1.5) {
    const { gain } = this.stems[name];
    const now = this.ctx.currentTime;
    gain.gain.cancelScheduledValues(now);
    gain.gain.setValueAtTime(gain.gain.value, now); // anchor the ramp at the current value
    gain.gain.linearRampToValueAtTime(targetVol, now + duration);
  }

  // Apply a preset (game state)
  applyState(state) {
    const presets = {
      explore: { drums: 0.0, bass: 0.4, melody: 0.8, tension: 0.0 },
      combat:  { drums: 0.9, bass: 0.7, melody: 0.3, tension: 0.2 },
      boss:    { drums: 1.0, bass: 0.8, melody: 0.0, tension: 1.0 },
      victory: { drums: 0.5, bass: 0.3, melody: 1.0, tension: 0.0 }
    };
    const target = presets[state];
    for (const [name, vol] of Object.entries(target)) {
      this.setStemVolume(name, vol, 2.0); // 2-second crossfade
    }
  }
}
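Usage might look like this (the stem file paths are placeholders): load every stem first, start them together, then switch presets from your game logic.
const music = new AdaptiveMusicSystem(ctx);
await Promise.all([
  music.addStem('drums', '/audio/stems/drums.ogg'),
  music.addStem('bass', '/audio/stems/bass.ogg'),
  music.addStem('melody', '/audio/stems/melody.ogg', 0.8), // melody audible from the start
  music.addStem('tension', '/audio/stems/tension.ogg')
]);
music.start();               // all stems begin looping in sync
music.applyState('explore'); // later: music.applyState('combat'), etc.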