 

How to write a web-based music visualizer?

I'm trying to find the best approach to build a music visualizer to run in a browser over the web. Unity is an option, but I'll need to build a custom audio import/analysis plugin to get the end user's sound output. Quartz does what I need but only runs on Mac/Safari. WebGL seems not ready. Raphael is mainly 2D, and there's still the issue of getting the user's sound... any ideas? Has anyone done this before?

asked Jun 22 '10 by nico

4 Answers

Making something audio reactive is pretty simple. Here's an open source site with lots of audio-reactive examples.

As for how to do it, you basically use the Web Audio API to stream the music and its AnalyserNode to get audio data out.

"use strict";
const ctx = document.querySelector("canvas").getContext("2d");

ctx.fillText("click to start", 100, 75);
ctx.canvas.addEventListener('click', start);  

function start() {
  ctx.canvas.removeEventListener('click', start);
  // make a Web Audio Context
  const context = new AudioContext();
  const analyser = context.createAnalyser();

  // Make a buffer to receive the audio data
  const numPoints = analyser.frequencyBinCount;
  const audioDataArray = new Uint8Array(numPoints);

  function render() {
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);

    // get the current audio data
    analyser.getByteFrequencyData(audioDataArray);

    const width = ctx.canvas.width;
    const height = ctx.canvas.height;
    const size = 5;

    // draw a point every size pixels
    for (let x = 0; x < width; x += size) {
      // compute the audio data for this point
      const ndx = x * numPoints / width | 0;
      // get the audio data and make it go from 0 to 1
      const audioValue = audioDataArray[ndx] / 255;
      // draw a rect size by size big
      const y = audioValue * height;
      ctx.fillRect(x, y, size, size);
    }
    requestAnimationFrame(render);
  }
  requestAnimationFrame(render);

  // Make an audio element
  const audio = new Audio();
  audio.loop = true;
  audio.autoplay = true;

  // This line is only needed if the music you are trying to play is on a
  // different server than the page trying to play it.
  // It asks the server for permission to use the music. If the server says "no"
  // then you will not be able to play the music.
  // Note: if you are using music from the same domain
  // **YOU MUST REMOVE THIS LINE** unless your server also grants CORS permission.
  audio.crossOrigin = "anonymous";

  // call `handleCanplay` when the music can be played
  audio.addEventListener('canplay', handleCanplay);
  audio.src = "https://twgljs.org/examples/sounds/DOCTOR%20VOX%20-%20Level%20Up.mp3";
  audio.load();


  function handleCanplay() {
    // connect the audio element to the analyser node and the analyser node
    // to the main Web Audio context
    const source = context.createMediaElementSource(audio);
    source.connect(analyser);
    analyser.connect(context.destination);
  }
}
canvas { border: 1px solid black; display: block; }
<canvas></canvas>

Then it's just up to you to draw something creative.

Note some troubles you'll likely run into:

  1. At this point in time (2017/1/3) neither Android Chrome nor iOS Safari supports analysing streaming audio data. Instead you have to load the entire song (see the sketch after this list). Here's a library that tries to abstract that a little.

  2. On mobile you cannot automatically play audio. You must start the audio inside an event handler for a user input event like 'click' or 'touchstart'.

  3. As pointed out in the sample, you can only analyse audio if the source is either from the same domain OR you ask for CORS permission and the server grants it. AFAIK only Soundcloud gives permission, and it's on a per-song basis: each artist's per-song settings determine whether or not audio analysis is allowed for a particular song.

    To try to explain this part

    The default is you have permission to access all data from the same domain but no permission from other domains.

    When you add

    audio.crossOrigin = "anonymous";
    

    That basically says "ask the server for permission for user 'anonymous'". The server can grant it or not; that's up to the server. This includes servers on the same domain, which means if you're going to request a song from the same domain you need to either (a) remove the line above or (b) configure your server to grant CORS permission. Most servers by default do not grant CORS permission, so if you add that line and the server does not grant permission, trying to analyse the audio will fail even when the song is on the same domain.
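
For point 1, here is a minimal sketch of the "load the entire song" fallback. The URL song.mp3 is a placeholder, this assumes a browser where decodeAudioData returns a promise, and the drawing code from the example above works unchanged.

// sketch only: decode the whole file up front instead of streaming it
const context = new AudioContext();
const analyser = context.createAnalyser();
const audioDataArray = new Uint8Array(analyser.frequencyBinCount);

fetch("song.mp3")                          // placeholder URL, assumed same-origin
  .then(response => response.arrayBuffer())
  .then(arrayBuffer => context.decodeAudioData(arrayBuffer))
  .then(audioBuffer => {
    // play the decoded buffer through the analyser
    const source = context.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(analyser);
    analyser.connect(context.destination);
    source.start();                        // on mobile, trigger this from a user gesture
  });

function render() {
  // audioDataArray updates the same way as in the streaming example above
  analyser.getByteFrequencyData(audioDataArray);
  requestAnimationFrame(render);
}
requestAnimationFrame(render);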


music: DOCTOR VOX - Level Up

answered by gman

By WebGL being "not ready", I'm assuming that you're referring to browser penetration (it's only supported in WebKit and Firefox at the moment).

Other than that, equalisers are definitely possible using HTML5 audio and WebGL. A guy called David Humphrey has blogged about making different music visualisers using WebGL and was able to create some really impressive ones; he posted videos of the visualisations on his blog.
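
As a rough illustration of the idea (a sketch only, not David's code; the song.mp3 URL is a placeholder and autoplay may require a user gesture), an AnalyserNode can drive bar heights that get rebuilt into a WebGL vertex buffer every frame:

const canvas = document.querySelector("canvas");
const gl = canvas.getContext("webgl");

// trivial shaders: positions are already in clip space, bars are a flat colour
const vs = "attribute vec2 position; void main() { gl_Position = vec4(position, 0.0, 1.0); }";
const fs = "precision mediump float; void main() { gl_FragColor = vec4(0.2, 0.8, 0.4, 1.0); }";

function compile(type, src) {
  const s = gl.createShader(type);
  gl.shaderSource(s, src);
  gl.compileShader(s);
  return s;
}
const prog = gl.createProgram();
gl.attachShader(prog, compile(gl.VERTEX_SHADER, vs));
gl.attachShader(prog, compile(gl.FRAGMENT_SHADER, fs));
gl.linkProgram(prog);
gl.useProgram(prog);

const posLoc = gl.getAttribLocation(prog, "position");
const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.enableVertexAttribArray(posLoc);
gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);

// audio side: <audio> element -> analyser -> speakers
const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();
const data = new Uint8Array(analyser.frequencyBinCount);
const audio = new Audio("song.mp3");       // placeholder, assumed same-origin
audio.addEventListener('canplay', () => {
  const source = audioCtx.createMediaElementSource(audio);
  source.connect(analyser);
  analyser.connect(audioCtx.destination);
  audio.play();                            // may need to be started from a click handler
});

const numBars = 64;
const verts = new Float32Array(numBars * 12); // 2 triangles * 3 vertices * xy per bar

function render() {
  analyser.getByteFrequencyData(data);
  for (let i = 0; i < numBars; ++i) {
    const v = data[i * data.length / numBars | 0] / 255;  // 0..1 bar height
    const x0 = i / numBars * 2 - 1;                       // clip-space extents
    const x1 = (i + 1) / numBars * 2 - 1;
    const y0 = -1;
    const y1 = v * 2 - 1;
    verts.set([x0, y0, x1, y0, x0, y1, x0, y1, x1, y0, x1, y1], i * 12);
  }
  gl.bufferData(gl.ARRAY_BUFFER, verts, gl.DYNAMIC_DRAW);
  gl.clear(gl.COLOR_BUFFER_BIT);
  gl.drawArrays(gl.TRIANGLES, 0, numBars * 6);
  requestAnimationFrame(render);
}
requestAnimationFrame(render);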

answered by Brian McKenna

I used SoundManager2 to pull the waveform data from the mp3 file. That feature requires Flash 9 so it might not be the best approach.

My waveform demo with HTML5 Canvas: http://www.momentumracer.com/electriccanvas/

and WebGL: http://www.momentumracer.com/electricwebgl/

Sources: https://github.com/pepez/Electric-Canvas
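
If anyone wants to go the same route, here is a rough sketch from memory of how the SoundManager2 waveform API is wired up; the option and property names (useWaveformData, waveformData.left) should be double-checked against the SM2 docs, and the paths are placeholders.

// requires the Flash 9 build of SoundManager2 for waveform/EQ data
soundManager.setup({
  url: '/swf/',                    // placeholder path to SM2's Flash movies
  flashVersion: 9,
  onready: function () {
    var sound = soundManager.createSound({
      id: 'song',
      url: 'song.mp3',             // placeholder URL
      useWaveformData: true,
      whileplaying: function () {
        // waveformData.left/.right are arrays of 256 samples in the -1..1 range
        drawWaveform(this.waveformData.left);
      }
    });
    sound.play();
  }
});

function drawWaveform(samples) {
  var ctx = document.querySelector('canvas').getContext('2d');
  var w = ctx.canvas.width;
  var h = ctx.canvas.height;
  ctx.clearRect(0, 0, w, h);
  for (var i = 0; i < samples.length; i++) {
    var x = i / samples.length * w;
    var y = (1 - (samples[i] + 1) / 2) * h;  // map -1..1 to canvas y
    ctx.fillRect(x, y, 2, 2);
  }
}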

answered by Petteri H

Depending on complexity you might be interested in trying out Processing (http://www.processing.org). It has really easy tools for making web-based apps, and it has tools to get the FFT and waveform of an audio file.

answered by Arjun