 

How can I draw sound data from my wav file?

First off, this is for homework, or rather a class project.

I'm having trouble understanding how to draw the sound data as a waveform on a graph in Java for a project. I have to build this assignment entirely from scratch, UI and all, so I'm essentially making a .wav file editor. The main issue is getting the sound data into the graph to be drawn; at the moment I'm just drawing a randomly generated array of values.

So far I have a mini-program running that validates the file to confirm it actually is a wav file.

I'm reading it in with a FileInputStream and validating the RIFF bytes (0-3), the file length (4-7), and the WAVE bytes (8-11). Then, positioning the index at the end of the RIFF chunk, I read the format chunk: its ID (bytes 0-3), its length (bytes 4-7), and the next 16 bytes holding all the specifications of the wave file, which I store in appropriately named variables.
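For reference, the header walk described above could look something like this minimal sketch, assuming a plain PCM file (the class and helper names here are illustrative, not a standard API):

```java
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class WavHeader {
    int numChannels, sampleRate, bitsPerSample, dataLength;

    // WAV numeric fields are little-endian, so assemble them byte by byte.
    static int readLEInt(DataInputStream in) throws IOException {
        int b0 = in.read(), b1 = in.read(), b2 = in.read(), b3 = in.read();
        return b0 | (b1 << 8) | (b2 << 16) | (b3 << 24);
    }

    static int readLEShort(DataInputStream in) throws IOException {
        int b0 = in.read(), b1 = in.read();
        return b0 | (b1 << 8);
    }

    static String readChunkId(DataInputStream in) throws IOException {
        byte[] id = new byte[4];
        in.readFully(id);
        return new String(id, "US-ASCII");
    }

    public static WavHeader parse(String path) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(path))) {
            if (!readChunkId(in).equals("RIFF")) throw new IOException("Not a RIFF file");
            readLEInt(in);                          // overall file length minus 8
            if (!readChunkId(in).equals("WAVE")) throw new IOException("Not a WAVE file");

            WavHeader h = new WavHeader();
            // Walk the chunks until we hit "data"; skip anything else.
            while (true) {
                String id = readChunkId(in);
                int len = readLEInt(in);
                if (id.equals("fmt ")) {
                    readLEShort(in);                // audio format (1 = PCM)
                    h.numChannels = readLEShort(in);
                    h.sampleRate = readLEInt(in);
                    readLEInt(in);                  // byte rate
                    readLEShort(in);                // block align
                    h.bitsPerSample = readLEShort(in);
                    in.skipBytes(len - 16);         // any extra fmt bytes
                } else if (id.equals("data")) {
                    h.dataLength = len;             // the sound data starts here
                    return h;
                } else {
                    in.skipBytes(len);              // ignore other chunks
                }
            }
        }
    }
}
```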

Once I get to the DATA chunk and read its length, everything past that point is my sound data, and that's the part I'm unsure about: how to store the sound data byte for byte, or even translate it into values related to the amplitude of the sound. I thought it would be similar to the validation, but it doesn't seem to work that way... Either that, or I've been overcomplicating something super simple, since I've been staring at this for a few days now.

Any help is appreciated, thanks.

Asked Oct 14 '12 by Kevin Heng

People also ask

How do WAV files store data?

The WAV format uses a container that holds the audio in raw, typically uncompressed "chunks", following the Resource Interchange File Format (RIFF). This is a common method Windows uses for storing audio and video files, such as AVI, but it can be used for arbitrary data as well.

What data does a WAV file contain?

Though a WAV file can contain compressed audio, the most common WAV audio format is uncompressed audio in the linear pulse-code modulation (LPCM) format. LPCM is also the standard audio coding format for audio CDs, which store two-channel LPCM audio sampled at 44,100 Hz with 16 bits per sample.

How are WAV files read?

Windows and Mac are both capable of opening WAV files. For Windows, if you double-click a WAV file, it will open using Windows Media Player. For Mac, if you double-click a WAV, it will open using iTunes or Quicktime. If you're on a system without these programs installed, then consider third-party software.


2 Answers

I'm not a Java programmer, but I know a fair bit about rendering audio so hopefully the following might be of some help...

Given that you will almost always have far more samples than available pixels, the sensible thing to do is to draw from a cached reduction, or 'summary', of the sample data. This is typically how audio editors (such as Audacity) render audio data. The most common strategy is to compute the number of samples per pixel, find the maximum and minimum samples within each block of SamplesPerPixel samples, then draw a vertical line between each max-min pair. You might want to cache this reduction, or perhaps a series of such reductions for different zoom levels; Audacity caches them to temporary files ('block files') on disk.
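To make that concrete, here is a minimal sketch of the max-min approach with Java Swing, assuming a mono short[] of 16-bit samples and at least as many samples as pixels (the class, method, and parameter names are illustrative):

```java
import java.awt.Graphics;

class WaveformRenderer {
    // Draws one vertical line per pixel column, spanning that column's
    // min and max sample. Assumes samples.length >= canvasWidth.
    static void drawWaveform(Graphics g, short[] samples,
                             int canvasWidth, int canvasHeight) {
        double samplesPerPixel = (double) samples.length / canvasWidth;
        for (int x = 0; x < canvasWidth; x++) {
            int start = (int) (x * samplesPerPixel);
            int end = Math.min((int) ((x + 1) * samplesPerPixel), samples.length);
            short min = Short.MAX_VALUE, max = Short.MIN_VALUE;
            for (int i = start; i < end; i++) {
                if (samples[i] < min) min = samples[i];
                if (samples[i] > max) max = samples[i];
            }
            // Map [-32768, 32767] to [canvasHeight, 0]; Swing's y axis grows downward.
            int yTop = (int) ((1.0 - max / 32768.0) * canvasHeight / 2.0);
            int yBottom = (int) ((1.0 - min / 32768.0) * canvasHeight / 2.0);
            g.drawLine(x, yTop, x, yBottom);
        }
    }
}
```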

The above is perhaps something of an oversimplification, however, because in reality you will want to compute the initial max-min pairs from a chunk of fixed size - say 256 samples - rather than from one of size SamplesPerPixel. Then you can compute further 'on the fly' reductions from that cached reduction. The point is that SamplesPerPixel will typically be a dynamic quantity - since the user might resize the canvas at any time (hope that makes sense...).
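A sketch of that two-level scheme, continuing from the example above (names again illustrative): build the fixed-size summary once, then collapse it into per-pixel pairs whenever the canvas is resized.

```java
class WaveformSummary {
    static final int BLOCK = 256;   // fixed reduction size, as suggested above

    // One pass over the raw samples: cache a (min, max) pair per 256-sample block.
    static short[][] build(short[] samples) {
        int blocks = (samples.length + BLOCK - 1) / BLOCK;
        short[][] summary = new short[blocks][2];   // [b][0] = min, [b][1] = max
        for (int b = 0; b < blocks; b++) {
            short min = Short.MAX_VALUE, max = Short.MIN_VALUE;
            int end = Math.min((b + 1) * BLOCK, samples.length);
            for (int i = b * BLOCK; i < end; i++) {
                if (samples[i] < min) min = samples[i];
                if (samples[i] > max) max = samples[i];
            }
            summary[b][0] = min;
            summary[b][1] = max;
        }
        return summary;
    }

    // Collapse cached blocks into one (min, max) pair per pixel column.
    // Only valid while SamplesPerPixel >= BLOCK; when zoomed in further,
    // fall back to reading the raw samples.
    static short[][] reduce(short[][] summary, int canvasWidth) {
        double blocksPerPixel = (double) summary.length / canvasWidth;
        short[][] out = new short[canvasWidth][2];
        for (int x = 0; x < canvasWidth; x++) {
            int start = (int) (x * blocksPerPixel);
            int end = Math.min((int) ((x + 1) * blocksPerPixel), summary.length);
            short min = Short.MAX_VALUE, max = Short.MIN_VALUE;
            for (int b = start; b < end; b++) {
                if (summary[b][0] < min) min = summary[b][0];
                if (summary[b][1] > max) max = summary[b][1];
            }
            out[x][0] = min;
            out[x][1] = max;
        }
        return out;
    }
}
```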

Also remember that when you are drawing to your canvas you will need to scale the sample values by the width and height of the canvas. The best way to do this (in the vertical direction, at least) is to normalize the samples, then multiply by the canvas height. 16-bit audio consists of samples in the range [-32768, 32767], so to normalize just do a floating-point division by 32768. Then reverse the sign (to flip the waveform to the canvas coordinates), add 1 (to compensate for the negative values) and multiply by half the canvas height. That's how I do it, anyway.
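That recipe, written out as a small helper (assuming the same mono 16-bit samples as above):

```java
class WaveformScaling {
    // Normalize a 16-bit sample, flip the sign for Swing's downward y axis,
    // shift into [0, 2], then scale by half the canvas height.
    static int sampleToY(short sample, int canvasHeight) {
        double normalized = sample / 32768.0;      // roughly [-1, 1]
        return (int) ((-normalized + 1.0) * canvasHeight / 2.0);
    }
}
```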

This page shows how to build a rudimentary waveform display with Java Swing. I haven't looked at it in detail, but I think it just downsamples the data rather than computing max-min pairs. This will, of course, not provide as accurate a reduction as the max-min method, but it's easier to calculate.

If you want to know how to do things properly, you should dig into the Audacity source code (be warned, however - it's fairly gnarly C++). To get a general overview you might look at 'A Fast Data Structure for Disk-Based Audio Editing', by the original author of Audacity, Dominic Mazzoni. You will need to purchase that from CMJ (Computer Music Journal), however.

Answered by ChrisM


For standard WAV files, it's actually pretty easy. Once you get past the headers, you just interpret every 16 bits as a two's complement integer. I'd recommend using a DataInputStream, since then it's as easy as calling readShort() - though note that WAV data is little-endian while readShort() reads big-endian, so you'll need to swap the bytes of each value.
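For example, a minimal sketch (assuming the stream is already positioned at the start of the data chunk, with dataLength in bytes taken from the chunk header; names are illustrative):

```java
import java.io.DataInputStream;
import java.io.IOException;

class SampleReader {
    // WAV stores samples little-endian, but readShort() reads big-endian,
    // so each value has to be byte-swapped. For stereo files the resulting
    // array interleaves left and right channel samples.
    static short[] readSamples(DataInputStream in, int dataLength) throws IOException {
        short[] samples = new short[dataLength / 2];
        for (int i = 0; i < samples.length; i++) {
            samples[i] = Short.reverseBytes(in.readShort());
        }
        return samples;
    }
}
```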

These are the amplitude values at each sample point. You will probably want to average or otherwise reduce them, because most of the time there will be far more samples than horizontal pixels; trying to plot every sample on a line graph may not be the best way.

Answered by 0xFE