A while back I made my first attempt at creating a sound visualizer in AS3. It was a pretty feeble, albeit necessary, step toward making something that looked cool and ran smoothly. Over the last couple of weeks, I have been playing around with a visualizer in my spare time, and I finally have something I think is worth another post.
This time around, I am actually loading a sound and getting its audio spectrum data, which is further than my last, graphics-test-only attempt got. The new Sound model in AS3 is a bit different from previous versions, but it is not very difficult to understand. The major differences are a result of the new object hierarchy of the Sound package, and when you think about it, they are intuitive changes. For the sake of this example, I won’t get into the nitty-gritty of all the new classes in the Sound package; I’ll keep to the ones I am using for my visualizer. I am using:
- a Sound object (the sound’s raw data)
- a SoundChannel object (a reference to the playing sound)
- the SoundMixer class (static methods that control all of the SoundChannels in use)
The first step in creating a visualization of a playing sound is to… play the sound! Instantiate a new Sound object and pass it a URLRequest object that points to the URL of the sound file you’d like to load.
_sound = new Sound(new URLRequest('./your.mp3'));
Once the audio file is loaded (at least enough to start playing), go ahead and create a SoundChannel object by calling the Sound.play() method.
_channel = _sound.play();
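If you want to be safe about timing, you can explicitly wait for the load to finish before playing. This is a minimal sketch, not my exact setup; the event wiring is standard AS3, and the file name is just a placeholder:

```actionscript
import flash.events.Event;
import flash.media.Sound;
import flash.media.SoundChannel;
import flash.net.URLRequest;

var _sound:Sound = new Sound();
var _channel:SoundChannel;

// Sound dispatches Event.COMPLETE once the file is fully loaded.
_sound.addEventListener(Event.COMPLETE, onSoundLoaded);
_sound.load(new URLRequest('./your.mp3'));

function onSoundLoaded(e:Event):void {
    _channel = _sound.play();
}
```

Since MP3s stream, you could start playback earlier, but waiting for Event.COMPLETE keeps the example simple.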
Once the sound is playing, call the SoundMixer.computeSpectrum() method to get a data snapshot (a ByteArray) of the current waveform composed of all the playing SoundChannels.
var spectrum:ByteArray = new ByteArray();
SoundMixer.computeSpectrum(spectrum);
I won’t pretend to know the ins and outs of the new ByteArray class in AS3… I actually find it a bit confusing. What I have figured out is that computeSpectrum() fills the ByteArray with 512 floating-point values: looping 256 times and calling the ByteArray.readFloat() method on the spectrum variable gives you 256 values spanning the waveform for the left channel of the playing sound, and running the exact same loop/readFloat() combo again gives you 256 values for the right channel. Once you understand how to get that data and relate it to the playing sound, visualizing the sound is pretty easy.
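In practice the read loops look something like this. This is just a sketch that collects the values into arrays (in a real visualizer you would probably draw them directly as you read them):

```actionscript
import flash.media.SoundMixer;
import flash.utils.ByteArray;

// Take a snapshot of the currently playing waveform.
var spectrum:ByteArray = new ByteArray();
SoundMixer.computeSpectrum(spectrum);

var left:Array = [];
var right:Array = [];
var i:int;

// The first 256 floats are the left channel...
for (i = 0; i < 256; i++) {
    left.push(spectrum.readFloat());
}
// ...and the next 256 are the right channel.
// Each value falls between -1 and 1.
for (i = 0; i < 256; i++) {
    right.push(spectrum.readFloat());
}
```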
In my example, I am doing some extras with the graphics of my visualizer. I am drawing each channel separately, and I am applying some bitmap-drawing techniques to give each channel’s line its own trailing effect. Then I am drawing the bitmaps to the stage to give the effect of trailing motion, letting the visual sound data linger on the stage. There are a lot of cooler-looking visualizers out there, but if you are looking to get into this kind of stuff, hopefully this example can help you out a bit.
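As a rough sketch of the trailing idea (not my exact drawing code), you can fade a BitmapData a little each frame before stamping the new waveform line on top of it, so older lines linger and gradually disappear:

```actionscript
import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.display.Shape;
import flash.events.Event;
import flash.geom.ColorTransform;

// Offscreen canvas that accumulates each frame's drawing.
var canvas:BitmapData = new BitmapData(stage.stageWidth, stage.stageHeight, true, 0x00000000);
addChild(new Bitmap(canvas));

// Multiplying alpha by 0.9 each frame makes old pixels fade out.
var fade:ColorTransform = new ColorTransform(1, 1, 1, 0.9);
var line:Shape = new Shape();

addEventListener(Event.ENTER_FRAME, onFrame);

function onFrame(e:Event):void {
    canvas.colorTransform(canvas.rect, fade); // fade what is already there
    // ...redraw `line` here from the latest computeSpectrum() data...
    canvas.draw(line);                        // stamp the new line on top
}
```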
Here it is: