14 October 2008
Prerequisite knowledge: General experience of building applications with Flash is recommended.
User level: Intermediate
The release of Adobe Flash Player 10 includes new functionality to play dynamically generated sound. Prior to Flash Player 10, all sounds played in Flash Player were based on unmodified data in a loaded MP3 file. Dynamic sound generation allows you to modify the data from a loaded sound (and play the resulting, modified sound). You can also generate sound data without loading an MP3 file, and play a sound based on that generated data.
This article describes two sample applications that demonstrate the use of dynamically generated sounds in Flash Player: DynamicSoundSample1, which shifts the pitch of a loaded MP3 file (see Figure 1), and DynamicSoundSample2, which plays a text string as Morse code audio (see Figure 2).
Figure 1. The DynamicSoundSample1 sample application
Figure 2. The DynamicSoundSample2 sample application
In Flash Player 10, you can call the play() method of a Sound object that has no loaded MP3 data. When you do, the Sound object periodically dispatches a sampleData event (an object of type SampleDataEvent). In the event handler for the sampleData event, you provide sound samples to be added to the audio buffer for the Sound object. You do this by writing from 2048 to 8192 pairs of floating-point numbers. Each pair of floating-point numbers represents an audio sample: one number is the amplitude (from -1.0 to 1.0) of the left channel for the sample, and the other is the amplitude of the right channel. The SampleDataEvent object has a data property, which is a ByteArray. The event handler writes the sample data to this data property, and that data is added to the audio buffer of the Sound object.
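For example, the following sketch (not part of the sample applications described below) plays a continuous 440 Hz tone from a frame script by generating samples in a sampleData event handler; it assumes the 44.1 kHz output rate used for dynamically generated sound:
var tone:Sound = new Sound();
tone.addEventListener(SampleDataEvent.SAMPLE_DATA, generateTone);
tone.play();

function generateTone(event:SampleDataEvent):void
{
    // Provide 8192 samples per event; each sample is a left/right pair of floats.
    for (var i:int = 0; i < 8192; i++)
    {
        var n:Number = Math.sin((event.position + i) * 2 * Math.PI * 440 / 44100) * 0.5;
        event.data.writeFloat(n); // left channel
        event.data.writeFloat(n); // right channel
    }
}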
Flash Player 10 also includes a new extract() method, which is added to the Sound class. This method allows you to extract sound data from a Sound object that has loaded an MP3 file. You can then alter this data (by adjusting, adding, or removing sound samples) and use it as the data for a second Sound object that uses dynamically generated data.
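For example (this snippet is not taken from the samples, and loadedSound is a placeholder name for a Sound object whose MP3 data has finished loading), extract() copies samples into a ByteArray and returns the number of samples it actually read:
var samples:ByteArray = new ByteArray();
var samplesRead:Number = loadedSound.extract(samples, 4096, 0);
// samples now contains samplesRead pairs of 32-bit floats (left and right channels).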
The DynamicSoundSample1 application loads MP3 audio data into a Sound object. It uses the extract() method to copy the audio data into a ByteArray object and then modifies that data to shift the pitch of the audio. A second Sound object plays the modified data (using the sampleData event).
The DynamicSoundSample1 file includes a Slider component and a Button component. These two components provide the user interface for the application.
The FLA file defines DynamicSoundSample1 as the document class for the application. An ActionScript 3.0 application's document class is a class that extends the MovieClip class. It defines the top-level functionality for the application. In Flash CS4, you can set the document class for a SWF application in the Publish Settings. (Select File > Publish Settings, select the Flash tab and then click the Settings button next to the Script drop-down menu.)
The DynamicSoundSample1 class is defined in the DynamicSoundSample1.as file. It declares the playButton Button component and the pitchShiftSlider Slider component, which are added to the Stage in the FLA file. The constructor function for the class loads the source MP3 file for the application, and it defines event handlers for the Button component:
sampleMP3 = new Sound();
var urlReq:URLRequest = new URLRequest("test.mp3");
sampleMP3.load(urlReq);
sampleMP3.addEventListener(Event.COMPLETE, loaded);
playButton.addEventListener(MouseEvent.CLICK, playSound);
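The loaded() handler itself is not shown in this article; a hypothetical version might simply enable the user interface once the MP3 data is available:
private function loaded(event:Event):void
{
    // Hypothetical: allow playback now that the MP3 has finished loading.
    playButton.enabled = true;
}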
The playSound() method is the event handler for the click event of the playButton Button component. It first adjusts the label of the button and then adjusts the event listeners so that the button also serves as a "Stop" button:
playButton.label = "Stop";
playButton.removeEventListener(MouseEvent.CLICK, playSound);
playButton.addEventListener(MouseEvent.CLICK, stopSound);
The method then instantiates a new SoundPitchShift object. The SoundPitchShift class is a custom class (defined in the com.adobe.flash.samples package, and included in the com/adobe/flash/samples subdirectory). The class defines a play() method, which plays audio (from an MP3 file) at an adjusted pitch. The playSound() method (of the DynamicSoundSample1 class) calls the play() method of the SoundPitchShift object, passing the loaded MP3 Sound object and the pitch shift factor as parameters. It also sets an event handler for the Slider control (which passes the pitch shift value to the SoundPitchShift object) like this:
soundPitchShift = new SoundPitchShift();
soundPitchShift.play(sampleMP3, pitchShiftFactor.value / 100);
pitchShiftFactor.addEventListener(SliderEvent.CHANGE, adjustPitch);
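The adjustPitch() handler is not excerpted in this article; a minimal sketch, assuming the SoundPitchShift class exposes its pitchShiftFactor property (or an equivalent setter), could look like this:
private function adjustPitch(event:SliderEvent):void
{
    // Assumption: pitchShiftFactor is publicly writable on SoundPitchShift.
    // Convert the slider value (a percentage) to the same scale used in playSound().
    soundPitchShift.pitchShiftFactor = event.value / 100;
}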
The play() method of the SoundPitchShift class creates a new Sound object, morphedSound. It then calls the play() method of this Sound object. Since no MP3 data has been loaded into the object, the object dispatches a sampleData event to request sound sample data:
var morphedSound:Sound = new Sound();
morphedSound.addEventListener(SampleDataEvent.SAMPLE_DATA, sampleDataHandler);
soundChannel = morphedSound.play();
The sampleDataHandler() function generates the sample data (as a byte array). It does this by calling the extract() method to extract sound sample data from the source Sound object; it then passes those bytes (as a ByteArray object) to the shiftBytes() method, which returns a modified byte array. The modified bytes are written to the data property of the SampleDataEvent object, which adds them to the output buffer of the Sound object that dispatched the sampleData event:
var bytes:ByteArray = new ByteArray();
// Extract up to 4096 samples starting at the current read position;
// extract() returns the number of samples actually read.
position += srcSound.extract(bytes, 4096, position);
event.data.writeBytes(shiftBytes(bytes));
The shiftBytes() method takes a byte array containing sound data as a parameter. It returns a modified byte array, with sound samples (each represented by two floating-point values) removed according to the pitch shift factor (the value of the pitchShiftFactor property). The shiftBytes() method uses two numbers, skipCount and skipRate, to determine how frequently to remove sound samples from the byte array:
private function shiftBytes(bytes:ByteArray):ByteArray
{
    var skipCount:Number = 0;
    var skipRate:Number = 1 + (1 / (pitchShiftFactor - 1));
    var returnBytes:ByteArray = new ByteArray();
    bytes.position = 0;
    while (bytes.bytesAvailable > 0)
    {
        skipCount++;
        if (skipCount <= skipRate)
        {
            // Copy this sample (left and right channel floats) to the output.
            returnBytes.writeFloat(bytes.readFloat());
            returnBytes.writeFloat(bytes.readFloat());
        }
        else
        {
            // Skip this sample (8 bytes: two 4-byte floats) and carry the remainder.
            bytes.position += 8;
            skipCount = skipCount - skipRate;
        }
    }
    return returnBytes;
}
The skipRate number is based on the pitch shift factor (the pitchShiftFactor property). If the factor is 2.0, skipRate is set to 2.0 and every second sound sample is removed. If the factor is 1.5 (3/2), skipRate is set to 3.0 and every third sound sample is removed. If the factor is 1.333 (4/3), skipRate is set to 4.0 and every fourth sound sample is removed. In general, skipRate works out to factor / (factor - 1), so keeping skipRate - 1 of every skipRate samples speeds playback up by exactly the pitch shift factor. Removing samples causes the pitch (frequency) of the sound to shift higher.
The soundCompleteHandler() method of the SoundPitchShift class relays the soundComplete event to the host application:
private function soundCompleteHandler(event:Event):void
{
dispatchEvent(event);
}
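The excerpt does not show where this listener is registered; presumably the play() method attaches it to the SoundChannel returned by morphedSound.play(), along these lines:
soundChannel = morphedSound.play();
soundChannel.addEventListener(Event.SOUND_COMPLETE, soundCompleteHandler);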
The soundCompleteHandler() in the DynamicSoundSample1 class handles this relayed event by adjusting the label and event listeners for the Button component:
public function soundCompleteHandler(event:Event):void
{
playButton.label = "Play MP3";
playButton.removeEventListener(MouseEvent.CLICK, stopSound);
playButton.addEventListener(MouseEvent.CLICK, playSound);
}
Also, the stopSound() method of the DynamicSoundSample1 class calls the stop() method of the SoundPitchShift class when the user clicks the Stop button:
public function stopSound(event:Event):void
{
soundPitchShift.stop();
}
The stop() method of the SoundPitchShift class calls the stop() method of the SoundChannel object controlling the playing sound. This stops the sound.
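A minimal sketch of that stop() method, assuming the SoundChannel reference is stored in the soundChannel property shown earlier, might look like this:
public function stop():void
{
    if (soundChannel != null)
    {
        soundChannel.stop();
    }
}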
The DynamicSoundSample2 application creates audio sample data in a ByteArray object. It then plays the data, using the sampleData event of a Sound object.
The DynamicSoundSample2 file includes a TextArea component and a Button component. These two components provide the user interface for the application.
As with the first sample, the FLA file defines DynamicSoundSample2 as the document class for the application (set in the Publish Settings, as described earlier).
The DynamicSoundSample2 class is defined in the DynamicSoundSample2.as file. It declares the codeTextArea TextArea component and the playButton Button component, which are added to the Stage in the FLA file. The constructor function for the class defines an event handler for the Button component:
playButton.addEventListener(MouseEvent.CLICK, playMorse);
The constructor function also instantiates a MorseCode object:
morse = new MorseCode();
The MorseCode class is a custom class (defined in the com.adobe.flash.samples package, and included in the com/adobe/flash/samples subdirectory). The class includes a playString() method, which plays a text string as Morse code sound.
The playMorse() method is the event handler for the click event of the playButton Button component. It first adjusts the label of the button and adjusts event listeners so that the button acts as a "Stop" button:
playButton.label = "Stop";
playButton.removeEventListener(MouseEvent.CLICK, playMorse);
playButton.addEventListener(MouseEvent.CLICK, stopMorse);
The method then calls the playString() method of the MorseCode object. This plays Morse code audio for a given string:
morse.playString(codeTextArea.text);
The playString() method of the MorseCode class calls the stringToCode() and codeStringToBytes() methods:
var codeString:String = stringToCode(string);
soundBytes = codeStringToBytes(codeString);
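The rest of the playString() method is not excerpted in this article; presumably it creates the codeSound Sound object, registers the addSoundBytesToSound() handler described later, and starts playback, roughly as follows:
soundBytes.position = 0;
codeSound = new Sound();
codeSound.addEventListener(SampleDataEvent.SAMPLE_DATA, addSoundBytesToSound);
soundChannel = codeSound.play();
soundChannel.addEventListener(Event.SOUND_COMPLETE, soundCompleteHandler);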
The stringToCode() method converts a text String into a string of ".", "-", and blank space characters representing Morse code. It does this by looking up each character in a static array named codes, as shown below:
public static function stringToCode(string:String):String
{
    var returnString:String = "";
    for (var i:int = 0; i < string.length; i++)
    {
        var char:String = string.charAt(i).toLowerCase();
        if (codes[char] != undefined)
        {
            // Append the Morse code for this character, followed by a gap.
            returnString += codes[char] + " ";
        }
        else
        {
            // Characters with no Morse equivalent (including spaces) become a gap.
            returnString += " ";
        }
    }
    return returnString;
}
The codes array maps characters to their Morse code equivalents. For instance, codes["x"] is defined as "-..-".
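The sample's initialization of the codes array is not shown here; one plausible sketch (createCodes() is a hypothetical helper, and only a few entries are listed) populates an Array used associatively:
private static var codes:Array = createCodes();

private static function createCodes():Array
{
    var map:Array = new Array();
    map["a"] = ".-";
    map["b"] = "-...";
    map["e"] = ".";
    map["o"] = "---";
    map["s"] = "...";
    map["x"] = "-..-";
    // ...and so on for the remaining letters, digits, and punctuation
    return map;
}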
The codeStringToBytes() method generates the sound sample data (as a byte array), based on the code string. It does this by calling two methods, sineWaveGenerator() and silenceGenerator(). The sineWaveGenerator() method returns a ByteArray object representing a sine wave. This sine wave represents a pure tone when used as sound samples by a Sound object:
private function sineWaveGenerator(length:Number):ByteArray
{
    var returnBytes:ByteArray = new ByteArray();
    // Write length * 2400 samples of a sine wave at half amplitude.
    for (var i:int = 0; i < length * 2400; i++)
    {
        var value:Number = Math.sin(i / 6) * 0.5;
        returnBytes.writeFloat(value); // left channel
        returnBytes.writeFloat(value); // right channel
    }
    return returnBytes;
}
Each sound sample is represented by two floating-point numbers—one for the left channel and one for the right channel.
The silenceGenerator() method generates a wave with constant amplitude 0, which plays back as silence:
private function silenceGenerator(length:Number):ByteArray
{
    var returnBytes:ByteArray = new ByteArray();
    // Write length * 2400 samples of zero amplitude (silence).
    for (var i:int = 0; i < length * 2400; i++)
    {
        returnBytes.writeFloat(0); // left channel
        returnBytes.writeFloat(0); // right channel
    }
    return returnBytes;
}
The codeStringToBytes() method returns the combined result, which playString() stores in the soundBytes ByteArray object.
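The codeStringToBytes() method is not reproduced in full in this article; a hedged sketch of the general approach (the exact tone and gap lengths are assumptions) writes a short tone for each dot, a longer tone for each dash, and silence for everything else:
private function codeStringToBytes(codeString:String):ByteArray
{
    var bytes:ByteArray = new ByteArray();
    for (var i:int = 0; i < codeString.length; i++)
    {
        switch (codeString.charAt(i))
        {
            case ".":
                bytes.writeBytes(sineWaveGenerator(1)); // short tone for a dot
                break;
            case "-":
                bytes.writeBytes(sineWaveGenerator(3)); // longer tone for a dash
                break;
            default:
                bytes.writeBytes(silenceGenerator(3));  // gap between letters and words
                break;
        }
        bytes.writeBytes(silenceGenerator(1));          // gap between elements
    }
    bytes.position = 0;
    return bytes;
}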
The addSoundBytesToSound() method is the event handler for the sampleData event dispatched by the codeSound Sound object. The Sound object dispatches this event to request audio data. The method adds up to 8192 audio samples to the audio buffer of the Sound object. (Each sample is two four-byte floating-point numbers.) It adds data to the audio buffer by writing a portion of the soundBytes ByteArray object to the data property of the SampleDataEvent object:
private function addSoundBytesToSound(event:SampleDataEvent):void
{
    var bytes:ByteArray = new ByteArray();
    // Read up to 8192 samples (8 bytes each) from the pre-generated sound data.
    soundBytes.readBytes(bytes, 0, Math.min(soundBytes.bytesAvailable, 8 * 8192));
    // Append the samples to the Sound object's output buffer.
    event.data.writeBytes(bytes, 0, bytes.length);
}
Note: If you do not provide ample sample data, the sound will stop playing as soon as the sound buffer becomes empty.
The soundCompleteHandler() method of the MorseCode class relays the soundComplete event to the host application:
private function soundCompleteHandler(event:Event):void
{
dispatchEvent(event);
}
The soundCompleteHandler() in the DynamicSoundSample2 class handles this relayed event by adjusting the label and event listeners for the Button component:
public function soundCompleteHandler(event:Event):void
{
playButton.label = "Play MP3";
playButton.addEventListener(MouseEvent.CLICK, playMorse);
playButton.removeEventListener(MouseEvent.CLICK, stopMorse);
}
Also, when the user clicks the Stop button, the stopMorse() method of the DynamicSoundSample2 class calls the stop() method of the MorseCode class:
private function stopMorse(event:MouseEvent):void
{
morse.stop();
}
The stop() method of the MorseCode class calls the stop() method of the SoundChannel object controlling the playing sound. This stops the sound.