Prerequisite knowledge
General experience building AIR applications with Flash Builder, Flash Professional, or other tools is suggested.

Required products
Adobe AIR
Flash Builder
Adobe Animate CC

This article discusses how to upload images obtained via the CameraRoll and CameraUI classes to a server.
The CameraRoll class lets you open an image browser so that the user can select a picture from the device media library. The CameraUI class lets you launch the device camera app so that the user can take a picture or video. In both cases the CameraRoll and CameraUI objects return the media via an event containing a media promise.
The options available for accessing the data in a media promise are different from one platform to another. For example, on Android, the media promise includes a file property that references the source image file, but on iOS this property is always null—the source file is not accessible. The technique described in this article works on all platforms.

Getting the media promise

The first step is to request the image from either the CameraRoll or the CameraUI object. In both cases, you create the appropriate object, set up some event listeners, and then call the function that asks for the image. The runtime then returns the image in an event containing the media promise after the user has either chosen a picture from the device media library or taken a new picture with the camera.
The following code example sets up the event listeners necessary to handle the events that can be dispatched by a CameraRoll object and then calls the browseForImage() method. When the user selects an image, the runtime calls an event handler function named imageSelected.
//declare cameraRoll where it won't go out of scope
var cameraRoll:CameraRoll = new CameraRoll();

if( CameraRoll.supportsBrowseForImage )
    cameraRoll.addEventListener( MediaEvent.SELECT, imageSelected );
    cameraRoll.addEventListener( Event.CANCEL, browseCanceled );
    cameraRoll.addEventListener( ErrorEvent.ERROR, mediaError );
    trace( "Image browsing is not supported on this device." );
The code for requesting an image from the CameraUI object is very similar:
private var cameraUI:CameraUI = new CameraUI();

if( CameraUI.isSupported )
    trace( "Initializing..." );
    cameraUI.addEventListener( MediaEvent.COMPLETE, imageSelected );
    cameraUI.addEventListener( Event.CANCEL, browseCanceled );
    cameraUI.addEventListener( ErrorEvent.ERROR, mediaError );
    cameraUI.launch( MediaType.IMAGE );
    trace( "CameraUI is not supported." );
One difference to note is that the CameraRoll class dispatches an event of type MediaEvent.SELECT, while the CameraUI class dispatches an event of type MediaEvent.COMPLETE.
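Both listings also register browseCanceled and mediaError handlers that are not shown. Minimal implementations, assuming you simply want to log the outcome, might look like this:

private function browseCanceled( event:Event ):void
    trace( "Media selection canceled by the user." );

private function mediaError( event:ErrorEvent ):void
    trace( "Media error: " + event.text );

In a real application you would typically also re-enable any UI that was disabled while the browser or camera was open.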

Getting the raw image data

You can get the raw image data from the media promise as a byte array. A media promise object can provide its data in a few different ways, however, so it takes a bit of care to create a robust, cross-platform solution. On Android, the media promise dispatched by CameraRoll and CameraUI includes a File object referencing the image file. On iOS, however, the File object is not available, so code that opens the file directly will only work on Android.
If you just want to display an image, you can always use the Loader.loadFilePromise() method, which loads and decodes the image as a display object. (MediaPromise implements the IFilePromise interface, so a media promise can be passed to this method directly.) But to upload the image, you need either a File object or a ByteArray object containing the image data, not a display object. You can use the file property of the MediaPromise object when it is available, but when a file property is not available, you can access the image data by reading the media promise data source.
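For the display-only case, a minimal sketch might look like the following. It assumes the handler runs in a display object container (such as the document class); the names imageSelectedForDisplay and onImagePromiseLoaded are placeholders, not part of the CameraRoll API:

```actionscript
import flash.display.Loader;

private var mediaPromise:MediaPromise;

private function imageSelectedForDisplay( event:MediaEvent ):void
    //keep a reference so the promise is not garbage collected mid-load
    mediaPromise =;
    var loader:Loader = new Loader();
    loader.contentLoaderInfo.addEventListener( Event.COMPLETE, onImagePromiseLoaded );
    //loadFilePromise() accepts the MediaPromise because it implements IFilePromise
    loader.loadFilePromise( mediaPromise );

private function onImagePromiseLoaded( event:Event ):void
    //the decoded image is the Loader's content; add it to the display list
    addChild( );

Because the image is decoded for display, this path does not give you the original encoded bytes; for uploading, read the data source as described next.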
A media promise object provides access to a data source that can be synchronous or asynchronous. In the synchronous case, you can open the data source and read the data immediately. But in the asynchronous case, you must wait for progress or complete events before the data is available.
The following code example checks to see whether the media promise data source is synchronous or asynchronous (using the isAsync property). In the asynchronous case, the example adds an event listener. Because you don't know what type of object the actual data source is, you have to cast it to the appropriate interface using the "as" operator. Cast the object to the IDataInput interface to read data from the data source. Cast it to the IEventDispatcher interface to access the addEventListener() method of an asynchronous data source.
private var dataSource:IDataInput;

private function imageSelected( event:MediaEvent ):void
    trace( "Media selected..." );
    var imagePromise:MediaPromise =;
    dataSource =;
    if( imagePromise.isAsync )
        trace( "Asynchronous media promise." );
        var eventSource:IEventDispatcher = dataSource as IEventDispatcher;
        eventSource.addEventListener( Event.COMPLETE, onMediaLoaded );
        trace( "Synchronous media promise." );

private function onMediaLoaded( event:Event ):void
    trace( "Media load complete" );

private function readMediaData():void
    //do something with the data

Uploading the image

The easiest way to upload the image is to use FileReference.upload(). Yes, I know I just told you not to rely on the existence of a file. However, you can create a temporary file yourself if you don't get one from the media promise directly. Another benefit to using FileReference.upload() is that it performs a multipart-form-style upload. Many server-side upload scripts expect this type of upload since it is commonly used by HTML forms that allow users to upload files.
The other option is to use the URLLoader class to upload the byte array directly. Performing an upload as a raw byte array is also fairly easy. If your server script expects a multipart form upload, however, you will have to write the code that creates the request into the proper format. Other than FileReference.upload(), there aren't any ActionScript APIs that help you directly. There are third-party libraries that you might be able to use, such as Eugene Zatepyakin's open source MultipartURLLoader class.
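If you do need a multipart upload without FileReference, the request body can be assembled by hand. The following sketch builds a minimal multipart/form-data POST with URLLoader; the field name "Filedata" and the boundary string are arbitrary choices for illustration, not requirements of the format (beyond the boundary not appearing in the data):

import flash.utils.ByteArray;

private function uploadMultipart( imageBytes:ByteArray, url:String, fileName:String ):void
    var boundary:String = "----AIRUploadBoundary7MA4YWxk";
    var body:ByteArray = new ByteArray();
    //opening boundary and part headers
    body.writeUTFBytes( "--" + boundary + "\r\n" +
        "Content-Disposition: form-data; name=\"Filedata\"; filename=\"" + fileName + "\"\r\n" +
        "Content-Type: application/octet-stream\r\n\r\n" );
    //the raw image data
    body.writeBytes( imageBytes );
    //closing boundary
    body.writeUTFBytes( "\r\n--" + boundary + "--\r\n" );

    var request:URLRequest = new URLRequest( url );
    request.method = URLRequestMethod.POST;
    request.contentType = "multipart/form-data; boundary=" + boundary; = body;

    var loader:URLLoader = new URLLoader();
    loader.addEventListener( Event.COMPLETE, onUploadComplete );
    loader.addEventListener( IOErrorEvent.IO_ERROR, onUploadError );
    loader.load( request );

A server script that handles uploads from an HTML form's file input should accept this request unchanged.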
The following example illustrates how to create and upload a temporary file: 
private var tempDir:File;

private function readMediaData():void
    var imageBytes:ByteArray = new ByteArray();
    dataSource.readBytes( imageBytes );
    tempDir = File.createTempDirectory();
    var now:Date = new Date();
    var filename:String = "IMG" + now.fullYear + now.month + +
        now.hours + now.minutes + now.seconds + ".jpg";
    var temp:File = tempDir.resolvePath( filename );
    var stream:FileStream = new FileStream(); temp, FileMode.WRITE );
    stream.writeBytes( imageBytes );
    temp.addEventListener( Event.COMPLETE, uploadComplete );
    temp.addEventListener( IOErrorEvent.IO_ERROR, ioError );
        temp.upload( new URLRequest( serverURL ) );
    catch( e:Error )
        trace( e );
        cameraUI.launch( MediaType.IMAGE );

private function removeTempDir():void
    tempDir.deleteDirectory( true );
    tempDir = null;
You can use the PHP script example given in the ActionScript 3 Developer's Guide to test the upload function. (Remember to increase the maximum file size defined in the script to accommodate the images produced by your device camera.)
If you have control of the server-side script that accepts the image upload, you can upload the image bytes directly without creating a temporary file. On iOS, this can save some memory since there are potentially fewer copies of the image data in memory than otherwise. But as with any optimization, you should evaluate whether the expected gains are worth the effort.
The following function uploads an image as a byte array:
private var loader:URLLoader;

public function upload( data:ByteArray, destination:String, fileName:String = null ):void
    if( fileName == null ) //Make a name with the correct file type
        var type:String = sniffFileType( data );
        var now:Date = new Date();
        fileName = "IMG" + now.fullYear + now.month + +
            now.hours + now.minutes + now.seconds + "." + type;
    loader = new URLLoader();
    loader.dataFormat = URLLoaderDataFormat.BINARY;
    var urlString:String = destination + "?file=" + fileName;
    var request:URLRequest = new URLRequest( urlString ); = data;
    request.method = URLRequestMethod.POST;
    request.contentType = "application/octet-stream";
    loader.addEventListener( Event.COMPLETE, onUploadComplete );
    loader.addEventListener( IOErrorEvent.IO_ERROR, onUploadError );
    loader.load( request );
You can find the code for this function in the Uploader class in the example files. You can test the function with the following simple PHP script:
if ( isset( $GLOBALS["HTTP_RAW_POST_DATA"] ) )
    $flux = $GLOBALS["HTTP_RAW_POST_DATA"];
    $fp = fopen( './images/' . $_GET['file'], 'wb' );
    fwrite( $fp, $flux );
    fclose( $fp );

Where to go from here

This article covers how to get the image data from a media promise object and a couple of easy ways to upload that data to a server. In the wild, your upload function will need to add headers and parameters to conform to a particular photo service's API. You can find more information about these network operations, as well as about the CameraRoll and CameraUI classes, in the ActionScript 3 Developer's Guide and the ActionScript 3 Reference.