Requirements

Prerequisite knowledge
Familiarity with the basic concepts of developing for Adobe AIR and mobile devices.

Required products
Adobe AIR

User level
Intermediate

Adobe AIR has evolved beyond its original goal of being a platform for desktop applications. It now supports stand-alone application development across mobile, desktop, and digital home devices. AIR is an attractive development platform in part because of this broad reach. At the same time, each of these environments places unique demands on mobile application development and design.
 
For example, mobile applications are frequently run for short periods of time. They need a UI that is usable on small screens, yet can scale up to tablets and support different screen orientations. They must work with touch input, and integrate with hardware and software facilities that are unique to this class of devices. They must also take into account the memory and graphics models of mobile devices.
 
This article describes the features and design approaches that AIR supports to enable mobile application development. The features and approaches described will help you develop applications that can run on Android, BlackBerry Tablet OS, and iOS devices, and on both smartphones and tablets.
 

 
Screens

Perhaps the first and most obvious consideration when targeting a mobile device is the screen. It is relatively small, both physically and in the number of pixels it can display. It also has a high density (pixels per inch), and various devices feature different combinations of densities and dimensions. Mobile devices can also be held in either landscape or portrait orientation.
 
To operate across this wide variability in size and density, AIR supports the following key APIs:
 
  • Stage.stageWidth, Stage.stageHeight: These two properties provide the actual screen dimensions at runtime. Note that these values can change as the application enters or exits full-screen mode, and if the screen rotates. (More on rotation below.)
  • Capabilities.screenDPI: This provides the number of pixels per inch on the screen.
By combining the information provided by these properties, an application can adapt its display to a wide range of screens—even to sizes and densities not anticipated when it was written.
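For example, here is a minimal sketch of density-aware layout. It assumes a myButton display object and the common practice of disabling automatic Stage scaling; the names and measurements are illustrative only:

    import flash.display.StageAlign;
    import flash.display.StageScaleMode;
    import flash.system.Capabilities;

    // Take control of layout: no automatic scaling, origin at top-left
    stage.scaleMode = StageScaleMode.NO_SCALE;
    stage.align = StageAlign.TOP_LEFT;

    // Convert a physical measurement (inches) to pixels using the reported density
    function inchesToPixels(inches:Number):int {
        return int(inches * Capabilities.screenDPI);
    }

    // Size a touch target to roughly half an inch, whatever the device
    myButton.width = inchesToPixels(0.5);
    myButton.height = inchesToPixels(0.5);
    myButton.x = stage.stageWidth - myButton.width; // anchor to the right edge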
 
Note: If you've built desktop apps on AIR, be aware that there's only one Stage for mobile applications, and the NativeWindow class is inoperable. By inoperable I mean that the class can be referenced and instantiated, but doing so has no effect. This makes it possible to write shared code that can operate in both environments. To check whether NativeWindow is available, query NativeWindow.isSupported.
 
Mobile applications need not support screen rotation, but should at least consider that not all mobile devices default to portrait (height greater than width) displays. Applications that do not want to be aware of screen rotations can opt out entirely by setting <autoOrients> to false in their application descriptor. Applications that wish to handle rotation can opt in by setting <autoOrients> to true and then listening for StageOrientationEvent.ORIENTATION_CHANGING and ORIENTATION_CHANGE events on the Stage. Note that not all mobile platforms dispatch the ORIENTATION_CHANGING event, but they all dispatch ORIENTATION_CHANGE.
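As a sketch, an application that has opted in might relayout itself on each rotation like this (layoutScreen() is a hypothetical routine):

    import flash.events.StageOrientationEvent;

    // Requires <autoOrients>true</autoOrients> in the application descriptor
    stage.addEventListener(StageOrientationEvent.ORIENTATION_CHANGE, onOrientationChange);

    function onOrientationChange(event:StageOrientationEvent):void {
        // stageWidth and stageHeight reflect the new orientation by this point
        layoutScreen(stage.stageWidth, stage.stageHeight);
    }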
 
It's also worth noting that applications are not required to use the built-in auto-orientation feature to handle screen rotation. The built-in events are best if you wish to match system behavior, however. For example, on some devices with slide-out physical keyboards, the system orientation will change to align with the keyboard even if the device itself hasn't been physically rotated. For applications expecting text input, it probably makes sense to reorient in this situation. For other applications, such as games, it may be desirable to leave auto-orientation off and instead monitor accelerometer events to determine the device's physical orientation. I cover accelerometers later in this article.
 

 
Touch input

Once an application has been drawn to the screen, it typically is ready for some input from the user. For a mobile application, this means accepting touch input.
 
AIR automatically maps simple, single-finger gestures—like a single-finger tap on a button—to the corresponding mouse events. This makes it possible to write shared code that can operate in a reasonable fashion across both mobile and desktop platforms.
 
For more complex interactions, you'll want to take advantage of multitouch input. AIR for mobile provides the following key APIs for multitouch:
 
  • Multitouch: This controller class lets the application determine which of the touch and gesture events are available, and select which input mode to use.
  • TouchEvent: Events of this type are received by an application when processing raw touch events.
  • GestureEvent, PressAndTapGestureEvent, TransformGestureEvent: These events are received by an application when processing gestures.
For applications that handle the standard gesture events of the underlying platform—for example, pinching or spreading two fingers to zoom out or in—set Multitouch.inputMode to MultitouchInputMode.GESTURE. The system synthesizes multiple touch points into gestures and delivers a gesture event for each one. For example, a zoom gesture is dispatched as a TransformGestureEvent with type TransformGestureEvent.GESTURE_ZOOM.
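For instance, a minimal sketch of gesture-based zooming, assuming an image display object already on the stage:

    import flash.events.TransformGestureEvent;
    import flash.ui.Multitouch;
    import flash.ui.MultitouchInputMode;

    Multitouch.inputMode = MultitouchInputMode.GESTURE;

    image.addEventListener(TransformGestureEvent.GESTURE_ZOOM, onZoom);

    function onZoom(event:TransformGestureEvent):void {
        // scaleX and scaleY report the incremental scale since the last event
        image.scaleX *= event.scaleX;
        image.scaleY *= event.scaleY;
    }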
 
Applications can also opt to receive raw touch events by setting Multitouch.inputMode to MultitouchInputMode.TOUCH_POINT. The system will dispatch a series of events for each touch, indicating when the touch point begins, how it moves over time, and when it ends. Furthermore, multiple touch points can occur simultaneously. It's up to the application to synthesize this event stream into something meaningful.
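A sketch of the raw touch-point mode; each touchPointID identifies one finger across its begin, move, and end events:

    import flash.events.TouchEvent;
    import flash.ui.Multitouch;
    import flash.ui.MultitouchInputMode;

    Multitouch.inputMode = MultitouchInputMode.TOUCH_POINT;

    stage.addEventListener(TouchEvent.TOUCH_BEGIN, onTouch);
    stage.addEventListener(TouchEvent.TOUCH_MOVE, onTouch);
    stage.addEventListener(TouchEvent.TOUCH_END, onTouch);

    function onTouch(event:TouchEvent):void {
        // Multiple touch points arrive interleaved; track them by touchPointID
        trace(event.type, event.touchPointID, event.stageX, event.stageY);
    }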
 

 
Text input

Mobile devices with soft keyboards (that is, keyboards displayed on the screen, rather than with physical keys) require additional considerations. Not all mobile devices have a soft keyboard, but such devices are becoming more prevalent, so you should ensure that your application works well with them.
 
When visible, soft keyboards necessarily consume some of the available screen real estate. To accommodate this, AIR will, by default, pan the Stage such that both the text input control and the keyboard remain visible. When the Stage is panned in this scenario, it is typically pushed up, so that the topmost part of the Stage is clipped by the top of the screen and is no longer visible.
 
Applications can disable this behavior and implement their own logic to accommodate the soft keyboard. This behavior is controlled by the <softKeyboardBehavior> setting in the application descriptor. The default value is pan; to implement your own logic, set it to none.
 
When the default pan behavior is disabled and the soft keyboard is activated or deactivated, AIR will report the area of the Stage covered by the keyboard via Stage.softKeyboardRect. Applications should listen for the SoftKeyboardEvent to be notified when this value changes, and then adjust their layout accordingly. (SoftKeyboardEvent is dispatched for both soft keyboard behaviors.)
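A minimal sketch of keyboard-aware layout, assuming <softKeyboardBehavior> is set to none and an inputPanel container holds the text controls (both names are illustrative):

    import flash.events.SoftKeyboardEvent;
    import flash.geom.Rectangle;

    stage.addEventListener(SoftKeyboardEvent.SOFT_KEYBOARD_ACTIVATE, onKeyboardChange);
    stage.addEventListener(SoftKeyboardEvent.SOFT_KEYBOARD_DEACTIVATE, onKeyboardChange);

    function onKeyboardChange(event:SoftKeyboardEvent):void {
        // softKeyboardRect is the Stage area covered by the keyboard (height 0 when hidden)
        var kb:Rectangle = stage.softKeyboardRect;
        inputPanel.y = stage.stageHeight - kb.height - inputPanel.height;
    }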
 
Applications typically do not need to worry about activating the soft keyboard, as this happens automatically when a text field receives the focus. Applications can also request a soft keyboard to be shown for any InteractiveObject that receives focus by setting InteractiveObject.needsSoftKeyboard, and ask that the keyboard be displayed immediately via InteractiveObject.requestSoftKeyboard(). These APIs have no effect on devices that do not use soft keyboards.
 

 
Sensors

Mobile device users aren't just accustomed to interacting with their applications through multitouch screens—they also expect apps to know where they are and to react to the physical orientation and movement of the device. AIR supports this through two key APIs:
 
  • Geolocation: This dispatches events giving the device's geographic position (latitude and longitude) as well as movement (heading, speed).
  • Accelerometer: This dispatches events reporting the current force being applied to the device along the x, y, and z axes.
For some applications, geolocation is inherent to the application's operation—for example, an application that finds the closest ATM. Even more applications can use this information to enhance the user's experience. For example, a voice memo app might record where you recorded each memo to provide even more context during playback.
 
As I mentioned earlier, accelerometer input can be useful if you want to know the actual orientation of the device, and not just its logical orientation. Accelerometer data can also turn the device itself into a controller. Many applications take advantage of this, using the tilting or twisting of the device to control the application itself.
 
Both of these sensor APIs permit the caller to set a requested update interval; that is, the requested rate at which updates to the location and acceleration will be dispatched to any listeners. Note that neither one guarantees this rate of updates, however. The actual update rate depends on a variety of factors, including the underlying hardware.
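Both APIs follow the same pattern, sketched below; the intervals shown are illustrative, and the isSupported checks keep the code safe on hardware without these sensors:

    import flash.events.AccelerometerEvent;
    import flash.events.GeolocationEvent;
    import flash.sensors.Accelerometer;
    import flash.sensors.Geolocation;

    if (Geolocation.isSupported) {
        var geo:Geolocation = new Geolocation();
        geo.setRequestedUpdateInterval(10000); // every 10 s; a hint, not a guarantee
        geo.addEventListener(GeolocationEvent.UPDATE, onLocation);
    }

    function onLocation(event:GeolocationEvent):void {
        trace("lat:", event.latitude, "lon:", event.longitude, "speed:", event.speed);
    }

    if (Accelerometer.isSupported) {
        var acc:Accelerometer = new Accelerometer();
        acc.setRequestedUpdateInterval(100); // every 100 ms; also just a request
        acc.addEventListener(AccelerometerEvent.UPDATE, onAcceleration);
    }

    function onAcceleration(event:AccelerometerEvent):void {
        trace("g:", event.accelerationX, event.accelerationY, event.accelerationZ);
    }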
 

 
Web views

No modern application runtime would be complete without support for HTML content, and AIR for mobile provides that via the StageWebView API. StageWebView provides a way for AIR apps to access the underlying, built-in HTML rendering capability of the target platform. Note that because StageWebView uses the platform HTML control, it does not guarantee consistent rendering across platforms. What it does guarantee is that it will render content consistent with the platform on which it is run. If you are using it to host a web page, that will probably match your user's expectations.
 
Because it relies on a native platform control, StageWebView is not integrated with the display list. Instead, it floats above all other content. Think of it as attached directly to the Stage—hence the name. The contents of a StageWebView control can be captured as a bitmap via drawViewPortToBitmapData(), which can be placed on the display list. This could be used to enable a snapshot of a web page to participate in screen transition animations, for example.
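A sketch of hosting a page and later snapshotting it for a transition; the URL and viewport dimensions are just examples:

    import flash.display.Bitmap;
    import flash.display.BitmapData;
    import flash.geom.Rectangle;
    import flash.media.StageWebView;

    var webView:StageWebView = new StageWebView();
    webView.stage = stage; // attached directly to the stage, not the display list
    webView.viewPort = new Rectangle(0, 0, stage.stageWidth, stage.stageHeight / 2);
    webView.loadURL("http://www.adobe.com/");

    // Later: capture the view as a bitmap that can join the display list
    var snapshot:BitmapData = new BitmapData(webView.viewPort.width, webView.viewPort.height);
    webView.drawViewPortToBitmapData(snapshot);
    addChild(new Bitmap(snapshot));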
 
For those familiar with the HTMLLoader API in AIR, it is worth noting that StageWebView is not a suitable replacement. HTMLLoader includes a built-in HTML rendering capability and supports hosting HTML and JavaScript that run outside the browser sandbox as part of the application. StageWebView can only host HTML and JavaScript content that operates in the traditional browser sandbox; it is not able to host the application itself.
 
If your user wants to escape to the browser, you can enable that by calling navigateToURL(). On mobile platforms, this call can also redirect the user to another application if it's invoked on a URL prefix registered by that application, such as the YouTube or Google Maps apps.
 

 
Images

When it comes to taking pictures, the question for mobile devices these days is not whether they have a camera, but how many cameras they have. AIR for mobile includes new APIs providing integration both with the cameras and with any photos already stored on the device.
 
 
CameraUI and CameraRoll classes
Built-in camera functionality is accessed via the new CameraUI class. As the name suggests, this differs from the familiar Camera class in that it is an API to the camera's user interface, not to the camera directly. Depending on the device, this means the user may have the ability to select between still and video recording, select different resolutions, turn the flash on or off, select between front and rear cameras, and so on.
 
Mobile devices not only take pictures but also store them. The user's library of recorded images can be accessed via the CameraRoll class. The browseForImage() method can be used to open the device's standard UI for selecting an image from the library. The camera roll is also writable: images can be stored to the library via the addBitmapData() method.
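A sketch of both entry points; the onMediaSelected handler is shown in the MediaPromise discussion below:

    import flash.events.MediaEvent;
    import flash.media.CameraRoll;
    import flash.media.CameraUI;
    import flash.media.MediaType;

    // Capture a new still image via the device's camera UI
    if (CameraUI.isSupported) {
        var cameraUI:CameraUI = new CameraUI();
        cameraUI.addEventListener(MediaEvent.COMPLETE, onMediaSelected);
        cameraUI.launch(MediaType.IMAGE);
    }

    // Or let the user pick an existing image from the library
    if (CameraRoll.supportsBrowseForImage) {
        var cameraRoll:CameraRoll = new CameraRoll();
        cameraRoll.addEventListener(MediaEvent.SELECT, onMediaSelected);
        cameraRoll.browseForImage();
    }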
 
 
MediaPromise class
CameraUI and CameraRoll both return selected images via a new event type, called MediaEvent. MediaEvent is straightforward, adding just one interesting member to its parent Event class: data. The data member is of type MediaPromise, and it's through this class that image data must be accessed.
 
As its name suggests, MediaPromise is a promise to provide the data associated with a media item, such as an image. It does not necessarily hold those bytes, however. The distinction is important, and it's worth spending a few minutes on the API to understand how to use it efficiently.
 
Whether it is best to have a media item in memory or in storage depends on a number of factors. For example, with video, it is generally necessary to keep it in storage, as the available memory is frequently too small; and if the media item is in the device's camera roll library, then it is already in storage and should not be read into memory unless necessary. On the other hand, a still photo that has just been taken is typically stored in memory, as it is probably small enough and is likely to be displayed immediately.
 
The MediaPromise class wraps up this uncertainty in a single object that can then be used efficiently if some care is taken. If the application wishes to keep the media item in storage to free up memory, it can easily check whether the item is already in storage by checking MediaPromise.file for a non-null value. When dealing with video, this may even be the difference between having enough storage and running out.
 
If the application wishes to process the media item in memory, then it can always be read via a stream accessed via MediaPromise.open(). The MediaPromise object will automatically return those bytes either from the in-memory copy or from storage, depending on where the item is located. When using open(), be sure to also check MediaPromise.isAsync to determine the kind of stream that has been returned.
 
Finally, to handle the common case in which the returned media item is being added to the display list, the Loader class has been extended with a new method, Loader.loadFilePromise(). This permits the item to be added directly to the display list, optimizing away any potentially unnecessary copies in application code. As the method name indicates, it works with any object implementing the IFilePromise interface, which MediaPromise does.
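Putting the pieces together, a sketch of a handler for the MediaEvent dispatched by CameraUI or CameraRoll:

    import flash.display.Loader;
    import flash.events.MediaEvent;
    import flash.media.MediaPromise;

    function onMediaSelected(event:MediaEvent):void {
        var promise:MediaPromise = event.data;
        if (promise.file != null) {
            // Already in storage; no need to pull the bytes into memory
            trace("media stored at:", promise.file.nativePath);
        }
        // Load for display without any extra copies in application code
        var loader:Loader = new Loader();
        loader.loadFilePromise(promise);
        addChild(loader);
    }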
 

 
Application lifecycle

On mobile devices, applications are subject to a lifecycle over which they have little control. They cannot start themselves, but may be started either directly by the user (for example, when launched from a home screen) or indirectly by the user (say, via a registered URL scheme). They can be sent to the background at any time. When running in the background, they may also be stopped at any time; this typically happens when the device is running low on resources for the foreground application.
 
Mobile apps can neither start themselves nor count on being able to shut themselves down; on some mobile platforms, NativeApplication.exit() is inoperable (a "no-op"). Rather than relying on saving state during a shutdown, applications should save state when they are sent to the background, periodically while running, or both.
 
Applications are notified when they are sent to the background via the dispatch of DEACTIVATE events and, correspondingly, ACTIVATE events when brought to the foreground. AIR also takes some specific actions when applications transition to the background and foreground. The details vary by platform.
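A sketch of the common pattern, assuming saveState() is the application's own persistence routine and 30 is its normal framerate:

    import flash.desktop.NativeApplication;
    import flash.events.Event;

    NativeApplication.nativeApplication.addEventListener(Event.DEACTIVATE, onDeactivate);
    NativeApplication.nativeApplication.addEventListener(Event.ACTIVATE, onActivate);

    function onDeactivate(event:Event):void {
        saveState();          // persist now; a later shutdown may give no warning
        stage.frameRate = 1;  // minimize background work (see platform notes below)
    }

    function onActivate(event:Event):void {
        stage.frameRate = 30; // restore the normal framerate
    }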
 
 
Android background behavior
On Android, applications are encouraged to do as little as possible in the background but do not have severe restrictions imposed. When an AIR application is sent to the background on Android, its animation framerate is reduced to four frames per second and, although all events continue to be dispatched, the rendering phase of the event loop is skipped.
 
AIR apps on Android therefore can continue to perform background tasks, such as completing an upload or download operation, or periodically syncing information. However, applications should take steps to further reduce their framerate, turn off or reduce other timers, and so on, when in the background.
 
 
iOS background behavior
On iOS, applications are not permitted to run in the background in a generic fashion. Instead, they must declare that they want to perform a certain type of background processing, such as keeping a voice-over-IP call going or completing a pending upload.
 
AIR does not provide support for this iOS background processing model, so when they are sent to the background, AIR apps are simply paused. Their framerate goes to zero, no events are dispatched, and no rendering occurs. They do, however, stay resident in memory by default. This allows the application to preserve its state when brought back to the foreground.
 

 
Performance

Good performance in mobile applications is best achieved by selecting a solid fundamental approach to each aspect of your application. Trying to wring an extra 10 percent improvement out of your linear-time algorithm is simply not going to compete with what you can achieve by using a constant-time algorithm in its place.
 
 
Startup time
Startup time is particularly challenging because the costs are often spread throughout the application. To minimize startup costs, focus on running as little code as possible, rather than making that code run faster.
 
For example, suppose you are writing a game, and on your first screen you'd like to display the current high scores, which are saved locally. Executing the code to retrieve those scores may be surprisingly expensive. First, since it's the first time that code path has run, you may have to pay the cost of interpreting or compiling the code, so it will be slower than steady-state ActionScript performance. Second, you'll wait for the information to be retrieved from the file system. Finally, you'll pay the cost of laying out and rendering that information on the screen.
 
Instead, consider deferring all of this work until after the first screen has already been displayed. Then, while the user is busy appreciating the beauty of your artwork, you can prepare the high score list. Finally, you can bring it onto the screen by fading or animating it in.
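One simple way to sketch this deferral is to wait one frame, so the first screen is rendered before the expensive work begins (showHighScores() is hypothetical):

    import flash.events.Event;

    // The first frame renders the artwork; the deferred work starts afterward
    stage.addEventListener(Event.ENTER_FRAME, deferredInit);

    function deferredInit(event:Event):void {
        stage.removeEventListener(Event.ENTER_FRAME, deferredInit);
        showHighScores(); // read from disk, lay out, then fade the list in
    }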
 
Note that the optimization scheme here involves choosing when to perform the work, rather than performing the work more quickly. It is the user's perceived performance that matters here: Users only notice these things when they have to wait for them to finish.
 
 
Rendering
The rise of GPUs has essentially inverted the performance characteristics of a typical rendering pipeline. When rendered on a CPU, each pixel is expensive. It's therefore advantageous to render from a description of shapes, performing pre-processing such that each pixel on the screen is drawn only once. This is the basic approach taken by AIR when rendering traditional, vector-based content.
 
A GPU, on the other hand, renders shapes poorly but can easily move enormous numbers of pixels around—often, several times more pixels than can actually fit on the screen. The best way to use a GPU is to compose your UI out of a set of bitmaps and then restrict yourself to transformations of those bitmaps.
 
In AIR, it is possible to get the best of both worlds. You can use the full capabilities of the AIR rendering model to draw and then cache the result as a bitmap that can be efficiently rendered to the screen. Use BitmapData.draw() to capture your rendered results in this way.
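A sketch of this render-once, reuse-many approach:

    import flash.display.Bitmap;
    import flash.display.BitmapData;
    import flash.display.Sprite;

    // Render vector art once, then reuse the resulting bitmap cheaply
    function renderToBitmap(source:Sprite):Bitmap {
        var data:BitmapData = new BitmapData(source.width, source.height, true, 0x00000000);
        data.draw(source); // the expensive vector rendering happens only here
        return new Bitmap(data);
    }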
 
Note that while it is also possible to package these bitmaps with your application, rather than rendering them on the fly, the proliferation of screen sizes and densities makes it effectively impossible to produce all necessary variations beforehand. This approach is therefore not only fast, but also fits well with today's proliferation of devices.
 
 
Memory
Although today's mobile devices contain plenty of RAM, it is key to remember that they do not use the same memory management model as traditional desktop operating systems. On the desktop, if memory requirements are exceeded, the contents of memory can be spilled to disk and brought back into memory later. This enables the operating system to keep an effectively unlimited number of programs running at once.
 
On mobile devices, this spill-to-disk approach is not available. Instead, if the demand for memory exceeds what is physically available, background applications are forced to exit, thus freeing up the memory they were using. If a request for memory cannot be satisfied at all, then the requesting application itself is exited.
 
There are two implications here. First, it is important to have a sense of the total memory requirements of your application to ensure that it will not run out of memory. Second, to increase its chances of staying resident while in the background, your app should strive to use as little memory as possible when it's in the background.
 
Both of these goals can be met by explicitly managing your application's memory. This may sound odd at first since, after all, the garbage collector is supposed to relieve you of this effort. It is best to think of the garbage collector as something that empties the trash for you. However, it is still up to you to put unused objects into the trash can.
 
The first step in adopting an explicit memory management approach is ensuring that you clear references to objects that are no longer needed. For example, suppose your application reads an XML configuration file at startup and then copies some important values out of that document. At this point, it is likely that the XML object tree created during that process is no longer needed. However, it is also likely that the application has kept a reference to the root XML object, thus keeping the entire XML document pinned in memory. After reading the configuration values, the application should set that reference to null, thus placing the object in the trash and making it available for collection.
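In code, the fix is a one-line sketch (loadConfiguration() is hypothetical):

    var config:XML = loadConfiguration();              // builds the XML object tree
    var serverURL:String = String(config.server.@url); // copy out the values you need
    config = null; // drop the reference so the whole tree becomes collectible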
 
Explicit memory management is also critical when dealing with multiples of a given object. For example, an application that loads a set of images will, if written naively, always run out of memory when that set is too large. On the other hand, if the implementation caps the number of images that can be in memory at any one time, it will not run out of memory no matter how large the set of images is. This can be achieved by freeing up an old image before loading a new one or, even more efficiently, by keeping a fixed number of objects in memory and cycling images through them.
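A sketch of such a cap, using a fixed-size cache and disposing of pixel data eagerly:

    import flash.display.Bitmap;

    const MAX_IMAGES:int = 8;
    var imageCache:Vector.<Bitmap> = new Vector.<Bitmap>();

    function addImage(image:Bitmap):void {
        if (imageCache.length >= MAX_IMAGES) {
            var oldest:Bitmap = imageCache.shift();
            oldest.bitmapData.dispose(); // free the pixel memory immediately
        }
        imageCache.push(image);
    }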
 

 
Storage

Mobile devices provide local file systems that applications can use to store preferences, documents, and the like. In general, applications should assume that this storage is accessible only to the app itself and cannot be shared with other apps. This storage can be accessed on all platforms via the File.applicationStorageDirectory property.
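For example, a sketch of writing a small preferences file to this private area; the file name and contents are illustrative:

    import flash.filesystem.File;
    import flash.filesystem.FileMode;
    import flash.filesystem.FileStream;

    var prefsFile:File = File.applicationStorageDirectory.resolvePath("prefs.xml");
    var stream:FileStream = new FileStream();
    stream.open(prefsFile, FileMode.WRITE);
    stream.writeUTFBytes("<prefs><volume>0.8</volume></prefs>");
    stream.close();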
 
Android implements a secondary file system, which typically resides on an available SD card and is accessed via the "/sdcard" path. Unlike the primary application storage, these locations can be read and written by all apps on the device. Applications should be aware that this secondary storage is not always available, however, as SD cards can be removed or may be present but unmounted.
 
Given the prevalence of cameras on mobile devices, they also provide a shared storage location specific to photos. Applications should generally access this via the CameraRoll API, as I previously explained in the "Images" section. While stored photos can be accessed directly via the file system API on some platforms, this is not a portable practice.
 

 
Deployment

In the mobile space, deployment happens primarily via app marketplaces. These marketplaces include on-device functionality for discovering, installing, and updating applications.
 
AIR applications are prepared for deployment to a particular marketplace by packaging them in the appropriate platform-specific format. For example, you should package your application as an .ipa file for uploading to the Apple App Store and as an .apk file for uploading to an Android marketplace. These options are available from within Flash Builder, or can be scripted via the ADT command-line tool.
 
All mobile application marketplaces require that the applications published to them be signed. For iOS, signing must occur with a certificate issued by Apple. For Android devices, developers should create a self-signed certificate that is valid for at least 25 years, and must use the same certificate to sign all updates to their application. Due to the differing certificate requirements, publishing to multiple marketplaces requires keeping track of a variety of certificates.
 
When you are preparing to deploy a mobile app to an Android market, keep in mind that AIR itself is deployed separately. (On iOS, a copy of AIR is bundled with each application, so this discussion does not apply.) If your app is installed on a device that does not have AIR installed, the user will be redirected to install AIR the first time the application is launched. Whenever possible, you should make sure that this redirection sends the user back to the same market from which they purchased your app. To accomplish this, pass the appropriate URL for that market via the -airDownloadURL flag when invoking the ADT command-line tool. Contact the market to determine the correct URL to use.
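For illustration, an ADT invocation might look like the following; the certificate, file names, and download URL are placeholders, and the exact option set for your SDK version is documented with ADT:

    adt -package -target apk -airDownloadURL http://example-market.com/air \
        -storetype pkcs12 -keystore myCert.p12 \
        MyApp.apk MyApp-app.xml MyApp.swf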
 

 
Where to go from here

Developing mobile apps with Adobe AIR makes it possible to create a single application that can be deployed across multiple mobile smartphones and tablets running Android, iOS, or BlackBerry Tablet OS.
 
AIR makes this possible by providing cross-platform abstractions where they are helpful (such as for accessing the camera roll), discovering device properties dynamically (such as screen size), and getting out of your way when necessary (for instance, using the file system API to access any part of the file system).
 
Building a cross-device application also requires developers to be cognizant of the memory model, application lifecycle, and other aspects unique to mobile development. Combining this knowledge with the AIR runtime enables the rapid creation of capable, cross-device mobile applications.
 
For more information about mobile app development with Adobe AIR, check out the mobile application development resources in the Mobile and Devices Developer Center, as well as the Adobe AIR Developer Center.