14 June 2010
Prerequisite knowledge: Familiarity with ActionScript and Flash Builder will be helpful.
User level: Beginning
In the last few months, I've seen a significant jump in the number of financial services firms interested in building real-time trader applications using Flex and LiveCycle Data Services.
The new quality of service features available in LiveCycle Data Services ES2 (including guaranteed message delivery and message throttling) have contributed to this intensified interest. But for many real-time applications, performance is what matters most. So, in addition to adding and improving features, Adobe spent a considerable amount of time running benchmarks and optimizing performance. On his blog, Damon Cooper describes several server-focused tests. They answer the question: How many messages per second can the server push to how many clients with what latency? In one of the scenarios tested, the answer was a total of 400,000 messages per second spread over 500 clients with an average latency of 15 milliseconds.
Another more client-focused question customers have asked me is: How many messages per second can one client consume and render with what latency? To answer that question, I built my own Performance Console and feed generator.
This article provides an overview of the Performance Console and instructions for setting it up in your environment.
Note: This tool is designed to help you get started and run your own tests. This is not an official LiveCycle Data Services performance benchmark. There are many variables that affect the performance of real-time systems. This tool can help you experiment with some of these variables; the results that it produces do not necessarily reflect the best results that can be achieved with Flex and LiveCycle Data Services.
The Performance Console allows you to configure the throughput of the server-side feed generator as well as the client subscription details, and then measure the overall performance and health of the system.
The number of symbols you select doesn't have a direct impact on the total number of messages pushed. It only impacts the update frequency, defined here as the number of messages pushed per second per subtopic (in other words, per symbol in this specific application). For example, if the feed generator generates updates for 1000 symbols using one thread and sleeps for 1 millisecond between messages, it will generate a total of approximately 1000 messages per second, and each symbol will get one update per second. If you change the number of symbols to 500, the generator will still push a total of approximately 1000 messages per second, but each symbol will now be updated twice per second.
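To make the relationship between symbol count and update frequency concrete, here is a minimal ActionScript sketch of such a feed loop. The real generator runs on the server, so treat this purely as an illustration: the destination name "feed", the symbol naming, and the quote fields are assumptions, and a Flash Timer will not reliably fire every millisecond the way a server-side thread can sleep for one.

import flash.events.TimerEvent;
import flash.utils.Timer;
import mx.messaging.Producer;
import mx.messaging.messages.AsyncMessage;

private const SYMBOL_COUNT:int = 1000;   // number of symbols (one subtopic per symbol)
private var symbolIndex:int = 0;

private var producer:Producer = new Producer();
private var feedTimer:Timer = new Timer(1);   // ~1 ms pause between messages

private function startFeed():void
{
    producer.destination = "feed";   // assumed destination name
    feedTimer.addEventListener(TimerEvent.TIMER, publishNext);
    feedTimer.start();
}

private function publishNext(event:TimerEvent):void
{
    // Round-robin over the symbols: the total message rate stays the same,
    // so fewer symbols means more updates per second for each symbol.
    var symbol:String = "SYM" + (symbolIndex++ % SYMBOL_COUNT);
    var message:AsyncMessage = new AsyncMessage();
    message.headers[AsyncMessage.SUBTOPIC_HEADER] = symbol;
    message.body = { symbol: symbol, price: Number((Math.random() * 100).toFixed(2)) };
    producer.send(message);
}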
You can specify the channel you want to use: RTMP, Streaming, or NIO Streaming. I used the Parsley framework to externalize the channel configuration in the channels-config.xml file. You can add channels to channels-config.xml if you want to test additional options. (Any channel you add must also be defined at the server side, in services-config.xml, and registered with the destination used by the application in messaging-config.xml.)
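If you prefer to see the channel selection in code rather than in the Parsley configuration, the following sketch builds an equivalent ChannelSet programmatically. The channel ids, endpoint URLs, and the "feed" destination name are assumptions and must match your server configuration.

import mx.messaging.ChannelSet;
import mx.messaging.Consumer;
import mx.messaging.channels.RTMPChannel;
import mx.messaging.channels.StreamingAMFChannel;

// Build a ChannelSet for the selected channel type.
private function createChannelSet(channelType:String):ChannelSet
{
    var channelSet:ChannelSet = new ChannelSet();
    if (channelType == "rtmp")
    {
        channelSet.addChannel(new RTMPChannel("my-rtmp", "rtmp://localhost:2038"));
    }
    else
    {
        // The same StreamingAMFChannel class is used on the client for both the
        // servlet-based and the NIO-based streaming endpoints; only the endpoint
        // URL (and the matching server-side endpoint definition) differs.
        channelSet.addChannel(new StreamingAMFChannel("my-streaming-amf",
            "http://localhost:8400/traderdesktop/messagebroker/streamingamf"));
    }
    return channelSet;
}

// Subscribe to one symbol (subtopic) over the selected channel.
private function subscribe(symbol:String, channelSet:ChannelSet):Consumer
{
    var consumer:Consumer = new Consumer();
    consumer.destination = "feed";   // assumed destination name
    consumer.subtopic = symbol;
    consumer.channelSet = channelSet;
    consumer.subscribe();
    return consumer;
}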
You can also enable throttling using the Max Frequency setting, which defines the maximum number of messages per second you want the client to get per subscription (in other words, per symbol in this specific application). Setting Max Frequency to 0 (the default) keeps throttling disabled: the client will be sent all the messages processed by the server. If you set the value to 1, the client will receive a maximum of one message per second per symbol. A conflation policy is configured at the server side for the destination. The options are ignore, buffer, and merge. You can also define your own conflation policy by creating a custom outbound queue.
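On the client side, the Max Frequency setting maps to the maxFrequency property that LiveCycle Data Services ES2 exposes on the Consumer. A minimal sketch, assuming a destination named "feed" and a subscription to a single symbol:

import mx.messaging.Consumer;

// Request at most one message per second for this subscription.
// A value of 0 (the default) means no per-subscription limit.
var consumer:Consumer = new Consumer();
consumer.destination = "feed";   // assumed destination name
consumer.subtopic = "ADBE";      // one subscription per symbol
consumer.maxFrequency = 1;       // messages per second for this subscription
consumer.subscribe();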
Lastly, you can set the Frame Rate of the application. The Frame Rate is not a parameter of the client subscription per se, but it can significantly impact the overall performance and behavior of this type of application, especially if you are pushing a very large number of messages to the client. The Performance Console allows you to modify it easily, so you can find the best value for your own application.
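Applying the selected frame rate at runtime is a one-liner; a minimal sketch, assuming the code runs in a component that is already on the display list (so that stage is available):

// Apply the frame rate selected in the Performance Console. Changing
// stage.frameRate takes effect immediately for the whole application.
private function applyFrameRate(fps:Number):void
{
    if (stage != null)
    {
        stage.frameRate = fps;
    }
}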
The Rendering checkbox allows you to measure the performance impact of displaying the live changes in the datagrid. With the checkbox unchecked, the client receives and processes the messages but doesn't display them in the datagrid.
One indicator I found particularly useful in gauging the maximum capabilities of the client is message latency: the time it takes for a message to be handled by the client after it is published (or pushed) by the server. If the client can't keep the latency low and fairly constant, you have probably reached the limits of the system, and many other things will start to go wrong. For example, the application may become unresponsive, or messages will be queued and the latency will continue to grow.
In this test suite, I calculate the message latency as follows: the feed adds a timestamp to each message when it publishes it, and when the client receives the message, it subtracts that timestamp from the current client time. (For the two values to be comparable, the client and server clocks must be in sync, which is trivially the case when the client and the feed run on the same machine.)
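Expressed in ActionScript, the calculation boils down to the sketch below. The actual console may carry its own publish time in the message body, but the standard timestamp property of the message is enough to illustrate the idea:

import mx.messaging.events.MessageEvent;

// Compute the latency of an incoming message by comparing the client clock
// with the timestamp carried by the message.
private function messageHandler(event:MessageEvent):void
{
    var latency:Number = new Date().time - event.message.timestamp;
    trace("Latency: " + latency + " ms");
}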
The Performance Console displays the latency of each incoming message, providing a real-time indicator of the overall health of the system. Before I started tracking and displaying this indicator in real time, it was much more difficult to determine exactly when and why the client was starting to get overloaded.
When you click the Run Benchmark button, the application will monitor the incoming messages for one minute and report the total number of messages received as well as the average latency as a new row in the Performance data grid. The first eight columns of the grid provide the test parameters. The last three columns provide the actual results of the test.
The second test, for example, shows that using the NIO streaming channel, the client processed 59,611 messages in one minute (993.5 messages per second) with an average latency of 0.3 milliseconds. The last test shows that using the RTMP channel, the client processed 116,136 messages in one minute (1,935.6 messages per second) with an average latency of 3.4 milliseconds.
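Under the hood, a benchmark run of this kind amounts to counting messages and accumulating latencies until a one-minute timer fires. A minimal sketch, reusing the timestamp-based latency calculation shown earlier (the variable names and the reporting format are my own):

import flash.events.TimerEvent;
import flash.utils.Timer;
import mx.messaging.events.MessageEvent;

private var messageCount:int = 0;
private var totalLatency:Number = 0;
private var benchmarkTimer:Timer = new Timer(60000, 1);   // one minute, fires once

private function runBenchmark():void
{
    messageCount = 0;
    totalLatency = 0;
    benchmarkTimer.addEventListener(TimerEvent.TIMER_COMPLETE, reportResults);
    benchmarkTimer.start();
}

// Registered as the Consumer's MessageEvent.MESSAGE handler for the duration of the run.
private function benchmarkMessageHandler(event:MessageEvent):void
{
    messageCount++;
    totalLatency += new Date().time - event.message.timestamp;
}

private function reportResults(event:TimerEvent):void
{
    trace("Messages: " + messageCount
        + ", messages/sec: " + (messageCount / 60).toFixed(1)
        + ", average latency: " + (totalLatency / messageCount).toFixed(1) + " ms");
}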
These tests are theoretical. In a real-life application, you probably wouldn't want to send 2,000 individual messages per second to your clients. Instead, you'd typically throttle messages using a policy (ignore, buffer, merge, or custom) that is appropriate for your application. Regardless, it is good to be able to identify the limits of the system.
Note: Because of a dependency that currently exists between the frame rate and RTMP message processing, I increased the frame rate to handle large numbers of messages when using the RTMP channel.
To run your own tests, you'll need to install the server-side application and the Performance Console client.
Follow these steps to set up the web application:
1. Make a copy of the lcds web application in your webapps directory, and call the new web application traderdesktop. For example, copy /tomcat/webapps/lcds to /tomcat/webapps/traderdesktop.
2. Copy the server-side files provided in the sample files for this article into the new traderdesktop directory.
3. Install the AIR application located in the sample files for this article. This AIR file was built with the publicly available AIR 2 beta 2.
To access the console in your browser, navigate to
http://localhost:8400/traderdesktop/TraderDesktopWebPerfConsole/TraderDesktopWebPerfConsole.html
The source code of the application is available in the traderdesktop-projects folder of the sample files.
To import the projects in Flash Builder 4:
1. In Flash Builder, select File > Import and choose Existing Projects into Workspace.
2. Select traderdesktop-projects as the root directory and click Finish.

The DataGrid control available out of the box in the Flex SDK is a general-purpose component that was built to support a wide variety of use cases, and it is therefore not specifically optimized to handle a trader desktop price grid with frequent real-time updates. The good news is that it is easy to optimize the DataGrid control for this use case. The basic idea is to disable the collectionChange event handler and let the itemRenderers watch (and handle) changes to their respective data items. Check out the FastDatagrid.as class and the application's item renderers for the details.
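To make the idea concrete, here is a minimal sketch of that technique. It is an illustration only, not the FastDatagrid.as class shipped with the sample files: structural changes (add, remove, reset) still go through the normal code path, while per-item update notifications are ignored because the item renderers repaint themselves.

package
{
    import flash.events.Event;

    import mx.controls.DataGrid;
    import mx.events.CollectionEvent;
    import mx.events.CollectionEventKind;

    public class FastDataGrid extends DataGrid
    {
        // Ignore per-item UPDATE notifications so that thousands of price
        // updates per second do not each trigger a full grid refresh; the
        // item renderers watch their own data items and repaint themselves.
        override protected function collectionChangeHandler(event:Event):void
        {
            if (event is CollectionEvent &&
                CollectionEvent(event).kind == CollectionEventKind.UPDATE)
            {
                return;
            }
            super.collectionChangeHandler(event);
        }
    }
}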
After some initial testing, I discovered a problem when using the AIR client and the RTMP channel. Specifically, the message latency started to increase when the application was deactivated (lost focus). When an AIR application loses focus, the AIR 2 runtime lowers the frame rate of the application to prevent it from using unnecessary CPU resources while it's running in the background. Because of the current dependency between frame rate and RTMP message processing, this behavior is not appropriate for the desktop trader application when using an RTMP channel: it would significantly reduce the number of real-time messages the application can handle, which in turn would lead to a rapidly growing queue of messages waiting to be processed and to message latency growing out of control. To handle this situation, the code now forces a higher frame rate on the deactivate event of the WindowedApplication.
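The workaround looks roughly like the following sketch. I'm using NativeApplication's deactivate event here, which fires when the WindowedApplication loses focus; the 60 fps value is just an illustrative choice.

import flash.desktop.NativeApplication;
import flash.events.Event;

private function init():void
{
    NativeApplication.nativeApplication.addEventListener(Event.DEACTIVATE, deactivateHandler);
}

private function deactivateHandler(event:Event):void
{
    // AIR 2 lowers the frame rate of a backgrounded application; force it
    // back up so the RTMP channel keeps draining messages at full speed.
    stage.frameRate = 60;
}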
After developing the source code for the AIR version and the browser version of the real-time trader application, the next logical step was to build a mobile version. I used Flex 4 and the prerelease version of AIR for Android.
I made some changes to the user interface to make it work better on a smaller screen, but for the most part, this version of the application uses the same code as the browser and the desktop versions: the same data feed, subscription logic, model, controller, components, and so on. This made it amazingly easy to take an existing Flex application and deploy it on Android. I deployed on three different targets (browser, desktop, and mobile), but used the same programming model, same language, same tools, same code.
For more information on optimizing data grids, see Tom Sugden's recent blog post on this topic.
For an overview of LiveCycle Data Services ES2, including new quality of service and data throttling features, see What's new in Adobe LiveCycle Data Services ES2. Also check out the Adobe LiveCycle Data Services ES2 Performance Brief for more details on the performance of the messaging infrastructure.