24 March 2008


Prerequisite knowledge

Basic familiarity with accepted quality assurance testing procedures is assumed. This includes being aware of Performance 101: you should not load a server beyond its capabilities. If you do, the results you get will be invalid and should be thrown away.

User level


Additional Requirements

LiveCycle ES

Load test tools

You can use load test tools such as the following:

  • Borland SilkPerformer
  • HP Mercury LoadRunner
  • IBM Rational Performance Tester
  • RadView WebLOAD

Note: This article discusses performance testing Adobe LiveCycle ES applications with IBM WebSphere Application Server. For information on older (version 7.2 and earlier) Adobe LiveCycle applications, refer to the article "Performance testing Adobe LiveCycle applications with IBM WebSphere Application Server and Microsoft Windows Server 2003".

Enterprise applications should not be deployed into production environments without a rigorous performance testing cycle. Ideally performance testing should occur on the same hardware as production hardware. The test environment and production environment should also have matching platform software versions, patch levels, and topologies.

This article describes the best practices for performance testing applications built on LiveCycle ES software. These practices are based on Adobe's own experience testing LiveCycle ES products in the Adobe Server Performance Labs.

A variety of load testing tools are available in the market, such as Mercury (now HP) LoadRunner, IBM Rational Performance Tester, and Segue (now Borland) SilkPerformer. They tend to be expensive. A serious enterprise load test configuration with three software licenses, 1,000 virtual user (VU) licenses, and a couple of additional component licenses can cost as much as US$100,000. However, the benefits they provide are truly remarkable when you consider the dire consequences of not testing an application for performance before enterprise deployment. This document provides examples based on SilkPerformer 2006 R2, which is Adobe's corporate standard. However, what this document covers applies just as easily to other tools.

This article is for developers and testers who are responsible for determining the performance of enterprise applications that use Adobe LiveCycle ES software. It is also suitable for system analysts, architects, and IT personnel who are trying to size hardware for application deployment.

Although the details I provide in this article are applicable to different operating systems and performance testing tools, the screen shots I use cover the following:

  • IBM WebSphere 6.1
  • Microsoft Windows Server 2003 Enterprise Edition SP2
  • AIX 5.3
  • Segue SilkPerformer 2006 R2
  • Microsoft Office Excel 2007

How load test tools work

Load test tools capture and play back protocol-level traffic between clients and servers. Therefore, they are generally immune from the widget/object recognition problems that typically plague function test tools.

Test design

Properly designed tests yield the maximum amount of usable information for the fewest number of tests. Performance tests typically run for a few hours. However, performance data collected from short-term tests tends to be highly variable and therefore less reliable. You can determine if your performance data is reliable by dividing the standard deviation by the mean and expressing the result as a percentage. Higher values are bad. Adobe's best practice is to conduct performance tests for at least one hour.
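The reliability check described above (standard deviation divided by the mean, expressed as a percentage, also known as the coefficient of variation) can be sketched as follows. The elapsed-time samples are hypothetical:

```java
import java.util.Arrays;

public class CoefficientOfVariation {
    // Coefficient of variation (CV) = (standard deviation / mean) * 100.
    // Lower values indicate more stable, more reliable performance data.
    static double cv(double[] elapsedTimes) {
        double mean = Arrays.stream(elapsedTimes).average().orElse(0.0);
        double variance = Arrays.stream(elapsedTimes)
                                .map(t -> (t - mean) * (t - mean))
                                .average().orElse(0.0);
        return Math.sqrt(variance) / mean * 100.0;
    }

    public static void main(String[] args) {
        // Hypothetical elapsed times (seconds) from a short and a long test
        double[] shortTest = {4.1, 9.8, 3.2, 12.5, 5.0}; // highly variable
        double[] longTest  = {6.9, 7.1, 7.0, 6.8, 7.2};  // stable
        System.out.printf("short test CV: %.1f%%%n", cv(shortTest));
        System.out.printf("long test CV:  %.1f%%%n", cv(longTest));
    }
}
```

The longer, stabler run produces a much lower CV, which is what you want to see before trusting the numbers.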

Performance testing vocabulary

The following terms are commonly used in performance testing:

  • Active users: The subset of total users who will be using the application at any given time.
  • Concurrent users: A subset of active users who are contacting the servers for services at any given time. This represents the number of users who have clicked a button and are currently waiting on a response from the server. It is a very small subset of the active user population—about 5% in many cases.
  • Virtual users: Users simulated by a load test tool. The behavior of real users can be simulated with thinktimes (defined below).
  • Peak hours: Hours during a typical workday that the application sees maximum usage.
  • Peak load: The transaction load that the application experiences during peak hours in the busiest period of the year. For someone testing an IRS application, peak load would tend to occur around April 15.
  • Peak concurrent users: Concurrent users hitting the servers during peak load.
  • Typical transaction: The single transaction that is most frequently executed during peak hours in the busiest period of the year. This transaction can be used to represent the overall usage of the application under test.
  • Elapsed time: The amount of time a user waits for service, usually expressed in seconds. It is also called the "response time."
  • Throughput: The rate at which typical transactions can be executed, usually expressed as transactions per hour.
  • Thinktime: The amount of time a virtual user is programmed to wait to simulate the time a real user would spend reading and filling out forms. This is usually programmed to vary randomly between three and 10 seconds. Load tests run without thinktime tend to produce unrealistic results.
  • PDF form complexity index: An index that represents the impact of processing a form using Adobe LiveCycle ES software. Do not use the number of pages in a form as an indicator of the load the server will experience rendering it. A better predictor is the number of interactive objects on the form, like text fields, radio buttons, drop-down list boxes, and so on.
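The thinktime behavior defined above (uniform random waits, analogous to SilkPerformer's RndUniF) can be sketched in Java. The 3-to-10-second bounds follow the definition in the list:

```java
import java.util.Random;

public class ThinkTime {
    private static final Random RNG = new Random();

    // Returns a uniformly distributed thinktime between min and max seconds,
    // mimicking SilkPerformer's RndUniF(min..max).
    static double randomThinkTime(double minSeconds, double maxSeconds) {
        return minSeconds + RNG.nextDouble() * (maxSeconds - minSeconds);
    }

    public static void main(String[] args) {
        // A virtual user would call Thread.sleep((long)(pause * 1000)) with
        // each value; here we just print a few samples.
        for (int i = 0; i < 3; i++) {
            System.out.printf("thinktime: %.1f s%n", randomThinkTime(3.0, 10.0));
        }
    }
}
```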

Recording scripts

Although many vendors claim that it is feasible to reuse function test scripts for load testing, Adobe's best practice is to avoid it. Most load test tools come with macro recorders that generate scripts based on user interaction with the client application. You can customize and run these scripts.

We recommend that—in order to minimize the complexity of load testing scripts—you code test harnesses, which can then be driven by simple load testing scripts. If you decide to implement test harnesses using servlets, having the following exception handling code in the servlets will make debugging easier:

import java.io.PrintWriter;
import javax.servlet.http.HttpServletResponse;

// Inside the servlet's service method; resp is the HttpServletResponse
try {
    // ... invoke the LiveCycle service under test here ...
} catch (Exception e) {
    PrintWriter out = resp.getWriter();
    out.print("<h2>Test Harness</h2> <p>An Exception occurred. Details below:</p>");
    out.print("<font color=red>");
    e.printStackTrace(out);
    out.print("</font>");
}

Most load test tools log what the clients see during the test. The output of the previous code will appear in the client browser and get saved in the logs. This will let you get the debug stack trace of the error without digging through the server logs.

The following is a simple SilkPerformer Benchmark Description Language (BDL) script that requests LiveCycle Forms ES to render and reader-enable a PDF form to the client via a custom-written test harness:

//----------------------------------------------------------------------
// Recorded 05/02/2005 by SilkPerformer Recorder v7.0.0.2364
//----------------------------------------------------------------------
benchmark SilkPerformerRecorder

use "WebAPI.bdh"

dcluser
  user
    VUser
  transactions
    TInit : begin;
    TMain : 1;

dclrand
  fRandomThinkTime : RndUniF(2.0..4.0);

dcltrans
  transaction TInit
  begin
    WebModifyHttpHeader("Accept-Language", "en-us");
  end TInit;

  transaction TMain
  var
    strURL : string init "";
  begin
    WebPageUrl(strURL, "LiveCycle Test Harness"); // Load Options
    WebPageLink("collateral", "Choose Test Collateral"); // Link 3

    // -----------------------------
    // Choose Render & Reader-Enable
    // -----------------------------
    Thinktime(fRandomThinkTime);
    Print("Requesting interactive PDFForm...", OPT_DISPLAY_ALL, TEXT_BLUE);
    WebVerifyData("%PDF-1.6");
    WebVerifyData("%%EOF");
    MeasureStart("Render");
    WebPageSubmit("Submit", SUBMIT001, "srvltGetForm"); // Form 1
    MeasureStop("Render");
    Print("Form received.", OPT_DISPLAY_ALL, TEXT_BLUE);
  end TMain;

dclform
  SUBMIT001:
    "outputFormat" := "PDFForm",  // changed
    "formNames"    := "data.xdp", // added
    "dataFiles"    := "data.xml"; // added

The key things to take away from this code are the verification steps (the WebVerifyData calls) and the custom timers (MeasureStart and MeasureStop). Every PDF document starts with a beginning tag that says %PDF and an end tag that says %%EOF. These tags can verify that the client application received the entire PDF document during the load test. If the last part of the PDF document did not make it through, the verification will fail and the load test tool will flag an error. This information is crucial. Load test scripts should always contain verification steps.

Custom timers are very important because they let you put timers around key calls to the server. In this case, the most important call is the call to the servlet srvltGetForm, which is wrapped around a custom timer called "Render".
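Outside of BDL, the same verify-and-time pattern can be sketched in plain Java. The byte array here is a stand-in for the HTTP response body that srvltGetForm would return in a real test:

```java
import java.nio.charset.StandardCharsets;

public class RenderCheck {
    // Verifies that a response body is a complete PDF: it must begin with the
    // %PDF header and end with the %%EOF trailer (possibly plus a newline).
    static boolean isCompletePdf(byte[] body) {
        String text = new String(body, StandardCharsets.ISO_8859_1);
        return text.startsWith("%PDF") && text.trim().endsWith("%%EOF");
    }

    public static void main(String[] args) {
        byte[] fakeResponse = "%PDF-1.6 ...form content... %%EOF\n"
                .getBytes(StandardCharsets.ISO_8859_1);

        long start = System.nanoTime();           // like MeasureStart("Render")
        boolean ok = isCompletePdf(fakeResponse); // stand-in for the real request
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // like MeasureStop("Render")

        System.out.println(ok ? "Form received." : "Truncated PDF - flag an error!");
        System.out.println("Render took " + elapsedMs + " ms");
    }
}
```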

Test harness for API calls

Many LiveCycle use cases involve calls to LiveCycle services in a single request-response paradigm. The test harness servlet included in the sample zip file at the top of this article calls Adobe LiveCycle Forms ES to render a PDF interactive form from an XDP form template after merging XML data with it.

Test harness for short-lived orchestrations

A process "orchestration" is an automated workflow process designed using LiveCycle Workbench that invokes multiple LiveCycle services in sequence. Short-lived process orchestrations are synchronous, which means that code invoking the orchestration will block until the orchestration finishes executing. You can use the test harness servlet included in the sample zip file to invoke an orchestration. To use this servlet unchanged, your process orchestration needs a single output variable of datatype document, named "outdoc".

Long-lived orchestrations

Long-lived orchestrations involve user tasks. These are more difficult to test for performance because every user task has to be scripted separately. If your long-lived orchestration can be redesigned as a short-lived orchestration for test purposes, we recommend doing so.

Collecting performance data

Most operating systems provide a large number of performance counters that you can use to determine how well an application is performing under test.

Windows Task Manager

Windows Task Manager on the servers can provide a lot of insight into the performance characteristics of the application during a load test. To record this kind of data over time, you can use tools like Windows Performance Monitor. As you can see in Figure 1, the JVM in which WebSphere Application Server is running currently consumes about 366 MB of memory and is running 82 threads.

Figure 2 shows that three process instances of the Adobe module XMLForm.exe are currently running. If the PoolMax property of the Adobe LiveCycle ES XMLForm module is set to 0 (unlimited pool size), this Task Manager number will indicate the number of concurrent requests that are coming in for the given load.

Tivoli Performance Viewer

To look inside the JVM, use the free Tivoli Performance Viewer that ships with the WebSphere Application Server Administration Console. You will also need to install Adobe SVG Viewer 3 to display its charts; the interface is shown in Figure 3. In WebSphere 5.1, this was a separately installed application.

Windows Performance Monitor

Most load test tools have performance monitoring modules that let you collect performance counter data values published by Windows Performance Monitor. Try to collect data points every 5 or 10 seconds. At the very least, track the following performance counters for every test (explanations courtesy of Microsoft):

  • Memory – Available Mbytes: This is the amount of physical memory, in megabytes, immediately available for allocation to a process or for system use.
  • Memory – Committed Bytes: This is the amount of committed virtual memory, in bytes. Committed memory is the physical memory which has space reserved on the disk paging file(s).
  • Network Interface – NIC card instance – Bytes Total/sec: This is the rate at which bytes are sent and received over each network adapter, including framing characters.
  • Network Interface – NIC card instance – Packets/sec: This is the rate at which packets are sent and received on the network interface.
  • Paging File – page file instance - % Usage: This is the percentage of the page file instance in use.
  • Physical Disk – disk - % Disk Time: This is the percentage of elapsed time that the selected disk drive was busy servicing read or write requests.
  • Physical Disk – disk – Disk Bytes/sec: This is the rate at which bytes are transferred to or from the disk during write or read operations.
  • Processor – CPU instance - % Processor Time: This is the percentage of elapsed time that the processor spends executing a non-idle thread. It is calculated by measuring the duration of the idle thread active in the sample interval and subtracting that time from the interval duration. (Each processor has an idle thread that consumes cycles when no other threads are ready to run.) This counter is the primary indicator of processor activity, and displays the average percentage of busy time observed during the sample interval.

Also track the following counters for the java.exe process, which represents the J2EE application server instance:

  • Process – java – Handle Count: This is the total number of handles currently open by this process. This number is equal to the sum of the handles currently open by each thread in this process.
  • Process – java – Private Bytes: This is the current size, in bytes, of memory that this process has allocated but cannot be shared with other processes.
  • Process – java – Thread Count: This is the number of threads currently active in this process. An instruction is the basic unit of execution in a processor, and a thread is the object that executes instructions. Every running process has at least one thread.
  • Process – java – Virtual Bytes: This is the current size, in bytes, of the virtual address space that the process is using. Use of virtual address space does not necessarily imply corresponding use of either disk or main memory pages. Virtual space is finite, and by using too much of it, the process might limit its ability to load libraries.
  • Process – java – Working Set: This is the current size, in bytes, of RAM used by a process. It is the set of memory pages touched recently by the threads in the process. If free memory in the computer is above a threshold, pages are left in the working set of a process even if they are not in use. When free memory falls below a threshold, pages are trimmed from working sets. If they are needed, they will then be soft-faulted back into the working set before leaving main memory.


For UNIX operating systems such as AIX, rstat can be used to collect performance data. By default, the rstat daemon is not configured to start automatically on most systems. To configure this:

As root:

  1. Edit /etc/inetd.conf and uncomment or add an entry for rstatd; for example: rstatd sunrpc_udp udp wait root /usr/sbin/rpc.rstatd rstatd 100001 1-3
  2. Edit /etc/services and uncomment or add an entry for rstatd; for example: rstatd 100001/udp
  3. Refresh services: refresh -s inetd
  4. Start rstatd: /usr/sbin/rpc.rstatd

When enabled, you can collect metrics such as those shown in Figure 4.

WebSphere Performance Servlet

WebSphere comes prepackaged with an installable application called Performance Servlet, typically found at \WebSphere\AppServer\installableApps\PerfServletApp.ear. This servlet lets a performance monitoring tool monitor the Java Virtual Machine of the system under test. AIX rstat, Windows Performance Monitor and Task Manager cannot report on details within the JVM like the number of sessions, JDBC pool size, and so on.

The WebSphere Performance Servlet is an application that IBM packages with WebSphere. It uses the WebSphere Performance Monitoring Infrastructure (PMI) framework to return performance statistics as an XML file to the caller. The caller can be any application that can parse this XML and make sense of it. Segue's SilkPerformer Performance Explorer Server Monitor works this way.

Newer versions of SilkPerformer provide an additional option for WebSphere 6.0 and 6.1 by way of the JMX MBean Server. However, this requires the WebSphere libraries to be installed on the load controller, which many teams prefer to avoid.

After installing the servlet, make sure that all servers in the cluster are restarted. Once the Performance Servlet application starts successfully, regenerate the HTTP plug-in and redeploy it to the web servers. Test to make sure that the servlet works by pointing your browser to a URL like this:


For example, this would work if there is a web server front end to the LiveCycle cluster:


or this, if directly connecting to one of the JVM instances:


In SilkPerformer Performance Monitor, choose Add Data Source > Select from Predefined Data Sources > Application Server > IBM WebSphere Application Server > IBM WebSphere 5.0 (PerfServlet). Enter the data as shown in Figure 5. AP-PS6 is the web server in this case.

Be aware that the more nodes there are in the cluster, the more data there will be in the XML returned by the Performance Servlet. SilkPerformer sometimes displays the error message shown in Figure 6.

Ignore the error message and keep trying until it works. When it does, you will see a window similar to the one in Figure 7.

Remember that every request to the Performance Servlet uses up server resources. The very act of observing what is going on affects what you are observing.

Note: People refer to this phenomenon as the Heisenberg uncertainty principle after a principle formulated by the German physicist Werner Heisenberg (1901–1976) in a 1927 paper in the field of quantum mechanics.

It is a good idea to restrict the frequency with which data is collected using this servlet so that you can minimize the impact on the servers.
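A restrained collection schedule can be sketched with a fixed-rate timer. The short interval and duration in main are for demonstration only; in a real test you would poll no more often than every 30 seconds or so, and the task body would fetch and archive the Performance Servlet XML:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PmiPoller {
    // Schedules a collection task at a fixed rate and returns how many
    // snapshots were taken during the test window.
    static int pollFor(long intervalMillis, long durationMillis)
            throws InterruptedException {
        AtomicInteger snapshots = new AtomicInteger();
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            // Placeholder: fetch and archive the Performance Servlet XML here.
            snapshots.incrementAndGet();
        }, 0, intervalMillis, TimeUnit.MILLISECONDS);
        Thread.sleep(durationMillis);
        scheduler.shutdownNow();
        return snapshots.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // Demonstration values only; use 30+ second intervals in a real test.
        int taken = pollFor(100, 350);
        System.out.println("Snapshots collected: " + taken);
    }
}
```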


Before testing, make sure that the system clocks on all of your servers and on the test tool controller are synchronized. Synchronization lets you correlate error logs across multiple servers.

Before each test, reboot all the servers so that each test starts off with the same baseline. In addition, delete all logs before the start of each test so that entries from previous tests do not cause confusion later.

You can categorize tests based on their goals, and design tests to determine the maximum possible throughput.


By running a series of relatively short step tests, you can chart a profile of your application that will tell you the following things:

  • Highest possible transactional throughput while keeping elapsed times within acceptable limits
  • Number of servers needed to satisfy throughput requirements

The chart in Figure 8 is the result of seven one-hour step tests with 2, 4, 6, 8, 10, 12, and 14 virtual users on a single-node WebSphere cluster. It shows that the system saturates at a transaction level of about 889 transactions per hour with a mean elapsed time of 12.9 seconds. If you add more load to the system, it processes more transactions but the elapsed time starts rising.

System sizing

If your required hourly throughput is 1,600 transactions per hour, the chart tells you that you need at least an additional node in your cluster.
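Using the numbers from the step-test chart, the sizing is a simple ceiling division, as sketched below. (In practice you would also leave headroom rather than run each node at its saturation point.)

```java
public class Sizing {
    // Minimum cluster nodes needed, given the required overall throughput and
    // the per-node saturation throughput (both in transactions per hour).
    static int nodesNeeded(double requiredTph, double perNodeTph) {
        return (int) Math.ceil(requiredTph / perNodeTph);
    }

    public static void main(String[] args) {
        // 1,600 tph required; a single node saturates at about 889 tph
        System.out.println(nodesNeeded(1600, 889)); // prints 2
    }
}
```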


The only way to determine the long-term behavior of an application in production use is to run it for a long time under typical load. This is just about the only way to uncover memory leaks and other deployment issues that typically make IT personnel hate an application.

Although a test should ideally last for one week, people typically have only 48 hours for the test during the weekend. If this is your case, run the test under peak load rather than typical load. By their very nature, peak loads occur only during peak periods, which are usually of short duration, so testing longevity at peak load is not completely realistic.


Before each test, stop all application servers and delete existing logs. After finishing the test, check all logs, including web server logs and application server logs.

You can use additional tools such as JVM profilers to further debug problems. However, we strongly recommend that you do not run performance tests on a JVM while it is being profiled. Popular JVM profilers include Borland Optimizeit and Quest JProbe.

Availability calculations

Availability is an index of the stability and reliability of an application. It is expressed as a percentage:

Availability = [MTTF / (MTTF + MTTR)]×100

where MTTF is mean time to failure and MTTR is mean time to recover.

Failure typically means that the application stops responding and has to be restarted. Recovery typically means an application restart or a server reboot. Therefore, MTTF is the amount of time an application remains available to users, usually expressed in minutes. MTTR is the amount of time required to make the application available to users after it becomes unavailable, also expressed in minutes.

Simplistically, you can determine MTTF by running the application under typical load for weeks, or until it fails. If the application runs for four weeks and the recovery time is only three minutes, you already have "four 9s" availability:

4 weeks uptime (MTTF) = 4 weeks×7 days×24 hours×60 minutes = 40,320 minutes

where MTTR = 3 minutes. This means:

Availability = [40,320 / (40,320 + 3)]×100 = 99.9925%
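The same calculation in code, using the MTTF and MTTR values from the example:

```java
public class Availability {
    // Availability (%) = MTTF / (MTTF + MTTR) * 100, both times in minutes.
    static double availability(double mttfMinutes, double mttrMinutes) {
        return mttfMinutes / (mttfMinutes + mttrMinutes) * 100.0;
    }

    public static void main(String[] args) {
        double mttf = 4 * 7 * 24 * 60; // four weeks of uptime = 40,320 minutes
        double mttr = 3;               // three minutes to restart
        System.out.printf("Availability: %.4f%%%n", availability(mttf, mttr));
    }
}
```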

Note: Best practice calls for this type of test to be repeated at least three times, preferably five.

Performance considerations

Semantics and academic discussions aside, you get the best performance out of your application by running it on the best-performing hardware. The hardware includes gigabit Ethernet network backbones, high rotational speed disk arrays in high-performance RAID configurations, high-clock-speed multicore CPUs with high-clock-speed front-side bus, and faster RAM.

Make sure that you exclude antivirus software from scanning high I/O folders on the servers. High I/O folders include those containing WebSphere, IIS, and IBM HTTP server logs.

In addition, minimize logging by setting the logging threshold to ERROR. Redirect application server logs to a separate physical disk. Avoid writing temporary files to disk. If your application has to do temporary file I/O, consider setting up a RAM disk in memory.
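For applications that log through log4j (common in J2EE deployments of this era; check what your stack actually uses), the ERROR threshold and a log file on a separate disk might look like the following sketch. The category names and path are examples only:

```properties
# Log only errors to keep I/O overhead low during load tests
log4j.rootLogger=ERROR, FILE

# Write the log to a dedicated physical disk (example path)
log4j.appender.FILE=org.apache.log4j.RollingFileAppender
log4j.appender.FILE.File=E:/applogs/app.log
log4j.appender.FILE.MaxFileSize=10MB
log4j.appender.FILE.MaxBackupIndex=5
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d %p %c - %m%n
```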

Where to go from here

Designing valid tests and conducting performance testing on hardware that reflects production hardware will help you avoid nasty surprises when the application is deployed to production.

Please refer to the following resources to learn more about this topic: