Requirements

Prerequisite knowledge

Familiarity with HTML and JavaScript.



Additional required products

  • A touch device
  • A local web server on your WiFi network (setup instructions within this article)

User level

Beginning

When talking about features of iPhones, iPads, or Android devices, the first thing that comes to mind is their support for multi-touch events. Multi-touch refers to the ability of a touch-sensing surface to recognize the presence of two or more points of contact with the surface. This plural-point awareness is often used to implement advanced functionality such as zoom, or to activate predefined programs. Here, I explore the concepts of touch events and gestures in reference to normal HTML and JavaScript rendered in the browsers (Safari/WebKit) of iPhone, iPad, or Android devices.

Touch events supported by iPhones, iPads, and Android devices

All touch-enabled devices provide support for touch events—a set of events that let you take advantage of the touch screen interface. When you put a finger down on the touch-enabled screen (say, an iPhone or Android screen), it starts the lifecycle of a touch event. Within such a device's web browser, each time a new finger touches the screen, a new DOM touchstart event happens. As each finger lifts up, a touchend event happens. If, after touching the screen, you move any of your fingers around, touchmove events happen.

So we have the following touch events in the DOM:

  • touchstart : Initiated when a finger is placed on the screen
  • touchend : Kicked off every time a finger is removed from the screen
  • touchmove : Triggered when a finger already placed on the screen is moved across the screen
  • touchcancel : Triggered when the system cancels the touch (for example, when the touch is interrupted by a system dialog)

The good news is that the WebKit engine (the HTML layout and rendering engine inside the Safari and Android native browsers) supports all of these events. So, by implementing touch events in your Edge composition, you can take advantage of touch-enabled screens to give users a richer experience on iPhone and Android devices.
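To see this lifecycle in action before opening the sample files, here is a minimal sketch (the box element and its inline style are just placeholders for illustration) that registers a listener for each of the four touch events and reports which one fired:

<div id="box" style="width:300px; height:300px; background:#00ff00;"></div>
<script type="text/javascript">
var box = document.getElementById("box");
var events = ["touchstart", "touchmove", "touchend", "touchcancel"];
// Register the same logging handler for each phase of the touch lifecycle
for (var i = 0; i < events.length; i++) {
    box.addEventListener(events[i], function(e) {
        e.preventDefault(); // keep the page from scrolling while tracking the touch
        box.innerHTML = e.type + " fired; fingers on screen: " + e.touches.length;
    }, false);
}
</script>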

Differences between touch and mouse events

When you compare touch events with mouse events, you will notice some differences:

  • A touch is very hard to hold steady at one point, whereas a mouse can rest at a fixed position. With a touch on the screen, you go directly from a touchstart event to touchmove events and then to a touchend . With a mouse, by contrast, a mousedown event is not required to happen before a mousemove .
  • There is no mouseover equivalent in touch events. So if any functionality in your application uses mouseover to trigger something, you need to rework that part for a touch-enabled user experience (see the sketch after this list).
  • iPhone, iPad, and Android devices are designed to be operated by human fingers, so a touch point on the surface is an averaged point, computed from the surface area in contact with the pointing device (the finger) and translated to pixel coordinates (like finding the center of a circle). A mouse position, by contrast, is a single precise coordinate, and no averaging needs to be done.
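Because hover does not exist on touch screens, the usual workaround is to bind the hover behavior to a touch event as well. Here is a rough sketch of the idea (the menuItem element and showHint handler are hypothetical names for illustration), attaching the same handler to both event types so the code works with a mouse and with a finger:

var menuItem = document.getElementById("menuItem"); // hypothetical element
function showHint(e) {
    menuItem.innerHTML = "Hint shown"; // whatever mouseover used to trigger
}
menuItem.addEventListener("mouseover", showHint, false);  // desktop browsers
menuItem.addEventListener("touchstart", showHint, false); // touch devices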

The WebKit Event Object for touch events

With a mouse, there is only one point of contact: where the cursor is positioned on the screen. Things are different in the case of touch events. In the real world, it is possible for the user to hold two or more fingers down on the left side of the screen while at the same time tapping the right side. For that reason, the TouchEvent object in WebKit has a list (an array) called touches containing information for each and every finger currently in contact with the screen. There are two more lists in the event object. One, named targetTouches , contains the information for fingers that originated from the same node or target element on screen. The other list, changedTouches , contains only information for the fingers associated with the current event.

  • touches : an array of touch information for every finger currently on the screen; it is populated on touchstart and touchmove , but a lifted finger no longer appears in it on touchend
  • targetTouches : an array of information for touches originating from the same target element
  • changedTouches : an array of touch information regarding the current event
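To get a feel for how these three lists diverge, here is a minimal sketch (assuming a myObject element like the one used in the sample files later in this article) that logs the length of each list on every touchstart:

myObject.addEventListener('touchstart', function(e) {
    // With two fingers down on different elements, touches has two entries,
    // targetTouches counts only the fingers on myObject, and
    // changedTouches holds only the finger(s) that triggered this event.
    console.log("touches: " + e.touches.length
        + ", targetTouches: " + e.targetTouches.length
        + ", changedTouches: " + e.changedTouches.length);
}, false);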

Each entry in the above lists is an object with the following properties:

  • target : The element in the HTML DOM from which the touch event originated
  • identifier : The identifying number, unique to each touch point, that can be used to track a finger across successive events
  • clientX : The x coordinate of the touch relative to the viewport (the browser's viewing area). This excludes the scroll offset in the browser.
  • clientY : The y coordinate of the touch relative to the viewport (the browser's viewing area). This excludes the scroll offset in the browser.
  • screenX : The x coordinate relative to the screen.
  • screenY : The y coordinate relative to the screen.
  • pageX : The x coordinate relative to the full page, which includes scrolling.
  • pageY : The y coordinate relative to the full page, which includes scrolling.
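These coordinate pairs are related in a simple way: pageX is essentially clientX plus the horizontal scroll offset (and likewise for the y values). A quick sketch to verify this, again assuming the myObject element from the samples below:

myObject.addEventListener('touchstart', function(e) {
    var t = e.touches[0]; // first finger currently on the screen
    // pageX includes the scroll offset; clientX does not
    console.log("clientX: " + t.clientX
        + ", pageX: " + t.pageX
        + ", horizontal scroll: " + window.pageXOffset);
}, false);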

Enough concepts! You'll now do some experimenting to see these concepts in action and make things clearer.

Setting up your environment

You will now create and edit some HTML and JavaScript files outside of Edge to understand certain concepts related to touch and gesture events. To test them on a device, you need to be able to access them in the device's browser through a URL.

Set up a local web server

For this, you need to set up a local web server on your PC to host your HTML and associated files; you can then access them from your iPhone, iPad, or Android device. On Windows you can run IIS, or you can run Apache on any system; many free, lightweight web servers are also available on the web for download.

If you want to install an Apache server, I would recommend XAMPP, an easy-to-install Apache distribution that contains MySQL, PHP, and Perl. XAMPP is a good server for first-timers and is available for Windows, Mac, Linux, and Solaris.

To set up XAMPP:

  1. Download the ZIP version of XAMPP for your system from http://www.apachefriends.org/en/xampp.html.
  2. Extract it to a local folder (such as C:\xampp).
  3. Run the xampp-control file to open a window listing the different servers to run (see Figure 1).
  4. Click the Start button for Apache, and you have your Apache server running!

By default, Apache server uses port 80, so you can access this server in your browser through the localhost URL, like so: http://localhost:80/.

  5. To serve your HTML pages through this server so that they can be accessed through the localhost URL, place them in the folder called htdocs found inside the xampp folder.

You can place your files directly in that folder, or alternatively place a project folder inside htdocs. For example, if you place index.html inside htdocs, you can access it through the URL http://localhost:80/index.html. If instead you place a project folder named project1 (containing a file index.html) inside htdocs, you can access it through the URL http://localhost:80/project1/index.html. See Figure 2.

Editing files on your web server

To make changes, you can directly edit the files in the server folders. Then, on the device, you refresh the browser to see the effect of the changes you have made to your files.

Deploy the sample files on your local server:

  1. Download and unzip the sample file referenced at the top of this article.
  2. Copy the TouchEventsWithOutEdge folder into the web page folder of your local web server (see Figure 3). For example, if you used the XAMPP installation as I've described, you would copy the TouchEventsWithOutEdge folder into htdocs.
  3. Access the main HTML page with your multitouch device.

Note: When you create or edit Edge composition files using the Edge IDE, you don't need to run a separate web server. The Edge IDE has a built-in web server that serves the HTML pages to the browser when you preview the project.

Connect your device

In case you do not have a WiFi router or hub, you can download and run a virtual WiFi hotspot. For example, running Connectify on your PC creates a WiFi hotspot through which your device can connect to your server (see Figure 4).

Connect your device to the same WiFi network as the PC that is hosting the local web site. Once your PC is connected to the WiFi network, it is assigned an IP address. The same localhost URL you used (for example, http://localhost:80/project1/index.html) can now be reached through this IP address, like so: http://192.168.0.100:80/project1/index.html. Basically, the "localhost" part of the URL is replaced with the IP address of the PC hosting the web server. Any device connected to the same network, whether over the same WiFi or by other means (such as a wired LAN), can use this URL to access the files on the server. So the important thing here is to get the IP address of the PC running the web server.

To obtain the IP address of the development computer:

On Windows, start a command-line session and run the ipconfig command. Look for the IPv4 entry in the results displayed (see Figure 5).

On Mac OS, open the Apple menu and select System Preferences. In the System Preferences window, click the Network icon; the Network preferences window that opens displays the IP address.

On your device, open the browser. (On iPhone or iPad, use Safari. On an Android device, use the native WebKit-based browser.) Type in the web server address prefixed with the IP address of the system. You will now be able to access the HTML files you placed on your server directly from your device.

Now access the folder from your device. You will be able to see the list of all the files in this folder. Select TouchEventsWithOutEdge.html to open it in your device's browser (see Figure 6).

Note: I am using an iPad to load it, so the screenshots are based on the iPad.

Viewing touch events

Once TouchEventsWithOutEdge.html is loaded on your device, you will be able to see a green rectangular area on the page (created by a <div> element in the code). Now touch it with just one finger. You will see the information regarding the finger touch within the rectangle itself (see Figure 7).

In this file, I have added the following code, which tracks the details of touchstart events associated with each finger touching the screen.

<!DOCTYPE html>
<html>
<head>
<title>Touch Events</title>
<meta name="viewport" content="width=device-width, user-scalable=no">
<style type="text/css" media="screen">
    div { position: absolute; width: 350px; height: 650px; background: #00ff00; }
</style>
<script type="text/javascript" charset="utf-8">
function init() {
    var output = "";
    var myObject = document.getElementById("myObject");
    // Now adding the listener for touchstart
    myObject.addEventListener('touchstart', function(e) {
        output = ""; // first clear the output field
        output = output + "<b>touchstart</b> is initiated.<br/>Number of fingers touched: " + e.touches.length;
        for (var i = 0; i < e.touches.length; i++) {
            output = output + "<br/>-----------------------<br/>Information for finger #" + (i + 1) + ": "
                + "<br/>identifier: " + e.touches[i].identifier
                + "<br/>target: " + e.touches[i].target
                + "<br/>clientX: " + e.touches[i].clientX
                + "<br/>screenX: " + e.touches[i].screenX
                + "<br/>pageX: " + e.touches[i].pageX
                + "<br/>clientY: " + e.touches[i].clientY
                + "<br/>screenY: " + e.touches[i].screenY
                + "<br/>pageY: " + e.touches[i].pageY;
        }
        myObject.innerHTML = output;
    }, false);
}
</script>
</head>
<body onload="init()">
    <div id="myObject"></div>
</body>
</html>

If you touch with two fingers, the details will contain the touch information regarding both fingers (see Figure 8).

Similarly, if you touch with three fingers, you will see the three different lists with information related to each respective finger. Notice that in each case, the identifier is unique, and the finger coordinates for each finger are different based on the actual contact point of each finger.

Next, check the objects associated with touchmove . Open the file TouchEventsWithOutEdgeWithMove.html. This file records the objects associated with touchmove events by means of the following additional code in the init() function:

// Adding the listener for touchmove
myObject.addEventListener('touchmove', function(e) {
    output = output + "<b>touchmove</b> is called.<br/>Number of fingers touched: " + e.touches.length;
    for (var i = 0; i < e.touches.length; i++) {
        output = output + "<br/>-----------------------<br/>Information for finger #" + (i + 1) + ": "
            + "<br/>identifier: " + e.touches[i].identifier
            + "<br/>target: " + e.touches[i].target
            + "<br/>clientX: " + e.touches[i].clientX
            + "<br/>screenX: " + e.touches[i].screenX
            + "<br/>pageX: " + e.touches[i].pageX
            + "<br/>clientY: " + e.touches[i].clientY
            + "<br/>screenY: " + e.touches[i].screenY
            + "<br/>pageY: " + e.touches[i].pageY;
    }
    myObject.innerHTML = output;
}, false);

Now, take a look at the information stored in the three different lists of the touch event. To do this, open TouchEventsWithOutEdgeWithMoveAndEnd-targetTouches-ChangedTouches.html on your device browser.

When you touch with one finger, all three lists will have the same information. In this case changedTouches holds the information for that same finger, since it is the finger that caused the event.

If you touch the green rectangle with two fingers at exactly the same time, the changedTouches list will have information about both fingers. But if you place the second finger down after the first, touches will have two sets of information, one for each finger. The targetTouches list will also have two items, as both fingers are placed in the same area; changedTouches , however, will have only the information related to the second finger, because the second finger triggered this event.

If you move your fingers, then only changedTouches will have the updated information. Depending on the number of fingers moving, it will have different sets of information related to each respective finger.

When you remove a finger, it is removed from touches and targetTouches , and its information appears in changedTouches , because it is the finger that caused the event.

Removing all fingers will remove all the information from the touches and targetTouches lists, leaving them empty, and changedTouches will contain information for the last finger removed.

Note: Apple's WebKit implementation differs from the Android implementation in a few respects. The touchend event removes the ended touch from the event.touches array, so in Apple's implementation you have to look inside the event.changedTouches array to get the details of the lifted finger.
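So a touchend handler that needs the lifted finger's details should read changedTouches rather than touches. A minimal sketch, reusing the myObject element from the listings above:

myObject.addEventListener('touchend', function(e) {
    // The lifted finger is no longer in e.touches; read it from e.changedTouches
    var lifted = e.changedTouches[0];
    myObject.innerHTML = "Finger " + lifted.identifier
        + " lifted at pageX: " + lifted.pageX
        + " (fingers still down: " + e.touches.length + ")";
}, false);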

Viewing gesture events

Gestures are a special set of events where more than one finger is used on a multi-touch screen. A gesture event occurs any time two or more fingers are touching the screen. If either finger lands in the node to which you've connected any of the gesture handlers ( gesturestart , gesturechange , gestureend ), you'll start receiving the corresponding events.

The event object for gesture events looks very different from that for touch events. It contains scale and rotation values and no touch objects.

The scale and rotation values are the two important keys of this event object. scale gives you the amount the user has pinched or spread their fingers, as a multiplier relative to 1.0, while rotation gives you the amount, in degrees, by which the user has rotated their fingers.
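In practice, you feed these two values into the element's style, as the second sample file in this section will do. As a minimal sketch (assuming myObject is the element being transformed and startWidth is a variable holding its width when the gesture began):

var startWidth = 350; // assumed starting width of the element
myObject.addEventListener('gesturechange', function(e) {
    e.preventDefault();
    // scale is a multiplier relative to 1.0; rotation is in degrees
    myObject.style.width = (startWidth * e.scale) + "px";
    myObject.style.webkitTransform = "rotate(" + (e.rotation % 360) + "deg)";
}, false);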

Copy the GestureEventsWithOutEdge folder (from the sample files) and place it in your local web server to access it from your multi-touch device (see Figure 9).

Then, access the folder on your device. You will be able to see the list of all the files in this folder (see Figure 10). Select GestureEventsWithOutEdge.html to open it in your device's browser.

Once it is loaded in your device browser, you will see a similar rectangular shape (the <div> element) on the page. Now, with two fingers, try to scale it and rotate it at the same time. As you move both fingers, notice that information regarding gesturestart and gesturechange , along with the associated scale and rotation values, appears (see Figure 11).

If you open this file in a text editor, you will see the following code, which retrieves this information from the gesture events:

<script type="text/javascript" charset="utf-8">
function init() {
    var output = "";
    var myObject = document.getElementById("myObject");

    // Listener for gesturestart
    myObject.addEventListener('gesturestart', function(e) {
        output = ""; // first clear the output field
        output = output + "<b>gesturestart</b> is initiated.<br/>";
        output = output + "Scale: " + e.scale + ", Rotation: " + e.rotation + "<br/>";
        myObject.innerHTML = output;
    }, false);

    // Listener for gesturechange
    myObject.addEventListener('gesturechange', function(e) {
        output = output + "<b>gesturechange</b> is initiated.<br/>";
        output = output + "Scale: " + e.scale + ", Rotation: " + e.rotation + "<br/>";
        myObject.innerHTML = output;
    }, false);

    // Listener for gestureend
    myObject.addEventListener('gestureend', function(e) {
        output = output + "<b>gestureend</b> is initiated.<br/>";
        output = output + "Scale: " + e.scale + ", Rotation: " + e.rotation + "<br/>";
        myObject.innerHTML = output;
    }, false);
}
</script>

Next, open GestureEventsWithOutEdge_RotationScale.html. In this file, I have added the following code, which applies the gesture's scale and rotation values to the <div> element.

<script type="text/javascript" charset="utf-8">
function init() {
    var output = "";
    var width = 350;
    var height = 650;
    var rotation = 0;
    var myObject = document.getElementById("myObject");

    // Listener for gesturestart
    myObject.addEventListener('gesturestart', function(e) {
        e.preventDefault();
        output = ""; // first clear the output field
        output = output + "<b>gesturestart</b> is initiated.<br/>";
        output = output + "Scale: " + e.scale + ", Rotation: " + e.rotation + "<br/>";
        myObject.innerHTML = output;
    }, false);

    // Listener for gesturechange: apply the live scale and rotation to the element
    myObject.addEventListener('gesturechange', function(e) {
        output = output + "<b>gesturechange</b> is initiated.<br/>";
        output = output + "Scale: " + e.scale + ", Rotation: " + e.rotation + "<br/>";
        myObject.innerHTML = output;
        e.target.style.width = (width * e.scale) + "px";
        e.target.style.height = (height * e.scale) + "px";
        e.target.style.webkitTransform = "rotate(" + ((rotation + e.rotation) % 360) + "deg)";
    }, false);

    // Listener for gestureend: commit the final size and angle
    myObject.addEventListener('gestureend', function(e) {
        output = output + "<b>gestureend</b> is initiated.<br/>";
        output = output + "Scale: " + e.scale + ", Rotation: " + e.rotation + "<br/>";
        myObject.innerHTML = output;
        width *= e.scale;
        height *= e.scale;
        rotation = (rotation + e.rotation) % 360;
    }, false);
}
</script>

When you rotate and scale the <div> with two fingers, you will see the <div> actually scaling and rotating at the same time (see Figure 12).

Where to go from here

For more touch event examples, look at the custom events of jQuery Mobile, which Adobe Edge builds on.

jQuery Mobile offers several custom events built on top of native events to create useful hooks for development. These events employ various touch, mouse, and window events, and they can be bound to individual elements or to the window for use in both handheld and desktop environments. You can bind to them as you would to other jQuery events, using live() or bind() .

Especially notable among the events Adobe Edge implements in its framework are the vmouse (virtual mouse) events, which abstract away the difference between mouse and touch events.