
Open Exhibits Core - Definitive Guide, Part I


v. 1.1, 11.29.2010

1. Introduction

Audience for this Guide


Links to Adobe API
Link to Open Exhibits Core API
Using Code Examples
Links to Tutorials
An Introduction to Open Exhibits Core
Why We Built Open Exhibits Core
Why We Built Open Exhibits Core for Flash

2. The Multitouch Simulator

Introduction
Why Use a Multitouch Simulator?
How the Multitouch Simulator Works
Using the Open Exhibits Core Multitouch Simulator
Adding Touch Points
Moving Touch Points
Removing Touch Points
Performing Simulated Gesture Actions
Simulator Mechanics
Touch Point Ownership
Touch Point Visibility
Touch Point Shape
Breaking Out of Clusters

3. Building A Simple Multitouch Application

Your First Multitouch Application


Importing Classes
Creating a Custom Class
Creating a New Instance of TouchSprite

Adding GestureEvent Listeners
Handling GestureEvents
Testing Your Application
Publishing the Application for Windows 7

4. The TouchSprite Class

The TouchSprite Class


Understanding the TouchSprite Class
TouchSprite Inheritance
Creating a TouchSprite Instance
Adding a TouchSprite to the Stage
Setting the TouchSprite Properties
Using TouchSprite Methods
Creating Custom Classes that Extend TouchSprite
Creating a TouchSprite from the Library
Creating Dynamic Instances from the Library
Tips for Using the TouchSprite Class in Application Development

5. The TouchMovieClip Class

The TouchMovieClip Class


Understanding the TouchMovieClip Class
MovieClip Inheritance
TouchMovieClip Inheritance
Creating a TouchMovieClip Instance
Adding a TouchMovieClip to the Stage
Setting the TouchMovieClip Properties
Using TouchMovieClip Methods
Creating Custom Classes that Extend TouchMovieClip
Creating a TouchMovieClip from the Library
Creating Dynamic Instances from the Library
Accessing the TouchMovieClip Timeline
Tips for Using the TouchMovieClip Class in Application Development

6. Events In Open Exhibits Core

Events in Flash

Event Flow
Event Targeting
Stopping Event Flow
Understanding Event Propagation in Open Exhibits Core
The TouchEvent Class
Understanding TouchEvents
Blob Containment
Hierarchical event dispatch
TouchEvent Bubbling
Stopping Event Propagation
Working with Touch Event Listeners
Adding TouchEvent Listeners
Removing TouchEvent Listeners
Adding Multiple Listeners
Working with TouchEvents and Nested TouchObjects
The GestureEvent Class
GestureEvent Inheritance
GestureEvent Containment
GestureEvent Processing
GestureEvent Targeting
GestureEvent Flow
Working with Gesture Event Listeners
Adding GestureEvent Listeners
Removing GestureEvent Listeners
Adding Multiple Listeners to a Single Touch Object
Adding TouchEvent and GestureEvent Listeners to a Single Touch Object
Adding TouchEvent and GestureEvent Listeners
Working with GestureEvents and Nested Display Objects

7. The Gesture Library

Exploring Gestures in Open Exhibits Core


Gesture Standards
Gesture Analytics
New Gestures in Open Exhibits Core
Gesture Types and Gesture Groups
Basic Touch
TouchEvent Phases

Touch Tap
Touch Hold
Simple Primary Gestures
The Drag Gesture
The Scroll Gesture
The Flick Gesture
The Swipe Gesture
The Scale Gesture
The Rotate Gesture
Stroke Gestures
Stroke Letters
Stroke Symbols
Stroke Greek Symbols
Stroke Shapes
Stroke Numbers
Simple Secondary Gestures
The 3D Tilt Gesture
The Split Gestures
Complex Secondary Gestures
The Anchor Gesture
Future Gesture Development
Universal Gestures

8. Module Components
Introduction to Module Components
Modules Available in Open Exhibits Core
ImageViewer
VideoViewer
FlickrViewer
YoutubeViewer
GMapViewer
KeyViewer
Using Modules in Applications
Flash/Flex Life Cycles
The Open Exhibits Core Module Life-Cycle
Creating a Module Instance
Customizing Module Components

Using Module XML Files
Module Properties
Setting the Number of Objects Shown
Changing the Appearance of a Module
Module Content
Adding New Content
Editing Source Files
Editing Meta Data
Module Interactions
Handling Module Events
Using Multiple Components in a Single App

9. Templates

Introduction to Templates
Templates Available in Open Exhibits Core
CollectionViewer
Using Templates in Applications
Creating a Template Instance
Setting the Modules Included in the Template
Modifying Template Properties

10. The Application Class

The Application Class


Application Class Inheritance
Application Settings
General Settings
Changing the Application Frame Rates
Hiding the Mouse Cursor
Adding New Modules
Setting the Open Exhibits Core License Key
Input Provider Settings
Native Settings
Socket Settings
Setting Auto-Reconnect
Changing the Host IP
Changing the Port

Enforcing FLOSC capture size
Simulator Settings
Turning on Debug Mode
Degradation Settings

11. Publishing Applications

Introduction
Publishing Projector Files
Publishing SWFs
Publishing in AIR

This material is based upon work supported by the National Science Foundation under Grant No. DRL-1010028. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Open Exhibits is operated by Ideum. Open Exhibits Core is based on GestureWorks and is free for educational use. To learn more, visit: http://openexhibits.org

1. Introduction To Open Exhibits Core


Audience for this Guide
There are a number of excellent books that explain how to get started with ActionScript 3 or provide detailed treatments of object-oriented programming. That type of Flash documentation is beyond the scope of this guide; we recommend Essential ActionScript 3.0, the ActionScript 3.0 Cookbook, or another ActionScript guidebook for this purpose. This guide makes no attempt to document or compile a definitive reference to the existing properties or associated methods of the ActionScript classes in the standard Adobe API. More information on the Adobe Flash AS3 API can be found at: [Adobe Flash AS3 API]

Links to the Open Exhibits Core API


The Open Exhibits Core API documentation exhaustively covers the features, properties, methods, and events of all classes, elements, components, modules, and templates in the Open Exhibits Core framework. The Open Exhibits Core API can be found online at http://openexhibits.org/support/

Using Code Examples


For the sake of brevity, the code examples and code snippets (with the exception of chapter 3) demonstrate particular properties and methods and require additional code to run as stand-alone applications. In particular, they omit the import statements needed to locate the associated libraries in Open Exhibits Core. These imports are not spelled out in the explanations, but they can easily be found in the Open Exhibits Core API documentation online.
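For reference, the three import statements used by the complete example in chapter 3 look like this; other snippets in this guide assume similar imports from the id.core and gl.events packages:

import id.core.Application;    // application framework base class
import id.core.TouchSprite;    // touch-enabled Sprite
import gl.events.GestureEvent; // gesture event types and constants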

Links to Tutorials
In addition to the examples used in this guide and the API documentation, there is also a series of online tutorials that complement and expand on the concepts seen in this guide. These tutorials
can be found at: http://www.openexhibits.org/tutorials/

The Open Exhibits Core Framework


Open Exhibits Core is designed to be the ultimate tool for the development of multitouch and
gesture-interactive Flash applications. The Open Exhibits Core Framework has been constructed to
allow simple extensibility and expansion of the Gesture, Module and Template Libraries.

Adobe Flash was chosen because of its ability to create media-rich applications that can be
deployed cross-platform, published once, and run on a variety of devices. Since Open Exhibits Core is designed for multitouch developers, we have made sure that our framework plays well with others. This means that developers can easily integrate third-party APIs directly into apps published with Open Exhibits Core.

Additionally, Flash is available on over 98% of web-connected computers, and one key feature of Open Exhibits Core is the ability to build apps that can be embedded in web pages to provide a true multitouch experience on the web.

As the Open Exhibits Core framework grows, more tools will be released for developing new and as-yet-unimagined gestures. These tools will build on top of modules and templates, simplify workflow, and give developers even more ways to interact with applications.


2. The Multitouch Simulator


Introduction
Open Exhibits Core comes with a built-in multitouch simulator that can be accessed with any
multitouch application published with Open Exhibits Core. Multitouch devices are technically
defined as touch devices that can simultaneously register three or more touch points. Although
becoming less common, some touchscreens that label themselves multitouch-capable are in fact
only dual touch and so can only accept a maximum of two simultaneous points of touch. Not
all developers have access to screens with “true” multitouch when creating desktop applications.
So we’ve included a multitouch simulator that allows developers to test and deploy multitouch
applications in environments that do not directly support multitouch or even touch input.

The multitouch simulator allows for the creation of multiple touch input points using a single mouse
cursor. Once new points of interaction are created, they can be repositioned and moved dynamically
to simulate gesture actions that require multiple touch points to be triggered or require multiple
dynamic touch points.

Using the Multitouch Simulator


The multitouch simulator can be used for testing and developing multitouch applications but has
also been designed as a tool for universally deploying multitouch applications on the web. If a
multitouch application made with Open Exhibits Core is published as a SWF using Flash Player
10.1, it can be embedded directly into a web page and deployed online. When users visit the website,
they can fully interact with the multitouch app if they are using Windows 7 and have a multitouch
input device. If not, the Open Exhibits Core application will “automatically” allow the cursor
to be used to control the built-in multitouch simulator. In this way, fully interactive multitouch
applications can be developed and deployed on the web without accessibility or usability concerns.

Adding and Removing Touch Points


The simulator can be used to produce multiple simulated touch points. To add a touch point, simply
hold down the “Ctrl” key and left-click. This can be done repeatedly to create literally hundreds of
points if required. To remove a touch point, triple-click on the chosen point with the left mouse
button.

Performing Simulated Gesture Actions


To modify the position of an existing touch point, press down the left mouse button while the cursor is over the touch point. Then drag the touch point through the required action and release the mouse button when complete to simulate a multitouch gesture.


Simulator Mechanics
Touch point ownership describes the direct association of touch points with touch interactive
objects. This association is used to determine the exact influence of a cluster of touch points on a
touch object.

The multitouch simulator uses the same mechanisms to determine touch point ownership as other
non-simulated multitouch data providers. This ensures that all applications behave consistently
regardless of the data provider.

Touch Point Ownership


When a mouse down event is first detected over a touch object, the simulator immediately converts
the mouse event into a touch event and associates the event with the touch object it occurred on.
In this way the new touch point is directly associated with a touch object. Even if the touch point is
dragged off the touch object, the associated event is only cancelled once the touch point has been
removed from the simulator.

Touch Point Visibility


By default, touch points are visible in applications published with Open Exhibits Core. Each time
a touch point is added to the stage, the tracking system built into Open Exhibits Core assigns the
touch point a local cluster. Clusters are identified visibly on screen by the color of the touch points.

The tracking system checks the proximity of the touch points and how much time has passed
between the placement of each point. Touch point groups are selected that best fit conditions for
probable grouping.

Touch Point Shape


The first touch point of a cluster is called the primary anchor point. It is denoted by a blue dot with a square outline. Other touch points that are part of the cluster are shown as simple blue dots with a circular outline. Touch points that are not members of a cluster are denoted by a simple blue dot with a large outer white ring.

In some cases, you may notice a touch point changing shape from a circle to a square. This occurs when a point changes from a standard member of a cluster to the primary cluster point, which happens when the touch point previously designated as the primary anchor point is removed from the cluster.

Breaking Out of Clusters

Touch points that move sufficiently far apart are regrouped into separate clusters. This can be seen when two or more touch points are placed close together on stage at the same time and then slowly moved until they are more than a typical hand's width apart. When this occurs, the touch points visibly change from a single set of same-colored points into two groups of distinct colors.

FAQ
How do I add more touch points in the simulator?
To add a touch point simply hold down the “Ctrl” key and left-click on the mouse. This can be
done repeatedly to create literally hundreds of points if required.
How do I move a touch point to perform a gesture action?
To modify the position of the existing touch point, press down the left mouse button when the
cursor is over the touch point, drag the touch point to the required position, then release the mouse
button.
How do I stop a touch point from interacting with a touch object? I have tried dragging the point off, but it is still registering with the touch sprite.
If a touch point becomes associated with an unintended touch object, the best way to correct it is to
simply remove the touch point.
How do I remove a touch point from the simulator?
To remove a touch point, triple-click on the chosen point with the left mouse button.
How do I make the touch points visible?
Change the contents of the XML tag <Debug> to “true” in the application.xml file.
What is a primary cluster point?
A primary cluster point (also called the primary anchor point) is the first touch point in a cluster of touch points. It is denoted by a blue dot with a square outline.
Blue touch points appear in my application. How do I turn them off?
Change the contents of the XML tag <Debug> to “false” in the application.xml file.
Touch points appear in my application when I click with the mouse. How do I prevent this?
Change the contents of the XML tag <Degredation> to “never” in the application.xml file.
I want to allow only mouse events to trigger touch points using the simulator when not using a
touch screen. How do I set this?
Change the contents of the XML tag <Degredation> to “always” in the application.xml file.
How do I make sure that both multitouch screens and mouse events will work in my applications?
Change the contents of the XML tag <Degredation> to “auto” in the application.xml file.
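As a sketch, these settings are plain tag values in application.xml; the exact nesting of the tags is an assumption here, so follow the structure of the application.xml file shipped with Open Exhibits Core:

<!-- Hypothetical excerpt: tag placement follows the shipped application.xml. -->
<Debug>true</Debug>
<Degredation>auto</Degredation>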
Can I use MouseEvents in my Open Exhibits Core application?
Yes, MouseEvents can be used as they normally would be in an application. However, for the simulator to handle mouse events correctly, degradation must be set to “never” or “auto” in the application.xml file.
What is a “Gesture Action”?

Gesture Actions are physical actions that describe how to create and manipulate the multiple touch
points used to successfully trigger and control a gesture event in Open Exhibits Core.
How do I place additional points in the simulator?
Double-click with the left mouse button to add the first point, then hold down the “Ctrl” key and double-click to add another touch point. Repeat this for as many points as you require. To move a point, simply click on it and drag it to the desired position. Note: when a touch point is down on a touch object, it is automatically associated with that touch object and will trigger touch events accordingly. To dissociate a touch point from a touch object, the touch point must be removed.
How do I remove points in the simulator?
Place the cursor directly over the touch point you wish to remove, then left-click three times in
quick succession to remove the point from the simulator.


3. Building a Simple Multitouch Application


Your First Multitouch Application
Creating your first multitouch application in Open Exhibits Core is a relatively simple process. Once you have installed Open Exhibits Core, begin by opening a new Flash .fla file and a new ActionScript (.as) file. Name the ActionScript file “Main.as” and set the document class in your .fla file to “Main”.

Importing Classes
Importing the “Application” package instantiates libraries and builds a framework of control
structures for managing multitouch events and methods for object tracking within Flash.

import id.core.Application;

In this example, we are going to be using multitouch gestures, so we will need to import the
GestureEvent class into our file. As with importing other packages, importing the GestureEvent
package provides a reference to all available touch and gesture analysis modules via their class name.

import gl.events.GestureEvent;

In this example, we will be rendering an image on stage. To do this, we import the TouchSprite class
libraries. TouchSprite directly extends the Sprite class, which is a display-level class. This means that
TouchSprite inherits all the properties of Sprite and can be easily handled as a display object.

import id.core.TouchSprite;

Creating a Custom Class


In order to build our multitouch application, we are going to create what is essentially a custom class
in Flash. We begin this process by extending the “Application” class. This sets up our new class to
automatically inherit a multitouch framework that will manage touch events behind the scenes.

public class Main extends Application {


public function Main() {

Creating a New Instance of TouchSprite


The next step is to create a TouchSprite object. This ensures that multitouch events are tracked
correctly through all display levels of our applications. For multitouch events to work correctly, we
must make sure that the parents of any touchable objects are also touch objects. We can use either
TouchMovieClip() or TouchSprite() as the container for any object we create. Since we won’t be
using any timeline methods in this example, we can use a TouchSprite object.

The following code constructs a new instance of a TouchSprite, draws a simple square using the Flash drawing API, then adds the object to the display list and positions it on stage.

var square:TouchSprite = new TouchSprite();


square.graphics.lineStyle(3,0xFF0000);
square.graphics.beginFill(0x0000FF);
square.graphics.drawRect(0,0,100,100);
square.graphics.endFill();
addChild(square);
square.x = 100;
square.y = 100;

To make sure that all touch points placed over the container object are explicitly associated with that
object, we need to set the “blobContainerEnabled” property to true.

square.blobContainerEnabled = true;

Adding GestureEvent Listeners


Next we add event listeners to the TouchSprite so that any GestureEvents that occur on the
object can be responded to. The following code adds a listener for GESTURE_ROTATE and
GESTURE_SCALE.

square.addEventListener(GestureEvent.GESTURE_ROTATE, gestureRotateHandler);
square.addEventListener(GestureEvent.GESTURE_SCALE, gestureScaleHandler);

Handling GestureEvents
When a rotate GestureEvent is detected on the TouchSprite object, it calls the gestureRotateHandler() function. Every time a GESTURE_ROTATE event is dispatched from the TouchSprite, the event carries a value that measures the calculated change in orientation (rotation) of the cluster of touch points on the touch object. This value is used to incrementally change the angle of orientation of “square”, adding the change to the previous value each time the event is detected, which rotates the square.

private function gestureRotateHandler(e:GestureEvent):void {


e.target.rotation += e.value;

}

In addition to rotate, each time the “scale” gesture is detected on our shape, a GESTURE_SCALE event is fired. This is caught by the gesture listener we placed on the touch object, which calls the function gestureScaleHandler(). Each time this occurs, a value for the change in scale is taken from the event and used to incrementally change the scale of the target (in this case the “square”).

private function gestureScaleHandler(e:GestureEvent):void {


e.target.scaleX += e.value;
e.target.scaleY += e.value;
}

Note: The center of scale and rotation is dynamically set as the gestures are performed. If a scale
gesture is performed such that the midpoint between the touch points is on the edge of the image,
the image will expand away from that point (if the touch points are moving away from each other).

Summary
This simple multitouch-enabled application allows the direct gesture manipulation of a display
object. The square shape can be independently rotated and scaled using multitouch gesture actions.
The GESTURE_ROTATE and GESTURE_SCALE gestures can be performed concurrently so
that the shape is simultaneously rotated and scaled in a single fluid motion.

Complete Code
package{
import id.core.Application;
import gl.events.GestureEvent;
import id.core.TouchSprite;

public class Main extends Application {


public function Main() {

var square:TouchSprite = new TouchSprite();


square.graphics.lineStyle(3,0xFF0000);
square.graphics.beginFill(0x0000FF);
square.graphics.drawRect(0,0,100,100);
square.graphics.endFill();
addChild(square);

square.x = 100;
square.y = 100;

square.blobContainerEnabled = true;
square.addEventListener(GestureEvent.GESTURE_ROTATE, gestureRotateHandler);
square.addEventListener(GestureEvent.GESTURE_SCALE, gestureScaleHandler);
}

private function gestureRotateHandler(e:GestureEvent):void {


e.target.rotation += e.value;
}
private function gestureScaleHandler(e:GestureEvent):void {
e.target.scaleX += e.value;
e.target.scaleY += e.value;
}
}
}

Testing Your Application


There are various ways to test a multitouch application. If you have a multitouch input device,
events can be directly streamed to your Flash application via TUIO. If you do not have a device that
can output multitouch events, you can use the built-in simulator in Open Exhibits Core.

To test your app, go to Control > Test Movie in the main menu or press “Ctrl + Enter” on the keyboard.

Note: To test these tutorials on a computer without a touchscreen display, pressing “Shift”
or “Control” while clicking on an object with the mouse will create additional touch points within
the simulator. To remove touch points, simply triple-click them.

Publishing Your Application for Windows 7


You can publish your application as a self-contained executable by following these steps:

1. In your open .fla file, go to File > Publish Settings.


2. In the Formats tab, select “Windows Projector”, then press the “Publish” button.

To run the multitouch application, simply double-click the executable. Note: To ensure application settings are correct for Windows 7, open application.xml, locate the
<TouchCore> tag, then set the <InputProvider> to “Native”.


4. The TouchSprite Class


The TouchSprite Class
A TouchSprite is very similar to a Sprite object. It supports the Flash drawing API and user
interaction events. The main difference is that TouchSprites can listen for TouchEvents and
GestureEvents. The TouchSprite can be considered one of the basic display list building blocks
in Open Exhibits Core. TouchSprites can contain children in the form of Sprites or other
TouchSprites.

TouchSprite Inheritance
The TouchSprite class directly extends the TouchObjectContainer class.

TouchSprite > TouchObjectContainer > TouchObject > IDSprite > Sprite

Since TouchSprite extends the TouchObjectContainer class, it can contain other display objects
such as Shapes, Bitmaps and Sprites along with MovieClips or even other TouchSprite or
TouchMovieClip instances.

Creating a New TouchSprite Instance


In order to create a new instance of a TouchSprite, a constructor is used. The following code creates
a new TouchSprite called “box”.

var box:TouchSprite = new TouchSprite();

Adding a TouchSprite to the Stage


In order for TouchSprite instances to be rendered, they must first be added to the display list. This is
done using the addChild() method.

addChild(box);

Setting TouchSprite Properties


The TouchSprite class inherits properties common to its base class Sprite. To position a TouchSprite instance on stage, simply set the x and y properties. For example, the following ActionScript code places the TouchSprite “box” 50 pixels along the x axis and 150 pixels along the y axis.

box.x = 50;
box.y = 150;

Using TouchSprite Methods


TouchSprite has methods similar to Sprite's, but designed specifically to support full touch and gesture interaction.

In Flash Player 10.1 and AIR 2, Adobe introduced limited touch and gesture support in Flash. These existing methods have been integrated into Open Exhibits Core so that they can also be used with TouchSprite objects. For example, the startTouchDrag() method implements a simple drag behavior that can be used to move a TouchSprite at run-time.

box.startTouchDrag(-1);

Note: The TouchSprite “box” will remain draggable until dragging is explicitly stopped by calling the stopTouchDrag() method. The -1 parameter sets the touchPointID to null, which allows Open Exhibits Core to take over the touch point validation cycles.
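As a minimal sketch of pairing the two methods, dragging could be started and stopped from touch listeners. The TOUCH_UP constant and the -1 parameter to stopTouchDrag() are assumptions here, made by analogy with the TOUCH_DOWN, TOUCH_MOVE, and startTouchDrag(-1) usage shown in this guide:

box.addEventListener(TouchEvent.TOUCH_DOWN, startDragHandler);
box.addEventListener(TouchEvent.TOUCH_UP, stopDragHandler); // TOUCH_UP assumed

private function startDragHandler(e:TouchEvent):void {
    box.startTouchDrag(-1); // begin following the touch point
}
private function stopDragHandler(e:TouchEvent):void {
    box.stopTouchDrag(-1); // -1 assumed to mirror the startTouchDrag call
}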

In addition to the methods inherited from Sprite, the TouchSprite class has built-in touch and gesture methods that can be used to capture TouchEvents and GestureEvents. For example, the following code shows how to enable touch point containment and how to add a simple TOUCH_MOVE event listener to the TouchSprite “box”.

box.blobContainerEnabled = true;
box.addEventListener(TouchEvent.TOUCH_MOVE, touchMoveHandler);

This enables the TouchSprite “box” to capture and retain ownership of any touch points created over any part of the box object. Each time one of these touch points moves, a TOUCH_MOVE event is triggered and the function touchMoveHandler() is called. This handler can be used to change the box's properties or invoke any other desired change.
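A minimal handler sketch, assuming the event exposes stage coordinates through stageX and stageY as the standard Flash TouchEvent does:

private function touchMoveHandler(e:TouchEvent):void {
    // Re-center the box on the moving touch point (stageX/stageY assumed).
    e.target.x = e.stageX - e.target.width / 2;
    e.target.y = e.stageY - e.target.height / 2;
}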

Creating Custom Classes that Extend TouchSprite


Just as with other display object classes in AS3, TouchSprite can be used as a base class to create a custom display class. For example, the following code creates a custom class called “Ring” by extending the TouchSprite class.

class Ring extends TouchSprite {

private var radOut:int = 150;
private var radIn:int = 100;

function Ring() {
drawRing();
}
public function drawRing():void {
graphics.lineStyle(3,0xFF0000);
graphics.beginFill(0x0000FF);
graphics.drawCircle(0,0,radOut);
graphics.drawCircle(0,0,radIn);
graphics.endFill();
}
}

The benefit of sub-classing TouchSprite is that any custom class that extends TouchSprite
automatically inherits the multitouch interactivity associated with the class along with the properties
and methods associated with the Sprite class. It is especially useful for display objects that primarily
act as display object containers and do not require timeline functionality.

Extending TouchSprite in this way is an efficient way to build new classes and provides a simple and elegant means of integrating touch functionality into pre-existing custom classes.
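For example, an instance of the custom class can be created, positioned, and given gesture listeners exactly like the TouchSprite in chapter 3 (gestureRotateHandler() here refers to the handler defined there):

var ring:Ring = new Ring();
addChild(ring);
ring.x = 200;
ring.y = 200;
ring.blobContainerEnabled = true; // capture touch points over the ring
ring.addEventListener(GestureEvent.GESTURE_ROTATE, gestureRotateHandler);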

Creating a TouchSprite from the Library


Right-click on the Sprite in the Library and select “Properties” from the drop-down menu. Set the class name to “myLibraryTouchSprite”. Then set the base class to “id.core.TouchSprite” and press the “OK” button.

Creating Dynamic Instances from the Library


When a TouchSprite object exists in the Library, it is simply a matter of using a constructor to
dynamically create an instance of the object using ActionScript.

var oval = new myLibraryTouchSprite();


addChild(oval);

Nesting TouchSprites
TouchSprites can be nested in much the same way that traditional Sprites can be. As with Sprites and MouseEvents, TouchEvents and GestureEvents bubble up from nested TouchSprites, allowing
listeners to be attached either to parents that contain the TouchSprite or to the TouchSprite object directly.
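As a small sketch of this bubbling behavior (import statements are omitted here, as with the other snippets in this guide, and touchDownHandler() is a handler like those shown earlier):

var parentSprite:TouchSprite = new TouchSprite();
var childSprite:TouchSprite = new TouchSprite();
childSprite.graphics.beginFill(0x00FF00);
childSprite.graphics.drawRect(0, 0, 50, 50);
childSprite.graphics.endFill();
parentSprite.addChild(childSprite);
addChild(parentSprite);

// A single listener on the parent also receives TOUCH_DOWN events
// that bubble up from the nested child.
parentSprite.addEventListener(TouchEvent.TOUCH_DOWN, touchDownHandler);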

Tips for Using the TouchSprite Class in Application Development


As a general rule, TouchSprites should only be used when you need to construct a display object
that can contain other display objects but does not require timeline functionality. Otherwise, we
recommend that you use the TouchObject class as it makes available most of the same properties
and methods with less memory and processor overhead.

Additionally, it is considered good practice to remove objects from the display list and nullify all active references to TouchSprite instances when they are no longer in use. For example, the following code nullifies the TouchSprite “box”:

box = null;

This prepares the object for garbage collection, which releases the resources the instance was using. Eventually, the built-in mechanisms in Flash should do this automatically once a DisplayObject has been removed from the display list. However, it becomes increasingly important to apply this practice systematically when developing multitouch-enabled applications, as multi-user applications commonly create numerous nested touch objects.
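A fuller teardown, using standard display-list calls and the TOUCH_MOVE listener added to “box” earlier in this chapter, might look like this:

box.removeEventListener(TouchEvent.TOUCH_MOVE, touchMoveHandler); // detach listeners first
removeChild(box); // take the object off the display list
box = null;       // release the reference for garbage collection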

FAQ
What is the difference between a TouchSprite and a TouchMovieClip?
A TouchSprite has all the touch and gesture interactivity of a TouchMovieClip but does not have a
timeline. The TouchSprite can be thought of as having a single frame which is used to hold nested
touch display objects.
What is a TouchObject?
The TouchObject class is the base class for all touch-enabled display objects in Open Exhibits Core.
What is the difference between a TouchObject and a TouchObjectContainer?
A TouchObjectContainer can contain other TouchObjects and TouchObjectContainers. This enables it to act as a layout tool, as other display objects can be nested inside it.
What is the difference between startTouchDrag() and GESTURE_DRAG?
startTouchDrag() is a method that couples a display object to a touch point, allowing the object to move as the touch point moves, which creates the effect of dragging. The GESTURE_DRAG event is triggered when the motion of a touch point cluster meets specific criteria. The information gathered from the motion of the touch points can be used to update the position of the touch object to create the effect of dragging, but it can also be used to modify any other chosen property of the touch object. The drag gesture allows more explicit selection criteria before initiating a drag, which aligns drag gesture behavior with the other gestures in the Open
Exhibits Core library.
Can I use startTouchDrag with Flash Player 10.1?
The startTouchDrag() method has been deprecated in Open Exhibits Core for Flash Player 10.1. It can still be used as a fully compatible method for dragging objects around the stage area. However, for more consistent, refined gesture control of dragging, we suggest using a GESTURE_DRAG listener and updating the x and y positions of touch objects.
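A sketch of that pattern follows; note that the per-axis delta properties (dx, dy) are illustrative assumptions, since the rotate and scale examples in chapter 3 only show the scalar e.value, so check the GestureEvent entry in the API documentation for the actual property names:

box.addEventListener(GestureEvent.GESTURE_DRAG, gestureDragHandler);

private function gestureDragHandler(e:GestureEvent):void {
    // dx/dy are hypothetical property names for the drag deltas.
    e.target.x += e.dx;
    e.target.y += e.dy;
}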
If Flash Player 10.1 allows multitouch and gestures in the Sprite class, why use the TouchSprite class?
The current Flash implementation has very limited touch and gesture support. In most instances, touch and gesture events cannot happen at the same time, and only one gesture event can be triggered on a sprite at any given instant. The TouchSprite class gives developers full discretionary control over which, and how many, multitouch or gesture events occur at any given moment. GestureEvents can occur simultaneously with each other and with TouchEvents. In addition to the over 200 existing gesture events, custom gestures can be added to the gesture library and compiled into a custom application or directly into the Open Exhibits Core framework. The TouchSprite class also has direct support for multitouch data streams other than Windows 7, such as the Tangible User Interface Object (TUIO) protocol.
What is the difference between touchPointID and tactualObjectID?
touchPointID is used in Flash Player 10.1 to explicitly identify touch points, as each point of touch has a unique touchPointID. The tactualObjectID is a similar identifier used in Open Exhibits Core to identify points of touch. Apart from being two independent systems, the primary difference between the two ID types is that tactualObjectID can be used with alternative data providers such as TUIO.
Can I use the startTouchDrag method with Flash Player 10?
Yes. Unlike Flash Player 10.1, the startTouchDrag() method in Flash Player 10 does not require a touchPointID override parameter.


5. The TouchMovieClip Class


Understanding the TouchMovieClip Class
A TouchMovieClip object is very similar to a MovieClip object. The main difference is that
TouchMovieClip can listen for TouchEvents and GestureEvents. The TouchMovieClip can be
considered one of the basic display list building blocks in Open Exhibits Core along with the
TouchSprite object. TouchMovieClips differ from TouchSprites in that they contain timeline
functionality.

MovieClip Inheritance
The following shows how MovieClip extends and inherits properties from the Sprite class, which in turn inherits all its properties from DisplayObjectContainer.

MovieClip > Sprite > DisplayObjectContainer > InteractiveObject > DisplayObject > EventDispatcher > Object

This pattern of inheritance continues all the way to the Object class. Each time a class is extended, it
typically acquires additional properties and methods. In this way you can see that MovieClip is very
similar to Sprite but contains additional methods that can handle multiple frames.

TouchMovieClip Inheritance
The TouchMovieClip class directly extends the IDMovieClip class.

TouchMovieClip > IDMovieClip > MovieClip

The IDMovieClip class extends MovieClip, which in turn extends Sprite, a descendant of the DisplayObjectContainer class. Display object containers can, among other things, contain other display objects, which means that objects such as Shapes, Bitmaps, Sprites, TextFields, MovieClips, and TouchSprites can be nested inside a TouchMovieClip object, along with other TouchMovieClips.

TouchMovieClip is also a descendant of DisplayObject, so it has many of the same properties and methods seen in other display objects such as TouchSprite, Sprite, and Shape. These include x and y positioning, rotation, and scaleX and scaleY. To reduce redundancy, the properties and methods inherited by the TouchMovieClip class are not all listed here. However, it is important to understand that, through inheritance, the TouchMovieClip class possesses all the properties and methods of its base class MovieClip.


Creating a New TouchMovieClip Instance


The TouchMovieClip class creates new instances in the same way as other display objects. For example, to create a new instance of a TouchMovieClip, simply call the TouchMovieClip constructor:

var ball:TouchMovieClip = new TouchMovieClip();

This creates a new instance called “ball” which has all the properties and methods of a
TouchMovieClip. This essentially means that it can be treated as a multi-touchable MovieClip.

Adding a TouchMovieClip to the Stage


In order to place a TouchMovieClip instance on stage, it must be added to the display list. To do
this, we use the addChild() method.

addChild(ball);

Setting TouchMovieClip Properties


Since TouchMovieClip is a display object, it has properties in common with other display objects
such as Shape and Sprite. To position a TouchMovieClip on stage, simply set the x and y properties.
For example, the following ActionScript code places the TouchMovieClip “ball” 100 pixels along the x axis and 200 pixels along the y axis.

ball.x = 100;
ball.y = 200;

Using TouchMovieClip Methods


The TouchMovieClip class inherits simple touch methods from the TouchObject class such as
startTouchDrag() and stopTouchDrag(). The startTouchDrag() method allows the TouchMovieClip
object “ball” to be dynamically moved to a new position on stage during run-time.

ball.startTouchDrag(-1);

In addition to the methods inherited from TouchObject, the TouchMovieClip class has built-in touch and gesture methods that can be used to capture TouchEvents and GestureEvents. For example, the following code shows how to enable touch point containment and add a simple TOUCH_DOWN event listener to the TouchMovieClip “ball”.

ball.blobContainerEnabled = true;
ball.addEventListener(TouchEvent.TOUCH_DOWN, touchDownHandler);

When a touch point is placed on stage on top of the ball object, the touch point is captured by the TouchMovieClip and a single TOUCH_DOWN event is fired. This calls the function touchDownHandler(), which can be used to change the ball's properties or use other methods to invoke a desired change.
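A minimal handler sketch; the visual response chosen here is arbitrary:

private function touchDownHandler(e:TouchEvent):void {
    // Dim the clip while a touch point is down on it.
    e.target.alpha = 0.5;
}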

Creating Custom Classes That Extend TouchMovieClip


Just as with other display object classes in AS3, TouchMovieClip can be used as a base class to create a custom display class. For example, the following code creates a custom class called “Rectangle” by extending the TouchMovieClip class.

class Rectangle extends TouchMovieClip {

private var recWidth:int = 100;


private var recHeight:int = 150;

function Rectangle() {
drawRectangle();
}
public function drawRectangle():void {
graphics.lineStyle(3,0xFF0000);
graphics.beginFill(0x0000FF);
graphics.drawRect(0,0,recWidth,recHeight);
graphics.endFill();
}
}

The benefit of sub-classing TouchMovieClip is that any custom class that extends TouchMovieClip automatically inherits the multitouch interactivity associated with the class, along with the properties and methods associated with the MovieClip class. Extending the TouchMovieClip class provides a simple and elegant method of integrating touch functionality into custom classes that have been developed for existing applications.

Creating A TouchMovieClip From The Library


Right-click on the MovieClip in the Library. Select “Properties” from the drop-down menu. Set the
class name to “myLibraryTouchMovieClip”. Then set the base class to “id.core.TouchMovieClip”
and press the “OK” button.

Creating Dynamic Instances From The Library


When a TouchMovieClip object exists in the Library, a constructor can be used to dynamically
create an instance of the object using ActionScript.

var box = new myLibraryTouchMovieClip();


addChild(box);

This creates a new instance of “myLibraryTouchMovieClip” called “box” and places the object on
stage. The object can now be manipulated on stage in the same way as other dynamically-generated
display objects.

Accessing The TouchMovieClip Timeline


The timeline of a TouchMovieClip can be accessed using the gotoAndStop() method and the currentFrame property. The following moves the playhead forward five frames and stops:

ball.gotoAndStop(ball.currentFrame + 5);

The TouchMovieClip has access to playhead and timeline methods in the same manner as a
traditional MovieClip object.
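For example, a TOUCH_DOWN listener could toggle the clip's playback, combining touch input with timeline control. This is a sketch; the playing flag is a local bookkeeping variable introduced here, since this Flash version does not expose a playing state on MovieClip:

private var playing:Boolean = true;

ball.addEventListener(TouchEvent.TOUCH_DOWN, togglePlayHandler);

private function togglePlayHandler(e:TouchEvent):void {
    var clip:TouchMovieClip = e.target as TouchMovieClip;
    if (playing) clip.stop(); else clip.play();
    playing = !playing;
}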

Note: The playhead of a TouchMovieClip instance is automatically stopped if any of its display properties, such as filters, height, or scaleX, are modified during run-time. However, the playhead of a TouchMovieClip object that is nested inside another TouchMovieClip is not affected by modification of its parent's display properties.

Tips For Using The TouchMovieClip Class In Application Development


As a general rule, TouchMovieClip should only be used when you need timeline functionality.
Otherwise we recommend that you use the TouchSprite class as it makes available most of the same
properties and methods with less memory and processor overhead.

Additionally it is considered good practice to remove objects from the display list and nullify all
active references to TouchMovieClip instances when they are no longer in use. For example:

ball = null;

This prepares the object for garbage collection, which releases the resources the instance was using. Eventually, the built-in mechanisms in Flash should do this automatically once a DisplayObject has been removed from the display list. However, it becomes increasingly important to apply this practice systematically when developing multitouch-enabled applications, as the creation and destruction of numerous nested touch objects is common in multi-user applications.

FAQ
Can you embed a video into a TouchMovieClip?
There are a number of ways to embed a video into a MovieClip in Flash. Short video clips can be nested inside MovieClips as child MovieClips. The FLVPlayback component is the better approach, since it allows more robust resource management. This component can be nested inside a Sprite, MovieClip, TouchSprite, or TouchMovieClip. However, to ensure that touch events are correctly handled and video control buttons behave properly, we recommend using the VideoViewer module provided with Open Exhibits Core.
What is Inheritance?
Inheritance is a form of code reuse that allows new classes (or subclasses) to be based on existing
classes (base class). One of the key benefits of inheritance is that the subclass can acquire the
properties and methods of a base class.
How is TouchMovieClip different from MovieClip?
The TouchMovieClip object can capture and manage multitouch and gesture interactions.
How is TouchMovieClip Inheritance different from MovieClip?
The MovieClip class directly extends the Sprite class whereas TouchMovieClip extends
IDMovieClip, which is a descendant of MovieClip.


6. Events In Open Exhibits Core


Events In Flash
In Flash, the event class is used as the base class to create event objects. These are passed as
parameters to event listeners when an event occurs and carry basic information about an event such
as the event target, whether the event can be cancelled, or whether the event bubbles.

Events in Flash, such as the MouseEvent, use display list event flow to pass event objects to a variety of display objects. This can be thought of as a local event broadcast that occurs after the event target has been selected, with the broadcast area limited to the display list path defined by the event target object.

The event flow in Flash typically consists of three phases: capture, target and bubble. The event flow
actively uses the display list (generated from the display hierarchy) to move through relevant groups
of display objects on stage.

A Typical Display Object Hierarchy Segment

stage > root (document class) > DisplayObjectContainer (parent) > DisplayObject (child)

When the Flash player detects an event on a display object, the display object creates an event
object, sets the event type and the event target, then dispatches it. The event object is then passed
through the display list. Starting at the stage, it moves down to the event target (capture phase). As
the event is passed to each display object, that object is checked for registered event listeners, and
the event object is delivered to each listener whose event type matches. The display objects that
contain matching event listeners can respond to the event by calling a specific handler function.
Once the event object reaches the predefined event target display object, it passes off a copy of
itself (target phase), then continues back up the display list, once again delivering the event object
to display objects with matching listeners (bubbling phase).

Event listeners can be added to the event target or ANY display object that is part of the event
flow segment and made to respond to a dispatched event. One of the most important concepts
about events in AS3 is the idea that once an event is dispatched, it flows through the display list
and can be handled by multiple event listeners. The event flow ALWAYS starts at the top of the
display list (stage), goes down to the target object (event dispatcher), then flows back up the list to
the top (stage) regardless of where on the display list the event is dispatched from. During this trip,
the event object can be used to trigger multiple listeners on multiple objects by triggering them on
the way down and on the way up the display list (if capture and bubbling are enabled). This allows
developers to broadcast a single event to entire segments of the display list. The choice of whether
to use capturing, bubbling or both depends on whether you want the event to be passed down to
children, passed up from children, or both.
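
For example, the third parameter of addEventListener() (useCapture) selects which leg of this trip
a listener responds to. The handler names below are illustrative:

// Triggered on the way DOWN the display list (capture phase only).
parentSprite.addEventListener(MouseEvent.CLICK, captureHandler, true);

// Triggered at the target and on the way UP (target and bubbling phases, the default).
parentSprite.addEventListener(MouseEvent.CLICK, bubbleHandler, false);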

Using Event Targeting


Flash determines where the event originates before the event is dispatched and made to traverse the
display hierarchy looking for listeners to trigger. This information can be used to dynamically target
display objects.

A single listener can be registered on the parent of nested objects in the display list and made to
act dynamically on an event target. For example, the following code places a single listener on the
parent and allows ANY child to be acted upon by the mouseClickHandler() function when it is
clicked. This effectively “targets” the object that dispatched the event.

parentSprite.addEventListener(MouseEvent.CLICK, mouseClickHandler);

function mouseClickHandler(event:MouseEvent):void {
    event.target.scaleX += 0.05;
    event.target.scaleY += 0.05;
}

This code can modify the properties of the children nested in the display object “parentSprite”.
Each time ANY of the child display objects contained in parentSprite is clicked on stage, the
object grows by 0.05 on each axis (5% of its original size). This can be useful when dealing with
multiple nested display objects, as it requires only one listener and so reduces code repetition.

Stopping Event Flow


During any phase, the event can be prevented from traversing the display list. This effectively stops
the event flow. There are two main methods for achieving this in Flash:

stopPropagation()
The stopPropagation() method prevents the event from continuing along the event flow, but only
after the other event listeners on the current object have been allowed to execute.

stopImmediatePropagation()
The stopImmediatePropagation() method prevents the event from flowing immediately, skipping
any remaining listeners on the current object.

Both of these methods can be used to halt the event flow at a specific display object. This prevents
the event from moving any further along the display list, stopping the capture or bubbling phase
and therefore preventing the triggering of additional listeners.
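
For example, a handler can halt the flow by calling the method on the event object it receives (the
handler name is illustrative):

function clickHandler(event:MouseEvent):void {
    // Remaining listeners on this object still execute, but the event
    // stops traveling any further along the display list.
    event.stopPropagation();
}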

Understanding Event Propagation in Open Exhibits Core


In Open Exhibits Core, there are two event classes in addition to the standard Flash events: the
TouchEvent and the GestureEvent. These two classes behave very differently from each other but
are both descendants of the Flash Event class.

The TouchEvent Class


The TouchEvent class is used as a base class to create TouchEvent objects which are passed as
parameters to TouchEvent listeners when a TouchEvent occurs.

TouchEvent Inheritance
The TouchEvent class directly extends the Event class and is composed with various additional
features designed for direct touch management and integration.

TouchEvent > Event

The TouchEvent object carries standard event information such as the event target, the event type,
whether the event is cancelable and whether the event bubbles. The TouchEvent object also carries
additional information that directly relates to the touch point, such as the touch point position, the
touchPointID and whether the point is a primaryTouchPoint. Additionally, accommodations have
been made in the TouchEvent object to include information about the size of the touch point and
the pressure associated with it. However, these values must be made available by the multitouch
data provider in order to appear in the TouchEvent object.
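
As a sketch, a handler can read this information from the event object it receives. The property
names below match the standard Flash TouchEvent; it is an assumption here that the Open
Exhibits Core TouchEvent mirrors them:

function touchDownHandler(event:TouchEvent):void {
    trace("touch point ID: " + event.touchPointID);
    trace("primary touch point: " + event.isPrimaryTouchPoint);
    trace("stage position: " + event.stageX + ", " + event.stageY);
}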

Understanding TouchEvent Dispatch


Open Exhibits Core has its own system for creating TouchEvents, determining the target of an
event and dispatching TouchEvents, which is separate from the Flash player mechanism. However,
once the TouchEvent is dispatched, it behaves just like a MouseEvent object and uses the same
built-in Flash event flow behaviors.

TouchEvent Flow
Once a TouchEvent is dispatched, it behaves in the same way as other Events in Flash. Starting at
the stage, the TouchEvent goes through the capture phase, the target phase, the bubbling phase and
then returns to the stage. As with other events in Flash, as the TouchEvent object is passed from
one display object to another down and up the display list, it checks for matching event listeners. If
the TouchEvent finds multiple matching listeners, then the event will trigger multiple event handlers
on multiple display objects. Essentially, the TouchEvent object behaves just as the MouseEvent
object does.

Stopping TouchEvent Flow


The TouchEvent class provides access to many of the same methods used in the MouseEvent
class. As with the MouseEvent class, methods for the TouchEvent class can be used to affect how
the TouchEvent object behaves. For example, the stopPropagation() method prevents any further
processing of the TouchEvent in the event flow.

TouchEvent propagation can be stopped by display objects that encounter the event along the
display list. For example, if the stopEventPropagation() method is called on a display object ahead
of the target display object in the display list, it will prevent the touch event from continuing to be
passed down the display list and prevent the event object from reaching the target object, so the
event will never bubble back up.

mytsprite.stopEventPropagation();

The exact consequence of stopping the event flow at mytsprite depends on the location of
TouchEvent listeners along the display list. This is a common feature of the Flash event flow model
and is designed to provide flexibility when dealing with nested display objects.

Working With TouchEvents Listeners


Listening for TouchEvents with Open Exhibits Core is very similar to listening for MouseEvents. In
order to register a TouchEvent listener with a display object, it must be added to the touch object
using the addEventListener() method.

Adding TouchEvent Listeners


In the following example, a TouchEvent Listener is directly added to a touch object using the
addEventListener() method.

mytsprite.addEventListener(TouchEvent.TOUCH_DOWN, touchDownHandler);

A listener is added to the touchsprite “mytsprite” which listens for the TOUCH_DOWN
TouchEvent and calls the function “touchDownHandler()” when triggered.

Removing TouchEvent Listeners


To remove a TouchEvent listener, simply use the removeEventListener() method:

mytsprite.removeEventListener(TouchEvent.TOUCH_DOWN, touchDownHandler);

This un-registers the event listener and associated handler function. This method is particularly
useful when freeing up resources that are no longer being used.

Adding Multiple Listeners


As with other display objects in Flash, multiple TouchEvent listeners can be registered to the same
TouchDisplayObject. For example, the following code adds listeners for the TOUCH_DOWN,
TOUCH_MOVE and TOUCH_UP events to the TouchSprite “mytsprite”:

mytsprite.addEventListener(TouchEvent.TOUCH_DOWN, touchDownHandler);
mytsprite.addEventListener(TouchEvent.TOUCH_MOVE, touchMoveHandler);
mytsprite.addEventListener(TouchEvent.TOUCH_UP, touchUpHandler);

The touchsprite “mytsprite” can now independently call different event handlers for each of the
three TouchEvents. This means that if the TOUCH_DOWN, TOUCH_MOVE and TOUCH_UP
events are passed to “mytsprite” at the same time, it can call the functions touchDownHandler,
touchMoveHandler and touchUpHandler in a single event cycle. If a single TOUCH_DOWN event
is passed to “mytsprite,” only the touchDownHandler will be triggered.

Additionally, multiple listeners of the same event type can be registered on a single
TouchDisplayObject. The following code independently calls two different handler functions when
a single TOUCH_DOWN event is passed to “mytsprite”:

mytsprite.addEventListener(TouchEvent.TOUCH_DOWN, touchDownHandlerA);
mytsprite.addEventListener(TouchEvent.TOUCH_DOWN, touchDownHandlerB);

Using this technique, the two functions touchDownHandlerA() and touchDownHandlerB() are
called when a single TouchEvent is passed to the TouchSprite object.

Working with TouchEvents and Nested TouchObjects


Nesting TouchEvents is very similar to nesting MouseEvents. If you require a TOUCH_MOVE
TouchEvent to be detected by a parent TouchDisplayObject, simply attach a listener to that object.

touchSpriteA.addChild(touchSpriteB);
addChild(touchSpriteA);
touchSpriteA.addEventListener(TouchEvent.TOUCH_MOVE, touchMoveHandlerA);

As the TouchEvent bubbles up the display list, it will trigger any handler associated with a matching
TouchEvent listener. In this way, moving a touch point over touchSpriteB (the child of
touchSpriteA) can trigger the touchMoveHandlerA() function on touchSpriteA.

The GestureEvent Class


The GestureEvent class is used as a base class to create GestureEvent objects, which are passed as
parameters to GestureEvent listeners when a GestureEvent occurs. GestureEvents are dynamic
objects which can contain values specific to the gesture and can indirectly determine and manage
processes that are performed on the container object.

GestureEvent Inheritance
The GestureEvent class directly extends the Event class and is composed with various additional
features designed for direct gesture management and integration.

GestureEvent > Event

The GestureEvent object carries standard event information such as the event target, the event
type, whether the event is cancelable and whether the event bubbles. The GestureEvent object also
carries additional information that directly relates to the gesture, such as the local point position
and the stage point position. Accommodations have been made in the GestureEvent object to
include information about the tactual objects that are used in the gesture action.

Note: The GestureEvent object does not directly carry information about TouchPointID.

GestureEvent Containment
One of the primary differences between GestureEvents and MouseEvents is the use of
containment. Enabling blob containment on a GestureObject forces the object to allow
GestureEvent processing.

mysprite.blobContainment = true;

Blob containment plays no direct role in the event flow or the Flash listener system.
GestureEvents do not traverse the display list and so do not use the Flash event flow mechanism.
However, containment does play a role in determining the GestureEvent target.

Note: Blob containment must be enabled on at least one object in the local display list path in order
for GestureEvents to be correctly processed.

GestureEvent Processing
When TouchObjects are placed on a displayObject with blob containment enabled, or when placed
on displayObjects that are nested within an object with blob containment enabled, those touch
points are directly processed by the container object. Processing includes basic analysis of the
touch points, the association of touch object clusters, and the direct use of gesture module
calculations, gesture caching and object transformations. It is important to note that object
transformations occur only on the object that is processing the GestureEvent, that is, the object
that acts as the blob container.

GestureEvent Targeting
GestureEvents behave completely differently from TouchEvents. Instead of using an automated
system of event flow, GestureEvents do not propagate through the display list; they are created and
processed by the blob container object. It is the relative placement of the blob container and touch
point target in the display list that ultimately determines where the GestureEvent is dispatched and
which object is defined as the GestureEvent target. GestureEvents are dispatched by the highest
level blob container in relation to the display object that contains the touch point.

GestureEvent Flow
In Open Exhibits Core, GestureEvents do not exhibit event flow behavior (there is no capture or
bubbling phase). The GestureEvent object is passed directly to the event target. If the GestureEvent
finds multiple matching listeners on a display object, then the event will trigger multiple event
handlers on that display object sequentially in a single cycle. In this manner the GestureEvent
object behaves much as an ENTER_FRAME event does.

Working with Gesture Event Listeners


Listening for GestureEvents with Open Exhibits Core is very similar to listening for
ENTER_FRAME events. In order to register a GestureEvent listener with a display object, it must
be added to the touch object using the addEventListener() method.

Adding GestureEvent Listeners


In the following example, a GestureEvent listener is added directly to a touch object using the
addEventListener() method.

mytsprite.addEventListener(GestureEvent.GESTURE_SCALE, gestureScaleHandler);

A listener is added to the touchsprite which listens for the GESTURE_SCALE GestureEvent and
calls the function gestureScaleHandler() when triggered.

Removing GestureEvent Listeners


To remove a GestureEvent listener, simply use the removeEventListener() method:

mytsprite.removeEventListener(GestureEvent.GESTURE_SCALE, gestureScaleHandler);

This un-registers the event listener and associated handler function. This method is particularly
useful when freeing up resources that are no longer being used.

Adding Multiple Listeners to a Single Touch Object


As with other display objects in Flash, multiple GestureEvent listeners can be registered to the same
TouchDisplayObject. For example, the following code adds listeners for the GESTURE_SCALE,
and GESTURE_ROTATE events to the TouchSprite “mytsprite”:

mytsprite.addEventListener(GestureEvent.GESTURE_SCALE, gestureScaleHandler);
mytsprite.addEventListener(GestureEvent.GESTURE_ROTATE, gestureRotateHandler);

The touchsprite “mytsprite” can now independently call different event handlers for each of
the two GestureEvents. This means that if the GESTURE_SCALE and GESTURE_ROTATE
events are passed to “mytsprite” at the same time, it can call the functions gestureScaleHandler
and gestureRotateHandler in a single event cycle. If a single GESTURE_ROTATE event is passed
to “mytsprite,” only the gestureRotateHandler will be triggered.

Additionally, multiple listeners of the same event type can be registered on a single TouchObject.
The following code shows how to independently call two different handler functions when a single
GESTURE_SCALE event is passed to “mytsprite”:

mytsprite.addEventListener(GestureEvent.GESTURE_SCALE, gestureScaleHandlerA);
mytsprite.addEventListener(GestureEvent.GESTURE_SCALE, gestureScaleHandlerB);

Using this technique, the two functions gestureScaleHandlerA() and gestureScaleHandlerB() are
called when a single GestureEvent is passed to the TouchSprite object.

Adding TouchEvent and GestureEvent Listeners


All TouchObjects are, by default, enabled to receive TouchEvents and GestureEvents. In
the following code, a TouchEvent listener and a GestureEvent listener are added to a single
TouchSprite object “mytsprite”:

mytsprite.addEventListener(TouchEvent.TOUCH_TAP, touchTapHandler);
mytsprite.addEventListener(GestureEvent.GESTURE_SCALE, gestureScaleHandler);

When a TOUCH_TAP event object is passed to “mytsprite,” it will call the touchTapHandler
function. If a GESTURE_SCALE event object is passed to “mytsprite,” it will call the
gestureScaleHandler function. Both events can independently target the “mytsprite” TouchObject
and trigger their handlers independently, so the listeners can be triggered concurrently if required.

Working with GestureEvents and Nested Display Objects


Because there is no event flow for Gestures, a GestureEvent can only ever trigger listeners on one
object, the object it targets (although multiple listeners on the same object can be used to trigger
multiple handlers as in the above example). Ensuring that GestureEvents target the correct object
requires an understanding of how GestureEvent targets are determined and dispatched.

GestureEvents are dispatched by the highest level blob container in relation to the display object
that contains the touch point (the touch point that triggered the GestureEvent). If you want to
successfully listen for GestureEvents on nested content, make sure the blob container is always the
parent of the object that owns the touch points created during the gesture action. Simply put, if you
want to trigger a gesture on a parent display object, make sure the parent is the blob container and
the gesture is performed on a child display object.

To use nested gesture behaviors effectively, you must use nested GestureEvent listeners and
employ structured, nested blob containers that follow the hierarchy of the required nested
behaviors.
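
A minimal sketch of this arrangement, using the blobContainment property described earlier (the
instance and handler names are illustrative):

var parentClip:TouchSprite = new TouchSprite();
var childClip:TouchSprite = new TouchSprite();

// The parent is the blob container, so gestures performed on the child
// are processed and dispatched by the parent.
parentClip.blobContainment = true;
parentClip.addChild(childClip);
addChild(parentClip);

parentClip.addEventListener(GestureEvent.GESTURE_ROTATE, gestureRotateHandler);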

FAQ
What is event propagation?
Event propagation is the transference of a single event object to multiple display objects, so that
every object along the event flow receives the event rather than only the display object in which
the event originated.
Why does Flash use event flow in its event model?
Simply put, if you click a display object, the event does not have to be limited to the display object
on which the event occurred but can be received by multiple objects in the event flow. The only
display objects that respond to the event are the ones that have listeners attached to them.
Therefore, using the event flow system allows a single event to trigger a response from multiple
display objects in the display list. This provides an automated mechanism by which to distribute
the event object to relevant local display objects.
Why use event capturing?
The event capture phase as used in event flow in Flash allows the event object to be passed to all
display objects in the display list branch from the stage down to the event target. This acts as a local
event broadcasting system.
Why use event targeting?
Event targeting allows the display object that triggered the event dispatch to be the direct recipient
of the event object and handle the event itself.
Why use event bubbling?
The event bubbling phase as used in event flow in Flash allows the event object to be passed to all
display objects in the display list branch from the event target all the way up to the stage. This acts
as a local event broadcasting system. Event bubbling can be used to allow events dispatched by
nested display objects to be handled by their parent containers.
How does the Current Target of an event change during the event flow?
When an event occurs at a specific location, the event enters the capture phase. In the capture
phase, the event moves down the display list, checking the objects drawn directly under the point
of interaction for associated event listeners. When the event reaches the event target, it enters the
target phase. In the target phase, the event object is assigned its event target and then enters the
bubbling phase. In the bubbling phase, the event object traverses back up the display list, and as it
encounters each successive display object, the event object is assigned a different “current target”.
How can multiple listeners be triggered by a single event dispatch from a single object?
The EventDispatcher class dispatches the event to the target object via the display list. The event
object traverses the display list hierarchy from the stage object all the way down to the target
display object, and then moves back up the display list. During this event flow, the event object is
received by the display objects on the way down, by the target object, and then again on the way
up as it bubbles up the display list. If at any point in the event flow the event object reaches a
display object that has a registered listener with a matching event type, the associated handler
function will be called.
What is the difference between a TouchEvent and a GestureEvent?
Unlike TouchEvents, GestureEvents do NOT have a capture or bubbling phase and do not
carry any direct information about the touch points.
How many TouchEvents are there?
Open Exhibits Core utilizes over 20 types of TouchEvent objects.
How many GestureEvents are there?
In Open Exhibits Core there are over 180 types of GestureEvent objects.
How do I add a TouchEvent Listener ?
To add a TouchEvent listener, use the addEventListener() method as follows:
touchDisplayObject.addEventListener(TouchEvent.TOUCH_UP, touchUpEventHandler);
How do I add a GestureEvent Listener?
To add a GestureEvent listener, use the addEventListener() method as follows:
touchDisplayObject.addEventListener(GestureEvent.GESTURE_SCALE, gestureScaleHandler);
Can I add a TouchEvent Listener to a Sprite or MovieClip object?
No. Only TouchDisplayObjects such as TouchSprite and TouchMovieClip objects can listen for
TouchEvents.


7. The Gesture Library


Exploring Gestures in Open Exhibits Core
In Open Exhibits Core we have introduced over 180 new gestures to our gesture library, bringing
the total number of available gestures to 203. The primary reason for extending the library in this
manner is to give developers an extensive toolset for building multitouch applications that increases
work-flow efficiency and provides a simplified method for UI and UX testing and development.

Gesture Standards
Multitouch gestures are only beginning to enter the mainstream lexicon of user interface designers.
There have been a number of successful experiments that deploy multitouch gestures as an integral
part of the primary user interface, both in the mobile industry and for large-scale interactive surfaces.
However, the touch industry has yet to define a set of universally accepted gestures and associated
manipulations beyond TAP, PAN, PINCH & ROTATE. This provides a tremendous opportunity
for the exploration and development of new UI paradigms. Flash represents a vehicle with which to
deliver multitouch and gesture interactive content across multiple devices and platforms. Existing
mobile and desktop devices have established true multitouch capability that currently exceeds the
ability of most applications that have been developed for those devices. Open Exhibits Core has
been designed not only as a test bed for developing new gestures and testing user interfaces but as a
robust system for the full deployment of applications across web, mobile and large-scale interactive
surfaces using Flash, Flex, Air and Android.

Open Exhibits Core has deliberately positioned certain gestures to align with the few standards that
have been established in the multitouch field, such as tap, flick, scale and rotate, so that developers
can immediately begin to use familiar listeners and popular interactions. But careful consideration
has been given to ensure that developers have full creative freedom to recast gestures to perform
different roles. Custom GestureEvent listeners can be created for a limitless number of as yet
undiscovered gestures using the powerful gesture analytics framework included in Open Exhibits
Core.

Gesture Analytics
We have developed a versatile system for the tracking and detailed analysis of touch point data
in Flash called Gesture Analytics. This consists of a combination of touch point ownership
determination, point history capture and point clustering models. Using these three facets,
touch point data can be verified, analyzed and classified according to where and how touch points
are initiated, the patterns of motion they exhibit and how they are terminated. This allows certain
conclusions to be drawn about implicit association and the significance of specific interrelated
motion, providing a method by which to listen for and discern gesture actions performed by users.

New Gestures in Open Exhibits Core


There are 203 gestures in Open Exhibits Core. Each is organized by type and group.

The type classifies the typical mode of usage of the action and the group number is a measure of the
level of complexity associated with performing the gesture action.

Gesture Types and Gesture Groups


Organising gestures into Types and Groups provides a system of common reference and serves to
characterize the gesture actions that they consist of. This reference system can be used to expose
new potential gestures or suggest alternative uses for a gesture action.

Gestures can also be referred to using the gesture notation, which uses a simplified set of characters
to symbolize the gestures, along with the number of touch points and number of hands used to
perform the gesture action. For example:

[Rot] {2,1}

This gesture notation represents the rotation gesture that requires an action with two touch points
from a single hand.

Basic Touch
The simplest form of “multitouch gesture” in Open Exhibits Core is the TouchEvent. Touch
consists of the primary touch phases: TOUCH_DOWN, TOUCH_MOVE and TOUCH_UP.
The TOUCH_DOWN event occurs when a new touch point is placed over a TouchObject. If that
touch point is moved, a TOUCH_MOVE event occurs. If the same touch point is removed, the
TOUCH_UP event will be triggered. These touch events can be further analyzed as part of the
history of a touch point or combined to form more complex TouchEvents such as TAP and HOLD.

Touch Tap
Simple TOUCH_TAP events are analogous to single-point mouse events. A single touch
TAP is triggered by a single TOUCH_DOWN event and a single TOUCH_UP event occurring
in close proximity to each other within a 150ms time frame. Touch Tap events can be defined
for a specific number of touch points and a specific number of successive taps. For example,
TOUCH_DOUBLE_TAP_2 requires that exactly two distinct touch points be created in close
proximity, then removed, then added, then removed within a specific time interval with minimal
motion.

Touch Hold
Hold events are triggered when a TOUCH_DOWN event occurs and a paired TOUCH_UP event
is not detected for a set period of time. Touch Hold events can be defined for a specific number of
touch points. For example, the TOUCH_HOLD_3 event requires that exactly three distinct touch
points be placed on a touch object and held in position for at least 500ms.

Simple Primary Gestures


The primary mode of performing actions in Open Exhibits Core is using a single hand to perform
gestures. All gestures that can be performed with one hand are called “Primary Gestures”. Simple
primary gestures use actions that do not require rhythmic analysis.

The Drag Gesture


The one-finger drag event is analogous to a simple drag-and-drop mouse action. The
DRAG_EVENT in Open Exhibits Core occurs when a touch point is placed on a touch object and
a TOUCH_MOVE event is detected on that point (i.e. it is dragged). Touch objects can be set to
move on stage only when a specific number of touch points are placed on them and determined to
be moving. When using multi-point drag, any number of touch points can be used to move a touch
object. The average position of all the associated touch points is used to calculate the center of
drag, and the center of the touch object is moved to that point.
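
A hedged sketch of a drag handler follows. The GESTURE_DRAG event name appears in the
FAQ at the end of this chapter; the delta property names dx and dy are assumptions for the
returned position changes:

mytsprite.addEventListener(GestureEvent.GESTURE_DRAG, gestureDragHandler);

function gestureDragHandler(event:GestureEvent):void {
    // Apply the change in cluster position to the touch object.
    event.target.x += event.dx; // assumed property name
    event.target.y += event.dy; // assumed property name
}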

The Scroll Gesture


Scroll gesture events are triggered when a defined number of touch points are placed on an object
and moved vertically or horizontally as a group.

The Flick Gesture


The flick gesture uses an abrupt action in which touch points move with sharply increasing
velocity across a touch object. GESTURE_FLICK_1 tracks a single touch point placed on
a touch object and triggers an event if the motion of the touch point accelerates immediately
before it is removed from the touch object and a TOUCH_UP event is detected on the object. The
GESTURE_FLICK event returns an acceleration and both the X and Y components of the touch
point velocity.

The Swipe Gesture


The swipe gesture allows users to detect the action of moving touch points over a touch object.
Swipe gestures are triggered only when specific conditions of motion are met. In the case
of GESTURE_SWIPE_H_1, a single point must move with a relatively constant velocity in the x
direction for 400ms to return the swipe event. If the touch point moves from left to right, the event
returns a value of 1; if the touch point moves from right to left, the event returns a value of -1.
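
Assuming the event name above maps to a GestureEvent constant and the returned direction is
exposed as a value property (both assumptions), a handler might branch on it as follows:

mytsprite.addEventListener(GestureEvent.GESTURE_SWIPE_H_1, gestureSwipeHandler);

function gestureSwipeHandler(event:GestureEvent):void {
    if (event.value == 1) { // assumed property name for the returned 1 / -1
        trace("swiped left to right");
    } else {
        trace("swiped right to left");
    }
}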

The Scale Gesture


The SCALE gesture mode allows users to analyze the separation of touch point clusters. The
number of touch points required to initialize a scale gesture can be set explicitly or defined for
any “n” number of points.

The Rotate Gesture


The ROTATE gesture allows users to analyze the relative rotation of touch point clusters around a
common center of rotation. The number of touch points required to initialize a rotate gesture can be
set explicitly or defined for any “n” number of points.
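
The familiar pinch-to-scale and rotate behaviors can be wired up as in the sketch below; the value
property carrying the incremental scale and rotation changes is an assumed name:

mytsprite.addEventListener(GestureEvent.GESTURE_SCALE, gestureScaleHandler);
mytsprite.addEventListener(GestureEvent.GESTURE_ROTATE, gestureRotateHandler);

function gestureScaleHandler(event:GestureEvent):void {
    // Assumed: event.value carries the incremental change in scale.
    event.target.scaleX += event.value;
    event.target.scaleY += event.value;
}

function gestureRotateHandler(event:GestureEvent):void {
    // Assumed: event.value carries the incremental rotation in degrees.
    event.target.rotation += event.value;
}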

The Stroke Gestures


The STROKE gesture set uses a customized path analysis to recognise a broad set of single stroke
shapes, symbols, letters and numbers. Each stroke has been preset using a detailed vector path in
the gesture library. When a STROKE listener is placed on a touch object and a TOUCH_DOWN
event is detected on that object, the path created by the motion of the touch point is compared to
the defined path for that stroke stored in the gesture library.

Stroke Letters
The STROKE letter gestures define a group of 52 English alphabet letters (26 upper case and 26
lower case). Each letter uses a simplified single-stroke path to determine a unique vector-based
pattern and returns a single case-specific character.

Stroke Symbols
The STROKE symbol gestures define a group of 10 symbols. Each symbol gesture returns a unique
character string. Some of the more complex symbols have been simplified so that they can be
represented using a single cursive stroke.

Stroke Greek Symbols


The STROKE Greek gestures define a group of 24 symbols associated with Greek alphabet letters.
Each symbol gesture returns a unique character string. Some of the more complex Greek characters
have been simplified so that they can be represented using a single cursive stroke.

Stroke Shapes
The STROKE shape gestures use a preset group of paths to select and return a shape when a
defined path is detected on a touch object. Each shape path is defined by a single stroke. Some
of the more complex shapes have been simplified so that they can be represented using a single
cursive stroke.

Stroke Numbers
The STROKE number gestures define a set of 10 unique numbers. Each number path is defined
by a single stroke and returns an integer value (0 through 9).
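
For example, the zero stroke can be listened for using the NumberEvent.NUMBER_ZERO
constant quoted in the FAQ at the end of this chapter (the handler name is illustrative):

mytsprite.addEventListener(NumberEvent.NUMBER_ZERO, numberZeroHandler);

function numberZeroHandler(event:NumberEvent):void {
    trace("a zero stroke was drawn on the touch object");
}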

Simple Secondary Gestures


The primary mode of performing actions in Open Exhibits Core is using a single hand to perform
gestures. However, we have developed a select number of gestures that are designed to be
performed with two hands but contain a single gesture type.

The 3D Tilt Gesture


Simple 3D gestures are designed to be performed with two hands. The first hand gesture acts as
a modifier for the second hand gesture. In the first set of 3D gestures available in Open Exhibits
Core, 3D tilt gestures can be activated using two fundamentally different gesture actions.

Using a two-handed action, the first hand places down two fingers creating two touch points aligned
in the intended tilt plane. Then, using the second hand, a third touch point is moved perpendicular
to the tilt plane to evoke a precise orientation change in 3D.

Another simplified action that can also be used to evoke an orientation change in 3D is placing
three fingers down on an object and dragging all three touch points in the direction of rotation. The
single-hand technique is reserved for aggressive tilting of objects or scene views in 3D, as it affords
less fine control.

Both methods trigger the tilt gesture to update continually as touch points are moved over the
associated touch object. This allows the gesture to incrementally change the orientation of
the object in 3D. Since the planes of rotation are perpendicular, they can be updated simultaneously
without interference.

The primary difference between the two techniques is the degree of precision and the use of one or
two hands. In this gesture, as with others in Open Exhibits Core, the use of a second hand when
performing a gesture acts to increase the precision of gesture control or to expose secondary
behaviors when altering an object’s properties.
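
As a sketch, the GESTURE_TILT_ZY event named in the FAQ at the end of this chapter could
drive the Flash Player 10 3D rotation properties; the event's returned field name is an assumption:

mytsprite.addEventListener(GestureEvent.GESTURE_TILT_ZY, gestureTiltHandler);

function gestureTiltHandler(event:GestureEvent):void {
    // Assumed: event.value carries the incremental tilt angle in degrees.
    event.target.rotationX += event.value;
}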

The Split Gestures


The GESTURE_SPLIT_1 event is triggered when two touch points are placed on a touch object
and dragged such that the separation of the points increases. The split gesture is in many ways an
extension of the scale-type gesture but is typically activated only when the touch point separation
exceeds a hand width, as SPLIT gestures are designed to be performed with two hands.

A common use for a SPLIT gesture is to control the separation of a group of elements, such as
pulling apart a collection of images for inspection or “exploding” a rendering of a complex 3D
shape to understand how it fits together.

Complex Secondary Gestures


Gestures that use two hands and require the execution of more than one action, or more than one
type of action, concurrently or in sequence are considered Complex Secondary Gestures.

The Anchor Gesture


Simple anchor gestures are designed to be performed with two hands. The first hand gesture acts as
a modifier for the second hand gesture. In the first set of anchor gestures available in Open Exhibits
Core, the first hand is used to perform a simple three finger hold with the second hand used to tap,
flick, scale or rotate.

Future Gesture Development


Open Exhibits Core has been designed to develop multitouch interactions on a variety of touch
devices including multi-user multitouch tables and walls. These devices provide the means to use
more than one hand when performing gesture actions as well as the possibility to use supplementary
touch tools such as a stylus, slap widgets and fiducial markers. One area of gestural interaction that
is currently being researched is how these tools can be developed to augment existing gestures and
integrate additional layers of functionality within Open Exhibits Core.

Universal Gestures
Gesture types and associated actions as defined in the existing Open Exhibits Core library have
been selected based on patterns in usage criteria, multi-point motion and multi-point clustering.
However, there is no reason why these gesture handlers cannot be configured to control other
properties of touchDisplay objects. For example, the GESTURE_ROTATE action and gesture
listener could be used to control the transparency of an image on screen. In Open Exhibits Core,
we refer to this as “re-tasking the gesture”.
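
A sketch of the transparency example above; the event's value field and its scaling into the 0 to 1
alpha range are illustrative assumptions:

myImage.addEventListener(GestureEvent.GESTURE_ROTATE, gestureAlphaHandler);

function gestureAlphaHandler(event:GestureEvent):void {
    // Re-task the rotation value to drive transparency instead of angle.
    // Assumed: event.value is the incremental rotation in degrees.
    myImage.alpha = Math.max(0, Math.min(1, myImage.alpha + event.value / 360));
}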

FAQ
Why extend the gesture library?
We extended the gesture library to make building sophisticated multitouch interactions simpler,
improve work flow, allow for rich application analysis, and fine-tune user experience.
How many gestures are there available in GW 2.0?
There are over 200 distinct touch and gesture event listeners available in Open Exhibits Core that
provide methods of creating an ever increasing repertoire of single and multitouch gestures.
Can Open Exhibits Core detect multi-stroke gestures?
Currently, Open Exhibits Core is designed to analyse only single-stroke gestures.
What are designations?
We have introduced a standardized categorization system that gives each gesture a designation. The
designation consists of a descriptive name (which includes the type name), a text based abbreviated
symbol and the number of touch points and hands required for the gesture.
How are Gestures determined from raw touch point data?
One of the primary methods for determining whether a touch point associated with an action
qualifies as part of a gesture is whether the touch object that has the attached listener owns the
touch point. If the touch object does not “own it,” the object effectively ignores the touch point.
Once it is determined that a touch object owns a set of touch points, the number of touch points is
measured and tested against each type of attached gesture listener. If the number of touch points
falls between the maximum and minimum number allowable for a specific gesture, the points are
cached, and histories are generated and analyzed for geometric changes. The relevant results of the
spatial analysis of the motion over a fixed time interval are then returned with the gesture event if
the motion meets the specified pattern of motion.
How are point clusters used in Open Exhibits Core for gesture analytics?
Touch point clusters are not used to determine which touch object owns a group of points but are
used, optionally, to qualify groupings of points already owned by a touch object. For example, the
split gesture uses clustering to determine when a single cluster of touch points splits into two
clusters, as GESTURE_SPLIT is designed to be used with two hands.
Why are there gestures which have different names but the same actions?
Gestures have been organised into groups both functionally and conceptually. There are occasions
where it was more efficient to duplicate a gesture action inside different groups of gestures in order
to significantly simplify the implementation of large groups of gesture listeners. Not only does this
simplify implementation, but it also reinforces the relationship between the type of data returned
from a gesture and the context in which it is used.

A good example of this is the difference between the circle stroke gesture, the omicron stroke
gesture and the zero stroke number gesture. Performing a circular stroke on a touch object with
the “NumberEvent.NUMBER_ZERO” listener on it will return the number zero. Similarly,
performing a circular stroke on a touch object with the “LetterEvent.o_” listener on it will return
the letter “o”. Performing a circular stroke on a touch object with the “SymbolEvent.OMICRON”
listener on it will return the symbol name “omicron,” which has a string associated with it.

Holding two fingers down on a touch object and dragging a third would trigger both the
GESTURE_SCALE_3 and the GESTURE_TILT_ZY gesture events, if they were both attached
to the same touch object. The actions would be geometrically the same, and the analysis that occurs
within the two gestures would be identical in this example. However, they are considered
different gestures not only because they return values under different names but because those
values are generally used in very different ways. In this case, the functionality of the two
gestures overlaps in the applied gesture action and the returned value, but this is not the case for
the GESTURE_SCALE group and the GESTURE_TILT group in general, where distinctly
different actions are required and distinctly different values are returned.
Why not just use a single gesture that can be used with a varying number of touch points?
We have included gestures that can accept varying numbers of touch points as well as gestures
that require a specific number of touch points in order to trigger an event. For example, the
GESTURE_SCALE listener can recognize actions that use 2 to 5 touch points to scale touch
objects whereas the GESTURE_SCALE_2 gesture will only trigger when 2 touch points are used
on a touch object.
What is the difference between startTouchDrag, stopTouchDrag and GESTURE_DRAG?
There are two distinct techniques for implementing touch object dragging in Open Exhibits Core.
The first uses the startTouchDrag and stopTouchDrag methods available as part of Adobe’s touch
implementation and uses Adobe’s standard methods for updating the position of touch objects on
stage. The GESTURE_DRAG gesture set returns a set of values that represent the changes in the
position of touch points associated with the touch object. These can be used to modify the position
of the touch object but can equally be used to modify ANY chosen property of the touch object.
What is the difference between PINCH and CRUNCH?
The PINCH gesture is defined as accepting only two touch points. The CRUNCH or SCALE
gestures can utilize any number of touch points.
What is the difference between SCALE and SPLIT?
The SCALE gesture requires only one cluster of touch points whereas the split gesture requires two
distinct touch point clusters owned by the same touch object.
What is a combination gesture?
Combination gestures require a sequence of actions.
What is a dual hand gesture?
A dual hand gesture is a gesture that has a gesture action designed to be performed by two hands at
the same time.
Why use two hands to perform a gesture when you can use one?
Using two hands creates two independent touch point clusters, which can provide different
characteristic motion compared to points within a single cluster. Additionally, using two hands to
perform an action can allow greater precision.
What is a Simple Primary Gesture?
All gestures that can be performed with one hand are called “Primary Gestures”. Simple primary
gestures use actions that do not require rhythmic analysis.
What is a Simple Secondary Gesture?
All gestures that are designed to be performed with two hands but contain a single gesture type are
considered “Simple Secondary Gestures”.
What is a Complex Secondary Gesture?
All gestures that use two hands and require the execution of more than one action or more than one
type of action concurrently or in sequence are considered “Complex Secondary Gestures”.
Can GW differentiate between capital and lower case alphabet letters?
Yes, there are different stroke paths defined for upper and lower case letters of the alphabet.
What if my cursive style is different from the prescribed stroke?
Stroke paths can be modified or re-recorded, and new custom strokes can also be added to the
Open Exhibits Core library. The set of strokes defined in the Open Exhibits Core library has been
selected and grouped to provide the simplest, most accurate and most reliable stroke discernment.
Can gestures be used for purposes other than those they were originally developed for?
Yes. Open Exhibits Core has been designed with developers in mind. Gestures can easily be re-
tasked to act on objects in a variety of ways. For example, the spiral gesture can be used to control
the scale of an object or the position of the object in 3D space. It does not have to be used to
control the angle of an object or be associated with rotation in any way.
How is the SWIPE gesture different from the SCROLL gesture?
The swipe gesture is similar to the scroll gesture but requires a relatively constant touch point
velocity and only returns an event once the motion of the touch points has been completed.
Additionally the SWIPE gesture only returns the values 1 and -1 depending on the direction of the
swipe along the associated axis.
How are SCROLL gestures different from DRAG gesture events?
Scroll gestures are similar to drag events but, unlike drag events, scroll gesture events have more
explicit preconditions and return the vector components of the moving touch points.
How are SCROLL gestures different from SWIPE gesture events?
The SCROLL gesture uses a similar action to swipe gestures but does not require a constant
velocity to trigger an event. The SCROLL gesture event continually returns values as the action is
performed giving a range of values dependent on the length of the scroll motion.
How is the FLICK gesture different from the SWIPE gesture?
The flick gesture is similar to swipe gesture but requires touch points to accelerate close to the end
of the motion of the touch points.
How is the FLICK gesture different from the SCROLL gesture?
The flick gesture is similar to the scroll gesture but requires touch points to accelerate close to the
end of the motion of the touch points. Additionally the Flick gesture can be performed in ANY
direction.
What is the difference between NUMBER gestures and SYMBOL gestures?
Number gestures are similar to symbol gestures as they both use captured stroke data in the form of
vector paths to match a performed gesture to a stroke in the gesture library. The primary difference
between the two gesture modes is the data type that is returned from each gesture. Number stroke
gestures return an integer value whereas symbol stroke gestures return a character string.
What is the difference between LETTER gestures and SYMBOL gestures?
Letter gestures are similar to symbol gestures as they both use captured stroke data in the form of
vector paths to match a performed gesture to a stroke in the gesture library. The primary difference
between the two gesture modes is the data type that is returned from each gesture. Letter stroke
gestures return a single character whereas symbol stroke gestures return a character string.
What is the difference between SHAPE gestures and SYMBOL gestures?
Shape gestures are similar to symbol gestures as they both use captured stroke data in the form of
vector paths to match a performed gesture to a stroke in the gesture library. The primary difference
between the two gesture modes is the data that is returned from each gesture. Shape stroke gestures
return numerical information about the dimensions of the captured shape whereas symbol stroke
gestures return only a single character string.
What is the difference between a TouchEvent and a GestureEvent?
A TouchEvent travels the display list hierarchy and carries with it raw touch point data. A GestureEvent is dispatched when the motion of one or more touch points matches a defined gesture and carries the processed gesture values rather than the individual touch point data.
What is the maximum number of points that Gestures in Open Exhibits Core can handle?
The Open Exhibits Core tracking system has been tested with over 1,000 touch points. The primary
factor in determining the number of interaction points in an Open Exhibits Core application is the
number of touch points recognized by the multitouch input device and the CPU resources that are
available on the system running the application.
What is the max number of points that my application can handle?
The maximum number of points that can be tracked by your application depends on three things:
the maximum number of points tracked by the algorithm your multitouch hardware device uses, the
capability of the CPU hardware that runs the algorithm and the number of points enabled in your
application. Open Exhibits Core has been optimized for 60 touch points but has been tested with
over 100 tracked points. Theoretically, the limit can exceed 1,000 points, which is effectively unlimited.
Can you use mouse events in Open Exhibits Core?
The Open Exhibits Core simulator is included in every application that is published with Open
Exhibits Core. The multitouch simulator will automatically translate mouse clicks on stage into
touch events that touch objects can recognize and react to. Mouse events can be used in custom
applications in conjunction with touch and gesture events developed using Open Exhibits Core, but
we recommend that mouse events be handled by the simulator and not referred to explicitly in the
ActionScript for the application.

8. Module Components


Introduction to Module Components
In Open Exhibits Core, modules have been designed to be self-contained multitouch interactive
base components. Each module can be used as a stand-alone application or integrated as part of a
multi-component application.

Modules can contain custom XML parsers, video loaders, image loaders, text fields and other
touch elements such as buttons and sliders. Modules have been integrated with TouchEvent
and GestureEvent listeners to create versatile multi-user multi-point interactive touch objects.
Open Exhibits Core builds on existing Flash API tools and Flex methods to create an advanced
framework for discerning and managing multi-user multitouch and gesture-based interactions.

Using Open Exhibits Core modules provides an “out of the box” way to create dynamic Flash
applications. Each module makes it easy to create sophisticated interactive objects from media assets
and provides methods to create flexible layouts and formatting along with customizable interactivity
and full content management control. Modules are easy to extend for more advanced developers
and provide reliable resource management.

Modules Available in Open Exhibits Core

ImageViewer
The ImageViewer is a module designed to display media content in the form of static images.
Bitmap data files such as PNG, GIF and JPG, along with associated metadata and basic formatting,
can be defined using a simple XML file. Multiple touch object images can be displayed on stage and
each touch object can be manipulated using the TAP, DRAG, SCALE and ROTATE multitouch
gestures. All multitouch gestures can be activated and deactivated using the module XML settings.

VideoViewer
The VideoViewer is a module designed to display media content in the form of digital video. Video
data files such as FLV and SWF along with associated meta data, timed text and basic formatting
can be defined using the module XML file. Multiple touch object videos can be displayed on
stage and each touch object can be manipulated using the TAP, DRAG, SCALE and ROTATE
multitouch gestures and standard PLAY, STOP, BACK, FORWARD and PAUSE touch buttons.
All multitouch gestures can be activated and deactivated using the module XML settings.

FlickrViewer
The FlickrViewer is a module designed to display media content using the Flickr API. Selected
Bitmap data and short video files are downloaded from a defined Flickr user account along with
associated meta data. User account settings, image and video preferences and basic formatting can
be specified using the module XML file. Multiple touch object images and videos can be displayed
on stage and each touch object can be manipulated using the TAP, DRAG, SCALE and ROTATE
multitouch gestures. All multitouch gestures can be activated and deactivated using the module
XML settings.

YouTubeViewer
The YouTube Viewer is a module that uses the YouTube API to display video content from
YouTube in the form of an interactive video player window. Video can be streamed from a specified
YouTube account along with associated meta data. YouTube account preferences along with the
formatting and basic appearance of the video windows can be defined from the module XML file.
Multiple touch object video windows can be displayed on stage and each touch object can be interacted with
using the TAP, DRAG, SCALE and ROTATE multitouch gestures. All multitouch gestures can be
activated and deactivated using the module XML settings.

GMapViewer
The GMapViewer is a module that uses the Google Maps API to create interactive mapping
windows. Multiple touch object windows can independently display maps with different sizes and
orientations. Each map can be centered on different coordinates and use different map types and views. The
map windows can be interactively moved around stage, scaled and rotated using multitouch gestures.
Additionally map type, latitude and longitude, zoom level, attitude and pitch can also be controlled
using multitouch gestures inside the mapping window. All multitouch gestures can be activated and
deactivated using the module XML settings.

KeyViewer
The KeyViewer module constructs a simple, extendable onscreen keyboard which can be
repositioned and resized on stage using multitouch gestures. The keyboard appearance and styling,
as well as the key layout and allowed gesture interactions, can be customized using the module XML.

Using Modules in Applications


Modules are designed to be stand-alone applications or easily integrated into larger multitouch multi-
user applications. Each module employs a life-cycle interface that can be used to manage application
memory and CPU resources when used in a multi-object interactive.

Flash/Flex Life Cycles

The component life cycle describes the sequence of steps that occur when a component
object is created from a component class. The common life-cycle methods used in Flex are:
commitProperties(), createChildren(), layoutChrome(), measure(), updateDisplayList() and dispose().
These methods are an efficient way of handling events such as the pre-initialization, initialization,
positioning, resizing and disposal of components. Using these techniques helps eliminate redundant
processing when managing content updates or layout changes and provides reliable methods for
prompting garbage collection.

The Open Exhibits Core Module Life-Cycle


Internal methods have been constructed inside Open Exhibits Core that mimic the established
Flash/Flex life cycle. In Open Exhibits Core, modules use five methods to provide life-cycle
functionality: createUI(), commitUI(), layoutUI(), updateUI() and Dispose(). These methods
provide the capability to explicitly dictate the instantiation, dynamic control and destruction of a
module.

Using the life-cycle approach ensures that Open Exhibits Core modules and templates can
be flexibly used in a variety of situations. The life cycle methods provide simple component
management and a reliable system for establishing state notifications and updates. In addition,
aligning Open Exhibits Core with the Flash/Flex system standardizes it with other frameworks and
gives application developers a familiar interface.

In Open Exhibits Core, the methods createUI(), commitUI(), layoutUI() and updateUI() are
protected methods that are internally used in modules. When composing custom modules,
it is recommended that you use the same syntax and follow the same procedure for module
construction.
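
As a sketch of this recommendation, a custom module might follow the pattern below. The life-cycle method names come from the list above; the base class, package path and the exact points at which each method is called are assumptions and may differ from the internal implementation.

package
{
    import id.core.TouchSprite; // assumed package path, following the id.* convention used by modules

    public class MyCustomModule extends TouchSprite
    {
        public function MyCustomModule()
        {
            super();
            createUI();   // build child display objects
            commitUI();   // apply initial property and content settings
            layoutUI();   // size and position the children
        }

        protected function createUI():void { /* construct children here */ }
        protected function commitUI():void { /* commit properties here */ }
        protected function layoutUI():void { /* arrange children here */ }
        protected function updateUI():void { /* refresh on state changes */ }

        // the only public life-cycle method, mirroring the modules described above
        public function Dispose():void
        {
            // remove and nullify all child display objects and listeners here
        }
    }
}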

Creating a Module Instance


Using modules, you can easily create advanced multitouch user interfaces. Each module can in many
ways be treated as a custom component class.

Import the Module Folder


The following code imports the GMapViewer class from the Open Exhibits Core module library:

import id.module.GMapViewer;

Creating a New Module Instance


In order to create a new instance of the module GMapViewer, we use the constructor. The
following code creates a new instance of the GMapViewer class called “myWindow”:


var myWindow:GMapViewer = new GMapViewer();

Modules in Open Exhibits Core are similar to standard Flash components in the methods used to
instantiate them, set their properties and add them to the stage.

Configure Module Properties


The GMapViewer Module extends the TouchSprite class.
The following code sets the name of the GMapViewer instance “myWindow” to “Map Window”:

myWindow.name = "Map Window";

Add the Instance to the Display List


Just as with other display objects, the instance “myWindow” must be added to the display list before
it can be rendered on stage. To do this, we can use the addChild() method.

addChild(myWindow);

Disposing of a Module Instance


Module disposal is an important part of component control. In Open Exhibits Core, the Dispose()
method is the only life-cycle method that is public and available for use outside the module. The
following code uses the Dispose() method to remove and nullify all child display objects and
listeners of the module instance “myWindow”:

myWindow.Dispose();
myWindow = null;

Removing instances in this way enables the resources used by the creation and operation of the
module to be released. This helps keep CPU and memory use low and is especially useful when
working with multiple display objects, which is frequently the case in multi-user multitouch
applications.

Customizing Module Components


The module XML files have been designed as a hassle-free way to manage component assets and
formatting without the need to understand advanced coding structures or techniques. Users with
little or no programming knowledge can add new content, change meta data or the appearance of
objects shown in a module.

Module Properties
There are a number of properties associated with each module component. One method of setting
module properties is to edit the module XML. For example, the following steps set the color of the
window outline:

1. Navigate to the GMapViewer.xml file.


2. Locate the <FrameStyle> tag.
3. Edit the content of the <outlineColor> tag to read “0xFF0000”.

This sets the outline of all of the map windows to red.
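
The resulting block of GMapViewer.xml might look like the following sketch; the <FrameStyle> and <outlineColor> tags are named in the steps above, while any sibling tags are omitted:

<FrameStyle>
    <outlineColor>0xFF0000</outlineColor>
</FrameStyle>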

Module Content
In order to modify content used in a module, simply navigate to the XML file associated with the
module. Then add a new source block to the <Content> tag. For example, to add a new image to
the ImageViewer module:

1. Navigate to the ImageViewer.xml file.


2. Locate the <Content> tag.
3. Add the following code block:
<Source id="1">
<url>assets/MyImageName.jpg</url>
<title>My Title Text</title>
<description>My Description text</description>
<author>My Author</author>
<publish>&#169; My Publisher</publish>
</Source>
4. Set the source id number so that it is unique.
5. Edit the content of the URL tags to set the file path to your desired image.
6. Edit the content of the title, description, author and publish tags to set the associated meta data.

This adds another image file with meta data to the ImageViewer display.

Module Interactions
Each module is associated with a set of default touch and gesture event listeners that allow the
touch objects to respond in a prescribed way to touch and gesture input. Certain Touch and Gesture
interactions can be deactivated at run time by changing the module XML file. For example, to
deactivate the rotate gesture on an image touch object in the ImageViewer module:

1. Navigate to the ImageViewer.xml file.
2. Locate the <Gestures> tag.
3. Set the <rotate> tag to “false”.

This makes each of the images displayed in the ImageViewer unresponsive to the two-finger rotate
gesture.
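
The edited <Gestures> block might then look like the following sketch; the <Gestures> and <rotate> tags come from the steps above, while the sibling tags shown are assumptions based on the TAP, DRAG and SCALE gestures the module supports:

<Gestures>
    <tap>true</tap>
    <drag>true</drag>
    <scale>true</scale>
    <rotate>false</rotate>
</Gestures>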

Handling Module Events


There are various events that occur in each module as all internal objects are pre-initialized and
initialized. These events are monitored and effectively inside the module class and are do not bubble
up. However, in order to indicate the progress of module initialization, each Open Exhibits Core
Module fires a complete event when it is fully initialized and ready for use. The following code
attaches a COMPLETE event listener to the module instance “myWindow”:

myWindow.addEventListener(Event.COMPLETE, addToMyDisplayList);

When the instance “myWindow” is fully initialized, the COMPLETE event is fired and the listener
calls the function addToMyDisplayList().
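
A minimal sketch of that handler follows; the function name comes from the listener above, while the body is an assumption about typical usage (it requires import flash.events.Event):

private function addToMyDisplayList(event:Event):void
{
    // the module is fully initialized, so it is now safe to render it
    addChild(myWindow);
    // the listener is no longer needed once the module is on the display list
    myWindow.removeEventListener(Event.COMPLETE, addToMyDisplayList);
}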

Using Multiple Modules in a Single Application


Different modules can be mixed into one application. Using the componentManager class,
interactive windows can be sampled from each module, displayed on stage and managed as a single
set of interactive objects. Touch objects can be organized into display patterns or sorted according
to associated meta data or media type, and new media sources can be retrieved periodically from
online accounts or local folders, or in response to user interaction.

FAQ
What is the difference between a template and a module?
The primary difference between a module component and a template component is complexity.
Modules are constructed from touch elements whereas templates are constructed from modules. For
example, the template CollectionViewer uses the modules ImageViewer, VideoViewer, FlickrViewer
and YoutubeViewer to parse and display multiple forms of interactive media on stage in a single
component.
What are core classes?
Core classes are Open Exhibits Core classes that form the foundational application framework
which all elements, modules, templates and gestures utilize.
How many modules are there available in Open Exhibits Core?
There are currently six modules available as part of the Open Exhibits Core release: The
ImageViewer, VideoViewer, FlickrViewer, YoutubeViewer, GMapViewer and KeyViewer.
Can I build my own module?
Yes, Open Exhibits Core has been designed with developers in mind. Modules can be created
by extending TouchSprite and the various touch elements in the Open Exhibits Core library or
by constructing a set of custom classes that have been multitouch-enabled using TouchEvent or
GestureEvent listeners.
How do I customize the look and feel of a module?
Each module has an accompanying XML file that can be used to define the appearance of the touch
objects in a module as well as the various gesture interactions that are activated on the touch objects
within it.
How do I add content to a module?
Each module has accompanying XML files which can be used to define the content file paths or
user account information used to load media into the module.

9. Templates
Introduction to Templates
In Open Exhibits Core, templates are specialized touch object classes designed to be used as
complete stand-alone applications that can be customized to form an interactive exhibit. Templates
are composed from a set of Modules and Elements that are TouchEvent- and GestureEvent-
enabled. Each Template has been engineered to integrate and manage multiple sub-modules into
a single display object so that media resources can be sustainably imported and manipulated in a
multi-user multitouch environment.

The modules in each template have been selected to provide developers with advanced tools for
rapidly creating multi-user multitouch applications. Template structure ensures that each module
not only works reliably with other modules but can also be customized in appearance, content and
behavior without the worry of creating conflicts in the user experience or competition for CPU and
memory resources.

The template class in Open Exhibits Core uses the Application.xml file to determine which Modules
are loaded at run-time. The initialization process of Templates consists of a series of custom pre-
loaders that manage the acquisition of media content from the web and the processing of local content.

Templates Available In Open Exhibits Core

CollectionViewer
The CollectionViewer is a Template that uses a collection of Modules to load various types of media
and display the content on stage in the form of interactive windows. Media can be dynamically
displayed from local bitmap and video files and streamed via Youtube, Flickr or Google Maps.
All videos have custom touch button controls, map windows have targeted 3D gesture control
and metadata for media windows can be dynamically displayed along with thumbnail images and
QR images. Each window object can be manipulated using the TAP, DRAG, SCALE and ROTATE
multitouch gestures. All multitouch gestures can be activated and deactivated using the module
XML settings.

Using Templates in Applications


Templates are designed to be easily developed as part of a stand-alone application. Using the
Template class is similar to using a Module class. Developers can download Open Exhibits Core
templates or create their own templates that consume and display modules in new ways.

Creating a Template Instance

Each Template class directly extends the TouchMovieClip class and has been composed from
modules and touch elements. The Template class is itself a TouchDisplay object so a simple
constructor can be used to create a new instance of a Template.

var myCollection:CollectionViewer = new CollectionViewer();

This creates a new CollectionViewer instance called “myCollection”. As with other display class
objects, the new instance must be added to the display list before it can be rendered on stage. To do
this, we can use the addChild() method.

addChild(myCollection);

Setting the Modules Included in the Template


The exact modules that are used in the Template can be explicitly defined in the Application.xml
file. For example, in order to add the YouTubeViewer module to the CollectionViewer Template:

1. Navigate to the Application.xml file.


2. Locate the <Template> tag.
3. Add the following code block:
<module>YouTubeViewer</module>

If a module is not included in the Application.xml file, it will not be included in the
CollectionViewer application and the content will not load into the app.

Note: There should be only one <module> tag for each Module type.
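
The edited <Template> block might look like the following sketch; the <Template> and <module> tags come from the steps above, and the particular list of modules is only an example:

<Template>
    <module>ImageViewer</module>
    <module>VideoViewer</module>
    <module>YouTubeViewer</module>
</Template>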

Modifying Template Properties


Modifying the content, interactions or appearance of the Template requires editing the XML
files responsible for the properties of each module. For example, the CollectionViewer Template
object has six Module XML files associated with it by default: ImageViewer.xml, VideoViewer.xml,
FlickrViewer.xml, YoutubeViewer.xml, GMapViewer.xml and KeyViewer.xml.

Disabling the scale gesture on the CollectionViewer Template requires locating the <Gestures>
tag in each of the six Module XML files and setting the <scale> tag to false. This method can be
somewhat time-consuming, but it provides considerable versatility, as the touch objects generated by
each of the modules can have independent content, stylized appearance and be programmed to
respond differently to TouchEvents and GestureEvents. All this can be done by simply editing the
external XML files.


FAQ
How many Templates are available in Open Exhibits Core?
There is currently one Template available with Open Exhibits Core 1.0, “The Collection Viewer”,
which is a multi-purpose media viewer designed for multi-user multitouch interaction.
Will there be more available in the future?
More templates are added every month. The next template scheduled for release in December will
be “The Map Viewer”.
Can I build my own template?
Yes, Templates are designed to be simple to build and easy to maintain.
How do I customize the appearance and behavior of a template?
Each Template comes with an associated XML file that provides a streamlined workflow for
customizing the look and feel of the Template.
How do I add content to a template?
Each Template comes with an associated XML file that allows developers to simply define the file
paths of the content they wish to include in an application and set any associated meta data.


10. The Application Class


The Application Class
The Application Class is the base object for all classes in Open Exhibits Core and the display class
that all content in an Open Exhibits Core application is rendered to. In order to create a multitouch-
enabled application, this class must be extended as the document or root class.

Application Class Inheritance


The Application class is a direct descendant of the TouchMovieClip Class. This means the
application class inherits all the display list properties and methods of the TouchMovieClip class.
The application class is therefore a display object container that can process and respond to
TouchEvents and GestureEvents.

Application > TouchMovieClip > MovieClip

The root or document class must directly extend the application class. This puts the application
object immediately below the stage in the display hierarchy. That means that ALL display objects
that are created in an Open Exhibits Core application are placed inside the application object.

This is one of the systems by which Open Exhibits Core ensures that TouchEvents and
GestureEvents are correctly processed globally. The application object has blob containment
enabled by default. This means that the application object forcibly contains and manages
TouchEvents and GestureEvents at the top level, ensuring that the events are correctly processed
and event dispatches are correctly attributed according to the complete display list.

Application Settings
The application class has an associated XML file that can be used to override default settings. The
following code must be added to the main class in order to access the application settings via XML.

settingsPath="Application.xml";
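
In context, a document class might look like the following sketch. The Application base class and the settingsPath property come from this chapter; the package path and the placement of the assignment in the constructor are assumptions.

package
{
    import id.core.Application; // assumed package path, following the id.* convention

    public class Main extends Application
    {
        public function Main()
        {
            super();
            // point the application at its external settings file
            settingsPath = "Application.xml";
        }
    }
}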

Behind the scenes, the “Application.xml” file is parsed and settings are assigned to the global
class “applicationGlobals,” which updates the application class settings. This is done when
applications made with Open Exhibits Core first initialize. This means that application settings can
be changed at runtime and set externally to the multitouch application in much the same way as
module and template XML settings.

General Settings
There are general settings inside Application.xml which affect global properties of the application.
For example, the application frame rate, mouse visibility, Open Exhibits Core license key and
module display list can be set via XML.

Changing the Application Frame Rate


Modifying the frame rate of an application can provide a method for smoothing animations or
reducing required resources.

<frameRate> allows you to change the application frame rate. The frame rate can be set to 12, 24,
30 or 60 frames per second.

The current limit of 60fps has been chosen to match the most common LCD screen and projector
refresh rates. Increasing the frame rate beyond 60 fps will have no noticeable effect on animations in
Flash. Increasing the frame rate from 12 to 60fps will speed up animations that use onEnterFrame
events. This can smooth out animations on stage but generally requires more CPU resources.

Hiding the Mouse Cursor


For most applications that extensively use Touch and Gesture interactions, the cursor location no
longer has any special significance in the user interface. In most respects it can be considered a
distraction. In all applications published with Open Exhibits Core, there is the ability to make the
mouse invisible via the application XML file.

<mouseHide> controls whether or not the mouse cursor shows up in your application. Setting the
value to "true" will hide the cursor within the application; setting the value to "false" will make the
cursor appear when the mouse (or touch point) is over the application.
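
Together, the general settings described above might appear in Application.xml as follows; the tag names come from this section, while the enclosing structure is omitted and the values are only examples:

<frameRate>60</frameRate>
<mouseHide>true</mouseHide>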

Adding New Modules


When working with templates, you can define the modules displayed in the application at run-time
using the Application.xml document.

<module> controls which modules are included in the application. By adding or removing
<module> tags from the list, you can control which modules are loaded into your application.

The number of modules and the functionality of a template app are determined by the modules
selected to be part of the application. With this level of flexibility, it is easy to create a well-formed
and flexible user interface that can display dynamic web content.

Note: There should only be one module tag for each module type.

Setting the Open Exhibits Core License Key


On some occasions it may be necessary to manually insert the Open Exhibits Core license key. This
can be done via the Application.xml file.

<licenseKey> allows you to manually input a license key if you're on a Mac or didn't put in your
license key during the installation process.

Note: Manually entering your license key using this method directly overrides the license key entered
during installation.

Input Provider Settings


The input provider in Open Exhibits Core defines the method in which the application receives
multitouch input from non-native hardware devices. Open Exhibits Core has been designed to work
with native multitouch input on devices such as cell phones, tablet PCs, desktop screens and screen
overlay devices. Additionally, custom-made multitouch hardware devices can be used that utilize the
TUIO protocol to generate multitouch input via a socket connection.

<InputProvider> controls how the application receives touch data. Setting the provider to "Native"
allows it to work with Windows 7 systems or other hardware with native multitouch capabilities;
setting the provider to "FLOSC" allows it to work with TUIO-based systems.
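
For example, the following sketch uses the two values named above:

<InputProvider>Native</InputProvider>
<!-- or, for TUIO-based systems: <InputProvider>FLOSC</InputProvider> -->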

Socket Settings
When using the Flash OSC (FLOSC) socket, it can be useful to customize the host IP and Port that
is allocated for the socket connection. In most cases, the IP settings set up a local virtual server to
connect via a XML socket connection to Flash but the connection can also be made via an external
server which provides streaming multitouch data to a defined port. This connection is automatically
handled behind the scenes in GestureWorks.

To customize the FLOSC socket connection, locate the <floscSettings> tag in Application.xml.
Then edit the content in the <Autoreconnect>, <Host>, <Port> and <EnforceSize> tags.

Setting Auto Reconnect


In case the data stream is interrupted while an application is running, Open Exhibits Core provides a
method to automatically reconnect and synchronize multitouch input on an assigned port.

To enable this auto-reconnect feature, set the contents of the <Autoreconnect> tag to “true”.


Changing the host IP


The host IP (Internet Protocol) address explicitly defines the IP address of the server from which the
application expects the multitouch input data to stream.

To set up a localhost virtual server, set the contents of the <Host> tag to “127.0.0.1”. To use an
external server, enter the network IP address of the host server.

Changing The Port


The default TUIO transport method is UDP. Data in this form is sent by the “server” to port
3333. In Open Exhibits Core, there is a built-in FLOSC module which acts as a UDP-to-TCP bridge
and prepares the data for a Flash socket connection. This data stream is available via port 3000.

To set the port used in the FLOSC connection, set the contents of the <Port> tag to “3000”.

Enforcing FLOSC Capture Size


Most systems that generate TUIO output are optical-based multitouch input devices. For certain
devices that use projected displays, it is necessary to have different size areas of touch and image
projection. In Open Exhibits Core, we have accommodated for this by allowing developers to
override the default touch area and explicitly define the width and height of the capture region.

To set the capture area, set the contents of the <EnforceSize> tag to “true” and then set the
required <Width> and <Height> values.

Note: once a custom size has been set, if there is still any touch point offset in a Flash .exe
or SWF, make sure that the application is set to full screen so that there are no window borders.
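
Assembled together, the FLOSC socket settings might look like the following sketch. The tag names and the host and port values come from this section; the <Width> and <Height> values are placeholders.

<floscSettings>
    <Autoreconnect>true</Autoreconnect>
    <Host>127.0.0.1</Host>
    <Port>3000</Port>
    <EnforceSize>true</EnforceSize>
    <Width>1024</Width>
    <Height>768</Height>
</floscSettings>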

Simulator Settings
The built-in multitouch simulator in Open Exhibits Core is designed to work as a testing utility and
as a tool for multitouch input on devices that provide no native multitouch input data.

Turning on Debug Mode


Debug mode is designed to be used to display touch points when debugging application interactions
and behaviors but can also be used to provide integrated user feedback in deployed multitouch
applications.

<Debug> controls whether or not touch points are visible when set down. If this is set to "true,"
blue dots will appear on touch points when the screen is touched.


Degradation Settings
Degradation settings determine how the multitouch simulator behaves in response to available
multitouch input devices.

<Degradation> sets whether the application degrades into mouse events in the absence of touch
data, so if you run the application on a non-touch device, touch interactions will become mouse
events, like click, drag, etc. It can be set to “always”, “auto” or “never”.

Setting the degradation to “auto” allows Open Exhibits Core to intelligently determine whether
touch input data is available. If touch input is not detected in the Windows 7 operating system, or
if Windows 7 is not present, Open Exhibits Core will enable the multitouch simulator to process
mouse interactions. This occurs with projector .exe files and SWFs embedded in web pages.

Setting the degradation setting to “never” forcibly disables the multitouch simulator. It will no
longer respond to or convert ANY mouse input. This can be useful in some situations; however, if
the application is run on a device with no touch input system, the application will not be interactive.
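
Together, the simulator settings described above might appear in Application.xml as follows; both tag names come from this section and the values are only examples:

<Debug>true</Debug>
<Degradation>auto</Degradation>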

The default degradation setting of an application published with Open Exhibits Core needs to
be carefully considered when planning application deployment and modeling unintended usage
scenarios. When used properly, it is an extremely powerful tool for ensuring reliable cross-platform
and cross-device interoperability.


11. Publishing Applications


Introduction
Flash CS5 provides a variety of methods for publishing applications. Using Open Exhibits Core,
fully interactive multitouch applications can be published as projector files or SWFs, or packaged
into an AIR installer.

Publishing Projector Files


Projector files, also known as Flash executables, are designed to be stand-alone applications that can
run on the desktop.

Publish an application as an executable in Flash CS5 by following these steps:

1. In your open Flash CS5 application .fla file, go to File > Publish Settings.
2. In the Player tab make sure Flash Player 10.1 is selected and set Script to ActionScript 3.0.
3. Then in the Formats tab, select “Windows Projector (.exe)” then press the “Publish” button.

To run the multitouch application, simply double-click on the executable. Note: To ensure
application settings are correct for Windows 7, navigate to Application.xml, locate the
<TouchCore> tag and then set the <InputProvider> tag to “Native”.
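
The relevant block of Application.xml might look like this sketch; the <TouchCore> and <InputProvider> tags are named in the note above, and any other child tags are omitted:

<TouchCore>
    <InputProvider>Native</InputProvider>
</TouchCore>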

Publishing Flash SWFs


Simple web files, or SWFs, can be embedded directly into HTML pages. Open Exhibits Core supports
multitouch input devices that use the Windows 7 touch management system. In conjunction with
Flash 10.1 support, any web page with embedded SWFs that have been published with Open Exhibits
Core can be configured to directly support multitouch events in the browser.

Since Open Exhibits Core comes with a built-in simulator, multitouch-enabled rich Internet
applications can be deployed on the web without the worry of non-multitouch-compliant systems
disabling key interactions.

Publish a SWF file in Flash CS5 by following these steps:

1. In your open Flash CS5 application myApp.fla, go to File > Publish Settings.
2. In the Player tab make sure Flash Player 10.1 is selected and set Script to ActionScript 3.0.
3. Then in the Formats tab, select “Flash (.swf)” and press the “Publish” button.

Publishing in AIR
Adobe AIR is a cross-operating-system runtime environment that allows Flash applications to be
deployed using a custom local installer. AIR applications can be treated in the same way as native
desktop applications, which allows developers to create stand-alone desktop applications using
existing Flash assets and ActionScript techniques without the need to learn traditional desktop
development technologies. Publishing with AIR for Flash CS5 organizes all the application assets
and executables into a specialized installer package.

Note: Before you begin publishing for AIR, make sure you have downloaded the AIR 2.0 plug-in for
Flash CS5 and that you have a .p12 developer certificate.

Publish an AIR application in Flash CS5 by following these steps:

1. In your open Flash CS5 application myApp.fla, go to File > Publish Settings.
2. In the Player tab, set the player to Adobe AIR 2.
3. Enter Application Settings:
○ File name
○ Name of the application
○ Description of the application
○ Window Style
○ Icon
4. In the Signature tab, set the path to your AIR developer certificate and enter your password.
5. Then in the Icons tab set your icon source file path.
6. In the Advanced tab, set any required files that are associated with your app.
7. Press the OK button.
8. Press the Publish button.
