Multiple Users and Multiple Objects

Topics: Gesture Recognition Engine
Feb 10, 2011 at 2:05 PM

Hi there

I'm building a multi-touch application as part of my MSc in Computer Science this year.  The hardware platform I am using is a PQLabs Touch3 overlay, and I intend to code in Visual C# using Visual Studio 2010.  I have read three papers on formalising the gesture language, and your solution seems the most elegant.  I am considering using your toolkit for the ease of creating new gestures, and so that my final application is more platform-independent.

I have spent a little time playing with WPF.  So far I have not found a way to allow simultaneous gestures on different objects, such as when more than one person is using the table: the gestures all seem to relate to the workspace as a whole, and only one gesture is recognised at a time.  That seems to render the point of multi-touch moot.  Surely the gesture event should be generated and passed through per object?  Am I missing something here?  Does your toolkit provide for this?

I have considered ways to solve it manually, such as per-object state machines or a transparent panel to catch the events.  I would much prefer a built-in solution, however.


Feb 10, 2011 at 4:30 PM

Hi iv,

You are right, the gesture events are generated and passed through per object. The GestureToolkit framework internally handles the parallel interaction issues and provides a simplified interface for developers.

I guess you are considering a large multi-touch display where multiple users can interact with the application simultaneously. For simplicity, let's say you define a gesture with a particular interaction in mind. Depending on the number of objects and gestures you have subscribed to, the runtime gesture processor will create the necessary environment to run multiple gestures in parallel (e.g. the same gesture by multiple people, multiple different gestures, and so on). You can also perform multiple gestures on the same object at the same time: for example, resize, rotate, and drag gestures on the same object.
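For illustration, a minimal sketch of what per-object subscription might look like from the application side. Note that `EventManager.AddEvent`, the gesture names, and the callback signature here are assumptions for the sake of the sketch, not the confirmed API:

```csharp
using System.Linq;
using System.Windows;
using System.Windows.Controls;

public partial class Workspace
{
    // Hypothetical sketch: subscribing each object separately lets the
    // runtime recognise gestures on different objects in parallel -
    // e.g. one user dragging photoA while another resizes photoB.
    void SubscribeGestures(Canvas workspace)
    {
        foreach (var item in workspace.Children.OfType<FrameworkElement>())
        {
            GestureFramework.EventManager.AddEvent(item, "Drag", OnDrag);
            GestureFramework.EventManager.AddEvent(item, "Resize", OnResize);
        }
    }

    void OnDrag(object sender, GestureEventArgs e)   { /* move sender */ }
    void OnResize(object sender, GestureEventArgs e) { /* scale sender */ }
}
```

The point of the sketch is only that subscriptions are per-element, so the runtime can track one active gesture per object rather than one per workspace.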

FYI, I am currently working on the next version. It won't change anything in the gesture definition but will significantly simplify hardware independence. I will need a few weeks, but let me know if you are interested and I can give you an early version.


Feb 15, 2011 at 4:51 PM

Hi Shahed

Thanks for your reply, I know you must be busy.

That's right, I'm considering simultaneous interaction on large displays with collaborative applications (specifically, collaborative information retrieval).  The fact that your toolkit handles parallel interaction is an important feature, as I found that rather cumbersome to deal with when trying to stay hardware-independent.  I am using a 42" variable-tilt table my university has constructed using a PQLabs G3 Plus overlay.

Your demo application (two coloured rectangles) seems to recognise a maximum of 5 points, and does get confused sometimes when fingers are placed simultaneously - I think it might be a result of using the Windows 7 gesture events.  I expect I will need to write a hardware abstraction to use your toolkit, but you make it look easy :) I have not started implementing yet; I hope to begin next week.  I will keep you updated on any feedback on your toolkit as it arises, and I would be most interested in any early versions or other resources you could point me towards.

Thanks again for your help.


Feb 16, 2011 at 7:59 PM

Hi iv,

Thank you for your feedback and feel free to post anytime you have an issue. 

I will look into the simultaneous drag issue you mentioned this weekend. In general, it works fine on the Dell XT2 tablet with 4 simultaneous touches. Could you please clarify a few things:

a) Does your device support native Windows 7 touch? That is, can you do multi-finger drawing in the Windows 7 Paint application?
b) If your device supports Windows 7 touch, are you using any WPF touch events?

Finally, I think I can give you the new version sometime next week. It implements the hardware abstraction differently and supports Windows 7 touch by default. If you can give me some details about your requirements, I will try to add some sample code for you.


Feb 23, 2011 at 12:32 PM

Hi Shahed

Sorry for the late reply, I have been out of the office this week.  

My device does support native Win 7 touch; that is how I am presently using your toolkit.  My device can handle 32 simultaneous touches, but so far 5 seems to be my limit; I think this might be a result of using Win 7?  I am only using the touch events supplied by your framework now, in order to take advantage of the hardware abstraction layer in your architecture.  I have, however, experimented a bit with the API that came with my hardware.  The events generated by my API are very low-level (simply touch down, etc.).

There is a very curious bug with my system: confusion occurs when two touches are placed diagonally to each other, lower left to upper right.  The result is two touches diagonal to each other, upper left to lower right.  This only happens when the fingers are placed simultaneously and in those specific places.  I'm ignoring the problem for now.
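For what it's worth, a hardware abstraction over such low-level SDK events could be as thin as an adapter that normalises them into a common touch-point model.  A rough sketch, where every type and member name is illustrative (the real PQLabs SDK types will differ):

```csharp
using System;
using System.Windows;

// Hypothetical adapter: translate the vendor SDK's raw callbacks
// (touch down/move/up with contact ids and coordinates) into events
// the rest of the application can consume without knowing the vendor API.
public class OverlayInputProvider
{
    public event Action<int, Point> TouchDown;  // contact id, position
    public event Action<int, Point> TouchMove;
    public event Action<int> TouchUp;

    // Invoked from the vendor SDK's raw callback; 'state' encoding is assumed.
    public void OnRawTouch(int id, double x, double y, int state)
    {
        var p = new Point(x, y);
        if (state == 0 && TouchDown != null) TouchDown(id, p);       // down
        else if (state == 1 && TouchMove != null) TouchMove(id, p);  // move
        else if (state == 2 && TouchUp != null) TouchUp(id);         // up
    }
}
```

Keeping the adapter this thin means swapping overlays later only requires rewriting `OnRawTouch`, not the gesture code above it.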

My project uses multi-touch interaction techniques to support Collaborative Information Retrieval (CIR).  I create information objects (which might represent documents, web pages, images, etc.) which recognise a variety of multi-touch interactions that support CIR tasks (collaborative querying, filtering, sorting, searching, duplicating, annotating, etc.); the research involves evaluating those techniques.  My general InformationObject class extends the Canvas class; there will be several different subclasses for different information-object types.

I'm presently still considering the best way to "bubble up" the gesture events on the UIElements (images, text, etc.) to the Canvas to which they are attached.  For example, an information object might have an image on it; dragging on the image should drag the whole canvas.  I accomplish that by adding gesture callbacks to all of the sub-elements on the canvases.  When a gesture is made, I typecast the items on my canvases to FrameworkElement, since it has a Parent property.  I then recursively call the gesture callback using the Parent as sender until a Canvas is reached, and apply the gesture to that.  All elements on the canvas are manipulated correctly with drag and rotate; with resize I need to manually resize all the elements.  Do you feel there is a more elegant solution to this "bubbling up" of gestures, which I'm sure will be a common requirement?
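The walk-up approach described above can be sketched roughly as follows, assuming a gesture callback that receives the touched element as sender (`GestureEventArgs` and `ApplyGesture` are placeholder names for this sketch):

```csharp
using System.Windows;
using System.Windows.Controls;

// Sketch of the "bubble up" walk: climb from the sub-element that
// received the gesture to its owning Canvas, then apply the gesture
// once to the whole canvas instead of the individual child.
void OnGesture(object sender, GestureEventArgs e)
{
    var element = sender as FrameworkElement;
    while (element != null && !(element is Canvas))
    {
        // FrameworkElement exposes Parent, so we can walk the logical tree.
        element = element.Parent as FrameworkElement;
    }

    var canvas = element as Canvas;   // the InformationObject, if reached
    if (canvas != null)
    {
        ApplyGesture(canvas, e);      // drag/rotate/resize the whole canvas
    }
}
```

An iterative walk like this avoids re-invoking the callback recursively for each level, but the effect is the same as the recursive version described above.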

Kind regards


Feb 24, 2011 at 3:46 AM

Hi Ivan,

There is an option to "bubble up" the events. It is currently turned off by default, but you can turn it on using the following property:

GestureFramework.EventManager.BubbleUpUnhandledEvents = true;

It works fine for regular use, but it hasn't been tested against the full range of test cases.

I think I will be able to give you an early version of the next release after this weekend. It will allow you to use more native touch controls and will simplify basic touch interactions (e.g. drag, resize).

About the touch data on your device: there are a number of possibilities. The limitation in touch points could be due to the implementation of that specific Windows 7 driver. I'm not sure about the touch-detection issue, though; it could be a device issue or a detection-algorithm issue. Is there any website where I can see the details/specs of your device and the SDK?



Mar 2, 2011 at 9:58 AM

Hi, how is the development going?  The bubble-up option works fine for my purposes, thank you!

The device is a PQLabs G3 Touch Overlay - - the SDK is available there, if you have time and are willing to look I'd be most appreciative!

Mar 4, 2011 at 4:01 AM

Hi Ivan,

Development is still in progress; however, the next version might take a little longer than I originally expected. Please let me know if you have any questions or concerns.

Thanks for the link. I will look into the SDK.


Mar 8, 2011 at 10:47 AM

Thank you!