Pantomime Playground is Coming

By: Pantomime Corporation


Uploaded on 03/17/2015

Pantomime Playground™ is the first app featuring the revolutionary Pantomime™ platform -- the virtual reality platform that supports multiplayer games, doesn't require a headset, and lets anyone reach in with their mobile devices. Here, a player is reaching into a virtual world with an iPad, and viewing the scene on both the iPad and a Mac.

As shown, when you launch Pantomime Playground, you're surrounded by 8 monoliths that explain everything you need to know about navigating virtual worlds. You can Spin and Skate with screen gestures, tap to Throw objects, and Walk and Paddle the device to move and interact. Then you're ready to control an on-screen contraption, and even Gravity, by swatting Magic Cans.

Pantomime Playground will be in the App Store in a few weeks and is available now for pre-release testing. Want to participate in the beta test? Visit http://pantomimecorp.com.

Comments (1):

By DonHopkins    2018-05-21

Good questions! It's a very difficult problem, and I don't know of a universal solution. I haven't been very happy with any of the higher level multitouch tracking API's that I've used.

I usually just end up writing a lot of ugly Rube Goldbergesque spaghetti event handling code with lots of global state and flags and modes.

The problem doesn't seem to break down cleanly into a bunch of nice little components that don't know very much about each other, the way mouse oriented widgets do. You need a lot of global event management and state machine code, and friendly objects that know about each other, in order to keep track of what's really going on, and to keep from tripping over your own fingers.

Michael Naimark discusses some interesting stuff in his articles "VR / AR Fundamentals — 3) Other Senses (Touch, Smell, Taste, Mind)" and "VR / AR Fundamentals - 4) Input & Interactivity"! (Read the whole series, it's great!)

https://medium.com/@michaelnaimark/vr-ar-fundamentals-3-othe...

https://medium.com/@michaelnaimark/vr-ar-fundamentals-4-inpu...

I wrote some stuff in the "Gesture Space" article about the problem of multi touch map zoom/pan/rotate tracking, and how it's desirable to have a model where users can easily comprehend what's going on:

Gesture Space

https://medium.com/@donhopkins/gesture-space-842e3cdc7102

>Multitouch Tracking Example

>One interesting example is multitouch tracking for zooming/scaling/rotating a map.

>A lot of iPhone apps just code it up by hand, and get it wrong (or at least not as nice as Google Maps gets it).

>For example, two fingers enable you to pan, zoom and rotate the map, all at the same time.

>The ideal user model is that during the time one or two fingers are touching the map, there is a correspondence between the locations of the fingers on the screen, and the locations of the map where they first touched. That constraint should be maintained by panning, zooming and rotating the map as necessary.

>The Google Maps app on the iPhone does not support rotating, so it has to throw away one dimension, and project the space of all possible gestures onto the lower dimensional space of strict scaling and panning, without any rotation.

>So the ideal user model for two finger dragging and scaling without rotation is different, because it’s possible for the map to slide out from under your fingers due to finger rotation. So it effectively tracks the point in-between your fingers, whose dragging causes panning, and the distance between your fingers, whose pinching causes zooming. Any finger rotation around the center point is simply ignored. That’s a more complicated, less direct model than panning and scaling with rotation.

>But some other iPhone apps haphazardly only let you zoom or pan but not both at once. Once you start zooming or panning, you are locked into that gesture and can’t combine or switch between them. Maybe that was a conscious decision on the part of the programmer, or maybe they didn’t even realize it should be possible to do both at once, because they were using a poorly designed API, or thinking about it in terms of “interpreting mouse gestures” instead of “maintaining constraints”.

>Apple has some gesture recognizers for things like tap, pinch, rotation, swipe, pan and long press. But they’re not easily composable into a nice integrated tracker like you’d need to support panning/zooming/rotating a map all at once. So most well written apps have to write their own special purpose multitouch tracking code (which is pretty complicated stuff, and hard to get right).
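
Here's a minimal sketch of that constraint-maintaining model, in Unity C# since that's the platform discussed below. It assumes the "map" is a Transform in a pixel-aligned X/Y plane under an orthographic camera, and all the class and field names are invented -- it's the shape of the idea, not code from any shipping app. Every frame it composes the similarity transform (pan + uniform zoom + rotation) that carries the two fingers' starting positions onto their current positions, and applies it to the map transform captured when the gesture began, so the map points first touched stay pinned under the fingers.

```csharp
using UnityEngine;

// Sketch only: keep the two map points that were first touched pinned under the
// fingers by composing a pan/zoom/rotate from the initial finger pair to the
// current one. Assumes world units == screen pixels (orthographic camera).
public class TwoFingerMapTracker : MonoBehaviour
{
    Vector2 startA, startB;      // finger screen positions at the start of the gesture
    Vector3 startMapPos;         // map transform captured at the start of the gesture
    float startMapAngle;
    float startMapScale;
    bool tracking;

    void Update()
    {
        if (Input.touchCount == 2)
        {
            Vector2 a = Input.GetTouch(0).position;
            Vector2 b = Input.GetTouch(1).position;

            if (!tracking)
            {
                // Rebaseline whenever a two-finger gesture (re)starts.
                startA = a; startB = b;
                startMapPos = transform.position;
                startMapAngle = transform.eulerAngles.z;
                startMapScale = transform.localScale.x;
                tracking = true;
                return;
            }

            // Similarity transform from the initial finger pair to the current pair.
            Vector2 v0 = startB - startA;
            Vector2 v1 = b - a;
            float scale = v1.magnitude / Mathf.Max(v0.magnitude, 1e-5f);
            float angle = Mathf.Atan2(v1.y, v1.x) - Mathf.Atan2(v0.y, v0.x);

            // Rotate and scale the map about finger A's start point, then translate
            // so the map point that was under finger A stays under finger A.
            // (z is ignored in this flat 2D sketch.)
            Vector2 mapOffset = (Vector2)startMapPos - startA;
            Vector2 rotated = Rotate(mapOffset, angle) * scale;
            transform.position = a + rotated;
            transform.eulerAngles = new Vector3(0, 0, startMapAngle + angle * Mathf.Rad2Deg);
            transform.localScale = Vector3.one * (startMapScale * scale);
        }
        else
        {
            tracking = false;
        }
    }

    static Vector2 Rotate(Vector2 v, float radians)
    {
        float c = Mathf.Cos(radians), s = Mathf.Sin(radians);
        return new Vector2(c * v.x - s * v.y, s * v.x + c * v.y);
    }
}
```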

For example, if one finger drags, and two fingers can scale and rotate, you might want to implement inertia when you let go, so you can drag and release while moving, and the object will flick in the direction of your stroke with the instantaneous velocity of your finger.

But what happens if you release both fingers while rotating? Should that impart rotational inertia? What about if you start spinning with two fingers and then lift one finger -- do you roll back to panning but impart some rotational inertia so you spin around the point you're touching to pan? Should it also impart rotational inertia from the rotation of the iPad in the real world from the gyros, when you release your fingers? It gets messy!
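
Here's one way the bookkeeping for that can look, again as a hedged Unity C# sketch rather than the shipping Pantomime code: sample the linear and angular velocity of the gesture while fingers are down, rebaseline whenever the finger count changes, and after the last finger lifts, let both velocities coast with exponential damping. Gyro spin at the moment of release would just be one more term folded into the angular velocity.

```csharp
using UnityEngine;

// Sketch only: measure flick velocity while touching, coast with damping after release.
public class InertiaTracker : MonoBehaviour
{
    public float damping = 2.5f;     // higher = the flick dies out faster

    Vector2 lastPanPoint;            // centroid of the touching fingers last frame
    float lastAngle;                 // two-finger angle last frame, in degrees
    Vector2 linearVelocity;          // pixels/second left over at release
    float angularVelocity;           // degrees/second left over at release
    int lastTouchCount;

    void Update()
    {
        int n = Input.touchCount;
        if (n > 0)
        {
            Vector2 pan = Input.GetTouch(0).position;
            float angle = lastAngle;
            if (n >= 2)
            {
                Vector2 b = Input.GetTouch(1).position;
                Vector2 d = b - pan;
                pan = (pan + b) * 0.5f;
                angle = Mathf.Atan2(d.y, d.x) * Mathf.Rad2Deg;
            }

            // Only measure deltas while the finger count is stable; when a finger
            // goes down or up, rebaseline instead (otherwise the centroid jumps).
            if (n == lastTouchCount && Time.deltaTime > 0f)
            {
                Vector2 panDelta = pan - lastPanPoint;
                float spinDelta = Mathf.DeltaAngle(lastAngle, angle);
                linearVelocity = panDelta / Time.deltaTime;
                angularVelocity = spinDelta / Time.deltaTime;
                ApplyPanAndSpin(panDelta, spinDelta);
            }

            lastPanPoint = pan;
            lastAngle = angle;
            lastTouchCount = n;
        }
        else
        {
            // No fingers down: coast on whatever velocity was left at release.
            ApplyPanAndSpin(linearVelocity * Time.deltaTime,
                            angularVelocity * Time.deltaTime);
            float decay = Mathf.Exp(-damping * Time.deltaTime);
            linearVelocity *= decay;
            angularVelocity *= decay;
            lastTouchCount = 0;
        }
    }

    void ApplyPanAndSpin(Vector2 panDelta, float spinDegrees)
    {
        // Placeholder: however the app maps screen motion onto the world.
        transform.Translate(panDelta * 0.01f, Space.World);
        transform.Rotate(0f, spinDegrees, 0f, Space.World);
    }
}
```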

I implemented some variations of that for Pantomime on Unity for iOS and Android, so you can pan yourself through the virtual world by dragging one finger across the screen, and rotate around the vertical axis through the center of the screen by twisting two fingers around.

Pantomime – Interactive Multiplayer Virtual Reality

https://www.youtube.com/watch?v=T43b5ywnYpo

For Pantomime, supporting inertia for panning and rotating gestures made sense and was lots of fun, and it also integrated the rotational motion in the real world from the gyros, so you could spin and skate around with your fingers, lift them and continue spinning around while skating too, all the while turning the actual iPad itself around!

Or you could grab an object to twist it with two fingers, then rotate it by rotating the iPad itself instead of dragging your fingers across the screen! (It's actually a lot easier to turn things that way, I think! No friction.) So the tracking needs to happen in 3D space, projecting the touch point on the screen into the 3D world, so you can touch the screen with a single finger and drag an object by pointing with the screen instead of dragging your finger, or combine it with dragging your finger for fine positioning.
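
A rough sketch of that "project the touch point into the 3D world" idea (Unity C#, invented names, not Pantomime's actual implementation): raycast from the camera through the touch point, remember how far along the ray the grabbed object was, and keep it at that distance along the current ray. Then turning the device moves the object just as well as sliding the finger, because the gyro-driven camera changes the ray even when the finger stays put.

```csharp
using UnityEngine;

// Sketch only: drag an object by sliding a finger OR by turning the device.
public class TouchRayDragger : MonoBehaviour
{
    Transform grabbed;
    float grabDistance;      // how far along the ray the object was when grabbed

    void Update()
    {
        if (Input.touchCount != 1) { grabbed = null; return; }

        Touch touch = Input.GetTouch(0);
        Ray ray = Camera.main.ScreenPointToRay(touch.position);

        if (touch.phase == TouchPhase.Began)
        {
            if (Physics.Raycast(ray, out RaycastHit hit, 100f))
            {
                grabbed = hit.transform;
                grabDistance = hit.distance;
            }
        }
        else if (grabbed != null)
        {
            // Whether the finger slid across the screen or the whole device was
            // rotated, the ray through the touch point has changed; keep the
            // object at the same distance along the new ray.
            grabbed.position = ray.GetPoint(grabDistance);
        }
    }
}
```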

Another wrinkle is that the user might be holding the iPad in one hand and touching the screen with two fingers of their other hand, to rotate. Or the user might be holding the iPad in two hands like a steering wheel, one at each side, with both thumbs touching opposite sides of the screen.

In the "steering wheel" situation (which is a comfortable way of holding an iPad, controlling it with your thumbs), you might want to have a totally different tracking behavior than the two finger touch gesture (like each thumb controlling an independent vertical slider along the screen edge, instead of two finger scaling, for example), so you have to define a recognizer with a distance threshold or some other way of distinguishing those two gestures.

But when only one thumb has pressed, you don't know which way they're holding it yet, or whether to expect the second finger to touch nearby or at the opposite side, so the initial one finger tracking has to be compatible with each way of holding it.
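
One plausible way to commit to a mode only after the second finger lands is a simple physical distance threshold, along these lines (Unity C# sketch; the threshold is a made-up number and this isn't the real Pantomime logic):

```csharp
using UnityEngine;

// Sketch only: thumbs on opposite edges are far apart, a one-handed pinch is not.
public class TwoFingerModeClassifier : MonoBehaviour
{
    public enum Mode { Undecided, SteeringWheel, PinchRotate }
    public Mode mode = Mode.Undecided;

    void Update()
    {
        if (Input.touchCount < 2)
        {
            // With zero or one finger down we can't tell how the iPad is held
            // yet, so single-finger tracking has to work for either grip.
            mode = Mode.Undecided;
            return;
        }
        if (mode != Mode.Undecided) return;   // stay in the mode we committed to

        float pixels = Vector2.Distance(Input.GetTouch(0).position,
                                        Input.GetTouch(1).position);
        float dpi = Screen.dpi > 0 ? Screen.dpi : 160f;   // fall back if unknown
        float inches = pixels / dpi;

        // Threshold is a guess: opposite-edge thumbs on an iPad are several
        // inches apart, a one-handed pinch usually isn't.
        mode = inches > 4f ? Mode.SteeringWheel : Mode.PinchRotate;
    }
}
```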

Another approach is, instead of the app trying to guess how it's being used, for the app to INSTRUCT the user which way it expects them to operate the device, and how it will interpret the gestures, so that the user has control of what mode it's in (like touching the screen or not).

So you could switch between different modes by wielding different tools or weapons, and the user interface overlay changes to show you how to hold and operate the iPad to maintain the illusion of pantomiming walking or paddling.

Pantomime switches between showing two hands holding the screen like a steering wheel (when no fingers are touching, you're walking), and one hand holding it like a paddle (when one finger is touching the screen, you're paddling, pivoting on your elbow by the side of the screen you're touching).

And you can detect when the iPad is sitting flat with the screen facing up, and then you can switch into a different mode with different touch tracking, since you know they're probably not holding it like a steering wheel or waving it around if it's flat and not moving.
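
That check can be as simple as watching the accelerometer and gyro, something like this (Unity C# sketch; the thresholds are guesses):

```csharp
using UnityEngine;

// Sketch only: a face-up, motionless device reads roughly (0, 0, -1) g in
// Unity's device coordinates, and the gyro reports almost no rotation.
public class FlatOnTableDetector : MonoBehaviour
{
    public bool isFlat;

    void Start()
    {
        Input.gyro.enabled = true;
    }

    void Update()
    {
        Vector3 accel = Input.acceleration;                       // in g's
        float spin = Input.gyro.rotationRateUnbiased.magnitude;   // rad/s

        bool screenUp   = accel.z < -0.9f;                        // gravity out the back
        bool notShaking = Mathf.Abs(accel.magnitude - 1f) < 0.1f; // ~1 g total
        bool notTurning = spin < 0.2f;

        isFlat = screenUp && notShaking && notTurning;
        // When isFlat flips true, the app can switch to a "tabletop" touch mode,
        // since nobody is holding it like a steering wheel or waving it around.
    }
}
```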

Here's a good demo that shows panning, rotating, inertia, walking and paddling, with magic cans of different gravities, explained with in-world Help Monoliths:

https://www.youtube.com/watch?v=ma9CsOLnux0

Here's a demo with a terrible bug:

https://www.youtube.com/watch?v=4rBuRDq7pMo

Here's a four-year-old playing with Pantomime -- "I'm so good at this!" he says:

https://www.youtube.com/watch?v=3ilhH2hDyQc

You have to think long and hard about how people are going to interact with the device in the real world, and not assume they'll follow the official operating instructions of your app! There might be two people touching the screen with their fingers near each other. Or it could be a cat swatting or a baby licking the iPad! You can never tell what's going on in the real world.

For Pantomime, I used the TouchScript multitouch tracking library for Unity3D on iOS and Android.

https://assetstore.unity.com/packages/tools/input-management...

It seemed to be able to handle a certain set of complex gesture situations, but not the complex gesture situations I needed it to handle. But it might work for you, and it's free! I think there are other versions of it on different platforms, too. And it handles proxying events from remote devices (or from Flash to Unity). And it can handle attaching different gesture recognizers to different levels of the transform hierarchy (perhaps to control which colliders detect the touches), but I'm not sure what that's good for.

What I needed to do was full screen multi touch tracking, not tracking multiple gestures on individual objects, so I didn't use everything TouchScript had to offer, and I can't comment on how well that feature works.

It had a separate drag recognizer and rotate recognizer that could be active at the same time, and you can configure different recognizers to be friends or to lock each other out, but still all the different handlers had to know a hell of a lot about each other to be able to roll between them properly with any combination of finger touches and lifts. It was not pretty.

It's free, and it's certainly worth looking at the product description and manual to see which complex gesture situations it can handle, if you're interested.

>TouchScript makes handling complex gesture interactions on any touch surface much easier.

>Why TouchScript?

>- TouchScript abstracts touch and gesture logic from input methods and platforms. Your touch-related code will be the same everywhere.

>- TouchScript supports many touch input methods starting from smartphones to giant touch surfaces: mouse, Windows 7/8 touch, mobile (iOS, Android, Windows Store/Windows Phone), TUIO.

>- TouchScript includes common gesture implementations: press, release, tap, long press, flick, pinch/scale/rotate.

>- TouchScript allows you to write your own gestures and custom pointer input logic.

>- TouchScript manages gestures in transform hierarchy and makes sure that the most relevant gesture will receive touch input.

>- TouchScript comes with many examples and is extensively documented.

>- TouchScript makes it easy to test multi-touch gestures without an actual multi-touch device using built-in second touch simulator (activated with Alt + click), TUIOPad on iOS or TUIODroid on Android.

>- It's free and open-source. Licensed under MIT license.

It's not too hard to track full screen gestures, where one object is tracking all the fingers.

The problem is when you have several gestures going on at the same time, or several different objects tracking different gestures.

Are there two objects tracking single finger dragging gestures at the same time, or is one object tracking double finger dragging?

How do you properly roll between one, two and three finger gestures when you raise and lower fingers?

The thing that's frustrating to a programmer used to tracking a mouse is that users can touch and remove their fingers in any order they please, and it's easy to not think things through and cover every permutation. They can put down three fingers A, B and C one by one, then remove them in a different order, or touch two fingers at once, or almost at once.

So you need to be able to seamlessly transition between 1, 2, 3, etc, finger tracking in any order or several at once.
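
Stripped of any particular gesture semantics, the core of that is to track touches by fingerId and rebaseline whenever the set of fingers changes, roughly like this (Unity C# sketch with invented names, not code from any real app):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch only: restart the gesture baseline every time a finger goes down or
// up, in whatever order that happens, so transitions between 1, 2, 3... finger
// tracking don't cause jumps.
public class RollingTouchTracker : MonoBehaviour
{
    readonly HashSet<int> activeFingers = new HashSet<int>();
    Vector2 baselineCentroid;

    void Update()
    {
        bool fingerSetChanged = false;
        Vector2 centroid = Vector2.zero;
        int live = 0;

        for (int i = 0; i < Input.touchCount; i++)
        {
            Touch t = Input.GetTouch(i);

            if (t.phase == TouchPhase.Began)
                fingerSetChanged |= activeFingers.Add(t.fingerId);
            else if (t.phase == TouchPhase.Ended || t.phase == TouchPhase.Canceled)
                fingerSetChanged |= activeFingers.Remove(t.fingerId);

            // Touches that just ended don't count toward the current centroid.
            if (t.phase != TouchPhase.Ended && t.phase != TouchPhase.Canceled)
            {
                centroid += t.position;
                live++;
            }
        }

        if (live == 0) return;
        centroid /= live;

        if (fingerSetChanged)
        {
            // A finger arrived or left: restart the gesture from here, so the
            // centroid (and anything derived from it) doesn't jump.
            baselineCentroid = centroid;
            return;
        }

        Vector2 panDelta = centroid - baselineCentroid;
        baselineCentroid = centroid;
        // ...feed panDelta (plus scale/rotation from finger pairs) into the app.
    }
}
```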

I also tried implementing web browser pie menus for a gesture tracking library called hammer.js, by making my own pie menu gesture recognizer. Overall hammer was pretty nice for touch screen tracking, but my problem was that at the time (several years ago, I don't know about now) you couldn't make a gesture that tracked while the button wasn't pressed, and mouse based pie menus need to be able to track while they're clicked up. So I needed to do some ugly hack to handle that.

https://hammerjs.github.io/

I am guessing hammer.js was designed mainly for touch screen tracking, but not necessarily mouse tracking (since touch screens can't track "pointer position" when no finger is touching the screen). It would be nice if it better supported writing gesture recognizers that seamlessly (or as much as possible) worked with either touch screen or mice. Maybe it's better at that now, though.

It's not hammer.js's fault, but you must beware the minefield of browser/device support:

http://hammerjs.github.io/browser-support/

With a mouse, you can do things like "warping" the mouse pointer to a new location when the user tries to click up a pie menu near the screen edge, but there's no way to forcefully push the user's finger towards the center of the screen.

But then again, the amazing Professor Hiroo Iwata has figured out a "heavy handed" approach to solving that problem:

3DOF Multitouch Haptic Interface with Movable Touchscreen

https://www.youtube.com/watch?v=YCZPmj7NtSQ

>Shun Takanaka, Hiroaki Yano, Hiroo Iwata, Presented at AsiaHaptics2016. This paper reports on the development of a multitouch haptic interface equipped with a movable touchscreen. When the relative position of two of a user’s fingertips is fixed on a touchscreen, the fingers can be considered a hand-shaped rigid object. In such situations, a reaction force can be exerted on each finger using a three degrees of freedom (3DOF) haptic interface. In this study, a prototype 3DOF haptic interface system comprising a touchscreen, a 6-axis force sensor, an X-Y stage, and a capstan drive system was developed. The developed system estimates the input force from fingers using sensor data and each finger’s position. Further, the system generates reaction forces from virtual objects to the user’s fingertips by controlling the static frictional force between each of the user’s fingertips and the screen. The system enables users to perceive the shape of two-dimensional virtual objects displayed on the screen and translate/rotate them with their fingers. Moreover, users can deform elastic virtual objects, and feel their rigidity.

https://link.springer.com/chapter/10.1007/978-981-10-4157-0_...

(There is some other seriously weird shit on the AsiaHaptics2016 conference video list -- I'm not even gonna -- oh, all right: relax and tighten, then look for yourself: https://www.youtube.com/channel/UC8qMmIgmWhnQBeABjGlzGbg/vid... ... I can't begin to imagine what the afterparties at that conference were like!)

Don't miss Hiroo Iwata's food simulator!

https://www.wired.com/2003/08/slideshow-wonders-aplenty-at-s...

http://www.frontier.kyoto-u.ac.jp/te03/member/iwata/index.ht...

