I’m really curious to know what you think of this too:
Part of Apple’s mandate in moving from a mouse interface to touch is to create a barrier between its mouse-interface ecosystem (OS X) and its touch ecosystem (iOS).
This prevents lazy porting of applications, where a mouse-centric, pixel-accurate interface model is carried over into an environment developed for touch.
Briefly, the key differences are:
Accuracy: pixel-perfect versus roughly a 1 cm square,
Hover: a hovering finger is not easily differentiated from a tap or finger-down event,
Occlusion: a finger, simply put, is always in the way of the interface.
Fatigue: a mouse can be used for hours, but touch that uses fingers (not thumbs) generally offers no wrist or forearm support.
Touch does have a powerful advantage – one less abstraction in the connection between the user and the device: what you touch is what you get. This led to the concept of kinetic interfaces with inertia: the expectation when actually touching the interface is that it behaves like a physical object.
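The inertia idea can be sketched in a few lines: on release, the view keeps the flick velocity and bleeds it off with friction each frame, so content coasts to rest like a physical object. The constants and names below are illustrative, not taken from any particular toolkit:

```python
# Minimal sketch of kinetic scrolling with inertia (illustrative constants).
# After the finger lifts, the scroll position keeps advancing by the flick
# velocity, which decays by a friction factor every frame until it dies out.

FRICTION = 0.95      # fraction of velocity kept per frame (assumed value)
MIN_SPEED = 0.5      # px/frame below which the scroll is considered stopped

def simulate_flick(position: float, velocity: float) -> float:
    """Advance the scroll position frame by frame until inertia dies out."""
    while abs(velocity) > MIN_SPEED:
        position += velocity
        velocity *= FRICTION   # friction removes a fixed fraction each frame
    return position

# A 40 px/frame flick coasts several hundred pixels before stopping.
final = simulate_flick(position=0.0, velocity=40.0)
```

The geometric decay is what gives flicks their characteristic fast-then-slow feel; a real implementation would run this per display frame and add rubber-banding at the edges.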
But no touch is actually required – the touch itself is a limitation of the current technology, and one that can soon be overcome.
Touching the screen itself is a technical limitation of capacitive and resistive interface systems: the touch itself provides a hollow experience – no varying tactile feedback, just a too-smooth, perfectly uniform surface and a bit of finger grease. No information is conveyed by the touch itself aside from the expected visual cues, and even these have to be offset to be made visible from under the fingers.
These are disadvantages that can be addressed. Computer vision algorithms and the ubiquity of front-facing cameras will allow a software solution where a combination of the user’s gestures above the screen and an analysis of their line of sight will allow:
More accurate control: where a finger touch is inaccurate and suffers from roll, squish and smudge, fingers held above the surface like a grasped pencil can point much more accurately,
Less occlusion: a visual offset can be built into the interface, where cued contact points are offset by a combination of the gesture and the perspective of the user, as read from head and eye positions,
Less fatigue: a non-touch proximity interface can adapt to several modes of handling depending on device orientation and the location of the hands; a fully accurate ten-finger interface may not be needed if the user is just flipping pages or scrolling, and these can be done from a comfortable position at the sides of the device.
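The occlusion-reducing offset above amounts to simple projective geometry: intersect the line of sight from the user’s eye through the hovering fingertip with the screen plane, so the cued contact point lands where the user is actually sighting rather than under the finger. A hypothetical sketch, with all coordinates and units assumed (cm, screen plane at z = 0, z measured above the screen):

```python
# Hypothetical sketch of the "less occlusion" offset: project the eye->finger
# ray onto the screen plane (z = 0). Coordinate frame and units are assumed.

def project_to_screen(eye, finger):
    """Intersect the ray from eye through finger with the plane z = 0.

    eye, finger: (x, y, z) tuples in cm, z = height above the screen,
    with the eye farther from the screen than the finger (eye z > finger z).
    Returns the (x, y) screen point the user is sighting along the finger.
    """
    ex, ey, ez = eye
    fx, fy, fz = finger
    t = ez / (ez - fz)          # ray parameter where z reaches 0
    return (ex + t * (fx - ex), ey + t * (fy - ey))

# Eye 30 cm above the screen, fingertip hovering 2 cm above it:
point = project_to_screen(eye=(0.0, 10.0, 30.0), finger=(4.0, 6.0, 2.0))
```

In practice the eye and fingertip positions would come from head/eye tracking via the front-facing camera, which is exactly the hardware and CPU budget the next paragraph argues is arriving.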
Is the technology there yet? The software is not there yet, and the supporting hardware is not in tablets and phones quite yet; however, this is a function of better, cheaper cameras and more CPU cycles dedicated to gesture processing, both of which could be said to fall under Moore’s Law.
So would such an interface look like a smartphone/tablet UI, a windowed UI, a hybrid, or something entirely new? I think it will fall near the middle in terms of UI, but with more modes of interaction than either system and the strength of greater accuracy and finesse.
The simple fact is that having the 3D volume of space above and around a display opens up new interface possibilities that exceed either a single, highly accurate mouse pointer or ten inaccurate fingers pressed on a glass window.