Future Tech: The Mobile Touch Interface, Reimagined

Today’s touchscreen devices are great for a number of reasons. Touchscreens make manufacturing easier, since there are few to no hardware buttons involved. For end users, there are no pesky plastic bits to get stuck or break off, and no itty-bitty keys for huge sausage fingers to miss. And these aren’t static inputs: software updates can add new gestures and functionality at any time.

But capacitive input is not all puppies and rainbows. The biggest disadvantage is that, in order to use these screens, our fingers have to actually touch them, inevitably blocking part of the display. (It’s even worse if you have the aforementioned chorizo-shaped digits.)

Some interesting solutions have cropped up, giving us all an idea of where inputs could be heading in the future. Google is already said to be looking into a Kinect-style motion controller for Android devices. But that might just be the tip of the iceberg.

Microsoft Research has been working on the topic for a while now, and despite the recent trend toward voice commands, its latest futuriffic idea doesn’t do away with touch entirely. Instead, it relocates touch from the device screen to a separate area. We’ve seen this type of concept before, and this one likewise involves a surface onto which the interface is projected, similar to the OmniTouch project from Carnegie Mellon’s HCI Institute, on which this research builds. Just beam the image, and interact with it like you would a touchscreen phone or tablet. Here, the magic happens by way of a wearable rig that’s basically a mash-up of a pico projector and a Kinect-style motion controller.
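If you’re curious how that kind of rig could tell a tap from a hover, here’s a rough sketch of the basic idea: a fingertip counts as “touching” the projected surface when its depth reading gets close enough to the depth of the surface behind it. To be clear, everything below (function names, thresholds, the frame format) is our own illustration, not code from the actual project.

```python
# Illustrative sketch of OmniTouch-style touch detection: a fingertip
# "touches" a projected surface when its depth reading comes within a
# small threshold of the surface depth around it. All names, values,
# and formats here are assumptions for demonstration purposes only.
import numpy as np

TOUCH_THRESHOLD_MM = 10  # fingertip within ~1 cm of the surface counts as a tap

def estimate_surface_depth(depth_frame: np.ndarray, x: int, y: int, pad: int = 15) -> float:
    """Median depth of a patch around (x, y); the median ignores the finger pixel."""
    patch = depth_frame[max(0, y - pad):y + pad, max(0, x - pad):x + pad]
    return float(np.median(patch))

def is_touching(depth_frame: np.ndarray, fingertip: tuple[int, int]) -> bool:
    """True if the fingertip depth is within the threshold of the surface depth."""
    x, y = fingertip
    finger_depth = float(depth_frame[y, x])
    surface_depth = estimate_surface_depth(depth_frame, x, y)
    return (surface_depth - finger_depth) < TOUCH_THRESHOLD_MM

# Example: a synthetic 480x640 depth frame of a flat surface at 600 mm,
# with a fingertip 5 mm above it at pixel (320, 240).
frame = np.full((480, 640), 600, dtype=np.float32)
frame[240, 320] = 595
print(is_touching(frame, (320, 240)))  # True: close enough to register as a tap
```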

The rig isn’t exactly petite, though. Who’d want to schlep around all that gear? Master’s student Christian Loclair seemed to ask himself that question, since his “thumb on hand” project uses a much smaller unit: a single depth camera, one that can register movements made by a single hand, no less.

Here’s how it works: The subject wears the depth camera, which can detect small physical movements in front of it. This allows the hand to act as the touch surface itself or move to form gestures that register with the device.

The concept enables mid-air pinches, swipes and other actions to trigger scrolling, zooming or whatever the app specifies, all with one camera and a single hand. That’s not to say it’s perfect: having to learn a whole new physical lexicon could be daunting for users. But it’s still a genuinely clever, creative approach.
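To give a feel for how a mid-air pinch might be recognized, here’s another bare-bones sketch: if a tracker reports 3D positions for the thumb and index fingertips, a pinch is simply the two points getting close enough together. Again, the landmark format and the threshold here are made-up assumptions, not Loclair’s actual software.

```python
# Illustrative sketch of mid-air pinch detection from hand-tracking data:
# a "pinch" fires when the thumb and index fingertips come close together.
# The fingertip format and threshold are assumptions for demonstration.
import math

PINCH_THRESHOLD_MM = 25  # fingertips closer than ~2.5 cm reads as a pinch

def detect_pinch(thumb_tip: tuple[float, float, float],
                 index_tip: tuple[float, float, float]) -> bool:
    """True if the two fingertip positions (in mm) are within pinch range."""
    return math.dist(thumb_tip, index_tip) < PINCH_THRESHOLD_MM

# Example: two fingertip positions reported by a hypothetical tracker.
thumb = (102.0, 48.0, 430.0)
index = (110.0, 52.0, 435.0)
print(detect_pinch(thumb, index))  # True: ~10 mm apart, registers as a pinch
```

An app could then watch how the pinch distance changes frame to frame and map that to a zoom level, which is essentially the pinch-to-zoom gesture, minus the glass.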

For a closer look at both of these projects, hit up the vids below.

What’s your ideal device interface? Interactive projections? Mid-air gestures? Other?

[via Geek.com, Engadget]

 

From Microsoft Research

 

From Christian Loclair


Adriana Lee

Adriana is the resident writer-slash-culture vulture who has written about everything from smartphones and tablets to apps, accessories, and small biz...
