
Motion engine beats gestures

Technology News | By eeNews Europe



At the end of May, Quantum Interface started offering an Android smartwatch launcher called QiLaunch Ware for private beta testing. The motion-based interface is blazingly fast and lets users navigate through any app in one continuous motion, entirely eliminating the "point and click" scenarios we have become accustomed to with touch screens.

Key to the Qi interface is the motion engine developed by the company’s CTO, Josephson, long before touch screens and smartphones became commonplace, let alone the apps they power.

"We have global patents and IP dating back to 2002 and we’ve been working on that since before then", Josephson said, admitting that when he first thought of using natural motion instead of coded gestures to interact with interfaces, his idea was to control light switches across a room, for easy light selection and dimming.

"It struck me how we could use simple geometry and the principles of motion and time to control objects, control programs and data, seamlessly".

The motion engine software is sensor-agnostic: it is architected to take any sensor data (capacitive finger touch, IR or time-of-flight, eye tracking, you name it) and do the maths to convert the user’s hand or finger direction, angle, speed and even acceleration into dynamic control attributes.

"For us, it doesn’t matter which sensors you use, we convert user motion into dynamic attributes that can reach threshold events to trigger selection, all in real time. Compare this to gesture-based interfaces where you have to finish the action (gesture) before the processor can go to a look up table and decide what to do with it", explains Josephson.

In fact, as soon as the user moves in a direction, the predictive algorithm starts unfolding menus and pre-selecting the icons the user is most likely looking for. Better still, those icons come to the user, at a speed proportional to the user’s own agility.


The benefits are manifold. First, the user interface is much more intuitive: you no longer have to drive a cursor across the screen to reach a fixed icon at its XY coordinates; start moving in the icon’s direction and a new layout of options is triggered.
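To make the idea concrete, the sketch below ranks candidate icons by how closely they lie along the current direction of travel and pulls the best matches toward the pointer at a rate proportional to the user’s speed. The function name, weighting constants and three-item cut-off are illustrative assumptions, not Quantum Interface’s algorithm.

```kotlin
import kotlin.math.atan2
import kotlin.math.cos

data class MenuItem(val label: String, var x: Float, var y: Float)

fun preselect(
    items: List<MenuItem>,
    pointerX: Float,
    pointerY: Float,
    motionAngleRad: Float,
    speed: Float
): List<MenuItem> {
    // Rank items by the cosine of the angle between the motion vector and
    // the pointer-to-item vector: 1.0 means the item lies directly ahead.
    val ranked = items.sortedByDescending { item ->
        val itemAngle = atan2(item.y - pointerY, item.x - pointerX)
        cos(itemAngle - motionAngleRad)
    }

    // Pull the most likely targets toward the user, scaled by how fast
    // (i.e. how decisively) the user is moving.
    val pull = (speed * 40f).coerceAtMost(0.3f)
    for (item in ranked.take(3)) {
        item.x += (pointerX - item.x) * pull
        item.y += (pointerY - item.y) * pull
    }
    return ranked
}
```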

"By getting away from the click-mentality, we are bringing back the missing analog into digital interfaces" claims the CTO.

Second, the Qi interface is much faster than the traditional "scroll and click" app interface, where all the options of a given menu only appear once you have selected it. This translates into a better user experience, but also into power savings.

"We have made a side by side test with traditional touch-screen interfaces, trying to access different menus. The Qi interface averaged out at around 5% of CPU power, versus 20 to 30% for other solutions. That’s also because the menu develops on-demand, and only the options that matter show up on your screen", said Josephson.

The company has been demonstrating its technology to OEMs and is building a software development kit with seven modules for different types of use cases. Automotive OEMs in particular are keen to integrate the technology into their head-up displays.

When driving, it can be very distracting to scroll through many options in a menu, but with eye tracking and Qi’s motion engine, a glance in the approximate direction is enough to trigger the right response. In that case the company analyses the motion of the eyes instead of requiring the user to fix his or her gaze on particular coordinates.

"We’ve done a demo where we combined eye tracking to perform menu pre-selection, say if you glance quickly at your dashboard towards the radio area, with a thumb pad mounted on the steering wheel to verify and give attribute control (music selection, volume). Within seconds of watching the demo, most companies go , Wow! This changes everything".

Also on the company’s roadmap is the integration of its motion engine at chip level.

"Now, our solution seats at the top layer, it pulls the sensor data already filtered through the base OS and it works after the OS to interact with the API, but ideally we would want to go down to silicon and access the sensors’ raw data", explained Josephson, arguing this would further reduce user-interface power consumption.

"We’ve been approached by a couple of silicon manufacturers but we are probably two years out before silicon integration". Such IP would certainly make sense for touch-screen controllers, but in the coming months, embedded integration is the most likely.

"Virtual reality and gaming are bigger markets and we are also building demos for Oculus, using eye tracking, and possibly taking body motion into the equation" concluded Josephson.

Visit Quantum Interface at www.quantuminterface.com

Related articles:

UltraHaptics promises airborne tactile interfaces

Feeling augmented

Close encounter of the haptic type
