
Future of mobile 3-D is a two-way street

Technology News | By eeNews Europe

FaceTime may be the utility that grabs the most attention, but the front-facing cameras on the iPad and iPhone can do more than video calling. These cameras act like a little eye that can be programmed to track our heads as we look left and right, producing some of the same movement effects seen with Kinect. But although the effect looks similar, it is a facade. Much as when we close one of our own eyes, a single front-facing camera can only build a 2-D image map; it has no way to measure depth. Technically speaking, the iPhone already employs depth-sensing technology, in a fashion similar to Kinect: buried inside the phone's earpiece is an infrared (IR) LED that measures the reflectivity of objects placed in front of it. This is what lets us press an ear to the phone without mashing buttons and hanging up on callers.
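
To make the distinction concrete, here is a minimal Swift sketch of the 2-D half of the trick: mapping a face position reported by the front camera onto a parallax shift for on-screen layers. Every name and number in it is an illustrative assumption, not a shipping API; a real app would feed in coordinates from a face detector.

    import Foundation

    // Illustrative sketch only: turn a normalized 2-D face position from
    // the front camera into a parallax shift for on-screen layers. The
    // face coordinates are assumed to come from some face detector; no
    // depth is involved, which is exactly why the effect is a facade.
    struct ParallaxMapper {
        let maxShift: Double  // largest shift, in points, for the deepest layer

        // depth: 0 for the frontmost layer, 1 for the deepest, so distant
        // layers slide farther, mimicking real parallax.
        func offset(faceX: Double, faceY: Double, depth: Double) -> (dx: Double, dy: Double) {
            // Re-center so a face in the middle of the frame yields no shift.
            let dx = (faceX - 0.5) * 2 * maxShift * depth
            let dy = (faceY - 0.5) * 2 * maxShift * depth
            return (dx, dy)
        }
    }

    let mapper = ParallaxMapper(maxShift: 20)
    // Head moves left of center: the deepest layer slides 6 points left.
    print(mapper.offset(faceX: 0.35, faceY: 0.5, depth: 1.0))  // (dx: -6.0, dy: 0.0)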

Both dual cameras and IR can achieve the same effect, but IR is the more practical of the two. Translating two camera images into depth data takes far more computing power than processing IR readings, because every feature must first be matched between the two images before its depth can be triangulated. In addition, IR works just as well in the dark as in the light, like a technologically advanced version of the flashlight apps that clutter the app store's utility section. While most of us won't be stumbling through the dark with our smartphones, this begins to reveal some of the functionality that depth-sensing apps could bring to the market.
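
The triangulation step itself is cheap; the expense lies in the per-pixel matching that must happen first, which an IR sensor skips entirely. The following Swift sketch shows only the cheap step, with made-up focal length, baseline, and disparity values for illustration.

    import Foundation

    // Depth from stereo disparity: z = f * B / d. The hard, expensive
    // part (finding the per-pixel disparity by matching the two images)
    // is assumed to have happened already.
    //   focalLength: in pixels
    //   baseline:    distance between the two cameras, in meters
    //   disparity:   horizontal pixel shift of a feature between images
    func stereoDepth(focalLength: Double, baseline: Double, disparity: Double) -> Double? {
        guard disparity > 0 else { return nil }  // unmatched pixel or point at infinity
        return focalLength * baseline / disparity
    }

    // A feature shifted 40 px between cameras 2 cm apart (f = 600 px)
    // sits about 0.3 m from the device. All numbers are hypothetical.
    if let z = stereoDepth(focalLength: 600, baseline: 0.02, disparity: 40) {
        print(String(format: "%.2f m", z))  // 0.30 m
    }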

Like a second pair of eyes, a depth-sensing input on a mobile device can see whatever we point it at. For the blind, this means not only that gesture-based commands could replace the touch screen, but that the device itself could function as a visual assist. A group of university students in Germany has already strapped a Kinect to the head of a blind person and hacked it to serve as a navigational tool. Integrating depth perception into mobile devices would deliver the same benefits without the cumbersome Kinect hats.

Depth-sensing interfaces will ultimately revolutionize how we interact with our devices. First the touch screen turned our fingers and natural movements into the stylus; now Kinect has removed every tool that stood between user and device. GarageBand instruments on the iPad could be played by strumming fingers in the air, or an Angry Birds bird launched at a pile of pigs by plucking the fingers away from the screen. When a mobile device can detect depth, the screen opens into a multi-dimensional space that users can look deep beyond. For the enterprise, this means a mechanic could virtually look inside an engine as though it were in front of him, moving in for a closer look or peering into the far right or left corners of the workspace.
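
As a rough illustration of how such an interface might read intent from depth data, the Swift sketch below fires a "pluck" when a tracked fingertip pulls away from the screen faster than a threshold. Everything here, from the PluckDetector name to the velocity threshold to the idea that the depth sensor hands us a fingertip distance each frame, is an assumption made for illustration.

    import Foundation

    // Hypothetical sketch: detect a "pluck" by watching how fast a
    // tracked fingertip's distance from the screen grows between frames.
    // Fingertip tracking itself is assumed to come from the depth sensor.
    struct PluckDetector {
        let releaseVelocity = 0.5         // meters per second; a tuning guess
        private var lastDepth: Double? = nil
        private var lastTime: Double? = nil

        // Feed one (depth, timestamp) sample per frame; true means "pluck".
        mutating func update(fingertipDepth: Double, at time: Double) -> Bool {
            defer { lastDepth = fingertipDepth; lastTime = time }
            guard let d0 = lastDepth, let t0 = lastTime, time > t0 else { return false }
            let velocity = (fingertipDepth - d0) / (time - t0)
            return velocity > releaseVelocity  // finger accelerating away from screen
        }
    }

    var detector = PluckDetector()
    _ = detector.update(fingertipDepth: 0.05, at: 0.00)    // finger near the screen
    print(detector.update(fingertipDepth: 0.08, at: 0.03)) // true: ~1 m/s pull-away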

Despite the possibilities and the popularity, the destiny of depth-sensing mobile devices ultimately rests with manufacturers, and so far their interest in 3-D lies more in output than in input. Pre-orders are already being taken for the HTC EVO 3D, expected to be the first glasses-free 3-D Android phone, with the LG Thrill scheduled to follow close behind. On Apple's end, speculation has already started that the iPhone 5 will contain a 3-D camera and display, and the company holds a number of patents on 3-D projectors and viewing techniques dating as far back as 2006.

IR and dual cameras are young enough technologies that it is still difficult to predict how they will be adopted in other devices, but the success of Kinect will set the precedent for how depth is employed in the future. Kinect's hacker-driven open-source community uncovered thousands of new uses far beyond the device's original purpose as a gaming peripheral, and with the release of the official SDK this spring, there are soon to be thousands more. As the technology is refined and the public appetite grows, the crossover between Kinect-like technology and mobile becomes increasingly inevitable.

Ryan Engle is a Senior iOS Developer at Mutual Mobile, where he specializes in bending depth-sensing technology and augmented reality to enterprise uses. Ryan served as lead developer for Audi's A8 Experience iPad application and co-led development of StumbleUpon's iPad application.
