Google has just unveiled Project Tango, a smartphone with built-in 3D computer vision technology. Think of it as a phone with Kinect-style depth sensing — but rather than enabling Leap-like gesture control, the computer vision tech is used to create a full 3D map of your current environment. At its most basic, you might use Tango to create a map of your house, or a 3D model of your favorite antique vase or your motorbike.
In addition to all the usual cameras and sensors, Tango has a depth sensor, a motion tracking camera, and two Myriad 1 computer vision coprocessors from Movidius (a mobile computer vision startup). These additional sensors and processors constantly scan your environment, allowing the phone to “make over a quarter million 3D measurements every second, updating its position and orientation in real-time, combining that data into a single 3D model of the space around you.”
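The core idea — transforming each frame's depth measurements by the device's current pose and accumulating them into one model — can be sketched roughly as follows. This is an illustrative toy, not Tango's actual API: the function name, the pose representation, and the data are all invented for the example.

```python
import numpy as np

def fuse_frames(frames):
    """Fuse per-frame depth points into one world-space point cloud.

    Each frame is (points, R, t): points is an (N, 3) array in the
    camera frame, and (R, t) is the camera-to-world pose — a 3x3
    rotation and a 3-vector translation from the motion tracker.
    """
    world_points = []
    for points, R, t in frames:
        # Transform camera-frame points into the shared world frame.
        world_points.append(points @ R.T + t)
    return np.vstack(world_points)

# Toy example: the same two wall points seen from two poses 1 m apart.
wall = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.0]])
identity = np.eye(3)
cloud = fuse_frames([
    (wall, identity, np.zeros(3)),                                  # pose at origin
    (wall - [1.0, 0.0, 0.0], identity, np.array([1.0, 0.0, 0.0])),  # pose shifted 1 m
])
```

Both frames map onto the same wall points in world coordinates, which is exactly how overlapping scans merge into a single model instead of two offset copies.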
If you’ve ever seen the raw output from Kinect, the models/depth maps generated by Tango are very similar.
Now, Google is trying to work out what you can actually do with all that data. The most obvious use case is indoor positioning — currently, there is no easy way to use your smartphone to navigate inside a large building.
With Tango, your phone could quite easily compare your current location to a previously generated internal map, and tell you exactly where you are — and then direct you to exactly where you want to go, down to the exact shelf location for a product. The same data could also help visually impaired people, by providing audio cues or vibrations when they walk near an obstacle.
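At its simplest, that kind of map matching amounts to comparing the phone's current scan against stored per-location signatures and picking the closest match. A minimal sketch — the function, the fingerprint scheme, and the data are all hypothetical, invented for illustration:

```python
import numpy as np

def localize(scan, location_map):
    """Return the stored location whose fingerprint best matches the scan.

    scan: a feature vector summarizing the current 3D scan;
    location_map: dict mapping location name -> stored feature vector.
    """
    best, best_dist = None, float("inf")
    for name, fingerprint in location_map.items():
        dist = np.linalg.norm(scan - fingerprint)  # Euclidean distance
        if dist < best_dist:
            best, best_dist = name, dist
    return best

# Toy store map with two known locations.
store_map = {
    "entrance": np.array([0.9, 0.1, 0.3]),
    "aisle 4":  np.array([0.2, 0.8, 0.5]),
}
print(localize(np.array([0.25, 0.75, 0.5]), store_map))  # -> aisle 4
```

A real system would match against dense 3D geometry rather than a three-number fingerprint, but the principle is the same: the stored map turns raw sensor data into a position.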