Stanford researchers have developed a novel algorithm that creates visually pleasing three-dimensional models from a single still image. Make3D uses a variety of visual cues that people use to estimate the 3-D aspects of a scene. The algorithm starts by breaking the image up into many tiny patches. For each patch in the image, it uses supervised learning techniques to infer both the 3-D location and the 3-D orientation of the patch. The algorithm models both image depth cues and the relationships between different parts of the image. Other than assuming the environment is made up of many small planes, Make3D makes no explicit assumptions about the structure of the scene, which enables the algorithm to capture a much more detailed 3-D structure.
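The patch-based pipeline described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual method: the feature set, the linear regressor, and all function names are assumptions standing in for Make3D's much richer filter banks and learned Markov random field model.

```python
import numpy as np

def split_into_patches(image, patch_size=8):
    """Break an H x W grayscale image into non-overlapping square patches."""
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patches.append(((y, x), image[y:y + patch_size, x:x + patch_size]))
    return patches

def patch_features(patch):
    """Simple intensity/texture cues (Make3D uses far richer features)."""
    gy, gx = np.gradient(patch.astype(float))
    return np.array([patch.mean(), patch.std(),
                     np.abs(gx).mean(), np.abs(gy).mean()])

def predict_depth(features, weights):
    """Stand-in for a learned regressor mapping visual cues to patch depth."""
    return float(features @ weights)

# Demo on a random image; in practice the weights would be learned
# from images with ground-truth depth (supervised learning).
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(32, 32))
weights = rng.normal(size=4)
depth_map = {pos: predict_depth(patch_features(p), weights)
             for pos, p in split_into_patches(image)}
```

A per-patch regressor alone ignores context; Make3D additionally models the relationships between neighboring patches (e.g. connectivity and co-planarity), which this sketch omits.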
Stage of Research

In a pilot study, the algorithm was tested on approximately 600 images and was effective on about 65% of them. The inventors, Ashutosh Saxena and Andrew Ng, have made the algorithm available at Make3D. Approximately 22,000 users have converted about 25,000 images to 3-D models on this website.
Currently, the algorithm works best on images of outdoor scenes, such as mountains, lakes, houses, and streets. The researchers plan to extend the algorithm to other environments, such as close-ups of individual objects.