Stanford researchers have developed a crowdsourced framework, RoboTurk, for real-time robotic teleoperation with six degrees of freedom. Using smartphone controllers, RoboTurk enables large human workforces to remotely operate robots without prior training.
Stanford researchers have created the first large-scale dataset of aerial videos of multiple classes of targets interacting in complex outdoor spaces.
Stanford inventors have developed a deep learning framework that labels individual points in 3D point clouds acquired by various sensors (RGB-D cameras, LiDAR, etc.). The framework produces a fine-grained, point-level labeling of 3D scenes.
Stanford researchers have developed a method for 3D semantic parsing of indoor spaces. It takes a 3D point cloud as input and parses it into individual spaces and specific components, such as structural elements and furniture.
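To illustrate what a point-level labeling of a 3D scene looks like, here is a minimal sketch: each point in an (N, 3) cloud receives one semantic label. A toy height-based rule stands in for the learned models described above; the function and label names are purely illustrative, not part of the inventors' frameworks.

```python
import numpy as np

# Hypothetical label set for an indoor scene (illustrative only).
LABELS = {0: "floor", 1: "furniture", 2: "ceiling"}

def label_points(cloud: np.ndarray) -> np.ndarray:
    """Assign one label id per point, using a toy height rule in
    place of a learned network (for illustration only)."""
    z = cloud[:, 2]                      # height of each point
    labels = np.full(len(cloud), 1)      # default: furniture
    labels[z < 0.1] = 0                  # near the ground -> floor
    labels[z > 2.5] = 2                  # high up -> ceiling
    return labels

cloud = np.array([[0.0, 0.0, 0.05],     # a point on the floor
                  [1.0, 2.0, 0.80],     # a point on a chair
                  [0.5, 1.0, 2.70]])    # a point on the ceiling
print([LABELS[i] for i in label_points(cloud)])
# -> ['floor', 'furniture', 'ceiling']
```

The key property shown is the output format: one label per input point, rather than a single label for the whole scene or object.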
Although tracking has been studied for decades, real-time tracking algorithms often suffer from low accuracy and poor robustness when confronted with difficult real-world data.