Stanford researchers in Zhenan Bao's group have developed a nanomesh sensor, printed directly on the hand, that uses a machine-learning model to detect multiple movement types from a single sensor.
A Stanford bioengineering researcher has developed an optical-sensor-based muscle and body-motion tracking system for use with prosthetics and wearable human-machine interfaces.
Researchers at the Stanford Robotics Lab have developed new methods for modeling multi-contact collisions and steady physical interactions between multiple rigid bodies.
Stanford researchers have developed KleinPAT, a method that creates sound models in seconds, making it cost-effective to simulate sounds for many different objects in a virtual environment.
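The sound models referred to above are modal models: an object's impact sound is approximated as a sum of exponentially damped sinusoids, one per vibration mode. The sketch below is a generic modal synthesizer, not KleinPAT itself (KleinPAT's contribution is the fast precomputation of such models); the mode frequencies, dampings, and amplitudes used are invented for illustration.

```python
import math

def modal_sound(modes, sample_rate=44100, duration=0.5):
    """Synthesize an impact sound as a sum of exponentially damped
    sinusoids (the standard modal-sound model).  `modes` is a list of
    (frequency_hz, damping, amplitude) triples."""
    n = int(sample_rate * duration)
    samples = []
    for i in range(n):
        t = i / sample_rate
        # Each mode contributes amplitude * e^(-damping*t) * sin(2*pi*f*t).
        s = sum(a * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
                for f, d, a in modes)
        samples.append(s)
    return samples

# Hypothetical modes for a small struck object (values made up).
sound = modal_sound([(440.0, 6.0, 1.0), (1280.0, 9.0, 0.5)])
```

A real pipeline would also precompute how each mode radiates into the surrounding air; that acoustic-transfer step is the expensive part that KleinPAT accelerates.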
Engineers in the Solgaard lab have developed a high-speed, random-access grating light valve (GLV) for phase modulation to steer and focus light in LIDAR and 3D imaging applications.
Stanford researchers have developed an optical coating that steers infrared and visible light along different paths while suppressing the typically undesired rainbow effect.
Stanford inventors have developed a new approach to tackling the vergence-accommodation conflict, which is a common contributor to discomfort associated with virtual reality setups.
Researchers in the Collaborative Haptics and Robotics in Medicine Lab at Stanford University have patented a haptic device that simulates a stroking sensation.
Stanford researchers have patented a method to design, computationally optimize, and fabricate efficient optical devices from semiconducting and dielectric nanostructures.
Stanford researchers have developed a method for 3D semantic parsing of indoor spaces. It takes a 3D point cloud as input and parses it into individual rooms and their components, such as structural elements and furniture.
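To make the idea of semantic parsing concrete, the toy sketch below assigns labels to points in a cloud. It uses crude height thresholds that are invented for illustration; the Stanford method learns such labels from data rather than hard-coding them, and also segments the cloud into individual rooms, which this sketch does not attempt.

```python
def parse_points(points, floor_z=0.05, ceiling_z=2.5):
    """Toy semantic labeling of an indoor 3D point cloud by height.
    `points` is a list of (x, y, z) tuples; returns a dict mapping
    each label to the points assigned to it.  The thresholds are
    illustrative placeholders, not part of any real system."""
    labels = {"floor": [], "ceiling": [], "other": []}
    for p in points:
        z = p[2]
        if z <= floor_z:
            labels["floor"].append(p)      # near ground level
        elif z >= ceiling_z:
            labels["ceiling"].append(p)    # near ceiling height
        else:
            labels["other"].append(p)      # walls, furniture, clutter
    return labels

parsed = parse_points([(0, 0, 0.0), (1, 1, 2.6), (2, 2, 1.0)])
```

Here `parsed["floor"]`, `parsed["ceiling"]`, and `parsed["other"]` each receive one of the three sample points.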