Stanford researchers have created a single diffusion generative model, DiffusionPoser, that can reconstruct human motion in real time from arbitrary body sensor configurations, with broad applications across motion capture use cases.
Researchers in the Murmann Mixed Signal Group have developed a pipelined chip architecture, built around networks with inverted residuals and linear bottlenecks, for energy-efficient machine learning inference on edge devices.
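Inverted residual blocks with linear bottlenecks are the building block popularized by MobileNetV2: expand a narrow feature map to a wider one, apply a cheap depthwise filter, then project back to a narrow representation with no activation on the projection. The sketch below is a minimal NumPy illustration of that block structure only, not the group's chip architecture; all function names and weight shapes are illustrative assumptions.

```python
import numpy as np

def conv1x1(x, w):
    # Pointwise convolution: x is (H, W, Cin), w is (Cin, Cout).
    return x @ w

def depthwise3x3(x, w):
    # Depthwise convolution: one 3x3 filter per channel.
    # x is (H, W, C), w is (3, 3, C); zero padding, stride 1.
    H, W, C = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += xp[i:i + H, j:j + W, :] * w[i, j]
    return out

def relu6(x):
    # Bounded activation commonly used in low-precision inference.
    return np.clip(x, 0.0, 6.0)

def inverted_residual(x, w_expand, w_dw, w_project):
    # Expand -> depthwise filter -> linear projection, with a
    # residual connection when input and output shapes match.
    h = relu6(conv1x1(x, w_expand))
    h = relu6(depthwise3x3(h, w_dw))
    h = conv1x1(h, w_project)  # "linear bottleneck": no nonlinearity
    return x + h
```

The linear (activation-free) projection is the distinctive design choice: applying a nonlinearity in the narrow bottleneck would destroy information that the wide intermediate representation preserves.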
Active manipulation of light beams is required for a range of emerging optical technologies, including sensing, optical computing, virtual/augmented reality, dynamic holography, and computational imaging.
A Stanford bioengineering researcher has developed an optical-sensor-based muscle and body motion tracking system for use with prosthetics and wearable human-machine interfaces.
Stanford researchers have developed a method, KleinPAT, that creates sound models in seconds, making it cost-effective to simulate sounds for many different objects in a virtual environment.
Stanford inventors have developed a new approach to the vergence-accommodation conflict, a common source of discomfort in virtual reality systems.
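The vergence-accommodation conflict arises because the eyes converge on virtual objects at varying distances while the headset's display forces focus at one fixed optical distance. A small back-of-the-envelope sketch can quantify the mismatch; the 63 mm interpupillary distance and 2 m focal plane used here are typical illustrative values, not parameters from the invention.

```python
import math

def vergence_angle_deg(fixation_m, ipd_m=0.063):
    # Angle between the two eyes' lines of sight when both
    # fixate a point at the given distance (closer = larger angle).
    return math.degrees(2 * math.atan(ipd_m / (2 * fixation_m)))

def va_conflict_diopters(fixation_m, focal_plane_m=2.0):
    # Mismatch between the distance the eyes converge on and the
    # fixed distance the display optics force them to focus at,
    # expressed in diopters (1 / meters).
    return abs(1.0 / fixation_m - 1.0 / focal_plane_m)
```

For example, fixating a virtual object at 0.5 m through optics focused at 2 m yields a 1.5-diopter conflict, well above the roughly quarter-diopter tolerance often cited for comfortable viewing.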
Stanford researchers have patented a data-driven method for building a human shape model that spans variation in both subject shape and pose. The method is based on a representation that incorporates both articulated and non-rigid deformations.
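Articulated deformation is conventionally handled by linear blend skinning, where each vertex follows a weighted blend of bone transforms; data-driven body models of this kind typically layer learned non-rigid, pose- and shape-dependent corrections on top of such a skeleton-driven base. The NumPy sketch below shows only the standard skinning step, as a point of reference, and is not the patented method itself.

```python
import numpy as np

def blend_skin(vertices, weights, bone_transforms):
    # vertices:        (V, 3) rest-pose positions
    # weights:         (V, B) per-vertex bone weights, rows sum to 1
    # bone_transforms: (B, 4, 4) rigid transform of each bone
    V = vertices.shape[0]
    homo = np.hstack([vertices, np.ones((V, 1))])               # (V, 4)
    per_bone = np.einsum('bij,vj->vbi', bone_transforms, homo)  # (V, B, 4)
    blended = np.einsum('vb,vbi->vi', weights, per_bone)        # (V, 4)
    return blended[:, :3]
```

Pure blend skinning produces well-known artifacts (collapsing joints, "candy-wrapper" twists) precisely because it is rigid per bone, which is why combining it with non-rigid deformation, as the abstract describes, matters.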
Stanford researchers have patented an automated method for generating articulated human models consisting of both morphological and kinematic model data.
Stanford researchers have patented the "Wolverine," a mobile, wearable haptic device designed for simulating the grasping of rigid objects in virtual reality.
A team of researchers from the Stanford Artificial Intelligence Laboratory has patented a portfolio of innovations that harness depth-sensing technology to analyze human motion for touch-free device control and motion capture.