Stanford researchers have created DiffusionPoser, a single diffusion-based generative model that reconstructs human motion in real time from arbitrary body sensor configurations, with broad applications across motion capture use cases.
Researchers in the Murmann Mixed-Signal Group have developed a pipelined chip architecture for networks built on inverted residuals and linear bottlenecks, enabling energy-efficient machine learning inference on edge devices.
Active manipulation of light beams is required for a range of emerging optical technologies, including sensing, optical computing, virtual/augmented reality, dynamic holography, and computational imaging.
A Stanford bioengineering researcher has developed an optical-sensor-based muscle and body motion tracking system for use with prosthetics and wearable human-machine interfaces.
Stanford researchers have developed KleinPAT, a method for creating sound models in seconds, making it cost-effective to simulate sounds for many different objects in a virtual environment.
Stanford inventors have developed a new approach to resolving the vergence-accommodation conflict, a common contributor to discomfort in virtual reality setups.