Docket #: S24-435

Deep and Wide Learning: A Novel Learning Framework via Synergistic Learning of Inter- and Intra-Data Representations for Augmented Data-Driven Inference

Stanford researchers have developed a novel representation learning model that improves data-driven learning by incorporating both inter- and intra-data relationships. This approach significantly enhances model performance and inference accuracy.

Large language models (LLMs) have advanced deep learning applications, but they face critical challenges, including high computational demands, limited interpretability, and poor generalizability. Current deep neural networks (DNNs) are often brute-force in nature and overlook important data relationships, which can lead to suboptimal learning, slower training, and poor performance on noisy data.

To address these limitations, Stanford researchers developed a novel approach that incorporates relationships across the system to enhance model performance and efficiency. The key innovation is a multi-channel network that combines standard convolutional operations with a Bayesian approach to extract salient data features, capturing local contextual details while preserving global structures. This learning strategy provides an effective path to more efficient learning, higher accuracy, and faster convergence.
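
The listing does not disclose implementation details, but the two-channel idea can be illustrated concretely. Below is a minimal PyTorch sketch, assuming one convolutional channel for local context and one variational (Bayesian) channel over the full input for global structure; the class names (MultiChannelNet, BayesianLinear), layer sizes, and fusion-by-concatenation step are illustrative assumptions, not the disclosed design.

```python
import torch
import torch.nn as nn

class BayesianLinear(nn.Module):
    """Variational linear layer: weights are drawn from a learned Gaussian
    posterior each forward pass (one common reading of a "Bayesian approach";
    the actual mechanism in the invention is not disclosed)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_logvar = nn.Parameter(torch.full((out_features, in_features), -5.0))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Reparameterization trick: sample weights, keep gradients flowing.
        std = torch.exp(0.5 * self.w_logvar)
        w = self.w_mu + std * torch.randn_like(std)
        return nn.functional.linear(x, w, self.bias)

class MultiChannelNet(nn.Module):
    """Two parallel channels over the same input: a convolutional channel for
    local (intra-data) context and a Bayesian channel over the flattened input
    for global structure; their features are fused before classification."""
    def __init__(self, num_classes=10):
        super().__init__()
        # Channel 1: standard convolutions capture local contextual detail.
        self.conv_channel = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # Channel 2: a Bayesian layer over the whole input preserves global structure.
        self.bayes_channel = nn.Sequential(
            nn.Flatten(),
            BayesianLinear(28 * 28, 32), nn.ReLU(),  # -> (batch, 32)
        )
        self.head = nn.Linear(32 + 32, num_classes)

    def forward(self, x):
        fused = torch.cat([self.conv_channel(x), self.bayes_channel(x)], dim=1)
        return self.head(fused)

# Example: one forward pass on a batch of 28x28 grayscale images.
model = MultiChannelNet()
logits = model(torch.randn(4, 1, 28, 28))
print(logits.shape)  # torch.Size([4, 10])
```

In a sketch like this, the convolutional channel supplies locally pooled features while the sampled Bayesian weights carry distribution-level information about the whole input; concatenating the two gives the classifier access to both views at once.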

Stage of Development

Proof of Concept

Applications

  • Natural language processing
  • AI/ML in drug/biologic discovery
  • Healthcare diagnostics
  • Computer vision
  • Autonomous systems
  • Web analytics

Advantages

  • Use of inter- and intra-data relationships
  • Greatly surpasses state-of-the-art DNNs in accuracy
  • Improved computational efficiency by orders of magnitude

Publications

  • Forthcoming
