Docket #: S24-435

Deep and Wide Learning: A Novel Learning Framework via Synergistic Learning of Inter- and Intra-Data Representations for Augmented Data-Driven Inference

Stanford researchers have developed a deep and wide learning (DWL) model that enhances data-driven learning by incorporating inter- and intra-data relationships. This strategy significantly improves both model performance and inference accuracy.

Large language models (LLMs) have advanced deep learning applications across many domains. However, they face critical challenges, including high computational demands, lack of interpretability, and poor generalizability. Current deep neural networks (DNNs) overlook inter-data relationships, which leads to suboptimal learning and slower training, and they perform poorly on noisy data.

To address these limitations, Stanford researchers developed the DWL approach, which incorporates both intra-data and inter-data relationships to improve model performance and efficiency. The key innovation is the dual-interactive-channel network (D-Net), which extracts and integrates high-dimensional (HD) and low-dimensional (LD) features. HD features are derived from standard convolutional operations and capture local contextual details. LD features are obtained through Bayesian dimensionality reduction, which learns the structural characteristics of the input data and preserves its global relationships. Integrating both feature types gives DWL more efficient learning, higher accuracy, and faster convergence.
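To make the dual-channel idea concrete, here is a minimal PyTorch sketch of a D-Net-style architecture. It is an illustration, not the researchers' implementation: the layer sizes, the MNIST-shaped input, and the class name DualChannelNet are assumptions, and the paper's Bayesian dimensionality reduction is stood in for by a plain learned linear projection.

```python
# Illustrative dual-channel network: an HD convolutional channel and an
# LD global-projection channel, fused before classification.
import torch
import torch.nn as nn

class DualChannelNet(nn.Module):
    def __init__(self, in_channels=1, img_size=28, ld_dim=16, num_classes=10):
        super().__init__()
        # HD channel: standard convolutions capture local contextual detail.
        self.hd = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (B, 64, 1, 1)
            nn.Flatten(),             # -> (B, 64)
        )
        # LD channel: a learned low-dimensional projection of the whole
        # input, standing in for Bayesian dimensionality reduction; it
        # aims to preserve global structure in a few dimensions.
        self.ld = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_channels * img_size * img_size, ld_dim),
            nn.Tanh(),
        )
        # Fusion: concatenate HD and LD features, then classify.
        self.head = nn.Linear(64 + ld_dim, num_classes)

    def forward(self, x):
        return self.head(torch.cat([self.hd(x), self.ld(x)], dim=1))

model = DualChannelNet()
logits = model(torch.randn(8, 1, 28, 28))  # batch of 8 MNIST-sized inputs
print(logits.shape)                        # torch.Size([8, 10])
```

The design point of this sketch is that the classifier head sees local (HD) and global (LD) evidence jointly, so gradients shape both channels together, which is one way the claimed faster convergence and noise robustness could arise.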

Stage of Development

Proof of Concept

Applications

  • Natural language processing
  • AI/ML in drug/biologic discovery
  • Healthcare diagnostics
  • Computer vision
  • Autonomous systems
  • Web analytics

Advantages

  • Exploits both inter- and intra-data relationships
  • Surpasses state-of-the-art DNNs in accuracy
  • Improves computational efficiency by orders of magnitude
