Scaling the Science of Intelligence
History will likely remember this era as the moment humanity first encountered a non-biological intelligence of its own making. But bridging the gap between today’s large language models and the Artificial General Intelligence (AGI) of the future requires a fundamental shift in strategy.
The current trajectory has proven one thing conclusively: industrial-scale execution works. Feeding massive datasets into ever-larger architectures has yielded incredible results, but we are now engineering ahead of our understanding. We have effectively built brighter, yet inherently limited, incandescent bulbs before grasping the underlying material science. Current models are triumphs of empirical engineering: their capabilities scale with the power we feed them, yet they lack the efficiency and reasoning required for true general intelligence.
The challenge of the coming years is not to abandon scaling, but to expand its geometry. We must move beyond the single axis of parameter count and build “brighter filaments” through broad scientific synthesis. This demands that we scale the connections themselves, deploying our infrastructure to bridge AI with mathematics and the full spectrum of the natural sciences. Just as the transition to modern lighting required a leap from brute-force heating to material science, reaching AGI requires us to scale our scientific horizon, breaking current bottlenecks by discovering the laws of intelligence that lie at the intersection of these fields.
The Search for First Principles
We stand at a threshold where architectural scaling offers diminishing returns. To advance, the field must transcend the simple expansion of datasets and establish the first principles of intelligence. We must treat the discovery of these governing dynamics not as a philosophical debate, but as an optimization problem as grand and resource-intensive as model training. This broadening is necessary because the industry operates with an incomplete scientific foundation: we lack a unified mathematical framework that connects the empirical performance of these systems with a fundamental understanding of how intelligence emerges from them. As SAIR co-founder and Fields Medalist Terence Tao frames the challenge: "In physics, we have made significant progress in understanding how macroscopic laws emerge from microscopic first principles, for instance deriving the laws of fluids or thermodynamics from the interactions of individual particles. A major challenge for the twenty-first century will be to similarly understand how emergent machine learning laws, such as those relating to scaling or transfer learning, can emerge from the mechanics of training a neural network or transformer, and to locate useful mathematical models of real-world data that can lead to such understanding."
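To make the kind of derivation Tao describes concrete, consider the textbook micro-to-macro argument from kinetic theory, included here purely as an illustration: the ideal gas law emerges from nothing more than particle collisions.

% Kinetic theory: a macroscopic law derived from microscopic dynamics.
% Pressure is the momentum flux of N molecules of mass m in volume V;
% combining it with the mean kinetic energy per molecule gives the gas law.
\[
  P = \frac{1}{3}\,\frac{N}{V}\, m \langle v^{2} \rangle,
  \qquad
  \tfrac{1}{2} m \langle v^{2} \rangle = \tfrac{3}{2} k_{B} T
  \;\;\Longrightarrow\;\;
  PV = N k_{B} T.
\]

The open problem is the analogous chain for learning systems, where the “particles” are weights and gradient updates and the “gas laws” are the empirical regularities of training.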
With these first principles in hand, our roadmap evolves from relying solely on empirical scaling laws to deriving theoretical ones. While current scaling trends act as remarkably predictive heuristics, they describe what happens rather than why. By scaling the science, we turn these curves from descriptive observations into prescriptive engineering controls, ensuring that rather than optimizing against an invisible ceiling, we can continuously architect the structure of the system to accommodate indefinite growth.
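The distinction is easiest to see in a concrete formula. One widely cited empirical scaling law is the parametric loss curve of Hoffmann et al. (2022), shown here purely as an illustration of the current state of the art:

% An empirical scaling law: expected loss as a function of parameter
% count N and training tokens D. E, A, B, alpha, and beta are
% constants fitted to observed training runs, not derived from theory.
\[
  L(N, D) \;=\; E \;+\; \frac{A}{N^{\alpha}} \;+\; \frac{B}{D^{\beta}}
\]

Such a fit is an excellent forecasting tool, but its constants are measured rather than understood. A theoretical scaling law would derive the exponents from the structure of the data and the architecture, telling us in advance where the curve bends and how to move it.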
SAIR: The Foundation for Science and AI Research
Transitioning to a fundamental science of intelligence requires a new institutional structure. Nature ignores administrative boundaries; future breakthroughs demand the unification of currently fragmented disciplines, from theoretical physics and neuroscience to mathematics and high-performance computing. SAIR exists to drive this convergence. SAIR co-founder and IPAM Director Dimitri Shlyakhtenko highlights the necessity of interdisciplinary research: “We have seen firsthand how ideas from theoretical physics and mathematics have impacted AI in areas such as convolutional neural networks, stable diffusion, equivariant neural networks, and others. Questions posed by making better AI are inherently interdisciplinary in nature and require collaborations across disciplines to make progress.”
We are building the infrastructure to operationalize this multi-dimensional scaling. SAIR functions as a high-bandwidth platform designed to maximize connectivity, transforming scientific inquiry from isolated silos into a unified, reactive network. In this system, knowledge does not just accumulate; it propagates. A theoretical breakthrough in one domain instantly ripples across the entire structure, recalibrating assumptions and triggering new lines of inquiry in adjacent fields. By scaling these collaborative interactions, SAIR creates an engine of discovery capable of expanding the AGI frontier in every direction simultaneously.
We pair this collaborative framework with abundant computational resources. SAIR co-founder and Dean of UCLA Physical Sciences Miguel A. García-Garibay articulates our vision: “SAIR’s mission includes the identification, categorization, and description of the fundamental principles behind AGI, the internal structures that make it excel in all areas of human-like cognition, and their deployment in pursuit of scientific and technological advancement.” By applying the scale of modern AI to the scientific process itself, we enable researchers to rapidly prototype and verify theoretical models across these disciplines. This compresses the latency between intuition and evidence, turning the search for AGI from a guessing game into a systematic engine of discovery. Supported by Nobel, Fields, and Turing laureates, we operate with the certainty of David Hilbert: “We must know. We will know.”