On The SAIR: Episode 1 — AI × Science with Terence Tao & Chuck Ng
In October, we recorded the very first episode of On The SAIR, a new podcast from the Science & AI Research Foundation — where we explore how artificial intelligence can responsibly accelerate discovery across every field of science.
For our debut episode, host Peter sat down with Professor Terence Tao (UCLA) and Chuck Ng (Co-Founder, World Leading Scientists Institute) to discuss what AI means for the future of research — from mathematics to biology, from education to ethics.
Here are a few highlights that stood out:
1. 🧠 AI is a partner, not a replacement
Terence reminded us that the real promise of AI isn’t about replacing human scientists — it’s about removing the repetitive and time-consuming parts of research. When AI handles the “drudge work,” people can focus on creativity, intuition, and breakthrough thinking.
2. 📚 A new kind of literature copilot
No scientist can keep up with the vastness of human knowledge. Tools that help organize, summarize, and connect what’s already known will be transformative. In science, that’s half the battle.
3. 🤝 Start where we can measure
Chuck emphasized that mathematics offers a natural starting point for AI-for-science — it’s structured, benchmarkable, and verifiable. Once we understand the patterns of responsible use, those methods can expand across disciplines.
4. 🎓 Education must evolve
Both speakers agreed that AI should be integrated into learning, not banned. Students can use AI tools — but they must show prompts, reasoning, and process. As Terence put it, “You can’t just give the answer — you have to show your work.”
Projects, hands-on applications, and balanced policies will shape a new generation of scientific thinkers.
5. ⚖️ Risks are real, but manageable
The biggest risks aren't a Terminator-style takeover of humanity; that framing is often more marketing than reality. The real challenges lie in authenticity and trust, from misinformation to deepfakes. Transparency, cultural norms, and sound policy will matter far more than fear.
6. 🧩 Verification matters
We should never lose the ability to verify AI outputs. The rule of thumb: use AI only as far as you can trust and check its outputs. In mathematics, reliable tools can verify results automatically; in other sciences, lab replication and simulation play that role.
AI and science have always shared a common goal — to understand the world more deeply. What’s changing is how we collaborate with intelligence itself.
We’re just getting started.
Watch now: https://youtu.be/Rm1mHfwlS2w?si=NQ-zNEl84iMlrXqo
Subscribe to On The SAIR for upcoming conversations with the thinkers shaping the next era of scientific discovery.