AutoGrad Changed Everything (Not Transformers) [Dr. Jeff Beck]
Summary
This video delves into foundational AI/ML concepts such as automatic differentiation (AutoGrad) and contrasts today's large language models, built on Transformers, with brain-inspired approaches, arguing for smarter rather than merely bigger AI. It offers a deep conceptual view of AI's evolution and likely future, making it valuable for students learning AI and for educators teaching advanced topics in the field.
Description
Dr. Jeff Beck, a mathematician turned computational neuroscientist, joins us for a fascinating deep dive into why the future of AI might look less like ChatGPT and more like your own brain.

*What if the key to building truly intelligent machines isn't bigger models, but smarter ones?*

In this conversation, Jeff makes a compelling case that we've been building AI backwards. While the tech industry races to scale up transformers and language models, Jeff argues we're missing something fundamental: the brain doesn't work like a giant prediction engine. It works like a scientist, constantly testing hypotheses about a world made of *objects* that interact through *forces*, not pixels and tokens.

*The Bayesian Brain*
Jeff explains how your brain is essentially running the scientific method on autopilot. When you combine what you see with what you hear, you're doing optimal Bayesian inference without even knowing it. This isn't just philosophy; it's backed by decades of behavioral experiments showing humans are surprisingly efficient at handling uncertainty.

*AutoGrad Changed Everything*
Forget transformers for a moment. Jeff argues the real hero of the AI boom was automatic differentiation, which turned AI from a math problem into an engineering problem. But in the process, we lost sight of what actually makes intelligence work.

*The Cat in the Warehouse Problem*
Here's where it gets practical. Imagine a warehouse robot that has never seen a cat. Current AI would either crash or make something up. Jeff's approach? Build models that *know what they don't know*, can phone a friend to download new object models on the fly, and keep learning continuously. It's like giving robots the ability to say "wait, what IS that?" instead of confidently being wrong.

*Why Language is a Terrible Model for Thought*
In a provocative…
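To make the cue-combination claim in *The Bayesian Brain* segment concrete: for two independent Gaussian cues, the optimal Bayesian estimate is a precision-weighted average of the individual estimates. The sketch below is illustrative and not from the video; the function name and the example numbers are invented for the demonstration.

```python
import numpy as np

def fuse_gaussian_cues(mu_v, sigma_v, mu_a, sigma_a):
    """Optimal (Bayesian) fusion of two independent Gaussian cues.

    Each cue is weighted by its precision (1/variance), so the more
    reliable cue dominates, and the fused estimate is always at least
    as certain as either cue alone.
    """
    prec_v, prec_a = 1.0 / sigma_v**2, 1.0 / sigma_a**2
    mu_post = (prec_v * mu_v + prec_a * mu_a) / (prec_v + prec_a)
    sigma_post = np.sqrt(1.0 / (prec_v + prec_a))
    return mu_post, sigma_post

# Illustrative: vision locates a source at 10 deg (sd 2), audition at 14 deg (sd 4).
mu, sigma = fuse_gaussian_cues(10.0, 2.0, 14.0, 4.0)
print(f"fused estimate: {mu:.2f} deg, sd {sigma:.2f}")  # ~10.80 deg, sd ~1.79
```

This precision-weighted model is the standard account behind the behavioral experiments the episode alludes to: the fused estimate sits closer to the more reliable cue, which is the pattern people actually show.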
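The *AutoGrad Changed Everything* point can likewise be illustrated with a toy forward-mode automatic differentiator built on dual numbers: the chain rule becomes mechanical bookkeeping, so exact derivatives come along for free with ordinary evaluation. This is a minimal sketch of the general technique, not the reverse-mode systems (e.g., PyTorch's autograd) that power deep learning.

```python
import math

class Dual:
    """Forward-mode automatic differentiation via dual numbers:
    every value carries its derivative, and operator overloading
    applies the chain rule mechanically."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def sin(x):
    # Chain rule: d/dx sin(u) = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# d/dx [x*sin(x) + 3x] at x = 2: no symbolic algebra, no finite differences.
x = Dual(2.0, 1.0)        # seed: dx/dx = 1
y = x * sin(x) + 3 * x
print(y.val, y.dot)       # f(2) ≈ 7.819, f'(2) = sin(2) + 2*cos(2) + 3 ≈ 3.077
```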
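For *The Cat in the Warehouse Problem*, one simple way a model can "know what it doesn't know" is to defer whenever its predictive distribution is too uncertain, for instance by thresholding predictive entropy. This sketch is an assumption for illustration, not Jeff's actual method; the threshold and labels are made up.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of a predictive distribution; high entropy means 'I don't know'."""
    probs = np.clip(probs, 1e-12, 1.0)
    return -np.sum(probs * np.log(probs))

THRESHOLD = 1.0  # illustrative; would be tuned per task

def classify_or_ask(probs, labels):
    """Return a label, or defer ('phone a friend') when too uncertain."""
    if predictive_entropy(probs) > THRESHOLD:
        return "unknown object: request a new object model"
    return labels[int(np.argmax(probs))]

labels = ["box", "pallet", "forklift"]
print(classify_or_ask(np.array([0.97, 0.02, 0.01]), labels))  # confident: "box"
print(classify_or_ask(np.array([0.40, 0.35, 0.25]), labels))  # uncertain: defers
```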
More Videos
When AI Discovers the Next Transformer — Robert Lange (1:18:07)
The Dangerous Illusion of AI Coding? - Jeremy Howard (1:26:40)
What If Intelligence Didn't Evolve? It "Was There" From the Start! - Blaise Agüera y Arcas (55:49)
The Brain Is Just Specialized Agents Talking To Each Other — Dr. Jeff Beck (46:57)
Why AI Has a Plato Problem — Mazviita Chirimuuta (53:38)