
AI News & Updates

Latest AI announcements, product launches, and industry developments.

14 items in this collection

Robotron Was Supposed to Be Humanly Impossible. So I Built an AI to Break It.

After teaching an AI to dominate Tempest, I pointed it at Robotron: 2084… and things got gloriously out of control.

Robotron isn’t just another classic arcade game. It’s one of the most chaotic, punishing, brilliantly engineered games of the golden age—an all-direction, twin-stick panic attack running on a Motorola 6809, custom Williams blitter hardware, and the kind of game design that assumes the player is always one bad decision away from disaster.

In this episode, I dig into my still-in-progress attempt to build an AI that can survive—and eventually master—Robotron. Along the way, I explore what makes the game so uniquely difficult, how its enemy logic and scoring system create constant tactical tradeoffs, and why this challenge is fundamentally different from Tempest. Tempest was elegant. Robotron is chaos management.

Even better, I got to compare notes with Robotron creators Eugene Jarvis and Larry DeMar, who were generous enough to share stories and technical details from the original development process: the 6809-based GIMIX systems, custom in-house tools, hand-managed assembly modules, blitter tricks, and the design philosophy that turned two joysticks and a single screen into one of the most intense games ever made.

So this is part arcade history, part reverse engineering, part AI experiment, and part excuse to spend far too much time obsessing over one of the greatest cabinets ever built. Can an AI beat a game that was designed to overload the human brain with too many threats at once? That’s what we’re here to find out. If you enjoy deep dives into old hardware, classic games, low-level code, and wildly impractical technical adventures, you’re in the right place.

#Robotron2084 #AI #Arcade #RetroGaming #GameDev #EugeneJarvis #LarryDeMar #Tempest #Assembly #DavePlummer


This Paradox Splits Smart People 50/50

Two boxes, one choice, and $1,000,000.

Sponsored by Brilliant - To learn for free on Brilliant for a full 30 days, go to https://brilliant.org/veritasium. Our viewers also get 20% off an annual Premium subscription, which gives you unlimited daily access to everything on Brilliant.

If you’re looking for a molecular modelling kit, try Snatoms, a kit I invented where the atoms snap together magnetically - https://ve42.co/SnatomsV

Sign up for the Veritasium newsletter for weekly science updates - https://ve42.co/Newsletter

A huge thank you to Dr. Arif Ahmed, Dr. Adam Elga, Dr. Kenny Easwaran, Dr. Peter Slezak, Dr. David Wolpert, Dr. Scott Aaronson & Dr. Michael Huemer for their invaluable expertise and contributions to this video on Newcomb’s Paradox. The causal expected utility calculation was based on a post by Professor Huemer here - https://ve42.co/Huemer

Chapters:
0:00 What is Newcomb’s Paradox?
3:24 Pick 1 Box!
5:24 Pick Both!
6:38 What is decision theory?
11:27 What does Newcomb’s Paradox say about free will?
13:25 What does it mean to be rational?
16:49 Mutually Assured Destruction
20:02 Precommitment Is The Ultimate Strategy

References can be found here - https://ve42.co/NewcombRefs

Special thanks to our Patreon supporters: Albert Wenger, Sam Lutfi, Michael Krugman, Sinan Taifour, Marinus Kuivenhoven, Lee Redden, Richard Sundvall, Ubiquity Ventures, David Johnston, Juan Benet, Paul Peijzel, Meekay, Evgeny Skvortsov, Blake Byers, Dave Kircher, Gnare, Anton Ragin, KeyWestr, meg noah, Tj Steyn, Orlando Bassotto, Adam Foreman, Balkrishna Heroor, Jesse Brandsoy, Garrett Mueller, Kyi, Ibby Hadeed, Bertrand Serlet, wolfee, David Tseng, Bruce, Alexander Tamas, Alex Porter, Jon Jamison, Charles Ian Norman Venn, armedtoe, Jeromy Johnson, Hayden Christensen, Robson, EJ Alexandra, Daniel Martins, Shalva Bukia, Moebiusol - Cristian, Martin Paull, Data Don, Vahe Andonians, Mark Heising, Hong Thai Le, Parsee Health, Kelcey Steele

Writers: Sulli Yost &
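The "expected utility" split the video describes can be sketched numerically. The payoffs below are the standard Newcomb formulation ($1,000,000 in the opaque box if one-boxing was predicted, $1,000 always in the transparent box); the 0.99 predictor accuracy and 0.5 prior are illustrative numbers I've chosen, not figures from the video:

```python
# Standard Newcomb payoffs: the opaque box holds $1,000,000 iff the
# predictor foresaw one-boxing; the transparent box always holds $1,000.
MILLION, THOUSAND = 1_000_000, 1_000

def evidential_eu(accuracy):
    """Expected value of each choice under evidential decision theory:
    condition on the prediction matching your choice with the given accuracy."""
    one_box = accuracy * MILLION
    two_box = accuracy * THOUSAND + (1 - accuracy) * (MILLION + THOUSAND)
    return one_box, two_box

def causal_eu(p_million):
    """Expected value under causal decision theory: the box contents are
    already fixed, so two-boxing dominates by exactly $1,000 for any prior."""
    one_box = p_million * MILLION
    two_box = p_million * (MILLION + THOUSAND) + (1 - p_million) * THOUSAND
    return one_box, two_box

print(evidential_eu(0.99))  # one-boxing wins: roughly 990,000 vs 11,000
print(causal_eu(0.5))       # two-boxing wins by exactly 1,000, whatever the prior
```

The tension is visible in the arithmetic: the evidential calculation rewards one-boxing for almost any predictor accuracy above ~50.05%, while the causal calculation says two-boxing is better by $1,000 no matter what probability you assign to the million being in the box.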


What the New ChatGPT 5.4 Means for the World

Just 48 hours after releasing GPT 5.3 Instant, OpenAI has released GPT 5.4 Thinking, so either there is an imminent singularity or perhaps we are being distracted from other news. This video will give 9 crucial bits of context, not just on the GPT 5.4 drop but on the background to the meltdown between the Pentagon and Anthropic. What does this say about the state of AI progress, your job, and what comes next?

Check out my fast-growing (!) app, free to use, with code INSIDER15 for 15% off paid tiers: https://lmcouncil.ai
AI Insiders ($9!): https://www.patreon.com/AIExplained

Chapters:
00:00 - Introduction
01:06 - GPT 5.4 Breakdown
05:06 - Closing the Loop
06:35 - Spiky Performance
10:31 - Advice
11:32 - Less Encouraging Developments - Fired Like Dogs
17:45 - But Used in Iran

GPT 5.4: https://openai.com/index/introducing-gpt-5-4/
Hallucinations: https://artificialanalysis.ai/evaluations/omniscience
Investment Banking Bench: https://x.com/bradlightcap/status/2029684672343728452
Move 37: https://x.com/nasqret/status/2029628846518010099
System Card: https://deploymentsafety.openai.com/gpt-5-4-thinking/gpt-5-4-thinking.pdf
Prediction Market Scandal: https://www.wired.com/story/openai-fires-employee-insider-trading-polymarket-kalshi/
GPT 5.3 Instant: https://openai.com/index/gpt-5-3-instant/
GDPVal: https://openai.com/index/gdpval/
Claude in Iran: https://www.washingtonpost.com/technology/2026/03/04/anthropic-ai-iran-campaign
‘Like Dogs’: https://x.com/AndrewCurran_/status/2029605783311470679
Altman leak: https://www.cnbc.com/2026/03/03/sam-altman-tells-openai-staff-operational-decisions-up-to-government.html
Original 2024 Switch: https://archive.fo/20240116172526/https://www.bloomberg.com/news/articles/2024-01-16/openai-working-with-us-military-on-cybersecurity-tools-for-veterans#selection-6173.83-6173.226
Amodei Original Memo: https://www.theinformation.com/articles/read-anthropic-ceos-memo-attacking-openais-mendacious-pentagon-announcement?rc=sy0ihq
Anthropi
