YouTube Playlist

ChatGPT for Education

Videos about using ChatGPT effectively in educational settings, from prompting tips to practical applications.

15 items in this collection
1

GPT 5.2: OpenAI Strikes Back

Full GPT-5.2 breakdown - did OpenAI reclaim the crown? A story of tokens, time and cost, plus 9 details you wouldn’t get just from reading the headlines.

https://www.youtube.com/@eightythousandhours
AI Insiders ($9!): https://www.patreon.com/AIExplained
https://lmcouncil.ai

Chapters:
00:00 - Introduction
00:55 - Better than Human @ Professional Tasks?
04:42 - Test time Compute
07:05 - Benchmark Selection
09:32 - Simple Results + council comparison
13:01 - Long Context
13:52 - Self-Improvement
15:00 - 10 Years + New Models

Release Page: https://openai.com/index/introducing-gpt-5-2/
GPT 5.2 Benchmark Comparison: https://www.reddit.com/r/singularity/comments/1pka1y9/gpt52_all_20_benchmarks_rankings_and_pricing/
https://storage.googleapis.com/gweb-uniblog-publish-prod/original_images/gemini_3_table_final_HLE_Tools_on.gif
https://lmcouncil.ai/benchmarks
Charxiv: https://charxiv.github.io/#leaderboard
GDPval: https://arxiv.org/pdf/2510.04374
My vid: https://www.youtube.com/watch?v=oK5LxMaROSA
Kilpatrick: https://x.com/OfficialLoganK/status/1999270402712023158/photo/1
Noam Brown: https://x.com/polynoamial/status/1999189845164667132
New Model in New Year: https://www.theinformation.com/articles/openai-developing-garlic-model-counter-googles-recent-gains?rc=sy0ihq
10 Years of OpenAI: https://openai.com/index/ten-years/
GPQA: https://x.com/idavidrein/status/1841265634170278063
ARC-AGI 1-2: https://arcprize.org/arc-agi/2/
Sunday Robotics: https://x.com/tonyzzhao/status/1991204839578300813
Non-hype Newsletter: https://signaltonoise.beehiiv.com/
Podcast: https://aiexplainedopodcast.buzzsprout.com/
https://lmcouncil.ai

2

GPT-5.4 Is Here — I Tested the New ChatGPT Model

👉 Join the fastest-growing AI education platform! Try it free and explore 20+ top-rated courses in AI, including the Introduction to AI Agents course: https://bit.ly/skill-leap

I walk through how I test the new GPT 5.4 Thinking model and see how it compares to GPT 5.2 and other AI models. Here is the official blog post: https://openai.com/index/introducing-gpt-5-4/

In this video I test the new ChatGPT update that just released. I talk about GPT 5.4 Thinking, GPT 5.4 Pro, and the new GPT 5.3 Instant model. I explain how the Instant model answers right away while the Thinking model takes time to reason before it replies. I also show what the Pro model is meant for and when it makes sense to use it.

I run real tests with GPT 5.4 Thinking to see what it can actually do. I use it for research, deep web search, building a PowerPoint presentation, and creating a full Excel spreadsheet with formulas. I also test coding by asking it to build a small AI tools website and a simulation app. This helps show how well the new model handles coding, knowledge work, and tool use.

I also talk about the new computer use capability that lets the model work on the web, handle tasks, and help with things like emails or data entry. Another update is lower hallucination rates, which means the model should make fewer incorrect claims. Along the way I compare GPT 5.4 with GPT 5.2 and talk about how it stacks up against other AI models like Claude and Gemini. This gives a quick look at where the new ChatGPT model stands right now. If you want to see what GPT 5.4 Thinking can do with research, spreadsheets, presentations, coding, and everyday prompts, this walkthrough shows the results from real tests.

3

Claude Skills: Build Your First AI Assistant (Never Repeat Prompts Again)

How to use Claude Skills. Every time you start a new AI chat, it forgets everything -- your tone, your format, your whole workflow. Claude Skills fix that by letting you teach Claude your exact process once so it follows it every time. In this video, I'll show you how to find, build, and use skills on both claude.ai and Claude Co-work, step by step with real examples you can try today.

This video covers everything you need to know about Claude Skills, from what they are and how they work to building your own from scratch. You'll see how to use the built-in Skill Creator to generate skills through a simple conversation, how to test and refine them, and the difference between capability skills and workflow skills. Then we move into Claude Co-work on the desktop app, where skills get even more powerful because Claude can access your local files. I walk through two full builds -- a content repurposer that turns YouTube subtitle files into blog posts, Twitter threads, and newsletters, and a pricing reply skill that reads your rate sheet PDF and drafts professional client responses. We also connect Gmail so Claude can read emails and draft replies directly from your inbox.

Timestamps:
0:00 Intro
1:11 What is a skill
2:40 Using skills on the web
4:12 Building a skill with the Skill Creator
7:22 Testing your new skill
8:26 Editing and refining skills
10:19 Capability skills vs workflow skills
11:24 Moving to Claude Co-work
12:44 Building a content repurposer skill
17:03 Automating with scheduled tasks
22:02 Building a pricing reply skill
26:03 Connecting Gmail to Claude
28:24 Final thoughts and next steps

Claude Co-work Beginner's Guide: https://youtu.be/vv09DHej6gg
Subscribe: https://www.youtube.com/@TeachersTech?sub_confirmation=1
Website: https://www.teachers.tech
Instagram: https://www.instagram.com/teacherstechlab
LinkedIn: https://www.linkedin.com/in/teacherstech

4

Deep Dive into LLMs like ChatGPT

This is a general-audience deep dive into the Large Language Model (LLM) AI technology that powers ChatGPT and related products. It covers the full training stack of how the models are developed, along with mental models of how to think about their "psychology", and how to get the best use of them in practical applications. I have one "Intro to LLMs" video already from ~a year ago, but that is just a re-recording of a random talk, so I wanted to loop around and do a much more comprehensive version.

Instructor Andrej was a founding member at OpenAI (2015) and then Sr. Director of AI at Tesla (2017-2022), and is now a founder at Eureka Labs, which is building an AI-native school. His goal in this video is to raise knowledge and understanding of the state of the art in AI, and empower people to effectively use the latest and greatest in their work. Find more at https://karpathy.ai/ and https://x.com/karpathy

Chapters:
00:00:00 introduction
00:01:00 pretraining data (internet)
00:07:47 tokenization
00:14:27 neural network I/O
00:20:11 neural network internals
00:26:01 inference
00:31:09 GPT-2: training and inference
00:42:52 Llama 3.1 base model inference
00:59:23 pretraining to post-training
01:01:06 post-training data (conversations)
01:20:32 hallucinations, tool use, knowledge/working memory
01:41:46 knowledge of self
01:46:56 models need tokens to think
02:01:11 tokenization revisited: models struggle with spelling
02:04:53 jagged intelligence
02:07:28 supervised finetuning to reinforcement learning
02:14:42 reinforcement learning
02:27:47 DeepSeek-R1
02:42:07 AlphaGo
02:48:26 reinforcement learning from human feedback (RLHF)
03:09:39 preview of things to come
03:15:15 keeping track of LLMs
03:18:34 where to find LLMs
03:21:46 grand summary

Links:
- ChatGPT https://chatgpt.com/
- FineWeb (pretraining dataset): https://huggingface.co/spaces/HuggingFaceFW/blogpost-fineweb-v1
- Tiktokenizer: https://tiktokenizer.vercel.app/
- Transformer Neural Net 3D visualizer: https:/

5

Let's reproduce GPT-2 (124M)

We reproduce the GPT-2 (124M) from scratch. This video covers the whole process: first we build the GPT-2 network, then we optimize its training to be really fast, then we set up the training run following the GPT-2 and GPT-3 papers and their hyperparameters, then we hit run, and come back the next morning to see our results, and enjoy some amusing model generations. Keep in mind that in some places this video builds on the knowledge from earlier videos in the Zero to Hero Playlist (see my channel). You could also see this video as building my nanoGPT repo, which by the end is about 90% similar.

Links:
- build-nanogpt GitHub repo, with all the changes in this video as individual commits: https://github.com/karpathy/build-nanogpt
- nanoGPT repo: https://github.com/karpathy/nanoGPT
- llm.c repo: https://github.com/karpathy/llm.c
- my website: https://karpathy.ai
- my twitter: https://twitter.com/karpathy
- our Discord channel: https://discord.gg/3zy8kqD9Cp

Supplementary links:
- Attention is All You Need paper: https://arxiv.org/abs/1706.03762
- OpenAI GPT-3 paper: https://arxiv.org/abs/2005.14165
- OpenAI GPT-2 paper: https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf
- The GPU I'm training the model on is from Lambda GPU Cloud, I think the best and easiest way to spin up an on-demand GPU instance in the cloud that you can ssh to: https://lambdalabs.com

Chapters:
00:00:00 intro: Let’s reproduce GPT-2 (124M)
00:03:39 exploring the GPT-2 (124M) OpenAI checkpoint
00:13:47 SECTION 1: implementing the GPT-2 nn.Module
00:28:08 loading the huggingface/GPT-2 parameters
00:31:00 implementing the forward pass to get logits
00:33:31 sampling init, prefix tokens, tokenization
00:37:02 sampling loop
00:41:47 sample, auto-detect the device
00:45:50 let’s train: data batches (B,T) → logits (B,T,C)
00:52:53 cross entropy loss
00:56:42 optimization loop: overfit a single batch
01:02:00 data loader lite
01:06:14 paramet
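The "data batches (B,T) → logits (B,T,C)" plus cross-entropy step in the chapter list amounts to averaging the negative log-probability that the model assigns to each target token. A minimal dependency-free sketch of that loss (toy shapes and values, not the video's actual PyTorch code):

```python
import math

def cross_entropy(logits, targets):
    # logits: (B, T, C) nested lists of raw scores; targets: (B, T) token ids.
    # Returns the average negative log-likelihood of the target at each position.
    total, count = 0.0, 0
    for b in range(len(targets)):
        for t in range(len(targets[b])):
            row = logits[b][t]
            m = max(row)  # subtract max for numerical stability
            exps = [math.exp(x - m) for x in row]
            p = exps[targets[b][t]] / sum(exps)  # softmax probability of target
            total += -math.log(p)
            count += 1
    return total / count

# B=1, T=2, C=3 toy batch: uniform logits give loss = ln(C) ≈ 1.0986
loss = cross_entropy([[[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]], [[1, 2]])
```

In the video this is a single `F.cross_entropy` call over flattened logits; the sketch only unpacks what that call computes.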

6

Let's build the GPT Tokenizer

The Tokenizer is a necessary and pervasive component of Large Language Models (LLMs), where it translates between strings and tokens (text chunks). Tokenizers are a completely separate stage of the LLM pipeline: they have their own training sets, training algorithms (Byte Pair Encoding), and after training implement two fundamental functions: encode() from strings to tokens, and decode() back from tokens to strings. In this lecture we build from scratch the Tokenizer used in the GPT series from OpenAI. In the process, we will see that a lot of weird behaviors and problems of LLMs actually trace back to tokenization. We'll go through a number of these issues, discuss why tokenization is at fault, and why, ideally, someone out there finds a way to delete this stage entirely.

Chapters:
00:00:00 intro: Tokenization, GPT-2 paper, tokenization-related issues
00:05:50 tokenization by example in a Web UI (tiktokenizer)
00:14:56 strings in Python, Unicode code points
00:18:15 Unicode byte encodings, ASCII, UTF-8, UTF-16, UTF-32
00:22:47 daydreaming: deleting tokenization
00:23:50 Byte Pair Encoding (BPE) algorithm walkthrough
00:27:02 starting the implementation
00:28:35 counting consecutive pairs, finding most common pair
00:30:36 merging the most common pair
00:34:58 training the tokenizer: adding the while loop, compression ratio
00:39:20 tokenizer/LLM diagram: it is a completely separate stage
00:42:47 decoding tokens to strings
00:48:21 encoding strings to tokens
00:57:36 regex patterns to force splits across categories
01:11:38 tiktoken library intro, differences between GPT-2/GPT-4 regex
01:14:59 GPT-2 encoder.py released by OpenAI walkthrough
01:18:26 special tokens, tiktoken handling of, GPT-2/GPT-4 differences
01:25:28 minbpe exercise time! write your own GPT-4 tokenizer
01:28:42 sentencepiece library intro, used to train Llama 2 vocabulary
01:43:27 how to set vocabulary set? revisiting gpt.py transformer
01:48:11 training new tokens, example of prompt compression
0
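The core BPE training step the chapter list walks through (count consecutive pairs, merge the most common pair into a new token id) fits in a few lines of Python. This is an illustrative toy version, not the lecture's actual code; function names are my own:

```python
# Toy sketch of one Byte Pair Encoding training step: start from raw UTF-8
# bytes, find the most frequent adjacent pair, and merge it into a new token.
from collections import Counter

def get_pair_counts(ids):
    # Count occurrences of each consecutive pair of token ids.
    return Counter(zip(ids, ids[1:]))

def merge(ids, pair, new_id):
    # Replace every occurrence of `pair` in `ids` with the single id `new_id`.
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

ids = list("aaabdaaabac".encode("utf-8"))        # start from raw bytes (ids 0..255)
top = get_pair_counts(ids).most_common(1)[0][0]  # most frequent pair: (97, 97) i.e. "aa"
ids = merge(ids, top, 256)                       # 256 = first new token id after the bytes
```

A full tokenizer just repeats this in a loop until the vocabulary reaches the desired size, recording each merge so encode() can replay it and decode() can invert it.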

9

Let's build GPT: from scratch, in code, spelled out.

We build a Generatively Pretrained Transformer (GPT), following the paper "Attention is All You Need" and OpenAI's GPT-2 / GPT-3. We talk about connections to ChatGPT, which has taken the world by storm. We watch GitHub Copilot, itself a GPT, help us write a GPT (meta :D!). I recommend people watch the earlier makemore videos to get comfortable with the autoregressive language modeling framework and basics of tensors and PyTorch nn, which we take for granted in this video.

Links:
- Google colab for the video: https://colab.research.google.com/drive/1JMLa53HDuA-i7ZBmqV7ZnA3c_fvtXnx-?usp=sharing
- GitHub repo for the video: https://github.com/karpathy/ng-video-lecture
- Playlist of the whole Zero to Hero series so far: https://www.youtube.com/watch?v=VMj-3S1tku0&list=PLAqhIrjkxbuWI23v9cThsA9GvCAUhRvKZ
- nanoGPT repo: https://github.com/karpathy/nanoGPT
- my website: https://karpathy.ai
- my twitter: https://twitter.com/karpathy
- our Discord channel: https://discord.gg/3zy8kqD9Cp

Supplementary links:
- Attention is All You Need paper: https://arxiv.org/abs/1706.03762
- OpenAI GPT-3 paper: https://arxiv.org/abs/2005.14165
- OpenAI ChatGPT blog post: https://openai.com/blog/chatgpt/
- The GPU I'm training the model on is from Lambda GPU Cloud, I think the best and easiest way to spin up an on-demand GPU instance in the cloud that you can ssh to: https://lambdalabs.com . If you prefer to work in notebooks, I think the easiest path today is Google Colab.

Suggested exercises:
- EX1: The n-dimensional tensor mastery challenge: Combine the `Head` and `MultiHeadAttention` into one class that processes all the heads in parallel, treating the heads as another batch dimension (answer is in nanoGPT).
- EX2: Train the GPT on your own dataset of choice! What other data could be fun to blabber on about? (A fun advanced suggestion if you like: train a GPT to do addition of two numbers, i.e. a+b=c. You may find it helpful to predict the digits of c in reverse order, as the typica
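For a taste of the attention mechanism the video builds, here is a minimal single-head causal self-attention sketch in plain Python. To stay dependency-free it assumes queries = keys = values = the raw embeddings (the real `Head` in the video applies learned linear projections first):

```python
import math

def causal_attention(x, d_k):
    # x: (T, d) token embeddings as nested lists; single head, no learned
    # projections (q = k = v = x), which is a simplifying assumption here.
    T, d = len(x), len(x[0])
    out = []
    for i in range(T):
        # scaled dot-product scores against positions 0..i only (causal mask)
        scores = [sum(a * b for a, b in zip(x[i], x[j])) / math.sqrt(d_k)
                  for j in range(i + 1)]
        m = max(scores)  # softmax with max-subtraction for stability
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        w = [v / z for v in w]
        # output = attention-weighted average of the (visible) value vectors
        out.append([sum(w[j] * x[j][k] for j in range(i + 1)) for k in range(d)])
    return out

# Two one-hot "tokens": position 0 can only attend to itself;
# position 1 mixes both, weighted toward the more similar token (itself).
out = causal_attention([[1.0, 0.0], [0.0, 1.0]], 2)
```

The video's EX1 then vectorizes exactly these loops into batched tensor operations across all heads at once.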

10

GPT 5.4 Is Leaking in Pro Accounts — And It's a BEAST!

OpenAI quietly released GPT 5.3 Instant — their "fix" for ChatGPT being cringe and preachy. I tested it, compared it to Claude Sonnet 4.6, and then discovered something way bigger: GPT 5.4 is already leaking in ChatGPT Pro accounts, and the outputs are genuinely nuts. Flight combat sims, 3D voxel worlds, insane SVG art — all from single prompts. Plus, Qwen 3.5 is here and running on phones for free.

Check out Box AI — sponsor of today's video: https://www.box.com/ai?utm_source=youtube&utm_medium=paidinfluencer&utm_theme=icm&utm_campaign=FY27_Q1_MattVidPro_Feb27

Matt Wolf's coverage on the recent AI controversy:
Latest: https://youtu.be/_CIL2g1oMSQ?si=ZqiLsD4N_71ssa9O&t=966
Initial: https://www.youtube.com/watch?v=JSetfLwM5sI

▼ Link(s) From Today’s Video:
GPT 5.3: https://x.com/OpenAI/status/2028893701427302559
Chubby 5.4 & Lisan: https://x.com/kimmonismus/status/2028783243311407531 https://x.com/scaling01/status/2028806282254114951
Angel shares can's post: https://x.com/Angaisb_/status/2028630896836817210
can's original post: https://x.com/marmaduke091/status/2028604854143176958
Bijan's Dogfight demo: https://x.com/Ominousind/status/2028813851509039384
Insane Minecraft SVG: https://x.com/EthanLipnik/status/2028742967473955237
Shaun Ralston's SF SVG: https://x.com/shaunralston/status/2028703722726150589
Cheta A&B testing: https://x.com/chetaslua/status/2028773840114057627
Adrien Qwen 3.5 on iphone: https://x.com/adrgrondin/status/2028568689709084919
Locally AI App: https://apps.apple.com/us/app/locally-ai-local-ai-chat/id6741426692
Qwen 3.5: https://qwen.ai/blog?id=qwen3.5

MattVidPro Discord: https://discord.gg/mattvidpro
Follow Me on Twitter: https://twitter.com/MattVidPro
Buy me a Coffee! https://buymeacoffee.com/mattvidpro

▼ Extra Links of Interest:
General AI Playlist: https://www.youtube.com/playlist?list=PLrfI66qWYbW3acrBQ4qltDBsjxaoGSl3I
Instagram: instagram.com/mattvidpro
Tiktok: tiktok.com/@mattvidpro
Gaming & Extras Channel: https://www.youtube.com/@MattV

12

What the New ChatGPT 5.4 Means for the World

Just 48 hours after releasing GPT 5.3 Instant, OpenAI have released GPT 5.4 Thinking, so either there is an imminent singularity or perhaps we are being distracted from other news. This video will give 9 crucial bits of context, not just on the GPT 5.4 drop but on the background to the meltdown between the Pentagon and Anthropic. What does this say about the state of AI progress, your job, and what is next?

Check out my fast-growing (!) app, free to use, with code INSIDER15 for 15% off paid tiers: https://lmcouncil.ai
AI Insiders ($9!): https://www.patreon.com/AIExplained

Chapters:
00:00 - Introduction
01:06 - GPT 5.4 Breakdown
05:06 - Closing the Loop
06:35 - Spiky Performance
10:31 - Advice
11:32 - Less Encouraging Developments - Fired Like Dogs
17:45 - But Used in Iran

GPT 5.4: https://openai.com/index/introducing-gpt-5-4/
Hallucinations: https://artificialanalysis.ai/evaluations/omniscience
Investment Banking Bench: https://x.com/bradlightcap/status/2029684672343728452
Move 37: https://x.com/nasqret/status/2029628846518010099
System Card: https://deploymentsafety.openai.com/gpt-5-4-thinking/gpt-5-4-thinking.pdf
Prediction Market Scandal: https://www.wired.com/story/openai-fires-employee-insider-trading-polymarket-kalshi/
GPT 5.3 Instant: https://openai.com/index/gpt-5-3-instant/
GDPVal: https://openai.com/index/gdpval/
Claude in Iran: https://www.washingtonpost.com/technology/2026/03/04/anthropic-ai-iran-campaign
‘Like Dogs’: https://x.com/AndrewCurran_/status/2029605783311470679
Altman leak: https://www.cnbc.com/2026/03/03/sam-altman-tells-openai-staff-operational-decisions-up-to-government.html
Original 2024 Switch: https://archive.fo/20240116172526/https://www.bloomberg.com/news/articles/2024-01-16/openai-working-with-us-military-on-cybersecurity-tools-for-veterans#selection-6173.83-6173.226
Amodei Original Memo: https://www.theinformation.com/articles/read-anthropic-ceos-memo-attacking-openais-mendacious-pentagon-announcement?rc=sy0ihq
Anthropi

13

GPT 5.4 Pro Is the STRONGEST AI Model I’ve Tested (But Costs a TON)

OpenAI’s GPT 5.4 is genuinely strong, but the interesting part is where it breaks, where Pro pulls ahead, and where Claude or Gemini still win. I pushed it through real one-shot creative coding tests: a 3D engine sim, instrument pack generation, a driving game, a water globe, and interactive educational tools. In this video, GPT 5.4 Pro looks like the strongest no-compromises model overall, but Gemini still rules multimodality for me and Claude stays very competitive.

▼ Link(s) From Today’s Video:
release blog: https://openai.com/index/introducing-gpt-5-4/
5.4 mods pokemon red: https://x.com/backus/status/2029711059247059282
Angel Minecraft Demo: https://x.com/Angaisb_/status/2029635731585372598
Adam's Skills Comparison: https://x.com/AdamHoltererer/status/2029926291021894016
Image fails: https://x.com/himanshustwts/status/2029864003217089009
API Pricing: https://developers.openai.com/api/docs/pricing/

MattVidPro Discord: https://discord.gg/mattvidpro
Follow Me on Twitter: https://twitter.com/MattVidPro
Buy me a Coffee! https://buymeacoffee.com/mattvidpro

▼ Extra Links of Interest:
General AI Playlist: https://www.youtube.com/playlist?list=PLrfI66qWYbW3acrBQ4qltDBsjxaoGSl3I
Instagram: instagram.com/mattvidpro
Tiktok: tiktok.com/@mattvidpro
Gaming & Extras Channel: https://www.youtube.com/@MattVidProGaming

Let's work together!
- For brand & sponsorship inquiries: https://tally.so/r/3xdz4E
- For all other business inquiries: mattvidpro@smoothmedia.co

Thanks for watching MattVideoProductions! I make all sorts of videos here on Youtube! Technology, Tutorials, and Reviews! Enjoy your stay here. All suggestions, thoughts, and comments are greatly appreciated.
