Teaching My Son About AI (The Non-Technical Version)
Archael asked me yesterday what I do for work. I told him I help computers think. He looked at me like I was pulling his leg. “Dad, computers can’t think.”
Smart kid.
The Question
“So what do you actually do?”
I showed him ChatGPT. Typed in “write a story about a robot who wants to learn to paint.” Watched his eyes widen as the response appeared, sentence by sentence.
“Is that real? Did a person write that?”
“Sort of. A really smart computer program looked at millions of stories and learned patterns. Then it used those patterns to create something new.”
“But computers just follow rules, right? Like in my game?”
This is the conversation I never expected to have with a seven-year-old.
Breaking It Down
I tried explaining it without the jargon:
“You know how you learn to draw? You see lots of pictures, you practice, you get better?”
He nodded.
“The computer does something similar. It ‘reads’ millions of books and learns how words usually go together. Then when you ask it something, it predicts what words should come next, based on what it learned.”
“So it’s guessing?”
“Very educated guessing, yes.”
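(For any technically curious parents reading along: the "educated guessing" idea demos nicely with a toy next-word predictor. This is a minimal sketch of the intuition only, not how modern models actually work; they use neural networks rather than raw word counts, and the tiny training text here is invented for the demo.)

```python
import random
from collections import defaultdict, Counter

# Toy "training data" -- real models learn from billions of words.
text = (
    "the robot wants to paint the sky "
    "the robot wants to learn "
    "the sky is blue and the paint is blue"
)

# Count which word tends to follow each word (a simple bigram model).
counts = defaultdict(Counter)
words = text.split()
for current, nxt in zip(words, words[1:]):
    counts[current][nxt] += 1

def predict_next(word):
    """Guess the next word based on what usually followed it."""
    followers = counts.get(word)
    if not followers:
        return None
    # Weighted random choice: frequent followers are more likely.
    options = list(followers.keys())
    weights = list(followers.values())
    return random.choices(options, weights=weights)[0]

# Generate a short "story" one educated guess at a time.
word = "the"
sentence = [word]
for _ in range(8):
    word = predict_next(word)
    if word is None:
        break
    sentence.append(word)
print(" ".join(sentence))
```

Run it a few times and you get plausible-sounding but meaningless strings, which turns out to be a concrete way to show the kids that predicting is not the same as understanding.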
The Andriel Factor
The interesting part came when Andriel joined us. He’s 10, on the autism spectrum, and processes information differently.
I showed him the same demo. His question was different: “Does it understand what it’s writing?”
That’s the billion-dollar question, isn’t it?
“I don’t think so,” I admitted. “It’s really good at predicting what words should come next, but understanding? That’s different.”
Andriel thought about this. “Like when I memorize things but don’t always know what they mean?”
Exactly like that. Sometimes the most insightful observations come from unexpected places.
What I’m Actually Teaching Them
Not the mechanics of transformers or embeddings. They'll learn those if they want to. Instead:
Critical thinking. Just because a computer says something doesn’t make it true. Even if it sounds confident.
Tools, not magic. AI is a tool, like a calculator or a search engine. Useful, but not infallible.
Human creativity matters. The AI can write a story, but it doesn’t care about it. You do. That matters.
Ask good questions. The quality of what you get from AI depends on what you ask. Good questions are a skill.
The Bigger Picture
We’re raising the first generation that will never know a world without AI. That’s simultaneously exciting and terrifying.
They'll need to understand these systems well enough to use them effectively, while keeping enough skepticism not to trust them blindly. It's a balance I'm still figuring out myself.
What Surprised Me
Archael’s main takeaway: “So it’s like autocomplete on my iPad, but really, really good?”
Yes. Exactly that. Sometimes the simplest explanations are the best.
Andriel’s question later: “If it learns from what people write, does it learn bad things too?”
Kid’s asking about bias and training data at 10. I love it.
The Dad Perspective
Part of me wants to protect them from becoming too dependent on these tools. The other part knows that’s futile—they’ll grow up with AI as ubiquitous as smartphones.
My job isn't to gatekeep the technology. It's to help them develop the judgment to use it wisely.
That’s harder than any technical challenge I face at work.
Practical Takeaways
If you’re explaining AI to kids:
- Use analogies they understand
- Be honest about limitations
- Encourage them to question and experiment
- Focus on the “why” not just the “how”
- Make it hands-on when possible
They don’t need to understand backpropagation. They need to understand that computers are powerful tools that humans created, control, and are responsible for.
The rest they can learn later if they’re interested.
For now, Archael wants to use it to help write his Pokemon fanfiction. Andriel wants to know if AI can help him identify different types of trains.
I’ll take that as a win.