My Kids Can’t Avoid AI Slop Anymore
AI videos used to be obviously fake. We made fun of them. That's no longer true, and it's scary.
One of our evening rituals with the kids is scrolling through TikTok videos before bed. They don’t get to hold the phone, but together, we laugh at kids bumping their heads in weird ways, top 10 fart videos, and killer trick shots. My algorithm is doomed, and at some point, I should make an alternate TikTok account only for me.
Anyway.
It’s become nearly impossible to avoid AI slop while scrolling. By “AI slop,” I mean videos that are clearly fabricated—a squirrel that starts talking, a person flying off a slide into the sky, things like that. The problem these days is the “clearly” part.
It used to be clear to everyone, including my five-year-old, that something was AI slop. Now, though, we’re increasingly running into situations where none of us (me, my wife, my five-year-old, my nine-year-old) is 100 percent sure one way or the other.
It’s scary, and only getting worse as the technology gets better.
One of the first real conversations I had with my nine-year-old about AI happened over the summer, because we were constantly finding videos that made her go “that looks weird” and she couldn’t fully articulate why. I tried to explain that it’s because of a technology called “AI” that “makes fake things look real, but it’s always slightly off.”
It’s honestly kind of hard to explain how nefarious AI is to a young child because “makes fake things look real” sounds, uh, cool as hell and if I was nine, I’d want that.
I told my daughter that people often call stuff like this “AI slop,” and it prompted her to proudly declare “AI slop!” when we’d scroll past a video that was, in fact, AI slop.
Sometimes, she’d fall into a trap I’ve seen happen online, dismissing a style of art she simply didn’t care for as “AI slop.” Teaching the difference was, and still is, a challenge.
The real inflection point came after OpenAI launched its Sora app last fall, which made it easy for the average person to make fake videos that were no longer obviously fake. But crucially for our TikTok adventures, there was a little Sora badge on every Sora video, which meant we could identify Sora-built videos and happily dismiss them.
Nowadays, we rarely see the Sora badge.
Some of that is because use of the awful app has declined in recent months, and some of that is because people are tearing the badge off in order to trick people. Children, naturally, are easy to trick. People want to believe in extraordinary things, and there are now moments, more and more, where my kids ask me to stop and not skip a video.
“But it’s just more AI slop?” I tell them.
“It looks cool!” they tell me.
At times, they are not wrong. But I skip it anyway, even if I’m fighting a losing battle. It’s hardest with my five-year-old, who is the perfect age to be enchanted by the slop.
Their interactions with AI-produced junk are firmly in my control when I’m the pilot.
That’s less true when they’re on their own, and I’m sure there’s tons of slop all over YouTube Kids and Shorts. I’ve made peace with knowing parts of AI, like LLMs, are simply here to stay because of their utility, flaws and all. I have more trouble making peace with the stuff that’s built on theft, including theft of my own work. I’m watching this transition, this theft, in real time. My children will be young adults after the transition happens, when the technology reaches a point of near-zero distinction between reality and whatever a prompt produces. The ethics will be even harder to fathom then.
I’m at somewhat of a loss, besides limiting access to AI tools in the short term and helping them understand why they’re bad—or, at least, complicated—in the long term.
How are you handling these ideas and technologies at home?
Have a story idea? Want to share a tip? Got a funny parenting story? Drop Patrick an email.
Also:
My daughter knows what ChatGPT is—or, at least, she’s mentioned it before. She does not have access to it at home or at school, but does a friend? I’m not sure.
Access to an obvious cheating tool like ChatGPT is one of my biggest fears, and in a worst-case scenario, I’ll block access to services like it at the network level at home.
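For what it’s worth, here’s a rough sketch of what I mean by a network-level block, assuming a router or Pi-hole-style DNS filter that accepts hosts-format entries (the exact domains are my guess and would need checking):

    # send ChatGPT’s web addresses nowhere for every device on the home network
    0.0.0.0 chatgpt.com
    0.0.0.0 chat.openai.com

It’s a blunt tool, and a determined kid on cellular data could still route around it, but it covers the “I forgot the laptop was open” scenario.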
My other worry is that ChatGPT, or tools like it, will invade schools as computers have. So far, that does not appear to be the case, but it feels like an inevitable shift.



TikTok is only going to get worse too
My kid’s first encounter with AI came recently, when we were watching the Roku Channel and there was a commercial made with generative AI, full of uncanny-valley-looking people. We tried to talk through what it was. Now every time a commercial comes on, he asks if it’s AI or not.
It really bums me out that that’s the world we’re heading for now, where everything will be in question. And kids’ creativity could potentially be stifled with easy access to these things.