17 Comments
Andy's avatar

My kid's first encounter with AI was recently when we were watching the Roku Channel and a generative AI-made commercial came on with uncanny valley-looking people. We tried to talk through what it was. Now every time a commercial comes on he asks if it's AI or not.

It really bums me out that that's the world we are heading for now, where everything will be in question. And kids' creativity could potentially be stifled with easy access to these things.

Patrick Klepek's avatar

Right. The problem is right now, you can still call out some signs. I don't think that's going to be possible in a year or so. I think people will still want "authentic" art, but the idea that you have to specifically seek it out? Ugly.

Shannon Edris's avatar

It's so hard! I feel like AI is a tool, and tools can be helpful. I've been trying to liken AI videos to movie magic: we know it's pretend because we have critical thinking skills. But man, I'm having to help my kid hone his filter for BS way earlier than I anticipated. Also, while I can (currently) control what he sees in our house, it gets a lot harder when he's at school or a friend's house. I need him to be able to know when something is fake no matter where he is.

Patrick Klepek's avatar

Yep, that's the problem when folks are like "no AI in my house," because I think it betrays talking to your kids about what they're going to encounter outside the house. Which doesn't mean turning them over to chatbots—we don't allow those in the house!—but I don't believe in earmuffs, either.

Shannon Edris's avatar

Yeah, earmuffs or "eyemuffs" are just going to create gullible teens and adults... Which is not ideal. As AI gets better though it's going to be harder to tell reality from fiction, which *gestures broadly* is already a real big problem.

David R Griswold's avatar

It's already officially in schools. The policies around it are mixed as heck, but the movement definitely seems to be toward "Embrace and use" with different levels of policies around whether it counts as plagiarism or when it counts as plagiarism. But my children have had assignments where they are explicitly told to use Google Gemini to generate something and then edit it or modify it, for example, and they've received guidance in using it as a tool for self-study or "personalized feedback" on writing. Platforms that are heavily integrated into schools, like Google Classroom and code.org, have integrated AI tools front and center, with encouragements to use them by both teachers and students. Some of these uses are worse than others, of course, but the point is they are inescapable.

I do believe that schools are *trying* to figure out how to teach it / use it in ways that are more helpful than harmful. I certainly am. I teach coding of all things, so pretending that it doesn't exist is a great way to get blindsided, so I speak a lot about the purpose of learning and struggling, when it might be okay to use AI to generate some code that you don't understand (basically the same situations you might copy and paste code you don't understand, and with the same level of citation) and when that isn't helpful, when AI will make mistakes, when it can be useful for learning, and how you can justify that you actually learned something from AI in a way that is forthright and not cheating. But it's a constant discussion, especially since the kids have (a) been conditioned to use it and (b) been conditioned to hide that fact. They don't understand how to use it for *not* cheating, tbh.

My biggest challenge right now is *images*. My students use images in both web design and game design. I spend a lot of time coaching them on how, if they don't want to make their own assets, to find freely available open source assets online. I explicitly forbid AI generation of images. The problem I run into now is that many of the images on those free platforms are THEMSELVES AI-generated, sometimes cited as such and sometimes not! So it becomes this ethical dilemma - if I am allowing them to use AI-generated images (not subject to copyright; thank you, for once, copyright law) that others made, am I being hypocritical for not letting them make their own? Obviously I'm reducing environmental impact some marginal amount, but arguably I'm also removing even the tiny bit of creative spark that prompt development offers; is that a net positive? I don't even know.

Alex's avatar

Our generation (I’m of similar age to you Patrick) had to learn to identify scam emails and avoid downloading Trojans from websites—something many of our parents struggle to do. I’m hopeful that our children’s generation will develop skills to separate AI content mentally in a similar fashion. I’m not sure what that will look like yet!

It might be hard to distinguish AI from the “real”, but I expect socially there will be a norm of questioning content, placing higher value on validated content sources or influencers (along with scandals when some of them get caught using AI if they said otherwise). We, in turn, may struggle more than our kids in twenty years to do the same.

Michael's avatar

TikTok is only going to get worse too

Patrick Klepek's avatar

Absolutely. The social platforms are a plague on this front. Their bottom lines are pushing them into AI.

The AI Architect's avatar

This is such an important piece. The erosion of trust in what's real is happening so fast, and your point about kids not knowing why something looks "off" really hits home. I've noticed the same struggle with teaching critical thinking when the fakes are getting so good - it's like trying to teach them to spot a counterfeit when the counterfeits keep improving every week.

Patrick Klepek's avatar

The problem is what happens when you can't lose that trust, because you never had a chance to gain it to begin with? I worry about the baseline in a few years.

Jacob's avatar

You're a little far from having to worry about it but I'm most concerned when the schools themselves start pushing AI onto the kids. The university I work at has new AI majors, AI classes, and a whole platform where any of the students can use a bunch of the models for free. I can't imagine pushing all of that and then also trying to instill some sense of ethics as they do their homework.

B Merriman's avatar

I think we're at the threshold right now. I saw some public school teachers posting relevant professional development materials they had to complete over their winter break. Mostly about usage, not about literacy and spotting pitfalls unfortunately.

Zachary Eslick's avatar

I highly disagree. It's very simple. Number one: don't give your kid an iPad or a tablet of any kind. Number two: if you do, don't let them have YouTube/TikTok. Done, easy.

Patrick Klepek's avatar

I respect parents, including yourself if you are one, who choose to go such a strict route, but it does not reflect the vast majority of parents, in my opinion. Plus, wholly locking down AI in the home does not prevent them from *encountering* AI. It's in TV commercials! I'd much rather have those conversations with my kids than leave encounters to chance.

Anders Lau's avatar

But is the answer to basically wall them off from devices and video services until they're 10? 14? 16? 18?

And is the plan for them to have no restrictions when they're older?

What if their friends watch it? What if they see it at the mall, or in school, or while at an outing?

Much like with knives, power tools, and other useful but dangerous items, I think the intent of isolating while preparing for introduction is ideal; but then the question still remains - how DOES one prepare for a world of AI?

Sean Enright's avatar

Yes — wall them off from services, not necessarily from devices. Most (all?) apps are explicitly designed to be addictive. Introducing addictive “substances” to developing brains is bad, right? I don’t give my kids social media access for the same reason I don’t give them cigarettes.

~14 feels like a reasonable minimum. I don't plan to lock everything down forever; I will gradually loosen restrictions as they get older and hope I've built healthy habits and educated them along the way.

Exposure is going to happen whether I want it to or not. My kids know what Italian brain rot is. They know the ballerina song, but they've never seen the videos; they don't have access to Roblox or Fortnite (or any online games). I don't care if they learn about these things from peers — that's part of growing up.

Lastly - you’re 100% spot on comparing AI to power tools. AI is a tool (or a set of tools) and should be treated like one. I wouldn’t give my kids access to a band saw, but I’d absolutely start them with a drill.

I'm an AI developer and I build AI tools — my kids know this. They see me use AI regularly, and when we use it together, it looks a lot like how we use YouTube.

We'll use ChatGPT to figure out what we can make for dinner based on what's in the fridge, or settle an argument about a random rule in Catan. My kids aren't allowed on YouTube on their own; when we do use it, it's intentional and for learning — how to play an instrument, fix a TV, or learn a new soccer drill.