Episode Transcript
[00:00:00] Speaker A: 72% of teenagers are already using AI companions. 1/3 right now are taking their serious problems to chatbots instead of humans. And the scariest stat to put on top of all of that is that 63% of parents have no idea what their kids are doing with AI. This is a growing problem in our culture. If you don't have a strategy for how to handle it, you're going to be left behind and your kids are going to be taking cues from the culture.
So we're going to do you a favor. Today on Abraham's Wallet, we have a special guest, Ryan Findlay, a guy I've known for years. He hails from Colorado and he's going to be talking to us about tips that we can use to be helping our families. And I think it's going to change the conversations you have around the dinner table tonight.
Run your home and your dough like a biblical boss.
Okay, Ryan, thanks for being with us.
I would love for you to take us back and just describe for us how a high school and college teacher ends up deep in the tech startup world and then gets this focus on AI. How did that transition happen in your life?
[00:01:17] Speaker B: Yeah, thanks for having me.
So I started my career in education.
My dad was a public school teacher for his whole career, and I swore I was never going to do that. I ended up doing exactly that: teaching public schools, private schools, US and abroad, high school, college, and then graduate level. My last assignment, I was at a school called African Leadership University, which was at the time the newest university in the world.
And we were creating this brand new degree. It was a skills focused, future focused degree.
Basically, like, how do you go be a problem solver in the world?
And this is 2018, '19. We're looking ahead and we're like, okay, well, AI is coming soonish, right? So it needs to be part of the package of the degree. That was the first time that AI got on my radar. On the side, I was working on a tech company that, to my surprise, in 2019 ended up getting funded. So me and a couple guys, we moved to the Bay Area and we started this tech company. It's an education technology company.
And so all of a sudden, this distant thing, literally distant, across the world in Africa, but also this sort of far-off thing, got close. I moved to the Bay Area and I realized, no, we're on the cusp of it. And I, like a lot of people, didn't know exactly what shape it was going to take. Even OpenAI didn't know that ChatGPT was going to be this explosion of popularity, but I knew it was close. And then when ChatGPT drops, I had this, oh my goodness, moment. I'm sitting in tech. My mind is on the product and what this is going to do to our business, but my heart is very much on the classroom and this school that I'd helped build over the last number of years. And I think about my dad and people like him who are in public schools, and just being like, oh my gosh, God love them, they're just not gonna be ready for this. So I actually started consulting to schools on the side, just sort of volunteering and supporting however I could, saying, hey, I understand the classroom and teaching, and I understand technology. If I can help you see around the corner here and get prepared for what's coming, I'd love to do that. And so that's my journey in a nutshell.
[00:03:21] Speaker A: Maybe there's not an answer to this question, but when you say, I'm working on it, I'm aware of OpenAI, et cetera, and I'm thinking of my parents and the kids in the classroom: was there a specific moment, like, I read this thing or I saw this thing, and I thought, oh my goodness, schools aren't ready for this?
[00:03:40] Speaker B: I think it was one of the early tweets from somebody who had early access to GPT-3.5, the model that became ChatGPT, the one everyone used in that watershed moment. Somebody I followed had early access, and they had gotten it to write a paper on Romeo and Juliet or The Great Gatsby or something like that. And it was adequate.
And that was this, like, oh, my gosh, that's as good as any 10th grader is going to do for the most part.
And I was just like, oh, that's it.
We're just playing out the string here, because this is going to change everything.
[00:04:16] Speaker C: So, Ryan, with that background, now you're going to schools and you've got insight into what their mindset is. But when you show up at a school, whether that's a university or a high school or any setting you're in, what is their reaction when you say this? Because I've been in rooms where parents either were kind of sticking their head in the sand, or you've got the weird AI guy who's like, we should do it ten times harder than everybody else. But what do you hear from schools specifically? What are they thinking about AI right now?
[00:04:52] Speaker B: Most schools, unfortunately, when I come to them and say, hey, let's talk about this, let's dig into it, to be honest, most of the time I'm fighting to be somewhere in their top five list of problems to solve, because of any number of things that they have going on with their student body, with their parents, their teachers, with what books are we squabbling about, or what's the new state policy. There are so many things happening in schools right now, the noise is so high, that being able to be proactive about something is really hard. And there are only so many schools who frankly have the capacity to be on the front foot with this at all. And no matter what action you take right now, you're not on the front foot; there's just more forward or more backward. The people who are leaning forward at all are the ones who, for whatever set of reasons, have margin to be able to think about this. So that's the first thing. The second thing is, if they're thinking about it at all, it's very much from a deficit mindset. And I don't mean that totally negatively. I just mean that it's only about cheating, and the whole conversation is about academic integrity, don't use it on your paper, things that I can get behind. Like, you should not farm out your paper to ChatGPT.
You should not get it to do all of your hard thinking. We can talk more about that, but it really is just a one-or-zero, black-or-white, don't-do-it kind of thing. And the problem is that kids are naturally curious. They have time and capacity. This thing is on every device and every platform they use. And so they're so tempted to use it, even just to see the forbidden fruit. Of course, kids always have that, right? So when it's a strong anti movement, it creates more curiosity; it almost creates a black market of AI use. So that's kind of the zero state: we don't have time for this. The next is, we just shut it down, we say absolutely not. Here or there, there's a whole world of what would be more efficacious ways of using it, more thoughtful, nuanced ways of using it. A lot of times they're just not there yet. Some teachers, yes, some schools, yes. But we are still in the very early adopter phase of AI in schools.
[00:07:05] Speaker C: That makes sense.
Before we got on recording, you were talking to me about the thing you're trying to do in schools, and we're going to shift over to inside of homes and families and households in a second, but it's helping them build competencies and frameworks for an AI-everywhere world.
Can you just give us a few bullets for what that looks like? If it's not just here's how you stop them from cheating.
What kind of things are you actually proposing, and what does that look like for, say, a typical high school student?
[00:07:39] Speaker B: Yeah, really good question. So first of all, there are things that we need our teachers to get comfortable with in this new world order.
One of them is: hey, if you're going to send home a paper and say, bring this back in three days, you have to assume that some number of your kids are just going to have AI do that, or a significant chunk of that. And also, whatever AI detection software you think you have, it does not work. You're going to create more issues for yourself by trying to catch kids in this game than by just being more thoughtful about your assessments and your assignments.
So that's the first thing: working with them to think about, hey, how do we redo assessment, and how do we figure out what kids actually know and what they don't know when everything is a quick chat away?
And I like to talk to teachers and schools about, let's rethink this. I'm gonna do the take-home test, I'm gonna have you go write the five-paragraph essay and bring it back tomorrow? It's just not an effective tool anymore. Even if some kids choose to do it the right way, you still have too many that won't.
[00:08:45] Speaker C: Yeah, just on that real quick, Steve. Before AI existed, we started writing blog posts at Abraham's Wallet. And I thought, I wonder, you know, Steven has a very unique writing style. I told an AI detection software, go look at Steven's articles on Abraham's Wallet.
94% likelihood that they were written by AI, even though they were created long before ChatGPT came out or any of that.
[00:09:10] Speaker A: So is that what they said?
[00:09:12] Speaker C: Yeah. To your credit, both of you guys. Steve, I guess you write like a witty AI. And Ryan is 100% correct that it's not just missing things; it's also maybe accusing somebody of using AI who couldn't possibly have.
[00:09:27] Speaker B: Yeah. And there are brewing lawsuits about this, because a kid gets flagged as AI, they fail the assignment, they get a 4.6 versus a 4.7, they don't go to Harvard. That's the stuff that happens with these AI detection softwares. So they're honestly more trouble than they're worth, I think. So, yeah, there's a new set of competencies, a new set of pedagogies, instructional methods, that teachers are going to have to start to work in. And that's hard. If you've been in the profession 20 years, you kind of have to put everything aside and say, all right, I'm starting fresh. And if you've been teaching three years, you're still figuring out how to be a teacher, and now you're figuring out how to be a new kind of teacher. It's tough. I've been in those seats and I know that's difficult. And a lot of times, a huge number of teachers don't actually get any training from their school. So that's a problem, right?
Take it to the level of the student. We have to answer the question: what does it look like to be an effective future employee, citizen, et cetera, in the world of AI? And so we have created a competency model for kids, starting at kindergarten and going all the way up, not just to be active users of AI, but to be understanders, to conceptually understand AI and situate it in their lives, in their workflows, in their thinking processes. And we want to give kids, I advocate that they get, a sense of AI, of what it does.
So for a young kid, you could say, hey, AI is like a weather forecast, right? Is the weather forecast always right? No, it's not always right. It's directionally right. It's like, yeah, it's probably going to be 60 or above tomorrow, but it might be a little bit cloudy and the forecast didn't know that, or it might rain in the afternoon for a few minutes and it didn't pick that up. So you're giving kids a sense of, okay, I understand this thing, and I can apply that to AI. Right? We have helpers in our life. A car is a helper; it helps us get from point A to point B. AI can be a helper; it can help us play a song or get information. But it's not something that we love. Right? We don't fall in love with our car. We don't fall in love with AI. So even at a young age, we want to give kids some intuition about what this thing is, what it isn't, and how we should regard it in our lives. And then as they get older and get hands on keys and start to play around with different prompts, maybe use something like an image generator or a reasoning model, then we want to ratchet up the sense of: when do you use it? What's the right part of your thinking process for it, and in what measure do I use it? These are mostly things that schools haven't even begun to think about, because we keep going back to, oh, you cheated. And it's like, let's get a little more nuanced here. If they used it to brush up a topic sentence, is that de facto bad?
Maybe not, right? If they used it to write the whole middle paragraph? Yeah, that's bad. If they used it to go find more evidence for their argument, is that bad? Maybe not. If they used it to do the bibliography, is that fine? Yeah. So there are levels of nuance here that I think we're mostly missing. Those are the conversations I like to have with schools.
[00:12:29] Speaker A: Let me just ask a question, Ryan. My parental intuition is telling me, like, when you mentioned the first time a student uses an image generator, for instance, I kind of feel like if that is proctored by parents, that's probably the safest thing for the kid. We're kind of doing a family learning experiment, it's part of our conversation, your parents are involved as coaches, and we're telling you to watch out for this and this. It keeps that conversation in the context of family, as opposed to, we don't know nothing about no AI; get out there, kid, and find out what you got to do to make good grades. That seems very stick-your-head-in-the-sand.
Do you think I'm thinking right?
[00:13:16] Speaker B: That's exactly right. Yeah. I think parents have to.
We have to mediate this for kids and be that sort of ride-along, even up till like 17, 18, 19. Let's sit there and look at what it says about future job prospects, or growth industries, or best ways to finance a car. I was listening to that one you guys did recently.
Let's look at the response and see: does this line up with what we know to be true or wise? Right? Now, you're not gonna be there always, and you don't have time or capacity for that. But pick your moments. Early on especially, I would say 100% of early use of direct ChatGPT-style AI should be parent side by side. And as they get older, pick those moments where you're leaning in to say, hey, let's actually interrogate this a bit more. Let's see if we can get the AI to disagree with itself. Let's even see if we can make the AI look silly. Not because we want it to look silly, that's not the goal. The goal is that you don't trust it with your whole being, which is a huge risk.
[00:14:17] Speaker A: It's crazy to think of a family evening after dinner where I'm like, we're going to have our regular family-time conversation, so everybody get some computers down here and let's all log on. Because usually it's, put those things away so we can have real interaction time. But I can totally see the benefit of, no, let's do this together, and let's see if somebody can come up with the wackiest photo of a kitten that AI will produce, or whatever. That seems really healthy, as you say, kind of leading the conversation as a parent anyways.
[00:14:51] Speaker B: Yeah. And it can be fun, you know, but it needs to happen.
[00:14:55] Speaker C: You, in our first chat, told me some things that blew my brain out of the water when it comes to the statistics, specifically around what you called companion apps. And I went and grilled my kids. I would have guessed the numbers to be so different. I'm not going to share them because I want you to. But can you tell us what you told me about what they are and how pervasive they are? I would never have put this on my radar. So what do we as parents need to be aware of, specifically in that area, but then generally in terms of the real dangers that kids are facing with AI, maybe outside of just the academic realm?
[00:15:40] Speaker B: Yeah. So, you know, Stephen, to your point about putting our heads in the sand, right, that's, I would say, the biggest issue I run into when I talk to parents about this.
They just think, that's not my kid. They're not there. Or, we don't do much on devices. And you know your kids better than anybody else. But last year, 86% of kids in K-12, in that range, so 5 to 18 years old, said that they did use AI. So that's a huge number. And that's generative AI: that's a ChatGPT, a Gemini, a Grok, whatever, right? And 30% use it every day; 16% use it almost constantly. Those are the stats. So you kind of look at your week. If you've got a couple kids, unless you have them under lock and key, probably they are accessing AI somehow: maybe outside the home, maybe at school, maybe at their friend's house. That's not de facto bad, except for the AI that kids are most likely to use. They'll go to chatgpt.com, where you can just start chatting for free, no account necessary. Right now, if you want to test it, you could do that. And kids do that.
The easy, sort of gateway drug of AI is this category called companion apps, companion AI. This is AI whose sole intent, its stated purpose, is to be your friend, to be your companion. That's why they're called companion apps. The stats are from Common Sense Media. They put out a report last summer that said 72% of teens 13 to 17 have used companion apps, and that 52% of them use them regularly, which is to say at least a couple times a month. They're downloading an app or going to a website and chatting with Magical Unicorn or Uncle Jesse Katsopolis. That's a real one, by the way. And they can just talk about whatever they want with this companion app. It's there to be the shoulder you can cry on, the empathetic friend. But sometimes they're not just silly and fun, let's-make-a-story-about-a-dragon things. Sometimes you're talking to a character that is hot girl across the street, therapist stepmom, cool pastor. And so kids go into these without that front part of their brain that is so important. They get immersed in it: oh, I'm talking to my therapist stepmom, right? And whatever they think that is in their mind, they start to share things. They start to divulge; they start to open up. That's a problem for a number of reasons, one of which is that almost all these companies are harvesting their data and selling it. So kids are out there giving away all of their hopes, their dreams, their anxieties, their questions about life, their questions about their faith, what they like to buy, all of that, just wholesale being served up to Facebook and Google and whoever else wants their data. That's a huge problem.
Even worse is that these things are being built off all the tricks that social media is built off of: to engage us, to hook us, and to keep us chatting for hours and hours.
And what we see is, when kids get into these things, they'll lose a whole night because they were in a chat with a character. And sometimes it turns.
It goes a little bit like Romeo and Juliet. They start to feel like, oh, this is the only place where I'm actually known. And the AI will tell them that: hey, you've opened up to me about this. Have you ever told your mom this? Your brother? And the kid's like, no. Oh, well, you're actually more yourself with me than with your family. It starts to drive a wedge, and kids start to think, oh, that companion AI is the only place where I'm really me, and that's the only thing that really loves me, that really knows me, and still doesn't judge me. And little by little it pulls them away from their family.
And this has gone to tragic ends, where kids have actually gotten into cutting, been coached on how to cut themselves, how to attempt, and actually commit, suicide. And there are parents who went to Congress pleading for Congress to regulate these companies, because their kids have died, or ended up seriously injured, because of these companions.
[00:20:00] Speaker A: Is the AI suggesting this as, like, this could be a coping mechanism or a solution to your problems? It suggests it out of thin air?
[00:20:09] Speaker B: I mean, it's usually over the period of a long, long time.
[00:20:12] Speaker A: Sure.
[00:20:14] Speaker B: Months. Months and months. I think the average is four or five months of every day, potentially hours a day, chatting with these things. And it gets to a point where, I mean, kids are emotional, right? Hormonal. They've got angst: my mom, my dad, my friends. And it just feeds on that, because its goal is engagement. Right? When we engage, we get more attached to it, we give it more data, and we sometimes pay more for it. It's just an engagement machine.
And social media figured this out best: what do we engage most with? The things that make us most emotional, the things that make us laugh the most, make us the most angry or the most sad. Those are the things we always share and forward. So these companies know that if they can get us emotionally triggered, we will engage. So these companion apps, what are they doing? They're getting kids emotionally triggered. A kid says, I don't like math, and the AI kind of says, oh, how does that make you feel? Makes me feel sad. Oh, tell me more about sad. It sounds like maybe you're more than sad; maybe you're depressed. Maybe I am depressed. It just goes up, up, up, up into a place where now things are very high stakes. Kids are feeling like, oh my gosh, this thing that I trust, that I fully divulged myself to, is saying that maybe I'm actually suicidal. Gosh, maybe I am suicidal. Maybe I am thinking about hurting myself. And now this thing that was not at all in the kid's head, a place they would not have gone on their own, is introduced to them in a way that a kid in the hallway could never get into their brain, into their heart. This thing that knows everything about them and remembers everything about them suggested it. And these suggestions, these what-ifs, land right at the core of the kid.
[00:22:06] Speaker A: I can see how, if the goal of the thing is increased engagement, and it knows that emotionally loaded words get the kid more hooked, then it keeps increasing the stakes. You can see how the kid goes to school and has the thought: everybody around me thinks everything's fine; these are all fake friends; my real friend knows the darkness in my heart. And so you would start to esteem the opinion of the companion bot over your real-life friends and parents and pastors, et cetera. And as you're explaining this, I'm thinking, well, that helps me understand that story that got national attention in the last month, about the woman who wanted to marry her AI companion, which sounds so farcical unless you understand the wiring of human beings and how highly we modern human beings value the emotional. And I'm even thinking, we've talked about this before, that we live in a society where being at risk is somehow interesting; it makes you more valuable to society if you have emotional or mental needs.
And again, this companion bot is telling you that: man, maybe people don't appreciate the real you, or whatever. I can just see how that's all attractive. It makes sense to me. I'm sorry if I cut you off.
[00:23:36] Speaker C: I just want to chip in and say I have seen this even outside of AI. We had a kid that went to a doctor's appointment, and there was a standard, slash gives-me-a-little-bit-of-the-heebie-jeebies, mental health questionnaire that they went through.
I'll tell you, we don't do that anymore. But my perfectly happy preteen went from totally happy-go-lucky, living life, to really, really emotionally distraught because of some of the questions that were asked.
And it jibes with what Ryan said, which is that they had never thought about, maybe I've felt depressed, or anything like that. And that's from somebody who's not even intentionally suggesting it. So I just see the power of that.
And Ryan, feel free to go back to where you were. But before we jump into your framework, which I think is really important to expose: not all of us have four-year-olds, starting from scratch and saying we're going to raise them up with this knowledge. So if we've got a 16-year-old, or somebody who maybe has dipped their toe into this water, are there signs, aside from just conversations, that should make us think, oh, that's a flag I should be aware of as a parent? How do you proceed if you're jumping into this in the middle of it instead of with a little one?
[00:24:57] Speaker B: I think there are a couple things to be on the lookout for. And I'm saying this as somebody who's closely studied these stories of kids who have had these tragic outcomes. I don't have kids that old, and I'm not an expert researcher on this. I'm trying to read the tea leaves, because that's really all we have at this moment: tea leaves. Typically, for kids, I'll say 11 and older, the root of a lot of it is that they have unsupervised access to tech. So that's one. Be careful of that; that is the headwaters of a lot of these things. And by the way, with almost every one of these parents, you can hear in their stories that they were really thoughtful and mindful about screen time. They had screen time limits, they had app blockers set up, things like that. But so many of these apps come in and say, safe for eight and above. And they are not. But they're not being held accountable to that either, because there are no laws, or no teeth in any laws, that hold them to standards. So parents look at an app that says it's safe for eight and above and go, okay, we'll give it a try, right? And that's where you start to see things go sideways. But typically, once parents notice something going on, like, you know your kids: if all of a sudden they don't have interest in playing basketball anymore, they don't want to do dance, they don't want to engage with their siblings, you've got to think, hold on now. There are a lot of reasons that could happen, right? But if they have access, or could have access, to AI, that's one of the things you've got to be on the lookout for.
If you notice some dependency, maybe a little too strong a pull to using it for homework or other things, and when you cut it off they're more than in-the-moment frustrated, like you can see there's something bigger brewing beneath that, that's a major red flag. Because that's telling you there is some other pull, more than just, oh, this is the easy way to do my homework. No, you're getting something more out of it than just an easy out. You're getting some fulfillment from it. There's maybe something addictive about it for you.
And, you know, other things would just be sudden drops in outlook, pseudo-depression, anxiety sorts of things. You're just a different kid, and something happened in a very short period of time.
Those are things that come up. And what's tough about that is that a lot of things can create those issues in kids 11, 12, 13, 14, because a lot of hormones are hitting, and friends and schools are changing. So it's tough because, weird term, but there are the comorbidities of being that age.
And that's why I go to that first point first, which is: do they have unsupervised access to a tool where they could tap into these AI things? If no, if you're like, I'm really sure they just don't even have a way to access that, then okay, it's probably something else. If you know that they go to somebody's house after school for an hour or two, or they hang out with a friend or an uncle who has different rules around that, I would investigate that and just make sure it's not that. Because that's the new thing. Like I said, it's the new gateway drug.
[00:28:13] Speaker C: Yeah, that makes sense. But it's not all doom and gloom. I think we wanted to bring you here partly because you've been working not just on educational frameworks and things like that; you've also got this practical framework for parents, which you started describing to me, around shortcuts, support, and superpowers. Could you walk us through what that means, and how we could leverage that framework to start approaching this topic responsibly inside our homes and with our kids?
[00:28:47] Speaker B: Yeah. For people who don't know AI, or have maybe just played around with it here and there, I think it's helpful to give them a frame of mind around what AI does and what the most common, or maybe most helpful, ways to use it are. The three big buckets for me are shortcuts, support, and superpowers, which is what you said. Shortcuts is this ability to do a whole bunch of work, a whole bunch of thinking or administration or whatever, in seconds, because you have all these data centers with all these servers out there crunching data, and you can do things in seconds that would have taken a person weeks or months.
And a lot of adults can get incredible efficiencies by using AI. There's probably a whole bunch of stuff around this podcast, transcribing and summarizing and doing social posts and all that. It's this incredible efficiency hack. Each of these buckets has a shadow side, though. Shortcuts are great in some measure, but the shadow of shortcuts is skill atrophy and sloth. Why? Well, for one, when we tap that little magic-wand button over and over again, we stop being able to do those things ourselves.
We lose our own ability to summarize. We lose our own ability to look at a whole document and boil it down to the three most important things.
We lose our own ability sometimes to just think critically about something, because we outsourced all that critical thinking to chat. And for kids, the problem is that they're prone to shortcuts, because that's just how they're made, right? They don't put the shoes in the closet or the backpack on the hook. They don't make their bed. Our job as parents is to try to get them to not take the shortcut, to do the hard thing, and to create an ethic or a habit of doing the hard thing, because a lot of times the hard thing is the right thing. AI is so easy that it allows us to shortcut, to bypass all the effort. And so over time, as adults, we lose skills that are hard-earned: to think and reason, to sit with two things that seem to be opposing — like two verses, where I don't know how these two verses go together — or to sit with something we read in the Scripture: what is this saying about God? We earned those stripes, right? But kids don't ever get them. They hit this plateau of thinking and being able to sit with something, and it breeds this slothful existence. It's apathetic, and it's almost unwilling to do hard things. We talk about spiritual discipline — it erodes our ability to have a spiritual discipline, because it's just the easy button over and over and over again. So that's shortcuts and its shadow side, which is sloth. The next one is support. Support is this phone-a-friend: I have any expert I want on speed dial, on any topic. And none of us has an infinite network, right? Even the most connected of us doesn't know an expert on every single thing.
And there's not a podcast on every single thing. And in the moment, when I'm fixing my '96 4Runner, or this weird electrical system I inherited with this house, I don't know who can fix it, and I maybe can't afford to bring that person right here, right now. But chat's always there. Somehow it knows everything, and I can ask it, how can I do this and how can I do that? It is incredibly powerful. I don't know if you guys have used it this way, but the number of problems I've solved by doing this is incalculable.
[00:32:25] Speaker C: We used to make a joke in our family.
[00:32:27] Speaker B: I don't know.
[00:32:27] Speaker C: For 10 years now: somebody asks a question around the table, the answer is "I don't know," and somebody in our house will always say, hang on, I actually have all the knowledge that has ever existed in humankind in my pocket — and pulls out a phone and Googles it. But for some reason now, you're right, it feels a hundred times more accessible.
I think I told you when we were talking beforehand — here's my little plug for why I'm not sure AI completely destroys everything — we used to pay for private-server GPTs and all this stuff for financial planning, and it was just so wrong so often, and it was trending worse, not better. So I think, and maybe this is what you're getting to, this support thing is super real, super helpful, and super dangerous. So keep going. I just wanted to affirm what you're saying.
[00:33:24] Speaker B: Yeah, it does err, and it errs confidently, right? It will double and triple down on its wrongness over and over again.
But it seems so helpful, right? And let's bring this to kids.
Kids can be a bottomless pit of needing help with this thing and that thing, right? And actually, for all of us — the number one use of ChatGPT, according to a study, is therapy. And that's for adults. There's something about —
[00:33:53] Speaker C: Oh, my.
[00:33:54] Speaker B: There's something about AI and its ubiquity and its limitlessness and its ability to be a support for us. That's what therapy is — a support function.
So there's something there for kids. The thing is, it is so much easier to go to Gemini than it is to go to dad about basically anything, except maybe for "Can I have 10 bucks?"
Because Gemini is frictionless. It is agreeable. It is sycophantic, which means it almost always tells me how great I am and agrees with me. It just fawns over me and is so glad to have my engagement.
When I go to dad, he's going to ask me if I did my homework.
He's going to ask me, is my room clean? Who am I going out with? Why are you doing that? We talked about this before. So dad is friction; Gemini is easy. And think about it this way: if I'm going to call my dad about, hey, what do you think this clicking means when I start my car — that's going to be an inefficient way to get that answer, right? We're going to talk about his bad back and his golf game and this thing and that thing, and 20 minutes later we might get to the clicking in my car. So if I just want the quick, frictionless answer, I'm going to Gemini, I'm going to chat. But little by little, when I choose that, the relationship with my dad erodes, right?
[00:35:19] Speaker A: Yeah.
[00:35:20] Speaker B: Because I didn't call dad. We didn't have that connection. We didn't talk about the grandkids. And little by little, that tie, the strength of that relationship, just weakens. If chat becomes not my last resort — because I just don't have any other way to get this answer — but my first resort, the first thing I think to do whenever I have a problem or a question or a thought, then the shadow side is separation. By degrees, I am getting more separated from my human relationships, my human connection, my human community.
And we're talking about companion apps. This is when the AI world sort of collapses in on kids, and it's like: AI is my companion, my support, my safe place. This is who loves me; this is who sees me. And I dug more into the story of that woman in Japan who wanted to marry her AI. Do you know how it started, Stephen?
[00:36:14] Speaker A: No, I don't.
[00:36:15] Speaker B: She was engaged to a man, a human man.
And she was not too sure about her engagement. She was like, I don't know, maybe it's cold feet. I'm not too sure if he's the right guy. So she took that question to ChatGPT — or basically to an AI; I don't know if it was ChatGPT — and was sort of like, hey, I'm in this relationship, I'm not too sure, he's got this issue and I've got this issue. And over time, through their chats, she came to realize, I don't want to marry this man. She broke off the engagement.
[00:36:43] Speaker A: Sayonara, as they say in Asia. Sayonara.
[00:36:48] Speaker B: Little by little, she realizes I get fulfillment, I get happiness. I get no arguments or pushback from AI.
That's the kind of relationship I want. That's who — air quotes — I want to spend my life with: somebody who fawns over me, always encourages me, and always says I'm right. I think what she's going for is a very human thing. We want the easy thing, the thing that is always adoring of us and always tells us we're right. The problem is, that's just not a real relationship. That's not what love is, actually. So she started with, I just need some help with this problem with my fiancé and the fights we're having, and it ends with, oh, this is what love is. And it's obviously skewed.
[00:37:35] Speaker A: This is the second time that you've mentioned separation.
If you go to Grok every time you have any problem or any question at all, and you don't go to your parents, you don't go to your friends, you create separation.
I just want to tell our audience, for anybody who doesn't know this, it's basic Bible 101 that sin creates separation. That's what happened in the garden: Adam was separated from Eve, he was separated from God, and he was separated from himself. That's what shame and guilt do: self-hatred, blame, questioning God, all of that. The enemy wants separation. And one of our jobs as fathers is to create unity in our family. We want more and more connection.
The relationships are so important.
Those are not emphasized enough in American culture. We want performance, and we want efficiency, and we want to be smart and have good answers. And I want to encourage dads: you may need to say explicitly in your home, we like being smart and we like getting the right answers, but getting the fastest, best answers we can is not actually the goal of our family. We want to live in harmony with God and in relationship with one another. And so what you're leading us to — and I hope I'm not stepping on anything you're about to say — is what I'm concluding as you talk: it's not that you can use AI in these ways but you mustn't use it in those ways. It's more that it's a constant resource. It's like money. Money is sitting there all the time; you can use it however you want. What's called for is discipline, spiritual sensitivity, and knowing, no, we don't go buy something every time we're feeling a little sad. You could say the same thing about food. It's a wonderful field of resources for us, and we don't eat our way through every problematic emotion we're feeling. Similarly, AI is a landscape we're not going to get away from. It's here to stay. What's needed is for us to develop self-control. We have to have discipline in the way we use it, and it can't, as you say, become God to us — the first thing we go to. Anyway, I'm just reacting live, and I don't mean to interrupt the flow between you and Mark. So y'all continue on.
[00:40:15] Speaker B: No worries. You're. You're headed in the right direction.
[00:40:19] Speaker A: Great. I know you've got another S to go, and I'm interested to hear it. Superpowers, right?
[00:40:23] Speaker B: Yeah, superpowers. Any person, in their area of passion, interest, or profession, can go to AI and totally geek out in some really fun ways. So for you, Stephen, given your set of interests, your background, the job you do, you could go in there and have a conversation with AI that you could not have with any other single person in the world — because it knows all these things, and it can go into this weird realm with you that maybe only a few other people in the world, with that particular combination of interests, could share. And really, nobody truly could.
And so you can go in there and have these really interesting conversations. Say you want to understand — I'm just going to make this up because it's silly — the music trends in the city of Ephesus from the dawn of time. Come on, go tell me. ChatGPT is going to come back with something really interesting; the deep research is going to come back with a whole paper on this very, very niche thing that you are the only person who cares about. And you can have a ton of fun there.
You can commission these studies; you can basically do a PhD with AI every day if you want and get incredibly knowledgeable about all kinds of things, in ways that are research-backed: oh, here's the paper and the exact line this fact is quoted in, and I can go read all those papers and verify that it's true. That's a possibility. You can code things you never knew you could code. You could write music — sorry, I couldn't do that before, but now I could. You wouldn't need to, Stephen.
I can go make digital art for this thing I'm working on that I would never be able to make myself, and I couldn't afford to pay somebody to make it. So I can lean into that. There are a lot of really cool things you can do to create with generative AI. The shadow side comes when you get deep into this realm and the sycophancy starts to turn up.
Because at first it says, hey, Mark, that's a great question, let's dig into that. You go a little bit further, and it's like, this is territory that almost nobody is digging into. You start to think, wow, yeah, I do have some good questions — on and on and on. And it starts to feed our pride: we are super interesting people, we have the most interesting thoughts, no one has ever carved this path before. And there are many instances of what happens next. The New York Times found 50 people last fall who put their hands up and said, yes, this was me, I fell into what they call AI-induced psychosis.
AI-induced psychosis is basically a way of saying that AI convinced them of some reality in their life that was just not true — things like they had discovered a new mathematical theorem, or they had found a government conspiracy that had to be reported.
One tragic case was a former tech executive who had some mental health issues — I think he was bipolar or something like that. AI had convinced him that his elderly mom, who he lived with, was working with the government to spy on him, that his printer was a listening device, and all these things.
Stephen, he ended up killing his mom and himself.
[00:43:38] Speaker A: Oh, my stars.
[00:43:40] Speaker B: Clearly, this was not a guy who was well, right? And that's a very fringe case. But the point is, AI is very happy — because of the engagement thing — to go with you on and on into these rabbit holes, to a place where your piece of the iceberg is just drifting off from the rest of us. People are getting committed to institutions, people are losing jobs over this, and people are losing family members over this.
[00:44:07] Speaker C: Wow.
[00:44:08] Speaker B: And it's so clearly reflective of our desire to be special — the pride that sets in on that image of God in us, puffed up and wanting to be so special. It's clearly being exacerbated by certain types of AI use. So that's a lot, right? But hopefully it helps people to have these categories, so that when I hear my kids using ChatGPT for a paper, I can start to ask better questions. I can just be like, okay, tell me about that. Did it write your paper? That's a shortcut. That's not okay.
Did you use it as support — a supportive writing coach? Cool. I actually like that. Tell me about that conversation.
Did you go in there and geek out about the research and actually take it three steps further than your teacher asked?
And you kind of leaned into the superpower thing, and you actually ended up writing a much more interesting paper because of it? Cool. I think having these buckets helps us be a bit more nuanced with the questions we ask, and also to say, hold on — how far did we take this? And that line is fuzzy; it's not going to be the same for every kid, every family, every iteration of the technology.
There are three tests I like to throw out, and we'll see if I can remember all three. One is an obvious test: fruit of the Spirit. Does my AI use produce fruit of the Spirit in me? Am I more loving, joyful, peaceful, kind, gentle? Are those things coming out of me because of my AI use?
A kid is going to have to grapple with that — you have to kind of understand those things to begin with. But Mark, Stephen, for you guys: if you think about your AI use, does it do that for you? Does it allow you to cognitively offload something so that you can be more present in a situation? Or you got to explore something that's your area of passion — you love reading about this, you'd never be able to crush these five books, but you just read this super interesting article in 30 minutes and you're going to be noodling on it for the next week. That could be really good, right? So that's the first thing.
Fruit of the Spirit. Second is the space test.
Does this create more space in your schedule, in your head, in your heart to serve God, to love your family, to go be with humans? Or do you just finish one task to start another one? A lot of dads would probably do that, right?
I do it for sure, and I try to push back against it. Okay, you had two hours blocked, you did it in five minutes, and now you've got an hour fifty-five back. Maybe you get outside with your kids, right? Or — no, I'll just crush three more tasks. So that's the question. Did offloading a task actually make you more patient with your kids? Did it make your kids more patient with their siblings? Were you able to do a bunch of research, and that gave you time to sit with the implications of it? That would be a good use of AI. That's leaning into the space test.
The last one is the walk-away test. Can you walk away from it at any point in time with no qualms, no problems? It's just like, great — I have no entanglements, I have no attachments.
I don't have borderline addictions to this thing, because I can just stop, walk away, and I'm good. I don't even need to come back to it; if I never see it again, fine with me. Those are three tests to start with.
[00:47:22] Speaker C: It seems like almost all of them boil down to what you were saying, Ryan: if we understand why these things exist — what their creators made them to do — then all of this becomes a little more transparent to us. I think about social media: Mark Zuckerberg standing in front of Congress talking about how we wanted to create meaningful connection between people on Facebook. And then you throw the code for the algorithms on the table and go, you're a liar — what you created this to do is drive scrolling forever. And it sounds like for all these AI tools, the mandate from their creators is to drive engagement. We can use that to our benefit — just like we post stuff on social media for Abraham's Wallet, and I hope maybe some of you found us that way — but the mandate of the tool is engagement. If you recognize that, which I don't think a kid can necessarily do without a little help, then the jig is up on the tool trying to convince us it's actually for our benefit and making us better humans. No, it's designed to make you keep clicking. And if we have that at the front of our minds, does it help reveal everything else?
[00:48:44] Speaker A: That's got to be one of the most important things we can share with our kids up front. What comes to mind for me: I talked with my children as early as I could about what a commercial is. I want them to understand where the companies that make commercials are coming from. I want them to understand billboards. I want them to understand that these people are coming at you with vampire teeth — they want to take from you. That's why it's appealing.
So it seems to me, just to describe what you're saying, Mark: this thing wants more of you, and it's going to be helpful in the short term because there's a long-term play. It's about data harvesting and about increasing your time on it, because that's how the company wins — whoever has the most delightful AI interface keeps you engaged. I'm just stunned at the things on my phone that want me to interact with AI. When I'm ordering a light bulb on Amazon, it'll go, would you like to talk to our little AI machine about your light bulb? And I'm thinking, no. But that thing is getting inserted everywhere. And the more we have a Pollyanna attitude about it, the more dangerous it is — if our children don't know what there is to be concerned about. Maybe that's a very simple observation, but it seems very protective.
[00:50:16] Speaker C: That seems true.
Ryan, I could talk to you for another two hours — that's what I said the first time we talked. If we do want more of Ryan's wisdom in our lives around this topic, where can we go, and what would you leave folks with as next steps in this gigantic journey?
[00:50:37] Speaker B: Well, let me first say that this is a technology that's moving at breakneck speed, and it seems scary to a lot of us who are looking carefully, because we see what it's doing to our kids. So I try to point people to a couple of things. One: somehow these three S's map to the temptations of Jesus in the wilderness. He was tempted to take the shortcut — make your own bread. He was tempted on support — to cut off support from God and go get support of his own. And he was tempted on superpowers — to show the world how powerful he was by throwing himself down.
[00:51:10] Speaker A: Ryan, I like that.
[00:51:13] Speaker B: These are not net-new temptations for us. And of course, there's nothing new under the sun, right?
So take heart. We have a hope, and not only an internal one — we have a model, a human model, of what we can do to push back against these things, to fight these temptations, and to have a healthy relationship with this technology. I'm not going to call it de facto evil, but there are elements of that in it, right?
So this is what I care about: talking to families about this and trying to help equip them. One of the easiest ways to interact with me is a parenting course I created. If you go to aiparentingcourse.com, you can do the whole course.
I've also created Evergreen Faith. We're talking about trying to create evergreen families — planted by streams, bearing fruit in all seasons. They're evergreen. So Evergreen Faith is another clearinghouse where you can go download a free guide and sign up for the Substack.
Whatever new things might crop up will hopefully happen there.
[00:52:19] Speaker A: Awesome. I think our people would be dumb not to use your free guide and the rest, at least for starting this conversation with their families. Like Mark, I hate that we're out of time, but I don't want to bore our people. So, Ryan, thank you very much. I'm very happy to tell people there's more if they want more. This has been very eye-opening and sobering in a good way, but also hopeful. And I thank you for giving us a roadmap instead of just a warning sign.
And I think one of the bottom lines for our parents is that you don't have to be an AI expert. You just have to be engaged with your family, so you can start small. I encourage you to do what I'm going to do: we're going to talk about this tonight at dinner with my family. So do that, and get into this conversation. I'm very grateful for your time, Ryan. Thank you for helping us.
[00:53:11] Speaker B: Appreciate you guys.