The Dark Side Of ChatGPT: Is AI Making Us Dumber?
Speaker0:
[0:03] Hey everyone, I have some not-so-exciting news for you. If you are using ChatGPT and you are listening to this podcast, well, chances are you are probably actually dumber for it. And this is a scary trend that was uncovered this week: we are starting to discover that all these tools maybe aren't actually helping us in the way that we thought they were. And you could kind of relate this to things in the past where we've tooled our way out of thinking for ourselves. We have calculators, so we can't really do math that well. We can't really
Speaker0:
[0:30] spell that well. We have autocorrect.
Speaker0:
[0:31] And God forbid you ask me to navigate anywhere without GPS. It's kind of a hard thing. So while we've improved in a lot of places, we've kind of gotten slightly dumber in others. And that's kind of what's happening this week with this new study. And Ejaz, I'd love for you to introduce this to everyone because you're the one that introduced this to me and kind of scared the hell out of me. I'm like, am I actually becoming dumber or am I still able to think for myself? It raised a lot of questions that, yeah, are worth talking about today.
Speaker2:
[0:56] Honestly, it caused me to look hard in the mirror, Josh. And the answer is, I think I am getting dumber using this GPT stuff. Are we getting more stupid? We totally are. Okay, so for context for everyone here, the geniuses at MIT performed a research study which looked at college students that were using ChatGPT. Now, we've mentioned this a few times on the show before: one of the most popular use cases for ChatGPT is to write your college essay, because, of course, you don't want to be spending tens of hours the night, or the couple of nights, before your essay is due writing the essay.
Speaker0:
[1:29] Kids these days, they got it so good.
Speaker2:
[1:31] I know, I know. Can you imagine if we had that back in the day? The amount of sleep I would have gotten would be insane. Anyway, so here's a crazy set of stats from this study on the college students that were using ChatGPT to write their essays: 83.3% of ChatGPT users couldn't quote from essays they wrote minutes earlier. And I'm saying that's because...
Speaker1:
[1:56] They didn't write it. Because they didn't write it.
Speaker2:
[1:58] Right. Which should come as no surprise. But it is also, worryingly, an issue if you're getting a college degree as a result of this and you're meant to, you know, run into a job that involves important responsibilities and stuff like that. But furthermore, brain scans revealed the damage. Neural connections collapsed from 79 to just 42. That's a 47 percent reduction in brain connectivity. So, Josh, the examples you just gave: we're not talking about the inability to do math because you're using a calculator. We're not talking about not being able to whip out a map and guide yourself from A to B. We're talking about your entire intelligence level collapsing by nearly 50% because you're using ChatGPT. That is a crazy takeaway.
Speaker0:
[2:44] This is different because this is structural. I'm not sure that being less competent in math because we have calculators is the same as being less competent at thinking in general because you're not doing the thinking. So this feels a little more extreme because of this actual structural damage that's occurring. Is it damage, or is it just a rewiring to use your brain in a different way?
Speaker1:
[3:06] So the way that neurons work, and I think I've used this metaphor before: neurons that fire together, wire together. Right? And so when you do thinking, neurons fire, neurotransmitters come out of the neural ends, and that sends signals out to local neurons to, like, come closer. And that's how thoughts and cognition get encoded into the brain. But overall, you do need some basal level of cognition to exercise your brain. Your brain's a muscle, and you need to exercise it in order to grow it. Now, I don't know if it is as stark or as drastic as this study is really making it out to be, a 47% reduction in brain connectivity.
Speaker1:
[3:44] Now, I think the reason why this is such a big deal is, like, yeah, we all get bad at math. I can't do long division anymore. I couldn't do it. I can do multiplication for up to three-digit numbers. You know, I really tried.
Speaker1:
[3:55] I could do four or five, but I would need some pen and paper, right? But now I just use a calculator, and so the math part of my brain slowly just decays. It just gets weak. But also, at the same time, the way that your brain works is that there is this raw computational energy that your brain can repoint elsewhere. And I think why this is so drastic is because it's such a holistic cognition decline when you frequently use ChatGPT. It's not something incremental like, yeah, now we have a calculator in our pocket, or now we have a GPS in our car. It's: now we have a brain on our phones, and the whole entire brain can kind of be outsourced to the device. And that's why it's showing up in studies like this. That's my take on this.
Speaker2:
[4:43] Well, I saw this being described as a muscle, David. So if you don't train a muscle for a long time, it basically atrophies.
Speaker2:
[4:50] And that's how they're describing it here in the study. Another takeaway was that they then tested these kids who used ChatGPT by having them write an essay without AI. And all of them, not some of them, all of them underperformed people who had never used AI before when it came to writing these essays. So it is definitely an atrophy of, like, your brain. It's like it's hemorrhaging intelligence.
Speaker1:
[5:11] Well, one thing on that. Humans are some of the weakest animals on the planet, and when we grew cognition, when we grew brains, it came out of our muscles. Our muscles got smaller, right? We got less muscular than our ancestors because we learned how to think. You know, work smarter, not harder. We learned how to throw a spear instead of beating with a club. And so there is this sense that, yeah, the brain is actually a leverage tool: you can do more with less, with more brain power. The only problem is that now we have learned to extend that even further. Now the muscles are going to stay the same size, and the brain is going to go down. But the chip in the brain will hopefully bring it back once we get there. But we're not there yet. We're in the ChatGPT era.
Speaker2:
[6:00] Is this why aliens all have big heads?
Speaker1:
[6:02] Big heads and like no bodies? Yeah.
Speaker2:
[6:04] Okay. That's the chip-in-the-brain expert's take. Josh, what's your take on this, please? Are we heading in the right direction?
Speaker0:
[6:11] Well, we're heading in the direction that makes sense. It's like when presented with the easier thing to do, people often just do the easier thing to do.
Speaker1:
[6:21] Water flows downhill.
Speaker0:
[6:22] Yeah. When you can defer your thinking to things that are perceived as smarter than you, and they give you results that you feel excited about, that's great. And this is just a continuation of the trend. How much time are people spending on Twitter getting their takes, or on Instagram shorts getting their ideas? You're just kind of giving root access to more and more ideas. And this is a natural extension of that, where now you can actually prompt for a direct injection of the information you want without having to think for yourself, instead of seeking it out on social media.
Speaker2:
[6:49] Okay, so the doomer take is: in a couple of decades, every newborn has a chip in their head which has an AI that helps them excel at every level of their life. A couple of decades? Come on. Okay, fine.
Speaker0:
[7:02] Maybe in five years' time, right?
Speaker2:
[7:04] All right, let's go. Neuralink human trials coming soon, right? But if we were to take this out even further, so humans become just, I guess, meat vessels for this AI thing, and it does all the thinking for us, what's the purpose of us then, going forward? Has this always been the plan?
Speaker0:
[7:18] Do we just take it? We could do a whole episode on this, but that's the thesis: hey, we're just a bootloader.
Speaker1:
[7:22] Getting back into this article, I really want to read this quote, because I think it speaks to what a lot of people's experiences are with using ChatGPT for things that are more than Google searches. Because I'm happy to use 4o for just looking up a quick fact, but when I need to do more work than that, I start to run into this experience that I think you guys have frequently run into, and that people who use ChatGPT will be able to relate to. So here's a direct quote from two English teachers who evaluated the essays. These were the teachers grading the essays that were produced with ChatGPT: "Some essays across all topics stood out because of a close to perfect use of language and structure, while simultaneously failing to give personal insights or clear statements. These, often lengthy, essays included standard ideas, reoccurring typical formulations and statements, which made the use of AI in the writing process rather obvious. We as English teachers perceived these essays as soulless, as many sentences were empty with regard to content and lacked personal nuances."
Speaker1:
[8:27] Empty with regard to content and lacked personal nuances. I think there's a broad trend that we are going to see on the internet where there's going to be a lot of cheap AI slop out there that's going to be soulless. It's going to be empty. It's going to be hollow. And it's interesting to be able to see that show up in a scientific
Speaker1:
[8:48] study talking about how neural connections in brains are declining downstream of that, and teachers are reporting an association with soulless content, soulless writing.
Speaker2:
[8:58] I wonder what the Gen Zers have to say about that, David. Because, like, the content they're consuming on TikTok and stuff.
Speaker1:
[9:05] Is this a boomer take? Am I giving off boomer takes?
Speaker2:
[9:08] I think you are, dude. I think you are, right? What's that Gen Z skibidi toilet thing that just goes viral day after day? That's just, like, dopamine sensory bullshit, basically. And I think that is where we're trending. But I think the Gen Zs would disagree with you. They'd say that it's not soulless, that, you know, this is their culture, this
Speaker1:
is their soul. Yeah, I'm just saying: the kids aren't alright.
Speaker2:
[9:37] This is also just a model
Speaker0:
[9:40] Yeah, I mean, these are just the current models that we have, so you have to take this evaluation with a grain of salt. If you want to train a model on values like having soul and having that human nature, you can tune a model to do that. OpenAI tried this with GPT-4.5, which was the attempt to make it more human-feeling and to be a better writer, and it failed pretty miserably. So clearly this is a kind of hard thing to do, making the English language sound natural when writing in long form. But again, this feels like a technical constraint that's temporary. We could just tune another model, make it a little better, and keep running evaluations until it gets to be exactly what we expect out of a good human writer.
Speaker1:
[10:19] I love it. For Josh's sake, we can actually put the souls into the LLMs. We can give them souls.
Speaker0:
[10:24] Just train it, man. Just some recursive learning on that bad boy and you're good to go.
Speaker1:
[10:27] I would like to redeem myself and my boomer takes. And so I'd like to take the other side of this argument. I don't know which philosopher this is, and that's because my cognitive load has lowered because I used ChatGPT, but someone like Socrates was very fearful of the notion of writing. He thought that writing would be invented and then everyone would think less, because they wouldn't be able to use their brains to store stories. It was going to be an early form of, you know, ChatGPT-like writing. And so people were fearful of writing, people were fearful of the internet, people were fearful of calculators. There is an ancient story here: we invent tools that make our lives easier, and then the older generations are like, oh no, the kids aren't alright.
Speaker1:
[11:12] The kids are gonna just decay because they're never actually gonna have to work for anything. So this fits into that pattern, I think, very neatly. At the same time, artificial intelligence is a net new thing that we have never seen before. It's one thing to be able to make leverage with tools, in the old days with pen and paper; that's one thing. But the recreation of a brain, being able to simulate a brain, is something that I don't know has a very strong parallel to anything that's come before. So I'm in between these two things.
Speaker2:
[11:45] I think something else that's worth pointing out is: who made writing the de facto way for one to express themselves academically or intelligently across the board? If you think about it, right? So I'll speak for myself. I'm a very visual learner, and a very visual describer when it comes to things. And I think that if I had some sort of AI assistant, or chip in my brain, whichever, that could help me articulate what I'm thinking, what I'm seeing, what I'm envisioning, in a really easy-to-understand way for each individual that's listening to this, then that's amazing for me, right? So I think this could actually be used as a super tool if that is the intended use. But if people are just going to be lazy about it by default, I'm not positive about where this ends up, guys.
Speaker1:
[12:34] So this is a DHH tweet. DHH is this famous developer; he created Ruby on Rails. He's retweeting the study, and he says: this tracks completely with what I've experienced using AI as a pair programmer. As soon as I'm tempted to let it drive, I learn nothing and I retain nothing. But if I do the programming and it does the API lookups and explains the concepts, I learn a lot. And so this is just the difference between: do you let the robot do the work for you, or do you do the work, grow your experience yourself, and be able to teach someone?
Speaker1:
[13:08] And no one who uses ChatGPT could teach someone anything, because they just learned it at a very cursory level. And again, this tracks with everything that we've seen before. At the same time, though, I'm looking at the study and it's measuring the decline in the cognitive load that kids are able to take on. But I'm thinking that there is a different thing out there that has grown in capacity that they are not measuring. So it's a trade-off, right? We are trading off the ability to think, and we are growing in our ability to access information, to access intelligence, at a moment's notice, at our fingertips. And that is not being measured in this study. And I think with the information age, with the infinite amount of data that we add onto the internet every single day, month, year, there's too much information out there to ever know. So we need this external tool to parse through that information and apply it. And there's a trade-off: now we have all this information at our fingertips and we can leverage it with these LLM models, but it's not in our brains anymore. We are measuring the part that's not in our brains, and this study is not measuring the part that is the benefit here. Hmm.
Speaker2:
[14:19] That's an interesting take, David. What do you think the extra brain compute that we're not using could be used for here? Like history would tell us that it's innovation and creativity. Would you agree with that trend?
Speaker1:
[14:30] That's a good question. I think the skill that people will grow is prompt engineering. Learning how to think critically about how to manage the AIs, how to use your tools appropriately. And that can be very creative. And that opens up a brand-new playing field of opportunity. Maybe when I articulate it here, it seems so reductive, it seems so small: yeah, just get good at your prompts. But I think you can imagine a very big world of thinking about prompts creatively and having them relate to other prompts in a different context, also creatively. And all of a sudden, you are truly an engineer. You're not a construction engineer, you're not a tech engineer writing code, but you are engineering. You are doing that.
Speaker0:
[15:15] It feels like the Jevons paradox, where efficiency gains end up increasing total consumption, applied now to brains. We've been offloading compute to less intensive or less important things that we could let computers do, and this is a natural extension of that. Now we don't have to think about a lot of the things we used to think about; you just prompt ChatGPT and it gives you an answer, and then you unlock all this new productive thinking space. Your brain hasn't actually changed in size; you could just retrain it to think about other things. But the big question that I'm interested in is: what does that new thinking space get applied to? Are people actually going to use it to work on more creative ideas and use these tools as leverage, or is this just a net reduction? Does that space get wasted because we're offloading so much of it to these AI models?
Speaker2:
[15:57] It's actually a really good point, Josh. And I think that the first step is learning how to interact with the AI in the first place. Andrej Karpathy had a really good take this week where he basically describes humans as the constraining point when it comes to using AI. He describes this really interesting cycle where a human will write a prompt and expect the AI to come up with the complete answer, right? Except that humans are kind of bad at prompting. We miss out a lot of context, we miss out a lot of nuance. And then we follow up and say, oh no, I meant this and this and that, and you can see the AI getting really confused, and its answer gets much less effective. But what Andrej suggests is a new form of interacting with the AI: ask it a simple question, introduce the scenario, and then let it respond to you in a short, simple answer. And then what you do as a human is verify whether that answer is correct or not. It's easier to ask a small question whose answer you typically have a good idea of, verify that, and then follow up with another question. And he says that the sum of all these different parts, or prompts, if you like, ends up in a much better answer than if you were to just write
Speaker2:
one entirely long prompt. So it's this kind of self-iterating loop of human to agent, or human to AI. And I really think that's a much more optimistic model of how humans and AI can work together, versus this slop chip in the brain, which we'll eventually get to, but just offloading everything onto them.
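For anyone who wants to try this pattern, here is a minimal sketch of the ask-verify-follow-up loop described above, assuming the OpenAI Python client. The model name, system prompt, and console framing are illustrative choices, not anything prescribed by Karpathy or the study.

```python
# Minimal sketch of the ask-small, verify, follow-up loop (assumptions:
# the OpenAI Python client is installed and OPENAI_API_KEY is set; the
# model name and system prompt are illustrative).
from openai import OpenAI

client = OpenAI()
history = [{
    "role": "system",
    "content": "Answer in a few sentences so the user can verify each step.",
}]

def ask(question: str) -> str:
    """Send one small question, carrying the running conversation as context."""
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

# The human drives: small question, check the short answer, follow up.
while True:
    question = input("You (blank to stop): ").strip()
    if not question:
        break
    print("AI:", ask(question))
    # Verification is the human's job: if the answer looks off, the next
    # question corrects course instead of restarting with one giant prompt.
```

The point is that each turn stays small enough for a human to verify, and the accumulated history, rather than a single long prompt, is what carries the context forward.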
Speaker1:
[17:26] So really, the difference here is that instead of trying to one-shot your prompt, you incrementally take small steps toward the goal that you want. You don't try and jump the entire staircase; you take one step at a time and slowly get there, rather than one-shotting it: hey, write me an essay about the thesis of the market economy in the Civil War, and then turning that in. You actually use it as a tool rather than an outsource.
Speaker2:
[17:57] Yeah, exactly. Like actually in this example, in this presentation that he gives, he says, don't ask the AI to write 10,000 lines of code for your new app that it has no context on, right? Talk to it, let it write a bit of code, review it, iterate, and then kind of like build into this larger thing. Josh, what's your take?
Speaker0:
[18:15] And this is kind of how you get better at everything, right? You want as many feedback loops as you can get, and you want to tighten those loops as tight as possible for maximum control and maximum learning through each iteration. And I think, yeah, when you do ask the AI for a large amount, for 10,000 lines of code, it just lacks the context that it needs to actually produce a good answer. So by doing this iterative formula, not only are you getting closer to the correct answer, but
Speaker1:
[18:41] You could evaluate
Speaker0:
Quickly, and it augments your ability instead of replacing it. So it becomes, to DHH's point earlier, like you have this pair programmer that you can work with, and you can kind of evaluate it, and through the evaluation process you are learning. It's helping you tighten that feedback loop; it's helping you iterate faster. And I think that's the most exciting part of this, to Andrej's point: you just want to move faster. And if this tool can help you climb those steps, each step faster and faster, while you're still learning, still retaining information, and you still have the context that the AI model doesn't have, that's a huge win. And that feels like the ideal use case for now, at least: while AIs are still lacking the context that we have, this is an amazing way to work, and you can just move so much faster.
Speaker1:
[19:22] Maybe I'm reading into this a little bit too much, but it feels like the right way to use these tools, according to Andrej Karpathy's take, is much less of a master-slave relationship and much more of two collaborators, two co-collaborators, iteratively working toward a future. Rather than this one-sided thing of, I've got this essay; ChatGPT, write this essay for me while I go fuck off. Where instead it's: hey, ChatGPT, we need to write an essay together, here's what I'm thinking, what are you thinking about that? And then that starts off the process. It's much more healthy of a relationship, it's much more collaborative, it's much more sustainable. When the robots take over, I feel better about this path than the alternative. So rather than just outsourcing your work so you can go out and play, it's: no, you guys do the work together, and you iterate toward a better outcome.
Speaker2:
[20:16] Okay, David, but you know what's better than a human-AI relationship? An AI-AI relationship, right?
Speaker1:
[20:23] Oh, no, we're taking the human out again.
Speaker2:
[20:25] Yeah, we're automating the human again. The reason why I bring that up is, if any of you have been using Claude's new research feature (that's the AI model from Anthropic), it has this deep research feature, which is kind of similar to ChatGPT's deep research. And it's really, really good. And they revealed this week how it works. They're using a bunch of AI agents on the back end to basically run this iteration loop that we just described between human and AI, but instead, it's AI and AI. So it's a Claude model talking to another Claude model. And when compared to their previous research setup, the output improved by 90 percent, nine-zero, for the average user that used Claude Research. So what it's basically showing is, number one, it's verifying what Andrej Karpathy is saying: that having this iterative cycle of back and forth, smaller questions, understanding the nuance, and having a deeper conversation is actually very useful. But the slightly doomer take is: maybe AIs are the best people, or things, to do that, and we should just cut the humans out entirely. Which brings us back to big brains, skinny arms, no muscle mass, and a chip in our brain.
Speaker0:
[21:39] So how does this work? Are there just agents talking to agents or how are they checking each other?
Speaker2:
[21:45] Yeah. So if you look at this diagram that you've got pulled up here, David, essentially it shows you that you have this kind of master orchestrator agent. Think of this as the ChatGPT that you talk to on your interface. But what it's doing on the back end in this case is talking to a number of different sub-agents. And the reason why I call them sub-agents is that they're tasked with smaller things. Typically, an AI model that you interact with is a very generalized model, right? It's meant to try to know everything and interact with everything. But these smaller sub-agents are tasked with: hey, can you just check the facts of what this guy has asked us, and just see if the current events that he's referencing actually happened? Then you have another agent, a reasoning agent, that goes: okay, now that I've talked to this agent and it's verified that the facts this guy is claiming are true, let me think about the possibilities of where the solutions might end up for the question he's asked. Then you have another agent which checks the reasoning agent: okay, is this agent biased in any way? Has it used any kind of political sources that I wouldn't have used, et cetera? So each agent is tasked with its own thing, and it goes in loops until it kind of converges on a point. And actually, off this conversation, when we were introducing this topic, David, you made a really good point that this kind of sounds like reasoning, doesn't it?
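As a rough illustration of that orchestrator-and-sub-agents shape, here is a sketch using the Anthropic Python client. The model alias, role prompts, and single-pass flow are assumptions made for illustration; Anthropic's actual Claude Research pipeline is more elaborate and loops rather than doing one pass.

```python
# Sketch of an orchestrator delegating to role-scoped sub-agents.
# Assumptions: the Anthropic Python client is installed, ANTHROPIC_API_KEY
# is set, and the model alias and role prompts are illustrative.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-3-5-sonnet-latest"

def run_agent(role_prompt: str, task: str) -> str:
    """One sub-agent: the same model, narrowed to one job by its system prompt."""
    reply = client.messages.create(
        model=MODEL,
        max_tokens=500,
        system=role_prompt,
        messages=[{"role": "user", "content": task}],
    )
    return reply.content[0].text

def research(question: str) -> str:
    # Orchestrator: fan the question out to narrow sub-agents, then merge.
    facts = run_agent(
        "Verify the factual claims in the question. List what is true or false.",
        question)
    reasoning = run_agent(
        "Given these verified facts, reason toward an answer.",
        f"Question: {question}\nVerified facts: {facts}")
    critique = run_agent(
        "Check this reasoning for bias and weak sourcing.",
        reasoning)
    return run_agent(
        "Write the final answer, folding in the critique.",
        f"Question: {question}\nDraft: {reasoning}\nCritique: {critique}")
```

A production system would keep looping until the critique agent is satisfied rather than making one pass, which is part of where the token costs discussed later in the episode come from.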
Speaker1:
[23:07] Yeah, I thought that this was what reasoning was: one model thinks and then it checks its work. And the only thing that I see being different here is that there's more segregation in the roles. But other than that, when you zoom out and view it from a bird's-eye view, it's more or less the same. You have a thinking process, you have an iterative meta-thinking about the thinking, you have an evaluation of the thinking, and then you have a redirected outcome based off of the meta-thinking. And yeah, here there are different agents, and I could imagine, if we're using OpenAI models, you have the 4o model doing some quick fact-checking. Just use 4o for the quick fact check, just run through that really quickly. But then you have
Speaker2:
o3-pro doing the deep work,
Speaker1:
the deep analysis, and then you have 4o doing the quick stuff, just the quick stuff: is that right? Let's double-check that. So you could imagine yourself writing an essay, doing deep work, and then you're like, oh, what year did that thing happen in? And then you open up your 4o model and it does a Google search, and that iterates and informs the o3-pro model.
Speaker1:
[24:14] And that's kind of what I'm seeing here.
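A toy sketch of that routing idea, assuming the OpenAI Python client: a cheap, fast model for quick lookups and a deeper reasoning model for heavy analysis. The model names and the deliberately naive "is this a quick question" heuristic are placeholders, not a recommended design.

```python
# Toy router: quick fact-checks go to a cheap model, deep work to a
# reasoning model. Model names and the heuristic are assumptions.
from openai import OpenAI

client = OpenAI()
QUICK_MODEL = "gpt-4o"  # fast and cheap, for lookups
DEEP_MODEL = "o3"       # slower reasoning model, for analysis

def ask(question: str) -> str:
    # Naive heuristic: short questions are treated as quick lookups.
    model = QUICK_MODEL if len(question) < 80 else DEEP_MODEL
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return reply.choices[0].message.content
```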
Speaker2:
[24:16] Dude, that's a good take, because even I use OpenAI's models exactly like that. I use 4o when I'm replacing a Google search with a ChatGPT search. I'll give you an example. Today I got an email from one of the sports memberships that I have, and they said, hey, we've got to close down the establishment for a month; here are your three options. You can pause your membership, you can take some credits that we're going to give you, or you can opt to do nothing. And I didn't spend a second thinking about this, literally. I copy-pasted the entire email and stuck it into o3 specifically, right? So you make a really good point that it's almost like these AI models have personalities and different attributes. Because the question I had was: why not just use a single instance? You're literally using Claude as these different sub-agents, so why not just run it through one single thing? And maybe the missing factor is context. Maybe a combined memory kind of confuses the AI and prevents it from thinking clearly, whereas segregating it into these different models is a better hierarchy. I don't know. Yeah.
Speaker1:
[25:20] Yeah, it's not unlike how the brain works. The brain has different zones, different regions. You have different dedicated pieces of architecture in your brain that specialize in different ways of thinking, right? You have your memory, you have your senses, you have your feelings, your emotions, all this kind of stuff. And those different zones of your brain are all kind of in competition for attention. They're all flagging things, and some things are going to be flagged louder than others. As in: there is a lion. The part of my brain whose role it is to tell me that there's a lion in front of me is going to yell so goddamn loud, and my relationship stress is just not making its way to the surface. And so there is this internal market economy of negotiation that your brain runs in order to produce an effective output. And what I'm seeing here is different modules working in orchestration, where some things are going to have priority or urgency: oh, I just fact-checked your thingy and you are so off base that I'm going to yell and scream, and once I'm heard, I'm going to quiet down and let a different model take over and move forward. So I'm seeing a lot of parallels with how brains work here.
Speaker0:
[26:36] It feels like the natural extension of what we've been seeing with ChatGPT, where it just has tools. It can search when it feels like it needs to search. It can generate an image when it thinks that's helpful. It can read text off of things. And this is just the extension of that, where now this large model has the tools: here's my fact-checker, here's my logic-checker, here's my math-checker. And it's this tool set where, instead of a calculator, each tool is an actual thinking model. So you get this kind of compound reasoning effect, but hyper-specific, with the specific domain knowledge and context that's required, which probably just yields much better results. But now I'm curious what this looks like for compute, because this seems like a huge increase in token generation relative to just asking a prompt and getting one response, instead of these ten different tools that are all thinking.
Speaker2:
[27:21] Jensen Huang wins again, Josh.
Speaker1:
[27:24] You would think that one good prompt would actually satisfy the needs of seven or eight or nine more iterative prompts. So what I'm seeing here is the architecture to do a better one-shot prompt: you just make one prompt, and then the output is actually what you need, and you don't have to keep on prompting it again and again and again. That's kind of what I'm seeing here, right?
Speaker2:
[27:46] But then the prompting is just getting offloaded to agents, so it's still happening; it's just happening behind the scenes. So, you know, it's still using the same amount of compute.
Speaker1:
[27:57] It's taking cognitive load off of humans and it's finding ways to put it into ChatGPT.
Speaker2:
[28:01] It's making us dumber.
Speaker1:
[28:02] Making us into big brain, small bodied aliens. Actually, no, wait, small brain, small brain aliens.
Speaker0:
[28:08] Small brain, small body.
Speaker1:
[28:10] Small brain, small body. Yeah, we just lose.
Speaker0:
[28:14] Oh, God. There is a note here. It says they do require more tokens to achieve this: agents use about four times the tokens of regular chats, and multi-agent systems use about fifteen times the token count of regular chats. So the value of the output needs to beat the cost for the latter, which is a fifteen-times multiple on required tokens per query. A lot of compute required, but progressing in the right direction.
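As a back-of-the-envelope check on what those multipliers mean, here is the arithmetic with a made-up baseline token count and a hypothetical per-million-token price; both numbers are assumptions for illustration only.

```python
# Rough cost arithmetic for the quoted token multipliers. The baseline
# tokens per chat and the price per million tokens are made-up assumptions.
BASE_TOKENS = 2_000       # hypothetical tokens in a plain chat query
PRICE_PER_MTOK = 10.00    # hypothetical dollars per million tokens

for label, multiplier in [("plain chat", 1), ("single agent", 4),
                          ("multi-agent", 15)]:
    tokens = BASE_TOKENS * multiplier
    cost = tokens / 1_000_000 * PRICE_PER_MTOK
    print(f"{label:>12}: {tokens:>6} tokens ~ ${cost:.3f} per query")
```

At these assumed numbers, a multi-agent query costs fifteen times the plain chat, which is why the output has to be worth the multiple.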
Speaker1:
[28:36] So I think the theme of this episode, the question of this episode, is: does humanity collapse into just exporting its cognitive load externally while our brains just kind of atrophy over time? And what I'm seeing here, after kind of working through some of this stuff, is that there is an immense gravitational pull toward that outcome. So how do we want to prevent that? Do we want it not to happen? Is it a bad thing? Whether or not it happens to the individual, I think, comes down to individual willpower. Because you can think harder while you use ChatGPT; that is an option available to you. You can also think dumber while you use ChatGPT; that is also an option. And so some people are just going to become sheeple, and some people are going to actually use these tools to make mega-trillion-dollar tech companies. And it comes back down to motivation and willpower, which has always been the case. That's always been what it's been. So I don't know if anything is meaningfully different here, other than there will definitely be more sheeple.
Speaker0:
[29:39] Yeah, again, these are just tools for leverage, and you can use a tool to improve yourself or you can use it to offload your cognitive load and not think for yourself. It's very much in the user's hands. But there are ways that you can help push things in the right direction. There was this great video that I've been obsessed with; I've probably sent it to you guys a couple of times. I'd love for you to pull it up just briefly before we wrap up here, as a reward for the people. It's this video of Sydney Sweeney and Drake teaching math. And we haven't included this yet; I'd love for people to see it. You can actually generate meaningfully great content with AI and push it to other people in a way that's digestible. So when you do see your favorite actress or your favorite rapper talking about these complicated topics, that is a meaningful change that you can push on to others using these tools.
Speaker1:
[30:28] Can we get Sydney Sweeney to teach a generation how to do calculus? Yeah.
Speaker0:
[30:34] So it is hyper-personal. It's on you to decide how you want to use these tools, but you also have the opportunity to create things that can
Speaker0:
make it easier for other people to get aligned and think for themselves as well. So I think that's probably my takeaway: you can also change things for other people as well.
Speaker1:
[30:51] That's very optimistic, Josh. It does make sense that we start with brain rot and then we move into: okay, I'm done with the brain rot, let me be productive now. It seems like...
Speaker0:
[31:00] I watched this video and I was like, man, I understand this. I'm into it. I was hooked. And I would never watch a math video, but this one, I was like, okay, yeah, that's cool.
Speaker1:
[31:09] Yeah. Who did it for you, Sydney Sweeney or Drake?
Speaker0:
[31:12] Well, they started with Sydney Sweeney, and that was a
Speaker1:
good hook, and then...
Speaker0:
[31:15] When they got Drake, that was it. I was locked in. This was...
Speaker1:
[31:18] Meant for Josh. This video was meant to teach Josh about the Pythagorean theorem.
Speaker0:
[31:23] Let me tell you, I can recite this word for word.
Speaker2:
[31:25] I was about to say: Josh, what is the Pythagorean theorem again?
Speaker0:
[31:28] A squared plus B squared equals C squared, baby. Come on.
Speaker2:
[31:32] Thank you, Sydney.
Speaker0:
[31:34] So there you go. It's possible.
Speaker1:
[31:36] Alright, let us know what you think in the comments. Do you think we are doomed? Do you think that we are going to offload all of our cognitive load onto ChatGPT and we're never going to be able to think again? Or will we just be smarter, because we'll have the tools to be smarter? Let us know what you think. If you found this video on YouTube, make sure to subscribe. We do these AI roll-ups pretty frequently, where we talk about the news of the week and the drama in AI, the Game of Thrones race to create god, and all the other things that are going on in the AI labs world. So click that subscribe button, click that like button, and we will see you.
Music:
[32:08] Music