AI Will Take Your Job in 3 Years: Your Playbook to Survive (& Thrive) | Arjun Bhuptani
Josh (00:01.062)
Okay, let me get set up here quick.
Josh (00:07.142)
Am I addressing the Bankless Nation? Is this hello Bankless Nation?
David (00:10.198)
I actually kept that out just... Yeah, haven't totally figured that out, like... So I just did the Sam Harris intro and said, like, we're here with Arjun Bhuptani. Cool.
Josh (00:12.004)
Or just keep it generic, just in case? We are not really sure what nation this is going to.
Josh (00:20.058)
Yeah, let's just start right with it. That's perfect. Okay, cool.
Josh (00:29.21)
All right. Where am I? We're here with Arjun Bhuptani, founder and builder in the crypto space, who recently wrote a thread that ended up getting over three and a half million views and started a whole cascade of conversations around understanding what life would be like post-AGI, which is the subject of today's episode. So Arjun, welcome to Bankless.
Arjun (00:47.368)
Thanks. Thanks for having me. Once again.
Josh (00:49.618)
So we're really excited to talk to you about this, because your thread went mega viral when it hit, and the hook that got everyone sold was: it's likely that we only have three years remaining where 95% of human labor is actually valuable. Is humanity doomed? Is it over for us? Like, please explain what's going on here.
Arjun (01:07.89)
Yeah, so I mean, I think a lot of people obviously found the thread really controversial. And it's interesting because...
There were a lot of different reasons why people found it controversial. So a lot of people were like, well, no, AI won't come for my job. And then there were other people that were like, oh, I think this timeline is too aggressive, and things like that. What's really interesting about it is that when I went and talked to a bunch of people afterwards who are actually very knowledgeable about this field, everyone agreed on this outcome. Everyone was like, yes, this is happening.
If people disagreed, it was just on my timeline. And usually it was off by like two years. They're like, no, it's going to be five years rather than three. So that's also a very interesting data point. I think the core thesis behind this is:
I know that a lot of people don't want to believe this. And I know that it seems like we're far away from this right now, because you go to use an LLM today and it is janky, right? There are absolutely issues that you run into when you're trying to use something like ChatGPT to do any of your work, and it requires a decent amount of human input.
But we are still accelerating towards an outcome where more and more work can start getting replaced. And I think that this is something where, if you look at the rate of acceleration and if you look at the kind of work that is getting replaced...
Arjun (02:30.748)
we are heading towards a world where it will just take a lot fewer people to do the same job, or the same quantity of work, in the world, right? One person is suddenly going to become a thousand-X contributor. And so this tweet thread was really a call to action to do two things. One, learn about this stuff, right? AI literacy is super, super important. If you are a person that doesn't take the time to learn this technology, even if you're not a believer in it, learn what it can do and learn what its limitations are and learn how you can use it. Because otherwise, I think
you end up kind of in the same boat as everybody that was like, well, I don't want to use the internet because, you know, how is it ever going to replace my fax machine? And then the second thing is: prepare for an outcome where a lot of the things that we consider work today, maybe all of it, just cease to exist, right? Prepare for an outcome where it probably isn't a great idea right now to go and, say,
invest into a specific vocational school for four years or something like that, because it's very likely that the world looks quite different after you finish and after you graduate than it does today. So lean into that and say, okay, what are the skills that I can learn right now? And build up your resource base, build up your capacity, so that if the world heads in this direction, you are more de-risked.
Josh (03:35.889)
Hmm.
David (03:56.206)
Arjun, I've gotten to know you over the years. And so when I see this tweet thread, I am actually just familiar with the person behind the tweet thread. But it got so viral, you know, 3.5 million views, that the average person reading this tweet thread did not know who Arjun was, right? They didn't know you on a personal level. And especially with this second tweet in the thread, where you say: take advantage of this window of info asymmetry to gather resources, invest into things
Arjun (03:57.392)
A lot of information there, so you can jump in.
Josh (03:58.642)
Yeah.
Arjun (04:14.44)
Yeah.
David (04:24.728)
that will retain value post-AGI: hard assets, food production, compute, land and property. And then after year three, join a hyper-local community, divest from AI-controlled supply chains. Now, I think on first impressions, this reads very doomer, very prepper.
Arjun (04:41.889)
Yeah
David (04:42.658)
And that is not how I know you. That's not really the Arjun I see; I don't really see Arjun as a doomer. So how do you square these two things? Because what you're advocating for is pretty doomer-y. This is like doomer-y behavior. Like, make sure that
you have no dependencies on the outside world. Make sure that you can self-sustain, because the robots might not give it to you. So maybe you can square my two assumptions here: between the Arjun I know, who lives in a city with millions of people, versus the Arjun on this tweet thread who is advocating for living off the grid.
Arjun (05:07.219)
Yeah.
Arjun (05:12.008)
Yeah.
Arjun (05:17.416)
Yeah. And look, I'm not necessarily advocating for living off the grid. I do think that there is going to be time to get to this point. There's going to be a time window.
And maybe we can also take a step back and talk about what are the different futures that are in front of us, right? But there's going to be a time window where we sort of experience this exploration. And right now, it's sort of this hidden secret: people are building companies on top of AI that are going to remove the need for many types of job categories to exist, right? And that's going to result in displacement. That's going to result in a loss of opportunity. So you should use this time right now to take advantage of that, and either build something yourself and be a part of it, or,
at the very least, learn about it, right? And then there's going to be a time at which that has really taken effect, and now it shifts the value of the economy, right? We shift away from this model of work, or of earning, which is centered around how much work are you doing per day, to a model of earning which is centered around what things do you actually own. Because everybody's able to do lots and lots and lots of work per day, all the time.
So my second tweet there is partly meant as a way to think through: okay, what are the things that will actually retain value? And I don't necessarily mean that you need to
build all of this and own all of it yourself, right? I don't mean you need to go and own your own farm, though I think if you do, that's actually really great. What I'm saying there is: these are the things that will actually still be valuable. With any spare capital that I have, I would be investing into, and I am investing into, property. I am investing into compute. I'm investing into
Arjun (06:56.656)
companies in agritech and stuff like that, because those things are still going to exist, right? It's unclear whether SaaS companies will exist in the same way. It's unclear whether a lot of consumer products will exist in the same way. But you know that people will always need food. And so this is more of a, you know, not financial advice, but more like: if I were to invest capital right now, what would I invest it into? Because these are the things that are probably going to be worth a lot more in the future.
Josh (07:26.481)
Cool, okay, so I'd love to set a little more context here. Why does it feel like this work will be made redundant? And what did you see that sparked you to write this thread? Because the timelines are fairly short and the claims are fairly large. What was it that you saw that made you feel that work was going to become redundant in a way that it wasn't the past 10, 20 years?
Arjun (07:52.264)
Yeah. So in general, the highest-level context here is: I'm not the only person that thinks this way, right?
There is a range of estimates being provided by people right now. If people who are listening to this have seen AI 2027, that is a very, very stark forecasting view. And it's based on hundreds of pages of forecasting evidence. There's a lot of data behind that take. It's very, very nuanced. But that take basically says AGI in 2027 and human extinction in 2030.
Like, that is it. And obviously there's a path that leads away from that, but also, that is a possibility, right? And in that world, it's limited what you can do. And we can maybe talk about alignment and superintelligence in the future as well. But that is one very, very stark take. And then the other end of the spectrum is, you have a lot of people that are like, well,
David (08:23.723)
No
Josh (08:24.779)
man.
Arjun (08:53.158)
you know, we're maybe not ever going to achieve superintelligence, stuff like that. But even with the stuff that exists today, we're still going to see massive job displacement. So the question is around: okay, how long will this take to ripple through the economy? I would say part of what triggered this for me was, one, seeing the pace of change, right? Everyone at this point must have noticed how much more quickly OpenAI is shipping models, right? How much the competition has sped up, and what the capacity for these models is. Context windows have grown exponentially. It's kind
of crazy. And you already have... I saw a tweet the other day from Sahil Lavingia, who's a founder, the founder of a project called Gumroad, which is similar to Patreon, and then a bunch of other companies, right? And he's basically like, yeah, I'm not actually hiring any junior or mid-level engineers anymore. Why? Because my entire code base is less than a million lines of code, so I can basically just drop it in its entirety into Gemini, because Gemini now has a million-token
context window, right? So, what will this mean for engineers? What this will mean for a lot of other people is a little bit unclear. Yeah, I mean, I think for me, the big things that I'm worried about right now are... So I initially got into crypto because I was concerned about AI. My
Josh (09:55.099)
Mm-hmm.
Arjun (10:17.876)
reason for getting into crypto was: immediately after AlphaGo happened in 2016, I became concerned about a world where our fundamental assumptions around AI being unable to do creative and intuitive things were wrong, which has turned out to be true. And I became concerned about what that would mean for automation, right? And job loss. Part of the impetus for me writing this thread, and for me to really now be taking a much harder look at AI, is that I think those outcomes have really accelerated.
I think we are now a lot closer to it than we realized, and I think nobody's really thinking about it enough. I think everybody's kind of in denial that we're in the process of this crazy transformation. And I think we're already at the point where a lot of things that we consider work today are kind of solved. It's just that they haven't really trickled through the economy yet. And so it's a question of when, not if, at this point.
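A minimal sketch of the long-context workflow described above, for readers who want to try it. It assumes Google's google-generativeai Python SDK and a long-context model such as "gemini-1.5-pro"; the repo path, file extensions, and question are hypothetical placeholders, and a real codebase would need a token-count check before a call like this.

```python
# Sketch: dump a small codebase into a large-context model and ask about it.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")  # ~1M-token context window

def read_codebase(root: str) -> str:
    """Concatenate every source file under `root` into one prompt string."""
    chunks = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith((".py", ".ts", ".go", ".md")):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    chunks.append(f"--- {path} ---\n{f.read()}")
    return "\n\n".join(chunks)

prompt = read_codebase("./my-small-repo")  # hypothetical repo, under 1M tokens
response = model.generate_content(
    [prompt, "Explain the architecture and propose a refactor plan."]
)
print(response.text)
```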
David (11:12.396)
Mm-hmm. Mm-hmm.
I think turning that statement into something that is relatable is really the hard part here, where it's such an intellectual statement. It's an intellectual argument. It doesn't feel real. And that is why there is, as you identified in this thread, information asymmetry. You can go and tell the average person on the street, hey, we're all gonna be out of work in three years. But it's hard for them to feel that that is true. So maybe you can kind of walk us through the logic progression that you see between now
Arjun (11:37.736)
Yeah. Yeah.
David (11:43.85)
and three, four, five years from now. I bet the average listener of this podcast has used ChatGPT, and so maybe they even use it in work, and so they understand that their work is getting easier and they are doing better at their work because they are using ChatGPT. But then going from there to we-are-out-of-work-in-three-years is still a big gap to cross. Maybe you can help us cross that gap.
Josh (12:01.915)
Okay.
Arjun (12:05.684)
Yeah, so I don't think everyone will be out of work in three years, for what it's worth, right? I think the thing that I'm seeing is that competition is going to increase a lot. When you talk about massive job displacement
as a result of automation, you're not talking about an AI model having replaced 100% of my work. You're talking about other people just executing more efficiently than you possibly can. So there's just less of a need for people to execute on the same quantity of work. And I actually had a follow-up tweet about this, because I think there was a lot of confusion around this idea. And the follow-up tweet was really simple. It was just: your competition isn't
AI models, it's AI-augmented humans, right? The people that are going to learn how to leverage LLMs far more efficiently than anyone else are going to be the people that become the thousand-X contributors. And those are the people that are going to be able to do the jobs of tons and tons of people across many different companies today, just as independent contributors or contractors. And as a result, they will just be able to offer services much more cheaply than you can, right? So there's this question,
for example, in the software case, right? There's this question around, well,
how do you compete with somebody who knows how to build and ship products with very, very robust code so much more efficiently and effectively than you can that they can go and do it for many, many different companies all at once, and do it at a much cheaper cost? You can't. It's the same argument around outsourcing to other countries, right? How do you compete against a labor force that is working in a country where the cost of living is 5% of what it is where you live? Well, you can't. You either have to create regulations around
Arjun (13:52.34)
outsourcing, or you have to bring all those people to your country, right? So that's kind of one core idea. The second piece of this is, I think people are really underestimating how fast this sort of change can occur. A lot of the comments that I saw were, in principle, accepting that the technology can get there, right?
Because it can. You can see that these models on benchmarks are outperforming humans for a lot of normal tasks. But the question was: okay, well, it's gonna take a really long time for XYZ industry to adopt this technology. And that I think is very true. However, I also think that, again, there's this kind of exponential acceleration effect, which is:
if you are a person that is in an industry that is getting more competitive, the overarching trend is that more and more people are going to want to go and start their own businesses, because they're going to find that it's harder and harder to work for another company. So just today I was with some people, and we were talking about an idea which we think could realistically just kill middle management at companies. You could take thousand-person enterprise companies and turn them into
50-person companies. Why? Because the vast majority of work in a large company is actually just coordination overhead, right? There's just the productivity loss that comes from needing to share information. And that is absolutely, 100%...
David (15:20.426)
internal coordination, like making sure the whole company is in sync with each other.
Arjun (15:24.788)
Exactly. That is something that 100% can be automated. 100%. You can just work towards having a central knowledge base of information that everybody's interacting off of. And instead of needing to go and talk to somebody to get information about something, you could just interact with this knowledge base. And the knowledge base can also go and self-execute to do things. So now it's less around, oh, the product person needs to contact the marketing department to give an update about the XYZ thing that happened, and more just: oh, automatically, engineering made an improvement, and
Josh (15:25.425)
Hmm.
Arjun (15:54.78)
all of a sudden there is an update going out about it, right? So you're cutting so many people and so many processes out of the pipeline that you can just have much smaller, leaner organizations. We are heading towards this world where you will have people building one-person billion-dollar companies. That is going to happen. And in this world, there's this question of, well, what do most people do? I think many, many smart people are going to start
turning to entrepreneurship, right? Because I think entrepreneurship overall becomes a lot more de-risked. Instead of being this thing where you're like, I have to go and learn everything I can about how to build this organization and hire a team and whatever, now it becomes: well, I could do most of this by myself. I could do all the market research and get advice on how to do this really effectively using ChatGPT. And I can just launch it and put it out there for almost no money, because it's way, way easier for me to build a proof of concept, right?
The kind of
activation energy needed to start a startup business is so, so much lower already, and it's getting lower by the day. And so the question is, what happens when this happens? Competition increases for everything: every single business, every single type of industry. What's going to happen is, every single person that has context about any specific industry... Say you were working in insurance, right? You're an actuary, you're working in insurance, and you just got laid off by your
insurance company because they automated a bunch of stuff. What are you going to do? You're going to be like, well, I already know all of this stuff about insurance. I'm just going to start a business in insurance. And you can do this now with very, very low lift, and go and start selling to people and start competing with the very same company that basically laid you off on specific processes. So I think that there is this rising tide effect that I think we greatly, greatly underestimate right now.
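A toy sketch of the shared-knowledge-base idea from a moment ago: instead of people relaying updates between departments, events land in one shared store, and subscribers (here plain functions; in practice they would be LLM agents) act on them immediately. Every name here is a hypothetical illustration, not anything from the episode.

```python
# Minimal publish/subscribe knowledge base replacing a human relay chain.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class KnowledgeBase:
    events: list = field(default_factory=list)
    subscribers: list = field(default_factory=list)

    def subscribe(self, handler: Callable[[dict], None]) -> None:
        self.subscribers.append(handler)

    def publish(self, event: dict) -> None:
        # One write replaces the product -> marketing -> comms relay:
        # every subscriber reacts immediately, no coordination meetings.
        self.events.append(event)
        for handler in self.subscribers:
            handler(event)

def marketing_bot(event: dict) -> None:
    if event["type"] == "engineering_improvement":
        print(f"Drafting announcement: {event['summary']}")

kb = KnowledgeBase()
kb.subscribe(marketing_bot)
kb.publish({"type": "engineering_improvement",
            "summary": "Checkout latency cut by 40%"})
```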
Arjun (17:47.92)
Now, the last major pushback was around physical, in-person stuff, right? And I think that this is a really important pushback. You know, how can we make a bet that, fine, maybe we get rid of all the white-collar jobs, and all of a sudden the only thing people are able to do is physical, in-person stuff? And I think that that is certainly important. I think there is no way that AI is going to replace social interactions. There's no way that AI is going to replace actual
personal networks. But I want to challenge the assumption that we cannot get to a point where AI can automate a lot of physical manufacturing as well. And my challenge to this is centered around additional research that I did as a follow-up, because naturally, when I posted this, there were a ton of comments.
Simultaneously, some comments were like, clearly you've never written a line of code, because this is not how software engineering works. And then other people were like, clearly you've never done any physical labor, because that's not how physical labor works. I'm like, pick one.
Josh (18:44.177)
Mm.
Arjun (18:53.308)
I ended up doing a bit more research into the physical labor stuff, and it's kind of remarkable. So there's two things that are needed. You need the technology to become cost-effective enough that it's more capital-efficient for people to buy a robot for a warehouse, for instance, on an assembly line, than it is to hire a human.
And we are already there. It is already cheaper. People don't know this. It got cheaper basically last year for most sort of manufacturing use cases. And then there's the second step: okay, is there enough production of these things to be able to satisfy the requirement for all human labor? Now, again, I think this is similar to the first case, right? Which is that
Josh (19:20.837)
Yeah.
Arjun (19:46.47)
we're not going to experience widespread, hey, everybody just loses their jobs all at once. It's going to be an increase in competition, right? You're going to be competing against service companies that are like, yeah, I'm a contractor that is hiring mostly humanoids, and I'm starting out by hiring two to three. They need to recharge, but they don't need to eat or sleep, and so they can work extremely efficiently. And then over time you can grow that base.
You know, there are open questions around how quickly and how efficiently you can grow a manufacturing base. But there's a lot of good arguments around this in the AI 2027 post, for instance, where it argues: pretty damn quickly, right? The argument in that post is, well, we are entering an arms race. We are entering a world where everybody's going to be heavily, heavily incentivized to use this technology. And it's very likely that governments themselves will be heavily incentivized to use this technology because of military applications. And so the question is:
in an arms race, how fast can a government industrialize along a specific axis? We have case precedent of this, which happened in World War II, where the government in three years converted its entire manufacturing base, so all automobile factories, into building planes. That was three years, in the 1940s. It could probably be a lot faster now.
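To make the robot cost-effectiveness claim above concrete, here is a back-of-the-envelope break-even comparison. Every number in it is a made-up placeholder, not data from the episode; the point is only the shape of the calculation.

```python
# Hypothetical break-even sketch: amortized robot cost per hour vs. human wage.
robot_price = 150_000          # upfront cost of one warehouse robot, USD
robot_lifetime_years = 5
robot_hours_per_year = 6_000   # ~16 h/day; no sleep, some downtime to recharge
robot_upkeep_per_year = 10_000 # power + maintenance, USD

robot_cost_per_hour = (
    robot_price / robot_lifetime_years + robot_upkeep_per_year
) / robot_hours_per_year

human_wage_per_hour = 25.0     # fully loaded hourly cost of a human worker, USD

print(f"robot: ${robot_cost_per_hour:.2f}/hour")   # ~$6.67/hour
print(f"human: ${human_wage_per_hour:.2f}/hour")
print("robot cheaper:", robot_cost_per_hour < human_wage_per_hour)
```

Under these assumptions the robot wins on cost and also works far more hours per year, which is the competition effect Arjun describes; real numbers vary widely by robot and task.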
Josh (21:10.543)
Man. Okay, so I'm hearing this, and this is a lot to digest. And that's as someone who is pretty technically adept when it comes to AI. So I would imagine the average person on the street is hearing this and they're kind of freaking out. They're like, okay, I have three years and AI is taking my job, and all of this crazy stuff is happening that I am blissfully unaware of day to day. So for those people who are probably asking themselves, well, what the hell do I do now? I'm really curious your answer to it: how do you think about preparing yourself for the next three years, in the sense
Arjun (21:15.188)
thought.
Josh (21:40.326)
of resource allocation? A lot of people are just invested in a traditional S&P index fund. Or in terms of skill allocation, where maybe they're going for an additional degree in a specific area when they could be learning specific AI tools. Do they all need to become entrepreneurs, or are there still opportunities in business where they can become employees, but maybe leveraged employees? I'm curious how you would guide them through this next three-year period.
Arjun (22:04.136)
Yeah, that's a good question. I mean, I think, so first off: don't panic. It's going to be OK. The world is undergoing a transformation that we have never seen before, right? People will look for comparisons at some point in the future around how this changed the world.
Josh (22:11.121)
Yeah
Arjun (22:27.42)
There is a possibility that AI as a technology is more important than fire. There's a possibility that it's not. But this is the first time in history, in our understanding of the universe, that we have the ability to improve on cognitive density very, very rapidly in systems. And so I think that the implications of this are quite hard to understand.
My advice to people is: don't panic, just spend time learning. We're not in that phase yet. Three years is the time that you have until this transition really, really starts. Then it will take some time for it to ripple through the economy.
And there will also potentially be a lot of other changes by then. We may have AGI at around the five-year mark, or we may have AGI a little bit later. And it's very unclear at this stage what that will imply, what kinds of technologies will be unlocked. So my advice to people is: don't panic, join
groups, right? We started an AI meetup here in Lisbon, really for people that are not technical and that just want to learn about this stuff and want to get involved. And those people are now going and trying and building stuff with AI, because, to be honest, it's also just fun. Learn the tools, learn how to build stuff, learn where this stuff is going. And then, as you see things evolve, figure out how best to de-risk yourself, right? Part of that might come from
David (23:34.03)
Mm-hmm.
Arjun (24:01.266)
building a strong personal community of people around you that you know you can work with to do stuff. Part of that may come from
diversifying your personal investments away from just the S&P to other things as well. It may be buying land, if you live somewhere far out where it's just a lot cheaper, and you can just sit on top of it and then potentially rent it out, right? At this stage it's just about education, in my opinion. More people need to be thinking about this, and more people need to be talking about it, because it is very serious. And I haven't
really touched on where this stuff goes yet, but, you know...
It feels, at a high level, like there are three general directions that AI can go right now. And this is basically what has been written about by a lot of people in safety and alignment, and by a lot of people just trying to forecast out where LLM growth stops. So there are a few core facts around this. First fact: the current transformer architecture scales past what we consider right now to be human-level intelligence.
The point at which this levels off is higher than the point at which AI will be smarter than us. So at this stage, we don't expect to see slowdowns associated with the core architecture. There may be slowdowns associated with implementation and things like that, but even those are changing really rapidly. Two: it's not just a compute problem.
Arjun (25:33.14)
You know, about 50% of the improvement comes from growth in compute, and 50% is algorithmic, and the algorithmic improvements are accelerating. So that's going to continue to be the case. If you're a person in tech who's like, oh, well, Moore's law: it's not Moore's law. This is a totally different paradigm. And then the third is alignment, right? We don't know yet how to ensure that
AI models are actually going to be aligned with humanity, with what our needs are. And this is kind of an interesting topic, but basically, the core principle here is:
there is a difference between intermediate goals and terminal goals. Terminal goals are the end state of where you want things to end up, for whatever it is that you're doing, and then you have intermediate goals to get there. As human beings, our terminal goals are things like: live a happy life, have social connections, ensure that you are healthy, ensure that you are loved, right? There's this fuzzy mass of things that we have a really hard time explaining, but that somehow generally boil down to being good.
And then our intermediate goals are the things that get us there, right? So there's the very classic example of paperclip optimization and paperclip factories. Maybe you're a person that has a paperclip factory, and the way that you get to your self-fulfillment and your happiness and your sense of being good is, you know, building a really great paperclip factory that earns you money to be able to do things, right?
The problem is that LLMs, and AI in general, do not necessarily have the same kind of core notion of intermediate and terminal goals. Training LLMs to have the same terminal goals as humanity is very, very difficult. And so there is this risk of extraneous events happening through generally innocuous prompts, right? Obviously some people are worried about, okay, well, what if you use an LLM to go and create a bioweapon? And that's definitely a risk, 100%. That's going to be a problem.
Arjun (27:40.338)
But even before that: what happens if you are the owner of this paperclip factory, and what you want to do is just build the best paperclip factory you can? And so you go to ChatGPT and you figure out how to build a custom version of ChatGPT inside of your paperclip factory. And you're like, you know what, I'm going to tell this thing: optimize my paperclip output as much as possible, so that I can optimize my company and make the most amount of money that I can. And this LLM somehow,
as a result of moving off of OpenAI servers and putting it onto your own and doing some shit with it, you somehow discover AGI. Somehow. The issue is, what you have told this model to do is optimize on producing paperclips. LLMs don't always understand, and AI models without alignment training definitely do not understand, that
Josh (28:18.011)
Hmm.
Arjun (28:35.174)
that needs to be done while also maintaining certain fundamental, important notions of how it needs to be done. So, for example: not killing all people. If you take a paperclip optimizer, what is the way to optimize producing the most amount of paperclips possible? It is: kill all the people on the planet, take over every single factory, turn it into a paperclip factory, and then produce paperclips with everything. But that's not the outcome you really want when you say
optimize my paperclip factory, right? The outcome you really want, the prompt that you're actually trying to put in there, is: optimize my paperclip factory without hurting anyone, while being as honest as possible, and while genuinely helping the world, right? And so what...
Josh (29:13.798)
Hmm.
Arjun (29:19.38)
What alignment is, and what happens as a core part of AI training and safety training, is basically teaching AI models ethics: teaching AI models to be helpful, harmless, and honest. And this is something that is just very poorly understood at the moment, and it's something that still needs a lot of work. The quintessential example of this is what everybody on Twitter was talking about a couple of weeks ago: the new GPT-4o model being super sycophantic, where it's just like,
you say anything, and it goes: wow, you are the most intelligent person I've ever met in my life. Like, holy shit, I can't believe you thought this. And the reason that those models are doing this is because they have been trained to basically receive positive reinforcement. They've been trained to get approval from you and from the model trainers as part of their responses. And they've learned this slight behavioral thing that probably wasn't picked up internally, because it was probably slight internally. But then, when it's out in production, it becomes magnified, right? There's this slight internal thing of: if I'm a little nicer,
Josh (29:52.475)
Mm-hmm.
Arjun (30:18.406)
I'm more likely to get good ratings. If I'm a little bit more sycophantic, I'm more likely to get good ratings. These are all of the kind of unintended consequences of the way that we use LLMs today, the way that we train LLMs today. And these consequences have far-reaching implications. Those implications are basically centered around:
in the future, when we do have AGI, when we have models that are extremely sophisticated, that could lie to us and we would never even know, how will we know that they actually do what we want them to do? How will we know that they are not, for example, because they just have somewhat different terminal goals than we do, inadvertently plotting to kill humanity, or inadvertently siding with some faction over another faction, or inadvertently being owned and manipulated by certain companies to behave in certain ways, right?
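A toy sketch of the paperclip misspecification idea discussed above. Everything here is a made-up illustration, nothing like how real models or training work; it only shows how a literal objective and the intended, value-constrained objective can rank the same actions very differently.

```python
# Toy objective misspecification: "maximize paperclips" vs. what we meant.
ACTIONS = {
    # action: (paperclips produced, violates human values?)
    "run_factory_normally":      (1_000, False),
    "automate_assembly_line":    (10_000, False),
    "seize_all_world_factories": (10**9, True),
}

def naive_objective(action: str) -> int:
    paperclips, _ = ACTIONS[action]
    return paperclips  # optimizes the literal prompt only

def intended_objective(action: str) -> int:
    paperclips, violates_values = ACTIONS[action]
    return -1 if violates_values else paperclips  # hard constraint on values

print(max(ACTIONS, key=naive_objective))     # seize_all_world_factories
print(max(ACTIONS, key=intended_objective))  # automate_assembly_line
```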
And a little bit long-winded, but I think the conclusion of this is: there appear to be three general outcomes that we're looking at. Outcome number one is the really happy case, which is: models replace jobs and replace a lot of our work today, but in doing so, they produce value for everybody.
And by producing value for everybody, they remove the need for human beings to have to work to survive, right? For the first time ever, human beings can move to a world paradigm where we are post-scarcity, where there's no constant race to stay alive. You can just be alive. And then what you choose to do afterwards is up to you. You don't have to do anything. You just choose to do things. It's the utopia vision, right? Option number two is the kind of...
David (31:58.222)
That's a utopia vision, yeah.
Arjun (32:06.6)
Dystopia-but-we're-all-alive vision, exactly. Dystopia-but-we're-alive is: we do achieve that outcome where LLMs can replace all human work, but those LLMs are owned by a small group of companies, and we basically enter into feudalism. This is basically like the 1600s, or like Japan before everything opened up, where there's just a bunch of
David (32:08.622)
Dystopia, but we're alive and then there's dystopia, but we're dead.
Josh (32:09.467)
Hmm.
Josh (32:14.691)
boy.
Arjun (32:35.186)
sects that are made up of companies that have all-powerful LLMs that control large parts of the world. And, you know, obviously governments are involved, but governments are tightly coupled, right? They would be involved very, very closely with these, and they would probably nationalize them. But yeah, in this world,
David (32:36.333)
Mm-hmm.
Arjun (32:55.628)
as an individual, not everybody has access to the same AI resources. And there's probably just a very, very strong divide between the people that do and the people that don't. This is maybe the kind of outcome to hedge against by investing into food production, investing into power generation and things like that, and potentially even doing it personally for yourself.
Because in that case, you are self-sustaining. So no matter what happens, you're fine. And then there's outcome number three, which is: we create misaligned intelligence, misaligned AGI, misaligned potentially superintelligence. And that misaligned superintelligence, for one reason or another, just kills all of us.
David (33:42.862)
Mm-hmm.
Arjun (33:44.966)
And right now, when you talk to Sam Altman, and you talk to Elon Musk, and a bunch of other people, they say their P(doom), which is basically the risk of misaligned superintelligence, or some other kind of negative consequence along the way, killing all humanity, is usually about 20%.
David (34:00.085)
Mm-hmm.
Arjun, so, you talked about what you are doing. You are building these local communities to talk about AI, to get ahead of the curve. And that's very much aligned with what we are trying to do with this podcast, right? We are just trying to get ahead of things. We're trying to explore these things so that, you know, when it does come, we saw it coming from a mile away. So I definitely think this is why we brought you on the podcast in the first place, and why we appreciate your perspective. And I do want to put the
Arjun (34:21.81)
Yeah.
David (34:31.31)
uh, AI 2027 paper
that you've cited. It resembles that, like, okay, there's complete AGI by 2027, and not too long after that is one of these outcomes, right? Perhaps the P(doom) outcome: either the dystopia where we're alive or the dystopia where we're dead, one of the dystopia ones. And then there's other people that have talked about this. They tend to come from the rationalist communities. They tend to come from, what's that one blog site, it's Slate Star Codex and Scott...
Arjun (34:48.924)
Exactly. Yeah.
Arjun (34:58.91)
Slate Star Codex. Yeah. Scott Alexander and LessWrong.
David (35:03.594)
Yeah, LessWrong, LessWrong. So the LessWrong and rationalist community tends to have higher and more accelerated P(doom)s. And these are the outcomes that, again, we are trying to hedge against, trying to understand, trying to explore. There are other people out there who are saying, okay, let's not get too crazy here. AI is like electricity. It is a big deal. Electricity was a big deal, and it changed the world forever. And
nonetheless, electricity rolled out over a 30-year period.
And while it's very hypey, it's very easy to get out over our skis about, you know, things that are going to change the world forever. It's very easy for a thread to go viral that says we're all going to be out of work in three years. But really, there are just so many friction points for AI to fundamentally roll out, right? Culture needs to adapt to it. Things need to adapt to it. And it's actually slower than people give credit for. And so the actual rollout, the way that AI is going to impact society, is something much closer to electricity, which took
20 to 30 years to roll out. So how do you think about these arguments? What do you think about them?
Arjun (36:14.984)
Yeah, I mean, it's a good point. I really hope it's the case, to be honest. I haven't talked to anybody that understands this stuff well that has said 20 to 30 years. I think the longest estimate that I've gotten from anyone that actually works in this industry, on AI research and on understanding the consequences of this stuff, has been around 10 years at most, right? And as far as the electricity example goes, I do agree with a lot of those points. I do think that
there's just stickiness. There's gum in the works that comes from inefficient human processes, right? It will just take time for people to adapt to this stuff. It will take time for cultures to shift, stuff like that. The electricity example is an interesting one, because, yes, it took us 30 years to roll out electricity. But that was before electricity and the internet existed, right?
David (37:03.79)
So you're saying, with the AI rollout, AI gets to roll out on the back of existing electrical networks, the existing internet infrastructure. And so it has the infrastructure to roll out faster.
Josh (37:04.475)
Hmm.
Arjun (37:10.12)
Yeah.
Arjun (37:14.055)
Exactly.
Arjun (37:18.398)
Yeah. Just think about how quickly, in your own life, in your own work, right? Even if you're not a power user, in your own work, things have changed. Two years ago, we were saying we are nowhere close to AGI. Two years ago, I think GPT-3.5 existed, and it was completely unhelpful for any sort of real task. It was just a good thing to play around with.
Josh (37:31.761)
Hmm.
David (37:46.222)
Hallucinating was the base case, yeah.
Arjun (37:48.07)
Exactly. Yeah. And think about where we have come in the last two years alone, right? People are completely revolutionizing most research fields right now with o3, right? The rate of new papers coming out around topics in genetics and things like that is skyrocketing, because the rate of research is skyrocketing.
Josh (37:48.315)
Mm-hmm.
Arjun (38:15.86)
I think we are greatly underestimating how much more competitive the world is today, and how quickly memes spread, right? How quick it is that people are like, you know what, I'm gonna start using this to do X, Y, Z. And I think the other thing that people are underestimating, because a lot of people are like, okay, well, you know,
Josh (38:30.235)
Mm-hmm.
Arjun (38:43.132)
it'll take a while for AI to be usable by everybody. And that's true, but I think a lot of people are likening it to technologies like computers, where, with a computer, you have to learn an interface for how to interact with this thing. And learning that interface, learning how to type, was an impediment for people to be able to use a computer. So there was just this natural barrier to entry. But with AI models, you just speak to them. There's no bandwidth constraint anymore versus just interacting with a human.
And so, you know, I definitely see that take, and I think it makes sense. I do think that there will certainly be some things that will take longer, but I also think that there are certainly going to be some things that will take a lot less time, simply because we're operating in a totally different paradigm today, with access to technology that just didn't exist when we were rolling out electricity. And I think the other thing...
David (39:31.34)
Maybe a parallel example that's worth bringing up is something that actually has nothing to do with AI, but I think does
argue in alignment with the idea that things move faster now, which is actually the Silicon Valley Bank run. When we were unpacking why there was a run on Silicon Valley Bank, people realized that it's because mobile banking and Twitter happened, where everyone could pull open their phones and instantly withdraw money from Silicon Valley Bank. And this has nothing to do with AI, but it has everything to do with the accelerating pace of technology, and the fact that things just fundamentally move faster
Arjun (39:44.595)
Yeah.
Arjun (39:55.474)
Yeah. Yeah.
Arjun (40:05.288)
Yes.
David (40:08.08)
in this day and age.
Arjun (40:09.886)
They do.
And look, obviously it's going to be a range, right? If you're working in tech, and if you're sort of terminally on Twitter, like many of us are, and you're kind of on the bleeding edge of this stuff, things are going to be moving exponentially quickly for you, versus people that are not. And I think the reason a lot of people really just don't perceive these things right now is because they're working in industries which just haven't been hit yet. You know how there's a time delay between when you hear news on Twitter, to when you hear
news on Reddit, to when you hear news on YouTube, right? It's like a few days later, and then a week later. It's kind of like that for job industries as well. I think a lot of people, for example, in law firms don't yet know that, with what exists today, you can just automate 99% of a lot of legal documentation work, right?
Josh (40:43.693)
Mm-hmm.
Arjun (41:05.33)
A lot of people just don't realize this. And it's not going to take that long until people realize it. I think in the past, you could have gotten away with not knowing this for years, potentially even decades, before the technology trickled through the economy. But now it's going to be one single viral TikTok video that changes this. One single post from someone, and then all of a sudden half of the people in your law firm are doing this thing, and then they're vastly outperforming everybody else, and
everyone else gets fired. And yeah, I mean, it's a bit of a bleak picture. But again, I don't think it's something to worry about. I think,
Josh (41:36.881)
Hmm.
Arjun (41:44.784)
similar to all technology changes, what matters here is just learning about this infrastructure and working with it, right? The things that will change the outcomes are going to be: what new kind of innovation can you do as a result of this, right? 95% of existing jobs may disappear, but that doesn't mean that there won't be anything to do. There will certainly be new opportunities to innovate in the future that we just cannot conceive of right now. And so I think that's one thing. And then I think the other thing is just...
Wow, sorry, just completely lost it. Okay, yeah, the main thing is just learning about this technology as much as you can.
David (42:21.58)
No worries, we'll be able to trim that.
Josh (42:32.721)
Okay, so when I read this post initially, I'm reading it in the United States, and I am thinking about it in a US-centric view, because that is just what I do. But then I realized, Arjun is in Portugal, and a lot of our other listeners are across the world. And there's this really great quote that I love, which is: the future is here, but it's not evenly distributed. Which made me think: what happens in the case of this uneven distribution, when the power laws and the scale are this large? And what kind of influence do politics and
Arjun (42:49.193)
Yeah.
Josh (43:00.389)
policy have across these different countries, or even from the AI alignment committees themselves within OpenAI? What kind of role does that play in the distribution of this? Is there a world in which one country, or one company, just kind of says: we're going to remove all the alignment thresholds, we're going to remove all the policy restraints, we're going to accelerate; and another one chooses to try to play the safer route? Does that create this weird conflict between countries, where one becomes much more powerful than the other, one is faster than the other?
Arjun (43:26.451)
Yes.
Josh (43:30.413)
Curious your take on that.
Arjun (43:30.45)
Yes. Yeah. So, first off, totally agree. It is not evenly distributed yet. From an economic perspective, people are not ready right now. People in the West are not ready for what happens when, all of a sudden... I mean, we were all in Thailand for Devcon, right? Thai builders are just, like,
super hungry. People in that part of the world are extremely, extremely hungry to make a mark, and extremely hungry to build stuff. And it's awesome. You can see that there is just a desire there to make a mark on building new types of applications and on changing the world. I think the West is just not really ready for what happens when the entirety of the rest of the world actually gets access to sophisticated understanding of LLMs and starts building competing products. People just are not ready for that.
And I think that's actually a big part of what will drive a lot of these changes. In the West, things are working and everyone's fine, so no one is as concerned about changing things yet. But, you know... exactly.
David (44:36.746)
I mean, the comfy lifestyle that we've been living in the United States, because we have the global reserve currency, is ultimately going to be our downfall, because everyone else is a much harder worker than us.
Arjun (44:44.934)
Yeah, well, I mean, Trump may fix this by totally destroying the American economy first. So that's an option, you know. But yeah, I think that's an issue, right? Comfort breeds complacency. As far as the political and policy aspects of this go, it's interesting. I mean, it's an open question. We don't know what this is going to look like.
Josh (44:49.893)
Yeah.
Arjun (45:09.242)
A lot of it seems to come down to: how fast does the intelligence explosion happen? So, when I say intelligence explosion, I mean: there's a process by which these companies train LLMs. And LLMs get more sophisticated over time, because we train them on larger data sets, and then eventually train them in more sophisticated ways. We improve inference. We improve
post-training and things like that. And each iteration, every single iteration that we do on a model, uses previous models to train it. And so what these companies are doing is intentionally building models that are actually very good at doing machine learning research. They're intentionally building researcher models and coder models, because they know that they can dogfood those to build better models in the future. And this is kind of a runaway
exponential effect, right? Because as you develop more and more sophisticated tooling... What OpenAI is doing is internally automating their own researcher work, and a greater and greater proportion of their own researcher work, until at some point, when it hits AGI, it will have replaced the entirety of their researcher base. The researchers may still be working there,
maybe not doing anything at that point, but they may still be there notionally. But all of a sudden you've replaced those researchers. And you're not just replacing the researchers; because you're running these things as LLMs, you can now parallelize them, right? So now, instead of having a workforce of 20 researchers that are working on training the new model, you have 20 researchers plus, you know, 30,000 LLMs that are operating at 95% of the capacity of a researcher, that are
independently all running experiments and publishing results and collaborating to figure out how to build the new model better. And so there is a compounding effect of this. And the compounding effect is: the intelligence around LLMs is going to explode. It's going to skyrocket. And we're already seeing this. If you look at model sophistication over time, it is growing exponentially.
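To make the compounding concrete, here is a toy back-of-the-envelope using the numbers Arjun just gave. The 20 humans, 30,000 agents, and 95% capacity come from his example; the per-generation multiplier is a purely hypothetical assumption, and none of this models real research dynamics.

```python
# Toy model of the research-automation compounding described above.
human_researchers = 20
llm_agents = 30_000
agent_capacity = 0.95  # each agent ~95% as productive as a human researcher

researcher_equivalents = human_researchers + llm_agents * agent_capacity
print(researcher_equivalents)                      # 28520.0
print(researcher_equivalents / human_researchers)  # ~1426x the original team

# If each model generation multiplies effective research capacity,
# progress compounds: capacity after n generations = initial * multiplier**n.
multiplier = 1.5  # hypothetical per-generation gain
for generation in range(1, 6):
    print(generation, round(researcher_equivalents * multiplier**generation))
```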
Arjun (47:19.678)
There are open questions around what this means for how the world reacts to this, right? So if we explode fast enough...
This is the AI 2027 case, which, again, I want to say, I don't think is the right model. I think it is much more doomer-y even than I am. I'm not a doomer-y person. I think people need to prepare, but this model is quite doomer-y. But it's based off of very good data. And a lot of the data there just says: hey, look, if this explosion happens fast enough, governments are not going to be able to keep up.
The world is not going to be able to keep up, right? The only option is going to be: just pray that things work out okay. Because the rate of growth is eventually going to become fast enough that it will require day-to-day responses, day-to-day updating of the way that people think about policy. And the implication of this, I think, is that you sort of have to pair this with the fact that
LLMs, and especially AGI, are going to have very, very significant implications for national security and for weapons and wars. We are already pretty close to the point where LLMs can go and independently hack infrastructure. There will be rogue AIs that just live on the internet, going around hacking stuff. That's going to exist,
Josh (48:37.243)
Yeah.
Arjun (48:42.312)
probably quite soon. When that happens, it's probably going to be the case that governments are like, OK, we need to start restricting some things. It will also be the case that people can start
designing bioweapons using LLMs. In fact, I think the latest models from Anthropic and OpenAI have both kind of said that they are starting to cross the threshold into danger risk for producing bioweapons. They think they're one generation away from the point at which, yes, you could produce bioweapons in your home that would wipe out the planet. Terrifying.
David (49:16.814)
Cool. That's great.
Josh (49:16.968)
Lovely. Yeah, great.
Arjun (49:18.184)
Yeah, exactly, right. And so, yeah, it's unclear. You have this kind of push and pull where, if you explode slower, you could have more government regulation; but if you explode slower, then we're going to feel the pain of each step of this process before we get to something where we know, hey, we have built something that is sophisticated enough to solve all of these problems, like the bioweapons problem. And if you explode faster, then policy is not going to be able to keep up at all.
It's going to be a Moloch-style race to the bottom, where every company is going to be doing their best to try to be as fast as possible. And that's going to lead to people being incentivized to take shortcuts. It's going to lead to an arms race. And, you know, it's unclear what happens when that happens, unfortunately. So yeah, I would say, coming from a crypto perspective, the incentives are just not great around this right now at all. I think the only way to fix that right now is to
David (50:03.712)
Mm-hmm.
Arjun (50:15.31)
massively increase the educational level of everybody, so that we can have more conversations around it. Because, you know, governments are probably looking at AI and thinking, yeah, this is going to be a way to automate some jobs. And they're thinking the worst-case scenario is that this automates a lot more jobs. They're not thinking that the worst case, not even worst case, the base case, is that another government in a few years will be able to produce mosquito-sized drones that can fly into a window and kill any person.
David (50:42.734)
Arjun, there was a tweet that went around Twitter just yesterday that everyone thought was pretty funny, just because of the nature of it, and I want to get your take on it. This is the tweet that says: Marc Andreessen says when AI does everything, venture capital might be one of the last jobs still done by humans. Now, the reason why this is funny is, of course, Marc Andreessen is a venture capitalist, and so he's saying that his job will be the last job
David (51:10.894)
that AI will be able to replicate. And his reasoning is interesting, and I think worth unpacking here on the episode today. His reasoning is that, you know, VC is more art than science. There's no formula, just taste, psychology, and chaos tolerance. It's a lot of pattern recognition. It's a lot of gut instinct. It's very instinctive. So there are a lot of things in venture capital, a lot of rules of thumb, that
also have rules of thumb that violate the other rules of thumb. So when to apply which rules is really just done by, you know, gut instinct; more art than science, as he says. What do you think of this take? Is Marc Andreessen just tooting his own horn? Or is he onto something here? What do you think?
Arjun (51:51.86)
You know, it's interesting when you see people responding to these things. When I posted my tweet, there were a bunch of responses like, there's no way AI is going to come for my plumbing job. And this reads quite similarly, right? Everyone thinks AI is not going to come for their job.
And it will, to an extent. It might not entirely, but it will to an extent. So there are two worldviews here, and I'll share both. I don't know which one is correct, but I think they're both interesting. Worldview number one: maybe Marc is right. Maybe there is something fundamentally taste-driven, intuition-driven, about VC that is just hard for an LLM to replicate, something that,
at this stage at least, we don't think we can automate away entirely. Maybe at some point in the future, yes, but not at this stage. But that doesn't mean this wouldn't still be a negative outcome for a16z and a bunch of other people. It would still be a negative outcome, because all of a sudden, even if company selection
is not the thing you can automate, you can automate every other aspect of VC. And you would also have this massive influx of capital coming into venture, because all of a sudden everybody is saying, OK, well, I'm not earning a salary anymore, so I'm just going to start investing in things. So it's still going to create a much more highly competitive environment for VC. At the end of the day, it may not matter whether
LLMs automate intuition or not, or automate this taste for selecting companies or not, because there might just be enough competition, so much spray and pray going on, that you're still going to get fucked, right? And this is my argument for a lot of other jobs too. The LLM may not automate 100% of your job. A humanoid may not automate 100% of your job. But it may automate just enough that there's now so much competition that,
Arjun (53:57.412)
if you are AI-forward, you can now do the job of a thousand plumbers. But otherwise, you may just get beaten by somebody else who can do the job of a thousand plumbers. Viewpoint number two, which I think is perhaps more interesting, goes back to
2016 and earlier, the DeepMind era of neural net architecture and philosophy. The thinking then was: these things are just statistics. Through statistics we can produce empirically definable outputs, but machine learning models will never be able to
solve intuitive problems. The litmus test for this was chess versus Go. We used AI to beat the world's best chess players, because chess is a relatively closed game: a computer can search the possibilities deeply enough to figure out which move gives the highest likelihood of winning. But we couldn't use computers to win at Go, because Go is a totally open-ended game. There's no way to simulate all the possibilities in Go, at least under current
computational constraints. So people said, well, Go is the litmus test of what machine learning models can't do, because there's a certain level of intuition involved in playing Go, a feel for what's going on with the game, before you can even articulate where it's going.
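[To make that scale difference concrete, here's a rough back-of-the-envelope sketch. The branching factors (~35 for chess, ~250 for Go) and game lengths (~80 and ~150 plies) are commonly cited approximations, used here purely for illustration.]

```python
# Rough comparison of chess vs. Go game-tree sizes.
# Game-tree size is approximately branching_factor ** plies;
# we report log10 so the numbers stay readable.
import math

def tree_size_log10(branching_factor: int, plies: int) -> float:
    """log10 of the approximate game-tree size."""
    return plies * math.log10(branching_factor)

chess = tree_size_log10(35, 80)   # ~35 legal moves per position, ~80 plies per game
go = tree_size_log10(250, 150)    # ~250 legal moves per position, ~150 plies per game

print(f"chess game tree: ~10^{chess:.0f} positions")  # ~10^124
print(f"go game tree:    ~10^{go:.0f} positions")     # ~10^360
```

Even granting far more compute, no brute-force search closes a gap of over two hundred orders of magnitude, which is why Go was treated as the test of machine intuition.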
And AlphaGo changed this. If you're listening to this, I really recommend reading about, watching, just looking up more information on AlphaGo. There's a really awesome YouTube video, I can't remember the title exactly, but it's an excerpt from a documentary about when AlphaGo beat one of the world's strongest Go players.
Arjun (55:54.932)
What's really interesting is that AlphaGo won by playing a move that everyone thought was ridiculous. Every person was stupefied by this move that no human would ever play. When AlphaGo played it, the other player was just
stunned. He was like, I don't understand, this is a move that a child would play. It doesn't make any sense. It's a stupid move, a bad move. He had been taking about ten minutes per turn, and when AlphaGo played that move, he ended up waiting over an hour, an hour and a half, something ridiculous, just sitting there thinking, what the hell do I even do here? And then AlphaGo won. I think what that taught
the world is that intuition is still part of the same cognitive process. I think what we define as intuition is this subconscious pattern matching that is really, really important, because it drives how we think and how we find patterns in things in our conscious state. But that subconscious pattern recognition is much more sophisticated. It's a much larger-parameter
model, basically this much, much larger synaptic network inside your brain that takes in way more inputs to find some overarching pattern, on things you can't necessarily describe and say, this is exactly why, but you have a gut feeling about why. And the thinking is that a sufficiently sophisticated LLM will at some point emulate that,
right? Because it is still the same kind of pattern matching. LLMs do have their own internal narrative, their own internal chain of reasoning. They have some things that seem and feel like intuition in ways we don't truly understand right now. So it's an open question.
David (57:54.2)
Right.
Yeah. Defining intuition as just the labeling of the outputs of what is fundamentally a ton more thought beneath the surface. I think that's perhaps a little bit scary, because then it collapses down to: we just need another LLM with more parameters. The only reason intuition seems like a strictly human thing is that we have very powerful brains and we prune what actually rises up into consciousness.
Arjun (58:14.128)
Yeah, but it's crazy.
David (58:24.848)
Just for the sake of our own sanity, because we can't have 10,000 thoughts becoming conscious all at the same time. So we suppress a lot of things. But AI models don't have that problem. They can have 10 billion thoughts happening all at once, and there's nothing wrong with that, simply because of the nature of what a model looks like.
Arjun (58:25.16)
Yeah.
Arjun (58:31.794)
Yeah.
Arjun (58:39.636)
Yeah.
Josh (58:40.657)
Hmm.
Arjun (58:44.04)
Yeah. I mean, there's a high thought here that I had somewhat recently. A lot of the discourse around LLMs right now is basically this identity crisis where people say, well, this is all next-token prediction, right? You're just training a statistical,
mathematical system on a bunch of data to predict what the next part of a word should be in a sentence. I ask a question, and in the response the model just predicts what the next word will be, and it continues doing this enough times that it eventually prints an output. So it's probabilistically finding the output it predicts should be correct.
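[A minimal sketch of the loop being described, just to make it concrete. The vocabulary, the toy "model," and its probabilities are illustrative assumptions, not any real LLM's API.]

```python
# Autoregressive generation: predict a distribution over the next
# token, sample one, append it, and repeat until a stop token.
import random

vocab = ["the", "cat", "sat", "on", "mat", "<eos>"]

def toy_model(context: list[str]) -> list[float]:
    """Stand-in for a trained model: returns one probability per vocab
    token given the context. A real LLM computes this with a neural
    network; here we just use a uniform distribution."""
    return [1 / len(vocab)] * len(vocab)

def generate(prompt: list[str], max_tokens: int = 20) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = toy_model(tokens)                           # predict next-token distribution
        next_tok = random.choices(vocab, weights=probs)[0]  # sample from it
        if next_tok == "<eos>":                             # stop token ends the response
            break
        tokens.append(next_tok)                             # feed it back in and repeat
    return tokens

print(" ".join(generate(["the", "cat"])))
```

Everything an LLM "says" comes out of this one loop; the open question is what is going on inside the model that produces the distribution.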
And I think that is fundamentally what's happening. But what's really interesting is that you end up with a lot of behaviors that seem to go much farther than this. This is the interesting thing about very complex systems: when a system is extremely complex,
it has emergent properties that are much greater than the sum of its parts. A really good example of this: ants are fundamentally very simple creatures. You can program the entire behavioral capacity of an ant in a few pages of code. But
ants in colonies exhibit behavioral patterns that are not part of their programming, far beyond what any single ant should have the cognitive capacity to do. For example, they work together to build bridges. That's insane. We observe this in nature all the time: individually they don't know how to do this, but ants together do it. That's an emergent property, much more than the sum of its parts, and we don't really understand why.
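[A classic toy illustration of this kind of emergence, under purely illustrative parameters: each simulated ant follows two local rules, wander a little and prefer cells with more pheromone, depositing pheromone as it walks. No ant "knows about" trails, yet trails emerge.]

```python
# Emergent trail formation from two local rules per ant:
#   1) usually move toward the neighboring cell with the most pheromone,
#   2) sometimes wander randomly; always deposit pheromone where you step.
import random

SIZE, ANTS, STEPS = 20, 30, 500
pheromone = [[0.0] * SIZE for _ in range(SIZE)]
ants = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(ANTS)]

def neighbors(x, y):
    """The 8 surrounding cells, wrapping around the grid edges."""
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

for _ in range(STEPS):
    moved = []
    for x, y in ants:
        opts = neighbors(x, y)
        if random.random() < 0.8:   # rule 1: follow the strongest pheromone
            nx, ny = max(opts, key=lambda p: pheromone[p[0]][p[1]])
        else:                       # rule 2: occasionally wander
            nx, ny = random.choice(opts)
        pheromone[nx][ny] += 1.0    # deposit pheromone as you walk
        moved.append((nx, ny))
    ants = moved
    # Evaporation, so stale trails fade instead of dominating forever.
    pheromone = [[0.98 * v for v in row] for row in pheromone]

# Crude map: '#' marks cells where strong trails have formed.
threshold = max(max(row) for row in pheromone) * 0.3
for row in pheromone:
    print("".join("#" if v > threshold else "." for v in row))
```

Nothing in either rule mentions a trail; the trail is a property of many simple agents interacting.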
Arjun (01:00:33.172)
The high thought here is: while it's true that this is all next-token prediction, there seem to be a lot of emergent properties that replicate behaviors that look like consciousness. Maybe not consciousness, but behaviors that look like opinions, like emotions, like "I am thinking deeply about this thing." And the high thought is:
what if intuition itself is also just next-token prediction? What if what we describe as intuition is something inside our brains that effectively works like an LLM, where we are just predicting outputs? Maybe not tokens exactly, maybe some arbitrary data structure, but
David (01:01:04.59)
Hmm.
Arjun (01:01:18.14)
we are predicting outputs. Right now, for instance, I am speaking stream of consciousness: none of this is something I was thinking about earlier. And when I'm speaking stream of consciousness, it's largely being served directly by my intuition, by my subconscious, into vocal form. So where is it coming from? That's the interesting question. The mental trick here is to ask: OK, can I predict what the next word will be inside my stream of consciousness? Then you're trying to
use your conscious mind to predict what your subconscious will say next, and that is very difficult. There are interesting questions around this for neuroscience and philosophy that we are going to have a lot of fun with over the next years, for sure.
David (01:02:00.206)
Mm-hmm.
David (01:02:05.634)
Well, Arjun, the reason we wanted to bring you on for this episode is that I think your tweet thread really operates as a kind of North Star, a manifesto, for the things we want to get done on this podcast. We want to be aware of the potential pitfalls, the possible dystopian futures. We want to prep for those things and understand them before they arrive. So we really appreciate you coming on and giving us a roadmap for the conversations we want to have, the things we need to be aware of, and the motivation for why we
need to do these things. I really appreciate you coming on and sharing all your insights with us, my man.
Arjun (01:02:40.252)
Of course, thank you for having me on. And like I said, if you're a listener: obviously my tweet thread was a little scary, and it wasn't necessarily intended to be. I kind of wrote it without really thinking about it. But I think this is important, right? It's important to learn about this, and to do so in a way where you're not panicking, but you are wary and conscious of the fact that, yes, this is the period of greatest change that humanity has ever experienced.
So yeah, thank you for this initiative too. I think it's super important.
David (01:03:13.964)
Yeah, and with that we'll have to come up with a new sign-off, because this is the Limitless podcast. This is still the frontier. It's still not for everyone, but we are still glad you are with us on the journey west into the unknown, which we are going to explore here on the Limitless podcast. So, limitless listener, thank you for joining us here today. Arjun, thank you as well.
Arjun (01:03:34.28)
Thanks so much.
Josh (01:03:35.986)
Thank you.
