The Intelligence Curse: How AGI Makes Us All Obsolete | Luke Drago & Rudolf

David (00:05.774)
I forgot to let you guys know we're going to be running this on probably the Bankless feed and also the Limitless feed, which is sort of our new AI-centric podcast feed. So I'll say Limitless, so you know what that is.

Luke Drago (00:18.094)
Sounds great. And also, before we start, do you know when you'll be publishing this?

David (00:19.704)
Yeah. We will, maybe we'll let you know at the end of the episode. Or do we know this, Josh?

Josh (00:25.888)
It's not set in stone, but it is tentatively scheduled for the 27th of this month. So to be determined, but yeah, probably no later than that. So about a week or two from now.

Luke Drago (00:32.881)
Okay, great, fantastic.

Sounds great. Yeah, awesome, let's do it.

David (00:42.382)
Luke and Rudolf, welcome to Limitless.

Luke Drago (00:45.786)
We're happy to be here, thank you.

David (00:47.93)
Alright guys, let's just start with a big question. You wrote this essay, it's called The Intelligence Curse. What is the Intelligence Curse?

Luke Drago (00:56.09)
So the intelligence curse, pretty briefly defined, is the set of incentives that we might get when we unlock artificial general intelligence. And we're using the OpenAI definition here, which is the ability to automate most or all human labor. And in that world, we're really concerned that governments, powerful actors, corporations won't have the incentive to care about regular people. That doesn't mean they're guaranteed not to, but the strong economic incentive that we've had for basically all of human history, where powerful

actors have needed regular people and so there's an exchange of goods and benefits, would be gone. If you sever that, you're relying a whole lot on goodwill, and we think that's not nearly as stable or strong an arrangement. It mirrors the kind of patterns you see in economies like those that are afflicted with the resource curse.

David (01:40.588)
Are regular people just everybody? Just, you know, you, me, white collar workers, people in developing countries, people in very developed countries? Who are the regular people of whom you speak?

Luke Drago (01:52.824)
Yeah, so I think we really do mean everyone here, in particular everyone who does not have capital, whether physical capital or financial capital, that is then very dependent on AI. People often talk about white collar workers, but there are also people in developing countries, of course, whose existence we shouldn't ignore. So it really does mean everyone. Right now the state of things is that everyone can contribute economically, so states and companies have an incentive to care about everyone. But if all of everyone's labor is replaced, then this is less true.

Josh (02:21.556)
So I'm really curious to ask you guys, why now? What did you see that sparked the interest to actually make this post? Because we've read through it and it was very, very thoughtful, very pragmatic. But what was it that sparked this now versus a year ago or a year in the future? Was this the exact right time, or do you think this thesis kind of changes as we progress over the next few years?

Luke Drago (02:41.732)
So I think for us, just for some history of how we got here.

I think shortly after o3 was announced and it looked like timelines are getting pretty short, AGI doesn't look 20 years away or 50 years away, it might look five years away or even sooner than that, we were having a series of conversations. We worked in the same office building at the time. And I had this observation about the resource curse and how this looks somewhat similar to those patterns. And Rudolf had separately been working on this draft of an essay, which is now in the full series, called Capital, AGI, and Human Ambition.

David (03:14.553)
Hmm.

Luke Drago (03:15.248)
We published earlier versions of those essays back in January, and they didn't propose any sort of solution or way to try to stop this problem. And we spent the next three or four months banging our heads against the wall trying to figure out, well, what can we do about this? What does it look like to actually solve this problem? A whole lot of the related essays in the piece that aren't squarely focused on the solution were essays that we were kind of writing and taking notes on as we got what we thought was closer and closer. And in the last couple of weeks, I think we just thought we had enough. And so we went ahead and

hit the publish button. We are big fans of shipping in public. Yeah, I think also to mention on the timelines, there's something about how, when AGI is far away, it feels very much like a technical problem. I think for a long time, a lot of the people who take AGI seriously have been thinking about it purely through the technical lens, thinking about the systems. And just as it draws nearer, you start realizing that it's not just going to be a technical thing, it's actually going to interact with the rest of society, with the real world. It's going to have very concrete effects, and maybe we should actually think about those.

Josh (03:46.388)
I love that,

Josh (04:14.522)
Yeah, it seems important. It's funny because at Limitless, we're very optimistic about the future. We are very excited about the pro-tech, pro-AI future. And then Ryan surfaced this document with me, your post. And I was reading through it and it kind of hurt a little bit. I was like, this isn't the future that I'm super excited about, but it was very thoughtful and very pragmatic. And I wanted to ask you guys, why did you define this as the intelligence curse? Why is intelligence not a blessing?

Luke Drago (04:41.016)
So I think the name just strictly comes from the comparison to the resource curse. And while we don't root the entire analysis in the resource curse, it's a very helpful example to really conceptualize: what is a similar environment that has similar incentives, and what are the outcomes there? The initial observation was that this looks a whole lot like the resource curse in development economics. And so that's kind of where the name came from. There's this huge wealth of existing literature and debates, and we didn't want to center the entire argument on that, but we did think the name seemed quite relevant here. And I think in particular, why it might be a curse as opposed to a blessing

depends on how it gets deployed: if it's a centralizing or decentralizing technology, if it accumulates power in the hands of the few or distributes power out to many, many people. And so I think it can be a curse, and we set out the scenario in which it could be a curse, but we also offer the world in which it could be a blessing.

David (05:27.269)
There's so much to unpack here, really, and some of our history at Limitless comes from crypto as well, right? And I know in this series of essays you referenced Vitalik Buterin's work on decentralized, defensive accelerationism. We've had him on the podcast to actually talk about that. And that might indeed be one of the ways out. When you start to talk about decentralization, that very much does seem like maybe our main defense

against the centralizing effect of this. But I don't want to project this too far forward into the solution. There's so much to unpack here, so much to go through, and we'll do it kind of sequentially, essay by essay. But one thing I do want to get to: we've dropped this phrase, the resource curse, several times so far without defining it. Luke, could you just define what the resource curse actually is? I believe this pertains to countries and how endowed they are with

Luke Drago (05:58.97)
Yeah

David (06:23.255)
maybe natural resources. Tell us about the resource curse and why it's basically a meme for the name of this essay.

Luke Drago (06:31.16)
Yeah, so I'll stress first that it's not the sole piece of evidence we rest on. It's very much an example or an analogy.

But the resource curse, succinctly put, is the tendency for countries that have lots of natural resources to oftentimes end up with actually worse conditions, instead of very rich or wealthy citizens. And there are a lot of different explanatory mechanisms for why that can happen. But one of them that I think is pretty prominent in the literature is that if you have oil in the ground, and all it takes for your state to get really wealthy is to get oil out of the ground, onto the roads and onto the ports, then your incentives are not to build this really complex economy; your incentives are to make as much money off of oil as you can.

It doesn't require a whole lot of people to make money off of oil. It might require workers to actually extract the resource, get it out to the ports, and sell it, but that's far fewer people involved in an economy than, let's say, a more developed, advanced economy like the United States, where there are lots of moving parts. Now, there are a lot of different ways the resource curse ends up, but for a whole lot of countries, particularly those that don't have really strong institutions, the resource curse ends up in pretty terrible poverty. There are ways out, and we can talk later about what those ways out are and what

analogies we could be looking to for solutions. But the core thing here is that you either want a diversified economy or you want institutions that can withstand the curse.

David (07:48.94)
And examples of that are just countries in the Middle East, maybe: they're oil rich, and they really haven't developed their, I guess, civil liberties, or kind of the labor economies of their citizens, right? Or maybe a country like Russia, which is kind of in the grip of authoritarian, totalitarian power, and has kind of devolved into plutocracy.

I suppose that's what you mean by the resource curse. Now, we also have counterexamples, maybe like Norway, which is very well endowed; there's a lot of energy there. Canada might be another example. I mean, they seem to be doing fairly well with liberal democracy. But the counterintuitive thing here, and why you're labeling this the intelligence curse, is you would think that more resources equals, like, better. More resources equals better for everybody. And

Luke Drago (08:17.87)
Yeah.

Luke Drago (08:38.072)
Yeah.

David (08:41.113)
It turns out that's actually not the case for nation states when it comes to natural resources. Sometimes more resources actually lead to an incentive structure that makes things worse for the population. And that could be the same with the intelligence curse.

Josh (08:54.74)
Hm.

Luke Drago (08:58.234)
Yeah, that's what we're saying. And we also think that there are a lot of signs of hope there. We talk a lot about Norway, and we talk a bit about Oman as well, as two examples of states that broke the curse and what we can learn from those. But yeah, I mean, states like the Democratic Republic of the Congo, for example, or Nigeria, just really have tons of resources and yet their people are very poor. And the question is, well, what are the incentives that are creating this outcome?

David (09:19.481)
Okay, so now that we've got kind of the gist of it, let's flesh out this argument in a lot more detail. And you have basically a series of essays with different sections on this, but when I was kind of looking at the high level thesis, it feels like you're playing with a few premises, like maybe three in my mind. Like one is that AGI is the only game worth playing. There's a famous essay titled this as well, but basically AGI accrues incredible capability and power.

And as you said, this could be like on the near term horizon. We're talking about years, maybe five years for instance. So that's like the first premise you sort of have to believe. The second is that AI will replace humans for valuable economic labor. And we're gonna flesh that out in a second. And as a result of that second premise, the third kind of, I guess, idea here is that powerful actors, these would be like nation states and companies, they no longer have an incentive to care about the regular people.

As you said, why? Because the regular people used to be their economic engine and their labor. But now with AGI, the regular people aren't providing utility. So do we need these welfare states? Do we need these social structures? Do we need civil liberties? Okay, so that's the base idea we're gonna flesh out. And it begins here, which is this concept of pyramid replacement. I want you to sharpen this mental model. So AIs, this idea that AIs will replace humans for all...

valuable labor. And I'm showing on the screen a picture of a corporation. I think this is a typical company; you know, companies are arranged in hierarchies. At the base of the pyramid you have your entry-level employees, and at the very top you have the executives, you have the C-suite. So can you describe what this pyramid actually is in the typical corporation, and what you see AIs doing to this pyramid?

Luke Drago (11:09.208)
Yeah, so basically...

So there is currently this hierarchical structure in companies. And it's actually not obvious from first principles which end of the pyramid AI will start automating first. But empirically, it seems like AIs are getting good at tasks that have short time horizons, where the task is completed quickly and then you move on to the next thing, and getting better at longer-time-horizon tasks more slowly. And there's also the social fact that the C-suite is less likely to unemploy themselves than to unemploy other people. And the easiest people to unemploy are the entry-level employees, because you don't need to fire

David (11:36.985)
Yes.

Josh (11:37.844)
That checks out.

Luke Drago (11:41.004)
anyone, you just stop hiring. And that's why we think the first step in automation, something that might already be happening in software companies, is with the entry-level employees. Instead of hiring more and more of them into the company, instead of giving the senior developers at a software company an entry-level intern or something, you just give that senior developer Cursor, and they code with Cursor or some other AI coding tool, and they don't need the entry-level employees anymore.

David (11:43.02)
Yes.

Josh (12:04.158)
So Ryan, if you don't mind scrolling down just a little bit, what I loved about this section was kind of the visual that you guys created, which showed this pyramid and the pyramid is blue and that means it's all humans. And then as AI starts to roll out, it starts to absorb the entry level employees. And then as it goes to junior and as it goes to middle management, it slowly absorbs the bottom layers until eventually we're just left with the C suite on top and then nothing. And then everything gets absorbed to AI. So I guess the... Yeah.

David (12:28.417)
It's just literally like one big AI red block, right? The pyramid becomes just like this AI Borg machine or something.

Josh (12:35.54)
It's no longer a pyramid, it's a square.

Luke Drago (12:36.27)
So what's funny is this is one of the things we changed from the original essay. I like that; I might steal it. I think I still have the rough draft published on my blog, but originally the pyramid just kept getting smaller and nothing was replacing it. You get to this last slide and it's just blank. We had an outline of a square that looked like it was just part of the picture the whole time, and then at this point, there's just nothing. I think I wrote that the org chart goes blank. And Rudolf had the idea that maybe we should just show people

David (12:37.975)
Yeah, it's a square.

Josh (12:50.472)
Mm-hmm.

Luke Drago (13:04.866)
an entire automated company, actually show that it's not just that people are going away, but that AI is rapidly filling those functions. So now you've got the visual in front of you. And there's also something here where we don't want to imply that when the AI takes over, in the future company, for every human employee there will now be one AI agent that matches one to one with each original human employee. I think the optimal way to structure AI-based companies will probably look a bit different from the current thing where you stack humans into a pyramid. And therefore the AI, you know, we represented with this square thing, which

Josh (13:25.812)
Mm-hmm.

Luke Drago (13:34.8)
is like a blob of AI compute around a sort of shrinking core of humans that are providing direction.

Josh (13:41.332)
And this is what I was curious to ask about, because currently, in the world of AI, I feel like I am a leveraged human when I use it, where I'm capable of X and then, because of AI, I'm capable of Y. And I guess the question to you is, will humans not just get better jobs? You could imagine stacking the pyramid on top of the AI, where now we have this foundation that provides a lot of leverage for entry-level employees all the way up to CEOs,

but the productive output that's unlocked as a result of that leverage creates new and interesting problems for them to tackle. So would that not be the case where we become hyper-leveraged humans while removing some of the workforce but not all of it?

David (14:16.793)
Yeah, why can't this be a box with like a pointy hat, you know, a pointy pyramid hat on top?

Luke Drago (14:20.506)
Well, what's funny is, if you go one image up,

you'll actually find that box with the nice pointy hat on top. I guess the real question is about the way we currently structure our major white collar companies, these big mega-conglomerates, companies of a couple hundred thousand people, or 10,000 people. And if you look at, one, the success stories, and two, the existing statements CEOs are making: on the success story side, Cursor has demonstrated that you can be a multi-billion dollar company bringing in tons of money with only a couple of people there. And if the general advice is you should only

Josh (14:26.749)
they're C-Sweet, Nice.

David (14:28.375)
Hahaha

Luke Drago (14:53.948)
hire as many people as you need to run the organization, I'm not sure why Cursor would then hire 50,000 additional people. I'm not sure that would actually buy them additional runway right now. But I think maybe what's more important anecdotally, and then Rudolf, I'll hand off to you for the more systematic argument, is that the Duolingo CEO has now said they're an AI-first company, and this means they're going to ask, for every role they hire, every contract position that they have, whether or not they can automate it first. I think we have at the bottom some links to other companies who've also made similar statements. I can't recall all of them off the top of my head, but, I mean, the general ethos that we're hearing

David (15:20.185)
Hmm.

Luke Drago (15:23.918)
right now, as this is kicking off, is: what we really want to be doing here is being more efficient, being leaner. Rudolf, I hand it off to you for the more systematic argument. Yeah. And I guess there is some hope in that humans currently have this advantage in long-horizon tasks. Basically, we know how to train AIs to do tasks for which there's a large dataset, or where we can build a digital environment, a reinforcement learning environment, where the correct behavior is rewarded. And this works for things like writing, where there's a lot of data on it, and you can just train it to write like the average intern pretty well,

and it works for things like math or code, where it's easy to verify whether something is correct. But it's harder to train an AI to be the CEO, because the CEO interacts with the real world and takes a lot of actions. We're currently just less good at getting AI to be good at this stuff. So I think this state where the AI uplifts the humans will continue for a while, probably longer than some of the most aggressive AI projections estimate. And I think there is hope that we can extend this period during which humans are mostly just uplifted by the AI. And this would be very good for human agency and the ability

of humans to effect change in the world. I think right now we are definitely in this regime. But then, in the limit, there's no theoretical reason why the AI can't also get good at the long-term planning. That's what the AGI labs are trying to crack right now. And at some point the board will come in and the board will be like: look, you're the CEO, you have a nice job, but I'm sorry, it looks like GPT-9 is starting to get better at making decisions than you are, and we're responsible to shareholders, and I'm very sorry, you've done a good job, but now we're going to have to lay you off.
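A toy, self-contained sketch of the verifiable-reward idea gestured at above: a checker scores each attempt automatically, with no human judge, and behavior that scores gets reinforced. All names here are hypothetical scaffolding, not any lab's actual training stack.

```python
# Toy illustration of training against a verifiable reward.
# Tasks like math or code are easy to train on because a checker
# can score each attempt automatically. Hypothetical scaffolding only.
import random

def verifier(answer: int, expected: int) -> float:
    """Automatic reward: 1.0 if the answer checks out, else 0.0."""
    return 1.0 if answer == expected else 0.0

class ToyPolicy:
    """Stands in for a model: samples an answer, reinforces what scored."""
    def __init__(self, choices):
        self.weights = {c: 1.0 for c in choices}

    def generate(self) -> int:
        return random.choices(list(self.weights),
                              weights=list(self.weights.values()))[0]

    def reinforce(self, answer: int, reward: float) -> None:
        self.weights[answer] += reward  # upweight rewarded behavior

policy = ToyPolicy(choices=[1, 2, 3, 4])
for _ in range(500):
    attempt = policy.generate()
    policy.reinforce(attempt, verifier(attempt, expected=3))

print(policy.weights)  # weight concentrates on the verifiable answer, 3
```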

Josh (16:41.138)
Nice.

Josh (16:45.748)
huh.

David (16:48.71)
Can we start there? Can we start at the top? That would be kind of nice for a change. Are we able to do that?

Josh (16:51.092)
Yeah

Luke Drago (16:52.75)
You know, I've gotten a couple of those reactions. And I think the way that we're most likely to be wrong in this model, and I want to be as epistemically rigorous as I can be, is that the middle gets cut first. It could be the case that there are entry-level roles where we just really need a whole lot of people doing the basic work, and management becomes dramatically easier to automate. But I think the evidence really points towards the former, that we're getting this bottom-up pattern of automation as opposed to this middle-out pattern right now. And I think the reason for that is quite simple. If it costs you $50,000,

David (16:59.609)
Yeah.

Sure. Okay.

Luke Drago (17:22.524)
or $40,000, to hire a person to do something every year, but it would cost you $10,000 in compute to do the exact same task, it's really hard to justify the additional $40,000. And sure, Mike's a great analyst and you go golfing with him on the weekend, but that's $40,000, and you can still go off golfing with Mike whether or not you work with him. And so I think a lot of companies, whether it's a downturn or whether they just want to save some money, are faced with that question.
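A toy back-of-the-envelope for the substitution logic above. The numbers are just the illustrative ones from the conversation, and the function is a hypothetical sketch, not a real hiring model:

```python
# Back-of-the-envelope: is AI output per dollar better than a human hire?
# Illustrative numbers from the conversation; not real cost data.

def cheaper_to_automate(human_cost: float, compute_cost: float,
                        relative_quality: float = 1.0) -> bool:
    """True if the AI delivers more output per dollar than the human.

    relative_quality: AI output as a fraction of the human's output
    (1.0 means it does the exact same task equally well).
    """
    return relative_quality / compute_cost > 1.0 / human_cost

# A $50,000/year analyst vs. $10,000/year of compute at equal quality:
print(cheaper_to_automate(50_000, 10_000))        # True
# Even at a quarter of the human's quality, the AI still wins on cost:
print(cheaper_to_automate(50_000, 10_000, 0.25))  # True
# At a tenth of the quality, the human is the better buy again:
print(cheaper_to_automate(50_000, 10_000, 0.10))  # False
```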

David (17:46.04)
Luke, that was spoken like a member of the C-suite. Let me tell you: sorry, Mike, we can golf, but you can no longer work here. So I guess you guys see it maybe emerging right now as sort of bottom-up, where, you know, entry-level programmers are kind of the first to go, or support teams, customer support, something like that, and it works its way upward.

Luke Drago (17:49.978)
I actually can't play golf for what it's worth. I really can't play golf. Yeah. Yeah.

Josh (17:57.364)
You

David (18:10.489)
But you're also agnostic in this model as to whether it's kind of middle-out or even top-down, or whether it's bottom-up. Correct me if I'm wrong, but I believe this is particular to white collar jobs, yes? So is it part of the thesis that the information, knowledge-worker class is going to be the first to go, because our robotics technology hasn't quite caught up to our software LLMs?

Luke Drago (18:21.338)
Yes.

Luke Drago (18:36.386)
Yeah, I think that's the default. I think right now, at least, LLMs are advancing faster than robotics. And this creates an interesting possibility. I think Carl Shulman talks about this idea that we might have a period where humans are valuable not for their brains, but for their hands. So maybe we get this robotics gap. Yeah, so to make this concrete: imagine your job is just assembling widgets in a factory, but you have an earpiece where the AI is giving you instructions, and it gives you motivational quotes from time to time to keep you on task or something.

David (18:49.849)
That sounds, that sounds worse.

David (18:59.779)
Yeah.

Luke Drago (19:06.3)
You don't have to do any thinking, because the AIs are better at all the thinking. Maybe this is the future. However, don't worry: maybe we also fix the robotics, and we get robots quickly, and then you can't do the widgets either. You're just fully unemployed. So there are many possibilities here.

David (19:20.377)
Okay, so that's the concept here of pyramid replacement. Let's do some pushback, though, some objections to this. So one is, I guess, the Tyler Cowen pushback argument, at least the one I'm familiar with, where he's basically like: you know, there are diffusion barriers. And we've certainly seen this coming from crypto. So, like,

Luke Drago (19:26.138)
Please.

David (19:42.362)
crypto could replace the entire world's money system, but guess what? There are actually regulators who kind of don't want that to happen, right? There are institutions, there are structures, there are all sorts of brakes in society, meatspace, government, that just slow things down. It's kind of the human piece of it. And so you might have this technology in a box, the geniuses in a data center, whatever, but it might not diffuse through society, because society has all of these brakes, you know, big brakes in meatspace.

And so will that kind of slow this down? I mean, the general idea is we can adapt better if this happens very slowly versus if this happens in a period of, like, months to years. What do you think about that diffusion argument?

Luke Drago (20:28.388)
So one, I think a lot of our solutions section, Breaking the Intelligence Curse, is really at its core an argument to try to extend the augmentation window so that we get more time to adapt. And I think if you look at the way the pyramid replacement flows, we argue it happens pretty slowly; it's a bottom-up approach. I don't think we give an exact time horizon, because it's really hard to predict. But I mean, I think if AGI hits in 2027, most people are still employed in 2028. The question really is how fast it goes after that moment. And there are a couple of reasons why you should expect companies would want to speed up pretty

quickly. Maybe, for example, they don't do the automation, but a competitor does and starts moving faster, so now there's a competitive pressure to automate. In the same way, maybe a state doesn't want to acquire a certain weapons capability, but of course another state has acquired that capability, and now you're in a race to get to the top. Or maybe there's an economic downturn, and this forces cost-cutting everywhere, and you do the layoffs and discover that you're actually at equal productivity, or maybe even faster, when you automate that work away. So there are a lot of diffusion barriers. We do not think that six months

after AGI everyone's unemployed. But it's also important to note that diffusion barriers have acceleratory pressures pushing against them. If you have this kind of technology and there are strong reasons to adopt it, if investors are hyping it up, if people are seeing it work in the real world, it really is only a matter of time before critical mass starts to emerge and the way that we work is fundamentally changed.

David (21:46.978)
Okay. And there have been other points about, you know, AI being very jagged right now, right? Where for some things you look at its output and you're like, my God, you are so dumb, I could do this, and for other things you're like, wow, this is incredible. And so this could happen in a jagged way, I guess. I feel like we've established, then, that there is the possibility, if we get this kind of acceleration towards some sort of AGI, that AIs have the capability to replace the corporate

human pyramid. And I guess, since corporations, companies, are the economic engine of basically all societies, that's effectively what we're describing: replacing the economic engine of these societies. Let's move to kind of the second essay and the second piece of this intelligence curse, where we start to talk about capital. All right, so now we've got a world where AIs have maybe started to erode, replace,

the human labor pyramids of our corporations. They're doing the work. And so I think you're making an argument here that the power which was in the hands of labor will begin to shift. Of course, capital always has power, but a large portion of power in society has belonged to labor, because humans are valuable. And this is almost like a startling revelation: that the AIs might make non-human factors of production more important than the human ones, particularly

capital. Can you develop some intuition for that for us?

Luke Drago (23:15.17)
Yeah, so first, I think it's worth clarifying what capital means when economists talk about it. It means money, but it also means stuff like physical factories. Economists talk about factors of production: land, labor, capital, management. And capital here is a bucket that includes things like factories, GPUs, and also just cash on hand. And I think...

David (23:33.955)
Does it include, like, energy too, Rudolf?

Luke Drago (23:37.25)
Yeah, I think economists would call energy a type of capital, because it's a non-human factor of production. It's not land, which works a bit differently, and it's not management, which for our purposes is kind of like labor, because both involve humans, actually. And then the basic point here is that right now the economy needs a huge amount of human input. You add more human input on the margin and the economy goes up, and therefore the marginal unit of human labor is compensated pretty highly, at least compared to the

David (23:40.707)
Okay.

Luke Drago (24:07.164)
historical precedent. And I think you can see this historically: before the Industrial Revolution, the human factor, human capital, for instance education, skills, stuff like this, was less important, because there was less technology to wield and there were fewer complicated processes, and so the amount of power that human labor had was lower. So that's the general argument. And in this essay in particular, we talked a lot about the point that you can start substituting capital for labor more effectively

than you can right now. Right now, for instance, if you're trying to hire talented people, that's actually a big bottleneck on your ability to convert money into results in the real world. And this will go away if you can just use money to buy credits from OpenAI to spend on tokens that replace the talent. So right now there's a lot of complexity and friction in converting money into real-world results, but this will go down a lot once you can acquire real-world results by just spending money on the AIs.

David (24:54.635)
Hmm.

David (25:06.157)
The tokens become your workforce essentially.

Luke Drago (25:07.004)
Yeah.

Josh (25:10.216)
This was an important element that I didn't really realize until reading this: there's this difference between general capital and human capital, the actual labor workforce, and tokenizing the labor workforce seems a little scary. So I'm curious to get your takes on the way this rolls out over time, in maybe best-case to worst-case scenarios. What happens as humans get replaced by tokens, and as

we reduce our workforce incrementally? Does that happen quickly? Does that happen slowly? And what are the second-order effects downstream of that?

Luke Drago (25:46.074)
Yeah, so let me say a bit about the second-order effects here. One of these is the thing I already mentioned: if you have a bunch of money but you want results in the real world, you're still bottlenecked. You need to identify talent, you need to hire talent. There's a lot of friction here. Another is that a lot of social mobility today is based on this: you are a talented human and you don't have capital, but you can go out in the world and do something, and people with capital have to pay attention to you. You're a nimble startup founder or something, and VCs, who have a lot of

capital, need you, so they will give you money, stuff like this.

David (26:19.181)
I mean, another word for that is the American dream, right?

Josh (26:21.652)
Yeah.

Luke Drago (26:21.848)
Yeah, it's what we're all told, right? I live in London now, but I grew up in the States, and we're all told from a very young age: if you work really hard, if you do well in school, if you go to the right college, you will have a shot at the American dream. And the American dream looks like accruing enough capital to be able to own things, accruing enough capital to be able to make it and have a nice life and fundamentally change your social position. And I think a lot of the argument here is that, provided capital can be entirely substituted for labor, because you can just sub in an AI,

your ability to walk your way up the social hierarchy just gets a whole lot harder, and maybe gets eliminated. And then, as a society, I think a lot of social progress and change depends on this thing where someone who is not currently incentivized to care about the status quo comes from the outside and shifts things. And if you lose this ability to have social mobility, it's not just bad for individuals; it makes society more static as a whole.

David (27:16.697)
Okay, so there's this idea that if capital becomes a substitute, a general substitute, for labor, which you can imagine if pyramid replacement is true. Basically, what pyramid replacement means is: instead of paying the humans, a labor force, I can just pay the OpenAI APIs and do this through tokens, pay the geniuses in the data center, and that's my labor force. So I can just take my capital, which is

my assets, right, my money, and instead of putting my money into the slot machine of human labor, I just put it into the slot machine of AI. And what you're saying is this kind of destroys social mobility. Josh was just asking about the best-case to worst-case scenarios, and I think maybe these essays are really focusing on the curse side of things and less on the blessing side. So one could imagine some blessing.

But you talk about, I guess, one of the worst case scenarios, though in a way maybe one of the better worst case scenarios: this permanent caste system where we're all kind of locked into the capital ledger that we're born into. So maybe if you're born into a nation that has really embraced AI, and, I don't know, your father worked at OpenAI or was in the industry or was early, right,

and really was hooked up to this spigot of capital, that's your caste; you're kind of locked in. It almost sounds sort of feudal in that way. I mean, not having lived in a strict caste society, and certainly embracing the idea of meritocracy, maybe that all fades away is what you're saying. We're permanently cast into these capital ledgers.

Luke Drago (29:10.136)
Yeah, and I think it's worth noting that social mobility before the Industrial Revolution was very low. And I think social mobility depends on this thing where human talent matters and also the economy is growing, stuff like this. Before the Industrial Revolution, if you were rich, probably at some point in the past your ancestors did something cool and the king gave them a bunch of land and made them aristocrats or something. But then you have the Industrial Revolution, human talent really matters, and social mobility is possible by going out and inventing things, pushing science, pushing industry, stuff like this. But then,

you know, maybe we'll keep having technological progress, maybe the amount of abundance in society will go up, but even then you've lost this element where new people can enter the elite, if AI is a substitute for elite human capital.

David (29:51.64)
Okay, but that's the thing that's counterintuitive, or what I'm wondering about in the argument. So we got the Industrial Revolution, which is sort of machines replacing some human physical labor, and you're saying that was actually good for the humans, basically. Why does it not follow that an intelligence revolution is also just good for the humans?

Luke Drago (30:12.046)
Well, I think the most important differentiating factor for humans as a species is our brains, especially when freed up from physical labor. We're better than some animals at physical tasks and worse than other animals at physical tasks, and having the thumb is a pretty great advantage for using tools. But at the end of the day, it seems like the single best advantage that people have is that they can think of new things and execute on them. And so post-industrial societies get these really complex information economies that spend a whole lot of time both producing lots of physical abundance in the real world and, with those resources, using our brains to come up

with even more abundance and more ideas. And you can see this not just in modern economies versus old economies. You can see it today between diversified economies and those more resource-curse-afflicted states. Social mobility is lower there, because non-human factors of production mean that your ability to have some huge idea and make an outsized impact is also quite limited. Capital begets capital. This is true in every society, but your ability to have outlier talent succeed is less if you don't need outlier talent in the first place to make money.

A quick econ thing: the question is whether AI is a substitute for or a complement to human labor. The thing that sets wages is basically: when you add one additional marginal unit of labor, how much are the returns? Pre-Industrial Revolution, an additional unit of labor is another peasant farmer; the returns are not very much. Post-Industrial Revolution, you have an additional unit of labor, but they command a lot of machines, a lot of capital. They actually boost the economy a lot, so they get high wages. But if all the labor is done by AIs, you've got this

total substitution of humans, and then with an additional unit of human labor, output does not change. Human wages are very low.
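One standard way to write down that wage logic in textbook notation (our framing here, not the essay's own formulas):

```latex
% Wage = marginal product of labor. Let output be
%   Y = F(K, L_h + L_a),
% where human labor L_h and AI labor L_a are perfect substitutes,
% and one unit of AI labor can be bought for a compute price p.
\[
  w \;=\; \frac{\partial F}{\partial L}
  \qquad \text{(competitive wage = marginal product of labor)}
\]
\[
  w \;\le\; p
  \qquad \text{(no one pays a human more than the AI substitute costs)}
\]
% As compute gets cheaper, p falls, and the human wage is dragged
% down with it, no matter how large total output Y becomes.
```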

David (31:47.482)
In this model, who are you saying owns the capital, right? Capital is sort of, it's property rights; somebody's got to own it. In this model, do the humans still own it? Does it kind of consolidate to the tech companies, or do the AIs own it? How sci-fi are you getting with this?

Luke Drago (31:58.778)
Hopefully.

Luke Drago (32:05.304)
So we call for a ban on AI ownership of property and on AIs being CEOs. So we're willing to plug that: we do call for it, we do. I don't know how likely that scenario is, but I don't want to preclude it. And we went ahead and said, well, it's a pretty cheap ban to do, right? It's not that hard to ban it right now; maybe it's way harder in the future when you've already delegated lots of authority. I think existing law today, at least in most countries, probably does prevent this outcome anyway, but it's worth making that explicit. But even if it's people who own the capital, it really matters how many of them there are.

David (32:09.559)
Really?

Josh (32:10.502)
Wow.

David (32:14.711)
Okay.

Yeah.

Josh (32:19.965)
Mm-hmm.

Luke Drago (32:35.258)
most people in modern economies don't own a whole lot. That's not necessarily a bad thing today, because of course your labor is a very powerful thing to trade, and in many cases you might own way more than you would have in previous societies. But it's not the same as being able to own, you know, the kind of capital you might need to command many, many AI agents that are replacing lots and lots of labor. And I think, Rudolf, you've thought a lot about the ways this creates a more static dynamic, where, as you mentioned earlier, this could lock people into their existing positions pre-intelligence explosion.

Yeah.

David (33:07.993)
Okay, so with this idea that capital can now buy labor, so human labor is no longer necessary, there's sort of another implication here, which is that classical liberalism starts to fall apart. And so, you know, in post-feudal societies, post-Enlightenment, we have generally experienced, not in all places, not in all countries, not in all regions, of course, but generally,

that it's led to better human outcomes, right? Quality of life, life expectancy in general, wealth, freedoms, the whole concept of humanism. We've ended terrible practices for humans, like chattel slavery, at least in most places. So we kind of pat ourselves on the back and we think, wow, we've really advanced. We've gotten some better moral software and we've clearly evolved.

I think what you're arguing, though, is that there's a more utilitarian perspective on this, which is, and maybe you could sharpen this argument for me, that nation states gave labor these rights, gave citizens these rights, because they were so damn useful. They gave the citizens these rights because they needed to attract the labor pools and the brains to develop their economies. And if humans become

less necessary for, say, nation states, and we've already demonstrated how they may be less necessary for corporations, then that entire, I guess, social construct starts to fade out. Can you sharpen that intuition for us?

Luke Drago (34:53.339)
yeah, I guess, so...

I think it's definitely true that there's a lot of institutional inertia, in the sense that right now, if you live in a society that really values humans and cares about humans and politically might be willing to introduce a UBI or stuff like this, or universal basic compute or whatever, then there's a strong chance that this society has a lot of inertia in this direction. But societies don't exist in a vacuum; they compete with each other. There's a sense in which, for instance, of all the countries in Europe, Britain was doing the most to be compatible with industrialization.

They were quite politically advanced for their time. They had quite a lot of freedoms. They were good at encouraging industry, stuff like this. And as a result, Britain becomes the preeminent power. And there's this thing where there are a lot of societies, a lot of countries in the world, and they're in competition with each other. So it's not sufficient that one society makes this choice and continues on its own; it also matters which strategy wins overall in the world. It's not clear to me if this dynamic is bottom-up or top-down. Is it that states gave these rights knowing it would attract better competition, or that workers, or people

who held capital, had more power than the state and were able to demand them? I think about the Magna Carta in Britain, for example, the foundational document for the concept of modern democracy, where the landed gentry, people with lots of property, weren't necessarily as powerful as the king, but in many cases controlled the factors of production that created wealth for that king. And so this put them in a position where they could make a whole lot of demands upon the king. You see in the evolution of British democracy that it first starts with this

landed gentry class. And here in America, where voting rights begin, the idea of a self-determined government doesn't start with everyone being involved. It starts with these diffuse property-owning men who, because of that position, had some sort of diffuse, self-reliant power. There's this Charlie Munger quote that I think was at the top of the original intelligence curse post on my blog: show me the incentive and I'll show you the outcome. And I don't think it's the case that cultural evolution plays no role here. I think it's quite important.

David (36:52.537)
Mm-hmm.

Luke Drago (36:56.924)
But it's also worth asking: what is the role that economic incentives play in cultural evolution, and how strong are those incentives? And so I think, in the limit, these incentives are probably the dominating force here.

David (37:06.969)
There's this quote, I think, in this part of the essay where you say: a classical liberal today can credibly claim that the arc of history really does bend towards freedom and plenty for all, not out of the benevolence of the state, but because of the incentives of capitalism and geopolitics. But after labor-replacing AI, this will no longer be true. Wow. So,

Josh (37:24.34)
Hmm.

David (37:29.847)
potentially classical liberalism is on the line here. Let's get to the next essay. So we've talked about capital and its importance, and how that could be the dominant feature of a post-AGI type society. So let's draw some more implications for what that means for, I guess, the nation-state's relationship with its citizens. This is the heart of maybe the intelligence curse. This is where the curse starts to come down on us even stronger.

Josh (37:32.66)
you

David (37:59.332)
And the summary is this, with AGI, powerful actors will lose their incentive to invest in regular people, just as resource-rich states today neglect their citizens because their wealth comes from natural resources rather than taxing human labor. And this is the intelligence curse. Why do powerful actors like nation states invest in their people today?

Luke Drago (38:24.75)
Well, think about it. If I'm a government right now and I want to make a lot of money...

Maybe I want to make money for a variety of reasons. Maybe I'm altruistic and I want to provide better care for my citizens. Maybe I'm self-interested and I want my state to do well. There are a host of reasons why you might want this, because money gets you power. But in order to get that right now, you can do a couple of things. One, you can try to find some sort of resource, but maybe you don't have it. Two, you can try to increase your return on investment. And given that most economies right now, developed economies, diverse economies, really flow through

people and their labor and their work, you can up that return by doing a couple of things. You can really increase the quality of education. You can build infrastructure, like roads and public transportation, which helps get investment flowing into areas. You can build really reliable governance systems to encourage investment. You can foster competitive markets. You can support small business formation. You can do all of these things that make it more likely that your population produces meaningful economic results, and then you can tax them more heavily.

I think right now in the United States, as a share of total tax revenue, something like 50% derives from income taxes, whereas something like 12 to 13% derives from corporate taxes. And so in that world, of course, you want to make sure that people are making more money, because if they make more money, well, then you accrue more tax revenue, and you can do more things with that tax revenue. It just so happens that these investments are the kind of things that we associate with a better quality

of life, and they also give you better bargaining power. But the other thing, of course, which we can get into a bit later, is that people may impact the ability of that government to retain power, either through democratic means like voting or through credible threats to an autocratic regime. And so you might want to buy them off.

David (40:00.825)
Mm.

David (40:15.445)
Okay, basically, yeah, so you're saying the public goods, the public infrastructure, take for example free public education for all citizens: this is not the government doing, quote unquote, the right thing for its citizens. This is not the governments, the nation states of the world, just being altruistic and nice and kind. They're doing it for a reason, for an economic outcome, which is: if we have an educated population,

then we have a greater ability to produce GDP, and the government benefits from being able to tax that GDP. That's how it makes revenue for itself. And this is basically the reason, the incentive, for governments to invest in education or, for that matter, any kind of good thing for their people. Does that analysis miss something? It feels very mechanistic and bleak. Maybe that is the way the world works.

But, you know, is there not some sense of morality in doing the right thing here?

Luke Drago (41:15.876)
So I'll mention one thing here, which is that this analysis doesn't require anyone in government to consciously be this cynical. For instance, you can imagine in the year 1500, in Europe under feudalism, maybe there were people who were really altruistic and wanted to, you know, give people free healthcare and stuff like this, but then they get invaded by the neighboring country that invested more in its military. Whereas today, the people who were altruistic around 1800, who were liberals promoting British and US political progress, these are the people

Josh (41:22.452)
You

Josh (41:32.052)
Hmm.

Luke Drago (41:45.812)
who ended up shaping the future, because their values at that point in time were associated with the winning strategy. Well, and ask yourself too, if you hear, and I'll tell you nothing about these two politicians, a politician who wants to increase spending on education and a politician who wants to decrease spending on education, gut check which one feels more altruistic to you. Yeah.

David (42:03.354)
increase, right?

Josh (42:04.596)
Increase.

Luke Drago (42:05.786)
And I think it just is the case here that the incentives are aligned. The thing that sounds really good and altruistic is also the thing that returns money to the state. Now, in the world where the state has to cut that education spending, because they're in a fiscal crisis, or because making that spending choice would be super irresponsible, or, as Rudolf said, there's an invasion coming and they've got military spending, it doesn't make that other politician any less altruistic in theory. It just means that politician has really strong countervailing incentives to do something else.

David (42:31.647)
Okay, so it's not altruism. It's just a happy coincidence of incentive alignment. That's why we get beneficial things. It's interesting, going back to the idea of the resource curse, maybe we could talk a little bit more about that, because we do have real world examples, experiments at play, where there are very resource rich countries that can generate economic returns, and tax revenue essentially to pay for the government, based on resources rather than human capital, rather than

human labor. And this is the resource curse that we were referring to at the outset of this episode. And there's actually a name for these, which is rentier states, I believe. Okay, so tell us again about the rentier states. Let's get into some more detail here, because this is essentially the experiment at play. Again, if your nation is resource rich, how does it treat its citizens? What are the experiments that we've seen right now in the real world?

Luke Drago (43:26.426)
So we can look at the raw incentives. There's a very interesting counterargument that I think we actually incorporate, which is that many states have such good institutions that they can avoid these incentives. But at the raw incentive level, we talk about the Democratic Republic of the Congo, for example: an extremely complicated history, but with literally trillions of dollars in minerals below their feet and, I believe, hundreds of billions of dollars in total revenue from those minerals. And yet their people subsist on a couple of dollars a day. And if the state has the kind of resources

that could enable relatively widespread wealth, the question is, why don't they do it? I think it's very easy for us in the West to imagine a state far away that is exposed to high levels of corruption or high levels of inequality because the leaders choose themselves over their people; it sounds quite foreign to us here. But of course, this is what we see time and time again in many resource-curse-afflicted states: the leaders get rich, the people who control the mining rights get rich, the oligarchs get rich, but many

regular people don't see the benefits of economic activity, because there's no reason to invest directly in them to reap that reward.

David (44:33.259)
Is there an explanation of counter examples of resource, resource rich nations that are actually like protecting their citizens like fairly well. So like Norway comes to mind again, like Canada is sort of comes to mind. Yeah, like what? Yeah, how do you explain that?

Luke Drago (44:41.56)
Yeah.

Luke Drago (44:47.044)
The UAE comes to mind as well. Australia.

Josh (44:49.054)
Hmm.

Luke Drago (44:49.976)
So one, I think the key thing here is, when we talk about rentier states, we're talking about states that have a very large portion of their revenue coming from natural resources; it's a dominating force in their economy. And, I think we cite this in the piece, there are these two really interesting examples that the authors give, of Norway and Oman, as counterexamples of states that don't fall into the resource curse. In Norway's case, by the time Norway discovers oil, they have this really efficient, really anti-corrupt, fantastic democracy, where voting incentives can kind of override raw capital incentives,

where the bureaucracy understands how to do things in really complex ways. And this means that voters have very real power and the benefits get dispersed in different ways. The voters can outvote capital incentives. In states like Oman, Oman in particular, there was a pretty credible threat of...

of some sort of uprising or revolution. I can't recall the exact example from the paper; I know we cite it directly. And as a result of this, the state capitulates. I think the kind of analogy that I've used here before is that if you are the person who owns the rents, you'd really like to keep all the rents to yourself. You'd also like to keep your head meaningfully attached to your body. And so if capitulating, if paying people out, if doling out the rent money keeps your population in check, you're much more likely to want to do this, because otherwise that credible threat could be

Josh (45:56.052)
Yeah.

David (45:56.546)
You

Luke Drago (46:07.26)
quite real. Now, what we say is twofold. One, we think that advanced AI makes it way easier for the state to know everything and suppress things like revolutions, and so we think the revolutionary argument in autocracies is kind of off the table. We find ourselves in the contrarian position of defending democracy pretty strongly here, which has, I think, become kind of contrarian in some tech circles in recent years. I hope so, I hope so. I appreciate this. But with the, you know, the

David (46:17.901)
Hmm.

David (46:29.333)
It's not contrarian around here. big defenders of decentralization and democracy. Yes.

Luke Drago (46:37.18)
Norway example, the question you have to ask is: one, do you think most states have the kind of robust and rigorous democracy Norway has; and two, do you think the intelligence curse, replacing all labor, might be stronger? In our case, we think it probably could be stronger. But it really does show us an interesting way out.

David (46:53.379)
That's interesting. So you're basically saying Norway had democratic, decentralized institutions strong enough to withstand some of the pressures of the resource curse. But the open question is, how many other countries have that? And when you get into the intelligence curse, no matter how strong your democratic institutions are, will that be enough to withstand the tidal wave

Luke Drago (47:04.995)
Exactly.

David (47:18.175)
of this intelligence curse? That's kind of an open question, but it does provide maybe a sliver of hope, which is that with robust institutions, maybe we can withstand the coming tidal wave. I don't want to get into solutions too soon because we're still in the problem, but I'll just earmark that right now.

Luke Drago (47:36.942)
Yeah, please.

Josh (47:39.516)
Yeah, I think what I want to know, as a layman listening to this, who is just on the street and now understanding that he will soon not be able to trade his labor for capital, is: I'm curious, like, what? I'm out of a job. So.

David (47:48.601)
You're out of a job, Josh. Enjoy these last few podcasts.

Luke Drago (47:49.21)
Podcast over

Josh (47:55.252)
That's what I'm saying. I'm listening to this. I'm like, hmm, okay, well, I can't do this. I can't do this. I am no longer able to trade my labor for capital. What does that look like for the average person? Are they collecting government welfare? Is there a universal basic income? How am I able to accrue capital if I am just one of those elementary workers in the workforce?

Luke Drago (48:11.802)
Well, I guess the question is how much do you want to get into our solution section right now? I think we've got to, yeah, yeah.

Josh (48:15.206)
So perhaps we'll hold that for a second because there is another element that I was also interested in talking about, which is just the human element. As a human, I like human interaction. I like going to hang out with friends. I like buying homemade things from people. I like meeting the artists that create the art that's on my walls. I really enjoy that connection. And when we introduce this AGI element, this artificial intelligence force, it feels very inhuman and artificial and it feels very sterile in that way. And when I think as someone who is experiencing the human experience,

I'm really curious about...

Josh (48:49.51)
this. What effect does human nature have on the way this all plays out?

Luke Drago (48:57.082)
So I think it is definitely true that there are a lot of cases where humans have a preference for interacting with humans, and I think this will continue. I think there will be a lot of social-facing jobs where humans have a very high bar for replacing a human with an AI. And this does provide a sort of buffer, where some jobs will last quite long: maybe a teacher, or maybe interfacing with customers, even a salesperson who's very dependent on personal relationships. Humans might prefer a human there for quite a while.

I guess there are some open questions about how charismatic the AIs get, how good they get at hijacking human social instincts, stuff like this. But there's also a question around money: the humans currently have money, and there's some capital flowing around the human economy, but the AIs will be increasingly doing stuff, and money might increasingly flow towards the AI part of the economy. And in particular, the question is: how are the humans earning the money with which they pay each other for the human services, when at least some of that human money also has to be spent on the AI stuff that probably keeps them alive, keeps them fed, stuff like this?

Josh (50:08.412)
Is it all about the workforce, or how far does this go? Currently, a human thing that I would be really excited about is to teach my kids something, or to be a father to my children. How far does that go? Does it get into the household? Does it kind of remove the need for humans through the entire process?

Luke Drago (50:28.59)
So there's a really good paper that walks through some of the cultural elements here. It's called Gradual Disempowerment. We know the authors quite well; it came out around the same time ours did. We focused less on this initial cultural element, mostly because we were trying to isolate what we think is a really critical variable here on the economic side. But I'll tell you, I had an interesting interaction a couple of days ago with someone who was telling me that they talk to ChatGPT constantly, and that they think their dad talks to ChatGPT more than he talks to his kids; that it was like two or three hours a day, maybe more.

David (50:57.689)
Damn.

Luke Drago (50:59.043)
And so I think the capacity for machines to alter our relationships with each other seems quite high. I don't have the exact quote that Mark Zuckerberg said on a podcast recently, so I hopefully am not butchering this too badly, but it was something along the lines of: the average human, the average American, has four or five friends, but they have capacity for 15, and we can substitute a lot of that with machines. For me, I'm not excited about this vision. This is really not exciting to me at all. I really value the real world and the people I get to interact with. And maybe

David (51:21.261)
Yikes. Right. Right.

Luke Drago (51:28.474)
this is something where I don't want to impose and say that I get to make the choice that nobody ever gets to go down that rabbit hole. But it's certainly not a technology that I'm excited about building.

David (51:36.218)
Okay, that's really fascinating, because when you start getting into the family, and you talk about just being a parent or just being a father, can an AI really do that better? But then you get into scenarios where a lot of people grow up without their father, right? Maybe by way of an early death or something else. And is an AI maybe providing some parenthood there? I guess what you guys are saying, though, is you're acknowledging that AI cannot replace all of our labor, because we still might want to go to an arts and crafts fair and purchase a piece of artwork, for cultural reasons, from a real human artist that we resonate with and identify with. And there's still going to be a market and an economy there. What you're saying is that over time that could become a smaller and smaller portion of the economy, and even the humans' purchasing power in this world could actually decrease.

Because, like, where is their wealth to go purchase the artwork actually coming from? And so you could imagine that human-to-human kind of economy, where only humans can provide this, just gets smaller and smaller over time. It's kind of a niche. And so the humans are, I guess, disempowered, even though these economies still exist.

Luke Drago (52:51.588)
Yeah, go ahead. Or it could even be that human wages stay roughly constant. Everyone has vaguely pro-social jobs, the money flow in the human part of the economy comes from something-something government, something-something existing human wealth, and human wages are what they are today. But then humans just don't really have political power anymore, because states worry about real things like energy and GPUs and military competition, and all of these fields are done by AI. And then the human role has become a bit peripheral, no longer tied to the real power that exists in the world. And I think I'm a bit worried about that, even if humans have their wage level at what it is right now.

Another way to think about it too is, I hear a lot that people will always want human teachers, right? Because there's this human interaction that you get from the teacher, and it's really hard to replace. A relevant question, though, is: what will be the demand for schools? What is the incentive for states to fund mass public education in a world where they aren't receiving a return there? That doesn't mean it isn't going to happen, but you should look at the underlying economic incentives.

Josh (53:36.307)
Mm.

David (53:43.961)
Hmm

Luke Drago (53:52.032)
And it could be the case, as you described, that many, many fields are automated, and so the money flowing in this human economy is increasingly limited or, you know, dwindles over time. I think there are a lot of ways you can reach a pretty bad outcome through different mechanisms here, and a lot of our solutions space is focused on trying to keep humans meaningfully economically involved in many different ways, while also strengthening democratic incentives and democratic structures so that they can override capital incentives when they need to.

David (54:17.017)
Well, if we could stretch this a little farther and imagine a world here: how do future nation states actually make money in an AI-dominated economy? How do they tax? Obviously now our tax mechanisms are income tax, capital gains tax, consumption-type taxes, excise taxes, increasingly tariffs. That's fun. But the nation state is really going to have to reorient around

David (54:43.851)
AI labor. And that's another interesting question: maybe the nation state is not the one in charge. I mean, we're in a world of nation states, but that is kind of a post-feudal model that arose on the back of the last major technological change, which was the Industrial Revolution. Maybe we're going to reorganize. Balaji Srinivasan has this concept of the network state, and you sort of wonder if maybe some of these AI labs could be in a position to accrue such power that they actually become the dominant force, some kind of OpenAI network state, complete with a flag and Sam Altman as the president. I mean, who knows, right? How do you guys see this playing out?

Josh (55:21.748)
Hmm.

Luke Drago (55:27.97)
Yeah, I think there's definitely this question over whether nation states continue as the main form of organization of power in the world. So one, I think you should have some prior that these things are pretty sticky. Even the Catholic Church, you know, they were extremely powerful, they ran Europe for a few centuries in the past. They don't run Europe anymore, but they're actually making a lot of commentary about AI recently; they're finally relevant again, right? This stuff decays quite slowly. Rudolf has been

David (55:35.203)
Yeah.

David (55:49.209)
And they're still here. We still have a pope though. Yeah. Yeah.

Josh (55:51.806)
Yep.

Luke Drago (55:57.726)
subject to me spending the last couple of days really nerding out about this. I just, I literally just had a...

David (56:02.234)
About the Catholic Church and AI? Okay.

Luke Drago (56:04.122)
Yeah, so I have a reading list that I'm compiling right now, because the most recent Pope, this is a slight sidetrack, the most recent Pope has now said publicly that one of the reasons he took the name Leo XIV is because Leo XIII had this very prescient encyclical called Rerum Novarum on the Industrial Revolution in the 1890s, and he views AI as a similar kind of societal reorganization. And actually, Pope Francis had a whole lot of commentary here too. I've got a reading list I'm working through right now. Yesterday we were in Oxford and I was talking to a friar.

Josh (56:05.04)
Interesting, okay.

David (56:24.537)
What? Wow.

Josh (56:27.194)
Interesting.

Luke Drago (56:34.076)
So it's been busy.

David (56:34.411)
All right, all right, Josh, new guest request. We've got to get the Pope on Limitless, ask his thoughts on AI. We'll do this in the Vatican. Sounds good, okay. Fascinating. So we don't really know what the organizing political structure might be in this new world, but we can imagine it changes. But you're also saying the nation state is pretty sticky. The Catholic Church is still doing big things. Maybe it'll fade somewhat, probably won't go away, but that's kind of a TBD; we don't know yet.

Josh (56:35.348)
Let's get the Pope. Yep, we'll add that to the queue.

Luke Drago (56:47.384)
Yeah.

Luke Drago (57:01.23)
Yeah, and also, if we get, you know, AGI lab network states, the same incentives kind of apply to them by default. And they aren't by default democracies, unless they become democracies. Yeah. A core observation here is that AI can be both destabilizing and centralizing. This seems kind of counterintuitive, but it could be the case that there's lots of very quick disruption, and the winners of that disruption can very quickly accumulate power and capital. I'm not saying that is certain, but that is one scenario you could see here: it can both destabilize a lot of things

David (57:13.592)
or not.

David (57:17.303)
Hmm.

Luke Drago (57:31.184)
and then centralize power among the winners.

David (57:33.015)
Yeah, the centralization of power seems to be a massive theme for you guys. What I'm getting out of this is definitely some worry about AI, but I wouldn't call it doomerism, right? It's not, look, there could be a scenario where AI comes and kills humanity; you can see that point, but that's not really the focus. The focus is more this attractor basin towards authoritarian totalitarianism, right? Which could be possible. I mean, this is even in Daniel Schmachtenberger's work, I don't know if you guys have looked him up, but he talks about how, with all of these tech revolutions, what we could see is this attractor basin towards total societal control to actually keep our tech in place. There's one more concept, though, we've got to go through before we actually get to the solution. Oh, go ahead, go ahead. Yeah, yeah.

Luke Drago (58:14.082)
Let me just say something about the power concentration thing. Throughout history we've had really terrible tyrants and dictators, really terrible centralization of power. But all of them have fundamentally been limited by the fact that whoever the dictator is, they're not infinitely competent, they can't think incredibly fast, they still need a lot of other people to do things for them, they somehow need to get the buy-in of a big bureaucracy and of the population they rule over. Fundamentally, their power is still rooted in humans. If you're a dictator, you're constantly paranoid about everyone

else overthrowing you; this is fundamental. Or, yeah, even though they could pay for such good health care, exactly. But then, once you don't need the bureaucracy of humans working for you, once you don't need the human military, you just have an AI bureaucracy and an AI military, and you don't need the population to run your economy, the constraints on how total the totalitarianism can get become a lot weaker. Yeah.

David (58:45.165)
Yeah, and you know what, they also get strokes. Also, their life expectancy is only about 80 years. So, right.

Josh (58:49.842)
Yeah.

David (58:54.871)
Yes.

David (59:10.393)
Indeed they do. Okay, all right. There is a way out, guys, all right? Limitless listeners, if you're in despair right now, never fear: we've got some solutions for you. But one more concept to cover. This is, I think, the last essay before you conclude all of the things and give some of your recommendations for the way out, which is this idea of the social contract. An essay titled Shaping the Social Contract. And what you're saying is that the intelligence curse is breaking the social contract.

Josh (59:11.572)
Okay.

Luke Drago (59:13.678)
There is, yes.

David (59:38.102)
And I really like this diagram that you show, which is just this nice equilibrium balance of power. You've got three boxes here. You've got powerful actors, so these would be corporations, nation states, you know, the big powerful networks. You've got the people, and then you've got the rules. Okay. And there are lines of dependency between the powerful actors, the people, and the rules. The powerful actors need the people for value; we've already established that they need labor, right?

And so that's like a plus-one for the people. The people can displace the powerful actors; we've seen that throughout history, the French Revolution, the American Revolution, right? If the powerful actors get too totalitarian, we stage revolts. The people are strong. And what we've done is we've created these social contracts, basically rules for society. And so these rules are moral codes, but in more detail, it's kind of our legal system. It's the Constitution of the U.S. It's the Magna Carta. The people can influence the rules, and the powerful actors are constrained by the rules. We get the balance of power, separation of church and state, three co-equal branches, all of these things, right? It's all very nice. And that's our current setup, the status quo.

What you're saying is this whole AGI thing disrupts the social contract, because it means the people can't displace powerful actors. As you were just saying, Rudolf, it means the powerful actors, the nation states, don't need the people for value; they can just pay for tokens from the AI geniuses in a data center. And then the powerful actors have the ability to influence the rules. The whole social contract is messed up. Can you flesh this idea out a little bit more? Is this kind of what you're saying?

Luke Drago (01:01:27.972)
So I'll zoom in on just a single interaction here, which I think helps articulate this.

And I know your listener base, so let's zoom in on a software engineer at Google. And let's say it's 2021, which, if I'm correct here, is the big year where everyone was getting paid crazy amounts of money. You are negotiating your contract with Google and you have something that they want: in this case, you're really good at what you do, and they want to hire you. Because of this, you get to extract a whole lot of concessions. You're competitive on the marketplace. You get to ask for more RSUs, more stock, more money. You also get things like the free cafe on campus, because they've got to attract you somehow. I think it's like 16 or 17 restaurants and Mountain Dew on their campus. It's absolutely crazy. Cool campus. You get a lot of these benefits because of that exchange. And of course, Google gets something out of you too, because they might pay you $400,000, but as long as they've done their vetting, they're going to make a whole lot more than $400,000 from your labor. Everybody wins in this relationship. Now imagine that Google is able to replace your labor with a

Josh (01:02:09.971)
Yeah.

Luke Drago (01:02:31.676)
machine that can code way better than you. This really disrupts the relationship, right? Because, let's say, in this case it can create value for Google at a cheaper cost than you. It costs, I don't know, $100,000 a year, or $150,000, or $200,000. That's right in the price range where it's really economically sensible for Google to cut you out of the process, but difficult for you to then, you know, spin up 10 trillion clones of yourself and go compete with Google. And in the limit, this creates a world where powerful actors can get more and more entrenched as capital substitutes for labor more and more perfectly. Your ability to displace them goes down, while simultaneously your ability to bargain with them also decreases, because you don't have anything that they need. This might create a situation in which powerful actors get to set the rules, and you are constrained by them, and it's very difficult for you to alter that relationship.
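
To make the substitution arithmetic in that example concrete, here is a minimal sketch in Python. Every dollar figure is a hypothetical stand-in chosen for illustration, not a number from the essay: the employer keeps the human only while the surplus from employing them beats the surplus from the machine.

```python
# Hypothetical numbers, for illustration only.
human_salary = 400_000        # total compensation for the engineer
human_output_value = 700_000  # assumed value the employer captures from that labor

ai_yearly_cost = 150_000      # assumed running cost of the AI substitute
ai_output_value = 700_000     # assume the machine matches the human's output

human_surplus = human_output_value - human_salary  # 300,000
ai_surplus = ai_output_value - ai_yearly_cost      # 550,000

if ai_surplus > human_surplus:
    print("Incentive flips: cutting the human out of the loop is the profitable move.")
```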

David (01:03:21.869)
That follows through to the government too, right? The government basically breaks its social contract with its citizens when it doesn't need the citizens very much anymore. I guess my question here, or a bit of pushback, is: you know how we call them social contracts, right? And that's because they're enforced socially. Yeah, there's the power of the state, there's the military, there's the kind of monopoly-on-violence type of thing. But over time, human societies have been able to construct their own social contracts. Like, what is something like the Constitution? It's just a set of laws and legal codes and ideas that we all agree on in this nation called the United States of America, right? We put that in place. Yuval Noah Harari calls these kind of like myths, right? They're these shared beliefs that power so much of human society.

So my question is: okay, if we get to kind of choose social contracts, why don't we just pick one that doesn't screw over all the humans, that doesn't screw over citizens and their labor? We put these things together, they're just shared myths, they're socially enforced. Why don't we pick one that's good? And by the way, if this AGI thing comes true, won't we have abundance too? Won't we have basically 10% GDP growth a year? Won't we have fantastic wealth, at least with somebody making the wealth? And so shouldn't this abundance relieve the competitive pressures? We don't have to think about, you know, the basics of food and shelter, because it's all provided for us. And so we're not in this competitive game anymore. We can just think about what makes society happy and pick a social contract that enforces that.

Luke Drago (01:05:04.1)
So I guess maybe one historical example here is that the British Empire tried to enforce a social contract on the US, or what became the US, and then the Americans were like, okay, actually, we don't think this is fine. And it was a reality check for the Brits: it turned out the Brits did not have the ability to enforce that; the institution wasn't backed by real power. And then the Americans wrote their own social contract, which became the Constitution. I think there's definitely a lot of power in culture, institutional inertia, just the beliefs that people have, myths in the Harari sense, to steer things and keep things on track.

David (01:05:25.635)
That's right.

Luke Drago (01:05:34.044)
But then, over a long enough time scale, with enough stuff happening in the world, reality checks whether there's something behind this. If someone tries to change it, either in a bottom-up way, because there's some social media movement, or in a top-down way, because the leader of a country decides to do something, do the economic structure and the political structure push back against that successfully, or can you actually shift it? And if you can shift it, it probably drifts in the direction of the incentives over time.

David (01:06:06.713)
How about this abundance idea though? Going back to that, right? So like where we have abundance, AIs are creating all of these things. Won't that relieve competitive pressure for us? Like can't we get a utopia out of that?

Luke Drago (01:06:20.068)
So I think...

At the core, you should be really concerned about any arrangement where the long-run state of affairs leaves you with very little actual power. So it could be that there's lots of abundance, but you aren't creating any of it, you aren't involved in the creation of any of it, and so your power here is entirely political rather than material. That is just way less stable. Another thing to think about, and I think we talk about this in the essay, is that it's not really clear that competitive pressures or human greed have an intrinsic stopping point. But to paint an additional example here, it could be the case that the

worst outcome is that we have abundance, but you don't have any say in what happens afterwards, so your needs are met but your political reality is quite constrained. I think about a state like China, which has been able to simultaneously lift a whole lot of people out of poverty, I mean, the Chinese miracle is a thing that happened, hundreds of millions of people got lifted out of poverty under Deng Xiaoping, but simultaneously, I wouldn't say this has resulted in crazy political freedoms for people in China. It could be that your material conditions improve and yet simultaneously your power

is unaffected. It has been quite a, you know, Herculean effort by the Chinese state to keep this equilibrium going. And the Chinese state is in many ways responsive, because it's afraid of losing legitimacy, really afraid of revolutions; it has a zero-tolerance policy on protest. But that is one outcome. We just happen to think that you should be deeply concerned about scenarios in which you don't have the material power to guarantee abundance for yourself. If you're written out of the economic social contract, you are at that point at the mercy of the political one. We think the political one is better than nothing; we advocate really hard

for strengthening that political contract so we can get to that outcome. But in the limit, we don't think it's the only thing I'd want to be relying on. I'd really want to make sure that I have some real stake in the game here.

David (01:08:00.282)
One last objection to all of this. It's basically from Professor Arvind Narayanan. He wrote the intelligence curse... I don't know if you're familiar with him, but he has this riff, AI Snake Oil, excuse me, sorry. Yeah, he plagiarized your work. Sorry, I just want to let you know right now: AI Snake Oil. Sorry, Arvind.

Luke Drago (01:08:10.606)
He wrote The Intelligence Curse? He's gonna say congratulations. We think in very similar ways, you know? Maybe we plagiarized. Yeah, I'd be frightened to find out he wrote... that's good to know. I didn't realize that title was taken. Yeah.

Josh (01:08:11.581)
AI snake oil.

David (01:08:28.043)
AI Snake Oil is the name. He has this riff where, basically, he kind of downplays AGI. He basically thinks that AI is more akin to regular tech. And one of his riffs is that there's a difference between AI capability and power. So there's capability, right? All of this knowledge, intelligence inside of a data center. But then...

that's different from power. It's kind of constrained. Like maybe that idea you guys said earlier, that part of the solution is not giving AIs the ability to accrue their own wealth, right? Wealth would be a vector for power, and we don't necessarily have to give AIs wealth and power. So capability and power could be somewhat isolated. Maybe this whole thing is just a question of who gets the power, and it may not be AI.

Luke Drago (01:09:21.56)
Sorry guys, I think our power just went out or we lost the light. Okay, cool, we're back. If you wanna just, I hate to interrupt you there, but I didn't want us to record in the dark. You said we could splice this, if I recall. Please, I'm sorry, I didn't mean to interrupt you there.

David (01:09:24.085)
Oh wow. Yeah, no worries. Yeah, yeah, we can splice it.

Josh (01:09:24.652)
Nice. No problem.

David (01:09:34.967)
Yeah, so how does that idea, the difference between AI capability and power kind of like factor into this whole analysis?

Luke Drago (01:09:42.452)
If I'm understanding you correctly, you're saying that it could be the case that we don't delegate this power to AI systems, and then it remains in the hands of people. Is that right?

David (01:09:49.913)
Exactly. There are always humans in the loop, you know, like they can't get their own bank accounts or something, they can't accrue capital. We always have a check on them; we don't have to give them the keys to the car.

Luke Drago (01:10:00.622)
Well, I think nothing that we've argued is contingent on AI having this power in a self-directed way. One of the biggest oppressors of people in human history is other people.

Totalitarian states require a whole lot of people doing that oppressing, and it could be the case that what we've actually done is just expand the power differential: we've made it so that some people are far more powerful than others. This is already true today, but in the era of liberal capitalism and liberal democracies, your power as an individual, as a unit of society, has really never been greater. And what we're saying here is that it could be the case that a couple of people, because they have existing access to capital and can convert it directly into results, end up in a world where they have such dramatic outlier ability to shape the world that their ability to materially impact your environment is really, really high, and your ability to resist that is even lower than usual.

David (01:10:45.369)
Okay, I feel like we've fleshed out the intelligence curse to a sufficient degree. Let's talk about the solution. Let's talk about how to break out of this intelligence curse. You've got three words here: avert, diffuse, and democratize. Where do you want to take this? Do you want to start with avert? How do we get out of this curse?

Luke Drago (01:11:07.596)
Let's work it backwards. I think let's do the initial observation first. Go for it, Rudolf.

David (01:11:10.411)
Okay, start with democratize then. What's the idea here, that we're distributing the power to all of the people? We're just, you know, not concentrating this in the hands of AI labs and AI models themselves. How do you think about the democratize word?

Luke Drago (01:11:28.186)
So I think the way we'll flow this, if this makes sense, is I want to walk through really quickly just the observations backwards, because we started with democratize as the observation, and then I think we can kick it off with avert after that. What I mean by this is I just want to walk through the whole argument chain real quickly, off the kind of initial observation we have, of why we need each step here. Yeah, so I guess the flow here is basically, as we mentioned: Norway, for example, solved the resource curse. They just had good institutions, and therefore they can all go to the polls and vote, everyone's well off, and then they distribute the oil wealth among the people, and everything

David (01:11:44.334)
Go for it.

Luke Drago (01:11:58.092)
is great. So it's great if we can get to the point where we have this very democratic thing: a lot of people have power, they can affect the decisions that are made, and we get broad distribution of the benefits of AI, stuff like this. We list some ways in which technology for coordination and various other things can help with this in our last section. But yeah, this is basically great; there are various ways we can build tech to make this easier. And then the point we're making is that to be in the state where you can democratize,

David (01:11:58.819)
Mm-hmm.

Luke Drago (01:12:28.236)
and have that be a stable equilibrium, what often matters is that you get political power when you have economic power. So this brings us to the idea that you need diffusion as well: you want to diffuse the benefits of AI to people, such that everyone gains in power, gains in capabilities, and continues having some stake in the economy and some ownership stake over it. And this makes the democratization step more stable, because then it is actually in the incentive of the powerful actors, of the people, of everyone, to keep the democracy in place. So then we've gone from democratize to diffuse. Then there's this worry that some people have: if you diffuse AI too much, you give everyone the AI, you're giving out this powerful technology that people can use to do things like create bioweapons, or do all sorts of nasty cyber attacks, or whatever. Or maybe the AI takes over because it's misaligned, and it is very bad for everyone. And therefore, in order to make the diffusion step safe, you

David (01:13:20.407)
Mm-hmm.

Luke Drago (01:13:27.836)
want to avert the various catastrophes that could happen from widespread AI. And we're especially excited here about hardening the world against things like bio-attacks and cyber-attacks, and also just making sure that we don't mess up on the alignment problem. So from that, we worked backwards, right? Democratization is clearly a way out, because democratic incentives can beat capital incentives; you can ensure all the things you want out of that. But we've noticed this pattern where your economic power correlates with democracy and is oftentimes the engine of it. So we want

David (01:13:56.077)
Mm-hmm.

Luke Drago (01:13:57.718)
to diffuse, but we also want to make sure that diffusion happens in a way that doesn't create the kinds of catastrophes that either would just be bad in and of themselves, or could give license for states or other actors to really powerfully centralize. So we have this avert section, and we kick things off with avert in this backwards chain. We've realized that, in order to get to democratization, there are some steps we're going to have to take first.

David (01:14:17.685)
Okay, so democratize is all about power diffusion to the people, so that the people can hold the institutions in check. But it's a political type of thing, right? We have had democratic protocols in the past, and we have them right now: one person, one vote. We'll come back to that, because I want to get into some tangible examples. But that is about distribution of power, I guess, and the humans having this power and retaining this power.

Luke Drago (01:14:30.766)
Yes.

David (01:14:47.135)
And you're saying one way to do that is that other D word, diffuse. And I think diffuse means give everybody access to AI tools; it can't just be a small percentage. Maybe you can sharpen the intuition there, but diffusion is about the distribution of the tools into the hands of everybody. And then avert is just making sure that we don't completely go off the rails, have a misaligned AI

Luke Drago (01:14:51.673)
Yeah.

David (01:15:16.249)
or some sort of bioweapon. Also, I love that you say this, because this is super important and a lot of people miss it: avert without requiring centralized control. Because there's an attractor basin when you start to clamp down, when you avert and you sign letters like Pause AI, or, as Nick Bostrom proposed, build kind of a high-tech panopticon where the government has to surveil everybody to make sure they're not making a bioweapon with their LLM at home.

Luke Drago (01:15:27.094)
Yep

Josh (01:15:27.24)
Yeah.

David (01:15:44.321)
Right? Then we get this attractor basin of totalitarian, authoritarian regimes that we then can't get out of. So you're saying avert these bad outcomes without requiring centralized control. Okay. So those are the three pieces. Yeah.

Luke Drago (01:15:49.262)
Yeah.

Luke Drago (01:15:58.302)
Exactly. That's the kind of logic chain we flow through. And the reason why we work through avert, diffuse, democratize in the piece, as opposed to the logic chain where you go backwards, is because we think it's going to be really hard to diffuse unless you avert, and really hard to democratize unless you diffuse. So the logic chain works backwards, and then we present it forwards, if that makes sense.

David (01:16:14.861)
Yeah. Okay, it does. All right. Can we get into some real world examples? So avert. Okay. Okay.

Luke Drago (01:16:21.242)
Yeah, let's start with avert. Let's kick it off. So I think the core observation here is that AI actually can do bad stuff. And this is sometimes unpopular to say. It's funny, I think we got into a position where we were saying unpopular truths to lots of different people, and certain truths are more popular in some communities than others. But I think it is the case that AI can make it a whole lot easier for a lot of people to do bad things, and it can also make it a whole lot easier for us to lose control of the systems themselves as they take actions on their own. And so our observation here is pretty simple. It'd be really bad

David (01:16:28.093)
Mm-hmm.

Josh (01:16:29.33)
Mm-hmm.

Luke Drago (01:16:51.196)
if that's the end state, if AI is something that is bad for us and not good for us. And secondly, historically, these kinds of potential bad outcomes are the really powerful forces that justify centralization. You can see this with a whole host of tragedies. I think a lot about the September 11 attacks and how, as a result of 9/11, the government made very broad power grabs. The USA PATRIOT Act, which is actually, fun fact, an acronym, was a response a couple of months later

that resulted in what I would argue was a pretty significant restriction of civil liberties for Americans. It gave the government warrantless wiretapping capacities. Section 702 in particular has been quite controversial for a host of reasons, and I won't take a side on that argument. But the point is that it rapidly expanded government power, and government power, once unlocked, is very hard to get back. The other observation that's important here, though, is that if AGI can in fact do a whole lot of economic tasks, you're not just centralizing a technology,

David (01:17:25.045)
I would cosign on that. Yep, agreed.

David (01:17:42.51)
Mm-hmm.

Luke Drago (01:17:51.078)
like only giving nukes to the government, which is a pretty common-sense argument. You are also centralizing, into a couple of points of failure, the development of the technology that might run your entire economy. In this case, it kind of looks like centralizing the means of production into the hands of a single actor or a couple of actors, and we're pretty worried about that outcome.

David (01:18:03.514)
That was another essay somebody wrote a while ago. Yeah, I remember that one. Okay.

Josh (01:18:05.801)
Yeah.

Luke Drago (01:18:09.082)
Yeah, I think we don't cite that one, but we do cite The State and Revolution as an example. We don't think that the idea of a transition state, where a couple of people have all of the power and also all of the economic power, is a good one. That's a state where you don't have very much power, and historically your Stalin risks are pretty high here. Your risks of, you know, drawing the wrong leader and putting them in the apparatus that you've built are pretty high.

David (01:18:32.185)
Your P(Stalin), I guess, increases. Did you really? That's hilarious. Okay, okay, okay. Those are the goals. So how do we get there? I think one thing you cite, which is near and dear to our hearts, is Vitalik Buterin's defensive accelerationism. Maybe you could flesh that out as part of the solution here.

Luke Drago (01:18:34.042)
In another essay we called it P(Stalin) specifically. Yeah, it's not in this one. It's not. We have this piece on tacit knowledge, and we did in fact call it P(Stalin).

Josh (01:18:35.057)
Yeah.

Nice.

Luke Drago (01:18:58.286)
Yeah, I guess the basic idea of differential technology development, or differential acceleration, whatever it's called this month, is that we can choose, to some extent, which order technologies arrive in. We can push the technologies we like, the ones that help us guard against risks and help humans, and hopefully get those technologies before we get to the worrying ones. For instance, you'd probably want to make sure that by the time, you know, ChatGPT can do a cyber attack for you, we've gotten to the point where our cyber defenses are good, and that by the point where the AI can design bioweapons, we've actually

David (01:19:09.465)
Hmm.

Luke Drago (01:19:28.22)
hardened the world against bioweapons. And this is true in the avert section, and it's also true in diffuse and democratize: the core spirit of most of our proposals is this thing of, let's please build the technologies that enable the good things before we get to the threats. And actually, by building those technologies and making them come faster, we can avert a lot of these risks.

David (01:19:49.124)
A lot of these things are defensive too, right? When you talk about biosecurity, it's more on the defensive locus of focus, or cybersecurity, that's kind of defending from attackers. You know, cryptography is sort of similar in that way. But we'll also need physical security, and AI alignment, of course; the industry is focused on that, but that's another element of averting catastrophe here without centralizing. All right, let's get to diffuse. Okay, so what does diffuse mean? To me, that's just making sure that everybody, every human, has AI superpowers. The example that tech gives us is, even Tim Cook doesn't have a better iPhone than you, kind of thing, right? We all have equal access to iPhones, and that's great. So does diffuse mean we all have equal access to these models and other people can't take them away? Is it open source? What are the practical ways to diffuse this?

Luke Drago (01:20:43.918)
Yeah, so I think...

basically the thing you want to do is help as many people as possible benefit economically from AI, as quickly as possible, such that by the time the really radical AI hits, people have already been there: first of all, there are more people who are owners, more people have built companies, stuff like this, and also you've distributed the technology's benefits more widely, everyone has gotten the AI power-up. I like your phrase about everyone getting superpowers from AI. So in terms of grand strategy here, we have this diagram at some point where we show

these two stages of diffusion. First, while AI is augmenting rather than automating, you want to diffuse AI, which helps create decentralization. You diffuse AI so that it's not just the AGI labs that have the AI and use it to benefit themselves; everyone in society has access to the AI. And what this means is that you get decentralization, because the benefits of AI have been more widely spread. And then the fact that you have decentralized the AI helps you also diffuse the AI, because then, once the humans are automated, they're going to get automated not by the big AI labs, but by their own AIs, and they still control the fruits of the labor of the AIs that they own.

David (01:21:56.162)
Is it like, how supportive are you guys of open source then models in this and like open source weights and just like all of that, that kind of movement, is that, is that a key?

Luke Drago (01:22:05.018)
Yeah, broadly pretty supportive, especially in a world where we've done a lot of the hard work here on, you know, proofing the world against the biggest disasters. Yes, exactly. And to break this down concretely, this looks like two phases. There's this first phase where, right now, we're on this track where AI agents actually aren't that good, and yet everyone is investing more and more into making agents better. This opens up an interesting market opportunity, where AI-augmenting tools are both underinvested in and probably way better. Think about Cursor for a second.

David (01:22:11.061)
on avert.

Luke Drago (01:22:35.022)
Shout out to Cursor, love those guys. Cursor is not a tool that does all of the coding for you entirely. It's a tool where, usually, a software engineer who really understands what they're doing is in the driver's seat, and it has enabled vibe coding. It's enabled a lot of people who don't know exactly how to do it to still set the high-level direction. But ultimately you are in charge of what's happening; you are steering the ship. There's a huge market opportunity to build more tools in that space right now and expand the window of AI-human augmentation. We don't think this is the long-term permanent solution.

But going in that direction now can both access untapped economic incentive and really focus on what we can do today. What we're then excited about in the future is a whole bunch of concepts, but one of them is something like aligning models directly to the user. Most people have some sort of tacit knowledge that is very difficult to gather, and if you ultimately want the single superintelligence singleton, you're going to want access to all that information. This gives you a wedge point. Maybe it is the case that you aren't the perfect data source, because you are slow relative to your AIs in 2050. But it could be the case that there are AIs trained off of your tacit knowledge, off of your data. They understand you, can behave like you, and can represent your taste and judgment, faster. And these AIs are acting throughout the broader economy and interacting with other systems. The systems are smarter, but you have access to the information behind that AI. So this is a world where, first, we've extended the augmentation window, and second, we've aligned systems directly to the user, such that even as the systems take off, they're still tied in a meaningful way to the user, and therefore the user gets compensated in some way, shape, or form for their economic activity.

David (01:24:07.243)
Okay, that's cool. Let's talk about this last point, then, in more concrete terms: democratize. So how do we do that? How do we let the humans still maintain some power? We're very used to one person, one vote. Are you talking about concepts like maybe you have an AI lawyer, like you have the right to some sort of AI lawyer or data model to represent you? How do we really ensure that democracy and human political agency don't decrease in this world?

Luke Drago (01:24:39.674)
Yeah, so we take a very tech-centric perspective here. This is not the essay in which we're going to go out and propose how we solve everything in politics. But I think one thing that is underappreciated, again, is that if you push forward technologies that make governance and verification and coordination and trust easier, then it becomes easier for society to decide to do the good things and decide to avoid the bad things. So there are some ways in which AI might help with this. In particular, AI might help policymakers

understand what voters think. And in addition to helping policymakers understand, you can imagine the AI advocating on your behalf, especially if you have a model that is aligned to you in particular. You can imagine having provable guarantees that some particular AI system is making a judgment in a way that is more incorruptible than a human. You can imagine the AI quarantining information: there's this fundamental difficulty with using humans to audit, which is that humans have long-term memory. With AIs, you just use the AI, it processes the context, and then the AI is deleted, but it returns a yes or no on whether you're abiding by some protocol or building a bioweapon. So you can audit things without humans having knowledge about it afterwards. There are a bunch of ways like this where technology gives you building blocks for governance that might be more effective and more representative of the desires of the people than what we can do right now by stacking human-run bureaucracies and having laws about that.
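
As a toy illustration of that ephemeral-auditor idea, here is a minimal sketch; the function and the keyword check are hypothetical stand-ins, and in practice the judgment would come from an AI model rather than a string match.

```python
# Toy sketch of the ephemeral-audit pattern described above. All names are
# hypothetical. The auditor sees the sensitive context, returns only a
# one-bit verdict, and its working memory is destroyed afterwards.

def ephemeral_audit(sensitive_context: str, check) -> bool:
    state = {"context": sensitive_context}    # stands in for a fresh model instance
    try:
        return bool(check(state["context"]))  # only yes/no leaves the sandbox
    finally:
        state.clear()                         # the "auditor" retains no memory

# Usage: in practice `check` would be an AI judgment, not a keyword match.
flagged = ephemeral_audit(
    "lab notebook contents...",
    lambda text: "select agent" in text.lower(),
)
print(flagged)
```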

David (01:26:06.273)
I think these three words give us a good framework for directionality, even though you can't solve everything in one essay, of course. One last lens and filter for avert, diffuse, and democratize: let's say one society chooses to do this and puts these things in place in an intentional way, but another society chooses not to. There's this kind of geopolitical race condition here, where we're in some sort of arms race for AI. Does your essay on the intelligence curse have anything to say about that? Once one society chooses to go in the direction of trying to solve the intelligence curse, but another society races faster, fully embracing the curse, they don't care, do the authoritarian, totalitarian societies basically win no matter what? Are we kind of screwed even if we in the US, or we in the West, choose these ways out?

Luke Drago (01:27:02.49)
So I think here is one of the places where the differential tech development approach is really powerful. It is fundamentally not about taking cuts to yourself, becoming less competitive, and eventually being overrun by less safety-oriented actors. It's about developing the technologies such that, if they exist, doing the safe, good, pro-human thing is the winning strategy, and therefore it shifts the equilibrium. It's not reliant on coordination with other actors.

David (01:27:21.145)
Hmm.

David (01:27:26.541)
I like that. We are doing the Munger thing of trying to get the incentives correct.

Luke Drago (01:27:30.168)
Yeah, exactly. Now I will say...

Incentives aren't everything. And we talk a little bit about some policies, especially in the democratize section. We talk about some of the more boring ones you hear all the time, like campaign finance reform, reforming anti-corruption laws, and strengthening bureaucratic competence. These all sound kind of boring today. But a really key thing is, if you think incentives are about to get radically different, and the self-interest of politicians might be much more powerful than it was in years previous, it is really important that the leaders you're electing in the next couple of years are leaders you would trust to make good decisions on your behalf in stressful situations. That integrity element that we've kind of lost in modern politics is more important than ever, because one of the ways you can square the great-man theory of history with a more incentives-dominant view is that, oftentimes, the great men in history are those who take a decision that looks against the incentives and is ultimately the correct one.

David (01:28:20.237)
Hmm.

Luke Drago (01:28:20.346)
You really want to maximize your chance of having one of those leaders when critical decisions come down. You could, and should, spend as much effort as you can on getting the incentives right, but you also really want to make sure the person you have there is someone who, at that critical moment, might make a decision against those incentives if it matters, if it's important for your well-being. So there's the boring answer: you should vote for people who you actually trust. But you actually should vote for people who you actually trust.

David (01:28:44.085)
Interesting. Unfortunately, it also feels like we're in a shortage of great men these days, at least in our politics. This has been very fascinating. I guess my question as we end this is: what should listeners do with this information? Is there anything actionable? I think it's a super valuable mental model, and kind of a, hey, you might be out of a job. But what do people do, personally, with this information? What do you recommend listeners take action on?

Luke Drago (01:29:09.87)
Yeah, I'd just be a big fan of, you know, going to the solutions section of our essay series. We have a lot of specific tech ideas; read through them. If you're someone who might build something, go and build something off this list, or read this list, have your own idea for something that helps these same goals, and go build it out. Again, if we build the right technology, that makes the equilibrium the good one. Yeah, there's this meme I hear a lot in some of the more AI safety communities: if it's something that has market value, the market will solve it. The market is made up of people. People are in the market and they do things. So if you're going to do differential tech development, some startup founder has got to wake up and decide, okay, I'm going to go build this thing, and a VC has got to decide to back them. And we're not just pontificating here. We can talk a bit more about it in the future, but the two of us are currently actively involved in taking a slice of this agenda and building it out ourselves. We're going down this rabbit hole ourselves, because, I don't know, it's really easy to point at a problem; it's pretty hard to build a solution. What we're much more

David (01:30:00.469)
Hmm, okay.

Josh (01:30:01.396)
Hmm.

Luke Drago (01:30:09.72)
excited about is building out the solution space. But I think there's stuff for people who aren't just in the tech community. There are policies that governments should be thinking about enacting today. We call, for example, for an Operation Warp Speed for d/acc-style technologies, the kinds of things that could actually prevent major catastrophes and enable this culture of innovation and democratization. Governments could be incentivizing that right now. I mean, Trump's first term did Operation Warp Speed; the bright people are in place to do something like that again, this massive moonshot. I think voters can start really thinking critically about what's going to happen in the next couple of years and electing politicians on those grounds. And if you're a student, I think I've got a blog post somewhere talking about, depending on who you are, what career decisions you might want to be making. Because again, if you think diffusion pressures are real, and it might take a while for this to accelerate, there are still some roles that are pretty obviously going to go down first. Doing bigger, bolder things right now, things that actually require you to learn how to fail and be your own actor, is really great preparation for a world where you might be able to command an army of agents or a lot of augmentative tools, even if the big companies aren't hiring junior analysts anymore.

David (01:30:56.003)
Mm-hmm.

Luke Drago (01:31:09.5)
So there are a lot of things you can do right now, today, to either, at a macro level, get us on the right path, or, at a micro level, orient yourself for the coming wave.

Josh (01:31:18.622)
So Luke and Rudolph, we spent the last 90 minutes kind of defining the intelligence curse, walking through it, providing solutions. I would love to end this on a positive note on something a little optimistic. What happens if we solve the intelligence curse? I mean, what is the payoff? What do we get for solving this problem?

Luke Drago (01:31:33.848)
We are talking about what could be the greatest technological revolution of any time in human history, and maybe the final huge thing. The promise of that is honestly hard to fathom. It's things like curing diseases we couldn't imagine curing, actual total abundance, unlocking crazy amounts of resources, really being able to provide to everyone what would, even this year, have been experienced only by the ultra elites. That is a world that I want to be able to live in, where we can do things like, you know, abolish poverty and abolish disease. And if we can get this right, the promise of artificial intelligence is that instead of having less agency and less control over your world, you get more, with a whole lot fewer of the drawbacks. And I don't know, that's a vision I'm pretty excited about.

Josh (01:32:22.974)
That's a vision I can be very excited about too. I want to thank both of you, Luke and Rudolf, for joining us today and walking us through everything. I know you guys mentioned you were working on some stuff. Where can people find you?

Luke Drago (01:32:33.078)
So we've got a contact form and an email on intelligence-curse.ai. We're both also on Twitter. I think my handle is luke drago. Rudolf, yours is? Mine is currently @LRudL. Yeah, so if you want to reach out to us through the contact form there, or reach out to us on Twitter, we're both pretty active, unfortunately. We definitely tweet a little too much. That's all right.

Josh (01:32:55.764)
Hey, for better or worse. But we very much appreciate you joining us today and walking us through this entire intelligence curse. I'm sure everyone listening now has a lot to chew on, a lot of interesting new questions to ask, a lot of new things to consider, whether it be the exciting case, the optimistic ending that we landed on, or any of the varied outcomes that we also discussed on the show. So Luke and Rudolf, thanks again. Thank you so much for joining us on the episode today.

Luke Drago (01:33:16.91)
Thank you so much. And I guess the last thing I could say is: I'm super optimistic and super excited about where this could go. Even though we know about the problem, we also know how to solve it, or at least we have a guess on how to solve it. I think people should get more excited about jumping at that solution. Yeah. Please build the tech that will save the world.

Josh (01:33:28.912)
Awesome. Well, that's really optimistic. Absolutely, I love that. That's a really optimistic note to end it on. So thank you again, and we appreciate you guys taking the time.

Luke Drago (01:33:35.322)
All right. Take care. Thank you.
