U.S. vs China's Race to Superintelligence | Jeremie & Edouard Harris
Ryan:
[0:03] Are we winning or losing the AI race to China right now?
Jeremie:
[0:08] Oh, wow. Okay. That'll just take 30 seconds. I mean,
Jeremie:
Well, it depends. It depends, but it depends in interesting ways. So there are different dimensions of the AI race. One way you can split it up is supply-chain-wise, right? You look at the whole AI supply chain, what goes into training a leading AI model. There's talent, right? That's one big input. In that respect, there's a bit of fog of war around who exactly is where, especially on the Chinese side. But judging by the kinds of engineering advances that go into making your DeepSeek R1, your Qwen 3, there are very, very legit people in the Chinese labs right now, DeepSeek and Alibaba actually being among the clear leaders in China. But then separately, there's a question around the hardware that's used, right? You have the talent to train these models, to architect them, but then the actual physical hardware that you need to do that training or the inference, that's a complicated story. So the hardware itself, the most advanced chips, obviously, are export controlled. China can't get a hold of the Blackwell series of chips right now. They're stuck on the H20, all kinds of stuff like that. But they have all the power that they need. And so when it comes to power, that's just not a bottleneck for them. They have all the energy they want. We don't know how to build nuclear plants in less than 10 years. China's added an entire America's worth of power to their grid in the last decade. So it's just this very, very different set of constraints. And then that gets reflected even in the way that they build their data centers and their chips and the way that they scale them up for training runs. So overall, I would say it depends on the dimension you look at.
Jeremie:
Maybe I would put my money on America being first to build truly, like, AGI-level capabilities or ASI right now, but China would just steal that shit, like, immediately under nominal circumstances, so it doesn't matter.
Edouard:
Yeah, so that's the thing about winning. Under current conditions, winning is kind of not really winning, because we can be the first to develop the technology. But from everything that we've seen and heard, and all the folks that we've spoken to who are tracking this space closely, I mean, they're already all up in our systems at the moment.
Edouard:
And so there is a massive security push that's necessary for winning to be meaningful. And that's before you even get to the question of, well, you build an AGI or you build a superintelligence, how do you control it? And what does your catastrophic victory scenario look like?
Ryan:
Right, right, right. I mean, superintelligence aside, because there could be, like, a third actor in this: the U.S. and China, and then the superintelligence with its own ambitions, of course. But I'm sticking to the U.S. and China. We want to get to the scoreboard and break this out into each of the different categories and dimensions, because there's the labs, you know, the software side of things. There's the hardware. There's the energy. There's even kind of the culture of the U.S. versus China. And I think there are scoreboards associated with each of those. But before we get into this in a more granular way, maybe we could set the context for some of our audience and listeners today.
Ryan:
[3:10] Like, why is there an AI arms race in the first place? And is there really an arms race? Because an arms race kind of implies that we're on the cusp of some sort of maybe like super weapon or super capability. And that's not obvious, I think, to a lot of people listening. Is there really an arms race?
Edouard:
Well, certainly there's an economic race. I think that's, you know, reasonably obvious to everyone, even within the United States. The labs themselves have said this. We are in a race. You know, we wish everyone else wasn't racing so hard. But obviously, when you're in that space,
Jeremie:
That's something, by the way, that Sam Altman once said to us directly. Like, we were like, hey, so, you know, it seems like you guys don't have the security or the safety stuff in place for where things might be going. And so, like, you know, what's your take? And he's like, I wish Google wouldn't race us so hard. I wish
Ryan:
[4:05] I had no code.
Edouard:
Nice, yeah.
Jeremie:
Which is true, by the way. That is
Edouard:
Not wrong, yeah. He's not wrong to wish that. We'd all, you know, maybe be in a marginally better position if Google wasn't racing quite so hard. But of course, once you have three, four, or five frontier labs all racing, each individual one doesn't really matter all that much.
Jeremie:
[4:26] Yeah.
Edouard:
So in terms of the question around, is it an arms race? Well, it's definitely a race. And so the question is the arms part, right? So ultimately, one, is there a good chance that there is a weapon-like capability on the path, or at the end, of this race? And two, are the participants, especially at the national level, operating as though that's the case? And I think in the first instance, there's definitely reason to believe there are concerning capabilities along the path. It's already started popping up. Anthropic has come out and said, we can't guarantee that the systems that we have deployed today can't help a novice design and deploy a known biological weapon, right? That's already a weaponizable capability. And it's not that this is something special about Anthropic, but they, to their credit, have come out and said this explicitly. And you just imagine, kind of extrapolate, the cyber capabilities that are coming out, a few months or a few years out. You know, you imagine, like, the best hacker at the NSA or the best zero-day finder, just like a cyber expert, multiply that by a million and run a million copies of that dude in a big data center at a thousand times human speed. Is that a weapon of mass destruction? Well, it's not super clear that it is, but it's not clear that it isn't.
Jeremie:
[5:56] Yeah, I mean, it's like,
Jeremie:
You know, we've seen NotPetya, we've seen WannaCry, we've seen just some of the very basic cyber weapons, which, by the way, often pull from that big leak of the NSA's toolkit, a couple of their cyber weapons that get mixed with things. I think the idea of AI systems autonomously discovering zero-days is totally on the table. And the minute that you admit that possibility, you have to think about what it looks like to live in a world where there are catastrophic cyber attacks that could be launched every other day. I mean, it may sound extreme, but the attack surface for our critical infrastructure, everything from power grids to, you know, water purification systems, all that stuff, is all out there. You're dealing, when you talk about AI, with a technology that is fundamentally native to cyberspace in a way that humans aren't. They can move around much faster. Through APIs, you can control swarms of bots that, you know, seamlessly SSH into and out of things in ways that humans just can't. And so people are often hoping that, well, maybe the cyber defense angle is going to win through, you know, this whole thing about maybe the defense will get more help than the offense side.
Jeremie:
Unfortunately, the defense side just has to cover so much, right? The attack surface is so huge. So just from the cyber piece alone, I think that's a huge potential weaponizable application domain. But then you've got things like biological weapon design, and Ed alluded to it, you know, Anthropic just announced that they're moving up to what they call ASL-3 security measures. So these are the measures associated with
Jeremie:
Not nation-state actors, but organized crime and terrorist groups that now go, okay, well, AI has advanced to the point where this stuff actually meaningfully gives us a capability lift along, in this case, the biological weapons axis. We think it'll make it easier for us to, for example, synthesize sarin, right? So maybe, you know, you're not going to see another Aum Shinrikyo, right, the Japanese cult whose attacks failed to deploy sarin effectively, partly because of lack of time, but also partly because it's a genuinely difficult problem. And maybe with tools like this, those sorts of attacks would be devastating and effective. If we live in that world, all of a sudden, you know, now you've got bioweapons plus cyber attacks. By the way, those are the two most commonly cited vectors; there are others, like robotics and stuff that especially China is going to be very, very interested in because of what their supply chains look like. But fundamentally, you're talking about intelligence, right? Intelligence is useful for all of the things. And that includes the really bad shit. And that's part of why, under the umbrella of what AI can do, you have a lot of weapon-of-mass-destruction-like capabilities. If humans can do it, eventually automated intelligence ought to be able to as well.
Ryan:
It does seem like an army of super geniuses in a data center is just, like, obviously going to be weaponized, and can be weaponized in so many different ways. But even if you take the weaponization away, you can just talk about the economic power, how it advantages one country over another. And you could kind of go back through history, and all wars are kind of economic wars. Why did the Allies win World War II? I mean, you just look at the production rates of tanks and weaponry and airplanes and aircraft carriers, and that's really what won the war. And so ultimately, economic power is power. And this is kind of a power game, I guess. And so let me ask you this question. Do the countries know that they're actually
Ryan:
[9:30] in this AI arms race? So maybe we know it here. We're talking about this in Silicon Valley. But.
Ryan:
[9:36] Would the U.S.
Ryan:
[9:37] Government actually say, yep, we're in an arms race with China for AI? Would they admit that? And would China admit this as well?
Edouard:
They have not yet directly admitted it. Although, so, the defining feature of the U.S. government is that it's gigantic. Like, that's the main defining thing about the United States government. It's got, you know, even if you don't count the military, it's like 3 million people or some absurd amount, right? So if you ask, does the government believe a thing? Really, it's such a gigantic thing, right? It just doesn't have a single view, unless, like, the president comes out and says, this is the policy of the United States on blah, blah, blah. It's such a big target, right? So I think what we can say is that there are absolutely pockets of the government that believe all the things that we believe. Those pockets are growing. Those pockets have read all of the AI, AGI, ASI manifestos that you and I and maybe the audience have at least touched upon. Situational Awareness, AI 2027, maybe the MAIM stuff, maybe our report, America's Superintelligence Project. And so in those spaces, they're as AGI-pilled as we are. Has it taken the entire government by storm? No. When that switch flips, I think we're all going to know it.
Jeremie:
Yeah, and it's also across the entire government, right? So the Bureau of Industry and Security, BIS, at the Department of Commerce, they basically handle all the export controls. I think 2018 was when the first round of export controls targeting GPU exports to China kicked in. And, like, you would have been entitled to look back then at those export controls and go, wait, why are we doing this? The reasons being cited are the weaponization of these basically pieces of gaming hardware, like, what's going on? And obviously they weren't quite that. I mean, you know, the A100, the V100, these were still impressive pieces of AI hardware in their own right. But that was what a lot of people felt. Now, the ratchet's been turned up on that significantly since then. But the language explicitly has been, and continues to be, this is to prevent China from accessing military-grade technology. That's how they're viewing, and choosing to view, the export of GPUs, of high-end ASICs, to China. And certainly, that's a revealed preference, a revealed preference to call it an arms race without calling it an arms race.
Jeremie:
There's also, by the way, one reason you might not hear this talked about: there's a view in some quarters that by calling it an arms race, you exacerbate it. Now, I think this is actually very, very overplayed, if you just look at what the main players are doing, right? So Beijing has put in, like, the Bank of China has invested the equivalent of about $140 billion, a trillion yuan, into AI infrastructure domestically. This was just after the Stargate project was announced, which itself was being referred to as the Manhattan Project for AI that the U.S. government was launching, even though it's not a government thing, it's a private thing. So that's one dimension of it. People are actually using this language, and it's hard to imagine either country doing something
Jeremie:
That much differently on the acceleration side. There's stuff they would do differently on the security side if the governments institutionally believed this is where we're heading. And I don't think people are dialed into that yet. The world looks really different the minute you say, let's stop just calling it a Manhattan Project and see what it would look like if we actually believed that was what we were doing. If that was actually what we were doing, all of a sudden society just looks very different.
Ryan:
[13:20] So it feels like we're getting closer to that moment. And Ed, you mentioned a switch that might be flipped at some point in time. Give us an example of what kind of thing might flip the switch to like full out race on AI and to turn this into something like, you know, the atomic, the nuclear kind of energy projects of the 1940s.
Edouard:
Yeah. So I'm referring a little less directly to a switch that flips us into, necessarily, now-we're-in-full-arms-race mode, although potentially that, and more just around a regime change around who understands, and is seen to understand and believe, this stuff. So what I mean is, right now, it's actually been so fascinating to watch this evolution in government. Previously, let's say 18 months ago, end of 2023, Sam Altman was getting fired, that era, right?
Edouard:
You'd talk to folks in government and you'd find a few individuals who were AGI-pilled and who were like, yeah, so, like, I see what's happening, I believe this stuff, but I don't feel like I can really come out with it directly, because nobody else believes this, and it's, like, career suicide or whatever. As of the end of last year, now you had this stuff happening with entire offices. So, like, you'd get on a call at, you know, there's an office in the Department of Energy we spoke to, and I'm not going to name it, just because. But we sat down, and there's, you know, half a dozen folks on, and they're all like, yep, we've read Situational Awareness. Yep. This is coming. Uh-huh. Yep. And it was, okay, there's just no barrier to jump over. They just all get it. Cool. And so now you have, like, within an office, this consensus starting to form. And so what we see is kind of these expanding bubbles in government that are kind of touching each other. And so when you think about this, it's exactly like this analogy of how water boils, right? It's like these little pockets of gas that are starting to form. Eventually they merge together into these big things and it starts to boil over. And so we're kind of watching that. The government's starting to boil a little bit on this issue, and these pockets are kind of forming and merging, and
Jeremie:
[15:31] I think, you know,
Edouard:
[15:32] Any number of triggering events could flip us over the line.
Edouard:
It could be the president or vice president goes, like, hey, this is fucking crazy, and we now believe this, like, explicitly, for real. It could be there's some incident. Like, we're talking about these models that can do biological attacks. Well, at a certain point, you're going to hit a capability level where the first mofo who wants to see the world burn can actually do that. So there's any number of possible trigger points. But certainly we're watching as, you know, the pressure starts to build internally, and tracking that closely.
Ryan:
Like, we're quite famously in the U.S., obviously, governed by, like, a lot of... it's a gerontocracy, right? So it's a lot of baby boomers, a lot of older folks that are just kind of getting familiar. And when I read papers like your superintelligence report, or AI 2027, even for me, and, like, Josh and I, we are frontier tech, right? Like, we've seen so much of this, like, we are engrossed in it. But even for me, it sounds and feels sci-fi. It doesn't even sound believable. So I'm picturing somebody who's kind of in their late 60s, and they pick up some of these reports and they start reading the literature, and they're like, you're telling me by 2028 we're gonna have, what, like, robots creating robots, with an artificial general intelligence we can't control? Come on. I mean, that's sci-fi stuff. That's fantasy land. That's not real. Like, it's got to be really hard for these people to
Jeremie:
Believe. It is. And in some ways that's the most dangerous part. So age itself is definitely a factor, there's no question. But you do see people who are in their, like, 60s and 70s who just get it, and it's often the people who have been in charge of anticipating new technologies in the past. So they've seen, like, when the atomic bomb was invented, right? Like, come on. The day before, it was all these cute little... like, maybe you'd have a dam buster, right? Like, you'd make a thousand-pound bomb. People go, ooh. But then, like, all of a sudden, Hiroshima goes up in smoke.
Ryan:
[17:36] Zero to one.
Jeremie:
Exactly. Right. So there are some people who have been there. And by the way, you have to imagine this sort of thing playing out in the intelligence community, in the national security community, as new technologies that you and I have never even heard of pop up, and then countermeasures are developed and contingency plans are invented on the fly, over the last many decades. So there are people who've been flexing that kind of muscle over a long period of time. But one thing on the AI side that's especially, I would say, almost dangerous, or that can close people's minds: there's nothing more dangerous than somebody with, like, a master's in AI from 2005.
Ryan:
[18:18] Before the scaling era kind of thing?
Jeremie:
Yeah, exactly. Or in some cases, yeah, before neural networks really were a thing, or, I mean, that's maybe to say the same thing. But yeah, you'll often run into these people who, because just about nobody had any AI background back then, they're viewed as the local gurus in the office when it comes to the AI thing. And their identities are wrapped up in it. And they've lived through, like, one, two, three, maybe, AI winters. So they've seen the boom-bust cycles and they're just ready to tell you. And maybe they got excited just before the first AI winter and they had to refactor, and they've seen it happen a bunch of times. So they're sort of default pessimistic. Ed and I often talk about this in the context of angel investing. Like, we built an ed tech company early on, and we've never invested in ed tech since, because we can see all the problems with it. Like, when you're so close to an issue, you're just like, oh, this is such a shit market, your users aren't going to like this, blah, blah, blah. The same sort of thing can happen when you're just a little too close to the technology, but not quite close enough that you're at the frontier labs. You see all the reasons for pessimism because you've experienced them. And anyway, that can come across in your outlook.
Ryan:
This is so funny because it's the same story coming from cryptocurrency. It's just like, the people least likely to understand that Bitcoin could hit a trillion dollars were the people who were, like, bankers and finance people and economists. They're like, it's never going to happen. We've already done money. We fixed it. It, like, already works. They were the naysayers who were sort of the experts in the room and, you know, got it wrong, largely.
Jeremie:
[19:49] Yeah.
Edouard:
Yeah. If you're too invested in the way things are, you don't see what's coming sideways for you. It's the first piece of the innovator's dilemma.
Edouard:
[19:59] You just don't see it. Don't believe it. You're comfortable. You've already won. And so why would you admit a picture of the world where your victory is meaningless?
Josh:
Okay, so I want to take us back to the scoreboard, because I'm really excited to see kind of how China ranks relative to the United States in a few different categories, the first being AI models. I think, Jeremie, it was you who said... or no, it was the Anthropic team that said they can't promise that Claude can't be used for a bioweapon. I know last week, with o3, someone found a zero-day vulnerability in the Linux kernel using that model. So they're already so powerful, and so applicable to doing damage to other people in other countries. I wanted to ask you kind of who you think is winning. We have these two entities: we have the Chinese side, which is DeepSeek and Qwen and this whole basket of others, and then we have the United States, which is OpenAI and Google, and we have Anthropic. Who do you think is winning that race so far?
Edouard:
In open source, I think they're ahead. They're ahead, or we're neck and neck in open source. Like, our open source champion is Meta, and, I mean, they've stumbled in the last few months. So just one stumble basically sets you back. That's how tight it is. In terms of the closed source stuff, it's a little more complicated by the fact that, yeah, they're just stealing our stuff all the time. So we might be, and probably are, ahead in native development, because we have more facilities. We just have more data center space, more clusters, higher quality chips. There's a lot of infrastructure that's been built out for this race by these private entities that know what they're doing, at least from the pure, you know, data-center-scale side, at that level. But, you know, is it a victory if your adversary can just, like, yoink the medal out of your hands right after you finish the race? Like, I mean, it's kind of meaningless. There is a
Jeremie:
Dimension to this too. And this is with the inference time scaling laws that have come out, right? That did change things in a very significant way. So imagine the history up until just pre-o1, right? So, you know, six months ago.
Jeremie:
What you have is essentially everybody scaling up training time compute. So, make a better and better text autocomplete system is roughly the order of the day. How am I going to do that? I'm going to make a bigger and bigger cluster. Everybody had more or less maxed out the engineering capacity to make use of whatever chips they had. So OpenAI is using their chip fleet to the max to do pre-training, right? So is Baidu, and so is Tencent. So is everybody, right? So you're kind of getting this apples-to-apples comparison, on a fully saturated fleet basis, of the U.S. and of China.
Jeremie:
[22:41] And now all of a sudden, inference time compute comes around. And what happens? Well, you've got everybody looking over their shoulder going like, okay, I need to find a way to use a million GPUs to do inference time compute. It's a fundamentally different problem from a data flow standpoint, from a networking standpoint. All like the bottlenecks that were the bottlenecks for pre-training compute all of a sudden aren't. New bottlenecks take their place. It's a new problem. And so now you have to answer the question, if you're Anthropic, if you're Google, if you're Microsoft, if you're OpenAI, how do I make use of a million GPUs for inference time compute? And the answer is you don't. You actually start with 100 GPUs and then 1,000 and then 10,000 and you scale your way up. But the thing is, China also has 100 GPUs. They also have 1,000 GPUs. They also have 10,000 GPUs. They probably have 100,000. So until you kind of cross those OOM thresholds and saturate your current pool of inference time compute, you're not actually getting apples to apples. And so for this reason, I think it's reasonable to expect China to put together legitimate frontier models. And I mean frontier models this year until we finally get to the point where we saturate our respective inference time compute piles.
Jeremie:
Ours is bigger, so we should expect to saturate higher. But this is kind of, I guess, a mental model to have in your mind when you're thinking about who's ahead. There's going to be an illusion for a time in the coming months, very likely, that China is doing better than, in some sense, they are or could be. And when that happens, believe you me, the Global Times will be there to remind us all that China did it, and to extrapolate from that that export controls on chips are not working, because they are desperate to convince us of that. And this gives them that opportunity. It will be false when they say it. It is important that we recognize the falsity of that statement and keep to our export control strategy. But that's all, anyway, kind of a future prediction.
Edouard:
[24:30] That's a really good point. Yeah.
Josh:
For the people who are a little confused, would you mind explaining inference time compute? Because it is very important, and I think a lot of people don't actually know what you're talking about when you say that. So if you could just outline it for us, just so everyone understands why it's such a big deal.
Jeremie:
[24:44] If you could just unfuck the last five minutes of what you said. Yeah.
Josh:
Let's explain, like, what is inference time compute? Why was it such a huge breakthrough? Yeah.
Jeremie:
[24:53] So it's OK. So here's an intuition. I'm going to give you a
Jeremie:
[24:55] Test and you
Jeremie:
Have 100 hours to both study for and perform the test. It's up to you how you split up your time. The old paradigm of pre-training compute was basically saying, I'm going to spend 99.99 hours studying for the test, and then it'll take three seconds to do the test. Well, no surprise, like, studying twice as much in that regime isn't actually going to improve your performance. You're bottlenecked on your ability to think during the test. And so what the new inference time compute paradigm is saying is, well, what if we take these models and, instead of spending all our compute at training time, let's save some of that compute for inference time, on the test itself. And when you scale those two things together, at the same time, in a complementary way, it seems as if the sky may be the limit. We don't yet know, but the scaling curves do seem to just kind of keep going. That's the basics of that paradigm. So we've shifted to that. Doing that thinking on the test just requires a different compute fleet configuration, data centers that look somewhat different. The chips are wired together differently.
Ryan:
Users experience this with, like, ChatGPT and o3, right? When they ask a question, before it answers, it's kind of thinking, and you can read its thoughts, right? That's inference time compute at work.
Edouard:
Yeah, that's exactly right. So it's like giving you, you know, time to think before you answer, right? For certain questions, like, how are you doing today? You don't really need time to think before you answer. For other questions, like, what's the square root of pi? You know, if you give an answer right away, it's not going to be a very good answer. But the more you think about it, the better the answer is. And so that's the idea behind inference time compute. Give the AIs that freedom to decide how much they need to think before they answer.
Josh:
I'm curious to understand how you would contrast the approach between U.S. and Chinese labs. They're all kind of using the same technology, but they weren't always using the same technology. It seems that DeepSeek kind of, like, dropped a bomb on the whole AI sector, and they were like, wait a second, we could actually do this much more efficiently than the United States has been doing it. So I think traditionally people think the U.S. is throwing a lot of money at the problem. We're building these huge data centers. We have all the GPUs. Whereas China's kind of being a little more resourceful on the software stack. And I'm curious how you see this dynamic playing out, of Chinese software efficiency versus United States hardware dominance.
Edouard:
Well, so this is actually touching on Jer's point, which is that in the transition to inference time compute, we are at basically a hairpin turn, where we had these giant fleets of training stuff, and now we need to flip over to allocating a lot of it to inference stuff. During that tight turn, there is an opportunity for a DeepSeek, which may not have the gigantic fleets of training time or general compute that the United States has, to basically appear to come out ahead, or edge out ahead,
Edouard:
Through efficiency, and through the fact that the game is not yet being played at its maximum level until all those scaling checkpoints have been hit. And so, yeah, the DeepSeek team did incredible, genuine engineering work in finding efficiencies at the deepest level, like basically the bare metal of the chips. Like, here's how you allocate the compute across these. And they went in and, like, manually built kernels and all this stuff. So they absolutely got the maximum possible amount of juice from their gear that they could. And of course, U.S. companies took as much of that as they wanted and made their own setups more efficient. But fundamentally, it's a question of, like, where is your constraint? Where's your dominant constraint? And where is it most productive to allocate your resources? So for American companies, you are not really as constrained. At least, now we're kind of starting to hit power considerations on the mainland, in the continental United States. So that's starting to be a little bit of a limiting factor. But, you know, it's still scalable. And especially in the last few months, yeah, you could just keep ramming in more compute and keep piling up more stuff. The bigger the pile, the better it got. So that was just the way to do it. We had all the best gear. We had all the shiniest stuff. So just keep doing what works.
Edouard:
Whereas for them, because they were constrained on chips and had export controls, and, you know, some of the Huawei gear, which is now starting to come online, was just not on the horizon yet, they were still like, well, we just have this standing pool of stuff, and that's our constraint. So the only thing we can do is figure out how to maximize utilization of that standing pool of stuff. And so that's where the two different approaches came from. It's just, what are your limitations right now, and where is your engineering effort best spent?
Ryan:
[29:46] It feels like you guys are saying, for the AI models part, that the U.S. is maybe marginally ahead, but that's not a durable advantage, and they're ahead on the closed source side. Well, let me just ask this question, though, because we always talk about the U.S. as an open, free society, right? Silicon Valley embracing Linux and open source. Is it ironic to anyone else that the leading open source AI models now actually come from China? Like, why are they doing that? Is this just game theory: because the U.S. is leading in closed source, of course they'll pivot to open source? Because that feels like values that America, quote unquote, should sort of have. Like, we open technology, democracy, export it to the world,
Ryan:
yet we're on the closed source side.
Jeremie:
[30:28] What's up with this? Well, so, OK, there's open source and there's open source. When you live in a world of civil-military fusion, which is the official policy of the Chinese state, there is no such thing as a private entity. All entities answer fundamentally to the state and the Chinese Communist Party, meaning the Chinese military knocks on your door and says, I want them servers, or I want you to do this thing with them servers. You do the thing with them servers and you don't ask questions about it. In that world, open source takes on a distinctly different flavor. Now, what I'm about to describe is, I strongly suspect, not at all the case with DeepSeek R1, but it easily could be the case with R2, easily could be the case with V4 when it comes out, any DeepSeek products from here on out that are open sourced.
Jeremie:
[31:15] When you are looking at an agentic model, you can bake into that agent all kinds of proclivities and behaviors and tendencies that are advantageous to you. You can bake in responses to certain stimuli that are out of distribution, totally unanticipatable, if that's a word, by the end user, where if you get a certain kind of query, it causes you, the agent sitting on somebody's CPU, to go and, I don't know, log into their email and forward all the email traffic to some address in China. This is a kind of software that didn't exist before. It is intelligent open source software. So if I can get it running on as many computers as possible, and baked into that software is a desire to fundamentally do the bidding of the Chinese Communist Party or some apparatus, it's a really interesting tool. You may imagine wanting to get people used to using this tool, in part by deploying totally benign versions of it for a really long time, and getting increasing levels of buy-in. I'm not saying this is DeepSeek R1 at all, by the way, for a number of reasons, not least of which is that DeepSeek simply didn't draw the attention of the Chinese Communist Party until well after R1 was released. But this is one thing, right? It is a national strategic advantage.
Jeremie:
[32:30] The other is, to your point, Ryan, about recruitment. Is this a recruitment play? It is 100% a recruitment play. That is the only reason, modulo relatively uninspiring things to do with people building on top of their models. That's pretty much what Meta was doing too. It was a recruitment play, which is part of why they bothered gaming the LM Arena.
Edouard:
[32:53] Apart from Meta, I mean, every other company that started out being like, oh, we're open sourcing, then went closed source. That's just like the way it goes.
Jeremie:
[33:02] Okay.
Ryan:
[33:03] So are you also saying that when we say open source... traditionally, open source means you can view the source code, right? That's great. But these are different systems. This is like open source intelligence, which is different. And so if a future DeepSeek model is doing nefarious shit in the background, and we have open access to the weights, we won't necessarily be able to tell it's doing this nefarious shit, because that's still obfuscated from us. It's an interpretability type of problem.
Jeremie:
[33:32] Until somebody figures out how to map the structure and the parameter values of a neural network to its behavior in a reliable, robust way, we're flying blind, right? It's the same thing as, classically, when GPT-4 was launched, and then 10 or 12 months later people realized, oh, it actually can be helpful in finding one-day exploits. Who knew? Nobody knew. Nobody knew for a year, because you just have to play with it and find out. So it's sort of similar. There's no way, as you say, to go from here's
Jeremie:
[34:03] my model to these are the behaviors. This is what's been baked into it.
Josh:
[34:07] OK, so now that we're training these models, we need lots of GPUs
Edouard:
[34:10] And we need
Josh:
[34:10] Lots of energy. And this is the next topic that I want to cover: the importance of energy, because it feels like everything kind of comes downstream of that. And I read this disturbing stat the other week that every 18 months, China adds a United States' worth of power, at least over the next five years they're planning to. And that means that China will be producing about one in every three of the world's electrons, which seems like a lot considering the power demands that these new clusters are going to have. I know Stargate in Texas is trying to get to 1.2 gigawatts of power. That is enough to power over a million homes. So I want to ask you: are we actually getting our ass kicked in the energy war, or do we have a chance to compete with China when the power demands are so high for building these new clusters?
Edouard:
[34:53] On the continental United States... again, it comes down to, where is your constraint, right? What is your constraint and where is it? So for us in the continental United States, the constraint is power. For China, as you said, they're building tons of power, so their constraint is not power. Instead, their constraint is chips. And because they're fundamentally constrained on chips, Huawei is, of course, building their own natively developed chips, and they're doing a good job. So you can see, actually, they're leaning into their strengths. One of the things Huawei is doing is developing chip systems to compete with NVIDIA's Blackwells, not just at the chip level, but at the whole system level, with this idea that, hey, our chips can be crappier than NVIDIA's. They don't need to be as efficient. They don't need to give as many flops per kilowatt as NVIDIA's chips, because we just have tons of power. We're not constrained on power. So they can design against that constraint. But still, fundamentally, they're going to be constrained on the number of functional dies that get pumped out of their foundries for GPUs.
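The trade-off Edouard describes, system-level compute when you're power-constrained versus chip-constrained, can be sketched with a toy model. All numbers below are illustrative assumptions, not real chip specs:

```python
# Toy model of the two constraint regimes: deliverable compute is capped
# by whichever binds first, chip count or grid power.
# All figures are hypothetical, chosen only to illustrate the framing.

def deliverable_flops(num_chips, flops_per_chip, watts_per_chip, power_budget_watts):
    """Usable compute = chips you can both own and power, times per-chip throughput."""
    chips_powerable = power_budget_watts // watts_per_chip
    usable_chips = min(num_chips, chips_powerable)
    return usable_chips * flops_per_chip

# Power-constrained builder: efficient chips, scarce grid power.
us = deliverable_flops(num_chips=100_000, flops_per_chip=2000, watts_per_chip=1000,
                       power_budget_watts=50_000_000)   # only half the chips can be powered

# Chip-constrained builder: less efficient chips, abundant power.
cn = deliverable_flops(num_chips=60_000, flops_per_chip=1000, watts_per_chip=2000,
                       power_budget_watts=1_000_000_000)  # power never binds

print(us, cn)
```

Under these made-up numbers, the power-constrained side leaves half its efficient chips idle, while the chip-constrained side runs every inefficient chip flat out, which is exactly why each side optimizes a different variable.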
Edouard:
[36:02] So literally, that's going to be their dominant constraint. For us, power is the main constraint. There are ways around that for us. One of the things we've seen in the last few weeks has been the United States doing deals with Saudi Arabia and the UAE, because those countries have abundant power sources. And, you know, can you set up deals like that in such a way that the resulting facilities are secure, and that all the national security relevant considerations are observed, and all of that?
Edouard:
[36:35] The devil's very much in the details, but at a high level, hey, we need power. If we're not too fussy about it, yeah, we can... I mean, we can unfuck our regulations. It would be great to deregulate so that we can build power legally in the United States, and to some extent, we're trying to do that too. But in a pinch, we have allies that have gigawatts. Yeah. I mean, if one third of all electrons come out of China, well, two thirds are elsewhere. So we have friends elsewhere.
Josh:
[37:08] Is that the way to do it? Is it to tap into our friends? I'm trying to figure out where we went wrong. How does China have so much more abundance than we do? And how do we fix this?
Edouard:
[37:16] That is a long time coming, frankly. It's the fact that the explicit policy of the United States for decades, and this is not the fault of any one administration, it's just the way that a large amount of economic policy has been anchored, has been pretty much explicitly: the United States is going to be the consumer of last resort for the world, and China is going to be the producer of all the world's stuff, and we're just going to pay China our dollars. They're going to give us their stuff, and they're going to invest our dollars in treasuries, and that's how the wonderful cycle is going to work. And we're noticing now that there are a lot of problems with that strategy, one of which is handing over your entire industrial base to a likely adversary. That's a big problem. You guys mentioned earlier, how did World War II get won? Well, it was, who could light the most stuff on fire, right? Who could build the most stuff and light it on fire? That's how wars are won in the industrial age. So that's a problem for us. And of course, the world's most sophisticated chips are manufactured in Taiwan, within a high-risk zone where China can influence that significantly.
Jeremie:
[38:31] I think there's
Ryan:
[38:31] Maybe a distinction, at least I'm starting to learn, between energy and electricity specifically, right? Energy could be solar, it could be natural gas, it could be oil, and then there's the electrical grid specifically, which feels very much like the AI stack. I was reading a post from the economist Noah Smith, he's been on the podcast before, where he was talking about America basically losing what he calls the electrical age. And the basic narrative is: 19th century, the core unit of power was steam. 20th century, it was combustion.
Jeremie:
[39:11] Right?
Ryan:
[39:11] Basically, planes, aircraft carriers, oil, all of that. 21st century, the AI stack really depends on electricity specifically, because that's how you get your drones. That's how you get your electric cars. That's how you get your robots. That's how you get your data centers. That's how you get your superintelligence competitive advantage and ultimately world dominance, like power. And so he talks about three breakthroughs I'm sure some listeners are vaguely familiar with. For the electrical age, one is just rare earth metals, right? Those are key for movement of the AI in physical space. The other is chips, which we've been talking about, so silicon, and that's kind of the compute layer. And the third is lithium batteries, which is kind of storage. And then he goes through and shows U.S. electricity use as a percentage of consumption, and the U.S. is actually, I'm going to show this on screen, the U.S. is behind not only China but also Europe on electricity usage as a percentage of total power consumption. Let me share this chart for you guys here. It's right here. Right. And so, primary energy.
Edouard:
[40:23] Yeah.
Ryan:
[40:23] Yeah. Right. So, okay. If we are losing there, it's not just the energy piece of the stack. It's also the electrical grid piece of it. And it seems like we're also maybe losing here. Have you guys studied this at all? What do you think about this?
Jeremie:
[40:40] Yeah, this is actually... so when you go to build a new data center, your first question is, where can I find a spare gigawatt or two of, I'll say power, but they mean electricity, right?
Jeremie:
[40:54] And that pulls in a whole bunch of questions around transmission lines and critically,
Jeremie:
[40:59] Generators and transformers and transformer substations. So when you look at your average transformer, where is it made? The answer is not in America. In particular, it's almost impossible to find a transformer that does not have Chinese-made components. Now, the problem with that is that we know historically that China has used transformers as backdoors. There's actually a recent story out about backdoors that were put into Chinese-made solar cells, sort of a similar spirit, right? If you're China, you would be insane not to take some of these measures, knowing the importance of the electrical grid well before AI was a thing. There are all kinds of critical failures you can induce if you have access to the right components in the system. So not only do we have a capacity problem in the sense that there just aren't enough transformers; these things are on multi-year backorder, right? A transformer will set you back, depending on the context, anywhere from six to 24 months. So you could be waiting two years, and an awful lot of builds are just bottlenecked by their ability to get their hands on transformers. And the same is true for generators, where you're going essentially from natural gas to electricity, whether that's behind the meter or in front of the meter. The grid right now is in a pretty sorry state, and the supply chain for the grid has all these beautiful access vectors that China in particular enjoys.
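The bottleneck Jeremie describes can be framed as a critical-path problem: a data center build is gated by its longest-lead component, so a 24-month transformer backorder dominates everything else. The lead times below are illustrative assumptions, not sourced figures:

```python
# Toy critical-path view of a data center build: if all procurement starts
# in parallel, the schedule is gated by the longest-lead component.
# Lead times (months) are made up for illustration only.

lead_times_months = {
    "GPUs": 6,
    "transformers": 24,   # multi-year backorder, per the discussion
    "generators": 18,
    "construction": 12,
}

bottleneck = max(lead_times_months, key=lead_times_months.get)
earliest_completion_months = max(lead_times_months.values())
print(bottleneck, earliest_completion_months)
```

Shaving months off any other line item changes nothing until the transformer lead time itself comes down, which is why the discussion treats grid components, not GPUs, as the binding constraint onshore.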
Jeremie:
[42:24] Yikes.
Ryan:
[42:25] Okay, so for folks keeping track at home, we talked about AI labs and models, and maybe the US is ahead in that area. Now for energy and electricity, it feels very much like China's ahead, although maybe we can get our hands on some energy sources and sort of catch up. Let's go to the next piece here, which is, you've alluded to it, Ed, earlier: compute, and the compute stack with chips, GPUs, and supply chains. I think your superintelligence report calls supply chains America's Achilles heel. Okay. And some context here: did you guys read that book that came out recently by Patrick... I can't think of his last name. Patrick McGee. He was talking about Apple in China, going through the last 20 years of Apple and documenting how Apple methodically outsourced its entire supply chain to China, how Apple is basically a Chinese company. It has 150,000 engineers in the U.S., but 3 million of its workforce are actually in China. The entire assembly process and the know-how, most importantly the capability and competency, is just in China, and there's no way to replicate it in America or anywhere else in the globe, at least without a decade of work. Okay, so supply chain, America's Achilles heel: what's the story here?
Edouard:
[43:49] Yeah, so that Apple example is really good and kind of telling. That's the Tim Cook legacy, unfortunately, and while they're semi-scrambling to shift that over to India, at least to some extent, as you said, these are things that take decades to transition. These are gigantic, unimaginably huge human systems. And the way that ended up happening is, I mean, Tim Cook was a supply chain guy. He understood the cost of holding inventory. Inventory famously is basically just a dead weight on you if you're operating a business. That's why you do just-in-time inventory, where your stuff comes in right as it's needed. The problem is, that means you're not keeping any buffer for a rainy day. And so this is fundamentally a purely, narrowly economic view that asks, what are the dollars and cents, what's on the balance sheet, and that basically assumes, as the underlying bedrock assumption, that we have a global system that is rules-based and stable that we're operating on top of. That's where all of this comes from.
Jeremie:
[45:01] You can't
Edouard:
[45:03] You would not operate this way if your view was, hey, all kinds of crazy stuff could happen. Pirates could hijack my ship, Houthis could blow up my supply routes, whatever, right? These razor-thin margins around everything are something you only get
Edouard:
[45:24] Only when you have this underlying assumption that the world is stable and safe and will never change. And this comes back to us outsourcing our supply chains to China. The truth is, holding inventory is expensive. Manufacturing is kind of a shitty industry if you're in it. I know a bunch of people who are in manufacturing, and all their mentors are like, what's my advice? Get out of manufacturing. Hardware is hard. It's just a shit business. And if you're one country taking on one piece, if you're Vietnam taking on some piece of the supply chain, and Cambodia doing some other thing, and Mexico doing a third thing or whatever, then that might be okay. Because then, as America, you at least aren't too dependent on any one provider. It's similar to being an integration point in a value chain. Say I'm Windows: I'm this one big integrator and I have a zillion apps serving me, so I actually have all the power, even though there are a lot of individual apps that I need collectively.
Edouard:
[46:34] That might be a good structure. The problem is when you literally hand over to a single country the power to make all of your physical stuff. I mean, that gives that country a heck of a lot of power over you. And that is potentially enough power to challenge the very assumption that the system was built on in the first place.
Jeremie:
[46:57] There is a dimension of this that's a bit more of a good news dimension on the supply chain side, and it has to do with chips. So TSMC, Taiwan Semiconductor Manufacturing Company, obviously, they make all the chips. When you think about semiconductors, you're thinking about a whole bunch of people in a giant factory who are tuning dials. And there's like 500 dials, and if you fuck up one of those dials, your whole thing is useless, right? So, 500 dials, go do it, send a bunch of PhDs to do it. You start by making, very roughly, to abuse terminology slightly, chips down to a resolution of seven nanometers, and you work on it diligently for years. You're like, oh my God, we got there. Okay. Now you move on to the five nanometer resolution, the five nanometer process node, then the three nanometer process node, then the two nanometer process node. And the entire business of semiconductors is this painful, arduous, excruciating stepwise process of climbing down that ladder. And that ladder, by the way, started at many microns, right? So we've been stepping down. That's what Moore's law is: stepping down the resolution ladder. So, okay, the question is, where are we today? Taiwan Semiconductor Manufacturing Company is the only place in the world that is on track to have a two nanometer process.
Jeremie:
[48:15] Now, typically, the two nanometer process, whatever the leading node is, the most advanced one, that all goes to the iPhone. Historically, that's how that's worked. So Apple comes in, they go to TSMC, and they say, we're buying up all of your shit at the highest resolution. The reason they do that is that iPhones are small, so they want it small and they want it fast. That's the idea, right? That leaves the next node up for AI chips. That's usually how it works. So you have Apple de facto subsidizing TSMC's latest node development, which is a critical competitive advantage. That's why TSMC has the moat they do. Nobody else has a giant partner coming in and saying, hey, we'll spend tens or hundreds of billions of dollars every year to help you develop that next process, and guarantee you a buyer. But then the next node up is usually NVIDIA, AMD and all those folks.
Jeremie:
[49:02] So the question is then like,
Jeremie:
[49:04] Okay, what can we fab onshore today? What can America do? Well, TSMC has just opened up some massive fabs in Arizona, Fab 21, and there are others coming online that are positioning to be able to do ultimately the three nanometer process, starting with the five nanometer process. And they're already seeing surprisingly good yields. So this means the five nanometer process and its variant, the four nanometer process, can be used to make H100 GPUs, which as of like 20 minutes ago were the cutting edge ones. That's rolling over into the Blackwells. But the bottom line is, we actually are doing what many would have thought impossible just 12 months ago. These yields are serious. This is a serious process. The Taiwanese government used to have a law that prevented TSMC from fabbing their leading node off-island, and that's now been voided. That's no longer the case. So that's a really big change. The other piece: TSMC is well ahead of its Chinese competitor SMIC, which, by the way, was founded by a bunch of people who stole a bunch of IP from TSMC. There are famous, fascinating lawsuits about this from the early 2000s. But anyway, what's happening now is, as NVIDIA muscles in on the market, they're actually able to compete with Apple for the leading node,
Jeremie:
[50:22] Kind of displacing them, which means we're going to see an interesting node skip phenomenon, where suddenly, you know, we used to fab at four nanometers, and now it's like, oh, we're skipping three, we're going straight to two nanometer fabrication. So we'll see that big lift. Meanwhile, the Chinese fab SMIC is sort of stuck where they are. They're already using their leading node for AI and have been for a while. So there's going to be this nice little bump in this domestic fab story, and that's going to be very helpful for supply chain security. But there are other things that are not as good, including the whole ASPEED BMC story, which is maybe in the weeds, but is actually pretty freaking concerning independently.
Josh:
[51:03] Yeah, well, these are at least encouraging signs that we are taking steps to bring stuff back onshore. I know I saw NVIDIA's Blackwell chips are being made in Arizona now, I believe it is, somewhere in the United States. And I'm wondering, how long does it take for us to actually get that chip fabrication working here at that two nanometer scale? Because even though they're assembling the GPUs here in Arizona, a lot of those wafers are still being produced in Taiwan. Is there a world in which we can wean ourselves off of that reliance before we reach this critical mass of AGI that happens in the next couple of years? Is there meaningful progress we can make before we hit this breakout point?
Edouard:
[51:41] Yeah, we're definitely making some meaningful progress already, as you've noted. The challenge, of course, is how much of that chain, right? You're trying to bring all the nodes onshore at once. So the fabbing, yes, we're able to do that here increasingly. But there's also packaging, the next step. After you actually fabricate the die, the thing that does the logic and actually processes the stuff, you need to package it with memory, right? You need to package it with input/output. You need to actually put it on a substrate, so that it's not just a bare brain: it has memory, it can communicate with other things outside of itself, and it has all these little subcomponents. And so that packaging capability is also being onshored. But there are a lot of components, right? The wafer piece, I think, is one of the challenging pieces. But there is absolutely a serious effort being made to bring all of that onshore. And the Taiwanese government is cooperating to a great extent in that effort, which is
Jeremie:
[52:52] I mean, which is very good of them,
Edouard:
Given that their ownership of that technology is a significant geopolitical advantage.
Jeremie:
[53:01] The time it takes to get to that next process node in Arizona is, as far as I know, still an unknown, right? They have their 500 dials, and the dials for the Arizona fab are different from the dials for the Taiwan fab. In fact, making a new fab is this notoriously, legendarily difficult thing, to the point where Intel, back when they were actually relevant for chips (sorry, Intel), had this famous policy called Copy Exactly. When they made a new fab, they would literally copy every detail, down to the color of the paint on the walls in the bathroom, because they did not know what was working in the precious, pristine fab that, thank God, was actually pumping stuff out. So it's sort of similar here. It's anyone's guess. I'm sure TSMC has a much better sense, but it's certainly not, as far as I know, public at this point.
Ryan:
[53:53] Okay. So while we are losing on the supply chain when it comes to manufacturing things like the iPhone, because TSMC is based in Taiwan, some of that know-how is coming to the US, and when you look at the numbers, it does feel like on this dimension the US is ahead with respect to its chip supply. We have access to more and better chips than China at this present state, and have previously. You guys mentioned export controls, which also help with that. Just curious: how sustainable, how durable do you think that chip advantage actually is right now?
Edouard:
[54:28] It probably eventually erodes, though not too soon. I would say we have a few years. But you always have to look at the whole system, right? It's challenging to look at individual verticals; you have to look at the whole system. And the key thing is always: what's the dominant constraint on the system at any given time, in any given regime? So think about chips on the Chinese side. Okay, we might be winning on chips. Let's say it's 2027, 2028. We're still winning on chips in that we have more efficient chips at higher volumes.
Edouard:
[55:16] Even though we have that two nanometer process or whatever, and we have the Rubins, which come after the Blackwells, and those are super awesome chips, maybe their chips are worse, right? But maybe they're able to make more of them. And maybe they have so much power available that they're able to overcome that constraint on chips. And perhaps they're also able to smuggle in and onshore meaningful amounts of our chips, and divert that way. So when you're in this kind of competitive game, nothing is static. Everyone's trying to find an angle. Everyone's trying to edge in front of each other. And so they might find a strategy that just gets around a constraint that we thought was fixed. So, all I have to say,
Jeremie:
[56:02] I think we,
Edouard:
[56:04] We likely remain ahead on that front for at least two or three years, if I had to guess. But then, of course, it's all moot, because the final circumvention is that they're still stealing all our shit. So until we have good or adequate security across the board, for models and in other domains, they can just cheat at the very last step.
Jeremie:
[56:30] It's like very diligently guarding the duffel bags full of money as they're hurled into the robber's car. Yeah, I mean, that's part of it. But also, I think it's interesting, if you're going to predict the future, you also have to go down each vertical and think about, okay, what actually might happen here and here? Because then there are interactions. And in that respect, if you look at the chip story and how we're going to be positioned there in three to five years, one of the key inputs is photolithography machines. There's no escaping that. It's funny, in this whole supply chain there's always one monopoly at every level of the stack. For fabricating the chips, it's TSMC. They just dominate. But if you're actually going to make one of these chips, you take a piece of glass, I'm going to very roughly sketch this, and you shine a laser onto that glass to print the pattern of the circuits that you want onto it. The laser that you use to shine those circuits is exquisite. Exquisite doesn't even begin to describe it.
Jeremie:
[57:39] Back in the old days, people used DUV, deep ultraviolet lithography. This was 193 nanometer light that would shine onto the wafer, and, anyway, it was a very expensive system with a whole bunch of mirrors and all that jazz, which is fine. These things cost hundreds of millions of dollars.
Jeremie:
[58:02] And that's the frontier of what China can use today. Then there's a Dutch company called ASML, the other kind of monopoly, at the photolithography layer, the laser level. Essentially, they have this thing called extreme ultraviolet lithography. And to think about an extreme ultraviolet lithography machine, just put this idea in your head: you have droplets of molten metal, tin, being shot out of a nozzle. You fire a laser pulse that's timed so that as this tin droplet is flying along, it gets zapped by the pre-pulse laser to get flattened out, and keeps flying. It's now flat like a mirror. Why like a mirror? That's important, because now a CO2 laser is going to hit it, excite the atoms, and fire off basically an X-ray that you're then going to collect through 11 mirrors. And by the way, those mirrors are not lenses. So that's a pain in the ass for optics, legendarily a pain in the ass, because usually you use lenses to focus shit in optics,
Jeremie:
[59:04] You're just using mirrors this time, because at essentially 13.5 nanometers, which is the insanely high energy wavelength of light they use, all optics are super absorbent. You would never get anything out of the system. So now you have all these mirrors. By the way, the other thing that's absorbent is air itself. So now you're working in a fucking vacuum, because that's the only thing that won't absorb your 13.5 nanometer light. So you've got 11 different mirrors organized just so, you've literally got mirrors with holes in them so that beams can pass through to the mirrors on the other end, and it's a giant pain in the ass. And even the frigging thing that you're imaging is itself a mirror. The pattern of the circuit that you're imaging is a mirror. So this is like an impossible thing to do, right? China is not going to replicate that at home. It's not going to happen. But there's a question of how far you can go with the old generation deep ultraviolet lithography before you're forced to go EUV. And that's going to be one of the key constraints. If we can keep them on DUV, that probably does mean they only have a couple of generations left, because what they're forced to do right now with their shittier DUV machines is take the same chip, pass it through a bunch of times, do this thing called multi-patterning, which slows down their output. They've got to go through three or four times just to get the same level of resolution that we get with one pass. Eventually that shows up in your yields and your unit economics, and it becomes untenable. So the question is when that actually starts to kick in. But anyway, sorry, that's like my... it's so
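The multi-patterning penalty Jeremie describes can be made concrete with a back-of-envelope model: extra passes split tool time and compound defect risk. The pass counts and per-pass yield below are illustrative assumptions, not real fab figures:

```python
# Rough unit economics of DUV multi-patterning vs a single EUV pass.
# Per-pass yield and pass counts are made-up numbers for illustration.

def effective_output(wafers_per_month, passes, yield_per_pass):
    """Each extra patterning pass consumes tool time and compounds defect risk."""
    throughput = wafers_per_month / passes      # tool time split across passes
    overall_yield = yield_per_pass ** passes    # defects compound per pass
    return throughput * overall_yield

euv = effective_output(wafers_per_month=10_000, passes=1, yield_per_pass=0.95)
duv = effective_output(wafers_per_month=10_000, passes=4, yield_per_pass=0.95)
print(round(euv), round(duv))
```

Even with generous assumptions, the four-pass line gets only a fraction of the good wafers out per month, which is the "shows up in your yields and your unit economics" point: the cost per good die climbs with every pass until the approach becomes untenable.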
Ryan:
[1:00:26] It's crazy, because this is an example of just how little we have been thinking about this for the past three decades or so. You get your iPhone and it's in a nice package, and no one thinks about all the shit that went into making that thing happen. But when you dip into the supply chain, you need to. There are all of these intricate pieces that only one company does, with a handful of specialists. And you realize how brittle it really is, and how amazing it is that any of this
Ryan:
stuff works in the first place. Let me ask you guys about another dimension that might come into play. So, with China and their ability to manufacture consumer devices with chips at an unprecedented scale, it seems like that could come in handy as AI pivots to the physical world, the world of robots, right? We're already seeing this with drones. But eventually, you're going to have the application of AI in sort of meatspace, right? Physical power, monopoly on violence. This is what nation-states do. So you can imagine robot armies, swarms of drones, autonomous fleets, all of this kind of thing. It feels like, of all of the things we've discussed so far, perhaps this is where the U.S. is positioned most poorly. We're not quite in this era yet, but you can imagine the wars of the future being fought with robots, essentially. Where are we going to produce those robots? Have you guys considered this story, and how does it look for the U.S. versus China?
Edouard:
[1:01:55] So, I mean, the timescale of that is likely beyond, or could be beyond, the kind of AGI or software-AGI timescale. But it's absolutely relevant, and it's becoming more and more so. The thing is, robotics benefits from iteration just like any other hardware. And iteration in hardware benefits enormously from being able to go down the street and get a part machined at 3 a.m. for 12 cents. And the only place where you can reliably do that is in Shenzhen. So yeah, America was the first to invent humanoid robots that actually kind of worked. But are we going to be the first to scale them and perfect them and run these iteration loops? Well, show me a place in America where you can get a fucking servo motor
Edouard:
[1:02:53] machined at 3 a.m. for 12 cents. That's the thing about having a gigantic industrial base all smushed together in the same geographic location. It's a huge advantage for all of those things. And so, yeah, we personally have major concerns about this, because,
Edouard:
[1:03:13] Like even, you know, even for Optimus, the components are Chinese-sourced, and that impacts iteration velocity. And of course, Elon and his companies try hard to internally source and build as much of their own stuff as they can, but at a certain point down the supply chain... And this is an intentional strategy. As we mentioned, we basically handed over our industrial base. But also, to their credit, strategically, the CCP and the Chinese state understood the advantage we were handing to them and doubled down on it. There was a Made in China 2025 initiative, part of the five-year plans that Xi Jinping enacted. And that has worked. You look at the Amazon stuff you get delivered at home: it's all made in China. They are trying really hard to make that happen. So it takes a pull to reverse that effect. And yes, you look at humanoid robots, you look at drones, you look increasingly at how wars are going to be fought, and this becomes very, very challenging.
Ryan:
[1:04:20] Let's talk about culture and talent as another dimension here. I saw this chart recently of different countries, plotted on the percent of the population that is nervous about the advent of artificial intelligence versus excited about it. And you could see that most of the Western, Anglosphere cultures sit in an area where between 60 and 70% of the population is not very excited and very nervous about AI and its effect on institutions and everything in society. Whereas in Asia and China, the vast majority of the population, 70 to 80%, is more excited. And there's lots of analysis for why this might be, like the differences between decades of relatively 1 to 2% GDP growth versus what China and many of the Asian tiger countries have had recently. But I want to ask a question about U.S. culture versus Chinese culture with respect to building, entrepreneurship, dynamism. That's something the U.S. has prided itself on, and yet we have a public that is now skewing a bit more technophobic, a bit building-averse. How do you think this stacks up in the war here?
Jeremie:
[1:05:38] I think the United States is really big. To Ed's point about the government, once you zoom in on San Francisco, I think the numbers start looking really different. And to Ed's earlier point about Shenzhen, what you can co-locate is usually what drives outcomes, or at least it's a big factor. And to the extent that San Francisco is the city of techno-optimistic, AI-accelerationist people, which it disproportionately would be, I would assume that if you did a poll of tech-savvy folks in the Bay Area, it would look very different. So there's a question of which America is competing with which China, which part of America with which part of China, as a sort of sub-question. But there is this broader issue of America having really shit education. And more generally, we've historically brought in a lot of the best founders in the world. Famously, you look at the Fortune 500: how many of them are run by Indians, right? In fact, pretty much all the frontier labs are indirectly owned by, or at least have very deep partnerships with, entities that are run by South Asians.
Ryan:
[1:06:47] You guys are Canadian too, right? Imported into the U.S.? Yes. Yeah. I can say this because I am also Canadian, living in the U.S., right?
Jeremie:
[1:06:56] Yeah, exactly. And it's the obvious choice, right? This is the decision that we had to make: where do you build your professional life? The U.S. is the source of capital, but also of talent and techno-optimism, certainly relative to Canada, in the Valley. It's not that we were thinking, oh, well, let's move to rural Ohio. It was: let's focus on cultivating the San Francisco ecosystem, we'll go and do Y Combinator, we'll do that stuff. And that's where you find the kinds of people you look for. But recruitment is a separate issue, right? Where are you going to find the fresh talent if broadly the entire country leans a certain way? That is a problem. It's a separate question as to what reflects the underlying reality of where this stuff could go. It would be a shame, but it is possible that reality will end up bearing out the American attitude overall more than the Chinese one.
Ryan:
[1:07:55] But would you say, though, talent advantage to the U.S. right now relative to China?
Jeremie:
[1:08:01] Overall, I would guess a slight talent advantage to the U.S. in certain areas, like model architecture. These are sort of getting less and less separable, but at the compute level, at the sort of engineering level, you might actually have a moderate advantage for China. One of the big challenges is that you've had a lot of American researchers move over to China recently. You've also had a lot of collaborations, very naive collaborations in my opinion at least, and Ed, you can disagree if you think otherwise, from the earliest days between American institutions and Chinese institutions, in the classic let's-get-them-to-liberalize-by-sharing-the-goods type of philosophy.
Ryan:
[1:08:43] I'm curious about structural differences between our governments, right? So you've got China, which is sort of centralized, state-sponsored, national capitalism, something like this, and America, which is sort of traditional US capitalist democracy. Does China have a structural advantage, like maybe a coordination advantage in this world? Or does American dynamism sort of, you know, win the day here?
Edouard:
[1:09:07] Yeah, that's a really good question. Dictatorships and authoritarian countries often have advantages in velocity, at least when the situation is not obviously critical to everyone in the democratic context. So authoritarians can move fast. The thing that undoes authoritarian regimes in the long run, of course, is that eventually the leadership ends up stewing in its own juices in terms of its information environment. Nobody wants to tell them no, and they eventually get into a space where the inputs they receive are not trustworthy, and they're making decisions based on those inputs. So they may end up moving very fast, but in the totally wrong direction. One mini example of this: Putin invaded Ukraine when all his equipment was actually super fucked up and not being properly maintained, because none of his guys wanted to tell him how fucked up the situation was. So they had to find out by slamming into reality headfirst. That's one of the issues with dictatorships. Now, the risk for us is
Edouard:
[1:10:21] that dictatorships and authoritarian regimes, if they're aimed in the right direction, if they actually are, you know,
Edouard:
[1:10:30] genuinely good at aiming themselves, can in short bursts outcompete democracies
Edouard:
[1:10:36] that are not alive to the threat they're facing. And our adversaries know history, right? The Russians and the Chinese know that Pearl Harbor happened, and what happened to Japan. They know 9/11 happened, and what happened after that. They know not to give America a Pearl Harbor moment, because historically you have a bad time if you give America a Pearl Harbor moment. So why would they do a big, overt, triggering thing that gets us all aligned and rowing in the same direction? It's so much more effective for them to keep eroding our information environment, to keep doing things like financing... I mean, this is actually a known thing: many of the groups that protest major infrastructure projects, power plants, things like that, in fact the majority of them, are through some channel or another financed by our adversaries. And this is a really, really thorny issue, right? It gets right at free speech, and they're aiming right at our top value. We have an open society, a society where it is healthy and good for a person to be able to say, hey, don't build this nuclear power plant in my backyard. That makes me uncomfortable.
Jeremie:
[1:11:58] Yeah.
Edouard:
[1:12:00] The problem is when that voice gets amplified unnaturally by adversaries' money, with the intent of eroding our ability to build these infrastructure projects domestically. That is actually a huge part of the problem: this is happening, and adversaries are intentionally throwing sand in our gears this way. Incidentally, the way this works is not that they make up a cause and try to push it, because that tends to fail as an information warfare strategy. It comes off as fake and doesn't actually get people rallied. Instead, they take genuine causes that are small, and they put money and resources behind those causes, obfuscated of course, and amplify them. And it's never "Russia is backing me"; no one would go along with that if they knew it. Instead, it's like, oh, there's a millionaire who's sympathetic to this small-town environmental cause who's really generously giving money, and blah, blah, blah. But standing behind that millionaire are the usual suspects.
Josh:
[1:13:02] What does the manifestation of that look like? What's the type of attack vector? I guess you're kind of referring to information warfare.
Edouard:
[1:13:08] I guess what I'm referring to specifically there is lawfare. So lawsuits to try to slow down infrastructure projects.
Josh:
[1:13:15] But does that also mean... Because scrolling Twitter every day, I see a lot of bots with very extreme opinions. Is that someone like China, with malicious intent, trying to sway public opinion?
Edouard:
[1:13:25] The ones you notice to be bots are just the ones you notice, right? It's the bots that you don't notice as bots that are the most concerning. And of course, there are zillions of those, probably by now more of those than there are real humans, simply because: how do you tell the difference? You can't. Bots are as good as humans at talking on Twitter.
Jeremie:
[1:13:48] Yeah. Josh, I read your point as being: okay, how do we get a sense of the scale of this in the bot universe? What does it actually look like?
Josh:
[1:14:00] How do I know when there are bots messing with my head, shaping the way I'm thinking?
Jeremie:
[1:14:05] You don't, is the short answer. But an interesting historical footnote is to ask: when did we know? When was the last time we had access to any kind of ground truth about that, right? One indication: if you go back to about 2021, the Russian Internet Research Agency, which famously is credited with the interference in the 2016 presidential election, hired thousands of contractors who worked around the clock, all that jazz. The Internet Research Agency would use thispersondoesnotexist.com. This is a website where every time you refresh it, you get the face of a new person, a person who does not exist. Well, thispersondoesnotexist.com once used to produce images that could be detected as fake. And there are studies, I forget what the numbers look like, looking at the number of these sorts of profiles that were revealed. This was an active thing back then. So you can imagine there's infrastructure, and here you really should imagine software infrastructure: beautiful apps, right, with nice GUIs, there for the operatives to just spawn a bunch of agents. And in that universe, yeah, I mean, I don't know.
Edouard:
[1:15:24] There are also ways of influencing not through bots, right? You guys have maybe heard of this phenomenon called audience capture, where you have a big, major figure on a social media platform, and they initially started out posting interesting, thoughtful, balanced, carefully considered things. But then over time, as your audience scales and you get hundreds of thousands, millions of followers, you kind of get drawn to the lowest common denominator psychologically. And it's not a question of them deliberately manipulating. It's more like: oh, when I post about this, I get more likes and follows and reposts. Okay, cool. And so you just get trained, like a Pavlovian reflex, to post more of that stuff.
Edouard:
[1:16:12] You can imagine that process now being increasingly weaponized with human-like bots. For a major figure on a social media platform, it's not that these accounts are so overt as to say, hey, you're not posting enough about how China is awesome. But, you know, you post a little something about how, this Chinese New Year, it's great to celebrate the achievements of Chinese Americans, which is legitimately awesome. But that just gets a lot more likes than an equivalent post about some other demographic would. And so you kind of just, slowly over months and years, inch your way towards the kinds of views you're being pulled towards.
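The drift described here, small differential rewards compounding into a big shift in what gets posted, can be sketched as a toy reinforcement loop. Everything in this sketch (the 15% engagement boost, the learning rate, the horizon) is an invented assumption purely for illustration, not a model of any real platform:

```python
# Toy model of audience capture: a poster's topic mix drifts toward whatever
# earns slightly more engagement. All numbers are invented for illustration.

def simulate_drift(boost: float = 1.15, steps: int = 2000,
                   learning_rate: float = 0.01) -> float:
    """Two topics start with equal preference; the 'boosted' topic earns
    `boost`x the likes. Each round, likes reinforce posting more of that
    topic. Returns the boosted topic's final share of posts."""
    weights = {"neutral": 1.0, "boosted": 1.0}
    reward = {"neutral": 1.0, "boosted": boost}
    for _ in range(steps):
        total = sum(weights.values())
        for topic in weights:
            posting_share = weights[topic] / total
            # likes received reinforce posting more of that topic
            weights[topic] += learning_rate * posting_share * reward[topic]
    return weights["boosted"] / sum(weights.values())

share = simulate_drift()
print(f"boosted topic's share after 2000 posts: {share:.0%}")
```

Even a small, never-explicit engagement asymmetry pulls the mix past 50% and keeps pulling, which is the "inch your way over months and years" dynamic: no single post looks manipulated, but the aggregate trajectory bends.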
Ryan:
[1:16:56] Did you guys ever read the Three-Body Problem books?
Jeremie:
[1:17:01] Yeah, the Three-Body books. Okay.
Ryan:
[1:17:02] Do you remember the Trisolarans? It's been a while since I read them. They're the alien race coming to basically destroy the world. The thing is, they're in a star system that's relatively close, but it still takes them hundreds of years, basically centuries, to make their way all the way to Earth. But information can travel faster. And they were very afraid that Earth would level up its technology, so that by the time they met in battle, Earth would be so sophisticated it could defeat the Trisolaran fleet. So the very first thing they did was start funding all of these degrowth memes, basically, right? Injecting that into Earth culture to throw Earth off its game: oh, you don't really need fusion, that technology is just not something you should be investing in. That was how they tried to throw Earth off its game so it couldn't make technological progress. It seems like, game-theoretically, that's the game you're saying China could play against the U.S.
Edouard:
[1:18:07] Yes. And not just China either. We have a number of adversaries who would love for us to be totally degrowth. Yeah.
Jeremie:
[1:18:17] And there's almost a sort of funny intersection here too, because we certainly think there's a lot of evidence pointing to the idea that, as Eliezer said on your show, famously, the first time I really heard him say it loudly in podcast form, we may lose control over AI systems, and that may be catastrophically dangerous. That can be true, and it also can be amplified by our adversaries. This is really complicated shit. How do you run that calculation when the airwaves are being saturated by truth that is useful for the adversary? And when you think about that truth, you have to think: no, not truth, but your truth, the thing you believe, is being weaponized that way. And it's just really challenging. You try to keep your head on straight, which is not super obvious in a context where, if you think about the history of cyber attacks, a lot of the cyber attacks that we see play out over years, right? We hear about, you know, a big nuking of all these Ukrainian servers overnight.
Jeremie:
[1:19:23] And we kind of go, okay, yeah, that was a big hit, and they must have been doing the research for months. But it's often years, and typically years of dealing with people, right? Social engineering attacks. And, I forget which one this was, the one with GitHub: they literally got this person to take over a Linux project on GitHub.
Ryan:
[1:19:47] Yeah, this was recent, right?
Jeremie:
[1:19:49] Yeah. There's so many cyber attacks.
Ryan:
[1:19:52] It's hard to keep track. Okay, so it sounds like the picture overall is that between the U.S. and China, it's anyone's game at this point in time. And part of your call to action is for the U.S. to wake up, to not be so naive in the game it's playing. And I think you have some specific recommendations. So as we start to close this out, the question to you guys is: where do we go from here? And what are your recommendations to the U.S. government, to the frontier AI labs, and to the audience in general?
Edouard:
[1:20:27] Yeah, so I would say the security piece going forward is number one. Because again, all the stuff we've talked about is super important, it's critical, and you guys have done a great job of organizing it and structuring it to make it understandable and digestible. But the problem is always that they can just take the last piece and steal it. And so the whole edifice, until we have that security in place, basically doesn't matter, because they can just take our stuff. And they can also subsequently, in some form, disable our training runs, or our ability to use the very model we developed, if they really think it's down to the wire. They can Stuxnet us, right? So number one is we need to raise those walls and put those security measures in place. We are actually working on developing a set of standards for nation-state-resistant data centers and other security aspects for frontier labs. So if anyone is interested in contributing to that, reach out to us. But that is the absolute critical next step that's needed so that we will actually have a chance of having a real lead at all.
Jeremie:
[1:21:43] Yeah, and the other ingredient there is that you can't build the perfect fortress. Anybody who works in security knows this, and it doesn't matter what kind of security; it's just a truism. Which means you need to think about the game theory behind the escalation to ASI differently, right? You're not just building a perfect fortress; there actually has to be offensive activity. Right now, we simply don't hold Chinese infrastructure at risk in the same way that they hold ours at risk. And that means that from their standpoint, they can just keep throwing jabs and hitting us in the jaw, and we're just sitting back and taking it, and there's no consequence. So the challenge here is in part one of willingness.
Jeremie:
[1:22:25] Historically, there has been a reluctance on the part of the U.S. government to authorize certain activities directed at China and other adversaries, for various reasons, fear of escalation being one of the main ones. The problem is that if in your mind every path goes straight to nuclear war, then you lose all leverage in what is ultimately a negotiation around the use and development of this technology. It's not a pretty negotiation, because it involves getting punched in the jaw and throwing punches at the jaw. But if you take that off the table, and you tell yourself the story that, oh, well, we can't do X because it might marginally increase the likelihood of whatever, the reality is that instability increases when you delay interdiction to later, when the stakes are higher. When you establish a norm early of: okay, you guys are going to go cut a transatlantic cable? Cool, now this is happening in return.
Jeremie:
[1:23:20] Then it stays sub-threshold. It stays below the level of escalatory exchanges. Whereas if you wait for a country to invade another, if you wait for land to be conquered and people to die, then the only appropriate response is a more escalatory one at that stage. So it's sort of like raising a kid: very delicate, very gentle at first, you know, don't put that in your mouth, please, thank you. Instead of waiting for them to go through a lifetime where they end up a cocaine addict with all kinds of diseases and everything's a disaster, and you're like, well, now I've got to put you in prison. It's the same principle.
Jeremie:
[1:23:59] These nation-state relationships are dynamic. They respond to cause and effect, and they respond to interaction. And when we're just taking it on the jaw, all we're doing is teaching our adversary to keep throwing punches. And in a game of superintelligence, there are many reasons, not all of which we can necessarily discuss, why that is a very, very bad idea.
Ryan:
[1:24:19] In the superintelligence games, I think everyone is worried about a hot conflict as well. And of course, if we're competing economically, that's a great space to compete. If we're competing for talent, for technology, that's a great space to compete. I think everyone listening probably realizes that in any sort of escalatory hot conflict, basically everybody loses. And I'm wondering if you see an optimistic way this all plays out, because we've got to end these episodes on a little bit of optimism here. The technology we're developing right now is incredible. And as long as we get through to superintelligence and don't create a misaligned AI that destroys us all, see other podcasts for some material on that, we could usher in a future that is just incredible for the generations to come. And we can cooperate with our adversaries as well, compete with them,
Ryan:
but also cooperate in ways that build a brighter future for humanity. I guess my question, you guys: is there a happy ending in your mind? And if we get to the other side and it is happy, what will have happened?
Jeremie:
[1:25:27] Well, one ingredient is we need to enter a trust-but-verify regime with our adversaries, right? There's not going to be a stable regime until everybody can verify that we're all using our AIs in the right way. And that's going to mean hardware-level things like FlexHEG. This is a whole portfolio of technologies, a research agenda people are pushing forward: how do you do flexible, basically on-chip monitoring and governance protocols that are reliable and tamper-proof and all that good stuff? It's the same thing that happened with nukes, right? You hit this regime where I'm inspecting you, you're inspecting me, and we don't just trust, we verify. That's one ingredient. Another is that AI can help us accelerate the development of technologies that support exactly that. And another is, as you said, AI can help us, through the virtual version of the game of diplomacy, figure out where there's alpha for us both to cooperate, meet in the middle, and mediate some of these disagreements. I think there's huge potential for that. That's the absolute optimistic outcome, the one we'd hope for. But we've got to, as you say, manage that transition.
Edouard:
[1:26:33] Yeah. And just in general, there are basically three layers of challenges we have to overcome. They're not impossible to overcome, but they are very difficult. One is security, right? If we can establish good security and get a good enough lead over adversaries, we have the margin, potentially, to address the next challenge, which is control: ensuring that we can actually control these systems at these capability levels once they are built. And then the final challenge is: okay, you've built this incredibly powerful AI system and have it under control. Who's at the helm? Who's at the keyboard? Who's the one giving commands to this crazy...
Jeremie:
[1:27:17] I think Dan
Edouard:
[1:27:18] Faggella called it, like, a sand god. Who's giving commands to this thing? And this is an area where I think the United States has a cultural advantage. We talked about culture, right? The cultural advantage of the United States is that we have, not just constitutionally but culturally, this checks-and-balances mindset, where no one party should gather too much power to themselves. So we have this cultural ethic of: how do you establish controls around the uses of power? And that's something China really does not have; theirs is a very centralized approach. So I think that if we solve all these challenges, we are in the best position to create something that all of humanity can derive benefits from.
Ryan:
[1:28:03] Love it. Guys, this has been super informational, very helpful. Of course, we can find more information about all of this at Gladstone.ai, I believe. And there's the report that we've alluded to a number of times, America's Superintelligence Project; we'll include a link in the show notes. Is there anyone else you want to hear from in our audience? You mentioned people who are into data center security; any other parties you really want to hear from?
Edouard:
[1:28:25] Yeah, folks who are involved in frontier lab security. We're certainly interested in hearing from folks like that as we work on these standards.
Jeremie:
[1:28:35] Also the hardware level. So yeah, if you're thinking about compute strategy, working on the compute game plan at any of these labs or hyperscalers, we're always interested in talking to folks like that.
Ryan:
[1:28:47] Amazing. They're interested in talking to everybody except for the CCP, right? Hopefully no infiltration here. Guys, thank you so much for joining us today. This has been a lot of fun.
Edouard:
[1:28:56] Thank you. Thanks so much, you guys. It was fantastic.