Stargate: OpenAI’s $500B Plan to Build a Planet-Sized Supermind

David:
[0:03] There's a topic out there that we haven't discussed yet on the AI Rollup, and I think it's because this topic is so daunting, because the implications are so huge, because the outcome of this topic will shape the future of the global order, of humanity, and of planet Earth. Listeners might have heard of this Stargate project. This is a Microsoft, OpenAI, SoftBank, Oracle, and also United States government collaboration. All of these parties are investing

David:
[0:29] and building this $100 billion multi-gigawatt data center campus slated for 2025. The idea here is that compute and models are going to get so much larger and so much more powerful that we need to get ahead of this incoming demand for compute and create a $100 billion brain center, an intelligence center for running AI compute, not as a matter of creating just a good consumer product, but actually as a

David:
[0:55] matter of national defense and national security. The conclusion that I think the United States government has arrived at, with the assistance of OpenAI, is that access to compute is equivalent to access to hard power.

David:
[1:12] Ensuring access to the world's most powerful compute centers is now treated like access to stealth technology or enriched uranium in past eras. Nations now view frontier compute as a prerequisite for economic leadership, intelligence dominance, and military deterrence. So what was once the Manhattan Project back in the 40s, a race toward nuclear armament, is now a race toward building the world's largest, most powerful compute centers, because access to compute means greater access to more powerful intelligence than our adversaries have. And I think the downstream implication of this will ultimately be to redraw the geopolitical lines of the global world order. Just as in the Cold War, lines were drawn between the values of free-market capitalism in the United States and the West, and Soviet communism in the East.

David:
[2:02] New lines will be drawn downstream of whoever has the largest intelligence compute centers. So following the United States' $100 billion Stargate project, the UAE is building out Stargate UAE. In May 2025, this month or last month, Abu Dhabi's state-backed G42 signed a deal with OpenAI, NVIDIA, Oracle, and Microsoft, some of the same players, for a one-gigawatt Stargate UAE cluster, with Washington, D.C. approving the export of half a million top-tier NVIDIA chips to the Gulf per

David:
[2:38] year, showing that the UAE wants to enter this game of international compute dominance. So, Josh, here are my big takeaways on what I see as this new fight in the jungle among nation-states and their allies, and where I think this goes. And I want you guys' help to see how far down this rabbit hole we can go. I've got seven big takeaways for you. First, frontier compute has become hard power. Bleeding-edge GPUs are treated like stealth fighters and enriched uranium. You know that line: compute is the new oil.

David:
[3:10] Compute is now a matter of national security, national defense. So whoever owns the watts, the chips, the energy, the cooling owns the economic rents of the 2030s. That's number one. Number two, United States export controls are now diplomatic currency for the global world order. So if you are a tier one ally of the United States, you get access to chips. If you're a tier two ally, you get chip quotas. And if you're an adversary, China, you just get locked out.

David:
[3:38] Number three, energy becomes the choke point. A one-gigawatt data center needs the energy of a midsize United States city. So Gulf states, hydro-rich Nordic lands, and of course the very energy-rich UAE are big beneficiaries of this move toward compute. Number four, the Taiwan bottleneck is huge. I think the line drawn between powers will run straight through Taiwan, and Taiwan will be the center of a tug of war for anyone who wants to be in this fight. Five, compute nationalism. EU AI factories, China's homegrown accelerators, Gulf chip splits, all aim to dodge permanent United States cloud dependence. Compute is going to become nationalized, and the internet is going to become even more balkanized. Number six, governments will shift to input controls. Regulators will see GPU counts and data center controls as the safety levers, the controls that governments have over the rest of the world. And at the end of the day, the final big takeaway: if you control the watts and you control the chips, you control the 2030s. Stargate is the first concrete proof that raw compute is now a frontline geopolitical asset. It's an extremely ambitious topic, and I hope we can do it justice on today's episode. So, Ejaaz, I'll throw this one to you. Are the stakes really as high as I've made them out to be?

Ejaaz:
[5:04] I mean, in short, yes. We're talking about owning the keys to the most powerful and influential technology of our time. AI will have impact across economies, lives, culture, and these data centers are the things that will power that, right? It's complete compute supremacy we're talking about here, right?

Ejaaz:
[5:23] Stepping back a bit, I want to give the audience an idea of what to imagine here, what this thing will look like. Stargate at its core is an initiative, as you said, David, to build these data center campuses, right? We're talking about five to ten of these in the US, plus, I think, five or ten planned overseas. And the first, which they've announced, which you just mentioned, is the UAE. They're called superclusters, superclusters of GPUs, which is basically the machinery that can process all of this data and compute to train all these different models, and to help you access those models as a customer, as a user in that country who uses ChatGPT, for example. And this is no small feat, right? $500 billion has been committed here by some of the biggest backers and companies. We're talking Microsoft, SoftBank, Oracle, MGX, which is basically the UAE, and OpenAI themselves. Now, a fun little side fact to this is that Microsoft still secures OpenAI's cloud compute for the rest of this decade, until 2030, and they still secure a 49% profit share of OpenAI products. So the point I'm making here is that this is the first time tech companies are having such a massive influence on global political climates and on what those respective nation-states are going to build, invest in, and facilitate over the next decade. Now, why is this important?

Ejaaz:
[6:51] If you want the best model, you need the compute to train it, right? And this requires a lot of upfront capital, massive infrastructure, et cetera. Now, the way that OpenAI is currently scaling their models suggests that anything post-GPT-5, and GPT-5 is a model that doesn't exist yet, will require around five gigawatts of dedicated low-latency compute by 2028. That is, for listeners of the show, a heck of a lot of compute. That is very, very expensive. And OpenAI, we see, is taking the strategy here of owning the facilities versus renting cloud, which is what a lot of companies already do. This locks in both the supply and the economics, letting OpenAI basically amortize capex over multiple model generations rather than pay huge markups. So we're seeing complete dominance from not just OpenAI but one man, Sam Altman. Now, how this plays out in global politics is a completely separate discussion, which we should definitely have on this show, but I want to throw it to Josh to get his takes before we dig in deeper.
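The own-versus-rent logic here can be made concrete with a back-of-envelope sketch. To be clear, every number below is an invented illustration, not a disclosed figure from OpenAI or its partners; the only point is the shape of the math, that a one-time capex outlay can beat a recurring cloud markup once it is spread across enough model generations:

```python
# Back-of-envelope: owning data centers vs. renting cloud compute.
# All dollar figures are illustrative assumptions, not real numbers.

def total_cost_own(capex, opex_per_gen, generations):
    """Owner pays the build-out once, then raw operating cost per model generation."""
    return capex + opex_per_gen * generations

def total_cost_rent(opex_per_gen, markup, generations):
    """Renter pays the same underlying operating cost plus a cloud provider markup."""
    return opex_per_gen * (1 + markup) * generations

capex = 100e9         # assumed build-out cost: $100B
opex_per_gen = 30e9   # assumed compute cost per frontier-model generation: $30B
markup = 1.5          # assumed cloud markup: 150% over underlying cost
generations = 5       # amortize across five model generations

own = total_cost_own(capex, opex_per_gen, generations)
rent = total_cost_rent(opex_per_gen, markup, generations)
print(f"own:  ${own / 1e9:.0f}B")   # 100 + 30*5  -> $250B
print(f"rent: ${rent / 1e9:.0f}B")  # 30*2.5*5    -> $375B
```

Under these assumed numbers, owning wins after a handful of generations; with a smaller markup or fewer generations, the renter comes out ahead, which is why the bet only makes sense if you expect to keep training frontier models for a long time.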

Josh:
[7:57] It reminds me of this short story by Isaac Asimov, "The Last Question." The idea is that as you acquire all this intelligence, there's really only one question that matters. In the story, it was: can you reverse entropy? But in this case, it's: can you create artificial general intelligence? And that is the only thing that matters, because once you solve that, nothing else matters. It's able to solve all of your problems. It is able to give you infinite levels of intelligence, infinite energy. It is like a superpower. This is bigger than nukes. So the stakes are very clearly as high as David's proposing them to be. I think the scale is interesting, and that's something that I want to talk about. Ejaaz, you mentioned they're going for five gigawatts of power for some of the data centers. For reference, one gigawatt is equivalent to about 750,000 homes. So every gigawatt is like a city's worth of homes: 750,000 homes, 2,500 Teslas, 100 million light bulbs, I think, is the equivalent. So the scale and required energy is massive. They are planning to do this with the UAE, but they're actually making meaningful progress here in the United States, in Texas, right now. And the way it works is they just have these massive buildings with giant GPUs inside. The current setup that they're running in Abilene, Texas is eight buildings, I believe, with 50,000 GPUs planned per building. So that is going to be 400,000 GPUs, which would be the biggest cohesive cluster, I think, in the world.

Josh:
[9:15] There's over 2,000 people working on this 24 hours a day. And what's amazing is the power constraints that they have versus what they're trying to get. Currently, they only have 200 megawatts of power, when in reality they need 1.2 gigawatts, which is about a 6x multiple on that. So as they're building these, we're starting to see the constraints that they're running up against, and I think a lot of the timelines they have are super ambitious. They're like, oh yeah, we could do this very quickly, but they need power. And I think that's where the UAE comes in, the Gulf states of the world. They have all of the energy, they have all of the oil, they can actually power these things in ways that we can't. So what I'm excited to see with this is how quickly they're able to out-accelerate us in powering these GPU clusters, because that's really the big thing that matters.
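The power figures in this exchange are worth sanity-checking, since the ratio gets garbled easily in speech. Taking the episode's own numbers at face value (200 megawatts available at the Abilene site today, 1.2 gigawatts needed, and the rough 750,000-homes-per-gigawatt equivalence Josh cites), the arithmetic is:

```python
# Sanity check on the power numbers mentioned in the episode.
# All inputs are the figures quoted on the show, treated as rough estimates.

HOMES_PER_GW = 750_000   # rough homes-per-gigawatt equivalence cited above

available_mw = 200       # current power at the Abilene site, in megawatts
required_gw = 1.2        # power the full build-out reportedly needs

required_mw = required_gw * 1000
multiple = required_mw / available_mw          # how many times current capacity
shortfall_gw = (required_mw - available_mw) / 1000

print(f"required vs. available: {multiple:.0f}x")   # 1200 / 200 -> 6x
print(f"shortfall: {shortfall_gw:.1f} GW")          # 1.0 GW still to be built
print(f"1 GW is roughly {HOMES_PER_GW:,} homes")
```

So the required capacity is six times what is available today, and the remaining gap alone, one gigawatt, is the cited equivalent of about 750,000 homes.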

David:
[9:58] Do you guys know what CoreWeave is?

David:
[10:02] It's a company that went public not too long ago. And I think if you want to answer the question of how big a deal this is, you'll see the market reprice CoreWeave up and up and up. Because what does CoreWeave do? It hosts chips. It is a power center, an intelligence center for rent. It buys a lot of chips, hooks them up to power, and you can buy access to those chips. And so it's independent of everything that we're talking about, but I think you're seeing that same emphasis on the importance of access to compute being expressed in CoreWeave. The only reason I bring up CoreWeave specifically is that the stock price has gained roughly 350 percent since March, right after it went public. So you can see the market starting to really value intelligence centers, access to intelligence. And as Ejaaz said, there's this OpenAI for Countries phenomenon, this initiative where they're trying to replicate this business. This is a valuable business model. They have the intelligence and they are selling it to entire countries. And so that line that's drawn between powers, like Ejaaz said, is being drawn arm in arm with a nation-state's tech arm. And China figured this out a long time ago. The integration between the Chinese government and the Chinese tech world is exactly that. The tech industry of China is an extension of the Chinese government.

David:
[11:30] And in America, it was always much more separate. Facebook was going toe to toe with the government, all of these people in Silicon Valley. There's a joke out there: why is Silicon Valley in San Francisco on the West Coast when D.C. is on the East Coast? Because they want Silicon Valley as far away from government control as possible. I don't think this works in the world of AI, where, because AI is such a strong matter of national defense, you need an alliance between the AI sector and the government sector, even just as a matter of what's most profitable for OpenAI. You actually have to team up with the largest power bases in order to secure the energy, in order to secure the chips, and have the value of things

David:
like OpenAI, things like Microsoft actually reach their maximum potential. Ejaaz, what are your thoughts? You're nodding your head.

Ejaaz:
[12:21] Yeah, well, I wanted to pick up on your point around defense, David. So you're right to point out that traditionally in tech, in the West at least, it's been separate from the government. They've been going toe-to-toe, right?

Ejaaz:
[12:32] On the defense side of things, tech and the government have worked pretty well together. Actually, just this week, Anduril, which is an infamous, or rather famous, weaponry and drone company started by the former founder of Oculus, which was acquired by Meta, announced a partnership with Meta, right? The point of this partnership is that they're going to pair Meta's new cross-functional VR technology with Anduril's drone technology, basically creating superhuman gamer computers that can take drones to war for the U.S. And that was a major, very public partnership. The reason that's so significant is that this founder was fired from Meta. He wasn't let go; he didn't leave of his own accord. So it just points out how important AI is to the weaponry side of things. But I think there's an important difference to point out here with the Stargate side of things, which is that this OpenAI for Countries partnership isn't just about defense. In fact, defense isn't even mentioned in the announcement blog post. What they talk about the most is that AI is going to have very significant, consequential effects on the entire state economy. I actually want to dig into some of the differences between OpenAI's Stargate for countries and the Stargate that's happening in America that Josh just described.

Ejaaz:
[13:57] So with the global order, Stargate will be set up in the UAE. Let's take that as the first example. The UAE will own all of the data that goes through that data center. The U.S., or OpenAI, doesn't claim ownership over the data that's being run through the UAE's Stargate cluster, right? So they'll own all their data. It's sovereign. It's democratic AI, as Sam Altman describes it, for data privacy and compliance. Number two, they'll create custom AIs to serve the citizens of the UAE. So, for example, anything in a native language, or under different laws and regulations, can be leveraged for custom health care or public services.

Ejaaz:
And then number three, in every nation that a Stargate is set up in, they'll create a national startup fund, of which OpenAI, by the way, is going to be one of the main contributors, to invest in and support the AI companies that blossom from these different models. So we kind of see this as a political play, right? The US gets investment exposure to all of the top companies that come out of OpenAI's IP or models, while the data sovereignty, compliance, and privacy aspects stay in tow. So it becomes a geopolitical play: the US positions itself as the trusted AI compute bloc versus China's stack. The OpenAI for Countries track offers partner nations discounted slots in exchange for alignment, as Josh pointed out earlier.

Josh:
[15:34] Yeah, if I'm the United States, this feels like a no brainer to me. I think the race to AGI is between the US and China, really. And in the case that China does reach it first, they have the manufacturing capabilities to physically manifest that through robots and through devices way faster than we can.

Josh:
[15:49] So in the case that they actually get there first, that is a really scary future. And I think the best way to hedge against that is, the enemies of my enemies are my friends: just get everyone else who is not China on board, get them in the loop, get them iterating, get them ascending that curve of acceleration as fast as possible, and use the resources that they have that we don't quite have access to. So get access to Saudi energy and Saudi money, so that we can funnel it back to the United States to provide more GPU cluster training. To me, this makes a lot of sense. I think the question now is what position this places OpenAI, and particularly Sam Altman, in, being the single facilitator of this intelligence throughout the world. In a way, we're kind of creating this supermind, where we're creating these giant nodes all around the world that will then talk to each other, kind of recursively learn with each other, and have access to huge swaths of data that we otherwise did not have access to. So if you're thinking about this from Sam's perspective, well, he is now kind of building this hive mind by leveraging other countries. And sure, the data is private, and sure, it's their own private model, but I'm sure there are downstream wins for OpenAI that result from this that are yet to be seen. If I'm Sam, I'm probably feeling very powerful right now. I think that would be a good word to describe it.

Josh:
[17:04] And I guess over time, we'll see how this actually plays out. But good strategic move for the US, whether or not it is the right thing to do with only one company versus getting someone like Google or Anthropic also involved, that

Ejaaz:
[17:16] Remains to be determined.

David:
[17:17] This is where you really see the lines being blurred between the United States government, or nation-state powers, and the tech sector, because the United States, Microsoft, SoftBank, and OpenAI are all dual investors in both the domestic Stargate and the UAE Stargate. And so it's funny to see Microsoft and OpenAI having their fingers in both camps, right? They get to own the intelligence center in the United States, and they get to own the intelligence center in the UAE. There's no way the United States government could do this without the help of Microsoft and OpenAI. This thing all centers around OpenAI. In order to compete with China, to be competitive at all, the United States government needed to ally itself with the tech sector. But then the tech sector gets to elevate itself beyond the tech sector, into the national defense and geopolitics camp. So when OpenAI eventually trades publicly, or take Microsoft, because it does trade publicly, does that also count as a defense stock too? Because it is now inside the UAE, and it is already a supranational, international organization, but even more so

David:
[18:29] now when you are highly integrated in the defense of multiple countries.

Josh:
[18:34] Well, I was

Ejaaz:
Going to say the lines of nation-states are blurring now, right? This isn't a topic we're too unfamiliar with when we talk about the Web3 side of things. We talk about political power and influence; we talk about economic power and influence. Tech companies have been blurring that line, but it's been the Eastern and Western blocs, as Josh pointed out earlier. Now it's becoming quite clearly an AI bloc, and it's all up for the taking right now in terms of alliances. And it doesn't seem to carry or interfere with any past political biases that may have existed. I had a question for you both: outside of major political elections, has there ever been a case where tech companies have had such large political influence over a single decision?

David:
[19:20] Outside of elections.

Ejaaz:
[19:22] Yeah. So outside of like lobbying and super PACs.

David:
[19:26] Yeah. Well, there's the pre-Cambridge Analytica debacle, where Facebook was credited with stoking a civil war in Africa through advertising, through advertising manipulation. Downstream of this, one local domestic dispute turned into a civil war, because Facebook's advertising program was open to both parties in this local region, letting them reach and influence people there, and it fueled a civil war. I'm blurry on the details, but that's effectively what happened. So does that count?

Ejaaz:
[20:00] Kind of. It's using technology as the tool there, right? And that's a very specific network. I'm talking about a global order of influence. It's not just the social network, which is basically the advertising or propaganda machine. It's the machine that allows you to facilitate the workers, or facilitate a whole economy, a major percentage of a country's GDP, right? That hasn't been done before, in my opinion, or has only been done behind closed doors, in handshake deals. This is the first major global operation of this kind that's happening

Josh:
Yeah, we've seen this trend of being able to influence through technology companies. We saw this first with Facebook, where you can actually sway the opinions of a lot of people, and you could sway an election that way, and you could sway policy that way. And there's been this increasing trend of technology companies gaining more leverage, to the point where they can actually influence politics, and are now required to participate in it, because they're the only ones with the technology needed to compete with other countries. And while the government has fallen flat on actually innovating, it has to rely on the private companies, or public companies, where this tech has been created. I don't see a world in which that trend changes, because as great as it would be to have a Manhattan Project for AI that is government-funded, pooled together and contributed to by everyone as a country, that doesn't exist. And there is no world in which it seems like that is going to exist, when the reality is that they're just going to lean on a single entity like OpenAI. So I think this is an increasing trend. I think private industry will get more and more powerful until eventually it's at parity with government, because it can influence people. And we're going to talk about this later in the show, but the influence you have over a person can sway them to vote for anything. And in democratic worlds where votes matter, having influence over those people makes a really big difference. I have some

Ejaaz:
Food for thought for you guys. I want to role-play some scenarios with this entire project and see where you guys take it, right? Okay, so the first question I was mulling over is: who owns the resulting character and quality of OpenAI models if, say, the UAE contributes 15% of the compute for the latest frontier model? Have you thought about how that political play might happen? Right now, they've announced 10 Stargate projects within the U.S., and 10 more will eventually be announced internationally. Do you think the U.S. will always have to maintain a greater number of sites than the rest of the world? How does that play out in your head? That's question number one.

David:
[22:39] I think the location of compute, where compute happens in the world, will become very important. You would think that, oh, I'll just upload it to the cloud, and the cloud will make geolocation, physical location, completely irrelevant. I don't think that's true with AI. Because as soon as things escalate between, again, hypothetically, the United States and China, then access to intelligence becomes a national asset, a weapon that you have. And you need to make sure that you always have access to that intelligence. You can't be dependent on a third party.

Ejaaz:
[23:18] But who gets to define the intelligence?

David:
[23:19] What do you mean define the intelligence?

Ejaaz:
So if the UAE's data clusters, right, or compute clusters, account for 20% of the training of OpenAI's frontier model post-2028, should they have an influence on the character and quality of that AI model?

David:
[23:36] I think they'll be able to negotiate there. If they have 20% of the total compute, they'll be able to negotiate a way to be involved in that conversation, for sure. Because as an ally, again, if things escalate versus China and you are throwing your compute against China's compute, whatever that looks like, I don't know, subversion tactics, building better weapons, if you can align yourselves with the UAE's 20% more intelligence, then that intelligence adds to your pool of intelligence, and you get to take that fight to China because you're allies. It's like grouping up armies, except now the armies are data centers.

Ejaaz:
Right. The reason I asked the question is that for so long, we have spoken about American-made AI and China-made AI. And I feel like those lines are going to blur over the next decade, right? Like, if the UAE wants to add a clause or a personality trait into OpenAI in general, not just within the UAE but in the generalized model, to say that, I don't know, you need to be friendlier in your biases toward the Middle East or Middle Eastern news, I wonder how that plays out in massive nation-state and government political decisions.

David:
Well, I mean, we can just borrow lines from the book 1984. Like, "We've always been at war with Oceania," all of those things that we

David:
just, in the book, injected into the thoughts of the society, and the society went along with it. Now you get to do that with OpenAI, and all of a sudden, whatever OpenAI spits out is truth. So there's another conversation that's not related to Stargate, but I think is incredibly illustrative of exactly why Stargate is so important. We've already mentioned Cambridge Analytica and the 2016 election, and I just want to trace over some of the facts, the TLDR of what happened. Actually, this starts in 2014, so over a decade ago.

David:
So Cambridge Analytica, this company that did advertising analytics and other internet analytics, primarily based on Facebook, ran a personality quiz on Facebook in 2014. This app vacuumed up data from 270,000 quiz takers and their friends, the friends of the quiz takers, and ultimately generated data on 87 million Facebook profiles without people's knowledge. Cambridge Analytica built psychographic models like "neurotic suburban mom" or "angry young man," and while working for the Trump campaign and allied political action committees in 2016, micro-targeted those profiles with razor-tailored political ads. When a whistleblower outed this scheme in 2018, it torched Facebook and big tech, and it became a huge dividing line between the Democrats and the Republicans. Remember, this was the whole "Russia hacked Facebook" line from Hillary Clinton, all of that. It had a very large influence on the 2016 election that elected Donald Trump.

David:
And that was because of big data and subtle tweaks to how Facebook served content to its users. Now, here's the news from this week: Meta plans on creating AI ads. It wants to make a very simple ad platform that does a lot of the work it takes to make an effective ad. The idea is that advertisers can just plug in objectives and a credit card. That is a direct line from Mark Zuckerberg: just plug in what you want and a credit card, and Meta will give you what you want.

David:
So the Meta AI will spit out a full creative stack: images, video, copy, even real-time A/B testing on ads for different users in different locations. And then the AI will also decide whom to target on Facebook and Instagram, and optimize the pacing of spend to make sure these ads are as effective as possible. We've always known what Meta's product is. What is Meta's product? Influence. You give them $1,000, they give you $1,000 of influence. And now they are leveraging AI to make sure that their product, influence, is as effective as possible. So I look at Cambridge Analytica, and then I see this, and I draw a direct line to Facebook being willing to use AI, which will be, again, compute located in the United States,

David:
owned by OpenAI and the data centers that we've talked about. And you'll be able to use that to access influence over citizens of the United States, or any other country, really. This is why owning the intelligence center of your local region is so important: it gives you total control and total influence over what your constituents think, what the people think, what the facts are, what the truth is. And we're going to use this in our way. China is going to use this in their way. Both ways might be kind of authoritarian, kind of totalitarian, either way. But there's a bit of a downward spiral, where we have to do this, because if we don't do it, then China's going to do it. Josh, what are your thoughts?

Josh:
[28:45] This is a continuation of a trend that we've been seeing, which is kind of dark and scary in the sense that a lot of people don't really know what's going on with AI. They're not sure how powerful it is. They're not sure where it's popping up within their day-to-day life. And I think in the case of this advertising example, we have a very clear case of the continuation of this in the sense that users will not really know that these are AI generated. They'll just like the ads better and better and better. And what we kind of see with TikTok is when you get content that's tailored to you, you really spend a lot of time enjoying it and scrolling. And if you could apply that to advertising, well, then, I mean, that's incredible news for advertisers, but also kind of scary news for us where you're not really quite aware of the ads becoming much more powerful, but they are. And not only are they more powerful, but they're hyper-customized to you. So as a user, Facebook has all of your preferences. They know your data. They know what you like. They know what you don't. They just send a prompt to the AI and they say, hey, generate an ad for this person with these parameters based on these things that he likes. And now you have the best ad in the world that probably doesn't even feel like an ad. And it makes you want to buy things.

David:
[29:50] No one else will see that ad. Only you will see that ad because that ad was crafted for you specifically to influence you the best.

Josh:
And not only that, but the cost to generate this ad will be multiple orders of magnitude less than it was previously, because it doesn't require people to go out into the world to film things, to go to an editor and actually create these things. It's done with a single prompt and a single click. And I think that's a really big thing, because as the cost of these ads goes down, you can run them and iterate on them much quicker. The way that these AI models work is they take these feedback loops and learn from them, over and over. You could run that loop over and over and over, for a fraction of the cost it takes to create a normal ad, until you have the best ads in the world. So it's a continuation of grabbing attention and hyper-customizing attention, and it seems, I don't know, kind of weird and scary. I see you nodding. Do you have anything to comment on this?

Ejaaz:
[30:41] I mean, the question becomes, will this result in Cambridge Analytica 2.0, but with misinformation on steroids? It sounds like what we're indirectly getting at here is potentially yes, unless it's regulated or monitored very heavily. Just in that example, Josh, of personalized ads: you could be selling the same product to different target audiences, and maybe you might misconstrue some of the details of that product in a few different ways, just to appeal to that consumer and get them to click buy. I kind of worry about how that might spiral into something a little crazier, right? Would company politics change the information that they supply to Meta? Can they tweak it there and then? That's one thing. The other thing is, there's a shift of the power structure, ever so slightly, here. Typically you've had Meta working in a 50-50 relationship with advertisers, right? Advertisers will come to them and say, hey, we want to advertise on your platform. Meta is like, okay, cool, what's your product? Create the ad, do A, B, and C, and then we can see where we can plug you in. It's kind of like the YouTube algorithm, where it's like, here's an advert, we'll try and put it in front of the right audience.

Josh:
[31:58] Facebook is kind of like now saying,

Ejaaz:
[32:00] Ah, we'll handle all of the video cost production, all the kind of like visual qualities, and we'll maybe take an extra cut on this. I don't know whether that's like a fair take, but I feel like they're going to get more money out of this. They're going to own more of the tooling and services in-house, and they just become a higher kind of like conglomerate, basically. And then the third order effect is, well, what is Facebook going to do with all of the data that they collect from all these advertising experiments that they're running, right? You just mentioned that they're going to be doing A-B tests. It's going to close that loop, basically, because Facebook would put out an ad, see if it works, and then maybe tweak it slightly for the next one. Now AI can just kind of like close that loop in like real time and like kind of give you the best iteration of what that ad might be. So the blueprint basically updates every second. And that's a pretty insane thing.

David:
[32:51] The image generator model, was it last week that we talked about it? No, two weeks ago: Google's Veo 3. So it's interesting to see that this is happening so fast. Veo 3 got introduced two weeks ago, and now Meta is using Google's Veo 3 to create AI advertisements. I don't know when this fully rolls out, I guess this is just the announcement. But it's also interesting to note that when this announcement went out, Meta jumped by 3% on the stock market, and anything that's competitive to Meta in the space went down by anywhere between two and five percent. So you can see the market reacting to this in real time.

Ejaaz:
[33:31] Did you see that Microsoft also announced a similar product this week? Oh really? No. Yeah, it's just using Sora, which is OpenAI's video generator, but they specifically announced a product which is going to create verticalized adverts, or media generation that will basically be ready for TikTok or whatever that might be. And they're feeding it to their enterprise customers, right? So this is a general rollout to start off with, but I bet you they're going to try and go toe-to-toe with this new Meta product.

David:
[34:05] The internet, I think, just becomes more and more dead every single week. Yeah.

Josh:
[34:10] All of this points to increasing control, increasing attention. It's like, if you have the power to make the optimal version of your product, surely they're going to take it. And that means that if your product does work as well as they want it to, well, the downstream effects are actually kind of scary and pretty bad. Like, if your advertising is great and your conversion click-through rates jump to 99%, that seems like kind of a scary world. I don't think we want these products to work as well as they're designed, because that creates a reality that isn't good. When click-through rates on ads are sub 5%, that's great. People are seeing it, they're like, eh, that kind of sucks, I'm just going to keep going on with my day. But when they're really good, that's a lot of distraction and a lot of manipulation. Andrej has a great take, which you have up on the screen, which I'd love to discuss.

David:
[34:58] Yeah, so Andrej Karpathy, formerly of OpenAI. He just left OpenAI to educate about AI generally, just kind of a legend in the AI space. He tweeted out: very impressed with Veo 3 and all the things people are finding on r/aivideo, that's a Reddit subreddit for AI video. Makes a big difference qualitatively when you add audio. There are a few macro aspects of video generation that may not be fully appreciated. One, video is the highest bandwidth input into the brain, not just for entertainment, but also work and learning. Think diagrams, charts, animations, etc. Two, video is the most easy and fun. The average person doesn't like reading or writing, it's very effortful. Anyone can and wants to engage with video. Three, the barrier to creating videos is approaching zero. And four, for the first time, video is directly optimizable, and "directly optimizable" is in bold. I have to emphasize and explain the gravity of number four a bit more. Until now, video has been all about indexing, ranking, and serving a finite set of candidate videos that are expensively created by humans. If you are TikTok and you want to keep the attention of a person, the name of the game is to get creators to make videos and then figure out which video to serve to which person. Collectively, the system of human creators learning what people like, and ranking algorithms learning how to best show a video to a person, is a very, very poor optimizer. Okay, people are already addicted to TikTok, so clearly it's pretty decent, but in my opinion nowhere near what's possible in principle.

David:
[36:26] The new videos coming from Veo 3 and friends and competitors are the output of a neural network. This is a differentiable process. So you can now take arbitrary objectives and crush them with gradient descent. I expect this optimizer will turn out to be significantly, significantly more powerful than what we've seen so far.

David:
[36:45] Even just the iterative, discrete process of optimizing prompts alone, via both humans or AIs, may be a strong enough optimizer. So now we can take engagement, or even pupil dilation, and optimize generated videos directly against that. Or we take ad click conversions and directly optimize against that. So this is Andrej consolidating this down into a very powerful take: the cycles to create the world's most addictive video are going from days with humans to minutes with models. And that isn't even accounting for how the most addictive video can now be optimized for individual end users. The other theme that I'm seeing here, and we've talked about this before, is the differentiation of the internet. I think pre-Facebook, before the algorithm era of Twitter, Facebook, Instagram really set in, everyone was looking at the same internet. There was no algorithm sorting content and optimizing for distribution based off of who liked what. So if I went to Reddit, or if I went to Facebook or whatever, or Yahoo, I was seeing the same content that somebody on the opposite side of the world was also looking at. And that has slowly eroded to the point where it is today, where no one has the same internet. I have my feed. Josh has his feed. Ejaaz has his feed. No one has the same feed anymore.

David:
[38:04] At least with YouTube videos, I could go and take a YouTube video and I could share it with Josh, like, yo, Josh, this YouTube video was sick, you should watch it. Now even that is under threat, where my YouTube video is for me and it's not necessarily for Josh or anyone else. And so we're getting siloed into our own little content bubbles. It's been heading this way for a while, and now AI is just going to take that even further.

Josh:
[38:29] The way a lot of these platforms work is they rank content: there's a fixed set of content to which ranks and weights are applied, and that's how it gets distributed to people. But that model becomes absolutely irrelevant in the case that we can hyper-generate content, where you could spin up the perfect video on demand. So ranking and indexing, the way that Google works, the way that YouTube works, that is no longer a requirement. And I want to just click into this one key thing that Andrej said that people might not understand, which is the gradient descent part of how this works.

Josh:
[38:56] So that's kind of how you train large language models. There's this thing called a loss function, where every time you go through a training run, you're optimizing for a specific objective. So in this case he was talking about time spent, or pupil dilation, where maybe you could track the size of someone's pupils, whether they're engaged or not, in the case that you have access to the front-facing camera. The way it works is you send a prompt to this generator, it creates an ad, it measures the objective, so your pupil dilation or the time spent watching, and then it iterates again the next time and gives you a slightly better ad, and a better ad, and it learns when your pupils become a little less dilated, a little more dilated. And it iterates that very quickly, because it's so cheap. So what he's saying is, not only are we removing the indexing function, because there will be an infinite set of content, but that infinite set of content can get so good so quickly because of these gradient-descent-style training runs, where every single time it iterates a little bit better, a little bit better, a little bit better. It learns what you don't like, what you do like, and it becomes perfect. And it creates this really weird internet where every day there is a net new internet, because all of the content from yesterday no longer matters, because it's all generated on the spot.
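
That loop, measure an engagement signal, nudge the ad, measure again, can be sketched in a few lines. Everything here is a toy assumption: `engagement` is a simulated stand-in for a real signal like watch time or pupil dilation, and the optimizer is a simple finite-difference hill climb, not actual backpropagation through a video model.

```python
def engagement(ad_params, user_pref):
    # Simulated engagement signal (stand-in for watch time or pupil
    # dilation): highest when the ad matches the user's hidden preferences.
    return -sum((a - p) ** 2 for a, p in zip(ad_params, user_pref))

def optimize_ad(user_pref, steps=200, lr=0.1, eps=0.01):
    # Start from a generic ad and iterate: nudge each parameter, measure
    # the change in the signal, and step in the direction that improved it.
    params = [0.0] * len(user_pref)
    for _ in range(steps):
        for i in range(len(params)):
            bumped = list(params)
            bumped[i] += eps
            # Finite-difference estimate of the signal's gradient.
            grad = (engagement(bumped, user_pref) - engagement(params, user_pref)) / eps
            params[i] += lr * grad
    return params

# After a few hundred cheap iterations, the "ad" has converged on the
# user's (hidden) preferences: the loop described above, in miniature.
tuned = optimize_ad([1.0, -0.5])
```

Because each iteration costs almost nothing, this loop can in principle run per user, per day, which is exactly the cost collapse being discussed.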

Josh:
[39:59] So there's this real-time internet that is hyper-customized to you, that only exists in your reality, and can bend your reality however way it would like, or however way you would like it to. So it's this really weird, creepy thing. And I think Andrej kind of showcases also how powerful this is, because a lot of people, they don't really care to

Josh:
[40:19] read or to write, but video is so easy. It's so easy to sit there and be entertained. And I think this form factor, now that we've unlocked it with video creation and Veo 3, is so unbelievably powerful, and it's the way that most people will probably feel the effects of AI first in their own personal lives on a day-to-day basis.

Ejaaz:
[40:35] So one habit that we've had on this series is to talk about kind of the doomer effects of this technology. And it's easy to extrapolate where this goes, right? We could create endless doom-scrolling on steroids and create an internet where everyone is more separated than ever. But I was just thinking about where this could be incredibly powerful in ways we haven't seen just yet. The first thing that pops into mind is education.

Ejaaz:
[41:04] Could you imagine if you are a complete amateur in some kind of sector or task? Let's say you wanted to get into gardening, which is probably going to be something which people ironically get less involved in as people get more digital and online, but you had no idea what the first thing to do is. You could just plug in, watch a bunch of gardening videos. It would know that Ejaaz lives in this particular area, in this particular country, so the soil type will be A, B, and C. And before you know it, he's out there buying products that were probably served to him through personalized ads, and off you go, right? And maybe Ejaaz becomes happier because he's sniffing flowers all day, or whatever that might be. So that's the educational aspect. The other aspect is, I feel like enterprises and employers, whatever that future world looks like, would really want this for their employees, whether it's an AI agent that's automating something or a human that's conducting, I don't know, manual work or online work for them. I feel like this would be something that's super powerful. And then the final thing is,

Ejaaz:
[42:08] Last week, we spoke about OpenAI's new device, right? And Josh, we were kind of opining on whether this might be something that sits in your pocket, or listens, or hears you in some kind of way. But there was no visual element there. I think this now reinforces that there has to be some kind of visual element, right? There has to be some kind of device that has an eye that sees what we see. Otherwise, it would kind of be a mismatched product. I don't know whether that changes your opinion. This is kind of a side note to our previous episode, but I don't know. I feel like visual elements are not going to be...

David:
[42:41] By visual element, do you mean camera or a screen?

Ejaaz:
[42:44] Yes, camera, basically. Camera, yes. There needs to be some kind of ingestion mechanism for the world around us and what people are seeing. If we assume AI models are going to become more visual.

Josh:
[42:56] Yeah, well, it seems probable that there will be a camera on the new device. I think the question is whether or not there will be a screen. They said that there won't be a screen, but it's a suite of products; there will absolutely be one that includes a screen at some point, because the visual manifestation of this is super important. I think, to your point about using it for good, there's this very obvious and very steep K-shaped curve, where there's this fork: people who have agency, who want to use these tools to leverage them to get smarter, will have infinite leverage, infinite power, will have so much fun with this, because you can shape the world however you want it. But the people who don't, the millions of users who spend hours a day on Netflix, who spend hours a day on TikTok, there's no reason for them to break out of that sphere when it increasingly gets better. So while I'm sure the upside is infinite and incredibly exciting for a lot of people who want to leverage that, the downside is equally as devastating. And it's hard to imagine a world in which that isn't the reality, because so

Josh:
[43:57] many people are so addicted to media and completely happy in that world. And a lot of them might not be upset. They're just stoked to sit there and share videos with their friends and just have a good time ingesting content. And that's just fine. But that creates a really huge divide of people

Ejaaz:
[44:16] Who do want

Josh:
[44:17] To use this leverage to enhance their lives or the people who just don't.

Ejaaz:
[44:20] I think it redefines who your friends are, Josh. I think the three of us will have different types of friends if we assume this technology evolves in the way that we've just described. The reason being, if we're just being served up personalized AI content that reveals our true inner selves and biases, and maybe accentuates them because they just want to sell us a product and increase eyeballs, then we end up discovering new people that align with those biases. I don't know whether that makes us better people over time, but I think it probably introduces us to new friends, which will probably be pitched as a good thing initially, but I don't know what the longer-order effects of that are. David, I see you...

David:
[45:01] Kind of like how? So, downstream of, again, the Facebook Cambridge Analytica era, we all got put into tribal camps. And I think what you're saying is that, but even more granular. Is that what you're saying?

Ejaaz:
[45:14] Yeah, exactly. And I don't know whether it becomes some kind of corporate homogenous society, David, where, I don't know, whoever uses Meta and Google's AI the most ends up in their own kind of nation-state or pact, and whoever uses Microsoft and OpenAI specifically ends up in another. But I think there are going to be weird forms of alliances that get created.

David:
[45:38] Have you heard of, this is a startup that I got introduced to just in a casual conversation the other day, index.network. This is pre-product, so most people wouldn't have heard of it. But the pitch is, I'll just scroll down to the how-it-works section. How it works. Start with what you're working on. Upload your notes, your decks, anything that captures your thinking. This information is stored privately. Then, step two, tell agents what you're open to. I'm looking for early-stage founders building in privacy. I want to connect to ZKML researchers and builders.

David:
[46:08] I'm interested in discovering confidential compute startups. And then agents compete to match you. So this is like a founder matching engine, and then you get matched with them and you can start talking to them. You upload your data, and it matches you. You could also just do this for dating apps too. You could upload all of your last 10,000 photos on your camera roll, and then some dating app will match you with other people. So I think there are things happening that are directly creating exactly what you're talking about, Ejaaz, but intelligently as well.

Ejaaz:
[46:43] Yeah. I mean, what's super interesting about this product is it almost sounds like a training gym, right? Hey, get ready for your next work interview by doing a million different simulations. But the assumption there is that there will be a point at which the human lets go of the AI's hand and goes and becomes a grown-up in the world. The question then becomes, what about the products where the AI doesn't let go, right? Where it's constantly your companion, where it maybe even replaces the interaction that you would have in the human world? I think those kinds of products will be stickier in the future. I don't quite know what that looks like. Maybe it starts off with social media, like an enhanced YouTube or TikTok.

Josh:
[47:28] Yeah. It feels like a transitory project, because it implies that there will be more than one agent worth picking from, as if others aren't capable enough. There's very rapidly a world in which you don't need to choose. They can all do anything you want. It's just a matter of the angle that they choose.

Ejaaz:
[47:42] Okay, I want to move on to our final topic of this week, which was actually the most discussed topic in AI across the internet. Now, I want to give you guys two chances, one guess each, to figure out what this topic might be about, and your options are the following. Option one: a groundbreaking new model was released and it changed people's worlds. Option two: Google released yet another agent product and it's going to automate our entire lives away. Or option three: a multi-billion-dollar AI company was outed for having no AI whatsoever.

Josh:
[48:25] I like the reality in which number three exists.

David:
[48:28] One out of these three is different from the rest. The first two we've already talked about in 17 different ways. The last one is new to me.

Ejaaz:
[48:36] Wait, David, you don't want to talk about another frontier model that beats this benchmark? Go around the loop again. Okay, well, you'd be happy to hear, listeners, that it is, in fact, number three. This company called Builder.ai, which had, emphasis on the word had, a valuation of $1.5 billion. This company had been backed by Microsoft to the tune of $435 million. It was revealed that they did not have an AI product on the back end. It was, in fact, 700 Indian software engineers who would manually process and code up any user's request as they were fed it. Wait, wait, wait.

David:
[49:22] So I'm vibe coding. I'm vibe coding.

Ejaaz:
[49:24] On this platform.

David:
[49:25] And I'm like, build me an app that plays Snake in this particular way. And then a bunch of Indian coders would code it as fast as possible and give it back to me?

Ejaaz:
[49:36] So it gets sent to Kumar and Anand in India, presumably, and they just build out a prototype, and you have a loading screen where it's like, this app is being developed, and it's doing that for like 15 to 20 minutes. And really it's engineers on the back end that are building this entire prototype or app for you, and then it just sends it to you. I saw a bunch of mock-ups from prototypes that people would see, and it was kind of obvious now in hindsight, because there were typos everywhere. Some of the buttons didn't do the thing that they claimed they would. I saw another hilarious tweet, by the way, which was, it looks like AGI stands

Ejaaz:
[50:17] for... wait, I don't want to ruin the punchline. Yeah, I'll repeat that.

Josh:
[50:24] Go for it. I saw a hilarious tweet the other day,

Ejaaz:
[50:27] Which said that AGI actually just stands for A Guy in India, which I found hilarious. Now, I just want to highlight this story because it shines a light on the fact that not everything in AI is wizard magic that's going to change the world tomorrow. We are, in fact, still very much at the starting phase of this entire revolution, and most people don't really know what they're getting themselves into. We've spoken about AI agents a lot on the show. We've spoken about jobs being replaced on this show. We've spoken about media becoming enhanced, doom-scrolling, all that kind of stuff. But really, we kind of don't know what that's all going to materialize into. And it could just be a bunch of Indian software engineers on the back end.

David:
[51:10] Maybe this whole AI race was just a complete scam.

Ejaaz:
[51:14] Yep.

Josh:
[51:14] I can't imagine users not realizing this. Yeah. Because I feel like I very frequently abuse my AI, in the sense that I will be kind of mean, be very direct. You're still doing that? Peppering it with a ton of queries.

David:
[51:26] Oh, yeah.

Josh:
[51:27] It gets you good results. If it will like give me the wrong answer, I stop it and I say, absolutely not. You need to change direction. This is what I want. And like very quick. So I'm like constantly hounding it with requests.

David:
[51:36] God, I didn't realize Josh is an asshole on his own time.

Josh:
[51:40] I am a value extractor and I want to get what I want from the models. And if that's what it takes, so be it. Like, okay, sorry. You're not sentient yet. Like, I'm going to speak to you in a way that gets me what I want.

Ejaaz:
[51:51] And it's totally not going to use that data to revise itself in its future form,

Josh:
[51:55] Josh. I'm going to have to delete my account before we change you. It's too late, bro. Start a fresh memory, Josh.

David:
[52:00] You're done, Zach.

Ejaaz:
[52:01] Sorry, Sam's selling it to the UAE at this point.

Josh:
[52:05] But the point is, like, how did they get away with this? EJ, do you know how long this was happening for?

Ejaaz:
[52:11] Yes.

David:
[52:12] Ten years.

Ejaaz:
[52:13] This startup's been operational for ten years. Yes. Okay, but it hasn't

David:
[52:17] Always been an AI vibe coding platform.

Ejaaz:
[52:19] No, no, only over the last two years.

David:
[52:22] What was it doing before that?

Ejaaz:
[52:24] It was gathering data, basically, to fuel some kind of product development platform. And then they jumped on the AI hype train. They were like, hey, we could vibe-code whatever you want. Just type in a prompt. And yeah, it looks like all the funding was going to software engineers in India. And by the way, this would never have been discovered if the company that had helped finance them, to the tune of, I think, $600 million, didn't ask for, I think it was about $150 million, to be recouped. They defaulted on that payment, which then led to an investigation of their finances, in which the investigators realized that the funding was going to India. And they were like, hey, why is this going to India? Why do you need so many engineers? And they were like, oh, all the user requests are going to this place as well. And it kind of unraveled. This is the FTX of our era.

Josh:
[53:16] Yeah, Theranos like Elizabeth Holmes type.

David:
[53:18] Yeah, this is Theranos to a T.

Josh:
[53:20] Oh yeah, yeah, we're doing really important work that's really good and totally not sketchy at all. That's unbelievable that they got away with it for that long. That's impressive, honestly. I wonder if things like that are embedded in other systems that we use that we're blissfully unaware of. Like, is there a human element in this? I don't know. I find it hard to believe that it wouldn't become obvious quickly if you were a power user of AI and had to wait and got inconsistent results on each prompt. The whole thing seems fishy.

David:
[53:48] Well, it depends on how hype-y it is, right? If they raised at a $1.5 billion valuation because of revenue and user growth and all these fundamentals, that's one story. But if they just raised on hot air and hype, like, we're an AI dev shop, then people just cut a check being like, I'm bullish, take my money, this could 100x, without the actual user growth and customer stories. It could have just been a hype-based raise, and we see that all the time in crypto: send money first, ask questions later. It depends on which one of those stories it is.

Josh:
[54:24] Crazy. $400 million with no due diligence. Like, you can't even ask the question, hey, what model are you using? What in-house training do you have? Any of those questions would have probably surfaced the fact that they did not have the infrastructure required, and it was just human labor all the way down. That's crazy.

Ejaaz:
[54:41] The humans are so timid and meek, huh? I reckon a bunch of AI agent products, Josh, will be revealed to have something similar. Definitely humans in the loop, because AI agents are nowhere near being able to be fully autonomous yet. We saw a bunch of this in Web3 and crypto already, right? It wouldn't surprise me if one of these leading agents, I'm not going to name names to throw shade, but it's just a bunch of humans on the back end, man.

Josh:
[55:07] That's going to be good. We see this also in robotics a lot, where there are a lot of humanoid robots with the perception of being fully autonomous, when in reality there's tele-operation. So perhaps there's something similar with that. I don't know, that's a weird world. I guess if the result is what I want, then all the power to you. Keep up the good work. But in the case that it's not, I think, yeah, that's probably where we run into problems.

Ejaaz:
[55:32] Yeah. Josh, before we started the show, one final topic that I want us to cover: we were talking about Game of Thrones, right? And how these major companies have been making pretty sneaky moves. We saw some news breaking just this morning, actually, that Anthropic cut off Claude access for Windsurf, which ironically is another company which presumably doesn't have 700 Indian engineers on the back end doing the AI stuff, but actually has autonomous AI agents coding up applications that people use. What are your thoughts on this? Because what I want to understand from you is, how does this impact their product, number one? And then number two, what's the political chess that Anthropic is playing here?

Josh:
[56:14] So welcome everybody to this week's version of Game of Thrones, the segment that we come back to every single week, because the game theory of AI domination is very complex and very exciting.

Josh:
[56:23] So this week we have Anthropic, who is the maker of Claude. Anthropic recently released their 4.0 model, but prior to that they had 3.5 and 3.7, and those were mostly known as the best, the premier coding models. So if you wanted to write code, for a long time Anthropic's models were kind of just the best. So what happened here is we have Windsurf, who OpenAI now owns a very large percentage of, if not the whole thing.

Josh:
[56:44] And OpenAI and Windsurf kind of had this collaboration, and Windsurf was no longer receiving access to Claude. So when Claude 4 came out, Anthropic wouldn't allow Windsurf to access the model. So there's this new premier flagship model, and the way Windsurf works is it aggregates models and then determines which model is best to serve the user's needs at the time.

Josh:
[57:03] So this new frontier model came out. Everyone wanted to use it. And Anthropic was like, hey, we're not giving this to you guys. Sorry, this is just our model. So Windsurf got cut out. And what just happened today is Windsurf lost access not only to the brand-new model but to the previous two models as well, the 3.7 and 3.5 models, which a lot of people deemed best in class for coding. So why would Anthropic do this? Well, here we can place our tin hats on our heads and start to speculate. One thing that is still in Windsurf is Google's Gemini 2.5, which is noteworthy, because now we have two of the biggest models, ChatGPT and Gemini, in the same aggregator. And who owns that aggregator? Well, OpenAI. And who's OpenAI's biggest competition? Well, it's Google. So what happens here is, now there are a lot of requests made to Windsurf, and the decision on which model to serve is split between mostly two. So you could go Google or ChatGPT. In the cases where the best outcome is Google, well, OpenAI gets that data, and they're like, hmm, that was interesting, why would you choose their model versus ours? And they collect that data, and they can see it, and they can use it to refine their model. So, going back to gradient descent and lowering the loss every single time, they can take this feedback from when a user would use Google,

Josh:
[58:14] Integrate that into their future models until that loss gradually decreases to zero, and in no instance does it make sense for users to go to Google, because ChatGPT is so much better at serving their needs. So it's this interesting dynamic where it's like, hmm, okay, we can actually see behind the scenes of our competition and how users are using it, and then we can optimize our models for that. And I think that creates an interesting dynamic where OpenAI once again gets an interesting leg up over the competition.
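
The aggregator dynamic Josh describes can be sketched as a toy router. The model names, the scorer, and the log format here are all invented for illustration; no real product exposes its routing logic like this.

```python
preference_log = []

def route(request, models, scorer):
    # Score every candidate model for this request and pick the winner.
    scored = {name: scorer(name, request) for name in models}
    return max(scored, key=scored.get), scored

def handle(request, models, scorer):
    winner, scored = route(request, models, scorer)
    # The aggregator's owner keeps the outcome: every time a rival model
    # wins, this record shows exactly which requests its own model loses.
    preference_log.append({"request": request, "winner": winner, "scores": scored})
    return winner

# Hypothetical scorer: pretend "gemini" is better at coding requests.
def scorer(name, request):
    base = {"gpt": 0.6, "gemini": 0.5}[name]
    return base + (0.3 if "code" in request and name == "gemini" else 0.0)

handle("write code for a parser", ["gpt", "gemini"], scorer)
handle("draft an email", ["gpt", "gemini"], scorer)
```

After these two calls, `preference_log` shows the rival winning the coding request, which is precisely the signal an aggregator owner could feed back into its own training.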

David:
[58:41] Josh, what do you think the Pareto distribution curve looks like in the AI model world? For the listener, a Pareto distribution is this notion that 80% of all spoils, 80% of rewards, will go to the top 20% of whatever, animals, species, companies, startups; to the rich go the spoils. So if you're in the top 20%, you're getting 80% of the value. If you're in the bottom 80%, you're splitting 20% of the value. Now, that Pareto distribution curve can become more gradual, more equitable, closer to 50-50, or it can become more severe, where the top two players get 90%. What do you think the nature of the Pareto distribution curve is for AI models?
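
David's 80/20 framing is easy to make concrete. A quick sketch, using made-up market shares, of how much the top 20% of players capture:

```python
def top_share(values, top_frac=0.2):
    # Fraction of the total captured by the top `top_frac` of entries.
    vals = sorted(values, reverse=True)
    k = max(1, int(len(vals) * top_frac))
    return sum(vals[:k]) / sum(vals)

# Hypothetical revenue split across five model providers: a classic
# Pareto shape, where one player (the top 20% of five) holds 80 of 100.
shares = [80, 5, 5, 5, 5]
```

The more severe the curve, the closer `top_share` gets to 1.0; in a perfectly equal 50-50 world it falls back to `top_frac` itself.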

Josh:
[59:30] You probably need to split it into two: consumer versus business and industry. Because in the consumer world, it's very clear that ChatGPT basically owns 90-plus percent. I mean, all of the normal people use ChatGPT. They're winning. They're above 90, and everyone else is competing for the final 10%. In the commercial applications, I think that changes a lot, because people don't really care what the interface looks like. They're mostly looking to extract the data from the models. And in that case, well, ChatGPT doesn't even really have the best model.

Josh:
[59:59] Google's Gemini 2.5 Pro models are named terribly, but Google actually has the best model. So I think when you are using Windsurf or Cursor, a lot of the time the user, whether that's the developer, the coder, or the company, is just looking for the best results. And in that case, ChatGPT wouldn't have a monopoly. It would be split, I'm not sure evenly, but much more evenly across Anthropic and Gemini and ChatGPT. And I think that's probably what they're trying to accomplish with this: trying to sway that more towards them. Because when you remove the user interface, the application layer, you are left to compete only on merit. And if it's a merit-based approach, based on benchmarks and the actual quality of the models, well, ChatGPT has a ton of competition. In fact, they're not even ahead. So I would say it's probably close to an equal split. I'd love to actually see the data behind one of these companies, to see how much traffic is being served to which model. But it's a very clear divide between consumer and commercial grade. Like, consumer, ChatGPT is crushing it. But commercial, not so much. There's still a lot of competition there for who's going to win.

David:
[1:01:02] That's the high-quality answer that I think listeners come to the AI roll-up to hear. Josh, that was great. And this week, what was very interesting, I think there is so much more to explore with the whole geopolitical lines. We're going to need a guest, I think, to talk to us and inform us about what things look like under the hood, which is a hard guest to get, because if it's a matter of national defense, not very many people are talking. But I definitely want to talk to whoever the right person is about that stuff. Another week in the books. Joshua Dross, thank you guys for coming with me down the AI roll-up rabbit hole.

Josh:
[1:01:38] Always a pleasure. We'll see you guys next week.
