Zuck's $2.3 Billion Talent Heist: Meta's Plot to Crush OpenAI

Ejaaz:
[0:03] So I'd barely gotten out of bed on Monday morning and I had this headline staring me in the face, which said Mark Zuckerberg had just spent 2.5 billion dollars to steal eight of the top AI researchers to create a new AI lab called Meta

Ejaaz:
[0:19] Superintelligence Labs, which has the sole focus of creating artificial general intelligence. This might be the most money ever spent per person on a tech hire. We're talking about $300 million per person. Six of these guys, by the way, were founding engineers of ChatGPT. And bear in mind, this comes two weeks after Sam Altman himself said that no one important would leave and take Zuckerberg's offer. It turns out he was incredibly wrong. I think at this point, Josh, it's safe to say that the world has officially gone crazy and the stars of the AI arena are literally being traded like NBA superstars. Let's open up the headline right here. So we basically have this headline that says, Mark Zuckerberg announces creation of Meta Superintelligence Labs.

Ejaaz:
[1:12] Now, just to set some context here, Meta had been falling behind quite significantly in the AI race. I don't know whether you want to agree with me or not here, Josh, but basically... Yes.

Ejaaz:
[1:22] Right. So all the AI models that they had produced so far initially got a bit of a buzz, because Llama was open source. But then they quickly fell by the wayside as newer models, from, for example, DeepSeek, and then from the centralized providers like OpenAI, just absolutely crushed them. So Zuck was kind of between a rock and a hard place. He had to figure out something to do. And there were these murmurings that he was kind of wining and dining AI researchers at some of the top firms, like Google, Anthropic, and OpenAI, to see whether they wanted to move over to Meta's labs and create this new kind of hyper AI team. And so what he ended up doing was hiring eight of the best researchers from these different companies to create this new lab. Some highlights from this: it's going to be headed up by Alexandr Wang, the guy who headed up this company called Scale AI, which Meta recently invested $15 billion in to acquire a 49% equity position. So he is spending north of, I think, $18 billion at this point to acquire about 10 guys, which is crazy.

Ejaaz:
[2:28] Furthermore, this new intelligence lab is going to be co-led by the former CEO of GitHub and one of the co-founders of Ilya Sutskever's recent company that he set up. I think it's called Safe Superintelligence, Josh.

Speaker0:
[2:43] Is that right? Yes, that's correct. SSI.

Ejaaz:
[2:45] Right. So what I'm basically trying to say here is he's assembled the Marvel Avengers of AI, seemingly out of nowhere, and he's spent $18 billion to do so. This is pretty insane. So in Mark Zuckerberg's words, he goes, as the pace of AI progress accelerates,

Ejaaz:
[3:02] Developing superintelligence is coming into sight. I believe this will be the beginning of a new era for humanity, and I am fully committed to doing what it takes for Meta to lead the way. Today, I want to share some details about how we're organizing our AI efforts to deliver our vision: personal superintelligence for everyone. We're going to call our overall organization Meta Superintelligence Labs. This includes all of our foundations, product, and FAIR teams. He's basically referring to all the other existing AI teams that Meta already had. So he's going to bunch them all together into this one lab, as well as a new lab focused on developing the next generation of our models. Now, I'm sure a bunch of you are curious as to, like, who the hell he hired in the first place. So let's pull up this tweet that was shared by the new lead of this lab, Alexandr Wang, where he goes, I'm excited to be the Chief AI Officer of Meta, working alongside Nat Friedman, that's the former GitHub CEO, and thrilled to be accompanied by an incredible group of people joining on the same day. Now, I just want to start this off. There's a list of names right in front of us, for people who are just listening on the pod. And it's a number of different names with varying backgrounds. The main thing that strikes me immediately is that six out of eight of these names are guys that helped create the existing leading OpenAI models.

Ejaaz:
[4:21] That's just insane. We're talking about some of these guys that helped create GPT-4.5, the o-series models, reinforcement learning. There's actually a guy here right at the top, Josh, I don't know if you can see, called Trapit Bansal.

Ejaaz:
[4:34] This is a guy I've been following for a number of years now, from well before this current AI wave, because he was one of the pioneers of reinforcement learning and all the OG reasoning techniques. This is that guy. So basically, Zuck's gone out and hired all the OGs of AI. And this is just a crazy roster. Do you have any immediate thoughts on this, Josh? Like, I'm looking at your face, you're looking absolutely perplexed.

Speaker0:
[4:59] Yeah, there's just, like, irony and craziness. And it's funny because, like, a few months ago, to your point, Meta was very behind. They were releasing three models, right? They had Behemoth, Scout, and Maverick. And Behemoth was supposed to be this massive foundation model that was going to compete with all the frontier models. It never happened. We never got Behemoth. And suspicions are we might not ever get Behemoth. And then we had Maverick and Scout. One of them kind of sucked. It was a direct competitor to DeepSeek. And the other one was okay. But then after a few weeks, the other models came out, they were better, they were smaller, and they just kind of got crushed. So by all means, Meta got their ass kicked in this most recent wave of their model releases. So that was pretty high signal. My take from it was, maybe Meta just isn't going to compete on foundation models. Maybe they'll kind of take the Apple approach, where they will compete on the app layer. They have a bunch of users. They don't need to give them AGI. They just need to give them, like, high value from these AI models. And then these headlines started popping up where he acquired Alex's company for, what was the number? How many billions of dollars?

Ejaaz:
[6:01] Fifteen.

Speaker0:
[6:02] Fifteen billion dollars. So now he's paying essentially over a billion dollars for a single person, because that's not a very big team. And he's paying for, essentially, the small ragtag team that works with Alex to lead this new development. So you're like, okay, something interesting is going on here. There's this other bit of irony that I really appreciated, which is that, supposedly,

Speaker0:
[6:19] this is a rumor, that they made a bid for Ilya's lab, which was Super Safe... or Safe Super...

Ejaaz:
[6:26] Safe Superintelligence. Yes.

Speaker0:
[6:29] And that would have been for billions of dollars. They would essentially be acquiring another talented person for billions of dollars. But the funny thing about Ilya is, once again, he was early and he was right. I think one of the trends that we're seeing throughout all these labs is the lack of use of the word AGI. No one's really going for AGI anymore, because I think the definition has become very fuzzy. No one really knows what it is. But superintelligence? Oh, we're going for superintelligence. We're going right for the gold. So depending on who you ask, we've already reached AGI. What is superintelligence? I think superintelligence is post-AGI. We're just kind of removing AGI from our memory. We're like Men in Black: shoot the laser beam, forget we ever said that, we're going right to superintelligence. And superintelligence is, I mean, my perception is, just building an AI that's much smarter than a human across every single facet of life. So I think that's now the goal. So once again, Ilya was right, Ilya was early. And it was funny, because it doesn't appear that Ilya actually wanted to join Meta, and that makes sense, because Ilya has all the offers under the sun to work anywhere; he decided to do it himself. But there is this interesting thing where he had his co-founder, which was Daniel Gross, and Daniel Gross co-founded Safe Superintelligence with Ilya. And Daniel was actually one of the names on this list of people who were potentially going to get poached, which is an interesting dynamic that you can kind of read into and think, well, hey, Ilya and Daniel kind of co-founded this together.

Speaker0:
[7:51] Ilya is very much mission-driven. He will never leave. But Daniel was maybe like, hey, dude, I kind of want to collect a check. I want to retire the next, like, 10 generations of my family for a billion dollars. So what happened is, Daniel Gross and this guy Nat Friedman also own a venture capital firm. And that is the firm that Meta targeted to buy out, so that he could then collect the paycheck. And you're really seeing all of these weird dynamic power plays from all of these top people who have lots of power. And the numbers are ridiculous. It's like hundreds of millions of dollars per employee. So if you're the employee, you have a really difficult decision to make. Like, hey, do I want to never think about money again? Or do I just want to work on AGI at the company that I believe is most likely to achieve it?

Ejaaz:
[8:34] It sounds like everyone has a price. It sounds like it's the former, Josh, right? So just to give a bit more color on the $300 million figure: $100 million of that is a sign-on bonus, and the remaining $200 million gets paid over three years, supposedly, per the news that's being shared, right? Another thing that I'm thinking about is the way you described superintelligence, which I completely agree with.
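
For anyone who wants the math on that rumored package, here's a minimal back-of-envelope sketch; every figure is the rumored number from the discussion above, not a confirmed term sheet.

```python
# Back-of-envelope math for the rumored Meta offer discussed above.
# All figures are rumors repeated on the show, not confirmed terms.

signing_bonus = 100_000_000   # rumored one-time sign-on bonus, USD
deferred_comp = 200_000_000   # rumored remainder, paid out over time, USD
payout_years = 3              # rumored payout period

total_package = signing_bonus + deferred_comp
per_year_vest = deferred_comp / payout_years

print(f"Total package:          ${total_package:,.0f}")
print(f"Year 1 (bonus + vest):  ${signing_bonus + per_year_vest:,.0f}")
print(f"Years 2-3 (vest only):  ${per_year_vest:,.0f} per year")
```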

Ejaaz:
[9:00] And it sounds like it's something better than AGI, which wasn't really well-defined in the first place. So now I'm just kind of thinking to myself, is this just a new acronym that people are kind of floating around and trying to turn into a big hype-cycle-type situation, right? And then the third thing that pops into my head is, if people were leaving the companies where they were supposedly so enamored with AGI, and they were so convinced that they were going to build it, why would they leave in the first place? It kind of tells me, indirectly, that they weren't, or aren't, close to building whatever AGI was. And now they're pivoting to taking, basically, the bigger check. So, kind of bearish from that perspective. But I'm glad that people are, like, touching grass more and coming back to reality a bit. But the one thing that really sticks out to me, Josh, is we were literally saying on the show, I think two episodes ago... we were showing a clip, and I'm going to play it now, of Sam Altman basically roasting Mark Zuckerberg.

Speaker0:
[9:58] Speaking of social, actually, can we ask you about the whole Meta-Scale thing? Of course. So what's the situation there? Look, I've heard that Meta thinks of us as their biggest competitor. And I think it is rational for them to keep trying. Their current AI efforts have not worked as well as they've hoped. And I respect being aggressive and continuing to try new things. And again, given that I think this is rational, I expect that if this one doesn't work out, they'll keep trying new ones after that. I remember once hearing Zuck talk about how, with Google in the early days of Facebook, it was rational for them to try social, even though it was clear to people at Facebook that that was not going to work. And I feel a little bit similar here. But they started making these giant offers to a lot of people on our team, like $100 million signing bonuses, more than that comp per year. And actually, it is crazy. I'm really happy that, at least so far, none of our best people have decided to take them up on that. Well, that's wrong.

Ejaaz:
[10:58] Exactly. Like, famous last words, right? So the interview then goes on to say, basically, that Meta has lost the ability to innovate and that they're only paying to play, and that's not a useful strategy. Turns out it actually is. I've heard on the rumor mill that Zuckerberg was actually

Ejaaz:
[11:15] personally DMing, personally emailing, and hosting these AI researchers at his residence. So he was having dinner with these guys. He was wining and dining. And it just goes to show, and it's actually kind of impressive: for a sitting CEO of a multi-trillion-dollar company to be taking time out of his schedule just shows how important something like this is and how competitive it's gotten at this point. Like, I think this is probably the most competitive we've ever seen any kind of AI sector, or any technology sector, get as it's being built. Any thoughts, Josh?

Speaker0:
[11:49] Yeah, well, this is why we're bullish on founder CEOs, because founder CEOs have this care and this fury that a lot of the CEOs of larger companies, like we see with Microsoft or Google, do not have. They're not going and personally scouting these hires. They're not putting everything on hold. And you kind of see this with a lot of the great founder CEOs. Elon is another example, where there is only one problem that they're focused on at a time. And that gets 100% of their attention, because it's all that matters. And in the case of Mark Zuckerberg and Meta, well, solving AI is a trillion-dollar upside. You have the opportunity to 10x your market cap if you solve this. So it's the only thing that matters, and it's all you should be focused on. And I was kind of walking through the thinking process of what it must be like to be one of these engineers, where you wake up in the morning, you check your phone, you have a text from Mark Zuckerberg. First of all, I don't even know how you verify that it's Mark texting you. Like, hey, can you wire me a million dollars just to verify, something crazy like that. But I was trying to figure out why people would want to go work for him. Because let's say you're an OpenAI employee. You've worked on reinforcement learning. That's what you've designed, and you have made an entire career of that. You've helped OpenAI get much closer to AGI and then superintelligence. And I was trying to figure out why, and there's two reasons. One, it's the money. I mean, $100 million, $300 million, however much it is, it is enough for generational wealth forever. But the second thing is that Meta is run a little bit differently than OpenAI.

Speaker0:
[13:10] And Meta also has a much larger user base. Meta is, if not the largest platform in the world, one of them. They have a couple billion monthly active users, which is a significant portion of the globe. And they have a leader who has full voting rights to manage and control the company. So in theory, if you are someone who wants to make an impact on a lot of people's lives as quickly as possible, there's a case to be made that all you have to do

Speaker0:
[13:35] is go to Meta, collect a couple hundred million dollars, convince Zuck to implement the thing that you believe strongly in, and then have it rolled out to a billion users very quickly. And I'm not sure that's the case with OpenAI, where they're kind of moving towards this goalpost of AGI, but they don't have the users, and they do have a bit more bureaucracy when it comes to controlling and designing these things. Where, I mean, even within OpenAI, they have this split control, right? Where, like, Sam Altman's the CEO, but Greg Brockman is the technical guy, and if you want to push technical changes through, you go through Greg. And maybe they're not happy with that relationship; there's a lot of interpersonal things happening here. But that was one of the suspicions as to why you would leave, just kind of playing it out in a game-theory way. And it was striking, because, by all means, Meta has done a terrible job with launching AI, and by all means, this probably doesn't yield a culture similar to OpenAI's, because they haven't been through the trenches together for a long time. Yeah. But there's certainly the intention there to do so: to solve superintelligence, to build a great culture, to make this new team.

Speaker0:
[14:37] And I don't know, it's fascinating. It's crazy power dynamics that are happening.

Ejaaz:
[14:42] I think it's a difference of founder personalities, right? Because what you just pointed out there is, so, Sam Altman and Zuckerberg are both CEOs and founders of their respective companies, right? It's still a centralized structure. And you could argue that Zuckerberg's setup is actually more centralized than OpenAI's. He has this special class of shares which basically gives him outright voting control in any board meeting. So the strategy that the company takes is basically Mark Zuckerberg's opinion and thought process. That's basically how Meta is run. But the flip side of what he's doing here is, number one, he's paying these researchers financially. A shit ton of money. And then, number two, he's forfeiting some of his ownership rights to give to these AI researchers, and just saying, listen, just come over here. You're going to be led by Alexandr Wang, who I just paid $15 billion for, but you're going to have autonomy to build whatever the hell you want. And I can imagine, to your point, Josh, under a leadership like Sam and Greg's at OpenAI, where it's a lot more concentrated, and it's ironic to call OpenAI a startup, given their multi-billion, potentially even trillion-dollar valuation, it's more tightly run, right? You know, they're hungry for it. They're still, what do you call it, the underdog. They're still trying to catch up to the big guys. Totally.

Ejaaz:
[16:05] They have to be more careful about which decisions they make. Whereas Zuck, with all this money, is just like, hey, you know, I'll give you guys two years, build the best frontier AI model. And then, I'll tell you what, I've got two billion monthly active users that you can plug into and test any idea you could ever want, right? And I'm going to pay you handsomely in advance of that, which is just insane. The other thing that I was thinking about, Josh, is, like, if you were OpenAI, if you were Sam Altman, after talking such a big game, how do you feel right now? How do other leaders at OpenAI feel right now? I pulled up this tweet from Signal, which takes excerpts from quotes from members of the company, and which basically goes as follows.

Ejaaz:
[16:49] I feel a visceral feeling right now, as if someone has broken into our home and stolen something, Mark Chen wrote. Please trust that we haven't been sitting idly by. Chen promised that he was working with Sam Altman, the CEO of OpenAI, and other leaders at the company around the clock to talk to those with offers, adding, we've been more proactive than ever before. We're recalibrating comp, and we're scoping out creative ways to recognize and reward top talent. Now, the question I have from that is: what's taken you so long? It's so obvious that this would have happened anyway, right? And then I'm going to pull up another tweet, which actually breaks down the compensation that currently exists between the top AI companies right now. It goes, you know, breaking news: Mira Murati, who was actually the former CTO of OpenAI, and who left to start her own startup called Thinking Machines. It goes, Mira Murati is offering top dollar to technical talent,

Ejaaz:
[17:46] Much higher base salaries than Anthropic or OpenAI. And then it breaks down the different wage compensations. Thinking Machines is offering a 500k base annual salary. Anthropic is offering, and it's in quotation marks, a "safe" 400k. And then OpenAI, which is the leading, number-one model provider, is the lowest of the bunch: 300k. And I know, for those of you who are working outside of the technology sector, all of those figures sound insane. And to me, it also sounds insane. But within the technology sector itself, even pre-AI, a 300k salary is not uncommon among some of the top non-AI engineers in the field, right? So we see this kind of dichotomy of OpenAI saying that they're going to do stuff about it, but repeatedly kind of underperforming or missing the mark. And I know, Josh, it's just this trending habit of OpenAI kind of overpromising and underdelivering recently, which is a crazy thing to say, given they have the best model.

Speaker0:
[18:44] Yeah, it's fascinating, because it feels like this is a big shift in the Overton window, I guess, in what is acceptable for paying employees. I think a lot of the perception in the past was that as a well-paid technical worker, you'll have enough money to be okay. It probably won't retire you, unless you have stock options and the stock does very well, but you'll do good. And then I think there's this new frontier, which is just, hey, we're building towards this world that kind of flips the economy on its head, and the amount of productive output we will get from these new models will actually offset the cost of everything. So why not pay a tremendous amount of money for it? And I think this is a new thing, where OpenAI, of course they didn't want to pay $100 million for their employees, because they didn't need to, and they're already losing money. OpenAI isn't going to print a profit for years, because they're spending so much on training. So now Zuck coming in and saying...

Speaker0:
[19:38] It's basically like finding the highest bidder. Now Zuck's coming in, he's coming to the auction house. He's like, nope, everyone's price is going up. I want this talent. I'm going to take it. And now it creates this crazy bidding war. And you can kind of rationalize this in the sense that when companies make decisions, it takes a lot of money and a long time to actually act on those decisions. So when you're training a model, it takes billions of dollars, it takes months of time, and you kind of do it with this new technology stack. And the technology stack gets curated by the researchers, who they're paying a ton of money for, and by the engineers who build the code around it. But in the case that someone makes the wrong decision, or someone comes to the wrong conclusion, those mistakes become very expensive and very time-consuming. And they kind of compound on themselves, where if one researcher says the wrong thing to the right engineer who has the authority to push a change forward, and it turns out it's not that great, well, you end up with Meta's Behemoth, where it doesn't work and they never actually get it to ship to market. So the consequences of these decisions are so high that it actually makes a lot of sense to pay a premium for the people most probable to make the right decision. It's just like, man, how much are we going to pay these people? These are like professional athletes. More, even.
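
That "pay a premium for better decisions" argument can be put into a toy expected-value sketch. Every number below is an illustrative assumption, not a sourced figure; the point is only that when a failed training run costs billions, a better hit rate can be worth a nine-figure premium.

```python
# Toy expected-value model for "pay a premium for better decisions."
# All numbers are illustrative assumptions, not sourced figures.

training_run_cost = 2_000_000_000  # assumed cost of one frontier training run, USD
p_success_average = 0.40           # assumed hit rate with an average research lead
p_success_elite = 0.70             # assumed hit rate with a top-tier research lead
elite_premium = 300_000_000        # the rumored per-person package from the episode

# For a geometric process, the expected number of runs until one
# succeeds is 1/p, so expected training spend is cost / p.
expected_spend_average = training_run_cost / p_success_average
expected_spend_elite = training_run_cost / p_success_elite + elite_premium

print(f"Expected spend, average hire: ${expected_spend_average:,.0f}")
print(f"Expected spend, elite hire:   ${expected_spend_elite:,.0f}")
print(f"Implied value of the premium: ${expected_spend_average - expected_spend_elite:,.0f}")
```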

Ejaaz:
[20:48] Yeah, it's this recurring trend that Zuck specifically has implemented throughout his entire career, to be honest, which is that he kind of breaks the glass ceiling every time when it comes to pricing, right? Do you remember when he bought WhatsApp for... what was the price back then? I think it was, like, either 1.5 billion or 10 billion. I know that's quite the range, but it was a huge number. I remember reading that headline back then and thinking, holy shit, that is a lot of money.

Ejaaz:
[21:17] He's massively overpaying.

Speaker0:
[21:19] And prior to that... And if I remember correctly, the size of the company was under 100 people, right? So he was, again, paying a huge amount for very few people.

Ejaaz:
[21:25] It was 15 people at WhatsApp.

Speaker0:
[21:28] So he's done this before.

Ejaaz:
[21:29] Yeah, I remember thinking that and being like, Zuck, you're completely nuts. And that was a few years after he bought Instagram for something like, I don't know, 500 million or a billion dollars, which, again, I thought was really dumb. But time and time again, Zuck has shown this strategy, which always seems to work, which is he kind of sees the value, to your point, that is going to be created. Let's say you own the best AGI, or superintelligence, whatever acronym you want to use now. That model is going to create trillions of dollars of value for you, especially if you have all the users as a moat in your existing company. So it's a no-brainer to just spend, you know, whatever, 0.5% of the cash reserves that you already have if you can end up owning this technology in the future, which is just insane. I was thinking about this, you know, $300 million per person, Josh, and I was like, is this fair? And then I did some digging. Guess how many top AI researchers there are in the world right now.

Speaker0:
[22:27] Guess. Okay, so I'm going to guess. I know the number is small. It has to be in order for these people to...

Ejaaz:
[22:33] It's very small.

Speaker0:
[22:34] So I would imagine it's somewhere in the hundreds... Not in the thousands, maybe 500 people.

Ejaaz:
[22:42] 150.

Speaker0:
[22:43] 150.

Ejaaz:
[22:44] That's the estimate across a number of different articles.

Speaker0:
[22:47] And those 150 are what? How would we classify those people?

Ejaaz:
[22:51] They have the ability to basically train some of the top frontier models. So they are experts in all the techniques that you would need to train a model: reinforcement learning, reasoning. They just know how to design these different training runs. They are hardware experts, so they know not only the exact type of chips, but how to design and iterate on those chips

Ejaaz:
[23:15] so that you can have better training runs. And then, three, they know how to integrate data into these AI models, which, as we've spoken about a number of different times on this show, sounds like a very boring thing but is super important to making your models really expressive. It's why Zuck paid $15 billion to invest in that company, Scale AI. Scale AI is an AI data company. The only thing they do is take data and help you kind of mold it into the AI models that you're creating, which sounds super boring. No one had really heard of Scale AI. So boring. Right?

Speaker0:
[23:48] Yeah, I mean, they got $15 billion.

Ejaaz:
[23:50] Exactly, right? So I was thinking, holy shit, like basically AI talent is the scarce resource at this point. With 150 to 200 people in the world, that's just insane.
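
To make the "boring" data work Ejaaz just described concrete, here's a minimal sketch of one common step: converting raw labeled examples into the chat-style JSONL format widely used for supervised fine-tuning. The field names, examples, and output path are illustrative assumptions, not Scale AI's actual pipeline.

```python
import json

# Minimal sketch of "molding data into a model": turn raw labeled
# Q/A pairs into the chat-style JSONL format commonly used for
# supervised fine-tuning. Everything here is illustrative.

raw_examples = [
    {"question": "What is 2 + 2?", "expert_answer": "4"},
    {"question": "Name one frontier AI lab.", "expert_answer": "OpenAI is one example."},
]

def to_chat_record(example: dict) -> dict:
    """Wrap a raw Q/A pair in the message structure a trainer expects."""
    return {
        "messages": [
            {"role": "user", "content": example["question"]},
            {"role": "assistant", "content": example["expert_answer"]},
        ]
    }

# Write one JSON object per line, the usual fine-tuning file layout.
with open("train.jsonl", "w") as f:
    for ex in raw_examples:
        f.write(json.dumps(to_chat_record(ex)) + "\n")

print(f"Wrote {len(raw_examples)} fine-tuning records to train.jsonl")
```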

Speaker0:
[24:02] The startling takeaway for me is that everyone's worried about these middle-class jobs getting removed from the economy, when in reality, in the entire United States, there's 150 people that are building the entire new economy. So it's like, we've already removed all the jobs, because the 150 people that are being paid hundreds of millions of dollars are going to build the AGI, or artificial superintelligence at this point, that will take over everyone else's productive output. So that, to me, is the startling thing. It's like, wait a second, we've already been reduced to 150 people. And I'm sure that number will continue to shrink as these systems get smarter. And then we're all just maintenance workers. We're all just kind of these, like, physical machines that are working to maintain and entertain other physical machines. It creates this super weird dynamic that feels a little bit uncomfortable.

Ejaaz:
[24:45] I prefer the term supervisor, Josh. So in the future, when my entire job role and voice have been replicated, just call me a supervisor so I feel somewhat important, please.

Speaker0:
[24:56] I actually have a question for you, EJ.

Ejaaz:
[24:58] Go.

Speaker0:
[24:58] In terms of software engineers today who are not in these 150 people, a junior software engineer coming out of college, do you think there's a role for them anywhere? Because I would imagine as these people get more and more leverage, they are just able to do more and more work. And there's not really a need for people who aren't at the top echelon of these services.

Ejaaz:
[25:19] The data says no. Did you see the report that came out of the UK this week?

Speaker0:
[25:24] No, please share.

Ejaaz:
[25:26] So it's this report that was carried out in the UK, which tracks entry-level jobs for graduates from all different kinds of professions. And the long story short is a 32% haircut ever since ChatGPT launched. So what it's basically inferring is that a lot of these employers no longer need these entry-level graduates, who have, again, spent three to five years studying all of these degrees in varying levels of context and detail, only to be replaced by AI across all these different kinds of functions. So it's this weird thing where you could be an entry-level software engineer and there's no real job for you. And I also keep hearing rumors, from at least the Silicon Valley side of things, that it is just hyper-competitive right now. Some of the top graduates from Harvard and the likes just can't seem to nail down jobs. They're each going out to, like,

Ejaaz:
[26:26] you know, vibe coding events, or they're starting their own startups. But most of the technology companies... I saw a crazy stat, by the way: the CEO of Salesforce recently said that 40% of the work at Salesforce is now being aided or performed by AI. I saw that. That's so crazy. That is a huge amount.

Speaker0:
[26:46] And we're still early. We are in the AI coding companion era. It is not even building its own code bases. It's just a companion coder. So the fact that we are at 40% now is shockingly high, right? Yeah.

Ejaaz:
[26:58] So I think we've reached a point, Josh, where if you don't use AI, if you don't learn new skills that leverage AI, you're going to get left vastly behind. And that'll end up being the majority, at least in the near term. But it's equally true that if you spend even a little time learning a few AI tools, your compensation basically skyrockets. And I've seen a bunch of charts which show this, right? Just insane. Josh, I have one final question for you before we wrap this up, because I think it's super important. Now, Meta's AI strategy to this point, pre-spending $18 billion to get, like, 10 people, has been open sourcing their AI models, right? And the reason why they created this strategy was because they were behind in the AI race. They didn't have a frontier model. And their thought was, if we spend billions of dollars trying to create a fairly good model, and then open source it, you can have a bunch of developers who basically create better products with it, and it increases the relevance of our model and also the potential applications. So basically, you outsource the work, and then eventually you improve your model, and it becomes a fast iteration cycle, and you end up, hopefully, catching up with everyone else. But now what they've done is they've gone and acquired, and spent a hefty check on, all of these top AI researchers to, presumably, train the new frontier model. So they're going to leapfrog these guys, and he's given them a two-year time span, right?

Ejaaz:
[28:23] By the end of 2026, we're going to have a better model than anyone else in the playing field. Do you think he keeps open sourcing it?

Speaker0:
[28:30] No, you can't. The reason you open source it, I mean, optically is to say, look, we care. We want to open source AI. We want to make sure the world can democratize it. But the reality is you open source it for product velocity. You open source

Speaker0:
[28:41] it to give access to developers. You open source it to allow other people to chip in and build on your code base. And I think this is comparable to what we see in China, where China has these DeepSeek models that are good, not great, or maybe great but not quite the best, and they're doing it all open source. They're building that product velocity. They have all the developers contributing. They have developers offering algorithmic, incremental improvements, and they're slowly getting better. But the second someone gets in the lead, why would they open source that? And I think that's kind of the argument for why all the leading models will eventually be closed source: why would you? You're spending how many billions of dollars on talent; why are you just going to give that away when you can silo it and offer it to all of your users in a custom way? And you're seeing these tensions happen with the recent acquisition of Scale AI, where they had big contracts with Google, and they had big contracts with a few other people, and they immediately cut ties, because they're very secretive about their data, how their algorithms work, how their models perform. And I don't see a world in which that happens, where if Meta released Behemoth today, I'm not sure that would be open source. I think, perhaps, Maverick and Scout, I'm sure they'll have open source models that they provide, but the leading frontier model? That would seem kind of crazy for them to open source.

Ejaaz:
[29:55] Yeah, I can't help but agree with you, to be honest. And also, I remember there being a stipulation in the open-source license that Meta attached to their existing models, which is, I think it's something like, if you get over, I don't know, 100,000 users for your open-source iterated model that uses their open-source model, then it gets restricted under IP clauses, which basically means that they'll be able to own the IP after a certain number of users. Which I thought was always indicative that they were eventually going to close-source this entire system, and this was just kind of a strategy for Zuck to catch up. Well, I mean, just a crazy, crazy segment, to be honest. And I was so excited to talk about it this week. Josh, any further thoughts from you?

Speaker0:
[30:40] Oh, man. I feel like I've thought a lot about the junior software engineer thing, and the conclusion was, you really just need to get good at three things in order to win in this AI world. And it's like, you don't really need to learn how to code, necessarily. You need the high agency to want to go and try new things. You need the ability to become an autodidact and learn very quickly on your own, and kind of learn a lot of breadth instead of depth. I think a lot of the benefits that will come from deep knowledge will come from cross-referencing different indexes or categories. And then you also just need to strictly leverage AI tools. I can't express that enough. If you go and take a class on Python, that is good, but your AI bot will be able to write you all that Python in a single shot. So get in the habit of learning how to provide context to these models and interact with them. I think that's really the only way to get ahead in this new, weird world order. And then that's how you get a $100 million paycheck from Zuck. So if you want to do that, dig in. Now is the best time ever to do that.
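
As a minimal sketch of what "providing context to these models" can look like in practice, here's an example using the OpenAI Python client. The model name, file path, and prompt are placeholder assumptions; the same pattern works with any provider.

```python
# Minimal sketch of providing context to a model with the OpenAI
# Python client (pip install openai). The model name, file path, and
# prompt are placeholders; adapt them to your own provider and task.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Context you gathered yourself: a source file the model should reason over.
with open("my_module.py") as f:
    source_code = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a careful senior code reviewer."},
        {
            "role": "user",
            "content": "Review this module and list concrete improvements:\n\n"
                       + source_code,
        },
    ],
)

print(response.choices[0].message.content)
```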

Ejaaz:
[31:43] I'm going to be honest with you. I think it's the most rational choice for those AI researchers to take that check, regardless of what the vision was for Meta Superintelligence.

Speaker0:
[31:51] Man, it's like you want to just get your bag before we solve AGI, right? And money doesn't matter.

Ejaaz:
[31:56] So they're kind of selling the top in a weird way.

Speaker0:
[31:59] Yeah, these are like your last chances to get to like first class royalty in this new world. Otherwise, you're just part of the universal basic income crew of everyone else.

Ejaaz:
[32:06] Yeah. Well, Josh, thank you so much for spending the time with me to discuss this. Folks, thank you so much for tuning in to the Limitless podcast. We are now posting multiple videos per week, a mixture of interviews and kind of breaking-topic segments on all the latest news. Please like, subscribe, and follow us on all our social media platforms; we're pretty much everywhere. And we'll see you on the next one. Thanks, folks. Yeah, thanks.

Speaker0:
[32:32] We'll see you soon.
