THIS WEEK IN AI: Google's New "Vibe Designer", The Rivian Uber Disaster, OpenAI's Shopping Spree

Ejaaz:
[0:00] Google's been on absolute fire this week. They released two new AI products, one called Stitch, which replaces

Ejaaz:
[0:05] every single bit of work that a graphic designer needs to do in seconds. It also wiped out $2 billion from Figma's market cap. It's that strong. They also released a second vibe coding product, which competes directly with Anthropic's Claude Code. And dare I say, it might even be better in many cases. All of that and much more in the news this week. So let's hop right in. This is a really cool product, Josh. I've been seeing a bunch of different examples. For example, this. Someone drew a really dodgy sketch on a notepad of the website that he wanted to build. And this Google Stitch product basically generated a full production website in seconds. And it looks really cool. What you're seeing on the right here is the finished product.

Josh:
[0:48] It's amazing. Google Stitch is this unbelievable product, which they describe as your vibe design partner. It's basically an open canvas that is capable of accepting any sort of inputs, like napkin diagrams or images or prompts that you either speak or write to it, curating that and turning it into an actual website, an actual visual that's usable. So a lot of people are building landing pages, they're building color palettes, all these powerful assets for their company. And this is the one-stop shop for that. So you mentioned Figma, which is currently the most popular design application in the world. They had a really tough time yesterday, dropping 9%, I think, in one day. They're now down 80% all time. And I think this is reflective of two things. One, SaaS companies in general are not doing too well. Because people can no longer project revenue as far out as they used to, the multiples have compressed quite a bit. So now you can't price these things at forty times earnings. But also, Google is one feature away from really crushing your company. And I used Stitch yesterday. I used Stitch last night. I was playing around with it quite a bit. Right now we're actually looking for sponsors at Limitless, and one of the ways that we can onboard people and get them excited about the show is to create a landing page that shows all of our documentation, all of our statistics. And all I did was feed it our stats, and it created this beautiful landing page full of all of our numbers, all of the diagrams, all the visuals. And then.

Ejaaz:
[2:16] You could click

Josh:
[2:17] A single button that's the prototype button, and it'll actually generate a clickable prototype of this website. So you don't need to program in any buttons. You don't need to program anything. You just say a prompt, feed it whatever data you have, click the prototype button, and you're on your way. And it's incredibly powerful for people like me who don't really know how to use Figma that well. I'm not a designer. I'm not this incredibly artistic person. I just kind of know what I want. And as someone who would have otherwise needed to use Figma to design something like this, I can go to Google, speak into my mic, tell it what I want, and it builds it for me. It's unbelievably powerful. So the stock market is now pricing this in. These poor people at Figma, man. They had an acquisition offer prior to this, right? These Figma employees are really getting wrecked.

Ejaaz:
[2:58] Yeah, they're getting wrecked. I can't emphasize enough how valuable a skill Figma is, or was, to graphic designers. They would actually be rated and hired based on how well they could use a product like Figma, because it's used for graphic design of a lot of web UI and app UI. And it's kind of tough to use. Like you mentioned, you haven't used Figma, and neither have I. This single feature update from Google has now made it super simple. You just write in natural language and it generates this. Heck, you don't even have to write. You could just draw on a napkin and presto, you have a new website that's fully coded.

Ejaaz:
[3:31] But it's not just Google that can release an AI feature and decimate an entire stock market. We've seen this before. We've got Anthropic with Claude Code and OpenAI with Codex, which released coding AI agents, and that decimated software engineering jobs and a lot of the computing market. But then both of them released a security review tool, which decimated the cybersecurity stocks. And then I remember the week before that, Anthropic released a legal plugin, which decimated the law firm stocks. So the point is, we're in this weird era where a single product feature release, which, by the way, is built by an AI model so rapidly that they're releasing these plugins every other week, or every single week at this point,

Ejaaz:
[4:09] Decimates an entire industry. And so the chart that you're looking at here on Figma is probably going to be something we see more commonly. But Google didn't stop there. They also released a Claude Code competitor. They're going for Anthropic's jugular as well. It's a new vibe coding stack in Google AI Studio, which basically leverages their coding agent called Antigravity. And it's a cool new stack for a few different reasons. Number one, you don't need to spin up a command line interface or know how to use a coding terminal. You can just access it in your browser, super simple, super quick, and start coding up whatever you want. Number two, you can connect it to any kind of database. I saw a cool example of someone connecting it to his Fitbit, and it has live data feeding into whatever vibe-coded product he has within his Google terminal, which is awesome. And the third thing is you can spin up multiple agents to work on multiple things at the same time, which is a huge advantage that Codex and Anthropic's Claude Code had. Now you can do it with Google, which is great, because they were kind of lagging behind on the coding front, but now they're finally stepping up. So it's awesome to see.

Josh:
[5:11] Yeah, Google is really on fire across all platforms. And I think the trend we've seen is that the loud ones are Anthropic, the loud ones are ChatGPT and OpenAI, but the reality is that Google's doing very well. And there was this take that I saw from Austin Albright, which I loved. He said, honestly, Google sucks at marketing its AI products. Gemini and family are dramatically under-hyped for how good the models are. And this is so true. I feel like Google doesn't really get the talking points it deserves because they're just not really good at selling the product. If you pick anyone off the side of the road and ask them what AI they use, I would guarantee it's either going to be ChatGPT or Claude for 90% of people, probably more than that. And it's because there's just a marketing and awareness problem. Google has never been great at selling their products to the people, but they've always been great at executing on actually useful products. So I have been playing around with all of these, except for the new design studio. I need to go into Google AI Studio and try to do some vibe coding. But really great release from them. You showed the Figma chart, which was down 80% since IPO. This is not the only chart that's down 80% since IPO that we need to talk about

Josh:
[6:15] today. Well, it's funny.

Ejaaz:
[6:16] You just spoke about a company that has a really good product, but is terrible at marketing. Now we're about to talk about a company that's really good at marketing, but has a pretty terrible product. This is a terrible product. What's going on, Josh?

Josh:
[6:28] Listen, I don't want to bully anybody, but we've got to start spitting some facts here about Rivian. Just this morning, Rivian and Uber announced a partnership. They're working together. Uber is spending $1.25 billion, deploying it into Rivian in exchange for 50,000 robo-taxis by the year 2031.

Ejaaz:
[6:47] Sorry, what year? 2031. That's like a decade from now, mate.

Josh:
[6:51] Ejaaz, we will be lucky if we're not fully merged with AGI by 2031. We're going to have these brain-machine interfaces in our brains by 2031. And you're telling me for $1.25 billion, you're going to give 50,000 vehicles that are meant to go camping. You take Rivians to go camping. You.

Ejaaz:
[7:08] Don't take these

Josh:
[7:09] To shuttle people around. So this is an interesting deal. And I love the deal because I love the fact that someone else is trying to improve the world of full self-driving, of autonomous transportation. This is a very important problem. We just spoke about Travis Kalanick, the founder of Uber, just yesterday. Great episode, really fun and fascinating story. But now this is kind of the natural extension of that. And the problem doesn't really lie with Uber as much as it does with Rivian. So here are some fun facts that I posted about this morning. Rivian lost $3.6 billion last year on 42,000 deliveries, which, if you do some simple math, is $86,000 of value destroyed per vehicle that left the factory, for every car that got delivered. I don't get this.

Ejaaz:
[7:53] Who's subsidizing this? Why is this a sound business decision?

Josh:
[7:57] It's funny you should ask. Every 12 to 18 months, a new company comes around and subsidizes the next windfall of vehicles. First, it was Amazon at $1.3 billion in equity plus the van order. If you've ever seen a Rivian van driving around, that's part of the Amazon partnership. Then Volkswagen worked with them. Then the U.S. Department of Energy worked with them. And today Uber's working with them. And this comes on the back of the fact that, one, they can't make these cars at scale. I mean, they're making 42,000 deliveries a year. Tesla makes that in 10 days.

Josh:
[8:28] BYD in China makes that probably in like five. So the scale of cars that they're making is not very high. And also, they're selling this on autonomy. They need to deliver robo-taxis. The problem is that Rivian didn't even have an autonomy division until 2024. This is a brand new undertaking for them. They have no idea how to actually do this. They don't have the data, they don't have the fleet, they don't have the manufacturing facility. In fact, if you scroll a little further down in this post, there's one more damning piece of evidence, which is the fact that the car doesn't exist yet, the factory doesn't exist yet, and the autonomy software doesn't exist yet. And if we look down at the unit economics in these charts, they're just not even really comparable, particularly that left image, which compares the CyberCab to the Rivian R2. Basically, the cost of the CyberCab is going to be less than half, and the R2's cost per mile is going to be more than double. One is built entirely for being a cyber cab, while the other is very inefficient and is built basically for camping. The CyberCab is ramping in 2026, and there is absolutely no timeline for when this Rivian R2 is going to be deployed.

Ejaaz:
[9:30] That's a lot.

Josh:
[9:31] That said, Uber's a great company. They're working with Travis Kalanick. I think they have a bright future ahead of them, but oh my God, Rivian. They really have their work cut out for them. This is a very hard problem, and I don't know how they're going to solve it.

Ejaaz:
[9:42] The lengths people go to to try and take down Tesla is admirable. So let me think. Uber's been in the news this week for another reason as well, right? They announced their massive partnership with NVIDIA, which is kind of building its own full self-driving stack and software that plugs into a bunch of Uber's fleet. Uber's partnering up with a bunch of different companies, including BYD, the biggest Chinese EV manufacturer, which you mentioned. So I can see the competitor forming. It's not one company. It's a stack of different companies. And it sounds like they're all kind of co-investing in each other. I know NVIDIA invested a bit in Uber and vice versa. So I can see a synergy happening here, but I don't see how something like this beats a single vertically integrated company that can execute well, emphasis on executing well. Tesla has shown that time and again, and they have the entire manufacturing supply chain to back that up. So it's going to be one version of this attempt versus another.

Ejaaz:
[10:43] My chips are still with Tesla. This doesn't convince me in any shape or form that they're going to compete, but hey, maybe they might convince me otherwise going forwards. But there is a general direction or trend here that I like, which is the world's turning into electric vehicles. We are favoring electric vehicles over gasoline. I know that might trigger some people that are listening to this, maybe not our audience because they're tech enthusiasts, they're pro FSD, they're maybe even pro Elon. I don't know, that might trigger some people. But I like the way that we're going. We're burning less gas and we're going full electric. I'm on board with that.

Josh:
[11:15] Yeah, I think it's really exciting. And again, it's noteworthy that the number to deliver by 2031 is 50,000 full self-driving cyber taxis. Tesla is making, what is it, I think like 4,700 per day as of last quarter. So in 10 days, they're deploying more CyberCabs than the expectation in five years. What is that ramp going to look like five years from now? I mean, there are going to be humanoid robots running the streets almost as fast as these CyberCabs. So it's guiding for a world that doesn't exist, and it seems very conservative. So perhaps it's just strategic on Uber's part. They just want to have their hand in some sort of hardware manufacturing capability, because they are working with NVIDIA for that software stack. So perhaps it doesn't matter how far behind Rivian is on the software, because they're just going to inject some NVIDIA hardware with the cameras and NVIDIA software and call it a day. Who knows? Interesting story. But there is still more on the docket. Yes.

Josh:
[12:08] In terms of partnerships, I guess we finished the partnerships. This is acquisitions. And OpenAI has gone on an acquisition spree.

Ejaaz:
[12:14] OpenAI has gone on an acquisition spree. In the last seven days, they've acquired two companies. They're spending the big bucks from the trillions of dollars that they've raised through various different partnerships themselves.

Ejaaz:
[12:25] Now, there's a trend that I'm recognizing. Acquisition number one, or the headline acquisition they announced today, is a company called Astral. Now, Astral makes software engineering tools, particularly for the Python ecosystem. And that's roughly around 8 to 12 million active software professionals that use these tools every single day. They make three tools. One is called ty, one is called Ruff, and one is called uv. I don't know who makes up the names of these coding tools, by the way. But the point is, there's a lot of software engineers using these tools. And a bunch of them were open source. Now, OpenAI has acquired them and is keeping a bunch of these tools open source. So that's good. But what did these tools actually do? Well, OpenAI, in the last couple of months, have really stepped up their game on coding AI. Their coding model is called Codex. Compared to Anthropic's Claude Code, it was typically terrible. And then in the last month, they've really ramped up. I've got to say, since that Code Red of last year, they've stepped things up. And now it's arguably better than Claude Opus 4.6, which is Anthropic's primary coding flagship model. Now, Codex is really good at generating code, but it's missing something. It's missing the other parts that are required to code something end to end. You need to ideate. You need to plan. You need to figure out what code base you're going to use. You need to figure out who's going to maintain the code. You've got to find bugs. You're then going to fix those bugs.

Ejaaz:
[13:45] Acquiring a company like Astral solves all of those problems. So the direction of this acquisition, in my opinion, is pretty clear. OpenAI is going after Anthropic's moat, which is end-to-end software engineering development. And this acquisition might have sealed the win for them.

Josh:
[13:59] It's so vicious. Everyone is going for everyone. As soon as there is an edge, every other company's sole intention is to arbitrage out that edge. So it's creating this really dynamic environment where, I mean, Google is now getting involved, ChatGPT and OpenAI are moving into enterprise. There is just so much of this cutthroat acquisition and negotiation and deployment of code. And there's another one too, Promptfoo, right? This is the second acquisition they made recently.

Ejaaz:
[14:22] Yeah, exactly. So Promptfoo fills another gap in that Codex stack that I just mentioned, which is: you have Codex generating a bunch of code, but sometimes it can be wrong. Sometimes there are bugs. Promptfoo has all the security tools that can help Codex monitor itself. So I'm glad to see this. OpenAI was on an acquisition spree over the last two years, and in my opinion, they were buying some crazy stuff. And now it makes sense. They're acquiring different companies which may not be as popular a headline to you or the people listening to this, but which are very intentional, very precise steps toward making a really well-rounded coding product. Now, it probably helps to know that OpenAI announced, I think, three days ago now, that they're only going to focus on coding and enterprise products, which is a big deal for OpenAI to announce, because they've been known as the consumer product, right? Everyone knows what ChatGPT is. A lot of their user base are retail users. So for them to say, hey, we're going to put that aside for a second and focus on building the best coding model and the best enterprise-focused products is a big deal. And these are two acquisitions in that direction. Now, if you're wondering why they are focusing on coding, think of it like this. If you're the AI lab that can build the best coding model,

Ejaaz:
[15:38] You've pretty much won AGI. Why? Because how do you build the next AI model, or the next AI model after that? You use a coding model to code it up for you and to do all the tests for you. So they're making a very intentional strategic move here, one Anthropic saw way early on and has been executing ever since: build the best coding model, get it to build your next AI model, and you end up beating every single other

Ejaaz:
[15:59] company out there. It's pretty genius.

Josh:
[16:01] It's amazing. It feels like we're post-code-generation already. Yes. It's like, okay, we've solved the code generation problem, but the problem now is that the code is actually generating some errors. There are some security holes, there are some bugs. And now this next frontier that everyone is working on is: how can we deploy the code without these bugs, without these security issues? So they're creating this infrastructure on top to retrospectively evaluate the code as it gets deployed and make sure there are no issues. We've built the offense, and now we're building the defense. And assuming we can smush these two together into a single coherent product, I mean, code generation will be a solved problem. I think that's safe to say. And I have to wonder about the strategy of moving over to enterprise, because that really opens up the door for someone to focus on the consumer. And I can't help but think: Apple, please. You have so many handheld devices for people that just want to summarize their texts, read them their grocery lists, make an order for them on Uber Eats or something. If they could handle that, there's such a huge gap in the market that's ready to be taken.

Ejaaz:
[17:09] Yeah. I mean, well, I have a chart for you to actually drive the point home as to why OpenAI is making this move right now. This chart is crazy. So what we're showing, for those of you who are just listening, is the AI model share of first-time enterprise customers. What that means is, if you're an enterprise that's looking to adopt an AI model, which AI model are you choosing? And the results are pretty clear. From January the 11th up till now, everyone's picking Anthropic. Everyone loves Claude Code. It's gone on an absolute heater. Claude Code actually accounts for 73% of first-time enterprise AI model purchases. OpenAI has gone down 34% since that exact point. It is now only at 26.7%. So I can see why Sam Altman is sweating. I can see why the company is pivoting to focus on just enterprise and coding. They need to win this market back, because enterprises have all the dollars and they're going to make OpenAI money. That's the entire reason.

Josh:
[18:09] But we have a counterpoint to this, right? Thanks to our friends at Polymarket, who have a market about this. So if you look at this chart, which shows Anthropic clearly crushing OpenAI, and then you look at this chart that we're showing here, which is Polymarket, and the market is: which company will have the best AI model for coding on March 31st? It tells a totally different story. OpenAI is sitting at 94%. They have a 94% chance of having the most powerful AI coding model in the world. Anthropic is at 3%. So it's very obvious and clear that ChatGPT and OpenAI are the superior product. And yet, for some reason, enterprises are still choosing to go with Anthropic. And that's the narrative warfare that's happening in this world of AI. The merits matter, but only to an extent. Yes. The real thing that matters, that holds the value, is the narrative, the reputation, the perceived value of these products. And we're seeing for the first time a really big discrepancy where everyone's on Team Anthropic when the reality, based on Polymarket, is that ChatGPT is actually the best product for coding. If you're on GPT 5.4 on Codex, you're using the absolute best product. And here's proof: people are putting their money where their mouth is. That is just under a million dollars of volume. And thank you, Polymarket, for sponsoring the episode, but also for sharing some truth here. That's a good truth bomb we can lean on. Like, hey, the market might be wrong on this one.

Ejaaz:
[19:28] This might be a really asymmetric bet. I wish I could buy OpenAI stock right now if they weren't such a private company. Okay, so if OpenAI and Anthropic are building the foundational coding models, there's also a company that sits on top of that, right? It's called Cursor. They're famous for probably making the vibe coding trend go viral. Everyone was using Cursor, especially non-coders, because they could just type in natural language what they wanted to build, and then Cursor would sort everything out for you. You don't need to figure out what model to use. It'll pick ChatGPT when it needs to. It'll pick Claude Code when it needs to. It does all the routing for you, right? It abstracts away all that process. But the number one critique of Cursor has been: you're a $30 billion company that relies on Anthropic and OpenAI. What if they pull the plug? What if they block you from using the API? Your valuation goes down to zero. Well, Cursor today, breaking news, literally an hour ago, has an answer to it, which is their own AI model, which they pre-trained themselves, called Composer 2. And they're making, Josh, a very big claim here on this chart, which honestly is committing a lot of chart crime, but I'll get to that in a second. They are claiming that their model is better at coding than Opus 4.6, Anthropic's flagship model.

Josh:
[20:43] That is, okay, so when I first looked at this chart, I was like, oh my God, look how far to the right Composer 2 is. Like, there's no way Cursor came out of nowhere and blew away both ChatGPT and Opus 4.6. And the reality is that they actually didn't at all. In fact, this is just an inferior model to both of these. They just committed a little bit of crime here. There's some chart crime going on.

Ejaaz:
[21:09] Firstly, it's the Cursor bench score. Come on. It's so ridiculous.

Josh:
[21:13] There are so many red flags with this. Like, Cursor, good for you. I'm happy for you, bro. You released a model. That's great. But oh my God, what is this representation? Can you walk us through the red flags? Because there are two big ones that I'm seeing here.

Ejaaz:
[21:24] All right. Huge red flags. Number one, the benchmark they're using to assess this coding score for the new model is their own benchmark. It's literally called the Cursor Bench score. So I highly doubt that it's better than Opus 4.6. And GPT 5.4 probably crushes it even more than this chart is insinuating, but very modest of them to put it just up there. But then I was looking at this chart, and I was looking at the x-axis, and I was like, huh, something seems weird here. And then I realized: they've inverted the entire x-axis to put zero on the right side. They're sneakily trying to say that Composer 2 is a cheap model while simultaneously putting it higher up on the chart, which is just insane to see. Now, the good news is it seems like Composer 2 is vastly cheaper than Opus 4.6. Opus 4.6 comes in at $2.50 per query. Composer 2 comes in at $0.25 or $0.30, something around that, if I'm reading the chart right. So it's significantly cheaper, but I just doubt the benchmarking. I don't believe this model is better. Also because Cursor doesn't have the money to train something like that. Their entire valuation is probably what OpenAI and Anthropic have spent to train their last models.

Ejaaz:
[22:36] So I don't understand how Composer 2 could beat it.

Josh:
[22:38] The Cursor Bench score is pretty tough. It's like, okay, if you're creating your own benchmarks, I'm not sure how viable or valuable those even are. So, I mean, we'll see. We'll see what happens with Cursor. There's more news in the AI corner out of Minimax and China. Minimax 2.5 was famously one of the best open source models that recently released. In fact, everyone's been using it to run their OpenClaw on, because you can get tokens very cheap. They have just landed a follow-up model, which is even better and allegedly built itself.

Ejaaz:
[23:08] Yes. So it's called Minimax M2.7. So it's not even a whole version leap. It's just a slight iteration on 2.5, but my God, the leaps and bounds of improvement here are pretty insane. If I were to summarize it for you,

Josh:
[23:23] M 2.7.

Ejaaz:
[23:25] It's pretty much as good at coding as Opus 4.6 and GPT 5.4. Now, I just spoke about Cursor's model not being good enough. This model actually is good enough. Now, it's important to say that the benchmarks aren't officially verified, but people who have been using this model on OpenRouter, which is a widely accessible platform where anyone can try models, are saying that it's walking the walk. It's talking the talk and it's walking the walk. So it's really good at terminal use and terminal coding. It's also really amazing at computer use. But that's not the only good thing. Minimax 2.7 built itself. It is the first recognizable AI model that spent a lot of time pre-training and post-training itself. So it evaluated its own model weights. It saw where it might go wrong or how it might improve. And it went through about 500 different experiments, or 100 rounds of autonomous self-improvement, to end up with a 30% gain over where it already was. So the humans aren't even directing model improvements anymore. It's the AIs themselves, which is just insane. Minimax 2.7, by the way, is handling 30 to 50% of AI research at the Minimax AI lab right now. So we're in this weird era, I mentioned this earlier, where the AI labs aren't really building the models themselves. It's the AI models building the AI models. And this is going to get recursively faster the further on we go.

Josh:
[24:43] So give that one a try. See what you think. Open source models are cooking. They're on fire. And now they are self-improving. So that is a little unnerving. I think we had a little bit of that in ChatGPT's last model with 5.4. The feedback loops are getting tighter here. We're building faster models that

Josh:
[24:59] are better very, very quickly. The final news of the day is about this new Anthropic report, which is really pretty. It's very well done. I really enjoyed reading through it. And essentially what this is, is they asked 81 people what they thought about AI and just collected feedback.

Ejaaz:
[25:13] 81,000 people.

Josh:
[25:14] 81,000 people about what do you think about AI? What do you actually want from AI? And the answers were pretty diverse, but also pretty interesting.

Ejaaz:
[25:25] Yeah, yeah. So the way the study was conducted was they used a version of Claude that acts as an interviewer. And they interviewed people across more than 150 countries and 70 different languages. So it was a very diverse collection of results. And they asked basically a few questions. Number one, what are you using AI for? How has it changed your life? And two, what are you scared about? What do you want to see more of? And the results were pretty crazy. They have what they're calling a quotation wall, and I'm going to share a few quotations to give you an idea of what people are doing with AI. This guy goes: I broke my leg, I was lonely, I downloaded an AI chatbot as a time killer, and ended up sharing my entire life story with it. And now he goes: I bow my head to you. I'd never been told anything like that. It's become a place of safety and comfort for me. And this guy was a student in Japan. There's someone else, a lady who was misdiagnosed with her cancer for nine years, who ended up using Claude, and it diagnosed her correctly. So there are all these different examples of people using AI in ways that have changed their lives. But there were some surprising findings from this, Josh. You would think that people were worried about AI replacing their jobs. Turns out they're more worried about whether AI is right or not. They want AI to be reliable, but they just don't trust it enough. And they're not even worried about AI replacing their jobs, contrary to popular belief.

Josh:
[26:46] Yeah, I found this part interesting because everyone is worried about job loss and Anthropic asks 81,000 people.

Josh:
[26:54] Are you scared about job loss? When the reality is the more honest question, the question a lot more people should be asking, is: do we even want these jobs? Because on the surface, yes, 22% of people listed job displacement as the top fear, and it was the strongest predictor of negative sentiment in the entire survey. But take a software engineer, I have a few examples here, in Mexico.

Josh:
[27:15] He wants to leave work on time to pick up his kids from school. A worker in Colombia wants to cook with her mother instead of finishing tasks. A freelancer in Japan wants to spend less brainpower on clients so he can read more books. A manager in Denmark said if AI handled the mental load, it would give her back something priceless: undivided attention. And I think the reality is a lot of people are afraid to lose their jobs not because they enjoy their jobs, but because of the lifestyle those jobs afford them. It lets them live in some varying level of comfort. And losing a job cuts two ways: it's the loss of security and also the loss of purpose. What do you do when you wake up in the morning? And I think it unlocks a lot of interesting philosophical questions about what we actually want in a world where there is less scarcity than there ever has been before. When working could possibly be optional, not everyone will need to do the thing. And as these jobs evolve, it's not like they're going to disappear. We'll always be in search of this purpose. I just found it interesting that they synthesized it in a way that makes it tangible. It's like, okay, people aren't actually afraid to lose their jobs. Most people don't really love what they do. They didn't grow up wanting to do what they're doing today. They just want the AI to enable them to do the things they want.

Josh:
[28:25] To unlock the freedom to be with their loved ones, or be more productive, or read the books they actually enjoy. And the reality is that's kind of what it's providing. So it was an interesting look into the reality of it, past the headlines, of what people actually believe. And it was pretty reasonable, pretty realistic.

Ejaaz:
[28:40] Also, shout out to the Anthropic team, and their research team in particular, for putting all these studies and reports out. They've honestly been the only company whose research reports I've been reading. And some of the findings would arguably suggest that maybe Anthropic shouldn't be building what they're building, but I appreciate the transparency.

Ejaaz:
[28:56] The ethics are definitely coming to the surface here, and I hope we see more reports like this. But listen, one thing's for sure: this AI stuff is happening super quickly. We got two new AI features from Google this week that crushed stock market caps. We had two new acquisitions from OpenAI. We have a new model seemingly every single week now, either from China or from American labs. Everything is happening so fast. I can't keep up with it, even though it's our job to study this and figure it all out 24/7. We hope you enjoyed the news we brought to you this week on this episode. There are some banger episodes. Josh mentioned it earlier: we did one on Travis Kalanick's rise and fall and then rise again with his new company, Atoms. And two other episodes, definitely go check those out. And if you are listening to this, if you're watching our episodes and you aren't subscribed, please subscribe. Give us a thumbs up. Josh, any other thoughts?

Josh:
[29:49] Yeah, actually, one final note on the Anthropic report. Because as I'm reading through this, it's interesting. You know who is most concerned about jobs and the economy? Who? North America, by a large margin. You know who's least concerned? Oh, yeah. Central Asia. Wait, that's so weird. It's a cultural phenomenon. It is manufactured panic, and America has the strongest manufactured panic around AI, whereas someplace in Asia that's fully embracing it, that's doing the open sourcing, that's integrating it into their school systems, they're not concerned at all. Only 15% were concerned. So just to wrap up everything: I think a lot of this fear, a lot of this existential dread, is manufactured from what we read and what we ingest. And I just thought it was worth noting as we wrap up this final episode of the week. Like you just said, thank you for joining us. The newsletter: poppin'. Go subscribe. All the links are down in the description below. You can just click wherever you want to go, and you can find us everywhere. I hope you have an amazing weekend. We'll be back at it again next week with four more episodes on all of the crazy chaotic things.
