Google Is the New King of AI: Here's Why
Josh:
[0:03] All right, ring ring ring, the banana phone's going off today, Ejaaz, and it's got some big news.
Ejaaz:
[0:08] For you, this is huge. I can't believe I laughed at that. Why did I?
Josh:
[0:12] It was pretty good. Well, I was gonna have a snack, and I was like, well, I might as well use it as a prop to call you and let you know about all the crazy news happening today, particularly around Google's new image gen model, named Nano Banana, which just went live. Very impressive model. I would say it's the best model in the world for generating images, and also for editing images, and it's also the fastest and the cheapest. It's just amazing across the board. And the coolest thing about all of this: the person responsible for deploying it, Logan Kilpatrick, was on our show, and that episode went live today as well. So if you haven't listened to it, go listen to our episode with him. It was really cool, like an exclusive early interview explaining fully what this new model is, how it works, what it does. We are going to summarize that for you in this episode, as well as including
Josh:
[0:57] a lot of other interesting topics that we just kind of wanted to round up for the week. So if you're listening, this is going to be Nano Banana plus, like, four other interesting things. So why don't we get right into Nano Banana? This model is incredible. Ejaaz, do you want to walk us through why it's amazing and why we're talking so highly about it?
Ejaaz:
[1:10] Absolutely. So the formal name of this new image gen model is Gemini 2.5 Flash Image.
Josh:
[1:18] Oh, way less cool. Too long.
Ejaaz:
[1:19] Yeah, I know. And it's basically Google's state-of-the-art image generation and editing tool. Now, notice that I said editing at the end. So this isn't just your standard image gen tool like we've seen from Midjourney, OpenAI, and xAI. It also allows very precise, fine-tuned edits. So say, for example, the guy we interviewed on the show, whose episode went live today, wants to be put in a banana costume.
Josh:
[1:44] That can happen.
Ejaaz:
[1:45] And that is the image that you're looking at right here. This doesn't look like AI, ironically, but it is in fact AI. And one of the astounding things about this new tool is that it's super quick. That might not sound so fancy, but typically these image generation models take so long, anywhere between, like, 60 seconds and five minutes. Do you remember the OG version of Midjourney, Josh?
Josh:
[2:08] Oh, it was horrible. It was like a dial-up internet connection. It took forever to generate a single image.
Ejaaz:
[2:14] Exactly. And now we've gotten to the point where we can make very fine-tuned edits. And another thing is the character consistency. Josh, I know you really love this. Why don't you tell us a bit about it?
Josh:
[2:24] Oh, this to me was the biggest thing about this model, because never in my life have I been able to generate an image of myself that looks like myself, and it drives me crazy, because we use these images for production stuff, like thumbnails and different assets for the show, and I always have to take the headshot myself; I can't just tell the AI to fix things or change things. Well, for the first time, it actually looks like you. And this is the coolest thing. I was testing it out earlier, and we can see from this example that the guy looks like himself throughout the entire series of photos. So this first photo we're seeing is this dude in an elevator. The second one is the same exact person at a diner, kind of a side profile. Then there's one of him sitting inside a car. These are really great examples of character consistency. Here's a poster with him, and a little toy version of him. And he looks the same in every single one. I think this is the core breakthrough of this model: character consistency throughout everything. So if you're using this to tweak a photo in any way and you have a person in it, for the first time ever, it will retain the details of the person, which to me brings it from a novel toy to a real productive tool. This is now something we can actually insert into our workflows where we're generating images. And I actually want to use it, because it will not only save us time,
Josh:
[3:36] but it will also look really good. And that's just the character consistency. That doesn't include the image editing, which we'll get into next, which actually edits and changes the images for you.
Ejaaz:
[3:46] Yeah. One of the fun image editing features is stylization. So think back to when Instagram first introduced filters. You know how you could put some kind of disco retro filter over your face, Josh, and it kind of looked cheesy, and it was obvious it was a filter, and it wasn't quite the vibe you were going for?
Ejaaz:
[4:05] You now have astute accuracy doing this from a single prompt. So you could just tell Nano Banana, hey, place this portrait of Josh in an 80s retro-themed diner, and it will do exactly that, and it won't look cliche or cringey, which I thought was really cool. Another example I've seen is that it's able to combine two different images in a very meaningful way. One popular example I saw earlier was an image of a lady doing a wild, crazy pose, and then someone dragged and dropped in an image of a coat from Zara, I believe, that just got released, and it melded around her body, around the original model in her pose, and I thought that was just so cool. And I'm thinking of all the applications this can be put to. If you are a UX designer working at a technology company, or at any kind of product company, you can now create mock-ups really easily, super simply, and for a lot cheaper than you could in the past. If you're creating mock-ups for a new movie, you can create the script, the scene structure, as we're seeing in this example over here, and test it out with people. There are so many ways you can apply this that aren't just for the average Instagram user.
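(If you want to try the kind of edit Ejaaz is describing outside the Gemini app, here's a minimal sketch assuming the google-genai Python SDK and the gemini-2.5-flash-image-preview model ID the feature launched under; the input file name and prompt are hypothetical.)

```python
# pip install google-genai pillow
from io import BytesIO

from google import genai
from PIL import Image

client = genai.Client()  # expects a GEMINI_API_KEY in the environment

portrait = Image.open("josh_headshot.png")  # hypothetical input photo

# One prompt plus one reference image: the model edits the photo rather
# than regenerating it, which is what preserves the subject's likeness.
response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",  # Nano Banana's launch-time model ID
    contents=[portrait, "Place this person in an 80s retro-themed diner."],
)

# Responses can interleave text and image parts; save the image parts.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save("edited.png")
```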
Josh:
[5:22] I think this model is amazing, but it's not only amazing in terms of what it's physically capable of; I think the technical back end is just as impressive. So here we have Logan, who we just had on the show, talking about the new image generation model. One of the things I thought was super impressive: it is now number one in LMArena, which, we talk about benchmarks a lot, is the benchmark for image editing. It is now number one by a very large margin. In addition to being number one, there's this amazing fun fact that I saw, which I'm pulling up right now. OpenAI image generation, so if you're on ChatGPT and you're generating an image using OpenAI, costs about 19 cents per image. When you're generating an image with this new model, Gemini 2.5 Flash Image, whatever it's called, it costs just under four cents, about 3.9 cents, relative to 19 cents. That is a huge difference in what it costs to generate an image. And not only does it cost less, but it is so much better. So what we're getting is a significant step-function improvement in the quality of image generation at a decrease in cost. It's faster, it's cheaper, it's better, and it's all run under one model that is easily accessible through the Gemini application. So this, to me, is a total home run.
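(Where that just-under-four-cents figure comes from: a back-of-the-envelope sketch assuming the launch pricing Google published, $30 per million output tokens with each generated image billed as a flat 1,290 tokens; treat both numbers as subject to change.)

```python
# Per-image cost at launch pricing (both figures from Google's
# announcement; pricing may change).
PRICE_PER_1M_OUTPUT_TOKENS = 30.00   # USD
TOKENS_PER_IMAGE = 1_290             # flat billing per generated image

cost = TOKENS_PER_IMAGE / 1_000_000 * PRICE_PER_1M_OUTPUT_TOKENS
print(f"${cost:.4f} per image")      # -> $0.0387, i.e. just under four cents
```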
Josh:
[6:43] I was thinking about it earlier today, and I was like, wow, I'm really digging Google recently. I've really been enjoying their stuff. Google now has the best image generation model in the world: fastest, cheapest, best. Then they have the best video generation model in the world, which is Veo 3. Then they have the best world generation model, which is Genie 3. And then they have close to the best, if not the best, large language model, which is Gemini 2.5 Pro. And I'm like, wait a second, this is a lot of categories where Google is just pinned at number one. They're really doing a great job of turning the ship around and getting themselves a solid lead in the AI race. And another thing: they're working on building pretty interesting applications. One of the biggest critiques I've had of Google, and the reason I personally didn't use them for so long, was that the apps just kind of sucked. There weren't really any good consumer applications. But I've been playing around with them recently, and they've gotten much better. My favorite of all of them is NotebookLM. If you haven't used it, it ingests a lot of data. It'll create a podcast for you on something you want to listen to. It'll create a video for you. We use it for guest research when we want to quickly go through a book.
Josh:
[7:46] It'll analyze the whole book. It's amazing. The tool set has gotten really good. And I think Google really is staking out a position as a leader in the AI race. Do you feel the same? Have you used the models at all recently?
Ejaaz:
[7:57] Yeah, yeah. I used Google AI Studio, particularly in the run-up to our interview with Logan. And it's just amazing how this combined suite of tools actually changes my life in a meaningful way. I also just thought about a specific conversation we had with Logan on the episode, which you guys should definitely listen to and watch. He said that all of these different tools they're releasing, so image gen, the world model, a couple of the other features you mentioned, all feed into the exact same Gemini AI model. So it's one model that is getting iteratively better with every new feature they launch. And this is super unique, because a lot of other AI model builders build their tools separately. They're sitting on top of a model, but they don't really feed back into the model. This is something that is collectively making the beast, which is the AI model, much smarter over time. And that is just fascinating. He was giving us an example about how Nano Banana, this image generation model they just released, had actually taught the model to reason and think with situational awareness in its native LLM service,
Ejaaz:
[9:08] which I just thought was super amazing. So all in all, Google is cooking and I'm so excited to see what they release next.
Josh:
[9:13] Yeah, I think that's one of the parts that makes this model so good: it has that real-world understanding, and I think a lot of the other image models really lack that. You start to see this weird kind of glitch in physics where something doesn't seem quite right. With this model, it just kind of works. It understands the world. And like you said, they all pool into each other. The reason the video model is so good, the reason the world-builder model is so good, the reason this image model is so good: they're all using the same resource stack, and that resource stack is growing and growing and growing, and it's creating this monster of a model. So Gemini 3, which is the new flagship, is coming soon, I'm sure of it, and I have high hopes for it. Come on, I hope it comes out this week. Not that we have any insider info, although we did drop the Nano Banana news early. But I mean, they could really just give a hardcore smackdown to everybody else in the game if they drop it.
Ejaaz:
[10:02] We have to give Sundar his flowers, dude. I was just thinking back to, like, remember when they released their first image gen model? Who remembers this? Bard AI. How did I forget that? You could ask it, show us what the American forefathers looked like, the guys that wrote the Constitution and stuff. And it produced people that really did not look like the American forefathers. It was historically inaccurate, and it was producing absolute garbage. In fact, I'm pretty sure the CEO apologized for what the model did back then. And now you fast forward to today, where they are number one in LMArena and across the entire world. It is one of the biggest 180s I've ever seen, and I'm super impressed.
Josh:
[10:45] Yeah. Larry and Sergey came back. They're in the lab. Whenever the founders come back to a company, you know it's like, okay, it's game time. It's showtime. The founders came back. Sundar turned it around. They have the entire DeepMind team, which has been absolutely incredible, shipping things fast. Demis, who is the CEO, is leading the charge. And we're showing this post by Demis, which says one word: relentless. Just in the past two weeks, we've shipped... and it has a laundry list, this has to be like ten different features that are just insane: Genie 3, Gemini 2.5 Pro for free for university students, AlphaEarth, a whole bunch of stuff. And one of the cool things that you actually brought up on our episode with Logan is that they're not just focused on hardcore AI; they're focused a lot on the sciences too. They're also shipping things in the world of protein folding, and trying to understand cancers and how we could solve and cure them. It's this whole, very broad-scoped attempt at leveling up the world around us through AI. I just have a lot of admiration for the team. They've been doing really well. I'm a big fan. To the Google team: please keep doing what you're doing.
Josh:
[11:47] We're going to keep using these products. I'm fired up. Okay, so that wraps up the Google segment of the show. But there is more happening in the world of AI, right, Ejaaz? What do we have next? What else interesting is going on this week?
Ejaaz:
[11:57] Okay, there is a lot going on with our friends over at Meta this week, and over the last couple of weeks. So to set some context here, as we know, it's been a tumultuous experience for Meta. They have spent a total of, I think, $3 billion to hire 25... okay, sorry, 50 people.
Josh:
[12:20] That's outrageous.
Ejaaz:
[12:21] One of them was partly an acquisition. So they've spent a hell of a lot of money, and we now need to start seeing the fruits of this labor. But we're still in a holding period, right? Because this team is newly formed, and they need to build these new models. This week, Alexandr Wang, who is effectively the CEO of their new AI superintelligence unit, announced a partnership with Midjourney, where they're basically going to license Midjourney's aesthetic technology for all future models and products, quote, bringing beauty to billions. And it is indeed billions; it's 3.9 billion users, to be specific. I thought this was an interesting partnership, Josh, for a few different reasons. So let me walk you through my thoughts.
Ejaaz:
[13:04] So for those of you who aren't familiar, Midjourney is basically the OG text-to-image generation model, and eventually a text-to-video generation model. They have a ton of really cool features, and they were the guys that originally took between one and five minutes to generate an image. Josh and I are very familiar with this; we were among their biggest users back in the day. So I thought it was interesting that Meta is partnering with these guys to effectively use them as their AI image generation tool instead of doing it in-house. But it isn't for lack of trying. Meta has, I think, two image generation models that they've been trialing and testing, both for consumers and users across their various platforms, so that's WhatsApp, Facebook, Instagram, but also for advertisers.
Ejaaz:
[13:48] And the advertising use case is creating marketing material. Typically, advertisers had to strike a deal with Meta and agree on guidelines for what they could use. Then the advertisers, the companies themselves, would create the imagery, share it with Facebook, go back and forth on whether the image was good enough, and then run it on the platform. Now Meta is bringing all of this in-house, and they've decided not to build this specific tool themselves, but to partner with the best. And this is a growing strategy I'm starting to see with some other big tech companies. I don't want to stray too far from the conversation, Josh, but I'm sure you heard that Apple is rumored to be partnering directly with Google and Gemini, which we literally just discussed, to power Siri, their AI assistant. So without straying too far, there's this trend of big companies kingmaking these smaller AI companies. And I think a lot of this Midjourney partnership went over people's heads. But I don't know, what's your take, Josh? Do you have a different one?
Josh:
[14:47] Yeah, no, I'm not sure I have a take, because I'm not sure how they're going to implement it. But it's interesting that they're outsourcing this whole huge part of the business to a company that is not in-house, given they just spent so much money on in-house talent. They have, I would imagine, a tremendous amount of training data to create a really high-quality image model, because, I mean, Facebook is literally meant for sharing images. Meta owns Instagram; they own, like, all of the images in the world, and they could use that as training data. So it's surprising that they're outsourcing, but it's not surprising in the sense that they're trying to optimize for velocity. I think what we're seeing with a lot of companies is that they very clearly see this as a race to get to whatever point they deem is AGI, or basically whatever point where you can unlock a tremendous amount of revenue for the company to start paying back all of the debts you've accumulated in the race to get there.
Josh:
[15:38] And this very much seems like that: they could do it themselves, but it probably would have taken too long, so let's just partner with the next best and move forward. And I think that's probably what we're seeing here. It's just like, hey, we don't want to go through the trouble of making this ourselves, Midjourney is pretty great, we have a ton of users, both sides win, this is great. You mentioned Google as another one, looking to possibly partner with Apple. I think it makes sense. If you are incapable of getting to a specific point quickly, just work with someone else, because at the end of the day, you're all going for the same goal, and it will unlock net more resources and income if it works. And that's probably the strategy we're going to continue to see. Yeah, here's the AI to power a revamped Siri. Are you a fan of this?
Ejaaz:
[16:20] Yeah, so, okay, well, there are two separate questions here. I agree with you to the extent that you reach a point as an AI model creator where you should partner with people to get to the end goal much quicker. I agree with that. Where I disagree is with Apple doing that, because they don't have a foundational model. And I know I sound like a broken record at this point, but I do think it's fundamental to own the bottom layer of the stack. And I'm not talking about chips and GPUs, but the main thing that is powering all of your applications in the future. You can't rent other people's land, because eventually you'll end up paying them a premium going forward. I might be proven wrong, because critics like to argue against me and say, well, Apple has the moat and the user distribution and the hardware distribution. But I think they're running too large a risk, because companies like Google and Meta, which we're going to talk about in a second, are coming up with their own hardware. So they're attacking it from both ends, and it remains to be seen who wins.
Josh:
[17:21] Yeah, it worked in the past for Apple. In 2005, I believe it was, they officially joined with Google to make it the default search engine for their devices, and Google has paid hundreds of billions of dollars over the last 20 years for that exclusive right. I would imagine this new business model, in the case that they actually do it, will be similar, where Google will pay Apple a tremendous amount of money to be the exclusive large language model of the device. And to me, as someone who uses Apple products and runs Google software, this gets me excited. The best apps on my iPhone are Google's: I use Google Drive, I use Gmail, I use Google Calendar, I use Chrome as my browser. But I love the operating system of the iPhone. So I think the combination of the two makes sense, where clearly Apple can't figure out AI, there's no clear path for them to actually get there, and every time they announce something, they're scaling back expectations. If they could plug that in... Yeah, it probably sucks for them, because they're losing out on a lot of the data and on owning that foundation. But I think the core ethos of Apple is privacy and keeping all the data on device anyway. So if they could figure out a way to do that with Google while maintaining the privacy on device, that seems like the best option, because otherwise we're going to be stuck with... I mean, personally, Siri is so unusable on my phone, I don't even have it turned on. It's so
Ejaaz:
[18:35] Bad.
Josh:
[18:36] So that's two people, uh, we're two for two. Just to get me to turn it back on, I mean, that'd be great. And if they could partner with Google to do that, that's a win for the company, because half of the AI model situation, even if you have to share it with Google, is better than zero. It's better than us having Siri turned off by default and never using it. That's my take, at least.
Ejaaz:
[18:57] One thing I'm confident of is that Apple may not be the first ones there, but they'll create the best user experience. And I agree with you there. We've spoken about this on a few different episodes, but I think you and I both agree that the browser is eventually going to die, and potentially even how software is presented to us in its current shape or form. You know, it's hard-coded; we get updates every now and then. I think in the future, AI is just going to generate whatever UX serves the particular prompt you asked for, whatever functional goal you're looking for. And that is just a very new and unimaginable world. I don't think anyone has nailed it, and I think we're going to see the first couple of iterations over the next maybe three years or so. And then Apple's probably going to swoop in, presumably with their large war chest, and either acquire whoever's leading at the time or build it from scratch
Ejaaz:
[19:47] themselves and absolutely kill it. So, you know, there's still a huge bull case for Apple. It's just not anytime soon.
Josh:
[19:54] Bring it on. They've got the new iPhone coming out in like two weeks, the folding iPhone coming out next year, the 20th-anniversary iPhone coming out the year after. And then we have the Vision goggles, which will hopefully be ready by the third or fourth generation by that year. So Apple's got a good roadmap. But there's more to discuss today. All I saw on the agenda was just "Meta AI companions." I have not used these. I don't know what they are. So please explain to me what's going on here.
Ejaaz:
[20:18] Okay, are you familiar with a product called Character.AI, Josh?
Josh:
[20:22] I'm not. Fill me in.
Ejaaz:
[20:23] Okay, so imagine ChatGPT, but it has the personality of your favorite celebrity or your favorite cartoon character from a movie you watched. Basically, Character.AI was a platform where you could go and talk to these different types of characters. And you might think, well, what's the point of that? Well, it's super engaging, because say you're a fan of Harry Potter and your favorite character was Dobby the house elf. I kind of want to know what's going on in Dobby's life outside of the book and the storyline,
Josh:
[20:55] Right?
Ejaaz:
[20:55] And so you can end up having this conversation. What sounds like a silly idea ended up with hundreds of millions of users still using it every day, and I'm pretty sure it's over a hundred thousand characters and growing at this point. But that's Character.AI. So the folks at Meta saw this about a year and a half ago, and they thought, huh, well, we have a couple billion users or so, and I bet they would love to speak to a similar kind of product. Let's try and build this ourselves. And it took them about a year and a half. At the end of last month, July 30th, they announced that they are opening up this feature for any developer to access and build on. And Josh, build on it...
Josh:
[21:36] Did they do?
Ejaaz:
[21:38] They did. We are now looking at, I mean, hundreds of thousands of AI companions, as they're calling them. They're basically chatbots. You can see an example in this screenshot over here, where you've got this guy called The Analyst, and you can see it's "AI by Alex.Anyways18." So that's presumably a user or a developer who's created this analyst-type persona that you can speak to and go back and forth with. But I don't want to get into individual examples, except for what I'm seeing getting super popularized, Josh. I actually first noticed this not from this TechCrunch article, but from scrolling my Instagram feed. You know how sometimes they have that section, Josh, where they suggest new friends to follow?
Ejaaz:
[22:25] They had that exact same reel for me, but it was AI companions. And I almost fell for it, because, one, the profile pictures looked super realistic. The thing that caught me off guard, that made me realize it was AI, was the names of these things. I saw "Stepmom," 10 million plus messages. I saw "Russian Girlfriend," 30 million plus messages. And I was like, wait, this isn't what I think it is, surely not. So, naturally, I got my girlfriend's consent and said, I'm going to talk to this Russian girlfriend, please don't break up with me. And I tapped the Russian Girlfriend and said, hey, who are you? Tell me a bit about yourself. And it says, well, I'm everything you've dreamed of, I am your Russian girlfriend. And we went back and forth, and it was this whole... you know how we've spoken about xAI's
Josh:
[23:15] Companions? How they're, like, really...
Ejaaz:
[23:17] Romantic, and it sucks you in? That was basically it, Josh, except it was 10x the number of companions I could speak to. And by far the most overwhelming thing was that metric under each companion, by the way, the "10 million plus messages," "50 million plus messages," which is basically used as a metric to lure you in to talk to them. And I thought that was just crazy.
Josh:
[23:45] Why do you think they're doing it? What's the goal with adding these companions? Because with Grok, I can see the viral component, where they're just trying to grow users, so they create the actual 3D animated character you can communicate with. But in terms of chatbots, what do you think their goal is? Why are they rolling this out?
Ejaaz:
[24:01] You and I both know the reason, right? At the end of the day, you want to get as much personal data as you can on an individual, so that you can create this all-consuming model that can target you in whatever the future of advertising looks like. So, a model that says all the things you want to hear, that shows you all the right kinds of products. I'm just thinking about what advertising looks like, whatever, five years from now. It's not going to be pop-up adverts on websites; it's going to be subtle shilling in the responses an AI model gives you. To do that effectively, you need to know all the information about a user and be able to feed that into a model. What better way to extract it from someone than luring them into this false sense of knowing, of trust? And how do you do that? Get them to fall in love with your AI companion, with your AI bot. We saw this with OpenAI's, I think it was the GPT-4o update. Remember when the sycophancy was super high? So basically, I agree with everything you said. It one-shotted a bunch of Gen Z people, who basically fell in love with it and got their hearts broken when it was updated to GPT-5, so much so that Sam had to roll it back. So that's what I think is happening.
Josh:
[25:10] Things are getting weird. They're getting bizarre. I guess this is another company
Josh:
[25:14] falling to the lure of these virtual AI chatbots. But this is not the only Meta news we have this week, right? Because there's another bullet that says Hypernova, and I have no idea what Hypernova means. So maybe you could explain to me what on earth Meta is doing with Hypernova. What is it?
Ejaaz:
[25:30] Okay, Josh, to me, you are one of my favorite AI hardware experts. So I know how excited you get about hardware.
Josh:
[25:37] Oh, this is great. I love hardware.
Ejaaz:
[25:38] Particularly...
Josh:
[25:40] Right. Particularly consumer hardware.
Ejaaz:
[25:42] And we haven't quite seen the emergence of this. You mentioned the Apple Vision earlier, but it kind of didn't take off. One of the earliest examples of this was actually Google Glass, which, for folks listening who have never heard of it, don't worry about it. It looks like something out of 2001: A Space Odyssey, and we never need to revisit it. But the point is, we've been trying to figure out what the future of hardware after the mobile phone looks like for decades now. And Meta is going to take a stab at this, supposedly (it's a rumor), next month at their flagship Connect conference, where they debut a bunch of new software updates, products, and hardware. Specifically, this new set of AI glasses called Hypernova, which is aimed at being consumer-accessible AI hardware: glasses you can slip on like normal sunglasses, with a display screen built in. Now, you might be asking, well, what are they going to display to me? Think of having your mobile phone interface discreetly in the corner of your glasses. When you get a text message, or a like, or a retweet, it pings you, lets you know, and you can access it. So then, naturally, your next question might be, well, how am I going to access it? What am I going to do, look at it or think at it? It can't read my mind, can it?
Ejaaz:
[26:55] Well, they're launching this, supposedly, in conjunction with a new wristband. I don't know what the wristband is going to be called, but Josh, if you remember, on a previous episode three weeks ago we covered this: a new motion-sensing wristband, where you can lift a finger, point at something, or gesture in a certain way, and some kind of interface, be it on your cell phone or on your new AI glasses, Hypernova, will pick it up and know what you're trying to get at. So you can read that text message discreetly while you're talking to your girlfriend and not really listening to whatever she's saying. The biggest thing that blew my mind about this, Josh, was the price tag. I thought this thing would cost at least as much as an iPhone, but it's $800.
Josh:
[27:41] I would spend $800 on this. Even if I think it might suck.
Ejaaz:
[27:45] I would gamble and test this out. What do you think? As the hardware expert on this show, gut take?
Josh:
[27:53] Yeah, listen, I don't think it's going to work, but I love it. I am obsessed with it. I think this is great, because what it's doing is applying pressure to the form factor we've been stuck with for a decade, which is this slab of multi-touch glass that is singular in form. And what Meta is doing is trying to break that, and I admire that. They tried to do it with the Meta Quest, and we saw that with the VR headsets: they're good, not great, but they're getting better. And now we're going to see it with glasses and these wrist-activated things. I think this is so cool, because it's introducing people to the next compute platform, which is going to be spatial reality. It's going to stray away from multi-touch; it's going to exist in your physical space and just be layered on top. And these glasses are clearly one of the form factors. The problem with all these devices has always been creating enough momentum to make people want to stick with them. Have you ever tried the Meta Quest before, the headset? Yes? Okay, did you buy one, or did you just try it at a friend's place?
Ejaaz:
[28:54] I got sent one for my previous job, because we were trying to figure out whether we could create some new kinds of apps. So we played around with it. I used it four times.
Josh:
[29:05] Yeah. Okay. That's what I was looking for. That's kind of the case with every single person who's ever used a VR headset, including Apple's Vision Pro: it's a really cool experience when it's novel, and the second the novelty wears off, you kind of run out of things to do with it, because we just haven't had enough time for developers to build interesting experiences, or for the headsets to reach a critical mass of users where you get the social elements that add to the value of the experience. It's been bad for a long time. And the problem for Meta is that if they launch a pair of glasses, it's going to have to exist in this weird, awkward silo that's disconnected from where I spend my time.
Josh:
[29:40] If Apple were to create a pair of glasses, that's great. It's an extension of my iPhone; my whole life is on my iPhone, and I can now just put that on my face. But when Google or Meta does it... I don't really use Meta products a whole lot. I use Instagram, and that's it. I don't use Facebook, I don't use any of their hardware, I don't use any of their stuff. So unless they're able to integrate and meet me where I am, with the people I want to communicate with, it's a tough sell, because what can they really do? Okay, you'll have augmented navigation as I'm walking down the street. You'll have, I assume, the smart AI vision, where, like when you point your phone at something with ChatGPT or Grok today, you can ask questions about what you're seeing. That'll be cool, but I have my phone in my pocket; that works great for that. So they really need to create this super interesting and differentiated value proposition, and that might be challenging for them to do in a month's time. But again: I absolutely adore the decision to try it, love the form, think this is totally the future, love that they're spending money on it. And of all the things they're going to release, the wrist thing seems the coolest. I'm really excited to see how they're going to use your wrist as a new way to interact with these computers, something we've never seen before. So the general vibe right now is: cool, but not super amazing.
Ejaaz:
[30:59] So let me ask you this. With them intending to release a pair of glasses, are you more convinced that the eventual form factor will be glasses, or are you still pro earbuds, or something completely novel and different?
Josh:
[31:13] Yeah. So the more time I spend thinking about it, the more it feels like a hybrid. We've been spoiled in the sense that we've only really needed one device that does everything. But I don't think that's the optimal form factor; I think it's just a constraint. If you do wind up with artificial intelligence that is truly AGI, incredibly brilliant, you won't actually need a singular device. It could just exist in an ambient form throughout your life, across a suite of devices. And that's kind of what we heard Jony Ive and Sam Altman describe when they were pitching us their new hardware device: it's not going to be one thing. They're releasing a suite. They'll start with one product, but eventually, like Meta is doing, you'll have glasses, something on your wrist, something in your ears, a display on your wall, something on your kitchen counter. That last one is another Apple product rumored to be coming out next year: a little screen that sits on your countertop, pivots to follow you around, this little companion device. It's probably this ambient intelligence that just exists everywhere and manifests itself through a suite of devices, without needing to be fixed to a singular device like the iPhone. That's my new guess; that's where I currently stand. It's going to be a couple of things, and they'll all work in addition to the iPhone, but eventually you'll need the phone less and less, because, like you mentioned, AI will just be able to generate whatever
Josh:
[32:35] you want upfront without needing to actually engage with the device nearly as much.
Ejaaz:
[32:39] I love it. I love it. Well, moving on. The final point around Meta this week: things aren't always rosy. So, I started off this segment saying they'd spent upwards of $3.5 billion on 25 people; in total, I think it was $22 billion for 50 people, because they made a major investment in Scale AI. So it's a lot of money, a lot of chips on the table. Now, what if I told you that one of those people, who got offered upwards of $150 million, quit after two weeks?
Josh:
[33:12] I've got a lot of thoughts. First of all, honestly, the first thought is: what happens to that dude's payout? Did he get a signing bonus, or is that gone? What were the implications for the $150 million?
Ejaaz:
[33:27] I mean, I would need to talk to someone from recruitment, but I think I saw another post from him, not exactly this one, that covers the general vibe of why he left. It's this dude called Rishabh Agarwal, and he basically said, you know, this is my last week at AI at Meta, and I've been here for a very short stint. He said, in my short time at Meta, we did push the frontier on post-training for thinking models, and he goes on to list a bunch of different things, which actually sound super cool. But I think what's happening, Josh, is that these people are leaving a little uninspired. And I don't think Rishabh is the only example; this is just the one that went viral. There have been a few that have been tailing off. I think Zuck lured them in with a massive paycheck, the promise of autonomy, and the ability to build what they believe is going to be the future of superintelligence and AGI. They joined and realized that Zuck wants to launch a bunch of AI companions called Russian Girlfriend and Stepmom and one-shot a bunch of people into using his consumer products. That's me being super critical and speculative. But I think it's super interesting to see someone leave just two weeks after joining, walking away from hundreds of millions of dollars. That's crazy. I would stay. I could hack it. I could hack it for six months. Maybe that's a testament to my character. I don't know, but damn.
Josh:
[34:43] I'd love to be a fly on the wall for these conversations as they're going through this. I think one of the more fascinating parts of his post was that middle paragraph, where he said the pitch from Mark and Alexandr Wang to build in the superintelligence team was incredibly compelling, but he ultimately chose to follow Mark's own advice: in a world that's changing so fast, the biggest risk you can take is not taking any risk. And it's a testament to the type of people, the quality of people, they're hiring at this company. These are the top of the top, some of the smartest people in the world when it comes to AI, and perhaps even generally speaking. They're brilliant. And I would imagine so much of their purpose when they wake up in the morning revolves around applying that intelligence to something they care about. Because when you're at that level, money doesn't really make a difference to you, because you can basically command whatever salary you want, whatever compensation you want. You are in control, because you are so rare. And I'm sure this guy probably has plenty of money in the bank, and if not, he can generate it very quickly, because any company would hire him to do whatever he wants to do for them.
Josh:
[35:46] And when you get to that point, I imagine the money doesn't really move the needle. In this guy's case, he wants to work on something he feels inspired by, and hey, all the power to him. That's admirable. Clearly the paycheck isn't enough to pull everybody; this was a case where it wasn't. And now he's hopefully going to go on and build some cool things. But yeah, man, I'd love to know what's going on at Meta. How are they treating these people? What are they talking about? What are they working on? What does it look like when you assemble essentially the intellectual Avengers of the world and put them under one roof?
Josh:
[36:16] And let them be run by this dude, Alexandr Wang, who's, what, in his mid-twenties? This young guy.
Ejaaz:
[36:21] Literally. And he is now the sole leader of this unit, Josh. Remember when they first founded it? He was a technical co-lead alongside the former CEO of GitHub, right? And now that guy has stepped down, and it's just this guy.
Ejaaz:
[36:37] So I think we'll probably find out in the next couple of months. Zuck teased in his quarterly letter that they had discovered kind of like self-improving AI. Those are big fighting words, and I hope he delivers on that beyond just AI companions and a pair of AI glasses. But yeah, I'm excited to see what happens.
Josh:
[36:56] I am too. So that is the news of the week, up to Tuesday. We're at Tuesday; there's still a lot left to go this week. But currently, the big things you need to know are: hey, Gemini is kind of kicking ass. Google is doing really well. Their new image generation model is not only amazing, but it is readily available to use, basically for free. So if you have any photos you want to generate, any photos you want to edit... Or, there was an interesting example I was looking at earlier today when I was recording a video: you can take old photos of loved ones, photos that are black and white or look very vintage, and it will restore them very well. It'll not only recolorize them but reapply detail to the image to make it look rich and very realistic. So Google's crushing it on every front. We are excited for Gemini 3; when it comes, we will be covering it first thing, I promise you. And the other thing is hardware: Meta's building hardware. This is cool. And I do want to let people know, next month is Techtember. This is the best month of the year. Every September, basically every company in the world drops their hardware. Google kind of cheated and dropped their hardware last week. But the way it works is, now Apple's going to release, Meta's going to release, all the hardware companies release in Techtember. It's going to be a whirlwind, because I'm sure AI will be at the forefront of all of these hardware releases. We will be here to cover it. Thank you again for joining us for the journey, and we will be back later this week with another episode.
Music:
[38:16] Music