This Week in AI: Anduril Makes Call of Duty Real Life | ChatGPT Goes Erotic | AI Cures Cancer?

Josh:
[0:03] I've been playing Call of Duty for probably the last two decades, and it's one of my favorite games. I love it. And for that entire time, video game developers have been spending all of their time and resources trying to make these games more immersive, to feel much more like real life. For the first time, we've taken all those learnings and actually applied them to real life, where now it's the opposite way around: reality looks like a video game. That's what Anduril released this week with EagleEye. We're going to talk all about that, and we're also going to mention a few other topics I think are interesting this week, starting with Google, which is now curing cancer using AI models. It's pretty incredible stuff. OpenAI is getting into erotica while also making their own chips, vertically integrated into their systems, to probably get to a couple billion users and take over the world. So it's a pretty eventful week. I want to start with EagleEye, because that's the most exciting to me. Ejaaz, can you walk us through what EagleEye is? I'm looking at this picture on screen of a war fighter that looks almost like a funny anime thing, because it has these ears on its head. What's going on with EagleEye and Anduril's new product?

Ejaaz:
[1:04] For those of you who are just listening, I pulled up the announcement tweet from Anduril of this new next-generation warfare helmet, and it looks like something straight out of the future, Josh. As you said, for any of you who have played Call of Duty before, especially some of the newer versions of the game, you're able to see behind walls, you're able to neutralize enemies from afar, you're able to control

Ejaaz:
[1:29] drones, all from your single helmet. You can see through walls, all that kind of stuff. That is now real. That is now reality. I'm going to show you a quick video of this happening right here. What you're watching right now on your screen is not a game, even though it looks like one. Even though it looks like, you know, you've got a little radar in the bottom left, you now have x-ray vision or night vision where you can see a supposed enemy from afar. This is just insane. And it looks like a game. And to expand on your earlier point, Josh, video games and reality are now kind of one and the same. And I think it's super important to, you know, not only move warfare into the modern age, but to also prove that games maybe had it right to begin with. Maybe it's just a better UI, and now reality is adopting that.

Josh:
[2:22] Yeah, I think that's one of my favorite parts of this. We often see this with sci-fi, where it predicts the future in some form. Video games have kind of predicted the future in some form. And in a way, they've been building this user interface for two decades based on user feedback. Granted, it's been in a video game, but it's really interesting to see Anduril taking that feedback, taking this interface, and actually applying it to the real world. And what's so cool is you get these abilities because of this new cutting-edge technology from Anduril, built in partnership with Meta. So if you remember, Ejaaz, Palmer Luckey was the founder of Oculus, the VR headset company. It got sold to Meta. He left. But recently the two companies formed a partnership to build this EagleEye technology. And some of the cool stuff in here: they have all these different sensors that allow you to see through walls. So you have, I think, thermal sensors and range sensors, and all these really amazing features. One that we saw earlier was this thing they call ghosting. When a target goes behind cover, like when an enemy is walking behind cover, the system can actually show a skeletal outline or a bounding box around the ghosted target, because it has the ability to see through surfaces. In video games, we call this wall hacks. You get wall hacks in real life. It's this really unbelievably sophisticated headset that, I would imagine, if it were available to consumers, would probably be the best headset on the market by far. The technology they packed into this is pretty remarkable, and the fact that it's all military grade means they did a really amazing job on this.

Ejaaz:
[3:50] I just want to go over some crazy things that this helmet can do, Josh. So you mentioned a drone, right, that you can use to either help you see behind walls or see from afar, but you can also get the drone to do things for you. In this first feature that I'm showing on my screen here, you have a soldier wearing this helmet, and he has a radar in the bottom left-hand corner of his screen, which is making sure he's going in the right direction. But on the right, you can see there are notifications coming up from something called Ghost. That is the name of the drone that his helmet is connected to. And what you can't see here is that the soldier wearing this helmet is signaling to the drone to neutralize an enemy from afar. If you can see the bottom right-hand corner of the screen right now: that's it, the target is neutralized. Literally something out of a game. So the drone maybe shot a missile, or whatever that might be, but it's super crazy to see this happen in real life, or even be capable in real life. The second really cool feature is you now have eyes in the back of your head, Josh. You have 360-degree vision at all times, so you can't be taken by surprise. And while it may seem super simple, I can only imagine this changes the way war is carried out in the first place.

Josh:
[5:07] Yeah. And we have a really cool demo to show, right? Can we show the longer demo? Because when I saw it, it was really... amazing, because what we're seeing here is exactly what the soldier sees. You can see them flipping through different infrared scanners and thermal scanners that are built right into the headset. You can see, when they walk behind the wall, the skeletons, and you can see the enemy target getting taken down. And what's amazing is, at the beginning of the last video, you even see this rear-view mirror, where when you're looking forward, it's like you're in a car with a rear-view mirror. It's really designed to be optimal, to get out of your way and be predictive. And this is something that's enabled only by AI, because a lot of these things are predictive. Okay, a drone sees someone; it's predicting that you want to take that person out, it's predicting that it's an enemy. A lot of AI integration into this headset is really what makes it so special: predicting what you want to do, predicting what you want the system to do. I think that's one of the most interesting parts. And also just the form factor of it. This thing looks so freaking cool. If it wasn't a war machine, I would really want one at home just to have, because it looks cool. Clearly the quality is great, the resolution is great, but also the integrations are really cool. The fact that it can sync up with a drone that is flying in the air. I mean, like, yeah. We just take

Ejaaz:
[6:27] a look at this for a second, Josh. Okay, so I've zoomed into the image here. I see bunny ears. I was going to point it out. I was going to be like, it looks cool, but why are there ears on this? It looks kind of ridiculous. Okay, but you've got three of probably the most powerful lenses, which I'm guessing give you access to the infrared display or the night vision or whatever that might be. Then, what are in the bunny ears here? Am I going crazy? Are these sensors that can detect your surroundings? Oh, I'm guessing it allows you to see out of the back of your head. That's the thing that gives you the rear-view vision. Is that right? I'm not sure. We've got to get the design specs.

Josh:
[7:05] So we can see.

Ejaaz:
[7:06] Hang on a second, Josh. We're missing the most important thing.

Josh:
[7:10] The coolest

Ejaaz:
Nerdiest thing, yes. Tell me about this.

Josh:
Okay, I'm not going to attempt to translate this post, but I'm

Josh:
[7:17] going to translate what the image is showing. So what we're seeing, if you scroll down just a little bit... oh, there we go. What they did is they took a page out of Apple's Vision Pro playbook. The battery on these things is very heavy, so they removed it from the headset and turned it into an armor plate, and that plate actually holds all the energy, the battery. That plate is made of this really cool new thing called a ceramic solid-state battery, I believe is what it's called. A ceramic solid-state battery is interesting because it doesn't blow up. It's a battery that uses a solid ceramic electrolyte instead of a liquid gel electrolyte. Traditionally, batteries are wrapped up in this really tight coil, and in the case that they get penetrated, they actually overheat and explode. And when you are taking on bullet fire, you do not want your battery to overheat and explode. So they use this really cool novel technology that allows the plate to double as armor and as a battery that doesn't explode when penetrated. It's this really beautifully elegant system, really built for military use cases. And what's cool about this system, you guys, is the whole front of the face is covered. The previous ones we saw just have glasses as the interface. This one has a full facial cover, and it doesn't look like there are holes for eyes; there are just cameras. So I'm wondering if this is a totally augmented headset variant of EagleEye, where you just put on the headset and all you're seeing are screens. This thing is badass. Well, I know it's military, so we're not going to get too many details and secrets, but...

Josh:
[8:47] Man, would love to get a demo of that bad boy.

Ejaaz:
[8:50] You know, I'm not even going to attempt to relate to what it's like to be in a warfare scenario. But if I saw some guy wearing this helmet coming towards me, I would be pretty nervous. I mean, he looks like some kind of robot. And we're probably not too far off from that, right? We start off with humans and robots.

Ejaaz:
[9:08] And now eventually robots will wear whatever this bulletproof battery helmet contraption is. But Josh, there were equal strides made by other top companies in the AI space this week, most notably from Sam Altman at OpenAI.

Josh:
[9:25] Well, he didn't

Ejaaz:
[9:26] announce any cutting-edge warfare-type weaponry. He didn't announce that he's cured cancer. But what he did announce was that ChatGPT is going to become unrestricted and allow erotica, which means that you, me, and anyone else with access to ChatGPT can engage with adult content. I have mixed opinions on this. What you're seeing on the screen right now is his message announcing this, which actually came as kind of an afterthought. I'm highlighting the last sentence, where he says: in December, as we roll out age-gating more fully and as part of our "treat adult users like adults" principle, we will allow even more, like erotica for verified adults. Now, there's been a mixture of reactions to this news, some stronger than others. The main one being: this is going to be terrible for human society. It's just trying to suck attention and capture people's eyeballs, eventually to feed them adverts so that they never click away. And this is just classic, you know, Facebook trying to get all your attention, what every other social media company has tried to do for the last couple of decades. So overall, very bearish opinions. Josh, do you have a different take on this? Or do you see something that I currently don't?

Josh:
[10:43] It's funny, we were talking to Luke before the show about this idea that once you reach a certain level of power and domination in the AI space, you start to change your values, and you can see the values shifting in real time. One of the examples we were using is Dario and Anthropic. They're kind of losing the AI race, in a way, where they have a great coding model, but they're not at the leading edge of really anything else. And Dario is going on stage and talking about how AI is kind of dangerous and we've got to be more careful, and open-source AI actually isn't a good thing, kind of contradicting a lot of the popular thought of what really is true. And in the case of Sam, kind of the opposite is true, where he was very high on his moral horse until they got 800 million weekly active users, until they started generating a ton of revenue. Now it's kind of like, hmm, maybe we will roll out some more edgy features. And this comes after a time when he was very critical of Elon and xAI releasing the Ani feature, the companions within their application. And I think we see this a lot with open-source models too, where everyone wants an open-source model and everyone's supportive of open source until you get a leading frontier model, and then suddenly all the doors slam shut and you're holding that super tight to your chest, because it is so valuable and you become emperor of the AI world. And that holds a lot of value. So what we're seeing is this kind of slow degradation of...

Josh:
[12:12] The principles that they've held themselves on for so long. And this could be an overreaction because Sam did have a follow-up post kind of like trying to temper expectations around this. But it's interesting to see them moving into this space when I very much thought they would be strongly against it.

Ejaaz:
[12:28] I have a slightly different take to yours, Josh. I think that Sam is just trying to make money. And I think he originally sold the vision of OpenAI around this concept of AGI, artificial general intelligence. This is a form of intelligence that can find the cure to cancer, that can help take us to space, and a number of other things that humans haven't been capable of doing for the last bunch of decades, right? And I think when he sold this vision, he used it to raise a bunch of money. Why did he want to raise a bunch of money? Well, he needs to buy more compute, buy more GPUs, scale out data sets. These are all really costly things to do. But obviously, that takes time. That's going to take like a decade to get to the amount of energy that we need to train this superintelligence. So in the meantime, how are you making money? Is it going to be from ChatGPT subscriptions? Because they just released new figures this week. Although they have 800 million weekly active users, only 4% of those users actually pay for ChatGPT. And of that 4%, I think only around 10% pay for the pro version. So you basically have very minimal paying users out of the 800 million weekly actives, and most of them are paying 20 bucks a month. That's not going to cover your costs. I think it's something like what we were talking about earlier: is it one in every $5 spent, or $3?

Josh:
[13:51] I believe the number was one in every $3. So for every $3 spent, they earn $1 in revenue. And the average user is spending $27 per user per month.
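The subscription math the hosts just ran through can be sketched as a quick back-of-envelope calculation. All the inputs here are their on-air approximations (800 million weekly actives, roughly 4% paying, roughly 10% of payers on a $200/month Pro tier, $20/month for the rest, and $1 of revenue per $3 of spend), not official OpenAI figures:

```python
# Back-of-envelope sketch of the subscription math quoted on the show.
# Every figure below is the hosts' rough approximation, not an official number.

weekly_active_users = 800_000_000
paying_share = 0.04          # ~4% of users pay at all
pro_share_of_payers = 0.10   # ~10% of payers are on the $200/mo Pro tier

payers = weekly_active_users * paying_share
pro_users = payers * pro_share_of_payers
plus_users = payers - pro_users

# Subscription revenue: Pro at $200/mo, everyone else at $20/mo
monthly_revenue = pro_users * 200 + plus_users * 20

# "For every $3 spent, they earn $1 in revenue" => spend is ~3x revenue
monthly_spend = monthly_revenue * 3

print(f"payers: {payers:,.0f}")
print(f"monthly subscription revenue: ${monthly_revenue / 1e9:.2f}B")
print(f"implied monthly spend: ${monthly_spend / 1e9:.2f}B")
```

On these rough numbers, roughly 32 million payers bring in on the order of $1.2B a month against an implied $3.6B of spend, which is the gap Ejaaz is pointing at.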

Ejaaz:
[14:03] Exactly. So OpenAI is running at a loss, and I think he wants to close the gap between the amount they're spending to fuel all of these AI queries everyone's making and, you know, building the grand vision of AGI. So I just think this is a ploy for Sam, and also for xAI and Elon. He released Grok avatars, which do a similar kind of soft, explicit chat. I think it's just to make money, to engage more people. My third view is, I think this is a ploy to try and get as much data on the

Ejaaz:
[14:42] individual as they can, Josh. I think that if they can engage people in some kind of erotica, that individual will feel more forthcoming about personal information that they wouldn't give to any other web application.

Josh:
[14:55] Yeah, I would love to be a fly on the wall in the conversations that were had around that decision. Because when you think about it, essentially what they're trying to do is give these language models infinite access to our limbic system. This feels like a civilizational decision. A billion-plus people are going to be affected by this, and it's not necessarily a product decision. Personalized erotica basically feels like a Trojan horse into emotional stimulation. We already had the visual desire stuff on Instagram, and then we have this parasocial relationship with streamers, but this adds a very immersive desire that reacts and learns and remembers and adapts to all your preferences. What's funny, I think 30% was the number: something like 30% of books sold are romance novels. And a world in which you can hyper-customize that to all of your desires and fixations, on a regular basis, in real time? It's a big deal. So there are large implications for this beyond just making money, I think. But I do agree in the sense that this was very much a profit-driven decision. Sam has spoken about this briefly, where he's like, yeah, we actually just need to make money. And also, he is now no longer fully against an advertisement-based model inside of ChatGPT. So there's a lot of changes happening, a lot of shifting of the goalposts and Overton windows here. I'm looking at another picture on your screen. Why do we have this up here?

Ejaaz:
[16:23] Well, we were discussing this before we started recording this episode. This is from an engineer who has worked at OpenAI for a while now and is responsible for their Sora product. We've mentioned that a bunch of times on this show; you should watch our episode dedicated to it, which came out last week. But he basically put out a tweet, now deleted, captioned: Bonnie Blue, who is an adult content creator, made a public cameo on Sora. Go generate funny videos. If you don't know what the cameo feature is on the Sora app, it allows you to create an AI-generated video with your friends, with whoever is tagged, whoever's allowing you to make a cameo with them. And so it broached this uncomfortable topic of discussion: should anyone, including kids, because the Sora app isn't age-gated, right, be allowed to create cameos with these adult content creators? And obviously, the answer is no. So, you know, he deleted that tweet, and presumably you're not allowed to create that kind of explicit content. But it calls into question the genuineness that Sam and OpenAI as a company have. They keep repeating their vision of, hey, I want to build AGI and cure cancer and all that kind of stuff, but then they're releasing products like this, which kind of hijack your limbic system, as you said, Josh. So my mind's in this purgatory state where I don't know whether to believe Sam. I want to believe Sam. I want to believe their genuine intention, but their actions aren't really speaking for them.

Josh:
[17:50] Well, I guess we can believe one intention, which is that they are dead set on building AGI, because we have more news this week that they are vertically integrating chips into their system. OpenAI is making their own chips. Explain the deal. This was a big deal. Broadcom stock pumped a ton. There's a lot going on here.

Ejaaz:
[18:08] Yeah. So if you've listened to the show at all, you know that NVIDIA is the top dog when it comes to designing and building chips. These are the hardware you need to train next-generation AI models. And OpenAI announced this week that they're going to create their own. They've partnered with this company called Broadcom to deploy 10 gigawatts' worth of chips, designed by OpenAI. That is a lot of energy. So they're building out their own hardware, which is a very ambitious thing to do. Josh, you mentioned that this is their first step towards vertically integrating, and I want to jump into that. But before we do, I just want to highlight a few things that happened from this announcement. Number one: classic. This is the rule, this is a law of physics at this point. If you announce a partnership with OpenAI, your stock will pump a minimum of 20% on the announcement. Broadcom's chart, for those of you who can't see this, is a literal straight green lightsaber up. Like, it is up. It jumped, what was this, my God, 30 bucks in a matter of seconds.

Josh:
[19:18] That's equivalent to $200 billion. And the deal was worth significantly less than that. So it's like an infinite money glitch. Announce the deal, print a giant candle, gain more than the deal is worth. Yeah, and the deal is therefore free.

Ejaaz:
[19:30] That is just insane. So the key details: it's a multi-year strategic collaboration for custom AI chip development, and OpenAI is going to be designing its own accelerators. That's what they refer to GPUs as. And it's built entirely using Broadcom's Ethernet framework to help scale this all out. So Broadcom has all the architecture and instruments to allow OpenAI to scale this chip manufacturing to the extent that they want to. And it's 10 gigawatts of power. One thing I found really interesting about this, Josh, and they spoke about it a bit on their announcement episode, because OpenAI has their own podcast where they talk about these kinds of things, is that Greg, the president of OpenAI, said that OpenAI's models themselves

Ejaaz:
[20:11] helped design this chip. That I found super cool. They're actually using AI to help them design the chip. But then you might be asking, why design their own chip? Why not just use NVIDIA's? The argument they gave on this episode is that while NVIDIA's GPUs are great, they're great for general-purpose stuff, but there are some niche use cases that OpenAI's models really specialize in that those chips don't really serve. It becomes really inefficient. So if OpenAI is able to design its own chips, it increases efficiency and really helps maximize the use of those chips. Which brings me to the overarching point here: for anyone building frontier AI models, your singular goal should be to reduce the cost of intelligence per watt.

Ejaaz:
[20:59] It's no use creating superintelligence if only five elitely wealthy people can use it. You need everyone to be able to access this stuff so that they can go and build cool stuff, consume cool stuff, and rebuild this world into the future that we want it to be. We're not going to be able to do that if it's gatekept, right? There are a number of features on OpenAI right now that a lot of people can't access because you need a pro subscription, right? You need to pay 200 bucks a month. That just doesn't work at scale. And Greg makes the point that, let's say there are 10 billion people on Earth. You need 10 billion GPUs, a single GPU dedicated to each person. It's impossible to get that right now at this scale, so this is an attempt for OpenAI to get there.

Josh:
[21:43] Yeah compute per watt is the benchmark here we're trying to just lower that price as much as possible and vertical integration is how you do that there's there's two points i want to make here one is that this is an incremental 10 gigawatts of compute meaning this is stacked on top of all the other deals that they've been doing so you have like i forget all the numbers but you have like 10 from nvidia 10 from amd 10 from oracle like they're just kind of packing this all on and they're they're going to attempt at least to build this unbelievably large super cluster oh here's here's the totals 10 gigawatts was nvidia 10 gigawatts broadcom six for amd and 26 gigawatts in total until now that is just like this gigantic number so one like okay let's let's get going like we haven't even hit a full gigawatt running right now granted open ai has two gigawatts of total compute but not a single gigawatt in a single coherent cluster the second thing is the power of vertical integration. This is obviously a no-brainer and...

Josh:
[22:37] They probably stand to benefit more from this decision and this deal than from any other deal they're doing. And there are a few different examples we can reference here. The one I love most, that we always talk about, is Apple, and Apple's decision to move from Intel to the M-series chip, their own in-house chip. That resulted not only in cost per watt going down significantly, because it was much more energy efficient, but also the amount of compute they were able to integrate into the machine doubled, tripled, quadrupled year over year, because you're able to optimize that chip for your specific hardware and software stack. NVIDIA's chips are amazing, they're best in class, but they are not entirely optimized, and there's a lot of loss happening from electron in to intelligence out. With the OpenAI chip design, they get the opportunity to custom-build this from the silicon all the way up, or even from the transistor all the way up, through to the final output token. And there is so much efficiency to be unlocked along the way that I imagine if they actually do get 10 gigawatts of compute

Josh:
[23:43] on their own chips, running their own software and hardware stack, that will be more efficient and more effective than all of the other GPUs they have coherently training combined. It's a really big deal. We saw it again with Tesla, which vertically integrates the entire supply chain. In order to win this race, you need to get, just like you said, the cost per watt down, and that is the single biggest way to do it. So I'm very bullish on this decision. It's fun to see them announce this after announcing everyone else's deals. They're just building with everybody, and everyone's stocks are going up. So I guess my challenge to you, Ejaaz, and to the listeners as well: please send your suggestions for the next roulette spin we're going to hit next week. It's like, which dart can you throw at the right stock that OpenAI is going to partner with, to double in price and double your profits? That's the game I want to start playing. But yeah, this is amazing news. This is super exciting.
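As a quick sanity check on the compute totals Josh ran through a moment ago, the per-partner gigawatt commitments (as quoted on the show; the actual deal terms may differ) can be tallied in a couple of lines:

```python
# Per-partner compute commitments in gigawatts, as stated on the episode.
deals_gw = {"NVIDIA": 10, "Broadcom": 10, "AMD": 6}

total_gw = sum(deals_gw.values())
print(f"total committed: {total_gw} GW")

# For scale, per Josh: OpenAI currently runs roughly 2 GW in total,
# with no single coherent 1 GW cluster yet.
running_gw = 2
print(f"committed vs running: {total_gw / running_gw:.0f}x")
```

So the announced pipeline is on the order of thirteen times what is actually running today, which is why "let's get going" is the right reaction.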

Ejaaz:
[24:34] Okay. So if I had to throw a dart right now, Josh, it wouldn't be OpenAI. It wouldn't be xAI, though I love both of those companies. It would be Google. And let me take you there. So one, you just mentioned vertical integration is super important, right? It can unlock capabilities that you've never seen before. Google's been doing that at the chip level. They've trained all their AI models on something they call TPUs, so it's all in-house. They haven't been relying on NVIDIA. And Google is a huge company, and their models are at the frontier. They're super, super cool. What they've been able to do with this vertical integration is find novel new ways of training their models: less energy, less compute, but still the same standard of models that can compete with the top dogs trained on NVIDIA chips. So they've proven that, but that's not the reason I want to throw a dart in Google's direction. There was some really cool news that broke this week, Josh: Google's AI science department released a new model called Cell2Sentence. It's a 27-billion-parameter model, which is trained on

Ejaaz:
[25:41] a decent amount of scientific data. I say decent amount because it was only 1 billion tokens, which, if you and I know anything about this, is not a lot. Most models are trained on hundreds of billions of tokens, right? So 1 billion tokens, but it did something really cool, Josh. Google's new AI science model, Cell2Sentence, found a new potential cure for cancer.

Ejaaz:
[26:05] This is how it works. They identified a compound, a drug called silmitasertib. I definitely just butchered that pronunciation, but that's its name. And what it does is it helps the body, your immune system, detect something called cold tumors. Cold tumors are basically cancerous tumors that go undetected by the body's immune system, which allows the cancer to spread. That's generally how cancer works. But here's the catch.

Ejaaz:
[26:36] Scientists already knew about this compound. They knew that it can help identify these cold tumors. But what they didn't realize, and what Google's new AI model realized, was that it boosts the production of a protein called MHC-I, which basically acts like a flag, or hand-waving, to your immune system, saying, hey, this is cancer, you need to kill it. And the reason this is so important is that, one, humans hadn't even thought about this. There were no scientific studies on this. No one had made that connection. And so the scientists were like, okay, this is a cool hypothesis, let me test it with real cancer cells. And Josh, guess what happened? It improved the body's ability to kill those cancer cells. Can you imagine if there was a way to apply this to a patient who actually suffers from cancer? You could have their immune system just attack the cancer itself. I just thought this was super cool, and more of a resemblance of AGI. Meanwhile, you have OpenAI building out, you know, adult content for ChatGPT, and then you have Google here quietly vertically integrating their entire stack, training their own AI models, and maybe finding the cure to cancer.

Josh:
[27:48] Yeah, the new science thing is really exciting on the path to AGI, because there's a world in which our old benchmarks would have very clearly been satisfied: creating new science would have qualified as AGI, and this very much feels like it is creating some sort of new science. I want to try to synthesize what you just said into something shorter, because I've been trying to understand it myself, so I think I have a description. You need to let me know if this is right or wrong. Basically, they created this thing called C2S-Scale. You can think of it like ChatGPT, but instead of learning human language, it learns the language of cells. It reads how genes and proteins talk to each other inside your body. Then they asked it a question: how can we make these cold tumors that you mentioned, the ones the immune system can't see, turn hot, so your body can actually attack them? And then, what they did, which is super cool, is it ran virtual tests on 4,000 drugs in two different settings, one with and one without immune signals, and found one that worked only when those immune cues were already there. Which is perfect, because it'll boost the immune response only when it's needed. So they tested this in the lab, and it worked. Amazing. A 50% increase in this immune visibility, and that is the breakthrough. It is this remarkable achievement that we gained a 50% increase from a model. How small did you say it was, Ejaaz? This is like a microscopic model: 27 billion parameters, when we're at multi-trillion-parameter models. And with 27 billion, we were able to get a 50% increase in something that's critical to saving a lot of people's lives. So this is unbelievable new tech, of course, coming out of Google.
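The dual-context screen Josh describes can be sketched as a toy filter. To be clear, everything here is illustrative: the other drug names, the effect numbers, and the threshold are made up to show the "conditional amplifier" logic (active with immune cues, inert without them), not Google's actual C2S-Scale pipeline:

```python
# Toy sketch of the dual-context virtual screen described above.
# Each candidate drug has a predicted boost to tumor immune visibility
# in two simulated settings: with immune cues present, and without.
# A "conditional amplifier" boosts visibility only when cues are present.
# All numbers below are invented for illustration.

drugs = {
    # name: (boost_with_cues, boost_without_cues)
    "drug_a": (0.02, 0.01),         # inert in both settings
    "drug_b": (0.40, 0.35),         # boosts everywhere, context-independent
    "silmitasertib": (0.50, 0.03),  # boosts only when immune cues are present
}

THRESHOLD = 0.25  # minimum boost to count as "active" in a setting

def conditional_amplifiers(candidates):
    """Keep drugs active with immune cues but inactive without them."""
    return [
        name
        for name, (with_cues, without_cues) in candidates.items()
        if with_cues >= THRESHOLD and without_cues < THRESHOLD
    ]

hits = conditional_amplifiers(drugs)
print(hits)  # ['silmitasertib']
```

The real screen scored thousands of candidates with a model's predictions rather than a lookup table, but the filtering idea is the same: keep only the hits whose effect depends on immune context.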

Josh:
[29:15] They are the rock stars when it comes to developing new science.

Josh:
[29:19] And just like, hey, shout out to the team. That's awesome.

Ejaaz:
[29:22] Yeah. I found this tweet pretty funny.

Josh:
[29:25] Everyone. This is great.

Ejaaz:
Packy McCormick goes: everyone else, behold, an AI you can beat off to. Google DeepMind: protein folding, weather prediction, new materials, and now an AI that can make its own cancer discoveries. He's making the point, obviously, that Google's AI models extend well beyond the chatbot arena, and they're actually making real impact in the world, in this case in science and in new discoveries. Just super cool and inspiring all around.

Ejaaz:
[29:56] I just wanted to zoom out quickly and make a wider point around Google's efforts. Again, why I'm throwing a dot for Google next week using your prompt.

Ejaaz:
[30:05] Josh, is that Google is slowly developing this trend of building AI that can self-iterate, that can self-improve. And there's a distinction to make here. When you look at the models that OpenAI is creating, that xAI is creating, they're so intelligent because they're shoving a bunch of compute that way. They're spending a gargantuan amount of money to train these new models on data sets and parameters that they design, right? We just mentioned that, you know, a 27 billion parameter model is quite small compared to the trillion parameter models that these next-gen models are being trained at. But Google is taking an alternative route here, which they're repeatedly highlighting, which is: we'll create a smaller model, but we'll make the model able to learn itself, from its own mistakes, from its own logic, and apply the lessons it learns from reasoning about other problems to new problems that it faces. So instead of relying on some kind of data that you provided to train it to become a genius at that thing, it just learns itself, similar to how a human does. And I just thought that was worth highlighting. Google has always taken an alternative approach, and they had a really bumpy start, Josh. I remember when they released their first model. What was it called again? It was...

Josh:
[31:18] Oh, man. It was that long ago. Do you remember the name? I don't remember the name. Damn.

Ejaaz:
Okay, whatever. It was called something ridiculous. And that defined an era where you could type in, hey, show me a picture of a tree, and it would show you a picture of a door. Like, it was that inaccurate. And fast forward to now, and they have Veo 3, a text-to-video model. They have one of the best chatbot models ever. And now they have a model that's creating new scientific discoveries. Just an amazing 180 on their entire effort here.

Josh:
[31:49] It was called Bard, Ejaaz. It was Bard.

Ejaaz:
That's it. That's what it was.

Josh:
[31:52] What the old model was called, yeah. Google, I mean, it's funny, in a way they're not being as loud about it, but they're doing what OpenAI just announced this week, which is the whole vertical integration thing. Like, they have their TPUs, which are their tensor processing units, so they're built just for AI math. They're not like a GPU that's good for general purpose; these are built specifically for that math. They have all the resources in the world, they have all of the greatest minds and developers and people working on this. So they've really turned it around, and they're focusing on interesting things, which I really admire. Like, seeing all these science breakthroughs every week is really cool, and they're seemingly the only ones that are doing this currently. So yeah, big, big ups to Google. Is that everything? Have we covered everything this week?

Ejaaz:
Already here, so we might as well finish it off. There's one more update, a really cool one from Google. I mentioned Veo 3, which is their next-gen text-to-video model. What you're seeing on the screen is all AI generated, despite it looking like a Hollywood movie production. This is all generated by AI. It is not real. They released their latest version, Veo 3.1, and I'm going to briefly go over some of the hottest features of this new model. Number one: one of the big bits of feedback they got from users of Veo 3 was that the videos it generated were just so short, like 10 seconds long. Well, now you can generate up to 30 seconds of continuous video. If you don't think that's cool, it is extremely cool, because this stuff used to be really, really expensive, and somehow they got the cost of generation really, really minimal, which is great. So you can have much longer continuous forms of video. Number two, Josh: you can provide it with three reference images. So it could be a picture of my mom, I could give it a picture of a pet that I don't have, and I could say, hey, create a fun little scenario where we're sitting in the living room and I'm petting my dog, and it would do that with those images as the reference. The third thing that is really cool is you can take one scene and extend it into a second scene. So let's say you generated 30 seconds of a clip and you were like...

Ejaaz:
You know what? I wish I could see what happens next. You can just put it into Veo 3.1 and it can extend that. So theoretically, if you spend around, I think I saw a tweet about this, I think it's like 5,000 bucks, you can create a feature-length film. That's just nothing compared to the millions that you'd normally spend.

Josh:
[34:13] That's pretty amazing. And I think it was Wander, there was a travel company that actually had their AI ad published, and it looks great. I think one of the biggest things we're seeing breakthroughs with recently is this continuity, the storyboards, which actually Sora just rolled out 12 hours ago. FYI, storyboards are available to web and Pro users. And with Sora, users can now generate videos up to 15 seconds long, and Pro users can generate up to 25 seconds long. So Veo 3.1 and Sora now both have these abilities to create this continuity throughout the videos. And yeah, we're seeing the example that you just pulled up here, which is a really well-done commercial that actually went live on the Away page, that they seemingly didn't spend a lot of money on, the Away company, at least not on producing it, because it was all generated by AI. And it just looks increasingly better and better. You get the character continuity, you get the design-style continuity throughout the whole thing. And again, this is the worst it's ever going to look, and it's pretty damn good. So yeah, good news on the video front. Very bullish, slightly nervous. This is very powerful, very capable. But nonetheless, it is happening,

Josh:
[35:22] and it is happening very, very quickly.

Ejaaz:
That is it for this week's AI Roundup, folks. We hope you enjoyed it. Our goal with Limitless, as we've said right from the start, is to bring you the cutting-edge, hottest news in as much depth and with as much insightful commentary as we can give it. That is Josh's and my goal, and to interview the coolest people as well. That gives us a lot of energy. Keep giving the feedback that you guys are giving us. We mentioned this on a previous episode, but a special treat for those of you who have listened this far: our friends at OpenAI blessed us with around 300 Sora 2 codes. I say 300, it was 500 yesterday, because 200 of you have already taken them. So if you want a Sora code, we just have some humble requests. Please like, please subscribe on whatever platform you're on, and give us a rating. That could be a three-star rating, that could be a five-star rating. I would more likely give you a code if it is a five-star rating. Let us know some feedback and DM us. Send us some proof and you'll have a code, right on us. You can find us on X or all the popular social media platforms. And thank you for listening. We will see you on the next one.

Josh:
[36:28] See you next week. Been a good week of AI.
