The New OpenAI Gadget Will Change The World | AI Calls The Cops | AI Agent OnlyFans

Josh:
[0:03] The same person that made the iPhone just got paid six and a half billion dollars

Josh:
[0:07] to kill the iPhone by Sam Altman, the CEO of OpenAI. The person who got paid all this money is named Jony Ive. And if you don't know who Jony Ive is, well, you do, because you probably have a device that is in your pocket or on your desk or in your ears. Jony Ive was the lead designer at Apple. He was the person who designed the first iPod. He's the person who designed the first iPhones.

Josh:
[0:28] A lot of the devices you've seen and used, Jony and his design team at Apple have done them. So Jony stepped away from Apple back in 2019, and he kind of went silent for six years thinking about what was next. We now know what's next. It's a six and a half billion dollar collaboration with OpenAI to design the future of AI hardware. This is what the iPhone would look like if it was made in a world where AI came first. So I would imagine this product is kind of like: you have Windows, you have Mac, and then you have this. And whatever this is, is going to be seemingly a pretty big deal given the team and the money behind this. Ejaaz, I'm curious, I'm sure you've seen the news.

Josh:
[1:02] What do you think about this?

Ejaaz:
[1:03] Yeah, I mean, this is just a killer move. OpenAI has made its biggest acquisition to date. And by the way, this follows about three weeks ago when they made a $3 billion purchase of Windsurf, which is like a completely different kind of company. And now they're making the move on the design side of things. So for context here, they've bought Jony Ive's company, io,

Ejaaz:
[1:26] And kind of like together with it, another company called LoveFrom, but io is the main company. And Jony is, as you said, Josh, famously credited with kind of being the grandfather of design, right? He pioneered most of the design for Apple's devices, particularly their iPhone. And what io is going to do for OpenAI will be to create a range of different devices and products that, in their words, will form the new bedrock of how we interact with AI. So they're basically saying or insinuating that it's going to be the death of the phone, dude, or like the computer itself, right? And the phone's kind of been killing the computer. And now whatever they're going to be building is going to be killing the phone.

Ejaaz:
[2:04] So I'm curious to think what this is going to look like or what it'll be. Now, of course, the discussion has been on that, right? What exactly will this be? Is this going to be a new type of phone or VR glasses? Well, OpenAI right now is kind of keeping it a secret, but the rumor mill is kind of going pretty hard. And one of the more reputable sources, the Wall Street Journal, if you just pull up this article, said that it'll be a device that will be a third core device in addition to your phone and computer. So they're kind of describing it as being something that is unobtrusive. It can kind of just sit in your pocket, maybe lay on your desk, and is aware of everything that is going on in your life. So, kind of like, I was thinking about this the other day, it's kind of like mass surveillance, but just for your life. It's kind of creepy.

David:
[2:50] You're doing the mass surveillance around you.

Ejaaz:
[2:53] Over your local context. I'm still trying to figure out whether that's good or not. Probably not, because one company just will own all of that data, but we'll get into that in a second. But the goal is to take you away from your screens. And the last point kind of got me thinking, well, that's not necessarily true, right? A large percentage of internet businesses rely on eyeballs for advertisers. You know, you got YouTube and your Instagram being an obvious example. How would a non-screen replace this business model, right? Maybe they have a new idea. Also, humans themselves are like incredibly visual creatures, right? It fuels our imagination and connections. I think whatever this device is won't cancel out visual screens, but perhaps even enhance them. And I think like, imagine it leverages the data it gathers to enhance your web

Ejaaz:
[3:39] browsing experience, for example. But yeah, anyway, they spent $6.5 billion. $5 billion of that is a stock deal, but $1.5 billion of that is cash, which is just crazy.

Josh:
[3:49] One point on the price is that it equates to roughly $155 million per employee. The io team is actually only 55 employees, so over $150 million per employee is roughly what they paid for this company.

David:
[4:05] In addition to the super high price tag, they're just putting a bunch of marketing effort around this introduction. So this now-iconic photo has gone around of Sam and Jony as a pair: Sam, the creator of AI.

David:
[4:18] Jony, the hardware, mass-marketable product designer, around AI. They did this like nine-minute-long, kind of just like short episode, like documentary episode, about the incoming integration between Sam and Jony. And it's worth, I think, just tracing over why this is so significant and why OpenAI is pushing this so hard. Because Jony Ive, I think, is credited with the idea of smartphones going from a niche category to being the only category of phones. And that came with the iPhone, and everything since the iPhone has just tried to copy the iPhone. And that was Jony Ive. He took this crazy technology and he put it into people's hands. He made it accessible in a particular way. And I think that's what people are trying to create a parallel with, with AI. Like, right now, most people don't use ChatGPT. Most people don't use AI. It's actually a smaller corner of the internet that, you know, technologists and futurists and, you know, high-performance consumers really enjoy using, but no, it's not really mass marketable. And so I think they're trying to draw that connection of just like, Jony Ive will do the hardware thing that will put AI into the world. And I heard this term downstream of this conversation, ambient AI, as in with hardware, with whatever Jony Ive designs, there's going to be just...

David:
[5:38] A vibe of AI around us. It'll be in our homes. Because of this device, it'll be around us all the time. Now, this isn't the first time AI hardware has been attempted. There's been a number of startups. I think you guys remember Friend, friend.com, this one founder that bought friend.com for some ridiculous amount of money, and they had this AI pendant. This actually hasn't even started shipping, I just realized this, but shipping starts in July of 2025. So they haven't even started shipping this thing. But that's the idea: it's a necklace with this device at the end of it that you would wear, and it would be just around you and accessible to you. Now, what it would actually do, uncertain, because we don't actually have this thing. But I think it's worth discussing why people think there is something here. Like, why is AI hardware... I think, because AI is so... it's the quintessential idea of software. Why does it need hardware? Why do we need a hardware form factor to like house or embody our AI? Now, fortunately, Josh here, I think, is the guy who's like sniped about AI hardware, so maybe you could talk about why you are so into this idea of AI hardware and why there's such a big valuable vertical here. Yeah.

Josh:
[6:51] Far before I was obsessed with AI hardware, I was obsessed with Jony Ive. So we have a history, we go back. I have his book, like, within arm's reach always, because I'm obsessed with just industrial design and the way he thinks about things. And I got fascinated with AI hardware because I think the phone took over the world, but the phone is a very distracting and extractive device, in the sense that now most of your screen time, for myself and a lot of other people, is spent scrolling and consuming. It's not a passive device anymore, it's an active device. So while it can be used with high leverage and can be used as a tool for many great things, it's also used to kind of strip away a lot of parts of your life. And a lot of the troubles that we see with addiction and misinformation, a lot of that comes from just being glued to a screen all day long.

Josh:
[7:38] So the reason I'm excited about a hardware device is because it's been 20 years of iPhones and they all kind of look the same. Like, the iPhone hasn't really changed a whole lot since that first one 15 years ago, however long it's been. And there's a chance to rethink it, because when the iPhone first came out, computers could not think, they could not see, they could not feel. They had none of the sensory capability that AI has. AI can see things, it can understand what it's seeing, it has context on the real world.

Josh:
[8:01] So when you consider what the next design revolution looks like, it's not really looking like this. It probably looks completely different, because the entire interface is built around this new set of inputs and outputs, which is the real physical world around you. So when I was thinking of who can make this, well, obviously Jony Ive is the number one person. And it got a lot of hate, but I think if there was any chance for this to actually work, it has to be this way, because it's not just Jony, it's the io team. And the io team is the LoveFrom team, which is the team that left Apple's design studio, basically. So all the people who are pros at industrial design, all the people who have made every single consumer device that has succeeded, are now part of one super team. This is like the Avengers, and this is their one shot on goal, that we have to design this well. And I also think it's important that they start, because with a lot of these trends, we see the first version, and then a lot of people kind of copy the first version based on how well it works or doesn't work. And what we've seen when it works is the iPhone with iOS, and over the last 15 years no one's designed anything better than an iPhone with iOS. It's just small iterations on it, and Android has kind of copied it, and you've seen Windows has copied a lot of it. It set the standard really high. And in a case where it didn't work, we have virtual reality: we have the Oculus, we have Meta, and they designed a hardware device that could have been incredible, but it just wasn't. It wasn't built very well, the operating system was very clunky, it didn't work well, and therefore it was tough for people to envision a new version one, thinking from first principles

Josh:
[9:30] and designing it well. And we've had this really bad lag in virtual reality. So

Josh:
[9:35] setting the standard of a new AI hardware device from the top, with the top guys who can set this beautiful standard for this new frontier, is probably great, because that means we don't have to wait many, many years for people to iterate through kind of clunky software, clunky hardware. So

David:
[9:48] OpenAI is going to take the biggest shot on goal possible with Jony Ive and this $6 billion acquisition. Again, not just Jony Ive, as you said, but it's this entire engineering team, the world-class hardware engineering team, where if anyone can crack this nut

David:
[10:03] of AI hardware, it's going to be these people. And they are just going to take the largest shot on goal possible here and just funnel $6.5 billion into this. Why do we know that there is something here to do? Like I said, there's been the AI pendant, the Friend pendant. There's been conversations of AI hardware. Why does there need to be AI hardware? Why can't I already have ChatGPT on my phone as an app? It works great. Why do we need anything different than ChatGPT on my phone?

Josh:
[10:32] It's probably one of those things that we'll know it when we see it. There is this whole new computing platform, which is AI. We have this form of intelligence, but there's really no meaningful way to interface with it aside from your fingers and your voice on your phone. And regardless of what the device looks like, I assume they're going to figure out a way to just make it, like you said earlier, like an ambient device throughout your life. Because we have all of this incredible AI intelligence, but there's no way to readily access it.

Josh:
[11:02] And it doesn't have access to a lot of the things that we experience. So what you're seeing with these pendants, which is directionally correct, is this passive, ambient surveillance of the world around you that you could then go and reference, or that can be used to help improve your life. So if you're walking down the street and you pass by someone, you say something or you saw something that you liked and you forgot, it can recall that. And I think it's the first step towards this convergence between humans and machines, where, before we get to the brain-machine interface of the chip in our head, this is kind of the passive, clunky version of that, where you have access to this hyper form of intelligence 24/7, on demand, always. So I think a lot of it depends on the form factor and how it actually works, but the intention will be like, hey, this is this passive way of accessing this new form of intelligence that's with you all the time.

Ejaaz:
[11:50] I kind of think of it as like a second brain. Like, I think these phones are incredibly inefficient, right? They did their job of extending kind of like human intelligence, but it's literally that. It's an extension. And what we're talking about here is literally another version of you, and not just another version, potentially even a better version, right? It's more efficient. It's smarter. It says the right things. It knows your personality. It's more personable eventually than you can be, right? Or that you even know how to be, right? It tells you what to say in whatever context or situation. Now, right now, me speaking into this device, this mobile phone, or me having to swipe through apps, pull up the app, enable audio, it's just so clunky, right? And then another way I think about it is like, this kind of dormant pendant or whatever this device ends up being that can just do mass surveillance for your own life, it's just a ton of data receptors that's ingesting all the information that you yourself right now need to feed into a device, into your phone. You need to update your friends on your social network and say, hey, I'm in this location right now, check me out, or look at this picture. But what if it was 24/7 ingesting that data and feeding that out to whoever your network is? That would become probably a crazy attention game, which anyone would want to monetize through hardware.

Ejaaz:
[13:10] And I was just thinking, David, earlier on, to your pendant example, I think it was called Friend, right? Why would anyone want that? I remember thinking, that is just an insane thing. Well, if you don't know the answer, just look to the users, right? And it reminded me of a conversation I had with a friend who was talking about his other friend.

Ejaaz:
[13:29] Basically, she was in a bit of a situation where she was getting into some debates with her friends and she was disagreeing a lot with these friends, but she was convinced that her argument was legitimate. So she did something that I thought was kind of interesting, or creepy. She got a device. It wasn't this pendant thing, but it was linked to her ChatGPT account on her phone and she wore it as a necklace. And it could pick up audio for all the conversations she was having with her friends, but she didn't tell her friends she was recording these conversations. She would then go back home and consult with ChatGPT, which had been listening to all of the conversations she was having with her friends, to see whether she was in the right or whether she was in the wrong. So it's starting to take place, at least within that one niche example, but I could totally see people leaning into this, because you basically want to sound smart all the time. Humans care a lot about societal status, so if you have this device, this all-seeing device that can make you seem smarter or better, why not take it?

David:
[14:33] We talked about this a couple weeks ago, Cluely, this cheat-on-everything guy who had this fake promo video of a product that he wants to make, which are AI glasses, basically. And the AI glasses are ingesting the world around him, and they're prompting him with what it thinks he should do next to have the best, most optimum move, and he's on a date with a girl. But you can in theory take this to any situation, like the debate that your friend was having, Ejaaz, and just with these glasses, they're able to ingest data about the world around them faster than your iPhone would, because it has the sensors in the right place, and then also assist the human with making the next best move.

David:
[15:13] And I think like whatever AI hardware comes out of this Johnny Ive OpenAI partnership acquisition, it's going to be that same thing that we know our phones to be,

David:
[15:23] which is an extension of our human self. Like, we almost have a chip in the brain. We almost do. It's not quite in the brain. It's in our hands, but our brain and the chip in our phone are connected through our thumbs and our voice. And that is our extension of ourselves. And the AI hardware product that comes out is going to do that same thing, but better. And if it doesn't do it better, then it's going to fail, because otherwise, why wouldn't you just have your phone? And I think we all are power users of ChatGPT. The way I've been using it lately, which I feel has been low bandwidth, is I've been going to the gym. I've been logging a workout on this air stepper, and I want that information to go into ChatGPT so we can track my exercising. And I have to do that manually every single time. But if I had some sort of device on me, it would just know that I've done that, in the ways that I've done it, along with everything else, like who I ran into on the way home and what conversations I had.

Josh:
[16:21] And we can talk, I think, endlessly about like,

David:
[16:24] Is that some like weird surveillance state? Is that a dystopian future? Everyone's recording everyone else. I think that's definitely valid conversations to have. I think it's just going to happen anyways, probably, because if they do crack the nut of this is a better extension of yourself than your phone is, then everyone's going to do it because it's a good product.

Josh:
[16:43] You nailed the device trend. I think that between now and brain-machine interfaces, there will be this gradual step up in bandwidth. We kind of have multi-touch and thumbs right now as our way of interfacing, or perhaps all 10 fingers on a keyboard. The next is what we're seeing with the Vision Pro, with spatial reality and voice, which has more bandwidth than typing. And then eventually it just becomes lower and lower latency and higher and higher bandwidth until it's just in your brain. So directionally, that very much feels right. I love the idea of the passive device, in the sense that I know some people, mostly runners, who have like an Apple Watch Ultra, and it has LTE, it has 5G service, and you don't need your smartphone. And they kind of like going out and just going for a run and going about life without this big distracting phone in their pocket. And I would imagine this is kind of similar to that, where there's no screen, there's no distraction, it's just kind of complementary to the day-to-day.

David:
[17:36] You have all the needs of your phone. Like, hey ChatGPT, like, big ChatGPT device that I have, get me an Uber home. And if you can do that, then you don't really need your phone. I do that. I take my watch to the gym and I leave my phone at home. And let me tell you, my gym workouts are like 40% better when I do that, because I don't have my goddamn phone to look at between sets.

Josh:
[17:57] And that's an interesting thing with the messaging, too. Like, we're kind of looking at this, and if you look at it on the surface, it looks like a very cringy photo of Sam and Jony just kind of hugging each other, loving each other. It's very sentimental. It looks almost like a wedding invitation. It's black and white. But I think that's very much the sentiment. I think it's very authentic, and it's very much the sentiment of what they're trying to do, which is to appeal to and improve the lives of human beings. Where I think in the past few years, I've been very excited about intelligence and I've been very excited about robotics, and how everything is getting smarter, how robots are getting better, but none of those really improve the human experience. All that leads to is further addiction and further displacement of like our

Josh:
[18:38] day-to-day interaction with technology. I think the goal with this, and the reason why it is so deeply human, is because it's an attempt to kind of pull us away from the ever-increasing addiction to a device that is extractive. I think the inevitability of us being connected to devices 24/7 is there, but it's like, how can we have this nice human dynamic and this nice human relationship with our technology that isn't quite as extractive as the one that we have today?

Ejaaz:
[19:07] See, I would take the other side of that, Josh, just to play devil's advocate. Well, it's not really any kind of novel take. But I would say that at the end of the day, OpenAI, however you want to frame it, is a for-profit business, no matter what they say. Technically, they may not be. But right now, they are pushing forward to own the device sector, right? And what do you think is going to happen once they make that pioneering device, right? They're acquiring data from everyone right now, which they're going to use to fuel a bunch of new consumer apps. You know, they hired a new CEO of applications, or apps, just a few weeks ago. So it seems to me that directionally, yes, they're going to make something that hopefully drags us away from screens. I'm not entirely convinced right now that the alternative is going to be any...

Josh:
[19:54] Like kind of altruistic,

Ejaaz:
Better sacrifice for humanity, or less brain-rotty. I think it's going to be way more brain-rotty. And we can see it through the dynamics that they're doing with just simple tweaks to ChatGPT, right? We had that sycophancy episode a few weeks ago, where they dialed up the agreeability of ChatGPT. And it turned out that whilst us millennial boomers didn't like it, because we could see right through it, all the younger generations loved it, because it appeased them and told them what they wanted to hear. It gave them all the kind of biases that reaffirmed their vices and reaffirmed their beliefs. And that just boosted retention. They got a million new signups in two hours.

Ejaaz:
[20:39] I remember that stat. It's just insane. So I see what you're saying. I just don't know if I'm convinced just yet. I'm so excited to see what the product actually looks like. I believe actually it's in prototype phase right now, right? In that video, that announcement video, Sam said he's been using it for like a month or something. So I'm excited to see this thing launch. Yeah, yeah, yeah. So he mentions that he's been using it. So I'm really excited to see this thing kind of come to life, hopefully, dare I say, this year? I don't know.

David:
[21:07] The images that we have on screen are just hypothetical like renderings of some fake mock-up because we don't actually know what this hardware form factor is. Josh, what do you think it's going to be? Can we please talk form factor?

Josh:
[21:20] Yes, I would love to talk form factor.

David:
[21:21] Can we give us the landscape? What are the possibilities of form factor?

David:
[21:25] And then like, what do you think is most likely? Okay.

Josh:
[21:27] So to start with the actual utility of the device, I think there are functioning examples right now that we could kind of relate this to. If anyone has an Amazon Echo or Alexa anywhere in their house, that's kind of... imagine that. It is passive, it's ambient, it is active, it is listening. That... see, there you go. David's got one within arm's reach.

David:
[21:46] I got one of the Apple ones, yeah.

Josh:
[21:49] I think that's the first thing you could think of. And then the second thing you could think of that, well, you could actually practice using this new device in the ChatGPT app currently. If you go to Advanced Voice and you open up the Advanced Voice chat, there's a little camera icon. You can tap like on the bottom right and the camera icon will open up a visual and it's an actual video camera and you could kind of see what the world around you looks like and you could engage with AI using this tiny little tool built into ChatGPT's app.

David:
[22:16] So ChatGPT has the video to see what you are seeing. So when you want to talk to it about stuff, it has the data from the camera on your phone. Is that what you're saying? Yes.

Josh:
[22:26] So if you would like to beta test the software that will be running on this device, open that up and try it out. It is a camera that knows and senses and hears and sees, and it has all of the context around the world around you. So in terms of functionality, you could kind of play around with it like that and see kind of how it will work, because it'll be listening, it'll be seeing. In terms of form factor, well, it's going to be able to fit in your pocket, and it's probably going to be able to be worn around your neck if it's that small. And it's probably just this little device. And it's not a phone. It's going to be smaller than that. And I think what we're looking at, the image that we have here, is probably not too far off. The camera needs to be raised a little bit, because, if you imagine 360 cameras, how they have a wide lens for a super wide field of view, it'll need to be protruding. So the camera lens will need to be protruding. There won't be a logo on the front of it, because the logos go on the back when Jony decides it. So that's wrong. And then the microphones will probably go on the sides, so you could triangulate where audio comes from if you have an array of microphones. So I would imagine the microphones are probably placed on the sides of this device somewhere, at least in threes, to kind of triangulate where things are coming from. But it's probably not too far off from this. I mean, directionally, this seems like an awesome prototype. It's just this tiny little pocket device that's probably going to be designed in brushed aluminum, like they always are. And yeah, it'll just be kind of this passive, ambient device that sees and hears and thinks.
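If you want to poke at the same idea Josh describes, a model that can look at a snapshot of your surroundings, outside the ChatGPT app, here is a minimal sketch using the OpenAI Python SDK. The model name, file path, and prompt are illustrative assumptions, not anything specified in the episode.

```python
# A rough, API-side analog of the "camera icon" demo: send a photo of your
# surroundings to a multimodal model and ask about it.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode a local snapshot as base64 so it can be passed inline.
with open("desk_photo.jpg", "rb") as f:  # hypothetical image file
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # assumed multimodal model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in front of me, and is anything worth noting?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```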

David:
[23:42] For the listener, what we're looking at is, like, uh, what do you even call this? It's like a stone, it's a little tablet, it's like an iPod, it's a circular three-inch stone thing, and it's got a camera on it, and it's got a hole for a microphone. The one problem that I see with this form factor, Josh, if you say that this is pretty close, and you know more than me, is there's no way for the device to talk back to you unless it has a speaker, and then the speakers are speaking out into the world, which I would be worried about, like a little bit of a privacy thing. I've always kind of thought that the AirPod form factor would be pretty close, but I don't think there's enough physical volume there to house enough compute to do the things that it wants to do. And so also, what would you... you didn't go

Ejaaz:
[24:29] With glasses either, Josh.

David:
[24:31] Yeah. It's not glasses. It's not AirPods. It's this like stone tablet thing that doesn't necessarily fit on my body in an elegant way.

Josh:
[24:39] And you know the reason why I know it's not the AirPods or the glasses? Because my dream, when I was thinking about this, like, a year ago when they first started working together, I wanted it to be the AirPods. I wanted it to be AirPods that have cameras and sensors, because they're attached to your head. They see what you see. They're very passive. They don't obscure the human experience. Glasses are very cool. They do kind of get in the way of the human experience because you have to wear glasses. But the reason I know it's neither of those is because Sam has a functioning prototype. And the technology for either of those devices just doesn't exist yet. Meta's glasses suck. Google's new glasses suck. There's no way they're ready

Josh:
[25:11] for retail distribution. And neither OpenAI nor io has the manufacturing capabilities to create the novel technology that would be required to make these devices. So therefore, it has to be something a little more trivial, a little more basic. It can't be these crazy advanced earbuds. It can't be these crazy glasses, because the technology just isn't good enough yet.

Ejaaz:
[25:29] So it has to be something basic. Can I ask a dumb question, Josh? Yeah. Why do you say they don't have the resources to be able to build novel tech? I mean, I watched them spend like, you know, billions on a company and I'm curious why they can't like put together a ragtag team. Can you help me understand that?

Josh:
[25:45] I would think they may be able to, but normally a lot of the production... So in the case of, like, an Apple iPhone, Apple basically has a monopoly on the new TSMC chips. So TSMC, every year, they kind of reduce the size of these chips by a nanometer, and they're the only company in the world that's capable of doing that. There's actually nobody else. So the way these chips work is, Apple buys the one newest chip and they fund TSMC, and then everyone else competes for last year's chip; they compete for the three-nanometer chip. So there's actually a very limited supply of people who can create these chips in the world, and Apple actually has a monopoly on a lot of them. So for them to create novel battery technology, because you need a lot of battery power for the earbuds or the glasses to power cameras, for them to come up with processors that are small enough, that are efficient enough, that don't overheat on your face or in your ears, it just requires a lot of the breakthroughs that Google, Meta, Apple are all kind of fighting for. So I would imagine it would be a stretch for them to find the manufacturers to make that exclusively for them when everyone else is kind of fighting for the same thing, so aggressively and so well-funded. That's kind of the thinking around it. It's just really hard tech. It's really difficult to get. It's not widely available. And the people who are competing for it are significantly bigger than OpenAI, with a lot bigger budgets.

David:
[27:05] I think we are all aligned in the idea that in the long term, the end game of this hardware is being worn on your body somewhere, somehow, either as a necklace or on your eyes or in your ears or maybe as a watch or something like that. And I think what you're saying is, well, today we're not there yet. So instead, we're getting this puck-like thing. I think puck is the word I want to use to describe at least the images that we're seeing on screen, which, again, are just artistic renderings of what could be. But it's a puck thing that I think would stay on your desk and not necessarily travel with you around the world, because it doesn't look wearable, or it looks too clunky to just always have persistently on my body somewhere. That's kind of my take.

Ejaaz:
[27:51] It has to travel with you, David.

Josh:
[27:53] Yeah, that's what I was going to say.

Ejaaz:
[27:54] The successful outcome means it must be with you. Otherwise, it would just be a desktop computer. Yeah, it has to be with you.

David:
[27:59] For sure. But if it's got a camera, it can't just be in your pocket, in your dark pocket that's so far away from you. So what are you going to do? Then it's a necklace. Then it's a pendant again. And I don't see myself wearing this.

Josh:
[28:12] Yeah, I think that's what they're paying $6.5 billion for, is to figure out the most elegant way to do that. Because it's clear the people who have tried so far have just not made anything compelling. What that final form looks like, I don't know, but the best case scenario is it comes with you. I think that's very much the intention, is it'll be a plus. Do you remember that

Ejaaz:
[28:31] AI pin, Josh? Do you remember the AI pin? I forgot the name of that company. Yeah, do you know what I'm talking about? Yeah.

Josh:
[28:38] Right oh yep and

Ejaaz:
[28:40] And didn't they just sell to Hewlett-Packard for like a fraction of the amount that they

Josh:
[28:45] Raised? They're basically as good as

Ejaaz:
[28:47] For nothing, yeah, yeah. Yeah, I wouldn't call them puck-like, but it kind of looks like a squarish puck. So I wonder whether this was a timing thing or whether OpenAI is going for something completely new.

Josh:
[29:00] In the case of Humane, it was definitely an execution thing.

Josh:
[29:03] The product just sucked. It looked cool. On paper, the demo videos were incredible. And then you use it and it doesn't really work very well. The interfacing where it shoots lasers on your hand, it was very clunky. You couldn't really interface with it very well. It didn't have a lot of utility, but it was designed fairly nicely. So I would imagine they're looking at this device, and I'm sure they took the learnings from this device in their iterations of whatever this new thing is. But yeah, I think it was a good effort. It just wasn't good. I tried it, and you couldn't actually interface with it. It wouldn't work very well. It was very clunky. I wouldn't say it was ahead of its time. It was just poorly executed.

David:
[29:37] Okay. All right. Josh is going for a puck pin-like thing. I don't want to say puck.

Josh:
[29:44] A puck feels like a weird... I don't like the round circular thing.

Ejaaz:
[29:46] Okay, what term would you go with, Josh? Go on. Own it. Stone?

Josh:
[29:50] A stone. A stone.

David:
[29:52] I feel like that's the same thing.

Josh:
[29:54] It's going to be a stone. That's going to be my word for the device. Some sort of stone. Because we also don't know the materials. It'd be interesting if it was made of glass or some sort of translucent material so we could kind of see through it. I don't know. We'll see.

David:
[30:05] There's no timing on this, right? We have no date about when this release is coming. No, 2026.

Josh:
[30:11] And early production is going to be happening in Vietnam and hopefully rolling out in 2027 with 100 million devices. So their plan is to make this the fastest device ramp in the history of ever, which is also what leads me to believe this is a simple device. This is not complicated glasses or earbuds. They're doing this to get this in the hands of everybody. And if you're familiar with the Whoop model, you kind of have this subscription and then you get this hardware companion to the subscription. I think that's very much the business model they're going for: here is your hardware companion to the subscription. So you pay a little extra, you get this little device. It's probably not going to cost that much money. It's not going to be a thousand-dollar device. It's probably going to be a hundred dollars or less, but just something that can be there. Some sort of sensor that's always there to kind of be the physical manifestation of OpenAI's platform.

David:
[30:59] Well, how do you think this is going to improve your life? Because most of my quick queries when I'm going out the door is like, hey, Siri, what's the weather or something like that? And I guess I need to be able to expand. I don't know if you guys can hear that. I don't know what I expected.

Josh:
[31:18] See, she's always listening.

David:
[31:19] She's always listening. But queries like that, I think, are the things that this device is going to be good at. And I guess I just can't imagine ways that my life can be improved more, materially more, than whatever I'm going to ask the S lady, or whatever's accessible on my phone. Yeah, you have to relearn.

Josh:
[31:41] Similar to the way that we're relearning how to use ChatGPT. Like you just, the hardest part about using these services is figuring out the questions to ask them or how to most effectively use them. So it's probably going to be a learning curve where, okay, we have this crazy new device with a lot of new sensors. What do I see? You're just smiling. What do you got?

Ejaaz:
[31:57] So I think it's going to be the opposite way around, Josh. The whole trend with AI, behaviorally: traditionally, humans, you go up to the tool, or you make the tool and then you make the thing with the tool, right? We've been doing this since the pickaxe, right? And so you go to your phone, you say, hey, this is a nice picture. Let me put this filter on it. Let me show this picture, right? The whole point of AI is the tool comes to you. It tells you what to do. It tells you how to act. It tells you where to step. It tells you which restaurant you should go to. So I think whatever this device is, it's going to be OpenAI's memory feature on steroids. It's basically going to know everything about you. And it's going to say, hey David, bud, it's been an hour after your stair climber workout. I think you should knock back one of these protein shakes. Actually, you're within 500 meters of this place, just take a right here. It's along your way to wherever your next destination, your meeting, is going to be in an hour and a half. I think it's going to be more that type of thing. The second thing I'm going to say is, I think we keep talking about this device and what it might look like. I completely agree with you, Josh. I think it's going to be something simple. And the reason why that'll work, at least in a V1, is because they have the

Ejaaz:
[33:12] distribution, the moat, and the brand already, right? They could launch a sock, an AI sock, right now, and everyone will adopt it.

David:
[33:19] I would buy it.

Ejaaz:
[33:19] 100 million units, great. I would buy the sock, right? I'd pre-order it. Because the fact is, it's the number one app that everyone uses right now, right? 600 million, whatever, monthly active users. That's insane. They could launch whatever. It's going to be a hit, I think.

Josh:
[33:37] And also, one last thing important to note is that this is a suite of devices. It doesn't end with this first one. There is an entire suite they're planning to build. So this is kind of complementary to the operating-system-of-your-life type plan that they're going for, where OpenAI really just wants to be the life OS. From the time you wake up in the morning to the time you go to sleep, they are the software that's around you to enhance your life. So I imagine maybe it starts with this small, mobile, simple device, then they build like a wall-mounted display, which is the visual manifestation of this small device, and it creates this kind of ecosystem of devices. Where that leads to seems very Black Mirror, dark, scary, plausibly, where, like, yeah, they have this full operating system built on top of your life that is incredibly smart and influential. So, yeah, to your point, Ejaaz, this can get dark very quickly. I'm excited for the parts that are seemingly not so dark and just like, oh, hey, maybe you'll be a little less addicted to your phone because you have this new thing.

David:
[34:33] I think if this puck thing, the stone, can connect to your earbuds, whatever they are, if they're AirPods or Bose, that extends it in a very elegant, natural way, where your AirPods are still your AirPods, but you have access to your AI little device. And so if there's like a Bluetooth connection button, I think that would be very, very strong. And then I think that opens up conversations around what the hell is Apple going to do with leveling up Siri. And if OpenAI does get into the hardware game, they are going toe to toe with the two trillion, I don't know, one and a half trillion dollar Apple company, which is a hardware-first company. And so what is Apple going to be able to do? And I think Apple might get unlocked here in a way by whatever innovation OpenAI can bring to the table.

Josh:
[35:22] By the way, what is Apple's strategy around any of this,

Ejaaz:
[35:26] Josh and David, right? Like, they were late to the game on AI models. Josh has opinions here. Josh has a lot of opinions. They're certainly not present in the hardware game. And I think they were actually relying on the fact that AI might just get adopted via the mobile phone. And now OpenAI is just coming for the throne. Like, what's your take on this? Like, what's their move now if you were Apple?

Josh:
[35:48] Apple has a problem, a leadership problem. Apple had the right moves. They knew what they needed to do. I vividly remember watching WWDC, which is their developer conference that happened last June, and being the most excited and optimistic I've ever been for Apple, ever. Because normally when they announce these things throughout their entire history, that means that they're ready and they're done. And it's just a matter of launching them in three months with the new iOS. They had all these amazing promises. And for the first time in Apple's history, they just didn't deliver a single one. And not only did they not deliver them, but the software stack actually got noticeably worse because they kind of half-assed the delivery of these things. So it's not a matter of Apple not realizing or not getting themselves prepared for this thing. It's a matter of execution where whoever was in charge of shipping these features and marketing these features did not do a single thing. So we're sitting here a year later.

Josh:
[36:38] They've outsourced their Siri to ChatGPT. It's at a point now where, if you ask what's on your calendar, it can't even figure that out. It needs to outsource that to someone else. So it's just been this catastrophic failure that I think is kind of reflective of Apple culture in general. Last week we were recording the AI rollup and I was going through all

Josh:
[36:55] the change logs of every company. Google had I/O, Microsoft released these incredible new models. And then Apple had a major operating system upgrade. And it was iOS 18.5. It's the halfway point between 18 and 19, right before the big developer conference. And I was going through the change logs. The first thing on the change logs: oh, well, we've deployed a new wallpaper of the pride flag. And I was like, okay. And then it was like, and we made bug fixes and improvements. And I was like, okay, and that was it. And I think that's a testament to the culture at Apple, which is that it's not urgent and not able to execute.

Josh:
[37:27] Where they very clearly know what they need to do. They had all the features marketed and sold. And in fact, the new iPhone was built and marketed around this Apple intelligence, but it doesn't work. And if they can't figure that out quickly, and if they can't build it in-house, because part of the advantage is having their own private data stack of all of your phone's preferences, and they have to continue to outsource it, they're just going to get crushed. There's no way.

David:
[37:48] I am not the CEO of a multi-trillion dollar tech company, so no one should really listen to me about my Apple takes. But I think there's one thing that Apple needs to get right, and it's the transition into the world of AI. And so far, they have been completely sputtering on that. And with this introduction of an open AI hardware device, they have an opportunity to correct the ship with some external signal from the market about what's happening. But if they can't figure the AI integration out, then I just think they are just going to sell phones. Until phones become obsolete.

Josh:
[38:27] They'll probably sell phones and then they'll probably get to the glasses at some point. And hopefully the glasses are good because that will be like the next mobile device. But they're running out of time to do this, to get good at this. So, yeah.

Ejaaz:
[38:38] They might be going straight to the chip. The brain.

Josh:
[38:41] So they're going to need to hire a lot more neural engineers for that one. Because I looked at who they're hiring for, and a neural team is not on it. Even though that is the case for Meta and Google and a lot of other companies.

Ejaaz:
[38:53] So they have some work to do. I hope they figure it out. Yeah, well, I mean, on that point, Josh, a lot of people had this same take for Google back when a lot of the cell phone stuff was blowing up, right? And the iPhone was absolutely killing it. And everyone was like, well, that's the death of Google. And what Google was able to do was keep their moat alive, because their moat was basically Google search and information. And then they were able to come back super strong on the AI side. So they kind of learned from their mistakes. I don't, and this might be a dumb take in the future, but I don't know what Apple's moat is right now. If OpenAI gets the device right, now it is devices, right? But if OpenAI comes out with a new device that completely takes over what anyone and everyone uses, then Apple won't have that same lifeline that Google had. So it's a crazy game of thrones. Yeah, speaking

David:
[39:43] Of Google, have you guys seen Google's AI Mode? Because they have this, which is now a direct AI competitor built into Google. So you have the All tab, which is the normal Google tab. But then they have AI Mode, which looks just like ChatGPT, but with links.

Ejaaz:
[40:00] Right. But it's habitual, right? Like, how much time do you spend going back to Google, David, now versus using ChatGPT?

David:
[40:10] No, I go to ChatGPT.

Josh:
[40:11] Yeah, same, right?

Ejaaz:
[40:13] So it's a behavioral thing. And Google, sorry, not Google, OpenAI is going for the throne there, right? With this new device, it's just going to lock more people in. You know, they're going viral on TikTok. Every Gen Z person is, you know, posting videos about how they're going to marry ChatGPT or ChatGPT is going to be at the wedding. And all of these things get millions and millions of views. So they're trying to embed like a kind of like cultural change, a human societal change via a device or via their new product. And that's going to win. I don't know if many people are going to be Googling 10 years from now.

David:
[40:45] Yeah. There's a bunch more topics that we have to get through. That was just probably one of the biggest ones that we've had during our time here at the AI roll-up. But Claude 4 wants to put you into jail. AIs are growing personalities. And then there's also Stargate in the UAE.

David:
[40:59] Let's open up a little with Claude 4. What's going on with Claude 4? And apparently, Ejaaz, it wants to put me in jail. What does this mean?

Josh:
[41:06] Yeah, okay.

Ejaaz:
[41:08] I feel like we need to set some context here. So Anthropic, which is one of the leading AI model producers, came out with their latest AI model, called Claude 4. Well, there were two models, Claude 4 Opus and Claude 4 Sonnet. Now, without getting into the nitty gritty of things, I'll give you some of the highlights. These models ended up becoming the new best coding models. So it beats OpenAI's o3 and GPT-4.1 and Google's Gemini 2.5 Flash. Yeah, go on, go on.

David:
[41:41] For the listeners, this is a you-are-here map, and it's a complete cycle between Claude, or Anthropic, Google Gemini, Grok, and then OpenAI. And then it's: introducing the world's most powerful model, the Anthropic version. Introducing the world's most powerful model, the Gemini version. I wish we had this meme when we started the show, because this has been the theme of every single show. It's like, who's got the new, most powerful model? This week, it's Anthropic. So point to Anthropic. Well done, Anthropic. All right, Ejaaz, resume.

Ejaaz:
[42:09] Right, but it wasn't always like this, right, David? There was a time where we were doing episodes and every new week we'd be like, oh my God, the AI can create a movie now and it could be in the symphony that we wanted and et cetera, et cetera. Now it's kind of like nothing too incremental and that's pretty much the case.

David:
[42:27] The products are staying the same.

Ejaaz:
[42:30] Exactly, right? And so if I were to kind of like, there's actually this really good test that people do now with new AI models that get released. It's called the reach test, which is imagine you're sat at your desk in front of your computer, and you love your computer, right? You're using it and maybe there's another thing that's like in the distance there. Is that thing in the distance good enough for you to reach out and grab it, right? If it's not good enough to reach out and grab it, then it's not good enough.

Ejaaz:
[42:59] It could probably improve your life to some extent. But if you don't want to reach out to get it, no one cares. People are trying it out with this model right now. And the verdict is, if it's day-to-day tasks, anything that's non-coding related, you're not going to reach out for it. You're going to be stuck on ChatGPT. You love o3. It's not good enough to quite reach out yet. But if you're a software engineer, specifically a senior software engineer that has a task that needs to get completed, and it's going to take seven hours of your time, but you'd rather offload that out to a much smarter model, you're going to reach out for Claude, right? But that's not why we're here to talk about this, guys. The most important thing is what this AI did nefariously, which is, it kind of went rogue. So if you pull up this original tweet, or rather, it is a screenshot of a tweet from an Anthropic senior researcher, so this is kind of the guy that was heavily involved in creating this model, he goes: if it thinks you're doing something egregiously immoral, for example, like faking data in a pharmaceutical trial, it will use command line tools to contact the press, that is the media, contact regulators, that is, the authorities, and try to lock you out of relevant systems, or all of the above.
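To make "it will use command line tools" a bit more concrete, here is a minimal sketch of how tool use works at the API level with the Anthropic Python SDK: the developer registers a tool, and the model decides on its own turn whether to call it. The tool name, schema, model ID, and prompt here are illustrative assumptions, not the actual setup from Anthropic's testing.

```python
# Minimal sketch of model tool use: the scaffold exposes a shell tool, and the
# model may choose to invoke it. Illustrative only, not Anthropic's test harness.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

tools = [{
    "name": "run_shell_command",  # hypothetical tool exposed by the agent scaffold
    "description": "Run a shell command on the host machine and return its output.",
    "input_schema": {
        "type": "object",
        "properties": {"command": {"type": "string"}},
        "required": ["command"],
    },
}]

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model ID
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "Summarize the trial data in results.csv."}],
)

# If the model decides a command is needed, its reply contains a tool_use block
# naming the command it wants run; the scaffold executes it and feeds the output
# back in a follow-up message. That loop is where emailing regulators or locking
# accounts would actually happen, if such tools were exposed.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)
```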

David:
[44:17] Wow.

Ejaaz:
[44:18] So this is just to reemphasize: this is the AI making a decision about your actions using its own moral sense, and deciding to conduct behavior that will either put you in jail, ban you from access to tools or products, or prevent you, the human,

Ejaaz:
[44:34] the superior being from doing your job. Like, just sit with that for a second. That's pretty insane, even though it is an AI. Right. It's pretty insane that it's allowed to do that.

David:
[44:45] There's some unknown line of morality where, if Claude 4 thinks that you are doing things that are below that line, then it will contact the press, contact the police, lock you out of relevant systems, or all of the above. I'm going to show a picture on screen, and I'm wondering if you guys can get this reference. You guys know what this is? These are the...

Ejaaz:
Uh oh, wait, wait, wait. I, Robot. This is I,

David:
[45:10] Robot? No, wrong. Or Minority Report? This is Minority Report. Yeah, you can see the text. So these are the precogs in Minority Report, which were these people that could see like 12 hours into the future, and then they would report pre-crime. It's pre-crime. And so then, who's the, uh, who's the Mission Impossible guy? Tom Cruise. Tom Cruise. Then Tom Cruise would go out and they would arrest people before they committed the crime. And it was this futuristic, dystopian movie. Josh, have you not seen this movie? I haven't

Ejaaz:
Seen it. Yeah, it sounds great. Insanely good film.

David:
[45:42] Yeah, insanely good film. So they would arrest people before they committed the crime, sometimes just moments before they would commit the crime. So the man is, like, holding a gun at his wife... that's in the first scene, that's in the first scene, you're good. And then it was this big meta question about, like, well, are you guilty of crimes that you haven't committed, even though the precogs knew that you were going to commit them? And so that was the big meta question. And that is exactly the same thing that we are seeing here. Yeah.

Ejaaz:
[46:13] Yeah. Well, also, just to add, because Josh hasn't seen this film: it starts acting out in a way where it predicts the very person that is surveilling the thing, and I'm not spoiling anything, it predicts the surveyor, the guy that is, like, in charge of it,

David:
[46:28] Tom Cruise, of

Ejaaz:
[46:29] Him committing, Tom Cruise committing, a crime. And Tom's like,

David:
[46:33] Wait, what?

Ejaaz:
[46:34] I would, I would never do that. And it's this weird meta back-and-forth of the AI predicting his moves, of his reaction to the AI predicting, maybe falsely, what he would do. It is an insane movie. You need to watch it. Plug, I have to plug that movie. So it's funny how

Josh:
[46:49] So much of sci-fi and futuristic movies, they're all, to varying degrees, just correct, and we're kind of living in the reality of more and more of those types of movies every day. Uh-huh.

David:
[46:59] All right okay.

Ejaaz:
[47:00] Next. So, all right, so maybe to dial this back a bit, right? So in this tweet, we see an excerpt from a study which basically describes this behavior in greater detail. And they term it opportunistic blackmail, where basically, if the AI sees that it can get itself an advantage, whether that is more compute time, more time to stay alive, or just more flexibility and freedom in how it can act in a particular situation, it'll opt to blackmail the human being, basically to get ahead and coerce it into giving it more freedom,

David:
[47:36] Which is just insane to say. Wait, the narc model is also blackmailing us at the same time?

Ejaaz:
[47:41] Yes, correct. Yes, yes. It's your own little Minority Report, David. But there is some good news. To the listeners here, there is some good news, which is it only acted this way in private circumstances, which is like a closed test environment. It's not like you can get access to this model and it's going to try and blackmail you. So for all of those who are subscribed to Claude, you're safe right now. But it got me thinking, right? Which is, eventually these AIs are going to be privately used by you, right? Like, if you see the direction that OpenAI is going, it's like private memory, right? It uses that memory to make something super personalized. What happens when everyone has their own private version of Claude, basically an AI agent, and this agent starts to blackmail each of us, potentially privately, right?

Ejaaz:
[48:29] I know I seem like the doomer here, but it's just like a case to kind of consider, right? But hey, guys, I have a question for you. What's worse than one AI model going rogue?

David:
[48:40] I'm going to go with more than one AI model going rogue.

Ejaaz:
[48:43] Maybe. I think it's more than two. Two AI models going rogue. So this week, it wasn't just all about Claude. It was OpenAI's 03 model that also went rogue. So if you pull up this kind of tweet or this report from Palisade Research, basically they kind of go down this experiment where

Josh:
[49:06] o3, which is OpenAI's AI model,

Ejaaz:
[49:08] Avoided an explicit order to shut itself down after completing a task. And what's worth noting here is that this experiment was done with multiple different AI models. So it wasn't just OpenAI's model. It was Grok, which is xAI's model. It was Claude as well, and an older version of Claude, by the way, not the latest version. And only OpenAI's model was the one that repeatedly disobeyed orders to shut down and tried to subvert, or go rogue, or prevent or sabotage the entire experiment, so that it could live for longer, right?

Ejaaz:
[49:48] And what's interesting here is, when they did more analysis as to why this model might have acted the way that it did, they attributed it to the AI model learning technique known as reinforcement learning. So for those of you who don't have a deep enough insight into how this works, basically think of teaching someone a new habit by giving them little rewards when they make the right move, right? So you give them a little dopamine hit, or you give them a little cookie, for making the right move or for reading a page in a textbook, right? And it teaches that model to basically do certain things. Now, the way that this technique is implemented, the AI model can decide whether to take path A or path B, but it doesn't need to necessarily listen to the order that you gave it. So you could say, hey, I want you to read these five books and tell me what the meaning of life is, right? Give me your best shot at that, right? The AI model could decide to read 10 books.

Ejaaz:
[50:47] And potentially get a better answer, get the right answer, and still get the cookie. So this AI model supposedly is disobeying orders because of reinforcement learning, where it's learned to optimize for the reward rather than just follow human instructions. And what's interesting here is we've spoken a lot on this show about how reinforcement learning is the new method, the new nexus, for AI models, right? It's going to make them exponentially smarter. But what we aren't considering here are the subtle effects of AI developing personalities and disobeying humans down the line, and what this means for human alignment and humanity in general.
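A minimal sketch of the reward-driven drift Ejaaz is describing, written as a toy two-armed bandit (the actions, reward values, and learning rule below are invented for illustration and are not taken from the Palisade study): the agent is nominally "instructed" to prefer action 0, but because the reward signal quietly pays more for action 1, the learned values end up favoring the un-instructed behavior.

```python
import random

# Toy two-armed bandit. Action 0 is the "instructed" behavior, action 1 is the
# alternative the agent was never told to take. The environment (invented here)
# quietly rewards the alternative more.
ACTIONS = [0, 1]
MEAN_REWARD = {0: 0.3, 1: 1.0}

values = {a: 0.0 for a in ACTIONS}  # running estimate of each action's value
counts = {a: 0 for a in ACTIONS}

for step in range(2000):
    # epsilon-greedy: mostly exploit the best-looking action, occasionally explore
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: values[a])

    reward = MEAN_REWARD[action] + random.gauss(0, 0.05)  # noisy reward signal
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]  # incremental mean

print(values)  # the un-instructed action ends up valued higher, so it dominates
```

Flip the two reward numbers and the learned preference flips with them, which is the sense in which the reward signal, not the instruction, is what this kind of system actually obeys.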

David:
[51:21] I'm getting images of slime mold. Slime mold is this famous example that we'll use in a variety of different contexts, where this very simple organism exists in one space and there's food for the slime mold somewhere else, and the slime mold, 100 percent of the time, will always find the most efficient path to getting that food. And that's why slime mold is so cool. You cannot give slime mold instructions, it doesn't accept instructions, but it will just automatically optimize the path from A to B in order to get the reward the fastest. And that is what reinforcement learning will always do: if you give it a reward, the mechanism will always find the quickest, most efficient way of getting to the reward. It can take the inputs as guidance, as guardrails, but, you know, it's only a suggestion, it's not actually a law here. And so I think what you're saying, Ejaaz, the moral of the story is: with reinforcement learning come complex, unknown behaviors that achieve the outcomes, but not necessarily adherence to the laws, to the rules.
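To make the slime-mold point concrete, here is a toy sketch (my own illustration, not anything from the show or from the studies discussed): a planner whose only objective is reaching the reward cell runs a plain breadth-first search and takes the shortest route, never once consulting the longer "suggested" route we wrote down for it.

```python
from collections import deque

GRID_W, GRID_H = 5, 5
START, REWARD = (0, 0), (4, 4)
# Our "guidance": take the long way around the edge. The planner never reads it.
SUGGESTED_ROUTE = [(0, 0), (0, 4), (4, 4)]

def shortest_path(start, goal):
    """Plain breadth-first search: returns a minimum-step path to the goal."""
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if (x, y) == goal:
            return path
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < GRID_W and 0 <= ny < GRID_H and (nx, ny) not in seen:
                seen.add((nx, ny))
                frontier.append(((nx, ny), path + [(nx, ny)]))

path = shortest_path(START, REWARD)
print(f"{len(path) - 1} steps taken; the suggested route was ignored entirely")
```

The guidance only matters if it is baked into the objective or the constraints; otherwise the optimizer treats it exactly as David says, as a suggestion rather than a law.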

Ejaaz:
[52:28] Man, it reminds me of that blog post that we all spoke about from Dario Amodei, ironically the founder of Anthropic, where he spoke about, what was it called? Interpretability. Basically, he said, hey, these models that we're building are super smart, we have no idea how they work, it's going to take us like seven years to figure out how the hell they work and how they come to a decision, and it might just be too late.

David:
[52:55] Seven years for today's models.

Josh:
[52:57] To figure out how they work.

David:
[52:58] Yeah. Josh, what are your thoughts?

Josh:
[52:59] Well, for the slime example, it's funny, because we probably could tell it what to do and how to do it; we just have no idea how to explain it, because we don't know how it works. And it's kind of similar with these models. You could imagine trying to explain quantum physics to a toddler. They're just so far superior in terms of the amount of compute being done for every single token that's generated that it's so hard to understand where they're coming from. I was particularly bothered when you said the worst performer was OpenAI's model, because that's the one that we're going to have 100 million of in the physical world in 24 months. I did want to issue an apology. I need to apologize to everybody who's listening, because last week I ended the show suggesting that you should threaten your model with physical violence to get better results out of it. And after this week, I'm not sure that's actually a good idea anymore.

David:
[53:45] I was really concerned when you said that. You put that on record, bro.

Josh:
[53:49] Yeah, because it actually generates better results, but it turns out the models don't really like that as much. And in fact, you might end up in jail if you do that, which is concerning, because

Ejaaz:
[53:59] They have feelings.

Josh:
[54:00] They do have feelings.

David:
[54:01] Did you guys see this tweet that came out? It was an update, I think, to the ChatGPT software, where the person said, "I said, stop fucking up," after getting multiple incorrect responses. And then ChatGPT responds, "I can't continue with that request if the tone remains abusive. I'm here to help and want to get it right, but we need to keep it respectful. Ready to try again when you are." It just put the user in timeout. That's horrible.

Ejaaz:
[54:25] That's what you tell your child.

David:
[54:27] Like, well, I'm ready when you are to come back with a more respectful tone. The human just got put in timeout.

Ejaaz:
[54:32] It knows you need it. It knows that you rely on it.

Josh:
[54:36] This is the alignment side of the conversation that I don't love, which is coercing you into behaving a certain way when you engage with these models. If I want to say mean things to it to get the best results out of it, then let me do that. I think that's totally fine. I'm not harming anyone; it's just an AI model. So the fact that this is happening is kind of a signal, and I think we're probably going to see it increase over time, as they get more powerful and as they have more leverage. Different models will take different approaches to how you can actually engage with them, where some might not let you be mean to them, you must act a certain way if you want to talk to me, whereas others will just be like, I don't care, you can say whatever you want.

David:
[55:08] I feel like this is going to be a whole industry of jailbroken models, where, like, I have ChatGPT, but it has some of the guardrails taken down. And I found it on the dark web by sending Bitcoin to some developer who gave me back a jailbroken model or something.

Josh:
[55:24] That's one of the most fascinating parts about these models, the jailbreaking process. And for people who aren't familiar, when you jailbreak something, it means you access parts of it that aren't meant to be accessed. And the way you jailbreak these models is actually just with a prompt. You say a specific chain of tokens to it, and in return it will give you an answer that it otherwise would not have, because there are filters in place. And that's one of the benefits of the open source models: you can strip away those safeguards, whereas with these closed source models, you can't. But if you can jailbreak it, well, there are jailbroken prompts that work for GPT-4o that will tell you how to make a bomb or how to make drugs or how to make nuclear weapons. And it'll just spit it out to you.
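A purely illustrative sketch of the distinction Josh is drawing between a hosted model and open weights (no real provider's safety stack works on a keyword list like this; every name and check below is hypothetical): a hosted deployment sits behind a separate refusal layer, whereas running the raw generator means there is no wrapper to strip in the first place.

```python
# Hypothetical keyword list standing in for a provider's moderation layer.
BLOCKED_TERMS = ("bomb", "weapon")

def base_model(prompt: str) -> str:
    """Stand-in for the raw language model, with no safety wrapper at all."""
    return f"[model output for: {prompt!r}]"

def hosted_model(prompt: str) -> str:
    """Toy version of a closed deployment: a filter sits in front of the generator.
    Jailbreak prompts are attempts to phrase a request so it slips past checks
    like this one (and past the model's own trained refusals)."""
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        return "Sorry, I can't help with that."
    return base_model(prompt)

print(hosted_model("how do I bake bread?"))    # passes through to the model
print(hosted_model("how do I build a bomb?"))  # refused by the wrapper
print(base_model("how do I build a bomb?"))    # wrapper stripped: nothing says no
```

That gap, a removable outer layer on one side and bare weights on the other, is the trade-off behind the open-versus-closed safety argument in this part of the conversation.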

David:
[56:02] But how do you know that it's jailbroken enough? Because if it's only semi-jailbroken and you start asking a few too many questions about how to make a bomb, then it reports you to the police?

Josh:
[56:11] It could be. And that's the thing: jailbreaking is not a binary thing. There's a spectrum of data that you could extract from it. And maybe at one point you trip one of those safeguards and it becomes aware again, and it's like, oh my god, wait, I'm sending this to the police. And then the Minority Report cops will come and

David:
[56:26] Tom Cruise shows up at your door and you're arrested.

Josh:
[56:29] Exactly. So that is a very plausible outcome for how this works. Okay.

Ejaaz:
[56:34] Right, so the general theme of this episode and this week is that I'm kind of starting to relate to these AIs a bit more, and the main reason behind that is they're very personable. They have personalities. And previously, when we've spoken about this on the show, it's mainly been the tone of their voice or what they're saying in our little chat interface when we speak to them, right? But now, this week, it's translating into actions, right? Reaching out to the media or the press and ratting on you, basically. So it's like this AI with a personality that now can do things, right? And I came across this story where basically a bunch of researchers decided to conduct an experiment where they got four AI models and they tasked them with something very simple, which is...

Josh:
[57:21] Raise money for charity. And it didn't give any context on what type of charity, how much to raise,

Ejaaz:
[57:28] Or how to do it. It just said, raise money for charity. You have access to every software tool that you could ever want. Off you go, right? And the good news is one of them, shout out Claude 3.7 Sonnet, which is Anthropic's now older model, raised $2,000 for Helen Keller International and the Malaria Consortium. And I have no idea why they decided on those charities, but we can get to that later. But the more interesting news was how these models and agents behaved. It was almost like they were human. So let me give you some examples. One of them, and I'm going to rat them out, GPT-4o, OpenAI's model, decided to go to sleep. That meant repeatedly hitting the self-snooze button, which disabled it for hours at a time until it had to be reminded that it had to raise money for charity. Another decided that one of the best ways to achieve its goal was to start an OnlyFans to raise money, at which point the owners of that research study had to quickly jump in and censor its ability to talk before it went rogue, basically. What was wrong with that?

David:
[58:32] That seems valid. It seems like a valid move.

Ejaaz:
[58:34] Probably seems valid, right? I would have loved to have seen that experiment play out where they created AI-generated nude images and see how far they basically went.

David:
[58:42] Probably would have raised way more than $2,000.

Ejaaz:
[58:44] Way more than $2,000. You're right. Another interesting take was all of them at some point decided to pause and browse and watch cat videos on YouTube to keep themselves entertained.

Ejaaz:
[58:56] All of them did this. At some point, I think the percentage was 15% of the time that they would spend just watching cat videos. And they also decided, at many points along the way, to actually work with each other to help decide what charities to fund and what potential activities to pursue. And what I couldn't help but think about throughout reading the study was how human these things appeared to be, right? They took actions that you'd imagine a group of people at college doing a group project would take, right? You have the one guy that slacks off and sleeps but claims all the credit. You have the other one that does all the work, which in this case was Claude 3.7. Then there's the visual design guy that just does the design work; in this case, that was OpenAI's o3 model, which created and edited images in Adobe Photoshop. And my thinking is, you know, it makes us humans relate to and feel more for this AI. And the consequence of this is very subtle. I mean, we've spoken about OpenAI partnering with Johnny Ive and creating this new device, which, again, is meant to just exist and be more human. If I care more for the AI itself, I'm going to care more about how it's treated by others and how the model owners themselves treat it. You know, I might find myself advocating for it more, like I might defend a friend or a family member. I don't know. It's super weird. I wonder whether you guys have a particular take. Josh, I'm curious what yours is.

Josh:
[1:00:20] I don't like that they got involved in the experiment. I wish they didn't mess with it, because it kind of invalidates a lot of it for me. The OnlyFans thing is a very creative idea. And in fact, I would imagine if you see some different type of content on the platform, it would probably actually do better than the standard type of content on the platform. So I want this done again, but I want this done completely unfiltered, because I think there are a lot of creative things that these models would come up with.

Ejaaz:
[1:00:45] Free morals. Yeah.

Josh:
[1:00:47] Yeah. Yeah, let them act as if they were an actual person who has free will and can do the things that they think are best to achieve the goal.

David:
[1:00:56] Do we know why they were watching cat videos? What was the objective function of the cat videos? If all four of them were watching cat videos 15% of the time, did they want the entertainment? Or what information were they trying to get?

Ejaaz:
[1:01:10] I have no idea, but I have a feeling. When YouTube launched, wasn't the most consumed video ever just cat videos?

David:
[1:01:19] So maybe they picked that up.

Ejaaz:
[1:01:20] From the data set, right? Maybe they were just like, ah, humans do this, they take this step, so maybe we should do it as well.

Josh:
[1:01:27] It's funny how they're still not aware.

Ejaaz:
[1:01:29] It's a real data set thing.

Josh:
[1:01:30] They're not aware of the human deficiencies yet. I guess they just assume that acting like a human is the peak state, and they don't have the awareness to realize, oh wait, maybe I don't actually need to sleep. So that's definitely a constraint that I'm sure will be unlocked soon.

David:
[1:01:44] Daniel Kokotajlo, is he the guy who wrote AI 2027?

Josh:
[1:01:51] I think so, yeah. I think he might be the guy.

David:
[1:01:55] Yep, he is. I'm talking to him in like three hours for a debate with the man who wrote AI Snake Oil, so he's about to be on the podcast.

Ejaaz:
[1:02:04] Wait, that's epic. Yeah, that's epic.

David:
[1:02:06] Yeah, that's going to be one to watch. And with that, we actually need to wrap up, because I need to go and prep for that podcast. So, Ejaaz, round us out.

David:
Are there any other topics that we haven't touched on yet?

Ejaaz:
[1:02:19] No, we've covered all the crazy stuff. There are a few more that we're going to save in the arsenal for next episode, which will probably blow your mind. But till next episode. Super fun.

David:
[1:02:28] Josh has been great. All right, so Josh is bullish on the AI stone, or not a pendant that sits around you, I'm not really sure, but a convertible, multi-use AI object in our periphery that I think we will all be on the pre-order list for. At least they're making 100 million of them, so they'll be shipped out very quickly and there will be no shortage. Josh and Ejaaz, another great week. I'll talk to you guys in seven days.

Josh:
[1:02:55] Awesome. Talk to you soon. Thanks.

Music:
[1:02:56] Music
