The Good and Bad of AI Coding: Amazon Shuts Down, Autoresearch, Claude Code Review, Lovable
Ejaaz:
Last week, Amazon's entire platform crashed for six hours. No one could shop,
Ejaaz:
buy anything, they couldn't even see prices.
Ejaaz:
The reason was because a junior developer had submitted an AI-generated piece
Ejaaz:
of code which crashed the entire platform, and it cost them millions and millions of dollars.
Ejaaz:
Now, Anthropic, the creators of Claude Code, which is what Amazon was using
Ejaaz:
to create AI-generated code,
Ejaaz:
also had a similar issue where their entire platform has been suffering from
Ejaaz:
outages this entire week.
Ejaaz:
They actually also released a new product called Code Review, which uses AI to help fix the code problems that their own model is creating.
Ejaaz:
It's all getting incredibly complex right now. And Amazon tried to hide the
Ejaaz:
entire thing from a Financial Times reporter. It's all pretty crazy.
Ejaaz:
And it forces us to confront the new narrative, which is that AI-generated code isn't going anywhere.
Ejaaz:
Demand is at an all-time high, but have we hit a wall? Has it become too dangerous to use AI to code?
Josh:
Yeah, and there's this interesting phenomenon happening as these AI coding abilities in general become better. Currently they account for, what, about 4% of total GitHub commits. The expectation is that by the end of the year, they will account for 20% of total GitHub commits.
Josh:
And there is this increasing reliance on these AI tools, but that creates these
Josh:
key choke points and points of failure that have a significant effect.
Josh:
I mean, this Amazon outage was a huge deal, and it was caused by a single person.
Josh:
And then within Claude Code, they've been having a lot of downtime.
Josh:
And Anthropic, I mean, we were trying to use their programs today and they were down. The servers weren't working quite right. So there's a lot of these growing pains that are happening.
Josh:
And it seems like we're running into more issues faster than people can mitigate them.
Josh:
So that's where I suspect this Claude Code checking agent comes in that we're
Josh:
going to cover later. But this Amazon story was pretty fascinating.
Josh:
Like this cost Amazon billions of dollars.
Ejaaz:
Yeah, so let me walk you through the timeline for this, because this actually
Ejaaz:
isn't the first outage Amazon's experienced because of an AI-generated piece of code.
Ejaaz:
So back in early December, actually November of 2025, they took a really hard-line
Ejaaz:
approach and a new policy was invoked, which was...
Ejaaz:
I want 80% of all Amazon's code to be AI-generated.
Ejaaz:
And this was their goal to be achieved by the end of 2026.
Ejaaz:
Now, this is a complete flip for a company that I think employs like hundreds of thousands of engineers and has wanted them all to kind of handwrite, or hand-type, the code.
Ejaaz:
So this is a pretty aggressive flip. Amazon's been laying off tens of thousands
Ejaaz:
of people. So this kind of is in trend with what they wanted to do.
Ejaaz:
In the middle of December 2025, they experienced their first outage.
Ejaaz:
It was 13 hours back then.
Ejaaz:
So this is all adding up, by the way; we're talking about tens of millions of dollars. Then in late 2025, at the end of December, there was another outage.
Ejaaz:
And then we have the outages that we're speaking about today.
Ejaaz:
And the issue that this flags is that although AI is a really useful tool to generate code, and it finds bugs, it actually might create more problems than you'd expect.
Ejaaz:
Because the issue that they're seeing is junior developers coming in who don't understand Amazon's code base, just letting AI run autonomously, figuring out what they want to create, and then submitting it without actually reviewing and understanding it. And if this goes unguarded, it results in issues like this.
Josh:
Yeah. And there's a problem that's starting to happen now where agents are creating
Josh:
code far faster than humans can keep up and check it.
Josh:
So it becomes this impossibility where if you want to move at the velocity that
Josh:
AI enables, you are simply unable to keep up with the change log of what's happening.
Josh:
So you have to defer some sort of trust level to this AI, to its ability to
Josh:
check itself, to run tests, to verify that it actually works.
Josh:
And in some cases, it doesn't. I mean, I'm sure some people listening to this
Josh:
experienced the outage.
Josh:
And it wasn't just an AWS outage. This is Amazon, the actual storefront. My dad actually texted me.
Josh:
He was like, am I doing something wrong? Can you place this order for me?
Josh:
Because it's not going through.
Josh:
And then I went to check myself and the whole service was down. I went on Twitter.
Josh:
I saw Casey Neistat was posting a bunch of photos about this. It was this big deal, because anyone who was trying to order anything from Amazon found the entire storefront just totally offline. So AWS, the web services, runs a significant percentage of the internet; Amazon, the storefront, runs a significant percentage of e-commerce. And these things have been going down at an increasing rate due to tools like Claude Code. So they're moving much faster, but they're breaking things. This is like the early Zuck Facebook mantra, move fast and break things; they're now taking it to the extreme. And as we become increasingly reliant on these tools, we might see this start to, I mean, permeate even further than just Amazon.
Ejaaz:
Well, the weirdest part about this is that AI kind of has moved from this assistive
Ejaaz:
tool to now being the foundational bedrock for a lot of these different like
Ejaaz:
services that you've just mentioned. Like AWS runs the entire internet.
Ejaaz:
Every single business that you interact with online probably uses AWS on the
Ejaaz:
back end. So if they go down, then your business goes down as well.
Ejaaz:
That's where we get all these outages.
Ejaaz:
Amazon had some pretty severe repercussions. They have now done a complete 180
Ejaaz:
on their policy of generating 80% of their code via AI.
Ejaaz:
And they've now said, if you're a junior developer or engineer working at Amazon,
Ejaaz:
you now need to get your manager's permission to submit code.
Ejaaz:
So this is so tough, because you've now gone from trying to automate the entire thing to now making it so much worse. You're using AI to kind of generate the code, but then you still have to rely on a human constraint, and there are fewer managers than there are junior developers. So the pipeline for shipping changes just becomes really bogged down.
Ejaaz:
So I think this is going to slow down Amazon massively.
Ejaaz:
But if it's for the result of, you know, saving your company tens of millions
Ejaaz:
of dollars, fair play. But yeah, it's a pretty hard-line approach.
Josh:
And this all seems kind of connected to the trends that we've been seeing,
Josh:
right? Particularly around jobs and distribution.
Josh:
It's like a lot of people are cutting jobs, which means they're increasing the
Josh:
reliance on these AI tools. But if the result is that the actual output of these AI tools is causing a lot of damage, it creates this tension. And it seems like the AI labs themselves, Anthropic, OpenAI,
Josh:
they cannot hire talented people fast enough.
Josh:
They are looking to hire every engineer under the sun who is competent and capable
Josh:
of building great products.
Josh:
But a lot of other companies are deferring that workload to these AI labs,
Josh:
which is starting to create this interesting dynamic where the job market is suffering, but the actual productive output of these companies is not, to an extent.
Josh:
And that's operating under the assumption that these code tools will get better.
Josh:
But if Amazon and a lot of other companies start putting in these thresholds
Josh:
that, again, bottleneck and throttle that amount of AI production capability,
Josh:
like that probably doesn't help the cause a whole lot.
Ejaaz:
There's also the angle that AWS, or just data centers in general, are becoming a really valuable, national-security-threat-level asset, right?
Ejaaz:
So if we look at the current Iran conflict that's being waged between the US
Ejaaz:
and Iran, Iran actually targeted strikes specifically at Amazon's AWS data centers out in the UAE and Bahrain.
Ejaaz:
And this caused a bunch of the outages as well. So it's not even just like AI coding level threats.
Ejaaz:
This is becoming a geopolitical weapon at this point.
Josh:
It makes sense. Every single story we tell, like this Game of Thrones narrative
Josh:
that's kind of existed throughout the history of Limitless,
Josh:
it's just elevating itself higher and higher to the global stage,
Josh:
where now it's impossible to have any sort of conflict or
Josh:
large decision without AI being in the middle of it. Like, when you think about this conflict, what are the bigger things? It's this Anthropic deal with the Pentagon; they're now striking the AWS data centers. It's coming for the infrastructure that's building these tools that are so powerful. But this isn't just happening on the worldwide stage; it's happening on the consumer and commercial level too. You were mentioning something about Lovable and how much they've grown recently. Could you just explain Lovable for the people who aren't familiar, and how big they've grown recently? It's crazy.
Ejaaz:
Okay, so typically the people who code, this might shock some of you,
Ejaaz:
are the people who have learned to code.
Ejaaz:
It's very technically focused work. But there are people, like you and I and a bunch of our friends, that don't necessarily know how to code but still want to do it.
Ejaaz:
And that's what resulted in Vibe coding, right?
Ejaaz:
Now, Claude Code and products like Cursor are still very oriented for technical folks.
Ejaaz:
A platform like Lovable is for people who have zero experience coding,
Ejaaz:
but still want to code things. So the Lovable platform is actually a really
Ejaaz:
cool platform where you just type whatever you want.
Ejaaz:
Like I want to create an app that tracks my fitness reps or whatever that might
Ejaaz:
be or create or suggest cooking recipes for my type of cuisine.
Ejaaz:
And it does so in a couple of minutes.
Ejaaz:
And it's really intuitive. It makes it super easy.
Ejaaz:
And they have the additional perk of being able to deploy it live as a website
Ejaaz:
or an app that you can publish on the app store super easily.
Ejaaz:
So they have this end-to-end experience.
Ejaaz:
Now, Lovable, when it started, had like a rocket-type trajectory.
Ejaaz:
Within the first 12 months, I think they hit $200 million ARR,
Ejaaz:
which is just like an insane growth. All of these AI companies are blowing up.
Ejaaz:
Now, in the last month alone, they've added an extra $100 million of ARR. They were at $300 million; now they're at $400 million. So that's just over 30% growth.
Ejaaz:
Just insane amount of demand. And that's the point I want to make.
Ejaaz:
Like when I look at these Amazon outages and Amazon restricting their developers
Ejaaz:
from using Vibe Coding to improve their product, I don't think this is necessarily
Ejaaz:
a wave that they can stop.
Ejaaz:
So they should stop trying to implement policies that are restricting people
Ejaaz:
from using it and instead try to figure out what is a better way to implement
Ejaaz:
AI-generated code because this thing's not going away.
Ejaaz:
Lovable, Cursor, and Claude Code are all skyrocketing, and we're not going to stop that.
Josh:
And I think a lot of the alternatives, I mean, aside from Claude Code, right, are Lovable and Cursor. They're kind of model agnostic; they'll use whatever model you plug in.
Josh:
So I was looking at Polymarket earlier today, because they have markets for which one of these models is actually going to be the best by a given time.
Josh:
And this is interesting because if you looked at these charts towards the middle
Josh:
of January, you'd notice that Anthropic was kind of the favorite.
Josh:
Anthropic was looking like they were going to have the best model.
Josh:
But the reality now is that, by March 31st, there's a very clear divide that has emerged over the last six to eight weeks, with OpenAI having an 85% chance of having the best model.
Josh:
Does that mean there's going to be something new or is that just the current model? We don't know.
Josh:
Currently they're at 5.4, which is fantastic. I think everyone has kind of unanimously decided that it's the best for coding.
Josh:
But then there's also a fun follow-on market here about Claude going down, because it appears as if Claude is really just not doing as well as it needs to be.
Josh:
It's not as stable. We were trying to use it this morning. It wasn't working.
Josh:
They're clearly experiencing growing pains because they have all these new users
Josh:
and they're having to throttle reasoning.
Josh:
They're having to limit the amount of searches that people are able to use.
Josh:
I'm sure the researchers who use these to train the models are not happy.
Josh:
And it looks like, based on Polymarket, we'll be seeing somewhere between five and nine more outages in
Ejaaz:
The month of March. Yeah, basically every single day this month.
Josh:
Yeah, like it seems like they're having problems. And this is proof.
Josh:
I mean, you can actually bet on the problems that they're having and make some
Josh:
money on it, right? This is what, $26,000 of volume.
Josh:
So thank you, Polymarket, for sponsoring this segment. And I guess now we can
Josh:
get onto the solution, which is what Anthropic is starting to roll out.
Josh:
But they're doing it in a way that's a little less than desirable.
Ejaaz:
Yeah. So the main instigator of this entire AI coding debacle is Anthropic.
Ejaaz:
They created Claude Code.
Ejaaz:
Amazon owns 21% of Anthropic, which is just crazy to say out loud.
Ejaaz:
And so they use Anthropic's model, Claude Code, to do all their AI code generation.
Ejaaz:
But Anthropic decided, hmm, instead of trying to make our AI coding models better,
Ejaaz:
let's just release a new product called code review, which will review your AI generated code.
Ejaaz:
So you now have Claude generating the AI code and then also reviewing the code
Ejaaz:
as a separate product that you pay for.
Ejaaz:
And it's all for the beautiful price of, I think it's like 15 to 20 bucks. Where is it? Yeah.
Ejaaz:
15 to 20 bucks per review, which some might say is quite expensive.
Josh:
And they scale based on the pull request complexity. So that number can get higher.
Josh:
And what I find funny is they're selling the problem, where they're selling
Josh:
you a model, you use it to code, it's going to create problems.
Josh:
And then they're selling you a separate package for the solution where,
Josh:
oh, you have problems in our code.
Josh:
Well, here for another $25, $50, whatever it may be, we'll actually do a code
Josh:
review and we'll fix the problems that we've created.
Josh:
And that creates this uniquely misaligned incentive, where there's no real reason for Claude to want to fix problems when they could just sell this package on top to remove them.
Josh:
And this comes in the face of ChatGPT and OpenAI doing the polar opposite, where they're actually offering these for free. And here's Rohan from OpenAI: if you want AI code review but don't want to pay $25 per review, check out Codex Review. It leverages frontier Codex models, finds complex issues, and is 100% usage-based; most runs should cost one dollar or less. So here we have another head-to-head collision happening, where ChatGPT and OpenAI are going head-to-head with Claude Code. And it seems like Claude Code is the favorite for a lot of people, but I've noticed a lot of the more technical people on my timeline who are building pretty hardcore things have been really relying on 5.4 and using Codex much more.
Ejaaz:
Yeah, so to set the scene: OpenAI and Anthropic are now neck and neck at building the best coding models.
Ejaaz:
And Codex 5.4 from OpenAI has taken the lead. A lot of engineers are now saying
Ejaaz:
it's way better than Opus 4.6, as you just mentioned.
Ejaaz:
In terms of the security code review products that each company offers, OpenAI's is distinctly cheaper.
Ejaaz:
If you have a $20 a month subscription, you now get access to this code review
Ejaaz:
tool for no extra cost. So it's automatically a better tool to use.
Ejaaz:
Now, in terms of quality, in terms of how good it is at code review,
Ejaaz:
we don't know because they haven't made any of their datasets public.
Ejaaz:
Anthropic has, and it's pretty damn good.
Ejaaz:
Also, if you compare Anthropic's product to traditional SaaS vendors that have
Ejaaz:
these kind of AppSec security tools, it is still way cheaper.
Ejaaz:
A lot of people were in my comments, actually, because I tweeted about this saying it was cheaper. They were like, no, it's not, I could use this SaaS tool. And I'm like, yeah, but you're not factoring in the engineering hours; they spend like 30 to 60 minutes per bug, and that adds up to about $100.
Ejaaz:
So it's still technically cheaper, but I would rather use OpenAI's tool,
Ejaaz:
which is just way better.
Ejaaz:
Now, Anthropic itself is facing so many outages for Claude. You mentioned earlier that, you know, Claude was down for you this morning. It was also down for me. And I think this is because OpenAI basically
Ejaaz:
has all the money in compute to subsidize this cost for a tool like this,
Ejaaz:
whereas Anthropic is struggling.
Ejaaz:
They're adding a million users to their general user base every single day.
Ejaaz:
And they're not able to subsidize people's compute anymore. So they're just
Ejaaz:
shutting people's access off to it. And it's kind of frustrating to see, to be honest.
Josh:
Yeah, and the way you have to think about this is each one of these companies
Josh:
has a limited amount of compute.
Josh:
And that compute takes care of everything. It has to serve the customers.
Josh:
It has to serve the developers, the researchers.
Josh:
And a lot of times when you're developing these new technologies or you're training
Josh:
new models, the training run consumes a tremendous amount of GPU power.
Josh:
The researchers want a lot of GPUs to run tests and run trials on,
Josh:
but you still have to find extra resources to serve all of the users who are querying this model.
Josh:
Every single day. And I was recently listening to Dario on, I think,
Josh:
the Dwarkesh podcast, where he was talking about how they think about how many
Josh:
GPUs to order into the future.
Josh:
Because they go to Jensen, they go to NVIDIA, they say, hi, Jensen,
Josh:
we would like to place this amount of GPU purchases for this year.
Josh:
And what he was mentioning that I found interesting is that they can't over-order, because if they're off by just a small percentage, the incremental cost of those GPUs will far outweigh the revenue that they get from growth.
Josh:
And what it sounded like he did is just take the current growth and map it into the future, to hedge themselves against a collapse from the debt they would have from over-ordering.
Josh:
But the reality is that since that episode was recorded, they have now gone fully vertical. That curve has steepened significantly more, and they're going to have to figure out a way to solve this, because this is not an easy thing. These orders get placed years in advance.
Josh:
They're projecting for a certain curve and they're getting a different one.
Josh:
So is this a short-term thing?
Josh:
Is this going to be durable? I don't know, but it is noteworthy that Anthropic is having some growing pains.
Ejaaz:
Okay, so we have two problems here. We have an amazing AI model that can code,
Ejaaz:
and that is inevitably going to be the future. Lovable's demand proves that.
Ejaaz:
But then we're also running into the issue where we have all these security
Ejaaz:
flaws and we could lose tens to hundreds of millions of dollars.
Ejaaz:
Amazon demonstrated that, right? So we're at this conundrum.
Ejaaz:
How on earth do we solve this? Do you know what the solution might be, Josh?
Ejaaz:
It might be AI. It might be AI itself.
Josh:
That checks out.
Ejaaz:
You have to use AI to solve both of these individual AI problems, I'm afraid.
Ejaaz:
So, Andrej Karpathy, the godfather of AI, as I like to call him, released this really cool experiment that has just completely blown up,
Ejaaz:
which hints at what the future solution might be to solving this problem that
Ejaaz:
we just explained on this episode, which is called auto-research.
Ejaaz:
Now, the best way to think about this is he created an AI model that acts as an AI researcher.
Ejaaz:
But the coolest part about it is it autonomously does research.
Ejaaz:
Now, typically, when you use an LLM, you need to prompt it. And then it says,
Ejaaz:
I don't really know what this means.
Ejaaz:
Can you tell me what it means? And you have to keep working back and forth.
Ejaaz:
This thing runs completely on its own overnight, and in some cases, for days at a time.
Ejaaz:
He's done this overnight when he slept for like 9 to 12 hours,
Ejaaz:
and then he's done it again for two to three days.
Ejaaz:
And how the AI model works is he sets an objective. He says,
Ejaaz:
I want you to try and improve this part of your model.
Ejaaz:
So he's talking to the AI model and saying, you need to improve yourself in this particular way.
Ejaaz:
Go away and figure it out. The model then runs an experiment every five minutes.
Ejaaz:
If the experiment comes out with an improvement, it caches it in. It improves its own model weights.
Ejaaz:
It improves its own internal insights, right?
Ejaaz:
If it doesn't, it discards it
Ejaaz:
and it tries again. And it runs these experiments on and on and on and on.
Ejaaz:
In this experiment, I think he ran around 150 experiments overnight.
Ejaaz:
And it improved itself pretty marginally, as is demonstrated by this graph.
Ejaaz:
The loss curve just went crashing down.
Ejaaz:
So it suggests that maybe Anthropic and OpenAI, instead of having to rely on releasing new tools every day, constrained by humans, could just let the AI improve itself, and everyone's hunky-dory, maybe.
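The loop Ejaaz walks through, set an objective, run an experiment, keep improvements, and discard failures, can be sketched in a few lines. This is a hypothetical toy, not Karpathy's actual harness: `run_experiment` and the parameter list here are invented stand-ins that score a noisy synthetic objective instead of training a real model.

```python
import random

def run_experiment(params):
    # Hypothetical stand-in for "train a model variant and measure its loss."
    # The real system trains actual models; this just scores a noisy
    # quadratic objective so the sketch runs on a laptop.
    return sum((p - 0.5) ** 2 for p in params) + random.gauss(0, 0.01)

def auto_research(n_iters=150, n_params=4):
    """Propose a tweak, run an experiment, keep it only if the loss improves."""
    best = [random.random() for _ in range(n_params)]
    best_loss = run_experiment(best)
    for _ in range(n_iters):
        # Propose a small mutation of the current best configuration.
        candidate = [p + random.gauss(0, 0.1) for p in best]
        loss = run_experiment(candidate)
        if loss < best_loss:
            # Improvement: cache it in as the new baseline.
            best, best_loss = candidate, loss
        # Otherwise the candidate is discarded and the loop tries again.
    return best_loss

print(f"final loss after 150 experiments: {auto_research():.4f}")
```

Run unattended for enough iterations, the loss drifts down without any human in the loop, which is the whole point of the experiment described above.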
Josh:
This seems like an early form of this takeoff situation in which AIs become
Josh:
self-improving, because I haven't really seen this architecture before in such a
Josh:
open and easily accessible way. I mean, this is all open source. Andrej says here it's 630 lines of code. And traditionally, it's on the human to iterate on the prompt, to improve the prompt, to improve the training code, and to work together with the AI. But this actually does it completely and entirely on its own. It goes off, it runs its own experiments, it ingests those experiments, updates the values that it has, updates its view on the world, and then creates a new set of experiments, and runs this iterative process. And what we saw in that chart is that there were these incremental improvements that Andrej just didn't see. And again, it's hard to overstate how much of a powerhouse Andrej is in terms of AI engineering and understanding.
Ejaaz:
He has two decades of experience in this.
Josh:
And like, AI wasn't even around two decades ago; he was the guy who was building the damn thing. Yeah. Um, so for him to come along and, for the first time ever, experience this novel breakthrough, in which an AI is able to improve his code in ways that he didn't initially see. It's like when AI played Go for the first time, or when AI is playing chess and it makes these erratic moves: you would not have known the move was optimal, but the model is so precise, has so much context and so much intelligence, that it's actually able to make these optimizations that humans otherwise wouldn't have seen. And this is one of the earlier instances where even Andrej got humbled, and the code that he was running was significantly improved overnight by this self-recursive thing. And the most interesting thing about this is that it's open source. It's available to run on your laptop, on your PC, on a computer right now. And I have to imagine that AI labs are going to take this and run with it at scale; if you're training a model, why would you not? And this model, using this auto-research, can do better than Andrej. I'm sure it could do better than the average mid-tier prompt engineer at Anthropic or OpenAI, and actually improve the entirety of the company. So this feels like something that is starting from the bottom, in the sense that it's openly accessible and open source, and then it'll move its way up through everything and become this recursive, self-improving loop. It's incredible. I want to try it on something. We need a problem that's hard enough to try this on, where you can truly let it run for a series of days and then come back with an answer that's better than anything you could have ever imagined. Well, maybe research
Ejaaz:
On how to make Limitless the best and number one podcast in the world? Maybe. I'd be down to run this.
Josh:
We should try that. We should try that. We can make some assumptions on what it would say, right? It would say, well, we probably need to convince all of our listeners to go subscribe to the podcast on your favorite podcast player.
Josh:
Give it five stars, share it with all of their friends, and then make sure that
Josh:
they comment things they either like or don't like about it to keep the engagement
Josh:
rate high. Those things will help out the show.
Ejaaz:
That actually sounds pretty great. And if you're listening to this,
Ejaaz:
remember, you are no more than an organic LLM.
Ejaaz:
So if you want to process that prompt that Josh just specified,
Ejaaz:
please go do it. It helps us out a massive amount.
Ejaaz:
But just to round up and wrap up this episode, it's this weird conundrum.
Ejaaz:
AI is either zero or 100 at this point.
Ejaaz:
It's either deteriorating your product and causing massive outages,
Ejaaz:
or it's doing something like this and helping the world's leading expert in AI research actually do a better job. There's just no middle ground; it's either zero or 100.
Ejaaz:
And it's just super exciting to see where all of this is going.
Ejaaz:
I tweeted this out the other day.
Ejaaz:
My mind, I don't know about you, Josh, is just in a fog at this moment because
Ejaaz:
I can't keep up with everything that's going on.
Ejaaz:
Something's switched over the last 30 days where the exponential curve has just, like, ratcheted up.
Ejaaz:
because I'm in my own echo chamber or not, but it just feels like we are getting
Ejaaz:
new model releases every single week at this point. There's so many new Frontier AI labs.
Ejaaz:
Yann LeCun, who is the former head of AI research at Meta, has now started his
Ejaaz:
own thing and he's focusing on world models. And I'm like, what is going on here?
Ejaaz:
It's just, it's so much, but you're going to hear all about it on Limitless.
Ejaaz:
Josh and I are working 24-7 doing this. We're releasing four new episodes every week, up from three.
Ejaaz:
The viewership has been crazy. The comments on our last episode, where we talk about uploading a fruit fly's brain onto a laptop, a banger of an episode, go check that out, are like four times what we normally get. We're responding to every single one. We are the most dialed-in show on AI that you can possibly watch. Uh, please like, subscribe, leave us comments, give us feedback. And, Josh?
Josh:
And important to note, like, the four episodes is just a testament to how fast we're going, um, and just the need to cover more things. So again, you will not miss anything if you watch the show, and we will continue to scale to match the necessary demand to, like, sufficiently cover all of this. And that's what's happening. I mean, it's just downstream of the craziness. We are very much in this singularity event, in the vertical part of the curve, so enjoy it. Like, I'm just trying to take it all in, take it for what it is, get excited about it, share it with everyone. So yeah, thank you guys all for watching, as always, and we'll see you in the roundup tomorrow.