Anthropic Just Got Hacked by China. These are the New Front Lines.

Ejaaz:
China just got exposed for stealing our AI. In a new report from Anthropic,

Ejaaz:
three top Chinese AI labs were exposed for having 16 million fraudulent conversations

Ejaaz:
with Claude, with one specific goal: to steal its capabilities to train their own models.

Ejaaz:
Now, the week before, Google said the same thing about China attacking their Gemini models.

Ejaaz:
The week before that, OpenAI said the same thing. The top three American AI

Ejaaz:
labs are blaming China for trying to hack their own AI models.

Ejaaz:
But here's the twist in the story: what China's actually doing may not

Ejaaz:
be illegal in the first place.

Ejaaz:
In fact, this is something that every AI company is doing to get ahead in the AI race.

Ejaaz:
In this episode, we're going to explore what all these reports confirm and whether

Ejaaz:
distillation, the hacking vector, is actually a bad thing.

Josh:
Yeah, so it starts with this blog post that Anthropic published earlier this week, titled "Detecting and Preventing Distillation Attacks."

Josh:
And I guess maybe it's helpful to just kind of define distillation as a concept

Josh:
before we get into what they're accusing China of.

Josh:
And basically, the way it works is there is a teacher and a student model.

Josh:
So the teacher is the large model. That would be Anthropic's Claude Opus model. It's this huge model.

Josh:
They've spent hundreds of millions, billions of dollars training it and turning

Josh:
it into the model that we use every day.

Josh:
That model provides these high-quality outputs to the student, which is the smaller model that distills from it.

Josh:
So basically the smaller model, the distilled model learns to mimic the outputs

Josh:
of the larger model, but does so at a fraction of the cost because it's able

Josh:
to kind of cherry pick the types of outputs that it gets by prompting it very specifically.

Josh:
So anybody with sufficient access to a model and enough prompts can actually

Josh:
get enough information to emulate the large model with a much smaller data set.

Josh:
Now the outputs are not always as good as the large model, but they're significantly

Josh:
cheaper and oftentimes very close.

Josh:
So in the case that you get an extra breakthrough or two on top of that,

Josh:
you can build a pretty impressive model.

Josh:
And allegedly, that's what's happening with these models from China,

Josh:
at least according to Anthropic.
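
Since the episode leans on this teacher-student setup, here is a minimal sketch of what distillation looks like in code. This is an illustrative toy, not Anthropic's or DeepSeek's actual pipeline: the "teacher" is just a fixed linear classifier, the "student" is trained only on the teacher's softened outputs, and all the data is synthetic.

```python
import numpy as np

def softmax(z, T=1.0):
    """Softmax with temperature T; higher T gives softer targets."""
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# "Teacher": a fixed linear classifier standing in for the large model.
W_teacher = rng.normal(size=(4, 3))

# Synthetic "prompts": random feature vectors standing in for queries.
X = rng.normal(size=(512, 4))
teacher_probs = softmax(X @ W_teacher, T=2.0)  # softened teacher outputs

# "Student": trained only to match the teacher's soft labels -- no
# ground-truth labels are ever needed, which is why API access alone
# is enough to distill. (Real students are smaller networks; this one
# is kept the same shape purely for brevity.)
W_student = np.zeros((4, 3))
lr = 0.5
for _ in range(300):
    student_probs = softmax(X @ W_student, T=2.0)
    # Gradient of the cross-entropy between teacher and student outputs.
    grad = X.T @ (student_probs - teacher_probs) / len(X)
    W_student -= lr * grad

# Fraction of inputs where the student now picks the teacher's answer.
agreement = np.mean(
    softmax(X @ W_student).argmax(axis=1) == teacher_probs.argmax(axis=1)
)
```

After a few hundred steps the student agrees with the teacher on nearly all of these inputs, which is the whole point Josh makes: given enough queries, the outputs alone become a training set.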

Ejaaz:
At least that's what they say. But why is what you just described a good thing?

Ejaaz:
It's because spending hundreds of billions of dollars on building out the best AI model isn't sustainable for the long-term future.

Ejaaz:
In fact, if you want to have a model that's small enough to fit on your phone,

Ejaaz:
but nearly as intelligent as the top models, it needs to be distilled through

Ejaaz:
that process that you just explained.

Ejaaz:
So it's going from a big model to a smaller model that is just as intelligent

Ejaaz:
in certain specific ways.

Ejaaz:
The stats from this China hack on Anthropic, Josh, are kind of insane.

Ejaaz:
So I mentioned 16 million exchanges, but they spun up 24,000 fake Anthropic

Ejaaz:
accounts. Now, I have to specify this:

Ejaaz:
Anthropic does not allow Chinese users to access their models for the specific

Ejaaz:
reason that adversaries of the U.S. could get access to the superintelligence that they're building.

Ejaaz:
So DeepSeek, I'm going to name some names now. DeepSeek, one of the top AI labs

Ejaaz:
which caused the stock market to crash at the end of 2024, I believe, was responsible for 150,000 of those exchanges. Moonshot AI, 3.4 million.

Ejaaz:
Minimax, which is a favorite that you and I have spoken about on the show,

Ejaaz:
13 million of those 16 million exchanges.

Ejaaz:
There is an argument here that the open source gold rush that has been happening

Ejaaz:
in China was mainly because they were stealing US secrets.

Josh:
Wouldn't that be funny if that was the case? And then if that's also the case,

Josh:
then what do you do about it? I mean, Anthropic kind of came out and they were

Josh:
very upset about this, clearly.

Josh:
But at the end of the day, it's kind of on them. The onus is on them to protect their systems and prevent this from happening. There's a really great post that you have on screen, and it's a joke. It says, "My son asking me a lot of questions. It's a distillation attack, obviously." And I think it's kind of funny. The irony, and we can get into the hypocrisy of the whole thing, is that Anthropic as a company very much has done this in the past in order to get where they are, and they are kind of the one crying wolf now, saying, wait, we're getting attacked, this is not allowed, you should not be able to do this.

Ejaaz:
Yeah, I mean, Anthropic is doing this with their own models,

Ejaaz:
right? They've distilled Claude Opus into their Haiku model.

Ejaaz:
Google's distilled Gemini Ultra into Gemini Nano. This is a common practice.

Ejaaz:
So then the question becomes, which part of this is illegal?

Ejaaz:
What has China done that is illegal?

Ejaaz:
Well, it's two things that Anthropic has claimed. Number one,

Ejaaz:
they've got this fancy terms of service, which their lawyers, who have been paid millions of dollars, drafted up. It says, hey, if it's our model that you're doing this to, you can't do that. We've patented this thing. It's going to be illegal, and we're going to sue you in a court of law.

Ejaaz:
The issue is China is in China, and they don't abide by the US legal system at all.

Ejaaz:
Which brings me to the second thing that they violated, which is a geographical

Ejaaz:
restriction. They don't let anyone in that region access Claude.

Ejaaz:
So the fact that China has been able to pull this off against the top AI labs means

Ejaaz:
that they've illegally spun up accounts to do this.

Josh:
Well, you know who doesn't care about laws? China.

Josh:
Like they could not care less. In fact, this is the time for wartime CEOs.

Josh:
Like in very many ways, this is the largest war that's being fought between

Josh:
the US and China. And it's around AI.

Josh:
And I think for them to say, that's against our terms of service, this is wrong.

Josh:
Like that is not grounds for defending yourself, because clearly they have no regard for any sort of law.

Josh:
I mean, you'd look at Seedance 2.0 and how it violates every copyright law under the sun.

Josh:
And yet people don't care. It's the best video generation model in the world that exists.

Josh:
So it is a challenge to claim that because they're violating terms of service,

Josh:
this is an illegal thing that you shouldn't be able to do.

Josh:
And they've not just cut off China, but I think it's important to note they've

Josh:
also cut off other frontier AI labs.

Josh:
They famously had this beef with XAI recently where they cut off all of the

Josh:
Claude Code access to other labs.

Josh:
So Anthropic has been very controlled and closed down in who's actually able

Josh:
to access their models. And it sounds like someone was able to bypass that and

Josh:
they just got pretty upset about it.

Ejaaz:
Well, I mean, the Pentagon is relying on the likes of Anthropic, xAI, and OpenAI for the warfare effort against China, right?

Ejaaz:
To your point, we're in like a wartime position. Like these AI models are being

Ejaaz:
used as a geopolitical weapon.

Ejaaz:
And so whoever owns the best model can advance the quickest.

Ejaaz:
So it's like an economic dependency thing. And this whole drama with the Pentagon: the Pentagon has been using Claude for quite a lot of covert activity, including the recent capture of Nicolas Maduro, the former, I guess, president of Venezuela.

Ejaaz:
And the issue now is that Anthropic is restricting the Pentagon's access, like America's own self-defense against these kinds of things.

Ejaaz:
And so the Pentagon is getting fed up and issuing them an ultimatum and saying,

Ejaaz:
listen, if you don't figure this out, we're going to classify you as a threat to the country.

Ejaaz:
Now, I have to give credit to Anthropic for maintaining their identity consistently

Ejaaz:
across every single facet, but

Ejaaz:
I don't think it's the smart way to do it because at the end of the day,

Ejaaz:
there are going to be things that require more uncensored versions, and you just need to be comfortable with that fact.

Ejaaz:
Because to your point earlier, Josh, Claude and OpenAI's ChatGPT have become national assets. And so they need to be treated as such.

Josh:
Yeah, it's a matter of national security. And the thing that's unique to Anthropic, and I'm not sure about many other companies in the AI space, is their mission statement: if you talk to any employee who works at Anthropic, they'll tell you the purpose of the company is safety and alignment.

Josh:
And I think while it's a valiant effort and incredibly important, it doesn't really bode well for the current state of affairs, in which velocity, momentum, and just raw speed to get to the best model possible is actually beneficial. So I think what we're seeing here is these increasing conflicts, with, I mean, the Secretary of Defense and the Pentagon wanting access to do things that they deem to be a matter of national security, and, like, xAI wanting to go and build code using their tools. They're like, no, no, no, that's not how we want this used. We're not going to allow that. And then the rumor is that apparently the Pentagon actually just kicked out Anthropic, and now Grok and the xAI team is responsible for being the AI provider for the Pentagon. So I found that interesting too. It's just like a little side development.

Ejaaz:
Well, I mean, like what you're getting at there is that some of these AI models

Ejaaz:
or AI companies in America are kind of being super hypocritical.

Ejaaz:
Like this tweet actually explains it really well.

Ejaaz:
Hey, did you hear about the little like $1.5 billion lawsuit that Anthropic

Ejaaz:
had to pay out over pirating or illegally downloading 7 million books to train their own models?

Ejaaz:
OpenAI is facing similar lawsuits from newspapers, code repositories, and authors.

Ejaaz:
I'm pretty sure Anthropic got sued for using Reddit data to train their models.

Ejaaz:
Google trained their entire model on the indexed data that they took.

Ejaaz:
Now, the question then becomes, is that fair?

Ejaaz:
Who's paying the authors and creators of the content, when these AI labs have amassed hundreds of billions of dollars' worth of valuation? Who's paying those creators?

Ejaaz:
No one is, right? So you could argue that that is a form of distillation.

Ejaaz:
Now, obviously, that's looking at it in a very black-and-white way,

Ejaaz:
but I do think it's hypocritical.

Ejaaz:
And most importantly, the memes are just so, so good here.

Ejaaz:
You've got people that are asking Claude in Chinese, what model are you?

Ejaaz:
And it replies, hey, I'm DeepSeek.

Ejaaz:
And then you've got this one here where it says, I can't believe someone would

Ejaaz:
just steal from Anthropic like this.

Ejaaz:
Anthropic spent millions of man hours handwriting code, text,

Ejaaz:
art, and books. Obviously,

Ejaaz:
you know, tongue-in-cheek. This isn't actually real. The point that's being made is that all information is kind of taken, or stolen, or interpreted in some way, shape, or form. So what makes it any different for China in this regard?

Josh:
The crux of the argument is that the same foundation that Anthropic built its models on is the foundation that Chinese models are building on. It's just one level up, where they, maybe not stole content, but they clearly used the content that we've produced as humans over time to train their model.

Josh:
What DeepSeek is doing is the next layer up. It's taking the,

Josh:
I guess, the quantized version of all of the human intelligence that we've developed

Josh:
and then distilling that one layer up.

Josh:
It's easy to see why they would be upset, but it's also easy to see why everyone

Josh:
is kind of deeming them as hypocritical.

Josh:
It's like, again, you know that you are a nation-state actor, relative to the rest of the world, in one of the most important wars being fought. You know that you are going to be getting attacked.

Josh:
You know that these people are going to be coming for you to build their own

Josh:
models in the race for this AGI and beyond.

Josh:
And to think that it's not going to happen and to be upset when it does just seems wrong.

Josh:
And I think that's probably where a lot of the backlash is coming from is because,

Josh:
I mean, again, it's on them to solve for these issues before they happen or

Josh:
accept the consequences if they don't.

Josh:
And that's just what happens here. I mean, this is a bar fight. There are no rules in this fight. The only thing that you're trying to do is get to AGI as fast as possible. And clearly, China doesn't care.

Ejaaz:
Can I say something in China's defense? And maybe this is a hot take.

Ejaaz:
Their models be banging recently, okay?

Ejaaz:
Like they have been churning out new model updates from the likes of Alibaba with Qwen 3.5.

Ejaaz:
By the way, if you haven't tried this model out, apparently it's really amazing

Ejaaz:
with agents. It's absolutely crushed benchmarks. Once again, open source.

Ejaaz:
We've got Minimax AI that we mentioned earlier, which was the biggest perpetrator

Ejaaz:
of this distillation attack against Anthropic.

Ejaaz:
It's the most used model on OpenRouter.

Josh:
Also, what's interesting is that Minimax 2.5 is the most popular Chinese model for OpenClaw, too.

Josh:
And I personally used it when I was running into the OAuth issues with Claude, because they were kind of threatening again, they were threatening to ban users for using OAuth, for going around things. They're just the hardos, like they have no fun. But when they were threatening to, like, break people's accounts and ban them, I switched over to Minimax 2.5, and it actually worked very well, and it's a fraction of the cost. And I was like, hey, if you're going to push me away, I'm going to go to these models that get the job done for me. And Minimax was that one.

Ejaaz:
I have a question for you. Like, where are you geographically located right now? Are you in China?

Josh:
No, I'm certainly not.

Ejaaz:
Okay, so it looks like they're just giving you free access to do these things.

Ejaaz:
There are no geographical restrictions that they're kind of placing on you. They're just letting you do the thing.

Ejaaz:
It's awesome. Like all of these models are open source.

Ejaaz:
These are kind of the values that America should be propagating,

Ejaaz:
but they're not. They're playing the opposite.

Ejaaz:
They're playing kind of secretive and it's not working out in their favor.

Ejaaz:
I mean, you've got Minimax, all these latest Chinese labs. By the way, GLM-5, Kimi K2.5, and Minimax are crazy good at computer use and agentic tooling. Kimi K2.5 actually, for the OpenClaw fans out there, released a browser extension, and it's actually really good, because the major issue with using OpenClaw was that there were security issues. Well, they created a sandbox environment that you can now use it in. So they're innovating at scale. And as to the case that they might be stealing certain secrets, I don't think this is regarded as a hack or a stealing thing. I actually just think they're trying to get better models out to more people. And hey, if America can use it, it's hardly a geopolitical thing. So I don't know, I'm kind of in the defense of China here, and maybe that's a hot take. And then, kind of finally, I just want to,

Josh:
One thing on that China note: I'm not sure that it's out of the goodwill of their heart. I think the reason they're open source is probably because they're behind. I would imagine that if they did have an Nvidia equivalent in China that was creating top-tier GPUs, and

Ejaaz:
They did have the.

Josh:
Yeah, if they did have these leading models like Opus 4.6 and GPT 5.3, they would close it down, because there is so much value in owning that. But because China's behind, there's value in being open, sharing it, and gaining as much adoption as possible, as quickly as possible. And it seems like it's more strategic and tactical than out of the goodness of their heart. But yeah, I mean, again, open source really benefits everyone.

Josh:
And as a US citizen, I've used plenty of the Chinese models and they work awesome

Josh:
because they're just so cheap and effective.

Ejaaz:
To be clear, it doesn't benefit everyone. It benefits the users of those models, right?

Ejaaz:
Because the American AI labs, their valuations are going to tank if you have

Ejaaz:
a Chinese open source, much cheaper version that can run on much less expensive hardware.

Ejaaz:
So it makes sense that the Chinese models are basically going the open source

Ejaaz:
route so that they can kind of like chip away at American valuations.

Ejaaz:
And then, as you said, Josh, entrench users in it. But it's not even just American LLMs. There are Chinese models that are specifically just good in China and beat a lot of the American models. Like, what you're looking at on the screen now is not Transformers 6. It is a 30-second video from Seedance 2.0, which is a Chinese video model that is just at the front of its own race. It's basically at the top of its kind, and it's super cheap to produce, like, Hollywood cinematic effects right now.

Ejaaz:
Seedance 3, the stats were leaked the other day.

Ejaaz:
10 to 18 minutes of continuous cinematic video.

Ejaaz:
So we're going from 30 seconds to almost, like, a casual network TV episode's worth of video, in a matter of seconds.

Ejaaz:
It's just kind of insane to see.

Ejaaz:
And I don't think that, you know, this is a nudge against China.

Ejaaz:
I just think, like, you know, this thing is accessible to anyone and everyone, and should be at scale soon.

Josh:
When you're in a bar fight, the dude who smashes the bottle over the counter and starts waving it around as a weapon, that's the guy that wins. The person who is armed and willing to break the rules and do whatever it takes to win, that's the person that wins. And China, time and time again, has proven that that's what they're willing to do. And it creates a really difficult moral dilemma for companies like Anthropic, that I genuinely do believe have people's best interests at heart, but none of the incentives align with that mission.

Josh:
There is no incentive for being safe when...

Josh:
The opposers on the other side of the planet have no regard for it, because should being safe slow down our progress, that only allows them to catch up or accelerate ahead.

Josh:
And then we are living in a world that is run by Chinese rules from Chinese models. And it's this impossibly difficult dilemma that they're trying to navigate.

Josh:
And I really have a lot of empathy for that because it's a difficult place.

Josh:
You want to create this safe super intelligence, this safe AGI that doesn't harm the world.

Josh:
But at the same time, you do need to be a wartime presence. You need to lock down your endpoints.

Josh:
You need to have detection for 24,000 fake accounts that are extracting tons of data from you.

Josh:
Like this is a serious issue. And I really hope that this is kind of a warning cry, or just a refocusing, for a lot of these AI labs on how important it is to keep your stuff locked down, or just do whatever needs to be done to win this race.
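
The endpoint lockdown Josh is calling for can be sketched as a toy detection heuristic. Everything here is an assumption for illustration, the function name, the thresholds, and the idea of flagging accounts by volume plus prompt similarity; it is not a method any lab has published.

```python
from collections import defaultdict

def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two prompts."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def flag_extraction_accounts(logs, volume_threshold=100, sim_threshold=0.6):
    """Flag accounts whose traffic looks like systematic distillation:
    very high volume plus highly templated, near-duplicate prompts.
    `logs` is a list of (account_id, prompt) pairs; the thresholds are
    illustrative, not values any provider has disclosed."""
    by_account = defaultdict(list)
    for account, prompt in logs:
        by_account[account].append(prompt)

    flagged = []
    for account, prompts in by_account.items():
        if len(prompts) < volume_threshold:
            continue  # normal usage volume, ignore
        # Harvesting scripts tend to reuse one scaffold with small
        # substitutions, so consecutive prompts look very alike.
        sims = [jaccard(p, q) for p, q in zip(prompts, prompts[1:])]
        if sum(sims) / len(sims) > sim_threshold:
            flagged.append(account)
    return flagged
```

A real defense would also have to correlate across accounts, since the report describes the traffic being spread over 24,000 of them; this sketch only shows the per-account signal.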

Ejaaz:
To round this up, I see a few things happening going forwards.

Ejaaz:
Number one, I think companies like Anthropic, and maybe even OpenAI and Gemini

Ejaaz:
or Google to an extent are going to start locking down their APIs in a few ways.

Ejaaz:
Google started locking down their API to OpenClaw this week.

Ejaaz:
Anthropic started doing the same after announcing this distillation attack.

Ejaaz:
Now, this is not going to be good, net net, for users, because, you know, they say that they're preventing Chinese hacks, but really it's the software engineer in America that suffers from this.

Ejaaz:
And I would say it would have the opposite effect that they want: these software engineers, who can't afford, you know, to spend tens of thousands of dollars every month to access top-tier models, are just going to go to these Chinese models. So it's going to have the opposite effect of what you actually want.

Ejaaz:
I think the other thing that we have to recognize, which is just the uncomfortable

Ejaaz:
truth, is this isn't a conversation about AI models and the AI race.

Ejaaz:
I think this is a geopolitical discussion.

Ejaaz:
This is America versus China, as it always has been.

Ejaaz:
And to Dario's point at, what was it, Davos, he stated that, you know, giving or selling GPUs or selling model access to China is the equivalent of giving them the keys to nukes, right?

Ejaaz:
Because if you assume that these AI models are going to become intelligent enough, they're going to be used by these adversaries against each other. So you can't necessarily, or you don't necessarily want to, give China access to these sorts of things.

Josh:
The progress of AI and the safety of AI will fall to that lowest common denominator where like...

Josh:
We want a good video model. Well, China doesn't care for copyright.

Josh:
They go and create Seedance.

Josh:
Anthropic doesn't want to cooperate with the Pentagon, and it wants to make sure the Pentagon does things a little more safely than the Pentagon would like.

Josh:
Well, Grok is there to step in and to fill that void.

Josh:
And the reality is that while these morals are so important to stand on, they're so incredibly difficult to enforce, because the stakes are as high as they are.

Josh:
And I think when we look at this Game of Thrones, and how we evaluate the positions of all these companies, it's becoming increasingly clear that the moral calculus is going to become increasingly complex as the stakes get higher.

Josh:
And a company like Anthropic, who wants to be Anthropic, is going to have a very difficult time maintaining that, even though it's probably critical for everyone's safety and well-being.

Ejaaz:
The other thing I was thinking about is that these types of hacks, hacks using distillation, remove all the safety caps that American AI labs put in.
So for example, if you had an uncensored version of Claude, you could use it

Ejaaz:
to create or help you create biochemical weapons.

Ejaaz:
But Anthropic puts in safeguards so that you aren't able to do such things, right?

Ejaaz:
Chinese model labs that are distilling those models to train their own don't have that safety limit. You would need to rely on China adding that themselves, and not adding any nefarious backdoors.

Ejaaz:
So I see the point around American model labs being responsible for their own

Ejaaz:
thing and understanding that they are now a national level asset and they need

Ejaaz:
to kind of respond effectively.

Ejaaz:
But equally, we can't necessarily just be relaxed and let China do similar things

Ejaaz:
like this. So it is a tricky one.

Ejaaz:
I think, without doubt, the frontier of modern warfare between these two nations looks like AI models attacking each other. I don't think it's got anything to do with conventional weapons. It's quite the opposite. That's why the Pentagon cares so much.

Ejaaz:
That's why they're signing deals with OpenAI and Grok to create drone warfare

Ejaaz:
technology and so much more.

Ejaaz:
So I think, in the end, we're going to see way more attacks like this, maybe even in switched roles, I don't know.

Josh:
Interesting

Ejaaz:
To see nevertheless.

Josh:
Yeah, I mean, again, this Game of Thrones is just going to keep getting more interesting, higher stakes. People are going to start sacrificing more and more, and this is just the most recent example, of Anthropic being the one in the crosshairs, but I'm sure it's just a matter of time until others are as well. But I think that concludes the episode today. That is the update, the Anthropic drama. That's everything you need to know about it. And I guess the prompt for today, which I'm curious about, is kind of where do you stand on the issue? It's complicated, because in a way everyone is right and everyone is wrong. Everyone is breaking the rules, but what are the rules, actually? Are they actually able to be enforced? I don't know. But yeah, I'm curious to hear just general takes on the issue here. It's

Ejaaz:
A good one. Like, how do you feel? For those of you who are playing around with, like, Kimi K2.5 or Minimax, like myself, do you feel more likely to pick them up now that you know what's going on? Or are you just kind of on the side of, yeah, this is happening, it's cheap for me to use, and I can run it privately at home, so maybe it doesn't matter? I don't know. Let us know.

Josh:
We didn't talk about that. Are you now more or less inclined to use these models?

Ejaaz:
Dude, I, okay, I'm just gonna be very honest. I'm still gonna use these models

Ejaaz:
because I'm not exactly convinced, even though I understand where Anthropic's

Ejaaz:
coming from, that distillation is such a bad thing.

Ejaaz:
I think they need to figure out a way to prevent people from distilling them.

Ejaaz:
If you can access it via an API, you've got a security issue, not a national threat.

Josh:
Yeah, I think I'm probably in the same boat, where I will continue to experiment with Seedance, because it is so much better than everything else.

Josh:
And I'll just use the best products at the time. And I hope that the American

Josh:
companies continue to provide the best products.

Josh:
And yeah, I guess that concludes today's episode. So if you did enjoy,

Josh:
please don't forget to share it with your friends. That's a big way to help us grow.

Josh:
Liking, subscribing, commenting. If you're listening to this podcast,

Josh:
rating five stars goes a long way.

Josh:
And yeah, we have an amazing Substack that comes out twice a week that you can

Josh:
also subscribe to. Everything is linked down below in the description.

Josh:
And Ejaaz, unless you've got anything else, I think that's it for today.

Ejaaz:
No, that's it. So we'll see you guys on the next one. I'll see you folks. See you guys.
