Aravind Srinivas: Perplexity CEO’s All-In Gamble to Take Down Google

David:
[0:03] We are here with Aravind Srinivas from Perplexity. Aravind, welcome to Limitless.

Aravind:
[0:08] Thank you for having me.

David:
[0:09] Aravind, we want to learn something from you. We generally understand on Limitless that the future is going to be very different from what it looks like today, most likely starting with the internet. So we want to hear how you think the internet will be different in five years, and how that thesis of yours has informed your leadership of your company, Perplexity.

Aravind:
[0:28] It's never easy to imagine the world like five years from now, even in normal circumstances. And we live in like a pretty different world where AI is advancing at a pace that we're just not used to. So I would be lying if I said, I really know how the world's going to look like five years from now. And I actually don't think anybody really does because it's so hard to work around scenarios where the capabilities of AI are constantly evolving every few months. Three months before, all the agentic capabilities where chain of actions and chain of tool calls were not reliable.

Aravind:
[1:10] You could see the contours of it, but it was not as well-defined as it is today. And even today, it's not as reliable as it needs to be for really having one employee per person that's just an AI.

Aravind:
[1:24] So definitely I'm only making like, you know, bets here that there'll be a lot of agents that surf the web, surf the internet and go and do tasks for people. We are no longer gonna like browse the internet for things that don't feel fun. In the sense, like no one likes paying their credit cards. No one likes like, you know, moving money from one bank account to another. No one likes like using legacy websites to book hotels. No one likes using these really archaic UI for finding a rental, a car rental, or a last-minute doctor appointment somewhere in a new city that you are in. These are all stuff where the different websites don't actually have good websites to deal with. They're buggy. There's no customer support. You got to go find their number somewhere. You got to find details from four or five different places, finding a lawyer for something. like these are all like very difficult ways in which the web is designed today. And AI hasn't solved that so far. AI has managed to assimilate information from many different sources and summarize it. That's what perplexity essentially started. But next step for AI is honestly to like take away the mundane, boring aspects of having to do the real work for you, right? And so then what happens is you just go and like, browse the internet for fun on websites that feel delightful.

Aravind:
[2:51] And different website owners can actually make their websites look delightful because AI can write a lot of code. And so my bet on the future is it's going to be glorious. I have a very optimistic view that people are going to have a lot more fun. And entertainment is going to be a way in which monetization happens even more in the coming years, creators, like podcasters like you, like many, many other formats of like communicating and like sharing information are going to like spread. And so people are also going to have more time on their hands. Like a lot of the ways in which AI companies are motivating their tools, it's like, hey, people hardly have time. So let me just have the AI do the work for you.

Aravind:
[3:34] That's only partially true. People actually have a lot of time. It's just that they find the work boring. That's why time spreads to fill the gap. Like if you allocate two hours to do something that only takes like 15 minutes, if the work for 15 minutes is so boring and like you have to deal with so many boring workflows, you still spend two hours doing it. And that's kind of like why people hate doing work and they get tired and then they don't want to do anything anymore. So I think that part will go away, hopefully. And that means that we're all going to have a lot more fun with each other. And consume the web in our own personalized fashion. We're going to have AIs proactively tell us what to go and consume. We can also consume ourselves. You can tune on and off how much AI dependence you want in your life. So it's going to be a very high agency, high curiosity-driven world. That's one of the reasons why we have built a brand around curiosity, because we think that's the one human emotion that, sorry, human characteristic, I wouldn't say emotion, but human characteristic, that is even more important in the age of AI.

Aravind:
[4:42] Especially knowing how to use AIs as well as knowing what to do in a world where AIs are able to do a lot of things we used to do.

David:
[4:50] Go into that a little bit more. Curiosity as a leading human drive for how we will navigate the internet. Why is curiosity so important? And maybe what will human curiosity look different when we have all these amazing tools at our disposal?

Aravind:
[5:06] So I think the world has more AI than the amount of skill required to make use of it today, right? Most AIs are pretty good. Yes, there are mistakes being made by all these AIs, hallucinations, sometimes like chatbots are too psychophantic. They're not able to reliably complete tasks, yes.

Aravind:
[5:28] But keep even the current state at which it can write code for you, build websites for you, do your research for you, answer whatever questions you have is already incredible. Incredible, but compared to the amount of people using it on a daily basis, it's not as high. Most people are still doing work in the traditional way. We have a browser called Comet. It's basically able to let you watch a YouTube video with the help of an AI. I don't even have to watch the full videos anymore. It's able to help me draft emails, LinkedIn posts, recruiting emails, recruiting messages, pull candidates to reach out to, pull up all these old emails I don't want to read, unsubscribe me from spam. It's already able to help me do all these things. But I do need to sometimes be curious in terms of how to actually make new uses out of it that I haven't tried yet. It's on my creativity. It's on myself to actually channelize that extra agency that it gives me. So I would say that's the current aspect in which curiosity is short-term useful, long-term, when these are all like, you just take it for granted that by default, you have to have the AI with you for everything you do. Let's assume that's the state. You still have to be curious in terms of what do you even work on? What is the next project you start? What are the kind of like seed prompts you can provide to AI, even if it helps you do the task?

Aravind:
[6:53] It's you channelizing like what questions to ask, what original project to like initially start working on. That imagination, like, okay, let's assume Einstein had all the scientific tools, right? It is still on him to question like, what happens if we travel at the speed of light? do all the Newtonian mechanics break at that regime? Should we build a completely new understanding of the world? Okay, why is this even a useful question to ask? Even if you didn't know the answer to that, like asking these questions and going deeper, or for that matter, like questioning Einstein's wisdom and going at the sub-particle level and drawing a distinction between the particle and wave nature, all these things are like stuff that physicists used to do just out of pure curiosity.

Aravind:
[7:42] And Jeffrey Hinton had a lot of curiosity on like what would happen if we built computers that simulated the brain. Even though computer science was all about deterministic programs, fundamentally AI is basically stochastic programs. Like you cannot guarantee an output in AI even today. So no LLM decoding is always the same. So people had their own like scientific curiosity to like go and explore things. But that was purely an academic exercise until now. With the access to all these AI tools, it's no longer going to be restricted to the elites. It's no longer going to be restricted to the professors, the scientists. So anyone with the curiosity that's childlike, when you have a child very early on, like, or you have someone in your family as a child and you go hang out with them, they ask you all the most basic questions that stumble you, right? Like, it feels great to be able to answer them. but also it feels like, oh damn, I never actually thought deeply about it. I just took it for granted what people told me or what I read on the internet about. So I think that sort of world is what we are heading to. And as you go deeper and deeper and as you ask more questions, and if AI actually helps you ask more questions, like it's not just answering your question, but it's actually, suggesting you more questions to ask and taking you into rabbit holes, you get that kind of joy that the early web adopters got with hyperlinks and Wikipedia and like embedded web pages.

Aravind:
[9:10] The internet used to actually attract librarians and like historians and like the intellectuals and the academics. That was how it started. That's kind of why also Amazon started selling books because it wanted to cater to the early audience of the internet, which happened to be people interested in books. So I think that's kind of how AI feels like today to me. It's really massively used by the early adopters who are thinkers and programmers and intellectuals and academics. But as the tools get easier and easier to use and as the tools get a lot of agency, it's really going to be way more accessible. And so the normal person who's curious is going to be actually having a lot of superpowers. And that's hopefully going to change the world in a very positive way.

Ejaaz:
[9:54] I love the way you frame the internet and AI as a vehicle to pursue curiosity and creativity. I think back to when I bootloaded my first CD-ROM or when I went on RuneScape for the first time and I could just kind of explore this new internet native world. I'm curious, AI is typically branded as something that will automate the world. And your kind of like picture of like curiosity and creativity. Do you think there's like a fine line between kind of AI constraining what an individual can look at and explore and search for versus something that they can use as a tool to create? I'm kind of thinking how you tread that line when you build the products that you build.

Aravind:
[10:40] Our product is built for helping you navigate the web and search more effectively than consuming through organized like 10 blue links, right? And in my opinion, like the skill of asking a question was not even there. We all like, I would say using the machine learning terminology, we all overfitted to the skill of typing in a couple of keywords and opening the links and reading those links and then synthesizing and summarizing the relevant information to our original question in our minds. And arriving at like whatever conclusions we have. That's kind of what we did in the last two decades because we just couldn't have a tool where we could just directly go and drop our questions. Now that's there. So what are we doing more of is like asking more questions. The first question, but also like a lot of follow-up questions. And so that's leading to a very different way in which you start. Like for example, let me give you an example of how my own life changed. Earlier, when I want to read and understand a topic, I would go and read the papers, the blog posts in a very linear fashion. I would go do the literature review. I would collect a bunch of resources to read that particular source of material.

Aravind:
[12:00] And then after reading all of that, I would come to my own conclusions. I would still have a lot of questions, but this is how I would do it. This is how everybody did it. Now it's very different. Let's say I'm the CEO of the company. I don't have time to go deep into anything anymore. But I do want to learn, like, what is this new thing everyone's talking about? Let's say MCP was a buzzword, and I wanted to know what it is. I don't have to go and read Anthropics documentation and blog posts to understand. I can actually, like, ignore the writer's perspective and just go directly to perplexity and ask, like, what the hell is this MCP thing?

Aravind:
[12:39] Like, you know, just, is this a buzzword? Is this like a pro, all it does is a different way of moving JSONs around between like servers and like the models, or is there something more to it? Like, why is everyone calling it the USB-C for the AI or internet?

Aravind:
[12:57] Like, what's the big deal about it? Like, I can ask things in a way where questions guide my learning rather than like the material and the blog post guiding my learning.

Aravind:
[13:10] Because anyway, after reading five different pages, I would still have like tons of questions. So why not? I started with the question and then after like 20 questions, I got a lot out of it. And now I can go and read the material in full. Like it's a very, it's kind of flipping the order. This is a personal taste. I don't recommend everybody do this, but because I don't have a lot of time, this is how I learn things at this point. Same thing happens. Like whatever happens to my body, whatever like i'm doing my workouts the foods i consume i don't actually go and like watch youtube videos of like diets recommended by bodybuilders and like like how to like lose fat without like losing muscle like like i don't have to watch like 20 videos i can just actually ask critical questions about the videos using the comment browser tell me like what is actually like contrarian on you here and like like i can ask it to like reference check a bunch of papers So it's leading to a very different way for me to consume knowledge out there. And I only think this is going to be even more awesome going forward. Kids don't have to consume the web or the internet like the way we did when we were kids. And then voice mode interactions are going to make it feel even more natural. And then having the ability to pull context from whatever you're seeing.

Aravind:
[14:28] Asking questions, is going to make it feel like even more device-free and like more natural. So I'm actually more excited for the next generation because they are.

Aravind:
[14:37] They're very lucky. I feel like, I don't know your ages, but I'm guessing you're all in my age group. So us, we were fine. We at least enjoyed the early web. I feel like the generation right after us really got affected by this whole social media push. And they got a lot of knowledge just by watching reels and charts and all these things. And I don't think that's very good because fundamentally it's net negative. I think the generation after that is going to be like, you know, go watch the reels for me and tell me like what's likely interesting to me. Like I trust my relationship with my agent who truly understands me and knows my preferences, knows my goals and objectives to go like consume the internet for me and give it to me the way I want. So that agency and that trust you get with your AI to do things for you and like to filter out the noise for you and help you seek the truth and help you stay curious. That's the kind of what we want to help create through our products in this company.

Ejaaz:
[15:44] I love that. You're describing essentially a new kind of online or browsing experience, right?

Aravind:
[15:51] So when I showed a comment to Mark Andreessen, the prompt he wanted to know was like, go to x.com, scroll through my like 100 tweets on my feed, based on my browsing history, filter out the noise and just show me like the 20 relevant ones. And it did an amazing job. Like what if this is like almost real time where I go to a website, I click a button, you know, I don't even have to write all this prompt. And then it just renders the website in the way I want to consume it. No website owner or like algorithm builder has a time to like customize everything to every person at a fine grain level, right? There are like Elon Musk will make a change to the X algorithm. And then you'll start seeing a lot of political posts or you'll start seeing a lot of memes. You'll start seeing a lot of random videos or anime content. You have no idea why you're suddenly seeing all this. And then it's not his fault either. He's just trying to maximize a few metrics for the company or maybe for himself. It's his property. And that's the, but I don't want us to continue living in. We need to have the agency to do things in the way we want.

Ejaaz:
[17:04] Yeah, it kind of sounds like you're describing the current world

Ejaaz:
[17:08] That we browse.

Ejaaz:
[17:09] And look online is kind of overfitting for skills that we don't need right now. So what do you think are the important skills that we should focus on now then as like that new younger audience that is entering the internet today?

Aravind:
[17:24] I would say critical thinking. So here's one skill I have learned to acquire over time, which is anytime you read a book or a biography of somebody, unless it's written by someone pretty neutral, usually it's a puff piece on them, right? Like usually it's like something that they collaborated with and, you know, meant to show the company or themselves in a very positive light. So you don't, you end up like getting a very biased perspective. So I would love to like, you know, have an AI when I'm reading a book together and critically review any chapter and tell me like perspectives that are contrarian to what the author has as well. So that's what I do when I read books right now. I just have a sidecar assistant on my browser and I just ask it for like, hey, just based on what you read in this chapter along with me, just tell me what are some things I should look out for that the author could be wrong here about. And that's just me looking for more perspectives, slightly the Peter Thiel-ish contrarian ideology. Again, not to be contrarian for the heck of it. I just want to know everything possible.

Aravind:
[18:38] And so that critical thinking will definitely be essential.

Aravind:
[18:42] And then the reason that you can consume everything in the way you want. Definitely helps you to like not get into echo chambers. So hopefully that opens up the mind for most people to like just question a lot of the things they see on the web. And I think the web is going to be filled with a lot of AI slop too. I don't want to just like give you this impression that AI is like all so amazing

Aravind:
[19:03] technology, just sit back and enjoy the ride. Like there's going to be a lot of slop. There's going to be a lot of AI generated misinformation, AI generated videos that are like photorealistic that you can't even say if it is real or not. And then content will be written a lot more by AIs than humans on the internet. So the only way to fight this is actually with the help of an AI like ours or what other people are building that are helping you seek the truth and guide you towards it, even without much effort from you through the right kind of prompts that are already cached and help you consume things the way you want and honestly work for you, like truly like keep your interests. Imagine a world where agents are doing shopping and like travel booking for you, right?

Aravind:
[19:49] And there might be a world where like, you know, whichever companies choose to do this, try to have advertisers like trying to get the attention of the agent instead of yourself, like ads at the agent level. Where do you like protect the user then who doesn't want this to happen? The way potentially it could work is the user and the agent have their own contract a handshake.

Aravind:
[20:13] It's all in the form of a system prompt and that prompt is protected. You cannot inject it. This doesn't exist today. You can do prompt injection to anything. So this would not work today. But imagine a future where we can reliably do this. Then no matter what the advertiser tries to do to get the agent to prefer them over some other merchant, the user's prompt to the agent protects them from the sort of advertising mechanism. So we need all the sort of like, like developed versions of like the current systems. It's very nascent today. It feels like the early days of the internet, but that's what I would like to like ensure to make sure like people are protected even against AI slop and advertising and all that going forward.

David:
[20:54] There's this notion in UX design that better UX involves fewer clicks on behalf of the humans. Like if we can just get them to tap fewer times to get what they want, that's generally considered to be good UX. And I see some of this, that same kind of sentiment being applied towards AI agents where, oh, we can actually just get AI agents to do things on behalf of their users. And that really puts humans into a very passive role. And I see pros and cons of that, right? Like, you know, sometimes I just don't want to think that hard and I just want to be entertained and that makes me feel good. I'm also worried about the costs of that as my brain turns off more frequently as my default mode. How do you think about when you're designing perplexity, how do you think about like this active versus passive human in the driver's seat when we can automate things, but then also like maybe we also want to encourage a more active driver when it comes to like managing these tools. How do you think about that trade-off?

Aravind:
[21:54] Yeah, it's a good question. So we think about it in the sense of keeping the user active in the process. So at least for agentic queries, where you go and ask perplexity to go do a deep research for you about like GLP ones. We do have the agent coming back and asking clarifying questions to the user, where the user can provide more input. I think ChatGPT is also doing this. ChatGPT explicitly forces you to reply. Perplexity does not force the user to reply. I think ours is a better design because sometimes when you don't know anything about a topic, like you don't know enough to even reply anything. So your reply doesn't matter. So you don't need to be blocked on the user to reply. But this is at least one way in which you can stay along and guide the agent towards like doing something that you want, right?

Aravind:
[22:42] And then certain other ways in which we do it on the comment browser, for example, if you go and ask it to buy something, it'll still ask you for confirmation before proceeding. It'll give you warnings, oh, this is going to be $100, are you sure you want to spend? So it's still going to keep your brain active as it does the work. But I'm zooming out and looking at your question at a more philosophical level.

Aravind:
[23:04] Like like if agents are at some point you do trust them they're smarter than you it's it's kind of like you hired some person smarter than you and like why why do you even need to micromanage them anymore so where do you apply your brain power it's not very dissimilar to like you know like you're running a company and you hired like two people and they're like world class and they're just doing everything and like even if you don't turn up for work like your company runs fine like what do you do then either you do another company or you start another business adjacent to your current business that helps you grow the current business even more. Or you start taking more bets within the same company by hiring a different set of people and trying to amplify what you can do within the company. So I guess that's how I see it. If you stay idle and do nothing, yes, I think there's going to be cognitive decline for sure. And I think that applies even more in the age of AI, where if AIs are able to do a lot of things for us, and we therefore take it for granted, and Bill Gates has this thing of people who have three-day work weeks or two-day work weeks in a world where AIs work really well. I think that's okay. By the way, I'm not against such a future where people only work two or three days a week and chill for the remaining four days.

Aravind:
[24:18] This whole five days a week thing was, the Industrial Revolution did it for us. Henry re-forward was one of the main reasons it happened because at that time, like the only way to maximize production efficiency is people turned up to the factories and did the work. Then machines started doing a lot more things and like people started finding different kinds of jobs and software and internet. All these things are like, like, you know, how we started evolving to deal with all these changes.

Aravind:
[24:43] So I'm sure we'll find, find more ways to keep ourselves occupied. At the same time, I'm sure like there'll be some people who just retire and pursue other passions, like, like hiking and like photography and like, you know, content creation, podcasting, there's a lot of like so many different ways in which you can, you can just have your own life. And that's kind of like making the world more multidimensional. Like some people tell me, you know, San Francisco is too one-dimensional. You come here, you just meet tech bros. Like they all talk about AI in cafes and no one, like it feels like, you know, it feels great. The energy is amazing. But people who like living in New York or London and, you know, you go into a bar and they're like someone, you meet someone who's like playing an instrument. You meet someone who's an artist. You meet someone who does like stage shows or like standup comedy and then engineers. There's like a lot of different types of people in New York and that attracts, you know, a lot of people to that city. I do think that like AI is getting better and better might make the society feel more that way globally across the world, not just like restricted to few cities.

Josh:
[25:50] So Arvind, people love perplexity. When I told some of my friends you were coming on, they were stoked. They use it all the time for like sports scores, for weather, for gambling suggestions.

Josh:
[25:58] And I find people have this affinity with perplexity. And I'd like for you to help me unpack why. So if I'm a user, if I'm someone who's listening to this podcast who uses ChatGPT, who uses Gemini and isn't quite sure what the advantage is to perplexity, can you describe the unique advantage you have why people would want to use the service and what you're doing in the background to actually deliver on that promise?

Aravind:
[26:19] Number one, we established the brand around accuracy and knowledge. And and so it's not meant to be an assistant that is broadly like you know or rather it's not meant to be an ai chat bot that's meant to be chatting with you on on and everything so you can go to chat chipt and like just have a bad day can you motivate me that perplexity is not meant for that and and so we're not trying to build a product that's good for search and research and knowledge and facts, and good for being your chat buddy or a companion, all in one. Gemini and ChatGP are trying to do that. So as a result of really optimizing for one thing, which is knowledge and facts and research, and giving the answer to the user in the most consumable fashion, like the highest density per pixel.

Aravind:
[27:10] Like in terms of information bandwidth, we do a better job and then we also are faster to just give you the same answer. So we really care about like, you know, what the user needs to like say. Even if the user doesn't have to be very precise in their prompts, we kind of understand their intent and give the answer faster and better. So the sports scores, you're saying, we do a lot of work on that because when you're asking the score of a game, you don't always return the answer in a wall of text. It's not fun. and you do want these widgets. Your brain is used to consuming those pixels. You don't want like live updates, stock graphs. Like you don't want to sometimes do deep dives on a company's revenue or the financials. We build a lot of dashboards for that. You don't want to be able to compare two different stocks. You don't want to be able to like go deeper into like the past scores or like, you know, like the different, like Formula One, for example, like you want to be able to like track the live updates in the game. So we did a lot of work towards like just giving the information in the highest possible information bandwidth, like consumable pixels. And we still haven't completed, like we haven't done a good job at tennis yet. Like I think we're still lagging behind on like soccer. So there's still a lot of work to do, but we at least care enough about this. And we want people to be able to like come to us for asking questions about anything and everything in the world. And that's the way we think about building the ultimate answer engine.

Josh:
[28:37] And how are you doing this? What's happening behind the scenes? So when I place a query with perplexity, what is the magic that's happening behind? Are you routing? We recently had the CEO of OpenRouter on who kind of described how you can route queries to different models. Are you just kind of aggregating the data yourself? Are you scraping the web and serving it along your own model? What's going on when I hit enter on that search box?

Aravind:
[28:55] Yeah. So every query gets classified. So sometimes it's a sports query, sometimes a weather, finance, or a regular query that doesn't need widgets. So every query gets classified. And then depending on the classifier, different UIs, like we call it generative UIs, different UIs are generated per query. And then for certain queries that require really accurate facts, you don't want to just use web links. You actually want to use a data provider that gives you a real-time data dump.

Aravind:
[29:24] That's what you need for finance. That's what you need for sports. That's what you need for weather. So we do that. For some queries, you actually need merchants or hotel inventory or those kinds of things. So we do that for travel and commerce. For some queries, you need data providers for local restaurants. We do that with Yelp, for example. And for other queries, you just need the regular web where you pull a bunch of links and you summarize the content in them. So that's what we do for most of the queries. That's a long tail. And you want to decide if you want to format it in Markdown or tables or just one paragraph or two paragraphs. And if the query came on the phone or if it came on the web, if it came on the phone, try to be a little more concise because people don't want to read a lot of text on the phone. And then you also want to decide if you want to reason and think longer for certain queries that are a little more ambiguous. Example, let's say you want to ask like, what's the age gap between the top five billionaires and their wives? So something like that, right? You want to like, who are the top five billionaires? So-and-so. So, okay. Like who are their wives? So-and-so. And so, like what is the age of like these 10 people, their birth dates, and then you want to calculate the differences. So you actually have to do...

Aravind:
[30:38] Some reasoning, and then give the answer in the form of a table. So the model has to automatically adapt based on the query, how much reasoning and how many steps of reasoning to apply. So that's all based on classifier decisions too. So we have essentially think of us as building this gigantic, complicated information router for humanity's curiosity and knowledge needs. That's basically what we're doing. And if we can do this at scale for all languages, all types of queries, all types of verticals, all types of like basic day-to-day tasks, there's like tremendous value in that. It doesn't even matter if we own the model or not, like just the value of the router in terms of knowing which models to use for what queries and like what kind of UIs to use and how much of compute to apply per query and getting like majority of the answers right and doing it with delightful latency and like UI is basically our goal.

Josh:
[31:33] Okay. So you have now, we have this, tool set, you're taking the complexities, you're merging it into one data set. And it seems like you're really exceptional at a few things. We mentioned sports. I know a lot of people also use it. A lot of people have been begging on Twitter actually for a perplexity to replace Bloomberg in terms of financials because it can do a lot of charts. And it seems like you're really strong at some of these categories, but where you're putting a lot of your time and effort is actually into the

Josh:
[31:56] browser itself with Comet. And I want to introduce Comet for the people who don't understand. It is your new AI browser. I'd love for you to share it with us because it seems like perplexity for a while you've kind of been living on rented land in order to use perplexity normally i'd have to go to chrome or i'd go to safari or i'd use like a different browser that is not native to you but now what you're doing is you're actually creating the full stack right you're creating the browser from your desktop you run the application you control the entire stack and you just introduce what comet is and kind of how it works

Aravind:
[32:23] So Comet is, we basically call it browser the speed of thought. So we all have like a lot of thoughts while we're on the browser. And we don't get to actually finish all of them because every task that we have in our head takes a lot of time. So Comet is meant to unify perplexity and the browser in a very native way, where perplexity evolves from just giving you answers to performing actions for you. And perplexity evolves from just pulling context from the web to pulling all context. Your browser history, your Google calendar, your Gmail, other tabs that you might have had open once upon a time, your Slack, your other workspace tools. And so it can pull all relevant personal context and the web context and have the agency to go take actions for you and available with you everywhere, on the search bar, on the sidecar, on the new tab page. So whatever webpage you're on, contextually helping you, right? That's the most important thing. Your work starts with some context. You're on a Google Doc and you're asking for help to edit the doc. You're on a Google Sheets and you're asking for help to source information from the web to help you fill the sheet. You are doing some work and you're trying to pull relevant context from the past that you might have exchanged on emails with your colleagues to help you like draft something.

Aravind:
[33:45] You're just about to interview someone and you want to pull all the background materials for them, You just want to do it like, hey, prepare me for my day tomorrow, and it's going to do it for you as part of the prompt. So we just want it to be a lot more intuitive, a lot more personal searches, a lot more personal context, and actually just taking away the mundane aspects of dealing with boring websites. So that's what Comet was meant to do, and it got off to a really great start. And it's right at the sweet spot where it's almost there, but not quite there yet. And I think that's where you want to be so that you want to ride the wave of like the models getting better and then closing the loop on like full reliability.

Josh:
[34:26] Yeah, I want to talk about the form factor and kind of design choices for Kama because I think a lot about intelligence, how it's going to improve over time and more importantly, how we're going to engage with it as we kind of ascend this growth curve. And when I think about the conclusion that I reached, it seems to be a little bit different than the browser. So when I think about a browser and you mentioned this earlier, there's kind of two uses for it, right? There's two buckets. There's productivity and then there's leisure. And productivity is kind of the work you do. It's me doing the agenda prep for the episodes. It's me shopping for laundry detergent or booking trips. And then for the leisure, it's like I'm watching YouTube videos, I'm watching Netflix, I'm scrolling my X timeline. I love that. That feels very uniquely human to me. And I kind of want to keep that. That feels special to me. So what I imagine is that productivity bucket kind of gets abstracted away through agents. And it seemed a little far-fetched a few weeks ago. But then I tried OpenAI's agent and I was like, wait, this is kind of cool. It kind of obviously gets away all the interfaces, the complexities of the browser, and it just gives me the answer. It understands my preference stack. It kind of knows everything. And I'm curious the design decision you made to actually preserve the form factor of the browser versus just going direct to the agent workflow that kind of takes away all the interfaces, the advertising, and then just serves you the answer that you're looking for.

Aravind:
[35:35] So the work begins from where you are, not like from an empty chatbot. Like for example, you're actually like in the middle of drafting a note for a memo and you want to pull context from what you've already discussed with your colleagues on Slack. You don't even want to like copy paste the memo and ask, Hey, like, can you pull all the context from the past that I might've discussed with David and Yas or something like that, right? Like you literally want to just have the assistant right next to you and just say.

Aravind:
[36:08] Can you pull the relevant context that I might be missing here? And you don't even have to say pull it from Slack. It'll just automatically know what to pull and edit it for you right in place. The other advantages are that this constant switching tabs and copy pasting context here and there, and then taking the outputs from one place, putting it back in another place, all that stuff saved for you when you just have it natively embedded in wherever you are.

Aravind:
[36:36] And in terms of architectural decisions, like the Chattapity agent is way slower than the Comet browser. People have done comparisons and whatever takes you like 11 minutes to do on Chattapity agent will probably take you like less than a minute on Comet because there's a lot of advantages in just parsing information on the client side and just using the server side for the frontier model reasoning, but not having to create an entire server side session of your client browser, and then doing all the compute there. There's again another round trip between that and like where the models are actually hosted and then like actually like sending the result back to you, to the client. It's just like very slow and unreliable and sometimes gets stuck and retries and you don't even know what's going on compared to having like full control on the client, which is a lot more secure. Passwords don't need to be communicated. Everything's stored locally. All your content is local. You don't ever have to worry about like a server-side session of whatever you're doing and like everything's much faster because there's only like one way like two way communication between like whatever information on the client and whatever models are running on the server but that's all about it so and like for example you might want to take help even on your ex or Netflix or YouTube right like I'm on YouTube.

Aravind:
[37:52] I just want to say, hey, there's this podcast these guys did with Aravind, and I just want to get that exactly what Aravind said about ChatGPT agent. Can you pull it up from, can you exactly edit the clip out where he only talks about this and upload it as a separate video to YouTube and help me watch it? We're not able to do all these things yet, but very much on the horizons of happening. You can exactly pull it from the right timestamp, and you don't have to go show transcript, command F, chat GPT agent, and then again, like move around those, the playback slider in terms of where I exactly started speaking, all that's not needed. It's much better. So it can also help you with personal tasks, like, like a lot of like work, like, like sometimes you're just watching YouTube and you can, you just might want to pull the whole transcript and use it for your, your next thing later, or send it to somebody pretty quickly. Or while you're watching YouTube, you might want to like book a dinner reservation on the side. Everything can just be, and you might just want to like see if the agent's making progress and you can just consume your content. Everything is just much more seamless and integrated in one environment. It's the stickiest product that humanity has built so far since the last 20 years or like almost 30 years, like we've been using browsers.

Aravind:
[39:12] Yes, like it's changed a little. Firefox innovated on the concept of tabs. Google innovated on the concept of like tabs as separate processes. But other than that, there's not been much changes. So for the first time, we're able to like give it to you in a familiar front end, familiar UI, but give you a lot more agency. That's basically what we decided to do. And it's okay. Like if eventually the agency is so reliable that like you don't need to actually like, you know, open your browser at all. you just have to type into the new tab page and it does everything for you. It's completely fine. But we think of a future where like people will still be doing work, but they'll just be doing it with a lot more AI help, but they'll still retain all the agency. I think that's the kind of future we believe in. And I think embedding the AI

Aravind:
[39:57] directly in the browser is a better approach.

Josh:
[39:59] Yeah, I think the browser is 35 years old, 1990. So we've been using it for a long time. It's clearly very sticky. And when you mentioned the perks of using Comet Browser now, I agree. We actually graciously got access to it. We were able to test it out and try it. And it is so much faster than using the agent feature because it has all the integrations built in. It had my Google accounts. It had all of my login integrations. But my question to you is what happens when eventually they do get faster, when the agents kind of collapse that time and latency where you don't have to spin up a virtual machine. It doesn't take that long. And it really truly is just like a browserless experience. I know people are working on hardware devices to kind of just bring that into reality and remove a lot of the interface. So do you see the browser being the continued form factor as we move forward? Or do you eventually see Komet evolving into something a little more abstract than just a box with a little tab on top?

Aravind:
[40:50] Look, I'm not particular on the browser remaining the front end for information consumption. Like, I don't think that's necessary for the browser to be relevant. That's the whole point.

Aravind:
[41:01] The time it takes for the agent to actually, like the abstracted out agent to actually do the work for you is not a bottleneck by the models getting more intelligent. It's purely an architectural choice of spinning up server-side sessions for each of your browsing tabs or third-party services.

Aravind:
[41:21] And models will get more intelligent and reliable in terms of controlling these websites. But fundamentally, what's happening is you're just spinning up a browser session on the server side. That's all that's happening. And you still need the infrastructure of doing a browser, whether it's on the client or the server, whether it's headless or like with the front end, you still need all the infrastructure to do this, right? Like when you're asking on Comet, go and buy this on DoorDash for me, we're not actually opening DoorDash and have the agent like rendering it on pixels and have the agent like click on things. It's actually done in a much more efficient way, but just directly consuming the JavaScript and taking actions there. We give you the front end in terms of the progress bar to see what's going on. That's just more meant for transparency and like user reliability, but the agent doesn't need to consume it in the way you consume it. So that's not really like a server or a client decision. It's more that where do you actually start? Where are you already on most of the time? Are you going to be mostly on the chatbot? Like, is that where you're going to be spending most of your time on?

Aravind:
[42:32] In that case, it makes sense to move the browser back to the cloud and keep you on the chat. But that's not how we are, right? We are actually most of the time on the browser. We're opening the chatbot as another tab or Google as another tab or Perplexity as another tab. But we are mostly on the browser where like you and I are on Riverside now. But Riverside, I'm recording it on Comet right now. It earlier used to only work on Chrome, but now it works. Okay. Yeah, we fixed the bug.

Aravind:
[42:59] Look, here's the thing. I'm on Riverside. We're talking. I might want to have Comet listen to us while we're talking and just loop it into our conversation. And, you know, it can also like come and do a podcast together with us or like answer our questions that you'll miss out on all these experiences when you're just like stuck in this single chatbot window all the time. And it feels empty and there's no like new context all the time. Whereas on the browser, you just open like Twitter or like link, you know, you go to Twitter and like scroll through a few feeds and then your world is already like chaotic and interesting. And that you miss out all that by just staying in the chatbot and always have to think about what prompt to add to the chatbot. I think that's why we think the browser is more interesting because context keeps coming. So it's like no limit to your curiosity in terms of what you can do with it.

David:
[43:51] What I'm thinking, I'm seeing happening here is there's this notion that AI is just going to come and improve all of our lives in all these different ways. And it's going to come via these assistants. And what I'm seeing with the browser, with this browser model, the common model, is what you are doing with perplexity is you are making a bet that the browser form factor is the most useful assistant form factor to take the unbridled intelligence of these LLM models from OpenAI and all these things. And you're just making a bet that like, okay, we'll make the assistant actually just the browser. And there are other, maybe competitors out there that maybe you wouldn't...

Aravind:
[44:30] The assistant needs to have browsing, whether it retains the front of the browser or not. Right. And then I think on mobile, you're not going to actually use it as a user on the web. On mobile, you're not actually going to be opening tabs on the mobile browser. Yeah. You're actually going to just go to the individual apps. You're not going to go to x.com on my mobile browser. You're going to open x as an app. Right. So on mobile, the way the assistant takes advantage of the browser functionality is calling the third-party apps, which you cannot, the OS restricts you from calling the third-party apps. I cannot open DoorDash. I cannot open Uber, Amazon, or like Twitter or LinkedIn, and go do stuff for you there. The OS does not let me do that as another app. Siri can potentially do that, but that's because it's not even an app. It's native to the OS. So that's where having the browser as an explicit standalone app and helping me either run a cloud server side session of that or like doing it on your client as a background process has a lot of flexibility in terms of what all I can let the assistant do beyond just answering questions. Right.

David:
[45:42] And I suppose there's going to be a handful of products that are like this where

David:
[45:46] they are trying to make a just a useful form factor for AI around your person. And one of them is a browser because like you said, We spend so much of our time in a browser. Another one is, you know, maybe people aren't really intuitively thinking of this as like a competitor, but I see it in the same category as like those pendant things, those physical devices, where it's just another form factor and it's supposed to assist you using AI. And this one is a, it's not a browser, but it's this thing that's with you in real life. You're away from your desktop, you're away from your phone, your phone's in your pocket, but it's another form factor of something that's supposed to assist you and make your life better. And is that how you see the category of what you're building in? You're just trying to make the best form factor possible to create a useful AI assistant tool?

Aravind:
[46:32] Yeah, definitely. Like the memory and the context you can pull from the browser is second to none, in my opinion. I think people believe in the pendant and like.

David:
[46:41] Whatever the thing

Aravind:
[46:43] That you can put on, the chain, necklace. Largely unproven. Yeah. Glass, like record everything you're talking to and... Fundamentally, it's actually like a less efficient way to like store things, battery draining compared to like taking advantage of the battery of your phone or your MacBook, which is what the browser does. And like also a lot more engineering resources have been put into making browsers like consumeless battery and memory and like well understood code to optimize. The chips are way more powerful. So that's what the pendant lacks. It has to constantly drain your Bluetooth on the phone and keep uploading things to the server, keep using the internet connection of your phone. So it's not meant for, and maybe you don't even need to record that much. Like it feels an overkill. Whereas on the other hand, every website you've gone to, having access to your email and calendar, what are all meetings you've attended, your flights, your dinner plans. I already know so much to be able to help you just through the browser context.

Aravind:
[47:48] And it also feels less creepy to me in terms of like constantly having to go around this device and recording people without their permission, whereas the browser only gets your own personal context and only with your own permission, by the way, and like you can choose to do things on incognito. So that's another like advantage I feel the browser has. And if you want your phone to record a particular meeting, you can always choose to do that. It's pretty easy. There are recorder apps. You can have existing apps have a record button and then log all the context, dump it into your local drive there on the app itself. It can be stored locally on the client, doesn't have to be pushed to the server. Context can be pulled from there. The browser can do all these things. It's pretty easy to do all these things. So that's why I'm not a big believer in the hardware. I think hardware is very interesting when you go to the AirPods level. Like when I have an AirPod and I can just talk to it while walking and it has cameras and I can actually ask questions about restaurants, menus. It gives me a completely new way to like go shop online. There are a lot of these advantages you have with the glass or the AirPod where glass can help you render things. The AirPod can help you just see and talk. So I believe in those, but I'm not a big believer in like devices that need to record every single thing you're talking to or speaking about. and then taking all that as context and pushing it into a chat on the server. I don't think that's needed.

Ejaaz:
[49:14] That's funny. Josh, we've spoken about different form factors back and forth before, and he kind of guessed the AirPods with a camera that was able to kind of see and sense everything. Aravind, like, Perplexy is like the first major AI company to come out with the AI browser, right? And it's no secret now that the likes of OpenAI and Google are going to be releasing like new browsers or enhanced browsers. And you mentioned on a previous podcast, I think it was with Y Combinator, that the reason why Google kind of like didn't like created a separate kind of search engine and didn't integrate AI directly into their search engine was because it didn't function or work the same way. And my question to you is, if OpenAI releases a browser tomorrow, what do you think is the main moat that Perplexity Comet has over everyone else? Is it these kind of natural, intuitive human flows that you described? Is that where you're going to play the best in? Or is it these agentic flows? Can you help us understand what that looks like?

Aravind:
[50:21] I mean, I think they are going to work on a browser. it's been communicated by the press already. So what would the mode? I think mode's obviously going to be around like having a better product, moving faster, shipping new things that are not just whatever we shipped already, but things that have to do with like long running processes, like kind of like the cloud code equivalent for day-to-day browsing tasks. Some people like to think about the browser as the IDE for your life. And then the coding agent could be the thing that's fundamentally missing. Like right now you have synchronous agents that do things for you in place in real time, but asynchronous agents are doing things on the background or like taking for much longer, but can take on harder tasks that needs to be stitched together, much more long context management, stateful memory, all that stuff's still missing. So we need to build that.

Aravind:
[51:16] They'll want to work on all that too. So I think like the mode's really just going to come from whoever executes better, And unlike a chatbot where you just ship features, a browser is a massive commitment to be multi-platform and constant upgrades and tons of bug fixes, having to deal with so many different versions of the operating systems, and both on mobile and desktop. And a lot of like architectural decisions between what it says on the client and the server, security, privacy guarantees, enterprise versions for people to use it safely at work, lots of like context handling bugs and errors, constantly having to deal with new models.

Aravind:
[51:56] Having the ability to use like multiple models, not just one, so that agentic capabilities on different models are going to always like, you know, never be the same. So we have a lot of advantages is by just being ultra product-focused company versus being a model company that's doing the compute cluster build-outs and Stargate and like Sora video generation, like chatbot companionship.

Aravind:
[52:20] Like image generation, search, like there's like 20, 30 different projects they do and browsers is one of them. For us, it's everything. So we're betting the house on it. And if you're a very tiny startup that has very little funding, we're obviously still going to lose. But fortunately, we're not that. We have a reasonable distribution and we have

Aravind:
[52:41] a lot of funding. So a lot of great talent here. So I think like it's a very natural bet to take, even if an established company like OpenAI wants to work on the same thing. It only validates our thesis even more. And we are also betting on the fact that open source models are going to catch up to the capabilities of the frontier ones and we'll be able to migrate off the closed models for whatever we do today. And we'll still be using the closed models for things we won't be able to do today for like new cutting edge things.

Ejaaz:
[53:10] Like I said earlier, like you guys were the first to launch an AI or major AI browser. How, like if you were to think about form factors going into the future. You mentioned that, you know, you're not really a fan of hardware devices. If you were to augment your browser in the future, what would you build next?

Aravind:
[53:30] Yeah, I've said this before. I think the only next step after the browser is the OS. Like you, that's the final frontier. Like, because the only one who has, like, the only reason you even build a browser for doing a lot of the agents is you cannot control iOS or Android. It's interesting, you might think Android is open source, so you can control it. No, you can fork it and you can make Android whatever you want. But.

Aravind:
[53:52] You really cannot get a phone maker to ship your version of Android without like getting approval from Google for that. And if they're not the default search, like they're not going to let you ship a version of Android without, with the Play Store and with the core Google apps of the Google Maps and YouTube and Gmail and Calendar and so on. And if they don't let, if they don't ship their apps on, and let other people ship their apps on your version of Android, no phone maker is even incentivized to sell those phones in any market. So you basically have to build a super app that can call every other app such that you don't even need the app store. But that's kind of why you need the browser because the browser essentially, once it becomes everything app and it can call Ubers and buy stuff on Amazon and generative UIs are all so fast and nimble and doesn't feel like you're missing out on the apps, you would still need things like X and Instagram WhatsApp to message people. It's very hard to get around the lack of having apps. So I think this is a much bigger vision than even shipping the browser where you have to actually.

Aravind:
[55:02] Convince social media companies and other people to actually, ignore the Play Store and ship apps along with you to a new version of Android and then convince a massive phone maker like Samsung or Motorola or someone like the largest OEMs to actually ship this phone in the market. That's the ultimate endgame. And I don't think we have graduated to work on that. The best step to get there, to deserve a right to try that, is to ship a really amazing mobile browser and get a lot of distribution on this and really improve the reliability and latency of the product to the extent that people feel like the browser is the everything app and it feels like an OS by itself. And they're willing to actually try out a new phone that can have like a new version of Android. And I think that that's the, in my opinion, once you complete that last step in the trajectory, that'll be the true end of like the Google monopoly, in my opinion, because that's when like they cannot control anything here. Like on Android, they control was a default search and 68% of their revenue is mobile advertising. And so if I remove Google search as a default and let you just use an assistant for all your search needs and you can navigate the web and information, everything all done in a seamless way.

Aravind:
[56:22] Most of the revenue on search ads tanks as a result of that. So you actually need to get market share like through distribution on the phones and that needs like massive phone maker like Samsung to back you. So this is basically the end game and you also have to build a good business model around the agents and, you know, the subscription revenue for people who want to like experience the internet and like services through this new form factor. So the world has to change quite a lot for these things to happen, but we're not like working on perplexity as a short-term like project. Like it's going to take a decade to realize all this and a baby steps along the way and Comet is the first step towards that.

David:
[57:05] I think that if there's one big takeaway I learned from this episode, it's the notion, the reasoning why a AI native iOS software must be AI first. Like that's ultimately where we ultimately end up.

Aravind:
[57:19] You could also consider building like Windows or as not exactly Windows, but, you know, Arrival to Windows or Mac OS. But again, you're going to end up with the same problems. Like Microsoft might not want to ship their apps to your OS because they just don't want to encourage Arrival. And that's why like all the Microsoft apps, like the Office 365 apps suck on Linux. And that's one of the reasons why Linux has failed to get distribution.

David:
[57:45] It does kind of beg the question, if we are, if the end game is an AI native operating system, what's more likely? Apple finally figures their shit out and converts iOS to being AI native. Microsoft has Windows and they figure out how to make Windows AI native or a startup, maybe ChatGPT and OpenAI are trying to get into this game or like a younger startup like Perplexity. Are these the players of the game or how am I?

Aravind:
[58:14] Google is still relevant too.

David:
[58:15] Google is Google relevant. If we do see an AI native operating system, it's going to come from one of these players, Apple, Microsoft, ChatGPT, Google, and Perplexity.

Aravind:
[58:26] I think so. Or meta, you never know. But I think so. These are the main players. And I'm even fortunate to be considered in this list. Everybody else has 10 to 100x more capital, maybe 1,000x also. And so it's definitely... But I would say the main advantage in terms of structural restrictions exists with Apple.

Aravind:
[58:50] They basically, yes, like they are going to lose the Google ad revenue share, if they change the way search and Safari works. But that might be a thing they might lose anyway if the judge rules to that effect in the DOJ case. So if they are going to lose it anyway, they might go all in on this vision and change the iPhone to be more AI native. Google, on the other hand, will not be able to do it as fast on Android phones. They might try it on the Pixel phones where the distribution is much smaller so they don't actually lose much ads and they might sense the market and then try to go deeper on the other OEMs, but they have more constraints and restrictions here. And OpenAI doesn't have the ability to go build out its own device. It has the same problems we have in terms of convincing Samsung to go do this with them. And similarly, Meta has the same problems. It doesn't have search, it doesn't have a browser, it doesn't have great models. And Microsoft, Windows doesn't have the phone. Abstraction. So it's not going to be multi-platform like Google or Apple can do.

Josh:
[1:00:00] Well, Aravind, I want to say thank you and congratulations for having a seat at the table. That is no easy feat. I mean, you've gone from, what, half of a billion dollars to 18 billion in 18 months or some outrageous growth like that. So congratulations on all the success. For the people who are listening, who are curious about what we're talking about today, how would you say is the best way to reach perplexity? How would you want to intro people to use your product? Where should they go?

Aravind:
[1:00:21] Perplexity.ai, that's the web landing. On mobile apps, iPhone, and App Store and Play Store, just type perplexity on the Play Store, so App Store. Ignore the ads at the top, like Gemini and Claude advertising stuff. Gotta love it. Just go directly to our app, yeah.

Josh:
[1:00:39] Amazing. Well, Aravind, thank you so much for taking the time to joining us today. We really appreciate it.

Aravind Srinivas: Perplexity CEO’s All-In Gamble to Take Down Google
Broadcast by