(Illustration: A lot of hard work has been going on behind the scenes. Le Bouchon Ogasawara, Shibuya, Tokyo. Image source: Ernest)
AI technology is evolving rapidly, and the emergence of ChatGPT has sparked a new wave of AI fervor. Companies that want a piece of this trend must stay highly innovative and agile. One SaaS startup, Perplexity, embraced change amid this wave, pivoting from a Text-to-SQL tool to a general AI search engine. In this interview, Perplexity’s CTO Denis Yarats discusses how they identified the trend and adjusted their product to reach product-market fit.
tl;dr
- Perplexity pivoted from a Text-to-SQL tool to a general AI-powered search engine after the success of ChatGPT, seeing an opportunity to enhance it.
- Perplexity’s core competency is orchestration: combining large language models with search to provide fast, accurate, and cost-efficient answers.
- The data users provide is Perplexity’s key advantage, allowing them to fine-tune models for their specific product and optimize the user experience.
- Perplexity focuses on serving knowledge workers and professionals who value time-saving and well-researched answers, with a subscription-based business model.
- Perplexity actively pursues hardware partnerships to expand distribution and drive adoption, while also exploring new monetization opportunities like ads.
Watch the Video
Content selected from The Unusual Ventures Startup Field Guide: The Product-Market Fit Podcast.
Content
Preview
- There is a lot of attention on ChatGPT and we have something that can enhance it. Literally in a span of two days we prototyped a very simple website. We never thought it was going to receive any attention. When we started looking at the usage, to our surprise the traffic did not drop. In fact, it increased.
- And so we made this decision to stop working on Text-to-SQL, set aside four months of work and all the infrastructure we’d built, and fully focus on general search.
Opening
- Sandhya: Welcome to the Startup Field Guide, where we learn from successful founders of unicorn startups how their companies truly found product-market fit. I’m your host Sandhya Hegde, and today we’ll be diving into the story of Perplexity. Perplexity is an AI-powered search engine that provides answers to user questions. Founded in 2022 and reportedly valued at over a billion dollars, Perplexity recently crossed 10 million monthly active users and is growing very fast. Joining us today is Denis Yarats, the CTO and co-founder of Perplexity. Welcome to the Field Guide, Denis.
- Thanks for having me. I’m excited to join your podcast.
Founding Team
- Sandhya: So, Denis, you were a research scientist at Facebook a couple of years ago. How did you meet Aravind, and how did the rest of your founding team come together?
- Yeah.
- So there’s a very interesting story. While I was a research scientist at Facebook AI Research, I was working on reinforcement learning, mostly for robotics, but reinforcement learning is also one of the essential pieces behind ChatGPT. And Aravind and I happened to be working on very similar problems.
- One day, around the middle of 2020 when COVID was going on, we independently published exactly the same paper, exactly the same research result. From that point we started talking and collaborating. I spent some time at Berkeley working with him and his advisor, and after that we kept in touch.
- He went on to OpenAI, and in early 2022 I was about to graduate and looking at other opportunities. We kept talking, and it was becoming clear that GPT was getting stronger and stronger and that there was going to be an opportunity to create a company.
- So around June 2022 he left OpenAI, I left Meta, and we decided to do something together.
- Sandhya: Such a great story, especially because instead of being mad at each other for publishing the same work, I think you were two days before him, right, you had mentioned, you became friends instead. That is such a sweet story.
- Sandhya: And what about the rest of your team? Once you decided to get started, how did you think about which other co-founders you needed to add to your team, and why?
- Yeah. Essentially, we were both research scientists, more like AI people. And we definitely knew from the beginning that we needed somebody who was very strong on product and, in general, on engineering.
- And it so happened that one of my friends and former colleagues, Johnny Ho, whom I worked with at Quora back in 2013, had also recently become available. He was the smartest person I knew. For example, he was an IOI world champion as a high schooler, number one in the world, which is not easy to do.
- So the three of us started working together, trying different prototypes. And once we got Johnny, I was very confident we could do something interesting.
Prototype
- Sandhya: What was the very original idea, idea number one, that you started working on in August 2022?
- Yeah, actually we even started a bit earlier, maybe around July.
- I remember it was even prior to formally joining up. The idea was simple. We wanted to do search, but couldn’t, for the obvious reason that we wouldn’t get funding for it. So we decided to do something simpler: text-to-SQL. At that time there was a pretty decent model from OpenAI called davinci-002.
- So we decided to build a tool that could translate natural language into a SQL query and then execute it on a database. One of the first things we wanted to tackle was creating an interesting database of public data, and one of our first interests was actually Twitter. Back then it was much easier to do because there was an API.
- So we scraped, or I guess downloaded, a lot of Twitter data, organized it in a database, and started creating a natural-language interface around that data. You could ask questions like: how many of Elon Musk’s followers have more than one million followers themselves?
- So it could do these join operations, and we added some rendering, so it was a very cool demo. In fact, this is the demo we used to get Yann LeCun as an angel investor in our seed round, because he spends a lot of time on Twitter. We went to his office at NYU and showed him this demo, and he was very excited about it.
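For readers curious what that demo might have looked like in code, here is a minimal sketch of the natural-language-to-SQL loop, assuming a hypothetical `complete()` LLM helper and a simplified Twitter schema; it is an illustration, not Perplexity’s actual implementation.

```python
# Minimal text-to-SQL sketch (illustrative only).
# The schema, prompt, and `complete` callable are hypothetical stand-ins.
import sqlite3
from typing import Callable

SCHEMA = """
CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, handle TEXT, followers INTEGER);
CREATE TABLE IF NOT EXISTS follows (follower_id INTEGER, followee_id INTEGER);
"""

PROMPT = (
    "Given this SQLite schema:\n{schema}\n"
    "Write one SQL query that answers: {question}\nSQL:"
)

def answer(question: str, complete: Callable[[str], str], db: str = "twitter.db"):
    """Translate a natural-language question to SQL with an LLM, then run it."""
    sql = complete(PROMPT.format(schema=SCHEMA, question=question))
    with sqlite3.connect(db) as conn:
        conn.executescript(SCHEMA)   # ensure the demo tables exist
        return conn.execute(sql).fetchall()

# For the Elon Musk question above, a good completion would produce a join like:
#   SELECT COUNT(*) FROM follows f
#   JOIN users u ON u.id = f.follower_id
#   WHERE f.followee_id = (SELECT id FROM users WHERE handle = 'elonmusk')
#     AND u.followers > 1000000;
```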
- That’s how we started. But we still wanted to do search; you can think of text-to-SQL as search over a narrower, structured kind of data. And we kept spending some time prototyping more general search. In fact, around October 2022 we had an internal Slack bot that was essentially the very first prototype of Perplexity.
- We would use it to ask questions about things like medical insurance for our employees, things we didn’t know a lot about. It was very useful for seeing the first glimpses of this technology.
- Sandhya: Right. So you’re only a few months into building Perplexity, you are focused on text-to-SQL for enterprise customers, and in November ChatGPT launches, one of the most successful product launches in the history of product launches. What was the conversation like within your team? How were you assessing it? I’m sure you were excited, but also thinking about what it meant for you. I’m curious.
- Yeah, I remember that day very clearly. I was just waking up and saw a lot going on on Twitter. To our credit, I think we very quickly recognized that this was groundbreaking, not something that was just going to come and go. I remember we very quickly started thinking about what we had, and we had this prototype, right?
- It just so happened it was also addressing the very early feedback that ChatGPT was getting, where people would complain about hallucinations, and about not knowing where the information was coming from because there were no citations or anything. And this was exactly what our prototype was doing.
- So we put two and two together: there is a lot of attention on ChatGPT, and we have something that can enhance it. Literally in a span of two days we prototyped a very simple website and put it out as a joke on Twitter. We never thought it was going to receive any attention.
- And to our surprise, it did. We started getting a lot of buzz on Twitter, with a bunch of people retweeting us and praising it. The implementation was pretty horrible: it was very slow and didn’t work well. But it was a very interesting signal for us, because we knew we could make it much better, and even in that form there was something about it that people liked.
- At that time we were still not sure whether we should proceed with it, because we were thinking, okay, maybe it’s going to last a week or two and then fade away, especially as we were entering the holiday season, Christmas and New Year. So we waited.
- We decided, okay, let’s just see how it goes. And through December and into early January, when we started looking at the usage, to our surprise the traffic did not drop. In fact, it increased. So it was clear there was something here; this was not normal. And so we made the decision. Even though it might seem like it would have been a hard decision, it was actually a very unanimous, very easy decision: completely stop working on Text-to-SQL, set aside four months of work and all the infrastructure we had built, and fully focus on general search. And I think it was the right decision.
Pivot
- Sandhya: Yeah, what a fascinating pivot. Obviously you had a lot of confidence that you were solving problems that matter to people; you could see that from the early feedback, whether it was the hallucinations, the citations, using RAG, all of those things. However, you must have thought about your long-term competitive advantage, since you don’t own the core foundation model; OpenAI does. What was that conversation like? How did you talk about: okay, if this works, how do we win, and how do we maintain a superior product over the providers of the LLMs we’re using? I’m sure there must have been some skepticism internally from your team as well. I’m curious.
- Yeah, this definitely was a very important question, and something we’re still thinking about. And it wasn’t only about being a wrapper dependent on OpenAI as the LLM provider; right after, I think around January or early February, Bing released a very similar product.
- And Bing comes from a much bigger company. They have everything: they have distribution, they have search, they have an LLM. So there was no good reason why we should exist, right? It seemed impossible. But it turned out that, for whatever reason, our product was better, and people preferred us to all the other alternatives out there.
- And to answer your first question, the way I look at it is that being a wrapper was actually a very essential and very important position to be in early on, because this only became possible when OpenAI rolled out their API. Imagine that three years earlier you had wanted to build Perplexity or something like it.
- At that time, even before launching the product, you would have had to collect data, train a model, launch the product, and only then figure out whether it actually had market fit or not. It would have been silly to do it that same way with this API available. The OpenAI API essentially lets you flip the problem on its head: first verify that there is market fit, and if so, then figure out what to do next.
- That’s why I feel it was the best decision we made. And the thesis here is that the model, in my opinion, while very important, is not the moat, right? We were also very fortunate that the open-source community was picking up steam.
- So now there are very capable open-source models that you can just take and build on top of, and even if that weren’t the case, we would have gotten to a point where we had enough capital to pre-train our own model. Pre-training is one part of it, but I feel the more complicated part is fine-tuning, post-training, and optimizing the model for your specific product.
- And to do that, the one necessary ingredient is user data. You need to establish the data play. If you don’t have that, then even if you have the best pre-trained model, it’s useless. So our thesis was always: the model is not the moat; the data is the moat, along with the product and the brand.
- And now we’ve gotten to that point: we have a lot of data. We know exactly how people use Perplexity, we know exactly what they’re asking, and we know exactly how to optimize a model to improve our metrics, because we can literally A/B test everything. So we take this data, take whatever model is available out there, and post-train on top of it.
- You can essentially think of it as bootstrapping: you bootstrap on something, get the data, and then over time replace the pieces you don’t own. The same idea applied to all the pieces of our infrastructure. With search, for example, we started with an external search provider, but as we started seeing what needed to be done, we built our own infrastructure and just keep improving it.
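As a rough illustration of what “A/B test everything” can look like in practice, here is a sketch of deterministic experiment bucketing between two model variants; the variant names, experiment key, and 50/50 split are assumptions for the example, not Perplexity’s actual setup.

```python
# Deterministic A/B bucketing sketch (illustrative only).
import hashlib

VARIANTS = {"control": "base-model", "treatment": "post-trained-model"}

def assign_variant(user_id: str, experiment: str = "post_train_v1") -> str:
    """Hash the user id so each user consistently sees the same model."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return VARIANTS["treatment"] if bucket < 50 else VARIANTS["control"]

# Each answered query can then be logged with its variant, so metrics such as
# follow-up rate or thumbs-up rate can be compared between the two models.
```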
- Sandhya: It sounds like your background in RL, and being able to take advantage of user preference data to build a superior product experience, is really key.
- Yeah, this is important, right? Each product is very unique, so there are different qualities of the model, different properties, that you are optimizing for than, say, ChatGPT.
- For us, for example, we don’t want our model to hallucinate. So we design the reward function in a way that the model refuses to answer if there is no supporting source, and we just train for that. Maybe for some other products it’s actually okay to hallucinate, because maybe it makes the product more engaging or something.
- Sandhya: Right, yeah, there are places where hallucination is a feature, not a bug.
- Yeah, exactly.
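As a rough illustration of the reward shaping described in this exchange, here is a toy sketch in which unsupported answers are penalized and refusals are treated as neutral; the `is_supported` check and the reward values are hypothetical, not Perplexity’s actual training objective.

```python
# Toy reward-shaping sketch that discourages unsupported answers (illustrative only).
REFUSAL = "I could not find enough supporting sources to answer this."

def is_supported(answer: str, sources: list[str]) -> bool:
    """Hypothetical stand-in for a check that every claim in `answer`
    is backed by one of the retrieved `sources`."""
    return bool(answer) and bool(sources)   # placeholder, not a real check

def reward(answer: str, sources: list[str], is_helpful: bool) -> float:
    if answer == REFUSAL:
        return 0.0                           # refusing is acceptable
    if not is_supported(answer, sources):
        return -1.0                          # penalize unsupported answers
    return 1.0 if is_helpful else 0.2        # prefer grounded, helpful answers
```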
Perplexity’s early adopters
- Sandhya: And I am curious: you’re obviously building a very horizontal, mass-consumer product, but you still have a small enough audience, compared to the full search market, that you probably have some use cases that are much more common than others.
- Do you have an internal early-adopter persona that you were optimizing for, and what did that look like? Who was the early adopter, and what were the use cases you wanted to make really good?
- Yeah, exactly. Even at the current stage, we are not going after the whole web-search space; it’s just enormous. But something we saw early on, and still see now, is the kind of people who use us.
- They’re a knowledge-worker type of persona, right? People who do all kinds of research: academic, market, financial, and so on. Basically, people who search the internet in order to solve some task, not just recreationally to check a score or the weather.
- Those recreational and navigational queries are very important, but they’re not as valuable as the knowledge-worker queries, because the knowledge-worker queries eventually lead to some decision, and decision-making is very important. In fact, Google is a huge company with all kinds of users, but there is a very skewed distribution of how much money they make from whom: a very small percentage of users generates the majority of Google’s revenue.
- And it so happens that we have some overlap with that portion of the users: not the users who just create free traffic, but people who can pay for it. That’s where I think the opportunity is. It’s unlikely we’re ever going to get as big as Google.
- And that’s honestly not our goal. But if we can provide a tool that is useful and saves time for a small portion of users, and those users are professionals, knowledge workers, people who can pay for these services, I think we can create a successful business.
Early customer feedback
- Sandhya: And were there any surprises for you in the early customer feedback? Anything that stood out that really crystallized the product direction for Perplexity?
- Yeah, there were a few interesting things I did not expect to see. One of them was people searching for other people, or for themselves: vanity searches. That was a very common use case, and it still is. And actually it makes sense. Imagine you’re a salesperson, or whatever, and you have to meet somebody and want a very quick understanding of the person you’re going to talk to soon. You just go to Perplexity, ask about this person, and get a very quick write-up.
- But also, obviously, a lot of people were interested in academic research. They were asking if we could add PDFs, not just web documents, and more specialized literature and specialized indices, where it’s very hard to use Google.
- With Google you can get a link, but then you have to go and do the research yourself. So that’s something we’re optimizing for.
- Sandhya: Makes sense. When I think about what Google’s core technical competency was, and what helped them stand out, because they were certainly not the first or only search company, what helped them really beat Yahoo and the rest was that they were the best at content indexing, they scaled the hardware infrastructure really well, and they came up with a pricing model that really helped their user experience and adoption. I’m curious how you think about the core technical competencies that Perplexity always needs to be the best at, compared to whatever chatbots other model companies might launch, or whatever Google might launch tomorrow. What are the one or two things that you, as the CTO of Perplexity, think you need to make sure you’re always the best at?
- Yeah. I’m personally a big fan of Google. In my opinion, Google search is probably the most complex and sophisticated system that humanity has ever built.
- And two core concepts that I really like about Google are speed and accuracy, and I think they’re by far the best at both. Those principles are something I care deeply about, and I try to make sure Perplexity is also very fast and very accurate. And we’re in this new era where you have to combine a very expensive, hard-to-run LLM with search.
- So you have to figure out how to do this efficiently and fast, without sacrificing quality. I feel our core competency is this orchestration part: given a query, how do you make sure you can answer it perfectly, answer it very fast, and also do it cost-efficiently?
- It’s basically a multi-dimensional optimization problem. Doing that is difficult and something we focus on a lot. And once you start solving this problem, you can start drilling down a little into the LLM side and into the search side.
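To make the orchestration idea concrete, here is a minimal retrieve-then-answer sketch in which retrieved sources are passed to an LLM with instructions to cite them; the `search` and `generate` callables are hypothetical stand-ins, not Perplexity’s actual pipeline.

```python
# Minimal search-plus-LLM orchestration sketch (illustrative only).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Source:
    url: str
    snippet: str

def answer(query: str,
           search: Callable[[str], list[Source]],
           generate: Callable[[str], str]) -> str:
    """Retrieve sources for the query, then ask the LLM to answer with citations."""
    sources = search(query)
    context = "\n".join(f"[{i + 1}] {s.snippet} ({s.url})"
                        for i, s in enumerate(sources))
    prompt = (
        "Answer the question using ONLY the sources below, citing them as [n]. "
        "If the sources do not support an answer, say you cannot answer.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
    return generate(prompt)
```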
- For example, if I want to optimize some particular objective, do I need to improve our search in some way? The main thing to understand is that the search index, even though I said Google’s is the most complicated thing, doesn’t have to be as complicated in this new LLM world as it used to be.
- We don’t have to spend so much time manually crafting the ranking signals and so on, because the LLM is going to take care of a lot of that. You basically already know the answers to some of the trade-offs you face when you build classical search, which is all about trade-offs: precision versus recall, freshness, update frequency, and so on.
- So given that the index is going to work together with an LLM, certain decisions become easier. And exactly the same applies on the LLM side. You have a specific product problem you want to solve. Do you need to run the most capable, largest LLM? Probably not; it depends on the query.
- Maybe some queries do require that; some don’t. So how do you route each query to the appropriate system? How do you then maybe have a smaller model that does decently well on certain queries, for example through distillation? Basically, it’s controlling the whole orchestration and then optimizing the individual components, the LLMs and the search, so that everything works together perfectly. That, I feel, is our core competency.
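And as a rough illustration of the routing idea, here is a sketch that sends easy queries to a smaller (possibly distilled) model and reserves a larger model for harder ones; the difficulty heuristic, threshold, and model names are assumptions for the example, not Perplexity’s actual routing logic.

```python
# Query-routing sketch between a small and a large model (illustrative only).
def estimate_difficulty(query: str) -> float:
    """Cheap placeholder heuristic: longer, multi-part questions score higher."""
    words = len(query.split())
    clauses = query.count(",") + query.count(" and ") + 1
    return min(1.0, words / 40 + 0.1 * (clauses - 1))

def route(query: str) -> str:
    """Pick the cheapest model expected to answer the query well enough."""
    if estimate_difficulty(query) < 0.3:
        return "small-distilled-model"    # fast and cheap for simple queries
    return "large-frontier-model"         # reserved for complex queries
```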
Subscription pricing
- Sandhya: Yeah, fascinating. You’re right, there are so many jobs you can specifically choose smaller models for, and you’re constantly having to decide what the smallest model is that will still give close to the best possible experience for your user. Really fascinating. How do you think about this given your business model, which is subscription? I think that’s the power of your model: the biggest reason the average consumer would want to try Perplexity is that you don’t want the ten links and five ads, you really just want to save time and get a well-researched answer. That means subscription pricing. How do you think about gross margins for that business model? Of course you’re still early and in hyper-growth mode, but how do you think about long-term gross margins and the implications of the pricing model you’ve chosen? How does it work over time?
- Yeah, that’s a good question.
- Currently, subscription is the main model. I’m sure there’s going to be something else in the future, but even now it’s very interesting to see that the margin is actually pretty good, especially as you get to a certain scale and do all the necessary work to optimize the experience, which is what I just described.
- And the most interesting thing, something we’ve observed over the last year, is that it keeps getting cheaper to run these models. Hardware becomes cheaper, models get smaller and better. Even the OpenAI API price dropped, I think, four or five times over the last year. We’ve also built certain things in-house, so we don’t have to rely on the OpenAI API as much, and we’re seeing the margins increase over time.
- Which is good. Obviously we keep adding new features to make the product even better, but I have full confidence that we’ll be able to keep doing this, and eventually, at a bigger scale, we’re going to have very good margins. And there are going to be other opportunities to monetize as well.
- I don’t rule out ads. Ads in their current form, as Google does them, are probably not something we’re going to do, but I think there are ways to do ads in a way that is actually helpful for users. People don’t really mind ads if they’re helpful, right? If you’re searching because you want to buy something and the ad is exactly the perfect product for you, that’s good.
- People just hate seeing a lot of irrelevant ads. So somebody is definitely going to figure out how to reimagine ads in the LLM world. Hopefully it’s going to be us; I’m sure Google is also going to be looking into this. So we’ll be looking into that as well.
- But yeah, so far subscriptions have worked pretty well.
Hardware
- Sandhya: Makes sense. You mentioned hardware, so maybe this is a good segue. I have been super impressed with how much Perplexity has leaned into hardware partnerships already, whether it’s Rabbit or the other new partnerships you recently announced. What was the motivation behind that, and what are you learning from these early phone and glasses partnerships?
- Yeah. The motivation is actually pretty simple. We’re still a very small company, and we’ve never invested a single dollar in advertising or anything like that; it has all been natural, organic growth. But when people compare us to Google as a challenger or whatever, it’s pretty funny, because Google has a completely different level of distribution. If you ever want to get even an inch closer to them, you have to have distribution.
- To that point, we decided: there are other companies at a similar stage as us who are also innovating in different directions, like Rabbit, which you mentioned, or the glasses and phone makers. We’re doing interesting stuff, they’re doing interesting stuff, so there seems to be an opportunity. By working together we can create an opportunity for all the parties and take on the big guys. Otherwise it’s just going to be impossible to compete with them ourselves, because we’re simply in a different league from those guys.
- That was the primary motivation. But it was also good to see a lot of our users and their users cheering for us. They’re people who like new things, new products, and they really like it when these different new products work together.
- Ultimately, honestly, it’s a win for the consumer, and it’s also fun for us to do. So hopefully, by doing this, we will encourage larger adoption and get more users in the end.
- Sandhya: Right. And hopefully some really good learnings from experiments in user interfaces, and in the different ways people ask questions, consume information, and navigate that kind of information space?
- The main learning is that everybody wants it to be very fast; nobody wants to wait for an answer. They want instant answers. That’s a big challenge for us, so we spend a lot of time optimizing our infrastructure.
- Sandhya: Right. And keeping with that thread, OpenAI and Google are obviously thinking about hardware and custom chips. How are you thinking, maybe not just for Perplexity, about what the chip ecosystem will look like? Will there be custom chips for each model that give you the best performance for that particular model? What’s your take on how this pans out, and what would be ideal for Perplexity?
- Yeah, that’s definitely a very interesting question. Honestly, at this point we have what we have: GPUs from NVIDIA, and TPUs from Google at a smaller scale. And it seems to me that the hardest part to build is not the chips but the software around them.
- Specifically, it feels to me like CUDA is the main moat for NVIDIA, rather than the chips, because so much software, like PyTorch and everything else, is built on top of CUDA, and that’s very hard to replace. So that’s yet to be seen. Obviously we would love to see competition in that space as well; competition in general is best for everybody, because it ultimately creates better products and better opportunities.
- And then, as you said, different models can utilize different hardware. Honestly, we don’t know yet whether the transformer is the ultimate architecture that’s going to stay. Transformers are good partly because there is perfect hardware for them in the form of GPUs. What if somebody comes along with different chips that favor different architectures?
- Maybe architectures with sparse components in them. That remains to be seen, but I definitely expect to see fierce competition in that direction, and I think there are going to be multiple players in that space, which ultimately is best for us.
Future product vision
- Sandhya: Makes sense. Could you chat a little bit about how you’re thinking about Perplexity’s future product vision in such a rapidly evolving ecosystem? What do the company and the tech stack need to look like in two years and four years, and who are you trying to hire to future-proof the company?
- Yeah, this is a very interesting question, because from one point of view it’s very hard to plan that far in advance. We’ve tried, but we constantly had to scrap our plans and do something else.
- Right now we have some general threads we want to pursue. We want to excel in search, and in specific verticals of search; as I mentioned, build the best possible product for knowledge workers, or at least some portion of them. That means improving the product around answering very complex questions: something that might take you half an hour of Googling right now, can you answer it very fast and reliably?
- That’s what we’re going to be building in general. I think we also want to adopt a bunch of the more classical things in search; some people want to see sports results, for example, so maybe we should support that, those types of things. And also integrating different APIs and providers.
- Recently, for example, we added local search with maps, which is obviously very useful. Maybe shopping is something we’ll add at some point. But the main goal is just to build the best possible product, and the two main directions we’re going to attack are speed and quality.
- But apart from that, it’s very hard to predict, because we also depend a lot on what the big guys are going to do, what Google is going to release, what OpenAI is going to release. That also dictates what we do.
- Sandhya: It sounds very relaxing.
- It’s not, because you always have to be thinking about those guys. But it’s good; it’s been like that the whole time.
- Sandhya: Yeah. I’m curious, what are some AI products, maybe ones you use to build Perplexity or even in your day-to-day, that you’re a big fan of and excited about?
- Yeah, I’m personally a big fan of ChatGPT. It’s an amazing product. Without ChatGPT, we wouldn’t have happened, for sure.
- I use ChatGPT relatively frequently. Apart from that, surprisingly, I don’t really use any coding assistant yet. I still feel like I’m better than the AIs in that respect, but we’ll see. That’s probably the main one.
- So ChatGPT is probably the one. Oh, and I’m also a big fan of voice generation; we’ve been using it extensively in our product. Things like ElevenLabs are very impressive, so it’s good to see.
Advice for founders
- Sandhya: Awesome. Any advice for the next generation of founders building startups in the time of AI?
- Yeah. Basically, be comfortable when everything is uncomfortable. I think that’s the main one. Every day is basically going to be a battle, and you have to be mentally prepared for that. Also, stay stable, in the sense that if things are good, they’re never as good as people say, and if things are bad, they’re not as bad as people say either.
- So don’t swing too much; keep the variance low. Especially these days, whether it’s Twitter or X, the information changes so quickly from day to day. Try to stay grounded and keep doing the fundamental work.
- Ultimately, what’s going to matter is that you don’t overreact to things. Try to stick to your mission, stick to your vision. Obviously take into account whatever happens outside, but don’t just fully jump on it. Because if you give up on your original idea that easily, it likely means your idea was not great.
- And very likely, the next thing that comes up and that you jump on is not going to be great either. So do those things. And maybe the other big one is that hiring is the most important thing you do. Without hiring great people, nothing is possible.
- Sandhya: Don’t try to do everything yourself.
- Yeah. Very quickly you realize that you cannot scale yourself as much as you can scale through the team.
- Sandhya: That’s great advice. Thank you so much for coming on our podcast, Denis. This was lovely, and I’m sure your pivot is going to go down in history as one of the most timely and incredibly awesome pivots. I’m very excited to see where Perplexity goes. I’m a happy customer myself and really amazed at what a high-quality product you all have built. Kudos.
- Thank you so much for having me. It was very interesting to chat with you.
Reference
International Olympiad in Informatics - Wikipedia. I recall being lucky enough to participate in Taiwan’s IOI selection in high school; back then we wrote our programs out on paper to answer the questions. ↩︎