Prime Venture Partners Podcast
A podcast for entrepreneurs who are looking to build & grow their startups. Avoid common traps & learn uncommon strategies & tactics from makers & doers of the startup ecosystem. Prime Ventures is an early-stage venture fund which focuses on startups that not only need capital but also require mentoring to transform them into disruptive companies. We share a passion for working closely with entrepreneurs and enjoy sharing their journey in a high-frequency, interactive and fun environment. Read more about us at http://primevp.in
The AI Opportunity for Startups with Shripati Acharya, Pankaj Agarwal and Jerome Manuel
In this special podcast episode, we talk about the massive AI opportunity: how it has evolved since the introduction of the first GPT models, what the future looks like, and why we at Prime are super excited about it.
Our in-house Artificial Intelligence (AI) experts Shripati Acharya and Pankaj Agarwal (Investments @ PrimeVenturePartners) provide a comprehensive overview of the foundational technologies (LLMs, GPTs, tokens) driving AI, their impact on startups, and the business models that will create and reshape trillions of dollars' worth of economic value in the process.
If you are an entrepreneur today, this is a pot of gold as we unravel why $50B of venture money has been invested in AI startups globally since 2023. Did you know that since the launch of ChatGPT in 2022, $100B+ has been invested in AI startups?
Listen/watch the podcast to learn more about:
0:00 - The Evolution of Artificial Intelligence
5:36 - Understanding Machine Learning and AI
13:20 - Why $100B+ has been invested in AI since 2022!
27:17 - Which AI startups will get funded?
37:22 - Future of AI Applications and Workforce
Enjoyed the podcast? Please consider leaving a review on Apple Podcasts and subscribe wherever you are listening to this.
Follow Prime Venture Partners:
LinkedIn: https://www.linkedin.com/company/primevp/
Twitter: https://twitter.com/Primevp_in
This podcast is for you. Do let us know what you like about the podcast, what you don't like, the guests you'd like to have on the podcast and the topics you'd like us to cover in future episodes.
Please share your feedback here: https://primevp.in/podcastfeedback
Some say AI is the next electricity. Some say AI is the next internet. But, very interestingly, in the 1940s, during World War II, Alan Turing, the father of artificial intelligence, was working on the basics, the foundations, of this particular technology. Very interestingly, he was using it to decipher Nazi codes, which helped win the war. So that is what AI is all about, and that's where it started. We at Prime are super excited about the opportunity in AI, and today's podcast is all about that.
Jerome Manuel:Welcome to the Prime Venture Partners podcast. Today we have Shripati Acharya and Pankaj Agarwal, who are going to talk to us about the various opportunities in AI and why entrepreneurs should definitely build in it. So welcome to the podcast, gentlemen. Let's start off with the very basic stuff: I'm just going to throw out words and you tell us the definitions, so that even a common person can understand what AI is all about. So let's get started. What is AI? Everywhere, we're just hearing about AI, right? Anybody can start.
Shripati Acharya:So AI, in a very abstract fashion, is really the ability of machines to respond to our questions in a manner which is useful for us. But perhaps the right way to look at it, because you quoted the 1940s as a starting point, which is absolutely true, is that AI has had several inventions and incarnations, and then long periods of hibernation, before the current golden age, one could say, of AI. And talking of Turing, the test of AI really was the Turing test, which says that if you're interacting with a machine and you cannot figure out whether it is a machine or a human on the other side, it passes. That's really what the Turing test is.
Shripati Acharya:So I guess that's what AI really stands for: a machine responding to human prompts or human questions, and the level of the responses really indicates the quality of the AI.
Jerome Manuel:So it definitely sounds like a sci-fi movie, right? I think that's the basis of most of these films. So, now that you've told us what AI is, what is Gen AI?
Shripati Acharya:Gen AI, I think of it like this: if you give it a paragraph which has a lot of fill-in-the-blanks in it, and you start filling in those blanks with what the words ought to be, that is the generation part. So it's actually creating content to fill in, and what it is doing is that it is trained on a whole bunch of other material, and based on that it is answering your questions. And the interesting, I guess fascinating, thing here is that the answer it is giving is not something which it has particularly learned.
Shripati Acharya:So until now, when we think of search, it is all retrieval-based. If you're looking for something, saying, hey, I want to know a restaurant in Indiranagar, what is the best restaurant, where can I eat pizza, or what have you, that information is in some database somewhere, it's out on the web, and Google is extremely fast at figuring out and giving you exactly that piece of information, and the paragraph will actually be something that somebody else has written. So it's a retrieval-based search. But when you ask that question to a generative AI model, when you're asking it of ChatGPT or any of the other models, it is creating an answer which is not anywhere in what it was trained on, and hence it's actually generating an answer. That's why it's such a fascinating new epoch in AI.
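The "fill in the blanks" framing above can be sketched with a toy next-word model. This is only an illustrative sketch, not how real LLMs work (they use neural networks over tokens, not word counts), and the training corpus and function names here are made up. But it does show the key point: generation can produce word sequences that never appear verbatim in the training text.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words follow it in the training text."""
    follows = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows

def generate(follows, start, n_words, seed=0):
    """Repeatedly 'fill in the blank' after the last word generated."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words):
        choices = follows.get(out[-1])
        if not choices:
            break  # nothing ever followed this word in training
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = ("the best pizza in town . "
          "the best restaurant in town serves pizza . "
          "a restaurant in town serves pasta .")
model = train_bigrams(corpus)
print(generate(model, "the", 6))  # a sentence stitched word by word
```

Each generated word is drawn from what plausibly follows the previous one, so the output is fluent locally yet need not be a retrieved copy of any training sentence, which is the difference from retrieval-based search described above.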
Jerome Manuel:Got it. Pankaj, you were recently talking about Perplexity and how it has replaced Google search for you, and that is exactly the use case here. So can you tell us about Perplexity and how it is different from Google?
Pankaj Agarwal:Yeah, as Shripati said, Google has this crazy expertise in finding links that will most likely help you answer whatever you're searching for, whether you're looking for information or looking for a restaurant or something like that. But Gen AI generally, and Perplexity specifically, is structured around this philosophy that it will give you the answers, and not just list the links on which you can potentially find those answers. So, for example, I can go on Perplexity.
Pankaj Agarwal:In fact, I'm planning a Bali trip later in the year, and my starting point to plan that trip was Perplexity. I said, I have about nine days, these are the four areas where I would like to spend time, suggest some things that I should be doing there, or what would an itinerary look like. And in a super fast way it actually generated a decent plan that gave me a starting point, and I'm now building my trip on top of that, looking very similar to what it recommended. So yeah, that's the fundamental difference: rather than the retrieval-based search that Google enables, Perplexity is of course learning from all of that, but it is generating answers to your specific queries rather than giving you links for you to figure it out.
Jerome Manuel:So what is machine learning? People keep saying AI, ML. How do we understand what is AI and what is ML, if that can be addressed?
Shripati Acharya:So these are actually foundational technologies. AI is a very generic term. Machine learning is what leads to what we perceive as AI, which is the output of the model. Machine learning really is taking a lot of data and then learning something based on that data. That's what machine learning is, and there are very different ways of learning from that particular data, and that is the stuff which has been evolving.
Shripati Acharya:So all the models, all the various algorithms, etc. come under the broad rubric of machine learning, which is the underlying technology driving all of these things.
Jerome Manuel:Got it, got it. And when we speak of ML, LLM is also a very popular term. I keep hearing this. So what are LLMs?
Pankaj Agarwal:Yeah, so LLM stands for large language model, and these are massive models trained on crazy amounts of data to enable generative AI. And of course it's not just limited to text, although that has been the mainstream so far with the launch of GPT a couple of years ago and the stuff that has happened since, but it's shifting multimodal: you can generate not just text but images, videos, and so on and so forth. So LLMs are the core component of a generative AI system.
Pankaj Agarwal:These are foundational models trained on massive data; for example, GPT-4 was trained on 15 trillion tokens of data. And these LLMs have the ability to generate various kinds of outputs, as I said, whether text or video or images, based on a natural language interface. So, as a human, you could ask for anything, from the basic travel-plan search that I just talked about to writing code for whatever application you're developing. LLMs power that. So Shripati touched upon models in ML; LLMs are a kind of model, but they are so massive, so intelligent.
Jerome Manuel:They're able to do a lot of tasks in a very, very human-like way. Got it, got it. And so you spoke about GPTs a couple of times while you were explaining LLMs, and also tokens. So can you give us a brief about GPT? What is GPT, the full form of it, how it operates, and what are tokens in this ecosystem?
Pankaj Agarwal:So GPT is a kind of LLM. The whole research that enabled these large language models to come about is based on the transformer architecture, and there's a paper that was written in 2017, "Attention Is All You Need", if I remember the title right. And tokens are nothing but words or pieces of words; 15 trillion tokens would roughly translate to about 11 trillion words or something like that. So it's a fancy way to measure the quantity of data on which a model is trained, but a token is a word, or a piece of a word, which is used to train these models.
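To make the token idea concrete, here is a minimal sketch of a greedy longest-match sub-word tokenizer. Real GPT tokenizers use byte-pair encoding with a learned vocabulary of tens of thousands of entries; the tiny vocabulary and function here are purely illustrative assumptions, but they show why token counts usually exceed word counts.

```python
def tokenize(text, vocab):
    """Greedy longest-match sub-word tokenizer (a toy stand-in for BPE)."""
    tokens = []
    for word in text.lower().split():
        i = 0
        while i < len(word):
            # take the longest vocabulary entry matching at position i
            for j in range(len(word), i, -1):
                if word[i:j] in vocab:
                    tokens.append(word[i:j])
                    i = j
                    break
            else:
                tokens.append(word[i])  # unknown character: emit it alone
                i += 1
    return tokens

# made-up vocabulary of common word fragments
vocab = {"token", "ization", "izer", "s", "un", "like", "ly"}
print(tokenize("tokenization unlikely tokenizers", vocab))
# → ['token', 'ization', 'un', 'like', 'ly', 'token', 'izer', 's']
```

Three words become eight tokens here; with a realistic vocabulary a token averages roughly three-quarters of a word, which is why 15 trillion tokens correspond to fewer, not more, words.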
Jerome Manuel:Got it. So I'm assuming the current young folks, maybe somebody who's in their 10th or 11th grade, would ideally want to be somebody who's building AI in the future. That'll be the most lucrative career, right? So what leads to this path of building AI, or who kind of becomes this proficient AI builder and developer, if you can touch on that? Before we get to that, actually, I think Pankaj mentioned this paper.
Shripati Acharya:Right, attention is all you need, and it's very seminal in terms of like why this actually was a breakthrough in terms of how the machine learning systems were actually designed Right, and previously the work would be on okay, if you actually there was this. Back in 2009 or so, there was this database of images called ImageNet which was released by Stanford, and the idea here was that you'd have a bunch of cats and whatever. It is a bunch of images. You need to correctly recognize the images, and so the machine learning algorithms were tested against that. It's around like 14 15 million images in that database and that was like the benchmark for it. And then as the algorithms got better and better and better, they got better and better at actually recognizing those images versus a human image versus a human, and now, of course, they're way short past the accuracy of humans on that right.
Shripati Acharya:So that is something which only happened about 10 years ago, the ability of algorithms to do that, and one of the contributing, seminal articles was this one which Pankaj mentioned. And the meaning of attention here is really that what the words in a particular sentence mean is related not just to the words which come just before them, which is really how algorithms were previously trained, but to the context of the entire sentence. And one example here, which I was just reading recently and liked, is the sentence "the key to the cabinet is on the table".
Shripati Acharya:All right, so now "is on the table" actually refers to the key, which is standing somewhere behind in the sentence, so you need to be able to figure out which word is referring to what. And if you're able to take a large enough context, that's where the notion of a context window comes into the picture: in this context, the model is trying to correlate which word in the past each word is related to. And once you're able to do that properly, and you are able to say what is really related to what, the weight of what comes after shifts, so the model is more likely to say "on the table" versus something else. The system just becomes better at it, and that's what the attention piece is. The algorithms around that turned out to be remarkably, surprisingly good in terms of their performance on these benchmarks, and that's why all this attention went into it.
Shripati Acharya:Uh, going in on that, and then, with large language models, the idea that how about let's just train it on like open text which is available on the net and then finally actually let the prompts be, instead of being some structured input, be just free form text, is what has actually created this evolution which we are sitting on right now.
Shripati Acharya:Right, so it's just important to understand that this has been built on a lot of work which had already been done, but some of these innovations towards the end actually tipped it over the edge and made mass adoption possible, because we can relate to it so much: language is how we communicate with each other, and free-form language is how we are now communicating with these systems.
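The attention mechanism discussed above can be sketched in a few lines of numpy. This is the scaled dot-product attention from "Attention Is All You Need", but with randomly generated vectors standing in for the learned word representations, so the numbers are purely illustrative. The key property is that every word's output is a weighted mix over all words in the sentence, which is how "is on the table" can look back at "key".

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)   # how strongly each word attends to every other word
    # softmax over each row, shifted for numerical stability
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
n_words, d = 6, 4   # e.g. a 6-word sentence like "the key to the cabinet is"
Q = rng.normal(size=(n_words, d))   # query vectors (stand-ins for learned projections)
K = rng.normal(size=(n_words, d))   # key vectors
V = rng.normal(size=(n_words, d))   # value vectors
out, w = attention(Q, K, V)
print(w.shape, out.shape)   # w: one row of sentence-wide weights per word
```

Each row of `w` sums to 1 and spans the whole sentence, so distant words can contribute heavily; that sentence-wide weighting is exactly the "large enough context" point made above.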
Jerome Manuel:Right, okay, that's super interesting. Thanks for bringing that up. Would you like to address that particular question about who can eventually become this magnificent AI developer? Is it going to be enterprises largely leading the race, or, say, somebody who is in their 11th or 12th grade, how do they go on to become the best AI developer in the future?
Pankaj Agarwal:I think, see, that is the future. In fact, coincidentally, I was reading an article today about how the best of the engineering schools in the country, that is, in India, are reshaping their computer science programs to focus on machine learning and AI. See, it is a field that I feel is going to be relevant for the next 50 years. And to put some numbers around it: the core work of playing with models, training them, fine-tuning them and so on, the job that does that is the job of an ML engineer or ML researcher, and with both of these job profiles put together, we have less than 500,000 such people globally. Not just in India, globally. And you can imagine the kind of excitement and action.
Pankaj Agarwal:You know, that is happening already and is going to happen over the next few years. You are going to need a lot more people with those capabilities. So even if you are remotely building an expertise in AI and ML, you're going to be highly, highly valuable in the job market. So, computer science: I wish that every single 10th and 11th grader in the country today could take computer science and become an ML engineer; that would be fantastic. I think there would be more than enough for everybody to do.
Jerome Manuel:Got it, so I think that's great context setting. In the last couple of years, specifically the last two and a half years, something remarkable has happened. Would you like to start from there and tell us where it's gone and where it's possibly going?
Pankaj Agarwal:Yeah, I can start. I think we touched upon it already, but the last couple of years have demonstrated the massive abilities that LLMs have unlocked because of this natural language interface in which we can interact with them. They have shown remarkable success in generating human-like text on a variety of topics. And of course GPT-3 was sort of the inflection point in the whole journey, but in less than two years ChatGPT amassed about 200 million users, generating a revenue of $1.5 billion. It's the holy grail of a consumer tech company; none of this has happened before. Even the biggest of the giants today were not making any revenue for many years. So OpenAI has had that kind of impact, and it has obviously caused a lot of excitement, to the extent that in just 2023 there was about $50 billion of venture investment put into AI, and if you look at the scale of investment since the launch of GPT-3, it's more than $100 billion in just the last couple of years. So the fact is that it has become a mainstream conversation; consumers and enterprises alike are excited about it, will be affected by it, and you will see it getting adopted everywhere.
Pankaj Agarwal:I mean, tech companies and even non-tech companies will be deploying these models into their production use cases. Consumers will see a massive shift in the way they consume, create, and interact on the internet. And it's here to stay for as long as I can imagine, definitely in my lifetime. There are a lot of things to be excited about, and of course, with the shift come a lot of areas and opportunities for new entrants to make a place for themselves. That is the business we are in, and I'm definitely excited. So it is really a massive, massive revolution, I would say, as far as the technological advance is concerned.
Shripati Acharya:So just think about it from the perspective of the resources required to do a startup. If you were building a startup in the late 90s (we were doing Snapfish then), to store photos you actually needed a gigantic EMC machine sitting in your data center. We had our own machine in which we had our own storage; that was what was required to run a really seriously scaled website. Then comes the whole notion of cloud, and the entire cost of hosting and running your own website and your machines just disappears.
Shripati Acharya:So now it's like sipping from a straw: you just go ahead and do your storage and it goes magically somewhere in the cloud. You get more users, and hence you need more memory and more compute? It magically shows up. You need more instances of your web server to handle the increased scale? Those get spun up, and all of this is happening magically in the cloud. So the cost dramatically reduced, and hence we had an explosion in startups. But then, with AI and GPT, the cost of doing complex programming and creating complex software is again dramatically reducing. So previously, if you needed 1,000 people to do a large company, it shrunk down to 100, and I won't be surprised if we are now talking of companies of tens of people creating massive impact in terms of the kind of software which they are creating.
Shripati Acharya:That is what it is doing, because, in one sense, we have completely democratized creation with GPT. And creation includes not just creating software, which is everything (everything runs on software); it's also creating art, creating music, creating videos, creating marketing materials, synthesizing voice, all of these things.
Shripati Acharya:and now the scene, as a result, shifts on folks who are actually really understanding how to deliver the value proposition, because how to deliver that value proposition can now be done with a very small number of resources. To underline Pankaj's point, if you are a kid going to college or graduating, going to college or graduating and you have an understanding of of this entire discipline in computer science, of ai, of how to create products, you can do it with a very small number of people, very cost effectively and deliver meaningful impact.
Jerome Manuel:So it's a very exciting time, and the largest of companies in terms of market cap could possibly be tens of people in the future. That's massively interesting. And you did speak about hundreds of billions of dollars being invested in startups; could you tell us what kinds of startups have come up, maybe in broad categories, and what is definitely interesting and being seen in the ecosystem?
Pankaj Agarwal:The whole tech stack is getting revisited. We spoke about GPT and LLMs, but they are only one layer of the whole tech stack. Of course they're the most important layer; in fact that is the layer that has enabled the rest of it. But to make it useful for, let's say, businesses, for other applications to happen, there are other parts to the stack as well. There are various frameworks to look at it, but the way Shripati and I look at it, and have debated a lot about it, is that there is a model layer, there is a data layer, and then there is a deployment layer. The model layer we have spoken a lot about: GPT-4 is just one example.
Pankaj Agarwal:Of course that was the state of the art, but now there is a new state-of-the-art model getting released every week. There are right now about 1 million models hosted by Hugging Face, which is a repository and library for various models. Then there is the data layer. As Shripati touched upon earlier, you need this massive amount of data to train those base models, but it is unlikely that those base models by themselves will be relevant or useful for most use cases, for businesses or consumers alike. For example, while trip planning was one use case, and you could imagine the model being trained to do better and better on that front, you could also imagine sales as a use case. Of course these models do a decent job across a broad set of domains.
Pankaj Agarwal:But to make them reliable and accurate in a particular domain, there's a lot of work that needs to happen, and solving for this data layer, to fine-tune models and so on, will be needed. And the third is the deployment layer: once these models go into production, it creates other kinds of challenges that need to be solved for, because of the security vulnerabilities that can open up. You need to think differently here.
Pankaj Agarwal:I mean, unlike traditional software development, these models are non-deterministic in nature. What that means is that Shripati and I might ask the model the same question and it will throw different outputs. So all of these, of course, need to be figured out to make it useful. We can go deeper into all of these, but the point is, first of all, that the stack itself is getting revisited, and within each layer of the stack there are very specific problems or challenges that need to be solved to get past the excitement, make it useful, and let the transformation happen. And we're actually seeing excitement across all those layers, and interesting innovation and interesting startups coming up across all of them.
Jerome Manuel:Got it. So what I'm understanding is that there are three broad categories: model layer, data layer and deployment layer. So, for example, if I'm interested in building a startup, I pick one of these categories, identify problems, and then build and go. Is that what it is?
Pankaj Agarwal:It's a good starting point, but we think, and Shripati, would love to hear from you as well, that it's so dynamic, so much action is happening, that while you can find a starting point somewhere here, eventually large companies, the $100 billion companies we spoke about, will most likely operate at multiple of these layers. But yeah, would love to hear what you have to say there.
Shripati Acharya:Yeah, I agree with what Pankaj is saying here. Think of the three layers as an inverted pyramid.
Shripati Acharya:There are going to be a small number of models at the bottom of this inverted pyramid. On top of it will be the middle layer of tools and a number of other things that are happening: data abstraction, data transformation, and all these tools which are there. And on top of that will be the applications. Obviously, the applications will be the maximum in number. And the way to think about it is similar to what is happening in the cloud: at the bottom there are a small number of chip providers supplying the cloud providers with the machines, the hardware; on top of them are the cloud providers, and we know there are only three or four of them; but the applications number in the thousands and thousands.
Shripati Acharya:So that's where the action and, at least from a vc lens, the investability and, from a startup lens, the opportunity will will be on the application layer, but, at the same time, just the layer below that, which is what we call a dev tools layer or the tooling layer, will also have a lot of opportunity and really depends on the DNA of the founders, on which area they want to operate in.
Shripati Acharya:Area they want to operate in, principally because with a native AI, look, you're fundamentally going to rethink the way you're building the product. This is not dissimilar to how we were thought about mobile first companies. So 2007-8, whenever the iPhone first came out, right, it led to this whole generation of companies which were mobile first. All the stuff which was first, you know, on the desktop, then a client server, then, uh, then became a cloud-based, then had to become mobile first and it required thinking and a new way of thinking. There's a lot of companies which which you walk from there and a lot of services you use today are all mobile ways, right, uber and swiggy and all these, uh, all Airbnb, et cetera fundamentally leverage your phone.
Prime Venture Partners:So I feel that that's what's going to happen.
Shripati Acharya:So a lot of the current software stack is going to get rebuilt. So you can be operating there, in terms of rebuilding the software stack, or on the applications which are going to sit on top of that software stack, which didn't exist at all before and which you are now able to have.
Pankaj Agarwal:For example, we talk about Instacart, which is like a $10 billion company now, but the previous avatar of Instacart was this company called Webvan, which went bust during the dot-com bubble.
Pankaj Agarwal:It is exactly the same proposition, right, but why did it not work that time?
Pankaj Agarwal:Fundamental difference is the underlying tech and the kind of innovation that happened obviously changed the consumer behavior.
Pankaj Agarwal:That brought about the demand as well as the supply, and I'll come to that in a moment. But also, the advent of mobile and the cloud and everything reshaped the cost structure to make it viable. Webvan was, by design, a desktop app, and while it could cater to the demand side, to really get the supply sorted you needed the delivery partner to have a mobile app, to have the location, to have the allocation algorithms which enable all that. And there are multiple such examples. Of course there could be fundamentally new applications that can be thought of in an AI-native way, but a good thought exercise for some entrepreneurs could be this: all the information is out there on the internet, so which companies and business models just did not work before, which could be redone with AI and made viable, from so many angles? That's very interesting.
Jerome Manuel:See right, there are no bad ideas.
Shripati Acharya:It's only bad timing, right? So many of the ideas which were probably not viable earlier could become viable now, just given the new landscape. That's a very interesting insight for entrepreneurs, right?
Jerome Manuel:So just shifting gears a bit: you've listened to a bunch of pitches, and I'm sure in the past two years you've heard a lot of AI pitches. What are some things you're hearing, and what are founders building? What is not exciting, and what is definitely exciting and helps you move the needle? I'd love to hear from both of you.
Shripati Acharya:So you're asking what it is that we are excited about today in Gen AI. Very broadly, taking a slice at it, there are direct B2C applications and there are B2B kinds of things. The way we look at it is that, on the B2C side, more than building the applications, the acquisition of the customer has always been hard, is hard, and will continue to be hard. Getting the attention of the consumer, getting them to be users of your product, engaging them, retaining them, etc. are all tough problems, and hence, of course, the companies that are able to do that are the ones which command high valuations. So while we can create, and we are seeing, a lot of very interesting apps on the B2C side.
Shripati Acharya:What we cannot forget is that it's not the technology: without a deep understanding of how to do cost-effective distribution, it's not going to be viable, in our opinion. So when we are looking at B2C and consumer Gen AI companies, the ones which we find most interesting are the ones which have a non-obvious insight into how they are going to get distribution at scale. But when you look at B2B, it's very interesting, because the spend on enterprise is gargantuan: something like $5 trillion, that is 5,000 billion, is the annual enterprise spend which occurs, and obviously a slice of that is going to start moving into AI, slowly at first and then at scale, as is said of most seminal technologies.
Shripati Acharya:We overestimate the impact in the short term and underestimate in the long term, and it is something like that. You might think that the enterprise transformation is going to occur immediately. It won't, but over a longer term it sure will and the impact will be much larger. The way we are seeing in our conversations with a lot of CIOs and CEOs in enterprise spaces that we need to understand how their issues with AI are.
Prime Venture Partners:They all want to do something in AI.
Shripati Acharya:They understand it's a very transformative technology and hence they need to do something. However, they have a number of concerns and care about, One of which is that they at this point of time, landscape is changing very rapidly.
Shripati Acharya:Pankaj mentioned a million models in a hugging phase and more are getting added they're getting you know you talked about LLMs they're getting SLMs, small language models, smaller and smaller models which more or less actually for specific kind of problems, actually behave as good, and sometimes better than the larger models, right? So there are all these model proliferation going on. There's open source, there's closed source, and so on and so forth.
Shripati Acharya:So enterprises are justifiably taking a wait-and-see approach on where they're going to put their bets. So they are not going to put their bets on one model, right? Which has a certain implication, which means that they are unlikely to start hosting their own models and start developing applications. They're much more comfortable, probably, with a architecture which enables them to switch and play with different models. Right, that's one. The second thing is that they are very concerned about data privacy, specifically exfiltration of data, which is data from the enterprise going out of the enterprise. So if you are a startup and you actually say, hey look, I'm going to help you create this new model, we'll fine-tune a model. Fine-tune is the process by which the model itself and the weights of the data in that model are changed with respect to the data which is fed into it. Right? Actually, you're modifying the base model itself, right? That's fine-tuning. It's something which enterprises will fundamentally be uncomfortable with, because you're sending the data out of the enterprise, which is not something which is going to work.
Shripati Acharya:If they just try doing it within their own cloud, it actually is going to be a lot easier to convince them to try out the product. The third thing they are concerned about is hallucinations. Right, Hallucination is just a fancy word for saying incorrect output and somewhat random output. It's as if, you know, the model got drunk and started saying stuff which it might regret, right?
Shripati Acharya:So obviously from that standpoint, if you have a customer facing AI product, enterprises will be very careful with deploying such a thing, because obviously customer facing thing cannot have a lot of errors in it. What that? Means is that the things which they are going to be most comfortable with, where they are eager to deploy, is stuff in which there is a human in them. So that's why we see all these co-pilots and such taking a lot of adoption.
Pankaj Agarwal:So in a software co-pilot.
Shripati Acharya:It's an assistant, it's sort of like an autocomplete you're writing code, it fills it up, it helps you write one simple function very quickly, etc. Etc.
Shripati Acharya:But no problem if it makes a mistake you as a programmer, are going to fix it, or you are a creative, you're writing a blog or you're creating a picture. Something comes in. You like it, you don't like it. You change it a little bit, you take some help, add something in. So human in the loop right, it's there, but at the same time they do know that they need to do something. There's a lot of pressure from their own boards to say, well, what are we doing about Genia? Just imagine the CIO. This is a meeting conversation which is happening in every board. What are we doing? We obviously cannot be sitting ducks and just waiting there.
Shripati Acharya:As a result, there are tons and tons of pilots and we think that the next 12 months or so at least 2024, will be a year of pilots for enterprise, but production deployments will come, and they'll come in 2025, 2026 and so forth. So in this entire milieu, we feel that the folks who are actually actively experimenting within the enterprise are the developers. So what Pankaj and I are really looking to and talking to folks are what we really like are solutions which are catering to developers and helping them create these solutions, which are now a very fundamentally different way of creating it, because, as Pankaj mentioned earlier, the output is not deterministic. Previously, if you write a particular piece of code and you gave a certain input, you always got the same output. That's what determinism is. But non-determinism is a little bit like Bangalore traffic you can go from here to Indira Nagar in 15 minutes at midnight to like two hours at 7 pm and raining. It's become like that.
Shripati Acharya:So the same input gives all kinds of different outputs. So how do you now write applications in that case? Test applications in that scenario, etc. Will all have to change. So we feel that the opportunity on the enterprise side is large. The consumer side is also large. We have the distribution challenge. Enterprise side also has distribution challenges to a smaller extent. But the adoption etc has to be really thought through in terms of what is the insertion point for the for the particular product insertion points.
Pankaj Agarwal:The point has to be, as Tripathi said, developers, right, and the reason, enterprises, as he said, the key set of people or personas, enterprises that are excited about it are developers, right, whether it is from their own interest or whether the CEO is kind of throwing the problem at them. So we, you know there's like a massive opportunity in the whole developer tooling layer to really help them navigate through this. And, as I said, by the way, right, all this fine tuning, new model development. You know, doing some of this advanced stuff, less than 500 000 people globally can do that right, but you have 30 million of these developers that would be doing like a large uh portion of application layer development and stuff like that, and you know how do you solve for them to be able to develop an ai, is, is a is a very massive opportunity, okay.
Jerome Manuel:So this was super informative and great right like. So what are some parting thoughts both of you have?
Shripati Acharya:so maybe I will say a couple of pitfalls I see in some of the, in many of the pitches which we see. Perhaps I can share that to avoid here is think about what is the level of value that you're doing on top of the underlying models, when you're actually creating the product of the application. And what we want to avoid as an entrepreneur is making too thin a layer, because obviously a layer on top of GPT can be very useful but at the same time as GPT itself adds more features into those layers, can get absorbed into it and suddenly the startup doesn't become relevant.
Shripati Acharya:That's one part.
Shripati Acharya:The second thing is that as a founder, you have to also think what will the landscape look five years from now?
Shripati Acharya:And five years from now we can be sitting at GP, gpt7 or gpt8, I don't know however quickly the models are released and we can safely expect them to be at least 10x better than today's models. Better, just define how better is better in terms of accuracy, speed, cost, ability to and size and being on the edge and so on and so forth. So whatever product is being created and the value which is being added on top of the foundational models has to be relevant even then. So this thought process is very important because otherwise what is getting good adoption today might just disappear 12 to 18 months from now when the next version of GPT comes along. So the way to think about it is that ideally, the product or service is such which actually benefits from a better foundation model, and the better the foundation model, the more exciting that service becomes then, something which actually gets subsumed by that or becomes less relevant. So I would say that's probably my top thought.
Jerome Manuel:Got it. So for founders, definitely think five years from now, so that you're staying relevant to the models then and not necessarily build like the right now.
Shripati Acharya:Just imagine that the current model was 10x better. How good or how important what you're building would be in that scenario. And if the answer is it becomes even more relevant than it is today, then you're on the right path.
Jerome Manuel:Got it, got it. I think that's super valuable for founders. Yeah, pankaj, you have something to add.
Pankaj Agarwal:Yeah, I think I'd like to add that you know, there's always there's. You know, devin Demo came about, right, which was like AI software developer developers going to be relevant, right, you hear that, are these knowledge workers going to be relevant? Right, but the way I think about it is really and I think Benedict Evans kind of got me introduced to that but there is this whole concept of Jevons paradox, right. So it's an economic theory which states that as technology reduces costs and increases efficiency, the overall demand, you know, goes up because of the fall in cost per unit, right, and I think the best way to look at it is, you know, I don't know how familiar our audience would be, but in like late 70s, like precursor to Excel, was VisiCal, right, which was like a spreadsheet software, and you know the business model that you know.
Pankaj Agarwal:You know an I banker is able to develop today on an excel in a day. It used to take an army of finance, finance and accounting folks in a month to be able to develop that, right, and when excel came about, thinking that, okay, accountants will be made irrelevant, it will be replaced. But I mean, I found this interesting chart and I think I don't know if you can put it up in a show notes, but the demand of accountants has never been higher than since 1970.
Pankaj Agarwal:It has gone up like crazy, right. So that's the thing with AI as well. Of course, it will make a lot of things more efficient. I do think that knowledge-based work will be made more efficient. The cost per unit will be drastically reduced, but that will result in massive increase in demand, right? You have, as I, as I said, 30 million software developers now, if you can make them more productive, more efficient, the use cases which were previously cost prohibitive to be digitized, will come within the purview and you'll be able to tackle that and you know kind of address that with this increase, this thing.
Pankaj Agarwal:So I don't think anybody is getting irrelevant anymore. Ai, like any other technology, will likely have both the sides to it. We have to hope it's, you know, kind of the positives of it outweighs the negative, and we are very excited that you know it's. It's gonna be the way, um, and yeah, uh, nothing to be scared about in in my view. Right, you just have to, I think, as a professional, um, as a mid-career person or wherever you are in your journey, the best you can do, or you should do, is make yourself or up, kill yourself, um, keep yourself relevant as ai takes over. I don't think anybody's getting replaced as such.
Jerome Manuel:So my big takeaway is learn, unlearn and relearn Right. And AI is not replacing humans. So do use this podcast as informative material so that you kind of upscale a little bit. And so the opportunity in AI is massive. So that's the big takeaway. So we would look forward to hearing any pitches from any of the opportunities we are excited about AI at Prime. So do reach out. And yeah, that's about it today. Thank you, shripati, thank you Pankaj, it was a great session. Yeah, thanks, folks.
Prime Venture Partners:Dear listeners, thank you for listening to this episode of the podcast. Subscribe now on your favorite podcast app for free. Thank you. You get your podcasts? Then hit subscribe and if you have enjoyed the show, we would be really grateful if you leave us a review on Apple podcast. To read the full transcript, find the link in the show notes.