While machine learning has been around for decades, the recent advancements in large language models and the launch of ChatGPT have created a Cambrian explosion of applications and investor interest. Artificial intelligence has become the enabling technology of our time, and it’s impacting every industry we invest in at Greylock.

At the same time, financial services represents 25% of the global economy, and has perhaps the most to gain from better prediction models. Even slight improvements in forecasting default rates on a loan or cash flows of a business can have a dramatic economic impact. But for the most part, fintech has been left out of the conversation, partly because there is low margin for error in a regulated space.

Encouragingly, this is starting to change. Recently, Bloomberg announced BloombergGPT, a large language model trained on cleaned financial data. Ramp, a fintech startup that is noted for being one of the fastest-growing companies ever to hit $100M in ARR, is among the fintech frontrunners using automation and machine learning to help customers with expense management, payments, reporting, and more. The upside of automated, intelligent, personalized, and more secure financial services with the help of AI is in reach.

So what does this mean for fintech entrepreneurs?

I asked Ramp CEO and co-founder Eric Glyman what type of company he would start if he wasn’t running Ramp. He said he’d look for manual workflows with lots of data, and find ways to automate them and own the transaction. For starters, Glyman says, accounting is fundamentally pattern-based, involves networks of proprietary data, and requires a significant amount of repetitive manual analysis.

“Most AI companies today are focused on productivity in the workplace – anywhere you have a lot of knowledge work fundamentally based in data, where by default almost all of it is digital,” says Glyman. “If you can start to get involved in those workflows in both the movement of funds, and the reduction of work, and the augmentation of all the folks involved, there are a lot of interesting things that we could do. If there is an open field that allows innovation that has happened in the rest of the world, it should no longer miss financial services.”

Glyman joined me and my fellow Greylock partner Reid Hoffman for a wide-ranging discussion on how AI is impacting every profession today – and how there is considerable room for it to impact financial services in the future.

This conversation was recorded in front of a live audience of founders, investors, developers, and technologists. You can watch the video or listen to the interview at the links below, or wherever you get your podcasts.


 

EPISODE TRANSCRIPT

Seth Rosenberg:

Hey, everyone. Welcome. Thank you so much for taking some time out of your Friday evening to spend some time together.

The joke is that every fintech investor is now an AI investor. But, you know, obviously at Greylock we’re investing in fintech and AI, and we’re also investing in the New York community. And so I thought this would be a good opportunity to bring everyone together.

The [AI] space is moving so quickly, right? And we’re lucky to have people like Reid and Eric and Kevin and everyone else in this room who are kind of in the middle of this.

So with that, I don’t think either of you really needs an intro. Obviously, this is Reid Hoffman and Eric Glyman, co-founder and CEO of Ramp.

Let’s get into it.

So, Reid, I wanted to start by having you introduce the topic. You’ve been investing in AI for a long time, but there seems to be this explosion of investor interest and applications over the last six months. So what’s going on?

Reid Hoffman:

So, all right, look, the macro frame is that what’s really going on here is the application of scale compute to create interesting computational artifacts.

We began to see that in the earliest stages with things like AlphaGo and AlphaZero and the Go results – by the way, the protein folding stuff comes more out of that lineage than it does from the large language models. And then people – OpenAI most specifically, but also Google Brain and some of the other efforts, including companies we’ve invested in – started showing that, out of training on something like 1 to 2 trillion tokens of language data, you could make things that create an amazing kind of artifact. Not just the kind of, “Oh, look, write the Declaration of Independence as a sonnet,” or, “Translate my poem into Chinese” – which of course it does – but also coding, also legal, also medical, also getting a five on the AP Biology exam, and all the rest. And this is the path that we’re on.

And so that’s why, generally speaking, it sits under the term artificial intelligence: most of these amazing things are things we would previously have looked at as cognitive achievements. But the prediction is not just that there will continue to be an amazing set of things coming from AI.

Another of Seth’s and my partners, Saam Motamedi, and I wrote an article last fall that said every professional will have a copilot that is between useful and essential within two to five years. And we define a professional as someone who processes information and does something with it. That’s everybody in this room, plus doctors, plus small business owners, plus lawyers, plus developers, etcetera. And that’s just from the large language models.

Now, what’s happening there isn’t limited to any one field – finance and all the rest included. That’s true of every professional; every industry that hires professionals can think about what that transformation looks like.

But in addition to amazing things from large language models, I think we will see other techniques using this kind of scale compute to create things, and we’ll see melds of them in various ways. You see some of that with Bing chat: “Okay, we’ve got scale compute and search, which has truth and identity and a bunch of other stuff, along with large language models.”

And that’s what revolutionizes the search space. Part of it was when Kevin, who’s here, and others in AI were looking at this and going, “Oh…” – because we saw all this in August of last year. It’s always easy to predict the future when you’re seeing it with your own hands.

And we said, “Okay, let’s get ready and start building stuff.”

And that’s what’s going on across AI. And so it’s extremely substantive. And what’s more, we’re just dipping our toes into this.

Like, this is not, “Oh, it’s a hype moment.” It’s the big thing. This is like saying back in 1992, 1993, “Oh yeah, the Internet, it’s really hyped right now.”

Anyway, that’s AI in a nutshell.

SR:

And you know, Reid is always an optimist, but in the last 12 months there have been many partner meetings at Greylock where he’s kind of rung the bell of “Pay attention to this! This is meaningful and it’s right around the corner.” And I think we’re all seeing that happen.

RH:

And then ChatGPT came out.

“What’s going on across AI is extremely substantive. And we’re just dipping our toes into it.”

SR:

Yes. Yeah, exactly.

So, Eric, maybe just give us an overview of what Ramp does and how everything we just described is affecting your business and fintech more broadly?

Eric Glyman:

Absolutely. And wonderful to be here.

Ramp is a finance automation platform. We’re focused on functionality, workflow, and productivity related to the movement of money. We’re known for operating the fastest-growing corporate card in the US, bill payment software, expense management, accounting automation, and the like.

And all of our products are designed with the intent of helping customers spend less money – most financial products are designed to get you to spend the same or more – and spend less time. So really what we’re focused on is the workflows companies need to run in order to disburse funds, close their books, and everything in between.

I think the effect of AI on the business has been, frankly, profound, from the founding year of the company – where there were simple things in expense management, like matching the text of a receipt to the proper transaction, which is very simple machine learning – to today.

When you think about accounting, ultimately these are generally accepted accounting principles – rules, fundamentally, of how transactions should be categorized and coded. So you think about what the patterns are and how you can learn from the 10,000-plus businesses and hundreds of thousands of folks using the product – automating everything from the keeping of records to risk management and assessment to even go-to-market. Even the way Ramp has been able to grow so efficiently has come down to embracing AI in our sales motion, lead routing, mapping, and the like.

And so I’m happy to go deep, but there are probably ten core workstreams throughout the business that are leveraging this in some way, from pattern matching to generative use cases.
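To make that pattern-matching idea concrete, here is a minimal sketch – not Ramp’s actual system; every name, field, and threshold below is hypothetical – of how historical coding decisions can drive category suggestions for new transactions, and how a receipt can be matched to a charge by amount and date:

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date

@dataclass
class Transaction:
    merchant: str
    amount: float
    posted: date
    category: str | None = None  # general-ledger category, once a human has coded it

def learn_merchant_categories(history: list[Transaction]) -> dict[str, str]:
    """Learn the most common category per merchant from already-coded transactions."""
    votes: dict[str, Counter] = {}
    for txn in history:
        if txn.category:
            votes.setdefault(txn.merchant, Counter())[txn.category] += 1
    return {merchant: counts.most_common(1)[0][0] for merchant, counts in votes.items()}

def suggest_category(txn: Transaction, merchant_map: dict[str, str],
                     default: str = "Needs review") -> str:
    """Suggest a category for a new transaction; a human confirms or corrects it."""
    return merchant_map.get(txn.merchant, default)

def match_receipt(receipt_amount: float, receipt_date: date,
                  candidates: list[Transaction], tolerance: float = 0.01) -> Transaction | None:
    """Match a receipt to the closest-dated transaction with (nearly) the same amount."""
    same_amount = [t for t in candidates if abs(t.amount - receipt_amount) <= tolerance]
    if not same_amount:
        return None
    return min(same_amount, key=lambda t: abs((t.posted - receipt_date).days))

# Usage: learn from coded history, then suggest codes for new spend.
history = [
    Transaction("Acme Cloud", 120.00, date(2023, 1, 5), "Software"),
    Transaction("Acme Cloud", 120.00, date(2023, 2, 5), "Software"),
    Transaction("Joe's Diner", 42.50, date(2023, 2, 7), "Meals"),
]
merchant_map = learn_merchant_categories(history)
print(suggest_category(Transaction("Acme Cloud", 120.00, date(2023, 3, 5)), merchant_map))  # Software
print(match_receipt(42.50, date(2023, 2, 8), history).merchant)  # Joe's Diner
```

In production the “most common category per merchant” rule would presumably be replaced by a trained model or a prompted LLM, with the human-review loop kept in place, but the shape of the workflow – learn from past coding, suggest, let a person confirm – is the same.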

SR:

Yeah, that makes sense. So one interesting topic is – and I know Sam Altman talks a lot about this – that the original narrative of AI was that it was going to automate more basic, solvable tasks like accounting or law. The original thesis was that the last things to be automated would be poets and photographers and graphic designers. It turns out that’s actually been the reverse, at least with this first wave of generative AI, where creativity gives the models a little bit of leeway for making mistakes.

And, at least in their current state, these generative models don’t deliver 100% accuracy. But in fields like health care or financial services, 100% accuracy is required in many cases. So what’s your take on how fields like finance are going to be able to leverage these models?

RH:

Well, the first premise of the question is not 100% correct, because there is no 100% human accuracy either. I presume most of the people in this room know that if you were asked to have an average radiologist read your X-ray film or a trained AI, you should take the trained AI over the average radiologist. If it’s the best radiologist, or one of the best, great, take that – and better yet, take the two together. So there is no such thing as human infallibility anywhere, including in finance.

EG:

Yeah, that’s definitely true.

RH:

So part of what we have to do is figure out… Now, part of the thing is we know how to hold human systems accountable, what accountability looks like in law, and what the error rate among humans is. What’s acceptable when it’s person plus machine, or when it’s machine-driven – those will be the relevant variables.

Obviously, we worry about things like, for example, AI applied to credit decisioning. [It’s a question of] do you have systematically biased data? Now, one of the benefits is to say, well, that becomes a scientific problem, which you can then work on and fix – as opposed to the human judgment problem, where you say, “Well, we have that human judgment problem in how we’re allocating credit scores or creditworthiness or parole or everything else today.”

The machine system may start out looking like it’s going to systematize the old human one, but we can fix it so that we move past that, especially with person plus machine.

And that’s, I think, one of the things you’re going to see with AI across a number of fields, including in finance and some other areas. I did publish a book last week called Impromptu with GPT-4 as a co-author, where the central thesis was it’s not artificial intelligence, it’s augmentation intelligence. And obviously in various ways it is also artificial intelligence. But it’s a look at how you amplify people like the copilot thing that I was saying earlier.

As an exemplar of that – now, I did deploy my own personal team to help with this – we started writing the book in January, and we got it out last week. There are physical copies now; they’re more print-on-demand from Amazon, but it’s out.

So that’s the proof in the pudding: think about how this augments human activity. Again, augmentation, amplification – that’s the pattern to be thinking about as you go forward. And of course, there’s a whole bunch of places this touches when you begin to think about fintech products and how you operate as a fintech company.

SR:

Yeah, that makes sense. And Eric, I think you have a pretty similar point of view on this topic. Anything you’d add in practical terms, like how you build guardrails when you’re dealing with sensitive information and AI?

EG:

Yeah, on the fintech side of it: it’s a unique industry because it’s such a large sector of the economy, and while I think most of the world would agree it’s accelerating – we went from no phones to flip phones to iPhones – people still have the same credit cards in their wallets and the same bank accounts. And I think folks in this room know the fintech industry blew open over the past five years, and software-first business models that can also move funds and get involved in workflows have started to spring up in a major way.

So there are a lot of opportunities around it. When I think about guardrails and patterns, I agree with Reid that there’s never been perfection – underwriting is best efforts, and with fraud you use patterns to understand and prevent it. Perfection is actually quite rare. I do think the copilot model is a powerful one. For us, one of our core customers is accountants, who need to go pattern-matching anyway; taking those learnings and bringing them back to augment and speed them up is a very natural pattern.

But most customers – most employees – are not accountants and don’t know the rules, so being able to simplify matters. Just as in coding, where over the past seven months the primary language has arguably started shifting from Python to – maybe in a year or two – English, I think those are the same patterns you’ll find in workplace productivity, expense management, accounting, vendor intelligence, and the like.

So it’s really about thinking through where accuracy needs to be high. With risk and underwriting, you may not want it making all the decisions – you want to see how it does in back tests, and there are ways you can get at it heuristically – whereas if you have operational loops with people in them, that copilot design pattern has been an emerging first way to deal with it.

“Fintech blew open over the past five years. Software-first orientation models that can move funds and get involved in workflows have started to spring up in a major way.”

RH:

And actually, not to put him on the spot, but I think it was Kevin Scott, the CTO of Microsoft, who’s here, who pointed out to me: what’s the largest programming language in the world? English.

EG:

Yeah.

SR:

And Reid, I just want to double down on the point you made: AI is not perfect, but neither are humans, and in many cases AI actually performs better.

How do you think we deal with the political, social, and regulatory mindset shift? One example is autonomous vehicles: autonomous vehicles are safer than humans, yet we’re not necessarily socially comfortable with autonomous vehicles making those mistakes.

RH:

So part of the reason why I would say that we need to own it as a society is every year that we delay shifting to basically autonomous vehicles on the roads, we’re killing 40,000 people. Blood’s on our hands.

So the important thing is to say those lives are actually worth saving. It won’t be perfect – you still won’t get below a thousand or something, I don’t know what the number is, it’ll be a lot – but you’ll save a ton of lives. And so you say, look, we have to solve these accountability issues.

We have to solve these kinds of things. And by the way, it’s very similar when you look at the press and government: there’s all this buzz about, “Oh my God, AI is going to have job impacts and we should slow it down,” and all the rest. And obviously there’s all this discussion about competition with China and so forth, which is important.

But here’s a way of making it tangible: looking at current GPT-4, there’s line of sight to an AI doctor and an AI tutor on every cell phone, deliverable cheaply enough that anyone who has a smartphone can have access. Every month you delay that, think about what the human cost of that delay is, right? That’s what I say when I talk to government folks: “Think about it this way!”

I’m not saying ignore data and large companies and ecosystems – those aren’t irrelevant variables. But one of the classic ways that democracy fails, à la climate and everything else, is: what about children and grandchildren and all these other people? How do we help them? And this is part of it. It literally is buildable today with the technology. It’s just a question of how soon and how we get there.

SR:

Yeah, and doubling down on that kind of optimism.

RH:

I’m happy to be an optimist. That’s not optimism. That’s truth.

SR:

I mean, yeah, I don’t think you have to convince the people in the room.

There are a bunch of founders in this room, and also people who want to start companies. So maybe this question starts with Eric. Let’s say you hadn’t started Ramp, or weren’t the current CEO of Ramp. You see this Cambrian explosion in AI and the advancements Reid’s describing. What are the most interesting opportunities? What type of startup would you build?

EG:

Yeah, I think the fact that many of the companies behind AI are focused on productivity in the workplace in some sense reveals where many of the interesting use cases are. Put differently: when you look at knowledge work that is fundamentally based in data, where by default almost all of it is digital, if you can start to get involved in those workflows – in the movement of funds, the reduction of work, and the augmentation of those folks – there are a lot of interesting things you can do.

And so, look, I think one of the most interesting areas, frankly, is accounting. It is fundamentally pattern-based. There’s a large set of folks who need to look at repeated data, both within a company and across companies, where you can learn the patterns. So there are data networks, there’s proprietary data, there are some network effects, there are clear patterns, and there’s some personalization that needs to take place. When you start to combine those, I think accounting is a very interesting space.

I do think in fintech particularly there’s the ability to have better risk assessment and fraud fighting – as well as, probably, a great opportunity for fraudsters too, though I wouldn’t recommend that as a venture-funded business. But, you know, when you think about a lot of this…

RH:

At least not a U.S. domiciled one, but yes.

EG:

It turns out these are big businesses, just not based here.

RH:

Russian VCs. Yes.

EG:

There are a lot of opportunities when you can generate someone’s look, face, and sound, and predict information about them. So I think both sides of that are going to see very significant opportunities.

And second, even outside of AI, core financial service products were once locked up – if you were a bank, you could store money and move money. Now those capabilities are open to anyone willing to be thoughtful about regulations and their effects. It’s much more competitive, and I think in a good way: an open field that allows the innovation that has happened in the rest of the world should no longer miss financial services.

And so I think it’s quite exciting. Yeah.

SR:

Yeah, and Reid, along these lines, there’s a debate on whether startups or incumbents are better positioned to take advantage of these advancements in large models. What’s your framework in terms of which incumbents are best positioned to take advantage of this wave, and which opportunities are more available to new entrants?

“An open field that allows innovation that has happened in the rest of the world should no longer miss financial services.”

RH:

Well, the short answer is there’s such a tsunami of stuff here, there’s massive opportunity all around. The usual false-dichotomy question is: is it only going to be Microsoft, OpenAI, and Google, and too bad for everyone else? No. Are there things these companies are going to dominate and do? Yes, but there’s tons of room for other things. Among them: last year, with Mustafa Suleyman – former co-founder of DeepMind, venture partner at Greylock – we co-founded Inflection.

Unfortunately, I won’t be able to talk much about Inflection; we’ll talk more about it in a month, maybe when we come back through town. But that’s a startup opportunity, so we’re putting our money where our mouth is. And we obviously have a variety of great companies – Adept with David, Cresta, Snorkel, etcetera. We have a whole stack of AI companies that we’ve invested in and will keep investing in.

So, more broadly, I think there’s going to be a combination of two broad trends.
One broad trend is the mega models, which are super valuable and important, and there are a bunch of ways that will turn out to be the case. If you take interesting areas like medicine, law, or coding and you say, “Okay, we’re going to spend $500 million to make the larger, better model of this and it’s going to be 20% better” – well, in those areas you’re going to do it. We live in an era of Internet distribution, and the 20% better product will naturally have, if nothing else, some network effects, because everyone can get it through the Internet.

And so the people doing the really big models will be small in number – and I don’t think it will only be Microsoft, Google, and OpenAI. I think there will be, you know, one to five others.

SR:

OpenAI is a good example of that. You wouldn’t have predicted that five years ago. Or, you actually would have, because you did invest in it, but most people wouldn’t have.

RH:

Yes, although the investment was from my foundation, because it was like: no, actually, this project around AGI is a very good thing. We had a discussion around the partnership table of “Should we do this?” And we were like, “Okay – no revenue plan, no go-to-market plan. We have a responsibility to our LPs.” So putting something into it came from the foundation instead.

So, you know, in retrospect, if you could have seen where it is now, we would have said, “How much of it can we take?”

But, you know, the easy part of investing is always the ten-year look back.

But anyway, there will be this large channel, and then there will also be a whole bunch of smaller models for all kinds of reasons – specially tuned for something in finance that does a specific kind of accounting or fraud detection, or something that may run on your phone. And there will be a whole bunch of those.

So, for example, GPT-3 was very expensive to do its last compute run. About a month ago, I saw something that was – and this is a swag – maybe 80% of GPT-3, and it cost $3 million to make. That’s part of this other channel of stuff that will be happening – images, text, other kinds of things. And both of those channels will have great economic opportunities in them.

And the mistake people make a little too often is going, “Well, I just have an AI tech…” Actually, there’s still a lot to figure out: What’s your go-to-market? What’s your business model? How do you competitively position? How do you create a moat – is the moat a network effect, or is it something else? Those will broadly still play into how you think about how the tech disrupts things.

So the short answer is it’s not only large incumbents and it’s not only startups – it’s massively both.

SR:

Yeah, that makes a lot of sense.

And Eric, on this topic of large language models versus more fine-tuned models, I’m curious how it relates to Ramp. Obviously you have a huge workflow opportunity in just applying some of these large language models to your existing product – better underwriting, etcetera. Are there also opportunities for you to invest in fine-tuned models and some of your own AI?

EG:

I think it’s a super interesting and prescient question for a lot of practitioners and people building startups: do you bet on (a) the mega models, or (b) more fine-tuned, in-house models? And frankly, I’d be curious for your view and opinion on it too. It seems to me…

RH:

The very unhelpful general answer is: it depends. It’s the more detailed question that gets you to maybe X or maybe Y, you know.

EG:

Fair enough.

RH:

Anyway, so I didn’t mean to interrupt your answer.

EG:

Totally, we’ll dig in, but I think that’s right.

Ultimately, what do I think? First, for the vast majority of use cases, I wouldn’t bet against what’s happening in the mega models themselves, and I’d use them to power more general experiences.

But as folks building businesses, there are a variety of other things I would be thinking about: is there proprietary data that’s involved in the workflow of your business? Is there a data effect? Is there some level of personalization? And as you run it through, does the experience get better for every customer?

And back to some of the themes we’ve been touching on: there may be broad-based risk and underwriting use cases where, once you start getting data, you can apply it and turn it on. But there may also be smaller use cases and loops – from tagging transactions to understanding more about specific merchants, learning from the data set, and sharing that back out – that you can tune your own model on, and I think that’s where tuned models make sense.

One of the other questions, specifically for finance, is that most of these mega models have been trained primarily on text or images, in some cases code. I think there are larger models being built on numbers, relationships, accounting. And so I think the answer will evolve over time.

Given the training sets of the mega models at the outset, the functional answer today is training more locally. But be ready, and think about whether your stack is prepared to make a switch and to evaluate alternatives. The core infrastructure and the way it could evolve is changing so rapidly that building in a way where you can swap things out is important in this style of architecture.
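One way to read that advice in code – a rough sketch, not Ramp’s actual stack, with hypothetical class and function names throughout – is to put every model behind one narrow interface, so a locally tuned model and a hosted mega model are interchangeable at a single seam:

```python
from typing import Protocol

class TextModel(Protocol):
    """The one seam the rest of the product depends on."""
    def complete(self, prompt: str) -> str: ...

class LocalFinanceModel:
    """Stand-in for a small, locally tuned model (hypothetical)."""
    def complete(self, prompt: str) -> str:
        return f"[local model] response to: {prompt}"

class HostedMegaModel:
    """Stand-in for a hosted large model behind a vendor API (hypothetical)."""
    def __init__(self, api_key: str):
        self.api_key = api_key
    def complete(self, prompt: str) -> str:
        # A real implementation would call the vendor's API here.
        return f"[hosted model] response to: {prompt}"

def categorize_memo(model: TextModel, memo: str) -> str:
    """Product code depends only on the interface, so models can be swapped or A/B evaluated."""
    return model.complete(f"Suggest an accounting category for: {memo}")

# Switching providers is a one-line change (or a config flag), not a rewrite.
model: TextModel = LocalFinanceModel()
print(categorize_memo(model, "AWS invoice, March"))
model = HostedMegaModel(api_key="set-via-environment-in-practice")
print(categorize_memo(model, "AWS invoice, March"))
```

Keeping the interface this thin is what makes it cheap to re-evaluate the local-versus-mega-model decision as the underlying infrastructure changes.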

RH:

Everything Eric said is exactly right.

Here’s how I’d amplify that: if your theory of the game is a thin layer around the AI model, you’d better be playing on the trend of the large models – you’d better be anticipating the next large model. If that’s not your theory, then the small model or the self-run model or whatever can stand on its own. But what happens is people go, “Well, I’ll just put a thin layer around it,” and you’re like, “Well, the large model is going to blow you out of the water almost every single time” – unless you happen to be thinking, “I’m training to be the next large model’s back end; I’m just training for when the next one happens.” Fine, that can be a strategy too. But anyway, that would be another principle to add.

“Given the training sets of the mega models at the outset, the functional answer today is to train more locally. But be ready, and think about whether your stack is prepared to make a switch and evaluate. The core infrastructure and the way it could evolve is changing so rapidly.”

EG:

That’s awesome. Thank you.

SR:

One of the final topics here that I wanted to double-click on: obviously these models are very powerful, and we talked about the risk of using them for fraud and phishing attacks.

I guess I’ll pass it to Reid: You know, obviously we’re investors in Abnormal Security. That’s taking the other side of it, which is using AI to detect phishing attacks and protect people.

RH:

There’s tons of security applications of this and unfortunately, tons of offense applications, too. You know, the Russian VCs I mentioned.

SR:

Yeah. And so I’m curious: what are the vectors of offense and defense? How do we put guardrails around this technology as a society – and also as a business, how should you think about the risk factors here?

RH:

So one of the things I’ll start with, paradoxically, is a defense against some of the criticism that OpenAI gets, because people say it should be open, it should be open source, etcetera. And people go, “Oh, other people are releasing open-source models.”

SR:

People say that.

RH:

Yes, people say that. I don’t say that, but people say that. Academics want it because they want access to the open-source models, and entrepreneurs want it because they want to be able to build on it. By the way, I understand all of this. The problem is that an open-source large language model of sufficient capability is a built-in phishing tool, right? Just to be really clear, it’s like, “Here, you want to do cyber phishing? We have it. We could do it right now.” And so you have to be much more careful about open-sourcing these things.

I mean, for example, to be clear about something: last year, DALL-E from OpenAI was ready four months before it launched. Why? Because they took the extra four months and said, “Well, it could be used for these kinds of bad things. It could be used for child sexual material, it could be used for revenge porn, etcetera. We don’t want any of these use cases. We’re going to spend the extra time to really make sure that these are very difficult to do with our tool.”

And by the way, part of the reason why they offer it through an API is we can be paying attention and we go, “What’s that? Let’s fix that, right?”

Then you get other companies going, “Oh, we’re heroes because we’re releasing the open-source models.” And the open-source models cause a notable increase in this kind of garbage distributed on the Internet, with terrible impact. And by the way, it’s not just the financial stuff, but a whole bunch of other stuff.

So I’d say one area is that we’ve got to be much more thoughtful about the good outcomes – because, by the way, the doctor, the tutor, hugely important outcomes, fraud prevention, cyber prevention. And another thing is that because it’s being driven primarily by commercial entities rather than governmental entities, all of them are going, “Well, we don’t do weapons.”

And by the way, that’s a respectable, honorable position. But on the other hand, there are going to be weapons coming out of this, and you need to understand them, because you can’t defend against them if you don’t understand them.

So I’ve been going around to just about every lab that has a major effort and saying, “No, no, you should work on some weapons stuff too. You should learn advanced security procedures. You shouldn’t do the weak stuff like the NSA, where a contractor can run out the door with them – that would be bad, right? But you should do it, you should figure that out.” We should be doing that in advance so that we’re not vulnerable, because there are real vulnerabilities.

And of course, when you see that as a venture firm like Greylock, we start going, “Oh, we should start investing in a lot of security companies, and we should do this and this and this, because we’ve got to make this happen.” I think we see…

SR:

I think we say that every year regardless.

RH:

Maybe 10X this year. Yes. Yes.

SR:

Okay. I want to end this part of the discussion just on an optimistic note of…

RH:

I haven’t been optimistic?

SR:

Well, this is just realist Reid. So let’s project five years out, right? What impact will this technology have on regular people, and what are you most optimistic about?

EG:

I mean, I think…it’s funny. Many of my competitors, their founders were around in the 1800s. They wore top hats. You know, these organizations are not built thinking about how much time things take for people. They have all the time in the world, tens of thousands of employees. And time is functionally free. But it’s not for most people.

And when I think about most aspects of financial services, there’s an incredible amount of busy work that’s required, whether it’s in applications, in reviews, in submitting expense reports and doing accounting and doing procurement, figuring out what things cost.

And if you collapse the amount of work it takes in order to get at more data and understand what’s happening in the world and have the world’s knowledge given to you more rapidly, personalized to the problems you’re solving or have work done for you, it’s incredibly freeing.

For us, one of the most common things customers say about Ramp is, “I don’t think about expense reports anymore. I actually have time to do the interesting strategic work, not just go and collect receipts and tag transactions.”

And I think that’s a very small and early sign of things to come. And so in many ways, allowing people to be more strategic and focus on higher level work in more interesting and profound questions, I think that’s the potential.

“Many of my competitors were founded by people who were around in the 1800s.”

RH:

Yeah, I think – by the way, excellently said, because I think it can transform not just the productivity, but also the joy and meaningfulness of the stuff. That’s actually one of the things that’s frequently mistaken. I think you highlighted that very well.

So with both my Greylock and Microsoft hats on, I thought about this through the lens of a company. Say you gave everyone the power of being 10X – because, you know, the classic press dilemma is, “Oh my God, people are going to be laid off and it’s going to be disastrous.”

Okay, so let’s walk through the departments. We’ve got 10X salespeople – are we going to lay off any salespeople? No, 10X sales would be great; let’s take it. Marketing people? Well, you might have different functions, because the person doing data entry – those functions we don’t need as much. But do you want less marketing? Don’t you now have a competitive bar on how good your marketing is if you keep the same number of marketing people? Product? Yes. Engineering? Probably. Probably even accounting, because you can now do all kinds of different accounting and analysis of the business and a bunch of other stuff. Again, marketing has to change now. It doesn’t mean it’s all roses and utopia, but we’ve walked through a huge number of departments looking at this and gone: no, this is just going to help. Helping these companies operate a whole lot better is not going to create any kind of… there’ll be some workforce transition in terms of skills and what you’re focusing on. Customer service, you might have less of, right?

So it doesn’t mean, again, zero impact. But if you look at the overall package, you go: actually, this is not that kind of workforce problem. And when I’m talking to US folks – well, what have we been doing for the last 20 years? We’ve been putting customer service jobs in India and the Philippines. So this is actually not a big workforce problem. And so the optimistic thing is that I think this will be productive. And part of the reason I didn’t start where I normally start is that the joy of work will also be a lot better.

And by the way, I presume, given the tech and everything else, that everyone here has played with ChatGPT at least. If you don’t stay up at night kind of going, “Oh, my God!” – I kind of don’t understand you. Right? Like, it’s fun. It’s interesting, right?

SR:

You should tell my girlfriend that – that it’s normal. It’s totally normal. Yeah.

RH:

That may cost you a bourbon.

SR:

Yeah, well said. I was thinking maybe we’d open it up to the audience for maybe just a couple of questions, anyone? Yeah. Anyone have any questions for Reid and Eric?

Audience Member:

So there was an art opening here a few days ago, which you know about, at one of the most prestigious galleries, the Gagosian. It was a creator who had made all of the art presented with DALL-E 2. It was a public art opening – anyone could walk in and see it – and there were some of the most famous photographers and artists and film directors in the world there, and they were all depressed, because they felt like, “This is it. This is the beginning of the end.”

I mean, one of the photos you could easily have mistaken for Diane Arbus’s twins photo – but, you know, a new work. So what do you have to say to those people?

RH:

So first, a bit of history on that one, because it’s kind of fun, and then what I would say. The history is: Bennett Miller – amazing film director, Moneyball, etcetera – a friend of a number of ours here, including mine, happened to be in town while I was having lunch with Mustafa Suleyman, my co-founder at Inflection. I said, “Oh, come join us – you know, you like this AI stuff.” And so we were talking about what was happening – this was last year – with the image stuff. And I looked at it and went, “I think OpenAI would give you early access to DALL-E. Would you be interested in it?” And Bennett, because he’s inventive, said yes. And I went, okay, great. I called Sam on my way to the airport and was like, “Hey, I kind of promised Bennett early access. Are you cool with that?” And he’s like, “Yeah.” I was like, “Okay, great.”

And a month later, Bennett showed me this amazing art that I couldn’t even have imagined DALL-E was capable of. I was like, “Dude, this is really good. We’ve got to show it to the OpenAI people. I don’t think they have any idea…” Because part of what’s happening with these hundreds-of-billions-of-parameter models is that they’re turning computer science into a natural science. Right? Like, who knew you could do this stuff once you unleash someone creative on it?

And so we did that, and that was all great, and it ends up at the Gagosian now.

Now, given that we’re on camera – and I haven’t said this to the person: I was talking to a musician last July, and I said, “There’s a program at OpenAI that can either take an original X – like a Celine Dion – and create that song, or take 15 seconds of it and create the rest. I know that right now you’re feeling terrified.” Bad, right? You’re like, “Holy shit, my job is done.” Here’s why I think it should excite your creativity: because you can go do that, and when it plays out your four-minute piece, you can go, “Oh, the split between second 25 and 35 –

that’s really good. And the split between 1:30 and 1:48, that’s really good. I’m going to take that and make something a lot better. This can be a tool for amplifying me.”

Yes, you have to learn the new tool – that’s the transformation. So yes, if you’re like, “I refuse. I’m really good, I’m the expert, and I don’t need any new tools” – okay, the world’s a little harder now for that. But if you go, “Oh my God, I could use this tool” – because, by the way, who are these people? They have amazing artistic sense. For example, part of what Bennett taught me is that his genius is that he thinks intensely visually as a director. Now we’re giving him the tools – you type it all in here – and he can then create art, because he’s a genius at that stuff. Right? It’s an amplifier.

And that’s the thing. So yes, they have to learn the tools. If they go, “Well, I don’t want to learn the tools” – well, look: “I drive a horse and buggy and I don’t want to learn how to drive a car.” Okay, that’s a choice, right? But go learn the car. Do that. That’s the general answer.

SR:

It is pretty cool to see these tools in the hands of creative minds.

RH:

I had no clue. By the way, it’s at the Gagosian – go see it. I had no clue until Bennett went, “Oh, let me show you a few things.” I was like, “Oh my god. Holy moly.”

SR:

Maybe we’ll do one or two more questions?

Audience Member:

This question is for Reid.

So, you know, as the founder of one of the foundation model companies – there was work out of Stanford two weeks ago where they took 50,000 input-output pairs from a 175-billion-parameter model from OpenAI and fine-tuned a 13-billion-parameter model off of it. It cost like $600, and it’s as good as the OpenAI model. How do you view moats evolving now that anyone can replicate your state-of-the-art model just by having access to it – not the architecture or anything, just the inputs and outputs?

RH:

Very deep question and great question. And the fundamental answer is it’s unknown and we’re discovering it, right? There is no simple answer to that question. There will be some answers to that question. It’s just, we’re in the middle of this huge ocean and I got a ship and I’m going to launch it. Like, where does it lead and what does it do?

Now, a little bit to the earlier point: there are still a bunch of things we’ve learned in software entrepreneurship that apply. You get certain kinds of system integration; you have good go-to-market virality, or something else; you have a data set – the stuff you were talking about earlier – things that say, “Oh, that gives me…” It isn’t that all of the old wisdom about software entrepreneurship has just been dumped off the side of the boat. It’s a technological platform change, and when I say it’s bigger than the others, it’s because it’s the crescendo: you can’t have it without the Internet, you can’t have it without mobile, you can’t have it without cloud. It takes all of those and amplifies them. And that’s why it’s completely changing the game.

So yes – but that doesn’t mean the old business models or the old business wisdom are irrelevant. All of those traits still matter, but they’re transforming, and figuring out which ones matter in which way is what’s happening now. So it was a great question for everybody.

Audience Member:

Yeah. Thanks for that.

SR:

Any more questions?

Audience Member:

Just to follow up on that question, on this idea of whether LLMs will just be commoditized: is first-mover advantage just so much more important now? When OpenAI released plugins today, I thought that was brilliant, because ChatGPT was becoming commoditized anyway, and now it’s like, “Oh, that’s interesting – now they have plugins, and that’s going to be hard for everyone else to replicate.” So I agree that the idea of a traditional moat still applies.

RH:

Exactly. And the short answer is – this is the reason why it’s a work in progress – some of it will be completely commoditized. It’ll basically be the cost of the compute, electricity, etcetera, and you can get it from three or four different players. Then it’s kind of like, which one do you buy? Well, you know, I like this soft drink versus that soft drink.

But in which ways do you make it non-commodity, and how do you do that? Maybe some of it’s in the tech, maybe some of it’s in the business. Maybe the classic play is a network of developers doing plugins. So of course – and, you know, I have the privilege and honor of working with the OpenAI team – they’re very smart.

SR:

All right. I think let’s break there. But thank you, Reid. Thank you, Eric. Really appreciate you taking the time.

RH:

Yeah, thanks. And thank all of you for coming to our dinner on Friday night.

SR:

Yeah. Thank you so much for being here. I hope you have great conversations with the people here, and enjoy the evening.

WRITTEN BY

Seth Rosenberg

Seth is looking for promising early-stage founders who are dedicated to making bold moves in fintech and artificial intelligence.
