The Intelligent Future
Conversations About the State of AI
The pace of advancement in artificial intelligence has accelerated rapidly in recent years. That has led to an explosion of new companies and products aimed at amplifying and extending human efforts in both enterprise and consumer markets.
This is evident both in the evolving everyday software stack – which now includes products such as advanced data indexing and management systems, and applied machine learning tools that automatically counter cyberthreats – and in the emerging use of large language models that can generate sophisticated artistic and linguistic creations. Each development has spawned a wave of innovation, opening up countless opportunities for large, established businesses and fledgling startups alike.
As we live through this moment of profound advancement, we are also tasked with identifying the many potential risks associated with AI and, in turn, weighing the many ethical considerations involved in developing these tools.
To gain a more cohesive perspective of the industry, we recently hosted the Intelligent Future, a day-long summit with AI experts, visionaries, and early-stage innovators from across the field. Guests included Microsoft CTO Kevin Scott, OpenAI CEO Sam Altman, Stanford professor Dr. Fei-Fei Li, and founders from some of the most promising AI startups today. In the coming weeks, we will be sharing the conversations from that day on the Greymatter podcast and our YouTube channel.
To kick things off, we sat down on the Greymatter podcast to talk with Blitzscaling co-author Chris Yeh about everything we are seeing in the industry, and to set the stage for the subsequent discussions with our guests from the summit. You can listen to the interview at the link below or wherever you get your podcasts.
EPISODE TRANSCRIPT
Chris Yeh:
Hello, and welcome to Greymatter, the podcast from Greylock. I’m Chris Yeh, and I’m here with Greylock general partners Reid Hoffman and Saam Motamedi. Today, we’re going to talk about artificial intelligence.
Now, Greylock has partnered with quite a number of startups in this rapidly advancing field, including Abnormal Security, Inflection, Adept, Snorkel, Nuro, and Cresta. Reid and Saam have led many of these investments and have a unique vantage point on developments in AI.
This discussion is a precursor to a series of conversations Reid and Saam will be having with top experts, visionaries, and early-stage innovators across the AI ecosystem. They’ll discuss everything from the latest achievements in robotics and applied machine learning, to large language models, and the ethical considerations of AI. And of course, they’ll explore the many new opportunities for startups within this field.
Guests of this series will include Microsoft CTO, Kevin Scott, Stanford professor, Dr. Fei-Fei Li, OpenAI CEO Sam Altman, and the founders of some of the most promising AI startups today. All of these conversations will be available in the coming weeks on the Greymatter Podcast.
So, gentlemen, let’s start with some of the basics. What are some of the biggest technological developments in AI in recent years? And what do they mean for the state of the field?
Reid Hoffman:
Well, one of the things that’s pretty interesting here, Chris, is that there have actually been a number of really important ones. The whole artificial intelligence field is in a renaissance. My undergraduate major was symbolic systems at Stanford, which was essentially symbolic intelligence and cognitive science. And while some of those same techniques are used in deep learning and neural networks and that kind of thing, there has also been a huge number of new developments.
And so even though I’m going to throw out a few, it’s just the beginning. One of the really central things was the creation of a new set of paradigms for training these models – generally speaking, large network models with multi-billion parameters: they can be 20 billion, can be 250 billion, going on 500 billion, et cetera. One part of the training paradigm is this thing called transformers, which, roughly, takes a lot of text data, removes interesting words, and trains the model to predict them – and that becomes a generative model.
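Reid’s description of the training objective – remove words from text and learn to predict them – can be illustrated with a toy sketch. This is pure Python with an invented three-sentence corpus; a real transformer learns this objective with attention over billions of parameters, so treat this only as an intuition pump for the “fill in the removed word” idea:

```python
from collections import Counter, defaultdict

# Tiny invented corpus standing in for "a lot of data and text."
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat slept on the mat",
]

# "Training": for each word, count how often it fills a (previous, next) context.
context_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(1, len(words) - 1):
        context = (words[i - 1], words[i + 1])
        context_counts[context][words[i]] += 1

def predict_masked(prev_word, next_word):
    """Predict the most likely word between prev_word and next_word."""
    counts = context_counts[(prev_word, next_word)]
    return counts.most_common(1)[0][0] if counts else None

# "Generation": fill in the blank in "the ___ slept"
print(predict_masked("the", "slept"))
```

Once a model can fill in removed words this way, running the prediction step repeatedly is what turns it into a generator rather than a mere classifier.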
And that kind of generativity then creates all kinds of interesting possibilities – not just generating language, but a bunch of other things. By the way, part of the reason I start with training paradigms is that it goes all the way back to some of the DeepMind work that Demis Hassabis, Shane Legg, and Mustafa Suleyman did, which was game-on-game training: training a model by having it play against itself at Go, chess, Pong, and so on, as with AlphaGo. So there’s a whole range of supervised, semi-supervised, and unsupervised training paradigms.
This leads to large language models and foundation models, which is where the most amazing magic currently tends to come from. And then there are image generators, which use diffusion and other kinds of models – most notably OpenAI’s DALL-E – that create pictures. Even when you say something like, “Okay, a future city in watercolor,” it will actually give you a watercolor image of a Jetsons-style future, and so forth.
I recently did some essays on this and released some NFTs, because another thing we do here at Greylock is look at the Web3 stuff, so it’s kind of putting these together. And part of what’s amazing is that those are just some highlights – it’s only the beginning.
Saam Motamedi:
Yeah. And just to tack on – Reid called out a couple of the really important trends here. One of the things that’s been interesting to me is the focus and innovation we’ve seen around generative models. If you rewind several years, a lot of the focus was on discriminative models: show the model a photo and ask, “Is this a cat or a dog?” And that’s interesting, but we’ve talked about DALL-E on the image generation side, and we’ve seen models like GPT-3 on the text generation side.
And what’s interesting is we’re now beginning to see the applications of generative AI actually go mainstream. So today, if you’re a copywriter, you can have AI help you dramatically increase the speed at which you develop copy, whether that’s marketing copy or a job spec. We’re going to see a lot of really interesting things get built on top of image generation models like DALL-E. And so I think generative is a really, really important area.
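The discriminative/generative contrast Saam describes can be sketched in a few lines. Everything here is invented for illustration – a crude rule stands in for a trained classifier, and word-frequency sampling stands in for a model like GPT-3 – but the shape of the difference holds: one maps inputs to labels, the other produces new samples from a learned distribution:

```python
import random
from collections import Counter

# Discriminative: map an input to a label ("is this a cat or a dog?").
# The threshold rule is a made-up stand-in for a trained classifier.
def discriminate(features):
    return "cat" if features["weight_kg"] < 10 else "dog"

# Generative: learn a distribution over data, then sample new examples.
training_words = "the cat sat the cat ran the dog sat".split()
word_freq = Counter(training_words)

def generate(n_words, rng=random.Random(0)):
    """Sample new text from the learned word frequencies."""
    words, weights = zip(*word_freq.items())
    return " ".join(rng.choices(words, weights=weights, k=n_words))

print(discriminate({"weight_kg": 4}))  # -> "cat"
print(generate(5))  # novel (if crude) text drawn from the training data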
The second thing I’d call out is the combination of these large language models with program synthesis. The ability to take action on behalf of users is something we’re beginning to see early signs of, and something we’re going to see more and more of. So imagine you have models that can understand, but then actually act on behalf of the end user – and, in an enterprise context, take actions in business applications to automate workflows. I think that’s going to be important.
And then a third piece is just what’s happening on the infrastructure side. So if you think about the pipeline to actually building a machine learning application, starting at data management, and then doing things like model discovery, training, serving, monitoring, there has been a lot that’s happened over the last 18 to 24 months – both in terms of new companies and open source projects – that make each step of the pipeline much, much easier.
And so for new application developers, new product builders, there’s a lot to actually leverage and get your AI products to market much faster. And so I think these are all ingredients that are going to lead to a complete explosion in the number of AI applications we see over the coming quarters and years.
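The pipeline Saam walks through – data management, then model discovery, training, serving, and monitoring – can be sketched as a sequence of stages. These function names and bodies are entirely hypothetical stubs (a length-lookup “model” rather than a real one, and model discovery is folded into training for brevity); they are not any specific product’s API, just a sketch of how the steps chain together:

```python
def manage_data(raw_records):
    """Data management: clean, normalize, and deduplicate raw records."""
    return sorted({r.strip().lower() for r in raw_records})

def train(dataset):
    """Training: a trivial stand-in 'model' mapping each record to its length."""
    return {record: len(record) for record in dataset}

def serve(model, query):
    """Serving: answer a query from the trained model."""
    return model.get(query, 0)

def monitor(model, queries):
    """Monitoring: track how many live queries the model couldn't answer."""
    misses = sum(1 for q in queries if q not in model)
    return {"total": len(queries), "misses": misses}

# The stages chain together into one pipeline.
data = manage_data([" Spam ", "ham", "spam", "Eggs"])
model = train(data)
print(serve(model, "spam"))              # -> 4
print(monitor(model, ["spam", "tofu"]))  # -> {'total': 2, 'misses': 1}
```

The point of Saam’s observation is that each of these stubs now corresponds to mature tooling or open-source projects, so builders can assemble the pipeline rather than write every stage from scratch.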
CY:
I’m struck, Saam, by your notion of this generativity and activity on behalf of the user. That really seems fundamentally different from what came before. To draw an analogy, it feels almost like the transition from Web 1.0 to Web 2.0: things just open up, and there’s so much more that’s possible as a result.