AI for the Next Era

As the field of artificial intelligence moves forward, applications have graduated from buzzword-laden projects to fully developed tools impacting a wide range of industries.

While science fiction-esque fears of robots taking over remain overblown, the potential for AI tools to significantly augment human ideas and workflows on an everyday basis is more tangible than ever. This is evident in products ranging from Copilot, the AI programming tool produced through a collaboration between Microsoft and OpenAI, to OpenAI’s language generator GPT-3 and image creation platform DALL-E.

Now, the field is poised to move into the next stage, says OpenAI CEO Sam Altman. From massive leaps forward in large language models and multimodal models that move between image and language, to applications that significantly extend the capabilities of scientists, Altman sees artificial intelligence as the foundational platform from which numerous advancements will be made across all industries.

“If you just think about that alone as a way to unlock the applications people will be able to build, that would be a huge victory for all of us and just a massive step forward and a genuine technological revolution,” says Altman. “I think that these powerful models will be one of the genuine new technological platforms, which we haven’t really had since mobile. And there’s always an explosion of new companies right after.”

Altman joined me for a far-reaching discussion on the current state of AI and what’s to come next during Greylock’s Intelligent Future event, a day-long summit hosted by myself and fellow Greylock general partner Saam Motamedi. The summit featured experts and entrepreneurs from some of today’s leading artificial intelligence organizations. You can listen to this interview here, or wherever you get your podcasts. You can also watch the video of this interview on our YouTube channel here.

EPISODE TRANSCRIPT

Reid Hoffman:
So all right, let’s start a little bit more pragmatic, but then we’ll branch out. So one of the things I think a lot of folks here are interested in is: based on the APIs that very large models will create, what are the real business opportunities? What are the ways to look forward? And then, given that the APIs will be available to multiple players, how do you create distinctive businesses on them?

Sam Altman:
Yeah. So I think so far, we’ve been in the realm where you can do an incredible copywriting business or you can do an education service or whatever. But I don’t think we’ve yet seen the people go after the trillion-dollar take on Google. And I think that’s about to happen. Maybe it’ll be successful. Maybe Google will do it themselves. But I would guess that with the quality of language models we’ll see in the coming years, there will be a serious challenge to Google for the first time for a search product. And I think people are really starting to think about “How do the fundamental things change?” And that’s going to be really powerful.

I think we’ll get a human-level chatbot interface that actually works this time around. Many of these trends that we all made fun of were just too early – the chatbot thing was good, it was just too early. Now it can work. And I think new medical services done through that, where you get great advice, or new education services – these are going to be very large companies.

I think we’ll get multimodal models in not that much longer, and that’ll open up new things. I think people are doing amazing work with agents that can use computers to do things for you, use programs, and this idea of a language interface where you say, in natural language, what you want in this kind of dialogue back and forth. You can iterate and refine it, and the computer just does it for you. You see some of this with DALL-E and Copilot in very early ways.

But I think this is going to be a massive trend, and very large businesses will get built with this as the interface, and more generally [I think] that these very powerful models will be one of the genuine new technological platforms, which we haven’t really had since mobile. And there’s always an explosion of new companies right after, so that’ll be cool.

RH:
And what do you think the key things are, given that the large language model is provided as an API service? What are the things that you think folks who are building these AI businesses should think about as to how you create an enduring, differentiated business?

SA:
I think there will be a small handful of fundamental large models out there that other people build on. But right now what happens is a company makes a large language model (with an API that lets you build on top of it), and I think there will be a middle layer that becomes really important where… I’m skeptical of all of the startups that are trying to train their own models. I don’t think that’s going to keep going. But what I think will happen is there’ll be a whole new set of startups that take an existing very large model of the future and tune it: not just fine-tuning, but all of the things you can do.

I think there’ll be a lot of access provided to create the model for medicine, or for using a computer, or for a friend, or whatever. And then those companies will create a lot of enduring value because they will have a special version of it. They won’t have to have created the base model, but they will have created something they can use just for themselves or share with others that has this unique data flywheel going that improves over time and all of that. So I think there will be a lot of value created in that middle layer.

RH:
And what do you think some of the most surprising ones will be? It’s a little bit like, for example, a surprise from a couple of years ago – we talked a little bit to Kevin Scott about this this morning as we opened up – which is: train on the internet, do code. So what do you think some of the surprises will be – the ones where you didn’t realize it could reach that far?

SA:
I think the biggest systemic mistake in thinking people are making right now is they’re like, “All right, maybe I was skeptical, but this language model thing is really going to work and, sure, images, video too. But it’s not going to be generating net new knowledge for humanity. It’s just going to do what other people have done. And that’s still great. That still brings the marginal cost of intelligence very low. It’s not going to cure cancer. It’s not going to add to the sum total of human scientific knowledge.” And that is the thing I think will turn out to be wrong and will most surprise the current experts in the field.

RH:
Yep. So let’s go to science then as the next thing. What are some of the things – whether it’s building on the APIs, or use of APIs by scientists – what are some of the places where science will get accelerated and how?

SA:
So I think there are two things happening now, and then a bigger third one later. One is there are these science-dedicated products, like AlphaFold. And those are adding huge amounts of value, and you’re going to be seeing this way, way more. I think if I had time to do something else, I would be so excited to go after a bio company right now. I think you can just do amazing things there.

Anyway, there’s another thing that’s happening, which is tools that just make us all much more productive, that help us think of new research directions, that write a bunch of our code so we can be twice as productive. And that impact on the net output of one engineer or scientist, I think, will be the surprising way that AI contributes to science outside of the obvious models. But even just seeing now what these tools are capable of doing – Copilot is an example, and there’s much cooler stuff than that – that will be a significant change to the way that technological development and scientific development happen. So those are the two that I think are huge now and lead to just an acceleration of progress.

"I'm a big believer that the only real driver of human progress and economic growth over the long term is the societal structure that enables scientific progress, and then scientific progress itself."

But then the big thing that I think people are starting to explore is – and I hesitate to use this word because I think there’s one way it’s used which is fine, and one that is more scary – AI that can start to be an AI scientist and self-improve. Can we automate our own jobs as AI developers first, the very thing we do? Can that help us solve the really hard alignment problems that we don’t know how to solve? That, honestly, is how I think it’s going to happen.

The scary version of self-improvement (and the one from the science fiction books) is editing your own code and changing your optimization algorithm and whatever else. But there’s a less scary version of self-improvement, which is what humans do, which is if we try to go off and discover new science, we come up with explanations. We test them. We think.

Whatever process we do that is special to humans, teaching AI to do that, I’m very excited to see what that does for the total. I’m a big believer that the only real driver of human progress and economic growth over the long term is the societal structure that enables scientific progress and then scientific progress itself. And I think we’re going to make a lot more of that.

RH:
Well, especially science that’s deployed in technology. I think probably most people understand what the alignment problem is, but it’s probably worth four sentences on the alignment problem.

SA:
Yeah. So the alignment problem is: we’re going to make this incredibly powerful system, and it’ll be really bad if it doesn’t do what we want, or if it has goals that are in conflict with ours – [there are] many sci-fi movies about what happens there – or goals where it just doesn’t care about us that much.

And so the alignment problem is: how do we build AGI that does what is in the best interest of humanity? How do we make sure that humanity gets to determine the future of humanity? And how do we avoid both accidental misuse, where something goes wrong that we didn’t intend, and intentional misuse, where a bad person is using an AGI for great harm, even if that’s what another person wants – and then the inner alignment problem, where what if this thing just becomes a creature that views us as a threat?

The way that I think the self-improving systems help us is not necessarily by the nature of self-improvement, but we have some ideas about how to solve the alignment problem at a small scale. And we’ve been able to align OpenAI’s biggest models better than we thought we would at this point. So that’s good. We have some ideas about what to do next, but we cannot honestly look anyone in the eye and say we see out 100 years how we’re going to solve this problem. But once the AI is good enough that we can ask it, “Hey, can you help us do alignment research?” I think that’s going to be a new tool in the toolbox.

RH:
Yeah. For example, one of the conversations you and I had is: could we tell the agent, “Don’t be racist”? As opposed to trying to figure out all the different ways that the weird correlative data that exists in all of the models and everything else may lead to racist outcomes, it could in fact do a self-cleansing.

SA:
Totally. Once the model gets smart enough that it really understands what racism looks like and how complex that is, you can say, “Don’t be racist.”

RH:
Yeah, exactly. What do you think the moonshots are in terms of the evolution of the next couple of years that people should be looking out for?

SA:
In terms of evolution of where AI will go?

RH:
Yeah.

SA:
I’ll start with the higher-certainty things. I think language models are going to go just much, much further than people think, and we’re very excited to see what happens there. A lot of people say we’re running out of compute, running out of data – that’s all true. But I think there’s so much algorithmic progress to come that we’re going to have a very exciting time.

Another thing is I think we will get true multimodal models working. And so not just text and images, but every modality in one model, able to easily and fluidly move between them. I think we will have models that continuously learn. So right now, if you use GPT-whatever, it’s stuck at the time it was trained. And no matter how much you use it, it doesn’t get any better, and all of that. I think we’ll get that changed. So I’m very excited about all of that.

And if you just think about what that alone is going to unlock, and the applications people will be able to build with that, that would be a huge victory for all of us and just a massive step forward and a genuine technological revolution – if that’s all that happened. But I think we’re likely to keep making research progress into new paradigms as well. We’ve been pleasantly surprised on the upside about what seems to be happening. And for all these questions about new knowledge generation (how do we really advance humanity?), I think there will be systems that can help us with that.

RH:
So one thing I think would be useful to share – because folks don’t realize that you’re actually making these strong predictions from a fairly critical point of view, not just “We can take that hill” – say a little bit about some of the areas that you think are currently loosely talked about, for example, AI and fusion.

SA:
Oh yeah. So one of the unfortunate things that’s happened is AI has become the mega-buzzword, which is usually a really bad sign. I hope it doesn’t mean the field is about to fall apart. But historically, that’s a very bad sign for new startup creation if everybody is like, “I’m this with AI.” And that’s definitely happening now. We were talking earlier about all these people saying, “I’m doing these RL models for fusion or whatever,” and as far as we can tell, they’re all much worse than what smart physicists have figured out.

I think it is just an area where people are going to say everything is now “this plus AI.” Many of those things will be true. I do think this will be the biggest technological platform of the generation.

We like to make predictions where we can be on the frontier, understand predictably what the scaling laws look like (or have already done the research), where we can say, “All right, this new thing is going to work,” and make predictions out of that.

And that’s how we try to run OpenAI, which is to do the next thing in front of us when we have high confidence and take 10% of the company to just totally go off and explore, which has led to huge wins.

Oh, I feel bad saying this. I doubt we’ll still be using transformers in five years. I hope we’re not. I hope we find something way better. But transformers obviously have been remarkable. So I think it’s important to always look for where I am going to find the next totally new paradigm. But I think that’s the way to make predictions. Don’t pay attention to the “AI for everything.” Can I see something working, and can I see how it predictably gets better? And then, of course, leave room open for the fact that you can’t plan the greatness – sometimes the research breakthrough just happens.

RH:
So I’m going to ask two more questions and then open it up because I want to make sure that people have a chance to do the broader discussion, although I’m trying to paint the broad picture so you can get the crazy ass questions as part of this.

What do you think is going to happen vis-à-vis the application of AI to these very important systems, for example, financial markets? Because the very natural thing would be to say, “Well, let’s do a high-frequency quant trading system on top of this,” and other kinds of things. Is it just a neutral arms race? What’s your thought on it? It’s almost like the Life 3.0 “Omega team” point of view.

SA:
Yeah. I think [AI] is going to just seep in everywhere. My basic model of the next decade is that the marginal cost of intelligence and the marginal cost of energy are going to trend rapidly towards zero, surprisingly far. And those, I think, are two of the major inputs into the cost of everything else, except the cost of things we want to be expensive, the status goods, whatever.
I think you have to assume that’s going to touch almost everything, because of these seismic shifts that happen when the whole cost structure of society changes, which has happened many times before. The temptation is always to underestimate those. So I wouldn’t make a high-confidence prediction about anything that doesn’t change a lot, or where this doesn’t get to be applied.

But one of the things that is important is that it’s not like these trend all the way to zero – they just trend towards it. And so someone will still be willing to spend a huge amount of money on compute and energy; they’ll just get unimaginable amounts of those. And so who’s going to do that, and where’s it going to get the weirdest? Not because the cost comes way down, but because the amount spent actually goes way up.

RH:
Yes, the intersection of the two curves.

SA:
Yeah. Say the cost of energy got 10 or 100 times cheaper, and the cost of intelligence got 100 million times cheaper, and people were still willing to spend 1,000 times more in today’s dollars. What happens then?

RH:
And then, the last of the buzzword-bingo future questions: metaverse and AI – what do you see coming in this?

SA:
I think they are both independently cool things. It’s not totally clear to me how they fit together, other than that AI will impact all computing.

RH:
Yeah. Well, obviously, computing, simulation environments, agents, possibly entertainment, certainly education – an AI tutor and so forth – those would be baseline. But the question is: is there anything that’s occurred to you that’s-

SA:
In the upside case, which I think has a reasonable chance of happening, the metaverse turns out to be more like something on the order of the iPhone – a new container for software and a new computer-interaction thing – and AI turns out to be something on the order of a legitimate technological revolution. And so I think it’s more about how the metaverse is going to fit into this new world of AI than how AI fits into the metaverse. But low confidence.

RH:
But TBD. All right. Questions?

Audience Member:
Hey there. How do you see foundational technologies like GPT-3 affecting the pace of life science research specifically? You can group in medical research there, and just quickening the iteration cycles. And then, what do you see as the rate limiter in life science research – the point we won’t be able to get past because there are just laws of nature, something like that?

SA:
I think the currently available models are not good enough to have made a big impact on the field. At least that’s what most life science researchers have told me. They’ve all looked at it, and they’re like, “That’s a little helpful in some cases.” There’s been some promising work in genomics, but stuff at the bench top hasn’t really been impacted. I think that’s going to change. And I think this is one of these areas where there will be these new $100 billion to $1 trillion companies started, and those areas are rare.

If you can really make a future-of-pharma company that is just hundreds of times better than what’s out there today, that’s going to be really different. As you mentioned, there still will be the rate limit that biology has to run at its own pace, and human trials take however long they take.

So I think an interesting cut of this is: where can you avoid that? The synthetic-bio companies that I’ve seen that have been most interesting are the ones that find a way to make the cycle time super fast. And that benefits from an AI that’s giving you a lot of good ideas – you’ve still got to test them, which is where things are right now.

I’m a huge believer that for startups, the things you want are low costs and fast cycle times. And if you have those, you can then compete as a startup against the big incumbents. So I wouldn’t go pick cardiac disease as my first thing to go after right now with this new company. But using bio to manufacture something – that sounds great. I think the other thing is the simulators are still so bad. And if I were a bio-meets-AI startup, I would certainly try to work on that somehow.

RH:
When do you think AI tech will help create itself – almost like the self-improvement [aspect] – and make the simulators significantly better?

SA:
People are working on that now. I don’t know quite how it’s going, but very smart people are very optimistic about that.

"I don't think all the deep biological things will be changed by AI. I think we will still really care about interaction with other people. I think the stuff that people cared about 50,000 years ago is more likely to be the stuff that people care about 100 years from now than 100 years ago."

RH:
Other questions. And I can keep going on questions. I just want to make sure you guys had a chance at this. Ah, here. Yes. Great. Mic is coming.

Audience Member:
Awesome, thank you. I was curious, what aspects of life do you think won’t be changed by AI?

SA:
All of the deep biological things. I think we will still really care about interaction with other people. We’ll still have fun, and the reward systems of our brain are still going to work the same way. We’re still going to have the same drives to create new things and compete for silly status and form families and whatever. So I think the stuff that people cared about 50,000 years ago is more likely to be the stuff that people care about 100 years from now than 100 years ago.

RH:
As an amplifier on that before we get to whatever the next question is, what do you think are the best utopian science fiction universes so far?

SA:
Good question. Star Trek is pretty good, honestly. I do like all of the ones where we turn our focus to exploring and understanding the universe as much as we can. That’s not a utopian one – well, maybe. I think “The Last Question” is an incredible short story. That came to mind.

RH:
Yep, yep. I was expecting you to say Iain Banks and the Culture.

SA:
Those are great. There’s not one sci-fi universe that I could point to and say I think all of this is great. But the collective optimistic corner of sci-fi, which is a smallish corner, I’m excited about. Actually, I took a few days off to write a sci-fi story, just about the optimistic case of AGI, and I had so much fun doing it that it made me want to go read a bunch more. So I’m looking for recommendations of more to read now – the less-known stuff, if you have anything.

RH:
I will get you some great recommendations.

Audience Member:
So in a similar vein, one of my favorite sci-fi books is called Childhood’s End by Arthur C. Clarke, from the ’60s I think. And I guess the one-sentence summary is: aliens come to the Earth to try to save us, and they just take our kids and leave everything else. So-

RH:
It’s slightly more optimistic than that. But yes – ascension into the Overmind is meant to be more utopian. You may not read it that way, but yes.

Audience Member:
Well, so in our current universe, our current situation, a lot of people think about family building and fertility. Different people have different ways of approaching this. But from where you stand, what do you see as the most promising solutions? It might not be a technological solution, but I’m curious what you think. Other than everyone having 10 kids, how do we-

SA:
Of everyone having 10 kids?

Audience Member:
Yeah. How do you populate? How do you see family building coexisting with AGI and high tech?

SA:
This is a question that comes up at OpenAI a lot: how should one think about having kids? There’s, I think, no consensus answer to this. There are people who say, “Yeah, I always thought I was going to have kids, and I’m not going to because of AGI,” just for all the obvious reasons and, I think, some less obvious ones. There are people who say, “Well, it’s going to be the only thing for me to do in 15, 20 years, so of course I’m going to have a big family. That’s what I’m going to spend my time doing. I’ll just raise great kids. And then I think that’s what will bring me fulfillment.”

I think, as always, it is a personal decision. I get very depressed when people are like, “I’m not having kids because of AGI.” The EA community is like, “I’m not doing that because they’re all going to die.” The techno-optimists are like, “Well, I want to merge into the AGI and go off exploring the universe. And it’s going to be so wonderful, and I want total freedom.”

But I find all of those quite depressing. I think having a lot of kids is great. I want to do that now even more than I did when I was younger. And I’m excited for it.

Audience Member:
What do you think will be the way that most users interact with foundation models in five years? Do you think there’ll be a number of verticalized AI startups that essentially have adapted a fine-tuned foundation model to industry? Or do you think prompt engineering will be something many organizations have as an in-house function?

SA:
I don’t think we’ll still be doing prompt engineering in five years. And this will be integrated everywhere. Either with text or voice, depending on the context, you will just interface in language and get the computer to do whatever you want. And that will apply to generating an image, where maybe we still do a little bit of prompt engineering, but mostly it’s just going to be: get it to go off and do this research for me, or do this complicated thing, or be my therapist and help me figure out how to make my life better, or go use my computer for me and do this thing – or any number of other things. But I think the fundamental interface will be natural language.

"What will always matter is the quality of ideas and the understanding of what you want."

RH:
Let me actually push on that a little bit before we get to the next question. To some degree, we have a wide range of human talents right now. Taking a look, for example, at DALL-E: when you have a great visual thinker, they can get a lot more out of DALL-E because they know how to think visually. They know how to iterate the loop through testing. Don’t you think that will be a general truth about most of these things? While natural language will be the way you’re doing it, there will be an almost evolving set of human talents about going that extra mile.

SA:
100%. I just hope it’s not figuring out how to hack the prompt by adding one magic word to the end that changes everything else. What will matter is the quality of ideas and the understanding of what you want. So the artist will still do the best with image generation, but not because they figured out how to add this one magic word at the end of it – because they were just able to articulate it with a creative eye that I don’t have.

RH:
And what they have is a vision, and how they do visual thinking, and how they iterate through it. And obviously, it’ll be that word or prompt now, but it’ll iterate to something better. All right. At least we have a question here.

Audience Member:
Hey. Thanks so much. I think the term AGI is thrown around a lot. And sometimes I’ve noticed in my own discussions, the sources of confusion just come from people having different definitions of AGI. And so it can be the magic box where everyone just projects their ideas onto it. And I just want to get a sense for me, how would you define AGI, and how do you think you’ll know when we achieve it?

SA:
Yeah. I should’ve defined that earlier. It’s a great point. I think there’s a lot of valid definitions to this, but for me, AGI is basically the equivalent of a median human that you could hire as a coworker. And then they could do anything that you’d be happy with a remote coworker doing just behind a computer, which includes learning how to go be a doctor, learning how to go be a very competent coder.

There’s a lot of stuff that a median human is capable of getting good at. And I think one of the skills of an AGI is not any particular milestone but the meta-skill of learning to figure things out, so that it can go decide to get good at whatever you need. So for me, that’s AGI. And then superintelligence is when it’s smarter than all of humanity put together.

RH:
Did you have a question? Yep. Great.

Audience Member:
Thanks. Just what would you say in the next 20, 30 years are some of the main societal issues that will arise as AI continues to grow? And what can we do today to mitigate those issues?

SA:
Obviously, the economic impacts are huge. And if it is as divergent as I think it could be – some people doing incredibly well and others not – I think society just won’t tolerate it this time. And so we have to figure that out: we’re going to disrupt so much of economic activity, and even if it’s not all disrupted 20 or 30 years from now, I think it’ll be clear by then that it’s all going to be.

What is the new social contract? My guess is that the things that we’ll have to figure out are how we think about fairly distributing wealth, access to AGI systems, which will be the commodity of the realm, and governance, how we collectively decide what they can do, what they don’t do, things like that. And I think figuring out the answer to those questions is going to just be huge.
I’m optimistic that people will figure out how to spend their time and be very fulfilled. I think people worry about that in a little bit of a silly way. I’m sure what people do will be very different, but we always solve this problem. But I do think the concept of wealth and access and governance, those are all going to change, and how we address those will be huge.

RH:
Actually, one thing. I don’t know what level of detail you can share, but one of the things I love about what OpenAI and you guys are doing is that you think about these questions a lot yourselves, and you initiate some research. So you’ve initiated some research on this stuff.

SA:
Yeah. So we run the largest UBI experiment in the world. We have a year and a quarter left in a five-year project. I don’t think that’s the only solution, but I think it’s a great thing to be doing. And I think we should have 10 more things like that that we try. We also try different ways to get input from a lot of the groups that we think will be most affected and see how we can do that early in the cycle. We’ve explored more recently how this technology can be used for reskilling people that are going to be impacted early. We’ll try to do a lot more stuff like that too.

RH:
Yeah. So the organization is, in fact – these are great questions – addressing them and actually doing a bunch of interesting research on them. So, the next question.

Audience Member:
Hi. So creativity came up today in several of the panels, and it seems to me that the way it’s being used is as tools for human creators that go and expand human creativity. So where do you think the line is between these tools that allow a creator to be more productive and artificial creativity that does everything itself?

SA:
Yeah. And I think we’re seeing now that tools for creatives are going to be the great application of AI in the short term. People love them. They’re really helpful. And at least in what we’re seeing so far, it is not replacing – it is mostly enhancing. It’s replacing in some cases, but for the majority of the kind of work that people in these fields want to be doing, it’s enhancing. And I think we’ll see that trend continue for a long time. Eventually, yeah, if we look out 100 years, it probably can do the whole creative job.

I think it’s interesting that if you ask people 10 years ago about how AI was going to have an impact, with a lot of confidence from most people, you would’ve heard, first, it’s going to come for the blue collar jobs working in the factories, truck drivers, whatever. Then it will come for the low skill white collar jobs. Then the very high skill, really high IQ white collar jobs, like a programmer or whatever. And then very last of all and maybe never, it’s going to take the creative jobs. And it’s going exactly the other direction.

There’s an interesting reminder in here generally about how hard predictions are, but more specifically about how we’re not always very aware, maybe even about ourselves, of what skills are hard and easy – what uses most of our brain and what doesn’t, or how difficult bodies are to control or make, or whatever.

RH:
We have one more question over here.

Audience Member:
Hey. Thanks for being here. So you mentioned that you’d be skeptical of any startup trying to train their own language model. What I have heard, which might be wrong, is that large language models depend on data and compute, and any startup can access the same data because it’s just internet data. Different companies might have different amounts of compute, but I guess the big players will all get similar amounts. So how could a large language model startup differentiate from another?

SA:
I think it’ll be this middle layer. I think in some sense, the startups will train their own models, just not from the beginning. They will take base models that are hugely trained with a gigantic amount of compute and data, and then they will train on top of those to create the model for each vertical. So in some sense, they are training their own models, just not from scratch. But they’re doing the 1% of training that really matters for whatever this use case is going to be.
Those startups, I think, will be hugely successful and very differentiated. But that’ll be about the data flywheel the startup is able to build, and all of the pieces on top of and below the core base model – which could include prompt engineering for a while, or whatever. Training the base model itself, I think, is just going to get too complex and too expensive, and the world also just doesn’t make enough chips.

RH:
So Sam has a work thing he needs to get to. And as you probably can tell from this very far-ranging conversation, Sam always expands boundaries. And, a little bit, when you’re feeling depressed – whether it’s [the future for] kids or anything else – you’re the person I always turn to for help.

SA:
I appreciate that.

I think no one knows. We’re sitting on this precipice of AI, and people are like, “It’s either going to be really great or really terrible.” You’ve got to plan for the worst – it’s not a strategy to say it’s all going to be okay – but you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there and play for it, rather than act from this place of fear and despair all the time.

RH:
Yeah, because if we acted from a place of fear and paranoia, we would not be where we are today. So let’s thank Sam for spending dinner with us.

SA:
Thank you.