Name an industry, work environment, or geographic locale, and James Manyika will have valuable insights on how it can be impacted by technology and business.

From his nearly three-decade tenure as a highly sought-after advisor at McKinsey, to his roles on numerous influential boards and commissions, including vice chair of the Global Development Council under the Obama administration, Manyika has a singular capacity for seeing complex questions of society, economics, and tech from a kaleidoscope of seemingly contradictory vantage points.

So it’s no surprise that Google recently appointed him as the company’s first-ever SVP of Technology and Society.

What started for Manyika as an undergraduate research project on neural networks became a PhD in machine learning (the then-preferred term for AI) and a lifelong pursuit of applying that understanding to large-scale societal problems.

From that early research to his new post at Google, the throughline of his work, says Manyika, has been a deep focus on questions of: “What are all these advanced technologies going to mean for society, in the best sense, in the sense of what opportunities will be created? What things do you want to make sure we manage well?” and, of particular importance to him with AI: “How do we get it right? How can we make sure that things go right?”

In this interview from our Iconversations series, Manyika talks with Reid Hoffman, Greylock general partner and longtime friend and colleague, about the current state of AI, where on the horizon he sees AGI, and his take on the problems of bias presented — or rather, illuminated — by algorithms, as well as what can be done to solve them.

You can watch the video from the event on our YouTube channel, or listen to the full conversation here:

 

EPISODE TRANSCRIPT

Reid Hoffman:
Hi, everyone. Welcome to another edition of Greylock’s Iconversations. I’m Reid Hoffman, a general partner at Greylock, and your host for today’s event. I am thrilled to welcome James Manyika as our guest today.

James is the newly appointed SVP of Technology and Society at Google, a role he has taken on following more than 27 years at McKinsey, where he advised leaders at many of the world’s top tech companies.

James has focused on artificial intelligence, robotics, and globalization for his entire career, and he’s had extensive experience in pretty much every type of workplace imaginable, from academia to government agencies to private companies and nonprofits. His contributions range from books and articles to speeches, lectures, and, of course, countless moments of critical advice in top-secret capacities.

He’s held several government advisory posts, including as vice chair of the Global Development Council under the Obama administration, and he was named to the Digital Economy Board and National Innovation Board. He also serves on the board of the Council on Foreign Relations and recently co-chaired the State of California’s Future of Work Commission.

Today, he serves in various science and technology capacities, including as a distinguished fellow of Stanford’s Human-Centered AI Institute, fellow and visiting professor at Oxford, and board member at the Broad Institute of MIT and Harvard, among others. He is also a fellow of the American Academy of Arts and Sciences and a member of the National Academies of Sciences, Engineering, and Medicine’s Committee on Responsible Computing.

He’s also a great friend of mine, with whom I have shared many incredible initiatives and experiences, including our recent trip to the South Pole, where I believe James was the first Zimbabwean-born visitor to the geographic South Pole.

James, as always, thank you for being with us today. You’ve had a long and busy career through one of the fastest changing periods in our history. And, to this day, you are constantly expanding your exploration zone, not just to the South Pole. Welcome.

James Manyika:
Well, thank you, Reid. I’m excited to do this with you. It’s always fun to be in conversation with you.

RH:
So, we were at Oxford together, but didn’t know each other then. You’ve been focused on artificial intelligence and robotics for your entire career, including then, from the technical aspects to the ethical aspects, which obviously is super important for designing, implementing, [and] founding technology. What set you off on this course?

JM:
Well, thanks for asking the question. In fact, apart from growing up watching all the science-fiction films — 2001: A Space Odyssey and everything — I had a very peculiar thing happen when I was an undergraduate in Zimbabwe.

I was looking for an undergraduate research project, and it turns out that a postdoc was visiting from Canada. This postdoc had been one of Geoff Hinton’s students, actually. He said, “Well, why don’t you do a project building a neural network?” And I said, “What is that?” So that was actually the first time I ever programmed a neural network, because, as some of your audience will know, Geoff Hinton was one of the people who pioneered the successful run we’ve had now with deep learning and neural networks.

That was how I got started. From that, I got hooked, [and] went to Oxford. At Oxford, I did a few different things. But when I finished, I’d done a doctorate in AI and robotics. And that was a fascinating time.

At the time, by the way, there was often a reluctance to call this AI, because of the previous period of AI winter. So we actually called it machine perception, machine learning, and other things — anything but AI.

RH:
Exactly. And one doesn’t normally go from a PhD in machine learning, machine perception, to McKinsey. What was that move? And in particular, of course, and we’ll get into this in some depth, thinking about societal and economic ecosystems as part of this.

JM:
Well, it was a very accidental thing for me, Reid, because part of it was, quite frankly, an excuse to be in California, because when McKinsey made me an offer, I could be in California. And I’d been spending some time, by the way, even after my PhD, at the Jet Propulsion Laboratory, when I was a visiting scientist in the Man-Machine Systems Group, because some of the things I worked on in my doctorate were applicable there. And besides, a few friends of mine had this crazy idea that we might actually build an autonomous car.

So while my other friends were at Berkeley and were working with Stuart Russell and others, I said, “Well, maybe this McKinsey thing is an excuse to be in California, be in the Bay Area.” So I actually took a leave of absence from the other stuff that I was doing, just as an excuse to be in the Bay Area, and I guess I ended up staying.

But more seriously, though, I think part of what I learned at the time was that I was fascinated by very large-scale problems. And, of course, technology is a big part of that, but also just thinking about large-scale societal questions was what fascinated me. And McKinsey seemed to be a great platform and place to do that, particularly at the Global Institute, [which] I ended up leading for many years.

RH:
Yep. Well then the Global Institute is, obviously, producing a huge amount of very interesting and very kind of — practical’s not the word, but rooted in what kinds of things to do and what trends are happening. And so, that makes sense.

So: What’s this new role at Google? What are you going to be doing?

JM:
Well, thank you. It almost feels like a continuation of the things that I’ve been passionate about, Reid, in the sense that it has this big title of technology and society. But what it’s really about is to really think about: What are all these advanced technologies going to mean for society, in the best sense, in the sense of what opportunities will be created? What things do you want to make sure we manage well?

So I’m going to be spending a lot of time doing research, spending time with the amazing AI teams at Google, Jeff and Demis and others, and also thinking about the next generation of bets and investments and how those might affect society. And, quite frankly, talking to a lot of people inside Google, outside of Google, and trying to engage in these issues around technology and society.

You and I have spent a lot of time thinking about, “How do we make sure all of this— How do we get it right? How can we make sure that things go right?” And I’m particularly excited about making sure, in an area like AI and robotics, where I’ve spent most of my time — I want to make sure it turns out right.

RH:
One of the things that you have coming up is through the American Academy of Arts and Sciences, where both you and I are members. They have a journal, Dædalus, with very thoughtful thematic issues. You have an AI issue coming up.

To begin to dig into these issues and some of the things that you’ve been doing at McKinsey and will be doing at Google: What are some of the issues that you’re going to be addressing in that issue of Dædalus? And what are some of the things that you think that technologists should be thinking about, as they think about this amazing technology; this Renaissance we’re going into in AI?

JM:
Well, first of all, I was so excited when the American Academy asked me about a year ago to curate and edit this special edition of Dædalus. Normally when Dædalus comes out, it’s eight essays, but this one’s going to have 27 essays. And what’s fun about that is I was able to arm-twist my friends and people I know. So I’ve got everybody contributing an essay. Jeff Dean has an essay, Kevin Scott at Microsoft has an essay. Mira Murati from OpenAI has an essay. Stuart Russell from Berkeley.

So you’ve got half the contributors [as] sort of AI pioneers and people who really work in the frontier of the technology. The other half are people who are thinking about the implications in society. So they are people like Michael Spence, a Nobel laureate, philosophers, and ethicists. This issue is actually called “AI in Society.”

But if I come to your question, Reid, about where we are in this: I think, on the technical side, it’s a very exciting time. I mean, your audience will know, and the participants will know, that we’ve had an incredible run with techniques like deep learning and reinforcement learning. And deep learning, especially, got a turbocharge recently with the development of transformer models. So these systems are working remarkably well.

One of the debates you will hear in the collection, and it’s very much in the community, is: Are the current approaches sufficient to get us to remarkably powerful AI or even ultimately artificial general intelligence? So this is one of the debates in the field, which is: Are these techniques and approaches enough? And if you take a room of AI people, half of them will say, “Oh yeah, this is all we need.” And the other half will say, “Well, this stuff is great, but we need so much more.”

And usually what they’re getting at there is [that] there still are some very hard problems in AI. Things like: Can we actually do causal reasoning with these systems? Can we get to issues of meaning? Can we do transfer learning? Can we do what Daniel Kahneman and others describe as System 2 tasks? So there are still some very hard problems.

And the debate is: Are these techniques enough, or are we going to need other things? By the way, when you ask people about AGI and how far away it is, much of that debate hinges on this question. So the people who think we’ve got all the tools and techniques will say, “Oh no, it’s very, very close.” The people who don’t think we’ve got enough will say, “No, no, no. We still need some major additional conceptual breakthroughs before we can get there.” This is one of the fun debates. And you’ll see this in the collection, at least on the technical question.

Now of course, there are other questions that are being debated in the edition. I’m sure we’ll get into some of the societal issues, the economic/jobs implications. The topic you and I have fun with all the time: great-power competition. Right? How do you think about how this plays out on the global stage? It’s going to be a pretty broad, fascinating discussion.

RH:
Yeah. Let’s use that as an initial frame for the discussion. Because one of the things I think sometimes gets lost in this discourse is, you say, “Well, look, there are two camps. One thinks that AGI is 10, 20, 30 years in the future.” And there’s always this little betting pool of when you get to a 20-percent or 50-percent or 80-percent chance in [that] number of years. And you and I both run that little opening poll question in groups and rooms. But even if you don’t get to AGI, the transformative impact of this new Renaissance in AI is going to be huge.

Say a little bit about what you see as coming, even if you don’t get to AGI. And what are some of the societal questions that technologists should be thinking about, and we as a broader discourse? So, kind of: what’s possible, even with just machine learning as it is, and then what are some of the important questions for us to address?

JM:
First of all, there’s just the economics of it. One of the things that is actually truly exciting is the amazing transformational potential of these technologies for economic growth and prosperity. A quick way into that is just 30 seconds of economic theory. If you look at how the economy grows (people got Nobel Prizes for this), there’s something called growth decomposition, which Bob Solow and others developed.

So you get GDP growth by getting productivity growth and labor supply growth. You put those together; GDP grows. Now, with aging and other things, much of our forward-looking economic growth is going to come from productivity growth. And at the core of productivity growth is technological innovation. And at this point in time, AI and related technologies have extraordinary potential to transform how productivity growth happens. So, in that macro sense, we need it for the economy and economic growth.

“AI and related technologies have extraordinary potential to transform how productivity growth happens.”
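
To make that 30 seconds of theory concrete, here is a minimal growth-decomposition sketch in Python. The growth rates are illustrative assumptions for the example, not figures from the conversation:

```python
# Minimal growth-decomposition sketch (Solow-style accounting):
# GDP growth comes from productivity growth plus labor-supply growth.
# The rates below are illustrative assumptions, not estimates.

productivity_growth = 0.015   # assumed 1.5% annual growth in output per worker
labor_supply_growth = 0.005   # assumed 0.5% annual growth in hours worked

# To a first approximation the two contributions add;
# compounding them is slightly more precise.
gdp_growth_approx = productivity_growth + labor_supply_growth
gdp_growth_compounded = (1 + productivity_growth) * (1 + labor_supply_growth) - 1

print(f"approximate GDP growth: {gdp_growth_approx:.2%}")      # ~2.00%
print(f"compounded GDP growth:  {gdp_growth_compounded:.2%}")  # ~2.01%

# With an aging workforce, labor_supply_growth shrinks toward zero,
# so future GDP growth depends mostly on productivity_growth,
# which is where AI-driven innovation enters the picture.
```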

To come back to a much more practical level, we did some research with friends at Google and Microsoft and a few others to look at use cases in the economy. So, coming down from the lofty economic-theory level, you come down to actual use cases. And if you look at these techniques — initially, we looked at something like 400 use cases; now that library has gone up [and] we have thousands of use cases — what you find is that the powerful, economically compelling and commercial use cases are actually there in every sector of the economy. This is not just something in the tech sector; it’s everywhere. And the biggies, by the way, are in retail and transportation logistics, just from a shared view of applicable use cases.

And it’s not just a sectoral question; it’s also a functional question. Sales and marketing is actually one of the biggest arenas. Supply chain and logistics, if you think about it as a function inside of companies. The use cases are there everywhere.

In fact, when we first tried to size this, Reid, just looking at what looked like the realistic use cases, we got to a number that was at least six trillion dollars, potentially, annually in the near term. And if you scale that up to all the other use cases that are emerging, the numbers get bigger and bigger. So in a nutshell, the economic possibilities and potential, both for the economy and for businesses, are tremendous.

Now, the other question that comes up, of course, as an implication, even before we get to AGI, [is] the question about jobs and labor markets. And this is where the story is a bit mixed, in the following sense: While the economic case is clear, the jobs/labor market question is a bit mixed. It’s mixed in the sense that we’ve done lots of studies, as have others, too, and generally what most of those studies conclude is that: First of all, don’t worry about a jobless future, at least not for the next several decades. Because what’ll happen is: We’re going to lose some jobs, but we’re also going to gain some jobs — but the biggest effect is that most jobs will change.

Therefore, even though we have jobs for the future, the question is: How do we deal with the transitions? And the transition issues here have to do with reskilling, with occupational changes, some of the wage effects. And these are the things that we’re really going to need to navigate and think through quite carefully. But, for the most part, don’t worry about a jobless future, at least not for the next few decades. So the labor market question has been a little bit more complex.

Now, there are other effects people think about. These are the societal, ethical, and other questions, and I’m sure we’ll get into those. But because [of how we use] these transformational technologies, questions of use and misuse, questions of the second-order effect when it comes to things like information, disinformation, deep fakes, etc. — there’s a whole other set of implications that we have to think about. It gets to questions of governance, use, and misuse.

Then of course, we get to one of your favorite topics, Reid, which is: How does this play out in terms of great-power competition? We know that, already, two countries are racing ahead of everybody, the United States and China. Others are not quite progressing at the same pace. So what does this all mean when you’ve got so much economically at stake, but also these national security and geopolitical-strategic interests at stake? That’s another whole question in the arena for this field.

RH:
I totally agree, obviously, given the number of conversations we’ve had. I mean, I think part of the frame of this is for folks to realize that, even though there’s been a lot of drum-rolling and as-yet relatively modest industrial impact from the AI stuff, as the software transitions into both — one of the frequent expressions I use — the worlds of bits and atoms, we’re going to see some particularly massive changes. And it’ll probably be a little bit more surprising and a little sooner than we think.

Some things, for example jobs, I think will stay. The jobs will transform, but there will be jobs. It’s just the transformation, as you say. One of the things that, obviously, I think makes everyone nervous about AI technology is the explainability — the transparency problem, which is: “What is this device doing?”

So, maybe this device will drive the vehicle, but will it make sure that it will drive it safely? And do you know it? And how do you get a sense of it, given how complicated [it is, with] hundreds of billions of parameters in these transformer models? How can you bring other people into the discourse? And how do you get other people to get some confidence about what’s happening? How to put those two elements together?

JM:
I think that’s an important question, Reid, because I think despite our collective exuberance about these technologies, there are some major limitations, and, as I said, some hard problems. But I think it’s worth dwelling a little bit on the limitations and gaps, before we get to the truly hard AI problems.

I think the limitations, most of them, have to do with things that are likely to erode public trust. And a lot of those include things like bias in the algorithms, or in the data or the corpora that are used to train them, [and] questions about brittleness in the systems, for example. This issue of when you have out-of-distribution data or distributional nonstationarities, for example. When you train these systems on one set of data, and then you suddenly present them with something out of distribution, they make different predictions. And also the issue of explainability, because from a trust-building standpoint—

What I often find interesting, Reid, is that often people who don’t understand the technology, outside the tech industry, think that the tech industry is trying to hide something on explainability. No, it’s just that the structure of these neural networks, the structure of these algorithms, is such that you can’t actually open one up and say, “It made this decision because of this particular variable, that particular variable, or this data set” — although we’re starting to get better at that.
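
One concrete example of “starting to get better at that” is post-hoc attribution, such as permutation importance, which estimates how much each input feature matters to a trained model without opening up its internals. Here is a minimal sketch on synthetic data (the dataset and model are illustrative assumptions, not anything discussed in the conversation):

```python
# Permutation-importance sketch: estimate how much each input feature
# contributes to a black-box model by shuffling that feature and
# measuring how much the model's accuracy drops.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic data: feature 0 is strongly predictive, feature 1 weakly, feature 2 is noise.
X = rng.normal(size=(2000, 3))
y = (2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
baseline = model.score(X, y)

for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # break this feature's link to y
    drop = baseline - model.score(X_perm, y)
    print(f"feature {j}: accuracy drop when shuffled = {drop:.3f}")

# The biggest drop points at the feature the model leans on most; this
# explains influence, not the network's internal mechanics.
```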

So the question is: How do we address these limitations and gaps that are likely to erode public trust? And I think it’s important to keep public trust in these systems, because, if these systems are going to show up in health applications, in autonomous vehicles, etc., people need to understand and be trustful of these systems.

Now, how we get to that — I think there’s a lot more research and work to be done. And I think some of that is underway; I think we’re starting to make progress on that front. But also, I think, having the public keep up with the field and understand how these systems work — there’s a lot of education, a lot of involvement and participation in the processes, that is going to have to [happen].

This is one of the things we have to get right. We have to continue to build public trust in these systems and be quite open and transparent about what we know and what we don’t know. So those are some of these limitations we need to work through, in addition to all the hard, other problems we still have to work on. There’s a lot of work to do.

“We have to continue to build public trust in these systems and be quite open and transparent about what we know and what we don’t know.”

And, by the way, I should have mentioned: One of these trust-eroding limitations is the toxic output that you sometimes get from these algorithms — not all of the time, but some of the time. This is one of the criticisms, for example, of large language models. I’m sure we’ll get to talk about large language models. But that’s one of the criticisms: that occasionally you’ll get sexist or racist outputs from these systems. We need to fix that, but that’s one of these trust-eroding issues with these systems.

RH:
Yeah. Let’s go to the large language models, which is one of the areas where these questions are coming up. And every group that I know that’s working on them is now very focused on racial bias and alignment and safety questions, in part because the models kind of reflect the data that’s there.

And it is, to some degree, a partial Rorschach test on our society, but, of course, also on where the data is. It’s a little bit like, classically, in the criminal justice system: an algorithm that goes across the reported data set captures the fact that, over the last decades, police forces have more [heavily] policed communities of color than they have others.

All the data gets enshrined, and it’s one of the things you have to be really careful about in things like risk and parole and credit and other kinds of things that are super important. And I think one of the really good things is everyone now goes, “Yep, we’re doing something on it.”

JM:
Yeah. And, in fact, I think one of the things about these issues of bias — which has shown up, as you said, in the criminal justice system, in financial lending, in hiring, for example — often the issue is with data and often the issue is how that data is collected. And often the issue is actually not the algorithms, quite frankly, but rather how society has set itself up to gather this data. And policing’s a good example, because we know, for example: There’s greater police presence in certain neighborhoods and places, so more data is collected. So are you surprised, therefore, when [certain] predictions are made from areas where a lot of data is being collected? Of course not.

Similarly, you see it on the other end of the spectrum in financial lending, where it’s the opposite problem — where if you’ve got people who are quote-unquote “off the financial grid,” algorithms often make bad predictions about whether they’re worth lending to or not. Whereas for people we know a lot about, we tend to make better predictions.

So this question about how society itself captures the data is as much of an issue as it is for the algorithms. Now, that’s not to say that we shouldn’t think about ways to spot that in the algorithms and how we curate the data. We should. But quite often these systems are actually highlighting our issues, quite frankly.

On this fairness and bias, I think you saw this paper, Reid, where some AI researchers tried to say, “OK, so, please, society: Give us definitions of fairness, so we can actually build algorithms that do that.” And I think, in that paper, they identified 21 different definitions of fairness. So, how do you even create algorithms for that? I think these systems are actually forcing us, as society, to ask ourselves some really hard questions.

I don’t think we’ve ever fully defined fairness. I was actually at an interesting symposium, which included AI researchers, sociologists, lawyers, and philosophers. And the general conclusion, by the way, in that discussion was: Society, we’ve never really defined fairness. We tend to use two proxies for it.

One proxy that we use is what you might call procedural fairness. So, in other words, if something’s gone through this process, we’ll assume it’s fair. Just because it’s gone through this process, we’ll take the output as fair. [The other is] what’s sometimes referred to as compositional fairness. We’ll say, “OK, if this group of people made the hiring decisions, we’ll accept what they say, because the group is made up of these people, this particular composition.”

But you notice, in both cases, we’ve not really defined what the actual fair outcome looks like. We’ve defined a process that we take as fair, or we’ve defined a composition to make the decision that we think is fair. So, in other words, we’re still ducking the question: What does fair look like?

“I think these systems are actually forcing us, as society, to ask ourselves some really hard questions.”

RH:
Well, and, obviously, part of the challenge of fairness — which is the reason why that’s such an excellent paper, which I think I first read because you sent it to me — is that it’s somewhat political, and human groups conflict. And they say, “Well, I want this definition of fairness, because it’s better for me.” And it’s part of the reason why these things are always complicated.

What are some of the best ideas that you’re seeing for how to deal with, call it, broadly, these corrections of societal biases? Because I think one of the things you and I share, although we’ve been in groups that don’t have this point of view, is that, look: One of the benefits of putting this in an AI system is we can fix it. We can make the system better. You go, “Well, OK, if current software algorithm systems are doing bad things when they are making parole recommendations or when they’re making financial credit decisions, they’re doing so, in part, because of the data that’s there.”

What are some of the things that you’re seeing that are like, OK, this could be a good fix, because then we could make society better, because we fixed this underlying bias that might have been there anyway — and so, as opposed to being institutionalized by the new AI software system, it can be improved? What are some of the ideas that you’re seeing that are the best things to implement on that?

JM:
A few things, but first of all, just to note, or acknowledge: There’s a good reason why people are concerned about algorithms in this sense, because even though they have the potential to do a lot of good, they tend to be what some have called formalizers and amplifiers.

Because, if you get the algorithm wrong, you’re going to deploy it at scale, and you’re going to bake it in. And so the effect, in some ways, may be much worse than a single biased judge, when in fact the whole system of judges is now relying on the system. So, this question of formalization and amplification is a legitimate concern.

But to your question about what some of the ways of getting at these things are: I think there’s some very cool technical work going on, where people are building ways to examine datasets and spot structural biases in the datasets themselves.

There’s some terrific work that some researchers, Silvia Chiappa being one of them, are doing [on] things like counterfactual fairness, which is: How do you actually build algorithms that try to understand and do counterfactuals on the data, to say, “If we took this part out, could we have made a different prediction? If we take that part out?” So, you can imagine, having AI actually help with that problem is one of the clever ways to spot the biases in the system. So I’m actually excited about that kind of work that’s going on.
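
As a deliberately simplified sketch of that idea, you can train a model and then ask the counterfactual question directly: flip a protected attribute for each individual and check whether the prediction changes. This toy flip-the-attribute check on synthetic data is an illustration only, not the full causal formulation of counterfactual fairness in the research mentioned above:

```python
# Toy counterfactual check: does the model's decision change for an
# individual if we flip only the protected attribute?
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

protected = rng.integers(0, 2, size=n)   # e.g. group membership
skill = rng.normal(size=n)               # a legitimate feature
# Historical labels partly tied to group membership, so the bias is
# baked into the data the model learns from.
y = (skill + 0.8 * protected + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X = np.column_stack([skill, protected])
model = LogisticRegression().fit(X, y)

# Counterfactual question: same person, opposite group label.
X_flipped = X.copy()
X_flipped[:, 1] = 1 - X_flipped[:, 1]

flip_rate = np.mean(model.predict(X) != model.predict(X_flipped))
print(f"decisions that change when only the protected attribute flips: {flip_rate:.1%}")

# A high flip rate flags that the model is leaning on the protected
# attribute itself, which is a signal to revisit the data and features.
```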

But I think the other part is just actually having different kinds of people be involved in the process. So, one of the things that I’ve been involved in, which I think you mentioned at the beginning — the National Academies of Sciences, Engineering, and Medicine put together this committee that I’ve been on, on responsible computing.

And you suddenly realize one of the things that happens is: You need people to ask the right questions. You need people to actually understand and ask the right questions — having different, multidisciplinary people, diversity especially in the disciplinary sense, who can actually ask the counterfactual questions in the development process. That can actually make a big difference.

I should point out, though, Reid: one of the other fun debates in the field is — much like there’s a fun debate about, Are these techniques sufficient or not? — the other debate has to do with approaches to developing these systems, in the following sense. There’s one camp that feels, “We can correct and improve these systems, if we put them out into the world and have people use them and capture the errors, etc. So if we do that at scale, having real people use them, that’s the way to improve them.” The other camp says, “No, no, no. We need to test and test and test and test and test and test, before we deploy anything.”

So you are seeing this play out in the real world, even in approaches to autonomous vehicles, for example, where one school of thought is: Put them out into real systems, and we’ll learn. Others say: No, no. Keep doing training runs and training runs. So this is one of the questions and competing approaches to this question of: How do we get these systems right?

RH:
Yep, I completely agree. One of the things you gestured at earlier, and this has obviously been in the dialogue for a number of years and played into your California Future of Work Commission, was: The Silicon Valley folks tend to go, “Oh my god, the future’s going to be here right away.” This isn’t even getting to AGI. This is the transformation of work. They tend to say, “Oh my god, it’s like a Star Trek future; robots are going to be generating everything. We need universal basic income; jobs are going away,” etc., etc.

Say a little bit about the actual work you’ve done looking at this, what [does] the actual shape of the next few decades look like, so that we can predict and intervene in ways that are good for society, whether it’s building the technology or understanding policy? What does that future of work look like?

JM:
Yeah, I think this is where we’ve got some really hard questions.

The commission you’re pointing to, this was an interesting arrangement. I co-chaired the commission with an extraordinary leader of one of the largest labor unions. And on the commission itself, we had people from business, from labor, from academia, technologists, and others. It was a very diverse group.

And when you look at these questions about the future of work, I think I’d parse the issues in a few categories. One is the issue around re-skilling. The re-skilling one’s an important one, because we know that jobs are changing. Many more jobs will change than will be lost, actually. So having people be able to adapt and keep up with those changes is a really large problem.

One of the fun, quick exercises you can do, Reid, is: Anybody tells you, “Oh, we’re doing re-skilling.” The next question you want to ask is, “OK, so how many people?” And what you typically find is that it’s been very hard to do re-skilling at scale. So people may say, “We re-skilled a hundred people, maybe a thousand people,” but the scale of the re-skilling that’s going to be required over the next decade or two is actually in the millions. So you’ve got those kinds of issues.

The other issue that we have is the wage effects. And let me go into a little bit of detail on this one. We’ve seen this play out in the real world. The good thing is that I think technologists are correct when we say, “These AI technologies are going to complement as opposed to replace people.” That’s generally true, but that often can have both positive and adverse wage effects in the following sense.

For example, imagine a technology that complements you, Reid, or complements me or complements a radiologist — that actually is great. It makes you and me and the radiologist way more productive, and everybody benefits. We are more productive. We earn more money. The outcomes are good for everybody. People get the benefit of the outputs. So it works out; that’s great.

But look at the other end, when you complement, for example, some more basic occupations — in some cases, the technology’s actually complementing and doing the value-added portion of that work, and what’s left over for the human is maybe the less-differentiated aspect of that work.

And what happens, as an economic matter, is that what you’ve just done is expand the labor pool available for doing that work. So, in other words, the classic example is the London Black Cab example, where, decades ago, in London, to be a Black Cab driver, you had to literally memorize the map of London. And there’s a test called —

RH:
The Knowledge.

JM:
The Knowledge, exactly. You had to pass that. And there have been some very good studies showing that the advent of GPS systems, for example, had an impact on wages, because all you now needed to do to be a really good driver in London was just know how to drive, because GPS would solve the knowledge part of it. And so what that does is expand the labor pool available, and in fact it can have a depressive effect on wages.

We have to think about these wage effects. And that’s typically what then takes you to, sometimes, the UBI [universal basic income] conversation, because what happens is it’s a recognition that: On the one hand, we’re creating, as suggested at the beginning, enormous economic potential. So, if we’re going to create all this abundance economically, and yet we may have these adverse wage effects or worse, shouldn’t we be somehow creating a way for people to generate income? That’s where the UBI question typically comes from.

Now, personally, I’m not a proponent of UBI, but I like the UBI discussion in the following sense, because it’s getting at a real question, which is: What happens when we are creating economic surpluses, and yet wages are not going up as much for everybody? What do we do about that? I like the UBI question for the debate it provokes about the wage effects, which, by the way, are already here.

One of the things we’ve learned from the California Future of Work Commission, and other studies across the country, is that: We already know there’s inequality. We know that technology is one of multiple factors contributing to that. So, the immediate question we’re going to need to deal with is not, Are we going to have any jobs?, but: Are the jobs going to pay enough? And that’s a very real question. You see all these commissions across the country and other countries, especially in the advanced economies. It’s a real question.

RH:
And clearly we’re already facing some of the things in that, although I think, personally, and I’d be curious to your point of view, most of the things that people are saying are currently wage effects of automation are more wage effects of globalization.

The automation is still very much coming, and it’s important to focus on. But it’s like, “Oh, the robots are taking your jobs!” Well, I’m not sure actually the question is yet that the robots are taking your jobs, as much as: Can the robots get here soon enough to empower higher-wage jobs? And Can we get the higher-wage-jobs part of it? is, I think, the more relevant part. What do you think?

JM:
Yeah, I think that’s right. I think the globalization in jobs and wage question was a legitimate and real question, if you look at the late ’90s, early 2000s, because that was, in fact, the driver of some of the wage effects we saw at that time. But I think that trend is going away, because we know that pretty much everywhere — including in places like China and others that were playing the labor arbitrage game — the wages are going up, too.

The dynamic going forward on wages is less about globalization.

By the way, if you go through it — typically the globalization/wage question plays out in manufacturing and some service-sector jobs — what happens there, if you looked at it now, [is that] the only places where you still have some extraordinary labor arbitrage going on tend to be a few narrow parts of manufacturing, typically furniture manufacturing [and] some portions of textiles. It’s not everywhere. In large parts of the manufacturing sector, which include automotive, chemicals, etc., that effect is not really at play anymore.

On a forward-looking basis, I don’t think the debate on wages is about globalization. It’s about the structure of the economy. We know that when you have an economy that is service-sector dominated — which most advanced economies are, ours certainly is — the wage question is very real. I mean, we can talk about manufacturing all we want. Today, manufacturing is 9-percent of the U.S. labor force. It’s not the dominant piece. The rest of it is everything else, including the service-sector economy. The last time manufacturing was a big part of the economy was 1958, [that] was the peak of it. It’s been coming down ever since.

“On a forward-looking basis, I don’t think the debate on wages is about globalization. It’s about the structure of the economy.”

RH:
Interesting. Yeah. Although, the classic problem is the service economy absorbs a lot of people in terms of number of jobs, but they don’t really get the leverage to get wages up. That is, in fact, an interesting challenge.

JM:
That is correct. And by the way, I’m being somewhat flippant about manufacturing, its size and scale. One of the nice things about having a manufacturing sector is that it has these multiplier effects around it. We know that when there’s manufacturing activity going on, the multiplier effect on adjacent services is actually a positive one. So there is a need to actually have manufacturing — not that everybody’s going to work in manufacturing, because I don’t think that’s going to happen — but it has these wonderful multiplier effects.

But you’re right, that’s the question, which is: How do we think about wages and income in a service-sector driven economy? Our portions of the service sector are the lucky ones. The rest looks like people who work in restaurants, people who work in services, etc. The wage structures of that are quite different.

RH:
Let’s shift to artificial general intelligence now. So, as you’re doing in Dædalus, we have a wide variety of AI researchers. Some are like, “Look, we’ve got fundamental things” — the things that you gestured at before, everything from symbolic reasoning to language, one-shot learning, transfer learning, etc. — “No, these things have not been… We have fundamental innovations coming.” Other folks are saying, “No, actually, the scale of these language models or the foundational models, together with some other innovations, will in fact make this stuff work.” And so these are the things that are kind of playing into this.

What are some of the observations you might share with our community today, our group, on how to think about AGI, soon? Possibilities, probabilities, constraints?

JM:
Yeah. So, first of all, just start with large language models. I mean, these have been remarkable, starting with the advent of that classic Transformer paper that Vaswani and others at Google did about three and a half years ago. That has led to everybody building a large language model now. I mean, Google went from BERT and now there’s LaMDA. OpenAI went from GPT-1, -2, and -3, and there’s more to come. Microsoft is building MT-NLG. And DeepMind has built Gopher.

So, there are just bigger and bigger models being built. The performance seems to improve with size, though not entirely, and there may be some limits to that. And what’s remarkable about large language models, as you know, Reid, is the fact that they’ve also been able to have these multimodal outputs. So you can go from natural language to natural language outputs, but also do natural language to software code. Microsoft and OpenAI did this, going from GPT-3 to Codex, which generates software code. You can do natural language to images.
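
To make the natural-language-in, natural-language-out point concrete, here is a minimal sketch using the open-source Hugging Face transformers library, with the small public GPT-2 checkpoint standing in for the much larger models named above (the library and checkpoint are assumptions for illustration, not what those teams use internally):

```python
# Minimal text-generation sketch with an off-the-shelf transformer model.
# GPT-2 is a small public stand-in for the large models discussed above.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The economic impact of artificial intelligence will"
outputs = generator(prompt, max_length=60, num_return_sequences=2, do_sample=True)

for i, out in enumerate(outputs):
    print(f"--- continuation {i} ---")
    print(out["generated_text"])

# The same prompt-in, text-out interface scales, in principle, from this toy
# model to the billion-parameter models mentioned above; code-generation and
# multimodal variants swap in different training data and output formats.
```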

That’s what’s often led to people starting to think about them as foundational models. So the question is: Are those a way to get to things that start to look like general intelligence?

People working in deep learning, people working in reinforcement learning — by the way, reinforcement learning is what you’ve seen at DeepMind and others, and OpenAI does quite a bit. DeepMind is using it to do all the Alphas (AlphaZero, AlphaFold) and, in doing so, to do science, etc., etc. — I think what people generally conclude is that these approaches still have a lot of headroom. There’s still a lot more. We’ve not reached the limits of these techniques and approaches yet, and we’re going to see many more innovations.

But you come back to this question of: Is that enough? I happen to think that that may not be enough, actually, because there are still some really hard problems about causal reasoning and meaning and understanding, about: Are we actually building in understanding and generating understanding?

We still have a hard time doing science. We still have a hard time, for example, generating systems that create novelty or conjectures — in mathematics, say, although there’s a fun, recent paper on knot theory and topology that came out of DeepMind that is kind of fascinating — but these are still very hard questions, questions about memory and persistence.

We don’t quite know how to do that. In fact, many still think that there’s a lot more we can learn from neuroscience, that we haven’t fully mined what we can learn from neuroscience about cognitive agents and models and how that works.

So, to declare my own biases, I tend to be, I think, in the camp of “we need more conceptual breakthroughs before we get to AGI.” But at the same time, I think we’re going to get a lot more out of these systems that we are building today.

The question is: At what point should we start to prepare for the possibility of AGI? And I don’t think we need to go all the way to AGI before we start to have systems where we need to be thinking about the implications. I think it’s one of the key questions, quite frankly, for the field: How do we prepare for that? How do we coordinate?

I think one of the amazing things that has happened is that the players and the researchers at the forefront of all these developments are the good folks, so far, I think, with the best intentions. Of course, they compete intensely, but I think they all want the best for society. I think they’ve written this into their missions, etc.

That’s good. But I think that may not always be the case. I mean, these systems can become so powerful. And so we need to start to think about: How do we approach questions about safety? How do we think about questions of control, questions of human alignment?

Stuart Russell is a professor at Berkeley. You’ve read his book on human-compatible AI, and others may have read it or find it interesting. This is a question of: How do we build provably beneficial systems that are roughly aligned with our interests and what’s best for us? And this is a very hard, technical, scientific question. But, quite frankly, this is a societal question. I think these are some of the hard questions. But, as I said, I’m of the view, Reid, that we need more than just deep learning.

RH:
I completely agree. And also, I think even the most sophisticated people on deep learning think that there needs to be some more, too. The real question is: Is it this much more? Or this much more? How many innovations is it in order to get there?

One of the things that you’ve gestured at, very importantly, at the very beginning of our discussion today, is that it is going to be essential to build and maintain public trust, because these systems are going to have a huge impact on people’s lives, the economic system, jobs, wages. There’s a variety of places where they can intervene.

One of the weirdnesses in the public trust question is that, if you look at the current activity, it’s all massively being driven by corporations. Actually, in fact, frankly, both within the U.S. and within China and other places, it’s all corporations. And obviously, there are some worries about corporations doing this. There are profit motives. There are some societal questions that are more urgent to society and individuals than they are to corporations. Fairness, for example, might be one of the things that people would worry about.

What are some of the ways that we, as technologists, should not only be thinking about it, but engaging in public dialogue and doing things that would help build and maintain, or even rebuild and maintain — because I don’t think public trust is at a current high water mark — in order to be doing that?

JM:
Yeah. And this is a very hard question, Reid. And I think it’s actually one of the things that’s complicated about AI compared to other technologies, which is that it is being led primarily by the private sector, as you said.

A few things come to mind. One is: The sector itself, I think, has to be more transparent about how it’s doing what it’s doing. And I think the public would actually be quite pleased. One of the things that I always find interesting, and this is even before my role at Google, [is that] it has always struck me that when you spend time with the people actually building these systems, and the current leaders who are developing these systems, pick any one of them — they’re actually very thoughtful about what they’re doing. And they can debate about, “Is this approach better?”

But you’ll find that they actually care about the ethical implications. They may just be debating how to achieve them, but much of what they’re doing is not as available and transparent to the public.

People don’t know that these discussions are happening, these debates are happening, these issues are happening, so they assume that it’s all entirely driven by the profit motive, and that’s not true. I think there’s just something about more transparency, which actually would show that a lot of thought is going to these systems. So, that’s one.

The other is, I think, some involvement by the tech sector and the developers with policy makers and regulators. I mean, I’m always surprised by how little understanding there is in regulatory and policy-making circles about how these systems work. You only have to listen to the hearings you see on C-SPAN and elsewhere to see: Oh my goodness, they really don’t understand this stuff. So I think there’s something about the continual education, involvement, and engagement with policy makers, so that all of us — and the people who are going to be making decisions about these systems — understand them.

I do also think that there’s some place, quite frankly, for some peer pressure and peer-group pressure in systems. One of the things I was quite pleased about a few years ago was when the Partnership on AI was created, which is intended to be a peer group of companies who are leading research in AI coming together, trying to establish their own set of standards.

I think here, there’s a lot to be learned, by the way, from what happened in genomics and biology. There was the famous Asilomar conference in the ’70s that actually created a peer set of rules and principles about how to do genomics research, which have largely held. And I know you’ve been a part of this, Reid.

This ecosystem has attempted to do those things. I think we need more of those kinds of things, but even that won’t be enough. I think in the end, we may [need] some clearer rules of the road. We may need to do that in a global sense, because it’s not just about the U.S. and about Silicon Valley.

These systems are being built everywhere. I think that’s why it’s a little bit complicated, because you’ve got a few different competing things going on at the global stage. One is the purely economic stakes. And the economic stakes, by the way, are not just for companies; they’re for countries too. So, the economic stakes affect countries and companies.

Then you’ve got the national-security stakes. I mean, you and I were on this task force on innovation in national security, and you could see the tensions on that task force. You’ve got innovators and economists on the one end saying, “Oh, but this is good for society and the economy.” And then people coming out with a national security background saying, “But no, no, no. This might not be as good for our national interest.” So the question is: How do we navigate those tensions?

I think one of the key differences between AI and other technologies that have had national security implications, like, I don’t know, nuclear science for example, is this: the nice thing about nuclear science is that governments typically lead in it, and so there’s an alignment. It’s governments who’ve largely led that work, in the nuclear age, in the Cold War. And all the capabilities are on the government side, so you have an aligned center of interest.

The nice thing about nuclear technology, by the way, is that if somebody uses it, we can detect it. AI is a little bit different, in the sense that, on the one hand, all the innovations are in the private sector. The private sector thinks about global economic opportunities, not just national questions. And so you’ve got that dissonance. The other dissonance you’ve got is the detectability-accountability question, which is much harder with AI systems, in a way, than it is in, say, biological weapons or nuclear weapons. So this is a much more complicated arena, and there’s a lot to think about.

RH:
We have a question from one of my favorite people from the audience, Selina Tobaccowala, which is, I think, a classic and important question here and applies across a wide variety of fields.

This one, in particular: “You say, ‘sexism and the bias in the algorithm’s data.’ How much of it is just better measurement and validation? And how much of it is the people doing the work, who will have a natural kind of validation, cross-check, understanding about whether or not they’re on track? What do you think these components need to be in order to improve our alignment and the outcome of these algorithms? What’s the balance and play between these?”

JM:
I think you need both. And I’d actually add a couple more. When I say both, I think making sure we diversify the people involved in the development and questioning of these systems. And here diversity is important, not just in the gender or racial-ethnicity sense, but also in the disciplinary sense, because you find that sometimes social scientists ask different kinds of questions. Ethicists will ask different kinds of questions.

So, I think the question is correct. I think there’s something about diversifying the people involved in the development of these systems. I think that there’s also the other part, which is better thinking and curation of how we train these algorithms. And that comes down to the data question, quite frankly, and how society collects, aggregates the data. But even there, I’d also add the technology itself can help too.

On the data front, I’d say we need to solve, as a society, how we collect this data, as in the policing example. But we also need to think about: Can AI itself actually help us spot those things, spot those patterns? So in other words: AI can actually be a tool to spot bias. And, frankly, you’re starting to see examples of systems that try to do that, particularly with toxic outputs, for example.

One of the nice things that some of the latest large language models do is that they’re starting to pre-check their outputs, using different, almost adversarial large language models to do that. So the AI itself can be a tool for that.
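
A minimal sketch of that generate-then-check pattern is below, with stubbed-in stand-ins for both models (everything here is an illustrative assumption; a real system would plug in an actual language model as the generator and a learned safety classifier or a second model as the checker):

```python
# Generate-then-check sketch: candidate outputs from one model are screened
# by a second scoring model before anything is shown to the user.
from typing import Callable, List

def generate_candidates(prompt: str, n: int = 3) -> List[str]:
    # Stand-in for a large language model; returns canned candidates here.
    return [f"{prompt} ... candidate response {i}" for i in range(n)]

def safety_score(text: str) -> float:
    # Stand-in for the adversarial checker (a learned classifier or a second
    # model). Here: a toy blocklist returning 0.0 for flagged text, 1.0 otherwise.
    blocklist = {"slur_placeholder", "insult_placeholder"}
    return 0.0 if any(word in text.lower() for word in blocklist) else 1.0

def respond(prompt: str, scorer: Callable[[str], float], threshold: float = 0.5) -> str:
    candidates = generate_candidates(prompt)
    safe = [c for c in candidates if scorer(c) >= threshold]
    # Fall back to a refusal if nothing passes the check.
    return safe[0] if safe else "I can't produce a good answer to that."

print(respond("Tell me about my new coworkers", safety_score))
```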

I think we’re going to need all these components. Now, how much you put weight on one of those elements versus the other, I think it’s very arena-dependent. In the work that we’re doing for the National Academies, how you think about that question in commercial environments, where you have market forces at work is very different than how you might think about it, for example, in health- and safety-sensitive arenas, like in healthcare or transportation systems, for example.

So I think there’s something about tailoring the mix of those things to the different settings and arenas. But you need all of them, I think.

RH:
Well, we have time for one last, quick, fun question, since we’ve gone through these very, very serious issues. James, I always recommend everything you write and everything you do, because it’s balanced and thoughtful and brings in considerations, and it’s erudite and has real depth of scholarship, but also understanding and pragmatism and usefulness. What things would you recommend to this audience about what stood out for you lately? Books, podcasts, movies — what would you recommend?

JM:
Masters of Scale.

RH:
No, I wasn’t looking for that one.

JM:
No, no, no, but it’s true, right?

RH:
Yes.

JM:
It’s true. Gosh. I mean, I think in terms of the topics that we’re talking about, your audience might find this collection of Dædalus interesting when it comes out, partly because it’s an amazing group of people writing it. And they have very different views, by the way. And you’ll see the debates when you read it. That’s one.

There’s a wonderful book that Eric Schmidt, Henry Kissinger, and Dan Huttenlocher put together, which tries to get at some of these geopolitical questions. That’s actually worth reading, and it gets at a lot of the issues.

And, quite frankly, there are some really good papers. I mean, one of the nice things about the AI field is that, even in the technical research, we’ve actually started to write good, approachable, readable papers. So I often point, for anybody listening who’s interested in deep learning, to the paper that was written last year by Yoshua Bengio, Geoffrey Hinton, and Yann LeCun. As you know, the three of them won the Turing Award for their work in deep learning. They were kind of taking stock of deep learning, where that particular set of approaches stands and what more needs to be done. It’s actually a very good and very readable paper.

So there’s a bunch of things like that, Reid, that I think are interesting.

RH:
A hundred percent. So that brings us to the end of our discussion. James, thank you for being with us today. As always, I would take this show on the road any day, as you know. Right?

JM:
Well, thank you. You and I still have a lot to debate and argue about with all these things, so…

RH:
We will be doing more in a variety of circumstances. I think everyone has seen we’ve had a lot of people logged in, a lot of our treasured members of our Greylock community here, because they are looking for expertise and insights, such as the ones you provide. And we appreciate you spending this really valuable time with us. Thank you again, James. As always, erudite, insightful, comprehensive, and deep. Thank you very much.

JM:
Thanks for having me, Reid.

 

WRITTEN BY

Reid Hoffman

Reid builds networks to grow iconic global businesses, as an entrepreneur and as an investor.


James Manyika

Google's SVP of Technology & Society