Google's James Manyika on Ensuring the AI Transformation is Done Right
Name an industry, work environment, or geographic locale, and James Manyika will have valuable insights on how it can be impacted by technology and business.
From his nearly three-decade tenure as a highly sought-after advisor at McKinsey, to his roles on numerous influential boards and commissions, including vice chair of the Global Development Council under the Obama administration, Manyika has a singular capacity for seeing complex questions of society, economics, and tech from a kaleidoscope of seemingly contradictory vantage points.
So it’s no surprise that Google recently appointed him as the company’s first-ever SVP of Technology and Society.
What started for Manyika as an undergraduate research project on neural networks became a PhD in machine learning (the then-preferred term for AI) and a lifelong pursuit of applying that understanding to large-scale societal problems.
From that early research to his new post at Google, the throughline of his work, says Manyika, has been a deep focus on questions of: “What are all these advanced technologies going to mean for society, in the best sense, in the sense of what opportunities will be created? What things do you want to make sure we manage well?” and, of particular importance to him with AI: “How do we get it right? How can we make sure that things go right?”
In this interview from our Iconversations series, Manyika talks with Reid Hoffman, Greylock general partner and longtime friend and colleague, about the current state of AI, where on the horizon he sees AGI, and his take on the problems of bias presented — or rather, illuminated — by algorithms, as well as what can be done to solve them.
You can watch the video from the event on our YouTube channel, or listen to the full conversation here:
Hi, everyone. Welcome to another edition of Greylock’s Iconversations. I’m Reid Hoffman, a general partner at Greylock, and your host for today’s event. I am thrilled to welcome James Manyika as our guest today.
James is the newly appointed SVP of Technology and Society at Google, a role he has taken on following more than 27 years at McKinsey, where he advised leaders at many of the world’s top tech companies.
James has focused on artificial intelligence, robotics, and globalization for his entire career, and he’s had extensive experience in pretty much every type of workplace imaginable, from academia to government agencies to private companies and nonprofits. His contributions range from books and articles to speeches, lectures, and, of course, countless moments of critical advice in top-secret capacities.
He’s held several government advisory posts, including as vice chair of the Global Development Council under the Obama administration, and he was named to the Digital Economy Board and National Innovation Board. He also serves on the board of the Council on Foreign Relations and recently co-chaired the State of California’s Future of Work Commission.
Today, he serves in various science and technology capacities, including as a distinguished fellow of Stanford’s Human-Centered AI Institute, fellow and visiting professor at Oxford, and board member at the Broad Institute of MIT and Harvard, among others. He is also a fellow of the American Academy of Arts and Sciences and a member of the National Academies of Sciences, Engineering, and Medicine’s Committee on Responsible Computing.
He’s also a great friend of mine, with whom I have shared many incredible initiatives and experiences, including our recent trip to the South Pole, where I believe James was the first Zimbabwean-born visitor to the geographic South Pole.
James, as always, thank you for being with us today. You’ve had a long and busy career through one of the fastest changing periods in our history. And, to this day, you are constantly expanding your exploration zone, not just to the South Pole. Welcome.
Well, thank you, Reid. I’m excited to do this with you. It’s always fun to be in conversation with you.
So, we were at Oxford together, but didn’t know each other then. You’ve been focused on artificial intelligence and robotics for your entire career, including then, from the technical aspects to the ethical aspects, which obviously is super important for designing, implementing, [and] founding technology. What set you off on this course?
Well, thanks for asking the question. In fact, apart from growing up watching all the science-fiction films — 2001: A Space Odyssey and everything — I had a very peculiar thing happen when I was an undergraduate in Zimbabwe.
I was looking for an undergraduate research project, and it turns out that a postdoc was visiting from Canada. This postdoc had been one of Geoff Hinton’s students, actually. He said, “Well, why don’t you do a project building a neural network?” And I said, “What is that?” So that was actually the first time I ever programmed a neural network, because, as some of your audience will know, Geoff Hinton was one of the people who pioneered the successful run we’ve had now with deep learning and neural networks.
That was how I got started. From that, I got hooked, [and] went to Oxford. At Oxford, I did a few different things. But when I finished, I’d done a doctorate in AI and robotics. And that was a fascinating time.
At the time, by the way, there was often a reluctance to call this AI, because of the previous period of AI winter. So we actually called it machine perception, machine learning, and other things — anything but AI.
Exactly. And one doesn’t normally go from a PhD in machine learning, machine perception, to McKinsey. What was that move? And in particular, of course, and we’ll get into this in some depth, thinking about societal and economic ecosystems as part of this.
Well, it was a very accidental thing for me, Reid, because part of it was, quite frankly, an excuse to be in California, because when McKinsey made me an offer, I could be in California. And I’d been spending some time, by the way, even after my PhD, at the Jet Propulsion Laboratory as a visiting scientist in the Man Machine Systems Group, because some of the things I worked on in my doctorate were applicable then. And besides, a few friends of mine had this crazy idea that we might actually build an autonomous car.
So while my other friends, Stuart Russell and others, were at Berkeley working on this, I said, “Well, maybe this McKinsey thing is an excuse to be in California, to be in the Bay Area.” So I actually took a leave of absence from the other stuff I was doing, just as an excuse to be in the Bay Area, and I guess I ended up staying.
But more seriously, though, I think part of what I learned at the time was that I was fascinated by very large-scale problems. And, of course, technology is a big part of that, but also just thinking about large-scale societal questions was what fascinated me. And McKinsey seemed to be a great platform and place to do that, particularly at the Global Institute, [which] I ended up leading for many years.
Yep. Well, the Global Institute is obviously producing a huge amount of very interesting work; practical’s not quite the word, but it’s rooted in what kinds of things to do and what trends are happening. So that makes sense.
So: What’s this new role at Google? What are you going to be doing?
Well, thank you. It almost feels like a continuation of the things that I’ve been passionate about, Reid, in the sense that it has this big title of technology and society. But what it’s really about is to really think about: What are all these advanced technologies going to mean for society, in the best sense, in the sense of what opportunities will be created? What things do you want to make sure we manage well?
So I’m going to be spending a lot of time doing research, spending time with the amazing AI teams at Google, Jeff and Demis and others, and also thinking about the next generation of bets and investments and how those might affect society. And, quite frankly, talking to a lot of people inside Google, outside of Google, and trying to engage in these issues around technology and society.
You and I have spent a lot of time thinking about, “How do we make sure all of this— How do we get it right? How can we make sure that things go right?” And I’m particularly excited about making sure, in an area like AI and robotics, where I’ve spent most of my time — I want to make sure it turns out right.
One of the things that you have coming up is through the American Academy of Arts and Sciences, where both you and I are members. They publish a journal, Dædalus, known for its very thoughtful thematic issues. You have an AI issue coming up.
To begin to dig into these issues and some of the things that you’ve been doing at McKinsey and will be doing at Google: What are some of the issues that you’re going to be addressing in that issue of Dædalus? And what are some of the things that you think that technologists should be thinking about, as they think about this amazing technology; this Renaissance we’re going into in AI?
Well, first of all, I was so excited when the American Academy asked me about a year ago to curate and edit this special edition of Dædalus. Normally when Dædalus comes out, it’s eight essays, but this one’s going to have 27 essays. And what’s fun about that is I was able to arm-twist my friends and people I know. So I’ve got everybody contributing an essay. Jeff Dean has an essay, Kevin Scott at Microsoft has an essay. Mira Murati from OpenAI has an essay. Stuart Russell from Berkeley.
So you’ve got half the contributors [as] sort of AI pioneers and people who really work in the frontier of the technology. The other half are people who are thinking about the implications in society. So they are people like Michael Spence, a Nobel laureate, philosophers, and ethicists. This issue is actually called “AI in Society.”
But if I come to your question, Reid, about where we are in this, I think, on the technical side, it’s a very exciting time. I mean, your audience will know, and the participants will know, that we’ve had an incredible run with techniques like deep learning and reinforcement learning. And deep learning especially got a turbocharge recently with the development of transformer models. So these systems are working remarkably well.
One of the debates you will hear in the collection, and it’s very much alive in the community, is: Are the current approaches sufficient to get us to remarkably powerful AI, or even ultimately artificial general intelligence? So this is one of the debates in the field: Are these techniques and approaches enough? If you take a room of AI people, half of them will say, “Oh yeah, this is all we need.” And the other half will say, “Well, this stuff is great, but we need so much more.”
And usually what they’re getting at there is [that] there still are some very hard problems in AI. Things like: Can we actually do causal reasoning with these systems? Can we get to issues of meaning? Can we do transfer learning? Can we do what Daniel Kahneman and others describe as System 2 tasks? So there are still some very hard problems.
And the debate is: Are these techniques enough, or are we going to need other things? By the way, that debate typically comes up when you ask people about AGI and how far away it is. Much of that debate hinges on this question. The people who think we’ve got all the tools and techniques will say, “Oh no, it’s very, very close.” The people who don’t think we’ve got enough will say, “No, no, no. We still need some major additional conceptual breakthroughs before we can get there.” This is one of the fun debates. And you’ll see this in the collection, at least on the technical question.
Now of course, there are other questions that are being debated in the edition. I’m sure we’ll get into some of the societal issues, the economic and jobs implications. And the topic you and I have fun with all the time: great-power competition. Right? How do you think about how this plays out on the global stage? It’s going to be a pretty broad, fascinating discussion.
Yeah. Let’s use that as the initial discussion. Because one of the things I think sometimes gets lost in this discourse is, you say, “Well, look, there’s two camps. One thinks that AGI is 10, 20, 30 years in the future.” And there’s always this little betting pool of when you get to a 20-percent or 50-percent or 80-percent chance in [that] number of years. And you and I both run that little opening poll question in groups and rooms. But even if you don’t get to AGI, the transformative impact of this new Renaissance in AI is going to be huge.
Say a little bit about what you see as coming, even if you don’t get to AGI. And what are some of the societal questions that technologists should be thinking about, and we as a broader discourse? So, kind of: what’s possible, even with just machine learning as it is, and then what are some of the important questions for us to address?
First of all, there’s just the economics of it. One of the things that is truly exciting is the amazing transformational potential of these technologies for economic growth and prosperity. A quick way to see that is just 30 seconds of economic theory. If you look at how the economy grows (people got Nobel prizes for this) there’s something called growth decomposition, which Bob Solow and others developed.
So you get GDP growth by getting productivity growth and labor supply growth. You put those together; GDP grows. Now, with aging and other things, much of our forward-looking economic growth is going to come from productivity growth. And at the core of productivity growth is technological innovation. And at this point in time, AI and related technologies have extraordinary potential to transform how productivity growth happens. So, in that macro sense, we need it for the economy and economic growth.
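The growth-accounting identity Manyika is describing can be sketched in a few lines. The numbers below are purely illustrative, not figures from the conversation:

```python
# A minimal sketch of the growth-accounting identity: GDP growth is
# (approximately) productivity growth plus labor-supply growth.
# All rates below are hypothetical, for illustration only.

productivity_growth = 0.015   # 1.5% annual growth in output per worker
labor_supply_growth = 0.005   # 0.5% annual growth in the workforce

# Put the two together and GDP grows.
gdp_growth = productivity_growth + labor_supply_growth
print(f"Approximate GDP growth: {gdp_growth:.1%}")  # → Approximate GDP growth: 2.0%
```

With an aging population, the labor-supply term shrinks, which is why, as Manyika notes, future growth depends so heavily on the productivity term that technologies like AI can drive.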