Searching for Good: A Conversation with Google’s Peter Norvig

 

 Current Affairs Editor Alana Daly Mulligan talks with Google’s Director of Research Peter Norvig about the development of Artificial Intelligence: the good, the bad, and the misunderstood. 

 

“Google started out, and really, Google couldn’t do anything,” Peter Norvig tells me. It was not something I expected to hear from the mouth of the Director of Research at Google. He elaborates, saying the last 15 years have been essential in the company’s development from its humble beginnings as a search engine into not only a source of answers but a provider of solutions. I start by asking Norvig how he would Google himself without using his name. He chuckles. “That would be harder,” he replies. I get it. After all, he’s one of the world’s foremost minds on Artificial Intelligence. Before taking on his current role, he was the director of Google’s core search algorithms group, as well as the co-author of Artificial Intelligence: A Modern Approach, one of the leading textbooks in the field. “I guess at heart I’m just a programmer,” he says, eminently humble. “I think of AI as a way of communicating with machines in a more natural fashion. In traditional programming, you have to be an expert, you need to have these skills, you need to be able to write down instructions that a computer can understand. In artificial intelligence and machine learning, we try to make that easier by saying rather than having to tell the computer what to do step by step, just tell it what you want and show it some examples and let it learn from that data.” 
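To make that contrast concrete, here is a minimal, hypothetical sketch, not from the interview and assuming Python with the scikit-learn library, of the difference Norvig describes: a rule a human writes by hand on one side, and a model that infers its own rule from a handful of labelled examples on the other.

# Hypothetical illustration of Norvig's point, not Google code.
from sklearn.tree import DecisionTreeClassifier

# Traditional programming: a human writes the rule explicitly.
def is_spam_by_rule(word_count, link_count):
    return link_count > 3 and word_count < 50

# Machine learning: a human supplies labelled examples instead,
# and the model learns a rule from that data.
examples = [[120, 0], [40, 5], [200, 1], [30, 8]]   # each row is [word_count, link_count]
labels = [0, 1, 0, 1]                                # 0 = not spam, 1 = spam

model = DecisionTreeClassifier()
model.fit(examples, labels)

print(model.predict([[35, 6]]))                      # the learned rule classifies a new message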

 

I wanted to talk to Norvig about how Artificial Intelligence helps us learn, given that many students around the world have been operating on a “University of Bed” basis for over two months. Norvig is one of the leaders in the field of online learning; his 100,000-student class at Stanford in Fall 2011 is perhaps one of his better-known feats among those outside the realms of his research. For Norvig, AI has long been connected with the importance of knowing how students learn most effectively, in the service of better teaching. While there is most certainly promise there, he thinks the progress we have made in AI and digital learning has been slow for a number of reasons: “One is I overestimated the value of the content [of the lesson] and underestimated the value of the personal relations and I think that really the key to learning is a close relationship and a desire to be part of a team.” This is certainly something many students can relate to given the current circumstances: the desire for a group of friends to learn with, or the in-person support of another human. “Motivation is more important than information,” Norvig tells me, “so the online stuff was going in the direction of being less personal, less motivating.”

 

The second reason Norvig gives for the slow materialisation of online learning success is quite simply the pace at which we can gather data on how students learn: “What we sometimes find is that in order to be able to predict two years into the future, you have to run students through two years’ worth of classes! There are no shortcuts, you can’t do it faster than real-time. And that’s true in education; in other areas we’ve been able to take shortcuts…in education, we don’t have a good model of how a student’s brain works and what helps them learn or not learn. So we can’t create any simulations and we’re left learning from real life, one year at a time.”

 

With these speedbumps in sight, I ask Norvig what the future of AI and education looks like. “One of the things I’m really interested in is at the creation end.” He elaborates that he wants to create tools for teachers and students that make it easier to create classes and get feedback on them. Imagine an application like Google Docs, but instead of opening a word-processing document, it opens a teaching-processing document that allows you to plan lessons, collate interactions with students, get feedback, and improve both the online classroom and the in-person learning environment. It is an exciting thought. He also sees flaws in the tools that currently exist and says we need to improve on what we have so that expert knowledge is not required to operate them: “I like to collect really good examples of learning experiences, and when I see really good ones, it’s from someone who’s an expert in ten different things! Where they have to really understand their subject matter, really have to understand how to be a good teacher and understand this video tool and web programming tool…that’s not really going to scale, we can’t rely on everybody being an expert in ten things.”


Photo by Morning Brew on Unsplash

 

The big general question, of course, is what should we be afraid of when it comes to AI? He puts my mind at ease: Arnold Schwarzenegger won’t be going all Terminator on us quite yet. But, he tells me, it’s the less obvious changes we should worry about: “You should be wary of any new powerful technology, so I’m glad that people are starting to think about the possible societal effects of AI. It’s not that the robots are going to take over and try to go after us; it’s that we’re going to take over and give [AI] the power the same way we gave automobiles the power, and we should make sure we do that in the right way.”

 

After the education warm-up round to fuel me with faith in humanity, I ask some more uncomfortable questions. Launching right in, I ask whether Google and other tech companies are exploiting people through poorly understood consent requirements online, as well as loopholes in international law, and how this should be dealt with. “I spent a bunch of time recently almost as a grief counselor for somebody on our team who was doing the GDPR compliance, because it was just such a mess and it was our fault,” he says, half-joking, but mostly not. We laugh regardless. Oh, GDPR, you old so-and-so. 

 

He offers me a four-pronged approach to tackling exploitation, citing the self-policing of companies, new industry standards, the influence of governments and the mediation of third-party groups as ways to protect the individual online as the Age of Data continues to spiral outwards: “We should’ve been doing that all along. It’s not that we’re trying to get away from that; it’s just that all this bookkeeping is hard, and we didn’t do it because we didn’t have to in every case, but now we have to. The end result will be a better one.”

 

From that, clearly feeling in a cheery mood, I ask Norvig about Google’s old slogan “don’t be evil”, which has since been retired as the company line. Was the rapid development of tech leading companies to leave morals on the sidelines? It wasn’t the most original line of questioning, but an important one nonetheless. “I think the world is becoming much more complicated,” he answers. “When the ‘don’t be evil’ slogan came out, the company was 200 people, all in the one building, and it was good to have an overall message. Everyone had the same understanding of what evil was…But when you’re 100,000 people spread across the world, it’s harder to do that.” The globalisation of the company means there is no copy-and-paste way to impose the nuanced Western idea of ‘evil’ onto other countries; procedures and guidelines now light the way instead. He reiterates that just because the slogan is gone doesn’t mean Google is endorsing “be evil”; if anything, “Google feels a much stronger responsibility to be a good citizen”, he says. 

 

Good citizenship is something Norvig mentioned a few times throughout the interview, and it seems central to the message he believes Google is trying to convey. He says it’s important to be informed, even if just as citizens. He points out that AI has crept into our lives in subtle ways, and it will keep doing so. Good is there, as is bad; it’s all about what you want to find, what you search for.