Peter Norvig, director of research at Google. Photograph: Bloomberg/Getty Images

Google's Peter Norvig: 'I have the best job in the world'

Google's director of research talks artificial intelligence, personal computing, mapping, and what the internet giant is planning next

"I already have the best job in the world at the best company in the world," says a note on Peter Norvig's personal website warning recruiters not to bother contacting him. The job: director of research. The company: Google.

You don't have to be a Google fan to see why Norvig would be happy there at this point in time. In every generation there is a handful of labs where gangs of smart people cluster. In this generation, for this moment, one of them is Google, which seems to have recruited half the smart people in the known world.

Say that to Norvig, and like a flash he'll ask for the names of the other half; but of course even Google has its limits. "We don't get our pick of everyone because there are some things we can't offer," he says. It's not the place for someone who wants to start their own company or work in a small one, and other than its work on self-driving cars, Google doesn't fund research on hardware, although it does give grants to some university projects.

"We still have to make choices internally. There's a little more freedom than at a startup: bad choices won't drive you immediately out of business, but you can't say, here's something I want to do and here are 20 spare engineers." Instead, the reality of having to move people around to pursue new ideas forces the setting of priorities, just like anywhere else. Some of these priorities sound just plain weird: besides the much-reported self-driving cars and augmented reality glasses there are rumours of space elevators and robots.

In the 1960s and 1970s in the US the famous clusters were at Bell Labs, IBM's Watson Research Lab, and Xerox PARC. All three were famed for developing things that had nothing to do with their company's core business – and also for failing to exploit their inventions successfully. PARC in particular became famous for having multiple future industries born right there in its lab and letting them all escape to make other companies rich: graphical interfaces, personal computers, desktop publishing, the Ethernet networking standard.

Norvig is conscious of this past, and when it's mentioned, he brings up the 1999 book Fumbling the Future: How Xerox Invented, then Ignored, the First Personal Computer, by Douglas K Smith and Robert C Alexander, in order to argue with its interpretation of events.

"The book says they fumbled the future, but in a way they invented the future," he says. "I think they rented the future." He goes on to outline his idea of the train of thought: "There will be a day in which people can afford PCs, but we're not quite there yet. So take $200,000 and give researchers personal computers so we can see what the future is going to be like. In a sense, we're doing the same thing at Google." That would be the cars … the glasses … the 16,000 computers thrown together across 1,000 servers and set to examining 10 million 200x200 pixel single-frame images taken from YouTube videos recognise cat to see what they come up with. "Sometimes that's the hard part – imagining what's going to be possible and saying, how might it be done?"

But Norvig is also conscious that those labs often produced research their companies could not exploit. Google, he says, doesn't work that way: its research is more closely integrated into the rest of the company.

"In some ways we're similar to something more like Intel, where it has research groups that try to start new businesses, and if they kickstart something and somebody else makes most of the profit from that new business, they're fine with that as long as the industry buys Intel chips. We're similar – if we invent something new, even if we don't own it, if it brings two new people to use the internet that didn't before, the odds are that at least one of them will become our customer. So it's a success for us if we launch a new industry."

This explains the cars and glasses. "We think of them as extending from a strength we already have - cars, from our mapping capability, and glasses similarly, from communications and location services," he says. "We have to make a plausible case that it connects to strengths we have." Acceptance of these technologies, he thinks, will come faster than we may expect: his teenaged kids are frustrated that self-driving cars won't be on the market soon enough to excuse them from the need to learn how to park.

A defining moment for Google's mapping services came with 9/11. For one thing, that was when the shift from TV to the web for breaking news became apparent. Both 9/11 and Hurricane Katrina, which devastated New Orleans in 2005, showed Google something it didn't know about its own services: "We thought we were building an atlas that you buy once a decade and look up stuff, but people were asking us, 'How does New Orleans look today that's different from yesterday?' and we realised there was a time component." Norvig says that extra dimension of time will only become more important as the company's mapping coverage of different parts of the world grows. "People will demand more up-to-date coverage."

A common theme throughout Norvig's career is work on artificial intelligence. He began as a mathematician, but moved to computers when he found them easier. As early as the mid-1980s, he began moving toward probabilistic reasoning and dealing with uncertainty. The theorems this kind of work is built on – the work of the 18th-century English mathematician and minister Thomas Bayes – are in use everywhere now, but at that time were still regarded with great suspicion, even in the AI community. For one thing, to work effectively, Bayesian systems need a lot of numbers and statistics to draw on, and no one could yet see where these were going to come from. For another, the mode of thinking seemed too different from the way human brains operate: people don't think through problems by using numbers, the argument ran, so programs shouldn't either. Both objections have been answered in the decades since. The first, because huge amounts of data are now available. As for the second, while people don't do large blocks of arithmetic in their heads, analogies can be drawn between the electrical and chemical processes in our brains and probabilistic reasoning.
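
For the unfamiliar, here is a minimal sketch of the Bayesian updating Norvig is describing – Bayes' theorem revising a belief in the light of evidence. The spam-filter framing and every number below are invented for illustration, not drawn from Google's systems:

```python
# Illustrative only: the scenario and all probabilities are assumptions
# invented to show the shape of Bayes' theorem, not real statistics.

def bayes_posterior(prior, likelihood, evidence):
    """P(hypothesis | data) = P(data | hypothesis) * P(hypothesis) / P(data)."""
    return likelihood * prior / evidence

# Toy question: how likely is an email to be spam, given that it
# contains the word "prize"?
p_spam = 0.2                # assumed prior: 20% of all mail is spam
p_word_given_spam = 0.3     # assumed: "prize" appears in 30% of spam
p_word_given_ham = 0.01     # assumed: ...and in 1% of legitimate mail

# Total probability of seeing the word at all (the "evidence" term).
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

print(bayes_posterior(p_spam, p_word_given_spam, p_word))  # ~0.88
```

The point is the shape of the calculation: a prior belief combined with statistics gathered from data yields an updated belief – which is exactly why such systems are hungry for the numbers Norvig mentions.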

"And," says Norvig, "people built systems and they worked, which is the best way to convince somebody."

An example of the kind of system Norvig is talking about here is Google's Translate facility, which was built by researchers who in many cases had no knowledge of the languages (other than English) they were working with. A baby learns language by hearing and imitation: total immersion. A student of a new language learns vocabulary lists. A linguist studies grammar, literature and conversations. Google's computers did none of these. Instead, Google took advantage of the web, where it's easy to find large numbers of matched pairs of already translated documents. These were statistically analysed to find billions of word pairs that could then be used to learn how to map phrases to phrases – like, in Norvig's analogy, solving a jigsaw puzzle. The phrases in turn help disambiguate the meanings of words that are commonly used in multiple ways: eliminate the ones you know, see what's left and what new correspondences you can find.
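
As a rough illustration of that statistical approach, here is a toy sketch: the three-sentence "corpus" and the greedy elimination pass are invented for this example, and real systems use billions of pairs and far more sophisticated alignment models.

```python
from collections import Counter, defaultdict

# Tiny invented corpus of already-translated (English, French) pairs;
# real systems mine millions of such documents from the web.
parallel_corpus = [
    ("the house", "la maison"),
    ("the blue house", "la maison bleue"),
    ("the flower", "la fleur"),
]

# Count how often each English word appears alongside each French word.
cooccur = defaultdict(Counter)
for english, french in parallel_corpus:
    for e in english.split():
        for f in french.split():
            cooccur[e][f] += 1

# Greedy "jigsaw" pass, as in Norvig's analogy: take the most confident
# pairing, eliminate both words, and repeat on what's left.
remaining = {e: Counter(c) for e, c in cooccur.items()}
while remaining:
    e = max(remaining, key=lambda w: remaining[w].most_common(1)[0][1])
    f = remaining[e].most_common(1)[0][0]
    print(f"{e!r} -> {f!r}")
    del remaining[e]
    for counts in remaining.values():
        counts.pop(f, None)
    remaining = {w: c for w, c in remaining.items() if c}
```

Even at this toy scale, the frequent word "the" pairs off with "la" first, and eliminating it leaves the rarer pairings unambiguous – the jigsaw-puzzle effect in miniature.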

Back to the 16,000 processors, built into a neural network with a billion connections. After three days, it identified cats with an accuracy rate of 74.8%. This is what huge amounts of data will do for you: give a powerful enough computer enough material to work on and it can develop its own concepts, which it can then use for pattern recognition. Yet the results, which Norvig described at this year's Singularity Summit, haven't made him more of a believer that the steady exponential increase in computing power is leading us to the Singularity, the moment when artificial intelligence matches human intelligence - and then passes it. Norvig is more dubious about this prospect than you might expect, given that he's an adviser at the Singularity University.
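
What "developing its own concepts" looks like at toy scale: the sketch below uses Oja's rule, a classic single-neuron unsupervised learning rule – emphatically not Google's actual method, whose network had a billion connections against this one's 16 weights. The 4x4 patches, the hidden pattern and the learning rate are all invented for the example.

```python
import numpy as np

# A toy version of unsupervised feature learning: a single "neuron" shown
# many unlabeled patches discovers their dominant pattern on its own.
rng = np.random.default_rng(0)

# Fake dataset: 4x4 patches that are noisy copies of one hidden pattern.
pattern = rng.standard_normal(16)
pattern /= np.linalg.norm(pattern)
patches = np.array([pattern * rng.choice([-1, 1]) + 0.1 * rng.standard_normal(16)
                    for _ in range(2000)])

w = rng.standard_normal(16) * 0.01   # small random initial weights
lr = 0.01
for x in patches:                    # one pass; no labels anywhere
    y = w @ x                        # neuron's response to the patch
    w += lr * y * (x - y * w)        # Oja's rule nudges w toward the pattern

# The learned weights align with the hidden pattern (up to sign).
print(abs(w @ pattern))              # close to 1.0
```

Nobody told the neuron what the pattern was; it emerged from the statistics of the data – the same principle, at a vastly smaller scale, as a network watching YouTube frames and ending up with a detector for cats.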

"My biggest concern is people who are too specific about dates," he says. Even Oxford's Stuart Armstrong, who pinned it down at this year's summit to between 10 to 100 years from now, seemed to him too specific.

"I support the Singularity Institute because I think its message that there is a lot of change happening and accelerating, and it's going to have effects on society and people should be aware of that, is a good message."

Even so, he sympathises with the late John McCarthy, who worked on artificial intelligence for more than 50 years and even late in his life (he died in 2011) dismissed the Singularity robustly as "nonsense". In preparing for his talk at the 2007 Summit, Norvig did some research to answer the question, "Are we at a specific point today that's different from the past?"

Using keywords and phrases such as "AI" and "unlike past" to pull out likely candidate papers, he read through abstracts and sorted them by decade.

"Every decade there were a couple of new ones, and then some of the same ideas came back again. I couldn't see anything that said that this decade is distinct from the previous ones. They all seemed like some old ideas, some new ideas, and we think the new ones will help. I didn't see anything about now we've got it. So I guess I'm with John. We're not at a privileged point in time." In sum, "We're inventing new stuff, but it doesn't seem that different today than it did in the past."

Certainly, we have systems that help us design complex things, from bridges to new types of computer chips, but it's still a partnership between human and machine. "This idea that intelligence is the one thing that amplifies itself indefinitely, I guess is what I'm resistant to. Intelligence can let you solve harder problems, but some problems are just resistant, and you get to a point that being smarter isn't going to help you at all, and I think a lot of our problems are like that. Like in politics - it's not like we're saying that if only we had a politician who was slightly smarter all our problems would go away."

This is the more subtle problem: do smart people overestimate the value of intelligence?

"Kevin Kelly [the founding executive editor of Wired] and I talked about this; he calls it 'intelligentism' – this prejudice that intelligence is the only attribute that matters. We think intelligence is important – we call our species after it – but if we were elephants maybe we'd be trying to have super strength, or if we were cheetahs super speed. There are these societal problems that are hard because of the way they are, and it's not just that we're not smart enough to solve them."

This article was amended on 27 November 2012. The original incorrectly described Peter Norvig as a fellow of the Singularity Institute. He is an adviser at a different organisation, the Singularity University.
