Stuart Russell on Why Moral Philosophy Will Be Big Business in Tech

Stuart Russell, UC Berkeley computer science professor and co-author of the standard textbook "Artificial Intelligence: A Modern Approach." (JUAN MABROMATA/AFP/Getty Images)

From Hollywood to Silicon Valley, California leads the world with big ideas. On Monday we launched a new series focusing on a few of them.

Our first episode comes from Stuart Russell. He’s a computer science professor at UC Berkeley and a world-renowned expert in artificial intelligence. His idea?

“In the future, moral philosophy will be a key industry sector,” says Russell.

Translation? In the future, understanding human values and the process by which we make moral decisions will be big business in tech.

Russell’s idea is at the center of a debate going on right now in computer science.

“So, imagine, if you want to build a robot to go in people’s homes,” Russell says. “This is something that could happen in the next decade.”

He says that at first robots will do chores around the house, such as cooking, cleaning and laundry. But eventually they will take on more human tasks.

Right now in Japan there’s a robot named Pepper that’s designed to serve as a human companion. It’s being tested with senior citizens. The idea is that instead of getting your granny a cat to keep her company, you’d get her Pepper. Russell imagines that one day robots will take care of our kids.

“If you want to build a robot to go into people’s homes, you don’t want to come home and find it’s put the cat in the oven for dinner, thinking that was a good thing to do because the kids were hungry and there was nothing in the fridge, right?” asks Russell.

But how would the robot know that’s not what you wanted?

“You would want that robot preloaded with a pretty good set of values,” Russell says. “So presumably the robot companies will get their values loaded into the robot from a values company.”

The humanoid robot Pepper chats with children at a high-tech gadgets exhibition in Tokyo. (YOSHIKAZU TSUNO/AFP/Getty Images)

Sounds a little creepy, no? Russell says fear of the brave new world of robots is as old as the word itself. In fact, the word “robot” was coined in a 1920 Czech play, “R.U.R.,” in which robots take over the world. From “Frankenstein” to “The Terminator,” that theme has run through the arts and popular culture ever since.

But Russell says for the most part, scientists didn’t take such concerns seriously.

“The normal response to those kinds of things is to say, ‘Oh well, you know it’s a long way off in the future, so we don’t have to worry about this,’ ” says Russell.

But recently that attitude has changed. In the past few years, scientists have been more vocal about the dangers artificial intelligence could pose to humanity. Theoretical physicist Stephen Hawking told the BBC that he thinks the “development of full artificial intelligence could spell the end of the human race.”

And earlier this year Hawking and hundreds of AI researchers signed an open letter, saying that if the industry doesn’t start building safeguards into artificial intelligence it could spell doom for humanity. Tesla CEO Elon Musk, who also signed the letter, gave $10 million to the cause. He went so far as to say that artificial intelligence could be humanity’s biggest “existential threat.”

Physicist Stephen Hawking has said the “development of full artificial intelligence could spell the end of the human race.” (Frederick M. Brown/Getty Images)

Russell also signed the letter, but he says his view is less apocalyptic. He says that, until now, the field of artificial intelligence has been singularly focused on giving robots the ability to make “high-quality” decisions.

“At the moment, we don’t know how to give the robot what you might call human values,” he says.

But Russell believes that as this problem becomes clearer, it’s only natural that people will start to focus their energy on solving it.

And he says, not to be flip, but nobody’s going to buy a robot that cooks a cat. So it’s just a matter of time before tech companies, universities and the government start pouring resources into programming robots with morals.

“In some sense [the robots’] only purpose in existing is to help us realize our values, and perhaps it’ll make people better,” says Russell.

24 March 2016 | Queena Sook Kim

Author

Queena Sook Kim

Queena Sook Kim is the Senior Editor of the Silicon Valley Desk. In this role, she covers the intersection of technology and life in the Bay Area. 

Before taking this post, Queena was the host of The California Report. The daily morning show airs on KQED in San Francisco, one of the nation’s largest NPR affiliates, and on 30 stations across the state. In that role, she produced and reported on news, politics and life in the Golden State. Queena likes to take a sideways look at the larger trends changing the state. One of her favorite stories asked why Latino journalists “over-pronounce” their Spanish surnames as a way of looking at how immigration is creating a culture shift in California.

Before joining The California Report, Queena was a Senior Reporter covering technology for Marketplace, the daily business show that airs on public radio. Queena covered daily tech business stories and reported on larger technology trends. She did a series of stories looking at the role of social engineering in hacking and on a start-up in Silicon Valley that’s trying to use technology, instead of animals, to make meat that bleeds.

Queena started her career as a business journalist at the Wall Street Journal, where she spent four years covering the paper, home building and toy industries. She wrote A1 stories about the unusually aggressive tactics KB Home took against its home buyers and the resurgence of “Cracker” architecture in Florida. She also wrote section-front stories on marketing trends.

As a journalist, Queena has spent much of her career helping start up editorial products. She was on the founding editorial team of The Bay Citizen, an experimental, online news site in San Francisco that was funded by the late hillbilly billionaire Warren Hellman. In 2009, Queena received a grant from the Corporation for Public Broadcasting to start up a podcast called CyberFrequencies, which reported on the culture of technology. She also helped start up two radio shows - Off-Ramp and Pacific Drift - for KPCC, the NPR affiliate in Los Angeles. Off-Ramp was awarded first place for news and public affairs programming by PRINDI and the L.A. Press Club. Queena’s stories have appeared on NPR’s Day to Day, Hearing Voices, WNYC’s Studio 360, WBUR’s Here and Now, BBC’s Global Perspectives and the New York Times’ multimedia page.

In 1994, Queena won a Fulbright Grant to teach and study in Seoul, South Korea. She was also selected to be a Teach For America Corps Member in 1991 and taught elementary school in the Inglewood Unified School District in Southern California.

Queena is a frequent public speaker and has given talks at UC Berkeley, Stanford University, San Francisco State University, the PRINDI conference and the Craigslist Foundation Boot Camp. Queena went to UC Berkeley’s Graduate School of Journalism and graduated cum laude from New York University with a B.A. in Politics. She grew up in Southern California and lives in Berkeley, CA, in a big fixer-upper on which she spends most weekends, well, fixing.
