An engineer points to a screen.

It’s been a busy month in the field of artificial intelligence (AI). In a face-off of man versus machine, the world champion of the board game Go lost to Google’s AI program. And just last week, Microsoft unveiled a program designed to tweet like a teenage girl, only to have it devolve into praising Hitler and lambasting feminists. Both events offer a glimpse into the machine learning industry, where companies are competing to create the first viable artificial intelligence software. Forum discusses the latest in AI and machine learning, a field estimated to reach $40 billion by 2020.

Pedro Domingos, professor of Computer Science and Engineering, University of Washington; author of "The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World"
Cade Metz, senior writer, WIRED
Andrew Ng, chief scientist at the Silicon Valley AI Lab, Baidu; co-founder and chair of Coursera

  • Robert Thomas

    Q: How best to achieve “Artificial Intelligence” when no one can even agree about what “natural intelligence” is?

    A: Easy: identify some machine function you’ve just managed, or materially improved upon (or better yet, some milestone attainable with just a little more grant money) and then define THAT to be what “natural intelligence” is.


    It’s a lucrative scam and it’s been used with very great success since the 1950s to extract lumps of money from various organs of the federal government and from other institutions. Nowadays, instead of academics having to pull the wool over the eyes of deputy secretaries and generals visiting from the Pentagon, the barkers are sophisticated commercial interests and the marks are the far more easily gulled members of the popular press.

    When I was in high school I had the opportunity for a period in the early 1970s to hang out as an occasional visitor to the Donald C. Power Building off of Arastradero Road in Palo Alto, when John McCarthy’s and Les Earnest’s Stanford Artificial Intelligence Laboratory was housed there.

    Little pitchers have big ears, eh?

    On two occasions while visiting there (when I was about fifteen) – sneaking games of “Galaxy!” that were displayed on the Triple-I vector monitors and lending a hand operating a drill press – V.I.P.s arrived. This was presaged by the appearance of polyester ties around the collars of the short-sleeved gingham shirts the guys’ moms sent to them. On one of these occasions the visitors were gray heads in dark suits; the other time they were in uniforms with fruit salad on their chests. Weirdly, these old guys seemed utterly charmed by the long-haired eggheads at the lab and were astonished by the autonomous robots and other technology on display. A variety of defense and intelligence applications were demonstrated (I was told) – creating just the sort of impression required to open up the nation’s checkbooks once again.

    After the generals departed, the ties came off, no more martial system development issues were discussed and the Invention of the Modern World using DoD money proceeded apace.

    • Bill_Woods

      “DC Power Lab” was a great, if misleading, name.

      • Robert Thomas

        Donald Clinton Power of Columbus, Ohio, was from 1951 to 1971 president, C.E.O. and finally chairman of General Telephone & Electronics (later “GTE”). Power was consequential in that he managed the merger between GT&E and the Sylvania corporation.

        Around 1959, GT&E had a thought to build a West Coast research facility to mirror IBM’s bucolic Thomas J. Watson Research Center in Yorktown Heights (designed by architect Eero Saarinen). Reversals at GT&E caused them to abandon the building while it was not quite finished. In 1962, McCarthy and Earnest were looking to move their work from M.I.T. to Stanford, and the deal for them to occupy the building was struck.

        D.C. Power at Felt Lake was an otherworldly, magical and sublime place. It made a big impression on my youthful self and the people there cemented my lifelong interest in robotics, data manipulation and storage and in communication networks. It was haunted and magnificent. Now, the site is an equestrian ground.

  • Reverend Lurlean Tucker

    Will this technology create jobs (and not just for techies) or put people out of work? Where’s the social responsibility in AI engineering?

    • Robert Thomas

      There is none. Why should there be?

      Techniques may be employed for the benefit of people or otherwise. That is the choice of those who employ them.

    • geraldfnord

      I sincerely hope it puts everyone out-of-work, that we might play—productively, in many cases, as that’s more fun—rather than work because we’re threatened by needless starvation, exposure, and ill-health.

    • Noelle

      We as a society need to address job displacement, and I don’t see that happening yet. It would be nice to have an alternative economy that addresses everyone’s needs for food, shelter, fun, etc.

      • geraldfnord

        The problem is that it won’t happen until powerful people are endangered by jobs displacement, and 0.) they’re great at protecting themselves, 1.) they don’t really have jobs the way we do (or don’t), and 2.) for some of them, our suffering jobs displacement and its threat is just dandy, so they don’t even feel an empathetic threat.

        Really, it took the Bolshies, horrible as they were, to put The Fear into our upper classes sufficient to make some of them decide to at least simulate empathy and let the rest of us wet our beaks.

      • Reverend Lurlean Tucker

        An alternative economy is fine as long as it’s legal. I think the mainstream economy should provide for people’s basic needs.

  • De Blo

    All of this has happened before, and all of it will happen again.

    • geraldfnord

      ‘Pics or it didn’t happen.’ —Anonymous.

      Sorry for the snark, but it’s very easy to claim that sort of thing, and I am not acquainted with one jot of evidence indicating that A.I. has been developed before.

      • Bill_Woods
      • Scott A

        I think De Blo was making a light-hearted reference to Battlestar Galactica, with the implication that WE may be the product of AI (either directly, as hinted in the TV show – or in a simulation).

        In other words, we will create AI, AI will replace us in some way, eventually not know they are “Artificial” and go on to produce their own AI, and in turn…

        • geraldfnord

          Apologies; I didn’t watch much of either version of the series, and there are so many people willing to believe anything about Ancient Times….

        • De Blo

          You got it.

  • Another Mike

    An elderly Peninsula couple we know gave up their opera season tickets when they no longer felt comfortable driving at night. The repeated cost of a cab (at that time) seemed prohibitive.
    Owning a self-driving car would have let them continue their enjoyment.

  • Ben Rawner

    Are there any AIs that are self-aware in the way humans are self-aware?

    Are any of your guests worried about the weaponization of AI in military hardware? South Korea already has AI turrets set up at the DMZ.

    • Another Mike

      Compare to two-, three-, and four-year-old humans.

      • geraldfnord

        All right, but I know plenty of nominal adults who seem barely self-aware….

    • geraldfnord

      Are there any submarines that can swim? (Thanks, Edsger.)

  • Ben Rawner

    I have to thank Andrew for his great online course from when he was teaching at Stanford. I am currently watching it and it’s great.

    • geraldfnord

      I’d like to thank Andrew for the opportunity to take Dr Thrun’s self-driving car class, as well as the interviewing course I keep dropping.

  • William – SF

    Gives me pause to know that AI is learning …from us.

  • Kenneth Jaffee

    AI could be a tool of significance in making life easier, but in the US I fear that it will only go toward furthering the gap between the 1% and the rest of us, as it exacerbates the destruction of the middle class by eroding the jobs available to them.

    • Noelle

      yeah “making the world a better place”

    • Robert Thomas

      This is not a problem, conceptually. Let’s start at the bottom.

      Blow up all the backhoes and issue everyone a shovel.

  • anon

    How many on the panel have signed the open letter from the “Campaign to Stop Killer Robots,” which was recently recognized by the UN and has over 20,000 signatories, including many notable AI researchers?

  • Scott A

    Many of the new advances in training AI have come from looking at how humans (and other animals) learn. As we iterate on those techniques, we are likely to stumble into new ways of better training AI – which hints at better ways of nurturing learning in humans as well.

    — What have you learned about teaching AI that you think is relevant to teaching humans? —

    (e.g. other techniques like master-apprentice pairing and quasi-unconscious conditioning to the right responses like those seen in teaching people how to identify Axis/Ally airplanes, or sex baby chicks)

  • Jonathan Bernard

    Where’s all this data coming from and at what costs to humans?

  • Robert Thomas

    “Machine Learning” is little more than a new name for Feigenbaum’s “expert systems”.

    Donald Fink published a survey of Artificial Intelligence in 1966 called Computers and the Human Mind, an Introduction to Artificial Intelligence (Doubleday Anchor Books). I have it on my shelf along with others by John McCarthy and Terry Winograd and Ed Feigenbaum and Dave Rumelhart et al.

    I see very little progress toward any actual analog of human thinking between Fink’s 1966 survey of the field and anything available today, other than progress measurable by degree, owed to faster and more widely distributed hardware.

    • Scott A

      You seem well versed in the state of the field – up until about 12 years ago.

      The state of the art has changed DRAMATICALLY since then.

      Simulated neural networks for pattern recognition had a spike of popularity early in the field (the 1960s) but were largely abandoned because of their expense, computationally in time and in hardware costs, in favor of things closer to “expert systems.”

      Over the past few years, the neural-net approach has seen a forceful resurgence, driven by better hardware and new approaches to training the AI: FAR better and larger libraries of reference material to train against, and the powerful technique of “reinforcement learning” – i.e., in the AlphaGo case, playing against itself to generate more reference material from which to learn.

      What made AlphaGo so formidable was that it combined traditional game-tree search with trained neural nets, making it possible for it to narrow down the search space for likely wins CONSIDERABLY.

      There is no reason not to believe that these (and other) approaches will be applicable to many, many more fields.
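AlphaGo’s real system (policy and value networks guiding Monte Carlo tree search) is far more involved, but the pruning idea the comment describes can be sketched in a few lines of Python. Everything here, the toy “game” (pick digits to maximize their sum) and the heuristic standing in for a trained policy network, is invented purely for illustration:

```python
# Sketch of policy-guided pruning: a "policy" scores candidate moves,
# and the search expands only the top-k of them, shrinking the
# branching factor the way a policy network narrows a tree search.
# The "game" (append digits, maximize their sum) is a toy stand-in.

def policy_scores(state, moves):
    """Stand-in for a trained policy network: a heuristic score per move."""
    return {m: m for m in moves}  # pretend larger digits always look better

def search(state, depth, k=2):
    """Depth-limited search that expands only the k best-scoring moves."""
    if depth == 0:
        return sum(state)  # terminal evaluation (a value network, in AlphaGo)
    moves = list(range(10))
    scored = policy_scores(state, moves)
    top_k = sorted(moves, key=scored.get, reverse=True)[:k]
    return max(search(state + [m], depth - 1, k) for m in top_k)

# With k=2 this explores 2**3 = 8 lines of play instead of 10**3 = 1000.
print(search([], depth=3))  # -> 27 (three 9s)
```

In AlphaGo the move scores come from a network trained on expert games and self-play, and the terminal evaluation is itself a second network, but the pruning structure is essentially this shape.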

      • Robert Thomas

        Pardon my skepticism. I think you’re largely agreeing with me. As I say, the techniques have improved but there have been no revolutions in conception that match the advancement enabled merely by deeper and faster memory and more and faster (and vastly cheaper) compute cycles. No one doubts (I don’t) the great strides that have been made in data manipulation. While working at Cray Research and SGI, I once made some myself; I’m delighted by its further progress.

        Again, no one agrees what natural intelligence is. How can one expect to agree whether artificial intelligence has been approached?

        I attended a talk by Terry Winograd around 1985 where he said almost exactly what you’ve written here; the irony is that I heard Winograd himself say the same thing in the lunchroom at SAIL over a decade before that. Today, the world has yet to introduce a vending machine as smart or as vindictive as the Prancing Pony – the cantankerous A.I.-empowered fixture that lorded over that venue.

        A.I. [varies as] controlled thermonuclear fusion

        • geraldfnord

          Since True A.I.* is always twenty years away, and nuclear fusion always thirty, this implies that we will never get True A.I., and when it’s ten years old it will be fusion-powered.

          (There are very few renormalisation jokes.)

          *Presumably, True A.I. speaks with a Scottish accent.

  • Robert Thomas

    Man! I wish I could make my brain dissipate a hundred watts! Losing weight would be a lot easier.

    The entire adult human body dissipates about that much. The brain (about 2% of body mass) dissipates something like ten or fifteen.

    • Scott A

      I agree – his numbers were off on this one.

      Most humans idle at around 100 W and peak below roughly 400 W at maximum exertion.

      The sources I've read for brain "power" vary from 10 W up to 40 W (a wider range than I'd expected) – which is a remarkable percentage of our overall caloric needs.
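The arithmetic behind these figures is easy to check. A quick sketch, taking the ~100 W resting figure and the 10–40 W brain range from the comments above purely as assumptions:

```python
# Back-of-envelope check on the wattage figures discussed above.
# Assumptions: ~100 W resting whole-body dissipation; brain estimates
# of 10-40 W. Both are taken from the thread, not from a primary source.

SECONDS_PER_DAY = 86400
JOULES_PER_KCAL = 4184
WATT_TO_KCAL_PER_DAY = SECONDS_PER_DAY / JOULES_PER_KCAL  # ~20.7

body_w = 100
for brain_w in (10, 20, 40):
    share = 100 * brain_w / body_w          # percent of resting output
    kcal = brain_w * WATT_TO_KCAL_PER_DAY   # kcal burned per day
    print(f"{brain_w:>2} W brain: {share:.0f}% of body, ~{kcal:.0f} kcal/day")
```

Even the low 10 W estimate works out to roughly 200 kcal a day, so the “remarkable percentage” observation holds across the whole range.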

      • geraldfnord

        I can’t find a clip on-line, but one of the best moments of “Futurama” ‘s later years was a blistering minute or so insulting the {humans as power-source} premise of “The Matrix” almost as much as it deserves:

        (“Near Death Wish”:)
        LEELA: Their bodies are being used to generate electricity. The idea came from an old movie called “The Matrix”.

        FRY: But-but wouldn’t almost anything make a better battery than a human body? Like a potato?

        BENDER: Or a battery?

        FRY: Plus, no matter how much energy they produced, it would take more energy than that to keep them alive!

        LEELA: I know, I know. It sounds absurd. In fact, when “The Matrix” first came out, it seemed like the single crummiest, laziest, most awful, dimwitted idea in the entire history of science fiction, but it turned out to be true. Who knew?

        FRY: Good work, writer of “The Matrix”.

  • Chris OConnell

    I just heard dystopia expressed as optimism with machine robot wars, but there might be a happy ending at least.

    • William – SF

      Yes, likely just an ending.

  • Another Mike

    Remember 1973’s Westworld, where nothing can possibly go wrong, go wrong, go wrong?

    • De Blo

      I remember that nothing could possibl-ie go wrong at Itchy and Scratchy World (the violentest place on Earth).

  • KC

    Discussion about autonomous killer AI in Open Letter and podcast

  • aaron

    Why does this show deserve two hours of precious airtime? One hour would be a sufficient amount of time for Michael to talk about something interesting or to poorly regurgitate what has already been discussed on one of NPR’s more in-depth news programs. Don’t even get me started on the rebroadcast in the evening; it makes me run to my radio and turn to WNYU, a station that would never let something like this air so many damn times a day. Give someone else some airtime; maybe Michael can spend that extra hour boring another group of people in a more exciting way.
