Ray Kurzweil

On Monday, futurist and visionary inventor Ray Kurzweil joined Google as its new director of engineering, to work on projects related to machine learning and language processing. We’ll talk to Kurzweil about his new position and his latest book, “How to Create a Mind: The Secret of Human Thought Revealed.”

Ray Kurzweil, director of engineering for Google and author of "How to Create a Mind" and other books, including "The Singularity Is Near" and "The Age of Spiritual Machines"

  • Wes

    I am inclined to agree with James Kunstler’s criticism of Kurzweil in “Too Much Magical Thinking.” We are already seeing the effects of our population having exceeded the carrying capacity of the Earth, and the economic and resource implications of that are already being felt. I would not stake my grandchildren’s lives on achieving any “singularity”.

  • eric

    Sometime during the show, when appropriate, can you ask Dr. Kurzweil why his film on the singularity never came out after all that media build-up, including through KQED’s own TechNation?

  • troll

    The subject of identity and its relationship to brain matter or nervous systems is explored in a very helpful manner in texts such as “The Mind’s I,” edited by Daniel C. Dennett. It lays the groundwork for disputing the claim that an abstraction of consciousness from human biology would be meaningful *as* a mind per se. Minsky and others have adequately addressed the philosophical limitations of supposing autonomy and ‘artificial intelligence’ without an examination of how we’d recognize it or create it without determining it. This is also the criticism of the notion of ‘creating randomness’ (both are fallacious).

  • troll

    Isn’t pattern recognition an essential aspect of rational thought? Doesn’t logic require matching patterns and confirming their correlation to the real (physical) world?

  • Matt

    Matt from San Diego:

    Has Mr. Kurzweil considered the possibility that the Singularity represents a tipping point in collective intelligence rather than an advanced A.I.? Technological advances have been increasing the speed of information flow between individuals; isn’t it possible that this could lead to something akin to a global brain? What if the Singularity is us?

    • Marty Fouts

      There is no singularity. Crudely put, Moore’s law applies only to semiconductors, not to the consequences of applying those semiconductors.
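
To make the reply above concrete, here is a minimal sketch of what Moore’s law itself quantifies, assuming the common “transistor counts double roughly every two years” formulation and using the widely cited 1971 Intel 4004 count as a baseline; both figures are illustrative assumptions, not numbers from this discussion. The curve describes component counts on a chip, not the consequences of applying those components.

```python
# Minimal sketch of the "doubling every two years" reading of Moore's law.
# The baseline chip and the doubling period are illustrative assumptions.

BASE_YEAR = 1971
BASE_TRANSISTORS = 2_300        # Intel 4004, for scale
DOUBLING_PERIOD_YEARS = 2.0     # assumed doubling period

def projected_transistors(year: int) -> float:
    """Project the transistor count per chip for a given year."""
    doublings = (year - BASE_YEAR) / DOUBLING_PERIOD_YEARS
    return BASE_TRANSISTORS * 2 ** doublings

if __name__ == "__main__":
    for year in (1971, 1991, 2012):
        print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

Running this projects on the order of a few billion transistors per chip by 2012, roughly in line with contemporary processors; nothing in the curve itself says anything about what those transistors accomplish.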

  • Jase Kay

    I know Mr. Kurzweil is a founder of Singularity University at NASA Ames here in the Bay Area. Can he briefly comment on what it is in the context of his new role at Google? Thank you.

  • Marty Fouts

    AI proponents have been predicting great things for AI since Minsky in the 60s. Kurzweil is going down the same path Minsky went down. Why doesn’t he realize he will reach the same dead end?

  • chrisco

    The New Yorker has a quite extensive fact-checking process. They are surely NOT above reproach, but to just call it nonsense and content-free is neither adequate nor believable.

  • mark

    Mark from Santa Rosa:

    I have a question regarding immortality by 2045:
    Are you referring to a corporeal immortality where our bodies never wear out or become dysfunctional? Or a mental immortality where, in one way or another, our minds separate from our bodies and become ageless?

    If mental, do you think there is ever a chance for reanimation of the deceased?

  • fadista

    Does Dr. Kurzweil consider the moral ramifications when developing new technologies? For example, the application of face & voice recognition technologies will certainly increase the capacity for surveillance by governments and organized power structures, such as corporations, but has negligible advantage or practical application for most people. In other words, do we do something just because we have learned how to do it, such as splitting the atom? What are scientists thinking when they explore new territory? What are their sponsors after? The government & corporate mindset is that what is good for us is good for humanity, and that is a dangerous way of thinking.

    • Marty Fouts

      Scientists think about fame, prizes, sometimes tenure, often money, and the joy of exploration, in no particular order. Some are trying to solve specific problems, others are driven by curiosity, and most by the agenda of their funding source. I don’t think funding agencies ever act as if their good is the common good.

      • Nick

        Tell that to the scientists who cured polio or invented the computer.

        • jurban

          Science never claimed to be a moral study. It’s merely a process for arriving at the truth by disproving the null hypothesis. Morality and policy are the domain of other disciplines, and hopefully ones governed by logical individuals. Science, as a pure discipline, finds the truth as the evidence allows, regardless of the individual’s incentives. Technology applies those discoveries, driven by profit or curiosity or mandate. Stopping Science is stopping the pursuit of truth. IMHO, that is a slippery slope we should avoid. Application of technologies will always be governed by continually evolving policies. I’m comfortable with that as long as we develop good policies.

  • TK_PhD

    I noticed that when I started using a cell phone, I began relying on the technology and stopped remembering phone numbers. Do you have any concern that the more we come to rely on artificial intelligence, the less we will use our own?

    • Marty Fouts

      This was noticed long ago with the introduction of inexpensive calculators, and even longer ago with the introduction of books. Our ability to quickly learn and recite has diminished, but other abilities have grown.

  • Guest

    As Ray Kurzweil explains, we are seeing more and more integration of human and machine. What do you think will happen when humans become completely superfluous for production and research, and the machines completely “outsmart” us in every respect?

  • Stellaa

    It’s funny how we get a brilliant mind, and all the host and listeners do is read vapid criticism from people who only criticize yet offer no interesting ideas or solutions.

  • James Chou

    Current processor-based hardware design is not particularly efficient at solving the problems (pattern recognition) you mentioned, since it works very differently from how the human brain functions. Do you think it would be very beneficial to design hardware that mimics our brain structure? Or is it critical? Or, with processors running at multiple GHz, is pattern-recognition capability no longer on the critical path? (A toy sketch of such a workload follows this thread.)

    • Marty Fouts

      The complexity that modern systems are capable of modeling has long since passed the complexity of the human brain. This has led some AI researchers to start over with a completely different model, leading, for example, to robots that can walk yet require much less processing power. There is a growing belief in the research community that the traditional model of goal-directed, top-down intelligence is naive and rarely appropriate. This would go a long way towards explaining the failure of machine ‘intelligence’ and ‘learning’ to date.
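
As a toy illustration of the workload discussed in this thread, here is a minimal, hypothetical sketch of pattern recognition on a conventional processor: a nearest-prototype matcher over tiny binary images. The prototypes and the Hamming-distance metric are assumptions for illustration only. On a CPU the matching runs as a serial loop of comparisons, which is the contrast with the brain’s massively parallel organization that the question raises.

```python
# Toy sketch (illustrative only): nearest-prototype pattern matching
# over small binary "images", executed serially on a conventional CPU.

from typing import Dict, List

Pattern = List[int]  # flattened binary image, e.g. 0/1 pixels

PROTOTYPES: Dict[str, Pattern] = {
    # 3x3 "images", flattened row by row (illustrative data only)
    "cross":  [0, 1, 0,
               1, 1, 1,
               0, 1, 0],
    "corner": [1, 1, 0,
               1, 0, 0,
               0, 0, 0],
}

def distance(a: Pattern, b: Pattern) -> int:
    """Hamming distance, computed one pixel at a time (serially)."""
    return sum(1 for x, y in zip(a, b) if x != y)

def classify(observed: Pattern) -> str:
    """Return the label of the closest stored prototype."""
    return min(PROTOTYPES, key=lambda label: distance(observed, PROTOTYPES[label]))

if __name__ == "__main__":
    noisy_cross = [0, 1, 0,
                   1, 1, 1,
                   0, 0, 0]   # a cross with one pixel flipped
    print(classify(noisy_cross))  # -> "cross"
```

Scaling this kind of serial matching to realistic inputs is what makes the question of brain-inspired, highly parallel hardware versus ever-faster conventional processors interesting.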

  • Jim

    When artificial intelligence writes better poetry, more poets will disappear.

  • Gabe Gross

    I was part of the Fifth Generation period (mid-80s). Since then I have seen lots of specialized applications become successful (like voice recognition, robotic movement, and vision), but the task of reasoning has not progressed much. Until that gets resolved, a major piece will be missing from AI.

    Gabe Gross

    • Marty Fouts

      Neither the universe nor intelligence is hierarchical. One needs to go beyond the still-prevalent mind/brain paradigm of the 70s and look at current research in biology, such as studies of corvid behavior showing that the neocortex doesn’t play the role in intelligence that AI researchers assign it.

  • islandnotes

    The computer industry’s support of Mr. Kurzweil’s ideas is to be expected and is precisely how such an industry generates itself. As Marshall McLuhan noted some fifty years ago, alongside using such tools that supplement our nervous systems, we all also function, to a greater or lesser extent, as reproductive organs in the perpetuation of these tools (like it or not). However, as a teacher of children for many years now, I’m also compelled to cite what McLuhan referenced as ‘auto-amputation,’ whereby a sort of numbing needs to take place as we implement such extensions of ourselves. Perceptions are numbed as we become the servomechanism of our extended or repeated image: we become a closed system. Suffice it to say that as folks of Kurzweil/Google’s ilk, with their massive financial and technical capital, attempt to prescribe for “humanity’s grand challenges,” we’d all do well to recognize the natural distinction between computational logic and reason, which fit neatly and in isolation on electronic circuitry, and living human expression, which at its highest transcends the merely rational and connects us all. The elephant in the room is the displacement of genuine human interaction that occurs when the zeal for profit ignores this. As I continually face new students with less and less ability to use their hands and hearts as anything other than machine interfaces, I hope these distinctions will also find spokespeople given such platforms for expression.

    • jurban

      I think you are projecting the future potential of the computer-human interface from the present limitations and maturity of our social engagement. One potential enhancement could be wireless connections between the respective communication centers in our brains: in effect, ESP, and possibly a new form of language that is more fundamental and less prone to misinterpretation. IMHO, that’s an enhancement to human engagement, and it can extend to other sensory experiences. We’re not talking about iPhones here.

      • islandnotes

        Sure we’re talking about iPhones here, and beyond. It’s also an inherently dialectical conversation. Obviously technological progress has good and bad consequences. However, this emerging technology has the potential to create radical monopolies, whereby those prioritizing discernment about the appropriateness of a given technology, as essential to progress in “societal engagement,” are as out of luck as those who might question the automobile paradigm. There is a brand of reason that posits, “If we can technically do it, we should.” Another goes, “Let’s engineer a populace’s value sets so that they will consume our product.” All I’m saying is that, more than ever, we ought to treat our native human capacities and values as equally essential to moving forward towards wellness. I’d be even more impressed by Google if they were to launch an ‘Inappropriate Application of Technology’ division. Not holding my breath on that one…

  • advancedatheist

    Ironically, “futurists” like Kurzweil emphasize computing, computing, and even more computing in their visions of “the future” because, as Peter Thiel has argued lately, most forms of innovation in the engineering of stuff and energy have become effectively illegal since about 1970. By contrast, we have the freedom to innovate all we want with software and computers. That would explain why we have tablet computers and smartphones right out of the science fiction I grew up reading in the 1970s, but we live in cities which have started to look like the “After” photos of a zombie apocalypse.

    But we need energy and stuff to survive and thrive, not bits. The world would benefit more from removing these restrictions so that we can see the emergence of Kurzweil-like people in the areas that really matter.
