by Andrew Boyd
Today, thinking machines. The University of Houston presents this series about the machines that make our civilization run, and the people whose ingenuity created them.
In January of 2015, Microsoft co-founder Bill Gates gave a warning. He was worried about the emergence of superintelligence -- that within a matter of decades, intelligent machines might pose a threat to humanity. And Gates wasn't alone. Hundreds of scientists with equally impressive credentials, physicist Stephen Hawking among them, signed an open letter voicing similar concerns. Wrote Hawking, "Whereas the short-term impact of [artificial intelligence] depends on who controls it, the long-term impact depends on whether it can be controlled at all."
Bill Gates speaks at the Annual World Economic Forum. Photo Credit: Flickr
It was with these thoughts in mind that I picked up the book What to Think About Machines That Think. It's a collection of brief essays by a broad cross section of notable thinkers on the topic of machine intelligence. Will machines evolve to harness us for energy as in the Matrix films? Or will we push a button and hand control to a twisted war machine as in the Terminator films?
Gates, Hawking, and others don't envision such theatrical endings. But their concern over the future of humankind is quite real. And the book's contributors address many topics, not just the apocalyptic possibilities. Daniel Dennett, for example, is more concerned about atrophy of the human mind as we delegate more and more tasks to computers. But of the many intriguing thoughts found in the book's pages, one of the most interesting has to do with the nature of intelligence itself.
Nick Bostrom's book Superintelligence. Photo Credit: Flickr/Anders Sandberg
We, as the most intellectually gifted species on earth, tend to associate intelligence with other human traits. One such trait is the subjugation of other species for our own purposes. We saddle horses for transportation and raise cattle for sustenance -- certainly activities that demonstrate intelligence. But does this mean that a machine that developed intelligence would necessarily seek to subjugate those around it? "Intelligence," writes Steven Pinker of Harvard, "is the ability to deploy novel means to achieve a goal." Controlling others isn't a requirement.
Humanity battling against sentient machines in "Terminator". Photo Credit: Wikimedia Commons
Of course, the will and ability to control the world has had undeniable survival value for humans. The film 2001: A Space Odyssey went so far as to suggest that the act of killing was the very seed of humankind's evolutionary success. But should intelligent machines ever arrive on the scene -- and whether they will remains far from clear -- they won't be the product of billions of years of fighting to survive. The need to exercise control over humans won't be part of their genetic makeup, so to speak. Why would battling the human race even come to mind?
Pinker leaves us with a humorous reflection. "It's telling," he writes, "that many of our techno-prophets don't entertain the possibility that artificial intelligence will naturally develop along female lines, fully capable of solving problems but with no desire to annihilate innocents or dominate the civilization." That's a thought that hopefully will let us all sleep a little better.
I'm Andy Boyd at the University of Houston, where we're interested in the way inventive minds work.
J. Brockman, ed. What to Think About Machines That Think. New York: HarperCollins, 2015.
An Open Letter: Research Priorities for a Robust and Beneficial Artificial Intelligence. From the website of the Future of Life Institute: http://futureoflife.org/ai-open-letter. Accessed February 16, 2016.
N. Statt. Bill Gates is Worried About Artificial Intelligence Too. From the CNET website: http://www.cnet.com/news/bill-gates-is-worried-about-artificial-intelligence-too. Accessed February 16, 2016.
This episode was first aired on February 18, 2016.