Elon Musk says artificial intelligence is a "fundamental existential risk"; Mark Zuckerberg has called his comments irresponsible. Who is correct?
Hawking lost his ability to speak in 1985 following a life-threatening bout with pneumonia. Doctors performed a tracheotomy that saved his life but cost him his voice.
Digital technology came to Hawking's aid in the form of an Apple II computer running a program called Equalizer, connected to a primitive speech synthesizer. Hawking could spell out words at a rate of 15 words per minute by pressing a hand-held button as letters appeared on the computer's screen.
Over time, ALS deprived Hawking of his ability to press the hand-held button. A primitive AI application came to the rescue in the form of a word prediction program. But even with modifications allowing Hawking to select letters and other functions with cheek movements, his communication speed degraded to a couple of words per minute.
A number of experiments designed by Intel subsequently failed (caps that read brain waves, technology to track eye movements, and so on). By 2015, however, advances in AI had greatly improved Hawking's ability to communicate.
Intel provided Hawking with a context-aware program called the Assistive Context-Aware Toolkit (ACAT), which used Hawking's vocabulary and writing style to predict whole words or phrases. The program drew on a database containing the text of his lectures, books, and other writings. The technology worked much like a smartphone suggesting words and phrases before you finish typing a text message. In place of typing, Hawking operated the software using spectacles with an infrared switch which detected small facial movements to control the cursor and select functions.
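To make the idea concrete, here is a minimal sketch of frequency-based word prediction, the same principle behind ACAT and smartphone keyboards. This is an illustration only, not Intel's implementation: it builds bigram counts from a small stand-in corpus (the names `build_model` and `predict` are invented for this example) and ranks candidate next words by how often they followed the previous word, filtered by the letters typed so far.

```python
from collections import Counter, defaultdict
import re

def build_model(corpus):
    # Map each word to a Counter of the words that followed it (bigram counts).
    words = re.findall(r"[a-z']+", corpus.lower())
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict(model, prev_word, prefix="", k=3):
    # Rank candidate next words by frequency, keeping only those
    # that start with the letters already typed.
    candidates = model.get(prev_word, Counter())
    ranked = [w for w, _ in candidates.most_common() if w.startswith(prefix)]
    return ranked[:k]

# Toy stand-in for a personal writing corpus.
corpus = ("the universe began with a big bang "
          "the universe is expanding "
          "the universe contains black holes")
model = build_model(corpus)
print(predict(model, "the", "u"))  # ['universe']
```

A system like ACAT works from a far larger personal corpus and predicts whole phrases, but the payoff is the same: each accepted suggestion replaces many individual letter selections, which is exactly what a user limited to one slow switch needs.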
Over time Hawking became more concerned with the balance between the benefits of AI and its potential for harm. He was not alone. An open letter signed by Hawking, Steve Wozniak, Elon Musk, and 8000 others including the most prominent AI scientists in the world states: "Stanford’s One-Hundred Year Study of Artificial Intelligence includes loss of control of AI systems as an area of study, specifically highlighting concerns over the possibility that … we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes — and that such powerful systems would threaten humanity."
In a talk at the Web Summit technology conference in Lisbon, Portugal, Hawking stated (AI) could be the "worst event in the history of our civilization".
Not all concerns about AI are as apocalyptic. Professor Maggie Boden, with five decades of deep involvement in artificial intelligence research, is less concerned with death by hostile machines than with more immediate impacts on our lives. She worries more about the effects of robots and other AI machines on the course of human behavior. For example, there will be strong financial incentives to cut costs by substituting machines for humans. What happens to the elderly person dependent on care from machines in place of human contact, or with self-driving cars that fail to respect pedestrians or fail to recognize a person in need of emergency attention? [As I write this, the first fatal accident caused by an autonomous car was reported; a self-driving Uber car struck and killed a woman crossing a street in Arizona.]
It doesn't take a huge stretch of imagination to think that smartphones and other AI-laden devices will dramatically change the world's cultures and politics, threatening human institutions sooner than malicious robots will.
If humanity is not threatened by AI, its institutions are. The open letter referenced above contained a prophetic warning a decade before our present political situation: "...cyberattack between states and private actors will be a risk factor for harm from near-future." Witness the current use of sophisticated algorithms that can influence individuals based on the information obtainable from our personal online activity and other publicly available data.
I leave it to you to judge AI for yourself. My own view is that AI researchers are prudent to heed the closing message of a document setting out long-term research priorities for AI:
"It seems self-evident that the growing capabilities of AI are leading to an increased potential for impact on human society. It is the duty of AI researchers to ensure that the future impact is beneficial."