Artificial intelligence, as an emerging technology, has drawn a variety of responses from industry professionals. Some, such as Elon Musk and Bill Gates, offer warnings and see the dangers AI can present. Others, such as Mark Zuckerberg, are far more optimistic about the effects AI will have on modern society. In the first of a three-part blog series, we’ll examine the more pessimistic view some take of AI, with statements from Elon Musk, Bill Gates, and Stephen Hawking.
Why The Warnings About AI?
Artificial intelligence is a rapidly advancing technology that is already in use across several industries. Current AI programs are relatively simplistic compared to the theoretical possibilities of AI as a technology. One need only look at cell phones to see how quickly technology can advance in just a few years. Many industry experts, personalities, and educated professionals fear what AI has the potential to become.
Elon Musk’s View
Elon Musk is well known as the key figure behind Tesla Motors and SpaceX. His work at both companies is highly innovative, with Tesla Motors’ contributions to the automotive industry being widely praised. Musk has a notable distaste for artificial intelligence, which he considers a threat to the entirety of human civilization. Musk’s fears are twofold. The first is focused chiefly on the possible development of dangerous, highly evolved AI programs that humans lack the capacity to control. The second is that governmental agencies tend to be reactive to dangers instead of proactive; that is, they wait for bad things to happen instead of trying to prevent them from occurring in the first place. Musk favors immediate governmental regulation of AI rather than waiting for events that require regulation to occur.
Bill Gates’s View
As the founder of Microsoft, Bill Gates is one of the most well-known names in technology. Like Elon Musk, he is concerned about AIs that become too powerful to control. Gates, however, draws a key distinction between ‘simple AIs’ designed to perform repetitive tasks and highly developed AIs capable of decision making and original thought. Like Musk, his fears center on what artificial intelligence could become if it is developed unchecked and without consideration for possible dangers.
Stephen Hawking’s View
Physicist Stephen Hawking has also spoken about the possible dangers AI poses to the human species. He stated that it will be either the best or the worst thing that has ever happened to humanity. While Hawking’s view is certainly more positive than others’ (he notes that AI could undo environmental damage and end poverty, for example), he also shares the concern about the damage a highly evolved, superintelligent AI could do if left unchecked and uncontrolled by its human creators.
Certainly, there is a common underlying theme in the concerns about AI expressed by Musk, Gates, and Hawking: the danger of what AI could become. AI is a technology with the potential to change the entire structure of civilization as we know it, and if it reaches the upper limits of what some scientists think is possible, it could be a technology with a will of its own, something never before seen in history. Humanity being destroyed by its own creations is a common theme in fiction, one that plays on deep-seated fears. However, despite these concerns, not all personalities view AI with such suspicion. In part two of this series, we’ll look at more optimistic viewpoints about AI and the future it may bring.
Read Our Mini Blog Series
- Blog #1: The Possible Dangers Of AI
- Blog #2: Positive Views Of The Future AI Could Bring
- Blog #3: Our Thoughts: A Balanced View On AI