In the final installment of our three-part blog series, we present a third viewpoint: our own. Like many of you, we’ve read the warnings about AI put out in the media by professionals, experts, and technology leaders. However, we’ve also read the more positive views offered by others in the same fields and with the same level of expertise. With so many conflicting viewpoints, it can be hard to find a consensus.
When evaluating new technology (or new information in general), taking an extremist view is rarely helpful. Yet our current society seems to favor absolute statements: any topic brought up is either the best thing ever or the absolute worst. Sadly, more pragmatic, nuanced thinking can get lost in the noise of a simple answer. Like many others, we are confused and worried about what artificial intelligence could do to society, but we are also hopeful. There are ways to look at AI that are cautious but also recognize its great potential.
Looking To History: The Greeks
In the ancient world, the Greeks stand as a forebear of modern civilization in many ways. They were also a people who distrusted extremes, were wary of automatons, and thought philosophically about their actions and what those actions meant for society at large. Their way of thinking can certainly be applied to AI. The danger of AI stems from unchecked development (an extreme approach), automating everything (even that which may not require it), and thinking only of scientific results rather than societal impacts. A ‘Greek Approach’ removes much of AI’s danger and allows for a measured and mature approach to technological development.
Openness And Fighting Fears
Much of the fear around AI is rooted in a far older fear: the fear of the unknown. When AI is developed in secret by large companies that report on findings but not necessarily on procedure, it creates an environment that encourages fear mongering. One of the best ways to combat this fear is a free, open, and unified approach to AI development. Not only does this increase public knowledge, it also allows for the sharing of information, which both improves AI development and increases safety, as more scientists can spot potential dangers.
Humans And Technology
Fears about what humanity will do with new and dangerous technology are hardly new. When nuclear technology spread, fear of nuclear war became widespread, and yet such a war has not occurred. A recurring theme in the story of humans and technology is that errors are unavoidable as new technology is developed, but legal standards and norms of use develop in response to dangerous new methods. AI is no different. The idea that superintelligent AI programs, systems, or robots would break free of all human control implies that no precautions, safety standards, or protections would be put in place as the technology develops. While errors will certainly occur, complete and utter destruction goes against the established patterns of how humans deal with new technology.
AI offers humanity a chance to change how we approach work, our daily lives, and even how society is structured. Ray Kurzweil is correct that AI has a chance to solve many of the issues we currently face as a species. Elon Musk is also correct that AI could be a highly destructive force unless we approach it with careful planning, limitations, safety standards, and contingency plans for possible dangers. The key to developing AI safely is to treat such a powerful development with the respect it deserves.