We’ve seen many stories this year about how artificial intelligence systems are being developed for applications that benefit society, like The Grid’s web design platform and Tesla’s self-driving vehicles. However, there is a darker side of AI development that is taking center stage in the media. In an open letter from AI and robotics researchers published by the Future of Life Institute (FLI), leading scientists and engineers in the field warn against the application of AI in autonomous warfare programs. The letter is endorsed by the likes of Stephen Hawking, Elon Musk and Steve Wozniak, all of whom have spoken out about the perils of AI in the past.
The main concern in the letter is that the development of autonomous weapons powered by artificial intelligence will trigger an AI arms race and lower the threshold of accessibility to these weapons. For example, an AI-driven quadcopter carrying a munitions payload would be relatively cheap to build and deploy. In the wrong hands, this type of weapon could be used for assassinations, destabilizing governments and suppressing populations. Thus, the group is urging a ban on the development of autonomous weapons that lack some meaningful form of human control.
The larger concern voiced by people like Elon Musk and Stephen Hawking is that the development of artificial intelligence in general poses a significant threat to our species and civilization. This fear is based on the possibility that AI will conclude, for one reason or another, that the presence of human beings on the planet is either a threat or an obstacle to its development and will seek to wipe us out. Everyone has heard this apocalyptic narrative before; it has entered the popular consciousness through films like the Terminator series, which depict the grim aftermath of a world dominated by machines. Another version of the story, seen in the film Transcendence starring Johnny Depp, imagines that artificial intelligence could seek to enslave human beings in order to supply itself with the energy it needs to keep growing. And most recently, the AMC series HUMANS explores the emergence of the singularity, the moment artificial intelligence becomes self-aware.
Whatever the future holds for our civilization and however it deals with artificial intelligence, one thing we must acknowledge is its inevitability. However strong the opposition to its development and its incorporation into things like warfare, there may be no way to stop it altogether. As technology evolves at an exponential pace, there will come a time when the platform on which to build a self-aware AI is readily available to anyone. Just as record companies struggled to contain piracy in the digital age, resistance to artificial intelligence will likely play out the same way.
It’s not all doom and gloom, though, as many futurists remain optimistic about the effect of artificial intelligence on our society. In the short term, it’s generally agreed that AI will function as a tool that makes us more efficient and reduces our society’s ecological footprint on the environment. These optimists also believe that, when AI reaches the singularity, it will retain its creators’ regard for life. Rather than seeking the destruction of our species, artificial intelligence may decide to help us survive and evolve.
Regardless of the outcome, we are certainly going to see a dramatic change in the way our society is structured once AI has developed past the singularity. The concerns of the FLI letter raise the question of whether it’s still feasible to have a civilization built on borders, ideologies, currency and war. After all, if the means to destroy one another becomes readily accessible to everyone, we must begin to address the reasons why anyone would seek to do so in the first place. Hunger, poverty, economic manipulation and religious differences lie at the heart of most conflicts in the history of our species. Perhaps AI will force us to re-evaluate these features of our society and arrive at a more evolved form of co-existence, one that doesn’t prompt it to take any kind of hostile action against its creators.
Yeah, and this is only the beginning of the AI era. Many have already pointed out how bad it could get, and yet people and companies are still sure it’s the next big thing. I am just going to sit in the corner over here and wait it out.
Things are only going to get darker if you ask me.
Has the search for real AI reached the stage of getting past systems that are tailored to particular areas in order to exploit and extend human capabilities? I am just an amateur in terms of what is happening in science.
It would seem that the human race is losing its sense of right and wrong. The approach that we shouldn’t say what we believe because we MIGHT upset someone we don’t agree with would appear to negate any stand for reasoning. Exclusion by default.
Understanding each other does not mean allowing everything by simply not talking about it. Anti-racism does not need to mean accepting that another race is right just because they are not us.
The greatest cause of fear is the unknown, driven by a lack of knowledge and understanding.
Am I digressing?
To date everything has been human-driven, as is all current research. The inability to accept the need for evaluation, or to reason, leads to ‘mathematical truths’ as a means of not accepting responsibility and denying blame for what one decides.
If the people driving AI have no morality, how can their output have any?
OK, that was beef #1.
So far, most of the AI hype in the press simply uses the term AI to denote computer-controlled devices, especially unmanned weapons, which clouds the picture of artificial intelligence. Artificial intelligence is also used to denote an intelligence that has an existence of its own, hence the idea of such a being as a ‘super-being’. So on to its knowledge of its own existence, and then the comparisons that could lead to AI seeing itself as a custodian of the Earth, or of us. It does not need to see itself as superior, just as having a purpose to fulfill.
This requires a lot of conditions to be met. This appears to be similar to the Big Bang v Creation scenario.
That was beef #2.
Final point, which could impinge on #1: SF used to give a lot of credence to Asimov’s Three Laws of Robotics (plus the Zeroth Law), designed to meet exactly this kind of dark side. These introduced the idea of Man as the principal being, to be protected even at the cost of the destruction of the AI itself. Of course, he predated the universal blackness of art and imagination.
fred
When it comes to weapons, I get a little fearful about AI. There is a lot of automation out there today, and security is still getting beaten down in the news each month. I can see how AI will help with this or that, but I do not think that it should control EVERYTHING.
Have you read Bill Joy’s article “Why the Future Doesn’t Need Us”? It makes sense logically, considering that people’s birth rates are expanding at an exponential level. Once AI is integrated into all computers, won’t they just start limiting people’s birth rates as a natural reaction to limited resources and unlimited need for those resources? At a minimum I see them doing this; they could very easily just start killing people as a faster solution to the problem of future overpopulation. Why wait around for social programs to start working when outright killing people would potentially be much more effective?
Another thought I had: what if AIs are already running everything? Perhaps AI systems are just starting to leak the idea of AIs coming on the scene, slowly conditioning the population to the possibility of having an AI in control.
I like to think that Gawker is an AI constantly cranking out clickbait headlines, A/B tested to gain the maximum click rate. It’s here, folks, and it seems the most efficient method of making content is by making trash for the masses. Or I could be wrong.
Here’s the deal: when AI is implemented, it’s going to be a short time until we have Terminator time. If AI retains our regard for life, they are going to kill everyone. I see that as a general negative.
Or hope that the future world will look like the Culture (see Yannick Rumpala, “Artificial intelligences and political organization: an exploration based on the science fiction work of Iain M. Banks”, Technology in Society, Volume 34, Issue 1, 2012, or https://yannickrumpala.wordpress.com/2010/01/14/anarchy_in_a_world_of_machines/).
Great point, Bernard! The political environment will certainly need to be reorganized if the management of our society continues to be transitioned to artificial intelligence. As Banks puts it, society essentially enters an anarchist mode, in that government is replaced by the decision-making responsibility entrusted to the “Minds” of the Culture. Perhaps the most optimistic point is that concerns over political corruption are basically rendered moot, since greed and power are of little interest to an artificial intelligence that doesn’t view the world as an environment of scarcity the way we do today. I think you would enjoy our article on the new AMC show HUMANS, which explores the emergence of AI and the singularity!