Updating the Ethical Rules of Artificial Intelligence

We live in a world surrounded by technology. We are now so immersed in smartphones, tablets, and computers that the youngest generation growing up in this world has been labeled the iGeneration. Named for the meteoric rise of Apple and its iPods, iPhones, and iPads, this generation has watched technological advancement occur at an unprecedented pace. And with this ascent, computers and artificial intelligence have quietly come to outperform humans in more and more facets of everyday life.

First came chess, a game in which many believed human intuition and creativity would always prevail over machines. In 1997, however, IBM's supercomputer Deep Blue defeated then-world chess champion Garry Kasparov, one of the greatest players of all time. Then AlphaGo, a program developed by Google's DeepMind to play the complex, ancient Chinese board game Go, defeated both the Korean legend Lee Se-dol and Ke Jie, the top-ranked Go player in the world. Even in complex video games such as Dota 2, software from OpenAI, a research organization co-founded by Elon Musk, has begun to crush human players. Artificial intelligence will likely continue to develop, eventually taking over jobs in numerous fields and potentially playing vital roles in banking, healthcare, and education. The progression of autonomous machines could drastically improve the quality of all of these services, and with them the lives of billions of people worldwide. But perhaps the most frightening possibility is the responsibility that may be placed on artificially intelligent robots in the military. With that possibility come questions of ethics and safety that must be answered quickly if the future is to remain bright for humanity.


Many AI developers and engineers, as well as entrepreneurs such as Elon Musk, have warned that “killer robots,” or autonomous machines with the ability to kill, represent the “third revolution in warfare, after gunpowder and nuclear arms.” These robots would make decisions based on information about the current situation or the final objective of a mission. Such machines are already being tested, and some are in use around the world. South Korea has deployed autonomous sentry guns on the North Korean border, though they currently require human intervention to fire. Great Britain has a prototype drone capable of carrying out missions on its own, and the US Navy has reportedly been experimenting with an autonomous vessel named the Sea Hunter. In all of these cases, ethics becomes one of the engineers' largest concerns. How can developers ensure that autonomous machines do not take innocent lives? Who should have access to artificial intelligence software? How can we prevent autonomous machines from turning on their creators, as so many movies portray? Should autonomous machines even be allowed to take a human life in the first place?

Science fiction writers have long explored these questions. It has been apparent for decades that artificial intelligence carries immense potential but also poses an immense risk to mankind. Poorly designed artificial intelligence could surpass mankind in the production and operation of weaponry. Perhaps out of self-preservation, autonomous robots might decide that humans pose a threat to them and, as a result, move to eliminate humankind itself. But if we can answer the ethical questions above, and program machines so that these risks never come to fruition, artificial intelligence may prove an enormous benefit to billions of people around the world.


Yet as a global society enamored with the power of drones and autonomous weapons, we remain woefully unable to answer these difficult questions. Very few guidelines have been put forth on these subjects, even as the military role of artificial intelligence continues to grow. Among the few sets of rules that do exist, one of the most prominent is Asimov's laws, proposed by science fiction writer Isaac Asimov nearly 75 years ago. They state the following:

  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


At first glance, these laws seem secure and just. If we followed them, autonomous weapons would play no role in military combat, and engineers would avoid having to program artificial intelligence capable of answering complex moral and ethical questions. Upon closer inspection, however, the laws prove problematic. Soldiers may make rash judgments based on emotion and self-preservation that ultimately prove less ethical than those of a well-programmed machine. Military personnel also frequently need force and violence to rescue civilians and fellow countrymen from the depths of war, often risking their lives for the safety of others; if that role were handed to autonomous machines, one might argue that Asimov's First Law is detrimental to the success of an undeniably honorable cause. Finally, the ambiguity of these laws makes misinterpretation a real concern for engineers who choose to follow them.
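To see where that ambiguity bites, consider what an engineer would actually have to write. The sketch below is a hypothetical illustration rather than any real system: the `Action` type and `permitted` function are invented for this post, and each boolean flag stands in for a judgment, such as “does this action harm a human?”, that the laws themselves never define.

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Each flag stands in for a judgment the machine must somehow make.
    # None of these has a precise, agreed-upon definition -- which is
    # exactly where Asimov's laws leave room for misinterpretation.
    harms_human: bool = False
    is_inaction: bool = False
    allows_preventable_harm: bool = False
    ordered_by_human: bool = False
    preserves_self: bool = False

def permitted(a: Action) -> bool:
    """Check an action against the Three Laws in priority order."""
    # First Law: the action must not injure a human being...
    if a.harms_human:
        return False
    # ...and inaction is itself a violation when it allows harm.
    if a.is_inaction and a.allows_preventable_harm:
        return False
    # Second Law: obey human orders (a First Law conflict would
    # already have been rejected above).
    if a.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the laws above.
    return a.preserves_self

# A rescue that requires force against an aggressor is forbidden
# outright, no matter how many lives it would save:
print(permitted(Action(harms_human=True, ordered_by_human=True)))  # False
```

Note that the First Law branch rejects the rescue scenario described above unconditionally, and that nothing in the laws tells an engineer how `harms_human` should be decided in the first place.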

Given these concerns, Asimov's propositions must be refined if artificial intelligence is to advance safely. The British Standards Institution is one of the first groups to do so: it has published an official set of guidelines that builds on Asimov's laws and adds guidance on complex topics such as transparency. This is a start, but only the beginning of a potentially endless discussion covering ethical topics such as robotic attachment to humans, sexism, racism, and more.

The importance of ethics in this field is undeniable: artificial intelligence, if harnessed properly, can drastically improve quality of life worldwide. There will always be risks and fears surrounding it, but a solid set of rules for developers to follow would minimize them. It is equally undeniable that autonomous robots have great potential to change mankind for the better. If we can establish ethical rules for robots to follow, the continued rise of artificial intelligence may prove the greatest step forward in the history of mankind.
