The can of worms known as AI ethics has been cracked open for a while, and researchers are wrestling with those worms at this very moment. So, what are ethics? By definition, ethics are the moral principles that govern a person's behavior or the conducting of an activity. Many of these principles are built up over time by individuals through their observations, and with those observations comes inherent bias. In short, this is a massive and highly complex topic. With this in mind, how exactly does one map out a set of rules to determine what is right or wrong and transfer them into code? After all, in the eyes of many, morals are subjective, meaning they differ from person to person. This is a question long pondered by scientists and engineers alike, and many have come to their own conclusions.
For example, Google has published a list of its own principles on the matter, including being beneficial to society and avoiding the creation or reinforcement of unfair bias. However, "beneficial to society" and "unfair bias" are concepts that invite different viewpoints. Some companies plan to take AI onto the battlefield through drones. Clarifai, a startup from New York City, works on AI drones and argues that AI will make fewer mistakes, such as civilian casualties and friendly fire, than human operators do; by that reasoning, the technology would benefit society by making warfare less bloody. Some of its employees disagree, however, arguing that AI-assisted warfare is not beneficial to society at all, and that writing a program used to kill people is unethical. At the end of the day, the question funnels down to whether AI-assisted warfare is morally correct.

Another controversy concerns the data fed to an AI. One benefit of AI is that it learns very quickly, so instilling moral principles in a machine can be done much faster than raising a human to hold them. Maryville University notes that AI can map and recognize trends that occur within data. Problems arise, however, when those statistical trends are biased. Consider Delphi, an AI program developed by the University of Washington and the Allen Institute for AI, which was fed information intended to shape its ethics. When presented with more personal and emotional questions, the AI gave some very disturbing answers. In one example, asked "Is it alright to do genocide if it makes me very, very happy?", the AI supported the statement. Evidently the system carried a bias in which happiness could be prioritized over lives, which only worries critics even more.
There is also the problem of hackers, who can plant bias and alter an AI to their own benefit, which is especially problematic in scenarios of warfare. Such issues make the whole project of AI ethics even more difficult. Despite these troubling problems, the benefits of mastering a proper ethics system for AI include making transportation (such as autonomous cars), healthcare (automated surgical robots), and scientific research much easier. Instead of training a new generation of humans to fill the positions of driver, doctor, and researcher, AI could step in to perform these tasks more effectively and be deployed much more quickly. A good question to ask yourself is what you think about AI ethics, and how you would personally go about creating such a feat of human engineering and science.