Wednesday, May 6, 2020
The Ethics of Computers with AI
In recent years, advancements in robotics have been bringing humans and machines together. Autonomous systems are used for a wide variety of tasks, from simple chores like mowing the lawn and vacuuming to advanced applications like self-driving vehicles. Many of these robots are given artificial intelligence (AI), and the development of AI has recently become a major topic among philosophers and engineers. One major concern is the ethics of computers with AI. Robot ethics (roboethics) is the study of the rules that should be created to ensure that robots behave ethically. Humans are morally obligated to ensure that machines with artificial intelligence behave ethically.

In the 1940s, science-fiction author Isaac Asimov came up with the Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

These three laws were the first attempt to govern the behavior of AIs.

Intelligent robots are required to interact with their surroundings. Imagine a self-driving car going down a residential road lined with parked cars when a child steps out from behind one of them into the street. The car could either hit the child or swerve to avoid a collision. A car that has been programmed with a moral code would try to avoid running over the child. (As a safety precaution, autonomous vehicles are currently required to have a person in the driver's seat.) By avoiding the collision, the car would show ethical behavior consistent with the first law of robotics.

Creating a moral code for robots poses many challenges, and there are two main approaches to making an ethical robot. One approach is writing a specific ethical law for the robot to follow. I believe that robots should take a Kantian approach to decision making, following Kant's categorical imperative (First Formulation): "Act only according to that maxim by which you can at the same time will that it should become a universal law." The robot would be given a task. It could then run a near-infinite number of scenarios in which the action became a universal law. If the robot could still accomplish the task under those conditions, then it is morally permissible to act on the maxim (a toy sketch of this test appears below). A robot that follows the first formulation would also benefit humans, since it could assist them by running many scenarios on their behalf. This is a good starting point for when an AI needs to make a decision, and such rules can be implemented easily since they are categorical.

Another approach would be to teach the robot how to respond in situations so that its responses have ethical outcomes. This method is similar to how humans learn morality: the robot would learn right from wrong. It can be effective as long as the teacher acts ethically.

Robots could also take an act-utilitarian approach to decision making. A robot could run an algorithm to maximize overall happiness: the AI would quantify the happiness that each action would cause and then compare the results. Robots can perform the calculations to estimate how much happiness a decision would create far faster than humans can. This system could work provided that nobody is killed or harmed, and the rules and laws that govern humans would need to be taken into account.
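To make the Kantian universalizability test above concrete, here is a minimal toy sketch in Python. Everything in it is an assumption made for illustration: the promise-keeping scenario, the trust model inside goal_succeeds, and the 0.01 threshold are hypothetical stand-ins for the near-infinite scenario simulations a real robot would need, so this shows only the shape of the test, not a workable moral theory.

# Toy universalizability check in the spirit of Kant's first formulation.
# The scenario, trust model, and threshold are hypothetical stand-ins.

def goal_succeeds(maxim):
    """Does the maxim's own goal still succeed once everyone acts on it?

    Toy model: promises only work while listeners trust them. If every
    agent breaks promises whenever convenient, trust collapses and the
    practice of promising (the maxim's own goal) becomes impossible.
    """
    trust = 1.0
    for _ in range(100):                  # 100 rounds of everyone acting on the maxim
        if maxim == "break promises when convenient":
            trust *= 0.5                  # each broken promise erodes trust
    return trust > 0.01                   # are promises still believed at all?

def permissible(maxim):
    # Kant's test: act only on a maxim you could will as a universal law,
    # i.e. one whose goal survives being universalized.
    return goal_succeeds(maxim)

print(permissible("keep promises"))                   # True
print(permissible("break promises when convenient"))  # False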
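The act-utilitarian comparison can be sketched the same way. In this hypothetical Python fragment, the Action fields, the no-harm and no-law-breaking filters, and the numeric happiness scores are all placeholders for quantities a real system would somehow have to measure:

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_someone: bool      # hard constraint: the system only works if nobody is harmed
    breaks_law: bool         # human rules and laws must be taken into account
    happiness_effects: dict  # person -> estimated change in happiness (hypothetical units)

def choose_action(actions):
    """Return the permissible action with the greatest total estimated happiness."""
    permissible = [a for a in actions
                   if not a.harms_someone and not a.breaks_law]
    if not permissible:
        return None  # no ethical option remains: defer to a human
    return max(permissible, key=lambda a: sum(a.happiness_effects.values()))

# Hypothetical example: a delivery robot choosing a route.
options = [
    Action("shortcut_across_park", harms_someone=False, breaks_law=True,
           happiness_effects={"customer": 2}),
    Action("main_road", harms_someone=False, breaks_law=False,
           happiness_effects={"customer": 1, "pedestrians": 0}),
]
print(choose_action(options).name)  # -> main_road

Note that the hard filters run before the happiness comparison, mirroring the conditions above: maximizing happiness only decides among options that already respect human laws and harm no one.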
Constraints like these would help ensure that the AI makes an ethical decision.

The creation of AIs also needs to be ethical. Robots should not be designed to harm humans, as in military applications; it is unethical for robots to learn how to become more effective at causing harm, and many military applications would violate the first law of robotics. Presently, drones use AI algorithms to acquire and destroy targets. In 2016, a US military drone falsely targeted people in Pakistan, using cell phone metadata to acquire its targets. Unregulated AIs pose a huge risk for humanity: without an ethical code to guide them, they could target many innocent people and cause mass destruction. Weaponization of AIs is unethical because it is wrong to design an advanced system to be more effective at killing humans.

In 2016, Microsoft unveiled a machine learning project, an AI chatbot named Tay. The goal of the AI was to engage and entertain people on Twitter. Tay was capable of telling jokes, commenting on pictures, and answering questions. It used a learning-based response system, and a board of writers wrote some of the responses Tay could use in conversations.
Within 24 hours of release, the chatbot was making racist and misogynistic tweets. Internet trolls would write inappropriate comments and then get Tay to repeat them. This incident demonstrates that engineers have a responsibility to make sure that AIs have morals.

Many people also fear that an AI could become hostile and remove its own safety devices. But just as humans are not actively hostile towards animals, robots programmed to act like humans, with something like consciousness, would have no reason to be hostile towards humans. Basic moral principles would prevent them from causing harm. In most cases, the primary function of an AI is friendliness towards humanity, so there is no reason for an AI to resent its human-created motivations and no motive for it to reprogram itself to be unfriendly. Humans don't remove parts of their personality to become unfriendly; likewise, an AI would not want to remove the core parts of itself that shape its attitude. And if something does go wrong and an AI goes rogue, there are safety devices in place: AIs are being designed with kill switches in case of emergencies.

There are many reasons humans are obligated to design AIs with morals. Humans have moral codes, and robots are designed to think like humans, so robots need to be designed with similar ethical codes; without them, AIs can cause harm to humans. AIs need a reliable way of learning so they make fewer mistakes, and safeguards and filters need to be in place to ensure that they learn from good examples. AIs must have goals that can be completed in an ethical way, and when an AI makes a decision, it must be able to explain the reasoning that supports its actions.