Ethics


2.3 Crowdsourced workers

Platforms such as Amazon’s Mechanical Turk (MTurk) and CrowdFlower, as well as vendor-managed systems like Clickworker, let companies hire contract workers to complete tasks, down to the level of microtasks that may take only a few seconds to complete. Read about the ethical issues surrounding the many crowdworkers whose low-paid, behind-the-scenes labor underlies many AI systems.

2.4 Biased algorithms

AI systems can excel at identifying patterns that let companies target specific customers more precisely. As you’ve seen throughout the program, this ability helps companies serve the unique needs of niche customers. But sometimes this targeting can go awry. For example, Facebook’s algorithms enabled advertisers to reach self-described racists. Facebook’s COO, Sheryl Sandberg, publicly apologized for this “totally inappropriate” outcome, and Facebook pledged to add 3,000 people to its 4,500-member team of employees who review and remove content that violates its community guidelines.

In another example, Microsoft set out to build a chatbot that could tweet like a teenager. Microsoft announced “Tay” on March 23, 2016, describing it as “an experiment in conversational understanding,” and released it on Twitter. The idea was that the more Tay engaged in conversation with people on Twitter, the smarter it would become. Unfortunately, Tay learned all too well. As people sent racist, misogynistic, anti-Semitic tweets its way, Tay started responding in a similar tone, not simply repeating statements back but creating new ones of its own in the same unfortunate vein. At first, Microsoft deleted the offensive statements, but within 24 hours it shut down Tay to “make some adjustments.”

 

  1. Four researchers in the field of AI share their views, concerns, and possible solutions for reducing and avoiding societal risks associated with AI. 
  2. Read about the differing views of Elon Musk and Mark Zuckerberg on the safety of AI.
  3. According to Sandra Wachter, Researcher in Data Ethics at the University of Oxford, although building systems that can detect bias is complicated, it is in principle possible and “is a responsibility that we as society should not shy away from.”
  4. Read Adel Nehme’s discussion of the short-term ethical concerns of AI.
  5. In 1942, science fiction writer Isaac Asimov coined his Three Laws of Robotics in his short story, “Runaround.” The three laws are outlined in Figure 1.

Figure 1: Asimov’s Three Laws of Robotics.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

 

 

 
