2.3 Crowdsourced workers
Platforms such as Amazon’s Mechanical Turk (MTurk) and CrowdFlower, as well as vendor-managed systems like Clickworker, let companies hire contract workers to complete tasks, including microtasks that take only a few seconds each. Read about the ethical issues that arise regarding the many crowdworkers whose low-paid, behind-the-scenes labor underlies many AI systems.
2.4 Biased algorithms
AI systems can excel at identifying patterns that let companies target specific customers more precisely. As you’ve seen throughout the program, this ability helps companies serve the unique needs of niche customers. But sometimes this targeting goes awry. For example, Facebook’s algorithms enabled advertisers to reach self-described racists. Facebook’s COO, Sheryl Sandberg, publicly apologized for this “totally inappropriate” outcome, and Facebook pledged to add 3,000 people to its 4,500-member team of employees who review and remove content that violates its community guidelines.
In another example, Microsoft set out to build a chatbot that could tweet like a teenager. Microsoft announced “Tay” on March 23, 2016, describing it as “an experiment in conversational understanding,” and released it on Twitter. The idea was that the more Tay conversed with people on Twitter, the smarter it would become. Unfortunately, Tay learned all too well. As people sent racist, misogynist, and antisemitic tweets its way, Tay began responding in a similar tone, not simply repeating statements back but composing new ones of its own in the same unfortunate vein. At first Microsoft deleted the offensive statements, but within 24 hours it shut Tay down to “make some adjustments.”
- Four researchers in the field of AI share their views, concerns, and possible solutions for reducing and avoiding societal risks associated with AI.
- Read about the differing views of Elon Musk and Mark Zuckerberg on the safety of AI.
- According to Sandra Wachter, Researcher in Data Ethics at the University of Oxford, although building systems that can detect bias is complicated, it is in principle possible and “is a responsibility that we as society should not shy away from.”
- Read Adel Nehme’s discussion of the short-term ethical concerns of AI.
- In 1942, science fiction writer Isaac Asimov coined his Three Laws of Robotics in his short story, “Runaround.” The three laws are outlined in Figure 1.
Figure 1: Asimov’s Three Laws of Robotics.