
Making Machines Safe for Humans: The Case of Siri


Abstract
This paper explores cultural conceptions of human-machine communication through a discourse analysis of U.S. news media accounts of Apple’s launch of Siri, a voice-activated personal assistant application. Through this analysis of online reports regarding Siri’s initial reception from The New York Times, CNN, and ABC News, several themes emerge regarding the nature of Siri and communication with it. These themes portray Siri as the future made real, as part friendly female, and as a futuristic servant at the users’ beck and call. In totality, these portrayals establish Siri as the antithesis of malicious AI machines and position her as a non-threatening technological slave firmly under the control of the user. Siri is “safe” for humans. Or is it? This paper concludes by questioning whether the control we have over Siri is real or an illusion that reinforces what Carey and Quirk (1989) called the “rhetoric of the electronic sublime.”
Keywords
human-machine communication; artificial intelligence; Siri; discourse analysis

Introduction
When Turing argued that machines would one day be able to think and engage in competent conversation with humans, he anticipated the backlash against an idea that threatened the sanctity of the human mind (Turing, 1992). By the time Turing published his seminal work in 1950, the debate over the impact of technology on society was already an old one. Extending back to the Phaedrus and up through the industrial revolution into the computer age, philosophers had long weighed the promise and peril of emerging technologies. As is evident in Leo Marx’s (2000) The Machine in the Garden, the impact of machines also has occupied a space within the greater cultural consciousness. More than 60 years after Turing proposed the idea of talking with intelligent computers, we now have the capability to do so. In 2011, Apple launched the iPhone 4s with a new feature: Siri, a voice-controlled artificial intelligence application that functions as a personal assistant. The goal of this paper is to continue this exploration of our cultural reactions to machines, this time by focusing on how we conceive of a program that can talk back to us.
Siri is the focus of this study for several reasons: Although people have communicated vocally and haptically with machines before Siri, the program, which began as a $110 million defense project (SRI, 2012), was unique when introduced because it used natural language, instead of computer commands, to communicate (Aron, 2011; Rousch, 2010). Siri speaks with a female voice in the U.S. and gives the illusion of having a personality. As an AI program, Siri learns from and adapts to both individual users and all of its users collectively (Apple, 2012; Aron, 2011). Siri also is more accessible to the public than most AI technology. The application’s introduction with the iPhone 4s created a buzz in the U.S. media that caught the attention of both technophiles and average users. And so, Siri can help scholars better understand how people made sense of and reacted to voice-controlled, AI technology when it was introduced on a large scale.

Approach
This study employs discourse analysis, a qualitative approach, to explore media accounts regarding Apple’s launch of Siri on three prominent U.S. news websites: CNN, ABC News, and The New York Times. The start date for the study was Oct. 1, 2011, three days before Apple introduced Siri with the iPhone 4s. Stories were read through Siri’s public release on Oct. 14 and until a break in the flow of initial coverage of Siri was reached. The last story studied was published Oct. 25 for CNN, Oct. 27 for The New York Times, and Oct. 28 for ABC News. Eighteen stories from CNN, 23 from The New York Times, and 12 from ABC News were analyzed. Texts were read multiple times to identify themes. The different terms used to refer to Siri, as well as those used to describe talking with Siri, were analyzed. The context of the stories and the individual sentences and paragraphs containing references to and communication with Siri also were analyzed.

Analysis
From the analysis emerges a picture of Siri as a “safe” machine, established through its portrayal as the future realized, as part “friendly” human, and as servant.
News stories highlight the futuristic qualities of talking with a machine and having it talk back, setting up Siri as, in one reporter’s words, “the stuff of science fiction” (Gross, 2011a, par. 4). References to science fiction movies, shows, and characters occur throughout stories discussing Siri. Although some direct comparisons are made between Siri and what could be considered malicious machines, such as HAL 9000, they often are tongue-in-cheek. These connections with science fiction serve as a heuristic for making sense of a talking device and, in doing so, portray Siri not as a dangerous machine but as the promise of science fiction brought to life.
Human qualities in machines can be perceived as a threat, but news reports anthropomorphize Siri in a way that downplays concern, drawing on her female gender, her interaction capabilities, and her humor. Besides referring to Siri as a program, or “it,” reporters also call Siri “she” or “her” based on Siri’s female voice. News accounts also focus on Siri’s helpfulness and humorous responses to requests. In the article “Snide, Sassy Siri Has Plenty to Say,” Gross (2011b) explains: “This awareness and sense of humor has already earned her some fans” (par. 8). In the United States, women are culturally perceived as less of a threat than men, and this focus on Siri’s gender and humor further removes her from the category of threatening machine.
Apple (2012) describes Siri as an “intelligent personal assistant,” but in news stories, Siri is portrayed as more of a servant under the firm control of the user. Some of the more than three dozen terms the news outlets employ for Siri focus on control of the program: “voice-controlled assistant,” “voice-activated servant,” and “voice-commanded minion.” News reports also include the description Siri gives when asked about its nature: Siri replies that it is a “humble personal assistant” (e.g., Grobart, 2011; Gross, 2011a). The way news accounts describe communication with Siri also reinforces this sense of control. The term “conversation” is not typically used to describe communication with Siri. Instead, news accounts refer to giving commands to Siri or describe it as responding to the needs of humans. The program waits to be spoken to and does what it is told.

Discussion
Together these portrayals of Siri establish the program as a positive technological development that is non-threatening to humans and, in fact, remains firmly under our control, serving us when summoned. The way Siri is initially discussed and received, which also is a reflection of its design, works to mitigate the societal concerns regarding artificial intelligence that Turing faced and science fiction writers utilized for drama. Siri, or she, is seemingly made “safe” for humans.
Or is it? This depiction of Siri and the praise heaped on the program contain threads of cultural discourse regarding new technology that predate Siri and can be described as what Carey and Quirk (1989) refer to as the “rhetoric of the electronic sublime” (p. 139). The argument goes that through electricity our hopes are realized; however, this rhetoric contains a false hope. As Carey and Quirk (1989) argue, this promotion of the machine does not free us; instead, we become more dependent upon machines and the structures of power that produce and promote them. And so, while we can command Siri to text a partner or schedule an appointment, and she doesn’t appear to threaten our humanity, we should question whether the control we have over Siri is real or merely an illusion.