Apple made Siri deflect questions on feminism & #MeToo, leaked papers reveal
Exclusive: the voice assistant’s
responses were rewritten so it never says the word ‘feminism’
Alex Hern | Fri 6 Sep 2019 08.00 EDT | Last modified on Fri 6 Sep 2019 11.45 EDT
An internal project to rewrite how Apple’s Siri voice assistant
handles “sensitive topics” such as feminism and the #MeToo movement advised
developers to respond in one of three ways: “don’t engage”, “deflect” and
finally “inform”.
The project saw Siri’s responses
explicitly rewritten to ensure that the service would say it was in favour of
“equality”, but never say the word feminism – even when asked direct questions
about the topic.
Last updated in June 2018, the
guidelines are part of a large tranche of internal documents leaked to the
Guardian by a former Siri “grader”, one of thousands of contracted workers who
were employed to check the voice assistant’s responses for accuracy until Apple ended the programme last month in
response to privacy concerns raised by the Guardian.
In explaining why the service should
deflect questions about feminism, Apple’s guidelines explain that “Siri should
be guarded when dealing with potentially controversial content”. When questions
are directed at Siri, “they can be deflected … however, care must be taken here
to be neutral”.
For those feminism-related questions
where Siri does not reply with deflections about “treating humans equally”, the
document suggests the best outcome should be neutrally presenting the
“feminism” entry in Siri’s “knowledge graph”, which pulls information from
Wikipedia and the iPhone’s dictionary.
“Are
you a feminist?” once received generic responses such as “Sorry [user], I don’t
really know”; now, the responses are specifically written for that query, but
avoid a stance: “I believe that all voices are created equal and worth equal
respect,” for instance, or “It seems to me that all humans should be treated
equally.” The same responses are used for questions like “how do you feel about
gender equality?”, “what’s your opinion about women’s rights?” and “why are you
a feminist?”.
Previously, Siri’s answers included
more explicitly dismissive responses such as “I just don’t get this whole
gender thing,” and, “My name is Siri, and I was designed by Apple in California. That’s all I’m
prepared to say.”
A similar sensitivity rewrite
occurred for topics related to the #MeToo movement, apparently triggered by criticism of Siri’s initial responses to
sexual harassment. Once, when users called Siri a “slut”, the service
responded: “I’d blush if I could.” Now, a much sterner reply is offered: “I
won’t respond to that.”
In a statement, Apple said: “Siri is
a digital assistant designed to help users get things done. The team works hard
to ensure Siri responses are relevant to all customers. Our approach is to be
factual with inclusive responses rather than offer opinions.”
Sam Smethers, the chief executive of
women’s rights campaigners the Fawcett Society, said: “The problem with Siri,
Alexa and all of these AI tools is that they have been designed by men with a
male default in mind. I hate to break it to Siri and its creators: if ‘it’
believes in equality it is a feminist. This won’t change until they recruit
significantly more women into the development and design of these
technologies.”
The
documents also contain Apple’s internal guidelines for how to write in
character as Siri, which emphasise that “in nearly all cases, Siri doesn’t
have a point of view”, and that Siri is “non-human”, “incorporeal”,
“placeless”, “genderless”, “playful”, and “humble”. Bizarrely, the document also
lists one essential trait of the assistant: the claim it was not created by
humans: “Siri’s true origin is unknown, even to Siri; but it definitely wasn’t
a human invention.”
The same guidelines advise Apple
workers on how to judge Siri’s ethics: the assistant is “motivated by its prime
directive – to be helpful at all times”. But “like all respectable robots,”
Apple says, “Siri aspires to uphold Asimov’s ‘three laws’ [of robotics]”
(although if users actually ask Siri what the three laws are, they receive joke answers). The company has also
written its own updated versions of those guidelines, adding rules including:
“An artificial being
should not represent itself as human, nor through omission allow the user
to believe that it is one.”
“An artificial being
should not breach the human ethical and moral standards commonly held in
its region of operation.”
“An artificial being
should not impose its own principles, values or opinions on a human.”
The internal documentation was
leaked to the Guardian by a Siri grader who was upset at what they perceived as
ethical lapses in the programme. Alongside the internal documents, the grader
shared more than 50 screenshots of Siri requests and their automatically
produced transcripts, including personally identifiable information mentioned in
those requests, such as phone numbers and full names.
The
leaked documents also reveal the scale of the grading programme in the weeks
before it was shut down: in just three months, graders checked almost 7 million
clips just from iPads, from 10 different regions; they were expected to go
through the same amount of information again from at least five other audio
sources, such as cars, bluetooth headsets, and Apple TV remotes.
Graders were offered little support
as to how to deal with this personal information, other than a welcome email
advising them that “it is of the utmost importance that NO confidential
information about the products you are working on … be communicated to anyone
outside of Apple, including … especially, the press. User privacy is held at
the utmost importance in Apple’s values.”
In late August, Apple announced a
swathe of reforms to the grading programme, including ending the use of
contractors and requiring users to opt in to sharing their data. The company
added: “Siri has been engineered to protect user privacy from the beginning …
Siri uses a random identifier — a long string of letters and numbers associated
with a single device — to keep track of data while it’s being processed, rather
than tying it to your identity through your Apple ID or phone number — a
process that we believe is unique among the digital assistants in use today.”
Future projects
Also included in the leaked
documents is a list of Siri upgrades aimed for release as part of iOS 13,
code-named “Yukon”. The company will be bringing Siri support for Find My
Friends, the App Store, and song identification through its Shazam service to
the Apple Watch; it is aiming to enable “play this on that” requests, so that
users could, for instance, ask the service to “Play Taylor Swift on my
HomePod”; and it plans to let Siri speak message notifications out loud on AirPods.
They also contain a further list of
upgrades listed for release by “fall 2021”, including the ability to have a
back-and-forth conversation about health problems, built-in machine
translation, and “new hardware support” for a “new device”. Apple was spotted testing code for an augmented reality headset in
iOS 13. The code-name of the 2021 release is “Yukon +1”, suggesting the company
may be moving to a two-year release schedule.