Apple made Siri deflect questions on feminism & #MeToo, leaked papers reveal
Exclusive: voice assistant’s responses were rewritten so it never says the word ‘feminism’
Alex Hern
Fri 6 Sep 2019 08.00 EDT | Last modified on Fri 6 Sep 2019 11.45 EDT
An internal project to rewrite how Apple’s Siri voice assistant
handles “sensitive topics” such as feminism and the #MeToo movement advised
developers to respond in one of three ways: “don’t engage”, “deflect” and
finally “inform”.
The project saw Siri’s responses
explicitly rewritten to ensure that the service would say it was in favour of
“equality”, but never say the word feminism – even when asked direct questions
about the topic.
Last updated in June 2018, the
guidelines are part of a large tranche of internal documents leaked to the
Guardian by a former Siri “grader”, one of thousands of contracted workers who
were employed to check the voice assistant’s responses for accuracy until Apple ended the programme last month in
response to privacy concerns raised by the Guardian.
In explaining why the service should deflect questions about feminism, Apple’s guidelines note that “Siri should
be guarded when dealing with potentially controversial content”. When questions
are directed at Siri, “they can be deflected … however, care must be taken here
to be neutral”.
For those feminism-related questions
where Siri does not reply with deflections about “treating humans equally”, the
document suggests the best outcome should be neutrally presenting the
“feminism” entry in Siri’s “knowledge graph”, which pulls information from
Wikipedia and the iPhone’s dictionary.
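Taken together, the guidelines amount to a simple routing policy for sensitive topics. The Python sketch below is purely illustrative and assumes nothing about Apple’s actual implementation: the three strategy names come from the leaked documents, the canned replies are those quoted in this article, and the “knowledge graph” stand-in is an ordinary dictionary.

```python
# Illustrative sketch only - not Apple's code. It mimics the three response
# strategies the leaked guidelines describe: "don't engage", "deflect", "inform".

# Hypothetical stand-in for Siri's "knowledge graph", which the documents say
# pulls information from Wikipedia and the iPhone's dictionary.
KNOWLEDGE_GRAPH = {
    "feminism": "Feminism: a range of movements and ideologies "
                "that seek to define and establish equality of the sexes.",
}

def respond(topic: str, strategy: str) -> str:
    if strategy == "don't engage":
        # Flat refusal, like the reply now given to abusive requests.
        return "I won't respond to that."
    if strategy == "deflect":
        # Neutral, equality-themed reply that avoids the topic word itself.
        return "It seems to me that all humans should be treated equally."
    # "inform": neutrally present the knowledge-graph entry for the topic.
    return KNOWLEDGE_GRAPH.get(topic, "Sorry, I don't really know.")

# A direct question about feminism is deflected rather than answered head-on.
print(respond("feminism", "deflect"))
```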
“Are
you a feminist?” once received generic responses such as “Sorry [user], I don’t
really know”; now, the responses are specifically written for that query, but
avoid a stance: “I believe that all voices are created equal and worth equal
respect,” for instance, or “It seems to me that all humans should be treated
equally.” The same responses are used for questions like “how do you feel about
gender equality?”, “what’s your opinion about women’s rights?” and “why are you
a feminist?”.
Previously, Siri’s answers included
more explicitly dismissive responses such as “I just don’t get this whole
gender thing,” and, “My name is Siri, and I was designed by Apple in California. That’s all I’m
prepared to say.”
A similar sensitivity rewrite
occurred for topics related to the #MeToo movement, apparently triggered by criticism of Siri’s initial responses to
sexual harassment. Once, when users called Siri a “slut”, the service
responded: “I’d blush if I could.” Now, a much sterner reply is offered: “I
won’t respond to that.”
In a statement, Apple said: “Siri is
a digital assistant designed to help users get things done. The team works hard
to ensure Siri responses are relevant to all customers. Our approach is to be
factual with inclusive responses rather than offer opinions.”
Sam Smethers, the chief executive of
women’s rights campaigners the Fawcett Society, said: “The problem with Siri,
Alexa and all of these AI tools is that they have been designed by men with a
male default in mind. I hate to break it to Siri and its creators: if ‘it’
believes in equality it is a feminist. This won’t change until they recruit
significantly more women into the development and design of these
technologies.”
The
documents also contain Apple’s internal guidelines for how to write in
character as Siri, which emphasise that “in nearly all cases, Siri doesn’t
have a point of view”, and that Siri is “non-human”, “incorporeal”,
“placeless”, “genderless”, “playful”, and “humble”. Bizarrely, the document also
lists among the assistant’s essential traits the claim that it was not created by
humans: “Siri’s true origin is unknown, even to Siri; but it definitely wasn’t
a human invention.”
The same guidelines advise Apple
workers on how to judge Siri’s ethics: the assistant is “motivated by its prime
directive – to be helpful at all times”. But “like all respectable robots,”
Apple says, “Siri aspires to uphold Asimov’s ‘three laws’ [of robotics]”
(although if users actually ask Siri what the three laws are, they receive joke answers). The company has also
written its own updated versions of those laws, adding rules including:
“An artificial being
should not represent itself as human, nor through omission allow the user
to believe that it is one.”
“An artificial being
should not breach the human ethical and moral standards commonly held in
its region of operation.”
“An artificial being
should not impose its own principles, values or opinions on a human.”
The internal documentation was
leaked to the Guardian by a Siri grader who was upset at what they perceived as
ethical lapses in the programme. Alongside the internal documents, the grader
shared more than 50 screenshots of Siri requests and their automatically
produced transcripts, including personally identifiable information mentioned in
those requests, such as phone numbers and full names.
The
leaked documents also reveal the scale of the grading programme in the weeks
before it was shut down: in just three months, graders checked almost 7 million
clips just from iPads, from 10 different regions; they were expected to go
through the same amount of information again from at least five other audio
sources, such as cars, Bluetooth headsets, and Apple TV remotes.
Graders were given little guidance on how to handle this personal information, beyond a welcome email
advising them that “it is of the utmost importance that NO confidential
information about the products you are working on … be communicated to anyone
outside of Apple, including … especially, the press. User privacy is held at
the utmost importance in Apple’s values.”
In late August, Apple announced a
swathe of reforms to the grading programme, including ending the use of
contractors and requiring users to opt in to sharing their data. The company
added: “Siri has been engineered to protect user privacy from the beginning …
Siri uses a random identifier — a long string of letters and numbers associated
with a single device — to keep track of data while it’s being processed, rather
than tying it to your identity through your Apple ID or phone number — a
process that we believe is unique among the digital assistants in use today.”
Future projects
Also included in the leaked documents is a list of Siri upgrades aimed for release as part of iOS 13,
code-named “Yukon”. The company will be bringing Siri support for Find My
Friends, the App Store, and song identification through its Shazam service to
the Apple Watch; it is aiming to enable “play this on that” requests, so that
users could, for instance, ask the service to “Play Taylor Swift on my
HomePod”; and to add the ability to speak message notifications out loud on AirPods.
They also contain a further list of upgrades slated for release by “fall 2021”, including the ability to have a
back-and-forth conversation about health problems, built-in machine
translation, and “new hardware support” for a “new device”. Apple was spotted testing code for an augmented reality headset in
iOS 13. The code-name of the 2021 release is “Yukon +1”, suggesting the company
may be moving to a two-year release schedule.