Artificial Intelligence—With Very Real Biases
According to Microsoft researcher Kate Crawford, digital
brains can be just as error-prone and biased as ours
By Kate Crawford | Oct. 17, 2017 11:05 a.m. ET
What do you imagine when someone mentions artificial
intelligence? Perhaps it’s something drawn from science-fiction films: HAL’s
glowing eye, a shape-shifting Terminator or the sound of Samantha’s all-knowing
voice in the movie “Her.”
As someone who researches the social implications of AI,
I tend to think of something far more banal: a municipal water system, part of
the substrate of our everyday lives. We expect these systems to work—to quench
our thirst, water our plants and bathe our children. And we assume that the
water flowing into our homes and offices is safe. Only when disaster strikes—as
it did in Flint, Mich.—do we realize the critical importance of safe and
reliable infrastructure.
Artificial intelligence is quickly becoming part of the
information infrastructure we rely on every day. Early-stage AI technologies
are filtering into everything from driving directions to job and loan
applications. But unlike our water systems, there are no established methods to
test AI for safety, fairness or effectiveness. Error-prone or biased
artificial-intelligence systems have the potential to taint our social
ecosystem in ways that are initially hard to detect, harmful in the long term
and expensive—or even impossible—to reverse. And unlike public infrastructure,
AI systems are largely developed by private companies and governed by
proprietary, black-box algorithms.
A good example is today’s workplace, where hundreds of
new AI technologies are already influencing hiring processes, often without
proper testing or notice to candidates. New AI recruitment companies offer to
analyze video interviews of job candidates so that employers can “compare” an
applicant’s facial movements, vocabulary and body language with the expressions
of their best employees. But with this technology comes the risk of invisibly
embedding bias into the hiring system by choosing new hires simply because they
mirror the old ones. What if Uber, with its history of poorly behaved
executives, used a system like this? And attempting to replicate the perfect
employee is an outdated model of management science: Recent studies have shown
that monocultures are bad for business and that diverse workplaces outperform
more homogeneous ones.
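To make the mechanism concrete, here is a minimal, hypothetical sketch of similarity-based candidate scoring; the feature names and numbers are invented, and no vendor's actual system is being described. Candidates are ranked by how closely their extracted features match a "prototype" built from current top performers, so whatever traits those employees happen to share, relevant or not, become the de facto hiring criterion.

```python
# Hypothetical illustration: ranking candidates by similarity to
# incumbent "top performers." All features and values are invented;
# real vendor systems are proprietary and may work differently.
import numpy as np

rng = np.random.default_rng(0)

# Each row: [speech_pace, smile_rate, word_choice_score] -- made-up
# features of the kind a video-interview analyzer might extract.
top_performers = rng.normal(loc=[0.8, 0.6, 0.7], scale=0.05, size=(20, 3))
prototype = top_performers.mean(axis=0)  # the "ideal employee" centroid

def score(candidate: np.ndarray) -> float:
    """Higher score = more similar to current staff (cosine similarity)."""
    return float(candidate @ prototype /
                 (np.linalg.norm(candidate) * np.linalg.norm(prototype)))

# A candidate who resembles the incumbents outranks one who doesn't,
# regardless of actual ability -- the bias is baked into the objective.
like_incumbents = np.array([0.81, 0.59, 0.72])
different_style = np.array([0.30, 0.20, 0.95])
print(score(like_incumbents), score(different_style))
```

Nothing in this objective measures job performance; it measures resemblance, which is exactly how the mannerisms and demographics of the existing staff get recycled into the next cohort.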
New systems are also being advertised that use AI to
analyze young job applicants’ social media for signs of “excessive drinking”
that could affect workplace performance. This is unscientific, correlation-driven
thinking: it stigmatizes particular forms of self-expression without any
evidence that those signals predict real problems. Even worse, it normalizes
the surveillance of job applicants without their knowledge before they get in
the door.
These systems “learn” from social data that reflects human
history, with all its biases and prejudices intact. Algorithms can
unintentionally boost those biases, as many computer scientists have shown.
Last year, a ProPublica exposé on “Machine Bias” showed how algorithmic
risk-assessment systems are spreading bias within our criminal-justice system.
So-called predictive-policing systems suffer from a lack of rigorous
predeployment bias testing and monitoring. As one RAND study showed, Chicago’s
algorithmic “heat list” for identifying at-risk individuals failed to
significantly reduce violent crime and increased complaints of police
harassment from the very populations it was meant to protect. We have a long
way to go before these systems can approach the nuance of human
decision-making, and further still before they can offer real accountability.
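A toy sketch of how this "learning" happens, with invented groups and rates: any model trained to reproduce past decisions also reproduces the disparities in those decisions, without a prejudiced rule ever being written down.

```python
# Hypothetical illustration of learned historical bias: a model fit to
# past decisions faithfully inherits the disparity in those decisions.
# Groups and approval rates are invented for the example.
import numpy as np

rng = np.random.default_rng(1)

# Historical records: group membership and past loan decisions.
# Group B was approved less often for reasons unrelated to merit.
n = 10_000
group = rng.choice(["A", "B"], size=n)
approved = np.where(group == "A",
                    rng.random(n) < 0.70,   # 70% historical approval
                    rng.random(n) < 0.40)   # 40% historical approval

# The simplest possible "model": predict each group's historical
# approval rate. Any model trained to minimize error against these
# labels converges toward the same gap.
model = {g: approved[group == g].mean() for g in ("A", "B")}
print(model)  # ~{'A': 0.70, 'B': 0.40} -- the bias, learned intact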
Artificial intelligence is still in its early
adolescence, flush with new capacities but still very primitive in its
understanding of the world. Today’s AI is extraordinarily powerful when it
comes to detecting patterns but lacks social and contextual awareness. It’s a
minor issue when it comes to targeted Instagram advertising but a far more
serious one if AI is deciding who gets a job, what political news you read or
who gets out of jail.
AI companies are now targeting everything from criminal
justice to health care. But we need much more research about how these systems
work before we unleash them on our most sensitive social institutions. To this
end, I’ve been working with both academic and tech industry colleagues to
launch the AI Now Institute, based at New York University. It’s a
multidisciplinary center that brings together social scientists, computer
scientists, lawyers, economists, and engineers to study the complex social
implications of these technologies.
As the organizational theorist Peter Drucker once wrote,
we can’t manage what we can’t measure. As AI becomes the new infrastructure,
flowing invisibly through our daily lives like the water in our faucets, we
must understand its short- and long-term effects and know that it is safe for
all to use. This is a critical moment for positive interventions, which will
require new tests and methodologies drawn from diverse disciplines to help us
understand AI in the context of complex social systems. Only by developing a
deeper understanding of AI systems as they act in the world can we ensure that
this new infrastructure never turns toxic.
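One small example of what such measurement can look like in practice (a standard "disparate impact" ratio from U.S. employment law, offered here as an illustration rather than as the institute's method): compare selection rates across groups, where values below roughly 0.8 are conventionally treated as a red flag under the EEOC's four-fifths rule.

```python
# A minimal fairness check: the "four-fifths" disparate-impact ratio.
# The numbers are invented; a real audit needs far more than one metric.
def disparate_impact(selected_a: int, total_a: int,
                     selected_b: int, total_b: int) -> float:
    """Ratio of the lower group's selection rate to the higher's."""
    rate_a, rate_b = selected_a / total_a, selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Example: 70 of 100 group-A applicants hired vs. 40 of 100 group-B.
ratio = disparate_impact(70, 100, 40, 100)
print(f"{ratio:.2f}")  # 0.57 -- well below the 0.8 rule-of-thumb threshold
```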
—Crawford is a distinguished research professor at NYU
and a principal researcher at Microsoft.