Algorithms have gotten out of control. It's time to regulate them.
McDonald's recently announced
that it had purchased Dynamic Yield, an AI company it will use
to analyze customer habits and try to sell them more food. When a hamburger
shack is using algorithms to stoke sales, it's clear we have entered a new era.
But the ubiquity of algorithms is not merely an evolution of technology.
Rather, it represents the emergence of a whole new set of questions around
ethics, bias, and equity with which we must grapple. Up until now, algorithms
have been deployed with relatively little oversight. It may be time for that to
change.
Algorithms — complex
sets of rules that computers follow to make decisions — are becoming fundamental to the
functioning of modern society. But they also bring with them a heap of
problems. For example, a revealing Bloomberg piece recently described how YouTube has a long
history of suppressing employee concerns about false or bigoted content on the
platform in favor of the AI-based content sorting system that determines which
videos the site recommends to users. That's a problem!
It may be time to
consider that, rather than trying to regulate the big tech companies
themselves, it would be more useful to regulate the algorithms they use. And we may need
something like an "algorithm czar" to help. The algorithm czar would
be a person or department in the government whose sole purpose is to regulate
the use of algorithms. After all, algorithms are positively everywhere these
days, and each company uses a very complex and closely guarded set of formulas
to determine what their users see. The algorithms people are most familiar with
are probably Google's search ranking and Facebook's News Feed.
But algorithms extend
far beyond that. In a sense, they are an almost inevitable result of the
digital era, which has produced a flood of data so overwhelming that we need an
automated, digital method of dealing with it. Think about the number of web
pages in the world: Google essentially has no choice but to use algorithms to
tune its search results.
But that inevitability
also means that algorithms are used in an increasing number of fields. They are
utilized in job searches at large companies to sort through
applicants. Credit checks and loan applications are filtered through algorithms. There are even cases in
which algorithms are used to predict the rates of prison recidivism.
When these systems are
used in such varying and serious ways, algorithms become not just a method of
filtering data, but a way of outsourcing decision making. And crucially,
despite claims from some that the digital, computerized nature of algorithms
means they are free of bias, in fact, the opposite is true. Humans code
algorithms and, consciously or not, seed them with their own flawed
perspectives. Algorithms also draw on existing data to make decisions, and
that data often encodes past patterns of discrimination.
As a result, algorithms always have the potential to
replicate or exacerbate human bias. The algorithms used to predict prison
recidivism, for example, could have devastating consequences if
their predictions are skewed or simply wrong.
So it absolutely makes
sense to regulate these specific technological forms. Algorithms are an obscure
layer of mediation between people and the things that affect their lives, and
many people fail to understand how they operate. If most people don't know how Facebook's News Feed
works, they are just as likely to be confused about how an algorithmic credit
check works.
A kind of algorithm
czar, or an algorithm department, would be responsible both for setting out the
rules by which algorithms can operate, and then overseeing their fair and
correct enforcement. Rather than attempting to regulate big tech companies — as
some politicians, including Sen. Elizabeth Warren (D-Mass.), have suggested — looking at the
constituent parts that form our digital landscape may be more beneficial. After
all, beyond algorithms, there are other complex fields — artificial
intelligence, machine learning, facial recognition, data tracking, and many,
many more — that may need their own rules and regulations. Instead of trying to
curtail companies, perhaps we should try to regulate the way in which emerging
technologies are applied to ensure they don't in fact make worse the existing
fractures and problems with which we already live.
There was once a time
when the arrival of smart technologies like AI and algorithms was hailed as a
way to help us tackle some of society's deepest, most intractable problems:
systemic bias, the self-replicating nature of privilege, or the basic unfairness
that corrupts so many of our noblest ideals. Perhaps that could still be
true — but only if we empower governments to challenge the haphazard
applications of these powerful, sometimes dangerous technologies.