AI bias: How tech determines if you land job, get a loan or end up in jail
Computer software teaches customer service agents how to be more compassionate, schools use machine learning to scan for weapons and mass shooters on campus, and doctors use AI to map the root cause of diseases.

Sectors such as cybersecurity, online entertainment and retail use the tech in combination with wide swaths of customer data in revolutionary ways to streamline services.

Though these applications may seem harmless, perhaps even helpful, AI is only as good as the information fed into it, which can have serious implications.
You might not realize it, but in some cases AI helps determine whether you qualify for a loan.

There are products in the pipeline that could have police officers stopping you because software misidentified you as someone else.

Imagine if people on the street could take a photo of you, then a computer scanned a database to tell them everything about you, or if an airport's security camera flagged your face while a bad guy walked clean through TSA.

Those are real-world possibilities when tech that's supposed to bolster convenience has human bias baked into the framework.
"Artificial intelligence is a super powerful tool, and like any really powerful tool, it can be used to do a lot of things – some of which are good and some of which can be problematic," said Eric Sydell, executive vice president of innovation at Modern Hire, which develops AI-enabled software.

"In the early stages of any new technology like this, you see a lot of companies trying to figure out how to bring it into their business," Sydell said, "and some are doing it better than others."

Whether it's intentional or not, humans make judgments that can spill over into the code created for AI to follow. That means AI can contain implicit racial, gender and ideological biases, which has prompted an array of federal and state regulatory efforts.
Criminal justice
In June, Rep. Don Beyer, D-Va., offered two amendments to a House appropriations bill: one would bar federal funds from being used to pay for facial recognition technology used by law enforcement, and the other would require the National Science Foundation to report to Congress on the social impacts of AI.
"I don’t think we should ban all federal dollars from doing all AI. We just have to do it thoughtfully," Beyer told USA TODAY. He said computer learning and facial recognition software could lead police to falsely identify someone, prompting a cop to reach for a gun in extreme cases.

"I think very soon we will ask to ban the use of facial recognition technology on body cams because of the real-time concerns," Beyer said. "When data is inaccurate, it could cause a situation to get out of control."

AI is also used in predictive analysis, in which a computer reveals how likely a person is to commit a crime. Though it's not quite to the extent of the "precrime" police units of the Tom Cruise sci-fi hit "Minority Report," the technique has faced scrutiny over whether it improves safety or simply perpetuates inequities.

Americans have voiced mixed support for AI applications, and the majority (82%) agree that it should be regulated, according to a study this year from the Center for the Governance of AI and Oxford University’s Future of Humanity Institute.

When it comes to facial recognition specifically, Americans say law enforcement agencies will put the tech to good use.
Jobs
Numerous studies suggest that automation will destroy jobs for humans. For example, Oxford academics Carl Benedikt Frey and Michael Osborne estimated that 47% of American jobs are at high risk of automation by the mid-2030s.

As some workers worry about being displaced by computers, others are being hired thanks to AI-enabled software.

The technology can match candidates who have the ideal skill sets for a specific work environment with employers who may be too busy to have humans screen applicants.

"Whenever you apply for a loan, there may be AI to figure out if that loan should be given or not," said Kunal Verma, co-founder of AppZen, an AI platform for finance teams with clients including WeWork and Amazon.

The technology is often touted as a faster and more accurate way to assess a potential borrower because it can sift through tons of data in seconds. However, there's room for error.

If the information fed into an algorithm shows that you live in an area where a lot of people have defaulted on their loans, the system may determine you are not reliable, Verma said.

"It may also happen that the area may have a lot of people of certain minorities or other characteristics that could lead to a bias in the algorithm," Verma said.
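The mechanism Verma describes can be illustrated with a toy example. The scoring rule, weights and data below are entirely hypothetical, not any lender's actual model: the point is that when an algorithm weights a neighborhood-level feature such as the local default rate, two applicants with identical repayment histories can receive different scores based only on where they live.

```python
# Hypothetical area-level default rates (illustrative values only)
AREA_DEFAULT_RATE = {"area_a": 0.02, "area_b": 0.30}

def credit_score(income, on_time_payment_rate, area):
    """Toy scoring rule: personal history drives most of the score,
    but the neighborhood feature drags it down for applicants from
    high-default areas -- a proxy that can encode bias."""
    score = 0.5 * on_time_payment_rate + 0.3 * min(income / 100_000, 1.0)
    score -= 0.4 * AREA_DEFAULT_RATE[area]  # neighborhood proxy feature
    return round(score, 3)

# Two applicants with identical income and payment history,
# differing only in home neighborhood:
alice = credit_score(60_000, 0.98, "area_a")
bob = credit_score(60_000, 0.98, "area_b")
print(alice, bob)  # bob scores lower solely because of his neighborhood
```

If the high-default area also has a high concentration of a minority group, the neighborhood feature acts as a proxy for a protected characteristic even though race never appears in the data, which is the kind of indirect bias Verma warns about.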
Solutions to bias
Bias can creep in at almost every stage of the deep-learning process; however, algorithms can also help reduce disparities caused by poor human judgment.