Professors warn tech is taking over decisions from humans
NATHAN RUBBELKE - STAFF REPORTER • AUGUST 31, 2017
Scholars create program to test software for bias and discrimination
In the future, your future might depend on a series of carefully calculated zeros and ones.
As technology improves, humans become less involved in decisions that affect our lives — and that isn’t exactly a good thing.
As artificial intelligence gains ground, college professors at the University of Massachusetts-Amherst have developed a program to test software for bias and discrimination.
Yes, racial discrimination. But more than that. Healthcare decisions. Loan decisions. Heck, even which neighborhoods Amazon offers free same-day delivery.
“Today, software determines who gets a loan or gets hired, computes risk-assessment scores that help decide who goes to jail and who is set free, and aids in diagnosing and treating medical patients,” according to the program’s developers.
Given that, the researchers argue it is critical that “software does not discriminate against groups or individuals,” adding that their field of study is “undervalued” and that “countless examples of unfair software have emerged.”
In a scholarly article published for an upcoming software engineering conference, computer scientists Alexandra Meliou and Yuriy Brun, who created the program along with PhD student Sainyam Galhotra, detail the “growing concern” of software discrimination.
In the paper, the two professors forecast software’s growing influence on human life and argue that it already plays an outsized role in society.
“Going forward, the importance of ensuring fairness in software will only increase,” the paper states.
The scholars used examples that illustrate bias against the wealthy and the downtrodden.
One of the examples of software discrimination provided by Brun and Meliou is the programming Amazon.com uses to determine which geographic areas receive free same-day delivery. Following the service’s rollout last year, it was alleged that minority neighborhoods received the perk at a much lower rate than predominantly white neighborhoods.
The authors also highlight how software used in the public sector can exhibit bias, spotlighting technology used in a number of states that scores the likelihood that a defendant will commit additional crimes. According to the researchers, the software has been found to “falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate of white defendants.”
Meliou and Brun are not the first to raise concerns about the perverse influence of software and the risk that it may perpetuate discrimination.
In May 2016, the White House released a short paper remarking that technology and data can be leveraged to promote equality but are also “full of risk.”
“Predictors of success can become barriers to entry; careful marketing can be rooted in stereotype. Without deliberate care, these innovations can easily hardwire discrimination, reinforce bias, and mask opportunity,” the paper stated.
The two professors tout their program as a tool that could be used by companies and government agencies that use software to make decisions.
“As with software quality, testing is likely to be the best way to evaluate software fairness properties,” their paper states.
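The article does not spell out how such testing works, so what follows is only a minimal sketch of one common approach the quote gestures at: feed the software pairs of inputs that differ only in a protected attribute and flag any input whose decision flips. Everything here is hypothetical, including the toy loan_decision function, the applicant fields, and the attribute values; it is an illustration of the general idea, not the researchers’ actual tool.

```python
# Hypothetical stand-in for the software under test: a toy loan rule
# whose approval threshold (wrongly) shifts with the applicant's race,
# so the test below should flag it. Invented purely for illustration.
def loan_decision(applicant: dict) -> bool:
    threshold = 2.0 if applicant["race"] == "A" else 2.5
    return applicant["income"] > threshold * applicant["debt"]


def causal_fairness_test(decide, inputs, protected_attr, values):
    """Flip only the protected attribute and collect every input whose
    decision changes -- evidence the decision depends on that attribute."""
    violations = []
    for original in inputs:
        outcomes = {decide(dict(original, **{protected_attr: v})) for v in values}
        if len(outcomes) > 1:  # the decision changed when only race changed
            violations.append(original)
    return violations


# A tiny synthetic test suite; a real tool would generate many inputs.
applicants = [
    {"income": 50000, "debt": 22000, "race": "A"},  # flagged: approved as A, denied as B
    {"income": 90000, "debt": 10000, "race": "A"},  # approved either way
]
flagged = causal_fairness_test(loan_decision, applicants, "race", ["A", "B"])
print(f"{len(flagged)} input(s) where the decision depends on race")
```

Run as written, the sketch flags the first applicant, whose approval hinges on the race field alone, which is exactly the kind of behavior this style of testing is meant to surface.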
Brun and Meliou did not respond to a College Fix request for comment, and a university media relations employee said it was not possible to speak with the two researchers.