Algorithms With Minds of Their Own

How do we ensure that artificial intelligence is accountable?

By Curt Levey and Ryan Hagemann Nov. 12, 2017 4:11 p.m. ET

Everyone wants to know: Will artificial intelligence doom mankind—or save the world? But this is the wrong question. In the near future, the biggest challenge to human control and acceptance of artificial intelligence will be the technology’s complexity and opacity, not its potential to turn against us like HAL in “2001: A Space Odyssey.” This “black box” problem arises from the trait that makes artificial intelligence so powerful: its ability to learn and improve from experience without explicit instructions.

Machines learn through artificial neural networks modeled loosely on the human brain. As these networks are presented with numerous examples of their desired behavior, they learn by modifying the connection strengths, or “weights,” between the artificial neurons in the network. Imagine trying to figure out why a person made a particular decision by examining the connections in his brain. Examining the weights of a neural network is only slightly more illuminating.
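
To make that concrete, consider the minimal sketch below, which describes no deployed system: a tiny network is trained on invented approve-or-deny examples by nudging its weights, and then every weight is printed. All the numbers are laid bare, yet none of them reads as a reason for any individual decision. The data, features, and network size are assumptions chosen purely for illustration.

```python
# A minimal sketch (not any production system): a tiny neural network trained
# on toy approve/deny data, to show that the learned weights, though fully
# visible, say little about why any single decision was made.
import numpy as np

rng = np.random.default_rng(0)

# Invented examples of "desired behavior": 3 input features -> approve (1) / deny (0).
X = rng.random((200, 3))
y = ((0.6 * X[:, 0] - 0.9 * X[:, 1] + 0.3 * X[:, 2]) > 0).astype(float)

# One hidden layer of 4 neurons; weights start random and are adjusted by
# gradient descent, the "modification of connection strengths" described above.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    h = sigmoid(X @ W1 + b1)          # hidden-layer activations
    p = sigmoid(h @ W2 + b2).ravel()  # predicted probability of approval
    # Backpropagate the error and nudge every connection strength a little.
    d_out = (p - y).reshape(-1, 1)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.1 * h.T @ d_out / len(X)
    b2 -= 0.1 * d_out.mean(0)
    W1 -= 0.1 * X.T @ d_hid / len(X)
    b1 -= 0.1 * d_hid.mean(0)

# Full "transparency": every connection strength is printed, yet nothing here
# reads as a reason for approving or denying any particular applicant.
print(W1, W2, sep="\n")
```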

Concerns about why a machine-learning system reaches a particular decision are greatest when the stakes are highest. For example, risk-assessment models relying on artificial intelligence are being used in criminal sentencing and bail determinations in Wisconsin and other states. Former Attorney General Eric Holder and others worry that such models disproportionately hurt racial minorities. Many of these critics believe the solution is mandated transparency, up to and including public disclosure of these systems’ weights or computer code.

But such disclosure will not tell you much, because the machine’s “thought process” is not explicitly described in the weights, computer code or anywhere else. Instead, it is subtly encoded in the interplay between the weights and the neural network’s architecture. Transparency sounds nice, but it’s not necessarily helpful and may be harmful.

Requiring disclosure of the inner workings of artificial-intelligence models could allow people to rig the system. It could also reveal trade secrets and otherwise harm the competitive advantage of a system’s developers. The situation becomes even more complicated when sensitive or confidential data is involved.

A better solution is to make artificial intelligence accountable. The concepts of accountability and transparency are sometimes conflated, but the former does not involve disclosure of a system’s inner workings. Instead, accountability should include explainability, confidence measures, procedural regularity, and responsibility.

Explainability ensures that nontechnical reasons can be given for why an artificial-intelligence model reached a particular decision. Confidence measures communicate the certainty that a given decision is accurate. Procedural regularity means the artificial-intelligence system’s decision-making process is applied in the same manner every time. And responsibility ensures individuals have easily accessible avenues for disputing decisions that adversely affect them.
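
To illustrate how these four elements might fit together, here is a hypothetical sketch of an accountable decision interface. The scoring rule, feature names, and thresholds are invented for the example and do not describe any real credit system.

```python
# A hypothetical sketch of the accountability interface described above: a
# decision is returned together with plain-language reasons, a confidence
# measure, and a fixed, repeatable procedure. All names and numbers here are
# assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class AccountableDecision:
    approved: bool          # the decision itself
    confidence: float       # how certain the system is (0 to 1)
    reasons: list[str]      # nontechnical factors behind the decision

def decide(applicant: dict) -> AccountableDecision:
    # Procedural regularity: the same scoring rule is applied to every applicant.
    score = (0.5 * applicant["payment_history"]
             - 0.3 * applicant["debt_ratio"]
             + 0.2 * applicant["income_stability"])
    approved = score > 0.25

    # Explainability: translate the largest contributing factors into
    # reasons a nontechnical person can understand and act on.
    contributions = {
        "strong payment history": 0.5 * applicant["payment_history"],
        "high existing debt": -0.3 * applicant["debt_ratio"],
        "stable income": 0.2 * applicant["income_stability"],
    }
    reasons = [name for name, c in
               sorted(contributions.items(), key=lambda kv: -abs(kv[1]))[:2]]

    # Confidence: the farther the score sits from the decision threshold,
    # the more certain the call.
    confidence = min(1.0, abs(score - 0.25) / 0.25)
    return AccountableDecision(approved, round(confidence, 2), reasons)

# Responsibility: a disputed decision can be re-run and its reasons reviewed.
print(decide({"payment_history": 0.9, "debt_ratio": 0.4, "income_stability": 0.7}))
```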

Requiring accountability would reassure those affected by decisions derived from artificial intelligence while avoiding the potential harms associated with transparency. It would also decrease the need for complicated regulations spelling out precisely what details must be disclosed.

There already are real-world examples of successfully implemented accountability measures. One of us, Curt Levey, had experience with this two decades ago as a scientist at HNC Software. Recognizing the need for better means to assess reliability, he developed a patented technology providing reasons and confidence measures for the decisions made by neural networks. The technology was used to explain decisions made by the company’s neural-network-based product for evaluating credit applications. It worked so well that FICO bought the company.
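
The patented method itself is not detailed here, but one generic way to produce reasons and confidence measures for a black-box model is to probe how sensitive its score is to each input. The sketch below illustrates that idea with a stand-in scoring function; it is an assumption-laden example, not HNC’s or FICO’s actual technique.

```python
# A generic sensitivity-analysis sketch: perturb each input slightly and report
# which inputs most move the score. The scoring function and feature names are
# stand-ins, not any real credit or fraud model.
def explain(model, applicant: dict, eps: float = 0.05):
    base = model(applicant)
    impacts = {}
    for name, value in applicant.items():
        nudged = dict(applicant, **{name: value + eps})
        impacts[name] = model(nudged) - base  # sensitivity of the score to this input
    # Reasons: the inputs the score responds to most strongly, in plain terms.
    reasons = sorted(impacts, key=lambda k: -abs(impacts[k]))[:2]
    # Confidence: how far the score sits from the 0.5 decision boundary.
    confidence = abs(base - 0.5) * 2
    return base, round(confidence, 2), reasons

# Example with a toy scoring function standing in for a trained neural network.
score = lambda a: 0.7 * a["payment_history"] - 0.2 * a["utilization"] + 0.1 * a["tenure"]
print(explain(score, {"payment_history": 0.8, "utilization": 0.6, "tenure": 0.3}))
```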

This patented technology also provides accountability in FICO’s Falcon Platform, a neural-network system that detects payment-card fraud. Financial institutions and their customers need to understand why an incident of fraud is suspected, and the technology met that challenge, opening the door for Falcon’s widespread adoption by the financial industry. FICO estimates that today Falcon protects approximately 65% of all credit card transactions world-wide.

Falcon’s ability to detect suspicious patterns of behavior has also found use in counterterrorism efforts. Following the Sept. 11 attacks, the same neural network technology was used by airlines to identify high-risk passengers. That’s a far cry from Elon Musk’s assertion that artificial intelligence will cause World War III.

Until recently the success of systems like Falcon went underreported. Artificial-intelligence pioneer John McCarthy noted decades ago, “As soon as it works, no one calls it AI anymore.” Further advances in artificial intelligence promise many more benefits for mankind, but only if society avoids strangling this burgeoning technology with burdensome and unnecessary transparency regulations.

Mr. Levey is president of the Committee for Justice. Mr. Hagemann is director of technology policy at the Niskanen Center.

