Career of the Future: Robot Psychologist
Engineers are using cognitive psychology to figure out
how AIs think and make them more accountable
By Christopher Mims July 9, 2017 9:00 a.m. ET
Artificial-intelligence engineers have a problem: They
often don’t know what their creations are thinking.
As artificial intelligence grows in complexity and
prevalence, it also grows more powerful. AI already has factored into decisions
about who goes to jail and who receives a loan. There are suggestions AI should
determine who gets the best chance to live when a self-driving car faces an
unavoidable crash.
Defining AI is slippery and growing more so, as startups
slather the buzzword over whatever they are doing. It is generally accepted as
any attempt to ape human intelligence and abilities.
One subset that has taken off is neural networks, systems
that “learn” as humans do through training, turning experience into networks of
simulated neurons. The result isn’t code, but an unreadable, tangled mass of
millions—in some cases billions—of artificial neurons, which explains why those
who create modern AIs can be befuddled as to how they solve tasks.
Most researchers agree the challenge of understanding AI
is pressing. If we don’t know how an artificial mind works, how can we
ascertain its biases or predict its mistakes?
We won’t know in advance if an AI is racist, or what
unexpected thought patterns it might have that would make it crash an
autonomous vehicle. We might not know about an AI’s biases until long after it
has made countless decisions. It’s important to know when an AI will fail or
behave unexpectedly—when it might tell us, “I’m sorry, Dave. I’m afraid I can’t do that.”
“A big problem is people treat AI or machine learning as
being very neutral,” said Tracy Chou, a software engineer who worked with
machine learning at Pinterest Inc. “And a lot of that is people not
understanding that it’s humans who design these models and humans who choose
the data they are trained on.”
An example can be found on Google Translate. Ask it to
translate “doctor” to Portuguese, and it always returns the male form of the
noun, médico, over the female, médica. Type in “nurse” and you get enfermeira
(female)—never enfermeiro (male).
Conspiracy? No, it is a natural consequence of biases
inherent in the bodies of literature used to train translation systems.
Something similar happens in data when researchers eliminate the category of
race: Other data, such as where a person lives, correlate so strongly with race
that they become unintentional proxies for it.
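To see how strong a proxy can be, here is a minimal sketch in Python with made-up numbers: a neighborhood column that agrees with a deleted race column 90% of the time is, on its own, enough to recover most of what was removed.

    # Toy illustration (hypothetical data): deleting a sensitive column does not
    # remove its signal when another feature is a strong proxy for it.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Invented sensitive attribute and a proxy (say, neighborhood) that
    # tracks it 90% of the time.
    race = rng.integers(0, 2, size=n)
    neighborhood = np.where(rng.random(n) < 0.9, race, 1 - race)

    # Even with the sensitive column gone, the proxy alone recovers it.
    accuracy = (neighborhood == race).mean()
    print(f"sensitive attribute recoverable from proxy: {accuracy:.0%}")  # about 90%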
Unlike with humans, we can’t just ask a robot why it does
what it does. Artificial intelligences can excel at narrow tasks, but even
those that talk have introspective powers about on par with a cockroach.
It is a difficult enough problem to crack that the
Defense Advanced Research Projects Agency, better known as Darpa, is funding
researchers working on “explainable artificial intelligence.”
Here’s why we’re in this pickle: A good way to solve
problems in computer science is for engineers to code a neural
network—essentially a primitive brain—and train it by feeding it enormous piles
of data. Once the AI has had enough time to chew through a bunch of images
labeled “cat,” for example, it can reliably pick out pictures of a cat.
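In code, that workflow might look roughly like the Python sketch below, which uses the PyTorch library; random numbers stand in for labeled cat photos, and the sizes are invented for illustration.

    # Minimal sketch of the train-on-labeled-examples loop described above.
    # Random tensors stand in for photos; the "cat"/"not cat" labels are invented.
    import torch
    from torch import nn

    images = torch.rand(256, 3 * 32 * 32)        # stand-ins for 32x32 color photos
    labels = torch.randint(0, 2, (256,))         # 1 = "cat", 0 = "not cat"

    model = nn.Sequential(                       # a very small neural network
        nn.Linear(3 * 32 * 32, 64),
        nn.ReLU(),
        nn.Linear(64, 2),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(10):                      # chew through the labeled pile
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()                          # training nudges the weights
        optimizer.step()

    print("training loss:", loss.item())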
The tricky bit is that neural networks learn by altering
their own innards. This is basically how your brain works, too. And like the
connections between the 86 billion or so neurons in your brain, the precise way
an AI “thinks” is incomprehensible.
Engineers call this the “interpretability” problem (as
in, the lack of it) and refer to neural networks as “black boxes”—things we can
stimulate and observe but whose insides we can’t understand.
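A quick way to see why, again as a toy sketch in Python: even a miniature network is a pile of hundreds of thousands of unlabeled numbers, and those numbers are all there is to look at.

    # The "innards" the network alters during training are just grids of floats.
    # (Toy model; the layer sizes are invented.)
    import torch
    from torch import nn

    model = nn.Sequential(nn.Linear(3 * 32 * 32, 64), nn.ReLU(), nn.Linear(64, 2))

    n_params = sum(p.numel() for p in model.parameters())
    print("parameters:", n_params)               # roughly 197,000 even for this toy

    # A corner of one weight matrix: numbers with no labels, rules or reasons attached.
    print(model[0].weight[:2, :5])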
Researchers at DeepMind Technologies Ltd., a subsidiary
of Alphabet Inc., announced a novel way to get inside the minds of machines:
treat them like human children.
To say engineers are using the techniques of cognitive
psychology on AI isn’t an analogy. The team at DeepMind used exactly the same
tests and materials psychologists use on children to tease out how their AI
thinks, says David Barrett, a DeepMind research scientist who worked on the
project.
Decades of research on unpacking the human brain through
cognitive science may now be applied to machines, potentially unlocking a whole
new avenue for understanding AI and making it accountable, he said.
A result of DeepMind’s research: We now know at least one
of its AIs—a “one-shot learning model” designed to learn words after being
exposed to them only once—is, surprisingly, solving problems the same way
humans do. Like humans, it is identifying objects by shape, even though it
wasn’t taught to, and even though there are other ways to identify random
objects, such as color, texture or movement. Previously, how it learned was
opaque.
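A toy Python sketch of that kind of test appears below: show the model a novel object, then ask whether it groups the object with a candidate that shares its shape or one that shares its color. The stimuli and the embedding function are hypothetical stand-ins, not DeepMind's actual materials or model.

    # Cognitive-psychology-style probe, in miniature: does the model match by
    # shape or by color? The "embedding" is a hypothetical stand-in for a real
    # network's learned representation.
    import numpy as np

    def embed(shape_id: int, color_id: int) -> np.ndarray:
        # Invented representation that, like the model in the study,
        # weights shape far more heavily than color.
        return np.array([3.0 * shape_id, 0.5 * color_id])

    probe       = embed(shape_id=1, color_id=2)   # novel object
    shape_match = embed(shape_id=1, color_id=7)   # same shape, different color
    color_match = embed(shape_id=5, color_id=2)   # same color, different shape

    def dist(a, b):
        return float(np.linalg.norm(a - b))

    closer_to_shape = dist(probe, shape_match) < dist(probe, color_match)
    print("model groups the probe by:", "shape" if closer_to_shape else "color")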
Understanding is just the beginning of how we interact
with artificial intelligences. The other half of robot psychology is what might
be described as therapy—that is, changing an AI’s mind.
Because engineers typically create many versions of an AI
when trying to discover the best one, the use of cognitive psychology could
give engineers more power to choose the ones that “think” the way we want them
to, Mr. Barrett said. Alternatively, we might find it’s better when AIs don’t
think like us: We might learn something new about how to solve problems.
The upshot is that when we replace human decision-makers
with artificial intelligences, AIs have the potential to be better, with fewer
mistakes and more accountability, because their output is measurable and we
might be able to trace exactly how they make decisions.
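What tracing a decision could look like, for a model simple enough to inspect: the Python sketch below uses a scikit-learn decision tree as a stand-in, with invented loan-style features, and prints the explicit rules behind every prediction it makes.

    # Sketch: for an inspectable model, each decision traces back to explicit rules.
    # The "applicant" features and the data are invented.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    X = rng.random((200, 2))                      # columns: [income, debt_ratio]
    y = (X[:, 0] > 0.5) & (X[:, 1] < 0.4)         # toy "approve the loan" rule

    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(export_text(tree, feature_names=["income", "debt_ratio"]))

    applicant = np.array([[0.7, 0.2]])
    print("approve:", tree.predict(applicant)[0])  # the printed rules show exactly why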
We ask humans to do this all the time—in a court of law,
when dissecting a business decision—but humans are notoriously unreliable
narrators. With machines, at last, we could have decision-makers whose every
bias and fleeting impulse can be inspected and potentially altered.
Appeared in the July 10, 2017, print edition as 'Career
of the Future: Robot Psychologist Scientists Go Inside Minds of Machines.'