Crime-predicting A.I. isn’t science fiction. It’s about to roll out in India
By John R. Quain 4.11.18 - 3:00 AM
Artificial intelligence programs promise to do
everything, from predicting the weather to piloting autonomous cars. Now AI is
being applied to video surveillance systems, promising to thwart criminal
activity not by detecting crimes in progress, but by identifying a crime before
it happens. The goal is to prevent violence such as sexual assaults, but could
such admirable intentions turn into Minority Report-style pre-crime nightmares?
Such a possibility may seem like a plot line from an
episode of Black Mirror, but it’s no longer the stuff of science fiction.
Cortica, an Israeli company with deep roots in security and AI research,
recently formed a partnership in India with Best Group to analyze the terabytes
of data streaming from CCTV cameras in public areas. One of the goals is to
improve safety in public places, such as city streets, bus stops, and train
stations.
It’s already common for law enforcement in cities like
London and New York to employ facial recognition and license plate matching as
part of their video camera surveillance. But Cortica’s AI promises to take it
much further by looking for “behavioral anomalies” that signal someone is about
to commit a violent crime.
The software is based on the type of military and
government security screening systems that try to identify terrorists by
monitoring people in real-time, looking for so-called micro-expressions —
minuscule twitches or mannerisms that can betray a person’s nefarious
intentions. Such telltale signs are so small they can elude an experienced
detective but not the unblinking eye of AI.
At a meeting in Tel Aviv before the deal was announced,
co-founder and COO Karina Odinaev explained that Cortica’s software is intended
to address the challenge of identifying objects that aren’t easily classified
according to conventional categories. One example Odinaev described involved
the corner cases encountered in driving situations, such as a bed falling off a
truck on the highway: precisely the sort of unique events that programs
controlling autonomous cars will have to be able to handle in the future.
“For that, you need unsupervised learning,” Odinaev said.
In other words, the software has to learn in the same way that humans learn.
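To make that concrete, here is a minimal sketch of unsupervised anomaly detection, the general family of techniques the article describes. It is not Cortica’s algorithm; the feature representation and every name in it are hypothetical. The idea is that a system fits a statistical model to unlabeled examples of routine behavior, then flags observations that deviate sharply from that norm.

```python
# A minimal sketch of unsupervised anomaly detection. This is NOT Cortica's
# algorithm; it only illustrates how "behavioral anomalies" can be flagged
# without labeled examples, by modeling normal data and scoring deviations.
# The feature vectors stand in for (hypothetical) motion or pose features
# extracted from video.

import numpy as np

class AnomalyDetector:
    """Fits a Gaussian to unlabeled 'normal' feature vectors and scores new
    observations by Mahalanobis distance: large distances suggest anomalies."""

    def fit(self, features: np.ndarray) -> "AnomalyDetector":
        self.mean = features.mean(axis=0)
        cov = np.cov(features, rowvar=False)
        # Regularize so the covariance stays invertible with few samples.
        self.cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
        return self

    def score(self, x: np.ndarray) -> float:
        diff = x - self.mean
        return float(np.sqrt(diff @ self.cov_inv @ diff))

# Usage: learn "normal" from unlabeled data, then flag outliers.
rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))  # routine behavior
detector = AnomalyDetector().fit(normal)

routine = rng.normal(0.0, 1.0, size=4)
unusual = np.array([6.0, -5.0, 7.0, 5.0])                # far from the norm
print(detector.score(routine))  # small distance: consistent with training data
print(detector.score(unusual))  # large distance: flagged as an anomaly
```

A production system would extract far richer features from video, but the principle is the same: no labels are required, only enough footage of ordinary activity to define what "normal" looks like.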
Going directly to the brain
To create such a program, Cortica did not go the neural
network route (which despite its name is based on probabilities and computing
models rather than how actual brains work). Instead, Cortica went to the
source, in this case a cortical segment of a rat’s brain. By keeping a piece of
brain alive ex vivo (outside the body) and connecting it to a microelectrode
array, Cortica was able to study how the cortex reacted to particular stimuli.
By monitoring the electrical signals, the researchers were able to identify
groups of neurons, called cliques, that process specific concepts.
From there, the company built signature files and mathematical models to
simulate the original processes in the brain.
The result, according to Cortica, is an approach to AI
that allows for advanced learning while remaining transparent. In other words,
if the system makes a mistake — say, it falsely anticipates that a riot is
about to break out or that a car ahead is about to pull out of a driveway —
programmers can easily trace the problem back to the process or signature file
responsible for the erroneous judgment. (Contrast this with so-called deep
learning neural networks, which are essentially black boxes and may have to be
completely re-trained if they make a mistake.)
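To illustrate the kind of traceability described above, here is a minimal sketch of a classifier in which every decision records the signature responsible for it. The signature format and the matching rule are hypothetical stand-ins rather than Cortica’s actual representation; the point is only that a wrong prediction points back to one named, editable artifact instead of an opaque mass of weights.

```python
# A minimal sketch of traceable, signature-based classification, as opposed
# to a black-box network. The signature "files" and matching rule here are
# hypothetical illustrations, not Cortica's actual representation: each
# concept is a named prototype vector, and every decision records which
# signature produced it.

import numpy as np

# Hypothetical signature files: named prototype vectors for concepts.
SIGNATURES = {
    "calm_crowd.sig":      np.array([0.9, 0.1, 0.0]),
    "forming_riot.sig":    np.array([0.1, 0.8, 0.7]),
    "car_pulling_out.sig": np.array([0.2, 0.3, 0.9]),
}

def classify(observation: np.ndarray) -> tuple[str, str, float]:
    """Return (label, responsible signature file, similarity score).
    Matching is nearest-prototype by cosine similarity."""
    best_name, best_score = None, -1.0
    for name, sig in SIGNATURES.items():
        score = float(observation @ sig /
                      (np.linalg.norm(observation) * np.linalg.norm(sig)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name.removesuffix(".sig"), best_name, best_score

label, source_file, score = classify(np.array([0.15, 0.75, 0.65]))
print(f"{label} (score {score:.2f}), traceable to {source_file}")
# If this prediction is wrong, an engineer knows exactly which signature
# file to inspect or retune; a deep network offers no such pointer and may
# need full retraining instead.
```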
Initially, Cortica’s Autonomous AI will be used by Best
Group in India to analyze the massive amounts of data generated by cameras in
public places to improve safety and efficiency. Best Group is a diversified
company involved in infrastructure development and a major supplier to
government and construction clients. So it wants to learn how to tell when
things are running smoothly — and when they’re not.
But it is hoped that Cortica’s software will do considerably more. It could be
used in robotaxis, for example, to monitor passenger behavior and prevent
sexual assaults. Cortica’s software can also combine data not just from video
cameras, but also from drones and satellites. And it can learn to judge
behavioral differences, not just between law-abiding citizens and would-be
criminals, but also between a peaceful crowded market and a political
demonstration that’s about to turn violent.
Such predictive information would allow a city to deploy
law enforcement to a potentially dangerous situation before lives are lost.
However, in the wrong hands, it could also be abused. A despotic regime, for
example, might use such information to suppress dissent and arrest people
before they even had a chance to organize a protest.
In New York City, during a demonstration of how Cortica’s
Autonomous AI is being applied to autonomous cars, Cortica’s vice president,
Patrick Flynn, explained that the company is focused on making the software
efficient and reliable to deliver the most accurate classification data
possible. What clients do with that information — stop a car or make it speed
up to avoid an accident, for example — is up to them. The same would apply to
how a city or government might allocate police resources.
“The policy decisions are strictly outside of Cortica’s
area,” Flynn said.
Would we give up privacy for improved security?
Nevertheless, the marriage of AI to ubiquitous networks of webcams is starting
to generate more anxiety about privacy and
personal liberty. And it’s not just foreign despotic governments that people
are worried about.
In New Orleans, Mayor Mitch Landrieu has proposed a $40
million crime-fighting surveillance plan, which includes networking together
municipal cameras with the live feeds from private webcams operated by
businesses and individuals. The proposal has already drawn public protests from
immigrant workers concerned that federal immigration officials will use the
cameras to hunt down undocumented workers and deport them.
Meanwhile, like subjects trapped in a Black Mirror world,
consumers may already be unwittingly submitting themselves to such AI-powered
surveillance. Google’s $249 Clips camera, for example, uses a rudimentary form
of AI to automatically take pictures when it sees something it deems
significant. Amazon, whose Alexa is already the subject of eavesdropping
paranoia, has purchased popular video doorbell company Ring. GE Appliances is
also planning to debut a video-camera-equipped hub for kitchens later this
year. In Europe, Electrolux will debut a steam oven this year with a built-in
webcam.
While these technologies raise the specter of Big Brother
monitoring our every move, there’s still the laudable hope that using
sophisticated AI like Cortica’s program could improve safety and efficiency,
and save lives. One can’t help wondering, for example, what would have happened
if such technology had been available and used in the Uber that 19-year-old
Nikolas Cruz took on his way to murder 17 people at Marjory Stoneman Douglas
High School. The Uber driver didn’t notice anything amiss with Cruz, but could
an AI-equipped camera have detected micro-expressions revealing his intentions
and alerted the police? In the future, we may find out.
https://www.digitaltrends.com/cool-tech/could-ai-based-surveillance-predict-crime-before-it-happens/