End of public anonymity? The next challenge for facial recognition is identifying people whose faces are covered
Current methods are unreliable, but progress is being
made — and quickly
by James Vincent
Sep 6, 2017, 3:27pm EDT
Facial recognition is becoming more and more common, but
ask anyone how to avoid it and they’ll say: easy, just wear a mask. In the
future, though, that might not be enough. Researchers are developing facial recognition technology capable of identifying someone even when their face is covered up, which could make staying anonymous in public harder than ever before.
The topic was raised this week after a paper describing just such a system, published on the preprint server arXiv, was shared in a popular AI newsletter. Using deep learning and a dataset of pictures of people wearing various disguises, the researchers trained a neural network that could identify masked faces with some reliability. Sociologist Zeynep Tufekci shared the work on Twitter, noting that such technology could become a tool of oppression, with authoritarian states using it to identify anonymous protesters and stifle dissent.
The paper itself needs to be taken with a pinch of salt,
though. Its results fell far short of industry standards (when
someone was wearing a cap, sunglasses, and a scarf, for example, the system
could only identify them 55 percent of the time); it used a small dataset; and
experts in the field have criticized its methodology.
“It doesn’t strike me as a particularly convincing
paper,” Patrik Huber, a researcher at the University of Surrey who specializes
in face tracking and analysis, told The Verge. He pointed out that the system
doesn’t actually match disguised faces to mugshots or portraits, but instead uses something called “facial keypoints” (the distances between facial features like the eyes, nose, and lips) as a proxy for someone’s identity.
An image from the recent study, showing how the neural
networks estimate “facial keypoints” even when the face is covered.
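To make the keypoint idea concrete, here is a minimal sketch of how distances between detected landmarks could serve as an identity signature, matched against a gallery by nearest neighbor. This is not the paper’s actual pipeline: the landmark coordinates below are invented, and a real system would first have to estimate them with a neural network.

```python
# Hypothetical sketch: pairwise distances between facial landmarks
# (eyes, nose, lip corners, etc.) as a crude identity signature.
# All coordinates and names are invented for illustration.
import numpy as np

def keypoint_signature(landmarks: np.ndarray) -> np.ndarray:
    """Turn an (N, 2) array of landmark coordinates into a
    scale-normalized vector of all pairwise distances."""
    n = len(landmarks)
    dists = np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                      for i in range(n) for j in range(i + 1, n)])
    return dists / dists.max()  # normalize out image scale

def identify(probe_landmarks: np.ndarray, gallery: dict) -> str:
    """Nearest-neighbor match of a probe signature against a
    gallery of labeled signatures."""
    sig = keypoint_signature(probe_landmarks)
    return min(gallery, key=lambda name: np.linalg.norm(sig - gallery[name]))

# Invented gallery: two known people, four landmarks each.
gallery = {
    "alice": keypoint_signature(np.array([[10, 10], [30, 10], [20, 25], [20, 35]])),
    "bob":   keypoint_signature(np.array([[12, 11], [28, 12], [20, 28], [21, 40]])),
}
probe = np.array([[11, 10], [29, 10], [20, 26], [20, 36]])  # noisy "alice"
print(identify(probe, gallery))  # expected to print "alice"
```

Even this toy version hints at why the approach is fragile: any landmark hidden behind a scarf or sunglasses has to be guessed before the distances can be computed at all.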
But although the paper has its flaws, the challenge of
recognizing people when their faces are covered is one that plenty of teams are
working on — and making quick progress.
Facebook, for example, has trained neural networks that
can recognize people based on characteristics like hair, body shape, and
posture. Facial recognition systems that work on portions of the face have also
been developed (although, again, these aren’t ready for commercial use). And there are
other, more exotic methods to identify people. AI-powered gait analysis, for
example, can recognize individuals with a high degree of accuracy, and even
works with low-resolution footage — the sort you might get from a CCTV camera.
One system for identifying masked individuals developed
at the University of Basel in Switzerland recreates a 3D model of the target’s
face based on what it can see. Bernhard Egger, one of the scientists behind the
work, told The Verge that he expected “lots of development” in this area in the
near future, but thought that there would always be ways to fool the machine.
“Maybe machines will outperform humans on very specific tasks with partial
occlusions,” said Egger. “But, I believe, it will still be possible to not be
recognized if you want to avoid this.”
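The Basel system is far more sophisticated than anything that fits in a few lines, but the core idea of model fitting under occlusion can be sketched: fit a low-dimensional statistical shape model to the landmarks that remain visible, and let the model fill in the rest. In the toy version below, the mean shape and basis are random stand-ins, not the actual Basel face model.

```python
# Illustrative sketch of fitting a low-dimensional "morphable" shape
# model to visible landmarks only, then reconstructing hidden ones.
# Mean shape and basis are random stand-ins, not a real face model.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_components = 60, 5                      # landmarks, basis size
mean_shape = rng.normal(size=n_points)              # "average face" stand-in
basis = rng.normal(size=(n_points, n_components))   # shape basis stand-in

def fit_visible(observed, visible_idx, reg=1.0):
    """Regularized least-squares fit of shape coefficients using only
    the visible landmarks; returns the full reconstructed shape."""
    B = basis[visible_idx]
    y = observed - mean_shape[visible_idx]
    coeffs = np.linalg.solve(B.T @ B + reg * np.eye(n_components), B.T @ y)
    return mean_shape + basis @ coeffs              # occluded part filled in

# Hypothetical scenario: a scarf hides the last third of the landmarks.
true_face = mean_shape + basis @ rng.normal(size=n_components)
visible = np.arange(40)                             # first 40 points visible
reconstruction = fit_visible(true_face[visible], visible)
print("mean error on hidden part:",
      np.abs(reconstruction[40:] - true_face[40:]).mean())
```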
Wearing a rigid mask that covers the whole face, for
example, would give current facial recognition systems nothing to go on. And
other researchers have developed patterned glasses that are specially designed
to trick and confuse AI facial recognition systems. Getting clear pictures is
also difficult. Egger points out that we’re used to facial recognition
performing quickly and accurately, but that’s in situations where the subject
is compliant — scanning their face with a phone, for example, or at a border
checkpoint.
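The patterned glasses exploit how neural networks respond to small, carefully chosen perturbations: only the pixels the frames cover are changed, but they are changed in whatever direction most confuses the recognizer. The sketch below shows that logic against a toy linear classifier standing in for a real recognition network; the mask region, step size, and model are all invented.

```python
# Toy sketch of an adversarial "glasses" perturbation: change only the
# pixels a pair of glasses would cover, in the direction that lowers a
# classifier's match score (FGSM-style sign step). The linear model is
# a stand-in for a real face recognition network.
import numpy as np

rng = np.random.default_rng(1)
pixels = 32 * 32
w = rng.normal(size=pixels)           # stand-in classifier weights
face = rng.uniform(size=pixels)       # stand-in face image, flattened

def match_score(x):
    """Stand-in confidence that x matches the target identity."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

# Mask: a horizontal "glasses" band across the eye region (rows 10-13).
mask = np.zeros(pixels, dtype=bool)
mask[32 * 10 : 32 * 14] = True

# For this linear model, the gradient of the logit w.r.t. the input is
# just w, so stepping against sign(w) inside the mask lowers the score.
adversarial = face.copy()
adversarial[mask] -= 0.1 * np.sign(w[mask])
adversarial = np.clip(adversarial, 0.0, 1.0)

print("score before:", match_score(face))
print("score after: ", match_score(adversarial))  # lower than before
```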
Privacy advocates, though, say even if these systems have
flaws, they’re still likely to be embraced by law enforcement. Last month, for
example, police in London used real-time facial recognition to scan people
attending the annual Notting Hill Carnival. Before the event, they assembled a “bespoke dataset” of images of more than 500 people who were either banned from attending or wanted for arrest, and then set up cameras at one of the Carnival’s main thoroughfares. According to a report from human rights group Liberty, only one attendee was successfully identified using this system (and even then, his arrest warrant was out of date), while it generated 35 false positives. The police
still deemed it a success.
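Taken at face value, those figures imply a remarkably low hit rate. A quick back-of-the-envelope calculation of precision, meaning the share of people flagged by the system who were genuine matches, using only the numbers in Liberty’s report:

```python
# Precision implied by the reported Notting Hill Carnival figures:
# 1 correct identification against 35 false positives.
true_positives = 1
false_positives = 35
precision = true_positives / (true_positives + false_positives)
print(f"precision: {precision:.1%}")  # about 2.8% of flags were correct
```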
If you combine this attitude with the increasing adoption
of police body cameras, the growth of facial recognition databases, and new AI
techniques for analyzing data, it seems clear that public anonymity is being
undermined. In the current political climate, where protests are becoming more common and more violent, this is potentially very dangerous. As Tufekci noted on Twitter, this new technology is often developed without considering the uses it might be put to.
Amarjot Singh, the lead researcher behind the recent
paper published on arXiv, said he thought the systems themselves were neutral,
and whether they would have a harmful effect on society depended on how they
were deployed. “There are more benefits to this technology than harm,” he told The
Verge. “Everything can be used in a good way and a negative way. Even a car.”
He added that he and his colleagues were working to get funding to improve
their system, and that they might eventually commercialize it. “To expand the
dataset we might try and make a product out of it,” said Singh. “We’re not very
sure of that yet, but we will definitely be expanding the dataset.”