The US Military Is Creating the Future of Employee Monitoring
U.S. Air Force cyber security technicians with the 355th Communications Squadron review work orders at Davis-Monthan Air Force Base, Ariz., Sept. 26, 2018.
BY PATRICK TUCKER MARCH 26, 2019
A new AI-enabled pilot project aims to sense “micro
changes” in the behavior of people with top-secret clearances. If it works, it
could be the future of corporate HR.
The U.S. military has the hardest job in human resources:
evaluating hundreds of thousands of people for their ability to protect the
nation’s secrets. Central to that task is a question at the heart of all labor
relations: how do you know when to extend trust or take it away?
The Defense Security Service, or DSS,
believes artificial intelligence and machine learning can help. Its new pilot
project aims to sift and apply massive amounts of data on people who hold or
are seeking security clearances. The goal is not just to detect employees who
have betrayed their trust, but to predict which ones might — allowing problems
to be resolved with calm conversation rather than punishment.
If the pilot proves successful, it could provide a model
for the future of corporate HR. But the concept also affords employers an
unprecedented window into the digital lives of their workers, broaching new
questions about the relationship between employers, employees, and information
in the age of big data and AI.
The pilot is based on an urgent need. Last June, the
Defense Department took over the task of working through the security clearance
backlog — more than 600,000 people. Some people — and the organizations that
want to hire them — wait more than a year, according to a September report from
the National Background Investigations Bureau. Those delays stem from an
antiquated system that involves mailing questionnaires to former places of
employment, sometimes including summer jobs held during an applicant’s
adolescence, waiting (and hoping) for a response, and scanning the returned
paper document into a mainframe database of the sort that existed before cloud
computing.
In addition to being old-fashioned, that process sheds
light on an individual only to the degree that past serves as prologue. As an
indicator of future behavior, it’s deeply wanting, say officials.
This effort to create a new way to gauge potential
employees’ risk is being led by Mark Nehmer, the technical director of research
and development and technology transfer at DSS’ National Background
Investigative Services.
This spring, DSS is launching what it describes as a
“risk-based user activity pilot.” It involves collecting an individual’s
digital footprint, or “cyber activity,” essentially what they are doing online,
and then matching that with other data that the Defense Department has on the
person. Since “online” has come to encompass all of life, the effect, they hope, will be a full snapshot of the person.
“We anticipate early results in the fall,” a DSS official
said in an email on Tuesday.
The Department of Defense already does some digital user
activity monitoring. But the pilot seeks a lot more information than is
currently the norm.
“In the Department of Defense, user activity monitoring
is typically constructed around an endpoint. So think of your laptop. It’s just
monitoring activity on your laptop. It’s not looking at any other cyber data
that’s available” — perhaps 20 percent of the available digital information on
a person, Nehmer said at a November briefing put on by C3, a California-based technology company serving as a partner on the pilot.
The pilot seeks a much fuller spectrum of digital
information and then combines it with other data within the Defense Department
and beyond, using machine learning algorithms to derive insights. “Once
constructed fully, it will look at the bulk of the cyber data you generate,”
said Nehmer. “It’s IP-based with a date time stamp on it. There’s no name
associated with it; you actually have to go to a different set of logs to marry
those two things up.” (The data comes entirely from Defense Department-owned equipment, not personal systems.)
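To make the idea concrete, here is a minimal sketch of the kind of log “marriage” Nehmer describes: anonymous IP-and-timestamp events attributed to a user through a separate assignment log. Every table, field, and value below is a hypothetical illustration, not the actual DoD system.

```python
from datetime import datetime

# Each activity event carries only an IP address and a timestamp; no name.
activity_events = [
    {"ip": "10.1.4.22", "ts": datetime(2019, 3, 1, 9, 15), "action": "file_copy"},
    {"ip": "10.1.4.22", "ts": datetime(2019, 3, 1, 23, 40), "action": "bulk_download"},
]

# A separate log records which user held which IP address during which window.
ip_assignments = [
    {"ip": "10.1.4.22", "user": "jdoe",
     "start": datetime(2019, 3, 1, 8, 0), "end": datetime(2019, 3, 2, 8, 0)},
]

def attribute(event, assignments):
    """Marry an IP-and-timestamp event to a user via the assignment log."""
    for a in assignments:
        if a["ip"] == event["ip"] and a["start"] <= event["ts"] < a["end"]:
            return a["user"]
    return None  # unattributable without a matching assignment record

for ev in activity_events:
    print(ev["action"], "->", attribute(ev, ip_assignments))
```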
The pilot will also bring in material that a person puts on their Standard Form 86, the questionnaire that people seeking a security clearance fill out, along with other data not related to cyber activity. “Let’s not
just look at what you’re doing on your individual computer because there’s all
this other information that we have available to us that we’re not looking at
just because it’s hard,” he says, meaning that the data is not in a format
that’s easily digestible by machine learning programs. “There’s this other
human behavior data that’s not indicated in the cyber world.”
That data will join data from what’s called continuous
evaluation, a current effort to monitor life events related to clearance
holders, such as getting married or divorced, entering into a lot of debt or
getting a sudden windfall, tax returns, arrests, sudden foreign travel, etc.
Over a million military personnel are currently enrolled in the existing
continuous evaluation system.
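A first pass over those life events could, in principle, be a simple set of business rules. The sketch below is purely illustrative; the event types and the dollar threshold are invented assumptions, not the actual continuous evaluation criteria.

```python
# Illustrative only: flag continuous-evaluation life events of the kind the
# article lists. Event names and the threshold are invented assumptions.
FLAGGED_EVENT_TYPES = {"arrest", "sudden_foreign_travel"}
DEBT_THRESHOLD = 50_000  # hypothetical dollar figure

def review_flags(events):
    """Return the subset of life events a human reviewer should look at."""
    flags = []
    for e in events:
        if e["type"] in FLAGGED_EVENT_TYPES:
            flags.append(e)
        elif e["type"] == "new_debt" and e["amount"] > DEBT_THRESHOLD:
            flags.append(e)
    return flags

print(review_flags([
    {"type": "marriage"},
    {"type": "new_debt", "amount": 80_000},
    {"type": "sudden_foreign_travel"},
]))  # -> the debt and travel events, not the marriage
```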
The eventual goal, said Nehmer, is a system that can
sense not just impending insider crime but also far more intimate states of
being: pre-crime, if you will. “We can begin to add whether or not the activity
that the individual is producing is increasing, decreasing, or staying within a
fairly normative range. If it’s staying within a normative range then as long
as there are no business rules that are broken, likely we don’t have a problem.
If their activity dramatically increases there is likely stress. But there are
a lot of ways to measure activity. If it significantly decreases, it’s likely
that there is some external controlling factor on that.”
In other words, it’s a window not just into a person’s
past but their present state of mind, or what Nehmer calls “micro changes” in
behavior.
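The baseline logic in that quote, flagging activity that rises or falls well outside a person’s own normative range, can be sketched in a few lines. The seven-day window and the two-standard-deviation band below are arbitrary illustrative choices, not parameters of the actual pilot.

```python
# A minimal sketch of the baseline idea Nehmer outlines: compare a user's
# current activity volume against their own "normative range" and label
# significant increases or decreases.
from statistics import mean, stdev

def classify_activity(history, current):
    """history: recent daily event counts for one user; current: today's count."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return "normative"
    z = (current - mu) / sigma
    if z > 2:
        return "increased"   # possible stress signal, per the article
    if z < -2:
        return "decreased"   # possible external controlling factor
    return "normative"

baseline = [110, 95, 102, 98, 105, 101, 99]  # made-up daily counts
print(classify_activity(baseline, 180))  # -> "increased"
```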
“We are putting these into a construct so that we can
detect minute changes in behavior across an entire pattern of life, so that we
can detect stress,” he said. “Fundamentally, we are there to look out for micro
changes in behavior that might indicate that a person is interested, or
disinterested, in continuing their affiliation with the Department of Defense,
or discontinuing their affiliation with life,” he said, referring specifically
to the epidemic of veteran suicides. “If we do our jobs right, we can help
prevent suicides, data breaches, or things that people, when they’re under
stress, things that they do,” he said.
Of course, not everyone is happy with the idea of their
employers detecting “micro-changes” in their behavior, especially if they
believe that they might be punished for what those changes indicate. That goes
for people who are exchanging their privacy for the immense privilege and power
that comes with holding a secret or top-secret clearance. It’s a tradeoff, but
not an easy one.
Nehmer is acutely aware of this. He’s adamant that the
objective isn’t to slap the cuffs on people; it’s to reveal changes in mood,
outlook, or behavior before punishment becomes necessary, before the employee
starts to hate where they work or what they’re doing.
As Nehmer describes it, the best outcome is a
conversation between employee and manager well before there is a crime, when
any workplace issue is in its earliest state; or “to detect changes in life
pattern that might indicate that an individual is moving into a position where
they may need some intervention. That intervention, if we are doing our jobs
right, is most likely to just be a person talking to them, having a
conversation,” he said. “We don’t want to lose our valuable employees. We want to help our valuable employees remain our valuable employees.”
Reaching that goal will take a lot of data, and data collection
within the Defense Department isn’t a straightforward process. “At least a
couple of years ago, there were 15,000 major systems across the Department of
Defense, each one configured differently, each one putting out different types
of data. That’s just the systems. Then you have all of these different things
related to personnel, and each one of them describes a single event in a
different way,” said Nehmer.
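At bottom, that is a schema-mapping problem: thousands of systems each describing the same kind of event in its own format. A toy sketch, with source formats invented for illustration:

```python
# Hedged sketch of the normalization problem Nehmer describes. The source
# names and field layouts here are assumptions, not real DoD systems.
def normalize(record, source):
    """Map source-specific event records onto one common schema."""
    if source == "system_a":
        return {"user": record["userName"], "event": record["evt"],
                "time": record["timestamp"]}
    if source == "system_b":
        return {"user": record["uid"], "event": record["event_type"],
                "time": record["occurred_at"]}
    raise ValueError(f"no mapping for source {source!r}")

# Two systems describing the same event differently:
a = {"userName": "jdoe", "evt": "login", "timestamp": "2019-03-01T09:15:00"}
b = {"uid": "jdoe", "event_type": "login", "occurred_at": "2019-03-01T09:15:00"}
print(normalize(a, "system_a") == normalize(b, "system_b"))  # True
```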
It also requires data that’s not within the Defense
Department as an institution but is available in other ways, through data
sharing partnerships with other governmental and commercial entities. Nehmer didn’t want to say exactly which entities, but it’s all data that the Department has legal access to as part of its screening and counterintelligence activities. Still, getting those entities to share data for a new employee-monitoring pilot requires a lot of legal negotiation. If a data-sharing partnership has to be renegotiated to allow for continuous evaluation or user activity monitoring, it can take nine months. And there are a lot of data-sharing partnerships to work through. Nehmer says that a lot of time is spent working with lawyers to keep the pilot in line with the law and get the data it needs to be effective.
It’s still an open question whether the pilot will create the sort of “snapshot” that DSS is looking to build. If it does, it will set a new standard for using data to predict and manage employees.
When you are seeking a job that involves highly sensitive national secrets, you agree to give up a lot of information about yourself. So it’s not clear how well user activity monitoring will translate to private companies
that don’t have a national security interest. But if it works, companies in the
future may want to implement something similar, creating a new norm for
employee monitoring.
The appeal of a system like the one Nehmer describes to
outside companies depends, of course, on several factors: whether that snapshot
actually results in improved performance, rather than just decreased risk; and,
most importantly, whether employees perceive a reward in participating in such monitoring and how willing they are to be a part of it.
In some instances, where the result of the program really is less stress, better work, better relationships with managers, and a happier work life, the rewards, perhaps, will justify the exposure in the mind of the employee or job applicant. In industries like tech, where firms compete hard for qualified applicants, it’s difficult to imagine talented workers signing up for a job where their data is always used against them. But equally competitive employees might be attracted to a program where their data worked for them (so long as they understood how the system functioned).
In other instances, where managers choose to use
monitoring to increase costs for bad performance without offering rewards,
future employee monitoring will probably feel truly Orwellian. And there are
other concerns. It’s not clear how wide-scale adoption of user activity monitoring would affect, for instance, the ability of officials to communicate with the media on background, engage in whistleblower activities, or undertake other behavior
that is morally and ethically justified but may have an immediate negative
effect on the organization.
But none of those outcomes is the direct result of the
technology so much as the discretion of future managers. Some bosses are good.
Some are bad. In the future, each will be empowered to be more truly what they
are.
But that, too, will be impossible to conceal.