AI DEEMED ‘TOO DANGEROUS TO RELEASE’ MAKES IT OUT INTO THE WORLD
Extremists could generate 'synthetic propaganda', automatically creating white supremacist screeds, researchers warn
Andrew Griffin, November 7, 2019
An AI that was deemed too dangerous to be released has now been released into the world.
Researchers had feared that the model, known as "GPT-2", was so powerful that it could be maliciously misused by everyone from politicians to scammers.
GPT-2 was created for a simple purpose: it can be fed a piece of text and predict the words that will come next. By doing so, it is able to create long strings of writing that are largely indistinguishable from those written by a human being.
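GPT-2 learns this from vast amounts of text with billions of parameters, but the underlying task — predicting the next word from what came before — can be illustrated with a toy sketch. The corpus and function names below are illustrative only, not taken from OpenAI's code:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count which word follows which -- a crude stand-in for the
    statistical patterns GPT-2 learns at a vastly larger scale."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the word most often seen after `word` in training,
    or None if the word was never seen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

# Toy training corpus (illustrative).
corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" (follows "the" twice, vs "mat" once)
```

Chaining such predictions — append the predicted word, then predict again — is how a language model generates long passages of text, which is exactly the capability the researchers worried about at GPT-2's scale.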
But it became clear that it was worryingly good at that job, with its text creation so powerful that it could be used to scam people and undermine trust in the things we read.
What's more, the model could be abused by extremist groups to create "synthetic propaganda" that would allow them to automatically generate long text promoting white supremacy or jihadist Islamism, for instance.
"Due to our concerns about malicious applications of the technology, we are not releasing the trained model," OpenAI wrote in a February blog post announcing the decision. "As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper."
At that time, the organisation released only a very limited version of the tool, which used 124 million parameters. It has released progressively larger versions since then, and has now made the full version available.
The full version is more convincing than the smaller one, but only "marginally". The relatively limited increase in credibility was part of what encouraged the researchers to make it available, they said.
OpenAI hopes that the release will partly help the public understand how such a tool could be misused, and help inform discussions among experts about how that danger can be mitigated.
Guarding against such misuse would require the public to become more critical of the text they read online, which could have been generated by artificial intelligence, they said.
"These findings, combined with earlier results on synthetic imagery, audio, and video, imply that technologies are reducing the cost of generating fake content and waging disinformation campaigns," they wrote. "The public at large will need to become more skeptical of text they find online, just as the “deep fakes” phenomenon calls for more skepticism about images."
The researchers said that experts needed to work to consider "how research into the generation of synthetic images, videos, audio, and text may further combine to unlock new as-yet-unanticipated capabilities for these actors, and should seek to create better technical and non-technical countermeasures".