'Dangerous' AI offers to write fake news
27 August 2019
The text generator, built by research firm OpenAI, was originally considered "too dangerous" to make public because of the potential for abuse.
But now a new, more powerful version of the system - which could be used to create fake news or abusive spam on social media - has been released.
The BBC, along with some AI experts, decided to try it out.
The model, called GPT-2, was trained on a dataset of eight million web pages, and is able to adapt to the style and content of the initial text given to it.
It can finish a Shakespeare poem as well as write articles and epithets.
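The article does not describe how the released model is actually run. As a rough, non-authoritative sketch, the snippet below assumes the publicly released GPT-2 weights are loaded through the Hugging Face transformers library in Python; the library, checkpoint name and prompt are illustrative assumptions, not details from the BBC's test.

# Minimal sketch, assuming the released GPT-2 weights are available via the
# Hugging Face "transformers" package (the article itself names no tooling).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Give the model an opening line; it continues in a matching style and topic.
prompt = "Shall I compare thee to a summer's day?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_length=60, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))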
At the time, the firm said: "Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with."
As a result, the released version had far fewer parameters - the numerical weights a model learns during training - than the full system.
This month, OpenAI decided to expand the release, publishing a version of the model with far more parameters.
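For a sense of what "more parameters" means in practice, here is a small comparative sketch; the checkpoint names ("gpt2", "gpt2-large") are identifiers from the Hugging Face model hub, used as an assumption rather than terminology from OpenAI's announcement.

# Sketch comparing the initially released small model with a larger release,
# assuming both are fetched from the Hugging Face model hub.
from transformers import GPT2LMHeadModel

small = GPT2LMHeadModel.from_pretrained("gpt2")        # early, limited release
large = GPT2LMHeadModel.from_pretrained("gpt2-large")  # later, larger release

print(sum(p.numel() for p in small.parameters()))  # on the order of 124 million parameters
print(sum(p.numel() for p in large.parameters()))  # on the order of 774 million parameters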
Tech news site The Next Web said of the update: "This one works almost good enough to use as a general artificial intelligence for text generation - almost."
Article author Tristan Greene added: "I'm terrified of GPT-2 because it represents the kind of technology that evil humans are going to use to manipulate the population - and in my opinion that makes it more dangerous than any gun."
The BBC decided to test the system by inputting the first few sentences of a recent story, appropriately one about AI.
We began with the first sentence of the real story:
Chinese search engine Sogou is creating artificial intelligence lookalikes to read popular novels in authors' voices.
Here's what the text generator came up with to complete it:
According to the researchers, this method of reading is not just the brainchild of Amazon Kindle users. Users of Android devices are also a notable market for fake reading. There are several theories to explain this.
Firstly, the device in a user's hand is different from the tablet or computer they are reading on. Secondly, the device used to access the Internet might be tampered with in some way. A third reason is the user may just copy and paste content directly into their virtual characters.
However, another explanation is that the device does not have voice recognition capabilities at all. If this is the case then voice recognition is more advanced than Google's translation engine. As such, these users are tricked into thinking they are reading from a real author - the book is made to sound authentic as soon as they try and read.
Noel Sharkey, a professor of computer science at the University of Sheffield, conducted his own tests on the generator and was not too impressed.
Man walks into a bar
"If the software worked as intended by OpenAI, it would be a very useful tool for easily generating fake news and clickbait spam. Fortunately, in its present form, it generates incoherent and ridiculous text with little relation to the input 'headlines'," he said.
He added: "I tested the software with numerous headlines and got laughable results. For example, for 'Turkeys hate Christmas', I got that 'Turkey is the only nation in the world that doesn't celebrate Christmas' and a number of unrelated sentences.
"For 'Boris Johnson loves the backstop', it produced incoherent gibberish and some stuff about AI and sport. When I input the statement that 'Boris Johnson hates the backstop', I got a more coherent story that appears to have been pulled off a Google search."
Dave Coplin, founder of AI consultancy the Envisioners, also had a play with the system, inputting the first line of a classic joke: A man walks into a bar...
The suggestion from the AI was not what he was expecting: "...And ordered two pints of beer and two scotches. When he tried to pay the bill, he was confronted by two men - one of whom shouted 'This is for Syria'. The man was then left bleeding and stabbed in the throat."
This "overwhelmingly dystopian reflection of our society" was a lesson in how any AI system will reflect the bias found in its training data, he said.
"From my brief experiments with the model, it's pretty clear that a large portion of the data has been trained by internet news stories," he said.
"OpenAI's decision to publish the upgraded version of their GPT-2 language prediction text generator model may seem controversial," he added.
"But once the initial (and understandable) concern dies down, what is left is a fundamentally crucial debate for our society, which is about how we need to think about a world where the line between human-generated content and computer-generated content becomes increasingly hard to differentiate."
OpenAI, which was originally a non-profit organisation, was founded in 2015 with the aim of promoting and developing AI in such a way as to benefit humanity as a whole.
Elon Musk was one of the initial funders, but has not been involved with the firm for some time.