'Dangerous' AI offers to write fake news
27 August 2019
The text-generation system, built by research firm OpenAI, was originally considered "too dangerous" to make public because of the potential for abuse.
But now a new, more powerful version of the system - that could be used to create fake news or abusive spam on social media - has been released.
The BBC, along with some AI experts, decided to try it out.
The model, called GPT-2, was trained on a dataset of eight million web pages, and is able to adapt to the style and content of the initial text given to it.
It can finish a Shakespeare poem as well as write articles and epithets.
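GPT-2 itself is a large neural network, but the core mechanic described above - continuing a prompt by repeatedly predicting a plausible next word from the words before it - can be sketched with a toy Markov-chain generator. This is purely illustrative Python: it is not OpenAI's code, and the tiny corpus and function names are invented for the example.

```python
import random
from collections import defaultdict

def train(text):
    """Map each word to the list of words observed to follow it."""
    model = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, prompt, length=8, seed=0):
    """Continue the prompt by repeatedly sampling a likely next word."""
    rng = random.Random(seed)  # fixed seed so the sketch is repeatable
    out = prompt.split()
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:  # no known continuation: stop early
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

# A deliberately tiny, made-up training corpus.
corpus = ("the man walked into a bar and the man ordered a drink "
          "and the bar was quiet")
model = train(corpus)
print(generate(model, "the man"))
```

Like GPT-2, the generator picks up the style of whatever it was trained on - here a single sentence, in GPT-2's case eight million web pages - which is why the model's output so strongly reflects its training data.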
At the time, the firm said: "Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with."
As a result, the released version had far fewer parameters - the internal values a model learns from its training data - than the full model.
This month, OpenAI released a more powerful version of the model, with many more parameters than the earlier cut-down release.
One article reviewing the release said: "This one works almost good enough to use as a general artificial intelligence for text generation - almost."
Article author Tristan Greene added: "I'm terrified of GPT-2 because it represents the kind of technology that evil humans are going to use to manipulate the population - and in my opinion that makes it more dangerous than any gun."
The BBC decided to test the system by inputting the first few sentences of a recent story, appropriately one about AI.
We began with the first sentence of the real story:
Here's what the text generator came up with to complete it:
Noel Sharkey, a professor of computer science at the University of Sheffield, conducted his own tests on the generator and was not too impressed.
"If the software worked as intended by Open AI, it would be a very useful tool for easily generating fake news and clickbait spam. Fortunately, in its present form, it generates incoherent and ridiculous text with little relation to the input 'headlines'," he said.
He added: "I tested the software with numerous headlines and got laughable results. For example, for 'Turkeys hate Christmas', I got that 'Turkey is the only nation in the world that doesn't celebrate Christmas' and a number of unrelated sentences.
"For 'Boris Johnson loves the backstop', it produced incoherent gibberish and some stuff about AI and sport. When I input the statement that 'Boris Johnson hates the backstop', I got a more coherent story that appears to have been pulled off a Google search."
Dave Coplin, founder of AI consultancy the Envisioners, also had a play with the system, inputting the first line of a classic joke: "Man walks into a bar..."
The suggestion from the AI was not what he was expecting: "...And ordered two pints of beer and two scotches. When he tried to pay the bill, he was confronted by two men - one of whom shouted 'This is for Syria'. The man was then left bleeding and stabbed in the throat."
This "overwhelmingly dystopian reflection of our society" was a lesson in how any AI system will reflect the bias found in training data, he said.
"From my brief experiments with the model, it's pretty clear that a large portion of the data has been trained by internet news stories," he said.
"OpenAI's decision to publish the upgraded version of their GPT-2 language prediction text generator model may seem controversial," he added.
"But once the initial (and understandable) concern dies down, what is left is a fundamentally crucial debate for our society, which is about how we need to think about a world where the line between human-generated content and computer-generated content becomes increasingly hard to differentiate."
OpenAI, originally a non-profit, was founded in 2015 with the aim of promoting and developing AI in a way that benefits humanity as a whole.
Elon Musk was one of the initial funders, but has not been involved with the firm for some time.