AI Can Almost Write Like a Human—and More Advances Are Coming
A new language model, OpenAI’s GPT-3, is making waves for its ability to mimic writing, but it falls short on common sense. Some experts think an emerging technique called neuro-symbolic AI is the answer.
By Jared Council | Aug. 11, 2020 9:00 AM ET
Last month, software developer Kevin Lacker tested GPT-3, the latest version of an artificial-intelligence language system developed by San Francisco-based software company OpenAI LP. The system isn’t yet public, but it set off a firestorm in tech circles after OpenAI gave select researchers and developers access so they could provide feedback. They observed its uncanny and unprecedented ability to answer trivia questions, generate long passages of coherent text, design simple software applications and offer plausible recipes for breakfast burritos.
Trained on roughly 300 billion words from across the internet, GPT-3 predicts what is most likely to follow a prompt from a human. But ask it to reason, and it struggles.
“If I have two shoes in a box, put a pencil in the box, and remove one shoe, what is left?” Mr. Lacker, who is based in Piedmont, Calif., typed into the software.
“A shoe,” GPT-3 replied, incorrectly.
The mistake reflects one of the central shortcomings of today’s language models: They are great at predicting the most likely next words in a sequence, but fall short at reasoning and common sense. “It pretends to be correct, but actually it’s correct for the wrong reasons. It doesn’t really understand the question very much at all,” says Yejin Choi, a computer science professor at the University of Washington and a research manager at the Allen Institute for AI.
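What “predicting the next word” means mechanically can be seen in miniature. The sketch below is a toy bigram model in Python with a made-up two-sentence corpus: it picks whichever word most often followed the current one in its training data. GPT-3 pursues the same objective with a 175-billion-parameter neural network rather than counts, but nothing in the objective itself requires understanding the words.

    from collections import Counter, defaultdict

    # Toy illustration of next-word prediction: count which word follows
    # which in a (made-up) training corpus, then predict the most common
    # successor. A model can excel at this without any grasp of meaning.
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    successors = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        successors[word][nxt] += 1

    def predict_next(word):
        """Return the word most often seen after `word` in training."""
        if word not in successors:
            return None
        return successors[word].most_common(1)[0][0]

    print(predict_next("sat"))  # -> "on": it always followed "sat"
    print(predict_next("the"))  # -> "cat": first of four equally common words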
Ms. Choi is among a group of AI researchers seeking to address this shortcoming by combining the AI technique that GPT-3 uses—called deep learning—with another technique known as symbolic learning.
The combined approach, known as neuro-symbolic AI, could help natural-language processing systems perceive symbols quickly and then reason to answer questions and even explain their decisions.
Deep learning involves feeding machines enormous data sets so they can learn to recognize or re-create images or text passages, but it reaches decisions in ways that can’t be explained. Symbolic learning clearly illustrates a machine’s decisions and logic, but it requires humans to encode knowledge and rules. The idea is that if a machine is given explicit knowledge in the form of symbols, such as “bat” and “hit,” then it could use what it knows to make inferences about scenarios, such as what happens if a bat hits a ball.
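To make the symbolic side concrete, here is a minimal, hypothetical sketch in Python: a few hand-encoded facts and if-then rules, with forward chaining to derive new conclusions. The symbols and rules are illustrative inventions, not any real system’s knowledge base, but they show the property that matters: every conclusion traces back to an explicit rule a human wrote.

    # Facts are (subject, relation, object) triples; all symbols here
    # are illustrative inventions. Each rule fires on a relation and
    # derives a new fact, so every conclusion is traceable.
    facts = {("bat", "hit", "ball")}

    rules = [
        # If X hit Y, infer that Y moved.
        ("hit", lambda subj, obj: (obj, "did", "move")),
        # If Y did move, infer that Y changed position.
        ("did", lambda subj, obj: (subj, "changed", "position")),
    ]

    # Forward chaining: keep applying rules until nothing new appears.
    changed = True
    while changed:
        changed = False
        for subj, rel, obj in list(facts):
            for trigger, derive in rules:
                if rel == trigger and derive(subj, obj) not in facts:
                    facts.add(derive(subj, obj))
                    changed = True

    for fact in sorted(facts):
        print(fact)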
“What’s missing with today’s AI is we have to get beyond the level of the statistical correlations that deep learning models tend to learn,” says Mike Davies, director of Intel Corp.’s neuromorphic computing lab.
If successful, neuro-symbolic AI could pave the way for voice assistants that act on an understanding of a user’s needs, not just the user’s questions, or, for better or worse, write film scripts that reflect a grasp of the world—potentially affecting industries including media, health care, banking, manufacturing and retail.
“They will become much better at assisting people because they’ll be better at being able to understand and communicate with people,” says Bern Elliot, research vice president at Gartner Inc.
Natural-language processing is already widely used. It powers customer-service chatbots, predictive text, social-media sentiment analysis and more. It also underpins systems that generate human-sounding written text, such as those that write news briefs or translate numerical business data into plain-English summaries.
“Those systems are still nascent, but you could imagine in the future, as the technology progresses, an entirely new field, a creative field, in terms of advertising, media and film being developed,” says Francesco Marconi, founder of New York-based Applied XL, which uses natural-language processing to generate briefs of health and environmental data. Mr. Marconi is the former chief of research and development at The Wall Street Journal.
Neuro-symbolic AI could also improve a system’s ability to explain itself, heading off the criticism that deep learning is a “black box” that reaches conclusions in incomprehensible ways, says Sriram Raghavan, vice president of IBM Research AI.
The five industries investing the most in natural-language processing in the U.S.—retail, banking, manufacturing, health care and securities-and-investment services—are expected to double their spending on such technology, to $3.2 billion by 2023 from $1.6 billion this year, according to research firm IDC.
GPT-3 is part of a class of language systems, including Google’s BERT, that use deep learning to classify words or predict strings of text. Sam Altman, OpenAI’s chief executive, says GPT-3 struggles with reasoning tasks, in part because it was designed to focus on word prediction, not reasoning. But GPT-3 already has a limited capacity to reason, and that could improve, he says.
“We didn’t train it to do that, but it emerged in the process of getting better at predicting the next word in a sequence,” he says. “We believe that deep learning will eventually be able to reason quite well, but there’s a lot of research in front of us and of course we can’t predict that with certainty,” he adds.
GPT-3 is the successor to GPT-2, which was released in February 2019 and had 1.5 billion parameters—or weightings that can be adjusted to improve predictions—compared with GPT-3’s 175 billion. OpenAI launched in 2015 as a nonprofit AI research firm. Last year, Microsoft Corp. invested $1 billion in OpenAI’s for-profit subsidiary that is developing GPT-3.
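Those “parameters” are simply adjustable numbers. The toy sketch below, plain Python with a single made-up parameter and a handful of invented data points, shows the basic training move: nudge a weight in whatever direction shrinks the prediction error. GPT-3’s training differs in almost every particular, but this is what adjusting billions of weightings gestures at.

    # One "parameter": a weight w, fit so that w * x approximates y.
    # The data points and learning rate are made up. The update rule
    # is gradient descent on squared error -- the same basic move,
    # repeated across billions of weights, that trains a large model.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
    w = 0.0
    learning_rate = 0.05

    for step in range(200):
        for x, y in data:
            error = w * x - y                    # how wrong the prediction is
            w -= learning_rate * 2 * error * x   # nudge w to shrink error^2

    print(round(w, 2))  # settles near 2.0, the slope that fits the data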
The company is now trying to commercialize it. Early customers include businesses in the legal industry looking to translate legalese into plain English, and choose-your-own-adventure gaming companies using it to generate game scenarios.
Mr. Altman sees GPT-3 as a platform that could enable new business models in the way the iPhone spawned companies such as Uber. “We want to usher in this new era of AI-as-a-platform that gives birth [to] this Cambrian explosion of new products and services and new entire companies,” he says.
Ms. Choi of the Allen Institute believes that language understanding needs to be grounded in knowledge about how the world works. If a machine has a database of symbols that represent the real world—including objects and their behaviors and relationships over time—it can then learn to make inferences about scenarios involving those symbols.
Ms. Choi and her colleagues used a neuro-symbolic AI approach to train a system to answer what can be assumed about a given situation, using a data set of nearly 900,000 manually written rules of thumb. For instance, for a situation involving someone dropping a match on a pile of kindling, the system can infer that the person “wanted to start a fire” or “needed to have a lighter.”
The group’s research found that “neural models can acquire simple common sense capabilities and reason about previously unseen events.”
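The group’s actual system pairs a trained neural model with that large knowledge base; the sketch below is a loose, hypothetical stand-in for the retrieval half alone. It keys a few hand-written rules of thumb by situation and matches a new situation by naive word overlap, where a real neuro-symbolic system would use a learned model.

    # Hypothetical miniature of inference over hand-written rules of
    # thumb. Both the tiny knowledge base and the word-overlap matcher
    # are stand-ins; the real system trains a neural model against a
    # dataset of roughly 900,000 such rules.
    knowledge = {
        "person drops a match on kindling": [
            "the person wanted to start a fire",
            "the person needed to have a match or lighter",
        ],
        "person puts on a coat": [
            "the person felt cold",
            "the person intends to go outside",
        ],
    }

    def infer(situation):
        """Return inferences for the best-matching known situation."""
        words = set(situation.lower().split())
        best = max(knowledge, key=lambda k: len(words & set(k.split())))
        return knowledge[best]

    for inference in infer("someone drops a lit match on a pile of kindling"):
        print(inference)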
As AI systems become a bigger part of human life, Ms. Choi says, reasoning and inference can help them become more useful for people. “They’re in the phone, in the car, everywhere—they also need to understand humans better, so as not to make silly mistakes.”
