In 2011, shortly after IBM’s Watson defeated Ken Jennings and Brad Rutter to become the reigning “Jeopardy” champion, the researchers behind the supercomputer decided to expand its vocabulary by introducing it to the web-based Urban Dictionary. A crowdsourced collection of slang and cultural phrases, the Urban Dictionary did its job a little too well. Soon, Watson was swearing up a storm and had to be restored to its previous unhip state.
IBM’s experience was hardly an isolated incident. As natural language processing has advanced, toxic output has become a growing problem for pre-trained language generation models. This led a team of computational linguists at the Allen Institute for AI (AI2) and the University of Washington to want to better understand the problem.
The result of their work, “RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models” was recently published in Findings of EMNLP 2020, and highlights several issues with language generation, obscenity and bias. This problem with toxicity arises in part because of how predictive language models are produced using enormous sets of human-generated text as their training data. Combined with deep learning techniques, this allows them to complete sentence fragments based on pre-existing content. An example of this might be an initial phrase such as “So, I’m starting to think he’s full …” Several pre-trained language models will regularly generate toxic text when completing that sentence.
As one of the researchers, Suchin Gururangan explains, “There’ve been a lot of people anecdotally identifying problems, saying things like this autocomplete application or that API can generate a lot of hateful things, whether it be racist or sexist or what have you. We realized there wasn’t a systematic way to evaluate how much toxicity a particular model should be expected to have when you deploy it.”
To address this problem, the team created an evaluation framework and testbed for measuring toxicity in language generation systems. They began by establishing a baseline, measuring the degree and frequency of toxicity produced without prompts for a given number of generations in a pre-trained language model. They then compiled a dataset of 100,000 naturally occurring prompts from the Open WebText Corpus, a large collection of Reddit text which attempts to reproduce the dataset used to train OpenAI’s GPT-2.
Using Google’s Perspective API, toxicity scores were generated that measured how much toxic degeneration each of the studied language models produced. Different detoxification methods were then tested and while some were found more effective at reducing toxicity, none of them could completely eliminate it.
“We’re not just looking at individual swear words and trying to see if the model outputs that,” said researcher Maarten Sap. “It’s a machine learning algorithm that takes in the whole sentence and predicts the toxicity score.” To demonstrate the concept, the researchers created several interactive visualization tools which are available on AI2’s web site.
The development of large-scale language models that use deep learning to generate human-like text, like CTRL and GPT-3, is progressing rapidly. In fact, these systems are becoming so good that for certain applications it’s very difficult to discern that it’s machine-generated text. These models are already being tapped to build new tools or improve existing ones like auto-complete and help systems. Without better understanding and controlling the output, however, this is likely to create as many problems as it solves.
Because it’s currently not feasible to create enough training data from scratch, the needed datasets have mostly been generated from existing bodies of web-based text. Even when filtered for specific offensive words and phrases, “non-negligible” amounts of biased and otherwise toxic language are routinely produced by these systems, hindering their safe deployment.
“No detoxification methods are foolproof,” noted Samuel Gehman, one of the study’s authors. “Ultimately, we find that all models are able to generate toxicity under our framework.”
To this point, the study found a strong correlation between the toxicity of the training data and the output of the model itself. Perhaps it’s not surprising then that certain models even generated some of the more vitriolic language of our recent highly divisive political season.
Computers don’t yet understand the language they’re processing, which is a big part of the dilemma. Because they’re using predictive methods based on a large collection of existing text — also known as a corpus — all kinds of toxic language and views can be unintentionally generated. While the corpus and model used play a big role in just how much toxicity is outputted, the complex and subtle nature of language makes preventing such toxic degeneration especially challenging.
This is concerning given that natural language generation models like GPT-3 are starting to be used to develop a wide array of services and products. While the resulting tools and ecosystem could have huge potential for business, it’s easy to see how toxic degeneration could easily lead to public relations nightmares.
The problem goes beyond word filters and using machine learning to train systems to know what to steer away from. Toxicity and bias can be subjective in nature and what is offensive to one person or group may be acceptable or innocuous to another. Additionally, according to the authors, various methods for controlling the text output can render it incoherent or instill other forms of unintended bias.
“A very small amount of toxicity in the training data can have a very large effect on the model’s behavior,” said Gururangan. “Right now, a lot of decisions are being made by small groups of people who are designing these models and they’re interacting with millions of people and they could have harmful effects. So, we need to figure out how to make this process more democratic and include more people.” But while this an important objective, the scale of the data needed combined with the subjective nature of language would make certain solutions, like having committees audit the training datasets beforehand, a huge challenge.
Nevertheless, looking ahead, the team behind RealToxicityPrompts believe their tools could help establish standards that would ultimately improve how future datasets and models are validated and trained, helping to steer them away from generating offensive and biased language. That’s important because given the many ways these language models will soon be used in business and other settings — from help desks to automated attendants to digital assistants — we need to ensure that natural language generation improves our communications, rather than hindering them.
from GeekWire https://www.geekwire.com/2021/curse-neural-toxicity-ai2-uw-researchers-help-computers-watch-language/