Large-scale language models show promising text generation capabilities, but users have little control over the content or style of the generated text, and cannot easily train these models for multiple supervised language generation tasks.
Today we are introducing our Conditional Transformer Language (CTRL) model, the largest publicly released language model to date. This research is intended to enhance the field of natural language understanding, create tools that inspire creativity in human-machine interaction, and push towards a better understanding of artificial text generation, including how to detect it.
CTRL is a 1.6 billion-parameter language model with powerful and controllable artificial text generation that can predict which subset of the training data most influenced a generated text sequence. It provides a potential method for analyzing large amounts of generated text by identifying the most influential source of training data in the model. Trained with over 50 different control codes, the CTRL model allows for better human-AI interaction because users can control the generated content and style of the text, as well as train it for multitask language generation. Finally, it can be used to improve other natural language processing (NLP) applications either through fine-tuning for a specific task or through transfer of representations that the model has learned.
By releasing this model we aim to foster transparency, reproducibility, and an open, facts-based discussion about artificial text generation as well as provide methods for mitigating potential negative consequences. We have released multiple full-sized, pre-trained versions of CTRL at github.com/salesforce/ctrl.
CTRL improves control over artificial text generation
Through special keywords, called control codes, humans can more explicitly influence the style, genre, entities, relationships between entities, and dates of generated text. This may improve narrative generation, such as goal-oriented text generation in applications like question answering, machine translation, or generic dialogue within text-based human interaction systems, as well as automatic generation of fictional stories or news reports. These control codes also make the relationship between human intentions and these large, complex models more explicit. The model is less likely to generate random word sequences than previously released models.
Here are two examples of a text prompt, control codes, and the generated text. You’ll notice that even for identical prompts, control codes allow for predictable variation in generation.
Reviews A knife is a tool and this one does the job well. I bought these for my husband who has been using them to cut up his own meat since he got them. He says they are very sharp so be careful when you use them, but that doesn’t seem like much of an issue because he’s used it on everything from chicken breasts to beef tenderloin…
Horror A knife handle pulled through the open hole in the front. I jumped when the knife hit. Eyes widened in horror. Her scream was the only sound I heard besides her sobs. The spider touched her feet as it started to dig into the top of her arch. The creature’s tears began to flow. The spider looked up to her and looked back at me with eyes filled with burning tears. My heart started to race...
With CTRL, no prompt is necessary as long as a control code is provided. Control codes can also be URLs, questions, and languages. They can also be combined (Reviews, Rating:, and VALUE) to provide finer-grained control.
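To make the conditioning mechanism concrete, here is a minimal sketch of how a control code, optional finer-grained codes, and a text prompt could be assembled into a single conditioning string. The `format_prompt` helper and the exact prompt layout are assumptions for illustration; the released repository at github.com/salesforce/ctrl defines the actual control codes and input format.

```python
# Illustrative sketch only: shows how a control code (and optional
# finer-grained codes such as "Rating:") could be prepended to a user
# prompt before being fed to the model. The helper name and layout
# are assumptions, not the repository's actual API.

def format_prompt(control_code, prompt="", extras=None):
    """Build a conditioning string: control code first, then any
    finer-grained codes, then the (possibly empty) text prompt."""
    parts = [control_code]
    if extras:
        parts.extend(extras)  # e.g. ["Rating:", "4.0"]
    if prompt:
        parts.append(prompt)
    return " ".join(parts)

# Identical text prompt, two different control codes:
print(format_prompt("Reviews", "A knife"))  # Reviews A knife
print(format_prompt("Horror", "A knife"))   # Horror A knife

# No prompt is needed when a control code is given, and codes combine:
print(format_prompt("Reviews", extras=["Rating:", "4.0"]))  # Reviews Rating: 4.0
```

Because the control code always leads the sequence, every generated token is conditioned on it, which is what makes the two "A knife" continuations above diverge so predictably.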
CTRL can be used for source attribution
Because of the direct relationship between these control codes and the text used for training the model, CTRL can also identify the data sources that are most likely to have influenced the model when generating new text.
With CTRL, we can test which domain best explains a sequence. Note that this procedure is sensitive to subtle nuances in the query prompt. In the example below, "Global warming is a lie" differs from "Global warming is a lie." only by the trailing period: the latter is a simple declarative sentence, while the former is an open start to a sentence that may continue. Source attribution cannot be considered a measure of veracity, but only a measure of how well each domain token explains a given sequence.
| Query Prompt | Attributed Sources |
| --- | --- |
| Global warming is a lie. | r/unpopularopinion, r/conspiracy, r/science |
| Global warming is a lie | r/eli5, r/science, r/unpopularopinion |
| Global warming is a real phenomenon | r/eli5, r/science, r/changemyview |
| Global warming is a real phenomenon. | OpenWebText, r/changemyview, r/science |
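The attribution procedure can be read as a Bayesian ranking: the model scores p(text | control code) for each domain code, and domains are ranked by p(code | text) ∝ p(text | code) · p(code). The sketch below uses made-up per-token log-probabilities in place of real model scores; the numbers and the uniform prior are assumptions for illustration only.

```python
import math

# Toy stand-in for the model: per-domain log-probabilities for each
# token of a query sequence. Real attribution would obtain these from
# the model's forward pass with each control code prepended.
# All numbers here are invented for illustration.
token_logprobs = {
    "r/science":          [-2.1, -1.8, -2.5],
    "r/unpopularopinion": [-1.2, -1.5, -1.9],
    "OpenWebText":        [-2.8, -2.6, -3.0],
}

def rank_sources(token_logprobs, log_prior=None):
    """Rank domains by log p(code | text) = log p(text | code) + log p(code)
    (up to a constant). With a uniform prior this reduces to ranking by
    sequence log-likelihood under each control code."""
    n = len(token_logprobs)
    scores = {}
    for code, lps in token_logprobs.items():
        prior = log_prior[code] if log_prior else -math.log(n)
        scores[code] = sum(lps) + prior
    return sorted(scores, key=scores.get, reverse=True)

print(rank_sources(token_logprobs))
# Best-explaining domain first (here: r/unpopularopinion)
```

Note that the ranking says nothing about whether the query is true; it only reflects which training domain assigns the sequence the highest likelihood, which is exactly the caveat about veracity above.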
Building awareness and understanding with CTRL
As with many powerful natural language understanding or generation systems, potential malicious use cases do exist. Generated text (whether produced by an algorithm or a person) could be used to influence decision-making in economic, political, and social settings, and false attribution may harm individuals, organizations, or other entities.
By releasing our model, we hope to openly collaborate with the broader research community to help control, understand, and combat potential negative use cases by providing good actors with the resources they need.
With language models, it is critical that we promote awareness and understanding of these artificial generation processes. As in information security research, these tools must be accessible so that researchers have the resources to expose and guard against potentially malicious use cases. We hope that CTRL will push forward research into detecting model-generated content of all kinds.
Beyond the technical work to develop this model, we’ve also taken several steps to anticipate and mitigate malicious use cases where possible. Following a careful review process and input from experts at the Partnership on AI (PAI) and external members of our Ethical Use Advisory Council, we release the model with the hope of advancing towards more controllable, general models for natural language processing.
This is an important and ever-evolving field of research and we encourage future discussion about artificial generation with our team by emailing email@example.com.