Text Generation

Tuning Language Models by Proxy
We develop a lightweight algorithm for “tuning” language models at decoding time, using only their output predictions
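The decoding-time arithmetic behind proxy-tuning can be sketched as a logit shift: the large base model's next-token logits are adjusted by the difference between a small tuned model and its untuned counterpart. A minimal sketch, assuming the logit-difference formulation; the array values are illustrative, not from the paper:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def proxy_tuned_logits(base, tuned_small, untuned_small):
    # Shift the large base model's next-token logits by the difference
    # between a small tuned model and its untuned counterpart.
    return base + (tuned_small - untuned_small)

# Toy next-token logits over a 3-token vocabulary (illustrative only).
base = np.array([2.0, 1.0, 0.5])      # large, untuned base model
tuned = np.array([0.5, 2.0, 0.1])     # small model after tuning
untuned = np.array([0.5, 0.5, 0.5])   # same small model before tuning

shifted = proxy_tuned_logits(base, tuned, untuned)
probs = softmax(shifted)  # token 1, preferred by the tuned expert, now leads
```

Only output-vocabulary predictions of the three models are needed, so the large model can remain a black box.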
Self-Instruct: Aligning Language Models with Self-Generated Instructions
Large “instruction-tuned” language models (i.e., finetuned to respond to instructions) have demonstrated a remarkable ability to …
Detoxifying Text with MaRCo: Controllable Revision with Experts and Anti-Experts
Using expert and anti-expert LMs to rewrite toxic text for safety
WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation
We introduce a paradigm for dataset creation based on human and machine collaboration, and demonstrate its empirical effectiveness for collecting a new large-scale NLI dataset
Generated Knowledge Prompting for Commonsense Reasoning
Prompting GPT-3 to generate relevant background knowledge improves performance on a variety of commonsense reasoning tasks
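The recipe has two prompting stages: first elicit knowledge statements from the model, then prepend a generated statement to the question before answering. A sketch of the prompt construction; the prompt wording and demo format here are illustrative assumptions, not the paper's exact templates:

```python
def build_knowledge_prompt(question, demos):
    # Stage 1: few-shot prompt asking the LM to state relevant
    # background facts about the input question.
    lines = ["Generate some knowledge about the input."]
    for demo_question, demo_knowledge in demos:
        lines += [f"Input: {demo_question}", f"Knowledge: {demo_knowledge}"]
    lines += [f"Input: {question}", "Knowledge:"]
    return "\n".join(lines)

def build_answer_prompt(question, knowledge):
    # Stage 2: prepend one generated knowledge statement to the
    # question and ask the task model to answer as usual.
    return f"{knowledge}\n{question}"

demos = [("Do birds lay eggs?", "Birds are oviparous animals.")]
stage1 = build_knowledge_prompt("Can penguins fly?", demos)
stage2 = build_answer_prompt("Can penguins fly?",
                             "Penguins are flightless birds.")
```

Each question can be paired with several sampled knowledge statements, keeping the answer whose prediction the model is most confident in.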
DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts
Steering open-ended text generation toward desired attributes and away from undesired ones, using expert and anti-expert language models
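At each decoding step the three models' next-token logits are combined: tokens the expert favors are boosted and tokens the anti-expert favors are penalized. A minimal sketch of that ensemble; the scaling factor `alpha` and the toy values are illustrative assumptions:

```python
import numpy as np

def dexperts_logits(base, expert, anti_expert, alpha=2.0):
    # Boost tokens the expert favors and penalize tokens the
    # anti-expert favors, relative to the base model's logits.
    return base + alpha * (expert - anti_expert)

# Toy next-token logits over a 4-token vocabulary (illustrative only).
base = np.array([1.0, 1.0, 1.0, 1.0])
expert = np.array([2.0, 0.0, 0.0, 0.0])       # e.g. an LM tuned on non-toxic text
anti_expert = np.array([0.0, 2.0, 0.0, 0.0])  # e.g. an LM tuned on toxic text

combined = dexperts_logits(base, expert, anti_expert, alpha=1.0)
# token 0 is boosted, token 1 is suppressed
```

Raising `alpha` strengthens the steering at the cost of fluency, since the base model's own preferences are increasingly overridden.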