OPT: Open Pre-trained Transformer Language Models

Resource type
Preprint
Authors/contributors
Zhang, S.; Roller, S.; Goyal, N.; Artetxe, M.; Chen, M.; Chen, S.; Dewan, C.; Diab, M.; Li, X.; Lin, X. V.; Mihaylov, T.; Ott, M.; Shleifer, S.; Shuster, K.; Simig, D.; Koura, P. S.; Sridhar, A.; Wang, T.; Zettlemoyer, L.
Title
OPT: Open Pre-trained Transformer Language Models
Abstract
Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.
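Usage note
The abstract mentions that code for experimenting with the released models accompanies the release (the authors' own codebase is metaseq). As a minimal, illustrative sketch only, the checkpoints are also mirrored on the Hugging Face Hub (e.g. facebook/opt-125m, the smallest model in the suite), and can be loaded through the standard transformers interface; this is an assumption about a convenient access path, not the paper's own tooling:

# Minimal sketch: load the smallest released OPT checkpoint and generate text.
# Assumes the Hugging Face Hub mirror of the weights (facebook/opt-125m);
# the paper's experiments themselves use the metaseq codebase.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

inputs = tokenizer("Large language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The same pattern applies to the larger checkpoints (facebook/opt-350m up to facebook/opt-66b); the 175B model is distributed separately to approved researchers.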
Repository
arXiv
Archive ID
arXiv:2205.01068
Date
2022-06-21
Accessed
2024-02-24, 17:41
Short Title
OPT
Library Catalogue
Extra
arXiv:2205.01068 [cs]
Citation
Zhang, S., Roller, S., Goyal, N., Artetxe, M., Chen, M., Chen, S., Dewan, C., Diab, M., Li, X., Lin, X. V., Mihaylov, T., Ott, M., Shleifer, S., Shuster, K., Simig, D., Koura, P. S., Sridhar, A., Wang, T., & Zettlemoyer, L. (2022). OPT: Open Pre-trained Transformer Language Models (arXiv:2205.01068). arXiv. https://doi.org/10.48550/arXiv.2205.01068