This article has been updated since it was originally published on August 8, 2023.
Modern Large Language Models (LLMs) are pre-trained on large text corpora using self-supervised learning, then aligned with human preferences via techniques such as reinforcement learning from human feedback (RLHF).
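The self-supervised pre-training stage mentioned above boils down to next-token prediction: the model scores every vocabulary token and is penalized by the cross-entropy between its predicted distribution and the actual next token. A minimal sketch of that objective is below; the vocabulary size, logit values, and function name are illustrative, not from any particular model.

```python
import math

def next_token_loss(logits, target_id):
    """Cross-entropy loss for predicting the next token.

    logits: raw, unnormalized scores the model assigns to each
            vocabulary token at one position.
    target_id: index of the token that actually came next.
    Returns -log(softmax(logits)[target_id]), computed with the
    max-subtraction trick for numerical stability.
    """
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(x - m) for x in logits))
    log_prob = logits[target_id] - log_sum_exp
    return -log_prob

# Toy vocabulary of 4 tokens; the true next token has id 2, which the
# model already favors (largest logit), so the loss is small.
loss = next_token_loss([1.0, 0.5, 3.0, -1.0], target_id=2)
print(f"{loss:.4f}")
```

During pre-training, this loss is averaged over every position in every sequence of the corpus and minimized by gradient descent; RLHF then adjusts the resulting model with a separate, preference-based reward signal.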
LLMs have seen rapid advances over the last decade, particularly since the introduction of the transformer architecture in 2017 and of generative pre-trained transformers (GPTs) in 2018. Google’s BERT, also released in 2018, represented a significant advance in capability and architecture…