Large Language Models
Overview
In this part, we look into current large language models, starting with how their training consists of both pre-training and fine-tuning, followed by how models are trained to follow instructions and align with human preferences. We then outline the rise of “GPT” models and open-source large language models more broadly, and discuss recent trends related to large language models.
The chapters in this part are as follows.
- Pre-training and Fine-tuning introduces the concepts of pre-training and fine-tuning, which are key to the development of large language models.
- Instruction Tuning and RLHF discusses two techniques that are used to align large language models with human intentions.
- Rise of GPT and Open-Source Models outlines the rise of “GPT” models and open-source large language models.
- Recent Trends outlines some of the recent trends in large language models.
- Summary summarizes the key takeaways from this part.