Working with LLMs

Overview
In this part, we look into working with large language models by providing them with inputs — prompts — that guide them toward producing outputs more in line with our expectations.

The chapters in this part are as follows.

  • Prompting Basics & Taxonomy introduces prompting and showcases different types of prompts, discussing how prompts influence the outputs of large language models.
  • Clear Instructions and Context discusses the need to provide clear instructions to large language models, including the use of contextualization and personas.
  • In-context Learning introduces the concept of in-context learning, and discusses zero-shot, one-shot, and few-shot prompting.
  • Reference Text and Retrieval-Augmented Generation outlines how the outputs of large language models can be grounded in a given reference text.
  • Reasoning Through Problems provides an example of chain-of-thought prompting, a technique where large language models are guided to work through intermediate reasoning steps before producing an answer.
  • Prompt Chaining discusses the use of multiple prompts in a sequence, where the output of one prompt is used as the input to the next.
  • Limitations of Prompting discusses the fundamental limitations of prompting and the kinds of problems it cannot solve.
  • Summary summarizes the key takeaways from this part.
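
As a concrete preview of the in-context learning chapter's theme, the sketch below builds zero-shot, one-shot, and few-shot prompts for a sentiment-classification task. The task description, example texts, and the `build_prompt` helper are illustrative assumptions, not material from the chapters themselves.

```python
# A minimal sketch of zero-, one-, and few-shot prompts, assuming a
# hypothetical sentiment-classification task. The prompt layout (blank
# lines between blocks, "Text:"/"Sentiment:" labels) is one common
# convention, not the only one.

task = "Classify the sentiment of the text as positive or negative."

# In-context examples: (text, label) pairs shown to the model.
examples = [
    ("I loved the film.", "positive"),
    ("The food was cold and bland.", "negative"),
]


def build_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a prompt: the task, zero or more labeled examples,
    and the query whose label the model should complete."""
    blocks = [task]
    for text, label in examples:
        blocks.append(f"Text: {text}\nSentiment: {label}")
    blocks.append(f"Text: {query}\nSentiment:")
    return "\n\n".join(blocks)


query = "The service was excellent."
zero_shot = build_prompt(task, [], query)           # instruction only
one_shot = build_prompt(task, examples[:1], query)  # one labeled example
few_shot = build_prompt(task, examples, query)      # several examples
```

The only difference between the three variants is the number of labeled examples included before the query; the chapter on in-context learning discusses how this choice affects model behavior.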