Working with LLMs

In-context Learning


Learning Objectives

  • You recognize the term in-context learning and understand its role in prompting.
  • You know the differences between zero-shot, one-shot, and few-shot prompting.

A surprising feature of large language models was their ability to learn new tasks simply by being given examples within the prompt itself. This is called in-context learning — the model “learns” from examples provided in the conversation, without any changes to its underlying parameters.

This finding drove much of the early work on prompt engineering.

In-context learning is “temporary” because the model’s parameters don’t change. The examples only influence the current response. Once the conversation ends, the model retains no memory of these examples.

Here, we explore three main types of prompting used in in-context learning: zero-shot, one-shot, and few-shot prompting.

Zero-shot prompting

Zero-shot prompting asks the model to perform a task based only on instructions, without providing examples. This relies on the model’s training to understand the task description.

Translate the following Finnish words into English.
Words: kahvi, tee, vesi

Words:

kahvi -> coffee
tee -> tea
vesi -> water

The model understands the task from the instruction alone. However, we have limited control over the exact output format. If we try the same prompt again, the format might vary slightly:

Translate the following Finnish words into English.
Words: kahvi, tee, vesi

Words:

kahvi - coffee
tee - tea
vesi - water

The translations are correct, but the separator changed from -> to -. For applications requiring strict output formatting, this variability can be problematic.
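
In code, a zero-shot prompt is just the instruction string sent to the model as-is. Below is a minimal sketch; it assumes the OpenAI Python SDK and an example model name, but any chat-style API would work the same way.

# A minimal zero-shot call. Assumes the OpenAI Python SDK is installed
# and the OPENAI_API_KEY environment variable is set; the model name is
# only an example.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Translate the following Finnish words into English.\n"
    "Words: kahvi, tee, vesi"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)

Running the same call twice can yield slightly different formatting, which is exactly the variability discussed above.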

One-shot prompting

One-shot prompting provides a single example of the desired output format. The model learns the pattern from this example and applies it to new inputs.

Suppose we need a very specific format: each line starts with #, the word and translation are separated by #, and each line ends with # :). Like this:

#kahvi#coffee# :)
#tee#tea# :)
#vesi#water# :)

We can achieve this with one-shot prompting:

Translate the following Finnish words into English.

Words: yksi, kaksi
Output:
#yksi#one# :)
#kaksi#two# :)

Words: kahvi, tee, vesi
Output:

#kahvi#coffee# :)
#tee#tea# :)
#vesi#water# :)

The model correctly follows the formatting pattern demonstrated in the example.
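
Strict formatting pays off as soon as the response is processed by other code. The following sketch is plain Python with no API calls; the function name parse_translations and the sample text are made up for illustration, and it simply parses lines in the #word#translation# :) format shown above.

# Parse model output in the "#word#translation# :)" format.
# The function name and sample text are illustrative only.
def parse_translations(output: str) -> dict[str, str]:
    translations = {}
    for line in output.splitlines():
        line = line.strip()
        if not line.startswith("#"):
            continue  # skip anything that does not follow the format
        # "#kahvi#coffee# :)" splits into ["", "kahvi", "coffee", " :)"]
        parts = line.split("#")
        if len(parts) >= 3:
            translations[parts[1]] = parts[2]
    return translations

sample = "#kahvi#coffee# :)\n#tee#tea# :)\n#vesi#water# :)"
print(parse_translations(sample))
# {'kahvi': 'coffee', 'tee': 'tea', 'vesi': 'water'}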

One-shot prompting can also guide the model on what information to extract:

Provide the number of words and the longest word for the following sentences:

Sentence: I bought a hamburger.
Output:
Words: 4
Longest word: hamburger

Sentence: This makes plenty of sense.
Output:

Sentence: This makes plenty of sense.
Words: 5
Longest word: plenty

Note that the model included the sentence in the output even though the example output did not contain it. This illustrates that one-shot prompting isn't perfect: sometimes additional examples are needed to clarify expectations.

Few-shot prompting

Few-shot prompting provides multiple examples (typically 2-5) to more clearly establish the pattern. More examples generally lead to better consistency:

Provide the number of words and the longest word for the following sentences:

Sentence: I bought a hamburger.
Output:
Words: 4
Longest word: hamburger

Sentence: This makes plenty of sense.
Output:
Words: 5
Longest word: plenty

Sentence: Recursion in bugs -- another day in parasite.
Output:

Words: 8
Longest word: recursion

With two examples, the model better understands that the output should not include the sentence itself.
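
When the examples live in code, a few-shot prompt like the one above can be assembled from a list of example triples. A minimal sketch, with the example data taken from above and all names chosen for illustration:

# Build a few-shot prompt from (sentence, word count, longest word) examples.
examples = [
    ("I bought a hamburger.", 4, "hamburger"),
    ("This makes plenty of sense.", 5, "plenty"),
]

def build_few_shot_prompt(new_sentence: str) -> str:
    parts = [
        "Provide the number of words and the longest word for the following sentences:",
        "",
    ]
    for sentence, count, longest in examples:
        parts.append(f"Sentence: {sentence}")
        parts.append("Output:")
        parts.append(f"Words: {count}")
        parts.append(f"Longest word: {longest}")
        parts.append("")
    parts.append(f"Sentence: {new_sentence}")
    parts.append("Output:")
    return "\n".join(parts)

print(build_few_shot_prompt("Recursion in bugs -- another day in parasite."))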


Structuring in-context learning prompts

Several techniques help structure in-context learning prompts:

Use clear delimiters: Keywords like “Output:”, “Answer:”, or “Result:” help separate inputs from outputs. Alternatively, special tokens like ### or ----- can mark boundaries:

Provide the number of words and the longest word for the following sentences:
#####
Sentence: I bought a hamburger.
#####
Words: 4
Longest word: hamburger
#####
Sentence: This makes plenty of sense.
#####
Words: 5
Longest word: plenty
#####
Sentence: Recursion in bugs -- another day in parasite.
#####

Words: 8
Longest word: recursion
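
The same builder idea works with explicit delimiters: keep each input and output as its own block and join the blocks with the chosen boundary token. A small sketch, assuming ##### as the delimiter:

# Join example blocks with an explicit "#####" delimiter, mirroring the prompt above.
DELIMITER = "#####"

blocks = [
    "Provide the number of words and the longest word for the following sentences:",
    "Sentence: I bought a hamburger.",
    "Words: 4\nLongest word: hamburger",
    "Sentence: This makes plenty of sense.",
    "Words: 5\nLongest word: plenty",
    "Sentence: Recursion in bugs -- another day in parasite.",
    "",  # trailing delimiter invites the model to fill in the answer
]

prompt = ("\n" + DELIMITER + "\n").join(blocks)
print(prompt)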


Provide diverse examples: If examples are too similar, the model might overfit to specific patterns. Varied examples help it generalize better.

Order matters: Models can be sensitive to example order. If performance is inconsistent, try reordering examples.

Balance quantity and context length: More examples generally improve performance, but they also consume context space. Find the balance that works for your use case.
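
One way to keep the context-length side of this trade-off in view is to count tokens before sending the prompt. A minimal sketch, assuming the tiktoken library; the encoding name cl100k_base is an assumption and should be matched to the model you actually use.

# Rough token count for a prompt, assuming the tiktoken library is installed.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

prompt = (
    "Provide the number of words and the longest word for the following sentences:\n\n"
    "Sentence: I bought a hamburger.\n"
    "Output:\n"
    "Words: 4\n"
    "Longest word: hamburger\n\n"
    "Sentence: This makes plenty of sense.\n"
    "Output:\n"
)

print(f"Prompt length: {len(encoding.encode(prompt))} tokens")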
