Prompting and Prompt Engineering

Prompting


Learning Objectives

  • You know the concept of prompting and understand that the prompt influences large language model outputs.

Prompting large language models

Interaction with large language models happens through prompting, that is, by writing text that acts as the instruction or starting point for text generation. The prompt is the input that the model uses to generate text; it can contain, for example, a question, a statement, or an instruction, or it can simply be plain text.

As an example of a question, the following prompt asks for the capital of Finland.

What is the capital of Finland?

The capital of Finland is Helsinki.

As an example of a statement, we could claim that Turku is the capital of Finland and see what the output is.

Turku is the capital of Finland.

Actually, Helsinki is the capital of Finland. Turku is one of the country's oldest cities and was historically the capital before Helsinki was designated as the capital in 1812. Turku is known for its rich history, beautiful archipelago, and vibrant cultural scene.

As an example of an instruction, we could ask for a translation of the Finnish sentence “Hei maailma!”.

Please translate the Finnish sentence "Hei maailma!" into English.

The Finnish sentence "Hei maailma!" translates to "Hello world!" in English.
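The prompts above were written in a chat interface, but a prompt can also be sent to a model programmatically. The following is a minimal sketch that assumes the OpenAI Python library, an API key in the OPENAI_API_KEY environment variable, and an illustrative model name; none of these are prescribed by the material.

# A minimal sketch of prompting a large language model programmatically.
# Assumes the openai package is installed and OPENAI_API_KEY is set;
# the model name is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "What is the capital of Finland?"},
    ],
)

# Prints the generated text, e.g. "The capital of Finland is Helsinki."
print(response.choices[0].message.content)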


Text continuation

Large language models generate text based on the input prompt. While the prompts in the above examples contained explicit questions, statements, and instructions, a prompt does not need to contain any of these: the model generates text from whatever input it is given, regardless of its contents.

Large language models have learned patterns from training data. Text generation (and continuation) is based on these patterns and the input prompt.

As an example, below, we provide the prompt “Roses are red, violets are blue, right now I am thinking of …” to the large language model, which then generates the response “a world where dreams come true.”

Roses are red, violets are blue, right now I am thinking of ...

a world where dreams come true.

Adjusting the prompt (typically) changes the output. In the above example, we started with the classic “Roses are red, violets are blue” structure, which the model interpreted as a poem. If we swap “violets” for “hamburgers”, the model still detects a structure that calls for rhyming.

Roses are red, hamburgers are blue, right now I am thinking of ...

Something that rhymes with "blue" and relates to the context you've given. How about "a barbecue"?
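Text continuation is easiest to see with a completion-style request, where the model simply continues the given text instead of answering it as a chat message. The sketch below uses the legacy completions endpoint of the OpenAI Python library; the model name and parameter values are illustrative assumptions.

# A sketch of plain text continuation: the model continues the prompt
# rather than answering it as a chat message. The model name and
# max_tokens value are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="Roses are red, violets are blue, right now I am thinking of",
    max_tokens=20,
)

# Prints the continuation the model sampled for the prompt.
print(response.choices[0].text)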


Non-deterministic outputs

The outputs from large language models are typically non-deterministic, especially when the prompts are not carefully crafted. This means that the same prompt can yield different outputs on different runs.
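This non-determinism stems from sampling during generation and is easy to observe by sending the same prompt several times, as in the sketch below. The sketch again assumes the OpenAI Python library with an illustrative model name; the temperature parameter controls how much randomness the sampling uses, and the value here is likewise illustrative.

# Sending the same prompt repeatedly usually yields different outputs,
# because the model samples its output tokens. The model name and
# temperature value are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

for _ in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=1.0,
        messages=[
            {"role": "user",
             "content": "Roses are red, violets are blue, "
                        "right now I am thinking of ..."},
        ],
    )
    # Each round typically prints a different continuation.
    print(response.choices[0].message.content)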

When we deviate further from the classic poem structure, the model no longer generates a poem. Below, we have adjusted the prompt further, removing the colors. As we can see, the response no longer rhymes.

Roses are in the garden, hamburgers are on the grill, right now I am thinking of ...

A delightful summer gathering or barbecue party!

The above exploration highlights how words are interpreted in context, which is a key strength of the attention mechanism in large language models. When we modify the prompt enough, the interpreted context becomes completely different.

A single word might be enough to bring us back to the original context. Replacing “hamburgers” with “violets”, as done below, again produces a response that associates the input with a poem.

Roses are in the garden, violets are on the grill, right now I am thinking of ...

It seems like you're playing around with the classic poem structure "Roses are red, violets are blue." If you're looking for a continuation, here's a suggestion:

Roses are in the garden, violets are on the grill,
Right now I am thinking of a thrill.
Adventure calls, with excitement to fulfill,
Embarking on a journey, up every hill.

Above, we started with the classic “Roses are red, violets are blue” structure, which the model interpreted as a poem even when we swapped “violets” for “hamburgers”. Removing the colors moved the model away from the poem structure, and it no longer generated a poem. However, when we replaced “hamburgers” with “violets”, the model again picked up the classic poem structure, even though the colors were absent.

The above examples illustrate how the model interprets the input based on the structure and content of the prompt. By adjusting the prompt, we can guide the model to generate text in a specific style or context.
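One common way to guide the style is to include an explicit instruction alongside the actual prompt, for example as a system message in a chat-style request. The sketch below again assumes the OpenAI Python library; the instruction text and the model name are illustrative assumptions, not part of the material.

# A sketch of guiding the output style with a system message.
# The instruction text and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Always answer with a short rhyming poem."},
        {"role": "user",
         "content": "What is the capital of Finland?"},
    ],
)

# The answer should now arrive in rhyming verse.
print(response.choices[0].message.content)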
