Chapter 8: Decoding Strategies
This chapter covers various decoding strategies. You will learn about deterministic methods (greedy search, beam search, contrastive search, contrastive decoding) and stochastic methods (top-k sampling, top-p sampling, sampling with temperature). The chapter also covers evaluation metrics for open-ended text generation.
-
Chapter 08.01: What is Decoding?
Here we introduce the concept of decoding. Given a prompt, how does a generative language model produce text? At each step the model outputs a probability distribution over all tokens in its vocabulary. The rule by which that probability distribution is turned into the next token is what is called a decoding strategy.
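The idea can be sketched in a few lines. This is a minimal illustration with a made-up five-token vocabulary and hand-picked logits (not a real model): the softmax turns the model's raw scores into a distribution, and a decoding strategy is any rule that maps that distribution to a token.

```python
import numpy as np

# Toy vocabulary and hand-picked logits standing in for one step of a
# hypothetical language model (not real model output).
vocab = ["the", "cat", "sat", "mat", "<eos>"]
logits = np.array([2.0, 1.0, 0.5, 0.2, -1.0])

# Softmax turns logits into a probability distribution over the vocabulary.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# A decoding strategy is any rule mapping this distribution to the next
# token; the simplest possible rule picks the most probable one.
next_token = vocab[int(np.argmax(probs))]
print(next_token)  # "the"
```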
-
Chapter 08.02: Greedy & Beam Search
Here we introduce two deterministic decoding strategies: greedy search and beam search. Deterministic means that no sampling is involved, so the same prompt always produces the same output. Greedy search always chooses the single token with the highest probability, while beam search keeps track of multiple candidate sequences (beams) at each step and finally returns the most probable one.
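The difference between the two can be made concrete with a tiny hand-built example. The two-step "model" below (fixed probability tables, not a trained network) is constructed so that the greedy choice at step one leads to a lower-probability sequence overall, which beam search avoids by keeping both candidates alive:

```python
# Hypothetical two-step toy model over the vocabulary {0, 1}.
# STEP1[t] = P(first token = t); STEP2[t1][t2] = P(second = t2 | first = t1).
STEP1 = [0.6, 0.4]
STEP2 = {0: [0.55, 0.45], 1: [0.9, 0.1]}

def greedy_decode():
    # Always extend with the locally most probable token.
    t1 = max(range(2), key=lambda t: STEP1[t])
    t2 = max(range(2), key=lambda t: STEP2[t1][t])
    return [t1, t2], STEP1[t1] * STEP2[t1][t2]

def beam_search(width=2):
    # Keep the `width` most probable partial sequences, then extend each
    # and return the best complete sequence.
    beams = sorted(([t], STEP1[t]) for t in range(2))
    beams = sorted(beams, key=lambda b: b[1], reverse=True)[:width]
    candidates = [(seq + [t], p * STEP2[seq[0]][t])
                  for seq, p in beams for t in range(2)]
    return max(candidates, key=lambda c: c[1])

print(greedy_decode())  # ([0, 0], p = 0.6 * 0.55 ≈ 0.33)
print(beam_search())    # ([1, 0], p = 0.4 * 0.9  ≈ 0.36)
```

Greedy search commits to token 0 (probability 0.6) and ends at sequence probability ≈0.33, while beam search also keeps the 0.4 branch and finds the globally better sequence with probability ≈0.36.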
-
Chapter 08.03: Stochastic Decoding & CS/CD
In this chapter you will learn about methods beyond the simple deterministic decoding strategies. We introduce sampling with temperature, where a temperature parameter rescales the logits inside the softmax; top-k [1] and top-p [2] sampling, where the next token is drawn from a truncated set of the most probable tokens; and finally contrastive search [3] and contrastive decoding [4].
-
Chapter 08.04: Decoding Hyperparameters & Practical considerations
In this chapter you will learn how to use the different decoding strategies in practice. When using models from Hugging Face, you choose the decoding strategy by passing the corresponding hyperparameters to the generate method of those models.
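Which strategy generate uses follows from the combination of keyword arguments you pass. The helper below is our own simplified sketch of that selection logic (the real dispatch in the transformers library handles more cases); the parameter names num_beams, do_sample, top_k, and penalty_alpha are actual generate arguments.

```python
def pick_strategy(num_beams=1, do_sample=False,
                  penalty_alpha=None, top_k=None, **_):
    # Simplified, illustrative version of how Hugging Face generate()
    # keyword arguments map to a decoding strategy.
    if penalty_alpha is not None and top_k is not None and not do_sample:
        return "contrastive search"
    if num_beams == 1:
        return "sampling" if do_sample else "greedy search"
    return "beam-search sampling" if do_sample else "beam search"

# e.g. model.generate(**inputs, do_sample=True, top_p=0.9, temperature=0.7)
# would use multinomial sampling; num_beams=4 alone selects beam search.
```

With the default arguments (num_beams=1, do_sample=False) generation is greedy, which is worth keeping in mind when a model's outputs look unexpectedly repetitive.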
-
Chapter 08.05: Evaluation Metrics
Here we answer the question of how to evaluate generated outputs in open-ended text generation. We first explain BLEU [1] and ROUGE [2], which are metrics for tasks with a gold reference. Then we introduce diversity, coherence [3] and MAUVE [4], which are metrics for tasks without a gold reference, such as open-ended text generation. You will also learn about human evaluation.
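Of these metrics, diversity is simple enough to sketch directly. A common building block is distinct-n, the fraction of n-grams in a generation that are unique; the whitespace tokenization below is a simplifying assumption for illustration:

```python
def distinct_n(text, n):
    # Fraction of n-grams in the text that are unique; degenerate,
    # repetitive generations score low. Whitespace tokenization is a
    # simplification for this sketch.
    tokens = text.split()
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

print(distinct_n("the cat sat on the mat", 2))  # 5 bigrams, all unique -> 1.0
print(distinct_n("so so so so", 1))             # 1 unique of 4 unigrams -> 0.25
```

Aggregating distinct-n over several values of n (for example n = 2, 3, 4) yields a single diversity score that penalizes the repetition loops that greedy decoding is prone to.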