Chapter 04.02: Measuring Performance

Measuring the performance of language models is challenging: language understanding and generation are partly subjective, and the models are applied to a wide variety of tasks. Performance can be assessed with several metrics, including:

  1. Perplexity: Measures the model’s uncertainty in predicting the next word in a sequence; lower perplexity indicates better performance (see the sketch after this list).

  2. Accuracy: Measures the proportion of correct predictions made by the model on a classification task (see the sketch after this list).

  3. BLEU (Bilingual Evaluation Understudy): Evaluates the quality of machine-translated text by comparing it to one or more reference translations (see the sketch after this list).

  4. ROUGE (Recall-Oriented Understudy for Gisting Evaluation): Measures the overlap between system-generated summaries and reference summaries (see the sketch after this list).

Each metric has its limitations, however, and may not fully capture a model’s performance across all tasks and domains, which highlights the difficulty of comprehensive evaluation in natural language processing.
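As a minimal sketch, perplexity can be computed as the exponential of the negative average log-probability the model assigns to each token; the per-token probabilities below are hypothetical stand-ins for a model's actual outputs.

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp(-mean natural-log probability per token)."""
    avg_log_prob = sum(token_log_probs) / len(token_log_probs)
    return math.exp(-avg_log_prob)

# Hypothetical per-token probabilities for a 4-token sequence.
log_probs = [math.log(p) for p in (0.25, 0.10, 0.50, 0.05)]
print(round(perplexity(log_probs), 2))  # lower values indicate a better fit
```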
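Accuracy is simply the fraction of predictions that match the gold labels; the sketch below uses made-up sentiment-classification outputs.

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the gold labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical sentiment-classification predictions vs. gold labels.
print(accuracy(["pos", "neg", "pos", "pos"],
               ["pos", "neg", "neg", "pos"]))  # 0.75
```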
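Sentence-level BLEU can be computed with NLTK, assuming the `nltk` package is available; the reference and candidate sentences below are hypothetical, and smoothing is applied because short sentences often have no higher-order n-gram matches.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "is", "on", "the", "mat"]]  # list of reference translations
candidate = ["the", "cat", "sat", "on", "the", "mat"]   # system output

# Smoothing avoids zero scores when higher-order n-grams have no overlap.
score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(round(score, 3))  # closer to 1.0 means closer to the reference
```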
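ROUGE overlap can be computed with the `rouge-score` package (one of several implementations; using it here is an assumption, not a requirement of the metric); the reference and system summaries below are hypothetical.

```python
from rouge_score import rouge_scorer

# ROUGE-1 counts unigram overlap; ROUGE-L uses the longest common subsequence.
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(
    "the cat was found under the bed",  # reference summary
    "the cat was under the bed",        # system-generated summary
)
print(scores["rouge1"].recall, scores["rougeL"].fmeasure)
```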

Lecture Slides