
LLM Next Word Prediction

This visualization shows how an LLM processes input text to predict the next word:

  1. The input text is processed through multiple neural network layers (shown in the network visualization)
  2. Each potential next word is assigned a log probability score
  3. The green bars show the relative likelihood of each word being the next token
  4. Log probability scores are negative, so scores closer to 0 indicate higher likelihood; a code sketch after this list shows how these scores are computed
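
As a concrete illustration of steps 1 through 4, here is a minimal sketch using the Hugging Face transformers library with GPT-2. The model choice, the example input text, and the top-5 cutoff are assumptions for illustration; the visualization does not state which model or settings it actually uses.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load a small causal language model (GPT-2 is an assumption, chosen for illustration).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Example input text (an assumption, not taken from the visualization).
text = "The cat sat on the"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # The input passes through the model's transformer layers;
    # logits has shape (batch, sequence_length, vocab_size).
    logits = model(**inputs).logits

# Take the logits at the last position and convert them to log probabilities,
# one score per vocabulary entry (each potential next word).
next_token_logits = logits[0, -1]
log_probs = torch.log_softmax(next_token_logits, dim=-1)

# Show the top candidate next tokens; scores closer to 0 are more likely.
top = torch.topk(log_probs, k=5)
for log_prob, token_id in zip(top.values, top.indices):
    token = tokenizer.decode(token_id.item())
    print(f"{token!r}: log prob = {log_prob.item():.3f}")

Running this prints the five most likely next tokens with their log probability scores. All of the scores are negative, and the token whose score is closest to 0 is the model's top prediction, which matches the ordering shown by the green bars.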