This visualization shows how an LLM processes input text to predict the next word:
- The input text is processed through multiple neural network layers (shown in the network visualization)
- Each potential next word is assigned a log probability score
- The green bars show the relative likelihood of each candidate being the next token
- Log probability scores are always zero or negative; scores closer to 0 (i.e., higher) indicate higher likelihood
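
The scoring above can be sketched in a few lines of Python. The token strings and log probability values here are hypothetical, but the math is the general rule: a log probability is recovered as a regular probability via `exp`, and the candidate whose score is closest to 0 is the most likely:

```python
import math

# Hypothetical log probabilities for candidate next tokens.
# Log probs are <= 0; values closer to 0 mean "more likely".
logprobs = {" the": -0.41, " a": -1.90, " an": -3.25, " this": -4.10}

# Convert each log probability back to an ordinary probability in (0, 1].
probs = {tok: math.exp(lp) for tok, lp in logprobs.items()}

# The most likely next token is the one with the highest (least negative) score.
best = max(logprobs, key=logprobs.get)  # " the" in this example
```

Note that `exp(-0.41) ≈ 0.66` while `exp(-4.10) ≈ 0.017`, which is why a small difference in log probability corresponds to a large difference in bar height.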