The Surprising Depth of Prediction
When we say a language model is "just predicting the next word," we're telling the truth while missing the entire story. It's like explaining a first kiss as "just moisture exchange": technically accurate, magnificently beside the point, and vaguely horrifying to anyone who has felt their heart skip that particular beat. These "just autocomplete" dismissals are becoming increasingly common. I want to explore how poorly formulated that view is, and why it matters to soberly assess what an LLM really is.
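For concreteness, here is the mechanical sense in which a model "predicts the next word": it assigns a score to every word in its vocabulary and turns those scores into a probability distribution. This is a minimal sketch; the vocabulary and scores below are invented for illustration, and a real LLM computes its scores with a neural network over billions of parameters.

```python
import math

def softmax(logits):
    # Convert raw scores into a probability distribution that sums to 1.
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical vocabulary and scores for the context "The cat sat on the ..."
vocab = ["mat", "moon", "idea", "the"]
logits = [3.2, 1.1, -0.5, 0.2]

probs = softmax(logits)
prediction = vocab[probs.index(max(probs))]
print(prediction)  # the highest-probability word
```

The dismissive framing stops at this loop; the interesting question is what internal machinery a model must build in order to make these scores accurate across all of language.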