Anti-BrainrotAI News
LLMs are Lagging Indicators
Jan 21
Hollis Robbins articulates something we have long known: the large language model is not an oracle. It is an archive. It cannot tell you what will be true. It can only tell you what has already been agreed upon.
If you want to think about the future, you must keep your mind unshackled from LLMs. They are trained on past consensus. They speak in the past tense even when they use future words. They will dress up stale ideas to make them seem new.
Read the whole piece.
“An LLM can only ‘learn’ a new concept when it appears often enough in the training corpus to achieve statistical heft. The time until it does is a ‘latency period.’ LLM output functions like a Consumer Price Index, tracking the value of concepts the culture has already purchased. Firms should not depend on algorithmic systems for identifying emerging markets or cultural shifts. In fast-moving sectors like fashion or digital media, novelty will always be ahead of any foundation model. LLMs are not ideal for first-mile prediction or for last-mile specificity.”
The future belongs to those who can still think without the crutch. The question is whether you are training yourself to walk, or training yourself to be carried.