What happened to the AI winter debates?
Twelve months ago, we were reading intense discussions between LLM advocates and critics (should I call them denialists?) about whether LLMs would ever achieve AGI. Skeptics argued that inherent limitations of training, such as finite data and computational resources, were already causing improvements in LLM performance to plateau and would continue to do so.
This debate now seems mostly over. It appears to have ended around the time the first reasoning models became available, and almost everyone now seems to assume that AGI is imminent. On one hand, these models appear to sidestep the inherent limitations of LLM training through some clever post-training techniques; on the other hand, those underlying limitations have not gone away!
You could argue that we will continue to invent new techniques to overcome traditional training bottlenecks and that these will be enough to achieve AGI, but that seems mostly like blind techno-optimism to me.
We need more literature on this!