Towards AI
RNNs Cannot Think What Transformers Think Cheaply. ICLR 2026 Proved the Gap Is Exponential.
Author(s): DrSwarnenduAI

Originally published on Towards AI.

For a decade, we asked if RNNs can represent what Transformers represent. We proved they can. We forgot to ask how expensively. That omission just cost us ten years.

“Can our architecture represent everything a Transformer can?” The benchmarks run. The perplexity scores appear. The answer, roughly, is yes. A paper at ICLR…