Found 2 discussions
Unpacking Why LLMs Struggle with Basic Counting: Tokenization and AI's 'Cocktail Party' Intelligence
September 4, 2025
Discover why advanced AI models like ChatGPT can't easily count to a million. This discussion explores the impact of tokenization and the gap between sophisticated pattern matching and true general intelligence.
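The tokenization point is easy to demonstrate: large numbers are rarely single tokens, so the model never "sees" individual digits the way a counter would. Below is a minimal sketch, assuming the open-source tiktoken library is installed; the encoding name cl100k_base and the sample strings are illustrative, and exact splits vary by vocabulary.

```python
# Minimal sketch: show how a BPE tokenizer splits numbers.
# Assumes `pip install tiktoken`; splits shown are vocabulary-specific.
import tiktoken

# GPT-4-style BPE vocabulary; other models use different merges.
enc = tiktoken.get_encoding("cl100k_base")

for text in ["7", "999999", "1000000", "one million"]:
    token_ids = enc.encode(text)
    # Decode each token id individually to reveal the chunk boundaries.
    pieces = [enc.decode([t]) for t in token_ids]
    print(f"{text!r} -> {len(token_ids)} token(s): {pieces}")
```

Because a string like "1000000" is split into arbitrary multi-digit chunks rather than individual digits, digit-level operations such as counting or incrementing don't map cleanly onto what the model actually receives as input.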
Users have observed AI models like ChatGPT and Gemini displaying 'thoughts' in non-English languages. This discussion explores why this happens, linking it to multilingual training data, internal token efficiency, and research findings that suppressing non-English reasoning can even reduce performance.
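The token-efficiency angle can likewise be probed directly: the same sentence costs a different number of tokens in different languages, depending on the tokenizer's training mix. A minimal sketch, again assuming tiktoken; the sample sentences are illustrative, and which language comes out cheapest depends entirely on the vocabulary.

```python
# Minimal sketch: compare token counts for the same meaning
# expressed in different languages. Assumes `pip install tiktoken`.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# Roughly equivalent sentences; counts differ by tokenizer.
samples = {
    "English": "Let me think about this step by step.",
    "Chinese": "让我一步一步地思考这个问题。",
    "German": "Lass mich Schritt für Schritt darüber nachdenken.",
}

for lang, text in samples.items():
    print(f"{lang}: {len(enc.encode(text))} tokens")
```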