‘Out of distribution’: Sridhar Vembu explains why LLMs aren’t truly creative



Zoho founder Sridhar Vembu weighed in on the limitations of large language models (LLMs) in a post on X, arguing that true creativity lies outside the boundaries of training data—and that’s where LLMs often falter.

“True creative work is ‘out of the training distribution’ work,” Vembu wrote, contrasting how LLMs operate with game-playing AI engines like those used in chess and Go. He noted that these engines, often powered by Monte Carlo Tree Search (MCTS), are capable of delivering genuinely creative moves because their foundational mechanics differ from those of LLMs.

“Chess or Go engines do come up with creative moves,” he said, implying that the structure of those systems enables exploration beyond rote learning. “The foundational approach they use, Monte Carlo Tree Search, is different from how LLMs work and that may explain why LLMs don’t do too well ‘out of their training distribution’.”
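Vembu doesn't spell out the mechanics, but the contrast hinges on how MCTS works: rather than predicting the most likely next token, it repeatedly simulates play forward from the current position and steers toward moves whose simulations win, which lets it surface lines no training corpus contains. A minimal sketch of MCTS with UCB1 selection on a toy take-away game (a hypothetical example for illustration, not any engine Vembu references) shows the four canonical steps of select, expand, simulate, and backpropagate:

```python
import math
import random

# Toy game: players alternately remove 1-3 stones from a pile;
# whoever takes the last stone wins.
TAKE = (1, 2, 3)

def legal_moves(stones):
    return [m for m in TAKE if m <= stones]

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones = stones              # stones remaining at this node
        self.player = player              # player to move here (0 or 1)
        self.parent = parent
        self.move = move                  # move the parent made to reach this node
        self.children = []
        self.untried = legal_moves(stones)
        self.visits = 0
        self.wins = 0.0                   # wins for the player who moved INTO this node

def select(node):
    # Descend via UCB1 until we hit a node with unexpanded moves (or a terminal node).
    while not node.untried and node.children:
        node = max(node.children, key=lambda c:
                   c.wins / c.visits
                   + 1.4 * math.sqrt(math.log(node.visits) / c.visits))
    return node

def expand(node):
    # Add one previously untried child, if any remain.
    if node.untried:
        m = node.untried.pop(random.randrange(len(node.untried)))
        child = Node(node.stones - m, 1 - node.player, parent=node, move=m)
        node.children.append(child)
        return child
    return node

def simulate(stones, player):
    # Random playout to the end; return the winner (who took the last stone).
    while stones > 0:
        stones -= random.choice(legal_moves(stones))
        player = 1 - player
    return 1 - player                     # the player who just moved won

def backpropagate(node, winner):
    while node is not None:
        node.visits += 1
        if winner == 1 - node.player:     # credit the player who moved into this node
            node.wins += 1
        node = node.parent

def mcts_best_move(stones, player=0, iterations=2000):
    root = Node(stones, player)
    for _ in range(iterations):
        leaf = expand(select(root))
        winner = simulate(leaf.stones, leaf.player)
        backpropagate(leaf, winner)
    # Pick the most-visited move, the standard MCTS final choice.
    return max(root.children, key=lambda c: c.visits).move
```

The key property, and the one relevant to Vembu's point, is that the statistics guiding the search come from simulations run at decision time, not from a fixed training set, so the search can discover strong moves it was never shown.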

Vembu pointed out a key distinction: games have fixed rules and clearly defined boundaries—what’s valid or invalid is computable. In contrast, the real world is far less structured, making the application of LLMs to messy, unpredictable environments far more complex.

However, he identified software development as a potential bridge. “Software code has some of the character of games,” he wrote, suggesting that AI models used in coding could benefit from game-engine techniques like tree search to improve performance and creativity.

The comments come as developers, founders, and AI researchers continue to debate the generalization limits of LLMs, especially as they’re deployed in more complex real-world applications. Vembu’s take adds weight to the view that different models—and mindsets—may be needed to truly innovate beyond the data.

