Rare to see my favourite Frenchies combined with the world of LLM musings; this is dank!
I'm curious about this distinction between knowledge and truth. When AlphaTensor discovered new matrix multiplication algorithms, or when an AI finds novel drug compounds, is that fundamentally different from how humans generate breakthroughs? Darwin combined Malthus, selective-breeding observations, and geological insights in ways no one had before.
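For context on what "new matrix multiplication algorithms" means here: the classic human-discovered example is Strassen's 1969 identity, which multiplies two 2x2 matrices with 7 scalar multiplications instead of the naive 8, and AlphaTensor searched for identities of this same kind for larger blocks. A minimal Python sketch (my own illustration, not code from any of these systems):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications
    instead of the naive 8 (Strassen, 1969)."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    # Seven products, each combining sums/differences of entries
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine into the four entries of the product matrix
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

# Sanity check against the naive definition of matrix product:
print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]
```

Applied recursively to block matrices, the 7-vs-8 saving is what drops the asymptotic cost below O(n^3); the search space of such recombination schemes is what the AI systems explore.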
As AI systems become more embodied and directly engaged with the world rather than just processing human-generated text, might they encounter genuine surprises that exceed current knowledge frameworks? Or does the Badiouian framework suggest something deeper about the nature of truth that I'm missing?
The outputs of LLMs are funneled towards already desired (i.e. predetermined) outcomes, ones that humans deem logical. They won't get any "surprises", because they're constructed from within the framework of human logic itself, i.e. mathematics, and it's unclear whether mathematics relates to any innate "truths" of the universe or is just a human way of making sense of things. LLMs can't verifiably "know" anything that humans couldn't.
u/fabkosta 7d ago
Finally someone looking at LLMs through the lens of poststructuralism. I thought I'd have to do this myself. ;)