I’ve been trying to decode the myriad debates and discussions surrounding generative AI. The big question everyone seems to grapple with is whether these AI models genuinely understand the world, relationships, and logic, and whether they can solve problems in a structured manner. I’ve encountered a clear division among experts, with some convinced that these models simply lack the foundation for logical reasoning or detailed planning. See my full reading list at the end of the post to go deeper.
The debates reminded me of a witty joke from Mark Haddon’s novel “The Curious Incident of the Dog in the Night-Time”, involving an economist, a logician, and a mathematician on a train journey through Scotland.