🧐 AI and epistemic integrity
A new public good for the Exponential Age
Who attacked the hospital? What version of events did your social media feed show you? How did it change over time? Did you trust traditionally reliable sources like Reuters and AP, which were very quick to report—perhaps too quick? Did you watch the headlines of the New York Times evolve over the course of the night? Or did you wait for open-source analysts to start to hypothesise, and then deepen their understanding as dawn broke and satellites flew overhead? Or had you already made up your mind?
There is the fog of war, and then there is a world where many different forces converge: the need to break stories, the hunger for clicks, the shaping of distribution algorithms, and the ease with which any side (or just random chaos merchants) can use AI tools to produce any kind of propaganda.
In a world already shaken by declining trust, polarisation (both a consequence and a cause of that decline), geopolitical meddling and a broken media business model, AI may make things worse. Certainly, AI leaders like Mustafa Suleyman and Stuart Russell have been among the many voices arguing as much.
Introducing epistemic integrity
I’ve been toying with the idea of epistemic integrity for a while. It is a measure, and a public good, reflecting the degree to which a society maintains robust and resilient systems for ensuring the integrity of its facts and knowledge.