6 Comments
Marshall Kirkpatrick

I do like this quite a bit, and I have questions about how transferable it is. The primary conclusion of that Shaw and Nave paper seems to be that AI is a System 3 (in a Thinking, Fast and Slow framework) that is structurally compelling. People just keep surrendering, because AI is so authoritative and smooth! Even when they are told the AI is wrong, and they are offered a reward to think for themselves, they are still more apt to surrender than they ought to be! I suppose the systems you're using are intended to change that structural relationship, but it's hard to say how transferable these strategies will be in a world where the vast majority of traditional knowledge workers are NOT paid for original idea generation (imagine if they were!) and where the vast majority of people have far less experience to build on than you do.

Meanwhile, there's last June's MIT study ("Your Brain on ChatGPT"): a four-month EEG study finding that relying on LLMs for essay writing weakens brain connectivity and cognitive engagement, with users showing reduced neural activity, lower essay ownership, and an inability to recall their own work compared to brain-only and search-engine users. And their brains didn't recover quickly when they stopped using the LLMs, either! https://arxiv.org/abs/2506.08872

Now, I love a good piece about how smart people can get smarter, I really do. But I wonder if this isn't at very high risk of a power-law, K-shaped outcome, like social media's impact. Did social media democratize publishing and create access to readers for many people traditionally speaking from the margins? Yes. But did it also contribute to a spiral of authoritarianism, climate retrenchment, and income inequality? Probably yes, that too.

How will we close the exponential gap on cognitive impacts of AI?

Chantal Smith

This is probably one of the biggest, most important, but also most unknowable questions in AI. IMO we'll need to take an active role in designing not just systems (schools, workplaces, etc.) but also our own lives so that we mitigate these side effects.

Pawel Jozefiak

Your distinction between cognitive offloading and cognitive surrender hits something I've been wrestling with while building my own AI agent system.

I gave it a detailed identity layer, memory of past decisions, even knowledge of my working patterns and personality. The agent got dramatically better. But I noticed I started trusting its judgment on things I should be deciding myself. The Stylometer trained on 60,000 words of your prose is a perfect example of strategic offloading.

You're not outsourcing taste. You're outsourcing the mechanical expression of taste you already have. That boundary between "tool that extends me" and "tool that replaces my thinking" is blurrier than most people admit. Protecting the generative space, like you describe with walks and fountain pens, might be the most important skill of this decade.

Sugendran Ganess

This article feels like it ended abruptly. Are you saying idea generation is the only thing you do without an AI, and the rest of the time is spent with the AI scaffold? I would also have loved to hear about what you retain from the ideas that are scaffold-supported. And if it's not much, are you okay with that?

I work in software, and it's clear AI agents are changing what I work on, probably for the better. What I'm trying to work through is how I get my team to digest what the LLM has told them into something they internalise and then communicate outwards. I can already see the seek -> understand -> distill -> communicate loop start to atrophy in some people.

Chantal Smith

I don't think the distinction is that clear-cut; rather, Azeem makes sure to safeguard, from AI, the conditions in which ideas arise.

And thanks for sharing your experience. That seems like a very healthy pattern. We're also trying to find the best ways of digesting and communicating what comes out of the AIs.

Joe McFadden

Thanks, Azeem, for the insight into your current writing process. Your use of synthetic personas sounds very interesting. How are these built?