We invited one of the leading experts working at the intersection of organisational design and AI-augmented collective intelligence, EV member Gianni Giacomelli, to answer this question: Will AI dull or sharpen our minds?
In Part 1, Gianni breaks down the risks to individual and collective capacity.
In Part 2, you will learn how to approach managing the risks and designing solutions for a new type of emergent intelligence that will permeate organisations.
Gianni’s career spans more than 25 years in innovation leadership positions, including C-level roles at large tech and professional services firms, as well as academic research and development with renowned labs such as the MIT Center for Collective Intelligence.
Enjoy!
Will AI sharpen or dull our minds?
Part 2: The solution is ours to shape
By Gianni Giacomelli
In the last essay, I probed AI’s potentially profound influence on human cognition, both individual and collective, weighing risks such as diminished attention and creativity against potential enhancements. I contrasted historical tech revolutions with today’s AI landscape, revealing a complex interplay between augmented abilities and possible mental atrophy, and argued that this moment of growing reliance on AI demands a deliberate approach: harnessing machine intelligence without losing our cognitive edge.
In Part 1, I set out to answer two questions:
What is the individual risk: (a) that we become less attentive, less critical, less creative, less proactive; and (b) that we stop developing some foundational skills altogether?
What is the risk for our collective intelligence? Is it going to improve or worsen?
There are at least three reasons to believe that these downsides are not destiny, and that they can be mitigated and their duration cut short.
Humans and loops — new lessons in scalability
First, design for the infamous “human-in-the-loop”. For one, machines can be calibrated to ask questions: Sal Khan’s Khanmigo, Google’s AMIE, and Perplexity.ai already do this to an extent. AI-powered workflows can also be built with scalable human intervention as part of them, designed around human attention and capabilities. We have done this since the Lean-management era with mind-numbing industrial processes, where machines otherwise tend to force humans into the role of a “residual robot”. More generally, we can and should design better “cyborgs”, in the words of a recent study (BCG, Harvard, Wharton, Warwick, MIT), drawing on, among other things, decades of human-computer interaction science and practice. To get there, though, we also need to lift human skills, so that we can ride the new wave instead of being submerged by it. The up/reskilling infrastructure isn’t fully there yet, and that is a whole separate opportunity we should heed.
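To make “scalable human intervention” concrete, here is a minimal Python sketch of one common pattern, a confidence-gated review loop. The `ai_step`, `human_review`, and threshold values are hypothetical stand-ins for illustration, not any specific product’s API:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    confidence: float  # model's self-reported confidence, 0..1

def ai_step(prompt: str) -> Draft:
    # Stand-in for a real model call; any LLM API could slot in here.
    return Draft(text=f"Draft answer to: {prompt}", confidence=0.62)

def human_review(draft: Draft) -> Draft:
    # The escalation point: a person edits or approves the machine's draft.
    print(f"[review queue] {draft.text!r} (confidence {draft.confidence:.2f})")
    return draft  # in practice, return the human-edited version

def answer(prompt: str, review_threshold: float = 0.8) -> str:
    draft = ai_step(prompt)
    # Scalable human intervention: only low-confidence output consumes
    # scarce human attention; the rest flows straight through.
    if draft.confidence < review_threshold:
        draft = human_review(draft)
    return draft.text

print(answer("Summarise the Q3 churn numbers"))
```

The design choice is the point: human attention is treated as the scarce resource, and the threshold dials how much of it the workflow consumes, rather than leaving the human as a “residual robot” rubber-stamping everything.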
Lift humans (and machines) from where they are
Second, AI augments intelligence in different ways depending on where the user (whether machine or human) sits on a capability curve, as illustrated in the chart below. The ability of people to innovate can be conceptually represented as a (roughly) normal distribution of individual and organisational ability. The left side is where people struggle even to apply existing knowledge to their problems, often because they don’t have access to relevant, contextual information. The right side is where the “unknown-unknowns” are the real limit: it is where people operate at the frontier of their field and hit the limits of our current resources as a society and economy.