Hi all,
We invited one of the leading experts working at the intersection of organisational design and AI-augmented collective intelligence, Exponential View member Gianni Giacomelli, to answer this question: Will AI dull or sharpen our minds? He answers it in two parts: in Part 1, Gianni assesses the impact on individual and collective intelligence; in Part 2, he takes us through the solutions available to organisations today to prevent and mitigate potential risks. Part 2 will be sent out shortly after this email.
Gianni’s career spans over 25 years in innovation leadership positions, including C-level at large tech and professional services firms — as well as in academic research and development with renowned labs including the MIT Center for Collective Intelligence.
You’re in for a treat — we opened Part 1 to all readers of Exponential View, so feel free to forward this email to your colleagues and friends.
Will AI sharpen or dull our minds?
Part 1: The challenge
By Gianni Giacomelli
AI is percolating into our economy and society, and it surrounds humans in a way it never did before. It has become ambient. Is that good for our intelligence?
Some highlight the risks. The FT’s Tim Harford recently asked “will we be ready” to assist the AI when it needs our judgement to make a decision? Or will we get stuck in the “paradox of automation,” where humans lose the ability to intervene when AI systems need us to? Some scenarios are benign, but many others are existential: pilots over-relying on automated flight systems, for example, only to crash the plane when the computer goes dark (see the tragic case of Air France 447).
In this first part of my commentary, I will break the question down into two:
What is the risk for the individual? (a) The risk of becoming less attentive, less critical, less creative, less proactive? (b) The risk of no longer developing some foundational skills? Is AI going to deprive us of some learning by doing?
What is the risk for our collective intelligence? This is not about the average or sum total of our individual intelligences, but rather about the emergent capabilities of networks of people and assets (including machines) that behave collectively in ways that show intelligence above and beyond that of their individual components. Is that going to improve, or worsen?
Individual risks, and rewards
It stands to reason that some of the downside risk is real. But is it inevitable? And what is the upside?
In the past, the net effect of introducing new technology has been economically (and typically socially) positive in the long run. For sure, there can be huge volatility, and indeed dislocation, that sometimes lasts a long time. In exponential scenarios, with potential systemic instability, the past may not automatically be a good predictor of the future.
The research doesn’t seem to be fully settled, but some exists, and we can frame the problem with a few examples:
The invention of agriculture didn’t make individual people smarter than hunter-gatherers. Some research even indicates that the size of our individual brain might have shrunk as our collective one, emerging from our societies’ networks, grew. BUT: without agriculture, the world’s society would likely be more primitive, and most of us wouldn’t want that world today.
The introduction of the printing press might have reduced most people’s ability to recite books by heart, and even contributed to the disappearance of jobs such as that of the professional storyteller. But the effect on individuals (printed materials aid cognition) and societies (knowledge management) was a net positive.
Taxi drivers in London, after the introduction of GPS, didn’t have the same quality of spatial reasoning (and even their brain structure changed). BUT: did that make them worse taxi drivers? It seems to have helped less experienced drivers become more effective.
Typewriting was bad for handwriting (and handwriting is likely related to some level of creativity), BUT that was more than offset by other gains: by some accounts, typewriting saved forty minutes out of every hour compared with the pen. Automated spelling correction is increasingly making us unable to spell-check things thoroughly on our own, BUT it allows us to write more.
A study on the use of robots to assist baseball umpires shows that the combined human-machine duo outperforms humans alone, especially lower-skilled ones. Humans who stop receiving assistance after using the robots seem unable to get back to their original skill levels. BUT: the introduction of robots also makes the game less acrimonious, with fewer disputes and ejections. And a recent study on the use of computer vision in tennis showed that human umpires exercise better judgement when the technology is deployed alongside them.
Even where humans lost the battle, as in playing Go, evidence shows that the machines’ superiority pushed up the quality of the average professional Go player. After all, a game is supposed to make us better, and in this case, AI competition did.
What about not developing some foundational skills?
Is AI depriving us of learning by doing?
How do we create stepping stones in some professions when machines do a lot of the entry-level work?
Consider modern finance, legal, and consulting professionals who haven’t developed, respectively, the algebra, writing, or handwritten storytelling skills of their predecessors. Does that make them less intelligent, or did it rather force them to develop skills that build on those machines, and to spend more time on other tasks, such as interfacing with their stakeholders?
One transferable example comes from an unexpected place. About ten years ago, there was a big concern in the finance and accounting community that finance operations jobs were increasingly centralised in low-cost locations or outsourced, which meant that future Chief Financial Officers (CFOs) wouldn’t grow up professionally by doing low-level work and then moving up. Ten years later, we don’t talk about that so much. For sure, some of the old skills, like the ability to spot mistakes in accounting systems, might have dwindled. But exception management, including its data mining and analytics component, rather than the daily running of operations, is now where finance executives get trained for the top job. And indeed, they learn how to have separate organisations run industrialised operations, as if they had their own supply chain. Aspiring CFOs also have plenty of room for other capabilities: crafting and executing strategy, sustainability, and partnering more closely with their peers and their organisations in running the business, as well as, of course, learning how to use advanced analytics and AI. Those who have embraced the change now thrive.
Humans have historically adapted to the introduction of new technological tools by developing new capabilities that complement those tools and push productivity, writ large, higher. At least, they have done so thus far, in the long run.
What about the collective brain?
The collective intelligence side of the story shouldn’t be conflated with the previous one. From the printing press to the telephone, from email to the internet, and from mobile phones to Google, the introduction of collective-intelligence-enhancing architecture has historically enabled an explosion of collaboration and substantially reduced the time needed to access new knowledge. As a result, our knowledge graphs, with both content and people as nodes, have changed: their edges now connect more ideas than ever. At parity of individual intelligence, on balance, that has made us, and certainly could make us, collectively smarter.
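To make the knowledge-graph metaphor concrete, here is a minimal sketch in Python using the networkx library. It is purely illustrative: the node names, and the “index” node standing in for a technology like Google, are assumptions for the example, not anything from the research above. The point is the topology: one shared access layer shortens the path between any person and any idea.

```python
import networkx as nx

# Toy knowledge graph: people and content as nodes, edges as paths to knowledge.
G = nx.Graph()
G.add_nodes_from(["alice", "bob"], kind="person")
G.add_nodes_from(["paper_A", "paper_B", "paper_C"], kind="content")

# Before a shared index exists, knowledge flows only through personal ties.
G.add_edges_from([("alice", "paper_A"), ("alice", "bob"), ("bob", "paper_B")])
print(nx.has_path(G, "alice", "paper_C"))              # False: out of reach
print(nx.shortest_path_length(G, "alice", "paper_B"))  # 2 hops, via bob

# A shared index node (a stand-in for a search engine) links to every node.
G.add_node("index", kind="tool")
new_edges = [("index", n) for n, d in G.nodes(data=True) if d["kind"] != "tool"]
G.add_edges_from(new_edges)
print(nx.shortest_path_length(G, "alice", "paper_C"))  # now 2 hops, via index
```

Each new connective layer, from the printing press to Google, raises the number of ideas reachable within a few hops, without any individual node getting smarter.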
At the same time, algorithmic curation optimised for human tendencies has possibly deteriorated our ability to function cohesively as a society (see the polarisation of social media discourse and the at least partially related social polarisation, especially in the US), and likely impaired the resulting decision-making (political governance, or the lack thereof, comes to mind). The interplay between our godlike technology, Palaeolithic brains and mediaeval institutions¹ might very well not lead to net-higher collective intelligence today, at least in the short run. There is a real risk of dulling our supermind right here. We will know even more after this year’s many elections.
Enter generative AI, with its alluring confidence and its ability to spin gratifying new artefacts in seconds, effortlessly. There is a real risk that many of us, too often, will get hypnotised, lower our guard and not exercise quality control. Some evidence points to humans “falling asleep at the wheel”: when the LLM made mistakes, BCG consultants with access to the tool were 19 percentage points more likely to produce incorrect solutions. And the range of ideas generative AI produces out of the box is not as good as what humans, collectively, would produce. Microsoft recently published a good literature review of the dangers of overreliance on AI.
It is ours to shape
So the risks are real, but they don’t seem unavoidable. In the next essay, I will explore the solutions available to us today, and some frameworks to keep developing them as capabilities, human and technological, change.
In Part 2, I will talk about how to scale humans in the loop, how to support people at different levels of capability, and how to design smarter networks of humans and machines so that we get collectively, not just individually, smarter.
You will receive Part 2 shortly after this email.
About Gianni Giacomelli: My teams and I envision and design organisations, services, and processes that use AI and other digital tools to transform work: from innovation to operations, from the front to the back office. To do so, I employ an approach derived from our MIT work on Collective Intelligence as an organisational design principle, augmented by AI and other digital technologies. For decades, I have led innovation efforts where digital technology attacks complex business problems and their underlying processes and organisations. My career spans over 25 years in innovation leadership positions, including C-level at large, stock-market-listed firms in tech and professional services, and collaboration with world-leading academics.
¹ This quote, “The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions, and godlike technology,” is attributed to American sociobiologist Edward O. Wilson.
It is interesting that the DALL-E image of the worker in the cubicle has a screen that says "ChaGppt", literally.