This week marks the 500th edition of the Sunday newsletter. My aim all along has been to show that we live in extraordinary times.
As I mentioned yesterday, I sent the first edition to 20 friends in 2015 and called it Square 33. This was a nod to the chessboard-rice analogy of exponential growth. The first 32 squares on a chessboard involve relatively modest numbers that don’t feel world-changing. But crossing into the 33rd square launches us into the numbers that seem incomprehensible.
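The arithmetic behind the analogy is easy to check. A minimal sketch in Python, assuming only the doubling rule (square n holds 2^(n-1) grains):

```python
# Rice on the chessboard: each square holds double the previous one,
# so square n holds 2**(n-1) grains.
for square in (1, 32, 33, 64):
    grains = 2 ** (square - 1)
    print(f"square {square}: {grains:,} grains")
# Square 32 holds about 2.1 billion grains; square 33 doubles that to
# roughly 4.3 billion; square 64 exceeds 9 quintillion.
```

The jump from square 32 to square 33 is where the totals stop feeling like ordinary quantities.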
We are in square 33.
In today’s retrospective edition, we’ll step back to explore the building blocks of how we understand the Exponential Age. Let’s go!
10 charts to understand the Exponential Age
Technology transitions happen fast. Cars made horses obsolete as a mode of transportation in less than 30 years.
Over the span of a century, the dramatic shift we see in the graph above turns into a mere blip. The new technology becomes a seed for the transformation of markets, social norms and economic systems.
Today, technological transitions are occurring faster than ever before, rendering old technologies obsolete.
The costs of key exponential technologies are collapsing — thanks to learning rates — and this makes them more stable and predictable over the long term than linear technologies.
At the same time, their adoption is surging.
Generative AI is being adopted faster than both the internet and personal computers. Some 39% of Americans aged 18–64 have used generative AI and more than 24% have used it at work at least once a week.

This is extraordinarily rapid adoption. Yet the landmark moment for generative AI will be when we finally see it reflected in productivity statistics. For electricity and the PC, a productivity uptrend began once household adoption passed roughly 25%. The same may be true for generative AI.
New markets are going to emerge, particularly in the realms of energy and information. This confluence of factors is creating a landscape of change that is both exhilarating and challenging.
Energy, for instance, is transitioning from a set of finite, geographically constrained commodities to ubiquitous, improving technologies. A paper in Nature argued that we’re already past the tipping point where solar has won. As we wrote before…
[t]his means that nearly every country in the world can become an energy superpower. The sun shines favourably on nearly all of us. The great game of the Persian Gulf will lose its relevance. So too will dastardly pipelines that straddle continents. The result is a fundamentally different economic model: one of abundance rather than scarcity, of distributed production rather than centralised control.
Cheaper information and intelligence will unlock new markets, boosting productivity and accelerating scientific breakthroughs across fields.
One of the biggest bottlenecks for humanity’s future is our ability to keep creating knowledge amidst declining populations and the biological limits of human cognition. We’ll need all the help we can get.
As some of you already know, I believe that AI can help us address complex problems that require knowledge beyond human biological constraints.
A great deal of innovation and science is combinatorial. I believe that in the next few years, systems built on LLMs may break the siloed nature of scientific research. LLMs can help us analogise across domains, bringing insight from one domain to another. You can do this with consumer tools like Claude and with specific services like Elicit and Consensus. Mathematician Terence Tao makes a strong case that this is happening today. He is “already seeing AI-enabled […] databases of ecosystems where people from many different areas of science are inputting their data on the same system and creating a holistic multimodal view.”
As we stand in Square 33, it’s crucial to remain curious and adaptive.
Exponential View is not just about understanding change: we exist to empower you to shape a future defined by abundance, interconnectedness and new possibilities.
Here’s to navigating these extraordinary times together — with a spirit of wonder and an eye toward the opportunities that lie ahead!
Members of Exponential View have access to top-tier analysis of all these developments every week, including our deep dives into AI, energy and other technologies:
Congrats on 500!
Az, there seems to be a misuse of terminology in this "new space". Defining or intimating that this new software is artificial intelligence is misguided (not you, but the 'collective' participants in the 'discussion' about WTF "it" is). I believe this needs to be resolved ASAP.
These LLMs are NOT human-like intelligence or reasoning engines. This needs to be made clear.
An LLM is a statistical projection engine. It takes existing data elements and estimates the highest-probability association for the next term/phrase, within the limited recall capacity defined by the training set.
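A toy illustration of that point, using a hypothetical bigram model (nothing like a real LLM's architecture, but the same statistical principle): count which word follows which in a "training set", then emit the highest-probability continuation.

```python
from collections import Counter, defaultdict

# Toy "training set": count which word follows which (a bigram model).
corpus = "the cat sat on the mat the cat ate the fish".split()
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training set."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Outside the vocabulary of its training data, `predict_next` has nothing to say, which is the commenter's point about limited recall capacity.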
This is nothing like a human. Sorry.
In fact, the concept of artificial intelligence is so UNDEFINED that it has become the convenient label to stick on anything.
Az, this is noise.
What needs to happen is to deconstruct the elements of cognition (artifacts and problem-solving skills) and execution. The results are then measured precisely against the intended outcome, or against a NEW creation that was previously UNASSOCIATED with these data points. In isolation, this is the "task" function of the LLM... and it will be ONE of millions of solution sets based on the experiential artifacts of billions of humans.
Problem: Computer systems (soup to nuts) do not have experiential artifacts to reference. As such, there is no circumstantial reference point for whatever actions followed to resolve the event. Yes, the path can be plotted and all the EXTRINSIC elements can be 'captured' (video, voice, location, time). What's missing is the creative element. Without a long diatribe here, let's state the obvious.
AT PRESENT, LLMs are capable of high-level logical inference and can make a variety of suggestions about "what's next". BUT only in the context of a closed data set with a known set of constructs and a qualified set of outcomes. In other words, if I want to solve the problem of why the wipers on my Caddy are stuck, AND the key doesn't work, AND the battery light is on, the probability that I am going to get THE answer is only high inside a highly curated data set with NO NOISE (unlike a Google search). Yes, the gene-folding data set is pure; OpenAI's is not. That is where the state of the tech is today.
It produces good information, especially at this time when the hype cycle is about 40% revved up. Wait till Mar/Apr when the earth starts to spin again. Until then, the only ideas which need attention are the technologies and solutions which REFINE the feedback loops, create new neurons inserted in pre- and post-processing for clarity, and build front-end interfaces which are EASIER than text or typing.
Today, this would be defined as an AGENT. Exactly. We are a collection of neural agents. Some drive the car (w/o thought), some cook perfect eggs (w/o thought), some resolve arguments (w/ creative visioning based on personal historical artifacts), etc., etc., etc.
This will be "STEP #2" on this path of technology innovation.
ALL the calls which state nonsense like "Artificial Human intelligence that can mimic humans will be HERE in 2025"?
No, Sam. Folks, that is Sam's pitch deck. He is one of five people racing to the top: Elon, Zuck, OpenAI, Goo, and Amaz. Who cares!!
The money, the near-term money... I'm placing my bet on personal, task-centric agents where the results are measurable. (Yes, thin on details here, but that's the 100,000-foot view.)
We are a long way from human-level artificial intelligence.
What we have is
AUGMENTED INTELLIGENCE,
and that's it for a while.
My thought.
I think one of the most profound developments will be AI undertaking knowledge work. So, not just a creator of knowledge, but a doer of work. Not just routine drudge work, but the vast swathes of white-collar work that provide wages for millions. This has very significant implications that are barely appreciated by those who employ all of us humans today.