10 Comments

I think this is interesting in terms of your last piece as well. If the EU is going to go through a continent-wide military renaissance, could AI be used to organise, standardise and devise equipment and structures for optimum military readiness, maximising freedom and protection through applied intelligence? It's quite possible that all markets could atrophy in a strong military scenario, and that prospects for the technology would be delayed for commercial purposes but accelerated for war. The irony would be heavy if the computer, devised as a tool for war by Alan Turing, became the architect of European (or other) security nearly a century later.


I'd tag this one. It could be your best red-eye, can't-sleep, wheels-up-to-wheels-down work product!

From one perspective, your analysis was totally needed, maybe overdue.

It pretty much captures the broad, rear-view-facing entropic tech evaluations of the past few biggies: the PC started in '84 (Apple was '78), the '94 network/Internet wave, the '08 iPhone with data/voice/phones. I do not believe any of these is a relevant comparison to AI's breadth or depth of reach in TODAY's socio-eco-tech environment. Each of these is a "layer"...

What happened?

Each tech shift is, of course, preceded by the 'EOH' warnings: End Of Humanity. Each of the shifts above generated the same EOH NOISE, but the signal is available for fine-tuned listening. AI is out of the barn! It's about getting in the game. When there was no bandwidth, construction exploded; no data centers, investment exploded; no cell towers, "can I lease your roof?"... :)

Along the way there was unexpected, unanticipated, exponential growth in new concepts, new business categories, new markets, and scads of new revenue showered into the economy.

The predictions were chaos, entropy consumes the fragile human species, dust.

NO ONE...ever thinks back on any of those EOH diatribes...

WHY?

Because only a lunatic can imagine anything as wild and creative as what is about to come. Robots? Flying cars? No answering services? No IRS dipstick? No 3-hour hold times... some things are heaven on earth!! But someone would have to be nuts to pontificate very loudly, because the amount of change that is coming is big!

Can you imagine... some nut SAID HE'S going to CATCH a 400-foot-tall, 40,000-ton ROCKET!! Hahaha... he's a nut... right! :)

These types of people, the creators, the lunatics: when they speak and act, it does one of two things...

CREATIVITY: The future engages your creative energy; the "anything is possible" thinkers!!

OR

FEAR: It ignites base human FEAR. For those who are very unsettled by "change" (a word synonymous with "life"), what do thoughts of life-altering, inevitable, no-way-out change do? For those who do not cope well with change/life, yes, these shifts are significant enough to engage their base human fears: being worthless, being invisible (or more invisible), being replaced, etc.; ultimately, the fear of being unlovable. These are the "oh shit, we're gonna die" folks!

If there were no concept of "change", we'd have arrived at today, 2025, and that's it!! In a coupla' years, 2028... "it's" over...

Yes, every point you made in the near term is valid; 100%.

The good news is there is an abundant supply of creatives, and the entropy perspective never yields fruit. This makes sense: creativity is required for human survival.

The logic would render to: Create or Die.

Your analysis is species-wide, so I don't see it shaking out to the latter, Az.

It was a good piece.


One of the logical steps in this essay contains, IMO, a big part of the path to solutions. Legacy systems, processes, and people slow down the adoption of this type of technology. We saw it not only with the internet, but also with the predictive-AI wave about 10 years back. In this case, there's something alluring: the idea that we might just "drop" AI agents in where human agents were, without a chunk of the complexity of business-process transformation (and consultants, lean six sigma, sprints, etc., which most people can't relate to). That's why the related announcements from Microsoft and OpenAI+SoftBank (Cristal) draw attention. I continue to believe, however, that we need better design of AI with humans in and on/over the loop, as well as other AIs in/on the loop (agentic AI design will help with that). Across this spectrum of possible interventions, I see a great deal of disparity in funding: services companies, rather than software companies, will be the ones who nail the human-in/on-the-loop part. Are they receiving a commensurate amount of attention and, importantly, funding? I don't believe so.


This is a good article. It also needs to consider more the human success factors, incentive structures and trust, i.e. the "immune" response of the large organisation and its power structures; the value comes from "reinvention" of process (not automation) and the motivation to change, e.g. traditional retailers only moved into .com in response to Amazon. Having been on the sharp end of trying to get agentic architectures working in enterprises: there is a lot of "plumbing" required, cross-functional execution, and lots of cases where 99.x% isn't good enough. Trust (B2C and B2B) will be a critical factor. I can bank with Revolut for my holiday currency (but the average person will still have their pay going into an HSBC current account); I can use AI for agentic campaign management (but I'm not changing my system-of-record architecture any time soon). It will come down to adoption, trust and incentive structures.


Great piece. I'm seeing a roughly 50:50 split between colleagues who think AI's potential outweighs the downside and those who think the reverse. One thing I've noticed with any transformation: people can always think of reasons not to act (too risky, too busy right now, waiting for others to make the first move, too expensive, don't like the messenger, etc.), and there is rarely an individual penalty for doing so.


Wonderful essay!

I see a fuzzy situation, although I hope others with more knowledge will try to flesh out Azeem's questions.

Fuzzy? Here are two examples.

1) I know a community college instructor who is using $20/month NotebookLM to enhance her teaching. Great results. Deep student engagement. So that will add $240 to GDP each year, until the cost drops by 10x. Will her students' contribution to GDP jump?

2) I don't have the project yet, but I predict I will soon have a project in AEC similar to one I did 7 years ago, but for about $50,000 rather than $500,000.

So I'm seeing a drop in GDP of ~$450,000.
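The arithmetic behind these two examples, as a quick sketch (all figures are the commenter's own illustrative numbers):

```python
# Example 1: a paid subscription adds its full price to measured GDP.
monthly_subscription = 20                      # $/month for NotebookLM
annual_gdp_added = monthly_subscription * 12   # $240/year

# Example 2: delivering the same AEC project for $50,000 instead of
# $500,000 registers as a DROP in measured GDP, even though the client
# gets the same output far more cheaply.
old_project_price = 500_000
new_project_price = 50_000
measured_gdp_drop = old_project_price - new_project_price

print(annual_gdp_added)   # 240
print(measured_gdp_drop)  # 450000
```

The mismatch is the point: GDP tallies spending, not value delivered, so a 10x cost reduction reads as economic shrinkage.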

The problem I'm seeing is not that AI can't have beneficial results, but that GDP is too crude a measure. Even that is not quite right: GDP counts negative externalities as positives.

I realize that GDP has value, but if we want to understand 'productivity' in the sense normal English speakers use the word, we need to do better with how we measure. This problem is not limited to AI.


Fantastic analysis! Another thing that may be worth considering: the speed of change and progress in the AI space is probably keeping orgs from committing to significant changes and encouraging a "let's wait until the dust settles" approach.


I love this. Looking forward to reading more articles of this sort.


Great piece to think on; so many dimensions to consider in forecasting AI's growth and impact. I think the near-term (10-year) future of AI is very much dependent on macro forces: resource constraints (energy, materials to make chips) and geopolitics (chip design is truly globalized, with many weak links that could crash it).

Assuming one or a combination of these macro forces does not crater AI's progress too badly, then we have to consider economics. My view is that in the U.S. we are between growth engines: the last 80 years or so gave us demographics, industrialization, then expansion of the service sectors, but that is all past peak. The next 30 years or so will see a reindustrialization boom as we bring back the $5 trillion or so in annual imports (products + services), which, considering multiplier effects, means another $15 trillion in annual GDP added to the $27 trillion we have now (normalized to 2025).

It's the between-period we are in now, for the next decade or so, where we have anemic growth and increasing financial pressure due to debt, unfunded liabilities, natural disasters/climate change, etc. I think in this period there will be intense pressure for companies of all sizes and levels to reduce costs and boost efficiencies, because without a growth engine to power our economy, everyone will compete more fiercely for a cut of a pie that is not increasing (relative to inflation). Thus, I expect a boom in AI as companies do anything and everything to get more productive (AI is a productivity revolution that will compress the middle in all companies).

Does this mean higher GDP? Maybe not. If AI takes 25% of knowledge-based jobs as we see them today (which I believe it will), then no: job losses will offset these gains. Until we ramp up reindustrialization efforts, we could be in GDP stall mode for a period of time.
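The reshoring arithmetic here is a standard multiplier calculation. A minimal sketch, assuming the commenter's figures are in trillions of dollars (2025 U.S. GDP is roughly $27 trillion) and that a 3x multiplier is implied by the jump from $5T of reshored imports to $15T of GDP impact:

```python
# Multiplier effect: reshored spending recirculates through the economy,
# so total GDP impact = direct spending x multiplier.
reshored_imports = 5        # annual imports brought back, $T (commenter's figure)
implied_multiplier = 3      # implied by the comment's 5 -> 15 jump
gdp_impact = reshored_imports * implied_multiplier

current_gdp = 27            # $T, normalized to 2025 (commenter's figure)
future_gdp = current_gdp + gdp_impact

print(gdp_impact)   # 15
print(future_gdp)   # 42
```

Whether the multiplier actually lands near 3 is the load-bearing assumption; the comment's own caveat about AI-driven job losses pulls the other way.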


I appreciate this article and your pro-con projection of AI uptake in an era of societal values that promote unlimited growth as the highest human value. If unchecked capitalism falls apart in a severe economic contraction, unpredictable consequences will hit both The Born (human life) and The Made (artificial life). We are approaching an unprecedented nexus of events in human history. The American empire is undergoing a convulsion of self-destruction at a time of rapid heating of our global oceans. Climate certainty is dead. Stability of existing economic systems is threatened, which leads to extremes of uncertainty. Adaptation of new technologies will be haphazard and difficult. Surprising developments will confuse and disorient human decision making.
