11 Comments
Colin Brown

Thanks for sharing this guest essay. I knew Helen's name but wasn't aware of her Substack.

I read this before I read https://substack.com/@platforms (AI and the strategic value of hype). The two make an interesting pairing, as you can go back and forth between them.

Helen Toner

Great shot/chaser, thanks for sharing.

spencer wendt

I like the bracketing concept: near-term AGI in 5-8 years, long-term in 3-5 decades. On first take this seems reasonable, until we extract the end game: a sentient presence occupying a space in the universe, with some relevant features:

Sentience means the ability to feel, to be aware of sensations like pain, pleasure, and emotion. Most animals are sentient to some degree, but humans take it further.

Consciousness involves being aware of yourself and the world, having thoughts, memories, intentions, and a continuous sense of "I."

Self-awareness is the reflective capacity to think about your own thoughts, identity, existence, and purpose—essentially, being aware that you are aware.

AGI is NOT going to deliver any computer or robot 'being like a human,' 'living like a human,' or doing the highly purposeful, structured, and oddly random activities humans 'do.' There is no 'end game' for you or me, except survival until death.

The "do" part is "the problem. Folks attempting to prognosticate the development path and the date upon which a collection of lines of 1s and 0s in a box will replicate or copy the unorganized, randomness of a human existence.

I want to say the truth we all instinctively 'know': it'll never happen! The fear of expressing that truth yields to an opening: "it's possible!" Sure, if expressing the possibility as a concept means the number is 0.00001, it is 'possible.'

But at the core of the belief is noise that sounds like: "Wow, what is it that I have that a machine does not have, and cannot, will not, ever possess..."

Until this single element is either a) added to the equations and calculations, or b) conceptualized and quantified as a 'constant,' there will never be a machine that can survive independently, or as a collective, in this "environment" (aka Earth).

Human Agency.

With no artifacts, no presence, no holding of a self-generated purpose...

Never happen.

Keeping this AGI conversation alive is how the structure of our societies will embrace this new technology, as with previous tectonic shifts (TCP/IP, the iPhone, the wheel, etc.). It's not linear and not planned. Adoption happens for the purpose of advancing humans, including all the things humans do: yes, even the crime and problems that exist in the world, with or without the new "innovation."

But that is not the only part of society impacted by AGI, and the broader impact is where the focus needs to be (IMO).

When the dust settles, a year or two out, concepts like reasoning, knowledge, artifacts, and agents will have more substantive definitions (which will evolve with use cases and real-world impacts). Even when the elements of a human, like thought, choice, outcomes, motivations, 'plan B,' success, and failure, are understood like math equations, there will still be no possibility of a man-made creation that replicates the life or 'existence' of a human in this environment.

So the real question isn't when robots are going to take over the planet; that's what makes for good Hollywood scripts. The question is how we will integrate AI/LLMs (and BCH) into our current systems to move humanity forward, by unlocking the creative elements and visions of humans who now have access to 'knowledge,' ALL KNOWLEDGE... freely.

We're not only a long way off... slices of code will never get "here." (It's early; my 2nd cup is ready...)

BRB.

$1.00

Ed Cockrell

What happens to the argument about AI intelligence if the measurement system is broader than memory, calculation, sorting, composing, and interacting with systems that are constructs of government, capitalism, and closed artificial mechanics? As a biological organism, I need energy to survive and to be part of the reproductive process for my particular species of life to continue.

AI alone is nothing. AI in its current condition is a tool for business capitalists to reduce demand for human workers. AI can be an effective sidekick for scientists sorting through immense data streams to enhance discovery. But AI cannot exist as a species of life that needs energy and physical structures to process inputs. If society collapses, there can be no AI. The idea of AI is tied exclusively to human existence. Humans as a species can survive without AI, but AI becomes inert junk if it is not cared for by human watchers.

Perhaps the questions about AI should be more about how effectively humans incorporate the tool of AI into our biological life form. How long will it take for the next level of human, enhanced by AI, to become more robust at survival than a strictly biological human?

Dan Collison

Moved elsewhere in comment tree

Carl Hahn

Helen and CSET do great work (and of course that includes Azeem and the awesome Exponential View team). Helen's framing and questions are very helpful. As someone who has been working, tactically and day by day, on operationalizing AI in large organizations, I remain convinced that we need investment NOW in standards (even voluntary ones, for you anti-regulators out there) and more practical help for companies, organizations, and their teams. The human-machine teaming challenge is upon us (true synthetic employees will be real, and sooner than you think), while we've barely started thinking through all the legal and policy challenges (can/should an AI employee evaluate and sign contracts or purchase orders?) and how to incorporate this into operations and controls that are repeatable and auditable, so that AI helps firms arrive at a better place and achieve success. The challenge will only increase as AI capability increases. We have a window for law and policy to catch up; let's form consortiums and support standards-setting as a matter of urgency, so we can responsibly deliver on the promise of AI.

Dan Collison

Tying in with how to assess and represent an AI's abilities for policy purposes: could there be a panel of AGI tests that can be put in layman's terms, similar to the role the Turing Test played in making AGI more tractable to the layman? One that gives an overview of the strengths of a particular program or piece of software?

Whether it’s

1) Math

2) Reading

3) Writing

4) Coding

5) Image interpretation

6) Image making

7) Navigating a controlled environment like a factory floor

8) Navigating a somewhat controlled environment like roads

9) Navigating terrain

10) Finding patterns in a variety of less structured data (images, sound files, genomes, readings from instruments, etc.)

Etc.

Maybe as a Radar Chart or Spider Chart (a quick sketch follows below).
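
As an illustration of that panel idea, here is a minimal sketch of such a capability radar chart in Python with matplotlib. The ten categories follow the list above; the scores are invented placeholders for a hypothetical system, not measurements of any real one:

```python
import numpy as np
import matplotlib.pyplot as plt

# Categories follow the list above; scores are hypothetical
# placeholders on a 0-1 scale, not measurements of a real system.
categories = [
    "Math", "Reading", "Writing", "Coding",
    "Image interpretation", "Image making",
    "Factory floor", "Roads", "Open terrain",
    "Pattern finding",
]
scores = [0.8, 0.9, 0.85, 0.7, 0.6, 0.75, 0.3, 0.4, 0.2, 0.65]

# Spread the axes evenly around the circle; close the polygon
# by repeating the first point at the end.
angles = np.linspace(0, 2 * np.pi, len(categories), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, linewidth=1.5)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(categories, fontsize=8)
ax.set_ylim(0, 1)
ax.set_title("Hypothetical capability profile")
plt.tight_layout()
plt.show()
```

Each axis could be fed by whatever benchmark ends up behind that capability, normalized to a shared 0-1 scale, so the chart gives the layman's-terms overview in a single picture.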

Manar

When I first stumbled into a singularitarian conference in SF 20-ish years ago, it was explained that the name comes from the advent of superhuman AI being likely to be a singularity, which meant: (a) if we wait until the consensus is "it's probably coming soon" [<20 years], it's probably too late to shape its inception to ensure it works for humanity, so we're rolling the dice; and (b) superhuman AI on that timescale is likely arbitrarily super, as AI will accelerate itself, so a few years will be enough to go from "maybe human level is approaching" to way beyond it.

I.e., it goes from "not happened" to "happened" in less than the societal adaptation time. On that take, I reckon there's a fair chance we've already hit the singularity.
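
For what it's worth, point (b) can be made concrete with a toy model. The sketch below (Python) assumes capability C grows as dC/dt = k * C^p with p > 1, i.e. the rate of improvement feeds back on capability itself; the constants c0, k, p and the two thresholds are illustrative assumptions, not estimates of anything real:

```python
# Toy model of "AI accelerates itself": capability C grows as
#   dC/dt = k * C**p  with p > 1,
# which is hyperbolic growth and crosses any finite threshold in
# finite time. All constants (c0, k, p, thresholds) are illustrative
# assumptions, not estimates of any real trend.

def years_to_cross(threshold, c0=0.01, k=0.5, p=2.0, dt=0.01, t_max=300.0):
    """Euler-integrate dC/dt = k * C**p; return the crossing time in model-years."""
    c, t = c0, 0.0
    while c < threshold and t < t_max:
        c += k * c**p * dt
        t += dt
    return t

t_human = years_to_cross(1.0)     # "human level" (C = 1, by construction)
t_beyond = years_to_cross(100.0)  # "way beyond" (100x human level)
print(f"human level crossed at t ~ {t_human:.1f} model-years")
print(f"100x human crossed at t ~ {t_beyond:.1f} model-years")
print(f"gap between the two: {t_beyond - t_human:.1f} model-years")
```

With these made-up constants, the run-up to "human level" takes roughly 198 model-years, while the further jump to 100x human takes only about 2 more: the final crossing compresses into less than any plausible societal adaptation time, which is the singularity claim in miniature.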

Manar

Grrr. Hopefully clear despite typos (on a runway...)

Dan Collison

The Anthropocene has had a wide variety of effects on flora, fauna, microbiomes (external and internal), and even geological phenomena (such as weather, water tables, etc.).

What sorts of flora, fauna & geological phenomena have been domesticated? Which remain wild? Which have gone invasive (including the way some microbes can now spread more easily because of world travel), zoonotic, or rogue, including the microbiome? Which have gone extinct (megafauna; the dodo; microbiome characteristics; etc.)?

What sorts of intelligences will be domesticated? Remain wild? Become invasive or go rogue? Go vestigial or extinct?

Dan Collison

Previous informational & technological revolutions:

1) 3500 BCE - writing invented for business purposes, later spread to other uses

2) 15th C printing revolution had general use but early on was largely used for religious purposes

3) 19th C Industrial revolution was developed & unleashed for business purposes with unexpected and unplanned effects to the wider population

4) 20th C atomic revolution was developed & controlled by governments largely for military purposes

5) 21st C AI revolution is mostly being developed in the West for business purposes and by the Chinese for business, governmental, and military purposes

What can be learned from the way these were developed, released, and controlled/not controlled?
