Discussion about this post

Colin Brown

Thanks for sharing this guest essay. I knew Helen's name but wasn't aware of her Substack.

I read this before I read https://substack.com/@platforms (AI and the strategic value of hype). They actually make an interesting twinning, as you can go back and forth between them.

spencer wendt

I like the bracketing concept: near-term AGI in 5-8 years, long-term in 3-5 decades. On first take this seems reasonable, until we extract the end game: a sentient presence occupying a space in the universe, with some relevant features:

Sentience means the ability to feel, to be aware of sensations like pain, pleasure, and emotion. Most animals are sentient to some degree, but humans take it further.

Consciousness involves being aware of yourself and the world, having thoughts, memories, intentions, and a continuous sense of "I."

Self-awareness is the reflective capacity to think about your own thoughts, identity, existence, and purpose—essentially, being aware that you are aware.

AGI is NOT going to deliver any computer or robot 'being like a human', 'living like a human', or doing the highly purposeful, structured, and oddly random activities humans 'do'. There is no 'end game' for you or me, except survival until death.

The "do" part is "the problem. Folks attempting to prognosticate the development path and the date upon which a collection of lines of 1s and 0s in a box will replicate or copy the unorganized, randomness of a human existence.

I want to say the truth we all instinctively 'know': 'it'll never happen!' But the fear of expressing that truth yields to an opening: 'it's possible!' Sure, if expressing the possibility as a concept means the number is 0.00001, it is 'possible'.

But at the core of the belief is noise which sounds like: "Wow, what is it that I have that a machine does not have, and cannot, will not, ever possess..."

Until this single element is either a) added to the equations and calculations, or b) conceptualized and quantified as a 'constant', there will never be a machine which can survive independently OR as a collective in this "environment" (aka Earth).

Human Agency.

With no artifacts, no presence, no holding of a self-generated purpose...

Never happen.

Keeping this AGI conversation alive is how the structures of our societies will come to embrace this new technology, and as with previous tectonic shifts (TCP/IP, the iPhone, the wheel, etc.), adoption is not linear and not planned. Adoption happens for the purpose of advancing humans, including all of the things humans do: yes, including crime and the problems which exist in the world with or without the new "innovation".

But that is not the only part of society impacted by AGI, and those other impacts are where the focus needs to be (IMO).

When the dust settles, a year or two out, concepts like reasoning, knowledge, artifacts, and agents will have more substantive definitions (which will evolve with use cases and real-world impacts). And when the elements of a human, like thought, choice, outcomes, motivations, 'plan B', success, and failure, are understood like math equations, THEN there will still be no possibility of a man-made creation which replicates the life or 'existence' of a human in this environment.

SO the real question isn't when robots are going to take over the planet; that's what makes for good Hollywood scripts. The question is how we will integrate AI/LLMs (and BCH) into our current systems to move humanity forward by unlocking the creative elements and visions of humans who now have access to 'knowledge', ALL KNOWLEDGE... freely.

We're not only a long way off... slices of code will never get "here". (It's early, my 2nd cup is ready...)

BRB.

$1.00

