I'm not wholly convinced Apple are doing that badly on AI, and they may yet surprise. They are often slow to the party. The changes to the next versions of Shortcuts and Spotlight point the way to deeper task-based integration (eventually!). They also seem to be bringing in useful, if imperfectly working, stealth AI, e.g. the Notes transcription is great for recording meetings.
Not everyone will need their own frontier model, especially as open source ones get better.
But Apple do need to fix Siri ASAP. It hasn't materially improved for around a decade, and dictation remains woeful. I'm near weeping in frustration when dictating an idea only to see it mangle common or distinct words. It's doubly annoying when I've got a Shortcut that will automatically process the dictation via AI into something more useful, but only if the dictation is halfway competent.
Really unsure where ambient computing will go. Not convinced it will be voice-based. We've had good voice-based computing for a while, and outside of certain contexts, e.g. driving, I'm not convinced that talking to a device all day is what people want to do. I could see subtle gestures working, though... and, privacy concerns aside, always-on recording with automatic suggestions based on it could be appealing.
Paid-for AI services have shown people don't want enshittification, but I suspect it will come to AI anyway, cf. the slow ruining of Amazon Prime. It's too tempting not to splatter ads over everything, even for those who pay.
Agree. There is a scenario where Apple retains scepticism about the developments and builds products that take that into account. Unlike Meta, which is just casting around, Apple might have a more developed strategy. That "might" is doing quite a bit of work there, admittedly. Though I imagine Apple has deep concerns about hallucinations.
Ray Dalio’s analysis is really worth reading and absorbing. I would personally describe our situation as a poly-crisis; TL;DR:
1. Geopolitical - collaboration, competition, confrontation and conflict.
2. Climate change and the energy transition - impacts and changes to trade and supply chains, etc.
3. Demographics and society - increasing demographic dependency ratios, immigration and workforce challenges, populism, polarisation and post-truth, etc.
4. Finance and debt - access to finance, bubbles (which stem from asset prices inflating to insane levels and then popping), public, commercial and private debt, currency and trade wars, etc.
5. Digitisation and technology - AI, blockchain, quantum technologies, opportunities for innovation, but new and amplified risks, etc.
And into this we want to embark on a project to replace humans in employment, without a plan for how we cope with that as a society?
I think we will be in the era of “people racing with the machines” (as per Brynjolfsson et al.) for some time.
However, if we consider AI a normal technological revolution / General Purpose Technology (GPT), then we will witness populism and polarization as the automation process takes place, before a new “golden age” arrives, though from my limited thinking not one of AGI taking all the jobs.
How long it will take to reach this AI-enabled (and probably near net-zero energy: solar, wind, nuclear, etc.) “golden age” is a very open question.
There are always winners and losers, including rent-seeking incumbent nations, organizations and specific jobs/people.
I think you are absolutely correct to identify the challenge of how, as a world/society, we cope with the transition, especially in an era of poly-crisis.
Do you mean his book, Richard? Or something else?
Initially the link in the newsletter. However, the book provides an interesting hypothesis that is worth thinking about.
It is impressive, but I still think some scepticism is warranted. The paper concedes that preventing leakage is challenging because LLM pre-training corpora are opaque and enormous. It is a bit like what our CTO used to say: "you have been hacked, you just don't know it yet"... "your data has been scraped somehow... you just don't know it yet".
Also, weren't there reservations about the funding of FrontierMath by OpenAI? I think people have previously raised the point that, with enough scale, LLMs can “interpolate” between known proofs and produce a plausible synthesis that appears novel to humans even when no explicit solution 'exists'.
With such high hallucination rates for o4-mini, this line stuck out. The researchers had concerns about the results being trusted too much, given how certain the model was when wrong: “If you say something with enough authority, people just get scared. I think o4-mini has mastered proof by intimidation; it says everything with so much confidence.” Proof by intimidation, they called it. We need to walk into this world with eyes wide open.
"Sam Altman The Gentle Singularity: We are past the event horizon; the takeoff has started."
Az, you gotta control yourself!! First, wean yourself off anything related to Sweet Sam the Puppeteer, until THEY HAVE GONE PUBLIC!!
AGI: the most overused, ill-defined, worthless assemblage of phonetics since the first amoeba burped in the primordial ooze.
We are no closer to "AGI" today than we were 100 years ago, or will be in another 100 years.
There will never be a human-level machine occupying this planet.
Sam is raising money into an expanding universe with no real direction. How so?
Two years ago: "We gotta go big; big = the answer!" No: big, unbounded = shit sandwiches...
"Oh, we need to increase the prompt size!!" No, any prompt size LESS than the human capacity to generate ideas and ideate will be insuffencient; as has been verified this year.
Oh wait... SAM released his master plan on X!! (uh huh...) "WE found the HOLY GRAIL: AGENTS..." (stop here).
This quickly morphed into "AGENTIC", which is now the NEXT "final step", according to the fundraising savant, Sam-AI-I-Am...
"OMG, I mean this time WE ARE CLOSE... TRUST ME" <-- scam words. Followed by:
"All I need is [read this part - see if it sound famil] ANOTHER 700,TRILLION..." hahhah..
Sam the Scam... so good!!
Back to reality.
Agents are next...
The truth is what each of us knows; in fact, we all know the truth.
No one with a semblance of sanity would ever propose that a machine can have a prescient state with agency in this dimension of reality and exist independently.
Never happening.
I love Sam. His combination of pure snake oil is matched only by legends of fundraising hype and middling results.
No. AGI.
AGI = Snake oil.
You have to think through what actually must occur in ALL OF LIFE to intimate that a machine can replace
YOU!
Never happening.
[Now we return to the "Apple Missed the Ball" convo...big miss]
$0.0001