19 Comments

On another note, about Tesla's FSD: I am skeptical that they can make it truly safe when it comes to edge cases. You'd need to, unless you accept and admit that accidents are allowed to happen, in which case, would you risk your life and use it? This is because of the shift to using only cameras. Waymo uses LIDAR as well, right?

WSJ made a great video about autopilot crashes recently: https://youtu.be/mPUGh0qAqWA?si=XGqmls9cRnv56_5M

The main point there is that if the car encounters something on the road that it has not been trained on, it has no way of telling whether that thing is solid or not, and it will collide without braking. LIDAR would immediately tell it that there's something there. It would be the computer's way of "touching", using that data to interpret what it is seeing.

If I remember right, Elon's argument for a vision-only system was that people can navigate the world with only visual information just fine, so it should be possible. However, he's ignoring the fact that ever since we were babies we have also relied on other senses, namely touch, to identify what has substance and what doesn't. That has "trained" our minds to assess whether a thing we see is solid. A purely camera-based system has no way of telling, unless someone has programmed that into it or it has been part of the training data and classified as such.

I have an M3 Highland, and its camera-based system already gets confused by, e.g., strong lights, reflections, and shadows in our parking garage. It regularly thinks there's a solid object in front of it where none exists. When you're the one driving, these drawbacks don't matter much, but there is no room for error in fully autonomous driving, when you're trusting your life to it.


Last time I checked, accidents are strongly correlated with poor visibility. I hope Tesla's FSD works, but ignoring strong sensor signals just to save a few bucks seems like a poor direction. Those of us who want self-driving autos should push for improvements in safety, not just 'good enough'. Because self-driving is in the national interest, I can imagine responsible parties pushing to cut some corners as long as overall safety improves, but we will probably need to do better to get broad user acceptance, and that is what we need. Dictating won't cut it for long.


The issues of governance/control and capability are inextricably linked. The more nuanced and capable these machines become, the more humans will trust them with complex reasoning, including governance reasoning. The reality is that our current human-based governance infrastructure is obsolete, designed before information technology became what it is now. I continue to believe that the future lies in building governance and processes where the native interplay of N humans and N machines is intentional: humans, individually but even more importantly as networks, focus on the why and the what, while machines provide most of the how. These roles are not mutually exclusive; we badly need machines to help us with the why and the what too.


We do "...badly need machines to help us with the why and the what", but if our governance is so poor that we can't even reverse CO2 increases, something so obvious, I don't think we can expect any machines to deal with the crisis.


Declining birth rates present varying economic challenges for nations, but the reduced capacity to fund growing retiree populations through taxation ranks among the lesser concerns. The pandemic era, the IRA, Stargate, etc. demonstrate conclusively that "Government spending is never constrained by taxes, governments spend by 'keystrokes' that they cannot ever run out of" (Wray, https://www.amazon.com/Making-Money-Work-Us-America/dp/1509554262). It is available physical resources and innovative capacity that are the relevant constraints, and challenges.


I have tried a version of DeepSeek distilled into a 70B Llama 3.1, running on GPU at 35 tps (AI Studio beta, using the CUDA runtime on a 3090). It is a sea change in how it works: verbose as hell, though that could just be my system prompt, which was designed/cobbled together to coax more data out of instruct models. It also did as asked and went through the prompt history, even reproducing large chunks of it and arguing with it. Needless to say I am not asking it brain teasers but using it as a chat model. It's unusual in the number of spelling mistakes it makes, but it doesn't stop, and it reasons about its responses in character, which is kind of fascinating. It is early days yet; I imagine better versions will let you hide the thoughts if you don't want to read them. It does essentially require you to make large-scale edits to make sense of the narrative, but it's a qualitative step up. I think I'm going to simplify the system prompt and see how it behaves. There are probably optimum settings for temperature, perplexity, repetition penalty, etc. It's pretty good, and an obvious advance on what existed before.
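For anyone tuning those settings, here is a minimal sketch of what temperature and repetition penalty actually do to a model's raw logits before sampling. Function names are mine, and real inference stacks (llama.cpp, transformers, etc.) implement this natively; this is just the arithmetic, under the common convention that the penalty divides positive logits and multiplies negative ones.

```python
import math
import random

def adjust_logits(logits, generated_ids, temperature=0.7, repetition_penalty=1.1):
    """Penalize already-generated tokens, then scale everything by temperature."""
    adjusted = list(logits)
    for tok in set(generated_ids):
        # Divide positive logits, multiply negative ones, so a penalty > 1
        # always makes a repeated token less likely.
        if adjusted[tok] > 0:
            adjusted[tok] /= repetition_penalty
        else:
            adjusted[tok] *= repetition_penalty
    return [x / temperature for x in adjusted]

def sample(logits, generated_ids, temperature=0.7, repetition_penalty=1.1, rng=random):
    """Softmax the adjusted logits and draw one token id."""
    adjusted = adjust_logits(logits, generated_ids, temperature, repetition_penalty)
    m = max(adjusted)  # subtract the max for numerical stability
    probs = [math.exp(x - m) for x in adjusted]
    total = sum(probs)
    return rng.choices(range(len(probs)), weights=[p / total for p in probs], k=1)[0]
```

Lower temperature sharpens the distribution (less verbose rambling, less diversity); a repetition penalty slightly above 1.0 discourages the model from reproducing chunks of the prompt history verbatim.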

I'm an accelerationist personally, largely because I think the only way to be rid of capitalism is to slam it into the buffers at full speed, with no hope of resurrecting it. The GFC was a near-death experience; having been embedded with a rag-tag group of financial journalists, bankers and brokers for the duration, I saw a marked difference between what those people knew and what the public were told. Though I do think the UK response from Messrs Brown and Darling was leagues ahead of the US response. Notwithstanding the Trumpian version of this, which is to apply 25% tariffs to your primary trading partners. Still, we have Prof Krugman on tap for that.

"May you live in interesting times"


very interesting. i found 1.5 quite verbose too


I saw a Bill Burr clip recently (https://youtu.be/CReRBQ-ieRs?si=oz1MWC7vRac7JX6P). Apparently Elon has also been lamenting declining birth rates and urging people to have more kids, lest the world population nosedive. Part of Bill's response was that "maybe people don't want to bring babies into this f'd up world." That was so hilarious. And it's been a big part of the conversation every single time I have discussed having kids with friends, and that was already a few years ago.

So maybe, just maybe, it would be better not to address the problem directly, but instead to take action that shows there's hope of solving things like climate change and increasing inequality. Why have kids when every year it looks more likely that their future is going to suck? Unless of course you are rich. But how many kids can you have? Elon's been working hard on that for his part, but it's not like he's going to have more than a few dozen at most…


Yeah, a near wholly unregulated social media went really well for us. Maybe that's why people are becoming sceptical.

As ever, the issue is with incentives. The incentives for technological progress aren't aligned with the long-term impact on the planet and its inhabitants. From lead in petrol to microplastics, our history is full of unintended and unanticipated consequences as a result. Technological progress would look a lot different if we said: you can innovate how you like, but we're going to hold a portion of your personal wealth in trust for 5-20 years, and we'll only release it if it's shown your innovation hasn't screwed up society or the environment; otherwise, it's going to be used to clean up your mess.


The whole point of Hammurabi's Code was to introduce skin in the game. If you build a house, you are expected to live in it for some time. This incentivizes you to make sure it doesn't collapse. The same thinking should apply in other domains as well, maybe not in such a direct manner, but nonetheless. Right now, if you have a platform on social media and a cult-like personality, you can say whatever comes to your mind and people will take it as gospel. There's no skin in the game. If you are a person with authority and tell people not to take vaccines, for whatever reason, and it can be shown that people died thanks to your "advice", you should be held responsible for that.

People used to have self-awareness, and they used to consider what their words might lead to… Less so, it seems, in the social media age.


Yeah, reasoning seems basically commodified; I'll never call it, or creativity, "uniquely human" again. As I feel like we were told as kids (30 years ago, if you're my age), the core skill is learning what questions to ask.

I can ask DeepSeek a question and it will reason circles around me before I have one pant leg on. I can ask it “think that through deeper and smarter” five consecutive times in 60 seconds and I dare most people to even wrap their heads around the answer, much less beat it.

But what did I have the judgement to ask about in the first place? And how good am I at using the resulting answers? Those feel like the most important questions to me. (That and “why does OpenAI think they should build a methane burning plant to power Stargate?!?”)


i ask Claude to write my o1 and DS prompts


I have been worrying that Claude overwrites prompts, and the responses don't turn out as well. My thesis is that there's a sweet spot: prompts open enough to let the inference breathe and produce its best result. I haven't tested that thesis systematically, but I have rolled back several Claude-written prompts in systems I'm building to my own simpler prompts. I wonder if others have observed anything similar.
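One way to test that thesis more systematically than eyeballing: run each prompt variant over the same fixed inputs and average a quality score. Everything here is hypothetical scaffolding; `call_model` and `score` stand in for your own inference call and whatever metric you trust.

```python
from statistics import mean

def compare_prompts(prompt_variants, test_inputs, call_model, score):
    """Average the score of each prompt template over a shared input set.

    prompt_variants: {name: template containing "{input}"}
    call_model(prompt) -> str   (placeholder for your inference call)
    score(input, output) -> float  (placeholder for your quality metric)
    """
    results = {}
    for name, template in prompt_variants.items():
        outputs = [call_model(template.format(input=x)) for x in test_inputs]
        results[name] = mean(score(x, out) for x, out in zip(test_inputs, outputs))
    return results
```

Rank the returned dict by value; if the simpler templates consistently win, that is at least weak evidence for the "let the inference breathe" sweet spot.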


On the rightward shift of tech, one essential document seems to me to be the Dickhead’s Manifesto, I mean the Techno-Optimist’s Manifesto, which under the shocking header of The Enemy lists what it later calls “the witches’ brew of resentment” known as “sustainability”, “ESG”, “Sustainable Development Goals”, “social responsibility”, “stakeholder capitalism”, “Precautionary Principle”, “trust and safety”, “tech ethics”, “risk management”, “de-growth”, “the limits of growth”. https://a16z.com/the-techno-optimist-manifesto/


See GlobalNerdy's takedown of that Andreessen manifesto for its similarities with an earlier manifesto Andreessen seems to reference: the early-20th-century Futurist Manifesto, written after its author nearly ran over some cyclists with his car and decided the car was the future. It included the lines "We wish to glorify war — the world's only hygiene — militarism, patriotism, the destructive act of the libertarian, beautiful ideas worth dying for, and scorn for women.

We want to demolish museums and libraries, fight morality, feminism and all opportunist and utilitarian cowardice." https://www.globalnerdy.com/2023/10/18/the-ugly-manifesto-behind-the-techno-optimist-manifesto/


"More haste, less speed" is an adage that is hard to follow at the moment. I am glad these people took the time to actually test DeepSeek. Sure, it will get better, but right now…

https://futurism.com/deepseek-failed-every-security-test


The right level of abstraction for addressing the "intention economy" is the most basic one: data.

If we are able to safeguard our personal data, digital footprint, and inferred data, we will mitigate hyper-manipulation or hyper-nudging. Moreover, a UI/UX that is transparent each time we interact with the system, letting us know what type of data was used to understand, predict, and nudge our intentions, could be one solution. Distributed ledger technologies such as blockchain, if scalable, can help. Designing the UI/UX to nudge the user toward critical thinking is another counter-thought…


I will look in the report for some positive things, but I confess I am concerned about getting lost in the weeds. A list of potential types of problems needs to include 1) negative externalities such as CO2 emissions, 2) deep shifts in political power (the US is a one-dollar, one-vote nation), 3) the danger of not adopting AI fast enough when confronted with endless scientific and technical opportunities, and 4) misallocation of capital. A less comprehensive list, however well intentioned, is likely to deflect attention away from the biggest dangers. As evidence, I offer the current widespread agreement that massive addition of new power sources without meaningful constraints is just fine. Full disclosure: my p(doom) is 10% less than Azeem's.

The section of Sunday's EV #509 on the intention economy is excellent and thought-provoking. I think rejecting the notion that 'might makes right' is a starting point. Just because it is technically possible to use fine-grained human behavior to 1) sell widgets or 2) write social media posts does not mean it needs to be allowed. Bots should not be legal on social media, and their use, like the use of opioids, needs careful regulation and monitoring everywhere. Bots do not have any rights. At 20, my grandfather thought regulating driving was insane, and he had no intention of letting anyone tell him when, where or what to drive. At 60, he was a strong believer in driver's licenses and laws against drunk driving.

I value Jasmine Sun's work. Yet trying to understand the world only through a lens of accelerate vs don't-accelerate is as weak as relying solely on female vs male, East vs West, old vs new, etc. We need to think about intersections. How can we accelerate adoption of AI, decrease power imbalances, and decrease CO2 emissions? It is not hard to chart an engineering path; it is hard to chart a political path, and that is our crisis.


The Meta Cicero AI reminded me of the Special Circumstances AI Minds in Banks's "The Player of Games".

The Empire of Azad was a backward place in need of reform, but it strikes me that the impact of manipulative AI today is in the most advanced economies.
