Last night, Tesla recalled 362,000 cars, saying that its Full Self-Driving Beta may cause crashes. And you will have read how users have been able to push Microsoft’s Bing Chat to the absolute limit. Simon Willison has catalogued some of the more egregious examples. This has seemingly pulled Microsoft’s share price back to where it was before the announcements a few days ago. A week is a long time in AI.
As I suggested in an essay earlier this week, these events are not disconnected:
I would not trust my research to ChatGPT’s unfiltered output more than I would trust my drive from Los Angeles to New York in Tesla’s full self-driving mode.
The approaches used to build both Tesla’s FSD and today’s chatbots have similarities, relying on statistical approaches which learn from observational data. They are not fed knowledge about how the real world works.
And because of this, when statistics meets data in a probabilistic melange, weirdnesses may emerge. Edge cases, the engineers call them. Because the consequences of bizarre chatbot chat are smaller, we’re more willing to push chatbots to the limit than we are our cars.
Could this presage another AI winter? Could the boom of interest and investment lead to inflated expectations that bring a drought of funding and activity?
AI winters are real. We’ve had two before.