🔮 Apple’s Intelligence; solar beats oil; are companies ready?; plant minds & chess moves ++ #478
An insider’s guide to AI and exponential technologies
Hi, I’m Azeem Azhar. In this week’s edition, we explore Apple’s brilliant move — Apple Intelligence.
And in the rest of today’s issue:
Need to know: Tongwei
The solar manufacturer rivalling Big Oil.

Today in data: OpenAI’s rapid growth
OpenAI has reached $3.4 billion annualised revenue – double that of late last year.

Opinion: Globalisation’s challenges
Globalisation is struggling due to supply chain disruptions and tech races – what are the effects?
Sunday chart: Chinese Solar is catching Big Oil
Have you heard of Tongwei? How about BP? Everyone knows the Seven Sisters of Big Oil, but there’s a new rival on the block – let’s call them China’s Solar Seven.
David Fickling points out that the Solar Seven are now producing amounts of useful energy comparable to the Big Oil companies. If you consider the total amount of energy each firm can produce without major reinvestment (new reserves for the oil majors, replacement panels for the solar makers), “clean power moves clearly into the lead.” (See chart.)
This marks the beginning of a profound shift from oil to solar energy. Tongwei is already planning to double its current solar production capacity. My view is that solar producers will have less economic and political clout than Big Oil, which is probably a good thing.
Spotlight
Ideas we’re paying attention to this week
Apple’s Intelligence. Apple announced its long-awaited artificial intelligence initiative, Apple Intelligence. The firm knocked it out of the park: the consumer proposition stacks up, the technology is impressive and, strategically, Apple has played it well.

Starting with the product, Apple has effectively integrated an LLM layer into your phone, enabling interaction through Siri. One demo shows Siri tackling a daily task we all face: gathering information across multiple apps and making sense of it. Of course, scepticism is warranted with demos, but what Apple showed is more than achievable when you can access the OS, so I suspect it aligns with reality.

On the tech side, two features are impressive: fitting small language models onto iPhones, and how Apple deals with privacy. Most of Apple’s processing will be done on the device using a model with 3 billion parameters. As I said back in January, smaller models are good enough for most environments. If a task is too complicated, it will be delegated to Apple’s Private Cloud Compute. Apple has made its cloud as private as current technology allows: it is stateless, meaning all user data is deleted after computation, and Apple has no way of accessing user data. As Matthew Green, professor of cryptography at Johns Hopkins, said, “If you gave an excellent team a huge pile of money and told them to build the best ‘private’ cloud in the world, it would probably look like this.”[1] If a query is more complicated than even Apple’s proprietary models on the private cloud can handle, users can explicitly choose to send it to third-party systems, such as OpenAI’s GPT-4o, for processing.[2] In other words, you’ll only be sharing your requests when you agree to do so. Very clever, all round.

Where the company’s genius comes together is business strategy. Apple has taken full advantage of vertical integration: its in-house chips allow for a level of privacy few other companies could match. It has created a plug-in architecture that allows apps to be built with Apple Intelligence functionality, likely bringing in revenue to both OpenAI and Apple. Indeed, both companies are so confident in the partnership’s success that no money has changed hands. It is the first Worldwide Developers Conference to impress me in years – I immediately checked to see if my devices were up to spec. This may be the boost that Apple needs: slumping iPhone growth and concerns about the revenue impact of right-to-repair rules have caused some pessimism, but Apple Intelligence offers a reason to buy.
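For intuition, here’s a minimal sketch of that tiered flow, assuming a pre-scored query complexity and wholly hypothetical function names – nothing here is Apple’s actual API:

```python
# A minimal sketch of the tiered routing described above. Every name is
# hypothetical -- none of this is Apple's real API. The point is the flow:
# on-device 3B model first, stateless Private Cloud Compute next, and a
# third-party model only after explicit per-query user consent.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Request:
    prompt: str
    complexity: float  # assumed pre-scored: 0.0 (trivial) to 1.0 (very hard)

def on_device_model(prompt: str) -> str:
    return f"[on-device 3B model] {prompt}"

def private_cloud_compute(prompt: str) -> str:
    # Stateless by design: user data is deleted once the response returns.
    return f"[Private Cloud Compute] {prompt}"

def third_party_model(prompt: str) -> str:
    return f"[third-party model, e.g. GPT-4o] {prompt}"

def route(req: Request, consent: Callable[[str], bool]) -> str:
    if req.complexity < 0.5:
        return on_device_model(req.prompt)        # private, local
    if req.complexity < 0.9:
        return private_cloud_compute(req.prompt)  # private, stateless cloud
    if consent(req.prompt):                       # explicit opt-in, per query
        return third_party_model(req.prompt)
    return private_cloud_compute(req.prompt)      # decline -> stay in-house

print(route(Request("Summarise my unread emails", 0.2), lambda p: False))
```

The design choice worth noticing is the default: if the user declines the third-party hop, the query stays inside Apple’s private stack rather than leaking out.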
See also: A research paper demonstrated a method for eliminating matrix multiplication from LLMs, allowing researchers to run billion-parameter models on 13W – less than the roughly 20W the human brain consumes.[3] This would make small models run even better on-device.
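How can a dense layer work without multiplication? In simplified form, the trick is to constrain weights to the ternary set {-1, 0, +1}, so each “multiply” becomes an add, a subtract or a skip. A toy illustration of that one ingredient – not the paper’s full method:

```python
# Toy illustration of the matmul-free idea: with weights constrained to
# {-1, 0, +1}, a dense layer needs only additions and subtractions.
# This simplifies one ingredient of the paper's approach, not all of it.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)               # input activations
W = rng.integers(-1, 2, size=(4, 8))     # ternary weights in {-1, 0, +1}

# Standard layer: y = W @ x (one multiplication per weight).
y_matmul = W @ x

# MatMul-free equivalent: add where w=+1, subtract where w=-1, skip w=0.
y_addonly = np.array([
    x[row == 1].sum() - x[row == -1].sum() for row in W
])

assert np.allclose(y_matmul, y_addonly)  # same result, no multiplies
```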
Rhyme or reason? For all the talk of a big AI leap in the next few years, a key question remains: can LLMs reason independently, or is some other method required? According to researchers at Arizona State University, LLMs are broad knowledge sources that rely on pattern recognition, not deep understanding, and thus need external verification for effective reasoning. But is something else missing? Terence Tao, a Fields Medalist, says AI has great potential as a mathematical co-pilot that enhances efficiency by checking proofs and formalising high-level instructions, but it doesn’t replace the human intuition and creativity required to generate original ideas. If Tao is correct, LLMs lack something crucial beyond generation, verification and critique – something that may not be easily found.
Data chameleon. Artists, writers and data creators are suing AI model makers. But how do you prove that your data has been used? Identifying data used at inference time is straightforward: the LLM draws directly on one or a few identifiable sources. The problems AI-powered search startup Perplexity encountered after plagiarising a Forbes investigative journalism piece are a case in point. Proving the use of data for model training is far more challenging, however, as individual data points are needles in a haystack of billions. A new method has emerged that promises to statistically show which data has been used to train LLMs. It could be invaluable in litigation and may even encourage it. Model makers need more data, but they may end up with less, exacerbating the existing problem of the data wall – the notion that LLMs are running out of data to improve their training.
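The piece doesn’t name the statistical method, but one published technique in this genre, Min-K% Prob (Shi et al., 2023), gives the flavour: a model assigns suspiciously high probability even to the least likely tokens of text it memorised during training. A toy sketch, with an illustrative, uncalibrated threshold:

```python
# Hedged sketch of one published training-data detection technique,
# Min-K% Prob (Shi et al., 2023) -- the newsletter does not say which
# method it means. Intuition: a model keeps even the *least* likely
# tokens of memorised text at unusually high probability.
import math

def min_k_percent_score(token_logprobs: list[float], k: float = 0.2) -> float:
    """Average log-probability of the lowest-k fraction of tokens."""
    n = max(1, int(len(token_logprobs) * k))
    lowest = sorted(token_logprobs)[:n]
    return sum(lowest) / n

def likely_in_training_set(token_logprobs: list[float],
                           threshold: float = -4.0) -> bool:
    # In practice the threshold is calibrated on known member/non-member texts.
    return min_k_percent_score(token_logprobs) > threshold

# Toy example: memorised text keeps even its rare tokens fairly probable.
memorised = [-0.3, -0.5, -0.2, -1.1, -0.8, -0.4]
unseen    = [-0.4, -6.2, -0.3, -7.5, -0.9, -5.8]
print(likely_in_training_set(memorised))  # True
print(likely_in_training_set(unseen))     # False
```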
Data
In Qwen LLM models, sensitive questions asked in Chinese are over 80% less likely to be refused than the same questions asked in English; the Chinese answers often take a defensive or propagandistic tone, while English responses tend to refuse or evade.
OpenAI has reached $3.4 billion annualised revenue – double that of late last year.
Electric buses were 26% of global bus sales in 2023.
And you can now buy an electric vehicle in the US with a range of over 300 miles for less than the cost of the average new car.
The National Bureau of Economic Research says that, since 1980, automation has contributed to an estimated 52% of the increase in economic inequality between different groups in the US. It also estimates that 60-90% of the potential productivity gains from automation are lost to inefficient rent dissipation – economic value destroyed when resources are spent capturing rents rather than creating output.
From January to April 2024, wind and solar energy accounted for 99% of the new utility-scale electricity generation capacity in the US.
Short morsels to appear smart at dinner parties
🤕 The Stanford Internet Observatory is winding down amid sustained pressure from House Republicans.
🦾 Researchers have developed a humanoid robot that can (sort of) drive.
🔻 The video games industry set the record for most layoffs so far this year.
♟️ An analysis of over 5 billion chess games finds the rarest move in the game.
☘️ Do plants have minds? A charming read.
🤝 The stupidly simple beginnings of the Microsoft and OpenAI partnership, ft. Elon Musk.
⚖️ Is inequality an economic externality?
🤨 The Pentagon ran an anti-vax campaign – no, really, they did.
End note
Are we talking enough about the speed with which powerful AI systems will move into firms, and about what the real implications will be?
In my briefings to boards and leadership teams this summer, I’ve drawn attention to this. The provocation is this: “What assumptions are you making about the rate at which your organisation will be able to absorb this new cognitive capacity?” Much of what firms do is bounded less by financial capital or material resources than by their ability to explore a wider knowledge frontier.
Here’s an example: why did we only start to use LED lights in the past decade or so, rather than 80 years ago, even though they have much better characteristics than incandescent filament bulbs? (LEDs are longer lasting, energy efficient, safer, more tunable, smaller….)
The simple answer is that we didn’t work out how to create bright enough LEDs until the 1990s, and it took another 20 years of experimentation to learn how to create nice white light at a low cost.
In other words, we lacked the knowledge to build modern LED bulbs. The research was expensive because the intersection of affordable research, available resources and sufficiently skilled people was tiny.
That is about to change – we’re going to drive down the cost of cognitive exploration (by augmenting humans with more-or-less capable AI systems). Bosses will have a new organisational muscle to flex – the research and discovery muscle. But how many are ready to make use of it?
Let’s discuss!
😎 Azeem
P.S. I’d really like you to recommend Exponential View to a few friends. Click the share button.
What you’re up to – community updates
Sam Butler-Sloss co-published RMI’s annual energy transition presentation, the Cleantech Revolution.
Gianni Giacomelli argues that AI-augmented collective intelligence can enhance problem-solving by delivering timely, relevant information and filtering out noise.
Share your updates with EV readers by telling us what you’re up to here.
[1] In short, your data is private on your device. It is also private (and not inspectable by Apple) when it goes to Private Cloud Compute. If you send it to a third party, they can see the query, but Apple will explicitly ask you to approve that transfer.
[2] More options are likely coming in the future, with talks occurring between Apple and providers including Google, Anthropic and Baidu.
[3] This method has only been proven to maintain performance up to 2.7-billion-parameter models so far.