5 Comments
David Peto:

I'd argue this is why we need innovation in the software architectures of the data processing systems that utilise our compute power. If we are running out of road on how much more powerful we can make microchips, then we should look at software architectures as the area that needs to evolve. We haven't really done any major innovation in things such as data indexing since the '60s.
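A toy sketch of what I mean: the hash index, one of those '60s-era ideas, is still the skeleton of most lookup paths today. The names and records below are made up purely for illustration:

```python
# Toy hash index: the basic idea behind hash-based lookup,
# a technique dating back to the 1950s-60s.
# All record data here is invented for illustration.

class HashIndex:
    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def insert(self, key, row):
        # Hash the key to pick a bucket, store (key, row) there.
        b = hash(key) % len(self.buckets)
        self.buckets[b].append((key, row))

    def lookup(self, key):
        # Scan only one bucket instead of the whole table.
        b = hash(key) % len(self.buckets)
        return [row for k, row in self.buckets[b] if k == key]

idx = HashIndex()
idx.insert("alice", {"id": 1})
idx.insert("bob", {"id": 2})
print(idx.lookup("alice"))  # [{'id': 1}]
```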

Rodolfo Rosini:

Yes, the issue was that in order to leverage software scaling we need to replace what we are already using, so it was usually easier to just buy a new microprocessor to get that 10x. The problem with software scaling, beyond the cost, is that it's a one-off: we kick the can down the road and then we are there again in 4-5 years. I'm not discounting software scaling, as some think there are several orders of magnitude of potential growth there, just that we need to rethink computing at a much more fundamental level, and the sooner the better.
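A toy illustration of the one-off point, in Python; the exact speedup is machine-dependent and this is only a sketch:

```python
# Illustrates the "one-off" nature of software scaling:
# an optimized rewrite buys a constant factor once, after
# which further gains must come from hardware again.
import timeit

def slow_sum(xs):
    total = 0
    for x in xs:  # interpreted loop, lots of per-step overhead
        total += x
    return total

xs = list(range(1_000_000))

t_slow = timeit.timeit(lambda: slow_sum(xs), number=10)
t_fast = timeit.timeit(lambda: sum(xs), number=10)  # C-level builtin

print(f"naive: {t_slow:.3f}s  builtin: {t_fast:.3f}s  "
      f"speedup: {t_slow / t_fast:.1f}x")  # a one-time constant factor
```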

Tanj:

There is another factor: hardware optimization. If you look at word processing, there is no need to optimize the software because the workload is not increasing. Indeed, more WP is done now in browsers with inefficient languages than by the pure apps written in C.

However, that WP (whether browser or app) will have AI added. This will use inferencing built into the SoC (look at all the mystery silicon already on an Apple A15), which runs several orders of magnitude more efficiently than software on the CPU or even on a GPU. This is what Moore's Law is mostly fueled by now: special acceleration. The leading-edge innovation is going to require this, and the chips have plenty of room for it. Making that advance saves power (it is starting to eat into legacy functionality as well as new) and is not blocked by physics. It just needs manufacturing quality to keep improving so that larger chips are cost-effective (as well as welcoming improvements in density and device power).
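A back-of-envelope version of that efficiency gap; the pJ/op figures below are assumed round numbers for illustration, not measurements of any real chip:

```python
# Back-of-envelope energy comparison for one inference pass.
# The pJ/op figures are ASSUMED round numbers, not measured
# values for any specific CPU, GPU, or NPU.
PJ_PER_OP = {
    "cpu (fp32 software)":    100.0,  # assumed
    "gpu (fp16)":              10.0,  # assumed
    "npu (int8 accelerator)":   0.5,  # assumed
}

ops_per_inference = 1e9  # e.g. a ~1 GOP model, assumed

for device, pj in PJ_PER_OP.items():
    millijoules = ops_per_inference * pj * 1e-12 * 1e3
    print(f"{device:24s} ~{millijoules:7.2f} mJ per inference")
# With these assumptions the accelerator comes out roughly
# two orders of magnitude cheaper per inference than the CPU.
```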

In the near term we are not near the limits. The next 5 years have mapped-out improvements in process size and energy efficiency, as well as falling defect rates. In 5 years, will another 5 years of improvements seem obvious? Perhaps. The kind of improvements needed will likely change, due to acceleration and the current wave of bringing memory closer to computation.

Rodolfo Rosini:

On a long enough timescale the industry moves from general computing to specialized chipsets, and once a new platform is discovered it goes back again. So yeah, I think you are correct about optimization; it's the cheapest way to scale hardware using existing technology, and to some extent it's where all the money for startups has gone lately.

You are also correct re the 5-year timelines. Issues will come to Moore's Law in the 2030s, and they will not be evenly distributed, as even today we are still using chipsets that are very old. The problem one can infer from the notes is that no new architecture goes from lab demonstration to commercial scale in less than 20 years, and right now there is a gap between the end of Moore's and the beginning of QC for sure, and possibly beyond that, as QC might only be a co-processing facility for some very specific computations.

Usually the horizon where predictions start to break down in semiconductors is ~7 years (source: my own observation looking at old R&D talks, so take it with a pinch of salt), so right now there is a lot of uncertainty about what happens in 2030 and beyond. My issue is that the problem is currently underinvested.

Tanj:

I agree, but observe that the new platform IS the use of accelerators. 15 years ago there was a lot of chatter about dark silicon, but it morphed into filling an SoC with accelerators and figuring out the fabric and cache arrangements to hold it together. The anxiety about it all needing to be dark to avoid melting faded because mobile processes got more efficient (not following Intel trends of the time) and acceleration was so effective that clocks stayed moderate and chips could be cooled uniformly across a surface where multiple accelerators can operate in parallel. There simply is no "going back" in this new world. The next step in my view is increased integration with adjacent memory, and new kinds of accelerator (inferencing, 3D world tracking and projection, whatever).
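A quick roofline-style sketch of why adjacent memory is the next step; the bandwidth and peak-compute numbers are assumed for illustration only:

```python
# Roofline-style sketch: when does a kernel stop being
# compute-bound and start waiting on memory? Peak compute
# and bandwidth figures are ASSUMED, not real chip specs.
peak_ops_per_s = 10e12   # 10 TOPS accelerator, assumed
dram_bw        = 100e9   # 100 GB/s off-chip DRAM, assumed
near_mem_bw    = 1e12    # 1 TB/s stacked/near memory, assumed

def attainable(ops_per_byte, bw):
    # Attainable throughput = min(peak compute, intensity * bandwidth)
    return min(peak_ops_per_s, ops_per_byte * bw)

for intensity in (1, 10, 100):  # arithmetic intensity, ops per byte moved
    far  = attainable(intensity, dram_bw)
    near = attainable(intensity, near_mem_bw)
    print(f"intensity {intensity:3d} ops/B: "
          f"DRAM {far/1e12:5.2f} TOPS, near-mem {near/1e12:5.2f} TOPS")
# Low-intensity workloads are starved by far memory but can
# approach peak compute once memory sits next to the logic.
```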
