in your opinion, what's going to happen when the physical properties of silicon can't sustain moore's law anymore

nothing, for two reasons

first some background: moore’s law states that the number of transistors we can fit on an integrated circuit doubles roughly every two years (moore originally said every year, then revised it). it has proved to be somewhat correct. it happens because we keep figuring out how to manufacture smaller and smaller transistors, which has a few effects, discussed later. eventually we will hit a point where it no longer matters how small we can print transistors, because the fundamental electrical characteristics of silicon break down
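
since the “doubling” claim is just arithmetic, here’s a back-of-the-envelope sketch of it. the 2,300-transistor / 1971 starting point is the intel 4004, the two-year doubling period is the usual statement of the law, and the whole thing is illustration, not a forecast:

```python
# back-of-the-envelope moore's law: start from the intel 4004 (~2,300
# transistors, 1971) and double every two years. illustration, not a forecast.
def projected_transistors(year, base_year=1971, base_count=2_300, doubling_years=2):
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1991, 2011):
    print(year, f"{projected_transistors(year):,.0f}")
```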

in the next couple of years, we will see chips from intel with features printed at around ten nanometers. the theoretical limit of what silicon can handle is somewhere around ~1nm
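
napkin math on how much headroom that leaves, treating those node names as literal feature sizes (which they aren’t, really):

```python
# napkin math: headroom between a ~10nm process and a ~1nm physical limit,
# treating node names as literal feature sizes (they aren't, really)
import math

current_nm, limit_nm = 10.0, 1.0
halvings = math.log2(current_nm / limit_nm)    # ~3.3 more halvings of feature size
density_gain = (current_nm / limit_nm) ** 2    # area scales as the square: ~100x

print(f"~{halvings:.1f} halvings left, ~{density_gain:.0f}x more transistors per unit area")
```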

in circuits this small, you start seeing tunneling effects, which are phenomena of quantum physics wherein electrons pass straight through barriers that should stop them. charge leaks through the gate and between the source and drain of a transistor even when it is supposed to be switched off; electrons just “blink” from one side of the barrier to the other, and you lose the ability to reliably tell an “on” transistor from an “off” one. you’d think a little leakage wouldn’t matter, but it does, and it gets exponentially worse as the barriers get thinner. anyone with advanced physics degrees or deep VLSI knowledge is welcome to chime in with the details.
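
for a rough worked number, the textbook WKB estimate for tunneling through a rectangular barrier is T ≈ exp(−2κd) with κ = sqrt(2m(V−E))/ħ. the 1 eV barrier height below is an assumed round number for illustration, not a real device parameter, but it shows how fast leakage blows up as barriers get thinner:

```python
# rough WKB estimate of electron tunneling probability through a rectangular
# barrier: T ~ exp(-2*k*d), k = sqrt(2*m*V)/hbar for an electron well below
# the barrier top. the 1 eV barrier height is an assumption for illustration.
import math

m_e  = 9.109e-31    # electron mass, kg
hbar = 1.055e-34    # reduced planck constant, J*s
eV   = 1.602e-19    # joules per electron-volt

barrier = 1.0 * eV                          # assumed effective barrier height
k = math.sqrt(2 * m_e * barrier) / hbar     # decay constant, 1/m

for d_nm in (5, 3, 2, 1):
    T = math.exp(-2 * k * d_nm * 1e-9)
    print(f"{d_nm} nm barrier: tunneling probability ~ {T:.1e}")
```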

anyway

the first reason is that there is no real alternative to silicon. we have poured billions into researching things like gallium arsenide as a replacement for silicon in integrated circuits. for general-purpose logic, it doesn’t work as well as silicon. people will try to convince you otherwise, and those people are crackpots

we have poured a lot of time & money into researching quantum computers and discovered that they are only superior for very specific tasks, such as breaking encryption keys and other things of that nature. they will also probably never cost less than a billion dollars each to manufacture, never need anything less than a power plant and vats of liquid helium to operate, etc etc

the logical “next step” might be optical computing. here, you fundamentally change the hardware paradigm from electrons traveling through traces cut in a mediating silicon substrate between transistors to photons traveling through ?? mediated by ?? between “phototransistors”. the underlying principle is that light, in some cases, travels faster than a voltage signal propagates through conductors. i’m going to get a lot of asks saying “durr kremlin but the speed of light is constant and i took high school physics and blah blah blah” and that’s a discussion worth its own post
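
to put toy numbers on that propagation claim: assume an electrical signal moves at about half the speed of light (a typical board-level velocity factor; on-chip interconnect is RC-limited and slower still, so the assumption is generous) and compare it against light over a centimeter:

```python
# toy comparison of propagation time across 1 cm: light in vacuum vs an
# electrical signal at an assumed 0.5c velocity factor (a board-level figure;
# on-chip wires are RC-limited and considerably slower)
c = 299_792_458.0    # speed of light, m/s
d = 0.01             # 1 cm, in meters

t_light  = d / c
t_signal = d / (0.5 * c)

print(f"light in vacuum   : {t_light * 1e12:.1f} ps")
print(f"electrical signal : {t_signal * 1e12:.1f} ps")
```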

this kind of tech is far off. not in our lifetime, not in your children’s lifetime, not in your children’s children’s lifetime

the reason we make transistors smaller is so we can pack them together more closely. this reduces the distance charge must travel in the circuit, making each cycle of the circuit take less time to complete. smaller transistors also generally have lower capacitance and switch correctly at lower voltages, meaning their operating frequency can increase without a corresponding drop in reliability or a blowup in power
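
the textbook version of this argument is dennard scaling; here is the idealized arithmetic (nothing intel actually ships scales this cleanly):

```python
# idealized dennard-scaling arithmetic: shrink every linear dimension and the
# supply voltage by a factor s, then see what happens to delay, frequency,
# and power density. textbook numbers, not a real process.
def dennard(s):
    capacitance = 1.0 / s     # C scales with dimensions
    voltage     = 1.0 / s     # V scales with dimensions
    current     = 1.0 / s     # drive current scales with dimensions
    delay       = capacitance * voltage / current       # ~ 1/s
    frequency   = 1.0 / delay                            # ~ s
    power       = capacitance * voltage**2 * frequency   # ~ 1/s^2 per transistor
    density     = s ** 2                                  # transistors per unit area
    return frequency, power * density                     # clock up, power/area flat

for s in (1.0, 1.4, 2.0):
    f, p = dennard(s)
    print(f"shrink {s:.1f}x: frequency x{f:.2f}, power per area x{p:.2f}")
```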

these are all nice things, but they are only one piece of the puzzle. how you lay out these transistors is a much more critical and relevant problem. taking a previous VLSI design and shrinking it only works to a point, after which you must redesign the layout entirely. intel’s “tick-tock” development cadence follows this model: the “tick” shrinks an existing microarchitecture onto a smaller process, the “tock” redesigns the microarchitecture on that process. long before and long after we hit the fundamental limits of silicon, the problem will be laying out our CPU circuits in such a way that we can actually eke out the performance provided by smaller transistors. this is a much, much harder problem to solve than “how do i make the transistor smaller”

the second, more pragmatic reason is that CPUs are fast enough already. there are precious few problems that can be solved with one faster discrete processor that can’t be solved with a million slower ones linked together
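
for what it’s worth, here is a toy version of the “million slower ones linked together” idea for an embarrassingly parallel job (the workload is made up):

```python
# toy illustration of splitting an embarrassingly parallel workload across
# many slower workers instead of one fast one. the workload is made up.
from multiprocessing import Pool

def chunk_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 10_000_000, 8
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(chunk_sum, chunks))
    print(total)
```

obviously not everything splits this cleanly, and the stuff that doesn’t is exactly the small set of problems where single-processor speed still matters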

the whole tiny-transistor thing is really more of a marketing phenomenon than anything else