Is Moore's Law Still Relevant in the Age of Semiconductors and Robosourcing?

In summary: The conversation discusses the current state of Moore's Law, whether human development could come to an end, and the impact of AI and other technologies on society. The main point of contention is whether Moore's Law has truly ended: some argue it refers only to the number of transistors in a dense integrated circuit doubling every two years, while others hold that it also encompasses the increase in computing power. The conversation also touches on potential uses of quantum computing, such as quantum encryption in satellites.
  • #1
Gear300
TL;DR Summary
What is going on with it now and what's the projection?
I'm sure this gets asked a lot. CPUs and GPUs still kind of mystify me, since I do not know their science.
It came to me after studying some computational linguistics that it might be possible that in the next 100 years, we might have smart-translating, smart-talking machines without a theory to semantics. From what I garner, we have the grammar theory (Chomsky &c), but semantics is a bunch of "ad-hoc" or nonparametric stats and data science. Of course, this is not my most stupendous idea of future progress, but the idea is that with enough CPU power, machines can possibly imitate anything without theory. And as some people point out, this could put the labor market in a quandary. "There will be a mass extinction of the labor force, or of the people!" is what some nuts-and-crank person might say. But with automatic drones replacing 'copter cameramen, and automatic everything else, it might be the case.

From the democracies and tyrants of the Greek poleis to arguments in the Roman forum, classical Greece and early Rome can be argued to have been a people's history. But in medieval Europe the people's history was nil until Martin Luther's Protestant Reformation. So a people's history can rise and flag. Counterforces to the robosourcing trend are social media and the like. Any opinions? And what is the current state of Moore's Law?
 
  • #2
Gear300 said:
Summary: What is going on with it now and what's the projection?

what is the current state of Moore's Law?
As far back as I can remember, pundits have said every year, "Moore's Law is coming to an end." I expect 2022 is no exception.

The thing pundits forget is human ingenuity. People keep inventing completely new ways of doing stuff.

What are your feelings about human development coming to an end?
 
  • #3
anorlunda said:
What are your feelings about human development coming to an end?
No idea really. I heard in a podcast something about a confluence of AI and music theory. So somebody who's played music most of their life might have then studied music theory and somehow made it into that research. Who knows, it might be necessary to study right-brained maths.
 
  • #4
Gear300 said:
and what is the current state of Moore's Law?
As you would expect, the Wikipedia entry is pretty up to date... :wink:

[Attached: plot of transistor counts over time, from the Wikipedia article]

https://en.wikipedia.org/wiki/Moore's_law
 
  • #5
Gear300 said:
No idea really.
Sigh, I feel like Leonard on The Big Bang Theory who holds up the SARCASM sign. We don't have a sign or an unambiguous emoticon for SARCASM. Come to think of it, a SARCASM sign might be helpful.
 

  • #6
In terms of the way people typically think of Moore's Law, it ended about 15 years ago; that is when processors stopped getting faster. But as the graph indicates, Moore's Law was really a statement about transistor density, and that is still increasing. Until about 2000-2005, chips were getting faster as they were getting denser, and it is that association that ended. What it means is that we're no longer getting the improvements in computing power that Moore's Law is associated with: more transistors theoretically means more power, but it's a lot harder to use them running in parallel than serially.
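To put a number on why parallel transistors are harder to exploit, here is a minimal Python sketch of Amdahl's law (my illustration, not anything from the thread): the speedup from N cores is capped by whatever fraction of the work stays serial.

Code:
# Amdahl's law: overall speedup on n cores when only a fraction p of
# the work can run in parallel. Values are illustrative assumptions.
def amdahl_speedup(p, n):
    """Speedup = 1 / ((1 - p) + p / n)."""
    return 1.0 / ((1.0 - p) + p / n)

for cores in (1, 2, 4, 8, 16):
    print(cores, "cores:", round(amdahl_speedup(0.9, cores), 2))
# Even with 90% of the work parallelizable, 16 cores give only ~6.4x.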
 
  • #7
russ_watters said:
In terms of the way people typically think of Moore's Law, it ended about 15 years ago; that is when processors stopped getting faster. [...]
Nope. It actually has no specific reference to computers at all.


Moore's law is the observation that the number of transistors in a dense integrated circuit (IC) doubles about every two years.
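Taken literally, that observation is just a compounding rule. A toy Python projection (the 1971 Intel 4004 starting point of roughly 2,300 transistors is a well-known data point, used here purely for illustration):

Code:
# Project transistor counts under "doubling every two years".
start_year, start_count = 1971, 2300   # Intel 4004, ~2,300 transistors
for year in range(start_year, 2024, 10):
    doublings = (year - start_year) / 2
    print(year, f"{start_count * 2 ** doublings:.3g}")
# By 2021 this gives ~7.7e10, the right order of magnitude for the
# largest chips on the Wikipedia chart.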
 
  • #8
berkeman said:
As you would expect, the Wikipedia entry is pretty up to date... :wink:
anorlunda said:
Sigh, I feel like Leonard on The Big Bang Theory who holds up the SARCASM sign. We don't have a sign or an unambiguous emoticon for SARCASM. Come to think of it, a SARCASM sign might be helpful.
Lol. Thanks for the update. And yeah, generally speaking, I guess I wouldn't stop human development for the sake of it.

russ_watters said:
In terms of the way people typically think of Moore's Law, it ended about 15 years ago; that is when processors stopped getting faster. But as the graph indicates, Moore's Law was really a statement about transistor density, and that is still increasing. [...]
Funny that the OWID plot doesn't slow down much after that point. And of course there are new innovations in computing. I wasn't too sure about quantum computing, but I've read up on some start-ups and also 'quantum exchanges', which try to do for quantum computers what early networks did for 'classical' computers. As far as their uses go, I've heard of at least one, which would be quantum encryption in satellites. Deterrence strategy on the satellite grid is high-priority, so it would be natural to employ it if feasible. (If I have my facts right here, then I guess it is kind of amusing how deterrence norms drive progress.)
 
  • #9
anorlunda said:
Nope. It actually has no specific reference to computers at all.
So... the first few words of my post ("the way people typically think of") were critical to it. And the wiki you linked does indeed explain why computing power and integrated circuits (which are what provide computing power to computers) are related:
...a prediction by Moore's colleague, Intel executive David House. In 1975, House noted that Moore's revised law of doubling transistor count every 2 years in turn implied that computer chip performance would roughly double every 18 months[31] (with no increase in power consumption).
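House's 18-month figure can be sanity-checked with one line of arithmetic: exponential growth rates add as reciprocals of doubling periods. A quick Python version (my rearrangement, not House's actual calculation):

Code:
# If transistor count doubles every 24 months and overall performance
# doubles every 18 months, per-transistor speed must supply the rest:
#   1/18 = 1/24 + 1/x  =>  x = 72 months
count_period, perf_period = 24.0, 18.0
speed_period = 1.0 / (1.0 / perf_period - 1.0 / count_period)
print(speed_period)   # 72.0 -- speed alone doubling every ~6 years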
 
  • #10
Gear300 said:
I wasn't too sure about quantum computing, but I've read up on some start-ups and also 'quantum exchanges', which try to do for quantum computers what early networks did for 'classical' computers. As far as their uses go, I've heard of at least one, which would be quantum encryption in satellites. [...]
Firstly, quantum communication (such as quantum key distribution, QKD) is not directly related to quantum computing; they have some enabling technologies in common, but that is all. It is also a much more mature field; QKD has been in practical (albeit limited) use for a number of years.

Secondly, quantum computers are, as far as we know, good at SOME problems (e.g., solving problems in quantum chemistry), but they are no faster (in reality slower) than a classical computer for most applications.
The range of possible applications will probably widen as we develop new algorithms, but it is extremely unlikely that quantum computers will ever replace classical computers; quantum processors will be used alongside regular CPUs and called upon for specific problems (similar to how GPUs are used for, e.g., machine learning today).

Currently, I don't think there is a clear route to a technology faster than silicon-based CMOS, although there are lots of potential candidates.
 
  • #11
I have heard that at around a length of 20 atoms, quantum effects start to degrade performance. That would be about 2 nm, which is where current state-of-the-art feature sizes sit, so this would seem to be where Moore's Law hits its limit.
 
  • #12
swampwiz said:
I have heard that at around a length of 20 atoms, quantum effects start to degrade performance. That would be about 2 nm, which is where current state-of-the-art feature sizes sit, so this would seem to be where Moore's Law hits its limit.
And?

Wikipedia: Most forecasters, including Gordon Moore,[114] expect Moore's law will end by around 2025.[115][112][116] Although Moore’s Law will reach a physical limitation, some forecasters are optimistic about the continuation of technological progress in a variety of other areas, including new chip architectures, quantum computing, and AI and machine learning.[117][118]
 
  • #13
swampwiz said:
I have heard that at around a length of 20 atoms, quantum effects start to degrade performance.
Then we can abandon device bulk properties and finally use the quantum effects to advantage.
 
  • #14
As @russ_watters pointed out in #6, we are dealing with two definitions of Moore's Law.
  1. Relating to transistors per unit area on chip.
  2. Relating to digital computation performance.
Definition 2 is fueled as much by innovation and ingenuity in approach as by transistor count. So, I expect definition 2 to persist past 2025.

I gave up when they started producing multi-core CPU chips. Are they faster? Yes or no, depending on what you're trying to do and how you go about it. So it becomes a matter of definition.
 
  • #15
russ_watters said:
[...] Until about 2000-2005, chips were getting faster as they were getting denser, and it is that association that ended. [...]
So if the chips are getting more and more transistor-dense, why haven't there been proportionate increases in speed? Is it that the transistor density only speeds up the electron flow between transistors, but the transistors themselves are the slowpokes? Is it that the transistor density reaches a point where the chips can't be properly cooled?
 
  • #16
In addition to QM, you have GR to deal with, right?

Keep up Moore's Law long enough and you create a black hole: about 10^66 transistors in a cubic centimeter would do it.

Very crudely, if 5×10^10 is the current transistor count from the Wiki chart, we have about 184 doublings to go, or around 370 years.
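For what it's worth, the doubling arithmetic in that post checks out; the 10^66 figure is taken as given here, and I'm not vouching for the black-hole physics:

Code:
import math

current = 5e10   # transistors, from the Wiki chart
target = 1e66    # the post's hypothetical black-hole count per cm^3
doublings = math.log2(target / current)
print(round(doublings), "doublings,", round(doublings * 2), "years")
# ~184 doublings; at one doubling per two years, ~367 years.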
 
  • #17
swampwiz said:
So if the chips are getting more and more transistor-dense, why haven't there been proportionate increases in speed?
Speed is not the only property with value. As you mentioned, heat is another. So are cost, reliability, time-to-market, useful lifetime, pinout, size, radiation hardening, onboard I/O, onboard memory, and others I don't know about.

Then there is the definition of speed. Is a CPU with four cores four times faster than a single-core CPU? Is a CISC chip faster than a RISC or a DSP chip?

In my opinion, when the answer to a question includes the phrase "it depends on how you define it," I lose interest in the question.
 
  • #18
swampwiz said:
So if the chips are getting more and more transistor-dense, why haven't there been proportionate increases in speed? Is it that the transistor density only speeds up the electron flow between transistors, but the transistors themselves are the slowpokes? Is it that the transistor density reaches a point where the chips can't be properly cooled?
Heat is indeed a lot of it; heat is directly proportional to clock speed and, obviously, to transistor count. Making transistors smaller reduces the heat each one generates, which left total heat roughly a linear function of clock speed. When PCs first came out, the processor didn't even need a heatsink; today they have big heatsinks with fans. It's not a coincidence that the speed increases stopped about the same time heatsink size peaked: manufacturers really don't want to sell water- or refrigerant-cooled home PCs. See Dennard scaling, which is an implication/daughter prediction of Moore's Law: https://en.wikipedia.org/wiki/Dennard_scaling
(Note: I'm not 100% certain of that math, as the "derivation" section in that article appears incomplete/inconsistent.)

The speed of light is also a problem. Part of the reason processors use clock multipliers relative to their motherboards is that it would be impossible to keep the two in sync at today's speeds: the speed of light limits how far apart components can be while still completing two-way communication within a clock cycle. Pipeline length on the processor is a factor as well.
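A back-of-envelope version of the speed-of-light point: light only covers so many centimeters per clock cycle, and a synchronous signal has to make the round trip. The example frequencies below are mine:

Code:
# Distance light travels in one clock cycle at various frequencies.
c = 3.0e8   # speed of light, m/s
for f_ghz in (0.1, 1.0, 4.0):
    cm_per_cycle = c / (f_ghz * 1e9) * 100
    print(f_ghz, "GHz:", cm_per_cycle, "cm per cycle")
# At 4 GHz light covers ~7.5 cm per cycle; a signal that must go out
# and come back can only span a few centimeters per cycle.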
 
  • #19
russ_watters said:
Heat is indeed a lot of it; heat is directly proportional to clock speed
Is it? Or is it the square?
 
  • #20
Vanadium 50 said:
Is it? Or is it the square?
I ~ C·V·f and P ~ V·I, so P ~ C·V²·f
 
  • #21
Vanadium 50 said:
Is it? Or is it the square?
My understanding is that each cycle/switch of a transistor has a fixed heat dissipation, so doubling the cycles doubles the heat dissipation. That's separate from transistor size/density, voltage, etc., holding them constant -- which you aren't guaranteed to be able to do. This says it is linear in frequency and quadratic in voltage (the equation Mike posted above):
https://en.wikipedia.org/wiki/Processor_power_dissipation
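Plugging illustrative numbers into that P ~ C·V²·f relation makes the scaling visible; the capacitance and voltage below are assumptions, not data for any real chip:

Code:
# Dynamic (switching) power of CMOS logic: P ~ C * V^2 * f.
C = 1e-9   # effective switched capacitance in farads (assumed)
V = 1.2    # supply voltage in volts (assumed)
for f in (1e9, 2e9, 4e9):
    print(f / 1e9, "GHz:", round(C * V * V * f, 2), "W")
# Linear in f at fixed V; raising V to reach higher f (as overclockers
# do) adds a quadratic penalty on top.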
 
  • #22
anorlunda said:
I gave up when they started producing multi-core CPU chips. Are they faster? Yes or no, depending on what you're trying to do and how you go about it. So it becomes a matter of definition.
I paid $1000 for a 4-core computer for generating graphics and was disappointed. If they are utilizing those extra cores, there isn't much sign of it. A graphics background rendering process grinds everything to a halt. Maybe what they need to do is have separate memory and storage; that is, have four or more separable computers built into a single device. If you've got 4 minutes of graphics, have four copies of all the files and have each computer do one minute's worth.
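The "four separable computers" idea is essentially what an embarrassingly parallel render farm does: split the job into independent chunks. A minimal Python sketch, where render_chunk is a hypothetical stand-in for the real rendering work:

Code:
from multiprocessing import Pool

def render_chunk(chunk_id):
    # Placeholder: render this chunk's frames and return an output path.
    return f"chunk_{chunk_id}.mov"

if __name__ == "__main__":
    with Pool(processes=4) as pool:                  # one worker per core
        results = pool.map(render_chunk, range(4))   # one chunk per worker
    print(results)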
 
  • #23
I think the question of heat versus speed in chips is interesting. I tried looking it up.


https://en.wikipedia.org/wiki/Overclocking#Cooling
All electronic circuits produce heat generated by the movement of electric current. As clock frequencies in digital circuits and voltage applied increase, the heat generated by components running at the higher performance levels also increases. The relationship between clock frequencies and thermal design power (TDP) are linear. However, there is a limit to the maximum frequency which is called a "wall". To overcome this issue, overclockers raise the chip voltage to increase the overclocking potential. Voltage increases power consumption and consequently heat generation significantly (proportionally to the square of the voltage in a linear circuit, for example); this requires more cooling to avoid damaging the hardware by overheating. In addition, some digital circuits slow down at high temperatures due to changes in MOSFET device characteristics. Conversely, the overclocker may decide to decrease the chip voltage while overclocking (a process known as undervolting), to reduce heat emissions while performance remains optimal.
 
  • #24
On second thought, overclocking is off topic. This thread is about Moore's Law. Overclocking refers to increasing the clock speed on the same technology. Moore's Law refers to changing technology and designs over the years. Sometimes design improves by moving less charge around in the first place.

So, help please. Who knows semiconductor/chip design issues better than we do? The question is heat generated versus Moore's Law.

Edit: Think of the CISC versus RISC history. RISC chips achieved better performance via architecture, not clock speed. RISC needs fewer clock cycles per instruction.
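The CISC-versus-RISC point can be made concrete with the classic performance equation: execution time = instruction count × cycles per instruction (CPI) / clock rate. All numbers below are made up for illustration:

Code:
def exec_time(instructions, cpi, clock_hz):
    return instructions * cpi / clock_hz

# Hypothetical chips at the same 2 GHz clock: the RISC design runs
# more instructions but needs far fewer cycles per instruction.
cisc = exec_time(1.0e9, cpi=4.0, clock_hz=2.0e9)
risc = exec_time(1.5e9, cpi=1.2, clock_hz=2.0e9)
print(cisc, "s vs", risc, "s")   # 2.0 s vs 0.9 s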
 

FAQ: Is Moore's Law Still Relevant in the Age of Semiconductors and Robosourcing?

What is Moore's Law?

Moore's Law is a prediction made by Intel co-founder Gordon Moore in 1965 that the number of transistors on a microchip would double roughly every two years (he originally said every year, revising it in 1975; the often-quoted 18 months comes from a related prediction about chip performance), leading to rapidly increasing computing power and falling cost.

Is Moore's Law still valid?

While Moore's Law has held true for over 50 years, it is becoming increasingly difficult to continue doubling the number of transistors on a chip. Some experts believe that we are reaching the physical limits of silicon-based technology, making it unlikely for Moore's Law to continue indefinitely.

How has Moore's Law affected technology?

Moore's Law has been a driving force behind the rapid advancements in technology, allowing for smaller, faster, and more powerful devices. It has enabled the development of smartphones, laptops, and other modern devices that we rely on daily.

What challenges are facing Moore's Law?

One of the main challenges facing Moore's Law is the physical limitations of silicon-based technology. As transistors become smaller, they start to experience quantum effects, making it difficult to maintain their functionality. Additionally, the cost of building and maintaining advanced chip manufacturing facilities is increasing.

Will Moore's Law continue in the future?

While it is uncertain if Moore's Law will continue at the same pace, many experts believe that advancements in technology, such as the use of new materials and techniques, may allow for continued progress. However, it is also possible that we may see a shift towards alternative technologies in the future.
