Whither Moore's Law?

Quizzical (Member, Legendary, Posts: 25,355)

A few years ago, I said something to the effect that if you want to know whether Moore's Law will survive much longer, check back around the end of 2013.  Everyone in the industry seemed to agree that the path to 22 nm chips was open, but beyond that was in doubt.

The end of 2013 was kind of an arbitrary marker, but had Intel kept pace with their stated goals, they would have launched a chip on 16 nm by then.  They haven't, and for that matter, probably haven't even started production of a chip on a new process node past 22 nm.  And the problems are hardly limited to Intel.  Intel launched CPUs at 22 nm in April 2012.  No one else has gotten that far, even today, with the rest of the industry stuck at 28 nm.  Indeed, AMD's high-end CPUs just got to 28 nm last month.  Moore's Law is, at the very least, experiencing a serious bump in the road.

And yet there is a perfectly good explanation for this that is completely consistent with Moore's Law still being alive and well.  To go much further, the industry widely believes that it will need FinFETs or something much like them.  Intel's 22 nm process node uses Tri-Gate, which certainly qualifies as "something much like FinFETs", and the rest of the industry is working on FinFETs.

Additionally, for reasons of physics, a given wavelength of light can only resolve features down to a size roughly proportional to that wavelength.  Having to take multiple patterning passes to carve transistors into silicon gets expensive, and the industry is pushing toward the limits of what the excimer lasers that they've used for decades can do.  The industry is counting on EUV light sources, with a wavelength more than an order of magnitude shorter than what foundries use today, to replace them soon, but if EUV doesn't come through, we could be stuck for a while.
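
As a rough sketch of why the wavelength matters:  the usual rule of thumb is the Rayleigh criterion, where the minimum printable feature is about k1 * wavelength / NA.  The k1 and numerical aperture values below are typical illustrative numbers, not any particular foundry's figures.

    # Rayleigh criterion: smallest printable half-pitch ~ k1 * wavelength / NA.
    # The k1 and NA values are illustrative defaults, not any specific foundry's numbers.
    def min_half_pitch_nm(wavelength_nm, k1=0.30, na=1.35):
        return k1 * wavelength_nm / na

    arf = 193.0   # nm, the ArF excimer lasers foundries have used for years
    euv = 13.5    # nm, the proposed EUV replacement

    print(round(min_half_pitch_nm(arf), 1))           # ~42.9 nm per exposure; below that, multiple passes
    print(round(min_half_pitch_nm(euv, na=0.33), 1))  # ~12.3 nm, with headroom to spare
    print(round(arf / euv, 1))                        # ~14.3x shorter wavelength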

In one sense, these are the sort of challenges that foundries have faced for decades.  It wasn't that long ago that they needed high-k metal gate, or silicon on insulator, or copper interconnects to advance further.  Solve the latest batch of challenges just like they've solved all of the previous ones and Moore's Law continues apace after today's mere hiccup.

Foundries seem to be optimistic that they can do exactly that, too.  In going from 22 nm to 14 nm in a single jump, Intel is trying for the biggest percentage shrink between process nodes that they've attempted in decades.  Global Foundries doesn't want to talk much about their upcoming 20 nm process node, but wants to talk a lot more about 14 nm.  They offer six different process variants at 28 nm, but will offer only one at 20 nm, then go back to many options at 14 nm.  Similarly, TSMC is just starting production of 20 nm parts, but is much more excited about the 16 nm process node that they hope to replace it with shortly.

But in another sense, something is very different this time:  cost.  I briefly alluded to it above.  It simply costs a lot more to bring up each new process node at a smaller geometry than the previous, larger one did.  Exponentially more, in fact.  It used to be that everyone and their neighbor's dog had their own fabs, at least as far as large computer chip companies went.  But a company with a few billion in annual revenue can't afford to spend more than that in an average year just bringing up the next new process node.  That's what drove AMD to sell off their fabs a few years ago, for example.  Today, Intel is the only company in the world with enough volume in their own chips to justify having their own fabs--and even Intel is starting to fab chips for other companies.

Even if there were no problems of quantum mechanics and matter were infinitely divisible, the cost of new process nodes would eventually bring Moore's Law to a crawl if not a complete halt.  Exponentially increasing costs cannot go on forever, and therefore will not.  A number of foundries seem to have basically given up on being anywhere near the cutting edge.  Intel still is, and TSMC and perhaps Samsung.  Global Foundries is trying to be, though not necessarily succeeding amid chronic delays.  And then?

This problem isn't confined to logic circuits, either.  There used to be many manufacturers of DRAM chips.  We are now down to three significant ones:  Samsung, Hynix, and Micron.

It's still likely that Intel launches 14 nm Broadwell chips this year, and that TSMC's 20 nm process node will come along quickly enough for AMD to get 20 nm cards out this year.  TSMC might well get 16 nm chips out the year after, with not just AMD and Nvidia GPUs, but a whole host of ARM chips and perhaps AMD CPUs, too.  If that happens, Moore's Law will perhaps have slowed a bit, but will still be alive and well in all but its most rigid formulations.

But that's a big if.  We know that Moore's Law is going to fail eventually.  Nothing can grow exponentially forever, and for modern chips to have about a million times as many transistors as the cutting-edge chips did when Moore formulated his famous law is astounding in retrospect.  That's literally a million times as many transistors, not just some figurative "a lot more".
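
Just to show how plain doubling compounds into that sort of factor (arithmetic only, not a claim about any particular chip's transistor count):

    # Doubling every two years for forty years is already a factor of about a million.
    doublings = 40 // 2
    print(2 ** doublings)   # 1048576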

But that's also the sort of technology jump that is unprecedented in recorded human history, so there aren't any historical examples to compare it to and guess how it might end.  I'd bet on exponential growth giving way to something less than exponential but still growth.  But it will certainly be interesting to watch.

Comments

  • The user and all related content has been deleted.

  • Quizzical (Member, Legendary, Posts: 25,355)
    Originally posted by Mtibbs1989
    So, your topic was essentially what people who actually knew about Moore's law already knew?

    I doubt that you actually read the post.  Moore's Law seems to be far more widely known than FinFETs or EUV lithography.  And while everyone knows that Moore's Law will fail eventually, I haven't seen that much speculation that the exponentially increasing cost of new process nodes could be the culprit--that the industry could choose to stop, even though it could continue, simply because it's too expensive to continue.

  • olepi (Member, Epic, Posts: 2,829)

    It has been an interesting thing to watch. The first microprocessor I worked on used 5-micron NMOS transistors. Now we're looking at 14 nm and below.

    I like to think of disk space for a good comparison. In 1980, a 300 MB disk drive cost $15K and was the size of a washing machine. My PC today has 3 terabytes of disk, so that is 10,000 of those washing-machine drives under my desk. That would cost $150 million in 1980 dollars and occupy an entire building.

    Memory is another thing to watch. I remember buying a 0.5 megabyte memory board (512 KB) for $4,800 in 1980. My PC has 16 GB of RAM, so that is 32,000 of those 0.5 MB boards at $4,800 each, which comes to $153.6 million in 1980 dollars.
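
    (A quick sketch to check the arithmetic, using only the 1980 prices quoted above:)

        # Disk: 3 TB today versus a $15,000, 300 MB washing-machine-sized drive in 1980.
        drives = 3_000_000 / 300                  # MB today / MB per 1980 drive
        print(int(drives), int(drives) * 15_000)  # 10000 drives, $150,000,000 in 1980 dollars

        # Memory: 16 GB today versus a $4,800, 0.5 MB board in 1980.
        boards = 16_000 / 0.5                     # MB today / MB per 1980 board (decimal units)
        print(int(boards), int(boards) * 4_800)   # 32000 boards, $153,600,000 in 1980 dollars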

    But interestingly enough, the UNIX software on the ARPANET I ran back in 1977 is very similar to the Linux machine on the Internet I have at home today.

  • Quizzical (Member, Legendary, Posts: 25,355)
    Originally posted by olepi

    But interestingly enough, the UNIX software on the ARPANET I ran back in 1977 is very similar to the Linux machine on the Internet I have at home today.

    As I'm sure you know, that's because software does not scale neatly with Moore's Law.  Having more memory in which to run software and more space to store source code and compiled programs helps up to a point, but Moore's Law hasn't made human programmers more clever.  But it is still interesting to note that some computer-related things have progressed greatly and some haven't.

  • syntax42 (Member, Uncommon, Posts: 1,378)

    As stated, there is a limit to the potential advancement we can achieve with current technology.  That limit is based on the cost of advancement.  I'm still hopeful that some unexpected technology will come along and revolutionize the industry.  Quantum computing might be it, or it might not.  Organic computing (brains!) might even be the advancement we need to push past our current silicon chips, or whatever they are made from nowadays.  Regardless of the breakthrough, our current processor technology has maybe 20 more years, at best, before advancement slows to a crawl.

  • Ridelynn (Member, Epic, Posts: 7,383)

    Perhaps a more practical viewpoint:

    For most everyone who's been around PCs in the last 10-15 years, when you got a new computer, it was significantly faster. The jump from 286 to 386 was the first major advancement I remember in my working lifetime, and perhaps in PC history, as it was the first 32-bit x86 CPU (and we are still largely running 32-bit software today).

    That jump occurred in just 3 years. The 286 released in 1982, the 386 in 1985, the 486 in 1989, and the first Pentium in 1993 -- going 3-4 years between major architectural jumps, and even in between major architecture releases, we saw decent jumps in clock speed among the available models.

    And we still have the Pentium brand name today; it's 21 years old. It's now largely relegated to the budget segment, with the Core brand name having come out in 2006, and a Haswell Pentium G3220 today is a lot different from the original P5 Pentium -- but that's just one real-world example.

    We have since gone to the tick-tock model from Intel, and AMD is kind of in limbo, having milked the Athlon/Kx line for 12 years, while the Bulldozer replacement kind of stumbled off the starting line (although it's looking better now).

    If I go and look at Titanfall - a game not quite released yet - the minimum CPU requirements are CPUs that were, by and large, released back in 2006. Maybe that's because it's a console port, or maybe not. ESO is PC-only, and has lower CPU requirements than Titanfall does.

    If I went 3-4 years between computers in the 1990s, my new computer was massively faster. Today, if I go 3-4 years between computers, my new computer, while technically "faster" based on core count, may actually run software slower than the computer I'm replacing.

    Moore's Law, as it pertains to transistors, is pretty much what Quiz describes here -- but I propose that it doesn't matter too much, because CPUs have been "fast enough" since the mid-2000s, and while we've seen higher core counts and much better energy efficiency, we haven't really seen performance gains that existing software, or even upcoming software, has been able to leverage.

    Before about 2005, CPUs were largely single-core, so by default almost all of our speed gains came from individual core performance. Then we hit the brick wall with regard to clock speed and the thermal envelope, and chipmakers started adding cores so they could have more MIPS/FLOPS/etc. available, but that isn't the same thing as IPC performance.

    So does Moore's Law really matter, when all it will really do is allow us to pack more cores into a CPU? Maybe it would be a good thing to have transistor counts stuck for a while, forcing chipmakers to focus more on IPC (and associated tricks to boost IPC); then we'd see more real-world performance out of existing software, rather than requiring a total overhaul and paradigm shift to multithreaded software to get there.

  • Quizzical (Member, Legendary, Posts: 25,355)
    Originally posted by Ridelynn

    Moore's Law, as it pertains to transistors, is pretty much what Quiz describes here -- but I propose that it doesn't matter too much, because CPUs have been "fast enough" since the mid-2000s, and while we've seen higher core counts and much better energy efficiency, we haven't really seen performance gains that existing software, or even upcoming software, has been able to leverage.

    For desktop CPUs where you're not meaningfully constrained by power, Moore's Law doesn't matter much anymore.  But when the goal is performance per watt, not performance per thread, Moore's Law is still very important.  And performance per watt is a big deal in just about everything except for desktop CPUs and isolated servers.

    But a properly done die shrink doesn't just give you more transistors.  It means you can do the same computational work while using less power per transistor.
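
    A crude way to see that last point is the standard dynamic-power relation for CMOS:  power scales roughly with switched capacitance times voltage squared times clock frequency, and a die shrink lowers the capacitance and usually lets you drop the voltage a bit.  The scaling factors below are purely illustrative, not measurements of any real process node.

        # Relative dynamic power for the same work: P scales with C * V^2 * f.
        # Hypothetical shrink: 30% less switched capacitance, 10% lower voltage, same clock.
        def relative_dynamic_power(cap_scale, volt_scale, freq_scale=1.0):
            return cap_scale * volt_scale ** 2 * freq_scale

        print(round(relative_dynamic_power(0.7, 0.9), 2))  # 0.57: same work at roughly 60% of the power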

    You're very much constrained by power in laptops.  And even more so in tablets and cell phones.  Moore's Law still matters there.  I mentioned servers above, and for a single server CPU all by itself, the difference between 130 W and 95 W doesn't matter much for the same reasons as desktops.  But that difference sure matters if you're going to have 1000 of them in a room.

    And even in gaming desktops, Moore's Law still matters for GPUs.  High-end desktop video cards are already substantially power constrained, even in the desktop form factor.  The difference between 95 W and 130 W may not matter much in a desktop, but the difference between 250 W and 400 W sure does.  It's not an accident that Nvidia declined to clock the GeForce GTX 780 Ti as high as they had the GeForce GTX 770.

    Furthermore, transistors aren't just for doing computations; they're also for cache.  CPUs don't see much benefit from adding more cache, but GPUs will.  Intel has already demonstrated that taking their Haswell GT3 GPU and adding a big cache ("Crystalwell", or "Iris Pro") increases performance by about 1/3.

    There are some things that GPUs need to read from very, very frequently that already get put in GPU cache.  There are some that are read from infrequently enough that putting them in cache doesn't make sense.  But there is also the depth buffer and the frame buffer, which can together total tens of MB, but account for a large fraction of video memory reads and writes.  If you could put that in cache, it would make a huge difference, both in reducing memory bandwidth needed and in reducing power consumption from all those memory accesses.
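
    Rough arithmetic on the "tens of MB":  assume a plain 1080p render target with 32-bit color plus a 32-bit depth/stencil buffer; real setups vary with MSAA, HDR formats, and extra render targets.

        # One 1920x1080 color buffer plus one depth/stencil buffer, 4 bytes per pixel each.
        pixels = 1920 * 1080
        total_bytes = pixels * 4 + pixels * 4
        print(round(total_bytes / (1024 * 1024), 1))  # ~15.8 MB; MSAA or extra targets push it well past that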

    The problem with that is that you need tens of MB of cache to do it.  You can do that today, but if you're spending half of your GPU die space on cache, you'd benefit more from adding more memory channels and more shaders and so forth instead.  The Xbox One already does have 32 MB of on-die ESRAM, but having to make room for all of it is the reason why it needs a bigger die than the PS4 in spite of having a much weaker GPU.

    But if a given amount of cache takes 1/2 of your die at 28 nm, then it only takes 1/4 of your die at 20 nm, and 1/8 at 14 nm.  Simple cache also tends to be easier to scale well to new process nodes than complex logic circuits.  At some point, having a large GPU cache starts to make a ton of sense.  I expect such desktop video cards to first reach retail in about two years, on TSMC's 16 nm FinFET process node.  And they're going to be a very big deal as desktop video cards go.
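
    The 1/2, 1/4, 1/8 progression is just ideal area scaling, where the same cache shrinks with the square of the node's linear dimension; marketed node names don't track real dimensions exactly, so treat this as the idealized case the argument assumes.

        # Ideal scaling: SRAM area shrinks with the square of the process node's linear dimension.
        nodes_nm = [28, 20, 14]
        fraction_at_28nm = 0.5                      # starting point: half the die at 28 nm
        for node in nodes_nm:
            area_scale = (node / nodes_nm[0]) ** 2  # area relative to the 28 nm layout
            print(node, round(fraction_at_28nm * area_scale, 3))
        # 28 -> 0.5, 20 -> ~0.255, 14 -> 0.125 of a same-sized die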

    And this is perhaps an even bigger deal for integrated graphics both for desktops and laptops, as those are greatly constrained by memory bandwidth.  If what would have been 1/3 or 1/2 of your memory accesses instead use a new GPU L3 cache, that eases your memory bandwidth requirements considerably--or lets your GPU get far more performance out of the same memory bandwidth.

    If Intel decides to give Broadwell a hefty GPU cache, they might well be able to offer both laptops and desktops with a level of integrated graphics performance that AMD won't be able to touch until 2016.  Intel pricing will make that irrelevant in desktops, but gaming laptops running Intel graphics might be a serious option a year from now.

  • 13lake (Member, Uncommon, Posts: 719)

    Intel didn't release the first CPUs on the 22nm process in early 2013; that's just wrong, ...

    The first CPUs based on the 22nm process were released at the end of April 2012, almost a whole year earlier than mentioned by the OP.

    And going by Intel's default tick-tock cadence, under perfect conditions 14nm would be coming roughly 2 years after the first 22nm parts, which is April-May 2014.

    Three months is the usual (median) delay from tock to tick, so June-July 2014 would have been the 14nm launch window; thus, how late it ends up will be determined by the number of months that pass beyond July 2014.
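
    (The date arithmetic above, spelled out, taking April 2012 for the first 22nm parts and the two-year cadence plus the usual three-month slip as given in this thread:)

        # Expected 14nm window: first 22nm parts (April 2012) + 24-month cadence + ~3-month typical slip.
        from datetime import date

        def add_months(d, months):
            total = d.month - 1 + months
            return date(d.year + total // 12, total % 12 + 1, d.day)

        first_22nm = date(2012, 4, 1)
        on_cadence = add_months(first_22nm, 24)        # 2014-04-01
        with_typical_slip = add_months(on_cadence, 3)  # 2014-07-01; any delay counts from here
        print(on_cadence, with_typical_slip)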

  • Quizzical (Member, Legendary, Posts: 25,355)
    Originally posted by 13lake

    Intel didn't release the first CPUs on the 22nm process in early 2013; that's just wrong, ...

    You're right.  I was off by a year.  I'll fix the original post.  Ivy Bridge indeed released in April 2012.

    Intel was once promising 45 nm in 2007, 32 nm in 2009, 22 nm in 2011, and so forth:

    http://www.dvhardware.net/article25794.html

    That was back when people expected that 16 nm would follow 22 nm.  More recently, they were still claiming 22 nm in 2011, then adding 14 nm in 2013, 10 nm in 2015, and so on:

    http://www.techspot.com/news/48577-intel-rd-envisions-10nm-chips-by-2015-already-developing-14nm-process.html

    When the first 32 nm chips launched in January 2010, not in 2009, Intel claimed that at least they were shipping in 2009, even if Intel hadn't given the OK to launch yet.  But at some point, Intel realized that such claims were getting ridiculous and moved everything back by a year:

    http://www.legitreviews.com/intel-core-i7-3770k-3-5ghz-ivy-bridge-processor-review_1914

    Note that that chart lists Nehalem as 2009, even though it had actually launched in November of 2008.

  • 13lake (Member, Uncommon, Posts: 719)
    Yeah, I remember those graphs. They were a year late up until the Broadwell delays; now they're practically 2 years late, which puts in doubt whether silicon will be maxed out before 2020.
  • drbaltazar (Member, Uncommon, Posts: 7,856)
    To the OP: you might want to go read the ExtremeTech article on graphene. It will shed some light on the reason why everything was slammed to a halt. So, to sum up: don't hold your breath; Moore's Law is here for at least another decade. The main issue they have now is cooling, but it looks like graphene delays that issue as well (for at least 3 generations of parts), and by then the industry should have found a way to cool things. So what is Intel doing? Probably spending a lot of their hard-earned breathing room ahead of the other desktop makers finding a way to harness all that hidden potential of graphene (estimated to peak at 10,000 times better than other solutions). We should have a part twice as good soon.