Intel's 10 nm process node is delayed. Again.

Quizzical, Member Legendary, Posts: 25,355
https://www.anandtech.com/show/12693/intel-delays-mass-production-of-10-nm-cpus-to-2019

Cannon Lake was originally supposed to arrive in 2016.  Now it will be here in 2019 at the earliest.  This, together with TSMC's recent announcement of volume production on 7 nm, heralds the end of Intel's lead in chip fabrication.

If Global Foundries' 7 nm process node is able to arrive around the end of this year as promised, then AMD could well have the best x86 CPUs for laptops and servers just by riding the advantages of a superior process node combined with an architecture that is merely competitive.  That would be quite a change, as AMD has had to get by with inferior process nodes for decades.

But of course, "if TSMC or Global Foundries can deliver" is a huge "if".  If Intel had delivered on their 10 nm promises, Cannon Lake would have arrived in 2016.  Instead, we got Skylake, Skylake Refresh (Kaby Lake), and Skylake Refresh Refresh (Coffee Lake).  Who is excited for Skylake Refresh Refresh Refresh?

Comments

  • Scot, Member Legendary, Posts: 22,986
    How are we doing on finding something other than silicon for CPUs? They are going nowhere fast now, and we need something, or the wheels of the Technical Revolution are going to start spinning while standing still.
  • gervaise1, Member Epic, Posts: 6,919
    If Intel were still operating a tick-tock cadence, then what this looks like to me is that they have scrapped the "tick" part.

    So instead of shrinking the process with the "tick" and then rolling out a new design with the "tock", they are just squeezing more out of the existing process and design.

    There could be a commercial element as well. If yields are small, they would be looking to charge a premium price, but does that market even exist right now?

    It's not all doom and gloom for Intel on the manufacturing front, though; their 64-layer 3D NAND process has allowed them to price their new M.2 NVMe SSDs very aggressively (and they are very thin!). They have been well received, too.

    As far as competition goes, though, it's not just Global Foundries. Samsung in particular may be a factor: 10 nm up and running, their first 7 nm process announced last year, work on a new foundry started, Qualcomm lined up as a future customer, another set of record results just announced, and so on.


    When it comes to CPUs, though, as long as the GPU remains the bottleneck, does it matter?
  • Asm0deus, Member Epic, Posts: 4,407
    What does this mean for CPUs and gaming in general? I mean, are we seeing a slowdown, or are we reaching the limit of how much CPUs can be improved, barring some new tech?


  • Scot, Member Legendary, Posts: 22,986
    edited April 2018
    Asm0deus said:
    What does this mean for CPUs and gaming in general? I mean, are we seeing a slowdown, or are we reaching the limit of how much CPUs can be improved, barring some new tech?



    As far as I understand, CPUs have not increased in "speed" for the last five years. I think it was Quizzical who was talking about the writing on the wall for graphics cards a few weeks ago. Simply put, they are having to use every design tweak and software solution to get any marked new improvement.

    I think games will look to where things can get better: SSDs may become "necessary", along with more RAM that will need to be faster. A bottleneck was reached five years ago; they can only do so much. Even if some genius design solution akin to multiple cores is figured out, in five years' time that will be milked dry.

    Can we have something to replace silicon chips please, gaming needs you! :D
  • laserit, Member Legendary, Posts: 7,591
    Scot said:
    <snip>
    The next revolution will probably come when we are able to manufacture in zero gravity.

    "Be water my friend" - Bruce Lee

  • Quizzical, Member Legendary, Posts: 25,355
    Scot said:
    How are we doing on finding something other than silicon for CPUs? They are going nowhere fast now, and we need something, or the wheels of the Technical Revolution are going to start spinning while standing still.
    There's already a lot of stuff besides silicon in the CPUs, though the base layer is still mostly silicon.  I'm not aware of anything imminent to replace silicon.

    The next big revolution in CPU fabrication is expected to be EUV lithography.  EUV stands for extreme ultraviolet, and the wavelength they use isn't quite technically X-rays, but it's close.  It's a short enough wavelength that they have to have a vacuum between the light source and the wafers because air or glass or whatever would just absorb it.

    TSMC and Global Foundries have both announced that they're going to use EUV next year.  They both say that their initial 7 nm process node will be made with the older DUV approach using argon fluoride (ArF) lasers, but they'll move to EUV for some metal layers once it's ready.

    The basic problem right now is that they're trying to use light with a wavelength of 193 nm to carve features in silicon that are 16 nm or 14 nm or 10 nm across.  Done naively, that doesn't work very well, so they have to take two passes.  Or three.  Or four.  Or six.  Needing a bunch of separate exposures and masks and so forth to make just one metal layer adds a lot to the cost, and also adds a lot of things that can go wrong.  If you can do that in one exposure with EUV, that's huge.
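    A rough way to put numbers on that is the Rayleigh resolution criterion, CD = k1 * wavelength / NA. Here's a minimal back-of-the-envelope sketch; the k1 and numerical aperture values are typical assumed figures, not anything the fabs have published:

    ```python
    # Illustrative only: single-exposure resolution limit from the Rayleigh
    # criterion, CD = k1 * wavelength / NA.
    def min_half_pitch(wavelength_nm, numerical_aperture, k1=0.28):
        """Smallest printable half-pitch in nm for a single exposure."""
        return k1 * wavelength_nm / numerical_aperture

    duv = min_half_pitch(193, 1.35)   # ArF immersion DUV, NA ~1.35 (assumed)
    euv = min_half_pitch(13.5, 0.33)  # EUV, NA ~0.33 (assumed)

    print(f"DUV single exposure: ~{duv:.0f} nm half-pitch")  # ~40 nm
    print(f"EUV single exposure: ~{euv:.0f} nm half-pitch")  # ~11 nm
    ```

    With assumptions like those, single-exposure DUV bottoms out around 40 nm half-pitch, which is why 16/14/10 nm class layers need several passes, while EUV's ~13.5 nm wavelength gets there in one.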

    Of course, there have been a lot of such revolutions in CPU manufacturing over the course of the past however many years:  FinFETs, high-k dielectric, silicon on insulator, copper interconnects, etc.  Most people don't know the fine details of them, but just know that chips keep getting faster.
  • gervaise1, Member Epic, Posts: 6,919
    bestever said:
    Asm0deus said:
    What does this mean for CPUs and gaming in general? I mean, are we seeing a slowdown, or are we reaching the limit of how much CPUs can be improved, barring some new tech?


    They seem to have hit a roadblock on Moore's Law. I think they'll start pushing more into multithreading until they find a new breakthrough in CPU tech. AMD has a good idea with Infinity Fabric and stacking of cores, but it still needs some more refinement.
    What we have seen for some time now is a focus on power consumption.

    When Samsung unveiled their 7 nm roadmap last May, it included 8 nm, 6 nm, 5 nm and 4 nm - 4 nm! They also said that - for any given design - the move from 10 nm to 7 nm would give them the option of either 10% more performance or a 35% reduction in power draw.
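    As a quick bit of arithmetic on what those two options mean for performance per watt (baseline normalized to 1.0; purely illustrative):

    ```python
    # Illustrative only: the two quoted 10 nm -> 7 nm options expressed as
    # performance per watt, relative to a 1.0 baseline.
    base_perf, base_power = 1.0, 1.0

    perf_option  = (base_perf * 1.10) / base_power        # +10% perf, same power
    power_option = base_perf / (base_power * (1 - 0.35))  # same perf, -35% power

    print(f"Perf/W, performance option: {perf_option:.2f}x")   # 1.10x
    print(f"Perf/W, low-power option:   {power_option:.2f}x")  # ~1.54x
    ```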

    More performance per watt brings many added benefits: longer battery life or lower power draw, lower operating cost, and less heat, so the parts are easier to cool, quieter, and longer-lived.

    And physically smaller, although sometimes "packaging" delays the introduction of such gains. Modern SSDs, for example, have for some time been mostly boxes of air!

    Now, however, we have M.2 NVMe. The new generation - now being produced by Intel, Samsung, Western Digital, etc. - offers up to 2 TB of storage in a 22 x 80 x 2.38 mm package, and in the case of Intel's design an active power draw of just 50 mW (and almost nothing at idle).

    Collectively, I think these advances (size, power draw, cost, storage, performance, etc.) will mean that many more "products" are going to become "smart". That demand will drive investment, which in theory should result in further advances.

    Now, a lawnmower that cuts the grass for you and maybe prunes the roses afterwards may not be what comes to mind when people think of Moore's Law, but we seem to be edging closer towards https://en.wikipedia.org/wiki/The_Door_into_Summer
  • Quizzical, Member Legendary, Posts: 25,355
    gervaise1 said:
    <snip>
    It depends on how bad yields are and how much of a premium price you can charge.  If you have 50% yields but can charge $5000 per chip, you can just build twice as many as you need and eat the extra production cost.  That doesn't work so well if you have 2% yields or need to sell the chips at retail for $50 each.

    There's also some ambiguity about what yields mean in a given context.  If only 1% of your chips can hit your intended clock speed, but 30% of them can if you reduce the clock speed by 5% and 70% of them can if you also disable a core, what are your yields?
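    A toy calculation makes both points concrete; every number below (wafer cost, dies per wafer, bin fractions) is made up purely for illustration:

    ```python
    # Toy model with made-up numbers: cost per sellable chip at a given yield,
    # and what "yield" looks like once binning to lower specs is allowed.
    wafer_cost = 10_000       # assumed cost per processed wafer, in dollars
    dies_per_wafer = 200      # assumed candidate dies per wafer

    def cost_per_good_die(yield_fraction):
        return wafer_cost / (dies_per_wafer * yield_fraction)

    print(f"50% yield: ~${cost_per_good_die(0.50):,.0f} per good die")
    print(f" 2% yield: ~${cost_per_good_die(0.02):,.0f} per good die")

    # Hypothetical bins: fraction of dies usable at each progressively lower spec.
    bins = {
        "full spec":               0.01,
        "-5% clock":               0.30,
        "-5% clock, one core off": 0.70,
    }
    for spec, y in bins.items():
        print(f"{spec:>24}: {y:.0%} 'yield', ~${cost_per_good_die(y):,.0f}/die")
    ```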

    Regardless, what the announcement really means is that Intel has weighed how much money they expect to make by pressing forward with Cannon Lake against how much they expect to make by selling 14 nm parts for a while longer, and concluded that 14 nm is the better bet.

    NAND flash is very different from logic chips, which is why it generally isn't the same companies that produce both.  Intel seems to be mostly getting out of the NAND business, too, as they've sold a bunch of IMFT resources to Micron.  3D NAND isn't about pursuing aggressive die shrinks, but about stacking more stuff on top of each other at much larger process nodes.  You can do that with NAND that barely puts out any heat, but your ability to do that with CPUs or GPUs that put out a ton of heat is far more limited.
  • Quizzical, Member Legendary, Posts: 25,355
    edited April 2018
    Asm0deus said:
    What does this mean for CPUs and gaming in general? I mean, are we seeing a slowdown, or are we reaching the limit of how much CPUs can be improved, barring some new tech?


    Basically, it means that Intel won't be able to improve their CPUs at all this year other than by building something that they could have built years ago but didn't bother to, or perhaps by adding a few percent to the clock speed.  As I said in the original post, they could make Skylake Refresh Refresh Refresh.
  • gervaise1, Member Epic, Posts: 6,919
    Quizzical said:
    <snip>
    Absolutely agree that for Intel it's a case of how much money they can make now vs. the future.

    For Samsung, though, it's not just about NAND - which, as you say, is a different beast. They are going to be making chips for Qualcomm. OK, a different type of CPU requiring less power, etc., but - maybe - taken together with what seems to be happening at TSMC and Global Foundries, are we seeing other firms at least matching Intel's manufacturing excellence?

    It is a question that is - certainly in my multi-continent, multi-company experience - probably impossible to answer. Take even something as simple as what "10 nm production" means. More accurately, the question should be what percentage of a "10 nm product" is actually made with 10 nm processes. When machines cost millions, it's no surprise that "older machines" may be incorporated into "newer" lines.

    And sometimes, of course, manufacturing decisions simply don't work out well, which could be the case with Intel. The yields may indeed be very, very bad. Samsung, for example, went one way on making OLEDs and LG went another; Samsung are still working on theirs, while LG are now supplying Sony, Panasonic, Philips, etc.
  • IceAge, Member Epic, Posts: 3,120
    Quizzical said:
    <snip>
    Ok, nice story.

    But no! AMD will not have anything better than Intel. I thought you were already used to the idea.

    Even with the "delay" , Intel is still on-top by a mile : https://www.cpubenchmark.net/high_end_cpus.html . You can still see AMD here and there , no worries. 


  • Quizzical, Member Legendary, Posts: 25,355
    IceAge said:
    Quizzical said:
    <snip>
    Ok, nice story.

    But no! AMD will not have anything better than Intel. I thought you were already used to the idea.

    Even with the "delay", Intel is still on top by a mile: https://www.cpubenchmark.net/high_end_cpus.html . You can still see AMD here and there, no worries.
    I was going to make a case that you're talking about the past and I'm talking about the future.  But you've done such a spectacularly bad job of making your case that I won't bother with that.

    Instead, how about if you look at your own benchmark?  That's a synthetic benchmark that scales to a ton of CPU cores but really chokes on the NUMA-heavy approach of AMD's server processors.  There are such workloads, but that's wildly unrepresentative of consumer needs.  That list is mostly dominated by Xeon processors that cost a fortune and would be markedly inferior for most consumer use to something you can get from either AMD or Intel for under $300.

    But even if you look at your own benchmark, Intel only wins if you ignore the price tag.  Because that benchmark scales well to many CPU cores, and AMD gives you more cores for a given price than Intel, if you want the best performance for $x or less, then depending on your value of x, in that benchmark, the winner could be a Ryzen Threadripper 1950X, Ryzen Threadripper 1920X, Ryzen 7 2700X, Ryzen 7 2700, Ryzen 5 2600X, or Ryzen 5 2600--most of AMD's modern lineup.  Basically, if your chosen benchmark were the only thing that mattered and you had a budget just about anywhere between $100 and $1000, and wanted the best CPU you could get on that budget, that benchmark would say that you should buy AMD at nearly all price points.
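    The "best score at or under a budget" logic is easy to make concrete. The parts, prices, and scores below are invented placeholders, not real benchmark results; the point is only the selection rule:

    ```python
    # Illustrative only: pick the highest-scoring CPU at or under a budget.
    # Names, prices, and scores are invented placeholders, not actual data.
    cpus = [
        ("CPU A, 6 cores",  230, 14000),
        ("CPU B, 8 cores",  330, 17000),
        ("CPU C, 12 cores", 450, 23000),
        ("CPU D, 16 cores", 900, 28000),
    ]

    def best_under(budget, parts):
        affordable = [p for p in parts if p[1] <= budget]
        return max(affordable, key=lambda p: p[2]) if affordable else None

    for budget in (250, 500, 1000):
        name, price, score = best_under(budget, cpus)
        print(f"Best under ${budget}: {name} (${price}, score {score})")
    ```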

    There is a case to be made that Intel CPUs are better than AMD right now.  But to make that case, you'll have to start by arguing that the benchmark you chose is irrelevant.
  • Quizzical, Member Legendary, Posts: 25,355
    Here's a WCCFtech markup of an old Intel slide:

    https://cdn.wccftech.com/wp-content/uploads/2018/04/Intel_10nm_3.png
  • Ridelynn, Member Epic, Posts: 7,383
    I can understand Intel's position. A few years ago, they had this very aggressive R&D roadmap for their x86 line. It was their bread and butter, their dominant revenue generator.

    In the mean time:

    - AMD competition in the PC/server space had all but vanished
    - PC sales have slumped, hard
    - Growth in the processing space has shifted to mobile/wearable/IoT space (and mostly gone to ARM), and to SIMD-style processing (GPUs)
    - Strong growth expected to continue for wireless/communications
    - Strong growth expected to continue for memory and storage

    So in light of all of that, it appears that Intel reallocated a lot of those R&D dollars into other areas that are showing growth potential. We don't necessarily see the R&D budget, but we can certainly see the effect of that. Intel has made hard pushes into each of those areas that are showing growth (with mixed results).

    Honestly, other than for the fact that x86 still drives a lot of revenue at Intel, I don't see them re-allocating back to x86 significantly. If any one of those other areas takes off for Intel (Optane, discrete GPU, an actual competitive mobile chip, wireless 5G or another protocol, etc.), the potential for those is huge.

    There is a conceivable future where, in the course of as little as 10 years, x86 could become a legacy platform. It happened to PPC, VAX, 68k, Alpha, and countless others as technologies and markets evolved.

  • Ozmodan, Member Epic, Posts: 9,726
    IceAge said:
    Quizzical said:
    <snip>
    Ok, nice story.

    But no! AMD will not have anything better than Intel. I thought you were already used to the idea.

    Even with the "delay", Intel is still on top by a mile: https://www.cpubenchmark.net/high_end_cpus.html . You can still see AMD here and there, no worries.
    Since this is a gaming site, I would argue that Intel and AMD's Ryzen chips are pretty much dead even.  The latest benchmarks have AMD winning some and Intel winning others.  When you throw in the extra cost with Intel, I think AMD comes out on top.  You also need to look at the extra overhead Intel takes on from the Meltdown and Spectre mitigations.

    AMD caught Intel with their pants down, and they are scrambling to correct this.  Personally, I think competition in the market is good for everyone.
  • Keller, Member Uncommon, Posts: 602
    Intel has officially stepped away from tick-tock. Their CPUs consist of one die, which is harder to produce.
    AMD makes multi-die CPUs: four smaller dies make one CPU. That's one of the main reasons why they should get better yields from their die shrink to 7 nm. Personally, I'm worried about the connections between the dies, but I haven't read any horror stories.
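    The yield argument for several small dies over one big one can be sketched with the usual Poisson yield approximation, Y = exp(-area * defect_density); the defect density and die areas below are assumed, for illustration only:

    ```python
    import math

    # Illustrative Poisson yield model: Y = exp(-area * defect_density).
    defect_density = 0.2              # assumed defects per cm^2
    small_die_cm2 = 2.0               # assumed area of one small die
    big_die_cm2 = 4 * small_die_cm2   # one monolithic die of the same total area

    y_small = math.exp(-small_die_cm2 * defect_density)
    y_big = math.exp(-big_die_cm2 * defect_density)

    print(f"Yield per small die:  {y_small:.0%}")  # ~67%
    print(f"Yield of one big die: {y_big:.0%}")    # ~20%
    # Four small dies still deliver mostly-good silicon from a partially
    # defective wafer; the single big die is all-or-nothing (ignoring salvage).
    ```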



  • Quizzical, Member Legendary, Posts: 25,355
    Keller said:
    Intel has officially stepped away from tick-tock. Their CPUs consist of one die, which is harder to produce.
    AMD makes multi-die CPUs: four smaller dies make one CPU. That's one of the main reasons why they should get better yields from their die shrink to 7 nm. Personally, I'm worried about the connections between the dies, but I haven't read any horror stories.
    While that's true for Threadripper and EPYC, AMD's mainstream desktop and laptop CPUs are all single dies.  In particular, the latest Ryzen 7 2700X is a single die.  Presumably there will eventually be a Threadripper CPU that uses two of that die and an EPYC CPU that uses four of it.

    Even the multi-die CPUs have massive bandwidth connecting the dies, as they all sit in the same package and are linked over Infinity Fabric.  The bandwidth is enough that even if one CPU die needs only data from memory connected to other CPU dies, you can still get the full bandwidth.  That's markedly better than has traditionally been the case for multi-socket servers.  The latency of going to other dies is considerable, however, so you do get some substantial NUMA issues with Threadripper or EPYC, but less so than with a typical 2-socket server.
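    A crude way to see what that cross-die latency costs on average; the latencies and remote-access fractions here are assumed, not measured:

    ```python
    # Crude illustration: average memory latency when some fraction of accesses
    # land on memory attached to another die. All numbers are assumed.
    local_ns, remote_ns = 90, 140   # hypothetical local vs. cross-die latency

    def avg_latency(remote_fraction):
        return (1 - remote_fraction) * local_ns + remote_fraction * remote_ns

    for frac in (0.0, 0.25, 0.5):
        print(f"{frac:.0%} remote accesses -> ~{avg_latency(frac):.0f} ns average")
    ```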
  • Ozmodan, Member Epic, Posts: 9,726
    Interesting: according to this article, even when 10 nm chips do hit the market, they will still be first-generation ones, meaning only a small jump in processing power.

    https://techreport.com/review/33579/intel-outlines-its-struggles-with-10-nm-chip-production

    It seems they did not migrate to EUV lithography, and the old method has to go over the wafer so many times that it introduces errors.
  • wanderica, Member Uncommon, Posts: 370
    EUV has been in the works for a long time now, and the nerd in me is excited to finally see it producing working silicon.  What comes next is a question we've been asking for a decade now at least.  Personally, I think AMD's multi-chip module (MCM) approach is going to bridge the gap.  Quantum computing (as cool as it is) is a completely different animal and isn't really applicable, not to mention we're nowhere close to viability for PC use.  Jumping down a row on the periodic table could work, but that comes with years and years of testing plus new manufacturing processes to invent.  I think the next logical step is to build vertically, as we've seen Samsung and Hynix do with vertical NAND and HBM.

    Currently, the advantage goes to AMD in a way that I haven't seen in a decade and a half.  It's an exciting time for sure.  Don't forget, though: Intel just hired Jim Keller (the brain behind Ryzen), so expect to see them come back with a vengeance in 3 years or so.  Intel will certainly remain competitive as they always have, but I expect their IPC advantage will evaporate quickly once Ryzen moves to GF's 7 nm process.


  • MadFrenchie, Member Legendary, Posts: 8,505
    Look on the bright side: this is a small delay in Skynet's plans to murder us all.

    I'm a glass half full kinda guy.