On coming hardware

Quizzical Member Legendary Posts: 25,355
Short version:  if you want to buy a gaming desktop, I don't see any compelling reason to wait unless you're willing to potentially wait a year or more.  If you want a gaming laptop, the next two years will likely bring the largest improvements in a long, long time.

Longer version:

Let's talk about CPUs first.  Intel is scheduled to launch Kaby Lake this year.  As it is rumored to be only a minor refresh after Cannon Lake was delayed, I'm not expecting it to matter much.  Kaby Lake will probably be to Skylake roughly as Devil's Canyon is to Haswell, Godavari to Kaveri, or Richland to Trinity.  You'll probably get a minor clock speed bump and maybe a new chipset or some such, but nothing to get excited about.

AMD has two CPUs scheduled for launch this year, and both matter for different reasons.  Bristol Ridge is coming first, and brings Steamroller cores to the desktop in socket AM4, presumably with DDR4 memory.  DDR4 bandwidth should help integrated graphics performance considerably.  But Steamroller cores are far, far inferior to what Intel has had for the last several years, so don't expect a competitive CPU.

Still, Bristol Ridge matters for two reasons.  One is that if you want a cheap gaming laptop, AMD's APUs have been the best option here for years, and DDR4 will make Bristol Ridge considerably better than Carrizo.

The other is that if you want a cheap gaming desktop, you can get an AMD system, but you don't have much of an upgrade path.  Bristol Ridge is socket AM4, which will be the same as Summit Ridge, with Zen cores.  That means that a cheap rig that you buy soon could be upgraded to a very nice gaming rig a year or two later without having to replace the motherboard or memory.

That's not enough to make Bristol Ridge matter to people with larger budgets.  But if you've only got $500 to spend on a gaming machine, Bristol Ridge is worth waiting for, whether you need a desktop or a laptop.

But that leads us to Summit Ridge, which finally brings AMD's long-awaited Zen cores.  AMD is promising this in late 2016, and claiming a 40% IPC improvement over Excavator.  Assuming it can clock the way it ought to, that should finally make AMD competitive with Intel on the CPU side.  My best guess is that Summit Ridge won't be quite as good as Kaby Lake, but it will be close--and AMD hasn't been close at least since before Sandy Bridge hit in 2011, and arguably since Conroe in 2006.

I do think it's important to remember that Zen cores are not going to be simultaneously good and cheap.  You've long been able to get a 6-core AMD FX-6300 for $110 or so.  If Zen cores are as good as AMD is trying to lead people to believe, I'd expect a CPU with 6 Zen cores to cost at least double that.  Even so, if Zen cores are perhaps 90% or 95% as good as Kaby Lake cores, you can make a good case for 6 fully enabled Zen cores over 4 Intel cores with hyperthreading disabled for the same price.
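
To put rough numbers on that trade-off, here's a minimal sketch; the per-core speeds are purely illustrative assumptions, not benchmarks:

```python
# Hypothetical throughput comparison for well-threaded workloads.
# Per-core speeds are normalized so that one Kaby Lake core = 1.0;
# these are assumptions for illustration, not measured numbers.

intel_cores, intel_per_core = 4, 1.00   # 4 cores, hyperthreading disabled

for zen_per_core in (0.90, 0.95):
    intel_throughput = intel_cores * intel_per_core
    zen_throughput = 6 * zen_per_core   # 6 fully enabled Zen cores
    print(f"Zen at {zen_per_core:.0%} per core: {zen_throughput:.1f} "
          f"vs Intel {intel_throughput:.1f} "
          f"({zen_throughput / intel_throughput:.0%})")
```

Under those assumptions, the six slower cores win comfortably in well-threaded workloads, while single-threaded performance would still favor Intel.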

On we go to GPUs, where AMD Polaris and Nvidia Pascal are scheduled to launch this year on 14/16 nm FinFET process nodes.  AMD showed off working Polaris 10 and 11 at CES, both running GDDR5.  Nvidia showed off a board that they claimed was Pascal, but pictures strongly indicate that it was really either a GeForce GTX 980M or the Quadro equivalent.  The last time Nvidia tried that sort of stunt, it foreshadowed the Fermi debacle and about half a year's worth of delays.

AMD is promising that Polaris will double their energy efficiency, but it's unclear if they're comparing to Tahiti, Hawaii, Tonga, Fiji, or something else.  Regardless, I fully expect that both AMD and Nvidia will get large energy efficiency gains out of 4+ years' worth of process node advances.

Furthermore, energy efficiency gains are what drive absolute performance gains.  Ever since the GeForce GTX 280 arrived in 2008, top end video cards have been increasingly limited primarily by how much power you're willing to burn.  Double your energy efficiency and you can double your performance while staying in whatever power envelope you decided was acceptable before.
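
As a trivial worked example of that scaling (all numbers are made up solely for illustration):

```python
# Performance in a fixed power envelope = (performance per watt) x (watts).
# All numbers here are illustrative.

power_budget_w = 250        # whatever envelope you decided was acceptable
old_perf_per_watt = 1.0     # normalized to the old generation
new_perf_per_watt = 2.0     # "double your energy efficiency"

old_perf = old_perf_per_watt * power_budget_w
new_perf = new_perf_per_watt * power_budget_w
print(f"Same {power_budget_w} W budget: {new_perf / old_perf:.1f}x the performance")
```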

But it's not automatic that the first cards of the coming generation will be all that big of a deal on the desktop.  What if Polaris 10 and 11 offer the performance and price of a Radeon R9 370X and 390X, but while using only 50 W and 100 W, respectively?  In a desktop, you could look at that and yawn.  (And the Nvidia fanboys who say you should buy Nvidia in a desktop today because of energy efficiency would suddenly conclude that energy efficiency doesn't matter in a desktop.)

But in a laptop?  Those would absolutely be worth waiting for.  And while I fully expect that AMD will get cards out before Nvidia (see what they showed off at CES), Nvidia will get big gains, too.  I don't see any real reason to believe that either particular vendor will win by much on energy efficiency (nor, for that matter, that they won't).

While Polaris 11 does go to 11, I'd be very, very surprised to see AMD build new high end cards on GDDR5.  Hawaii already had a 512-bit GDDR5 memory bus, so it's not like AMD can readily go larger there.  Try to beat Fiji performance on a 256-bit bus and you're starved for memory bandwidth.  Samsung announced mere days ago that it has started production of HBM2, and GDDR5X production is apparently months away.  Thus, the new, high end cards on 14/16 nm might have to wait a while, but they're certainly coming.
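
Here's the bandwidth arithmetic behind that: peak bandwidth is just bus width times per-pin data rate. The data rates below are the commonly cited figures for those cards, so treat the exact numbers as assumptions:

```python
# Peak memory bandwidth (GB/s) = bus width (bits) x data rate (Gbit/s per pin) / 8.

def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8

print(bandwidth_gbs(512, 5))   # Hawaii, 512-bit GDDR5 at 5 Gbps:     320 GB/s
print(bandwidth_gbs(4096, 1))  # Fiji, 4096-bit HBM1 at 1 Gbps:       512 GB/s
print(bandwidth_gbs(256, 6))   # a 256-bit GDDR5 card at 6 Gbps:      192 GB/s
```

Even at a generous 6 Gbps, a 256-bit bus lands well under half of Fiji's bandwidth, which is the starvation problem.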

Even so, that sort of discrete video card in a laptop may not be long for this world.  There are really only two reasons why gaming laptops still need discrete video cards:

1)  An APU means a CPU and GPU from the same vendor, and neither AMD CPUs nor Intel GPUs offer the performance characteristics you'd want.

2)  You can't get enough memory bandwidth to a CPU socket to feed a GPU.

Zen likely means that the first of those goes away.  HBM means that the second does, too.

And discrete video cards are hardly desirable in laptops or other small form factors.  People only get them because they have to in order to get the performance they want.  Go with an APU and you can greatly reduce both cost and size.  You eliminate the need for the CPU and GPU to communicate over PCI Express, and probably bring power consumption down, too.

Two years from now, AMD's laptop lineup might well look something like this:

1)  An SoC with two Zen cores, 6 Polaris compute units, and dual channel 2133 MHz DDR4 for $100.
2)  An SoC with four Zen cores, 12 Polaris compute units, and dual channel 2666 MHz DDR4 for $250.
3)  A package with four Zen cores, 32 Polaris compute units, and two stacks of HBM2 totaling 16 GB for $500.
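
For a sense of scale, here's the rough peak bandwidth of those three speculative configurations; the HBM2 data rate of 2 Gbit/s per pin is an assumption:

```python
# Rough peak bandwidth for the three speculative parts listed above.

def ddr4_dual_channel_gbs(mt_per_s):
    return 2 * 64 * mt_per_s / 8 / 1000      # 2 channels x 64 bits wide

def hbm2_gbs(stacks, gbps_per_pin=2.0):      # assumed HBM2 data rate
    return stacks * 1024 * gbps_per_pin / 8  # 1024 bits per stack

print(f"1) dual channel DDR4-2133: {ddr4_dual_channel_gbs(2133):5.1f} GB/s")
print(f"2) dual channel DDR4-2666: {ddr4_dual_channel_gbs(2666):5.1f} GB/s")
print(f"3) two stacks of HBM2:     {hbm2_gbs(2):5.0f} GB/s")
```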

I'm not sure if the latency characteristics of HBM would preclude using it as main system memory.  Alignment issues shouldn't, as you can overwhelm that with, "I don't care if we only use our bandwidth half as efficiently as DDR4; we have more than 10 times as much."  But assuming that HBM can be used as system memory, note that that $500 package also includes your system memory.

And if AMD can offer something like that, there'd scarcely be any reason at all to consider anything else in a gaming laptop in about the $400-$2000 price range--even if you assume that Kaby Lake is a little better than Zen and that Pascal is a little better than Polaris.

Comments

  • Righteous_Rock Member Rare Posts: 1,234
    How will Azure affect all of this?
  • 13lake Member Uncommon Posts: 719
    Quizzical said:
    3)  A package with four Zen cores, 32 Polaris compute units, and two stacks of HBM2 totaling 16 GB for $500.
    add Xiaomi to the recipe, and you have an insane laptop for $250-$300 :)
  • Quizzical Member Legendary Posts: 25,355
    Righteous_Rock said:
    How will Azure affect all of this?
    It pretty much doesn't, as I'm talking about consumer-grade hardware.
  • WaldoCorn Member Uncommon Posts: 235
    edited January 2016
    This may be a premature or even naive question, but in what time frame (if any) do you see 970-level graphics performance at the $200 price point?

  • Malabooga Member Uncommon Posts: 2,977
    edited January 2016
    WaldoCorn said:
    This may be a premature or even naive question, but in what time frame (if any) do you see 970-level graphics performance at the $200 price point?
    10-12 months, when new gen cards in the 970/390 class are out and clearance sales start. So about "holiday season 2016". If you're willing to buy used, much sooner.
  • Ridelynn Member Epic Posts: 7,383
    The next generation will probably bring 970-level performance for the mid-low $200s. I would expect sales and rebates to bring it to $200 or below shortly after both sides have something released in that performance range.

    Each generation seems to bump a current level of performance down a pricing tier.

    The only drawback is, the entire next-generation lineup won't be released all at once. Maxwell came out with the 750 first, then the 970/980, then the rest of the Maxwell family we see today.

    So it wouldn't surprise me if the next generation came out initially in a mid-low range offering, or mobile-designated SKUs, then worked its way up. But I have no idea what order the cards will be released in.
  • Gdemami Member Epic Posts: 12,342
    WaldoCorn said:
    This may be a premature or even naive question, but in what time frame (if any) do you see 970-level graphics performance at the $200 price point?
    2 years at least. Performance gains have been very slow for the past several years, and there is no revolutionary technology ahead.
  • Quizzical Member Legendary Posts: 25,355
    To clarify, Summit Ridge is CPU-only, not an APU.  AMD hasn't officially announced any APUs with Zen cores.  But it would be completely insane if they don't come eventually.  I don't think AMD's publicly released roadmaps extend beyond the end of 2016.

    As for HBM2 as system memory, it might make sense if you need the bandwidth to feed a GPU, but I don't think it makes sense in cell phones or tablets.  There are trade-offs between bandwidth and power consumption, and HBM is built for high bandwidth, as the name implies (High Bandwidth Memory).  It uses less power than GDDR5, yes, but even a single stack of HBM as implemented in Fiji uses several watts--way too much for a tablet or cell phone.

    Though again, I'm not sure how practical this is.  If HBM2 adds 50 ns to latency as compared to DDR4, then that's a non-starter for most CPU purposes.  I'm not saying it does; it probably doesn't.  Given that it doesn't need to go off package, it might even be lower latency than DDR4.  But I don't know, and it could potentially be a problem.
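
    To see why that hypothetical 50 ns would hurt, convert it to CPU cycles (the clock speed here is just a representative assumption):

    ```python
    # Cost of hypothetical extra memory latency, expressed in CPU cycles.

    cpu_ghz = 3.5               # representative desktop CPU clock
    added_latency_ns = 50       # the hypothetical figure from above
    added_cycles = added_latency_ns * cpu_ghz
    print(f"At {cpu_ghz} GHz, +{added_latency_ns} ns is +{added_cycles:.0f} cycles "
          f"on every access that misses the caches")
    ```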
  • Ridelynn Member Epic Posts: 7,383
    Well, to be honest, even if AMD were pushing Intel harder - what would be the point?

    Software still hasn't caught up to a quad core Sandy Bridge, except in some corner cases. A lot of people are still using 5+ year old CPUs perfectly well and are perfectly satisfied.

    You could say "Well, if the CPU power were broadly available, developers would find a use for it". I would counter that by saying it ~has~ been available, for years now, and developers still haven't found a good use for it.
  • Kiyoris Member Rare Posts: 2,130
    edited January 2016
    I care more about NVMe SSD than CPU at this point.

    PCMark 8 increased its workloads to test NVMe; the drives are so fast that Futuremark said all old PCMark results are now irrelevant and can no longer be compared.

    New Samsung NVMe SSDs are insanely fast.
  • Righteous_Rock Member Rare Posts: 1,234
    Quizzical said:
    How will Azure affect all of this?
    It pretty much doesn't, as I'm talking about consumer-grade hardware.

    It is my understanding that Azure could make lower end / lesser performing hardware behave more like high end hardware. Typically when we are talking hardware, we are talking price / performance. I guess I am wondering if Azure could lift current gen high end hardware to perform as well as new tech coming down the pipe. Basically, Azure would close the gap on price / performance.
  • Quizzical Member Legendary Posts: 25,355
    Torval said:
    I didn't realize Summit Ridge wasn't an APU. It would be insane of them not to go that way. I just don't see that competing with Intel, even if it is a little cheaper. Businesses aren't going to go that route.

    I couldn't find a lot, or really any useful, information about HBM2 latency. It doesn't seem to even be considered for system memory, as far as I can tell. Here is an older article from ExtremeTech that skirts the question and is based on HBM1. http://www.extremetech.com/computing/197720-beyond-ddr4-understand-the-differences-between-wide-io-hbm-and-hybrid-memory-cube

    All of this aside I'm waiting a while to see how all this pans out in practice.
    What AMD did for a while was to release new CPU architectures as pure CPUs and new GPU architectures as pure GPUs, and then later make an APU that has both.  They've kept doing that with GPUs, though they stopped with the pure CPUs for a while because they knew that their CPUs weren't competitive.  Servers and data centers don't want a CPU with a bunch of Steamroller or Excavator cores, as they're not at all competitive with what Intel offers.  High end desktops want something high end, and that means Intel at the moment.  Cheaper desktops are adequately filled by APUs.  Laptops need integrated graphics so that they can have low idle power consumption.

    So if AMD made a chip with 8 Steamroller or Excavator cores, hardly anyone would buy it.  AMD realized that and decided to stop making such chips for a while.  With Zen, they think they've got something competitive, so a pure CPU makes sense again.

    I don't think HBM makes sense for pure CPU system memory.  The reason I bring it up as a possibility for system memory is that if you've got a bunch of HBM on package anyway to feed the GPU part of an APU, you could save cost by using that as system memory and not also having DDR4 somewhere.

    For comparison, the PlayStation 4 uses GDDR5 as its system memory, as it needs it to feed the GPU anyway.  GDDR5 is wildly inappropriate as system memory in most cases, as it burns way too much power for laptop use and can't get enough capacity for desktop or server use.  But in the PS4, it makes sense.  I'd expect future consoles to head that way on HBM and for the same reasons.

    Incidentally, GPU memory as used in a GPU is very, very high latency.  But that's true even if you use DDR3.  GPUs are built for throughput, not latency, so they make everything high latency wherever there is any benefit to doing so, and then cover up the latency by having enormous numbers of threads resident.  No one cares when you're done with the first pixel in rendering a frame; what matters is when the entire frame is done and ready to ship off to the monitor.
  • Quizzical Member Legendary Posts: 25,355
    Quizzical said:
    How will Azure affect all of this?
    It pretty much doesn't, as I'm talking about consumer-grade hardware.

    It is my understanding that Azure would bring lower end / lesser performing hardware to behave more like high end hardware. Typically when we are talking hardware we are talking price / performance. I guess I am wondering if Azure could lift current gen high end hardware to perform as well as new tech coming down the pipe. Basically Azure would close the gap on price / performance
    While there are certain things that cloud computing can do very, very well, it's wildly inappropriate to most consumer use.  In order for cloud computing to be appropriate, you need to be able to send a small amount of data to the cloud, have the cloud do a huge amount of computations, and then send a small amount of data back.  And furthermore, the latency of sending something both ways over the Internet needs to not cause problems.

    Think of GIMPS as an example of this sort of workload.  They send you a number to check for primality.  You let your computer run for a month using idle time.  It determines whether the number is prime and sends the result back.  Even if you had latency measured in days rather than milliseconds, it would barely matter.  And the data to transmit is so small that the overwhelming majority of what gets sent is protocol overhead.
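
    Here's that workload in miniature, for illustration: the Lucas-Lehmer test that GIMPS runs.  The input is one exponent, the output is one bit, and essentially all the cost is the compute in between:

    ```python
    # GIMPS in miniature: the Lucas-Lehmer primality test for Mersenne numbers.

    def lucas_lehmer(p):
        """True if the Mersenne number 2**p - 1 is prime (p an odd prime)."""
        m = 2**p - 1
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m   # all the work happens in this loop
        return s == 0

    print(lucas_lehmer(31))  # True:  2**31 - 1 is prime
    print(lucas_lehmer(29))  # False: 2**29 - 1 = 233 * 1103 * 2089
    ```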

    Outside of some anti-cheating measures that should be in place but aren't, most of the work that games do on the client needs to be done on the client.  There have been some attempts at cloud gaming, most famously OnLive.  But there are two killer problems with this:

    1)  The latency is horrible.  Implicitly adding several dozen milliseconds of latency between when you input something and when the effects show up on the screen will make everything feel very laggy.

    2)  Streaming a video of your game from the cloud to you takes enormous amounts of bandwidth.  That bandwidth is usually going to be far more expensive than just rendering the game locally.  Lossy compression means you also get worse image quality.
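
    Some back-of-envelope numbers on that second point; the stream bitrate is an illustrative assumption:

    ```python
    # Why cloud gaming needs heavy lossy compression, in rough numbers.

    width, height, fps = 1920, 1080, 60
    raw_gbps = width * height * 3 * 8 * fps / 1e9   # 24-bit color, uncompressed
    print(f"Uncompressed 1080p60: {raw_gbps:.1f} Gbit/s")

    stream_mbps = 10                                # assumed H.264-class stream
    gb_per_hour = stream_mbps * 3600 / 8 / 1000
    print(f"{stream_mbps} Mbit/s stream: {gb_per_hour:.1f} GB/hour, "
          f"~{raw_gbps * 1000 / stream_mbps:.0f}:1 lossy compression")
    ```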

    You might object that people watch game streamers all the time.  But that's not latency sensitive in the slightest.  If watching someone on Twitch consistently shows you what is going on in a game five seconds after the person playing the game sees it, that's not a problem for you in the slightest.  (I don't know what the actual latency that Twitch imposes is, but it's surely far more than is acceptable for playing a game yourself.)

    Having a few seconds of latency allows compressing the video across time, which allows far better compression ratios with far less image quality degradation than if you have to handle each frame independently.  It also allows buffering the video so that a brief hiccup in your connection typically won't disrupt the video for you at all.

    But that's not an option when you're playing the game yourself.  If pressing a button to attack doesn't make you attack until two seconds later, most games become completely unplayable, and even strictly turn-based games become awkward.
  • Quizzical Member Legendary Posts: 25,355
    In my initial post, I said that Bristol Ridge would have Steamroller cores.  That's mistaken; it will actually have Excavator cores.  What I really meant was, the same as the newer Carrizo, not the older Kaveri.
  • Quizzical Member Legendary Posts: 25,355
    I found this on Hynix's web page:

    https://www.skhynix.com/eng/product/dramHBM.jsp

    It doesn't give all of the memory latencies for HBM2, but it looks like the ones it does give are about in line with DDR3.

    Another possibility that I didn't mention above is Intel packaging HBM2 or some other on-package memory with their CPUs.  They've certainly got good enough CPUs to do that, and it wouldn't surprise me if it will be cheaper to do than Crystalwell-style caches.  But until Intel can put together a good GPU, they're not really a viable contender for APU-based gaming laptops.