
Intel launches the Xeon W-3175X in a desperation move

Quizzical Member Legendary Posts: 20,830
The part is intended more for bragging rights than as something for people to actually buy, so most of the usual sites won't have a review.  If you want a review, here's one:

https://www.anandtech.com/show/13748/the-intel-xeon-w-3175x-review-28-unlocked-cores-2999-usd

Basically, Intel took their top of the line Xeon Platinum 8180 28-core server CPU and made an HEDT processor for consumers out of it.  They won't sell it at retail like most consumer CPUs, but only through system integrators. For example, from Maingear, it starts at $14,899 and goes up from there if you want any extra options:

https://www.maingear.com/boutique/pc/configurePrd.asp?idproduct=3149

Rumors are that Intel is only going to sell 1500 of these in total, so there isn't going to be much of an ecosystem for it.  There will only be one CPU for the socket ever, and possibly only one motherboard because of the very low volumes involved.  The CPU can pull over 300 W at stock speeds, and much, much more if you overclock it.  If you've ever wanted to pull more than 1000 W from a CPU without liquid nitrogen, this is a likely candidate if you can get a beefy enough water chiller and a huge overclock.
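
To put rough numbers on how overclocking balloons the power draw: dynamic CPU power scales roughly with frequency times voltage squared, so a clock bump paid for with extra voltage gets expensive fast. A back-of-the-envelope sketch (the 300 W stock figure is from the review above; the overclock ratios are hypothetical):

```python
# Rough dynamic-power scaling: P ~ f * V^2, with switched capacitance
# assumed constant. Ignores leakage, which only makes things worse.
def scaled_power(base_watts, freq_ratio, volt_ratio):
    return base_watts * freq_ratio * volt_ratio ** 2

# Hypothetical: 300 W at stock, then +33% clocks bought with +30% voltage.
print(round(scaled_power(300, 1.33, 1.30)))  # ~674 W, before leakage
```

That quadratic voltage term is why the 1000 W figures quoted for extreme overclocks are plausible without any exotic physics.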

This is the current top of the line HEDT (high end desktop) processor, but it won't remain that way for long.  Once Threadripper 3 comes along, likely later this year, this is going to feel really dated.

So why does it even exist?  So that Intel can claim that they have the top of the line HEDT processor for a brief period of time before they get overwhelmed by AMD's 7 nm lineup.  Intel will remain competitive on mainstream desktops for quite some time to come, as if you don't have very many cores, you can clock them really high and get good performance.  Try to do that with a lot of cores and you'll have some pretty severe heat problems, which is why Intel won't be able to compete in that market again at least until they move to 10 nm, if not 7 nm.

Comments

  • Hashbrick Member Rare Posts: 1,785
    Both will be junk, because when you make a CPU with no intention of it being mass market, there is no support for it - and Threadripper can't beat Intel core per core, and never will.
    I'm a simple man spoiled from MMOs of the old age.  Looking for a home but deserted.  My heart and time is not worthy for the MMOs of the new age.
  • Mendel Member Epic Posts: 3,117
    28 cores on a desktop?  Why not just go with a more traditional supercomputer, like a Cray, as your personal computer?  Heck, they might even be able to save some money.  https://www.hpcwire.com/2017/11/28/vintage-cray-supercomputer-rolls-auction/
    /sarcasm off

    Seriously, what desktop operations really need this kind of power and aren't already done on a supercomputer?  Maybe some photo-manipulation tasks.  But financial, astronomical, weather, economic-simulation, physics, and other seriously complex algorithms can choke even a 1,000,000-core supercomputer, with some problems taking over a year to complete.



    Logic, my dear, merely enables one to be wrong with great authority.

  • gervaise1 Member Epic Posts: 6,073
    The "surprise" news seems to be the CPU's $2,999 RRP, rather than the rumoured price - $8k, later revised to $4k, according to the article.

    Maybe the low interest - if the rumoured total of 1,500 was accurate - influenced the decision. And maybe discussions with ASUS and Gigabyte as well, both of whom would have wanted to see some "reasonably realistic" sales predictions before giving the go-ahead to motherboard design & production.
  • Quizzical Member Legendary Posts: 20,830
    Hashbrick said:
    Both will be junk, because when you make a CPU with no intention of it being mass market, there is no support for it - and Threadripper can't beat Intel core per core, and never will.
    This is for HEDT (high end desktop), not mainstream consumer desktops.  Ever since the original Threadripper forced Intel to improve their HEDT offerings to be competitive, you've been able to get reasonably good single-threaded performance in the HEDT market from both Intel and AMD, but that's not the primary focus of the market.  Rather, HEDT is for people who need:

    1)  high performance in things that scale to a lot of CPU cores,
    2)  high memory bandwidth,
    3)  high memory capacity, or
    4)  a lot of PCI Express bandwidth

    Option (1) is commonly the main thing driving HEDT, but the others can sometimes matter, too.  For example, it's easy to get 128 GB of memory in an HEDT platform from either AMD or Intel, but you can't in a Coffee Lake or Ryzen 7 system.  If you're doing something that scales well to many CPU cores, then a 32-core Threadripper 3 part (without the weird memory configuration that hamstrings the Threadripper 2990WX) is going to handily destroy the 18-core Core i9-9980XE.

    Because AMD is going to have an enormous advantage in performance per watt, as well as the ability to stick however many cores into the package that they want (up to 64), it's probable that they'll stick enough in there to make sure that Threadripper 3 handily beats the new Xeon W-3175X.
  • Hashbrick Member Rare Posts: 1,785
    Quizzical said:
    Hashbrick said:
    Both will be junk, because when you make a CPU with no intention of it being mass market, there is no support for it - and Threadripper can't beat Intel core per core, and never will.
    This is for HEDT (high end desktop), not mainstream consumer desktops.  Ever since the original Threadripper forced Intel to improve their HEDT offerings to be competitive, you've been able to get reasonably good single-threaded performance in the HEDT market from both Intel and AMD, but that's not the primary focus of the market.  Rather, HEDT is for people who need:

    1)  high performance in things that scale to a lot of CPU cores,
    2)  high memory bandwidth,
    3)  high memory capacity, or
    4)  a lot of PCI Express bandwidth

    Option (1) is commonly the main thing driving HEDT, but the others can sometimes matter, too.  For example, it's easy to get 128 GB of memory in an HEDT platform from either AMD or Intel, but you can't in a Coffee Lake or Ryzen 7 system.  If you're doing something that scales well to many CPU cores, then a 32-core Threadripper 3 part (without the weird memory configuration that hamstrings the Threadripper 2990WX) is going to handily destroy the 18-core Core i9-9980XE.

    Because AMD is going to have an enormous advantage in performance per watt, as well as the ability to stick however many cores into the package that they want (up to 64), it's probable that they'll stick enough in there to make sure that Threadripper 3 handily beats the new Xeon W-3175X.
    That's my point, though: it takes AMD almost twice the cores to "beat" Intel's performance core per core.  Then months later you find out AMD really didn't do anything worthy of their time once the high-end builders start really testing it in depth.  I want to see AMD really outshine for once, like back in the day, but they just continue to disappoint.  There's a reason Xeon CPU tech had such a long life cycle in server farms: it just couldn't be beat on performance or wattage.

    Hopefully my junk comment bites me in the ass; like I said, for once I'd like to see AMD compete, as competition will drive both companies to stop holding back.
    I'm a simple man spoiled from MMOs of the old age.  Looking for a home but deserted.  My heart and time is not worthy for the MMOs of the new age.
  • Quizzical Member Legendary Posts: 20,830
    gervaise1 said:
    The "surprise" news seems to be the CPU's $2,999 RRP, rather than the rumoured price - $8k, later revised to $4k, according to the article.

    Maybe the low interest - if the rumoured total of 1,500 was accurate - influenced the decision. And maybe discussions with ASUS and Gigabyte as well, both of whom would have wanted to see some "reasonably realistic" sales predictions before giving the go-ahead to motherboard design & production.
    The nominal price tag of $3000 isn't meaningful if you can't actually buy one for $3000.  And you can't.  If the cheapest system that you can buy that includes one is $10000, does it really matter how much of the price is due to the CPU and how much due to other parts?
  • Quizzical Member Legendary Posts: 20,830
    Hashbrick said:
    Quizzical said:
    Hashbrick said:
    Both will be junk, because when you make a CPU with no intention of it being mass market, there is no support for it - and Threadripper can't beat Intel core per core, and never will.
    This is for HEDT (high end desktop), not mainstream consumer desktops.  Ever since the original Threadripper forced Intel to improve their HEDT offerings to be competitive, you've been able to get reasonably good single-threaded performance in the HEDT market from both Intel and AMD, but that's not the primary focus of the market.  Rather, HEDT is for people who need:

    1)  high performance in things that scale to a lot of CPU cores,
    2)  high memory bandwidth,
    3)  high memory capacity, or
    4)  a lot of PCI Express bandwidth

    Option (1) is commonly the main thing driving HEDT, but the others can sometimes matter, too.  For example, it's easy to get 128 GB of memory in an HEDT platform from either AMD or Intel, but you can't in a Coffee Lake or Ryzen 7 system.  If you're doing something that scales well to many CPU cores, then a 32-core Threadripper 3 part (without the weird memory configuration that hamstrings the Threadripper 2990WX) is going to handily destroy the 18-core Core i9-9980XE.

    Because AMD is going to have an enormous advantage in performance per watt, as well as the ability to stick however many cores into the package that they want (up to 64), it's probable that they'll stick enough in there to make sure that Threadripper 3 handily beats the new Xeon W-3175X.
    That's my point, though: it takes AMD almost twice the cores to "beat" Intel's performance core per core.  Then months later you find out AMD really didn't do anything worthy of their time once the high-end builders start really testing it in depth.  I want to see AMD really outshine for once, like back in the day, but they just continue to disappoint.  There's a reason Xeon CPU tech had such a long life cycle in server farms: it just couldn't be beat on performance or wattage.

    Hopefully my junk comment bites me in the ass; like I said, for once I'd like to see AMD compete, as competition will drive both companies to stop holding back.
    For programs that scale well to many CPU cores, it doesn't matter if it takes more cores to get more performance.  For programs that don't scale to many CPU cores, why are you looking at a 28-core monstrosity that costs a fortune?

    The underlying reason why Xeon has been dominant for about the last 12 years is that Intel has always been ahead of AMD on process nodes.  So long as their architectures were about as good (and sometimes they weren't), that let Intel offer more performance in the same power or the same performance with less power.  But today, their CPU architectures are about as good as each other.

    Around the middle of this year, AMD is going to be ahead on process nodes for the first time ever.  And not just slightly ahead; AMD is going to be way ahead.  TSMC's 7 nm node might even be better than Intel's long-delayed 10 nm, let alone Intel's 14 nm++^*@, or whatever they've most recently rebranded their maturing 14 nm node as.

    You could argue that AMD is a little behind Intel on performance per watt right now, but it's not a very big gap.  With the move to 7 nm, AMD will double their performance per watt overnight while Intel stands still.  That's going to open up a huge chasm in performance.  The bulk of the Xeon lineup will be thoroughly uncompetitive, with the only exceptions being whichever corner cases (e.g., 8-socket servers) AMD decides aren't worth the bother to pursue.
  • Ridelynn Member Epic Posts: 6,803
    edited January 30
    @Hashbrick has a point. It does have some caveats, but...

    https://wccftech.com/first-look-intel-vs-amd-epyc-aws-cloud-iaas-benchmarks/
    Finally, here is the performance per $ comparison using AWS pricing as of 12th January, 2019. Once again this is on a relative basis. On average, the Intel counterparts provide higher value from anywhere between 1.25x all the way up to 4.1x with HPC. What these tests are trying to say here is that the Intel instances offer both higher value and absolute performances across almost all cloud use cases.


  • Quizzical Member Legendary Posts: 20,830
    Ridelynn said:
    @Hashbrick has a point. It does have some caveats, but...

    https://wccftech.com/first-look-intel-vs-amd-epyc-aws-cloud-iaas-benchmarks/
    Finally, here is the performance per $ comparison using AWS pricing as of 12th January, 2019. Once again this is on a relative basis. On average, the Intel counterparts provide higher value from anywhere between 1.25x all the way up to 4.1x with HPC. What these tests are trying to say here is that the Intel instances offer both higher value and absolute performances across almost all cloud use cases.
    Those benchmarks are garbage.  They're a case of measuring whatever is easiest to measure, without regard to whether it tells you what you want to know.

    How well a program that scales to many cores will perform on a given CPU will depend tremendously on what else is running at the same time.  That's why, in order to do a clean comparison, you want to make sure that nothing else is running, or perhaps rather, only the minimal operating system processes.  Renting a VM on a machine where unknown other people running unknown other processes are using most of the server and you just get some thin sliver is pretty terrible, as your results won't be repeatable because they'll depend on what else is running on the machine at the time.  That's exactly what they did for those benchmarks.

    Now, the top end 28-core Xeon Platinum usually will beat the top end 32-core EPYC, at least unless memory bandwidth is the limiting factor, which is where EPYC will win.  If AVX-512 or memory latency is the main concern, the Xeon might well win by a lot.  So benchmarks showing the current generation Xeon mostly beating the current EPYC aren't suspect on that basis.  But I am saying that their methodology is garbage.

    Really, though, my claims about AMD being more competitive are about future products, not past products.  Drop twice as many CPU cores into that socket for the Rome EPYC without updating the Intel CPUs and AMD suddenly offers double the performance for the same wattage as before.  That would tilt a whole lot of benchmarks in AMD's favor.
  • Ridelynn Member Epic Posts: 6,803
    edited January 31
    As a benchmark, I agree, it's garbage.

    But... those are real world commercial prices for access. And metrics for $/op, however bogus you want to believe they are.

    It may not be the most scientific experiment in the world, but it's still worth noting.  Intel may charge the end user $$$$, but large datacenters are somehow still getting Intel to be price competitive.  That is something worth taking notice of.

    And if we are looking at "old" Intel versus newer Epyc, and that's how we are justifying the lower price - that's all the worse. I don't know, but I suspect the disparity really is that AWS is charging a higher markup on Epyc (or rather, they could have lowered the price further and made the same profit - but what incentive do they have to do that, really?).

    Tomorrow's 7nm product may offer better price/performance and watt/performance... but that isn't available now, and while I agree it ~should~.... it's still a big ~should~.
  • Quizzical Member Legendary Posts: 20,830
    Ridelynn said:

    And if we are looking at "old" Intel versus newer Epyc, and that's how we are justifying the lower price - that's all the worse. I don't know, but I suspect the disparity really is that AWS is charging a higher markup on Epyc (or rather, they could have lowered the price further and made the same profit - but what incentive do they have to do that, really?).
    My point is that if you compare what is available today, Intel's server CPUs look pretty good, and mostly better than AMD's.  If you make the same comparison a year from now, it's probable that AMD will be way ahead.

    The original reason I brought this up is to say that the Xeon W-3175X is the legitimate top of the line HEDT CPU today, but that's not going to last long, as AMD is going to do a die shrink soon and Intel isn't.  For comparison, the GeForce RTX 2080 Ti is the top of the line today (excluding Titan or professional cards), and is likely to remain so at the end of this year.  If we had compelling reason to believe that the Radeon VII was going to be much faster than it, buying one today would seem like a much worse deal.
  • Cleffy Member Rare Posts: 6,026
    I think it is an amazing piece of engineering. It's practically a GPU in regards to thermals. It makes sense why it's only available through system builders starting at $14k and requires its own mobo. I agree, it will seem silly in another couple of months when CPUs come out that are faster, cheaper, cooler, and consume less electricity. But for now... It reminds me of the FX 9500s.
  • Quizzical Member Legendary Posts: 20,830
    edited January 31
    Cleffy said:
    I think it is an amazing piece of engineering. It's practically a GPU in regards to thermals. It makes sense why it's only available through system builders starting at $14k and requires its own mobo. I agree, it will seem silly in another couple of months when CPUs come out that are faster, cheaper, cooler, and consume less electricity. But for now... It reminds me of the FX 9500s.
    While I did think of the FX-9590, that wasn't the top of the line performance.  It was just a dumb part, targeted mainly at people who mistakenly thought that whichever CPU has the highest clock speed must be the fastest.

    And the FX-9590 didn't use nearly as much power as the Xeon W-3175X.  Nor did it have anywhere near the overclocking potential that this one does if you have a cooling system that can make heat not be a problem.
  • gervaise1 Member Epic Posts: 6,073
    Quizzical said:
    gervaise1 said:
    The "surprise" news seems to be the CPU's $2,999 RRP, rather than the rumoured price - $8k, later revised to $4k, according to the article.

    Maybe the low interest - if the rumoured total of 1,500 was accurate - influenced the decision. And maybe discussions with ASUS and Gigabyte as well, both of whom would have wanted to see some "reasonably realistic" sales predictions before giving the go-ahead to motherboard design & production.
    The nominal price tag of $3000 isn't meaningful if you can't actually buy one for $3000.  And you can't.  If the cheapest system that you can buy that includes one is $10000, does it really matter how much of the price is due to the CPU and how much due to other parts?
    Publicity may factor into Intel's reasons for this. ASUS and Gigabyte though will be laser focused on $$$ and profit.

    And if, at the rumoured $10k, potential sales were as low as your suggested 1,500, then ASUS and Gigabyte may have bowed out. So the difference between $10k and $3k could be the difference between being able to buy a PC with one of these CPUs (from a system builder) and not being able to buy one at any cost.

    So yeah, I suspect it is meaningful.

    Now when AMD launches Rome - which they seem to have prioritised ahead of Navi - things may look very different, and sooner than in a year's time.

  • Ridelynn Member Epic Posts: 6,803
    Quizzical said:
    Ridelynn said:

    And if we are looking at "old" Intel versus newer Epyc, and that's how we are justifying the lower price - that's all the worse. I don't know, but I suspect the disparity really is that AWS is charging a higher markup on Epyc (or rather, they could have lowered the price further and made the same profit - but what incentive do they have to do that, really?).
    My point is that if you compare what is available today, Intel's server CPUs look pretty good, and mostly better than AMD's.  If you make the same comparison a year from now, it's probable that AMD will be way ahead.
    My point was just that @Hashbrick's point that you dismissed was valid based on historical data, and I just provided some (arguably) empirical evidence of that. Intel has been more expensive, but despite that, it has made more economic sense in dense applications because of other technical benefits. Neither Zen nor Zen+ has dislodged that yet, nor has the fact that AMD is packing more cores per socket, nor the fact that AMD is charging vastly less per core than Intel.

    Maybe that no longer holds true with 7 nm Zen 2, but it's just "probable", based on a lot of speculation and theorycrafting, and certainly not "certain". Today, and looking back for ... more than a decade really, Intel has had the upper hand, despite any marketing or technical measure AMD has been able to put forward. You make a great case for why that is "probable", but just because Zen 2 is "probable" doesn't make @Hashbrick's point any less true - he's just stating what the obvious historical case has been.


  • Ozmodan Member Epic Posts: 9,442
    Not sure what the point of this thread is as they are only making 1500 of them.  Not really a commercial item at all, just for people that have very specific needs.
  • Ridelynn Member Epic Posts: 6,803
    It's a part that shouldn't exist, and wouldn't have existed if AMD weren't where it's at. That's the point.

    If anyone had any doubt that AMD was pushing Intel's buttons, this CPU release should remove all that.
  • Quizzical Member Legendary Posts: 20,830
    Ozmodan said:
    Not sure what the point of this thread is as they are only making 1500 of them.  Not really a commercial item at all, just for people that have very specific needs.
    It's the same as the point of a lot of other threads on forums:  so that we can have something to talk about.
  • dave6660 Member Uncommon Posts: 2,693
    Now that CPUs are getting into the 28 / 32 core counts, if Amdahl's Law still holds true, we're going to start seeing serious diminishing returns from adding more cores.

    Like you said, it's nice for bragging rights (maybe).

    “There are certain queer times and occasions in this strange mixed affair we call life when a man takes this whole universe for a vast practical joke, though the wit thereof he but dimly discerns, and more than suspects that the joke is at nobody's expense but his own.”
    -- Herman Melville

  • Quizzical Member Legendary Posts: 20,830
    dave6660 said:
    Now that CPUs are getting into the 28 / 32 core counts, if Amdahl's Law still holds true, we're going to start seeing serious diminishing returns from adding more cores.

    Like you said, it's nice for bragging rights (maybe).
    I don't see Amdahl's Law as being significant here.  For the most part, an algorithm either scales well to an enormous number of threads or else it doesn't.  If an algorithm would scale well to fifty threads, then it will probably scale well to a thousand, and likely even to a million.  And if it won't scale to fifty threads, then it probably doesn't scale well to ten, either, and has a good chance of not even scaling well to three.

    There are exceptions, of course.  But I think it would be very unusual for a program to be able to scale well to the 18 cores of the Core i9-9980XE, Intel's previous top of the line HEDT processor, but not to the 28 cores of this.
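
    To make that concrete, Amdahl's Law says the speedup on n cores for a program whose parallel fraction is p is 1 / ((1 - p) + p/n). A quick sketch (the parallel fractions here are hypothetical, chosen just to show the shape of the curve):

```python
# Amdahl's Law: speedup(n) = 1 / ((1 - p) + p / n),
# where p is the fraction of the work that parallelizes.
def amdahl_speedup(p, cores):
    return 1.0 / ((1.0 - p) + p / cores)

for p in (0.50, 0.95, 0.99):
    print(f"p={p:.2f}: 18 cores -> {amdahl_speedup(p, 18):.1f}x, "
          f"28 cores -> {amdahl_speedup(p, 28):.1f}x")
# At p=0.50 both land near 1.9x (the extra cores are wasted); at p=0.99
# the jump from 18 to 28 cores is roughly 15.4x -> 22.0x.
```

    Which is the point above: either the serial fraction is tiny and the program keeps scaling, or it isn't and the exact core count barely matters.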
  • Ridelynn Member Epic Posts: 6,803
    I thought Amdahl's Law had more to do with latency, memory access, and other bottlenecks that tend to creep up as you increase core counts - hardware constraints which explain why performance won't scale linearly with core count for a given problem.

    Not algorithms that may or may not allow for good parallelism, which would be a software constraint.
  • gervaise1 Member Epic Posts: 6,073
    Ozmodan said:
    Not sure what the point of this thread is as they are only making 1500 of them.  Not really a commercial item at all, just for people that have very specific needs.
    The 1,500 was / is a rumour, like the $10k price. I would be very surprised if both ASUS and Gigabyte would design and manufacture motherboards if they only expected to sell - well, if they split the sales - a mere 750 each.
  • Quizzical Member Legendary Posts: 20,830
    gervaise1 said:
    Ozmodan said:
    Not sure what the point of this thread is as they are only making 1500 of them.  Not really a commercial item at all, just for people that have very specific needs.
    The 1,500 was / is a rumour, like the $10k price. I would be very surprised if both ASUS and Gigabyte would design and manufacture motherboards if they only expected to sell - well, if they split the sales - a mere 750 each.
    There are plenty of motherboards for peculiar enterprise purposes that have far fewer than 750 units made.  If a customer wants only 100 units of a custom motherboard but is willing to pay $20k each for them, and the design costs $1 million with per-unit build costs of $1k, there's plenty of money to be made on that.

    Note that you also can't buy the motherboards directly.  The Anandtech review estimated the motherboard price tag as $1500.  That's entirely consistent with only 750 units being made, and far more than flagship motherboards for more common platforms would cost.
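
    The arithmetic in that hypothetical works out comfortably (all of these figures are the illustrative numbers from the paragraph above, not real pricing):

```python
# Hypothetical niche-motherboard economics: 100 units sold at $20k each,
# against a $1M one-time design cost and $1k per-unit build cost.
units, price = 100, 20_000
design_cost, unit_cost = 1_000_000, 1_000

revenue = units * price                       # $2,000,000
total_cost = design_cost + units * unit_cost  # $1,100,000
print(f"profit: ${revenue - total_cost:,}")   # profit: $900,000
```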
  • Quizzical Member Legendary Posts: 20,830
    Apparently there is only one motherboard model for this CPU.  It costs $1800:

    https://www.newegg.com/Product/Product.aspx?Item=N82E16813119192

    It also supports some Xeon CPUs.  Gigabyte has said that they're making a motherboard for it, too, though it's expected to be a few months before it is available.

    Asus's motherboard includes four 8-pin CPU power connectors, among other things.  I think that they're basically trying to build a motherboard where, if you try to make the CPU burn 1000 W, the motherboard is completely fine with that.  Or likely some figure significantly north of that.
  • grndzro Member Uncommon Posts: 1,152
    edited February 19
    Some people seem to be forgetting that Ryzen actually has IPC equal to Intel's, but it is constrained by the cache/RAM system.

    Ryzen 3xxx/Zen 2 changes all that. It will be on par with Intel in IPC, ST, and clock speeds.