Intel launches Arc A-series discrete GPUs

Quizzical Member Legendary Posts: 25,351
Intel has been promising for quite some time to jump into the discrete GPU market and provide some competition to Nvidia and AMD.  They had promised to have cards on shelves in Q1 2022.  Tomorrow is the last day of the quarter (or possibly today, depending on where in the world you live), so it was time to launch something.  So they did.  Kind of.

In a sense, this isn't quite their first discrete GPU launch since 1998.  There was also the Intel Server GPU and some weird discrete part that was roughly a Tiger Lake laptop chip without the processor and didn't properly support PCI Express.  But those were just test parts.  This is the launch of a GPU that they actually want people to buy.

The problem is that the parts that they actually wanted to launch today aren't ready yet.  The top end part has 4096 shaders and a 256-bit GDDR6 memory bus.  For comparison, Nvidia's newly announced GeForce RTX 3090 Ti has 10752 shaders and a 384-bit GDDR6X memory bus.  The Radeon RX 6900 XT has 5120 shaders and a 256-bit GDDR6 memory bus, as well as a 128 MB L3 cache to reduce the need for so much memory bandwidth.  So it's unlikely that Intel will be able to challenge for top end performance, though they might be able to be competitive with, say, a GeForce RTX 3070 or a Radeon RX 6700 XT.
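
To put rough numbers on those paper specs, here's a quick back-of-envelope sketch in Python.  The boost clocks and memory data rates are my own assumptions for illustration (the Arc figures especially aren't confirmed), so treat the output as ballpark, not benchmark:

```python
# Back-of-envelope comparison from paper specs.  Boost clocks and memory data
# rates are assumptions for illustration, not official figures.
def fp32_tflops(shaders, boost_ghz):
    # Each shader can do 2 FP32 ops per clock (fused multiply-add).
    return shaders * 2 * boost_ghz / 1000.0

def mem_bandwidth_gbs(bus_bits, data_rate_gbps):
    # Bandwidth = bus width in bytes x per-pin data rate.
    return bus_bits / 8 * data_rate_gbps

cards = {
    # name: (shaders, assumed boost GHz, bus width in bits, assumed Gbps per pin)
    "Arc A770M (top Alchemist)": (4096, 1.65, 256, 16),
    "GeForce RTX 3090 Ti":       (10752, 1.86, 384, 21),
    "Radeon RX 6900 XT":         (5120, 2.25, 256, 16),
}

for name, (shaders, ghz, bus, rate) in cards.items():
    print(f"{name}: ~{fp32_tflops(shaders, ghz):.1f} TFLOPS FP32, "
          f"~{mem_bandwidth_gbs(bus, rate):.0f} GB/s")
```

Even on those assumptions, the big Arc part lands well below the flagships on raw throughput, which is why a mid-range comparison seems more apt.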

But if the parts that you want to actually launch aren't ready, you instead launch the one that is.  Even if it's the bottom of the line part with 1024 shaders and a 64-bit memory bus.  Intel is also offering a 768 shader salvage part with the same memory bus.  For comparison, the best integrated GPUs in Tiger Lake or Alder Lake parts have 768 shaders, as do the best integrated GPUs in AMD's Rembrandt (Ryzen Mobile 6000 series) parts.

In classic Intel fashion, when making performance claims about their new hardware, they don't acknowledge the existence of non-Intel hardware, even though basically all of the discrete GPUs that people will compare Intel's new ones to are made by either Nvidia or AMD.  Instead, they compare their new Arc A370M to the integrated GPU in a Core i7-1280P and promise that it's faster.  Well, it probably is, since it's basically the same architecture but with more shaders, higher clocks, and a faster memory bus.

But they don't even compare it to, say, the integrated GPU in a Ryzen 9 6980HX, which is currently AMD's best laptop part.  And unless they can beat that, then the card doesn't have much of a point.  They probably beat it sometimes; at minimum, they have more memory bandwidth.

Oh, and this launch is for laptops only.  Because who wants a new discrete card in a desktop that isn't necessarily faster than an integrated GPU?  Well, people who just want to display the desktop and whose old GPU died might want that, but if that's all that you want, there's no reason to wait for Intel rather than picking up a Radeon RX 550 or GeForce GTX 1050 or some such and calling it a day.

Laptop part vendors sometimes launch claiming that they have some large number of "design wins".  Instead of that, Intel says that the first design to use their new discrete laptop parts is a Samsung Galaxy Book2 Pro, which is available now.  That seems to be news to Samsung, whose site lists the Galaxy Book2 Pro as available only for pre-order, and doesn't mention anything about a discrete GPU inside.  Even so, if Intel could cajole a few laptop vendors into bringing Cannon Lake to market, I have no doubt that something with an Arc A370M will show up eventually.

Eventually.  That's the real takeaway here: the hardware that gamers should actually care about is delayed.  The Arc 7 A770M with its 4096 shaders and 256-bit GDDR6 memory bus might well be a nifty laptop GPU.  Its desktop equivalent might well be a nifty mid-range GPU.  Or it might not.  But it's not here yet.  Intel is now promising a launch in "early Summer".  Probably even Summer of 2022, though the slide doesn't explicitly say that.

Of course, AMD and Nvidia presumably have new video cards coming later this year, likely on TSMC 5/4 nm.  Intel Arc Alchemist is likely to have little to no time on the market before it has to compete with a GeForce RTX 4000 series and/or a Radeon RX 7000 series, or whatever the incumbent GPU vendors decide to name their next parts.  At that point, being a process node behind will almost certainly make Intel look bad on any efficiency metrics you like.

Even so, you have to start somewhere, and it's awfully hard to start at the top.  One presumes that Intel GPUs can catch up on process nodes eventually, whether via Intel foundries catching up to TSMC or by hiring TSMC to build their GPUs, as they did with Arc Alchemist.  So bring on Battlemage, Celestial, and Druid.  If Intel can provide decent mid-range cards with decent reliability and driver support, that will be a boon for gamers.  Bringing more GPUs to market will bring down prices, at least unless the price of Ethereum spikes upward yet again.  But those are several big "if"s, and it will be several more months before we see if Intel can deliver.

Comments

  • Sylvinstar Member Uncommon Posts: 158
    Here's the real news concerning Intel and Nvidia.  Mainly it's about Intel though =)




  • Quizzical Member Legendary Posts: 25,351
    Sylvinstar said:
    Here's the real news concerning Intel and Nvidia.  Mainly it's about Intel though =)

    That's about Intel's foundry business, not about Intel's GPUs.  Certainly, Intel would love to make their foundry business into a legitimate rival to TSMC and competitive for customers that need top end process nodes.  It's not clear whether they'll be able to do that, but they've got a real shot at it.

    Intel is very new to having external customers for their foundry, however.  Intel was the last company in the world to decide that they didn't have enough volume to justify having their own foundries for just their own products.  There used to be dozens of companies that had their own fabs.  Without sufficient volume to justify their own fabs, Intel had to either give up their foundries or take external customers, and they chose the latter.  We'll see how it plays out for them.

    For decades, Intel only built their own parts.  Since 22 nm, they've allowed a handful of other companies to use their fabs, but have been very picky about it.  Now, they're trying to get any customers that they can, and would love to have Nvidia, AMD, Apple, Qualcomm, or any other big silicon vendors fabricate their parts at Intel.  If you get paid to fabricate your competitors' chips, then you win every generation no matter which company designs the best parts.
  • Ridelynn Member Epic Posts: 7,383
    I consider Intel's GPUs vaporware until proven wrong.

    Given Raja is on the case, I'm sure something will ship eventually, but that it will be a giant heap of promises that delivers only on its ability to disappoint.
  • Quizzical Member Legendary Posts: 25,351
    Intel isn't willing to compare their hardware to other vendors, so AMD decided to make that comparison for them:

    https://www.techpowerup.com/293522/amd-claims-radeon-rx-6500m-is-faster-than-intel-arc-a370m-graphics

    It shouldn't be surprising that they claim the Radeon RX 6500M is faster across the board than the Arc A370M, as the former has equivalent or better specs in all of the revealed paper specs.  There may be some degree of cherry-picking, but AMD lists half of the games that Intel listed in one of their slides.

    What is perhaps more interesting is that the AMD GPU does this in 3/4 of the transistor count on exactly the same process node as Intel.  One wouldn't expect Intel's first real discrete GPU in decades to blow away the competition, and you could argue that for Intel to be able to launch a product that isn't completely awful is at least a moral victory of sorts, considering their history.
  • BrotherMaynard Member Rare Posts: 567
    edited April 2022
    Making side-by-side comparisons between different architectures is a bit pointless; you can't just say Nvidia has 10k shaders, while this Intel GPU only has 4k. They are completely different beasts. You can see it with Nvidia and AMD, which at their launch were pretty even at the top end, with the 6900XT even taking the lead in some games. Their numbers on paper are very different, though.

    Second point - just a minor quibble, really - is about 'Intel never mentioning their competitors'. They actually did, during their darkest period about a year ago, when they introduced their new CPUs by basically making video presentations that were 99% about AMD. It was pretty funny to watch, actually. It's like they were trying to avoid even mentioning their new product (and their awful naming, which even their PR staff couldn't remember) in their own promo videos.



    Regarding the new GPUs, I'm not really worried about their performance; it has been widely known these are not the performance beasts people have been looking for. We'll need to wait for the other ones, the 'Battlemage', the 'Celestial' and the 'Druid' (I have no idea what their marketing team have been smoking to come up with those - other than wanting a sequence of A, B, C and D). With 'Alchemist', Intel merely provides better performance than integrated graphics. At best, it will compete with the very low end of Nvidia and AMD. I don't think it has ever been Intel's goal to aim higher with this first batch.

    If they price them correctly (e.g. just a few bucks more than the integrated GPUs), they could sell very well and in the process Intel can get customers used to their new product type while ironing out any issues and working on the drivers as they prepare the real competition for Nvidia and AMD. That's pretty much the main role of this Arc GPU...

    Edit: one more thing re: competition with Nvidia / AMD. If GPU shortages re-appear with the next GPU generation (especially due to the fab capacity booked for other customers), Intel might have a significant advantage. Both Nvidia and AMD would obviously focus on their high-end products, as they offer the best margin, while Intel could use their absence at the low end to get a serious foot in the door. Intel still has its own fabs (and is preparing new ones - but that's more medium-term stuff), so they have the flexibility and unrestricted access that the other two do not have. Look at what happened last time with AMD, as they had to simultaneously fulfil orders for console HW and their own new CPU generation, while at the same time most of TSMC's capacity had been bought by Apple.


  • Quizzical Member Legendary Posts: 25,351
    BrotherMaynard said:
    Making side-by-side comparisons between different architectures is a bit pointless; you can't just say Nvidia has 10k shaders, while this Intel GPU only has 4k. They are completely different beasts. You can see it with Nvidia and AMD, which at their launch were pretty even at the top end, with the 6900XT even taking the lead in some games. Their numbers on paper are very different, though.

    Second point - just a minor quibble, really - is about 'Intel never mentioning their competitors'. They actually did, during their darkest period about a year ago, when they introduced their new CPUs by basically making video presentations that were 99% about AMD. It was pretty funny to watch, actually. It's like they were trying to avoid even mentioning their new product (and their awful naming, which even their PR staff couldn't remember) in their own promo videos.

    Regarding the new GPUs, I'm not really worried about their performance, it has been widely known these are not the performance beasts people have been looking for. We'll need to wait for the other ones, the 'Battlemage', the 'Celestial' and the 'Druid' (I have no idea what their marketing team have been smoking to come up with those - other than wanting a sequence of A, B, C and D). With 'Alchemist', Intel merely provides a better performance vs integrated graphics. At best, it will compete with the very low end of Nvidia and AMD. I don't think it has ever been Intel's goal to aim higher with this first batch.

    If they price them correctly (e.g. just a few bucks more than the integrated GPUs), they could sell very well and in the process Intel can get customers used to their new product type while ironing out any issues and working on the drivers as they prepare the real competition for Nvidia and AMD. That's pretty much the main role of this Arc GPU...

    Edit: one more thing re: competition with Nvidia / AMD. If GPU shortages re-appear with the next GPU generation (especially due to the Fab capacity booked for other customers), Intel might have a significant advantage. Both Nvidia and AMD would obviously focus on their high-end products, as they offer the best margin, while Intel could use their absence at the low end to put some serious foot in the door. Intel still has its own fabs (and is preparing new ones - but that's more medium terms stuff), so they have the flexibility and unrestricted access that the other two do not have. Look at what happened last time with AMD, as they had to simultaneously also fulfil orders for console HW and their own new CPU generation, while at the same time most of TSMC's capacity had been bought by Apple.
    I agree that comparisons based on specs across architectures are a little dubious.  You can make them a little better if you pro-rate them in various ways.  But though I left it unstated, I wasn't basing the comparison purely on the number of shaders.  Most notably, the GeForce RTX 3090 Ti has 84 compute units, the Radeon RX 6900 XT has 80, and the top Arc Alchemist part has 32.  Paper specs do give you some idea of where a part is likely to land, though.
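
    As a crude illustration of that pro-rating, here's a sketch that scales compute units by an assumed boost clock.  The clocks are assumptions, and per-compute-unit throughput differs between architectures, so this only gives a very rough ordering:

    ```python
    # Crude pro-rating: scale compute units by an assumed boost clock.  Per-CU
    # throughput differs between architectures, so this is only a rough ordering.
    parts = {
        # name: (compute units, assumed boost GHz)
        "GeForce RTX 3090 Ti": (84, 1.86),
        "Radeon RX 6900 XT":   (80, 2.25),
        "Arc A770M":           (32, 1.65),
    }
    arc = parts["Arc A770M"]
    baseline = arc[0] * arc[1]
    for name, (cus, ghz) in parts.items():
        print(f"{name}: ~{cus * ghz / baseline:.1f}x the Arc's CU x clock product")
    ```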

    They launched the part that they did because it was the part that they had ready.  Their real target part is the bigger one that should be about 4x the performance of this one, or maybe a little more if it's clocked higher.

    The new Intel GPUs are manufactured at TSMC, not at Intel.  I'm not sure about future generations.  I'd assume that Intel would like to manufacture their own GPUs, but they'll also need a competitive process node to do that, which is something that they don't have today.
  • Ridelynn Member Epic Posts: 7,383
    I don't think you can give Intel a pass because this is their first discrete GPU. 

    It's not like this is Intel's first GPU. And no, not because of technicalities like the i740 back in the late '90s, or the failed Larrabee of '06. They have been making GPUs for a really, really long time, and are the single largest vendor of GPUs -- since one has been included with almost every consumer processor they've sold for a very long time now. Intel has been making graphics products since before nVidia was even founded.

    Now, sure, there are differences between a product that is internal to a CPU and one destined to live outside without all the attendant constraints. But I suspect the relaxing or removal of constraints should make it easier, not harder, to get better performance if you don't have to worry about sharing die space or really tight thermal/power limits.

    I suspect these products really only have two markets. The first would be OEMs, to whom Intel will give a great combo deal (and push all these form factor ideas to go with it). Vendors will build them, and they will probably sell in moderate numbers just because they will be available.

    But what I really think Intel is looking at is cashing in on some of the "gaming" craze, which is really just a dog whistle for cryptominers - who are, I think, Intel's real audience here. They see that nVidia and AMD have sold everything they can make, and can't keep up - so there's obviously room for another player... so long as Crypto keeps the market hot, it doesn't matter if they release a crap gaming product with crap drivers - it just needs a decent hash rate and price tag, and they won't be able to make enough to satisfy the "infinite returns" that mining has right now. At least until that bubble pops.
  • Quizzical Member Legendary Posts: 25,351
    There are a lot of things that are new to Intel here, not just discrete GPUs.  Things are more likely to go wrong the first time you try something than on subsequent attempts when you can learn from what went wrong before.

    Intel's past high end products had all been manufactured at Intel fabs.  They had fabricated a handful of chipsets elsewhere when they were short on capacity at their own fabs, but that's not at all similar to trying to make a high end, competitive product.  Now they're trying to mix and match fabs both by fabricating the new generation of GPUs at TSMC and opening up their own fabs to build chips for nearly anyone else who wants to use them.

    There are a lot of things that can go wrong when you try to build a big GPU that just aren't problems on a small one.  Big GPUs are the standard example of a NoC (network on chip), and shuffling around data at a rate of many terabytes per second is just plain harder when it's more data on a bigger chip.  Local caches that merely get replicated scale easily, but L2 cache and global memory are harder to scale.  The big Arc Alchemist and Ponte Vecchio are basically Intel's first real attempts at doing this on a much grander scale than they've previously done with Xeon.

    High performance memory such as GDDR6 or HBM2 isn't entirely new to Intel, as past iterations of Xeon Phi and certain FPGAs have also attempted to use it.  Those previous efforts turned out badly, though, in part because Xeon Phi was just awful all around and no one has figured out how to make high performance memory just work on an FPGA.  This will be their first real attempt at getting high performance memory to work with a product that can plausibly put it to good use.

    The demands that gamers place on driver support are far higher for higher end gaming parts than for integrated GPUs that really just need to display the desktop, decode videos, and not burn too much power.  Intel has really stepped up their GPU driver support from its abysmal state of 15 years ago (when they'd discontinue driver support a few months after launch), but having a lot of people actually care if they run into video driver bugs in a new game on launch day will be a new thing for Intel.

    Meanwhile, Intel claims that they're trying to break into the enterprise GPU compute market with Ponte Vecchio and finally bring some competition to Nvidia.  They don't necessarily have to be awesome to be profitable there, as just having a decent product and undercutting Nvidia's monopoly pricing levels might be enough to sell a lot of parts.  What they're doing with OneAPI looks really good on paper, though it remains to be seen just how good the delivered product will be.

    The GPU compute market is also unique because that's the one major GPU market that AMD mostly ignores, at least apart from mining.  Yes, yes, they have Radeon Instinct and ROCm and so forth, and it technically is possible to buy Radeon Instinct cards and run something or other on them.  But in order to be a real competitor in enterprise GPU compute, you have to have products that fit rack-mountable servers nicely (not just mid-tower desktops), you have to have good driver support for those products, and you have to make the products and drivers practical to actually buy and use.  AMD has had all three of those at various points in time, but never all three at once.  What's missing today is driver support:  ROCm is complete garbage, to the extent that Radeon Instinct is more a marketing stunt and a fabrication test vehicle than a real product.

    With all of the things that are new to Intel, do I expect Arc Alchemist (or Ponte Vecchio) to be competitive with Nvidia and AMD on whatever efficiency metrics you like?  No, and I'd be really surprised if they are.  But just getting something that more or less works well enough to be useful while being substantially inferior to the competition could be major progress for Intel if it lets them see what went wrong and fix it in subsequent generations.
  • Quizzical Member Legendary Posts: 25,351
    Apparently the Intel laptop GPU launch was only for South Korea.  The parts will be available elsewhere eventually, but it might be a few months.
  • Quizzical Member Legendary Posts: 25,351
    Torval said:
    Quizzical said:
    Apparently the Intel laptop GPU launch was only for South Korea.  The parts will be available elsewhere eventually, but it might be a few months.

    I laugh-cried at this. It feels so pathetic and cringe to see how consumer hardware is being handled at this point.
    I think that the basic issue is that Intel promised that the cards would launch in Q1 2022. When there were rumors of delays, they doubled down and promised that they'd be on store shelves by the end of the first quarter.

    The delays were real, however.  That didn't mean that they had no working parts, but rather, that they only had a handful.  That's enough to launch one low-volume product in one market (and possibly let it sell out there), but not enough for a real, worldwide launch.

    It's common to have a handful of working chips for a product months before launch.  You manufacture a handful, test them out when they come back from the fabs, and see what's wrong.  There's nearly always something broken in first silicon, and more likely, a lot of things.  So you make changes and try again, and get back the new batch of chips six weeks later or so.  Repeat however many times it takes to get chips that work well enough that you're willing to send in a big order for millions of them.

    You could send in a big order sooner, but you don't want to produce a million chips and then have to throw them all in the garbage because they didn't work.  Thousands of chips is a lot cheaper and lets you diagnose what went wrong and fix it.  Once you've got something that entirely works and you're ready for a big order, you might well have thousands of working chips that you could sell.

    It will take months before the chips from the big order that you need for a real, volume launch come back, though.  You need to send some samples to design partners, such as laptop vendors making laptops that will use the cards, so that they can get their own hardware working properly with the new GPU without having to wait for larger volume to be available.

    Intel surely is doing some of that behind the scenes, too, but they decided to also do an official launch while they still had very low volume.  Companies usually don't do that, but it's not unheard of.  For example, it was about two months between the nominal launch of the GeForce GTX 680 and cards actually showing up at retail--but when the cards finally did show up, it was a flood of them.  The GeForce GTX 1080 and the early parts from the GeForce RTX 3000 series had very soft launches, but that was waiting on new memory standards, not GPU silicon.

    Usually vendors don't want to tell you to wait for their next generation, as it will mean that you don't buy their previous generation product.  For Intel discrete GPUs, there isn't a previous generation product on the market to buy.  So Intel didn't feel the usual pressure to wait.
  • Ridelynn Member Epic Posts: 7,383
    Yeah, no pressure to wait for volume. I agree with that. In fact, I'd say they are heavily motivated to get it out the door as soon as they can, in any capacity they can.

    The reason is that they know it's not terribly competitive versus this current generation, at least it doesn't appear that it will be, based on most performance metrics -- and this actual first release is just confirming that.

    They need to get the product out the door ASAP so it gets compared to this generation, where it may not be the best, but it has a fighting shot at being competitive based on price/performance. Give AMD and nVidia enough time to get their next generation out the door (which may be as soon as later this year), and it makes these Intel parts look that much worse, and they have a much harder time competing on even price/performance.

    The other reason to get them out - the current mining bubble has all appearances of being ready to pop. Not saying crypto is (although I am eagerly awaiting that day) - just that mining seems to be back on a downward slope again. I think that was Intel's real target market, and if they miss out on it, it will crush their sales projections not to have an infinite sales market ready to snap these things up no matter what the driver or (gaming) performance situation is.
  • Quizzical Member Legendary Posts: 25,351
    https://community.intel.com/t5/Blogs/Products-and-Solutions/Gaming/Engineering-Arc-5-9-2022/post/1383055

    Apparently the desktop cards will initially be exclusive to China.  This is only for the low end, A3 parts that aren't much faster than an integrated GPU, anyway.  So in a sense, those parts being exclusive to China isn't really much of a loss to the market, but it is weird.

    The A7 parts that gamers may actually care about are coming later, and will initially be exclusive to OEMs and system integrators.  So basically, you still won't be able to buy one without buying it as part of an entire computer.  It sounds like Intel is worried that their drivers will completely fail if you try to pair it with unfamiliar parts, such as putting it in an old motherboard as a way to upgrade your system.

    Intel does still promise that the cards will launch and be available to buy eventually.  It's far from clear whether that will happen before GeForce 4000 series or Radeon 7000 series (or whatever the respective next generations are) show up on the market.
  • Mendel Member Legendary Posts: 5,609
    Announcing a new graphics card that doesn't have the components they really want to include may be a hardware manufacturer's attempt to get on the crowdfunding bandwagon.  Maybe Intel hopes their cult will rival Star Citizen's cult following someday.  B)




    Logic, my dear, merely enables one to be wrong with great authority.

  • Ridelynn Member Epic Posts: 7,383
    edited May 2022
    Yeah this just keeps getting better and better. It's so good they even promoted Raja to VP. 

    And the promises keep getting pushed back with very restrictive caveats. 

    I don't believe we'll see a general release (one with broad availability that isn't locked to an OEM builder) Intel GPU for another 2 years, and I give it even odds that by then Intel will have cancelled the project - as I still maintain the biggest reason Intel finally decided to jump in here was to get a piece of the mining action, and that is finally dying back off.
  • Wordsworth Member Uncommon Posts: 166
    Go ahead, lil' buddy.  You might even get 7nm out next year before M1 gets 3...
  • Quizzical Member Legendary Posts: 25,351
    edited May 2022
    Wordsworth said:
    Go ahead, lil' buddy.  You might even get 7nm out next year before M1 gets 3...
    They renamed 10 nm Enhanced SuperFin to Intel 7, and so Alder Lake is already "7 nm" and launched last year.

    That isn't cheating as much as it sounds, as TSMC and especially Samsung had already been exaggerating their node names for years.  Intel had to do likewise just to keep up.  What's reasonable to dispute is just how much exaggeration it takes to make their names comparable to their competitors'.
  • ChildoftheShadows Member Epic Posts: 2,193
    edited May 2022
    "X-nm process" has turned into a marketing term and largely means nothing today. The processes used have changed so much that the size isn't comparable like it used to be.
  • Quizzical Member Legendary Posts: 25,351
    Ridelynn said:
    Yeah this just keeps getting better and better. It's so good they even promoted Raja to VP. 

    And the promises keep getting pushed back with very restrictive caveats. 

    I don't believe we'll see a general release (one with broad availability that isn't locked to an OEM builder) Intel GPU for another 2 years, and I give it even odds that by then Intel may well cancel the project - as i still maintain the biggest reason Intel finally decided to jump in here was to get a piece of the mining action, and that is finally dying back off.
    I'm more optimistic than that.  I think that we'll see Intel A700 series discrete video cards available as a standalone purchase (or possibly as a small bundle where the video card is the point of the bundle) on Newegg by the end of this year.

    What is happening is that they're trying to launch as early as they possibly can, so they do basically a paper launch, even though the hardware really isn't ready.  Think of the GeForce GTX 680, for example:  it was about two months between the official launch date and cards being actually available to buy, but once they showed up, there were a lot of them.  There's an old adage that you don't want to hype your next generation too soon because it will cannibalize sales of the previous one, but Intel doesn't have a previous generation to cannibalize.

    Intel wants reviews to compare their new lineup to the GeForce RTX 3000 and Radeon RX 6000 series, not their successors that may be out later this year.  If they wait until they have everything in order for a hard launch, then that could easily mean reviews that say that the A770 or whatever is much further behind a GeForce RTX 4090 and Radeon RX 7900 than it would have been if the comparison was to previous generation hardware.

    Intel knows very well that they're going to lose money on this first generation of discrete GPUs.  That's part of the cost of breaking into the market.  But they'll make mistakes, learn from them, and have better hardware in future generations.  Whether that hardware ever becomes genuinely competitive as opposed to just a budget option is an open question.

    I think that Intel recognizes that they need to have good GPUs in order to be competitive as integrated GPUs eat up more and more of the market.  They don't want a future where most gamers use integrated graphics and insist on buying the $500 AMD APU over the $500 Intel APU because Intel's integrated GPU is no good.  They'll learn a lot from trying to build genuinely high end parts, and that will make their future integrated GPUs better than they would be otherwise, whether they're able to be competitive in the discrete GPU market or not.

    Lest some readers think that there won't be a future where most gamers use integrated graphics, we're already well on the way to it.  Game consoles have done that for years, and Apple's recent M1 Max chip likewise uses a high-performance integrated GPU.  As die shrinks make CPU cores smaller and smaller, we'll probably see APUs in the $300-$500 range with on-package memory that beat anything you can do for the same price with a CPU and a discrete GPU.
  • finefluff Member Rare Posts: 561
    Quizzical said:

    Lest some readers think that there won't be a future where most gamers use integrated graphics, we're already well on the way to it.  Game consoles have done that for years, and Apple's recent M1 Max chip likewise uses a high-performance integrated GPU.  As die shrinks make CPU cores smaller and smaller, we'll probably see APUs in the $300-$500 range with on-package memory that beat anything you can do for the same price with a CPU and a discrete GPU.
    Could that help explain the shift to higher pricing even for the lower end cards? Will GPU manufacturers focus on high cost high performance discrete GPUs because they expect the lower end of the gaming market to be covered by integrated graphics?
  • Quizzical Member Legendary Posts: 25,351
    edited May 2022
    finefluff said:
    Quizzical said:

    Lest some readers think that there won't be a future where most gamers use integrated graphics, we're already well on the way to it.  Game consoles have done that for years, and Apple's recent M1 Max chip likewise uses a high-performance integrated GPU.  As die shrinks make CPU cores smaller and smaller, we'll probably see APUs in the $300-$500 range with on-package memory that beat anything you can do for the same price with a CPU and a discrete GPU.
    Could that help explain the shift to higher pricing even for the lower end cards? Will GPU manufacturers focus on high cost high performance discrete GPUs because they expect the lower end of the gaming market to be covered by integrated graphics?
    GPU vendors haven't launched new genuinely low end cards in quite some time.  The reason that the "low end" seems to be increasing in price is that what used to be mid-range is now the bottom of the lineup.

    For example, back around 2010, AMD's lineup ranged from the Radeon HD 5450 with one compute unit up to the Radeon HD 5870 with twenty.  The next year, Nvidia's lineup ranged from the GeForce GT 510 with one compute unit up to the GeForce GTX 580 with sixteen.  (I'm ignoring dual-GPU cards.)  That's a huge range, with the bottom of the line only 5-6% as powerful as the top of the line in the same generation.

    Today, in the GeForce RTX 3000 series, Nvidia's lineup ranges from the GeForce RTX 3050 with 20 compute units up to the GeForce RTX 3090 Ti with 84.  AMD's lineup ranges from the Radeon RX 6400 with 12 compute units up to the Radeon RX 6950 XT with 80.  Rather than the top of the line having 16 or 20 times as much hardware as the bottom, it's 4.2 or 7 times as much, which is a much smaller range.
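
    As quick arithmetic, using the compute unit counts above:

    ```python
    # How much the top-to-bottom range within a generation has shrunk,
    # using the compute unit counts listed above.
    generations = {
        "Radeon HD 5000 (2010)": (20, 1),   # HD 5870 vs HD 5450
        "GeForce 500 (2011)":    (16, 1),   # GTX 580 vs GT 510
        "GeForce RTX 3000":      (84, 20),  # RTX 3090 Ti vs RTX 3050
        "Radeon RX 6000":        (80, 12),  # RX 6950 XT vs RX 6400
    }
    for name, (top, bottom) in generations.items():
        print(f"{name}: top part has {top / bottom:.1f}x the compute units of the bottom")
    ```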

    There's no reason why you can't make a smaller, cheaper GPU than that.  The recently launched Ryzen 3 5125C (for chromebooks) has an integrated GPU with 3 compute units.  But they don't bother making genuinely low end discrete cards anymore.

    There isn't enough of a market for that over an integrated GPU, and people who do need a discrete card can buy one from an older generation on an older process node.  They discontinue most of the lineup for older generations, but they do continue producing at least one low end card from some older generation.  You can still buy a new GeForce GT 1030 or Radeon RX 550 today, as well as some others that are much older, like a GeForce GT 730.
  • Quizzical Member Legendary Posts: 25,351
    edited May 2022
    finefluff said:
    Quizzical said:

    Lest some readers think that there won't be a future where most gamers use integrated graphics, we're already well on the way to it.  Game consoles have done that for years, and Apple's recent M1 Max chip likewise uses a high-performance integrated GPU.  As die shrinks make CPU cores smaller and smaller, we'll probably see APUs in the $300-$500 range with on-package memory that beat anything you can do for the same price with a CPU and a discrete GPU.
    Could that help explain the shift to higher pricing even for the lower end cards? Will GPU manufacturers focus on high cost high performance discrete GPUs because they expect the lower end of the gaming market to be covered by integrated graphics?
    I should add that part of the apparent shift is due to miners buying everything.  That will settle down once cryptocurrency mining goes away, though it will take some time for the market to normalize.  The recent crash of Terra/Luna helps, as that brought down cryptocurrency prices far more broadly.
  • Ridelynn Member Epic Posts: 7,383
    I've watched a lot of Youtube videos.

    You can find some amazing stuff on Youtube.

    But it helps if you look at things other than Youtube from time to time.
  • Ridelynn Member Epic Posts: 7,383
    edited May 2022
    I came to a realization not terribly long ago. At the time I was a bit bent about the top end card price going through the roof - well, all card prices really. But what used to be top-end was around $600-700, and now it's over $2,000.

    But...

    The name on the card doesn't mean much. "Tiers" as we all became used to have completely been blown out of the water.

    As Quiz likes to say, you should look to roughly double your performance when you look to upgrade. I think that's a decent starting point - some people feel more comfortable chasing cutting edge and will upgrade a bit sooner, others are comfortable for a bit longer -- it's just a rough rule of thumb.

    I'd also state - that your upgrade should fit inside your budget. If doubling your performance (or whatever your metric is) isn't possible inside your budget; wait another generation. For a given price, the performance should increase, and as generations move forward, the price for that level of performance should come down. 

    That's the real metric -- that price / performance ratio. No matter what name they put on the card, or what price they release it for. It takes a bit of teasing to get that data out -- the naming convention was nice, but nVidia blew that out with the 2000 (Turing) generation.

    During the sane Good Old Days, you could roughly double your performance for the same cost every 2-3 generations. Buy a card, skip the next generation or two, and then you could find your huckleberry, for roughly the same price as you paid before.

    My concern is that the mid and lower ends (basically anything <$500) were obliterated with the pandemic. We need to see a return of cards in that $150-$300 budget range. If you look at the Valve Hardware Survey, the top 10 is dominated by cards that originally fit in that price range; and by consequence of the pandemic and mining making pretty much all cards cost >$500, we still see a bunch of older Pascal cards taking up the top slots.

    So, back to my original realization. I've made my peace with top end cards costing $2,000. That isn't my budget, but the cards are significantly faster than what I'm chasing anyway. So if people want to chase those, more power to them (and by power, I also mean electricity, those take a lot) - it shouldn't affect me, my budget, or what & when I'm looking at upgrading.

    Just for example, if you bought a 1070 for $350 back in the day, your modern "roughly double" GPU would be a 3060 - even though it's named a tier lower, that represents a roughly doubling of graphics capability. And the MSRP of a 3060, if we were in sane times, would be about $350 -- which lines up. But prices aren't exactly sane -- yet. Street price on a 3060 as of today is still around $500, but is trending downward. 

    It could be that prices never adjust this generation. If they don't, then hopefully by the next generation we see it and get back on track. Barring another pandemic or another mining boom. And then so long as the same cadence is able to continue - roughly double the performance every 2-3 generations for a given cost, and we see the restoration of the lower price brackets - I think I'm ok.
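
    If you want to turn that rule of thumb into something concrete, here's a tiny sketch in Python. The performance numbers and prices are made-up placeholders, not benchmarks:

    ```python
    # A tiny sketch of the "upgrade when you can roughly double within budget" rule.
    # Performance numbers and prices are made-up placeholders, not benchmarks.
    def worth_upgrading(current_perf, candidate_perf, candidate_price, budget,
                        target_gain=2.0):
        return (candidate_perf >= target_gain * current_perf
                and candidate_price <= budget)

    # Hypothetical example: a card with ~2.1x your current performance at $350,
    # against a $400 budget -> upgrade; at a $600 street price -> wait a generation.
    print(worth_upgrading(1.0, 2.1, 350, budget=400))  # True
    print(worth_upgrading(1.0, 2.1, 600, budget=400))  # False
    ```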
  • Quizzical Member Legendary Posts: 25,351
    Ridelynn said:
    I came to a realization not terribly long ago. At the time I was a bit bent about the top end card price going through the roof - well, all card prices really. But what used to be top-end was around $600-700, and now it's over $2,000.

    But...

    The name on the card doesn't mean much. "Tiers" as we all became used to have completely been blown out of the water.

    As Quiz likes to say, you should look to roughly double your performance when you look to upgrade. I think that's a decent starting point - some people feel more comfortable chasing cutting edge and will upgrade a bit sooner, others are comfortable for a bit longer -- it's just a rough rule of thumb.

    I'd also state - that your upgrade should fit inside your budget. If doubling your performance (or whatever your metric is) isn't possible inside your budget; wait another generation. For a given price, the performance should increase, and as generations move forward, the price for that level of performance should come down. 

    That's the real metric -- that price / performance ratio. No matter what name they put on the card, or what price they release it for. It takes a bit of teasing to get that data out -- the naming convention was nice, but nVidia blew that out with the 2000 (Ampere) generation. 

    During the sane Good Old Days, you could roughly double your performance for the same cost every 2-3 generations. Buy a card, skip the next generation or two, and then you could find your huckleberry, for roughly the same price as you paid before.

    My concern is that the mid and lower ends (basically anything <$500) were obliterated with the pandemic. We need to see a return of cards in that $150-$300 budget range. If you look at the Valve Hardware Survey, the top 10 is dominated by cards that originally fit in that price range; and by consequence of the pandemic and mining making pretty much all cards cost >$500, we still see a bunch of older Pascal cards taking up the top slots.

    So, back to my original realization. I've made my peace with top end cards costing $2,000. That isn't my budget, but the cards are signifcantly faster than what I'm chasing anyway. So if people want to chase those, more power to them (and by power, I also mean electricity, those take a lot) = it shouldn't affect me, my budget, or what & when I'm looking at upgrading.

    Just for example, if you bought a 1070 for $350 back in the day, your modern "roughly double" GPU would be a 3060 - even though it's named a tier lower, that represents a roughly doubling of graphics capability. And the MSRP of a 3060, if we were in sane times, would be about $350 -- which lines up. But prices aren't exactly sane -- yet. Street price on a 3060 as of today is still around $500, but is trending downward. 

    It could be prices never adjust, this generation. If it doesn't, then hopefully by the next generation we see it and get back on track. Barring another pandemic or another mining boom. And then so long as the same cadence is able to continue - roughly double the performance every 2-3 generations for a given cost, and we see the restoration of the lower price brackets - I think I'm ok.
    There are several reasons for the rising prices.  One is inflation:  if the price of everything else goes up, that includes video cards.  Even once inflation goes away, prices remain higher than they were.  For example, if you have four years of 19% inflation, that will cause prices to about double.  If inflation goes away, prices stop going up, but they don't go back down.  For most of the last 40 years, the United States hasn't had much inflation, but that changed around the start of 2021.
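
    As a quick compounding check on that example:

    ```python
    # Compounding check: four years at 19% inflation roughly doubles prices.
    rate, years = 0.19, 4
    print((1 + rate) ** years)  # ~2.01, so a $700 card's "same" price becomes ~$1,400
    ```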

    Another is a silicon shortage caused by miners buying everything.  That seems to be easing up, and so the temporarily elevated prices caused by mining locusts are receding.

    But perhaps the most important factor is that if you're trying to predict prices, it's a mistake to think of them in terms of model numbers rather than silicon sizes.  For example, here's a list of video cards with die sizes of around 300 mm^2:

    Radeon HD 5870
    GeForce GTX 460
    Radeon HD 6870
    GeForce GTX 560 Ti
    Radeon HD 7970
    GeForce GTX 680
    GeForce GTX 1080
    Radeon RX 480
    Radeon RX 590
    GeForce GTX 1660
    Radeon VII
    GeForce RTX 3050
    Radeon RX 6700 XT

    That includes a lot of AMD flagship parts, and even a couple of Nvidia flagship parts.  But the nearest modern equivalents are not the flagship parts from either vendor, but the Radeon RX 6700 XT and the GeForce RTX 3050.  If you think of it in terms of silicon sizes, you'd expect the GTX 1660 or RTX 3050 to cost about what the GTX 1080 or GTX 680 did, or any number of AMD flagship parts.  And of course the higher end RTX 2000 and 3000 series parts are going to cost a lot more than that.

    Here's a list of video cards with die sizes around 500 mm^2:

    GeForce GTX 480
    GeForce GTX 580
    GeForce GTX 780 Ti
    GeForce GTX 1080 Ti
    Radeon RX Vega 64
    GeForce RTX 2080
    Radeon RX 6900 XT

    That's a lot of Nvidia flagship parts from when they built huge dies, and not a lot of AMD until recently.  But in the Turing generation, the much larger RTX 2080 Ti was the flagship, not the mere 2080.  In the Ampere generation, nothing is particularly close to 500 mm^2, but the closest part is the RTX 3070.  Prices today on the RTX 3070 are in line with the launch prices of the rest of the parts on that list.

    All else equal, if you want to buy 700 mm^2 of silicon on a cutting edge process node, it's going to cost more than if you want to buy 300 mm^2.  What used to be the low end tier is now gone because it wouldn't be any better than an integrated GPU.  What used to be the high end tier is not the top anymore, as vendors have figured out how to pack more silicon onto a card for a higher tier that didn't used to exist.  So of course that new, higher tier is going to cost more than the successors to tiers that have existed in the past.
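
    To see why die size drives cost so directly, here's a rough dies-per-wafer sketch using the classic approximation.  It ignores defect yield and scribe lines, and the wafer price is a placeholder assumption, not a real foundry quote:

    ```python
    import math

    # Rough dies-per-wafer approximation (ignores defect yield and scribe lines),
    # just to show why a 700 mm^2 die costs far more per chip than a 300 mm^2 one.
    # The wafer price is a placeholder assumption, not a real foundry quote.
    def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
        r = wafer_diameter_mm / 2
        return int(math.pi * r ** 2 / die_area_mm2
                   - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

    assumed_wafer_cost = 12000  # illustrative cost for a leading-edge wafer
    for area in (300, 500, 700):
        n = dies_per_wafer(area)
        print(f"{area} mm^2: ~{n} dies per wafer, ~${assumed_wafer_cost / n:.0f} of silicon each")
    ```

    Fold in yield (bigger dies are hit harder by defects) and the gap between a 300 mm^2 part and a 700 mm^2 part gets even wider.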
  • Quizzical Member Legendary Posts: 25,351
    Intel is now making a lot of noise and trying to get attention for their upcoming A770 and A750 GPUs.  And also the A580, but that's likely to be a low volume part, as it's severely cut down.  That could mean a real launch coming soon.  Or it might not.  You never can tell with marketing.