
Nvidia announces new CMP HX series of mining cards

Quizzical Member Legendary Posts: 23,226
https://www.nvidia.com/en-us/cmp/

Aren't you just thrilled to learn that Nvidia is diverting GPU production to dedicated mining cards?

That's probably not really what is happening here.  It's far more likely that these are new bins of GPUs they've already produced but couldn't sell as GeForce cards.  When wafers come back from the fabs and get cut into GPU chips, some of those chips are defective.  Vendors commonly define multiple bins for each chip so that partially defective parts can still be sold.

For example, Nvidia's GA102 chip has 84 compute units and a 384-bit memory controller.  The GeForce RTX 3090 enables 82 of the compute units and the full memory controller.  The GeForce RTX 3080 enables 68 of the compute units and a 320-bit memory controller.  If a whole chip works flawlessly, they can sell it as an RTX 3090.  If several compute units or a memory controller are defective, then they can disable parts of the chip, including all of the defective ones, and still sell it as an RTX 3080.  That's a lot better than throwing the chip in the garbage.
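The binning logic above can be sketched as a toy function.  The unit counts match the GA102 figures just mentioned, but the function, thresholds, and bin names are invented for illustration and are not Nvidia's actual process:

```python
# Hypothetical salvage-binning sketch for a GA102-like chip.
# Real binning also considers clocks, leakage, power, and more.

def assign_bin(working_compute_units, working_memory_bits):
    """Pick the highest product bin a partially defective die can satisfy."""
    # RTX 3090-class bin: needs 82 of 84 compute units and the full 384-bit bus.
    if working_compute_units >= 82 and working_memory_bits >= 384:
        return "RTX 3090 bin"
    # RTX 3080-class bin: needs 68 compute units and a 320-bit bus.
    if working_compute_units >= 68 and working_memory_bits >= 320:
        return "RTX 3080 bin"
    # Otherwise the die can't be sold as either card.
    return "salvage"

print(assign_bin(84, 384))  # flawless die
print(assign_bin(70, 320))  # a few defects: still sellable, one tier down
print(assign_bin(60, 320))  # too many dead compute units
```

The point of the "salvage" branch is exactly the one made above: a die that fails every GeForce bin can still end up in a mining-card bin rather than the garbage.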

But what if the video decode block is defective?  Who wants a video card that will crash if you try to watch a YouTube video?  That can't be sold as a GeForce card at all.  But it would still be useful to miners, as mining doesn't use the video decode block.  That's hardly the only way that a GPU chip can be defective in ways that preclude selling it as a GeForce or Quadro card.  If the memory controllers are fine, but various things that are used exclusively for graphics are defective, it's still a fine chip for mining.  So Nvidia is making mining cards out of those defective GPUs rather than throwing them into the garbage.

So does that mean that the miners will buy these new cards instead of GeForce and Radeon cards?  While that would be nice, the answer is no.  Ethereum currently has a market value of over $220 billion.  The demand for miners to get their cut of that is pretty much insatiable.  They'll buy these new mining cards and also all of the GeForce cards they can get their hands on.  More mining rigs will also mean more competition for the other parts that are needed for mining rigs, including processors, motherboards, memory, and especially power supplies.

The real reason Nvidia is launching the mining cards is to make money.  That is, after all, the same reason that they launch GeForce cards.  But this probably isn't diverting cards from their other lineups to be used as mining-only cards.  This is just another salvage bin and nothing more.

Comments

  • Ridelynn Member Epic Posts: 7,234
    The other bit of news with this was that nVidia would "nerf" the 3060 for Eth mining (but no other cards). The nerf would be accomplished via the driver.

    Which... I am certain will be bypassed in about 12 hours after release (if it takes even that long).

    Yeah... this is nothing but lip service and profit opportunity from nVidia. I can't really blame them for cashing in, but I still wish it weren't at the expense of gamers.
  • Quizzical Member Legendary Posts: 23,226
    I wouldn't be so certain that Nvidia can't cripple Ethereum mining.  They've done it in the past, even without necessarily meaning to.  The GeForce GTX 1080 and 1080 Ti mined Ethereum poorly, for example, which is why those cards were still available to gamers for much of the previous Ethereum mining craze.

    The Ethereum mining algorithm mostly consists of doing a ton of random memory lookups into a space that is a little over 4 GB in size.  Depending on how the translation lookaside buffer is implemented, you can easily design those lookups to nearly always hit in the TLB.  (That is, it's the page-table entry used to compute the physical memory address that gets cached, not the full 4 GB of data.)  You can also easily design it to nearly always miss, so that you have to do two fetches to physical memory instead of one, with the first necessary in order to find the address of the second.  The latter will cut your mining performance by nearly half as compared to the former.

    With a lot of possible mining deoptimizations, you kind of can't do them because it will also cripple the gaming performance that people care about.  But games will pretty much never do random accesses to a buffer that is over 4 GB in size.  That's such a weird thing to do that if a game does do that, the most likely cause is that either it's secretly mining Ethereum in the background or it's a bug causing undefined behavior.  Rather, games tend to do a bunch of memory fetches to the same general area, such as a bunch of accesses within a 4 MB texture, or a bunch of accesses to an 8 MB framebuffer object.  Even if those accesses within a smaller object were purely random (which they aren't), you can easily design it such that accessing the same texture repeatedly has all but the first TLB access cached, while Ethereum mining nearly always gets you TLB cache misses.
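The contrast above can be illustrated with a toy TLB simulation.  The 4 KiB page size, 1024-entry LRU capacity, and access count are invented figures for illustration, not real GPU specs:

```python
# Sketch of why huge random buffers thrash a TLB while small ones don't.
# Page size, TLB capacity, and access counts are invented for illustration.
import random
from collections import OrderedDict

PAGE = 4096          # assumed 4 KiB pages
TLB_ENTRIES = 1024   # assumed TLB capacity (covers 4 MiB at 4 KiB pages)

def tlb_hit_rate(buffer_bytes, accesses=20000, seed=0):
    rng = random.Random(seed)
    tlb = OrderedDict()          # page number -> present, in LRU order
    hits = 0
    for _ in range(accesses):
        page = rng.randrange(buffer_bytes) // PAGE
        if page in tlb:
            hits += 1
            tlb.move_to_end(page)        # refresh LRU position
        else:
            tlb[page] = True
            if len(tlb) > TLB_ENTRIES:
                tlb.popitem(last=False)  # evict least recently used
    return hits / accesses

print(f"4 MiB texture:   {tlb_hit_rate(4 * 2**20):.2f}")  # nearly all hits
print(f"4 GiB DAG space: {tlb_hit_rate(4 * 2**30):.2f}")  # nearly all misses
```

A 4 MiB buffer spans only 1024 pages, so the whole thing fits in this toy TLB and almost every access after warmup hits.  Random accesses over 4 GiB touch about a million distinct pages, so almost every access misses, which is the "two fetches instead of one" penalty described above.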
  • Quizzical Member Legendary Posts: 23,226
    remsleep said:
    I would gladly pay more to have a hardware chip on the card that detects mining algorithm and cripples mining performance.

    That way it would be a lot harder to bypass.
    The problem is how to detect the mining algorithm.  You can detect some exact source code, perhaps, but it's easy enough to write slightly different code that does the same thing.
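A toy sketch of why exact-code matching is brittle.  The mixing functions below are stand-ins, not the real Ethash inner loop, though the 0x01000193 constant is the FNV prime that Ethash does use:

```python
# Sketch: signature-matching on code catches one form of the math but not
# an equivalent rewrite.  Toy stand-ins, not the real Ethash inner loop.

def mix_v1(a, b):
    return ((a * 0x01000193) ^ b) & 0xFFFFFFFF

def mix_v2(x, y):
    # Same math as mix_v1, written differently: the multiply by the FNV
    # prime (0x01000193 = 2**24 + 2**8 + 0x93) becomes shifts and adds.
    prod = (x << 24) + (x << 8) + (x * 0x93)
    return (prod ^ y) & 0xFFFFFFFF

def looks_like_miner(fn):
    # Naive "signature" check: does the compiled code contain the FNV prime?
    return 0x01000193 in fn.__code__.co_consts

print(looks_like_miner(mix_v1))                   # True: constant present
print(looks_like_miner(mix_v2))                   # False: same behavior, no match
print(mix_v1(12345, 678) == mix_v2(12345, 678))   # True: identical results
```

Both functions compute the same thing for every input, yet only one trips the naive signature, which is the bypass problem being described.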
  • Ridelynn Member Epic Posts: 7,234
    I doubt that nVidia is going to completely redesign the memory controller:
    a) for a lower-tier card, or
    b) some months after the design had already been released to the fab.

    Apparently there's already a workaround on Reddit.
  • Quizzical Member Legendary Posts: 23,226
    Ridelynn said:
    I doubt that nVidia is going to completely redesign the memory controller:
    a) for a lower-tier card, or
    b) some months after the design had already been released to the fab.

    Apparently there's already a workaround on Reddit.
    All that it would take is changing the global memory page size.  Whether they can change it depends on whether it's hard-wired into silicon or whether there's any configuration available in the BIOS or drivers.
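Some back-of-the-envelope arithmetic on why page size matters here.  The TLB capacity is an assumed illustrative figure, not a real GPU spec:

```python
# How page size changes TLB pressure for a ~4 GiB mining dataset.
# TLB capacity is an assumed illustrative figure, not a real GPU spec.
DAG_BYTES = 4 * 2**30    # Ethereum's DAG was a little over 4 GB at the time
TLB_ENTRIES = 1024       # assumed TLB capacity

for page_bytes in (4 * 2**10, 2 * 2**20, 4 * 2**20):
    pages = DAG_BYTES // page_bytes
    covered = pages <= TLB_ENTRIES
    print(f"{page_bytes // 1024:>5} KiB pages -> {pages:>8} pages; "
          f"whole DAG fits in TLB: {covered}")
```

With small pages the DAG spans about a million pages and random lookups nearly always miss the TLB; with large enough pages the whole DAG's translations fit and nearly always hit.  That's why changing the global memory page size, if it's configurable at all, could swing mining performance so sharply.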