There are really several intersecting thoughts here. I don't think I can structure this post without burying the lede somehow, so let's get the main thoughts up front.
1. Whatever happened to PowerTune?
2. Why does the reference RX 480 only have a single 6-pin PCI-E connector?
3. What if a driver update reduces clock speeds?
4. It's a good thing that third-party board partners are around.
Returning to the title, I'd include the GeForce GTX 1080, GeForce GTX 1070, and Radeon RX 480 in that. For the GeForce cards, it's really just a question of price. Do you really want to pay $700 (or $800 or $900, depending on how much you're gouged) for a product that you know will soon be $600?
But with the RX 480, it's something much worse. At least Nvidia was up front about pricing, if not timing. And the delays were incandescently obvious to those who understood the tech. But I've complained enough about Pascal, so I want to spend most of this post going after Polaris.
Some reviews noticed the Radeon RX 480 pulling 160 W. Now, there's nothing wrong with a desktop video card burning 160 W. But there's something very wrong with a PCI Express card with only a single 6-pin PCI-E power connector burning 160 W.
The PCI Express slot is rated to deliver 75 W, a 6-pin connector another 75 W, and an 8-pin connector 150 W. If all you've got is the slot and a single 6-pin, that's 75 + 75 = 150 W. Pulling 160 W through that means something is running out of spec.
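To make that arithmetic concrete, here's a minimal sketch. The per-source wattages are the spec figures just mentioned; the function names and card configurations are my own illustration, not anything from a real driver or tool.

```python
# Rated power delivery per source, in watts, per the PCI Express spec
# figures discussed above.
SOURCE_LIMITS_W = {
    "pcie_slot": 75,   # PCI Express x16 slot
    "6pin": 75,        # 6-pin PCI-E power connector
    "8pin": 150,       # 8-pin PCI-E power connector
}

def power_budget(sources):
    """Total rated power delivery for a card's power sources, in watts."""
    return sum(SOURCE_LIMITS_W[s] for s in sources)

def in_spec(sources, draw_w):
    """True if a sustained draw fits inside the rated budget."""
    return draw_w <= power_budget(sources)

# Reference RX 480: slot + one 6-pin is a 150 W budget, but it draws 160 W.
print(power_budget(["pcie_slot", "6pin"]))   # 150
print(in_spec(["pcie_slot", "6pin"], 160))   # False
```

The same function shows why the GTX 590 discussed below was rated for 375 W: slot plus two 8-pin connectors is 75 + 150 + 150.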
Now, the Radeon RX 480 isn't the first card to do this. The GeForce GTX 470 had only two 6-pin PCI-E power connectors (a 225 W budget counting the slot), and it routinely pulled more than 225 W.
No, I didn't just say that Polaris is as bad as Fermi. But I suspect that the reasons are the same: someone decided late in the game that the stock clock speed needed to increase. Rather than going back to the drawing board to beef up power delivery and make a card that could handle it, they just took the cards they had and clocked them higher.
And the problem is completely fixable simply by adding more power delivery circuitry. Give the RX 480 a second 6-pin connector and suitable corresponding VRMs and such on the board and you're set. It's not at all like the GeForce GTX 480 burning 300 W inside a radiator-like cooler that dared you to try frying an egg on it.
Back in the bad old days, video cards had fixed clock speeds that didn't adjust well for the particular workload, beyond clocking down at idle. Power viruses (e.g., FurMark, OCCT, or the StarCraft 2 title screen) that pushed a card harder than the company expected could fry things. But if you throttle clock speeds way back to handle the power viruses, you give up a bunch of gaming performance and people don't buy your cards.
Fortunately, AMD solved this in 2010 with PowerTune. It tracks power consumption in real time and throttles clock speed back by just enough to stay inside the desired power envelope. You don't get performance that obviously tanks, as with the severe throttling from overheating. But you also don't need to know ahead of time everything that can push a card too hard. AMD demonstrably had it working on the Radeon HD 6970 at launch.
If you set the PowerTune cap to 150 W, it shouldn't be possible for the card to pull 160 W for thermally significant periods of time. Did AMD drop PowerTune entirely? Is it malfunctioning? Did they increase the PowerTune cap to 160 or 170 W to try to score better reviews? Isn't it remarkable how these accidents tend to increase performance?
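The behavior PowerTune promises is essentially a feedback loop: estimate power each tick, nudge the clock down when you're over the cap, and let it recover when you're under. Here's a toy sketch of that idea. The power model, coefficient, clock figures, and step size are all invented for illustration; real hardware estimates power from on-die activity counters, not a formula like this.

```python
CAP_W = 150.0             # the power envelope we want to stay inside
BASE_CLOCK_MHZ = 1266.0   # RX 480 boost clock, for illustration
MIN_CLOCK_MHZ = 300.0
STEP_MHZ = 10.0

def estimated_power_w(clock_mhz, workload_intensity):
    """Crude model: power roughly proportional to clock times load."""
    return workload_intensity * clock_mhz * 0.14

def next_clock(clock_mhz, workload_intensity):
    """One control step: throttle if over the cap, recover if under."""
    if estimated_power_w(clock_mhz, workload_intensity) > CAP_W:
        return max(MIN_CLOCK_MHZ, clock_mhz - STEP_MHZ)
    return min(BASE_CLOCK_MHZ, clock_mhz + STEP_MHZ)

# A power virus (sustained full intensity) converges to whatever clock
# fits inside the 150 W cap, instead of frying anything.
clock = BASE_CLOCK_MHZ
for _ in range(200):
    clock = next_clock(clock, workload_intensity=1.0)
print(round(clock))  # oscillates in the 1066-1076 MHz band, just under the cap
```

The point of the sketch is the loop's guarantee: with the cap set to 150 W, sustained draw above 150 W shouldn't be possible no matter what the workload is, which is exactly why the 160 W readings are so strange.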
And now reports are coming in that the out-of-spec power draw from the RX 480 is damaging motherboards. It's probably only a tiny handful, and probably further restricted to cheap junk motherboards, likely backed by a mediocre or worse power supply, and possibly egged on by power weirdness coming from the wall. High quality components can handle running a little out of spec.
But you shouldn't rely on that. Even if you're going to overclock, you shouldn't run anything other than the component you're overclocking out of spec. If you want to overclock a CPU to the moon and have it burn 200 W, you should get a motherboard, power supply, case, and cooler that can handle a CPU putting out 200 W so that everything but the CPU itself is running in spec.
Running things out of spec unnecessarily is bad. Doing it intentionally on commercial hardware without telling anyone is worse.
Remember the GeForce GTX 590? It was a "365 W" card (already outside the PCI Express specification's 300 W ceiling, though not a real problem in a desktop built around it) with two 8-pin PCI-E power connectors. Counting the slot, that's a rated power delivery of 375 W. That's quite a lot, but it didn't help that the 365 W TDP was a total lie and the card could easily blow well past 400 W. Some of them didn't survive the review process.
No, I didn't just say that Polaris is as bad as Fermi. But it isn't a good sign that that's the comparison I have to reach for.
Now, handling 400 W in a two-slot cooler is just plain hard. AMD finally got it right with the Radeon R9 295X2, which put both GPUs under liquid cooling. But there's no excuse for not being able to handle 160 W.
So this is fixable with cards from board partners. And we should be thankful that AMD and Nvidia let partners such as MSI, Asus, Sapphire, and EVGA design and build cards. AMD and Nvidia don't always seem competent at it, and the reference cards mentioned above are far from the only clunkers in their histories. Remember the GeForce FX 5800 "dustbuster"?
But you know how else the running out of spec problem is fixable? Throttling back clock speeds more aggressively so that the card doesn't burn more than 150 W. That can be done with a driver update, and don't be surprised if AMD does exactly that.
The problem with cutting back clock speeds is that you lose performance. To stay inside of 150 W, maybe you lose 3% of your performance in this game and 5% in that one. And people notice lower numbers on bar graphs. If the performance losses don't come until after reviews are safely up and no one bothers to update them later, then they don't count, right? After all, the only people who suffer from that are your customers. They probably won't notice if they lose 3% of their performance, but they'll sure notice hardware failures.
So companies play various shenanigans to try to win reviews. Clock higher when you detect a canned benchmark running, and lower when you detect a power-usage benchmark running. Both Nvidia and Intel have on various occasions said "look how fast it is" and "look how low power it is" about a part, trying to imply you could have both at once even though it wasn't even close to true. Make short-lived, small-volume parts like the Radeon X800 XT PE. And launch just such a part as the claimed competitor to a competitor's real, volume part. Remember the EVGA GeForce GTX 460 FTW?
No, I didn't just say that Polaris is as bad as Fermi. But this is the third time I've had to assert that, and in comparison to four different cards from that architecture. The problems with the Radeon RX 480 are fixable by beefing up the power delivery, even without changing the clock speed. Third-party cards will do exactly that, and if history is any guide, probably at MSRP. Even if it adds $5 to the bill of materials, that shouldn't add $50 to the retail price tag. So I say, if you want a Radeon RX 480, you should wait for that. It probably won't be long.