Microsoft's long-awaited Xbox One reveal finally came yesterday. The hardware specs had long been rumored to be underwhelming compared to the PlayStation 4's, and Microsoft didn't even make a token effort at dispelling those rumors. Add in that Microsoft may not even have a cost of production advantage over Sony, and you have the makings of a disaster.
So how did Microsoft marketing handle this in the reveal? Basically, they announced a game console but didn't want to talk much about gaming. Instead, the focus was on all the other things the Xbox One can do. Oh, and yeah, it can play games, too. We think.
It actually reminded me a bit of the launch of Intel HD Graphics, where Intel marketing said, we're focused like a laser on a bunch of things that aren't gaming. Now, that's too harsh on Microsoft; considered in isolation (or compared to a Wii U), the Xbox One will be a capable gaming machine, unlike the dismal failure known as Intel HD Graphics. But if your gaming console were arguably better at games than the competition's, you'd shout that from the rooftops. That Microsoft didn't even make a token effort here is revealing.
Let's look at hardware. Both the PlayStation 4 and the Xbox One have 8 AMD Jaguar cores, likely clocked at or a little below 2 GHz. So they're basically identical there.
Both use AMD GCN graphics, but the PlayStation 4 has 18 GCN CUs, while the Xbox One only has 12. Both are clocked at 800 MHz, so the PS4 has a 50% GPU advantage. Advantage Sony, right?
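For a rough sense of what that 50% means in raw throughput, here's a back-of-the-envelope calculation, assuming the standard GCN figures of 64 shader lanes per CU and two floating-point ops per lane per clock (a fused multiply-add):

```python
# Theoretical peak single-precision throughput of a GCN GPU,
# assuming 64 shader lanes per CU and 2 FLOPs per lane per clock (FMA).
LANES_PER_CU = 64
FLOPS_PER_LANE_PER_CLOCK = 2

def peak_gflops(cus, clock_mhz):
    """Peak single-precision GFLOPS for a GCN GPU."""
    return cus * LANES_PER_CU * FLOPS_PER_LANE_PER_CLOCK * clock_mhz / 1000

print(f"PS4:      {peak_gflops(18, 800):,.0f} GFLOPS")  # ~1,843 GFLOPS
print(f"Xbox One: {peak_gflops(12, 800):,.0f} GFLOPS")  # ~1,229 GFLOPS
```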
Not so fast. A Radeon HD 7870 is a lot faster than a Radeon HD 7770, and a GeForce GTX 660 is a lot faster than a GeForce GTX 650 Ti, but the latter card in each comparison still has an important role because it's cheaper. All Microsoft has to do is use its cheaper hardware as an opportunity to price its console below Sony's, right?
That's plausible until you look at memory. Both the PlayStation 4 and the Xbox One have a 256-bit memory bus. But the PlayStation 4 uses GDDR5 memory, while the Xbox One uses DDR3. That means that the PS4 has vastly more memory bandwidth. But we already knew that the PS4 was faster, so this also makes the Xbox One cheaper, right?
Well, no. While DDR3 does tend to be cheaper than GDDR5, the Xbox One is strongly rumored to be using 2133 MHz DDR3 memory--that is, the very top bin of DDR3 that memory manufacturers sell. Will that still be cheaper than GDDR5? Maybe, but if so, likely not by all that much.
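To put numbers on the bandwidth gap: peak bandwidth is just the transfer rate times the bus width. A quick sketch, assuming the widely reported 5.5 Gbps GDDR5 for the PS4:

```python
# Peak bandwidth on a 256-bit bus: transfer rate (MT/s) times bus
# width in bytes. The 5.5 Gbps GDDR5 figure for the PS4 is the
# widely reported spec, not something detailed at either reveal.
BUS_BYTES = 256 // 8  # 256-bit bus = 32 bytes per transfer

def bandwidth_gbs(transfer_mts):
    """Peak memory bandwidth in GB/s."""
    return transfer_mts * BUS_BYTES / 1000

print(f"Xbox One, DDR3-2133:  {bandwidth_gbs(2133):.1f} GB/s")  # ~68.3 GB/s
print(f"PS4, 5.5 Gbps GDDR5:  {bandwidth_gbs(5500):.1f} GB/s")  # 176.0 GB/s
```

That's well over twice the bandwidth for the PS4.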
Now, it's pretty obvious where Microsoft is going with this. While 2133 MHz will be top bin DDR3, it will be bottom bin DDR4. A year or two after launch, Microsoft probably does a die shrink to 20 nm and shifts the memory from DDR3 to DDR4. Will bottom bin DDR4 be cheaper than high bin GDDR5? You bet it will. Eventually.
But there's still the problem that traditionally, any card using DDR3 memory really shouldn't be considered a gaming card. You could at various times make a case for a DDR3 version of a Radeon HD 4670, 5570, 5670, or 6670 as a budget card, but you don't want to pay $400 for a gaming console with all of the GPU performance of a $60 budget card.
While there is a big difference between dual channel DDR3 at 1600 MHz (like the budget video cards use) and quad channel DDR3 at 2133 MHz (as the Xbox One is rumored to use), the Xbox One would still very much be crippled by memory bandwidth. So Microsoft tried to fix this by adding 32 MB of ESRAM to the die, with something like 100 GB/s of bandwidth to the GPU. That gets it total memory bandwidth in the same ballpark as the PS4. Yay, Xbox One?
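Adding up the rumored numbers shows what "same ballpark" means here:

```python
# Combined peak bandwidth of the Xbox One's two memory pools versus
# the PS4's single unified pool. The ~100 GB/s ESRAM figure is per
# the rumors; none of these numbers are officially confirmed.
ddr3_gbs = 2133 * 32 / 1000  # 256-bit DDR3-2133  -> ~68.3 GB/s
esram_gbs = 100              # rumored ESRAM bandwidth to the GPU
ps4_gbs = 5500 * 32 / 1000   # 256-bit 5.5 Gbps GDDR5 -> 176 GB/s

print(f"Xbox One combined: {ddr3_gbs + esram_gbs:.0f} GB/s")  # ~168 GB/s
print(f"PS4 unified:       {ps4_gbs:.0f} GB/s")               # 176 GB/s
```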
Well, no. With the PS4, you can use the memory bandwidth any way you want; it's there and it works. With the Xbox One, if you want to take advantage of the bandwidth, you have to draw the majority of it from a small 32 MB pool rather than the main 8 GB. Anything the Xbox One can do here, the PlayStation 4 can mimic pretty well, but the converse is wildly false.
To be fair to Microsoft, using a large fraction of your memory bandwidth from a small 32 MB pool actually is realistic for a lot of games. Every single time you run a pixel/fragment shader with the depth test enabled--which typically means, most of the time that you run a shader, period--you have to read from the depth buffer. If you pass the depth test, you write to it as well and also write to the frame buffer. Meanwhile, post-processing effects involve reading very heavily from a frame buffer-like object used as an intermediate step. At 1080p, the depth buffer is just shy of 8 MB, as is a frame buffer. With one depth buffer, a front frame buffer, a back frame buffer, and an extra frame buffer available for use as an internal step in a multi-pass rendering algorithm, you total just a shade under 32 MB needed for the very heavily accessed data. The 32 MB of ESRAM is not a coincidence.
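The arithmetic behind that "not a coincidence" is simple, assuming 4 bytes per pixel for both the depth/stencil buffer and an RGBA8 color buffer:

```python
# Render-target footprint at 1080p, assuming 4 bytes per pixel for
# both the depth/stencil buffer and an RGBA8 color buffer.
def buffer_mib(width, height, bytes_per_pixel=4):
    """Size of one render target in MiB."""
    return width * height * bytes_per_pixel / 2**20

one = buffer_mib(1920, 1080)
print(f"One 1080p buffer:     {one:.2f} MiB")      # ~7.91 MiB
print(f"Depth + three color:  {4 * one:.2f} MiB")  # ~31.64 MiB, just under 32
```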
But what happens if you need larger buffers? For example, if you want a monitor resolution higher than 1080p? Or stereoscopic 3D? Or MSAA, SSAA, or any other form of anti-aliasing that computes multiple samples for each pixel of the final image and then averages them? (FXAA, MLAA, and other forms of post-processing anti-aliasing are fine here.) Well then, suddenly you need a lot more than 32 MB for your very heavily accessed buffers--meaning that the fraction of your memory reads served from ESRAM drops precipitously and you suddenly have a massive memory bandwidth bottleneck that the PS4 won't have.
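Run the same arithmetic with bigger buffers and the 32 MB pool overflows immediately. For example, again assuming 4 bytes per pixel, with a 4x MSAA target storing four samples per pixel:

```python
# The same footprint arithmetic once buffers grow past 1080p or
# gain multiple samples per pixel.
def buffer_mib(width, height, bytes_per_pixel=4, samples=1):
    """Size of one render target in MiB."""
    return width * height * bytes_per_pixel * samples / 2**20

# 1440p, depth plus three color buffers:
print(f"2560x1440, 4 buffers:      {4 * buffer_mib(2560, 1440):.1f} MiB")  # ~56.3 MiB
# 1080p with 4x MSAA, just the multisampled depth and color targets:
print(f"1080p 4x MSAA, 2 targets:  {2 * buffer_mib(1920, 1080, samples=4):.1f} MiB")  # ~63.3 MiB
```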
But what about the cost of production? The first problem here is that on-die ESRAM greatly bloats the die size--and hence the cost of production. Microsoft is claiming that the die has 5 billion transistors. For comparison, that's substantially more than the 4.3 billion of a Radeon HD 7970 or the 3.5 billion of a GeForce GTX 680. Some have estimated that the ESRAM accounts for around a third of the transistors in the Xbox One's SoC. With cost of production scaling with die size, the main chip in the Xbox One might actually cost more than the PlayStation 4's.
That would leave Microsoft with a much slower console but without a big price advantage. So why did they go that route? Well, it is lower power, for one. Even if it's not cheaper today, it could still become cheaper eventually, as ESRAM will probably scale very well with future die shrinks and DDR4 will eventually be cheaper.
But there's also the possibility that Microsoft was counting on having a memory capacity advantage. Early rumors put the PlayStation 4 at 4 GB of total memory, not the 8 GB that Sony announced. One channel of GDDR5 memory can only have four memory chips attached. With 2 Gb (256 MB) as the largest size of GDDR5 memory chips available, that caps you at 1 GB per channel. Having four memory channels means that Sony is capped at 4 GB of total memory.
Unless, of course, someone comes out with 4 Gb (512 MB) GDDR5 memory chips. Not coincidentally, Hynix has promised to do exactly that later this year, just in time to be used in the PlayStation 4. Samsung and Micron will presumably follow suit at about the same time, though I haven't seen any announcement from them yet.
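The capacity arithmetic behind the early rumors is easy to reproduce:

```python
# Total GDDR5 capacity under the constraint described above: four
# channels, at most four chips per channel.
CHANNELS = 4
CHIPS_PER_CHANNEL = 4

def total_gb(chip_density_gbit):
    """Total memory in GB for a given GDDR5 chip density in gigabits."""
    return CHANNELS * CHIPS_PER_CHANNEL * chip_density_gbit / 8

print(f"2 Gb chips: {total_gb(2):.0f} GB")  # 4 GB -- the early rumor
print(f"4 Gb chips: {total_gb(4):.0f} GB")  # 8 GB -- what Sony announced
```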
Also, don't count on this ending up like the Xbox 360 versus the PlayStation 3, where the latter was theoretically faster in peak performance but much harder to exploit. If anything, it will probably be easier to exploit the full capabilities of the PlayStation 4 than the Xbox One, not harder, because there's no need to fuss with ESRAM capacity.