Let's talk about Intel first. Not that much happened on the Intel side of desktop CPUs in 2014. Haswell-E did launch, offering 6-core processors for $400 as compared to $560 for Ivy Bridge-E, and bringing the first Core i7-branded 8-core processors. But that only matters at the very high end, and with Haswell-E otherwise no better than Ivy Bridge-E, it's not a big deal.
Even less happened for the more consumer-oriented LGA 1150 line. All we got there was some new bins of old CPUs. The Core i7-4790K is perhaps a nifty product, as Intel basically cherry-picked dies and factory overclocked them, much as AMD had done with the FX-9590. What was impressive was that Intel could do this inside of an 88 W TDP, as opposed to 220 W for the FX-9590.
What's also notable is what didn't happen. Broadwell barely exists in laptops, and even then only at stupid prices, and doesn't exist for desktops at all. Broadwell may or may not ever come to desktops, and even if it does, it will likely be a stupid product that doesn't matter.
It's really a case of manufacturing problems. Sometimes the laws of physics say you shouldn't do such and such, Intel says "we can do it because we're Intel", and the laws of physics win. This, for example, was the fate of the 10 GHz Pentium 4. Doing 14 nm before EUV was ready already took triple patterning and other black magic that other foundries shied away from, and Intel decided to demonstrate why avoiding it was such a good idea. But Intel has talked some about their "second generation 14 nm" process node, which probably means using EUV to enable better yields and/or cheaper production--if it works, which pretty much the entire industry seems to be assuming it will.
It wouldn't be surprising if Broadwell is a short-lived product. Once the second generation 14 nm process node is ready and Intel can launch Skylake, there's little point in continuing with Broadwell. I'm not sure whether that will happen in 2015; delays in one generation often lead to delays in the next.
But that hardly means that this was a lost year for Intel. Haswell was the best desktop CPU architecture on the market at the start of the year, and still is at the end of the year. Even if it's not that much better than Ivy Bridge. Or Sandy Bridge.
And then there is AMD. AMD's only new desktop CPU product was Kaveri, which is really a laptop CPU product that AMD will sell you in a desktop if you really want it. Well, there was Socket AM1, but I'm not willing to call that a desktop product. Price tags made it clear that AMD didn't actually want you to buy Kaveri. AMD's Piledriver-based FX-series CPUs were still a good budget alternative to Intel if you're not willing to pay for a proper Core i5-4690 or equivalent system. But the FX-6300, -8320, and so forth launched way back in 2012.
Oddly, not that many Kaveri laptops showed up, either. I find that odd because there were a lot of Trinity and Richland laptops. Maybe AMD is charging too much for Kaveri, or maybe laptop vendors are waiting for Carrizo.
Speaking of which, AMD will offer Carrizo, the successor to Kaveri, in the first half of 2015. It will offer Excavator cores and a new GPU architecture. On the CPU side, it will probably be a little better than Kaveri, but not a lot--and still nowhere near as good as Haswell. If AMD can clean up the memory bandwidth bottleneck that Kaveri's GPU has (DDR4? HBM?), Carrizo could be a nifty product for people wanting a budget system. Unless AMD decides to charge enough for it that you could get an FX-6300 plus a discrete card that is just as fast, in which case, it would still be a nifty product in laptops, but just not desktops. Kind of like Kaveri.
The last time AMD had a better CPU architecture than Intel was 2006, just before the Core 2 Duo launched. The last time AMD almost caught up was 2009, when AMD's Phenom II was able to match or beat Intel's Core 2 Quad--only a few months after Intel launched the Bloomfield Core i7 CPUs that handily beat both. AMD's next real chance to cut into Intel's lead will be their Zen architecture, which will not be yet another Bulldozer derivative. But that's not coming until 2016, and may or may not be good.
One other thing that may or may not end up being interesting is that AMD said they have an agreement to make another big semi-custom part, set to arrive in 2016. For comparison, AMD's big semi-custom parts right now power the PS4 and Xbox One. AMD won't say what the new part is; they only disclosed it at all because they expect a bunch of revenue from it. That's exactly why whether it's interesting remains an open question.
Nvidia launched their Maxwell architecture in 2014, and at least at the high end, it's clearly a better GPU architecture than anything that preceded it. The GeForce GTX 980 is the fastest single-GPU card on the market, in spite of drawing considerably less power than the GeForce GTX 780 Ti or the Radeon R9 290X. The GTX 980 and GTX 970 are already out at the high end. The GTX 750 and GTX 750 Ti launched early this year, albeit at unattractive prices.
There's a huge gap in the middle, which Nvidia will presumably fill with an intermediate GPU, likely to be branded a GeForce GTX 960. There have been rumors of this for quite some time, but it's such an obvious thing to guess that it could easily be random people making stuff up. And then Nvidia will likely rebrand the GTX 750 and GTX 750 Ti as a GTX 950 and GTX 950 Ti or some such. Or GTX 940. Or GT 940. Or GTSX+ Titan Jr or whatever nonsensical name Nvidia marketing is fond of that day.
Nvidia moving to a new process node does not appear to be imminent, however. It's unlikely that they would have just launched the GTX 980 and GTX 970 on an old process node if they were about to move to 20 nm or 16 nm or some such. Most likely, they'll have products on TSMC's 16 nm FinFET node in 2016.
On the AMD side, rather less happened. We did get Tonga, the Radeon R9 285, which is basically a lower power version of the Radeon HD 7950. We also got price cuts on the Radeon R9 290 and 290X. There was the Radeon R9 295 X2, which is basically two Radeon R9 290Xs on one card with liquid cooling. And then some other minor price cuts and that's about it.
But it's likely that AMD will make up for that next year. There are rumors about AMD's "Pirate Islands" GPUs code named "Fiji" and "Bermuda" launching early to mid next year. There are supposed performance leaks, but those could easily be random people making stuff up. AMD is about due for a new architecture rather than still more slight variants on the GCN architecture that is about three years old.
Also interesting is that Asetek announced that they had a deal to sell liquid coolers to some undisclosed GPU vendor and expected $2-4 million in revenue from it starting in the first half of 2015. Asetek already made the cooler on the Radeon R9 295 X2, so people immediately assumed that this means that AMD's next top end card will have an Asetek liquid cooler on it. I'd regard that assumption as being likely.
It's plausible that AMD could move to a new process node for the new video cards. There may or may not be a GPU-appropriate 20 nm process node ready. But even without that, it's also possible that they could move to Global Foundries' 28 nm node. They already got considerable gains out of moving from Kabini/Temash on TSMC 28 nm to Beema/Mullins on GF 28 nm. And Beema/Mullins itself means that they're already commercially selling GCN architecture chips built on GF's 28 nm process node, so doing the same with discrete GPUs is certainly possible.
Remember that Global Foundries got its start in life by buying AMD's in-house fabs. AMD is thus familiar with them, in addition to being contractually obligated to buy a bunch of wafers from them for a number of years. Thus, if AMD and Nvidia both expect process nodes at TSMC and GF to be just as good at something, AMD will probably go with GF, even as Nvidia stays with TSMC.
DDR4 memory is, of course, coming in 2012. Or 2013. Or 2014. Or eventually. We finally saw Haswell-E become the first platform to use DDR4 this year. Will DDR4 be used to enable more bandwidth for integrated graphics? Lower power laptops? Yes, eventually. In 2015? Maybe. DDR4 prices need to come down to match DDR3 before everyone prefers DDR4. It's possible that we'll see chips that support both, much as AMD's Phenom II CPUs supported both DDR2 and DDR3.
But for a desktop CPU with a discrete video card, DDR4 basically doesn't matter. So let's move on.
Solid state drives
SandForce was supposed to finally launch a new SSD controller that would support both SATA and PCI Express early this year. It still hasn't happened. But the good news on the controller front is that the bad old controllers are pretty much off the market, so you can get whatever now and it's probably good.
Further SSD advances are mostly driven by NAND flash--or by replacements for it. Samsung offered the first 3D NAND this year, and some of their competitors are expected to do likewise in 2015. If that takes off, it could mean much more capacity in a given die area, and quickly. There's not much left to gain from further die shrinks, but if you stack 32 layers, that can give you roughly 32 times the capacity of a single planar die.
SSDs have been available for under $0.50 per gigabyte for much of the year. At this point, they're cheap enough that I'd argue that you should get an SSD unless you're on a severe budget. You can usually get 240 GB for about $100 and sometimes 120 GB for about $60. If in a few years, that becomes 500 GB for $60 or 1 TB for $100, then I'd argue that everyone should get an SSD, period. We're not there yet, but we're getting closer.
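To make those thresholds concrete, here's a quick price-per-gigabyte calculation using the rough street prices mentioned above (approximate figures, not exact quotes):

```python
# Rough price-per-gigabyte arithmetic for the SSD prices discussed above.
# All prices are approximate street prices, not exact quotes.
def price_per_gb(price_dollars: float, capacity_gb: float) -> float:
    """Return cost per gigabyte in dollars."""
    return price_dollars / capacity_gb

today = price_per_gb(100, 240)     # typical mainstream drive today
budget = price_per_gb(60, 120)     # occasional budget deal
future = price_per_gb(100, 1000)   # the "everyone gets an SSD" threshold

print(f"240 GB for $100: ${today:.2f}/GB")
print(f"120 GB for $60:  ${budget:.2f}/GB")
print(f"1 TB for $100:   ${future:.2f}/GB")
```

That last figure would mean prices falling to roughly a quarter of today's per-gigabyte cost.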
Speaking of replacements for NAND flash, phase change memory, resistive RAM, and some other candidates are still in the works, and may or may not pan out. Especially now that Samsung has demonstrated the viability of 3D NAND, I'd expect NAND to carry us far enough that further advances from a replacement won't be terribly important for a while.
Optical drives

Think back to the last time you had some reason to care about optical drives. Nothing has changed since then except that you stopped caring.
Power supplies

There are now a lot of good 80 PLUS Gold certified power supplies that don't demand outlandish prices, as well as a lot of decent 80 PLUS Bronze certified power supplies at very budget friendly prices. Power supplies don't scale with Moore's Law or anything like it, but they've still come a long, long way in the last decade. There's still a lot of junk out there, but it's a lot easier to ignore--and to convince others to ignore--when you can get something okay for $50 and something nice for $80.
Monitors

2015 is supposed to be the year of adaptive sync monitors. Samsung has already announced that they'll offer some in March. I'd expect a lot of vendors to show off monitors supporting it at CES in January. G-Sync monitors that do the same thing are already out, but carry a large price premium and tie you to Nvidia GPUs. One could make a case that they'd be worth it if adaptive sync monitors weren't just around the corner, but industry standards like adaptive sync tend to win out over proprietary nonsense and for good reason.
There is also a drive toward ever larger monitor resolutions. My view is that high resolutions only matter if you can deliver at least 60 Hz. A lot of the 3840x2160 monitors out there can't.
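A back-of-the-envelope bandwidth calculation shows why: uncompressed 3840x2160 at 60 Hz needs more link bandwidth than HDMI 1.4 can carry, which is why so many early 4K monitors top out at 30 Hz over HDMI. The link figures below are the commonly published usable data rates (after line coding), so treat them as approximations:

```python
# Raw (uncompressed) video bandwidth: pixels * refresh rate * bits per pixel.
def video_gbps(width: int, height: int, hz: int, bits_per_pixel: int = 24) -> float:
    """Return required bandwidth in Gbit/s for an uncompressed video signal."""
    return width * height * hz * bits_per_pixel / 1e9

uhd_60 = video_gbps(3840, 2160, 60)  # 4K at 60 Hz
uhd_30 = video_gbps(3840, 2160, 30)  # 4K at 30 Hz

# Approximate usable data rates for the common links of the day:
HDMI_1_4 = 8.16          # Gbit/s
DISPLAYPORT_1_2 = 17.28  # Gbit/s (HBR2, four lanes)

print(f"4K @ 60 Hz: {uhd_60:.1f} Gbit/s -> fits HDMI 1.4? {uhd_60 <= HDMI_1_4}")
print(f"4K @ 30 Hz: {uhd_30:.1f} Gbit/s -> fits HDMI 1.4? {uhd_30 <= HDMI_1_4}")
```

DisplayPort 1.2 has the headroom for 4K at 60 Hz; HDMI 1.4 doesn't, and HDMI 2.0 monitors are only starting to appear.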
In other news, OLED is still the future and will probably remain the future for quite some time. It doesn't seem any closer to being common in desktops than it did five years ago.