
AMD gets SLI again

Ridelynn Member Epic Posts: 7,383

Quizzical had this one called dead to rights:
http://blogs.nvidia.com/2011/04/you-asked-for-it-you-got-it-sli-for-amd/

You should read the press release: even though AMD is doing them a favor by allowing SLI (and nVidia probably had to pay AMD for the privilege), they still manage to poke fun at AMD.

This support isn't retroactive; it looks like it's only for the upcoming AM3+ chipsets (the 970/990 series).

Even more interesting than this news, is when it's paired with this recent benchmark study by HardOCP:
http://hardocp.com/article/2011/04/28/nvidia_geforce_3way_sli_radeon_trifire_review

There, a 6990+6970 "TriFire" setup (running at the slower 6990 stock speeds) matches, and in many cases tops, a tri-SLI GTX 580 system (running at the much faster 580 stock speeds). I would say it's mainly due to a lack of video memory, but there is a significant gain going from 2-way to 3-way SLI; it's just not nearly as much as you would expect, and that step adds no additional video memory to the equation, so maybe it's just PCI Express bus saturation or driver inefficiency.

Comments

  • Quizzical Member Legendary Posts: 25,348

     

    The reason Nvidia is allowing SLI for Socket AM3+ is that they basically had to.  To refuse would be shooting themselves in the foot.

    I doubt that Nvidia is paying AMD anything for SLI to work on AMD 900 series chipsets.  Nvidia could make SLI work on AMD 700 and 800 series chipsets tomorrow if they cared to.  The reason it doesn't work is that Nvidia disables it through their video drivers.  If AMD were to disable it through chipset drivers, I'm not sure if they'd still be able to call their chipsets PCI Express compliant, as it would be disabling PCI Express devices that meet the PCI Express standard and should otherwise work.
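
    Just to illustrate the mechanism, here's a toy sketch (my own illustration, not Nvidia's actual driver code): on Linux you can read the host bridge's PCI vendor ID from sysfs and gate a feature on whether the chipset vendor is on an allowlist, which is conceptually all the driver-side SLI lock amounts to.

        # Toy sketch of a driver-side chipset allowlist (not Nvidia's actual logic).
        # Linux-only: reads the host bridge's PCI vendor ID from sysfs.
        from pathlib import Path

        NVIDIA, AMD, INTEL = "0x10de", "0x1022", "0x8086"   # PCI vendor IDs, for reference
        SLI_ALLOWED_VENDORS = {NVIDIA}   # the old policy: Nvidia chipsets only

        def host_bridge_vendor() -> str:
            # 0000:00:00.0 is conventionally the host bridge / root complex.
            return Path("/sys/bus/pci/devices/0000:00:00.0/vendor").read_text().strip().lower()

        vendor = host_bridge_vendor()
        print("host bridge vendor id:", vendor)
        print("SLI enabled by driver:", vendor in SLI_ALLOWED_VENDORS)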

    Furthermore, I think AMD would take this as good news, too.  A large chunk of the people who are going to run two cards in SLI are Nvidia fans to begin with.  There tend to be fanboys with much stronger opinions in the video card market than the processor market.  Someone who prefers an Nvidia GPU and an AMD CPU but cannot get both (and have SLI work) would be more likely to go Nvidia GPU and Intel CPU than AMD GPU and CPU.  AMD would rather sell you a processor and chipset than nothing at all.

    So let's back up a bit and explain why Nvidia basically had to do this.  It used to be that, in order to use two Nvidia cards in SLI, you had to use an Nvidia chipset.  You could get an Nvidia chipset for either AMD or Intel processors, but if you tried to use an AMD or Intel chipset, Nvidia would disable SLI through their drivers.  Nvidia had legal agreements with both Intel and AMD allowing them to produce chipsets.  AMD wanted Nvidia to produce chipsets, as even when AMD did produce their own chipsets, they were far inferior to Intel's, and you can't use a processor without a chipset.  AMD also couldn't make integrated graphics.

    Intel, on the other hand, didn't want Nvidia producing chipsets.  Intel wanted everyone who bought an Intel processor to have to buy an Intel chipset, without having to compete with Nvidia chipsets.  Nvidia was able to force Intel to allow Nvidia to produce chipsets for Intel processors by leveraging their graphics patents.  If Intel couldn't use Nvidia graphics patents, then Intel basically couldn't produce graphics chips at all, and there wouldn't be integrated graphics available at all for Intel processors.  So they struck a deal, the main provisions of which were that Nvidia would allow Intel to use their graphics patents, while Intel would allow Nvidia to produce chipsets for their processors.

    In 2008, Intel launched their Nehalem architecture.  This architecture used a Quick Path Interconnect (QPI) and later Direct Media Interface (DMI) instead of a Front Side Bus (FSB) to connect the processor to the chipset.  Intel insisted that their licensing deal with Nvidia only allowed Nvidia to create chipsets that used FSB, rather than QPI or DMI, and wouldn't allow Nvidia to make chipsets for Nehalem processors.

    Nvidia sued Intel over this, claiming a breach of contract.  If Nvidia couldn't get their benefit from the deal, then Intel shouldn't be able to use Nvidia graphics patents--and therefore, Intel shouldn't be able to produce their integrated graphics.  Now, I don't know what the contract between Intel and Nvidia actually said.  For Nvidia to agree to a contract that only allowed them to use FSB and not any analogous successor would have been terminally stupid.  AMD processors were using HyperTransport (HT), which was superior to FSB, so surely Nvidia had to expect that Intel would eventually move to something else.  Indeed, Nvidia is part of the HyperTransport Consortium, and their chipsets for AMD processors use HT, so they know all about it.

    The outcome of the lawsuit had enormously high stakes.  If Intel won, Nvidia would be in deep trouble, and the company may not survive.  If Nvidia won, then Intel would be unable to ship Clarkdale/Arrandale or Sandy Bridge processors, which means most of their processors would have to be yanked off of the market.  That would cost them billions, and that's on top of any damages that the lawsuit may specify.  The two sides agreed to settle the lawsuit, with Intel paying Nvidia $1.5 billion, and Nvidia agreeing to license their graphics patents to Intel but not make chipsets for Intel processors.

    Meanwhile, AMD bought ATI a few years ago.  This meant that AMD suddenly had the ability to create pretty good integrated graphics, and jump started AMD's chipset division.  While Nvidia still could make chipsets for AMD processors, the market for them was rather limited.  People who are going to buy an AMD processor are likely to view AMD favorably, and wouldn't go out of their way to avoid an AMD chipset.  Nvidia could rightly claim that their integrated graphics were vastly better than Intel's, but there was no such chasm between the integrated graphics of Nvidia and ATI.

    So this left Nvidia in a bind.  They couldn't make chipsets for Intel processors.  The market for chipsets for AMD processors wasn't nearly as big, and Nvidia couldn't realistically hope to capture very much of that market.  It costs a lot to design a good SATA 3 controller, or a good USB 3.0 controller, or whatever.  Once you have one, you can largely copy and paste it to other chipsets, though you will have to tweak some things when you move it to a new process node.  Nvidia presumably calculated that if they went to the work to create a modern chipset, they wouldn't be able to sell enough of it to cover their costs.  So Nvidia shut down their chipset division entirely.

    Intel's X58 chipset was a high end chipset with 36 PCI Express 2.0 lanes, and Bloomfield processors were clearly the best on the market.  This was exactly the platform that gamers looking to put together a high end gaming system with multiple video cards would go for.  With Nvidia unable to make their own chipsets for it, if they didn't allow SLI to work on X58, then they'd be telling gamers that in order to use SLI, they'd have to pair two Nvidia video cards with a clearly inferior CPU.

    Nvidia dominated the high end of the market with their G80 and G92 GPUs (GeForce 8800 and 9800 series) for a while, and had that dominance continued, Nvidia might well have gone for that.  But with the launch of RV770 (Radeon HD 4870) a few months before Bloomfield, AMD was competitive again.  Had Nvidia refused to allow SLI on X58, the more likely scenario would be high end gaming systems going with a Bloomfield processor and two Radeon HD 4870 video cards, not two Nvidia video cards and some other processor.  That would be catastrophic for Nvidia.

    Nvidia's answer to this was to license SLI to select motherboards that used Intel chipsets.  Nvidia's official explanation of this was that they would only license SLI to motherboards that could run it properly.  This wasn't entirely false, as Nvidia did insist on at least two PCI Express 2.0 slots that could do x8 bandwidth at the same time.  AMD will allow you to try to run CrossFire on an x16/x4 bandwidth configuration, which really doesn't work very well.  Nvidia could argue that they were protecting uninformed customers who would buy an x16/x4 motherboard expecting to run CrossFire on it, only to have it not work very well, by not allowing the motherboard manufacturer to claim that it supported SLI.

    But that wasn't the only reason, or even the main one.  Nvidia wanted higher profits from customers who used SLI, beyond just what they made from the video cards.  Before, they had been able to get that by requiring SLI users to buy an Nvidia chipset.  Nvidia wanted to make motherboard manufacturers buy an NF200 chip from Nvidia, which adds extra PCI Express lanes.  Motherboard manufacturers wouldn't go for that, except at the very highest end, because 36 PCI Express lanes was already enough for x16/x16, the highest that the PCI Express standard allowed for two cards.

    The next best thing Nvidia could come up with was that, in order for SLI to work on an X58 motherboard, the motherboard manufacturer would have to pay Nvidia licensing fees, dubbed the "SLI tax" by people who disliked them.  For X58, I think it was something like $30,000 per motherboard model that would be licensed for SLI, plus $5 per motherboard actually built.  For P55, it was somewhat less than that.  H55 and H57 chipsets couldn't do x8/x8 bandwidth, so they wouldn't support SLI at all unless they bought an NF200 chip to split the PCI Express lanes.
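
    To put rough numbers on that (the $30,000 and $5 figures are my recollection, so treat this as a back-of-the-envelope sketch rather than anything official):

        # Back-of-the-envelope "SLI tax" per board, using the (unconfirmed) X58-era
        # figures quoted above: ~$30,000 per licensed motherboard model plus ~$5
        # per board actually built.
        def sli_tax_per_board(boards_built: int,
                              per_model_fee: float = 30_000.0,
                              per_board_fee: float = 5.0) -> float:
            return per_model_fee / boards_built + per_board_fee

        for volume in (5_000, 20_000, 100_000):
            print(f"{volume:>7} boards built -> ${sli_tax_per_board(volume):.2f} of licensing per board")
        # ~$11 per board at 5k units, ~$6.50 at 20k, ~$5.30 at 100k -- a cost that
        # gets passed on to whoever buys the motherboard.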

    Now, adding an NF200 chip means that you can get more bandwidth, so you can do x16/x16 on P55, which can be useful.  One problem is that Nvidia charges quite a bit for NF200 chips.  Another is that having to pass the signal through an extra chip adds latency, which hurts performance if you're not putting the extra PCI Express bandwidth to good use.  It might only hurt your frame rates by 1%, but do you really want to pay extra for that?
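
    For reference, here's the rough lane math behind all of this (the lane counts below are approximate, and the check is just an illustration of what "enough lanes for x16/x16 or x8/x8" means):

        # Approximate PCIe 2.0 lane budgets of the era, and whether each platform
        # can feed two x16 slots (needs 32 lanes) or at least x8/x8 (needs 16).
        platforms = {
            "Intel X58":             36,  # chipset provides 36 PCIe 2.0 lanes
            "Intel P55 (CPU lanes)": 16,  # Lynnfield supplies 16; the chipset only adds slow ones
            "Intel P67":             24,  # ~16 from the CPU plus ~8 from the chipset
            "AMD 990FX":             42,  # x16/x16 with lanes to spare
            "P55 + NF200":           32,  # NF200 turns 16 upstream lanes into 32 downstream
        }

        for name, lanes in platforms.items():
            print(f"{name:22s} {lanes:2d} lanes   x16/x16: {lanes >= 32}   x8/x8: {lanes >= 16}")
        # The NF200 catch: 32 downstream lanes still funnel through 16 upstream
        # lanes (plus the extra hop's latency), so bandwidth to the CPU is unchanged.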

    Now, there are Nvidia chipsets available for older Conroe/Penryn architecture (Core 2 Duo/Quad) processors, and also for Socket AM3 motherboards.  But with no Nvidia chipset division, there would be no Nvidia chipsets for Socket AM3+ (Zambezi) or Socket FM1 (Llano).  Nvidia's choices here were either to allow SLI on AMD chipsets, or to disable SLI on all future AMD processors entirely.

    The latter would constitute suicide on Nvidia's part.  AMD's 890FX chipset is currently the best on the market.  AMD's upcoming 990FX chipset will probably be better yet, with various tweaks plus USB 3.0 support.  So the physical hardware to support SLI in the chipset is already there, and probably better than what Nvidia would have offered even if they did have chipsets.  It is highly probable that Zambezi will be a better processor than Bloomfield, and very likely better than Gulftown, too, so X58 would be all but officially dead.  Zambezi may or may not be able to catch Sandy Bridge for gaming performance, but it should be in the same ballpark, and offer a high end chipset, unlike Sandy Bridge, where the P67 chipset only has 22 or 24 PCI Express lanes or so, and not enough to do an x16/x16 bandwidth configuration.

    So in two months, if you want a high end gaming processor together with a high end chipset that can do x16/x16 PCI Express bandwidth, Zambezi plus 990FX will probably be your only good option.  That's exactly the configuration that people looking to run two high end Nvidia cards in SLI would want.  For Nvidia to disable SLI on that would nearly kill off SLI entirely for new systems.  Nvidia's video cards already trail behind AMD's, and to take away the SLI option on top of that would be suicidal.

    So Nvidia will allow SLI on AMD chipsets.  Today's announcement doesn't change any of the above analysis.  It only makes it official.  SLI licensing on AMD chipsets will probably be analogous to what Nvidia did for X58, P55, and now P67.  Nvidia will probably only allow it on motherboards that can do x8/x8 or better bandwidth, and only on those that pay Nvidia an "SLI tax", or perhaps use an NF200 chip.  An NF200 chip would be completely pointless for 970 or 990X chipsets, as the 990FX chipset would be able to do the job of 970+NF200 or 990X+NF200 both better and cheaper.  There will probably be a few high end motherboards that do use a couple of NF200 chips, for those few people who want x16 bandwidth to three or four PCI Express 2.0 devices simultaneously.

  • Shinami Member Uncommon Posts: 825

    Do you know all of this from speculation, or did you actually speak directly to an Nvidia or AMD representative who told you all of this "accurate" information, or is it just your take on things? It's a little poke, I know, but I'm logical enough to ask the question and to wonder how much of it comes from websites and prediction on the matter.

     

    What makes you believe that Nvidia went to AMD for SLI, and not the other way around, spun to make it look like Nvidia was the one begging for it? There is a lot of that going around lately, as companies care about their image a lot more than their integrity.

     

    The bottom line really is: "Who cares? AMD gets SLI again, and that is what is important. As to how much that will cost the consumer, only time will tell."

     

  • Quizzical Member Legendary Posts: 25,348

    I'm not sure if you're replying to Ridelynn's post or mine.  A large chunk of my post is publicly available information, and I don't care to track down 40 different references to verify every little point.  I do fill in some gaps with opinion or speculation.  I trust that anyone who cares to know the difference can tell the difference.

    I don't believe that AMD or Nvidia are paying each other directly over SLI on AMD chipsets.  The model will probably be the same as with X58, P55, and so forth, where the fees are paid by motherboard manufacturers--and then, of course, passed on to consumers.  Asus or Gigabyte or whoever pays Intel $70 or so for the X58 chipset, and then also pays Nvidia $5 if they want SLI to be enabled.

    It probably isn't the case that either AMD or Nvidia went to the other begging for SLI on AMD chipsets.  This is just like how Creative doesn't pay AMD or Intel licensing fees to allow their PCI Express sound cards to work, or LSI doesn't pay licensing fees to allow their PCI Express RAID cards to work.  The point of an industry standard is precisely that everything just works, and you don't have to negotiate licensing fees back and forth with everyone.

  • Phry Member Legendary Posts: 11,004

    Originally posted by Shinami

    Do you know all of this from speculation, or did you actually speak directly to an Nvidia or AMD representative who told you all of this "accurate" information, or is it just your take on things? It's a little poke, I know, but I'm logical enough to ask the question and to wonder how much of it comes from websites and prediction on the matter.

     

    What makes you believe that Nvidia went to AMD for SLI, and not the other way around, spun to make it look like Nvidia was the one begging for it? There is a lot of that going around lately, as companies care about their image a lot more than their integrity.

     

    The bottom line really is: "Who cares? AMD gets SLI again, and that is what is important. As to how much that will cost the consumer, only time will tell."

     

    after reading his post (it took a while!!!) i'd have to say it sounds like a hell of a lot more than just speculation.. anyone who knows that much and can explain it like that should be taken more seriously than just.. speculative interest.. i may not know much about that particular subject, but i at least recognise someone who does...

     

  • Ridelynn Member Epic Posts: 7,383

    It's all speculation unless it's backed up with evidence, no matter how well said it may be.

    I think Quiz is probably right, though. I think there was some money under the table at some point, but I don't have any evidence to offer except the press release from nVidia, which makes it sound very much like nVidia is doing AMD a favor (and I have my sincere doubts that it reflects the reality of the situation in any way).

  • Quizzical Member Legendary Posts: 25,348

    Nvidia and AMD both gain from this.  It's Intel that loses out.  If SLI was AMD only, then people who preferred an AMD processor and two Nvidia cards in SLI would sometimes buy an Intel processor instead of an AMD one.  With this announcement, if someone prefers an AMD processor and two Nvidia cards in SLI, then that's exactly what he'll buy.

    Well, I guess you could argue that people who want to use CrossFire but not SLI could lose out if they have to pay a few extra dollars for their motherboard, in order to cover the motherboard manufacturer's "SLI tax" payment to Nvidia.  But that's just my speculation.

    And even a lot of people who don't want to use SLI when they first buy the system, may still like the option to buy a couple of Nvidia cards in SLI later.  Maybe Kepler or Maxwell will be really awesome, and Nvidia will regain the market lead at the high end that they once had with the G80 and G92 GPU chips.

  • Ridelynn Member Epic Posts: 7,383

    Honestly, now that I think about it, SLI/CF really only matters to a very small percentage of computer users. Gamers and some very high end research/rendering applications are the only things that even benefit from it, and only a small fraction of that population uses it.

    So I think the case is more like this: AMD and Intel are giggling, because they see the discrete video card market for what it is, a very small niche in a very big pool, while they both keep right on perfecting their integrated GPUs for typical desktop usage and the low power GPU devices that go into all the smartphones and tablets (which are selling in volumes of hundreds of millions, maybe soon billions).

    nVidia has some lower power stuff, but once the GPU gets integrated onto the CPU, they can't compete any longer. I guess this is why they are jumping into the ARM market soon.

  • Cadwalder Member Posts: 20

    I find this post and the replies incredibly interesting. Thanks for that. And damn, Quizzical, you live up to your name (in a sense) - it took me 3 cups of coffee to roll through your first post (despite being a voracious reader). =o

    I have to agree that the folks at Intel are definitely gonna lose from this. I'll also have to agree with Ridelynn; only gamers or hardware people bother with Crossfire/SLI setups. Very few people in the companies I've worked with and am working with (all media/film/digital design people) bother getting Crossfire/SLI setups. Most of them generally have all their PCIe slots filled up with other expansion cards (most notably, sound cards), so they only plug in a single video card. Another reason: after the dual-GPU cards appeared, most of us just use our one GPU slot for those.

  • Quizzical Member Legendary Posts: 25,348

    While very few people actually buy SLI or CrossFire setups, the percentage of revenue that they provide is somewhat less small, because those who do spend so much.  Very few people buy professional graphics cards, yet Nvidia still gets something like 20% of its revenue from Quadro cards.  Or to take another example, there are vastly more ARM processors sold every year than x86, but AMD is a vastly bigger company than ARM Holdings, even if you ignore the video card division, and AMD has only a small percentage of the x86 market.

    The high end cards are also partially a marketing expense.  Both Nvidia and ATI long believed that quite a few people would look at who had the fastest card of a given generation, and then buy a lower end card from that company.  So they'd try very hard to have the fastest $600 card that hardly anyone would buy, even if it meant their $200 cards that people did buy weren't the focus.  AMD broke away from that with the Radeon HD 3000 series, saying that if the bulk of the GPU revenue came from $100 and $200 and $300 cards, then that should be the main focus, and if people wanted something higher end, then they could get two.  That didn't work out so well for them in the 3000 series, mainly because Nvidia had a better architecture, but in the 4000, 5000, and 6000 series, it sure made AMD a whole lot more money than Nvidia.

    Indeed, today there are probably some people who will see that a GeForce GTX 580 is faster than a Radeon HD 6970, figure that Nvidia won the generation, and then buy an overpriced GeForce GTX 550 Ti on that basis.  AMD's counter to that is to say, hey, we have the best dual GPU card, as a few months ago, that sort of person might have seen that a Radeon HD 5970 was faster than a GeForce GTX 580, and then bought a Radeon HD 5770 because of it.

    This attempt at grabbing the top "halo" card is the entire reason for the GeForce GTX 590 to exist.  Nvidia surely knew very well that with how far they are behind AMD in performance per watt, and AMD having PowerTune to cap power draw, Nvidia had no chance at making a better top end card than the Radeon HD 6990.  But if they could make the GTX 590 faster than the Radeon HD 6990, they could win sales of lower end cards.  They sacrificed reliability to try to do this, as 450 W on air in a two slot cooler is simply unsafe.

    The Radeon HD 6990, on the other hand, probably would be a sensible purchase for thousands of people.  But only thousands, and not tens of thousands, let alone millions who will buy lower end cards.  But that at least beats the GeForce GTX 590 being a sensible purchase for exactly zero people.

  • Cadwalder Member Posts: 20

    I think I can safely agree to the logic behind most of your reply.

    I might be drifting slightly off-topic here, but I can, from personal experience, support your saying that nVidia is making significant sales on the Quadro series of cards. Every designer I've come across in the industry (and there's a lot more of these folks around than you'd think) owns a Quadro 5000 AT LEAST (costs about $2500, I think?) on a system that costs $3000 or more. Many of these folks have MULTIPLE systems with such hardware. A lot of them have even upgraded to the Quadro 6000 (which is $5000) in order to show off to clients that they have the best (read: most expensive) on the market. I've been told that there's little value for money when you compare the 6000 to the 5000 (about a 25% increase in performance for twice the price?!), but still more than 70% of the people here have upgraded in order to please clients. nVidia is probably making a helluva bloody lot more on professional solutions than we'd normally speculate. =D

    You might have come across one of my recent posts regarding the flagship cards from nVidia and AMD. Although I'm somewhat of an nVidia fanboy, I have to say that the 590 is a complete and utter disappointment. It's the most unreliable piece of (shit) hardware I've ever used, burning out within days of installing it (similar tales of tragedy from friends who've purchased one too =o). Even if it were faster (unfortunately, it's not beating the 6990), I wouldn't recommend that anybody buy it. So yes, it's probably not a sensible buy.

  • Ridelynn Member Epic Posts: 7,383

    Well, the sales figures for GPUs are out. They don't say anything about profit margins, just unit market share.

    If the "halo" effect really is there, it's not terribly big. I have always heard this was the reason why "fastest card" was important, so the explanation isn't new, but it's still somewhat humbling to see the actual sales numbers.

    http://www.engadget.com/2011/05/04/nvidia-losing-ground-to-amd-and-intel-in-gpu-market-share/

    [chart: GPU market share figures from the Engadget article linked above]


    No surprise that Intel went up and retains its top spot as GPU supplier (how many people here figured that Intel sold the most GPUs?). With a GPU integrated into every Sandy Bridge, Intel gets to chalk up a sale for each CPU sold, regardless of whether the end user is actually using it or not. And a huge number of laptops and low end/business class machines get by just fine with integrated Intel graphics.

    AMD shot up considerably. This also isn't huge news (although it is surprising that they experienced more growth than Intel, even though they still don't have the same volume). They have integrated GPUs on both CPUs and chipsets, plus a very healthy discrete GPU line.

    nVidia lost market share. It lost big. But, as the article hints at, and as Quizzical touches on, discrete graphics carry a much larger markup, so they are much more profitable. An on-die GPU may only net the company a single-digit percent profit, whereas a high end discrete GPU could easily represent 100%+ profit over cost.
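
    A crude illustration of why unit share and profit diverge so much (the prices and margins below are invented round numbers, purely to show the shape of the effect):

        # Invented numbers: a huge pile of low-margin integrated GPUs vs. a small
        # number of high-margin discrete cards. "Margin" here means profit over cost,
        # so profit per unit = price * margin / (1 + margin).
        segments = [
            # (name,              units,       price,  margin over cost)
            ("integrated GPU",    10_000_000,  10.0,   0.05),   # single-digit margin
            ("high-end discrete",    500_000, 350.0,   1.00),   # ~100% over cost
        ]

        total_units = sum(u for _, u, _, _ in segments)
        total_profit = sum(u * p * m / (1 + m) for _, u, p, m in segments)

        for name, units, price, margin in segments:
            profit = units * price * margin / (1 + margin)
            print(f"{name:18s} {units / total_units:6.1%} of units, {profit / total_profit:6.1%} of profit")
        # ~95% of the units end up contributing only ~5% of the profit.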

  • Cadwalder Member Posts: 20

    Good piece of info there.

    I personally think that the integrated graphics in the Sandy Bridge processors are pretty impressive, way better than what I had expected. Also, I'm very interested in nVidia's Project Denver.

  • Quizzical Member Legendary Posts: 25,348

    Sandy Bridge integrated graphics are only impressive if you restrict the comparison to other Intel integrated graphics, or very old integrated graphics.  I'd much rather have AMD's Radeon HD 6310 netbook integrated graphics than Intel's top of the line Intel HD 3000 graphics in Sandy Bridge.  Well, if it were just the graphics by themselves, that is, which is rather counterfactual, since they're both integrated into the same die as processors.

  • Quizzical Member Legendary Posts: 25,348

    Originally posted by Cadwalder

    Also, I'm very interested in nVidia's Project Denver.

    I'm curious about Project Denver, too.  Presumably it doesn't just mean future iterations of Tegra for phones and tablets, which already use ARM cores.

    Will Nvidia try to put ARM cores into GeForce cards?  I don't see an obvious reason to do so, other than CUDA, which is wasted in a gaming card.  Maybe Nvidia thinks they can have some general purpose ARM cores handle several special-function bits.

    Will Nvidia try to make ARM into a competitive architecture in the laptop space?  The upcoming Cortex A15 ARM cores are supposed to be out-of-order, support 40-bit physical addressing, and clock as high as 2.5 GHz.  Microsoft has said that Windows 8 will support ARM.  Will Nvidia try to put that into a netbook, and compete with AMD's Bobcat cores and Via's Nano?  They'd need a lot better IPC than ARM has shown so far for that, but Nvidia apparently has an ARM architecture license, and can adjust the cores themselves.

    It's hard to imagine Nvidia going after the desktop processor space with ARM cores, apart from nettops.  Intel and AMD both have many years of experience tweaking x86 to optimize IPC, and are able to borrow many of each other's innovations due to a cross licensing agreement.  If Nvidia thinks they can take on Haswell and a 22 nm die shrink of Bulldozer using tweaked ARM cores, then that's not going to end well for them.

    Of course, reports say that Apple is going to ditch x86 in favor of ARM in their laptops.  If Apple thinks that ARM is going to be good enough to run a laptop, then Nvidia surely could, too.  Asus tried an ARM "smartbook", which turned out to be very dumb, and unable to compete with even gimpy Atom netbooks, but several more years of ARM development might make it enough to run like a real computer.  Apple iOS and Google Android mean that there are now a lot of applications to run on ARM cores, too.

  • Ridelynn Member Epic Posts: 7,383

    Keep in mind the iPhone/iPad run on ARM processors, and that Apple actually bought out two ARM designers (PA Semi and Intrinsity) a little while back so they could custom make their own ARM CPUs for these mobile devices.

    I think ARM has a good ways to go before it's heavy-duty enough for a laptop, and x86 CPUs have come a long way in terms of power management, which makes it even harder to catch up.

    Project Denver, from the last bits of info I've heard, was a CPU replacement. I haven't heard if nVidia planned on integrating a GPU on die with them or not, but nVidia was aiming this as a high performance CPU. Coupled with the announcement that Windows 8 would support ARM processors natively (which I think was not related - Microsoft is looking at the tablet market, not Project Denver), it could be interesting to see how it stacks up.

    ARM was originally developed to be a desktop CPU (Acorn), but morphed into an ultra-low power solution for mobile and embedded devices. To see someone redevelop an ARM-based desktop CPU isn't unfathomable, but the architecture has a good ways to go to be performance competitive with a modern x86. Now, one advantage that ARM has is that its cores are very simple (a RISC instruction set), so it wouldn't be too hard to imagine a desktop ARM CPU having many more cores than an x86 CPU (maybe even on the order of dozens or hundreds), which would make it capable of catching up via parallel performance. The problem with that is that today's software is not very well optimized for parallel execution.
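
    That last point is basically Amdahl's law; here's a quick sketch of why piling on simple cores only helps as much as the software is parallel (the parallel fractions are just example values):

        # Amdahl's law: speedup on N cores when a fraction p of the work can run
        # in parallel. The p values are examples, not measurements of real software.
        def amdahl_speedup(p: float, cores: int) -> float:
            return 1.0 / ((1.0 - p) + p / cores)

        for p in (0.50, 0.90, 0.99):
            line = ", ".join(f"{n} cores: {amdahl_speedup(p, n):6.2f}x" for n in (4, 16, 64, 256))
            print(f"{p:.0%} parallel -> {line}")
        # With half the work serial, even 256 slow cores top out below 2x, so a
        # many-core ARM chip only beats a fast quad-core x86 if the code scales.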

  • Quizzical Member Legendary Posts: 25,348

    There are several reasons why ARM in MacBooks makes more sense than in other laptops.

    1)  Apple cares about power consumption tremendously, more so than most other desktop and laptop computer vendors.  ARM is all about low power.

    2)  Apple likes having its own custom hardware, not using off-the-shelf parts that other vendors also use.

    3)  Apple can do its own operating system, and isn't dependent on Microsoft to support hardware the way most other computer vendors are.

    4)  Apple has a lot of success with ARM already, and can position new MacBooks as a higher end version of the iPad, rather than a lower end version of a desktop.

    5)  Apple has some very zealous fanboys who will buy their products anyway, even if they end up being terrible.

    -----

    The real question is performance.  How well will a quad core Cortex A15 at 2.5 GHz perform?  Will it basically be competitive with Atom, and too gimpy to be functional like a real computer unless restricted to programs that aren't terribly computationally intensive?  That's a restriction that Apple could well make to cover up how slow the hardware is.  Will it be competitive with AMD's Krishna netbook processors, which will be out long before the Cortex A15?  That would be enough for a functional, low power laptop.

    There's also the question of who will buy it, as it will be incompatible with the rest of the world.  Will this split the world into games developed for ARM and games developed for x86?  There's already some of that, but if that happens, I'd expect the major commercial games to stay x86 only.

    And what about desktops?  People accept that a phone has weak hardware that can't do much.  People accept that in a tablet to some degree, too, though AMD's Wichita APU will do a fair bit to change that.  But people expect a laptop to be functional like a real computer, which is why ARM hasn't caught on in that space.  And people really expect desktops to be powerful, and I can't see ARM being competitive there in the foreseeable future.

    I guess Apple already has to support separate code bases for x86 and ARM, with the former for MacBooks, iMacs, and Mac Pros, and the latter for iPods, iPhones, and iPads.  Moving MacBooks from the former category to the latter wouldn't necessarily change how much work Apple has to do to support all of their products that much.

    Of course, when I ask, how will the performance compare, there's also the question, how can we tell?  You can install a game and run it on a GeForce GTX 570, a Radeon HD 6850, a GeForce GTS 450, and a Radeon HD 6450, and compare performance.  The reason you can compare performance is that it's the same code running on all of that hardware.  But if you want to compare performance between ARM and x86, then it's different code.  If a program on an x86 processor performs three times as well as a program that ostensibly does the same thing on an ARM processor, does that mean the x86 processor is three times as fast as the ARM processor?  Maybe one of the code paths is more efficient than the other, and had they been more comparable, it would have been 5 times as fast, or only 2 times as fast.

    OpenCL may be the way out of this.  The idea of OpenCL is that you code something once, and then it runs on anything.  If it can run on GeForce and Radeon cards, then ARM doesn't seem like such a stretch.  Indeed, reports say that ARM is going to strongly support OpenCL.  That, more than GPGPU, may end up driving OpenCL adoption.  Code something once and have it run on everything from desktops to cell phones.  That would be nifty if it works well, but that's a huge "if".
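
    For what "code it once and run it anywhere" looks like in practice, here's a minimal vector-add sketch using the pyopencl bindings (my choice of binding, purely for illustration).  The kernel source itself is plain OpenCL C and gets compiled at runtime for whatever device the context finds, whether that's a Radeon, a GeForce, a CPU, or eventually an ARM GPU once those drivers show up.

        # Minimal OpenCL example via pyopencl: the kernel below is compiled at
        # runtime for whichever OpenCL device the context picks.
        import numpy as np
        import pyopencl as cl

        ctx = cl.create_some_context()
        queue = cl.CommandQueue(ctx)

        a = np.random.rand(1_000_000).astype(np.float32)
        b = np.random.rand(1_000_000).astype(np.float32)

        mf = cl.mem_flags
        a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
        b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
        out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

        program = cl.Program(ctx, """
        __kernel void vadd(__global const float *a,
                           __global const float *b,
                           __global float *out)
        {
            int gid = get_global_id(0);
            out[gid] = a[gid] + b[gid];
        }
        """).build()

        program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

        result = np.empty_like(a)
        cl.enqueue_copy(queue, result, out_buf)
        assert np.allclose(result, a + b)
        print("kernel ran on:", ctx.devices[0].name)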

  • Ridelynn Member Epic Posts: 7,383

    Apple is an old hand at migrating between CPU architectures: from 68k to PPC to x86 to ARM (there is an iOS emulator, if you have the Apple dev kit, that runs very well on OS X). Should they choose to put an ARM CPU in an OS X product, it wouldn't be hard for their engineers to adjust the code. They've done it three times in the past anyway (and once with the OS X code they currently are using), and for the end user it's been fairly transparent. Emulated code runs a bit slower, but is generally very acceptable, and once you get two generations removed from the architecture shift, the newer CPU lines run even the emulated code faster.

    That said, I can't see them getting away from x86 for their OS X line, even for the ultraportables (Macbook Air). I know that OS X 10.7 will more closely resemble/integrate iOS features, but I think it will continue to run on x86 chips. Rumor has it that 10.7 will be the last OS X release, and that we can expect "something different" after that (maybe a merging of OS X and iOS, maybe just a shift towards iOS, who knows), so maybe this ARM development coincides with that as well.

    Apple isn't afraid to restrict performance in the name of other factors (mainly usability, which includes factors like heat production and battery life - this is why iMacs use mobile CPUs and sometimes even GPUs), but they won't shoot themselves in the foot over it either. They won't push an ARM-based laptop unless it's faster than their current x86 lineup and represents better energy management; they've never updated a product with intentionally slower performance than its predecessor.

    I think their iOS lineup may change - they probably have some trick up their sleeve to add to the iOS family (something mobile and gesture based that will, of course, blow our minds), and it may be why they are pushing these ARM developments and where the rumors come from that ARM will go into Apple laptops. And maybe they are going to go into Apple Laptops, but not unless they represent a major improvement over x86 lines in both performance and energy management.
