1GB Radeon HD 6870M or 1.5 GB GTX 460M on a M17x R3

TanonTanon Member UncommonPosts: 176

Right now, I'm not quite sure which of the two cards I should get. I've been reading up on them, and what I can gather is that the 6870M does not support 3D, nor does it have PhysX or CUDA, while the 460M does. In terms of speeds, I've read that the 460M has faster shader and memory clocks, but since they're on different architectures, the clock speeds aren't directly comparable. The 460M is also $50 extra. As such, I am unsure as to which will give me the better performance, as I don't plan on using 3D on a laptop and I do not currently see any need for PhysX or CUDA (though that may change in the future).

Another issue is battery life: I've read that the 460M has Optimus, allowing for switchable graphics, which the 6870M does not seem to offer. I've seen claims that with the 6870M you'll be lucky to last 10 minutes unplugged. While I'm sure that's an exaggeration, an extremely low battery life does concern me. I would like the laptop to last at least 2-3 hours on battery (all lighting off, screen brightness down, and switched to power saver mode), as I fear that some lecture halls will not have enough outlets. I'm not sure exactly how much the 6870M will drain at idle, but I would assume that a 9-cell 85 Wh battery would be enough to sustain it for a couple of hours. Naturally, I'd like to confirm before purchasing it.

And, of course, I've read some posts (e.g. Quizzical's) talking about the headaches that the drivers for discrete switchable graphics cause. If I absolutely must have switchable graphics to ensure suitable battery life for my classes, then I'm willing to put up with them, but if the battery can last at least 2-3 hours on the 6870M, then I'd like to make my decision purely based on the performance I'd get out of the two cards, whether the extra performance is worth the $50, and just how big a pain the switchable graphics drivers are.

Another thing that might be noteworthy is that I do not plan on purchasing the upgrade to the 1080p screen; I'm sticking with the 900p, which requires less rendering work, so that might be useful information. Also, my choices are strictly between these two, as the only other option Alienware offers in Canada is a 2 GB 580M, which is $600 extra and places the laptop way out of my budget, and the 6970M is sadly not available in Canada.

Comments

  • CatamountCatamount Member Posts: 773

    I can't think of a single time you're going to use CUDA for anything gaming-related, and PhysX has something like two titles in the whole world that you might actually care about.

    On the other hand, AMD has vastly superior performance per watt, which is very important for laptops. At idle it won't matter terribly, I suppose (especially given the switchable graphics on the Nvidia side), but at load it matters very much. The 6870 will stay cooler with a given cooling system for a given level of performance.

    Notebookcheck.net seems to indicate that the 460M has a roughly 72 watt TDP (as opposed to 50 watts for the 6870M). 72 watts is a LOT of power for a mobile GPU.

    For battery life, I'm guessing you can probably get 2-3 hours without a problem. My laptop lasts 3 hours with a 48 Wh battery, and while its GPU uses about half as much power as a 6870M, the CPU is an i7 720QM, which is NOT light on power. I think you should be able to match my battery life just fine with a battery nearly twice as large, a GPU that consumes twice as much, and a CPU that probably doesn't consume any more.
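
    A rough back-of-envelope in Python, just to show how that kind of estimate works out; the idle-draw figures here are illustrative assumptions, not measurements:

        # Battery life is roughly capacity (Wh) divided by average system draw (W).
        def battery_hours(capacity_wh, draw_watts):
            return capacity_wh / draw_watts

        # Whole-system idle draw (screen dimmed, GPU idling) -- guesses for illustration.
        print(battery_hours(48, 16))   # ~3.0 h, about what my current laptop manages
        print(battery_hours(85, 30))   # ~2.8 h, a 9-cell 85 Wh pack at a much higher idle draw

    Even with a fairly pessimistic whole-system idle draw, an 85 Wh battery still clears the 2-3 hour mark.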

  • DovenDoven Member Posts: 138

    heyo..

    Thought I would chime in a bit here. Out of the two choices you list, it would most definitely be the 6870M. This coming from an Nvidia fan for as many years as they have been around.

    With the research you have done, you should recognize that the 400 series cards have had a great deal of issues with overheating, in laptops as well as desktop rigs. And while it's somewhat easy to remedy the heating problems on a desktop, there is no way (if it were me) that I would risk any kind of heat buildup in a laptop. The 400s run way too hot, and performance means little if the card is overheating to begin with.

    Now, I realize Alienware is THE name in gaming rigs to the normal consumer, and you did mention you live in Canada? So I will plug a laptop for you that you may or may not be able to get. It doesn't say Alienware on it and doesn't look like a Porsche, but it is quite the beast.

    http://www.newegg.com/Product/Product.aspx?Item=N82E16834152266&SortField=0&SummaryType=0&PageSize=10&SelectedRating=-1&VideoOnlyMark=False&IsFeedbackTab=true#scrollFullInfo

    If anything, you can use that as an example of what is possible at that price point. Again, NOT trying to shove it down your throat.

    I own two Alienwares myself, and they are a decent product, but pricey as all hell.

    hope this helps some..

    cheers

    d

    "He who reigns within himself and rules his passions, desires, and fears is more than a king."

    "Where there is much desire to learn, there of necessity will be much argruing, much writting, many opinions; for opinions in good men is but knowledge in the making."

    John Milton 1608-1674

  • QuizzicalQuizzical Member LegendaryPosts: 25,351

    I'd heavily favor the Radeon HD 6870M there.  The two cards are basically an underclocked desktop Radeon HD 5770 and GeForce GTX 550 Ti, respectively.  On the desktop, those cards perform about the same, with perhaps a slight edge for the GeForce GTX 550 Ti.  The laptop version of the Nvidia card has to underclock itself further, with 75% of the core clock speed and 61% of the memory clock speed, as compared to 79% and 83% respectively for the 6870M.  So the 6870M should perform slightly better on average, but realistically, they're nearly tied.
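
    As a crude sanity check, here's that arithmetic in Python, using the clock ratios quoted above; averaging core and memory scaling 50/50 is just an assumption for illustration, since real performance doesn't scale linearly with either clock:

        # Rough throughput of each mobile card relative to its desktop counterpart,
        # using the clock ratios quoted above. The 50/50 weighting is a crude assumption.
        def mobile_scaling(core_ratio, mem_ratio):
            return (core_ratio + mem_ratio) / 2

        gtx_460m = mobile_scaling(0.75, 0.61)   # ~0.68 of a desktop GTX 550 Ti
        hd_6870m = mobile_scaling(0.79, 0.83)   # ~0.81 of a desktop HD 5770
        print(gtx_460m, hd_6870m)

    This overstates the gap, since games rarely scale one-for-one with memory clocks, but it shows why the 6870M ends up with the slight edge even though the two desktop parts are roughly even.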

    The 6870M offers two big advantages, however.  One is lower power consumption.  The Radeon HD 5770 uses far less power than the GeForce GTX 550 Ti.  They both save on power consumption in laptops by setting a lower clock speed and lower voltage.  There isn't any reason to believe that either vendor manages to save meaningfully more than the other here, so one would expect the GeForce GTX 460M to use considerably more power at load than the Radeon HD 6870M.  Then the GTX 460M also adds memory, for 1.5 GB of video memory, as compared to 1 GB for all of the other cards mentioned, both laptop and desktop.  GDDR5 memory takes a lot of power, so this likely adds several watts to the GTX 460M's power consumption, widening the 6870M's advantage.  Power consumption is a big deal in a laptop, as gaming laptops are already packing too much heat into too little space.

    Now, you might ask, isn't there some performance advantage from the extra memory?  The answer to that is "no", or at least, not at 1600x900.  Not at 1920x1080, either, with only a handful of exceptions.  Metro 2033 would probably get some benefit from the extra video memory, but tech review sites haven't been able to turn up any other game that would be likely to do so.

    The other advantage of the 6870M is the lower price tag.  So if you have better performance for lower power consumption and a lower price tag, I say that makes it a better card.

    There's no sense in getting a GeForce GTX 580M.  If you want higher performance, then that's what the Radeon HD 6970M is for.  If Alienware won't offer it, then you can buy a laptop elsewhere that does.  A lot of sites sell rebranded Clevo units with the 6970M.  The GTX 580M is meaningfully higher in performance than the Radeon HD 6970M, but it's not a huge gap.  The gaps in power consumption and price tag are huge, though.

    -----

    Both AMD and Nvidia offer discrete switchable graphics.  It's up to laptop vendors whether they will implement it, though.  The advantage to discrete switchable graphics is that you can get low idle power consumption, and hence long battery life.

    But the disadvantages are many.  First, there's a significant performance hit.  For obvious reasons, it's simpler for a discrete card to send a completed frame directly to the monitor than to copy it to the frame buffer of the integrated graphics and then let the integrated graphics send it to the monitor.  It isn't obvious how big of a performance hit that should theoretically be, but in practice, you're looking at losing maybe 5%-10% of your performance.

    Second, discrete switchable graphics doesn't always work.  Both vendors rely on a list of programs: if any program on the list is running, the discrete card takes over.  But the list isn't flawless, so sometimes the discrete card kicks in when it shouldn't, or doesn't kick in when it should.  Now, you can probably manually override it, but that's a pain.

    Third, discrete switchable graphics makes a mess of video driver updates.  Sometimes you need to update your video drivers for one reason or another.  If you have just an AMD card or just an Nvidia card, this is a simple enough matter.  With discrete switchable graphics, it's not so simple.  And if it's the Intel drivers that are the problem that you need an update for, you might well still be waiting for Intel to fix it when you replace the laptop.

    If you want a gaming system with long battery life, you might want to look at AMD's recently released Llano A8-3510MX and A8-3530MX APUs.  Those use integrated graphics, but it's not the low end "don't try to game on this" integrated graphics of years past.  Realistically, it will get you about 1/3 of the graphical performance of either of the cards that you're looking at.  But it's also about half of the price tag, and very low power, both at idle and at load, so you'll get long battery life.  It won't merely be 2-3 hours of battery life at idle, but could easily be triple that with an appropriate battery.  And it could be 2-3 hours of battery life under gaming loads, while not plugged in, with an appropriate battery.

    The drawback of Llano is, of course, that you lose a lot of performance.  As I said, it will get you about 1/3 of the graphical performance of the video cards you're looking at.  That means turning graphical settings down, but games will still run smoothly.  The upside is that it frees you from some of the traditional drawbacks of gaming laptops, such as the power consumption, heat output, short battery life, and high price tag.  It's something to consider, but it's not for everyone.

    -----

    Two generations ago, the GeForce 200 series cards were better for laptops than the Radeon HD 4000 series cards.  Performance per watt was comparable at load, but the Radeon cards weren't able to clock down well at idle.  The 4870 was a particularly bad offender here, as AMD hadn't yet figured out how to clock down GDDR5 memory at idle properly.

    With the transition from TSMC's 55 nm bulk silicon process node to 40 nm bulk silicon, one would theoretically expect about 40% better performance per watt, due to the die shrink.  In moving from the Radeon HD 4000 series to the Radeon HD 5000 series, AMD got about 40% better performance per watt.  Nvidia's early cards based on GT218 and GT216 didn't see any improvement in energy efficiency at all.  The GT215 did a little better than that, but not a lot.  GF100 didn't see any performance per watt improvement at all, either.  The other Fermi cards have seen an improvement of maybe 20%, but nowhere near what they should have gotten--and what AMD did get.

    Now, doing what you theoretically "should" be able to do isn't a trivial matter.  If TSMC's 28 nm HKMG process node had been ready as soon as TSMC promised, we'd probably have seen AMD launch a card on it several months ago.  As it stands now, AMD may or may not be able to do so this year at all, and Nvidia probably won't be able to.

    Furthermore, two generations ago, Nvidia had better laptop drivers.  This is mainly because AMD didn't bother to put the work into their laptop drivers, only infrequently offering updates at all.  Around the time that they launched the Radeon HD 5000 series and had clearly superior hardware for laptops, AMD decided to give laptops the same driver focus as desktops; they've been on a monthly update schedule ever since, with laptop cards getting the same bug fixes as desktop cards.

    -----

    The feature sets on the Radeon HD 6870M and the GeForce GTX 460M are comparable, as none of the features that are exclusive to one side are a big deal.  If you've got better performance per dollar, then you talk about performance per dollar.  If you've got better performance per watt, then you talk about performance per watt.  If your opponent beats you handily in both of those measures, then you tell your marketing department to come up with some other excuse for people to buy your hardware anyway.

    And then, when the best that they can come up with is CUDA, PhysX, and stereoscopic 3D, you hope that they manage to put together a presentation that confuses people enough to get a reaction other than derisive laughter.  If you happen to know of some particular program that you use right now that needs CUDA, then you need CUDA.  If you don't happen to know of some such particular program that you personally use, then not only do you not need CUDA today, but it's very unlikely that you'll ever have the slightest use for it.  If GPGPU does catch on in the consumer space in the future, it will almost certainly be via OpenCL, and not CUDA.

    As for stereoscopic 3D, that requires a 120 Hz monitor and special glasses, so you're adding several hundred dollars to the price tag right there.  And then it requires you to maintain 120 frames per second in order to look right.  The hardware you're looking at isn't even in the right league for that.  Something like a desktop with an overclocked Core i5 2500K and two GeForce GTX 570s in SLI is more suitable.  Even if you can get 120 frames per second in some games by turning graphical settings way down, that just means you've spent a bunch of money on hardware that can only run games at low or moderate graphical settings.

    And even if you do implement it properly in hardware, it's still just a dumb gimmick.  Games don't yet implement it properly, and might never do so, because what do you do with the HUD?  That's intrinsically 2D, not 3D, and it will look dumb no matter where you put it.

  • RidelynnRidelynn Member EpicPosts: 7,383

    CUDA and PhysX are just marketing ploys for nVidia more than anything else. Both nVidia and ATI support OpenCL and DirectCompute, which do roughly what CUDA does (only on more hardware than just nVidia's). And there are plenty of physics engines besides PhysX; it tends to get passed over in favor of other physics engines (Havok mainly) simply because they don't require nVidia hardware and can run on pretty well anything (especially console hardware).

    Don't get caught up in the PhysX and CUDA hype. They're mostly irrelevant for gaming, and for 99.9% of people out there, unless you have some very specific software requirements for them.

  • TanonTanon Member UncommonPosts: 176

    Thanks to everyone for the replies.

    @Doven: That is precisely why I am buying the Alienware - they look nice. Anything with considerably better performance is out of my price range, so I figured that I might as well have a great looking laptop while maintaining a decent level of performance.

    @Quizzical: Thanks for the info, especially about CUDA and 3D. That pretty much seals the deal for the 6870M.

    @Ridelynn: I hadn't read up too much about PhysX and CUDA, so I was just worried that there may be some practical application for them sometime in the near future. Thankfully, that is not the case, so the 6870 is really looking to be a much better deal now.

  • TanonTanon Member UncommonPosts: 176

    @Quizzical: I have a quick question about the discrete switchable graphics. You mentioned that I would lose performance with it (5%-10%), which I really don't want if I can get enough battery life with just the discrete card. If the laptop vendor has it enabled, can I just disable this in the BIOS to regain my performance, or will a laptop with discrete switchable graphics implemented permanently have less performance than one that doesn't?

  • faxnadufaxnadu Member UncommonPosts: 940

    I have two of those Radeons, and mostly they've been all right. I'm only disappointed in the driver policy, though; it seems like when AMD gets the graphics drivers working for the 5 series, the 6 series is lacking, and vice versa.

    And in my experience, Nvidia has always had good drivers. So it's your pick, and this is my opinion only =)

    cheers

  • QuizzicalQuizzical Member LegendaryPosts: 25,351

    Originally posted by Tanon

    @Quizzical: I have a quick question about the discrete switchable graphics. You mentioned that I would lose performance with it (5%-10%), which I really don't want if I can get enough battery life with just the discrete card. If the laptop vendor has it enabled, can I just disable this in the BIOS to regain my performance, or will a laptop with discrete switchable graphics implemented permanently have less performance than one that doesn't?

    If a laptop is set to use discrete switchable graphics, then you could probably make it keep the discrete card running permanently, but you'd still take the performance loss.

    To understand what's going on inside the laptop, think of a desktop with both integrated graphics and a discrete card.  There will be monitor ports integrated into the motherboard that are meant to use the integrated graphics.  There will also be monitor ports built into the discrete card that are meant to use the discrete card.  If you were to look at the back of the assembled machine, you'd see both sets of monitor ports, and when you go to plug in the monitor, you'd have to pick one or the other.

    If you plug the monitor into the discrete card, then it's impossible to power down the discrete card without also shutting off the monitor.  It is theoretically possible to have the integrated graphics do the computations while the monitor is plugged into the discrete card (desktops typically can't actually do this, but it could be done), but the discrete card has to remain powered up so that the completed frames can pass through it and on to the monitor.  That means that you're not getting the proper idle power savings of discrete switchable graphics.

    However, if you plug the monitor into the discrete card, then when the discrete card computes a frame, it can just pass it on directly to the monitor.  That means that you get the full performance that you're supposed to get from a discrete card.

    On the other hand, if you plug the monitor into the integrated graphics, then when the discrete card isn't being used, you can shut it down entirely.  That means that it doesn't have to use any power at all at idle.  Well, it probably uses a tiny bit, but not very much.  That gets you the full power savings that are the point of discrete switchable graphics.

    The drawback to this is that when you want to use the discrete card, it can't simply pass the completed frame on to the monitor.  Rather, it has to send it through the PCI Express connection back to the integrated graphics' frame buffer, and then let the integrated graphics send it on to the monitor.  It has to do that because that's where the monitor is plugged in.  That takes time and bandwidth, and that means a performance hit.
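
    To put rough numbers on that, here's a quick Python sketch; the usable PCI Express bandwidth figure is an assumption for illustration, not a measurement:

        # Rough estimate of the per-frame cost of copying a finished frame from the
        # discrete card back to the integrated graphics' frame buffer.
        width, height, bytes_per_pixel = 1600, 900, 4
        frame_bytes = width * height * bytes_per_pixel            # ~5.8 MB per frame

        pcie_bandwidth = 4e9                                      # assume ~4 GB/s usable
        copy_ms = frame_bytes / pcie_bandwidth * 1000             # ~1.4 ms per frame

        frame_ms = 1000 / 60                                      # ~16.7 ms at 60 fps
        print(f"copy: {copy_ms:.2f} ms, about {copy_ms / frame_ms:.0%} of a 60 fps frame")

    That lands in the same 5%-10% ballpark as above, though the real hit also depends on driver scheduling, not just raw bandwidth.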

    In a desktop, you could move the monitor cable back and forth between the monitor ports for the integrated graphics and the discrete card.  In a laptop, it might theoretically be possible, but trying to do so would probably result in breaking the laptop and voiding the warranty.  Realistically, in a laptop, unless you're a whole lot more tech savvy than I am, you're stuck with the choice that the laptop vendor has chosen.

    Now, in a desktop, you could also get a KVM switch and plug the monitor into both monitor ports.  You could then switch back and forth at the touch of a button, without having to physically unplug the monitor from one port and plug it into the other.  No laptops have actually done this, and while the software to do this could surely be written, it presumably hasn't been.  Theoretically, it would allow either the integrated graphics or the discrete card to pass a signal directly on to the monitor, without having to pass it through the other.  It would have to go through the KVM switch either way, but all that a KVM switch does is to see two incoming signals and decide which one to ignore and which to pass on to the monitor.

    I don't know how hard this would be to implement in hardware in a laptop.  It might turn out to be really expensive, and if it adds $100 to the price tag, people won't want to pay that.  It might turn out to take a lot of power, and if the KVM switch adds 10 W, then you're missing the point of discrete switchable graphics entirely, which is to reduce the idle power consumption.  It might take too much space; if it requires 10 square inches of motherboard space, do you really want that instead of, say, an extra drive bay?  Or maybe it will be done in the future.  But it hasn't been done yet.

  • TanonTanon Member UncommonPosts: 176

    Then I suppose I don't really have a choice either way. Thanks a ton for the information.

  • CastillleCastillle Member UncommonPosts: 2,679

    Doesn't ATI have their own version of CUDA called ATI Stream?

    ''/\/\'' Posted using Iphone bunni
    ( o.o)
    (")(")
    **This bunny was cloned from bunnies belonging to Gobla and is part of the Quizzical Fanclub and the The Marvelously Meowhead Fan Club**

  • QuizzicalQuizzical Member LegendaryPosts: 25,351

    ATI Stream is basically deprecated now.  OpenCL is the future.  The idea of OpenCL is that, not only can it do extremely parallel computations like CUDA, but it will also run on nearly anything:  Intel processors, AMD processors, Nvidia video cards, AMD video cards, even ARM processors.  OpenCL should theoretically make it possible to code something once and have the same source code run on everything from cell phones to supercomputers, while taking advantage of whatever hardware resources are available.
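
    As a concrete illustration of the "write once, run on whatever device is there" idea, here is a minimal sketch using the pyopencl bindings for Python (one of several ways to get at OpenCL); the kernel is deliberately trivial:

        # Minimal OpenCL example: add two vectors on whatever OpenCL device
        # the system exposes (AMD GPU, Nvidia GPU, or a CPU).
        import numpy as np
        import pyopencl as cl

        ctx = cl.create_some_context()       # picks an available OpenCL device
        queue = cl.CommandQueue(ctx)

        a = np.random.rand(100000).astype(np.float32)
        b = np.random.rand(100000).astype(np.float32)

        mf = cl.mem_flags
        a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
        b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
        out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

        program = cl.Program(ctx, """
        __kernel void add(__global const float *a,
                          __global const float *b,
                          __global float *out) {
            int i = get_global_id(0);
            out[i] = a[i] + b[i];
        }
        """).build()

        program.add(queue, a.shape, None, a_buf, b_buf, out_buf)

        out = np.empty_like(a)
        cl.enqueue_copy(queue, out, out_buf)
        print(np.allclose(out, a + b))       # True regardless of the vendor underneath

    The same source code runs unchanged whether the device underneath is an AMD card, an Nvidia card, or just the CPU, which is the portability point.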

    Now, the software support for OpenCL still has a long way to go, and honestly, isn't as far along as CUDA just yet.  But CUDA isn't far enough along to be useful to most people, either, and I'm betting that OpenCL gets there before CUDA does.  Both AMD and ARM are pushing OpenCL hard, and Nvidia and Intel are trying to make it work with their hardware, too.

    OpenCL has the advantage that it's an industry standard, so there is no vendor lock-in.  If you do something in CUDA and later want to upgrade your hardware, you're forced to move to whatever Tesla card Nvidia is offering at the time.  If it's in OpenCL and you want to upgrade your hardware later, then you've got your pick of whatever quite a few vendors happen to be offering at the time--even if the ideal product then is something that doesn't have anything analogous on the market today.

    In order for CUDA to survive, it has to not merely be better than OpenCL.  It has to be vastly better, in order to justify software vendors making programs that will only run on Nvidia video cards, and thereby locking out a large fraction of their customer base.  And CUDA has to stay vastly better than OpenCL indefinitely.  That's a tall order, and I don't think Nvidia is up to it.  I don't think any other company would be, either.

  • RidelynnRidelynn Member EpicPosts: 7,383

    OpenCL is the open, hardware agnostic version of essentially what CUDA is and what Stream was.

    Microsoft also has their own version built into DX11 (and backported to work with DX10-class hardware), called DirectCompute. It will probably catch on better than OpenCL simply because it's part of the DirectX API.

    Both OpenCL and DirectCompute are hardware agnostic (they don't care what video card they run on). DirectCompute is obviously Windows-only (since it's part of DirectX), whereas OpenCL is on anything that cares to implement the API.

    ATI Stream was sorta-kinda merged into OpenCL: ATI developed Stream in 2007, then joined the standards committee helping to develop the OpenCL standard. As soon as the first revision of OpenCL was approved, ATI/AMD dropped Stream and went fully over to OpenCL, rather than continuing to support their own proprietary language (which would have served as nothing more than a marketing bullet, the way CUDA does for nVidia).
