Here's what I'm running now:
Gigabyte Mobo (P67X-UD3-B3)
Intel i5 2500k + Cooler Master Hyper 212 EVO
8GB DDR3 Corsair Vengeance
EVGA GTX 460 1GB superclocked
60GB Crucial M4 SSD
500GB WD Caviar Black HDD
Corsair TX750 v2 PSU
Basically, I want to make sure the GTX 680 is compatible with my current setup (particularly the PSU), and to ask if anyone knows of any major problems reported with it (prone to overheating, etc.).
THIS is the one I was thinking about getting on Friday (payday, woot woot). Not to mention a free copy of BL2 is pretty enticing as well. Anyways, thanks in advance.
TJ
Comments
You're fine, so long as it doesn't have clearance problems fitting inside your case.
How are drivers for AMD (ATI?) cards? I've heard that driver support for those compared to nVidia was a little lackluster, but obviously I'm no expert on it, nor have I ever used an ATI card.
Also, are we talking a matter of a few FPS in terms of the difference between the 670 and 680? As far as I'm concerned, when running at 1920x1080, anything over 45fps (@ high-max settings) is fine for me.
ATI and nVidia drivers are blow for blow. Both update about monthly now. ATI used to be behind the curve, but that was years ago. They have been basically on par for the last 3-4 years. Both companies are far from perfect, and driver quality may vary a bit from game to game, but both are generally pretty good about getting updates out (and equally bad about initial drivers for cards based on new technology).
At 1080p, the difference between the 680 and the 670, and probably even the 660Ti, will be negligible, especially with VSync on. You're going to be above 45FPS (and probably 60FPS) in essentially every game with very high (if not maxed) settings with any of those three cards (or similarly, the 7970, 7950, and 7870 from AMD).
The main difference will be ~how long~ you will be able to maintain that. The 660Ti/7870 will probably provide very good performance, and start to drop off as games get more demanding in about 18 months -- it won't become a bad card overnight, it just won't be able to run the latest games completely maxed out anymore; you'll have to start turning down settings. The 670/7950 will get you out another year past that, and the 680/7970 probably another year past that.
And if you consider overclocking, then for all intents and purposes the 670/680 are equal (as they are the same chip, just at different frequencies, and you can overclock a 670 to be a 680). Likewise for the 7950/7970.
For most people in your situation, it makes more sense to go with the mid-level card. It provides excellent performance now, and in 18 months to 2 years, when it starts to suffer, you look at upgrading again. ~$200 now and ~$200 two years from now (for what will have updated technology and drivers), versus paying $500+ now for the same level of performance over the same amount of time, and being stuck with the same technology the entire time.
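The two upgrade paths can be sketched as simple arithmetic. A minimal sketch, using the approximate prices from the post (not current market figures):

```python
# Rough cost comparison of the two upgrade paths described above.
# Prices are the approximate figures from the post, not market data.
mid_range_now = 200    # mid-level card today (660 Ti / 7870 class)
mid_range_later = 200  # replacement in ~2 years, with newer tech
high_end_now = 500     # high-end card today (680 / 7970 class)

staggered_total = mid_range_now + mid_range_later
print(f"Two mid-range cards: ${staggered_total}")     # $400
print(f"One high-end card:   ${high_end_now}")        # $500
print(f"Savings: ${high_end_now - staggered_total}")  # $100, plus a mid-way tech refresh
```

Either way you get roughly the same span of acceptable performance; the staggered path just costs less and gets you newer silicon halfway through.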
AMD bought ATI several years ago. They continued to use the ATI brand name for a while, but now their cards are branded as AMD Radeon. The reason for the switch was moving the CPU and GPU into a single chip in some cases (which is awesome for budget laptops and completely essential for tablets), and it didn't make sense to say that part of the chip was AMD and part of the chip was ATI, especially with the differences between a CPU and GPU starting to blur.
AMD drivers are about as good as Nvidia drivers. Both will have their problems now and then, but they usually get fixed in a timely manner. Both vendors have drivers vastly superior to what Intel offers.
The GeForce GTX 670 and 680 are based on the same GK104 chip. The GTX 680 has all eight SMXes on the die active, while the GTX 670 disables one of them. The two cards have exactly the same memory bandwidth, and are largely constrained by memory bandwidth. There are probably some corner cases where the GTX 670 only offers about 85% of the performance of a GTX 680, but those are outliers, and over 90% is more typical.
The GTX670 is a great card.
Overclocking can't compensate for pieces that are fused off, unless you're going to overclock the lower card and leave the higher one at stock speeds. A 7950 disables four of the 32 GCN clusters of the 7970, and a GTX 670 disables one of the eight SMXes of the GTX 680.
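The fused-off fractions work out the same for both vendors. A quick check, using the unit counts given above:

```python
# Fraction of shader hardware left enabled on the cut-down cards
# (unit counts as stated in the post).
gtx670_smx, gtx680_smx = 7, 8    # GTX 670 disables 1 of 8 SMXes
hd7950_cu, hd7970_cu = 28, 32    # HD 7950 disables 4 of 32 GCN clusters

print(f"GTX 670: {gtx670_smx / gtx680_smx:.1%} of a GTX 680's SMXes")  # 87.5%
print(f"HD 7950: {hd7950_cu / hd7970_cu:.1%} of a HD 7970's clusters") # 87.5%
```

Note that both cut-down cards keep exactly 87.5% of the shader hardware, yet (as mentioned earlier) the GTX 670 typically delivers over 90% of a 680's performance -- consistent with the cards being largely memory-bandwidth constrained rather than shader-limited.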
On the other hand, a Radeon HD 7970 GHz Edition and Radeon HD 7970 (not GHz edition) really are the same thing except with different clock speeds. The better dies are binned out for use in GHz edition cards.
I think I'm going to go with either the 670 or 680. Seeing as how they come with a copy of Borderlands 2 which I was planning to get anyways, it makes more sense to me to go with one of those over the 7970 because of that deal.
Also, would I need to uninstall the drivers for the GTX 460 before installing new ones for the 670/680?
EDIT: Also, in the case of the 680, I see a regular version (from EVGA) and a superclocked version. I thought GPU Boost overclocked the card automatically, or am I misunderstanding something?
Theoretically no, since they are unified drivers (the same driver works with nearly every card). It wouldn't be a bad idea to uninstall the drivers for your card before you actually remove the card, then when you drop the new card in, install the latest drivers from the website.
And yes, GPU Boost does auto-overclock. The Superclocked cards have faster base clocks and higher Boost limits (they can auto-OC farther). Really, the "Superclock" amounts to not a whole lot (7% or so...). It's probably something you could do yourself, although often the premium editions of the cards have beefier power circuitry or better cooling in addition to the stock overclocks (I don't know if EVGA's does or not; they have like 15 different versions of the card).
Superclocked: 1058 MHz base clock, 1124 MHz Boost clock
Standard: 1006 MHz base clock, 1058 MHz Boost clock
That is what I meant - you can generally OC a card a tier down to compete favorably with a card one tier up at stock speeds pretty well up and down the line.
Only if
a) The files you are working with are very large (2GB+), or there are a lot of them, such that you notice the machine actually pausing or stuttering to go to the swap file
-and-
b) You have both a 64-bit OS and 64-bit version of the program you are using
You can open the Task Manager while you're using those programs and see what your memory usage actually is -- down on the lower border of Task Manager it shows your memory usage percent, and it won't actually do much with the swap file that affects your performance until that percent gets pretty high (80% and up).
For the GTX 670, yes. For the GTX 680, not necessarily. The latter is priced out of line with its performance far enough that you could make a case for a Radeon HD 7970 GHz Edition plus buying the game separately as being a better deal than the GTX 680 with the game included free.
-----
GPU Boost means that clock speeds bounce around so much that listing a stock clock speed and a GPU Boost speed is basically an exercise in making up numbers. The numbers likely have some technical meaning, but they're not a maximum or minimum--and in particular, the GPU will automatically clock itself far above the supposed GPU Boost speed in some cases. It's not that the numbers they list are wrong, so much as that there aren't really any "right" numbers that they can cite. My guess is that at the "stock" speeds, a factory overclocked version would tend to run at higher clock speeds than a card without a factory overclock, but likely by less than the amount of the nominal factory overclock.
Yeah, but if you're adding in buying Borderlands 2, then a GeForce GTX 670 that includes the game is the better value for the money.
The AMD has a larger chip die, uses more power, and runs hotter than the 680, right?
I went with the 680 too, the MSI Twin Frozr version.
I'd have gone with AMD, but it looked (not sure about the one you suggested, I'm not familiar with their numbering) like it was much larger in size, had a larger die, used more power, and probably ran much hotter than the Nvidia... Also, I've never had a single issue with an Nvidia card or driver, and I've been using them since 3dfx stopped being relevant.
In a lot of the games I play I still hear people complaining about graphical glitches etc., and they are always AMD card users, never a peep out of the Nvidia crowd.
Yes, I DO realize that AMD typically sells superior hardware, despite the lower-tech GPU... which doesn't really relate to its capabilities as much as its heat/power.
Oh, and I bought the 680 because Nvidia sells the flawed 680s, with a few parts disabled, as 670s.
Would love for AMD to really get on par, and for 3dfx to come back... would love not dealing with either buggy drivers and needless heat, or with supply issues and defective cards being sold as lesser-priced, less powerful cards...
I don't think they are defective - they work exactly as they are specified. Just about everything electronic that you buy is done the same way: CPUs, LCD panels, RAM, etc. They get manufactured, then "graded" - the top grades are the fastest CPUs, the most expensive LCD screens with the best color quality, or the RAM with the lowest latency - all sold at hefty premiums. Those that get graded in the middle of the pack are sold in the middle of the pack, and those at the lowest grade are still perfectly usable, just at lower specifications. GPUs are no different.
I don't want 3dfx to come back. Their closed Glide API was bad for the industry (I think all closed/unlicensable standards are; even though Glide was based on OpenGL, it wasn't upwards compatible). It wasn't until they were bought out by nVidia that Glide finally opened up, but by then Direct3D had finally come around and more or less forced everyone to standardize around that - which locked the OS in to Microsoft, but at least opened up the hardware to any manufacturer willing to develop to the standard.
Yes, a Tahiti die is substantially larger than a GK104 die. They're 365 mm^2 and 294 mm^2, respectively. The main reason to care about die size is that they tell you something about cost of production. If one GPU chip has twice as large of a die as another, then you can only fit half as many dies in a wafer. TSMC charges the same price for a wafer no matter what dies you put on it. Smaller dies mean cheaper cost of production, and that tends to get passed on to consumers in cheaper prices to buy the card.
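The dies-per-wafer argument can be roughed out numerically. A back-of-the-envelope sketch, assuming a 300 mm wafer and ignoring edge loss and defects (the die areas are the figures above):

```python
import math

# Very rough dies-per-wafer estimate for a 300 mm wafer, ignoring
# edge loss and defects. Die areas are the figures from the post.
WAFER_DIAMETER_MM = 300
wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2  # ~70,686 mm^2

for name, die_area_mm2 in [("Tahiti (HD 7970)", 365), ("GK104 (GTX 680)", 294)]:
    print(f"{name}: ~{wafer_area / die_area_mm2:.0f} dies per wafer")
```

That works out to roughly 194 Tahiti candidates versus roughly 240 GK104 candidates per wafer -- about 24% more GK104 dies for the same wafer price, before yield is even considered.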
But die size is a bigger deal in lower end products with tighter margins. The reason that Nvidia hasn't had a competitive video card in the sub-$150 market in about three years is that they needed much larger dies to match AMD in performance. If your competitor can make a widget for $10, and it costs you $20 to make an identical widget, that's not such a problem if the widgets sell for $100 each. If your competitor sells widgets for $19 each, you're in deep trouble.
A Radeon HD 7970 does use a fair bit more power than a GeForce GTX 680. It's not the die size, really; it's just a less efficient chip. The success of Pitcairn and Cape Verde prove that AMD knows how to make GCN more efficient; presumably they haven't bothered with Tahiti because by the time the respin was ready, it would be nearly time to discontinue the card.
If you mean running hotter in temperature, that's largely a function of the cooler. Even a Radeon HD 7970 GHz Edition can run cool if you put a good enough cooler on it--and some board partners have done exactly that.
That's because iBuyPower's wattage requirements are completely stupid. A good quality 500 W power supply has plenty of power for the original poster's rig.
I was just using them as the high mark. I thought he said he had a 750w, which would mean he is definitely in the clear.
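A rough load estimate backs this up. A minimal sketch, using approximate published TDP figures (not measurements; the motherboard/drive/fan numbers are ballpark assumptions):

```python
# Back-of-the-envelope peak-draw estimate for the original poster's
# rig with a GTX 680. All figures are approximate TDPs, not measured.
loads_w = {
    "i5 2500K":        95,   # Intel's published TDP
    "GTX 680":        195,   # Nvidia's published TDP
    "motherboard/RAM": 50,   # ballpark assumption
    "SSD + HDD":       15,   # ballpark assumption
    "fans/misc":       20,   # ballpark assumption
}
total = sum(loads_w.values())
print(f"Estimated peak draw: ~{total} W")            # ~375 W
print(f"Headroom on a 750 W PSU: ~{750 - total} W")  # ~375 W
```

Even with generous margins for overclocking and capacitor aging, a TX750 is roughly double what this build needs, which is why a good 500 W unit would also be fine.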
I've seen a few benchmarks for specific games where the 670 actually beat out the 680 by a few frames in a heavily overclocked scenario. Not saying the 680 is bad, but the price/performance gap isn't really worth the extra 100-150 odd bucks, depending on model/make.
If you have the coin, I would go for two 670s in SLI if you're bent on sticking with an NVIDIA card -- a much better price/performance ratio. While it's pretty overkill unless you're running at a massive resolution, I get a kick out of seeing ridiculous framerates while recording/streaming.