You'll have to ask them about their policies on setting stock speeds. I'm currently running a 5% OC with voltages at stock (and that's not even the maximum). If you don't fiddle with POWER settings, there is NO extra power draw.
Check your GPU clock speed. I mean now, while reading the forums, not playing a game. Go on and do it; this post can wait.
...
There, done? It's a whole lot lower than the clock speed you set it to, isn't it? If there's no drawback to higher clock speeds, then why does the driver clock it down at idle?
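(If you'd rather script the check than open a monitoring tool, here's a minimal sketch for NVIDIA cards. It assumes the nvidia-smi utility that ships with the driver is on your PATH; AMD cards need a different tool.)

```python
# Minimal idle-clock check via nvidia-smi (ships with NVIDIA's driver).
# Assumes nvidia-smi is on PATH; AMD cards need a different utility.
import subprocess

out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=clocks.gr,power.draw",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print("graphics clock, power draw:", out)
```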
Higher clock speeds mean higher power consumption. This is just part of how integrated circuits work. The rule of thumb is that power consumption is proportional to clock speed times voltage squared. There are a lot of reasons why this isn't quite right, but all else equal, higher clock speeds do inevitably mean higher power consumption.
Overclocking is usually fairly safe in the "won't damage the card" sense if you don't tweak voltages, though it can make the system unstable if you go too far. Of course, with modern parts varying voltage along with clock speed, I wouldn't be surprised if overclocking changed the voltages even if you didn't intend it to. It's also possible that overclocking interferes with the chip's ability to reduce voltages when lower clock speeds are appropriate.
Now it's your turn: check what VOLTAGE your GPU IDLES at.
There, done?
Higher clock speeds DO NOT mean higher power consumption.
Higher VOLTAGE means higher power consumption. You can easily idle at your idle clocks at FULL voltage. It doesn't make sense, but you can; your consumption does not magically drop JUST because you lowered the speed.
You have full control over VOLTAGE in this day and age. It may be locked on some cards and unlocked on others.
You should really try it some time and see how it works. Saying these things... really. Here:
http://gaming.msi.com/features/afterburner
and play around a bit. It works for all GPUs. You can monitor whatever you want, even voltages.
I'm not sure I'm going to attempt any OC on the graphics card.
I might give it a try on my CPU in the future, considering I have water cooling, but nothing else.
The thing with OCing is that it's good for getting an extra year out of your card if you know what you are doing. Buying a card while planning to OC it right away suggests you bought too slow a card.
OCing will decrease the lifespan of the card; how much depends on the cooling (with water cooling it should be minimal, though). A CPU usually isn't a problem; a 1% decrease in lifespan or similar isn't so bad (if you have good cooling). With GFX cards the decrease is higher.
The CPU is generally easier to OC than the GPU, and with water cooling I don't see any huge problem (I have had several systems with pretty highly overclocked CPUs and never had any problems with them). Be careful if you plan to overclock the RAM, though; it is trickier.
My advice is: clock up the CPU 10% or so (avoid more than 20%, water cooling or not), and if you are going for more than 10%, do 10% first and check the temperature for a day or two before increasing it further.
Stay away from the RAM, and wait on the GPU until you've had it for 2 or so years, and only OC the GFX card's RAM if it has a heatsink covering it (not all GFX cards have this). This will give you an extra year on the GPU, and the fact that it shortens the lifespan really won't matter by then.
This 290 has superb cooling. And no, a reasonable OC won't drastically reduce lifespan. An extreme one might, but it's still questionable whether it would reduce it below the GPU's useful life. Anyway, in other news:
http://wccftech.com/asus-strix-fury-unlocked-fury-x-4096/
And I say it once again: if you need to OC a new GFX card, you bought too slow a card. You OC a GPU so you can use it another year (a GPU is usually good for 2 years before it's dated, 3 with OCing). If you need to OC it when it's new, you'll need a new, faster one way earlier, and then you might as well get a faster one from the start.
An R9 Fury or a GTX 980 would be a better choice there. Both will run circles around an OCed 290.
Maybe someone doesn't have money to spend? Maybe someone doesn't want to spend so much on a GPU? Maybe someone bought it JUST to OC?
I don't know if you're aware, but the vast majority of people don't want to spend $500+ on a GPU.
And why wouldn't they get a bit more performance out of what they have? If the GPU is not faulty, it will outlive its usefulness even with an extreme OC. Same with the CPU. Do you REALLY care if your CPU's lifespan decreases from 20 to 15 years?
"Higher clock speeds DO NOT mean higher power consumption."
Actually, yes it does.
Power consumption = silicon capacitance * switching frequency * square of the voltage input (P ≈ C · f · V²)
Silicon capacitance depends mostly on the process node, with small fluctuations due to the manufacturing process. Voltage does cause power consumption to go up much faster, since it enters as a squared term, but frequency also contributes linearly to power draw.
Otherwise, you could overclock a chip without an increase in voltage (which is very possible on most dies, both CPU and GPU) and never see a rise in temperatures - which isn't the case.
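To make that scaling concrete, here's a back-of-the-envelope sketch; the capacitance figure is a made-up placeholder, not any real chip's value:

```python
# Back-of-the-envelope dynamic power model: P = C * f * V^2.
# C below is an illustrative placeholder, not a real chip's capacitance.

def dynamic_power(c_farads: float, freq_hz: float, volts: float) -> float:
    """Dynamic switching power in watts."""
    return c_farads * freq_hz * volts ** 2

C = 1e-9                               # 1 nF effective switching capacitance
stock = dynamic_power(C, 1.0e9, 1.20)  # 1.0 GHz @ 1.20 V
oc    = dynamic_power(C, 1.1e9, 1.20)  # +10% clock, voltage untouched
oc_v  = dynamic_power(C, 1.1e9, 1.30)  # +10% clock plus a voltage bump

print(f"stock:              {stock:.2f} W")
print(f"+10% clock:         {oc:.2f} W ({oc / stock - 1:+.0%})")      # +10%
print(f"+10% clock, more V: {oc_v:.2f} W ({oc_v / stock - 1:+.0%})")  # ~+29%
```

In this toy model, the clock-only bump tracks the frequency increase one-for-one; it's the voltage bump that makes power balloon.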
You didn't OC much in your life, did you?
I'm interested in practical application, not theory. Even in Intel's documentation there's NO mention of JUST frequency (since it's tied to VOLTAGE).
When you can only measure something as a statistical value because it's so insignificant, you can IGNORE it. In other words, to OC the frequency enough to actually matter, you will have to raise the voltage waaaaaaaaaay before that (and it raises power as a square). Similarly, lowering frequency to save power is POINTLESS if you do not lower voltages at the same time. That's why it's called UNDERVOLTING, not underfrequencing.
VOLTAGE, otoh, as nicely shown by that square, has a significant impact.
So no, OCing without fiddling with voltages won't increase power consumption (theoretically it will, by an insignificant amount). They do NOT set frequencies SO low. And that answers Quizz's question about why professional cards are clocked lower - the assumption is that those cards will work at full load for long periods, and lowering voltages can lower power consumption significantly.
I challenge you to run a test on your GPU with at least 3 data points, setting clocks 100MHz-150MHz apart without fiddling with voltages, and post your results on both power draw and heat.
That's the same as my post above - do you REALLY care if your average full-load temperature goes up by 0.1 degrees?
What's funny is that you're all both right and wrong.
I'll just remind you guys to take into account the difference between 2D and 3D rendering on the GPU depending on whether a 3D application is running, and the different states the more recent GPUs have (boot, performance, UVD, etc.).
Oh, and Malabooga is slightly right in that when core and memory clocks get reduced to, for instance, 300/150, the voltage needs to be reduced as well for any significant savings.
And Quizzical is slightly wrong in that voltage actually does almost always get reduced when the clocks are low for idle/UVD states.
And Ridelynn is slightly wrong, because he overestimates the impact that solely raising clocks will have on power consumption and/or heat emission (it's extremely dependent on a variety of other factors and is not constant), and how all of that combined will influence the temperatures of both the chip and the VRMs.
And Malabooga is slightly wrong, because he underestimates the impact that solely raising clocks will have on power consumption and/or heat, and how all of that combined will influence the temperatures of both the chip and the VRMs.
No, I'm pretty well right. If you increase clocks by 10%, you can expect a >=10% increase in power consumption. The frequency part of the equation is fixed and linear.
13lake is correct in stating there are other effects to consider. The percent power increase will always be at least equal to the percent clock increase, plus those additional factors, which are pretty variable.
That's not insignificant, particularly if you are looking for a significant overclock amount. I don't think I overestimated anything.
And considering the impact of power consumption on heat generation... that's pretty well dealt with by the First Law of Thermodynamics, and I won't go into that one any further than that.
Here's some educational material, if you want to see the actual equations, from some pretty reputable sources:
http://www.ruf.rice.edu/~mobile/elec518/readings/DevicesAndCircuits/kim03leakage.pdf
http://www.ti.com/lit/an/scaa035b/scaa035b.pdf
http://cis.poly.edu/cs2214rvs/powers03.htm
http://www.cse.psu.edu/~mdl/paper/Iccd02pmod.pdf
https://en.wikipedia.org/wiki/Thermal_design_power
Can you produce the "Intel documentation" that says frequency has no effect?
Especially since I just linked several documents that very clearly state it does, and the Penn State document in particular was co-written by not one, but two Intel engineers.
Sure, you can undervolt a chip. You can also underclock a chip... You can do both at the same time, and they usually call that dynamic power management in modern silicon. I don't see what your point is in even mentioning that.
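As a rough illustration of why dynamic power management drops both knobs together, here's the same hypothetical P = C · f · V² model with made-up idle/load numbers:

```python
# Rough DVFS illustration with the same hypothetical P = C * f * V^2 model.
# All figures are made-up placeholders, not measurements from a real card.
C = 1e-9  # placeholder effective switching capacitance (farads)

load            = C * 1.0e9 * 1.20 ** 2  # 1.0 GHz @ 1.20 V under load
idle_clock_only = C * 0.3e9 * 1.20 ** 2  # clocks dropped, voltage left alone
idle_dvfs       = C * 0.3e9 * 0.85 ** 2  # clocks AND voltage dropped

print(f"load:              {load:.2f} W")
print(f"idle, clocks only: {idle_clock_only:.2f} W")  # already ~70% lower
print(f"idle, clocks + V:  {idle_dvfs:.2f} W")        # remainder roughly halved
```

Even in this toy model, dropping the clock alone saves real power, and dropping the voltage with it cuts what's left roughly in half again.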
You can link all the formulae and theory you like; practical application says differently.
The "increase of heat and power consumption" from OCing without touching voltages is insignificant.
I TOLD you to check it out yourselves. But nooooooooo.
I don't really care about you; as far as I'm concerned, just to be safe, force idle clocks/voltage on your CPU/GPU JUST to be sure that it's safe and that it lasts for eternity. But you're misinforming other people.
It's not like every manufacturer has its own OC suite, because that would be VERY harmful. Oooops:
http://www.sapphiretech.com/catapage_tech.asp?cataid=291&lang=eng
http://www.hardocp.com/article/2015/08/11/gigabyte_gtx_980_ti_g1_gaming_video_card_review/4#.VnH4tOKja70
http://gaming.msi.com/features/afterburner
and a little overview (from 5 years ago lol):
http://wccftech.com/article/gpu-overclocking-utilities/
You don't want to... that's fine. But spreading around such nonsense... I'm really rofling.
The best-OCing GPU I had was the MSI GTX 460. From reference clocks of 675 to 960. And people claimed Fermi "can't be cooled" or w/e. Well, they were WRONG lol:
http://www.guru3d.com/articles_pages/msi_geforce_gtx_460_hawk_review,20.html
Some even went as far as 1 GHz.
This one wasn't bad either (also had one for a while, but it wouldn't go over 917 ;P):
http://www.guru3d.com/articles_pages/msi_geforce_gtx_460_cyclone_oc_1024mb_review,19.html
And all that was 24/7. Both are still alive and kicking. So please tell me more about how bad it is and how it will kill my grafiks.
That performance was waaaaaaaaaaaay above a GTX 470, and both of those cost only marginally more than a standard 460 and waaaaaay less than a 470. Too bad that doesn't happen anymore today. Well, unless you're one of the lucky ones who got one of those Asus Furies that unlock.
How did you go from "frequency has no effect on power use" (which is flat wrong) to "But I'm right because I own a GTX 460 that would overclock", plus something about overclocking suites, which is pretty incomprehensible?
Totally lost here. Pretty sure I did check for myself, and provided some links for other people to do their own reading and come to their own conclusions.
The articles you link have dynamic power management - they are adjusting both frequency and voltage. So, umm... I don't know what you're trying to show there. The MSI 460 article you link to does mention power draw, but not with respect to overclocking - you just get an idle and a loaded measurement, and it doesn't say much.
If you want to take a 4GHz CPU, overclock it 100MHz, and call that an insignificant increase in power, OK, but I'd also say that's an insignificant increase in clock speed (although I still contend you did in fact increase the power draw by at least 2.5%). But if you take a chip from 1GHz to 2GHz without any change in voltage, you have at least doubled the power requirement.
You've very much confused operating temperature with power draw. The two are related, but not synonymous.
No one is denying that voltage has a pretty big effect on power consumption. Nor is anyone denying that modern hardware that decreases the clock speed at idle to save on power consumption tends to also decrease the voltage. The question is whether clock speed in itself also affects power consumption.
As I said, test it and report back; until then, you can discuss theory with whomever is willing for as long as you want. Not with me, though.
All that stupid physics and math are useless. They don't apply to the real world.
Once you understand that something can be insignificant and rightly ignored in practical application, you'll be all the wiser.
If you guys went to any form of higher education in technical sciences...
Yeah, in theory, if you had something that could increase frequency indefinitely without raising voltages. As I said, I'm not interested in theory; I have had many cards, and I have one right now, so theorizing about it is hilarious. But as I said, I DARE you to test it yourself. And you don't want to. And for a good reason, rofl.
Less theorizing about "what would happen" and more real-world results.
Don't be scared, you won't fry your card. In fact, nothing spectacular will happen.
It's not theory, it's practical electrical engineering. For x amount of increase there is y amount of increase in voltage or current. It is practical application described by math. The result of that increase is heat, because you're shoving more energy through.
If the increase is insignificant, so are the heat byproduct and the performance gain. If the increase is significant, so are the performance gain and the heat. Every system has a cyclic life expectancy based on rated use (how the mechanism is typically used within design parameters). If you raise the stressors on the system, the life expectancy will be lowered. Practically speaking, this will vary somewhat from component to component, because no two components are exactly the same.
Those aren't just theories. They are theories applied practically - that's called engineering.
The bottom line: OCing a component will reduce its lifespan compared to not OCing it. How much life is lost depends on how hard that component is driven.
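For a feel of the numbers, here's a toy sketch of the common "every 10 C roughly halves component life" rule of thumb; both the rule and the baseline temperature are generic assumptions, not figures from this thread:

```python
# Toy lifespan model using the common "every 10 C roughly halves life"
# rule of thumb. Baseline and behavior are illustrative assumptions only;
# real components vary widely.

def relative_lifespan(temp_c: float, baseline_c: float = 70.0) -> float:
    """Life relative to the baseline temperature, halving per +10 C."""
    return 0.5 ** ((temp_c - baseline_c) / 10.0)

for t in (70, 75, 80, 90):
    print(f"{t} C -> {relative_lifespan(t):.0%} of baseline life")
```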
You don't know the first thing about theory and practical application. Mentioning engineering in that piece of yours puts engineers to shame.
You still want to theorize about something where there are tens, even hundreds of millions of exact results that contradict you.
You keep telling us to try it and prove ourselves wrong using the "real world". Funny thing: I've overclocked a lot of stuff, and it pretty well does what I said it does. But I don't feel the need to put that into a spreadsheet just so you can ignore it like you have everything else.
I think you need to find something published that proves you're right. So far, all you have produced are some reviews of ancient hardware that talk about temperature and have nothing to do with power consumption, and some links to random overclocking utilities that have nothing to do with anything.
Or you could just keep digging that hole you're in, which is fun to watch. What's your background in higher education in the technical sciences that allows you to preach down to us, the lowly uneducated, anyway?
So before my new GPU arrives, I'd like to troubleshoot whether the problem is the current GPU or something else.
As I said in the bump post about the GPU, I started getting driver crashes while playing games. At first it was manageable, then it got worse, crashing and then recovering every couple of minutes while playing Dead Space 3, and while it lessened with some driver update, today I witnessed something different.
I had Chrome open with a YouTube video tab and was playing Hearthstone. After like an hour of playing, the game started the driver crashing-but-recovering business, and I kept on playing. Then, after it crashed another 2-3 times in the span of an hour, I got a notice that driver access from Google Chrome was going to be blocked, and the whole system kept freezing every second until I restarted.
Now, the CPU, motherboard and RAM are quite new (2014), and I'm not sure if it could be the power supply or hard drive. I'm no expert, but it doesn't seem plausible to me.
So the question is: is the graphics card/driver the most probable culprit, or could it be something else too?
Thanks
I hate to say it, but with those symptoms it could be anything. I'd agree that the GPU seems the likely culprit, but the only way you're going to be able to tell is to pull it and test with another GPU (even if that's an integrated GPU).
A clean Windows install would rule out any corrupt drivers.
There is a chance that if it's your power supply, the power supply has broken something else (RAM or the GPU are most common) and that something else is what is causing the crashes; if you replace the broken part, it will work fine for a while, until your bad PSU zaps the new part again. Those cases are extremely hard to troubleshoot unless you've already been through 2-3 video cards or DIMMs.
When I get the new GPU, I intend to do a format before upgrading. Before upgrading to Windows 10, I'll try launching Hearthstone or some other game and see whether formatting alongside the new GPU helped or not.
I don't have any card I can swap in to check, sadly, so I'm mostly not playing anything that seems heavy on the card. (Hearthstone is very badly optimized, imo.)
Seems like the Nvidia driver is the cause of the crashing. After doing these steps (link: - Warning loud volume) provided by my friend's brother, who contacted Nvidia, I haven't had the driver crashing/recovering since.
Now, I don't know if a clean install of an older driver was what made me able to play games again, but I'm not going to attempt updating to the latest driver.
Nvidia's drivers lately have been a pile of turds over and over.
Yup, Nvidia's drivers are as bad as ever; nothing's changed since I had the GTX 460 - keep 10 different driver versions and install "insert driver version" for specific games.
And then these geniuses spam about how Nvidia has "superior drivers"...