

Basic Gaming Rig. Looking for advice.


Comments

  • bobbymo (Uncommon Member, Posts: 48)

    Originally posted by Quizzical

    Looks like the forum ate my post, so I'll try again:

    You found two reviews that are over five years old.  One only pulled 120 W from the power supply, so that review is completely useless.  The other didn't even measure ripple, and returned values on power factor and energy efficiency that are just awful by modern standards.  That's the best you could do?  That's basically conceding my point.

    -----

    An Athlon II processor has two 64-bit memory channels, each of which can be broken into two 32-bit connections.  If you fill all four memory slots it does this, so you have four 32-bit connections.  In your proposed setup, two of those go to a 4 GB module, and two to a 2 GB module.  Thus, you have 1/3 of the memory crammed into 1/4 of the bandwidth.  You can argue that an Athlon II X3 is slow enough that it won't matter that much, and it might be worth doing if you really need the extra memory capacity later.  But it's completely stupid to plan up front to do that.  Every motherboard or memory manufacturer would advise matching the channels properly.  If you don't believe me, try to find a memory kit that has modules of different sizes.

    -----

    If you use two video cards in CrossFire, they have to communicate with each other, not just the processor or system memory.  A CrossFire bridge helps with this somewhat, but it can't do everything.

    Consider that Nvidia will completely disable SLI through their drivers if the two cards aren't both in PCI Express slots with at least x8 bandwidth.

    No one said you had to crossfire, nor do you have to add memory down the road.  These are just options. 

    You were wrong about CrossFire with a x16 and a x4 slot.  You were wrong about dual channel memory and about adding two 4 GB modules down the road slowing the system down.  I myself would only add matching modules, but to say that adding 8 GB would make it slower is just plain wrong.  You are now wrong about SLI being disabled if the cards aren't in x8 or higher slots: http://www.overclock.net/nvidia/819348-16x-16x-vs-16x-4x-gtx.html

     

    You are wrong about the power supply as well.   You really shouldn't be giving advice on system building when you don't understand the fundamentals.

  • Quizzical (Legendary Member, Posts: 22,078)

    The motherboard of your link has two x16 slots.  Find a modern motherboard with only one x16 slot and one x4 slot and SLI enabled, if you can.  Nvidia's driver restrictions on SLI are by motherboard, not by what you do with it.

    Results of restricting to x4 bandwidth will vary greatly from one game to another.

    So the best you can do on the power supply is to just say "you're wrong", without any arguments to back it up?

  • bobbymo (Uncommon Member, Posts: 48)

    The mobo does have two x16 slots.  It also has a x4 slot.  He tested both x16/x16 and x16/x4.  SLI was not disabled like you said it would be.

    You have been wrong several times in your posts.  Enough so that when you say the HEC power supply  I listed is not suitable for a Budget Gaming Rig, you have no leg to stand on.  Your misconceptions on dual channel memory, crossfire, and SLI make your statements about power supplies less than reliable.

  • Quizzical (Legendary Member, Posts: 22,078)

    http://www.tomshardware.com/reviews/pcie-geforce-gtx-480-x16-x8-x4,2696-5.html

    As I said, the effects of having only x4 bandwidth vary greatly by game.  In that one, it cuts the frame rate nearly in half.  And that's just for a single card.

    Or try this:

    http://www.tomshardware.com/reviews/p55-crossfire-nf200,2537-3.html

    3-way CrossFire performing markedly worse than 2-way, unless the motherboard has adequate PCI Express bandwidth for the third card.

    Will CrossFire work with a second card in an x4 slot?  Sometimes it will.  Sometimes it won't help much, even when CrossFire would otherwise work.  Given a choice between a system that sometimes works and one that always works, which would you pick?

    Remember, the motherboard isn't the only thing restricting CrossFire in your build.  Two video cards and a processor is an awful lot of heat to extract from a case using only a single 80 mm fan, as your build proposed.

    -----

    Your claim about my being wrong about mismatched memory channels is based on what?  Because you said so?  You really think that cramming 1/3 of the memory into 1/4 of the bandwidth is just as good as splitting it evenly?  Now, as I said, the processor is slow enough that it won't completely cripple the system.  Maybe it will be more like running the memory at 1066 MHz rather than 1333 MHz--which in a lot of cases, won't make a bit of difference.  But it's not something that you'd want to plan on from the outset if it's not necessary.
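    For scale, the 1066 vs 1333 MHz comparison above can be put in rough numbers.  A quick sketch of theoretical peak DDR3 bandwidth (illustrative only; real-world throughput is lower, and the function name is made up for this example):

```python
# Theoretical peak bandwidth of a DDR3 setup: transfers/s * bus width * channels.
# "1333 MHz" in DDR marketing means 1333 MT/s (millions of transfers per second),
# and each transfer moves 64 bits (8 bytes) per channel.

def ddr3_bandwidth_gbs(transfers_mts, channels=2, bus_bytes=8):
    """Peak bandwidth in GB/s for a DDR3 configuration (illustrative)."""
    return transfers_mts * bus_bytes * channels / 1000.0

print(round(ddr3_bandwidth_gbs(1333), 1))  # dual-channel DDR3-1333: ~21.3 GB/s
print(round(ddr3_bandwidth_gbs(1066), 1))  # dual-channel DDR3-1066: ~17.1 GB/s
```

    The gap is about 20%, which is why it only matters in workloads that are actually bandwidth-limited.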

    Memory capacity is largely a case of, either you have enough or you don't.  If you don't have enough and getting enough requires mismatching channels, then sure, you go ahead and do it.  If you do have enough memory, then adding more won't help at all.

    ------

    I'm still waiting on you to find a modern motherboard with only one PCI Express x16 or x8 slot that does support SLI.  You've asserted that I'm wrong about that, so there must be a motherboard out there.  Right?

    ------

    Maybe the HEC power supply that you linked is merely mediocre, but not terrible.  Why would you get it over a cheaper power supply that you know is good, like this:

    http://www.newegg.com/Product/Product.aspx?Item=N82E16817371035

    A few minutes of searching will find quite a few favorable reviews of various Antec power supplies, including a number from their EarthWatts line.  For comparison, you were unable to produce even a single favorable review of an HEC power supply from either within the past five years or from a reputable power supply review site.

  • bobbymo (Uncommon Member, Posts: 48)

    Originally posted by Quizzical

     

    12 GB would give you mismatched memory channels, and possibly worse performance than staying at 4 GB.  8 GB won't have that problem, but it's better to have 8 GB in two modules than four.  That's less power, less heat, less stress on the memory controller, and retains room for future upgrades.

     

    You said it would give you mismatched memory channels.  This is wrong.  It would be better to have balance, but you would not have a mismatch.  Mismatched memory channels are when you have, say, a 1 GB stick and a 2 GB stick on a dual-channel mobo.

     

    We are not talking about a GTX 480 in SLI.  We are talking about the ability to CrossFire an HD 5770, which you can do, which will give excellent results, and which will suffer almost no loss in performance from a x4 PCIe slot.  The scaling with a x4 slot has less to do with the game, as you keep saying, and more to do with the speed of the memory on the card.  An HD 5770 uses 128-bit GDDR5, so the x4 PCIe slot has little effect on it.  You can see a jump when you move up to a 5870 with 256-bit memory on a x16 and a x4.  Even more so with the 384-bit GTX 480, which by the way costs over $400.

     

    I don't care if there is a modern motherboard that supports SLI with 16x and 4x.  You said "Consider that Nvidia will completely disable SLI through their drivers if the two cards aren't both in PCI Express slots with at least x8 bandwidth."  Again, this is not true.  http://www.overclock.net/nvidia/819348-16x-16x-vs-16x-4x-gtx.html

     

    The Antec power supply you linked would be a fine alternative to the HEC.

  • Quizzical (Legendary Member, Posts: 22,078)

    Originally posted by bobbymo

    I don't care if there is a modern motherboard that supports SLI with 16x and 4x.  You said "Consider that Nvidia will completely disable SLI through their drivers if the two cards aren't both in PCI Express slots with at least x8 bandwidth."  Again, this is not true.  http://www.overclock.net/nvidia/819348-16x-16x-vs-16x-4x-gtx.html

    Sorry, I misspoke there.  The forum didn't take my post, so I tried to retype it quickly, and misspoke.  What I meant is what I have said several times since then:  Nvidia disables SLI on some motherboards through their drivers.  If a modern motherboard doesn't have multiple PCI Express 2.0 x8 or x16 slots that it can use at x8 or better bandwidth simultaneously, then Nvidia will disable SLI for that motherboard.

    -----

    The memory bandwidth on a card has nothing to do with the bandwidth of a PCI Express slot.  Which is good, because even a Radeon HD 5770 has 76.8 GB/s of bandwidth, while even a PCI Express 2.0 x16 slot only has 8 GB/s of bandwidth.  Your point might be that more powerful cards will tend to have more memory bandwidth and also to use more PCI Express bandwidth.  But even so, that doesn't eliminate the loss in performance.  On a single GTX 480, dropping from x16 bandwidth to x8 brings a considerable loss in performance, too.
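    Those bandwidth figures are easy to reproduce.  A quick sketch, assuming the commonly published specs (GDDR5 at 4.8 Gbps effective per pin on the 5770, and PCIe 2.0 at 5 GT/s per lane, which works out to 500 MB/s per lane per direction after 8b/10b encoding); the function names are made up for this example:

```python
# Card-local memory bandwidth vs. the PCI Express slot feeding the card.

def gddr5_bandwidth_gbs(effective_gbps_per_pin, bus_width_bits):
    """GB/s across the card's own memory bus (bits -> bytes)."""
    return effective_gbps_per_pin * bus_width_bits / 8

def pcie2_bandwidth_gbs(lanes):
    """GB/s per direction for a PCIe 2.0 link: 500 MB/s per lane."""
    return 0.5 * lanes

print(round(gddr5_bandwidth_gbs(4.8, 128), 1))  # HD 5770 local memory: ~76.8 GB/s
print(pcie2_bandwidth_gbs(16))                  # x16 slot: 8.0 GB/s each direction
print(pcie2_bandwidth_gbs(4))                   # x4 slot: 2.0 GB/s each direction
```

    So even a full x16 slot carries roughly a tenth of the card's local bandwidth, which is why the two numbers are unrelated in practice.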

    -----

    If you want to quibble and say that technically, if you're using four memory modules, a memory channel is a pair of 32-bit connections to separate modules, then fine.  The two channels are matched.  But the connections are not, and you're still cramming twice as much memory across one connection as across another.

    -----

    You're still insisting that the HEC power supply is good, basically because you said so.  And ignoring that there is plenty of evidence against it, such as that they won't even tell you what it's rated at in total on the +12 V rails.  That's not the behavior of a reputable power supply company.  And it's not even 80 PLUS certified, which is the absolute minimum for a power supply to be kind of all right.

    The Antec power supply that I linked isn't merely a decent alternative to the HEC one.  It's probably vastly better in nearly every way that matters, with the lack of an included power cord being the only real exception.

  • bobbymo (Uncommon Member, Posts: 48)

    I am insisting that the HEC power supply isn't bad simply because you said it was.  

     

    You keep changing the subject to the GTX 480.  The 5770 suffers almost nothing from a x4 slot when used in CrossFire.

     

    "Sorry, I misspoke there"......http://www.ehow.com/how_2198818_admit-wrong.html

     

    "If you want to quibble and say that technically, if you're using four memory modules, a memory channel is a pair of 32-bit connections to separate modules, then fine."

    I'm not quibbling,  it is a pair.

  • Quizzical (Legendary Member, Posts: 22,078)

    Originally posted by bobbymo

    I'm not quibbling,  it is a pair.

    If you have two 2 GB modules and two 4 GB modules, then there is certainly something mismatched there.

    See if you can find a memory kit with modules of different sizes.  Surely if one went purely by capacity, there would be people who think that 6 GB is enough but 4 GB is not, or that 12 GB is enough but 8 GB is not.  Surely memory manufacturers know this.  Why are there no memory kits of mixed sizes for sale?  You can get kits with four 4 GB modules, or four 2 GB modules, or two 4 GB modules, or a variety of other combinations of identical modules.  Could it be that mismatching the memory modules will hurt performance in applications where memory bandwidth is a meaningful limitation?

    "You keep changing the subject to the GTX 480. The 5770 suffers almost nothing from a x4 slot when used in CrossFire."

    I'm changing the subject to systematic benchmarks that someone actually ran, as opposed to random posts on a forum that don't explain the methodology and don't even compare to a single card to show whether CrossFire works at all in his system.

    "I am insisting that the HEC power supply isn't bad simply because you said it was. "

    Proof by assertion and ignoring all evidence, apparently.

  • noquarter (Member, Posts: 1,170)


    Originally posted by Quizzical

    Originally posted by bobbymo
    I'm not quibbling,  it is a pair.
    If you have two 2 GB modules and two 4 GB modules, then there is certainly something mismatched there.

    I didn't read the whole thread, so maybe I haven't followed this enough, but you can pair 2x2GB and 2x4GB in the same mobo and have them running in dual channel. All that matters is that the 2 sticks in each pair match; the 2 pairs don't need to match. You just have to be careful which color-coded slots you put them in, to make sure the 2GB gets paired with the 2GB and the 4GB gets paired with the 4GB.


    So you actually can have 6GB or 12GB running in dual channel; it's just not usually cost-efficient to build it that way at the start. You can certainly end up there after an upgrade, though. But I'd still rather run 2x2GB (4GB) instead of 2x2GB + 2x1GB (6GB) just to keep 2 slots free, unless I really needed 6GB. Memory controllers are more stressed running 4 sticks over 2 and usually drop the command rate on the memory timings to compensate.


    Also some Intel chipsets are capable of operating mismatched memory modules in combined dual channel + single channel. So if you have a 2GB + 4GB module the first 2GB of each module will be paired in dual channel, and the remaining 2GB on the 4GB module will be running in single channel. Better than nothing but not ideal.
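    That combined dual + single channel split is easy to model.  A hypothetical sketch of the behavior described above (the helper name is made up, not any real chipset API):

```python
# Model of mixed-mode dual channel with two mismatched modules: the overlapping
# capacity runs dual-channel (interleaved across both modules), and whatever is
# left over on the larger module runs single-channel.

def flex_mode_split(module_a_gb, module_b_gb):
    """Return (dual_channel_gb, single_channel_gb) for two modules."""
    dual = 2 * min(module_a_gb, module_b_gb)   # mirrored portion, interleaved
    single = abs(module_a_gb - module_b_gb)    # remainder on the larger module
    return dual, single

print(flex_mode_split(2, 4))  # 2 GB + 4 GB: 4 GB dual-channel, 2 GB single-channel
print(flex_mode_split(4, 4))  # matched pair: all 8 GB dual-channel
```

    With matched modules the single-channel remainder is zero, which is the case everyone recommends.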

  • Zolgar (Member, Posts: 533)

    So I'm still looking around before making any final decisions, and I've been looking at getting a higher wattage power supply so that when I buy a better video card I won't have to worry about buying another PSU as well. I'm looking into the Corsair CMPSU-650TX. It's gotten high ratings everywhere I've looked (a 5-egg average with over 1,000 5-egg votes on Newegg), and it's got that 80 PLUS thing in its description.


    Type: ATX12V / EPS12V

    Fans: 1 x 120mm fan

    PFC: Active

    Crossfire: Ready

    SLI: Certified

    Modular: No

    Efficiency: 80 PLUS Certified

    Dimension (L x W x H): 5.9" x 5.9" x 3.4"

    Connectors: 1 x Main connector, 1 x 12V (4/8-pin), 8 x peripheral, 8 x SATA, 2 x Floppy, 2 x PCI-E (8-pin)

     

    0118 999 881 999 119 725... 3

  • Quizzical (Legendary Member, Posts: 22,078)

    If you want a higher wattage power supply, then I won't try to stop you.  It's a question of budget, really.  The Corsair TX650 isn't a great power supply, but it is pretty good.  It's roughly comparable in quality to the Antec EarthWatts Green that I had picked for you, but at a much higher wattage.  If you want a good quality power supply in that wattage range, then at the prices on AVA Direct, that's a pretty good choice.

    The problem, and the reason why I didn't pick that for you up front, is that it doesn't fit your budget so well.

    An intermediate option, both in wattage and price, would be the Antec EarthWatts EA500.  That won't be able to handle every video card on the market, but it will work for most of them.  Among modern cards, the Radeon HD 6970 and 5970, GeForce GTX 580 and 480 would be out, and the GeForce GTX 570 and 470 would be kind of dicey.  Anything else on the market would run just fine on that power supply.  I'd expect that with future video cards, it would be about the same deal, where you wouldn't be able to upgrade to a real high end card, but there would be no issues with a $200 upper mid-range card.
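    As a rough sanity check on PSU sizing, here's a back-of-the-envelope sketch.  The TDP figures are illustrative assumptions rather than measured draw, the 75 W "other" allowance is a guess for the board, drives, and fans, and 80 PLUS only guarantees at least 80% efficiency at 20/50/100% load:

```python
# Rough power budget: worst-case DC load vs. the PSU's rated capacity, plus
# what that load implies at the wall for an 80 PLUS unit.

def system_load_w(cpu_tdp, gpu_tdp, other=75):
    """Worst-case DC load estimate; 'other' covers board, drives, fans."""
    return cpu_tdp + gpu_tdp + other

def wall_draw_w(dc_load, efficiency=0.80):
    """AC draw at the wall, given conversion efficiency."""
    return dc_load / efficiency

# Phenom II X4 955 (~125 W TDP) with a Radeon HD 5770 (~108 W TDP):
load = system_load_w(125, 108)
print(load)                       # ~308 W DC load: comfortable on a 500 W unit
print(round(wall_draw_w(load)))   # ~385 W from the wall at 80% efficiency
```

    The same arithmetic shows why a ~250 W TDP card like a GTX 480 pushes a 500 W unit much closer to its limit.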

  • Zolgar (Member, Posts: 533)

    I actually have one more question. I noticed this on the iBuyPower site: a lot of their computers come standard with a liquid CPU cooling system. It looks like a basic fan, but with two tubes running off of it. Is that safe? I mean, it says liquid, so I'm assuming that means there is water in those tubes. Are leaks easy to get? And would a regular fan and heatsink be okay with that AMD Phenom II X4 955 CPU, or is a liquid cooling system something I should look into?

     

    Thanks tons.


  • Quizzical (Legendary Member, Posts: 22,078)

    If I bought a computer from iBuyPower, I might get liquid cooling from them, because they often sell the liquid cooling setups at such a discount.  They probably get a big volume discount from Asetek or CoolIT or whatever company they use.  But it's not something I'd seek out elsewhere.  A low end liquid cooling system costs about the same as high end air cooling, and gives about the same performance.  Some of the cheap liquid cooling systems draw air in across the radiator, which dumps the processor heat inside the case, and isn't really effective for cooling everything else.  Liquid cooling does mean that there's liquid in the pipes, and it can leak, but I'm not sure how likely it is to do so.

    For what it's worth, most air cooling heatsinks have liquid inside the heatpipes.  That's sealed inside of solid copper, however, and doesn't move or bend, so it would take a lot of doing to get that liquid to leak out.  The way that heatpipes work is that they're nearly a vacuum, but have some water at very low pressure.  The water touches the CPU or GPU chip (or rather, comes pretty close to it, with heat from the chips getting conducted through a few thin layers), which heats it up until it boils off and takes a bunch of heat with it.  It's at low pressure, so it doesn't take much to get it to boil.  The water vapor then touches the interior of the heatpipe somewhere else cooler, condenses, and releases a bunch of heat in the process, perhaps an inch away from the chip it is trying to cool.  Then it gets carried back (wicking, not gravity) to the hot chip to repeat the process.  That spreads out the heat a lot better than simply sticking a solid aluminum heatsink on the chip.  Air blowing across aluminum fins then carries the heat away and out of the case.
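    Out of curiosity, that boil-and-condense mechanism can be sanity-checked with the latent heat of vaporization of water (roughly 2257 J/g at atmospheric pressure; the exact value at heatpipe pressures differs, so treat this as an order-of-magnitude sketch):

```python
# How much water must evaporate per second to carry a CPU's heat output?
# Latent heat of vaporization means each gram of boiling water absorbs ~2257 J.

def boil_rate_g_per_s(heat_watts, latent_heat_j_per_g=2257.0):
    """Grams of water per second that must evaporate to move heat_watts."""
    return heat_watts / latent_heat_j_per_g

print(round(boil_rate_g_per_s(125), 3))  # 125 W CPU: ~0.055 g/s of water
```

    A twentieth of a gram per second is why a few thin heatpipes with a tiny charge of water can move a processor's entire heat output.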

    A Phenom II X4 Black Edition processor comes with a decent stock heatsink, so I don't see any dire need to replace it.  The stock heatsink has two heatpipes going up from each side of the processor into an aluminum heatsink.  I haven't seen independent testing, but I'd expect it to perform comparably to a $20-$30 aftermarket heatsink.  Intel's Sandy Bridge processors, on the other hand, come with an awful, tiny heatsink with no heatpipes.  I wouldn't want to use that.  Intel did give their Gulftown processors a nice heatsink, so they know how to do that if they're so inclined.

  • tuzalov (Member, Posts: 183)

    Just make sure you use non-conductive fluid; I use Fesser.

     

    For 1366 HS I swear by V10's

    Heres mine...

  • Zolgar (Member, Posts: 533)

    Are these things normally pre-filled or do I have to put water or that 'Fesser' stuff into it myself? Also, is it something you have to empty out and re-fill?


  • tuzalov (Member, Posts: 183)

    The additive is basically glycol and water with stuff added to kill anything that might grow. Make sure you buy non-conductive fluid, and yes, you have to fill it yourself and occasionally top it up. As for filling and draining, you can drill a hole in the case and buy a fill port and a funnel; I use a 3-way hose connector and have a manual valve for easy drainage.

  • Quizzical (Legendary Member, Posts: 22,078)

    Sophisticated liquid cooling setups make no sense at all for someone unwilling to assemble parts, as that's a lot more complicated and there are a lot more things that can go wrong.  It also makes no sense on a $700 budget.

  • Zolgar (Member, Posts: 533)

    Originally posted by Quizzical

    Sophisticated liquid cooling setups make no sense at all for someone unwilling to assemble parts, as that's a lot more complicated and there are a lot more things that can go wrong.  It also makes no sense on a $700 budget.

    I'm actually going to end up spending closer to $800 with the $65 shipping, but it's no big deal, pretty much what I expected honestly. And it's not so much that I'm unwilling to assemble them, it's just that I don't trust myself enough to put them together myself. To me, having grown up quite poor, $700 (~$780 final price) is quite a bit of money, and normally I wouldn't drop this much on something, but my laptop is outdated and there's some stuff starting in my Game & Simulation Design courses that requires me to have a better system (the college does offer a laptop for sale to students, but it's $1000 and not quite as good as what I'm looking to buy for $200-300 cheaper). So I think I'm just going to go ahead and use a basic fan+heatsink until I can afford to purchase a good liquid cooling system and read up more about them.


  • Quizzical (Legendary Member, Posts: 22,078)

    When assembling parts, about the only thing that you can really screw up is putting the heatsink on the processor--in which case, if you turn the computer on without the heatsink firmly pressed against it, the processor may fry almost instantly.  Different ports are different shapes, so you can't plug something into the wrong slot, as it won't physically fit.  Otherwise, if you don't plug something in right, then maybe the computer doesn't boot or a part won't do anything, but that won't break it.  If you're looking to do a liquid cooling setup, then the processor heatsink is the one thing that you'll do yourself--and you'd be doing something considerably harder and less standard than a simple heatsink.

  • tuzalov (Member, Posts: 183)

    I have seen people do some pretty stupid things when installing fans: leaving the pad on, not putting thermal paste on, or not actually plugging the fan in. I've even seen someone use crazy glue on their heatsink. The worst thing I ever saw was a guy who cut part of a PCIe card to make it fit in an AGP slot.

  • Quizzical (Legendary Member, Posts: 22,078)

    Originally posted by tuzalov

    I have seen people do some pretty stupid things when installing fans: leaving the pad on, not putting thermal paste on, or not actually plugging the fan in. I've even seen someone use crazy glue on their heatsink. The worst thing I ever saw was a guy who cut part of a PCIe card to make it fit in an AGP slot.

    All right, I take that back, as people can find creative ways to break computers.  (No, your optical drive is not a cup holder!)  Installing the processor heatsink is about the only thing you can break, short of being an idiot.  Thermal paste is part of installing the processor heatsink, so messing that up isn't necessarily being an idiot.

  • Cody1174 (Member, Posts: 271)

    Originally posted by Quizzical

    That's still a cheap junk power supply, and it still doesn't tell you what memory or hard drive they use.  If the memory is rated at 1600 MHz, then it can probably run at 1333 MHz with tighter latency timings and a lower voltage, though, so it should be all right.  Don't get that power supply, though.

    You do surely need Windows, whether they install it or you buy it and install it yourself.

    Power Supply is the heart of your machine. Don't buy a cheap one.  

  • Zolgar (Member, Posts: 533)

    Originally posted by Quizzical

    Originally posted by tuzalov

    I have seen people do some pretty stupid things when installing fans: leaving the pad on, not putting thermal paste on, or not actually plugging the fan in. I've even seen someone use crazy glue on their heatsink. The worst thing I ever saw was a guy who cut part of a PCIe card to make it fit in an AGP slot.

    All right, I take that back, as people can find creative ways to break computers.  (No, your optical drive is not a cup holder!)  Installing the processor heatsink is about the only thing you can break, short of being an idiot.  Thermal paste is part of installing the processor heatsink, so messing that up isn't necessarily being an idiot.

    How long does the thermal paste usually last? Is that something I'm going to need to reapply? Or only when I'm removing the heatsink and putting it/something else back onto it?

