Worth waiting for the next generation of video cards or buy now?

Comments

  • Ridelynn Member EpicPosts: 7,383
    Elevenb4 said:
    I was asking something very similar this past summer, on another tech forum I frequent. Most of the answers were to wait a few months for some of the prices to drop a little and for other sales. So I waited for Thanksgiving/Christmas, asked the same question again, and they convinced me to wait until Nvidia's new stuff comes out...

    I say just get something. I had planned on having a new rig last summer, but now it looks like I'll be waiting till spring. I'm not asking anyone again, since I don't want to be convinced to wait until summer again for the new hardware, lol.

    My 2 cents

    This is the danger with the current tech upgrade cycle (new hardware releases roughly every 12-18 months).

    You can always get caught in a cycle of waiting for the next tech, or waiting for the price drop.

    In my experience, the next cycle of tech is about the only thing that forces a price drop, and even then it's all relative: performance versus what's currently available at any given price.

    I will sometimes advise waiting for the next generation of tech, if the tech is definitely on the horizon and it looks to bring a decent amount of change. I very rarely advise waiting for price drops, though; the price rarely drops unless the new generation of tech forces it to, and then you're looking at the question of new tech versus cheaper old tech.

    In general though, if you have a budget and you're ready to buy, go ahead and buy. If you think it is a good deal on the day you buy it (price versus performance and feature set) and something new releases the very next day, that doesn't make what you just bought any less of a deal. You thought it was a good deal when you bought it.

    In this case, I would say that nVidia's Pascal has been "around the corner" for a while now, although I wouldn't call it imminent. Usually within about 30-60 days of release you start to see leaks of prototype cards and benchmark scores, if not a paper launch from the company itself, and those aren't really flying around yet. This isn't the first I've heard of Polaris, but it's the first official news, and I think Polaris is probably a bit further out than Pascal is. I would expect the first Pascal stuff maybe this spring/summer, and Polaris fall/winter.

    I would not wait at all on CPUs. Those are moving much more slowly than GPUs, and there isn't enough competition for there to be any real price drops when a new generation is released.

    If you're willing to wait that long on a GPU before you rebuild, that's your call. Right now, for DX9/11 games, what is available is overkill for 1080p but not quite strong enough to drive 4K or VR except at the very high end, and nVidia has some DX12 optimizations that I expect to arrive as improvements over the current Maxwell line. But DX12 isn't widespread and won't be for several years.

    If you're sticking with 1080p, buy now; there is no reason to wait. If you want to do VR or 4K now, then I would wait until we see Pascal or Polaris. I wouldn't buy anything based on DX12, "futureproofing", or the possibility of maybe getting into 4K/VR sometime down the road, because by the time that becomes a relevant factor, we may well be two or three generations removed from now and you'd be looking to upgrade again anyway.
  • Malabooga Member UncommonPosts: 2,977
    Quizzical said:
    Another thing to consider is whether a Radeon R9 390 is faster than a GeForce GTX 670 by enough to justify the upgrade.  My rule of thumb is, don't upgrade until you can double the performance.  The 390 doesn't get you there unless your problem is running out of video memory, which it probably isn't.  The 390 might be 50% faster, but it's nowhere near double.

    To double your performance, you'd need to look at something more like a GeForce GTX 980 Ti or Radeon R9 Fury.  That's a different price range entirely.
    wut?

    The 670 performs somewhere above a 950 and below a 960. The 390 is right there at double the performance.
  • Tanon Member UncommonPosts: 176
    Thanks for all the insights, guys. As it happens, I'm Canadian, and with the way our dollar is going (it just hit 1.41 CAD:USD), I'm going to buy now, since I predict that by the time Polaris/Pascal release, our dollar will be even worse and something of similar performance will cost even more.
  • Quizzical Member LegendaryPosts: 25,355
    Wait for the new generation of GPUs. HBM 1.0 for AMD and HBM 2.0 + NVLink for Nvidia will be game changers for the industry.

    I'll be waiting for Pascal (Nvidia) to release, because HBM (High Bandwidth Memory) 2.0 can scale to 8, 16, and even 32GB of HBM on a single card, while HBM 1.0 on AMD's new GPU (Polaris) only supports a maximum of 4GB. NVLink is also supposed to be a major player in improving communication between GPU and CPU.

    I won't fully explain all of the features the new GPUs will bring in 2016. You should do some research into what the new technology will bring before making your final decision.

    Whether you go Nvidia or AMD, both next-generation graphics solutions are intended to release in 2016.
    What makes you think that Nvidia is going to use HBM 2 and AMD will only use HBM 1?  Remember that HBM itself was invented by AMD and Hynix; if Nvidia uses HBM, that's just coming along later and copying what AMD did.  Not that that's a bad thing; it's much better for both GPU vendors to use the same memory standards than for everyone to have their own proprietary memory standard.  But my point is that it isn't terribly plausible that AMD will abandon updated versions of HBM just as Nvidia adopts it unless you expect AMD to invent something else that is better to replace it so soon.

    The reason Fiji used HBM 1 is that it's what was commercially available in time to launch a part last June.
  • Malabooga Member UncommonPosts: 2,977
    edited January 2016
    Not a bad decision. I sold my 1.5-year-old GPU recently for almost what I paid for it back then, because of currency exchange rates influencing prices in my country.

    You'll need a better CPU with it.

    An FX 8300 can be found for as low as $92:

    http://www.tigerdirect.com/applications/SearchTools/search.asp?keywords=fx+8300

    and you'll be set for the next 2-3 years.
  • Malabooga Member UncommonPosts: 2,977
    edited January 2016
    Quizzical said:
    Wait for the new generation of GPUs. HBM 1.0 for AMD and HBM 2.0 + NVLink for Nvidia will be game changers for the industry.

    I'll be waiting for Pascal (Nvidia) to release, because HBM (High Bandwidth Memory) 2.0 can scale to 8, 16, and even 32GB of HBM on a single card, while HBM 1.0 on AMD's new GPU (Polaris) only supports a maximum of 4GB. NVLink is also supposed to be a major player in improving communication between GPU and CPU.

    I won't fully explain all of the features the new GPUs will bring in 2016. You should do some research into what the new technology will bring before making your final decision.

    Whether you go Nvidia or AMD, both next-generation graphics solutions are intended to release in 2016.
    What makes you think that Nvidia is going to use HBM 2 and AMD will only use HBM 1?  Remember that HBM itself was invented by AMD and Hynix; if Nvidia uses HBM, that's just coming along later and copying what AMD did.  Not that that's a bad thing; it's much better for both GPU vendors to use the same memory standards than for everyone to have their own proprietary memory standard.  But my point is that it isn't terribly plausible that AMD will abandon updated versions of HBM just as Nvidia adopts it unless you expect AMD to invent something else that is better to replace it so soon.

    The reason Fiji used HBM 1 is that it's what was commercially available in time to launch a part last June.
    The only case where AMD would use HBM 1 is on the lower of the high-end parts that need no more than 4 GB of memory, since HBM 1 is limited to 4 GB, and, of course, only if it's cheaper than HBM 2, which depends on quite a few things.

    People forget that AMD and Hynix invented HBM, lol. And now, somehow, AMD won't use its own invention on its cards (even though it's confirmed the high-end parts will use HBM 2).

    If I'm to look at my crystal ball, no $300 GPU will use HBM of any kind in the next generation; there are rumors of GDDR6, but it will most likely still be GDDR5.
  • Elevenb4 Member UncommonPosts: 362
    edited January 2016
    Ridelynn said:
    Elevenb4 said:
    I was asking something very similar this past summer, on another tech forum I frequent. Most of the answers were to wait a few months for some of the prices to drop a little and for other sales. So I waited for Thanksgiving/Christmas, asked the same question again, and they convinced me to wait until Nvidia's new stuff comes out...

    I say just get something. I had planned on having a new rig last summer, but now it looks like I'll be waiting till spring. I'm not asking anyone again, since I don't want to be convinced to wait until summer again for the new hardware, lol.

    My 2 cents

    This is the danger with the current tech upgrade cycle (new hardware releases roughly every 12-18 months).



    If you're sticking with 1080p, buy now; there is no reason to wait. If you want to do VR or 4K now, then I would wait until we see Pascal or Polaris. I wouldn't buy anything based on DX12, "futureproofing", or the possibility of maybe getting into 4K/VR sometime down the road, because by the time that becomes a relevant factor, we may well be two or three generations removed from now and you'd be looking to upgrade again anyway.
    I just kept the part I wanted to point out, but yes, this is the main reason I'm waiting. I really want to get into the VR stuff and move away from 1080p. I'm thinking I want to keep my budget around $1,500, but that has to include Windows 10 and a monitor. After that, my budget for the PC build alone will be around $1,150 or less. But again, I'm not buying now, but definitely by spring break; in fact, that's probably what I'll be doing during spring break.

    One other thing: I'm really debating just getting a case this month, then a PSU next month, then a few little things that the wife won't notice or won't care that I spent $50-$100 on. Then I can stretch that $1,500 we've agreed on even a little further when the time comes.

    -Unconstitutional laws aren't laws.-

  • Loke666 Member EpicPosts: 21,441
    Malabooga said:
    Not a bad decision. I sold my 1.5-year-old GPU recently for almost what I paid for it back then, because of currency exchange rates influencing prices in my country.

    You'll need a better CPU with it.

    An FX 8300 can be found for as low as $92:

    http://www.tigerdirect.com/applications/SearchTools/search.asp?keywords=fx+8300

    and you'll be set for the next 2-3 years.
    Listen, the FX 8300 is hardly something that would make any gamer happy for 2-3 years; no card even close to that price class will do that unless you just plan to play old games like WoW. An R9 290 or a GTX 970 would last most gamers 3 years, but they are in a different price range.

    Below that, I would only upgrade if I were poor and had an ancient card; it is better to wait a while and save up money for a good card than to buy old crap that is already rather sad when you get it.

    There are two things a gamer should never cheap out on with their computer: the GPU and the PSU. A bad PSU could fry your entire system (a good one will last you 10 years), while the GFX card will have a far greater impact than anything else.

    Now, you could go somewhat cheaper than the cards I recommend, but then you will need to upgrade sooner. And yeah, I assume most gamers enjoy playing games at the highest possible graphics settings.
  • Nevardlrow Member UncommonPosts: 19
    And the correct answer is...

    Wait for DisplayPort 1.3 cards + 4K monitors with 120Hz
  • Malabooga Member UncommonPosts: 2,977
    Loke666 said:
    Malabooga said:
    Not a bad decision. I sold my 1.5-year-old GPU recently for almost what I paid for it back then, because of currency exchange rates influencing prices in my country.

    You'll need a better CPU with it.

    An FX 8300 can be found for as low as $92:

    http://www.tigerdirect.com/applications/SearchTools/search.asp?keywords=fx+8300

    and you'll be set for the next 2-3 years.
    Listen, the FX 8300 is hardly something that would make any gamer happy for 2-3 years; no card even close to that price class will do that unless you just plan to play old games like WoW. An R9 290 or a GTX 970 would last most gamers 3 years, but they are in a different price range.

    Below that, I would only upgrade if I were poor and had an ancient card; it is better to wait a while and save up money for a good card than to buy old crap that is already rather sad when you get it.

    There are two things a gamer should never cheap out on with their computer: the GPU and the PSU. A bad PSU could fry your entire system (a good one will last you 10 years), while the GFX card will have a far greater impact than anything else.

    Now, you could go somewhat cheaper than the cards I recommend, but then you will need to upgrade sooner. And yeah, I assume most gamers enjoy playing games at the highest possible graphics settings.
    FX 8300 is a CPU...
  • Loke666 Member EpicPosts: 21,441
    Malabooga said:
    Loke666 said:
    Listen, the FX 8300 is hardly something that would make any gamer happy for 2-3 years; no card even close to that price class will do that unless you just plan to play old games like WoW. An R9 290 or a GTX 970 would last most gamers 3 years, but they are in a different price range.

    Below that, I would only upgrade if I were poor and had an ancient card; it is better to wait a while and save up money for a good card than to buy old crap that is already rather sad when you get it.

    There are two things a gamer should never cheap out on with their computer: the GPU and the PSU. A bad PSU could fry your entire system (a good one will last you 10 years), while the GFX card will have a far greater impact than anything else.

    Now, you could go somewhat cheaper than the cards I recommend, but then you will need to upgrade sooner. And yeah, I assume most gamers enjoy playing games at the highest possible graphics settings.
    FX 8300 is a CPU...
    Duh, 24 hours awake and I read HD instead of FX... *facepalm to me*
  • Malabooga Member UncommonPosts: 2,977
    get some sleep...NAU ;P
  • Kiyoris Member RarePosts: 2,130
    edited January 2016
    Tanon said:
    AMD having just revealed their new Polaris architecture that basically doubles performance per watt
    You're drinking a bit too much AMD Kool-Aid; this isn't the 90s anymore, where you'd see a 2x or greater performance gain from GPU to GPU. Those days are long gone.
  • azarhal Member RarePosts: 1,402
    Kiyoris said:
    Tanon said:
    AMD having just revealed their new Polaris architecture that basically doubles performance per watt
    You're drinking a bit too much AMD Kool-Aid; this isn't the 90s anymore, where you'd see a 2x or greater performance gain from GPU to GPU. Those days are long gone.
    Going from 28nm to 16/14nm is what achieves the 2x performance-per-watt increase. It is also how it was done in the 90s; die shrinks just came faster back then. Now it is slow because semiconductor manufacturers have been having issues getting a stable process for sub-20nm production.
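
    A back-of-the-envelope sketch of that scaling argument, in Python. The ideal figure comes straight from the node names; the ~20nm effective pitch for 16/14nm FinFET is an assumption (node names are partly marketing), so treat the second figure as a rough estimate:

    ```python
    # Why a node shrink improves performance per watt: transistor area
    # ideally shrinks with the square of the linear feature size.
    def ideal_density_gain(old_nm: float, new_nm: float) -> float:
        return (old_nm / new_nm) ** 2

    # By node name alone, 28nm -> 14nm suggests a 4x density gain:
    print(ideal_density_gain(28, 14))  # 4.0

    # In practice, 16/14nm FinFET is closer to a ~20nm metal pitch with
    # better (FinFET) transistors, so ~2x density, and roughly 2x
    # performance per watt, is the realistic expectation quoted above.
    print(ideal_density_gain(28, 20))  # 1.96
    ```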
  • Alexander.B Member UncommonPosts: 90
    I think there are a variety of factors that come into play, such as what resolution you want to play at, how many FPS you actually need, etc.

    I've waited for a while (only rocking a GTX 660) because for the games I play (MOBAs), I'm able to get decent FPS. I use a 144Hz gaming monitor, and it's difficult to hit 144 FPS in AAA games even with a high-end card, but in MOBAs I seem to clock in around 100+. My girlfriend is using a GTX 660 Ti, but it's even more difficult for her to get 120/144 FPS at 1440p in anything that isn't a MOBA or MMORPG.
  • laxie Member RarePosts: 1,118
    edited January 2016
    I bought a GTX 970 over a year ago and am more than happy with it. A year ago, it was a brilliant purchase, in my opinion.

    The sad thing is, today the price is nearly identical to what I paid a year ago.

    I suspect the price will drop significantly once they are close to releasing the new range. You'll then have more options to choose from (cheaper GTX 970-ish cards, or the brand new ones). Right now the only viable option is to buy two-year-old models at their launch prices.
  • Elevenb4 Member UncommonPosts: 362
    laxie said:
    I bought a GTX 970 over a year ago and am more than happy with it. A year ago, it was a brilliant purchase, in my opinion.

    The sad thing is, today the price is nearly identical to what I paid a year ago.

    I suspect the price will drop significantly once they are close to releasing the new range. You'll then have more options to choose from (cheaper GTX 970-ish cards, or the brand new ones). Right now the only viable option is to buy two-year-old models at their launch prices.
    Totally agree. When I was told to wait until prices dropped, they didn't drop much, if at all, even during large sales. Again, just more support for the "buy it now" suggestion.

    -Unconstitutional laws aren't laws.-

  • Elevenb4 Member UncommonPosts: 362
    If you always wait until the prices drop, you never buy anything...
    My now-older 290X is still an excellent card, running everything I throw at it with maxed-out settings. SW: Battlefront and The Witcher III look gorgeous on it, with all settings on ultra and smooth framerates throughout.
    My only advice would be: if you buy something, buy one of the upper-tier cards... for nVidia, that would be a 970 or 980, and for AMD a 390X, Fury, or Fury X. Get at least 4GB of onboard memory, too. That way, you can be sure you won't have to change cards again in one or two years, when new games become too demanding for an older entry-level or mid-range card.
    Great advice and something I wish I had heard years ago. 

    -Unconstitutional laws aren't laws.-

  • Quizzical Member LegendaryPosts: 25,355
    Gorwe said:
    Quizzical said:
    Another thing to consider is whether a Radeon R9 390 is faster than a GeForce GTX 670 by enough to justify the upgrade.  My rule of thumb is, don't upgrade until you can double the performance.  The 390 doesn't get you there unless your problem is running out of video memory, which it probably isn't.  The 390 might be 50% faster, but it's nowhere near double.

    To double your performance, you'd need to look at something more like a GeForce GTX 980 Ti or Radeon R9 Fury.  That's a different price range entirely.
    I've a question for you (or anyone who is willing to answer):

    What is double a 7850 or 7870? A Fury / 980 Ti?
    A Radeon R9 390 or GeForce GTX 970 is a little more than double a Radeon HD 7870.  Going further up the chain gets you something faster yet, and I'm not saying that you shouldn't try to more than double performance.  I am saying that I'd avoid small upgrades.
  • Quizzical Member LegendaryPosts: 25,355
    I don't know if HBM will ever be ubiquitous in video cards.  It probably will come down the product stack as it gets more price competitive with GDDR5, but I'd expect to see GDDR5 or some derivative of it (there are rumors about a GDDR5X) in mid-range gaming cards for some years to come.

    As a gamer, though, you don't need to care about the memory bus technology directly.  What matters is bandwidth and capacity.  HBM has a very wide memory bus (4096-bit in the Fury cards), but clocks the memory much lower to compensate.  This has two benefits:

    1)  You can get more memory bandwidth than is practical with GDDR5, and
    2)  You can get much more memory bandwidth per watt than with GDDR5.

    For comparison, the Radeon R9 390X offers 384 GB/s, but the memory controllers use about 30 W.  The Radeon R9 Fury X offers 512 GB/s, but the memory controllers use about 10 W.  Being able to get more than 384 GB/s only matters if the GPU can make use of that, and I'm skeptical that the Fury cards really need all the bandwidth they have.  (The GeForce GTX Titan X is competitive with "only" 336 GB/s.)
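
    Those bandwidth figures are easy to sanity-check: peak bandwidth is just bus width times effective data rate. A minimal sketch using the published specs for each card:

    ```python
    # Peak memory bandwidth = (bus width in bits / 8) * effective data
    # rate in Gbps. The specs below are the published figures per card.
    def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
        return bus_width_bits / 8 * data_rate_gbps

    # Radeon R9 390X: 512-bit GDDR5 at 6 Gbps effective
    print(bandwidth_gb_s(512, 6.0))   # 384.0 GB/s
    # Radeon R9 Fury X: 4096-bit HBM1 at 1 Gbps effective (500 MHz DDR)
    print(bandwidth_gb_s(4096, 1.0))  # 512.0 GB/s
    # GeForce GTX Titan X: 384-bit GDDR5 at 7 Gbps effective
    print(bandwidth_gb_s(384, 7.0))   # 336.0 GB/s
    ```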

    The reduced power consumption is arguably the bigger deal in the long term, but you can ignore that piece and just look at total card power consumption.  To a gamer, what matters directly is how much power the card uses, not whether it's being burned by shaders, memory controllers, caches, or whatever.

    The downside of HBM is that it's more expensive, at least for now.  The Fury series cards have a 1200 mm^2 silicon interposer that has to be built on a logic process node.  Yes, they can use a very old process node for it, and in the future, there will probably be process nodes built specifically to make cheap interposers.  But that will always be a production cost that GDDR5 just doesn't have, even if it becomes less significant in years to come.

    HBM1 also caps you at 1 GB per stack of HBM.  HBM2 will have a much larger cap that shouldn't be a meaningful problem for gaming cards.
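
    Since capacity scales with stack count, the caps work out as a simple product. The 1 GB-per-stack HBM1 limit is from the paragraph above; Fiji's four stacks and HBM2's up-to-8 GB stacks are the published configurations:

    ```python
    # Card memory capacity = stacks * GB per stack.
    # HBM1 allows 1 GB per stack; HBM2 allows up to 8 GB per stack.
    def capacity_gb(stacks: int, gb_per_stack: int) -> int:
        return stacks * gb_per_stack

    print(capacity_gb(4, 1))  # 4 GB: Fiji's four HBM1 stacks, the Fury cap
    print(capacity_gb(4, 8))  # 32 GB: four HBM2 stacks at the maximum
    ```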

    A Radeon R9 380 should offer performance in the ballpark of a Radeon HD 7970.  That's somewhere around double the performance of a Radeon HD 7850.
  • Quizzical Member LegendaryPosts: 25,355
    Gorwe said:
    Gorwe said:


    You just saved me at least $30/month. So, very well done!
    Huh??? O.o
    I can't believe you can't puzzle that one out. Quiz and I were talking about the purchase of a Fury vs. the purchase of an R9 380/390. We were also comparing them to an HD 7850. So, the options were:

    -> Purchase a Fury
    -> Purchase a 380 / 390

    Seeing what I said... which one do you think I chose? And that's why I thanked him. I THOUGHT it was obvious... guess I need to re-evaluate some beliefs of mine.
    The Radeon R9 Fury series cards are unambiguously better than the Radeon R9 380 or 390 series cards, at least outside of situations where you need more than 4 GB of video memory and you're comparing it to an 8 GB card in the 390 series.  Make no mistake about that.  But it's not just because of HBM.

    But the Fury cards are also more expensive.  If price were not a consideration, nearly all gamers buying new video cards today would get either a Radeon R9 Fury X or a GeForce GTX Titan X and not even consider other cards.  But the price tag alone rules out both of those cards for most gamers, which is why cheaper, lower end cards exist.

    The memory is not the only difference between the cards.  The Fiji chip (Fury series) has 64 compute units, while Hawaii (390 series) has 44 and Tonga (380 series) has 32.  More compute units means more shaders, more texture units, more copies of the local data store, instruction cache, L1 cache, wavefront scheduler, and various other things.  The higher end cards also tend to have more L2 cache, more ROPs, etc.

    What games tend to need is not some fixed amount of memory bandwidth, shader power, etc., but rather somewhat fixed ratios of them.  If you have twice as many compute units, twice as much memory bandwidth, and twice as much of everything else, you can offer twice the performance.  So that's what AMD and Nvidia both do:  create lineups where the high-end cards have about the same ratios of various things as the low-end cards, but the high end has more of everything.
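
    As a rough illustration of that ratio argument, here is a minimal sketch that scales expected performance by compute-unit count alone; it ignores clocks, ROPs, and bandwidth differences, so treat it as an approximation, not a benchmark:

    ```python
    # If a lineup keeps its internal ratios fixed, relative performance
    # roughly tracks compute-unit count (CU counts from the post above).
    compute_units = {"Tonga (380)": 32, "Hawaii (390)": 44, "Fiji (Fury)": 64}

    baseline = compute_units["Tonga (380)"]
    for chip, cus in compute_units.items():
        print(f"{chip}: {cus} CUs -> ~{cus / baseline:.2f}x a 380")
    # Fiji has exactly twice the CUs of Tonga, which is why the Fury
    # cards can target roughly double the 380's performance tier.
    ```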

    What does worry me, though, is the "$30/month".  The price difference between a Radeon R9 Fury and a Radeon R9 380 is about $320, and there's no intrinsic "price per month" attached to it.  If you're getting financing on computer parts that gets you to look at a monthly rate rather than a total price tag, the interest rate on those is often usuriously high.  If it's a "rent to own" situation, then stop and cancel the order immediately, as those are basically scams in all but the legal sense.
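
    To make that warning concrete, here is a sketch with hypothetical numbers: the ~$320 gap is from the paragraph above and the $30/month is the figure quoted, but the 18-month term is purely an assumption, since no term was stated:

    ```python
    # Hypothetical illustration of how a monthly rate can hide the true
    # cost of financing. The term length is an assumption for the example.
    cash_price = 320.00      # approximate Fury vs. 380 price gap (see above)
    monthly_payment = 30.00  # the quoted "$30/month"
    term_months = 18         # assumed rent-to-own style term

    total_paid = monthly_payment * term_months
    markup = (total_paid - cash_price) / cash_price
    print(f"Total paid: ${total_paid:.2f}, markup: {markup:.1%}")
    # Total paid: $540.00, markup: 68.8%
    ```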
  • SlothnChunk Member UncommonPosts: 788
    Tanon said:
    I'm planning on buying an R9 390 to upgrade from my 670, but with AMD having just revealed their new Polaris architecture that basically doubles performance per watt, would it be a good idea to wait for that? It'd certainly be a big boon compared to how much power the 390 eats, but would there be a high-end card available early on in the releases, or would it be later in the year?
    If you really want efficiency (performance per watt) for a GPU, you need to go with Nvidia (and if you want it for a CPU, you need to go with Intel).

    AMD chips are fantastic values, but per-watt efficiency is AMD's major weakness. That's where they lag behind the competition, and this will continue (for at least the next several years) given the lead Nvidia/Intel have, combined with the funding they put into R&D in this specific area.
  • thinktank001 Member UncommonPosts: 2,144

    AMD chips are fantastic values, but per-watt efficiency is AMD's major weakness. That's where they lag behind the competition, and this will continue (for at least the next several years) given the lead Nvidia/Intel have, combined with the funding they put into R&D in this specific area.

    Stop giving poor advice about future products.  

    http://semiaccurate.com/2016/01/04/37910/

  • Malabooga Member UncommonPosts: 2,977
    Tanon said:
    I'm planning on buying an R9 390 to upgrade from my 670, but with AMD having just revealed their new Polaris architecture that basically doubles performance per watt, would it be a good idea to wait for that? It'd certainly be a big boon compared to how much power the 390 eats, but would there be a high-end card available early on in the releases, or would it be later in the year?
    If you really want efficiency (performance per watt) for a GPU, you need to go with Nvidia (and if you want it for a CPU, you need to go with Intel).

    AMD chips are fantastic values, but per-watt efficiency is AMD's major weakness. That's where they lag behind the competition, and this will continue (for at least the next several years) given the lead Nvidia/Intel have, combined with the funding they put into R&D in this specific area.
    You have no idea what you're talking about. Once you inform yourself about why Nvidia GPUs use less power, you can come back and talk about it. It doesn't have anything to do with R&D; claiming it does just makes you sound completely ridiculous.
  • Quizzical Member LegendaryPosts: 25,355
    Tanon said:
    I'm planning on buying an R9 390 to upgrade from my 670, but with AMD having just revealed their new Polaris architecture that basically doubles performance per watt, would it be a good idea to wait for that? It'd certainly be a big boon compared to how much power the 390 eats, but would there be a high-end card available early on in the releases, or would it be later in the year?
    If you really want efficiency (performance per watt) for a GPU, you need to go with Nvidia (and if you want it for a CPU, you need to go with Intel).

    AMD chips are fantastic values, but per-watt efficiency is AMD's major weakness. That's where they lag behind the competition, and this will continue (for at least the next several years) given the lead Nvidia/Intel have, combined with the funding they put into R&D in this specific area.
    The reason Nvidia has a big performance-per-watt advantage over AMD right now is that they decided to redo their entire lineup with a new architecture in 2014 and AMD didn't.  You can do better even on an old process node if you redesign everything to take advantage of a well-understood node, rather than having to guess at a lot of things as you do with the early chips on a node.

    This is the only time in the unified shader era (going back about 9 years) that Nvidia has had a substantial performance per watt advantage over AMD.  With the impending move to 14/16 nm by both companies, Nvidia's performance per watt advantage is likely to soon vanish.  It's possible that Pascal could be more efficient than Polaris, but that's no more likely than the other way around.

    And what does Intel have to do with anything?  Intel's GPUs are ridiculously inefficient as compared to either AMD or Nvidia.