
I facepalm AMD (due to first day reviews, it changes quite a bit in the last few days) it's been beaten ev


Comments

  • Quizzical (Legendary Member, Posts: 25,353)
    Originally posted by Gravarg
    AMD is and always will be good for a mid-range card.  However, I don't think they will ever compete with Nvidia when it comes to higher end PCs.  That's not a bad thing since like less than 1% of PC users have a Titan anyways.  I've had 2 AMD computers, and both were fine.  The only problem I had with AMD was that every time a driver came out you had to download like a 200-500mb patch which was basically the entire driver for every card over several families.  Nvidia on the other hand, I download like 2mb patch every few days

    For many years, performance was basically limited by how large of a die you were willing to build.  Build twice as big of a die and have twice as much of everything and you get double the performance.  Nvidia was willing to build bigger dies, so Nvidia had the top end card except for during some periods when AMD had gotten to a new process node and Nvidia wasn't there yet  (e.g., Radeon HD 5870 or Radeon HD 7970). 

    But now, performance is increasingly limited by power consumption rather than just die size.  Will that affect who has the top end card going forward?  Maybe.  If one vendor is willing to make a 300 W card and the other won't go over 200 W, whoever is willing to make the 300 W card will probably have the top end performer.  I don't expect there to be a big gap between what the vendors are willing to build, but I don't know what will happen.

  • Quizzical (Legendary Member, Posts: 25,353)
    Originally posted by Classicstar

     

    AMD 295 still fastest card on market in almost all games.

    That's assuming CrossFire works flawlessly.  Which it doesn't.  The proper measure of who has the fastest video cards for gaming is the fastest single GPU card.  And there, it's close between a Titan X and a Fury X.

  • JohnP0100 (Uncommon Member, Posts: 401)
    Originally posted by 13lake
    Originally posted by JohnP0100
    I guess even the diehard AMD supporters are resigned to the fact AMD is going to be bad for the next 12+ months.

    Is that why every single AMD Fury card is sold out on the planet ?

    I don't think 'defending AMD' means pointing out that AMD can't get its logistics sorted out. /shrug

    It shows what PvP games are really all about, and no, it's not about more realism and immersion. It's about cowards hiding behind a screen so they can bully other defenseless players without any risk of direct retaliation like there would be if they acted like asshats in "real life". -Jean-Luc_Picard

    Life itself is a game. So why shouldn't your game be ruined? - justmemyselfandi

  • Ridelynn (Epic Member, Posts: 7,383)


    Originally posted by Jean-Luc_Picard

    Originally posted by Quizzical

    Originally posted by Jean-Luc_Picard

    AMD does better when it can use the many cores available in their processors. 
    Even then it's not true. A Haswell i7 with 4 cores (8 logical cores) beats the crap out of any AMD processor with 8 cores.
    It depends on what you're doing.  Integer computations aren't the same as floating point, and Bulldozer and its derivatives have pairs of cores share floating point units.  If you're doing all floating point, that's going to look far more favorable to Intel than if you're doing all integer.  It's basically trivial to make a program where an FX-8350 will clobber a Core i7-4790K if that's what you're trying to do.
    I've yet to find a single review or benchmark where a FX-8350 beats a Core I7-4790k, but I'd be interested to read one. And even if that hypothetical situation exists... the Intel CPU still crushes the AMD one in all "real life" situations.

    Also, you say Bulldozer has pairs of cores sharing floating point units... well, the I7-4790k has only 4 cores with 8 logical cores, so 2 logical cores share much more than just a FPU.

    AMD GPUs are excellent, but CPU wise, AMD is still far behind Intel.

    EDIT and PS: not saying the FX-8350 is a bad CPU, and I'm sure it can run most games and applications at max settings with decent FPS when paired with a good graphic card, just that Intel CPUs (and not only the 4790k) definitely have better performance. And this reflects in gaming that when the AMD CPU "chokes" and frame rate drops under 30, the Intel CPU will keep higher frames per second.

    I'm not anti-AMD, my previous CPU was a Phenom II 1100T which is still running in a family computer (my mom's actually, 1100T and AMD HD6870 graphic card) and still delivers more than honorable performance for its age and the price I paid it back then, but I accept reality... Intel makes better CPUs than AMD.


    I was curious about this too.

    I went to the Anandtech benchmark tool, because it's very nice for just comparing two products (so long as they happen to be in their database), and they have a nice swath of synthetic benchmarks, real world application data, and game performance data that gets standardized across every test. Fortunately the FX8350 and 4790K are both there.

    And so I looked, there were several tests that were close - close enough I'd call them statistically even. But I couldn't find a single bench number where the FX8350 beat a 4790K.

    Pretty much - yeah, a Core i7 clobbers a FX8350 in pretty much every case. Even in the synthetic ones designed to exploit core counts and FPU vs Integer.

    Then again, you're pitting an 8 core FX CPU against an 8 core Intel. With AMD, each pair of cores shares a lot of resources, and with Intel, you only have 4 full cores and 4 not-really cores... so there are limitations both ways, but both CPUs report to be 8-core units in some manner or another. You aren't really seeing the "Core Count" advantage that AMD wants to press. Probably not the best example if you wanted to highlight a case where more cores beats faster cores.

    I looked again at a Core i5 4690K vs a FX8350 - there you ~can~ find some cases where AMD beats Intel. That gets back to the more versus faster argument a bit better, since an i5 is most definitely not an 8-core unit no matter how you look at it.

    I'm sure you could make some synthetic benchmark just to showcase AMD if you picked some particular instruction and made something to get an FX8350 to beat an i7 4790K (which, in Quiz's defense, is what he said - not that one existed), but I don't know of one that currently exists. Mostly it just goes to illustrate the absurdity of synthetic benchmarking in general, really, and if we want to illustrate that more cores can beat faster cores, it helps to pick a CPU that actually has more cores.

  • mbrodie (Rare Member, Posts: 1,504)
    Originally posted by Quizzical
    Originally posted by Classicstar

     

    AMD 295 still fastest card on market in almost all games.

    That's assuming CrossFire works flawlessly.  Which it doesn't.  The proper measure of who has the fastest video cards for gaming is the fastest single GPU card.  And there, it's close between a Titan X and a Fury X.

    The 295X2 is a dual GPU card just like the Titan Z; it was AMD's direct competition to the Titan.

  • dotdotdash (Uncommon Member, Posts: 488)

    AMD is backed by Arab money and operates both consumer and corporate GPU and CPU arms. In 2014, it had revenues in excess of $5.50 billion, which represented significant growth year-on-year.

    Nvidia is backed by Asian money and operates a consumer and corporate focused GPU arm and a mobile focused CPU arm. In 2014, it had revenues of around $4.10 billion, which represented a slight decline year-on-year.

    It's apples and oranges, basically.

    It would be worth noting that AMD aren't just competing with Nvidia (successfully), but also with Intel and Qualcomm. The niche they fill is their own, and Nvidia hasn't got that luxury; Nvidia has to be aggressive because if they fail to compete with Intel, Qualcomm and AMD, they'll likely end up getting cut to shreds, whereas AMD have the luxury of an established niche that they operate within as if it were a vacuum (which is why they keep getting handed shed loads of investor cash despite only modest (or no) returns in recent years).

    So yeah...

    AMD aren't going anywhere. Indeed, there's more chance of Nvidia and Intel vanishing from sight due to the chaotic nature of the market areas they target, where AMD operate on a fairly stable and long-lasting platform.

  • dotdotdash (Uncommon Member, Posts: 488)


    Originally posted by Ridelynn
    Originally posted by 13lake
    Originally posted by JohnP0100
    I guess even the diehard AMD supporters are resigned to the fact AMD is going to be bad for the next 12+ months.
    Is that why every single AMD Fury card is sold out on the planet ?

    Not saying anyone is right or wrong here, but could be because there isn't that much supply?

    I know when the 680's first came out, they stayed sold out for months after their release - and it wasn't so much because of the popularity as it was nVidia just couldn't produce enough chips.


    Nah, that's not what happened.

    Nvidia purposefully narrowed the production line to give the illusion of popularity. It's the same marketing strategy employed by most large tech companies as it motivates day 1 and early sales by convincing consumers that if they don't get one as soon as it releases then they'll have to wait months.

    AMD did precisely the same here. They purposefully narrowed the pipeline to drive sales, and now that limited number produced has sold out they can claim it's a testimony to the popularity of the card, which will drive further sales when the next prod run is with retailers.

  • Ridelynn (Epic Member, Posts: 7,383)


    Originally posted by dotdotdash

    Originally posted by Ridelynn

    Originally posted by 13lake

    Originally posted by JohnP0100
    I guess even the diehard AMD supporters are resigned to the fact AMD is going to be bad for the next 12+ months.
    Is that why every single AMD Fury card is sold out on the planet ?

    Not saying anyone is right or wrong here, but could be because there isn't that much supply?

    I know when the 680's first came out, they stayed sold out for months after their release - and it wasn't so much because of the popularity as it was nVidia just couldn't produce enough chips.


    Nah, that's not what happened.

    Nvidia purposefully narrowed the production line to give the illusion of popularity. It's the same marketing strategy employed by most large tech companies as it motivates day 1 and early sales by convincing consumers that if they don't get one as soon as it releases then they'll have to wait months.

    AMD did precisely the same here. They purposefully narrowed the pipeline to drive sales, and now that limited number produced has sold out they can claim it's a testimony to the popularity of the card, which will drive further sales when the next prod run is with retailers.


    I don't doubt that what you say is a strategy that gets employed.

    But we are talking about low volume, fairly niche products here that have a very high profit margin. I do have a bit of data to back that up:
    http://store.steampowered.com/hwsurvey

    Sticking with the 680, because that's just a decent example that was popular: it has a 0.5% installed base on the Steam HW Survey. I admit that doesn't include people who had one and have since upgraded, so let's look at those as well: the 780 (0.73%), the 780 Ti (0.34%), and the 980 (0.66%) -- there are no Titans on the list (I don't know if that's because no Titans took the survey, or if they are mislabeled as something else). This survey also won't include the 980 Ti, since it was just recently released and this data is from May.

    So, adding all that up: of all the nVidia users, 2.23% paid for top tier, or near top tier, hardware - in excess of $500 MSRP for their video cards. nVidia in total made up 51.85% of all video cards in the survey. Steam recently crossed 125 million users.

    I'm no math or statistics guru, but just some gut checks here: 51.85% of 125M is 64.8M. 2.23% of 64.8M is 1.45M.

    I didn't include AMD's top tier cards in there - just out of sheer laziness, I'll assume it's around a 2.23% rate as well, which, at 28.15% of the users, comes out to another 785k.

    And then there's the folks running Intel graphics, for which there is no top tier.

    So out of all 125M Steam users, about 2.24M of them - about 1.8% of gamers - are using a top tier card and were willing to pay $500 or more for it. And only about 1/4 of that number are going to upgrade or buy new for each generation (based on the dispersion of the above-mentioned 4 cards to the total). That puts us at around 560k in upper-tier sales per generation, split between AMD/nVidia, give or take a good bit. That's not a perfect number, but in lieu of actual sales data, it gets us at least in the ballpark.
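    The gut-check math above can be written out explicitly. A quick sketch (the percentages are the survey figures quoted above; applying the same 2.23% top-tier rate to AMD is the assumption already stated):

    ```python
    # Back-of-envelope estimate of the top-tier GPU market,
    # using the May Steam Hardware Survey shares quoted above.
    steam_users = 125_000_000      # Steam's reported user count
    nvidia_share = 0.5185          # nVidia's share of surveyed cards
    amd_share = 0.2815             # AMD's share of surveyed cards
    top_tier_rate = 0.0223         # 680 + 780 + 780 Ti + 980 shares combined

    nvidia_top = steam_users * nvidia_share * top_tier_rate   # ~1.45M
    amd_top = steam_users * amd_share * top_tier_rate         # ~785k (assumed same rate)
    total_top = nvidia_top + amd_top                          # ~2.23M

    share_of_gamers = total_top / steam_users                 # ~1.8%
    per_generation = total_top / 4                            # ~560k per card generation

    print(f"{total_top / 1e6:.2f}M top-tier cards, "
          f"{share_of_gamers:.1%} of gamers, "
          f"~{per_generation / 1e3:.0f}k sales per generation")
    ```

    As noted, this is a ballpark, not sales data: the survey shares and the "1 in 4 upgrades per generation" split carry all of the uncertainty.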

    Not all gamers are Steam users, I admit, but then again, not all of those graphics card users paid MSRP for their cards - some of those are used/second hand/what have you, and those that resold/handed down a card and then upgraded get "double counted" if we are trying to look at the number of consumers who pay for new top tier cards (which is the data I'm after). I'm also making the assumption that people with earlier generations than a 680 (the 5x0s, the 4x0s, etc.) would have since upgraded, and that the 680 would be the oldest tier still in service for those that tend to purchase top tier cards. I think that is reasonable; it obviously will add some error, but we aren't looking for perfect numbers here, just a general ballpark.

    There is another side to this equation, which I can't quantify nearly as well. These cards may not have a high sales rate, but they have a very rich profit margin. <2% of sales could translate to a much bigger chunk of your total profit - I don't know what that number would be, but it has the potential to be significant: 2% of sales on a high margin item could conceivably be 5-10% (or more) of your total profit.

    So while the marketing aspect is important, is it important enough to jeopardize a very lucrative and high profit margin product just for more publicity (and in this case, it is undoubtedly double-edged; yes, the demand is "high" and that's good, but you also have consumers who aren't getting what they are trying to pay for, and that is bad)? I don't know.

    There is also this article - which isn't proof, I admit, but relevant.

    http://www.pcper.com/news/Graphics-Cards/NVIDIA-claims-GTX-680-sales-outpace-GTX-580

  • Quizzical (Legendary Member, Posts: 25,353)
    Originally posted by Jean-Luc_Picard

    I've yet to find a single review or benchmark where a FX-8350 beats a Core I7-4790k, but I'd be interested to read one. And even if that hypothetical situation exists... the Intel CPU still crushes the AMD one in all "real life" situations.

    Also, you say Bulldozer has pairs of cores sharing floating point units... well, the I7-4790k has only 4 cores with 8 logical cores, so 2 logical cores share much more than just a FPU.

    AMD GPUs are excellent, but CPU wise, AMD is still far behind Intel.

    EDIT and PS: not saying the FX-8350 is a bad CPU, and I'm sure it can run most games and applications at max settings with decent FPS when paired with a good graphic card, just that Intel CPUs (and not only the 4790k) definitely have better performance. And this reflects in gaming that when the AMD CPU "chokes" and frame rate drops under 30, the Intel CPU will keep higher frames per second.

    I'm not anti-AMD, my previous CPU was a Phenom II 1100T which is still running in a family computer (my mom's actually, 1100T and AMD HD6870 graphic card) and still delivers more than honorable performance for its age and the price I paid it back then, but I accept reality... Intel makes better CPUs than AMD.

    I'm not saying that AMD is just as good as Intel.  I agree that AMD is far behind Intel on CPUs today.  I expect Zen to close that gap quite a bit (largely because it's easier to greatly improve on a bad product than on a good one), but that might result in AMD being 90% as fast as Intel rather than 70% as fast.  And that's also quite a way off.  There are two arguments for buying AMD today:  the price tag, and if you want integrated graphics, AMD's graphics are massively superior to Intel's.

    What I'm saying is, if you want to create a benchmark where an FX-8350 beats a Core i7-4790K, it would be easy to do so.  All you have to do is find one instruction that AMD handles much better than Intel (e.g., if Intel doesn't have it at all) and call that instruction a huge number of times.  It doesn't mean that benchmark would be representative of anything that many people are likely to commonly run.

    But corner cases that people care about do exist.  I actually know less about CPUs than GPUs, but for such a corner case in GPUs, consider bitcoin mining.  AMD GPUs had a rotate instruction that Nvidia GPUs lacked (until Maxwell), and Nvidia GPUs are bad at shift, so even hacking together a rotate from other instructions is slow.  Running an algorithm where rotate accounts for a large fraction of instructions gives you results like this:

    http://www.hardocp.com/article/2011/07/13/bitcoin_mining_gpu_performance_comparison/2

    No one is claiming that a Radeon HD 5870 being more than three times as fast as a GeForce GTX 580 is a typical gaming result, but if all you cared about is bitcoin mining, Nvidia wasn't a serious option.
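    To make the corner case concrete: SHA-256, the hash behind bitcoin, leans heavily on 32-bit rotates. Hardware with a native rotate instruction does each one in a single operation; hardware without one has to emulate it with two shifts and an OR. A sketch of that emulation (in Python for readability; real miners run the equivalent as GPU kernel code):

    ```python
    MASK32 = 0xFFFFFFFF  # keep results in 32 bits, as GPU registers would

    def rotr32(x, n):
        """32-bit right-rotate emulated with two shifts and an OR --
        three operations where a native rotate instruction needs one."""
        return ((x >> n) | (x << (32 - n))) & MASK32

    def big_sigma0(x):
        """One of SHA-256's mixing functions: three rotates per call,
        invoked an enormous number of times per hash attempt, which is
        why a missing rotate instruction hurts bitcoin mining so much."""
        return rotr32(x, 2) ^ rotr32(x, 13) ^ rotr32(x, 22)
    ```

    With a native rotate, each rotr32 call collapses to one instruction; without it, every call costs three, and SHA-256 performs several rotates in every one of its 64 rounds.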

  • Quizzical (Legendary Member, Posts: 25,353)
    Originally posted by dotdotdash

     


    Originally posted by Ridelynn

    Originally posted by 13lake

    Originally posted by JohnP0100
    I guess even the diehard AMD supporters are resigned to the fact AMD is going to be bad for the next 12+ months.
    Is that why every single AMD Fury card is sold out on the planet ?

     

    Not saying anyone is right or wrong here, but could be because there isn't that much supply?

    I know when the 680's first came out, they stayed sold out for months after their release - and it wasn't so much because of the popularity as it was nVidia just couldn't produce enough chips.


     

    Nah, that's not what happened.

    Nvidia purposefully narrowed the production line to give the illusion of popularity. It's the same marketing strategy employed by most large tech companies as it motivates day 1 and early sales by convincing consumers that if they don't get one as soon as it releases then they'll have to wait months.

    AMD did precisely the same here. They purposefully narrowed the pipeline to drive sales, and now that limited number produced has sold out they can claim it's a testimony to the popularity of the card, which will drive further sales when the next prod run is with retailers.

    Nonsense.  AMD didn't have a bunch of Fiji chips in their basement twenty years ago and just waited until Wednesday to bother selling any.  Once you place an order for high-volume production of a chip, it takes a while to get a bunch of chips, and once they start coming in, it takes a while to accumulate a lot.

    If you launch a card as soon as you have enough chips to provide press samples, you're not going to have enough cards to fill retail demand at first even if your card is terrible, let alone if there's a lot of demand for it.  If you wait a few months to accumulate a whole bunch of inventory, then you've got plenty enough that everyone who wants to buy one on launch day can do so.

    If you're comfortably ahead, you might wait, as people who would have bought your new high end would have bought your old high end (or just waited) anyway if the new high end wasn't out yet.  But if you're far behind and you know it, you want to get your new high end out as soon as possible.  Even if it means that people have to wait a few weeks after launch to get their hands on one, you want to tell people interested in a high end product that you've got something good and they can get your product rather than having no alternative to buying from a competitor.

    That's the reason for the paper launch on the GeForce GTX 680:  the GTX 580 was far behind the Radeon HD 7970 and everyone knew it.  And it's also the reason for the soft launch (and possibly paper launch) on the Radeon R9 Fury X:  the Radeon R9 290X was far behind the GeForce GTX Titan X (or even the GTX 980 Ti) and everyone knew it.

  • Classicstar (Uncommon Member, Posts: 2,697)

    More and more reviews are popping up in the last few days showing a different picture than the day one reviews, and the Fury X seems not as slow as most think it is. The Fury X is not beaten in all games, for example Far Cry 4 or the latest Assassin's Creed.

    In some cases it performs better at 2560x1440 than the TX or Ti, and likewise at 4K the Fury X performs better than the TX and Ti.

    Overall the TX and Ti are faster, but if the drivers for the Fury X are optimized, and with Windows 10, we may see a completely different result.

    Hope to build full AMD system RYZEN/VEGA/AM4!!!

    MB:Asus V De Luxe z77
    CPU:Intell Icore7 3770k
    GPU: AMD Fury X(waiting for BIG VEGA 10 or 11 HBM2?(bit unclear now))
    MEMORY:Corsair PLAT.DDR3 1866MHZ 16GB
    PSU:Corsair AX1200i
    OS:Windows 10 64bit

  • dotdotdash (Uncommon Member, Posts: 488)


    Originally posted by Ridelynn
    I don't doubt that what you say is a strategy that gets employed. But we are talking about low volume, fairly niche products here that have a very high profit margin. I do have a bit of data to back that up:
    http://store.steampowered.com/hwsurvey
    *snip*
    There is also this article - which isn't proof, I admit, but relevant.
    http://www.pcper.com/news/Graphics-Cards/NVIDIA-claims-GTX-680-sales-outpace-GTX-580


    Indeed. The answer to the question your post ultimately poses is in the first thing you say about these products: they are high cost, low volume, high margin products that have limited market potential (relatively speaking) within a small niche.

    So, firstly I'd say it isn't about publicity. It's about sales generation. You can argue that publicity is sales generation and vice versa but that would miss out on the finer points of the two concepts. Anyway.

    By limiting the production pipeline so much that the limit itself becomes worthy of reporting in the media, you're generating far, far more consumer engagement than you would with any amount of advertising or paid-for social signalling (or whatever the feck you wish to call it now). Your reach is significantly increased, to the point where you engage with more people in the niche you're targeting and create a situation where other organisations push your product for you: first by reporting that you've sold out of first run products, then by creating a second story when (most of) those organisations report that the second run is with retailers. Some consumers may lose out on that first run, but they will be told when the second run is available. Further to that, you encourage consumers to buy your product by making it look far more luxurious than it actually is.

    The strategy ONLY works because they employ it at the high end of the market (whatever market we're talking about here). It also works on markets where there's an effective monopoly on a product. It would definitely not work at the middle or low end of the market. There are more products operating within the middle and low end of the market, and those products do not carry a "niche" or "luxury" image. And they never will.

    So yeah. Hope that makes sense to you?

  • dotdotdash (Uncommon Member, Posts: 488)


    Originally posted by Quizzical
    Originally posted by dotdotdash
    Originally posted by Ridelynn
    Originally posted by 13lake
    Originally posted by JohnP0100
    I guess even the diehard AMD supporters are resigned to the fact AMD is going to be bad for the next 12+ months.
    Is that why every single AMD Fury card is sold out on the planet?
    Not saying anyone is right or wrong here, but could be because there isn't that much supply? I know when the 680's first came out, they stayed sold out for months after their release - and it wasn't so much because of the popularity as it was nVidia just couldn't produce enough chips.
    Nah, that's not what happened. Nvidia purposefully narrowed the production line to give the illusion of popularity. It's the same marketing strategy employed by most large tech companies as it motivates day 1 and early sales by convincing consumers that if they don't get one as soon as it releases then they'll have to wait months. AMD did precisely the same here. They purposefully narrowed the pipeline to drive sales, and now that limited number produced has sold out they can claim it's a testimony to the popularity of the card, which will drive further sales when the next prod run is with retailers.
    Nonsense.  AMD didn't have a bunch of Fiji chips in their basement twenty years ago and just waited until Wednesday to bother selling any.  Once you place an order for high-volume production of a chip, it takes a while to get a bunch of chips, and once they start coming in, it takes a while to accumulate a lot.

    If you launch a card as soon as you have enough chips to provide press samples, you're not going to have enough cards to fill retail demand at first even if your card is terrible, let alone if there's a lot of demand for it.  If you wait a few months to accumulate a whole bunch of inventory, then you've got plenty enough that everyone who wants to buy one on launch day can do so.

    If you're comfortably ahead, you might wait, as people who would have bought your new high end would have bought your old high end (or just waited) anyway if the new high end wasn't out yet.  But if you're far behind and you know it, you want to get your new high end out as soon as possible.  Even if it means that people have to wait a few weeks after launch to get their hands on one, you want to tell people interested in a high end product that you've got something good and they can get your product rather than having no alternative to buying from a competitor.

    That's the reason for the paper launch on the GeForce GTX 680:  the GTX 580 was far behind the Radeon HD 7970 and everyone knew it.  And it's also the reason for the soft launch (and possibly paper launch) on the Radeon R9 Fury X:  the Radeon R9 290X was far behind the GeForce GTX Titan X (or even the GTX 980 Ti) and everyone knew it.


    You fundamentally misunderstood what I was saying in my post. Reread it.

  • SlyLoK (Rare Member, Posts: 2,698)
    Originally posted by Classicstar

    More and more reviews are popping up in the last few days showing a different picture than the day one reviews, and the Fury X seems not as slow as most think it is. The Fury X is not beaten in all games, for example Far Cry 4 or the latest Assassin's Creed.

    In some cases it performs better at 2560x1440 than the TX or Ti, and likewise at 4K the Fury X performs better than the TX and Ti.

    Overall the TX and Ti are faster, but if the drivers for the Fury X are optimized, and with Windows 10, we may see a completely different result.

    That is what I am thinking too. The card is getting good results at high resolutions because that is what AMD is marketing it for right now and what it is designing its drivers to push. Once they get around to optimizing the drivers as a whole I expect different results. That could be 2 or 3 months, if not longer, though.

  • Quizzical (Legendary Member, Posts: 25,353)

    I'm not expecting Fiji to get miracles from drivers down the road.  Tweaking drivers for a very new architecture such as GCN or Kepler is, indeed, a lot of work.  Different instructions are present with shaders accessed in different ways at a low level, and that can have a considerable impact on the proper way to do things.  But Fiji is still GCN, an architecture that you'd think AMD would understand pretty well by now.

    Fiji doesn't even change the ratio of computational power versus memory bandwidth as compared to Hawaii or Tahiti very much.  It does have Tonga's texture compression that Hawaii and Tahiti lacked, but I'm skeptical that that necessitates drivers needing a major overhaul as compared to Tonga.

    Nor do I expect that an ultra-wide memory bus would be a major issue unless it causes alignment issues.  I'm not sure how Fiji determines which memory addresses correspond to which physical chips, but it might not be all that different from Hawaii.  If Fiji does need different memory alignment, or especially if memory operations no longer come in 128 byte chunks, that could require a serious overhaul, though.

    What strikes me as more plausible is that HBM will have different latency characteristics from GDDR5, and that will take different optimizations.  But GDDR5 was already very, very high latency, so I'd be surprised if HBM is materially worse, especially when you're not going off package.  Maybe AMD hasn't figured out how to make caching memory in L2 play nicely with HBM or something.  Fiji does have a 2 MB L2 cache, the largest an AMD GPU has ever had and the same as Maxwell (throughout the line, even including the lower end GTX 750), so I wouldn't expect it to be starved for L2 cache.

    But none of those considerations explain why Fiji would do so much better compared to other cards at 4K than at lower resolutions.

  • JohnP0100 (Uncommon Member, Posts: 401)
    Originally posted by Classicstar

    More and more reviews are popping up in the last few days showing a different picture than the day one reviews, and the Fury X seems not as slow as most think it is. The Fury X is not beaten in all games, for example Far Cry 4 or the latest Assassin's Creed.

    In some cases it performs better at 2560x1440 than the TX or Ti, and likewise at 4K the Fury X performs better than the TX and Ti.

    Overall the TX and Ti are faster, but if the drivers for the Fury X are optimized, and with Windows 10, we may see a completely different result.

    I'm not sure what is worse.

    AMD once again shows the world that it can't code its way out of a paper bag.

    Or anyone believing AMD cause 'THIS time AMD will deliver!' right after the 'Fastest GPU' claim 2 weeks ago.


  • Loke666 (Epic Member, Posts: 21,441)
    Originally posted by Jean-Luc_Picard
    Originally posted by JohnP0100
    AMD once again shows the world that it can't code its way out of a paper bag.

    Ah, because in comparison, nVidia can ?

    Please, don't make me laugh. Both companies had their load of bugged drivers.

    Maybe so, but AMD/ATI's drivers are still worse. *Sigh* I miss 3DFX. Heck, I even miss Matrox.

    Having only 2 companies on the market ain't a good thing and it would be even worse if AMD closed down as well. That said I do run an Nvidia card right now and it is likely my next card will be Nvidia as well.

  • Quizzical (Legendary Member, Posts: 25,353)
    Originally posted by NightHaveN
    But yes, a bit of disappointment that after they swore they would release new cards, we end up with refresh models (300x), and a refresh with a different memory controller, RAM and cooler (Fury).

    Fiji is a new chip entirely, not a simple respin.  Yes, it's still GCN, but if you're going to dismiss it for not being the first chip of a new architecture, you'd have to throw out the entire GeForce 900 series as being illegitimate, too.  There's nothing wrong with building additional chips of a previous architecture.

  • QuizzicalQuizzical Member LegendaryPosts: 25,353

    If you want a high end gaming rig, AMD is not competitive with Intel right now on CPUs.  But AMD and Nvidia are both competitive on GPUs.  So AMD decided that rather than crippling the rig with an inferior CPU, they'd build a high end gaming rig the right way because they'd rather sell the GPUs and not the CPU than sell neither.

    They also have a version with an AMD CPU so that the fanboys can have something to buy.  I hope they use Radeon memory and a Radeon SSD in it, too.

  • GdemamiGdemami Member EpicPosts: 12,342


    Originally posted by Quizzical

    Fiji is a new chip entirely, not a simple respin.  Yes, it's still GCN, but if you're going to dismiss it for not being the first chip of a new architecture, you'd have to throw out the entire GeForce 900 series as being illegitimate, too.  There's nothing wrong with building additional chips of a previous architecture.

    The only Maxwell card in the 700 series was the GTX 745/750 (Ti); the rest were Kepler, an entirely different design.

    AMD, on the other hand, has used the same chips since 2012 across entire product lines: HD 77xx, HD 78xx, HD 8xxx, Rx 2xx, and now Rx 3xx.

    Hardly the same.

    In the same spirit, Fiji is likely anything but a "new" chip.

  • CleffyCleffy Member RarePosts: 6,412
    Originally posted by Jean-Luc_Picard

    I've yet to find a single review or benchmark where a FX-8350 beats a Core I7-4790k, but I'd be interested to read one. And even if that hypothetical situation exists... the Intel CPU still crushes the AMD one in all "real life" situations.

    Also, you say Bulldozer has pairs of cores sharing floating point units... well, the i7-4790K has only 4 cores with 8 logical cores, so 2 logical cores share much more than just an FPU.

    AMD GPUs are excellent, but CPU wise, AMD is still far behind Intel.

    EDIT and PS: not saying the FX-8350 is a bad CPU, and I'm sure it can run most games and applications at max settings with decent FPS when paired with a good graphics card; just that Intel CPUs (and not only the 4790K) definitely have better performance. And this shows in gaming: when the AMD CPU "chokes" and the frame rate drops under 30, the Intel CPU will keep higher frames per second.

    I'm not anti-AMD; my previous CPU was a Phenom II 1100T which is still running in a family computer (my mom's actually, an 1100T with an AMD HD6870 graphics card) and still delivers more than honorable performance for its age and the price I paid back then, but I accept reality... Intel makes better CPUs than AMD.

    That is a bit of an unfair comparison. The FX-8350 was released in October 2012, while the Core i7-4790K was released in April 2014; two years and a die shrink do wonders for performance. The direct competitor for the 8350 would be the 3770K, which was released the same year. In design work like 3D rendering it performs better, but within the margin of error, so about equal. It also performs similarly in painting programs. The FX-8370 was released at around the same time as the 4790K, but that was not really a competitive product, just a respin to get something on the market.

    There is a bit of a design philosophy difference between the architectures. Something you won't experience in games is cache thrashing, so naturally, if you can keep your working set under the cache available, you will get better IPC. This is why the FX-8350, despite being a process node behind, can compete against the 3770K in 3D rendering and painting applications: those workloads involve a lot of out-of-order accesses that cause thrashing, and the 8350 holds up better to it.
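    To illustrate what I mean by cache thrashing (just a toy sketch of the general idea, not from any actual benchmark), here's the classic row-major vs. column-major traversal. The column-order loop strides across memory and keeps evicting cache lines; in a low-level language the gap is dramatic, and even in Python the row-order loop is usually measurably faster:

```python
import timeit

N = 500
matrix = [[1] * N for _ in range(N)]

def sum_row_major():
    # Walks each row in order: consecutive accesses, cache-friendly.
    total = 0
    for row in matrix:
        for value in row:
            total += value
    return total

def sum_col_major():
    # Jumps to a different row on every access: strided, cache-hostile.
    total = 0
    for col in range(N):
        for row in range(N):
            total += matrix[row][col]
    return total

if __name__ == "__main__":
    # Same answer either way; only the memory access pattern differs.
    assert sum_row_major() == sum_col_major() == N * N
    print("row-major:", timeit.timeit(sum_row_major, number=5))
    print("col-major:", timeit.timeit(sum_col_major, number=5))
```

    (The effect is muted in Python, since list elements are pointers rather than packed values; in C or in a 3D renderer's inner loops the same access pattern matters far more.)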

    I actually suggest most people get a Core i7-4790, but I recognize the application for the FX series of processors.

    On the horizon, I am a bit impressed with the AM1 processors. They are not top of the range, but on a sub-$200 budget they are the only realistic option. They definitely have a place in an office environment where heavy CPU work is not needed.

    On the graphics front, I agree GPUs are lagging in general. Both NVidia and AMD need to move onto a new process node already; the current node has matured, and both companies have gotten the most they can out of it without building monolithic GPUs. I think HBM was pretty much the only thing that could have boosted performance of AMD cards without moving to a new process node. There is a silver lining with the new Fury X cards: they perform better at 4K, which is really the main application of such a pricey GPU. If you are gaming at 1080p, you shouldn't be spending in this price range in the first place, since there are plenty of $300 cards that will get you over 60 fps at 1080p in nearly all games.

    When it comes to drivers, AMD hasn't dropped the ball in over a decade. All the driver problems I have heard about since 2006 have been related to NVidia, not AMD, and even those were a while ago, with the 8500Ms catching fire. If you are avoiding a GPU maker because of driver issues today, you should really just let it go.

  • QuizzicalQuizzical Member LegendaryPosts: 25,353
    Originally posted by Cleffy
    Both NVidia and AMD need to move onto a new process node already. The current node has already matured and both companies have gotten the most they can out of it without building monolithic GPUs.

    The problem is that you can't move to a new process node that isn't there.  AMD probably would have handled some things differently if they had known in 2010 that, come the middle of 2015, they'd still be stuck on 28 nm.  There are 20 nm process nodes, but they're designed for low power chips.  Trying to build a 200 W GPU on a node designed for 1 W chips doesn't necessarily work very well.

  • HrimnirHrimnir Member RarePosts: 2,415
    Originally posted by Cleffy
    *snip*

    On the graphics front, I agree GPUs are lagging in general. Both NVidia and AMD need to move onto a new process node already; the current node has matured, and both companies have gotten the most they can out of it without building monolithic GPUs. I think HBM was pretty much the only thing that could have boosted performance of AMD cards without moving to a new process node. There is a silver lining with the new Fury X cards: they perform better at 4K, which is really the main application of such a pricey GPU. If you are gaming at 1080p, you shouldn't be spending in this price range in the first place, since there are plenty of $300 cards that will get you over 60 fps at 1080p in nearly all games.

    When it comes to drivers, AMD hasn't dropped the ball in over a decade. All the driver problems I have heard about since 2006 have been related to NVidia, not AMD, and even those were a while ago, with the 8500Ms catching fire. If you are avoiding a GPU maker because of driver issues today, you should really just let it go.

    I think you're being very generous with both of those statements.

    AMD has absolutely dropped the ball with drivers in the last 10 years. In the last 5, you could make a strong argument that they haven't. Hell, the whole microstuttering thing from 2013 was clear evidence they had been dropping the ball on the driver front. Was it *that* major of an issue? No, but again, I think you're being way too generous.

    Also, the Fury X does not perform better at 4K; at 4K it's roughly equal to the 980 Ti. The problem, in my opinion, is really 1440p. Most people haven't gone to 4K because there are diminishing returns in visual acuity over 1440p, especially considering how hard it hammers your GPU. I don't remember the exact number, but it's in the low single digits of gamers who run 4K. A much more significant number of people game at 1440p, and the 980 Ti/Fury X are absolutely valid purchases for that resolution. Hell, I would argue that they're really better suited to that resolution. IMO you still need something like SLI 980s or above to reliably game at 4K and not experience issues. But that's my opinion.

    "The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."

    - Friedrich Nietzsche

  • CleffyCleffy Member RarePosts: 6,412
    Micro-stutter wasn't an issue for single GPUs. It was an issue for multi-GPU setups, including NVidia's. You do make a good point on drivers if it is directed at CrossFire support.
  • ClassicstarClassicstar Member UncommonPosts: 2,697

    DX12 is the first to support dual GPUs.

    Hope to build full AMD system RYZEN/VEGA/AM4!!!

    MB:Asus V De Luxe z77
    CPU: Intel Core i7 3770K
    GPU: AMD Fury X(waiting for BIG VEGA 10 or 11 HBM2?(bit unclear now))
    MEMORY:Corsair PLAT.DDR3 1866MHZ 16GB
    PSU:Corsair AX1200i
    OS:Windows 10 64bit

Sign In or Register to comment.