
AMD Hawaii to be announced in two days; 512-bit memory bus; faster than Titan for $600?

Quizzical Member LegendaryPosts: 25,353

AMD has said that Nvidia's GK110 chip is about 30% bigger than AMD's upcoming Hawaii chip, and that the top Hawaii card will cost far less than $1000.

Purportedly leaked pictures show 16 memory chips, which presumably means a 512-bit memory bus.  If so, this would be the fourth GPU chip to ever have a 512-bit memory bus, following in the footsteps of the Radeon HD 2900 XT, GeForce GTX 280, and GeForce GTX 285.  The first of those was a bad card, the second suddenly seemed ridiculous when the Radeon HD 4870 showed up two months later, and the third was a GDDR3 card trying to compensate for living in a GDDR5 era by being really expensive.  But none had the benefit of being launched onto an old, mature process node when there wasn't a newer, better process node available, as Hawaii will have.

Benchmarks, on the other hand, are much easier to fake than photographs.  So claims of Hawaii being a little faster than Titan should be taken with appropriate caution.  Still, the die size makes a 40 CU part that nearly doubles the performance of a Radeon HD 7870 into a real possibility, and that would put it in Titan territory on performance.  Whether you can do that without blowing out the power budget is a different question, but AMD might take Titan's approach of more CUs clocked a little lower to save on power.
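
The 40 CU speculation is easy to sanity-check with standard GCN arithmetic (64 shaders per CU, 2 FLOPs per shader per clock from fused multiply-add); the 1 GHz clock below is an assumed figure for illustration, not a leak:

```python
# Back-of-the-envelope single-precision throughput for a hypothetical
# 40 CU GCN part. The clock speeds here are assumptions, not specs.

def gcn_tflops(compute_units: int, clock_ghz: float) -> float:
    shaders = compute_units * 64       # 64 shaders per GCN compute unit
    flops_per_clock = shaders * 2      # fused multiply-add counts as 2 FLOPs
    return flops_per_clock * clock_ghz / 1000

print(gcn_tflops(40, 1.0))  # 5.12 TFLOPS
print(gcn_tflops(20, 1.0))  # 2.56 TFLOPS -- a Radeon HD 7870 (20 CUs at 1 GHz)
```

Doubling the 7870's 20 CUs at the same clock lands in the neighborhood of Titan's roughly 4.5 TFLOPS, which is why the die-size rumor makes the performance claim plausible.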

Still, we'll know more in two days, as AMD has an event to reveal the parts, in Hawaii, appropriately enough.  Actual retail availability of hardware is likely a month or so later.  With it looking like 20 nm isn't going to be a terribly important process node, if Hawaii can hang with Titan, there probably won't be anything all that much faster coming until 16 nm in 2015.


Comments

  • Ridelynn Member EpicPosts: 7,383

    I expect we'll see on-par-with-780 performance levels for about $600. To which nVidia will probably cut to $575, and inside of 6 months we'll see both cards going for around $450-525 with various game deals and rebates and such.

    I don't think AMD can touch Titan yet, unless they have dramatically turned their energy use around and get some fantastic yields (that was the problem with Titan early on: the yields sucked). We'll see; the GCN cards to date aren't horrible on energy use in the first place, it's just that Kepler is better.

    We've already got 770-level performance and price parity, just with more energy use, in the 7970 GHz/Boost.

  • Quizzical Member LegendaryPosts: 25,353

    Kepler was not uniformly more energy efficient than Southern Islands.  The outlier on energy efficiency was Tahiti (Radeon HD 7900 series), which, as the first big chip on a 28 nm process node, was considerably less efficient than any other GPU chip of the generation--whether from AMD or Nvidia.  Bonaire (Radeon HD 7790) is already more efficient than any Kepler chip besides Titan.

    Furthermore, while still on 28 nm, there are more process nodes available now.  For the Southern Islands GPUs, AMD went with TSMC's 28 nm HPL, not because it was what they wanted, but because it was the first 28 nm process node that was ready for commercial production.  Now that TSMC has several other 28 nm process nodes available, does AMD stay on HPL or do they switch to HP (which Kepler uses) or HPM?  There could plausibly be significant gains from that, and I don't know what Bonaire uses.  Even if they do stay on 28 nm HPL, a more mature process node allows better yields at lower voltages.

    Furthermore, Tahiti was the first GCN chip that AMD made.  They surely found many little tweaks that they wanted to make later--and some of those tweaks went into Pitcairn (7800 series) and Cape Verde (7700 series).  With Hawaii as a whole new chip, AMD could make whatever tweaks they wanted and figured out after they had commercial cards in their hands from the whole 7000 series.

    Does that add up to Hawaii being faster and/or more energy efficient than Titan?  Likely not--but we shouldn't dismiss the possibility out of hand.  More shaders clocked lower can get you the same performance at better energy efficiency than fewer shaders clocked higher--but at the expense of die size, which Titan has 30% more of than Hawaii.  Though AMD has probably done quite the opposite on video memory, with a 512-bit memory bus clocked lower as compared to Titan's 1.5 GHz, 384-bit memory bus.  While that does add to PCB cost, it doesn't necessarily come at the expense of die size.  Remember how AMD's 256-bit bus in Barts (Radeon HD 6800 series) took the same die space as the 128-bit bus in Juniper (Radeon HD 5700 series), because clocking them lower allowed for much smaller memory controllers?
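
    The bus-width tradeoff above is easy to put numbers on. A quick sketch (only Titan's 1.5 GHz figure comes from the post; the Hawaii memory clock is an assumption for illustration):

```python
# Peak GDDR5 bandwidth = bus width in bytes * effective transfer rate.
# GDDR5 moves 4 bits per pin per memory-clock cycle, so a 1.5 GHz
# memory clock means a 6 GT/s effective rate.

def bandwidth_gb_s(bus_bits: int, mem_clock_ghz: float) -> float:
    return bus_bits / 8 * (mem_clock_ghz * 4)

print(bandwidth_gb_s(384, 1.5))   # Titan, 384-bit at 1.5 GHz: 288.0 GB/s
print(bandwidth_gb_s(512, 1.25))  # assumed Hawaii, 512-bit at 1.25 GHz: 320.0 GB/s
```

So a 512-bit bus can out-deliver Titan's bandwidth even with the memory clocked well below 1.5 GHz, which is the whole point of the wider, slower approach.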

  • The user and all related content has been deleted.
  • Quizzical Member LegendaryPosts: 25,353

    So AMD has now announced the card, the Radeon R9 290X.  And they were more interested in talking about what it can do with audio than graphics.

    Apparently it will be available for pre-order in early October, though.  The rest of the R7 and R9 lineup looks like it could be rebrands.

  • Mawnee Member UncommonPosts: 245

    "Announcing AMD Hawaii! The R9 290X is capable of 5 TFLOPS and gets high scores in AMD-centric benchmarks! How much better than Titan? Ummm... let's talk about audio! rabble, rabble, rabble!"

    Basically it took AMD all year to come up with a single-card solution that can keep up with a Titan, and only just. So they are going to try to gain leverage through price wars. This is good for us as gamers overall, as Nvidia finally has to lower prices to compete. I just hope it doesn't become the norm that it takes them a year to answer back, because it means they aren't pushing out true next-gen hardware as quickly when the competition is so slow to, uh... compete.

    I currently own a Titan, BIOS modded and OC'd @ 1176 MHz (no throttle). I was really hoping Hawaii would blow it out of the water so I could upgrade to it, or at the very least force Nvidia to push out the GTX 800 series sooner. From a single-card standpoint I don't really have an upgrade option yet, though I suppose adding a second Titan will become a cheaper option very soon.

  • Gaia_Hunter Member UncommonPosts: 3,066
    Originally posted by Mawnee

    "Announcing AMD Hawaii! The R9 290X is capable of 5 TFLOPS and gets high scores in AMD-centric benchmarks! How much better than Titan? Ummm... let's talk about audio! rabble, rabble, rabble!"

    Basically it took AMD all year to come up with a single-card solution that can keep up with a Titan, and only just. So they are going to try to gain leverage through price wars. This is good for us as gamers overall, as Nvidia finally has to lower prices to compete. I just hope it doesn't become the norm that it takes them a year to answer back, because it means they aren't pushing out true next-gen hardware as quickly when the competition is so slow to, uh... compete.

    I currently own a Titan, BIOS modded and OC'd @ 1176 MHz (no throttle). I was really hoping Hawaii would blow it out of the water so I could upgrade to it, or at the very least force Nvidia to push out the GTX 800 series sooner. From a single-card standpoint I don't really have an upgrade option yet, though I suppose adding a second Titan will become a cheaper option very soon.

    Until 20 nm is out that will be hard, unless Mantle turns out to be relevant.

    Currently playing: GW2
    Going cardboard starter kit: Ticket to ride, Pandemic, Carcassonne, Dominion, 7 Wonders

  • Ridelynn Member EpicPosts: 7,383

    Originally posted by Mawnee
    I currently own a Titan, BIOS modded and OC'd @ 1176 MHz (no throttle). I was really hoping Hawaii would blow it out of the water so I could upgrade to it, or at the very least force Nvidia to push out the GTX 800 series sooner. From a single-card standpoint I don't really have an upgrade option yet, though I suppose adding a second Titan will become a cheaper option very soon.

    Unrealistic expectations.

  • Quizzical Member LegendaryPosts: 25,353
    Originally posted by Gaia_Hunter
    Originally posted by Mawnee

    "Announcing AMD Hawaii! The R9 290X is capable of 5 TFLOPS and gets high scores in AMD-centric benchmarks! How much better than Titan? Ummm... let's talk about audio! rabble, rabble, rabble!"

    Basically it took AMD all year to come up with a single-card solution that can keep up with a Titan, and only just. So they are going to try to gain leverage through price wars. This is good for us as gamers overall, as Nvidia finally has to lower prices to compete. I just hope it doesn't become the norm that it takes them a year to answer back, because it means they aren't pushing out true next-gen hardware as quickly when the competition is so slow to, uh... compete.

    I currently own a Titan, BIOS modded and OC'd @ 1176 MHz (no throttle). I was really hoping Hawaii would blow it out of the water so I could upgrade to it, or at the very least force Nvidia to push out the GTX 800 series sooner. From a single-card standpoint I don't really have an upgrade option yet, though I suppose adding a second Titan will become a cheaper option very soon.

    Until 20 nm is out that will be hard, unless Mantle turns out to be relevant.

    If you already have a Titan, I see no reason to seriously consider upgrading from it until 16 nm is out.  Yes, 16 nm, not 20 nm.  20 nm is likely to be short-lived, so that the vendors won't launch huge die parts, and may not launch anything much faster than Titan or Hawaii on 20 nm.

    Mantle, meanwhile, will be about as relevant as GPU PhysX.  You'll see some nifty demos with it, but it won't be widely adopted.  It will be more of a marketing gimmick than an important feature, as coding stuff that will only run on a small fraction of GPUs is a waste of effort.

  • Ridelynn Member EpicPosts: 7,383

    Originally posted by Quizzical
    Mantle, meanwhile, will be about as relevant as GPU PhysX.  You'll see some nifty demos with it, but it won't be widely adopted.  It will be more of a marketing gimmick than an important feature, as coding stuff that will only run on a small fraction of GPUs is a waste of effort.

    Maybe, but with both major consoles running on GCN, it wouldn't be that far-fetched to see it get wider use than PhysX. It has (or rather, will have) wider availability in terms of hardware adoption.

  • drbaltazar Member UncommonPosts: 7,856
    Could it be we're seeing things wrong? Could this GPU be using quad-channel GDDR5?
  • Quizzical Member LegendaryPosts: 25,353
    Originally posted by drbaltazar
    Could it be we're seeing things wrong? Could this GPU be using quad-channel GDDR5?

    That would be terminally stupid, as it would be severely starved for memory bandwidth.  Would you really make a top end card with considerably less memory bandwidth than your next card down?

  • drbaltazar Member UncommonPosts: 7,856
    So they couldn't have used quad-channel GDDR5 at 256 bits each channel?
  • drbaltazar Member UncommonPosts: 7,856
    Nvm, lol! I should have read what GDDR5 means first! It would have to be called something else, since GDDR5 is dual channel; it would need to be a new thing! GDDR6? ROFL! Going from an actual dual-channel GPU to a quad-channel GPU? Mm, this would be a lot of work!
  • Quizzical Member LegendaryPosts: 25,353
    Originally posted by drbaltazar
    So they couldn't have used quad-channel GDDR5 at 256 bits each channel?

    Each memory channel is 64 bits wide.  The way that GDDR5 works, you have your choice of two 32-bit connections to GDDR5 memory chips, or four 16-bit connections.  The latter is used when you want very high video memory capacity, but not on most consumer cards.

    Names of memory standards are arbitrary, and not intrinsically meaningful.  GDDR anything is high power, high bandwidth, and typically only used in video cards and video card-like devices such as the Intel Xeon Phi, which is basically a video card that can't do graphics.  LPDDR anything is low power, low bandwidth, and ideal for tablets and cell phones.  Ordinary DDR is somewhere in between, and commonly used for main system memory in desktops, laptops, and servers.  They increment the number every time JEDEC releases a new standard of a given type, so the next high bandwidth, high power memory standard that targets video cards will be GDDR6, but that doesn't tell us anything about how GDDR6 will work.
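
    The channel math above also explains the 16 chips in the purportedly leaked photos. A small sketch of the arithmetic:

```python
# Chip counts implied by a bus width, per the description above:
# channels are 64 bits wide, and each GDDR5 chip attaches via a
# 32-bit connection (or 16-bit, for high-capacity layouts).

def memory_channels(bus_bits: int) -> int:
    return bus_bits // 64

def chips_needed(bus_bits: int, bits_per_chip: int) -> int:
    return bus_bits // bits_per_chip

print(memory_channels(512))   # 8 channels
print(chips_needed(512, 32))  # 16 chips -- matches the leaked pictures
print(chips_needed(512, 16))  # 32 chips for a high-capacity variant
```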

    -----

    Rumors put the release date of the Radeon R9 290X as October 15.  Or at least, that's supposedly when the embargo ends and reviews can go up.  It's not clear whether there will be a ton of cards available at launch.  It shouldn't be restricted by process node capacity, but as with any chip, it will be limited by how much time has passed since AMD gave the orders for a full production run of the chips.

  • Classicstar Member UncommonPosts: 2,697

    AMD R9 290X, 512-bit, Mantle, and the price right?... It seems in theory a winner: if all the engines support Mantle (Frostbite 3 already does), and knowing AMD already owns all the consoles, the 290X will be extremely fast and could overtake Titan easily. In theory; for now it's speculation, of course.

    Hope to build full AMD system RYZEN/VEGA/AM4!!!

    MB:Asus V De Luxe z77
    CPU:Intell Icore7 3770k
    GPU: AMD Fury X(waiting for BIG VEGA 10 or 11 HBM2?(bit unclear now))
    MEMORY:Corsair PLAT.DDR3 1866MHZ 16GB
    PSU:Corsair AX1200i
    OS:Windows 10 64bit

  • Ridelynn Member EpicPosts: 7,383

    Originally posted by Classicstar
    AMD R9 290X, 512-bit, Mantle, and the price right?... It seems in theory a winner: if all the engines support Mantle (Frostbite 3 already does), and knowing AMD already owns all the consoles, the 290X will be extremely fast and could overtake Titan easily. In theory; for now it's speculation, of course.

    Mantle has some relevance with AMD having the consoles - that may make it attractive for developers to actually use it, but I don't think it will come to any sort of prominence, because it won't work on PC nVidia stuff (unless they release some sort of driver or wrapper to pass it through to DirectX or something on Intel/nVidia cards, which isn't out of the question, but isn't likely). I wouldn't count on Mantle really gaining much traction - kind of like how GLIDE died out as soon as we got a viable platform-agnostic alternative. I think we'll see more OpenGL stuff than Mantle stuff.

    As far as the 290X compared to Titan... the consoles have nothing to do with it. Mantle has nothing to do with it, even. The card will sit where it sits based on its hardware.

  • Quizzical Member LegendaryPosts: 25,353
    I'm going to predict that the Radeon R9 290X will, on average, be a little slower than Titan, but will be close enough that it beats Titan in some games.
  • Classicstar Member UncommonPosts: 2,697

    Originally posted by Ridelynn

    Originally posted by Classicstar
    AMD R9 290X, 512-bit, Mantle, and the price right?... It seems in theory a winner: if all the engines support Mantle (Frostbite 3 already does), and knowing AMD already owns all the consoles, the 290X will be extremely fast and could overtake Titan easily. In theory; for now it's speculation, of course.

    Mantle has some relevance with AMD having the consoles - that may make it attractive for developers to actually use it, but I don't think it will come to any sort of prominence, because it won't work on PC nVidia stuff (unless they release some sort of driver or wrapper to pass it through to DirectX or something on Intel/nVidia cards, which isn't out of the question, but isn't likely). I wouldn't count on Mantle really gaining much traction - kind of like how GLIDE died out as soon as we got a viable platform-agnostic alternative. I think we'll see more OpenGL stuff than Mantle stuff.

    As far as the 290X compared to Titan... the consoles have nothing to do with it. Mantle has nothing to do with it, even. The card will sit where it sits based on its hardware.


    What I've read is that if an engine also supports Mantle, games that run on that engine will run faster on the card, which will give AMD the advantage over Nvidia whenever a game supports Mantle. And you can still choose between DX and Mantle.

    Mantle has a lot more chance now than ever before. In 2007 with Vista we know why it failed; it's an entirely different situation now, and with both consoles using Mantle it's also easy to port games to PC. I really don't see why it should fail again.

    Unless developers just don't support it, that is. Porting a game is super easy now with Mantle, and I very much doubt they'll skip it; it now costs them almost nothing to port games, while in the past the expense was one of the reasons not to port to PC.

    Frostbite is first; it's a matter of a short time before we hear that Unreal and CryEngine also support Mantle.

    We will see how the benchmarks go with Battlefield 4 compared to Titan.

    Also, the card has some other things that will improve performance and could beat Titan, but of course the hardware will have to deliver, and we will see soon enough.

    If the price is right and the 290X is almost as fast, it's a winner for all gamers, because Nvidia will surely lower their prices for the 780 and Titan, which are overpriced anyway.


  • 13lake Member UncommonPosts: 719

    A good 290X sample, depending on yields, will probably beat Titan in everything (those are gonna be the OC and extreme versions of the cards, like Sapphire Toxic, Asus Matrix, etc.).

    And normal versions of the card are gonna lose to Titan in about 25%-45% of games, by a margin of 1-4 fps.

  • sacredfool Member UncommonPosts: 849
    Originally posted by 13lake

    A good 290X sample, depending on yields, will probably beat Titan in everything (those are gonna be the OC and extreme versions of the cards, like Sapphire Toxic, Asus Matrix, etc.).

    And normal versions of the card are gonna lose to Titan in about 25%-45% of games, by a margin of 1-4 fps.

    Unrealistic unless it's AMD running those benchmarks.


    Originally posted by nethaniah

    Seriously Farmville? Yeah I think it's great. In a World where half our population is dying of hunger the more fortunate half is spending their time harvesting food that doesn't exist.


  • Quizzical Member LegendaryPosts: 25,353
    Originally posted by 13lake

    A good 290X sample, depending on yields, will probably beat Titan in everything (those are gonna be the OC and extreme versions of the cards, like Sapphire Toxic, Asus Matrix, etc.).

    And normal versions of the card are gonna lose to Titan in about 25%-45% of games, by a margin of 1-4 fps.

    You raise an interesting point, though you probably don't realize what it is.  Nvidia has largely locked down overclocking on Titan by not allowing board partners to modify the card.

    If Asus or Sapphire or Gigabyte or whoever wants to make a premium version of a Radeon R9 290X with a 1.1 GHz stock clock, 12+2 power phases, a 3 slot cooler, two 8-pin PCI-E power connectors, and a 375 W TDP, AMD will probably let them.  That card would probably beat Titan in most things, even if the stock R9 290X tends to be slower than Titan.  Trying to make a huge chip with more hardware clocked lower for energy efficiency does tend to allow massive overclocking, after all.

    But you know what it probably wouldn't beat?  A GK110 chip given the same huge overclocking treatment.  There's no reason why Nvidia can't unshackle Titan and let board partners make a premium overclocking card out of it.  And I don't see any reason for Nvidia not to do that, as Nvidia wants to still be able to claim that they have the fastest card.  GK110 is so much faster than Tahiti that Nvidia could lock down overclocking on Titan and still beat a heavily overclocked Tahiti, but they won't be able to do that against Hawaii.

  • Ridelynn Member EpicPosts: 7,383

    Mantle won't make the card any faster. The card is as fast as it can be, given the various clocks you set it to run at.

    What Mantle will do is allow the software to perform better.

    A typical game goes something like this:
    You -> Windows -> DirectX -> Video Driver -> CPU -> Video Card -> Monitor

    Mantle just allows you to skip the generic DirectX layer and go straight through the video card driver, which allows the software to run a bit more efficiently. It also allows you to make direct video card calls that DirectX can't support because of its generic nature.

    So software can run faster, because it won't be constrained by the DirectX API, but the video card itself won't run any differently, and programmers have to explicitly support Mantle alongside DirectX or OpenGL; it won't run on nVidia or Intel graphics for the PC, and it requires separate code bases - so I don't predict it will be terribly popular for the PC. Devs ~may~ use it for console development, and then wrap it inside of DirectX for the PC port and not expose the Mantle API so they don't have to support it.
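
    Ridelynn's point about skipping the generic layer is essentially a fixed per-draw-call overhead argument. A toy model (all the overhead figures are invented for illustration):

```python
# Toy model: CPU time per frame spent just submitting draw calls.
# The microsecond costs are made up; the point is that a thinner API
# layer shrinks the fixed per-call cost, so CPU-bound scenes with
# many draw calls benefit the most.

def submission_ms_per_frame(draw_calls: int, overhead_us_per_call: float) -> float:
    return draw_calls * overhead_us_per_call / 1000

draw_calls = 10_000  # a draw-call-heavy scene
print(submission_ms_per_frame(draw_calls, 5.0))  # generic layered path: 50.0 ms
print(submission_ms_per_frame(draw_calls, 1.0))  # thinner direct path: 10.0 ms
```

At 60 fps the frame budget is about 16.7 ms, so cutting submission cost like this can be the difference between CPU-bound and not; GPU-bound scenes see little change, consistent with the "card itself won't run any differently" point.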

  • 13lake Member UncommonPosts: 719
    Originally posted by Quizzical
    Originally posted by 13lake

    A good 290X sample, depending on yields, will probably beat Titan in everything (those are gonna be the OC and extreme versions of the cards, like Sapphire Toxic, Asus Matrix, etc.).

    And normal versions of the card are gonna lose to Titan in about 25%-45% of games, by a margin of 1-4 fps.

    You raise an interesting point, though you probably don't realize what it is.  Nvidia has largely locked down overclocking on Titan by not allowing board partners to modify the card.

    If Asus or Sapphire or Gigabyte or whoever wants to make a premium version of a Radeon R9 290X with a 1.1 GHz stock clock, 12+2 power phases, a 3 slot cooler, two 8-pin PCI-E power connectors, and a 375 W TDP, AMD will probably let them.  That card would probably beat Titan in most things, even if the stock R9 290X tends to be slower than Titan.  Trying to make a huge chip with more hardware clocked lower for energy efficiency does tend to allow massive overclocking, after all.

    But you know what it probably wouldn't beat?  A GK110 chip given the same huge overclocking treatment.  There's no reason why Nvidia can't unshackle Titan and let board partners make a premium overclocking card out of it.  And I don't see any reason for Nvidia not to do that, as Nvidia wants to still be able to claim that they have the fastest card.  GK110 is so much faster than Tahiti that Nvidia could lock down overclocking on Titan and still beat a heavily overclocked Tahiti, but they won't be able to do that against Hawaii.

     

    Yes, exactly my point on the first part of your post, though personally I don't even count cards with the reference board, power phases, and cooler, as they will be replaced by premium models within a really small time frame. The question is how aftermarket premium 290X cards will clash with aftermarket premium 780 cards and Titan as it is now.

    As for how Nvidia will take back the crown: yes, it is very much possible that they will "unleash" Titan, but it's also possible that Nvidia will release a Titan Ultra, to avoid dropping prices on 780s and normal Titans.

    It will be interesting to see if the 290X can keep within 5% of a Titan Ultra/Unleashed.

  • Ridelynn Member EpicPosts: 7,383

    Well, there's also the difference between nVidia Boost and AMD PowerTune

    To a large extent, Titan cards are all OCed as much as they can be, given that Boost works by starting at some low default clock and then scaling upwards as far as it can until it hits a cap or a thermal/power limit.

    Sure, you can find cases where you are hitting the cap and just raise the cap, and you can put on premium coolers to help keep from hitting thermal limits, but really that doesn't require special BIOSes or anything, because Boost already has it built in. You also have to rely on Boost to give you performance, and that will vary title to title, because you start low and have to build up.

    An overclock doesn't really do much. You can start at a higher base clock, but not hugely higher, because you can run into some bad thermal/power problems, and nearly every title can be accelerated by Boost past the base clock setting anyway. You can lift the caps, but you're still relying on Boost to drive you to them, and assuming you're not going to get throttled on anything before that. Overclocks largely get bypassed by the Boost mechanism, so you don't see much benefit; on the flip side, you're almost always able to get maximum performance from the card, because it is in effect automatically overclocking itself in a "safe" manner. You may be able to make it more aggressive, but you're not really going to affect Boost much by itself. If you lift the power/thermal cap you could get into some trouble, and if you raise the base clock too high you will definitely get into trouble, but raising the clock cap is safe, because you'll likely get saved by the power/thermal caps (although it's largely ineffectual, since you're probably already getting saved by a power/thermal cap).

    AMD PowerTune works differently: you start at a high clock, and if it senses a thermal/power limit, it throttles you down until you're safe. Every title starts at the base clock, and only comes down if required. Here an overclock affects everything, since you already start high and the overclock goes right on top of that. Overclock too far, and PowerTune will rein you back in (to a point). You can lift the PowerTune cap and get into some trouble, but aside from that it's pretty safe to OC, and you get nearly the full benefit all the time.

    So even if you could put out a "turbocharged" Titan, it wouldn't matter too much; you'd still be bound by Boost. The best thing you could do is put a better cooler on it, which keeps you from hitting the thermal cap as much, and that doesn't require any fiddling with BIOS settings or clocks or anything else, because it's already part of Boost.
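
    The two control strategies described above can be sketched as opposite-direction loops; every number here is invented for illustration, not measured behavior:

```python
# Boost climbs from a low base clock until it runs out of headroom or
# hits the clock cap; PowerTune starts at the rated clock and throttles
# down only while over a limit. All figures are illustrative.

def boost_clock(base: int, cap: int, headroom_steps: int, step: int = 13) -> int:
    clock = base
    while clock + step <= cap and headroom_steps > 0:
        clock += step          # climb while thermal/power headroom remains
        headroom_steps -= 1
    return clock

def powertune_clock(rated: int, floor: int, over_limit_steps: int, step: int = 13) -> int:
    clock = rated
    while over_limit_steps > 0 and clock - step >= floor:
        clock -= step          # throttle down until back under the limit
        over_limit_steps -= 1
    return clock

print(boost_clock(base=837, cap=993, headroom_steps=8))            # 941
print(powertune_clock(rated=1000, floor=727, over_limit_steps=3))  # 961
```

The asymmetry is the point: a raised base clock mostly gets absorbed by Boost's own climb, while an overclock on a start-high PowerTune card shifts every title's starting clock.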

  • 13lake Member UncommonPosts: 719

    As the manufacturing process matures, you get better yields and some very good samples that can OC high and be turned into Ultra models. The path of GK110 has been hard, long, and rocky; if TSMC's process has finally matured enough, Nvidia might have been able to get a few thousand perfect-plus chips that can be turned into an Ultra version of the Titan.

    It all depends on the wafers at TSMC not disappointing: they have to deliver insanely good yields, or else there would only be a dozen or so Titan Ultra cards, which would be useless.

    What I'm getting at is that even though you feel like a turbocharged Titan is the maximum of the GK110 chip, remember that there are still disabled parts of the chip, and that if more than a thousand golden samples can be made, we will get a Titan that is miles better than the current Titan.
