Apparently making a new GPU architecture from scratch is hard

Quizzical Member Legendary Posts: 25,348
Back in 2006, when AMD paid several billion dollars to buy ATI, some people questioned whether the purchase was worth it.  Wouldn't it be cheaper to just make your own GPU architecture and get into the market that way?

Intel's efforts at doing that gave reason to think it might not be that easy.  Intel promised that the Larrabee GPU would blow away the competition just like their recent X25-M SSD had.  Then they realized that trying to do graphics well on an x86-based part was too hard, and so Larrabee would only be for compute.  Then Intel's effort at showing off Larrabee for compute was a total fiasco in which a heavily overclocked Larrabee prototype lost to an AMD part already available at retail for under $400 in Intel's own cherry-picked benchmark.  Larrabee would eventually turn into Xeon Phi, an awful compute part that now seems to have been put out of its misery.

Intel has long had integrated GPUs, of course.  They're promising to scale that up and make Xe graphics into the first Intel GPU that is actually good, after more than two decades of trying.  And to be fair, the integrated GPU in Ice Lake is the most competitive that Intel has ever been on integrated graphics.  Intel was already ahead of AMD on process nodes for integrated GPUs, and Ice Lake was Intel's first major new product on a new, better process node, with a heavy emphasis on improving GPU performance.  That's enough to make it kind of competitive-ish against AMD's best integrated GPU, at least if you exclude game consoles.  Intel still typically loses in gaming performance, even though AMD's integrated GPU is built on a slightly improved version of a process node that they've been using for GPUs for 3 1/2 years, and on a warmed-over version of an eight-year-old architecture.  And that's ignoring that AMD's integrated GPUs are about to jump forward with a move to 7 nm; we should probably expect them to announce something or other about that next week at CES.

But it's not just Intel.  Samsung spent several years trying to develop their own GPU architecture.  Samsung doesn't need high performance for PC gaming, let alone HPC compute.  They just need something good for cell phones, though that does mean that energy efficiency is at quite a premium.  Finally, they gave up last June and decided to just license AMD's GPUs instead:

https://news.samsung.com/global/amd-and-samsung-announce-strategic-partnership-in-ultra-low-power-high-performance-graphics-technologies

AMD recently said that this basically means you can expect some future Samsung SoCs for cell phones or whatever to have a Radeon GPU in them.  Of course, that Radeon GPU simply takes the place of the ARM GPUs Samsung has used in the past, or the Imagination GPUs they could have used instead.  The Mali and PowerVR brand names don't have the same sort of high-performance gaming cachet that the Radeon brand name does (or, for that matter, GeForce), but a Radeon GPU that isn't allowed to use more than a few watts won't necessarily offer all that much performance.  Regardless, if Samsung's efforts at building their own GPU had been successful, they wouldn't have needed to license AMD's.

And then comes the hook for today's post.  Back in 2017, Apple announced that they were no longer going to use Imagination GPUs for their iDevices.  Instead, they were going to make their own GPU.  Well, they've now walked that back.  Apparently they've signed a new licensing agreement with Imagination, after all.

https://www.anandtech.com/show/15272/imagination-and-apple-sign-new-agreement

There has been speculation that Apple's intent had been to drive down the value of Imagination so that they could buy it.  Apple is by far Imagination's biggest customer, so losing Apple's business raised questions about whether Imagination could survive at all.  Regardless, if Apple had a terrific GPU of their own, they wouldn't need to license Imagination's.  You don't see AMD or Nvidia license other vendors' GPUs for their own chips, after all, though Intel did use Imagination for one generation of netbook chips.

The number of major GPU vendors has declined over the years, due to mergers or bankruptcies or whatever.  But all six of the significant GPU vendors have been building GPUs for a long time:  Nvidia, AMD, Intel, ARM, Qualcomm, and Imagination.  How significant a GPU vendor Intel is could be disputed; they have enormous market share, but only because their GPUs come included with the Intel CPUs that people actually want.  So there are really only five GPU vendors whose GPUs anyone actually wants to use.

The lack of new competitors isn't for lack of trying.  Intel, Samsung, and now Apple have tried hard to build some excellent GPUs.  And failed miserably at it.  These aren't obscure start-ups that might have been scams to begin with.  Intel, Samsung, and Apple are all giants of the semiconductor industry, with enormous amounts of success building some terrific computer chips.  But they haven't been able to build good GPUs on their own.  Because apparently that's hard to do.

Comments

  • TEKK3N Member Rare Posts: 1,115
    Of course building a GPU from scratch is hard.
    Same as building a new CPU architecture.
    That’s the reason why NVIDIA never tried.

    But the reason why AMD bought ATI is twofold.
    The first reason was to eliminate a direct competitor (ATI).
    The second was to have the technology to work on a combined CPU+GPU architecture, called an APU, in order to rival Intel’s superiority by offering something Intel didn’t have.
    Unfortunately that strategy didn’t work quite as predicted, as making an APU that can rival a CPU or GPU is even harder than focusing on each individually, which is what AMD ended up doing eventually.

    I don’t think AMD has given up on APUs, but it will take them longer to make something that can replace both a CPU and a GPU.

    AMD’s products have improved a lot in recent years, but they still have a problem of excessive wattage and overheating compared to the competition.
    They should focus on that, because otherwise their products are really good, specifically their new CPUs.
  • Quizzical Member Legendary Posts: 25,348
    But there are a number of companies that have their own CPU designs.  Apple and Qualcomm prominently do so with their own ARM cores, and some other companies have done so with server-focused designs.  Nvidia did that briefly, too, and it made it to a launched product.  If anything, the case against building your own CPU core is not so much that it's too hard as that the latest off-the-shelf ARM Cortex cores are good enough and cheap enough that there's no need to design your own.

    I think it's striking that building your own GPU seems to be much harder than that, as the recent entrants into the GPU market have struggled mightily.  For example, Apple has been very successful with their CPU designs, but failed at making their own GPU.
  • TEKK3N Member Rare Posts: 1,115
    Well, building CPUs for mobiles is one thing.
    Building CPUs for PCs is another.

    Apple doesn’t use their own CPUs in their most powerful machines, like the Mac and MacBook.
    They use Intel Xeon or Core M chips.

    Meanwhile, both the PS4 and Xbox One use AMD CPUs and GPUs.
    They are basically the same machine with a different OS.
    Most people who buy either console (or both) don’t know they are buying an outdated AMD PC.
    Marketing at its best.

    Intel and AMD are the only two manufacturers that can make CPUs for high-end machines; it’s not only the GPUs that are hard to make.
  • Ridelynn Member Epic Posts: 7,383
    Apple has been using their own GPU design since the iPhone 8.

    The 7 was the last to use an Imagination-powered GPU.

    Apple’s current agreement doesn’t move Apple back to sourcing PowerVR chips - it just allows them to use some IP in their design.

    https://www.eenewsanalog.com/news/how-much-did-apple-pay-settle-imagination/page/0/1

    You can’t really make a modern GPU without stepping on someone’s toes and needing to license something somewhere - just like you couldn’t make an x86 CPU today without needing to pay someone something somewhere. Licensing a patent to do bit blitting or FFA transforms or something specific isn’t the same thing as using an entire GPU platform.
  • TEKK3N Member Rare Posts: 1,115
    Ridelynn said:
    Apple has been using their own GPU design since the iPhone 8.

    The 7 was the last to use an Imagination-powered GPU.

    Apple’s current agreement doesn’t move Apple back to sourcing PowerVR chips - it just allows them to use some IP in their design.

    https://www.eenewsanalog.com/news/how-much-did-apple-pay-settle-imagination/page/0/1

    You can’t really make a modern GPU without stepping on someone’s toes and needing to license something somewhere - just like you couldn’t make an x86 CPU today without needing to pay someone something somewhere. Licensing a patent to do bit blitting or FFA transforms or something specific isn’t the same thing as using an entire GPU platform.
    If you read my post closely, I wasn’t talking about Apple smartphones; I was talking about their desktop and laptop range.
    For those they use Intel CPUs.
    They don’t need any particular licence to make a CPU, since Apple is their own thing; they can do whatever they want with the brand.

    As for the PC platform, Intel and AMD own the rights to use the x86 architecture, which they bought from IBM.
    I don’t think they will allow a third competitor.

    But Apple doesn’t need to use the x86 architecture, yet they chose to stick with Intel, which means that building a CPU is not that easy.
    I am sure if they could, they would build their own CPU as they did for most of their smaller products.

  • Jamar870 Member Uncommon Posts: 570
    Well, Apple and Qualcomm still didn't build theirs from scratch. They license some major building blocks from ARM, i.e. the cores.
  • Ridelynn Member Epic Posts: 7,383
    edited January 2020
    TEKK3N said:
    If you read my post closely ....
    I actually wasn't referring to your post at all.

    And then comes the hook for today's post.  Back in 2017, Apple announced that they were no longer going to use Imagination GPUs for their iDevices.  Instead, they were going to make their own GPU.  Well, they've now walked that back.  Apparently they've signed a new licensing agreement with Imagination, after all.

    This part just isn't quite correct, and is what I was referring to.
  • Quizzical Member Legendary Posts: 25,348
    While Apple and Qualcomm CPUs implement the ARM architecture, they do design their own cores.  Many other vendors just use the cores that ARM designs.

    The x86 architecture already has a third competitor:  VIA.  They're not very well known, but they do exist.
  • Cleffy Member Rare Posts: 6,412
    I think the patents are the big issue. Everyone knows how to do it, but how do you do it without getting sued into oblivion? You can license the patents from AMD or nVidia, but then you end up paying enough in license fees that the venture isn't worth it. You can use patents that have expired, but then you are on an archaic standard. AMD and nVidia also cooperate to an extent in order to maintain standards.