OpenGL with GPGPU is the future, and the future of GPUs

FateFatality Member UncommonPosts: 93

So, after doing lots of research and reading up on Valve's use of OpenGL:

http://www.extremetech.com/gaming/133824-valve-opengl-is-faster-than-directx-even-on-windows

 

https://www.youtube.com/watch?v=imti_KuTVhE

 

Here is where it gets fun: OpenGL is already faster than Direct3D or any Microsoft API, and OpenGL supports everything DX11 and older offer in graphical features.

 

Now, OpenGL will work with GPGPU. What is GPGPU? GPGPU is where most of what the CPU does is ported to the GPU instead. This is where it gets cool: GPGPU is fast, really fast, and OpenGL is faster than D3D, so, from what I said before, using OpenGL with GPGPU would increase performance in games by an order of magnitude. We are in an era of badly optimized games, that is a fact; bad, bloated coding all results in poor performance.

 

The PS4 and the next Xbox will support OpenGL 4.2 and have GPGPU integration. So what does this mean? It means that photorealistic graphics and boundary-pushing visuals are closer than ever before (not with consoles, but with PC).

 

If OpenGL is faster, why is DirectX still the predominant API? It isn’t because of image quality or features: OpenGL 4.0 has all of the shaders, tessellators, and widgets that DX has. It isn’t because of hardware support: All Nvidia and AMD graphics cards support the latest version of OpenGL along with DirectX.

Really, it all comes down to that crummy old thing we call the network effect — and, of course, monopolistic heft and marketing dollars. DirectX, because it has a cleaner API and better documentation, is easier to learn. More developers using DirectX = more DirectX games = better driver support. This is a vicious loop that again leads to more DX devs, more DX games, and better DX drivers/tools/documentation. Microsoft has relentlessly marketed DirectX, too — and who can forget the release of Windows Vista and Microsoft’s OpenGL smear campaign? Vista’s bundled version of OpenGL was completely crippled, forcing many devs to switch to DirectX.

Now, with that in mind, you can see why visuals have stagnated or slowed down more than we think. In terms of raw performance, photorealistic graphics require around 5 teraflops of GPU power. Now remember, GPGPU and OpenGL focus on GPU power, unlike D3D, so teraflops are really what run GPGPU and OpenGL: more teraflops, more framerate.

So another point to make is this: OpenGL can run bloated code and not be affected by it as much as D3D is. It still takes a performance hit, but not as much as D3D does, so a little more optimization of the code would just increase performance overall. Even with bad code you are still getting a huge performance increase. This will mean you can still be a lazy developer and run games at 60+ fps super easily, VERY easily. This is the future.

 

Now for the GPU future.

The future of GPUs comes down to three things: multi-GPU on one PCB, ARM processors, and drivers for SLI/Crossfire.

So I'll start with multi-GPU setups. Now, most of you will go, 'oh, but the microstutter'. Well, that is a driver-related issue. Sooner or later single GPU dies will only go so far with minor tweaks to the architecture, so there is no doubt that multi-GPU configurations on one PCB will become standard for high-end GPU cards. But they will keep working on SLI/Crossfire issues to pretty much remove the problems we have today. Like I said, these are driver issues, not hardware; it's all software, and they just need to tailor the software to OpenGL. AMD and NV are working hard on SLI and Crossfire issues, and there is a reason multi-GPU on one PCB is where we are headed. But wait...

ARM processors, now this is cool. NV and AMD are looking at putting ARM processors in their GPUs. What does this mean?

Pretty straightforward: it's a CPU on the GPU, just like APUs, but... it has its own chip/die with pipelines directed to the GPU. This will mean removing microstutter issues and also being able to predict frames before they happen (with software): the ARM gets the information and holds it for the next command (the GPU will also be getting commands from the motherboard CPU; remember, with GPGPU and OpenGL it's less CPU involvement, more GPU). But wouldn't it be slow?

No... it will be running on GDDR5+, 6, or 7, etc. bandwidth. Think of it like this: a Tegra CPU on a GPU that has two GPUs on one PCB. The ARM CPU will be a quad core with its own APU, plus the multi-GPU setup; this will be similar to a tri-SLI setup, but it's not. The APU in the ARM CPU will more or less help with buffering and offload simple visuals; something like AA can be dedicated to the APU of the ARM CPU, or anything else for that matter, maybe AI? This is all driver dependent for it to work effectively though, so this is where we are headed.

http://www.tomshardware.com/news/nvidia-armv8-soc-gpu,18838.html

Moore's law is dead the way things currently are, but GPUs still follow Moore's law.

 

If you find mistakes or issues reading this, please don't flame; just point out what I should fix in the presentation or spelling.

Comments

  • Quizzical Member LegendaryPosts: 25,353

    "here is were it get's fun, OpenGL is already faster then Direct3D any microsoft API, openGl supports everything DX11 and older graphical features."

    No, actually, it isn't.  OpenGL, with Nvidia actively helping Valve optimize the game for Nvidia video cards, ended up being faster than DirectX with Valve optimizing the code themselves.  If Nvidia had put the work into helping Valve optimize the DirectX version and not the OpenGL version, then the DirectX version almost certainly would have been faster.  For that matter, if you take exactly the same code from both and run them on an AMD card, it wouldn't be surprising if the DirectX version was faster there, too--because the OpenGL version was optimized specifically for Nvidia graphics.

    "Now openGl will work with GPGPU, what is GPGPU? well, GPGPU is were most of what CPU does is ported to GPU instead of CPU."

    GPGPU is using a GPU to do non-graphical computations, or at least computations that don't go through the normal rendering pipeline.  Normally you'd want to use OpenCL for GPGPU, as it gives you a lot more flexibility than OpenGL, which is heavily optimized for graphics in particular.

    But you can't expect that offloading arbitrary code onto a GPU will result in performance increases.  Many things would run much slower on a GPU, because they simply aren't what the GPU is set up for.
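    To make that concrete: GPGPU work doesn't even go through the rendering pipeline. Below is a rough sketch (illustration only, with error checking stripped out) of what a trivial OpenCL job looks like: adding two big arrays on whatever GPU the first platform reports. The shape of it is the point--set up the device, copy data across the bus, run one small kernel over a huge number of data points, copy the result back.

```cpp
// Minimal OpenCL sketch: c[i] = a[i] + b[i] on the GPU.
// Assumes an OpenCL SDK is installed; all error checking omitted for brevity.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

// The kernel itself: the same few instructions run for every element,
// which is exactly the kind of work a GPU is built for.
static const char* kSource = R"(
__kernel void vec_add(__global const float* a,
                      __global const float* b,
                      __global float* c)
{
    size_t i = get_global_id(0);   // one work-item per array element
    c[i] = a[i] + b[i];
}
)";

int main() {
    const size_t n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

    cl_platform_id platform;  clGetPlatformIDs(1, &platform, nullptr);
    cl_device_id   device;    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    cl_context       ctx   = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, nullptr);

    // The driver compiles the kernel source (a plain string) at run time.
    cl_program program = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, nullptr);
    clBuildProgram(program, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(program, "vec_add", nullptr);

    // Copying the inputs to the card goes over PCI Express -- the slow part.
    cl_mem bufA = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                 n * sizeof(float), a.data(), nullptr);
    cl_mem bufB = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                 n * sizeof(float), b.data(), nullptr);
    cl_mem bufC = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, n * sizeof(float), nullptr, nullptr);

    clSetKernelArg(kernel, 0, sizeof(cl_mem), &bufA);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &bufB);
    clSetKernelArg(kernel, 2, sizeof(cl_mem), &bufC);

    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(queue, bufC, CL_TRUE, 0, n * sizeof(float), c.data(), 0, nullptr, nullptr);

    printf("c[0] = %f\n", c[0]);   // expect 3.0
    return 0;
}
```

    Notice that almost all of the host code is about moving data to and from the card, which is where the numbers below come in.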

    Let's look at what a Radeon HD 7970 can do, for example.  A Radeon HD 7970 has around 4 TFLOPS of computational power available. Stick it on a PCI Express 3.0 x16 bus and you've got 16 GB/s of PCI Express bandwidth available, or enough to transfer 4 billion 32-bit floats per second. If you fully exploit both, you need one thousand floating-point computations per float that you pass along from the CPU.

    The card also has 264 GB/s of memory bandwidth, or enough to grab 66 billion 32-bit floats per second. If you want to do 2 trillion FMA operations per second, that means you need to grab 6 trillion floats per second. You've got enough video memory bandwidth to grab about 1% of those from video memory, but the rest will have to come from elsewhere. Even the chip's L1 and L2 caches added together only offer about 10 times the bandwidth of video memory.
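    If you want to check those figures yourself, the arithmetic fits in a few lines (round numbers for a 7970-class card; exact values vary a little by model):

```cpp
// Back-of-the-envelope version of the figures above.
#include <cstdio>

int main() {
    const double flops_per_sec = 4.0e12;   // ~4 TFLOPS peak, counting an FMA as 2 ops
    const double pcie_bytes    = 16.0e9;   // PCI Express 3.0 x16: ~16 GB/s
    const double vram_bytes    = 264.0e9;  // ~264 GB/s of video memory bandwidth
    const double bytes_per_f32 = 4.0;      // one 32-bit float

    // Floats per second you can move over PCIe, and how many computations you
    // must do per float just to keep the shaders busy.
    double pcie_floats = pcie_bytes / bytes_per_f32;                          // ~4e9 floats/s
    printf("ops needed per PCIe float: %.0f\n", flops_per_sec / pcie_floats); // ~1000

    // 2e12 FMAs/s, each touching roughly three floats of operands and results.
    double floats_wanted = (flops_per_sec / 2.0) * 3.0;                       // ~6e12 floats/s
    double vram_floats   = vram_bytes / bytes_per_f32;                        // ~66e9 floats/s
    printf("share VRAM can actually supply: %.1f%%\n",
           100.0 * vram_floats / floats_wanted);                              // ~1.1%
    return 0;
}
```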

    Speaking of which, you've got 512 KB of L1 cache and 768 KB of L2 cache, and you want to feed 2048 shaders from that. That's not very much cache per shader as compared to CPUs that may have over 1 MB per core, so you'd better know very precisely what you'll need well ahead of time, and need exactly the same data many, many times in a row.

    So what can you do with such restrictions? You could have a short program with little to no branching so that you have the same instructions in the same order every time, and then a ton of different data points at the start that you feed through the program. The amount of data input into and output from the program should be small relative to the number of floating point computations done in the program.

    If your data are a bunch of vertices in a model that all need to be rotated by the same angle and shifted by the same distance and so forth, then you've got a vertex shader. If your data are a bunch of pixels that need to do the same lighting computations and so forth, then you've got a pixel/fragment shader, depending on whether you want to use the DirectX or OpenGL terminology. The restrictions of needing many computations from little memory access and even less passing data from the CPU aren't really that problematic for doing modern computer graphics.
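    That vertex shader case looks roughly like this in GLSL (a sketch, with made-up names, held as a string the way the driver will eventually receive it): every vertex runs the exact same handful of instructions, and only the input data changes.

```cpp
// Sketch of a GLSL vertex shader: same instructions for every vertex,
// different data each time, no branching. Held as a C++ string because
// that is literally how it gets handed to the driver.
static const char* kVertexShaderSource = R"(
#version 330 core
layout(location = 0) in vec3 position;  // per-vertex input: changes every time
uniform mat4 mvp;                       // same rotation/translation for the whole model
void main()
{
    gl_Position = mvp * vec4(position, 1.0);
}
)";
```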

    But the restrictions of GPGPU are problematic for most programs that run on a CPU.  There are some things that fit, such as encryption or bitcoin mining.  But if you need high (or even not completely awful) single-threaded performance, GPGPU has no chance.  If you need a lot of branching, GPGPU will choke.  Object-oriented programming?  Completely out of the question.

    "PS4 and next xbox will suport OpenGL 4.2 and have GPGPU intragration. so what does this mean?"

    I doubt that the next Xbox will support OpenGL 4.2.  It's made by Microsoft, which is more interested in pushing something DirectX-based.  The PlayStation 4 might support it, but high level APIs are far less necessary for consoles than for PCs.  If you know exactly the capabilities of the GPU chip that you're dealing with, then you can get some substantial gains by using lower-level commands than OpenGL gives you access to.

    Graphics APIs such as OpenGL become necessary when you don't know exactly what GPU you're dealing with.  If you make a PC game, it could end up running on any one of dozens of different GPU chips of several different architectures, many of which have multiple bins even before you consider changing clock speeds.  OpenGL lets you tell video drivers, I want you to do this, and then video drivers that are aware of the particular GPU chip that they have can decide what they think is the most efficient way to put data in which GPU caches, assign computations to which shaders at which times, and so forth.

    "It isn’t because of image quality or features: OpenGL 4.0 has all of shaders and tessellators and widgets that DX has."

    That's actually not true.  DirectX 11 has compute shaders, which OpenGL didn't get until version 4.3.

    "It isn’t because of hardware support: All Nvidia and AMD graphics cards support the latest version of OpenGL along with DirectX."

    Video card support for OpenGL and DirectX is actually rather different.  With DirectX, Microsoft says, here's the standard and you can support it or not.  When Microsoft makes a new version of DirectX, any hardware already out there generally won't support it, and only future generations will.

    OpenGL is very different.  OpenGL is managed by the Khronos group, an industry consortium of graphics vendors such as AMD and Nvidia.  What I think happens is that AMD and Nvidia look at each other and say, well, we've both got video cards that support such and such, so we might as well expose the functionality via OpenGL.  While OpenGL likely doesn't get a feature until after the video cards launch, the older cards will then go back and support the newer version of it.  Thus you can have GeForce 8000 series cards that launched in 2006 fully support OpenGL 3.3 that wasn't announced until 2010--precisely because if those cards can't do something, it wouldn't have been added to the OpenGL 3.3 specification.  Meanwhile, those same cards don't support DirectX 10.1, which released in 2007.

    "DirectX, because it has a cleaner API and better documentation, is easier to learn. More developers using DirectX = more DirectX games = better driver support."

    The flip side of what I said above is that DirectX tends to get features before OpenGL.  DirectX 9.0c launched in 2004.  There isn't a clear OpenGL equivalent, but probably the nearest is OpenGL 3.0, which didn't launch until 2008--and required newer generation hardware.  DirectX 10 arrived with Windows Vista, while its nearest OpenGL equivalent, version 3.2, wouldn't arrive until 2009.  Meanwhile, 2009 saw the launch of DirectX 11, and the nearest OpenGL equivalent to that, version 4.0, wouldn't come until 2010.

    The reason for this is that DirectX is controlled by Microsoft, so it isn't prone to being held back by squabbling among graphics vendors.  If AMD says, our cards aren't good at this so let's not use it yet, they may be able to greatly delay something from being added to the OpenGL specification.  Or Nvidia could do that.  Or Intel, or Imagination, or ARM, or Qualcomm.  But with DirectX, Microsoft can cut through that and say, this is what you have to support in order to run the latest DirectX version, whether you like it or not.

    As far as driver support goes, you need to understand that DirectX and OpenGL mostly do the same things.  I'd be absolutely shocked if video drivers don't commonly pair a GLSL (OpenGL) function with its HLSL (DirectX) equivalent and treat them in exactly the same way.  There are some notational differences that driver writers need to be aware of, such as that GLSL makes it easier to select a column of a matrix while HLSL makes it easier to select a row, as well as some functions that will be supported in one but not the other.  But they're very similar.
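    For example, the matrix difference I mentioned looks like this (a sketch, not drawn from any particular driver):

```cpp
// The same index means different things in the two shading languages.
static const char* kGlslSnippet = R"(
    mat4 m;
    vec4 c = m[2];        // GLSL: m[2] is the third COLUMN of the matrix
)";
static const char* kHlslSnippet = R"(
    float4x4 m;
    float4 r = m[2];      // HLSL: m[2] is the third ROW of the matrix
)";
```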

    The "cleaner API" claim may be true, but there are trade-offs.  DirectX deprecates old features constantly, so if you have DirectX 9 code and tell the system it's DirectX 10, it won't work.  OpenGL just makes the new version add more stuff to the old, so if you have OpenGL 3.1 code and tell the system that it's OpenGL 4.2, it works just fine.  That means that OpenGL tends to build up a bunch of useless junk in the API and DirectX doesn't.

    After building up a bunch of long-useless legacy stuff from an API that originally launched in 1992, OpenGL in 2008 (with version 3.0) finally decided to deprecate some old stuff and say, okay, you don't actually have to support this anymore to say that you support OpenGL.  For example, color-index mode, for if you have too few colors for red-green-blue values to make sense, and it's better to call them color 1, color 2, color 3, and color 4.  Or dithering, so that you can draw shades of gray on a monochrome monitor by mixing different proportions of black and white pixels.  But other than that one batch of stuff that OpenGL deprecated then, they just keep adding new stuff without removing the old.

    "and who can forget the release of Windows Vista and Microsoft’s OpenGL smear campaign? Vista’s bundled version of OpenGL was completely crippled, forcing many devs to switch to DirectX."

    Microsoft didn't bundle OpenGL with Vista.  OpenGL was just way behind in capabilities at the time.  There's nothing dirty on Microsoft's part in pointing that out.  If the Xbox 720 were to launch a year before the PlayStation 4 (which is very unlikely, but it's just an example), Microsoft would use that intervening year to compare it to the PlayStation 3 and tell everyone how much better the Xbox 720 was.  There's nothing dirty about pointing out when your competitor is way behind.

    "now remeber GPGPU and openGL focus on GPU power unlike D3D, so teraflops is really what runs GPGPU and OpenGL more teraflops more framerate. "

    That's sheer nonsense.  Teraflop ratings on a GPU are "here's how many computations you could do if you had a program capable of spamming FMA constantly".  It's a hardware thing, not an API thing.
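    The rating is just arithmetic on the spec sheet. For the card I used as an example above, it works out roughly like this:

```cpp
// How a "teraflops" number is produced: shader count x clock x 2 ops per FMA.
// Figures below are for a Radeon HD 7970 at its 925 MHz reference clock.
#include <cstdio>

int main() {
    const double shaders     = 2048;     // stream processors on the chip
    const double ops_per_fma = 2;        // one fused multiply-add counts as two FLOPs
    const double clock_hz    = 0.925e9;  // 925 MHz
    printf("peak: %.2f TFLOPS\n", shaders * ops_per_fma * clock_hz / 1e12);  // ~3.79
    return 0;
}
```

    No API appears anywhere in that calculation, which is the point.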

    "OpenGl can run bloated codeing and not get effect by it as much as D3D does, it still gets effected in performance but not as much as d3d does,"

    Nonsense.  Shaders are very performance-sensitive, regardless of whether you're using OpenGL or DirectX.  They're also very short, so if you're not heavily optimizing your algorithms for "runs as fast as you can make it, even at the expense of human-readability", you're doing it wrong.

    "this wil mean you can still be a lazy developer and run game's 60+ fps super easly, VERYYY EASLY."

    Nope.  No matter how fast your hardware is, sufficiently slow software can still manage to run poorly.  If you could use a time machine to go get the top of the line video card from 2030 and bring it back and it still supported OpenGL 4, I could easily write a program that wouldn't get 1 frame per second on it.
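    If you want a feel for how, here is the sort of thing I mean (a made-up fragment shader, not something any sane game would ship): a long dependent loop per pixel that no driver can optimize away.

```cpp
// A deliberately awful GLSL fragment shader: roughly a hundred million
// dependent sin() calls per pixel. No amount of hardware or API choice
// makes this run at a playable framerate.
static const char* kPathologicalFragmentShader = R"(
#version 330 core
out vec4 color;
void main()
{
    float x = 0.0;
    for (int i = 0; i < 100000000; ++i)
        x = sin(x) + 0.001;     // each iteration depends on the last
    color = vec4(x);            // result is used, so it can't be optimized out
}
)";
```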

    "so il start with MultiGPU setups, now most of you go oh but the microshutter, well this is driver related issue, sooner or later single GPU dies will only go so far and minor tweaks to archutecture , so there is no doubt that MultiGPU configrations on one PCB will come as standard for high end GPU cards."

    To the contrary, multi-GPU setups are on the way out.  Having two separate GPU chips working on the same frame at the same time is impractical, so multi-GPU for gaming only makes sense if it's faster than what you can get with a single GPU chip.  But graphics performance is increasingly limited by power, not transistor count, so if a single GPU can use as much power as you're willing to dissipate, that's the way to go.  And higher end video cards can already put out over 200 W from a single chip.

    "there is a reason MultGPU one PCB is were we be headed"

    Two GPUs on a single PCB is a bad idea for gaming purposes, and getting worse.  When the most that a single GPU would put out was 75 W, you could put two on a card and have it dissipate 150 W and you were fine.  But now that single-GPU video cards can have a TDP in the range of 250 W?  Two of those on a single card is a bad idea.

    "Arm proccesors, now this is cool, NV and AMD are looking at putting Arm proccesors in there GPU's , what does this mean?"

    AMD uses ARM processors for TrustZone, as AMD found it cheaper to just license ARM's security stuff than to develop their own from scratch.  Soon AMD will use ARM processors for microservers.  But neither of those are really an ARM processor integrated into the GPU.

    Nvidia, meanwhile, uses ARM as their processors in Tegra chips.  That likewise isn't really ARM integrated into the GPU.  Nvidia's upcoming Maxwell GPU architecture will have ARM cores, but it's unclear what Nvidia will do with that.  I wouldn't be at all surprised if only the top end GPU chip gets ARM cores and the rest don't.  If it's purely meant for GPGPU and not for graphics, then that would be the sensible thing to do.

    "pretty straight forward it's a CPU on GPU just like APU's but... it has it's own Chip/die that's has a pipelines directed to GPU"

    If the ARM cores were in a separate die from the GPU, then they wouldn't be built into the GPU.

    "this will mean removeing microshutter issues and also be able to do some type of predicit frames before it happens"

    ARM cores have nothing to do with microstutter.  The problem is that if you have two independent GPU chips computing two independent frames at the same time, it's hard to line them up so that one card always finishes a frame halfway between when the other finishes two consecutive frames.

    "No... it be running on GDDR5+ 6 or 7 ect bandwith"

    Look at my numbers above.  In order for a Radeon HD 7970 GHz Edition to keep its shaders fully utilized, they can only pull about 1% of the data they need from video memory, as that's all the memory bandwidth that is available.  That 1% figure varies some from card to card, but it's fairly typical for modern video cards.

    "Arm CPU will be a quad core with its own APU plus the Multi GPU setup this will be similar to Tri SLI setup but, it's not, APU in Arm CPU will be more or less help with buffering and offload simple visuals something like AA can be dedicated to APU of Arm CPU or anything for that matter maybe AI?"

    It's hard to parse what you're saying, but if you're going to have the GPU use ARM cores for something or other, you'll only do that for work that the GPU itself isn't good at.

  • Barrikor Member UncommonPosts: 373


    Originally posted by FateFatality

    So another point to make is this: OpenGL can run bloated code and not be affected by it as much as D3D is. It still takes a performance hit, but not as much as D3D does, so a little more optimization of the code would just increase performance overall. Even with bad code you are still getting a huge performance increase. This will mean you can still be a lazy developer and run games at 60+ fps super easily, VERY easily. This is the future.

    Bad code is bad code, no matter how you slice it.

    I do think that it'll be easier to have games run faster in the future, and have interpreted languages be able to do advanced graphics fast. BUT that can ONLY come from faster hardware, not by tinkering with the 3D API. When hardware gets better, both OpenGL and DirectX take advantage of it.

  • ShakyMo Member CommonPosts: 7,207
    There's no way on earth the new Xbox will be OpenGL.

    Why the hell would Microsoft help people port Xbox games to Linux and Mac OS?

    They want people locked to DirectX; it's one of the main things they use to push new versions of Windows.
  • Quizzical Member LegendaryPosts: 25,353
    Originally posted by Barrikor

     


    Originally posted by FateFatality

    So another point to make is this: OpenGL can run bloated code and not be affected by it as much as D3D is. It still takes a performance hit, but not as much as D3D does, so a little more optimization of the code would just increase performance overall. Even with bad code you are still getting a huge performance increase. This will mean you can still be a lazy developer and run games at 60+ fps super easily, VERY easily. This is the future.


     

    Bad code is bad code, no matter how you slice it.

    I do think that it'll be easier to have games run faster in the future, and have interpreted languages be able to do advanced graphics fast. BUT that can ONLY come from faster hardware, not by tinkering with the 3D API. When hardware gets better, both OpenGL and DirectX take advantage of it.

    Some API changes do let you efficiently offload more work onto the video card.  Geometry shaders were a big deal for this.  Tessellation may or may not be, depending on what you're doing.  But yeah, you also need hardware that can handle the newer APIs in order to make use of them.

    Right now, the way it works is that shaders are written in GLSL (OpenGL) or HLSL (DirectX), passed to the video drivers as a string (literally a string, as in, kind of like an array of bytes or chars), and the video drivers compile those into executable binaries that the video card can handle.  They do this as the program is running, though it will typically be done when you launch the game and that's it, or possibly for some major changes to graphical settings.  There's no hard rule against compiling shaders at random other times while the game is running, but it's usually a bad idea as it will cause hitching.
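    In OpenGL terms, that hand-off looks roughly like this (a sketch that assumes you already have a current context and loaded function pointers, e.g. through GLEW or glad):

```cpp
// Handing GLSL source to the driver and letting it compile at run time.
// Error handling kept to a minimum for brevity.
#include <GL/glew.h>
#include <cstdio>

GLuint compileShader(GLenum type, const char* source) {
    GLuint shader = glCreateShader(type);            // e.g. GL_VERTEX_SHADER
    glShaderSource(shader, 1, &source, nullptr);     // literally an array of strings
    glCompileShader(shader);                         // driver builds a GPU binary now

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
        fprintf(stderr, "shader compile failed: %s\n", log);
    }
    return shader;
}
```

    Games typically do all of that once at start-up (or when graphical settings change), link the compiled shaders into programs, and then never touch the compiler again while you're playing.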
