
So apparently Nvidia Tegra 4 doesn't support any modern graphics APIs


Comments

  • Quizzical Member Legendary Posts: 25,347

    For what it's worth, Nvidia may have made the decision they did because they calculated that more brute force computational power in OpenGL ES 2.0 would be worth more than less computational power in OpenGL 4.3.  Sticking with OpenGL ES 2.0 lets them use non-unified shaders.  That means they can have a bunch of pixel shaders that are much weaker than the vertex shaders or unified shaders would have to be.  I don't know if they could have done that with OpenGL ES 3.0, but the full OpenGL 3.2 or later has additional pipeline stages.

    And weaker pixel shaders are exactly what they went with.  Anything that computes positions needs at least 32-bit floating point accuracy.  Otherwise, you'll have a bunch of depth buffer rounding errors and get massive graphical artifacting.  Pixel shaders usually don't compute positions (as the position on the screen is already given to the shader as an input) but only the color of that particular pixel.  Your red, green, and blue color values in any texture you read in are probably only 8-bit each.  So Nvidia went with 20-bit floating point precision in their fragment shaders, and not 32-bit (there's a quick sketch of what that looks like at the end of this post).

    That lets them have more fragment shaders, and likely makes the chip benchmark better if you only run OpenGL ES 2.0 benchmarks.  That makes me hope that sites will compare chips with benchmarks that use newer APIs, and mark Tegra 4 as 0 frames per second because it wouldn't run.  But they probably won't do that.
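
    To illustrate the precision point above (a generic sketch, not anything from Nvidia's actual drivers): OpenGL ES 2.0's shading language bakes reduced precision right into the language.  A fragment shader declares a default precision, and "mediump" only guarantees something on the order of a 10-bit mantissa, so a 20-bit float unit more than satisfies it, while positions still get computed at full precision in the vertex shader.

    /* Hypothetical OpenGL ES 2.0 fragment shader, written as the C string an
       application would hand to the driver. */
    const char *frag_src =
        "precision mediump float;               \n"   /* reduced precision is fine for colors */
        "uniform sampler2D tex;                 \n"
        "varying vec2 uv;                       \n"
        "void main() {                          \n"
        "    gl_FragColor = texture2D(tex, uv); \n"   /* textures are only 8 bits per channel */
        "}                                      \n";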

  • Nevulus Member Uncommon Posts: 1,288
    man this guy REALLY hates Nvidia
  • AvsRock21 Member Uncommon Posts: 256
    Originally posted by ShakyMo
    DX 11.1 won't be important anytime soon. Win 8 is even less popular than Win Vista; it's possibly Microsoft's biggest disaster to date.

    Actually, Win 8 passed Vista in popularity about a week ago. But I agree that DX 11.1 won't be important for a while.

    I've been an Nvidia fan for decades now. But the Tegra technology is not one I can recommend yet. My old quad-core Tegra 3 Asus tablet gets absolutely dominated by my Google Nexus 10 and its Exynos 5 dual core. Now, after hearing about the API issue with the Tegra 4, it's kind of a no-brainer to go with a device that has a more efficient and powerful CPU architecture, something with an ARM Cortex A15 (or A7) CPU.

  • Quizzical Member Legendary Posts: 25,347
    Originally posted by AvsRock21
    Originally posted by ShakyMo
    DX 11.1 won't be important anytime soon. Win 8 is even less popular than Win Vista; it's possibly Microsoft's biggest disaster to date.

    Actually, Win 8 passed Vista in popularity about a week ago. But I agree that DX 11.1 won't be important for a while.

    I've been an Nvidia fan for decades now. But the Tegra technology is not one I can recommend yet. My old quad-core Tegra 3 Asus tablet gets absolutely dominated by my Google Nexus 10 and its Exynos 5 dual core. Now, after hearing about the API issue with the Tegra 4, it's kind of a no-brainer to go with a device that has a more efficient and powerful CPU architecture, something with an ARM Cortex A15 (or A7) CPU.

    Tegra 4 uses a quad core ARM Cortex A15 CPU.  It actually has five such cores, but one of them is different and optimized for lower power consumption and will only run when not much performance is needed.

  • Ridelynn Member Epic Posts: 7,383


    Originally posted by Quizzical

    Originally posted by adam_nox
    Isn't DirectX a direct interfacing with the CPU, i.e. not interpreted or portable?  I thought that meant you had to build a version of DirectX for the architecture of the CPU.  Perhaps you can still put an API on a graphics chip to support it, but I would think that it wouldn't work with smartphone CPUs.  Mind you, this is based on a bunch of things I just kind of assumed from random posts and stuff.  That and the ordeal Android had to go through just to get Flash on it (another thing that has to directly access the CPU and therefore was not portable between architectures).
    DirectX is a collection of a bunch of APIs.  By far the best known one is Direct3D, so people often say "DirectX" when they really mean "Direct3D", which is what I did above.

    Direct3D basically gives programmers a way to tell video cards to do things.  That way, instead of making the CPU do all of the work to render a game (which would work, except that you'd likely measure frame rates in seconds (plural) per frame rather than frames per second), you can make the GPU do most of the work, while the CPU just has to do the work that isn't GPU-friendly.  OpenGL is the other graphics API with capabilities comparable to Direct3D.



    DirectX can run on anything Microsoft wants it to. It's just a list of tools that programmers can use when writing their games. Microsoft provides a high-level toolset for commonly used functions so that each programmer/developer doesn't have to implement them from scratch. Since Microsoft also has access to the operating system, they can choose to use undocumented or hidden OS tricks to speed things up and provide direct access to stuff that programmers/developers wouldn't normally have access to.

    It happens to be compiled for Windows x86/x64. There is also a special version compiled for PowerPC (that runs the Xbox 360).

    The API is basically just a list of tools and works like a magic black box. You call this function, you give it these values (parameters), and it will perform this task for you. You don't actually get access to exactly how that task is performed, just what values you need to give it in order for it to work properly. (There's a quick sketch of this at the end of this post.)

    For a piece of hardware to be called "compliant" means that it can perform some particular task natively: it has a special instruction that can do some particular task in one shot, such as draw a square, or compute a tessellation, or scale a texture. Yes, you could write the code to do all that yourself, but if it's implemented in hardware, then it's much easier: you just call that hardware function and it's done in a few cycles, rather than writing a bunch of code and having it run on the CPU for several hundred or thousand cycles.

    One quick example would be multiplication:
    It's easy to say:


    x = 3 * 3;
    ... but if (for some strange reason) your hardware didn't have a "Multiply" function, you would have to perform that in software
    x = 0;
    i = 0;
    while (i < 3)    /* loop three times, adding 3 each pass */
    {
        x = x + 3;
        i = i + 1;
    }

    With hardware support - boom, you have your answer in one stroke. With software support, it takes several iterations and a lot more complication to arrive at the same answer. Imagine doing that in 3D space, computing hundreds of thousands of vectors: with hardware support, boom, you're done with one call. Trying to do that in software quickly bogs down. That's what being DX/OGL compliant means.

    Intel actually used to call some of their early graphics chips "compliant" even though they performed a lot of the functions in software at the driver level, so the OS couldn't tell that they weren't performed in hardware (and they may still do so). Those early chipsets had a lot of issues and were extremely slow.
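
    Going back to the black box point for a second, here's a rough sketch with a few real OpenGL calls (same idea in DirectX, just different function names). It assumes a rendering context and a vertex buffer have already been set up elsewhere - you hand over a handful of parameter values and the driver performs the task however it sees fit:

    #include <GL/gl.h>   /* assumes a current OpenGL context created elsewhere (GLFW, SDL, etc.) */

    void draw_one_triangle(void)
    {
        glClearColor(0.0f, 0.0f, 0.2f, 1.0f);   /* parameters in: the color to clear to         */
        glClear(GL_COLOR_BUFFER_BIT);           /* "clear the screen" - how is the driver's job */
        glDrawArrays(GL_TRIANGLES, 0, 3);       /* "draw 3 vertices from the bound buffer"      */
    }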

  • Quizzical Member Legendary Posts: 25,347
    Originally posted by Ridelynn

    For a piece of hardware to be called "compliant" means that it can perform some particular task natively: it has a special instruction that can do some particular task in one shot, such as draw a square, or compute a tessellation, or scale a texture. Yes, you could write the code to do all that yourself, but if it's implemented in hardware, then it's much easier: you just call that hardware function and it's done in a few cycles, rather than writing a bunch of code and having it run on the CPU for several hundred or thousand cycles.

    Now that I stop to think about it, it's actually very complicated to say what it means for hardware to be OpenGL compliant.

    I haven't dealt with DirectX, but with OpenGL, it's basically a matter of, if you use these particular functions, the video card has to give this particular behavior.  It does not say how fast it has to run, but it does have to do it without additionally consulting the processor.  Or at least I think it does.  For example, OpenGL 4 requires hardware to be able to do 64-bit floating point computations, but doesn't say it has to do them fast, so most OpenGL 4 compliant video cards do 64-bit computations at 1/24 to 1/16 of the speed of 32-bit computations.

    In some cases, it specifies the behavior exactly:  every bit of an 11-bit unsigned floating-point number has to have exactly this meaning.  In some cases, it only says you have to come kind of close:  if you draw a line from this point to that point, the ideal behavior would be such and such, but anything that isn't shifted from ideal by more than 1 pixel is acceptable.  In some cases, it even says the hardware can give whatever value it wants, such as if a user tries to cast a negative floating-point number to an unsigned integer in GLSL.  That's basically a warning to programmers of "don't do this!" in many cases, or at least don't rely on different invocations of the shader being executed in some particular order.

    But where it really gets weird is shaders.  You pass a string to the video drivers, and they treat it as source code and compile it into a binary executable that can run on that particular video card.  No, really, it's a string, kind of like an array of characters.  And the drivers have to stop to compile it while the game is running (this will typically be done when you launch the game, though changing certain graphical settings could also trigger it), and then the compiled program has to give particular behavior.
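
    A rough sketch of what that looks like from the application side, using the standard OpenGL ES 2.0 calls (most error handling omitted, and it assumes a context already exists):

    #include <GLES2/gl2.h>
    #include <stdio.h>

    /* The shader "source code" really is just a string. */
    static const char *frag_src =
        "precision mediump float;        \n"
        "void main() {                   \n"
        "    gl_FragColor = vec4(1.0);   \n"
        "}                               \n";

    GLuint compile_fragment_shader(void)
    {
        GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(shader, 1, &frag_src, NULL);   /* hand the string to the driver      */
        glCompileShader(shader);                      /* the driver compiles it at run time */

        GLint ok = 0;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
        if (!ok) {                                    /* compile errors come back as text, too */
            char log[1024];
            glGetShaderInfoLog(shader, sizeof log, NULL, log);
            fprintf(stderr, "shader compile failed: %s\n", log);
        }
        return shader;
    }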

  • Gravarg Member Uncommon Posts: 3,424
    I think the problem ME had was that 2000 was so good and prevalent, people just skipped over ME.
  • birdycephon Member Uncommon Posts: 1,314
    Originally posted by Gravarg
    I think the problem ME had was that 2000 was so good and prevalent, people just skipped over ME.

    I'm sure the huge memory leak had nothing to do with it.

  • Ridelynn Member Epic Posts: 7,383


    Originally posted by birdycephon
    Originally posted by Gravarg
    I think the problem ME had was that 2000 was so good and prevalent, people just skipped over ME.
    I'm sure the huge memory leak had nothing to do with it.

    Well, it could also be that 98 was released in June of 98, ME was released in Sept of 2000, while Windows 2000 was released in Feb of 2000. And XP was released in Aug of 2001.

    So there was only a short window (11 months) for it to really catch on, and there was a viable alternative (Win2000) that didn't have all those negative perceptions. Anyone doing anything serious went to Win2000, and most home users didn't see the need to upgrade past 98 (it worked fairly well for its time). The few people that got ME complained loudly (like they did with Vista, and are doing with 8), so the people without it just stayed away, and it never hit critical mass.

  • Caldrin Member Uncommon Posts: 4,505
    Well, considering the Tegra 4 will be powering smart phones and tablets, it sounds bloody good to me, and I really don't care if it does not support DX11 or whatever.
  • adam_nox Member Uncommon Posts: 2,148
    Originally posted by Gravarg
    I think the problem ME had was that 2000 was so good and prevalent, people just skipped over ME.

    Of course not everyone with ME had problems, but I assure you, a LOT of people did.  A lot more than with any other operating system.  I worked on people's computers back then; it was sometimes very, very awful.

  • Castillle Member Uncommon Posts: 2,679

    OpenGL is just an API. It's up to the hardware devs to make OpenGL stuff run. So tessellation in Nvidia could be implemented in hardware differently from AMD. This is not the case in D3D, where Microsoft controls how everything is implemented even down to the hardware level. So a DX 9.0c compliant card will work exactly the same on an Nvidia or AMD graphics card.

    TLDR:
    OpenGL only enforces the API (function names and high-level descriptions).
    DX enforces the API and how it's implemented in hardware and software.

    Edit:

    Shaders have completely separate compliance from the actual graphics API.

    ''/\/\'' Posted using Iphone bunni
    ( o.o)
    (")(")
    **This bunny was cloned from bunnies belonging to Gobla and is part of the Quizzical Fanclub and the The Marvelously Meowhead Fan Club**

  • Quizzical Member Legendary Posts: 25,347
    Originally posted by Castillle
    OpenGL is just an API. It's up to the hardware devs to make OpenGL stuff run. So tessellation in Nvidia could be implemented in hardware differently from AMD. This is not the case in D3D, where Microsoft controls how everything is implemented even down to the hardware level. So a DX 9.0c compliant card will work exactly the same on an Nvidia or AMD graphics card.

    TLDR:
    OpenGL only enforces the API (function names and stuff).
    DX enforces the API and how it's implemented in hardware and software.

    I don't see how Microsoft even could do that, as GPU designers have to have considerable freedom in how they implement things in order to optimize performance.  No video card is going to have dedicated hardware to multiply two 4x4 matrices of 32-bit floating point numbers that is completely separate from the hardware it uses to multiply a 4x4 matrix by a 4x3 matrix of 32-bit floating point numbers.  Rather, the drivers would interpret it as a bunch of vectorized FMA and multiplication operations.
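
    For instance, here is roughly what "a bunch of vectorized FMA and multiplication operations" means, written out in plain C as a sketch of what the driver's shader compiler effectively generates (not something you would write yourself): multiplying two 4x4 matrices is just a pile of multiply-adds, and how those get scheduled onto whatever ALUs the chip actually has is entirely up to the hardware vendor.

    /* c = a * b for 4x4 matrices of 32-bit floats, expressed as the
       multiply-add operations the GPU actually executes. */
    void mat4_mul(const float a[4][4], const float b[4][4], float c[4][4])
    {
        for (int row = 0; row < 4; ++row) {
            for (int col = 0; col < 4; ++col) {
                float acc = 0.0f;
                for (int k = 0; k < 4; ++k)
                    acc += a[row][k] * b[k][col];   /* one multiply-add per term */
                c[row][col] = acc;
            }
        }
    }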

  • Castillle Member Uncommon Posts: 2,679

    I'm not sure exactly how it was worded, but it was something like how I said.  Maybe it was a specific driver implementation that's standardized by Microsoft? I'll try to find the link again when I get home.  I found it back when I made a post asking which would be better to learn, prolly a few months ago. I'm guessing by the traffic on the dev corner, it's prolly still on the first page o.O

    ''/\/\'' Posted using Iphone bunni
    ( o.o)
    (")(")
    **This bunny was cloned from bunnies belonging to Gobla and is part of the Quizzical Fanclub and the The Marvelously Meowhead Fan Club**

  • Quizzical Member Legendary Posts: 25,347
    This is my speculation, but what I think happens a fair bit is that Microsoft says, you have to implement this and this and this or you can't claim to work with the latest version of DirectX, so AMD and Nvidia do it.  And then they look at each other and say, well, we've both got hardware that does this and this and this anyway, so we might as well expose the functionality via OpenGL, too.
  • Ridelynn Member Epic Posts: 7,383

    OpenGL and DirectX both are just APIs - they don't control the implementation, because they ~are~ the API.

    They can request that certain functions be implemented in hardware (such as precision levels, execution speeds, etc), but for the most part, they just specify "When we ask your driver this, it outputs this" and leave it up to the manufacturer to implement that in hardware (or not) and expose it via their driver. This is why drivers get WHQL certification - it means they have all the appropriate functions exposed (it doesn't really detail if they are implemented in Software or Hardware, just that they are available and work correctly) and don't have horrible bugs or security flaws.

    Microsoft does not design hardware. Neither does OpenGL. They just say we want to do XYZ in a device driver call - and they can specify that it be performed on the GPU rather than the CPU (which implies there is hardware-level support) - and if you have all the functions they ask for and it meets all their specifications, you can call your product compliant.

    Yes - for the most part, DirectX is the primary driver for consumer cards, but OpenGL takes a more primary role when looking at workstation-class cards. The biggest difference is mainly in the drivers, but since they run on (nearly) identical hardware, and the two APIs are fairly similar in the first place, it doesn't take a whole lot of work to support both to some degree.

    Just for example: VMware has a non-hardware-based generic video driver. It runs nearly entirely emulated on the CPU (since its hypervisor does not grant access to video hardware), with a few hooks to the host operating system drivers. Their entirely software-based video driver is DirectX 9.0c and OpenGL 2.1 compliant. It does not run fast by any stretch of the imagination, but it works.
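
    If you are curious what a given driver actually reports (VMware's software renderer included), the API itself will tell you. A quick sketch with the standard OpenGL query calls, assuming a context is already current:

    #include <GL/gl.h>
    #include <stdio.h>

    /* Prints what the driver claims to be; a software renderer such as
       VMware's or llvmpipe usually identifies itself in the renderer string. */
    void print_gl_info(void)
    {
        printf("Vendor:   %s\n", (const char *) glGetString(GL_VENDOR));
        printf("Renderer: %s\n", (const char *) glGetString(GL_RENDERER));
        printf("Version:  %s\n", (const char *) glGetString(GL_VERSION));
    }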
