AMD Radeon HD 7990 launches. World doesn't care.


Comments

  • ShakyMo Member Common Posts: 7,207
    Lol you don't need a doctorate in mathematics to do the art for tessellation.

    Now, for the guys coding the engine, it might help if they have a mathematics degree, but I suspect the level of maths you get from a GOOD software engineering / comp-sci degree will be enough (e.g. the ones that concentrate on technical skills rather than 1980s project-management BS).
  • Deivos Member Epic Posts: 3,692

    More or less, to use tessellation you just need to make a texture akin to a bump/height map, referred to as a displacement map. Ultimately they are all essentially a greyscale texture map of an object showing the high/low points of a model, and the tessellation mechanics translate that onto the object.

     

    The point of the original argument stands, though. PC gaming has not seen major strides in a while, outside of some rather tech-demo-level implementations, because of the console market. With the next generation of upcoming consoles we will see a bump up in the quality of games, but it will not break the habit of PC hardware's potential being somewhat underutilized.

    Even with PC-only titles, the tendency to adhere to lower standards is still common, because a lot of people lag behind in hardware, playing on machines that can be five years old or more.

     

    Also, just to note: Crysis 3 might look pretty, but the engine it's built on does not hold any major advancements. Rather, it was built as a more stable, multiplatform version of the previous generation of CryEngine.

    "The knowledge of the theory of logic has no tendency whatever to make men good reasoners." - Thomas B. Macaulay

    "The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge." - Daniel J. Boorstin

  • Quizzical Member Legendary Posts: 25,347
    Originally posted by ShakyMo
    Lol you don't need a doctorate in mathematics to do the art for tessellation.

    Now, for the guys coding the engine, it might help if they have a mathematics degree, but I suspect the level of maths you get from a GOOD software engineering / comp-sci degree will be enough (e.g. the ones that concentrate on technical skills rather than 1980s project-management BS).

    For the textures, no.  Any ordinary game artist can do that just fine.

    But the geometry side of tessellation?  While you don't need a PhD, it's going to be awfully hard if you haven't at least taken a manifolds course--which is typically graduate level.  Mathematicians are likely to have the necessary background, and possibly physicists, too.  Apart from that?

    Tessellation in its purest form is just procedurally generated vertex data.  But the geometrically intuitive approach is using it to do subdivisions of simplicial manifolds with boundary.  That is, you start with a model with few triangles and break them up into many more triangles.  But you have to move around the new vertices, because if you just leave them all in the convex hull of your original vertices, the final image won't look any different.

    If you're not using tessellation, you just have to specify where a handful of vertices go.  You can have modeling programs that let you see the model and move things in or out with the program implicitly handling where each particular vertex goes.  If you're using tessellation, you have to be able to specify the coordinates of any arbitrary point on the surface, not just a handful of vertices.  If you want any lighting effects at all, you probably also need to specify the normal vector at any arbitrary point, and not just a handful of points.  And you need to be able to do both of these fast, with stuff that a GPU can do.

    In order to do this, you start by deciding what shape you want to draw, most likely a manifold with boundary.  For shapes that are more complex than this, you likely break it into multiple pieces and draw each separately.  Basically, you can have an explicit formula, read it in from a texture, or do some combination of these two where you read in points from a texture and then have a formula to interpolate between them.  The first option means that you have to be able to think of the geometry of what you want to draw in terms of formulas, without the benefit of a GUI.  For the second, the textures from which you generate the geometry will have to be pretty high resolution, and this could add a considerable performance hit and a lot of additional video memory needed.  The third will let you use lower resolution textures, but now you get both the performance hit of additional texturing and also needing to be able to write formulas that interpolate smoothly.
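    As a rough sketch of that third option (plain Python standing in for shader code, with a tiny made-up height grid standing in for the displacement texture), smoothly interpolating between control points read from a low-resolution map could look something like this:

        # Hypothetical sketch: smooth interpolation of a low-resolution height grid.
        # A real implementation would do this on the GPU against a displacement
        # texture; the grid and names here are made up for illustration.

        def catmull_rom(p0, p1, p2, p3, t):
            # Cubic interpolation that passes through p1 and p2 and stays smooth
            # across neighbouring segments.
            return 0.5 * ((2 * p1)
                          + (-p0 + p2) * t
                          + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                          + (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t)

        def sample_height(grid, u, v):
            # grid is a small 2D list of control heights; u and v are in [0, 1].
            rows, cols = len(grid), len(grid[0])
            x, y = u * (cols - 1), v * (rows - 1)
            ix, iy = int(x), int(y)
            fx, fy = x - ix, y - iy

            def at(r, c):
                # Clamp lookups to the edge of the grid instead of wrapping.
                return grid[max(0, min(rows - 1, r))][max(0, min(cols - 1, c))]

            # Interpolate along x for four neighbouring rows, then along y.
            row_values = [catmull_rom(at(iy + j, ix - 1), at(iy + j, ix),
                                      at(iy + j, ix + 1), at(iy + j, ix + 2), fx)
                          for j in (-1, 0, 1, 2)]
            return catmull_rom(*row_values, fy)

        heights = [[0.0, 0.1, 0.0],
                   [0.2, 0.5, 0.2],
                   [0.0, 0.1, 0.0]]
        print(sample_height(heights, 0.5, 0.5))  # smooth height at the patch centre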

    So let's suppose that you know what shape you want to draw, and that it is a manifold with boundary.  Next you have to pick a triangulation of it, that is, a simplicial manifold with boundary that is homeomorphic to the shape that you want to draw.  That triangulation will be your base vertex data.

    In vertex shaders, all you need to do is determine the tessellation degree at the vertices of your base data.  This basically depends on the curvature of your model at that point, how far it is from the camera, and the frustum width of a pixel on the screen.  The third of those is some fixed constant, the second is pretty easy, and the first doesn't really require explicit computations; you can just eyeball it and call it good enough.  This isn't trivial, but it's no harder than older DirectX 9.0c-style graphics.
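    As a rough sketch of the distance/pixel-width part (Python rather than shader code; the curvature radius, edge length, and pixel constant are all made-up inputs):

        import math

        # Hypothetical heuristic: pick a per-vertex tessellation level so that
        # straight chords across a curved patch stay within about one pixel of
        # the true surface at the current viewing distance.

        def tessellation_level(distance, curvature_radius, edge_length,
                               radians_per_pixel=0.001, max_level=64.0):
            # A chord of length L across a circle of radius r misses the arc by
            # roughly L*L / (8*r); with n subdivisions each chord is L/n long.
            # Keep that miss below the world-space size of one pixel at this distance.
            one_pixel_worth = distance * radians_per_pixel
            n = edge_length / math.sqrt(8.0 * curvature_radius * one_pixel_worth)
            return max(1.0, min(max_level, n))

        # The same vertex needs far fewer subdivisions once it is far away.
        print(tessellation_level(distance=2.0,  curvature_radius=0.5, edge_length=1.0))
        print(tessellation_level(distance=50.0, curvature_radius=0.5, edge_length=1.0))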

    Tessellation control shaders will probably be pretty trivial.  The simple approach is to make any outer tessellation level the maximum of the tessellation levels that you computed at the two vertices of the edge, and any inner tessellation level the maximum of the three vertices of the triangle.  That's assuming you're using triangles; for quads, it's a little different, but not hard.  You need to understand some geometry to understand why you'd do it this way, but it's the sort of thing that a programmer could implement without understanding why, as long as he knows it's what he's supposed to do.  They already do that with clip coordinates, and this would be less complicated than that.
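    A sketch of that "take the max" rule, with Python standing in for the tessellation control shader (in GLSL you would write the same numbers into gl_TessLevelOuter and gl_TessLevelInner):

        # Hypothetical sketch for a triangle patch whose three corners already
        # carry tessellation levels computed back in the vertex shader.

        def control_shader_levels(level_a, level_b, level_c):
            outer = [
                max(level_b, level_c),  # edge opposite vertex A
                max(level_c, level_a),  # edge opposite vertex B
                max(level_a, level_b),  # edge opposite vertex C
            ]
            inner = max(level_a, level_b, level_c)
            return outer, inner

        # Two patches sharing an edge compute the same outer level for that edge,
        # so their subdivisions line up and no cracks appear along the seam.
        print(control_shader_levels(4.0, 16.0, 8.0))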

    But in tessellation evaluation shaders, you have to give an explicit homeomorphism from your simplicial manifold with boundary to the actual manifold with boundary that you wanted to draw.  Furthermore, you have to explicitly specify the normal bundle for it, which tends to be a lot harder than the homeomorphism.  Specifying a handful of particular points isn't enough; for each triangle in your base vertex data, there are tens of thousands of possible barycentric coordinates that the hardware tessellator could kick out, and you need to be able to specify both explicit coordinates and an explicit normal vector for every single one of them.  And you need to do it smoothly, so that when the tessellation degree changes, it doesn't look like your model visibly jumps.  You'll probably also have to fill in texture coordinates in tessellation evaluation shaders, but that's easy by comparison.
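    As the simplest possible toy example of what that means (Python, unit sphere only, since the sphere is one of the few shapes where both the map and the normal have obvious closed forms), a tessellation evaluation step boils down to this:

        import math

        # Toy sketch: map any barycentric coordinate the tessellator emits to an
        # exact position and normal on the surface you actually meant to draw.
        # The "true" surface here is a unit sphere; real models are much harder.

        def normalize(v):
            length = math.sqrt(sum(x * x for x in v))
            return tuple(x / length for x in v)

        def evaluate(corner_a, corner_b, corner_c, u, v):
            w = 1.0 - u - v
            # Linear interpolation gives a point on the flat base triangle...
            flat = tuple(u * a + v * b + w * c
                         for a, b, c in zip(corner_a, corner_b, corner_c))
            # ...and pushing it out to unit length lands it on the sphere.
            position = normalize(flat)
            # For a unit sphere the outward normal is the position itself; for
            # any other surface the normal is a separate (and harder) formula.
            return position, position

        # One octant of the sphere as a single base triangle:
        a, b, c = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
        print(evaluate(a, b, c, 1 / 3, 1 / 3))  # point near the middle of the octant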

    And if you want to animate anything rather than just having a bunch of objects of fixed shapes in your game world, then that adds a bunch of new complications.  But let's ignore that for now.

    So, how much mathematics is involved here?  When I finished a BS in mathematics, I didn't know what a manifold (with or without boundary), a triangulation, a homeomorphism, a simplicial manifold (or any other sort of simplicial complex), or a normal bundle (or any other sort of vector bundle) was.  There's no way that I could have done tessellation at that point, even with today's tools.  Think your average game programmer would have a stronger math background than someone who got a BS in mathematics while taking considerably more math courses than necessary for the degree?  A handful of relatively advanced undergraduates may well take a graduate-level course in manifolds, but there aren't a ton of such people and they're likely headed to grad school anyway.

    And I'm skeptical that tools will ever be able to give anyone without that background access to the full power of what tessellation can do.  You could make a game engine that knows how to draw a sphere and lets artists put spheres wherever they want in the game and tessellates them accordingly.  Or you could make more general shapes such as an ellipsoid rather than a sphere.  You could have dozens of shapes available.  But as soon as an artist wants to draw something that isn't on the list of what the game engine already knows how to do, either he's out of luck, he can draw it but not use tessellation, or he needs all of the mathematics above.

  • Quizzical Member Legendary Posts: 25,347
    Originally posted by Deivos

    More or less, to use tessellation you just need to make a texture akin to a bump/height map, referred to as a displacement map. Ultimately they are all essentially a greyscale texture map of an object showing the high/low points of a model, and the tessellation mechanics translate that onto the object.

    It's one thing to technically use tessellation.  It's quite another to use it in a way that actually has a point.  Here's a video that compares several games with tessellation on and tessellation off:

    http://www.youtube.com/watch?v=-uavLefzDuQ

    Their point was to say, "You need a DirectX 11 card so that you can get the tessellation on effects".  EVGA is, after all, trying to convince you to buy a new video card.  Except that they couldn't find a single game that used tessellation for what it's supposed to be there for.  Now, the video is from 2010, but I'm not aware of newer games that do it better, either.

    In order to be able to tell the difference between tessellation on and off, they have to show a model up really close to the screen.  And with tessellation off, the base models still have a whole lot of vertices.  That defeats most of the point of tessellation.

    It's pretty trivial to make a model that has a ton of vertices up close.  The problem is that, without tessellation, the same model will still have a ton of vertices when it's far away.  Or perhaps you can make several and switch between them based on distance, but that adds to the cost.  (You'd use the same textures, but only vary the vertex data.)  Most models are far away most of the time, so that means you've got a huge performance hit where you're drawing far more vertices than you can tell the difference on.

    The point of tessellation is not so much that you can make models look smooth up close; you could do that without tessellation, too.  It's that you can make models look smooth up close without needing many vertices when they're not up close.  That makes it into a huge performance optimization; if you're using tessellation to kill your performance and make the game unplayable outside of high end hardware, you've missed the point.

    To use few vertices when far away, you want base models with very few vertices.  The base models should look really blocky.  I don't mean like 2005's cutting edge worth of blockiness.  I mean like SNES Star Fox blocky.

    Now, a game that uses tessellation probably also needs to run in older APIs, as many people don't have DirectX 11 or OpenGL 4.  But that doesn't mean that you use the base vertex data for the older API.  You can do tessellation in software to get more vertices in the vertex data that the video card uses for the legacy API, so "API without tessellation" sees exactly the same vertices as "tessellation on with tessellation degree 5" or some other number instead of 5.  You only need to do tessellation on the CPU once when you load a model, not every single frame, so this isn't that much of a performance hit even if you do it on the fly.  It would also be possible to do it far ahead of time and store the tessellated models on the hard drive and load those directly.
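    A sketch of that load-time software tessellation (Python again, with a unit sphere standing in for a real model so the "push the new vertices onto the true surface" step has a simple formula):

        import math

        # Hypothetical loader step: one round of subdivision done on the CPU when
        # the model is loaded, so an older API without hardware tessellation gets
        # roughly the vertices that a low tessellation degree would produce.

        def normalize(v):
            length = math.sqrt(sum(x * x for x in v))
            return tuple(x / length for x in v)

        def midpoint_on_sphere(p, q):
            # Flat midpoint of the edge, pushed out onto the unit sphere.
            return normalize(tuple((a + b) / 2.0 for a, b in zip(p, q)))

        def subdivide_once(triangles):
            # Split every triangle into four and move the new vertices onto the sphere.
            out = []
            for a, b, c in triangles:
                ab, bc, ca = (midpoint_on_sphere(a, b),
                              midpoint_on_sphere(b, c),
                              midpoint_on_sphere(c, a))
                out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
            return out

        # Done once at load time, not every frame.
        base = [((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))]
        print(len(base), "->", len(subdivide_once(base)), "triangles")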

    To be fair, it's theoretically possible that the games in the video could have done exactly that--though it sure didn't look that way, as the vertices in the DirectX 10 version didn't tend to still be there in the DirectX 11 version.  But if the benefit of "now the game runs smoother on slower hardware" that tessellation is supposed to offer actually worked properly, why doesn't anyone tout it as an advantage of their game?

  • Deivos Member Epic Posts: 3,692

    Is there a reason you quoted me on that, or was it just to throw out more information?

     

    As I see it, that has very little to do with what my comment was. I was only citing how I use tessellation as an artist; you already made your spiel in your last post about the difference between an artist using an implementation of tessellation versus actually, y'know, implementing it.

     

    I don't mean to be rude with this comment, but that was effectively a technical explanation of something that simply didn't need to be said. It's not like there was a disagreement on the matter, and to those familiar with tessellation it's not exactly new information either. I've seen scaling implementations. It's not entirely accurate for me to reference this, but this image somewhat illustrates the point.

     

     

    The lower-poly model on the left can be the normal game model if you're not next to the character; when you approach within a certain range, the tessellation can kick in and the model gets subdivided into the smoother model, with the applied map rendering the higher detail. This can even be tiered, making two sets of tessellation alongside a normal/bump map for larger distances.

     

    And that's just continued semantics on how to build LOD.

     

    EDIT: Let me clarify that I agree with your points. It's just that your points were never in dispute. Bringing them up seems rather random and tangential to any standing commentary.

     

    Another image that helps illustrate the point on LOD (though it's also technically just Nvidia showcasing stuff, the point here is that the only model that needs to be made is the first one, and the bump mapping + tessellation adds in increasing levels of detail).

     

    EDIT2: Guess I should also say that if you just put that out to be informational, I have no real complaints about that. Just confused about why you'd quote me to do it.

    I shouldn't really complain about tangents, as I tend to make them myself randomly. And I guess the whole tessellation thing is a tangent from the original point of the thread anyway.

    "The knowledge of the theory of logic has no tendency whatever to make men good reasoners." - Thomas B. Macaulay

    "The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge." - Daniel J. Boorstin

  • Ridelynn Member Epic Posts: 7,383

    I just used tessellation as one example of a technical feature that isn't mature enough to see widespread use in advancing gaming as a whole. It's relatively new and has seen a few fairly inconsequential uses to date, but it could make a large impact if we can figure out how to better exploit it (TressFX to start with) - and to do that we need better education and/or more robust development tools. Usually these two items grow organically together: some smart people make up some tools while developing their own software, those tools get released into the wild where they get refined and made more robust and easier to use, and eventually it requires very little education to get most of the effect of the technology (it will still require a lot of education to fully exploit it, but that educational level will be better understood).

    I'm sure there are many, many others. You could probably make the argument that current physics tools are just now getting out of their infancy, to the point where they can start to make some dramatic changes in gameplay, and that has as much to do with GPU technology as anything. We've gone from a few flags blowing in the wind and extravagant barrel explosions to casual games based on physics simulations, and we have a lot more ground to cover before we've really exploited all we can with physics-based APIs; as those APIs and development tools mature, we'll see a lot more exciting uses for physics in gaming.

  • Quizzical Member Legendary Posts: 25,347
    Originally posted by Deivos

    Is there a reason you quoted me on that, or was it just to throw out more information?

     

    As I see it, that has very little to do with what my comment was. I was only citing how I use tessellation as an artist; you already made your spiel in your last post about the difference between an artist using an implementation of tessellation versus actually, y'know, implementing it.

     

    I don't mean to be rude with this comment, but that was effectively a technical explanation of something that simply didn't need to be said. It's not like there was a disagreement on the matter, and to those familiar with tessellation it's not exactly new information either. I've seen scaling implementations. It's not entirely accurate for me to reference this, but this image somewhat illustrates the point.

     

     

    The lower-poly model on the left can be the normal game model if you're not next to the character; when you approach within a certain range, the tessellation can kick in and the model gets subdivided into the smoother model, with the applied map rendering the higher detail. This can even be tiered, making two sets of tessellation alongside a normal/bump map for larger distances.

     

    And that's just continued semantics on how to build LOD.

     

    EDIT: Let me clarify that I agree with your points. It's just that your points were never in dispute. Bringing them up seems rather random and tangential to any standing commentary.

     

    Another image that helps illustrate the point on LOD (though it's also technically just Nvidia showcasing stuff, the point here is that the only model that needs to be made is the first one, and the bump mapping + tessellation adds in increasing levels of detail).

     

    EDIT2: Guess I should also say that if you just put that out to be informational, I have no real complaints about that. Just confused about why you'd quote me to do it.

    I shouldn't really complain about tangents, as I tend to make them myself randomly. And I guess the whole tessellation thing is a tangent from the original point of the thread anyway.

    There is "what artists can do today", and then there is "what artists would need to do to implement it efficiently".  If the models you show are representative of the former, then it illustrates the problem, as it's a long, long way away from the latter.

    How many vertices are there in your first model on the left?  Without doing an explicit count, I'd say it looks like somewhere in the ballpark of 1000.  If you're using DirectX 9.0c, that's fine.  For an up-close model, you need a lot of vertices.  But what happens when the character is way off in the distance and only 10 pixels tall?  A thousand vertices is an awful lot.  When far away, if you only had 100 vertices instead of 1000, you wouldn't be able to tell the difference.  Being able to reduce the performance hit of far-away models is arguably more important than being able to increase the smoothness of up-close models.

    Worse, tessellation makes it so that if the same vertex is shared by multiple patches, it has to be processed separately for each patch in tessellation evaluation shaders.  So if you use the left model as your base model and you're using hardware tessellation, now you're looking at 4000 vertices when the model is far away and you can't tell the difference between that and 100 vertices.

    You may be partially able to get around that by doing all of the computations in vertex shaders and passing them along so that you don't have to redo the computations for your base vertices in tessellation evaluation shaders, but that still adds a considerable performance hit to pass all that data around, so it's still a lot more expensive than just processing 1000 vertices without tessellation.  That also adds a good bit of branching to your tessellation evaluation shaders, so it may add a considerable performance hit for models that are close enough to need a tessellation degree greater than 1.

    If you're reading in heights and normal vectors from a texture, then you're looking at 4000 texture calls just to get the vertex data for a tiny, distant character that might only have 50 texture calls in pixel/fragment shaders.  That's wildly inefficient, and that's how you end up with games that can only draw 10 or 15 characters on the screen at a time without choking.  That leads to needing to cap how many characters you draw awfully low, and then people complain that it's not a real MMORPG.  It also makes the minimum system requirements much higher than they ought to be, as you're using tessellation to kill your performance rather than as a performance optimization.

    If you make it so that your displacement map always has height 0 at base vertices, then you can skip the texture call on base vertices, which helps enormously for far away objects where all vertices are base vertices of the model.  But it sure doesn't look to me like you're doing that; the Nvidia demo at the end certainly isn't.

    -----

    There are clear metrics for efficiency.  You have some "true" model that is basically a limit as the number of vertices goes to infinity in some sense.  And then you want all positions in the model to be within some fraction of a pixel of the true model while displayed on the screen, while using as few vertices as possible and as little GPU load to process each vertex as possible.  Multiply the latter two quantities to get the GPU load (though per-vertex load is somewhat nebulous, as it can vary from one vertex to the next; perhaps more properly, sum the per-vertex load across all vertices).  If two different ways of setting up vertices each make it so that the maximum distance from any point in the model to what it draws on your screen is, say, 0.2 pixels, and one requires double the GPU load of the other (e.g., twice as many vertices with the same amount of work per vertex), then the cheaper method of placing vertices is better.
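    As a toy illustration of that metric (Python, with a circle standing in for the "true" model and made-up numbers for the viewing distance and screen resolution):

        import math

        # The "true" model is a circle of the given radius; the approximation is a
        # ring of straight segments. The error is the largest gap between the two,
        # converted into screen pixels at the given distance. Treat the vertex
        # count as the GPU load and compare schemes at equal pixel error.

        def max_error_pixels(radius, segments, distance, pixels_per_radian=800.0):
            half_angle = math.pi / segments
            sagitta = radius * (1.0 - math.cos(half_angle))  # worst-case gap
            return (sagitta / distance) * pixels_per_radian

        for segments in (8, 16, 32, 64):
            err = max_error_pixels(radius=1.0, segments=segments, distance=20.0)
            print(f"{segments:3d} vertices -> max error {err:.3f} px")
        # At this distance, 32 vertices already keep the error near 0.2 pixels;
        # spending 64 on it would double the load for a difference nobody can see.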

  • fivoroth Member Uncommon Posts: 3,916
    Originally posted by Maniox
    Originally posted by fivoroth

    I agree that not a lot of people care. I never saw the appeal of buying the best GPUs (like the GTX 690) or CPUs like that Intel i7 which was total overkill for any sort of gaming (don't remember the name). Do people really spend so much money on a single GPU just to get the very best? You will most likely have to buy a new GPU in 2-3 years tops anyway, so "futureproofing" seems a bit pointless as technology evolves so quickly.

    You know what is exciting though? The new Xbox, which is being unveiled on 21 May :)

     

    Originally posted by Maniox

    Feels like it's always nVidia with driver errors and bugs while ATI works smoothly.

    From my personal experience, nVidia is much, much better than ATI. I have never had any driver errors with my Nvidia cards. I have only once bought an ATI GPU and I was seriously disappointed with all the bugs. Ever since, I only buy Nvidia.

    When I was playing BF2, BC2, WoW, etc. I would very often get nVidia-related problems, and nVidia has been infamous for hating the Battlefield series with a passion, but a little after BC2 was released I got my hands on a 5770, and later a 7950, which has never given me any artifacts and such.

     

    I haven't played any of the Battlefield games on the PC to be honest. With WoW I haven't had any problems.

    And if you went from nVidia to ATI you probably had driver fragments remaining or something that might screw things up, but I guess it's decided by the rest of your system.

    I don't upgrade my computers. I always just buy a new one :)

     

    Mission in life: Vanquish all MMORPG.com trolls - especially TESO, WOW and GW2 trolls.

  • Deivos Member Epic Posts: 3,692

    Well, that's the reason there are LOD models in the first place. The point of the above illustration is to note how the near model can transition into a high-detail up-close model.

    In application, that'd be the only model, and the only time the thing would be getting tessellated. You drag a model back from the camera and get to the point where, yeah, rendering a lower-poly model is more sensible because of the lack of visible detail versus the power taken; at that point you swap a lower-detail model into place, one that's not high-poly and does not contain any tessellation data.

     

    That's what I was stating about the above example, and why I said "It's not entirely accurate for me to reference this, but this image somewhat illustrates the point" before.

    The image I used doesn't fully illustrate my point; I used it as a frame of reference. You can't take it alone as a literal case, and you can't take it as an isolated thing.

     

    It's not rocket science; it's what's been done for quite a while now. This is taking a single thing and introducing problems that aren't problems, by arguing it as an isolated semantic point.

    You're a bright person; you should realize this is not where an issue exists.

     

    Too caught up in the math of one thing and not addressing the problem in any global context. :p

     

    EDIT: Let me give a different example. 

    Say you have a character you need to render face to face, a few yards away, and a couple of blocks away.

    The first model is something you might use at the few-yards distance, something that looks reasonable from a range, but up close it becomes kind of flat, angular, and consequently ugly.

    Too far away and the detail is lost, yet the machine keeps rendering the full model's complexity, even though you can't see it.

    That's the problem you present; then you compound it with the tessellation data.

     

    But then there's a solution: just make multiple levels of detail. That first model is used for all mid-range action; as it's pulled away, you swap the model out for a lower-poly model missing the unnecessary data, and perhaps even drop the texture resolution.

    And then up close? That's where the tessellation data actually comes in handy: when the model comes closer, you can swap it again for a model that functionally looks about the same but contains the subdivision and tessellation data, so as you approach the model, the detail scales up and emerges.

     

    And there you go: at mid and long range you don't have 4k vertices or the extra information getting processed, and up close you're still taking full advantage of it.
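    A sketch of that tiered swap, with hypothetical distance cutoffs and vertex counts (Python; a real engine would pick the thresholds per model and per camera):

        # Hypothetical LOD tiers: tessellation data only exists on the near model,
        # so distant characters never pay for it.

        def pick_lod(distance_to_camera):
            if distance_to_camera < 5.0:
                # Near tier: same silhouette as the mid model, but carries the
                # subdivision/tessellation data so detail scales up as you approach.
                return {"model": "hero_near", "base_vertices": 1000, "tessellated": True}
            if distance_to_camera < 40.0:
                # Mid tier: the plain model, no tessellation data at all.
                return {"model": "hero_mid", "base_vertices": 1000, "tessellated": False}
            # Far tier: low-poly stand-in, possibly lower-resolution textures too.
            return {"model": "hero_far", "base_vertices": 150, "tessellated": False}

        for d in (2.0, 20.0, 200.0):
            print(d, pick_lod(d))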

    "The knowledge of the theory of logic has no tendency whatever to make men good reasoners." - Thomas B. Macaulay

    "The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge." - Daniel J. Boorstin
