
Are very high monitor resolutions still a reason to get a ton of video memory?

Quizzical Member Legendary Posts: 25,348

Or to put it another way, how much video memory do you really need?

There are three basic ways that video cards use video memory in games.  My classification here is arbitrary, but you'll see why momentarily.  (A short code sketch illustrating the categories follows the list.)

1)  A game tells a video card to buffer data in video memory and leave it there for a long time, so that it can read from it frequently later.  Textures that correspond to "pictures" of what something in the game looks like almost invariably fall into this category.  Vertex data for various models probably does as well.  This does not get wiped every frame, and data in this category will tend to stay in video memory for seconds or minutes at a time--or in some cases, until you close the game.

2)  A game uses short-term buffers that are written to and then wiped every frame.  All games use a framebuffer, and nearly all use a depth buffer.  There could be some additional framebuffer objects used for post-processing or a multi-pass rendering system, where the game first renders to a texture, then does something to that texture to produce the final frame.  Or there can be several steps of rendering to a texture, using that texture to create another texture, and so forth, before eventually producing the final frame at the last step.  The distinction between this and part (1) is that the data gets wiped every frame.

3)  A game uses some data for internal computations by shaders or fixed-function pipeline portions.  Unlike the first two cases, a game programmer doesn't specify exactly how much memory is used here; rather, video drivers get to make the decisions.  Data written here is very short-lived, and typically overwritten or discarded within microseconds if not nanoseconds.  Any data used in internal computations in a shader is surely discarded when the shader completes execution, for example--and with many shaders only around 10 lines of code apart from declaring variables, that happens very fast.  Some of this data is stored only in GPU cache and never actually makes it to video memory, but the details here will vary wildly from game to game, and even within a given game, from one GPU chip to the next.
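Roughly, here is what categories (1) and (2) look like in code.  This is a minimal OpenGL sketch, not taken from any particular game: assume a GL context already exists, and the function names are just illustrative.

    #include <GL/glew.h>
    #include <vector>

    // Category (1): long-lived data, uploaded once and read for many frames.
    GLuint uploadStaticTexture(const std::vector<unsigned char>& texels,
                               int texW, int texH) {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, texW, texH, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, texels.data());
        return tex;  // stays in video memory until glDeleteTextures
    }

    GLuint uploadStaticVertices(const std::vector<float>& vertices) {
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        // GL_STATIC_DRAW hints: written once, read many times.
        glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(float),
                     vertices.data(), GL_STATIC_DRAW);
        return vbo;
    }

    // Category (2): a render target that is rewritten every frame.
    GLuint makeRenderTarget(int w, int h) {
        GLuint fbo, color, depth;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);

        glGenTextures(1, &color);
        glBindTexture(GL_TEXTURE_2D, color);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);  // 4 bytes per pixel
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, color, 0);

        glGenRenderbuffers(1, &depth);
        glBindRenderbuffer(GL_RENDERBUFFER, depth);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, w, h);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, depth);
        return fbo;
    }

    // Category (3) has no API call at all: registers and scratch space for
    // in-flight shader executions are allocated entirely by the driver.

Notice that nothing in category (1) takes the monitor resolution as a parameter, while category (2) is little more than the monitor resolution times bytes per pixel.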

-----

So what is the point of classifying video memory usage this way?  The point is that different variables influence the amount of video memory used in different categories.  Data in the first category depends on what game you're playing and what graphical settings you're using.  Higher resolution textures will take a lot more space, for example.  Reducing the draw distance likely means that data for objects that are farther away doesn't need to be buffered, which saves space.

But the video memory usage in the first category does not depend on what video card you're using (except indirectly, in that this may influence your choice of graphical settings), nor does it depend on your monitor resolution.  The texture(s) used to draw a character's arm, for example, do not depend on your monitor resolution.  How many times the texture is accessed in a given frame (and indeed, whether it is accessed at all) can depend on your monitor resolution, but not how much space the texture itself takes.

The third category also does not depend on your monitor resolution.  It may, however, depend on the particular video card that you're using.  Higher end cards have many more shaders than lower end cards; indeed, this is one of the things that makes it a higher end card.  Putting the extra hardware to good use means executing shaders on more data simultaneously, which means more video memory is needed to store the internal steps of all of the computations going on at once.  Getting enough parallelism is fairly trivial, regardless of the monitor resolution, though.  Even a tiny 1366 x 768 monitor resolution has more than a million pixels, which likely means more than a million pixel/fragment shader executions.  That's more than enough to keep even 2048 shaders in a Radeon HD 7970 or 2880 shaders in the upcoming top-end GK110 card mostly busy.

But again, you're limited here by how many things the video card is willing to have running at a time.  Having to draw around two million pixels is more work than one million pixels, but the video card isn't going to try to have two million shaders executing at once.  It will have the same number running simultaneously at either resolution, and merely take longer for two million pixels because many haven't even started by the time the video card would be completely done at a lower resolution.

This does, however, raise the issue that, while a given amount of video memory may be overkill on a lower end card, that doesn't automatically mean that the same amount of video memory running the same games at the same settings is overkill on a higher end card.  I don't know how much the video memory usage will vary by GPU here.

-----

That leaves the second category as the only one for which video memory usage depends on the monitor resolution.  So the amount of video memory that you need as a result of your monitor resolution is at most the amount used in the second category.  And this might be less than you think.

Let's suppose that you're using a 1920 x 1080 monitor.  That has 2,073,600 pixels, or around 2 million.  A framebuffer has four bytes per pixel for 32-bit color depth, which comes to just under 8 MB for the framebuffer.  The video card also has a depth buffer, which likewise uses 4 bytes per pixel, which again comes to just under 8 MB for the depth buffer.  So at 1920 x 1080, we're under 16 MB for the main buffers.  That doesn't sound like a compelling case to get a 2 GB card rather than 1 GB, does it?

Well, there is more than that, of course.  You don't just need one frame buffer.  You need three:  the one that you're actively writing to, the one that is most recently completed, and the one that the video card is in the process of uploading to the monitor.  So that means you're at 24 MB for framebuffers, not 8 MB.  You still only need one depth buffer, so adding that means 32 MB in total for the buffers.  For more than a few games, that really is all that you'll use in the second category.

So let's increase the monitor resolution to 2560 x 1600.  That puts you at 4,096,000 pixels, and just under 64 MB rather than 32 MB for the main buffers.  Throw on a 3-monitor Eyefinity setup and you can triple that to a little under 200 MB.  Stereoscopic 3D means you need four framebuffers rather than two, in addition to the one that the video card is in the process of sending to the monitor.  But the 120 Hz refresh rate needed for stereoscopic 3D will overwhelm any monitor port at 2560 x 1600, so you're capped at 1920 x 1080.  Even with a 3-monitor Eyefinity setup, at 8 MB per buffer times 6 buffers per monitor (four framebuffers that the programmer controls, one in use to send data to a monitor, and one depth buffer), you're under 150 MB.  That's less than you'd use at a higher monitor resolution, so let's ignore it.

Okay, so what about post-processing effects?  Here, you may have two buffers instead of one for the active framebuffer, as you have both the one you're reading from and also the one you're writing to.  Once you're done writing to the next one, you can probably scrap the previous one.  So that basically adds an extra framebuffer worth of data, and now you're at around 250 MB in our Eyefinity setup with three 2560x1600 monitors.  Maybe you actually do need a few textures that you've rendered to simultaneously, so you can push it to 300 MB or 400 MB.
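The usual pattern for this is "ping-pong" buffering.  The sketch below reuses makeRenderTarget() from the earlier sketch; colorTextureOf() and drawFullscreenPass() are assumed helper functions, not real API calls.

    #include <utility>  // std::swap

    // Ping-pong post-processing: read the previous pass's output while
    // writing the next, then swap.  Only two extra color buffers exist
    // at once, no matter how many passes are chained.
    int w = 2560, h = 1600;              // render at the monitor resolution
    int numPostPasses = 3;               // however many effects are chained
    GLuint fboA = makeRenderTarget(w, h);
    GLuint fboB = makeRenderTarget(w, h);
    for (int pass = 0; pass < numPostPasses; ++pass) {
        glBindFramebuffer(GL_FRAMEBUFFER, fboB);             // write target
        glBindTexture(GL_TEXTURE_2D, colorTextureOf(fboA));  // read source
        drawFullscreenPass(pass);   // samples fboA's color, writes into fboB
        std::swap(fboA, fboB);      // this pass's output feeds the next pass
    }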

There's also SSAA, which greatly increases the effective resolution that you render to.  If you're using 4x SSAA, that quadruples your number of pixels for the active framebuffer and depth buffer, though it won't affect the video memory usage of other completed framebuffers.  But still, with an Eyefinity setup of three 2560 x 1600 monitors at 4x SSAA, you're a bit under 600 MB used for your framebuffers and depth buffer.  And that, of course, is a very outlandish situation that few people will ever see.  Drop down to a single 2560 x 1600 monitor, which is still much higher than most gamers use, and you're under 200 MB of video memory usage for framebuffers and a depth buffer, even at 4x SSAA.
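If you want to check this arithmetic yourself, here it is in runnable form.  This assumes 4 bytes per pixel for both color and depth, as above, and the buffer counts described in the last few paragraphs; real engines will differ in the details.

    #include <cstdio>

    // MB used by a given number of full-screen buffers at 4 bytes per pixel.
    double bufferMB(long pixels, int buffers) {
        return pixels * 4.0 * buffers / (1024.0 * 1024.0);
    }

    int main() {
        long p1080 = 1920L * 1080;  // 2,073,600 pixels
        long p1600 = 2560L * 1600;  // 4,096,000 pixels

        // Three framebuffers plus one depth buffer = 4 buffers.
        std::printf("1920x1080: %.1f MB\n", bufferMB(p1080, 4));      // 31.6
        std::printf("2560x1600: %.1f MB\n", bufferMB(p1600, 4));      // 62.5
        std::printf("Eyefinity: %.1f MB\n", bufferMB(3 * p1600, 4));  // 187.5

        // 4x SSAA quadruples only the active framebuffer and depth buffer;
        // the completed, scan-out, and post-processing buffers stay native.
        double ssaa = bufferMB(3 * p1600 * 4, 2) + bufferMB(3 * p1600, 3);
        std::printf("Eyefinity + 4x SSAA: %.1f MB\n", ssaa);          // 515.6
    }

That last line is the most extreme case in this post, and it still comes in at around half a gigabyte.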

-----

Now, you can see why this would be a big issue if you're choosing between a 256 MB card and a 512 MB card at modern resolutions and settings.  For that matter, you can see why AMD and Nvidia didn't do SSAA in drivers until recently, when 1 GB cards were common.  And why they didn't try to get you to spread a game window across three monitors until recently, either.

But if you're choosing between a 1 GB card and a 2 GB card?  Even 2560 x 1600 isn't large enough that 1 GB of video memory is likely to be problematic.  If you're deciding between a 2 GB card and a 4 GB card, your monitor resolution barely matters, even if you're using Eyefinity or Nvidia Surround, a 2560 x 1600 monitor, or eventually, a 3840 x 2160 ("4K x 2K") monitor.

That's not to say that you'll never have any use for more than 2 GB of video memory.  Buffering textures in video memory can take almost arbitrarily large amounts of space, simply by keeping the textures available for objects that are increasingly far away from you.  That provides very mild benefits beyond a certain point, though.  Higher resolution textures can offer a real graphical benefit to users, but that creates the question of where those higher resolution textures will come from--or more pointedly, how much space you want a game installation to use, and how long you want it to take to download the game.

But one should be realistic about the benefits of enormous amounts of video memory.  Your monitor resolution, no matter what it is, does not justify thinking that 2 GB of video memory won't be enough, nor will it in the foreseeable future.  Period.

Comments

  • ShakyMo Member Common Posts: 7,207

    There's one other area where the extra memory is good.

    Enabling custom very high res textures.  E.g., in games using the Rage engine, like Rage and Max Payne 3, you can tweak the ini files to use 8K textures, where you will need 1.5 GB of GPU RAM.  You can tweak to 16K textures if you have a Crossfire/SLI beast, too.  Same with custom texture packs for stuff like Skyrim (not the Bethesda-released HD pack; those are 4K textures).

  • miagisan Member Posts: 5,156

    Quiz,

    I don't post much in your threads, but goddamn, yours are some of the best threads on these god-forsaken forums.

     

    That is all.

  • Vrika Member Legendary Posts: 7,888

    Ty for the post, interesting to read.

    But how about huge battles/gatherings in an MMO?  Assuming the game engine doesn't put a limit on how many characters can be visible on-screen at once, there could well be more than 50 characters visible in a large PvP battle or a major city.  Every character could have different textures, different equipment, a different mount, and a different pet/vanity pet.

    I wonder if the GPU's memory could run out while trying to store all those textures.

     
  • NBlitz Member Posts: 1,904

    I always enjoy your posts, Quiz, which makes me wonder where you come from and what you do in your daily life! Darn it :p

    Great post as usual, though a little too technical for the way my brain is wired.

    Though I will save the thread for later, sit down, and try to make better sense of what you wrote.

  • miagisan Member Posts: 5,156
    Originally posted by Vrika

    Ty for the post, interesting to read.

    But how about huge battles/gatherings in an MMO?  Assuming the game engine doesn't put a limit on how many characters can be visible on-screen at once, there could well be more than 50 characters visible in a large PvP battle or a major city.  Every character could have different textures, different equipment, a different mount, and a different pet/vanity pet.

    I wonder if the GPU's memory could run out while trying to store all those textures.

    Lag based on large numbers of characters on screen in an MMO is a server-side issue, not a matter of actual data being sent to your computer or your computer lagging.


  • syntax42 Member Uncommon Posts: 1,378

    Quizzical is too knowledgeable for these forums.  Ban him!  He makes wanna-be computer geeks like me lose two sizes on my e-peen.  :(

     

    Yes, it was very informative and gives me a good idea of what to look for next.  Chances are, though, I'll be buying from a selection of whatever marketing people tell us we need, instead of what is truly necessary.  As we all know, bigger numbers mean more fun, and a marketing person will make sure only bigger numbers are for sale next year.

  • Quizzical Member Legendary Posts: 25,348
    Originally posted by ShakyMo

    There's one other area where the extra memory is good.

    Enabling custom very high res textures.  E.g., in games using the Rage engine, like Rage and Max Payne 3, you can tweak the ini files to use 8K textures, where you will need 1.5 GB of GPU RAM.  You can tweak to 16K textures if you have a Crossfire/SLI beast, too.  Same with custom texture packs for stuff like Skyrim (not the Bethesda-released HD pack; those are 4K textures).

    I explicitly mentioned higher resolution textures as a way to use more video memory in the original post.  Twice, in fact.  But the amount of video memory you need for texture buffers does not depend on your monitor resolution, so it has no bearing on whether a very large monitor resolution would mean you need more video memory than you would otherwise.
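    To put some rough numbers on the texture side (my own back-of-the-envelope, assuming uncompressed RGBA8 and the usual DXT5 compression at 1 byte per texel):

        #include <cstdio>

        int main() {
            double texels = 8192.0 * 8192.0;         // one 8K x 8K texture
            double mb = 1024.0 * 1024.0;
            double rgba8  = texels * 4 / mb;         // 4 bytes per texel
            double mipped = rgba8 * 4.0 / 3.0;       // full mip chain adds ~1/3
            double dxt5   = texels / mb * 4.0 / 3.0; // DXT5: 1 byte per texel
            std::printf("8K RGBA8:        %.0f MB\n", rgba8);   // 256 MB
            std::printf("8K RGBA8 + mips: %.0f MB\n", mipped);  // ~341 MB
            std::printf("8K DXT5 + mips:  %.0f MB\n", dxt5);    // ~85 MB
        }

    A handful of uncompressed 8K textures already blows past a 1 GB card, no matter what monitor is attached.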

  • Quizzical Member Legendary Posts: 25,348
    Originally posted by Vrika

    Ty for the post, interesting to read.

    But how about huge battles/gatherings in an MMO?  Assuming the game engine doesn't put a limit on how many characters can be visible on-screen at once, there could well be more than 50 characters visible in a large PvP battle or a major city.  Every character could have different textures, different equipment, a different mount, and a different pet/vanity pet.

    I wonder if the GPU's memory could run out while trying to store all those textures.

    Depending on how a game engine is designed, needing to draw a lot of characters at once may or may not require a lot of video memory to buffer the textures and vertex data.  In games where you fight against identical centuplets (actually, that's just about every game), for example, the game will probably buffer the textures for a character once and then read from the same textures for a bunch of different characters.
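    In OpenGL terms, the cheap case can be as simple as the sketch below; glDrawElementsInstanced is a real GL 3.1+ call, while characterTexture, characterMesh, and indexCount are placeholder names.

        // One set of textures and vertex data, buffered once...
        glBindTexture(GL_TEXTURE_2D, characterTexture);
        glBindVertexArray(characterMesh);
        // ...drawn 50 times; each instance picks up its own transform via
        // gl_InstanceID (e.g., from an instanced attribute or uniform buffer).
        glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
                                nullptr, 50);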

    Also, not everything needs a texture to draw it.  A texture is just a lookup table, and doesn't have to intuitively correspond to a picture.  Anything that doesn't need a lookup table to draw it (e.g., solid color objects) likely doesn't need a texture.  On the other hand, you could need several "textures" for one particular portion if you're using a texture to describe the geometry of a scene, for example.

    But that doesn't depend on your monitor resolution.  A higher monitor resolution may mean that more characters get drawn in a given scene, as some would be off the screen at a lower resolution.  But textures for any nearby characters have to be buffered in video memory whether they're going to be drawn in a given frame or not, as the camera could spin around very quickly.

  • Quizzical Member Legendary Posts: 25,348

    Here, let me give you two examples of why it's hard to tell how many textures a game is using.

    [Screenshot: a bonfire]

    First, we have a bonfire.  How many textures do you think that uses for the entire bonfire?  The answer is actually only one.  While the wood portion of the bonfire has 80 triangles, if you look closely, it's actually just one texture applied 50 times.  The ends of the pieces of wood use the same texture as the log sides, but just scaled differently.  Meanwhile, the flames don't use any textures at all, as it picks a random color for each of the three vertices and then interpolates across the triangle.  (Incidentally, being able to draw and animate 2550 triangles of "flames" like this with virtually no performance hit is one of the advantages of using a recent graphics API rather than an older one like DirectX 9.0c.)

    [Screenshot: an outdoor scene]

    Now we have an outdoor scene.  How many textures does it look like were used to draw it?  The answer is actually hundreds.  Each rock has a different texture, as I wanted to make every rock look different.  Each tree uses exactly two textures:  one for the trunk and one for the branches.  Each "branch" texture is drawn once for each branch, which comes to 20-40 times per tree.

    The ground, on the other hand, is really messy.  You can see boundaries where it changes from one type of ground to another, and near those boundaries, textures can't repeat at all.  The reason for this is that the boundaries can run off at any angle, so the texture for one area on the ground will look wrong if you place it anywhere else.  But there are also broad areas where the ground is roughly constant, and there, you can have one texture and draw it a bunch of times.

    The large brick walls off in the distance (the other side of the walls in the bonfire picture, incidentally), meanwhile, only use two textures.  The difference between the two textures is actually their length, not what they intuitively look like.  The reason some portions of the wall are different colors is that some are in the sunlight and others aren't, as the sun is off far to the right of the camera.

  • Ridelynn Member Epic Posts: 7,383

    Monitor resolution hasn't been a serious consideration versus VRAM for a long, long time - for exactly the reasons you bring up.

    Before we had post-processed anti-aliasing (and we were limited to around 512 MB on the upper-tier cards), VRAM was a big deal - you could still play the games fine, but SSAA required a lot of VRAM - (your frame buffer size) x (SSAA level, roughly) x (level of buffering (double/triple buffering)) - which can get big quickly, especially once you start looking at 4x+ levels of SSAA. Today, with 1 GB+ being entry-level, and most AA being pushed to post-processed shader computations rather than brute-force high-resolution downscaling, textures are really the limiting factor.
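    Written out as a function (assuming 4 bytes per pixel; the name is just for illustration), that rule of thumb is:

        // (frame buffer size) x (SSAA level) x (buffering level), in MB
        double ssaaVramMB(long w, long h, int ssaaLevel, int buffering) {
            return w * h * 4.0 * ssaaLevel * buffering / (1024.0 * 1024.0);
        }
        // ssaaVramMB(1920, 1080, 4, 3) is ~95 MB - a big slice of a
        // 512 MB card before a single texture is loaded.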

  • Ridelynn Member Epic Posts: 7,383

    Also pertinent to this discussion is the Rage engine, which uses Virtual Texturing, a version of a megatexture. The basic idea is that you put all your texture images into one file, and you just "snip" off the piece that you want to use for that particular piece of geometry. Having a huge texture file allows you to uniquely texture every piece of geometry in your game (with hand-drawn art, if you so choose).

    Developers notes on Megatextures from Gamasutra

    This sounds like it eats huge amounts of VRAM, and it could, but the engine on the back end is intelligent enough to just load the parts that it needs, and it can go ahead with rendering even if the texture isn't fully loaded and stream it in as it arrives in further frames (texture pop - not good, but better than the framerate stuttering). As far as I know, Rage is pretty much the only large-scale game to use this, but it's an interesting concept.
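    The streaming step itself can be done with the standard glTexSubImage2D call, roughly as below; the hard part - deciding which tiles a frame actually needs - is omitted, and the names are placeholders.

        // Swap one tile of the megatexture into the resident texture;
        // everything outside the destination rectangle is untouched.
        void streamTile(GLuint residentTex, int destX, int destY,
                        int tileSize, const unsigned char* tileData) {
            glBindTexture(GL_TEXTURE_2D, residentTex);
            glTexSubImage2D(GL_TEXTURE_2D, 0, destX, destY,
                            tileSize, tileSize,
                            GL_RGBA, GL_UNSIGNED_BYTE, tileData);
        }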

    The quoted article above also makes the case for procedurally-generated textures in the comments (which is what Quiz does in his demo, if I recall correctly). The problem with that is that most artists are not math whizzes, and while a lot of nature is based on geometry and math (snail shell fractals, symmetrical tree leaves, etc.), getting realistic-looking textures procedurally is difficult to say the least, because there are a lot of factors to take into account. You could easily spend a lot of processing time generating your textures (especially if you want high levels of detail with a good degree of entropy, like human skin or woodgrain) - and if you want to pre-generate and cache them, then you're back to the same problems that hand-drawn textures have; it's just that you had the computer paint them instead of an artist.
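    As a toy example of what "procedural" means here, something like the sketch below generates a crude woodgrain from a sine of distance plus a cheap integer-hash noise (a made-up formula for illustration, not from any shipping game):

        #include <cmath>
        #include <cstdint>
        #include <vector>

        std::vector<uint8_t> makeWoodTexture(int size) {
            std::vector<uint8_t> rgba(size * size * 4);
            for (int y = 0; y < size; ++y) {
                for (int x = 0; x < size; ++x) {
                    double dx = x - size / 2.0, dy = y - size / 2.0;
                    double r = std::sqrt(dx * dx + dy * dy);
                    // Integer hash as noise, so rings aren't perfect circles.
                    uint32_t h = x * 374761393u + y * 668265263u;
                    h = (h ^ (h >> 13)) * 1274126177u;
                    double noise = (h & 0xffff) / 65535.0;
                    double ring = 0.5 + 0.5 * std::sin(r * 0.35 + noise * 2.0);
                    uint8_t* p = &rgba[(y * size + x) * 4];
                    p[0] = uint8_t(120 + 80 * ring);  // banded wood tones
                    p[1] = uint8_t(70 + 50 * ring);
                    p[2] = uint8_t(30 + 20 * ring);
                    p[3] = 255;
                }
            }
            return rgba;  // upload once with glTexImage2D, or do it in a shader
        }

    Getting from that crude banding to convincing skin or woodgrain is exactly where the difficulty lies.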

  • Quizzical Member Legendary Posts: 25,348

    There's no fundamental reason why a game has to choose between procedurally generated textures and hand-made textures exclusively.  You could use some of each, with hand-made textures where you can't make something procedurally generated look decent.  You could also have hybrids, such as having a single hand-made texture that is modified in various ways to produce a bunch of different textures; I'm pretty sure that armor dyeing systems in games typically do something to this effect.

    Indeed, if you do some of each, that means you don't need the CPU performance to do all procedural textures, nor do you need the hard drive performance to do all hand-made textures, as different textures push different hardware.

    The hard part of procedural textures, of course, is finding people who can do them and make them look decent.  It's basically using mathematics as artwork, which is something that almost no trained artists know how to do.  The people who do know the mathematics generally aren't artists, and "can make it work" is a long way from "can make it work and look good".
