Or to put it another way, how much video memory do you really need?
There are three basic ways that video cards use video memory in games. My classification here is arbitrary, but you'll see why momentarily.
1) A game tells a video card to buffer data in video memory and leave it there for a long time, so that the GPU can read from it frequently later. Textures that correspond to "pictures" of what something in the game looks like almost invariably fall into this category. Vertex data for various models probably does as well. This data does not get wiped every frame, and it will tend to stay in video memory for seconds or minutes at a time--or in some cases, until you close the game.
2) A game uses short-term buffers that are written to and then wiped every frame. All games use a framebuffer, and nearly all use a depth buffer. There may also be additional framebuffer objects used for post-processing or a multi-pass rendering system, where the game first renders to a texture, then does something to that texture to produce the final frame. Or there can be several steps of rendering to a texture, using that texture to create another texture, and so forth, before eventually producing the final frame at the last step. The distinction between this and category (1) is that the data gets wiped every frame. (Both patterns are sketched in the code just after this list.)
3) A game uses some data for internal computations by shaders or by fixed-function portions of the pipeline. Unlike the first two cases, a game programmer doesn't specify exactly how much memory is used here; rather, the video drivers get to make the decisions. Data written here is very short-lived, and typically overwritten or discarded within microseconds if not nanoseconds. Any data used in internal computations in a shader is surely discarded when the shader completes execution, for example--and since many shaders are only around 10 lines of code apart from variable declarations, that happens very fast. Some of this data is stored only in GPU cache and never actually makes it to video memory, but the details here will vary wildly from game to game, and even within a given game, from one GPU chip to the next.
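To make categories (1) and (2) concrete, here's a minimal sketch in Python. I'm using the PyOpenGL and glfw bindings purely for illustration--that's my choice of libraries, not how any particular game does it--and category (3) has no equivalent call at all, since the driver manages that memory behind your back.

```python
# Minimal sketch of categories (1) and (2); assumes PyOpenGL and glfw are
# installed (pip install PyOpenGL glfw). Purely illustrative.
import glfw
from OpenGL.GL import *

glfw.init()
glfw.window_hint(glfw.VISIBLE, glfw.FALSE)   # hidden window; we only need a GL context
window = glfw.create_window(640, 480, "vram demo", None, None)
glfw.make_context_current(window)

# Category (1): a texture uploaded once and left resident for a long time.
# Its size depends on the asset (here 1024 x 1024 x 4 bytes = 4 MB),
# not on the monitor resolution.
tex = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D, tex)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 1024, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, None)

# Category (2): a render target that gets written and wiped every frame.
# Its size scales with the output resolution (here ~8 MB at 1920 x 1080).
fbo = glGenFramebuffers(1)
color = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D, color)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1920, 1080, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, None)
glBindFramebuffer(GL_FRAMEBUFFER, fbo)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, color, 0)
```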
So what is the point of classifying video memory usage this way? The point is that different variables influence the amount of video memory used in each category. The amount of data in the first category depends on what game you're playing and what graphical settings you're using. Higher resolution textures will take a lot more space, for example. Reducing the draw distance likely means that data on objects that are further away doesn't need to be buffered, which saves space.
But the video memory usage in the first category does not depend on what video card you're using (except indirectly, in that this may influence your choice of graphical settings), nor does it depend on your monitor resolution. The texture(s) used to draw a character's arm, for example, do not depend on your monitor resolution. How many times the texture is accessed in a given frame (and indeed, whether it is accessed at all) can depend on your monitor resolution, but not how much space the texture itself takes.
The third category also does not depend on your monitor resolution. It may, however, depend on the particular video card that you're using. Higher end cards have many more shaders than lower end cards; indeed, this is one of the things that makes it a higher end card. Putting the extra hardware to good use means executing shaders on more data simultaneously, which means more video memory is needed to store the internal steps of all of the computations going on at once. Getting enough parallelism is fairly trivial, regardless of the monitor resolution, though. Even a tiny 1366 x 768 monitor resolution has more than a million pixels, which likely means more than a million pixel/fragment shader executions. That's more than enough to keep even 2048 shaders in a Radeon HD 7970 or 2880 shaders in the upcoming top-end GK110 card mostly busy.
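To put rough numbers on that--the shader counts are the ones just mentioned, and the rest is simple division:

```python
# Even a small monitor supplies far more fragment-shader invocations per
# frame than there are shader cores to run them on.
pixels = 1366 * 768                      # 1,049,088 pixels
for name, cores in (("Radeon HD 7970", 2048), ("GK110", 2880)):
    print(f"{name}: ~{pixels // cores} fragments per shader core per frame")
```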
But again, you're limited here by how many things the video card is willing to have running at a time. Having to draw around two million pixels is more work than one million pixels, but the video card isn't going to try to have two million shaders executing at once. It will have the same number running simultaneously at either resolution, and merely take longer for two million pixels because many haven't even started by the time the video card would be completely done at a lower resolution.
This does, however, raise the issue that, while a given amount of video memory may be overkill on a lower end card, that doesn't automatically mean that the same amount of video memory is overkill on a higher end card running the same games at the same settings. I don't know how much the video memory usage will vary by GPU here.
That leaves the second category as the only one for which video memory usage depends on the monitor resolution. So the amount of video memory you need as a result of your monitor resolution is at most the amount used in the second category. And this might be less than you think.
Let's suppose that you're using a 1920 x 1080 monitor. That has 2,073,600 pixels, or around 2 million. A framebuffer has four bytes per pixel for 32-bit color depth, which comes to just under 8 MB for the framebuffer. The video card also has a depth buffer, which likewise uses 4 bytes per pixel, which again comes to just under 8 MB for the depth buffer. So at 1920 x 1080, we're under 16 MB for the main buffers. That doesn't sound like a compelling case to get a 2 GB card rather than 1 GB, does it?
Well, there is more to it than that, of course. You don't just need one framebuffer. You need three: the one that you're actively writing to, the one that was most recently completed, and the one that the video card is in the process of sending to the monitor. So that means you're at 24 MB for framebuffers, not 8 MB. You still only need one depth buffer, so adding that means 32 MB in total for the buffers. For quite a few games, that really is all that you'll use in the second category.
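Here's that arithmetic spelled out, as a quick worked example of the numbers above:

```python
# 1920 x 1080, 4 bytes per pixel for color and another 4 for depth.
width, height, bytes_per_pixel = 1920, 1080, 4
buf = width * height * bytes_per_pixel / 2**20       # one buffer, in MB
print(f"one buffer:               {buf:.1f} MB")     # ~7.9 MB
print(f"3 framebuffers + 1 depth: {4 * buf:.1f} MB") # ~31.6 MB
```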
So let's increase the monitor resolution to 2560 x 1600. That puts you at 4,096,000 pixels, and just under 64 MB rather than 32 MB for the main buffers. Throw on a 3-monitor Eyefinity setup and you can triple that to a little under 200 MB. Stereoscopic 3D means you need four framebuffers rather than two, in addition to the one that the video card is in the process of sending to the monitor. But the 120 Hz refresh rate needed for stereoscopic 3D will overwhelm any monitor port at 2560 x 1600, so you're capped at 1920 x 1080. Even with a 3-monitor Eyefinity setup, 8 MB per buffer times 6 buffers (four framebuffers that the programmer controls, one in use to send data to a monitor, and one depth buffer), you're under 150 MB. That's less than you'd use at a higher monitor resolution, so let's ignore it.
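Extending the same arithmetic to these scenarios:

```python
buf_2560 = 2560 * 1600 * 4 / 2**20   # ~15.6 MB per buffer at 2560 x 1600
buf_1080 = 1920 * 1080 * 4 / 2**20   # ~7.9 MB per buffer at 1920 x 1080
print(f"2560x1600, 3 fb + depth:  {4 * buf_2560:.1f} MB")   # ~62.5 MB
print(f"...times 3 for Eyefinity: {12 * buf_2560:.1f} MB")  # ~187.5 MB
print(f"stereo 3D Eyefinity, 6 buffers at 1080p: {18 * buf_1080:.1f} MB")  # ~142.4 MB
```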
Okay, so what about post-processing effects? Here, you may have two buffers instead of one for the active framebuffer, as you have both the one you're reading from and the one you're writing to. Once you're done writing to the next one, you can probably scrap the previous one. So that basically adds an extra framebuffer's worth of data, and now you're at around 250 MB in our Eyefinity setup with three 2560 x 1600 monitors. Maybe you actually do need to keep a few rendered-to textures around simultaneously, which could push it to 300 MB or 400 MB.
There's also SSAA, which greatly increases the effective resolution that you render at. If you're using 4x SSAA, that quadruples the number of pixels in the active framebuffer and depth buffer, though it won't affect the video memory usage of the other, completed framebuffers. But still, with an Eyefinity setup of three 2560 x 1600 monitors at 4x SSAA, you're at a bit under 600 MB used for your framebuffers and depth buffers. And that, of course, is a very outlandish situation that few people will ever see. Drop down to a single 2560 x 1600 monitor, which is still much higher than most gamers use, and you're under 200 MB of video memory usage for framebuffers and a depth buffer, even at 4x SSAA.
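And the worst case, spelled out. The exact buffer count is my reading of the setup described above--a 4x-sized active framebuffer and depth buffer, plus four ordinary-sized framebuffers per monitor--so treat it as a rough sketch rather than an exact accounting:

```python
# 4x SSAA: the active framebuffer and depth buffer are 4x size; the other
# completed framebuffers stay at ordinary size (my guess: four of them).
buf = 2560 * 1600 * 4 / 2**20                # 15.625 MB per ordinary buffer
per_monitor = 4 * buf + 4 * buf + 4 * buf    # 4x color + 4x depth + 4 normal
print(f"one monitor, 4x SSAA: {per_monitor:.1f} MB")      # 187.5 MB
print(f"Eyefinity, 4x SSAA:   {3 * per_monitor:.1f} MB")  # 562.5 MB
```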
Now, you can see why this would be a big issue if you're choosing between a 256 MB card and a 512 MB card at modern resolutions and settings. For that matter, you can see why AMD and Nvidia didn't do SSAA in drivers until recently, when 1 GB cards were common. And why they didn't try to get you to spread a game window across three monitors until recently, either.
But if you're choosing between a 1 GB card and a 2 GB card? Even 2560 x 1600 isn't large enough that 1 GB of video memory is likely to be problematic. If you're deciding between a 2 GB card and a 4 GB card, your monitor resolution barely matters, even if you're using Eyefinity or Nvidia Surround, a 2560 x 1600 monitor, or eventually, a 3840 x 2160 ("4K x 2K") monitor.
That's not to say that you'll never have any use for more than 2 GB of video memory. Buffering textures in video memory can take almost arbitrarily large amounts of space, simply by keeping the textures available for objects that are increasingly far away from you. That provides very mild benefits beyond a certain point, though. Higher resolution textures can offer a real graphical benefit to users, but that creates the question of where those higher resolution textures will come from--or more pointedly, how much space you want a game installation to use, and how long you want it to take to download the game.
But one should be realistic about the benefits of enormous amounts of video memory. Your monitor resolution, no matter what it is, does not justify thinking that 2 GB of video memory won't be enough, nor will it in the foreseeable future. Period.