Looking back at older engines and cards.

The user and all related content has been deleted.

Comments

  • CleffyCleffy Member RarePosts: 6,412
    It's not really the engines' problem, but how they are used.  The sad thing is it's been 6 years, yet no developer is uttering the word 64-bit.  Adoption of new technology usually takes developers longer.
  • QuizzicalQuizzical Member LegendaryPosts: 25,347
    How do you know that the engines are good?  Don't judge engine quality by a demo video.  Remember when Intel showed off how powerful their Ivy Bridge graphics were by having an executive stand there pretending to play a game on it--until the video controls inconveniently popped up to prove that it was a pre-recorded video of a game, not an actual game being rendered by the Intel graphics?
  • QuizzicalQuizzical Member LegendaryPosts: 25,347
    Originally posted by Cleffy
    It's not really the engines' problem, but how they are used.  The sad thing is it's been 6 years, yet no developer is uttering the word 64-bit.  Adoption of new technology usually takes developers longer.

    So 64-bit lets you address more than 2 GB of system memory.  What are you going to do with that extra memory?  Prefetch assets from storage and cache them in system memory in case they might be needed later?
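
    One concrete use for that headroom is exactly the prefetch idea: pull asset files into an in-memory cache so later loads come out of RAM instead of storage.  A minimal sketch of my own (the AssetCache name and layout are illustrative, not from the thread):

        // A minimal sketch of a prefetch cache that only pays off once a
        // process can address well beyond the ~2 GB a 32-bit build gets.
        #include <cstdint>
        #include <fstream>
        #include <iterator>
        #include <string>
        #include <unordered_map>
        #include <vector>

        class AssetCache {
        public:
            // Load a file's raw bytes into system memory ahead of time.
            void prefetch(const std::string& path) {
                if (blobs_.count(path)) return;  // already cached
                std::ifstream in(path, std::ios::binary);
                blobs_[path].assign(std::istreambuf_iterator<char>(in),
                                    std::istreambuf_iterator<char>());
            }

            // Hand back the cached bytes (empty if the prefetch failed or was skipped).
            const std::vector<std::uint8_t>& get(const std::string& path) {
                return blobs_[path];
            }

        private:
            std::unordered_map<std::string, std::vector<std::uint8_t>> blobs_;
        };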

  • CleffyCleffy Member RarePosts: 6,412
    Exactly that.  Games like Skyrim are teetering at the 32-bit memory cap even with the workarounds.  There is also the benefit of it processing twice as fast, which is important for CPU-based physics and games where the processor is a bottleneck.
  • craftseekercraftseeker Member RarePosts: 1,740

    Older graphics cards?

    What about: SVGA, VGA, EGA, CGA?

    The standard IBM CGA graphics card was equipped with 16 kilobytes of video memory, and could be connected either to an NTSC-compatible monitor or television via an RCA connector for composite video, or to a dedicated 4-bit "RGBI" interface CRT monitor, such as the IBM 5153 color display.

    Built around the Motorola MC6845 display controller, the CGA card featured several graphics and text modes. The highest display resolution of any mode was 640×200, and the highest color depth supported was 4-bit (16 colors).

    EGA boosted this to 640×350 and allowed 64 colors, but only 16 at a time.
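
    For a rough sense of why 16 KB was the ceiling there, the framebuffer arithmetic for those modes works out as below (a back-of-the-envelope sketch of my own, not from the original post):

        #include <cstdio>

        int main() {
            // CGA graphics modes vs. its 16 KB (16,384-byte) framebuffer.
            const int cga_hi = 640 * 200 * 1 / 8;  // 1 bit/pixel (2 colors)  = 16,000 bytes
            const int cga_lo = 320 * 200 * 2 / 8;  // 2 bits/pixel (4 colors) = 16,000 bytes
            const int ega_hi = 640 * 350 * 4 / 8;  // EGA: 4 bits/pixel (16 colors) = 112,000 bytes
            std::printf("CGA 640x200x1bpp: %d bytes\n", cga_hi);
            std::printf("CGA 320x200x2bpp: %d bytes\n", cga_lo);
            std::printf("EGA 640x350x4bpp: %d bytes (needs far more video RAM)\n", ega_hi);
        }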


  • QuizzicalQuizzical Member LegendaryPosts: 25,347
    Originally posted by Cleffy
    Exactly that.  Games like Skyrim are teetering at the 32-bit memory cap even with the workarounds.  There is also the benefit of it processing twice as fast, which is important for CPU-based physics and games where the processor is a bottleneck.

    A 64-bit program processes twice as fast as 32-bit?  Since when?  Any data that you pass to a video card will immediately get truncated to 32-bit unless you insist on doing 64-bit computations on the video card, too.  In that case, only Radeon HD 5000 and GeForce 400 series and later cards will even run the game at all, and most will run the 64-bit computations at anywhere from 1/24 to 1/16 of the speed of the same operations at 32-bit precision.

    Doing 64-bit computations on the CPU at best gets you maybe an extra bit of precision or so, unless you're doing something that is numerically unstable or has a ton of steps with a slight amount of rounding error at each step.  Even if you need two surfaces to fit together on the screen exactly, this typically doesn't matter.  And for more typical computations where it wouldn't matter if something was shifted by 1/100 of a pixel, you don't really even need the full 32 bits of precision CPU-side--though you use 32-bit computations anyway because 16-bit sometimes wouldn't be enough.
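
    To put a number on the many-steps-with-small-rounding-error case, here is a small illustration of my own: accumulating a tiny increment ten million times in float and in double.

        #include <cstdio>

        int main() {
            // Add 0.001 ten million times; the exact answer is 10,000.
            float  f = 0.0f;
            double d = 0.0;
            for (int i = 0; i < 10'000'000; ++i) {
                f += 0.001f;
                d += 0.001;
            }
            // The float sum drifts visibly after that many additions;
            // the double sum stays very close to 10,000.
            std::printf("float : %.3f\n", f);
            std::printf("double: %.3f\n", d);
        }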

    If the problem is that you're overwhelming the CPU with physics computations, then at an absolute minimum, you'd better be pushing as many CPU cores as the system has, as physics computations almost trivially scale to as many cores as you've got.
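
    As a rough sketch of what pushing every CPU core looks like (the names and structure here are my own, not any engine's actual code), splitting an integration step over hardware threads is only a few lines:

        #include <algorithm>
        #include <cstddef>
        #include <thread>
        #include <vector>

        struct Body { float x, y, z, vx, vy, vz; };

        // Integrate one physics step, splitting the bodies evenly across however
        // many hardware threads the machine reports.  (Illustrative only; a real
        // engine would reuse a thread pool instead of spawning threads each frame.)
        void integrate(std::vector<Body>& bodies, float dt) {
            const unsigned n = std::max(1u, std::thread::hardware_concurrency());
            const std::size_t chunk = (bodies.size() + n - 1) / n;
            std::vector<std::thread> workers;
            for (unsigned t = 0; t < n; ++t) {
                const std::size_t begin = t * chunk;
                const std::size_t end   = std::min(bodies.size(), begin + chunk);
                if (begin >= end) break;
                workers.emplace_back([&bodies, begin, end, dt] {
                    for (std::size_t i = begin; i < end; ++i) {
                        bodies[i].x += bodies[i].vx * dt;  // each body is independent,
                        bodies[i].y += bodies[i].vy * dt;  // so the loop parallelizes
                        bodies[i].z += bodies[i].vz * dt;  // trivially across cores
                    }
                });
            }
            for (auto& w : workers) w.join();
        }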

    If that's not good enough, then you can probably offload some of it to the GPU.  Particle effects aren't that hard to mostly offload to the GPU if you've got geometry shaders.  That limits you to GeForce 8000 and Radeon HD 2000 series cards or later, but if you're doing something complex enough to overwhelm a decent CPU with physics computations anyway, then you tell the last few people still using a Radeon X1800 XT or GeForce 7800 GTX or whatever that it's time to upgrade, as they probably wouldn't have the latest and greatest CPU, anyway.

  • CleffyCleffy Member RarePosts: 6,412
    64-bit calculations have been used in games since the PS2.  The technology has matured enough since then to make it usable in PC games.  Precision isn't really necessary in games past 16-bit.  It's not like you need more than 32k characters to define any aspect of the game.  However, there are tricks you can use to calculate 64 bits of information at a time, reducing the calculations by a line or two of code.
  • GravargGravarg Member UncommonPosts: 3,424
    I'm done with consoles altogether.  I've finally figured out that within about 4 months of buying the newest console, you could've spent less and bought a much superior graphics card for a desktop.
  • QuizzicalQuizzical Member LegendaryPosts: 25,347
    Originally posted by Cleffy
    64-bit calculations have been used in games since the PS2.  The technology has matured enough since then to make it usable in PC games.  Precision isn't really necessary in games past 16-bit.  It's not like you need more than 32k characters to define any aspect of the game.  However, there are tricks you can use to calculate 64 bits of information at a time, reducing the calculations by a line or two of code.

    I'm not saying that you never use 64-bit computations.  I am saying that having a 64-bit program isn't going to uniformly double your CPU performance.

    The number of lines of code is only loosely correlated to how fast a program will run, so I'm not sure what that has to do with anything.

    If you try to do 3D graphics with only 16-bit precision, though, you'd get some pretty bad graphical artifacting due to rounding errors.

  • The user and all related content has been deleted.
  • DihoruDihoru Member Posts: 2,731
    Originally posted by Etherouge
    Originally posted by Gravarg
    I'm done with consoles altogether.  I've finally figured out that within about 4 months of buying the newest console, you could've spent less and bought a much superior graphics card for a desktop.

    Sure, but our PCs usually play games with the graphical standards of the latest consoles.

    I don't know how potent the next gen is, but I wasn't really impressed with the PS4 conference.

    Apparently the PS4 will be 16 times more powerful than the PS3.  That may sound like a lot, but it's roughly a mid-range AMD Dual Graphics PC these days... with a single dedicated graphics card.  If you build a rig with an Ivy Bridge i5, 8 GB of DDR3 RAM, and an AMD 6000-series GPU, you're going to demolish a PS4 in raw performance.  And if you think firmware and optimization can make up for the gaps in raw performance... yeah... you haven't met the guys overclocking their mineral-oil-cooled PCs to the limit, or the more rational sort who tweak their rigs gradually to squeeze more performance out of them.  (For Christ's sake, I had a friend playing World of Tanks on a 2006-era GPU that did not support Pixel Shader 3.0.  For anyone who doesn't know, World of Tanks natively requires 3.0 and a lot more performance than that friend's rig had, yet he still had no issues playing the game--and in fast tanks to boot; his Hellcat was something to be amazed at.)

  • QuizzicalQuizzical Member LegendaryPosts: 25,347
    Originally posted by Dihoru
    Originally posted by Etherouge
    Originally posted by Gravarg
    I'm done with consoles altogether.  I've finally figured out that within about 4 months of buying the newest console, you could've spent less and bought a much superior graphics card for a desktop.

    Sure, but our PCs usually play games with the graphical standards of the latest consoles.

    I don't know how potent the next gen is, but I wasn't really impressed with the PS4 conference.

    Apparently the PS4 will be 16 times more powerful than the PS3.  That may sound like a lot, but it's roughly a mid-range AMD Dual Graphics PC these days... with a single dedicated graphics card.  If you build a rig with an Ivy Bridge i5, 8 GB of DDR3 RAM, and an AMD 6000-series GPU, you're going to demolish a PS4 in raw performance.  And if you think firmware and optimization can make up for the gaps in raw performance... yeah... you haven't met the guys overclocking their mineral-oil-cooled PCs to the limit, or the more rational sort who tweak their rigs gradually to squeeze more performance out of them.  (For Christ's sake, I had a friend playing World of Tanks on a 2006-era GPU that did not support Pixel Shader 3.0.  For anyone who doesn't know, World of Tanks natively requires 3.0 and a lot more performance than that friend's rig had, yet he still had no issues playing the game--and in fast tanks to boot; his Hellcat was something to be amazed at.)

    Yes, a desktop Core i5 Ivy Bridge quad core is faster than the CPU in the PS4.  But how much faster?  Maybe 50% faster?  It's not a huge chasm.  For the video card, if you want a Radeon HD 6000 series card that is faster than the GPU in the PS4, then you'd better make it a 6970, because anything else would be slower.  Even then, the PS4 will allow extremely high resolution textures that a PC won't for lack of video memory.

    -----

    In other news, shader model 3.0 basically means DirectX 9.0c.  In 2006, the competition was the GeForce 7000 series and the Radeon X1000 series, both of which supported that.  For that matter, the GeForce 6000 series did also.  The Radeon X000 series (what am I supposed to call X300, X600, X800, etc?) didn't quite fully support it due to a blunder by ATI, but it came pretty close.

  • RidelynnRidelynn Member EpicPosts: 7,383

    16 bits gives you about 3 decimal places of precision - +/-0.001 give or take, which isn't quite enough for most people - about one millimeter of precision if we were talking metric lengths.

    32 bits gives you about 7 decimal places of precision - +/- 0.0000001 - better than 1/10th of a micron in terms of metric length. This is good enough for most purposes.

    64 bits takes you out to about 16 digits of decimal precision - +/- 0.0000000000000001 - pretty damned accurate: about 1/10th of a femtometer, or 0.0001 picometers.

    If all you have are 16 bits, and you need better than 3 digits of precision, you have to split the calculation up into several pieces to maintain precision. That takes a lot of extra clock cycles and instructions.
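
    A concrete illustration of that splitting, using integers for simplicity (my own sketch, not from the post): adding two 32-bit values with only 16-bit operations takes two adds plus carry handling instead of one instruction.

        #include <cstdint>
        #include <cstdio>

        // Add two 32-bit values the way a 16-bit CPU has to:
        // low halves first, then the high halves plus the carry.
        std::uint32_t add32_with_16bit_ops(std::uint32_t a, std::uint32_t b) {
            std::uint16_t a_lo = a & 0xFFFF, a_hi = a >> 16;
            std::uint16_t b_lo = b & 0xFFFF, b_hi = b >> 16;

            std::uint32_t lo    = std::uint32_t(a_lo) + b_lo;        // 16-bit add
            std::uint16_t carry = std::uint16_t(lo >> 16);           // did it overflow?
            std::uint16_t hi    = std::uint16_t(a_hi + b_hi + carry); // 16-bit add + carry

            return (std::uint32_t(hi) << 16) | std::uint16_t(lo);
        }

        int main() {
            std::printf("%u\n", add32_with_16bit_ops(70000u, 70001u));  // prints 140001
        }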

    That's why we saw a pretty good speed increase going from 16-bit to 32-bit, and why the 80386 was such a big leap in performance for computers. But if you don't need that extra step in precision, then going the next mile and bumping to 64-bit doesn't gain you anything more in terms of speed; you already had the precision you needed. Sure, there are a few things that can greatly benefit from it: cryptography, compression, HPC, etc. Those things will see a decent speed increase going to 64-bit (and benchmarks show that). But gaming, largely, doesn't need more than 32-bit precision to look good, at least at the resolutions we are dealing with (maybe once we get into really, really high resolutions we'll want more precision, because there is a bigger canvas for the surfaces to work with).

    So for gaming, 64-bit mostly just lets us address more memory.
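
    Those digit counts line up with the mantissa widths (significant decimal digits ≈ mantissa bits × log10 2); here is a quick check of my own, with the 16-bit half-float row hard-coded since half isn't a built-in C++ type:

        #include <cmath>
        #include <cstdio>
        #include <limits>

        int main() {
            // Significant decimal digits ≈ mantissa bits × log10(2).
            const double half_digits   = 11 * std::log10(2.0);  // IEEE half: 11-bit significand
            const double float_digits  = std::numeric_limits<float>::digits  * std::log10(2.0);
            const double double_digits = std::numeric_limits<double>::digits * std::log10(2.0);
            std::printf("16-bit half  : ~%.1f digits\n", half_digits);    // ~3.3
            std::printf("32-bit float : ~%.1f digits\n", float_digits);   // ~7.2
            std::printf("64-bit double: ~%.1f digits\n", double_digits);  // ~16.0
        }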

  • QuizzicalQuizzical Member LegendaryPosts: 25,347

    For what it's worth, in the game I've been working on, the only places I use 64-bit data types are for timers (32-bit precision on a nanosecond timer would reset about every 4 seconds) and as outputs of built-in math functions, most notably sin, cos, and sqrt.  I immediately round the latter to 32-bit precision before doing anything with it.
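
    On the timer point: 2^32 nanoseconds is about 4.29 seconds, so a 32-bit counter wraps almost immediately, while a 64-bit one is good for centuries.  A minimal sketch of my own using std::chrono:

        #include <chrono>
        #include <cstdint>
        #include <cstdio>

        int main() {
            using clock = std::chrono::steady_clock;
            const auto start = clock::now();

            // ... game work would happen here ...

            const std::int64_t ns64 =
                std::chrono::duration_cast<std::chrono::nanoseconds>(clock::now() - start).count();
            const std::uint32_t ns32 = static_cast<std::uint32_t>(ns64);  // wraps every 2^32 ns

            // 2^32 ns = 4,294,967,296 ns ≈ 4.29 s, so the 32-bit copy resets about every 4 seconds.
            std::printf("64-bit timer: %lld ns, 32-bit timer: %u ns\n",
                        static_cast<long long>(ns64), ns32);
        }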

    The only places I've had rounding problems are the depth buffer, collision detection with multiple overlapping objects, and getting a non-real number out of Heron's formula.  The last of those is the problem that the area of a triangle whose vertices are collinear should be zero, but floating-point rounding errors mean you end up taking the square root of a number that often isn't quite zero--and could as easily be negative as positive.  Replacing 32-bit precision with 64-bit wouldn't make a bit of difference here; nor would 256-bit precision.
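
    The usual workaround for that Heron's-formula case is simply to clamp the expression under the square root at zero before taking the root, since any negative value there is pure rounding noise.  A sketch of my own:

        #include <algorithm>
        #include <cmath>

        // Area of a triangle from its side lengths via Heron's formula.  For a
        // degenerate (collinear) triangle the term under the root should be exactly
        // zero, but rounding can push it slightly negative, so clamp before sqrt.
        float triangleArea(float a, float b, float c) {
            const float s = 0.5f * (a + b + c);               // semi-perimeter
            const float t = s * (s - a) * (s - b) * (s - c);  // can round below zero
            return std::sqrt(std::max(t, 0.0f));
        }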

    Collision detection with multiple overlapping objects is a problem because rounding errors mean that two mathematically equivalent formulas for a line can round to slightly different lines.  For technical reasons, I have to move one back so that the one I want to be in front is actually in front.  The same problem would arise with 64-bit precision as with 32-bit, and 32 bits is plenty of precision to move a line a bit and have it work (0.0002 seems to be enough; 0.0001 isn't with 32-bit precision, but being off by 0.2 mm is fine for collision detection--most games have places that are off by at least 100 times that).

    The depth buffer is the tricky one, and 64-bit precision actually would help.  But that's done on the video card, so 64-bit precision is a complete non-starter for reasons of performance.  A much cleaner fix would be to have an option to do clipping in RP^2 x R rather than RP^3.  That would be slower than what is done now, which is why it isn't already done that way, but it would sure be a lot faster than doing 64-bit computations everywhere.  As it stands, 32-bit more or less works, but it does mean that a 3D perspective forces you to choose between either some minor clipping issues on faraway objects due to depth buffer rounding errors or some minor clipping issues up close from the near clipping plane.

  • The user and all related content has been deleted.
  • QuizzicalQuizzical Member LegendaryPosts: 25,347
    Originally posted by Etherouge
    Will we see amazing graphics this coming year, Quizzical?

    I expect that the PlayStation 4 will enable some pretty impressive graphics--and furthermore, some PS4 games will still look decent a decade from now.  (There will also be some PS4 games that look rather bad today, but it's easy to make a game that looks bad, no matter what hardware it runs on.)

    There are two important things to keep in mind.  One is that hardware isn't improving as fast as it used to.  When the limiting factor was the number of transistors, you could double performance about every two years.  Now that the limiting factor is power consumption (except in budget desktops), you can only double performance about every four years.

    Furthermore, there are diminishing returns to ever-increasing performance.  At some point, it's good enough, and doubling performance isn't terribly important.  The PS4 will have plenty of power to do pretty extensive tessellation, to the degree that the only reasons for anything in a PS4 game to look blocky will be that the game programmers goofed or that they wanted it to look blocky.  It will have enough shader power to do all of the basic lighting effects that everyone expects, and also some fairly advanced stuff.

    -----

    What's left for future graphical improvements that the PS4 won't have plenty of power to do easily?  Shadows are still fairly intractable, to the degree that shadows in games are invariably fake.  If you look closely, you'll find a ton of cases where shadows are clearly wrong.  Unless the geometry of a game is very simple, the best that games can do for shadows is to say: we'll draw something that is definitely a shadow, is relatively fast to compute, and looks like it might plausibly even be a correct shadow if you don't look closely.

    Transparency is still problematic, though that may be waiting more on API advancements than more powerful hardware.  I doubt that the graphics API that the PS4 uses will be more advanced than the latest versions of Direct3D and OpenGL.  Indeed, it will probably be heavily based on OpenGL--and Direct3D isn't really that different from OpenGL, either.

    There are also limits to how many things you can realistically draw.  If you can draw 20 characters on the screen at a time with a given image quality on a PS4, then double the hardware power and maybe you can draw 40 characters at a time and terrain that is twice as detailed.  But do you really want to draw 40 characters on the screen at once?  For many games, the answer to that is "no", so there aren't any real benefits to having that option.

    There is also advanced physics, which can eat up a ton of processing power.  Rather than having hair fixed in place as if it's made of stone, or cycling through a few frames of animation without regard to what is going on around it, you could make hair dangle freely and blow in the wind.  You could do the same for characters' clothing.  Depending on how detailed you want to make your physics, you could use a ton of CPU and/or GPU power here.
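
    For a sense of what that costs, even a bare-bones hair strand is an integration pass plus a constraint pass per particle per frame, and a real head of hair multiplies that by thousands of strands.  A rough Verlet-style sketch of my own (not any engine's actual code):

        #include <cmath>
        #include <cstddef>
        #include <vector>

        struct Particle { float x, y, z, px, py, pz; };  // current and previous position

        // One simulation step for a hair strand modeled as a chain of particles:
        // Verlet integration under gravity, then pull neighbors back toward a fixed
        // segment length.  (A real strand would also pin the root particle to the scalp.)
        void stepStrand(std::vector<Particle>& strand, float dt, float segment) {
            for (auto& p : strand) {                                // integrate
                const float nx = 2 * p.x - p.px;
                const float ny = 2 * p.y - p.py - 9.8f * dt * dt;   // gravity
                const float nz = 2 * p.z - p.pz;
                p.px = p.x; p.py = p.y; p.pz = p.z;
                p.x = nx;  p.y = ny;  p.z = nz;
            }
            for (std::size_t i = 1; i < strand.size(); ++i) {       // enforce segment length
                Particle& a = strand[i - 1];
                Particle& b = strand[i];
                const float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
                const float len = std::sqrt(dx * dx + dy * dy + dz * dz);
                if (len > 0.0f) {
                    const float k = 0.5f * (len - segment) / len;
                    a.x += dx * k; a.y += dy * k; a.z += dz * k;
                    b.x -= dx * k; b.y -= dy * k; b.z -= dz * k;
                }
            }
        }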

    -----

    But let's not forget that even if hardware is plenty powerful enough to do something, that doesn't mean that programmers will be able to do it.  For example, that's already what's holding back tessellation.  Even budget cards like a Radeon HD 6670 or GeForce GT 430 have plenty of power to use tessellation pretty extensively.  That few games use it at all, and even fewer use it sensibly, is because very few game programmers know how to do it.

    If you make more powerful hardware that is capable of doing a bunch of new things that game programmers don't know how to do, how much benefit is there in that?

  • RidelynnRidelynn Member EpicPosts: 7,383

    I think we've hit diminishing returns on graphics.

    Sure, they will get better. But it will be in small steps, and with great computational expense.

    Take a look at screen shots from "modern" DX9.0c games - they look great as stills. Something like a shot from Mass Effect 3 - those characters look very lifelike. Could they be better? Sure, but not by a lot.

    When the picture is moving, we can still see a lot of problems: textures that don't quite stretch properly, clipping issues, clumsy animations, shadows that aren't quite right, etc.

    Are they insurmountable problems? No, not at all. We see plenty of computer-generated animation that looks very lifelike (Life of Pi, for instance, just won an Oscar for it). We just can't quite do them all in real-time yet. But it will come.

    Do the graphics used in Hollywood-type effects look a whole lot better than those used in our AAA video games? I would say debatable. If you look at still screenshots, I would say hardly at all, to be honest.

    So are there graphical improvements coming? Sure, but it won't be huge leaps. My jaw probably won't drop like it did the first time I saw Quake run on a 3DFX Voodoo 1 card. There have been other nice jumps forward (DX9 was a good one), but I don't think we'll see anything huge; as we get closer and closer to photorealistic quality, the gains will all be in the subtle details. And a lot of that doesn't come down to just graphics - physics plays a huge role in animating things properly, and we are still seeing pretty decent pushes in physics engines.

    I am not saying that just because photorealism is the goal, all artistic decisions need to be in that vein; but the goal has always been to make graphics "realistic" - even cartoony graphics benefit from realistic-looking shadows, high-quality water rendering, and hair that actually looks and acts like hair, just for some examples.

  • The user and all related content has been deleted.