How often does a 60 Hz monitor refresh?

Quizzical Member LegendaryPosts: 25,347

The obvious answer is, 60 times per second.  That's what 60 Hz means, after all.  (Hz is the unit of inverse seconds.)  That, interestingly enough, seems to be the wrong answer.

To give some background, I've been programming a game, and thought it would be nice to keep a steady frame rate.  If your monitor refreshes every 1/60 of a second, then you'd ideally like to grab the state of the game and use it to render a frame every 1/60 of a second, and have that be what the monitor uses for the new frame.  That will give you very smooth animations.

There are a variety of problems here, though.  I was originally worried about screen tearing.  So I tried some artificial examples to try to create screen tearing, and couldn't do it.  I concluded that some sort of vertical sync is implicitly implemented in what I'm using (JOGL+Windows 7+Radeon HD 5850), most likely in video drivers.  So far, so good.

Knowing that, it would be ideal to start a frame so that it finishes just before the monitor grabs a frame.  If every frame finishes a millisecond before the monitor grabs a frame rather than a millisecond after, then that reduces your display latency by about 15 ms.

The next issue is that you can't control exactly how long it takes to render a frame.  How long it takes depends on what graphical settings are in use, how much needs to be drawn in that frame, whether textures are being generated at the same time, what other (non-game) processes are running in the background, how Windows decides to schedule threads, when Java decides to run garbage collection, and a variety of other things.

Ideally, you'd like to start a frame exactly as often as the monitor grabs a frame, and then always have the frame you start finish by the time the monitor grabs a frame.  Obviously, on slow hardware, you just have to draw as fast as you can and it's done whenever it's done.  But on faster hardware, if you can get 300 frames per second, it should be possible to be a little more precise.  If the time to draw a frame varies by a few milliseconds on fast hardware (let's say 99% of frames within 2 ms of the median), then you could move the start time up so that nearly all frames finish within several milliseconds before the monitor grabs a new frame.

What you really don't want to do is to have the average frame finish just as the monitor grabs a new frame, so that about half of the frames are done before it and half after.  If one frame finishes slightly after the monitor grabs a new frame, and the next slightly before, then the first frame won't display at all.  Instead, the frame before it just displays twice consecutively.  One worst case scenario is that it leads to a steady 30 frames per second rather than 60, as every other frame gets skipped.  Another worst case scenario is that the frames finish late-late-early, and then that cycle repeats forever.  That gets you 40 frames per second, but with stuttering: one frame is displayed for a full 1/30 of a second before the next frame, which shows the state of the game world as of only 1/60 of a second later, is displayed.  Then that frame is only displayed for 1/60 of a second before it is replaced by a new frame that shows the state of the world 1/30 of a second later.

One way to avoid that is to just let a fast card render as fast as it can.  That will definitely get you a new frame displayed every 1/60 of a second with only rare exceptions, and each new frame displayed will usually show the state of the game world about 13-20 ms after the last, though it will vary substantially.  But that seems less than ideal, as it would be nice for each new frame to show the same amount of time passing in the game world.  It also seems like an undue strain on hardware.

To do better than that requires knowing when the video card will grab a frame.  I speculated that what it does is, when it's time to display a new frame, it copies the most recently completed framebuffer elsewhere to feed to the monitor before continuing to render the frame currently being worked on.  Copying that completed framebuffer should take time, and make the frame take a little longer than most to render.

For long frames, that could amount to a rounding error.  But for very fast frames, it should be detectable.  So let's create a program that renders frames very fast:  a solid red screen one frame, then solid blue the next, and cycle back and forth each frame.  Then we can ask how long it took to render each frame, and see if there are outliers.
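Here is a minimal sketch of that test, assuming JOGL 2's javax.media.opengl package (listener method names vary a bit between JOGL versions, and the class name FlipTimer is made up for the example).  It clears the window to alternating solid colors as fast as the animator drives it and records System.nanoTime() at the end of every display() call:

    import javax.media.opengl.GL;
    import javax.media.opengl.GLAutoDrawable;
    import javax.media.opengl.GLEventListener;

    // Renders alternating solid red and blue frames as fast as possible and
    // records System.nanoTime() at the end of every display() call, so the
    // timestamps can be analyzed afterward for unusually long frames.
    public class FlipTimer implements GLEventListener {
        private final long[] frameEnd = new long[100000]; // completion times, in ns
        private int count = 0;
        private boolean red = true;

        public void init(GLAutoDrawable drawable) { }

        public void display(GLAutoDrawable drawable) {
            GL gl = drawable.getGL();
            if (red) {
                gl.glClearColor(1f, 0f, 0f, 1f);          // solid red frame
            } else {
                gl.glClearColor(0f, 0f, 1f, 1f);          // solid blue frame
            }
            gl.glClear(GL.GL_COLOR_BUFFER_BIT);
            red = !red;                                   // switch colors every frame
            if (count < frameEnd.length) {
                frameEnd[count++] = System.nanoTime();    // when this frame finished
            }
        }

        public void reshape(GLAutoDrawable drawable, int x, int y, int w, int h) { }

        public void dispose(GLAutoDrawable drawable) { }

        public long[] timestamps() {
            return java.util.Arrays.copyOf(frameEnd, count);
        }
    }

Attach it to a GLCanvas driven by an Animator, or call display() in a loop, and let it run for a couple of minutes.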

A little trial and error found that most frames took about 200 microseconds.  If I flagged frames that took more than 400 microseconds and recorded the time at which they finished, there was one such frame every 16-17 milliseconds, plus a handful of other frames that sporadically took longer than expected.  Bingo.  That's exactly what I was looking for, and would expect if copying the framebuffer to send it to the monitor did make a frame take longer.  (Actually sending the frame to the monitor will take several milliseconds; I think it just copies the frame that it is going to send somewhere else on the video card so that it won't be overwritten while it is sending it.)

Discarding the other frames that took longer was easy enough:  check the time that a frame finished as compared to the frame before it and the frame after.  If either of the time gaps differs from the time to draw a frame by more than 100 microseconds, then discard it.  This will actually discard about 2/3 of the data output, but that still leaves plenty--and with good certainty that what remains really does correspond to when the monitor draws a new frame.
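In code, the flagging and the neighbor check come out to something like the sketch below (a sketch rather than the exact code: a long frame is kept only when the frames on either side of it took a normal amount of time, with the 400 and 100 microsecond thresholds from above, and typicalFrameNs being the roughly 200 microseconds an ordinary junk frame takes):

    import java.util.ArrayList;
    import java.util.List;

    public class RefreshFinder {
        static final long LONG_FRAME_NS = 400000L;          // flag frames that took more than 400 us
        static final long NEIGHBOR_TOLERANCE_NS = 100000L;  // neighbors must be within 100 us of normal

        /** frameEnd holds System.nanoTime() taken at the completion of each junk frame. */
        static List<Long> refreshTimes(long[] frameEnd, long typicalFrameNs) {
            List<Long> kept = new ArrayList<Long>();
            for (int i = 2; i < frameEnd.length - 1; i++) {
                long duration = frameEnd[i] - frameEnd[i - 1];
                if (duration <= LONG_FRAME_NS) {
                    continue;                                // an ordinary fast frame; ignore it
                }
                long before = frameEnd[i - 1] - frameEnd[i - 2]; // frame just before the long one
                long after = frameEnd[i + 1] - frameEnd[i];      // frame just after the long one
                if (Math.abs(before - typicalFrameNs) <= NEIGHBOR_TOLERANCE_NS
                        && Math.abs(after - typicalFrameNs) <= NEIGHBOR_TOLERANCE_NS) {
                    kept.add(frameEnd[i]);  // isolated spike: probably the framebuffer copy
                }
            }
            return kept;
        }
    }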

We can then take the times at which we know the monitor grabbed a frame, mod out by the nominal time between refreshes, and see whether it's grabbing a frame at, say, 1:35:49.0000, then 1:35:49.0167, then 1:35:49.0333, and so forth, or at those times plus two milliseconds, or whatever.  Let the program run for a couple of minutes to generate thousands of data points and we can average them to nail it down.

But then something unexpected happened:  the time shift drifted across the data.  Early data points might be 8 ms past the mark, then they slowly drift down to 7 ms, then 6 ms, and so forth.  Changing the assumed interval between refreshes was able to correct the drift.  And furthermore, it was able to pin down how long that interval actually is.  In my first data set, it came to 16661408 ns per frame.  A perfect 60 Hz would be about 16666667 ns per frame.
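I adjusted the assumed period by hand until the drift vanished, but the same thing can be automated:  assign each kept timestamp a refresh index by rounding the gaps to whole refreshes, then fit a straight line through (index, time); the slope of that line is the average refresh period.  A sketch of that approach (not the exact code I ran), feeding it the timestamps kept by the filter above:

    import java.util.List;

    public class PeriodFit {
        /**
         * Estimates the refresh period in nanoseconds from the kept timestamps.
         * nominalNs is a rough starting guess, e.g. 16666667L for 60 Hz.
         */
        static double estimatePeriod(List<Long> times, long nominalNs) {
            int n = times.size();
            long t0 = times.get(0);
            long[] index = new long[n];  // how many refreshes have passed since the first timestamp
            for (int i = 1; i < n; i++) {
                long gap = times.get(i) - times.get(i - 1);
                // A gap can span several refreshes if intervening ones were filtered out.
                index[i] = index[i - 1] + Math.round((double) gap / nominalNs);
            }
            // Ordinary least squares of time against index; the slope is the period.
            double meanX = 0.0, meanY = 0.0;
            for (int i = 0; i < n; i++) {
                meanX += index[i];
                meanY += times.get(i) - t0;
            }
            meanX /= n;
            meanY /= n;
            double sxy = 0.0, sxx = 0.0;
            for (int i = 0; i < n; i++) {
                double dx = index[i] - meanX;
                sxy += dx * ((times.get(i) - t0) - meanY);
                sxx += dx * dx;
            }
            return sxy / sxx;   // nanoseconds per refresh
        }
    }

Running it once with the rough guess and once more with its own output guards against misassigned indices if the initial guess is well off.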

There's also the question of just how precise that 16661408 ns figure is.  Java's System.nanoTime() can only report times with about 300-400 ns precision.  (The precision actually varies by hardware, but that's what it is on my computer.  I've tested it.)  Averaging a large number of measurements should make it possible to get better precision, and I had data from thousands of frames.  Rather than trying to compute how good the data ought to be theoretically, I decided to just take another independent data set, run the same computations, and see how close it was to the first number.
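(As an aside, that granularity figure is easy to check with a few lines of throwaway code that watch how System.nanoTime() steps forward:)

    public class NanoTimeGranularity {
        public static void main(String[] args) {
            long smallest = Long.MAX_VALUE;
            long last = System.nanoTime();
            // Poll the clock a million times and report the smallest nonzero step observed.
            for (int i = 0; i < 1000000; i++) {
                long now = System.nanoTime();
                if (now != last) {
                    smallest = Math.min(smallest, now - last);
                    last = now;
                }
            }
            System.out.println("Smallest observed nanoTime step: " + smallest + " ns");
        }
    }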

So I did.  16661408 ns.  Again.  Another data set?  This one came to 16661406 ns.  A fourth also gave me 16661406 ns.  Those four numbers are all rounded to the nearest nanosecond, as trying to put a decimal point on it would be dubious.  But that's pretty precise, and I've pinned down the average time per frame to within a few nanoseconds.  (The time that a given frame takes to display probably varies by a lot more than a few nanoseconds, so I'm really only getting the average.)

But 16661408 ns definitely isn't the 16666667 ns of a perfect 60 frames per second.  In fact, if I've got it to within a few nanoseconds like I think I do, being off from the theoretical expectation by several thousand nanoseconds is pretty conclusive evidence that the theoretical expectation is measurably wrong.

Now, it's only off by about 0.03%.  But as keeping time goes, being off by 0.03% is horrible.  If a clock is off by 0.03%, that's more than two hours per year.  So I thought, maybe the problem is that Java's System.nanoTime() simply runs at the wrong speed.  System.nanoTime() isn't meant to tell you the time of day.  It's a relative time that is meant to tell you that 5.983 milliseconds passed between this time and that time, give or take a microsecond or so.  I think it's based on how many CPU clock cycles have passed, or something to that effect, but I'm not sure about that.

What if I run the same experiment with System.currentTimeMillis()?  That's Java's way of returning the current system time, as it gives you the number of milliseconds since midnight UTC on January 1, 1970.  Unlike System.nanoTime(), this one is meant to tell you the time of day.

The problem with System.currentTimeMillis() is that the time it returns is some integer number of milliseconds.  At best, it's rounded to the nearest millisecond, and it could easily be rounded badly.  It's not meant to distinguish between whether 5 ms or 6 ms passed between this event and that one.  On older computers, it might only offer precision of 10 or 15 ms.  Can that really measure differences of a few microseconds, even if we average a lot of data?

Well, I've worked out the methods for System.nanoTime(), so let's run the same computations (using System.nanoTime() to measure the length of a frame, and outputting System.currentTimeMillis() when we flag a long frame) and try it again.  This time, I check the time that each frame is done, and if it's 15-18 ms after the previous one and 15-18 ms before the next, I keep it.  If not I discard it.  This ends up discarding perhaps 1/4 of my data.

So I run the same computations and get 16661594 ns per frame.  That's different from what I got with System.nanoTime(), of course.  I'm skeptical that those last few digits are significant, but I'm not really sure how precise it is.  So let's get another data set and try it again.  16661614 ns per frame.  Again?  16661660 ns per frame.  Then 16661554 ns per frame.

So it looks like using System.currentTimeMillis() to flag the long frames can really only pin down the time per frame to within about 100 ns accuracy.  But that's still enough to say that it's definitely not 16666667 ns per frame.  And it's enough to say that the result is probably different from that of System.nanoTime().  It's only off from System.nanoTime() by about 0.001% or so, which comes to several minutes per year.  As keeping time goes, being off by several minutes per year isn't great, but it's not terrible, either.  The wall clock in my bathroom is off by more than that.

I can test whether my calculations are correct more directly by using the test I described earlier.  But instead of switching the color once per time that the monitor grabs a frame, let's try switching it twice.  Let's say we start at red, and the monitor grabs a frame.  We switch to blue, then switch back to red, and the monitor grabs a frame, so it stays red.  If you render a frame twice as fast as the monitor grabs one, then every frame the monitor grabs should be the same color.

So let's try making the program wait after it renders each frame, by an amount that we scale to try to make it render a new frame exactly twice as fast as the monitor grabs one.  With an initial, naive guess of 8333333 ns per frame, the screen appears solid red for a while, then flickers back and forth between red and blue rapidly, then settles on blue for a while, then flickers back and forth, then returns to red for a while.  This pattern repeats indefinitely.

If we change it from 8333333 ns per frame to 8331000 ns per frame or 8330000 ns per frame, we get the same behavior, but it takes a lot longer to switch from one to the other.  At 8330700 ns per frame, it stays solid blue for a long time.  Bingo.  The amount of time it takes between moments that the monitor grabs a new frame is right around double that.
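The pacing uses an absolute schedule so that sleep jitter doesn't accumulate.  A sketch of the idea (renderSolidColor() is just a placeholder for the real JOGL clear-and-swap, not an actual API call):

    import java.util.concurrent.locks.LockSupport;

    public class DoubleRateTest {
        // Half the measured refresh period: flip colors twice per refresh, so a
        // correctly tuned value keeps the displayed color constant.
        static final long HALF_PERIOD_NS = 8330700L;

        public static void main(String[] args) {
            boolean red = true;
            long next = System.nanoTime();
            while (true) {
                renderSolidColor(red);       // placeholder for clearing and swapping the back buffer
                red = !red;
                next += HALF_PERIOD_NS;      // absolute schedule, so timing errors don't accumulate
                long wait;
                while ((wait = next - System.nanoTime()) > 0) {
                    if (wait > 200000L) {
                        // Park for most of the wait, then spin for the last ~100 microseconds.
                        LockSupport.parkNanos(wait - 100000L);
                    }
                }
            }
        }

        static void renderSolidColor(boolean red) {
            // In the real test, this clears the back buffer to red or blue and lets JOGL swap.
        }
    }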

Let's return to the title.  How often does a 60 Hz monitor refresh?  It could easily vary by the monitor, video card, motherboard, and who knows what else.  But in my case, at least, it's definitely not 60 times per second.  60.02 times per second is much closer.


Comments

  • Sandbox Member UncommonPosts: 295

    If you have a CRT monitor, the timing is determined by the video source and not by the monitor. You have both horizontal and vertical sync pulses, blanking periods, etc.

    Most video cards are able to react to vertical sync interrupt signals and update/switch the framebuffer during this period.

    Use vertical sync signals and double buffering to avoid tearing.

    Stuttering from frame rate conversion, like 40 Hz to 60 Hz, is mostly an issue if you have continuous movement. Most games have variation in their generated frame rate, but that is handled by modern GPUs.

    Back to your question: you have to measure the sync pulses from the video board to get a correct answer.

  • Quizzical Member LegendaryPosts: 25,347
    Originally posted by Sandbox

    If you have a CRT monitor, the timing is determined by the video source and not by the monitor. You have both horizontal and vertical sync pulses, blanking periods, etc.

    Most video cards are able to react to vertical sync interrupt signals and update/switch the framebuffer during this period.

    Use vertical sync signals and double buffering to avoid tearing.

    Stuttering from frame rate conversion, like 40 Hz to 60 Hz, is mostly an issue if you have continuous movement. Most games have variation in their generated frame rate, but that is handled by modern GPUs.

    Back to your question: you have to measure the sync pulses from the video board to get a correct answer.

    As best as I can tell, OpenGL does not have any vertical sync capabilities built into the API.  It does use double-buffering by default, though.

    I'm using an LCD monitor, not a CRT.  Actually two LCD monitors, but the red/blue flashing test found that when they get a new frame, they both get the framebuffer as it existed at exactly the same time.

    And I do think I have a correct answer for my hardware.  It seems likely that my method could find the answer for a lot of other hardware, too, though I don't know.  I posted this largely because I thought it was interesting.

  • Sandbox Member UncommonPosts: 295
    Originally posted by Quizzical

    As best as I can tell, OpenGL does not have any vertical sync capabilities built into the API.  It does use double-buffering by default, though.

    I'm using an LCD monitor, not a CRT.  Actually two LCD monitors, but the red/blue flashing test found that when they get a new frame, they both get the framebuffer as it existed at exactly the same time.

    And I do think I have a correct answer for my hardware.  It seems likely that my method could find the answer for a lot of other hardware, too, though I don't know.  I posted this largely because I thought it was interesting.

    Even LCD monitors have a frame rate specification, mostly due to bandwidth limitations. One difference is that they don't have to clear the screen like a CRT monitor, since they have no phosphor. But it's still the video card that dictates the data rate and the timing, and that's why you can have a vertical sync interrupt even with an LCD connected.

    I just wanted to point your attention in the right direction, since you seem to lack some basic understanding of video systems; my reply was in no way a complete explanation.

    Maybe this can help you… http://www.tweakguides.com/Graphics_7.html

  • Loktofeit Member RarePosts: 14,247
    Originally posted by Sandbox

    Maybe this can help you… http://www.tweakguides.com/Graphics_7.html

    Great link.

     

    Quiz, it was seven paragraphs before you got to 'framebuffer,' and no mention of 'back buffering'. Get more familiar with those two and you probably will never have to worry about screen tearing again, as it takes either some really horrible code or really high-end graphics for that to happen.

    There isn't a "right" or "wrong" way to play, if you want to use a screwdriver to put nails into wood, have at it, simply don't complain when the guy next to you with the hammer is doing it much better and easier. - Allein
    "Graphics are often supplied by Engines that (some) MMORPG's are built in" - Spuffyre

  • Quizzical Member LegendaryPosts: 25,347
    Originally posted by Loktofeit

    Quiz, it was seven paragraphs before you got to 'framebuffer,' and no mention of 'back buffering'. Get more familiar with those two and you probably will never have to worry about screen tearing again, as it takes either some really horrible code or really high-end graphics for that to happen.

    Screen tearing already isn't an issue.  I even went out of my way to try to cause it and couldn't.  I think that it's implicitly handled by video drivers.

    JOGL uses double-buffering by default.  All rendering is done on the back buffer.  When a call to display() returns (which means that a frame in the back buffer is completed), it automatically swaps the front and back buffers.  (Or perhaps rather, the front left and back left buffers; OpenGL 4.2 has quad buffering.)  What seems to happen is that the video card periodically decides that it's time to send a frame to the monitor, and copies the contents of the front buffer elsewhere to send it to the monitor--and won't allow the front buffer to be written to during that time.

    What I want to know is exactly when the video card will decide that it's time to grab the contents of the front buffer and send it to the monitor.  That should make it possible to have smoother animations.  The ideal solution, which likely isn't possible, would be for me to be able to request that the next time it grabs the front buffer, it also calls System.nanoTime() and records the value for me to use however I want.  That would make things easy for me.

    Absent that, I could do the testing as described in the original post to get a pretty good approximation.  The problem is that I have to stop rendering the game to do that testing.  If the average frame time on a given hardware configuration doesn't vary, then I could do a one time test that takes a couple of minutes to compute it, and then store that value.

    Then I'd need the offset, which I could compute in a fraction of a second at a cost of making the game window flicker a bit (i.e., alternate between two colors very rapidly; the colors could be very close to each other to avoid being obnoxious) instead of rendering the game.  The offset would need to be refreshed periodically.  If I can get the average frame time to within a few nanoseconds, then refreshing the offset every few hours is sufficient.  Stopping to do that when there isn't anything important to render (e.g., when loading the game or switching characters) should work fine.

    The problem is that the offset will drift each frame by however far off my average frame time is.  Multiply that by about 216,000 frames per hour, and keeping the proper value to within about a millisecond means I'd better know the average value pretty accurately (an error of just 5 ns per frame already drifts by about 1 ms per hour)--and it had better not change for a given hardware configuration.

    -----

    As for the link, that does give the basics of how monitors work.  The problem is that just knowing the monitor refresh rate is around 60 times per second isn't good enough for my purposes.

    The link also seems to be very old.  I don't know if their description of the graphics pipeline was ever correct, but at the very least, it hasn't been for a long time.  Among other things, they're assuming fixed-function lighting.  Lighting calculations can be done in any of several pipeline stages, or even spread across multiple stages, but regardless of where they're done, they're not fixed-function anymore.

  • Sandbox Member UncommonPosts: 295

    As I already said, it's the video card's refresh rate AND timing that matter. Not the refresh rate of the monitor.

    Either you continue to bang your head against the wall, or find a "how to" describing how to enable vertical sync or similar signalling from your hardware.

    Found this in an OpenGL wiki:

    Use the WGL_EXT_swap_control extension to control swap interval. Check both the standard extensions string via glGetString(GL_EXTENSIONS) and the WGL-specific extensions string via wglGetExtensionsStringARB() to verify that WGL_EXT_swap_control is actually present.

    The extension provides the wglSwapIntervalEXT() function, which directly specifies the swap interval. wglSwapIntervalEXT(1) is used to enable vsync; wglSwapIntervalEXT(0) to disable vsync.

    Another option is to find a card that supports what you need.

  • Quizzical Member LegendaryPosts: 25,347
    Originally posted by Sandbox

    As I already said, it's the video card's refresh rate AND timing that matter. Not the refresh rate of the monitor.

    Either you continue to bang your head against the wall, or find a "how to" describing how to enable vertical sync or similar signalling from your hardware.

    Found this in an OpenGL wiki:

    Use the WGL_EXT_swap_control extension to control swap interval. Check both the standard extensions string via glGetString(GL_EXTENSIONS) and the WGL-specific extensions string via wglGetExtensionsStringARB() to verify that WGL_EXT_swap_control is actually present.

    The extension provides the wglSwapIntervalEXT() function, which directly specifies the swap interval. wglSwapIntervalEXT(1) is used to enable vsync; wglSwapIntervalEXT(0) to disable vsync.

    Another option is to find a card that supports what you need.

    That controls when the front and back buffers are swapped.  The default functionality of JOGL works fine for me there.  But that's not what I'm after.  I want to know when the video card takes the front buffer and copies it elsewhere to send it to the monitor for display.
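    (For what it's worth, JOGL does expose that swap interval directly, as in the snippet below, assuming JOGL 2's class names.  But again, it only throttles buffer swaps; it says nothing about when the card actually scans a frame out.)

        import javax.media.opengl.GLAutoDrawable;
        import javax.media.opengl.GLEventListener;

        // Turns on vsync-style swap throttling from inside a JOGL event listener.
        public class VsyncListener implements GLEventListener {
            public void init(GLAutoDrawable drawable) {
                // 1 = swap at most once per vertical refresh; 0 = swap as fast as frames finish.
                drawable.getGL().setSwapInterval(1);
            }
            public void display(GLAutoDrawable drawable) { }
            public void reshape(GLAutoDrawable d, int x, int y, int w, int h) { }
            public void dispose(GLAutoDrawable drawable) { }
        }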

    And as I've said repeatedly, I think I can measure exactly that in a convoluted way.

  • Ridelynn Member EpicPosts: 7,383

    I think there are probably more variables than you are taking into account.

    Missing nanoseconds here and there when measuring frame rate response isn't altogether unreasonable. There are things that happen other than your code (like the Windows scheduler) that can interrupt your process and throw off that clock by some small margin, and they are completely external to your program.

    For ~most~ purposes, you either render as fast as you can and just let the monitor show what it's able (and risk tearing), or you just enable VSYNC and let the driver take care of throttling the frame rate for you, and that's good enough.

    For applications that require a higher degree of accuracy, you're probably going to need to dig down a lot deeper than Java will let you, since you're more or less stuck inside of the Java VM, and need more direct access to the underpinnings - and it may be that you can't get that with the way Windows architecture is laid out in the first place (I'm not entirely certain - to get that level of accuracy you may need to look at some embedded-type OS hooks or look toward modifying a Linux kernel).

  • Ichmen Member UncommonPosts: 1,228

    I'm kinda curious about this myself, as my LCD tends to be 59-60 Hz. While I typically don't notice tearing or such issues (apart from the GPU limitations I suffer), I'd love to know if buying a 1,000,000 Hz monitor is really any better than a 60 Hz one.

    It'd be nice to know if a higher refresh rate would help avoid lag from system hardware, or if it would really matter all that much.

     

    but then again... most of this stuff is beyond my tech knowledge :/

  • Quizzical Member LegendaryPosts: 25,347
    Originally posted by Ridelynn

    I think there are probably more variables than you are taking into account.

    Missing nanoseconds here and there when measuring frame rate response isn't altogether unreasonable. There are things that happen other than your code (like the Windows scheduler) that can interrupt your process and throw off that clock by some small margin, and they are completely external to your program.

    For ~most~ purposes, you either render as fast as you can and just let the monitor show what it's able (and risk tearing), or you just enable VSYNC and let the driver take care of throttling the frame rate for you, and that's good enough.

    For applications that require a higher degree of accuracy, you're probably going to need to dig down a lot deeper than Java will let you, since you're more or less stuck inside of the Java VM, and need more direct access to the underpinnings - and it may be that you can't get that with the way Windows architecture is laid out in the first place (I'm not entirely certain - to get that level of accuracy you may need to look at some embedded-type OS hooks or look toward modifying a Linux kernel).

    Perhaps I should back up and say that this isn't something that I absolutely need to do for the game to work.  The game works fine without it, and there isn't any tearing.  But if a few days of work can make all animations visibly smoother and take several milliseconds off of the display latency on higher end hardware while having the vertical sync advantages of easing the load on hardware, then I'll do it.

    I don't need to control when the video card will grab a frame to send to the monitor.  I only want to be able to predict it.  If it's based on regular intervals in whatever timer it uses (as is highly probable), and the timer it uses is the same one that Java exposes either with System.nanoTime() or System.currentTimeMillis(), then I can do exactly that.  If it uses some other timer that doesn't run at a rate very closely proportional to either of those, then it will just mean I spent a few hours tinkering with something that didn't work out.  Oh well.  It wouldn't be the first time.

    I suppose the way to do it is to try to implement it and see if it works.  It's entirely possible that it will work on some systems and not others, in which case, it's easy enough to leave it as an option for the end user to turn on or off.

  • Quizzical Member LegendaryPosts: 25,347
    Originally posted by Ichmen

    I'm kinda curious about this myself, as my LCD tends to be 59-60 Hz. While I typically don't notice tearing or such issues (apart from the GPU limitations I suffer), I'd love to know if buying a 1,000,000 Hz monitor is really any better than a 60 Hz one.

    It'd be nice to know if a higher refresh rate would help avoid lag from system hardware, or if it would really matter all that much.

    but then again... most of this stuff is beyond my tech knowledge :/

    A monitor with a 120 Hz refresh rate will tend to make animations look smoother and reduce your display latency by several milliseconds.  But it can really be the difference between animations that look fairly smooth and ones that look very smooth.  If the problem is that your system just can't compute frames fast enough, then it's going to be choppy no matter what you do for a monitor.

  • grndzro Member UncommonPosts: 1,162

    Usually the refresh rate of a monitor is a slightly variable function of the hardware that is used in the monitor. Drivers have built-in algorithms that adjust to the hardware tolerances in the monitors. So one 23 inch monitor might be at 59 Hz while another of the same kind could be at 61. It's hidden in the way drivers deal with the frequency tolerances. At least that's the way I understood it from CRT monitors.

    In LCD monitors, I'd say they are probably set up to much closer tolerances, due to the fact that the LCD crystals don't really care what Hz they are at as long as it isn't higher than they are capable of. Digital systems probably have a clock crystal built in. No idea though.

    From a programming perspective, setting up a timing/refresh test against the system clock could determine how close you are to 60 Hz. But I'm not a programmer.

  • syntax42 Member UncommonPosts: 1,378

    The poster above is fairly close to what I would guess.  Old television sets used the AC power waveform to set their scan rates.  Modern electronics may not do the same thing, but the function of the device builds on the flaws of its predecessors.  The problem was that the monitor and the video source had little real synchronization.  

     

    I'm not a programmer, but i was reading the posts and had an idea.  Instead of trying to predict the frames, draw the next frame right after the monitor grabs the latest frame.  Wait to draw the next frame until the monitor grabs the latest one.  This way, you will produce one frame every time the monitor grabs a frame and your frames will be spaced perfectly apart.  This would require the use of the front and back buffers as mentioned in previous posts to ensure the monitor is only reading a frame which has been drawn completely.

     

    Basically, your game state would be just under 16.7ms ahead of your monitor, assuming a perfect 60Hz refresh.  Every frame would be drawn 16.7ms after the previous, producing the smoothest possible animation for you.  This is only possible if you have access to the ability to detect when the monitor has grabbed the frame from the buffer.

     

    Did all of that make sense?  Is it even possible with the programming language you are using?

  • TheLizardbones Member CommonPosts: 10,910

    The issue isn't the monitor, unless something is wrong with it. It's probably the video card or cpu being too weak for the content being displayed. It's also possible for a video card to somehow get out of sync with a monitor's refresh rate, so that half the screen is one frame, and half the screen is another frame. There's often a setting in games that keeps the video card synced with the monitor. I've never used it, but I know it exists. Maybe try that.

    Most HD televisions, even those suited to Blu-ray HD, run at 60 Hz. You get diminishing returns at refresh rates higher than 60 Hz. For instance, the difference between 15 Hz and 30 Hz is HUGE. You go from something that isn't watchable to something you could actually watch. Movie theaters have been running at 24 Hz since forever and are only recently trying out higher refresh rates. From 30 Hz to 60 Hz, you go from something that is watchable and even enjoyable to something that looks really smooth with any post processing on the video being displayed, and works well with interactive things like video games. From 60 Hz to 120 Hz you get a slight improvement that many people can't even see, because they are humans and not bug-eyed aliens. Even Blu-ray HD benefits very little from having a 120 Hz refresh rate. They have built televisions that are capable of running at 600 Hz, but consumers don't need them. The only real use of 120 Hz televisions now is having screens that are slightly smoother for a good bit more money, or displaying content in 3D by displaying two different videos at 60 Hz each.

    I have read a couple posts, but I haven't read the OP, so if this has nothing to do with the OP, my apologies.

    ** Lordy, lordy. I just read the OP. My post has nothing to do with it. I'm also going to posit that the OP is too smart to be posting on these forums, and also has too much time to just experiment. I don't know what they look like, but I imagine them bald, face hidden in darkness, with a mechanical hand, petting a cat that looks like a tiny tiger, with green eyes. They have already laid out their plans, and they are just waiting...

    I can not remember winning or losing a single debate on the internet.

  • Quizzical Member LegendaryPosts: 25,347
    Originally posted by syntax42

    I'm not a programmer, but i was reading the posts and had an idea.  Instead of trying to predict the frames, draw the next frame right after the monitor grabs the latest frame.  Wait to draw the next frame until the monitor grabs the latest one.  This way, you will produce one frame every time the monitor grabs a frame and your frames will be spaced perfectly apart.  This would require the use of the front and back buffers as mentioned in previous posts to ensure the monitor is only reading a frame which has been drawn completely.

     

    Basically, your game state would be just under 16.7ms ahead of your monitor, assuming a perfect 60Hz refresh.  Every frame would be drawn 16.7ms after the previous, producing the smoothest possible animation for you.  This is only possible if you have access to the ability to detect when the monitor has grabbed the frame from the buffer.

    The problem is that I have no way to directly find out when a frame is grabbed and sent to the monitor.  The way that I measure it indirectly is by spamming a bunch of junk frames that are a solid color, measuring how long each takes, and looking for outliers that take much longer than most.  A long frame (in this case, over 0.4 ms) usually, but not always, corresponds to when a frame was sent to the monitor.  Continuing the test for 100 ms or so and looking for a set of several long frames spaced about 16-17 ms apart is able to pin it down.

    The problem is that having the monitor go blank so I can spam junk frames for 100 ms in the middle of a game is rather disruptive to gameplay.  You can do something like that at loading screens, but I don't have any loading screens.  Loading a game initially or switching characters (my idea of fast travel:  in-game characters can't warp, but you can switch which character you control so that you effectively warp to a different area of the world) means that there will be a brief period when a lot of the stuff that needs to be drawn isn't loaded.  Having the screen go blank for 100 ms or so when that happens would be perfectly acceptable.

    But that means that I can only sporadically find out when a frame is sent to the monitor.  I can't find out every frame, or even once every thousand frames.  I expect it to be fairly rare for players to go more than about 200,000 frames without giving me a chance to check when frames are sent.  (One full day/night cycle is 1 real-life hour, and being outside of town at night kills you.  So if it's getting dark, maybe you should switch characters to somewhere where it is morning.)

  • TheLizardbones Member CommonPosts: 10,910

    Is your game a full screen Java application?

    This site (http://www.java-gaming.org/topics/why-java-games-look-choppy-vertical-retrace/14696/view.html) references a method to get smooth animations without having to predict when a frame is being written. It apparently doesn't apply to Java 2D, and from other sites I was reading, it can give inconsistent performance depending on whether you're running Windows or Linux. It works under Windows, but only under certain versions of Java on Linux.

    The relevant quote from the page:


    "The fix is easy for anyone writing a fullscreen Java application; applications that use the FlipBufferStrategy get this for free. When that buffer strategy copies its contents to the screen from the back buffer, it specifically waits for the vertical blank interface, and thus avoids tearing completely.

    The fix is not as easy for typical windowed (non-fullscreen) applications, because there is currently no way to tell Java to wait for this interval, and there is no way for your code to know when it is a good time to go ahead with the copy. We hope to address this in a future release (I just filed a bug on it last week!), but in the meantime there is no way to get this behavior."



    There are two blog posts referenced with more detailed information.
    http://weblogs.java.net/blog/chet/archive/2006/02/make_your_anima.html
    http://today.java.net/pub/a/today/2006/02/23/smooth-moves-solutions.html#handling-vertical-retrace.html

    The information is pretty old though...so it might not be helpful.

    I can not remember winning or losing a single debate on the internet.

  • Ridelynn Member EpicPosts: 7,383

    A better question may be to take a step back and ask:

    Why are you trying to write a game in Java in the first place? There are a lot of drawbacks to picking Java in particular to program with.

  • Quizzical Member LegendaryPosts: 25,347
    Originally posted by Ridelynn

    A better question may be to take a step back and ask:

    Why are you trying to write a game in Java in the first place? There are a lot of drawbacks to picking Java in particular to program with.

    Because I didn't expect to get very far, and then didn't want to scrap it and start over.

    Exactly what drawbacks should I be warned about?  Java and OpenGL seem to have all of the capabilities that I need, though I haven't tested Java's sound and network capabilities to see just how robust they are.  I'm not a computer programmer by training, so I really don't know the pros and cons of this language versus that one.

  • TheLizardbones Member CommonPosts: 10,910


    Originally posted by Quizzical
    Because I didn't expect to get very far, and then didn't want to scrap it and start over.

    Exactly what drawbacks should I be warned about?  Java and OpenGL seem to have all of the capabilities that I need, though I haven't tested Java's sound and network capabilities to see just how robust they are.  I'm not a computer programmer by training, so I really don't know the pros and cons of this language versus that one.




    Was it a shock to end up with a functional game, or was it a nice surprise?

    I can not remember winning or losing a single debate on the internet.

  • Quizzical Member LegendaryPosts: 25,347
    Originally posted by lizardbones

     





    Was it a shock to end up with a functional game, or was it a nice surprise?

     

    Functional is perhaps a relative thing.  I can have a placeholder character move around in the game world, but there are still a ton of things missing.

    I was surprised at how math-intensive computer graphics is.  I expected there to be a lot of linear algebra, of course.  But tessellation, the big new feature of DirectX 11 and OpenGL 4, basically involves a bunch of graduate level mathematics.  No wonder most games don't use it, or just make some token use of tessellation that misses the whole point of it.

    There were a lot of things where I thought, I've never seen a game do such and such, but I think it would be cool if one did.  So let's try it.  Some such efforts fell flat.  For example, I spent about a day trying to create my own function for the depth buffer before realizing how OpenGL basically locked you into only two choices (3D perspective or isometric)--and why they chose to do it that way.  It made a ton of sense for performance reasons ten years ago, though it would be nice to have more options today.  But at least now I understand why clipping requires homogeneous coordinates in real-projective space RP^3, which is probably completely baffling to most people who do 3D graphics.

    This thread was based on the idea of, "Wouldn't it be cool if you could make a game always finish rendering a frame right before the video card sends it to the monitor, and not render any extra frames?"  I won't be able to do that perfectly, but I might be able to get pretty close.  I think I should wait until I have more stuff done and come back to it later.

    But some things actually worked.  For example, I don't load any textures off of the hard drive.  Rather, I generate them all on the processor.  That means no AAA graphics, but it also means a doable 3D world that looks decent enough without any artists on the project at all.  I'll post some screenshots for you to see.  You can click for a larger picture, though it won't show full size for some reason:

    [four screenshots]

    The second screenshot actually needs some explanation.  It's dark off in the distance because the world is round, and it gets dark at night.  If I turned the camera around, you'd be able to see the sun rising--and yes, actually moving.  One full game-day takes one real-life hour.  The sky also gets darker as the sun gets lower in the sky, which is why it's different colors in the different screenshots.

    It's a spherical world about 1 kilometer in radius.  There are about 15,000 trees, and every single tree looks different from every other.  There are likewise about 10,000 rocks, and every single rock looks different from every other.  There are twelve cities, most of which aren't finished yet, but I gave you screenshots of three of them.  The clocks in the third screenshot actually run and display the time in the local in-game time zone.

    The blue and purple spotted cylinder in the middle of each screenshot is my placeholder character.  It's going to be deleted eventually, but I needed a way to test collision detection, and that's hard to do if I can't see where I am.

    But the thing that I think is really amazing is that the entire game takes less space at the moment than any one of those four screenshots saved as a .jpeg file does by itself.

    I'm going to try to make an extremely versatile character creator that will let players create entire new species to put in the game.  The idea will be you can make your character look like whatever you want--and then I get to use it randomized somewhat (so it won't look exactly like you) as mobs for other players to fight.  If I can get that to work the way I want, then it will be time to release a demo and look for a publisher.

  • TheLizardbones Member CommonPosts: 10,910






    Are the trees and the rocks persistent and are the textures for each of these things persistent?

    I can not remember winning or losing a single debate on the internet.

  • Quizzical Member LegendaryPosts: 25,347
    Originally posted by lizardbones

    Are the trees and the rocks persistent and are the textures for each of these things persistent?

    If you start a new game, they'll all be in different places, as will the mobs that don't yet exist.  There is a substantial emphasis on exploring the game world, which doesn't work if everything is always in the same places.

    But if you save a game and load it another day, or go far away and then come back, the trees and rocks will be exactly where they were before, with the same textures, shapes, positions, and everything.  Plug in a different random seed, get a different game world.  Plug in the same random seed and get the same game world as before.

    When it loads the game, it specifies a random seed and some parameters for every single texture in the game world and keeps that in system memory on the client.  It periodically sees what you're close to and creates textures for objects as you come in range, then deletes the texture once you move far enough away.  If you come back, it will again create a texture from the same function, parameters, and random seed, which will give you exactly the same texture.

    But if textures are generated randomly, why keep a fixed resolution?  Why not let players adjust the texture resolution up and down?  Changing the texture resolution will make the textures look different.  But that's okay, as it means that if you've got 3 GB of video memory, I can put it to good use with ultra high resolution textures.  (Actually, you'd better have at least a quad core processor to generate those textures.)  And if you've only got 256 MB, I can use low resolution textures that only need to buffer 20 MB or so in your video memory to make the game playable.
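    A toy version of the regenerate-from-seed idea, just to show the principle (the real generator is far more elaborate than this): the same seed and parameters always reproduce the same pixels, so a texture can be thrown away and rebuilt on demand.

        import java.util.Random;

        public class SeededTexture {
            /**
             * Builds a width x height RGBA texture entirely from a seed and a parameter.
             * Calling it again with the same arguments yields an identical texture, so
             * nothing needs to be kept around except the seed and the parameters.
             */
            static int[] generate(long seed, int width, int height, float baseHue) {
                Random rng = new Random(seed);
                int[] pixels = new int[width * height];
                for (int i = 0; i < pixels.length; i++) {
                    // Jitter the brightness around a base color; purely illustrative noise.
                    float brightness = 0.5f + 0.4f * rng.nextFloat();
                    int argb = java.awt.Color.HSBtoRGB(baseHue, 0.6f, brightness);
                    pixels[i] = (argb << 8) | 0xFF;   // repack as RGBA with full alpha
                }
                return pixels;
            }
        }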

  • TheLizardbones Member CommonPosts: 10,910






    Well, that is pretty cool. You may not be a programmer by trade, but I am, and that's pretty cool.

    I can not remember winning or losing a single debate on the internet.

  • Ridelynn Member EpicPosts: 7,383


    Originally posted by Quizzical
    Because I didn't expect to get very far, and then didn't want to scrap it and start over.

    Exactly what drawbacks should I be warned about?  Java and OpenGL seem to have all of the capabilities that I need, though I haven't tested Java's sound and network capabilities to see just how robust they are.  I'm not a computer programmer by training, so I really don't know the pros and cons of this language versus that one.


    Well, for just programming as a hobby, there isn't anything wrong with it. It's a free and open source programming tool that has a lot of benefits:
    a) It's free and open source
    b) easily available
    c) has an established history, and is still supported by a major corporate sponsor (Oracle)
    d) is readily taught in many schools as the first or primary language for programming
    e) is functionally complete and strongly typed
    f) a lot of existing code and tutorials exist
    g) "Write once, Run anywhere" promise (I used quotes around that, I'll explain below)
    h) baked-in memory management
    i) capability to write internet-distributed apps (applets/Java Web Start)

    There are, however, some drawbacks.

    a) It is not considered an industry standard - not a lot of commercial software out there is actually written in Java; most commercial software is written in some variation of C, although you can make a strong case for several different languages (including Java) as to which would be "The Best" for any particular application
    b) The JIT/VM - you are relying not only on your end users' hardware for performance, but also on a middleware just-in-time compiler and virtual machine. The quality and availability of these vary widely from platform to platform.
    c) Security - the US government is advising people to actively uninstall Java unless they depend on it, per an advisory from just yesterday
    d) "Write once, Run anywhere" quickly became "Write once, debug everywhere" due to differences in JIT/VM's on different platforms. GUI elements in particular are notoriously difficult to support.
    e) Forced object-oriented design. Not every program lends itself well to OOP design fundamentals, but you don't really have any other way around it in Java.
    f) The JIT/VM insulates you from direct access to hardware, or even device drivers, and in some cases even the OS
    g) Writing anything web-based (applets) requires a third-party plug-in (the Java plugin)

    If you are just poking around OpenGL for a hobby, there is nothing wrong with Java.

    If you're trying to do something as intricate as accurately timing the frame buffer for a graphics application, you're going to get extremely frustrated, because there are a lot of things outside of your control that will affect that, both from Windows and from Java (namely the Windows scheduler and the Java JIT/VM).

    If you are looking for that level of accuracy, it can be done on Windows, but it would probably be done more easily in C/C++ with X Windows on an open source OS, where you can see the OS source code, potentially modify it as required, and drop all the way down to the assembly level to tightly control tolerances.  You can keep the same OpenGL APIs, but you'll lose the commercially stable video drivers: Windows drivers simply get more support, and AMD's Linux drivers are notoriously poor (though they do exist), whereas on Windows the OS code is closed off and you can only really tell what's going on from documentation or lots and lots of debugging output.
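    (As a quick way to see the jitter being described, the toy program below asks Java for a 16 ms sleep (roughly one 60 Hz refresh) and measures what it actually gets.  Nothing here is specific to JOGL or to any particular game; the overshoot you observe is whatever the OS scheduler and JVM happen to give you, and on Windows the default timer resolution is about 15.6 ms, so errors of several milliseconds are common.)

        public class SleepJitter {
            public static void main(String[] args) throws InterruptedException {
                final long targetMs = 16;  // roughly one 60 Hz frame interval
                for (int i = 0; i < 10; i++) {
                    long start = System.nanoTime();
                    Thread.sleep(targetMs);
                    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                    // The gap between the requested and actual sleep is the
                    // scheduler/VM jitter being described above.
                    System.out.println("asked for " + targetMs + " ms, slept " + elapsedMs + " ms");
                }
            }
        }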

  • QuizzicalQuizzical Member LegendaryPosts: 25,347

    Thanks for the advice.

    My attempts at timing when the framebuffer gets grabbed and sent to the monitor were a "wouldn't it be cool if I could do X" thing, not an "I need to do X or the game won't work" thing.  If I could do it but it would take a month, I'd say, forget it, it's not worth anywhere near that amount of time.  Modifying an OS is way beyond what I care to learn how to do.

    For my purposes, any platform that doesn't support OpenGL 3.2 or later is a non-starter.  I've already ported things back from OpenGL 4.2 to 3.2 (with dynamic tessellation in OpenGL 3.2, even!), but further than that means giving up geometry shaders, which would make life rather difficult.  At the moment, that means Windows, Mac OS X, and Linux, though some Android devices may get there soon.  JOGL should supposedly work with any of those platforms, though I don't know how much debugging it will take.
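    (For what it's worth, a hedged sketch of how that 3.2 floor might be checked at startup with JOGL.  The package names below are for the JOGL 2 releases of that era, which lived under javax.media.opengl before later builds moved to com.jogamp.opengl, and the parsing is simplistic since GL_VERSION strings vary by vendor.)

        import javax.media.opengl.GL;
        import javax.media.opengl.GLAutoDrawable;
        import javax.media.opengl.GLEventListener;

        public class VersionCheck implements GLEventListener {
            @Override
            public void init(GLAutoDrawable drawable) {
                GL gl = drawable.getGL();
                // Reported as e.g. "3.2 ..." or "4.2.11 ..."; take major.minor.
                String version = gl.glGetString(GL.GL_VERSION);
                String[] parts = version.split("[. ]");
                int major = Integer.parseInt(parts[0]);
                int minor = Integer.parseInt(parts[1]);
                if (major < 3 || (major == 3 && minor < 2)) {
                    throw new RuntimeException("OpenGL 3.2 or later is required, got " + version);
                }
            }

            @Override public void display(GLAutoDrawable drawable) { }
            @Override public void reshape(GLAutoDrawable d, int x, int y, int w, int h) { }
            @Override public void dispose(GLAutoDrawable drawable) { }
        }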

    For the drawbacks, (c) and (g) seem to only apply to web browser plug-ins.  That could be an issue for a browser-based game.  Browser-based means no OpenGL, which makes it useless to me.

    For (b) and (d), is Java likely to be horribly problematic on Windows, Mac OS X, or Linux?  Because if Java only falls apart when you wander off into embedded platforms I've never heard of, I don't care about those platforms.  I have the impression that anti-Microsoft Linux fans tend to be pro-Java, which they wouldn't be if Java on Linux didn't work, though that impression could be mistaken.  Maybe they're just anti-Visual Studio.

    I've already had to deal with situations where my game would run on my desktop but not my laptop, and then both my desktop and laptop but not my parents' computer.  In most of those situations, once I found the problem, the puzzling thing was why it did work on my desktop.  (E.g., if you try to set a four-dimensional vector equal to a three-dimensional vector in GLSL, Radeon HD 4000 series drivers crash the program, while 5000 series drivers simply copy the first three components and ignore the fourth.  There's actually a good case to be made for the former being more desirable behavior.)
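    (To make the GLSL example above concrete: the spec has no implicit vec3-to-vec4 conversion, so the explicit constructor is the form that works on every driver.  The shader text below is a hypothetical fragment, shown as a Java string only to keep it in the same language as the rest of the code here.)

        public class ShaderSnippet {
            static final String FRAGMENT_SOURCE =
                "#version 150\n" +                       // GLSL 1.50 corresponds to OpenGL 3.2
                "in vec3 color;\n" +
                "out vec4 fragColor;\n" +
                "void main() {\n" +
                // "    fragColor = color;\n"            // relies on driver leniency; not valid per the spec
                "    fragColor = vec4(color, 1.0);\n" +  // portable: explicit constructor
                "}\n";
        }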

    For (a), why exactly would this matter?  I know that industry standards are a good thing for a lot of purposes, but I'm not sure why it would be a big deal for a programming language.

    I could understand why (f) could be a problem in a lot of situations.  But I really don't want to do a bunch of custom stuff for a bunch of different OSes and hardware configurations.  Having to maintain separate code paths for OpenGL 4.2 and 3.2 is already getting annoying, even though the overwhelming majority of my code neither knows nor cares that I'm using OpenGL, let alone which version.

    And then (e) wanders into nuances that I don't really understand.  If I ever finish my game, I'm guessing that it will offer about 60 frames per second performance (it's substantially above that at the moment, but adding characters to the game world will probably bring it down more than remaining performance optimizations that I find later bring it up) on my AMD E-350 based laptop/netbook at medium settings, so it's not like Java won't perform well enough.  A Temash-based tablet would be faster yet.

    I'm not trying to argue with you.  I'm just thinking that drawbacks of Java that could be an enormous problem for some people are likely to be minor or irrelevant to me.
