The obvious answer is, 60 times per second. That's what 60 Hz means, after all. (Hz, as a unit, is the inverse of seconds.) That, interestingly enough, seems to be the wrong answer.
To give some background, I've been programming a game, and thought it would be nice to keep a steady frame rate. If your monitor refreshes every 1/60 of a second, then you'd ideally like to grab the state of the game and use it to render a frame every 1/60 of a second, and have that be what the monitor uses for the new frame. That will give you very smooth animations.
There are a variety of problems here, though. I was originally worried about screen tearing. So I tried some artificial examples to try to create screen tearing, and couldn't do it. I concluded that some sort of vertical sync is implicitly implemented in what I'm using (JOGL+Windows 7+Radeon HD 5850), most likely in video drivers. So far, so good.
Knowing that, it would be ideal to start a frame so that it finishes just before the monitor grabs a frame. If every frame finishes a millisecond before the monitor grabs a frame rather than a millisecond after, then that reduces your display latency by a full refresh interval, roughly 16-17 ms.
The next issue is that you can't control exactly how long it takes to render a frame. How long it takes depends on what graphical settings are in use, how much needs to be drawn in that frame, whether textures are being generated at the same time, what other (non-game) processes are running in the background, how Windows decides to schedule threads, when Java decides to run garbage collection, and a variety of other things.
Ideally, you'd like to start a frame exactly as often as the monitor grabs a frame, and then always have the frame you start finish by the time the monitor grabs a frame. Obviously, on slow hardware, you just have to draw as fast as you can and it's done whenever it's done. But on faster hardware, if you can get 300 frames per second, it should be possible to be a little more precise. If the time to draw a frame varies by a few milliseconds on fast hardware (let's say 99% of frames within 2 ms of the median), then you could move the start time up so that nearly all frames finish within several milliseconds before the monitor grabs a new frame.
What you really don't want is to have an average frame finish just as the monitor grabs a new frame, so that about half of the frames are done before it and half after. If one frame finishes slightly after the monitor grabs a new frame, and the next slightly before, then the first frame won't display at all. Instead, the frame before it just displays twice consecutively. One worst case scenario is that it leads to a steady 30 frames per second rather than 60, as every other frame gets skipped. Another worst case scenario is that the frames finish late-late-early, and that cycle repeats forever. That gets you 40 frames per second, but with stuttering: one frame is displayed for a full 1/30 of a second before being replaced by the next frame, which shows the state of the game world as of only 1/60 of a second later. Then that frame is displayed for only 1/60 of a second before it is replaced by a new frame that shows the state of the world 1/30 of a second later.
One way to avoid that is to just let a fast card render as fast as it can. That will definitely get you a new frame displayed every 1/60 of a second with only rare exceptions, and each new frame displayed will usually show the state of the game world about 13-20 ms after the last, though it will vary substantially. But that seems less than ideal, as it would be nice for each new frame to show the same amount of time passing in the game world. It also seems like an undue strain on hardware.
To do better than that requires knowing when the video card will grab a frame. I speculated that what it does is, when it's time to display a new frame, it copies the most recently completed framebuffer elsewhere to feed to the monitor before continuing to render the frame currently being worked on. Copying that completed framebuffer should take time, and make the frame take a little longer than most to render.
For long frames, that could amount to a rounding error. But for very fast frames, it should be detectable. So let's create a program that renders frames very fast: a solid red screen one frame, then solid blue the next, and cycle back and forth each frame. Then we can ask how long it took to render each frame, and see if there are outliers.
A little trial and error found that most frames took about 200 microseconds. If I flagged frames that took more than 400 microseconds and recorded the time at which they finished, there was one such frame every 16-17 milliseconds, plus a handful of other frames that sporadically took longer than expected. Bingo. That's exactly what I was looking for, and would expect if copying the framebuffer to send it to the monitor did make a frame take longer. (Actually sending the frame to the monitor will take several milliseconds; I think it just copies the frame that it is going to send somewhere else on the video card so that it won't be overwritten while it is sending it.)
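For reference, here's a stripped-down sketch of what that test program looks like. This is a reconstruction rather than my exact code: it assumes a JOGL 2 setup with a NEWT GLWindow driven by an unthrottled Animator, and the class name, window size, and 400-microsecond threshold are just for illustration.

```java
import com.jogamp.opengl.*;
import com.jogamp.newt.opengl.GLWindow;
import com.jogamp.opengl.util.Animator;

public class FrameTimer implements GLEventListener {
    private boolean red = true;
    private long lastFinish = System.nanoTime();

    @Override
    public void display(GLAutoDrawable drawable) {
        GL2 gl = drawable.getGL().getGL2();
        // Alternate between a solid red frame and a solid blue frame.
        gl.glClearColor(red ? 1f : 0f, 0f, red ? 0f : 1f, 1f);
        red = !red;
        gl.glClear(GL.GL_COLOR_BUFFER_BIT);
        gl.glFinish(); // block until the GPU has actually finished the frame

        // Flag any frame that took more than 400 microseconds to render,
        // and record when it finished.
        long now = System.nanoTime();
        if (now - lastFinish > 400_000) {
            System.out.println(now + " " + (now - lastFinish));
        }
        lastFinish = now;
    }

    @Override public void init(GLAutoDrawable d) {}
    @Override public void reshape(GLAutoDrawable d, int x, int y, int w, int h) {}
    @Override public void dispose(GLAutoDrawable d) {}

    public static void main(String[] args) {
        GLWindow window = GLWindow.create(new GLCapabilities(GLProfile.getDefault()));
        window.setSize(640, 480);
        window.addGLEventListener(new FrameTimer());
        window.setVisible(true);
        new Animator(window).start(); // render frames as fast as possible
    }
}
```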
Discarding the other frames that took longer was easy enough: check the time that a flagged frame finished as compared to the flagged frame before it and the flagged frame after. If either of those gaps differs from the expected refresh interval (about 16.7 ms) by more than 100 microseconds, then discard the frame. This will actually discard about 2/3 of the data output, but that still leaves plenty--and with good certainty that what remains really does correspond to when the monitor grabs a new frame.
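In code, that filter is just one pass over the flagged timestamps. A sketch (the class and method names and the constants are mine, chosen to match the description above):

```java
import java.util.ArrayList;
import java.util.List;

class FlagFilter {
    // Keep only flagged timestamps whose gaps to both neighbors are within
    // 100 microseconds of the expected refresh interval (~16.7 ms).
    static List<Long> filterFlagged(List<Long> flagged) {
        final long EXPECTED_NS = 16_666_667;
        final long TOLERANCE_NS = 100_000;
        List<Long> kept = new ArrayList<>();
        for (int i = 1; i + 1 < flagged.size(); i++) {
            long gapBefore = flagged.get(i) - flagged.get(i - 1);
            long gapAfter = flagged.get(i + 1) - flagged.get(i);
            if (Math.abs(gapBefore - EXPECTED_NS) <= TOLERANCE_NS
                    && Math.abs(gapAfter - EXPECTED_NS) <= TOLERANCE_NS) {
                kept.add(flagged.get(i));
            }
        }
        return kept;
    }
}
```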
We can then take the times that we know that a frame was drawn, mod out by the amount of time it takes to draw a frame, and see whether it's drawing a frame at, say, 1:35:49.0000, then 1:35:49.0167, then 1:35:49.0333, and so forth, or those plus two milliseconds, or whatever. Let the program run for a couple of minutes to generate thousands of data points and we can average them to nail it down.
But then something unexpected happened: the time shift drifted across the data. Early data points might be 8 ms past the mark, then they'd slowly drift down to 7 ms, then 6 ms, and so forth. Changing the assumed time per frame was able to correct the drift. And furthermore, it was able to pin down how long it actually takes to draw a frame. In my first data set, it came to 16661408 ns per frame. A perfect 60 Hz would be about 16666667 ns per frame.
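Concretely, here's a sketch of one way to run that computation (my own construction, not necessarily the exact code I used): assign each kept timestamp a refresh index by counting how many intervals fit in the gap since the previous one, then fit a least-squares line of timestamp against index. The slope of that line is the measured time per frame, and tweaking the assumed period until the drift disappears is equivalent to this fit.

```java
class PeriodFit {
    // Estimate the refresh period from the kept timestamps by fitting a
    // least-squares line t = t0 + k * period, where k counts refresh
    // intervals. Indices are unwrapped gap by gap, so a slightly wrong
    // initial guess can't accumulate into a whole-frame slip.
    static double fitPeriod(long[] times, double guessNs) {
        int n = times.length;
        double[] k = new double[n];
        for (int i = 1; i < n; i++) {
            // Nearest whole number of refreshes since the previous point.
            k[i] = k[i - 1] + Math.round((times[i] - times[i - 1]) / guessNs);
        }
        double kMean = 0, tMean = 0;
        for (int i = 0; i < n; i++) {
            kMean += k[i];
            tMean += times[i] - times[0];
        }
        kMean /= n;
        tMean /= n;
        double num = 0, den = 0;
        for (int i = 0; i < n; i++) {
            num += (k[i] - kMean) * (times[i] - times[0] - tMean);
            den += (k[i] - kMean) * (k[i] - kMean);
        }
        return num / den; // slope: measured ns per refresh
    }
}
```

Feed it a couple of minutes of timestamps and a starting guess of 16666667, and the slope it returns is the measured period.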
There's also the question of just how precise that 16661408 ns figure is. Java's System.nanoTime() can only report times with about 300-400 ns precision. (The precision actually varies by hardware, but that's what it is on my computer. I've tested it.) Averaging a large number of measurements should make it possible to get better precision, and I had data from thousands of frames. Rather than trying to compute how good the data ought to be theoretically, I decided to just take another independent data set, run the same computations, and see how close it was to the first number.
So I did. 16661408 ns. Again. Another data set? This one came to 16661406 ns. A fourth also gave me 16661406 ns. Those four numbers are all rounded to the nearest nanosecond, as trying to put a decimal point on it would be dubious. But that's pretty precise, and I've pinned down the average time per frame to within a few nanoseconds. (The time that a given frame takes to display probably varies by a lot more than a few nanoseconds, so I'm really only getting the average.)
But 16661408 ns definitely isn't the 16666667 ns of a perfect 60 frames per second. In fact, if I've got it to within a few nanoseconds like I think I do, being off from the theoretical expectation by several thousand nanoseconds is pretty conclusive evidence that the theoretical expectation is measurably wrong.
Now, it's only off by about 0.03%. But as keeping time goes, being off by 0.03% is horrible. If a clock is off by 0.03%, that's more than two hours per year. So I thought, maybe the problem is that Java's System.nanoTime() simply runs at the wrong speed. System.nanoTime() isn't meant to tell you the time of day. It's a relative time that is meant to tell you that 5.983 milliseconds passed between this time and that time, give or take a microsecond or so. I think it's based on how many CPU clock cycles have passed, or something to that effect, but I'm not sure about that.
What if I run the same experiment with System.currentTimeMillis()? That's Java's way of returning the current system time: it gives you the number of milliseconds since midnight UTC on January 1, 1970. Unlike System.nanoTime(), this one is meant to tell you the time of day.
The problem with System.currentTimeMillis() is that the time it returns is some integer number of milliseconds. At best, it's rounded to the nearest millisecond, and it could easily be rounded badly. It's not meant to distinguish between whether 5 ms or 6 ms passed between this event and that one. On older computers, it might only offer precision of 10 or 15 ms. Can that really measure differences of a few microseconds, even if we average a lot of data?
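You can check the granularity on your own machine directly: spin until the returned value changes and print the step size. A quick throwaway sketch (this measures the update step, not the clock's accuracy):

```java
public class MillisGranularity {
    public static void main(String[] args) {
        long prev = System.currentTimeMillis();
        for (int i = 0; i < 20; i++) {
            long now;
            // Busy-wait until the reported time actually changes.
            while ((now = System.currentTimeMillis()) == prev) { }
            System.out.println("tick: " + (now - prev) + " ms");
            prev = now;
        }
    }
}
```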
Well, I've worked out the methods for System.nanoTime(), so let's run the same computations (using System.nanoTime() to measure the length of a frame, and outputting System.currentTimeMillis() when we flag a long frame) and try it again. This time, I check the time that each frame is done, and if it's 15-18 ms after the previous one and 15-18 ms before the next, I keep it. If not, I discard it. This ends up discarding perhaps 1/4 of my data.
So I run the same computations and get 16661594 ns per frame. That's different from what I got with System.nanoTime(), of course. I'm skeptical that those last few digits are significant, but I'm not really sure how precise it is. So let's get another data set and try it again. 16661614 ns per frame. Again? 16661660 ns per frame. Then 16661554 ns per frame.
So it looks like using System.currentTimeMillis() to flag the long frames can really only pin down the time per frame to within about 100 ns accuracy. But that's still enough to say that it's definitely not 16666667 ns per frame. And it's enough to say that the result is probably different from that of System.nanoTime(). It's only off from System.nanoTime() by about 0.001% or so, which comes to several minutes per year. As keeping time goes, being off by several minutes per year isn't great, but it's not terrible, either. The wall clock in my bathroom is off by more than that.
I can test whether my calculations are correct more directly by using the test I described earlier. But instead of switching the color once per time that the monitor grabs a frame, let's try switching it twice. Let's say we start at red, and the monitor grabs a frame. We switch to blue, then switch back to red, and the monitor grabs a frame, so it stays red. If you render a frame exactly twice as fast as the monitor grabs one, then every frame the monitor grabs should be the same color.
So let's try making the program wait after it renders each frame, scaling the wait so that it renders a new frame exactly twice as fast as the monitor grabs one. With an initial, naive guess of 8333333 ns per frame, the screen appears solid red for a while, then flickers back and forth between red and blue rapidly, then settles on blue for a while, then flickers back and forth, then returns to red for a while. This pattern repeats indefinitely.
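A sketch of that pacing loop (again a reconstruction, assuming the same red/blue GLEventListener as before; the sleep-then-spin scheme is one way to hit a nanosecond-scale target, since Thread.sleep alone is only accurate to a millisecond or so):

```java
class PacedLoop {
    // Target time between rendered frames; this is the knob being tuned
    // (8333333 ns would be exactly half of a perfect 60 Hz interval).
    static final long HALF_PERIOD_NS = 8_333_333;

    // Render one solid-color frame per iteration, toggling red/blue,
    // at as close to HALF_PERIOD_NS per frame as we can manage.
    static void run(com.jogamp.newt.opengl.GLWindow window) throws InterruptedException {
        long next = System.nanoTime();
        while (true) {
            window.display(); // draws the next red or blue frame
            next += HALF_PERIOD_NS;
            // Sleep while there's plenty of time left, then busy-wait
            // for the final stretch to get sub-millisecond accuracy.
            while (next - System.nanoTime() > 2_000_000) {
                Thread.sleep(1);
            }
            while (System.nanoTime() < next) { /* spin */ }
        }
    }
}
```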
If we change it from 8333333 ns per frame to 8331000 ns or 8330000 ns per frame, we get the same behavior, but it takes a lot longer to switch from one color to the other. At 8330700 ns per frame, it stays solid blue for a long time. Bingo. The time between moments that the monitor grabs a new frame is right around double that: 2 × 8330700 ns = 16661400 ns, in close agreement with the 16661408 ns measured earlier.
Let's return to the title. How often does a 60 Hz monitor refresh? It could easily vary by the monitor, video card, motherboard, and who knows what else. But in my case, at least, it's definitely not 60 times per second. 60.02 times per second is much closer.