Distance from 0

Shadow again. Argh.

Anyway, I found the problem in the end: the further my camera moves from 0,0,0, the more the jiggling occurs. A picture/movie tells a thousand words, so here's my lighting test harness with a building placed at z = 20, 200, 2000 and 20000.

At 2000 it starts to have problems; by 20000 it's horrible.

How is this generally done if I want models miles from each other? I feel like I'm missing something fundamental here.

Here’s the example: video

It’s normal to lose precision as you move further away from the center. Floating point numbers have a limited number of precise digits; in the case of normal floats that’s roughly 7 digits. That means that 1 unit away from the center, positions are accurate to about 0.000001 units. At 10000 units away from the center, the accuracy goes down to about 0.01 units.
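You can see this grid effect directly. Here’s a small Python sketch (just illustrative; the `f32_ulp` helper is made up for this post) that prints the spacing between adjacent representable 32-bit float values at different distances from the origin:

```python
import math

def f32_ulp(x: float) -> float:
    """Spacing between adjacent 32-bit float values near x (for x > 0)."""
    _, e = math.frexp(x)      # x = m * 2**e with 0.5 <= m < 1
    return 2.0 ** (e - 24)    # a float32 has 24 significand bits

for x in (1.0, 2000.0, 10000.0, 20000.0):
    print(f"at {x:>7.0f} units the position grid step is about {f32_ulp(x):.3g} units")
```

At 20000 units the grid step is already around 0.002 units, and anything derived from such positions (like a shadow projection) amplifies that snapping into visible jiggling.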

One solution to the problem would be to use double precision numbers. That gives you roughly twice the digits, but it will eventually fail too, and it reduces performance.
Another solution could be to recenter the world somehow as the camera moves away from the center.
Maybe you can tweak your shadow math to be less sensitive to inaccuracies.


Yep, I understand that I lose accuracy the further away from 0 I get, but I thought it was normal to move the camera with the player. In that little test harness in the video I tried keeping the camera at 0,0,0, moving the player with the controls, and then offsetting the world/models by the player's position. That actually works great, with not really much of a performance hit; it's just combining in another translation matrix.

Is there some standard way this should be done in large worlds? In my actual game the terrain covers 50 km². I've read that in very large environments a player-centered world is sometimes the only way to go (such as in space games), so that might be my eventual solution.

There are some tricks I've seen suggested, like reversing the depth buffer for more accuracy, but that seems like it might just be moving the problem around.

I was curious how other people solved the problem.

I don’t think there’s one standard way to solve this. For different scenarios, different approaches will be easier to implement or more performant.

If a game needs to dynamically load the environment, it might do that by separating the world into individual chunks of content which get loaded on demand (a grid, for example). If you need such a chunk system anyway, a natural solution could be to always keep the chunk with the player at the center of your world.
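A rough sketch of the chunk idea in Python (the chunk size and helper name are made up for illustration): each object stores a chunk index plus a small local offset, so no coordinate that reaches the renderer ever gets large.

```python
CHUNK = 1000.0  # chunk edge length in world units (an assumed value)

def to_chunk_coords(world_x: float):
    """Split an absolute coordinate into (chunk index, small local offset)."""
    index = int(world_x // CHUNK)
    local = world_x - index * CHUNK   # always in [0, CHUNK), so it stays precise
    return index, local

# an object 1.25 million units out still has a small local coordinate
print(to_chunk_coords(1_250_000.25))   # (1250, 0.25)
```

When rendering, you'd work relative to the player's chunk: the chunk-index difference is a small integer, and multiplying it by `CHUNK` and adding the local offsets keeps everything well within float precision.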

If you don’t need chunks a solution like yours might be better, at least if the total object count is low.

If double precision is enough (and it probably will be), and your engine/framework allows it, using doubles for positions might be the easiest solution. You probably want to keep your double precision math on the CPU though, as shader performance suffers quite a bit from it, and the GPU/shader language might not even support it. So whenever you pass positional data to a shader, you calculate a float position relative to some center, probably the camera or the player.
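As a sketch of that last step (Python here, where plain floats are doubles; `to_f32` is just a stand-in that emulates truncating to shader floats, and the positions are made up):

```python
import struct

def to_f32(x: float) -> float:
    """Round-trip through a 32-bit float to emulate GPU precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

# positions kept in double precision on the CPU
camera = 123456.0
tree   = 123456.789

# naive: send the absolute position to the shader as a float32
naive_err = abs(to_f32(tree) - tree)

# camera-relative: subtract in double precision first, then truncate
rel_err = abs(to_f32(tree - camera) - (tree - camera))

print(naive_err, rel_err)
```

The subtraction happens while you still have the full double digits, so the value handed to the shader is small and float32 has precision to spare for it.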

If you can’t use double, or don’t want it for whatever reason, you can get pretty much the same effect by using two float positions for every object. One would be the coarse position, and the second would be the fine position. Added together they make the final position.
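A minimal sketch of that split, assuming a Python double stands in for the exact position (`to_f32` again just emulates storing a 32-bit float):

```python
import struct

def to_f32(x: float) -> float:
    """Round-trip through a 32-bit float (emulates storing one float)."""
    return struct.unpack('f', struct.pack('f', x))[0]

def split(value: float):
    """Store one position as two float32s: coarse + fine."""
    coarse = to_f32(value)           # absorbs the large magnitude
    fine = to_f32(value - coarse)    # the leftover rounding error, always small
    return coarse, fine

coarse, fine = split(123456.789)
# recombined in higher precision, the pair recovers far more digits
# than a single float32 could hold
print(abs((coarse + fine) - 123456.789))
```

When rendering relative to the camera, you'd subtract the camera's coarse part from the object's coarse part first; the large magnitudes cancel before any float32 rounding can do damage.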

Most game worlds are not large enough to run into such problems, so most games, and many engines, don’t support large-scale worlds at all.
As to what’s the most dominant solution out there, I couldn’t say. My guess would be chunk-based.