I am building a 2D, top-down game, and I am wondering if there is a better way to handle rendering.
Initially, I was drawing all assets at the desired zoom level directly to the screen. At varying zoom levels, seams appeared between tiles, and the zoom itself was a little choppy.
So I started drawing to a native resolution buffer, which I then drew to the screen at the desired size using PointClamp to preserve pixelation. This solved the problem of seams and made the zoom smooth, but it resulted in choppy movement since the smallest unit of change became 3-4 pixels depending on zoom level.
My solution to that was to scale up the assets themselves, again using PointClamp so the upscale stays pixelated. I am currently doing it at runtime for ease of development, though it would be easy enough to adjust the source files. Anyway, now movement is smooth, zoom is smooth, and there are no artifacts.
Is there a better way to handle this? Increasing the size of my tile sheets is going to require me to split the larger ones up, and I would like to avoid that if I can. How is this sort of thing usually handled when using SpriteBatch? Or do people tend to use textured triangles instead of SpriteBatch?
That shouldn’t be the case if it’s done right. What do you do exactly when the zoom changes?
I’d definitely go for direct scaled-size rendering if I were you.
The question is broad, but I will try to answer it.
When the zoom changes, the rendering system queries a smaller rectangle of things to render. When I had problems with seams, it would then draw each of those things directly to the screen with a destination rectangle reflecting the zoom level. Now it draws everything in native size to a RenderTarget. Once it has drawn the whole scene to the RenderTarget, the RenderTarget is drawn to the screen scaled based on the zoom level.
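For reference, the two-pass setup described above looks roughly like this in MonoGame/XNA. This is only a sketch; `sceneTarget`, `visibleSprites`, and `zoom` are placeholder names I made up, not anything from the actual project:

```csharp
// Pass 1: draw the visible scene at native resolution into the render target.
GraphicsDevice.SetRenderTarget(sceneTarget);
GraphicsDevice.Clear(Color.Transparent);
spriteBatch.Begin(samplerState: SamplerState.PointClamp);
foreach (var sprite in visibleSprites)
    spriteBatch.Draw(sprite.Texture, sprite.NativePosition, Color.White);
spriteBatch.End();

// Pass 2: draw the render target to the back buffer, scaled by the zoom level.
// PointClamp keeps the upscale crisply pixelated instead of blurred.
GraphicsDevice.SetRenderTarget(null);
var dest = new Rectangle(0, 0,
    (int)(sceneTarget.Width * zoom), (int)(sceneTarget.Height * zoom));
spriteBatch.Begin(samplerState: SamplerState.PointClamp);
spriteBatch.Draw(sceneTarget, dest, Color.White);
spriteBatch.End();
```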
Note that I am not having any problems right now; I did get them cleared up. I’m just wondering if there is a better way to do this. What do you mean by direct scaled-size rendering? Do you mean how I used to do it, or are you referring to some other technique?
Now that I think about it, the original problem might have been solved using ceil and/or floor.
Yes, I meant the technique you used before.
I meant to ask what exactly you were doing during the zoom in your first approach that could have made it choppy.
Well, after you asked about it, I realized it had to have been conversions between float and int. In fact, thinking about it further, I am sure it is because I was combining several float calculations that were each cast to int, when I should have performed the entire calculation in float and cast only at the end, so that the dropped fractions could not add up to more than a pixel.
Edit: Well, multiple casts were not the problem, but I am sure it is still the float-to-int casting causing the issue. I’ll find a solution now that I know what the problem is.
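For anyone hitting the same thing, here is a hypothetical illustration of the float-to-int pitfall being discussed (made-up numbers, not the actual camera code): a plain cast truncates toward zero, and truncating intermediate terms independently can land a pixel off compared to doing the whole calculation in float and rounding once at the end.

```csharp
// Hypothetical world-to-screen calculation, not the poster's actual code.
float worldX = 10.4f, cameraX = 3.7f, zoom = 2.0f;

// Bad: each cast truncates independently, so the truncation errors can
// combine and the result jumps unevenly as the camera moves.
int badScreenX = ((int)worldX - (int)cameraX) * (int)zoom;   // (10 - 3) * 2 = 14

// Better: keep everything in float and round exactly once at the end,
// which bounds the error to half a pixel.
int goodScreenX = (int)Math.Round((worldX - cameraX) * zoom); // round(13.4) = 13
```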