Inconsistent rasterization when rotating

When you draw a sprite with rotation, its 4 vertices are rotated. Depending on position and rotation, these vertices may end up at non-integer coordinates. During rasterization a pixel cannot be partially colored (unless anti-aliasing is used; I am using point sampling): a pixel is either filled or it is not. So depending on the rounding, the rotated sprite is not just displaced - different pixels are filled. Is there a way to ensure consistent rasterization?

For example, I have this arrow sprite:

When I draw it at (x.0f, x.0f) rotated 1 degree (a similar problem occurs for other rotations, e.g. 45 degrees), it appears like so (upscaled image):

But when I draw it at (x.1f, x.1f) rotated 1 degree it appears like so (upscaled image):

Code to reproduce (cross-platform OpenGL MonoGame 3.8.0):

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;

namespace RotationTest
{
    public class Game1 : Game
    {
        private GraphicsDeviceManager graphics;
        private SpriteBatch spriteBatch;

        private Texture2D texture;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            Content.RootDirectory = "Content";
            IsMouseVisible = true;
        }

        protected override void Initialize()
        {
            base.Initialize();
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            texture = Content.Load<Texture2D>("Arrow");
        }

        protected override void Update(GameTime gameTime)
        {
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed || Keyboard.GetState().IsKeyDown(Keys.Escape))
                Exit();

            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);

            spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, SamplerState.PointClamp, DepthStencilState.Default, null, null, null);
            spriteBatch.Draw(texture, new Vector2(100, 100), texture.Bounds, Color.White, MathHelper.ToRadians(45f),
                Vector2.Zero, Vector2.One, SpriteEffects.None, 0f);
            spriteBatch.End();

            base.Draw(gameTime);
        }
    }
}


Usually, in a game, there is also a WVP matrix, so simply rounding the position will not be enough. When the camera moves, the inconsistent rasterization looks weird, as the shape of the sprite appears to flicker. I could, in a custom vertex shader, apply the world and view matrices, truncate the vertices, and then apply the projection matrix. This does not seem to fix the issue for me and introduces jittering on camera movement. I could try to reproduce it in a much simpler environment like this. But first, is there really not a better solution?

Round camera position and sprite position.


Anti-aliasing only works on lines (edges of geometry), not on textures, as a texture has no concept of a “line”.

The relevant thing is texture sampling - the GPU will still render a perfect grid of pixels, and all the shader does is try to find the texel that corresponds to the current screen pixel the pixel shader is processing. Texture sampling will (should) take care of less edgy lines. So point sampling is not what you want; you want at least linear sampling.

The other thing is vertex alignment - this is where rounding comes into play. As SpriteBatch works at a 1:1 ratio with the resolution, it mostly works with integers - this should be alright (for transformations!), as any transformation is applied via a matrix in the shader (which uses floats anyway) and sampling works as expected (at least afaik - not sure if or how SpriteBatch pre-transforms vertices).

So primarily I would suggest trying linear sampling. If this interferes with anything else in your intended style, you can also try to do the sampling on the texture itself - this means not having 100% sharp lines in the texture, but a bit of easing on the edges - basically baking the sampling into the texture, which should make the rotation artifacts less prominent. Another way to do it is to just provide bigger textures, as mipmapping will basically do the same.
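For reference, trying this in the repro code above is a one-argument change in the Begin call (a sketch; SamplerState.LinearClamp is MonoGame’s built-in bilinear state):

```csharp
// Same Begin call as in the repro, with linear instead of point sampling.
// LinearClamp blends neighbouring texels, so rotated edges are interpolated
// rather than snapped to whole raster cells.
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
    SamplerState.LinearClamp, DepthStencilState.Default, null, null, null);
```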

I should have mentioned explicitly that this is for a pixel-art game, so sharp pixels are a must. My experience with linear sampling is that it turns the scene into a blurry mess, and this easing sounds like it would do a similar thing. Linear sampling also appears to produce a different result depending on the fractional part of the position, although maybe not as prominently. I don’t think linear sampling is a good fit for pixel art.

Rounding x and y separately and then calculating x - y is not the same as rounding x - y. This will result in jittering. To see why, consider a camera following a moving target with some form of interpolation. At first the camera catches up with the target, but at some point the distance between camera and target stabilizes to a constant - in this case 5.108. In the first frame, the x-position of the camera is 10.62885 and the x-position of the target is 15.7368. In the second frame they are 11.4969 and 16.6049, and in the third 12.3649 and 17.47296.
Rounding separately, the render x-position is 16 - 11 = 5, then 17 - 11 = 6, then 17 - 12 = 5.
So rounding camera and sprite positions separately makes the render x-position go back and forth by 1 pixel (jitter) depending on the fractional parts, even though the unrounded render x-position is constant.
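The arithmetic above can be sketched in a few standalone C# lines (numbers copied from the example; Math.Round stands in for whatever rounding the renderer does):

```csharp
using System;

class JitterDemo
{
    static void Main()
    {
        // Camera and target x-positions from the three frames above;
        // the camera trails the target by a constant ~5.108 pixels.
        double[] camera = { 10.62885, 11.4969, 12.3649 };
        double[] target = { 15.7368, 16.6049, 17.47296 };

        for (int i = 0; i < camera.Length; i++)
        {
            // Rounding each value separately gives 5, 6, 5 -> 1-pixel jitter.
            double separately = Math.Round(target[i]) - Math.Round(camera[i]);
            // Rounding the difference gives a stable 5, 5, 5.
            double together = Math.Round(target[i] - camera[i]);
            Console.WriteLine($"{separately} {together}");
        }
    }
}
```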


Similar problem with floor. With the camera and sprite x-positions below in frames 1, 2, 3, the render position will be 7, 8, 7:
11.0939 18.20197
11.9619 19.07007
12.83 19.9381

Pixel art or not doesn’t matter - pixels don’t rotate. Try rotating an image in Photoshop, rasterize (edges now blurred) and rotate back; you will find it’s not the same image any more, because rotating pixels results in different pixels. Of course you can hide that fact by just using a higher resolution and simulating bigger pixels with squares - then a pixel can rotate, as it’s now a square :slight_smile:

What Photoshop does is basically what linear sampling does; otherwise rotating an image in Photoshop would look edgy/stepped as well. The higher the screen resolution and the bigger the texture, the less dominant the blurring of sampling will be (sampling a 16x16 texture will be a blurry mess, yes; sampling a 512x512 will not).

Problems with rasterization have been the bane of my existence for a long time, even outside any game-related work. The problem is that you just don’t get a lot of control over how those pixels render. You might even have it perfect in the top-left part of your screen and think it’s ok, but then when you render that same rotated sprite in the bottom-right part of your screen you will get a different result. It’s very frustrating.

Unfortunately, outside of looking into writing your own rasterization process, your options are limited…

  1. Accept that you’ll have a pixel in the wrong spot from time to time.
  2. Pre-bake your rotations so the sprites look exactly how you want.
  3. Add more data*

*For the last one here, there’s a technique I’ve played around with where I take the low-res texture and scale it up to a higher-resolution version. I just multiply the data, so it’s a high-res pixelated image, if that makes sense. Then, when I render it, I use anisotropic filtering (instead of point), so that when I rotate it, it maintains that pixelated look but can add some filtering around the edges in what looks like sub-pixel detail, but isn’t.

You can see an example here…

If you don’t need to absolutely maintain a low resolution and are ok with faking it at a higher resolution, this is an easy way to go. It’s pretty easy to programmatically scale your textures up and, depending on your needs, you can even have a “high res” layer that only does this for the textures that need it.
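The programmatic scale-up can be sketched framework-free as a nearest-neighbour multiply of the pixel buffer (a sketch; in MonoGame you would fill `src` via Texture2D.GetData<Color>() and upload `dst` to a new Texture2D with SetData()):

```csharp
using System;

class PixelUpscale
{
    // Duplicates every source pixel into a factor x factor block, so no
    // data is lost - an 8x8 sprite scaled by 8 becomes a 64x64 texture
    // that still looks identical when drawn at the same on-screen size.
    static uint[] Upscale(uint[] src, int width, int height, int factor)
    {
        int newWidth = width * factor;
        var dst = new uint[newWidth * height * factor];
        for (int y = 0; y < height * factor; y++)
            for (int x = 0; x < newWidth; x++)
                dst[y * newWidth + x] = src[(y / factor) * width + (x / factor)];
        return dst;
    }

    static void Main()
    {
        // A 2x2 "image" scaled by 2: each pixel becomes a 2x2 block.
        uint[] src = { 1, 2, 3, 4 };
        Console.WriteLine(string.Join(",", Upscale(src, 2, 2, 2)));
        // Prints: 1,1,2,2,1,1,2,2,3,3,4,4,3,3,4,4
    }
}
```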

There might be other, more general solutions here, but I haven’t found them and I just haven’t really wanted to write my own rasterizer. Doesn’t mean they don’t exist though! :slight_smile:

I think we’re misunderstanding each other. It is a separate issue that the rotation gives edgy/stepped pixels. The problem that I’m talking about is that the same rotation gives a different result depending on the fraction of the position you are drawing at.

The first paragraph sounds like my problem, but afterward, it seems like you primarily talk about how to rotate pixel art nicely when drawing to a higher-resolution screen (and maybe hiding the inconsistent rasterization?).
I use a low resolution because, for things like lighting shaders, it is more efficient, and I think it is better for the aesthetic to draw at low-res (at higher resolutions the transition between light and dark becomes smoother - an example of how the visuals would change depending on the resolution the user chooses). To combat this you could use things like light masks, but for rotation, if you rely on a higher resolution, the rotation might be really bad again at native resolution or lower. I’m focusing on making the rotation look good at native resolution and then scaling up, so the game looks the same at all resolutions. I might try to use Fast RotSprite, which can give results similar to hand-drawing the sprite at different rotations.

But I digress from my current problem, which is not bad rasterization, but inconsistent rasterization.
I think the simplest solution to my problem is to apply the WorldView matrix to the sprite positions and then round them. This way the vertices will always have the same fractional parts no matter where the camera is. Then inside shaders I only apply the projection matrix. That should solve the problem completely. It is just inconvenient and unorthodox.
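Reduced to a pure camera translation, the idea can be sketched standalone (an illustration - the real version would transform by the full world*view matrix, e.g. via Vector2.Transform, before rounding):

```csharp
using System;

class ViewSpaceSnap
{
    // Transform to view space first (here just a camera translation),
    // THEN round - the vertices always land on whole raster cells,
    // regardless of the camera's fractional position.
    static (double X, double Y) Snap((double X, double Y) sprite,
                                     (double X, double Y) camera)
        => (Math.Round(sprite.X - camera.X), Math.Round(sprite.Y - camera.Y));

    static void Main()
    {
        var sprite = (X: 100.37, Y: 250.81);
        // Two cameras exactly one pixel apart: the snapped positions differ
        // by exactly (1, 1) - a clean displacement, never a shape change.
        Console.WriteLine(Snap(sprite, (X: 10.2, Y: 20.5)));
        Console.WriteLine(Snap(sprite, (X: 11.2, Y: 21.5)));
        // Prints: (90, 230) then (89, 229)
    }
}
```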

As soon as you rotate a quad, its vertices will no longer fit the screen raster, no matter what you do. It’s not inconsistent, it’s just how the math works. And as you use point sampling these slight differences just get very visible, as they result in big jumps inside the texture.

You say the transition between pixels gets washed out when using higher screen resolutions and sampling. Yes, that is because your texture is too small and you are just upscaling it to screen resolution. Provide a higher-res texture and the pixels will stay crisp during sampling - and for rotation to work consistently you just need sampling - you don’t need point sampling (= no sampling at all) for a pixel look.

In the old days, when there was no sampling, devs actually provided sprite sheets which contained a single sprite for every desired degree of rotation - which was basically the only way to make sprite rotation happen. Nowadays we have texture sampling; it just needs a more detailed texture in the first place.

Inconsistent rasterization is included in what I’m talking about. As noted above, the problem is just the mapping of the pixels to the screen. An easy way to consider this is an 8x8 pixel image. If you scale it up to 16x16 pixels, it’s easy to see that each pixel perfectly becomes a 2x2 block and none of your data is lost. However, what happens when you scale it to 12x12 instead? Some data is going to be lost, and now the algorithm has to make its best guess as to what colour the pixels should be.

It’s much like this with rotation, except more complicated. That’s why, at some angles, you’re seeing a pixel jump out where it doesn’t look like it should belong. What’s more frustrating is that this can happen at a fixed angle of rotation but in a different place on the screen, even when rounding to integer values. I had an old test for this that I made 10 years ago where I just used polygons to build thick line segments in an L shape on the screen. Sometimes there would be a random pixel out of place at the bend and sometimes there wasn’t; it all depended on where the shape was and how it was scaled. If that’s not inconsistent rasterization, I don’t know what is :slight_smile:

Anyway, the idea behind what I’m saying is that you just give the rasterizer more data to work with. Instead of the 8x8 pixel sprite I mentioned above, what if you scaled that up to 64x64. None of the data is lost in the scale, it’s just an 8 times larger image on both dimensions, but when you rotate it, there are now 8 pixels of data representing one image pixel and so the rasterizer has more to work with. You’re still going to get inconsistent results if you do a side by side image comparison of the rotated sprite at various angles and/or positions, but since the artifacts are smaller (relative to the visual pixel size) they are much less noticeable.

For your lighting approach, you would just have to consider each pixel as whatever your scaled-up size is; otherwise you should be able to do the same thing. You shouldn’t be prohibited from taking this approach; however, if you don’t want to, that’s completely fine. If you think you can solve it with shaders, awesome! Using pre-baked rotations is also a good approach (i.e., what you’re calling Fast RotSprite, which I assume is a library) if that works for you.

Good luck! :slight_smile:

If I draw a sprite rotated at some angle at x-position 100.1 or 250.1 the texture will be sampled in the same way because the vertices will end up in the same relative positions compared to the screen raster, just displaced by 150 pixels. If I draw a sprite rotated at that same angle at x-position 100.2 for example, the texture will be sampled in a different way that is displaced on the texture and thus the sprite on the screen may look different than when drawn at 100.1.

Upscaling and sampling do not change that. I tried upscaling the texture and, as you say, e.g. linear and anisotropic sampling are crisper, but the problem persists; it is not even less dramatic (although upscaling to a higher screen resolution as well might make it less dramatic). Therefore, no matter what sampling I use, I need to round the position from which the vertices are calculated to make sure the positioning on the screen raster is always the same.

Sadly, at least as far as the tests I’ve done over the years, this is not a guaranteed outcome. Intuitively it should be, but it’s not.

You still have to round your position to an integer result just prior to rendering, but I’m surprised you don’t see any impact here. Did you watch the video I posted above? Those sprites are rotating rather cleanly, and any rotation artifacts are much less discernible, while the overall pixelated look stays intact.

You might need to post some better images (your ones above are crazy tiny) or maybe even some videos to demonstrate what’s happening.

Rounding the render position has the same fixing effect no matter what, independent of sampling and upscaling.

Unfortunately that’s not true, due to floating-point precision. When you use “100.1”, in reality the 32-bit floating point can’t represent it, so it chooses the nearest approximation.

Check the following site and enter both 100.1 and 250.1 :

Watch “value actually stored in float” and you’ll see not only that both 100.1 and 250.1 are not stored as those values, but that the “error due to conversion” is different. That conversion difference is what makes your images look different depending on the position you render them at.
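This is easy to check in plain C# - casting to double exposes the exact value the float stores (the directions of the errors are the point here, not the exact digits):

```csharp
using System;

class FloatStorage
{
    static void Main()
    {
        // Casting to double reveals the exact value the 32-bit float holds.
        Console.WriteLine((double)100.1f);   // a little BELOW 100.1
        Console.WriteLine((double)250.1f);   // a little ABOVE 250.1

        // So the fractional parts - which determine where the rotated
        // vertices fall on the raster - are not even equal between the two.
        Console.WriteLine((double)100.1f - 100.0 == (double)250.1f - 250.0);
        // Prints: False
    }
}
```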

Floats are evil.

p.s. check 100.0625 and 250.0625 to have some extra fun :wink: I suppose that if you approximated your values to the nearest value which can be represented by a float, you might get the same pixels when rotating, but I honestly have never tried it.

Yeah, I know. What I meant to say is that the difference in sampling is so small there is no change in the result (for point sampling at least). Of course, if we were talking about something like 100.1 vs 250000.1, the inaccuracy difference might become too large.

But when I draw it at (x.1f, x.1f) rotated 1 degree it appears like so (upscaled image)

what happens if you render it at (x.125f, x.125f)?

The same result as x.1f, x.1f (for 100.1 vs. 100.125 at least). Although I get your point: even if I round the values, at certain rotations it may only take a very small nudge to give a different result.
