I have a 1024x1024 texture of tiles that I draw from.
When the camera is at very specific positions I can see a 1 pixel gap appearing between tiles like this:
Upon further inspection, I found out that in the shader (yes, the default one too) the texture coordinates always have the same integer part (for example, y values of 360.444 and 360.026 both have integer part 360), but the fractional part is determined by the final screen position of the drawn object, meaning the value can fluctuate anywhere between 360 and 360.999. (Strictly speaking, the texture coordinates are both between 0 and 1, where 1 represents the width/height of the texture; I’m referring to the value multiplied by the texture width/height.)
Why? I have no idea. In theory this shouldn’t matter because I’m using PointClamp sampling, but with a big texture, when the y texture coordinate lands exactly on 360, a floating point error can sneak in during the tex2D call and sample the pixel above. If the pixel above the tile in the texture is transparent, you will see a “gap” like this; if you draw a black line in the texture just above the tile, you will see that line instead.
I “fixed” this by adding a 1000th of a pixel to the texture coordinates (sketched at the end of this post), but if the coordinate were ever to land at, say, 360.999 (something I have never actually encountered), the problem would reappear the other way around. So how can I fix this properly? It would be best if I could somehow make it aim for the middle of each pixel instead of fluctuating to the absolute bounds of each pixel.
I’m using PointClamp sampling; please do not confuse this with the common problem of using LinearWrap:

    GraphicsDevice.SamplerStates[0] = SamplerState.PointClamp;
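A minimal sketch of that workaround, in the pixel shader (pixelSize is a float2 I set from game code to 1 / textureWidth and 1 / textureHeight):

    // Nudge the coordinate a fraction of a texel off the texel boundary so
    // point sampling cannot fall into the neighboring (transparent) texel.
    input.TextureCoordinates += pixelSize * 0.001;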
Are you using SpriteBatch to draw the sprites, or your own quad drawing?
Clamp just disallows any sampling outside the texture.
Point, as its name suggests, samples the nearest texel according to the math and rounding rules it operates under.
I should just say that with Point it’s still a matter of where you define the start and end positions of the sprites in the sprite sheet, or more specifically the width of each sprite. This shouldn’t be happening from the sampling side unless the sprite rectangles or vertex UVs aren’t set properly; I’m fairly certain the floats passed to the shader are accurate to 4 decimal places.
When the camera is at very specific positions I can see a 1 pixel gap appearing between tiles
It is possible to incur floating point positional plotting errors depending on how you define the start and end drawing positions. Each tile’s position should start exactly where the previously drawn tile ends. This sort of artifact can result in gaps 1 or 2 pixels wide, non-uniform across multiple sprites drawn in sequence.
I tried drawing both with SpriteBatch (with the default SpriteEffect) and with my own quad (with my own shader); the results are the same.
When I say floating point error, I mean that when sampling at the absolute edges of pixels in a large texture, (1 / 1024) * 360 might turn into (1 / 1024) * 359.99999, for example.
That’s not the problem; the problem is that the texture coordinates aren’t centered on the given pixels when they are passed to the pixel shader. There is no way (1 / 1024) * 360.5 would drift all the way to (1 / 1024) * 360 through floating point error.
A solution could be to center the texture coordinates on the pixel myself: add a float2 called pixelSize that contains the width and height of one pixel of that texture (1 / textureWidth, 1 / textureHeight), and then add these lines in the pixel shader:

    input.TextureCoordinates.x += (pixelSize.x / 2) - fmod(input.TextureCoordinates.x, pixelSize.x);
    input.TextureCoordinates.y += (pixelSize.y / 2) - fmod(input.TextureCoordinates.y, pixelSize.y);
It’s just a bit bothersome to do, and I wouldn’t be able to do it in the default shaders for SpriteBatch. So I was wondering if there was some kind of setting to make it pick the center of each pixel for texture coordinates.
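For reference, a self-contained sketch of what a copied sprite shader with that snapping added could look like (the parameter name, struct layout, and ps_3_0 target are illustrative, not MonoGame’s exact SpriteEffect source):

    Texture2D SpriteTexture;
    sampler2D SpriteTextureSampler = sampler_state
    {
        Texture = <SpriteTexture>;
    };

    float2 PixelSize; // float2(1.0 / textureWidth, 1.0 / textureHeight), set from game code

    struct VertexShaderOutput
    {
        float4 Position : SV_POSITION;
        float4 Color : COLOR0;
        float2 TextureCoordinates : TEXCOORD0;
    };

    float4 MainPS(VertexShaderOutput input) : COLOR
    {
        // Snap the interpolated coordinate to the center of the texel it falls
        // in, so point sampling can no longer tip over into a neighboring texel.
        float2 uv = input.TextureCoordinates;
        uv.x += (PixelSize.x / 2) - fmod(uv.x, PixelSize.x);
        uv.y += (PixelSize.y / 2) - fmod(uv.y, PixelSize.y);
        return tex2D(SpriteTextureSampler, uv) * input.Color;
    }

    technique SpriteDrawing
    {
        pass P0
        {
            PixelShader = compile ps_3_0 MainPS();
        }
    };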
No, I don’t think so. I’m not particularly familiar with it, but it is handled in SpriteBatch and I use the same solution when creating my own projection matrix: https://github.com/MonoGame/MonoGame/issues/4939
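For context, what that issue (and SpriteBatch internally) deals with is the classic half-pixel offset on the projection matrix. A sketch of the equivalent when building your own matrix (viewport is assumed to be the current GraphicsDevice viewport; the -0.5 offset only applies under DirectX 9-style rasterization rules, so adjust for your target):

    // Orthographic projection for 2D drawing with a top-left origin.
    Matrix projection = Matrix.CreateOrthographicOffCenter(
        0, viewport.Width, viewport.Height, 0, 0, 1);

    // Half-pixel offset so pixel centers line up with texel centers
    // (needed under DX9-era rasterization rules; skip it on DX10+/OpenGL).
    projection = Matrix.CreateTranslation(-0.5f, -0.5f, 0f) * projection;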
My problem is that I want the texture coordinates passed to the pixel shader to always hit each pixel in the center, instead of fluctuating between decimal values 0 and 0.999 within the pixel. A floating point error where (1 / 1024) * 360.5 becomes (1 / 1024) * 360.4999 doesn’t matter, but when (1 / 1024) * 360 becomes (1 / 1024) * 359.999, that is an entirely different pixel, resulting in the image I posted.
Hi there, I think I’ve been having a similar issue. I’m rendering the game to a render target that is half the size of the game window, and I have a camera that lerps smoothly towards the player’s position. On occasion, when the player is moving at a certain angle or speed, lines appear between some of the background tiles.
Here’s a video showing what happens; it doesn’t happen often, so it took me a while to capture it. In this case the lines seem to be made from the bottom (or top) edges of the tile sprite. Other times I’ve had the white background appear through the lines.
Yes, that’s the same issue. Of course the camera doesn’t have to be moving: I found an exact position that caused the issue and just set the camera position to that, so the lines were there constantly and I could experiment more easily.
To determine whether it’s coming from the top or bottom, try drawing lines of different colors above/below tiles in the tile sheet.
I posted a solution earlier, which is to round down and add half a pixel in the shader. The problem is that this is tedious to do for every shader and requires access to the shader code, so I don’t know how to solve it with the default MonoGame effects. Of course you could just make a copy of the SpriteEffect shader with the fix added and use that instead.
This is a shader problem, not a MonoGame problem. You don’t want to sample at the boundaries of a pixel when using Point sampling (more on that here: https://docs.microsoft.com/en-us/windows/desktop/direct3d9/nearest-point-sampling). So we’re looking for a setting within the shader that changes how the texture coordinates are interpolated from the vertices to the pixel shader, such that they always hit the center of a pixel. If we can find that, we can find the equivalent setting in GraphicsDevice and thus do it outside the shader.
The reason this is connected to the position of the vertices is that the texture coordinates are interpolated between the vertices, so if the top and bottom vertices end up at integer y-coordinates, the interpolated texture coordinates all land at the absolute start of a pixel, which in combination with point sampling creates the problem.
What we do is draw an extra pixel around the edge of each texture (which is annoyingly impossible to do by hand when your texture is a whole sheet); a rough sketch of the idea follows below.
Then the float imprecisions are solved (the gaps otherwise still happen, even with 8x8 tiles and PointClamp on).
I made a Windows app that runs the above script for you, which I’ve posted on my website (http://hernblog.com/TileSheetProcessor.zip). You must install ImageMagick to use it (I used version 6.8.7-5-Q16-x86).
Drag files onto it to convert them.
I would be willing to publish the source if anyone is interested.
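For anyone who doesn’t want to install ImageMagick, here is a rough sketch of the same edge-padding idea using MonoGame’s Texture2D API (the PadTileSheet name, uniform tileSize, and grid layout are assumptions; a real tool would also need to shift your source rectangles to account for the new 1-pixel gutter):

    // Builds a new sheet where every tile gets a 1-pixel gutter that
    // replicates its edge pixels, so point sampling at tile borders
    // can never bleed into a neighboring tile.
    Texture2D PadTileSheet(GraphicsDevice device, Texture2D sheet, int tileSize)
    {
        int tilesX = sheet.Width / tileSize;
        int tilesY = sheet.Height / tileSize;
        int padded = tileSize + 2;

        var src = new Color[sheet.Width * sheet.Height];
        sheet.GetData(src);

        var result = new Texture2D(device, tilesX * padded, tilesY * padded);
        var dst = new Color[result.Width * result.Height];

        for (int ty = 0; ty < tilesY; ty++)
        for (int tx = 0; tx < tilesX; tx++)
        for (int y = -1; y <= tileSize; y++)
        for (int x = -1; x <= tileSize; x++)
        {
            // Clamp to this tile's own edge so the gutter repeats edge pixels.
            int sx = tx * tileSize + Math.Max(0, Math.Min(x, tileSize - 1));
            int sy = ty * tileSize + Math.Max(0, Math.Min(y, tileSize - 1));
            int dx = tx * padded + x + 1;
            int dy = ty * padded + y + 1;
            dst[dy * result.Width + dx] = src[sy * sheet.Width + sx];
        }

        result.SetData(dst);
        return result;
    }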
Have you tried using a pixel-perfect camera?
Just clamp the camera position to integers.
E.g. your camera position being Vector2(5.1f, 6.7f) might cause this.
Try rounding the position, e.g. to Vector2(5, 7), as in the snippet below.
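Something like this, applied after any smoothing/lerp and just before building the view matrix (the camera field names are placeholders):

    // Snap the final camera position to whole pixels each frame.
    Vector2 snapped = new Vector2(
        (float)Math.Round(camera.Position.X),
        (float)Math.Round(camera.Position.Y));
    Matrix view = Matrix.CreateTranslation(-snapped.X, -snapped.Y, 0f);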
The only thing that will solve this problem for good is adding a small amount to the texture coordinates in a shader, like this:

    input.TextureCoordinates += pixelSize * 0.001;

It would be ideal to round down and add half a pixel instead, but the texture coordinate can sit so close to the bounds of the pixel that there is no way to do that without floating point precision errors rounding down to the wrong pixel (believe me, I tried every intrinsic function there is, as well as integer casting).
Luckily the only issue I have encountered was when the texture coordinates were at the top or left of the pixel, for example (360, 175), not when they were at the bottom or right, for example (360.9999, 175.9999); it never got that close to the bottom or right.
pixelSize is 1 divided by the texture size and must be set manually from outside the shader.
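Setting it from game code is one line per texture (the effect variable and parameter name here are whatever you used in your own shader):

    // pixelSize = the size of one texel in 0-1 texture coordinates.
    tileEffect.Parameters["pixelSize"].SetValue(
        new Vector2(1f / texture.Width, 1f / texture.Height));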
I know exactly what’s causing the issue and I have found a solution; I was just curious about a solution that also works on a theoretical level, so you could have a texture of any size at any vertex position, without too much hassle or performance cost and without restricting yourself.
With my solution I think it is still possible that it will sample the pixel below or to the right (or both), either with very large textures (like 5096x5096 pixels) or with a tiny area of a large texture upscaled a lot. This is not really relevant for me, but it’s still annoying to think about.
And yes, making sure that you only draw sprites at integer positions with integer scaling, and that the camera sits at an integer position with integer zoom and integer screen scaling (which means the screen resolution must be evenly divisible by the native resolution), would solve the problem for good on a theoretical level as well. But that is a large number of restrictions to impose on yourself for a small issue that has easier practical alternatives.
If you are writing your own shader you could just trunc the value of the texture coordinates.
Also, knock one pixel off the texture width/height when multiplying; that should give you an exact 1-to-1 mapping.
Clamp won’t ever allow reading the last texel, as that would be out of bounds, but the rest of them should be dead on.
To put it another way: pixels have real indices from 0 to 9 for such a texture, yet it is said to have a width of 10, so 1/10 = 0.1, and that multiplied by a texture location ranging from 0 to 9 gives at maximum 0.9.
Conversely, if you subtract 1 from the width to align with the maximum index and take the reciprocal, you get n = 1/9, and multiplying by that coefficient gives n * 9 = 1.0.
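Spelled out with actual numbers (the 10-texel width is hypothetical, just to keep the arithmetic small):

    // Hypothetical texture 10 texels wide, texel indices 0..9.
    double standard = 1.0 / 10;          // u = i / width
    double uMaxStandard = 9 * standard;  // 0.9, the maximum index never maps to 1.0
    double adjusted = 1.0 / 9;           // u = i / (width - 1), as suggested above
    double uMaxAdjusted = 9 * adjusted;  // 1.0, the maximum index maps exactly to 1.0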
What you are doing here:

    input.TextureCoordinates += pixelSize * 0.001;

is basically faking that process, and it isn’t really going to give proportional results.
For instance, for a 1024-wide texture the coefficient works out to .9990234 + .9990234 * .001, but for a 100-pixel texture you get .99 + .99 * .001.
It’s done with the width/height as the standard reciprocal because the assumption is that you are typically mapping polygonal models with shared vertices, where the vertices share texture coordinates; it makes sense in that context, but you have to fix it up a little for quads.
Typically the best way to solve this, though, is to build the quads to match exactly what the shader does, so that the quad’s own coordinates are the function that handles the translation.
I can’t trunc the texture coordinates, because of floating point errors.
Knocking off a pixel from width and height makes sense, I’ll try that out.
Don’t know what you mean by “build the quads to match exactly what the shader does so that the set coordinates of the quad is the function that handles the translation.”
Edit: Multiplying texture coordinates by (1 / 1023) instead of (1 / 1024) did not work.