I am drawing a tilemap using a texture atlas, where I select the tile I want to draw using the SpriteBatch source rectangle argument. Pretty standard stuff. The issue comes when I use a non-whole-number value to scale the transform matrix: the texture atlas source rects start to bleed, like so:
I am using point clamp in the sprite batch and in my custom shader as well:
Ah OK, I can see better what you’re doing now. It looks like your source rect isn’t quite aligned to pixels here, somehow? It looks like it’s maybe half a pixel out horizontally in some places, which could cause the sampler to land between pixels and unpredictably pick the wrong colour as you scale, as a result of small floating-point errors. I don’t think this is a shader problem; maybe post the relevant part of the C# and we might see what is happening.
So I’ve solved this by changing my zoom increment from 0.25 to 0.2. To be honest I’m really unsure why that fixes it, but oh well, at least the problem is gone.
The best strategy I’ve found for dealing with these sorts of issues is to render the tiles to a RenderTarget2D while keeping the scaling at 1:1, and then draw the render target texture with whatever scaling you want.
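A minimal sketch of that approach in MonoGame (names like tileTarget, mapWidthInPixels and DrawTiles are placeholders for your own; this assumes the usual SpriteBatch/RenderTarget2D setup inside a Game class):

```csharp
// Tiles are drawn unscaled into an offscreen target, then the whole target
// is scaled in a single draw, so the point sampler never has to pick
// between neighbouring atlas texels at a fractional scale.
RenderTarget2D tileTarget = new RenderTarget2D(GraphicsDevice, mapWidthInPixels, mapHeightInPixels);

// 1) Draw the tilemap at 1:1 into the render target.
GraphicsDevice.SetRenderTarget(tileTarget);
GraphicsDevice.Clear(Color.Transparent);
spriteBatch.Begin(samplerState: SamplerState.PointClamp);
DrawTiles(spriteBatch);   // your existing per-tile atlas draws, identity transform
spriteBatch.End();

// 2) Draw the render target to the back buffer with the fractional zoom.
GraphicsDevice.SetRenderTarget(null);
spriteBatch.Begin(samplerState: SamplerState.PointClamp,
                  transformMatrix: Matrix.CreateScale(zoom, zoom, 1f));
spriteBatch.Draw(tileTarget, Vector2.Zero, Color.White);
spriteBatch.End();
```

The trade-off is an extra fullscreen-ish texture and a second pass, but the atlas lookups all happen at integer coordinates, which sidesteps the bleeding entirely.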
So, in case anyone is puzzled about this: keep in mind that floating-point values are subject to imprecision given their binary representation. One correction, though: it’s actually 0.25 (a power of two, 2^-2) that has an exact binary value, whilst 0.2 does not. So what likely happened here is the reverse of what you’d expect: with exact scale factors like 1.25, texel edges can land exactly on the sampler’s sample points, and tiny rounding differences then unpredictably flip the point sampler onto the neighbouring texel, whereas with the inexact 0.2 increment the samples fall slightly inside the intended texel instead. Either way, please notice that it’s not an error within MonoGame, C# or .NET; it’s just how floating point is implemented following the IEEE 754 standard.
This is more of a heads-up in case someone tries to fix their problem by changing constants whilst keeping their fingers crossed, hoping it works. It may look like “luck” or “bad luck”, but it’s really this floating-point behaviour, which sometimes becomes more evident depending on the figures used.
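For anyone who wants to see this concretely, here is a quick plain-C# check (nothing MonoGame-specific) showing which of the two constants is actually exact in binary:

```csharp
using System;

class FloatCheck
{
    static void Main()
    {
        // 0.25 is a power of two (2^-2), so float and double both store it
        // exactly and agree with each other:
        Console.WriteLine(0.25f == 0.25d);   // True

        // 0.2 has no finite binary representation; float and double each
        // round it differently, so the comparison fails:
        Console.WriteLine(0.2f == 0.2d);     // False

        // The classic symptom of the same underlying issue:
        Console.WriteLine(0.1d + 0.2d == 0.3d);   // False
    }
}
```

So “0.25 works differently from 0.2” is real, but it’s 0.25 that is the exact one; the inexact 0.2 just happens to nudge the sample points off the problematic texel boundaries.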