Tilemap-Shader not working

Hi ppl!
I hope you can help me. I tried to use a shader to render my tilemap, but I have a strange issue with it.
Here is the source: https://www.dropbox.com/s/d2lxro98vecazt7/TileEngineShaderTest.zip?dl=0

I tried to implement it based on the shader from here: http://www.connorhollis.com/fast-tilemap-shader/

If I try to render my map, the texture gets distorted and sometimes it picks the wrong tile.

I don't get what's wrong here…
Shader:

float4x4 World;
float4x4 View;
float4x4 Projection;

Texture2D TilesetTexture;
sampler2D tilesetTextureSampler = sampler_state
{
    Texture = <TilesetTexture>;
    Filter = POINT;
    AddressU = CLAMP;
    AddressV = CLAMP;
};

Texture2D MapTexture;
sampler2D mapTextureSampler = sampler_state
{
    Texture = <MapTexture>;
    Filter = POINT;
    AddressU = CLAMP;
    AddressV = CLAMP;
};

struct VertexShaderInput
{
    float2 TexCoord : TEXCOORD0;
    float4 Position : SV_Position0;
    float4 Color : COLOR0;
};

struct VertexShaderOutput
{
    float2 TexCoord : TEXCOORD0;
    float4 Position : SV_Position0;
    float4 Color : COLOR0;
};

struct PixelShaderOutput
{
    float4 Color : COLOR0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);

    output.Position = mul(viewPosition, Projection);
    output.TexCoord = input.TexCoord;
    output.Color = input.Color;

    return output;
}

// Tilemap Dimension (in tiles)
int MapWidthInTiles;
int MapHeightInTiles;

// Max Tileset-Tiles per row
int TilesetSizeInTiles = 8;

float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    // Look up this pixel's tile in the map texture; R/G encode the tile index.
    float4 pixel = tex2D(mapTextureSampler, input.TexCoord);
    int index = 0; // ((pixel.g * 255) * TilesetSizeInTiles) + (pixel.r * 255);

    // Top-left corner of that tile inside the tileset, in UV space.
    int xpos = index % TilesetSizeInTiles;
    int ypos = index / TilesetSizeInTiles;
    float2 uv = float2(xpos, ypos) / TilesetSizeInTiles;

    // Offset of this pixel within the current tile.
    float xoffset = frac(input.TexCoord.x * MapWidthInTiles) / TilesetSizeInTiles;
    float yoffset = frac(input.TexCoord.y * MapHeightInTiles) / TilesetSizeInTiles;
    uv += float2(xoffset, yoffset);

    return tex2D(tilesetTextureSampler, uv);
}

technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_4_0_level_9_1 VertexShaderFunction();
        PixelShader = compile ps_4_0_level_9_1 PixelShaderFunction();
    }
}
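
To spell out what the lookup is supposed to do (numbers just for illustration): with TilesetSizeInTiles = 8 and, say, index = 10, xpos = 10 % 8 = 2 and ypos = 10 / 8 = 1, so the base uv is (2/8, 1/8) = (0.25, 0.125); the frac(...) offsets each add less than 1/8, so the sample should stay inside that single tile of the tileset.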

Are you certain uv is in [0,1] when calling return tex2D(tilesetTextureSampler, uv)?

I hope so :slight_smile: I don't know how to debug that shader :frowning: I'm coding blind… If you have a debugging tool, please let me know :smiley:

Try with
return float4(uv.x, uv.y, 0, 0)
and take a screenshot if possible.
Index is always 0, so…
It looks like an index needs to be stored in a texture.
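
For instance, a throwaway debug version of the pixel shader (just a sketch reusing the structs and uniforms from the shader above; the name PixelShaderFunctionDebug is only for illustration) could look like this:

float4 PixelShaderFunctionDebug(VertexShaderOutput input) : COLOR0
{
    // Same UV math as PixelShaderFunction, but output the UV as a color
    // (red = u, green = v) instead of sampling the tileset.
    float4 pixel = tex2D(mapTextureSampler, input.TexCoord);
    int index = 0; // same hard-coded test index as above
    int xpos = index % TilesetSizeInTiles;
    int ypos = index / TilesetSizeInTiles;
    float2 uv = float2(xpos, ypos) / TilesetSizeInTiles;

    float xoffset = frac(input.TexCoord.x * MapWidthInTiles) / TilesetSizeInTiles;
    float yoffset = frac(input.TexCoord.y * MapHeightInTiles) / TilesetSizeInTiles;
    uv += float2(xoffset, yoffset);

    return float4(uv.x, uv.y, 0, 0);
}

With the index hard-coded to 0, uv stays below 1/8 on both axes, so you should see a very dark red/green gradient repeating once per tile; if the bands don't line up with the tiles, the UV math is already off before the tileset is ever sampled.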

Hey ppl, I’ve got it… It’s distorted because of the different sizes. If the map is 10x10 or 25x25 it’s all right, but if it is 40x25 it’s broken… does it have to be square?

For testing purposes, I have set the index to 0.

I am ashamed, but the distortion was because I had set up the indices in the wrong order… I have updated the project and now it works (supposedly). But now I think I have run into some strange MonoGame bug: on many PCs I have tested the tile engine on it works well, but on some it renders the wrong texture. Please check it on your own. If you click on the map (please not too close to the edge, to prevent crashes ;)) you should see it rendering a grass-like texture. If not, then you see what I mean.
Here is the release version for testing: https://www.dropbox.com/s/rnz2ikul01erwyx/TileEngineShaderTestRelease.zip?dl=0

Could someone verify that bug?
On my PC @ work: https://www.dropbox.com/s/qsz6mdgph3dh6df/right.png?dl=0
DxDiag: https://www.dropbox.com/s/54cbpv7c30v5lqf/DxDiagRight.txt?dl=0
On the other PC @ work: https://www.dropbox.com/s/7uunwvyeaxann3a/wrong.png?dl=0
DxDiag: https://www.dropbox.com/s/5x96zus47iylpau/DxDiagWrong.txt?dl=0

My PC @ work: NVIDIA GeForce GTX 750 Ti
The other one has Intel HD Graphics
My laptop (wrong): Intel HD Graphics + one of the ATI cards (must be reviewed later)

EDIT: Added DxDiag reports

I’ve got it… Why does the Intel GPU give me a different result here than the NVIDIA/ATI cards?
I set the “tilemap color” to new Color(4,4,0).

int colX = (pixel.r * 255.0); <- Intel 3, others 4
int colY = ((pixel.g * 255.0) * TilesetSizeInTiles); <- Intel 3, others 4

If I write
int colX = (int)ceil((pixel.r * 255.0));
int colY = (int)ceil(((pixel.g * 255.0) * TilesetSizeInTiles));

everything works on both cards… Rounding errors?
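
That does look like a precision difference — pixel.r * 255.0 can come back as 3.999… on one GPU and as exactly 4.0 on another, and truncating that to int then picks a different tile. As a sketch of an alternative to the ceil workaround (my own variant, not code from the project above; the function name PixelShaderFunctionRounded is just for illustration), rounding to the nearest integer with floor(x + 0.5) works regardless of which side of the integer the value lands on:

float4 PixelShaderFunctionRounded(VertexShaderOutput input) : COLOR0
{
    float4 pixel = tex2D(mapTextureSampler, input.TexCoord);

    // Round to the nearest integer so 3.999 (Intel) and 4.0 (NVIDIA/ATI)
    // both decode to 4.
    int colX = (int)floor(pixel.r * 255.0 + 0.5);
    int colY = (int)floor(pixel.g * 255.0 + 0.5);
    int index = (colY * TilesetSizeInTiles) + colX;

    int xpos = index % TilesetSizeInTiles;
    int ypos = index / TilesetSizeInTiles;
    float2 uv = float2(xpos, ypos) / TilesetSizeInTiles;

    float xoffset = frac(input.TexCoord.x * MapWidthInTiles) / TilesetSizeInTiles;
    float yoffset = frac(input.TexCoord.y * MapHeightInTiles) / TilesetSizeInTiles;
    uv += float2(xoffset, yoffset);

    return tex2D(tilesetTextureSampler, uv);
}

HLSL's round() intrinsic should give the same result, if you prefer it over floor(x + 0.5).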