Super simple shader problems - cut off texture on x condition

I normally do my best to solve a problem on my own by googling around, but being new to MonoGame and HLSL (I only just got my head around GLSL shaders…) I am finding it pretty daunting trying to solve what was a fairly simple problem in GLSL.

Basically I managed to get a simple raycast pseudo-3D engine up and running yesterday in MonoGame. Now I am looking to add sprite support, and the specific issue I am facing is correctly rendering sprites that are partly obscured by a wall. The way I did it previously was a simple GLSL shader, whose pseudocode read something like this:

uniform int startX;
uniform int endX;

void main()
{
	int pixelX = int(gl_FragCoord.x);
	if (pixelX < startX || pixelX > endX)
	{
		pixel *= 0.0; // pixel being the sampled sprite colour
	}

	gl_FragColor = pixel;
}

I realise I can probably do this in a more optimal way (I remember being taught at university that conditional statements on the GPU are really heavy, so it would be great if I could do it without them). The intent of the code is to hide the sides of sprites that are obscured behind walls. If a sprite is completely obscured I can simply not draw it, so this is for sprites that are in view or only partly obscured.

This is my current attempt:-

#if OPENGL
#define SV_POSITION POSITION
#define VS_SHADERMODEL vs_3_0
#define PS_SHADERMODEL ps_3_0
#else
#define VS_SHADERMODEL vs_4_0_level_9_1
#define PS_SHADERMODEL ps_4_0_level_9_1
#endif

uniform int startX;
uniform int endX;

matrix WorldViewProjection;

struct VertexShaderInput
{
	float4 Position : POSITION0;
	float4 Color : COLOR0;
};

struct VertexShaderOutput
{
	float4 Position : SV_POSITION;
	float4 Color : COLOR0;
	float2 TexCoord : TEXCOORD0;
};

VertexShaderOutput MainVS(in VertexShaderInput input)
{
	VertexShaderOutput output = (VertexShaderOutput)0;

	output.Position = mul(input.Position, WorldViewProjection);
	output.Color = input.Color;
	output.TexCoord = output.Position.xy;

	return output;
}

float4 main(VertexShaderOutput input) : COLOR0
{
	
	int pixelX = input.TexCoord.x;
	if (pixelX < startX)
	{
		input.Color.rgba *= 0.0f;
	}

	return input.Color;

}

technique BasicColorDrawing
{
	pass P0
	{
		VertexShader = compile vs_4_0_level_9_1 MainVS();
		PixelShader = compile ps_4_0_level_9_1 main();
	}
};

It compiles, yay! But there is no visible effect on the sprite. I believe I am probably getting the screen X coordinate of the pixel in the wrong manner, but all my googling efforts have proven fruitless, and it seems there are many different ways this can be achieved in different versions.

Would anyone more experienced be able to shed some light on this?

float4 main(VertexShaderOutput input) : COLOR0
{
	int pixelX = input.Position.x;

	// clip() discards the pixel when its argument is negative,
	// i.e. everything left of startX gets thrown away
	clip(pixelX - startX);

	return input.Color;
}

Ah thanks, simple and exactly what I need. However I ran into this earlier as well: now the shader won’t compile because “ps_4_0_level_9_1” doesn’t allow reading from the position semantic.

I understand this is referring to DirectX 9? It’s a hobby project, with some intent perhaps to eventually build a game with it, so I’m not all that bothered about supporting DX9 right now, but I’d rather be able to. And cheers, I now know about clip() :)

OK, sorry, I got it now. In case it is useful for anyone else: you must compile with “ps_4_0”, and set the line below in the constructor.

graphics.GraphicsProfile = GraphicsProfile.HiDef;
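
In context that means something like this in the Game class constructor (just a sketch of the usual MonoGame template; HiDef is what enables the vs_4_0/ps_4_0 profiles, while the default Reach profile only allows the *_level_9_x ones):

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class Game1 : Game
{
	private readonly GraphicsDeviceManager graphics;

	public Game1()
	{
		graphics = new GraphicsDeviceManager(this);

		// Required for the vs_4_0 / ps_4_0 shader profiles used by the effect.
		graphics.GraphicsProfile = GraphicsProfile.HiDef;

		Content.RootDirectory = "Content";
	}
}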

However I’m still not getting any effect out of it, but I will take a look into this.

To get proper screen coordinates you also need to divide by the w component:
output.TexCoord = output.Position.xy / output.Position.w;
Screen coordinates are -1,-1 for the bottom-left corner of the screen, and +1,+1 for the top-right corner. If you want texture coordinates instead, where the top-left corner is 0,0 and the bottom-right corner is 1,1, you need an extra transformation like this:

output.TexCoord = (output.TexCoord + 1) / 2;
output.TexCoord.y = 1 - output.TexCoord.y;
In your case it looks like you want to use pixel coordinates, because you are using integers for startX and endX. To get pixel coordinates you need to multiply the texture coordinate by the screen resolution, or the viewport resolution if you don’t render fullscreen.
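
Written out application-side, the mapping between those spaces for an x coordinate looks like this (just a C# sketch, the helper names are made up):

static class ScreenSpace
{
	// pixel space:          0 .. viewport width, left to right
	// texture space:        0 .. 1,              left to right
	// screen (clip) space: -1 .. +1,             left to right
	public static float PixelToTextureX(float pixelX, float viewportWidth)
		=> pixelX / viewportWidth;

	public static float PixelToScreenX(float pixelX, float viewportWidth)
		=> PixelToTextureX(pixelX, viewportWidth) * 2f - 1f;
}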

I would change startX and endX to screen- or texture space and make them floats. Let the application calculate the screen/texture coordinates for those two values. That way you don’t need the extra shader code, and also shaders don’t like integer math all that much.

I didn’t quite understand the last bit (how I would get these values in the app in order to pass them to the shader), but I followed the rest and implemented it in the shader, converting everything to floats. I was indeed looking for the pixel x coordinate (I think pixel coordinate is the correct term), that is, the x coordinate of the pixel on screen, so that I can compare the x position of the ray against it.

I’m still not getting any visible change though. The simple test I set up for the shader looks like this:-
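
Roughly the following (the effect and texture names are just placeholders for whatever the project actually loads):

// spriteEffect = Content.Load<Effect>(...) and spriteTexture = Content.Load<Texture2D>(...) in LoadContent
spriteEffect.Parameters["startX"].SetValue(34.0f);

spriteBatch.Begin(effect: spriteEffect);
spriteBatch.Draw(spriteTexture, Vector2.Zero, Color.White);
spriteBatch.End();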

And I set the startX to be 34.0f. The texture is positioned at 0,0. What I am expecting to see is half of the texture not drawn. I must be misunderstanding something…

Here is how I would do it. In the application you do this:

float startX = 34;
float startXScreen = startX / backbufferResolution.X * 2 - 1;

You pass startXScreen to your shader instead of startX. Your vertex shader then looks something like this:

struct VertexShaderOutput
{
	float4 Position : SV_POSITION;
	float4 Color : COLOR0;
	float2 ScreenPosition : TEXCOORD0;
};

VertexShaderOutput MainVS(in VertexShaderInput input)
{
	VertexShaderOutput output;

	output.Position = mul(input.Position, WorldViewProjection);
	output.Color = input.Color;
	output.ScreenPosition = output.Position.xy / output.Position.w;

	return output;
}

and for a quick test just return black from the pixel shader where the pixel would get clipped:

float4 main(VertexShaderOutput input) : COLOR0
{
	if (input.ScreenPosition.x < startX)
		return 0;

	return input.Color;
}
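
On the application side you then just compute that screen-space value each frame and hand it to the effect before drawing, along these lines (a sketch: it assumes startX has been changed to a float in the .fx file, and that spriteEffect/spriteTexture are whatever you load in LoadContent):

// in Draw():
float startX = 34f; // clip edge in pixels
float startXScreen = startX / GraphicsDevice.Viewport.Width * 2f - 1f;

spriteEffect.Parameters["startX"].SetValue(startXScreen);

spriteBatch.Begin(effect: spriteEffect);
spriteBatch.Draw(spriteTexture, Vector2.Zero, Color.White);
spriteBatch.End();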