I normally do my best to solve a problem on my own by googling around, but being new to MonoGame and HLSL (I only just got my head around GLSL shaders…) I am finding it pretty daunting trying to solve what was a fairly simple problem in GLSL.
Basically I managed to get a simple raycast pseudo-3D engine up and running yesterday in MonoGame. Now I am looking to add sprite support, and the specific issue I am facing is correctly rendering sprites that are partly obscured by a wall. The way I did it previously was a simple GLSL shader, whose pseudo code read something like this:
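The original snippet didn't survive here, but going by the description (and the later mention of integer startX/endX), the GLSL version amounted to roughly this sketch, with all uniform and variable names assumed:

```glsl
// Fragment shader sketch (names assumed): startX/endX are the visible
// horizontal span of the sprite in screen pixels, computed per sprite
// by the raycaster and passed in as uniforms.
uniform int startX;
uniform int endX;
uniform sampler2D spriteTexture;
varying vec2 texCoord;

void main() {
    int screenX = int(gl_FragCoord.x);
    // Hide any column of the sprite that lies outside the visible span
    if (screenX < startX || screenX > endX)
        discard;
    gl_FragColor = texture2D(spriteTexture, texCoord);
}
```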
I realise I can probably do this in a more optimal way (I remember being taught at university that conditional statements on the GPU are really heavy, so it would be great if I could do it without them). The intent of the code is to hide the sides of sprites that are obscured behind walls. If a sprite is completely obscured I can simply not draw it, so this is for sprites that are in view or partly obscured.
It compiles, yay! But there is no visible effect on the sprite. I believe I am probably getting the screenX coordinate of the pixel in the wrong manner, but all my googling efforts have proven fruitless, and it seems there are many different ways this can be achieved in different versions.
Would anyone more experienced be able to shed some light on this?
Ah thanks, simple and exactly what I need. However, I ran into this earlier as well: now the shader won't compile, due to “ps_4_0_level_9_1” not allowing reading from position semantics.
I understand this is referring to DirectX 9? It's a hobby project, with some intent perhaps to eventually build a game with it, so I'm not all that bothered about supporting DX9 right now, but I'd rather I was able to. And cheers, I now know about clip().
To get proper screen coordinates you also need to divide by the w component:

```hlsl
output.TexCoord = output.Position.xy / output.Position.w;
```
Screen coordinates are -1,-1 for the bottom-left corner of the screen, and +1,+1 for the top-right corner. If you want texture coordinates instead, where the top-left corner is 0,0 and the bottom-right corner is 1,1, you need an extra transformation like this:

```hlsl
output.TexCoord = (output.TexCoord + 1) / 2;
output.TexCoord.y = 1 - output.TexCoord.y;
```
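Since ps_4_0_level_9_1 won't let the pixel shader read the position semantic directly (the compile error from earlier), these values have to travel through an extra interpolator written by the vertex shader. A sketch of that vertex-shader side, with struct and field names assumed:

```hlsl
struct VertexShaderOutput
{
    float4 Position    : SV_Position;
    float2 TexCoord    : TEXCOORD0;
    float2 ScreenCoord : TEXCOORD1; // extra interpolator the pixel shader CAN read
};

VertexShaderOutput MainVS(VertexShaderInput input)
{
    VertexShaderOutput output;
    output.Position = mul(input.Position, WorldViewProjection);
    output.TexCoord = input.TexCoord;

    // Perspective divide: clip space -> NDC (-1..+1),
    // then remap to texture space (0..1, y flipped so 0,0 is top-left)
    output.ScreenCoord = output.Position.xy / output.Position.w;
    output.ScreenCoord = (output.ScreenCoord + 1) / 2;
    output.ScreenCoord.y = 1 - output.ScreenCoord.y;
    return output;
}
```

For screen-aligned sprite quads w is effectively constant across the quad, so doing the divide in the vertex shader and letting the values interpolate is fine here.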
In your case it looks like you want to use pixel coordinates, because you are using integers for startX and endX. To get pixel coordinates you need to multiply the texture coordinate by the screen resolution, or the viewport resolution if you don’t render fullscreen.
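As a quick sanity check of the arithmetic described above, here is the whole chain (clip space → NDC → texture space → pixel coordinates) in plain Python:

```python
def clip_to_pixel(x, y, w, screen_w, screen_h):
    """Map a clip-space position to pixel coordinates."""
    # Perspective divide: clip space -> NDC (-1..+1)
    ndc_x, ndc_y = x / w, y / w
    # NDC -> texture space (0..1), flipping y so (0,0) is top-left
    u = (ndc_x + 1) / 2
    v = 1 - (ndc_y + 1) / 2
    # Texture space -> pixel coordinates
    return u * screen_w, v * screen_h

# NDC (-1,-1) is the bottom-left corner -> pixel (0, screen_h)
print(clip_to_pixel(-1, -1, 1, 1280, 720))  # (0.0, 720.0)
# NDC (+1,+1) is the top-right corner -> pixel (screen_w, 0)
print(clip_to_pixel(1, 1, 1, 1280, 720))    # (1280.0, 0.0)
```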
I would change startX and endX to screen or texture space and make them floats. Let the application calculate the screen/texture coordinates for those two values. That way you don't need the extra shader code, and shaders also don't like integer math all that much.
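On the application side that calculation is just a division by the screen (or viewport) width before setting the effect parameters. A MonoGame sketch, with the parameter and variable names assumed:

```csharp
// App side (MonoGame): convert the sprite's visible pixel span into
// 0..1 texture space before handing it to the shader.
float startU = startXPixels / (float)GraphicsDevice.Viewport.Width;
float endU   = endXPixels   / (float)GraphicsDevice.Viewport.Width;

spriteEffect.Parameters["StartX"].SetValue(startU);
spriteEffect.Parameters["EndX"].SetValue(endU);
```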
I didn't quite understand the last bit (how I would get those values in the app in order to pass them to the shader), but I followed the rest and implemented it in the shader, converting everything to floats. I was indeed looking for the pixel x coordinate (I think pixel coordinate is the correct term), that is, the x coordinate of the pixel on screen, so that I can compare the x position of the ray to it.
I'm still not getting any visible change, though. The simple test I set up for the shader looks like this: