I’ve been having some shader trouble with my 2D platformer. I should say that I’m working in MonoGame on the Windows OpenGL platform. I’m trying to get multiple render target (MRT) rendering working for a custom depth buffer implementation, but before getting that far I wanted to verify that I can render to multiple targets at once at all. I thought I’d ask the MonoGame community for help with my implementation and for suggestions on the future depth buffer handling.
Here is my shader code for rendering to two render targets. The pixel shader writes the sampled texture (tinted by the vertex color) to the first target, and writes solid white to every covered pixel of the second target.
#if OPENGL
#define SV_POSITION POSITION
#define VS_SHADERMODEL vs_3_0
#define PS_SHADERMODEL ps_3_0
#else
// Note: feature level 9_1 only guarantees a single simultaneous render target,
// so the DirectX path would likely need a higher profile for MRT.
#define VS_SHADERMODEL vs_4_0_level_9_1
#define PS_SHADERMODEL ps_4_0_level_9_1
#endif
sampler TextureSampler : register(s0);
// One pixel-shader output per bound render target:
// COLOR0 -> first target, COLOR1 -> second target.
struct PS_OUTPUT {
    float4 color : COLOR0;
    float4 depth : COLOR1;
};
PS_OUTPUT MainPS(float4 position : SV_Position, float4 color1 : COLOR0, float2 texCoord : TEXCOORD0)
{
    PS_OUTPUT output;
    // First target: the sprite texture tinted by the vertex color.
    output.color = tex2D(TextureSampler, texCoord) * color1;
    // Second target: solid white for now, as a placeholder for depth.
    output.depth = float4(1, 1, 1, 1);
    return output;
}
technique SpriteDrawing
{
pass P0
{
PixelShader = compile PS_SHADERMODEL MainPS();
}
};
Here is my initial implementation of the draw code that uses this shader:
//init function body
// Both targets share the camera's draw-space size; depth will be packed
// into a plain Color surface rather than a true depth buffer.
RenderTarget2D colorTexture = new RenderTarget2D(camera.graphicsDevice,
    camera.drawSpace.Width,
    camera.drawSpace.Height,
    false,
    SurfaceFormat.Color,
    DepthFormat.Depth24,
    1,
    RenderTargetUsage.PreserveContents);
RenderTarget2D depthTexture = new RenderTarget2D(camera.graphicsDevice,
    camera.drawSpace.Width,
    camera.drawSpace.Height,
    false,
    SurfaceFormat.Color,
    DepthFormat.Depth24,
    1,
    RenderTargetUsage.PreserveContents);
Effect effect = EffectManager.Instance.effects[EffectEnum.DepthRead];
//Draw Function Body
// Bind both targets: slot 0 = colorTexture, slot 1 = depthTexture.
graphics.SetRenderTargets(colorTexture, depthTexture);
graphics.Clear(Color.Transparent);
effect = EffectManager.Instance.effects[EffectEnum.DepthWrite];
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, SamplerState.AnisotropicClamp, DepthStencilState.None, null, effect, null);
spriteBatch.Draw(camera.debugTextures["debug_square"], new Vector2(200, 200), null, null, Vector2.Zero, 0f, new Vector2(1, 1), Color.White);
spriteBatch.End();
// Unbind the targets and composite the color target to the back buffer.
graphics.SetRenderTarget(null);
graphics.Clear(Color.Transparent);
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, SamplerState.AnisotropicClamp, DepthStencilState.None, null, null, null);
spriteBatch.Draw(colorTexture, Vector2.Zero, Color.White);
spriteBatch.End();
And here is the image being used as the ‘debug_square’ texture; it is just a red square.
Here is the result of the call.
You can see that the effect is mapping the second output register (‘depth’) onto the first render target (‘colorTexture’) from the ‘graphics.SetRenderTargets(colorTexture, depthTexture);’ call. My question is: what would I need to change in my shader to map the outputs to the correct render targets? I would expect the draw call to produce a red square, not the solid white that is written for the second render target.
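For reference, here is one variant I was planning to try: replacing the COLOR0/COLOR1 output semantics with SV_Target0/SV_Target1, in case the semantic translation on the OpenGL path maps them differently. I honestly don’t know whether this is the right fix, so treat it as a guess:

struct PS_OUTPUT {
    float4 color : SV_Target0; // should map to the first bound target (colorTexture)
    float4 depth : SV_Target1; // should map to the second bound target (depthTexture)
};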
If you can see where I am going with this implementation, my next question is: is this a good approach to a custom depth buffer? I’m trying to implement depth textures in my existing game so I can render sprites in arbitrary order and still have the scene composite properly. Please post any insight you have; anything is appreciated.
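To make it clearer where I’m headed once the mapping issue is sorted out, here is a rough sketch of how I imagine the depth write eventually working, reusing the TextureSampler and PS_OUTPUT declarations from the shader above. SpriteDepth is a placeholder parameter I would set from C# per batch with effect.Parameters["SpriteDepth"].SetValue(...); none of this is tested yet:

// Hypothetical per-sprite depth in 0..1, set from C# before drawing.
float SpriteDepth;

PS_OUTPUT MainPS(float4 position : SV_Position, float4 color1 : COLOR0, float2 texCoord : TEXCOORD0)
{
    PS_OUTPUT output;
    float4 texel = tex2D(TextureSampler, texCoord) * color1;

    // Discard fully transparent pixels so they don't stamp depth into the buffer.
    clip(texel.a - 0.001);

    output.color = texel;
    // Pack the sprite's depth into the second target instead of constant white.
    output.depth = float4(SpriteDepth, SpriteDepth, SpriteDepth, 1);
    return output;
}

A later composite pass would then sample depthTexture and reject pixels that are behind what has already been written, which is what should let me draw sprites in arbitrary order.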