Mixing two Shadow Maps together with an Effect.fx technique

I’m trying to mix two Shadow Maps together for my 3D world but don’t know how to do it without a 3D model.

My draw code is in a function returning the merged shadow map…

private Texture2D mergeShadowMaps(String technique, Texture2D shadowMap1, Texture2D shadowMap2)
{
    device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);

    this.effect.CurrentTechnique = this.effect.Techniques[technique];

    this.effect.Parameters["merge1"].SetValue(shadowMap1);
    this.effect.Parameters["merge2"].SetValue(shadowMap2);

    device.SetRenderTarget(renderTarget);

    this.effect.CurrentTechnique.Passes[0].Apply();

    // WHAT TO DO NOW???

    device.SetRenderTarget(null);

    return (Texture2D)renderTarget;
}

But I’m used to drawing a model with the effect; here I only want to use these two shadow maps to draw and return a merged shadow map. The code I have isn’t enough to ‘trigger’ the shader effect.

How can this be done without a model? Should I use SpriteBatch in some way?
I’m stuck…
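For example, would something like this even work (untested; I don’t know if SpriteBatch plays nicely with a custom vertex shader)?

// Untested idea: let SpriteBatch supply the fullscreen geometry and run
// the merge technique as a custom effect; merge1/merge2 would still be
// set as effect parameters as above.
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
    SamplerState.PointClamp, DepthStencilState.None,
    RasterizerState.CullNone, this.effect);
spriteBatch.Draw(shadowMap1, device.Viewport.Bounds, Color.White);
spriteBatch.End();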

Regards, Morgan

EDIT EDIT EDIT EDIT UPDATE UPDATE UPDATE…

Ok, now I’ve read your comments and tried to implement what you said, but I can’t make it all the way.
This is my HLSL shader code:

Texture merge1;

sampler merge1Sampler = sampler_state
{
    texture = <merge1>;

    magfilter = LINEAR;
    minfilter = LINEAR;
    mipfilter = LINEAR;
    AddressU = CLAMP;
    AddressV = CLAMP;
};

Texture merge2;

sampler merge2Sampler = sampler_state
{
    texture = <merge2>;

    magfilter = LINEAR;
    minfilter = LINEAR;
    mipfilter = LINEAR;
    AddressU = CLAMP;
    AddressV = CLAMP;
};

struct VS_Input
{
    float4 Position : POSITION;
    float2 uv       : TEXCOORD0;
};

struct VS_Output
{
    float4 Position : POSITION;
    float2 uv       : TEXCOORD0;
};

VS_Output imageProcessingVS(VS_Input Input)
{
    VS_Output Output;

    // Pass the quad straight through in NDC and derive the UVs from position.
    Output.Position = float4(Input.Position.xy, 0, 1);

    Output.uv = (Input.Position.xy + 1) / 2;
    Output.uv.y = 1 - Output.uv.y;

    return Output;
}

float4 shadowMapMergerPS(VS_Output Input) : COLOR0
{
    float4 depth1 = tex2D(merge1Sampler, Input.uv);
    float4 depth2 = tex2D(merge2Sampler, Input.uv);

    // Keep whichever map is nearer (smaller depth in the red channel).
    if (depth1.r < depth2.r)
    {
        return depth1;
    }
    else
    {
        return depth2;
    }
}

technique ShadowMapMerge
{
    pass P0
    {
        VertexShader = compile vs_2_0 imageProcessingVS();
        PixelShader = compile ps_2_0 shadowMapMergerPS();
    }
}

And here is the function that draws my 3D plane exported from Blender:

private Texture2D mergeShadowMaps(GraphicsDevice gD, String technique, Texture2D shadowMap1, Texture2D shadowMap2)
{
    device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);

    device.SetRenderTarget(renderTarget);

    foreach (ModelMesh mesh in this.plateForShadowMap.Meshes)
    {
        foreach (ModelMeshPart meshPart in mesh.MeshParts)
        {
            gD.SetVertexBuffer(meshPart.VertexBuffer, meshPart.VertexOffset);
            gD.Indices = meshPart.IndexBuffer;

            Effect effect = meshPart.Effect;

            effect.CurrentTechnique = effect.Techniques[technique];
            // NOTE: the parameters below are set on this.effect, while the
            // technique applied further down comes from meshPart.Effect.
            this.effect.Parameters["merge1"].SetValue(shadowMap1);
            this.effect.Parameters["merge2"].SetValue(shadowMap2);

            foreach (EffectPass pass in effect.CurrentTechnique.Passes)
            {
                pass.Apply();

                gD.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0,
                    meshPart.NumVertices, meshPart.StartIndex,
                    meshPart.PrimitiveCount);
            }
        }
    }

    device.SetRenderTarget(null);

    return (Texture2D)renderTarget;
}

Both depth1 and depth2 are 0 in the pixel shader. I’ve added a UV map to the object, but I must say that I don’t know if the plane is rotated correctly, because Blender’s x, y, z axes are not the same as when the object is loaded into MonoGame.
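To check what actually ends up in the maps, I figure I can dump them to disk (assuming desktop MonoGame, where Texture2D.SaveAsPng is available):

// Debug aid: write a shadow map out as a PNG for inspection.
using (var stream = System.IO.File.Create("shadowMap1.png"))
{
    shadowMap1.SaveAsPng(stream, shadowMap1.Width, shadowMap1.Height);
}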

Usually if you want to pass over the entire render target, you can use what’s called a fullscreen quad. It’s a piece of geometry created programmatically to cover every pixel of the render target, causing each pixel to be processed by your shader. I’m on my phone so I don’t have any good examples readily available, but you should be able to find some ready to work with MonoGame, perhaps in the Deferred Engine’s source on GitHub.


As jnoyola suggests, you can use a fullscreen quad to cover the entire screen. You can create that quad procedurally, by filling a vertex buffer, or you can just import a square plane created in a 3D modeling program. Make the plane go from (-1,-1) to (+1,+1) in the xy-plane, then do this in the vertex shader to fill the screen:

vsOut.position =  float4(vsIn.position.xy, 0, 1); 	

You will also need UV coordinates. Either include them in your model, or calculate them in the vertex shader from the position:

vsOut.uv =  (vsIn.position.xy + 1) / 2; 
vsOut.uv.y = 1 - vsOut.uv.y; 
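
If you go the procedural route, MonoGame’s built-in VertexPositionTexture is enough. A minimal sketch (untested here, and the winding assumes the default cull mode):

VertexPositionTexture[] quad =
{
    // Fullscreen quad straight in NDC; with the vertex shader above the
    // UVs could even be omitted, since they are derived from position.
    new VertexPositionTexture(new Vector3(-1, -1, 0), new Vector2(0, 1)),
    new VertexPositionTexture(new Vector3(-1,  1, 0), new Vector2(0, 0)),
    new VertexPositionTexture(new Vector3( 1, -1, 0), new Vector2(1, 1)),
    new VertexPositionTexture(new Vector3( 1,  1, 0), new Vector2(1, 0)),
};
short[] indices = { 0, 1, 2, 2, 1, 3 };

effect.CurrentTechnique.Passes[0].Apply();
device.DrawUserIndexedPrimitives(PrimitiveType.TriangleList, quad, 0, 4, indices, 0, 2);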

Thanks for your response. I’ve tried to make the changes and implement the code needed, but I can’t quite make it work.

I’ve added my new code to this thread after the line ‘EDIT EDIT EDIT EDIT UPDATE UPDATE UPDATE…’ :smiley:

Thanks again

Trim it down and alter it as needed; this is copy-pasted.

private VertexPositionNormalColorUv[] screenQuadVertices;
private int[] screenQuadIndices;

private void CreateScreenQuad()
{
    float z = 0.0f;
    float adjustment = .0f;
    float scale = 2f; // scale 2 and matrix identity passed straight through is literally orthographic

    screenQuadVertices = new VertexPositionNormalColorUv[4];
    screenQuadVertices[0].Position = new Vector3((adjustment - 0.5f) * scale, (adjustment - 0.5f) * scale, z);
    screenQuadVertices[0].Normal = Vector3.Backward;
    screenQuadVertices[0].Color = Color.White;
    screenQuadVertices[0].TextureCoordinateA = new Vector2(0f, 1f);

    screenQuadVertices[1].Position = new Vector3((adjustment - 0.5f) * scale, (adjustment + 0.5f) * scale, z);
    screenQuadVertices[1].Normal = Vector3.Backward;
    screenQuadVertices[1].Color = Color.White;
    screenQuadVertices[1].TextureCoordinateA = new Vector2(0f, 0f);

    screenQuadVertices[2].Position = new Vector3((adjustment + 0.5f) * scale, (adjustment - 0.5f) * scale, z);
    screenQuadVertices[2].Normal = Vector3.Backward;
    screenQuadVertices[2].Color = Color.White;
    screenQuadVertices[2].TextureCoordinateA = new Vector2(1f, 1f);

    screenQuadVertices[3].Position = new Vector3((adjustment + 0.5f) * scale, (adjustment + 0.5f) * scale, z);
    screenQuadVertices[3].Normal = Vector3.Backward;
    screenQuadVertices[3].Color = Color.White;
    screenQuadVertices[3].TextureCoordinateA = new Vector2(1f, 0f);

    screenQuadIndices = new int[6];
    screenQuadIndices[0] = 0;
    screenQuadIndices[1] = 1;
    screenQuadIndices[2] = 2;
    screenQuadIndices[3] = 2;
    screenQuadIndices[4] = 1;
    screenQuadIndices[5] = 3;
}

public void DrawUserIndexPrimitiveScreenQuad(GraphicsDevice gd)
{
    gd.DrawUserIndexedPrimitives(PrimitiveType.TriangleList, screenQuadVertices, 0, screenQuadVertices.Length, screenQuadIndices, 0, 2, VertexPositionNormalColorUv.VertexDeclaration);
}

The vertex definition:

    public struct VertexPositionNormalColorUv : IVertexType
    {
        public Vector3 Position;
        public Color Color;
        public Vector3 Normal;
        public Vector2 TextureCoordinateA;

        public static VertexDeclaration VertexDeclaration = new VertexDeclaration
        (
              new VertexElement(VertexElementByteOffset.PositionStartOffset(), VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
              new VertexElement(VertexElementByteOffset.OffsetColor(), VertexElementFormat.Color, VertexElementUsage.Color, 0),
              new VertexElement(VertexElementByteOffset.OffsetVector3(), VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
              new VertexElement(VertexElementByteOffset.OffsetVector2(), VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0)
        );
        VertexDeclaration IVertexType.VertexDeclaration { get { return VertexDeclaration; } }
    }
    /// <summary>
    /// This is a helper struct for tallying byte offsets
    /// </summary>
    public struct VertexElementByteOffset
    {
        public static int currentByteSize = 0;
        public static int PositionStartOffset() { currentByteSize = 0; var s = sizeof(float) * 3; currentByteSize += s; return currentByteSize - s; }
        public static int Offset(float n) { var s = sizeof(float); currentByteSize += s; return currentByteSize - s; }
        public static int Offset(Vector2 n) { var s = sizeof(float) * 2; currentByteSize += s; return currentByteSize - s; }
        public static int Offset(Color n) { var s = sizeof(int); currentByteSize += s; return currentByteSize - s; }
        public static int Offset(Vector3 n) { var s = sizeof(float) * 3; currentByteSize += s; return currentByteSize - s; }
        public static int Offset(Vector4 n) { var s = sizeof(float) * 4; currentByteSize += s; return currentByteSize - s; }

        public static int OffsetFloat() { var s = sizeof(float); currentByteSize += s; return currentByteSize - s; }
        public static int OffsetColor() { var s = sizeof(int); currentByteSize += s; return currentByteSize - s; }
        public static int OffsetVector2() { var s = sizeof(float) * 2; currentByteSize += s; return currentByteSize - s; }
        public static int OffsetVector3() { var s = sizeof(float) * 3; currentByteSize += s; return currentByteSize - s; }
        public static int OffsetVector4() { var s = sizeof(float) * 4; currentByteSize += s; return currentByteSize - s; }
    }

The relevant shader code is basic:

//_______________________________________________________________
// structs ScreenQuadDraw
//_______________________________________________________________
struct VertexShaderInputScreenQuad
{
    float4 Position : POSITION0;
    float3 Normal : NORMAL0;
    float4 Color : COLOR0;
    float2 TexureCoordinateA : TEXCOORD0;
};
struct VertexShaderOutputScreenQuad
{
    float4 Position : SV_Position;
    //float3 Normal : NORMAL0;// not needed yet
    float4 Color : COLOR0;
    float2 TexureCoordinateA : TEXCOORD0;
};
struct PixelShaderOutputScreenQuad
{
    float4 Color : COLOR0;
};

VertexShaderOutputScreenQuad VertexShaderScreenQuadDraw(VertexShaderInputScreenQuad input)
{
    VertexShaderOutputScreenQuad output;
    output.Position = mul(input.Position, gworld); // basically matrix identity
    output.Color = input.Color;
    output.TexureCoordinateA = input.TexureCoordinateA;
    return output;
}

PixelShaderOutputScreenQuad PixelShaderScreenQuadDraw(VertexShaderOutputScreenQuad input)
{
    PixelShaderOutputScreenQuad output;
    output.Color = tex2D(TextureSamplerA, input.TexureCoordinateA); // *input.Color;
    return output;
}

technique ScreenQuadDraw
{
    pass
    {
        VertexShader = compile VS_SHADERMODEL VertexShaderScreenQuadDraw();
        PixelShader = compile PS_SHADERMODEL PixelShaderScreenQuadDraw();
    }
}

The call looks like this (sorry, it’s a bit sloppy):

public void DrawScreenQuad(Texture2D t)
{
    //GraphicsDevice.RasterizerState = RasterizerState.CullNone;
    myeffect.Parameters["TextureA"].SetValue(t);
    myeffect.Parameters["gworld"].SetValue(Matrix.Identity * Matrix.CreateScale(1f));
    myeffect.CurrentTechnique = myeffect.Techniques["ScreenQuadDraw"];
    foreach (EffectPass pass in myeffect.CurrentTechnique.Passes)
    {
        pass.Apply();
        pvs.DrawUserIndexPrimitiveScreenQuad(GraphicsDevice);
    }
}

You should be able to piece it together as you like from that quickly.

For most uses of a screen quad, I would imagine you don’t need normal or color info.

Also look out for rare race conditions with your static mutable struct.

I’m guessing you mean VertexElementByteOffset.

I never thought about it. Do you think it can cause a race condition?
XNA was wrapped in a [STAThread].

Thanks, I’ve tried it but it’s flickering when trying to use it.

Trying to find the problem…

Regards, Morgan

Yes, that’s what I mean. I’m not sure of the specifics of XNA or MonoGame threading, but in a generic environment you could have a race condition like this:

  1. Thread A calls PositionStartOffset(). currentByteSize is now 12
  2. Thread A calls OffsetFloat(). currentByteSize is now 16
  3. Thread B calls PositionStartOffset(), resetting currentByteSize and then again setting it to 12
  4. Thread A calls OffsetFloat(). currentByteSize was supposed to be 16, but is instead 12
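If it ever did need to be thread-safe, one way is to sidestep the tally entirely and write the byte offsets as constants. A rough sketch for the same layout (the struct name here is made up):

// Same layout as VertexPositionNormalColorUv, with the offsets written
// out by hand, so there is no shared mutable state at all.
public struct VertexPositionNormalColorUvFixed : IVertexType
{
    public Vector3 Position;           // bytes 0-11
    public Color Color;                // bytes 12-15
    public Vector3 Normal;             // bytes 16-27
    public Vector2 TextureCoordinateA; // bytes 28-35

    public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration
    (
        new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
        new VertexElement(12, VertexElementFormat.Color, VertexElementUsage.Color, 0),
        new VertexElement(16, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
        new VertexElement(28, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0)
    );
    VertexDeclaration IVertexType.VertexDeclaration { get { return VertexDeclaration; } }
}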

As for the main issue, I don’t see a problem with that code at first glance. I’m not sure off the top of my head why it would be flickering.

Well, I just checked and MonoGame is STA-threaded too, so unless that changes in the future or someone explicitly threads their draw calls, it should be fine.

Well, if you can see another way to make that work, that would be great; counting bytes in sequence is annoying.

Isn’t the vertex declaration declared one time at startup and saved into a variable? So there’s no problem with threading unless the startup code was parallelized.

Hi, the flickering is gone now, but the shadow map is stretched out too big over the screen.
The code I’m trying to use with your nice code is…

device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);

device.SetRenderTarget(renderTarget);
        
this.effect.CurrentTechnique = this.effect.Techniques["ShadowMapMerge"];
this.effect.CurrentTechnique.Passes[0].Apply();
this.effect.Parameters["merge1"].SetValue(shadowMap1);
this.effect.Parameters["merge2"].SetValue(shadowMap2);
DrawUserIndexPrimitiveScreenQuad(GraphicsDevice);

device.SetRenderTarget(null);

shadowMapMixed = (Texture2D)renderTarget;

And the HLSL code is the same ShadowMapMerge shader I posted in the EDIT section above, unchanged.

The output is stretched and zoomed in (too big).

Any ideas?

Regards, Morgan

At first glance, this line in your vertex shader is suspect, specifically the /2:

Output.uv = (Input.Position.xy + 1) / 2;
VS_Output imageProcessingVS(VS_Input Input)
{
    VS_Output Output;

    Output.Position = float4(Input.Position.xy, 0, 1);

    Output.uv = (Input.Position.xy + 1) / 2;
    Output.uv.y = 1 - Output.uv.y;

    return Output;
}

That’s standard. It converts the position from the range [-1, 1] to texture coordinates in [0, 1]; for example, x = -1 maps to u = 0 and x = +1 maps to u = 1. Although if Output.uv is being computed from Input.Position, you don’t also need an Input.uv.

Ok, it might be working, but I seriously do not know how to use it with the shader.
My two shadow maps are empty, filled with zeroes, and nothing comes out because of that.
These two lines are always empty:

float4 depth1 = tex2D(merge1Sampler, Input.TextCoord);
float4 depth2 = tex2D(merge2Sampler, Input.TextCoord);

Yes, I’ve changed the UVs to TextCoord because that is what I think they are, judging from the VertexDeclaration.

Ah, I see.

Though I think you can just pass in the texture UV and let the pipeline interpolate the quad’s texture UV coordinates for free.

VS_Output imageProcessingVS(VS_Input Input)
{
    VS_Output Output;
    Output.Position = float4(Input.Position.xy, 0, 1);
    Output.uv = Input.uv;
    return Output;
}

Anyway, that shader is pretty simple, so I suppose the next thing to check would be the shadow map view/camera.
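
For a directional light that usually looks something along these lines (the names and ranges here are placeholders; adjust to your scene):

// Sketch of a light "camera" for rendering the shadow map.
// lightPosition and "LightViewProjection" are made-up names.
Matrix lightView = Matrix.CreateLookAt(lightPosition, Vector3.Zero, Vector3.Up);
Matrix lightProjection = Matrix.CreateOrthographic(50f, 50f, 1f, 200f);
effect.Parameters["LightViewProjection"].SetValue(lightView * lightProjection);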

Ok, it is working if I write directly to the screen, but not if I try to write to the back buffer and then to the screen.

First I create the first shadow map from one light into the back buffer, storing it in the shadowMap1 variable.
Then I write the next light to the back buffer, storing it in the shadowMap2 variable/holder.
And after this I create the merge of the previous two into the back buffer, storing it in shadowMapMixed.
When using the following code, shadowMapMixed is always empty/black, but not if I write it directly to the screen.

// DO NR ONE SHADOW MAP!!
device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);

device.SetRenderTarget(renderTarget);

DrawScene("HardwareInstancingShadowMap");

device.SetRenderTarget(null);

shadowMap1 = (Texture2D)renderTarget;

// DO NR TWO SHADOW MAP!!
device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);

device.SetRenderTarget(renderTarget);

DrawScene("HardwareInstancingShadowMap");

device.SetRenderTarget(null);

shadowMap2 = (Texture2D)renderTarget;

// DO MIX THEM TOGETHER!
device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);

device.SetRenderTarget(renderTarget); // <--- If I comment this line out.. and

this.effect.CurrentTechnique = this.effect.Techniques["ShadowMapMerge"];
this.effect.CurrentTechnique.Passes[0].Apply(); // note: Apply() runs before the parameters below are set, so they may not reach the GPU until the next Apply()
this.effect.Parameters["merge1"].SetValue(shadowMap1);
this.effect.Parameters["merge2"].SetValue(shadowMap2);
DrawUserIndexPrimitiveScreenQuad(GraphicsDevice); // <-- code from willmotil at the MonoGame Community

device.SetRenderTarget(null); // <---- this line out.. and

shadowMapMixed = (Texture2D)renderTarget; // <---- this line out, it is shown directly on screen

Like I said, when I render the merging pass into the back buffer, shadowMapMixed is empty/black; but if I comment out the lines mentioned in the code above, so that it renders directly to the screen, it shows exactly what the mixed shadow map is supposed to look like.

What?? :confused:

Well… my renderTarget is created like this:

renderTarget = new RenderTarget2D(device, pp.BackBufferWidth, pp.BackBufferHeight, true, device.DisplayMode.Format, DepthFormat.Depth24);

The width and height are set to the same as the screen…

Why do you clear BEFORE setting the render target? :slight_smile:

And after filling renderTarget, you need to draw it with a SpriteBatch or a Quad renderer for it to be shown on the screen.

The reason I’m doing things in a weird way is just that I’m not sure what I’m doing :smiley:
But I’m rendering it to the screen with this function:

DrawUserIndexPrimitiveScreenQuad(GraphicsDevice);

public void DrawUserIndexPrimitiveScreenQuad(GraphicsDevice gd)
{
    gd.DrawUserIndexedPrimitives(PrimitiveType.TriangleList, screenQuadVertices, 0, screenQuadVertices.Length, screenQuadIndices, 0, 2, VertexPositionNormalColorUv.VertexDeclaration);
}

This works as long as I’m not trying to use it on the back buffer. Why is that?
Or did I misunderstand what you said?

From your sample, what I understand is:

  • You draw/fill shadow map #1 into renderTarget with DrawScene
    |- You copy this as the shadowMap1 Texture2D (which is not necessary, as render targets are Texture2Ds too)
  • You draw/fill shadow map #2 into renderTarget with DrawScene
    |- You copy this as the shadowMap2 Texture2D
  • You set the current render target to null, which is fine to release the buffer.
  • You clear the back buffer
  • You set renderTarget as the current buffer to draw onto
  • You merge both shadow maps #1 and #2 into renderTarget

My question is: when do you draw renderTarget to the screen? It is normal that you see the shadow map as it is supposed to be when you are not rendering into renderTarget, because then you draw DIRECTLY to the screen/back buffer.
But here you set renderTarget as the render target again, fill it with the merged textures, and never draw it onto the screen.

You need to do this after your last device.SetRenderTarget(null);

// Use any of the overloads you need
spriteBatch.Begin(SpriteSortMode.Immediate, null, null, null, null, null);
spriteBatch.Draw(renderTarget, renderTarget.Bounds, Color.White);
spriteBatch.End();

Maybe you did it but it is not shown in your sample code.
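
One more thing stands out from the snippets (an assumption on my part, so correct me if your real code differs): all three passes render into the same renderTarget instance. Casting a RenderTarget2D to Texture2D does not copy it, so shadowMap1, shadowMap2 and the merged result all alias one texture, and the merge pass ends up sampling the very target it renders into. A sketch with three separate targets, clearing after binding as discussed above (the field names and SurfaceFormat.Single are my choices, not your code):

// Hypothetical fields: one target per pass, created once at startup.
shadowTarget1 = new RenderTarget2D(device, pp.BackBufferWidth, pp.BackBufferHeight,
    false, SurfaceFormat.Single, DepthFormat.Depth24);
shadowTarget2 = new RenderTarget2D(device, pp.BackBufferWidth, pp.BackBufferHeight,
    false, SurfaceFormat.Single, DepthFormat.Depth24);
mergeTarget = new RenderTarget2D(device, pp.BackBufferWidth, pp.BackBufferHeight,
    false, SurfaceFormat.Single, DepthFormat.Depth24);

// Shadow map 1.
device.SetRenderTarget(shadowTarget1);
device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.White, 1.0f, 0);
DrawScene("HardwareInstancingShadowMap"); // light 1 matrices set elsewhere

// Shadow map 2.
device.SetRenderTarget(shadowTarget2);
device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.White, 1.0f, 0);
DrawScene("HardwareInstancingShadowMap"); // light 2 matrices set elsewhere

// Merge into a third target; set the parameters BEFORE Apply().
device.SetRenderTarget(mergeTarget);
device.Clear(Color.White);
effect.CurrentTechnique = effect.Techniques["ShadowMapMerge"];
effect.Parameters["merge1"].SetValue(shadowTarget1);
effect.Parameters["merge2"].SetValue(shadowTarget2);
effect.CurrentTechnique.Passes[0].Apply();
DrawUserIndexPrimitiveScreenQuad(device);
device.SetRenderTarget(null);

// Finally show the result on screen.
spriteBatch.Begin();
spriteBatch.Draw(mergeTarget, mergeTarget.Bounds, Color.White);
spriteBatch.End();

Clearing the shadow targets to White rather than Black is also deliberate here: since your merge keeps the smaller red value, a black (zero-depth) clear would win everywhere. Adjust if your depth encoding differs.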