Mixing two Shadow Maps together with an Effect.fx technique

Thanks, I’ve tried it, but it’s flickering when I use it.

Trying to find the problem…

Regards, Morgan

Yes, that’s what I mean. I’m not sure of the specifics of XNA or MonoGame threading, but in a generic environment you could have a race condition like this:

  1. Thread A calls PositionStartOffset(). currentByteSize is now 12
  2. Thread A calls OffsetFloat(). currentByteSize is now 16
  3. Thread B calls PositionStartOffset(), resetting currentByteSize and then again setting it to 12
  4. Thread A calls OffsetFloat(). currentByteSize was supposed to be 16, but is instead 12
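In C#, the racy pattern being described looks roughly like this. The method names come from the discussion above; the bodies are a hypothetical reconstruction, not the actual code:

```csharp
// Hypothetical reconstruction of the racy offset-counting pattern.
// A single shared counter is reset and advanced by successive calls,
// so two threads interleaving these calls corrupt each other's offsets.
static int currentByteSize;

static int PositionStartOffset()
{
    currentByteSize = 0;   // reset for a new vertex layout
    int offset = currentByteSize;
    currentByteSize += 12; // a Vector3 position is 12 bytes
    return offset;         // position starts at byte 0
}

static int OffsetFloat()
{
    int offset = currentByteSize;
    currentByteSize += 4;  // a float is 4 bytes
    return offset;         // read-then-write is not atomic across threads
}
```

Because the counter is shared mutable state, the only safe uses are single-threaded ones, such as building the vertex declaration once at startup.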

As for the main issue, I don’t see a problem with that code at first glance. I’m not sure off the top of my head why it would be flickering.

Well, I just checked and MonoGame is single-threaded (STA) too, so unless that changes in the future or someone is explicitly threading their draw calls, it should be fine.

Well, if you can see another way to make that work, it would be great; counting bytes in sequence is annoying.

Isn’t the vertex declaration declared once at startup and saved into a variable? So there’s no problem with threading unless the startup code was parallelized.
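If the byte counting itself is the annoyance, one alternative sketch is to write the offsets as literals and build the declaration once in a static initializer. The field layout here is an assumption matching a position/normal/color/uv vertex like VertexPositionNormalColorUv:

```csharp
// Hypothetical declaration with explicit byte offsets instead of a
// running counter: Vector3 = 12 bytes, Color = 4 bytes, Vector2 = 8 bytes.
public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration(
    new VertexElement(0,  VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
    new VertexElement(12, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
    new VertexElement(24, VertexElementFormat.Color,   VertexElementUsage.Color, 0),
    new VertexElement(28, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0));
```

Since the static initializer runs exactly once, there is no shared counter left to race on.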

Hi, the flickering is gone now, but the shadow map is stretched out too big over the screen.
The code I’m trying to use with your nice code is…

device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);

device.SetRenderTarget(renderTarget);
        
this.effect.CurrentTechnique = this.effect.Techniques["ShadowMapMerge"];
this.effect.CurrentTechnique.Passes[0].Apply();
this.effect.Parameters["merge1"].SetValue(shadowMap1);
this.effect.Parameters["merge2"].SetValue(shadowMap2);
DrawUserIndexPrimitiveScreenQuad(GraphicsDevice);

device.SetRenderTarget(null);

shadowMapMixed = (Texture2D)renderTarget;

And the HLSL code…

Texture merge1;

sampler merge1Sampler = sampler_state
{
    texture = <merge1>;
    magfilter = LINEAR;
    minfilter = LINEAR;
    mipfilter = LINEAR;
    AddressU = CLAMP;
    AddressV = CLAMP;
};

Texture merge2;

sampler merge2Sampler = sampler_state
{
    texture = <merge2>;
    magfilter = LINEAR;
    minfilter = LINEAR;
    mipfilter = LINEAR;
    AddressU = CLAMP;
    AddressV = CLAMP;
};

struct VS_Input
{
    float4 Position : POSITION;
    float2 uv       : TEXCOORD0;
};

struct VS_Output
{
    float4 Position : POSITION;
    float2 uv       : TEXCOORD0;
};

VS_Output imageProcessingVS(VS_Input Input)
{
    VS_Output Output;

    Output.Position = float4(Input.Position.xy, 0, 1);

    Output.uv = (Input.Position.xy + 1) / 2;
    Output.uv.y = 1 - Output.uv.y;

    return Output;
}

float4 shadowMapMergerPS(VS_Output Input) : COLOR0
{
    float4 depth1 = tex2D(merge1Sampler, Input.uv);
    float4 depth2 = tex2D(merge2Sampler, Input.uv);

    if (depth1.r < depth2.r)
    {
        return depth1;
    }
    else
    {
        return depth2;
    }
}

technique ShadowMapMerge
{
    pass P0
    {
        VertexShader = compile vs_2_0 imageProcessingVS();
        PixelShader = compile ps_2_0 shadowMapMergerPS();
    }
}

The output is stretched and zoomed in (too big).

Any ideas?

Regards, Morgan

At first glance, this line in your vertex shader is suspect, specifically the /2:

Output.uv = (Input.Position.xy + 1) / 2;

VS_Output imageProcessingVS(VS_Input Input)
{
VS_Output Output;

Output.Position = float4(Input.Position.xy, 0, 1);

Output.uv = (Input.Position.xy + 1) / 2;
Output.uv.y = 1 - Output.uv.y;

return Output;
}

That’s standard. It converts from the clip-space position range [-1, 1] to the texture coordinate range [0, 1]. Although if Output.uv is being computed from Input.Position, you don’t also need an Input.uv.

OK, it might be working, but I seriously don’t know how to use it with the shader.
My two shadow maps are empty, filled with zeroes, and nothing comes out because of that.
These two lines always sample empty values.

float4 depth1 = tex2D(merge1Sampler, Input.TextCoord);
float4 depth2 = tex2D(merge2Sampler, Input.TextCoord);

Yes, I’ve changed the uvs to TextCoord because that is what I think they are called in the VertexDeclaration.

Ah, I see.

Though I think you can just pass the texture uv through and let the rasterizer interpolate the quad’s texture coordinates for free.

VS_Output imageProcessingVS(VS_Input Input)
{
    VS_Output Output;
    Output.Position = float4(Input.Position.xy, 0, 1);
    Output.uv = Input.uv;
    return Output;
}

Anyway, that shader is pretty simple; I suppose the next thing to check would be the shadow map view/camera.

OK, it is working if I write directly to the screen, but not if I try to write to the back buffer and then to the screen.

First I’m creating the first shadow map from one light to the back buffer, storing it in the shadowMap1 variable.
Then I’m rendering the next light to the back buffer, storing it in the shadowMap2 variable/holder.
And after this I’m creating the merge of the previous two into the back buffer, storing it in shadowMapMixed.
When using the following code, shadowMapMixed is always empty/black, but not if I write it directly to the screen.

        // DO NR ONE SHADOW MAP!!
        device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);

        device.SetRenderTarget(renderTarget);

        DrawScene("HardwareInstancingShadowMap");

        device.SetRenderTarget(null);

        shadowMap1 = (Texture2D)renderTarget;

        // DO NR TWO SHADOW MAP!!
        device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);

        device.SetRenderTarget(renderTarget);

        DrawScene("HardwareInstancingShadowMap");

        device.SetRenderTarget(null);
        
        shadowMap2 = (Texture2D)renderTarget;
        
        // DO MIX THEM TOGETHER!
        device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);

        device.SetRenderTarget(renderTarget); // <--- If I comment this line out.. and

        this.effect.CurrentTechnique = this.effect.Techniques["ShadowMapMerge"];
        this.effect.CurrentTechnique.Passes[0].Apply();
        this.effect.Parameters["merge1"].SetValue(shadowMap1);
        this.effect.Parameters["merge2"].SetValue(shadowMap2);
        DrawUserIndexPrimitiveScreenQuad(GraphicsDevice); // <-- Code from willmotil at Monogame Community

        device.SetRenderTarget(null); // <---- this line out.. and

        shadowMapMixed = (Texture2D)renderTarget; // <---- this line out, it is shown directly on screen.

Like I said, when I’m rendering the merge pass into the render target, shadowMapMixed is empty/black, but if I comment out the lines marked in the code above, so that it renders directly to the screen, then it’s shown just as the mixed shadow map is supposed to be.

What?? :confused:

Well… my renderTarget is created like this.

renderTarget = new RenderTarget2D(device, pp.BackBufferWidth, pp.BackBufferHeight, true, device.DisplayMode.Format, DepthFormat.Depth24);

The width and height are set to the same as the screen…

Why do you clear BEFORE setting the RenderTarget ? :slight_smile:

And after filling renderTarget, you need to draw it with a SpriteBatch or a Quad renderer for it to be shown on the screen.

Why I’m doing things in a weird way is just because I’m not sure what I’m doing :smiley:
But I’m rendering it with this function, to screen.

DrawUserIndexPrimitiveScreenQuad(GraphicsDevice);

public void DrawUserIndexPrimitiveScreenQuad(GraphicsDevice gd)
{
    gd.DrawUserIndexedPrimitives(PrimitiveType.TriangleList, screenQuadVertices, 0, screenQuadVertices.Length, screenQuadIndices, 0, 2, VertexPositionNormalColorUv.VertexDeclaration);
}

Which is working as long as I’m not trying to use it on the back buffer, why is that?
Or didn’t I understand what you’ve said now?

From your sample, what I understand is:

  • You draw/fill shadowmap#1 to renderTarget with drawScene
    |- You copy this as shadowmap#1 texture2D (which is not necessary, as a RenderTarget2D is a Texture2D too)
  • You draw/fill the shadowmap#2 to renderTarget with drawScene
    |- You copy this as shadowmap#2 texture2D
  • You set the current RenderTarget as null which is fine to release the buffer.
  • You clear the backbuffer
  • You set renderTarget as the current buffer to draw onto
  • You merge both shadowmap#1 and #2 into renderTarget

My question is: when do you draw renderTarget to the screen? It is normal that you see the shadow map as it is supposed to be when not rendering into renderTarget, because then you draw DIRECTLY to the screen/backbuffer.
But when you set renderTarget as the render target again and fill it with the merged textures, you never draw it onto the screen.

You need to do this after your last device.SetRenderTarget(null);

// Use any of the overload you need
spriteBatch.Begin(SpriteSortMode.Immediate, null, null, null, null, null);
spriteBatch.Draw(renderTarget, renderTarget.Bounds, Color.White);
spriteBatch.End();

Maybe you did it but it is not shown in your sample code.

I’m sending the shadow map last when rendering my 3D world in another shader. That works with one shadow map, so it should work with the merged one too; that’s why I’m certain the problem lies within the code that I’ve shown you.

I got it working now; I changed things around and now it’s working.
But the sad part is that it’s costing too much FPS. I had around 98-100 before, and with this approach I get around 60-67 FPS; that’s not good. I knew it was going to be a load, but not this much. I could certainly make the code more efficient, but mmm, nope.

I have another approach which I’ve been trying to accomplish, but every try has been in vain.
I would like to mix the two different light view matrices together within the shader, rendering both of them into one shadow map to gain speed that way, but it seems I’m not able to wrap my head around it.

Maybe some parts of your shader can be optimized, as well as the culling of objects if you have a huge number of them.
What is the resolution of your shadow maps? Above 1024x1024 it can start being really slow.
Profiling could help you find where you lose speed.
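As a rough first pass, a CPU-side timer can show which phase dominates. Note that GPU work is submitted asynchronously, so these timings are only approximate; DrawShadowMaps and MergeShadowMaps are hypothetical stand-ins for your own passes:

```csharp
// Hypothetical per-phase timing inside Draw(), using Stopwatch.
var sw = System.Diagnostics.Stopwatch.StartNew();
DrawShadowMaps();   // the per-light shadow passes (assumed method)
double shadowMs = sw.Elapsed.TotalMilliseconds;

sw.Restart();
MergeShadowMaps();  // the merge pass (assumed method)
double mergeMs = sw.Elapsed.TotalMilliseconds;

System.Diagnostics.Debug.WriteLine($"shadows: {shadowMs:F2} ms, merge: {mergeMs:F2} ms");
```

If neither number explains the frame-time difference, the cost is likely on the GPU side and a graphics profiler would give a better picture.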

My settings are for the moment…

const int cTa_Width = 1400;//1920;//1000;
const int cTa_Height = 800;//1080;//1000;

You’re right, there might be plenty to do, but I doubt I will reach a point where it will be sustainable anyway, sadly. Maybe I’m just too pessimistic for the moment because of all the setbacks; culling might be the key, because I’ve not implemented it at all yet.

I’ve just tried to render two lights from within the shader to one shadow map; that would really be something, but my head is not enough. I just can’t do it, stubborn as I am.
I’m afraid I’ll have to step down and continue to use the merging of shadow maps again, but my game is going to have several lights, and if two are dragging the FPS down, what will several do?? :open_mouth:

Am I missing something? I don’t get the point of mixing shadow maps outside of directional shadow map cascades.

When used correctly shadow maps are just masks for each light’s pass. Combining multiple lights into one pass should just be binding their textures and setting up the uniforms for that particular combination.
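For example, a single lighting pass can bind both lights’ maps at once instead of blitting them into a merged texture first. The technique and parameter names below are assumptions for illustration, not anything from your effect file:

```csharp
// Hypothetical: one forward lighting pass that samples both shadow maps.
effect.CurrentTechnique = effect.Techniques["LightingTwoShadowCasters"]; // assumed name
effect.Parameters["ShadowMap0"].SetValue(shadowMap1);
effect.Parameters["ShadowMap1"].SetValue(shadowMap2);
effect.Parameters["LightViewProj0"].SetValue(lightViewProjection0); // assumed params
effect.Parameters["LightViewProj1"].SetValue(lightViewProjection1);
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    DrawScene(); // the normal scene draw, now shadowed by both lights
}
```

The pixel shader then does one depth comparison per light and sums the lighting contributions, so no intermediate merged shadow map is ever needed.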

No, you’re not missing anything.
Please tell me more.

Regards, Morgan

What type of shadows are you trying to do? More importantly, are you using a sane resolution for your shadowmaps?

Shadow performance comes down to resolution and how many shadow casters you have to render. Unless you’re deviating from the canonical forward rendering loop, shadow receivers are pretty much meaningless in the numbers.

Packing your shadowmaps on the GPU isn’t really going to get you anything. Samplers and texture fetches in an effect aren’t expensive enough to warrant the blitting.

If you are packing multiple directional shadow-casting lights into a single Effect then just include the appropriate additional samplers and techniques.

It’s really only with cascaded shadow maps that you have some sort of packing of multiple maps, but even then you use viewports into a single render target for each of the cascades to render them. You don’t care about mips anyway, since they’re shadow maps and you want to use the more expensive filtering on those depth values.
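A sketch of that viewport approach, assuming four cascades packed side by side; cascadeAtlas, cascadeViewProj, and DrawShadowCasters are hypothetical names:

```csharp
// Hypothetical: four 1024x1024 cascades packed into one 4096x1024
// render target, each rendered through its own viewport.
device.SetRenderTarget(cascadeAtlas);
device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.White, 1.0f, 0);
for (int i = 0; i < 4; i++)
{
    device.Viewport = new Viewport(i * 1024, 0, 1024, 1024);
    shadowEffect.Parameters["LightViewProj"].SetValue(cascadeViewProj[i]);
    DrawShadowCasters(); // depth-only pass for this cascade's frustum slice
}
device.SetRenderTarget(null); // setting a target resets the viewport to cover it
```

The lighting shader then picks a cascade per pixel and offsets its uv lookup into the matching quarter of the atlas, so only one texture ever needs to be bound.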