Deferred Engine Playground - download

hi kosmonautgames,

the gain would be only 1 draw call for the geometry, and also only half the computation cost for the pixel shader, because (by parsing the vertex normal) a surface of a triangle can affect (or be seen from) at most 3 sides of a cube.

This is my theory.

But on the other hand, when we assume the vertices are rendered with backface culling (which is the case for most geometry used in games), does the above method really give a performance gain?

Are the back-facing vertices culled anyway?

hi kosmonautgames,

i have some trouble with MonoGame behavior which is best explained with the following code from Renderer.cs

These are two sprite batch cycles:

  1. copy renderTargetAlbedo to renderTargetDecalOffTarget
  2. copy renderTargetDecalOffTarget to renderTargetAlbedo

When renderTargetDecalOffTarget is not needed for the debug view mode, one would think it is unnecessary to copy back and forth.

So I tried to comment out the above code and set renderTargetAlbedo directly as the render target.
I also changed the decal draw cycle's blend state to additive, but the result on screen is always the visible decal while the albedo information is lost (the screen image is in black and white).

B. To check my decal blend state, I then wrote my own sprite shader and simulated the above copy-back-and-forth code with my own sprite shader and a fullscreen quad renderer; the result with my own additive blend state was OK.

C. In the end I tried to change the albedo render target's constructor code to "preserve" mode instead of discard, but this did not solve the problem.

Why is it impossible to draw the decal directly with an additive blend state to the existing renderTargetAlbedo?

It seems it is necessary to first do an active fullscreen quad draw to the render target which shall subsequently be used as a render target with a blend state.

private void DrawDecals(List<Decal> decals)
{
    if (!GameSettings.g_drawdecals) return;

    // First copy albedo to the decal off-target, then copy it back
    DrawMapToScreenToFullScreen(_renderTargetAlbedo, BlendState.Opaque, _renderTargetDecalOffTarget);
    DrawMapToScreenToFullScreen(_renderTargetDecalOffTarget, BlendState.Opaque, _renderTargetAlbedo);
    _decalRenderModule.Draw(_graphicsDevice, decals, _view, _viewProjection, _inverseView);
}

Because graphics.SetRenderTarget has changed in the meantime. When you change the render target, it is cleared before you can draw to it.
You can change that behaviour by making the render target keep its contents when initializing it.

Heads up that setting RTs to keep their content might impact performance a bit and is not available on all platforms (I don’t think it works on mobile), so in general it’s better to draw to RTs in the right order so it doesn’t matter if they’re cleared when you set them.
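In MonoGame the "keep contents" behaviour is chosen when the render target is created, via the RenderTargetUsage argument. A minimal sketch (the sizes and formats below are placeholders, not the engine's actual values):

```csharp
// Default is RenderTargetUsage.DiscardContents: the target is wiped when
// it is set on the device. PreserveContents keeps the previous pixels,
// at a possible performance cost and with limited platform support.
var albedoTarget = new RenderTarget2D(
    graphicsDevice,
    width, height,
    false,                               // no mipmaps
    SurfaceFormat.Color,
    DepthFormat.None,
    0,                                   // no multisampling
    RenderTargetUsage.PreserveContents);
```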

Hi kosmonautgames, Hi Jjagg,

“with joint forces we are strong” !!!

It works now. My first attempt could not work because the albedo map is part of a Multiple Render Target (MRT) set.

The next shader will not be allowed to write to just the first render target if the pixel shader's output format does not match the triple (MRT) signature.

So we have to set the albedo render target alone. My problem the whole time was that with MonoGame's SetRenderTarget method the new render target is automatically cleared. I could NOT prevent this, even when setting the

=> “KeepContents when initializing the RenderTarget” option

In my very humble opinion (because I am the smallest light around here), this option must have another meaning?

Now I succeeded by setting the render target directly through the SharpDX device's OutputMerger; see the following code.

Now I can do exactly what I used to do, just taking care of the order of the draw calls, and fill the render targets additively with the desired blend states, without the need for one or two additional draw cycles.

Maybe there is a better MonoGame-internal solution, but I did not have success with that.

         // First get the array of the 3 render targets currently active in SharpDX
         SharpDX.Direct3D11.RenderTargetView[] renderTargets =
             Gl.Device.ImmediateContext.OutputMerger.GetRenderTargets(3);

         // Now take the first one, which is the albedo map, and set it directly
         // as usual in SharpDX; the render target is NOT cleared by default there
         Gl.Device.ImmediateContext.OutputMerger.SetTargets(renderTargets[0]);

(Sorry for the bad text formatting, I could not get it better although I tried)

Hi kosmonautgames,

would you please be so patient and kind as to answer another question?

Topic: Ambient Occlusion

  1. run the effect with full-res depth and normal maps as input => half-res output
  2. blur vertically
  3. blur horizontally

I understand:
a. the SSAO pass is done at half-res (a quarter of the area) for a performance gain.
b. the blur is a separable filter, so we do it faster by applying it first to all columns, then to all rows
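As a side note on why the separable split pays off: a 2D blur of radius r needs (2r+1)² texture taps per pixel, while two 1D passes need only 2·(2r+1). A quick illustrative count (the radius is a made-up example value):

```csharp
int radius = 7;                  // blur radius in pixels (illustrative)
int taps1D = 2 * radius + 1;     // 15 taps for one 1D pass
int taps2D = taps1D * taps1D;    // 225 taps per pixel for a naive 2D blur
int tapsSeparable = 2 * taps1D;  // 30 taps per pixel: vertical + horizontal
```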

But why, in the blur passes, is the half-res SSAO map upsized to full resolution and then back to half?
I simply cannot understand this. Could you please explain?

in the method DrawScreenSpaceAmbientOcclusion():

         _graphicsDevice.SetRenderTarget(_renderTargetSSAOEffect); // half-res target
         // ... compute SSAO effect ...

in the method DrawBilateralBlur():

         // half-res is upsized to full-res ??

         // implicitly this says the height/width resolution is only half
         Shaders.ScreenSpaceEffectParameter_InverseResolution.SetValue(new Vector2
             (1.0f / _renderTargetScreenSpaceEffectUpsampleBlurVertical.Width,
              1.0f / _renderTargetScreenSpaceEffectUpsampleBlurVertical.Height) * 2);

         // but the vertical blur is fed with full-res ??
         Shaders.ScreenSpaceEffectParameter_SSAOMap.SetValue(_renderTargetScreenSpaceEffectUpsampleBlurVertical); // full-res

         Shaders.ScreenSpaceEffectParameter_InverseResolution.SetValue(new Vector2
             (1.0f / _renderTargetScreenSpaceEffectUpsampleBlurHorizontal.Width,
              1.0f / _renderTargetScreenSpaceEffectUpsampleBlurHorizontal.Height) * 0.5f);


I don’t remember, I’ll have a look when I find time.

SSAO is half res but the blur should be full res, for upsampling to work correctly.

The ScreenSpaceEffectParameter_InverseResolution does not state the actual output resolution in this case. It just states the texel size of the input texture, which affects the width of the blur.
So for the full-res effect to correctly space out pixels I need to supply the texel size of the half-res target.
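In other words, the shader steps between samples in UV space, so the step must be the input texture's texel size even when the output target is larger. A sketch of the idea, using the parameter name from the thread (halfResSSAOTarget is a hypothetical variable name):

```csharp
// The blur output goes to a full-res target, but the blur samples the
// half-res SSAO texture, so the step between taps must be that
// texture's texel size, not the output resolution.
var texelSize = new Vector2(
    1.0f / halfResSSAOTarget.Width,   // hypothetical half-res input target
    1.0f / halfResSSAOTarget.Height);
Shaders.ScreenSpaceEffectParameter_InverseResolution.SetValue(texelSize);
```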

The second one should in theory not have the *0.5f at the end; this would make the kernel smaller. I guess I played around with it and found it to work well…?

just found an issue: the left picture shows the initial scene when i start the program, the right picture shows the scene after i move the directional light a bit. The shadow seems to go wrong when i edit the position; it is supposed to be blocked by the middle wall. Any idea?

Hey @bettina4you

Can you tell me how you went about integrating SharpDX directly? Do you mix that approach with some things still done by MonoGame?

Hi kosmonautgames,

nice to hear from you. I still work on your engine. I have successfully integrated a Windows Forms window with a dynamic grid view and tree view so that i can hierarchically visualize Entity, Model, Mesh, MeshPart and all their attributes…

Now to your question. I still work under the MonoGame namespace. After adding the SharpDX references I mix the code.

But this must be done carefully, because MonoGame under the hood works differently from what I was used to in SlimDX.

In MonoGame, most of the time things are not applied to DirectX until a real draw call is started, and then there are mechanisms which hide some things you would normally do directly.

The trick I used, for example, was to write a converter from a MonoGame rasterizer state back to a SharpDX rasterizer state.

So I just create a SharpDX rasterizer state by feeding my method with your MonoGame rasterizer state (and so on).

Once you know how to translate things this is no longer necessary, but for the conversion process it was helpful.
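Such a converter might look roughly like the following; this is a sketch, not the poster's actual code. The cull-mode mapping assumes D3D11's default winding (FrontCounterClockwise = false), so MonoGame's "cull counter-clockwise faces" becomes D3D11 CullMode.Back:

```csharp
using XnaGraphics = Microsoft.Xna.Framework.Graphics;
using D3D11 = SharpDX.Direct3D11;

// Sketch: translate a MonoGame RasterizerState into a SharpDX description.
static D3D11.RasterizerStateDescription ToSharpDX(XnaGraphics.RasterizerState mg)
{
    return new D3D11.RasterizerStateDescription
    {
        CullMode = mg.CullMode == XnaGraphics.CullMode.None
            ? D3D11.CullMode.None
            : mg.CullMode == XnaGraphics.CullMode.CullCounterClockwiseFace
                ? D3D11.CullMode.Back    // CCW faces are back faces here
                : D3D11.CullMode.Front,
        FillMode = mg.FillMode == XnaGraphics.FillMode.WireFrame
            ? D3D11.FillMode.Wireframe
            : D3D11.FillMode.Solid,
        IsScissorEnabled = mg.ScissorTestEnable,
        IsMultisampleEnabled = mg.MultiSampleAntiAlias,
        DepthBias = (int)mg.DepthBias,
        SlopeScaledDepthBias = mg.SlopeScaleDepthBias,
    };
}
```

The description can then be turned into an actual state object with the SharpDX RasterizerState constructor that takes a device and a description.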

I will send you a community mail with that part of my code. I hope it can be of a little help.

You are a professional and will understand the code right away.

@kosmonautgames This is awesome but I can’t find any mention of the license… any chance you could add an MIT license to this project?

Sure, I guess any license is better than nothing. I recall that someone specifically said “no MIT please” but I don’t remember what that was about.

EDIT: Ah I guess I would have to spend a little more than a minute since I use Sponza atrium and some assets which have their own license

Hi! I’ve been inspecting the engine code for days and I’d like to thank you a lot for such a valuable asset. You’ve taught a lot of tricks to an old dog :wink:

However I’m reaching the point of “I won’t be able to do anything like this on my own” and my thoughts are rapidly moving towards “I’ll just take Kosmonaut’s code and adapt it to my game” :slight_smile: So, as people have been asking, I’d love it if you could please include a license in the project, to know what to expect in the future when reusing the code.

(btw, I think you can have different licenses for a project, i.e. one for the Sponza files and a different one for your code)

I also have a question about shadows for lights. It seems there are 3 configurations selectable with g_UseDepthStencilLightCulling: None, DepthBuffer, and DepthBuffer+Stencil. However I’m unable to see any difference between the three, other than DB+Stencil taking about 50% more time on my hardware.

Could you please give an overview of the differences between those methods and their strengths/weaknesses?



Thanks for the interest!

I am on mobile right now, I’ll get back to your question when I find time.

I did add a license to the github repo (MIT)

I’ve heard complaints about MIT before, but I just went ahead with it since it’s better than nothing. I haven’t had time to review the others.


thanks for your time, there’s no hurry in any of my questions.

I’ve spent the weekend transplanting your point lights into my game; they look great, thanks again! (Although it’s a little sad that I’ve probably spent more time transplanting them than you did making them, but hey… let’s be positive :wink: )

I’m targeting the directional light now, but I’m observing a problem. I’ve taken the source code from GitHub as is, commented out all point lights, and uncommented the directional light. The light looks good until it is moved. When moved, some shadows disappear.

I’ve made a small private video to show it:

After making the video I found that the directional light “going up” shown at the end of the video can be alleviated with the shadowDepth parameter, so you can ignore that part. Anyway, I won’t need a shadow that high.

However I’m unable to understand or find out why the shadow disappears once you move the directional light.

At first I thought that maybe the top roof geometry was being skipped because of frustum culling, but I removed the culling, forcing all objects to always be rendered (bool isUsed = true; in MeshMaterialLibrary.Draw), and the problem persists.

Any ideas on what could be causing that? I’ve been playing a bit with the parameters but I’ve found no combination that works.

Thanks a lot!

I have a function where I check which shadows need updating; it checks for objects moving or lights moving. I can’t look it up right now, but the problem should be there (on mobile)

It’s in rendermodules/shadowmaprendermodule.cs, line 302ff I think

Hi Kosmonaut, thanks for the answer. I tried to remove all the ifs so that it always enters the light generation, but the problem persists.

I’ve been having a look at it. I’m by no means an expert in this field, so take my message with a grain of salt (lucky you, my blood pressure is high and I can’t :slight_smile: )

I’ve rendered the shadowmap for the first frame and subsequent frames. It is rendered differently.

First shadowmap render:

Next shadowmap render:

If you look closely, the second shadow map shows all the balls (including the ones that are below the roof). I think for some reason the depth buffer is not working well after the first frame. It’s like it’s rendered backwards, or maybe just not using the depth information at all (like DepthStencilState.None, but I suppose the shader doesn’t use the DepthStencilState). Or maybe it’s near-clipping the roof (but I don’t think so, because of the next image).

This image is curious and probably gives some clues about the problem:

I’m hopeful I’ll be able to nail the problem tomorrow :slight_smile:

P.S. In the screenshots I forgot to disable the point lights, so the floor level looks lit and with shadows, but most of the light as well as all the ball shadows come from the point light.

I couldn’t resist until tomorrow :slight_smile:

commenting out the following line makes the directional lights work:

    if (!lightViewPointChanged && hasAnyObjectMoved)

in MeshMaterialLibrary.CheckShadowMapUpdateNeeds, ~ line 478

maybe it should be if (lightViewPointChanged || hasAnyObjectMoved)?

yes you are probably right.

EDIT: I’m not sure about the problem; I couldn’t replicate it in a quick test. Maybe I didn’t delete all the point lights.

Weird, it happens to me all the time, even without removing the point lights. Just git clone, uncomment the directional light (I have to change an enum, otherwise it won’t compile), run, and move the light.

That’s something I thought about last night that is also puzzling: the line I’m commenting out is common to both directional and point lights. If it fails for directional lights, why doesn’t it have the same problem with point lights, if most of the code is the same?

What MG version are you on? I’ll try to update my drivers later, just in case.