Deferred Engine Playground - download

the content processor is smart: it only rebuilds changed or newly added items.

I don’t know the exact mechanism for determining whether an item has changed, but that’s more or less trivial if you store a last known hash, for example.

So it will not build anything at all if only code changed (unless it is shader files, which are loaded through the content pipeline).
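As a sketch of such hash-based change detection (the class and its hash store are made up for illustration, not the actual pipeline code):

```csharp
// Hypothetical sketch: decide whether a content file needs rebuilding by
// comparing its current hash against the last known one. In a real tool the
// dictionary would be persisted between builds.
using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

static class ChangeDetector
{
    // last known hash per file path
    static readonly Dictionary<string, string> KnownHashes = new Dictionary<string, string>();

    static string HashFile(string path)
    {
        using (var sha = SHA256.Create())
        using (var stream = File.OpenRead(path))
            return Convert.ToBase64String(sha.ComputeHash(stream));
    }

    public static bool NeedsRebuild(string path)
    {
        string current = HashFile(path);
        if (KnownHashes.TryGetValue(path, out string known) && known == current)
            return false; // unchanged since last build, skip
        KnownHashes[path] = current;
        return true; // new or changed, rebuild
    }
}
```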

The Pipeline Tool builds changed content automatically because of a build hook in a .targets file (MSBuild targets) that MonoGame imports in the .csproj file (at least in the templates; I’m guessing that’s how this project does it too). You can open the Pipeline Tool and run the build from there to follow the build process. When you then build your project, it will skip unchanged files like @kosmonautgames mentioned.

Edit: in case you’re interested, the .targets file that is imported can be found here:

OK, now I have found out how it works…

Many thanks to kosmonautgames and Jjagg

In case someone wants to know the details:

  1. open the corresponding VS project file in a text editor
  2. search for the “.targets” string, mostly near the end of the file
  3. you will find an additional targets file besides the normal .targets file
  4. you will see a reference path for the additional .targets file, with a directory reference (an environment variable)

Attention: because you don’t know which value the environment variable has at runtime, you don’t know where to search for your .targets file.

E.g. in the MonoGame install folder
C:\Program Files (x86)\MonoGame\3.0 … you will NOT find the targets file, although we would expect it there.

The exact identical reference path exists at one and only one filepath (at least on my machine): C:\Program Files (x86)\MSBuild\MonoGame… (.targets)

  5. open this file in a text editor and, if necessary, add or change actions
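For reference, the import near the end of the .csproj in the MonoGame 3.x templates looks roughly like this (the exact version segment may differ on your machine). `$(MSBuildExtensionsPath)` is the environment-dependent part, which typically resolves to C:\Program Files (x86)\MSBuild:

```
<Import Project="$(MSBuildExtensionsPath)\MonoGame\v3.0\MonoGame.Content.Builder.targets" />
```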

hi kosmonautgames,

the following code fragment is from MeshMaterialLibrary.cs,
in the Draw method. In my humble opinion, the second loop

==> for (int index = 0; index < meshLib.Index; index++)
could be removed completely, because the index is used nowhere:

            for (int i = 0; i < matLib.Index; i++)
            {
                MeshLibrary meshLib = matLib.GetMeshLibrary()[i];
                for (int index = 0; index < meshLib.Index; index++)
                {
                    //If it's set to "not rendered" skip
                    for (int j = 0; j < meshLib.Rendered.Length; j++)
                    {
                        if (meshLib.Rendered[j])
                        {
                            isUsed = true;
                            //if (meshLib.GetWorldMatrices()[j].HasChanged)
                            //    hasAnyObjectMoved = true;
                        }
                    }

                    if (isUsed)// && hasAnyObjectMoved)
                    // ...
                }
            }

I see you made your way into the hell that is MeshMaterialLibrary. It is something I took from an earlier 3D project, and it can certainly be confusing.

Your humble opinion is obviously correct, thank you!

hi kosmonautgames,

this file really was hell …

I have reworked the whole file with all the classes reformatted and sent you an email via the MonoGame community board. I could not upload it as a file because only picture formats are allowed.

Don’t be surprised about the SharpDX compiler pragmas in the code.

I have translated all the MonoGame BlendStates, RasterizerStates and DepthStencilStates to direct SharpDX versions and communicate directly with the SharpDX device handle to set the states.

(Wow, it took me some time to understand from the MonoGame source code that all states are buffered and only “really set” in DirectX when the actual draw call happens.)

I did not change the interfaces of the classes, so if it is of help to you, I think you can nearly copy-paste it into your code.

I changed some early-out mechanisms that you had in your code and placed them at the top of some of the following loops.
You will easily understand it when you check.

I did not dare to open a pull request on GitHub. I tried, but then I was asked to select branches and was confused about what actually needed to be done.

I hope I could help you, because you are so kind to answer my questions.

Many thanks, Bettina from Germany


Any idea how to calculate light bounce for point / directional lights, like Unity?

Unity uses Enlighten, which you can look into if you want. It is precalculated though, nothing I’m going to pursue in my engine.

Any other simple approach that I can start with, other than Enlighten?

Yes, “Sparse-Voxel Octree Global Illumination” and “Reflective Shadowmaps” with virtual point lights are often used.


They are easier (not to say way faster) with compute shaders, which we are missing in MonoGame :unamused: Without them, voxels are expensive (if you don’t have a 1080 Ti :wink: )

thanks for sharing …

Try implementing Light Propagation Volumes, they can run under DX9 and implementing them is a very easy task.

hi kosmonautgames,

here is a question about the source file ShadowMapRenderModule.cs and the method CreateShadowMapDirectionalLight(…)

there are two paths:

  1. the light position has changed:

meshMaterialLibrary.FrustumCulling(entities, _boundingFrustumShadow, true, light.Position);

=> culling of the meshMaterialLibrary is done with the parameter set to “true”, meaning the light has changed

  2. the light position has not changed:

First step:

       bool hasAnyObjectMoved = meshMaterialLibrary.FrustumCulling(entities: entities, boundingFrustrum: _boundingFrustumShadow, hasCameraChanged: false, cameraPosition: light.Position);

=> culling of the meshMaterialLibrary is done with the parameter set to “false”, meaning the light has not changed

Second step:

if (!hasAnyObjectMoved) return;

meshMaterialLibrary.FrustumCulling(entities: entities, boundingFrustrum: _boundingFrustumShadow, hasCameraChanged: true, cameraPosition: light.Position);

=> when nothing has changed => return

The problem to me is the case when something (meaning the position of a mesh) has changed.

Why do you have to do the FrustumCulling again with hasCameraChanged set to true?

hi kosmonautgames,

here is still another question, about the culling logic in MeshMaterialLibrary.cs

In my very humble opinion I conclude that every MeshPart of a Mesh stores the bounding sphere of the whole Mesh, and it is never changed.

For the culling algorithm the position of the bounding sphere is transformed by the corresponding world matrix, but the radius of the bounding sphere remains the same.

So for the Sponza scene there is nearly no gain from culling, because a single visible MeshPart of the Mesh forces all MeshParts to be drawn.

Should it not be this way:
=> On initialize:
the bounding sphere of each MeshPart is calculated and stored (position, radius).

In the game loop:
=> the bounding sphere of the Mesh is used as a fast check for the Mesh as a whole.

=> if the Mesh as a whole is visible, the MeshPart bounding spheres are transformed and compared in the culling algorithm.
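A sketch of that two-level test, assuming XNA-style `BoundingFrustum`/`BoundingSphere` types (the method and its parameters are hypothetical, not the engine's actual API):

```csharp
// Hypothetical two-level frustum culling: reject the whole mesh first, and
// only test the individual mesh parts when the mesh-level sphere intersects.
using Microsoft.Xna.Framework;

static class HierarchicalCulling
{
    public static void Cull(BoundingFrustum frustum, Matrix world,
                            BoundingSphere meshSphere,
                            BoundingSphere[] partSpheres, bool[] rendered)
    {
        // Fast rejection: transform the mesh-level sphere once per mesh.
        BoundingSphere meshWorld = meshSphere.Transform(world);
        if (!frustum.Intersects(meshWorld))
        {
            for (int i = 0; i < rendered.Length; i++) rendered[i] = false;
            return;
        }

        // Fine test: each part's own sphere, transformed by the same world matrix.
        for (int i = 0; i < partSpheres.Length; i++)
            rendered[i] = frustum.Intersects(partSpheres[i].Transform(world));
    }
}
```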

Yes to everything. 20 characters

Hi kosmonautgames,

I have been working further on your code and have noticed that e.g. the WorldView and WorldViewProjection matrices are calculated for each MeshPart in the Draw method of the MeshMaterialLibrary class.

Each MeshPart shares the same WorldView and WorldViewProjection from its Mesh, so it is sufficient to update these matrices only once per draw cycle.

I added the matrix definitions as properties to the BasicEntity class and run an update loop over all entities before the draw cycle. In the draw cycle the precalculated matrices of the entity are used directly without any further calculation. This works fine.
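The per-entity caching described above could look roughly like this (class and property names are illustrative, not the actual BasicEntity code):

```csharp
// Sketch of caching per-entity matrices once per frame, so every MeshPart of
// the entity reuses them instead of recomputing. Names are hypothetical.
using Microsoft.Xna.Framework;

class CachedEntity
{
    public Matrix World;
    public Matrix WorldView;           // updated once per frame
    public Matrix WorldViewProjection; // reused by every MeshPart of this entity

    public void UpdateMatrices(Matrix view, Matrix viewProjection)
    {
        WorldView = World * view;
        WorldViewProjection = World * viewProjection;
    }
}

// Before the draw cycle:
// foreach (var entity in entities) entity.UpdateMatrices(view, viewProjection);
```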

Now my question:

Why don’t we always just write the World, View and Projection matrices to the shader and let the shader calculate the corresponding WorldView and WorldViewProjection (and so on) matrices ONCE, then use these matrices for all vertices in the draw call?
Is this possible?

How would I write such a scenario in the shader code?

No, it’s not possible, since by default you cannot write global variables in shaders. You could look into “UAVs”, which are a construct for such a thing, but they are not known to the default MonoGame HLSL compiler as far as I know.

Either way, keep in mind that shaders run in parallel across many threads, so such a construct is not even that useful.

It should also be noted that by default MonoGame, and many tutorials and XNA implementations, calculate WorldViewProjection in the vertex shader; I just figured it’s more efficient to do that only once on the CPU.

The whole MeshMaterialLibrary is a mess, I apologize again. It was built for my first 3D game, when I wasn’t as experienced. None of my meshes had any additional submeshes either.

On an unrelated note: I currently have some unfinished features which I haven’t committed yet, along with some interesting stuff like live shader reloading. Should I just commit with the unfinished stuff in place? It looks like I might run out of time to finish the signed distance field stuff soon.

Hi kosmonautgames,

I have a question about creating the environment cubemap and also the shadow cubemap.

Right now we don’t take into account that we might want to update only a part of the cubemap (culling).

In both cases the corresponding shader gets six draw cycles with the whole geometry but different ViewProjection matrices.

Inspired by an article about reflective shadow maps that I studied, it is possible to do only one geometry draw cycle and feed the shader with matrix arrays of the View and ViewProjection matrices.

The vertex and pixel shader then each loop six times.

It should not be too hard to implement, because the render target is already prepared as a rectangle with a 6:1 aspect ratio.

I ask because I measured the cost of a dynamic point light with a cube shadow map and also a dynamic environment map: the fps went down from 108 to 40.

Even if we want to be able to update only a part of the cubemap (culling), it should be no problem to pass an array of floats used as booleans for a check in the shader loop.
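On the CPU side, the six per-face ViewProjection matrices could be packed into a single effect parameter roughly like this (a hedged sketch: `CubeFaceDirections`, `CubeFaceUps` and the `ViewProjections` parameter name are assumptions, not the engine's actual names):

```csharp
// Sketch: build six cube-face ViewProjection matrices once and hand them to a
// single effect, so the shader can loop over the faces in one geometry pass.
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

static class CubeMapHelper
{
    // Face orientations for +X, -X, +Y, -Y, +Z, -Z (illustrative ordering).
    static readonly Vector3[] CubeFaceDirections =
        { Vector3.Right, Vector3.Left, Vector3.Up, Vector3.Down, Vector3.Forward, Vector3.Backward };
    static readonly Vector3[] CubeFaceUps =
        { Vector3.Up, Vector3.Up, Vector3.Backward, Vector3.Forward, Vector3.Up, Vector3.Up };

    public static void SetFaceMatrices(Effect effect, Vector3 lightPosition, Matrix projection)
    {
        var viewProjections = new Matrix[6];
        for (int face = 0; face < 6; face++)
        {
            Matrix view = Matrix.CreateLookAt(
                lightPosition, lightPosition + CubeFaceDirections[face], CubeFaceUps[face]);
            viewProjections[face] = view * projection;
        }
        // One array parameter instead of six separate draw cycles.
        effect.Parameters["ViewProjections"].SetValue(viewProjections);
    }
}
```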


That sounds like great research; I haven’t thought about that. You could also look into spherical mapping or dual hemisphere mapping, which also need fewer loops; the way view projections can work is really fascinating. These are in theory cheaper but need significantly more memory for the same quality.

So it sounds like the idea is to determine which side the current vertex is on (top, right, forward, etc.) and then use the corresponding view matrix, right?

That can lead to some problematic areas at the borders, but it may be worth exploring.