The Pipeline Tool builds changed content automatically because of a build hook in a .targets file (MSBuild targets) that MonoGame imports into the .csproj file (at least in the templates; I’m guessing that’s how this project does it too). You can open the Pipeline Tool and run the build from there to follow the build process. When you then build your project it will skip unchanged files, like @kosmonautgames mentioned.
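For reference, in the classic MonoGame 3.x templates that import typically looks something like the line below (the exact path can differ depending on the MonoGame version and install location, so treat this as an illustration rather than this project’s actual csproj):

```xml
<!-- Hooks the content build into the normal MSBuild run, so changed .mgcb content
     is rebuilt automatically when the project is compiled. -->
<Import Project="$(MSBuildExtensionsPath)\MonoGame\v3.0\MonoGame.Content.Builder.targets" />
```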
I have reworked the whole file with all the classes reformatted and sent it to you as an email via the MonoGame community board. I could not upload it as a file because only picture formats are allowed.
Don’t be surprised about the SharpDX compiler pragmas in the code.
I have translated all the MonoGame BlendStates, RasterizerStates and DepthStencilStates to direct SharpDX versions and communicate directly with the SharpDX device handle to set the states.
(Wow, it took me some time to understand from the MonoGame source code that all states are buffered and only “really set” in DirectX when the actual draw call is issued.)
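Roughly, the translation looks like the sketch below. This is only an illustration of the idea, not the code from the email; it assumes you already have a SharpDX.Direct3D11.Device and DeviceContext reference taken from the MonoGame GraphicsDevice (e.g. via reflection, since MonoGame does not expose them publicly):

```csharp
using SharpDX.Direct3D11;

// Build a native blend state equivalent to MonoGame's BlendState.Opaque
// and bind it directly on the D3D11 device context, bypassing MonoGame's
// deferred state cache.
var desc = BlendStateDescription.Default();
desc.RenderTarget[0].IsBlendEnabled = false;
desc.RenderTarget[0].SourceBlend = BlendOption.One;
desc.RenderTarget[0].DestinationBlend = BlendOption.Zero;
desc.RenderTarget[0].BlendOperation = BlendOperation.Add;
desc.RenderTarget[0].RenderTargetWriteMask = ColorWriteMaskFlags.All;

var opaqueBlendState = new BlendState(d3dDevice, desc);            // d3dDevice: SharpDX.Direct3D11.Device
d3dContext.OutputMerger.SetBlendState(opaqueBlendState, null, -1); // set immediately on the native context
```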
I did not change the interfaces of the classes, so if it is of any help to you, I think you can nearly copy-paste it into your code.
I changed some early-out mechanisms that you had in your code and moved them to the top of some of the following loops.
You will understand it easily when you check.
I did not dare to open a pull request on GitHub. I tried, but then I was asked to select branches and I was confused about what actually needed to be done.
I hope I could help you, since you have been so kind to answer my questions.
They are easier (not to say much faster) with compute shaders, which we are missing in MonoGame. Without them, voxels are expensive (if you don’t have a 1080 Ti).
Here is still another question, about the culling logic in MeshMaterialLibrary.cs.
In my very humble opinion I conclude that in every MeshPart of a Mesh, the bounding sphere of the whole Mesh is stored and never changed.
For the culling algorithm the position of the bounding sphere is transformed by the corresponding world matrix, but the radius of the bounding sphere remains the same.
So for the Sponza scene there is nearly no gain from culling, because a single visible MeshPart of the Mesh forces all MeshParts to be drawn.
Should it not rather work this way (a rough code sketch follows after the list):
=> on initialize:
the bounding sphere of each MeshPart is calculated and stored (position, radius)
In the game loop:
=> the bounding sphere of the Mesh is used for a fast check of the Mesh as a whole
=> if the Mesh as a whole is visible, the bounding sphere of each MeshPart is transformed and tested in the culling algorithm
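A minimal sketch of that two-level test, assuming a per-part bounding sphere is computed from the vertex data at load time (MonoGame’s ModelMeshPart does not expose one by default, so LocalBoundingSphere and IsVisible here are illustrative names, not the engine’s actual API):

```csharp
// Two-level frustum culling: coarse test per Mesh, fine test per MeshPart.
foreach (var mesh in meshes)
{
    // Coarse test: transform the mesh's local bounding sphere into world space.
    // BoundingSphere.Transform also scales the radius by the largest axis scale of the matrix.
    BoundingSphere meshWorldSphere = mesh.LocalBoundingSphere.Transform(entity.WorldTransform);
    if (!cameraFrustum.Intersects(meshWorldSphere))
    {
        foreach (var part in mesh.MeshParts) part.IsVisible = false;
        continue; // whole mesh outside the frustum, skip all parts
    }

    // Fine test: each part uses its own, smaller bounding sphere.
    foreach (var part in mesh.MeshParts)
    {
        BoundingSphere partWorldSphere = part.LocalBoundingSphere.Transform(entity.WorldTransform);
        part.IsVisible = cameraFrustum.Intersects(partWorldSphere);
    }
}
```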
I have been working further on your code and noticed that, for example, the worldView and worldViewProjection matrices are calculated for each MeshPart in the draw method of the MeshMaterialLibrary class.
Each MeshPart shares the same worldView and worldViewProjection with its Mesh, so it is sufficient to update these matrices only once per draw cycle.
I added the matrix definitions as properties to the BasicEntity class and run an update loop over all entities before the draw cycle. In the draw cycle the precalculated matrices of the entity are used directly without any further calculation. This works fine.
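In rough outline it looks like this (the member and method names are just illustrative, not the exact ones I added):

```csharp
using Microsoft.Xna.Framework;

// Cache the combined matrices once per entity per frame instead of per MeshPart.
public partial class BasicEntity
{
    public Matrix WorldTransform;       // maintained by the entity as before
    public Matrix WorldView;            // cached result of World * View
    public Matrix WorldViewProjection;  // cached result of World * View * Projection

    public void UpdateViewSpaceMatrices(Matrix view, Matrix viewProjection)
    {
        WorldView = WorldTransform * view;
        WorldViewProjection = WorldTransform * viewProjection;
    }
}

// Before the draw cycle, once per frame:
// foreach (var entity in entities)
//     entity.UpdateViewSpaceMatrices(camera.View, camera.ViewProjection);
```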
Now to my question:
Why don’t we always just write the matrices world, view and projection to the shader and let the shader calculate the corresponding worldView, worldViewProjection (and so on) matrices ONCE and use these matrices for all vertices in the draw call?
Is this possible?
How would I write such a scenario in the shader code?
No, it’s not possible, since by default you cannot write to global variables in shaders. You could look into “UAVs”, which are a construct for such a thing, but they are not known to the default MonoGame HLSL compiler as far as I know.
Either way, keep in mind that all shaders are computed in parallel with many threads, so such a construct is not even that useful.
It should also be noted that by default MonoGame and many tutorials and XNA implementations calculate worldViewProjection in the vertex shader; I just figured it’s more efficient to do that only once on the CPU.
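As a minimal sketch of that CPU-side approach (the parameter name and variables are only for illustration): the combined matrix is computed once per draw call and uploaded as a single shader constant, so every vertex only needs one mul(position, WorldViewProj) in the vertex shader instead of re-multiplying world, view and projection.

```csharp
// Compute the combined matrix once on the CPU and write it as one shader constant.
Matrix worldViewProjection = entity.WorldTransform * camera.View * camera.Projection;
effect.Parameters["WorldViewProj"].SetValue(worldViewProjection);

foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    // Draw the geometry; primitiveCount is assumed to be set for the bound buffers.
    graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, primitiveCount);
}
```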
The whole MeshMaterialLibrary is a mess, I apologize again. It was built for my first 3D game when I wasn’t as experienced. None of my meshes had any additional submeshes either.
On an unrelated note: I currently have some unfinished features which I haven’t committed yet, along with some interesting stuff like live shader reloading. Should I just commit with the unfinished stuff in place? It looks like I might run out of time to finish the signed distance field stuff soon.
I have a question about creating the environment cube map and also the shadow map cube map.
For now, let’s not take into account that we want to be able to update only a part of the cube map (culling).
In both cases the corresponding shader gets 6 draw cycles with the whole geometry but different viewProjection matrices.
Inspired by an article about reflective shadow maps that I studied, it should be possible to do only one geometry draw cycle and feed the shader with matrix arrays of view matrices and viewProjections.
The vertex and pixel shaders each loop 6 times.
It should not be too hard to implement, because the render target is already laid out as a rectangle with a 6:1 aspect ratio.
I ask because I measured the cost of a dynamic point light with a cube shadow map and also a dynamic environment map, and the fps dropped from 108 to 40.
Even if we want to be able to update only a part of the cube map (culling), it should be no problem to pass an array of floats used as booleans for a check in the shader loop.
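On the CPU side, the setup for such a single-submission pass might look roughly like this (the effect, the parameter names and the GetCubeFaceView/FaceIsDirty helpers are hypothetical, just to illustrate the idea):

```csharp
// Upload one view-projection matrix and one "needs update" flag per cube face,
// so the whole cube map can be rendered (or partially refreshed) in one geometry pass.
Matrix[] faceViewProjections = new Matrix[6];
float[] faceNeedsUpdate = new float[6]; // interpreted as booleans inside the shader loop

for (int face = 0; face < 6; face++)
{
    faceViewProjections[face] = GetCubeFaceView(lightPosition, (CubeMapFace)face) * cubeProjection;
    faceNeedsUpdate[face] = FaceIsDirty(face) ? 1f : 0f;
}

cubeMapEffect.Parameters["FaceViewProjections"].SetValue(faceViewProjections);
cubeMapEffect.Parameters["FaceNeedsUpdate"].SetValue(faceNeedsUpdate);
// One draw call; the vertex/pixel shader then loops over the six faces internally.
```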
That sounds like great research, I haven’t thought about that. You could also look into spherical mapping or dual hemisphere mapping, which also need fewer loops; the way view projection can work is really fascinating. These are in theory cheaper but need significantly more memory for the same quality.
So it sounds like the idea is to determine which side the current vertex is on (top, right, forward, etc.) and then use that specific view matrix, right?
That can lead to some problematic areas at the borders, but it may be worth exploring.