Deferred Engine Playground - download

Unfortunately I have no idea why this happens with custom shaders; even with super basic ones I have to disable the SharpDX exceptions on startup. They seem to happen with any custom shader, but I'm not sure. :frowning:

Hmm, does that fix it? I'm willing to try it. How can I disable the SharpDX exceptions?

It's weird because it's this line:
IdRenderEffect = content.Load<Effect>("Shaders/Editor/IdRender");

Here is more detail…

HRESULT: [0x80070057], Module: [General], ApiCode: [E_INVALIDARG/Invalid Arguments], Message: The parameter is incorrect.

from the stack trace.

at SharpDX.Result.CheckError()
at SharpDX.Direct3D11.Device.CreatePixelShader(IntPtr shaderBytecodeRef, PointerSize bytecodeLength, ClassLinkage classLinkageRef, PixelShader pixelShaderOut)
at SharpDX.Direct3D11.PixelShader..ctor(Device device, Byte[] shaderBytecode, ClassLinkage linkage)
at Microsoft.Xna.Framework.Graphics.Shader.CreatePixelShader()
at Microsoft.Xna.Framework.Graphics.Shader.PlatformConstruct(Boolean isVertexShader, Byte[] shaderBytecode)

…etc.

I guess I would have to build a custom SharpDX solution from source to know where the exception is coming from.

Some shaders produce warnings in MojoShader; I guess I could fix them, but the compiler chose the right things anyway.

But the billboard shader, for example, doesn't have any of these warnings and is super simple.

Is it possible some required call was skipped in the initialization sequence when I wrapped this line?

if (isInitializedOnce) { screenManager.UpdateResolution(); }

Is this supposed to just fire right up as-is, or am I missing some basic first steps? I mean, I added nothing to Game1 other than that to get it to run past the initial null object exception.

Sorry, I'm not familiar with this app; I literally just tried it and then posted. I didn't expect you to be online.

Oh, never mind, I figured it out: it's the shader model. My card doesn't like 5.0 because it's old and doesn't have full DX11 capabilities.
Guess I know what this relates to now: HRESULT: [0x80070057]

Changing the billboard shader to 4.0 worked, then the next shader errored out after that instead, lol.
Betting it's 5.0 too :slight_smile: You have to change them all.

You can change everything to 4.0; 5.0 doesn't really need anything special, it just does some things ever so slightly differently.
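For reference, the shader model is set per pass in each .fx file's technique block, so dropping from 5.0 to 4.0 is just a matter of changing the compile targets. A minimal sketch (technique, pass, and function names here are placeholders, not the engine's actual shaders):

```hlsl
technique Billboard
{
    pass Pass1
    {
        // was: compile vs_5_0 / compile ps_5_0
        VertexShader = compile vs_4_0 VertexShaderFunction();
        PixelShader  = compile ps_4_0 PixelShaderFunction();
    }
}
```

Every technique in every .fx file has its own pair of targets, which is why they all have to be changed individually.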

Emissives seem to require 5.0. Edit: ah, maybe not, I just got sick of waiting and thought it was going to fail.

I just tried it out; it builds in 4.0. Building takes super long for these files though, since they have some big loops in there, so the compiler takes ages.

Well, I just yanked all the emissive code out, but it's running. It's very cool.

Did you manually change all the shaders to 4.0, or is there a way to quickly tell the app to target shader model 4.0?

Hey Kosmo, I noticed you have a little garbage showing up on the GC, which is probably from the numerical text.
Do you want the code for the no-garbage text class I wrote? I posted it somewhere before, but that was probably an older version. I also have a full FPS demo with a GC allocation/collection counter in an example app. I can zip it up and drop a link on here if you like; it's pretty small and simple if you want to check it out or use it.

Sure, sounds interesting; maybe even drop it on the main page.

Done. You might need to touch it up a bit more; I think this is an older version, but it works.

I tried to reimplement your normal buffer inside my application; why does it look different?
The left side is from your application; the right side is mine.

OK, I think I got it. It's actually that your up and forward vectors are different.

Just some nice resources useful for deferred shaders and shading that I've recently come across again.

I have been thinking about large performance optimizations and an obvious one would be to switch to view space normals and lighting.

Right now I’m using z/w depth and world space normals and lighting. This leads to a lot of unnecessary math inside the pixel shaders but is cheaper for the GBuffer setup and CPU.

The way world space calculations work in a deferred renderer:

  • transform the vertex position with our WorldViewProjection matrix, transform the normal with the World matrix, save the depth from the position
  • store world space normals, store z/w depth
  • per light
    + pass the light position and info to the shader
    + reconstruct the world position of the current pixel by constructing the clip-space position from the screen position and the depth from our depth buffer, then multiplying by the inverse ViewProjection to end up with the world position
  • calculate the vector from the light to the world position
  • this gives us the distance and the direction, which we can compare with our normal (we just read it from the normal buffer)
  • calculate Diffuse/Spec/NdL
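The per-pixel reconstruction step above can be sketched in HLSL like this (function and parameter names are illustrative, not the engine's actual shader code):

```hlsl
float4x4 InverseViewProjection; // set from the CPU side per frame

// texCoord in [0,1], depth = z/w sampled from the depth buffer
float3 ReconstructWorldPosition(float2 texCoord, float depth)
{
    // screen position in clip space: x,y in [-1,1], y flipped
    float4 positionCS = float4(texCoord.x * 2.0f - 1.0f,
                               -(texCoord.y * 2.0f - 1.0f),
                               depth, 1.0f);
    float4 positionWS = mul(positionCS, InverseViewProjection);
    return positionWS.xyz / positionWS.w; // perspective divide
}

// per light: the vector to the light gives distance and direction
// float3 toLight = lightPositionWS - ReconstructWorldPosition(texCoord, depth);
```

The matrix multiply and divide per pixel per light is exactly the cost discussed below.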

The way view space calculations work in a deferred renderer:

  • transform the vertex position with our WorldViewProjection matrix, transform the normal with the WorldView matrix, transform the vertex position with WorldView and save the depth (-> transform the position twice)
  • store view space normals, store depth / farClip (-> linear depth)
  • per light
    + transform the light position to view space
    + pass the light position and info to the shader
    + reconstruct the view position of the current pixel from the screen position and the depth from our depth buffer: with a ray to the frustum plane at the far end, scaled by the linear depth, we get the correct position
  • calculate the vector from the light to the view position
  • this gives us the distance and the direction, which we can compare with our normal (we just read it from the normal buffer)
  • calculate Diffuse/Spec/NdL
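The view-space variant of the reconstruction is much cheaper per pixel; a minimal sketch, with illustrative names:

```hlsl
// frustumRay: interpolated vertex output pointing at the far plane
// (the view-space far-frustum corner, passed per fullscreen-quad vertex)
// linearDepth: viewPosition.z / FarClip, as stored in the depth buffer
float3 ReconstructViewPosition(float3 frustumRay, float linearDepth)
{
    // no matrix inverse and no perspective divide needed
    return frustumRay * linearDepth;
}

// the light position must already be in view space (transformed on the CPU):
// float3 toLight = lightPositionVS - ReconstructViewPosition(ray, linearDepth);
```

This is where the per-light savings in the pixel shader come from: one multiply replaces a full inverse-matrix transform.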

So I’m using the first one.
That's still fine for lighting; in fact, the precomputation and the per-light savings in the pixel shader roughly average each other out at around 1000 lights.

But the inverse matrix transformation for each pixel is not really good when I have a lot of lights.


With this approach I would have a real linear depth buffer, so screen space ray marching should in theory be easier to implement and understand.
And I could use a 2 component normal buffer, but right now I don’t know what the extra slot would be useful for, so I’d rather avoid the additional encoding/decoding cost.
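If the 2-component normal buffer ever becomes worth the slot, spheremap encoding is one common scheme; a minimal sketch of the encode/decode cost being weighed here (not part of the engine):

```hlsl
// view-space normal -> 2 components (spheremap transform)
float2 EncodeNormal(float3 n)
{
    float f = sqrt(8.0f * n.z + 8.0f);
    return n.xy / f + 0.5f;
}

// 2 components -> unit view-space normal
float3 DecodeNormal(float2 enc)
{
    float2 fenc = enc * 4.0f - 2.0f;
    float f = dot(fenc, fenc);
    float g = sqrt(1.0f - f / 4.0f);
    return float3(fenc * g, 1.0f - f / 2.0f);
}
```

The decode runs once per normal-buffer read per light, which is the cost the post is choosing to avoid.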

It would be a pretty big undertaking to switch to view space lighting; basically every shader has to be changed to accommodate it, and the ray marching stuff potentially changes a lot.

Not sure what to do, especially since it works right now, just not hyper fast.

You asked about mesh warping -> check this out


I'm actually checking out your app now, though I'm getting a runtime crash in it somewhere.

Thanks for the link as well; I've actually been wanting to port over the XNA Riemers water sample. At one point I re-did almost all his samples for the Reach profile in XNA, about 20 or so of them.

I also made some alterations to his Fresnel water sample (shown below), though by the point I stopped it didn't have much Fresnel left. I was never able to finish it because I couldn't get it to run in MonoGame.
It looked really nice though, for something like a pond or stream, etc.
I'm hoping that by looking through your shader implementation I might be able to get it to work.

Additionally, I did some work on manually generated NURBS surfaces that I never picked back up; I was thinking about making a mesh editor within MonoGame for custom vertex terrain.

Be sure to use the inverse transpose world matrix if you use non-uniform scaling (that is, scaling with different factors along the axes).
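Concretely, that means transforming normals with a separate matrix computed CPU-side as transpose(invert(World)) and passed to the shader; a small sketch (the parameter name is an assumption, not an existing engine parameter):

```hlsl
float4x4 World;
float4x4 WorldInverseTranspose; // = transpose(invert(World)), computed on the CPU

float3 TransformNormal(float3 normal)
{
    // transforming with World directly would skew normals
    // under non-uniform scaling; the inverse transpose corrects this
    return normalize(mul(normal, (float3x3)WorldInverseTranspose));
}
```

For uniform scaling the two matrices only differ by a scale factor, which the normalize removes anyway.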

Since both approaches (world and view space) have their advantages, it depends on what you plan to do with it. Some algorithms work better with one or the other. For view space, you mentioned getting a real linear depth buffer and that screen space ray marching should be a bit easier to implement. So it sounds like in your case there would be quite a benefit from switching to view space, since you use ray marching algorithms a lot (reflections, volumetric lights…). So I guess you have to decide if it is worth the effort of switching.

I, the mad man, have actually started on the switch to View Space and the switch is a lot less smooth than expected.

For instance, shadow mapping is an additional pain, since you store the shadow map/cubemap in world space coordinates, etc.

The simplicity of the z/w buffer and inverse matrix transformation is not to be underestimated. I still have to figure out how to properly reproject with a jittering buffer now

Plus z/w is stable in perspective, which is really beneficial for screen space actually.

Therefore it should be noted that the engine is not in a great state right now; the GBuffer and point lights are the only things working at the moment.

Screen space reflections are working again and the new code looks a bit cleaner

I’ve used this model for testing


Did you solve the issue with perspective stability? Oh and nice work so far! I think the last image is my favorite so far :slight_smile:

Well, the reprojection is working now, but the TAA will sometimes wrongly reject new frames and I'm not sure why yet; in theory that shouldn't happen. The working TAA version is not pushed to git yet, because I pass some other debug data.

That is an awesome model! It fits well in the Sponza atrium :stuck_out_tongue:
Nice screenshots!