Deferred Engine Playground - download

Well, I just yanked all the emissive code out, but it's running. It's very cool.

Did you manually change all the shaders to 4.0, or is there a way to quickly tell the app to target shader model 4.0?

Hey Kosmo, I noticed you have a little garbage showing up in the GC that is probably from the numerical text.
Do you want the code for the no-garbage text class I wrote? I posted it somewhere before, but that was probably an older version. I also have a full FPS demo with a GC allocation/collection counter in an example app. I can zip it up and drop a link on here if you like; it's pretty small and simple if you want to check it out or use it.

Sure, sounds interesting, maybe even drop it on the main page.

Done. You might need to touch it up a bit more; I think this is an older version, but it works.

I tried to re-implement your normal buffer inside my application; why does it look different?
Left side is from your application; right side is mine.

OK, I think I got it. It's actually that your up and forward are different.

Just some nice resources useful for deferred shaders and shading that I've recently come across again.

I have been thinking about large performance optimizations, and an obvious one would be to switch to view space normals and lighting.

Right now I’m using z/w depth and world space normals and lighting. This leads to a lot of unnecessary math inside the pixel shaders but is cheaper for the GBuffer setup and CPU.

The way world space calculations work in a deferred renderer is (see the sketch after this list):

  • transform vertex position with our WorldViewProjection matrix, transform normal with World matrix, save depth from position
  • store world space normals, store z/w depth
  • per light
    +pass the light position and info to shader
    +reconstruct the world position of the current pixel by constructing the clip space position from the screen position and the depth from our depth buffer, then multiplying by the inverse ViewProjection to end up with the world position.
  • calculate vector from light to the world position,
  • this gives us the distance and the direction which we can compare with our Normal (we just read it from the normal buffer)
  • calculate Diffuse/Spec/NdL
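
The per-pixel part of that light pass could look roughly like the following minimal HLSL sketch. The parameter names, the linear attenuation, and the omission of specular are illustrative assumptions, not the engine's actual code:

```hlsl
float4x4 InverseViewProjection;
float3   LightPositionWS;
float3   LightColor;
float    LightRadius;

Texture2D DepthMap;   // stores z/w depth
Texture2D NormalMap;  // stores world space normals, packed into [0,1]
SamplerState PointSampler;

float3 LightPixel(float2 texCoord)
{
    // Rebuild the clip space position from the screen position and the stored z/w depth.
    float depth = DepthMap.Sample(PointSampler, texCoord).r;
    float4 positionCS = float4(texCoord.x * 2.0f - 1.0f,
                               (1.0f - texCoord.y) * 2.0f - 1.0f,
                               depth,
                               1.0f);

    // One inverse ViewProjection transform per pixel -> world space position.
    float4 positionWS = mul(positionCS, InverseViewProjection);
    positionWS /= positionWS.w;

    // Unpack the stored world space normal.
    float3 normalWS = normalize(NormalMap.Sample(PointSampler, texCoord).xyz * 2.0f - 1.0f);

    // The vector from the surface to the light gives both distance and direction.
    float3 toLight  = LightPositionWS - positionWS.xyz;
    float  dist     = length(toLight);
    float3 lightDir = toLight / dist;

    float NdL         = saturate(dot(normalWS, lightDir));
    float attenuation = saturate(1.0f - dist / LightRadius);

    return LightColor * NdL * attenuation; // specular omitted for brevity
}
```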

The way view space calculations work in a deferred renderer is (see the sketch after this list):

  • transform vertex position with our WorldViewProjection matrix, transform normal with WorldView matrix, transform vertex position with WorldView and save depth (-> transform the position twice)
  • store view space normals, store depth / farClip (-> linear depth)
  • per light
    +transform the light position to WorldView position
    +pass the light position and info to shader
    +reconstruct the view space position of the current pixel from the screen position and the linear depth in our depth buffer: a ray to the frustum plane at the far end, scaled by that depth, gives the correct position (no matrix inverse needed).
  • calculate vector from light to the view position,
  • this gives us the distance and the direction which we can compare with our Normal (we just read it from the normal buffer)
  • calculate Diffuse/Spec/NdL
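
The same per-pixel work in the view space variant, again as a hedged sketch with made-up parameter names; the frustum ray is assumed to be interpolated from the vertex shader (one ray per far-plane corner):

```hlsl
float3 LightPositionVS;  // light position already transformed into view space on the CPU
float3 LightColor;
float  LightRadius;

Texture2D DepthMap;   // stores linear depth: viewPosition.z / FarClip
Texture2D NormalMap;  // stores view space normals, packed into [0,1]
SamplerState PointSampler;

float3 LightPixel(float2 texCoord, float3 frustumRayVS)
{
    // frustumRayVS points from the camera through this pixel to the far plane
    // (interpolated from the frustum corners in the vertex shader), so scaling it
    // by the stored linear depth lands on the view space position - no matrix inverse.
    float linearDepth = DepthMap.Sample(PointSampler, texCoord).r;
    float3 positionVS = frustumRayVS * linearDepth;

    float3 normalVS = normalize(NormalMap.Sample(PointSampler, texCoord).xyz * 2.0f - 1.0f);

    float3 toLight  = LightPositionVS - positionVS;
    float  dist     = length(toLight);
    float3 lightDir = toLight / dist;

    float NdL         = saturate(dot(normalVS, lightDir));
    float attenuation = saturate(1.0f - dist / LightRadius);

    return LightColor * NdL * attenuation; // specular omitted for brevity
}
```

The interesting difference is that the position comes from a single multiply instead of a full inverse matrix transform plus a divide by w.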

So I’m using the first one.
That's still fine for lighting; in fact, the precomputation and the per-light savings in the pixel shader average each other out at around 1000 lights.

But the inverse ViewProjection transform for each pixel is not really good when I have a lot of lights.

PLUS

With the view space approach I would have a real linear depth buffer, so screen space ray marching should in theory be easier to realize and understand.
And I could use a 2-component normal buffer, but right now I don't know what the freed-up slot would be useful for, so I'd rather avoid the additional encoding/decoding cost.
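
For reference, one common 2-component encoding is the spheremap transform (as surveyed in Aras Pranckevičius' "Compact Normal Storage" article); this sketch only illustrates the encode/decode cost being weighed here and is not code from the engine:

```hlsl
// Spheremap transform: packs a unit normal into two components and back.
float2 EncodeNormal(float3 n)
{
    float p = sqrt(n.z * 8.0f + 8.0f);
    return n.xy / p + 0.5f;
}

float3 DecodeNormal(float2 enc)
{
    float2 fenc = enc * 4.0f - 2.0f;
    float  f    = dot(fenc, fenc);
    float  g    = sqrt(1.0f - f / 4.0f);
    return float3(fenc * g, 1.0f - f / 2.0f);
}
```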

It would be a pretty big undertaking to switch to view space lighting; basically every shader has to be changed to accommodate that, and the ray marching stuff potentially changes a lot.

Not sure what to do, especially since it works right now, just not hyper fast.

@willmotil
You asked about mesh warping -> check this out


I'm actually checking out your app now, though I'm getting a crash somewhere while it's running.

Thanks for the link as well. I have actually been wanting to port over Riemer's XNA water sample. At one point I re-did almost all his samples for the Reach profile in XNA, about 20 or so of them.

I also made some alterations to his fresnel water sample (shown below), though by the point I stopped it didn't have much fresnel left, and I was never able to finish it because I couldn't get it to run in MonoGame.
It looked really nice though, for something like a pond or stream etc.
I'm hoping that by looking through your shader implementation I might be able to get it to work.

Additionally, I did some work on manually generated NURBS surfaces that I never picked back up; I was thinking about making a mesh editor within MonoGame for custom vertex terrain.

Be sure to use the inverse transpose of the world matrix if you use non-uniform scaling (that is, scaling with different factors along the axes).
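
As a sketch, the relevant vertex shader part could look like this; WorldInverseTranspose is an assumed parameter name, and the matrix itself would typically be computed on the CPU (in MonoGame, for example, via Matrix.Transpose(Matrix.Invert(world))):

```hlsl
float4x4 WorldViewProjection;
float4x4 WorldInverseTranspose; // computed CPU-side: transpose(inverse(World))

struct VSInput  { float4 Position : POSITION0;   float3 Normal : NORMAL0; };
struct VSOutput { float4 Position : SV_POSITION; float3 NormalWS : TEXCOORD0; };

VSOutput VertexShaderFunction(VSInput input)
{
    VSOutput output;
    output.Position = mul(input.Position, WorldViewProjection);
    // With uniform scaling, mul with (float3x3)World would give the same direction;
    // with non-uniform scaling only the inverse transpose keeps normals perpendicular.
    output.NormalWS = normalize(mul(input.Normal, (float3x3)WorldInverseTranspose));
    return output;
}
```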

Since both approaches (world and view space) have their advantages, it depends on what you plan to do with it. Some algorithms work better with one or the other. For view space, you mentioned getting a real linear depth buffer and that screen space ray marching should be a bit easier to implement. So it sounds like in your case there would be quite some benefit from switching to view space, since you do use ray marching algorithms a lot (reflections, volumetric lights…). So I guess you have to decide if it is worth the effort of switching.

I, the madman, have actually started on the switch to view space, and it is a lot less smooth than expected.

For instance, shadow mapping is an additional pain, since you store the shadow map/cubemap in world space coordinates, etc.
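
To illustrate the extra step: with a view space G-buffer, a world space shadow map lookup first needs an additional transform back through the inverse view matrix. This is only a hedged sketch with assumed parameter names (and a simple spot/directional lookup rather than a cubemap), not the engine's shadow code:

```hlsl
float4x4 InverseView;          // camera view space -> world space
float4x4 LightViewProjection;  // world space -> the light's clip space

Texture2D ShadowMap;
SamplerState ShadowSampler;

float ShadowFactor(float3 positionVS)
{
    // Back to world space first, because the shadow map was rendered with world space coordinates.
    float4 positionWS = mul(float4(positionVS, 1.0f), InverseView);

    // Then into the light's clip space, as in ordinary world space shadow mapping.
    float4 positionLS = mul(positionWS, LightViewProjection);
    positionLS.xyz /= positionLS.w;

    // Clip space [-1,1] -> shadow map texture space [0,1], y flipped.
    float2 shadowTexCoord = positionLS.xy * float2(0.5f, -0.5f) + 0.5f;

    float storedDepth = ShadowMap.Sample(ShadowSampler, shadowTexCoord).r;
    return (positionLS.z <= storedDepth + 0.001f) ? 1.0f : 0.0f; // constant bias, no PCF
}
```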

The simplicity of the z/w buffer and the inverse matrix transformation is not to be underestimated. I still have to figure out how to properly reproject with a jittered buffer now.

Plus z/w is stable in perspective, which is really beneficial for screen space actually.

Therefore it should be noted that the engine is not in a great state at the moment; the GBuffer and point lights are the only things working right now.

Screen space reflections are working again and the new code looks a bit cleaner

I’ve used this model for testing https://sketchfab.com/models/44de9112ce564d35be97a06c302a2fe3


Did you solve the issue with perspective stability? Oh, and nice work! I think the last image is my favorite so far :slight_smile:

Well, the reprojection is working now, but the TAA will sometimes wrongfully reject new frames and I'm not sure why yet; in theory that shouldn't happen. The working TAA version is not pushed to git yet, because I pass some other debug data.

That is an awesome model! It fits well in the Sponza atrium :stuck_out_tongue:
Nice screenshots!

I’ve got environment maps working and I give the option to have a more accurate SSR now with some more jitter.

The relevant commands are
g_SSReflection_Taa
g_SSReflecions_Samples (15 currently, down to 3 is ok for normal non-TAA)


Are the environment maps static ones (cubemap textures), or are they rendered from the point of view of the object which then uses the environment map?

static cubemaps from the camera position

Hi @kosmonautgames… the engine is turning out really awesome!!

I would like to know: if I want to display a 2D video buffer created with a depth render target,

which would be the best place to render it in the engine with your existing render targets?

Thanks,
Bala

I don't understand what exactly you are trying to do. Play back a video?

I think he wants to see the depth rendertarget

@kosmonautgames… OK… Let me put it this way… I have a 2D texture drawn on a RenderTarget with depth information.

How can I compose my RenderTarget with the final output of your engine?

I am trying to have a custom RenderTarget which has to be drawn together with your final RenderTarget before the output is drawn on screen.

Thanks,
Bala