I’ve got environment maps working, and I now offer the option of a more accurate SSR with some additional jitter.
The relevant commands are
g_SSReflection_Taa
g_SSReflecions_Samples (currently 15; going down to 3 is fine for normal non-TAA use)
Are the environment maps static ones (cubemap textures), or are they rendered from the object’s point of view, which then uses that environment map?
static cubemaps from the camera position
Hi @kosmonautgames… the engine is turning out really awesome!!
I would like to know: if I want to display a 2D video buffer created with a depth render target, which would be the best place in the engine to render it, given your existing render targets?
Thanks,
Bala
I don’t quite understand what exactly you are trying to do. Play back a video?
I think he wants to see the depth rendertarget
@kosmonautgames… Ok… let me put it this way: I have a 2D texture drawn on a render target, with depth information.
How can I compose my render target with the final output of your engine?
I am trying to have a custom render target that is drawn together with your final render target before the output is presented on screen.
Thanks,
Bala
First of all, you can press F1 to cycle through my current render targets - color, normals, depth, light etc.
The final output happens in the renderer.cs class, in the RenderMode() function.
There you will see this code:
default:
{
    DrawMapToScreenToFullScreen(_renderTargetFinal);
    DrawPostProcessing();
}
These functions basically use SpriteBatch to draw a texture to the screen, like this:
_spriteBatch.Begin(0, blendState, _supersampling>1 ? SamplerState.LinearWrap : SamplerState.PointClamp, null, null);
_spriteBatch.Draw(map, new Rectangle(0, 0, width, height), Color.White);
_spriteBatch.End();
If you want to blend your output into it you can change the blendstate when calling the function.
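For example, a rough sketch of such a blended overlay pass (the _overlayTarget texture here is hypothetical - substitute your own render target):
// Hypothetical: draw a custom overlay on top of the final output,
// using alpha blending instead of the default opaque blend state.
_spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, SamplerState.PointClamp, null, null);
_spriteBatch.Draw(_overlayTarget, new Rectangle(0, 0, width, height), Color.White);
_spriteBatch.End();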
Sorry the overall code is so messy, I’ll clean it up soonish
Hi @kosmonautgames,
I have done some exploration in your code.
Actually, I have video color pixels with depth info for each pixel.
I am currently drawing the color & depth textures after setting the GBuffer render targets.
I am able to see the color data with the AlbedoMap rendering and the depth data with the DepthMap rendering.
I cannot see the color pixels when I switch to deferred rendering.
What I ultimately want to achieve is that my video pixels occlude or overlap the 3D objects based on the depth information passed for each pixel.
I did this before with my other applications, but here your finalRenderTarget does not contain any DepthStencil information.
So I wonder how and where exactly I should integrate this depth-based video rendering.
I am really SORRY if you got confused, but I would really like to see how my existing application works with your engine.
Still exploring your engine…
Thanks and Regards,
Bala
I’ve updated the code with a lot of comments, especially in the renderer.cs and mainlogic.cs
You might want to get the updated version; it should be easier to understand.
I’m still not 100% sure I understand what it will look like, but you want the final render target to have a depth buffer so you can render additional stuff into it, right?
This would be useful then
But you would have to reconstruct from a linear depth buffer and not the z/w used in the example. And of course give the final render target a depthstencil property.
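A minimal sketch of that last part, assuming MonoGame’s RenderTarget2D (the exact place where the engine creates _renderTargetFinal may differ):
// Recreate the final render target with a depth-stencil buffer attached,
// so additional geometry can be depth-tested against it.
_renderTargetFinal = new RenderTarget2D(graphicsDevice,
    width, height,
    false,                         // no mipmaps
    SurfaceFormat.Color,
    DepthFormat.Depth24Stencil8);  // the depth-stencil property

// When rendering into it, enable depth testing and writing:
graphicsDevice.SetRenderTarget(_renderTargetFinal);
graphicsDevice.DepthStencilState = DepthStencilState.Default;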
Maybe that is the issue… I have a linear depth buffer which is a single float.
Let me try it and let you know.
Thanks for the explanation!
Regards,
Bala
I’ve implemented @willmotil’s StringBuilder (find it here: No Garbage Text and Numbers).
If you disable physics with p_Physics = false, you’ll see the garbage generation is drastically diminished. (Unless you use the console - I don’t care about that yet.)
For comparison you can change c_UseStringBuilder to false.
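The core idea, as a generic sketch (this is not willmotil’s actual code, just the principle):
// Keep one StringBuilder alive and rewrite it each frame instead of
// concatenating new strings, so the HUD text allocates no garbage.
// Note: StringBuilder.Append(int) itself allocates a temporary string,
// which is why numbers are appended digit by digit here.
private readonly StringBuilder _fpsText = new StringBuilder(64);

void AppendIntNoGarbage(StringBuilder sb, int value)
{
    if (value < 0) { sb.Append('-'); value = -value; }
    int divisor = 1;
    while (value / divisor >= 10) divisor *= 10;
    while (divisor > 0)
    {
        sb.Append((char)('0' + (value / divisor) % 10));
        divisor /= 10;
    }
}

void DrawFps(SpriteBatch spriteBatch, SpriteFont font, int fps)
{
    _fpsText.Clear();
    _fpsText.Append("FPS: ");
    AppendIntNoGarbage(_fpsText, fps);
    // SpriteBatch.DrawString has a StringBuilder overload, so no string is built.
    spriteBatch.DrawString(font, _fpsText, Vector2.Zero, Color.White);
}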
I wanted to optimize my lighting with stencil culling when an unforeseen problem arose.
I have a curious bug now, and I have no idea what changed.
Basically, when I draw 1000 lights my FPS goes down to 70. That’s odd, since it used to be 170, but that’s not the real issue - the issue is that even at many times the resolution the frame rate still stays at 70. It doesn’t matter whether I draw at 100x100, 1920x1200 or 3840x1200 (in all of those cases the frame rate previously stayed above 70, so that part is fine).
Even if I change the pixel shader to a one-liner (return float4(1,1,1,1)), my frame time barely changes.
My vertex shader is as simple as it gets, too, and it doesn’t make a difference either if I just return hard-coded values instead of calculating anything.
I then changed the light mesh to be a cube, so only 8 vertices. No difference.
Lighting usually scales linearly with the number of pixels covered, and I wonder why it doesn’t any more.
If I turn away from the lights, the scene renders normally - that is, 1000 fps at low resolution, 200 fps at high resolution. That makes sense, since a lot of the work is pixel shader bound.
Soooo, is it a CPU problem? Possibly - if I check performance profilers, it turns out that over 50% of the work goes into the _graphics.DrawIndexed function, beyond which I can’t trace.
But nothing changed here, I am 100% sure. I still use the same mesh, and it doesn’t matter whether or not I pass the shader variables etc.
Any ideas?
EDIT: Hmm. I’ve reverted to a very old build, where I took this screenshot,
and I get the same results. It doesn’t matter if the lights have volume or not; the frame rate stays the same.
So I would guess the cause is outside of the program. Maybe some Windows thing? I have no clue. Really confused.
I have checked my processor clock rate and it’s normal. No powered-down state, I believe.
EDIT: WOOWOWOW, after a long time I found the error!
In my DirectX properties (Debug -> Graphics -> DirectX properties) I had set some flags. Once those were disabled, everything ran well again.
Can I linearize the depth buffer in your engine’s GBuffer shader?
Will that have side effects on the lighting and shadow mapping shaders?
Now I have two depth textures.
How can I merge these two buffers by comparing their Z values, so that only the pixel nearer to the camera is rendered?
The final render target would then have merged color data from both buffers.
This is my ultimate requirement; please give me some suggestions.
Thanks and Regards,
Bala
The depth buffer in my engine (if you have the newest version, from about six days ago) is completely linear.
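That means a straightforward comparison works. A rough HLSL sketch of such a merge pass (the sampler names are hypothetical, and both depth textures are assumed to store the same linear depth metric):
// Pick whichever source is nearer to the camera for each pixel.
float4 MergePS(float2 TexCoord : TEXCOORD0) : COLOR0
{
    float sceneDepth  = tex2D(SceneDepthSampler, TexCoord).r;
    float videoDepth  = tex2D(VideoDepthSampler, TexCoord).r;
    float4 sceneColor = tex2D(SceneColorSampler, TexCoord);
    float4 videoColor = tex2D(VideoColorSampler, TexCoord);

    // Smaller linear depth = closer to the camera.
    return (videoDepth < sceneDepth) ? videoColor : sceneColor;
}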
OMG!!! Have you changed it in the new version?
Let me check the shaders in your new one… I am using your old version’s code since it gave me higher frame rates…
Let me check…
For better performance, in Recources / GameSettings.cs, in the ApplySettings() function, you can change
g_EnvironmentMapping = true;
g_SSReflection = true;
to false.
Of course unshadowed lights and non-volumetric lights are also faster
Thanks a lot!
I have got it working nicely after setting these two params to false.
I don’t want the reflections at the moment; I can enable them later on…
Thanks a lot, I will integrate with the new one and let you know…
Regards,
Bala
Don’t you have image-based lighting?
I guess so.
But it’s not physically accurate, if that’s what you mean. I would need access to mip maps for real-time usage, but MonoGame doesn’t allow that.
here is an old gif about that
In my game “Bounty Road” I don’t use cube maps; instead I just use an environment strip and calculate an accurate lighting response there. But for the cubemaps here I just take basic mip mapping and make a random guess at what looks good.
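The usual technique that guess approximates - choosing a mip level from roughness - looks roughly like this in HLSL (the sampler, the mip count and the mapping are illustrative assumptions, not the engine’s actual code):
// Blur reflections by sampling a lower mip of the cube map for rougher surfaces.
float3 SampleEnvironment(float3 reflectionDir, float roughness)
{
    // NumberOfMipLevels is assumed to be passed in from the CPU side.
    float mip = roughness * NumberOfMipLevels; // hand-tuned, not physically based
    return texCUBElod(EnvironmentCubeSampler, float4(reflectionDir, mip)).rgb;
}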