Curious behavior with MRT?

Hi!
I’m using MRT and a custom processor to draw my models (using a GBuffer).
I’m drawing to 4 RenderTargets (using device.DrawUserIndexedPrimitives instead of a SpriteBatch, if it matters):

Curiously, it shows the depth only as 0 or 1, nothing in between…

float4 worldPosition = mul(input.Position, World);
float4 viewPosition = mul(worldPosition, View);
output.Position = mul(viewPosition, Projection);
output.Depth.x = output.Position.z; // clip-space z
output.Depth.y = output.Position.w; // clip-space w
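On the pixel-shader side, those two components are then divided to produce the stored depth; a minimal sketch of that step:

output.Depth = float4(input.Depth.x / input.Depth.y, 0.0f, 0.0f, 1.0f); // z / w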

The creation of the RenderTargets:
_RT_Color = new RenderTarget2D(GraphicsDevice, _BackbufferWidth, _BackbufferHeight, false, SurfaceFormat.Color, DepthFormat.Depth24Stencil8);
_RT_Normal = new RenderTarget2D(GraphicsDevice, _BackbufferWidth, _BackbufferHeight, false, SurfaceFormat.Color, DepthFormat.None);
_RT_Depth = new RenderTarget2D(GraphicsDevice, _BackbufferWidth, _BackbufferHeight, false, SurfaceFormat.Single, DepthFormat.None);
_RT_BlackMask = new RenderTarget2D(GraphicsDevice, _BackbufferWidth, _BackbufferHeight, false, SurfaceFormat.Color, DepthFormat.None);

But when I draw each of the RenderTargets to see what was written to it, only the normals are OK.
_RT_Color stays black whatever I try:
PixelShaderOutput output = (PixelShaderOutput)0;

output.Color = float4(1.0f, 0.5f, 0.7f, 0.5f);
return output; // Force an early return so the remaining code never executes

Here are the structs (are these semantics OK?)

struct VertexShaderInput
{
	float4 Position : POSITION;
	float3 Normal : NORMAL0;
	float2 TexCoord : TEXCOORD0;
	float3 Binormal : BINORMAL0;
	float3 Tangent : TANGENT0;
};

struct VertexShaderOutput
{
	float4 Position : SV_POSITION;
	float2 TexCoord : TEXCOORD0;
	float2 Depth : TEXCOORD1;
	float3x3 tangentToWorld : TEXCOORD2;
	float3 ViewDirection : TANGENT0;
};

struct PixelShaderOutput
{
	float4 Color : SV_TARGET0; // COLORx instead?
	float4 Normal : SV_TARGET1;
	float4 Depth : SV_TARGET2;
	float4 SunRays : SV_TARGET3;
};


It is most likely not really 0 or 1.

You may want to check the near and far planes you use with your projection matrix.
Depth precision is determined by the near/far ratio: if near is too close to the camera and far is too… far, you lose too much precision (50% of the precision is spent in the range [near, near * 2]; see the second link below).
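To make that concrete: with the usual non-linear mapping into a [0, 1] depth range, d(z) = f/(f − n) · (1 − n/z), so at z = 2n you already get d = f/(2(f − n)) ≈ 0.5 when f ≫ n. Half of the representable depth values are spent between near and twice near.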

see this and this

After some reading, it seems SV_POSITION returns coordinates in screen space rather than [0, 1], so all my texture sampling is wrong.

Concerning the depth, I use near/far values of 0.05 and 500. It should at least show a gradation near the near plane or really far away, but there is nothing.
It is exactly the same code as in my XNA version, except for the SV_ semantics, which did not exist there.

I ask myself the same question. The only setup I know works for sure is from Riemer Grootjans’ HLSL tutorial (with shadow mapping), especially the shadow map step with the ShadowMap technique of the effect. But there he uses N/F = 5/100, which is a good ratio, a lot better than yours. Try it: draw the shadow map and add some moving “light-camera” code, and you’ll get a grayscale image (despite it being a waste of memory, as the RGBA components are all equal; only one is needed).

Looks like it’s OK; the black can be a color very close to black, or just close enough that it looks like (0, 0, 0) in your output.

Here, this is my depth buffer. It is 100% working, but it may look all black to you, too.


Move your camera super close to the geometry and you might see different shades.

Obviously you can set up your depth buffer in a linear fashion; that will make it more readable for humans (and has other useful properties).
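A minimal sketch of the linear variant, assuming a FarClip effect parameter and a view-space depth passed down from the vertex shader (names are placeholders):

// Vertex shader: store view-space depth instead of clip-space z/w
output.ViewDepth = mul(worldPosition, View).z; // negate if your view space looks down -z

// Pixel shader: divide by the far plane for a linear [0, 1] value
output.Depth = float4(input.ViewDepth / FarClip, 0.0f, 0.0f, 1.0f);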

Setting the far plane closer to the camera also helps with “readability”. If you set it just behind the plants, you will see a gradient for sure.

Am I right to say the POSITION semantic is used by the vertex shader and SV_POSITION by the pixel shader?
Another point: with XNA the depth was shown as a gradient of blue, here it is red. Is MonoGame handling Single-format render targets as true singles?

After some further tests, it seems the model is drawn black even if I clear the render target to gray, as if tex2D returned an empty diffuse texture. But drawing the texture itself shows it is OK.
Drawing the texcoords into the color with
output.Color = float4(input.TexCoord, 0.0f, 1.0f); shows gradients of red and green, as it should.

Perhaps my declaration of the samplers is wrong?

texture diffuseMap;
sampler diffuseSampler = sampler_state
{
	Texture = <diffuseMap>;
	MAGFILTER = … etc.
};
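For comparison, a fully filled-in version would look something like this (typical states, plus the matching tex2D call; not necessarily what I actually have):

sampler diffuseSampler = sampler_state
{
	Texture = <diffuseMap>;
	MagFilter = Linear;
	MinFilter = Linear;
	MipFilter = Linear;
	AddressU = Wrap;
	AddressV = Wrap;
};

float4 diffuse = tex2D(diffuseSampler, input.TexCoord); // old-style sampling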

I am wondering if there is a problem with the custom content pipeline when it assigns textures to the model…

My engine works perfectly under XNA, but it is a real pain to make it work with MG.
Maybe I should recode everything from scratch instead of copy/pasting… But it took me 4 months of my spare time, and I don’t want to lose another 6 months when, with XNA, getting a simple MRT setup working with a custom pipeline effect was so easy.

For what it’s worth, I followed the deferred engine tutorial by Catalin (for XNA 3.0 or so) and it worked from start to finish.

Apart from that: yes, the depth is always shown in red, since a Single-format target has only one channel, i.e. R.
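If you want it to read as grayscale anyway, you can splat the red channel when drawing the debug quad; a minimal sketch (depthSampler and texCoord are placeholder names):

float d = tex2D(depthSampler, texCoord).r; // Single format: only R carries data
return float4(d, d, d, 1.0f);              // replicate to grayscale for display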

The samplers look OK, but they use the old system. You can try Texture = (diffuseMap), but I think both work in this system (old: tex2D(sampler, uv) vs. new: texture.Sample(sampler, uv)).
Both the old and new semantics work, though.
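Side by side, the two systems look like this (a sketch; names are placeholders):

// Old DX9-style declaration and sampling
texture diffuseMap;
sampler diffuseSampler = sampler_state { Texture = <diffuseMap>; };
float4 colorOld = tex2D(diffuseSampler, uv);

// New DX10-style declaration and sampling
Texture2D DiffuseMap;
SamplerState DiffuseSampler;
float4 colorNew = DiffuseMap.Sample(DiffuseSampler, uv);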

Pixel shader input for me is also SV_Position in most cases.

But I have some older .fx files where, curiously, only Position0 is used for both VS and PS instead of the newer SV_Position0 style. I don’t know; I mix all this stuff and it works.

Other stuff I mix:

  • float4x4 and matrix.
  • sampler and SamplerState; texture and Texture2D (these have different functions).
  • Texture : register(t0), and Texture without registering.

Somehow, no problems.
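To answer the semantics question above concretely: POSITION is the semantic on data coming from the vertex buffer into the vertex shader, and SV_POSITION is the semantic on the vertex shader’s output feeding the rasterizer and pixel shader. A minimal pair, as a sketch (WorldViewProjection is assumed to be declared elsewhere):

struct VSIn  { float4 Position : POSITION0; };   // from the vertex buffer
struct VSOut { float4 Position : SV_POSITION; }; // to the rasterizer / pixel shader

VSOut VS(VSIn input)
{
	VSOut output;
	output.Position = mul(input.Position, WorldViewProjection);
	return output;
}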

Did you use the custom pipeline from Catalin too?
If there were a cheat sheet for going from XNA to MG, like the one I followed to upgrade from XNA 3 to 4, it would be great :slight_smile:
I have tried with () in the sampler and got the same result.
Maybe I have missed something, but I cannot find where. And when I find it, I’ll feel so dumb that it was such a stupid mistake…
Anyway, I cannot see why it does not sample from the textures.
I will have to test passing them manually to the shader to see if it works and confirm it is my pipeline extension that fails somewhere.
But if it is, I will have to discard this pipeline and go back to my former solution, which sets up all the data at load time instead of build time. Not an optimal solution…
Maybe it comes from the FBX model, but it is OK under XNA. And the second model I tested with was a conversion of an older unsupported FBX; same result: black.

Models and textures should work the same as in XNA. If they don’t, that’s a bug we should fix. It would be helpful to see your project (code and data), or at least a small sample that demonstrates the problem, so we can determine the cause.

I will make one to narrow down the problem, if I have some spare time today of course. I will upload it by the end of the day (Europe time).

EDIT: Creating a new project showed me there was a bunch of outdated code in my ModelProcessor.
I also ran the test where I set the diffuse texture manually. It draws without problems, so I assume my ModelProcessor is doing something wrong, but without triggering any exception.
So I made it write/log everything to an HTML file, which showed it fails in ConvertMaterial:

ExternalReference<TextureContent> externalRef;
if(!deferredShadingMaterial.Textures.TryGetValue("Texture".ToLower(), out externalRef))
{
	_swLog._e("\t\tConvertMaterial > Texture > TryGetValue failed assigning default value");

	string abspath = Path.Combine(_Folder_ContentBase, "Textures/Default/default_diffuse.tga");
	_swLog._w("abspath: " + abspath);

	//deferredShadingMaterial.Textures["Texture".ToLower()] = new ExternalReference<TextureContent>("Textures/Default/default_diffuse.tga");
	deferredShadingMaterial.Textures["Texture".ToLower()] = new ExternalReference<TextureContent>(abspath);
}

The TryGetValue fails, so the model should at least draw a multicolored dummy texture showing it failed. But it seems even deferredShadingMaterial.Textures["Texture".ToLower()] = new ExternalReference<TextureContent>(abspath); fails.
It uses a default_diffuse.tga, but in the ModelProcessor, should I reference the assets instead of the source files before they are converted to assets?

I have used an XNB parser to achieve this on an FBX model (fern, which has 4 textures:
diffus.tga,
normal.tga,
opcity.tga,
specular.tga.
For now it should only use the diffuse color, or at least the default diffuse color in the engine.)
http://www.filedropper.com/fern_1

The end of the file shows:

and as you can see, it points to a RenderGBuffer. But if I put the shader at the top of the MGCB file, it produces a second .xnb named RenderGBuffer_0. Could that be the mistake?
Why did it build twice?

OK, I think I have found the source of my problem (I’m not really sure what made it not work properly, after 1 hour of blind coding, i.e. a lot of tidying up the code without being able to compile).
Here is what I did to fix it (at least concerning the diffuse render target):

  • The effect applied in the ModelProcessor has been moved to the top of the .mgcb file to avoid building it two or more times. (The ModelProcessor references this one, so it most likely accounted for 50% of the problem on its own.)
  • Removed the textures referenced by the FBX file from the pipeline tool (I don’t know why I added them in the first place, as there was only the FBX in the XNA version).
  • Changed some of the deferredShadingMaterial.Textures keys to match the .fx texture maps (I would say almost the remaining 48% of the problem). I had a MyAlbedoMap in the .fx, whereas in XNA “Texture” was enough to be mapped to this sampler, so
    deferredShadingMaterial.Textures["Texture"]
    was replaced with
    deferredShadingMaterial.Textures["MyAlbedoMap"]
    (see the sketch after this list).
  • A lot of other fixes I don’t remember or that may not be relevant to this problem.
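For reference, the .fx side that the dictionary key has to match looks like this (a sketch based on my effect):

texture MyAlbedoMap; // the key in deferredShadingMaterial.Textures must match this parameter name
sampler diffuseSampler = sampler_state
{
	Texture = <MyAlbedoMap>;
};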

Tonight I’ll try to set up the NormalMap this way too.