Color banding when using RenderTarget2D

I’m using a RenderTarget2D to draw my 3D scene onto a low-resolution texture, and then I draw that RT to the window, scaled up to the largest integer multiple that fits the window resolution and centered. This all works great, but has one issue: the RT seems to be… lower quality? I’ve made sure my RT uses the same GraphicsDevice.PresentationParameters.BackBufferFormat, but for some reason, whenever I render to the RT, it seems to have a lower color depth.
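Here’s roughly the scale math I mean for that blit (just a sketch; `IntegerScale` and `DestRect` are illustrative helper names, not my exact code):

```csharp
using System;

// Largest integer scale at which the RT still fits the window
// (illustrative helper; assumes the window is at least as big as the RT).
static int IntegerScale(int windowW, int windowH, int rtW, int rtH)
    => Math.Max(1, Math.Min(windowW / rtW, windowH / rtH));

// Centered destination rectangle for the scaled blit, as (x, y, width, height).
static (int X, int Y, int W, int H) DestRect(int windowW, int windowH, int rtW, int rtH)
{
    int s = IntegerScale(windowW, windowH, rtW, rtH);
    int w = rtW * s, h = rtH * s;
    return ((windowW - w) / 2, (windowH - h) / 2, w, h);
}
```

The resulting rectangle is what gets passed as the destination rectangle when drawing the RT to the screen.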



I’m kinda wracking my brain wondering if there’s something I’m just not understanding about render targets. I figured FEWER pixels would leave more room for all the colors in the color depth, but no matter how large or small my RT is, it has the same banding and low color depth. And if I simply DON’T draw to the RT, and just draw my models straight to the screen, it immediately looks as intended, full color.

If anyone has any idea how to avoid this kind of issue, I’d love to hear it. Also, if it makes any difference, I’m using an OGL project.

Hi PixelDough,
would you paste some of the code you are using to create the render target? What format are you using in code? How do you draw your object?
About the before and after pictures — does “before” mean before putting it into a render target?
I see color banding in the lower part. That can happen for many reasons; it can also be the topology of the object, or many other things.

I’m using whatever format is set by default in the GraphicsDevice.PresentationParameters.

_mainRenderTarget = new RenderTarget2D(GraphicsDevice,
    (int)RenderResolution.X, (int)RenderResolution.Y, true,
    GraphicsDevice.PresentationParameters.BackBufferFormat, DepthFormat.Depth24,
    GraphicsDevice.PresentationParameters.MultiSampleCount, RenderTargetUsage.PlatformContents);

Yes, before means how it looks when I don’t render the game to a RT, and after is what it looks like in the RT.

I’m just drawing the model using a simple unlit Effect, looping through all meshes in the model and drawing them according to their model transform matrices

When I see the picture, it looks like a normal may be flipped, since there is a strong line, maybe along a triangle. But that should also show up in the non-render-target version, and it doesn’t. That’s very strange; I’ve used render targets a lot but never seen an error like this before.

Can you try rendering a simple shape in red, another in green, and another in blue to a render target and see if they render properly? Maybe there is a problem with the mesh. Also, about the pixelated look: can you render the render target at 1:1 size and see if the error still happens?

Show us the drawing code. Are you using directional lights, or calling EnableDefaultLighting() on BasicEffect?
Also show us the RT at full resolution; it’s hard to see any details at that resolution.


Out of curiosity, is any kind of anti-aliasing/multisampling enabled here? I see GraphicsDevice.PresentationParameters.MultiSampleCount being provided when creating the new RenderTarget2D but I don’t know what you actually have configured.

The result comes from sampling each pixel at a single point and then stretching the image with bilinear scaling, which produces the artifacts.

The perceived loss of color depth is due to the gradient not being axis-aligned, combined with the scaling.

The 4 adjacent pixels in the smaller image (each comprised of a single sample, instead of the hundreds or more per region in the top image) average non-linearly with respect to the gradient. The gradient is still there, but reduced in range, because each sample stands in for a huge area of the source.
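A quick way to see the effect (a toy 1-D sketch, not the actual rendering path): point-sampling a smooth gradient at low resolution and then stretching it back up turns 1-step transitions into large visible bands.

```csharp
using System;
using System.Linq;

// A smooth 1-D gradient: 100 pixels, brightness 0..99 in steps of 1.
int[] gradient = Enumerable.Range(0, 100).ToArray();

// Render at 1/10 resolution: each low-res pixel is a single point sample.
int[] lowRes = Enumerable.Range(0, 10).Select(i => gradient[i * 10]).ToArray();

// Stretch back to full size with nearest-neighbour (like an integer-scaled blit).
int[] upscaled = Enumerable.Range(0, 100).Select(i => lowRes[i / 10]).ToArray();

// The original changed by 1 per pixel; the upscaled image now jumps
// by 10 at each band edge — the same gradient, but visibly banded.
Console.WriteLine(upscaled[9] + " -> " + upscaled[10]); // 0 -> 10
```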

So here’s some more pictures (the model’s texture has changed slightly since the last pics; I added a dark stripe down the center of the top piece).

In all the RT ones, there’s discoloration on the top of the block. I’m really not sure what this is; it does this regardless of the resolution of the RT. I also tried to see if using just the simplest BasicEffect would make a difference, and it does not. I’m not using any lighting, or any shading at all either.

You can see the color banding is worse on the lower resolution render texture, but while stromkos gave an explanation, I struggle to understand why no other engines I’ve used with RTs have ever had this issue for me. I’m not using bilinear scaling or anything, in the RT or afterwards when drawing it to the screen.

looks like a normal may be flipped since it is a strong line maybe a triangle

I thought this might be the case, double checked my normals in my model and everything. It seems to be rendering perfectly fine with the right winding order and everything, no depth issues at all.

Show us the drawing code

public void Draw(Matrix viewMatrix, Matrix projectionMatrix, Color? tint = null)
{
    Color tintUnwrapped = tint ?? Color.White;

    ParamHasTex.SetValue(Texture != null ? 1 : 0);
    ParamIsUnlit.SetValue(IsUnlit ? 1 : 0);
    foreach (ModelMesh mesh in Model.Meshes)
    {
        Matrix localWorld = _modelTransforms[mesh.ParentBone.Index] * WorldMatrix;
        // (rest of the loop body omitted in the original post)
    }
}

I’m using a custom effect, which just currently draws the texture on the model with a color multiplied with it. But like I said, I just tested it with BasicEffect, and the results are the same across the board.

(edit: also, forgot to mention, I did check the multisample values, and the mipmap settings, and no changes to them make any difference to the visuals with regards to this issue)

Depth buffer / Depthstate is wrong for your Render target draws.

How would that happen? It’s the same drawing code, I’m resetting the GraphicsDevice.DepthStencilState to Default before drawing the scene, regardless of if it’s in the RT or not…

It’s not having any other symptoms of depth issues, like objects overlapping one another at certain angles. If there were, these cubes would show faces through one another when I switch to the RT (seen as the low res one here). Instead all switching to the RT does is make things appear slightly weirdly colored in some places

So I think some of the issue might have to do with my UV maps. I’m using a texture with a few color gradients on it, and UV unwrapping sections of the mesh to display various ranges on a gradient. But for some reason, when displaying in the RT, any UVs that have a scale of 0 on the X or Y axis seem to be messed up when rendering. I’m not entirely sure of the cause yet; I’m still working on figuring that part out, and I’ll update this later if I solve it.

Is blendstate opaque and is buffer cleared to same color?

Are you applying the effect pass before draw?

Yes, blendstate is opaque. What do you mean by “cleared to same color”? Using GraphicsDevice.Clear with a specific color?

My effect is set on the MeshParts of the Model when the model is loaded.

Also, just found out that if I set my GraphicsDevice.SamplerState to use AnisotropicClamp, everything suddenly looks right when rendering on the RT. I was using PointClamp, and had already tried LinearClamp.

The sampler state changes how the effect samples the color from the texture based on UV coordinates. If the mode is a point type, it uses exactly one texel’s color without any blending at all; that’s very useful for pixel-art styles or exact pixel-color rendering. I’d guess in your case your texture has a wide range of colors, so point sampling sometimes picked a texel that made your mesh look like that, and without blending the result shows an abrupt change in color. It won’t work well if you want smooth gradients. For smooth gradients/transitions you have to use a filtering mode like anisotropic.

For example, if one texel’s color is (0, 0, 0) and the next texel’s is (1, 1, 1), sampling at UV coordinate 0.4 with point filtering gives (0, 0, 0), but with anisotropic filtering it gives roughly (0.4, 0.4, 0.4), something in between the adjacent colors.
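In scalar form (a simplified sketch that ignores texel-center offsets and mip selection), the difference is just snap versus lerp:

```csharp
using System;

// Point sampling: snap to the nearest of the two texels (no blending).
static float SamplePoint(float a, float b, float t) => t < 0.5f ? a : b;

// Linear/anisotropic-style sampling: interpolate between the texels.
static float SampleLinear(float a, float b, float t) => a + (b - a) * t;

Console.WriteLine(SamplePoint(0f, 1f, 0.4f));  // 0
Console.WriteLine(SampleLinear(0f, 1f, 0.4f)); // 0.4
```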

I assume that you are then rendering the RT with SpriteBatch.
SpriteBatch.End() will change some of the device states.
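If that’s the case, a common fix (a sketch of the usual pattern, not your exact code) is to reset the states 3D rendering expects after each SpriteBatch.End(), since SpriteBatch sets its own depth, blend, and sampler states internally:

```csharp
// Restore 3D-friendly device states after SpriteBatch has run.
GraphicsDevice.BlendState = BlendState.Opaque;
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;
GraphicsDevice.SamplerStates[0] = SamplerState.AnisotropicClamp;
```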

This is a simple scaling issue. Tiny to large never turns out well.