Sample currently set RenderTarget2D

How can I sample the RenderTarget2D that is being drawn to in a different draw call? Alternatively, can I sample the back buffer in any way?

So doing this:

GraphicsDevice.SetRenderTarget(renderTarget);
DrawCall1();
// Sample changes to renderTarget from DrawCall1 in someEffect
DrawCall2(someEffect);

Without doing this:

GraphicsDevice.SetRenderTarget(renderTarget);
DrawCall1();
// Manually apply the render target (expensive operation)
GraphicsDevice.SetRenderTarget(null);
GraphicsDevice.SetRenderTarget(renderTarget);
// Sample changes to renderTarget from DrawCall1 in someEffect
someEffect.Parameters["SomeTex"].SetValue(renderTarget);
DrawCall2(someEffect);

As far as I know, you cannot read from the buffer that is currently being rendered to (this would cause performance issues anyway), and you would have no guarantee whether a given pixel had already been processed or not.

What benefit would you expect from doing that? I doubt that SetRenderTarget is that expensive, or avoidable. It looks like you want to apply a post-effect? If you can, try to integrate that operation into an existing render pass.

I’m trying to accomplish bitwise OR blending. Since there is no such blend-state function, I use additive blending and check whether the bit I want to write to the render target is already set at that pixel: if it is set I return 0, and if it is not set I return the bit. As it turns out, a big chunk of the GPU usage goes into updating the render target (imagine a hundred DrawCall2 calls with the render target being updated between each one).
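For clarity, the trick described above could be sketched as a pixel shader roughly like this. All names here are hypothetical, and SceneTex is assumed to be a resolved copy of the render target, since the target currently bound for writing cannot be sampled:

```hlsl
// Sketch of the bitwise-OR emulation via additive blending (hypothetical names).
texture SceneTex; // a resolved copy of the render target
sampler SceneSampler = sampler_state { Texture = <SceneTex>; };
float BitValue; // the bit to set, normalized: e.g. 8.0 / 255.0 for bit 3

float4 PixelShaderFunction(float2 uv : TEXCOORD0) : COLOR0
{
    float dst = tex2D(SceneSampler, uv).r * 255.0; // red channel as 0..255
    float bit = BitValue * 255.0;                  // e.g. 8 for bit 3
    // floor(dst / bit) mod 2 is 1 when the bit is already set
    float alreadySet = fmod(floor(dst / bit), 2.0);
    // output 0 if the bit is set (additive blend is then a no-op),
    // otherwise output the bit so addition sets it
    return float4(alreadySet > 0.5 ? 0.0 : BitValue, 0, 0, 1);
}
```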

Have you tried to accomplish the desired output with a custom BlendState? I am not that good at those values, but maybe you can get the desired result with the available operators.
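For reference, these are roughly the knobs a custom BlendState exposes in MonoGame; note that BlendFunction only offers Add, Subtract, ReverseSubtract, Min and Max, none of which is a bitwise operation:

```csharp
// A sketch of an additive BlendState built from the available operators.
var additive = new BlendState
{
    ColorSourceBlend = Blend.One,
    ColorDestinationBlend = Blend.One,
    ColorBlendFunction = BlendFunction.Add, // no bitwise OR among the options
    AlphaSourceBlend = Blend.One,
    AlphaDestinationBlend = Blend.One,
    AlphaBlendFunction = BlendFunction.Add
};
```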

I have. Unfortunately, the operations offered are very limited, and I have yet to see bitwise OR implemented using only arithmetic operators, without loops or conditions.

Hmm, applying the render target seems to increase GPU usage just as much even when there are no changes. I would have thought MonoGame somehow called this: which only updates the dirty parts of the render target. Also, the size of the render targets in question has no effect on performance.

But the good news is that if I replace "null" in SetRenderTarget(null) with a different actual render target, it significantly reduces GPU usage. Although it is still a big hit for GPU usage.

If you pass in null, it calls PlatformApplyDefaultRenderTarget(); with non-null values it calls PlatformApplyRenderTargets() (line 809).

So my theory is that PlatformApplyDefaultRenderTarget() is much more expensive. I have no idea why that might be.

Not sure what the hardware actually does behind the scenes, but since textures basically reside on the GPU, the size of the render target shouldn’t matter, because the texture is not actually transferred back and forth.

If you render to a texture, that texture is already on the GPU, so there is no need for UpdateTexture.

Anyway, setting a render target is basically a state change, and that is an actual (sequential) IO operation to the GPU, which is why it’s expensive.

Do you actually want to “OR” the whole color? (Not sure what that would do on a float.)

I see. Is there any cheaper way to apply the render target than swapping render targets?

Each RGB component is a bit field, and I only need to set one bit at a time. Alpha can be whatever. For example, if I want to set bit 3 (2^3) of the red component in the back buffer, I would return this in the pixel shader:

return float4(8.0 / 255.0, 0, 0, 1); // bit 3 = value 8, normalized for an 8-bit UNorm channel

The expected behaviour is that if bit 3 of the red component is already 1, nothing changes; if it is 0, it gets set to 1.

You can’t read from a render target while you are writing to it, only indirectly by using blend states. If you could read and write simultaneously, there would be no reason to even have blend states. I believe in OpenCL/CUDA you can use read/write buffers to do something like that. What are you trying to do, save texture memory?

How can I sample the RenderTarget2D that is being drawn to in a different draw call? Alternatively, can I sample the back buffer in any way?

You draw to the render target first, as it is a fake back buffer, and then in another draw call you can sample from it using a shader.
That is basically the same as sampling from the back buffer. You then draw something to the real back buffer using that data (the original render target) plus a shader that reads it and does something with it.
While it would be somewhat inefficient / bad practice, you could even cast the render target to a texture and use GetData on it to copy its pixel data into an array of colors.
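A minimal sketch of that last (slow) option, assuming rt is a RenderTarget2D that is not currently bound:

```csharp
// Copy the render target's pixels back to the CPU - inefficient, but possible.
Color[] pixels = new Color[rt.Width * rt.Height];
rt.GetData(pixels); // RenderTarget2D inherits GetData from Texture2D
```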


I think what he wants to do is the following; tell me if this sounds like your intent, so we are all clear.

Draw to a rendertarget.

Set the active render target to then be the actual backbuffer.

Take that original rendertarget and then pass it to a shader.

In the shader…
Read data from the render target to perform some manipulation on the current drawings, using that as well as possibly an additional texture etc.

Draw to the back buffer (the screen) using the above output from the shader ?

Or maybe something like this, where the render targets continuously feed off the previous render target’s data in a shader?

For example, in the above, Game1 did roughly the following:

bool firstrun = true;
int flipflop = 0;
RenderTarget2D currentRt;

// alternate between the two render targets each frame
flipflop++;
if (flipflop > 1)
    flipflop = 0;

if (flipflop == 0)
    currentRt = rendertarget1;
else
    currentRt = rendertarget2;

if (firstrun)
{
    // draw something on the initial render target
    firstrun = false; // never enters here again
}

GraphicsDevice.SetRenderTarget(null); // ready to draw to the screen

// set your own effect in Begin(...); that effect uses a texture as if it
// were data itself - the texture in this case is actually the render target.
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque, null, null, null, yourEffect);

// though it might be easier to draw a quad, see here.

// draw the whole texture to the screen (now the back buffer);
// the pixel shader manipulates the data.
spriteBatch.Draw((Texture2D)currentRt, destRect, sourceRect, Color.White);

spriteBatch.End();


The resulting image in that picture is just growing on its own: the algorithm in the pixel shader repeatedly uses the data from the first run and alters it, over and over, each frame.

First of all, I’d like to distinguish between a Game.Draw call and a draw call. When I say “draw call” it really just means GraphicsDevice.DrawUserIndexedPrimitives. The contents of the render target do not persist. If you must know, I’m drawing shadows to it.

What I want to do is the following in one Game.Draw call:

Step 1: Game.Draw is called
Step 2: Set back buffer as render target
Step 3: Sample the render target in a shader that also draws to it
Step 4: While the same render target is still the back buffer, apply the changes from step 3
Step 5: Repeat step 3-4 for every entity in the game

And you can’t just use a whole 8-bit component instead of single bits, because the 8-fold memory requirement is too much?
What do you need so many bits for when doing shadows?

Each bit represents the origin light of the shadow.
In the lighting process, each light checks whether its bit is set at a given pixel; if it is, the light will not illuminate that pixel. This means other lights can still illuminate shadows that are the result of a different light being blocked out.

RGB = 3 * 8 bits = 24 lights in the scene view at once.
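A sketch of how a light index could map to a channel and a bit under this packing scheme (hypothetical helper, assuming 24 lights packed into RGB):

```csharp
// Map light index 0..23 to an RGB channel and a bit mask within that channel.
static (int channel, int mask) LightBit(int lightIndex)
{
    int channel = lightIndex / 8;     // 0 = R, 1 = G, 2 = B
    int mask = 1 << (lightIndex % 8); // the light's bit within the channel
    return (channel, mask);
}
// e.g. light 11 -> channel 1 (G), mask 8 (bit 3)
```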

My game is in 2D BTW.

As I said in my previous post, if you can’t make it work with blending states, the only option is to switch render targets.
I’m wondering if you could change your lighting method though to get the same result. When you have many lights it’s often better to use some kind of deferred lighting model, where you first draw the scene without lights, and then draw one quad for each light in a post process to light your scene. Then you don’t need to switch render targets for every object, because you don’t do any lighting calculations during object rendering.

That’s exactly what I’m currently doing.

But as you say “the only option is to switch render targets” if I want to draw shadows for each object and can’t make it work with blending states.

Would this method maybe work for you:

  • draw all shadows for the first light into a shadow-rendertarget.
  • draw the lighting information for the first light into a light-rendertarget using the shadow texture.
  • repeat those two steps for every light. The lighting-rendertarget accumulates the lighting information from all light sources using blending states.
  • use the accumulated lighting-texture to light the whole scene in one go.
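The steps above could be sketched roughly like this, where DrawShadowCasters, DrawLightQuad and DrawScene are hypothetical helpers standing in for your own drawing code:

```csharp
// One pair of render-target swaps per light, instead of per entity.
foreach (var light in lights)
{
    GraphicsDevice.SetRenderTarget(shadowRT);
    GraphicsDevice.Clear(Color.Black);
    DrawShadowCasters(light);       // all shadow casters for this light

    GraphicsDevice.SetRenderTarget(lightRT);
    DrawLightQuad(light, shadowRT); // additive blending accumulates the lighting
}

GraphicsDevice.SetRenderTarget(null);
DrawScene(lightRT);                 // light the whole scene in one go
```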

The difficult part is accumulating all lights into one light-texture. Accumulating just color/intensity is easy, you simply add them all up. The problem is if light direction matters; that’s not so easy to accumulate, but I think a weighted average of all the incoming light directions might be just fine.

Perhaps. This would cap the number of swaps at the number of lights in the scene. But it means I cannot batch the shadows for each entity together, so it becomes 1 shadow = 1 draw call instead of 1 entity = 1 draw call. So if I had 100 entities and 24 lights, that would be 2400 draw calls instead of 100. However, this is only true when many lights overlap, which is unlikely. Also, I would have to iterate my entire entity list for each light, although that’s a smaller detail in comparison.

So it might be better in certain situations and worse in others, but it is not a vastly superior method by any means.

Maybe you can figure out a way to draw all shadows from one light source in a single draw call, or at least fewer. That’s how I was imagining it, but of course it might not be that easy. I don’t know what shadow method you are using. Is it impossible to draw multiple entity-shadows in one batch (instancing)? Or maybe you could do the shadows as a post-process for the entire scene, rather than per entity.

If you really need one draw call for every entity-shadow, can’t you just set a maximum number of lights affecting one entity? You could use, say, the 4 most influential lights for each object.

Well, for starters I could use one light per RGB component, meaning I could render shadows coming from 3 lights at a time. Your method would also eliminate the max-lights-in-scene limit (which includes non-overlapping lights), currently 24. The only likely performance crash is if I give enemies torches and they all gather in one spot (for example, around the player); I could deal with that by limiting the number of shadows per entity, as you say. I’ll try to think of more optimization possibilities, thanks.