Retrieve current contents of the backbuffer after SpriteBatch.Draw

Hello,

is it possible to retrieve the current contents of the backbuffer after calling .Draw()?

I tried binding a render target, drawing a sprite to it, and then using the render target I had bound as the texture in the next draw call:

	GraphicsDevice.SetRenderTarget(renderTarget);

	spriteBatch.Begin();
	spriteBatch.Draw(tex, Vector2.Zero, Color.Wheat);
	spriteBatch.Draw(renderTarget, new Vector2(100, 200), Color.Red);
	spriteBatch.End();

	GraphicsDevice.SetRenderTarget(null);

	spriteBatch.Begin();
	spriteBatch.Draw(renderTarget, new Vector2(0, 0), Color.White);
	spriteBatch.End();

It seems as if the contents are not flushed until I call GraphicsDevice.SetRenderTarget(null). Am I right? Is there any workaround?

No workaround: you cannot read from the render target you are currently writing to. GPUs work in parallel, so you would never have consistent information about which pixels have been written and which have not; there is no definite state for the whole buffer yet. (Not sure if modern hardware can do it anyway, though.)

So you need to release it by setting a different render target (or null, which is basically the screen). Then you can assign the prior render target as a texture for the next pass and read from it like from any other texture.
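The usual pattern for this is ping-ponging between two render targets: write into one while sampling the finished contents of the other, then swap. A minimal sketch, assuming hypothetical `renderTargetA`/`renderTargetB` fields (not from the original code):

```csharp
// Ping-pong between two render targets: never read and write the same one.
// renderTargetA / renderTargetB are assumed to be created elsewhere, e.g.:
//   renderTargetA = new RenderTarget2D(GraphicsDevice, width, height);
//   renderTargetB = new RenderTarget2D(GraphicsDevice, width, height);

// Pass 1: write into A.
GraphicsDevice.SetRenderTarget(renderTargetA);
spriteBatch.Begin();
spriteBatch.Draw(tex, Vector2.Zero, Color.Wheat);
spriteBatch.End();

// Pass 2: write into B while reading the finished contents of A.
GraphicsDevice.SetRenderTarget(renderTargetB);
spriteBatch.Begin();
spriteBatch.Draw(renderTargetA, new Vector2(100, 200), Color.Red);
spriteBatch.End();

// Final pass: present B to the backbuffer.
GraphicsDevice.SetRenderTarget(null);
spriteBatch.Begin();
spriteBatch.Draw(renderTargetB, Vector2.Zero, Color.White);
spriteBatch.End();

// If you repeat this every frame, swap the references afterwards:
// (renderTargetA, renderTargetB) = (renderTargetB, renderTargetA);
```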

Whether there is a workaround mainly depends on what you are trying to achieve, how it can be done, and whether other mechanics can be used for it.

PS: After calling .Draw, nothing has happened on the GPU anyway; the SpriteBatch just batches the information, and the actual rendering happens in .End(). But it looks like you are aware of that, judging by the code you provided.

It depends on what you mean by retrieve. Honestly, describe what exactly you want to do. With the compute branch and RW textures / structured buffers / coherent buffers, our possibilities have grown considerably (beyond the point of what some people even consider possible, it seems).

Also, the previous comment is misleading: SpriteBatch in immediate mode will create a draw call for every .Draw; in deferred mode it will wait for .End() to flush to the screen and batch together what it can.
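For reference, the mode is chosen in Begin(); a short sketch of the difference:

```csharp
// Deferred (the default): .Draw only queues sprites; the GPU work
// happens when .End() flushes the batch.
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
spriteBatch.Draw(tex, Vector2.Zero, Color.White); // queued
spriteBatch.End();                                // actual draw call(s)

// Immediate: each .Draw issues its draw call right away, so render
// state and effect parameters can be changed between sprites.
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend);
spriteBatch.Draw(tex, Vector2.Zero, Color.White); // drawn immediately
spriteBatch.End();
```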

I wanted to add depth to sprites. I first tried creating two render targets (one for color, the other for depth, like in deferred shading). After each draw call I wanted to pass these two render targets to the custom sprite effect shader. There I would compare the current depth of the sprite's pixel (a depth map is provided separately for every sprite) with the current depth value in the depth render target. If the sprite's depth were higher than the current depth, I wanted to discard that pixel, giving the appearance of depth.

It didn't work out, so I used compute shaders, because that way I can “draw” (copy the sprite's pixel data to a render target) and access the current contents of the render targets whenever I call draw again.

Although it works, it is very inconvenient, especially when considering batching, which results in race conditions (this happens when two sprites intersect).

Isn't there any option to draw to a render target and pass the current contents of that render target to the custom sprite effect?

Why don't you use the normal depth buffer for early-Z rejection? For the compute variant you will need a memory barrier; however, I wouldn't recommend using compute in this case (unless there is more to it).

Could you elaborate? How can I write to and read from the depth buffer during rendering? Or do you rather mean to draw as if I were rendering 3D objects and let the pipeline decide what to discard? But how would I write the sprite's depth value?

Everything you render is rendered as 3D; everything gets transformed within clip space and NDC. You are reading from and writing to the depth buffer the moment you set the DepthStencilState to Default. So do your vertex transformation into clip space as you see fit; it's a vec4, always has been, always will be. Make sure you have the DepthStencilState you want. Done. Don't forget to discard transparent pixels so you don't write their depth. SpriteBatch geometry is nothing but quads: four vertices, two triangles, six indices. It can be as 3D as you want.
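Putting that advice into a sketch (the specific parameter choices, like FrontToBack sorting and point sampling, are assumptions):

```csharp
// Let the regular depth buffer do the rejection: enable depth read/write
// and give each sprite a layerDepth, which SpriteBatch writes into the
// Z coordinate of its quad.
spriteBatch.Begin(
    SpriteSortMode.FrontToBack,     // sort by layerDepth
    BlendState.AlphaBlend,
    SamplerState.PointClamp,
    DepthStencilState.Default,      // depth test + depth write enabled
    RasterizerState.CullCounterClockwise);

spriteBatch.Draw(tex, Vector2.Zero, null, Color.White,
    0f, Vector2.Zero, 1f, SpriteEffects.None,
    0.3f);                          // layerDepth -> Z coordinate

spriteBatch.End();
```

In the sprite effect, fully transparent pixels should be discarded (e.g. `clip(color.a - 0.001)` in HLSL) so they don't write their depth.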

Check whether the layer depth is now written into the Z coordinate or not; if not, modify the source to change that. Last time I checked, it was (including a comment saying they weren't sure about it; it was indeed the right thing to do, by the way).

Note: there are more specific operations you can do with the depth buffer in the pixel shader, but they have some implications and are not required in your case, so I won't elaborate, to prevent confusion.

What kind of depth are you looking for?

My last game uses SpriteBatch and some other tricks here and there to give depth to sprites, make the floor look 3D, and give objects like columns and walls some depth. Everything is rendered using SpriteBatch, with camera tricks to add perspective and stretched textures so I don't have to model anything in 3D. You can watch my YouTube video for some details if this is what you are looking for: Dev log 15 Not 3D! Faking all - YouTube


I know that sprites are nothing but quads drawn with DrawIndexedPrimitives and an appropriate orthographic projection. My point is: how can I pass the sprite's depth texture to the depth buffer?

This is the texture: [image: block]

This is its depth texture: [image: block_depth]

How can I pass the depth values to the depth buffer such that, whenever another sprite is drawn, it is checked whether or not each of the sprite's pixels has a smaller depth than the current value in the depth buffer?

Maybe my question is dumb, but I am pretty sure it should somehow be possible to pass depth values to the depth buffer the same way it's possible to pass RGB values to the backbuffer. But then again, it is not that easy, because I guess the pixel shader is not even called for pixels whose depth value is larger than the one in the depth buffer.

If you have a depth texture, then you have to do per-pixel depth writes; that's where it will become annoying within MonoGame's bounds, if you indeed need that to discard pixels of follow-up draws.
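For illustration, a per-pixel depth write in the sprite's pixel shader would look roughly like this in HLSL. All names here are made up, and whether MonoGame's effect pipeline accepts an SV_Depth output for your target profile is something you would have to verify yourself:

```hlsl
// Hypothetical sketch: output both color and a per-pixel depth taken
// from the sprite's depth texture, so the hardware depth test can
// reject later sprites. Names are illustrative, not from MonoGame.
Texture2D SpriteTexture;
Texture2D SpriteDepthTexture;
SamplerState PointSampler;

struct PSOutput
{
    float4 Color : SV_Target;
    float  Depth : SV_Depth;   // per-pixel depth written to the depth buffer
};

PSOutput MainPS(float4 pos : SV_Position, float2 uv : TEXCOORD0)
{
    PSOutput o;
    o.Color = SpriteTexture.Sample(PointSampler, uv);
    clip(o.Color.a - 0.001);   // transparent pixels must not write depth
    o.Depth = SpriteDepthTexture.Sample(PointSampler, uv).r;
    return o;
}
```

Note that writing depth from the pixel shader typically disables early-Z rejection on the hardware, which is part of why this approach "becomes annoying".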

So basically it's not possible with the current version of MonoGame, right?

I would have to check a few things, and it would be a relatively fair amount of work. In your case I would probably stick with the compute solution and throw in a few memory barriers to sort out your race conditions.
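One way the races between overlapping sprites could be resolved in the compute variant is with an atomic depth test rather than plain writes. This is a different mechanism than the memory barriers suggested above (barriers only synchronize within a thread group); it is a sketch of the idea, not a finished implementation, and every name in it is an assumption:

```hlsl
// Hedged sketch: resolve depth races between overlapping sprites with an
// atomic min on a uint-encoded depth buffer. Smaller value = closer.
// asuint preserves ordering for non-negative floats (depth in [0, 1]).
RWStructuredBuffer<uint> DepthBuffer;   // one uint-encoded depth per pixel
RWTexture2D<float4>      ColorTarget;
Texture2D<float4>        Sprite;
Texture2D<float>         SpriteDepth;

cbuffer Params
{
    uint2 SpriteOrigin;   // where the sprite lands in the target
    uint  TargetWidth;    // row pitch of DepthBuffer in pixels
};

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    uint2 dst   = SpriteOrigin + id.xy;
    uint  index = dst.y * TargetWidth + dst.x;

    uint depth = asuint(SpriteDepth[id.xy]);

    uint previous;
    InterlockedMin(DepthBuffer[index], depth, previous);
    if (depth <= previous)                 // we won the depth test
        ColorTarget[dst] = Sprite[id.xy];
}
```

There is still a small window between the atomic depth update and the color write, so two near-simultaneous winners can interleave; fully eliminating that requires more machinery than belongs in a sketch.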

Yeah, no problem; the compute shader variant does its job, although it's not perfect.