Hi,
The SetData/GetData functions go through the CPU, and that’s the reason they are so slow.
In your case I would keep the work on the GPU by drawing a fullscreen quad with a basic HLSL shader that samples the input texture.
E.g.: float4 PSCopy(VSOutput input) : COLOR0 { return tex2D(SourceTextureSampler, input.TexCoord); }
Load the effect with the Pipeline tool, set your parameters (SourceTexture), and then bind the shader as usual with: myEffect.CurrentTechnique.Passes[0].Apply();
And draw the fullscreen quad: quadRenderer.RenderQuad(graphicsDevice, -Vector2.One, Vector2.One);
(I’m assuming you’re still using the LightPrePass sample.)
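For reference, a complete effect file around that pixel shader might look roughly like this (a sketch in the DX9-style effect syntax the snippet above uses; the vertex shader and technique names are my own, only SourceTexture/PSCopy come from the thread):

```hlsl
// Minimal "copy" effect: draws the source texture onto whatever
// render target is currently bound.
texture SourceTexture;
sampler SourceTextureSampler = sampler_state
{
    Texture = <SourceTexture>;
    MinFilter = Point;
    MagFilter = Point;
    AddressU = Clamp;
    AddressV = Clamp;
};

struct VSOutput
{
    float4 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
};

// Pass the fullscreen quad's positions through unchanged;
// they are already in clip space (-1..1).
VSOutput VSCopy(float3 position : POSITION0, float2 texCoord : TEXCOORD0)
{
    VSOutput output;
    output.Position = float4(position, 1);
    output.TexCoord = texCoord;
    return output;
}

float4 PSCopy(VSOutput input) : COLOR0
{
    return tex2D(SourceTextureSampler, input.TexCoord);
}

technique Copy
{
    pass P0
    {
        VertexShader = compile vs_2_0 VSCopy();
        PixelShader = compile ps_2_0 PSCopy();
    }
}
```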
Texture data is stored on the GPU in VRAM, so you can’t directly copy data into it from system memory. When doing something like this, you should note that data transfer between RAM and VRAM is pretty slow, and reading data back from VRAM isn’t even supported on all platforms. So when you want to copy data from one texture to another, like @Sizaar said, it’s a lot faster (and always supported) to render one texture onto the other (the data never leaves VRAM that way) than to copy the data from VRAM to RAM and back.
You should make the texture you want to render to a RenderTarget2D rather than just a Texture2D so you can actually draw to it.
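As a rough sketch of that (assuming you already have a GraphicsDevice, a SpriteBatch, and a source Texture2D; the variable names are mine, not from the thread):

```csharp
// Render one texture into another via a render target.
// The pixel data stays in VRAM the whole time.
RenderTarget2D target = new RenderTarget2D(
    graphicsDevice, width, height,
    false, SurfaceFormat.Color, DepthFormat.None);

graphicsDevice.SetRenderTarget(target);
spriteBatch.Begin();
spriteBatch.Draw(sourceTexture, target.Bounds, Color.White);
spriteBatch.End();
graphicsDevice.SetRenderTarget(null); // back to the backbuffer

// 'target' can now be sampled like any Texture2D.
```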
In what manner are you receiving data from the camera? If it is a byte buffer, then you have no choice but to use Texture2D.SetData(). Obtaining the Direct3D texture handle will not make it any quicker, as you would be doing the same work that SetData() already does, but likely in a less robust manner. There is also no need to allocate a byte buffer and copy the camera color buffer into it; just pass the camera color buffer to SetData() directly. No duplicate buffers are required.
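In other words, something like this (a sketch; the camera call is a placeholder for whatever your camera API actually provides):

```csharp
// Hand the camera's own buffer straight to SetData -- no intermediate copy.
byte[] cameraBuffer = camera.GetLatestFrame(); // hypothetical camera API call
cameraTexture.SetData(cameraBuffer);
```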
If the data is in a texture already, why copy it to another texture? The suggestions of using a shader to render from one texture to a render target require the camera data to already be in a texture to start with. And the only way that texture would work is if it was created from the Direct3D device context that MonoGame uses. Just use the original texture.
@Sizaar & @Jjagg: Thanks for the reply, but my case is not to copy one texture to another texture.
@KonajuGames: Yes. I am getting my byte buffer filled by calling camera’s API function for each frame. [This is one copy]
I can avoid this unwanted copy by passing the camera’s buffer handle directly to my final Texture2D.SetData().
But the issue is that I declared the Texture2D with the Color format, so it expects RGB32, while the camera’s buffer handle contains data in the YUY2 format. I found no camera setting to change the buffer format.
So I am calling the camera’s GetBuffer() function with the required format, so it internally converts the buffer from YUY2 to RGB32, fills it into my byte buffer, and then I pass my byte buffer to Texture2D.SetData().
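For what it’s worth, the YUY2 → RGB unpacking itself is simple enough to do (or sanity-check) on the CPU if the camera’s built-in conversion ever becomes a bottleneck. A rough sketch using the common BT.601 integer conversion (the method name and buffer layout assumptions are mine, not MonoGame API):

```csharp
// YUY2 packs two pixels into four bytes: Y0 U Y1 V.
// This expands them to RGBA bytes (BT.601 integer approximation).
static byte[] Yuy2ToRgba(byte[] src, int width, int height)
{
    var dst = new byte[width * height * 4];
    int di = 0;
    for (int si = 0; si < src.Length; si += 4)
    {
        int u = src[si + 1] - 128;         // shared chroma for both pixels
        int v = src[si + 3] - 128;
        for (int p = 0; p < 2; p++)
        {
            int c = 298 * (src[si + p * 2] - 16); // per-pixel luma
            dst[di++] = Clamp((c + 409 * v + 128) >> 8);           // R
            dst[di++] = Clamp((c - 100 * u - 208 * v + 128) >> 8); // G
            dst[di++] = Clamp((c + 516 * u + 128) >> 8);           // B
            dst[di++] = 255;                                       // A
        }
    }
    return dst;
}

static byte Clamp(int x) => (byte)(x < 0 ? 0 : x > 255 ? 255 : x);
```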
We don’t explicitly support it, but Direct3D does have a YUY2 surface format. I have seen comments that only AMD cards support this, but that was from several years ago. That would be the most efficient way to support it (dump the YUY2 data straight into a texture with that surface format).
In short, we don’t have direct support for it, but it could be added as a feature request.