Generation Loss between Render Targets

Here’s an interesting conundrum that I did not expect. In my post-processing effects, I’m rendering a scene (and additional effects) back and forth between two RenderTarget2Ds of equal size. However, each render seems to suffer from what I’d describe as “generation loss” - the image gets a tiny bit blurrier each time, so that after four of these exchanges it’s extremely noticeable. Comparing the sequential renders, the image also seems to drift down and to the right a little with each pass.

I suspect it’s due to filtering in the texture sampler, which might imply that I’m not lining up the geometry correctly. At present, I’m rendering an untransformed textured quad spanning (-1, -1, 0) to (1, 1, 0) (texcoords (0, 0) to (1, 1), of course) to the render target. I assumed this would be equivalent to transferring the image perfectly from one target to the other.

Anyone else run into something like this? What’s the best way to correct this offset?

Here’s the solution I found, although I’d still like to know the exact reasoning behind it.

After playing around with the numbers, I found that if I shift the texture coordinates by half a pixel, it comes out just right. This supports my hypothesis that it had something to do with the sampling.
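To see why half a texel matters, here’s a minimal 1-D resampling sketch (in Python, just for illustration). It assumes - if I have the D3D9 conventions right - that texcoords are interpolated at integer pixel centers, so pixel x receives u = x / W, while bilinear filtering treats texel i’s center as (i + 0.5) / W:

```python
import math

def bilinear_sample(tex, u):
    """Bilinear fetch from a 1-D 'texture' at normalized coordinate u,
    with clamp-to-edge addressing."""
    w = len(tex)
    c = u * w - 0.5                      # texel-space position; texel i's
                                         # center sits at c = i
    i0 = math.floor(c)
    frac = c - i0
    a = tex[max(0, min(w - 1, i0))]
    b = tex[max(0, min(w - 1, i0 + 1))]
    return a * (1 - frac) + b * frac

def render_pass(tex, half_texel_offset):
    """Copy a 1-D 'render target' to another of the same size."""
    w = len(tex)
    offset = 0.5 / w if half_texel_offset else 0.0
    return [bilinear_sample(tex, x / w + offset) for x in range(w)]

src = [0.0, 0.0, 1.0, 0.0]               # one bright texel

print(render_pass(src, False))  # [0.0, 0.0, 0.5, 0.5] - smeared and shifted
print(render_pass(src, True))   # [0.0, 0.0, 1.0, 0.0] - exact copy
```

Without the offset, every sample lands exactly between two texels, so each pass averages neighbours and creeps half a pixel - run it repeatedly and you get exactly the generation loss I was seeing.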

My fullscreen quad previously looked like this:

new VertexPositionNormalTexture(new Vector3(-1, 1, 0), Vector3.Backward, new Vector2(0, 0)),
new VertexPositionNormalTexture(new Vector3(1, 1, 0), Vector3.Backward, new Vector2(1, 0)),
new VertexPositionNormalTexture(new Vector3(-1, -1, 0), Vector3.Backward, new Vector2(0, 1)),
new VertexPositionNormalTexture(new Vector3(1, -1, 0), Vector3.Backward, new Vector2(1, 1)),

Now it looks like this:

new VertexPositionNormalTexture(new Vector3(-1, 1, 0), Vector3.Backward, new Vector2(0, 0) + new Vector2(.5f / 1280f, .5f / 720f)),
new VertexPositionNormalTexture(new Vector3(1, 1, 0), Vector3.Backward, new Vector2(1, 0) + new Vector2(.5f / 1280f, .5f / 720f)),
new VertexPositionNormalTexture(new Vector3(-1, -1, 0), Vector3.Backward, new Vector2(0, 1) + new Vector2(.5f / 1280f, .5f / 720f)),
new VertexPositionNormalTexture(new Vector3(1, -1, 0), Vector3.Backward, new Vector2(1, 1) + new Vector2(.5f / 1280f, .5f / 720f)),

As you can see, I’m adding half a pixel (out of 1280 and 720 for x and y respectively, since that’s my current resolution). That makes everything line up so I don’t lose any sharpness, but I don’t know whether I shifted it in the right direction, because I don’t know exactly how the texture samples are calculated. (For example, it seems strange that I’m adding half a pixel to texcoords that are already 1, which would suggest that point has effectively wrapped around the texture - and the same would apply if I subtracted half a pixel from 0.) As I said, I’d still like to know the answer so I can understand this better in the future.
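On the wrapping worry, here’s a quick sketch (Python, for illustration) of which texels a bilinear fetch would actually touch, assuming clamp-to-edge addressing - I haven’t confirmed the sampler state, and with wrap addressing a coordinate past 1 really would read the opposite edge:

```python
import math

def bilinear_taps(u, w):
    """Return the two texel indices and the blend factor a bilinear
    fetch would use at normalized coordinate u, clamping to the edge."""
    c = u * w - 0.5                 # texel i's center sits at c = i
    i0 = math.floor(c)
    frac = c - i0
    def clamp(i):
        return max(0, min(w - 1, i))
    return clamp(i0), clamp(i0 + 1), frac

w = 1024                            # e.g. a 1024-texel-wide target
print(bilinear_taps(1.0 + 0.5 / w, w))  # (1023, 1023, 0.0): only the last texel
print(bilinear_taps(1.0, w))            # (1023, 1023, 0.5): still only the last texel
```

So with clamping, a coordinate of 1 plus half a texel lands exactly on the last texel’s center rather than wrapping.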

Ok, for anyone who may be wondering, here’s what’s up.

After stepping away for an hour and thinking about it (over ice cream, which may have been my muse), I considered that the viewport’s boundaries may not be exactly -1…1. If that were the case, it would take an odd number of pixels for one of them to be centred exactly on 0. I tested this by drawing some line segments at the 1s and -1s, and sure enough, they only appeared at the top and left of the screen, not the bottom or right. This indicates that the viewport coordinates x=1 and y=-1 are actually one pixel off the screen.

As such, my quad as described above was losing one pixel off the right and bottom edges, hence the image’s apparent drift down and to the right. Once I compensated for that - and also for the half-pixel offset described above - everything worked perfectly. My new quad is defined like this (and this time it’s flexible across resolutions):

float sright = (width - 1) * 2.0f / width - 1; // right edge of the screen in viewport coordinates
float sbottom = -(height - 1) * 2.0f / height + 1; // bottom edge of the screen in viewport coordinates
float tleft = .5f / width; // left edge texture coordinate - offset inward by half a texel
float ttop = .5f / height; // top edge, likewise offset by half a texel
float tright = (width - .5f) / width; // right edge - half a texel inside 1
float tbottom = (height - .5f) / height; // bottom edge - half a texel inside 1
vbuf.SetData<VertexPositionNormalTexture>(new VertexPositionNormalTexture[]
{
    new VertexPositionNormalTexture(new Vector3(-1, 1, 0), Vector3.Backward, new Vector2(tleft, ttop)),
    new VertexPositionNormalTexture(new Vector3(sright, 1, 0), Vector3.Backward, new Vector2(tright, ttop)),
    new VertexPositionNormalTexture(new Vector3(-1, sbottom, 0), Vector3.Backward, new Vector2(tleft, tbottom)),
    new VertexPositionNormalTexture(new Vector3(sright, sbottom, 0), Vector3.Backward, new Vector2(tright, tbottom)),
});
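The screen-edge math can be sanity-checked with the viewport transform (sketched in Python; this assumes Direct3D 9’s convention that NDC x = -1 maps to screen x = 0 and that pixel-column centers sit at integer coordinates - which is what the line-segment test above suggested):

```python
def ndc_to_screen_x(x, width):
    """Map an NDC x in [-1, 1] to a screen-space x, where pixel-column
    centers sit at 0 .. width-1."""
    return (x + 1) / 2 * width

W = 1280
print(ndc_to_screen_x(-1, W))   # 0.0 - the center of the first column
print(ndc_to_screen_x(1, W))    # 1280.0 - a full pixel past the last center (1279)

# The corrected right edge lands on the last pixel-column center:
sright = (W - 1) * 2.0 / W - 1
print(ndc_to_screen_x(sright, W))   # ~1279.0
```

That would explain why a line at x = 1 never shows up: no pixel column has its center at screen x = 1280.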

I hope this odyssey helps someone else out there.

After further investigation, I found that even this isn’t exactly correct.

To anyone who is looking for the solution to this matter, please see the guide I wrote here: