Here’s an interesting conundrum that I did not expect. In my post-processing effects, I’m rendering a scene (plus additional effects) back and forth between two RenderTarget2Ds of equal size. However, each render pass seems to suffer from what I’d describe as “generation loss”: the image gets a tiny bit blurrier each time, so that after four of these exchanges it’s extremely noticeable. Comparing the sequential renders, the image also seems to drift down and to the right a little with each pass.
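Roughly, the loop looks like this (XNA 4.0 style, simplified; DrawFullScreenQuad and the "SourceTexture" parameter name are stand-ins for my actual helpers):

```csharp
using System.Collections.Generic;
using Microsoft.Xna.Framework.Graphics;

// Simplified sketch of the ping-pong loop: each effect reads from one
// target and writes into the other, then the two swap roles.
void ApplyPostProcessing(GraphicsDevice device, IEnumerable<Effect> effects,
                         RenderTarget2D rtA, RenderTarget2D rtB)
{
    RenderTarget2D source = rtA;
    RenderTarget2D destination = rtB;

    foreach (Effect effect in effects)
    {
        device.SetRenderTarget(destination);
        effect.Parameters["SourceTexture"].SetValue(source); // placeholder parameter name
        DrawFullScreenQuad(device, effect);                  // placeholder helper

        // Swap roles so this pass's output feeds the next pass.
        RenderTarget2D temp = source;
        source = destination;
        destination = temp;
    }

    device.SetRenderTarget(null); // final result ends up in 'source'
}
```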
I suspect it’s due to texture filtering in the sampler, which might imply that I’m not lining up the geometry correctly. At present, I’m rendering an untransformed textured quad spanning (-1, -1, 0) to (1, 1, 0) (texcoords (0, 0) to (1, 1), of course) to the render target. I thought this would be equivalent to transferring the image perfectly from one target to the other.
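For concreteness, here’s roughly what the quad itself looks like (this would live inside the DrawFullScreenQuad placeholder above; the corner-to-texcoord mapping follows the usual D3D convention of (0, 0) at the top-left, and the exact draw call is paraphrased):

```csharp
// The untransformed full-screen quad: clip-space positions spanning
// (-1, -1) to (1, 1), texcoords (0, 0) to (1, 1).
VertexPositionTexture[] quad =
{
    new VertexPositionTexture(new Vector3(-1,  1, 0), new Vector2(0, 0)), // top-left
    new VertexPositionTexture(new Vector3( 1,  1, 0), new Vector2(1, 0)), // top-right
    new VertexPositionTexture(new Vector3(-1, -1, 0), new Vector2(0, 1)), // bottom-left
    new VertexPositionTexture(new Vector3( 1, -1, 0), new Vector2(1, 1)), // bottom-right
};

// Drawn as a two-triangle strip with the effect's first pass applied.
effect.CurrentTechnique.Passes[0].Apply();
device.DrawUserPrimitives(PrimitiveType.TriangleStrip, quad, 0, 2);
```

I’m vaguely aware that D3D9 (which XNA sits on top of) puts pixel coordinates and texel coordinates half a pixel apart, which sounds like it could produce exactly this kind of one-directional creep, but I’m not sure whether that’s actually the culprit here or how best to compensate for it.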
Anyone else run into something like this? What’s the best way to correct this offset?