Scaling RenderTarget2D to viewport causes blurriness

I’m trying to have my game render to a 640x360 render target and then render that to the screen in order to achieve a “virtual resolution”, so that the game has the same “zoom” no matter how big the window is. This is my draw code, where the render target is drawn to and then put on the screen:
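Roughly, the structure is this (simplified, with placeholder field names):

```csharp
protected override void Draw(GameTime gameTime)
{
    // Draw the whole game into the 640x360 render target first.
    GraphicsDevice.SetRenderTarget(_renderTarget);
    GraphicsDevice.Clear(Color.Black);

    _spriteBatch.Begin();
    GameStateManager.Instance.Draw(_spriteBatch);
    _spriteBatch.End();

    // Then draw the render target stretched over the whole back buffer.
    GraphicsDevice.SetRenderTarget(null);
    GraphicsDevice.Clear(Color.Black);

    _spriteBatch.Begin();
    _spriteBatch.Draw(_renderTarget,
        new Rectangle(0, 0, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height),
        Color.White);
    _spriteBatch.End();

    base.Draw(gameTime);
}
```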

When I use this draw code, the text is awkwardly stretched out and becomes blurry as seen here:

After Googling I’ve seen that using point clamp sampling is the solution but using it made the pixels in the text even more obvious. This is how I initialise my render target (where the width and height are 640 and 360):

[image]
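In code, that’s essentially (simplified):

```csharp
// Created once (e.g. in LoadContent); width and height are 640 and 360.
_renderTarget = new RenderTarget2D(GraphicsDevice, 640, 360);
```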

I am using a cross-platform OpenGL MonoGame project on Windows 10. If anyone else has encountered this issue, please let me know. Thanks

Looks pretty clear to me… :wink:

Just pointing out you used the wrong image…

Welcome back!

What’s the value of Viewport.Width and Viewport.Height in this case? If it’s not an exact multiple of your virtual resolution, the render target won’t scale well.


Fixed the image. Thanks for pointing it out, I never would’ve noticed it in my 2AM state.


1920x1080. I would expect 640x360 to scale up perfectly considering they’re both 16:9.

Are you using SamplerState PointClamp?

Yes, I passed it into SpriteBatch.Begin as well as setting it on GraphicsDevice.
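Something like this (simplified):

```csharp
// Set on the device...
GraphicsDevice.SamplerStates[0] = SamplerState.PointClamp;

// ...and also passed into Begin for the pass that draws the render target to the screen.
_spriteBatch.Begin(samplerState: SamplerState.PointClamp);
_spriteBatch.Draw(_renderTarget,
    new Rectangle(0, 0, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height),
    Color.White);
_spriteBatch.End();
```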

Are the pixels in the render target all 100% black or white? I can’t explain how you would get blurring with point sampling.

Could it be something external, like a super-resolution GPU feature or display scaling? Maybe try a different PC, or see how it behaves with DirectX.

In this case yes, because it’s an exact multiple. If the factor is 2.5, however, some pixels will be scaled up to 2 pixels wide and others to 3 pixels wide, which is not ideal.
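If you want it to stay crisp at arbitrary window sizes, one option is to snap to the largest integer scale that fits and letterbox the rest. Rough sketch (untested, assumes the usual _renderTarget/_spriteBatch fields):

```csharp
// Largest integer scale of the 640x360 target that fits the current back buffer.
int scale = Math.Max(1, Math.Min(
    GraphicsDevice.Viewport.Width / 640,
    GraphicsDevice.Viewport.Height / 360));

int destWidth = 640 * scale;
int destHeight = 360 * scale;

// Centre the scaled image; whatever is left over becomes a letterbox border.
var destination = new Rectangle(
    (GraphicsDevice.Viewport.Width - destWidth) / 2,
    (GraphicsDevice.Viewport.Height - destHeight) / 2,
    destWidth,
    destHeight);

_spriteBatch.Begin(samplerState: SamplerState.PointClamp);
_spriteBatch.Draw(_renderTarget, destination, Color.White);
_spriteBatch.End();
```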

Have you got multisampling enabled anywhere in the code?

Could be that the 640 by 360 render target is multisampled, then scaled.

I enabled PreferMultiSampling, but the same issue happens whether it’s true or false.

I don’t see anything wrong with the image myself. It looks like the normal outcome of a 2x upscale with bilinear filtering. Yeah, it makes things you expect to be crisp, like text, look worse when you render them with scale, especially that much scale. There are a ton of different ways to approach this problem… just to list a few: a second pass at final resolution for text, rendering text with geometry and using MSAA, TTF text rendering at runtime, …
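For example, the “second pass at final resolution for text” approach looks roughly like this; DrawWorld and _uiFont are stand-ins for however you split your world and UI drawing:

```csharp
// 1) Low-res pass: pixel-art world content into the 640x360 render target.
GraphicsDevice.SetRenderTarget(_renderTarget);
GraphicsDevice.Clear(Color.Black);
_spriteBatch.Begin(samplerState: SamplerState.PointClamp);
DrawWorld(_spriteBatch);   // placeholder for your world drawing
_spriteBatch.End();

// 2) Upscale the world to the back buffer.
GraphicsDevice.SetRenderTarget(null);
_spriteBatch.Begin(samplerState: SamplerState.PointClamp);
_spriteBatch.Draw(_renderTarget,
    new Rectangle(0, 0, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height),
    Color.White);
_spriteBatch.End();

// 3) Draw text/UI directly at the window resolution so it stays sharp.
_spriteBatch.Begin();
_spriteBatch.DrawString(_uiFont, "Score: 1234", new Vector2(20, 20), Color.White);
_spriteBatch.End();
```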

I’m not sure I’m understanding the problem. Are you rendering at a lower resolution than you are outputting, so you are stretching the render target to match? If that’s the case, this is the expected behavior. I noticed you mentioned turning on point sampling, which you didn’t like because it “made the pixels more obvious”, which is also expected. You can’t just add pixels that aren’t there to begin with. You’d need either a bilateral blur or some other “smart”/image-aware upscaling technique to mask the fact that you are missing pixels.


If what is showing on the screen is normal, how would I be able to achieve virtual resolution without everything looking blurry? Is there some sort of blur or sampling I can use that would be ideal for the likely scenario that the render target will be smaller than the window?

I can see that you did this in your code before you draw to the render target, but did you also do it when you actually drew the text to the render target?

(i.e., somewhere in GameStateManager.Instance.Draw)

Check the sampler state value at the point you actually call the DrawString method on the sprite batch you pass to it.
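In other words, whatever Begin call wraps that DrawString needs the point sampler too, something like this (font and position are placeholders):

```csharp
// Inside GameStateManager.Instance.Draw (sketch): the Begin that wraps the text
// drawing also needs PointClamp, otherwise the text is filtered when it is
// rasterised into the render target.
spriteBatch.Begin(samplerState: SamplerState.PointClamp);
spriteBatch.DrawString(font, "Hello", position, Color.White);
spriteBatch.End();
```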

I listed several approaches to this above; also, someone else just made this post…