Resolution Independent Rendering in Fullscreen Mode

I’ve been banging my head against this issue all day and just cannot find a proper solution for it. I’m hoping someone can provide some insight.

I currently render my entire game using resolution-independent code based on David Amador’s virtual resolution code, which seems to be used all over the place. I have a virtual resolution of 1920x1080 and a render target of the same size which I render everything to. After that, I render that texture to the backbuffer using SpriteBatch and a viewport / scale matrix calculated from the difference between the virtual and actual resolution.

This all works perfectly when my application is in windowed mode. Here’s an example with a virtual resolution of 1920x1080 and an actual resolution of 1920x1200.

The issue arises once I switch to fullscreen, which results in the following:

Now, this result is not entirely unexpected. My monitor’s display is 3840x2160, so toggling to fullscreen changes the size of the backbuffer, and the SpriteBatch code simply renders the render target at its 1920x1080 size. What I can’t figure out is the proper way to solve this.

The easy way to solve this seems to be to simply lock the resolution to the actual monitor display resolution when in fullscreen. So, for example, in my case, when in fullscreen mode my virtual resolution is always 1920x1080 and my actual resolution is 3840x2160. When in windowed mode the user can continue to set the resolution to whatever they desire.

However, most games I’ve played allow you to set the resolution even in fullscreen mode. So if I want to set my resolution to something other than my monitor’s resolution, I can do so and it will stretch out to fill the screen. Visually this is easy enough: just change the SpriteBatch call on the render target to always use the window’s client bounds. The problem is that my mouse input is suddenly off, because while I’m stretching that render target out to 3840x2160, my input is still using the virtual 1920x1080 and the set resolution as defined in my resolution class.

I’m hoping someone has been able to tie this all together and get it working properly, and could provide some insight.

Important code bits:

Main Draw Function:

base.Draw( gameTime );
Viewport regularViewport = GraphicsDevice.Viewport;

GraphicsDevice.SetRenderTarget( _MainRenderTarget );
GraphicsDevice.Clear( Color.CornflowerBlue );
GraphicsDevice.DepthStencilState = DepthStencilState.Default;

// My game state, particle, UI rendering, etc

GraphicsDevice.SetRenderTarget( null );
GraphicsDevice.Viewport = _Resolution.VirtualViewport;
GraphicsDevice.Clear( Color.Black );

_SpriteBatch.Begin( SpriteSortMode.Deferred, null, null, null, null, null, _Resolution.Transform );
_SpriteBatch.Draw( _MainRenderTarget, Vector2.Zero, Color.White );
_SpriteBatch.End();

GraphicsDevice.Viewport = regularViewport;

And the functions in my resolution class for calculating the transform and the virtual viewport:

public void CalculateVirtualViewport()
{
	// figure out the largest area that fits in this resolution at the desired aspect ratio
	int width = _DeviceManager.PreferredBackBufferWidth;
	int height = (int)(width / VirtualAspectRatio + .5f);

	if ( height > _DeviceManager.PreferredBackBufferHeight )
	{
		// pillarbox: clamp to the backbuffer height instead
		height = _DeviceManager.PreferredBackBufferHeight;
		width = (int)(height * VirtualAspectRatio + .5f);
	}

	// set up the new viewport centered in the backbuffer
	Viewport viewport = new Viewport();

	viewport.X = (_DeviceManager.PreferredBackBufferWidth / 2) - (width / 2);
	viewport.Y = (_DeviceManager.PreferredBackBufferHeight / 2) - (height / 2);
	viewport.Width = width;
	viewport.Height = height;
	viewport.MinDepth = 0;
	viewport.MaxDepth = 1;

	_VirtualViewport = viewport;
}

private void CalculateTransform()
{
	float scale = (float)_ActualWidth / (float)_VirtualWidth;
	_Transform = Matrix.CreateScale( scale, scale, 1.0f );
}

As far as I’m aware, in any game, when you go fullscreen your output resolution has to be the native resolution of the monitor.

Games will lock the resolution to the full screen resolution.

If you wanted to “lower the resolution” for faster drawing, you would render to a lower-resolution back buffer (render target), then scale that up to the monitor’s fullscreen resolution when outputting to the monitor.

The inverse of that is supersampling: draw to a higher-resolution back buffer, then scale down to the monitor resolution.
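That upscale pass can be sketched like this, reusing the _SpriteBatch naming from the original post. The 1280x720 internal resolution is just an arbitrary example:

```csharp
// Sketch: render the scene into a smaller target, then stretch it
// over the full back buffer on the final draw.
RenderTarget2D lowRes = new RenderTarget2D( GraphicsDevice, 1280, 720 );

GraphicsDevice.SetRenderTarget( lowRes );
GraphicsDevice.Clear( Color.CornflowerBlue );
// ... draw the scene at 1280x720 ...
GraphicsDevice.SetRenderTarget( null );

// Stretch to whatever size the back buffer actually is.
PresentationParameters pp = GraphicsDevice.PresentationParameters;
_SpriteBatch.Begin();
_SpriteBatch.Draw( lowRes,
	new Rectangle( 0, 0, pp.BackBufferWidth, pp.BackBufferHeight ),
	Color.White );
_SpriteBatch.End();
```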

I get a list of all available resolutions, and also get the monitor’s fullscreen resolution. On going fullscreen I switch to that.
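Something along these lines, assuming the _DeviceManager (GraphicsDeviceManager) from the original post:

```csharp
// Sketch: enumerate the adapter's supported display modes for the
// resolution list, and use the monitor's current (native) mode when
// switching to fullscreen.
var resolutions = new List<Point>();
foreach ( DisplayMode mode in GraphicsAdapter.DefaultAdapter.SupportedDisplayModes )
	resolutions.Add( new Point( mode.Width, mode.Height ) );

DisplayMode native = GraphicsAdapter.DefaultAdapter.CurrentDisplayMode;
_DeviceManager.PreferredBackBufferWidth = native.Width;
_DeviceManager.PreferredBackBufferHeight = native.Height;
_DeviceManager.IsFullScreen = true;
_DeviceManager.ApplyChanges();
```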

For translating input into the virtual resolution use Viewport.Unproject(…).
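A rough sketch of that, assuming the _Resolution.Transform scale matrix from the original post; the orthographic projection mirrors the one SpriteBatch uses internally over the viewport:

```csharp
// Sketch: Viewport.Unproject inverts the draw-time mapping, taking the
// letterbox/pillarbox viewport offset into account automatically.
Viewport vp = GraphicsDevice.Viewport;
Matrix projection = Matrix.CreateOrthographicOffCenter( 0, vp.Width, vp.Height, 0, 0, 1 );

MouseState mouse = Mouse.GetState();
Vector3 virtualPos = vp.Unproject(
	new Vector3( mouse.X, mouse.Y, 0f ),
	projection,
	Matrix.Identity,          // no view matrix in plain 2D
	_Resolution.Transform );  // the virtual-resolution scale
// virtualPos.X / virtualPos.Y are now in 1920x1080 virtual coordinates.
```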

About the drawing, I would try rendering the _MainRenderTarget stretched to the viewport width/height, without the _Resolution.Transform.

I am not familiar with the code in question, but I suppose the idea is to use _Resolution.Transform in SpriteBatch without the intermediate render target: pass the transform to SpriteBatch and draw your sprites directly.

Also, do not use _DeviceManager.PreferredBackBufferWidth/Height in drawing code. They don’t always contain the same value as the actual backbuffer; use GraphicsDevice.PresentationParameters.BackBufferWidth/Height to get the backbuffer size.
Preferably, in drawing code use the viewport width/height to draw inside the bounds of the visible window, and in update code use Viewport.Project()/Unproject() to convert mouse/touch input from screen coordinates (actual pixels) to world coordinates (virtual resolution).
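In other words (variable names here are just for illustration):

```csharp
// Sketch: PreferredBackBufferWidth/Height is only a request; read back
// the size the device actually created.
PresentationParameters pp = GraphicsDevice.PresentationParameters;
int actualWidth = pp.BackBufferWidth;
int actualHeight = pp.BackBufferHeight;

// For drawing, stay inside the visible bounds via the viewport instead.
Viewport vp = GraphicsDevice.Viewport;
Rectangle drawArea = new Rectangle( 0, 0, vp.Width, vp.Height );
```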

Resolution independence is one of those things I have done a lot of, and it’s a horrible problem.

Your choice to render everything at a fixed size is one option, but I hate it.

The problem is that when you scale your beautifully rendered screen to a different resolution, you get scaling artifacts. The display gets blurred or crunched and to my eyes looks awful. This is just my opinion, though; other people are perfectly happy with this technique. But I feel that if I have spent all this money getting graphics created that look awesome, why ruin them by scaling?

I tend to calculate the main display size from the screen resolution and design my GUI to have built-in resolution independence, by calculating sizes and positions from the screen resolution and using multiple blits to render elements instead of a single blit.

If you are set on this technique, then I advise you to use the hardware to your advantage. Turn on multisampling, and think about trying some 2D shaders to offset the scaling artifacts. Contrast adjustment and the like can help.
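Requesting multisampling in MonoGame might look like this, reusing the _DeviceManager/_MainRenderTarget names from the original post (the driver may clamp the sample count):

```csharp
// Sketch: ask for MSAA on the device (override the sample count in
// PreparingDeviceSettings) and on the intermediate render target.
_DeviceManager.PreferMultiSampling = true;
_DeviceManager.PreparingDeviceSettings += ( sender, e ) =>
	e.GraphicsDeviceInformation.PresentationParameters.MultiSampleCount = 4;

_MainRenderTarget = new RenderTarget2D(
	GraphicsDevice, 1920, 1080, false,
	SurfaceFormat.Color, DepthFormat.Depth24,
	4,                               // preferredMultiSampleCount
	RenderTargetUsage.DiscardContents );
```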

Mouse handling is trivial as long as you have a central input handler and don’t get lazy and just call Mouse.GetState() all over the place.

I tinkered around with resolution-independent rendering before and decided not to use it in the end.

Instead, I’m basically doing it like this:

  • Setting up a projection, view, and world matrix
  • Creating a BasicEffect in my drawable game components
  • Setting the above-mentioned matrices on the BasicEffect
  • Drawing everything with the BasicEffect in a SpriteBatch overload
  • Using Viewport.Project/Unproject to get the right coordinates
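The steps above could be sketched roughly like this; all names and matrix values are illustrative, assuming a fixed 1920x1080 virtual space:

```csharp
// Sketch: a BasicEffect whose fixed orthographic projection maps virtual
// 1920x1080 coordinates regardless of the actual back buffer size.
var effect = new BasicEffect( GraphicsDevice )
{
	TextureEnabled = true,
	VertexColorEnabled = true,  // needed for sprite tinting via SpriteBatch
	World = Matrix.Identity,
	View = Matrix.Identity,
	Projection = Matrix.CreateOrthographicOffCenter( 0, 1920, 1080, 0, 0, 1 ),
};

_SpriteBatch.Begin( SpriteSortMode.Deferred, BlendState.AlphaBlend,
	SamplerState.LinearClamp, DepthStencilState.None,
	RasterizerState.CullCounterClockwise, effect );
// ... draw components in virtual 1920x1080 coordinates ...
_SpriteBatch.End();
```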

I draw all of my drawable game components into a single RenderTarget so I can apply anti-aliasing to avoid scaling artefacts as much as possible.

The outcome for me is pretty nice.


Appreciate the replies!

The main reason I’m trying to do it this way is for the sake of simplicity. We have several different systems and a lot of custom shaders to support lighting. So to handle it without just stretching out the render target, resolution would need to be handled by each system: the UI (billboarded quads), the 3D world, 3D particles, 2D billboarded characters driven by Spine, etc. Example screen cap:

If I was primarily 2D I think it would definitely be easier to handle, and admittedly I should have made sure I properly planned for this from the start. I waited a little too long to tackle it and I was hoping for an easier solution than going through and making sure each individual system handles different resolutions properly.

I think for the demo we’re putting out, I am likely going to do as boot suggested and simply lock the full screen resolution. This should at least ensure it generally works for everyone when playing, and long-term I will probably handle it in a more organic fashion as Stainless and sqrMin1 are talking about.

Yeah, the original code I based mine on works as you’ve described. I thought it was a pretty elegant way of handling this issue. However, it is geared primarily towards 2D games using SpriteBatch, so it’s only been marginally useful to me. I suspect I’m trying to fit a square peg through a round hole.

Thanks for the tips on PreferredBackBuffer vs. PresentationParameters, that’s good to know.

In 3D, if you are creating the projection with a field of view, then the camera is already resolution-agnostic. You don’t need to do anything.
If you are creating the projection from a width/height taken from the viewport/back buffer size, then use the virtual resolution instead, e.g. Matrix.CreatePerspective(1920, 1080, nearPlane, farPlane).
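For the field-of-view case, something like this is already independent of the back buffer size; the aspect ratio comes from the fixed virtual resolution, and the near/far values are arbitrary examples:

```csharp
// Sketch: a FoV projection only needs an aspect ratio, so it is
// resolution-agnostic by construction.
Matrix projection = Matrix.CreatePerspectiveFieldOfView(
	MathHelper.PiOver4,
	1920f / 1080f,   // virtual aspect ratio, not the back buffer's
	0.1f, 1000f );
```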
