I’ve been banging my head against this issue all day and just cannot find a proper solution for it. I’m hoping someone can provide some insight.
I currently render my entire game using resolution-independent code based on David Amador’s virtual resolution code, which seems to be used all over the place. I have a virtual resolution of 1920x1080 and a render target of the same size to which I render everything. After that, I render that texture to the backbuffer using SpriteBatch and a viewport and scale matrix calculated from the difference between the virtual and actual resolutions.
This all works perfectly when my application is in windowed mode. Here’s an example with a virtual resolution of 1920x1080 and an actual resolution of 1920x1200.
The issue arises once I switch to fullscreen and results in the following:
Now, this result is not entirely unexpected. My monitor’s resolution is 3840x2160, so toggling to fullscreen changes the size of the backbuffer, while the SpriteBatch code still renders the render target at its 1920x1080 size. What I can’t figure out is the proper way to solve this.
The easy way to solve this seems to be to simply lock the resolution to the monitor’s native resolution when in fullscreen. In my case, for example, fullscreen would always mean a virtual resolution of 1920x1080 and an actual resolution of 3840x2160, while in windowed mode the user can continue to set the resolution to whatever they desire.
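If I go that route, I assume the toggle would look roughly like this (untested sketch; ApplyDisplayMode is just a hypothetical helper of mine, and _DeviceManager is my GraphicsDeviceManager):

// Sketch: lock the backbuffer to the monitor's native resolution in fullscreen,
// keep the user's chosen size in windowed mode
private void ApplyDisplayMode( bool fullscreen, int windowedWidth, int windowedHeight )
{
    if ( fullscreen )
    {
        // Native desktop resolution, e.g. 3840x2160 on my monitor
        DisplayMode native = GraphicsAdapter.DefaultAdapter.CurrentDisplayMode;
        _DeviceManager.PreferredBackBufferWidth = native.Width;
        _DeviceManager.PreferredBackBufferHeight = native.Height;
    }
    else
    {
        _DeviceManager.PreferredBackBufferWidth = windowedWidth;
        _DeviceManager.PreferredBackBufferHeight = windowedHeight;
    }

    _DeviceManager.IsFullScreen = fullscreen;
    _DeviceManager.ApplyChanges();

    // Recompute the centered viewport and scale matrix for the new backbuffer size
    _Resolution.CalculateVirtualViewport();
}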
However, most games I’ve played allow you to set the resolution even in fullscreen mode: if I pick something other than my monitor’s native resolution, the image simply stretches out to fill the screen. Visually this is easy enough to achieve by changing the SpriteBatch call that draws the render target to always use the window’s client bounds. The problem is that my mouse input is then off: while the render target is being stretched out to 3840x2160, my input code still works in the virtual 1920x1080 space and the set resolution as defined in my resolution class.
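My guess is that the input side needs the inverse of the same mapping: subtract the viewport offset, then run the position through the inverted scale matrix. Something roughly like this on my resolution class (untested; ScreenToVirtual is a name I made up):

// Sketch: map a screen-space mouse position back into virtual space
public Vector2 ScreenToVirtual( Vector2 screenPosition )
{
    // Remove the viewport offset (the black letterbox/pillarbox bars)
    screenPosition.X -= _VirtualViewport.X;
    screenPosition.Y -= _VirtualViewport.Y;

    // Undo the scale that was applied when drawing the render target
    return Vector2.Transform( screenPosition, Matrix.Invert( _Transform ) );
}

Input code would then use something like:

MouseState mouseState = Mouse.GetState();
Vector2 virtualMouse = _Resolution.ScreenToVirtual( new Vector2( mouseState.X, mouseState.Y ) );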
I’m hoping someone has been able to tie this all together and get it working properly and could provide some insight.
Important code bits:
Main Draw Function:
protected override void Draw( GameTime gameTime )
{
    base.Draw( gameTime );

    Viewport regularViewport = GraphicsDevice.Viewport;

    // Render the whole scene into the 1920x1080 render target
    GraphicsDevice.SetRenderTarget( _MainRenderTarget );
    GraphicsDevice.Clear( Color.CornflowerBlue );
    GraphicsDevice.DepthStencilState = DepthStencilState.Default;

    // My game state, particle, UI rendering, etc.

    // Switch to the backbuffer and draw the render target through the
    // centered viewport, scaled by the virtual-to-actual transform
    GraphicsDevice.SetRenderTarget( null );
    GraphicsDevice.Viewport = _Resolution.VirtualViewport;
    GraphicsDevice.Clear( Color.Black );

    _SpriteBatch.Begin( SpriteSortMode.Deferred, null, null, null, null, null, _Resolution.Transform );
    _SpriteBatch.Draw( _MainRenderTarget, Vector2.Zero, Color.White );
    _SpriteBatch.End();

    GraphicsDevice.Viewport = regularViewport;
}
And here are the functions in my resolution class that calculate the transform and the virtual viewport:
public void CalculateVirtualViewport()
{
    // Figure out the largest area that fits in this resolution at the desired aspect ratio
    int width = _DeviceManager.PreferredBackBufferWidth;
    int height = (int)(width / VirtualAspectRatio + .5f);
    if ( height > _DeviceManager.PreferredBackBufferHeight )
    {
        // PillarBox
        height = _DeviceManager.PreferredBackBufferHeight;
        width = (int)(height * VirtualAspectRatio + .5f);
    }

    // Set up the new viewport centered in the backbuffer
    Viewport viewport = new Viewport();
    viewport.X = (_DeviceManager.PreferredBackBufferWidth / 2) - (width / 2);
    viewport.Y = (_DeviceManager.PreferredBackBufferHeight / 2) - (height / 2);
    viewport.Width = width;
    viewport.Height = height;
    viewport.MinDepth = 0;
    viewport.MaxDepth = 1;

    _VirtualViewport = viewport;
}
private void CalculateTransform()
{
    float scale = (float)_ActualWidth / (float)_VirtualWidth;
    _Transform = Matrix.CreateScale( scale, scale, 1.0f );
}