[Solved] Right way to use matrices to scale a GUI across different display configs

Hey there. I’m designing an operating system/desktop UI in MonoGame. The user interface is in 2D though I plan to do 3D rendering for certain UI effects like wobbly windows.

The issue right now is that I design the game on a 1080p 16:9 display, and on that configuration everything fits perfectly. On lower resolutions and aspect ratios, however, windows, buttons and other UI elements are physically bigger and take up more space, so fewer items fit on screen.

Higher resolutions and aspect ratios give more screen real estate, but also smaller UI elements and text. At 4K the user interface is completely unusable because you cannot read anything.

So there are several things I need to tackle.

  1. I need to make sure that UI elements scale and translate based on the screen resolution and aspect ratio, so that everything fits in the right place, nothing gets cut off, and there’s room to draw larger text on higher resolutions.
  2. If I have three font sizes - small, medium and large, I want to know which one will fit best with the current GUI scale.
  3. I want to make sure my textures don’t get blurry and my text doesn’t get pixelated, garbled, blurry, etc.
  4. I want to keep SpriteBatch support if possible.
  5. I’d like to keep units of measurement in pixels before the transformation is done - even if they map to a virtual screen. If I draw a rectangle at (64, 64) with the size (85x20), I want it to be rendered at 64 virtual pixels from the top left of the screen, and with a size of 85x20 virtual pixels. My virtual screen height would be 1080 pixels, and virtual screen width would be the real screen’s aspect ratio (expressed as a float) multiplied by the virtual screen height. This accounts for stuff like NVidia Surround where I actually do want more horizontal screen real estate but I want everything to be the same size as a regular non-Surround setup.
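A minimal sketch of the virtual-resolution rule in item 5 (the names here are illustrative, not from any real API; in practice the real width and height would come from the back buffer):

```csharp
// Virtual height is fixed; virtual width follows the real screen's aspect ratio.
const float VirtualHeight = 1080f;

float GetVirtualWidth(float realWidth, float realHeight)
{
    float aspectRatio = realWidth / realHeight;
    // e.g. 1920x1080 -> 1920 virtual px wide; 5760x1080 (Surround) -> 5760
    return aspectRatio * VirtualHeight;
}
```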

My question is, how can I do this with matrices in MonoGame at a renderer level so that the entire engine is affected and the most porting I have to do is to make sure the coordinates and sizes of all my UI elements and other objects are changed to make better use of the new renderer?

I’ve been stumped on this for months. Any help would be greatly appreciated. My game’s not going into alpha until I figure this stuff out.

I think using a render target is the easiest way: just draw it to the user's screen and it will scale everything proportionally.

My question is, how can I do this with matrices in MonoGame.

However, some of the other requirements make what you're talking about far more detailed and complex.
This part in particular:

If I have three font sizes - small, medium and large, I want to know which one will fit best with the current GUI scale.

I want to make sure my textures don’t get blurry and my text doesn’t get pixelated, garbled, blurry, etc.

You don’t even need a matrix to keep the proportional ratio - a Vector2 scale factor will do.
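A minimal sketch of what that Vector2 scale factor could look like (the design-time constants here are assumptions for illustration):

```csharp
using Microsoft.Xna.Framework;

// Proportional scale from an assumed design-time resolution to the actual
// back buffer. Equal X and Y ratios preserve proportions; unequal ratios
// will skew content, which is what blurs text.
Vector2 GetScale(int backBufferWidth, int backBufferHeight)
{
    const float DesignWidth = 1920f;
    const float DesignHeight = 1080f;
    return new Vector2(backBufferWidth / DesignWidth,
                       backBufferHeight / DesignHeight);
}
```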

However, with your stated requirements you can't just trivially transform from one scale to the other, because any proportional change will skew text, which adds a blurred look to it.

What this means, as you have hinted at, is that you need to choose a font size.
However, you still need the proportional scale to find the starting position and to ensure the ending position doesn't go out of bounds. But that doesn't hold with the requirement either: offsets between two starting positions that are proportionally even won't match up with fixed font sizes. These are basically opposing ideas that work against each other.

This is not something that can simply be solved with a matrix in one shot, I think.

Anyway, you have to follow a rule if you want everything to scale proportionally to your design screen:
You have to work from, or code in, ratios that let you convert to and from world-space coordinates. Even your 2D stuff needs to be thought of as going into and back out of world-space coordinates, with a design-time ratio that is calculated and shipped with the app. That places certain restrictions on you. Or just store everything in world-space coordinates.

However, for text (and even images) where you don't want the width and height to scale the same way, so as to preserve the same quality you see on your screen, those calculations need to be independent of your design-time resolution. It's a really ugly realization.


If you want quality:
Your GUI has to look at the resolution in any given situation and decide how to do it with what it's got.
If you want design-time proportion:
You can just render it to a render target, use big text and be done with it, but you get blurred text etc.

I want to know which one will fit best with the current GUI scale

If you're storing world-space coordinates, you just multiply the positions by the viewport width and height to see which font will best match, and choose it. It won't fit exactly, though, and any scaling is probably going to add blur.
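As an illustrative sketch of that idea (the method and parameter names are assumptions, not a real API): pick whichever of the three fonts comes closest to the pixel height you get from a normalized world-space height times the viewport height.

```csharp
using System;
using Microsoft.Xna.Framework.Graphics;

// Pick the SpriteFont whose line height best matches the desired on-screen
// height, derived from a normalized (0..1) world-space height.
SpriteFont ChooseFont(SpriteFont small, SpriteFont medium, SpriteFont large,
                      float normalizedLineHeight, Viewport viewport)
{
    float targetPixels = normalizedLineHeight * viewport.Height;

    // LineSpacing is the font's line height in pixels; take the closest match.
    SpriteFont best = small;
    float bestDiff = Math.Abs(small.LineSpacing - targetPixels);
    foreach (var font in new[] { medium, large })
    {
        float diff = Math.Abs(font.LineSpacing - targetPixels);
        if (diff < bestDiff) { best = font; bestDiff = diff; }
    }
    return best;
}
```

As noted above, the chosen font still won't fit exactly, so layout has to tolerate some slack.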

Ultimately, for text you are talking about independent calculations on the user's end that you must test at different resolutions to ensure the result is satisfactory. This has to be part of your GUI design, and it has a cost in complexity.

I’ve been stumped on this for months.

Me too - I realized I was stumped on the choice.
It is directly a trade-off between quality and a lot of complexity.
Either choice has a high cost: time, performance and complexity, or quality.


We have experienced the same issues for years in the software engineering profession for business applications. Many approaches have been tried to align the GUI with the many resolutions now available but it has been a losing battle.

Basically what we do in the business end of such development is to scale the overall interface screen to the resolution so it does not overlap or is too small. However, resizing actual interface elements, even when using controls from a GUI suite, is a lot of work and in most cases not worth it.

The issue I have with using a render target and simply scaling that render target is that, for example, scaling down from 1080p to 800x600 looks horrible.

Though, this could easily be compensated for by decreasing or increasing the physical size of the render target, thus making the UI look bigger or smaller respectively. I could always change the aspect ratio of the render target as well so that scaling to 4:3 is less harsh.

…I should seriously drink coffee and sleep before posting here. I’ll see if this approach works out. I’ll keep ya’s posted. :slight_smile:

Everything’s all good to go. I have now shipped a build of my game with a proper GUI scaler. It works perfectly in the development environment, we’ll see how it works out in the real world. :slight_smile:

Out of curiosity, which approach did you end up using? Was it the Render Target or something else?

So here’s what I did. First, in my Game class, I have a float called _renderScale, and an int called _renderScreenHeight. The _renderScreenHeight is the base vertical resolution you want to render at. In my case, it has the constant value of 1080.

The _renderScale value controls how much of the base vertical resolution you render at. It's not really a percentage - it acts as a divisor: 1.0 means render at 1080, 0.5 means render at 2x 1080 (2160), and 2.0 means render at 1080/2 (540).

Then, I have a float property called AspectRatio which basically divides the width of the game’s back buffer by the height of the game’s back buffer.

So far what we have is:

private float _renderScale = 1.0f;
private const int _renderScreenHeight = 1080;

public float AspectRatio =>
    (float)GraphicsDevice.PresentationParameters.BackBufferWidth /
    GraphicsDevice.PresentationParameters.BackBufferHeight;

Then, I write a function which divides _renderScreenHeight by _renderScale to get the scaled screen height, and multiplies AspectRatio by the scaled height to get the scaled width. These values are returned in a Vector2.

public Vector2 GetScaledResolution()
{
    var scaledHeight = (float)_renderScreenHeight / _renderScale;
    return new Vector2(AspectRatio * scaledHeight, scaledHeight);
}

This Vector2 is then used as the size of a RenderTarget2D which the game is rendered to. When it comes time to render the game to the screen, we first render all our world, UI, etc to the render target, then render the render target to the screen using SpriteBatch.Draw() and stretching the render target across the entire back buffer.
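The draw sequence just described might look roughly like this (a sketch only: `_gameTarget`, `_spriteBatch` and `DrawWorldAndUi` are assumed fields/helpers, not part of the post above):

```csharp
protected override void Draw(GameTime gameTime)
{
    // 1. Render the whole world + UI into the fixed-size render target,
    //    using virtual-pixel coordinates throughout.
    GraphicsDevice.SetRenderTarget(_gameTarget);
    GraphicsDevice.Clear(Color.Black);
    DrawWorldAndUi();

    // 2. Stretch the render target across the entire back buffer.
    GraphicsDevice.SetRenderTarget(null);
    _spriteBatch.Begin(samplerState: SamplerState.LinearClamp);
    _spriteBatch.Draw(_gameTarget,
        new Rectangle(0, 0,
            GraphicsDevice.PresentationParameters.BackBufferWidth,
            GraphicsDevice.PresentationParameters.BackBufferHeight),
        Color.White);
    _spriteBatch.End();

    base.Draw(gameTime);
}
```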

Any time the back buffer size or the _renderScale value changes, you’ll want to recalculate the size of your game’s render target and recreate it so that the settings take effect properly. Then you simply write a UI that allows the user to change their GUI scale (which really just changes the _renderScale value) and you’re basically good to go.
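Recreating the render target could be sketched like this (again, `_gameTarget` is an assumed field, and `GetScaledResolution` is the method shown earlier):

```csharp
void RecreateGameTarget()
{
    // Dispose the old target so GPU memory isn't leaked, then rebuild it
    // at the newly scaled resolution.
    _gameTarget?.Dispose();
    Vector2 size = GetScaledResolution();
    _gameTarget = new RenderTarget2D(GraphicsDevice, (int)size.X, (int)size.Y);
}

// e.g. call this from Window.ClientSizeChanged, or right after the user
// edits the GUI scale setting:
//     _renderScale = newScale;
//     RecreateGameTarget();
```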

This has various benefits, both for you as a developer and for your users.

  1. NVidia Surround/AMD Eyefinity support. The render target’s width is DIRECTLY affected by the screen’s aspect ratio and the scaled _renderScreenHeight value. However, your UI elements’ widths and heights are never ACTUALLY changed. You can have a 32x32 rectangle on the screen and it may get scaled to 64x64, 16x16, etc, but NEVER scaled to 64x32. Thus you ACTUALLY gain more screen real estate without affecting readability by using Surround/Eyefinity.
  2. You can scale items onscreen if things aren’t readable or things aren’t fitting. Thus, you as a developer never have to worry about old uncle Joe’s 800x600 VGA CRT monitor from the 90s not being able to fit your game’s UI that’s written for 1080p. Uncle Joe can just tune his GUI scale settings to get the best readability-to-screen-realestate ratio.
  3. You can still use post-processing shaders on your game like blooms, blurs, color filters, etc. You’re still rendering to a RenderTarget, after all.

It’s amazing what you can do on a full stomach, 8 ounces of caffeine, and with actual sleep. :slight_smile: