I started developing in MonoGame because of what it sold itself as: a cross-platform solution that abstracts graphics, audio, and input behind its own interface. I’ve been trying to get the boilerplate graphics configuration out of the way, and I’ve reached the point where there are too many problems for me to continue. Here’s a breakdown of just some of the roadblocks I’ve hit:
On Windows 10, I have 150% scaling on my 4K monitor. If I drag a MonoGame window with AllowUserResizing set to false to my other monitor (which has no scaling), it inexplicably grows horizontally and vertically as I drag, until it extends off the screen. Since resizing is disabled, I can’t fix it; I have to kill the app and switch to AllowUserResizing = true. My workaround was to render the game to a RenderTarget2D and draw it inside the resizable window so it stays proportional and centered, with black bars as needed. Weird.
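For anyone curious, my letterbox workaround looks roughly like this. It’s a sketch, not a canonical recipe; `_gameTarget`, `_spriteBatch`, and `DrawGameWorld()` are my own names, and the RenderTarget2D is assumed to have been created at the game’s internal resolution:

```csharp
// Sketch: render the game into a fixed-size RenderTarget2D, then draw
// that target centered and uniformly scaled inside the OS window,
// letting the leftover area stay black (the "black bars").
GraphicsDevice.SetRenderTarget(_gameTarget);
GraphicsDevice.Clear(Color.CornflowerBlue);
DrawGameWorld();                        // hypothetical game draw call

GraphicsDevice.SetRenderTarget(null);   // back to the real back buffer
GraphicsDevice.Clear(Color.Black);      // black bars

Rectangle window = GraphicsDevice.PresentationParameters.Bounds;
float scale = Math.Min(
    (float)window.Width / _gameTarget.Width,
    (float)window.Height / _gameTarget.Height);
int w = (int)(_gameTarget.Width * scale);
int h = (int)(_gameTarget.Height * scale);
var dest = new Rectangle((window.Width - w) / 2, (window.Height - h) / 2, w, h);

_spriteBatch.Begin(samplerState: SamplerState.PointClamp);
_spriteBatch.Draw(_gameTarget, dest, Color.White);
_spriteBatch.End();
```

The uniform `Math.Min` scale is what keeps the aspect ratio intact regardless of what shape the user drags the window into.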
If I create a RenderTarget2D with a DepthFormat other than None (I want a stencil) and any positive multisample count, the result is a blank output. The goofy workaround I found is to create a 1x1 RenderTarget2D with a non-None depth format and a multisample count > 0, and never dispose it. I only discovered this while changing resolutions: rendering would actually start as long as I switched resolutions at least once (pressing my resolution-switch key blindly, since I had no graphics output). After experimenting, I found that creating this dummy RenderTarget2D at initialization gave me the results I needed (I’m using the OpenGL backend, for reference). The dummy target has to have both a depth buffer and multisampling or it won’t do the trick, and if I dispose it immediately after creation it stops working again.
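Concretely, the workaround amounts to this sketch (the field name is mine; the parts that matter in my testing are the non-None DepthFormat, the multisample count > 0, and keeping the object alive):

```csharp
// Workaround sketch: a 1x1 dummy render target created once at startup
// and deliberately kept alive for the lifetime of the game. Without it,
// multisampled render targets with a depth/stencil buffer render blank
// for me on the OpenGL backend.
private RenderTarget2D _dummyTarget;   // held in a field so it's never disposed

protected override void Initialize()
{
    base.Initialize();
    _dummyTarget = new RenderTarget2D(
        GraphicsDevice,
        1, 1,                          // 1x1 is enough
        false,                         // no mipmaps
        SurfaceFormat.Color,
        DepthFormat.Depth24Stencil8,   // must NOT be DepthFormat.None
        4,                             // multisample count must be > 0
        RenderTargetUsage.DiscardContents);
    // Do NOT dispose this; disposing it right away re-breaks rendering.
}
```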
If I call Clear with a color on a RenderTarget2D that isn’t set as the graphics device target, it just overlays the color at half transparency on the existing graphics data. Huh?!
If I try to use HardwareModeSwitch with a two-monitor setup, changing the back buffer size results in a blank screen, with no way I can find to recover (alt-tabbing, toggling full screen twice, etc.). If I hover the mouse over the program’s taskbar icon, the preview shows it’s actually running, and it’s consuming GPU and CPU resources; there’s just no display. If I turn one monitor off beforehand, it actually works, but it displays the output at a 1:1 pixel ratio against the monitor’s native resolution. So if my game is rendering with HardwareModeSwitch at 720p on my 1080p monitor, it’s centered in the middle of the screen with a bunch of blank space around it. This is despite the fact that MonoGame did indeed change the monitor’s resolution, and GraphicsDevice.PresentationParameters.Bounds.Width, RenderTarget2D.Width, and GraphicsDevice.Adapter.CurrentDisplayMode.Width all say they’re 1280.
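For reference, the switching code that triggers this for me is nothing exotic; roughly this, assuming the standard `_graphics` GraphicsDeviceManager from the project template:

```csharp
// Sketch of the fullscreen switch that produces the blank screen for me.
// _graphics is the GraphicsDeviceManager created in the Game constructor.
_graphics.HardwareModeSwitch = true;    // true = real mode switch, not borderless
_graphics.PreferredBackBufferWidth = 1280;
_graphics.PreferredBackBufferHeight = 720;
_graphics.IsFullScreen = true;
_graphics.ApplyChanges();
// Two monitors active: the game keeps running (CPU/GPU load, taskbar preview)
// but nothing displays. One monitor: the mode switches, yet the 1280x720 image
// is drawn 1:1, centered inside the panel's native resolution.
```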
I really like the game design aspects of this framework, but I don’t think it’s wise to invest in one that struggles so much with the basic setup of a game. I recognize the challenge of building a cross-platform solution, but I’m wondering whether these are problems others run into and deal with, or whether similar frameworks like FNA have the same issues. I’m also wondering whether switching to platform-specific solutions and keeping the abstractions for graphics, audio, and input in my own code base would be a more reliable alternative.
Any insights would be appreciated. Has anyone here built the standard graphics settings UI (VSync, fullscreen, resolution, MSAA) and had it all work as expected? Given how simple MonoGame’s interface for these features looks, I had no idea I’d be spending so much time trying to get it to do what I want.