So I have a slight issue with my game’s UI framework. My game is HEAVILY UI-driven and uses a lot of text. The UI is also user-customizable in just about every aspect you could think of - colors, sizes, locations, fonts, etc.
MonoGame has support for rendering text strings using
SpriteFonts, but this won’t work in my case as I need to let the user choose from any installed font on their system and render text in that font.
The logical thing to do would be to generate a SpriteFont from the chosen font at runtime - when the font is set, or when the game loads - and cache that SpriteFont for rendering text. You…can't do that, because
SpriteFont's constructor is not
public. So even if you found some way of loading the font data into memory and rendering each glyph to a texture, there's no way to use that data in a SpriteFont, because you can't even create an instance of one without going through the Content Pipeline.
So the next best thing would be to use
System.Drawing to measure the text in the chosen font, create a bitmap and render the text into it, grab the bitmap's pixel data with
Marshal.Copy, create a
Texture2D matching the size of the bitmap, drop the pixel data into the texture, and render that. This is what I currently do in my game.
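For anyone curious what that pipeline looks like, here's a minimal sketch of it. The class and method names are my own, and it assumes `SurfaceFormat.Color` and a stride of exactly `width * 4` (true for `Format32bppArgb`), so treat it as illustrative rather than drop-in:

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;
using Microsoft.Xna.Framework.Graphics;

public static class GdiTextRenderer
{
    // Render text with GDI+ (System.Drawing), then copy the pixels
    // into a MonoGame Texture2D.
    public static Texture2D RenderText(GraphicsDevice device, string text, Font font)
    {
        // Measure the text first so the bitmap is exactly the right size.
        SizeF size;
        using (var measureBmp = new Bitmap(1, 1))
        using (var gfx = Graphics.FromImage(measureBmp))
            size = gfx.MeasureString(text, font);

        int w = Math.Max(1, (int)Math.Ceiling(size.Width));
        int h = Math.Max(1, (int)Math.Ceiling(size.Height));

        using (var bmp = new Bitmap(w, h, PixelFormat.Format32bppArgb))
        {
            using (var gfx = Graphics.FromImage(bmp))
            {
                gfx.Clear(Color.Transparent);
                gfx.DrawString(text, font, Brushes.White, PointF.Empty);
            }

            // Lock the bitmap and marshal its pixels into a managed array.
            var data = bmp.LockBits(new Rectangle(0, 0, w, h),
                                    ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
            var pixels = new byte[w * h * 4];
            Marshal.Copy(data.Scan0, pixels, 0, pixels.Length);
            bmp.UnlockBits(data);

            // GDI+ stores BGRA; MonoGame's SurfaceFormat.Color expects RGBA,
            // so swap the red and blue channels.
            for (int i = 0; i < pixels.Length; i += 4)
                (pixels[i], pixels[i + 2]) = (pixels[i + 2], pixels[i]);

            var texture = new Texture2D(device, w, h, false, SurfaceFormat.Color);
            texture.SetData(pixels);
            return texture;
        }
    }
}
```

Drawing in white and tinting via SpriteBatch's color parameter lets one cached texture serve any text color.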
However, GDI+ (which is what's actually doing the text rendering here) isn't hardware accelerated. It's quite slow, especially when you're doing lots of sequential renders, AND on top of that you're marshalling the data out of GDI+ and into MonoGame. Takes a lot of CPU and RAM is what I'm saying.
Since my game does a lot of text rendering, I tried to mitigate this by caching text render results and only rerendering when the text or font changes. This helps A LOT for most parts of the game, but some parts are constantly changing their text - for example the framerate counter, the date/time in the desktop panel (my game takes place in a fictional operating system), or the Terminal while the user's typing.
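The cache I'm describing is basically a dictionary keyed on the text plus the font's identity. A rough sketch, where `RenderText` stands in for the GDI+-to-texture routine described above (names are mine, not from any library):

```csharp
using System.Collections.Generic;
using Microsoft.Xna.Framework.Graphics;

public sealed class TextCache
{
    // Key combines the string and a font fingerprint so a font change
    // invalidates the entry just like a text change does.
    private readonly Dictionary<(string text, string fontKey), Texture2D> _cache
        = new Dictionary<(string, string), Texture2D>();

    public Texture2D Get(GraphicsDevice device, string text, System.Drawing.Font font)
    {
        var key = (text, $"{font.Name}:{font.Size}:{font.Style}");
        if (!_cache.TryGetValue(key, out var texture))
        {
            // Hypothetical helper: GDI+ render + Marshal.Copy + SetData.
            texture = GdiTextRenderer.RenderText(device, text, font);
            _cache[key] = texture;
        }
        return texture;
    }

    // Textures are GPU resources, so dispose them when flushing the cache.
    public void Clear()
    {
        foreach (var tex in _cache.Values)
            tex.Dispose();
        _cache.Clear();
    }
}
```

This helps exactly as long as the key repeats; a framerate counter or terminal defeats it by design, since the key changes nearly every frame.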
The problem is that even with all this caching, those cases still trigger a lot of GDI work, eating precious CPU and RAM. It's worst in the Terminal: when the user's trying to work fast to hack another player or NPC, the GDI rendering REALLY slows them down once the terminal holds a lot of text.
The only fixes I can think of are to either disallow customization of fonts (not doing that - the UI customization is a big part of the game and makes development a lot easier too), or write my own implementation of SpriteFont. Option B would still require GDI, but only when loading the font data. I'd also need to write code to find each glyph in the texture, which is probably a nightmare (hence why SpriteFont exists - presumably so MonoGame can deal with that for you).
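For the record, the shape of option B would be something like this: rasterize each glyph once with GDI+ at load time, pack them into a single texture, and record where each glyph landed so drawing text is just a series of sprite lookups. Everything below is an illustrative sketch, not a working implementation - the packing, kerning, and metrics are the genuinely hard parts I'm dreading:

```csharp
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public sealed class GlyphAtlas
{
    public Texture2D Texture;                   // all glyphs baked into one sheet at load time
    public Dictionary<char, Rectangle> Glyphs;  // where each glyph lives in the sheet

    public void DrawString(SpriteBatch batch, string text, Vector2 position, Color color)
    {
        var pen = position;
        foreach (char c in text)
        {
            if (!Glyphs.TryGetValue(c, out var src))
                continue; // skip glyphs we didn't bake

            batch.Draw(Texture, pen, src, color);

            // Naive advance: real fonts need per-glyph advance widths
            // and kerning pairs, which GDI+ only partially exposes.
            pen.X += src.Width;
        }
    }
}
```

Once the atlas exists, every DrawString call is pure SpriteBatch work - fully hardware accelerated, no GDI+ in the frame loop.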
But wait! Things like DirectWrite exist. What if I used that to render the text directly onto the screen, or onto whatever render target I have set? Since DirectWrite sits on top of DirectX, that can't be too hard…except my game uses OpenGL, because it's cross-platform. Can't do that.
There's gotta be some way of rendering text in MonoGame with hardware acceleration, where I can specify a text string, font family, style, and size, and optionally alignment, wrap width, etc., just like you can in GDI. But since my googling comes up inconclusive, I came here for advice. If anyone can help me tackle this issue, it'd be greatly appreciated.
(Also, on the topic of cross-platform: GDI isn't really usable on Linux and GDI+ is POORLY implemented there as well, and one of the devs on my team who runs Linux had to write his own text rendering backend for our engine in C++, wrapping pangomm/cairomm. It's fast and looks beautiful, but is extremely hard to compile on Windows. I want a text rendering backend that's fully compatible with both Linux and Windows if possible.)