UI patterns and methodologies?

My question is a loose, general one: every time I try to write a UI, I just feel like it sucks.

What sort of patterns do you use, or think should be used, for UI?
What sort of patterns do you think work the best to keep it decoupled and adaptable?

Say, do you think the composite pattern is a good starting point, or is some other pattern best? Do you mix patterns, and if so, which do you choose?
I have been trying to prototype a good pattern for UI, but this is my Achilles' heel.

For example, this is my latest idea, but I'm not too sure about it; no matter how I try to design my UI, it never feels right. I'd like to hear others' input.

    // pseudo UI schema
    //
    // abstract BaseElement
    // {
    //   LocalPosition
    //   Action OnClick;
    //   Action OnOver;
    //   Action OnAwaitingInput;
    //   // etc.
    // }
    //
    // ConcreteElement_A : BaseElement {
    //   LocalPosition
    //   OnClick;
    //   OnOver;
    //   // etc.
    // }
    //
    // abstract BaseButton
    // {
    //   List<BaseButton> children;
    //   List<BaseElement> baseElements;
    //   List<Rectangle> elementsDrawnPosition
    //   // deals with positions and depths
    //   abstract void IterateChanges(int n);
    // }
    //
    // abstract Group : BaseButton
    // {
    // }
    //
    // abstract Leaf : BaseButton
    // {
    // }
    //
    // ConcreteA_Panel : Group {
    //   // has children
    //   // defines element list
    // }
    //
    // ConcreteB_List : Group {
    //   // has children
    //   // defines element list
    // }
    //
    // ConcreteC_ClickButton : Leaf {
    //   // can't have children
    //   // defines element list
    // }

I used composite for GeonBit.UI and it felt pretty fitting and worked well for most things. I had a few issues when I wanted to add render targets, so that's something you might want to plan for ahead of time. So overall I'd recommend going with the composite pattern, i.e. a tree of UI elements that share the same API with update / draw / whatever.

You might want to make a separation between “containers” and “entities”, but in my case I didn't feel it was necessary, so I wouldn't recommend it personally.

Another big thing to worry about in UI is the positioning and size of entities with regard to different screen sizes. In my case I did a system of anchors and offsets (e.g. an offset of 10x10 pixels from the top-left corner of the parent entity), and all sizes can be either in pixels or a percentage of the parent's size. I also added a global scaling property, which is very useful for fitting different resolutions.
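To make that concrete, here's a minimal sketch of the anchor + offset idea. The names (`LayoutSpec`, `AnchorPoint`, etc.) are my own for illustration, not GeonBit.UI's actual API; `Rectangle` is MonoGame's:

    // Hypothetical sketch: resolve a child rectangle against its parent using
    // an anchor corner, a pixel offset, and a size that is either absolute
    // pixels or a 0..1 fraction of the parent.
    public enum Anchor { TopLeft, TopRight, BottomLeft, BottomRight, Center }

    public struct LayoutSpec
    {
        public Anchor AnchorPoint;
        public float OffsetX, OffsetY;  // pixels from the anchor corner
        public float Width, Height;     // pixels, or a fraction of parent if RelativeSize
        public bool RelativeSize;

        public Rectangle Resolve(Rectangle parent)
        {
            int w = (int)(RelativeSize ? parent.Width * Width : Width);
            int h = (int)(RelativeSize ? parent.Height * Height : Height);
            int x, y;
            switch (AnchorPoint)
            {
                case Anchor.TopRight:    x = parent.Right - w;      y = parent.Top;        break;
                case Anchor.BottomLeft:  x = parent.Left;           y = parent.Bottom - h; break;
                case Anchor.BottomRight: x = parent.Right - w;      y = parent.Bottom - h; break;
                case Anchor.Center:      x = parent.Center.X - w/2; y = parent.Center.Y - h/2; break;
                default:                 x = parent.Left;           y = parent.Top;        break;
            }
            return new Rectangle(x + (int)OffsetX, y + (int)OffsetY, w, h);
        }
    }

Each entity stores a spec like this and resolves it against its parent's already-resolved rectangle on layout, so the whole tree adapts when the root (screen) rectangle changes.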

While I think this method worked pretty well for me, if I ever write another UI system I think I'll try a different approach: copying Twitter's Bootstrap grid.

If you're not familiar with it, I suggest you play with Bootstrap a bit; it can give you inspiration for how to make a responsive UI system.

Another tip that might help is to wrap all access to SpriteBatch with your own functions, so you can add extra logic when drawing things. This proved useful a couple of times.
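A wrapper like that might look like this. This is a sketch with made-up names (`UiSpriteBatch`, `GlobalScale`), not the poster's real code; the `SpriteBatch` calls are standard MonoGame:

    // Hypothetical thin wrapper around SpriteBatch so cross-cutting logic
    // (global scaling, draw-call counting, debug overlays, etc.) lives in one place.
    public class UiSpriteBatch
    {
        private readonly SpriteBatch _batch;
        public float GlobalScale = 1f;
        public int DrawCallsThisFrame { get; private set; }

        public UiSpriteBatch(SpriteBatch batch) { _batch = batch; }

        public void Begin()
        {
            DrawCallsThisFrame = 0;
            _batch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
        }

        public void End() { _batch.End(); }

        public void Draw(Texture2D tex, Rectangle dest, Color color)
        {
            // Apply the global UI scale before handing off to the real SpriteBatch.
            var scaled = new Rectangle(
                (int)(dest.X * GlobalScale), (int)(dest.Y * GlobalScale),
                (int)(dest.Width * GlobalScale), (int)(dest.Height * GlobalScale));
            _batch.Draw(tex, scaled, color);
            DrawCallsThisFrame++;
        }
    }

The point is that UI code only ever talks to the wrapper, so when you later need scissor tests, render-target juggling, or per-frame stats, you add them in one spot.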

Hope this somewhat helped :slight_smile:

I’m actually in the middle of writing a windowing system and a user interface framework for MonoGame right now as part of my Peace engine project - a modular game engine for my games to make life a little easier.

I’ll share the way I designed the entire engine as it may help you get an idea of what you’re up against, and how you can make the beast a bit less intimidating…hopefully.

First, the Peace engine is split into modular components. Each component has an Initiate, Update, Draw, and Unload method, much like the Game class in MonoGame. You get these methods from the IEngineComponent interface.

interface IEngineComponent
{
  void Initiate();
  void Update(GameTime gameTime);
  void Draw(GameTime gameTime, GraphicsContext gfx);
  void Unload();
}

The Game class, while initializing, looks inside every EXE and DLL file in the game’s executable folder for classes that implement this interface, and it tries to load them in. This allows you to have modularity as this is all done at runtime and NOT by yourself on an as-needed basis.
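This isn't the Peace engine's actual loader code, but scanning assemblies like that can be done with standard .NET reflection, roughly like so (`ComponentLoader` is a name I made up for illustration):

    // Sketch of runtime component discovery via reflection.
    // IEngineComponent is the interface from above; the rest is illustrative.
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using System.Reflection;

    public static class ComponentLoader
    {
        public static List<IEngineComponent> LoadAll(string folder)
        {
            var components = new List<IEngineComponent>();
            var files = Directory.GetFiles(folder, "*.dll")
                .Concat(Directory.GetFiles(folder, "*.exe"));
            foreach (var file in files)
            {
                Assembly asm;
                try { asm = Assembly.LoadFrom(file); }
                catch { continue; } // not a loadable .NET assembly; skip it

                // Find every concrete class implementing the component interface
                // and construct it with its parameterless constructor.
                foreach (var type in asm.GetTypes()
                    .Where(t => typeof(IEngineComponent).IsAssignableFrom(t)
                             && !t.IsAbstract && !t.IsInterface))
                {
                    components.Add((IEngineComponent)Activator.CreateInstance(type));
                }
            }
            return components;
        }
    }

Note that `Assembly.GetTypes()` can throw `ReflectionTypeLoadException` if a dependency is missing, so a production loader would want to handle that case too.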

So, if you decide to add something like Discord integration to your game, all you have to do is add a class that implements IEngineComponent, add the features you’d like, and compile. You’ll never have to worry about making sure everything gets initialized, updated, and unloaded at the right time. The engine does that for you.

Now, this is only half of what makes my engine what it is. The next thing that makes it so useful and intuitive, is each component can depend on other components, and can even depend on the Game class to get at the real lowlevel stuff if needed. This is what is called a Dependency Injection design pattern. Using this design pattern allows you to pull off some PRETTY AMAZING modularity and adds a “write once, use in anything” feature to the engine, especially when modding. (Ever noticed how some Minecraft mods depend on other mods? That’s because they’re using the features of the other mod, allowing the mod developer to not have to spend time reinventing the wheel, so to speak.)
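The `[Dependency]` attribute you'll see in the code below can be honored with a little more reflection. A sketch under my own names (`DependencyAttribute`, `Injector`), not the engine's actual implementation:

    // Sketch: after all components are constructed, walk each one's fields and
    // fill any marked [Dependency] with the first loaded component of a matching type.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Reflection;

    [AttributeUsage(AttributeTargets.Field)]
    public class DependencyAttribute : Attribute { }

    public static class Injector
    {
        public static void Inject(object target, IEnumerable<object> services)
        {
            var fields = target.GetType()
                .GetFields(BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.Public)
                .Where(f => f.GetCustomAttribute<DependencyAttribute>() != null);
            foreach (var field in fields)
            {
                // Assign the first service whose runtime type fits the field's type.
                var match = services.FirstOrDefault(s => field.FieldType.IsInstanceOfType(s));
                if (match != null)
                    field.SetValue(target, match);
            }
        }
    }

A real DI container also deals with circular dependencies and missing services, but this is the core trick.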

Now, how does this relate to UI?

Easy. Once you have a framework like this going, things get very simple. First off, each component has an Update() and a Draw() method - just like your Game class, right? Well, that allows components to tap into MonoGame's game loop and do things like real-time updating, rendering their status, etc. But you'll notice that the Draw() method takes TWO arguments instead of one. And what the heck is that GraphicsContext?

Well, this is where things get very, very easy, and very awesome. GraphicsContext, in my case, is a class handled by the Game class. It is passed around to EVERY engine component, and basically wraps your GraphicsDevice and SpriteBatch. It has built-in scissor testing, methods that wrap SpriteBatch.Begin() and SpriteBatch.End() (and in fact pass the right parameters to Begin() to get your UI looking good), and methods for drawing solid or textured rectangles, lines, outlined polygons, solid-colored circles, and even text using either GDI+ or Pango. (Of course, you could modify it to just wrap the SpriteBatch.DrawString() method; I just find that doing it with GDI or Pango adds a degree of flexibility and gets rid of the extra headache that comes with writing your own text layout API - something you'd have to do if you went the SpriteFont route - and there's the added benefit of the text looking better even if you scale it.) This way, you don't need to give yourself RSI trying to write your SpriteBatch code.

So you can use this to your advantage when writing your UI. For example, this is how I implement the UI in my engine. (Note that there's an extra OnKeyboardEvent() method in my UIManager component; that's deprecated.)

UI manager component:

    public class UIManager : IEngineComponent, IConfigurable
    {
        [Dependency]
        private Plexgate _plexgate = null;

        [Dependency]
        private ThemeManager _thememgr = null;

        [Dependency]
        private ConfigManager _config = null;

        private bool _isShowingUI = true;
        private int _uiFadeState = 1;
        private float _uiFadeAmount = 1;

        public void ShowUI()
        {
            if(_isShowingUI == false)
            {
                _isShowingUI = true;
                _uiFadeAmount = 0;
                _uiFadeState = 0;
            }
        }

        public void HideUI()
        {
            if(_isShowingUI == true)
            {
                _isShowingUI = false;
                _uiFadeAmount = 1;
                _uiFadeState = 0;
            }
        }

        public int ScreenWidth
        {
            get
            {
                return _plexgate.GameRenderTarget.Width;
            }
        }

        public int ScreenHeight
        {
            get
            {
                return _plexgate.GameRenderTarget.Height;
            }
        }

        private List<TopLevel> _topLevels = new List<TopLevel>();

        private Control _focused = null;
        public void SetFocus(Control ctrl)
        {
            _focused = ctrl;
        }

        public bool IsFocused(Control ctrl)
        {
            if (ctrl == null)
                return false;
            return ctrl == _focused;
        }

        public void Add(Control ctrl)
        {
            if (ctrl == null)
                return;
            if (_topLevels.FirstOrDefault(x => x.Control == ctrl) != null)
                return;
            _topLevels.Add(new TopLevel
            {
                Control = ctrl
            });
            ctrl.SetManager(this);
        }

        public bool ShowPerfCounters = true;

        public void Remove(Control ctrl, bool dispose = true)
        {
            if (ctrl == null)
                return;
            if (_topLevels.FirstOrDefault(x => x.Control == ctrl) == null)
                return;
            var tl = _topLevels.FirstOrDefault(x => x.Control == ctrl);
            if (tl == null)
                return;
            tl.RenderTarget.Dispose();
            if (dispose)
                ctrl.Dispose();
            tl.Control = null;
            _topLevels.Remove(tl);
        }

        public void Clear()
        {
            while (_topLevels.Count > 0)
            {
                Remove(_topLevels[0].Control);
            }

        }

        public void Initiate()
        {
            Logger.Log("Loading text renderer...", LogType.Info, "ui");
            try
            {
                TextRenderer.Init(new NativeTextRenderer());
                Logger.Log("Using native text renderer.", LogType.Info, "ui");
                //TextRenderer.Init(new WindowsFormsTextRenderer());
            }
            catch(Exception)
            {
                TextRenderer.Init(new GdiPlusTextRenderer());
                Logger.Log("Couldn't load native text renderer. Falling back to GDI+.", LogType.Error, "ui");

            }
        }

        public void OnFrameDraw(GameTime time, GraphicsContext ctx)
        {
            foreach (var ctrl in _topLevels)
            {
                if (ctrl.RenderTarget == null)
                    continue;
                ctx.Device.SetRenderTarget(ctrl.RenderTarget);
                ctx.Device.Clear(Color.TransparentBlack);
                ctrl.Control.Draw(time, ctx, ctrl.RenderTarget);
                
                ctx.Device.SetRenderTarget(_plexgate.GameRenderTarget);
                ctx.BeginDraw();
                if (_ignoreControlOpacityValues)
                {
                    ctx.DrawRectangle(ctrl.Control.X, ctrl.Control.Y, ctrl.Control.Width, ctrl.Control.Height, ctrl.RenderTarget, Color.White * _uiFadeAmount);
                }
                else
                {
                    ctx.DrawRectangle(ctrl.Control.X, ctrl.Control.Y, ctrl.Control.Width, ctrl.Control.Height, ctrl.RenderTarget, Color.White * (ctrl.Control.Opacity * _uiFadeAmount));
                }
                ctx.EndDraw();

            }
            if (ShowPerfCounters == false)
                return;
            ctx.BeginDraw();
            var fps = Math.Round(1 / time.ElapsedGameTime.TotalSeconds);
            ctx.DrawString($"FPS: {fps} - RAM: {(GC.GetTotalMemory(false)/1024)/1024}MB", 0, 0, Color.White, new System.Drawing.Font("Lucida Console", 12F), TextAlignment.TopLeft);
            ctx.EndDraw();
        }

        public void OnGameUpdate(GameTime time)
        {
            if(_uiFadeState == 0)
            {
                if (_isShowingUI == true)
                {
                    _uiFadeAmount += (float)time.ElapsedGameTime.TotalSeconds;
                    if (_uiFadeAmount >= 1f)
                    {
                        _uiFadeState = 1;
                    }
                }
                else
                {
                    _uiFadeAmount -= (float)time.ElapsedGameTime.TotalSeconds;
                    if (_uiFadeAmount <= 0f)
                    {
                        _uiFadeState = 1;
                    }

                }
            }


            if (_isShowingUI == false)
                return;
            var mouse = Mouse.GetState();
            foreach(var ctrl in _topLevels)
            {
                var w = ctrl.Control.Width;
                var h = ctrl.Control.Height;
                bool makeTarget = false;
                if (ctrl.RenderTarget == null)
                    makeTarget = true;
                else
                {
                    if(ctrl.RenderTarget.Width != w || ctrl.RenderTarget.Height != h)
                    {
                        makeTarget = true;
                    }
                }
                if (makeTarget)
                {
                    ctrl.RenderTarget = new RenderTarget2D(_plexgate.GraphicsDevice, ctrl.Control.Width, ctrl.Control.Height, false, _plexgate.GraphicsDevice.PresentationParameters.BackBufferFormat, _plexgate.GraphicsDevice.PresentationParameters.DepthStencilFormat, 1, RenderTargetUsage.PreserveContents);
                    ctrl.Control.Invalidate();
                }
                ctrl.Control.SetTheme(_thememgr.Theme);
                ctrl.Control.Update(time);
            }

            //Propagate mouse events.
            foreach(var ctrl in _topLevels.OrderByDescending(x=>_topLevels.IndexOf(x)))
            {
                if (ctrl.Control.PropagateMouseState(mouse.LeftButton, mouse.MiddleButton, mouse.RightButton))
                    break;
            }
        }

        public void OnKeyboardEvent(KeyboardEventArgs e)
        {
            if(e.Key == Keys.F11)
            {
                bool fullscreen = (bool)_config.GetValue("uiFullscreen", true);
                fullscreen = !fullscreen;
                _config.SetValue("uiFullscreen", fullscreen);
                ApplyConfig();
                return;
            }

            if(_isShowingUI)
                if (_focused != null)
                    _focused.ProcessKeyboardEvent(e);
        }

        private bool _ignoreControlOpacityValues = false;

        public bool IgnoreControlOpacity
        {
            get
            {
                return _ignoreControlOpacityValues;
            }
        }

        public void Unload()
        {
            Logger.Log("Clearing out ui controls...", LogType.Info, "ui");
            Clear();
            Logger.Log("UI system is shutdown.");
        }

        public void ApplyConfig()
        {
            bool fullscreen = (bool)_config.GetValue("uiFullscreen", true);
            _plexgate.graphicsDevice.IsFullScreen = fullscreen;
            _plexgate.graphicsDevice.ApplyChanges();
            _ignoreControlOpacityValues = (bool)_config.GetValue("uiIgnoreControlOpacity", false);
        }
    }

    public class TopLevel
    {
        public Control Control { get; set; }
        public RenderTarget2D RenderTarget { get; set; }
    }

Quite a lot of code there, and most of it isn’t commented/documented, but you can see things like the dependency injection framework in play as well as HEAVY use of GraphicsContext to make things easy on you.

You’ll also note the heavy use of render targets. This is something that you just cannot avoid if you want a fluid, high-performance user interface that’s easy to work with.

Heavy use of render targets can certainly BENEFIT your game’s performance - if you use them with care. Trust me, you do not want to be re-rendering an entire user interface every single frame. This also allows you to change the location of UI elements on screen (or even change their opacity) without having to re-render said elements, which can definitely save you some CPU and GPU cycles when doing things like fade or slide animations - something that is heavily done in my game.

Anyway, the next important thing is your Control class. This is the heart of your UI framework itself. It handles the rendering and updating of child elements, processing of keyboard and mouse events, and it is the base of any user interface element you implement.

You can have whatever events, properties, methods, etc. you want in your Control class (although try to keep things general-purpose). This is where things like your width, height, X and Y coordinates, opacity, visibility, and other crucial properties for every UI element go.

The important thing to note however is that you need - AT MINIMUM - these variables and methods in your control. Do NOT directly expose them to users. They are vital for a high-performance UI framework, and should NEVER be touched directly by the user.

public abstract class Control
{
  //Has something happened to this control that requires a repaint of the front buffer?
  private bool _invalidated = true;
  //Has the control been resized? If this value is true, we need to re-create our render targets to compensate.
  private bool _resized = true;
  //This render target is rendered every frame, to the render target that our parent control (or the UI manager) gives us. It doesn't have any shaders or effects applied to it WHATSOEVER. It's just a raw representation of what our control looks like.
  private RenderTarget2D _backbuffer = null;
  //This render target is NOT rendered every frame. It is only rendered when we are invalidated, and is rendered to our own backbuffer. This would be the render target our graphics context would be set to while the control's user-overridable paint function is running.
  private RenderTarget2D _frontbuffer = null;
  //All textures in MonoGame - including render targets - must have a width and height of at least 1. So, our controls should act this way as well.
  private const int _hardMinimumWidthOrHeight = 1;
  //Width and height of your control. You can expose these with a property but make sure you check incoming values to make sure they're at or above the hard limit or MonoGame will throw a hissy fit at you.
  private int _width = 1, _height = 1;

  //A class for holding information about a child control of this control. It contains the child control itself, as well as a render target which the control will render its back buffer to, and will become what we render onto ourselves. We can apply shaders and other effects to it. We own this render target.
  internal class ChildInfo
  {
    internal Control Control { get; set; }
    internal RenderTarget2D RenderTarget { get; set; }
  }

  //A list of all our children.
  private List<ChildInfo> _children = new List<ChildInfo>();

  public void Update(GameTime gameTime)
  {
    //In here, we perform general UI updating - such as pulling the mouse state, keyboard state, etc, propagating events, stuff like that.
    //Also, we make sure that any invalid child rendertargets are null so they can be recreated next Draw().
    foreach(var child in _children)
    {
      //Check the control size against the render target size; if they don't match, invalidate the control, dispose the render target, and set it to null.
      if(child.RenderTarget != null &&
        (child.Control.Width != child.RenderTarget.Width || child.Control.Height != child.RenderTarget.Height))
      {
        child.Control.Invalidate();
        child.RenderTarget.Dispose();
        child.RenderTarget = null;
      }
      //We can also update the child control here.
      child.Control.Update(gameTime);
    }
  }

  public void Draw(GameTime gameTime, GraphicsContext gfx, RenderTarget2D parentTarget)
  {
    //Here we do actual rendering. First we test to see if we're invalidated.
    if(_invalidated == true)
    {
      //If so, check if we're resized.
      if(_resized == true)
      {
        //This is where we'd dispose and recreate our front and back buffers.
        //Once that is done, we make sure _resized is false.
        _resized = false;
      }
      //Now that we've checked if we're resized and recreated our buffers if we needed to, it gets simple.
      //First you set the graphics context's render target to our front buffer.
      gfx.Device.SetRenderTarget(_frontbuffer);
      //Next you clear it of any previous gunk. Note that EVERY render target you create for your UI should be set to RenderTargetUsage.PreserveContents as you'll be switching between them quite a lot, and doing so with that not set would clear out the render targets anyway. In this case we do it manually so we're not rendering things on top of what used to be there. This helps with translucent objects.
      gfx.Device.Clear(Color.Transparent);
      //Next you begin a draw, and call something like an OnPaint() method passing your gametime and graphics context. 
      gfx.BeginDraw();
      //In my case I'll just draw a red box.
      gfx.DrawRectangle(0,0,_width, _height, Color.Red); //0,0 correspond to the top left of our frontbuffer.
      gfx.EndDraw();
      //We're not invalidated any more.
     _invalidated = false;
    }
    //The next thing we do, is render our front buffer to our back buffer. You know, just in case it's different.
    gfx.Device.SetRenderTarget(_backbuffer);
    gfx.Device.Clear(Color.Transparent);
    gfx.BeginDraw();
    gfx.DrawRectangle(0,0,_width,_height,_frontbuffer);
    gfx.EndDraw();

    //Then you render your back buffer to the parent buffer.
    gfx.Device.SetRenderTarget(parentTarget);
    gfx.Device.Clear(Color.Transparent);
    gfx.BeginDraw();
    gfx.DrawRectangle(0,0,_width,_height,_backbuffer);
    gfx.EndDraw();

    //Now, you take on your parental role and give your children a chance to shine.
    foreach(var child in _children)
    {
      //First, if the child's render target is null, recreate it so it matches the width and height of the child control.
      if(child.RenderTarget == null)
      {
        child.RenderTarget = new RenderTarget2D(gfx.Device, child.Control.Width, child.Control.Height, false, gfx.Device.PresentationParameters.BackBufferFormat, gfx.Device.PresentationParameters.DepthStencilFormat, 1, RenderTargetUsage.PreserveContents);
      }
      //Now, we call this Draw() method on the child, passing that render target we JUST checked as the third parameter.
      child.Control.Draw(gameTime, gfx, child.RenderTarget);
      //If all went well, child.RenderTarget should be filled with the control's content, so we can render it to us.
      //So, we'll set our GFX context back to our OWN parent target.
      gfx.Device.SetRenderTarget(parentTarget);
      //Begin a draw.
      gfx.BeginDraw();
      //Draw the rectangle - this is where your control X and Y coordinates come into play. I'll let you implement that on your own. It's very easy. For now I'll just draw everything at 0,0.
      gfx.DrawRectangle(0, 0, child.RenderTarget.Width, child.RenderTarget.Height, child.RenderTarget);
      gfx.EndDraw();
      //Note that we don't clear anything at all here. We're just drawing a rectangle.
    }
  }
}

I didn’t implement EVERYTHING that my UI manager component in my engine would force you to implement, but most of that stuff should be fairly trivial for you to do on your own. Once you get the general foundation down for your UI framework, things start to get really easy. Some things you may want to implement in the future include:

  • Theme support. If you plan to use your UI framework in multiple games, it’s a good idea to allow the game to deeply customize how each UI element is rendered. You don’t need to add some super elaborate file format or anything like that - just the ability to allow users to implement an ITheme interface with methods like DrawButton, DrawPicture, DrawCheckbox etc. should be more than enough.
  • Auto-sizing of controls - for things like text blocks, you probably don’t want to get into the math involved with making sure the text will actually fit in the control. Ideally you’d want to add an AutoSize property to your base Control class, and in something like a text block control, in their OnUpdate method they would check that property and decide how to lay themselves out. That way, you still need to do the math if the control’s autosize property is true, but you do it INSIDE the textblock control instead of in, say, your settings menu’s UI layout code. So you only need to code it in once and never have to deal with it again. And you can switch it off if you need to.
  • Soft minimum/maximum sizes. Going back to our text block example, if your text block is autosized, you may want to make it so after a certain width, text would wrap to a new line. This isn’t possible if you don’t allow arbitrary soft maximum sizes for your control. This is very easy to implement, however - it’s just an additional check in your width and height property. You’d end up clamping the incoming value so it’s between the limits.
  • Translucency, tints, shaders, etc. This would be something that is handled by the UI manager or the parent control, but if you want an even fancier looking UI with lots of cool animations, you should look into pixel shader support. Of course, if all you want to do are simple fades and color-based animations, you don’t need full pixel shader support as MonoGame does that stuff for you (Ever multiplied a Color by a floating-point number between 0 and 1? That’s a neat trick that allows you to adjust the translucency of the color), but if you plan to use your UI framework in other games, it is a good feature to have.
  • A windowing system - Draggable, resizable windows can be very useful depending on what your UI is used for. In my game, the UI is the main interaction point - the game is set in a fictional Unix-like operating system with a GNOME 2-style desktop environment - a windowing system would work with your UI manager and allow things like window borders, maximizing, minimizing, resizing, dragging, closing, hiding, dialog boxes, HUD panels, etc, etc. It’s a bit harder to implement, but can be VERY rewarding.
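The theme idea in the first bullet can be as small as a single interface. A sketch (my own method list, not the poster's actual API; GraphicsContext is the wrapper class described above):

    // Hypothetical ITheme: the game supplies one implementation and the UI
    // framework calls into it instead of hard-coding how controls look.
    public interface ITheme
    {
        void DrawButton(GraphicsContext gfx, Rectangle bounds, string text, bool hovered, bool pressed);
        void DrawCheckbox(GraphicsContext gfx, Rectangle bounds, bool isChecked);
        void DrawPicture(GraphicsContext gfx, Rectangle bounds, Texture2D texture);
    }

Each control's paint method then asks the current theme to draw it, so swapping the look of an entire game is one class.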

I know this was a bit of a long post, and probably went way over most people’s heads, but hopefully it helps someone :stuck_out_tongue:

I try to use the Immediate Mode approach for everything I can. To such an extent that I have Dear ImGui renderers laying about for Qt's QPainter and MFC's GDI rather than use either one's awful controls.

Admittedly I think the approach can be a little overwhelming at times. My transform-gizmo also functions in an immediate mode fashion and that did admittedly get both a little messy and a little redundant.

I wrote my own UI library because I wanted the flexibility. I programmed it to be identical to Java Swing. So you just create a Panel(Vector2 Position) and you can add other GUI elements to it (including other panels). You can add action listeners, layouts, etc… to all gui elements.

I figure, why reinvent the wheel in terms of UI patterns? The Swing methodology is very easy to understand and easy to maintain (when looking back at old code).

Not sure that helps or answers your question but that is the methodology I used.

David

I didn't think I'd get so much feedback; it seems everyone has tackled it in different ways.

You might want to make a separation between “containers” and “entities” but in my case I didn’t feel it was necessary so wouldn’t recommend it personally.

Can you elaborate on or define “containers” and “entities”, if this is more than just the general sense of the terms?

I also add a global scaling property which is very
useful to fit different resolutions.

I did all the scaling last time around, but I'm going to drop it and base sizes off the font itself and load different point sizes. I've already somewhat decided on that.

Another big thing in UI to worry about is positioning and size of
entities, in regard to different screen sizes. In my case I did a system
of anchors and offsets (eg offset of 10x10 pixels from top-left corner
of the parent entity), and all sizes can be either in pixels or percent
of parent size.

This part, however, is particularly troubling for me.

The idea of auto-setting positions with some sort of layout basically restricts a lot of other possibilities and adds a lot of complexity. It is such an annoying thing when I dwell on it that I think the decision is more troubling than the actual code.

I recently thought that maybe every sort of panel or button should just have a base scrolling ability like a window, so I can worry about it later on.

if I ever write another UI system I think I’ll try a different approach of copying Twitter’s bootstrap grid.

Sounds interesting.

I’m actually in the middle of writing a windowing system and a user interface framework for MonoGame right now as part of my Peace engine project

Watercolor.
Thanks for the in-depth response; it's quite a bit different from how I have been looking at it. It deserves some quiet reading over, and I'm sure I will reference this more than once. Tons of info here.

This part sounded particularly impressive.

The Game class, while initializing, looks inside every EXE and DLL file
in the game’s executable folder for classes that implement this
interface, and it tries to load them in.

^^ This is beyond me. How do you do that?

This is what is called a Dependency Injection design pattern.
Using this design pattern allows you to pull off some PRETTY AMAZING modularity

I'll have to look into this. What are the drawbacks, if any, to this pattern?

I try to use the Immediate Mode approach for everything I can. To such an extent that I have DearIMGUI renderers laying about for QT’s QPainter and MFC’s GDI rather than use either one’s awful controls.

I've heard of IMGUI but never used it, and have barely any knowledge of the ideas behind it.

I wrote my own UI library because I wanted the flexibility.

primary requisite ^.

Yeah, I mean some UI systems have panels and elements, and only panels can contain elements (and other panels), similar to folders and files. Personally I felt like this separation was not needed, and in many cases I found it advantageous to have the basic entities be both containers and elements, e.g. a button can have another button inside of it as a child, etc.

So I advise against it; just make it so every element can have children, not just a special type of element.

Interesting approach.

Adding to the complexity of the coding - absolutely :slight_smile: But it makes the usage a lot simpler. Also, in my case I allowed sizes to be defined as either relative or in pixels, so you can always use the “top left” anchor and set an offset in pixels from it, giving you the freedom to place and size everything as you want. If I'm not mistaken, Ogre's Trays don't have this ability, and it's kinda annoying, like you said.

Another thing I thought I might drop in now: I slightly regret having everything set in vectors / points. If I could start over, I think I'd try to copy CSS a bit more, at least the part where you can set position / size with a “px”, “%”, or “em” suffix. It sounds like a good idea on paper, at least.

In my old UIs, like my first couple, I kept everything internally in floating-point coordinates plus a rectangle for drawn coordinates, so that everything was held as a percentage in relation to its parent.
I long ago made a virtual rectangle class that basically does everything Rectangle does and more (I made a struct version too). Basically everything scaled to the game window, including text.

Usage-wise it was good; it was simple.
Internally it was a nightmare to maintain or extend.
This is how it was used:

public class ExampleMode_3
{
    BxButtonManager Buttonmanager;
    //
    public ExampleMode_3(int game_mode)
    {
        Buttonmanager = new BxButtonManager(game_mode);
        Buttonmanager.createListButton("Menu", new VirtRect(.4f, .05f, .2f, .03f), new VirtRect(.4f, .05f, .2f, .6f), MenuModeChange_BM, false, false, new string[] { "0) Goto Menu", "1) mode 1 example", "2) mode 2 example", "3) mode 3 example", "4) mode 4 example", "5) mode 5 example", "6) mode 6 example" });
    }
    public void onWindowsResize(int w, int h)
    {
        Console.WriteLine(" ExampleMode_3 window is resizing  w " + w + " h " + h);
    }
    public void load() { }
    public void unload() { }
    public void initialize() { }
    public void update()
    { }
    public void draw()
    {
        BxEngine.clearScreen(Color.DarkSeaGreen);
        BxEngine.callBeginDraw(true);
        BxDraw2d.DrawCoolText("Example mode 3", .4f, .01f, Color.Blue);
        // buttons drawn at end call
        BxEngine.callEndDraw(true);
    }
    public void MenuModeChange_BM()
    {
        BxEngine.game_mode = BxButtonManager.getListButtonChoice();
    }
}

I'm head over heels for the Immediate Mode GUI (IMGUI) system; I think it's much easier to work with. It looks wonky at first, but it makes it easy to change the UI on the fly. Minimal data about the UI is stored between frames - it's mostly generated each frame, which allows you to just write the action code and ignore much of the setup.

public enum ImmediateModePass
{
	Draw,
	Update,
	Size
}

internal override void Update(GameTime gameTime)
{
	UpdateAndDraw(ImmediateModePass.Update, null);
}

internal override void Draw(GameTime gameTime, Renderer renderer)
{
	UpdateAndDraw(ImmediateModePass.Draw, renderer, gameTime);
}

internal void UpdateAndDraw(ImmediateModePass mode, Renderer renderer, GameTime gameTime = null)
{
	// UIManager manages Input data, Window data, and other state.
	var menuPanel = new ImmediateModePanel(mode, renderer, Vector2.Zero, UIManager.ImguiContext);
	
	// DoTextButton will only return true if the mode is Update.
	if (menuPanel.DoTextButton("Play", buttonTexture))
	{
		GameState = State.Playing;
	}	
	if (menuPanel.DoTextButton("Exit", buttonTexture))
	{
		GameState = State.Quitting;
	}
}

The DoTextButton function will detect what mode we are in and either draw itself or check whether it's being clicked. It also advances the panel's current position so the next item will be to the right.

IMGUI is not completely stateless. If you want to do hover highlights and different button images when clicked you have to make an ID for each button or action and then pass that into DoTextButton, and UIManager will store the current item ID. If you want to have multiple windows you have to store state on them and what order they are in have have a separate UpdateAndDraw function for each window type. If you want scrolling you should run a sizing pass of the UpdateAndDraw loop which at the end of you can grab the total size of menuPanel, then on the Update and Draw loops you can call menuPanel.SetupForScroll(size).