Architecture

Hello,
I want to make a game using MonoGame (shocker, I know), and I know the gist of drawing things to the screen, the content pipeline, playing sounds…

But what I’m trying to do is lay a solid foundation for my game’s architecture. I’m working with a concept of a logic layer and view layers, and then a message queue to communicate between them. So far so good, but the encoding side of it is where my brain’s been thrashing a bit.
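For concreteness, the split I have in mind is roughly this shape (the interface names are just illustrative, not anything from MonoGame):

```csharp
// Sketch of the split: the logic layer only posts messages,
// each view (local renderer, future network peer) only drains them.
public interface IViewMessage { }            // marker for anything the view needs to know

public interface IMessageQueue
{
    void Post(IViewMessage message);         // called by the logic layer
    bool TryRead(out IViewMessage message);  // called by a view each Update/Draw
}
```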

Before I continue, the reason I’m tackling this problem is because I want to (a) support network play eventually, and (b) have a foundation that lets me be productive cuz I’m one dev with other stuff I need to do (like a job).

The general idea was first to support an encoding of everything the view needs to know to draw to the screen each tick. The immediate fear is being drowned in encoding-decoding performance hits, but I’m not 100% familiar with the performance levels of these things so I thought I’d explore the option.

So, I was tossing around a couple of ideas. The first one was a byte-based solution: define encodings of messages in bytes, sorta like bytecode. The problem I ran into first was encoding data for the messages (this property has this value, that property has that value, etc.). I could get reasonably small messages if I wanted 2-3 byte numbers*, but then encoding strings was going to be a pain. But the bigger speed bump was that Microsoft actually warns that binary deserialization is always dangerous for security reasons.
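To make that idea concrete, the hand-rolled byte route I was picturing looks roughly like this with BinaryWriter/BinaryReader (the MoveMessage layout is made up purely to illustrate; as far as I can tell, Microsoft’s warning is aimed at BinaryFormatter-style deserialization rather than a fixed, hand-defined layout like this):

```csharp
using System.IO;
using System.Text;

// Hypothetical message: "entity X moved to (x, y)". The message id, field
// order, and sizes are conventions you define yourself, like a tiny bytecode.
static class MoveMessage
{
    public const byte Id = 0x01;

    public static byte[] Encode(ushort entityId, float x, float y)
    {
        using var ms = new MemoryStream();
        using var w = new BinaryWriter(ms, Encoding.UTF8);
        w.Write(Id);         // 1 byte: message type
        w.Write(entityId);   // 2 bytes: which entity
        w.Write(x);          // 4 bytes each: new position
        w.Write(y);
        // Strings aren't too painful here: w.Write(someString) emits a
        // length-prefixed UTF-8 string, so the reader knows how much to read.
        return ms.ToArray();
    }

    public static (ushort EntityId, float X, float Y) Decode(byte[] data)
    {
        using var r = new BinaryReader(new MemoryStream(data), Encoding.UTF8);
        if (r.ReadByte() != Id) throw new InvalidDataException("unexpected message id");
        return (r.ReadUInt16(), r.ReadSingle(), r.ReadSingle());
    }
}
```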

So then I figured I’d use JSON encoding. Again, I’m concerned about drowning in serialization and deserialization, but I want to optimize for the biggest concern, which is network play, and if I can’t handle the serialization and deserialization locally, I can’t handle it over the network. After some reading about how network play works, I figured that if I send action messages once and status messages occasionally, I can give the view (local or network) enough information to chart out the course of things and predict what it needs to do. Obviously this is much less chatty than sending the state of the world every tick, and it has a bonus: by treating the local view like a network client (at least in terms of serialization), I’ll be able to notice when things are bogged down by too much serialization.
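Roughly what I mean by action/status messages, sketched with System.Text.Json (the record names and fields are invented for illustration, not a spec):

```csharp
using System.Text.Json;

// Hypothetical message shapes: an action is sent once when it happens;
// a status snapshot is sent occasionally so views can re-sync.
public record ActionMsg(int EntityId, string Action, float Dx, float Dy);
public record StatusMsg(int EntityId, float X, float Y, long Tick);

public static class Wire
{
    public static string Encode<T>(T msg) => JsonSerializer.Serialize(msg);
    public static T? Decode<T>(string json) => JsonSerializer.Deserialize<T>(json);
}

// Usage: the local view is fed exactly the same strings a network client
// would receive, so serialization cost shows up immediately in profiling.
//   string json = Wire.Encode(new ActionMsg(7, "move", 1f, 0f));
//   ActionMsg? msg = Wire.Decode<ActionMsg>(json);
```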

Thank you for reading this far. My question is: of the above approaches, which seems best? Did I abandon anything too soon? Is there a gotcha lying around the corner? Is there a better way I’m missing?

Thanks for your time.

EDIT: I should also mention I want to do a lot of procedural generation of content and areas, and be able to push updates without recompiling, which is another reason I was looking at encodings.

I am using the command queue for parallelism, not for network play.

Processing per frame (the Update method of MonoGame):

  1. Execute all commands issued in the previous frame.
  2. Run the multi-threaded update processing (hit detection, etc.).
     The appearance and disappearance of objects and variable updates are converted to commands, stored in the command queue, and applied in the next frame (see the sketch below).
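A minimal sketch of that pattern (the class and method names are only for illustration, not my actual code):

```csharp
using System;
using System.Collections.Concurrent;

// Commands produced by worker threads during step 2 are only executed
// single-threaded at the start of the next frame (step 1), so world state
// is never mutated while the parallel update is running.
public sealed class CommandQueue
{
    private readonly ConcurrentQueue<Action> _pending = new();

    public void Enqueue(Action command) => _pending.Enqueue(command);

    // Called once at the top of Update().
    public void ExecuteAll()
    {
        while (_pending.TryDequeue(out var command))
            command();
    }
}

// From a worker thread, instead of removing an object directly:
//   commands.Enqueue(() => world.Remove(entity));
```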

I tried the .NET standard serializers, JSON.NET, etc., but none of them were useful because they were too slow. I think a simple method of writing the function name and arguments in a lazy way is probably faster (something like the sketch below).
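By “function name and arguments in a lazy way” I mean something as plain as one line of text per call; a rough sketch (the format and the Spawn example are just assumptions):

```csharp
// One command per line: "name arg1 arg2 ...", e.g. "Spawn 12 3.5 7.25".
// No reflection, no general-purpose serializer, just Split and parse.
static class LazyCalls
{
    public static string Encode(string name, params object[] args) =>
        name + " " + string.Join(" ", args);

    public static (string Name, string[] Args) Decode(string line)
    {
        var parts = line.Split(' ');
        return (parts[0], parts[1..]);
    }
}
```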

Note: please understand that I am not good at English, so I may not convey my meaning well.


So, after doing more research (and a bunch of testing), I realize that trying to simulate network play for the local view via occasional serialized updates was a bad idea, not least because JSON-esque object serialization isn’t really the bread and butter of network play; reading from UDP and TCP streams is.
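In case it helps anyone following along, “reading from streams” here means framing the bytes yourself; a minimal length-prefix sketch over TCP (nothing library-specific, just plain sockets):

```csharp
using System;
using System.IO;
using System.Net.Sockets;

// TCP hands you a byte stream, not discrete messages, so a common convention
// is to prefix each message with its length and read exactly that many bytes.
static class Framing
{
    public static byte[] ReadFramed(NetworkStream stream)
    {
        byte[] prefix = ReadExact(stream, 4);          // 4-byte length prefix
        int length = BitConverter.ToInt32(prefix, 0);
        return ReadExact(stream, length);              // then the payload itself
    }

    static byte[] ReadExact(Stream stream, int count)
    {
        var buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
        }
        return buffer;
    }
}
```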

So I’m studying more about bytecode and interpreters.


I think you hit the nail on the head there: local serialization and deserialization is bad because it’s a bunch of extra work and it won’t actually solve your networking. Networking is a complex beast, and I find every single thing that needs to be networked is a little different. E.g., I send player state, position, and rotation in the network message, and that’s the minimum information to set up the animation to look right on another client. But if they have the bow out I also have to tell it how far back the bow is drawn, so I send one more number, because that’s the minimum data possible to make it work. It seems to me like you can’t make a generic system for this that handles every case (see the sketch below).
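To make that concrete, a hand-packed message along those lines might look like this (the field layout is invented for illustration, not my game’s actual format):

```csharp
using System.IO;

// Hypothetical player-state packet: always state/position/rotation, plus one
// extra float only when the bow is out, because that's all the other client
// needs to play the right animation.
static class PlayerPacket
{
    public static byte[] Encode(byte state, float x, float y, float rotation,
                                bool bowOut, float bowDraw)
    {
        using var ms = new MemoryStream();
        using var w = new BinaryWriter(ms);
        w.Write(state);
        w.Write(x);
        w.Write(y);
        w.Write(rotation);
        w.Write(bowOut);
        if (bowOut)
            w.Write(bowDraw);   // the one extra number for the bow animation
        return ms.ToArray();
    }
}
```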

For a turn-based game you could probably do it, but I think in real-time games you start hitting the performance boundaries of network traffic pretty quickly, and passing values as strings takes up way more space than passing integers or floats. That said, if you can get it working and it runs okay in your worst-case performance scenario, then it’s fine.

I think the local serialization is also bad just because of all the extra legwork you have to do to avoid the sometimes-maligned but actually good solution of sticking all your game state in global variables so it can be referenced from elsewhere.

The biggest issue that new game developers run into is that they try to anticipate all their future problems and then go on to solve dubious issues (that will never make it to the front of the priority list), and architect something complex that solves the wrong problems.

I use this technique:

  1. Start as simple as possible.
  2. Build a prototype.
  3. When you run into a limitation, queue it and prioritize it.
  4. Once your vertical slice is working, identify the highest-priority real problems.
  5. Refactor, rearchitect, and redesign until they are solved.
  6. Repeat until you have a great engine.
