MonoGame's role in an authoritative server?

Even though MonoGame is a graphics engine, I see that a lot of people are leveraging the 3D/2D objects that are loaded as part of their collision detection and physics.

If I write an authoritative server, then it doesn't really need to be concerned with graphics. It will need to load the XNB models to get their bounding boxes for use in collision detection, so I'll have a reference to MonoGame in my server project and would use MonoGame objects like Vectors and Matrices in both server and client. My question is: should it be no problem running as a Windows service without a graphics device server-side, as long as I never call Draw? I'm trying to determine if there might be some dependency on a graphics device that would prevent an application referencing MonoGame assemblies from loading successfully on a server.

The idea being that the physics/collision code would be shared by both the server and the client (for client-side prediction), but the server would not call Draw.
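To illustrate, a minimal sketch of the kind of shared code I mean, using only MonoGame's math types (ServerEntity and its members are hypothetical names of my own):

```csharp
// Shared between client and server; this only needs MonoGame's math types,
// never a GraphicsDevice.
using Microsoft.Xna.Framework;

public class ServerEntity // hypothetical name
{
    public Vector3 Position;        // tracked by the simulation
    public BoundingBox LocalBounds; // extracted from the model's XNB

    // World-space bounds: translate the model-space box by the entity's position.
    public BoundingBox WorldBounds => new BoundingBox(
        LocalBounds.Min + Position,
        LocalBounds.Max + Position);

    public bool CollidesWith(ServerEntity other) =>
        WorldBounds.Intersects(other.WorldBounds);
}
```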

Just looking for general advice on whether this is a sound approach, and any other tips.

I'm also wondering if there are good approaches to minimizing memory usage on the server, e.g. a way to generate server-specific XNBs that contain just the bounding boxes and leave out the meshes and other information not needed server-side.

I can't run MonoGame without a graphics device… Execution is halted, and it demands a graphics device.

I don't know if there is a way around this.
But why do you need to NOT have a graphics device?

  1. If it's running in a VM, then the virtualized graphics device is unlikely to support 3D.
  2. If it's running as a service, then it won't have access to the graphics device.
  3. Even if running on bare metal as a user-mode application (instead of a service), many server-class machines will only have a basic VGA graphics device.

Just one of those is enough to make a dependency on a 3D graphics device a problem on the server side.

Wow, I would have assumed all present-day machines had some rudimentary graphics device, some onboard chip at least. Better than VGA, I mean…

Have you considered regular C# and just running some CPU collision detection on raw numbers rather than actual XNBs? Send it the values of a bounding box, instead of a bounding box. That sort of thing.

especially if you are planning to use simplified geometry anyway?
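Something like this, in plain C# with no MonoGame reference at all (just a sketch):

```csharp
// Plain C#, no MonoGame reference: the server only ever sees raw extents.
public struct Aabb
{
    public float MinX, MinY, MinZ;
    public float MaxX, MaxY, MaxZ;

    // Standard axis-aligned overlap test on the raw numbers.
    public bool Intersects(Aabb other) =>
        MinX <= other.MaxX && MaxX >= other.MinX &&
        MinY <= other.MaxY && MaxY >= other.MinY &&
        MinZ <= other.MaxZ && MaxZ >= other.MinZ;
}
```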

Yeah, it'd be pretty unusual for a dedicated server to require a 3D graphics card. I imagine there are a few out there, but all the ones I can think of do not.

I'd be recreating a lot of what's already in MonoGame. It will be 3D, so even if I load only bounding boxes from my own format, I'm still going to want to apply world/local transforms as part of tracking location and applying physics. I'd be recreating a lot of 3D math that is already in MonoGame. I'd also like to be able to use some of the MonoGame-based extensions/libraries that might be useful purely for the math.

I found an example where they circumvent the Game class, but it's more in the context of doing stuff off-screen. I think it is still trying to grab the actual graphics device, though. I'll test it out in a service project to see how it pans out. I'm not optimistic, though, because I don't think you can even create a Form outside of user mode. I'll lose some of the fixed-tickrate stuff too. If it works, I'll probably create a HeadlessGame class that brings in that missing functionality from upstream.

All you really need is to run a game loop, right? Most of the stuff Game does in its loop is window events/updating.
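Something like this would be the bare minimum (a rough sketch; 60 Hz is arbitrary and Update is a placeholder for your shared simulation step):

```csharp
// Minimal headless fixed-tick loop: roughly what Game's fixed timestep does,
// minus all the window/device plumbing.
using System;
using System.Diagnostics;
using System.Threading;

class HeadlessLoop
{
    static readonly TimeSpan Tick = TimeSpan.FromSeconds(1.0 / 60.0); // 60 Hz

    static void Update(TimeSpan dt) { /* shared simulation step goes here */ }

    static void Main()
    {
        var clock = Stopwatch.StartNew();
        var accumulated = TimeSpan.Zero;
        var last = clock.Elapsed;

        while (true) // swap in a real shutdown signal for a service
        {
            var now = clock.Elapsed;
            accumulated += now - last;
            last = now;

            // Run as many fixed ticks as we've fallen behind by.
            while (accumulated >= Tick)
            {
                accumulated -= Tick;
                Update(Tick);
            }

            Thread.Sleep(1); // yield between ticks
        }
    }
}
```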

Hmmm. I read your thread out of curiosity and I have a question.
Why do you want to do collision detection on the server? How many games could such a server run simultaneously? A single one?
In my book that only makes sense if I want to have a massively multiplayer game.
In every other case I would make one of the MP clients the server. They have to calculate your model anyway, and calling one the authority on decisions will work up to a certain number of players, depending on your model and the network throughput on the 'server'.
That way your server would only be concerned with linking games together, doing the NAT punch-through, and then leaving them alone.
Does that make sense?

@throbax Yep, I'm familiar with that approach. It's pretty much what a lot of game engines have you do, which is why I'm opting for a graphics-only engine. It is going to be a bit more MMO-ish, with the server state being persistent as players come and go, minus the large number of players. I also don't want to trust any single player with the server (they could muck with the server state, since it resides in their memory space).

I've looked at several of the manual approaches to creating the GraphicsDevice, which is required by the content loader, as well as some of the old WinForms examples which also circumvent the Game class, but they still require a 3D-capable adapter. I've tried enumerating the reference driver from the DirectX SDK, but no luck so far. I'll probably post a simplified attempt at that separately to see if I can get it to work. I understand the reference adapter is slow, but I only really need it to successfully load models.

Failing that, I'll probably go with the original suggestion of just loading bounding boxes from my own format, since it seems that even without a device I can still use a good bit of the library:
- Most of the 3D math methods/objects work fine without Game/GraphicsDevice. The ones I've seen so far that do require a device aren't relevant to the server, since they deal with perspective transforms.
- I should be able to modify the pipeline to automatically produce just the bounding boxes in a simplified JSON format (see the loading sketch after this list).
- Game has a really well-behaved fixed-tick loop, but that shouldn't be too difficult to implement myself, and as I thought about it more, I'll want some specializations that deal with clients syncing timesteps with the server, similar to how the Source engine handles tick snapshots. So in the end the default implementation probably wouldn't have sufficed anyway.
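For the JSON idea, I'm picturing a server-side loader roughly like this (the JSON shape and the BoundsLoader name are my own invention, and I'm assuming System.Text.Json is available):

```csharp
// Hypothetical server-side loader for the stripped-down JSON described above,
// e.g. { "crate": { "Min": [-1,-1,-1], "Max": [1,1,1] } }
using System.Collections.Generic;
using System.IO;
using System.Text.Json;
using Microsoft.Xna.Framework;

public static class BoundsLoader
{
    public static Dictionary<string, BoundingBox> Load(string path)
    {
        using var doc = JsonDocument.Parse(File.ReadAllText(path));
        var result = new Dictionary<string, BoundingBox>();

        foreach (var model in doc.RootElement.EnumerateObject())
        {
            // Read a three-float array property into a Vector3.
            Vector3 Read(string name)
            {
                var a = model.Value.GetProperty(name);
                return new Vector3(
                    a[0].GetSingle(), a[1].GetSingle(), a[2].GetSingle());
            }

            result[model.Name] = new BoundingBox(Read("Min"), Read("Max"));
        }
        return result;
    }
}
```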

If your server has DirectX, I think you can use GraphicsAdapter.UseReferenceDevice to get a software GraphicsDevice in case you do need it. It will be a lot slower, though.
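i.e. flip the flag before any device gets created (I haven't tried this on a server myself):

```csharp
using Microsoft.Xna.Framework.Graphics;

// Set before any GraphicsDevice is constructed, so device creation targets
// the DirectX reference (software) rasterizer instead of the hardware one.
GraphicsAdapter.UseReferenceDevice = true;
```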

Nice! Is that what "software rendering" means? That sort of thing?

You just do stuff that normally runs on the GPU on the CPU instead. So nothing runs nicely in parallel, but there's no GPU-specific behaviour either. I learned about this feature a couple of days ago because Tom recommended using it in testing to null out GPU-specific differences.
