Don’t know if this was already suggested: access to the low-level graphics API, i.e. SharpDX on the Windows DirectX platform and the equivalent APIs used elsewhere (OpenTK for GL, I think, etc.), in order to reach features “denied” by the multi-platform choices that were made, for those who deliberately don’t care about some platforms.
It could be done by exposing (to use the example above) the SharpDX device from the GraphicsDevice instance (looking at the MonoGame DirectX GraphicsDevice, the device, context and other members are all internal, so I guess they can’t be used from a game instance).
[quote=“InfiniteProductions, post:80, topic:6850”]
Don’t know if it was already suggested: access to low-level graphics API, ie: SharpDX on windows DX platform
[/quote]This is already available for DirectX platforms via the GraphicsDevice.Handle property.
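For illustration, a minimal sketch of what that gives you on the WindowsDX platform. It assumes Handle exposes the underlying SharpDX Direct3D 11 device object, which may differ between MonoGame versions, so verify the actual return type before relying on it:

```csharp
// Hedged sketch: grab the native SharpDX device behind MonoGame's GraphicsDevice.
// Assumes GraphicsDevice.Handle exposes the SharpDX.Direct3D11.Device on the
// WindowsDX platform; check the actual type in your MonoGame version.
object nativeHandle = GraphicsDevice.Handle;
var d3dDevice = nativeHandle as SharpDX.Direct3D11.Device;
if (d3dDevice != null)
{
    // From here you can reach DirectX-only features MonoGame doesn't expose,
    // e.g. the immediate context for raw resource creation or draw calls.
    var context = d3dDevice.ImmediateContext;
}
```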
I’d like to see a built-in masking system, so that I can easily hide parts of textures and quickly make effects like darkness, easy Diablo-style mana/health orbs and much more.
Isn’t using BlendState sufficient? It might not seem very easy, but that’s mostly because of the flexibility it offers, I think. There’s also the stencil buffer / AlphaTestEffect, so it’s not like you don’t have options here.
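To show it’s workable, here’s a rough two-pass stencil-mask sketch along those lines. Names like maskTexture and healthFillTexture are placeholders, and it assumes the back buffer was created with a stencil format (e.g. PreferredDepthStencilFormat = DepthFormat.Depth24Stencil8):

```csharp
// Pass 1 writes the mask shape into the stencil buffer, pass 2 draws content
// only where the stencil was written. AlphaTestEffect discards fully
// transparent mask pixels so they don't write stencil values.
var writeMask = new DepthStencilState
{
    StencilEnable = true,
    StencilFunction = CompareFunction.Always,
    StencilPass = StencilOperation.Replace,
    ReferenceStencil = 1,
    DepthBufferEnable = false,
};
var drawMasked = new DepthStencilState
{
    StencilEnable = true,
    StencilFunction = CompareFunction.Equal,
    StencilPass = StencilOperation.Keep,
    ReferenceStencil = 1,
    DepthBufferEnable = false,
};
var alphaTest = new AlphaTestEffect(GraphicsDevice)
{
    Projection = Matrix.CreateOrthographicOffCenter(
        0, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height, 0, 0, 1),
};

// Make sure the stencil starts clean each frame.
GraphicsDevice.Clear(ClearOptions.Stencil, Color.Transparent, 0, 0);

// Pass 1: write the mask (e.g. a circle texture for a Diablo-style orb).
spriteBatch.Begin(SpriteSortMode.Immediate, null, null, writeMask, null, alphaTest);
spriteBatch.Draw(maskTexture, maskPosition, Color.White);
spriteBatch.End();

// Pass 2: draw the content clipped to the mask.
spriteBatch.Begin(SpriteSortMode.Immediate, null, null, drawMasked, null, null);
spriteBatch.Draw(healthFillTexture, maskPosition, Color.White);
spriteBatch.End();
```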
For most games it would be enough to load all resources into one ContentManager and reuse them. But on embedded systems RAM is severely limited. In my case, I am developing a mobile game with different screens and assets. On every screen I have to render different assets and models. I would like to “unload” all the used resources when I switch screens, but that is not possible with one ContentManager.
Also very confusing is that you are able to dispose a texture but not able to reload it again with the same ContentManager. It took me some hours to realize that once I dispose a texture I am not able to reload it again.
So my wish is:
Being able to unload resources in a ContentManager
Being able to reload resources in the same ContentManager
Or at least tell the ContentManager something like: contentManager.EnableDynamicReloading = true;
It should be possible for the ContentManager to distinguish something like:
if (resource.State == Disposed) return newInstance;
else return existingResource;
I’d argue most non-trivial games don’t or at least shouldn’t do this…
If you have different stuff loaded you can just use ContentManager.Unload and then load the new assets using ContentManager.Load.
Both of these things are possible. The limitations are that you can’t unload specific assets and you can’t reload assets that you disposed yourself instead of letting the ContentManager handle it. It sounds like the ContentManager as-is is more than sufficient for your game. Maybe you do need two: one for global assets and one for per-screen assets.
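For example, a minimal sketch of that two-manager setup (names like screenContent and the asset paths are just placeholders):

```csharp
// One ContentManager for assets that live for the whole game, and a second
// one per screen that gets thrown away on screen transitions.
ContentManager screenContent = new ContentManager(Services, Content.RootDirectory);

// Global assets go through the Game's own Content as usual:
var font = Content.Load<SpriteFont>("Fonts/Default");

// Per-screen assets go through the screen-scoped manager:
var shopModel = screenContent.Load<Model>("Screens/Shop/Counter");

// When leaving the screen, unload everything it loaded in one call:
screenContent.Unload();
```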
I think a PR was merged to throw a better exception in this case. The idea is that you should let the ContentManager handle unmanaged resources and not dispose stuff yourself.
[quote=“Jjagg, post:86, topic:6850”]
Both of these things are possible. The limitations are that you can’t unload specific assets and you can’t reload assets that you disposed yourself instead of letting the ContentManager handle it. It sounds like the ContentManager as-is is more than sufficient for your game.[/quote]
That’s why I added this wish: I do want to dispose some assets instead of disposing the whole ContentManager.
That’s what I have done now, but it feels like a workaround.
If you subclass ContentManager you can do all of that right now. There is a LoadedContent dictionary with all the loaded assets, so you can get at it yourself.
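A hedged sketch of such a subclass; the exact name and visibility of the asset dictionary varies between MonoGame versions (the post calls it LoadedContent, other sources expose LoadedAssets), so adjust the member name to match your version:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Xna.Framework.Content;

// Sketch only: a ContentManager that can drop a single asset so the next
// Load<T> re-reads it from disk. Assumes the base class exposes its
// loaded-asset dictionary (named LoadedAssets here) to subclasses.
public class ScreenContentManager : ContentManager
{
    public ScreenContentManager(IServiceProvider services, string rootDirectory)
        : base(services, rootDirectory) { }

    public void UnloadSingleAsset(string assetName)
    {
        // Note: keys may be stored in normalized form, so pass the same name
        // you used when calling Load<T>.
        if (LoadedAssets.TryGetValue(assetName, out object asset))
        {
            (asset as IDisposable)?.Dispose();  // free GPU resources if the asset owns any
            LoadedAssets.Remove(assetName);     // forget it so Load<T> reloads instead of returning the stale instance
        }
    }
}
```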
The desktop OpenGL platform already supports them; most DirectX platforms can’t, because Microsoft only allows Xbox controllers to be detected as gamepads.
Does this mean that if I create an OpenGL project, I can use all sorts of regular joysticks? You know, the standard USB ones for all sorts of games… I have felt very restricted by the Xbox-gamepad-only support.
I suppose a desktop OpenGL project does not provide analog pressure values for the gamepad buttons “A”, “B”, “X”, “Y” (like it does for the triggers), but only a bool pressed/released state like DirectInput?
Sony gamepads support analog pressure values for most buttons like “A”, “B”, “X”, “Y”. The ScpToolkit project (http://forums.pcsx2.net/Thread-ScpToolkit-XInput-Wrapper-aka-ScpServer-Reloaded) shows this.
P.S. Some gamepads like the Thrustmaster and the Xbox One Elite controller have more buttons than Xbox 360 gamepads. I’m not sure DirectInput supports these extra buttons.
XNA’s GamePad class was designed for Xbox 360 controllers, so inputs get translated to pressed/released ButtonState values. You can use the Joystick class (only implemented on DesktopGL) if you want lower-level access to a gamepad device, but you will not have any auto-mapping then.
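A quick sketch of that lower-level path on DesktopGL; button and axis indices come back raw and unmapped, so which index corresponds to “A” depends on the device:

```csharp
using Microsoft.Xna.Framework.Input;

// Poll the first attached joystick directly; only implemented on DesktopGL.
var caps = Joystick.GetCapabilities(0);
if (caps.IsConnected)
{
    JoystickState state = Joystick.GetState(0);

    bool firstButtonDown = state.Buttons[0] == ButtonState.Pressed; // raw index, no "A"/"B" mapping
    int firstAxisValue = state.Axes[0];                             // raw axis value

    if (state.Hats.Length > 0)
    {
        bool dpadUp = state.Hats[0].Up == ButtonState.Pressed;      // d-pads show up as "hats"
    }
}
```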
I know this has been briefly discussed in the past, but the idea was dismissed (which is unfair):
SharpDX and OpenTK should be replaced by BGFX
BGFX is simply the best graphics API abstraction library out there.
It offers a shader language abstraction as well; MonoGame could improve the syntax on top of it. It seems there are currently too many problems with converting HLSL to GLSL.
I doubt it. SharpDX and OpenTK are C# wrappers, so that’s already two levels to me. BGFX is a C interface; I would bet it would actually be faster.
BGFX implements more than DirectX and OpenGL. It has every API, and soon Vulkan too. Those who contribute to the graphics layer could focus on something else.
BGFX supports more platforms than MonoGame currently does
What about consoles? It’s pretty neat that it can target web though.
We need to maintain the current API, so it would take a bunch of work to figure it all out and it would still require maintenance. I expect a port to run into major issues when trying to fit BGFX into the current API.
Other stuff:
The way shaders work is different from what MonoGame users are familiar with (this one’s important and probably a dealbreaker by itself)
Multiple vertex streams are not supported, except specifically for instancing
Built in uniforms for World, View, Projection and more with a specific API (this isn’t really a big deal, we just wouldn’t use it)
We give up the control we have with using the different backends directly
It would be really, really great to have a library that abstracts away the graphics API for us, but porting to BGFX is not as easy, trouble-free or great as you make it sound IMO, and it would mean we’d deviate a lot from how XNA works, which is still one of the main goals for MG.
BGFX doesn’t require you to use its translator; at runtime it just takes compiled GLSL or HLSL. So it is possible to keep the current MG approach.
That is a good point. There will indeed be some incompatibilities with the original XNA implementation.
MonoGame is currently missing Vulkan and DX12 support. I’m not an expert, but I do know these APIs are very different from the DX9 and OpenGL paradigm. I remember Ogre3D had a lot of trouble integrating DirectX 12, and couldn’t implement it properly until they rewrote their backend abstraction. Now I suspect MG will face the same kind of issues…
The plan is not to just integrate these new low-level APIs, but rather to come up with a new API in MonoGame that better suits those back ends. Just integrating them wouldn’t be very beneficial for performance.
Are you developing BGFX or just an enthusiastic user? @raizam