OpenGL/DirectX & Reach/HiDef basics?

Hi all,

I am reading up on some things and getting into graphics a bit more and trying to understand how it works, mainly with MonoGame.

I now understand what DirectX and OpenGL (and ES) are: APIs for the graphics card. One is an open standard and the other is mainly Microsoft's. I get that programming in either, you are essentially talking to the GPU and telling it to do calculations, draw to the screen, etc.

I read somewhere that MonoGame code (your C# codebase) is just a wrapper for both APIs, so you can use the same code to talk to a GPU through DirectX or OpenGL. So why are there two project types in Visual Studio: Windows (DirectX) and portable (OpenGL, which also works on Windows)?

Also, are the Reach and HiDef profiles in MonoGame, or is that just an XNA thing? What else should I know about this area?

Windows can do both DirectX and OpenGL, but I think DirectX is easier/faster in most cases because it's specific to Windows. Since Windows supports OpenGL as well, a DesktopGL project can run on all three major desktop platforms, which is awesome. So basically, if you're doing a Windows-only game, DirectX is preferred; but if you want an executable that runs on Linux and Mac too, you should go with DesktopGL.

MonoGame has the Reach and HiDef profiles too. I think Reach is mostly used for targeting mobile. I don't know what limitations it has in comparison to HiDef.

I have gone through the source a bit. Basically, if Reach is enabled, the graphics device restricts itself to DirectX 9-level features, and if HiDef is enabled, it allows DirectX 10 and 11 features.

Is that a correct way of thinking about it?

That’s about it. We support the graphics profile, but not to the extent that XNA 4.0 did.
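For completeness, the profile is picked on the `GraphicsDeviceManager` before the graphics device gets created. A minimal sketch in C# (`ProfileDemoGame` is just an illustrative name; the `IsProfileSupported` fallback check is the XNA 4.0-style API, which MonoGame also exposes):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class ProfileDemoGame : Game
{
    private readonly GraphicsDeviceManager _graphics;

    public ProfileDemoGame()
    {
        _graphics = new GraphicsDeviceManager(this);

        // Ask for HiDef (DX10/11-class features), but fall back to Reach
        // (roughly DX9 / shader model 2.0 level) if the adapter can't do it.
        _graphics.GraphicsProfile =
            GraphicsAdapter.DefaultAdapter.IsProfileSupported(GraphicsProfile.HiDef)
                ? GraphicsProfile.HiDef
                : GraphicsProfile.Reach;
    }
}
```

The key point is that the assignment has to happen in the constructor (before `Initialize`), because the profile is applied when the device is created.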

To save making a new topic:

Textures and power-of-two (POT) resolution.

I know these days it hardly matters, but I have read that some (older?) GPUs cannot load NPOT textures, so is it best to still use POT textures?

Does it depend on sprite vs. static elements?

For example, a HUD that spans a small wide space (800x100) is best left like that as a texture, but a sprite of, say, 50x60 might be best as a 64x64 texture?

Perhaps I'm old-school, but I always use POT textures. Some texture compression formats, such as PVR on iOS and some Android devices, require square POT textures. While most desktop GPUs can handle NPOT textures these days, some are more efficient with POT. Texture compression usually requires dimensions that are at least a multiple of 4.

I think for your cases, NPOT should be fine.

Fun fact: most GPUs will pad your texture to a power of two anyway. But power of two doesn't mean it must be square, like 256 x 256 or 1024 x 1024; it can be 2048 x 256, etc.
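If you do want to pad to POT dimensions yourself, each axis just rounds up independently (per the note above, a POT texture doesn't have to be square). A small sketch; `RoundUpToPowerOfTwo` is my own helper name, not a MonoGame API:

```csharp
using System;

public static class TextureMath
{
    // Round a dimension up to the next power of two (e.g. 60 -> 64, 800 -> 1024).
    public static int RoundUpToPowerOfTwo(int value)
    {
        if (value <= 0) throw new ArgumentOutOfRangeException(nameof(value));
        int result = 1;
        while (result < value)
            result <<= 1; // double until we reach or pass the requested size
        return result;
    }
}
```

So a 50x60 sprite would land in a 64x64 texture, and an 800x100 HUD strip in a 1024x128 one, matching the examples earlier in the thread.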

Once upon a time there wasn't even OpenGL or DirectX. There was…

The x86 math co-processor and hex interrupts. The co-processor was really just a fast calculator, mainly for floating-point arithmetic; that's all you got. There was no GPU, lol.

To do anything you had to call I/O interrupts to the CPU via memory pointers.
If you ever wondered what the IRQs were in your Device Manager, there you go.
Back then you had to write all your own stuff, from matrices to line and polygon rasterization. You had to change video modes via interrupt requests, and the same went for the sound card, though that was a bit easier.
As for finding information, you were lucky to find more than a handful of tutorials on the entire internet, and there were no schools for this stuff.

Later on there were 3dfx, Glide, and OpenGL, and we started seeing cool games, many of which are retro classics: Doom, Quake, etc. My 3D Blaster Banshee was pretty sweet back then; I think it had a 100 MHz chip in it.

DirectX is Microsoft proprietary. It was born in response to OpenGL, after which Microsoft did about everything they could to make sure GL wasn't going to work right on their PCs, and conversely they couldn't get DirectX to work right at all; it was trash. Ah, the days of Windows 95 and even 98, lol. Nothing like not being able to install a driver because MS doesn't like its manufacturer and won't allow it.

Nowadays OpenGL is of course open, and DirectX is both established and works just as well. Programmatically, in my opinion, they are about the same. I actually think OpenGL is slightly clearer to use, but they both have basically the same calls, so they really are about equal now.

Both were, or are, programmed in C or C++. They are APIs (Application Programming Interfaces) to the video card driver; that driver is what actually communicates with the GPU. The API's job is to translate programmed commands for the driver.
When the timing is right, the driver bursts its command data to the GPU.
A framework is basically an API too, though more often the term means something that grants access to more than one API, or wraps an API.

XNA is a framework, yet it was essentially a C# wrapper around DirectX.
MonoGame is like the open version of XNA: it can use either, by calling wrappers to DX or GL depending on the project type you pick.
It lets the programmer use the same calls against either GL or DX.
In that regard, MonoGame is actually a little more than XNA.

Here is a bit of the history of GL and DX:

History of modern graphics processors