I was trying to figure out why my engine was faster on the desktop than on my notebook, about 5 times faster.
So I created an empty Windows DX project. On the notebook the NVIDIA card is the default graphics device; I also set it manually to be sure it was used when launching both the empty test project and my engine.
Same result. I logged some graphics info to the console, and it appears the Reach profile is used by default.
So I tried to set it manually to HiDef in the Initialize method, after base.Initialize();
It triggers a NotSupportedException: could not find a graphics device that supports the HiDef profile…
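For context, this is roughly the failing attempt (a sketch, not my exact code; I'm assuming a standard Game1 with a GraphicsDeviceManager field named graphics):

```csharp
protected override void Initialize()
{
    base.Initialize();

    // Changing the profile after the device already exists forces a reset.
    // If the adapter actually in use only reports feature level 9_3 (Reach),
    // this line throws NotSupportedException.
    graphics.GraphicsProfile = GraphicsProfile.HiDef;
    graphics.ApplyChanges();
}
```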
So I am scratching my head, as every recent game works perfectly on this machine but MonoGame does not.
Does anyone have any idea? How does MG decide whether the video card supports HiDef?
Edit: while using watches on the exception I expanded the handle’s value, and DebugName is “”.
Feature level is 9_3.
With an empty name I can’t be sure it is using NVIDIA’s card. So how do I enumerate video cards? The HD4000 also supports DX11, I think… I currently don’t know where to start.
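For reference, this is roughly how adapters can be enumerated (a sketch; I'm assuming the XNA-style GraphicsAdapter.Adapters, Description, and IsProfileSupported members, which MonoGame also exposes):

```csharp
using System;
using Microsoft.Xna.Framework.Graphics;

// List every adapter the framework can see, and whether each one
// claims HiDef support. This should make it obvious whether the
// NVIDIA card or the integrated Intel GPU is being picked up.
foreach (GraphicsAdapter adapter in GraphicsAdapter.Adapters)
{
    Console.WriteLine("{0} - HiDef supported: {1}",
        adapter.Description,
        adapter.IsProfileSupported(GraphicsProfile.HiDef));
}
```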
I’ve already tested this yesterday evening: assigning the NVIDIA GPU to the exe in the NVIDIA control panel, starting the app on the NVIDIA GPU via right-click, watching the list of applications currently using the NVIDIA GPU, etc. No success.
Now, as I’m on my desktop at work, I’ve tried using PreparingDeviceSettingsEventArgs in the constructor to set the HiDef profile. It is successful, as the app reports HiDef later on. I’ll try it on my laptop this evening, as I’ve already tried everything else I can think of.
It is fixed by using PreparingDeviceSettingsEventArgs to set HiDef.
But I would like to understand why it uses Reach by default for everyone. Is it to make multiplatform easier?
With the Reach profile, XNA could only handle 2048x2048 textures at most, which would have been too small and ugly for my engine, and no MRT, while MonoGame’s Reach seems to support MRT and bigger textures.
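For anyone landing here later, the fix looks roughly like this (a sketch; I'm assuming the usual Game1 constructor and a graphics field):

```csharp
public Game1()
{
    graphics = new GraphicsDeviceManager(this);

    // PreparingDeviceSettings fires before the device is created,
    // so overriding the profile here avoids the NotSupportedException
    // that setting it after base.Initialize() triggers.
    graphics.PreparingDeviceSettings += (sender, e) =>
    {
        e.GraphicsDeviceInformation.GraphicsProfile = GraphicsProfile.HiDef;
    };

    Content.RootDirectory = "Content";
}
```

Note that setting graphics.GraphicsProfile = GraphicsProfile.HiDef in the constructor, before the device is created, may work as well; the event handler is simply what I verified.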
From my personal experience with this issue on just one laptop: it defaults to Reach because my game is at a very early stage and not much is happening on screen, so since it doesn’t require a lot of GPU power, the driver hands the work to the integrated GPU, although it runs like crap on it. It switches automatically to the dedicated GPU when I play CS:GO or other demanding games.
Now I have a secondary monitor on HDMI, set as the primary monitor. Looking into the NVIDIA control panel, the external monitor is using the dedicated GPU and the laptop monitor is using the integrated GPU. I cannot change these settings like I could when running with just the laptop monitor. Now it defaults to HiDef.
Looking at a newer laptop from a colleague, he has both monitors set to the dedicated GPU in the NVIDIA control panel. So your default experience may differ depending on the hardware, software, and drivers. The default seems to be chosen by the drivers, although we can override it in PreparingDeviceSettingsEventArgs. I may be wrong though, as my tests were very limited.