Hi, I implemented dynamic resolution in a patch to my game on Android. The effect is great: it lets you balance the resolution against the GPU performance. Of course, it won't fix CPU-related issues, but it works perfectly for all GPU-related slowdowns. The downside is the moment the resolution change occurs - you can clearly see it on aliased edges, so it is worth considering implementing AA. Here you can see it easily, with no AA enabled and the quality settings and lowest possible resolution exaggerated for the device.
This is a really cool feature. I wanted to try it on my phone and downloaded the game, but I never noticed any lag or any resolution change (OnePlus 6, btw).
I am still a beginner and I have a couple of questions about the implementation of that feature.
How exactly do you determine how much GPU performance corresponds to which render size?
As an example: do you determine your preferred size by FPS? Prolly not the best way, because the GPU is not the only thing that impacts FPS.
(On the screenshots) It doesn't look like the resolution change is impacting the GUI. How do you change it for the game but not at all for the GUI?
Could you use this approach to supersample the resolution (if the GPU is not overworked)?
It would be great if you could give code samples along with the explanations, for clarification.
If you go to the options you can change the quality level. High and Very High are quite crazy and kill the performance fast, I think even on your powerful OnePlus 6.
Yep, not by FPS - actually most often I have issues with the CPU, not the GPU. The idea is to measure the number of milliseconds a Draw() call takes. Since 1 s = 1000 ms, 30 FPS means 33.3 ms per frame. Hence, if your frame takes more than 33.3 ms to draw, your performance suffers. Now, how to calculate the preferred render size. I use an algorithm that goes like this (see the sketch after the parameter descriptions):
ResolutionChangeFactor is a constant controlling the rate of resolution change. I played a bit with the values and 0.01f works well for me.
currentResolutionScaleRatio is quite self-explanatory; its starting value is of course 1f.
DesiredTimeFrame is the target number of milliseconds, so our 33.3f for 30 FPS.
_drawStopwatch.ElapsedMilliseconds is the number of milliseconds the last Draw() call took.
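Roughly like this - a simplified sketch of the loop; the clamp bounds (MinResolutionScaleRatio and MaxResolutionScaleRatio) are example values I'm adding here for illustration, tune them per game:

```csharp
using System.Diagnostics;
using Microsoft.Xna.Framework;

// Inside the Game class. _drawStopwatch is restarted at the start of Draw()
// and stopped at the end, so ElapsedMilliseconds covers the whole draw.
private readonly Stopwatch _drawStopwatch = new Stopwatch();

private const float ResolutionChangeFactor = 0.01f;   // rate of resolution change
private const float DesiredTimeFrame = 33.3f;         // 1000 ms / 30 FPS
private const float MinResolutionScaleRatio = 0.5f;   // example lower bound
private const float MaxResolutionScaleRatio = 1f;     // raise above 1f for supersampling

private float currentResolutionScaleRatio = 1f;

private void UpdateResolutionScale()
{
    if (_drawStopwatch.ElapsedMilliseconds > DesiredTimeFrame)
    {
        // The last frame took too long: render the next one at a lower resolution.
        currentResolutionScaleRatio -= ResolutionChangeFactor;
    }
    else
    {
        // There is headroom: creep back up towards full resolution.
        currentResolutionScaleRatio += ResolutionChangeFactor;
    }

    currentResolutionScaleRatio = MathHelper.Clamp(
        currentResolutionScaleRatio, MinResolutionScaleRatio, MaxResolutionScaleRatio);
}
```

With a factor of 0.01f the scale drifts gradually over many frames rather than jumping, which helps keep the transitions subtle.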
Indeed, that is achieved using RenderTargets. The game is rendered into a single RenderTarget, the UI into a separate one. The game RenderTarget is scaled with dynamic resolution, while the UI RenderTarget has a constant resolution.
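In MonoGame terms it looks roughly like this (a simplified sketch; DrawWorld(), DrawUI(), nativeWidth/nativeHeight and spriteBatch are placeholders for whatever your game already has):

```csharp
// assumes: using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics;

// Recreate gameTarget only when currentResolutionScaleRatio actually changes;
// allocating a RenderTarget2D every frame would be wasteful.
int scaledWidth  = (int)(nativeWidth  * currentResolutionScaleRatio);
int scaledHeight = (int)(nativeHeight * currentResolutionScaleRatio);
var gameTarget = new RenderTarget2D(GraphicsDevice, scaledWidth, scaledHeight);
var uiTarget   = new RenderTarget2D(GraphicsDevice, nativeWidth, nativeHeight);

// Render the game world at the scaled resolution.
GraphicsDevice.SetRenderTarget(gameTarget);
GraphicsDevice.Clear(Color.Black);
DrawWorld();

// Render the UI at full, constant resolution.
GraphicsDevice.SetRenderTarget(uiTarget);
GraphicsDevice.Clear(Color.Transparent);
DrawUI();

// Compose: stretch the game target over the backbuffer, then overlay the UI 1:1.
GraphicsDevice.SetRenderTarget(null);
spriteBatch.Begin(samplerState: SamplerState.LinearClamp);
spriteBatch.Draw(gameTarget, new Rectangle(0, 0, nativeWidth, nativeHeight), Color.White);
spriteBatch.Draw(uiTarget, Vector2.Zero, Color.White);
spriteBatch.End();
```

Because only the game target is stretched, the UI stays crisp no matter what the current scale is.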
YES! I managed to pull off 4K resolution in certain scenes on my Sony Xperia XA1. However, to keep battery usage low and the heat output reasonable, my game scales up only to 1080p, or to the maximum resolution supported by the device if that is lower than 1080p.
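The cap itself can sit right next to the clamp from the earlier sketch; something like this (again my example names, with GraphicsDevice.DisplayMode giving the device's native resolution):

```csharp
// Allow supersampling, but never above 1080p - or above the device's own
// resolution if that is lower - to keep battery drain and heat in check.
int cappedHeight = System.Math.Min(1080, GraphicsDevice.DisplayMode.Height);
float maxScale = (float)cappedHeight / nativeHeight;
currentResolutionScaleRatio = MathHelper.Clamp(
    currentResolutionScaleRatio, MinResolutionScaleRatio, maxScale);
```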
If you need any additional explanation, please feel free to ask. All I can ask for in exchange is a nice review on Google Play.
Small question: are you reallocating the RenderTargets on each target switch, or are you using a viewport in all your drawing functions to use a fraction of the original RenderTarget?
Nice game! (I would try it if I had a working Android.)
Oh, actually I did not think about using a viewport for that case. Given that I'm scaling a single RenderTarget, I don't see any performance issues coming from the reallocation. But it is an interesting idea: when you have many RenderTargets to scale, that would be the better approach.
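For anyone curious, the viewport variant would look something like this sketch (same placeholder names as in the earlier snippets): allocate the full-size target once, draw into only a fraction of it, then sample just that region when composing.

```csharp
// Allocate one full-size target once, up front - no reallocation on scale changes.
var gameTarget = new RenderTarget2D(GraphicsDevice, nativeWidth, nativeHeight);

// Each frame, restrict rendering to the scaled sub-region via the viewport.
// (Set the viewport after SetRenderTarget, which resets it to the full target.)
int scaledWidth  = (int)(nativeWidth  * currentResolutionScaleRatio);
int scaledHeight = (int)(nativeHeight * currentResolutionScaleRatio);

GraphicsDevice.SetRenderTarget(gameTarget);
GraphicsDevice.Viewport = new Viewport(0, 0, scaledWidth, scaledHeight);
GraphicsDevice.Clear(Color.Black);
DrawWorld();

// Compose: sample only the region that was actually rendered.
GraphicsDevice.SetRenderTarget(null);
spriteBatch.Begin(samplerState: SamplerState.LinearClamp);
spriteBatch.Draw(
    gameTarget,
    new Rectangle(0, 0, nativeWidth, nativeHeight),   // destination: full screen
    new Rectangle(0, 0, scaledWidth, scaledHeight),   // source: the drawn fraction
    Color.White);
spriteBatch.End();
```

One caveat: with a fixed-size target this caps the scale at 1f, so to keep supersampling you would have to allocate the target at the maximum scale instead.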