I’ve rounded out (!) my transitions with a circle transition.
They have a very smooth border, but the GIF compression makes it seem a bit blocky, since the limited color range can't interpolate well (no dithering).
Honestly though, the only one that's really satisfying and not distracting is the swipe. I'm probably gonna use that one in all future projects that have more than one screen state.
If you test with the Autodesk viewer, you only verify that the exported FBX works with the Autodesk viewer. (It's a tautology, but I don't know how else to explain it!)
A custom viewer also tests that the model works correctly with the ModelImporter (either XNA or MG) and with the ModelProcessor or your custom AnimationProcessor.
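To make that concrete, here's a minimal sketch of the drawing side of such a viewer, assuming the FBX has already gone through the content pipeline and uses BasicEffect; the method and camera matrices are placeholders, not code from our actual viewer:

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Draws an already-imported Model; if this renders correctly, the FBX survived
// the ModelImporter/ModelProcessor step.
static void DrawModel(Model model, Matrix world, Matrix view, Matrix projection)
{
    Matrix[] transforms = new Matrix[model.Bones.Count];
    model.CopyAbsoluteBoneTransformsTo(transforms);

    foreach (ModelMesh mesh in model.Meshes)
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.EnableDefaultLighting();
            effect.World = transforms[mesh.ParentBone.Index] * world;
            effect.View = view;
            effect.Projection = projection;
        }
        mesh.Draw();
    }
}
```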
I was wondering why many devs make their own model validator when MonoGame is not the reference implementation. It seems like a lot of work when maybe only MonoGame needs to be fixed/improved. But I understand it allows you to say whether a model will work with the needs of a particular game.
If everyone creates their own viewer, isn't it because MonoGame (/Assimp?) does not import models properly? Just asking.
Not really. FBX is a complex and very flexible format, and no two programs implement it 100% the same way. Every 3D tool exports differently and usually comes with a lot of options. It's about finding the right combination.
The viewer saved us time, because we no longer had to move files back and forth, test them by rebuilding the whole project, etc.
BTW, my viewer was written in XNA! For my second project/game, long before I knew that MG existed.
It's not depth of field, it's just a blur that I draw over the complete image.
BUT
I could probably use it for DOF, though, if I make the quad size depend on depth and transform their size in the vertex buffer (similar to basic GPU particles). Unaffected quads then cover no pixels.
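As a rough sketch of what I mean (made-up parameter names like focusDepth/blurScale; the real thing would do this per vertex on the GPU):

```csharp
using System;

// Blur quads collapse toward zero size at the focus plane, so in-focus areas
// cover no pixels and only out-of-focus areas get blurred.
static float QuadSizeForDepth(float viewDepth, float focusDepth, float blurScale, float maxSize)
{
    float blur = Math.Abs(viewDepth - focusDepth) * blurScale;
    return Math.Min(blur, maxSize);
}
```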
I've built a very, very basic particle simulation, similar to what I did with the grass for Bounty Road.
The idea is to have each particle represented by a pixel in a render target (in the grass example this is not the case!) and to solve spring equations for each pixel in this render target.
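In plain C# terms, each pixel would evaluate something like the step below; this is just one plausible form of the spring (pull toward a rest position plus damping) with illustrative parameter names, not the actual shader code:

```csharp
using Microsoft.Xna.Framework;

// One plausible per-particle spring step (semi-implicit Euler): pulled toward a
// rest position and damped. In the actual simulation this math runs in the
// pixel shader, reading last frame's state from the other render target.
static void StepParticle(ref Vector2 position, ref Vector2 velocity,
                         Vector2 restPosition, float stiffness, float damping, float dt)
{
    Vector2 force = (restPosition - position) * stiffness - velocity * damping;
    velocity += force * dt;
    position += velocity * dt;
}
```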
In the image above I used 128x80 particles, so that's the size of the render target, too.
I need two render targets so they can read from each other; the active render target is switched each frame for this reason (ping-pong). Obviously this is a limitation of using pixel shaders, but I'm fine with that.
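The ping-pong part looks roughly like this (my own names and formats for the sketch; e.g. the "ParticleState" effect parameter is hypothetical):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Two 128x80 state textures; each frame one is read and the other is written.
RenderTarget2D rtA, rtB;
bool flip;

void CreateTargets(GraphicsDevice device)
{
    rtA = new RenderTarget2D(device, 128, 80, false, SurfaceFormat.Vector4, DepthFormat.None);
    rtB = new RenderTarget2D(device, 128, 80, false, SurfaceFormat.Vector4, DepthFormat.None);
}

void SimulateFrame(GraphicsDevice device, SpriteBatch spriteBatch, Effect springEffect)
{
    RenderTarget2D read  = flip ? rtA : rtB;   // last frame's particle state
    RenderTarget2D write = flip ? rtB : rtA;   // this frame's particle state

    device.SetRenderTarget(write);
    springEffect.Parameters["ParticleState"].SetValue(read);   // hypothetical parameter name

    // One full-screen pass; the pixel shader updates one particle per pixel.
    spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
        SamplerState.PointClamp, null, null, springEffect);
    spriteBatch.Draw(read, new Rectangle(0, 0, 128, 80), Color.White);
    spriteBatch.End();

    device.SetRenderTarget(null);
    flip = !flip;
}
```

PointClamp and Opaque matter here: you want the raw texel values, with no filtering or blending between particles.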
The simulation runs at ~2500 fps, but I think the limiting factor is something else at that point.
For what it's worth, it still runs at 500 fps with 1280x800 (over a million) particles. At that point I have massive overdraw though, so that's a thing to consider.