Manual generation of MipMaps?

Hi guys,

How would I go about manually generating mipmaps for a render target (or a texture, for that matter)?

Plus a bonus question: would it make a difference if I assign three textures with a single R8 channel each vs. one R8G8B8 texture?

You could subdivide the original texture into 8x8 pixel grids, average out the color value of each grid, and assign that as a pixel in your mipmap…

Or if you need to generate faster, you could divide your texture into 8x8 grids and use the top-left corner's color to set a pixel in your mipmap…

oh, and 8 is just an example value…
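
For what it's worth, here's a minimal CPU-side sketch of that box-filter idea, assuming a MonoGame/XNA Color[] and dimensions divisible by the grid size (the method name and signature are just made up for illustration):

// Minimal box-filter sketch: average each grid x grid block of "src"
// into one pixel of the result. grid = 2 gives a standard half-res level.
Color[] Downsample(Color[] src, int width, int height, int grid)
{
    int dw = width / grid, dh = height / grid;
    var dst = new Color[dw * dh];
    for (int y = 0; y < dh; y++)
    {
        for (int x = 0; x < dw; x++)
        {
            int r = 0, g = 0, b = 0, a = 0;
            for (int gy = 0; gy < grid; gy++)
            {
                for (int gx = 0; gx < grid; gx++)
                {
                    Color c = src[(y * grid + gy) * width + (x * grid + gx)];
                    r += c.R; g += c.G; b += c.B; a += c.A;
                }
            }
            int n = grid * grid;
            dst[y * dw + x] = new Color(r / n, g / n, b / n, a / n);
        }
    }
    return dst;
}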

The algorithm is a whole different thing, but how do I change the mipmap? Texture.SetData is on the CPU, right? So it would slow down the game a lot, since it would have to read the data back from graphics memory.

well, I suppose you should make the mipmaps beforehand, and include them as files with your game…

Unless you are generating your textures dynamically, in which case generating low-res mipmaps wouldn't be that much more work…

It seems we are talking about different things.

I need to generate mipmaps dynamically for dynamically changing textures. So yes, I need to do it at runtime.
However, I don't know how I can access and write to the MIPs.

Unless you are generating your textures dynamically, in which case generating low-res mipmaps wouldn't be that much more work…

Well, how can I write to the MIPs then? I obviously need to do it on the GPU, but how?
I don't know the actual code to bind a mip level as a render target for the GPU.

In XNA 3.1/3.5 you could use GenerateMipMaps; however, in 4.0 I believe they pulled that.

Let me just give you some quotes from Shawn Hargreaves and others in a chain of comments that occurred on just this subject.

ShawnHargreaves
May 11, 2010 at 2:06 pm

Does this mean I cannot load my dds textures with dxt compression and mipmaps without using the Content Pipeline anymore?

The only built-in DDS loader we provide is via the Content Pipeline
texture importer, so you would have to write your own loader if you want
to read these files in some other way (DDS is an extremely simple
format so this would not be hard to do).
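
(To illustrate Shawn's point about the format being simple, here is a sketch of reading just the basics of a DDS header in C#, with field order per the public DDS spec; this only pulls the dimensions and mip count, the pixel format and surface data come after:)

// Minimal DDS header sketch: magic, then a 124-byte header whose first
// fields include height, width and mip count.
using (var r = new BinaryReader(File.OpenRead(path)))
{
    if (r.ReadUInt32() != 0x20534444) // "DDS " magic
        throw new InvalidDataException("Not a DDS file");
    r.ReadUInt32();                   // dwSize (124)
    r.ReadUInt32();                   // dwFlags
    int height = r.ReadInt32();       // dwHeight
    int width = r.ReadInt32();        // dwWidth
    r.ReadUInt32();                   // dwPitchOrLinearSize
    r.ReadUInt32();                   // dwDepth
    int mipCount = r.ReadInt32();     // dwMipMapCount
    // ...pixel format struct and surface data follow...
}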

Karl
May 13, 2010 at 10:31 am
Ugh, I'm frustrated with many of these new changes. If DDS is such a
simple format to write ourselves, then why wasn't it included in the new API?
You say it's too large and expensive to support, but then say it's
easy to do?
I don't enjoy hearing that the functionality for MY preferred
platform (Windows) will continue to drop because a gimp phone or mp3
player can't do it.

Michael Hansen
May 14, 2010 at 3:48 am
Does this mean that the TGA image format is not supported any more?

Jamie
May 14, 2010 at 7:54 am
“Sometimes taking a step forward requires a huge step back.”
Stepping back sometimes means losing features in favor of preparing
the product to move forward in new directions, and to be honest I like
the large-picture view of “compatible with Silverlight”, “DirectX 10+”,
and more platforms vs. the few features that were dropped.

OK, so basically the only XNA support for auto-generating mipmaps was dropped in 4.0, so the below is no longer valid:

https://blogs.msdn.microsoft.com/shawnhar/2009/09/14/texture-filtering-mipmaps/

I found this quote from Starnick, but you might find some actual code on the old Xbox Live Indie forums if you look there.

Starnick
Posted 28 March 2011 - 01:55 AM

It's missing because presumably GenerateMipMaps() used D3DX
methods, much like the Texture.FromFile() or Effect.CompileFromFile()
methods. Those are gone since they were Windows-only methods.

So yeah, it's going to be a bit tricky, since you're going to need to
use the Content Pipeline (which can cause problems if this isn't a tool,
but something you want to distribute to someone who doesn't have
Game Studio installed). The content pipeline is there for processing
data. XNA does all of its data processing "offline" because it has to
support multiple platforms: the D3DX routines that were removed, for
example, are not supported on Xbox. So the pipeline processes the data
and puts it in a generic form that can be easily read at run time
(also reducing loading times).

The bool parameter to
generate mipmaps won’t actually generate the mip data for you, it’s
just going to create the surface levels. So if you create a blank
texture, you’re going to have to fill the data for each mip level
yourself. If you just fill the mip data for the first level, then the
rest of the levels are just going to be empty (which is why you’re
seeing that gradual fade to black).

When you use the content
pipeline normally, you’re able to go from Texture2DContent to Texture2D
because XNA has a built in content reader that reads a Texture2D from a
XNB binary file, which the Texture2DContent has its data written into.
Each mip map is read from the binary file and is set onto the texture
object like I mentioned above. So you really don’t need to worry about
this step, since you can just get the mip map data from the
Texture2DContent’s MipMapChain property, which is a Collection of
BitMapContent objects.

After you call
Texture2DContent.GenerateMipMaps, just go through each BitMapContent for
how ever many mip map levels you have, get its data, and set it on the
Texture2D object.
I'd take a look through the MSDN documentation for BitMapContent, Texture2DContent, and MipMapChain and their members. Hope this helps!

http://www.gamedev.net/topic/607472-xna-40-limiting-mipmap-levels/
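
To make Starnick's point concrete: in XNA 4/MonoGame you can create a texture with the mipmap flag set and then fill every level yourself with the SetData overload that takes a level index. A rough sketch, reusing the hypothetical Downsample and GetMyPixels helpers (the former is sketched earlier in this thread); this is CPU-side, so too slow for per-frame use, but it shows which call fills which level:

// Create a texture whose full mip chain is allocated but empty,
// then fill each level ourselves.
var tex = new Texture2D(GraphicsDevice, 256, 256, true, SurfaceFormat.Color);
Color[] current = GetMyPixels(); // 256 * 256 source pixels (hypothetical)
int w = 256, h = 256;

for (int level = 0; level < tex.LevelCount; level++)
{
    tex.SetData(level, null, current, 0, w * h);
    if (w > 1 || h > 1)
    {
        current = Downsample(current, w, h, 2);
        w = Math.Max(1, w / 2);
        h = Math.Max(1, h / 2);
    }
}

If you're in a tool where the content pipeline is available, you can instead let Texture2DContent.GenerateMipmaps build the chain and copy each BitmapContent's bytes across with the same SetData(level, …) overload, as Starnick describes.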

You might need to get some input here from Tom or one of the other guys on this.
It may or may not have been added to MonoGame.

I have a manual CPU image pre-scaling algorithm I can post that will scale a color array to just about any size you want. Not to brag, but I think mine is better than a commercial one; it works off "bi-directional non-uniform interpolated averaged blending" (if that were a term, that's what it would be). Anyway, if you would like to use it, I'll post it.

How to get MonoGame to use mipmaps I don't know; I never tried or asked. The Texture2D.LevelCount property is still there, so maybe there is some support for preloaded mipmapped textures.

DirectX 11 has a function to generate mipmaps for a shader resource (within certain restrictions), and OpenGL (and ES) has glGenerateMipmap, but it is not something that MonoGame supports at this time. That’s not to say a GenerateMipmaps() function could not be added, and I don’t know what the performance impact would be to generate these mipmaps each frame. I would hazard a guess that these implementations for both DirectX 11 and OpenGL would be done on the CPU anyway.

One thing I did find though was that if the texture is created with the mipmap count set to zero, DirectX 11 will automatically generate mipmaps when UpdateSubresource() (as called in MonoGame’s Texture2D.SetData) is called on the top level of the mipmap chain for that texture.
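
For reference, the explicit version of that DirectX 11 path looks roughly like this in SharpDX (the backend MonoGame uses on Windows). A sketch only: MonoGame doesn't expose these objects publicly, so d3dDevice here is a raw SharpDX device, not something you can pull from a GraphicsDevice without hacking at the framework:

// Raw SharpDX sketch: the texture must be created with both bind flags
// and the GenerateMipMaps option flag, otherwise GenerateMips() is a no-op.
var desc = new SharpDX.Direct3D11.Texture2DDescription
{
    Width = 256,
    Height = 256,
    MipLevels = 0, // 0 = allocate the full mip chain
    ArraySize = 1,
    Format = SharpDX.DXGI.Format.R8G8B8A8_UNorm,
    SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
    Usage = SharpDX.Direct3D11.ResourceUsage.Default,
    BindFlags = SharpDX.Direct3D11.BindFlags.RenderTarget |
                SharpDX.Direct3D11.BindFlags.ShaderResource,
    OptionFlags = SharpDX.Direct3D11.ResourceOptionFlags.GenerateMipMaps,
};
using (var tex = new SharpDX.Direct3D11.Texture2D(d3dDevice, desc))
using (var srv = new SharpDX.Direct3D11.ShaderResourceView(d3dDevice, tex))
{
    // ...render or upload into mip 0 here...
    d3dDevice.ImmediateContext.GenerateMips(srv); // driver builds the chain
}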

Specifically, I want to implement min/max shadows, e.g. have min/max values for depth stored in a mipmap chain so I can instantly see whether a pixel is fully lit or fully in shadow without having to PCF-sample several times per pixel.

Can you tell me: if you can generate whole textures at run-time, how is it a problem to also generate mipmaps? Aren't they just low-res representations of the bigger texture?

I write to render targets at runtime on the GPU, so I write the pixels to mip0, aka the full resolution.

However, I have no control over mipmap generation. I don't want mipmaps that are just a downsampled half-res representation of the full texture; I want to create them manually because I need a custom downsampling method. So the default mipmap generation is useless for me.

However, I can't set texture.mip1 or mip2 to be my render target for the graphics card.

I could obviously use different textures with lower resolutions, but having it all stored in one main texture with corresponding mipmaps is much more elegant and probably saves a lot of bandwidth as well.

I thought mipmaps were just a concept you had to implement yourself; I never knew they were part of MonoGame. It would be nice to see a screenshot when it works, to get an idea of this topic in action…

MonoGame just provides access to the mipmap functionality built into DirectX, OpenGL and other APIs we support.

It appears it can be done in DirectX 11, but you might need to hack at the MonoGame code to get access to the bits you need.
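
To illustrate what "can be done in DirectX 11" means here: D3D11 lets you create a render target view onto a single mip slice of a texture, which is exactly the "set texture.mip1 as my render target" capability asked about above. A SharpDX sketch, again assuming the raw d3dDevice and the texture from the earlier SharpDX snippet (MonoGame doesn't expose any of this):

// Raw SharpDX sketch: a render target view bound to mip level 1 only.
// A pixel shader can then write a custom downsample into mip 1 while
// sampling mip 0 through a shader resource view.
var rtvDesc = new SharpDX.Direct3D11.RenderTargetViewDescription
{
    Format = SharpDX.DXGI.Format.R8G8B8A8_UNorm,
    Dimension = SharpDX.Direct3D11.RenderTargetViewDimension.Texture2D,
};
rtvDesc.Texture2D.MipSlice = 1;

using (var mip1Target =
    new SharpDX.Direct3D11.RenderTargetView(d3dDevice, tex, rtvDesc))
{
    d3dDevice.ImmediateContext.OutputMerger.SetRenderTargets(mip1Target);
    // ...draw a fullscreen quad with your custom downsample shader...
}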


Just to clarify, the concept of mipmapping is very simple, and you can cheat it in with MonoGame for simple things. I never used XNA's mipmapping, but I actually bypassed it with a simple shader, which is easily possible. Before you go that route, though, you should understand the problem that mipmapping solves.

Which is texture scaling at distance. That is a huge topic, but suffice it to say: if a model is being drawn at sufficient distance and your texture for that model is high-res, then without mipmapping it is not only literally pointless, it's a big performance penalty. If any sort of blending or alpha is on, or any setting is active where downscaling must occur, it can be really bad GPU-side; it's just a lot of potentially needless work, in certain cases extremely so.

For that reason you almost always pre-generate your mipmaps. In 3.1 it did that via DX (or even in OGL), as always, right when the texture gets loaded in, or you make them and send them in yourself with your model, etc… But it's never done at run time.
The GPU may do scaling, but that's interpolated rasterizing in DX/OGL; that is actually why we use mipmaps, to hack around the inefficiency that can result from downscaling in those APIs for technical reasons such as point clamping and memory management. This is not done automatically in either OGL or DX: it must be told to generate them and use them, or be given them and then told to use them. The actual generation of them is fairly slow, so dynamically changing or generated bitmaps are usually worked around by multi-sampling different textures via a pixel shader and a multitextured custom vertex structure

something like so

    public struct VertexMultitextured
    {
        public Vector3 Position;
        public Vector3 Normal;
        public Vector4 TextureCoordinate;
        public Vector4 TexWeights;

        // 14 floats * 4 bytes = 56 bytes per vertex.
        public static int SizeInBytes = (3 + 3 + 4 + 4) * sizeof(float);

        public static VertexElement[] VertexElements = new VertexElement[]
        {
            new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
            new VertexElement(sizeof(float) * 3, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
            new VertexElement(sizeof(float) * 6, VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 0),
            new VertexElement(sizeof(float) * 10, VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 1),
        };
    }


So anyway, what you do to hack around it is pretty simple in simple cases,
but theoretically the same in all cases.
You make a basic shader that accepts a texture.
You make scaled-down versions of your hi-res texture, typically halving its size; it could be quartered, or just a couple of textures. You can do that with any paint program.
So you load the model in, then a bunch of textures that look alike but are different sizes; maybe put the references to them in a list or something to make it easier on yourself.
Then at different distances from the camera you use smaller or larger textures for the model, which will be drawn to the screen so that you see the detail as needed or not (see the selection sketch below).
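
A minimal sketch of that selection step, assuming the textures are sorted largest-first in a list; "distancePerStep" is a hypothetical tuning value you'd pick for your scene:

// Pick one of the pre-scaled textures based on camera distance.
Texture2D PickLodTexture(List<Texture2D> lods, Vector3 modelPos,
                         Vector3 cameraPos, float distancePerStep)
{
    float distance = Vector3.Distance(modelPos, cameraPos);
    int index = (int)(distance / distancePerStep);
    return lods[Math.Min(index, lods.Count - 1)];
}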

For example, in some custom model you might have a custom draw call that accepts a texture.
In the case below, the method doesn't care which texture it is, the highest res or the lowest.

public void DrawBasicCube(BasicEffect beffect, Matrix world, Matrix view, Matrix projection, Texture2D t)
{
    beffect.EnableDefaultLighting();
    beffect.TextureEnabled = true;
    beffect.Texture = t;
    beffect.World = world;
    beffect.View = view;
    beffect.Projection = projection;
    BxEngien.Gdevice.SetVertexBuffer(The_VertexBuffer);
    foreach (EffectPass pass in beffect.CurrentTechnique.Passes)
    {
        pass.Apply();
        BxEngien.Gdevice.DrawPrimitives(PrimitiveType.TriangleList, 0, NUM_TRIANGLES);
    }
}

So then, now that you have your ingredients:
When your model's world position is some certain distance from the camera's world position, you switch to a smaller texture and send that in to your shader when you draw your model, or even to your SpriteBatch, which basically lets you do it without a shader. The model version works because internally GL and DX use floating-point UV coordinates in the range of 0 to 1.0f, so you just pass the texture when drawing in 3D.

When doing 2D using SpriteBatch, you make virtual coordinates that range from 0 to 1f, then multiply by the texture's width and height to get the positional coordinates.

for example…

Vector2 uv = new Vector2(.05f, .05f);
Vector2 uvend = new Vector2(.65f, .65f);
Vector2 wh = uvend - uv;

Texture2D t = smilyFaceTextureList[currentZoomLevelIndex];
// Rectangle takes ints, so the 0..1 virtual coordinates must be
// scaled by the texture size and cast.
Rectangle sourceRectangle = new Rectangle(
    (int)(uv.X * t.Width), (int)(uv.Y * t.Height),
    (int)(wh.X * t.Width), (int)(wh.Y * t.Height));

When you then call spriteBatch.Draw(…) with a full parameter list, any texture you pass in will use the same relative area, even ones that you made half the original size, etc.
Note that you have a destination rectangle on the screen as well as a source rectangle that specifies where in the texture to pull image data from.

In the case of 3D you don't have to worry about this, as UV is already defined in the 0 to 1f range, from the left to the right of the full texture used, regardless of its size.
However, in the case where you're loading in a model, you have the syntax of a prebuilt model to deal with, which is not the case with a simple SpriteBatch or a custom model you have made.

Though this would take some work to figure out how to set up for a preloaded model, the overall principle is the same in any case; you just have to take some time to get the syntax right, which is honestly annoying in that case.

For simple stuff it's pretty easy to just pass a smaller version of the high-res image to a shader before drawing. So if you're doing it with a preloaded model, that's what you're going to have to figure out how to do: tell it to switch textures at distance and give it smaller, premade versions of the textures to use.
If this is not yet implemented and you take the time to do it yourself, you should post it as an issue on GitHub along with any work you do on it, so everyone can benefit or chip in.

Nice explanation about mipmapping in general; maybe consider putting it in the MonoGame documentation.

As far as mipmaps for model textures go, I am pretty happy with the MonoGame auto-generation: simple and effective.

Not really relevant for min/max shadow map mip chains, but I guess that's just something that can't be done in a few minutes in MonoGame.

Thanks for the replies guys.
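
For what it's worth, the workaround that does fit MonoGame today is to skip real mip levels entirely and build the chain as a ladder of separate half-res render targets, each drawn from the previous with your own downsample shader. A sketch, where "minMaxEffect" is a hypothetical effect whose pixel shader outputs the min depth in R and the max depth in G of the four source texels (HalfVector2 render targets need the HiDef profile):

// Ladder of half-res targets standing in for a real mip chain.
// chain[0] holds the full-res depth; each later entry is built from
// the previous one with the hypothetical min/max downsample effect.
var chain = new List<RenderTarget2D>();
for (int size = 512; size >= 1; size /= 2)
    chain.Add(new RenderTarget2D(GraphicsDevice, size, size, false,
                                 SurfaceFormat.HalfVector2, DepthFormat.None));

// ...render scene depth into chain[0] first...

for (int i = 1; i < chain.Count; i++)
{
    GraphicsDevice.SetRenderTarget(chain[i]);
    spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
                      SamplerState.PointClamp, null, null, minMaxEffect);
    spriteBatch.Draw(chain[i - 1], chain[i].Bounds, Color.White);
    spriteBatch.End();
}
GraphicsDevice.SetRenderTarget(null);

Sampling it is then a matter of picking the right chain index in the shadow shader yourself, instead of letting the hardware pick a mip level.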

Is it possible to manually generate anisotropic mipmaps? Currently I manually set non-anisotropic ones, but when I switch the texture sampler from PointWrap to Anisotropic I can clearly see that my crafted ones are not applied; instead it seems like some auto-generated ones are used.

I cannot rely on those, and need handcrafted ones (since I'm using a double-indexed palette).

I'm not an expert on this matter, but when I follow the Wikipedia article on anisotropic filtering, it seems as if I need differently sized mipmaps for different viewing angles.

If I cannot set them manually, I probably need to write my own sampler that takes depth and angle information to point-sample from a handcrafted anisotropic mipmap texture. That's probably slower than the hardware solution, though.