So, according to the "Why use the Content Pipeline" page:

> When we load the .png from storage at runtime, the texture is then loaded into memory and decompressed/unpacked from its compressed png format into raw bytes. A new texture is then created for that data because your device cannot decompress on the fly (yet) so it has to use that data as is. Creating the texture uses 262kb of graphics memory on the GPU.
So, my understanding of what happens when you use a function like Texture2D.FromStream() (sketched in the snippet after this list) is:
- The image is loaded from the memory/file stream and uploaded to the GPU
- On the GPU, the texture is decompressed from a .png to an unoptimized texture format (raw pixel color values)
- The GPU memory occupied by the first texture is deallocated
- The raw, unoptimized version of the texture is used
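
For context, the kind of runtime loading I mean is roughly this (a minimal sketch; the GraphicsDevice comes from a normal MonoGame Game, and "player.png" is just a hypothetical file shipped next to the executable):

```csharp
using System.IO;
using Microsoft.Xna.Framework.Graphics;

public static class RuntimeTextureLoader
{
    // Minimal sketch: load a .png at runtime with Texture2D.FromStream.
    // "path" would be something like "player.png", a file that was never
    // processed by the Content Pipeline.
    public static Texture2D Load(GraphicsDevice graphicsDevice, string path)
    {
        using (FileStream stream = File.OpenRead(path))
        {
            // FromStream decodes the png at load time and creates a texture
            // holding the raw (uncompressed) pixel data.
            return Texture2D.FromStream(graphicsDevice, stream);
        }
    }
}
```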
However, when you run your images through the Content Pipeline, what happens (sketched in the second snippet below) is:
- When building content, the texture is compressed to a GPU-optimised format (for example, DXT on desktop) that is more performant than raw image data
- When you call Content.Load(), the compressed version of the texture is uploaded to the GPU
- The uploaded GPU-optimised data is used directly, saving both the time that would be spent decompressing the texture and GPU memory, since the format is one the GPU can sample as-is
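
For comparison, the pipeline path at runtime would look roughly like this (again a minimal sketch; "player" is a hypothetical asset name that would have been built by the pipeline into Content/player.xnb):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class MyGame : Game
{
    private GraphicsDeviceManager _graphics;
    private Texture2D _playerTexture;

    public MyGame()
    {
        // Standard MonoGame setup; "Content" is the default root directory
        // that the pipeline builds .xnb files into.
        _graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
    }

    protected override void LoadContent()
    {
        // Load<T> reads the pre-built .xnb (already in a GPU-optimised
        // format such as DXT on desktop) and uploads that data as-is.
        _playerTexture = Content.Load<Texture2D>("player");
    }
}
```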
So, my questions are:
- Am I understanding this correctly?
- When loading textures at runtime, does the old texture data from before the decompression get deallocated (step 3 above)?
- Would it be possible to use the content pipeline at runtime (with something like MonoGame.RuntimeBuilder, or even something like this library) to do something similar, but cross-platform?