How does loading textures at runtime work?

So, according to the "Why use the Content Pipeline" page:

> When we load the .png from storage at runtime, the texture is then loaded into memory and decompressed/unpacked from its compressed png format into raw bytes. A new texture is then created for that data because your device cannot decompress on the fly (yet) so it has to use that data as is. Creating the texture uses 262kb of graphics memory on the GPU.

So, my understanding of what happens when you use a function like `Texture2D.FromStream()` is:

  1. The image is loaded from the memory/file stream and uploaded to the GPU
  2. On the GPU, the texture is decompressed from the .png format into an unoptimized texture format (raw pixel color values)
  3. The GPU memory occupied by the compressed texture is deallocated
  4. The raw, unoptimized version of the texture is then used
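For reference, the runtime path I'm describing looks roughly like this (a minimal sketch; the file path is just a placeholder):

```csharp
// Open the raw .png and hand it to Texture2D.FromStream.
// The PNG is decoded here, and the raw pixel data is
// uploaded to the GPU as an uncompressed texture.
using (var stream = TitleContainer.OpenStream("Content/player.png"))
{
    Texture2D texture = Texture2D.FromStream(GraphicsDevice, stream);
}
```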

However, when you run your images through the Content Pipeline, what happens is:

  1. When building content, the texture is compressed to a GPU-optimized format (for example, DXT on desktop) that is more performant than raw image data
  2. When you call `Content.Load<Texture2D>()`, the compressed version of the texture is uploaded to the GPU
  3. The uploaded GPU-optimized texture is used as-is, saving the time that would be spent decompressing the texture and reducing memory usage on the GPU, since the format is one the GPU can sample directly
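And the pipeline path, for comparison (the asset name is just an example):

```csharp
// Loads the .xnb that the Content Pipeline produced at build time;
// if it was compressed to DXT, those compressed blocks are what
// gets uploaded to the GPU (as I understand it).
Texture2D texture = Content.Load<Texture2D>("player");
```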

So, my questions are:

  1. Am I understanding this correctly?
  2. When loading textures at runtime, does the old texture data from before the decompression actually get deallocated (step 3 above)?
  3. Would it be possible to use the Content Pipeline at runtime (with something like MonoGame.RuntimeBuilder, or even something like this library) to do something similar, but cross-platform?

Additional questions:

  1. Since the GPU-optimized texture formats used by MonoGame (DXT1, DXT3, DXT5, ETC1, PVRTC, ATC (Adreno texture compression)) are all lossy compression formats, the quality of the texture will be reduced if I use the more performant, compressed approach, right?

  2. So if I want my textures to keep the same quality as the originals, I can either use the Content Pipeline with the texture format set to "Color" (or "NoChange"), or just load the images at runtime.
    In that case, since I'm using a texture format that isn't optimized for the GPU anyway, would using the Content Pipeline be any more performant than loading textures at runtime?
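For context, this is the kind of .mgcb entry I mean when I talk about setting the format to "Color" (the file path is just an example):

```
#begin Content/player.png
/importer:TextureImporter
/processor:TextureProcessor
/processorParam:TextureFormat=Color
/build:Content/player.png
```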