MGCB: DXT compression sometimes throws NullPointer

Hi, I’m currently porting our XNA game to MonoGame. So far I have been able to solve every issue that came up, but I can’t work out where this one comes from and have only found a workaround.

So here is the exception:

Could not convert texture. System.NullReferenceException: Object reference not set to an instance of an object.
at Nvidia.TextureTools.Compressor.nvttCompress(IntPtr compressor, IntPtr inputOptions, IntPtr compressionOptions, IntPtr outputOptions)
at Microsoft.Xna.Framework.Content.Pipeline.Graphics.DxtBitmapContent.TryCopyFrom(BitmapContent sourceBitmap, Rectangle sourceRegion, Rectangle destinationRegion)
at Microsoft.Xna.Framework.Content.Pipeline.Graphics.BitmapContent.Copy(BitmapContent sourceBitmap, Rectangle sourceRegion, BitmapContent destinationBitmap, Rectangle destinationRegion)
at Microsoft.Xna.Framework.Content.Pipeline.Graphics.BitmapContent.Copy(BitmapContent sourceBitmap, BitmapContent destinationBitmap)
at Microsoft.Xna.Framework.Content.Pipeline.Graphics.GraphicsUtil.Compress(Type targetType, TextureContent content, Boolean generateMipMaps)
at Microsoft.Xna.Framework.Content.Pipeline.Processors.TextureProcessor.Process(TextureContent input, ContentProcessorContext context)

The strange thing about this exception is that it doesn’t come up predictably. I have around 900 content elements. On the first build the exception comes up for about 10% of them. When I build again (not rebuild), some of the items that previously crashed suddenly build fine.

When I replace the MonoGame.Framework.Content.Pipeline.dll used by MGCB with a debug build, the exception doesn’t come up.

Maybe someone here has a deeper understanding of what is happening and a good guess at what might cause the NullReferenceException.

Interesting… sounds like a threading issue of some sort. But we don’t actually thread any of the content loading yet, so this is weird.

@KonajuGames any ideas?

We see the exact same problem. Note that this problem is not new; MonoGame has always had it when using NVTT. When building large content projects with a lot of DXT-compressed textures, the NVTT compressor fails randomly. It is basically impossible to build large content projects in one go. The build process has to be repeated until all DXT textures are processed and NVTT no longer fails.
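The repeat-until-it-builds workaround can be scripted. Below is a minimal sketch of a retry wrapper; the `mgcb /@:Content.mgcb` invocation and response-file name in the comment are assumptions for illustration, not part of this thread. Since MGCB skips items that already built successfully, each pass only reprocesses the textures NVTT crashed on last time.

```shell
# Hypothetical workaround: rerun a build command until it exits 0 or we
# give up. With MGCB you would call something like:
#   retry_until_ok 5 mgcb /@:Content.mgcb
# (Content.mgcb is a placeholder response-file name.)
retry_until_ok() {
    max=$1; shift
    attempt=1
    until "$@"; do
        if [ "$attempt" -ge "$max" ]; then
            echo "Build still failing after $max attempts" >&2
            return 1
        fi
        attempt=$((attempt + 1))
        echo "Build failed, retrying (attempt $attempt of $max)..." >&2
    done
    return 0
}

# Demonstration with a stand-in "flaky" command that fails twice and then
# succeeds, mimicking the random NVTT failures.
count_file=$(mktemp)
echo 0 > "$count_file"
flaky() {
    n=$(($(cat "$count_file") + 1))
    echo "$n" > "$count_file"
    [ "$n" -ge 3 ]
}
retry_until_ok 5 flaky && echo "build succeeded on attempt $(cat "$count_file")"
rm -f "$count_file"
```

This doesn’t fix the underlying crash, of course; it just automates the manual rebuild loop described above.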

I haven’t found a way to reproduce the problem deterministically, which makes it hard to debug. (Perhaps some uninitialized memory causes the random errors?)

This problem went away when PVRTexLib was used instead of NVTT. But now that NVTT is back, the problem has resurfaced.

I was totally not aware of that.

@KonajuGames ?

I’ll set up a test case, see if I can repro, and investigate.

Just want to chime in and say that I’m encountering this problem too - anyone made any further progress on it?

EDIT: actually the call stack I get is the one in this thread: Upgrading to MonoGame 3.5.1 (from - Processor 'TextureProcessor' had an unexpected failure

I found the nvtt project here:
including the managed wrapper here:

The project looks active, with the last commit two days ago.

We have encountered that error as well, but since a complete rebuild of the project practically never occurs, it hasn’t been a big problem.
I believe building the subfolders separately always worked, even though they contain enough textures that the error should have happened.
Maybe that helps.
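The per-subfolder workaround described above could be scripted roughly like this. The `Content/` layout and the build command are placeholders (assumptions, not from this thread); substitute your own MGCB invocation in `build_one`.

```shell
# Hypothetical sketch of the "build subfolders separately" workaround:
# invoke the content build once per top-level subfolder instead of once
# for the whole project, so an NVTT failure only forces a small rebuild.
build_one() {
    # Placeholder; a real call might be something like:
    #   mgcb /outputDir:bin /intermediateDir:obj /build:"$1"
    echo "building $1"
}

for dir in Content/*/; do
    [ -d "$dir" ] || continue   # skip the literal glob if Content/ is empty
    build_one "$dir"
done
```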

A fix was merged today. You can go grab the latest Development Builds from

You earned a medal in my book. Nice find.
So here’s my test protocol:

Before the update I used to get these:

not many, but just enough to make my day miserable :wink:
Granted, the project has some files to compile. But that’s about the usual outcome:

Here’s a second try, just for reference:

Now after the update:
First, cleaning the project takes forever now. Before, it took about 10 seconds; now it takes over 5 minutes. It seemed faster with filter-output mode disabled, but still slow compared to the stable build.
Building is much slower now as well; the output shows the ‘images only’ build of the same project as above. Halfway through the clean I turned filter-output off.
It worked.

Memory is OK. CPU is low (~25%) but OK. Everything’s fine, it just takes forever (2 minutes before versus 18 now).
I swapped this build in for the stable one, so I can’t tell if it’s your changes, though.
But maybe it’s not important since you won’t do such a clean-sweep of such a big library very often…

It wasn’t slow in pipeline mode (out of VS), only in the standalone tool.