How to work in linear color space?

Intro
I’ve been reading up on linear color space (https://developer.nvidia.com/gpugems/gpugems3/part-iv-image-effects/chapter-24-importance-being-linear). And I think my engine could be improved by implementing it. However, I’m unsure how I should treat data throughout my application to make this work.

A possible approach
So first things first: I can add a texture to my project, and that texture will contain 'gamma' (sRGB-encoded) values. As far as I can tell there's no way to tell the content pipeline that I want to read linear values from this texture; the content pipeline tool has no option to pick an sRGB format. So whenever I load the texture it will have SurfaceFormat.Color, and reading from it gives me the gamma values as stored.
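For example, a quick check of what the pipeline produces (the asset name is just a placeholder):

```csharp
// Inside LoadContent: inspect the format the content pipeline gave the texture.
Texture2D albedo = Content.Load<Texture2D>("albedo");
System.Diagnostics.Debug.WriteLine(albedo.Format);   // prints "Color", i.e. gamma-encoded 8-bit RGBA
```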

The next step is the render targets. I'm using a deferred rendering architecture, so I guess I can write all the diffuse (texture/color) data to a render target with SurfaceFormat.Bgra32SRgb. Do I need to manually convert the data I read from the texture to linear space before writing it, or does the GPU take care of that for me?
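Something like this is what I'm picturing for the diffuse target (width, height and the depth format are placeholders, and I'm assuming SurfaceFormat.Bgra32SRgb behaves like the other sRGB formats, i.e. stored gamma-encoded but sampled/written as linear):

```csharp
// Diffuse render target with an sRGB surface format (sketch, not tested).
var diffuseTarget = new RenderTarget2D(
    GraphicsDevice,
    width,
    height,
    false,                        // no mipmaps
    SurfaceFormat.Bgra32SRgb,     // stored gamma-encoded, read/written as linear by the GPU
    DepthFormat.Depth24Stencil8);
```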

Now for the lighting data: here I blend several light sources, and I'll need more than 8 bits per channel to properly color correct later. There's no sRGB surface format with more than 8 bits per channel, so I guess I just keep this data linear and don't have to do anything special here.
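For the light buffer I'm thinking of something like this (HalfVector4 is just one candidate for a format with more than 8 bits per channel; width and height are placeholders):

```csharp
// Light accumulation target: 16-bit float per channel, values simply stay linear
// and can go above 1.0 without clamping or banding (sketch, not tested).
var lightTarget = new RenderTarget2D(
    GraphicsDevice, width, height, false,
    SurfaceFormat.HalfVector4, DepthFormat.None);
```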

Then the next step is blending the light data with the diffuse data. I guess I should read the light data, convert it to linear values, and combine it with the already linear diffuse values. The surface format of this render target could also be one of the sRGB ones, I guess, but to be honest I would still prefer a format with more than 8 bits per channel here to allow for higher quality post processing.

The final step is presenting this data to the screen. If I feed the sprite batch a render target with an sRGB format, will the values be properly gamma corrected when rendering to the screen, or should I convert them back to gamma values in a shader myself?

Questions
So as you can tell I have a lot of questions. When should I use the sRGB surface formats? When do I have to manually convert color data to/from linear, and when is it done for me?

And another question: what is the easiest way to test that everything is working as it should?


I think what would be helpful is to have known content (linear data) in the texture(s), so you start with useful test data. If we could get a sample from somewhere which we know does everything right, we could then compare the result to it. Because there are many subtle places where this can go wrong, I think that is the way to go here.

If I recall correctly, I used a render target with a greater bit depth for the lightmap in a deferred renderer. I transformed the data to linear space, did the calculations there, and at the end did some color correction. I'd have to look up the exact steps, and I'm not sure looking it up in my code would help, because I really didn't know then, and still don't know, whether I was doing everything right.
What should be right is to do all light calculations in linear space and then, as the final step at the end of the combine pixel shader, transform everything back to gamma space, because the hardware will presumably assume gamma space and try to correct it on its own (just by multiplying with some value?).

You could have a look at Stride; it's almost the same API as MonoGame but has a linear color space render pipeline by default, and it already handles texture import correctly. It also has an optional editor called Game Studio, which is great for asset management: https://stride3d.net/

Hmm, both of you have given me some good hints, but I still don't really get the sRGB surface formats. How do they come into play?

It's also really annoying that I haven't found a test yet where I can 'show' whether an implementation actually works correctly.

I guess this video gives inspiration: https://www.youtube.com/watch?v=LKnqECcg6Gw

That video helped a lot actually. I've now created a simple conversion that blurs an image in linear space, and the effect is very noticeable. I'll use this test program to investigate further. For example, right now this is all done by hand in the shader; I wonder what happens if I use the right render targets instead. I'm also interested in how mipmaps etc. behave.
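The core of it boils down to something like this tiny console sketch (gamma 2.2 as the usual approximation of the sRGB curve; averaging a black and a white pixel stands in for what the blur does):

```csharp
using System;

static class GammaDemo
{
    static float ToLinear(float g) => MathF.Pow(g, 2.2f);
    static float ToGamma(float l)  => MathF.Pow(l, 1f / 2.2f);

    static void Main()
    {
        // Average a black (0.0) and a white (1.0) pixel, as a box blur would:
        float gammaSpaceAverage  = (0.0f + 1.0f) / 2f;                              // 0.5  -> looks too dark
        float linearSpaceAverage = ToGamma((ToLinear(0.0f) + ToLinear(1.0f)) / 2f); // ~0.73 -> looks right
        Console.WriteLine($"{gammaSpaceAverage} vs {linearSpaceAverage}");
    }
}
```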


Yeah, I guess my hints are not very helpful because I don't have a good understanding of the topic myself. I just did something, and it was a long time ago. Even back when I worked on it, it was basically just playing around with code from somewhere I don't remember now; I implemented it the way I thought it could work.

Just watched the video. And I read about the topic on the links posted above.

The main point seems to be that you have to be working in the right space. I assumed the diffuse textures of the models I used were not linear, and I guess that assumption was right. So to work in the right space, the values had to be corrected manually in the shader: after sampling the diffuse render target I applied the gamma correction with the pow function, using an (again assumed) gamma value of 2.2.
To allow a large range of brightness I used a light render target with greater bit depth.
In the final compose step a color mapping can be done, which is just applying some function that remaps the calculated values. If I remember correctly, this step takes place after sampling the different render targets, calculating the final color and so on.
The last step of the pixel shader is then to get the data back to gamma space: basically taking the square root, or what I did, using the same gamma value as before but with one over gamma as the exponent of the pow function. I think this last conversion back to gamma space just prepares the data for what comes after: output through the hardware, which assumes gamma-space values.
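Written out as plain C# instead of the actual pixel shader (gamma 2.2 and the simple x / (1 + x) mapping curve are just the assumptions I made, and the names are made up), the compose step was roughly:

```csharp
using System;
using System.Numerics;

static class CombinePass
{
    // Assumed gamma of 2.2 as an approximation of the sRGB curve.
    static Vector3 GammaToLinear(Vector3 c) =>
        new Vector3(MathF.Pow(c.X, 2.2f), MathF.Pow(c.Y, 2.2f), MathF.Pow(c.Z, 2.2f));

    static Vector3 LinearToGamma(Vector3 c) =>
        new Vector3(MathF.Pow(c.X, 1f / 2.2f), MathF.Pow(c.Y, 1f / 2.2f), MathF.Pow(c.Z, 1f / 2.2f));

    // One pixel of the compose step: the diffuse sample is gamma-encoded,
    // the light sample comes from the higher bit depth target and is already linear.
    static Vector3 Combine(Vector3 sampledDiffuse, Vector3 sampledLight)
    {
        Vector3 diffuseLinear = GammaToLinear(sampledDiffuse);
        Vector3 lit           = diffuseLinear * sampledLight;   // lighting in linear space
        Vector3 toneMapped    = lit / (Vector3.One + lit);      // some mapping function
        return LinearToGamma(toneMapped);                       // back to gamma space for output
    }

    static void Main() =>
        Console.WriteLine(Combine(new Vector3(0.5f), new Vector3(2.0f)));
}
```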

Interesting question about how mipmaps are handled 🙂

Kosmonaut used linear space in his engine, but it was "manual linear space", not using sRGB surfaces. He rendered the diffuse RT "normally" and converted it to linear space ( pow(inputcolor, 2.2) ) in the next shader.

When all computations were done, he "undid" the linear space ( pow(outputcolor, 1/2.2) ) before dumping the image back to the screen.

Sorry for not being more accurate; I haven't touched the 3D engine part in a year or so, and even then I was below average in this kind of stuff. The point is that you can still use linear color space without even creating an sRGB surface (at the cost of extra GPU work, of course).

EDIT: I found his document. There's an overview which shows where he (manually) converts to linear space and vice versa. https://kosmonautblog.wordpress.com/2017/03/26/designing-a-linear-hdr-pipeline/

Manual conversion is what I have now, but it has some downsides. Besides the speed difference, operations you have no control over (mipmapping, filtering) will not take the gamma curve into account.

However, I think I've figured it out. Looking at this, it seems like everything that touches non-linear images should be marked as sRGB (https://docs.microsoft.com/en-us/windows/win32/direct3ddxgi/converting-data-color-space).

If in MonoGame I set this.Graphics.PreferredBackBufferFormat and all textures and render targets to SurfaceFormat.ColorSRgb, the example looks correct without doing any corrections in the shader. 🙂
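In code it's basically this (class and field names are placeholders, and the resolution is hard-coded just for the example):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class LinearGame : Game
{
    private readonly GraphicsDeviceManager graphics;
    private RenderTarget2D diffuseTarget;

    public LinearGame()
    {
        graphics = new GraphicsDeviceManager(this)
        {
            // sRGB back buffer, so no manual gamma correction is needed in the shaders.
            PreferredBackBufferFormat = SurfaceFormat.ColorSRgb
        };
    }

    protected override void LoadContent()
    {
        // Render targets get the sRGB format too, so shader reads/writes stay linear.
        diffuseTarget = new RenderTarget2D(
            GraphicsDevice, 1280, 720, false,
            SurfaceFormat.ColorSRgb, DepthFormat.Depth24Stencil8);

        // Runtime-created textures as well; sampling them then returns linear values.
        var white = new Texture2D(GraphicsDevice, 1, 1, false, SurfaceFormat.ColorSRgb);
        white.SetData(new[] { Color.White });
    }
}
```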

Check out https://github.com/roy-t/MiniRTS/tree/master/vNext/Prototypes/LinearWorkFlow

The next question is how I can tell the model processor to process the textures it references so that they get an sRGB format instead of a non-sRGB one. I'm not sure how to do that, even with a custom model processor.

I agree. One thing that I learned (I'm getting some of the memories back) is that once you convert the texture to linear space, you have to work with floating point textures in order to avoid what you're describing.
Since I'm not using mipmapping and most (if not all) of my texture sampling is point sampling, I didn't run into this problem (although it wouldn't exist with floating point texture sources), except for FXAA. What I do is "undo" the linear space before FXAA and then apply it.
But yes, it would be better and faster to do it the right way; I just couldn't afford to spend more time on the 3D engine. Some day I'll have to revisit it.