Trying to set the value of another pixel in HLSL

Hi, I’ve run into an issue in HLSL: in my “game of life”-like simulation, a pixel needs to “jump” to another pixel, and I’m trying to implement this in an HLSL shader for performance.

This is a surprisingly hard thing to do. I tried writing to a static float4 array so I could read it back in a second pass, but the maximum array size in a shader is too small (around 60,000, and I need roughly 10 times that much space).

So I looked at RWByteAddressBuffer, but I’m not even sure how to implement that. It seems very hard to get a grip on how to do things in HLSL in general, since the only real documentation is Microsoft’s official reference, which is robotic and only really useful for people who already know what they’re doing.

You are thinking about it the wrong way around: instead of thinking about changing the destination pixel, think about changing the source pixel.

You can change the texture coordinate you read from, not the output pixel location.

That is the proper way of looking at it, but a game of life shader is very atypical.

In my case there are upwards of 32*32 surrounding pixels that may affect the color. I would have to do a for loop with some arithmetic and a conditional check on each of them, even though only one of them will actually affect the output.
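
To be concrete, I think the gather version would have to look something like this (just a rough sketch, using the SpriteTextureSampler / Width / Height names from my shader and assuming the jump offset is encoded in the red/green channels the way I do it):

// Rough sketch of the gather approach (not my real shader): each output pixel
// scans its 32x32 neighborhood and keeps the one neighbor that jumps onto it.
float4 GatherPS(float2 uv : TEXCOORD0) : COLOR
{
	float4 result = float4(0, 0, 0, 0);
	for (int y = -16; y < 16; y++)
	{
		for (int x = -16; x < 16; x++)
		{
			float4 neighbor = tex2D(SpriteTextureSampler, uv + float2(x / Width, y / Height));
			// jump offset encoded in the red/green channels, as in my shader
			int2 jump = int2(neighbor.r * 32 - 16, neighbor.g * 32 - 16);
			if (jump.x == -x && jump.y == -y)
				result = neighbor;
		}
	}
	return result;
}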

I’ll give that a shot if I can’t figure out a better way. But I’d really like to establish that there is no other way. This method would make my shader thousands of times slower.

It would be much more practical if the shader could write the destination to a buffer. And that may be close to possible: I could almost do it if the array size limit were higher.

I may be getting close with RWByteAddressBuffer; maybe all I need is a field like

RWByteAddressBuffer transfer : register(u0);

Now I just have to figure out how to use it. :thinking:
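
From what I can tell, the addresses are raw byte offsets, so storing one packed pixel per cell would look roughly like this (an untested sketch; width here is just whatever I pass in for the texture width):

RWByteAddressBuffer transfer : register(u0);

// Untested sketch: one packed 32-bit pixel per cell, addressed in bytes.
void StoreCell(uint x, uint y, uint width, uint packedColor)
{
	uint address = (y * width + x) * 4; // 4 bytes per packed pixel
	transfer.Store(address, packedColor);
}

uint LoadCell(uint x, uint y, uint width)
{
	return transfer.Load((y * width + x) * 4);
}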

https://nullprogram.com/webgl-game-of-life/

might give you some ideas


Interesting, they did go with your approach. Thanks for bringing that up.

I’m in too deep though; I’m making my version much more complicated, so I need some stranger tools to do it. It’s more like a compute shader.
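
What I really want is a scatter pass; something like this compute-style sketch is what I have in mind (assuming cs_5_0 and that I can actually bind the UAV from the app side, which I haven’t verified):

Texture2D<float4> Source : register(t0);
RWTexture2D<float4> Destination : register(u0);

// Sketch of the scatter idea: each cell writes itself to wherever it wants to jump.
// (Wrapping/clamping at the edges is omitted here.)
[numthreads(8, 8, 1)]
void ScatterCS(uint3 id : SV_DispatchThreadID)
{
	float4 cell = Source[id.xy];
	int2 jump = int2(cell.r * 32 - 16, cell.g * 32 - 16);
	Destination[int2(id.xy) + jump] = cell;
}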

With my very rough understanding of shaders, I believe I can create a writable buffer using DirectX ps_5_0 and

RWTexture2D<float> TransferTexture: register(u1);
sampler2D TransferTextureSampler = sampler_state
{
	Texture = <TransferTexture>;
};

And setting a value using TransferTexture[xy] = float4(0, 0, 0, 0). I can then read the buffer back in a second pass.
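
One thing I’m unsure about is the indexing; I believe the UAV wants integer pixel coordinates rather than UVs, so the write would probably be more like this sketch (with the texture declared as float4 so it can hold a full color):

float Width;  // texture dimensions, set from the app
float Height;

RWTexture2D<float4> TransferTexture : register(u1);

// Sketch: convert UVs to integer pixel coordinates before writing to the UAV.
void WriteCell(float2 uv, int2 offset, float4 value)
{
	int2 pixel = int2(uv.x * Width, uv.y * Height) + offset;
	TransferTexture[pixel] = value;
}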

I am caught on one problem though: I can’t initialize the buffer. I assume I need to define the width and height of the texture somewhere, but doing

effect.Parameters["TransferTexture"].SetValue(new RenderTarget2D(Parent.GraphicsDevice, Width, Height));

Throws a NullReferenceException because it cannot find the parameter. The solution may be interfacing with it through SharpDX.Utilities.AllocateMemory() as suggested below. Maybe I will have to do it all through DirectX/SharpDX.

I wonder if this is an issue with MonoGame? It seems like effect.Parameters should be able to access it.

Just for clarity, I’ll post my .fx code so far, with much commented out to focus on the issue. Maybe someone can point out a dumb mistake I’m making. That’s enough research for me tonight though.

#if OPENGL
#define SV_POSITION POSITION
#define VS_SHADERMODEL vs_3_0
#define PS_SHADERMODEL ps_3_0
#else
#define VS_SHADERMODEL vs_5_0
#define PS_SHADERMODEL ps_5_0
#endif

static const float FTB = 0.00390625; // pseudo incremental byte value of a float (1/256)

float Width;
float Height;

float MutationRate = 0.1;
float mutationratehalf = 0.05; // set with above

float Range = 8;
float rangehalf = 4; // set with above

float Decay = 0.004;

// RNG seed
float2 rng_seed = float2(0, 0);
float2 rng_inc = float2(0.001, 0.001);

const float2 t = float2(3, 3);

Texture2D SpriteTexture;
sampler2D SpriteTextureSampler = sampler_state
{
	Texture = <SpriteTexture>;
};

RWTexture2D<float> TransferTexture: register(u1);
sampler2D TransferTextureSampler = sampler_state
{
	Texture = <TransferTexture>;
};

float nrand(float2 uv)
{
	return frac(sin(dot(uv + rng_seed, float2(12.9898, 78.233))) * 43758.5453);
	rng_seed += rng_inc;
}

// https://forum.unity.com/threads/encode-float4-rgba-to-uint.695176/
uint PackFloat4(float4 f)
{
	uint packedR = uint(f.r * 255);
	uint packedG = uint(f.g * 255) << 8; // shift bits over 8 places
	uint packedB = uint(f.b * 255) << 16; // shift bits over 16 places
	uint packedA = uint(f.a * 255) << 24; // shift bits over 24 places
	return packedR + packedG + packedB + packedA; // Note: Alpha is the largest value
}

float4 UnPackUint(uint i)
{
	// divide by 255.0 (not integer 256) so this actually undoes PackFloat4
	return float4(
		(i % 256) / 255.0,
		(i / 256 % 256) / 255.0,
		(i / 65536 % 256) / 255.0,
		(i / 16777216 % 256) / 255.0);
}

float4 GetNeighbor(float2 texture_coordinates, int x, int y)
{
	return tex2D(SpriteTextureSampler, texture_coordinates + float2(x / Width, y / Height));
};

static float2 LoopCoordinates(float2 texture_coordinates)
{
	float2 target = texture_coordinates % float2(1, 1);
	return float2(1, 1) - (float2(1, 1) - target) % float2(1, 1);
}

float4 GetNeighborLoop(float2 texture_coordinates, int x, int y)
{
	float2 target = texture_coordinates + float2(x / Width, y / Height);
	target = target % float2(1, 1);
	target = float2(1, 1) - (float2(1, 1) - target) % float2(1, 1);
	return tex2D(SpriteTextureSampler, target);
};

//void SetRelativeBufferValue(float2 texture_coordinate, int x, int y, float4 value)
//{
//	TransferTexture.Store(
//		texture_coordinate.x * Width + texture_coordinate.y * Width * Height,
//		PackFloat4(value));
//}

struct VertexShaderOutput
{
	float4 Position : SV_POSITION;
	float4 Color : COLOR0;
	float2 TextureCoordinates : TEXCOORD0;
};

float4 StoreValuePS(VertexShaderOutput input) : COLOR
{
	//// move to new pixel
	//SetRelativeBufferValue(
	//	LoopCoordinates(input.TextureCoordinates),
	//	input.Color.r * 32 - 16,
	//	input.Color.g * 32 - 16,
	//	input.Color);
	
	float2 index = LoopCoordinates(input.TextureCoordinates);
	TransferTexture[
		input.TextureCoordinates + 
		float2((input.Color.r * 32 - 16) / Width, (input.Color.g * 32 - 16) / Height)]
	= float(3);
	
	return input.Color;
}

float4 ProcessValuePS(VertexShaderOutput input) : COLOR
{
	// mutate
	//float4 result = input.Color + (float4
	//(
	//nrand(input.TextureCoordinates),
	//nrand(input.TextureCoordinates),
	//0,
	//0
	//) % float4(MutationRate, MutationRate, 1, 1)) - float4(mutationratehalf, mutationratehalf, 0, 0);
	
	// decay
	//result = result - float4(Decay, Decay, 0, 0);
	
	return input.Color; //GetBufferValue(input.TextureCoordinates);
}

technique SpriteDrawing
{
	pass P0
	{
		PixelShader = compile PS_SHADERMODEL StoreValuePS();
	}
	//pass P1
	//{
	//	PixelShader = compile PS_SHADERMODEL ProcessValuePS();
	//}
}

It couldn’t hurt to open an issue on GitHub, I guess. It seems like MonoGame should expose RWTexture2Ds through Effect.Parameters.

Maybe my understanding of compute shaders is not good.

I have no idea if this will help you, but the shader I use for blur (grabbed from somewhere online) uses an array of weights and offsets to average neighbouring pixels into each output pixel, producing a blur effect. So it’s kind of similar in that it samples other pixels.

Like I said, no idea if this will help you, but you might be able to grab something from it to speed your shader up. My shader knowledge is very limited.

Shader Code

texture Texture;
float weights[15];
float offsets[15];

sampler2D TextureSampler = sampler_state
{
	texture = <Texture>;
	minfilter = point;
	magfilter = point;
	mipfilter = point;
};


float4 BlurHorizontal(float4 position : POSITION0, float4 color : COLOR0, float2 texCoord : TEXCOORD0) : COLOR0
{
	float4 output = float4(0, 0, 0, 1);

	for (int i = 0; i < 15; i++)
	{
		output += tex2D(TextureSampler, texCoord + float2(offsets[i], 0)) * weights[i];
	}

	return output;
}


float4 BlurVertical(float4 position : POSITION0, float4 color : COLOR0, float2 texCoord : TEXCOORD0) : COLOR0
{
	float4 output = float4(0, 0, 0, 1);

	for (int i = 0; i < 15; i++)
	{
		output += tex2D(TextureSampler, texCoord + float2(0, offsets[i])) * weights[i];
	}

	return output;
}


technique Blur
{
	pass Vertical
	{
		PixelShader = compile ps_4_0 BlurVertical();
	}

	pass Horizontal
	{
		PixelShader = compile ps_4_0 BlurHorizontal();
	}
}

C# code to set the weights and offsets

float[] weights = { 0.1061154f, 0.1028506f, 0.1028506f, 0.09364651f, 0.09364651f, 0.0801001f, 0.0801001f, 0.06436224f, 0.06436224f, 0.04858317f, 0.04858317f, 0.03445063f, 0.03445063f, 0.02294906f, 0.02294906f };
float[] offsets = { 0, 0.00125f, -0.00125f, 0.002916667f, -0.002916667f, 0.004583334f, -0.004583334f, 0.00625f, -0.00625f, 0.007916667f, -0.007916667f, 0.009583334f, -0.009583334f, 0.01125f, -0.01125f };
blur.Parameters["weights"].SetValue(weights);
blur.Parameters["offsets"].SetValue(offsets);

I may end up doing that. My RWTexture2D doesn’t show up in Parameters. The other method is setting GraphicsDevice.Textures[index], where index is the register.

aka

C#

Texture2D texture = new Texture2D(Parent.GraphicsDevice, Width, Height);
Color[] arr = new Color[Width * Height];
for (int i = 0; i < arr.Length; i++) arr[i] = Color.Red;
texture.SetData(arr);
Parent.GraphicsDevice.Textures[3] = texture;

HLSL

RWTexture2D<float> TransferTexture: register(u3);
sampler2D TransferTextureSampler : register(s3)
{
	Texture = <TransferTexture>;
};

(This should sample as red but doesn’t)
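
I haven’t tried it yet, but I wonder whether the RWTexture2D declaration itself is what breaks the sampling; maybe declaring the same slot as a plain Texture2D would behave differently, something like:

// Guess only: read the texture bound to slot 3 through a normal Texture2D instead of a UAV.
Texture2D TransferTexture : register(t3);
sampler2D TransferTextureSampler : register(s3) = sampler_state
{
	Texture = <TransferTexture>;
};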

I’ll try the latest dev build and then open a GitHub issue if it’s still present. From my point of view it’s less about doing it this specific way and more about making sure HLSL support is functional and not missing any features it should have. There’s a lot I would like to do in MonoGame with HLSL that’s going to get more complex.