Any way to do large shader inputs in OpenGL?

I have the following pixel shader for drawing a 2D polygon:

```hlsl
#if OPENGL
	#define VS_SHADERMODEL vs_3_0
	#define PS_SHADERMODEL ps_3_0
#else
	#define VS_SHADERMODEL vs_4_0
	#define PS_SHADERMODEL ps_4_0
#endif

Texture2D SpriteTexture;
float2 Points[20];
int numPoints;
float LineRadius;

sampler2D SpriteTextureSampler = sampler_state
{
	Texture = <SpriteTexture>;
};

struct VertexShaderOutput
{
	float4 Position : SV_POSITION;
	float4 Color : COLOR0;
	float2 TextureCoordinates : TEXCOORD0;
};

// IsLeft(): tests if a point is Left|On|Right of an infinite line.
// Input:  three points linePointA, linePointB, and p
// Return: >0 for p left of the line through linePointA and linePointB
//         =0 for p on the line
//         <0 for p right of the line
float IsLeft(float2 linePointA, float2 linePointB, float2 p)
{
	return ((linePointB.x - linePointA.x) * (p.y - linePointA.y) - (p.x - linePointA.x) * (linePointB.y - linePointA.y));
}

float SegmentDistanceSquared(float2 p, float2 a, float2 b)
{
	float h = clamp(dot(p - a, b - a) / dot(b - a, b - a), 0, 1);
	float2 distanceVector = p - lerp(a, b, h);
	return distanceVector.x * distanceVector.x + distanceVector.y * distanceVector.y;
}

float4 PolygonPS(VertexShaderOutput input) : COLOR
{
	float sdf = 2000000;
	int j = numPoints - 1;
	int wn = 0; // winding number
	for (int i = 0; i < numPoints; i++)
	{
		float dist = SegmentDistanceSquared(input.Position.xy, Points[i], Points[j]);
		sdf = min(sdf, dist);
		// winding number test
		bool cond1 = input.Position.y >= Points[i].y;
		bool cond2 = input.Position.y < Points[j].y;
		float leftVal = IsLeft(Points[j], Points[i], input.Position.xy);
		wn += cond1 && cond2 && leftVal > 0 ? 1 : 0; // up intersect
		wn -= !cond1 && !cond2 && leftVal < 0 ? 1 : 0; // down intersect
		j = i;
	}
	sdf = sqrt(sdf);
	if (wn != 0)
		sdf = -sdf;
	sdf -= LineRadius;
	float alpha = clamp(-sdf, 0, 1);
	float4 retval = tex2D(SpriteTextureSampler, input.TextureCoordinates) * input.Color;
	retval.a *= alpha;
	return retval;
}

technique Polygon
{
	pass P0
	{
		PixelShader = compile PS_SHADERMODEL PolygonPS();
	}
}
```
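For anyone trying to follow what the pixel shader computes, here is a CPU-side Python sketch of the same math (function and variable names are mine, for illustration): take the minimum squared distance from the pixel to any edge, take the square root, then flip the sign when the winding-number test says the pixel is inside.

```python
import math

def is_left(a, b, p):
    # >0 if p is left of the infinite line through a and b, <0 if right, 0 if on it
    return (b[0] - a[0]) * (p[1] - a[1]) - (p[0] - a[0]) * (b[1] - a[1])

def segment_distance_squared(p, a, b):
    # squared distance from p to the segment a-b
    abx, aby = b[0] - a[0], b[1] - a[1]
    apx, apy = p[0] - a[0], p[1] - a[1]
    h = max(0.0, min(1.0, (apx * abx + apy * aby) / (abx * abx + aby * aby)))
    dx, dy = p[0] - (a[0] + abx * h), p[1] - (a[1] + aby * h)
    return dx * dx + dy * dy

def polygon_sdf(p, points):
    # signed distance to the polygon: negative inside, positive outside
    sdf = float("inf")
    wn = 0  # winding number
    j = len(points) - 1
    for i in range(len(points)):
        sdf = min(sdf, segment_distance_squared(p, points[i], points[j]))
        cond1 = p[1] >= points[i][1]
        cond2 = p[1] < points[j][1]
        left = is_left(points[j], points[i], p)
        if cond1 and cond2 and left > 0:
            wn += 1  # upward crossing
        if not cond1 and not cond2 and left < 0:
            wn -= 1  # downward crossing
        j = i
    sdf = math.sqrt(sdf)
    return -sdf if wn != 0 else sdf
```

For a unit square, the center comes out at -0.5 and a point one unit to the right of the square at +1.0, matching the shader's inside/outside convention.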

As I understand it, arrays in shaders must have a size set, which will be the maximum size for the input. The game I am making involves modifying polygons and I would like to be able to draw polygons with a lot of vertices. With DirectX I was able to set the size of Points to 1024 with no problems, but with OpenGL increasing the size of Points significantly increases the compile time of the shader and causes compilation to fail completely if it is too big.

Is there any way for me to use a large array in OpenGL?

For now I will continue to use DirectX, but I would like the option of putting my game on non-Windows platforms in the future.

I guess it depends on the shader model? And MonoGame only supports up to shader model 3 with OpenGL.

I guess I need some completely different approach if I want this working in OpenGL.

If compile time is increasing with a larger array, it’s probably unrolling your for loop, as opposed to actually looping over the same lines of code. You could use the [loop] attribute to force the shader to actually loop, but that would slow it down and may not be the best solution for what you’re trying to accomplish. (docs)

Why do you draw polygons in the pixel shader? Can’t you use a vertex shader for that?

Thanks for the suggestion, but no luck. I think it’s worth noting that while the array is indeed accessed in the loop, the number of loop iterations is controlled by a separate variable numPoints and not directly by the size of the array.

When compiling for Windows, the shader compiles in around a second regardless of size of the array.
When compiling for DesktopGL, it takes a bit over a second with an array size of 20 and around 45 seconds with an array size of 80. With an array size of 100, compilation runs for nearly 3 minutes and then fails.

I wonder — you declared Points as a fixed array of 20, but you access it with "i", which is incremented up to numPoints, so as soon as numPoints is bigger than 20 you'd be reading out of bounds.

Maybe that's what trips up the compiler? The optimizer could recognize it and do some weird unrolling, since it can see that numPoints can never usefully be bigger than 20.

I started with making a shader that drew a circle and it made sense to me to use similar techniques to draw other shapes. The result looks nice.

Shaders aren’t designed to do one long running operation. They are designed to do millions of small operations in parallel.

You should consider decomposing this approach so you can distribute it across many instances of the shader executing.

You could do this by removing your loop entirely: store your points in a vertex buffer, and have each shader invocation process only the data for its own pixel instead of looping over every segment. This will also let you use more points. Then, when you execute the render, the GPU will parallelize the drawing across many shader instances.
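One way to realize that suggestion — my own sketch, not necessarily what the poster had in mind — is to build one quad per polygon edge on the CPU, expanded outward by LineRadius, and upload those corners as the vertex buffer; each pixel shader invocation then only measures distance to the one edge its quad belongs to. The geometry generation, in Python for illustration (in MonoGame this would fill a VertexBuffer in C#):

```python
import math

def segment_quads(points, radius):
    # For each polygon edge, build the 4 corners of a quad covering the
    # edge expanded by `radius` on both sides of it.
    quads = []
    j = len(points) - 1
    for i in range(len(points)):
        (ax, ay), (bx, by) = points[j], points[i]
        dx, dy = bx - ax, by - ay
        length = math.hypot(dx, dy)
        nx, ny = -dy / length, dx / length  # unit normal to the edge
        quads.append([
            (ax + nx * radius, ay + ny * radius),
            (ax - nx * radius, ay - ny * radius),
            (bx - nx * radius, by - ny * radius),
            (bx + nx * radius, by + ny * radius),
        ])
        j = i
    return quads
```

Note this only covers the stroked outline; a filled interior would still need conventionally triangulated geometry rather than the winding-number test.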