Drawing pixels using HLSL

Hi. I want to create ‘Conway’s Game of Life’ in MonoGame. How can I draw pixels using HLSL? The size of the game grid is 18000x14000. HLSL cannot work with such huge arrays. I need HLSL to make the pixels glow.

Hmmm.
Sounds like you’d like to draw simple pixels using, for example, the 2D drawing methods of MonoGame.Extended, and then apply a post-processing filter (bloom) to the camera’s render target afterwards.
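
If it helps, here is a minimal sketch of that idea in plain MonoGame (no Extended helpers): a 1x1 white texture drawn once per live cell into a render target that a bloom pass could then sample. The names `cells`, `target`, `width`, and `height` are placeholders, not real API, and as the next reply points out, this does not scale to huge grids.

    // Inside your Game class; `target` is a RenderTarget2D created at startup
    // and `cells` is a bool[width, height] holding the grid (both placeholders).
    var pixel = new Texture2D(GraphicsDevice, 1, 1);
    pixel.SetData(new[] { Color.White });

    GraphicsDevice.SetRenderTarget(target);
    GraphicsDevice.Clear(Color.Black);
    spriteBatch.Begin();
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
            if (cells[x, y])
                spriteBatch.Draw(pixel, new Vector2(x, y), Color.White);
    spriteBatch.End();
    GraphicsDevice.SetRenderTarget(null); // back to the backbuffer; bloom would sample `target`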

2D drawing methods are a bad choice when drawing a lot of pixels. But thanks for the MonoGame.Extended pointer. I’ll see what’s there.

Here are two sources for Game of Life.


http://blogs.msdn.com/b/shawnhar/archive/2011/12/29/is-spritebatch-turing-complete.aspx

Make sure you switch to the HiDef profile and you’ll get render targets up to 16384x16384 (on a DX11.0-capable card).
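
For reference, that’s a one-liner in your Game constructor:

    // Request the HiDef profile so textures/render targets up to 16384x16384
    // are allowed (DX11-class hardware; Reach caps you at 2048x2048).
    graphics = new GraphicsDeviceManager(this);
    graphics.GraphicsProfile = GraphicsProfile.HiDef;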

=D
Yes, it seems I underestimated the desired size of your grid.
I intuitively thought you’d use a camera and never draw the whole grid, since you’d lose grid cells anyway because of your monitor’s resolution.
So you’d have to do some downsampling of your own before drawing, I guess.
Any other ideas? Pixel shader, anyone?

Some sites I came across on my Google tour through GoL-HLSL land:
a shader for the Irrlicht engine
one on XNA, but only 800x600
another one

As I initially thought, you’d have to do tiling once you exceed the 16384x16384 HiDef limit nkast pointed out. You’d also have to think carefully about drawing, since it’s one thing to calculate the cells using shaders and quite another to draw the whole shebang: use a camera, zoom and pan, and only draw the visible parts of your calculation textures, then possibly apply the bloom filter afterwards. A rough sketch of picking the visible tiles is below.
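
A sketch of that per-tile culling, assuming 2048x2048 tiles and a hypothetical camera rectangle in grid coordinates (`GetCameraViewInGridCoords`, `tilesAcross`, `tilesDown`, and `DrawTile` are all made up for illustration):

    const int tileSize = 2048;
    Rectangle cameraView = GetCameraViewInGridCoords(); // hypothetical helper

    // Clamp the visible tile range to the grid, then draw only those tiles.
    int firstX = Math.Max(0, cameraView.Left / tileSize);
    int firstY = Math.Max(0, cameraView.Top / tileSize);
    int lastX = Math.Min(tilesAcross - 1, (cameraView.Right - 1) / tileSize);
    int lastY = Math.Min(tilesDown - 1, (cameraView.Bottom - 1) / tileSize);

    for (int ty = firstY; ty <= lastY; ty++)
        for (int tx = firstX; tx <= lastX; tx++)
            DrawTile(tx, ty); // hypothetical: draws that tile's texture through the camera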

Can you be more specific…

Is it OK to only view some of this area?
I’m guessing you will need to zoom in?
How fast does this have to run?
This is no easy task if speed is involved or if it must run on the GPU.

Further:
That many pixels will not show up on your screen individually.
They would have to be blended together to view them all at once.

If time is not a factor, I would generate this into a bit array each round and manually turn the portion of the array that will be displayed on screen into an image.
That may involve scaling those dots to fit the screen yourself.
Draw the glowing dots white and just pass a changing blend color to the full-image SpriteBatch draw call, as in the sketch below.
Do that continuously until the next round is ready.
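
Something along these lines, with `cells`, the view window (`viewX/viewY/viewW/viewH`), `frameTexture` (a Texture2D sized to the window), and `glowTint` all as placeholders:

    // Copy the visible window of the cell array into a Color[] once per round.
    Color[] buffer = new Color[viewW * viewH];
    for (int y = 0; y < viewH; y++)
        for (int x = 0; x < viewW; x++)
            buffer[y * viewW + x] = cells[viewX + x, viewY + y] ? Color.White : Color.Black;
    frameTexture.SetData(buffer);

    // Draw it scaled to the screen, tinted by the changing blend color.
    spriteBatch.Begin(samplerState: SamplerState.PointClamp);
    spriteBatch.Draw(frameTexture, screenRectangle, glowTint);
    spriteBatch.End();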

I think you may have to prototype this on the CPU and break up each round’s calculations across multiple updates to process an array that large.
That is roughly a quarter of a billion pixels, or in this case cells, that must be processed:
roughly 9x7 textures of 2048x2048 worth of data to be calculated and regenerated each round,
250 million bits, bytes, or ints, depending on how you store them.
Not to mention that the rules depend on neighboring cells (see the sketch of a single CPU step below).
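
For reference, one generation on the CPU looks roughly like this (plain bool[,] for clarity; a grid this size would want bit-packing and chunked updates):

    static bool[,] Step(bool[,] src)
    {
        int w = src.GetLength(0), h = src.GetLength(1);
        var dst = new bool[w, h];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                int n = 0; // count the 8 neighbors, treating out-of-bounds as dead
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                    {
                        if (dx == 0 && dy == 0) continue;
                        int nx = x + dx, ny = y + dy;
                        if (nx >= 0 && nx < w && ny >= 0 && ny < h && src[nx, ny]) n++;
                    }
                // Conway's rules: a live cell survives with 2 or 3 neighbors,
                // a dead cell becomes alive with exactly 3.
                dst[x, y] = src[x, y] ? (n == 2 || n == 3) : (n == 3);
            }
        return dst;
    }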

Maybe someone has an idea how to do this on the GPU using multiple render targets?
With maybe an orthographic camera.

I’m thinking… what if I write the update code in C++ as a library? Would that speed up the update? Or can the same thing be done in C# using unsafe code?

Build a texture and send it to the shader… if you only view part of the texture, then only update the visible part. If one texture can’t cover the whole grid, make several and use them as tiles.

You mean Texture2D.SetData?

No, Texture2D.SetData is CPU-side.
You want the GPU here, because it’s faster and smarter.
Bind your render target using:
GraphicsDevice.SetRenderTarget(yourRenderTarget);
And then apply a custom HLSL shader (effect).
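
Concretely, the usual ping-pong setup looks something like this; `lifeEffect` is a hypothetical Effect compiled from an HLSL file (a sketch of the shader itself is further down):

    // Two render targets of equal size, created once at startup:
    // RenderTarget2D current, next;  Effect lifeEffect;

    GraphicsDevice.SetRenderTarget(next);
    spriteBatch.Begin(samplerState: SamplerState.PointClamp, effect: lifeEffect);
    spriteBatch.Draw(current, Vector2.Zero, Color.White); // shader computes the next generation
    spriteBatch.End();
    GraphicsDevice.SetRenderTarget(null);

    // Swap so this generation becomes next frame's input.
    var tmp = current; current = next; next = tmp;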

Here, this should be a good start for learning GPGPU:
http://www.xnainfo.com/content.php?content=21
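
And a hedged sketch of what the HLSL step shader itself could look like, assuming SpriteBatch binds the previous-generation texture to sampler s0 and `TexelSize` is set from C# to float2(1.0/width, 1.0/height):

    sampler2D Prev : register(s0); // SpriteBatch binds the drawn texture here
    float2 TexelSize;              // set from C#: 1.0/width, 1.0/height

    float4 PS(float2 uv : TEXCOORD0) : COLOR0
    {
        // Count live cells in the 3x3 block around this texel.
        int n = 0;
        [unroll] for (int dy = -1; dy <= 1; dy++)
            [unroll] for (int dx = -1; dx <= 1; dx++)
                n += tex2D(Prev, uv + float2(dx, dy) * TexelSize).r > 0.5 ? 1 : 0;

        bool alive = tex2D(Prev, uv).r > 0.5;
        n -= alive ? 1 : 0; // the loop counted the center cell too; drop it

        // Conway's rules on the 8-neighbor count.
        bool nextAlive = alive ? (n == 2 || n == 3) : (n == 3);
        return nextAlive ? float4(1, 1, 1, 1) : float4(0, 0, 0, 1);
    }

    // Shader model target may differ per platform/MonoGame version.
    technique Life { pass P0 { PixelShader = compile ps_4_0_level_9_1 PS(); } }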

Can’t you just apply HLSL to a limited number of objects (the objects being the visible ones)?

He has the data on the CPU; from his post it looks like he wants to use C++/C# for his game logic. You need to get the data from the CPU to the GPU, so there will always be a CPU side. GraphicsDevice.SetRenderTarget(yourRenderTarget); sets the render target you will be rendering into, but what would you render in his case, since he has no information in GPU VRAM at the point where he sets it?
Texture2D.SetData can use slices to set only the VISIBLE set of data, which is bounded by the monitor resolution… unless he wants to rewrite his project completely to run on the GPU using a GPU state-aware particle approach, what I said stands. The slice overload is sketched below.
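
For completeness, the slice overload in question is presumably SetData(level, rect, data, startIndex, elementCount), which only uploads the given region (a sketch; the view window and `FillFromCells` are hypothetical):

    // Upload only the visible window into an existing (larger) texture.
    Rectangle visible = new Rectangle(viewX, viewY, viewW, viewH);
    Color[] data = new Color[visible.Width * visible.Height];
    FillFromCells(data, visible);                      // hypothetical: copies cells into data
    texture.SetData(0, visible, data, 0, data.Length); // writes just that rectangle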

Well, then a CPU-side Game of Life at that resolution will be complicated and will usually end up slow.