Draw pixel only in polygon

I have literally just implemented something similar in my game.

Is this the effect you want?

If so I can gladly explain how I did it.

Matt


Hello @Mattlekim
Thanks for your reply.
I want to draw other players’ pixels only inside the FOV polygon, and yeah, that looks like the effect I want.
If you can explain the method to me, that would be kind of you :slight_smile:

Cool.

First off, my engine is tile-based, but this will work with any type of 2D engine. Being tile-based just means it can be optimised to use less CPU power.

So here is what I do.
Each tile has a line for each side of it. So 4 lines per tile.

If two tiles are next to each other I eliminate the lines where the tiles touch for performance reasons.

Then I create lines from the player going out at 2-degree intervals. So 180 lines for a full circle.

I check these lines against the lines from each tile.
When I get a collision, I store the point in an array.

Once I’ve done that, I draw a polygon from the player to point 1 and point 2 in the array, then from the player to point 2 and point 3, and so on.

This then creates my light map to apply to my game.
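Mattlekim’s ray-casting step can be sketched like this (a rough, language-agnostic illustration in Python, not his actual engine code; `ray_segment_hit`, `cast_fov` and the flat segment format are names I made up):

```python
import math

def ray_segment_hit(px, py, dx, dy, x1, y1, x2, y2):
    """Distance t along the ray (p + t*d) where it crosses the segment
    (x1,y1)-(x2,y2), or None when there is no crossing."""
    ex, ey = x2 - x1, y2 - y1            # segment direction
    denom = dx * ey - dy * ex            # 2D cross product; 0 means parallel
    if abs(denom) < 1e-12:
        return None
    t = ((x1 - px) * ey - (y1 - py) * ex) / denom   # distance along the ray
    u = ((x1 - px) * dy - (y1 - py) * dx) / denom   # position on the segment
    return t if t >= 0 and 0 <= u <= 1 else None

def cast_fov(px, py, segments, step_deg=2, max_dist=1000.0):
    """Cast one ray every step_deg degrees (180 rays at the default 2)
    and keep the closest hit per ray, or a point at max_dist if none."""
    points = []
    for deg in range(0, 360, step_deg):
        a = math.radians(deg)
        dx, dy = math.cos(a), math.sin(a)
        best = max_dist
        for seg in segments:
            t = ray_segment_hit(px, py, dx, dy, *seg)
            if t is not None and t < best:
                best = t
        points.append((px + dx * best, py + dy * best))
    return points
```

The resulting points, taken in order, form exactly the fan he describes: (player, point 1, point 2), (player, point 2, point 3), etc.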

I’m not that good at explaining, so if you are struggling to understand my method I’ll provide some code.

Also, once you get this working you can get it very optimized.

@Mattlekim My project is tile-based as well.

I have almost everything needed, because I have the FOV polygon (built from the collision points).

Now my problem, I think, is knowledge… because I’ve never done a lightmap and I don’t really see what’s next.

If you have the time and patience to tell me how you made the light map from the polygon (or the collection of collision points), that would be cool :slight_smile:

If not, you have already helped me a lot, and I’ll check some tutorials :smiley:

Thank you!

First you need two RenderTarget2Ds: one for your lightmap, the other for your game.

So the first step is to create the lightmap.

Set the render target to your lightmap RenderTarget2D.

Next, draw your lightmap. Here is where you actually draw the polygons using your points, so that only the bits you want lit end up in the light.

Now set the render target to your game RenderTarget2D.

Draw all your game stuff.

Then set your render target to null.

At this point you need to either use a custom shader or use the alpha effect built into MonoGame.

Personally, I think a custom shader is better, as you have way more control over the effect. I can provide you with my shader if you wish.
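Whether done with a shader or alpha blending, that final pass essentially multiplies each game pixel by the lightmap value at that pixel. A toy Python version of the per-pixel math (made-up `apply_lightmap` helper, greyscale values for simplicity; in MonoGame this would run on the GPU):

```python
def apply_lightmap(scene, lightmap):
    """Multiply each scene pixel (0-255 grey value here) by the
    lightmap intensity (0.0-1.0): lit areas keep their colour,
    unlit areas go to black."""
    return [[int(p * l) for p, l in zip(srow, lrow)]
            for srow, lrow in zip(scene, lightmap)]
```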

It will take me some time to get it working, because I’ve never used RenderTarget2D.
Thank you for your explanation!
If you don’t mind sharing your shader, I’d be glad to see it, because I’m pretty new to shaders.

RenderTarget2D is very simple; it’s the other stuff that’s complex. I’ll send it to you when I get home from work.

I’m on it!
Thank you

If you are able to draw your “view polygon” filled, you can use the stencil buffer. It’s still not one of the easy tasks, but what it does is let you create a mask for every pixel you paint (you need to set up the stencil state correctly). So set up stencil write and paint your view polygon; after that you just draw everything else with a different stencil state, so pixels are only drawn where a specific stencil value is set (“draw only where stencil is set”, for example). Basically, it’s masking at pixel level.

It doesn’t need any render targets or shader changes, but you’d need to learn a bit about how the stencil buffer works and how to use it to your benefit.


Back in 2010, I solved this by using a pixel shader, as I only needed it for lighting, not LoS, but you could examine the output to see whether a screen pixel is lit or not.

You can see a video I did of it here.


Thank you for the idea, I’ll take a look at stencil buffers today then! :slight_smile:
With all your solutions, I think I’ll be OK!
Plus I’m in touch with Mattlekim, he is helping me :slight_smile:

Thank you all!

The way I’d recommend you render the mask/stencil of your vision polygon is by triangulating it. That means dividing it up into triangles. Then you can render the triangles to a separate RenderTarget2D that will act as the mask (or, as suggested, you could use the stencil buffer). Triangulation is a widely researched topic and there are a lot of resources on it.

To do the triangulation you can take a look at the implementation in penumbra which is MIT licensed. Penumbra is a 2D lighting lib for MG that supports soft shadows. You might be able to use penumbra instead of implementing your own solution if you only care about the result.
Convex polygons are easy to triangulate: pick any vertex and form triangles with the diagonals to all the other subsequent vertices (fan triangulation). For concave polygons, penumbra has a nice triangulation implementation. Penumbra also has an implementation to check if a polygon is convex here.

A better implementation is possible, though, by checking whether each edge turns in the same direction as the previous edge when you go through the edges in order. That means every edge either turns left with regard to the previous edge, or every edge turns right. If that holds, the polygon is convex.

You can put the polygon’s vertices into a vertex buffer and let the triangulator fill in the index buffer to render the mask.
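To illustrate the fan triangulation and the edge-turn convexity test, here is a minimal Python sketch of my own (not penumbra’s actual code; `is_convex` and `fan_triangulate` are made-up names):

```python
def is_convex(poly):
    """A polygon is convex when every consecutive edge pair turns the
    same way, i.e. all 2D cross products share one sign."""
    n = len(poly)
    sign = 0
    for i in range(n):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % n]
        cx, cy = poly[(i + 2) % n]
        cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False   # turn direction flipped: concave
    return True

def fan_triangulate(poly):
    """Fan triangulation of a convex polygon: triangles as index
    triples into poly, ready for an index buffer."""
    return [(0, i, i + 1) for i in range(1, len(poly) - 1)]
```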

Thank you a lot for your idea. I’ll take a look at penumbra :slight_smile:

@Jjagg @Charles_Humphrey

You know, I’ve been thinking about this and about how to do it the right way.

I think it comes down to a simple question.

How do you see a polygon’s edge when its surface normal is perpendicular to the view direction?

e.g. the blue plane.

I’m thinking GLSL or HLSL will clip that every time, but maybe there is a way to force it to draw those edges. I don’t know how to force it to not consider the normal, though; I don’t think even NoCull will do it. But if you could, then the depth buffer could be made in 2D just like in 3D, except it would render a 1D line of depth instead of an entire 2D screen’s depth. I think you could then do that with a bunch of light sources really cheaply, though I’m not sure; at least one point light should work, and really cheaply.

I don’t understand what you mean by this.

Our 2D screen is flat, so say it’s a plane; that plane has an imaginary line that is perpendicular to it.
It would point straight into or out of the screen. That is typically called the surface normal of a polygon or a plane.

Edit: pictures explain it better.

Like in the bottom part of the image I posted above, the vertical red polygon line is perpendicular to the direction of the flat blue polygon line.

As the blue polygon line is raised so that its edge (or its plane) lines up exactly with the forward view direction
(flat to your eye, like a piece of paper edge-on),
the surface normal of the blue polygon becomes exactly perpendicular to the camera’s forward.

Then the dot product against any pixel’s normal on that plane returns zero, as shown in the upper part of the image above.

The blue line disappears because the vertex shader clips it out.

Otherwise, you could rotate your whole 2D scene 90 degrees on the X axis, move your camera to the player position in that plane, and render everything once looking forward, then once looking back, with the depth buffer on. The Z depth would land on a Y line 1 pixel high, as the entire scene would be on the ZX coordinate plane; use whatever depth resolution you want in width.
Every polygon would render like you were looking at a piece of paper edge-on, in a long line.


Take two snapshots, forward and back, and you have a 360-degree depth buffer around the viewer.

The above depth buffer (rendered as a line where the X coordinate is treated as a degree bearing) then represents the polar coordinates of every regularly drawn scene polygon. In the shader, (pixel - player position) gives a direction that atan2 turns into a linear value lining up with the stored depth line along its X coordinate.

Compare that pixel’s distance to the viewer against the corresponding X-coordinate value in the depth buffer. If it is more than the depth recorded in the render-target depth buffer at
y = 1 and x = atan2(x, y) / pi (or / 360) for angles under 180 degrees, or at y = 2 (the back view) for angles over 180 degrees,

then it is in shadow.
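The angle-indexed depth test being described can be sketched as a Python toy with point occluders (made-up names `build_depth_line` / `in_shadow`; the real version would render the depth line on the GPU and sample it per pixel):

```python
import math

def build_depth_line(px, py, occluder_points, buckets=360):
    """1D 'depth buffer': for each angular bucket around (px, py),
    the distance to the nearest occluder sample in that direction."""
    depth = [float("inf")] * buckets
    for ox, oy in occluder_points:
        ang = math.atan2(oy - py, ox - px) % (2 * math.pi)
        i = int(ang / (2 * math.pi) * buckets) % buckets
        d = math.hypot(ox - px, oy - py)
        if d < depth[i]:
            depth[i] = d
    return depth

def in_shadow(px, py, x, y, depth):
    """A pixel is shadowed when it lies farther out than the recorded
    depth in its angular bucket -- the same test a 3D shadow map does."""
    buckets = len(depth)
    ang = math.atan2(y - py, x - px) % (2 * math.pi)
    i = int(ang / (2 * math.pi) * buckets) % buckets
    return math.hypot(x - px, y - py) > depth[i]
```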

Maybe later I’ll give this a shot. I’m not sure it can’t be made to draw that edge, or maybe it doesn’t matter as long as it’s not really clipped; maybe the disappearing is just a math artifact.

I have done something like this in my current project; I use it for lighting. The player carries a light through a dark world… It was actually pretty simple, and it works great so far…

FIRST, I have a player light, a circle the size of un-obscured player vision… It’s just a render target where I draw a circular 2D gradient representing a light…

Then I SUBTRACT from this light render target everything that is in shadow, using the angle from the player position to the shadow-casting geometry’s CORNERS… You can draw this very simply using noob-level draw-vertices code.

Then I create a darkness render target the size of my screen… solid black, with an alpha value corresponding to the level of darkness I want…

Then I subtract my LIGHT render target from the darkness layer,

and THEN draw the darkness RT on top of my whole game…
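Per pixel, that light/darkness layering boils down to something like the following (a hypothetical Python sketch of the blend math, not Kinorth’s actual code; `light_value` and `darkness_alpha` are names I made up):

```python
import math

def light_value(cx, cy, radius, x, y):
    """Radial gradient light: 1.0 at the centre (cx, cy), fading
    linearly to 0.0 at the given radius."""
    d = math.hypot(x - cx, y - cy)
    return max(0.0, 1.0 - d / radius)

def darkness_alpha(base_darkness, light):
    """Alpha of the darkness overlay at one pixel: the base darkness
    level minus the light value there, clamped to [0, 1]. Drawing a
    black quad with this alpha on top of the game gives the effect."""
    return max(0.0, min(1.0, base_darkness - light))
```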


Well, what I’m describing is a bit different, I think. It would be a real shadow buffer.
You would draw the scene twice, clipping alpha-drawn pixels, to get a rendered Z depth in two render targets.
These would be depth samplers for the scene when drawn normally with a pixel shader.

(The caveat is that the normally drawn scene would calculate polar coordinates to get the sampled depth, then compare that to its own depth from the camera, so it would be exactly how you do shadows in 3D, but in 2D.)

The depth would conform to the shape of the sprites themselves, not the corners of the polygons that contain them or any generated polygons. Basically, in 2D you would be finding the edges of what’s in the sprites, not the sprites’ polygons.

Edit: this is the idea.

Okay, I get what you mean now. That approach is a lot more complex IMO because now you’re working with 2 different views and have to map stuff from one view to the other and back. It also does not give you a full occluder map, only the distance to the closest occluder from the perspective of the character you rendered the depth map for. You can get the same result by computing it from the vertices of the view polygon.

If you do have sprites with transparent parts or that otherwise can be seen through, I’d recommend manually building the occluder shapes.

My problem is now solved, with the help of Mattlekim.

He kindly gave me his library; we used the RenderTarget2D approach :slight_smile:

Thank you all for your involvement in my question :relaxed:
