This topic is a sort of sequel to my previous topic here.
The solution found by willmotil was great and looked like this:
But the problem with that solution is that the player can see through walls, which is not what I want.
So I enhanced my raycast to create a polygon (MonoGame.Extended.Shapes.Polygon) of what the player can really see.
Like this:
So now I'm wondering: is it possible to draw only the pixels inside the polygon I have created? Does anybody have any clues about whether it's possible?
Ah yeah, I can think of about three ways to do it, and none of them are easy; the CPU way might actually be less work than the render-target way.
The problem here is that in 3D you use the whole screen, or in this case a render target, to track depth.
In 2D, making a depth buffer is actually kind of a pain.
So the first way I'm thinking of is CPU-side, with a single SetData call, reusing one array over and over.
But this is something I'd rather not even try to explain.
The second way would be a special shader plus a render target; it would involve repeatedly sending in the render target and reading it while doing fake rendering of a quad, and it too would be hell to explain.
The third way (and this is kinda weak, to be honest) is some work as well. It involves mostly or entirely bypassing SpriteBatch, plus a custom shader and a render target.
This idea is as follows.
Create a list of quads to do all the shadowing by finding the edges of the rectangle polygons you would draw in your scene, then extending each edge with a pair of lines to the edge of the screen, based on the player position, to build up a polygon list of darkened areas.
You would then make a render target.
Render this polygon list to it, where it just sets a value, say red, to 1.
This is then basically a stencil or shadow buffer.
Send that render target back in and draw your regular map, but read the render target with a shader that looks for those red values.
If a pixel has a red value of 1, replace the regular scene color with black.
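The edge-extension step can be sketched language-agnostically. Here it is in Python (the thread's engine is C#/MonoGame, so treat the names `extrude_shadow_quad` and `reach` as purely illustrative, not anyone's actual API):

```python
import math

def extrude_shadow_quad(player, a, b, reach=10000.0):
    """Build one darkened quad from an occluder edge (a, b) by pushing
    both endpoints directly away from the player. 'reach' just has to be
    larger than the screen diagonal so the quad reaches past the screen edge."""
    def push_away(pt):
        dx, dy = pt[0] - player[0], pt[1] - player[1]
        length = math.hypot(dx, dy)
        return (pt[0] + dx / length * reach, pt[1] + dy / length * reach)
    # Quad: the occluder edge itself plus its projection away from the player
    return [a, b, push_away(b), push_away(a)]
```

Rendering every such quad in red to the render target gives the stencil/shadow buffer described in the next steps.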
You could optimize it a bit: if none of a rectangle's points are inside the area you are currently viewing, you can skip it, as you already know that part is darkened.
This one would be a bit of work, and I don't think I have time to tackle it.
Edit: hmm, now that I think about it, I believe someone else did it like this, and it probably is the best way.
You could maybe just draw your whole scene slightly lifted off the map, with an actual 3D light just slightly higher than that, to spread an actual 3D stencil shadow across the screen. Again, a lot of work though.
Unless someone else can come up with something simpler, that's all I can think of right now.
Thanks for your ideas & your time. I'll take some time to think about it.
Maybe I could send all my polygon vertices to my pixel shader and then check whether the pixel coordinate is inside the polygon or not. Do you think that's possible?
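It is possible in principle, though passing many vertices to a pixel shader gets expensive as the polygon grows. For reference, the per-pixel test such a shader would run is a standard even-odd (ray-crossing) point-in-polygon check; here is a CPU-side sketch in Python of that same math:

```python
def point_in_polygon(px, py, vertices):
    """Ray-crossing test: count how many polygon edges a horizontal
    ray from (px, py) crosses; an odd count means the point is inside."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Does this edge straddle the horizontal line through py?
        if (y1 > py) != (y2 > py):
            # X coordinate where the edge crosses that horizontal line
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside
```

In HLSL this would be a loop over a vertex array passed as shader constants, which caps the polygon size; that is one reason the render-target and stencil approaches discussed below tend to scale better.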
I have literally just implemented something similar in my game.
Is this the effect you want?
If so I can gladly explain how I did it.
Thanks for your reply.
I want to draw the other players' pixels only inside the FOV polygon, and yeah, that looks like the effect I want.
If you could explain the method to me, that would be kind of you.
First off, my engine is tile-based, but this will work with any type of 2D engine. Being tile-based just means it can be optimised to use less CPU power.
So here is what I do.
Each tile has a line for each side of it. So 4 lines per tile.
If two tiles are next to each other I eliminate the lines where the tiles touch for performance reasons.
Then I create lines from the player going out at 2-degree intervals, so 180 lines for a full circle.
I check these lines against the lines from each tile.
When I get a collision, I store the point in an array.
Once I've done that, I draw a polygon from the player to point 1 and point 2 in the array, then from the player to point 2 and point 3, and so on.
This then creates my light map to apply to my game.
I'm not that good at explaining, so if you are struggling to understand my method I'll provide some code.
Also, once you get this working, you can get it very optimized.
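The steps above can be sketched roughly like this (Python for brevity, since the actual engine is C#; the 2-degree step comes from the description, while the function names, `max_dist` fallback, and everything else are illustrative assumptions):

```python
import math

def ray_segment_hit(origin, angle, seg_a, seg_b):
    """Parametric ray/segment intersection. Returns the distance t along
    the ray at which it hits the segment, or None if it misses."""
    ox, oy = origin
    dx, dy = math.cos(angle), math.sin(angle)
    ex, ey = seg_b[0] - seg_a[0], seg_b[1] - seg_a[1]
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-9:
        return None  # ray parallel to the segment
    # Solve origin + t*(dx,dy) == seg_a + u*(ex,ey)
    t = ((seg_a[0] - ox) * ey - (seg_a[1] - oy) * ex) / denom
    u = ((seg_a[0] - ox) * dy - (seg_a[1] - oy) * dx) / denom
    if t >= 0 and 0 <= u <= 1:
        return t
    return None

def cast_fov(origin, walls, step_degrees=2, max_dist=1000.0):
    """One ray every step_degrees; keep the nearest hit per ray
    (or max_dist if nothing is hit). Consecutive point pairs plus the
    origin then form the triangle fan of the visible area."""
    points = []
    for deg in range(0, 360, step_degrees):
        angle = math.radians(deg)
        best = max_dist
        for a, b in walls:
            t = ray_segment_hit(origin, angle, a, b)
            if t is not None and t < best:
                best = t
        points.append((origin[0] + math.cos(angle) * best,
                       origin[1] + math.sin(angle) * best))
    return points
```

The "eliminate shared tile edges" optimisation mentioned above just shrinks the `walls` list before this loop runs.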
@Mattlekim My project is tile-based as well.
I have almost everything needed, because I have the FOV polygon (built from the collision points).
Now my problem, I think, is knowledge… because I've never done a light map and I don't really see what's next.
If you have the time and the patience to tell me how you made the light map from the polygon (or the collection of collision points), that would be cool.
If not, you've already helped me a lot, and I'll check some tutorials.
First you need two RenderTarget2Ds: one for your lightmap, the other for your game.
So the first step is to create the lightmap.
Set the render target to your lightmap RenderTarget2D.
Next, draw your lightmap. Here is where you need to actually draw the polygons using your points, so that only the bits you want to be in the light are covered.
Now set the render target to your game RenderTarget2D.
Draw all your game stuff.
Now you set your render target to null.
At this point you need to either use a custom shader or use the alpha blending built into MonoGame.
Personally, I think a custom shader is better as you have way more control over the effect. I can provide you with my shader if you wish.
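Mattlekim's actual shader isn't shown here, but a common way to combine the two targets is a straight per-pixel multiply: sample the lightmap and scale the scene colour by it. In Python terms (hypothetical names, just to show the math the shader would run per pixel):

```python
def apply_lightmap(scene_rgb, light_value):
    """Per-pixel combine: scale the scene colour by the sampled lightmap
    value, where 0.0 is fully dark and 1.0 is fully lit."""
    r, g, b = scene_rgb
    return (r * light_value, g * light_value, b * light_value)
```

In an actual HLSL pixel shader this is one texture sample of the lightmap target followed by a multiply against the scene colour.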
It will take me some time to get it working because I've never used RenderTarget2D.
Thank you for your explanation!
If you don't mind sharing your shader, I'd be glad to see it because I'm pretty new to shaders.
RenderTarget2D is very simple; it's the other stuff that's complex. I'll send it to you when I get home from work.
If you are able to draw your "view-polygon" filled, you can use the stencil buffer. It's still none of the easy tasks, but it lets you create a mask for every pixel you paint (you need to set up the stencil state correctly). So: set up stencil write and paint your view polygon; after that, just draw everything else with a different stencil state, so that pixels are only drawn where a specific stencil value is set ("draw only when stencil set", for example). Basically, masking at the pixel level.
It doesn't need any render targets or shader changes, but you'd need to learn a bit about how the stencil buffer works and how to use it to your benefit.
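Conceptually, the two stencil passes work like this software model (a deliberately naive Python sketch of the idea, not MonoGame's actual `DepthStencilState` API):

```python
def stencil_mask_demo(width, height, view_polygon_pixels, scene):
    """Software model of the two-pass stencil idea: pass 1 writes 1 into
    the stencil wherever the view polygon is painted; pass 2 draws the
    scene only where the stencil reads 1 ('draw only when stencil set')."""
    stencil = [[0] * width for _ in range(height)]
    for x, y in view_polygon_pixels:           # pass 1: stencil write
        stencil[y][x] = 1
    out = [[None] * width for _ in range(height)]
    for y in range(height):                    # pass 2: masked scene draw
        for x in range(width):
            if stencil[y][x] == 1:
                out[y][x] = scene[y][x]
    return out
```

On the GPU both passes are normal draw calls; only the depth-stencil state differs between them, so nothing in your drawing code has to change.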
Back in 2010, I solved this by using a pixel shader. I only needed it for lighting, not LoS, but you could examine the output to see whether a screen pixel is lit or not.
You can see a vid I did of it here.
Thank you for the idea; I'll take a look at stencil buffers today then!
With all your solutions, I think I'll be OK!
Plus I'm in touch with Mattlekim; he is helping me.
Thank you all!
The way I'd recommend you render the mask/stencil of your vision polygon is by triangulating it. That means dividing it up into triangles. Then you can render the triangles to a separate RenderTarget2D that will act as the mask (or, as suggested, you could use the stencil buffer). Triangulation is a widely researched topic and there are a lot of resources on it.
To do the triangulation you can take a look at the implementation in penumbra which is MIT licensed. Penumbra is a 2D lighting lib for MG that supports soft shadows. You might be able to use penumbra instead of implementing your own solution if you only care about the result.
Convex polygons are easy to triangulate by picking any vertex and forming triangles with the diagonals to all the other subsequent vertices (fan triangulation). For concave polygons penumbra has a nice triangulation implementation. Penumbra also has an implementation to check if a polygon is convex here. A better implementation is possible, though: go through the edges in order and check that each edge turns in the same direction as the previous one, meaning every edge turns left with regard to the previous edge, or every edge turns right. If that holds, the polygon is convex. You can put the polygon's vertices into a vertex buffer and let the triangulator fill in the index buffer to render the mask.
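Both ideas, fan triangulation and the turn-direction convexity test, fit in a few lines. A Python sketch (the index triples are what would go into the index buffer; this is not penumbra's actual code):

```python
def fan_triangulate(vertices):
    """Fan triangulation of a convex polygon: every triangle shares
    vertex 0, exactly as described above. Returns index triples."""
    return [(0, i, i + 1) for i in range(1, len(vertices) - 1)]

def is_convex(vertices):
    """Convexity via consistent turn direction: the z component of the
    cross product of consecutive edges must never change sign."""
    n = len(vertices)
    sign = 0
    for i in range(n):
        ax, ay = vertices[i]
        bx, by = vertices[(i + 1) % n]
        cx, cy = vertices[(i + 2) % n]
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False
    return True
```

A raycast FOV polygon whose points are collected in angular order is star-shaped around the player, so a fan from the player's position works even when the polygon is concave; the convexity check matters if you fan from an arbitrary vertex instead.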
Thanks you a lot for your idea. I’ll take a look at penumbra
You know, I've been thinking about this and about how to do it the right way.
I think it comes down to a simple question.
How do you see a polygon's edge when its surface normal is perpendicular to the view direction?
e.g. the blue plane.
I'm thinking GLSL or HLSL will clip that every time, but maybe there is a way to force it to draw those edges. I don't know how to force it to ignore the normal, though; I don't think even CullNone will do it. But if you could, then the depth buffer could be made in 2D just like in 3D, except it would render a 1D line of depth instead of an entire 2D screen of depth. I think you could then do this with a bunch of light sources really cheaply; I'm not sure about many, but at least one point light should work, and really cheaply.
I don’t understand what you mean by this.
Our 2D screen is flat, so you can say it's a plane, and that plane has an imaginary line perpendicular to it.
It points straight into or out of the screen. That is typically called the surface normal of a polygon or plane.
Edit: a picture explains it better.
Like in the bottom part of the image I posted above: the vertical red polygon line is perpendicular to the direction of the flat blue polygon line.
As that blue polygon is rotated so that its edge, or its plane, lines up exactly with the forward view direction
(flat to your eye, like a piece of paper seen edge-on),
the surface normal of the blue polygon becomes exactly perpendicular to the camera's forward vector.
Then the dot product with any pixel's normal on that plane returns zero, and as shown in the upper part of the image,
the blue line disappears because the vertex shader clips it out.
Otherwise, you could rotate your whole 2D scene 90 degrees on the X axis, move your camera to the player position in that plane, and render everything once looking forward and once looking back, with the depth buffer on. The Z depth would land on a Y line one pixel high, since the entire scene would lie on the ZX coordinate plane; use whatever depth resolution you want for the width.
Every polygon would render as if you were looking at a piece of paper edge-on, in a long line.
Take the two snapshots, forward and back, and you have a 360-degree depth buffer around the viewer.
The above depth buffer (rendered as a line where the x coordinate is read as a degree bearing) then represents the atan2 polar coordinates of all the regularly drawn scene polygons. In the shader, (pixel - player position) gives a direction that atan2 converts into a linear value lining up with the stored depth line along its x coordinate.
Compare that pixel's distance to the viewer against the corresponding x-coordinate value in the depth buffer: sample at y = 1 and x = atan2(y, x) / pi (or / 360) when the bearing is less than 180 degrees, or at y = 2 for the back view when it is more than 180 degrees.
If the pixel's distance is greater than the depth value recorded there, it is in shadow.
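The lookup described above boils down to: convert the pixel's offset from the player into a bearing with atan2, index the one-pixel-high depth line at that bearing, and compare distances. A Python sketch of that comparison (using a single 360-entry line instead of the two forward/back rows; names are illustrative):

```python
import math

def in_shadow(pixel, player, depth_line):
    """depth_line[d] holds the distance to the nearest occluder at
    bearing d degrees from the player (the 1-D polar depth buffer).
    A pixel is in shadow when it lies farther out than that distance."""
    dx, dy = pixel[0] - player[0], pixel[1] - player[1]
    bearing = math.degrees(math.atan2(dy, dx)) % 360.0
    index = int(bearing * len(depth_line) / 360.0) % len(depth_line)
    return math.hypot(dx, dy) > depth_line[index]
```

In a pixel shader the same thing is one `atan2`, one texture sample of the depth line, and one distance compare per pixel.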
Maybe later I'll give this a shot. I'm not sure it can't be made to draw that edge; or maybe it doesn't matter, as long as it's not really clipped. Maybe the disappearing is just a math artifact.
I have done something like this in my current project; I use it for lighting. The player carries a light through a dark world… It was actually pretty simple, and it works great so far…
FIRST, I have a player light: a circle the size of un-obscured player vision… It's just a render target where I draw a circular 2D gradient representing a light…
Then I SUBTRACT from this light render target everything that is in shadow, using the angle from the player position to the CORNERS of shadow-casting geometry… You can draw this very simply using noob-level draw-vertices code.
Then I create a darkness render target the size of my screen… solid black, with an alpha value corresponding to the level of darkness I want…
Then I subtract my LIGHT render-target from the darkness layer,
and THEN draw the darkness RT on top of my whole game…
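Numerically, the layering described above is just a clamped subtraction per pixel: the darkness layer's alpha minus whatever light reaches the pixel, never below zero. A minimal sketch (hypothetical names):

```python
def overlay_alpha(darkness, light):
    """Alpha of the black overlay drawn on top of the game: the base
    darkness minus the light reaching this pixel, clamped at 0."""
    return max(0.0, darkness - light)
```

Fully lit pixels end up with a zero-alpha overlay (the game shows through untouched), while shadowed pixels get the full base darkness.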