Hi!
I’m trying to render one or many spheres like this: for each sphere, from its center to the border, alpha = distance_from_the_center / radius.
Would I have to do this using a raycasting method?

Can’t I, for a start, just get the vertex’s position and the center, and use the distance between the two?

In the vertex shader, is input.position the position of the vertex in object space?
If so, then I should just have to do length(normalize(input.position - world._41_42_43)), shouldn’t I?

I’ve got the red decreasing as the pixel gets away from (0,0,0), as expected, but not in object space, i.e. when getting away from the center of the sphere.

If I got this right, you’re trying to get the distance from the center of the sphere, divided by the radius?
That’s just float redamount = length(output.WorldPos - output.PositionSphereCenter) / sphereRadius;
Though I don’t really understand what you’re doing. If you’re just rendering a sphere at PositionSphereCenter, the redamount will be the same for all points on it: 1 if the radius is equal to the sphereRadius above.

And just rendering like this will not get you a volumetric sphere, which seems to be what you want. But I don’t get what you mean by alpha = distance_from_the_center / radius; that doesn’t make much sense for a volume.

Sorry if I was not clear enough.
I want to draw a sphere where the center is fully opaque and which gets transparent towards the borders (alpha decreasing to 0).

alpha = distance_from_the_center / radius was to get a value between 0 and 1.
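The falloff described above can be sketched in plain Python (shader-agnostic, illustrative names). Since the center should end up opaque and the border transparent, the value actually used as alpha would be 1 - distance / radius:

```python
import math

def falloff_alpha(point, center, radius):
    """Alpha for a point inside the sphere: 1 at the center, 0 at the border."""
    d = math.dist(point, center)   # distance from the sphere's center
    t = min(d / radius, 1.0)       # the 0..1 value described above
    return 1.0 - t                 # inverted so the center is opaque

# sphere of radius 2 at the origin
print(falloff_alpha((0, 0, 0), (0, 0, 0), 2.0))  # 1.0 at the center
print(falloff_alpha((2, 0, 0), (0, 0, 0), 2.0))  # 0.0 at the border
```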

Thinking about how to explain what I want to achieve makes me think the only way to do this is the volumetric one, which is really too costly.
But what I was thinking, my very first idea, was to draw the visible hemisphere as a disc, something like this:

That’s the problem: I need to know the depth. With CW culling and CCW culling, keeping only the closest and farthest pixels does not work.
Drawing two times and storing the depth for CW and CCW does not work either.

I need to gather the transmittance, or something similar (I don’t know the name): sum the depths of spheres/other volumes (cubes, whatever) to know how opaque it is between two points in the volume or any other area.
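If the term you are after is transmittance, the usual model is Beer-Lambert: the light surviving a path falls off exponentially with the total distance travelled inside absorbing volumes. A minimal Python sketch, assuming a single illustrative absorption coefficient sigma (not from this thread):

```python
import math

def transmittance(thicknesses, sigma=1.0):
    """Beer-Lambert: fraction of light surviving several overlapping volumes.

    thicknesses: distance travelled inside each volume along the ray;
    sigma: absorption coefficient (illustrative value)."""
    return math.exp(-sigma * sum(thicknesses))

# two overlapping spheres crossed for 0.5 and 0.3 units by one pixel's ray
t = transmittance([0.5, 0.3], sigma=1.0)
alpha = 1.0 - t   # how opaque the combined volumes look for that pixel
```

Note that summing the per-volume path lengths before the exponential is what makes overlapping volumes combine correctly.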

I know it would be easier with voxels, but we don’t have geometry shaders.

Is there a way to accumulate the depths of objects in 3D? Overlaying two spheres and getting the distance between the closest pixel and the farthest? Sort of alpha-blending depths? Or a value that is incremented for each pixel, each time that pixel is drawn? Can the stencil buffer do this?
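One common trick in this spirit (a sketch, not something suggested elsewhere in this thread, and it needs no stencil buffer): render closed volumes with additive blending, where back faces add their depth and front faces subtract theirs; the per-pixel sum is then the total thickness along that pixel's ray, assuming the camera is outside all the volumes. In plain Python terms:

```python
def pixel_thickness(front_depths, back_depths):
    """Per-pixel thickness of closed volumes via additive accumulation:
    back faces add their depth, front faces subtract theirs.
    Assumes closed meshes and a camera outside every volume."""
    return sum(back_depths) - sum(front_depths)

# two spheres along one pixel's ray: entered at 2.0 and 3.5, exited at 4.0 and 5.0
print(pixel_thickness(front_depths=[2.0, 3.5], back_depths=[4.0, 5.0]))  # 3.5
```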

If I go for billboards, I would need to know how many times a pixel has been drawn with a quad, and its position relative to the center of the quad would give me the depth after some maths.
The 3D way seems faster to me, as fewer cosines/sines will be needed.
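For the billboard route, the "some maths" for a single sphere need no counting of draws: the thickness behind a pixel of the quad follows from the radial distance alone, thickness = 2 * sqrt(r^2 - d^2). A small sketch with illustrative names:

```python
import math

def chord_thickness(d, radius):
    """Thickness of a sphere behind a billboard pixel at distance d
    from the quad's center; 0 outside the sphere's silhouette."""
    if d >= radius:
        return 0.0
    return 2.0 * math.sqrt(radius * radius - d * d)

print(chord_thickness(0.0, 1.0))  # 2.0: the full diameter through the center
print(chord_thickness(1.0, 1.0))  # 0.0: the silhouette's edge
```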

Hum… I have found this interesting thing

But what happens when triangles of many models overlap, in CCW or CW? Won’t the closest one cull/prevent the drawing of the ones behind? There can be only one depth per pixel. How can I keep the transmission between two pixels?

For a sphere you can calculate the depth, i.e. the distance the camera ray has to travel through it, with simple math.

Google “line sphere intersection” or just look at the problem and figure it out yourself. If your inputs are clear (a pixel ray, a sphere center, a sphere radius), it’s not super crazy math.

If you have the entry point and the exit point, you can subtract them to get the distance travelled.

If your camera is inside the sphere you only need an exit point.
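The line sphere intersection described above can be sketched in plain Python (illustrative names; assumes a normalized ray direction):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Entry/exit distances (t0, t1) along a unit-length ray, or None on a miss."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    # quadratic t^2 + 2*b*t + c = 0 (the t^2 coefficient is 1 for a unit direction)
    b = ox * direction[0] + oy * direction[1] + oz * direction[2]
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - c
    if disc < 0:
        return None                # the ray misses the sphere
    s = math.sqrt(disc)
    return (-b - s, -b + s)        # entry, then exit

hit = ray_sphere((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0)
if hit:
    t0, t1 = hit
    travelled = t1 - t0            # distance through the sphere
    # if the camera is inside the sphere, t0 < 0 and only the exit t1 matters
```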

You can calculate the intersections with those shapes, just like you can do with a sphere. Check out Scratchapixel for the method to write the intersection algorithms and some examples.