Object space

Hi!
I’m trying to render one or many spheres like this: for each sphere, from its center to the border, alpha = distance_from_the_center / radius.
Would I have to do this using a raycasting method?

Can’t I just, for a start, get the vertex’s position and the sphere’s center, and use the distance between the two?

In the vertex shader, is input.position the position of the vertex in object space?
If so, then I should just have to do length(normalize(input.position - world._41_42_43)), shouldn’t I?

When I do

float4 worldPosition = mul(input.Position, world);
float redamount = length(normalize(output.WorldPos) - normalize(output.PositionSphereCenter));

and in the pixel shader

return float4(redamount,0,0,1);

I’ve got the red decreasing as the pixel gets further from (0,0,0), as expected, but not as it gets further from the center of the sphere in object space.

If I got this right, you’re trying to get the distance from the center of the sphere, divided by the radius?
That’s just float redamount = length(output.WorldPos - output.PositionSphereCenter) / sphereRadius;
Though I don’t really understand what you’re doing. If you’re just rendering a sphere at PositionSphereCenter, the redamount will be the same for all points on it: 1 if the mesh’s radius is equal to the sphereRadius above.
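In shader form, a minimal sketch of that idea (the parameter names World, ViewProjection, SphereCenter and SphereRadius, and the TEXCOORD0 wiring, are assumptions here, not your actual effect):

```hlsl
float4x4 World;
float4x4 ViewProjection;
float3   SphereCenter;   // world-space center of the sphere
float    SphereRadius;

struct VSOutput
{
    float4 Position : POSITION0;  // clip-space position for the rasterizer
    float3 WorldPos : TEXCOORD0;  // world-space position of the vertex
};

VSOutput VS(float4 position : POSITION0)
{
    VSOutput output;
    float4 worldPosition = mul(position, World);
    output.WorldPos = worldPosition.xyz;
    output.Position = mul(worldPosition, ViewProjection);
    return output;
}

float4 PS(float3 worldPos : TEXCOORD0) : COLOR0
{
    // 0 at the sphere center, 1 at the surface. Don't normalize the two
    // positions before subtracting, otherwise you only compare directions
    // from the world origin instead of measuring distance from the center.
    float redAmount = length(worldPos - SphereCenter) / SphereRadius;
    return float4(redAmount, 0, 0, 1);
}
```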

And just rendering like this will not get you a volumetric sphere, which seems to be what you want. But I don’t get what you mean by alpha = distance_from_the_center / radius; that doesn’t make much sense for a volume.

Sorry if I was not clear enough.
I want to draw a sphere where the center is fully opaque and it gets transparent towards the borders (alpha dropping towards 0).

alpha = distance_from_the_center / radius was just to get a value between 0 and 1.

Trying to explain what I want to achieve makes me think the volumetric way is the only way to do this :confused: which is really too costly.
But what I was thinking, my very first idea, was to draw the visible hemisphere as a disc, something like this:

I will use quads instead of spheres, as we already know the depth of the sphere :slight_smile:

The thing is, this doesn’t make sense for a volume; what you’re describing only works in 2D (for a disc).

So the sphere would look the same from all angles? You can just use billboarding with a quad then :slight_smile: Be careful not to mess up the depth buffer, though.

That’s the problem, I need to know the depth: with CW cull and CCW cull, keeping only the closest and farthest pixels does not work :confused:
Drawing twice and storing the depth for CW and CCW does not work.

I need to gather the transmittance, or something similar (I don’t know the name): sum the depths of the spheres/other volumes (cubes, whatever) to know how opaque it is between two points in the volume, or any other area.
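(The usual name for that is transmittance, via the Beer–Lambert law: for a homogeneous medium, only the total thickness travelled through the volume matters, so accumulating per-pixel thickness is enough. A minimal sketch, where density is an assumed tuning parameter:)

```hlsl
// Beer-Lambert: transmittance falls off exponentially with the total
// distance travelled through the medium; alpha is the remaining opacity.
float transmittance = exp(-density * accumulatedThickness);
float alpha = 1.0 - transmittance;
```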

I know it would be easier with voxels, but we don’t have geometry shaders :confused:

Is there a way to accumulate the depths of objects in 3D? Overlaying two spheres and getting the distance between the closest pixel and the farthest? Sort of alpha-blending depths? Or a value that is incremented for each pixel each time it is drawn? Can the stencil buffer do this?
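(One common way to do the “alpha-blending of depths” idea, as a rough sketch rather than a tested recipe: disable the depth test, render all the volume geometry into a floating-point render target with additive blending, and output the linear eye-space depth with a positive sign for back faces and a negative sign for front faces. The per-pixel sum is then the total distance travelled inside all the volumes, however many overlap, as long as the meshes are closed and the camera is outside them:)

```hlsl
float Sign; // +1 for the back-face pass, -1 for the front-face pass

float4 PS_Thickness(float eyeDepth : TEXCOORD0) : COLOR0
{
    // With additive blending and no depth test, the target accumulates
    // (sum of back-face depths) - (sum of front-face depths) per pixel,
    // i.e. the total thickness through all closed volumes at that pixel.
    return float4(Sign * eyeDepth, 0, 0, 0);
}
```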

If I go for billboards, I would need to know how many times a pixel has been drawn with a quad, and its position from the center of the quad would give me the depth after some maths.
The 3D way seems faster to me, as fewer cosines/sines will be needed.
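(For the quad route, the “some maths” is just Pythagoras: if the pixel sits at offset d from the quad center, measured in units of the sphere radius r, the ray passes through a thickness of 2·r·√(1 − d²), treating the rays as parallel. A minimal sketch, assuming uv runs from −1 to 1 across the quad:)

```hlsl
float SphereRadius;

float4 PS_Impostor(float2 uv : TEXCOORD0) : COLOR0
{
    float d2 = dot(uv, uv);   // squared offset from the quad center, in radius units
    clip(1.0 - d2);           // discard pixels outside the disc
    // Chord length through the sphere at this offset: 2 * r * sqrt(1 - d^2).
    float thickness = 2.0 * SphereRadius * sqrt(1.0 - d2);
    return float4(thickness, 0, 0, 0);
}
```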

Hum… I have found this interesting thing :slight_smile:

But what happens when triangles of many models overlap, in CCW or CW? Won’t the closest one cull/prevent the drawing of the ones behind? There can be only one depth per pixel :confused: How can I keep the transmission between two pixels? :confused:

For a sphere you can calculate the depth (the distance the camera ray has to travel through it) with simple math.

Google “line sphere intersection” or just look at the problem and figure it out yourself. If your inputs are clear (a pixel ray, a sphere center, a sphere radius), it’s not super crazy math.

If you have the entry point and the exit point you can subtract them and you get the distance travelled.

If your camera is inside the sphere you only need an exit point.
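A sketch of that in shader form, assuming you can reconstruct a normalized per-pixel ray (rayOrigin, rayDir); the math is just the standard quadratic for |rayOrigin + t·rayDir − center|² = radius²:

```hlsl
// Distance a ray travels inside a sphere (0 if it misses). rayDir must be normalized.
float SphereTravelDistance(float3 rayOrigin, float3 rayDir, float3 center, float radius)
{
    float3 oc = rayOrigin - center;
    float b = dot(oc, rayDir);
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - c;
    if (disc <= 0.0)
        return 0.0;                  // the ray misses the sphere

    float sq = sqrt(disc);
    float tNear = max(-b - sq, 0.0); // entry point; clamped to 0 if the camera is inside
    float tFar  = -b + sq;           // exit point
    return max(tFar - tNear, 0.0);   // distance travelled through the sphere
}
```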

The problem is that I took the sphere as an example, but it can be any 3D shape: a cube, a sphere, or even a NURBS surface.

You can calculate the intersections with those shapes, just like you can do with a sphere. Check out Scratchapixel for the method to write the intersection algorithms and some examples.
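For the cube case, the usual slab method gives the entry and exit distances in a handful of operations; a sketch for an axis-aligned box (for an oriented box, transform the ray into the box’s local space first):

```hlsl
// Distance a ray travels inside an axis-aligned box (0 if it misses).
// rayDir must be normalized; boxMin/boxMax are the two corners.
float BoxTravelDistance(float3 rayOrigin, float3 rayDir, float3 boxMin, float3 boxMax)
{
    float3 invDir = 1.0 / rayDir;   // relies on inf for axis-parallel rays
    float3 t0 = (boxMin - rayOrigin) * invDir;
    float3 t1 = (boxMax - rayOrigin) * invDir;
    float3 tMin = min(t0, t1);
    float3 tMax = max(t0, t1);
    float tNear = max(max(tMin.x, tMin.y), max(tMin.z, 0.0)); // clamped: camera inside
    float tFar  = min(min(tMax.x, tMax.y), tMax.z);
    return max(tFar - tNear, 0.0);
}
```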