Distortion effect

I’m trying to make an explosion shader effect like in the game ResoGun.

I started with a displacement map and a SpriteBatch for drawing each explosion effect …

That works, but it’s not really nice, and I haven’t found a solution to animate the distortion waves …

Has anybody already tried an animated distortion effect?

Thanks.

Do you have a reference of what you want it to look like? I think you can get a nice effect with a displacement map, maybe the map generation needs some tweaking.

He is talking about that game… a game designed to make you blind… no thanks, I’m already blind enough…

What might work (and maybe it’s what they do too) is to combine some sine waves in a function and, for each point near the explosion, evaluate that function at R, where R is the distance from the explosion center. Then displace that point around the explosion center (keeping it at the same distance from the center), with the magnitude of the displacement proportional to the result of the function evaluation. By controlling the amplitude and frequency of the sine waves you get great control over the intensity of the effect. Offset R by a time factor and it’s gonna be animated.

EDIT: You’ll also want to offset by screen location or some random value so simultaneous explosions don’t look the same, though I’m not 100% sure you could actually spot patterns in this.
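Here’s a minimal sketch of what that could look like in a pixel shader — the names and constants are just for illustration, and it assumes the scene has already been rendered to a texture that the shader samples:

sampler SceneSampler : register(s0);

float2 explosionCenter; // explosion center in texture coordinates
float time;
float amplitude;        // how far points get pushed around the center, in radians

float4 RipplePS(float4 pos : SV_POSITION, float4 color : COLOR0, float2 texCoord : TEXCOORD0) : SV_TARGET0
{
    float2 toPixel = texCoord - explosionCenter;
    float r = max(length(toPixel), 1e-4);

    // Combine a couple of sine waves evaluated at R, offset by time so the effect animates.
    float f = sin(r * 40.0 - time * 8.0) * 0.6 + sin(r * 90.0 - time * 14.0) * 0.4;

    // Fade the effect out away from the explosion.
    float falloff = saturate(1.0 - r * 4.0);

    // Rotate the sample point around the center; its distance from the center stays the same.
    float angle = atan2(toPixel.y, toPixel.x) + f * amplitude * falloff;
    float2 dir;
    sincos(angle, dir.y, dir.x);

    return tex2D(SceneSampler, explosionCenter + dir * r);
}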

If you’re not sure where to start with this, google Fourier series. There’s lots of great information to find about this. Like this video that can greatly help you understand how/why this works if you like visual explanations: https://www.youtube.com/watch?v=r18Gi8lSkfM

Maybe a noob question, but…
How would you pass multiple centers of multiple explosions to the shader in one go? Using a texture?

An array of Vector3?
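For example (a rough, untested sketch — the array name, the count parameter and packing the explosion’s age into z are just for illustration):

#define MAX_EXPLOSIONS 8

sampler SceneSampler : register(s0);

// HLSL arrays need a fixed size, so reserve a maximum and pass how many entries are in use.
float3 explosions[MAX_EXPLOSIONS]; // xy = center in texture coordinates, z = age of that explosion
int explosionCount;

float4 MultiRipplePS(float4 pos : SV_POSITION, float4 color : COLOR0, float2 texCoord : TEXCOORD0) : SV_TARGET0
{
    float2 offset = 0;
    for (int i = 0; i < explosionCount; i++)
    {
        float2 toPixel = texCoord - explosions[i].xy;
        float r = max(length(toPixel), 1e-4);
        float falloff = saturate(1.0 - r * 4.0);
        offset += (toPixel / r) * sin(r * 60.0 - explosions[i].z * 10.0) * falloff * 0.01;
    }
    return tex2D(SceneSampler, texCoord + offset);
}

On the MonoGame side you’d fill the array with EffectParameter.SetValue(Vector3[]). Note that a loop whose bound comes from a parameter needs a shader model with dynamic flow control, so this won’t compile for the lowest profiles.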

Oh. Yeah. Right. Thx.
I think I remember now. The real problem was a shader level that was too low, which didn’t allow dynamic for loops.
Maybe I should really only support the higher shader models.

I’d render the displacement map to a separate texture using some sort of blending (intuitively additive blending, but in practice something else might work better), and after you render the scene, re-render a full-screen quad with the displacement map and shader.
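For example, the per-explosion pass that writes into that displacement texture could look something like this (a sketch only — it assumes a signed/float render target format such as SurfaceFormat.Vector2 and additive blending, so overlapping explosions simply sum their offsets):

// Drawn once per explosion sprite into the screen-sized displacement render target.
// texCoord runs 0..1 over the sprite, so (0.5, 0.5) is the explosion center.
float time;
float amplitude;

float4 WriteDisplacementPS(float4 pos : SV_POSITION, float4 color : COLOR0, float2 texCoord : TEXCOORD0) : SV_TARGET0
{
    float2 toPixel = texCoord - 0.5;
    float r = max(length(toPixel), 1e-4);
    float falloff = saturate(1.0 - r * 2.0);
    float wave = sin(r * 50.0 - time * 10.0);

    // Signed radial offset; with additive blending these accumulate per pixel.
    float2 offset = (toPixel / r) * wave * amplitude * falloff;
    return float4(offset, 0, 0);
}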

If this is only a 2D game, I would do this with a shader and a single triangle of the position/color/texture vertex type. I just wouldn’t use any of those values for what they are — they’d only be inputs — and instead I’d do the distortion against a final render target of the scene.

Literally, I would mathematically build a triangle texture in an array whose values diminish per pixel as they move away from the center, then use that as my triangle’s texture.
The texel RGBs on it then carry all the distance interpolation information.
The color you send in (it could be any value, really) could be used against a second texel color to create waves, and you could further distort that with the other color components.

Basically:
the texture’s red value diminishes from the center;
its blue value is a wave going up and down.

The color you pass in acts as a timing function, and it actually leaves you extra values to work with.

Then it’s just a matter of using that texture’s calculated value against a render target of the scene at the end, to move texels that have already been drawn and displace them.

This is just an idea though; getting it to work right would probably take some time.

It’s basically the idea of using a polynomial to find the distance from that center, then using that as the target to calculate a sin/cos intensity against.

There is also the noise function to get a Perlin noise value.

You could also do it using all the same texture weights, or a second set of texture coordinates that all relate to the center of the quad itself, and go off the position and distance to that. That way would probably be superior, as you could get direction and distance from the center for each pixel position. For 3D, I think you might as well be learning to do water.
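If I understand the idea, a rough sketch of reading such a pre-baked texture might look like this (my interpretation only, not tested — it assumes a full-screen quad so texCoord lines up with the scene render target, and the constants are arbitrary):

sampler SceneSampler : register(s0); // the scene already drawn to a render target
sampler WaveSampler : register(s1);  // pre-built texture: R fades out from the center, B holds a baked wave

float4 WaveTexturePS(float4 pos : SV_POSITION, float4 color : COLOR0, float2 texCoord : TEXCOORD0) : SV_TARGET0
{
    float4 t = tex2D(WaveSampler, texCoord);

    // The vertex color carries a timing value sent in from the CPU each frame.
    float wave = sin(t.b * 6.28318 + color.r * 10.0);

    // The red channel fades the distortion out away from the center.
    float2 dir = texCoord - 0.5;
    dir /= max(length(dir), 1e-4);
    float2 offset = dir * wave * t.r * 0.02;
    return tex2D(SceneSampler, texCoord + offset);
}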

Sorry. Been sick.
Is there something I’m not getting here?
I thought dynamic arrays are a no-go in HLSL? Am I wrong? Or are you talking about a fixed-size array?
Also:
What shader level would I have to target in order to use a for loop? How ‘safe’ is that for older computers?
Thx in advance.

Thanks for your answers, my explosion effect now works with animation!

What approach did you take?

So here is my approach (a few weeks old):

and I used that shader here:

sampler ColorMapSampler : register(s0);

float2 center;     // ripple center in texture coordinates
float amplitude;   // strength of the displacement
float frequency;   // how tight the wave rings are
float phase;       // advanced over time to animate the ripple
float size = 1;    // radius of the affected area

float4 PixelShaderRipple(float4 pos : SV_POSITION, float4 color : COLOR0, float2 texCoord : TEXCOORD0) : SV_TARGET0
{
    // Distance and direction from the ripple center to this pixel.
    float2 toPixel = texCoord - center;
    float distance = length(toPixel);
    float2 direction = toPixel / distance;
    float angle = atan2(direction.y, direction.x);

    // Evaluate the wave at this distance; wave.x = sin, wave.y = cos.
    float2 wave;
    float m = mad(frequency, distance, phase);
    sincos(m, wave.x, wave.y);

    // Quadratic falloff so the effect fades out towards the edge of the ripple.
    float falloff = saturate(size - distance);
    falloff *= falloff;

    // Displace the sample point radially by the wave, then rebuild its texture coordinate.
    distance += amplitude * wave.x * falloff;
    sincos(angle, direction.y, direction.x);
    float2 uv2 = mad(direction, distance, center);

    // Slight brightness modulation driven by the cosine part of the wave.
    float weight = wave.y * falloff;
    float l = mad(weight, 0.2, 0.8);
    return tex2D(ColorMapSampler, uv2) * l;
}


technique ripple
{
	pass P0
	{
		PixelShader = compile ps_4_0_level_9_1 PixelShaderRipple();
	}
}

And while writing I wondered how to make use of that effect for multiple targets on screen.
@Jjagg, your approach with displacement mapping would work of course, but I’d have to generate the displacement map by drawing dynamically generated textures, wouldn’t I? Wouldn’t that shift all the computations to the CPU?

Oh, and @willmotil. Sorry, but I need a few days to even get a grasp on your post :slight_smile:
Have to google first for a while.

No, you could render the different effect areas to the texture with a custom effect (you could just use a SpriteBatch if you want).

Ah. Render to RTs and then render those to the displacement map. Then render a full-screen quad with the displacement effect. Now I get it.
Bit late :smiley::smile:
Will try that. Thx.

Uhm, no :stuck_out_tongue: I mean have a displacement map the size of the screen and render all the required distortions onto it. Then use it to displace the scene by rendering a full-screen quad. There’s no need to render each effect to an RT first. Just render them all to the displacement map RT directly while applying the effect.
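The final full-screen pass could then be as simple as this sketch (assuming the displacement RT stores signed offsets, like in the write-pass sketch earlier in the thread):

sampler SceneSampler : register(s0);        // the scene render target
sampler DisplacementSampler : register(s1); // the screen-sized displacement render target

float4 ApplyDisplacementPS(float4 pos : SV_POSITION, float4 color : COLOR0, float2 texCoord : TEXCOORD0) : SV_TARGET0
{
    // Read the accumulated offset for this pixel and shift the scene lookup by it.
    float2 offset = tex2D(DisplacementSampler, texCoord).xy;
    return tex2D(SceneSampler, texCoord + offset);
}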

Lol
That makes sense.