How to generate a changing blob?

Hello,

I am trying to achieve an effect similar to Turbulent Displace in After Effects. I have found that such an effect is based on Perlin noise. I have found a similar effect, but it’s in JavaScript with Three.js.
I tried to find something in HLSL, but almost everything is based on Unity (which I haven’t learned).
Can someone point me in the right direction: should I investigate shaders further, or switch to something else to implement such a thing?
Also, I’m a little bit worried that in the JS example everything was in 3D, while I’m trying to achieve a 2D effect. Can shaders modify the shape of 2D textures, as well as 3D geometry?

Thanks a lot for your help!

There are a number of ways you could approach this using shaders.

You could use a refraction pixel shader; it might take some tweaking.

Refraction shader example

You could do the above and also send in a center point, then offset the UV grabs from the secondary displacement texture towards that center. That would probably give better results than the refraction shader as it is.

You could use a basic pixel shader and send in a bunch of points that move around, displacing the UV coordinates towards them based on their distance from the original coordinates, to pull the edges out (see the sketch below).

Harder, more precise ways you could do this: a circular vertex shader using polygons and displacing the edges. I don’t know how great it would be; I suppose it could range from super detailed to pretty basic. It would take a lot of work, and you couldn’t use SpriteBatch for that, so it’s probably not worth the time.
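
To give a rough idea of the “bunch of points” pixel shader approach, here is a minimal sketch; the parameter names, point count, and falloff are my own assumptions, not code from the refraction example:

    // Hypothetical parameters, set from game code each frame.
    float2 PullPoints[4];   // moving points, in 0..1 texture space
    float PullStrength;     // how hard each point drags the UVs, e.g. 0.02

    texture Texture;
    sampler TexSampler = sampler_state { Texture = <Texture>; };

    float4 PullPS(float4 pos : SV_Position, float4 color : COLOR0,
                  float2 uv : TEXCOORD0) : COLOR0
    {
        float2 grab = uv;
        for (int i = 0; i < 4; i++)
        {
            float2 toPoint = PullPoints[i] - uv;
            // Drag the sample position towards each point; the +0.05
            // keeps the falloff finite right on top of a point.
            grab += toPoint * PullStrength / (dot(toPoint, toPoint) + 0.05);
        }
        return tex2D(TexSampler, grab) * color;
    }

    technique PointPull
    {
        pass P0 { PixelShader = compile ps_4_0 PullPS(); } // ps_3_0 on GL
    }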

Thanks a lot for your response!

I tried to compile your example to better understand each of the shaders, but it fails with the following error:

RefractShader.fx(194,43-50): error X4502: Shader model ps_4_0_level_9_1 doesn't allow reading from position semantics.

I copied your code without changes and ran it as a Win32 application. Maybe I should try it as UWP or Android?
I think it’s complaining about the following line: float2 warpedCoords = texCoord + (tex2D(DisplacementSampler, texCoord * SampleWavelength + DisplacementMotionVector).xy - float2(0.5f, 0.5f)) * Frequency;

Ah, sorry, that was done in GL. I’ve added the DX changes in that post.

To quick-fix it for DX…

In the shader:

//#define VS_SHADERMODEL vs_4_0_level_9_1
//#define PS_SHADERMODEL ps_4_0_level_9_1
#define VS_SHADERMODEL vs_4_0
#define PS_SHADERMODEL ps_4_0

In the Game1 constructor:

        public Game1()
        {
         // ..

            graphics.GraphicsProfile = GraphicsProfile.HiDef; // << profile hi def DX
        }

In the drawing function:

        public void Draw2dRefractionTechnique(...)
        {
            //...

            // Set an effect parameter to make the displacement texture scroll in a giant circle.
            refractionEffect.CurrentTechnique = refractionEffect.Techniques[technique];
            refractionEffect.Parameters["Texture"].SetValue(texture); // << Requisite for Dx

It is possible to do it for lower-level DX caps too. To do that, you have to add a vertex shader, pass a position value from it as a texture coordinate, set the projection matrix up, and apply the changes after Begin().
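
Roughly, that workaround looks like this; just a sketch of the idea, not the posted code:

    // At ps_4_0_level_9_1 the pixel shader may not read the position
    // semantic (error X4502), so the vertex shader copies the position
    // into a spare texture coordinate that the pixel shader CAN read.
    struct VSOutput
    {
        float4 Position : SV_Position;  // consumed by the rasterizer
        float4 Color    : COLOR0;
        float2 TexCoord : TEXCOORD0;
        float4 PosCopy  : TEXCOORD1;    // readable duplicate for the PS
    };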

Yeah, I think if you alter one of the methods to flatten the color, pass in a center point, and adjust the texture grabs towards or away from it (or just use a smoother displacement image), you could make the effect.

I did something like that (again, many moons ago); I posted a clip of it here. I’ll see if I can find my old code and put it in my repo :slight_smile:

Thanks a lot for your update!

I have successfully compiled that example and have been figuring out the logic behind the effects. First of all, I see that everywhere in the .fx file you have written a PixelShader without altering the vertices. So, from my understanding, it’s OK to manipulate only pixels, without touching the actual shape of the texture? Sorry if that sounds a bit strange, I’m still trying to dig into this.
Oh, and BTW, the thing with DisplacementSampler is connected with this article, correct? But in your example you have magfilter, which is absent from the documentation. Is this related to the DX9 and DX11 differences?

Thanks again for all your input. I’m now tweaking your example a little to achieve the necessary effect. I’ll post an update if I find something :slight_smile:

Thanks for the video! All that stretching and morphing does look interesting!
Maybe my effect is based on the same techniques as you’ve shown in the video :slight_smile:

It would be wonderful if you could find that code, thanks :slight_smile:

it’s OK to manipulate only pixels, without touching the actual shape of the texture?

Yep, the pixel shader can displace where texels are taken from on an image.
Conceptually, the pixel shader runs for each and every destination pixel and lets you manipulate where the corresponding texel is grabbed from. Normally that mapping is proportionally even, but you can distort the grab, or even blend it, for all positions via some math function you come up with. In other words, you distort the texel grab positions, aka the UV sampling.

The vertex shader, on the other hand, works by displacing the positions of the vertices whose edges define where the drawing area will be, proportionally to the texture positions assigned to each vertex.

In general, a square image has 4 vertex points in total, the corners of the destination drawing rectangle; the UV texel positions are straightforward, with 0,0 being the top left and 1,1 the bottom right. However, you can assign more points to draw a square image, say 9 points (top left, top middle, top right, and so on down), with matching UV positions so the middle points have 0.5 values. These can later be distorted in the vertex shader.

Either is possible.
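
For example, on the pixel shader side, distorting the texel grab looks something like this; a tiny sketch where the sine wobble is an arbitrary function of my own:

    sampler TexSampler : register(s0);  // the texture SpriteBatch binds

    float4 WobblePS(float4 pos : SV_Position, float4 color : COLOR0,
                    float2 uv : TEXCOORD0) : COLOR0
    {
        // Distort where the texel is grabbed from, not where it lands:
        // each destination pixel reads from a sideways-shifted source.
        float2 grab = uv + float2(sin(uv.y * 20.0) * 0.02, 0.0);
        return tex2D(TexSampler, grab) * color;
    }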

But in your example you have magfilter, which is absent from the documentation. Is this related to the DX9 and DX11 differences?

Dunno why it would be missing from the docs; it should be applicable to all of them.

Anyway, min and mag filter are short for the Minification and Magnification sampling filters.

These filters tell the card what to do if the destination area is smaller or larger than the source texel area.

Say I draw to destination screen pixels 0 to 100 from an image whose source texels range from 0 to 10.
The Magnification filter can sample and blend points (between integer texels) by a fractional amount with linear filtering, or blend many texels with anisotropic filtering.
For example, with point sampling, say I grab a pixel at u = 0.5: it can just go 10 * 0.5 = 5 and grab source texel 5 for a range of destination positions within the 0 to 100 pixels. Namely, all positions from 50 to 59, where we calculate 59 / 100 = 0.59, then 0.59 * 10 = 5.9 (with the point filter, we just drop the fraction), so 5 is the source texel grabbed from the image that ranges from 0 to 10, and it ends up at destinations 50 to 59 of the 0-to-100 area on screen: a big blocky draw.

The Minification filter tells the card how to handle downsampling, when the source texels range from, say, 0 to 100 and the destination pixels on screen are 0 to 10, where u = 0.5 actually represents many texels that would have to be sampled (i.e. looked up and RGB-averaged) to shrink the image to one tenth the size.

AddressU and AddressV describe the texel grab positions x, y on the texture (named u, v instead for the sake of clarity). These modes relate mainly to what happens if the texel grab position is below 0 or above 1, basically out of bounds of the image…
Do we WRAP around to the other side of the image?
Do we MIRROR back into the texture from that edge?
Do we CLAMP at the edge and just return the nearest valid edge pixel?

These five are the primary filter concepts to take note of; the W address is important primarily for anisotropic filtering.
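
In an .fx file, all of those settings live together in a sampler_state block, something like this (the values here are just examples):

    texture DisplacementTexture;
    sampler DisplacementSampler = sampler_state
    {
        Texture   = <DisplacementTexture>;
        MinFilter = Linear;  // downscaling: blend many source texels
        MagFilter = Linear;  // upscaling: blend between neighboring texels
        MipFilter = Linear;
        AddressU  = Wrap;    // u below 0 or above 1 wraps around
        AddressV  = Mirror;  // v out of bounds reflects back in
    };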

Anti-aliasing is a separate topic, but you need to be aware of the above before you start to worry about it. In general, anti-aliasing can be thought of as a problem in the domain of a 1:1 texel-to-pixel ratio, where depth and the real physical properties of a monitor’s LED lighting also factor in…

If you want to go the vertex route, you also need to create a vertex shader, set up the world/view/projection matrices, send in the textures, etc. You can’t really use SpriteBatch at that point.
Here is a mesh class that will generate a grid mesh from an array of vectors. Unless charles has something ready-made, I don’t at the moment.

Mesh class

Thanks again for your help :slight_smile:

I think I have finally accomplished the desired result:

#if OPENGL
    #define SV_POSITION POSITION
    #define VS_SHADERMODEL vs_3_0
    #define PS_SHADERMODEL ps_3_0
#else
    #define VS_SHADERMODEL vs_4_0_level_9_1
    #define PS_SHADERMODEL ps_4_0_level_9_1
#endif

matrix WorldViewProjection;

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float4 Color : COLOR0;
};

struct VertexShaderOutput
{
    float4 Position : SV_POSITION;
    float4 Color : COLOR0;
};

float4 mod(float4 x, float4 y)
{
    return x - y * floor(x / y);
}

float4 mod289(float4 x)
{
    return x - floor(x / 289.0) * 289.0;
}

float4 permute(float4 x)
{
    return mod289(((x * 34.0) + 1.0) * x);
}

float4 taylorInvSqrt(float4 r)
{
    return (float4)1.79284291400159 - r * 0.85373472095314;
}

float2 fade(float2 t) {
    return t * t * t * (t * (t * 6.0 - 15.0) + 10.0);
}

// Classic Perlin noise
float cnoise(float2 P)
{
    float4 Pi = floor(P.xyxy) + float4(0.0, 0.0, 1.0, 1.0);
    float4 Pf = frac(P.xyxy) - float4(0.0, 0.0, 1.0, 1.0);
    Pi = mod289(Pi); // To avoid truncation effects in permutation
    float4 ix = Pi.xzxz;
    float4 iy = Pi.yyww;
    float4 fx = Pf.xzxz;
    float4 fy = Pf.yyww;

    float4 i = permute(permute(ix) + iy);

    float4 gx = frac(i / 41.0) * 2.0 - 1.0;
    float4 gy = abs(gx) - 0.5;
    float4 tx = floor(gx + 0.5);
    gx = gx - tx;

    float2 g00 = float2(gx.x, gy.x);
    float2 g10 = float2(gx.y, gy.y);
    float2 g01 = float2(gx.z, gy.z);
    float2 g11 = float2(gx.w, gy.w);

    float4 norm = taylorInvSqrt(float4(dot(g00, g00), dot(g01, g01), dot(g10, g10), dot(g11, g11)));
    g00 *= norm.x;
    g01 *= norm.y;
    g10 *= norm.z;
    g11 *= norm.w;

    float n00 = dot(g00, float2(fx.x, fy.x));
    float n10 = dot(g10, float2(fx.y, fy.y));
    float n01 = dot(g01, float2(fx.z, fy.z));
    float n11 = dot(g11, float2(fx.w, fy.w));

    float2 fade_xy = fade(Pf.xy);
    float2 n_x = lerp(float2(n00, n01), float2(n10, n11), fade_xy.x);
    float n_xy = lerp(n_x.x, n_x.y, fade_xy.y);
    return 2.3 * n_xy;
}

VertexShaderOutput MainVS(in VertexShaderInput input)
{
    VertexShaderOutput output = (VertexShaderOutput)0;

    float x = input.Position.x;
    float y = input.Position.y;

    if (x == 0.25 && x == 0.25) {
        output.Position = mul(input.Position, WorldViewProjection);
    }
    else {
        float angle = atan2(x, y);
        float radius = 0.75 + 0.25 * cnoise(input.Position.xy);
        output.Position = mul(float4(radius * cos(angle), radius * sin(angle), input.Position.z, input.Position.w), WorldViewProjection);
    }

    output.Color = input.Color;

    return output;
}

float4 MainPS(VertexShaderOutput input) : COLOR
{
    return input.Color;
}

technique BasicColorDrawing
{
    pass P0
    {
        VertexShader = compile VS_SHADERMODEL MainVS();
        PixelShader = compile PS_SHADERMODEL MainPS();
    }
};

Credit goes to this code snippet: https://github.com/keijiro/NoiseShader/blob/master/Assets/HLSL/ClassicNoise2D.hlsl

I don’t really understand all the math behind cnoise, but the result is pretty OK to me :slight_smile:
The last thing I need to figure out is how to animate this thing, but half of the progress is better than nothing :slight_smile:

Nice! You should use the other function to animate it, I believe:

    // Classic Perlin noise, periodic variant
    float pnoise(float2 P, float2 rep)

where P is the position in the vertex shader and rep is an x,y value pair you pass in to your shader as a global variable that you update each frame.

Rep here is typically an oscillating variable, or in this case a pair of them, so…
Each frame, add a small amount, say 0.01f, to some value, call it (float radians);
if it goes over 2 * Math.PI, reset the variable to zero.
Each frame, take that value and get the sin and cos of it. Depending on how the function works, you might need to multiply the resulting sin/cos by an integer like 100 or something.
Those sin/cos functions are also in Math; they return doubles, so you might need to cast them to floats.
Pass that into your shader and use it as the rep value in pnoise(P, rep).

You pass in a global just like you did with your matrix WorldViewProjection, except it’s a float2 for this value.

To take out the distortions, I would multiply both of the resulting sin/cos values by 0 (or, to put them back in, 1). Dunno if he set his function up for that, but it looks pretty competent, so he probably did.
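
Taken literally, the shader side of that suggestion would look something like the sketch below; NoiseRep is a hypothetical name for the new global, and the rest mirrors the MainVS posted above:

    // Hypothetical global; game code updates it every frame with an
    // oscillating pair, e.g. 100 * (cos(radians), sin(radians)).
    float2 NoiseRep;

    VertexShaderOutput AnimatedVS(in VertexShaderInput input)
    {
        VertexShaderOutput output = (VertexShaderOutput)0;
        float angle = atan2(input.Position.x, input.Position.y);
        // pnoise is the periodic variant from the same ClassicNoise2D
        // file; changing NoiseRep each frame shifts the noise pattern.
        float radius = 0.75 + 0.25 * pnoise(input.Position.xy, NoiseRep);
        output.Position = mul(float4(radius * cos(angle), radius * sin(angle),
                                     input.Position.z, input.Position.w),
                              WorldViewProjection);
        output.Color = input.Color;
        return output;
    }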

Think that if (x == 0.25 && x == 0.25) line may be a bug :slight_smile:

Another technique I have used to get similar effects is point distancing.

You basically have an array of points (in 2D or 3D), calculate the squared distance from each of them, and sum them to create a float; then you use that float to generate the pixel value.

This technique has been used in many, many ways to generate effects, and it led on to the modern distance fields used extensively in real-time ray tracing.

A simple example can be seen here.
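
A minimal pixel-shader sketch of the idea; the point count, falloff, and threshold are arbitrary choices of mine:

    float2 FieldPoints[3];  // moving points, set from game code

    float4 FieldPS(float4 pos : SV_Position, float4 color : COLOR0,
                   float2 uv : TEXCOORD0) : COLOR0
    {
        // Sum a falloff from each point; dot(d, d) is the squared
        // distance, which avoids a square root per point.
        float field = 0.0;
        for (int i = 0; i < 3; i++)
        {
            float2 d = uv - FieldPoints[i];
            field += 1.0 / (dot(d, d) + 0.0001);
        }
        // Threshold the summed field into a hard-edged blob.
        return color * step(60.0, field);
    }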

Thanks for the suggestion. But it seems that I’m doing something wrong, because my blob might cause epilepsy in someone, I think:

In the shader I have changed this line:
float radius = 0.75 + 0.25 * pnoise(float2(x, y), vectorOffset);

I am setting up the displacement as you mentioned, by way of the cos/sin functions.
But I have tinkered with the regular cnoise and come up with this:

    output.Position.x = output.Position.x + offset;
    output.Position.y = output.Position.y + offset;
    float radius = 0.75 + 0.25 * cnoise(output.Position.xy);

And I’m changing the offset in the Update function by 0.006f, just not wrapping it at 2 * Pi. (Can it cause an overflow later, if it reaches some limit…?)
This looks a little bit better, but it gets sharp edges that really seem out of place:

As stated here, the issue might be due to not repeating the displacement between the last and first vertex.

Thanks for sharing that code! That’s really cool, blending two meshes together. I hope I’ll understand that code a bit better with time. For now, I’m just trying to achieve that blobby effect.
And regarding the line if (x == 0.25 && x == 0.25) {: it doesn’t really affect anything. You can leave it or remove it; the result is the same per my checks.

Please at least change it to

if (x == 0.25 && y == 0.25)

Yeah, you’re right :smiley: Thanks