Texture tap in vertex shader

Quick question I can’t find an answer to, considering a DX environment: how expensive is tapping into a texture in the vertex shader compared to the pixel shader? I used that technique quite a bit in the past, but I never wondered about the cost. Now I would like to use it on a ridiculous number of vertices, and that’s the problem. Alternatively, I could find a workaround if that texture tap were uniform for the whole model, but without a geometry shader I have no idea how to approach it.

Edit: Actually, the best approach for me would be if I could tap into the texture just once for the whole model and use the result in the vertex shader. I don’t think there is a way to achieve that, but I’m checking in case I’m missing something.
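For reference, sampling a texture in the vertex stage requires an explicit mip level, since screen-space derivatives don’t exist there. A minimal sketch in SM3-style HLSL (the texture, sampler, and semantics are illustrative names, not from any particular project):

```hlsl
// Illustrative vertex shader performing a per-vertex texture fetch.
texture DisplacementTexture;
sampler2D DisplacementSampler = sampler_state
{
    Texture   = <DisplacementTexture>;
    MipFilter = Point;   // no automatic mip selection in the vertex stage
    MinFilter = Linear;
    MagFilter = Linear;
};

float4x4 WorldViewProjection;

float4 VS(float4 position : POSITION0, float2 uv : TEXCOORD0) : POSITION0
{
    // tex2Dlod takes the LOD explicitly in the w component,
    // because the vertex stage cannot compute derivatives.
    float4 offset = tex2Dlod(DisplacementSampler, float4(uv, 0, 0));
    position.xyz += offset.xyz;
    return mul(position, WorldViewProjection);
}
```

Note that plain `tex2D` won’t compile in a vertex shader for exactly this reason; `tex2Dlod` (or `SampleLevel` in SM4+) is the required form.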

As is often the case, the only way to check how it performs for your exact use case is to try it and benchmark.

Just a heads up that vertex textures are not supported on DesktopGL right now, in case you’d like to port your project in the future.

What do you mean, once for the whole model? The data is the same for all vertices? Then why have a texture at all?

Because that data is created in a texture in the first place. Imagine, for instance, an approach where you generate wind by rendering GPU particles into an offscreen render target; you can create some really interesting things that way. Now you need to apply that data to all vegetation. Let’s say all vegetation shares the same shader: if that shader samples the texture, using each object’s world position to locate the sample, it gets really fast and efficient access to GPU-simulated wind behavior (and of course different forces can be applied within that offscreen RT). To a large degree this is similar to state-aware GPU particles, which use offscreen RTs to store data and behavior (render velocity additively onto a position RT and you get the particle positions for the next step). I’m sure you’re familiar with that technique; I’m just trying to explain what I’m aiming for.

Now, the difference between that and my system is that I don’t use point particles but rather complex models (vegetation), where sampling that map once per vertex might be brutal. (I’m aware there is no way to communicate data between vertices, as they run in parallel; I’m just checking whether I’m missing some other trick.) I could pull the data from that offscreen RT to the CPU with .GetData and then use it with instanced geometry by baking that information (most likely an additional Vector4) into the world-matrix vertex stream, but I’m aware that .GetData is slow, as it moves data from the GPU to the CPU. I have pretty much the whole system in my head; I’m just trying to find a way to get pixel data from the RT to all models sharing the shader in a single draw call. (Say I have a 1x1 km map and a 1024x1024 offscreen RT: that gives me a resolution of approximately 1 meter, and if I’m affecting whole objects at that scale I believe I can do some neat tricks.)
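The world-position-to-UV mapping described above could be sketched like this, assuming the 1x1 km map starts at a known corner and the 1024x1024 wind RT covers it exactly (all names here are hypothetical, not from any existing codebase):

```hlsl
// Illustrative vertex shader: look up GPU-simulated wind by world position.
texture WindTexture;   // the offscreen RT written by the particle sim
sampler2D WindSampler = sampler_state
{
    Texture   = <WindTexture>;
    MipFilter = Point;
    MinFilter = Linear;  // linear filtering blends neighboring ~1 m texels
    MagFilter = Linear;
};

float4x4 World;
float4x4 ViewProjection;
float2   MapOrigin;  // world-space XZ of the map's corner (assumption)
float    MapSize;    // 1000.0 for a 1x1 km map

float4 VS(float4 position : POSITION0) : POSITION0
{
    float4 worldPos = mul(position, World);

    // Map world XZ into [0,1] UV across the wind RT; with a
    // 1024x1024 RT over 1000 m that is roughly 1 m per texel.
    float2 windUV = (worldPos.xz - MapOrigin) / MapSize;

    // Explicit LOD 0, since the vertex stage has no derivatives.
    float3 wind = tex2Dlod(WindSampler, float4(windUV, 0, 0)).xyz;
    worldPos.xyz += wind;

    return mul(worldPos, ViewProjection);
}
```

Because every vertex of a given plant maps to (nearly) the same texel, this effectively gives the “one tap per object” behavior without needing CPU readback or per-instance data.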


Well, today I learned that sampling a texture for a few million vertices (including linear interpolation) won’t even make a dent in FPS.


Yes, I know. It’s getting stupid how much you can throw at a graphics card and it just eats it. :grinning:

I am waiting for a 2080 graphics card to arrive so I can do some work with it.

It’s going to be stupidly fast.

Close enough for now.
