Because those data are created in a texture in the first place. Imagine, for instance, an approach where you generate wind by rendering GPU particles into an offscreen render target; you can create some really interesting effects that way. Now you need to apply those data to all vegetation. Let's say all vegetation shares the same shader: if that shader samples the texture, using each vertex's world position to locate the sample, the vegetation gets really fast and efficient access to the GPU-simulated wind behavior (and of course different forces can be applied within that offscreen RT). To a large degree this is similar to stateful GPU particles, which use offscreen RTs to store data and behavior: render velocity additively onto the position RT and you get the particle positions for the next step. I'm sure you are familiar with that technique, I'm just trying to explain what I'm trying to achieve.
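To make the ping-pong step concrete, here is a minimal sketch of that additive position update as an HLSL pixel shader. All names (PositionMap, VelocityMap, DeltaTime) are my own illustrative choices, not from any particular engine; it assumes one particle per texel and point sampling.

```hlsl
// Sketch only: stateful GPU particle position update.
// Render this into the "other" position RT each frame (ping-pong),
// so the output becomes the input of the next step.
texture PositionMap;   // previous frame's positions, one particle per texel
texture VelocityMap;   // current velocities, same texel layout
float DeltaTime;

sampler PositionSampler = sampler_state { Texture = <PositionMap>; MinFilter = Point; MagFilter = Point; };
sampler VelocitySampler = sampler_state { Texture = <VelocityMap>; MinFilter = Point; MagFilter = Point; };

float4 UpdatePositionPS(float2 uv : TEXCOORD0) : COLOR0
{
    float3 pos = tex2D(PositionSampler, uv).xyz;
    float3 vel = tex2D(VelocitySampler, uv).xyz;
    // "Render velocity additively onto the position RT":
    return float4(pos + vel * DeltaTime, 1.0f);
}
```

The same pattern works for a wind RT: instead of per-particle positions, each texel stores the accumulated wind force for one patch of the map.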
Now, the difference between this and my system is that I don't use point particles but rather complex models (vegetation), where sampling that map once for every vertex might be brutal. (I'm aware there is no way to communicate data between vertices, since they run in parallel; I'm just checking whether I'm missing some other trick.) I could pull the data from that offscreen RT back to the CPU with GetData and then bake it (most likely as an additional Vector4) into the world-matrix instance stream used with the instanced geometry, but I'm aware GetData is slow, since it moves data from the GPU to the CPU. I have pretty much the whole system in my head; I'm just trying to find a way to get pixel data from the RT to all models sharing the shader in a single draw call. (Say I have a 1x1 km map and a 1024x1024 offscreen RT: that gives me a resolution of approximately 1 meter, and if I'm affecting whole objects at that scale I believe I can do some neat tricks.)
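One way to avoid the GetData round trip entirely, if the target hardware supports vertex texture fetch (Shader Model 3.0, via tex2Dlod), is to read the wind RT directly in the vegetation vertex shader using the world position, exactly as described above. A hedged sketch, where the mapping constants (MapOrigin, MapSize) and the displacement formula are made-up placeholders:

```hlsl
// Sketch only: reading the wind RT per vertex via vertex texture fetch.
// Requires SM 3.0; note tex2Dlod (with explicit LOD), not tex2D.
// A 1024x1024 RT covering a 1 km x 1 km map gives roughly 1 texel per meter.
texture WindMap;
sampler WindSampler = sampler_state { Texture = <WindMap>; MinFilter = Point; MagFilter = Point; };

float4x4 World;                 // per-instance, from the instance stream
float4x4 ViewProjection;
float2 MapOrigin;               // world-space XZ of the map's corner (illustrative)
static const float MapSize = 1024.0f; // world extent covered by the RT, in meters

float4 VegetationVS(float4 localPos : POSITION0) : POSITION0
{
    float4 worldPos = mul(localPos, World);
    // Map world XZ into the wind RT's UV space.
    float2 uv = (worldPos.xz - MapOrigin) / MapSize;
    float3 wind = tex2Dlod(WindSampler, float4(uv, 0, 0)).xyz;
    worldPos.xyz += wind;       // displace the vertex by the simulated wind
    return mul(worldPos, ViewProjection);
}
```

At roughly 1 meter per texel, every vertex of a small plant lands in the same texel anyway, so sampling once per vertex with point filtering effectively moves the whole object together; the per-vertex fetch cost is the trade-off against the GetData stall.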