MonoGame Feature Wishlist

Oh thanks, I will definitely contact Tom when the time comes to create the UWP project.

Is there a difference between running a UWP project on Xbox rather than using the specific set of libraries you mention?

The UWP implementation is public; you don't need access or anything. You can't use the full Xbox capabilities with a UWP project. You can easily find more information on the Xbox website.

mesh = new Mesh();
mesh.vertices = vertices;
mesh.triangles = triangles;
mesh.RecalculateNormals();

would be nice :slight_smile:

You can kind of do this if you include the MonoGame.Framework.Content.Pipeline lib and use MeshBuilder.
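For example, roughly like this from a custom content processor (just a sketch — the quad, the vertex order and the names are my own illustrations, not tested code):

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Content.Pipeline.Graphics;

static class MeshBuilderExample
{
    // Builds a single textured quad at content-build time.
    public static MeshContent BuildQuad()
    {
        MeshBuilder builder = MeshBuilder.StartMesh("Quad");

        // Register the four corner positions and a texture-coordinate channel.
        int[] positions =
        {
            builder.CreatePosition(new Vector3(0, 0, 0)),
            builder.CreatePosition(new Vector3(1, 0, 0)),
            builder.CreatePosition(new Vector3(0, 0, 1)),
            builder.CreatePosition(new Vector3(1, 0, 1)),
        };
        int uvChannel = builder.CreateVertexChannel<Vector2>(VertexChannelNames.TextureCoordinate(0));

        // Emit two triangles, setting the UV for each vertex before adding it.
        Vector2[] uvs = { new Vector2(0, 0), new Vector2(1, 0), new Vector2(0, 1), new Vector2(1, 1) };
        int[] triangles = { 0, 1, 2, 2, 1, 3 };
        foreach (int i in triangles)
        {
            builder.SetVertexChannelData(uvChannel, uvs[i]);
            builder.AddTriangleVertex(positions[i]);
        }

        // FinishMesh returns the completed MeshContent.
        return builder.FinishMesh();
    }
}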

I have a mesh maker peon here, but I'm not sure it will help with what you were doing previously, as it seemed to me you were trying to keep the mesh quads separate.

You basically feed this a Vector3 array of points, like a height map, and it generates a mesh that maps to a texture, with pretty much everything included: normals and tangents for normal mapping. I didn't include bi-normals as I typically calculate them in the shader since it's cheap.
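The core of the idea is something like this (a stripped-down sketch with my own names — the linked class also computes normals and tangents properly, here the normal is just a placeholder):

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

static class GridMeshSketch
{
    // Turns a width * height grid of points (e.g. a height map) into an indexed mesh with UVs.
    public static void Build(Vector3[] points, int width, int height,
        out VertexPositionNormalTexture[] vertices, out int[] indices)
    {
        vertices = new VertexPositionNormalTexture[width * height];
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                vertices[y * width + x] = new VertexPositionNormalTexture(
                    points[y * width + x],
                    Vector3.Up,                                              // placeholder normal
                    new Vector2(x / (float)(width - 1), y / (float)(height - 1)));

        // Two triangles per grid cell, sharing vertices with the neighbouring cells.
        indices = new int[(width - 1) * (height - 1) * 6];
        int i = 0;
        for (int y = 0; y < height - 1; y++)
            for (int x = 0; x < width - 1; x++)
            {
                int tl = y * width + x, tr = tl + 1, bl = tl + width, br = bl + 1;
                indices[i++] = tl; indices[i++] = tr; indices[i++] = br;
                indices[i++] = tl; indices[i++] = br; indices[i++] = bl;
            }
    }
}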

Better model importing.

I am thinking about how I can shift the texture position on each quad, but with shared vertices it's impossible.
Am I right?

Not sure what you mean by shared vertices. Could you open a new topic for this?

He was doing 3D tile mapping, but a major component for him is to have each vertex of a portion of the mesh be conceptually equivalent to a tile, with a separate UV data set so that each tile can have interchangeable textures or texture coordinates, yet the whole thing maps to a contiguous mesh and is built as one.

E.g. if you have a mesh with 6 vertices where vertices 0 to 3 make up quad 1 and vertices 2 to 5 make up quad 2, then the shared UV coordinates at vertices 2 and 3 won't allow you to use two separate tiles in that mesh. This can be done with multi-texturing and more data per vertex, but that starts to represent an unacceptably large amount of data for all adjoining sides, and possibly even more if the mesh is not comprised of grids.

So he was attempting to separate the quads and align them so that the edges of quad 1, which is then made up of vertices 0 to 3, touch the edges of another quad made up of vertices 4 to 7, so that each has its own separate UV components.
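Something like this, roughly (a sketch of my own, not his actual code — the helper and the atlas parameters are just illustrative):

using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

static class UnsharedQuadSketch
{
    // Appends one quad with its own four vertices, so its UVs are independent of its neighbours.
    // Positions along a seam are identical to the neighbouring quad's; only the UVs differ.
    public static void AddQuad(List<VertexPositionTexture> vertices, List<int> indices,
        Vector3 origin, Vector2 tileUV, Vector2 tileSize)
    {
        int baseIndex = vertices.Count;
        vertices.Add(new VertexPositionTexture(origin,                        tileUV));
        vertices.Add(new VertexPositionTexture(origin + new Vector3(1, 0, 0), tileUV + new Vector2(tileSize.X, 0)));
        vertices.Add(new VertexPositionTexture(origin + new Vector3(0, 0, 1), tileUV + new Vector2(0, tileSize.Y)));
        vertices.Add(new VertexPositionTexture(origin + new Vector3(1, 0, 1), tileUV + tileSize));

        indices.AddRange(new[] { baseIndex, baseIndex + 1, baseIndex + 3,
                                 baseIndex, baseIndex + 3, baseIndex + 2 });
    }
}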

This would all be trivial if it were possible to attach UV coordinates to the index data instead of the vertex data itself, but as far as I know that's not possible.

The point of this would then be to make a 3-dimensional tilemap / heightmap / model, just like any 2D map where you could place tiles at a location with arbitrary texture coordinates or even different textures, except the entire thing would be in 3D model form. Though edge artifacts can be difficult to deal with when the vertices are not truly shared.

I had been meaning to write a version of that mesh class that will do that for tilemapping, but I just never had the time or need as of yet.

I linked to it as he might alter it to do what he wants with a bit of effort.


You could use a custom shader and a second texture to hold the UV data.
Here's how I think this could work:

One texture would be the texture atlas that contains the tiles (‘TileAtlas’).
The second will hold the TileMap.
You render a single quad with UVs from (0,0) to (1,1).
In the shader you first sample the TileMap (PointClamp); the four components would be a translation (.rg) + scale (.ba).
You take the UV modulo TileMap.Size and apply the translation + scale to get the final UV for the TileAtlas.

If all tiles in the atlas are the same size, then you can hardcode the scale factor in the shader and use the remaining 2 components (.ba) for a second tile layer.
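Written out on the CPU for clarity, the per-pixel math would be roughly this (my own sketch of the idea above — the names and the .rg/.ba layout are assumptions):

using System;
using Microsoft.Xna.Framework;

static class TileLookupSketch
{
    // quadUV      : the 0..1 UV of the single big quad at this pixel
    // mapSize     : the TileMap dimensions in tiles (e.g. 256 x 256)
    // tileMapTexel: the PointClamp sample from the TileMap (.rg = translation, .ba = scale)
    public static Vector2 AtlasUV(Vector2 quadUV, Vector2 mapSize, Vector4 tileMapTexel)
    {
        // Scale up so one whole unit covers one tile; the fractional part is the UV inside that tile.
        Vector2 scaled = quadUV * mapSize;
        Vector2 innerUV = new Vector2(scaled.X - (float)Math.Floor(scaled.X),
                                      scaled.Y - (float)Math.Floor(scaled.Y));

        // Translation + scale from the TileMap map the inner UV into the right TileAtlas cell.
        return new Vector2(tileMapTexel.X, tileMapTexel.Y)
             + innerUV * new Vector2(tileMapTexel.Z, tileMapTexel.W);
    }
}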

Not sure I fully follow.
However, that was just a basic conceptual example; the real case is a bit worse.
Not only can the UVs of points 2 and 3 represent the right two vertices of quad 1 and conversely the left two vertices of quad 2,
but when you extend this to a center quad in a grid mesh, its 4 vertices each connect to 4 quads, and a quad can share its vertices with 8 other quads, each with its own unique UV requirements.

Ok, as I understand it… I have to make a large texture out of smaller textures (tiles). As I am a noob at GPU computing I will write it on the CPU for now. Also I will try to read examples :slight_smile: and come up with solutions :slight_smile: ty for the help. Also, if you have other ideas or solutions, tell me.

The idea is that the GPU can split the tiles for you; you don't have to generate all the tiles as vertices.
If you have a world of 256x256 tiles for example, you draw them as a single quad.
For any pixel the UV will take values from 0 to 1. Multiply that by 256 and the integer part is your tile xy, and the remainder is the UV inside that tile.
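And if it helps, the TileMap lookup texture itself could be filled on the CPU something like this (my assumption about the encoding: .rg holds the atlas translation and .ba the scale, quantised to 8 bits by SurfaceFormat.Color — a higher-precision format would avoid the rounding):

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

static class TileMapTextureSketch
{
    // tileCells[i] is the atlas cell (in tiles) used by map cell i; atlasTilesPerRow sizes the cells.
    public static Texture2D Create(GraphicsDevice device, int mapSize, Point[] tileCells, int atlasTilesPerRow)
    {
        float cell = 1f / atlasTilesPerRow;          // size of one atlas cell in UV space
        var data = new Color[mapSize * mapSize];
        for (int i = 0; i < data.Length; i++)
        {
            Vector2 translation = new Vector2(tileCells[i].X, tileCells[i].Y) * cell;
            data[i] = new Color(translation.X, translation.Y, cell, cell);   // rg = translation, ba = scale
        }

        var texture = new Texture2D(device, mapSize, mapSize, false, SurfaceFormat.Color);
        texture.SetData(data);
        return texture;
    }
}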

Not sure if we're still adding to this list, but I wanted to add something that's been the biggest frustration (at least for me) when working with MonoGame… i.e. support for 3D models.

Maybe it's just me, but I've had nothing but problems trying to load and get models working in MonoGame. Most of them end up having some weird issue (e.g. a lot of the FBXs throw some error about missing or unknown references, etc.), or they just don't display correctly.

And those are just static models… animated models are a completely different frustration.

I would love to see an improvement to the pipeline tool (or perhaps even a separate tool) that allows you to load different model formats (fbx, x, dae etc), preview them in a UI, adjust scale etc… and view any embedded animations.

There should also be better support at the framework level for model animations (not sure if that’s changed, but when I last looked at it, you had to dig around and try to get the skinned model sample working, which was extremely difficult).

This would let you validate that the models actually work with MonoGame before trying to get them working in a game.

To me, this would make MonoGame significantly more usable and friendly to newcomers and new developers, and would definitely have made my life easier…

Yep, better default model loading; I'm going through hell right now trying to get a textured Blender model to import correctly.

For me…

  • Low level shader access

  • Graphics driver identification
    So you can tailor the shaders you use to the graphics card, for example being able to use MRS when available

  • Content FromFile methods

all would be nice

But the main one I would like is a really nice built-in profiling build.

It's the one thing I really miss from the Guerrilla engine: really being able to work out what is happening for real rather than trying to figure it out from renderdoc or (if you can ever get it to work) pixwin.


How about using dxc and spirv-cross instead of SharpDX and MojoShader?

dxc and spirv-cross are not dependent on Windows, so HLSL can now be compiled on macOS and Linux.

In addition to GLSL for OpenGL ES 1.0, SPIR-V for Vulkan and MSL for Metal can also be output.

https://translate.google.co.jp/translate?hl=ja&sl=auto&tl=en&u=https%3A%2F%2Fmonobook.org%2Fwiki%2FMacでDirectX_Shader_Compilerをビルドする
https://translate.google.co.jp/translate?hl=ja&sl=auto&tl=en&u=https%3A%2F%2Fmonobook.org%2Fwiki%2FSPIR-VからGLSLを生成する

Yes, this is the plan.


Unsure if already mentioned, but ARM64 support could never go amiss…