Draw a square, not a quad.

This is probably a stupid question, but does anyone know of a way to instruct MonoGame/XNA to draw polygons rather than triangles?

Right now I can only figure out how to draw 2 triangles to make a square, which takes 4 vertices and 6 indices (a quad).
I would like to simply have 4 vertices and 4 indices (square).
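For reference, the 2-triangle version I have now looks roughly like this (a minimal sketch using MonoGame’s built-in types, assuming a BasicEffect pass has already been applied):

// using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics;
var vertices = new VertexPositionColor[]
{
    new VertexPositionColor(new Vector3(0, 1, 0), Color.White), // 0: top-left
    new VertexPositionColor(new Vector3(1, 1, 0), Color.White), // 1: top-right
    new VertexPositionColor(new Vector3(1, 0, 0), Color.White), // 2: bottom-right
    new VertexPositionColor(new Vector3(0, 0, 0), Color.White), // 3: bottom-left
};
short[] indices = { 0, 1, 2, 0, 2, 3 }; // 6 indices = two triangles sharing the 4 verts

GraphicsDevice.DrawUserIndexedPrimitives(
    PrimitiveType.TriangleList,
    vertices, 0, vertices.Length,
    indices, 0, 2); // 2 primitives, wound clockwise so the default cull mode keeps them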

This probably does not seem like a big deal to most people, but if I could use a square with 4 info points instead of a quad with 6, it adds up: substantially more objects on screen, or better performance with the same amount.

I know you can do this in other frameworks; for example, I was drawing hexagons as a single polygon, not as multiple triangles.

There is a workaround, which is to just export an .fbx square with 4 edges rather than 5, but I would like to be able to use this with procedural generation and in-game model editing.

You realize the GPU operates only with triangles?

No, the GPU operates with vert and frag info, and the vert info can be any polygon.

EDIT:
Even the old consoles used to brag about how many “polygons” they could get on screen, not how many “triangles” (the GameCube was something like 14 million).

Oh wow. You have absolutely zero idea how computer graphics work. It’s all triangles, boy. Even this vertex info which can be “any polygon” is essentially a way of describing a bunch of triangles.

I’m really not trying to argue with you, but you don’t seem to know how it works. It is not all triangles, and it never was; they have to be polygonal shapes, though. You can’t do circles or anything like that, but the graphics card will take my square or hexagon fine if I make it in a modeling program.

Quads are common and usually come up as the default when you are modeling, because:
A. Triangles will give you the highest level of detail on curved surfaces, and
B. Sometimes when animating with other polygon types, the animation processor wigs out and you get needle-like things flying out of your model when it bends a certain way.
But don’t worry, these are not 3D animated, they just have animated textures.

Seriously, look it up. It’s all triangles and, in some cases, lines. A complex polygon can be expressed as a bunch of triangles, and making everything a triangle reduces hardware and software complexity. Some programs may abstract that away and show you stuff as quads or whatever, but when it comes down to the low level, it’s all triangles. Even this webpage is rendered using triangles.

Why are you arguing? You need to go look it up, because it is POLYGONS in all cases (triangles being one of them), other than lines.
Yes, any polygon can be broken down into triangles, a hexagon for example. I bet you would think it breaks down into 6 triangles, which you could do, but as far as triangle efficiency goes, 4 is the minimum.
[Image: hexagon triangulation]
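To put numbers on it, here is the index data for that (a sketch, assuming the rim vertices are numbered 0 to 5 going around the hexagon):

// 6 rim vertices, only 4 triangles (n - 2), fanned out from vertex 0:
short[] hexIndices =
{
    0, 1, 2,
    0, 2, 3,
    0, 3, 4,
    0, 4, 5,
};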
Your illusion that everything is triangles makes the data much more complex, as there is just a bunch more data and you cannot render as much. If you are making a shader, for example, you only need the points, then the texture coords for the surface. That does not need to be a triangle, or multiple triangles.
For example, you could use (0,0,0), (40,0,0), (40,40,0), (0,40,0) as 3D coordinates, then just supply the 4 corresponding tex-coords (0,0), (1,0), (1,1), (0,1), and the graphics card will take that info and draw out the pixels, in either a clockwise or counter-clockwise manner depending on the driver. You could even skip the texture, use vertex shading, and do the same with blended colors.
This did give me an idea, though: just write my own shader and provide my custom data, and that will work, so thank you for that.

And by the way, this web page is rendered using pixels FFS, which usually come from square textures, which are polygons.

I am going to find it really difficult to take any of your advice remotely seriously in the future, as you are stating almost the exact opposite of how things actually are.

Ever heard of index buffers, or are they not a thing in the land of imagination?

But sure, if you don’t want to look it up, I will for you. Here’s an OpenGL tutorial which explains how to draw a quad. Surprise: it is done with two triangles. https://open.gl/drawing

Hi,

I think there’s a bit of confusion here. Let’s clear some stuff up.

  1. GPUs will indeed decompose polygons to triangles. You cannot ‘draw polygons’ the way you are thinking of. You can export a model as an .fbx, load it, and draw it in MonoGame, but the GPU will decompose it to triangles, as that is how they are designed. You can think of the mesh as polygons, you can model the mesh as polygons, or n-gons, or whatever surface you’d like (NURBS, if you want), but when you send that mesh to the GPU it gets triangulated, meaning the n-gons and polygons are reduced to triangles, much like the image you posted showing the two different triangle counts. It’s important you accept that this is how GPUs work, or people will not take you seriously.

  2. You can choose to store your mesh data in many formats, one of those formats being quads that are composed of only 4 vertices. This makes the quad data much smaller and easier/faster to work with and store, but the quad will still get decomposed to triangles on the GPU. I would suggest storing quads as only 4 verts when you need to alter their states/values, as it is simply less work to do than storing 6 verts as two triangles (see the sketch after this list).

  3. I don’t think it’s worth getting frustrated over these details, have a good day. :+1:
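As a rough sketch of point 2 (the Quad struct and AppendQuad helper are hypothetical names; the rest are standard MonoGame types):

// using System.Collections.Generic; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics;

// CPU-side storage: a quad is just its 4 corners.
struct Quad
{
    public Vector3 A, B, C, D; // corners in winding order
}

// Expand to GPU data only when building buffers:
// each quad contributes 4 vertices and 6 indices (two triangles).
static void AppendQuad(Quad q, List<VertexPositionColor> verts, List<short> indices)
{
    short i = (short)verts.Count;
    verts.Add(new VertexPositionColor(q.A, Color.White));
    verts.Add(new VertexPositionColor(q.B, Color.White));
    verts.Add(new VertexPositionColor(q.C, Color.White));
    verts.Add(new VertexPositionColor(q.D, Color.White));
    indices.AddRange(new short[] { i, (short)(i + 1), (short)(i + 2), i, (short)(i + 2), (short)(i + 3) });
}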

If you used something like that in another API, it was just a method that hid the vertex/index setup into triangles, the same as SpriteBatch does.

You can use a triangle strip, however, to cut down the data needed for simple geometry, but that can be problematic for complex meshes.
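Something like this, for example (a sketch with MonoGame’s DrawUserPrimitives; assumes an effect pass is already applied):

// A quad as a 4-vertex triangle strip: no index buffer at all.
var strip = new VertexPositionColor[]
{
    new VertexPositionColor(new Vector3(0, 1, 0), Color.White), // 0: top-left
    new VertexPositionColor(new Vector3(1, 1, 0), Color.White), // 1: top-right
    new VertexPositionColor(new Vector3(0, 0, 0), Color.White), // 2: bottom-left
    new VertexPositionColor(new Vector3(1, 0, 0), Color.White), // 3: bottom-right
};

// 4 vertices still produce 2 triangles: (0,1,2) and (1,2,3).
GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleStrip, strip, 0, 2);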

You could do what you want in a pixel shader, but the texture resulting from the pixel shader (a render target) will still have to be rendered onto a quad.
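Roughly like this (a sketch; assumes the usual Game setup with a GraphicsDevice and a SpriteBatch):

// Render the pixel-shader output into an offscreen texture...
var target = new RenderTarget2D(GraphicsDevice, 256, 256);
GraphicsDevice.SetRenderTarget(target);
GraphicsDevice.Clear(Color.Transparent);
// ... draw here with your custom effect ...
GraphicsDevice.SetRenderTarget(null);

// ...then that texture still gets drawn onto a quad (two triangles) by SpriteBatch.
spriteBatch.Begin();
spriteBatch.Draw(target, Vector2.Zero, Color.White);
spriteBatch.End();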

That’s one good way to cut back on data for something like terrain, if that’s what the original poster was looking for.
The 6 indices are still pretty optimal though. They simply tell the GPU which verts to use, whereas if you sent in 6 verts for 2 triangles you’d end up calculating 2 of the 3D coordinates twice (and vertices are more data than indices, since positions alone are 4 floats each: x, y, z, [w]). For example, assuming a VertexPositionTexture layout (20 bytes) and 16-bit indices, a quad is 4 × 20 + 6 × 2 = 92 bytes, versus 6 × 20 = 120 bytes as raw triangles.
I think quads are preferred in 3D modelling for subdiv and deformation benefits (best deformation if the majority of the triangles’ inner edges are aligned, which they usually are; otherwise some tweaking to flip edge direction may be a good idea).
Technically, even each triangle (in the GPU) is broken down into 2 triangles (a top half and a bottom half) so that the split is a horizontal line; this is to set up the horizontal scanline interpolation/texturing. I had to do this once when programming a rasterizer that was made to do the equivalent in software.
I became a bit hesitant about using strips for terrain, since someone said it doesn’t make a huge difference (true?), and I began thinking about quadtrees and how that might get messy.
Any thoughts on that?

I am looking at a ROAM implementation for terrain; I have a geo-clipmap sample in my samples too. Hoping, if I have time, to get my ROAM terrain code into my samples on GitHub as well.

Ya, with a triangle strip a quad is just 4 vertices and the hex is 6, but it’s still all triangles.

TriangleFan

Every other one, if I remember right, gets reversed winding.

I never really use them, but I mean, if I had to and was able, I’m not opposed to using them.
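Same goes for the hex as a strip, for what it’s worth (ordering only, a sketch for a convex hexagon with rim vertices 0..5):

// Alternate vertices from both ends of the rim:
short[] hexStripOrder = { 0, 1, 5, 2, 4, 3 };
// Yields triangles (0,1,5), (1,5,2), (5,2,4), (2,4,3); the reversed winding on
// every second triangle of a strip is what the hardware compensates for.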

MonoGame uses either OpenGL or DirectX, right?

OpenGL purportedly supports the use of polygons as primitives:

https://www.khronos.org/registry/OpenGL-Refpages/gl2.1/xhtml/glDrawElements.xml

glDrawElements lets you use these modes:

GL_POINTS
GL_LINE_STRIP
GL_LINE_LOOP
GL_LINES
GL_TRIANGLE_STRIP
GL_TRIANGLE_FAN
GL_TRIANGLES
GL_QUAD_STRIP
GL_QUADS
GL_POLYGON

However DirectX does not:

typedef enum D3DPRIMITIVETYPE {
    D3DPT_POINTLIST     = 1,
    D3DPT_LINELIST      = 2,
    D3DPT_LINESTRIP     = 3,
    D3DPT_TRIANGLELIST  = 4,
    D3DPT_TRIANGLESTRIP = 5,
    D3DPT_TRIANGLEFAN   = 6,
    D3DPT_FORCE_DWORD   = 0x7fffffff
} D3DPRIMITIVETYPE, *LPD3DPRIMITIVETYPE;

So the questions are,

1 - can you select a rendition of MonoGame which uses OpenGL on all supported platforms?
2 - does the version of OpenGL used by that rendition of MonoGame still support polygons?
3 - can you arrange to make the call to glDrawElements yourself, or specify the arguments to it?

If you can do those things, then you can specify polygons when doing rendering in your MonoGame program.

So often discussions around posted questions devolve into religious arguments about whether or not doing something someone wants to do is desirable or advisable. The justification for this is invariably something along the lines of “don’t subject your fellow programmers or your users/customers to this bad idea!” To me this is tantamount to advocating a kind of communal slavery to “the common good”.

Freedom and liberty say that if you can find one person in the world who’s willing to fund your research, you can do whatever the hell you want, as long as you don’t do any substantial, immediate, direct physical harm to anyone’s person or property. This is why free countries innovate more and are more wealthy than communist ones.

I think it’s fine to probe about someone’s intent, and try to help them achieve that intent in a better way, but when it comes down to actually answering their question or not, don’t be a communist. Let people learn for themselves. Answer “how” questions even if You Think the “what” or the “why” is a bad idea. Sometimes when people learn for themselves, they learn things You Didn’t.

That is for GL 2.1, and I think they removed it in newer versions, but I’m not sure. You can check whether they still support it in newer versions.

Couldn’t help it and quickly looked it up. Found that they deprecated quadrilaterals and polygons.

https://www.khronos.org/opengl/wiki/History_of_OpenGL#Deprecation_Model

Yeah, it does look like OpenGL removed everything but points, lines and triangles in version 3. You can see that from docs.gl, too:

http://docs.gl/gl3/glDrawElements

I was aware that that had happened at some point, but I wasn’t sure exactly when. (Actually, now that I think of it, I do recall that it happened in 3.) That’s why I put the note in about the OpenGL version. But it seems like you can use OpenGL 2 if you want to with MonoGame DesktopGL? But since the same MonoGame version supports the use of OpenGL 3, I’d guess it doesn’t wrap OpenGL 2’s support for polys. So I’m left with the question: is there a way to make your own GL calls from a MonoGame program, without going so far around MonoGame that you can no longer use its other facilities?

But the poster may not want to use an older version of OpenGL; if not, then I’m afraid they’re probably SOL as far as specifying polys when rendering goes.

This is all a fine speech, but MonoGame is cross-platform; its calls are therefore cross-platform calls.
Even if this is supported in GL, as you say, it’s not in DirectX.
How are you going to implement quad drawing in a DirectX project?

Not to mention you are talking about syntactic sugar in GL anyway.
You have one GPU, and ultimately it does the rasterizing with triangles.

A common representation of digital 3D models is polygonal; before rasterization, individual polygons are broken down into triangles.

In the old days GPUs used Bresenham’s algorithm to rasterize a triangle.
Nowadays I’m pretty sure it’s based on barycentric rasterization.

While it is possible to construct an algorithm to rasterize a polygon with an arbitrary number of sides, it is less efficient than breaking the polygon down into triangles and rasterizing them individually.
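For anyone curious, the heart of a barycentric-style rasterizer is just an edge-function test per pixel, roughly like this (a simplified sketch: no fill rules, clipping, or perspective correction):

// using Microsoft.Xna.Framework; (Vector2 with X/Y fields)
static float Edge(Vector2 a, Vector2 b, Vector2 p)
{
    // Signed area of the parallelogram spanned by (b - a) and (p - a).
    return (b.X - a.X) * (p.Y - a.Y) - (b.Y - a.Y) * (p.X - a.X);
}

static bool Inside(Vector2 v0, Vector2 v1, Vector2 v2, Vector2 p)
{
    float w0 = Edge(v1, v2, p);
    float w1 = Edge(v2, v0, p);
    float w2 = Edge(v0, v1, p);
    // All non-negative means p is inside (assumes consistent winding).
    return w0 >= 0 && w1 >= 0 && w2 >= 0;
}

// Loop p over the triangle's bounding box; dividing w0, w1, w2 by their sum
// (twice the triangle's area) gives the barycentric weights used to
// interpolate depth, UVs, colours, etc.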

In the domain of graphics rasterization, the performance and speed of the algorithm is of supreme importance; it is the innermost loop of everything you see, every fraction of every second.
