How efficient is spriteBatch.DrawPoint()?

I’ve reached a point where I actually want to draw individual pixels. It’s not much (probably around 40 per frame, though I plan on more in the future) and my code works fine for now, but it looks like quite a little monster.

I basically create a 1x1 pixel texture and draw it.

pixel = new Texture2D(graphicsDevice, 1, 1);
pixel.SetData(new[] { Color.White });

public void pset(int x, int y, Color color)
{
	spriteBatch.Draw(pixel, new Vector2(x, y), color);
}

Would spriteBatch.DrawPoint() be more efficient for placing a high number of pixels? It doesn’t have to be efficient in absolute terms, just more efficient than what I have. I tried looking into the source code, but I’m not smart enough to understand most of what’s going on there.

I know the correct way to do this would be to create a new texture containing all the pixels and draw it only once, but that’s not exactly convenient for what I’m trying to achieve.

Edit: Made the code even more blatantly obvious.

What you’re doing is how I tend to draw basic primitives in MonoGame. I prefer it since I tend to focus on 2D games drawn with sprites, so that method tends to integrate well with the rendering process I generally use. MonoGame.Extended used to use this method as well (they had pulled in a library a friend and I had developed many years ago) but they’ve since switched to using the GraphicsDevice.DrawUserPrimitives method, which doesn’t go through a SpriteBatch.

For some applications, such as multiple lines or circles made from line segments, I believe this is more efficient since it uses graphics API calls that are more specialized for that. However for a single line, filled rectangle, or point… I’m not sure.
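
For anyone unfamiliar with that path, here is a minimal sketch of drawing a line directly with GraphicsDevice.DrawUserPrimitives, bypassing SpriteBatch entirely. This is my own illustration, not MG.E’s actual code — the BasicEffect setup and coordinates are just assumptions for the example:

```csharp
// Sketch: drawing one line without SpriteBatch, via GraphicsDevice.DrawUserPrimitives.
// The orthographic projection maps screen-pixel coordinates to clip space.
var effect = new BasicEffect(GraphicsDevice)
{
    VertexColorEnabled = true,
    Projection = Matrix.CreateOrthographicOffCenter(
        0, GraphicsDevice.Viewport.Width,
        GraphicsDevice.Viewport.Height, 0, 0, 1)
};

var vertices = new[]
{
    new VertexPositionColor(new Vector3(10, 10, 0), Color.Red),
    new VertexPositionColor(new Vector3(200, 150, 0), Color.Red),
};

foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    // One line = one primitive in a LineList.
    GraphicsDevice.DrawUserPrimitives(PrimitiveType.LineList, vertices, 0, 1);
}
```

The upside is there’s no texture sampling at all; the downside is you manage the effect and batching yourself.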

I believe a SpriteBatch ultimately renders a textured quad for this, so it’s not really doing a lot. Though maybe that’s the answer… because a quad is still two triangles, with texture mapping to boot. So with that in mind, I’d imagine the way that MG.E is doing it now would be more efficient.

I suppose the best way to check is to just run a test. Draw 100k points using each method and capture the time it takes with System.Diagnostics.Stopwatch. I would be interested in your results :slight_smile:
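
A minimal timing harness along those lines might look like this (the draw call in the loop is a placeholder for whichever method you’re testing):

```csharp
// Sketch: time the CPU-side cost of submitting 100k points each frame.
// Note this only measures submission; the GPU does its actual work after End()/Present.
private readonly Stopwatch _timer = new Stopwatch();

protected override void Draw(GameTime gameTime)
{
    _timer.Restart();
    _spriteBatch.Begin();

    for (int i = 0; i < 100_000; i++)
        _spriteBatch.DrawPoint(i % 1000, i / 1000, Color.Blue); // or your pset(...)

    _spriteBatch.End();
    _timer.Stop();
    Window.Title = _timer.Elapsed.TotalMilliseconds.ToString("F2") + " ms";

    base.Draw(gameTime);
}
```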


Thank you for the detailed answer. I will run the tests later today and post results.

Samples: 1 000 000

My method takes 00:00:00.367 in the first frame and around 00:00:00.60 in subsequent frames.
MonoGame.Extended’s DrawPoint starts at 00:00:00.434; subsequent frames are around 00:00:00.135.

Time formatted with:

string elapsedTime = $"{ts.Hours:00}:{ts.Minutes:00}:{ts.Seconds:00}.{ts.Milliseconds:00}";
Both methods take around 230 MB of RAM (for the full app) and 20% of my CPU. (Intel Core i3-9100F CPU @ 3.60GHz × 4)

I used OpenGL on Linux with the newest Nvidia proprietary drivers.

Here are all the notable parts of the source code (it’s shit of course, I wrote it in 5 minutes) if you/anyone wants to try it out (my pset is in the original post):

private const int canvasWidth = 1000;
private const int canvasHeight = 1000;
private readonly Stopwatch stopWatch = new Stopwatch(); // System.Diagnostics

protected override void Draw(GameTime gameTime)
{
    stopWatch.Restart();
    _spriteBatch.Begin();

    for (int x = 0; x < canvasWidth; x++)
        for (int y = 0; y < canvasHeight; y++)
            // place your pixel draw method here
            _spriteBatch.DrawPoint(x, y, Color.Blue);

    _spriteBatch.End();

    TimeSpan ts = stopWatch.Elapsed;
    string elapsedTime = $"{ts.Hours:00}:{ts.Minutes:00}:{ts.Seconds:00}.{ts.Milliseconds:00}";
}

Guess it’s time to switch to Extended :slight_smile:

If you’re saying it’s time to switch to Extended, I’m wondering if you swapped your values when reporting? The way this reads is that your method takes significantly less time than MG.E’s :slight_smile:

Also, I’m not sure it will have an impact on your overall test, but it’s worth noting that with the MG.E method you don’t need to involve SpriteBatch at all. Since your timing is inside the Begin/End, I’m not sure it matters, but I just wanted to mention it.

It’s probably worth considering what kind of use case you’re needing here before you blanket switch. MG.E may be faster, but it depends on what you’re ultimately doing. If you’re making a 2D game, for example, you might find it harder to layer in MG.E’s 2D drawing with your own layers (if you’re using SpriteBatch instead of a manually drawn textured quad). You know your own application of course, I just wanted to throw that out there.

Finally, something else! It looks like MG.E is using the same circle method the old Primitives2D approach used to generate circles…

public void DrawCircle(Vector2 center, float radius, Color color)
{
	if (!_primitiveBatch.IsReady())
		throw new InvalidOperationException("BeginCustomDraw must be called before drawing anything.");

	const double increment = Math.PI * 2.0 / CircleSegments;
	double theta = 0.0;

	for (int i = 0; i < CircleSegments; i++)
	{
		Vector2 v1 = center + radius * new Vector2((float)Math.Cos(theta), (float)Math.Sin(theta));
		Vector2 v2 = center + radius * new Vector2((float)Math.Cos(theta + increment), (float)Math.Sin(theta + increment));

		_primitiveBatch.AddVertex(v1, color, PrimitiveType.LineList);
		_primitiveBatch.AddVertex(v2, color, PrimitiveType.LineList);

		theta += increment;
	}
}
This is absolutely fine; however, I happened to stumble across a more interesting and performant way to generate circles the other day.

The short of it is, you can use a pixel shader to generate your circle on a quad and get a perfect circle instead of a segmented circle. Not only will this have a nice appearance, I think it might be faster for certain circle segment counts and you also get more options for colouring/texturing. The video is in shadertoy, but it should easily convert to HLSL for use with MonoGame.
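
As a rough illustration of the idea (this is my own minimal sketch, not the video’s code; the parameter names and the ps_3_0 target are assumptions), the pixel shader boils down to a distance check with a soft edge:

```hlsl
// Sketch of a distance-based circle in a MonoGame .fx file.
// Center/Radius are in the quad's pixel coordinates; all names are illustrative.
float2 Center;
float Radius;
float4 FillColor;

float4 CirclePS(float2 uv : TEXCOORD0) : COLOR0
{
    float dist = length(uv - Center);
    // smoothstep gives a ~1 pixel anti-aliased edge instead of a hard cut-off
    float alpha = 1.0 - smoothstep(Radius - 1.0, Radius + 1.0, dist);
    return FillColor * alpha;
}

technique Circle
{
    pass P0
    {
        PixelShader = compile ps_3_0 CirclePS();
    }
}
```

Because the circle is evaluated per pixel, it stays perfectly round at any zoom level, with no segment count to tune.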

Whew, ok, information dump complete! Thank you for posting your results, that’s really informative! :slight_smile:

In the first frame my method takes less time (00.367 < 00.434) but in the next frames it takes more (00.60 > 00.135). And I suppose having a performance boost while initializing isn’t worth the significant drop every frame.

I wonder if you want to measure the actual end-to-end rendering time (i.e. Present/FlushWait).
Most of the interesting work happens after the CPU submits the work to the GPU…

00.60 is, in fact, less than 00.135: the milliseconds field prints 60 ms as “60”, not 600 ms shortened. I wrongly assumed 600 was being truncated to 60, but it is not, meaning my method appears to be faster than Extended after all.

Using SpriteBatch to draw individual pixels is an extremely inefficient approach on many levels:

  1. Massive CPU overhead
  2. A massive amount of data pushed to the GPU that won’t be utilized
  3. Pointless use of a texture

You can either use a point list and draw millions and millions of points at very high performance (in your situation this is most likely the optimal solution; a PR is available on GitHub), or at least draw tiny quads using instancing (the current MG NuGet will do).
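
A very rough skeleton of the instanced-quad route might look like this — buffer and effect setup are omitted, and it assumes a custom effect that reads the per-instance position from the second vertex stream (all the variable names here are placeholders):

```csharp
// Sketch: draw many tiny quads in one call with hardware instancing.
// quadBuffer holds the 4 vertices of a unit quad, instanceBuffer holds one
// position per point, and effect is a custom shader that offsets the quad.
GraphicsDevice.SetVertexBuffers(
    new VertexBufferBinding(quadBuffer, 0, 0),
    new VertexBufferBinding(instanceBuffer, 0, 1)); // frequency 1 = advance per instance
GraphicsDevice.Indices = quadIndexBuffer;

effect.CurrentTechnique.Passes[0].Apply();
GraphicsDevice.DrawInstancedPrimitives(
    PrimitiveType.TriangleList,
    0,                          // base vertex
    0,                          // start index
    2,                          // two triangles per quad
    pointCount);                // one instance per pixel/point
```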

And yeah, I know this is a necro, but since the thread was necroed by someone else, I’d like to clear things up for others who might come across it: this might not be the ideal way.


It was necro’d by him to correct the timings he posted above.

For what you said, that’s what I would expect to be true, but his timings seem to show the SpriteBatch approach performing better. This seems odd.

@Cuber01 - Any chance you were running this test in a Debug build? Also, what @Mindful_plays says might be interesting. Really, you could probably just measure FPS (make sure you uncap it) and use that as a benchmark as well.

There is a high chance I was indeed running a Debug build (though I am not sure at this point). If anything, this should affect both methods equally, so I hope it’s irrelevant.

And yes, drawing individual pixels to form shapes will never be truly efficient, but for my case it was the optimal solution.