BLOG
1M vs 500K Triangles: Does AI Mesh Density Actually Matter for 3D Printing?
A direct comparison of the two most common AI text-to-3D output densities. What gets resolved at 1M that does not at 500K, when slicers downsample anyway, and the file-size tradeoff most makers underestimate.
PUBLISHED 2026-05-02 · 7 MIN READ
What density actually controls
A triangle mesh approximates a smooth surface with flat polygons. The more triangles, the closer the approximation to the underlying form. For a sphere, 200 triangles look faceted; 2,000 look smooth at arm's length; 20,000 look smooth at any distance. The threshold where additional triangles stop adding visible detail depends on the smallest feature you care about and the scale at which you view it.
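The "smoothness threshold" can be made concrete with a back-of-envelope calculation. The deviation between a sphere and its triangulated approximation is roughly the sagitta of each triangle edge's chord. A minimal sketch, assuming roughly equilateral, equal-sized triangles and an illustrative 50mm-radius sphere (both assumptions, not measurements from any particular mesh):

```python
import math

def facet_deviation(radius_mm, triangle_count):
    """Approximate max deviation (sagitta) between a sphere and a mesh
    of `triangle_count` roughly equilateral triangles covering it."""
    area = 4 * math.pi * radius_mm ** 2               # sphere surface area
    # equilateral triangle: area = (sqrt(3) / 4) * edge^2
    edge = math.sqrt(4 * area / (math.sqrt(3) * triangle_count))
    # sagitta of a chord of length `edge` on a circle of radius `radius_mm`
    return edge ** 2 / (8 * radius_mm)

for n in (200, 2_000, 20_000):
    print(f"{n:>6} triangles -> ~{facet_deviation(50, n):.3f} mm deviation")
```

At 200 triangles the deviation is near 0.9mm, plainly visible; at 2,000 it drops to about 0.09mm, below a typical 0.12mm layer height; at 20,000 it is under 0.01mm, invisible at any viewing distance.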
For 3D printing, the relevant question is different: at what density do additional triangles stop producing visible improvement on the printer's output? The answer depends on three things — the printer's minimum feature size, the model's print scale, and how much fine detail the model actually contains.
The 500K and 1M defaults
The two most common text-to-3D output densities in 2026 are 500K triangles (Meshy default, Tripo default for the standard tier, most generic image-to-3D tools) and 1M triangles (Automatic3D default, Tripo Pro tier, Hyper3D Rodin standard). Both are reasonable choices depending on the destination workflow. Both come from the same underlying geometry — the generators just decimate to different targets at the export stage.
A 500K mesh fits in a 15-25MB STL file. A 1M mesh fits in a 30-50MB STL file. Both are within the size range every major slicer handles without complaint. Storage and transfer cost differences are negligible on modern hardware.
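For binary STL specifically, the size is exactly linear in triangle count: an 80-byte header, a 4-byte triangle count, and a fixed 50 bytes per triangle (twelve 4-byte floats plus a 2-byte attribute field). A quick sketch of the arithmetic — the lower ends of the ranges above reflect compressed formats like 3MF:

```python
def binary_stl_size_mb(triangle_count):
    """Binary STL: 80-byte header + 4-byte count + 50 bytes per triangle."""
    return (84 + 50 * triangle_count) / 1_000_000

print(binary_stl_size_mb(500_000))    # ~25.0 MB
print(binary_stl_size_mb(1_000_000))  # ~50.0 MB
```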
Where the difference shows up
On large smooth surfaces — the body of a dragon, the dome of a helmet, the cylindrical body of a vase — there is no observable difference between 500K and 1M. Both produce a printed surface limited by the printer's layer height (typically 0.12-0.2mm), not by triangle density.
On small features, the gap is real, and it widens as the feature gets smaller. A 1mm-wide sword crossguard on a 75mm-tall miniature: 500K aliases the edges into stair-stepped facets that look intentional only from a distance; 1M renders crisp parallel edges. A fabric fold that is 0.5mm deep at print scale: 500K smooths it into nothing; 1M preserves the indentation.
The general rule is that triangles must be smaller than the smallest feature you want to resolve. For a mesh that fits in a 75mm cube, 500K triangles average roughly 0.4mm edge length and 1M triangles average roughly 0.28mm. A feature smaller than the average triangle gets aliased; a feature larger than two or three triangles gets resolved cleanly. The 0.4mm vs 0.28mm threshold is exactly the range where small mechanical and decorative details live.
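Those edge-length figures can be reproduced with a back-of-envelope estimate. A minimal sketch, assuming roughly equilateral triangles spread over a surface area comparable to the bounding cube's (~34,000 mm² for a 75mm cube — an assumption; real organic meshes vary):

```python
import math

def avg_edge_mm(surface_area_mm2, triangle_count):
    """Average edge length assuming roughly equilateral triangles:
    area per triangle = (sqrt(3) / 4) * edge^2."""
    per_tri = surface_area_mm2 / triangle_count
    return math.sqrt(4 * per_tri / math.sqrt(3))

area = 6 * 75 ** 2  # surface area of a 75 mm cube: 33,750 mm^2
print(f"500K: ~{avg_edge_mm(area, 500_000):.2f} mm")    # ~0.39 mm
print(f"  1M: ~{avg_edge_mm(area, 1_000_000):.2f} mm")  # ~0.28 mm
```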
The print scale interaction
Density and scale interact directly. The same mesh printed at 75mm vs printed at 150mm has features twice as large in the second case, and the slicer's job becomes proportionally easier. So a 500K mesh printed at 150mm tall often looks just as good as a 1M mesh printed at 75mm — because the larger scale gives the printer more room to resolve each feature regardless of the input density.
This means there are two reasonable strategies for printing AI-generated models. Strategy A: generate at 1M, print at any scale, and details survive. Strategy B: generate at 500K and only print large (100mm+). Strategy A is more flexible because you do not have to commit to a scale at generation time. Strategy B works when you know in advance you will only ever print large.
The practical implication: if you are printing miniatures (28mm-32mm tabletop scale, common for D&D and similar), 1M is meaningfully better even on FDM, and dramatically better on resin where the printer can resolve sub-100µm features. If you are printing display pieces at 150mm+, the gap narrows enough that either density works.
What happens inside the slicer
A common assumption is that slicers decimate input meshes to a manageable density before slicing. This is mostly false in modern consumer slicers. Bambu Studio, PrusaSlicer 2.6+, and OrcaSlicer all pass the full input geometry through to the slicing kernel. Cura has an optional “Mesh Fixes” stage with a decimate option, but by default it triggers only at 5M+ triangles.
What actually happens: the slicer computes the intersection of each Z-plane with the mesh and traces the contour at full resolution. For a 1M-triangle mesh on a 0.16mm-layer print of a 75mm-tall model (~470 layers), each layer's contour is computed against ~2,000 triangles on average. That is a fast operation on modern hardware, taking milliseconds per layer. The slicing time difference between 500K and 1M is usually under 5 seconds for a typical model.
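The per-layer figures follow from simple arithmetic. A minimal sketch of that estimate — it treats each triangle as contributing to a single layer, so triangles taller than one layer (which get tested against several planes) push the real per-layer count somewhat higher:

```python
height_mm = 75      # model height
layer_mm = 0.16     # layer height
tris = 1_000_000    # input mesh density

n_layers = round(height_mm / layer_mm)  # ~470 layers
per_layer = tris / n_layers             # ~2,100 triangles per layer
print(n_layers, round(per_layer))
```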
So the input density survives all the way to the printer's g-code. There is no “invisible downsampling” in the modern slicer pipeline that would equalize 500K and 1M outputs.
When 500K is genuinely enough
For models with smooth organic surfaces and no fine detail — large vases, simple animal silhouettes, rounded characters without facial features, low-poly stylized designs — 500K is indistinguishable from 1M in print. For models intended to be painted (where surface texture is added with primer and paint, not sculpted), 500K is fine because the geometry is just the substrate.
For tabletop terrain pieces (rocks, ruined walls, basic structures) at 1:60-1:100 scale, 500K is enough because the printed details get further obscured by base coats and washes. Many budget miniature producers print from 500K-density sources, and the results look good after painting.
For digital-only workflows (game assets, AR models, web 3D), 500K is often the right target because rendering performance scales linearly with triangle count and most consumer hardware struggles with 1M+ assets in real-time pipelines.
When 1M earns its keep
For tabletop minis at 28-32mm with crisp armor plating, distinct facial features, and fine weapon detail, 1M is meaningfully better. For mechanical and engineered objects (gears, mechanical parts, jewelry, dental models), the small precision features benefit visibly. For photorealistic prints intended for display rather than painting (collectible busts, statues, replicas), 1M preserves surface detail that 500K smooths.
The other reason to prefer 1M: future-proofing. Printers are getting better. A 500K mesh printed today on a Bambu A1 looks identical to the same mesh on a 2030 printer with sub-50µm resolution, because the source data does not contain detail to resolve. A 1M mesh has reserve detail that the next generation of printers will actually be able to render.
The practical bottom line
For 3D printing as the primary use case, 1M is the better default. The file-size cost is small, slicing is unaffected, and the detail headroom makes the difference visible at miniature scale and on detailed models. For workflows where the same model also needs to ship into rendering or web pipelines, generate at 1M and decimate to 500K with Blender or MeshLab when needed — that path preserves both options without re-generation.
The choice between 500K and 1M is one of the more meaningful technical decisions an AI text-to-3D generator makes for you. It shows up in the printed output more reliably than most other generation settings, and it is the single biggest reason the same prompt prints differently across tools.
Questions
Will I see the difference between 500K and 1M on a 0.4mm FDM nozzle?
On large smooth surfaces, no — both resolve the same. On small features under 1mm (jewelry, fingernails, fabric folds, sword cross-guards, mechanical edges), yes. 1M holds detail at scale; 500K aliases or smooths it. The threshold depends on your print scale: at 75mm tall, 1M wins on details. At 200mm, both look identical because every feature is comfortably larger than the printer's minimum feature size.
Does my slicer just downsample to 500K anyway?
Some do, some do not. Bambu Studio and PrusaSlicer pass full-density meshes through to the slicing algorithm without decimation. Cura has a "Mesh Fixes" stage that can decimate, but it triggers only above ~5M triangles. So a 1M-triangle source survives slicing intact in all major slicers; simplification only becomes a factor at extreme densities (10M+), far above either default.
Why does Meshy ship 500K and Automatic3D ship 1M?
Meshy targets games, AR, and rendering pipelines first; 500K is a sweet spot for real-time rendering on consumer hardware. Automatic3D targets 3D printing, where the slicer is offline and 1M is comfortable for any modern slicer to handle. The choice reflects the dominant use case each tool was designed for, not a quality difference.
Is higher density always better?
Past about 1.5M triangles for a model that fits in your hand (under 100mm), no. Beyond that point you are storing geometry that the printer cannot resolve and the slicer just averages out. File sizes balloon (50MB → 200MB), slicing slows, and the printed result is identical. 1M is the practical ceiling for most consumer printing.
Can I downsample 1M to 500K to save space?
Yes, with Blender (Decimate modifier, ratio 0.5) or MeshLab (Filters → Remeshing → Quadric Edge Collapse Decimation). The result is usually visually indistinguishable from a native 500K source. So you can think of 1M as "a superset of 500K" — you have the option to go smaller, but a 500K source cannot be upscaled to 1M without re-generation.
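After decimating, you can sanity-check that the export actually hit the target count by reading the binary STL header, which declares the triangle count up front. A minimal stdlib sketch — it handles binary STL only (not ASCII STL), and the file path is a placeholder:

```python
import struct

def stl_triangle_count(path):
    """Read the declared triangle count from a binary STL file:
    80-byte header, then a little-endian uint32 triangle count."""
    with open(path, "rb") as f:
        f.seek(80)
        (count,) = struct.unpack("<I", f.read(4))
    return count

# e.g. stl_triangle_count("decimated.stl") should report roughly
# 500,000 after a ratio-0.5 decimate of a 1M-triangle source
```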
RELATED
- →Glossary: polygon count
How polygon count translates to print quality and file size.
- →Glossary: mesh decimation
How to reduce triangle count without losing visible detail.
- →Scaling AI 3D models for printing
When to scale during slicing vs. during prompting, and why mesh density matters.
- →Automatic3D vs Meshy
Direct comparison: text-to-3D for printing vs text-to-3D for rendering.