Text-to-3D vs Photogrammetry vs CAD: When Each One Wins
Three completely different paths to a printable 3D model, each with a clear sweet spot. A decision framework for picking the right one based on what you are making, what reference material you have, and how precise the result needs to be.
PUBLISHED 2026-05-02 · 10 MIN READ
Three paths, three sweet spots
Almost every printable 3D model comes from one of three sources. You imagined the object and described it (text-to-3D). You photographed an existing object and reconstructed it (photogrammetry). Or you specified dimensions and constraints in a parametric tool (CAD). Each approach has a clear sweet spot, a clear failure mode, and a workflow profile that fits some projects much better than others.
Choosing the right approach for the job is the difference between a satisfying afternoon and a frustrating evening. The decision tree is straightforward once you know the right questions to ask. The most common mistake is treating any one approach as the universal answer — text-to-3D for a precision mechanical part, CAD for an organic monster, photogrammetry for something that does not yet physically exist.
Text-to-3D: imagined objects, fast, no source needed
Text-to-3D is what you reach for when you want to fabricate something that exists only in your head. A dragon, a wizard, a fantasy weapon, a stylized vase, a custom miniature with no physical counterpart. Modern text-to-3D pipelines (Automatic3D, Meshy, Tripo, Hyper3D Rodin) take a description in plain English and return a triangle mesh in 1-5 minutes. The artistic intent comes through reasonably well for visually iconic objects and degrades for anything obscure or highly specific.
The strengths: speed, zero setup, and access to a huge artistic vocabulary baked into the underlying generative model. You do not need reference photos, modeling skills, or dimensional specifications. You just describe what you want.
The limitations: no dimensional accuracy guarantees, no parametric control, no editability after the fact (the output is a non-parametric mesh, so changing a dimension means regenerating). Topological quality varies by tool — pipelines targeted at printing produce watertight output, pipelines targeted at rendering often need cleanup. Mechanical parts, anything that has to fit a specific size, anything where tolerances matter — these are not where text-to-3D shines.
The right use cases: tabletop miniatures, display pieces, themed home decor, custom keychains, stylized gifts, conceptual art prints, fantasy and sci-fi assets, creature models, character figures. Anywhere the artistic intent matters more than precise dimensions.
Photogrammetry: replicating real objects
Photogrammetry takes 30-100 photos of a real object from different angles and reconstructs the 3D geometry. Modern free tools (Meshroom, RealityScan from Epic) handle the entire pipeline — feature detection, camera position estimation, dense point cloud, surface reconstruction, texture projection — with one button press. Quality depends mostly on lighting, surface texture, and photo coverage.
The strengths: faithful reproduction of an existing object, including subtle surface details that would be tedious to model by hand. Captures color and texture as well as geometry. Cost is essentially zero beyond a phone camera.
The limitations: requires a physical reference object you can photograph. Struggles with transparent, reflective, or featureless surfaces (glass, polished metal, single-color smooth plastic). The reconstructed mesh is usually noisy and non-manifold; cleanup in MeshLab or Blender is part of the workflow. Dimensional accuracy is good (sub-1%) with calibration but not engineering-grade.
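The calibration step mentioned above is simple arithmetic: measure one feature of the real object with calipers, find the same feature in the scan, and scale the whole mesh by the ratio. A minimal sketch, with hypothetical numbers and a plain vertex list standing in for a real mesh format:

```python
def calibrate_scale(measured_mm: float, scanned_units: float) -> float:
    """Scale factor that maps the scan's arbitrary units to millimeters."""
    return measured_mm / scanned_units

def apply_scale(vertices, scale):
    """Uniformly scale a list of (x, y, z) vertex tuples."""
    return [(x * scale, y * scale, z * scale) for x, y, z in vertices]

# Example: a feature measured at 50.0 mm with calipers spans 2.5 units
# in the reconstructed scan.
scale = calibrate_scale(50.0, 2.5)
mesh = [(0.0, 0.0, 0.0), (2.5, 0.0, 0.0)]
scaled = apply_scale(mesh, scale)  # second vertex lands at x = 50.0 mm
print(scaled)
```

In practice you would do the same operation with the scale tool in MeshLab or Blender rather than on raw vertex lists, but the math is identical.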
The right use cases: replacing broken parts you no longer have a CAD source for, replicating heirloom or sculptural objects, capturing a person's likeness, archeology and museum work, creating reference assets for digital workflows. Anywhere the answer to “what does this look like in 3D” is best answered by capturing rather than describing.
CAD: precision parts, mechanical, parametric
CAD (Fusion 360, Onshape, SolidWorks, FreeCAD, Tinkercad at the entry level) is the precision tool. You start with sketches, extrude or revolve them into solid bodies, apply boolean operations, define constraints. The output is parametric — you can change a dimension and the entire model updates consistently. CAD models have exact mathematical surfaces, not triangle approximations, so the precision is limited only by the printer's resolution.
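The parametric idea — change one driving dimension and every dependent feature updates — can be illustrated without any CAD software. This is a toy sketch, not a real CAD API; the bracket, its dimensions, and its constraint (holes a fixed margin from each edge) are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Bracket:
    """A toy parametric bracket: derived features recompute when a
    driving dimension changes, mimicking a parametric CAD rebuild."""
    width_mm: float = 80.0
    hole_diameter_mm: float = 5.0
    edge_margin_mm: float = 10.0

    @property
    def hole_centers_mm(self):
        # The holes are constrained to sit a fixed margin from each edge,
        # so they re-place themselves whenever the width changes.
        return (self.edge_margin_mm, self.width_mm - self.edge_margin_mm)

b = Bracket()
print(b.hole_centers_mm)   # holes at 10 mm and 70 mm
b.width_mm = 100.0         # edit one driving dimension...
print(b.hole_centers_mm)   # ...and the second hole moves to 90 mm
```

A mesh from text-to-3D or photogrammetry has no equivalent of this: the holes are baked into triangles, and changing the width means regenerating or re-sculpting.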
The strengths: dimensional accuracy to micrometers, parametric editability, mating-part compatibility (you can design two parts to fit together perfectly), standard mechanical features (threads, fillets, chamfers, lofts) baked into the toolset. CAD is the right answer when the model has to do its job, not just look like it would.
The limitations: steep learning curve. A simple bracket takes 15-30 minutes for an experienced CAD user and an hour or more for a beginner. Organic and freeform shapes are awkward in CAD — sculpting tools (ZBrush, Blender) handle these better than parametric CAD does. The artistic vocabulary is limited compared to AI generation; you build complexity by stacking primitives, which is slow for visually rich objects.
The right use cases: enclosures, brackets, mounting hardware, replacement gears, custom adapters, tool handles, mechanical components, precision-fit parts, anything that mates with another object, jigs and fixtures, professional engineering work, repair parts where dimensions matter.
The decision framework
Three questions get you to the right approach almost every time.
Question 1: does the object exist physically? If yes and you have access to it, photogrammetry is on the table. If no, photogrammetry is out — you cannot photograph a unicorn. This eliminates one of the three approaches in most cases.
Question 2: does dimensional precision matter? If you need a part to mate with another part, fit a specific cavity, or hit a specific dimension to within a millimeter — CAD. If the model is decorative, artistic, or the dimensions are flexible — text-to-3D or photogrammetry, depending on Question 1.
Question 3: how much skill and time do you have? CAD has the steepest learning curve and the longest authoring time. Photogrammetry needs decent photo composition. Text-to-3D needs only an ability to describe what you want. For someone with no CAD or modeling background, text-to-3D is the only approach that produces results in the first hour.
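The first two questions can be sketched as a small function, assuming each answer is a simple yes/no; question 3 (skill and time) then tie-breaks among whatever survives:

```python
def pick_approach(exists_physically: bool, precision_matters: bool) -> str:
    """Encodes questions 1 and 2 of the decision framework."""
    if precision_matters:
        return "CAD"             # mating parts and exact dimensions need parametric control
    if exists_physically:
        return "photogrammetry"  # a reference you can photograph beats describing it
    return "text-to-3D"          # imagined, dimension-flexible objects

# A fantasy miniature: does not exist, dimensions are flexible.
print(pick_approach(exists_physically=False, precision_matters=False))  # text-to-3D
# A replacement bracket: exists, but tolerances matter, so precision wins.
print(pick_approach(exists_physically=True, precision_matters=True))    # CAD
```

Real projects are fuzzier than two booleans, but the priority order — precision first, then availability of a physical reference — holds up well.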
Hybrid workflows that win
The three approaches compose well. The most powerful pattern for advanced makers is to use each one where it is strongest within a single project.
Generated body, CAD features: generate a creature or character with text-to-3D, import the STL into Fusion 360, add precision mounting bosses, magnet recesses, or screw holes parametrically. The artistic bulk of the model uses AI's vocabulary; the engineering features use CAD's precision. This is how custom tabletop minis with magnetic bases or modular weapon swaps get made.
Scanned base, CAD modifications: photogrammetry-capture an existing object, then redesign part of it in CAD. A common workflow for hobby restoration: scan a broken part, identify the missing geometry, model the replacement segment in CAD, merge with the scan, print. The scan provides shape continuity; the CAD step adds the precision the scan lacks.
CAD frame, AI ornament: design the structural frame of an object in CAD (a vase, a phone stand, a planter), then generate decorative ornamental elements with text-to-3D and integrate them. The CAD parts ensure the object functions; the AI parts make it visually distinctive without hours of organic modeling.
Common decision mistakes
The most common mistake is using text-to-3D for a job that needs CAD. A user with a broken bracket on a treadmill describes the bracket to an AI and gets back something that looks plausible but does not actually fit. The bracket needs measurements; AI does not measure. Use a caliper and Fusion 360.
The second most common mistake is using CAD for a job that needs text-to-3D. A maker spends an evening modeling a dragon in Tinkercad, ends up with a low-detail blocky shape, and concludes 3D printing is frustrating. The dragon was the wrong job for CAD — text-to-3D would have produced a better result in two minutes.
The third common mistake is using photogrammetry for an object that does not have texture. Trying to capture a smooth white plastic part with photogrammetry produces a noisy, holey mesh. The algorithm needs variation in the surface to compute camera positions. A featureless object needs CAD or a structured-light scanner instead.
The convergence trend
The boundaries between the three approaches are softening. Modern CAD tools (Fusion 360, Onshape) are adding generative-AI features that let you describe a part and have the tool produce a parametric model. Text-to-3D pipelines are adding constraint-aware generation modes that respect specific dimensions. Photogrammetry tools are integrating into both: RealityScan exports directly to Unreal, and RealityCapture exports to CAD-friendly formats.
The likely outcome by 2030 is a single integrated tool that accepts any combination of inputs (description, photos, dimensions) and produces a parametric, watertight, dimensionally correct model. Today, the three approaches are still distinct, and choosing the right one for the job is the maker's skill that separates a smooth project from a frustrating one. The framework above is the cheat sheet until the tools converge.
Questions
Which approach is fastest from "I want a model" to "the printer is running"?
Text-to-3D, by a wide margin. Generation takes 1-5 minutes, the output is ready for the slicer, and there is no setup time. Photogrammetry takes 15-60 minutes from photos to mesh, plus the time to take 30+ photos. CAD takes anywhere from 20 minutes for a simple part to multiple hours for anything complex, plus the learning curve.
Which approach is most accurate?
CAD, by a wide margin. A CAD model describes exact mathematical surfaces with no approximation error. Photogrammetry produces dimensional accuracy of ~0.5-2% with care, and ~5-10% without it. Text-to-3D has no inherent dimensional accuracy — the model is a plausible interpretation of your description, not a precise reproduction of any specific reference object.
Can I combine multiple approaches in one workflow?
Yes, this is the most powerful pattern for advanced makers. Generate the artistic shape with text-to-3D, then add precision mounting features in CAD (Fusion 360 imports STL and lets you add bosses, screw holes, and snap-fits). Or use photogrammetry to capture an existing object, then redesign part of it parametrically in CAD. Hybrid workflows let each approach contribute what it is best at.
When should I avoid text-to-3D?
When dimensional precision matters and you cannot iterate. A custom enclosure for a specific PCB, a replacement gear for a specific machine, a part that has to mate with another part — these need CAD, full stop. Text-to-3D produces visually plausible geometry, not engineering-grade tolerances. The artistic/decorative use case is where text-to-3D wins.
When should I avoid photogrammetry?
When the object you want to print does not exist physically yet. Photogrammetry requires a physical reference; you cannot photograph a fictional dragon. Also avoid for transparent, reflective, or featureless surfaces — photogrammetry algorithms need texture variation to compute camera positions, and smooth shiny surfaces are nearly impossible to capture.
RELATED
- →AI 3D generation vs traditional CAD
Deeper comparison of the two for engineering and artistic workflows.
- →Glossary: photogrammetry
How photogrammetry pipelines turn photos into 3D meshes.
- →Text-to-3D printing: a practical guide
When AI 3D generation works for printing and where it falls short.
- →Browse text-to-3D categories
Curated prompts and printability notes for 30+ object types.