GUIDE

Prompt Engineering for 3D Generation

Text-to-3D models reward different prompting patterns than text-to-image models. Describing an object as a physical thing gets you more printable geometry than describing a scene or a mood.

LAST REVIEWED 2026-04

The core principle

Text-to-image models are trained on photos and art. They know about lighting, mood, style. Text-to-3D models are trained on 3D geometry and reference views. They know about shape, silhouette, physical form.

That means your prompts should describe what the object is, not what the photo of it would look like. "A dragon sculpture on a stone pedestal, dramatic lighting" is a text-to-image prompt. "A dragon sculpture with folded wings, seated pose, 20cm tall" is a text-to-3D prompt.

The anatomy of a good prompt

Think in this order:

  1. The noun. What the object is. Be specific but brief: "owl figurine", not "a beautifully crafted owl with intricate feathers".
  2. The pose or stance. Seated, standing, flying, folded, open. This matters a lot — it sets the silhouette.
  3. One or two key features. "wide wings", "long tail", "hollow interior". Pick the defining traits.
  4. Style. Stylized, realistic, low-poly, organic, geometric. One word is usually enough.

That is often the whole prompt. Four elements, one sentence. Longer prompts don't generally produce better output in current text-to-3D models — they confuse the generator.
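Because the four elements compose mechanically, prompts can be assembled in code when you are generating many of them. A minimal sketch; the function and parameter names here are illustrative, not part of any API:

```python
def build_prompt(noun, pose, features, style):
    """Join the four elements into one comma-separated sentence.

    Empty elements are dropped, so a prompt can omit the pose or
    the features without leaving stray commas behind.
    """
    parts = [noun, pose] + list(features) + [style]
    return ", ".join(p for p in parts if p)

prompt = build_prompt(
    noun="owl figurine",
    pose="perched",
    features=["wide wings"],
    style="stylized",
)
# → "owl figurine, perched, wide wings, stylized"
```

Note the order: the noun anchors the generation, and everything after it only qualifies that noun.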

Prompts that work

✓ GOOD

sitting corgi with pointed ears, stylized figurine

✓ GOOD

low-poly skull planter with open top, flat base

✓ GOOD

octopus with curled tentacles, one arm raised, organic style

✓ GOOD

miniature treasure chest, closed lid, rivets on the sides

Prompts that don't work (and why)

"A majestic dragon perched atop a medieval castle at sunset, with fire breathing from its mouth"

Too much scene, two objects (dragon and castle), plus an effect (fire) that does not translate to printable geometry. Pick one object.

"Custom gear with 20 teeth, 30mm diameter, 3mm bore"

Dimensional precision is not what text-to-3D does. Use CAD.

"A sign that says HELLO"

Most text-to-3D models cannot produce readable lettering. Generate the shape of a sign, then add text in a mesh editor.

"Something cool"

You will get something, and it will probably be a generic blob. Give the model a noun to anchor on.

Style keywords that actually shift output

A short, non-exhaustive list of words that tend to produce recognizable differences:

  • stylized / cartoon — exaggerated features, thicker geometry, print-friendly
  • low-poly — faceted, angular, minimal detail
  • realistic / detailed — more surface complexity, may need a higher-resolution printer
  • organic — flowing curves, fewer hard edges
  • geometric — straight edges, clear planar surfaces
  • chibi — oversized head, small body, good for figurines
  • hollow — the generator may try to produce a shell rather than a solid

Iterating

Generative 3D is stochastic — the same prompt can produce different outputs. If the first result is close but wrong in one way, re-run rather than adding more words to the prompt. Adding words often creates new problems rather than fixing the one you had.

If the first result is far from what you wanted, the issue is usually that the prompt described a scene instead of an object, or that you asked for something the model can't do (text, mechanical parts, dimensional precision). Change the strategy, not the adjectives.

Prompting for printability

Two habits that help the print succeed:

Ask for a flat base. "on a flat base" or "with a flat bottom" at the end of the prompt gives the slicer a bed-contact surface to work with. Not every model respects this, but enough do that it's worth adding.

Avoid tiny appendages. A dragon with a thin whisker-like tail is going to have support issues at small scales. "Thick tail" or "chunky" biases toward print-friendly geometry.
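If you batch-generate prompts, the flat-base habit can be applied automatically. A small sketch, assuming the two phrasings mentioned above; the helper name is made up for this example:

```python
def printable(prompt):
    """Append print-friendly phrasing unless the prompt already has it.

    Not every model respects the suffix, but it costs nothing to add.
    """
    if "flat base" in prompt or "flat bottom" in prompt:
        return prompt
    return prompt + ", on a flat base"

printable("octopus with curled tentacles, organic style")
# → "octopus with curled tentacles, organic style, on a flat base"
```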

Try Automatic3D free

Describe an object, get a printable STL. Free tier includes 3 models and 12 concept generations per month. No credit card required.