Here’s how the internet works: someone coins a term like “Motion Seed parameter,” it circulates in a few YouTube thumbnails and Reddit threads, and suddenly half the Midjourney community is trying to type --motion-seed into their prompts and wondering why nothing happens. Spoiler: that parameter doesn’t exist. Not in V7, not in any prior version, not in any official Midjourney documentation anywhere.
But here’s the thing — the underlying goal is completely valid. Product photographers and e-commerce teams genuinely want images that suggest motion: a camera pulling back from a perfume bottle, a sneaker caught mid-rotation, a laptop with a sense of cinematic push-in. And Midjourney V7 can absolutely produce that. Just not with a magic parameter nobody invented. It gets done through prompt craft, camera language, and a few techniques that are actually documented and actually work.
This guide walks through exactly those techniques, grounded in what V7 can actually do in early 2026, with prompts you can copy and paste right now.
By the end of this, you’ll be able to generate product images in Midjourney V7 that convincingly imply camera movement (pans, zooms, dolly-ins), object motion (rotation, float, tilt), and environmental dynamism (motion blur, depth-of-field shift) — all in a single still frame. These are not animations. They are stills that feel kinetic. That distinction matters, and it’s actually more useful for e-commerce than you’d think, because most product listing images are static JPEGs anyway.
You need an active Midjourney subscription — any tier works. You need to be running V7, which as of early 2026 is the default model. If you’re unsure, add --v 7 to any prompt to force it. You don’t need Midjourney’s video feature (available on Pro and Mega plans) for this tutorial, though the last section touches on how to extend these stills into actual video clips if you do have access. Everything here works in the standard image generation workflow.
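If you want a quick sanity check, append the version flag to any throwaway prompt; the mug here is just a placeholder subject:
simple product photography of a ceramic coffee mug on a white background, studio lighting, --ar 1:1 --v 7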
Midjourney V7 has significantly better understanding of cinematic and photographic language than its predecessors. The model was trained on enormous amounts of visual media, which means it responds to the vocabulary cinematographers actually use. When you write “dolly zoom,” it knows what a dolly zoom looks like. When you write “motion blur on background, sharp subject,” it knows how to render that optical effect. This is your toolkit.
The technique breaks into three categories: camera motion language, subject motion language, and optical/physics cues. You’ll combine these with your product description to get the result you’re after.
Start with camera language. These phrases tell Midjourney to render the scene as if a camera is actively moving through it — and V7 handles them with enough fidelity to be genuinely useful.
Slow dolly-in on a product:
close-up product photography of a matte black espresso machine, slow dolly push-in, shallow depth of field, foreground elements softly blurred, sharp focus on machine body, cinematic motion, studio lighting, --ar 4:5 --v 7 --style raw
The phrase “slow dolly push-in” cues the model toward compressed depth and a softly defocused foreground, the visual signature of a dolly-in shot. Adding “foreground elements softly blurred” reinforces it. --style raw keeps Midjourney from over-stylizing and lets the photographic realism come through.
Camera pan across a product line:
product photography of three luxury skincare bottles on a marble surface, wide camera pan implied, slight motion streak on background, sharp products, cool studio lighting, editorial aesthetic, --ar 16:9 --v 7 --style raw
Notice “implied”: that single word does real work. It signals to the model that you want the suggestion of motion, not a full blur that obscures the product. The background motion streak gives the pan without destroying the subject.
Overhead crane shot pulling back:
aerial overhead product flatlay of premium running shoes on a textured concrete surface, crane shot pulling back, dynamic composition, slight vignette on edges, sharp product detail, --ar 1:1 --v 7
Pro tip ✅
Overhead and flatlay compositions respond especially well to crane/pull-back language because Midjourney interprets the high angle as already cinematic. You get more dramatic results than with eye-level shots using the same prompt language.
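If you want to verify that yourself, here’s an eye-level counterpart to the crane prompt above, with the same pull-back language and product but a lower angle. Treat it as an untested comparison sketch; expect a flatter, less dramatic read:
eye-level product shot of premium running shoes on a textured concrete surface, camera pulling back, dynamic composition, sharp product detail, --ar 1:1 --v 7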
Sometimes you don’t want the camera to move — you want the product itself to feel dynamic. A sneaker mid-rotation, a watch tumbling through space, a bottle caught in a floating arc. V7 handles object motion well when you describe it physically and specifically.
Product rotation — mid-spin:
product photography of minimalist white wireless headphones, mid-rotation, 3/4 angle, subtle motion blur on trailing edge, sharp focus on logo and ear cup, dark gradient background, studio strobe lighting, --ar 4:5 --v 7 --style raw
The key here is “trailing edge” — it tells Midjourney where to place the motion blur, which makes the rotation look directional and intentional rather than random.
Floating/levitation with implied movement:
luxury perfume bottle floating above a reflective black surface, slight upward drift, soft motion trail beneath bottle, particles catching light, shallow depth of field, cinematic product shot, --ar 9:16 --v 7
Object tilt with kinetic energy:
e-commerce product shot of a matte aluminum laptop, dynamic tilt at 15-degree angle, caught mid-motion as if being opened, motion blur on hinge and screen edge, sharp keyboard in focus, clean white background, --ar 4:3 --v 7 --style raw
Pro tip ✅
Specifying the exact degree of tilt (like “15-degree angle”) gives Midjourney V7 more precise spatial information than vague terms like “slight tilt.” The model responds to numerical specificity in ways that earlier versions didn’t. Try 10, 20, and 30 degrees and compare — the difference is visible.
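For reference, here’s the 30-degree variant of the laptop prompt above; only the angle changes:
e-commerce product shot of a matte aluminum laptop, dynamic tilt at 30-degree angle, caught mid-motion as if being opened, motion blur on hinge and screen edge, sharp keyboard in focus, clean white background, --ar 4:3 --v 7 --style raw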
Motion blur alone isn’t enough. Real kinetic photography has a constellation of optical effects that happen together — depth of field shifts, lens flares at certain angles, light streaks, bokeh on backgrounds. Stacking these cues is what makes a still frame feel genuinely mid-movement rather than just blurry.
Depth-of-field shift (zoom lens pull):
product photography of a glass whisky bottle, zoom lens pull focus effect, front label sharp, background bottles progressively more blurred, warm amber studio lighting, cinematic 35mm aesthetic, --ar 3:2 --v 7 --style raw
Light streak during lateral motion:
sneaker product shot, lateral motion, dramatic side lighting creating horizontal light streaks across upper, sharp sneaker body, dark moody background, speed lines subtle, editorial sports photography style, --ar 16:9 --v 7
Full cinematic motion package for a hero shot:
hero product shot of a premium smartwatch, slow push-in camera motion, watch face sharp, background city lights bokeh in motion, wrist strap trailing soft blur, golden hour warm tones, cinematic 2.39:1 aspect ratio feel, high-end commercial photography, --ar 21:9 --v 7 --style raw
Warning ⚠️
Avoid stacking too many motion cues on the same subject element. If you ask for motion blur on the product AND sharp product detail at the same time, Midjourney will make a choice — and it might not be the one you want. Pick where the motion lives (background, edges, trailing elements) and keep the hero detail sharp.
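To make that concrete, here’s one way to resolve the conflict: the motion lives in the background and trailing edges, and sharpness is anchored to the hero face. This is a template sketch, not a tested prompt from the sets above:
hero product shot of a premium smartwatch, background light streaks in motion, strap edge trailing soft blur, watch face and dial tack sharp, dark studio backdrop, --ar 4:5 --v 7 --style raw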
Here are category-specific prompt templates tuned for the types of products where implied motion actually sells — apparel, tech, beauty, and footwear.
Beauty/skincare — dropper bottle with upward float:
luxury face serum bottle with gold dropper, floating upward against deep navy background, slow upward drift motion trail, light refracting through glass, elegant shadow below, sharp label detail, commercial beauty photography, --ar 4:5 --v 7 --style raw
Tech/audio — earbuds mid-drop:
premium wireless earbuds mid-drop above their charging case, caught in motion, slight blur on buds, sharp case detail, white minimalist background, product reveal aesthetic, soft diffused studio light, --ar 1:1 --v 7
Footwear — running shoe with speed context:
high-performance running shoe, low angle, implied forward motion, ground-level perspective, subtle motion blur on background, shoe sharp with dynamic side lighting, track surface visible, sports commercial photography, --ar 16:9 --v 7 --style raw
Apparel — jacket caught mid-swing:
premium leather jacket on invisible form, caught mid-swing as if just placed, collar and lapels in motion, trailing fabric blur on sleeves, sharp chest detail, neutral gradient background, editorial fashion photography, --ar 4:5 --v 7
Pro tip ✅
For e-commerce specifically, use --ar 4:5. It’s the native aspect ratio for Instagram product posts and most mobile product pages, and several of the prompts in this guide use it already. Swap to --ar 1:1 for square listings or --ar 16:9 for hero banners.
Midjourney’s --chaos parameter (0–100) controls compositional variation between the four grid images. For motion-implied shots, setting it between 15 and 35 gives you meaningful variation in how the motion reads across the four outputs — one might have more background blur, another more trailing edge blur — without going so chaotic that half the results are unusable.
product photography of a matte ceramic coffee mug, slow dolly-in, steam rising with slight motion, warm morning light, shallow depth of field, sharp mug handle and logo, cozy editorial aesthetic, --ar 4:5 --v 7 --style raw --chaos 20
Note 💡
If you find a result you love and want to generate variations that preserve the motion angle and composition, use Midjourney’s Vary (Subtle) option rather than Vary (Strong). Strong variation will often throw out the motion language entirely and regenerate something more static.
If you’re on a Midjourney Pro or Mega plan, you have access to the video generation feature that can take a static image and animate it. The stills you generate with these prompt techniques make excellent starting frames for that feature — because you’ve already established a directional motion in the image, the video model has a clear trajectory to follow. A dolly-in still tends to generate a dolly-in video. A floating product still tends to animate upward. It’s not guaranteed, but the alignment is noticeably better than starting from a completely static product shot.
Generate your motion-implied still first, upscale it, then feed it into the video feature with a text prompt that mirrors the motion language: “slow push-in camera, product stays sharp, background depth increases.” That combination — cinematic still as starting frame plus directional video prompt — gets you closer to a polished product video than starting from scratch in most video tools.
Pro tip ✅
When preparing a still for video extension, render it at --ar 16:9 and avoid placing the product in extreme corners. The video model needs compositional headroom to animate into. Dead-center or slightly off-center compositions with negative space around the product animate most cleanly.
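Putting the workflow together, a pairing might look like this. The still prompt follows the tip above, and the video prompt mirrors its motion language; treat both as a starting sketch, not a guaranteed result:
luxury perfume bottle floating above a reflective black surface, slight upward drift, soft motion trail beneath bottle, centered composition with negative space, --ar 16:9 --v 7
Video prompt: slow upward float, bottle stays sharp, background depth and particle drift increase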
The most common failure mode is describing motion too abstractly. Phrases like “dynamic” or “energetic” tell Midjourney almost nothing. “Dynamic” in particular is so overused in training data that the model treats it as filler. Replace it with specific cinematic vocabulary: “dolly push-in,” “lateral pan,” “crane pull-back,” “mid-rotation at 45 degrees.” Specificity is the difference between a vague commercial shot and a genuinely kinetic one.
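A before-and-after shows the difference; the product here is arbitrary. The first version leans on filler, the second swaps in camera vocabulary:
product shot of an insulated steel water bottle, dynamic energetic composition, --ar 4:5 --v 7
product shot of an insulated steel water bottle, lateral camera pan implied, slight motion streak on background, sharp label and cap, --ar 4:5 --v 7 --style raw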
The second mistake is putting motion blur on the hero product element without specifying that the subject itself should stay sharp. If you write “motion blur” without anchoring it to the background or trailing edges, Midjourney will blur whatever it thinks is most visually interesting — which is usually the product label. Always pair motion blur language with a sharpness anchor: “background motion blur, sharp product detail” or “trailing edge blur, sharp front face.”
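The same before-and-after applies here. Unanchored blur invites the model to blur the label; the anchored version tells it exactly where the motion lives (again, an untested template):
product photography of a craft gin bottle, motion blur, moody lighting, --ar 4:5 --v 7
product photography of a craft gin bottle, background motion blur, bottle and label tack sharp, moody lighting, --ar 4:5 --v 7 --style raw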
Avoid 🚫
Don’t use --style raw for every shot. Raw mode strips Midjourney’s aesthetic processing, which is great for photorealism but can make lighting feel flat. For beauty and luxury products especially, try without --style raw first; the default V7 stylization often adds a polish that works in your favor for premium products.
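An easy way to test this: take the serum prompt from earlier and drop the flag, keeping everything else identical:
luxury face serum bottle with gold dropper, floating upward against deep navy background, slow upward drift motion trail, light refracting through glass, elegant shadow below, sharp label detail, commercial beauty photography, --ar 4:5 --v 7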
A “Motion Seed” parameter would be a neat shortcut if it existed. It doesn’t. What does exist is a text-to-image model that’s been trained on enough cinematography to respond intelligently to camera and motion language — and that’s genuinely useful once you know how to talk to it. The prompts in this guide produce results that stand up in real e-commerce contexts: hero banners, social ads, product listing images that stop a scroll. None of them require a subscription upgrade, a video tool, or a parameter someone made up on a forum. Just precise prompt language and a clear idea of where you want the motion to live in the frame.
