
How to Change Camera Angle in Nano Banana (Cinematography Prompts)
Documented camera angle vocabulary for Nano Banana and Nano Banana Pro (wide shot, low-angle, Dutch tilt, focal length, aperture) with sourced prompt patterns.
Changing the camera angle in Nano Banana and Nano Banana Pro is the single highest-impact edit you can make to a generated image. Subject and lighting can stay identical, yet one angle swap (eye-level traded for a low angle, a medium shot pulled out to a wide) produces a completely different read.
This guide pulls together what Google has officially documented about camera control in Gemini image generation, the cinematography vocabulary the model recognizes, and the prompt patterns Google's own guides publish. No invented "best 10 angles I tested." Third-party claims are labeled where they appear.
Why camera angle matters
Camera angle does three things at once: it sets composition (where the eye lands), implied scale (how big or small the subject feels), and mood (vulnerable, imposing, intimate, detached). Cinematographers have a century of working vocabulary for those choices, and image models have been trained on the same vocabulary through their caption data.
Google's official "How to create effective image prompts with Nano Banana" page makes this explicit, instructing prompt writers to ask: "How do you want to frame your shot? Portrait or landscape? Is it an extreme close-up, a wide shot, a low angle shot?" Angle is treated as a first-class compositional decision, not a stylistic afterthought.
If you do not name an angle, the model picks one for you, usually something safe like a centered, eye-level, medium shot. Naming it is how you take the decision back.
What Google has documented about camera control
Three Google sources are the load-bearing references here: the DeepMind Nano Banana prompt guide, the Google Developers Blog Gemini 2.5 Flash guide, and the Google Cloud ultimate prompting guide.
The Google Developers Blog guide names the terminology the model is built to interpret: "close-up portrait," "wide-angle shot," "macro shot," "85mm portrait lens," "low-angle perspective," "elevated 45-degree shot," and "Dutch angle." It states that "photographic and cinematic language … give you precise control over the final image."
The Google Cloud guide publishes a structured pattern for forcing perspective:
"Force the perspective by explicitly requesting a 'low-angle shot with a shallow depth of field (f/1.8)'. If you need to show a vast scale, ask for a 'wide-angle lens'. For intricate details, specify a 'macro lens'." — Google Cloud, Ultimate prompting guide for Nano Banana
Everything below builds on those documented phrasings.
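The three documented fragments can be kept as a small lookup so a prompt always reaches for the exact phrasing Google publishes. A minimal sketch; the intent labels and function name are my own, but the fragment strings are quoted verbatim from the Google Cloud guide above.

```python
# Map compositional intents to the prompt fragments Google's Cloud guide
# documents. The keys ("perspective", "scale", "detail") are my own labels;
# the values are the quoted phrasings.
DOCUMENTED_FRAGMENTS = {
    "perspective": "low-angle shot with a shallow depth of field (f/1.8)",
    "scale": "wide-angle lens",
    "detail": "macro lens",
}

def fragment_for(intent: str) -> str:
    """Return the documented prompt fragment for a compositional intent."""
    try:
        return DOCUMENTED_FRAGMENTS[intent]
    except KeyError:
        raise ValueError(f"no documented fragment for intent: {intent!r}")

print(fragment_for("scale"))  # wide-angle lens
```

Keeping the phrasings centralized means an edit to one fragment propagates to every prompt built from it.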
The camera angle vocabulary the model is documented to understand
This is not a "top 10" list. It is the vocabulary that appears in cinematography reference material and in Google's own prompt guides, with definitions and the prompt fragments shown in those guides.
Wide shot (WS) and extreme wide shot (EWS)
Frames the subject head to toe with significant surrounding environment; the extreme version pushes farther so the subject is small against the setting. Google's Cloud guide names "wide-angle lens" as the canonical way to "show a vast scale."
Medium shot (MS) and medium-full shot
Frames the subject from roughly the waist up; medium-full extends to mid-thigh. Google's own fashion-editorial example prompt uses the literal phrase "Medium-full shot, center-framed."
Close-up (CU) and extreme close-up (ECU)
Close-up frames head and shoulders; extreme close-up fills the frame with a single feature, like an eye, a hand, or a textured surface. Google's prompt guide names "close-up portrait" and "extreme close-up" as recognized terms.
Low angle and worm's-eye view
Camera below the subject's eye line, looking up. The subject reads as imposing, dominant, larger than life. The worm's-eye variant places the camera essentially on the floor. "Low-angle shot" is one of the four framing options the DeepMind guide names by name.
High angle and bird's-eye / top-down
Camera above the subject's eye line, looking down. Subject reads smaller, more vulnerable. Top-down is directly overhead, flattening the scene into something map-like. Google's example prompts use "A top-down drone perspective looking directly down" and "A high-angle, close-up shot."
Eye-level
Camera matches the subject's eye line. Neutral, naturalistic. The DeepMind guide uses "A realistic eye-level shot" in its examples.
Dutch tilt (Dutch angle / canted angle)
Camera rotated on its longitudinal axis so the horizon runs diagonally, typically 15 to 25 degrees per cinematography references. Signals tension, unease, instability. Listed by name in the Google Developers Blog prompt guide.
Over-the-shoulder (OTS)
Camera behind one subject, framing past their shoulder onto a second subject. Standard conversation-coverage shot, useful when the brief calls for two characters with one as foreground anchor.
Three-quarter and profile
Three-quarter rotates the subject ~45 degrees from camera, widely cited in portrait references as the most flattering angle because it shows facial contour without flattening features. Profile is the 90-degree turn: silhouette-forward, analytical, useful for character sheets and packaging.
Nano Banana Pro's camera-control feature
Nano Banana Pro (model id gemini-3-pro-image-preview, released November 20, 2025) is where granular camera control becomes a marketed feature rather than a vocabulary trick.
Per the Google announcement, Nano Banana Pro lets users "Adjust camera angles, change the focus and apply sophisticated color grading," and the same post calls out the ability to "adjust the depth of field or focal point" and to "transform scene lighting (e.g. changing day to night or creating a bokeh effect)." These are part of the model's "studio-quality creative controls."
The Google Cloud guide publishes the pattern for forcing depth of field with an aperture value: "low-angle shot with a shallow depth of field (f/1.8)." For focal length, the Google Developers guide lists "85mm portrait lens" as a recognized term, and the same pattern extends to the cinematographer's standard set: 24mm wide, 35mm documentary, 50mm normal, 85mm portrait, 200mm telephoto.
Claimed vs. confirmed. Whether the model renders a physically accurate optical model (exact f-stop falloff, exact field of view per millimeter) is not specified in Google's documentation. Treat the lens and aperture phrasings as documented prompt patterns that steer style, not as a calibrated lens emulator.
Combining angle, lens, and lighting
The Google Cloud guide publishes a complete prompt template that fuses these:
"A striking fashion model wearing a tailored brown dress, sleek boots, and holding a structured handbag. Posing with a confident, statuesque stance, slightly turned. A seamless, deep cherry red studio backdrop. Medium-full shot, center-framed. Fashion magazine style editorial, shot on medium-format analog film, pronounced grain, high saturation, cinematic lighting effect." — Google Cloud Ultimate prompting guide for Nano Banana
Structure: subject and pose first, environment second, shot type (Medium-full shot, center-framed), then camera/format (shot on medium-format analog film), then rendering qualities (grain, saturation, lighting).
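That ordering can be enforced mechanically. A sketch of a joiner that assembles parts in the documented sequence; the function and parameter names are mine, while the example strings come from the Google Cloud prompt quoted above.

```python
def build_prompt(subject: str, environment: str, shot: str,
                 camera: str, rendering: str) -> str:
    """Join prompt parts in the documented order: subject and pose,
    environment, shot type, camera/format, rendering qualities."""
    parts = (subject, environment, shot, camera, rendering)
    return " ".join(part.rstrip(".") + "." for part in parts)

# Rebuild the fashion-editorial example from its documented pieces.
prompt = build_prompt(
    subject="A striking fashion model wearing a tailored brown dress, "
            "sleek boots, and holding a structured handbag, posing with "
            "a confident, statuesque stance, slightly turned",
    environment="A seamless, deep cherry red studio backdrop",
    shot="Medium-full shot, center-framed",
    camera="Fashion magazine style editorial, shot on medium-format analog film",
    rendering="Pronounced grain, high saturation, cinematic lighting effect",
)
print(prompt)
```

Swapping only the `shot` argument while holding the rest constant is the cleanest way to A/B test an angle change.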
For lighting, the Google Developers Blog guide names "three-point softbox setup," "soft, golden hour light," and "chiaroscuro lighting with harsh, high contrast" as documented phrasings. Pairing one with an angle multiplies the effect:
- Low angle + chiaroscuro: dramatic, imposing, threatening.
- High angle + soft golden hour: vulnerable, warm, intimate.
- Eye level + three-point softbox: neutral, commercial, catalog-clean.
- Dutch tilt + harsh directional light: unsettling, thriller-coded.
Pick one combination per prompt. Stacking dilutes rather than compounds.
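The pairings above can be stored as a one-to-one mapping so each prompt receives exactly one angle-plus-lighting combination. A sketch; the dictionary and function names are illustrative, and the lighting strings follow the phrasings documented in the Developers Blog guide (the Dutch-tilt entry uses this article's own "harsh directional light" wording).

```python
# One lighting phrase per camera position, mirroring the pairings above.
# Keys and function name are illustrative; lighting strings follow the
# phrasings quoted in this article.
ANGLE_LIGHTING = {
    "low-angle shot": "chiaroscuro lighting with harsh, high contrast",
    "high-angle shot": "soft, golden hour light",
    "eye-level shot": "three-point softbox setup",
    "Dutch angle": "harsh directional light",
}

def angle_fragment(angle: str) -> str:
    """Return 'angle, lighting' for exactly one documented pairing."""
    return f"{angle}, {ANGLE_LIGHTING[angle]}"

print(angle_fragment("low-angle shot"))
```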
A documented working template
Combining the patterns above into a single fill-in-the-blank template that mirrors Google's published structure:
A [shot type] of [subject], [pose or expression], in [environment].
Shot on [camera/lens, e.g., 85mm portrait lens, f/1.8].
[Lighting setup, e.g., three-point softbox, golden hour backlight].
[Optional: color grading, film stock, mood].

Worked example, built only from documented Google phrasings:
A low-angle close-up portrait of a Himalayan wolf, alert expression, on a snow-dusted ridge at dawn. Shot with an 85mm portrait lens at f/1.8 for a soft, blurred background. Soft golden hour light from the side. Cinematic, high saturation, slight film grain.
Every element above traces back to a phrase Google publishes in its own prompt guides.
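The fill-in-the-blank template can be populated programmatically. This sketch reproduces the wolf example above; the field names are my own, the phrasings are the documented ones.

```python
# Template mirrors the article's fill-in-the-blank structure; field
# names are illustrative.
TEMPLATE = (
    "A {shot_type} of {subject}, {pose}, in {environment}. "
    "Shot with {camera}. {lighting}. {grade}."
)

prompt = TEMPLATE.format(
    shot_type="low-angle close-up portrait",
    subject="a Himalayan wolf",
    pose="alert expression",
    environment="a snow-dusted ridge at dawn",
    camera="an 85mm portrait lens at f/1.8 for a soft, blurred background",
    lighting="Soft golden hour light from the side",
    grade="Cinematic, high saturation, slight film grain",
)
print(prompt)
```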
Common pitfalls
1. Directorial language the model does not parse for stills. "Push in slowly," "rack focus," and "dolly out" are camera movements; they belong to video. For Nano Banana stills, stick to position (low/high/eye level), framing (close-up / medium / wide), and rotation (Dutch tilt).
2. Stacking contradictory angle terms. A prompt asking for "a wide aerial top-down close-up" gives the model four mutually exclusive instructions. Pick one shot scale and one camera position.
3. Naming a focal length without a depth-of-field cue. A "200mm telephoto" prompt without an aperture value often renders flat-field. Google's documented pattern pairs the lens with an f/ value when you want the subject isolated.
4. Expecting true 3D rotation from one reference. Nano Banana Pro accepts up to 14 input objects and 5 character likenesses, but rotating a subject to a new angle from a single 2D reference is inference, not photogrammetry. For tight identity-locked rotations, give it more than one reference where possible.
5. Changing more than one variable per iteration. When iterating angle, change only the angle term. Holding subject, environment, lens, and lighting constant lets you attribute the visual change to the swap rather than to drift.
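Pitfall 2 (stacked contradictory terms) can be caught before a prompt is ever submitted. This is a rough lint, not an official check; the term lists are my own reading of the vocabulary section above.

```python
# Mutually exclusive groups: a prompt should use at most one term per group.
# Term lists are an informal reading of the vocabulary above, not an API.
SHOT_SCALES = ["extreme close-up", "close-up", "medium", "wide", "macro"]
POSITIONS = ["low-angle", "high-angle", "eye-level", "top-down", "aerial",
             "worm's-eye"]

def stacked_terms(prompt: str) -> dict:
    """Return the shot-scale and camera-position terms found in a prompt."""
    p = prompt.lower()
    scales = [t for t in SHOT_SCALES if t in p]
    if "extreme close-up" in scales:
        scales.remove("close-up")  # substring of the extreme variant
    positions = [t for t in POSITIONS if t in p]
    return {"scales": scales, "positions": positions}

# The article's broken example trips both checks:
found = stacked_terms("a wide aerial top-down close-up of a lighthouse")
# More than one entry in either list means the prompt is self-contradictory.
print(found)
```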
Try it on gptimg
Run all of these patterns inside the Nano Banana Pro studio without setting up a Google Cloud project. The studio also routes to the standard Nano Banana tier for cheaper iteration while you lock down the angle. To decide when Pro is worth the cost over OpenAI's image model, see Nano Banana Pro vs GPT Image.
Frequently asked questions
What camera angle terms does Nano Banana actually understand?
Google's official prompt guides explicitly name close-up portrait, wide-angle shot, macro shot, low-angle perspective, elevated 45-degree shot, aerial / top-down view, eye-level shot, Dutch angle, and 85mm portrait lens as documented working phrases.
Does Nano Banana Pro support real focal length and aperture control?
Per Google's announcement, Nano Banana Pro lets users "Adjust camera angles, change the focus and apply sophisticated color grading" and "adjust the depth of field or focal point." The Cloud guide publishes the pattern for combining the two: "low-angle shot with a shallow depth of field (f/1.8)." Whether the model is rendering a physically accurate optical model or interpreting the phrasing as a style cue is not specified by Google.
How do I change camera angle on an existing image without changing the subject?
Submit the original image as an input and write an edit instruction that holds subject and environment constant while naming only the new angle: "Same subject and same scene, re-rendered as a low-angle worm's-eye-view shot." For identity-sensitive edits, include a short description of the subject in the edit prompt as an explicit anchor.
What is a Dutch tilt and when should I use one?
A Dutch tilt rotates the camera roughly 15–25 degrees on its longitudinal axis so the horizon runs diagonally. It signals tension, unease, or psychological imbalance. Use it sparingly; overusing it neutralizes the effect.
Can I get cinematic angles in standard Nano Banana, or do I need Nano Banana Pro?
Both models accept the same documented vocabulary. Nano Banana Pro adds higher resolution (up to 4K) and explicit multi-character consistency. For most single-image angle work, the standard Nano Banana tier is sufficient and cheaper to iterate against.
Sources
- How to create effective image prompts with Nano Banana — Google DeepMind prompt guide
- How to prompt Gemini 2.5 Flash Image Generation for the best results — Google Developers Blog
- Ultimate prompting guide for Nano Banana — Google Cloud
- Nano Banana Pro: Gemini 3 Pro Image model from Google DeepMind — Google blog announcement
- Gemini 3 Pro Image (Nano Banana Pro) — Google DeepMind product page
Last reviewed against source pages: 2026-04-18. Google updates these prompt guides periodically; confirm phrasings in the linked sources before relying on them in production prompts.