AI-Assisted Door Reveal: Solving Symmetry, Reflection, and Motion in a Single Workflow
This post documents a specific problem I encountered while photographing a California Closets installation at the Tribeca home of interior designer Athena Calderone — and how a combination of AI generation and post-production discipline produced a result that would have been impractical to achieve any other way. It is a continuation of my earlier post on controlled AI workflows in architectural photography.
The space was recently featured as the cover story of the April issue of Architectural Digest.
The Problem: Symmetry, Reflection, and Mechanical Constraint
The closet featured a pair of symmetrical double doors in light grey, opening to reveal a dark wood grain vanity interior with high-gloss surfaces throughout. The brief called for a reveal — the doors opening to show the finished interior.
Executing this as a traditional video capture presented three compounding problems.
First, the doors were held closed by magnets. Breaking the magnetic hold required a sharp initial force, which would have made any mechanically rigged pull — fishing wire, for example — abrupt and inconsistent rather than fluid.
Second, opening two doors in mirror symmetry at a constant speed is a rigging and coordination problem. Even with two operators pulling in sync, that symmetry is difficult to guarantee, and any asymmetry in the opening would read immediately on camera.
Third, the vanity interior was high-gloss. The camera and tripod were already visible in the surface reflections; I had removed them from the still frames in post, and triggered the camera remotely to keep myself out of the scene as well. Introducing an operator — or any physical rigging — into that environment would have multiplied the retouching problem significantly.
This was not a situation where video capture was inconvenient. It was a situation where the physical constraints made clean execution genuinely impractical.
The Capture Decision
I had originally considered a GIF — a simple stop-motion open-close loop using multiple still frames. That idea led to the more useful realization: if I had a fully resolved still of the doors closed and a fully resolved still of the doors open, captured from the same position with the same lighting, I had everything I needed to hand the problem to AI.
Both frames were captured identically. Same camera, same position, same lighting. The only variable was the state of the doors.
This is the same principle I applied to the drone approach documented in my previous post — defining two architecturally verified states and allowing AI to construct the transition between them, rather than asking it to invent spatial information from a single frame.
What Kling Did Well
The interpolated motion was convincing. More importantly, the AI correctly responded to the physics of the scene — as the doors opened, the hardwood floor darkened appropriately under the shadow they cast and the dark interior they revealed. That kind of material and lighting response is not trivial. It was handled without instruction.
The door movement itself was fluid and consistent. The symmetry problem that would have challenged any physical rig was a non-issue.
Where AI Failed — and How I Corrected It
Two problems emerged in the generated clip.
First, Kling overshot. The doors opened beyond the state defined by the second still frame — extrapolating past the anchor rather than resolving to it. This was straightforward to fix: I trimmed the clip in DaVinci Resolve to end at the correct open position.
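For readers who script their pipelines rather than cut in an NLE, the trim amounts to discarding every frame past the verified anchor. A minimal sketch, with a hypothetical frame list standing in for the decoded clip:

```python
def trim_overshoot(frames, last_good_index):
    """Discard frames generated past the verified end state.

    frames: sequence of decoded video frames, in order.
    last_good_index: index of the frame matching the second anchor
    still (doors fully open) -- found by eye, exactly as when
    scrubbing a timeline.
    """
    # Keep everything up to and including the anchor frame;
    # everything after it is extrapolation, not interpolation.
    return frames[: last_good_index + 1]

clip = list(range(240))           # stand-in for 240 decoded frames
trimmed = trim_overshoot(clip, 199)
print(len(trimmed))               # 200 frames survive the trim
```

The point is the same as the Resolve edit: the second still frame defines the end state, and nothing generated beyond it is kept.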
Second, the AI invented a shadow on an adjacent wall as the doors opened. It was not a plausible shadow in the wrong place — it was a shadow with no relationship to any light source in the space. Physically impossible. This is a different category of AI error than the approximations I described in the previous post. The system wasn't guessing at something real. It was generating visual information that had no basis in the actual scene.
To correct it, I created a freeze frame from the clip at the moment the shadow first appeared, layered it above the video clip in the timeline, and used a Power Window mask in the Color page of DaVinci Resolve to isolate only the affected wall area. The freeze frame held the correct wall tone for the remainder of the clip, effectively erasing the invented shadow.
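The same correction can be expressed in code. Below is a numpy sketch of the idea, with a tiny synthetic clip and a boolean mask standing in for the Power Window; the function name and demo values are illustrative, not anything from Resolve's API:

```python
import numpy as np

def patch_with_freeze(frames, clean_index, mask):
    """Hold a clean region from clean_index through the end of the clip.

    frames: (t, h, w, 3) uint8 array of decoded frames.
    clean_index: last frame before the invented shadow appears.
    mask: (h, w) boolean array over the affected wall area --
          the array analogue of a Power Window.
    """
    patch = frames[clean_index][mask]        # correct wall tone, sampled once
    out = frames.copy()
    for t in range(clean_index, len(frames)):
        out[t][mask] = patch                 # overwrite the shadow region
    return out

# Synthetic clip: 4 frames of a 2x2 image, uniform wall tone 200.
frames = np.full((4, 2, 2, 3), 200, dtype=np.uint8)
frames[2:, 0, 0] = 50                        # invented shadow from frame 2 on
mask = np.zeros((2, 2), dtype=bool)
mask[0, 0] = True                            # isolate the affected area
fixed = patch_with_freeze(frames, 1, mask)   # frame 1 is still clean
```

Everything outside the mask is untouched, which is exactly why the fix is invisible: the rest of the frame remains the AI-generated motion.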
This is the same retouching discipline that architectural photography requires in stills — identifying what is incorrect, isolating it precisely, and replacing it with verified information. The tools are different. The thinking is the same.
On Prompting: Less Is More
A 4K render in Kling takes roughly ten minutes. In my experience, the more instruction loaded into a prompt, the higher the probability that something unexpected goes wrong in the render. Complex prompts introduce more variables for the AI to interpret — and misinterpret.
My approach now is to keep prompts minimal and let the source images carry the information. If the result has a correctable flaw, I fix it in DaVinci Resolve rather than trying to prompt around it. The question I ask after every render is not "how do I prompt this better" but "can I fix this in post." Usually the answer is yes, and the fix is faster than another render cycle.
Resolve Handles the Edges
One practical observation: Kling's motion controls are effectively binary. There is no native ease-in or ease-out. For this clip the door opening itself was smooth enough, but the stop at the fully open position arrived abruptly. I added a keyframe ease-out in DaVinci Resolve to soften the deceleration into the resting position.
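The easing itself is just a time remap. A toy Python sketch of a cubic ease-out curve, the shape a keyframed ease-out approximates, with frame indices standing in for the clip (the names are illustrative):

```python
def ease_out_cubic(t):
    """Map linear progress t in [0, 1] to eased progress.

    Motion starts at full speed and decelerates into the stop.
    """
    return 1 - (1 - t) ** 3

def retime(n_frames):
    """Which source frame to sample for each output frame."""
    last = n_frames - 1
    return [round(ease_out_cubic(i / last) * last) for i in range(n_frames)]

# With 11 frames, the remap crosses most of the motion early
# and lingers near the resting position at the end.
curve = retime(11)
print(curve)
```

Dragging a keyframe's ease handle in Resolve is doing this mapping under the hood; the sketch just makes the deceleration explicit.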
This is now a consistent part of my workflow. AI generates the motion. Resolve handles the edges — the trim points, the easing, and any corrections. The combination produces results that feel intentional rather than generated.
The Photograph Remains the Source of Truth
This vanity reveal works as a piece of interior design photography because both anchor frames were photographically resolved before AI was involved. The lighting was correct. The reflections were retouched. The camera was removed from the gloss surfaces. The architecture was accurate.
AI interpolated between two true states. It did not invent the space. Where it did invent — the wall shadow — it was corrected using verified still frame material.
This is the discipline that makes AI a productive tool in architectural photography rather than a liability. The photograph sets the standard. Everything else is held to it.

