Wan2.7: Video Edit

[Before/after comparison images]

This ComfyUI workflow demonstrates reference-guided video editing with Wan2.7. You load a source clip with LoadVideo, provide one or more reference images with LoadImage, and let the Wan2VideoEditApi node apply character or scene replacements consistently across frames. The node calls the Wan2.7 model to infer identity, attire, and style cues from your references, then synthesizes edited frames that preserve the original camera motion and timing. Finally, SaveVideo assembles the frames into a finished MP4/WEBM with your chosen frame rate and quality.
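The four-node pipeline above can be sketched in ComfyUI's API (prompt) format. This is a minimal illustration only: the node class names follow the workflow description, but the exact input socket names ("file", "image", "video", "reference", "prompt", and so on) are assumptions and may differ from the actual node definitions in your install.

```python
import json

def build_edit_graph(video_path, ref_image, edit_prompt):
    """Assemble the LoadVideo -> Wan2VideoEditApi <- LoadImage -> SaveVideo graph.

    ["1", 0] means "output slot 0 of node 1", ComfyUI's standard way of
    wiring one node's output into another node's input.
    """
    return {
        "1": {"class_type": "LoadVideo",
              "inputs": {"file": video_path}},          # source clip
        "2": {"class_type": "LoadImage",
              "inputs": {"image": ref_image}},          # reference image
        "3": {"class_type": "Wan2VideoEditApi",
              "inputs": {"video": ["1", 0],             # decoded frames
                         "reference": ["2", 0],         # identity/style reference
                         "prompt": edit_prompt}},       # textual directions
        "4": {"class_type": "SaveVideo",
              "inputs": {"video": ["3", 0],             # edited frames
                         "filename_prefix": "wan_edit"}},
    }

graph = build_edit_graph("shot_042.mp4", "hero_ref.png",
                         "replace the lead character with the reference identity")
print(json.dumps(graph, indent=2))
```

Note how both loaders feed node 3 in parallel, and SaveVideo only consumes node 3's output; that wiring is the whole workflow.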

Technically, the Wan2VideoEditApi node takes three essential inputs: the decoded video frames, the reference image tensor(s), and your textual directions (a prompt plus optional constraints). It uses the Wan2.7 backbone to align reference identity and style with the target footage while maintaining temporal coherence, so edits stay stable over time rather than flickering from frame to frame. This makes the workflow especially useful for swapping a character across an entire shot or re-dressing a scene without rotoscoping. The minimal node set (LoadVideo and LoadImage feeding Wan2VideoEditApi, followed by SaveVideo) keeps the pipeline simple while still offering strong control through prompts, reference choice, and the edit-strength settings exposed by the API node.
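Once the graph is built, it can be queued programmatically instead of through the ComfyUI canvas. The sketch below posts a graph to ComfyUI's standard /prompt endpoint on a local server; the host and port assume a default local install, and the empty placeholder graph stands in for the four-node pipeline described above.

```python
import json
import urllib.request

def build_payload(graph):
    # ComfyUI expects the node graph under the "prompt" key of the JSON body.
    return json.dumps({"prompt": graph}).encode("utf-8")

def queue_prompt(graph, host="127.0.0.1", port=8188):
    """POST the graph to a running ComfyUI server's /prompt queue endpoint."""
    req = urllib.request.Request(
        f"http://{host}:{port}/prompt",
        data=build_payload(graph),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # response includes the queued prompt_id

if __name__ == "__main__":
    graph = {}  # fill in the LoadVideo/LoadImage/Wan2VideoEditApi/SaveVideo graph
    # queue_prompt(graph)  # requires a running ComfyUI instance
```

From there the server executes the graph asynchronously; SaveVideo writes the finished MP4/WEBM into ComfyUI's output directory as usual.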