Wan2.7: Reference to Video

This workflow turns a reference video into a new video featuring your chosen character while preserving the motion, framing, and timing of the original. It uses the Wan2.7 model via the Wan2ReferenceVideoApi node to track the reference video’s movement and composition, then conditions generation on a single character image you load with the LoadImage node. The output is a sequence of frames that maintain the target character’s facial features and overall look across time, which the SaveVideo node encodes into a final video file.
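The wiring described above can be modeled as a minimal sketch. All function names here (load_image, wan2_reference_video, save_video) are illustrative stand-ins for the LoadImage, Wan2ReferenceVideoApi, and SaveVideo nodes, not the actual ComfyUI API:

```python
# Hypothetical stand-ins for the three nodes in this workflow;
# the real nodes operate on tensors/latents, not plain dicts.

def load_image(path):
    # Stand-in for LoadImage: provides the character identity reference.
    return {"identity": path}

def wan2_reference_video(reference_video, character_image, num_frames=16):
    # Stand-in for Wan2ReferenceVideoApi: pairs each reference frame's
    # motion/composition with the character identity for generation.
    return [
        {"frame": i, "motion_source": reference_video,
         "identity": character_image["identity"]}
        for i in range(num_frames)
    ]

def save_video(frames, fps=24, filename="output.mp4"):
    # Stand-in for SaveVideo: encodes frames at the chosen frame rate.
    return {"filename": filename, "fps": fps, "frame_count": len(frames)}

character = load_image("character.png")
frames = wan2_reference_video("reference.mp4", character, num_frames=16)
result = save_video(frames, fps=24, filename="my_clip.mp4")
print(result["frame_count"])  # 16
```

The point of the sketch is the data flow: the reference video drives motion, the single image drives identity, and the encoder only sees finished frames.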

Technically, Wan2ReferenceVideoApi ingests the reference video and extracts per-frame motion cues and scene layout. It then conditions Wan2.7 with the identity information from your character image so the model synthesizes each frame with consistent facial features and styling, while following the original camera and body movement. The node outputs the generated frames, which SaveVideo assembles at your chosen frame rate and writes to your chosen filename. This approach is useful when you want character consistency without manual keying or heavy rotoscoping, and it works well for short clips, social content, and previz.
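The per-frame conditioning idea can be illustrated with a toy model, assuming (hypothetically) that identity conditioning is held fixed while motion cues vary per frame; extract_motion_cues and synthesize are invented names, not Wan2.7 internals:

```python
# Toy model of reference-conditioned generation: one fixed identity
# embedding, one motion cue per reference frame.

def extract_motion_cues(reference_frames):
    # Stand-in for motion/layout extraction from the reference video.
    return [{"t": i, "pose": f"pose_{i}"} for i, _ in enumerate(reference_frames)]

def synthesize(identity_embedding, motion_cue):
    # Stand-in for the conditioned generation step: every output frame
    # carries the same identity plus that frame's motion cue.
    return {"identity": identity_embedding, "pose": motion_cue["pose"]}

reference_frames = ["f0", "f1", "f2"]
identity = "character_embedding"

cues = extract_motion_cues(reference_frames)
generated = [synthesize(identity, cue) for cue in cues]

# Identity is constant across the clip; motion follows the reference.
assert all(f["identity"] == identity for f in generated)
```

This separation is why the output keeps the character's look stable over time even as framing and pose change frame to frame.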