Wan2.7: Text to Video

This ComfyUI workflow turns a plain text prompt into a short video using the Wan 2.7 model. At its core, the Wan2TextToVideoApi node sends your prompt and generation settings to the Wan 2.7 backend, which synthesizes a sequence of frames that match the described scene, style, and motion. If you supply a reference audio clip, the node uses it to guide mouth shapes and timing, enabling convincing lip-synced results without manual keyframing.

The output from Wan2TextToVideoApi is passed to SaveVideo, which assembles and writes the final clip to disk. You control key parameters in the API node—such as prompt text, seed for reproducibility, and clip characteristics like duration, resolution, and FPS (as exposed by the node)—and then finalize the container and path in SaveVideo. This simple two-node setup is ideal for fast iteration: tweak your prompt, optionally attach an audio file for lip sync, and render clean, shareable videos from within ComfyUI.
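As a concrete illustration, the two-node graph above can be expressed in ComfyUI's API ("prompt") JSON format and submitted to a locally running server over its `/prompt` HTTP endpoint. This is a minimal sketch: the input names on `Wan2TextToVideoApi` and `SaveVideo` (`prompt`, `seed`, `video`, `filename_prefix`) are illustrative assumptions — check the actual node definitions in your ComfyUI install for the real input names and any duration/resolution/FPS fields the node exposes.

```python
import json
import urllib.request

# Minimal two-node workflow graph in ComfyUI's API (prompt) format.
# Input names below are illustrative assumptions, not the node's verified schema.
workflow = {
    "1": {
        "class_type": "Wan2TextToVideoApi",
        "inputs": {
            "prompt": "A red fox running through snowy woods at dawn",
            "seed": 42,  # fixed seed so reruns reproduce the same clip
        },
    },
    "2": {
        "class_type": "SaveVideo",
        "inputs": {
            "video": ["1", 0],          # wire output 0 of node "1" into SaveVideo
            "filename_prefix": "wan_t2v",
        },
    },
}

def submit(graph, host="127.0.0.1", port=8188):
    """POST the graph to a running ComfyUI server's /prompt endpoint."""
    data = json.dumps({"prompt": graph}).encode("utf-8")
    req = urllib.request.Request(
        f"http://{host}:{port}/prompt",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# submit(workflow)  # uncomment with ComfyUI running locally
```

Iterating is then just a matter of editing the prompt string or seed in the dict and resubmitting, which mirrors the tweak-and-render loop described above.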