Seedance 2.0: Reference to Video

Seedance 2.0: Reference to Video turns a single reference image into a short cinematic video by combining a strong identity lock with prompt-driven motion and lighting. The LoadImage node supplies a clean, high-resolution reference frame, while ByteDance2ReferenceNode runs the Seedance 2.0 model to synthesize a sequence of frames that preserve the subject’s look and composition. You guide the scene using natural language—add camera cues (dolly-in, slow pan left), lighting notes (soft rim light, golden hour), and style hints (35mm, shallow depth of field) directly in the prompt.
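As a concrete illustration, a prompt combining all three cue types might read like this (a hypothetical example, not a tested output from the model):

```text
A portrait of the subject at golden hour, slow dolly-in with a gentle pan left,
soft rim light, 35mm lens, shallow depth of field, cinematic color grade.
```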

Technically, ByteDance2ReferenceNode conditions Seedance 2.0 on both the reference image and your text prompt, then generates a video at the duration, resolution, and frame rate you specify. Key controls typically include reference adherence versus motion intensity (to balance identity preservation with expressiveness), guidance strength, negative prompts for artifact control, seed for reproducibility, and output timing (fps, seconds). The SaveVideo node then encodes the generated frames into a compact MP4 at your chosen fps and quality. This minimalist graph—LoadImage → ByteDance2ReferenceNode → SaveVideo—makes iteration fast: tweak a prompt or slider, queue again, and compare results side by side to quickly dial in the look you need.
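The three-node graph can be sketched in ComfyUI's API (JSON) workflow format. This is a minimal sketch under assumptions: the input key names (`prompt`, `guidance`, `motion_intensity`, `duration_seconds`, and so on) are hypothetical stand-ins for the controls described above, so check your installed node pack for the exact identifiers.

```python
import json

# Hypothetical sketch of the LoadImage -> ByteDance2ReferenceNode -> SaveVideo
# graph in ComfyUI's API format. Input key names are assumptions, not the
# node pack's verified schema.
graph = {
    "1": {
        "class_type": "LoadImage",
        "inputs": {"image": "reference.png"},
    },
    "2": {
        "class_type": "ByteDance2ReferenceNode",
        "inputs": {
            "image": ["1", 0],  # reference frame from LoadImage (node 1, output 0)
            "prompt": "slow dolly-in, soft rim light, 35mm, shallow depth of field",
            "negative_prompt": "flicker, warping, extra limbs",  # artifact control
            "guidance": 7.0,            # prompt-adherence strength
            "motion_intensity": 0.6,    # identity lock vs. expressiveness
            "seed": 42,                 # fixed seed for reproducible comparisons
            "fps": 24,
            "duration_seconds": 5,
        },
    },
    "3": {
        "class_type": "SaveVideo",
        "inputs": {"frames": ["2", 0], "fps": 24, "filename_prefix": "seedance"},
    },
}

print(json.dumps(graph, indent=2))
```

Because the graph is this small, iteration amounts to editing one value (prompt, seed, or motion intensity) and re-queuing; keeping the seed fixed while changing a single control makes side-by-side comparisons meaningful.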

While the workflow is optimized for visuals, you can time motion to an external soundtrack by choosing a clip duration that matches your track's tempo: a section of n beats at a given BPM lasts n × 60 / BPM seconds. SaveVideo produces a silent video, so plan to add or replace audio in your editor after export. The result is a reliable identity-preserving image-to-video pipeline that responds well to clear prompts and produces cinematic movement without complex node graphs.
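The tempo arithmetic above can be sketched as a small helper. This is standard beat-to-seconds math, not part of any node; the function name is illustrative.

```python
def section_duration(beats: int, bpm: float, fps: int) -> tuple[float, int]:
    """Return (seconds, total_frames) for `beats` beats at `bpm`, rendered at `fps`."""
    seconds = beats * 60.0 / bpm   # one beat lasts 60/bpm seconds
    frames = round(seconds * fps)  # frame count the encoder will produce
    return seconds, frames

# A 16-beat section at 120 BPM, rendered at 24 fps:
print(section_duration(16, 120, 24))  # (8.0, 192)
```

Feeding the resulting seconds and fps into the generation node keeps cut points aligned with the beat once you lay the audio back in.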