
This ComfyUI workflow builds a guided style and scene transition on top of the LTX-2.3 text-to-video model by applying the ltx2.3-transition LoRA. The core generation happens inside a custom LTX-2.3 pipeline node (beb19732-5803-4af6-b2d2-289692ce780b), which takes your prompt, applies the LoRA at a user-defined weight, and renders a sequence of frames that smoothly evolve from one style or scene description to another. If you provide a reference image with LoadImage, the pipeline can also pull color palettes or subject cues into the transition; if you skip it, the transition is guided purely by text.
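Because the core generation is a single pipeline node, the most common tweak is editing that node's inputs in the exported workflow JSON before queueing it. The sketch below is a minimal, hypothetical example: the node id comes from the workflow above, but the input key names (`positive_prompt`, `lora_strength`) are assumptions, so check your own export for the exact keys.

```python
# Hypothetical sketch: adjusting the LTX pipeline node's inputs in an
# exported ComfyUI workflow JSON before queueing it via the API.
# Input key names below are assumptions; inspect your export for the real ones.
import json

PIPELINE_NODE_ID = "beb19732-5803-4af6-b2d2-289692ce780b"  # id from this workflow

def set_transition_inputs(workflow: dict, prompt: str, lora_strength: float) -> dict:
    """Return a copy of the workflow with an updated prompt and LoRA weight."""
    wf = json.loads(json.dumps(workflow))  # cheap deep copy via round-trip
    inputs = wf[PIPELINE_NODE_ID]["inputs"]
    inputs["positive_prompt"] = prompt          # assumed key name
    inputs["lora_strength"] = lora_strength     # assumed key name
    return wf

# Minimal stand-in for an exported workflow (structure only, for illustration):
workflow = {
    PIPELINE_NODE_ID: {
        "class_type": "LTXPipeline",  # placeholder class name
        "inputs": {"positive_prompt": "", "lora_strength": 1.0},
    }
}
updated = set_transition_inputs(workflow, "zhuanchang, forest to city skyline", 0.8)
print(updated[PIPELINE_NODE_ID]["inputs"]["lora_strength"])  # → 0.8
```

The copy-then-edit pattern keeps the original export untouched, so you can queue several strength variants from one base file.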

Practical controls are surfaced where you need them: prompt fields (with support for the trigger word "zhuanchang"), frame count and FPS for timing, and LoRA strength to dial in how assertively the transition behaves. A MarkdownNote node keeps prompting tips visible as you iterate, and SaveVideo encodes the final frames into a video file at your chosen frame rate and quality. The result is a reliable, loop-friendly way to blend visual elements—style, lighting, subject emphasis, or even scene composition—without hand-animating or post-processing crossfades.
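Two of these controls interact in a way worth sketching: frame count and FPS jointly set the clip duration, and the trigger word only fires the LoRA if it actually appears in the prompt. The helpers below are illustrative only; the function names are my own, not part of the workflow.

```python
# Illustrative helpers (names are my own, not part of the workflow):
# 1) frame count and FPS together determine clip duration;
# 2) the LoRA trigger word must be present in the prompt to take effect.

def clip_duration_seconds(frame_count: int, fps: int) -> float:
    """Duration of the rendered clip: frames divided by frames-per-second."""
    return frame_count / fps

def with_trigger(prompt: str, trigger: str = "zhuanchang") -> str:
    """Prepend the LoRA trigger word unless the prompt already contains it."""
    return prompt if trigger in prompt else f"{trigger}, {prompt}"

print(round(clip_duration_seconds(97, 24), 2))   # → 4.04
print(with_trigger("neon alley dissolving into desert dawn"))
# → zhuanchang, neon alley dissolving into desert dawn
```

Keeping the duration math explicit makes it easy to pick a frame count that lands on a clean loop length at your chosen FPS.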