[Comparison images: Flux.1 Canny Model - Before / After]

The Flux.1 Canny Model workflow in ComfyUI generates images guided by edge detection, combining the Flux.1 model with the Canny edge detection algorithm. The workflow chains a series of nodes, including KSampler, VAEDecode, and FluxGuidance, to transform input images into new outputs. At its core is the Canny node, which detects edges in the input image and provides a structural guide that the Flux.1 model follows during generation. This technique is particularly useful for creating images with a strong sense of form and structure, since the edge map ensures that key features of the original image are preserved in the generated output.
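To illustrate the principle behind the Canny node, here is a minimal edge-detection sketch in plain numpy. It uses only Sobel gradient magnitude plus a threshold; the real Canny algorithm (and ComfyUI's Canny node) additionally applies Gaussian smoothing, non-maximum suppression, and hysteresis thresholding. The function name and threshold value are illustrative, not part of ComfyUI's API.

```python
import numpy as np

def simple_edge_map(img, threshold=0.5):
    """Simplified edge detection: Sobel gradient magnitude + threshold.

    Illustrates the idea behind the Canny node. Full Canny also does
    Gaussian blur, non-maximum suppression, and hysteresis thresholding.
    """
    # Sobel kernels for horizontal and vertical intensity gradients
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Convolve (naively) over the interior pixels
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = np.sum(patch * kx)
            gy[y, x] = np.sum(patch * ky)
    mag = np.hypot(gx, gy)          # gradient magnitude
    if mag.max() > 0:
        mag /= mag.max()            # normalize to [0, 1]
    return (mag > threshold).astype(np.uint8)

# Synthetic input: a white square on a black background
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
edges = simple_edge_map(img)
```

Running this produces an edge map that is 1 along the square's border and 0 in the flat interior and background, which is exactly the kind of structural outline the Flux.1 model uses as guidance.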

The workflow begins by loading the required models (the Flux.1 diffusion model along with its text encoders and VAE), followed by uploading the image that serves as the base for edge detection. The Canny node processes this image to extract an edge map, which the FluxGuidance node then uses to steer image generation. Nodes such as CLIPTextEncode and InstructPixToPixConditioning add text-prompt conditioning that further refines the output, making the workflow versatile for a range of creative applications. This structured approach lets users produce high-quality, edge-guided images that preserve both coherence and artistic intent.
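The node sequence described above can be sketched as a small dependency graph. The node names match the ComfyUI classes mentioned in the text, but the wiring shown here is a simplified assumption for illustration, not an exact workflow file (a real workflow also connects model loaders, latent images, and negative conditioning):

```python
# Hypothetical, simplified dependency graph of the workflow described above.
# Each node maps to the nodes whose outputs it consumes (illustrative wiring).
workflow = {
    "LoadImage":                    [],                            # base image
    "Canny":                        ["LoadImage"],                 # edge map
    "CLIPTextEncode":               [],                            # text prompt
    "InstructPixToPixConditioning": ["CLIPTextEncode", "Canny"],   # text + edge guidance
    "FluxGuidance":                 ["InstructPixToPixConditioning"],
    "KSampler":                     ["FluxGuidance"],              # denoising step
    "VAEDecode":                    ["KSampler"],                  # latent -> image
}

def topo_order(graph):
    """Return an execution order where every node runs after its inputs."""
    order, seen = [], set()
    def visit(node):
        if node in seen:
            return
        for dep in graph[node]:
            visit(dep)
        seen.add(node)
        order.append(node)
    for node in graph:
        visit(node)
    return order

order = topo_order(workflow)
```

Topologically sorting the graph recovers the execution order ComfyUI itself follows: edge detection must complete before conditioning, conditioning before sampling, and sampling before the VAE decodes the final image.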