2.2 Creator - Diffusion Guidance

The '2.2 Creator - Diffusion Guidance' workflow in ComfyUI is designed to enhance image editing precision by combining inpainting with ControlNet guidance. At the core of this workflow is the Z Image Turbo model, which enables fast, efficient diffusion sampling. The workflow uses nodes such as CLIPLoader and VAELoader to load the model's text encoder and VAE, while DifferentialDiffusion and InpaintModelConditioning steer the diffusion process with spatially varying edit strength. This makes it particularly useful for detailed image edits that would otherwise require model retraining.
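To illustrate the idea behind spatially varying guidance, the sketch below shows how a per-pixel strength map can decide, at each denoising step, which pixels keep the edited latent and which are reset to the original. This is a minimal illustration of the DifferentialDiffusion concept, not the actual ComfyUI node implementation; the function name and arguments are assumptions.

```python
import numpy as np

def differential_blend(edited, original, strength, step, total_steps):
    """Per-pixel blend of an edited latent with the original.

    `strength` is a map in [0, 1]. At each step, pixels whose strength
    is at least the remaining denoise fraction keep the edited value;
    the rest are reset to the original. High-strength pixels are thus
    edited for more steps, giving smooth spatial variation.
    (Illustrative sketch, not the real node's code.)
    """
    progress = 1.0 - step / total_steps  # remaining denoise fraction
    mask = (strength >= progress).astype(edited.dtype)
    return mask * edited + (1.0 - mask) * original
```

In an actual sampler loop this blend would run once per step, so a pixel with strength 0.3 is only affected during the final 30% of the schedule, while a pixel with strength 1.0 is edited throughout.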

The workflow leverages ControlNet to provide structural guidance, such as Canny edges or depth maps, so that the diffusion process adheres to the desired structure. This is achieved through nodes like ZImageFunControlnet and Controlnet Preprocessor. By using per-pixel 'strength' maps rather than hard binary masks, the workflow produces smooth, gradual transitions, making it well suited to inpainting specific image regions while preserving the surrounding context.
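A per-pixel strength map with a soft edge is what lets the edit fade smoothly into the untouched context. The helper below sketches one way to build such a map for a rectangular edit region with a linear feathered falloff; the function and parameter names are assumptions for illustration, not part of the workflow's nodes.

```python
import numpy as np

def feathered_strength_map(h, w, box, feather):
    """Build a per-pixel strength map in [0, 1] for a rectangular
    edit region, with a linear falloff of `feather` pixels around it.

    Unlike a hard 0/1 inpainting mask, the gradual edge lets the
    diffusion guidance fade out smoothly into the preserved context.
    (Illustrative helper; names are assumptions.)
    """
    y0, x0, y1, x1 = box
    ys = np.arange(h)[:, None]
    xs = np.arange(w)[None, :]
    # Distance outside the box along each axis (0 inside the box).
    dy = np.maximum(np.maximum(y0 - ys, ys - (y1 - 1)), 0)
    dx = np.maximum(np.maximum(x0 - xs, xs - (x1 - 1)), 0)
    dist = np.sqrt(dy**2 + dx**2)
    # 1.0 inside the box, linearly decaying to 0.0 over `feather` px.
    return np.clip(1.0 - dist / feather, 0.0, 1.0)
```

Fed into a differential-diffusion style blend, pixels inside the box are fully edited, pixels beyond the feather band are untouched, and the ring in between changes gradually.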