2.2 Creator - Diffusion Guidance
[Before/after comparison images for this workflow]

The '2.2 Creator - Diffusion Guidance' workflow is built for precise image editing, combining differential diffusion, inpainting, and ControlNet guidance. At its core it uses the Z-Image-Turbo model to make targeted edits, such as adding or altering objects, with high fidelity. The process begins with loading the required models via nodes such as CLIPLoader and VAELoader, after which the DifferentialDiffusion node applies differential diffusion. This technique allows smooth, spatially varying changes without retraining the model, making it well suited to iterative editing tasks.
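To make the idea concrete, here is a minimal numpy sketch of how differential diffusion can vary edit strength per pixel. This is an illustration only, not the DifferentialDiffusion node's actual implementation: `fake_denoise` and `add_noise` are toy stand-ins for the real denoiser and forward-noising process, and the threshold schedule is a simplifying assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_denoise(latent):
    # Toy stand-in for the model's denoiser: pulls values toward zero.
    return latent * 0.9

def add_noise(latent, t):
    # Toy stand-in for the forward process: mixes in Gaussian noise by level t.
    return (1 - t) * latent + t * rng.standard_normal(latent.shape)

def differential_diffusion(original, change_map, num_steps=10):
    """change_map is a per-pixel value in [0, 1]: 0 = keep the original,
    1 = allow full regeneration. At each step, pixels whose allowed change
    is below the current threshold are reset to the re-noised original,
    so they track the source image instead of the edit."""
    latent = add_noise(original, 1.0)          # start from pure noise
    for step in range(num_steps):
        t = 1.0 - (step + 1) / num_steps       # remaining noise level
        denoised = fake_denoise(latent)
        frozen = change_map < (1.0 - t)        # pixels past their change budget
        renoised_original = add_noise(original, t)
        latent = np.where(frozen, renoised_original, denoised)
    return latent
```

Because the change map is continuous rather than a hard binary mask, intermediate values yield a gradual transition between preserved and regenerated regions, which is what makes the edits spatially smooth.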

A key feature of this workflow is its ControlNet integration, which uses preprocessors to extract guidance signals such as Canny edges or depth maps from an image; these signals help preserve the structure of the original during editing. The workflow also supports inpainting via the InpaintModelConditioning node, which regenerates specific regions of the image while leaving the surrounding context untouched. Together, these techniques enable precise, seamless modifications, making the workflow particularly useful for creative professionals and digital artists.
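The two ideas in this paragraph can be sketched in a few lines of numpy. The edge extractor below is a simplified stand-in for a Canny-style ControlNet preprocessor (gradient magnitude plus a threshold; a real preprocessor also does smoothing, non-maximum suppression, and hysteresis), and `inpaint_composite` mirrors the inpainting principle of regenerating only the masked region. Both functions are illustrative assumptions, not the workflow's actual node code.

```python
import numpy as np

def sobel_edges(gray, thresh=0.2):
    """Toy edge map: Sobel gradient magnitude, thresholded relative to its max."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(gray, 1, mode="edge")
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    h, w = gray.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 3, j:j + 3]
            gx[i, j] = (window * kx).sum()
            gy[i, j] = (window * ky).sum()
    mag = np.hypot(gx, gy)
    if mag.max() == 0:
        return np.zeros_like(gray, dtype=np.uint8)
    return (mag > thresh * mag.max()).astype(np.uint8)

def inpaint_composite(original, generated, mask):
    """Keep original pixels where mask == 0 and generated pixels where
    mask == 1, mirroring how inpaint conditioning preserves unmasked context."""
    return np.where(mask.astype(bool), generated, original)
```

An edge map produced this way can act as a structural guidance signal: it marks object boundaries that the edited result should respect, even inside the inpainted region.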