How to Install ControlNet Extension in Stable Diffusion (A1111)
This blog post provides a step-by-step guide to installing ControlNet for Stable Diffusion, emphasizing its features, installation process, and usage.
Table of Contents
1. Introduction
2. Installing the ControlNet Extension
3. Downloading Pre-trained Models
4. Pairing Models with Pre-Processors
5. Conclusion
1. Introduction
In the ever-evolving landscape of artificial intelligence, tools that enhance creativity and output quality have become essential. One such tool is ControlNet, an extension designed specifically for Stable Diffusion. ControlNet is a neural network structure that augments control over diffusion models by introducing additional conditioning inputs, such as edge maps or pose skeletons. This blog aims to provide a comprehensive guide to installing ControlNet, ensuring you can leverage its capabilities to generate improved and finely controlled outputs.
2. Installing the ControlNet Extension
To install the ControlNet extension, open the web UI interface and follow these steps:
- Navigate to the "Extensions" tab.
- Click on "Install from URL".
- Paste the Git URL: https://github.com/Mikubill/sd-webui-controlnet
- Click "Install".
- Once done, click "Apply and restart UI", or close the Stable Diffusion web UI and restart it.
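As a quick sanity check after restarting, you can verify that the extension folder was created on disk. The sketch below is illustrative; the web UI root path is an assumption for a default install and may differ on your machine:

```python
from pathlib import Path

def controlnet_installed(webui_root: Path) -> bool:
    """Return True if the ControlNet extension folder exists under the web UI root."""
    # The folder name matches the cloned repository name.
    return (webui_root / "extensions" / "sd-webui-controlnet").is_dir()

# Assumed default install location -- adjust to match your setup.
print(controlnet_installed(Path.home() / "stable-diffusion-webui"))
```

If this prints `False` after a restart, re-run the "Install from URL" steps above.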
When successfully installed, you should see the ControlNet panel in both the 'txt2img' and 'img2img' tabs. It should look like this when expanded:
3. Downloading Pre-trained Models
The next critical step is obtaining the necessary pre-trained models. ControlNet functions only in combination with these models, which determine how the extension processes its inputs. Here’s how to download and set them up:
- Visit the Hugging Face website to access the repository of original pre-trained models.
- Download at least one model; for maximum versatility, it is advisable to acquire all available models.
- Once downloaded, move the models from your Downloads folder into "extensions/sd-webui-controlnet/models" within the Stable Diffusion folder. Example below:
You also have the option to download the .safetensors pre-trained models, which consume less storage space.
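The move from the Downloads folder into the models folder can also be scripted. This is a minimal sketch assuming a default install path and that the model files end in `.pth` or `.safetensors`; adjust both for your setup:

```python
import shutil
from pathlib import Path

def install_models(downloads: Path, webui_root: Path) -> list:
    """Move downloaded ControlNet model files into the extension's models folder."""
    models_dir = webui_root / "extensions" / "sd-webui-controlnet" / "models"
    models_dir.mkdir(parents=True, exist_ok=True)
    moved = []
    # Both the original .pth models and the smaller .safetensors variants work.
    for pattern in ("*.pth", "*.safetensors"):
        for model_file in downloads.glob(pattern):
            destination = models_dir / model_file.name
            shutil.move(str(model_file), str(destination))
            moved.append(destination)
    return moved
```

Call it as, for example, `install_models(Path.home() / "Downloads", Path.home() / "stable-diffusion-webui")`, then click the refresh icon in the ControlNet panel so the UI picks up the new files.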
4. Pairing Models with Pre-Processors
Each model needs to be paired with the appropriate pre-processor. For example, if you're using the canny pre-processor, pair it with the original pre-trained canny model. Example:
The same goes for the depth, HED, MLSD, normal map, OpenPose, scribble, and segmentation models. Ensure that the correct combination is selected.
If a specific model does not appear immediately, simply click the refresh icon next to the respective section.
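The pairings above can be summarized in a small lookup table. The filenames below follow the original SD 1.5 ControlNet release and are illustrative; check the exact names in your models folder, since newer releases use different naming:

```python
# Illustrative pre-processor -> model pairings; filenames assume the
# original SD 1.5 ControlNet release and may differ in newer versions.
PREPROCESSOR_TO_MODEL = {
    "canny": "control_sd15_canny.pth",
    "depth": "control_sd15_depth.pth",
    "hed": "control_sd15_hed.pth",
    "mlsd": "control_sd15_mlsd.pth",
    "normal_map": "control_sd15_normal.pth",
    "openpose": "control_sd15_openpose.pth",
    "scribble": "control_sd15_scribble.pth",
    "segmentation": "control_sd15_seg.pth",
}

def model_for(preprocessor: str) -> str:
    """Return the pre-trained model filename that pairs with a pre-processor."""
    return PREPROCESSOR_TO_MODEL[preprocessor]

print(model_for("canny"))  # control_sd15_canny.pth
```

Whatever the exact filenames, the rule of thumb holds: the model you select in the panel should match the pre-processor by name.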
5. Conclusion
In conclusion, installing ControlNet for Stable Diffusion opens up a realm of creative possibilities. By following the outlined steps, you can seamlessly integrate this extension into your workflow, allowing for enhanced control over output generation. From installing the extension to downloading the pre-trained models and pairing them with the right pre-processors, each step of this process is designed to help you align outputs with your unique creative vision. With patience and careful attention to detail, you are now equipped to explore the full capabilities of ControlNet, paving the way for innovative and engaging content generation.