In this comprehensive step-by-step tutorial, we will guide you through installing Deforum, configuring your prompts and settings, and transforming your own videos into something remarkable. Let's dive in!
Before we get started, make sure you have Deforum and ControlNet installed. If you haven't installed Stable Diffusion and ControlNet yet, you can follow our comprehensive guides:
How to make Amazing AI Videos with Deforum (Stable Diffusion)
How to Install ControlNet Extension in Stable Diffusion (A1111)
Make sure you also download the Tile and OpenPose models for ControlNet; these can be found on the Hugging Face ControlNet models page.
Place the files in the following directory: stable-diffusion-webui\extensions\sd-webui-controlnet\models. For this tutorial I will be using the Rev Animated checkpoint.
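If you'd rather script the download than grab the files by hand, here is a minimal sketch using the huggingface_hub Python package. The repo ID and file names are assumptions based on the ControlNet 1.1 release, so double-check them against the model page you are downloading from.

```python
from huggingface_hub import hf_hub_download

# Target folder inside your WebUI install (adjust if your path differs).
models_dir = r"stable-diffusion-webui\extensions\sd-webui-controlnet\models"

# Assumed repo and file names for the ControlNet 1.1 Tile and OpenPose models.
for filename in [
    "control_v11f1e_sd15_tile.pth",
    "control_v11p_sd15_openpose.pth",
]:
    hf_hub_download(
        repo_id="lllyasviel/ControlNet-v1-1",
        filename=filename,
        local_dir=models_dir,
    )
```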
The initial noise multiplier parameter increases the amount of noise, resulting in more detailed images. In the case of Deforum, however, we don't want a lot of extra detail, because it causes flickering. By default the minimum value of this setting is 0.5, so we are going to change it so we can set it to 0.
To access the initial noise multiplier easily in Stable Diffusion, you want to enable it in the Quicksettings list of the user interface. To do this, head to the Settings tab, look for “User interface”, and scroll down a little until you find the Quicksettings list. Here you type “initial_noise_multiplier” and press Enter. Apply the settings and reload the UI.
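If you prefer editing the WebUI configuration file directly instead of clicking through the Settings tab, here is a minimal sketch. It assumes the standard A1111 layout where config.json sits in the stable-diffusion-webui folder; depending on your WebUI version the quicksettings are stored either as a list ("quicksettings_list") or as a comma-separated string ("quicksettings").

```python
import json
from pathlib import Path

# Assumed location of the A1111 config file; adjust if your install differs.
config_path = Path("stable-diffusion-webui/config.json")
config = json.loads(config_path.read_text(encoding="utf-8"))

# Newer WebUI builds use a list, older builds a comma-separated string.
if isinstance(config.get("quicksettings_list"), list):
    if "initial_noise_multiplier" not in config["quicksettings_list"]:
        config["quicksettings_list"].append("initial_noise_multiplier")
elif isinstance(config.get("quicksettings"), str):
    if "initial_noise_multiplier" not in config["quicksettings"]:
        config["quicksettings"] += ", initial_noise_multiplier"

config_path.write_text(json.dumps(config, indent=4), encoding="utf-8")
print("Done - restart the WebUI to see the new quicksetting.")
```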
Now we can get started with creating the prompts and settings for your video. First we need an image or frame from the video we want to transform. There are multiple ways of doing this, but I suggest using editing software like Premiere Pro or DaVinci Resolve. Alternatively, you can take a screenshot of the video with the Snipping Tool built into Windows. Once you have the image, we can start with our prompt.
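If you'd rather grab the frame with a few lines of Python than open an editor, here is a minimal sketch using OpenCV (opencv-python). The file names are just placeholders for your own video and output paths.

```python
import cv2

video_path = "video1.mp4"       # placeholder: your original video
output_path = "init_frame.png"  # placeholder: where to save the frame
frame_index = 0                 # which frame to grab (0 = first frame)

cap = cv2.VideoCapture(video_path)
cap.set(cv2.CAP_PROP_POS_FRAMES, frame_index)
ok, frame = cap.read()
cap.release()

if ok:
    cv2.imwrite(output_path, frame)
    print(f"Saved frame {frame_index} to {output_path}")
else:
    print("Could not read the requested frame - check the path and index.")
```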
Open the Image to Image (img2img) tab. Set your prompt and negative prompt, and load your image. It’s crucial that you keep your prompts short; I suggest staying under 75 characters for the positive prompt. For the negative prompt I advise using the EasyNegative negative embedding.
Make sure the initial noise multiplier is set to 0.
We will be utilizing 2 ControlNet units. For the first ControlNet unit use these settings:
For the second ControlNet Unit follow these settings:
Make sure both ControlNet units are enabled and hit generate!
You now have the option to modify certain settings until you achieve an output that meets your preferences. Feel free to make changes such as adjusting the prompts, modifying the CFG Scale, and altering the Control Weight of the Tile ControlNet unit. Once you're content with the output, it's time to transfer all these settings to Deforum!
To achieve your desired style, LoRAs are key. If you're unfamiliar with LoRAs, I highly recommend checking out our guide.
With LoRAs, you can easily transform your original video into different styles, like this One Piece style I found on CivitAI. Just take a look at the before and after!
At first glance the Deforum tab looks a bit overwhelming, but we only need to make a few adjustments; after that we can save these settings and load them again with a few clicks! I will walk you through all the tabs we will be using, from left to right. If you don't want to do all of this manually, you can load the settings file I have created for you, which you can download on mega.nz.
Within the Keyframes tab, we encounter several additional sub-tabs where we need to adjust some settings.
Change the strength schedule from the default of 0.65 to 0.
Change the CFG scale schedule from the default of 7 to somewhere between 3 and 5.
Set the seed behavior to “fixed”; this keeps the video consistent.
Now we move on to the tabs below.
Set Translation Z to 0.
For the noise schedule I advise staying between 0 and 0.03; usually 0 works best.
Here we change the Color Coherence to None. If you want to keep the original colors from the video, you can set this to Video Input instead.
Change the Amount Schedule to 0.05.
If you’re using an NVIDIA graphics card you don’t need to change anything here, but if you’re on an AMD card you want to disable “Use depth warping”.
That was it for the Keyframes tab, now we move back to the main tabs.
Paste the prompt you used in the img2img tab here, making sure to use the right format: { "0": "Your prompt here" }. Don’t forget to paste your negative prompt into the Negative Prompt section.
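If you're unsure whether your prompt text is valid for that box, a quick way to check is to build it as a Python dictionary and dump it to JSON. A single keyframe at frame 0 is all this workflow needs; the extra keyframe below is just an optional illustration of how the same field can switch prompts mid-video.

```python
import json

prompts = {
    "0": "Your prompt here",
    # "120": "an optional second prompt that takes over at frame 120",
}

# Paste the printed JSON into the Deforum Prompts box.
print(json.dumps(prompts, indent=4))
```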
We will change two sub-tabs in the Init tab; this is where we select our video.
Set the Strength to 0.
Here you want to paste the path to your original video. You can copy the path of your video by right-clicking it and choosing “Copy as path” (on Windows 10, use Shift + right-click). Make sure to remove the double quotes at the start and end of the path. Then, enable “Overwrite extracted frames”.
Here we will copy over all of our settings that we used in the img2img ControlNet units. The layout is a little bit different but there’s nothing to worry about.
Enable the first ControlNet unit and put the same settings we used before.
Now enable the second ControlNet Unit and copy over those settings as well.
Now for both ControlNet units it’s important to copy the original video path into the “ControlNet Input Video/Image path”, simply copy the path from the “init” tab and paste it in both ControlNet units.
Then open Hybrid Schedules beneath the settings we just adjusted and set “Comp alpha schedule” to 1.
Set the frames per second to match the frames per second of your original video.
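If you don't know the frame rate of your source clip off the top of your head, a short OpenCV snippet can read it for you (the path below is a placeholder for your own video).

```python
import cv2

cap = cv2.VideoCapture(r"C:\Users\Desktop\video\video1.mp4")  # placeholder path
fps = cap.get(cv2.CAP_PROP_FPS)
cap.release()

print(f"Set Deforum's FPS field to roughly: {fps:.2f}")
```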
To make your life a whole lot easier, we can save these settings to a text file and load them back in the next time we start up Stable Diffusion. Simply give your settings file a name and click Save Settings; this saves a text file to your stable-diffusion-webui directory. To load these settings the next time you start Stable Diffusion, copy the path of the settings file, paste it under Settings File, and press “Load All Settings”.
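In my experience the saved settings file is plain JSON despite the .txt extension, so you can sanity-check it before loading it back in. This is a small sketch under that assumption, with a placeholder file name.

```python
import json
from pathlib import Path

# Placeholder: point this at the settings file you just saved.
settings_path = Path("stable-diffusion-webui/my_deforum_settings.txt")

# Assumption: the file is JSON; if this raises an error, the file is
# incomplete or was edited by hand.
settings = json.loads(settings_path.read_text(encoding="utf-8"))
print(f"Loaded {len(settings)} settings, for example: {sorted(settings)[:5]}")
```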
Now, let's generate the magic! Click the Generate Button and witness Deforum in action, transforming your original video into an incredible masterpiece!
Now it’s time to upscale your video. I highly recommend Topaz Labs’ AI video upscaler for this task. Topaz also has a built-in feature that lets you interpolate frames for an even smoother video!
If you're getting some unexpected errors please head to the FAQ section down below.
In conclusion, this step-by-step guide has given you the tools and knowledge to unlock your creative potential with Stable Diffusion and Deforum. From installation to advanced settings, you've learned how to combine Deforum's animation engine with ControlNet's precise guidance to craft videos that captivate and inspire. So go ahead, hit that Generate button, and watch Deforum turn your original footage into a masterpiece that showcases your unique vision and creativity.
Absolutely! This guide is designed to be user-friendly, making it accessible to both budding creators and seasoned professionals. The step-by-step instructions break down complex concepts, ensuring that even beginners can create stunning videos with Deforum and Stable Diffusion.
Some extensions cause conflicts between Deforum and ControlNet, so I advise disabling any extensions you don't need during this process. You can do this by going to the Extensions tab, unchecking the extensions you don't need, and then pressing Apply and restart UI.
ControlNet is a groundbreaking technique that enhances video transformations by providing precise control over various aspects of the process. This guide shows you how to leverage ControlNet units, such as Tile and OpenPose, to achieve desired effects and seamlessly merge technology and creativity.
Verify the accuracy of the path/URL entered under the "Init" > "Video Init" > "Video init path/URL" field. Furthermore, ensure that both ControlNet units, CN Model 1 and CN Model 2, have the correct video path entered in the "ControlNet Input Video/Image Path" field.
Example of video path: "C:\Users\Desktop\video\video1.mp4"
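If you want to rule out path problems quickly, a few lines of Python can check every path you entered. The paths below are placeholders; use the exact values from your Init tab and both ControlNet units.

```python
import os

paths = [
    r"C:\Users\Desktop\video\video1.mp4",  # placeholder: Video init path
    r"C:\Users\Desktop\video\video1.mp4",  # placeholder: CN unit 1 input path
    r"C:\Users\Desktop\video\video1.mp4",  # placeholder: CN unit 2 input path
]

for path in paths:
    cleaned = path.strip('"')  # stray quotes are a common cause of errors
    status = "OK" if os.path.isfile(cleaned) else "NOT FOUND"
    print(f"{status}: {cleaned}")
```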