In this Stable Diffusion tutorial, we'll walk you through creating AI animation videos using a combination of powerful tools: the Roop faceswap extension, ControlNet, and the Ebsynth Utility extension in the Stable Diffusion AUTOMATIC1111 web UI.

Check out the full YouTube Short: https://www.youtube.com/shorts/YY4905dNLA8

Check out our other SD tutorials: https://thefuturethinker.org/category/artificial-intelligence/stable-diffusion/

All Google Colab links are here: https://thefuturethinker.org/stable-diffusion-google-colab-ipynb-list/

Today we'll replace the face of the main character in a video and use ControlNet to transform the background, outfit, and theme, producing a video that's entirely different from the source material.

We’ll take a short dance video as an example. The initial character looks like this, but we’ll use the Rev Animated checkpoint to transform her into an animated cartoon style. Then, with the Roop Faceswap extension, we’ll morph our dancing character into Nancy.

While we won’t dive into detailed steps for the Ebsynth Utility extension in this tutorial, you can check out our previous tutorial for a comprehensive explanation. This time, we’re applying the same FPS rate as the source video, ensuring detailed movement in our AI animation.

Here's where the fun begins. We'll explore style changes in image-to-image batch generation. We'll enable two ControlNet units, using the OpenPose and Canny preprocessors, and adjust the style until it meets our expectations. In the text prompt, we'll specify details like neon lights, a pink T-shirt, and a mini skirt.
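If you prefer driving this step from a script instead of the UI, here is a rough sketch of a single img2img call through the AUTOMATIC1111 API (launched with --api) with two ControlNet units. The server address, model names, weights, and exact payload fields are assumptions to check against your own install and ControlNet version; a batch run would simply loop this over every extracted frame.

```python
# Sketch: one img2img call with OpenPose + Canny ControlNet units via the webui API.
import base64
import requests

URL = "http://127.0.0.1:7860"  # assumed local webui address

def b64(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

frame = b64("project/video_frame/00001.png")  # assumed frame path

payload = {
    "init_images": [frame],
    "prompt": "anime style, dancing girl, neon lights, pink t-shirt, mini skirt",
    "negative_prompt": "blurry, lowres, extra limbs",
    "denoising_strength": 0.5,
    "steps": 25,
    "alwayson_scripts": {
        "controlnet": {
            "args": [
                {   # unit 1: keep the dancer's pose
                    "enabled": True,
                    "input_image": frame,
                    "module": "openpose",
                    "model": "control_v11p_sd15_openpose",  # assumed model name
                    "weight": 1.0,
                },
                {   # unit 2: preserve edges and outlines
                    "enabled": True,
                    "input_image": frame,
                    "module": "canny",
                    "model": "control_v11p_sd15_canny",     # assumed model name
                    "weight": 0.8,
                },
            ]
        }
    },
}

r = requests.post(f"{URL}/sdapi/v1/img2img", json=payload, timeout=600)
r.raise_for_status()
with open("project/img2img_out/00001.png", "wb") as f:  # assumed output path
    f.write(base64.b64decode(r.json()["images"][0]))
```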

We’ll also enable the Roop extension for faceswap, using Nancy’s image for this tutorial. After generating, we’ll select the best result for our dancing video.

The process will take a bit, but patience is key. While waiting, you can explore the generated frames in the frames folder.

Next, we'll upscale the images. There are 1,200 frames to upscale, so we'll use Batch Upscale in the Ebsynth Utility, setting the input and output directories along with the width and height of the upscaled images. We'll choose an upscaler such as 4x-UltraSharp and start the process.
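For reference, the same batch upscale can also be driven through the web UI's extras API. The sketch below is a rough equivalent; the directories, target size, and the exact upscaler name as it appears in your install are assumptions.

```python
# Sketch: batch-upscale every generated frame via the webui extras endpoint.
import base64
from pathlib import Path
import requests

URL = "http://127.0.0.1:7860"                 # assumed local webui address
IN_DIR = Path("project/img2img_out")          # assumed input directory
OUT_DIR = Path("project/img2img_upscaled")    # assumed output directory
OUT_DIR.mkdir(parents=True, exist_ok=True)

for frame in sorted(IN_DIR.glob("*.png")):
    img_b64 = base64.b64encode(frame.read_bytes()).decode()
    payload = {
        "resize_mode": 1,              # resize to an explicit width/height
        "upscaling_resize_w": 1080,    # assumed target width
        "upscaling_resize_h": 1920,    # assumed target height (vertical Shorts)
        "upscaler_1": "4x-UltraSharp", # assumed upscaler name on this install
        "image": img_b64,
    }
    r = requests.post(f"{URL}/sdapi/v1/extra-single-image", json=payload, timeout=600)
    r.raise_for_status()
    (OUT_DIR / frame.name).write_bytes(base64.b64decode(r.json()["image"]))
    print(f"upscaled {frame.name}")
```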

After the upscale is complete, we’ll have all frames of upscaled images in the designated folder.

In Stage 5, we'll let the extension rename the files and generate the Ebsynth project files. The high frame rate means there are many project files to handle, so we'll focus on 300 frames for this demo; the full 18-second dance animation will be uploaded to our YouTube Shorts channel soon.
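Conceptually, the renaming part of Stage 5 just copies the stylised frames into the zero-padded sequence Ebsynth can read. A rough sketch of that idea is below; the folder names are assumptions, and the real stage also writes the .ebs project files, which this doesn't attempt.

```python
# Sketch: normalise frame filenames into a zero-padded sequence for Ebsynth.
import shutil
from pathlib import Path

SRC = Path("project/img2img_upscaled")  # assumed upscaled-frame folder
DST = Path("project/img2img_key")       # assumed keyframe folder
DST.mkdir(parents=True, exist_ok=True)

for i, frame in enumerate(sorted(SRC.glob("*.png")), start=1):
    shutil.copy(frame, DST / f"{i:05d}.png")  # e.g. 00001.png, 00002.png, ...
```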

Once the animation video is generated in Stage 7, we’ll take a look at the impressive result.
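If you ever need to rebuild the final video yourself, stitching the blended frames back together with ffmpeg at the source frame rate looks roughly like this; the frame folder, FPS value, and output name are assumptions carried over from the earlier steps.

```python
# Sketch: assemble the blended frames into an MP4 at the source frame rate.
import subprocess

FPS = 30  # assumed: reuse the rate measured from the source clip
subprocess.run(
    ["ffmpeg", "-framerate", str(FPS),
     "-i", "project/crossfade/%05d.png",   # assumed blended-frame folder
     "-c:v", "libx264", "-pix_fmt", "yuv420p",
     "dance_animation.mp4"],
    check=True,
)
```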

And that’s not all! The full dance animation will be uploaded soon on YouTube Shorts, so if you haven’t already, please subscribe to our channel and hit the notification bell to stay updated on our animations and other fun content.

That’s a wrap for today’s tutorial. We hope this inspires you to experiment with different extensions and create your own animations. Stay tuned for more exciting content, and we’ll catch you in the next video. Have a fantastic day!

#AIAnimation
#RoopFaceswap
#EbsynthUtility
#YouTubeShorts
#AnimationTutorial
