#a1111 #stablediffusion #fashion #ipadapter #clothing #controlnet #afterdetailer #aiimagegeneration #tutorial #guide

This video is mainly about using IP Adapter in Automatic1111 (A1111).
The video explains:
1- How IP Adapter works and how it is used in text-to-image, image-to-image, and inpainting
2- How to change a person's dress based on a reference dress
3- A supplementary section on automatically changing the dress in an existing image with the help of After Detailer, without any LoRA training
Note that IP Adapter can also be used to create more stylized, stable videos in img2img or with AnimateDiff.

00:00:00 Introduction and sample results
Examples of what we will learn and generate in this video, such as changing an image's style based on a reference image, mixing two images, and changing a person's dress manually and automatically.
00:00:46 Downloading the IP Adapter models
00:02:06 How to use IP Adapter in A1111 alone
Understanding how IP Adapter works in the background, with an example of generating a robot from a golden sphere, changing settings, and what to expect.
00:04:20 Using IP Adapter with other ControlNets: examples
00:04:55 Statue face with glasses and hat example
00:07:07 IP Adapter with OpenPose ControlNet using LCM LoRA: examples
Generating images faster using LCM combined with IP Adapter and ControlNet.
00:08:12 Mixing two images using IP Adapter and a depth map, or using two IP Adapters
00:08:50 img2img usage with IP Adapter
00:09:21 Inpaint example: an older woman's face on the body of a younger woman
00:12:55 Changing dresses based on a reference dress image: examples
Using IP Adapter to change a dress manually based on a reference dress.
00:16:19 Automatically detecting clothes and changing them using the After Detailer fashion model
The settings needed to make the fashion model in After Detailer work properly, so it changes clothes automatically from a prompt without you masking them yourself.

Here we see some examples of what we will do in this video.
GitHub page of IP Adapter
https://github.com/tencent-ailab/IP-Adapter
Download the IP Adapter ControlNet models from
https://huggingface.co/h94/IP-Adapter/tree/main
Download the other ControlNet models from
https://huggingface.co/lllyasviel/sd_control_collection/tree/main
Download the Realistic Vision model from
https://civitai.com/models/4201/realistic-vision-v51
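
A note on where the files typically go in a default A1111 install (paths may differ on yours): the IP Adapter and other ControlNet models usually go in stable-diffusion-webui/extensions/sd-webui-controlnet/models, and the Realistic Vision checkpoint goes in stable-diffusion-webui/models/Stable-diffusion.

If you prefer to drive A1111 from a script instead of the UI, below is a minimal sketch of a txt2img call with one IP Adapter ControlNet unit, assuming the web UI was started with --api and the sd-webui-controlnet extension is installed. The module and model strings are assumptions; check /controlnet/module_list and /controlnet/model_list on your install for the exact names.

import base64
import requests

# Reference image whose style/content the IP Adapter should follow
with open("golden_sphere.png", "rb") as f:
    ref_image = base64.b64encode(f.read()).decode()

payload = {
    "prompt": "a robot",
    "steps": 25,
    "width": 512,
    "height": 512,
    "alwayson_scripts": {
        "controlnet": {
            "args": [
                {
                    "input_image": ref_image,          # reference image for IP Adapter
                    "module": "ip-adapter_clip_sd15",  # preprocessor name (assumed, verify on your install)
                    "model": "ip-adapter_sd15",        # model name as shown in your model list (assumed)
                    "weight": 1.0,                     # how strongly the reference steers the result
                }
            ]
        }
    },
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
r.raise_for_status()
images = r.json()["images"]  # base64-encoded generated images

Lowering the weight (for example to 0.5) reduces how much of the reference image carries over, similar to moving the weight slider in the ControlNet panel.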

Watch the Stable Diffusion and A1111 guide if you are new to AI image generation in SD
https://youtu.be/RtjDswbSEEY
Watch the complete guide on ControlNet usage
https://youtu.be/13fgBBI-ZXU
Watch the After Detailer guide
https://youtu.be/WtGxmn-0qSw
Watch the guide on developing a LoRA model for clothes
https://youtu.be/wJX4bBtDr9Y

Thanks to all creators from Pexels.com and Freepik.com for the images they provided
https://www.pexels.com/
https://www.freepik.com/

Prompts used (all displayed in the video):
For IP Adapter, mostly simple prompts such as "a robot". For more complicated prompts I used the Realistic Vision recommended prompt, which is:
Prompt:
RAW photo, SUBJECT, 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3

Negative prompt:
(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation,
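
For example, for the dress-change sections, SUBJECT is replaced with a description of the person and the target clothing (the wording below is only an illustration, not copied from the video):
RAW photo, a woman wearing a red floral summer dress, 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3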

Computer Specs:
Laptop: Legion 5 Pro
Processor: AMD Ryzen 7 5800H, 3201 MHz
System RAM: 16.0 GB
Graphics GPU: NVIDIA GeForce RTX 3070 Laptop GPU 8GB

Support the Hairy Eyeball
