ComfyUI multi-ControlNet examples (from Reddit).
So here are my steps: create a base image with the desired number of characters using OpenPose, inpaint the desired characters, then combine all the LoRA models using ModelMergeSimple (a rough sketch of that merge step follows further down). In this example there are two X variables and two Y variables.

I love ComfyUI, but it is difficult to set up a workflow to create animations as easily as it can be done in Automatic1111. I have LoRAs working, but I just don't know how to do ControlNet with this. The ControlNet input is just 16 FPS in the portal scene and rendered in Blender, and my ComfyUI workflow is just your single ControlNet video example, modified to swap the ControlNet for QR Code Monster and using my own input video frames and a different SD model + VAE, etc.

In this example, we will use a combination of Pose ControlNet and Scribble ControlNet to generate a scene containing multiple elements: a character on the left controlled by Pose ControlNet and a cat on a scooter on the right controlled by Scribble ControlNet. While I'd personally like to generate rough sketches that I can use as a frame of reference when drawing later, we will work on creating full images that you could use to create entire working pages. If true, the conditioning will be with ControlNet, and if false, the conditioning will be without ControlNet.

Keep your models in your A1111 installation and find the ComfyUI file named extra_model_paths.yaml; all you have to do is change the base_path to where yours is installed and ComfyUI will load it (the a1111 section of that file is reproduced further down this page).

Ok, so I started a project last fall, around the time the first ControlNets for XL became available. Back then it was only Canny and Depth, and these were not official releases. But usually you are driving them too hard.

Note, this site has a lot of NSFW content. You can load this image in ComfyUI to get the full workflow. ComfyUI ControlNet regional division mixing example. There is a ton of stuff here and it may be a bit overwhelming, but it is worth exploring. Larger images also look better after refining, but on 4 GB we aren't going to get away with anything bigger than maybe 1536 x 1536. TLDR: THE LAB EVOLVED is an intuitive, all-in-one workflow - lots of pieces to combine with other workflows.

Hi there! I recently installed ComfyUI after doing A1111 all this time; seeing some speed improvements made me curious to make the switch. A few days ago I implemented T2I-Adapter support in my ComfyUI, and after testing them out a bit I'm very surprised how little attention they get compared to ControlNets. One question: when doing txt2vid with Prompt Scheduling, any tips for getting more continuous video that looks like one continuous shot, without "cuts" or sudden morphs/transitions between parts?

I find this really good if I want an image with multiple people that look different; for example, using a vertical 50/50 mask with the overall prompt: photo of two men, arms around shoulders, looking at camera, smiling, in a park.

I saw a tutorial, a long time ago, about the ControlNet preprocessor "reference only".
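To make the ModelMergeSimple step in those first instructions a bit more concrete, here is a rough sketch in ComfyUI's API prompt format. This is an illustration under assumptions, not the original poster's workflow: the node class names and input names (CheckpointLoaderSimple, LoraLoader, ModelMergeSimple) are the stock ComfyUI nodes as I understand them, and every file name is a placeholder.

# Sketch: apply each character LoRA to its own copy of the base model,
# then merge the two patched models 50/50. This is a fragment of an
# API-format prompt dict; the merged MODEL output ["4", 0] would feed the sampler.
lora_merge = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "base_model.safetensors"}},        # placeholder
    "2": {"class_type": "LoraLoader",
          "inputs": {"model": ["1", 0], "clip": ["1", 1],
                     "lora_name": "character_a.safetensors",          # placeholder
                     "strength_model": 1.0, "strength_clip": 1.0}},
    "3": {"class_type": "LoraLoader",
          "inputs": {"model": ["1", 0], "clip": ["1", 1],
                     "lora_name": "character_b.safetensors",          # placeholder
                     "strength_model": 1.0, "strength_clip": 1.0}},
    "4": {"class_type": "ModelMergeSimple",
          "inputs": {"model1": ["2", 0], "model2": ["3", 0], "ratio": 0.5}},
}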
Load the noise image into ControlNet. I also had issues with this workflow with unusually sized images.

Txt-to-img, img-to-img, inpainting, outpainting, image upscale, latent upscale, multiple characters at once, LoRAs, ControlNet, IP-Adapter, but also video generation, pixelization, 360 image generation, and even live painting! It includes literally everything possible with AI image generation.

May 12, 2025 - Tutorial on using multiple ControlNets together in ComfyUI. Remove 3/4 of the stick figures in the pose image. For example, when generating characters, if there is a misalignment in the limbs, you can overlay depth to ensure the correct front-back relationship of the limbs.

I was having trouble getting anything to look like the input image, but the issue was that I wasn't including the ControlNet (as I thought it was only needed for posing, and I was having trouble loading the example workflows). Many professional A1111 users know a trick to diffuse an image with references by inpainting.

An image of the node graph might help (although those aren't that useful to scan at thumbnail size), but the ability to search by nodes or features used, and the generation of models, would. Lol, that's silly - it's a chance to learn stuff you don't know, and that's always worth a look. Adding LoRAs in my next iteration.

Point the install path in the Automatic1111 settings to the ComfyUI folder inside your ComfyUI install folder, which is probably something like comfyui_portable\ComfyUI or something like that. And you may need to do some fiddling to get certain models to work, but copying them over works if you are super duper lazy. Open it up with Notepad, change the base_path location to your A1111 directory, and that's all you have to do.

So ControlNet will basically fill an empty vessel shaped like your example pose or outline with fused-together limbs and body parts. When you use ControlNet to determine the pose, then even if there's room in the canvas for more people, the guide skeleton's pose and position in frame determines where the person is, so that will let you have whatever composition you want.

People want to find workflows that use AnimateDiff (and AnimateDiff Evolved!) to make animation, do txt2vid, vid2vid, animated ControlNet, IP-Adapter, etc.

I've actually just done a total clean install of A1111 because it was bugging out for some reason. On input A I have the conditioning from Apply ControlNet, and on input B I have the conditioning from the positive prompt (without applying ControlNet).

Can someone please explain or provide a picture of how to connect two positive prompts to a model? 1st prompt: (Studio Ghibli style, Art by Hayao Miyazaki:1.2), Anime Style, Manga Style, Hand drawn, cinematic, Sharp focus, humorous illustration, big depth of field, Masterpiece, concept art, trending on ArtStation, Vivid colors, Simplified style, trending on CGSociety.

Reset the batch counter within the XY Grid Helper.

I installed InstantID for ComfyUI and tried the example workflows to understand how to use it. The multi-ID one looks like this, and is used to create an image with two people starting from two faces; can anyone help me understand which type of image should be inserted in the third input node? Thanks to those who will answer. All good dude.

I have used - LoRA: Thicker Lines Anime Style Lora Mix; ControlNet LineArt; ControlNet OpenPose; ControlNet TemporalNet (diffuser). Custom nodes in ComfyUI: ComfyUI Manager. You can achieve the same thing in A1111; Comfy is just awesome because you can save the workflow 100% and share it with others.
For example, this is what the workflow produces. Repeat the two previous steps for all characters. Agree with everyone saying to use both. 2 x 2 = 4 batch count.

ControlNet, on the other hand, conveys your intentions in the form of images. Using text has its limitations in conveying your intentions to the AI model. However, due to the more stringent requirements, while it can generate the intended images, it should be used carefully, as conflicts between the interpretation of the AI model and ControlNet's enforcement can lead to a degradation in quality.

This is awesome! Thank you! I have it up and running on my machine. Nine LoRA slots (with on/off toggles), post-processing options. I have used - Checkpoint: RevAnimated v1.2.

Here are the ControlNet settings, as an example. Step 3: modify your prompt or use a whole new one, and the face will be applied to the new prompt. After an entire weekend reviewing the material, I think (I hope!) I got the implementation right: as the title says, I included ControlNet XL OpenPose and FaceDefiner models.

With this new multi-input capability, the IP-Adapter-FaceID-portrait is now supported in A1111. This IP-Adapter is designed for portraits and also works well for blending faces, maintaining consistent quality across various prompts and seeds. It is much more coherent and relies heavily on the IP-Adapter source image, as you can see in the gallery.

ControlNet inpaint is probably my favorite model: the ability to use any model for inpainting is incredible, in addition to the no-prompt inpainting, and it gives great results when outpainting, especially when the resolution is larger than the base model's resolution. My point is that it's a very helpful tool. The first time, I used it like an img2img process with the lineart ControlNet model, where I used it as an image template, but it's a lot more fun and flexible using it by itself without other ControlNet models, as well as less time consuming.

I was frustrated by the lack of some ControlNet preprocessors that I wanted to use, so I decided to write my own Python script that adds support for more preprocessors. Going to python_embedded and using python -m pip install compel got the nodes working. You're absolutely right - for some reason two 128 ControlNet models were missing; I wish there were a way to check for missing models in one single pass when loading a workflow. Yeah, so it's probably the memory cache filling up from the website itself (Ctrl + F5 may help) or just memory piling up in Python.

Once I applied the Face Keypoints Preprocessor and ControlNet after the InstantID node, the results were really good.

Partial 3D model from SD images - still at a very early stage, but working on adding ControlNet for multiple views, fixing issues with mesh reconstruction from the point cloud, and a lot of tuning (so far it works great with close-up and sharp images). We've all seen the threads talking about SD3's inability to generate anatomy under certain conditions, but a lot of these issues can be mitigated with decent ControlNet models.

The output should be simple. XY grid example image. Can be overwhelming to "back read" for answers.
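Since the batch-count rule comes up a few times on this page, here is a tiny Python sanity check of the arithmetic (the values are placeholders, not the ones from the original grid):

# Batch count for an XY grid = one generation per (X, Y) combination.
x_values = [1.0, 1.4]   # placeholder X values
y_values = [0.0, 0.8]   # placeholder Y values
batch_count = len(x_values) * len(y_values)
print(batch_count)      # 2 x 2 = 4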
For now I got this: A gorgeous woman with long light-blonde hair wearing a low-cut tank top, standing in the rain on top of a mountain, highly detailed, artstation, concept art, sharp focus, illustration, art by Artgerm and Alphonse Mucha, trending on Behance, very detailed, by the best painters.

Multiply the number of variables in the X by the number of variables in the Y. Use this as your batch count (extra options > batch count).

The goal of this tutorial is to give an overview of a method I'm working on to simplify the process of creating manga, or comics. The actual setting in Automatic1111 is just text-to-image with a very short prompt, for testing.

For example, we can use a simple sketch to guide the image generation process, producing images that closely align with our sketch. In this example, we will guide you through installing and using ControlNet models in ComfyUI, and complete a sketch-controlled image generation example. Set the ControlNet parameters: Weight, Starting, and Ending values.

Vid2QR2Vid: you can see another powerful and creative use of ControlNet by Fictiverse here. You can condition your images with the ControlNet preprocessors, including the new OpenPose preprocessor compatible with SDXL, and LoRAs. And the other is the line art ControlNet, which has been out for a couple of weeks; I combine it with this new reference-only ControlNet to see how much of a reference is actually taken.

Allo! I am beginning to work with ComfyUI, moving from A1111. I know there are so many workflows published to Civitai and other sites; I am hoping to find a way to dive in and start working with ComfyUI without wasting much time on mediocre/redundant workflows, and am hoping someone can help me by pointing me toward a resource with some of the better-developed Comfy workflows.

When you use the term "weights", I am assuming you mean the base model, or a derivative thereof. In using multiple paths from multiple models, I have not found a solution to this, but I have kludged in a convenience LoRA loader that takes a path from one and pipes it to the other, so that I don't have to change/load the model for each.
I am hoping to find a good way of addressing individual zones of an image / masked regions with individual prompts. For example, I am trying to generate a kid jumping on hovering rocks; if I describe the textures of the clothing the kid is wearing, SD will make the rocks a similar texture and color. I would like to be able to mask the rocks and the clothes separately in a single generation.

Created by: OpenArt - Of course it's possible to use multiple ControlNets. In this example, we're chaining a Depth CN to give the base shape and a Tile ControlNet to get back some of the original colors. Currently, up to six ControlNet preprocessors can be configured to work concurrently, but you can add additional ControlNet stack nodes if you wish. There is an example of one in this YouTube video.

Brief introduction to ControlNet: ControlNet is a condition-controlled generation model based on diffusion models (such as Stable Diffusion), initially proposed by Lvmin Zhang and Maneesh Agrawala.

I haven't seen a tutorial on this yet. ControlNet is a vital tool in SD for me, so can anyone link me a working workflow that incorporates the possibility of multiple ControlNets together with SDXL + Refiner? This one looked good; however, it doesn't work (problems with the Sarge SDXL Sampler). What I need to do now: but I don't see it with the current version of ControlNet for SDXL. From what I see in the ControlNet and T2I-Adapter examples, this allows me to set both a character pose and the position in the composition.

Multi-ControlNet methodology. ControlNet is similar, especially with SDXL, where the CNs are very strong.

ControlNet + img2img workflow, Inpaint + ControlNet workflow, img2img + Inpaint + ControlNet workflow: does anyone have knowledge on how to achieve this? I want the output to incorporate these workflows in harmony, rather than simply layering them.

This is the input image that will be used in this example. Here is an example using a first pass with AnythingV3 with the ControlNet and a second pass without the ControlNet with AOM3A3 (Abyss Orange Mix 3), using their VAE. No models have a great grasp of concepts like two people hugging. It's kind of a workaround until we have proper CNs for SDXL, and it's not as powerful as sd-webui-controlnet for A1111, but it's still fun to use in a single workflow with SDXL models :).

Pose ControlNet. Thanks for pointing out this possibility. Unfortunately your examples didn't work. Civitai has a ton of examples, including many ComfyUI workflows that you can download and explore. Ok, I was meaning to try out ComfyUI, but you just pushed my prioritization for doing it way higher.
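For the Depth + Tile chaining described above, here is a hedged sketch of what chained conditioning can look like in ComfyUI's API prompt format, queued on a locally running server. This is not OpenArt's actual workflow: the node class names and input names (ControlNetApplyAdvanced, ControlNetLoader, KSampler, and so on) are the stock ComfyUI nodes as I understand them, the server address is the default one, and the checkpoint, ControlNet files and guide image are placeholders you would swap for files that actually exist in your install.

# Minimal sketch of two chained ControlNets in ComfyUI's API "prompt" format.
import json
import urllib.request

prompt = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd15_model.safetensors"}},            # placeholder
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a cat on a scooter, detailed", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 768, "height": 512, "batch_size": 1}},
    # Guide image shared here for brevity; use separate LoadImage nodes if
    # each ControlNet needs its own preprocessed map (depth map vs. original).
    "5": {"class_type": "LoadImage", "inputs": {"image": "guide.png"}},    # placeholder
    # First ControlNet: depth, strong, for the overall shape.
    "6": {"class_type": "ControlNetLoader",
          "inputs": {"control_net_name": "control_v11f1p_sd15_depth.pth"}},
    "7": {"class_type": "ControlNetApplyAdvanced",
          "inputs": {"positive": ["2", 0], "negative": ["3", 0],
                     "control_net": ["6", 0], "image": ["5", 0],
                     "strength": 0.9, "start_percent": 0.0, "end_percent": 1.0}},
    # Second ControlNet: tile, weaker, chained onto the FIRST one's outputs --
    # this chaining of conditioning is what multi-ControlNet means in ComfyUI.
    "8": {"class_type": "ControlNetLoader",
          "inputs": {"control_net_name": "control_v11f1e_sd15_tile.pth"}},
    "9": {"class_type": "ControlNetApplyAdvanced",
          "inputs": {"positive": ["7", 0], "negative": ["7", 1],
                     "control_net": ["8", 0], "image": ["5", 0],
                     "strength": 0.5, "start_percent": 0.0, "end_percent": 0.8}},
    # The sampler uses the conditioning from the LAST ControlNet in the chain.
    "10": {"class_type": "KSampler",
           "inputs": {"model": ["1", 0], "seed": 42, "steps": 25, "cfg": 7.0,
                      "sampler_name": "euler", "scheduler": "normal",
                      "positive": ["9", 0], "negative": ["9", 1],
                      "latent_image": ["4", 0], "denoise": 1.0}},
    "11": {"class_type": "VAEDecode",
           "inputs": {"samples": ["10", 0], "vae": ["1", 2]}},
    "12": {"class_type": "SaveImage",
           "inputs": {"images": ["11", 0], "filename_prefix": "multi_controlnet"}},
}

# Queue the graph on a locally running ComfyUI server (default address assumed).
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": prompt}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode("utf-8"))

As with the workflow described above, the balance comes from the two strength values: the shape ControlNet is kept strong and the color/detail ControlNet weaker, and both are worth adjusting to taste.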
Either way, this is a memory-intensive process, which is why I made the tutorial in the first place: you could technically use an image with preprocessing in all of them without making the templates, but that's just way too much memory.

May 12, 2025 - Then, in other ControlNet-related articles on ComfyUI-Wiki, we will specifically explain how to use individual ControlNet models with relevant examples.

Preprocessors used: softedge dexined, canny. Multiple ControlNet inputs? That sounds like ridiculously powerful potential.

Heyho, I'm wondering if you guys know of a comfortable method for multi-area conditioning in SDXL? My problem is that Davemane42's Visual Area Conditioning module has now gone about 8 months without any updates, and laksjdjf's attention-couple is quite complex to set up, with either manual calculation/creation of the masks or many more additional nodes. Now I just need more hours in the day to try to keep up with the lightning speed of the advancements in SD.

Txt/Img2Vid + Upscale/Interpolation: this is a very nicely refined workflow by Kaïros featuring upscaling, interpolation, etc. Closing notes.

Generate all key poses / costumes with any model in low res in ComfyUI, then narrow down to 2~3 usable poses. The secret of the details is in how much you define the upscale size; in this image, for example, I increased it by 2.5x.

FYI for anyone unaware, ReActor was updated in the last few weeks to allow you to build face models using multiple image inputs, and that leads to a noticeable improvement in likeness in the results. Can anyone show me a workflow or describe a way to connect an IP-Adapter to ControlNet and ReActor with ComfyUI? What I'm trying to do: use face 01 in IP-Adapter, use face 02 in ReActor, use pose 01 in both depth and openpose.

Reference Only is a ControlNet preprocessor that does not need any ControlNet model. You input that picture, use the "reference_only" preprocessor on ControlNet, choose "Prompt/ControlNet is more important", and then change the prompt text to describe anything else except the clothes, using maybe a 0.5 denoising value.

Aug 17, 2023 - multi-ControlNet (with on/off toggles), four ControlNet preprocessors. Does anyone have a tutorial for doing regional sampling + regional IP-Adapter in the same ComfyUI workflow? For example, I want to create an image which is "have a girl (with face-swap using this picture) in the top left, have a boy (with face-swap using another picture) in the bottom right, standing in a large field".

Enable ControlNet, set Preprocessor to "None" and Model to "lineart_anime". You can also specifically save the workflow from the floating ComfyUI menu, by the way. I'm also testing the printing of a few bits of information before every run.

INTRO. I've just added basic support for ControlNet models in my mixed SD+XL workflow - you can check out the new version, SD+XL v1. With IP-Adapter it's good practice to add extra noise, and also lower the strength somewhat, especially if you stack multiple. Making a bit of progress this week in ComfyUI.
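On the multi-area conditioning question above: Davemane42's module and attention-couple work differently, but a minimal sketch of the regional-prompt idea with only stock nodes might use ConditioningSetArea plus ConditioningCombine. Treat the node and input names as assumptions to verify against your install, and the ["1", 1] CLIP reference as a placeholder for your checkpoint loader.

# Fragment of an API-format prompt: one prompt per region, combined for the sampler.
region_prompting = {
    "10": {"class_type": "CLIPTextEncode",
           "inputs": {"text": "kid in a red raincoat jumping", "clip": ["1", 1]}},
    "11": {"class_type": "CLIPTextEncode",
           "inputs": {"text": "grey mossy hovering rocks", "clip": ["1", 1]}},
    # Left half of a 1024x512 canvas for the kid...
    "12": {"class_type": "ConditioningSetArea",
           "inputs": {"conditioning": ["10", 0], "width": 512, "height": 512,
                      "x": 0, "y": 0, "strength": 1.0}},
    # ...right half for the rocks.
    "13": {"class_type": "ConditioningSetArea",
           "inputs": {"conditioning": ["11", 0], "width": 512, "height": 512,
                      "x": 512, "y": 0, "strength": 1.0}},
    # The combined conditioning goes to the KSampler's positive input.
    "14": {"class_type": "ConditioningCombine",
           "inputs": {"conditioning_1": ["12", 0], "conditioning_2": ["13", 0]}},
}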
Image Load > OpenPose preprocessor > Apply ControlNet. For ControlNets, the large (~1 GB) ControlNet model is run at every single iteration for both the positive and negative prompt, which slows down generation time considerably and takes a bunch of memory. Preprocessors: open pose, zoe depth.

I don't think the generation info in ComfyUI gets saved with the video files. But if you saved one of the stills/frames using the Save Image node, or even if you saved a generated CN image using Save Image, it would transport it over.

#Rename this to extra_model_paths.yaml and ComfyUI will load it
#config for a1111 ui
#all you have to do is change the base_path to where yours is installed
a111:
    base_path: path/to/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
    configs: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    upscale_models: |
        models/ESRGAN

Comfy even made an extra_model_paths_example file to demonstrate what it would look like. You can use mklink to link to your existing models, embeddings, LoRAs and VAEs, for example: F:\ComfyUI\models>mklink /D checkpoints F:\stable-diffusion-webui\models\Stable-diffusion

With the option of an additional image preview after the preprocessor to see what ControlNet gets. I'm using ComfyUI portable and had to install it into the embedded Python install. Example of a multi-ControlNet setup: I am looking for a way to input an image of a character and then make it have different poses without having to train a LoRA, using ComfyUI. Made with multi-ControlNet, based on the guide from the amazing @. ComfyUI is hard.

Using multiple ControlNets to emphasize colors: in the WebUI settings, open the ControlNet options and set 'Multi ControlNet: Max models amount' to 2 or more. An example of Inpainting + ControlNet from the ControlNet paper. Mixing ControlNets.

ComfyUI question: does anyone know how to use ControlNet (one or multiple) with the Efficient Loader and ControlNet Stacker node? A picture example of a workflow will help a lot. Chaining together 3 identical LoRAs is unnecessary. The new version uses two ControlNet inputs: a 9x9 grid of openpose faces, and a single openpose face.

Generate the character with PonyXL in ComfyUI (put it aside). He got a channel? I tried u/AI_Characters' "Universal Negative Prompt" in multiple scenarios for a couple of days, and it always produces better images, as suggested. I improved on my previous expressions workflow for ComfyUI by replacing the attention-couple nodes with area composition ones.

So I would probably try three of those nodes in sequence, with the original conditioning going to the outer two, and your ControlNet conditioning going to the middle sampler; then you might be able to add steps to the first sampler or the end sampler to achieve this. Hence, it's enabled by default, but it's possible to disable it.

😋 The workflow is basically an image loader combined with a whole bunch of little modules for doing various tasks, like building a prompt with an image, generating a color gradient, or batch-loading images from a folder. Plus a quick run-through of an example ControlNet workflow. I also automated the split of the diffusion steps between the Base and the Refiner models.
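Here is a sketch of that "three samplers in sequence" idea, where only the middle stage sees the ControlNet conditioning. The KSamplerAdvanced input names are the stock node as I understand it, and the referenced node ids (["1", 0] model, ["2", 0]/["3", 0] plain positive/negative, ["7", 0]/["7", 1] ControlNet-applied positive/negative, ["4", 0] latent) are placeholders for nodes defined elsewhere in the graph; this is an assumption-laden illustration rather than the commenter's exact setup.

# Fragment of an API-format prompt: 30 total steps split across three samplers,
# with the ControlNet conditioning active only for steps 5-20.
step_split = {
    "20": {"class_type": "KSamplerAdvanced",        # steps 0-5, no ControlNet
           "inputs": {"model": ["1", 0], "add_noise": "enable", "noise_seed": 42,
                      "steps": 30, "cfg": 7.0, "sampler_name": "euler",
                      "scheduler": "normal", "positive": ["2", 0], "negative": ["3", 0],
                      "latent_image": ["4", 0],
                      "start_at_step": 0, "end_at_step": 5,
                      "return_with_leftover_noise": "enable"}},
    "21": {"class_type": "KSamplerAdvanced",        # steps 5-20, ControlNet active
           "inputs": {"model": ["1", 0], "add_noise": "disable", "noise_seed": 42,
                      "steps": 30, "cfg": 7.0, "sampler_name": "euler",
                      "scheduler": "normal", "positive": ["7", 0], "negative": ["7", 1],
                      "latent_image": ["20", 0],
                      "start_at_step": 5, "end_at_step": 20,
                      "return_with_leftover_noise": "enable"}},
    "22": {"class_type": "KSamplerAdvanced",        # steps 20-30, back to the plain prompt
           "inputs": {"model": ["1", 0], "add_noise": "disable", "noise_seed": 42,
                      "steps": 30, "cfg": 7.0, "sampler_name": "euler",
                      "scheduler": "normal", "positive": ["2", 0], "negative": ["3", 0],
                      "latent_image": ["21", 0],
                      "start_at_step": 20, "end_at_step": 30,
                      "return_with_leftover_noise": "disable"}},
}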
Without an example it's hard to tell.

For example, if you have a 512x512 image of a dog and want to generate another 512x512 image with the same dog, some users will combine the 512x512 dog image and a 512x512 blank image into a 1024x512 image, send it to inpaint, and mask out the blank 512x512 part to diffuse a dog with a similar appearance.

Generate one character at a time and remove the background with the Rembg Background Removal node for ComfyUI. Bring it into Fooocus for faceswap multiple times (no upscale, using different models), then bring it back into ComfyUI to upscale/prompt. Here's an example of how you would do that:

Change your ControlNet to an SDXL depth ControlNet and your results will at least start making sense. In other words, ControlNet gives it the shape of the vessel, but the model doesn't understand what to fill it with. ControlNet and LoRAs.

If your image is already big, you can use 1x and set the upscaler to none; that way more tiles will be created, and more details will be added to the faces. I also used a high denoising strength (0.85) and a second ControlNet.

Tutorial on using multiple ControlNets in ComfyUI: in ControlNet, multiple ControlNets can be combined to achieve more precise control. This is the easiest way to do it, imo. Here is the ControlNet write-up and here is the update discussion.

New tutorial: how to rent up to 1-8x 4090 GPUs and install ComfyUI (+ Manager, custom nodes, models, etc.).

Aug 17, 2023 - In the diagram below, the topmost active ControlNet in the Multi-ControlNet Stack nodes will be applied first, and then each successive active ControlNet works downwards. Inactive ControlNets will be ignored. The method used in CR Apply Multi-ControlNet is to chain the conditioning so that the output from the first ControlNet becomes the input to the next.

By default, the ControlNet module assigns a weight of `1 / (number of input images)`.

Since multiple SD3 ControlNet models have already been released, I'm wondering when I can actually use them - or if there is general news on progress regarding Comfy.

Is there a version of Ultimate SD Upscale that has been ported to ComfyUI? I am hoping to find a way to implement image2image in a pipeline that includes multi-ControlNet, and to make it so that all generations automatically get passed through something like SD upscale without me having to run the upscaling as a separate step. I keep meaning to set aside some time, but it's just been a nightmare couple of months (like genuine, multiple-family-emergency nightmare).
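The default weighting rule quoted above, `1 / (number of input images)`, in one line of Python just to make the behaviour explicit:

def default_weight(num_input_images: int) -> float:
    # Each input image gets an equal share of the total weight.
    return 1.0 / num_input_images

print(default_weight(4))  # 0.25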
An example might be using a latent upscale; it works fine, but it adds a ton of noise that can lead your image to change after going through the refining step. Now that I've got stuff like ControlNet, I'm gonna see what these things can do.