ComfyUI example workflows: if a box is in red, then it's missing. Pick a few LoRAs and use the XY LoRA plot with batch to see how each affects the output. It has backwards compatibility with running existing workflows. Once installed, download the required files and add them to the appropriate folders. Feel free to figure out a good setting for these. Denoise: unless you are doing vid2vid, keep this at 1.0. ComfyUI Manager will identify what is missing and download it for you.

Welcome to the unofficial ComfyUI subreddit. I've tried with A1111, Forge, and now with Comfy with the most basic LoRA workflow I was able to find. I need to run it through the KSampler again after upscaling.

Here is an example of 3 characters, each with its own pose, outfit, features, and expression. Left: woman wearing full armor, ginger hair, braided hair, hands on hips, serious. Middle: girl, princess dress, blonde hair, tiara, jewels, sitting on a throne, blushing.

For example, see this: SDXL Base + SD 1.5 + SDXL Refiner workflow. Try Civitai. Right-click anywhere on an empty portion of the workflow and see the option in the rgthree-comfy submenu. Plus a quick run-through of an example ControlNet workflow. Still working on the whole thing, but I got the idea down. To create this workflow I wrote a Python script to wire up all the nodes.

Motion LoRAs w/ Latent Upscale: this workflow by Kosinkadink is a good example of Motion LoRAs in action. I found it very helpful. Usually the smaller workflows are more efficient or make use of specialized nodes. Go to the ComfyUI Manager, click "Install Custom Nodes", and search for ReActor. It's closer, but still not as accurate as the sample images during training. I use the workflow(s) that are added when you install a node package to get a feel for what the package has to offer.
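The "script to wire up all the nodes" idea can be sketched against ComfyUI's API-format JSON, where each node is keyed by an id and inputs reference other nodes as [node_id, output_index] pairs. The node class names below (CheckpointLoaderSimple, CLIPTextEncode, EmptyLatentImage, KSampler) are real core nodes, but the graph, the checkpoint filename, and the helper function are an illustrative sketch, not the author's actual script.

```python
import json

def node(graph, class_type, **inputs):
    """Append a node to the graph dict and return its id for wiring."""
    nid = str(len(graph) + 1)
    graph[nid] = {"class_type": class_type, "inputs": inputs}
    return nid

graph = {}
# A minimal text2img chain: checkpoint -> prompts -> empty latent -> sampler.
ckpt = node(graph, "CheckpointLoaderSimple", ckpt_name="dreamshaper.safetensors")
pos = node(graph, "CLIPTextEncode", text="a castle on a hill", clip=[ckpt, 1])
neg = node(graph, "CLIPTextEncode", text="blurry, low quality", clip=[ckpt, 1])
latent = node(graph, "EmptyLatentImage", width=1024, height=1024, batch_size=1)
samp = node(graph, "KSampler", model=[ckpt, 0], positive=[pos, 0], negative=[neg, 0],
            latent_image=[latent, 0], seed=42, steps=20, cfg=7.0,
            sampler_name="euler", scheduler="normal", denoise=1.0)

print(json.dumps(graph, indent=2))
```

The resulting JSON can be queued against a running ComfyUI instance's HTTP API, or extended the same way for LoRA and ControlNet loaders.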
ComfyUI's inpainting and masking aren't perfect. Step 2: Download this sample image. And the reason for that is that, at some point, multi-modal AI models will force us to have LLM and T2I models cooperate within the same automation workflow. Run any ComfyUI workflow with zero setup (free & open source), try now. And then the video in the post shows a rather simple layout that proves out the building blocks of a mute-based, context-building workflow. Hope you like some of them :)

(I will be sorting out workflows for tutorials at a later date in the YouTube description for each; many can be found in r/comfyui, where I first posted most of these.) I built a free website where you can share & discover thousands of ComfyUI workflows: https://comfyworkflows.com/ How it works: download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow.

Flux.1 ComfyUI Workflow. Hello fellow ComfyUI users, this is my workflow for testing different methods to improve image resolution. Adding the same JSONs to the main repo would only add more hell to the commit history, and would just be an unnecessary duplicate of the already existing examples repo. Note, I think ComfyUI also added this more recently. Going to python_embedded and using python -m pip install compel got the nodes working.

A workflow to create a 3D rotation of the subject. I'd venture to say that 90% of the workflows out there are beginner trash and/or reuse of the workflows that come with the nodes. *Edit* KSampler is where the image generation is taking place, and it outputs a latent image.
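That drag-and-drop trick works because ComfyUI embeds the graph in the PNG's text metadata (a "workflow" key with the UI graph, plus a "prompt" key with the API graph). A minimal round-trip sketch using Pillow; the tiny demo image and the empty graph here are made up for illustration:

```python
import io
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_workflow(img, workflow):
    """Save img as PNG with the workflow JSON in a 'workflow' text chunk."""
    meta = PngInfo()
    meta.add_text("workflow", json.dumps(workflow))
    buf = io.BytesIO()
    img.save(buf, format="PNG", pnginfo=meta)
    buf.seek(0)
    return buf

def extract_workflow(fp):
    """Read the embedded graph back out, as ComfyUI does on drag-and-drop."""
    info = Image.open(fp).info
    return json.loads(info["workflow"]) if "workflow" in info else None

demo = Image.new("RGB", (8, 8))
buf = embed_workflow(demo, {"nodes": [], "links": []})
print(extract_workflow(buf))  # -> {'nodes': [], 'links': []}
```

The same extract step explains why stripping metadata (e.g. re-encoding a PNG as JPEG) destroys the shareable workflow.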
In this guide, we are aiming to collect a list of 20 cool ComfyUI workflows that you can simply download and try out for yourself. Step 3: Update ComfyUI. Step 4: Launch ComfyUI and enable Auto Queue (under Extra Options). Step 5: Drag and drop the sample image into ComfyUI. Step 6: The fun begins! If the queue didn't start automatically, press Queue Prompt.

In this workflow I experiment with the cfg_scale, sigma_min, and steps space randomly, and use the same prompt and the rest of the settings. For your all-in-one workflow, use the Generate tab. If you understand how the pipes fit together, then you can design your own unique workflow (text2image, img2img, upscaling, refining, etc.). Creating such a workflow with the default core nodes of ComfyUI is not possible at the moment. Also, if this is new and exciting to you, feel free to post. It's not that big workflows are better.

All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKV, the Ultimate SD Upscale node, "hires fix" (yuck!), the iterative latent upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work).

Could anyone recommend the most effective way to do a quick face swap on an MP4 video? It doesn't necessarily have to be with ComfyUI; I'm open to any tools or methods that offer good quality and reasonable speed. Lots of pieces to combine with other workflows.
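A random exploration of that cfg_scale / sigma_min / steps space can be sketched like this. The ranges and the sample_settings helper are illustrative assumptions, not the settings the author actually used:

```python
import random

# Hypothetical search ranges; in the real workflow each sampled setting
# would be plugged into the sampler nodes while prompt and seed stay fixed.
SPACE = {
    "cfg_scale": (1.0, 12.0),
    "sigma_min": (0.01, 1.0),
    "steps": (10, 50),
}

def sample_settings(rng):
    """Draw one random point from the parameter space."""
    return {
        "cfg_scale": round(rng.uniform(*SPACE["cfg_scale"]), 2),
        "sigma_min": round(rng.uniform(*SPACE["sigma_min"]), 3),
        "steps": rng.randint(*SPACE["steps"]),
    }

rng = random.Random(0)  # fixed seed so the sweep itself is reproducible
for trial in range(3):
    print(sample_settings(rng))
```

Keeping the prompt and generation seed constant while only these three values vary is what makes the resulting grid of images comparable.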
If you have previously generated images you want to upscale, you'd modify the HiRes to include the img2img nodes. I'm using ComfyUI portable and had to install it into the embedded Python install. Start by loading up your standard workflow: checkpoint, KSampler, positive, negative prompt, etc. Check the ComfyUI image examples in the link. But it's reasonably clean to be used as a learning resource. Share, discover, & run thousands of ComfyUI workflows.

The Wan2.1 model is licensed under the Apache 2.0 license and offers two versions: 14B (14 billion parameters) and 1.3B (1.3 billion parameters), covering various tasks including text-to-video (T2V) and image-to-video (I2V). The sample prompt as a test shows a really great result. If there is anything you would like me to cover for a ComfyUI tutorial, let me know.

The workflow is saved as a JSON file. I've been using a ComfyUI workflow, but I've run into issues that I haven't been able to resolve, even with ChatGPT's help. Go to the GitHub repos for the example workflows, then find the example workflows. Do you want to save the image? Choose a Save Image node and you'll find the outputs in the folders, or you can right-click and save that way too. Along with the normal image preview, other methods are: latent upscaled 2x, and hires fix 2x (two-pass img2img).

Initial ComfyUI setup and basic workflow. It's just not intended as an upscale from the resolution used in the base model stage. Very good.
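Since a saved workflow is just JSON, you can inspect one before trying to run it, e.g. to see which node types (and therefore which custom node packs) it expects. The tiny inline graph below is a stand-in for a real file; normally you'd json.load an actual workflow_api.json:

```python
import json
from collections import Counter

# Illustrative stand-in for a workflow saved in API format;
# in practice: graph = json.load(open("workflow_api.json"))
graph = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "model.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a castle", "clip": ["1", 1]}},
    "3": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0]}},
}

# Count node types: a quick way to spot custom nodes a shared workflow
# needs before ComfyUI shows you red "missing node" boxes.
types = Counter(node["class_type"] for node in graph.values())
print(dict(types))
```

Anything in that count that isn't a core node is something ComfyUI Manager will have to install for you.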
However, you can also run any workflow online; the GPUs are abstracted so you don't have to rent any GPU manually, and since the site is in beta right now, running workflows online is free. And, unlike simply running ComfyUI on some arbitrary cloud GPU, our cloud sets up everything automatically so that there are no missing files/custom nodes. That way the Comfy Workflow tab in Swarm will be your version of ComfyUI, with your custom nodes.

Then add in the parts for a LoRA, a ControlNet, and an IPAdapter. CFG: feel free to increase this past what you normally would for SD. Sampler: samplers also matter; Euler_a is good, but Euler is bad at lower steps. This guide is about how to set up ComfyUI on your Windows computer to run Flux.1.

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it and pass into the node whatever image I like. But mine do include workflows for the most part in the video description. But standard A1111 inpainting works mostly the same as this ComfyUI example you provided.

I just learned Comfy, and I found that if I just upscale it, even 4x, it won't do much. This could lead users to increase pressure on developers. I recently switched from A1111 to ComfyUI to mess around with AI-generated images. I used to work with Latent Couple and then the Regional Prompter module for A1111, which allowed me to generate separate regions of an image through masks, guided with ControlNets (for instance, generating several characters using poses derived from a preprocessed picture).

Got it? If you've found it, you noticed our example is in the category "image/mynode2". You'll find our custom category, mynode2! Click on it, and this is where you find our little node. We've got it now. If you drag in a PNG made with ComfyUI, you'll see the workflow in ComfyUI with the nodes etc.
Please share your tips, tricks, and workflows for using this software to create your AI art. I also had issues with this workflow with unusually sized images. Hi Antique_Juggernaut_7, this could help me massively. I'm still learning, so any input on how I could improve these workflows is appreciated, though keep in mind my goal is to balance the complexity with the ease of use for end users. Continuing with the car analogy, learning ComfyUI is a bit like learning to drive with a manual shift. Belittling their efforts will get you banned. Complete overkill.

Here are approx. 150 workflow examples of things I created with ComfyUI and AI models from Civitai. Moved my workflow host to: https://openart.ai/profile/neuralunk?sort=most_liked. It'll add nodes as needed if you enable LoRAs or ControlNet or want it refined at 2x scale or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to. ComfyUI already has an examples repo where you can instantly load all the cool native workflows just by drag-and-dropping a picture from that repo. Open-sourced the nodes and example workflow in this GitHub repo, and my colleague Polina made a video walkthrough to help explain how they work! Nodes include: LoadOpenAIModel.

I learned this from Sytan's workflow; I like the result. In ComfyUI, right-click on the workflow, then click on the image. It's the kind of thing that's a bit fiddly to use, so using someone else's workflow might be of limited use to you. You guys have been very supportive, so I'm posting here first. Hi u/Critical_Design4187, it's definitely an active work in progress, but the goal of the project is to be able to support/run all types of workflows.
For ComfyUI there should be license information for each node, in my opinion ("Commercial use: yes / no / needs license"), and a workflow using a non-commercial node should show some warning in red. We've now made many of them available to run on OpenArt Cloud Run for free, where you don't need to set up the environment or install custom nodes yourself. But let me know if you need help replicating some of the concepts in my process. New tutorial: how to rent up to 1-8x 4090 GPUs, install ComfyUI (+ Manager, custom nodes, models, etc.).

[Load VAE] and [Load LoRA] are not plugged in in this config for DreamShaper. It is a simple way to compare these methods; it is a bit messy, as I have no artistic cell in my body. It would require many specific image-manipulation nodes to cut an image region, pass it through the model, and paste it back. I have a client who has asked me to produce a ComfyUI workflow as the backend for a front-end mobile app (which someone else is developing using React). He wants a basic faceswap workflow. Please keep posted images SFW.

Try to install the ReActor node directly via ComfyUI Manager. Start with simple workflows. I'm not going to spend two and a half grand on high-end computer equipment, then cheap out by paying £50 on some crappy SATA SSD that maxes out at 560 MB/s. A checkpoint is your main model, and then LoRAs add smaller models to vary the output in specific ways. I tried to keep the noodles under control and organized so that extending the workflow isn't a pain. I've got 3 tutorials that can teach you how to set up a decent ComfyUI inpaint workflow. And remember, SDXL does not play well with 1.5, so that may give you a lot of your errors. I long hoped people would start using ComfyUI to create pure LLM pipelines. Does anyone have a tutorial for doing regional sampling + regional IPAdapter in the same ComfyUI workflow?
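To sketch that license-warning idea: ComfyUI has no per-node license registry today, so the NODE_LICENSES table and the non-core node names below are hypothetical; the pass itself just walks a workflow's node types and collects warnings for anything not cleared for commercial use.

```python
# Hypothetical license table; there is no such registry in ComfyUI,
# this only illustrates what a warning pass could look like.
NODE_LICENSES = {
    "KSampler": "yes",
    "SomeNonCommercialNode": "no",          # made-up node name
    "LicensedUpscaler": "needs license",    # made-up node name
}

def license_warnings(workflow):
    """Return one warning string per node whose commercial use is not 'yes'."""
    warnings = []
    for nid, node in workflow.items():
        status = NODE_LICENSES.get(node["class_type"], "unknown")
        if status != "yes":
            warnings.append(
                f"node {nid} ({node['class_type']}): commercial use = {status}")
    return warnings

wf = {"1": {"class_type": "KSampler", "inputs": {}},
      "2": {"class_type": "SomeNonCommercialNode", "inputs": {}}}
for w in license_warnings(wf):
    print(w)
```

A UI could render these warnings in red next to the offending nodes, which is roughly what the comment is asking for.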
For example, I want to create an image which is: "have a girl (with face-swap using this picture) in the top left, have a boy (with face-swap using another picture) in the bottom right, standing in a large field". ComfyUI Fooocus Inpaint with Segmentation workflow. This is the workflow I use in ComfyUI to render 4K pictures with the DreamShaper XL model. And above all, BE NICE.

You can also easily upload & share your own ComfyUI workflows, so that others can build on top of them! :) Why I built this: I just started learning ComfyUI, and really like how it saves the workflow info within each image it generates. Hi. You may plug them in to use with 1.5 base models, and modify latent image dimensions and upscale values to work. Join the largest ComfyUI community. Uncharacteristically, it's not as tidy as I'd like, mainly due to a challenge I have with passing the checkpoint/model name through reroute nodes.

What it's great for: this is a great starting point for generating SDXL images at a resolution of 1024 x 1024 with txt2img, using the SDXL base model and the SDXL refiner. Ending workflow. I see examples with 200+ nodes on that site. Cool! Mine won't conflict if there's already a Copy Image menu item. You can find the workflows and more image examples below: ComfyUI SUPIR Upscale Workflow. Starting workflow. See full list on github.com.

Upcoming tutorial: SDXL LoRA + using a 1.5 LoRA with SDXL, upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, many more. There is a latent workflow and a pixel-space ESRGAN workflow in the examples.
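On modifying latent image dimensions: Stable Diffusion VAEs downsample by a factor of 8 per side, so pixel sizes should be multiples of 8 and the latent is 1/8 of each dimension. A quick sanity-check helper (a generic sketch, not tied to any particular workflow here):

```python
# SD / SDXL VAEs downsample 8x per side, so a 1024x1024 image
# corresponds to a 128x128 latent.
def latent_dims(width, height, factor=8):
    """Map pixel dimensions to latent dimensions, rejecting invalid sizes."""
    if width % factor or height % factor:
        raise ValueError("pixel dimensions should be multiples of 8")
    return width // factor, height // factor

print(latent_dims(1024, 1024))  # -> (128, 128), the SDXL base resolution
print(latent_dims(512, 768))    # -> (64, 96), a common SD 1.5 portrait size
```

This is why swapping a workflow between SD 1.5 and SDXL usually means editing the Empty Latent Image node's width/height rather than anything downstream.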
For example, a faceswap with a decent detailer and upscaler should contain no more than 20 nodes. Usually, or almost always, I like to inpaint the face, or, depending on the image I am making, I know what I want to inpaint; there is always something that has a high probability of wanting to get inpainted, so I do it automatically by using Grounding DINO + Segment Anything and have it ready in the workflow (which is a workflow specific to the picture I am making) and feed it into Impact Pack.

Hey everyone, got a lot of interest in the documentation we did of 1600+ ComfyUI nodes and wanted to share the workflow + nodes we used to do so using GPT-4. My actual workflow file is a little messed up at the moment; I don't like sharing workflow files that people can't understand. My process is a bit particular to my needs, and the whole power of ComfyUI is for you to create something that fits your needs. (Recap) We hosted the first ComfyUI Workflow Contest last month and got lots of high-quality workflows.

I'm matching sample settings in Kohya as closely as I can and using the same model, steps, cfg, scheduler, and generation seed. My seconds_total is set to 8, and the BPM I ask for in the prompt is set to 120 BPM (two beats per second), meaning I get 16 beats, i.e. four 4-beat bars.

I just released version 4.0 of my AP Workflow for ComfyUI. For the checkpoint, I suggest one that can handle cartoons/manga fairly easily. I can't see it, because I can't find the link for the workflow. The reason why you typically don't want a final interface for workflows is that many users will eventually want to apply LUTs and other post-processing filters. The Wan2.1 model, open-sourced by Alibaba in February 2025, is a benchmark model in the field of video generation.
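The seconds_total / BPM arithmetic above generalizes to a one-liner; beats_per_bar=4 assumes common time, which is an assumption, not something the prompt enforces:

```python
# Beats that fit in a generated audio clip: seconds * (bpm / 60).
def beats_in_clip(seconds_total, bpm, beats_per_bar=4):
    """Return (total beats, bars) for a clip of the given length and tempo."""
    beats = seconds_total * bpm / 60
    return beats, beats / beats_per_bar

beats, bars = beats_in_clip(8, 120)
print(beats, bars)  # -> 16.0 4.0
```

So an 8-second clip at 120 BPM lands exactly on a bar boundary, which is why that combination loops cleanly.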
A lot of people are just discovering this technology and want to show off what they created. Select it. Quickly convert all standard reroute nodes to rgthree reroute nodes. This is a very nicely refined workflow by Kaïros featuring upscaling, interpolation, etc. If you have the SDXL 0.9 leaked repo, you can read the README.md file yourself and see that the refiner is in fact intended as img2img, basically as you see being done in the ComfyUI example workflow someone posted.

Base ComfyUI also doesn't even connect to the internet for anything unless you run the update script. Image saving and post-processing need was-node-suite-comfyui to be installed. Base ComfyUI doesn't even come with the Manager, so the attack surface is a lot more limited compared to base InvokeAI. So when I saw the recent Generative Powers of Ten (r/StableDiffusion, reddit.com) video, I was pretty sure the nodes to do it already exist in ComfyUI. The reason is that we need more LLM-focused nodes.

Pick a checkpoint. Pick a prompt (and negative embeddings). Generate, see how it works; maybe use an XY plot with different steps and cfg. That's a bit presumptuous considering you don't know my requirements. Flux.1 ComfyUI install guidance, workflow and example. I'll also share the inpainting methods I use to correct any issues that might pop up.

Apr 22, 2024: Discover the top resources for finding and sharing ComfyUI workflows, from community-driven platforms to GitHub repositories, and unlock new creative possibilities for your Stable Diffusion projects. So instead of having a single workflow with a spaghetti of 30 nodes, it could be a workflow with 3 sub-workflows, each with 10 nodes, for example. Hi everyone, this is John from OpenArt. Try an LCM, see how it changes.
If you are doing vid2vid, you can reduce this to keep things closer to the original video. This is the most well-organised and easy-to-use ComfyUI workflow I've come across so far, showing the difference between the preliminary, base, and refiner setups. This repo contains examples of what is achievable with ComfyUI. The launcher still works with .json files saved via ComfyUI, but it also lets you export any project in a new type of file format called "launcher.json", which is designed to have 100% reproducibility.

Infinite Zoom. It's not meant to overwhelm anyone with complex, cutting-edge tech, but rather to show the power of building modules/groups as blocks, and merging them into a workflow through muting (easily done with the Fast Muter nodes) and Context Switches.