
ComfyUI batch processing (Reddit)



 

Welcome to the unofficial ComfyUI subreddit.

A new FreeU v2 node to test the updated implementation of the Free Lunch technique. Up to 70% speed-up on an RTX 4090.

Is it possible to iterate a ComfyUI workflow over a batch of video clips within a folder for a vid2vid workflow? I'm trying to remaster a CGI video with realism, but it's got about 90 clips I need to process, and I don't want to have to hold the whole video in memory to process all 7 minutes. Would any of you knowledgeable souls be able to guide me on how to achieve this?

With my just-released node workflow you can choose images from a batch to upscale. Single image works by just selecting the index of the image; Length defines the amount of images after the target to send ahead.

Search for wildcards; they're available in both A1111 and ComfyUI.

In the Environment Variables window, under 'System variables', find and select 'Path', then click on 'Edit'.

I have a text file full of prompts. I've kind of gotten this to work with the "Text Load Line" node.

Hook one up to VAE decode and preview image nodes and you can see/save the depth map as a PNG or whatever.

I don't understand why the live preview doesn't show during render.

Is there a way I can add a node to my workflow so that I pass in the base image + mask and get 9 options out to compare?

To create a seamless workflow in ComfyUI that can handle rendering any image and produce a clean mask (with accurate hair details) for compositing onto any background, you will need to use nodes designed for high-quality image processing and precise masking.

A seed defines a value for a deterministic "random" process, which means it can be repeated in the future.

Batch-processing images by folder on ComfyUI.
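The "text file full of prompts" question comes up a lot. One hedged sketch, assuming you have exported your graph with "Save (API format)": template the workflow JSON once per prompt line, giving each job its own seed. The node IDs ("6", "3") and file layout here are made up; substitute the IDs from your own export before POSTing each job to your ComfyUI server's /prompt endpoint.

```python
import copy

def build_batch(workflow, prompts, prompt_node, seed_node, base_seed):
    """One workflow copy per prompt line, each with its own seed so any
    single result can be reproduced later."""
    jobs = []
    for i, text in enumerate(prompts):
        wf = copy.deepcopy(workflow)
        wf[prompt_node]["inputs"]["text"] = text
        wf[seed_node]["inputs"]["seed"] = base_seed + i
        jobs.append(wf)
    return jobs

# Minimal stand-in for an exported API-format workflow
# (node IDs "6" and "3" are hypothetical).
workflow = {
    "6": {"class_type": "CLIPTextEncode", "inputs": {"text": ""}},
    "3": {"class_type": "KSampler", "inputs": {"seed": 0}},
}
prompts = ["a red car", "a blue car"]  # e.g. lines read from prompts.txt
jobs = build_batch(workflow, prompts, "6", "3", base_seed=1000)
```

Because the seed is base_seed + index, any one image from the run can be regenerated alone later, which is exactly what the seed definition above promises.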
In the Inspire pack, there is a LoadImageListFromDir //Inspire node that loads images as a list. It will pull a random filename each time.

Hey guys, so I've been doing some inpainting, putting a car into other scenes using masked inpainting. Segmenting the car with SAM & DINO, inverting the mask and putting the car into the scene got some great compositions. The only issue I have with some of them is that, while the lighting works, the colours between the inpainted car and the overall scene aren't matching up.

I hope this is helpful 😎.

I don't know about the different pictures part, but for the prompts…

This might be useful, for example, in batch processing with inpainting, so you don't have to manually mask every image.

I ended up just removing nodes starting from the end.

ComfyUI now supports SSD-1B.

What I'd love to see: a proper node for sequential batch inputs, and a means to load separate loras in a composition.

ComfyUI: an extremely powerful Stable Diffusion GUI with a graph/nodes interface for advanced users that gives you precise control over the diffusion process without coding anything, and it now supports ControlNets.

I am currently using webui for such things; however, ComfyUI has given me a lot of creative flexibility compared to what's possible with webui, so I would like to know.
So I'm happy to announce today: my tutorial and workflow are available. If those were both in, I'd be so happy.

The queue doesn't keep up with the processor.

Note that the image is only a trigger to a batch handler node to fire off the next round.

Still very effective, but I found situations where it impairs the capability of the models to generate multiple diverse subjects in the same image.

In order to recreate Auto1111 in ComfyUI, you need those encode++ nodes, but you also need the noise generated by ComfyUI to be made by the GPU (this is how Auto1111 makes noise), along with getting ComfyUI to give each latent its own seed, instead of splitting a single seed across the batch.

Output is in GIF/MP4.

(I don't need the plot, just individual images so I can compare myself.)

Once you build this, you can choose an output from it using static seeds to get specific images, or you can split up larger batches of images.

Any way to batch process a bunch of images and have the program run back to back autonomously? Currently I have been processing images one at a time, using a ton of steps in Euler a (40 steps), so I'm taking about 5 minutes to process, since I'm using some extra lora motion nodes as well.

Open the start menu, search for 'Environment Variables', and select 'Edit the system environment variables'.

There are also nodes that will let you load sequences of PNGs as a batch, like the VHS Load Images (folder) node, but all the similar nodes I have tested so far would crush the extra 16-bit data and turn it into an 8-bit mess.

Noob question: how to batch output multiple inpainting results?

After borrowing many ideas, and learning ComfyUI: I liked the ability in MJ to choose an image from the batch and upscale just that image.
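The per-latent-seed point can be made concrete with plain NumPy. This is a sketch of the idea, not ComfyUI's actual noise code: if every latent gets its own generator, image k of a batch is bit-identical to the same image generated alone, which a single seed split across the batch cannot give you.

```python
import numpy as np

def batch_noise(base_seed, batch_size, shape=(4, 8, 8)):
    # One independent generator per latent, seeded base_seed + index.
    return np.stack([
        np.random.default_rng(base_seed + i).standard_normal(shape)
        for i in range(batch_size)
    ])

full = batch_noise(42, 3)
# Re-running with the same seed reproduces the whole batch...
assert np.allclose(full, batch_noise(42, 3))
# ...and a batch of 1 reproduces image 0 of the batch of 3 exactly.
assert np.allclose(batch_noise(42, 1)[0], full[0])
```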
I'm also aware you can change the batch count in the extra options of the main menu, but I'm specifically…

In researching inpainting using SDXL 1.0 in ComfyUI, I've come across three different methods that seem to be commonly used: the base model with Latent Noise Mask, the base model using InPaint VAE Encode, and the UNET "diffusion_pytorch" inpaint-specific model from Hugging Face.

The goal is to create a similar animation from a sequence of numbers 1 to 100, each number with 32 frames.

Currently they all save into a single folder. I would like to save them to a new folder for each generation so I can manage the data better.

So, I just made this workflow in ComfyUI.

But if you saved one of the stills/frames using the Save Image node, or even if you saved a generated CN image using Save Image, it would transport it over.

SDXL was trained on clip skip 1, while many NAI-based anime model leaks were trained on clip skip 2.

It works beautifully to select images from a batch, but only if I have everything enabled when I first run the workflow.

EDIT: Just in case anyone is experiencing something similar, I had a bad RAM module. Thanks for the responses, everyone.

If that doesn't give you the seed used to recreate the image, you need to find the original seed.

Reuse the frame image created by Workflow3 for Video to start processing.

It works pretty well in my tests, within the limits of…

It'll create all 20. I guess in a way this is a good problem to have.

What is the difference between "Image List" and "Image Batch"?

Copy that (clipspace) and paste it (clipspace) into the load image node directly above (assuming you want two subjects).

I have a basic workflow that I would like to modify to get a grid of 9 outputs.

In ComfyUI using Juggernaut XL, it would usually take 30 seconds to a minute to run a batch of 4 images.
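On recovering the seed: ComfyUI's Save Image node embeds the generation graph as JSON in PNG text chunks (under the keywords "prompt" and "workflow"), not in EXIF. A stdlib-only sketch of reading those chunks, demonstrated on a synthetic one-pixel file that stands in for a real ComfyUI output (the embedded graph here is made up):

```python
import json, struct, zlib

def png_text_chunks(path):
    """Collect the tEXt chunks of a PNG as a keyword -> value dict."""
    out = {}
    with open(path, "rb") as f:
        assert f.read(8) == b"\x89PNG\r\n\x1a\n"  # PNG signature
        while True:
            head = f.read(8)
            if len(head) < 8:
                break
            length, ctype = struct.unpack(">I4s", head)
            data = f.read(length)
            f.read(4)  # skip CRC
            if ctype == b"tEXt":
                key, _, val = data.partition(b"\x00")
                out[key.decode("latin-1")] = val.decode("latin-1")
            if ctype == b"IEND":
                break
    return out

def chunk(ctype, data):
    # length + type + data + CRC over (type + data)
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

# Build a synthetic 1x1 grayscale PNG carrying a fake "prompt" graph.
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
idat = zlib.compress(b"\x00\x00")  # filter byte + one pixel
meta = b"prompt\x00" + json.dumps({"3": {"inputs": {"seed": 1234}}}).encode()
with open("demo.png", "wb") as f:
    f.write(b"\x89PNG\r\n\x1a\n" + chunk(b"IHDR", ihdr)
            + chunk(b"tEXt", meta) + chunk(b"IDAT", idat)
            + chunk(b"IEND", b""))

graph = json.loads(png_text_chunks("demo.png")["prompt"])
```

On a real output you would then walk the graph for the KSampler node's seed input. This is also why re-encoding or screenshotting an image loses the workflow: the chunks are stripped.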
Apr 24, 2023 · You set a folder, set it to increment_image, set the number of batches in your ComfyUI menu, and then run.

Batch processing? I was unable to find anything close to batch processing; is that possible in ComfyUI? I love the tool, but without batch processing it becomes useless for my personal workflow :( I've been googling around for a couple of hours and I haven't found a great solution for this.

Will post the workflow in the comments.

I'm using Load Image List from Inspire to run multiple AnimateDiff passes using a single-image ControlNet input.

There are controlnet preprocessor depth-map nodes (MiDaS, Zoe, etc.).

I've submitted a bug to both ComfyUI and Fizzledorf, as I'm not sure which side will need to correct it.

If you look at the ComfyUI examples for area composition, you can see that they're just using the nodes Conditioning (Set Mask / Set Area) -> Conditioning Combine -> positive input on the KSampler.

I don't think the generation info in ComfyUI gets saved with the video files.

The t-shirt and face were created separately with the method and recombined.

I've put a few labels in the flow for clarity.

Batch up prompts and execute them sequentially.

Expanding on my temporal consistency method for a 30-second, 2048x4096-pixel total override animation.

ComfyUI has several types of sequential processing, but by using the "list" format, you can process items one by one.

I would also like to better understand how Comfy handles video, i.e. whether video can be piped through most operations in Comfy without being broken down into individual frames and batch-processed as stills, or whether video in Comfy is a very specialized case that only a few specific nodes are capable of processing.

ComfyUI only using 50% of my GPU.
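The "Image List" vs "Image Batch" distinction above can be sketched in NumPy: a batch is one stacked tensor processed in a single step (so every image must share the same dimensions), while a list is processed item by item (so sizes and settings may differ per image). This is an analogy for the node behavior, not ComfyUI internals.

```python
import numpy as np

def process(img):
    # Stand-in for a per-image operation (e.g. a sampler or filter step).
    return 255 - img

imgs = [np.full((2, 2), v, dtype=np.uint8) for v in (0, 100, 200)]

# "Image Batch": one stacked (N, H, W) tensor, processed in a single step.
batch_out = process(np.stack(imgs))

# "Image List": items processed one by one.
list_out = [process(im) for im in imgs]
```

Either route gives the same pixels here; the difference shows up when images have different sizes (stacking fails, the list loop does not) and in how downstream nodes iterate.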
Hello! I have created a workflow that uses an IP-Adapter plus a Shuffle ControlNet to create variations on an input mask. The first step is the generation of a base image; then the base image gets piped through a second KSampler, where another graphic overlay layer is inpainted using the mask generated via ControlNet Shuffle.

For this process to still work, I would have to manually re-enter the seed for each image, significantly slowing down the process. You can look at the EXIF data to get the seed used.

However, since I am planning to use this as an API, I need to find a way around queuing.

Even with 4 regions and a global condition, they just combine them all 2 at a time until it becomes a single positive condition to plug into the KSampler.

Just set the image loader to input, add the primitive, and set the mode to random.

Splash - inpaint generative fill style and animation, try it now.

Batch video processing: take the original video, use ffmpeg to output an image sequence, batch process the images with this tool, then use ffmpeg to compile the new densepose image sequence back into a compatible video format for your project to finish the generation.

I type Y, and the window closes out.

Please share your tips, tricks, and workflows for using this software to create your AI art.

Also bypass the AnimateDiff Loader model to the original model loader in the To Basic Pipe node, else it will give you noise on the face (the AnimateDiff loader doesn't work on a single image, you need at least 4, and FaceDetailer can handle only 1).

Was DM'd the solution: you first need to send the initial txt2img to img2img (use the same seed for better consistency), then use the "batch" option with the folder containing the poses as the "input folder", and check "skip img2img processing" within the ControlNet settings.

My ComfyUI workflow takes 2 images as input and generates an output image combining those two images.
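The ffmpeg round trip described above, sketched as a small command builder. The file names, frame-rate, and codec flags are assumptions to adapt to your clips; the commands themselves are standard ffmpeg usage (fps filter out, image2 sequence back in).

```python
def vid2vid_commands(src, frames_dir, out, fps=24):
    """Build the two ffmpeg invocations for a vid2vid round trip:
    video -> numbered PNG frames, then processed frames -> video."""
    extract = f"ffmpeg -i {src} -vf fps={fps} {frames_dir}/frame_%05d.png"
    rebuild = (f"ffmpeg -framerate {fps} -i {frames_dir}/frame_%05d.png "
               f"-c:v libx264 -pix_fmt yuv420p {out}")
    return extract, rebuild

ex, rb = vid2vid_commands("clip.mp4", "frames", "out.mp4", fps=12)
```

Run the first command, batch-process everything in `frames/` through the workflow, then run the second command on the processed frames. Keeping the same `fps` on both sides preserves the original timing.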
Click the seed increment and try again; hope it works.

Make sure to enable ControlNet with no preprocessor and use the…

Even fewer wires thanks to u/receyuki's much-enhanced SD Prompt Generator node.

I can't figure out how to "pause execution"…

Hello ComfyUI users! I am new to ComfyUI and I am already in love with it.

They have a node that includes the neg and pos prompts and the settings for the latent image, and their efficient sampler has a better preview and the decode settings, and just outputs images.

I can't really find a community dealing with ComfyBox specifically, so I thought I'd give it a try here.

Break the video down to a GIF, turn the GIF into single images, batch run the images, and then turn it back into a GIF.

This is a custom node pack for ComfyUI, intended to provide utilities for other custom node sets for AnimateDiff and Stable Video Diffusion workflows.

Fill in your prompts.

JAPANESE GUARDIAN - This was the simplest possible workflow and probably shouldn't have worked (it didn't before), but the final output is 8256x8256, all within Automatic1111.

Put ImageBatchToImageList > Face Detailer > ImageListToImageBatch > Video Combine.

Thus far, I've established my process, yielding impressive images that align with my expectations. It has now taken upwards of 10 minutes to do seemingly the same run.

When loading the graph, the following node types were not found: CR Batch Process Switch.

We ask that you please take a minute to read through the rules and check out the resources provided before creating a post, especially if you are new here.

I want to load it into ComfyUI, push a button, and come back in several hours to a hard drive full of images.

For a dozen days, I've been working on a simple but efficient workflow for upscaling.
Expand Node List. ArithmeticBlend: blends two images using arithmetic operations like addition, subtraction, and difference.

After lots of troubleshooting, I haven't had a single crash since I pulled one of my 32GB chips out of my motherboard.

Trying to work on multi-minute videos, I can't seem to get anything over a frame cap of 600 (Load Video VHS node) before Comfy errors due to running out of VRAM (I have 6GB).

Besides JunctionBatch, which is basically a crossbreed of Junction and the native ComfyUI batch, I feel like Loop is probably way too powerful and maybe a little hacky.

What do I need to install? (I'm migrating from A1111, so ComfyUI is a bit complex.) I also get these errors when I load a workflow with ControlNet.

In the same vein as your question: why then does the same seed with a batch size larger than 1 not produce "X" pictures that are all…

Hello, I am running some batch processing and I have set up a Save Image node for my ControlNet outputs.

So in this workflow each of them will run on your input image and you…

With Automatic1111, it does seem like there are more built-in…

I recently switched to ComfyUI from AUTOMATIC1111 and I'm having trouble finding a way of changing the batch size within an img2img workflow. I'm aware that the option is in the Empty Latent Image node, but it's not in the Load Image node.

There are apps and nodes which can read in generation data, but they fail for complex ComfyUI node setups.

But now I can't find the preprocessors like HED, Canny etc. in ComfyUI.

My ComfyUI backend is an API that can be used by other apps if they want to do things with Stable Diffusion, so chaiNNer could add support for the ComfyUI backend and nodes if they wanted to.

From the paper, training the entire Würstchen model (the predecessor to Stable Cascade) cost about 1/10th of Stable Diffusion.
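A minimal sketch of what an arithmetic blend like the node above does; this is my own guess at the behavior, not the pack's actual code. The key detail is widening uint8 pixels to a signed type so the math cannot wrap around, then clipping back to the displayable range.

```python
import numpy as np

def arithmetic_blend(a, b, mode="add"):
    """Blend two uint8 images by addition, subtraction, or difference."""
    a16, b16 = a.astype(np.int16), b.astype(np.int16)
    ops = {"add": a16 + b16,
           "subtract": a16 - b16,
           "difference": np.abs(a16 - b16)}
    return np.clip(ops[mode], 0, 255).astype(np.uint8)

a = np.array([[200, 10]], dtype=np.uint8)
b = np.array([[100, 30]], dtype=np.uint8)
```

Without the int16 widening, 200 + 100 in uint8 would silently wrap to 44 instead of saturating at 255, which is the classic bug in naive image arithmetic.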
I wish to load a video in ComfyUI, then create a side-by-side video with the original image on the left and the depth map on the right using DepthAnything, and clearing the…

With Comfy I want to focus mainly on Stable Diffusion and processing in latent space.

So my thought is that you set the batch count to 3, for example, and then you use a node that changes the weight for the lora on each batch.

PLANET OF THE APES - Stable Diffusion Temporal Consistency.

Drop the image back into ComfyUI to load it, then change the seed to what was in the EXIF.

CLIPSeg Plugin for ComfyUI.

I'm quite new to ComfyUI. E.g.: batch index 2, Length 2 would send images number 3 and 4 to the preview image in this example.

I tried to find a good inpaint workflow and just found a bunch of wild workflows that wanted a million nodes and had a bunch of different functions.

I would love to know if there is any way to process a folder of images with a list of pre-created prompts, one for each image. I am currently using webui for such things; however, ComfyUI has given me a lot of creative flexibility compared to what's possible with webui, so I…

Input your batched latent and VAE.

I think I have a reasonable workflow that allows you to test your prompts and settings and then "flip a switch" and put in the image numbers you want to upscale.

One thing I noticed right away when using Automatic1111 is that the processing time is taking a lot longer.

ComfyUI needs a standalone node manager IMO, something that can do the whole install process and make sure the correct install paths are being used for modules.

In the System Properties window that appears, click on 'Environment Variables'.

My team is working on building a pipeline for processing images.

If I put them in 5 at a time…

I guess expect that Loop will break if used wrongly.

Heya, I am rendering AnimateDiff videos using ComfyUI, but only 50% of my VRAM is being allocated for the rendering. Or the entire Python process gets "Killed" during upscaling.

That's a cost of about $30,000 for a full base model train.

…and then re-enable once I make my selections.

"Did you try to kill ComfyUI from the terminal and restart it before loading the batch image you like and processing it with HRF?"

ComfyUI Question: Batching and Search/Replace in the prompt, like the A1111 X/Y/Z plot script?
Best (simple) SDXL inpaint workflow.

(Rude 'n' crude, I know, but then so was the brute-force undo, which seems to work just fine.) I'd rather just do it in Comfy anyway.

Can confirm, it's the same process, just with different terminology.

A new Image2Image function: choose an existing image, or a batch of images from a folder, and pass it through the Hand Detailer, Face Detailer, Upscaler, or Face Swapper functions.

I have used the ComfyUI-to-Python extension to create a Python script, but I think it doesn't scale well.

Does anyone know how to access the full power of the graphics card?

Side-by-side comparison with the original.

Having a batch size larger than 1 indicates you would like "X" number of pictures that are all different.

It's a bit of a process, but it's the primary way I've been doing it for a couple of months.

In case you are wondering, this is used to feed ControlNet and influence the video generation process.

The Universal Negative Prompt is no longer enabled by default.

Thanks bro, looks like this is what I need. But it certainly mucks up my workflow. So I go back to the cmd window and Ctrl+C to kill the process, which stops the server, and then it asks if I want to cancel that batch job, y/n.

I'm getting 3ds Max and Maya PTSD just looking at it. ComfyUI is amazing.

You can inpaint inside ComfyUI by right-clicking the Load Image node, choosing "Open in MaskEditor", and masking the area.

The title explains it: I am repeating the same action over and over on a number of input images, and instead of having to manually load each image and then press "Queue Prompt", I would like to be able to select a folder and have Comfy process all input images in that folder.
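For the "process every image in a folder" requests above, the core of any such loop is a deterministic file listing, so repeated runs visit images in the same order. A sketch (the extension set is an assumption; add whatever formats your loader accepts):

```python
import tempfile
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def list_images(folder):
    """Image paths in a folder, sorted by name for reproducible order."""
    return sorted(p for p in Path(folder).iterdir()
                  if p.suffix.lower() in IMAGE_EXTS)

# Demo with a throwaway folder; non-image files are skipped.
d = Path(tempfile.mkdtemp())
for name in ("b.png", "a.jpg", "notes.txt", "c.webp"):
    (d / name).touch()
names = [p.name for p in list_images(d)]
```

Each path from the list can then be fed to whatever does the work, whether that is a queued workflow per image or a list-loader node's input directory.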
There's a new optional node developed by u/Old_System7203 to select the best image of a batch before executing the rest of the workflow.

Nodes! Eeeee! Because you can move these around and connect them however you want, you can also tell it to save out an image at any point along the way, which is great, because I often forget that stuff.

Queue the flow and you should get a yellow image from the Image Blank.

ComfyUI is super cool, but the idea of using it makes my eyes glaze over and I will never ever touch it.

I want to test some basic lora weight comparisons, like in WebUI where you do an X/Y/Z plot. Is there something like that within ComfyUI?

I share many results, and many ask me to share.

I've converted the Sytan SDXL workflow in an initial way.

My A1111 has stopped working and I haven't been able to get it working again yet.

You can also specifically save the workflow from the floating ComfyUI menu.

Heads up: Batch Prompt Schedule does not work with the Python API templates provided by the ComfyUI GitHub.

The problem with using the ComfyUI Manager is that if your ComfyUI won't load, you are SOL for fixing it.

But you can also connect a whole bunch of sampler setups one after the other, so…

The Ultimate AI Upscaler (ComfyUI Workflow) - Workflow Included.

I'm not sure which custom node I installed that enables it, but when I right-click the workspace background, I can select "follow execution" and my view will re-center on the active node while the prompt is being worked.

If I downscale the video, it seems I can get more out of it, so I am trying to feed in a low-res video and churn out something…

It never does this in single generations, but almost always does after a few hundred in a batch.

Batch Upscale Help! Greetings, community! As a newcomer to ComfyUI (though a seasoned A1111 user), I've been captivated by the potential of Comfy and have witnessed a significant surge in my workflow efficiency.
Then the images from each step will appear in the KSampler node.

It's simple and straight to the point. Also, do yourself a favor and get the Efficiency pack nodes.

In Automatic1111, the Depth Map script has features where it will generate panning, zooming, and swirling animations based off the 3D depth map it generates.

The queue will go through the workflow one item at a time as it clears, and you can cancel items in the queue or add to it.

So much easier to play with.

I've compared it with the "Default" workflow, which does show the intermediate steps in the UI gallery, and it seems…

Training a LoRA will cost much less than this, and it costs still less to train a LoRA for just one stage of Stable Cascade.
Having been generating very large batches for character training (per this tutorial, which worked really well for me the first time), it occurs to me that the lack of interactivity of the process might make it an ideal use case for ComfyUI, and the lower overhead of…

Batch load images from a directory. Pick a face, upload a face, or use the 'outfit' image. Pick an expression grid (a grid of your character with multiple facial expressions, as in Coherent Facial Expressions ComfyUI), upload an expression grid, or generate an expression grid from a face photo. Pick a pose.

Install the ComfyUI Manager, then after you restart ComfyUI, click on the Manager button and for Preview method select 'TAESD (slow)'.

Unfortunately, when I run a test with two or three numbers, only the…

Just getting up to speed with ComfyUI (love it so far) and I want to get inpainting dialled in.

I tried installing the ComfyUI-Image-Selector plugin, which claims that I can simply mute or disconnect the Save Image node, etc.

I created some custom nodes that allow you to use the CLIPSeg model inside ComfyUI to dynamically mask areas of an image based on a text prompt.

A batch is processed as one queue step, so if a batch of 1 takes 10 seconds, a batch of 10 will take 100 seconds.

I believe it's due to the syntax within the scheduler node breaking the syntax of the overall prompt JSON load.

The output is GIF/MP4.

All the batch handler has to do is replace the appropriate graph settings (including its own state), shoot it to the queue when triggered, and let it run down.

I'm also aware you can change the batch count in the extra options of the main menu, but I'm specifically looking…
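On the "scheduler syntax breaking the prompt JSON" point: a prompt schedule string is full of quotes, so splicing it into a JSON template by hand produces invalid JSON, while building the payload with json.dumps escapes the inner quotes correctly. A small demonstration (the schedule text is a made-up example):

```python
import json

schedule = '"0": "a forest, morning", "15": "a city at night"'

# Hand-assembled payload: the schedule's own quotes break the JSON.
broken = '{"text": "' + schedule + '"}'

# json.dumps escapes the inner quotes, so the payload stays valid.
payload = json.dumps({"text": schedule})

def is_valid(s):
    try:
        json.loads(s)
        return True
    except json.JSONDecodeError:
        return False
```

The same rule applies to any script that templates workflow JSON: never concatenate user text into a JSON string; serialize a dict instead.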
Oct 18, 2023 · This function reads in a batch of image frames or a video such as an MP4, applies ControlNet's Depth and OpenPose to generate frame images for the video, and creates a video based on the created frame images.

Go into the mask editor for each of the two and paint in where you want your subjects.

I produce these nodes for my own video production needs (as "Alt Key Project" on YouTube).

Tried changing the parameters with --gpu-only and --highvram, but nothing changed.

Nobody needs all that, LOL.

It will swap images each run, going through the list of images found in the folder.

This workflow, combined with Photoshop, is very useful for:
- Drawing specific details (tattoos, special haircuts, clothes patterns, …)
- Gaining time (all major AI features available without even adding nodes)
- Reiterating over an image in a controlled manner (get rid of the classic AI Random God Generator!)