Just started with ComfyUI and really love the drag and drop workflow feature. The complete workflow you used to create an image is also saved in the file's metadata: the PNG files produced by ComfyUI contain all the workflow info. The workflow JSON is saved with the .png, so simply load or drag the PNG into ComfyUI and it will load the workflow.

Download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow; it'll create the workflow for you. To download a workflow, go to the website linked at the top, save the image of the workflow, and drag it into ComfyUI. Or save one of the images and drag and drop it onto the ComfyUI interface.

Dragging a generated PNG onto the webpage, or loading one, will give you the full workflow, including the seeds that were used to create it. If you mean workflows: they are embedded into the PNG files you generate, so simply drag a PNG from your output folder onto the ComfyUI surface to restore the workflow. This should import the complete workflow you used, even including unused nodes. It saves just the workflow, though, including the wildcard prompt but not what the random prompt generated; there is no version of the generated prompt.

This works on all images generated by ComfyUI, unless the image was converted to a different format like JPG or WebP. The metadata from PNG files saved from ComfyUI should transfer over to other ComfyUI environments. Yup, all images generated in the main ComfyUI frontend have the workflow embedded into the image like that (right now, anything that uses the ComfyUI API doesn't have that, though). It makes it really easy if you want to generate an image again with a small tweak, or just check how you generated something.

When I save my final PNG image out of ComfyUI, it automatically includes my ComfyUI data, prompts, etc., so that any image made from it, when dragged back into Comfy, sets ComfyUI back up with all the prompts and data, just like the moment I originally created the original image.

Just as an experiment, drag and drop one of the PNG files you have outputted into ComfyUI and see what happens. You can simply open that image in ComfyUI, or drag and drop it onto your workflow canvas. I tend to agree with NexusStar: as opposed to having some uber-workflow thingie, it's easy enough to load specialised workflows just by dropping a workflow-embedded .png.

You can also save the workflow as a JSON file with the queue control panel's "Save" workflow button. Note that I noticed ComfyUI is only able to load workflows saved with the "Save" button, not with the "Save API Format" button. So dragging an image made with Comfy onto the UI loads the entire workflow used to make it, which is awesome, but is there a way to make it load just the prompt info and keep my workflow otherwise?

You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell. Flux Schnell is a distilled 4-step model. You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder.
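Since the graph lives in ordinary PNG text chunks, you can also check for it outside the UI, for example to confirm a downloaded file still has its workflow before dragging it in. Below is a minimal sketch, assuming Pillow is installed; ComfyUI stores the full editor graph under the "workflow" key and the executed graph under "prompt".

```python
# Minimal sketch: inspect a ComfyUI PNG for its embedded workflow.
# ComfyUI writes two PNG text chunks, "workflow" (the editor graph,
# including unused nodes) and "prompt" (the executed graph). Both are
# lost if the file was converted to JPG/WebP or stripped by an uploader.
import json
from PIL import Image  # pip install pillow

def load_embedded_workflow(path: str):
    raw = Image.open(path).info.get("workflow")
    return json.loads(raw) if raw else None

wf = load_embedded_workflow("ComfyUI_01556_.png")  # example filename
if wf is None:
    print("no workflow chunk: stripped, converted, or API-generated?")
else:
    print(f"embedded workflow contains {len(wf.get('nodes', []))} nodes")
```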
First of all, sorry if this has been covered before; I did search and nothing came back. This topic aims to answer what I believe would be the first questions an A1111 user might have about Comfy.

ComfyUI is a completely different conceptual approach to generative art. It encapsulates the difficulties and idiosyncrasies of Python programming by breaking the problem down into units, which are represented as nodes. I was confused by the fact that in several YouTube videos by Sebastian Kamph and Olivio Sarikas, they simply drop PNGs into an empty ComfyUI. Loading a PNG to see its workflow is a lifesaver to start understanding the workflow GUI, but it's not nearly enough; however, this can be clarified by reloading the workflow or by asking questions. I spent around 15 hours playing around with Fooocus and ComfyUI, and I can't even say that I've started to understand the basics. However, I may be starting to grasp the interface.

Not sure if my approach is correct or sound, but if you go to my other post, the one on just getting started, and download the PNG and throw it into ComfyUI, you'll see the node setup I sort of cobbled together. Not a specialist, just a knowledgeable beginner. But let me know if you need help replicating some of the concepts in my process.

Some example workflows you can load this way:
- How to upscale your images with ComfyUI
- Merge 2 images together with this ComfyUI workflow
- ControlNet Depth ComfyUI workflow: use ControlNet Depth to enhance your SDXL images
- Animation workflow: a great starting point for using AnimateDiff (vid2vid made with the ComfyUI AnimateDiff workflow)
- ControlNet workflow: a great starting point for using ControlNet

Hi all! Was wondering, is there any way to load an image into ComfyUI and read the generation data from it? I generated images from ComfyUI. I know dragging the image into ComfyUI loads the entire workflow, but I was hoping I could load an image and have a node read the generation data, like prompts, steps, sampler, etc.

I dump the metadata for a PNG I really like: magick identify -verbose .\ComfyUI_01556_.png
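If you'd rather pull out just those fields than scroll through a full identify dump, the same answer can be read from the "prompt" chunk programmatically. A hedged sketch: KSampler and CLIPTextEncode are the stock node class names, so custom sampler or prompt nodes will need their own cases.

```python
# Sketch: read prompts/steps/sampler straight from a ComfyUI PNG.
# The "prompt" chunk holds the executed graph in API format:
#   {node_id: {"class_type": ..., "inputs": {...}}, ...}
import json
from PIL import Image

graph = json.loads(Image.open("ComfyUI_01556_.png").info["prompt"])

for node_id, node in graph.items():
    inputs = node.get("inputs", {})
    if node["class_type"] == "KSampler":  # stock sampler node
        print(f"node {node_id}: seed={inputs['seed']}, steps={inputs['steps']}, "
              f"cfg={inputs['cfg']}, sampler={inputs['sampler_name']}")
    elif node["class_type"] == "CLIPTextEncode":  # stock prompt encoder
        text = inputs.get("text")
        if isinstance(text, str):  # may instead be a link to another node
            print(f"node {node_id}: prompt={text!r}")
```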
My actual workflow file is a little messed up at the moment, and I don't like sharing workflow files that people can't understand. My process is a bit particular to my needs, and the whole power of ComfyUI is for you to create something that fits your needs. If you need help, just let me know.

I'll do you one better, and send you a PNG you can directly load into Comfy. Thank you very much! I understand that I have to put the downloaded JSONs into the custom nodes folder and load them from there.

The problem I'm having is that Reddit strips this information out of the PNG files when I try to upload them. The image itself was supposed to be the workflow PNG, but I heard Reddit is stripping the metadata from it. I had to place the image into a zip, because people have told me that Reddit strips PNGs of metadata. Anyone ever deal with this? This missing metadata can include important workflow information, particularly when using Stable Diffusion or ComfyUI.

The solution: to tackle this issue, with ChatGPT's help, I developed a Python-based solution that injects the metadata back into the Photoshop-exported file (PNG). If necessary, updates of the workflow will be made available on GitHub.
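That script wasn't shared, but the core of the idea is only a few lines with Pillow: copy the workflow text chunks from the original render into the edited export. A rough sketch, assuming both files are PNGs; the filenames are placeholders.

```python
# Sketch of the re-injection idea: copy ComfyUI's "prompt"/"workflow"
# chunks from the original render into an edited PNG that lost them
# (e.g. after a Photoshop round-trip). Filenames are placeholders.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

original = Image.open("ComfyUI_01556_.png")
edited = Image.open("edited.png")

meta = PngInfo()
for key in ("prompt", "workflow"):
    if key in original.info:
        meta.add_text(key, original.info[key])

edited.save("edited_with_workflow.png", pnginfo=meta)
```

The zip workaround works for a related reason: the PNG inside the archive is delivered byte-for-byte, so its chunks survive the upload untouched.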
I use a Google Colab VM to run ComfyUI, so every time I reconnect I have to load a presaved workflow to continue where I started. It is not much of an inconvenience when I'm at my main PC, but when I'm doing it from a work PC or a tablet, it is an inconvenience to obtain my previous workflow.

Actually, there is a better way to access your computer and ComfyUI. To access your computer, you can use Windows Remote Desktop and forward the TCP port using https://remote.it, which lets you port forward up to five ports on the free plan. You can use remote.it the same way to port forward ComfyUI.

There is also a tool that works by converting your workflow .json files into an executable Python script that can run without launching the ComfyUI server. This makes it potentially very convenient to share workflows with others. Potential use cases include: streamlining the process of creating a lean app or pipeline deployment that uses a ComfyUI workflow, and creating programmatic experiments for various prompt/parameter values.
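When you do have a server running, a lighter-weight alternative to a compiled script is ComfyUI's own HTTP API: a workflow exported with "Save (API Format)" can be queued directly. A sketch assuming the default local instance on 127.0.0.1:8188:

```python
# Sketch: queue a "Save (API Format)" workflow on a running ComfyUI
# server. Assumes the default listen address 127.0.0.1:8188; the JSON
# response includes the prompt_id of the queued job.
import json
import urllib.request

with open("workflow_api.json") as f:  # exported via "Save (API Format)"
    graph = json.load(f)

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": graph}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # e.g. {"prompt_id": "...", "number": ...}
```

Keep in mind the caveat noted earlier: images generated through the API don't get the workflow embedded in them the way the main frontend's renders do.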
Update ComfyUI and all your custom nodes first, and if the issue remains, disable all custom nodes except for the ComfyUI Manager and then test a vanilla default workflow. If that works out, you can start re-enabling your custom nodes until you find the bad one, or hopefully find that the problem resolved itself. If you see a few red boxes, be sure to read the Questions section on the page.

Thanks, I already have that, but I've run into the same issue I had earlier where the Load Image node is missing the Upload button. I fixed it earlier by doing Update All in Manager and then running the ComfyUI and Python dependencies batch files, but that hasn't worked this time, so I'm only going to be able to do prompts from text until I've figured it out.

I have also experienced that ComfyUI has lost individual cable connections for no comprehensible reason, or nodes have not worked until they were replaced by the same node with the same wiring.

A quick question for people with more experience with ComfyUI than me; my only current issue is as follows. I'm getting an issue where, whatever I generate, a bogus workflow I used a few days ago is saving… and when I try to load the PNG, it brings up the wrong workflow, and fails to render anything if I hit queue. If I drag and drop the image, it is supposed to load the workflow? I also extracted the workflow from its metadata and tried to load it, but it doesn't load. The test image was a crystal in a glass jar; I can load the default and just render that jar again… but it still saves the wrong workflow.

Hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling. Hello fellow ComfyUI users, this is my workflow for testing different methods to improve image resolution. It is a simple way to compare these methods; it is a bit messy, as I have no artistic cell in my body. My workflow lets you choose an image (or several) from the batch and upscale them. I'm trying to do the same as hires fix, with a model and weight below 0.5, going from 512x512 to 2048x2048. Instead, I created a simplified 2048x2048 workflow.

Below is my XL Turbo workflow, which includes a lot of toggles and focuses on latent upscaling. I'm currently running into certain prompts where latent just looks awful, so I'm revising the workflow below to include a non-latent option. EDIT: WALKING BACK MY CLAIM THAT I DON'T NEED NON-LATENT UPSCALES. From the ComfyUI_examples, there are two different 2-pass (hires fix) methods: one is latent scaling, one is non-latent scaling. Now there's also a `PatchModelAddDownscale` node. I tried to find either of those two examples, but I have so many damn images I couldn't find them.

I compared the 0.9 and 1.0 VAEs in ComfyUI. Again, I got the difference between the images and increased the contrast. Here you can see random noise that is concentrated around the edges of the objects in the image.

Once the final image is produced, I begin working with it in A1111: refining, photobashing in some features I wanted, and re-rendering with a second model, etc. I'm not sure which specifics you're asking about, but I use ComfyUI for the GUI and use a custom workflow combining ControlNet inputs and multiple hires-fix steps. Second, if you're using ComfyUI, the SDXL invisible watermark is not applied. The one I've been mucking around with includes poses (from OpenPose) now, and I'm going to off-screen all nodes that I don't actually change parameters on.

Here I just use: futuristic robotic iguana, extreme minimalism, white porcelain robot animal, details, build by Tesla, Tesla factory in the background. I'm not using breathtaking, professional, award-winning, etc., because that's already handled by "sai-enhance". You can use () to change the emphasis of a word or phrase, like: (good code:1.2) or (bad code:0.8).

I would like to further modify the ComfyUI workflow for the aforementioned "Portal" scene, in a way that lets me use single images in ControlNet the same way that repo does (by frame-labeled filename, etc.). I would like to use that in tandem with an existing workflow I have that uses QR Code Monster to animate traversal of the portal.

I've mostly played around with photorealistic stuff and can make some pretty faces, but whenever I try to put a pretty face on a body in a pose or a situation, I… I put together a workflow doing something similar, but taking a background and removing the subject, inpainting the area so I get no subject. Then I take another picture with a subject (like your problem), remove the background and make it IPAdapter-compatible (square), then prompt and IPAdapt it into a new one with the background.

For reference: Searge SDXL Update v2.1 for ComfyUI | now with LoRA, HiresFix, and better image quality | workflows for txt2img, img2img, and inpainting with SDXL 1.0 | all workflows use base + refiner. And: SDXL 1.0 ComfyUI Tutorial - readme file updated with SDXL 1.0 download links and new workflow PNG files - new updated free-tier Google Colab now auto-downloads SDXL 1.0 and refiner and installs ComfyUI.

A somewhat decent inpainting workflow in ComfyUI can be a pain in the ass to make. It took me hours to get one I'm more or less happy with, where I feather the mask (feather nodes usually don't work how I want them to, so I use mask2image, blur the image, then image2mask) and use 'only masked area', where it also applies to the ControlNet (applying it to the ControlNet was probably the worst part). A transparent PNG in the original size, with only the newly inpainted part, will be generated. Layer copy & paste this PNG on top of the original in your go-to image editing software. Save the new image. Insert the new image into the workflow again and inpaint something else; rinse and repeat until you lose interest :-)
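That layer copy & paste step can also be scripted if you'd rather skip the image editor: compositing the transparent inpaint result over the original is a single Pillow call. A sketch with placeholder filenames, assuming both images are the same size:

```python
# Sketch of the paste-back step: composite the transparent PNG that
# holds only the inpainted region over the original render, then save
# the merged image for the next inpainting round. Filenames are
# placeholders; alpha_composite requires both images be the same size.
from PIL import Image

base = Image.open("original.png").convert("RGBA")
patch = Image.open("inpainted_only.png").convert("RGBA")

merged = Image.alpha_composite(base, patch)
merged.save("merged.png")  # feed this back in and inpaint the next area
```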