Upscaling in ComfyUI: tips and workflows collected from Reddit
Just use an upscale node. A minimal test workflow is Load Image -> Upscale Image (using Model) -> Save Image: download an upscale model such as 4x-UltraSharp, put it in the models/upscale_models folder, and load it with the UpscaleModelLoader node (a scripted version of this three-node graph is sketched below). The same scaling nodes can also downscale, e.g. with a factor of 0.5 after an ESRGAN model.

Whenever I upscale using the Ultimate SD Upscale node, there's a vague "grid pattern" of squares in the final image. For weird generations in general, suspect a bad checkpoint download or try a known-good workflow first; for tile seams specifically, more tile padding helps (see further down).

For large images, a tiled ControlNet plus Ultimate SD Upscale at 3-4x gives up to 6K x 6K images that are quite crisp. SUPIR is good for sharpening on a first pass, but it won't run in 4GB of VRAM. With Deep Shrink-style sampling you can sample a 3072x1280 image, sample again for more detail, then upscale 4x to a 12288x5120 result.

IP-Adapter tip: crank up the weight but don't let the IP-Adapter start until very late, so the underlying model builds the image from the prompt and the face is the last thing changed.

Latent upscaling is VRAM-hungry: a second, latent-upscale pass alone can use 16GB. It does add detail, so keep one in the chain if it works for you, and do some comparisons. You can also use the SDXL refiner as the model in UltimateSDUpscale, or generate in SDXL and run the result through img2img with an SD1.5 checkpoint.

Side notes: a ComfyUI weekly update added DAT upscale model support and more T2I adapters, and custom nodes arguably should declare license terms ("commercial use: yes / no / needs license"), with workflows showing a red warning when they use non-commercial nodes.
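As a concrete version of that three-node graph, here is a minimal sketch that submits an API-format workflow to a locally running ComfyUI over its HTTP endpoint. The node class names match current ComfyUI builds, but the filenames and port are assumptions; adjust them to your install.

```python
import json
import urllib.request

# Minimal API-format graph: LoadImage -> ImageUpscaleWithModel -> SaveImage.
# "input.png" must already exist in ComfyUI's input folder, and
# 4x-UltraSharp.pth in models/upscale_models (both assumed here).
workflow = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "input.png"}},
    "2": {"class_type": "UpscaleModelLoader",
          "inputs": {"model_name": "4x-UltraSharp.pth"}},
    "3": {"class_type": "ImageUpscaleWithModel",
          "inputs": {"upscale_model": ["2", 0], "image": ["1", 0]}},
    "4": {"class_type": "SaveImage",
          "inputs": {"images": ["3", 0], "filename_prefix": "upscaled"}},
}

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",  # ComfyUI's default port
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())  # returns a prompt_id
```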
Upscale your output and pass it through a hand detailer in your SDXL workflow; a denoise setting of 0.35 with 10 steps or less works well, or try a standard checkpoint with, say, 13 and 30 steps. It's not perfect, but being able to generate a high-quality picture like this in under a minute is remarkable.

Performance notes: for a 2x upscale, Automatic1111 is about 4 times quicker than ComfyUI on a 3090 (unclear why), with results on par with the newest Topaz; but in that comparison the A1111 image was upscaled while the ComfyUI one was not, and the image needs to go through a KSampler again after upscaling. A well-built workflow can do 1.5-2x upscales efficiently with generally nice results. On a 4090 with no optimizations kicking in, a 512x512, 16-frame animation takes around 8GB of VRAM. (PS: if someone has access to Magnific AI, please upscale and post results for 256x384 at jpg quality 5 and quality 0.)

A common beginner problem sits right after image loading, where the image is scaled up with the "Image Scale to Side" node; if your upscales come out blurry and lacking detail, like any traditional resize, this is likely why (more on nearest-exact below). Other recurring questions: is there a benefit to upscaling the latent instead? Can you generate a batch and upscale just one chosen image, rather than every creation? Does anyone remember the tutorial for upscaling by grid areas, where you specify the desired grid size and the number of rows and columns? And img2img upscale: can you upscale a real photo?

For the fundamentals: the Ultimate SD Upscale node suite is an extension porting A1111's popular script, upscale models install via Install Models in the ComfyUI Manager menu, and a new SD_4XUpscale_Conditioning node adds support for the x4-upscaler model. You have two ways to perform a "Hires Fix" natively in ComfyUI: a latent upscale, or an upscaling model in pixel space followed by a low-denoise sampler pass, as in the sketch below.
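The pixel-space flavor (upscale the image, then re-sample at low denoise) is easy to see outside ComfyUI too. A rough sketch with the diffusers library; the checkpoint name, filenames, and strength value are illustrative assumptions, not from the thread:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load an SD1.5 img2img pipeline (checkpoint name is an assumption).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

base = Image.open("base_512.png")                # first-pass render
big = base.resize((1024, 1024), Image.LANCZOS)   # pixel-space 2x upscale

# Low "denoise" (strength) second pass: keeps composition, adds detail.
result = pipe(prompt="same prompt as the first pass",
              image=big, strength=0.35).images[0]
result.save("hires_fix.png")
```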
Trying to expand my knowledge: let's say I have an old backup image, but it's low resolution; can it be upscaled well? Related: how can I upscale and increase the line density in geometric artwork? And can anything clean up and upscale screenshots from a late-90s animation (Escaflowne, Rurouni Kenshin)? Plain upscaling doesn't clean the "dirt" around the lines, and the colors stay a bit dim and dark, so a sampler pass is needed rather than an upscaler alone.

The image-scaling nodes can downscale too: set a direct resolution, or go under 1 on the "upscale image by" node. In the saved workflow the factor is 4, with 10 steps (Turbo model), which acts like a 60% denoise; the amount can be changed via the "Downsample" value, which has its own documentation inside the workflow.

A practical pipeline: generate a batch, cherry-pick the best one, then run Ultimate SD Upscale at X2 with UltraSharp, tile resolution 640x640 and mask blur 16. My ComfyUI setup seems a bit slower than A1111, so I mostly use ComfyUI for SDXL and stick with A1111 for SD1.5. For comparison, in A1111 I drop the ReActor output into the img2img tab, keep the same latent size, use a tile ControlNet model, and run the Ultimate SD Upscale script.

Latent upscale is different from pixel upscale; try a VAEDecode immediately after a latent upscale to see what I mean. (Anyone have a good way to get a latent-upscale HighRes Fix working with SDXL?) For SUPIR, one approach is a separate ComfyUI install just for it, pointing at your existing base model and checkpoint folders with full paths in extra_model_paths.yaml; I once got noise I didn't like, but a reboot fixed it on the second try.
ComfyUI's upscale-with-model node doesn't have an output size option like other upscale nodes, so you have to manually downscale the image to the appropriate size afterward. Workflow-building tip: convert node settings to inputs and drive them from a primitive node; connect the same primitive to five other nodes and you change them all in one place instead of editing each node. Masking trick: upscale the mask by 4x, then use a Cut by Mask node from the Masquerade pack. Remember that every Sampler node (the step that actually generates the image) requires a latent image as input. Open question: does the upscale pass benefit from a LoRA, or from adding "detailed faces" to the positive prompt feeding the upscaler?

On quality: below roughly 0.5 denoise a latent upscale gives issues; a common alternative is a 4x-UltraSharp image upscale, re-encoded through a KSampler at the higher resolution with a low denoise (around 0.15-0.3). A bare model upscale is mediocre, like upscaling any regular image with traditional methods; for clothing patterns and similar fine detail you need the sampler to add information.

A reported Ultimate SD Upscale bug, with console output (the poster hadn't filed an issue because it's unclear whether the problem lives in ComfyUI, Ultimate Upscale, or some other custom node entirely): it just repeats the same process again and again.

    Upscaling iteration 1 with scale factor 2
    Tile size: 768x768
    Tiles amount: 6
    Grid: 2x3
    Redraw enabled: True
    Seams fix mode: NONE
    Requested to load AutoencoderKL
    Loading 1 new model
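The grid in that log falls out of simple arithmetic. A sketch (the real node also handles padding and overlap, so treat this as an approximation; the 1024x768 source size is an assumption consistent with the log):

```python
import math

def tile_grid(width, height, scale, tile=768):
    """Rough tile grid for Ultimate-SD-Upscale-style tiling."""
    up_w, up_h = width * scale, height * scale
    cols = math.ceil(up_w / tile)   # tiles across the upscaled width
    rows = math.ceil(up_h / tile)   # tiles down the upscaled height
    return up_w, up_h, rows, cols, rows * cols

# A 1024x768 source at scale factor 2 with 768px tiles:
print(tile_grid(1024, 768, 2))  # (2048, 1536, 2, 3, 6) -> "Grid: 2x3", 6 tiles
```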
On SD_4XUpscale_Conditioning: I decided to pit it against a regular upscale (results and workflow below). Most upscaling workflows upscale every creation, which is rarely useful; generate first, then pick one or two to upscale. I probably wouldn't upscale by 4x in a single step at all; instead I use a Tiled KSampler with a low denoise. The specific upscale node doesn't matter much, except LDSR and a few other special upscalers that need their own node. I also wanted to share a workflow for input images you want 4x enlarged but not changed too much, while keeping some leeway.

Why the "Image Scale to Side" setup fails: nearest-exact is a crude image upscaling algorithm, and combined with a low denoise strength and step count in the KSampler it means you are basically doing nothing to the image when you denoise it, leaving all the jagged edges. Put the model-patching node immediately after the checkpoint loader, before anything else on the model line. I've struggled with Hires fix and other upscaling methods like the Loopback Scaler script and SD Upscale (more on that below).
I tried all the possible upscalers in ComfyUI: LDSR, latent upscale, several models such as the NMKD ones, the Ultimate SD Upscale node, "hires fix" (yuck!), and iterative latent upscaling; with only 4GB of NVIDIA VRAM, large images crash my process. If a workflow complains about a missing upscale model, click the node that loads the upscale model and pick one. I switched to the Ultimate SD Upscale (with Upscale) variant, but the results appear less real to me and it seems to work the machine harder.

That's exactly what I ended up planning: as a newbie I set up Searge's workflow, copied the official ComfyUI i2v workflow into it, and pass in whatever image I like; a denoise around 0.3 usually gives the best results. Upscaling even 4x by itself won't change much; if you want more resolution, simply add another Ultimate SD Upscale node (SD Ultimate Upscale is the popular AUTOMATIC1111 extension; the ComfyUI nodes are a port). I also edited ComfyUI-Custom-Scripts' string_function.py to adjust the Preview Image node. Typical use: generating 10-20 images per prompt, then upscaling the keepers.

Step 1, text to image. The prompt varies a bit from picture to picture, but here is the first one: high resolution photo of a transparent porcelain android man with glowing backlit panels, closeup on face, anatomical plants, dark swedish forest, night, darkness, grainy, shiny, fashion, intricate plant details, detailed, (composition:1.x); raise the weight values if you want to benefit from the higher-res processing.
I am very interested in shifting from Automatic1111 to ComfyUI: is there a port of Ultimate SD Upscale? (Yes, see above.) Clearing up blurry images has practical uses, but most people want something like Magnific, which actually fixes the smudges and messy details of SD generations while producing very clean, sharp output; it would take some analysis to work out whether Magnific uses a multimodal model to generate a prompt for the upscale pass. The upscale not being latent causing minor distortion effects and artifacts makes so much sense, and latent upscaling definitely takes longer. To run the upscale-only path in that workflow: switch the toggle to upscale, enter the right CFG, make sure randomize is off, and press queue.

In ComfyUI Manager, select Install Models and scroll to the ControlNet models; download the ControlNet tile model (the description specifically says you need it for tile upscaling). Go up 4x with a model, then downscale to your desired resolution with an image scale node; it may not seem intuitive, but it's better to use a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale.

For a dozen days I've been working on a simple but efficient upscale workflow; the node looks like Iterative Upscale from the Impact Pack. I usually use two workflows: "latent upscale" then denoising around 0.5, or "upscaling with model" then denoising around 0.3. The only model-based hires-fix approach I've seen feeds the Hires Fix node's latent input from AI upscale -> downscale image nodes; plug the output into a "latent upscale by" node set to your target (lower values like 1.5 are usually a better idea than going 2+). These comparisons were done with default node settings and fixed seeds. If you want a workflow that generates a low-resolution image and then upscales it immediately, the HiRes examples are exactly that.
ComfyUI SDXL-Turbo extension with upscale nodes (tutorial/guide). Is there a way to copy normal WebUI parameters (the usual PNG info) into ComfyUI with a simple Ctrl-C/Ctrl-V? Dragging and dropping A1111 PNGs into ComfyUI works most of the time. (Note that Turbo's 512px output is quite a bit below 1024-plus-upscale, and asking for slightly less rough output at 4 steps, as in the paper's comparison, gets slower.)

Model notes: I have yet to find an upscaler that outperforms the Proteus model. AuraSR v1 is ultra sensitive to ANY kind of image compression: feed it images straight out of SD, prior to any saving, or the output will probably be terrible (one user had to zip the image because Reddit recompresses uploads); it works in ComfyUI too. With Iterative Upscale it might be better to add noise between passes, via noise injection or an unsampler hook; a 0.3 denoise takes a bit longer but gives more consistent results than a latent upscale. ReActor did a decent job at a faceswap. You don't have to use a hi-res fix at all if you don't want to, and upscaling with different denoise parameters really changes the image.

Wiring: connect Load Upscale Model and Upscale Image (using Model) after VAE Decode, then send the image to your preview/save node. (The Ultimate SD Upscale node is a wrapper for the script used in the A1111 extension.) The upscale amount is determined by the upscale model itself, so to hit a target size you run a regular AI upscale then a downscale: 4x followed by 0.5 if you want to divide by 2, as sketched below.
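A minimal sketch of that "divide by 2 after a 4x model" step, done outside ComfyUI with Pillow (filenames are placeholders):

```python
from PIL import Image

up = Image.open("esrgan_4x_output.png")  # output of a 4x upscale model
w, h = up.size

# Net 2x relative to the original: the model gave 4x, so halve it.
half = up.resize((w // 2, h // 2), Image.LANCZOS)
half.save("net_2x.png")
```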
When I add upscaling to my AnimateDiff workflow, the upscaled version loses a lot of consistency; after two days of testing I found Ultimate SD Upscale detrimental there (before/after pictures uploaded for reference). Applying optical flow smooths the appearance but loses definition in every frame. AnimateDiff runs at pretty reasonable resolutions with 8GB or less; with less VRAM, ComfyUI optimizations kick in that decrease the VRAM required. The Kohya Deep Shrink node also does wonders on a card with just 8GB.

Detail LoRAs are 100% compatible with ComfyUI, and they're the first, second, and third recommendation. There are also Krea/Magnific clones in ComfyUI that can upscale video game characters to real life. A typical chain: SD1.5 at its native resolutions (or SDXL at SDXL resolutions), then a hi-res-fix latent upscale of another 1.5x. If a shared workflow can't find its upscale model, the author probably renamed the file; just pick the model again. "Upscale by model" takes you up 2x or 4x or whatever the model provides. If you've changed the batch size on the "pipeLoader - Base" node to be greater than 1, change it back to 1 and try again.

SUPIR really changed upscaling. One staged pipeline: apply the inpaint mask, run through a KSampler, send the latent to a latent upscaler at about 1.5x (going much past 1.5 invites latent-upscale artifacts), and compare against the original upscale alone; increasing the mask blur lost details, but increasing the tile padding to 64 helped. The pixel-space method consists of a few steps: decode the samples into an image, upscale the image using an upscaling model, encode the image back into the latent, attach it as the KSampler's latent_image (here "upscale latent"), and set your desired latent resolution. For example, 512 * 4 * 0.5 = 1024: a 4x model then a 0.5 downscale turns 512x512 into a 1024x1024 final image. A sketch of this decode/upscale/re-encode loop follows.
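The decode -> upscale -> re-encode loop, sketched with diffusers' AutoencoderKL and a plain interpolation standing in for the upscale model (checkpoint name and scale factor are assumptions):

```python
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="vae"
).to("cuda").eval()

SCALE = 0.18215  # SD1.5 latent scaling factor

@torch.no_grad()
def hires_latent(latents, factor=2.0):
    """Decode latents to pixels, upscale, re-encode for a second sampler pass."""
    img = vae.decode(latents / SCALE).sample             # latents -> pixels
    img = F.interpolate(img, scale_factor=factor,
                        mode="bicubic")                  # stand-in upscaler
    return vae.encode(img).latent_dist.sample() * SCALE  # pixels -> latents

# latents: [batch, 4, 64, 64] from a 512x512 first pass;
# hires_latent(latents) -> [batch, 4, 128, 128], ready for a low-denoise pass.
```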
There is a latent workflow and a pixel-space ESRGAN workflow in the ComfyUI examples. I switched over to Ultimate SD Upscale and it works much the same, only with better results. One three-stage pipeline: CCSR for a 2x first stage, SUPIR to 4K for the second, SD Ultimate Upscale to 8K for the third. Another setup does a first pass with SUPIR and a second with Ultimate SD, and matched the colour of the original brilliantly.

I was always told to use cfg 10 and keep denoise around 0.4 for the original SD Upscale. It's actually possible to add an upscaler like 4x-UltraSharp to a Turbo workflow and go from 512x512 to 2048x2048 while staying blazingly fast. Make sure you install missing nodes with ComfyUI Manager. I was putting together a guide for the ComfyUI nodes for the Ultimate Stable Diffusion Upscale script by Coyote-A; this sort of automation is the kind of thing ComfyUI is great at, where Automatic1111 would make you remember to change the prompt every time. One user reported "'NoneType' object has no attribute 'copy'" errors.

Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like a face in the bottom right instead of a teddy bear; if you want to upscale everything at once, you may as well inpaint on the higher-res image. There isn't a "mode" for img2img, and you can repeat the upscale-and-fix process multiple times if you wish. For video: images reduced from 12288 to 3840 px width, and anything needing more goes through Topaz Video AI to 4K and up.
The core choice: you either upscale in pixel space first and then do a low-denoise second pass, or you upscale in latent space and do a high-denoise second pass. You can also facedetail the result after the upscale, and use SD1.5 with embeddings and/or LoRAs for better hands. Hires fix and Loopback Scaler either don't produce the desired output, changing too much about the image (especially faces), or they don't increase the details enough, which leaves the result looking too smooth.

I'm editing an animation and want to take a 1024x512 frame-sequence output and add detail with the same SD1.5 model during or after the upscale; a related workflow automates converting roughs from A1111's txt2img to higher resolutions via img2img. Edit: I wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax) here. It works more like DLSS: tile by tile, and faster than iterative upscaling. I'm also trying different startup parameters for ComfyUI, like disabling smart memory.

Misc: there's a Load Upscale Model node under loaders; in Manager, search "upscale" and click Install for the models you want. Grab an image from your file folder and drag it onto the ComfyUI window to load its embedded workflow. A corrupted .ckpt download caused one mystery here; re-downloading made the problem vanish. I did some testing of KSampler schedulers used during an upscale pass; the detail pass runs after the refined image is upscaled and encoded into a latent.

And the RGBA question: is there any node, or any possibility, of feeding an RGBA image through iterative upscale methods while preserving the alpha channel and its exact transparency? Ultimate SD Upscale has a 3-channel input and refuses alpha, and VAE Encode for Inpainting, despite having a mask input, also refuses 4-channel input. One partial fix: use only 1 step and add multiple Iterative Upscale nodes. A DIY workaround is sketched below.
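No node is named in the thread for the RGBA case, so this is a hypothetical workaround in Pillow: upscale the RGB and the alpha separately, then re-merge. Any SD-based pass would still only see the RGB part, and the resize here is a stand-in for it.

```python
from PIL import Image

rgba = Image.open("sprite.png").convert("RGBA")
rgb = rgba.convert("RGB")             # 3-channel input for the upscale pass
alpha = rgba.getchannel("A")          # keep transparency aside

factor = 4
big_rgb = rgb.resize((rgb.width * factor, rgb.height * factor),
                     Image.LANCZOS)   # stand-in for the model/SD upscale
big_alpha = alpha.resize(big_rgb.size, Image.LANCZOS)

big_rgb.putalpha(big_alpha)           # re-attach the upscaled alpha
big_rgb.save("sprite_4x.png")
```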
For context, one poster is trying to upscale images of an anime village, something like Ghibli style. For face swaps: do the ReActor swap at a lower resolution, then upscale the whole image in very small steps with very small denoise amounts. There's a workflow going around called "1minute 8K Upscale". From VAE Decode you feed an "Upscale Image (using Model)" node, with the model from a loader; if sizes don't line up, insert an ImageScale node. You could also add a latent upscale in the middle of the process and an image downscale in pixel space at the end (an upscale node with a factor under 1), then refine.

My tutorial and workflow are now available. Until the upscale-focused models mature, this is more useful for model makers than for end-users. Still open: upscaling SD video output from its native 1024x576.

"Latent upscale" is an operation in latent space, and there's no way to use a pixel-space upscale model there. The latent upscale in ComfyUI is crude, basically a "stretch this image" operation; people run into trouble because they assume it matches the Latent Upscale from Auto1111, and it's why you need at least 0.5 denoise afterward. The sketch below shows what the stretch amounts to.
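A PyTorch sketch of that latent "stretch" (the exact node internals and interpolation modes may differ; this just illustrates why no new detail appears):

```python
import torch
import torch.nn.functional as F

latent = torch.randn(1, 4, 64, 64)   # a 512x512 image's SD latent

# ComfyUI's latent upscale is essentially this: interpolate the 4-channel
# latent tensor. No new detail is created, hence the need for >=0.5 denoise.
stretched = F.interpolate(latent, scale_factor=2, mode="nearest")
print(stretched.shape)               # torch.Size([1, 4, 128, 128])
```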
Then comes the higher-resolution pass. Here is an example of how to use upscale models like ESRGAN, with schemes you can adapt for custom workflows; when using a tile ControlNet, set "ControlNet is more important". Work out how much upscale is needed to reach the final resolution, whether from a normal upscaler or from a value already 4x-scaled by an upscale model; example workflows are provided as JSON/PNG.

I managed to make a very good workflow with IP-Adapter with regional masks and ControlNet, and it's just missing a good upscale; the higher the denoise, the more the upscaler changes, and sometimes it even creates an additional person inside an already-generated image. Do you just upscale it, or is it a custom node from Searge or others? I can't tell without the workflow link. One showcase was rendered with SDXL 0.9 and then upscaled in A1111.

Maintenance: outdated custom nodes -> Fetch Updates and Update in ComfyUI Manager. The best method, as said below, is to upscale the image with a model (then downscale if necessary to the desired size, since most upscalers do 4x and that's often too big to process) and send it back through VAE Encode for a detail pass. The annotated beginner workflow here uses the Default Turbo Postprocessing from the linked Gdrive folder. I usually use 4x-UltraSharp for realistic images; the standard ESRGAN 4x is a good jack of all trades without a crazy performance cost, and on low VRAM you'll want some sort of tiled approach. How are you setting up the upscale nodes?
However, I am curious how A1111 handles these processes at the latent level, which ComfyUI exposes extensively. I gave up on latent upscale: I upscale with a 2x ESRGAN, sample the 2048x2048 result again, then upscale once more with a 4x ESRGAN; passes around 1.5x are manageable on 10GB NVIDIA GPUs. Both Upscale Image (using Model) and Load Upscale Model work fine here; if you want a specific output size, remember the factor is fixed by the model, so scale afterward. To install Ultimate SD Upscale as a custom node, the easy way is through ComfyUI Manager.

A recipe that works for the second pass: 0.6 denoise and either CNet strength 0.5 (euler, sgm_uniform) or CNet strength 0.9 with end_percent 0.9. If Ultimate SD Upscale mis-tiles at a factor of 2, you've possibly messed up the noodles on the "Get latent size" node under it; it should use the two INT outputs. The refiner idea is simple: use the refiner as the model for upscaling instead of an SD1.5 checkpoint; the catch is that the upscale adds so much noise that the refining step can craft a different image with newly introduced deformities.

Finally, dragging a saved image onto the ComfyUI window replicates that image's workflow and seed, provided the PNG metadata survived (Reddit strips it on upload); the sketch below shows how to check a file.
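ComfyUI stores the graph under a "workflow" PNG text chunk and the API prompt under "prompt" (key names per current builds; verify for yours). A small sketch to test whether a file still carries them:

```python
import json
from PIL import Image

img = Image.open("generation.png")
meta = img.info  # PNG text chunks land in this dict

for key in ("workflow", "prompt"):
    raw = meta.get(key)
    if raw:
        data = json.loads(raw)
        print(f"{key}: {len(data)} entries - drag and drop will work")
    else:
        print(f"{key}: missing (e.g. stripped by an image host)")
```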