ComfyUI upscale methods (Reddit roundup)


Non-latent upscale method. Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like a face in the bottom right instead of a teddy bear.

Hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling.

Edit: also, I wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax).

I generally do the ReActor swap at a lower resolution, then upscale the whole image in very small steps with very, very small denoise amounts.

This is the "latent chooser" node - it works, but is slightly unreliable.

That said, Upscayl is SIGNIFICANTLY faster for me.

You have two different ways you can perform a "Hires Fix" natively in ComfyUI: a latent upscale, or an upscaling model. You can download the workflows over on the Prompting Pixels website.

Upscale x1.5 ~ x2 - no need for a model, this can be a cheap latent upscale. Then sample again with denoise=0.5.

b16-vae can't be paired with xformers right now, only with vanilla PyTorch - and not just regular PyTorch, it has to be a nightly build.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard. Go to the custom nodes installation section.

That is using an actual SD model to do the upscaling, which, afaik, doesn't yet exist in ComfyUI. But it does take longer to make. I haven't managed to reproduce this process.

Welcome to the unofficial ComfyUI subreddit.
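The "cheap latent upscale" mentioned above is just an interpolation of the latent tensor itself, which is why it blurs and needs a resample pass afterwards. A minimal numpy sketch of that "stretch this image" operation - `stretch_latent` is a hypothetical helper name for illustration, not a ComfyUI API:

```python
import numpy as np

def stretch_latent(latent, factor):
    """Nearest-neighbor 'stretch' of a latent tensor shaped (C, H, W)."""
    c, h, w = latent.shape
    nh, nw = int(h * factor), int(w * factor)
    rows = np.arange(nh) * h // nh   # map each new row back to an old row
    cols = np.arange(nw) * w // nw   # same for columns
    return latent[:, rows[:, None], cols[None, :]]

latent = np.zeros((4, 64, 64))       # SD-style latent for a 512x512 image
up = stretch_latent(latent, 1.5)     # latent for a 768x768 image
```

No new information is created by the stretch; the follow-up KSampler pass at ~0.5 denoise is what actually fills in detail.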
This specific image is the result of repeated upscaling, 512 -> 1024 -> 2048 -> 3072 -> 4096.

The latent upscale in ComfyUI is crude as hell, basically just a "stretch this image" type of upscale.

"Upscaling with model" and then denoising: I upscale with 2x ESRGAN and sample the 2048x2048 again, then upscale again with 4x ESRGAN.

Tutorial 7 - LoRA usage. I made a tiled sampling node for ComfyUI that I just wanted to briefly show off.

...or DaVinci Resolve to edit.

I would probably switch it off of "nearest-exact" and to a better upscaler model like UltraSharp or ESRGAN. The other methods will require a lot of your time.

In other UIs, one can upscale by any model (say, 4xSharp) and there is an additional control on how much that model will multiply (often a slider)…

For upscaling with img2img, you first upscale/crop the source image (optionally using a dedicated scaling model like UltraSharp or something), convert it to latent, and then run the KSampler on it.

Specifically, the padded image is sent to the ControlNet as pixels (the "image" input), and the padded image is also sent, VAE-encoded, to the sampler as the latent image.

Point the install path in the Automatic1111 settings to the ComfyUI folder inside your ComfyUI install folder, which is probably something like comfyui_portable\comfyUI or something like that.

I've played around with different upscale models in both applications, as well as settings.

The standard ESRGAN 4x is a good jack of all trades that doesn't come with a crazy performance cost, and if you're low on VRAM I would expect you're using some sort of tiled upscale solution like Ultimate SD Upscale, yeah?
I have a 4090 rig, and I can 4x the exact same images at least 30x faster than using ComfyUI workflows.

Each upscale adds details, but the bigger the upscale, the more blur - so do a few passes of x2 then x0.5; that usually produces a better result! Edit: nvm, misread your question. If it's only one upscale then downscale, it's probably because the downscale sharpens the blur added by the x4 upscale and generally produces a better result than just an x2.

This process is generally fast, with no parameters to tweak.

This is the image I created using ComfyUI, utilizing DreamShaperXL 1.0 Alpha + SD XL Refiner 1.0.

The 1.5 version was very basic, with some few tips and tricks, but I used that basic workflow and figured out myself how to add a LoRA, an upscale, and a bunch of other stuff using what I learned.

Then use those with the Upscale Using Model node.

You just have to use the "Upscale By" node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling by a model. This is what A1111 also does under the hood; you just have to do it explicitly in ComfyUI. For example, if you start with a 512x512 empty latent image, apply a 4x model, then apply "upscale by" 0.5 to get a 1024x1024 final image (512 * 4 * 0.5 = 1024).

Thus far, I've established my process, yielding impressive images.

So you end up testing other workflows and methods quite frequently, or else playing what could be a frustrating catch-up-with-the-Joneses game later.

Just curious if anyone knows of a workflow that could basically clean up/upscale screenshots from an animation from the late 90s (like Escaflowne or Rurouni Kenshin).

"Latent upscale" is an operation in latent space, and I don't know any way to use the model mentioned above in latent space.

While I'd personally like to generate rough sketches that I can use for a frame of reference when later drawing, we will work on creating full images that you could use to create entire working pages.
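The model-then-fractional-downscale arithmetic above can be sketched with Pillow. A plain Lanczos resize stands in for the 4x ESRGAN-style model here (Pillow obviously can't run the model itself; in ComfyUI that step would be the Upscale Image (using Model) node):

```python
from PIL import Image

img = Image.new("RGB", (512, 512), "gray")      # stand-in for the generated image

# stand-in for a 4x upscale model such as 4x-UltraSharp
up4x = img.resize((img.width * 4, img.height * 4), Image.LANCZOS)

# "Upscale By" with the bicubic method and a fractional value of 0.5
final = up4x.resize((up4x.width // 2, up4x.height // 2), Image.BICUBIC)

# net result: 512 * 4 * 0.5 = 1024 per side
```

The model does the detail work at 4x; the bicubic downscale just brings the resolution back to the size you actually wanted.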
ComfyUI: Ultimate Upscaler - upscale any image from Stable Diffusion, MidJourney, or a photo! (YouTube)

Greetings, community! As a newcomer to ComfyUI (though a seasoned A1111 user), I've been captivated by the potential of Comfy and have witnessed a significant surge in my workflow efficiency.

To upscale images using AI, see the Upscale Image Using Model node.

Hello, for more consistent faces I sample an image using the IPAdapter node (so that the sampled image has a similar face), then I latent upscale the image and use the ReActor node to map the same face used in the IPAdapter onto the latent-upscaled image (denoise 0.2, and resampling faces).

2x upscale using lineart ControlNet. The final, 3rd stage (8K) is the most time-consuming. Side-by-side comparison with the original.

Edit to add: when using tiled upscalers with the right settings, you can get enhanced details without latent upscaling.

I talk a bunch about some of the different upscale methods and show what I think is one of the better ones; I also explain how a LoRA can be used in a ComfyUI workflow.

...so a latent upscale is inherently lossy.

Hires fix with an add-detail LoRA.

A pixel upscale into a low-denoise 2nd sampler is not as clean as a latent upscale, but stays true to the original image for the most part.

Choose your platform and method of install and follow the instructions.

This is what I have so far (using the custom nodes to reduce the visual clutter).

That should stop it being distorted; you can also switch the upscale method to bilinear, as that may work a bit better. That's because of the model upscale.
Hello ComfyUI fam, I'm currently editing an animation and want to take the 1024x512 video frame sequence output I have and add detail (using the same 1.5 model) during or after the upscale.

Usually I use one of two workflows: "latent upscale" and then denoising, or "upscaling with model" and then denoising.

With ComfyUI you just download the portable zip file, unzip it, and get ComfyUI running instantly; even a kid can get ComfyUI installed.

That's because a latent upscale turns the base image into noise (blur).

Hello, A1111 user here, trying to make a transition to ComfyUI, or at least to learn of ways to use both.

Grab the image from your file folder and drag it onto the ComfyUI window.

From the ComfyUI_examples, there are two different 2-pass ("Hires fix") methods: one is latent scaling, the other is non-latent scaling. Now there's also a `PatchModelAddDownscale` node. Does it actually produce better results?

The ImageScale node is designed for resizing images to specific dimensions, offering a selection of upscale methods and the ability to crop the resized image.

Are there any other methods that achieve better/faster results?

When is ComfyUI's IS_CHANGED method called? I'm developing a custom node and wondering how often the IS_CHANGED method is called. I'd want to be able to change the input to a node and have that immediately take effect, without me needing to rerun the graph.

See how you like the results, and stop here if they're good enough.

Latent upscale is different from pixel upscale - it needs around 0.6 denoise and either CNet strength 0.5 or 0.9.

There's a bunch of different ones on the market, but those are pretty much the only ones I ever use.

If I feel I need to add detail, I'll do some image-blend stuff and advanced samplers to inject the old face into the process.

I was running some tests last night with SD 1.5 and was able to get some decent images by running my prompt through a sampler to get a decent form, then refining while doing an iterative upscale for 4-6 iterations with low noise and a bilinear model, negating the need for an advanced sampler to refine the image.

The t-shirt and face were created separately with the method and recombined.

This is not the case. I think I have a reasonable workflow that allows you to test your prompts and settings and then "flip a switch": put in the image numbers you want to upscale and rerun the workflow.

The only approach I've seen so far is using the Hires Fix node, where its latent input comes from "AI upscale > downscale image" nodes. There are also "face detailer" workflows for faces specifically.

The problem with simply upscaling them is that they are kind of "dirtier", so a simple upscale doesn't really clean them up around the lines, and the colors are a bit dimmer/darker. To combat it, you must increase the denoising value of any sampler you feed an upscale into.
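Dragging an image onto the window works because ComfyUI embeds the workflow graph as JSON in the PNG's text metadata; to my understanding the key is named "workflow" (with a separate "prompt" key for the API format), so verify against your own output files. A Pillow round-trip sketch with a hypothetical minimal graph, not a real ComfyUI workflow file:

```python
import io
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

workflow = {"nodes": [], "links": []}   # hypothetical stand-in graph

# write the JSON into a PNG text chunk, the way ComfyUI appears to
meta = PngInfo()
meta.add_text("workflow", json.dumps(workflow))
buf = io.BytesIO()
Image.new("RGB", (8, 8)).save(buf, format="PNG", pnginfo=meta)

# read it back, which is roughly what the drag-and-drop import does
buf.seek(0)
restored = json.loads(Image.open(buf).info["workflow"])
```

Because the graph (and therefore the seed and settings) rides along in the file, any PNG saved by ComfyUI can recreate the workflow that produced it.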
Looks like yeah - the upscale method + the denoising strength + the final size you want tends to go great lengths toward cleaning up faces.

Image upscale is less detailed, but more faithful to the image you upscale.

This method consists of a few steps: decode the samples into an image, upscale the image using an upscaling model, encode the image back into latent space, and perform the sampler pass.

Switch the toggle to upscale, make sure to enter the right CFG, make sure randomize is off, and press Queue.

A1111 is REALLY unstable compared to ComfyUI.

The 8K upscale stage takes up 70 gigs of RAM during VAE decode and tile reassembly.

Maybe it doesn't seem intuitive, but it's better to use a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale.

I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work).

This breaks the composition a little bit, because the mapped face is most of the time too clean or has slightly different lighting, etc.

What is the method you prefer for upscaling to 3x or (God forbid) 4x? I'm doing a lot of composites, and I need very high quality results. It's why you need at least 0.5+ denoise.

It abstracts the complexity of image upscaling and cropping, providing a straightforward interface for modifying image dimensions according to user-defined parameters.

I would start here and compare different upscalers. The images are too blurry and lack detail - it's like upscaling a regular image with traditional methods.
Ideally, it would look like some sort of slider that lets me control the number of lines to increase by, depending on how many times the artwork was upscaled.

Hi guys, has anyone managed to implement Krea AI or Magnific AI in ComfyUI? I've seen the web source code for Krea AI, and they use SD 1.5.

I had seen a tutorial method a while back that would allow you to upscale your image by grid areas, potentially letting you specify the "desired grid size" on the output of an upscale and how many grids (rows and columns) you wanted.

I've been using Hires fix and other upscaling methods like the Loopback Scaler script and SD Upscale.

The software creates a Load Image node automatically, with the copied image.

This means that your prompt (a.k.a. positive image conditioning) is no longer a simple text description of what should be contained in the total area of the image; it is now a specific description that applies to the area defined by the coordinates starting from x:0px, y:320px, to x:768px, y:…

Actually no, I found his approach better for me. 2 options here.

NICE DOGGY - dusting off my method again, as it still seems to give me more control than AnimateDiff or Pika/Gen-2, etc.

Do a 1.5x upscale back over the source image and upscale again to 2x; look up the "latent upscale method" as well, as this performs a staggered upscale to your desired resolution in one workflow queue.

In my case, for example, I make my own upscale method in ComfyUI. Sure, it comes up with new details, which is fine - even beneficial for the 2nd pass in a t2i process, since the miniature 1st pass often has some issues due to imperfections.

In this ComfyUI tutorial we look at my favorite upscaler, the Ultimate SD Upscaler.

Along with the normal image preview, the other methods are: latent upscaled 2x; Hires fix 2x (two-pass image); image upscaled 4x using the nearest-exact upscale method.

I switched to ComfyUI not too long ago, but am falling more and more in love.

Please share your tips, tricks, and workflows for using this software to create your AI art.
To start enhancing image quality with ComfyUI, you'll first need to add the Ultimate SD Upscale custom node.

The upscale quality is mediocre, to say the least. The downside is that it takes a very long time.

I want to replicate the "upscale" feature inside "Extras" in A1111, where you can select a model and the final size of the image.

2x upscale using Ultimate SD Upscale and a tile ControlNet.

Latent upscale introduces noise, as I said in other posts here. You can find the node here.

If you want more details, latent upscale is better, and of course noise injection will let more details in (you need noise in order to diffuse into details). Try an immediate VAEDecode after a latent upscale to see what I mean.

I need the bf16 VAE because I often use mixed-diff upscaling, and with bf16 the VAE encodes/decodes much faster.

Upscaled by the UltraSharp 4x upscaler.

It will replicate the image's workflow and seed.

I have a much lighter assembly, without detailers, but it gives a better result if you compare your resulting image on comfyworkflows.com; my result is about the same.

But I probably wouldn't upscale by 4x at all if fidelity is important.

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a 2-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned.

Ty, I will try this.

Hi all! Does anyone know if there is a way to load a batch of images from my drive into Comfy for an image-to-image upscale? I have scoured the net but haven't found anything.

This is with a 7950X CPU, v555 drivers, Python 3.9 and a torch 2.4 nightly.

If you want a fully latent upscale, make sure the second sampler after your latent upscale is above 0.5 denoise.

Appreciate you just looking into it.
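For the batch question above: outside ComfyUI, grabbing a whole folder of frames is a few lines of Pillow. A sketch with a hypothetical helper name (ComfyUI itself would need a batch-load custom node for the same job):

```python
import tempfile
from pathlib import Path
from PIL import Image

def load_image_batch(folder, exts=(".png", ".jpg", ".jpeg", ".webp")):
    """Load every image in a folder, sorted by filename."""
    paths = sorted(p for p in Path(folder).iterdir() if p.suffix.lower() in exts)
    return [Image.open(p).convert("RGB") for p in paths]

# demo on a throwaway folder with two dummy frames
with tempfile.TemporaryDirectory() as d:
    Image.new("RGB", (64, 64)).save(Path(d) / "frame_001.png")
    Image.new("RGB", (64, 64)).save(Path(d) / "frame_002.png")
    batch = load_image_batch(d)
```

Sorting by filename keeps video frame sequences in order before they're fed to an img2img upscale loop.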
Hires fix and Loopback Scaler either don't produce the desired output, meaning they change too much about the image (especially faces), or they don't increase the details enough, which causes the end result to look too smooth (sometimes losing details) or even blurry and smeary.

I ask because after Kohya's deep-shrink fix became available, I haven't done any upscaling at all in A1111 or Comfy.

Click on Install Models in the ComfyUI Manager menu.

Adding in the Iterative Mixing KSampler from the early work on DemoFusion produces far more spatially consistent results, as shown in the second image.

I started to use ComfyUI/SD locally a few days ago and wanted to know how to get the best upscaling results.

It turns out lovely results, but I'm finding that when I get to the upscale stage, the face changes to something very similar every time.

Ultimate SD Upscale is good and plays nice with lower-end GFX cards; SUPIR is great but very resource-intensive.

After borrowing many ideas, and learning ComfyUI.

I'm trying to find a way of upscaling the SD video up from its 1024x576.

There are two kinds of upscalers: interpolation-based upscalers (the conventional type: Lanczos, etc.) and AI upscalers (neural-network based: ESRGAN). ComfyUI can use both. For a workflow that uses an AI upscaler, ComfyUI's examples include an ESRGAN one.

Under 4K: generate at base SDXL size with extras like character models or ControlNets -> face/hand/manual-area inpainting with differential diffusion -> UltraSharp 4x -> unsampler -> second KSampler with a mixture of inpaint and tile ControlNet (I found that using only the tile ControlNet blurs the image).

Latent upscales require the second sampler to be set at over 0.4 on the denoiser, due to the fact that upscaling the latent basically grows a bunch of dark space between each pixel, unlike an image upscale, which adds more pixels.

More consistency, higher resolutions, and much longer videos too.

I've been wondering what methods people use to upscale all types of images, and which upscalers to use. So far I've been just using Latent (bicubic antialiased) for Hires fix, then going to img2img and using ControlNet + the Ultimate SD Upscale script and the 4x UltraSharp upscaler.

Both are quick and dirty tutorials without too much rambling; no workflows included, because of how basic they are. The prompt for the first couple, for example, is this:

Upscale in smaller jumps - take 2 steps to reach double the resolution.

No matter what, UPSCAYL is a speed demon in comparison.

My images are noisy (just like high-ISO noise) after upscaling (iterative upscale method).

Expanding on my temporal consistency method for a 30-second, 2048x4096-pixel total-override animation.

My result is about the same. Is there any node / possibility for an RGBA image (preserving the alpha channel and the related exact transparency) with iterative upscale methods? I tried Ultimate SD Upscale, but it has a 3-channel input and refuses the alpha channel; the VAE Encode for Inpainting node (which has a mask input) also refuses 4-channel input.

I have yet to find an upscaler that can outperform the proteus model.
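The "smaller jumps" advice above - two steps to reach double the resolution - works out to a per-step factor of the square root of the total factor. A small helper (hypothetical name, just for planning the intermediate sizes):

```python
def upscale_sizes(start, target, steps):
    """Intermediate per-side resolutions for an N-step staggered upscale."""
    factor = (target / start) ** (1.0 / steps)   # e.g. sqrt(2) for 2 steps to 2x
    sizes = [round(start * factor ** i) for i in range(1, steps + 1)]
    sizes[-1] = target                           # land exactly on the target
    return sizes

two_jumps = upscale_sizes(512, 1024, 2)   # doubling in two sqrt(2) steps
one_jump = upscale_sizes(512, 2048, 1)    # a single 4x jump, for comparison
```

Each intermediate size would get its own resample pass at a small denoise, which is what keeps the composition from drifting.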
You don't need that many steps. From there you can use a 4x upscale model and run the sampler again at a low denoise if you want higher resolution.

This is a series, and I have a feeling there is a method and a direction these tutorials are taking.

Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI for Windows, RunPod & Kaggle" tutorial and web app.

What are the pros and cons of using the Kohya deep shrink over using 2 KSamplers to upscale? I find the Kohya method significantly slower, since the whole pass is now done at high res instead of only partially done at high res.

With all that in mind, for regular use I prefer the last method for realistic images.

The steps are as follows: start by installing the drivers or kernel listed (or newer) on the Installation page of IPEX linked above, for Windows and Linux, if needed.

I gave up on latent upscale. Latent upscale looks much more detailed, but gets rid of the detail of the original image.

I usually use 4x-UltraSharp for realistic videos and 4x-AnimeSharp for anime videos.

An alternative method is: make sure you are using the KSampler (Efficient) version, or another sampler node that has the "sampler state" setting, for the first-pass (low-resolution) sample.
And you may need to do some fiddling to get certain models to work, but copying them over works if you are super-duper lazy.

Personally, in my opinion, your setup is heavily overloaded with stages that are incomprehensible to me. Look at this workflow:

New to ComfyUI, so not an expert. My ComfyUI workflow was created to solve that.

It is indeed very resource-hungry.

Search for "ultimate" in the search bar to find the Ultimate SD Upscale node.

With it, I either can't get rid of visible seams, or the image is too constrained by the low denoise and so lacks detail. Instead, I use a Tiled KSampler with 0.4 and tiles of 768x768.

Hello, this is Hakana-dori. Last time I explained the clarity-upscale method using clarity-upscaler in its A1111 and Forge versions; this time it's the ComfyUI version. "clarity-upscaler" isn't a single extension - here it combines various functions, such as ControlNet and LoRA, to work.

After 2 days of testing, I found Ultimate SD Upscale to be detrimental here.

Making this in ComfyUI: for now you can crop the image into parts with custom nodes like ImageCrop or ImageCrop+ (and btw, that is the same as Ultimate SD Upscale, right? However, by splitting it first you could theoretically handle this better, IDK).

5 - Injecting noise.

The TL;DR version is this: it makes an image from your prompt without a LoRA, runs it through ControlNet, and uses that to make a new image with the LoRA.

I liked the ability in MJ to choose an image from the batch and upscale just that image.

I'm looking for a solution that will increase the number of lines, between the existing lines, when I upscale the image to a larger size.

Tutorial 6 - upscaling.

Sampling methods are entirely your own choice; some can have different effects when upscaling, because some are better at removing latent noise than others, and some produce artifacts.
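Cropping an image into a grid of parts, as the ImageCrop-style nodes above do, can be sketched with Pillow. Note this is a naive split for illustration - real tiled upscalers like Ultimate SD Upscale overlap and blend tiles to hide the seams this version would leave:

```python
from PIL import Image

def tile_image(img, tile_w, tile_h):
    """Crop an image into a grid of non-overlapping tiles, row by row."""
    tiles = []
    for top in range(0, img.height, tile_h):
        for left in range(0, img.width, tile_w):
            box = (left, top,
                   min(left + tile_w, img.width),
                   min(top + tile_h, img.height))
            tiles.append(img.crop(box))
    return tiles

tiles = tile_image(Image.new("RGB", (1024, 1024)), 512, 512)  # 2x2 grid
```

Each tile can then be upscaled/resampled on its own (keeping VRAM low) before being pasted back into place.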
Latent quality is better, but the final image deviates significantly from the initial generation.

And when purely upscaling, the best upscaler is called LDSR. I tried StableSR, Kohya deep shrink, and a bunch of other methods.

The issue I think people run into is that they think the latent upscale is the same as the Latent Upscale from Auto1111.

ATM I start the first sampling at 512x512, upscale with 4x ESRGAN, downscale the image to 1024x1024, and sample it again, like the docs tell you.

Here's how you can do it: launch the ComfyUI Manager.

I've struggled with Hires fix. I have applied optical flow to the sequence to smooth out the appearance, but this results in a loss of definition in every frame.

You can then use that image in whatever workflow you built.

I'm using a workflow that is, in short, SDXL >> ImageUpscaleWithModel (using a 1.5 model) >> FaceDetailer.

I just generate my base image at 2048x2048 or higher, and if I need to upscale the image, I run it through Topaz Video AI to 4K and up.

If you use Iterative Upscale, it might be better to approach it by adding noise, using techniques like noise injection or an unsampler hook.

Also, both have a denoise value that drastically changes the result.

The goal of this tutorial is to give an overview of a method I'm working on to simplify the process of creating manga or comics.

Here it is, the method I was searching for.

The issue is likely caused by a quirk in the way MultiAreaConditioning works: its sizes are defined in pixels.

If I had chosen not to use the upscale-with-model step, I would have considered using the Ultimate SD Upscale method instead.

PLANET OF THE APES - Stable Diffusion temporal consistency.
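Noise injection before a low-denoise resample, as suggested above, just means adding scaled Gaussian noise to the latent so the sampler has something to diffuse into detail. A numpy sketch (hypothetical helper, not the actual node's implementation):

```python
import numpy as np

def inject_noise(latent, strength, seed=0):
    """Add scaled Gaussian noise to a latent before a low-denoise resample pass."""
    rng = np.random.default_rng(seed)
    return latent + strength * rng.standard_normal(latent.shape)

latent = np.zeros((4, 64, 64))          # SD-style latent stand-in
noisy = inject_noise(latent, 0.1)       # mild injection; strength is a free knob
```

Too little strength and the resample changes nothing; too much and it behaves like a high-denoise pass and drifts from the original composition.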
With this method, you can upscale the image while also preserving the style of the model.

Installation of A1111 is complicated and annoying to set up; most people would have to watch YT tutorials just to get it installed properly.

Upscale to 2x and 4x in multi-steps, both with and without a sampler (all images are saved). Multiple LoRAs can be added and easily turned on/off (currently configured for up to three LoRAs, but it can easily take more). Details and bad-hands LoRAs loaded.

I use it with DreamShaperXL mostly, and it works like a charm.

Edit: oh, and also I used an upscale method that scales it up incrementally in 3 different resolution steps; it works to keep the basic generated image shape and not add too much unneeded detail. Both of these are of similar speed.

Click on an EMPTY SPACE in your ComfyUI workflow… and Ctrl+V.

Search for "upscale" and click Install for the models you want.

You either upscale in pixel space first and then do a low-denoise 2nd pass, or you upscale in latent space and do a high-denoise 2nd pass.
