ComfyUI upscale models: a Reddit discussion digest
I have been using 4x-UltraSharp for as long as I can remember, but I am wondering what everyone else is using, and for which use cases. I tried searching the subreddit, but the other posts are from earlier this year or from 2022, so I am looking for updated information. More generally, I would like some advice on how to upscale properly, to squeeze the maximum quality out of the model: the resolution I get is okay, but if possible I would like something better. (There are also "face detailer" workflows for faces specifically.)

Some background first. ComfyUI lets you construct an image generation workflow by chaining different blocks (called nodes) together. The Upscale Image (using Model) node upscales pixel images using a model loaded with the Load Upscale Model node. You can choose from a variety of 1x, 2x, 4x and 8x models on https://openmodeldb.info; a popular one is ESRGAN 4x, and other common picks are 4x_NMKD-Siax_200k.pth and 4x_foolhardy_Remacri.pth. I use the Siax models, Real-ESRGAN or Remacri depending on whether I need to go fast or want an intermediate step to finish with something like Zeroscope. (You may also want to try an upscale model followed by a latent upscale, but that is just my personal preference.) Remember to add your models, VAE, LoRAs and so on to the corresponding Comfy folders, as discussed in the ComfyUI manual installation; you may need some fiddling to get certain models to work, but simply copying them over works if you are lazy. If ComfyUI complains that it cannot find an upscale model, my guess is that you downloaded a workflow from somewhere and the person who created it has since renamed the model file, which is why your ComfyUI cannot find it. The solution: click the node that calls the upscale model and pick one you actually have.

Here is a workflow that I use currently with Ultimate SD Upscale, after borrowing many ideas and learning ComfyUI (or maybe others might be able to offer further advice): SD 1.5 combined with ControlNet tile and the Foolhardy Remacri upscale model. In the saved workflow it is set to 4, with 10 steps (Turbo model), which is like a 60% denoise; these values can be changed via the "Downsample" value, which has its own documentation in the workflow itself. That said, after 2 days of testing I found Ultimate SD Upscale to be detrimental in my case: it added nothing. With it, I either can't get rid of visible seams, or the image is so constrained by the low denoise that it lacks detail. Instead, I use a Tiled KSampler at 0.6 denoise with either CNet strength 0.5 (euler, sgm_uniform) or CNet strength 0.9 with a lower end_percent.

On generations: indeed SDXL is better, but it is not yet mature, as models for it are only just appearing, and the same goes for LoRAs. Relatedly (Jan 13, 2024), looking through the ComfyUI nodes I noticed a new one called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (the SD 4X Upscale Model). Note also that there is no tiling in the default A1111 hires fix.

To control the output size, use the "Upscale Image By" node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling by a model. For example, if you start with a 512x512 latent empty image and apply a 4x model followed by "upscale by" 0.5, you get a 1024x1024 final image (512 x 4 x 0.5 = 1024).
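That fractional "upscale by" step is nothing more than a bicubic resize of the model's fixed 4x output. Here is the same operation outside ComfyUI, as a minimal PyTorch sketch (model_4x is a placeholder for any loaded 4x upscaler module, not a real ComfyUI API):

```python
import torch
import torch.nn.functional as F

def upscale_2x_via_4x_model(image: torch.Tensor, model_4x: torch.nn.Module) -> torch.Tensor:
    """image: (B, C, H, W) floats in [0, 1]; returns a net 2x enlargement."""
    with torch.no_grad():
        up4 = model_4x(image)  # fixed-multiple model output: (B, C, 4H, 4W)
    # The "upscale by 0.5, bicubic" step: brings the 4x result down to a net 2x
    return F.interpolate(up4, scale_factor=0.5, mode="bicubic", align_corners=False)
```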
A few approaches from the thread. I do a first pass at low-res (say, 512x512), then I use the IterativeUpscale custom node for the rest; if you let it get creative (i.e. a higher denoise), it adds appropriate details. Honestly, you can probably just swap out the model and put in the turbo scheduler. I don't think LoRAs are working properly yet, but you can feed the images into a proper SDXL model to touch up during generation (slower, and to be honest it doesn't save time over just using a normal SDXL model to begin with), or generate a large amount of stuff to pick and choose from.

Conceptually, keep two operations apart. "Upscaling with model" is an operation on normal images, where we can use a dedicated model such as 4x_NMKD-Siax_200k.pth or 4x_foolhardy_Remacri.pth. "Latent upscale" is an operation in latent space, and I don't know of any way to use such a pixel-space model there; latent upscaling essentially turns the base image into noise (blur), which is why it changes the picture more. In other UIs, one can upscale by any model (say, 4xSharp), and there is an additional control on how much that model will multiply (often a slider from 1 to 4 or more). I'm not entirely sure what Ultimate SD Upscale does internally, so I'll answer generally as to how I do upscales. An alternative method: make sure you are using the KSampler (Efficient) version, or another sampler node that has the "sampler state" setting, for the first (low resolution) pass.

For faces: in A1111, for comparison, I drop the ReActor output image into the img2img tab, keep the same latent size, use a tile ControlNet model and choose the Ultimate SD Upscale script, scaling by e.g. 2; with a denoise setting of 0.25 I get a good blending of the face without changing the image too much. (Note that ReActor's FACE_MODEL output can't be used for generations/KSampler; it is still only useful for swapping.)

May 5, 2024: Hello, this is Hakanadori. Last time I showed how to do clarity upscaling with clarity-upscaler in the A1111 and Forge versions; this time it is the ComfyUI version. clarity-upscaler is not a single extension; it works by combining ControlNet, LoRA and various other functions.

I tried the same main prompt as last night, but this time it all blew up in my face (edit: you could try the workflow and see it for yourself). Credit to Sytan's SDXL workflow, which I reverse engineered, mostly because I'm new to ComfyUI and wanted to figure it all out; I wanted to know what difference the settings make, and they do! Model suggestions: for SD 1.5 I'd go for Photon, RealisticVision or epiCRealism. And if you are going for fine details, don't upscale in 1024x1024 tiles on an SD 1.5 model, unless the model is specifically trained on such large sizes.

Finally, the recurring request: I created a workflow with Comfy for upscaling images, and I want to upscale my image with a model and then select its final size. For example, I can load an image, select a model (4x-UltraSharp, for example), and select the final resolution (from 1024 to 1500, for example). There's "Latent Upscale By", but I don't want to upscale the latent image, and ComfyUI's upscale-with-model node doesn't have an output size option like other upscale nodes, so one has to manually downscale the image to the appropriate size.
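Picking the final size therefore comes down to the downscale factor that follows the fixed-multiple model. A tiny sketch (the function name is mine, for illustration only):

```python
def post_model_scale(src_px: int, target_px: int, model_scale: int = 4) -> float:
    """Factor for an 'Upscale Image By' node placed after a fixed-multiple upscaler."""
    return target_px / (src_px * model_scale)

# 1024px source, 4x model, 1500px target: scale the 4096px result by ~0.366
print(post_model_scale(1024, 1500))  # 0.3662109375
```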
Now for the node mechanics. ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own. Here is an example of how to use upscale models like ESRGAN: put them in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to apply them; there is an example image you can load in ComfyUI to get the whole workflow. You can use folders too, e.g. cascade/clip_model.safetensors and 1.5/clip_model_somemodel.safetensors; it makes it easier to remember which one to choose when you're stringing workflows together.

I love to go with an SDXL model for the initial image and with a good SD 1.5 model for the diffusion after scaling. I think I have a reasonable workflow that allows you to test your prompts and settings and then "flip a switch", put in the image numbers you want to upscale, and rerun the workflow. For video, put the models in ComfyUI\models\upscale_models: the 1x models can be used for refining the video first, and I usually use 4x-UltraSharp for realistic videos and 4x-AnimeSharp for anime videos. One shared workflow upscales to 2x and 4x in multi-steps, both with and without a sampler (all images are saved), supports multiple LoRAs that can easily be turned on and off (currently configured for up to three, but more can be added), and does its final upscaling via UltimateSDUpscale and ControlNet in roughly 7 minutes. It requires some custom nodes (listed in one of the info boxes and available via the ComfyUI Manager) and models (also listed, with links), and, especially with the upscaler activated, may not work on devices with limited VRAM.

Some open questions and side notes. The LCM/Turbo-style LoRA has been trained to make any model produce higher-quality images at very low step counts like 4 or 5; it targets SD 1.5 models and can be applied in Automatic easily, but the restore functionality that adds detail doesn't work well with lightning/turbo models, so you could also try a standard checkpoint with, say, 13 and 30 steps. On model sharing: I've been using Stability Matrix and also installed ComfyUI portable, but I'm facing an issue with sharing the model folder; all the models are located in M:\AI_Tools\StabilityMatrix-win-x64\Data\Models, and the animatediff_models and clip_vision folders are placed in M:\AI_Tools\StabilityMatrix-win-x64\Data\Packages\ComfyUI\models. These same models work in A1111, but I prefer the workflow of ComfyUI. On SUPIR: I am now trying different start-up parameters for ComfyUI, like disabling smart memory, because I don't understand why Ultimate SD Upscale can manage a resolution in the same configuration that SUPIR cannot; maybe somewhere I will pin down the issue. On ControlNet tiles: I found a tile model but could not figure it out, as lllite seems to require the input image to match the output, so I am unsure how it works for scaling with tile. And does anyone know if there's an Upscale Model Blend node, like with A1111? Being able to get a mix of models in A1111 is great, where two models bring something different to the party; it depends what you are looking for. Thank you, community!

Putting the pieces together, your workflow should look like this: KSampler (1) -> VAE Decode -> Upscale Image (using Model) -> Upscale Image By (to downscale the 4x result to the desired size) -> VAE Encode -> KSampler (2). The explicit downscale is there because the upscale model of choice can only output a 4x image when you may only want 2x; the effect is similar to SwinIR, which brings out a lot of detail in the image. In practice, I generate an image that I like, then mute the first KSampler, unmute the Ultimate SD Upscaler, and upscale from that. Reusing the same seed is probably not necessary and can cause bad artifacting through the "burn-in" problem when you stack same-seed samplers.
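That chain can be written down in ComfyUI's API (JSON) format and queued against the local server. A sketch, assuming the default server address and that the named model files exist in your folders (the checkpoint filename is a placeholder; the upscaler is one of the models mentioned above):

```python
import json
import urllib.request

# Node graph in ComfyUI API format: KSampler -> VAE Decode -> upscale with model
# -> bicubic 0.5 downscale -> VAE Encode -> low-denoise KSampler -> save.
g = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "juggernautXL.safetensors"}},  # assumed filename
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "girl with flowers, detailed", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "blurry, artifacts", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "seed": 42, "steps": 25, "cfg": 6.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "denoise": 1.0}},
    "6": {"class_type": "VAEDecode", "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "UpscaleModelLoader",
          "inputs": {"model_name": "4x_NMKD-Siax_200k.pth"}},
    "8": {"class_type": "ImageUpscaleWithModel",
          "inputs": {"upscale_model": ["7", 0], "image": ["6", 0]}},
    "9": {"class_type": "ImageScaleBy",  # bicubic 0.5 => net 2x overall
          "inputs": {"image": ["8", 0], "upscale_method": "bicubic", "scale_by": 0.5}},
    "10": {"class_type": "VAEEncode", "inputs": {"pixels": ["9", 0], "vae": ["1", 2]}},
    "11": {"class_type": "KSampler",   # second pass: low denoise keeps composition
           "inputs": {"model": ["1", 0], "seed": 42, "steps": 20, "cfg": 6.0,
                      "sampler_name": "euler", "scheduler": "normal",
                      "positive": ["2", 0], "negative": ["3", 0],
                      "latent_image": ["10", 0], "denoise": 0.4}},
    "12": {"class_type": "VAEDecode", "inputs": {"samples": ["11", 0], "vae": ["1", 2]}},
    "13": {"class_type": "SaveImage",
           "inputs": {"images": ["12", 0], "filename_prefix": "upscaled"}},
}

req = urllib.request.Request("http://127.0.0.1:8188/prompt",
                             data=json.dumps({"prompt": g}).encode("utf-8"),
                             headers={"Content-Type": "application/json"})
urllib.request.urlopen(req)
```

The second KSampler's denoise is the main lever here, per the thread: lower values stay faithful to the composition, higher values invent detail.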
On hires fix and staged upscaling (Mar 22, 2024): you have two different ways to perform a "Hires Fix" natively in ComfyUI, a latent upscale or an upscaling model, and you can download example workflows over on the Prompting Pixels website. Both have a denoise value that drastically changes the result, and beyond that it might require some fiddling around to find the best settings. A cheap recipe: upscale x1.5 ~ x2 with no model at all (it can be a cheap latent upscale), sample again at denoise=0.5 (you don't need that many steps), and from there you can use a 4x upscale model and run the sampler again at low denoise if you want higher resolution. For the best results, diffuse again with a low denoise, tiled, or via Ultimate Upscale (without scaling!). I did some testing of the KSampler schedulers used during an upscale pass in ComfyUI; messing around with upscale-by-model is pointless for hires fix, and I personally gave up on latent upscale, though from what someone else stated it comes down to use case. If you use Iterative Upscale, it might be better to approach it by adding noise, using techniques like noise injection or an unsampler hook. And if you want to use RealESRGAN_x4plus_anime_6B, you need to work in pixel space and forget any latent upscale (thank you for the help!). Without a diffusion pass the images are too blurry and lack detail; it's like upscaling any regular image with traditional methods.

Usually I use two of my own workflows. I've so far achieved this with the Ultimate SD Upscale node and the 4x-Ultramix_restore upscale model, and if I feel I need to add detail, I'll do some image-blend stuff and advanced samplers to inject the old face back into the process. What can I do to fix these issues? If it helps, I'm on Python 3.10 and am running an RTX 3090 with the Sytan workflow. The new upscale workflow also runs very efficiently: it can do a 1.5x upscale on 8GB-VRAM NVIDIA GPUs without any major VRAM issues, and go as high as 2.5x on 10GB NVIDIA GPUs. And when purely upscaling, the best upscaler is called LDSR; the downside is that it takes a very long time. It will be interesting to see LDSR ported to ComfyUI, or any other powerful upscaler; I've always wanted to integrate one myself.

Housekeeping: to get models, click Install Models in the ComfyUI Manager menu (Jan 5, 2024), search for "upscale", and click Install on the ones you want. DirectML (AMD cards on Windows): pip install torch-directml, then launch ComfyUI with python main.py --directml.

On tiles: upscaling on larger tiles will be less detailed and more blurry, and you will need more denoise, which in turn will start altering the result too much. You might also prompt the model differently when it is rendering the smaller patches, removing the "kangaroo" from the prompt entirely, for example, since no single tile sees the whole subject.
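Tiled samplers work by splitting the image into overlapping patches so that each diffusion pass stays near the model's native resolution. A minimal sketch of just the tiling geometry (helper and parameter names are mine):

```python
from PIL import Image

def iter_tiles(img: Image.Image, tile: int = 512, overlap: int = 64):
    """Yield (box, crop) pairs covering the image with overlapping tiles."""
    step = tile - overlap
    xs = list(range(0, max(img.width - tile, 0) + 1, step))
    ys = list(range(0, max(img.height - tile, 0) + 1, step))
    if xs[-1] + tile < img.width:   # make the last column reach the right edge
        xs.append(img.width - tile)
    if ys[-1] + tile < img.height:  # make the last row reach the bottom edge
        ys.append(img.height - tile)
    for y in ys:
        for x in xs:
            box = (x, y, min(x + tile, img.width), min(y + tile, img.height))
            yield box, img.crop(box)

# Each crop gets its own low-denoise pass, then is blended back with feathered
# overlaps to hide the seams; that blending is the part a Tiled KSampler handles.
```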
The best method, as said below, is to upscale the image with a model (then downscale, if necessary, to the desired size, because most upscalers do x4 and that is often too big to process), then send it back to VAE Encode and sample it again. I rarely use upscale-by-model on its own because of the odd artifacts you can get; in ComfyUI we can break that approach into components and make adjustments at each part to find workflows that get rid of the artifacts. As a general rule you want to be rendering at the native size of the model you're using, so tile sizes are probably better set to 512px for SD 1.5-based models and 1024px for SDXL. Where a 2x upscale at 30 steps took me ~2 minutes, a 4x upscale took 15, and that was with tiling, so my VRAM usage was moderate in all cases.

Watch the effective denoise, too: the hires script overrides the KSampler's denoise, so you are actually using 0.56 denoise, which is quite high and gives it just enough freedom to totally screw up your image. After generating my images I usually do hires fix, but since I'm using XL I skip that and go straight to img2img and do an SD Upscale by 2x. For the samplers I've used dpmpp_2a (as this works with the Turbo model) but unsample with dpmpp_2m; for me this gives the best results. I have played around with the low-step fast models, but they all require a very low CFG as well, so it's difficult to make them follow prompts strongly, especially when you want to go against the model's natural bias. I also tried the llite custom nodes with lllite models and was impressed; good for depth and OpenPose, so far so good.

On checkpoints: have a look at this workflow, as I am looking for good upscaler models to be used for SDXL in ComfyUI. The realistic model that worked best for me is JuggernautXL (even the base 1024x1024 images were coming out nicely), as well as other XL models. That said, SD 1.5 is not necessarily an inferior model: it is in a mature state where almost all the models and LoRAs are based on it, so you get better quality and speed with it. My custom nodes are Impact Pack for wildcards, rgthree because it's the shit, and Ultimate SD Upscale. I want your opinion on the upscale; you can download both images from my Google Drive, since I cannot upload them here (they are both 500MB-700MB). Thanks for all your comments.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner, so I can only make a stab at some of these. Today I loaded the Searge SDXL workflow, as so many people have suggested, and I am just absolutely lost: it tells me that I need to load a refiner_model, a vae_model, a main_upscale_model, a support_upscale_model and a lora_model, and I am curious both which nodes are the best for this and which models. Would you mind providing even the briefest explanation of these? I feel like there is so much improving and so much new functionality being added to SD, but when new tools become available, the explanation of what they do is nonexistent. For reference, one popular SDXL workflow advertises: Text2Image with the SDXL 1.0 Base and Refiner models, an automatic calculation of the steps required for both the Base and the Refiner model, a quick selector for the right image width/height combinations based on the SDXL training set, and an XY Plot function that works with the Refiner. Separately: any guide on creating comic books with SD? I'm interested in developing a workflow that maintains character, scene and style consistency.
Comparisons with other tools: for photo upscales I'm a sucker for 1:1 matches, so I'm using Topaz. I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKD's, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)) and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work); the result here is the same as with the newest Topaz. In those other UIs I can use my favorite upscaler (like NMKD's 4xSuperscalers) without being forced to have it only multiply by 4x. You can also do latent upscales, and you can upscale a favorite frame with a different model to increase detail while keeping the overall structure of the frame. I don't bother going over 4K usually, though; you get diminishing returns on render times with only 8GB of VRAM. From what I've generated so far, the model upscale handles edges slightly better than the Ultimate Upscale. Does anyone have any suggestions; would it be better to do an iterative upscale?

Some shared workflows. A few examples of my ComfyUI workflow make very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060); I get good results using stepped upscalers, the Ultimate SD Upscaler and the like. Here are details on another workflow I created: it is an img2img method where I use the BLIP Model Loader from WAS to set the positive caption, with a custom image resizer that ensures the input image matches the output dimensions. Another generates an SD 1.5 image and upscales it to 4x the original resolution (512x512 to 2048x2048) using Upscale with Model, Tile ControlNet, Tiled KSampler, Tiled VAE Decode and colour matching. The idea behind yet another is simple: use the refiner as the model for upscaling instead of a 1.5 model, applied after the refined image is upscaled and encoded into a latent. And one of the images here was generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI for Windows, RunPod & Kaggle" tutorial and web app.

To install the missing pieces, download the Manager first and put it into the folder inside ComfyUI called custom_nodes, then restart ComfyUI. You should see a new button at the end of the left tab; click it, then click "missing custom nodes" and install what is listed; after that, restart ComfyUI once more and it should work.

On the Ultimate Upscale memory problems: I believe the problem comes from the interaction between the way Comfy's memory management loads checkpoint models (note that this issue still happens if smart memory is disabled) and Ultimate Upscale bypassing torch's garbage collection, because it is basically a janky wrapper for an A1111 extension. Relatedly, AUTOMATIC1111 has finally fixed the high-VRAM issue in pre-release version 1.6.0-RC; it takes only 7.5GB of VRAM even while swapping in the refiner. Use the --medvram-sdxl flag when starting.

For reference, the official documentation for the Upscale Image (using Model) node is terse. Inputs: upscale_model (the model used for upscaling) and image (the pixel images to be upscaled). Output: IMAGE (the upscaled images). There is an example usage with a workflow image in the docs (Aug 29, 2024, "Upscale Model Examples"), and some commonly used blocks are loading a checkpoint model, entering a prompt, specifying a sampler, and so on. If you want a better grounding in making your own ComfyUI systems, consider checking out my tutorials.
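What the ImageUpscaleWithModel node does under the hood is roughly: load the .pth file, run it over a float image tensor, and clamp the result. A standalone sketch using the spandrel library (which newer ComfyUI builds use to load upscale models; the exact calls here are from memory, so treat them as assumptions):

```python
import torch
from spandrel import ImageModelDescriptor, ModelLoader

def run_upscaler(pth_path: str, image: torch.Tensor) -> torch.Tensor:
    """image: (1, 3, H, W) floats in [0, 1]; returns the model-scaled image."""
    desc = ModelLoader().load_from_file(pth_path)
    assert isinstance(desc, ImageModelDescriptor)  # single-image upscaler
    desc.cuda().eval()
    with torch.no_grad():
        out = desc(image.cuda())  # (1, 3, H * desc.scale, W * desc.scale)
    return out.clamp(0, 1).cpu()
```

For large inputs you would run this per tile (as in the sketch above) rather than on the whole image, which is exactly why the node can eat VRAM on 4x models.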
Stepping back (Jul 6, 2024): what is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion. If you run it alongside Automatic1111, point the install path in the Automatic1111 settings to the comfyUI folder inside your ComfyUI install folder, which is probably something like comfyui_portable\comfyUI.

Curious if anyone knows the most modern, best ComfyUI solutions for these problems? Upscaling: increasing the resolution and the sharpness at the same time. Detailing/Refiner: keeping the same resolution but re-rendering it with a neural network to get a sharper, clearer image. Clearing up blurry images has its practical use, but most people are looking for something like Magnific, where it actually fixes all the smudges and messy details of the SD-generated images and at the same time produces very clean and sharp results; this is why I want to add ComfyUI support for this technique.

On Ultimate SD Upscale: you can use it on any picture; you will need ComfyUI_UltimateSDUpscale. If you'd like to load a LoRA, you need to connect "MODEL" and "CLIP" to the node, and after that all the nodes that require these two wires should be connected with the ones from Load LoRA; the workflow should then work without any problems (haven't used it, but I believe this is correct). I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a two-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned. It's especially amazing with SD 1.5, though I haven't been able to replicate this in Comfy. As for the Turbo models: tried it, and it is pretty low quality. You cannot really diverge from CFG 1 (so, no negative prompt) or the picture gets baked instantly; you cannot go higher than 512 up to 768 resolution (which is quite a bit lower than 1024 plus an upscale); and when you ask for slightly less rough output (4 steps), as in the paper's comparison, it gets slower. I also liked the ability in MJ to choose an image from the batch and upscale just that image; the "latent chooser" node does this, and it works, though it is slightly unreliable.

For upscaling with img2img, you first upscale/crop the source image (optionally using a dedicated scaling model like UltraSharp), convert it to latent, and then run the KSampler on it; this replicates the SD Upscale / Ultimate SD Upscale scripts from A1111. Remember that a 2x, 4x or 8x model upscales the original resolution by exactly x2, x4 or x8, and the two basic routes compare like this:
- image upscale is less detailed, but more faithful to the image you upscale;
- latent upscale looks much more detailed, but gets rid of the detail of the original image.
For faces, I generally do the ReActor swap at a lower resolution, then upscale the whole image in very small steps with very, very small denoise amounts.
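That "small steps, tiny denoise" approach is easy to express as a loop. A sketch with a hypothetical img2img callable standing in for whichever sampler pass you use (not a real ComfyUI or ReActor API):

```python
from PIL import Image

def progressive_upscale(img: Image.Image, img2img, target_w: int,
                        step: float = 1.25, denoise: float = 0.1) -> Image.Image:
    """Grow the image ~25% at a time, lightly re-sampling after each resize."""
    while img.width < target_w:
        new_w = min(round(img.width * step), target_w)
        new_h = round(img.height * new_w / img.width)  # keep the aspect ratio
        img = img.resize((new_w, new_h), Image.BICUBIC)
        img = img2img(img, denoise=denoise)  # tiny denoise: re-add detail only
    return img
```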
Finally, a grab-bag of remaining reports. Hi! I've been experimenting and trying some workflows and tutorials, but I don't seem to be getting good results with hires fix. I can understand that with Ultimate Upscale one could add more detail through adding steps or noise, or whatever else you'd like to tweak on the node. On custom models and LoRAs, I tried a lot from CivitAI: epiCRealism, CyberRealistic, AbsoluteReality, Realistic Vision 5 and others.

I run SDXL-based models from the start and through three Ultimate Upscale nodes. The last one takes time, I must admit, but it runs well and allows me to generate good-quality images (I managed to find a seams-fix settings configuration that works well for that last node, hence the long processing). FWIW, I was also using it with the PatchModelAddDownscale node to generate with RV 5.1 and LCM for 12 samples at 768x1152, then using a 2x image upscale model, and I am consistently getting the best skin and hair details I've ever seen.

For AI-generated video upscales, try something like a chain of AnimateDiff LCM + IPAdapter + Ultimate Upscale; it uses ControlNet tile with Ultimate SD Upscale.