Image size comfyui reddit
To then view the generated images, click on View History and go through your generations by loading them.

If I were to make some type of custom node or modify the core node to allow a larger latent image size, would that break the whole process, or is there some larger reason that 8192 is the hard limit?

Welcome to the unofficial ComfyUI subreddit.

The first branch has: Txt to Image and then Image to SDVID with the new SD vid models that came out. It then uses Grounding Dino to mask portions of the image to animate with AnimateLCM.

In this case, the image from Comfy has some extra glitches.

I have a ComfyUI workflow that produces great results.

In the process, we also discuss SDXL architecture.

During my img2img experiments with 3072x3072 images, I noticed a quality drop using HyperTile with standard settings (tile size 256, swap size = 2, max depth = 0).

Generated images: Automatic1111 image.

I want to upscale my image with a model, and then select the final size of it. I started with ComfyUI 3 days ago.

Here's how you can do it: Automatic1111 · May 14, 2024 · A basic description of a couple of ways to resize your photos or images so that they will work in ComfyUI.

In an effort to generate images faster on my potato PC. Also the exact same position of the body.

The hard part is knowing when the image is ready to be retrieved and getting the image.

If you just want to see the size of an image, you can open it in a separate tab of your browser and look at the top to find the resolution.

Howdy! I'm not too advanced with ComfyUI for SD generation yet, but I've made a lot of progress thanks to your help. I do that a lot.

Stable Diffusion has a bad understanding of relative terms; try prompting "a puppy and a kitten, the puppy on the left and the kitten on the right" to see what I mean.

Please keep posted images SFW. This YouTube video should help answer your questions.
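The "hard part" mentioned above — knowing when an image is ready and then getting it — is what ComfyUI's HTTP API is for. Below is a minimal polling sketch, assuming a default local server on 127.0.0.1:8188; the /prompt, /history/{id}, and /view endpoints are part of ComfyUI's standard API, but the helper names and the workflow JSON contents are your own.

```python
import json
import time
import urllib.request

API = "http://127.0.0.1:8188"  # assumed default ComfyUI address

def queue_prompt(workflow: dict) -> str:
    """POST an API-format workflow JSON to the queue; returns its prompt_id."""
    req = urllib.request.Request(
        f"{API}/prompt",
        data=json.dumps({"prompt": workflow}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prompt_id"]

def extract_image_refs(history: dict, prompt_id: str) -> list:
    """Collect (filename, subfolder, type) for every image a finished prompt saved."""
    refs = []
    for node_output in history.get(prompt_id, {}).get("outputs", {}).values():
        for img in node_output.get("images", []):
            refs.append((img["filename"],
                         img.get("subfolder", ""),
                         img.get("type", "output")))
    return refs

def wait_for_images(prompt_id: str, poll_secs: float = 1.0) -> list:
    """Poll /history until the prompt shows up with outputs, then return its images."""
    while True:
        with urllib.request.urlopen(f"{API}/history/{prompt_id}") as resp:
            history = json.loads(resp.read())
        refs = extract_image_refs(history, prompt_id)
        if refs:
            return refs
        time.sleep(poll_secs)
```

Each returned ref can then be downloaded via `/view?filename=…&subfolder=…&type=…`. ComfyUI also pushes progress over a websocket, which avoids polling entirely, but the history poll above is the simplest thing that works.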
The most powerful and modular diffusion model GUI, API and backend with a graph/nodes interface.

New users of Civitai should be aware that the PNG (which contains the metadata) can only be downloaded from the "image view".

I have tried to push the sampling step count down as low as possible.

Learn how to master inpainting on large images using ComfyUI and Stable Diffusion. Is there a way to pull this off within ComfyUI?

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs and with fast renders (10 minutes on a laptop RTX 3060).

When I generate an image with the prompt "attractive woman" in ComfyUI, I get the exact same face for every image I create.

Enjoy a comfortable and intuitive painting app.

Layer copy & paste this PNG on top of the original in your go-to image editing software.

I can obviously pick a size when doing Text2Image, but when prompting off an existing image my final image will always just be the same size as the inspiration image.

No, you don't erase the image. Insert the new image into the workflow again and inpaint something else; rinse and repeat until you lose interest :-) and no workflow metadata will be saved in any image.

It can pretty much be scaled to whatever batch size by repetition. — comfyanonymous/ComfyUI

Copy that into user.css.

However, my goal is to recreate the exact same image. I understand that the DPM++ 2M sampler can do this; at least in Auto1111 it repeats the same image every time.

ComfyShop phase 1 is to establish the basic painting features for ComfyUI.

As an input I use various image sizes, and I find I have to manually enter the image size in the Empty Latent Image node that leads to the KSampler each time I work on a new image.
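If you are typing an input image's size into the Empty Latent Image node by hand, note that the node steps width and height in multiples of 8 (the SD VAE downscales by a factor of 8, so latent dimensions must divide evenly). A small sketch for snapping arbitrary image sizes to valid latent sizes — the helper name is hypothetical, not a ComfyUI API:

```python
def latent_safe_size(width: int, height: int, step: int = 8) -> tuple:
    """Round pixel dimensions to the nearest multiple of `step`, since the
    SD VAE works at 1/8 resolution; never collapse below one step."""
    snap = lambda v: max(step, round(v / step) * step)
    return snap(width), snap(height)
```

Feeding the snapped values into the Empty Latent Image node (or a converted width/height input) avoids the manual re-entry described above.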
So I can't give a simple answer, but I'd say if you're still interested and need some help we can join a Discord call or something and I can help.

It animates 16 frames and uses the looping context options to make a video that loops.

How to Magically Resize Your Images: The 1024px Rule That Will Change Everything.

And above all, BE NICE.

Stable Diffusion 1.5 is trained on images 512 x 512.

When I do the same in Automatic1111, I get completely different people and different compositions for every image. Automatic1111 would let you pick the final image size no matter what and give you options for crop, just resize, etc.

ComfyUI image.

(Using SD webUI before.) I am getting a blurry image when using the "Realities Edge XL ⊢ ⋅ LCM+SDXLTurbo" model in ComfyUI. I got the same issue in SD webUI, but after using sdxl-vae-fp16-fix the images were good. When I try the same to fix this issue here, it's not working.

So I would assume generating 4 images (with the `batch_size` property) would give me four images with seeds `1`, `2`, `3`, and `4`. I think the intended workflow here is to just press the Queue Prompt button several times.

It's solvable; I've been working on a workflow for this for like 2 weeks trying to perfect it for ComfyUI, but man, no matter what you do there is usually some kind of artifacting. It's a challenging problem to solve. Unless you really want to use this process, my advice would be to generate the subject smaller and then crop in and upscale instead.

ComfyUI Artist Inpainting Tutorial - YouTube

Probably not what you want, but the preview chooser/image chooser node is a custom node that pauses the flow while you choose which image (or latent) to pass on to the rest of the workflow. The option has been around for a long time with other UIs like Automatic1111 and Visions of Chaos.

I have a workflow I use fairly often where I convert or upscale images using ControlNet.

Belittling their efforts will get you banned.
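On the `batch_size` question above: a batch shares one seed, so if you want a known, reproducible seed per image, the press-Queue-Prompt-several-times approach can be scripted by queueing copies of the workflow with the seed bumped each time. A sketch over ComfyUI's API-format workflow JSON (where a KSampler node's inputs include a "seed" field); the node id "3" and the helper name are illustrative, not part of any API:

```python
import copy

def build_seed_variants(workflow: dict, sampler_node: str,
                        base_seed: int, count: int) -> list:
    """Return `count` deep copies of an API-format workflow, each with the
    given sampler node's seed incremented by one - the queue-several-prompts
    equivalent of batch_size, but with a known seed per image."""
    variants = []
    for i in range(count):
        wf = copy.deepcopy(workflow)
        wf[sampler_node]["inputs"]["seed"] = base_seed + i
        variants.append(wf)
    return variants
```

Each variant would then be POSTed to the queue separately; deep-copying keeps the original workflow dict untouched.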
Also, if this is new and exciting to you, feel free to post.

To open ComfyShop, simply right-click on any image node that outputs an image and mask, and you will see the ComfyShop option, much in the same way you would see MaskEditor.

It's based on the wonderful example from Sytan, but I un-collapsed it and removed the upscaling to make it very simple to understand.

Here, you can also set the batch size, which is how many images you generate in each run.

Hey everyone, I've been exploring the possibility of using an image as input and generating an output image that retains the original input's dimensions.

You set the height and the width to change the image size in pixel space.

Change the font-size in user.css to something higher than 10px and you should see a difference.

I've built many ComfyUI web apps for personal business purposes and have helped others on Reddit as well.

Hello, Stable Diffusion enthusiasts! We decided to create a new educational series on SDXL and ComfyUI (it's free, no paywall or anything). This will be follow-along, step-by-step tutorials where we start from an empty ComfyUI canvas and slowly implement SDXL.

Or add the Image Gallery extension.

I have managed to push it down to 3 steps with some nifty tricks I found. The demo images aren't curated; all images just use the seed "3" with a basic prompt, so this is really useful for experimenting.

This way it's an end-to-end txt-to-animation.

So you have the preview and a button to continue the workflow, but no mask, and you would need to add a Save Image after this node in your workflow. Works great.

A mask adds a layer to the image that tells ComfyUI what area of the image to apply the prompt to.

You won't get obvious seams or strange lines.
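Retaining the input's dimensions, as described above, usually means generating at a size the model is comfortable with while preserving the input's aspect ratio, then upscaling back. The arithmetic is small enough to sketch; the helper name is hypothetical:

```python
def scale_to_long_side(width: int, height: int, long_side: int) -> tuple:
    """Scale (width, height) so the longer side equals `long_side`,
    preserving the aspect ratio of the original input image."""
    s = long_side / max(width, height)
    return round(width * s), round(height * s)
```

For example, a 1920x1080 input scaled for SD generation at a 512-pixel long side keeps the 16:9 shape, and the same function run in reverse (long side = original) recovers the input's dimensions for the final upscale.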
I know I can run the img-to-vid portion with a 512 x 512 input image, but I'm struggling trying to downscale the image by 2.

Batch index counts from 0 and is used to select a target in your batched images; Length defines the amount of images after the target to send ahead. E.g.: batch index 2, Length 2 would send images number 3 and 4 to the preview in this example. Input your batched latent and VAE.

Save the new image.

First we calculate the ratios, or we use a text file where we…

Mar 22, 2024 · This simple checkbox in the Automatic1111 WebUI interface allows you to generate high-resolution images that look much better than the default output.

…and see if you can get the image size to be used for the empty latent's (converted) height and width.

Ignore the LoRA node that makes the result look EXACTLY like my girlfriend.

The one that is shown in the "post view" is a "preview JPEG" (even though it looks as if it is full size), which does not have the metadata.

Aug 21, 2023 · If we want to change the image size of our ComfyUI Stable Diffusion image generator, we have to type the width and height. You can't enter a latent image size larger than 8192.

Want 10 images? Click that button till the Queue size is 10 (or select Extra options and put 10 in Batch count).

I first get the prompt working as a list of the basic contents of your image.

Increasing the tile size to half the image's dimensions (1536) does improve image quality, but the speed benefit diminishes.

This comprehensive tutorial covers 10 vital steps, including cropping, mask detection, sampler erasure, mask fine-tuning, and streamlined inpainting for incredible results.

Please share your tips, tricks, and workflows for using this software to create your AI art.

Stable Diffusion XL is… Jul 6, 2024 · So, if you want to change the size of the image, you change the size of the latent image.
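The batch index/Length convention described above is plain slice arithmetic: index counts from 0, Length counts the target plus the images after it. A sketch (the function name is illustrative, not the node's):

```python
def select_from_batch(images: list, batch_index: int, length: int = 1) -> list:
    """Batch index counts from 0; `length` is how many images, starting at
    that target, to send ahead (index 2, length 2 -> images #3 and #4)."""
    return images[batch_index : batch_index + length]
```

Slicing past the end of the batch simply returns fewer images rather than raising an error, which matches how you would want a preview selector to degrade.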
I would like to know if that is due to some reason other than that images that large take a long time. I'm instead just going to try to work around it by downscaling the image. The only way I can think of is to use Upscale Image Model (4xUltraSharp), get my image to 4096, and then downscale with nearest-exact back to 1500.

A lot of people are just discovering this technology, and want to show off what they created.

I have a workflow that is basically two user branches.

A bit of an obtuse take.

With Masquerade's nodes (install using the ComfyUI node manager), you can maskToRegion, cropByRegion (both the image and the large mask), inpaint the smaller image, pasteByMask into the smaller image, then pasteByRegion into the bigger image.

You can just plug the width and height from Get Image Size directly into nodes where you need it, too.

A transparent PNG in the original size with only the newly inpainted part will be generated.

I think the bare minimum would be the following, but having the rest of the defaults next to it could be handy if you want to make other changes.

You probably still want an EXIF viewer/remover/cleaner to double-check images, since you haven't been using this setting and presumably have prior work to sanitize of metadata. You will need to launch ComfyUI with this option each time, so modify your bat file or launch script.

This workflow generates an image with SD1.5. The denoise on the video-generation KSampler is at 0.8 so that some of the structure of the original generated image is retained.

In truth, 'AI' never stole anything, any more than you 'steal' from the people whose images you have looked at when their images influence your own art; and while anyone can use an AI tool to make art, having an idea for a picture in your head and getting any generative system to actually replicate it takes a considerable amount of skill and effort.

It is not a problem with the seed, because I tried different seeds.
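The Masquerade crop-by-region flow above boils down to one piece of arithmetic: find the mask's bounding box, pad it, clamp it to the image, inpaint just that rectangle, and paste it back at the same coordinates. An illustrative plain-Python sketch of that region math over a 2D mask (not Masquerade's actual code):

```python
def mask_region(mask, padding=32):
    """Bounding box of all truthy cells in a 2D mask, expanded by `padding`
    and clamped to the mask bounds. Returns (left, top, right, bottom) with
    right/bottom exclusive - the rectangle you crop, inpaint, and paste back -
    or None if the mask is empty."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    if not rows:
        return None
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    h, w = len(mask), len(mask[0])
    return (max(0, min(cols) - padding),
            max(0, min(rows) - padding),
            min(w, max(cols) + 1 + padding),
            min(h, max(rows) + 1 + padding))
```

Because only the cropped rectangle is diffused, the sampler works at a small resolution regardless of how large the source image is, which is exactly why this beats inpainting the full-size image directly.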
.comfy-multiline-input { font-size: 10px; }

ComfyShop has been introduced to the ComfyI2I family.

How do I do the same with ComfyUI?