LoRA training in ComfyUI (Reddit)
Comfy has the workflow down… once you find something that works great, it's easy to keep doing it. However, when I generate images of that LoRA with the same parameters and prompt in ComfyUI, the person looks completely different. LoRA problem: training samples look better than inference in A1111 and Comfy.

That way I can hit Queue and then come back later with a bunch of examples already generated exactly the way I'd like to see them.

So I created another one to train a LoRA model directly from ComfyUI! By default, it saves directly in your ComfyUI lora folder. In this folder, it creates one subfolder for every training you do within Comfy.

But where do I begin? Anyone know any good tutorials for a LoRA training beginner?

Training a LoRA (difficult level): for those of you who are ambitious and want to make your own LoRA, there are two guides you can use to train your own LoRA.

LoRA offers a solution that is particularly useful for AI art production, mainly by addressing the balance between model file size and training power. The images above were all created with this method.

This seemingly unassuming node launches what is called Tensorboard, a web interface for studying the log created by a LoRA training run.

The base model's "Model" and "Clip" outputs go to the respective "Model" and "Clip" inputs of the first Load Lora node.

Hi, noob user here; I've been working with ComfyUI for several days now. Below is my setting for character LoRA training, which I got from SECourses; it can do 3000 training steps in about 50 minutes. I have seen some tutorials on LoRA training, but…

This GUI uses the "Dreambooth" training technique (it does not need captions), which usually does well with style training. Hopefully this helps people get started with LoRA training.

Positive embeddings: enhance image quality with these embeddings. Pony negative embeds.

A1111 doesn't handle LCM out of the box, and the LCM extension only handles base LCM models, not LCM LoRA with regular SD models.

I'm slowly warming to it, but I feel iteration is easier in Auto1111 at the moment. It's way easier adding textual inversions and LoRAs there, and ControlNet integration is just so easy for 1.5.

My request for people who use Google Colab for ComfyUI, A1111, and LoRA: which plan should I take? I only have a 4GB Nvidia 3050.

January 12, 2024, ChinaZ.com: Yesterday a blogger abroad shared on Reddit a ComfyUI LoRA training node he released. With it you can train LoRA models directly in ComfyUI, and the setup is very simple. After training finishes, the model is saved in the ComfyUI lora folder, ready to use and test at any time.

For example, it's much easier to see a loss graph, learning rate curve, and sample outputs, and to pause training.

You will learn the most by simply cooking one and seeing the output.
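To make the Model/Clip chaining and strength values above concrete outside the node graph, here is a rough equivalent using Hugging Face diffusers with the PEFT backend — a sketch of my own, not something from the thread; the checkpoint name, LoRA file name, prompt, and 0.8 weight are all placeholders.

```python
# Minimal sketch, assuming a recent diffusers + peft install.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load LoRA weights into both the UNet ("Model") and text encoder ("Clip"),
# which is what a Load Lora node wires up in ComfyUI.
pipe.load_lora_weights(
    "ComfyUI/models/loras",                  # folder where the trainer saved it
    weight_name="my_character.safetensors",  # placeholder file name
    adapter_name="my_character",
)
pipe.set_adapters(["my_character"], adapter_weights=[0.8])  # ~ strength 0.8

image = pipe("photo of my_character", num_inference_steps=25).images[0]
image.save("lora_check.png")
```

The single adapter weight here plays roughly the role of strength_model/strength_clip (or `<lora:my_character:0.8>` in A1111) when both strengths are set to the same value.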
At least for me it was like that, but I can't say for you, since we don't have the workflow you use.

It should be comparable if you are using all the settings in A1111 to offload most things (VAE, ControlNet, upscaler) to non-video RAM, but A1111 has terrible RAM management and often winds up with bad memory leaks or out-of-memory errors when I try it, even with nothing changing between generations and plenty of both RAM and VRAM (48/48 GB).

It'd be nice to have the LoRA output feed into an actual workflow.

After that, if the chosen epoch produces nice images, I'll use them for another round of training to make it even better.

While I've mostly tested it on "narrow concept" LoRAs, where I thought it would do best, here is an example from the opposite end, using the ad-detail-xl LoRA, which must be considered broad.

I leveraged this great image/guide on how to train an offline LoRA.

The "Model" output of the last Load Lora node goes to the "Model" input of the sampler node.

I want to experiment with training LoRAs, and I can easily see a 10-epoch run taking far longer than I want my PC to be unavailable, if I have enough training images. However, training an SDXL LoRA seems to be a whole new ball game.

And "Access Tensorboard" loads all logs stored in that folder!

I recommend learning dreambooth training with the extension, only because the user interface is just much better. "LoRA_type": "Standard".

I tried training a LoRA with 12 GB of VRAM; it worked fine but took 5 hours for 1900 steps, at 11 or 12 seconds per iteration.

I published an admittedly hastily written article about some important training observations when training anime characters with style flexibility.

Model-wise, there is an additional CLIP-based and UNet-based feature encoder for the (single) reference image, and something that sounds an awful lot like a LoRA. In a LoRA, you're essentially adding to or modifying the model itself to learn the concept in those training images.

It's the network and alpha dimensions. Likely the user used settings similar to what was recommended for a while for 1.5 LoRA training; with SDXL, LoRAs automatically become even bigger at the same values. For example, the settings I use, 32 for network dimension and 16 for alpha, now lead to a 162 MB LoRA instead of 36 MB.

An added benefit is that if I train the LoRA with a 1.5 model, I can then use it with many other checkpoints within the WebUI to create many different styles of the face.

I trained this both locally and with the Colab notebook. I don't really know what has been improved during those 7 months, though, but I'm getting the same quality as with the CivitAI LoRA trainer.

The leftmost column is only the LoRA; down: increased LoRA strength.

I'm trying to train the style of my own 3D renders, and AFAIK a LoRA is the way to go at this point. Both character and environment. I actually have it on 2 different local hard drives now too.

I am using Larry Jane's LoRA trainer in ComfyUI and I thought I had everything working correctly. I can select the LoRA I want to use, then select Anything v3 or Protogen 2.2 and go to town.

For a slightly better UX, try a node called CR Load LoRA from Comfyroll Custom Nodes.

Check shuffle captions and check keep one token.
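Several comments above describe a LoRA as "adding to or modifying the model itself" and tie file size to the network/alpha dimensions; here is a minimal PyTorch sketch of that idea (my own simplification, not the trainer's actual code):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a low-rank update: y = Wx + s*(alpha/r)*B(A(x))."""

    def __init__(self, base: nn.Linear, rank: int = 32, alpha: float = 16.0):
        super().__init__()
        self.base = base.requires_grad_(False)                        # frozen original W
        self.down = nn.Linear(base.in_features, rank, bias=False)     # A: project to rank r
        self.up = nn.Linear(rank, base.out_features, bias=False)      # B: project back up
        nn.init.normal_(self.down.weight, std=1.0 / rank)
        nn.init.zeros_(self.up.weight)                                 # start as a no-op
        self.scale = alpha / rank                                      # alpha rescales the update

    def forward(self, x: torch.Tensor, strength: float = 1.0) -> torch.Tensor:
        # strength is the knob the LoRA loader exposes at inference time
        return self.base(x) + strength * self.scale * self.up(self.down(x))
```

Only the two small matrices are saved, so the file size scales with the rank (the "network dimension") and with how many layers the base model has — which is why the same 32/16 settings produce a much larger file on SDXL than on 1.5.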
Here's everything you need to attempt to test Nightshade, including a test dataset of poisoned images for training or analysis, and code to visualize what Nightshade is doing to an image and to test potential cleaning methods.

For testing, I'd recommend sticking with the default settings of the online trainer at first.

You can try it with just "style" as the class token, so your prompt would be "instance_token style", which you can then expand with more keywords.

Comfy does the same, just denoting it as negative (I think it's referring to the Python convention of using negative array indices for the last elements); let's say ComfyUI is more programmer-friendly, so 1 (A1111) = -1 (ComfyUI) and so on (I mean the clip skip values).

That is way too slow. With batch size 2, you should be getting about 4 seconds per iteration.

While there are guides, a few very good ones, at the end of the day the only question in need of answering is this: does the LoRA help you create the image you wanted?

We had some good results, but we're facing challenges with achieving consistency in detailed elements, such as logos (if you know, you know the struggle). We use iterative upscaling methods in Comfy to get at the details of our LoRA models, but we struggle to capture them across the entire image. Has anyone else had this issue, and how can I get past it?

If you're completely new to LoRA training, you're probably looking for a guide to understand what each option does. Besides, the configuration for the LoRA training is incredibly easy.

Over-training, I don't think, has much to do with captions.

I am interested in training a set of fictional jets (the first being the Sabre Raven from Star Citizen) to make some art to post in their community gallery.

Hope to get your opinions, happy creating! Edit 1: I usually train with the SDXL base model, but I realized that training with other models like DreamShaper does yield interesting results.

Used the same way as other LoRA loaders (chaining a bunch of nodes), but unlike the others it has an on/off switch.

When you use a Lora Stacker, the LoRA weight and Clip weight of the LoRA are the same; when you load a LoRA in the Lora Loader, you can use two different values.

In A1111, my SDXL LoRA is perfect at :1; not sure how to configure the LoRA strengths in ComfyUI. When training SD things, I sometimes found that to be the case.
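A trivial illustration (my own) of the clip-skip sign convention mentioned above:

```python
def a1111_clip_skip_to_comfy(clip_skip: int) -> int:
    """A1111 counts CLIP layers to skip from the end starting at 1;
    ComfyUI's CLIP Set Last Layer node uses a negative index instead."""
    if clip_skip < 1:
        raise ValueError("A1111 clip skip starts at 1")
    return -clip_skip

assert a1111_clip_skip_to_comfy(1) == -1   # default
assert a1111_clip_skip_to_comfy(2) == -2   # common for anime-style checkpoints
```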
Multiple LoRA references for Comfy are simply non-existent, not even on YouTube, where 1000 hours of video are uploaded every second. In Automatic1111, for example, you load the LoRA and control its strength by simply typing something like this: <lora:Dragon_Ball_Backgrounds_XL:0.8>.

I use it in the second step of my workflow, where I create the realistic image with the ControlNet inputs.

Negative embeddings: use scenario-specific embeddings to refine outputs.

So, I want to simplify this process with ComfyUI and tweak the LoRA's parameters directly.

Hello :-) Some people requested this guide, so here it is! There is a text "guide", or rather a cheat-sheet, with all the tools and scripts that I use, plus a link to the video guide that describes the process; some tips and tricks are shared as well.

The prompt for the first couple, for example, is this: …

What needs to be done to maximize clothing flexibility? My theory (untested) would be to prefer fully nude images for training, as this allows you to put any clothes you like on the characters afterwards. What are your tips for getting maximum flexibility? Thanks.

After looking at many guides (and still looking), I'm stuck on understanding how a LoRA is supposed to be trained and worked with for Stable Diffusion, and whether that's even the right tool to use.

Try changing that, or use a LoRA stacker that allows separate lora/clip weights. So just add 5/6/however many max LoRAs you'll ever use, then turn them on/off as needed.

Tried a few combinations but, you know, RAM is scarce while testing.

Or just skip the LoRA download Python code and upload the LoRA manually to the loras folder.

Short version: I'm just getting started with Comfy, so are there any settings or nodes I should know about in Comfy that might affect the accuracy of a LoRA? At the moment I've just got the model loader, empty latent, LoRA loader, positive prompt, and a KSampler, but what I'm getting isn't as accurate as the training samples Kohya produces.

This doesn't matter so much with models that don't share heritage, but trying to take a LoRA trained on RV and use it on Anything can be hit or miss, for example.

Probably needs more tweaking. If that doesn't work at all, there's something weird. My LoRA doesn't appear in the images at 1.0…

Lora Training using only ComfyUI!! We show you how to train LoRAs exclusively in ComfyUI. GitHub: https://github.com/LarryJane491/Lora-Training-in-Comfy

Stable Diffusion models are fine-tuned using Low-Rank Adaptation (LoRA), a unique training technique.
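For the minimal "checkpoint → LoRA loader → prompts → KSampler" test chain described above, here is a rough sketch of the same graph in ComfyUI's API (JSON) format, posted to a locally running instance; the file names, prompts, and sampler settings are placeholders of mine, not settings from the thread.

```python
import json
import urllib.request

workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_v1-5.safetensors"}},
    "2": {"class_type": "LoraLoader",
          "inputs": {"lora_name": "my_character.safetensors",
                     "strength_model": 0.8, "strength_clip": 0.8,
                     "model": ["1", 0], "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "photo of my_character", "clip": ["2", 1]}},
    "4": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "blurry, lowres", "clip": ["2", 1]}},
    "5": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["2", 0], "positive": ["3", 0], "negative": ["4", 0],
                     "latent_image": ["5", 0], "seed": 42, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "7": {"class_type": "VAEDecode",
          "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
    "8": {"class_type": "SaveImage",
          "inputs": {"images": ["7", 0], "filename_prefix": "lora_test"}},
}

# Queue the graph on a local ComfyUI server.
req = urllib.request.Request("http://127.0.0.1:8188/prompt",
                             data=json.dumps({"prompt": workflow}).encode(),
                             headers={"Content-Type": "application/json"})
urllib.request.urlopen(req)
```

Each ["node_id", output_index] pair wires one node's output into another node's input, exactly like dragging a noodle in the UI.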
This can have bigger or smaller differences depending on the LoRA itself.

When I use this LoRA it always messes up my image. I would try network dimension…

The redjuice style LoRA is a LoRA created from the images of Japanese illustrator redjuice (Guilty Crown, IRyS character designer, supercell illustrator, etc.).

Here is the GitHub link for this project: LarryJane491/Lora-Training-in-Comfy: This custom node lets you train LoRA directly in ComfyUI! (github.com). Hopefully it helps! My custom nodes felt a little lonely without the other half.

Super simple LoRA training guide. Make sure you have a folder containing multiple images with captions. Then, rename that folder into something like [number]_[whatever]. Finally, just choose a name for the LoRA, and change the other values if you want.

Are you on the correct tab? The first tab is for Dreambooth; the second tab is for LoRA (Dreambooth LoRA). If you don't have an option to change the LoRA type or to set the network size, you are in the wrong tab. (Start with network size 64, alpha 64, and convolutional network size/alpha 32.)

A1111 and Comfy aren't capable of training (unless there are extensions that add training scripts, like Kohya); you will need to use either Kohya or OneTrainer.

I've followed many videos to make sure I've done it correctly. I've tried updating all the nodes, and installed some other nodes recommended to try to resolve the issue. It goes through the training process and then I get these errors.

Tick "save LoRA during training" and make your checkpoint from the best-looking sample once it's at roughly 1000–1800 iterations. The model is DreamShaper Lightning XL.

Now I know that captioning is a crucial part here, but having around 300 training images I don't really want to do it by hand :D I tried using the WD14 tagger, but the results seem very anime-centered (obviously).

Anyone have a workflow to do the following? Take a LoRA of person A and a LoRA of person B and place them into the same photo (SD1.5, not XL). I know you can do this by generating an image of two people using one LoRA (it will make the same person twice) and then inpainting the face with a different LoRA, using OpenPose / Regional Prompter.

It has been said by some that for trivial LoRA training, "2000 total steps" is optimal.

The developer of Comfy, who also helped train some versions of SD3, has resigned from SAI (screenshots from the public chat on the Comfy Matrix channel this morning; includes new insight into what happened).

If your best sample image is happening sooner, then it's training too fast.

The best advice I can give you is to dive in and start cooking. This video shows how to install and use my custom node that makes LoRA training possible directly from ComfyUI.

It's a LoRA for V2 in your screenshot, but LoRAs from MotionDirector have .pt format — for example, this Playing Golf motion LoRA in my screenshot. The LoRA is called "unet….pt", and when I try to use it as a LoRA it shows an error.

Or do something even simpler: just paste the link of the LoRAs into the model download link and then move the files to the different folders.
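To connect the [number]_[whatever] folder convention with the "2000 total steps" rule of thumb quoted above, here is a small helper of my own (the folder name and numbers are only illustrative):

```python
from pathlib import Path

def total_steps(image_dir: str, epochs: int, batch_size: int = 1) -> int:
    """kohya-style convention: folder is '<repeats>_<name>', and
    total steps = images * repeats * epochs / batch_size."""
    folder = Path(image_dir)
    repeats = int(folder.name.split("_", 1)[0])          # e.g. "10_mychar" -> 10
    n_images = sum(1 for p in folder.iterdir()
                   if p.suffix.lower() in {".png", ".jpg", ".jpeg", ".webp"})
    return n_images * repeats * epochs // batch_size

# Example: 25 images in "train/10_mychar", 8 epochs, batch size 1 -> 2000 steps.
# print(total_steps("train/10_mychar", epochs=8))
```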
I trained a LoRA on a person, and the images generated in Automatic1111 look exactly like the person.

Then just click Queue Prompt and training starts! I recommend using it alongside my other custom nodes, LoRA Caption Load and LoRA Caption Save: that way you just have to gather images, then you can do the captioning AND training, all inside Comfy! That means you just have to refresh after training (and select the LoRA) to test it — that's all it takes for LoRA training now. Making LoRAs has never been easier! I'll link my tutorial. After installing, you can find it in the LJRE/LORA category, or by double-clicking and searching for "Training" or "LoRA". Download it from here, then follow the guide:

LoRA Training directly in ComfyUI! Tutorial - Guide.

On A1111, a positive "clip skip" value indicates stopping the CLIP that many layers before its last layer.

(This post is addressed to ComfyUI users, unless you're interested too, of course ^^) Hey guys! The other day on the ComfyUI subreddit, I published my LoRA Captioning custom nodes, very useful for creating captions directly from ComfyUI.

Embeddings to improve quality. These work well in my experience if your LoRA isn't too complicated.

Network dimension of 256 seems pretty high, but 256 for network alpha seems to be WAY, WAY too high.

Coupled with frustration over long LoRA training times and often undertrained results, I started wondering if it was possible to do anything similar with LoRAs (which, of course, is a completely different concept from TI). Would it be possible to enhance desirable elements while diminishing undesirable ones?

I've trained a LoRA using Kohya_ss; however, when I load the LoRA in ComfyUI and use the tag words, it doesn't generate the image, e.g. …

Holy shit, I was just googling to find a LoRA tutorial, and I couldn't believe how littered this thread is with vibes I can only describe as "god damn teenagers, get off my lawn". FFS, this is an internet forum we all use to ask for help from people who know more than we do about things we want to know more about. Throw the kid a bone.

That's why I think if you are training on an unknown token, it should go first, so the difference in weights gets assigned to the weird token; if it goes at the end, it will probably be ignored.

Doing it in ComfyUI or any other SD UI doesn't matter to me, only that it's done locally.

Hello! I've been playing around with ComfyUI for months now and have reached a level where I want to make my own LoRAs.

I thought I was doing something wrong, so I kept all the same settings but changed the source model to 1.5, and suddenly I was getting 2 iterations per second and it was going to take less than 30 minutes.

ERROR: lora diffusion_model.input_blocks…in_layers…weight — Dimension out of range (expected to be in range of [-1, 0], but got 1). I tried to change the weights in the Lora loader, but I get this output every time 🤷. I uploaded the files to a drive with the trace, the LoRA, and the settings if you need to take a better look 🙏.

Training issue with "LORA Training In Comfy" — please help :) My only complaint with the LoRA training node is that it doesn't have an output for the newly created LoRA.

The problem: I've trained about 3 times, changing tactics a bit, and I can tell my model is affected by it, but I cannot get it anywhere close to…

OK, so I know the importance of having different items in the background so the trainer can differentiate which features are associated with the reference and which are not — for instance, using a white background consistently could be mistaken for part of what is being trained — but why not use alpha transparency as an omission variable?

Thanks! So I have installed Comfy multiple times in the past few days trying to get this to work. Checkpoints --> Lora.

Training on a specific model means your LoRA will be better adapted to that model's traits, but less flexible on others. Maybe train one with base SDXL before going for a custom model.

If you're aiming to make, say, vibrant purple dresses that flow with the wind, an embedding would suit your use case nicely and quickly.

I use a sanity prompt of "with blue hair" to identify when it becomes overtrained (it loses the blue).

I usually do that after the first training session. It looks like you basically get a checkpoint for your LoRA training process every epoch.

Training LoRA in ComfyUI — Hi everyone, I am looking for a way to train LoRA using ComfyUI. Previously I used to train LoRAs with Kohya_ss, but I think it would be very useful to train and test LoRAs directly in ComfyUI.
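The caption workflow above relies on the usual kohya-style convention of one .txt file per image with the same base name. As a hedged sketch (my own, not the caption nodes' code), this stub just prepends a hypothetical trigger word so the "unknown token goes first" advice is honored; the tags themselves would come from a tagger:

```python
from pathlib import Path

def prepend_trigger(dataset_dir: str, trigger: str = "mychar") -> None:
    """Ensure every caption file starts with the trigger word."""
    for txt in Path(dataset_dir).glob("*.txt"):
        tags = txt.read_text(encoding="utf-8").strip()
        if not tags.startswith(trigger):
            txt.write_text(f"{trigger}, {tags}", encoding="utf-8")

# prepend_trigger("train/10_mychar", trigger="mychar")
```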
On the other hand, in ComfyUI you load the LoRA with a Lora Loader node and you get two options, strength_model and strength_clip, and you also have the text-prompt thing, <lora:Dragon_Ball_Backgrounds_XL>. In simple terms, it's how much of the LoRA is applied to the CLIP model. The CLIP model is part of what you (optionally) feed into the LoRA loader, and it will also have, in simple terms, trained weights applied to it to subtly adjust the output.

I'm hitting a wall in trying to figure this out.

It may eventually be added to A1111, but it will probably take significantly longer than other UIs, because the existing LCM implementation relies on Hugging Face diffusers, and A1111 doesn't use or support that SD toolchain for its main image-generation function.

My ComfyUI workflow was created to solve that. The TL;DR version is this: it makes an image from your prompt without a LoRA, runs it through ControlNet, and uses that to make a new image with the LoRA.

This was incredibly easy to set up in Auto1111 with the Composable LoRA + Latent Couple extensions, but it seems an impossible mission in Comfy.

I say Dreambooth and not LoRA because I never had luck making LoRAs with this extension.

As for the approach, it seems like it consists of two different training stages: (1) pre-training on the wider domain (e.g. faces, cats) and (2) fine-tuning on one particular instance.

Comfy is great for certain things, but for others it really, really sucks. Even just making an "inpaint only masked" function in Comfy is a huge pain in the ass; you end up creating a massive workflow just for something that's a single checkbox in Auto1111. Just try replicating the Auto1111 img2img tab, with all its features, in Comfy.

The LoRA weight list seems to control it, but I noticed that some LoRAs do not seem to have any effect on a render no matter the applied weight, or an extreme effect when using the suggested weights that the LoRA's training author provided.

This is a super simple guide for anyone who wants to dive straight in and use the .json settings attached in the guide.

By default, Lora-Training-in-Comfy creates a log folder in the root of ComfyUI.

It's not the point of this post, and there's a lot to learn, but still, let me share my personal experience with you:

Because a LoRA places a layer in the currently selected checkpoint.

Take the outputs of that Load Lora node and connect them to the inputs of the next Lora node if you are using more than one LoRA model.

The prompt: "product placement".

Adjust your steps so you hit at least 1600 with one of your last epochs.

Yeah, it's pretty outdated, but it works.

Question - Help: So usually, the sample images are, at best, a rough indication of how the training is going and a means to tell when the model is overfitted.

I don't really know, nor have I tested it.
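Since a few comments above note that some LoRAs seem to do nothing at certain weights, a quick strength sweep makes that easy to check. This sketch assumes the pipeline and adapter from the earlier diffusers snippet are already loaded; the prompt and strength values are placeholders of mine:

```python
import torch

prompt = "dragon ball style background, mountains"  # placeholder prompt

for strength in (0.0, 0.4, 0.8, 1.2):
    pipe.set_adapters(["my_character"], adapter_weights=[strength])
    g = torch.Generator("cuda").manual_seed(42)      # same seed each run, only strength varies
    image = pipe(prompt, num_inference_steps=25, generator=g).images[0]
    image.save(f"lora_strength_{strength:.1f}.png")
```

Comparing the saved images side by side gives the same kind of strength grid described earlier (LoRA only in the first column, increasing strength going down).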