Hey OP Edition

Discussion of Free and Open Source Text-to-Image/Video Models and UI

Prev: >>106533022

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/tdrussell/diffusion-pipe

>WanX
https://rentry.org/wan22ldgguide
https://github.com/Wan-Video
https://alidocs.dingtalk.com/i/nodes/EpGBa2Lm8aZxe5myC99MelA2WgN7R35y

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Neta Lumina
https://huggingface.co/neta-art/Neta-Lumina
https://civitai.com/models/1790792?modelVersionId=2122326
https://neta-lumina-style.tz03.xyz/

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbours
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
I'm a scat man
>2 anistudio gens
Don't update Neo Forge right now bros. Git pull will fuck your UI and reset all your settings back to default. Stay on your current version until they patch this!
>ran and ani in the op
grim
>>106537924
>exposing yourself again
You really need to keep up with the times and names
Blessed thread of frenship
>Still no Chroma CivitAI category
schizos were right, this is intentional
>janny applications open
Can we get an AI bro in there finally? I'm tired of the jeet policing this Blessed Thread of Frenship
>>106538093kek
So did anyone try that Wan context node to increase video length? What's the best settings for it and how long can you go until it shits itself?
>>106538183>figurines
Forge anons, how we dealing with this defeat? Haoming's on another antibloat sperg out deleting samplers and schedulers again...
>>106529844
I could never get python stuff installed within my old A1111 install that I was using before this year. I followed directions and somehow it never worked. I needed to install things to the main python installation instead of A1111's venv folder to get them running. And on one occasion, I git pulled the latest A1111 version and got corrupted dependencies, which I had to install to the main python folder and then copy into A1111's venv. I lost a good couple of hours figuring out that I needed to do that. I haven't wanted to git pull again since, but this Forge and its upkeeper really do make SD damn near ".exe tier" in ease of use. Awesome user-friendliness that we all gotta appreciate. I'm afraid of messing that up. A1111 already put me through hell trying to update its pytorch and CUDA stuff on a couple of occasions after git pulls. Damn. Wan2GP looks to be much like A1111 in how it is "built".
>>106533282
I had forgotten this weird fetish exists. It misses the mark though, no? The breasts remain soft and don't start behaving like literal balloons.
>>106533715
lovely
>>106533296
sweet, and kek
>>106538306Alibaba 2 - Tencent 0
>>106538339
>damn near ".exe tier"
if you haven't been paying attention because of the schizo, we are getting one. ani has wan implemented
>>106538183that dumbass seemed funny at first, starting off sounding like the anime-girl-crazed permavirgins on /a/, "You still haven't given up on 3D women?" Well, it don't go much farther than that, and the show sucks overall. I like Eriri, but she alone cannot redeem that pile of junk. The show has quite the fanbase, though. Nisekoi is another terrible show, but has its numerous fans, yuck. To make Nisekoi extra annoying and tiresome, it spams full-screen reaction faces constantly.
>>106538363NANI?I don't frequent these threads.
>>106538334
>Haoming's on another antibloat sperg out deleting samplers and schedulers again...
Wha? Fucking kek, why?
UH OH LOCALCHINKS, LOOKS LIKE CHINA PULLED THE RUG
https://www.reddit.com/r/singularity/comments/1ncn3qy/seedream_4_is_mindblowingly_good/
>>106538090
they all hate the AI threads on /g/, forget it
>>106538435
https://github.com/FizzleDorf/AniStudio
the dev branch got updated today but he OOM'd on the 14b model gguf. I just want to see the thing working but he's going to try out the fp8
>>106538447Wow a couple of close up portraits and landscapes. I've never seen that before ever. No local model can do that. Wow.
>>106538461
>Realism for API
>Absolute slop for local
fucking chinks...
>>106538478Meant for >>106538461
>>106538472
>Gets caught organizing samefagging and doing raids with his two ass buddies
>still talks in the third person
Are you still salty the anime thread laughs at you?
>>106538478actually they kind of cant without 5 pounds of loras stacked on top lmao
>>106538498this is the schizo btw
>>106538478>Wow a couple of close up portraits and landscapes.look at the skin texture though, that's almost perfect
>>106538513no need to tell us you suffer from skill issues, its okay anon
>julien
>>106538523baptism
>>106538523it's localcope. if it was local they'd say it's the best thing ever, local won, etc. but it's API so they pretend it's bad. sour grapes.
>try prompt with lora
>barely any movement
>try prompt without lora
>fluid movement that's better than what the lora is supposed to do
huh?
>>106538523
Dreamshaper face desu. Show me its ability to render text or poses.
>>106538461
>I mean it's not going to progress backwards
for local it's progressing backwards though, the new models are more slopped, they know less shit than fucking SD1.5 (characters, celebrities, styles...)
Go away cloud shills this is not your thread.
>>106538550are you using a 2.1 lora with wan 2.2 or something?
>muh realism
absolutely plebeian and brainrotted
This meme is done to death, beaten like a dead horse, pulverized, and revived just to repeat the process 10 times more. It's just gross at this point. If I want realism, I just go outside. I actually touch grass, faggots. There's also my job waiting for me out there each weekday.
>>106538491skin texture is bizarro and ugly there, zoomie
unsubscribe
>>106538461Lol can't escape the piss filter. At this point I think it's some saas fetish
>>106538581xD
>>106538581hello Chang, I'm still not downloading your HunyanImage though, go cry somewhere else
I want to move away from fucking Forge because of this >>106538334
How do I get these node sliders/themes?? >>106536895
>>106538491for real though, I don't see anything on that image that could indicate this is an AI image, that's impressive
>>106538613>nogen
>>106536895>>106538688Isn't that some add-on slapped on top of comfy?
>>106538694if you ignore the weird skin, sure
>>106538694Yeah the yellow light reflector nose, we all have that of course.
>>106537878
anyone else using radial attention with wan?
I got no problem using the main branch of forgeUI; you can add as many features as you want, but nothing beats pure prompting. The issue with chroma is that it is heavy: this takes 9 minutes on my 5090, but it already does so much that my previous workflows on my old system, which was a laptop, would take 12-20 minutes to do all of this. This model has great potential and prompt-listening ability, but the weight pretty much gatekeeps everyone outside of a 90-class card.
I still need to get better at making loras. I just made a slop mix to anchor a generic style for consistency; I think I'm going to use an auto tagger and some of my older styles. My biggest worry is the size of my gens, so I might need to shrink them down by half
>>106538688>>106538705Find the front end repo and there should be some launch arguments listed that'll enable it, probably a testing branch. Apparently it'll be in main soon though.
>>106538714maybe he did a rhinoplasty and the tip of his nose has botox in it kek
>>106538705which is? I like it, I want to sliiide
>>106538728
>I got no problem using the main branch of forgeUI
>The issue with chroma is that it is heavy, this takes 9 minutes on my 5090
anon...
>>106538720Was this generated with Seedream 4.0?
>>106538745yes
>>106538728>blogposting tf is this? lmao
>>106538728we are truly in the "a finetune will save it" era (i really want a chroma finetune that knows as much as noob im not an antichroma schizo)
>>106538740
>Console wars
skip
>>106538760
Your mother blowing me and then wiping my backdoor clean
>>106538762
It actually knows more than n00b, but it's obvious tags were obfuscated, which I despise. The model knows a ton of crazy shit without any extra work, but this model is one of the biggest gatekeeping models I have ever seen. You HAVE to make your own loras to fully enjoy this model and that alone is going to gatekeep most anons without higher end cards.
>>106538784
>>>106538760 (You)
>Your mother blowing me and then wiping my backdoor clean
why is it so noisy?
>>106538793
no, i didnt find any? please, i want to escape the Forge titanic
Why are my outputs just black screens when I use these? They're fine when I don't use them:
>wan21Lightspeed_lightspeedI2v14B480p
>wan21Lightspeed_lightspeedI2v14B720p
Anybody else have this issue?
>>106538728
>>106538784
What's your batch size and network rank? I have a 12gb card and I'm hoping I can poorfag sneak my way into training something
>but it's obvious tags were obfuscated
You mean like pony, in that all it'll take is a little digging to find the random multi-character string that corresponds to whatever artist?
>>106538797>>106538808Is seedream an autoregressive model? it has the weird perspective issues I see in gpt gens.
>>106537607as a generalization younger = better however you have to take into account genetics, body shape, vaginal cavity shape, etcmy favorite part about that era was not (really) having to worry about std and convincing her that the pull-out method is a sufficient method of birth control.she was obsessed with making me cum on her titslosers here wont understand ;3
>>106538793Follow the instructions to enable the nightly release (I haven't tested but I'm pretty sure that's what you do). https://github.com/Comfy-Org/ComfyUI_frontend
>>106538825I call it the clay golem effect
>spamming SaaS slop on cool down in the local bread
>>106538824
I guess that would be possible, but it's telling how it can't replicate a style yet produces a perfect watermark of that style. It also has the same pony problem where, despite being obfuscated, some specific artists always seem to show through. In his defense he didn't block everything, just the porn artists and the ones willing to sue. Some classical artists I can still use (I won't combine them until after I get the main style right). In regards to training: on the last batch I did, I could only manage a batch size of 3 on the standard 24GB Chroma preset for OneTrainer, but I was using some large images in that dataset. I'm just surprised that made my 5090 OOM. I've only made 2 loras so I'm kind of useless on this subject right now.
>>106538835thanks
>>106538447
I mean I kinda get it, choice is nice, but really, if I'm having to maintain integration of everything, the fewer moving parts there are the easier it is. And is anyone really using lpm know your steps?
>>106538846
Everything that requires more than 8GB of VRAM is SaaS slop!
>>106538797made this with v3, thought it was pretty hilarious
>>106538852Thanks. You're probably one of only two or three anons that post about Chroma LoRAs. I'll be interested to hear how you (and the others) continue to figure stuff out.
>>106538834HOT
Where do i put this Comfy argument?
--front-end-version Comfy-Org/ComfyUI_frontend@latest
>>106538846i enjoy them coming out of the woodwork anytime a new model is released. you can always count on them trolling.
>>106538885I hope we can get more data, I know only a handful of anons can do this to begin with, I do plan to share loras once I'm happy with them and actually know how to do a good job with them, I'm going to use captioning software for my next run
>>106538491What's going on here though kek? Look at what they claim the actual prompt was
>>106537948Cute, CUUUTE :3
>>106538909do you want anon to wipe your ass and change your diaper too
does Chroma benefit from Nunchaku?
@106538923d*bo
>>106538931
I don't know, please help me, i dont want to be in forge anymore
>>106538919lol wut? I guess some weird censor
>>106538931Doesn't nunchaku use scaled models? Why'd you want that?
Requesting a farting marnie
>>106538856>dat ass
>>106538855
Perhaps, but is scheduler / sampler code really that annoying to maintain? t. retard who's only ever glanced at it
>>106538931
It's next in line after they're done with Wan. Actually it was to be done before Wan, but then lodestone started retraining Chroma1-HD and it was bumped down
>>106538977
i dunno, im new and i heard nunchaku makes flux faster, and i heard chroma is based on flux. i dont know what im doing.
>>106538937
>filtered by forge somehow thinks he'll do better in comfy
Absolute fucking obliterated sides
>>106538977I don’t know about with forge but I know with auto1111 they were always breaking and acting weird whenever you’d pull. Plus having to test them all with the new models and such. I’m not sure what his reasoning is to be honest perhaps he really is just autismal about it. One day we’ll just have Euler a karras and we’ll like it.
>>106538988You're talking to the same bored UI wars baiting schizo, you have to understand the thread down the hall is so dead they have to eternally seethe here so just ignore it bro
>>106538933What do you mean anon?
>>106538980
>It's next in line after they're done with Wan
i see, so now we'll have to choose between sageattention and nunchaku for wan?
>>106539013
I mean, yeah, probably
>>106538986
the nunchaku team would have to specifically work on chroma and release their quantized models of it. there's no nunchaku chroma at the moment
>>106539013
those are two different things and not mutually exclusive
>>106539032>those are two different things and not mutually exclusiveso you can run --use-sage-attention and --use-nunchaku at the same time?
2 anlas has been deposited into your account
>>106539043
>--use-nunchaku
wut. ah i see, this is the obtuse newbie troll
>>106539032
>those are two different things and not mutually exclusive
Given that both sacrifice quality for speed, I'd say it's not advisable to use both at the same time
>>106539062You have a better chance convincing a British Muslim into not raping a white woman pre stroke before speaking sense into the majority of these schizos that just parrot shit without thought
res_multistep_cfg_pp my beloved
/saas/ saas diffusion general
>>106539086There are so many new ones nowadays. I really need to go through and try them out since I mostly just use euler but in comfy I notice that if I pair the wrong sampler with scheduler in some cases it will run like a normal gen but return something completely unchanged.
>>106538065Do they just not like Chroma?
vibevoice keeps drifting towards a british accent
>>106539061nah, i dont have nunchaku installed, but i do have sageattention so i just guesstimated how it's used in the bat file.
>>106538607There is 0 piss filter on those.
>>106539086
>cfg_pp
Are these supposed to be used with cfg 1? Anytime I tried these they returned overbaked pics.
>>106539086>pp my belovedheh, fag.
Are there any penalties or considerations I should make pertaining to storage?
My Windows boot SSD has the program, the models, and the output.
Now say I dual boot and point linux diffusion to the windows partition to read models and output, is that an issue?
How about holding models and outputting to a storage SSD instead of the boot drive?
>>106538334
I went to look at the repo since you mentioned this, and I stumbled across this absolute gem, picrel. Replying to some vramlet complaining about issues with his card kek. Fucking based
>>106539146As long as it isn’t a spinny disk you should be fine. Loading models and everything else absolutely fucking chugs on mech drives do not recommend. Ask me how I know.
>>106539139With comfy, my CFG will be set anywhere from 0.5 to 1.0. commonly I'll have it around 0.8. Occasionally, prompt depending, I've been able to push it to 1.2.
>>106539155Holy based. Thank you for bringing this to my attention rocketanon.
Can I train Chroma loras on a 3090 or will my PC overheat, blow up, and kill me?
>finally oom'dit's over for me, i'm not buying a 5090 just to risk it burning up or missing rops
>>106539183I'm training on a 4070S
>>106539192What resolution/batch size?
>>106539183Yes, but it's a sacrifice I am willing to make. So go forth anon train
>>106539183I don't see how it's any riskier than genning
>>106538728Does it really take you 9 minutes to gen a single image? I'm using comfyui and a chroma gen with 30 steps/cfg 4/1600x1600 res takes a little over a minute and that's with a 400W capped 5090.
>>106539196
512, 2 batch, 100 epochs. About 2 hours. 1 batch with 140 epochs took 3 hours
>>106539203I am using 100 steps for the first pass and 30 steps for the second high res pass. I'm using an undervolted 5090 and it can do that gen you pointed out quickly, most of the time goes into the high res pass followed by adetailer which is really fast desu
>>106539203>avatarfag is a retardwhat a surprise
>>106539126
I doubt it, pretty sure there have been longer delays before models have gotten their own section.
It was ~two weeks ago that the Chroma training was finalized, and only in the last couple of days have we seen loras start trickling in, which makes sense, it takes some time to figure out good training settings. Also OneTrainer adding Chroma support is a big boon since it cuts training time by half compared to Diffusion-pipe.
When is Comfy going to add support for ByteDance Seedream 4.0?
>>106539203If you made that sexy old bitch with Chroma, i'm downloading it right fucking now. Which Chroma are you using, there's like 50.
>>106539217>100 stepswtf why?
>>106539217>I am using 100 steps
>>106539234
>>106539236
>Following others
I never did and never will. If I see improvement, I follow it there.
Like I've said before, you need at least 150 steps to make Chroma look good.
Why do my upscaled Wan videos look so fucking BLURRY FUCK
>>106539194rocketnon?
>>106539242What improvement nigga. 50-60 is like the upper level of noticeability. And that's only reserved for effort genning. 30-40 for shooting around seeds
>not enough memory: you tried to allocate 37111726080 bytes.AAAAAAAAAAAAIIIIIIIIIIIEEEEEEEEEEEEEEEEEEEEEEE
>>106539230Chroma1-Base
can seeddance seeddream or whatever do voluptuous booba tho?
>>106539260
You don't have to follow me, I'm in my own lane
People always have issues with my settings but never my results
>>106539252
>>106539269based and results pilled
>wow i am such a radical nonconformist xd
>>106539183yes, you can. it shouldn't be a big issue
>>106539260>>nigbo
>>106539266I'm on it. Now I need to find a workflow since i've never used Chroma. The one you prompted kind of reminds me of Fran Drescher, that sexy bitch.
>>106538581your copium intake is getting dangerous sir
>Git pull diffusion-pipe
>Shit's fuckin broken and none of my old configs work.
>Completely remove and recreate conda environment, still fucked.
Great...
>>106539269Whatever works go for it. I used to do 100 iteration gens with dpmpp_3m_sde because some comparison picture suggested it helped. Though that was with SDXL models. Waiting that many iterations in chroma is downright heroic.
100 steps for anime dogshit?
>>106539326i cant remember how long ago i last saw you comment on chroma but have you been able to play with it much? do you think you'll be able to use it a lot in its current state? idr if you bake loras or not but im also curious about that.
>>106539183
You can train a Chroma lora on as little as 8gb vram, OneTrainer is the trainer to use as it's the fastest
Chroma is less demanding to train than Flux dev
yeah. I'm going back to legacy comfy. why would they make the UI worse like that? there is no hope for comfyui getting a better interface at all they just chink'd it right up
>>106539295https://comfyanonymous.github.io/ComfyUI_examples/chroma/
>comfyui hate OUT OF NOWHERE
>>106539370The problem is some people clearly don't have any common sense for interface design. It's like Comfy is one of those devs who thinks literally every other program is doing the toolbar wrong. No, Comfy, you're the one who is wrong.
--front-end-version Comfy-Org/ComfyUI_legacy_frontend@latest
>>106538946Flux Krea did a decent job recreating the one they actually posted
>>106539370>why would they make the UI worse like that?Someone needs to make themselves seem useful to keep their job
>>106539389BUT IT HAS TO BE LIKE VSCODE!!!! IT'S LIKE I'M PROGRAMMING!!!
>>106539356That image >>106539118 was made with chroma. I like it for its knowledge of fantasy concepts but it is slow and on upscaling tends to add these irritating horizontal lines I don't know how to get rid of. Though maybe that is caused by my lora. As for lora training I still have the Royo lora that I made that is functional for my purposes if not very good. Thinking about attempting a Frazetta lora this weekend.
>>106538724
It completely destroys details, like randomly forgetting what a character wears etc. Not worth it at all.
>>106539301
Doesn't diffusion-pipe lack the ability to sample during training? Biggest reason I haven't switched from OneTrainer desu.
>>106539426>>106538724SNAKE OILED!!!!
>>106539389
That much was clear ever since the seed 'control_after_generate' fiasco, where comfyanon for some inexplicable reason decided that changing the seed when your generation is done is a smart move, so if you got a good seed you needed to drag the image back into the UI to get it back, instead of, like with every other program where they're not retarded, getting a new seed when you press generate.
This idiotic choice is still the default last I checked. It's pure ego at this point since NOBODY wants it, it makes NO sense.
>>106539417nice. ty anon
>>106538729>>106538909im on the nightly latest branch of ComfyUI front end, how do I activate VUE nodes?
>>106539449you can just set the seeds to incremental or to fixed...
>>106539416
Even VS Code follows basic UI standards. If he was truly copying VS Code, which he should since they have people who aren't retarded making the UI, it would be:
File | Edit | etc at the top
Far left side: Image, Video, Audio, 3D quick select icons
Left sidebar: workflows of the selected quick select (Wan workflows, Hunyuan workflows, etc)
Main interface: the nodes you know and love.
IT IS NOT FUCKING HARD
>>106539432are you the guy that gets anxiety from samples?
>>106539465they are trying (and failing) to ape invoke. they have no actual frontend developer. the subgraphs taking a year should speak volumes about the level of experience and lack of priorities
>>106539464
That's not a solution. The solution is to generate a seed when it's run, so you can see the seed of the final image without, as the other anon said, having to divine what the previous seed was. I can't tell if you are genuinely this stupid or just being obtuse. Please talk to ChatGPT about why what you said is anti-user and retarded.
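To make the seed argument concrete, here's a minimal sketch of the two orderings being argued about (hypothetical names, not ComfyUI's actual code): rerolling the displayed seed after the run finishes versus drawing it at queue time and leaving it visible.

import random

def run_generation(seed: int) -> None:
    # stand-in for queueing the actual sampler run
    print(f"generating with seed {seed}")

def queue_reroll_after(state: dict) -> int:
    # the behavior being criticized: generate with the shown seed, then reroll it,
    # so the seed that produced the image is no longer the one displayed
    used = state["seed"]
    run_generation(used)
    state["seed"] = random.getrandbits(32)
    return used

def queue_roll_at_queue_time(state: dict) -> int:
    # the behavior the anons want: draw a fresh seed when you press generate,
    # run with it, and leave it displayed so a good result can be re-run as-is
    state["seed"] = random.getrandbits(32)
    run_generation(state["seed"])
    return state["seed"]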
I still don't get this
>>106539490
>unsaved workflow
unsaved workflow
>>106539295
Here's the wf I used to make that gen; https://files.catbox.moe/t3gl1r.json
Happy gooning brother
>>106539488i think you have ever even used comfy and are just looking for a reason to schizo out
>>106539490Don't worry it will forget your workflow when you close it because 1KB of JSON is too much.
THANK YOU CHINA
>>106539518*never
>>106539475
y-yes..... :( does diffusion-pipe let you spit out backups / intermediate epoch checkpoints? if so then i may have to stop being a pussy and switch. but i REALLY enjoy having it sample during training tho. im not gunna trust some gayass loss graph or my (probably incorrect) math of when itll be "finished" ykwim?
>>106539518
Unlike Comfy, I actually use his dogshit UI, so yes, I'm very familiar with all the retarded things he does, like randomizing the seed after generation as if that makes sense.
But please anon, please tell me why randomizing the seed after the gen is completed is smart.
Explain to me why it's smart that you have to refresh the entire UI to detect a new LoRA.
honestly, the batch seeds being randomized is the real unforgivable sin. I do not want to regen a batch of 8 images and if it's not the first image, good fucking luck knowing what the fuck the other seeds are
>>106539522Alright. That one's good but only insofar as your idea and its ability to execute. Super slopped style.
>>106539539
>please tell me why randomizing the seed after the gen is completed is smart
click queue. click on the last finished gen, load workflow. takes one second
>Explain to me why it's smart you have to refresh the entire UI to detect a new LoRA
press unload models
>>106539539
just create a primitive node, connect it to all your samplers/whatever uses seeds, and set it to fixed and you will literally have the behavior you want
having to refresh the ui for loras is retarded yeah, but i dont think any other ui can see them immediately after adding either, so its still easier to just click "refresh node definitions" to get it updated than having to restart the whole thing
>>106539568
>click queue. click on the last finished gen, load workflow. takes one second
As opposed to just having the seed be the last one generated, which requires zero clicks?
>press unload models
To refresh a dropdown list? Are you serious?
Why are you like this? I mean this really, is this why you can't get anyone to stick around, ComfyAnon, you pick retarded hills to die on? You can't ever admit that something is bad just because you programmed it once without caring in the first place?
I mean this seriously, justify why the seed is randomized after generation, write the actual reasoning.
tdruss i know you lurk here pls pls add sampling to your trainer and ill love you long time
>>106539539
>Explain to me why it's smart you have to refresh the entire UI to detect a new LoRA.
Have you tried pressing the 'r' button on your keyboard?
if it wasn't for rgthree I'd also sperg out
>>106539578
>I mean this seriously, justify why the seed is randomized after generation
just set it to fixed retard
>>106539576
anon, we have something called code. you do realize you can populate a dropdown from a filesystem in microseconds, right?
click -> check if new loras -> populate
literal microseconds
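For what it's worth, the "populate a dropdown from the filesystem" point really is a cheap operation. A rough sketch (folder path and extensions are assumptions, not ComfyUI's actual code):

from pathlib import Path

def list_loras(lora_dir: str = "models/loras") -> list[str]:
    # rescan the lora folder; cheap enough to run every time the dropdown opens
    exts = {".safetensors", ".ckpt", ".pt"}
    root = Path(lora_dir)
    if not root.is_dir():
        return []
    return sorted(str(p.relative_to(root)) for p in root.rglob("*") if p.suffix.lower() in exts)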
>>106539568
>click queue. click on the last finished gen, load workflow. takes one second
But why? It makes no sense. There's literally NO point, other than adding more work, to having a new seed generated AFTER rather than BEFORE you start a new generation.
Give US one single reason instead of trying to claim that it's not much of a workaround to get the original seed.
what a silly thing to be mad at
>>106539596so you don't have a justification, just a workaround which is I should not use the random seed feature at all, do you actually believe this?
>workaroundnigga if you don't like the default setting then fucking change them
comfy is much better as a minimalist UI. just use legacy instead of the jeetcode ui
>>106539507Thanks, how do I get 'bong_tangent' as a scheduler?
>localbros squabbling about interfaces and trivialities >cloudbros spitting out primo kino after primo kino without a care in the world What happened?
>>106539626you have to move to london
>>106539385>>106539272>>106539194This image belongs to Anime Diffusion Thread.
>>106539627see: >>106480817
>>106539626bong tangent transplant
>>106539627
>>106539626
https://github.com/ClownsharkBatwing/RES4LYF
if you've got comfyui manager installed then just search for RES4LYF
>>106539646CHINK'D!
>>106539598i think you should reread my post, i agree its retarded and a valid complaint, but i dont think other uis are checking the lora folder for new loras either
>>106539650
>comfyui manager
yeah don't download that. it calls home and has no security. literally peddles malware
>>106539664lol
>>106539659maybe because the argument that everyone else is dogshit isn't a good argument in the first place and as I remember, Comfy used to brag about how he wasn't dogshit
>>106539645you cheeky cunt
>>106539596
That only means you have to switch between fixed and random constantly, for NO reason other than an absolutely retarded default.
Having you get a new seed when you press generate means NO need to change anything, which is what EVERY other tool does, because it's the OBVIOUS default choice.
This was a stupid idea which comfyanon just can't back down from due to EGO, and it just makes him look even more pathetic.
>>106539664I see a Justin Trudeau and a Furk
>>106539651It's pretty funny in light of someone bragging about cloud being so great. Can't even use their "cloud service" right now.
>My personal opinion is that sampling during training greatly complicates things and adds a large maintenance burden, for little benefit.
>Each model needs to be changed to add inference support. Some models use different training objectives, and require different samplers. Inference has interactions with pipeline parallelism, and block swapping, which complicate the implementation. For video models like Wan, inference can be very slow unless you use something like the lightx2v lora for low step inferencing. Basically inference is its own can of worms, and I would rather just have this repo solely focus on the training side.
>Here is how I think about evaluating models.
>If you have a relatively small dataset (<100 images): Just train it for like 100 epochs with a relatively conservative learning rate, saving every 10 epochs or so. It's a small dataset so this doesn't take that long. Now you have 10 checkpoints you can test using whatever inference program you prefer.
>If you have a large dataset (think thousands of images, or hundreds of videos): You should be using a validation set, and measuring stabilized validation loss. This will give you a very good idea of how your model is learning. Minimum validation loss is not the only thing that matters, but I have found it highly correlated with quality and flexibility of the trained model. Also, the training script supports resuming from checkpoint, so for very long training runs on large datasets, you can still kill the run, test models manually, then resume.
Okay tdruss, I will kneel. I'm still sad that it seems like we'll never get sampling but I understand where you're coming from and at the end of the day it's your repo. Pray that I am able to adapt.
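For anyone wondering what "stabilized validation loss" means in practice, here's a rough sketch of the idea, not diffusion-pipe's actual code (the model call signature and the rectified-flow objective here are assumptions; swap in whatever your trainer actually optimizes): evaluate the training loss on a held-out set with the same fixed noise and timesteps every time, so the number is comparable across checkpoints.

import torch

@torch.no_grad()
def stabilized_validation_loss(model, val_latents, val_conds, seed: int = 0) -> float:
    # same seed every evaluation -> same noise and timesteps -> comparable numbers
    gen = torch.Generator(device=val_latents.device).manual_seed(seed)
    noise = torch.randn(val_latents.shape, generator=gen, device=val_latents.device)
    t = torch.rand(val_latents.shape[0], generator=gen, device=val_latents.device)

    t_ = t.view(-1, 1, 1, 1)
    noisy = (1 - t_) * val_latents + t_ * noise   # rectified-flow style interpolation
    target = noise - val_latents                  # velocity target under that objective

    pred = model(noisy, t, val_conds)             # hypothetical call signature
    return torch.nn.functional.mse_loss(pred, target).item()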
>>106539650Thanks again, i'm gonna make some hot hags now.
>>106539705
It's funny too because he clearly didn't really care, which is why he did it that way in the first place (didn't think). But the second he gets a critique he will fight to the death rather than saying "yeah, that way is better". That's why I want someone to actually justify it, I want to see ComfyAnon write exactly why it's good the way it is.
>
>>106539693If you have a small dataset of less than 100 of anything validation is retarded. Validation only makes sense on extremely large datasets where random members being discarded doesn't actually matter to the final result. But I guarantee you don't have any redundant images/videos in a typical small LoRA dataset.
>>106539383danke
>>106539705>That's why I want someone to actually justify itThat's never going to happen, because everyone knows it's retarded, including comfyanon
>>106539720
>just use someone's random node
or here's an idea, this shouldn't be a dogshit primitive, and if Comfy is serious about professionals using his UI he needs to start caring about this and, more specifically, people's critiques
>>106539507>Happy gooning brotherFUUUUUAAAAAWWWKKK, I dedicate this goon session to you.
>>106539723Retarded question but is 100 images a strict cutoff? For instance, I have a dataset of ~200 but I'm not sure if that's large enough to constitute validation. I've also never used validation before so I'd have to learn proper etiquette. My initial thought is validation is overkill for a regular lora but it sounds like I'm wrong on that.
>>106539637
You say this every thread but we aren't going to go to your pedophile thread honey, it just isn't happening.
how many images should i be training with and does resolution really matter? i've put 4k images next to 512x512 and they still come out fine.
>having a half-hour long sperg fit because you can't take two seconds to load the previous workflow
>>106539785
>I have a dataset of ~200 but I'm not sure if that's large enough to constitute validation
I think validation is a waste of time for normal loras. Just let it train and save every x epochs, then test which one gives you the results you wanna see.
>>106539763>klingWAN is basically on par
>>106539808to be fair, the history tab is shit
>>106539785
validation is overkill for basically everything. if possible you should just enable sampling during training so you can track progress on a caption with a fixed and/or random seed. I think validation in general makes much more sense in LLMs, with next-predicted-token scenarios where rightness is easy to judge, vs images which are highly subjective and basically impossible to grade objectively. an ideal LoRA for an image/video model should be generalizing, meaning if you have realistic images the prompt "anime [person]" should show an anime version of that person.
there is no cutoff for total images or videos, the more the merrier, but you're unlikely to really get a dataset of more than a couple hundred and there are diminishing returns unless you focus on variety in captions and input images/videos
>>106539804Qwen, Wan and Flux don't seem to care about resolution at all.
>>106539522meh
>>106538566
>local regressing, attempting to hack a new vae onto SDXL, mutilating flux, coping with mega-quantized slopped models like Qwen
>SaaS thriving with never before seen quality
What's next for us?
I thought I was going insane. I was OOMing on the fp8, but as it turns out they added another flag for just the t5 encoder. well, there goes 20 mins wasted by not looking into it sooner kek. I'll try the Q8 again
oh and I could have fixed the control_after_generate thing two years ago because I was doing the selector for fixed, inc/decrement and random control. just didn't really know where to put it lol. oh well
>>106539889
>validation is overkill for basically everything
For large dataset training, I think it has a purpose. But for ~1000 images and lower, the validation estimates are practically worthless, just eyeballing samples is 1000% more accurate
>>106539804
It doesn't matter as long as you DON'T have the setting that upscales smaller images to the baseline resolution turned on. If you're using Kohya, it'll just train the smaller images in their own lower buckets even if they're below the specified baseline res of 1024x1024 or whatever.
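As a rough illustration of that bucketing behavior (not kohya's actual code; the bucket list here is just an example), smaller images simply land in the nearest bucket at or below their own size instead of being upscaled to the baseline:

BUCKETS = [(1024, 1024), (832, 1216), (1216, 832), (768, 768), (512, 512)]  # example set

def pick_bucket(width: int, height: int, allow_upscale: bool = False) -> tuple[int, int]:
    # keep only buckets the image can fill without upscaling, unless upscaling is allowed
    candidates = BUCKETS if allow_upscale else [
        (bw, bh) for bw, bh in BUCKETS if bw <= width and bh <= height
    ]
    if not candidates:  # image smaller than every bucket
        candidates = [min(BUCKETS, key=lambda b: b[0] * b[1])]
    aspect = width / height
    return min(candidates, key=lambda b: abs(b[0] / b[1] - aspect))

# pick_bucket(512, 512)   -> (512, 512), trained at its own lower bucket
# pick_bucket(4000, 3000) -> (1216, 832), downscaled into a landscape bucket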
>>106539816>>106539881Any tips on training a single lora on multiple artists with the goal of it being a mesh/combination of said artists? Or is it better to simply train individual loras for each and then combine them during inference? I suspect that I just need to ensure they are equally represented in the dataset, but I'm unsure if I should tag the artists names or leave them out. Or if it's even possible in a single lora. Ty for answering my questions anons.
>>106539608
>>106539919
>oh and I could have fixed the control_after_generate thing two years ago
not a hater but what was the purpose of mentioning this
>>106539929
It's much more effective to stick to single concepts (e.g. one artist) per LoRA and then blend them during LoRA loading, otherwise the model can get confused about the learning objective and you end up either with a bad LoRA or wasting way more steps because of the signal:noise ratio.
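A hedged sketch of the "train separately, blend at load time" approach using diffusers' multi-adapter API (repo id, paths, and weights are placeholders; check that your diffusers version supports load_lora_weights with adapter_name and set_adapters with adapter_weights):

import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("some/base-model", torch_dtype=torch.bfloat16).to("cuda")

# one single-concept LoRA per artist, each trained independently
pipe.load_lora_weights("loras/artist_a.safetensors", adapter_name="artist_a")
pipe.load_lora_weights("loras/artist_b.safetensors", adapter_name="artist_b")

# mix the styles by weight at inference instead of baking them into one confused LoRA
pipe.set_adapters(["artist_a", "artist_b"], adapter_weights=[0.7, 0.4])

image = pipe("1girl, cityscape at dusk", num_inference_steps=30).images[0]
image.save("blend.png")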
>>106539970butiful localslop
>>106539889good, now ask krea to make her half naked with the same pose
>>106539970>15 minutes
>>106539678>and a Furklmao
How much longer until the pay per gen fags get bored and fuck off to /de3/
>>106539961just found it to be a funny coincidence it's something I could have done something about. I have a PR lingering for incremental seeds on batches if anon needs that too. comfy never added it because it wasn't using the latent tuple or something. idk, it was a long time ago
>>106539977
Not that guy, but what if I want to train a character lora consisting of just two characters, but they are inseparable recolors of each other. Can I use "left is X, right is Y" in the captions?
>>106539808
>half-hour long sperg
It's been like that for weeks, every fucking day. This is pure mental illness.
>>106539929
Training them individually will likely give you better overall separation, since at least you won't have the 'similar concepts' training bleed as they are trained separately.
Artist names as 'triggers' aren't strictly necessary since you can use lora strength for that purpose, but it typically helps force a style better when you use it in a prompt, so I would suggest having them.
>>106540009Only if you're using a non-retarded model. Flux is smart enough for that, and definitely Qwen is.
>>106539842No, kling is better and has more fps, but wan isn't far behind. And it can do actual nsfw.
when will we completely and finally move past 832x1216 SDXL slop
>>106539977>>106540017Bless you kind anons.
>>106540009>>/g/adt/
>>106539929
>Any tips on training a single lora on multiple artists with the goal of it being a mesh/combination of said artists?
Combining datasets into a single lora can work if the art styles are somewhat similar, I've had some success doing this. It's a fun experiment. I think you wanna use a high batch size for this so it's forced to generalize more per step
>>106540005maybe you're just shit at coding
>>106539970good news! it's a suppository
>>106540054not nearly as bad as catjak
>>106540037never. embrace SaaS
>>106540051
>high batch size for this so it's forced to generalize more per step
Is that what it does? So for styles you want higher and for characters keep it 1?
>>106540049go there and stay there and stop spamming your shitty thread here you nonce
any advancements with tts or is vibevoice still sota
>>106540066
Batch size and gradient accumulation (fake batch size) basically sum up the gradients from every sample in the batch and average them into a single weight update.
>>106540066
>Is that what it does? So for styles you want higher and for characters keep it 1?
batch 2 minimum, and keep the concept datasets an even number
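Mechanically, gradient accumulation is just this (a minimal PyTorch-style sketch, not any specific trainer's code): sum scaled gradients over N micro-batches, then apply one optimizer step, which approximates one larger averaged batch.

def accumulation_step(model, optimizer, micro_batches, loss_fn) -> None:
    optimizer.zero_grad()
    n = len(micro_batches)
    for inputs, targets in micro_batches:
        loss = loss_fn(model(inputs), targets) / n  # scale so the summed grads average out
        loss.backward()                             # gradients accumulate in param.grad
    optimizer.step()                                # one weight update for the whole "batch"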
>>106539889yeah because it's so much more expensive paying for Saas than owning a system capable of running local models.
>>106540037When anons upgrade their hardware. You have to remember, most sloppers don't even have 16GB, and some don't even have Nvidia
There are people in this thread who don't own at least 4 90s series cards?
>>106540067/adt/ is for anime posting
>>106540105/ldg/ is for local gens regardless of style. fuck off retard
>>106540099does 3dfx Voodoo count as 90s series? I have 2
>>106539978
>>106540105ADT is for you to fuck off and stop annoying everyone faggot, anime website
>>106539661What data does it send?
>>106540116you don't like anime, but its understandable that you want to hoard and centralize everything, greedy as you are
>>106539661least creative fearmonger
>>106540130
you dont like anime and you made fun of anime before the splitting, but the fact that you want to disguise yourselves as anime friendly now..
>>106539720
>custom seed node
>still not on par with a1111's seed generation
embarrassing
When ready >>106540158>>106540158>>106540158
>>106540116just ignore the shitter he does it every thread never changing his tactics
>>106540105
ok i'm gonna make /ggg/, good gen general
i will spam this thread and /adt/ when i see any good gens, and say to post them in my general instead
>>106540313don't forget the "gm"
>>106539203
can i gen this with a 12GB 3060
god i need some matures
please prompt
>>106540773
nm found link thank you
>>106539507