Discussion of Free and Open Source Diffusion Models
Prev: >>108076014
https://rentry.org/ldg-lazy-getting-started-guide
>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP
>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows
>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/ostris/ai-toolkit
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/musubi-tuner
https://github.com/tdrussell/diffusion-pipe
>Z
https://huggingface.co/Tongyi-MAI/Z-Image
https://huggingface.co/Tongyi-MAI/Z-Image-Turbo
>Anima
https://huggingface.co/circlestone-labs/Anima
>Klein
https://huggingface.co/collections/black-forest-labs/flux2
>LTX-2
https://huggingface.co/Lightricks/LTX-2
>Wan
https://github.com/Wan-Video/Wan2.2
>Chroma
https://huggingface.co/lodestones/Chroma1-Base
https://rentry.org/mvu52t46
>Illustrious
https://rentry.org/comfyui_guide_1girl
https://tagexplorer.github.io/
>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage
>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/r/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg
>Local Text
>>>/g/lmg
>Maintain Thread Quality
https://rentry.org/debo
https://rentry.org/animanon
>>108078691
>same garbage collage
at least it's a fagollage I guess
where the fuck are the 1girls faggot
>>108078782
>>108078788
It's "@pigeon666, masterpiece, realistic" and the description of everything you see in the picture there. Now what?
>noo I need the entire prompt
Well I'm not giving it to you, that's my OC, donut steal. I'm not donating my OC to your dirty grabby hands. This is the bit that's relevant to the style.
>>108078817
was meant to be like this
>>108078817
good gen but
>no slight pantyshot
sad
>>108078817
See how soulless that style is compared to the awesome style of the old models I use
>>108078813
>no qwen boilerplate
>highly simplistic prompt
Duh it "looks good" on whatever slopmix. With raw finetunes such as Anima you need to be autistic with your prompt. I'm assuming you weren't one of the people who used regular Noob, and instead preferred some downstream mix. It's fine to prefer slopmixes because they are easier to handle, just don't whine about not getting good outputs from raw finetunes when you only use very simple prompts. You will call this cope, but the real cope is needing to wait for someone to mix a bunch of models together so that "1girl, standing" doesn't look like ass.
>>108078863
without metadata it's pretty pointless to continue honestly
>forced tags
>@
>"you are a helpful..."
tripleslop
>>108078856
yo who is this?
>>108078863 (cont)
The entire idea of mixes is a tradeoff between ease of use and elasticity/expression; this is not a controversial concept. It is simply a fact.
>>108078882
Combined with the fact that they aren't even the same resolution. But I'm guessing it's more stubbornness than purposeful trolling. I saw the same thing when anons would refuse to acknowledge that Illust was better than Pony.
>>108078845
how can u like this illuslop
u have shit taste
kill urself
>>108078897
jay effkay
>>108078863
>no qwen boilerplate
NTA, but do people really? It's making no noticeable difference in my testing so far, other than feeling stupid.
okaeri!!!!!!!!
yuribros!!
>>108078913
>>>/g/dalle
>>108078845
This is more of a GPU RNG vs CPU RNG one; GPU RNG is superior at expression.
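(For reference, the GPU-vs-CPU point is just about where the initial noise is sampled: the same seed gives different latents depending on which device generates them, so the two gens can't match. A minimal torch sketch; the latent shape below is a placeholder, not taken from the thread.)
```python
# Minimal sketch: the same seed yields different noise on CPU vs CUDA,
# so "same seed" gens from the two noise sources won't match.
import torch

seed = 42
shape = (1, 4, 128, 128)  # placeholder latent shape, not from the thread

cpu_noise = torch.randn(shape, generator=torch.Generator("cpu").manual_seed(seed))
if torch.cuda.is_available():
    gen = torch.Generator("cuda").manual_seed(seed)
    gpu_noise = torch.randn(shape, device="cuda", generator=gen)
    print(torch.allclose(cpu_noise, gpu_noise.cpu()))  # expected: False
```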
today it's frieren friday
>>108078934
The more complicated the prompt, the more the effect is noticed.
>>108078920
ready for the stark date
>>108078955
Is that still the case for tag-style prompts?
>>108078955
Yeah. I don't use any NLP with Anima anyway. But my prompts are still paragraphs long.
>>108078863
>simple prompts
I want the style of the artist, dumbass. I don't want that
>digital painting, highly detailed, cinematic lighting, sharp focus, concept art, trending on artstation, award winning, unreal engine 5, deviantart, octorender, 8k, 4k, 16k, alphonse mucha, ilya kuvshinov, artgerm, greg rutkowski, magic the gathering art, d & d character
prompt that you think is the peak of style, I'm just not interested.
>>108078905
If this is slop, lock me up in a pigsty
>>108078898
>he thinks style changes based on resolution
You have literally never used a booru model, have you
>>108079005
illuslop is 100% recognizable because all illu gens have the same style/shading
its uncanny
SLOP
>>108079005
The idea of longer prompts extends to the overall ability and adherence of a model. Again, it is not new information that raw finetunes necessitate autistically tagged, verbose prompts. If you do not wish to take advantage of the elasticity provided by non-mixed models, then that is your prerogative. The situation you find yourself in, preferring old mixes over newer, better finetunes, is neither new nor unique - given enough time, unless a better model drops first, I'm sure someone will release a mix you will be happy with. That's just how model timelines go.
>You have literally never used a booru model have you
It's just one more thing to knock you on; it's not a true 1:1 comparison. I won't dog you for preferring slopmix styles, but you shouldn't be surprised when very simple prompts don't "look good" on non-mixed models. Again, slopmixes are unironically predicated on being easier to use at the expense of a stronger "default" (read: slop) style.
>108079093
nice llm reply shill
>>108079116
It is a little humorous considering this same kind of conversation took place during the early days of Illust, but unfortunately you are incorrect.
>>108079005
>>108078944
>dongload
>>108078928
Plastic ears
https://github.com/sdbds/ACE-Step-1.5-for-windows/tree/qinglong?tab=readme-ov-file#-installation
acestep 1.5 with cover functionality, get the portable zip, like comfyui portable.
I don't think cutting out artist styles is a good idea, but I don't get why people care about a model having too much of a specific look. Every single model does that, even SaaS slop like NBP that probably trained on everything. I think the future is going to be a bunch of artists training their own individual finetunes to be nothing but their own style rather than there being a single "do everything" model. At least once the current generation of anti-AI luddites dies off.
>>108079264
>native Comfy nodes are still crap
>kana112233/ComfyUI-kaola-ace-step doesn't work
Might as well try this.
>>108079276
>I don't get why people care about a model having too much of a specific look
it makes the model less interesting. predictable.
>>108079300
But all AI models do this like I said. Complaining about it seems a little pointless and it's probably going to become the norm in the future because copying someone else's style is even less interesting.
>>108079276
Okay, catjak.
uh oh meltie incoming
>>108079295
try it, there is a shitload more functionality than the comfy workflow. also not that large (zip)
>>108079351
It isn't, but it's downloading all the safetensors again. Fucking hell.
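(If, and this is an assumption not stated in the thread, the portable build fetches its weights through huggingface_hub, pointing the cache at an existing download location before launch can avoid pulling every safetensor a second time. The path below is a placeholder.)
```python
# Assumption: weights are fetched via huggingface_hub. HF_HOME controls where
# the hub caches downloads; set it before the app imports huggingface_hub.
import os
os.environ["HF_HOME"] = r"D:\hf_cache"  # placeholder: an existing cache dir
```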
>>108079361
it's worth it for the cover/repaint options, pretty funny even though I'm still figuring it out.
>>108079320
some have less of a default style than others, i dont get why youd want more of a specific default look even if you assume its inevitable, which i disagree with
>antichroma schizo again
>>108079005
how do these models handle the older booru artists (actually a lot of them) who never published any work at even 1 megapixel? I.e. the native resolution of their entire body of work is below that.
>>108079320
Emulating and combining multiple artists' styles will ALWAYS be more interesting than relying on a model's default look kek
>>108079371
More interested in picrel there, but I hate Gradio and don't have the patience to prepare an actual audio dataset.
>pulled
>ModuleNotFoundError: No module named 'comfy_aimdo'
>>108079399
update the python deps retard
>>108079399
>he didnt update requirements.txt
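(For anyone hitting the same error: a pull brought in code whose dependency isn't installed yet, and re-running the requirements install is the usual fix, as both replies say. A rough sketch of that routine; the ComfyUI path is a placeholder.)
```python
# Sketch of the usual fix after a git pull breaks imports: pull, then reinstall
# ComfyUI's pinned requirements with the same interpreter that runs it.
import subprocess
import sys
from pathlib import Path

comfy = Path("ComfyUI")  # placeholder: wherever your checkout lives
subprocess.run(["git", "-C", str(comfy), "pull"], check=True)
subprocess.run(
    [sys.executable, "-m", "pip", "install", "-r", str(comfy / "requirements.txt")],
    check=True,
)
```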
>>108079379
It's less about wanting it to happen and more about it being inevitable. If even Google can't stop their model from being easily detectable after its gens get spammed billions of times, I don't get why people expect some random finetuner to figure it out.
>>108079390
That's what model merging is for. Combining artist styles from just prompts is a band-aid for our current era, where it's still really difficult to train full models without being rich and hoarding terabytes of data.
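(To make the "model merging" reference concrete: in this context it usually just means a weighted blend of two checkpoints' tensors. A minimal sketch, assuming both files share the same architecture and key names; the paths and the 0.5 ratio are placeholders. Real merge tools typically layer per-block ratios and dtype handling on top, but the core operation is this.)
```python
# Minimal sketch of a linear checkpoint merge: merged = (1 - r) * A + r * B.
# Assumes both checkpoints have identical architectures/key sets.
from safetensors.torch import load_file, save_file

a = load_file("style_a.safetensors")  # placeholder paths
b = load_file("style_b.safetensors")
r = 0.5  # blend ratio, arbitrary here

merged = {k: a[k].float() * (1 - r) + b[k].float() * r for k in a if k in b}
save_file(merged, "merged.safetensors")
```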
>>108079374
I would fund this
>>108079393
>Setting constrained decoding max_duration to 240s based on GPU config (tier: tier5)
Well shit. In Comfy I tried to do 480s on 16GB and OOM'd; if my limit is this low, it isn't happy news.
>>108079414
Google actually has an incentive to make their outputs homogenized, so if they are brought to court they can easily say "the average person would know that's a gen because all our gens look the same".
>>108079414
>That's what model merging is for.
No, models like NoobAI or Anima are adept at combining artists' styles BECAUSE they do not suffer from the same default look as something like WAI. I'm having trouble understanding what you're trying to say...
>>108079371
also might try this:
https://www.reddit.com/r/StableDiffusion/comments/1qxs5qv/acestep_15_full_feature_support_for_comfyui_edit/