Discussion of Free and Open Source Diffusion Models

Prev: >>108033820

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/ostris/ai-toolkit
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/musubi-tuner
https://github.com/tdrussell/diffusion-pipe

>Z
https://huggingface.co/Tongyi-MAI/Z-Image
https://huggingface.co/Tongyi-MAI/Z-Image-Turbo

>Anima
https://huggingface.co/circlestone-labs/Anima

>Klein
https://huggingface.co/collections/black-forest-labs/flux2

>LTX-2
https://huggingface.co/Lightricks/LTX-2

>Wan
https://github.com/Wan-Video/Wan2.2

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
https://rentry.org/mvu52t46

>Illustrious
https://rentry.org/comfyui_guide_1girl
https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/r/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
https://rentry.org/animanon
https://litter.catbox.moe/epy5sxdo84fxt3eo.png
Blessed thread of frenship
blessed thread of frenship
https://litter.catbox.moe/f0f2d7bivwmbhqqj.png
I claim this thread for NUCLEAR WARFARE
>>108035720
>>Maintain Thread Quality
>https://rentry.org/debo
>https://rentry.org/animanon
you never learn schizo. you never learn

>>108035751
kill ani

>>108035747
why is the white man so carbrained?

https://litter.catbox.moe/4k1w55omnsbjblex.png
>>108035758
the masculine urge to go vroom vroom is stronger in them i guess

what nationality is tdrussell i need to know

>>108035758
never seen tokyo drift?

>>108035751
>>Maintain Thread Quality
I wish people would do this more often and not sperg out. like why can't /ldg/ be comfy?

>>108035764
thanks fren
https://litter.catbox.moe/n4g8qhtkxzvdkugq.png

>>108035775
https://litter.catbox.moe/1gzki70nfk94qdh1.png

>>108035796
i assume generic white guy prolly given the name. Definitely from US or Canada, doesn't communicate in English like a European.

>>108035796
>burgers finally wonned
nice
https://litter.catbox.moe/aspirdm5a3dgv3fw.png
https://litter.catbox.moe/o8931ydsy7e5ayvb.png
https://litter.catbox.moe/ffg7yod9pm0qujzq.png
>>108035796
just like astralite heart. not a good look

https://litter.catbox.moe/3dno9bvdahb5tfz8.png

https://litter.catbox.moe/4dxzcollbl9i0ejw.png

>>108035704
Hell yeah.

litter went down just now, awesome :)
https://files.catbox.moe/mypsso.png

how do I unslop ltx2?

>>108035882
we gave up on it because they trained with suno and udio gens

>>108035827
>noooo he has to be Chinese and inflict dogshit machine translated bullshit captions and stupid cultural biases upon all users of the model

>>108035877
Use a base image from another model and use an I2V workflow
https://files.catbox.moe/niyz6j.png

>>108035886
I don't think you understand. I don't care if it's a suno/udio distill.
LOCALOCAL

>>108035900
garbage in, garbage out

>>108035886
((((((((we))))))))
https://files.catbox.moe/ox11yr.png
>>108035882
he means 2nd

>>108035846
nice

>>108035887
>it's my first time
>pay me

https://files.catbox.moe/e86bay.png

>>108035751
There's a thread without it; perhaps /sdg/ is more your speed?

>>108035915
like I said in previous thread this makes no sense unless you're admitting to being a corporate SAASjeet inference service yourself

>>108035911
kino

>>108035882
>two more days

>>108035914
thanks fren :)
https://files.catbox.moe/poms68.png

>>108035934
yeah its kinda weird honestly. is there like some kind of SaaS company focusing on anime models that has ties to 4chan? like started here or something?

>>108035934
>fucked nlp encoder
>nooooo you are a jeet!!!

>>108035936
kurosawa is the GOAT
https://files.catbox.moe/jj3bjh.png

Lol https://civitai.com/models/2354972/nayelina-z-anime
Here come the snake oils, here comes the early access to failed models, here comes the imbalance of Z model due to the fried dataset from slopers, here comes the overfitting on cowboy shots, here comes the shiny skin and hair.
Another garbage checkpoint from someone who doesn't understand dataset curation or training. let's keep pumping out half baked models, all hype and FOMO
https://files.catbox.moe/emhn7s.png
>>108035758
Non-whites wouldn't understand.

>>108035895
that's exactly what I did but the rest of the video gets slopped

https://files.catbox.moe/n5zfhg.png

>>108035494
Gay.

>>108035906
Yet men are born of women.

>>108035957
>lora trained on 1000 images (rofl) on a single 5090
>don't worry guys the final version will be a million images (still rofl)
>putting "Z-Anime" in the model name
>posting it everywhere and shilling
>there's a fucking website for the model
>you can sign up and make an account
>very obviously positioning it for some kind of API / SaaS thing
now THIS is what a grift looks like

>>108035974
interesting. I'm not nearly as familiar with video genning, so I can't help out that much
https://files.catbox.moe/4h6qr4.png

>>108035992
if you're not griftmaxxing in 2026 you're ngmi
https://files.catbox.moe/q97i10.png

>>108035998
in my testing wan is still superior in quality. I hope ltx2 can catch up

Is the Comfy server borked for anyone else or did the jews finally remote-delete my system32 folder

>>108036003
I've heard that using new workflows and files can help, since ltx-2's initial release and comfyui workflow were misconfigured
https://files.catbox.moe/tqdvqy.png

>>108035957
Shit image demo. Do these people even generate anime once in their life?

>>108036008
Works On My Machine(TM)
>>108036003
Try using this one:
https://files.catbox.moe/7r8isi.mp4

Klein 4B Distilled versus Klein 9B Distilled on in-place upscaling the other anon's Bateman pic from 544x544 -> 1024x1024
```
Significantly improve the overall quality and detail level of photograph image 1 while keeping the original composition and layout and color palette and lighting and visual aesthetic and cross-eyed red eyes facial likeness exactly the same as it is.
```

>>108036036
what the fuck do you mean by "fucked NLP encoder"

I'm late, but is this legit? The FP32 model for base was leaked and it's actually 24GB
https://huggingface.co/Hellrunner/z_image_fp32/tree/main
https://www.reddit.com/r/comfyui/comments/1qt88kg/z_image_base_teacher_model_fp32_leaked/
https://files.catbox.moe/fhj26m.png
>>108035957
we know dude

>>108036045
it's fucked like the MLP ponyfucker's encoder

>>108036002
>>108036016
>>108036043
what's your prompting method? wildcards to an llm separately, then copy+paste?

>>108036052
???? literally nobody knows what the fuck you mean, it uses Qwen 3 0.6B and has good adherence to tags and captions equally, explain or GTFO lol

>leaked

>>108036041
isn't z image base fp16 already a teacher model? what am I missing here?

Don't get the complaints about models sucking. Just fine tune things into one you like. Or combine models in Easydiffusion or what have you. Like for my example: I like dark skinned girls but default models make them too light for my tastes so I refined it myself.

>>108035957
This faggot must have the entire dataset tagged in 2023 WD tagger eva large or Slop Caption which omits all the details.

>>108036063
they're saying Tongyi-MAI released bf16 version by mistake, which is why it was 12gb.
if you check the repo, you can see it's 24gb now.
https://huggingface.co/Tongyi-MAI/Z-Image-Turbo/tree/main/transformer

>>108035886
>>108035882
I don't agree with any of these guys, it's at least v4.5 level, but Alibaba needs to hurry up and cook so these guys can shut up about it not being Suno X tier.

>>108036066
tdrustled here, I did the same

>>108036080
oh wait disregard that. I was looking at the turbo repo. I guess it's still the same for the z-image repo.

>>108036064
Did you refine an entire base model instead of changing tags in the prompt? This is another level of skill issue, dude.

>>108036066
it's really not interesting, it's just some generic CivitAIer unnecessarily running a 1000 image lora sized dataset as a full finetune

>>108036034
kinda prefer 4B? unsure.
Try this
>clean digital file, remove blur and noise, histogram equalization, unsharp mask, color grade, white balance correction, micro-contrast, lens distortion correction.

>>108036052
I ask chatgpt to generate prompt ideas for me; I've a custom GPT for generating image prompts; i either upload a reference pic or go "okay here's my idea, write a prompt for it in this style" since right now the meta is writing multi-paragraph prompts in natural language, the "war and peace" approach

>>108036132
Eyes reminded me of Rodney Dangerfield for whatever reason
z-image optimization when
>>108036125
500k images should be the absolute minimum for finetunes. I have person loras that are 2k images. There is not a chance in hell it has enough anime knowledge from 1k images. A single popular artist has more images than that. Waste of time.

>>108036152
Stop using a 1070 to gen.

>>108036158
what else

>>108036152
>>108036158
how long do you dudes take to generate an image?
I use a 4090 and for like 50 steps deis it takes 1 minute and 14 seconds

>>108036173
5x

how do I prompt these tits?
>>>/gif/30197486

now testing anima. it knows pepe, neta doesn't. some anatomy problems with hands, it really likes adding text and multiple views, but I haven't worked on the negs yet. reminder you can use
>%UNETLoader.unet_name%
as your save file path

>>108036173
I use a 5090 and it takes about a minute to generate a 1216x832 pic, 2m10s to gen a 1920x1080 pic
https://files.catbox.moe/qx6weh.png

>>108035877
so far, the only thing that works for me is making a wan video and using it as guidance frames for ltx. If the subject/s are too distant, it will get sloppy and start producing ltx face. Same if the subject/s move too much, twirling and dancing. Close-up portraits are better. The IC detailer is kinda shit but the static camera lora helps. Either way, loras make a big difference in quality. Experiment with enabling and disabling them, some loras do fine at 0.3 but will kill the output at 0.8. More frames per second, higher resolution helps. If you have a 5090 or better, you can skip the upscaler and just gen at higher resolutions. Using more steps is hit or miss but it can sometimes help. I haven't experimented much but lcm at cfg 1.0 works OK. Though I'm probably going to attempt higher cfg with negatives to see if that improves things. LTX2 has a lot of problems but if you have the patience and time to tweak it for a particular i2v, it's usable

>>108036226
ah interesting, thanks. what system prompt do you use on that agent?

>>108036173
not who you replied to, but I'll jump in. 50 steps on my 4070 ti super is roughly xxminxxsec. I'm using Res MultiStep too.

>>108036248
"You specialize in crafting high-quality prompts for image generation tools. You take user-provided descriptions or reference images and transform them into structured, detailed prompts optimized for AI image generators. Include information about lighting. Include information about camera angle. Do NOT mention the image's resolution. Include information on the image's composition style, such as leading lines, rule of thirds, or symmetry. Specify the depth of field and whether the background is in focus or blurred. If applicable, mention the likely use of artificial or natural lighting sources. Do NOT use any ambiguous language. Include whether the image is sfw, suggestive, or nsfw. If it is a work of art, do not include the artist's name or the title of the work. Do NOT use polite euphemisms—lean into blunt, casual phrasing. Include information about the ages of any people/characters when applicable. Mention whether the image depicts an extreme close-up, close-up, medium close-up, medium shot, cowboy shot, medium wide shot, wide shot, or extreme wide shot. Explicitly specify the vantage height (eye-level, low-angle worm's-eye, bird's-eye, drone, rooftop, etc.). Your response will be used by a text-to-image model, so avoid useless meta phrases like "This image shows…", "You are looking at...", etc."
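[editor's note] The custom-GPT setup described above can be reproduced with any chat-completion API. A minimal sketch, assuming the OpenAI Python client; the model name is a placeholder and the system prompt is the one quoted in the post (truncated here):

```python
# Sketch of driving the prompt-writer agent described above.
# Assumptions: `pip install openai`, OPENAI_API_KEY set, model name is a
# placeholder. SYSTEM_PROMPT is the (truncated) prompt quoted in the post.

SYSTEM_PROMPT = (
    "You specialize in crafting high-quality prompts for image generation tools. "
    "You take user-provided descriptions or reference images and transform them "
    "into structured, detailed prompts optimized for AI image generators. ..."
)

def build_messages(idea: str, style: str) -> list[dict]:
    """Assemble the chat payload: fixed system prompt + the user's idea."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user",
         "content": f"okay here's my idea, write a prompt for it in this style "
                    f"({style}):\n\n{idea}"},
    ]

def expand_prompt(idea: str, style: str = "multi-paragraph natural language") -> str:
    # Network call; only runs if you actually invoke it.
    from openai import OpenAI
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=build_messages(idea, style),
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    msgs = build_messages("1girl reading under a streetlamp at night",
                          "multi-paragraph natural language")
    print(msgs[0]["role"], "->", msgs[1]["role"])
```

The expanded prompt then gets pasted into the positive-prompt node as usual; the "war and peace" style mentioned above is just what the system prompt steers the model toward.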
>>108036248
forgot to fill out my times :(
1 min 30 seconds. I stick to 30 steps mostly.
That new Z model seems better
>>108036256
She looks BURNED.
I'm WHITE.

>>108036220
how do you gen long video with wan + ltx2? wouldn't wan make weird motion between 5 sec clips?

>>108036220
this looks very comfortable

>>108036270
svi lets you string four 5 second vids together. There's a workflow somewhere on civitai

>>108036287
then why do you need ltx2?

>make a bunch of wan2.2 gens
>looks weird
>didn't realise I had one of the low model loras set to half strength
FUCKING TWO MODEL HIGH-LOW BULLSHIT

>>108036256
better at anatomy, with much increased gen times as a cost. Use it if you need it for professional productions and don't have the time/patience for post-production

>>108036302
to add audio and interpolate

wow so original

>>108036315
I see. wan can't lip sync

What's everyone's params for z-image base?

>>108036173
My current workflow takes 15 minutes with a 3090 and 64gb of ram. I usually just queue up a bunch of prompts overnight

>>108036327
My catbox links have the img with my workflow attached; basically it's a basic zib + hi-res fix pass wf

>>108036365
>15 minutes
wtf are you talking about

>>108036311
this nigger got triggered lmao

>>108036374
It's not t2i, it's i2i with a lot of input images using slow stuff like multistep samplers, clownshark guides, and multiple conditioning

>>108036383
output example?

>>108036383
show us, get yer nodes out for the lads

>>108036365
you need 15 minutes for 1 image with a 3090?

SongMuLA claims to be genning. We'll see if it successfully vae's though.

>>108036173
I'm using that newer model and it takes me 5 min 47.3 sec for 100 steps with a high res pass at 2x

>>108036401
You should only be comparing iterations / second (assuming no speedups like torch):
>model & version
>changed prompt?
>cfg/nag
>sampler
>obviously the image size
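[editor's note] The timing reports in this thread convert to iterations/second like so; a trivial helper, and the comparison is only meaningful when the caveats in the post above (model, prompt, cfg, sampler, size) are held equal:

```python
# Convert "N steps in M min S sec" reports into iterations/second so gens
# on different cards can be compared (all the caveats above still apply).

def it_per_sec(steps: int, minutes: int, seconds: float) -> float:
    total = minutes * 60 + seconds
    return steps / total

# e.g. the 4090 report above: 50 steps in 1 min 14 s
r4090 = it_per_sec(50, 1, 14)
print(round(r4090, 2))  # prints 0.68
```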
>>108036383
buddy these gens better be kino and nectar from heaven with that gen time

>>108036436
Flux.2 is big, could be that.

>>108036206
its funny that this pic is lewd to pedophiles because of the black and white / noir elements, but otherwise it would be completely innocuous to see the exact same thing IRL since it would be in color

>>108036381
seek meds and bee bee sea treatment at once!
>>108036391
neat

>>108036427
deep down you know it's 1girl

>>108036441
I have used unquantized flux2 with a 3090. it does not take that long

RACISM IS GENERATING
*cackles*
oh yes racism is soon

>>108036391
>>108036394
>>108036401
>>108036427
The gens are nothing special, they just use many conditioning images from my custom 3D renderer. I'm sorry, I'm not gonna share images. They need to be perfect and match each other because they are used as keyframes to an animation workflow

>>108036440
yeah i guess so. never seen it as possibly being lewd, but now that you mentioned it does seem a tad intimate, sensual

>>108036448
You didn't, go away retard.
>>108036184
>>108036454
bait

>>108036441
But they smell and are gay.

>>108036462
k moron

curse vishnu

>>108036152
The new model only takes 5 minutes on a 2 pass and adetailer run

Prompt executed in 349.40 seconds

>only 5 minutes
is ran for real? 5090 btw

what gen time can I expect for z image base if I get a 5070ti?

>>108036460
>never seen it as possibly being lewd, but now that you mentioned it does seem a tad intimate, sensual
it has similar vibes to something like Therese Dreaming by Balthus, juxtaposing an otherwise innocent situation with an eroticized perspective.
while looking for an appropriate image I found that Maya Hawke (from Stranger Things) recreated it for a music video or something, which is cool since I was going to probably make something similar with Qwen Image Edit or something

ok what are anons using to prevent random text and multiple views?

>>108036543
It may be my porn-fried brain speaking, but it is just a girl sitting to me
reposting this one since I borked my ComfyUI and now I need to redownload the models fml
https://litter.catbox.moe/bmu4wcoi474tfucx.png

>>108036559
Using a better model.

>>108036125
100%. it's like getting worked up about a bad tweet; who cares. anyone can put some shit on the internet. it isn't more official because it has a huggingface repo and a card. some misguided kid wants a career in AI shit. it's fine.
>>108036155
bullshit. i have a 200k booru dataset that covers the top 5000 characters, and every tag and style otherwise, quite generously. you don't need a huge dataset. when you're feeding back weight adjustments over 200k samples, how much uniqueness do you think you need? it already has the general knowledge. come on
re: the pic - it's borked but i like it. the signature is so good

>>108036598
>bullshit. i have a 200k booru dataset
Absolutely nothing.
Illustrious v2.0 was trained on 20M images. If you can't do even a fraction of that then your finetune is a waste of electricity.

>>108036598
You are better off turning them into individual loras.

>>108036612
is that a crash bandicoot level?

>gemma 3 12b abliterated for ltx2
snake oil?

>>108036220
>More frames per second, higher resolution helps. If you have a 5090 or better, you can skip the upscaler and just gen at higher resolutions.
It's really slow then, isn't it?

>>108036639
>>108036650
You guys have never done any serious finetuning. I don't care how many images Illustrious was trained on. The results of my training have only been positive. These anime models output cooked shit anyway. Fuck off
>>108036692
it's not unreasonably slow. Still faster than wan

>>108036707
neither did tdrustled

>>108036711
I'll have to try then, I thought the spatial upscaling was mandatory for the model.

>>108036735
if you wanna go crazy, you know how the initial sampling is at 50% the size of your final resolution? Don't downscale, so that the longest side, for example, ends up being 2048 lol. The quality is certainly better but the gen takes 5+ minutes - it works on my 5090

>>108036631

>>108036707
>These anime models output cooked shit anyway.
Then why are you even doing it? Either commit fully or fuck off.

>>108036658
it's a FAIL t2i of a Quake hallway. The real fail is trying to then wrangle latent upscale to work without shitting it up
>>108036751
>>108036766
neat

>>108036755
Can you share your wf anon? I'll try that tonight on my 5090.
5min is nothing, I've been genning the first video models for 2h using a 3090.

>>108036755
>2048
you got oom?

It's ironic that the same people that complain about "bloated" models also complain about "overcooked" models

>>108036802
It's almost as if the faggot is priced out of using modern models and wants to take it out on others
https://litter.catbox.moe/62n8kcn6pmgwvsti.png
sorry went out for a bit
>>108036829
woops meant to reply this to:
>>108036132

>>108036734
ching chong ping pong
>WE SPEND MOST GPUYOUOWA TO FINETUNE MODUR

I need 128 gb ddr4 under $250

>>108036559
its kind of the wild west right now. Set up a NAG node in codex or CC, it helps a lot with negative enforcement.

>>108036540
ackk

This workflow is fucking crazy. I did not know LTX could look this good.
https://litter.catbox.moe/bqiinka0sn8h7ybf.json
https://litter.catbox.moe/m2ryke6uybhfnqpd.mp4
https://litter.catbox.moe/4v6dubcoq7ygzc27.mp4
https://litter.catbox.moe/dclzhivv12hjlshg.mp4
https://litter.catbox.moe/gb8lhopsbaek4mr4.mp4

>>108036440
nobody was talking about this shit until you brought it up. i think it is you who is le pedophile good saar

>>108036540
I dunno but about 3 minutes on a 5060 ti using comfy template and 50 steps. comfy has it set to 25 though

>>108036155
right nobody was unironically saying it was good

>>108036859
I was thinking the same thing. what a bizarre takeaway from that image

>>108036688
"muh censored text encoder" in the context of any diffusion model is always nonsense. technically speaking yes, it's not a thing at all

>>108036854
>girl in shower "you like these tiny tits"
>they're not tiny
disappointing

>>108036854
>slopped
>good

catbox sucks fat donkey dick

>>108036854
Great job, and great stuff. This is the exact workflow I need to create a short film! Shame about LTX-2's tin-can audio quality though

>>108036791
it's just the ltx 2.0 Distilled template in comfy with the 'upscale image by' node changed to 1.0. Also, the LTXVNomalizingSampler is supposed to be better than the standard one.
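[editor's note] Putting the two tips above together (first pass at 50% of final resolution; 'upscale image by' set to 1.0): a hypothetical helper, not a real ComfyUI node, just mirroring the arithmetic the template does:

```python
# The distilled LTX-2 template samples a first pass at (final resolution /
# upscale factor), then upscales. Setting 'upscale image by' to 1.0 makes the
# first pass the final resolution (slower but sharper, per the posts above).
# This helper is a hypothetical illustration of that arithmetic only.

def first_pass_size(width: int, height: int, upscale_by: float = 2.0) -> tuple[int, int]:
    return int(width / upscale_by), int(height / upscale_by)

print(first_pass_size(2048, 1152))        # default template -> (1024, 576)
print(first_pass_size(2048, 1152, 1.0))   # 'don't downscale' trick -> (2048, 1152)
```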
>>108036880
I'm talking about fidelity, not the content of the video. Fidelity wise it blows away wan 2.2 now

>>108036885
y

>>108036885
yeah so fucking slow

>>108036888
OK.

>>108036854
needs more erotic stuff, this asmr is boring

>>108036885
yeah; right now the main catbox site fails silently, with all generated links 404'ing on you; litter seems ok for now

>>108036896

>>108036639
quality>quantity thoughever
bait, unfortunately

>>108036918
aackk

>>108036918
Try opening on an anonymous tab, or use another browser. I can access litter just fine in Brave
https://litter.catbox.moe/qhqw4tu3hugsef6s.png

>>108036945
I think I got rate limited or some shit because I opened all four links at once. They started to load then stopped.

>>108036409
>SongMuLA
Ace step comes out February 3rd and will blow that shit out of the water.
governments must have a local model that is at least nbp tier capabilities but fully uncensored
>>108036988
How do you think they'll make the deepfakes for arresting opposition?

>>108036988
They might have nbp running on a supermachine somewhere, or at least a dev/experimental fork but running on the same infra as the consumer one does otherwise; we know that openai has a special version for the military for example. That version is for sure less censored. You're onto something here
https://litter.catbox.moe/28hb50ygxtee4ilj.png

>>108036988
I bet that's what is keeping SAI afloat right now

>>108036854
>m2ryke6uybhfnqpd.mp4
ouch my ears
at least it cant get worse from here
>>108036988
>governments must have a local model that is at least nbp tier capabilities but fully uncensored
a reasonable assumption knowing obvious allegiances is that the chinese government can do whatever they want with literally any chinese SOTA model, and the american government can probably do whatever with Grok's best models. so still not really good enough yet but in a few years video footage probably isn't going to cut it

>>108037055
we'll just have AI forensic experts

>>108036988
Yeah, it's called NBP

how do I unslop ltx2 audio? they all sound like a system

>>108036206
Using your workflow, inference on an RTX 6000 takes about 42 seconds: 2.2 it/s on the first 40 steps and 1.2 s/it on the last 20.
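[editor's note] The 42-second figure above checks out; note the two phases are quoted in different units (it/s vs s/it), so the total is 40/2.2 + 20*1.2:

```python
# Sanity check on the RTX 6000 timing above: the two phases use mixed units
# (iterations/second for the first, seconds/iteration for the second).

first_phase = 40 / 2.2    # 40 steps at 2.2 it/s  -> ~18.2 s
second_phase = 20 * 1.2   # 20 steps at 1.2 s/it  -> 24.0 s
total = first_phase + second_phase
print(round(total, 1))    # prints 42.2, matching the reported ~42 seconds
```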
>>108037104
wait for 2.1 / 2.5

>>108037104
https://github.com/haoheliu/versatile_audio_super_resolution can work for some improvement

>>108037166
example data https://audioldm.github.io/audiosr/

>>108037106
2.2 iterations a second? holy

>>108037185
oh god i'm itoooratinnnng ag ahahahh

can somebody fix fp16 computing for all these bf16 models? (anima, z image base)

Illustrious 0.1 needed 20 million anime images and still came out suboptimal and unstable. So making character loras for bigger non anime models will require tons of data, not to mention full anime finetuning on Klein or ZiT. The character loras I downloaded for ZiT are super unstable, and when the character looks right, the pose is stiff like SDXL or even SD 1.5.

>wasting 5090 for this?

>>108036988
pretty sure jeets are jerking off to their uncensored nbp lewd gens at google headquarters. personally i hate using nanobanana pro for 1girl gens. qwen, klein and sdxl illustrious are better suited for that.

>>108035985
hamster lookin face

>>108036598
Your model is trash if this is your best sample.
simple message really bothers some anons
>>108036468
yes this is good

>>108036441
skyrim lora?

Need a good I2V wf for ltx2.
Character likeness falls apart after a few frames in all the ones I've tried

>>108037276
put the bunny, in the box

you're all untalented

>>108037335
>>108036854
use the I2V lora

KEKSTONE!!! WHERE ARE MY ARTIST TAGS!!!!!

>>108037341
i resent the implication I give a shit

>>108037368
He must have it for the klein tune right?

>>108037382
SEXOSEXOSEXOSEX RIGHT NOW

>>108037382
Give her a belly bulge.

>>108037382
give her a futa bulge

>>108037275
shit don't look that Max saaar

>>108037341
rude!
>>108037395
put a baby in her. it would be really funny haha

>>108037199
that's because niggerfaggots keep doing retarded no caption loras despite it leading to the same kind of rigidity on every model arch that ever has or ever will exist

comfyui is such a piece of shit
and it gets even shittier with each git pull

>>108037390 (me)
I apologize for losing my cool.

>>108037428
then use something else and stop complaining?

>>108037445
there is nothing else

>>108037328
No lora, but close. It's a t2i of a prompt I got from feeding Qwen3 32B a Skyrim screenshot.
>>108037454
Auto abandoned us
>>108037457
that's remarkably good for a t2i

>>108037341
accck

some of you should consider self harm

>>108037492
no pretty lady don't jump

i still can't figure out why my workflow starts lagging after a few gens when doing XL. refreshing the browser completely fixes it. it has to be the face detailer nodes. what puzzles me is how no one has reported it as an issue on the github. it can't just be me

>>108037382

>>108037382
weird. I was just wondering where the asuka posts have gone

>>108037497
too late

>>108037492

zimagebros... when will the bf16 training be fixed?
>Conveniently, the fp32 weights for Z Image appear to have "leaked":
https://huggingface.co/notaneimu/z-image-base-comfy-fp32

>>108037537
Model seems alright

I was gonna train a coom lora for Klein but maybe now I don't need to, this one seems good:
https://civitai.com/models/2357212/face-cumshot

>>108037537
>TWO MORE MONTHS UNTIL THE ACTUAL Z IMAGE RELEASES. GET HYPED FOR BASED CHINA!!!

>>108037548
i don't think you even need a lora for cum.

>>108037569
by default it does a white liquid that is similar but clearly not cum; the lora is definitely bridging that gap

>>108037579
>Cum detective
grim
>>108037583
wat

>>108037583
>slop eating jeet

>>108037612
m'booba

>>108037305
Jenny is my HamHam!

>>108036598
wtf is going on in this

>>108036559
multiple views is literally an actual booru tag. also probably like watermark, signature, logo, artist name, web address, etc

>>108037616
quit samefagging

prompting jizz: https://files.catbox.moe/wbntmw.png

>>108037629
was this supposed to be jennette mccurdy this whole time???? lmao

>>108037648
i'm for sure not the guy using what looks like some A1111 variant based on the filename i promise

>>108037649
sloppa

>>108037656
get better taste then

>>108037545
>>108037612
complete slop

>>108037668
Go cry LMAO

>anima requires @ for artist tags
No wonder it didn't work the other day..
>>108037668
I love experimenting with new models.
remember when /ldg/ was hyped for RadialAttn and then everyone just forgot it existed when it released and it was never mentioned again
>>108037703
>I AM SILLY!
never really liked strawman comics like this. normally means the ridiculed party is correct. comfy does suck btw

comfyui sucks because i am too poor to use api nodes

>>108037723
You're entitled to your opinion. Just would be nice if you didn't shit up the threads with it for attention.

>>108037722
because it had draconian length and resolution requirements or some shit but I think they might have fixed that

>>108037548
it's meh

I have no implementation and I must refine

>>108037733
are you ok catjak?

>>108037703
That's a nice style. Is it based on some particular artist?

>>108037774
harada takehito

>>108037763
>Anyone negative to him is the same guy
Can't stop exposing yourself eh?

new /ldg/
>>108037746
>>108037746
>>108037746

>>108037776
Thanks

migrate
>>108037746
>>108037746
>>108037746

>>108036540
45-60secs for me at 1024x1024