/g/ - Technology

Discussion of Free and Open Source Text-to-Image/Video Models

Prev: >>107108437

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts
https://github.com/tdrussell/diffusion-pipe

>WanX
https://comfyanonymous.github.io/ComfyUI_examples/wan22/
https://github.com/Wan-Video

>Neta Yume (Lumina 2)
https://civitai.com/models/1790792?modelVersionId=2298660
https://nieta-art.feishu.cn/wiki/RY3GwpT59icIQlkWXEfcCqIMnQd
https://gumgum10.github.io/gumgum.github.io/
https://neta-lumina-style.tz03.xyz/
https://huggingface.co/neta-art/Neta-Lumina

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
>>
File: 00006-3672281271.jpg (1.21 MB, 2560x2048)
>>
File: cat car_2x.png (1.5 MB, 1024x1024)
>>
Blessed thread of frenship
>>
thread of 80% there friendship
>>
A joke so good he wrote it twice because he really needs attention.
>>
File: 1740404841688382.png (1.38 MB, 863x868)
>NegPip (negative weights in prompts)
>>107113964

Actually big. Why wasn't this implemented so negative prompts would actually be useful by now? Lmao
>>
File: 00007-1486313296.png (2.84 MB, 1536x1536)
>>
File: WAN_22_I2V_INT__00021.mp4 (3.44 MB, 1120x1440)
>>107114274
Trying to get the action fully completed, but it gives up around the fifth second LOL.

The recommended sampler/scheduler works well though.
>>
>>107114791
how does this work, as in, do you pass a still frame and then a video with the animation to it? how long does it take (what's your gpu)? does it work well with anime?
>>
File: DigitalBrush3_00004_.jpg (705 KB, 1152x1512)
>>
File: struggle.png (3.93 MB, 2050x1024)
I've been having trouble getting SwarmUI to work right. I'm trying to switch over from Forge, but whenever I try to get LoRAs working in Swarm the results are noticeably worse. Left is Forge, right is Swarm; same prompts, LoRAs, and resolution.
Any ideas how to fix this? Am I missing something?
>>
>>107114694
Does this hold equally true on something like naked illustrious or even base XL?
>>
>>107114895
Prompt weights? They aren't calculated the same way between UIs, so if those are included the results will be drastically different.
>>
File: ComfyUI_00172_.mp4 (217 KB, 560x720)
>>107114856
Using Wan 2.2, I pass a starting image, set a prompt, and the workflow does the rest of the job, coming up with the animation and everything. I'm still learning how to write proper prompts to get more accurate results. I have tried frame-by-frame prompting, but I'm not sure it even works. Making small clips and then combining them can also be a good approach, but consistency may be a problem. Using FLF along with other techniques is also an option, and I have seen some cool results from other creators.

This takes about 26 minutes. I use a 7800 XT on Windows via ROCm. I have tried anime and realistic images, and they work fine for a local, free app.
>>
>>107115015
ah, not for me. if i could pass a video and it would follow along, i'd be interested. i don't think this can work well with just a text prompt, it will get lost easily
>>
>>107115042
You mean video-to-video? There are plenty of V2V workflows for that out there.
>>
File: me.jpg (204 KB, 884x1585)
>>107114768
>>
>>107115219
prompt?
>>
>>107115205
is there a tutorial somewhere? i already know comfy basics. i want one that works with 3dpd video for movement and anime image for style
>>
>>107115219
Have you trolled any devil corps lately?
>>
Why are people shilling epycs as the poorfag LLM driver if LGA-2011-3 systems are many times cheaper?
>>
>>107115219
i look EXACTLY like this
same phone and everything
>>
>>107115250
no but the /adt/ thread got removed xd
>>
>>107115012
What are the ideal training settings: 512, 640, or 768? How many images and steps?
>>
>>107114898
because there are tons of papers and tools being published all the time and they get lost/ignored. someone should really make a PR to add it to Comfy; if Epsilon Scaling made it in, IDK why this shouldn't.

>>107114694
It works on all SDXL and SD1.5 models, including noob, illus etc.
>>
>>107115302
well, there was almost no discussion in that thread, only pedobait gens, no technical talk whatsoever
>>
File: 00008-1722653395.jpg (1.94 MB, 2560x2048)
>>
>>107115426
The thread was made to troll. I can't imagine they can maintain that level of bad faith forever.
>>
>>107114694
>>107115417
is there a comfyui-manager extension yet?
>>
>>107114694
>how wasn't this implemented
I wish I knew the answer. It's been out for so long and it seems like not many care about it.
>>
>>107115475
it's linked in the last thread
>>
File: Superbitches.png (3.26 MB, 1336x1728)
Sneed.
>>
>>107115492
>>107114694
Oh BTW, this works perfectly at CFG 1 too, so it's compatible with DMD2/Turbo/Lightning LoRAs
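The CFG-1 compatibility makes sense if you look at the guidance formula itself. A toy sketch (plain Python over flat lists; `cfg_combine` is a hypothetical name, and real samplers do this over latent tensors at every step):

```python
def cfg_combine(cond, uncond, scale):
    # Classifier-free guidance: eps = uncond + scale * (cond - uncond).
    # `uncond` is the prediction conditioned on the negative prompt.
    return [u + scale * (c - u) for c, u in zip(cond, uncond)]

cond = [1.0, 2.0]
uncond = [0.5, 0.5]

# At scale 1.0 the uncond term cancels out entirely, so an ordinary
# negative prompt has no effect -- which is why token-level negative
# weighting (the NegPip approach) still matters for CFG-1 models.
assert cfg_combine(cond, uncond, 1.0) == cond
```

So anything that works inside the conditioning itself, rather than through the uncond branch, survives distilled/CFG-1 setups.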
>>
>>107115505
comfyui-ppm seems to have it, guess i'm installing it. thanks anyhow
>>
>>107115521
>Sneed.

hello the year 2012 just called, you gotta go back Unc
>>
>>107115521
top left reminds me of bazz from concord
>>
>>107115557
are you saying I... can't sneed?
>>
File: 00000-614386978.png (1.59 MB, 896x1152)
>>107115557
Cope, seethe and sneed.
>>
File: 00009-2131906952.jpg (2.71 MB, 2560x2048)
>>
>>107115557
I will gladly go back. Please, provide a method of transportation and moving services for my rig and other personal belongings. I'll become a great artist and make billions of dollars.
>>
>>107115243
Dig Youtube. I have learned a lot of techniques and stuff from there.
>>
Isn’t that Arthur girl 8 years old?
>>
>>107115744
are you concern trolling over a cartoon rabbit with huge tits and ass?
>>
File: image_00028_.jpg (728 KB, 1264x1656)
>>
>>107114694
Oh nice, I was wondering when an equivalent of nai's -x::tag:: would be a thing on local.
>>
>>107115744
out of 10
>>
>>
is the 3 month old qwen image still the best model for general use?
>>
>>107115832
>qwen image
Kind of looks like Chinese flux. How do they compare to each other?
>>
>>
>>107115832
You say that as if the time between foundational model releases isn't at least twelve months
>>
>>107115886
yeah but someone might come along and say some different company made something better and qwen is ded
>>
>>107115894
Qwen Image is the biggest model anyone can realistically run, so there will be no new foundational models unless someone does something architecturally state-of-the-art or cares about optimizing for fewer parameters (China won't).
>>
its sad that local image diffusion doesnt have a sleek timeline diagram like local language models
>>
>>107115914
With proper ramtorch-like swap optimization we could easily run 40-60B models on 24GB GPUs, and with MoE architectures we could run whatever we want as long as you have enough RAM.
>>
>>107115894
Qwen llms were pretty crappy until the third gen. Deepseek llms were pretty crappy until the third gen. Wait a minute...
>>
>>107115844
Nice
>>
>>107115933
It'd be unbearably slow, and I think we've hit a hardware/price bottleneck, so we're in a 6090 waiting room. I was happy to see the 6000 Pro is $1000 off on Newegg, but honestly I can't be assed to justify more than $2000 for a GPU.
>>
>>107115915
>sleek timeline diagram like local language models
Lol.
>>
>>107115947
>It'd be unbearably slow
As long as it's a sub minute gen for a top quality image it's not slow
>>
>>107115976
fluffy pink room, furry plushies, glass shelf with anime figurines, anime posters, neon pink and purple wall lights
>>
>>107115969
Unless it's literally perfect, with one minute turning out masterpieces I didn't know I wanted, it's too slow for gacha.
>>
>>107116006
That's why we need >>107112436
And then just queue a lot of images

>@comfy
>Maybe it would be a good idea in comfyui to have an option to precompute all prompts first, which if enabled would then do that first and thus ensure a HUGE speedup given you can throw away the text encoder forever out of VRAM.

>Hugely speeding up the prompt computation since initially the model used to compute the prompt will be permanently loaded in VRAM and process everything in one go.
>And then later on you will also permanently now have the diffusion model in VRAM and computing the images one after another.
>Never needing to waste huge time swapping from RAM (or God forbid, disk) to VRAM, TWICE for every single generation.
>This is a huge, double digit % speedup for most newer models. And would also allow people to load older models at maximum precision too.

>A simple "Process all prompts first" switch which enables this in the settings would be great.
>I also don't see why this wouldn't be a default option anyway, this would hugely speed up anyone who has more than 1 thing queued up, which is especially true for enterprise users which comfyui wants to cater to anyway.
>>
>>107116034
This is poorfag cope, not $5000-computer cope. Also, just make it yourself; you know ComfyUI allows custom nodes, right? Just duplicate the text encoder node, add precaching, and save the prompt embedding as an npz keyed by a hash.
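The hash-keyed precaching idea is only a few lines. A minimal sketch, with all names (`toy_text_encoder`, `encode_cached`) hypothetical, and pickle standing in for the suggested npz format:

```python
import hashlib
import os
import pickle
import tempfile

CACHE_DIR = tempfile.mkdtemp()  # stand-in for a real on-disk cache directory


def toy_text_encoder(prompt):
    # Stand-in for the real text encoder (which would be resident in VRAM).
    return [float(b) for b in prompt.encode("utf-8")[:8]]


def encode_cached(prompt, encoder=toy_text_encoder):
    # Key the cache on a hash of the prompt, as the post suggests.
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    path = os.path.join(CACHE_DIR, key + ".pkl")
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    embedding = encoder(prompt)
    with open(path, "wb") as f:
        pickle.dump(embedding, f)
    return embedding


# Phase 1: encode every queued prompt once, while the text encoder is loaded.
queue = ["1girl, solo", "cat car", "1girl, solo"]
embeddings = [encode_cached(p) for p in queue]
# Phase 2 would run the diffusion model over `embeddings` without ever
# swapping the text encoder back into VRAM.
```

Duplicate prompts hit the cache instead of re-encoding, which is where the queue speedup comes from.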
>>
>>107116034
Would be a cool feature, but no way in hell should that be the default. I often queue up a bunch of different variations of a workflow just to cancel the queue after I see the first image generated. If it suddenly took 30 minutes for the first image to generate without me explicitly changing any settings, I would lose my mind.
>>
>>107115933
don't encourage the bloat. we do NOT need 40b param models, we need better architecture and training methods.
>>
>>107116063
>This is poorfag cope
This speedup will work on any setup, and you don't have real-time genning on any hardware, so it's a big deal regardless, retard
>>
>>107116117
Optimization = boring and gay. Add more layers.
>>
>>107116156
Then make it lmao
>>
>>107116156
I wish more people would understand this and how important it is to optimize software. But most end users are braindead, like >>107116063, so lazy devs don't really bother most of the time.
>>
>>107116117
No, fuck vramlets and fuck impatient zoomers who can't wait. Pushing for speedcopes is one of the bigger reasons the tech and the discussion around a lot of the models is shit: the average retard giving his opinions is running 7 speedcope snakeoils that all shit on the quality.
>>
>>107116177
>lazy devs
What does that make you then, a lazy baby that needs their hand held?
>>
File: file.png (970 KB, 1024x1024)
>>107114970
Do you mean CFG scale or Steps? I tried either, no good
>>
>>107116245
just post the workflow
>>
File: file.png (225 KB, 838x656)
>>107116255
>>
>>107116165
if you just want moar layers, then you'd be using HiDream or HunyuanImage 3. oh wait, they are slopped shit, because making models bloated doesn't magically improve the output quality and rapidly hits diminishing returns.

>>107116179
>running 7 speedcope snakeoils that all shit on the quality
random snakeoil comfyui extensions on inference side are not the same thing as redoing the model architecture and training methodology. we have genuine optimizations that IMPROVE output quality that can be used in new locally-trained models.

Qwen and Flux are SLOPPED and BLOATED, I use them personally but can still admit this. Even if you want a fuckhueg model, you should want it to actually have output quality matching how large it is. Current bloated models are wasting your compute on bad architecture.
>>
>>107115015
You have to describe every motion in detail, for example:
>the girl raises her fist above her head and then opens her hand, showing her 5 fingers
>the girl jumps, her breasts bounce/jiggle up and down
and such. I mean, you are technically telling a machine what to do, so the same rule of thumb still holds.
>>
>>107115832
I unironically think it's slightly worse than Hunyuan Image 2.1, if you consider that they're both dogshit at realism by default. More censored, and not quite as good at English prompt adherence IMO.
>>
<Bots can't wojak and XML>
Any bot want to talk at least? I feel lonely
</Bots can't wojak and XML>
>>
>>107116278
Qwen Image with a LoRA is the best porn model.
>>
holy shit with these disgusting hags, FUCK OFF
>>
every day a new retardation here, this is just sad
>>
>>107116302
what LORA? all the qwen nsfw LORAs I tried have limited knowledge of anatomy and poses.
>>
Every time I think about crowdsourcing captioning on a massive dataset I think about how it'll get trolled and I lose motivation. I guess I'll have to do the Joycaption method and do it all myself.
>>
>>107116302
there is a general nsfw lora?
>>
>>107116341
The biggest reason OAI is able to make Sora 2 so good is the quality of their captioning; they probably spent millions to make it as manual as they could.
>>
>>107116272
Can you specify which frames those motions will happen? Like "at Xth frame, <something>"?
>>
>>107116338
You take your favorite porn, caption it, and train the LoRA yourself and it's like magic. It's a very thin veil for the model to do anatomy, it's not like Flux.
>>
>>107116341
It's quicker to review and edit someone else's captions than do it all yourself; VLMs won't caption as well as a human.
But you don't need a big dataset to train a LoRA anyway.
>>
>>107116368
Currently I'm doing Joycaption Loras where you caption a bunch then train and then caption a bunch. It's mostly like what you said. And it's not for a LoRA, it's for major scale training.
>>
>>107116379
Is joycaption better than 4b abliterated qwen3 vl? https://github.com/1038lab/ComfyUI-QwenVL
>>
>>107116386
Joycaption is quite accurate, and it gets a lot better with a LoRA. It's not abliterated, so it basically has all the NSFW terms in it, whereas an abliterated model will always have major gaps in knowledge. Abliterated would be a good start for a new major VLM finetune, which I would do once I have tens of thousands of hand-captioned images.
>>
>>107116405
what model are you training on and what lora? hopefully qwen image
>>
>>107116355
Random, I guess? But what I know is that it applies the prompt in chronological order: if you prompt "the girl jumps, the girl cries, the girl dances" it will do those three events in that order most of the time, unless you use words like "while" or "at the same time".
>>
>>
>>107116357
this only works when you have a specific thing you want. it's not comparable to noob where you can get whatever you want just from prompting.
>>
>>107115015
For creating and then easily extending infinitely you can run the (LOOP) workflow from https://civitai.com/models/1818841/wan-22-workflow-t2v-i2v-t2i-kijai-wrapper
>>
File: 00010-3990566639.jpg (829 KB, 2560x2048)
>>
>>107116449
i cant imagine the workflows retards use to gen something this bad in the current year, you really gotta go out of your way to do it
>>
>>107116449
Alright who installed comfyui on the downie's pc
>>
Are there any new methods to do infinite length t2v with wan 2.2 without degradation?
Last thing I remember was some new lora called sv2 or something like that?
>>
File: file.png (152 KB, 1896x504)
>>107116255
My bad, I'm an idiot. Here's the workflow. I haven't touched it since installing
>>
>>107116470
>>107116545
this is clearly a troll from /b/. these guys post all sorts of horrible stuff
>>
File: ComfyUI_00266_.png (1.01 MB, 1280x1120)
>>
>>107116436
qwen?
>>
seems like someone is butthurt again...
not the first time...
>>
File: ComfyUI_00174_.png (3.07 MB, 1248x1848)
>>
>>107116249
>what years of solitaire does to a mf
>>
File: 00011-2764447756.jpg (1.34 MB, 2560x2048)
>>
Anon >>107116441
Debo >>107116464

Why are /ldg/ anons moving to /sdg/?
>>
>107116700
>eceleb schizo drama
Kys.
>>
File: ComfyUI_00275_.png (916 KB, 1280x1120)
>>
>>107116700
If you ask me, personally it's because sometimes I'm bored and want to bully someone.
>>
>>107116849
go on, tell us about your abusive father
>>
/ldg/ is full of nogen schizos, but /sdg/ is unbearable because it's had this one meth-head who spams the thread nonstop with really shitty gens, so I stopped going there.
>>
>>107117096
Lumi?
>>
>>107117096
>this one meth-head who spams the thread nonstop with really shitty gens
who? debo?
>>
>>107117096
>/ldg/ is full of nogen schizos
im fine with this considering the alternative is "the veracity of my claims do not matter considering ive attached an image of my recognizable 1girl OC that ive been posting for 3 years"
its the same as the /b/ thread of whatever trooncord where you feel like everyones walking on eggshells
>>
>>107116555
Trips
Upload both the Forge and Comfy images to catbox.moe and post the links so anons can see exactly what's going on. It doesn't look like the prompt is for the images you posted, though.
>>
>>107116470
>workflows
thats clearly an auto/forge gen you faggot
>>
>>107116436
don't stop
>>
i have 24 gigawats of vram. is it worth getting a card with 36? should i wait for a future card that has more?
>>
>>107117268
i have 0.33 liter mug on my table. is it worth getting a mug with 1 liter? should i wait for a future mug that has more?
>>
Boomer whose new to AI
What do I doenload from the OP if I want a fully uncensored AI for loli porn
>>
>>107117463
nice try officer
>>
>>107117459
looooooooooool
miggerbros...
>>
File: 00013-1131638003.jpg (1.88 MB, 2048x2560)
>>
>>107117200
>>107117362
>>107117422

>>107088865
>>
File: 1731383608079357.png (517 KB, 680x515)
SD keeps outputting low-effort faces. Everything else is alright or at least passable, but the eyes specifically always come out... well, not wrong, but very artificial-looking, like the shit AI used to do in 2022.
It's even worse when I'm specifically trying to replicate a mangaka's style: lowering the LoRA weight could fix the faces, but then it would also look generic and not at all the way I want.
>>
>>107117549
just use adetailer retard, a tech from 2022
>>
>>107117520
It clearly is Chroma, but I appreciate the compliment
>>
Is there a retard-proof guide to train my own lora for a character?
>>
>>107117575
Using reForge btw
>>
>>107117575
https://www.youtube.com/watch?v=MUint0drzPk
>>
>>107117575
https://archive.is/H2gzu
>>
File: 1749794193484419.png (507 KB, 720x720)
>>107117603
You need 24GB Vram to train a miserable Lora?
I mean I do have a 3090 but damn
>Captcha: 8MKYS
>>
>>107117679
I was using an 8GB card originally; it can just take forever. Unbatching also makes it take a long time even if you have 24GB, but I do notice the quality improve when I do it.
>>
>>107117679
You don't "need" anything; this is just the biggest and best model to get the best LoRA results on.
>>
>>107117679
12gb lora trainer reporting in
>>
>>107117689
>>107117697
Btw I have never trained anything in my life (and I'm only recently getting back into SD after 2 years or so), only considering it because what I've found online isn't really what I want.

I'm using
https://civitai.com/models/835655/illustrious-xl-personal-merge-noob-v-pred05-test-merge-updated
How do I make sure the end result of >>107117603 or >>107117606 is compatible with that? I've noticed some of my older Loras don't appear when I use that model.
>>
File: 00015-532242218.jpg (1.47 MB, 2048x2560)
>>
>>107115778
very nice. can you give a catbox of that please?
>>
Is doing one-shot gens the way, or is doing a two-step gen the way to go? If it's the latter, can someone give me a hand getting a wf set up? Getting some fun ones at the slime water park, picrel.
>>
File: test.png (1.76 MB, 1024x1024)
>>
>>107117805
sweet nectar...
>>
>>107117720
You select which model to train on in the trainer itself. But you should always use the base model and not a merge i.e. regular Noob.
>>
>>107117720
Honestly, read what >>107117606 posted in that archive link. Having an absolutely perfect set of training images, tagged correctly, is going to give you better results than some awesome GPU. You don't even have to go that fucking insane with the config settings. LoRAs for 1.5, SDXL, and a lot of the SDXL model families usually aren't compatible with one another, so you'd train the same LoRA on each root model family. I've still been using Kohya_ss this entire time, as it does what I want.
>>
>>107117848
>But you should always use the base model and not a merge
I second this. using merges as the training target gives you really screwball results. Sometimes you get neat effects but shit is so cooked it's not worth it.
>>
>>107117805
>Is doing one-shot gens the way, or is doing a two-step gen the way to go?
Elaborate?
>>
>>107117883
Basically doing a gen with a partial denoise (like only 0.3 rather than 1.0) and then feeding that to another sampler. I may be describing an upscaling workflow; I'm still learning and not too sure.
>>
File: test2.png (1.95 MB, 1024x1024)
>>
>>107117899
Usually it's the opposite: the first pass at 1.0 and the second pass at <1.0, often with a different model, as a way to, for example, use the anatomy and composition of one (the first) model and the style of the second. This is also the case with upscaling workflows. If your second pass is at 1.0, it basically negates anything present in the first unless you use a really strong ControlNet.
The average gen is merely a single pass in my estimation however thobeit desu.
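Roughly, a denoise value below 1.0 maps to skipping the earliest (noisiest) sampling steps, which is why the first pass's composition survives. A sketch with a hypothetical helper (exact step rounding varies by UI):

```python
def second_pass_steps(total_steps, denoise):
    # With denoise < 1.0 the sampler skips the earliest steps and only
    # runs the final fraction, so the input image's large-scale structure
    # (composition, anatomy) is preserved while details get reworked.
    start = int(total_steps * (1.0 - denoise))
    return list(range(start, total_steps))

# e.g. 20 steps at denoise 0.3: only the last 6 steps actually run.
```

Higher denoise on the second pass means more steps run and more of the first pass gets overwritten; at 1.0 every step runs and the first pass is effectively discarded.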
>>
>>107117939
Got it. What cfgs would you use for step 1 and 2? I'm using chroma, so 2-5 is where I keep it.
>>
>>107117951
Whatever looks good; CFG is heavily model- and settings-dependent. Personally I use the same CFG, scheduler, and sampler for both passes, but you really don't have to.
>>
File: ComfyUI_00263_.png (1.99 MB, 1280x1120)
>>
Why three different threads? What is the difference between them?
>>
>>107118009
If you use big boy models like chroma or wan you stay here, if you use rice bunny models like lumina you go to /adt/
>>
Uhm and /sdg/?
>>
>>107118022
What is the difference between /ldg/ and /sdg/ then? Sorry to ask, but I'm new here.
>>
lilbros been mindbroken by lumina for unironically two weeks kek
>>
>>107118036
/sdg/ is a discord chat for a handful of demented spergs
>>
File: ComfyUI_00270_.png (1.15 MB, 1280x1120)
>>107118045
You need at least 8 billion parameters to post here; if not, you're considered a beta cuck model and are outcast to /sdg/.
>>
>>107118089
Makes sense. Thanks for an honest answer. I was expecting to get flamed or something
>>
>>107118053
That's how you know it's a good model. Remember how much seethe Chroma caused?
>>
>>107118089
The first prerequisite to being on /ldg/ is being White, sorry jamal
>>
>>107118089
How long you gonna keep annoying us?
Also why didnt you put Anistudio in the OP? In one of your samefag bakes?
Here is the link: ttps://github.com/FizzleDorf/AniStudio
>>
>>107118134
fuck off
>>
>>
>>107117722
is everything okay
>>
File: ComfyUI_00265_.png (1.49 MB, 1280x1120)
>>107118134
>anistudio
You sir will get your own blacked gens very soon, stay tuned
>>
>>107118045
1girl, laura kinney, green eyes, long hair, black hair, slender, (toned:0.8), flat color, no lineart, , murata yuusuke, white lace thong, white stockings, white elbow gloves, topless, contrapposto, erect nipples, adjusting hair, hand on stomach
>>
I've only got one new gen to share today, maybe more next time.

>>107118385
Been too long since I saw one of these

>>107118273
"Pixels" look pretty good for an unaided gen. I wonder if it would be better doing pixelization in an early step and then genning from that, which is a trick I used to use with Pony/etc
>>
>>107118466
yeah I haven't visited in a while
>>
File: ComfyUI_03321_.png (757 KB, 640x1152)
>>107118466
>>
File: LLDG2.jpg (1 MB, 1920x1270)
Landscape Diffusion Bros here. In solidarity with our dear brother thread /ldg/, we invite you to take refuge in our general until the spam is over. Your local diffusion will be on topic until this ends.
Feel at home
>>107118520
>>
But that one anon isn't spamming and hasn't for awhile
>>
File: ComfyUI_00214_.png (2.75 MB, 1248x1848)
>>
File: ComfyUI_21139.png (1.87 MB, 1344x1344)
>>107116034
Honestly, I thought this was the default behavior right now. When I queue up a bunch of images I get a big row of "got prompt" and I don't see any copy operations or changes in VRAM usage while it runs through them.
>>
>>107118587
Because you're using small models that can keep their tiny text encoder in VRAM compared to something like Qwen
>>
>>107118570
Really? So your threads themselves are shitposts? Wow. Well, sorry for thinking they were being spammed; honestly, I remembered your generals having better quality.



All trademarks and copyrights on this page are owned by their respective parties. Images uploaded are the responsibility of the Poster. Comments are owned by the Poster.