/g/ - Technology






Eternal Kingdom of Heaven Edition

Discussion of Free and Open Source Text-to-Image/Video Models and UI

Prev: >>106873109

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/tdrussell/diffusion-pipe

>WanX
https://comfyanonymous.github.io/ComfyUI_examples/wan22/
https://github.com/Wan-Video

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Neta Lumina
https://huggingface.co/neta-art/Neta-Lumina
https://civitai.com/models/1790792?modelVersionId=2203741
https://neta-lumina-style.tz03.xyz/

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
>>
File: 1753938770777495.jpg (25 KB, 500x328)
>>106879198
>>106879200
>>106879203
Fucking CivitAI, man. I'm having fun generating HiRes images of my blonde loli and suddenly I got cucked. FUCKFUCKFUCKFUCK
>>
Why is there no rentry for Qwen Image Edit?
>>
>>106879231
get the cheapest 5090 you can buy, or the cheapest 4090 if you can find one, and enjoy unlimited cunny at home
>>
>>106879231
If you're that poor then buy a 16GB AMD card as befits your station.
>>
>>106879266
>AMD
>AI stuff
Lol you sure ??
>>
>>106879266
That's cruel, anon.
>>
File: 1742095036658040.jpg (54 KB, 693x448)
>Took 10-20 minutes per image to generate a 960x540 image (upscaled to 1920x1080) with 20 steps.

i- is this normal for an 8GB GTX 1080 or what??
>>
bait
>>
>>106879266
16GB is shit no matter what. If he's poor he should look at a used 3090 or 7900 xtx, or used professional or server cards.
>>
Blessed thread of frenship
>>
>>106879402
>7900 xtx
Shitpost
>>
>>106879447
it can run all the latest models like qwen and wan in comfyui, it's a perfectly viable card.
>>
>>106879354
what model are you genning with? for SD1.5 that sounds like cpu only speed
>>
a 3090 (preferably 2) is the bare minimum nowadays
>>
>>106879470
it's slower than a 5070 (non-Ti) at all AI tasks
>>
>my landscape in collage
NICE THANKYOU SO MUCH!
>>
>>106879471
I think it's an Illustrious model with a Hatsune Miku pic in it. I'm using the Automatic1111 UI.
>>
File: Video_00043.mp4 (2.96 MB, 720x1280)
> genned on a 4GB AMD card
>>
>>106879231
LOCAL
GENERAL
FAGOT!
>>
>>106879500
how many hours
>>
>>106879503
It's related since I'm [spoiler]forced[/spoiler] to use a local UI now
>>
>>106879480
FUD. It takes roughly the same time as a 3090 to do a 20-step 1024^2 Chroma gen. And a 5070 12GB is laughable; larger models like Qwen will run like dogshit.

you read some shitty benchmarks conducted by people who don't know how to use AI. they simply fucked up their AMD installs, probably benchmarking on Windows too.
>>
>>106879554
never used auto1111
>>
>>106879554
Check cuda.
>>
>>106879554
what the hell is this?
>>
>>106879519
9001
>>
>>106879600
what models
>>
Im going to kill myself
>>
>>106879500
ok but why did you have to include the trashy tattoo
>>
>>106879500
jesus, my dick.
>>
was jesus a mushroom?
>>
File: Video_00055.mp4 (2.45 MB, 720x1280)
>>106879620
the asian lady is wan t2v, then wan i2v for animation.

this one is flux then wan i2v to animate. both are about 30mins on a 3090 to gen.
>>
>Nunchaku requires insight face which only works with numpy 1.26, but other custom nodes require a higher numpy version to work
sigh
>>
>>106879243
No no, the 4090D or the 5000 Pro is the right choice. 32GB is not enough for many LoRA training tasks, and it's borderline for some inference tasks. 48GB gives you some future-proofing.
>>
So just getting into wan. I have a 4090. I loaded the template from comfy for the 14b i2v wan 2.2 and.. It works I guess. It uses the lightning loras and scaled fp8 models. Takes around 4-5 minutes for 720p, and looks to be offloading to RAM.

What's the best way to use the hardware I've got? The lightning loras and native workflow? I've seen people talking about kijai's nodes. I looked at the gguf models and they're about the same size as the fp8 scaled models from the native workflow.

I'm a complete newb when it comes to wan, if you have any tips I'll gladly take em
>>
>>106879748
q8 gguf is better than fp8, use that instead.
native workflow is better than kijai
be aware that using lightning loras removes the negative prompt (cfg 1) and greatly diminishes the quality of the motion (dumbed down).

use this sampler instead of the dual ksamplers
https://github.com/stduhpf/ComfyUI-WanMoeKSampler/tree/master
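Why cfg 1 kills the negative prompt: classifier-free guidance blends the conditional and unconditional (negative-prompt) noise predictions each step, and at cfg = 1 the unconditional term cancels out entirely. A minimal sketch of the mixing formula (an illustration, not ComfyUI's or the sampler's actual code):

```python
# Sketch of classifier-free guidance mixing (illustration only):
#   pred = uncond + cfg * (cond - uncond)
# At cfg = 1 this collapses to pred = cond, so the negative prompt
# contributes nothing -- which is why distilled/lightning setups that
# require cfg 1 effectively disable it.
def cfg_mix(cond, uncond, cfg):
    return [u + cfg * (c - u) for c, u in zip(cond, uncond)]

cond = [1.0, 2.0]    # toy "positive prompt" prediction
uncond = [0.5, 0.5]  # toy "negative prompt" prediction
print(cfg_mix(cond, uncond, 1.0))  # identical to cond: [1.0, 2.0]
print(cfg_mix(cond, uncond, 7.5))  # pushed away from the negative
```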
>>
>>106879648
Can I have your GPU
>>
>>106879818
Based thank you kind anon. 3/4 gens I've made so far have done a weird ghosting thing, what's with that?
>>
File: RaMu HuG.webm (3.99 MB, 852x1280)
>>106873991
Oh, I just woke up. That's two LoRAs; one for Gwengoolie and another for big boobs. It looks like that because of the Gwen LoRA: it overtrained on the studio background and gives everything a plasticky look. I need to go back and edit her into different backgrounds (which should be easy now with QIE) and redo it.

>>106879748
Drop down to 1152x*. I upscale and interpolate with TVAI before downscaling back to 1280 for upload, so there's no benefit (for me!) to genning at the higher resolution.
>>
>>106879872
What GPU do you use, and can I work under your CEO dad?
>>
File: supergirl 1970s.png (2.39 MB, 1296x1728)
>>
>only blue board ai
>filled with a** and t***
the internet already has infinite porn, there is no real use case for ai
>>
>>106879925
Not my specific scenarios.
>>
>>106879925
Use case for complaining about use cases???
>>
>>106879872
2 LoRAs in the same run, or did you make the first image and then reuse that image in a new run?
>>
File: G3LdejJbUAAATt3.jpg (204 KB, 2400x1256)
NVIDIA DGX Spark support for ComfyUI!
ComfyUI continues to cook and improve everyday!
https://x.com/ComfyUI/status/1977886707245797707
>>
>>106880034
LFG!
>>
File: RaMu ReVeAL.webm (3.92 MB, 960x1280)
>>106879913
>Asus ROG Strix OC 4090
It's nice because I can limit it to 400W and it's still a hair faster (literally just one!) than a stock 4090... and also, my dad is retired.

>>106879949
Same run.
>>
Ok, I'm doing 5 gens with Chroma 2K and the gen is running really slow for me. Anyone know what it could be? Is this the famous OOOOMING? How can I prevent that?
>>
I'm getting better results in WAN I2V with no prompt than with a prompt.

should I be referencing the subject?
>>
DONT BUY GPU
RTX 6090 WILL BRING THE 5090 DOWN
I WANT TO BELIEVE
>>
File: 1.jpg (138 KB, 1057x1186)
>>
>>106880134
good.
>>
>>106880095
>WAN 2.2 Image-to-Video Formula: the source image already establishes the subject, scene, and style. Therefore, your prompt should focus on describing the desired motion and camera movement.
Shouldn't need to unless it's not properly recognizing something.
>>
File: ComfyUI_02679_.png (2.05 MB, 1536x816)
>>
File: ComfyUI_02684_.png (2.95 MB, 1824x1248)
>>
if this AI sh*t is so next level then try making a picture of a dog wearing sunglasses on a skateboard
go ahead I'll wait ;)
>>
get better bait
>>
I'm reading on github that an RTX3060 user took 7.5 hours to generate an 11s 720p video, oh fuck
>>
maybe the future will be training models in 4-bit (meaning we'll be able to go for bigger models on 24GB VRAM cards), and guess what, it's Nvidia that'll save us from Nvidia lmaooo >>106880242
>>
File: ComfyUI_02686_.png (2.78 MB, 1824x1248)
>>
>>106880252
Too good to be true, at least for general application.
There is bound to be a catch the p-hacked arxiv paper isn't telling, as always.
>>
>>106880180
should I literally prompt something like "camera moves" or just "moving"
>>
>>106880249
the human eye can't see past 480i anyway, it's just diminishing returns after that point
>>
if anyone's bored, make some NES style pics. I need inspo for my game and sometimes there is cool stuff in the details. My computer's too potato.
NES games that don't exist, 'this and that but it's a NES game', 'kid icarus underground forest region NES' I dunno.
>>
>>106879924
how do you get this style of panties? do they have a specific name?
>>
>>106880334
here on the cutting edge of ai, at the very brim of innovation, we ask the most complex of questions
>>
>>106880319
Not your personal army
>>
>>106880347
panties are fascinating, so indeed
>>
>>106880233
Nice
>>
File: supergirl before upscale.png (1.05 MB, 864x1152)
>>106880334
The prompt only specifies "white panties, bow panties." The latent upscale helpfully added the lace on its own. But latent upscale likes adding details anyway, so you'll probably get that consistently too if you specify lace panties.
>>
>>106880305
here's the official prompt guide from alibaba; https://alidocs.dingtalk.com/i/nodes/EpGBa2Lm8aZxe5myC99MelA2WgN7R35y
scroll down to Basic Camera Movement and Advanced Camera Movement, they've got prompt and video examples of the types of camera movements.
>>
File: 00009-3505374827.png (259 KB, 512x384)
>>106880237
enjoy
>>
>>106880334
low leg panties
https://civitai.com/models/933778/adorable-low-panties
>>
File: 1732317411453062.jpg (493 KB, 2028x2030)
I finally got around to trying open pose because I wanted to do something with it, but it appears to be kind of worthless. Am I just doing something wrong? I fiddled with all the sliders and nothing is making it consistent. I guess one of the other controlnets would be better for stuff like this?

Does anyone have advice for doing character turnarounds and such?
>>
>>106880377
thanks anon
>>
'388

Heh. Nice try, but no.
>>
is WAN 2.2 even worth trying out with 16gb vram? is it possible to train LORAs with that amount? i've seen some impressive img2vid and txt2vid but i dont know if i should even bother
>>
>>106880390
oh nice find
>>
>>106880237
The technology isn't quite there yet, unfortunately.
>>
>>106880397
>Does anyone have advice for doing character turnarounds and such?
Openpose on its own is not great for this. You probably also want a dedicated turnaround Lora (or wait six months when we have 1click image-to-model workflows).
>>
>>106880471
>(or wait six months when we have 1click image-to-model workflows).
What's that about?
>>
>>106880397
openpose got significantly worse for sdxl onward
>>
File: supergirl lace.png (2.29 MB, 1296x1728)
>>106880416
No problem. To double check, I tried a no-frills upscale instead of my usual modifications. Adding "lace panties" consistently produces that pattern. The original gen probably automatically used lowleg panties as >>106880390 says due to the retro artstyle lora, but prompting it explicitly also ensures that the base gen has the correct cut.

>>106880347
We're doing that exact underwear scene from Weird Science. I can't wait for the adaptation twenty years from now.
>>
>>106880397
qwen image edit can do it.
>>
>>106880388
1.5 sucks. do 1.4. I deleted 1.5 yet again. 1.4 stays on the drive. It's SOVL
>>
>106880435
you're on your own if you don't like it, shitposter
>>
>>106880549
It just so happens that I've been meaning to try it. Where do I get 1.4, and what GUI works with it?
Civitai has one 1.4 pic on it; it's pretty close to looking like a 1.5 pic.
>>
the VRAM optimizer goat is back
https://github.com/comfyanonymous/ComfyUI/pull/10308
>>
>>106880590
let's see it in stable-diffusion.cpp.
>>
File: oy vey.png (106 KB, 1919x473)
https://civitai.com/articles/20211
STOP NOTICING
>>
File: Wanimate_00002.mp4 (1.31 MB, 616x832)
is wanimate good?
>>
>>106880631
Did you use a refiner? (serious question)
>>
>>106880478
Obviously I can't predict the future, but multiple companies have tried their hand at creating tools that take a gen and automatically turn it into a fully textured and rigged 3D model. None is practical quite yet (to my knowledge, happy to be corrected on this point) so six months is a placeholder, but people are actively working on it for obvious reasons. If you can turn a gen into a textured and rigged 3D model, then turnarounds and other poses etc "come for free."
>>
>>106880520
yeah it's pretty nice, I got tired of getting the most plain ugliest panties, so might as well use the right terminology to get better looking ones
>>
>>106880639
imo the way forward is, um, what do you call those AIs that take control of your keyboard and mouse and use apps? One that uses Blender, based on references you supply, sounds like the way to go.
>>
Holy fuck changing the UI makes night and day difference.
>>
>>106880649
If you haven't already, download the danbooru autocomplete or another tagging database. Lots of specific panty types that models are likely to know.
>>
>>106880665
what do you mean
>>
File: Wanimate_00004.mp4 (1.16 MB, 614x832)
>>106880634
no?
>>
>grok imagine cannot do a woman carrying a man in a fireman/bridal carry
Local can still win!
>>
>https://www.illustrious-xl.ai/sponsor
Have you heard of the Korean salaryman with two (2) active braincells?
>>
File: 1753743083938477.png (1.7 MB, 1456x1081)
>>106880515
That makes sense. I thought it was supposed to be like rock solid for posing so I was confused with it being so inconsistent.

>>106880542
Don't you need a lot of vram for that? I use this sometimes for small edits but I'd want to do this locally. Also in this test it wasn't great. Usable though I suppose. https://huggingface.co/spaces/multimodalart/Qwen-Image-Edit-Fast

>>106880639
I thought we would get 3D generation before video and boy was I wrong lol.
>>
>>106880746
Doesn't look like her. idk. funny, admittedly.
>>
File: 1743472036623042.png (158 KB, 288x292)
>>106880729
I'm the guy with the poorfag GPU. I'm using ComfyUI now and I can generate a 1440x1440 image with 1.5x upscaling on my old-ass GTX 1080 in about 5 minutes per image. It uses every last bit of my RAM and VRAM, but it's way quicker now
>>
>>106880764
>Im using ComfyUI now
i dont understand. what were you using before that was slow?
>>
wake me up the day local will be able to do this kino >>>/wsg/5998181
>>
File: 1734910901853797.png (1.86 MB, 1360x768)
>black and white and (red:133742069) all over
Qwen is playing coy and ignoring my token weights
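Those (token:weight) weights are applied by the frontend's prompt parser before anything reaches the text encoder; a workflow that doesn't implement the syntax just feeds the model literal parentheses. A hypothetical, simplified sketch of such a parser (real ones like A1111's also handle nesting, escapes, and bare `(token)` boosts):

```python
import re

# Simplified "(token:weight)" parser -- an illustration of the syntax,
# not A1111's actual implementation.
WEIGHT_RE = re.compile(r"\(([^():]+):([0-9.]+)\)")

def parse_weights(prompt):
    """Return (token, weight) pairs found in an A1111-style prompt."""
    return [(m.group(1), float(m.group(2))) for m in WEIGHT_RE.finditer(prompt)]

print(parse_weights("black and white and (red:1.5) all over"))
# A frontend without this step passes "(red:1.5)" through as plain text,
# which is what "ignoring my token weights" looks like in practice.
```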
>>
File: 1752384654611533.jpg (117 KB, 1000x915)
>>106880771
I have no idea. I just deleted it. last time i used it before CivitAI incident was like 3 years ago
>>
>>106880717
oh yeah forgot about that node, I'll install it, good idea
>>
File: ochlys gemini.png (1.94 MB, 2916x1056)
>>106880397

Google Nano Banana and Qwen Edit 2509 made open pose redundant down to the most obtuse of use cases. When I try to use open pose, the perspective/proportions are fucked up and don't match naturally. Straight genning is superior to trying to fuck around with the angles/proportions.
>>
File: 00121-3046113715.png (2.41 MB, 1248x1824)
>>
File: buzzing.png (60 KB, 963x687)
what is the point
>>
File: IMG_0206.jpg (158 KB, 1298x1245)
>>106880397
If you're using SDXL or derivatives like noob/illustrious/etc, open pose is total shite. You'll get much better results with the SDXL controlnet and the invert or depth_anything_v2 modules. Open pose must have been great for 1.4 or something, because you still see it recommended tons, but it just doesn't work.
>>
>>106880881
>>106880948
Thanks guys.
>>
>>106880947
I guess the yellow buzz (the one that allows you NSFW) is more expensive?
>>
>>106879234
Rentry anon is long dead and no ones picked up the torch
>>
>>106880948
I cannot tell if this was real or if I just imagined it, but wasn't there something about some of the SDXL controlnets being trained with the wrong channels, and if you switched them around to be "wrong" in the interface (and therefore correcting the training error) they worked better (but not yet SD1.5 tier)? Does anyone else remember that?
>>
File: on thank you.png (2.2 MB, 1600x896)
>>106880940
>>
>>106880940
Damn I love that hair. Anime hair in a semi-realistic style just triggers something amazing. Also, booba.
>>
>wan21 had a good cumshot lora
>wan22 cumshot loras all make the futa spit cum out her mouth
>>
>>106880965
>I guess the yellow buzz (the one that allows you NSFW) is more expensive?
Looks like it gives compensation as blue instead yellow now. At least majority of it.
>>
For a character lora, is this enough for tagging?
>>
>>106880986
No idea. Every module seems to work fine except openpose ones. Generally I just stick to the two I mentioned though, cleanest results.
>>
>>106880986
It was something about gree nand blue color channels switched, but even if you use channel switching node, openpose is often hit or miss.
>>
File: Colored pencil art_00006_.jpg (890 KB, 1472x1896)
>>106880986
>Does anyone else remember that?
It was some simple change for the code that required you to switch around some limb colors for controlnet pose. Not 100% sure but I think it made pose work with SDXL. NoobAI should have their own CN pose model that just works with Illustrious too.
>>
File: impossibly tight jeans.png (2.88 MB, 1296x1728)
>>106881060
>>106881077
>>106881094
Ah, thanks, that was it.
>>
Any grifter news? I want to griftMAXXX
>>
>>106880947
Cuck currency
>>
File: 00139-205164759.png (2.47 MB, 1248x1824)
>>106881013
>>
File: 1747155583446312.png (41 KB, 673x460)
Brooooo, it only took 7 minutes to generate a 2K image (using hires fix). Fuck CivitAI, I think my old shit GPU can generate Loli Miku now. Why the fuck did I only find this out now??
>>
>>106881170
Paedophile thread is over there >>>/g/adt
>>
File: um ok.png (2.36 MB, 1600x896)
>>106881007
>>
File: 1733115706426011.jpg (2.99 MB, 2688x1536)
>>
File: 1760302778919638.jpg (261 KB, 1664x2304)
>>
File: file.png (146 KB, 602x339)
>>106880134
naruto is always appreciated
>>
>>106881046
Which captioner is this?
>>106880782
I don't think Qwen's text encoder supports that.
Same story with t5
>>
https://futurism.com/future-society/openai-chatgpt-smut Is this the end for all other models?
>>
>>106881217
https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
>>
>>106881240
I don't trust anything that OpenAI says, especially after the enforced ChatGPT-5 safety router and other garbage. Call me when they have an Ani stripdancing on screen who suggests sexy prompts on her own volition. Then and only then I will believe.
>>
>>106881046
IMO you can disable the first one and prepend the characters name to every caption. I can't remember the last time JoyCap mentioned the resolution by default desu.
>>
>>106881240
Very nice that special needs people get to write their own articles these days.
>>
>>106881240
>trust me bro, in 2 weeks my models will be uncucked
he said this shit for years loool
>>
>>106881278
Modern reporting is such garbage it has gotten me to retroactively question all of recorded history. These people are barely even conscious, but can we be sure that's a purely modern phenomenon?
>>
>>106881287
>it has gotten me to retroactively question all of recorded history.
don't forget that only the winners write history so yeah...
>>
File: ComfyUI_02713_.png (1.3 MB, 1024x1024)
Can anyone tell me why the hell is this the case? This is genuinely driving me nuts.
On illustrious and noob, I can do second pass upscaling with low denoising (typically between 0.25 and less than 0.4 denoising strength), using karras schedulers. It works fine.
I prefer to use this instead of adetailer or GAN models that just sharpen the image because I like the additional details second pass can add to the higher res image, while sometimes fixing the minor errors in the initial image.
However, I just can't fucking get this to work on base SDXL or Flux. It's always blurry as shit compared to what I get on illustrious.
So why the hell is this the case? I can buy that it's just a different-model thing with Flux, maybe, but shouldn't it also work on SDXL? JUST WHY DOESN'T IT WORK????
Regardless, does anyone know how to get it to work on these models without cranking denoising high? (I very much prefer it to stay low because I hate the deformed hallucinations high denoising likes to add when upscaling)
To illustrate my point further, here are two catboxes with an upscaling workflow. Same prompt, samplers, seed, etc. Just the models are different.
It upscaled fine on Noob while SDXL is blurry as shit.
https://litter.catbox.moe/qr2lfzmixpygkd5s.png
https://litter.catbox.moe/gyuw51hozgtp42tt.png
>>
>>106881287
>can we be sure that's a purely modern phenomenon?
https://www.amazon.com/Looking-Back-Spanish-George-Orwell/dp/6257120985

Can recommend, easy read
>>
File: cfg1 uhh.png (2.11 MB, 1600x896)
>>106881179
>>
>>106881314
Can you just summarize it for me with a bot?
>>
File: 1736534811383801.jpg (2.95 MB, 7629x5240)
babe wake up, bytedance released finetunes of the flux text encoder (that seems to act like PuLiD)
https://huggingface.co/ByteDance/FaceCLIP
>>
>>106881340
>bytedance released
imagine if it was "bytedance released Seedream 4.0" :(
>>
>>106881303
Different models handle this low denoise thing differently. It's not something you can really avoid in the way you're hoping. However, there is a way to make this work.
>I very much prefer it to stay low because I hate the deformed hallucinations high denoising likes to add when upscaling
One way to solve this is by adding a controlnet to the second pass. Depth sometimes, lineart sometimes. You want it to end early (~0.6-0.8) so that the second model can fill in additional details, and you can reduce the strength (~0.7-0.8) so that compositional errors get fixed.
>>
>>106881314
I'll probably check it out, thanks anon. I remember liking his writing style a lot from 1984.
>>
What model makes the best [spoiler]plants[/spoiler]?
>>
>>106881370
>I remember liking his writing style a lot from 1984.
1984 is a great book, but a bit verbose imo. I prefer Camus; he's more straightforward. I guess I just don't like fluff lol
>>
>>106881240
big doubt. they nuked all the celebrities, and censored even more things. unless they revert to the uncensored version, it's over.
>>
>>106881340
Is it drop-in, or will we have to wait for updates for support?
>>
closed source ComfyUI when?
>>
>>106881390
not soon enough
>>
>>106881340
also, the github was taken down?

pickle files are known to be a bit risky.
>>
>>106881340

you do notice world largest fuel tank
>>
>>106881340
>The GitHub code repository returns with a 404
Wake me up when this shit hits Comfyui, until then, I sleep.
>>
>get up much earlier than usual
>start genning while eating breakfast
>it's like 3 times faster than yesterday at the same settings

???
>>
>>106881340
>they're still using flux
that's brutal, Qwen Image exists lol
>>
>>106881415
That's usually a sign your GPU is starting to fail. It's giving out its swansong for you.
>>
>>106881340
bytedance always release poop for free. fuck them for real
>>
>>106881240
No, they won't.
Even if they did, I would never in a million years generate NSFW content on someone else's server (cloud/API). You will be profiled. Your usage will be studied, monitored, and analyzed extensively. Like having sex in an interrogation room.
>>
we already have all the models we need why do you yearn for more
>>
>>106881422
this, they're the epitome of companies that only release failed experiments and keep their successful ones to themselves lol
>>
>>106881421
Bro, it's like a 2month old 5090, pls no.
>>
>>106881439
they're all subpar compared to government backed ones
>>
>>106881415
you forgot something. Like you're on schnell or something. No free lunch.
>>
>>106881415
>first thing anon does when he wakes up is gen
addiction
>>
>>106879818
>native workflow is better than kijai
according to??
>>
>>106881415
Put your PC on better airflow. Stop putting your PC outside, Clean your PC. If anything fails RMA your GPU
>>
>>106881446
Good enough for me, and that's all that matters. I'll take NSFW subar models over censored, locked down normie shit any day.
>>
>>106881340
Is this for the edit model? what are the faces in the corner?
>>
>>106881474
flux isn't an edit model, it's like PuLiD: you reproduce the human by giving it its face, and desu this is a really outdated method; edit models can do that by themselves now
>>
>>106881240
>Government issued gooning
>>
>>106881464
native has always been better than kijaimeme, it's faster, it uses less vram and it allows you to run gguf models
>>
>>106881483
>flux isn't an edit model
I meant flux kontext.
>>
File: 1759792805794304.png (151 KB, 465x453)
there is no future for local bros. we are doomed to use wan, qwen, chroma, illustrious foreveeeeeeeeeer
>>
>>106881415
Do you powerlimit your gpu? If so, check your settings. I've had an occurrence where my gens were faster for some reason and I found out it was because MSI Afterburner had defaulted back to stock settings.
>>
>>106881496
Did they ever release Wan 2.5 or what ? If it does then its truly over. Fuck Jews and Fuck Payment Processors
>>
>>106881496
and it sucks for wan, I want to get rid of that 2 models MoE meme, fuck this shit
>>
>>106881496
and kontext.
>>
>>106881503
>Did they ever release Wan 2.5 or what ?
they won't release that model, maybe there's a tiny chance they'll do it if they make wan 3.0 and that one is competitive with sora 2 but I doubt it A LOT
>>
>>106881352
Interesting. I use the first pass image as the controlnet then, right?
>>
File: 1736734632524454.jpg (201 KB, 871x1053)
>>106881514
Welp.
Its O V E R then. I guess its time to draw again
>>
God i hope Wan2.5 or 3.0 will be API only
>>
>>106881464
Yeah, I'm not finding that to be the case either. The default workflow has stopped producing anything but noise after one or two gens multiple times now (with zero changes to settings) and I have to fully restart Comfy to get it to work again.

>>106881492
While Kijai may be doing a hundred stupid things in his workflows (custom everything), it really has consistency going for it. It just works!
>>
Where did all the good genners go
>>
>>106881516
You got it.
1) Gen the original image.
2) Feed this base gen into a depth or lineart preprocessor or both, depending on the style you're going for.
3) Use these as your controlnet, applied to the second sampler only.
That means you can crank up the denoise (and resolution, or both) a little more, though this obviously comes with more change.

You can also take advantage of the different way models deal with low denoises by changing the model on your second sampler, instead of reusing the first. For example, imagine genning your base with SDXL, then using your noob to denoise the higher resolution version. Etc etc. Just experiment.
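Mechanically, "end early" means the controlnet only influences the first fraction of the sampler schedule. A toy sketch of that idea (an illustration of the concept, not ComfyUI's actual step logic, which works on the sigma schedule):

```python
# Toy sketch: which second-pass sampler steps a controlnet with a given
# end_percent still influences. With end ~0.7 on a 20-step pass, the
# controlnet pins the composition for the first ~14 steps and the model
# is free to add detail in the remainder.
def controlnet_steps(total_steps, end_percent):
    return [s for s in range(total_steps) if s / total_steps < end_percent]

print(controlnet_steps(20, 0.7))  # steps 0..13 are guided
```

The same intuition explains the strength value: lowering it to ~0.7-0.8 weakens the guidance signal at every guided step, which is what gives the model room to fix compositional errors.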
>>
>>106881340
what's the point? edit models can do this shit with humans and anime characters
>>
Is there a way to inpaint faces automatically with wan 2.2?
>>
>>106881543
>The default workflow has stopped producing anything but noise
You're clearly doing something wrong. I've done hundreds of wan gens from native without any issues.

Kijai's workflows are overly convoluted and verbose.
>>
>>106881563
Sometimes it means that python's messed up.
>>
>>106881563
>Kijai's workflows are overly convoluted and verbose.
not only that, but he coded this shit in a way that makes his nodes incompatible with the rest of ComfyUI's ecosystem. it sucks ass
>>
>>106881562
there is
>>
>>106881553
I appreciate the info. Results may be pretty good. We'll see. It's basically dev.
>>
>>106881591
TELL ME
>>
>>106881496
>there is no future for local bros. we are doomed to use wan, qwen, chroma, illustrious foreveeeeeeeeeer
I won't doom that hard, but you have to understand that the current SOTA local models are really decent, so it's getting harder and harder for companies to surpass them. It's not 2022 anymore; you can't just release a poop (SD1.5) and be considered the second coming of Jesus Christ. You have to put in some effort now to get the localkeks' attention, and for some companies that's just too much work. If they can make great models they'll just try the API route; at this point it's too expensive and time consuming to give their secret sauce to everyone
>>
>>106881604
same as sdxl
>>
>>106881584
yeah that's what irritates me the most. I can't just swap out a node to use a kijai-based workflow, I have to redesign the entire thing. if I see someone use kijai for a wan workflow I just don't bother.

don't get me wrong, he has some awesome nodes that I use, but his wan wrapper pipeline is cancer.
>>
>>106881623
SDXL doesn't do video.
>>
File: 1665571886157.gif (3.78 MB, 374x356)
>turn on vae tiling
>oom
>>
For wan, does entering the loras in the text prompts also work? I just noticed it popping up as I was typing.
>>
>>106881682
maybe you went for values bigger than the output itself, it has to be lower
>>
>>106881546
we had good genners? when?
>>
>>106881717
It doesn't confine the grid within the image resolution, but expands? Classic.
>>
>>106881740
if you have a 1024x1024 image but decided to go for a 2048 tiled grid, you're not using that node well. Maybe that's not something you did, but the grid size has to be lower than the image itself (duh)
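For reference, a toy sketch of the tiling idea (my own illustration, not the actual node's code): the point of tiled VAE decode is that each tile fits in VRAM, so a tile size at or above the image size degenerates to one full-size "tile" and saves nothing.

```python
# Toy 1-D tile placement for a tiled VAE decode (illustration only).
# A tile size >= the image collapses to a single full-size tile,
# i.e. no VRAM savings -- hence the OOM.
def tile_coords(size, tile, overlap):
    if tile >= size:
        return [(0, size)]  # degenerate case: one tile covers everything
    step = tile - overlap
    coords, x = [], 0
    while x + tile < size:
        coords.append((x, x + tile))
        x += step
    coords.append((size - tile, size))  # last tile flush with the edge
    return coords

print(tile_coords(1024, 512, 64))  # several overlapping tiles
print(tile_coords(512, 1024, 64))  # tile bigger than image: one full tile
```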
>>
>>106881303
Try to sample again to remove blurriness.
>>
>>106881303
Keep in mind that sdxl isn't going to respond as well when you're resizing to a larger size. Also, mess with your CFG. In general, the lower you can keep it the better but you need to test.
>>
>>106881787
Does this work for video as well?
>>
>>106881787
Have you gotten FaceCLIP to work?
>>
>>106881880
>mask
>>
>>106881880
Dude, I'm calling the cops.
>>
>>106881880
>deleted
Anon's going to jail
>>
File: ComfyUI_02724_.png (2.06 MB, 1280x1280)
>>106881787
I guess I can try some more, but it didn't really help.
Setting the denoise as high as in your image would also defeat the point.
>>106881801
>Keep in mind that sdxl isn't going to respond as well when you're resizing to a larger size.
Yes, at high denoise it's going to be rough.
At low denoise it should be fine if I can find a way to get rid of the blur.
>Also, mess with your CFG. In general, the lower you can keep it the better but you need to test.
Well this was CFG 3
>>
>>106881880
What's supposed to be illegal about this?
>>
File: HASAN PIKERRRR.mp4 (557 KB, 866x512)
it's been confirmed it was a shock collar lol
https://www.youtube.com/watch?v=zxUlZEdPStc
>>
ok
>>
Saars.
>>
>>106882209
wtf is this picture doing on fucking civitai on all places lmao
>>
>>106882209
>river clean
AI slop
>>
>>106856257
This is cool anon, I shared it on twitter

>>106882209
how did you get that picture of me
>>
Guys wake the fuck up, they improved the lightning I2V version, fucking finally!
https://huggingface.co/lightx2v/Wan2.2-I2V-A14B-Moe-Distill-Lightx2v/tree/main/loras
>>
i love how the noodles only manage to connect half the time
>>
>>106882209
I'm proud of you
>>
>>106882209
i don't get it
>>
>>106882243
The fuck is a moe distill. And why isn't it v2.
>>
>>106882243
>lora key not loaded:
bruh, it's not compatible with ComfyUI AGAIN, when will they learn?
>>
>>106882342
>The fuck is a moe distill.
Wan 2.2 is a MoE model: it's a 28B model in total, but it uses one half of its weights (the HIGH-noise model) for one specific task (the first denoising steps) and the other half (the LOW-noise model) for another (the last steps).

"Distill" means it's meant to run at CFG 1 with few steps (4 steps for those lightning loras)
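A minimal sketch of that HIGH/LOW handoff (illustrative only, not ComfyUI's actual code; the 0.875 boundary is an assumption here, the real switch point depends on the model variant): the HIGH expert takes the sigmas above the boundary, the LOW expert takes the rest.

```python
def split_steps(sigmas, boundary=0.875):
    """Count how many steps each expert handles for a given sigma schedule."""
    high = [s for s in sigmas if s >= boundary]   # early, noisier steps -> HIGH model
    low = [s for s in sigmas if s < boundary]     # late, cleaner steps  -> LOW model
    return len(high), len(low)

# A 4-step distill schedule ends up as 2 steps on each expert:
print(split_steps([1.0, 0.94, 0.60, 0.30]))       # (2, 2)
```

That 2/2 split is why these lightning loras advertise "2 steps high, 2 steps low".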
>>
Nunchaku Qwen lora?
>>
>>106882243
>>106882343
waiting for kijai's version
>>
>>106882376
the maintainers are holding the PRs hostage. There are TWO (2) implementations waiting to be merged.
>>
Remember this?
https://www.reddit.com/r/StableDiffusion/comments/1o488hl/cancel_a_comfyui_run_instantly_with_this_custom/
Comfy just added an official commit that makes the interruption instant, about fucking time
https://github.com/comfyanonymous/ComfyUI/pull/10301/files
>>
>>106882397
thanks for the heads up, I'll try them out
>>
>>106882430
poggers
>>
>>106882430
he was waiting for somebody else to do it?
>>
>>106882430
I remember anon's post about it an hour before the plebbit repost, yes.
>>
>>106882452
>he was waiting for somebody else to do it?
that's his MO: if no one complains, he won't bother to do it
>>
>>106882430
I thought he'd refuse to ever add that out of pure spite.
>>
>>106882482
yeah same, that's surprising (in a good way)
>>
File: 1752431330480043.png (3.03 MB, 1248x1872)
>>
why isn't there an android app for comfyui
>>
File: 1748490945264625.png (2.44 MB, 1248x1872)
>>
>>106882546
if there were an app it would be iOS only
>>
File: 1740481069561550.png (3.15 MB, 1248x1872)
>>
>>106882479
everyone complained and he said no. then someone else does it and gets praised, so he finally does it by stealing the code. it's that simple
>>
>>106882594
all the code does is raise an exception in all the forward passes when you press "interrupt"; you don't need to be a genius to figure this shit out. I guess he thought it wasn't important and only realized how important it actually was when that reddit post appeared
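As a rough illustration of that mechanism (a toy sketch, not the actual ComfyUI code): the UI sets a flag, every forward pass checks it, and the whole sampling loop unwinds through an exception instead of waiting for the current step to finish.

```python
import threading

class Interrupted(Exception):
    pass

interrupt_flag = threading.Event()    # set by the UI "interrupt" button

def forward_pass(step):
    # toy stand-in for one model forward pass; checking at this granularity
    # is what makes the abort instant instead of waiting out the whole step
    if interrupt_flag.is_set():
        raise Interrupted(f"aborted at step {step}")
    return step * 2

def sample(steps):
    interrupt_flag.clear()            # fresh run
    out = []
    try:
        for i in range(steps):
            if i == 3:
                interrupt_flag.set()  # simulate the user clicking interrupt
            out.append(forward_pass(i))
    except Interrupted:
        pass                          # unwind immediately, keep partial work
    return out

print(sample(10))                     # [0, 2, 4]
```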
>>
>>106882243
>>106882343
I tried to convert the loras with musubi-tuner, but it still says they're not compatible when running them in ComfyUI, weird
>>
I'm trying to follow this guide: https://rentry.org/wan22ldgguide

and everything works but I'm getting this error on the WanVideoSampler: ValueError("type fp8e4nv not supported in this architecture. The supported fp8 dtypes are ('fp8e4b15', 'fp8e5')")

Which makes no sense because the WanVideo Model Loader doesn't error out when I set the quantization to fp8_e4m3fn_scaled and it loads fine.
>>
>>106882676
the setting has to match the model you downloaded, and fp8e4nv compute needs a 40-series (Ada) or newer card. if you have a 30 series, download the e5m2 version
>>
>>106882676
>he has 30 series
LOL
>>
is kontext good?
>>
new wan 2.2 i2v loras dropped:

https://huggingface.co/lightx2v/Wan2.2-I2V-A14B-Moe-Distill-Lightx2v/tree/main/loras
>>
>>106882985
too late anon >>106882243
and it doesn't seem to be working well at 4 steps, all I got is some blurry shit
>>
>>106882987
apparently 12 works well (6/6 or whatever)

ill have to test myself though
>>
File: WanVideo2_2_I2V_00001.mp4 (581 KB, 480x832)
>>106882720
>>106882883
Thanks anons, that was it.
>>
>>106882987
>>106882985
>lora key not loaded
>>
>>106883028
>he fell for it
>>
>lightx2v
Owner
29 minutes ago

This model focuses on training for high noise. For low noise, using the LoRA from Wan2.1 directly still yields good results.

huh
>>
>>106883175
gonna try the new 2.2 for high and 2.1 on low and see what happens.
>>
File: 1755188057189383.mp4 (1.42 MB, 640x640)
>>106883186
the man puts down his banana on the desk in his office, and drinks some coffee from a mug.

oh man, hallucination generator.
>>
I got the most unhinged video with the new loras. The guy literally pushed his dick through the back of the woman's head and came on her breasts.
>>
>>106883210
that's exactly what I got as well, ghosting shit
>>
File: 1752411458952486.mp4 (1.18 MB, 640x640)
>>106883210
that was 3 high, 3 low (new lora). next test, 2 steps high (2.2 new), 8 steps low (2.1)

what the fuck is going on.
>>
File: 053125_00001.webm (1.09 MB, 480x592)
>>
>>106883243
next i'll try 2.1 lora high, and 2.2 (new) low noise, default 3/3 steps.
>>
>>106883210
That's like 90% of my Wan gens now. I don't know what changed, but with or without low step LoRAs the output looks like an after school special's first hit of weed.
>>
>>106882985
it has a readme now
>This approach allows the model to generate videos with significantly fewer inference steps (4 steps, 2 steps for high noise and 2 steps for low noise) and without classifier-free guidance
right... I did that and the results are horrible
>>
>>106883274

>>106883028
>>
File: 1738505113742973.mp4 (913 KB, 640x640)
>>106883254
okay, high noise one is doing something fucked, now it's at least functional.
>>
in a few hours someone will figure out what the issue is; something must be fundamentally different in their sampler setup.
>>
File: 1759052601649031.mp4 (1.46 MB, 640x640)
2.1 high, 2.2 (new) low

the man runs down the street in new york.

lmao
>>
https://huggingface.co/lightx2v/Wan2.2-I2V-A14B-Moe-Distill-Lightx2v/discussions/2

maybe they'll clarify there; they've posted one thing so far (it focuses on high noise, or whatever)
>>
>>106883274
https://huggingface.co/lightx2v/Wan2.2-I2V-A14B-Moe-Distill-Lightx2v/discussions/3#68ee0c1746daa22bb8c791d0
>I tried the ksampler, and it gave me drunk/blurry/oversaturated results, tried 8 different configurations, its a mess, the MoEKSampler fixed it.
what the fuck? what difference could that make? it's just a sampler node
>>
>>106883323
he's a normie retard with zero understanding of how stuff works.
the MoE KSampler just decides which model to use for which step, that's it.
I think for 4 steps it does 1 high / 3 low, but it's been a while since I used wan at all; I'm back to the 1girl grind
>>
so you need a new node?

it's so fresh, the model card isn't even deployed :D

UPDATE: it's updated! though a working workflow would be much appreciated!

"KSampler for Wan 2.2 MoE" for ComfyUI is required, by author stduhpf.

In ComfyUI, use the Custom Nodes Manager to install it.

Afterwards, use these settings by u/ucren:

https://imgur.com/a/iuYsmUu

Sigma shift: can be 3.0 to 5.0, depending on how much motion you want.
>>
>>106883345
>https://imgur.com/a/iuYsmUu
>just go for 12 steps bro
fuck that, the previous loras worked fine at 4 steps lmao
>>
>>106883345
KSampler for Wan 2.2 MoE is in node manager, gonna test and see wtf it does.
>>
>>106880034
>>106880049

kys
>>
File: 1739417700698556.png (41 KB, 366x378)
so it works with one node instead of two. now we see the results/difference.
>>
>>106883375
>12 steps
if it doesn't work at 4 it's completely useless lol
>>
File: 054652_00001.webm (676 KB, 480x592)
>>106883345
it still doesn't work; the ghosting is gone but it doesn't respond to prompts

prompt was:
>the man puts on a pair of glasses
>>
>>106883398
I know. I just wanna test to see if this shit works with it or if it's still drunk-o-vision.
>>
File: 054957_00001.webm (1013 KB, 480x592)
>>106883405
For comparison, using the previous wan i2v lora.
>>
File: 1729290467491110.mp4 (1.25 MB, 640x640)
the man runs down the street in new york.

okay, it's clear at least and not drunk vision, need to test more/less steps.
>>
>>106883420
how many steps was that one?
>>
>>106883428
12, trying 8 now, the sampler wants to switch at 3 steps apparently, but idk since I just started trying it. now with 8 steps it is doing 3/5.
>>
>>106883432
*also that's with the other sampler, "KSampler for Wan 2.2 MoE"

can't say if it's better or worse, but at least this one had no drunk vision.
>>
there is discussion on the huggingface apparently

I don't mean to be rude, but there's so much misinformation it's baffling. You don't need any useless custom nodes. It seems people don't even bother reading simple instructions anymore. You MUST:

use the EULER sampler;
use the SIMPLE scheduler;
use shift 5;
use 2 steps for HIGH;
use 2 steps for LOW;
possibly use the official settings (480x832 / 720x1280, 81 frames, 16 fps), though slight variations are possible and more or less work.

SHIFT 5 => SIMPLE scheduler => 2/2 steps => the sigmas used during distillation. You can very easily verify this yourself inside ComfyUI.
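For what it's worth, those settings are self-consistent: with an idealized linear "simple" schedule, applying the standard flow-shift remap s' = shift*s / (1 + (shift-1)*s) at shift 5 pushes two of the four sigmas above a ~0.875 HIGH/LOW boundary and leaves two below it, matching the 2/2 split. A rough sketch (the real simple scheduler and the exact boundary value may differ slightly):

```python
def simple_sigmas(steps):
    """Idealized 'simple' schedule: sigmas spaced linearly from 1.0 down to 0.0."""
    return [1 - i / steps for i in range(steps + 1)]

def apply_shift(sigmas, shift=5.0):
    """Flow-matching shift remap as commonly implemented for models like Wan."""
    return [shift * s / (1 + (shift - 1) * s) if s > 0 else 0.0 for s in sigmas]

sig = apply_shift(simple_sigmas(4), shift=5.0)
print([round(s, 3) for s in sig])   # [1.0, 0.938, 0.833, 0.625, 0.0]
# the first two sigmas are >= 0.875 (HIGH model), the next two below (LOW model)
```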
>>
>>106883485
that's what we tried, and we got ghosting shit
>>
>>106883493
kijai will fix it
>>
>>106883503
>kijai will fix it
KJBoss is here, I'll test out his lora
https://huggingface.co/Kijai/WanVideo_comfy/tree/main/LoRAs/Wan22_Lightx2v
>>
>>106883530
my hero
>>
File: 1729338242265051.mp4 (1002 KB, 640x640)
the anime girl eats a slice of pizza.

wtf (this isnt with the kijai one)
>>
>>106883530
I can confirm it's working again with Kijai's format
>>
>kijai lora didnt give the key error
so now we see what it can do
>>
>>106883530
i've been gone for 5 days, how is this lora different from the other 2.2 light loras?
>>
File: 061045_00001.webm (847 KB, 480x592)
based kijai

>>106883549
now im hungry
>>
>>106879714
If that takes you 30 minutes you need to fix your flow bro
>>
>>106883566
the previous 2.2 I2V lora had a big motion problem (slo-mo shit); let's hope this one doesn't
>>
File: 1758940797112827.mp4 (724 KB, 480x640)
3/3 steps, new lora high, 2.1 lora low (only the high one is out so far for kijai version), comfy template workflow/settings.

the anime girl eats a slice of pizza

we are so back. now I will test with the old 2.2 low lora.
>>
File: 061207_00001.webm (1 MB, 480x592)
>>106883414
>>106883405
now its working
>>
>>106883581
>2.1 lora low
where do you find that one?
>>
File: 1738383370737855.mp4 (753 KB, 480x640)
>>106883581
with 2.2 lightning low, and 2.2 high (new kijai one):

seems good also? so this new 2.2 high (kijai) fixes the old 2.2 high lora which fucked motion, I guess. 2.2 low lightning lora seems to work with it fine.
>>106883599
you just use the regular 2.1 i2v lora, before you used one for both.
>>
>>106883608
>you just use the regular 2.1 i2v lora
I don't have it, that's why I'm asking lol
>>
>>106883611
https://huggingface.co/lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v/blob/main/loras/Wan21_I2V_14B_lightx2v_cfg_step_distill_lora_rank64.safetensors
>>
>>106883614
thanks
>>
wanbros, dare I say we're so back?
>>
>>106883530
it's really annoying that every time they release a lora, the format never works in ComfyUI. you'd think they'd have learned to nail that by now
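"lora key not loaded" is usually a plain key-name mismatch: the loader matches each LoRA tensor name against the model's parameter names and silently skips anything that doesn't line up. A hedged sketch of the kind of prefix remap that converters effectively do; the prefixes below are made-up examples, not the real key names.

```python
def remap_keys(state_dict, old_prefix, new_prefix):
    """Rename every key starting with old_prefix so the loader can match it."""
    out = {}
    for key, tensor in state_dict.items():
        if key.startswith(old_prefix):
            out[new_prefix + key[len(old_prefix):]] = tensor
        else:
            out[key] = tensor        # leave already-matching keys alone
    return out

# hypothetical example key name:
sd = {"model.blocks.0.attn.lora_down.weight": "w"}
print(remap_keys(sd, "model.", "diffusion_model."))
# {'diffusion_model.blocks.0.attn.lora_down.weight': 'w'}
```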
>>
>>106883626
well this new lora apparently fixes what 2.2 loras fucked up (motion), kijai one is working + no ghosting bs. and 2.2 low (old one) seems to work fine. the high one caused all the slow mo before.
>>
>>106883630
their qwen loras work fine; I'm not sure what the issue is with wan that they always need to be fixed by kijai. It was like that last time too, I think we had to run them at 0.3 strength
>>
File: 1750907355123134.mp4 (1.24 MB, 640x640)
the anime girl eats a slice of pizza

2.2 high lora (kijai, new), 2.2 lightning low noise lora, 1 strength for both:

yep I think we're back.
>>
File: 1742268976830962.mp4 (1.48 MB, 640x640)
>>106883649
the anime girl picks up a bucket of popcorn and eats some popcorn from it.

https://huggingface.co/lightx2v/Wan2.2-Lightning/tree/main/Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1

low noise one from there

https://huggingface.co/Kijai/WanVideo_comfy/tree/main/LoRAs/Wan22_Lightx2v

high noise from here. I still need to test though if 2.1 is better than 2.2 low noise though.
>>
>>106883649
>>106883626
we're so back is not even real
>>
>>106883661
not sure what these vids are supposed to prove
>>
>>106883673
the new kijai lora works and fixes the ghosting bullshit. it also has normal motion unlike the old 2.2 lora.
>>
you know it's funny I said kijai will fix it and who knew he uploaded a lora around the same time. mvp for fixing this shit.
>>
File: 1737421493310499.mp4 (1.49 MB, 640x640)
the anime girl walks to the left through a door, and closes it.

not exactly, but we got more motion now.
>>
>>106883691
KJGod is always here to save us anon, remember that!
>>
File: 1746636565225268.mp4 (1.22 MB, 640x640)
>>106883699
the anime girl runs to the left through a door in the white room, and closes it.

a few more specifics and poof:

HOLY SHIT WE GOT MOTION based lightx2v + kijai. setup: 2.2 lora (kijai), 2.2 lightning low noise.
>>
File: 00155-129326675.png (2.47 MB, 1824x1248)
>>
File: 1743262467189654.mp4 (1.11 MB, 640x640)
>>106883744
*new lora for high noise model, old 2.2 lightning low for low noise model. 1 strength for both.

wan 2.5 came early anons (not really but this lora fixes the 2.2 issues)

now we get the 2.2 motion instead of 2.1 (the loras weren't working properly for 2.2 before).
>>
Kijai

[+1] 7 points 32 minutes ago

Something is off about the LoRA version there when used in ComfyUI. The full model does work, so I extracted a LoRA from it that at least gives similar results to the full model:

https://huggingface.co/Kijai/WanVideo_comfy/blob/main/LoRAs/Wan22_Lightx2v/Wan_2_2_I2V_A14B_HIGH_lightx2v_MoE_distill_lora_rank_64_bf16.safetensors
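"Extracted a LoRA" here means: take the weight difference between the distilled model and the base model, then keep only its top singular components so the delta is approximated by a low-rank up/down pair. A hedged sketch of the idea with numpy (real extraction tools work per-layer on the actual checkpoints):

```python
import numpy as np

def extract_lora(base, tuned, rank=64):
    """Approximate (tuned - base) by a rank-`rank` product up @ down via SVD."""
    delta = tuned - base
    U, S, Vh = np.linalg.svd(delta, full_matrices=False)
    up = U[:, :rank] * S[:rank]     # (out_dim, rank), singular values folded in
    down = Vh[:rank, :]             # (rank, in_dim)
    return up, down

rng = np.random.default_rng(0)
base = rng.standard_normal((128, 64))
tuned = base + rng.standard_normal((128, 2)) @ rng.standard_normal((2, 64))
up, down = extract_lora(base, tuned, rank=2)
err = np.abs(tuned - base - up @ down).max()
print(f"max reconstruction error: {err:.1e}")  # near zero: this delta was exactly rank 2
```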
>>
File: 1754691541846787.mp4 (767 KB, 480x640)
the japanese girl turns around and runs far away on the beach.

now i'm gonna try with the new lora for low too. but it works well.
>>
File: 1743893045575950.mp4 (891 KB, 480x640)
>>106883777
same lora for high/low (kijai)

yep we're back
>>
>>106883788
>same lora for high/low
what do you mean? the new high lora can be applied to both the high and low models and it'll work?
>>
File: 1756101544569083.mp4 (842 KB, 480x640)
>>106883800
well, it sure seems to work with this test. high worked great and seems to work great with low too (and the motion is fine)

interpolated but it's very good:

ill use a new image for more tests.
>>
File: 00007-3643619576.png (2.69 MB, 1824x1248)
>>
>>106883808
update

Kijai

[+2] [score hidden] a minute ago

Just on the high noise, they didn't release any new low noise LoRA since the old 2.1 lightx2v distil LoRA works fine on the low noise model.
>>
File: 00028-3500161865.png (2.63 MB, 1824x1248)
>>
The new kijai light lora just seems to hang on my system.
>>
File: 1744606251153714.mp4 (1.34 MB, 480x704)
>>106883818
and yes, the 2.2 (new) + 2.1 distil lora does in fact work as you can see.
>>
is the old 2.1 low lora better in any way than the 2.2 4step low lora?
>>
File: 1734509820800185.mp4 (983 KB, 640x640)
>>106883846
here's a more fun example to test.
>>
>>106883016
Oh I see a nipple, goodbye anon!
>>
>>106883882
kijai said 2.1 distil lora works fine, so try that first
>>
File: 00039-3123253239.png (2.53 MB, 1824x1248)
>>
File: 1752543441574154.mp4 (712 KB, 640x480)
a large dog punches the man with glasses in the face, and a huge lightning strike hits the man, who falls to the ground.

lmao, just in time for horror month.
>>
>>106883894
why is he wearing a bra
>>
>>106883969
thats why he's so upset I guess
>>
>>106883969
Manssiere!
>>
File: 1757627948684236.mp4 (592 KB, 640x480)
>>106883951
the man holds up a black remote and presses a button, causing a huge lightning strike to hit the dog behind him, who bursts into flames.

holy shit. wan 2.2 is back.
>>
File: 1745275164733247.mp4 (617 KB, 640x480)
>>106883999
>>
>>106883990
take your schizo back
>>
hm rife vfi node for interpolation is faster than film vfi, maybe not the same quality but it's good for fast gens.
>>
File: 00073-3579559290.png (2.82 MB, 1248x1824)
>>
>>106884119
back where? because, personally, to me that looks like an avatartranny that finally found the cesspit its kind belongs in
>>
>>106884204

>>106883549
>>106883581
>>106883608
>>106883649
>>106883661
>>106883699
>>106883744
>>106883760
Explain this retard then
>>
File: AnimateDiff_0000112.mp4 (3.72 MB, 960x528)
>>106883951
>>
>>106884226
a new lora came out and im testing it, meds.
>>
>>106884226
looks like an anon testing stuff out, not an ostracized faggot begging for the attention its parents never gave it
>>
>>106884235
>>>/g/sdg/
>>
>>106884231
kek wan was made for this stuff
>>
>>
>miku apologists
no wonder your general is always headed for page 10
>>
File: 1738010731523535.mp4 (819 KB, 640x640)
image source is qwen image edit + miku and scam altman (2 sources)

the anime girl on the left fires the black pistol, and the man on the right falls to the ground. she gives a thumbs up gesture.

we are SO back. the motion is way better now; using the 2.1 loras was faster but gave you 2.1-style motion, not 2.2's no-lora motion.
>>
>>106884278
>image unrelated
>>
File deleted.
>>106884278
better fate for scam:
>>
Migu faggots get the rope
>>
>>106882133
nice
>>
>>106884317
anime website
>>
File: 1755066534670263.mp4 (909 KB, 704x480)
this new lora is great, the motion of the doggo is even better now.
>>
File: 1736940116514399.mp4 (1.1 MB, 704x480)
DORYA!
>>
new
>>106884374
>>106884374
>>106884374
>>106884374



All trademarks and copyrights on this page are owned by their respective parties. Images uploaded are the responsibility of the Poster. Comments are owned by the Poster.