/g/ - Technology

Discussion of Free and Open Source Text-to-Image/Video Models and UI

Prev: >>106657828

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/tdrussell/diffusion-pipe

>WanX
https://comfyanonymous.github.io/ComfyUI_examples/wan22/
https://github.com/Wan-Video

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Neta Lumina
https://huggingface.co/neta-art/Neta-Lumina
https://civitai.com/models/1790792?modelVersionId=2122326
https://neta-lumina-style.tz03.xyz/

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbours
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
>>
the sex & fetish thread (continued)
>>
Comfy thread of locally diffusing Bytedance's Seedream 4.0 through the power of ComfyUI's API Nodes
>>
so what now?
>>
>>106661743
Nogen, pls
>>
>>106661748
wan2.5 waiting room
>>
>>106661748
local nanobanana but pixel space waiting room
>>
>nigbo
>>
is there any point in using NAG with chroma (with flash lora)?
>>
File: Wanimate_00033.mp4 (1.2 MB, 832x480)
>>
Are these settings correct for standard Wan 2.2 i2v with lightx2v, using the correct i2v loras? Top is high noise sampler, bottom is low noise. I'm running it at 720p.
The videos it produces are relatively clean, but lower quality visually than Wan 2.1 with lightx2v. Like, it's more aliased and grainy looking.
>>
>nigbobumping this thread
>>
File: ComfyUI_00420_.png (2.12 MB, 1536x1536)
I love turning things into figures using qwen edit, if only the text was better.
>>
File: Wanimate_00034.mp4 (2.28 MB, 832x480)
>>
This may be a dumb question but is chroma the new top tier model now? Does comfyUI still work/play nice?
I work on a glowie ship in an IT capacity. I recently got ahold of three (badly abused) H100 systems and I have an entire empty cabinet to use/experiment with. I have no outside communication (not even radio) when at sea, so local/standalone AI is pretty ideal.
>>
>>106662299
No and don't let the people here gaslight you into believing it is.
>>
>>106662299
There's no way that profile is real or a real woman
>>
File: 50523152.mp4 (3.84 MB, 960x528)
>>
>>106662299
lots of red flags.
Anyway, depends on what you wanna gen. If I were you I'd get:
>Noob/Illustrious and some loras for your anime needs (also would try Neta Lumina)
>For realism get Chroma and SRPO (both of them are flux finetunes)
>For fun gens get Qwen Image / Qwen Image Edit - it can do both anime/realism but is kind of sloppy. This is my go to model now for playing around.
>For video, get Wan 2.2 and Wanimate
also send a card over to me thanks
>>
>>106662149
post whole wf
unipc should be better than euler
>>
>>106661748
>so what now?
Flu season. But also waiting for wan 2.5 as the other anon said. This is a good moment for using your GPU to do other stuff like games, or to work on a small-medium sized project with wan 2.2 like a little skit or music video

>>106662299
Blow weed smoke up that furfag's bussy anon, you only live once
Ray Blanchard (one of the most important sexologists on the planet) just announced that research has confirmed traps aren't gay. You're probably never gonna get a better cope than that or political climate for faggotry for the next 20 years so seize the day

As for the text of your post, I'd suggest setting up GLM 4.5 if the glowies are okay with a Chinese LLM running on their hardware; then you have a local version of Claude at home to do whatever you want when you're sailing around
>>
>>106662403
>unipc should be better than euler
You got a side by side to assert this claim? I switched from unipc to Euler months ago and they're both basically the same for videos
>>
>>106662403
Here it is. The LoRAs are the right ones, high noise for high, low noise for low. Tried it with and without NAG, lower length, etc. Motion and adherence are better than 2.1, but the image quality is noticeably worse.
>>
>>106662403
euler is better. no way unipc
>>
>>106662358
https://i.4cdn.org/wsg/1758525503416630.mp4
>>
>>106662475
Set high noise sampler to 8 steps, start at 0, end at 4. Set low noise sampler to 8 steps, start at 4, end at 8. It'll look better. Twice as long gen time, but it doesn't matter with lightX2V.
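The split above is just step bookkeeping, which can be sketched like this. The start/end numbers mirror the start-step/end-step widgets on an advanced-KSampler-style node; the function name here is illustrative, not ComfyUI's actual API:

```python
# Sketch of the advice above: one 8-step schedule shared by two samplers.
# The high-noise model runs the early (noisy) steps, the low-noise model
# finishes the schedule. Names are illustrative, not ComfyUI node API.
def sampler_ranges(total_steps: int, split: int):
    """Return the step indices each Wan 2.2 sampler should run."""
    high_noise = list(range(0, split))           # start=0, end=split
    low_noise = list(range(split, total_steps))  # start=split, end=total
    return high_noise, low_noise

high, low = sampler_ranges(8, 4)
print("high noise sampler runs steps:", high)  # [0, 1, 2, 3]
print("low noise sampler runs steps:", low)    # [4, 5, 6, 7]
```

Both samplers are set to the same total (8) so the noise schedule lines up; only the start/end window differs.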
>>
>>106662424
nothing that i can post right now; try a side by side yourself on the same seed with higher-motion actions. unipc is always better at preventing motion blurriness and keeping body parts consistent
>>
>>106662475
>wanvideonag
Is it technically equivalent to use nag with cfg1 and to use cfg > 1 with no nag in terms of respecting the prompt?
And is using nag a way to get the speedup that cfg1 gives?
>>
once again thinking about a $2000 printer to make high quality wallet-sized printouts of my best 1girls.

I'd like to keep some by my bedside at night to contemplate as I drift out of consciousness
>>
>>106662606
maybe u'll isekai faster that way, you will be doing a service to the world
>>
>>106662403
unipc's whole shtick is being good-enough at very low steps (e.g. 12-16), but it definitely does not perform as well as euler at 30+ steps
>>
>>106662606
Will you be printing them out in most degenerate situations?
>>
>>106662621
The anon he's replying to is using the 4 step lightning lora for wan, so yeah
>>
>>106662627
I don't know if it applies in a situation like that, outside my area. I'll leave that for others to argue
>>
>>106662624
some of them yes...
>>
>>106662299
Sadly, from personal experience women and weed do not mix well. Who knows, she might be the unicorn who's super chill and just wants to share gens and workflows. As for chroma, it tends to struggle with limbs but it excels at creating unique faces (something other models desperately need to start doing).
>>
how does one update neoforge? it just got qwen support.
>>
>>106662615
I don't know anime memes so the meaning of this is lost on me. I don't really care about servicing the world, I just want something pretty I can set my eyes on sometimes when my heart feels restless. I think 1girls can heal
>>
>480p takes 160s to gen
>720p takes 220s to gen

SPEEEED

>>106662681
Right click inside the root folder>open in terminal>git pull
>>
>>106662498
That mostly fixed it. I tried increasing low noise steps myself, but the video ended up corrupted. Now I see why: I didn't adjust high noise along with it. Thanks, anon.
Also tried unipc, but it causes heavy artifacts with the light loras or my specific config, so euler it is.
>>106662556
I remember it helping with prompt adherence and motion with 2.1, but I haven't fucked with video gen for awhile now, I'm just getting back into it today. I have to test if it makes any difference with 2.2 or light/cfg1.
>>
>>106662704
thanks. those are your qwen gen times? damn.. I dont want to do it now
>>
Does anyone have a good guide link on how to prompt for wan? I'm getting such random results, prompting like flux etc.

>>106662709
Wan 2.2.
>>
>>106662667
>As for chroma, it tends to struggle with limbs but it excels at creating unique faces
I need to make >>106653036
a bit shorter and more pointed so I can turn it into copypasta
>>
>>106662704
>720p takes 220s to gen
how
>>
File: Promo_BrieLars_02.jpg (1.22 MB, 3840x2160)
not sure if there is a better thread to ask about this but just out of curiosity:
Does anyone know if creating better textures/models for VR experiences is a use case where AI is going to do some heavy lifting?
Image gen and videos are getting amazing and I would love for VR models to look half as good, but I'm wondering whether this is a texture/work/skill problem on the 3D artists' side or an engine limitation?
>>
>>106662366
Yeah, and on top of that this is in Japan. But there's quite a few smokers here even though it's supposed to be super illegal.

>As for the text of your post, I'd suggest setting up GLM 4.5 if the glowies are okay with a Chinese LLM being run on their hardware and then you have a local version of Claude at home, to do whatever you want when you're sailing around.
Obviously everything is airgapped or in a SCIF. It's my personal hardware and my own testing space apart from work.

>>106662366
I know a one-click solution isn't the goal, but I'm more interested in anime/manga comics. All the finetunes for doing coherent panels I've seen are pretty bad imo. I've been using Illustrious on my personal machine and so far I'm very happy.
>>
>>106662707
>I remember it helping with prompt adherence and motion with 2.1, but I haven't fucked with video gen for awhile now, I'm just getting back into it today. I have to test if it makes any difference with 2.2 or light/cfg1.
I use something like 3 for high noise and 1 for low noise, I'd be happy if cfg1 is enough and I can get double the speed there.
>>
>>106662692
kys
>>
>>106662739
Oh yeah, I wanted to clearly state that I have this hardware I'm trying to min/max for image generation. I'm so used to the standalone version of Comfy that I'm having a bitch of a time getting it to work with this machine, which by default runs a type 1 hypervisor and shares compute.
>>
>>106662722
I prompt literally. exactly what I need. works fine
>>
What does Wan Animate Relight LoRa do?
>>
File: ComfyUI_00560_-1(1).png (834 KB, 1884x867)
>>106662366
I would send a card but it's in the SXM form factor. You wouldn't want it anyway; these cards were already failing and they are flaky as fuck.
>>
>>106662730
720p, square ratio, light loras.
Both gguf q8 and fp8 are around the same speeds. fp16 in 418s.

>>106662762
Guess I'm just a shitty proompter then.
>>
>>106662808
what gpu?
>>
>>106662796
>smx
yeah, no use for me sadly.
>hes the janny+chocola anon
based, but I will reiterate that chocola is a loli, not a hag
>>
>>106662808
>Both gguf q8 and fp8 are around the same speeds. fp16 in 418s.
This is the time I get on a 5090 in q8 with 4/10 steps in 720p, can you share your wf?
>>
>>106662734
VaM is held back by using ancient Daz3D models and untalented artists. I can tell your picture is just a lazy FaceGen texture slapped on the default model and adjusted with sliders, the fact that fag is shilling a patreon too is hilarious.
>>
File: WA22.mp4 (3.91 MB, 1920x804)
>>
>>106662839
>VaM is held back by using ancient Daz3D
And a very dated, unoptimized engine. I wonder if the fabled VaM2 was ever released.

>untalented artists
The best artists sharing scenes I've seen were korean/japanese, for some reason western ones had the ugliest girls on average.
>>
>>106662813
>>106662829
5090 as well.

I get incredibly long times without the loras.

Cannibalized some default workflow that let me use FFLF without much colorshift.
nsfw https://files.catbox.moe/28g9n9.png
>>
File: 1894132418.png (3.64 MB, 1248x1824)
>>
File: 1730518169064108.png (149 KB, 1747x553)
>>106662863
Oh that's why, I do 4 steps high and 6 low, while you do 2 and 2, thus half the speed.
I thought the 4 steps was per model so 8 in total.
>>
File: ComfyUI_01138_(2)(1).png (1.99 MB, 1149x1680)
>>106662826
Maybe my definition has changed, but I don't ever remember chocola being *loli*.

Then again, with the stupid shit zoomers pull on xitter (I like the art style but it's *problematic* not to separate fiction), I'm not surprised.
>>
>>106662863
you should use triton and sage
>>
>>106662862
>vam2
I saw yesterday that it went into a closed beta this year (certain patreon tiers) and they are collecting creator resources in their hub since July.
I agree that the project is full of people looking to make a quick buck from horny dudes with questionable quality, not least the engine itself. But I also realize that kind of development costs money and effort. I have been checking it out for a while and I hope the new engine incorporates the lessons learned from people tinkering with it hands on.
My question however remains: can ai tools improve the quality of the models? at least in the new engine? or is that just a use case it is not good for.
>>
>>106662884
The loras confuse me. As before I couldn't change the steps or it just genned broken noise, but other times I can increase it and it works just fine. I doubled the steps up to 8 total, 30s longer gens.

>>106662892
Isn't it using sage attention?
Didn't know of triton. I'll look into it.
>>
>>106662722
If you're trying to do sex animations I suggest just looking for a lora
>>
>>106662894
>My question however remains: can ai tools improve the quality of the models? at least in the new engine? or is that just a use case it is not good for.
If by AI you include machine learning, obviously it can improve stuff, look at what dlss does.
If you mean big LLMs, we are 10 years from being able to run any of this in real time.
>>
>>106662894
>>106662894
>My question however remains: can ai tools improve the quality of the models? at least in the new engine? or is that just a use case it is not good for.
Afaik currently there are no open source AI models that can create a (good) 3D model or an accurate face texture from reference images. Though video gen has progressed so fast in the last year that perhaps 3D isn't far off
>>
Holy shit. Even a paid videogen model couldn't do this a year or two ago. The Relight LoRA appears to make anime more realistic; definitely going to leave that off for 2D stuff. I am going to need to scrape more dancing girls among other reference videos.
>>
>>106662939
>Though video gen has progressed so fast in the last year that perhaps 3D isn't far off
Based on what I saw with hunyuan3d which was the ML equivalent of a shitpost, I give it less than 3 years. Maybe it won't be local but remember that child actors are a liability and have annoying work and labor laws so if we stop masturbating to the idea of Hollywood Jews all raping kids for a second we can realize that everyone's interests are aligned in being able to make high quality 3d models of [real] people

I'm excited to walk around a VR mansion in 15 years in my bedroom with sunglasses on and an omnidirectional treadmill as all of the prettiest social media girls between the ages of 9-16 of the last 15 years are walking and lounging around and you can interact with them and talk to them and stroke their hair as you masturbate to their pretty faces
>>
>>106662930
>in real time
no, that is clear

>>106662939
thanks. I guess we have to hope the video game and movie studios get into this and it bleeds into the nsfw space.
>>
>>106662892
So triton just needs to be installed, no extra nodes or command lines running?
I added the Model Patch Torch Settings node to enable fp16 accumulation, but I guess that's just for the actual fp16 model? I use gguf q8 mainly.

>>106662909
I do have a bunch of them. I tend to rely more on the basic sfw prompting, could be my mistake.
>>
>>106662990
Your best current hope for celebs in your VR porn games is Kojima keeps scanning every actress in Hollywood and vam2 makes it easier to port the models in rather than relying on awful Daz3D zwrapped abominations
>>
File: elf hugger_00149_.jpg (979 KB, 1080x1920)
>>
File: Wanimate_00006-noaudio.mp4 (1.69 MB, 1162x544)
>>106662971
How did you get your reference character to stay coherent?
My longer gens always end up like this. At 9 seconds he starts morphing into a deepfried cartoon man
>>
>>106662892
Damn, with triton installed I now get 220s with 8 steps, the same speed I used to get at 4 steps. 160s at 4 steps.

Thanks.
>>
Anyone got access and tested sage attention 3?
For some reason it's been months behind approval only access, not sure why.
https://huggingface.co/jt-zhang/SageAttention3
>>
>>106663114
losing my shit at the first 5 seconds of this
>>
>>106663114

Increased resolution to 784x1360 and Relight Lora disabled. Might not work on realistic gen or on low VRAM machines. I am using 5090x4090 + 124GB Sysram. I am also noticing unwanted double sampling at the tail end since Kijai nodes process them per 77 frames. The video may get fried at the end due to imperfect overlap? Anyway, will need to tune and investigate what is the best settings.
>>
>>106663114
kek, my day is no longer ruined
>>
>>106663169
doesn't it only work on Blackwell GPUs? from what I remember comfy also didn't implement it since it's behind a 'request' wall; once it's FFA he will implement it
>>
Is there a setup like the Meta Batch Manager, which takes a video and processes it a set number of frames at a time, freeing up vram, but for t2v/i2v workflows?

>>106663210
My 4090 is packed up as a backup in a box right now.
You can share the vram? Does the gen speed take a hit from the x8 bandwidth?

As for the unwanted frames, I think you can unhook the Get_frame_count node from the WanVideo Animate Embeds node, then set the total frame count to what your video has, and try to match it up with the batch count.
>>
File: multigpu blockswap.png (133 KB, 1494x654)
>>106663322

No perceptible speed hit with blockswapping 5090/4090. I don't know what motherboard you are using, but my 5090 is using PCIE5x8 and 4090 is using PCIE4x16.

https://github.com/pollockjj/ComfyUI-MultiGPU
>>
>>106663355
Does the offloading gpu simply just use the vram, no processing power? So when you're genning, you're not pulling like 1200w.

Do you feel any impact on the system outside of genning? Stuttering in games?
>>
on a laptop with quadro card and 16gb vram. i got hunyuan to work but 512x512 gens take hella long and I go oom pretty ez
looking at the GGUFs on huggingface, just wanted to know what's the lowest I can go without it being shit
>>
>>106663311
Oh OK.
>>
https://github.com/huggingface/diffusers/pull/12357
>feat: Add QwenImageEditPlus to support future feature upgrades
hold up, let them cook!
>>
any trick to avoid slow motion output from using light loras in the high noise wan model?
>>
File: capture.jpg (57 KB, 460x586)
Unironically what did he mean by this
>>
I have a few possibly retarded questions if you'd be so kind to help a beginner with wan2.2 in comfyUI
1. Is it better for image quality to not run the 4-step lightning lora? and would not running it lead to increased vram usage, or just way longer genning time?

2. I've read wan2.2 prefers long and descriptive prompts that sound like a narrative. Could someone suggest a good prompt generator for that in comfy?

3. If running the "last frame becomes first frame for next video" thing, how does one influence the actions in the next vids? do you change prompts between each gen?
>>
>>106663644
Nothing because it was sent to everyone.
>>
>>106663661
>Is it better for image quality to not run the 4-step lightning lora?
not visual quality but quality of motion and prompt adherence
>and would not running it lead to increased vram usage
no
>or just way longer genning time?
yes
>I've read wan2.2 prefers long and descriptive prompts that sound like a narrative
only for t2v. for i2v prompts are retarded simple
>how does one influence the actions in the next vids? do you change prompts between each gen?
yes
>>
>>106663634
use 6 total steps 3/3 or 8 total steps 4/4.
Then plug a string to float list node on the strength of the light high lora, value 0,3,3 if 6 steps, 0,0,3,3 if 8.
For the one plugged into the cfg: 3,1,1 or 3.5,3,1,1.
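To make the trick concrete, here's a sketch of what those comma strings express: the light (lightx2v) LoRA is off (strength 0) on the first high-noise step so motion is preserved, then on for the rest, while CFG is high only on that first raw step. The comma-string format mimics a "string to float list" node, but the parsing shown is an assumption, not that node's actual code.

```python
# Hedged sketch of the per-step schedule described above. Values come
# straight from the post: "0,3,3" for the light LoRA strength and
# "3,1,1" for CFG, one value per high-noise step (6 total steps, 3/3).
def parse_floats(s: str) -> list[float]:
    """Parse a comma-separated string into a per-step float list."""
    return [float(v) for v in s.split(",")]

lora_strength = parse_floats("0,3,3")  # step 0: LoRA disabled
cfg_schedule  = parse_floats("3,1,1")  # step 0: real CFG, then CFG 1

for step, (w, cfg) in enumerate(zip(lora_strength, cfg_schedule)):
    mode = "no light lora, real CFG" if w == 0 else "light lora on, CFG 1"
    print(f"high-noise step {step}: lora={w}, cfg={cfg} ({mode})")
```

For 8 total steps (4/4) the lists just grow by one leading entry, e.g. "0,0,3,3" and "3.5,3,1,1".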
>>
File: nike_goddess.png (2.5 MB, 1758x1204)
>>
>>
>>106663676
Alright, never even heard of Tensor Art. Looks like a clone of Civitai but everything is paywalled
>>
>>106663704
oh, basically disabling the lora for the first step to preserve motion, I see
what lora loader node accepts floats?
>>
>>106663723
And even more restrictive on nsfw since a payment processor purge some time ago. It used to be unmoderated so you could find celebs, and loras that got banned from civitai.
My guess is that since the thing is now restrictive, they're losing users left and right.
>>
File: Wanimate_00035.mp4 (967 KB, 832x480)
>>
>>106663735
The lora loader from the wrapper works just fine, don't know about in native.
It not only fixes the slow motion but also the colors and light.
>>
I wonder if he used AI to use Destiny's face lol
https://www.youtube.com/watch?v=wX5YHQb94IY
>>
File: 1742292724590571.png (1.45 MB, 1233x1394)
they'll do anything but unslop that model, right? when Qwen Image SRPO??
>>
>>106663788
ok will check, thanks anon
>>
>>106663723
People use Tensor Art because it's more accessible, especially for free users. You can upload your models there as an alternative to Civitai's shitty image gen service. The NSFW restriction lately only applies to posted images as well, so as long as there's nothing adult you can host things there. The paywalls and exclusive models are bullshit though.
>>
>>106663926
Not the same team, so SRPO would be like an affront to the Qwen Image team or something like that.
>>
>>106663944
*nothing adult in the example images
>>
>>106663946
what do you mean? Tencent said they're ok to do SRPO on their rivals' (Alibaba) model (Qwen Image)
https://github.com/Tencent-Hunyuan/SRPO/issues/2#issuecomment-3274994781
>>
>>106663944
>The NSFW restriction lately only applies to posted images as well,
it's a big deal though; how am I supposed to find good quality NSFW loras if there are no examples to look at?
>>
>>106659309
>a lot of chronic symptoms
like slowly inflating all day and farting like a locomotive all night.
my SiL has achalasia, so mostly air and fluids get through into her stomach. the last time she stayed with us, she told me the doctor had never seen so much air inside someone before, which at the time I took as a fun fact and not a warning. she's not big, but after we all went to bed, I could hear her ripping absolute ass through two doors until I fell asleep.
>>
>>106663966
If they host on both Civitai and Tensor then they might link it on their Civitai page. If it's some exclusive model trash then you're fucked
>>
>>106663962
Oh, for some reason I thought they were both from Alibaba.
>>
>>106663983
>If they host on both Civitai and Tensor then they might link it on their Civitai page.
so... I'll just use civitai only then lol
>>
>>106663788
Trying other lora loaders and I end up with the same error.
>>
>>106663991
I mean if you're already running local then yeah. Tensor is only appealing for freetards who can't run a local setup and fags trying to make money with exclusive models.
>>
>>106663992
No idea, it may only work with the wrapper then.
>>
>>106664000
>>106663992
Has only ever worked with a wrapper. You think comfy would come up with an idea as useful as controlling the strength of a LoRA at every step?
>>
>julien
>>
Civitai is an awful, bloated mess of a website. It's insane that there still isn't a good alternative that's just about hosting and previewing your models/loras/workflows: no on-site generation or training, no paywalls, no buzz ecosystem
>>
>>106664000
>>106664030
Mind sharing a workflow that make use of this?
I'd love to experiment.
>>
https://github.com/FizzleDorf/AniStudio

Ani, if you are here: I'm a techlet, how do I download VS Code?
>>
boy cant wait for this fake request for help to advertise this shitty imgui ui!!!
>>
*yawn*
>>
>>106664045
for those who want to know, here's the filter to add on 4chanX (comments tab) to not see this trash anymore
>/^https:\/\/github\.com\/FizzleDorf\/AniStudio/i
>>
>>106664060
Exelent! Can you make me one for Chroma?
>>
>>106664078
>he's too retarded to add a word to a filter
>>
>>106664083
kek
>>
>>106664083
is there a way to filter all mean things said to me before i read them
>>
>>106662971
don't want to sound rude but it looks bad, the morphing is off the charts
>>
>>106664039
There's dozens of sample workflows inside the wanvideowrapper folder in the custom_nodes folder.
Just drag the base 2.2 one for t2v or i2v.
>>
is anyone here also retarded enough to try and do wan video gen on Forge Neo like me? I feel like I'm going insane since there is virtually no documentation on how to actually set this up
I did manage to get a single video of a dog to generate, but have been somehow incapable of making anything since... I should really just move to ComfyUI for video gen, but I like the UI of the basic A1111 too much to leave just for video gen.
Vid related. It's the dog.
>>
>>106664035
if you're going to have to make money to pay for hosting and storage, you might as well make more than you need. for everything else, there's torrenting.
monetization and incentive are harder problems than people give credit for. I couldn't have raised billions of dollars for nothing like OpenAI.
>>
>>106664164
it is indeed a dog
it was a difficult switch to comfy for me too, but it was worth it, you can use even the basic workflow for good results
>>
>>106664045
I am at the airport rn, landing in Japan tomorrow. I probably won't have time to keep working on it until late tomorrow night.

>>106664055
what is shitty about imgui?
>>
>>106664118
yeah, close your eyes nigga, walk away from the screen
>>
>>106663849
>I wonder if he used AI
no shit. also that guy has way too much free time
>>
Wish there was a way to train a wan video lora directly in comfy.

>>106663926
Based Ostris, his runpod templates are pretty good.
>>
File: close your eyes nigga.png (143 KB, 1843x515)
>>106664118
your skin is too thin for this site
>>
>>106664154
Oh shit, wasn't aware of that.
Either the workflow or the stepped strength did huge difference. Never seen this amount of motion before.

nsfw: https://files.catbox.moe/fgsrho.mp4 It's like she's trying to take away the toy from a dog.
>>
>>106664268
kys
>>
>julien """stealth""" advertizing his product with commercial license again
>>
File: WanVideo2_2_I2V_00005.mp4 (568 KB, 480x480)
Strange, the wanvideo2_2I2V_A14B_example_WIP workflow only lets me gen one result. If I gen again it just puts the same first result back in.

>>106664305
You first.
>>
>>106664344
Did you change the seed...
>>
>>106664364
you know fucking well they didn't
>>
>>106663019
i remember you anon, the tech has finally caught up with your prompting
>>
>>106664364
Why would you ever want a fixed seed for video? First time I've come across a workflow with it.

But good thing I brought it up. The high noise sampler needs a fixed seed, right? And the low noise at 0? I'm seeing very similar results between vastly different seed values. Is this workflow simply that strong with its prompt adherence?

Is there a node I can slot into randomize the high noise seed?

I am getting half the gen times in this workflow, with like 20x the fidelity. Doesn't make any sense.
>>
>>106664430
you're a bona fide retard
>>
File: 1751725515529871.jpg (14 KB, 297x119)
>>106664430
>>106664430
>Why would you ever want a fixed seed for video?
Lots of reasons, so you can experiment with step count, shift values, schedulers, lora strength, etc.
>The high noise sampler needs a fixed seed, right? And the low noise at 0?
No? they can be whatever you want
>Is there a node I can slot into randomize the high noise seed?
Bro, seriously. Picrel
>>
5090 anons, this is going to hurt your pride, but the best way to train Chroma is with the default OneTrainer settings; the only thing you should change is the rank, to about 128 (alpha 64), which can handle batch size 2. It is the most retard-proof way to get a good Chroma lora while leveraging your 5090
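As a rough sketch, the delta from defaults the post suggests looks like this. The key names are made up for illustration; OneTrainer's actual config schema differs, so treat this as a summary of the advice, not a real config file:

```python
# Hypothetical summary of the training advice above: everything stays at
# OneTrainer defaults except these three values. Key names are
# illustrative, not OneTrainer's real JSON schema.
OVERRIDES = {
    "lora_rank": 128,   # "rank to about 128/64" -> rank 128...
    "lora_alpha": 64,   # ...with alpha 64
    "batch_size": 2,    # what that rank can handle on a 5090
}

def apply_overrides(defaults: dict) -> dict:
    """Merge the three suggested changes over untouched defaults."""
    cfg = dict(defaults)   # copy so every other default is preserved
    cfg.update(OVERRIDES)
    return cfg
```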
>>
Hello,
Yesterday I posted about an issue I had with the Krita plugin with Comfy and SDXL; the thing is that I can run and generate without problems in Comfy, A1111, Swarm, and Invoke.
Where can I complain about this?
Has anyone complained before about a diffusion plugin or extension?
I have logs, .json everything
>>
File: WanVideo2_2_I2V_00009.mp4 (2.12 MB, 704x704)
>>106664468
Oh shit, you can hover over parameters to see what they can do. I've been a forge user for years.
All the workflows I've come across so far has always had the low noise at seed 0 with a fixed seed. Each time I change to a new premade workflow I get more confused.
>>
>>106664506
file bug reports here: https://github.com/Acly/krita-vision-tools/issues
>>
what is the difference between euler and euler ancestral in wan2.2?
>>
>>106664525
Thanks!
>>
>>106663593
>https://github.com/huggingface/diffusers/pull/12357
So it'll be called Qwen Image Edit Plus huh? it'll be a new model? I hope it'll be less slopped this time
>>
>>106664621
>Chinese
>unslopped
>>
>>106664626
not a single local model is unslopped desu, nor the west or chinks made it happen
>>
>>106664643
engrish too hard fo Chinese brain
western safety slop niggers can't help themselves
same problem, different cause
>>
>>106664621
nunchaku when?
>>
>>106664643
Hunyuan Video could do pussies at least
>>
File: 101749788.png (1.45 MB, 832x1216)
>>
File: 1750154675001972.png (979 KB, 1080x968)
lmao, with qwen edit even your ms paint creations can become art:

"make the image anime."
>>
>>106664787
she isnt screeching thoughbeit.
Excited for qwen edit plus desu
>>
>>106664540
ancestral samplers don't converge, they add noise each step
>>
File: 1734764203461878.png (435 KB, 1148x1028)
>>106664787
and the source: autistic screeching + miku face
>>
>>106664756
hi toroJ
>>
File: WanVideo2_2_I2V_00020.mp4 (1.83 MB, 832x480)
Is there a way in comfyui to scan the image for descriptive words/sentences to copypaste into the prompt? Or is that a bad idea?

And this workflow that was so good sadly has the god damn color/brightness shifts..
>>
File: 1746725105134111.png (729 KB, 1017x677)
make the image realistic. keep the girl's tshirt the same.
>>
>>106664953
>realistic
I start to believe the chinks have associated "realistic" with plastic doll images during the training or something
>>
>>106664953
I like it
>>
File: pulleet.mp4 (1.4 MB, 512x512)
>ani are you ok?
>are you ok?
>are you ok, ani?
>>
File: DS1 Gwynevere (22).jpg (200 KB, 904x1280)
>>106664953
Try Gwynevere.
>>
>>106664039
You can achieve the same thing by using multiple samplers in series: run the first one without the lightning lora, then feed its output into the second.
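The series trick is just one sigma schedule split across two denoisers, which is what chaining two KSamplerAdvanced nodes does in comfy (first node with return_with_leftover_noise enabled and end_at_step at the switch point, second with add_noise disabled and start_at_step at the same point). A toy sketch of the idea, names illustrative:

```python
import numpy as np

def sample_in_series(latent, sigmas, denoise_a, denoise_b, switch_at):
    # run denoise_a (e.g. the model without the lightning lora) for the
    # first steps, then hand the partially denoised latent to denoise_b
    x = latent
    for i in range(len(sigmas) - 1):
        model = denoise_a if i < switch_at else denoise_b
        d = (x - model(x, sigmas[i])) / sigmas[i]
        x = x + d * (sigmas[i + 1] - sigmas[i])
    return x
```

The key point is that noise is added only once at the start; the second sampler continues from the leftover noise instead of re-noising.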
>>
File: medieval girl.png (1.43 MB, 768x1344)
i am still using sdxl. whats considered the best model for realism these days?
>>
File: 1742901507939182.png (863 KB, 990x663)
kek

make the image realistic. from a bayeux tapestry meme img:
>>
File: 1733957204220680.png (1.08 MB, 856x1216)
>>106665000
poof, qwen edit magic. you could do the inverse too, make anime/sketch versions of RL girls as well.
>>
File: file.png (33 KB, 640x537)
many thanks to the anon who shared using that node instead of tediously changing the size every time in the i2v wf, it's super useful
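For anyone without the node, the logic it automates is simple enough to sketch: scale the input image to the model's pixel budget while keeping aspect ratio, then snap both sides to a multiple the model accepts (assuming 16 here; the budget default is wan's 832x480):

```python
import math

def fit_resolution(w, h, target_pixels=832 * 480, multiple=16):
    """Scale (w, h) to roughly target_pixels, keeping aspect ratio,
    rounding both sides to a multiple the model accepts."""
    scale = math.sqrt(target_pixels / (w * h))
    nw = max(multiple, round(w * scale / multiple) * multiple)
    nh = max(multiple, round(h * scale / multiple) * multiple)
    return nw, nh
```

e.g. a 1920x1080 input maps to 848x480, so you never have to retype dimensions per image.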
>>
>>106664953
>>106665031
sameface kinda
>>
File: 1757930447401742.png (1.12 MB, 856x1216)
>>106665031
tried to get an anor londo-ish image:

make the image realistic. a gigantic version of the woman is lying on her side, in a large cathedral.
>>
>>106665040
I'm the creator of that node, you're welcome :3
>>
File: WanVideo2_2_I2V_00023.mp4 (1.16 MB, 832x480)
>>106665002
Oh shit, clever girl. I'll try it out.
>>
File: DS1 Gwynevere.jpg (159 KB, 1205x919)
>>106665031
Eh, that's pretty good. Try the official one. I wanted to do this one, but I thought that it might be tricky considering all the detail that's in it.
>>
>>106664794
Many samplers don't converge well at lower steps with chroma. I'm getting tired of this; how much of a loss is the lightning lora?
>>
File: lob_00001_.jpg (769 KB, 1024x1024)
caress
cruise
revoke
exodus
fluffy
stucco
reason
permit
zombie
breech[spoiler][/spoiler]
>>
>>106664953
Is there a way to do something similar to this with Chroma?
>>
File: 1757073349395374.png (1.02 MB, 1360x768)
>>
>>106665071
no, chroma isn't an edit model
>>
>>106665071
for edits nothing is really close to qwen edit or kontext, your next best option is to img2img it with controlnet canny/depth/openpose or something.
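The canny route means handing the controlnet an edge map extracted from the reference image; comfy has preprocessor nodes for that, but the idea is just gradient extraction. A crude pure-numpy Sobel stand-in for the real Canny detector (illustrative only, not what the preprocessor node actually runs):

```python
import numpy as np

def sobel_edges(img, thresh=0.2):
    """Crude edge map to feed a canny-style controlnet:
    Sobel gradient magnitude, thresholded to a 0/255 mask."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for i in range(3):
        for j in range(3):
            win = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    mag = np.hypot(gx, gy)
    peak = mag.max()
    if peak == 0:
        return np.zeros_like(img, dtype=np.uint8)
    return (mag / peak > thresh).astype(np.uint8) * 255
```

Real Canny adds non-maximum suppression and hysteresis thresholds on top of this, which is why the dedicated preprocessor gives cleaner lines.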
>>
can kontext edit sex acts?
>>
>>106665090
lmao
>>
>>106665090
No.
>>
>>106665071
>Is there a way to do something similar to this with Chroma?
not right now, but seems like he wants to transform chroma into an edit model
https://xcancel.com/LodestoneE621/status/1963467050501992811#m
>>
File: 1753717064923218.png (1.02 MB, 1360x768)
>>
>>106665090
not only can it not, BFL forbade civitai from hosting any NSFW loras. you'll have better luck with QIE, China seems to care less about porn (even though porn is forbidden in their own country kek)
>>
>>106665090
This post made me feel unsafe.
>>
>>106665105
He needs to fix his foundational model, what the fuck is this guy even doing?
>>
>>106665111
no way
>>
>>106665076
Oh I know it's not an edit model, but I wondered if some kind of frankenstein workflow to transfer styles is out there. I have image to image but it's a little TOO wild

>>106665081
Yeah, image to image is fun. However, this is the only thing I could find for controlnets and I cannot get it to work with chroma: https://huggingface.co/XLabs-AI/flux-controlnet-collections. Some redditor said they got it working but, as usual, never posted a workflow or any more info
>>
>>106665090
they hate porn to the point of forbidding nsfw finetunes
>>
is there any point in using an fp32 vae when the model itself is q8?
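They're mostly independent: the q8 quant only covers the diffusion model's weights, while the VAE decodes the final latent in whatever dtype it was loaded in, so an fp32 (or bf16) VAE can still help avoid half-precision artifacts like the black/NaN images SDXL's original VAE produced in fp16. A toy numpy illustration of why half precision bites:

```python
import numpy as np

# fp16 overflows above 65504 and keeps only ~10 mantissa bits;
# fp32 has neither problem, which is all an "fp32 VAE" buys you.
big = np.float32(70000.0)
print(np.float16(big))             # overflows to inf
small = np.float32(0.1)
print(np.float16(small) == small)  # False: rounded to ~0.09998
```

Whether fp32 over bf16 is visible in the decoded image is another question; the quant level of the model doesn't change the answer either way.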
>>
File: 1751557078015343.png (1.05 MB, 1360x768)
>>106665111
>>
>>106665115
>it can't
>well it can but you can't host them
hmm
>>
>>106665057
Hang on, how do I set this up then? The return with noise?
And it stopped genning again, as if the seed was the same. Genned once and it was broken.
>>
File: gen_00007_.jpg (520 KB, 1024x1024)
>>106665226
octave
attack
armpit
sprint
meddle
lessen
hijack
upbeat
thrift
become
>>
>>106665090
>can kontext edit sex acts?
Why? You want to undo all interracial porn?
>>
>>106665252
If I could ruin those cuck faggots' fantasy I would, anon. You think I wouldn't do that to rid this website of that reddit scum?
>>
>interracial porn out of nowhere
>>
File: WanVideo2_2_I2V_00024.mp4 (635 KB, 480x480)
Going through my reaction image folder has a whole new meaning for me now.

>>106665245
As often said in here, kys.
>>
>>106665270
Why else would you want to edit porn with AI if not that?
>>
>>106665219
if you can't share them on the internet, they don't actually exist. yeah, that was my point
>>
>chroma
>>
File: 1748206144867305.png (1 MB, 1360x768)
The man is sitting in a computer lab. A large sign at the back of the room says "DUBS CHECKER" in stylish text. Keep his expression the same.

neat all the stuff you can do from a base image
>>
File: gen_00005_.jpg (790 KB, 1024x1024)
>>106665272
allure
absent
induct
little
rhythm
entire
forget
hollow
module
abound
>>
File: 1749228483823535.png (1.15 MB, 1360x768)
>>106665308
The man is in an ice cream parlor, holding a vanilla ice cream cone. many flavors of ice cream in metal containers are visible. Keep his expression the same.


