/g/ - Technology

Stop Being Fucking Poor Edition

Discussion of Free and Open Source Text-to-Image/Video Models and UI

Prev: >>106556603

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/tdrussell/diffusion-pipe

>WanX
https://comfyanonymous.github.io/ComfyUI_examples/wan22/
https://github.com/Wan-Video

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Neta Lumina
https://huggingface.co/neta-art/Neta-Lumina
https://civitai.com/models/1790792?modelVersionId=2122326
https://neta-lumina-style.tz03.xyz/

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbours
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
>>
Any news in the last 5 days? Is Chroma dead and buried?
>>
Damn, /ldg/ sure is getting pretty sexual these days.
>>
Blessed thread of frenship
>>
>>106559888
https://blog.comfy.org/p/seedream-40-now-available-in-comfyui
>>
Unpopular opinion:
When it comes to AI image generators, we never made it past 2023.
>>
>>106559888
Radiance now https://huggingface.co/silveroxides/Chroma1-Radiance-GGUF/tree/main
>>
>>106559914
>Search the “Seedream” API node
Huh?
>>
Is it better to train your image Lora based on the base model, or the checkpoint you use? For example, if you use Pony Realism, should you use ponyrealism as the model or SDXL?
>>
>>106559936
Where's the workflow for this?
>>
Popular opinion:
CatJak is a narcissistic loser and a shitty baker
>>
>>106559959
Always base, never mixes
>>
>>106559961
Ask the dev, i dunno
>>
>>106559888
i got better at making 1girls :3
>>
>>106559959
If you want to train a lora to use with Pony Realism, train it against Pony Realism

Pony Realism is a large finetune, essentially a different model; training the lora against SDXL and then using it with Pony Realism will make it less effective
>>
Guys, I have an idea, what if we all go back to using WebUI and leave Comfy as “our nerdy phase”?
>>
>>106558492
>the delta weight mixed with the HD weight is pretty strong at 2k and still preserves prompt following of original.


how do you mix those models? or do you just add a second sampler?
>>
>>106559991
>:3
You're so cute and silly! :3

I just want to eat you up! c:
>>
>>106560047
I'm not a brainlet though.
>>
File: Chroma_00086_.jpg (586 KB, 1248x1728)
>>
>>106560062
>>>/g/adt/
>>
>>106560071
a fellow cli Chad. good to finally meet you
>>
>>106560119
this is the only time ive agreed with you, anon
>>
>>106560062
no homo?
>>
File: Chroma_00091_.jpg (633 KB, 1408x1952)
>>
File: ComfyUI_06833_.png (1.93 MB, 1152x1152)
>>106560052
Use the ModelMergeAdd node. Keep in mind in this case it's the regular v50 I'm merging with the delta weights (not the new HD) as that seems to work better. Here's a workflow
https://files.catbox.moe/nkn901.png

In that case it's 1152, but in my tests to deslop it properly it should be 2k. I do not use it much as the wait time for 2k is a little longer, but it works, and transfers over prompt following quality from HD to Flash.
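If you'd rather do the merge offline instead of in the graph: an add-style merge at ratio 1.0 boils down to summing matching tensors, something like this (a minimal sketch, assuming both files are plain safetensors state dicts with the same keys; filenames are made up):
[code]
# base + delta for every matching tensor, i.e. what an add-style merge does
from safetensors.torch import load_file, save_file

base = load_file("chroma-v50.safetensors")     # hypothetical filename
delta = load_file("chroma-delta.safetensors")  # hypothetical filename

merged = {k: (v + delta[k] if k in delta else v) for k, v in base.items()}
save_file(merged, "chroma-v50-plus-delta.safetensors")
[/code]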
>>
>>106560119
No you! ;p

>>106560131
No homo!

I just like cuteness in general! That's all! :3
>>
>>106560161
then you are unironically in the wrong thread. there is a fellow BA poster in the other thread
>>
>>106560062
>NAI in the local thread
it's over. the shills won. /ldg/ is no longer a local model safespace
>>
>>106560185
NAI is local though. Don't tell me you missed the leak?
>>
>>106560197
by that logic Photoshop is open source because I can crack it. wtf are you on?
>>
>>106560218
Are you telling me that running NAI locally is illegal?
>>
File: barebear720.mp4 (995 KB, 720x914)
>>106559901
fag alert
>>106559895
HOT.
>>106560082
>bear furry girls
MY ONLY WEAKNESS!!!

C A T B O X:
>https://files.catbox.moe/dh4n3j.mp4
>>
>>106560228
he is using the proprietary version over their service you retard
>>
>>106559844
MORE threats huh?
>>
>>106559851
Can I make a bbc porn with this?
>>
>>106560303
boutta get kirkd
>>
Any nano banana torrent leaks yet?
>>
File: ComfyUI_00041_.png (1.8 MB, 1328x1328)
Any qwen-image LoRAs yet for improving eyes? I finally coaxed qwen-image into a closeup, and the eyes really aren't that good.
>>
>>106560420
what
>>
>>106560431
https://github.com/PicoTrex/Awesome-Nano-Banana-images/blob/main/README_en.md
>>
>>106560426
try img2img in another model that has eye lora?
>>
>>106560420
that's not how cloud image models work. nano banana is almost certainly a collection of models and also loras that get called depending on the prompt

>>106560437
this is an amazing resource for SFW inspiration wow thank you
>>
>>106560138
>shit the forest
kek
>>
>>106560138
now generate her eating berries and then shitting them out in the forest
>>
>>106560437
That's just pics nigga
>>
File: localfugginWON.jpg (31 KB, 1029x95)
local won
>>
>>106560469
oh no comfysaasers! what will we do now?!!?!!
>>
>>106560535
>comfysaasers
comfysaasters is better and thanks for the inspiration
>>
File: ComfyUI_00221_.mp4 (674 KB, 720x720)
>>
>>106560598
horrible bro
can NO ONE see the noise? warping? glitchiness? it's shit, wan 2.2 is SHIT
>>
>>106560462
https://github.com/CeFurkan/Leaked_Nano_Banana_Weights/
>>
>>106560613
BASED FURK BASED FURK
>>
>>106560598
how many steps is this?
>>106560607
seems like too few steps given the signature pointillism around the hair
>>
>>106560613
https://github.com/FurkanGozukara/Leaked_Nano_Banana_Weights
>>
>>106560644
Downloading now!
>>
>>106560637
4 steps
>>
>>106559851
>Stop Being Fucking Poor Edition
Based and fuck poorfags
>>
Why are there both video and image models?
Shouldn't a video model be superior since if it can make a video it obviously can make an image?
>>
>>106560664
try with 6 steps. i dont do 8 anymore because it feels like diminishing returns but 6 steps is the sweet spot for me

>>106560673
gatekeeping is essential, and the natural form of gatekeeping for this hobby is what helps this general be the straightest tallest whitest AI general on this website
>>
>>106560677
i do 50 steps anon, each 5 second video takes me 47 minutes
>>
>>106560675
why are there both video and photo-only cameras used professionally? Shouldn't a video camera be superior every time since if it can make a video it obviously can make an image?
>>
>>106560682
knowing what I know now, I'd still use WAN if I got nerfed into one-hour-per-gen, I'd just temper my expectations and move my GPU to a full time rig since I barely play any videogames that need a GPU stronger than a 1050
>>
>>106560673
it was typed by a poorfag with one gpu
>>
>>106560675
No?
>>
We must:
1) Ignore anyone using a UI other than ComfyUI.
2) Gatekeep workflows.
From there we can say that /ldg/ is healing; otherwise we will continue to have schizos.
>>
>>106560716
>a poorfag with one gpu
What it it's an A100?
>>
File: 3_00008_.mp4 (646 KB, 720x720)
>>106560677
6 steps gave me this, which looks worse imo
>>
>>106560724
>Gatekeep workflows
why? the gatekeeping in this general comes from setting up the python dependency hell, not from a workflow.
>>
>>106560724
>Ignore anyone using a UI other than ComfyUI
why? comfyui sucks ass and I would gladly drop it if there was a different option, not some fork of a fork in varying states of abandonment.
>Gatekeep workflows
already happens
>>
>>106560724
>>106560082
>>106560138
>>106560252
>>106560702
>>
File: ComfyUI_06849_.png (1.76 MB, 1152x1152)
>>
>>106560732
other anon. by worse you mean the slowmotion slop?
>>
>>106560732
i'm gonna need you to share your workflow because something is super slopped with your setup. your lightning loras have turned your videos into basically static images, if anything you shouldve gotten MORE movement with more steps

maybe you're using some weird version of a lightning lora? i know that at the very beginning of 2.2 there were different versions that used different weights between the lightx2v repos and the Kijai repos but i no longer know if that's still the case.

TLDR: your shit is fucked I can't help unless you share your workflow
>>
>>106560726
the retard blogs like having a 5090 is special. he's a fucking dramaqueen avatarfaggot retard
>>
gatekeeping my long dick from these hoes
>>
>>106560766
if you have a 5090 but you don't use it to train loras or do something else that a 16GB vram + 64GB ram anon can't do, the only thing you're flexing is your poor ability to purchase the right tool for the job
>>
>>106560779
this is the same person that complained it takes 15mins to gen an image with chroma because he inferences 100 steps
>>
File: stock t2v wan22.png (216 KB, 1381x768)
>>106560762
I'm just using the stock settings on Wan 2.2 14B Text to Video.
>>
>>106560716
>>106560784
Mind broken I'm not even that poster KEKK
>>
>>106560789
nice damage control niggerjak
>>
>>106560787
Don't use the Lora on the high noise model.
>>
>>106560798
Who? Sorry I'm not as well versed in dumbass lore as you
>>
>>106560804
the thread baker that inserts their shit art
>>
>>106560787
ok so the issue is your garbage prompt, and then the garbage 2.2 version of the lightning lora, in that order. feed your prompt to chatgpt and ask it to enhance it and try using the 2.1 versions of the lightning lora

https://huggingface.co/lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill-Lightx2v/blob/main/loras/Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank64.safetensors
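(if you'd rather script the download, hf_hub_download can pull that exact file; the local_dir below is just a guess at a typical ComfyUI layout)
[code]
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="lightx2v/Wan2.1-T2V-14B-StepDistill-CfgDistill-Lightx2v",
    filename="loras/Wan21_T2V_14B_lightx2v_cfg_step_distill_lora_rank64.safetensors",
    local_dir="ComfyUI/models/loras",  # adjust to your install
)
print(path)  # lands under local_dir/loras/... since the repo subpath is kept
[/code]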

>>106560801
retarded advice (not exactly, but you should get away from using the 2.2 lightning loras in general first before smaller experimental optimizations like this)
>>
>>106560801
Can you please circle what you mean?
>>
What is needed for the next big jump in creating this stuff? More data better models?
>>
>>106560784
rons a funny guy

>>106560766
https://www.reddit.com/r/nvidia/comments/1mf0yal
there's some people to be jelly about but they're rarely going to be on 4chan
>>
File: Chroma_00124_.jpg (441 KB, 1224x1880)
>>
>>106560827
they are at /lmg/ posting about llms anon... why are you posting reddit links? maybe go back
>>
>>106560726
Trying to gen with WAN 2.2 with an A100 was a pain in the ass because it kept spitting out errors. Protip, use the e5m2 version of the model instead of e4m3fn.
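If you only have the e4m3fn file already, recasting it yourself is just a dtype conversion, roughly like this (a minimal sketch; assumes torch 2.1+ for the float8 dtypes, and you lose a little precision in the re-rounding):
[code]
# e5m2 trades mantissa bits for exponent range vs e4m3fn, which is what
# dodges the overflow errors on some kernels. Round-trip through fp32
# since direct fp8-to-fp8 casts aren't reliably supported.
import torch
from safetensors.torch import load_file, save_file

sd = load_file("wan2.2_t2v_fp8_e4m3fn.safetensors")  # hypothetical filename
out = {k: (v.to(torch.float32).to(torch.float8_e5m2)
           if v.dtype == torch.float8_e4m3fn else v)
       for k, v in sd.items()}
save_file(out, "wan2.2_t2v_fp8_e5m2.safetensors")
[/code]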
>>
>>106560824
What's needed is for you to service my cock.
>>
>>106560824
>What is needed for the next big jump in creating this stuff? More data better models?
I would be happy for the rest of the year if we just got to 10 seconds natively.

The true next big jump is cracking how to use synthetic training data to actually improve the model instead of lobotomize it. We're running out of data and replacing it with slop on the internet faster than we can make new human content (good or not)

>>106560839
no reason to rent an A100 over a 4090 for WAN
>>
>>106560823
The other anon mentioned it already, but the 2.2 loras are no good. Not at my PC right now, but the top of the two Loraloadermodelonly nodes can be set to bypass mode by right clicking it.
>>
>>106560824
non-python ui
>>
>>106560837
>they are at /lmg/
uh huh

>why are you posting reddit links
because i can
>>
>>106560824
2x or 4x bigger vram consumer cards so some people can take interest in bigger models and make nsfw loras for them
>>
>>106560824
For video, 30+ second gens. For image, a fully uncensored model without synth data.
>>
File: 3_00011_.mp4 (876 KB, 720x720)
>>106560815
>A radiant blonde woman in a flowing red dress stands proudly before a line of uniformed American soldiers. Her smile is warm yet dignified, and a gold wedding ring glints on her finger. With a focused gaze, she looks directly into the camera, gently waving a small American flag. The night sky behind her bursts with colorful fireworks, casting flickering light across the scene. The mood is celebratory and patriotic, with soft ambient crowd noise and distant cheers. The camera slowly pans in, capturing the emotion in her eyes and the shimmering glow of the fireworks overhead.
This is still with 2.2 lightning.
>>
>>106560841
>>106560856
How do we get longer video gens
>>
File: 3_00012_.mp4 (466 KB, 720x720)
>>106560863
This is the same prompt with 2.1 lightning
>>
>>106560863
yeah time to download the 2.1 loras anon. that's what all your prompts for wan t2v should look like btw

>>106560866
you have to train the model to support it. give it a few months, either there will be a new base model or someone will have figured out how to finetune WAN for it. if bytedance was able to up the FPS then there's definitely hope

>>106560871
>This is the same prompt with 2.1 lightning
at strength 1 on both samplers? weird. i use a 0.6 0.4 split of the 2.2 and 2.1 lightning for what it's worth so you can try that for a sweet spot idk

are you sure you turned it on anon? because that 2.1 gen really looks like low-steps vanilla wan to me
>>
File: ComfyUI_temp_scioi_00005_.png (1.69 MB, 1024x1024)
>>
File: 3_00013_.mp4 (1.13 MB, 720x720)
>>106560815
>>106560871
This was with 2.1 causvid bidirect 2
This one is now 2.1 causvid without bidirect 2
>>
>>106560812
Completely mind broken
Ass raped
Fucked in the showers
Fucked up your rectum
Fucked into oblivion buckbroken 4D
>>
File: wan22.png (498 KB, 1962x996)
>>106560886
Thank you for trying to help me, this is what my workflow looks like now.
>>
>>106560834
still has those heat-haze wobbly edges i remember from earlier
>>
>>106560894
>2.1 causvid bidirect 2
nigger who asked for you to try causvid or bidirect 2?? what the fuck are you doing right now

>>106560905
i'm pretty sure i saw this workflow before when I downloaded a t2v lora so I'm almost tempted to just set it up for you even though I hate using the fp8 like you wouldn't believe

what's your VRAM and RAM btw, if you can afford it it's worth running the full fp16 version of the t5 xxl text encoder for better prompt understanding
>>
>>106560824
Seedream just destroyed the competition with native 4k gens. Local needs to stop being slop
>>
>>106560920
score 1 for comfysaasters!!!!
>>
>>106560912
>nigger who asked for you to try causvid or bidirect 2?? what the fuck are you doing right now
I don't know lol
24gb vram, 64gb DDR4. I thought fp16 was the lower quality version according to the readme.
>>
File: 3_00014_.mp4 (1.13 MB, 720x720)
This is with 2.1 and the other at 2.2
>>
>>106560927
>I thought fp16 was the less quality version
why do you have 24gb of vram and 64gb of ram if you don't know this, are you a content creator or something

the 64gb of ram is the weirder part

>>106560933
honestly if that's lightx2v lightning 2.1 then that's as good as you're going to get. i like mixing in a little bit of lightning 2.2 to help it look less "fuzzy"

just for a sanity check can you run your prompt at one of the "recommended" resolutions like 832x480. i have a feeling the square aspect ratio and non-standard resolution is contributing to the sloppiness of the output
>>
File: Chroma_00130_.jpg (448 KB, 1152x1920)
>>106560911
I used way too high a cfg on the second pass; testing a female anatomy lora I made
>>
>>106560944
I got a good deal on it.
>>
Can any decent img2vid be done with a 12GB 3060 and 32GB Ram?
>>
>>106560972
define "decent". what do you want to accomplish
>>
its think its time for \ldg\ to put some \sdg\ photos in its collage
>>
>>106560954
>female anatomy lora
Exquisite. Perfection, even.
>>
>>106560973
I'm not even that sure about what I want, to be honest. I guess just making some believable enough animation from a photo, even if at lower resolution, would be nice
It's just that I tried out some txt2img with Stable Diffusion. I didn't get too deep into it but had some fun, and now I want to try out img2vid. But I'm thinking that either my machine won't run it, or it will run it so poorly that there will be no point in even setting it up
>>
it was the best of times
>>
>>106561023
>I'm not even that sure about what I want, to be honest.
okay then just decide on something and then we can tell you if it can be done or not

you can absolutely i2v your highschool crush sucking dick at an acceptable resolution in about 10 minutes on a 3060 with 32gb of ram
>>
>>106560933
your first settings aren't bad. your main problem is the crappy 16 fps. try 24.
>>
>>106561046
this is a troll but he also didn't test out my 832x480 suggestion so he's either gone or he deserves to be trolled
>>
>>106560724
Based fuck inferiorUI and newfags
>>
File: 3_00018_.mp4 (844 KB, 832x480)
>>106561068
Sorry about that, this is in fp16.
>>
>haters hate comfyui because it's obnoxious and shitty
>users hate comfyui because it's bugged and bloated
everyone suffers
>>
>>106561099
>this is in fp16
i'm assuming you mean with the fp16 version of the t5 xxl encoder

looks good anon! now give us some benchmarks with your 3090/4090. how long does it take to generate 832x480, 1024x576, and 1280x720

i am jealous of your 64gb of ram

also what are you generating these for? 9/11?
>>
>>106561042
For real? Man, I thought that whatever video came out of this rig would look like absolute shit, now I'm even more interested

I'm giving OP's guide a read and then I'll give it a try, any hints on which model I should try first? I have nowhere near the recommended 24GB VRAM for Q8
>>
>>106560954
She is the Iron Maiden. I see what you did there.
>>
>>106561099
You should make it so that when the woman walks past a soldier his rifle also starts spewing fireworks.
>>
>>106561124
>I'm giving OP's guide a read and then I will give it a try, any hints on which model I should give a try first? I have nowhere near those recommended 24GB VRAM for Q8
You might be able to use the Q6 version on 12GB of vram + 32gb of ram. I can use the Q8 with 16gb vram + 32gb ram just fine (but if i had 64 gb of ram my video time would be a minute faster since i wouldnt have to swap to the hard disk)

you might have to increase your page file size on windows or swap size if you're on linux, i really don't know. there's been a few 3060 i2v genners here so it's definitely doable


you can also look into the 5B model, maybe for i2v it would be okay but don't do this before looking into making the 14B work
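napkin math for what fits, if it helps (a sketch; the bits-per-weight values are rough GGUF averages, and you still need headroom for the text encoder, VAE, and latents):
[code]
# weights-only size estimate: params * bits-per-weight / 8
BITS = {"Q4_K_M": 4.8, "Q5_K_M": 5.7, "Q6_K": 6.6, "Q8_0": 8.5, "fp16": 16.0}
PARAMS = 14e9  # Wan 2.2 14B

for quant, bpw in BITS.items():
    print(f"{quant}: ~{PARAMS * bpw / 8 / 2**30:.1f} GB")
[/code]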
>>
>>106559851
How are you not getting uncanny valley from this?
>>
STOP!
If you want me to reply (you), you need:

24GB+ VRAM
Your own custom nodes + workflow setup
Use ComfyUI

If you're not running ComfyUI, don't even bother. I only help with Comfy setups, nothing else.

I do not speak any language that is not Comfy, sorry.
>>
>>106561165
its a spectrum. we work with what we can get.
>>
>>106561165
The model used for the girls has anatomy issues at the moment
>>
>>106561165
Uncanny valley
Uncunny valley
Cunny valley
>>
>>106561165
Sorry I didn't post any gens in the last thread. I'll share my unslopped kino here now.
>>
>>106561165
building a tolerance from years of AI slop generation

that being said, I can no longer go back to SDXL-tier models. it all looks like 2023 garbage now and I'm surprised I ever thought it looked good
>>
Thank fucking god we have forge, neoforge, and autoforge
>>
thank god anistudio is coming
>>
>>106561208
you a blacksmith or something?
>>
>>106561208
What's minecraft gotta do with this?
>>
File: ComfyUI_06857_.png (2.42 MB, 1152x1152)
>>
>>106561225
that's where webui forge got its name from (serious)
>>
>>106561165
>>
>>106561248
>>106561206
>>
>>106561256
>>106561225
>>
>>106561260
>>106561237
>>
File: 3_00021_.mp4 (903 KB, 1280x720)
>>106561113
832x480: 7.2s/it
1024x576: 27s/it
1280x720: 42.89s/it
I guess so, feel like there are enough pictures of the American flag that it can't get it wrong. Also seeing how realistic the waving is, it ignored the American flag this time.
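For reference, s/it converts to wall time straightforwardly (rough sketch; assumes the 4 steps from earlier and ignores model load and VAE decode):
[code]
# s/it -> approximate sampling time for a 4-step pass
for res, s_it in [("832x480", 7.2), ("1024x576", 27.0), ("1280x720", 42.89)]:
    print(f"{res}: ~{4 * s_it:.0f}s")
[/code]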
>>
>>106561269
>>106561213
>>
Is stable diffusion dead
>>
File: Over.jpg (56 KB, 720x459)
>>106561208
dev is taking a vacation due to burnout.
UI project is dead, NeoForge is done. rip another one
>>
What do you guys use to batch rename your dataset? It's a fucking mess when there's pngs and jpegs together.
>>
>>106561299
lol what a fag
>>
>>106561305
Bulk Rename Utility.
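Or a few lines of Python if you don't want another tool (a minimal sketch: renames mixed png/jpg to zero-padded numbers and carries any caption .txt along; assumes the new names don't collide with files already in the folder):
[code]
from pathlib import Path

folder = Path("dataset")  # hypothetical path
exts = {".png", ".jpg", ".jpeg"}
images = sorted(p for p in folder.iterdir() if p.suffix.lower() in exts)

for i, img in enumerate(images, start=1):
    new = folder / f"{i:04d}{img.suffix.lower()}"
    cap = img.with_suffix(".txt")       # keep caption files paired
    if cap.exists():
        cap.rename(new.with_suffix(".txt"))
    img.rename(new)
[/code]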
>>
File: AnimateDiff_00284.mp4 (2.1 MB, 1280x720)
>>
>>106561297
No, but your sexlife is and always will be
>>
File: ComfyUI_06864_.png (2.09 MB, 1152x1152)
>>
File: 3_00025_.mp4 (1.37 MB, 1024x576)
I'll keep experimenting, thank you for all of your help anon.
>>
>>106561236
that's a huge bitch
>>
Cozy level maximum
>>
dead level maximum
>>
>>106561431
Things that I can do with SDXL
>>
>>106560759
hot
>>
>>106561723
poopie poo poo
>>
>>106561436
Just a heads up, 2.2 doesn't really care what resolution you're outputting at within reason. You could safely go down to something tiny like 368x368 and still get a coherent result. 2.1 was much more finicky which is probably what people are used to troubleshooting for.
>>
So the key to getting rid of the slow motion in 2.2 is to just use the light 2.1 T2V lora + 121 frames at 24 fps? Is that the super secret sauce? Can't imagine waiting 18 minutes for a gen. Someone recommended "just dont use light 2.2 on high noise" which is dumb because it slops the video
>>
>>106561893
Put 2.1 lightx2v I2V on high noise at an insanely high strength (like 5), then put the 2.2 Lightning lora on low. fps and frame count stay the same, 16 & 81
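For context on why a strength that high isn't automatically a burn: the slider just scales the lora's weight delta before it gets added, so a lora that's weak on a mismatched base can need a big multiplier. Generic lora math, not ComfyUI's exact code (a sketch):
[code]
import torch

def apply_lora(W: torch.Tensor, A: torch.Tensor, B: torch.Tensor,
               strength: float, alpha: float, rank: int) -> torch.Tensor:
    # W' = W + strength * (B @ A) * (alpha / rank)
    return W + strength * (B @ A) * (alpha / rank)
[/code]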
>>
Is there a site that gathers paid workflows, for free?
>>
Okay so I've been genning simple bj and pov sex with wan 2.2, is there a way to keep genning off a previous video, or to extend it out? I tried using the continue from video option but it never listens to the new prompt. like if it were a bj I'd want to continue sucking and get a cumshot in the next gen
like
https://files.catbox.moe/59c35v.mp4
to
https://files.catbox.moe/63hrvc.mp4
it didn't progress much, is that the limitation to genning atm?
>>
>>106561893
Secret Wan Jutsu: Motion was trained using lower resolution video, and detail was trained using higher resolution video.
If you want high detail and good motion, you should gen high noise at a low resolution and upscale it for the low noise model.
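In comfy terms that's a latent upscale node between the high-noise and low-noise samplers; the operation itself is roughly this (a sketch; assumes a [B, C, T, H, W] video latent and only scales the spatial dims):
[code]
import torch
import torch.nn.functional as F

def upscale_video_latent(latent: torch.Tensor, scale: float = 2.0) -> torch.Tensor:
    b, c, t, h, w = latent.shape
    # keep frame count, interpolate height/width only
    return F.interpolate(latent, size=(t, int(h * scale), int(w * scale)),
                         mode="trilinear", align_corners=False)
[/code]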
>>
>>106561937
I've never seen realistic lightning in a generated video. Shame.
>>
>>106561902
Yeah I tried that high strength trick yesterday and it burned the shit out of the video, then again I have yet to try it with the 2.1 loras so I'll give it another go. Thanks
>>
>>106561952
It's only necessary with the 2.1 loras since they're not as effective on 2.2, you just get a blurry video at regular strength
>>
File: 00046-2993364558.jpg (1.55 MB, 1536x1920)
>>106559851
Does ComfyUI have an equivalent to A1111/Forge's "Send to txt2img"? Tried Prompt Reader but having to copy paste prompts kind of sucks. How do you guys manage your prompts without it getting bloated in the UI?
>>
>>106561963
>having to copy paste prompts kind of sucks
>click prompt box
>ctrl a + ctrl c
>open other workflow
>click prompt box
>ctrl a + ctrl v
is this genuinely too hard for you
>>
File: 1749553173883085.png (2.2 MB, 1416x2120)
>>
File: 1744183490672531.png (2.57 MB, 1416x2120)
>>
File: 1747430247023577.png (2.43 MB, 1416x2120)
>>
>>106561925
there's a workflow on civitai "for loop with scenario"
i think that's the limit that can be done until there's a vace 2.2 (if ever)
>>
>>106561967
Do you genuinely think I'm asking because it's hard or because I want to reduce the steps and time necessary to actually get to prompting?
>meanwhile on A1111/Forge
>drag image to png info
>click "send to txt2img"
plus it also sends other parameters too. At least Comfy has queueing though, that shit is dope.
>>
>>106561963
idk if it's what you're looking for but with rgthree custom nodes you can drag a genned image onto a node in the ui and it'll try to fill it with the info from the same node used to gen the image
>>
>>106561963
there're custom metadata reader nodes, search the manager. i wouldnt know any
but honestly i do the same as the other anon, load the workflow and grab the prompt or open it in a text editor
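if you do want to script it: the prompt survives in the PNG text chunks, so something like this digs it out without opening comfy at all (a sketch; ComfyUI gens store the API-format graph under "prompt", A1111/Forge gens store a "parameters" string)
[code]
import json
from PIL import Image

info = Image.open("ComfyUI_00041_.png").info    # filename from earlier in thread
if "prompt" in info:                             # ComfyUI gen
    graph = json.loads(info["prompt"])
    for node in graph.values():
        if node.get("class_type") == "CLIPTextEncode":
            print(node["inputs"].get("text"))
elif "parameters" in info:                       # A1111 / Forge gen
    print(info["parameters"])
[/code]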
>>
>>106561299
Another forge clone hailed as the next big thing where the dev realized after less than a month that he won't actually be able to keep up with developments, lmao, can't make it up
>>
>>106561963
>How do you guys manage your prompts without it getting bloated in the UI?
Vibecode a sidebar extension that lets you copy/paste on click
>>
Any chromabros know how to use radiance?
>>
File: 1739226975044493.png (2.74 MB, 1416x2120)
>>
File: file.png (11 KB, 331x223)
>>106562031
>>106562068
Thanks, I'll look into it.

>>106562033
Yeah, that's precisely what I don't want to do, because I flip-flop between multiple prompts or parameters and even use my genned images as a visual checkpoint. Manually copying and pasting every time will get old quickly. But I'm still transferring my workflow and getting used to Comfy so I might not have to do any of it at all in the future.
>>
>>106561963
I just have it all connected to one prompt box and then enable and disable groups depending on what I'm doing (there is a toggle node for it); makes for the least-hassle way to do it imo. Another easy feature that could be built in, but comfy has a hard-on for needless complexity.
>>
>>106561299
comfy remains undefeated

all hail
>>
File: ComfyUI_temp_kttuf_00001_.png (3.09 MB, 1728x1344)
>>
>>106562147
Oh yea I thought about that but without images paired with it I'll 100% forget what is what at some point, and having more than like 20 images in the UI at once kind of worries me performance-wise.

Anyways I'll check out the suggested nodes, thanks for the help.
>>
File: ComfyUI_temp_kttuf_00002_.png (3.06 MB, 1728x1344)
>>
Welcome... to Furkland.
>>
File: file.png (7 KB, 228x182)
yeah i'm pretty gangster, how could you tell?
>>
>>106562354
>Yes I have five different python environments that have the exact same packages installed. I am smrt
>>
>>106559854
ChatGPT is explicitly banned as the only one named, but pretty much every Western SaaS model is banned. There are resellers that try to get around it, pricing access pretty cheaply and using more specific terms that won't get them banned, which makes it entirely grey market. Given that's the case, the general Chinese populace laps it all up. Chinatalk is pretty good on these topics.
https://www.chinatalk.media/p/the-grey-market-for-american-llms
Part of the reason for open sourcing is to retain that audience so they don't go to an American model for their RP and boyfriend/girlfriend consulting hobbies if a Chinese model is as capable. Clearly it has some impact but not a whole lot. For image and video diffusion, I think no one there really uses non-Chinese models if they can help it.
>>
>>106562354
ffs.. I can't get neoforge or onetrainer to install
>>
You know, furk has his own gradio based UIs if you really want a solid comfy alternative.
>>
>>106562418
Why is he sitting on a pile of oversized confetti?
>>
>>106562414
because you're retarded, new or having a very esoteric issue?
>>
>the ram I need to expand mine to 128gb isn't in stock until 2 weeks from now
>>
>>106562418
if "grifter" had an image next to it to desribe the word.. his face would be it.


