Against My Better Judgement Edition

Discussion of Free and Open Source Text-to-Image/Video Models and UI

Prev: >>106638601

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/tdrussell/diffusion-pipe

>WanX
https://comfyanonymous.github.io/ComfyUI_examples/wan22/
https://github.com/Wan-Video

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Neta Lumina
https://huggingface.co/neta-art/Neta-Lumina
https://civitai.com/models/1790792?modelVersionId=2122326
https://neta-lumina-style.tz03.xyz/

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbours
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
Blessed thread of frenship
posting in a seedream thread!
>>106642335
>>106642342
very nice, the first especially
>>106642309I thought about it, then it gave me an estimated time of 40 minutes and I just stopped.
>>106642335
>>106642342
Is this a lora trained on whoever does those Monogatari EDs?
>>106642361https://x.com/papilioninight
>>106642333>noisedream
>>106642383Danke
uh oh the chromafootschizo arrived
Footfags..
It's crazy how much foot fags get away with. Like, if we assume they get as much arousal out of a foot as most people do looking at a pair of boobies, they essentially get to do whatever the fuck they want and get away with it. Imagine if I posted bare boobies all over this thread and it was okay. I think we should hold foot fags to the same standard. Maybe make them wear a foot-shaped armband so we know they're foot fags.
>>106642482
good thing with the foot fetish stuff is if the girls are wearing skirts I usually get to see some cute panties. so I don't mind.
>>106642514
99% of footfags are just as into legs and ass
it's the 1% that wants only stinky feet that gives the rest a bad name
Jesus christ, westoids..
>>106642714Are the original image gens Dall-E 3?
>>106642559I remember in University, I had a class on Anatomy and all the professor would ever talk about is how there is a nerve from the feet to the genitals. Like he would bring it up every lecture. The man loved feet.
>>106642648>Average 18 year old white woman.
>>106642468those legs have just the right amount of muscle on them, literal perfection
>>106642648The chink was hideous, what do you expect
Can someone post a tutorial on how to inpaint with pony/illustrious/noobai in swarmui? It just doesn't work for me, I'm not using a workflow but swarm itself.
>>106642301
5060 ti 16gb or 5070 12gb?
correct answers only
>>106642795More vram = better. Literally nothing else comes into play, especially if the GPUs are of the same series.
>>106642731Yeah, pre-cucked Oct 2023 gens
>>106642800
>Literally nothing else comes into play
the 5060 will be slower with models that fit in both
>>106642800That only applies to nVidia tho.
>>106642811
>the 5060 will be slower with models that fit in both
A whole lot fewer relevant models fit into the 12gb than into the 16gb. That kind of shitty information might make anon buy a GPU he sorely regrets buying. Is that what you want?
>>106642714
>>106642806
Really nice, 90s arcade feel achieved
Is it supposed to be 'Rage Rust' or is it just something random?
>>106642829
it's not shitty information, retard, and I wasn't telling anon YOU MUST BUY THE 5060
go back to school
>>106642835Nah, nobody in history has picked up a smaller GPU because it was a little faster than the bigger one. It was a dumb statement designed to deceive a dumb person who has no idea what GPU to buy.
Why does the video length double sometimes? Where can I set the length?
>>106642833No, part of the meta for getting nice boobs back then was to put "race queen" in everything. "Rust" just popped in from a detail prompt.
>>106642844
>Nah, nobody in history
stopped reading here, fuck off
notice how I didn't address the anon asking the question, I addressed you and your retarded comment that "nothing else comes into play"
>>106642859Nah, you wrote that under the assumption he would see the message and implied it was relevant to his final decision. You are a deceiver and a demon.
>>106642867and you are retarded
>>106642848Oh yeah, looking nice and fluxxy. Got that plastic glow.
>>106642848if they can get rid of the slop that would be a fine product, but for the moment... meh... still on the wan 2.5 waiting room
>>106642853Heh, ok
>>106642867
>>106642859
>>106642873
>>106642853
Why does the generation come out blank when I inpaint on swarm?
>>106642880I swear to god it's the strength of the controlnet but nobody ever listens to me even when I'm very very right.
>>106642848
I'm not 100% sure but I think it's the frame window size on the Wan Animate Embeds node. Feels like if your video goes over the default value it does another 77 regardless of how many frames remain in your source video
has anyone trained a Krys Decker style lora? the ones on civit kinda suck
picrel
>>106642884
>swarm
There's your problem, why the fuck would you use swarm? It's duct tape C# over Comfy, it's crap
>>106642301how do i generate myself ai little girls? i'm on a amd raedon system on arch linux
>>106642914sovl
>>106642899I have dataset ready, just havent trained. Perhaps next week for Illustrious
>>106642931pls post on civit if you get around to doing it
>>106642902I tried some workflows, but its just FOSS tinker tranny shit, I need a UI, not a rocket schematic
>no you just need 30 workflows to get the same functionalities as swarmui trust me im a real woman
>>106642986Use one of the Forge's then
Original video is 10s long, it randomly added 4s behind.
>>106643108
>Rubber ball turns into boxing gloves
Cool trick
>740x1024
How long did that take?
>>106643121Sometimes my generation will slow to a crawl. I have twitch and youtube running on 2nd monitor
>>106643160are you maxing out your vram?
>>106643177
>>106643177>>106643210SNK logo on the top would be so fitting
Newfag here. What kind of mileage can I get out of a laptop 3060 and 6GB VRAM with WAN 2.2?
>>106643227None because that's a piece of shit.
>>106643227
It can theoretically work with enough quantization, but seriously it's not worth it, at least get a 12gb vram card and 64gb system ram
>>106643243NTA but any meaningful difference between RAM speeds? Mostly gaming but I might upgrade to 64GB next time I do a system update.
>>106643108
>>106643160
5090?
it's so funny, while it takes you 14 minutes to gen a 1024p video it takes me 14 minutes to gen only one high res picture lmao
>>106643297
4090
>>106643257No, not for AI, DDR4 or DDR5, there will be no noticeable difference, most of the time is spent squeezing data through the GPU bus
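The claim above can be sanity-checked with a back-of-envelope sketch. Everything here is a rough assumption (the bandwidth constants are ballpark figures, not measurements, and `transfer_seconds` is a made-up helper), but it shows why the GPU link, not DDR4-vs-DDR5, sets the floor once weights are offloaded:

```python
# Rough lower bound on weight-streaming time when a model is offloaded to
# system RAM. All bandwidth figures are ballpark assumptions.
PCIE4_X16_GBPS = 25   # realistic PCIe 4.0 x16 throughput, GB/s (assumed)
DDR4_GBPS = 25        # dual-channel DDR4-3200 read bandwidth, GB/s (assumed)
DDR5_GBPS = 50        # dual-channel DDR5-6000 read bandwidth, GB/s (assumed)

def transfer_seconds(model_gb: float, bus_gbps: float) -> float:
    """Seconds to stream model_gb gigabytes over a bus of bus_gbps GB/s."""
    return model_gb / bus_gbps

# Both RAM generations can feed the PCIe link at roughly its own rate or
# better, so the bus is the bottleneck either way:
print(transfer_seconds(14, PCIE4_X16_GBPS))  # 0.56 s per full pass of 14 GB of weights
```

Under these numbers, swapping DDR4 for DDR5 halves the RAM-side read time but leaves the PCIe-side time untouched, which is why the upgrade barely moves gen times.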
>>106643312Yeah figured as much, thanks.
>>106643227>laptop 3060 and 6GB VRAM with WAN 2.2
>>106643227maybe a not so high res video per 1.5h? you may not want to do that.
why the shit is qwen image so slow.. I can gen a wan video, more like 2 by the time I get a gen from qwen
can you use qwen and linux yet?
>>106643508no. windows users only
>>106643508???
>>106643518
fuck this
imagine making a fucking android build but no linux. lmao
>>106643525Are you retarded ? Qwen works on Linux, why the fuck wouldn't it
>>106642648WTF are you doin to Léa Hélène Seydoux-Fornier de Clausonne!?
How tf can I get a lora or some shit to inpaint-change the jewelry around someone's neck, or add some
does anyone use Kijai's WanAnimate example workflow? Where do I get "refer.jpeg" and "raw.mp4"?
>>106643604nigger just use your own reference image and video
>>106643604there's where u put your own video and image dummy
>>1066429071. buy a rope.2. use it.
>>106643483nunchaku?
>>106643099
I did just that, loaded a model, tried to do a gen on fresh install and it hits me with "TypeError: 'NoneType' object is not iterable"
What now?
>>106643655used the rope to get some irl little girls. Thanks for the help, King.
>>106643666Nope Q8. The nunchaku version is fast, but blurry af
>>106643616
>>106643622
I did it but idk wtf this points editor is and what am I supposed to do with it
>>106643676
then use a 4-8 step lora then
>Q8 muh quality
fuck this jew shit.
>>106643669What model ?
>>106643698
Look at these wildly varying times
I don't use the lora, not a fan.
>>106643687
red = negative target
green = positive target
>>106643701models/1620407/diving-illustrious-flat-anime-paradigm-shift
>>106643711
>480.34s
Time to give up
>>106643727already didnot worth it
>>106643726Start with some standard model and see if it works, that way you know it's something with this weird hybrid that is a problem
>>106643727>>106643731thats the thing, some gens are like 30-40s. some go on forever, Idk why.. utterly confusing
>>106643737what's your vram usage while genning?
>>106643735
What's weird about that one? It works on swarm and sdnext, the culprit here is forge, or my lack of understanding of it
>>106643737
>>106643743
again more bs - picrel
98%
>>106643752
>98% (vram usage)
gee i wonder what the issue is
why doesn't it work then https://files.catbox.moe/wix79k.mp4
>>106643719
>>106643752
I think you are using system ram and it slows down the process
Try to leave at least 1.2-1.5GB free in the vram
>>106643761
>>106643769
Oh.. ok how do I fix it?
The problem I have is with time consistency. Why does one img take a certain time while another, with just a randomized seed, take a whole different time.
>>106643775
1st step is to disable system ram fallback on your nvidia panel, then you will have to play with offloading stuff to gpu (using the multigpu nodes for ease of use)
>>106643784ok, Ive no idea how to go about doing the 2nd part. Shall read up
>>106643762in the preview u can see the black squares are getting swapped. that's how u know the AI got the correct spots
>>106643775it seems your settings put you right on the limit of your vram. different seeds will use more or less vram so some might gen normally but others might cross the threshold where the vram is too full and it causes catastrophic slowdown. i think that's what's happening anyway
it's good to be the king
>>106643837gen time?
how many avenues does 8gb -> 16gb vram open? not interested in training/tuning
>>106643837renting is not owning
Does ComfyUI have a built in wan animate workflow now? Not using Kijai
>>106643800
https://files.catbox.moe/fpt1d6.mp4
it works but 480p quality kinda bad. Probably only good for closeup video
>install comfyui
>installs on C:/appdata without asking
>finishes installing
>1 error, want to send a report?
>no explanation, please reinstall or something
Kill all FOSS trannies
>>106643961it looks really fake
Does Kijai still refuse to implement the usage of quants into his nodes
>>106643961it's fried, try lowering cfg or cnet strength
>>106643967just install the portable version. always works fine
Once I use inpaint, or try to attach a pic to a lora's meta, this is what happens, the pic gets fucked up, how can I solve this?
I can generate just fine in comfy and swarm, but once I edit it with inpaint (or just attach any image to metadata) this happens.
Please help
>>106644027
it's Kijai's example workflow. I didn't edit anything
What is cnet strength?
>>106644025this. kijai nodes are fucking boring
>>106644055Changing scheduler to euler helps a little
>>106644055try a shift of 1
>>106644070workflow? i'm late to the wan animate party
get the 5060 ti, or wait even longer for gpus we're never getting
this shit pushes 1280x720 in wan, pretty neato.
>>106644096me? I'm personally waiting for a 96gb cuda compatible 2k eurodollars card. my 4080S will do for now. sad vramlet face
something fucking wrong with KiJai workflow
I tried Animate HF and it gave completely different (better) result
https://files.catbox.moe/n9qu8v.mp4
This is the image im trying to attach on swarm's metadata editor right?
>>106644132And this is what I get, it also applies to inpainting and comfyui workflows, what the fuck is happening anons?
>>106644070
can it do the reverse?
Like make the reference image do the walking pose of the video, instead of replacing the original character with the ref character.
Are there any hubs with real people loras apart from the archive?
>>106644132
>>106644137
bro idk how to tell you, nobody here uses swarm.
it's either comfyui spaghetti masters or forge copers.
>>106644144But this applies to comfyui workflows too...
>>106644139won't u just use wan i2v for that?
>>106644025they work just fine
>>106644139Yes, you just disconnect Get_background_image and Get_mask from WanVideo Animate Embeds
>>106644137what custom node is this?
>>106644155
Huh? animate can clearly do that. See move examples.
https://www.modelscope.cn/studios/Wan-AI/Wan2.2-Animate
>>106644183Cool, first time using KJ nodes in a while. What turned me off them was his steadfast refusal to implement gguf into his 2.1 nodes.
>>106644095
https://github.com/kijai/ComfyUI-WanVideoWrapper/blob/main/example_workflows/wanvideo_WanAnimate_example_01.json
I still don't understand why random seconds were added behind for some videos
so uhh... is there a model that can do amateur porn image frame? like sex act in motion, realistic, not ai-slop looking? can chroma do that or something?
>>106644228It generates in chunks of 77 frames unless you use the context options
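That window math also explains the mystery seconds upthread: the output is rounded up to whole windows. A minimal sketch, assuming the 77-frame default window and 16 fps output mentioned in this thread (the helper name is made up for illustration):

```python
import math

FPS = 16      # Wan 2.2 output frame rate assumed from this thread
WINDOW = 77   # default frame window of the Animate Embeds node, per the posts above

def padded_seconds(src_seconds: float, fps: int = FPS, window: int = WINDOW) -> float:
    """Output length once the source frame count is rounded up to whole windows."""
    frames = round(src_seconds * fps)
    chunks = math.ceil(frames / window)
    return chunks * window / fps

# a 10 s source is 160 frames -> 3 windows of 77 -> 231 frames out,
# i.e. ~14.4 s: the "randomly added 4s behind"
print(padded_seconds(10))  # 14.4375
```

Sources that are already a multiple of ~4.8 s (77 frames) come out unchanged, which is why only some videos grow.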
how decent is qwen as a refiner/upscaler? like how people use wan for images.
>>106644189thanks
>>106644228what cfg,shift, sampler value?
hatsune miku reads a book at the librarycool
>>106644298
13.3B qwen image pruned model just dropped
https://huggingface.co/OPPOer/Qwen-Image-Pruning
Using the new fine tuner for vibevoice is practically perfect cloning even in other languages. https://github.com/voicepowered-ai/VibeVoice-finetuning
It's quite slow for realtime stuff and using low steps + cfg 3.0 only works for short sentences, otherwise it goes "monster" mode.
So it seems like 10 steps and 1.3 CFG is the best compromise for longer than 1-2 sentences
>>106644326but why? (genuine question)is it for vramlets?
>>106644326what's the point? I have no issue running the 20b model on my 3090
>>106644356you can already do other voices if the input voice is in that language.far more interested if this will let you control the emotion better. like a "asmr" lora or a "moaning cumslut" lora
>>106644326
>The pruned model has experienced a slight drop in objective metrics.
literally worse with ZERO benefits. what the fuk m8
To the 5s club members
What sampler gives the best results?
I've stayed on euler and simple combo for a while now.
>>106644326i have the image and vace models fatigue
>>106643676
this animate shit is so slow man
832x832 each chunk takes 4 minutes (x3-x4) on 3090
kek
animate tip: add green points to whatever you want to mask, add red points to stuff you dont want swapped
>>106644478
lower the res to 640x480 or whatever, higher res = slower gen time
same with wan 2.2
>>106644442what gpu? how long for this 720p gen?
>>106644492*also I had to set block swap to 30, prob dont need to if you have more than 16gb.
>>106644509actually its 578x1024, im on 4090
>>106644531that's.. surprising that it only took that short for that res.
>>106644326Isn't this just for training?
with a realistic photo
literally upset
>>106642629nice
>>106644539I don't get it, does wan-animate use openpose?
>>106644619openpose and face capture for identifying/masking, seems like
not exactly haruhi but still a neat swap:
>>106644478>each chunkyou mean each step? that doesn't seem right. a 720x1040 gen with 6 steps takes me like 8 minutes total on a 3090
/adt/ anon here, UPDATE: CivitAI Helper Fix for NeoForge and Classic
Important message for anon >>106636188
If you're trying to get CivitAI helper working in NeoForge:
>For NeoForge branch:
1. Get the fix from yesterday's github repository post
2. Apply the checkpoint fix from >>106643964
3. Replace the VAE code from >>106644511
4. Done, CivitAI helper should now work
>For Classic branch:
- Just use the GitHub fix as is, no modifications needed
Tested and working on my setup. Questions? Find me in /adt/
Bye
>>106644689
>>106644511 says the lora section but it's the VAE section
okay, now this one's actually impressive. I think the model leans more towards realism than anime (initially).
prompt: japanese woman standing on a race track.
>>106642806had to ask just because the look was so intimately familiar. That model had such an obvious 'house style' but it's hard to pinpoint what exactly it is about it
>>106642335>>106642342cute!
>>106644503>>106644688Oh NVM, you're talking about wan animate? I haven't tried that yet
>>106644708nice
%chances that I can run qwen with 8gb vram/32gb ram?
>>106644233Yeah Chroma can do that, but be warned that not all sex acts are equally easy to get right. e.g. doggy is gonna be all over the place for the same reason that lying in the grass is, whereas cowgirl pov is pretty easy. The difference is in how widely the source images vary
>>106644689Will you put this on the git page too?
>>106644326>>106644359Yes I am a VRAMlet and I am interested in a good distill or pruned model.I am not ashamed to admit this.
>>106644708pls consider: >>106642097
Is there a way to combine or merge samplers or schedulers? Like the model merge node? I wanna make some wacky stuff
Did anyone ever make a https://github.com/LeanModels/DFloat11 for vaes and text encoders? I am curious if it brings any speedup (for stuff like (um)t5 and tiled vaes for large upscales)
>>106644818
Samplers and schedulers are not weights so the answer is no.
You can just read the source code for what they do and try to write your own with combined behavior.
https://www.reddit.com/r/StableDiffusion/comments/1nlybq8/wan22_animate_test/
lmao nice
>>106644879
I'm far too stupid for that, however I did find these
https://github.com/BlakeOne/ComfyUI-CustomScheduler
https://github.com/BlakeOne/ComfyUI-SchedulerMixer
Looks interesting, I'll give them a shot
>>106644818
i just want to point out how really fucking stupid this question is.
you can however end a sampler at x step and then continue the generation with another sampler. don't know why you'd want that but sure.
cozy bread
>>106644708that is not anri okita though
Can wananimate be used for static image character swap? Can Qwen do that?
>>106642301Been out of the loop for a few years. Is that Wan2.2 model only available with comfyUI? I have a 4090 and am wondering how it runs, the limitations, but the guide is pretty barebones
>>106645122Whatever smart ass
>>106645176WanAnimate is for videos, for images you can use either Qwen Image Edit or Flux Kontext.
>>106644689Based artsy and smart sister general
>>106642907join the feds and try to get yourself on chan sabotage duty
>>106645334retard or bot? I asked if anyone tested the character swap abilities of both. Not generic information about the models.
AniStudio will get sound support next week. I'm preparing a new release.
>>106645408
test it yourself?
https://www.modelscope.cn/studios/Wan-AI/Wan2.2-Animate
>>106645421Fuck off
SDXL bros, rejoice nunchaku-sdxl https://huggingface.co/nunchaku-tech/nunchaku-sdxl/tree/main
>>106645466Seems like you are butt hurt.
>>106645421penises
>>106645470why? sdxl is fast enough. this simply degrades the quality
qwen image q8 is pretty good for anime, used the waiv14 banner image/prompt to test:
hatsune miku, power elements, microphone, vibrant blue color palette, abstract, abstract background, dreamlike atmosphere, delicate linework, wind-swept hair, energy, masterpiece, best quality, On her arm is the text "01" in red text.
also if you don't add the last part it doesn't add the number to miku, for whatever reason. but it works.
>>106645421Can i finally run it then without it crashing all the time? And are the text clipping issues finally solved?
>>106645463
Wow I can test it myself??? Amazing! That totally answers my questions about whether others tested it and their results. What a great discussion thread.
>>106645506Yes and yes. I noticed that model offloading didn't do anything.
>>106645408>>106645525Kys you retarded lazy faggot.
>>106642301
bros I am new to this. Can someone explain, if I want to generate realistic images, is it the underlying model that changes things or is the technology different (i.e. stable diffusion for anime-like, some other thing for realistic)?
STOP BULLYING BASED ANI
>>106645290wan2gp
What is recommended for picrel?
I get artifacts at 2.0 and slowmo at 1.0...
Is it normal for fp8 to bring extremely little to no speed up over Q8 in 3000 series?
>>106644399
>model fatigue
No such thing
>>106645408If you think a video model will perform better than a good dedicated image editing model like qwen edit then you'll have to test that yourself, we're not gonna waste our time
>>106645492gotta go fast
>>106645587Yes.
>>106645408It's new model, nigga. most people here running distill loras with it and result will be always worse. you expect a honest answer from who exactly?
>>106645583don't use the 2.2 light loras, they are literally broken.
>>106645534Maybe listen next time 20 anons tell you about problems
>julien
>>106645554It's all stable diffusion
>>106645784I'm extremely busy. Animation pipeline has taken most of my time.
>>106645774What can I use to get faster gens?
Man this stuff is nuts, akin to gambling, kek
>>106646012You could use lightning loras but for 2.1 instead, but you could also crank up your high noise cfg to 2 or 3 and specify the speed of motion in the prompt
fucking comfy, KJ chads are using wananimate left and right and comfy still struggling to make it work for native
>>106645554there is both, underlying models that kind-of change everything and also some different ways to use it though of course the "meaningful" methods are typically somewhat constrained to, like, methods that at least might give you a dog if you prompt a dog.
>>106646134OK will try, thank you anon.
https://huggingface.co/nunchaku-tech/nunchaku-sdxl/tree/main
base sdxl got nunchaku before any other model that actually gets used.
are they just fully stupid?
>>106646278
OH COME ON
That before wan2.2?
Or even before qwen lora support?
Do they all have adhd or something?
>>106646278>sdxl
almost one month and still comfy doesn't deliver his WAN S2V native support aka the best implementation, kek what a joke this guy is, now that he's not getting any models in advance he's getting behind, Kijai is making this guy bite the dust
has anyone come up with the solution for self forcing wan always doing slowmo?
qwen edit is fun
>>106646332so true grandma
>>106646332doesnt kijai work for them now? meaning we will never get native implementations again.
>>106646332?
>>106646332serves him well for going down the API route, now major new models wont send him shit kek
>>106646334
just use this workflow
https://civitai.com/models/1818841/wan-22-workflow-t2v-i2v-t2i-kijai-wrapper
>>106646314
>That before wan2.2?
kek, they were supposed to do 2.1 a long time ago
>>106646386
I know but at least I would have understood them migrating to wan2.2 since it's superior on all fronts.
But nothing happened, it's crazy.
>>106644708>I think the model leans more towards realism than anime (initially).So does every model released this year, minus the underbaked neta
Also S2V is fucking useless because what the fuck are you gonna do with a 5 second sound? Say half a sentence?
>>106645554yes the style is controlled by the checkpoint, usually good checkpoints will focus on one style really well and not be able to do much else, multipurpose checkpoints are garbage. but for realistic i find you can only use it to generate contemporary stuff, even if you try to generate an "elf" it will make a halloween costume with plastic ears
>>106646405Listen, I wou
>>106646340
>>106646405yep, this anon got it
why are there so many shitty nodes abstractions for all these new models? why couldn't it just be factory-styled like more competent software?
>>106646457lol
>>106646457kek
>>106646505
That's inevitable because CumUI is headed towards being a bloated mess. Some devs don't even bother with the cum backend but make wrappers instead (like vibevoice nodes).
Redpill me on nunchaku, I have skimmed through its paper but I got some questions:
I am on 3000 series so I should use the int4 version, correct? (I expect the fp4 performance to be ass without the dedicated NVFP4 acceleration of Blackwell)
In the paper, they claim that they have chosen rank 32 as a compromise between 16-64 for the overhead/quality optimum, but I see that they have rank 128 versions of models available. Does anyone have a rough ballpark number/anecdote for how much slower these versions are? Is it a "10-20% slower for noticeably better output so worth it" thing, or is it a "2 times slower for little difference so worthless" thing?
I expect the answer to be yes, but are loras trained on standard fp16 compatible with these quants?
Lastly nunchaku needs its own comfy nodes, any BS or limitations I should be aware of?
Thanks if you respond.
>>106646564nunchaku is cope for vramlets, if you care about quality and have even 12gb of vram, just use q8 and wait
>>106646564
If it's qwen just use a q6 with distill lora
>>106646609
12 is not enough for Q8. I couldn't run it on a 4070s
>>106646609nunchaku for video gen would make gen way faster for the same quality as fp8 and without the need to use lightning loras
>>106646629>for the same quality as fp8lmao
i just peeked at sdg.
oh god why are they so shit
>>106646653lmao to you, read the paper
>>106646564>Redpill me on nunchakuit's a Q4_M quality quant
>>106646676
because of the censorship of the online models lmao
>>106646679>he believes papers
I was genning and I got an error saying to lower the gpu weights, it said to lower them to save 1500mb or so or face potential burnout
>>106646231what's KJ? does anyone still use voldy?
>>106646692show me where it's wrong
wonder if qwen edit would be able to edit Mahiro into the first picture.
because img2img and controlnet really hate the bolt cutter.
>>106646679
>b-b-but THEY SAID IT'S THE SAME QUALITY SO IT'S TRUE I DON'T NEED ACTUAL COMPARISONS
absolute state of underage newfag vramlets
>>106646692
>in areas of the social sciences
Are you retarded anon?
>>106646676You are not schizo-anon.
>>106646750can you compare that with Q8, Q5 and fp8?
>>106646762so you can't show where it's wrong, thought so
>>106646750
Post your own reproducible workflows instead of slopped cherry picked images of flux
Copechaku is a meme
>>106646763
yep, you're completely braindead
https://en.wikipedia.org/wiki/Replication_crisis
>A 2016 survey by Nature on 1,576 researchers who took a brief online questionnaire on reproducibility found that more than 70% of researchers have tried and failed to reproduce another scientist's experiment results (including 87% of chemists, 77% of biologists, 69% of physicists and engineers, 67% of medical researchers, 64% of earth and environmental scientists, and 62% of all others)
>>106646779are you illiterate?
>>106646801Concession Accepted.
Absolute state of poorfags I can't even
>p..please stop doing random shit please work on nunchaku wan
>>106646779Oh nice, this doesn't apply to the svdquant paper then. Thanks for the confirmation.
>>106646384ive tried this out, smaller videos are still slowmo, and i get oom for things i can gen with this work flow https://rentry.org/wan22ldgguide
>>106646681More like Q2_K_L and only for some models that quant ok, i tested it initially for flux kontext, and it was unusable the moment you needed any semblance of quality preservation
>>106646814it applies to your appeal to authority of "read the paper bro xD", brainlet
>>106646814>sure there's a 70% chance the svdquant paper is bullshit but let's gamble for the 30% instead
>>106646834
what do you mean by "smaller videos"? obviously you fucked something up
How do I lewd up this pic into a video, bros?
On this day.. I go to coomer sovngarde..
>>106646840..mostly to find a better model than nova animal because i'm kind of sick of the grainy details. that or i just need to stop being a jew and do more than 10 hires steps.
>>106646805
try not posting irrelevant stats next time
some percentage of researchers failing to reproduce another's results =/= some percentage of papers not being reproducible
>>106646826
I'm curious, where do you find that 70% of math papers are wrong? It's not in any of the "studies" you cited.
And how do you know the svdquant paper is wrong?
Please share a workflow, we can test that easily.
>>106646852
>how do you know the svdquant paper is wrong?
how do you know the svdquant paper is not wrong? you're the one claiming it's right, therefore you have the burden of proof, and since you have the burden of proof...
>Please share a workflow, we can test that easily.
>>106646839There are million females like this, just open up xvideos and browse up.
>>106646849
>the reproduction crisis is a hoax bro
kek
>>106646840catbox me nugga
>>106646870
I didn't claim anything, I'm not the anon who talked about it, but I'm tired of you posting the "70%" studies without understanding what they are or that they don't apply to every paper under the sun.
>>106646834i mean low resolution. i can do 120 frame 704x1280 videos in my usual work flow, but the one you shared is screwing something up, im using the same block swapping. but it doesnt matter since it doesnt solve the issue anyways
>>106646881what I said doesn't imply the reproduction crisis is a hoax, retard
>>106646890hmm? sweety? why are you not tired of anons believing papers like it's gospel though?
>>106646896
>the reproduction crisis is not a hoax, it's real retard!
>btw, check out this paper and look at those nice numbers, you got to believe them broo!!
>>106646895you shouldnt go above 81 frames, and you should use 720x1280, chain the videos if you want to elongate them with the "loop" workflow from that link
>>106646907there really should be a minimum IQ allowed on the internet
>>106646839generate a better1girl first then animate with wan i2v. or generate it all with just wan t2v.
>>106646920Concession Accepted again.
I hate everything about the current AI ecosystem and it's thanks to comfyui. I can't even find a simple wan-animate workflow, even Kijai's own github is slop infested garbage with broken missing nodes that the manager can't find, not to mention completely ignoring the design sensibilities of comfyui with all that "set vae" global variable slop. WHY DID YOU NOT ESTABLISH STANDARDS COMFY YOU FUCKING PRICK. This dude just released an unfinished barebones UI using someone else's node graph library and did fucking nothing for 3 years and someone gave him SEVENTEEN MILLION DOLLARS
>Want an integer? Here sarr, let me randomize that for you
>Want to loop through a directory? Go fuck yourself
>Want to do basic math like adding two numbers? Go ahead and download someone's node pack with 6 gorillian dependencies. Oh, you only wanted the node that adds stuff? Tough luck kid, you're getting everything
>Want to use models stored somewhere else on your hard drive? There's some esoteric yaml file you have to add and then you have to make symlinks anyway because it expects a specific directory structure
FUCK YOU COMFY YOU FUCKING NIGGER
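For the model-paths gripe specifically, the "esoteric yaml file" is ComfyUI's extra_model_paths.yaml in the ComfyUI root directory. A minimal sketch of one entry; the section name and every path below are placeholders you'd swap for your own, and folder keys beyond the common ones may vary by ComfyUI version:

```yaml
# extra_model_paths.yaml — goes in the ComfyUI root, next to main.py
# (all paths here are made-up placeholders)
my_models:
  base_path: D:/ai/models
  checkpoints: checkpoints
  loras: loras
  vae: vae
  controlnet: controlnet
```

Each key maps one of ComfyUI's model folder types to a subfolder of base_path, which avoids the symlink dance as long as your external drive mirrors that folder split.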
>>106646926okay retard
>>106646884
https://civitai.com/models/784543/nova-animal-xl
>cfg 5
>use any sampler but i think DPM++ 2M Align Your Steps does best
>1216x832
>hires 1.5x 20 steps same sampling method
>no changes to adetailer
upped my hires from 10 steps to fix the grainy details. mostly. model still suffers from extra fingies. and my hires model is something a trainer sent me a while ago so there's extra realism from that.
>>106646927
wow he's literally me, preach it from the rooftops brother.
>>1066469273/4 of these are literal skill issues
>>106646927Nothing's stopping you from forking it and fixing the shit for yourself.
>>106646957Nobody's giving me 17 million dollars either
>>106646609
>>106646628
>>106646681
>>106646816
Is it that bad? Are these images just cherry picked? I swear I remember reading about some people here talking about waiting for the Wan version. Were they doing it for shits and giggles?
What about speed? Its active parameter count is low, so it should be fast at least?
>>106646907
>look at those nice numbers
I posted images retard, you can say you don't believe them, but that's a (You) problem
>>106646963
>Are these images just cherry picked?
All latest models are benchmaxxed shit for muh numbers.
>>106646963
The problem with this comparison was always that it's too basic, with huge room for error in the image. you can fuck it up a lot during inference and as long as it's vaguely a bookshop full of books with the correct words on them, it's good.
Gen a realistic crowd of different people in different clothes/races all holding different objects engaged in battle, or other similarly complex prompts, and it will shit itself.
>>106646927
do your own fork if you don't like how some ui details work, submit features as patches.
comfy did not do nothing, he did quite a lot, see the commit log
>>106646963qwen nunchaku is ass.. blurry shit
>>106646968
promo images are worthless. if it's so good and you're using it, you would have posted an actual replicable comparison
>>106646766
Probably worth a test for qwen I guess.
FP16/FP8/Q8/Q4/Nunchaku
>>106646992well, go ahead, let's see comparison gens using that prompt
any idea for a complex prompt?
>>106646957
>>106646993
>just work for free fixing everything so that cumfart can take the credit
>>106647019
>no u
you're the one claiming it's good, so if you're not a liar you should already have it set up and ready to go? i deleted the trash when i tested it initially and saw that it was trash
the burden of proof is on you
>>106646946thanks nugga
>>106647032>MOOOOOOOOOOOOM CUMFART OOM'D AGAIN!
>>106647037
you see anon it worked great when I tested it so I only kept the nunchaku version
who is right? how can we know?
>>106647064you dont need to install anything extra to use a full sized model, it works out of the box after 1 click download
>make my own nodes by fixing other people's shit
>"share your workflow anon!"
>"wtf anon I can't find these nodes! Where are they? I can't use this shit help me!"
I find this a win desu
>>106647081
it would take me so long to download the big fat Qwen tho, my internet is shit
you do it
>>106646910
>you should use 720x1280
their git recommends 704, i assumed because it's divisible by 32
>you shouldnt go above 81 frames
not an issue for a lot of videos like a girl dancing, you'd only get problems with panning shots or people walking from one place to another
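both numbers are trivial to sanity check. quick python sketch, not from any repo (the helper names are mine, and the 4n+1 frame rule is an assumption based on wan's 4x temporal vae compression):

```python
def snap_res(x, m=32):
    """Round a dimension down to a multiple of m (WanX wants /32 dims)."""
    return max(m, (x // m) * m)

def snap_frames(n):
    """Snap a frame count to the nearest 4k+1 (assumed Wan constraint)."""
    return max(1, 4 * round((n - 1) / 4) + 1)

# 704 is already divisible by 32; 720 is not (720 / 32 = 22.5)
print(snap_res(704), snap_res(720), snap_res(1280))  # -> 704 704 1280
print(snap_frames(81), snap_frames(120))             # -> 81 121
```

so 720 snaps down to the 704 their git recommends, and 120 frames is off-grid either way.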
>>106647098>their git recommends 704where?
>>106647133https://github.com/Wan-Video/Wan2.2
>>106647143
that is only for the Wan2.2-TI2V-5B model, which is trash that nobody should use. if you don't have much vram just stick to 14b in wan2gp
>>106647143
Many such cases! (im just jumping into this tard fight for fun)
https://chimolog-co.translate.goog/bto-gpu-wan22-specs/?_x_tr_sl=auto&_x_tr_tl=en&_x_tr_hl=bg&_x_tr_pto=wapp#1280%C3%97704%EF%BC%9AWan22_720p_%E3%83%99%E3%83%B3%E3%83%81%E3%83%9E%E3%83%BC%E3%82%AF
>>106647201>>106647201>>106647201>>106647201