Gigantic Hyper Ass in Focus Edition

Discussion of Free and Open Source Text-to-Image/Video Models

Prev: >>106972449

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/tdrussell/diffusion-pipe

>WanX
https://comfyanonymous.github.io/ComfyUI_examples/wan22/
https://github.com/Wan-Video

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Neta Lumina
https://civitai.com/models/1790792?modelVersionId=2298660
https://neta-lumina-style.tz03.xyz/
https://huggingface.co/neta-art/Neta-Lumina

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
i have no idea what to local diffuse
>>106975770nice cleavage and crossed legs 1girls
>>106975770gen some lightx2v super slow motion
>>106975770these threads always lack loli
Don't forget to put all the shitty artists you come across into the negative prompt.
>>106975807there's no such thing as lacking loli fuckin pedophile
>>106975807>>>/b/
Can you guys share what are the absolute best settings for Chroma1-HD (the non-Flash one)? Number of steps, sampler, scheduler etc?
>>106975817i heard one guy hoards a big shota collection here and won't share
>>106975807adt has the goods
the blonde girl on the right shakes hands with the girl on the left.
with the new loras, testing it out. smooth shake!
>>106975851damn yeah now that's a blessed thread, thanks
>>106975838even the best settings are pretty bad anon
>>106975886
>>106976008butiful light slowmo is now butiful light grain
>>106975798
>>106976039butiful FILM interpolation artifacts
>>106976039yikes. do you have the uninterpolated video?
>>106976039this is why the comfyui rife node maintainer needs to update to the latest models jesus christ
>>106976066
ComfyUI sends your prompts to the NSA
>>106976039>>106976104pretty good, just slopped
the camera pans to the left and the blonde girl shakes hands with a skeleton.
skeleton meets skeleton
>>106976174yep, certainly looks like shaking hands. wan is such a janky model
>>106975122
Yeah, just tried it on Chroma Flash, 3 mins per gen, seems to have worked. I guess how well it performs depends on the style image you use, and the settings look a bit complex so I don't fully get them.
>>106975838
Personally for Chroma HD I use res_multistep with 35 steps (minimum of 30) and the beta scheduler. Sometimes heun or dpmpp_2m. I avoid Euler.
>>106976197
It's undertrained for a lot of touching (hitting, hugging, etc).
>>106976207we love censored models
>>106976213
Well it's censored for nsfw and violence, but beyond that, shaking hands being hard for the model means it's just not trained enough.
>>106976227fixed in 2.5 btw :)
>>106976233Not the censorship.
>>106976240Proof?
>>106976039gimm-vfi
so i am actually using the two distilled models for these experiments:
>wan2.2_i2v_A14b_high_noise_scaled_fp8_e4m3_lightx2v_4step_comfyui.safetensors
>wan2.2_i2v_A14b_low_noise_scaled_fp8_e4m3_lightx2v_4step_comfyui.safetensors
speed really isn't bad, but slowmo is still obviously an issue. 2 steps at 3.5 cfg does seem to add way more movement, though it looks crap. gonna see what bumping steps does
>>106976349try regular model with new light loras instead, they just released new ones
>>106976349
Regular with the trick of one high noise step without the lora, then 3-4 with the lora, then low noise with the lora.
Works for me.
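The step split being described can be sketched in plain Python. Illustrative only: `make_plan` is a made-up helper; in ComfyUI this would be wired as separate KSampler (Advanced) passes with matching start/end step ranges and different lora stacks.

```python
# Sketch of the "one high-noise step without the speed lora" trick.
# make_plan is a hypothetical helper, not a real ComfyUI API.

def make_plan(high_steps=4, low_steps=4):
    """Return one (model, lora_enabled) pair per sampling step."""
    plan = [("high", False)]                     # step 0: lora OFF, restores motion
    plan += [("high", True)] * (high_steps - 1)  # remaining high-noise steps: lora ON
    plan += [("low", True)] * low_steps          # all low-noise steps: lora ON
    return plan

plan = make_plan(4, 4)
print(plan[0])    # ('high', False)
print(len(plan))  # 8
```

The point of the first lora-free step is that the high-noise model decides the overall motion, which the distill lora tends to flatten into slow motion.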
>>106976367
thanks anon, updated. for 2 mins of gen on a 3090 i don't hate it
butiful
>>106976531are you not using fp16? that is blurry as fuck, fp8 sucks
>>106976491
whoa.
the anime girl drives her golf cart off a ramp into the air, high into the sky in the streets of Tokyo. by studio ghibli
>>106976349
Try adding the light lora to both the high and low noise models at 0.25 strength each. No, I won't explain.
>>106974423
(compatibility with ProductConsistency lora, transition from posing for the camera to nsfw)
It's crazy there are so many blowjob loras and so many are shit at doing anything outside of very easy stuff.
WAN DR34MJOB - Double Single Handy Blowjob -> horrible when the penis isn't already in the starting image: the guy has a penis coming from his mouth, the girl has a bulge (as if the model expects her to have dangling testicles), or the guy sucks his own penis. Issue with same face in lora transition (all characters look the same regardless of the starting image). I have no idea what he did but it's so bad.
Blowjobs - Side View -> The girl gets a bulge, almost always, and it has to be viewed from the side. You get a random girl instead of the one in the image at the transition.
BBC Blowjob -> Characters look ok for the transition, the girl is giving a blowjob properly but has the usual bulge problem and of course it's limited to interracial; also boobs are always bigger than in the initial image.
Blowjobs - Man in Frame -> Looks mostly fine, a bit mechanical, but the girl is the same as the one before the transition, and it mostly doesn't have weird anatomy.
Deepthroat Blowjob Wan 2.X T2V -> Looks mostly fine. I need to do more tests; it fails some images for some reason, mainly less realistic, more idealized faces.
Ultimate DeepThroat I2V Wan2.2 Video LoRa - K3NK -> Either blurry nightmare fuel or an ok-looking blurry result; I think it's the most promising when combined with another one maybe.
>>106976584damn
Not sure if I'm stupid, but I can't get custom nodes to work in comfy.
I've got a basic setup and wanted to add basic features like a token counter and tag autocomplete. But despite downloading several nodes for each, nothing works.
I do have Manager, and some other new nodes are loading fine - but not these ones. Is this normal?
still in the uncanny valley.
no exit.
WAN sucks. Bad.
>>106976201bongmath thing gets rid of background blur from Chroma HD at the cost of a more noisy output.
>>106976592
Have you found ANY that work well? lol
>>106976615check the log. copy into chatgpt
>>106976615>custom nodesenjoy your keylogger
source: new snow miku figure
prompt: the anime girl waves hello to the camera.
cute!
>>106976631Oh, actually, nvm. Changed sampler to heun 3s. Result is stunning, no blur!
>>106976633
They all work if the starting image is a blowjob.
Otherwise, outside of the last ones, no, they're all obviously badly trained outside of super specific conditions.
So they're all bad with ProductConsistency. Which is weird because the reverse cowgirl one was working great.
I think the model is confused about "something going into mouth" and the loras aren't trained on diverse enough datasets to make it understand that. Especially the DR34MJOB one.
RollingForcing updated their page with the code....is..is it finally habbening, we long vid now???https://github.com/TencentARC/RollingForcinghttps://huggingface.co/TencentARC/RollingForcing/tree/main/checkpoints
>>106976667catbox? curious to try the custom samplers
>>106976670can you prompt in the middle?
>>106976540
i'm not gonna keep messing around with it, the speed up loras are just not worth it if i want sharp quality output. they're great for fast outputs that look okay. i don't expect a lora to increase speed and quality.
>this one i frigged up the settings on though
>>106976665
>>106976679
https://files.catbox.moe/znlo44.png
https://huggingface.co/lightx2v/Wan2.2-Distill-Loras/tree/main
let me guess, they still don't know how to make them compatible with ComfyUI and it's still doing slo mo shit?
>>106976592why not use oral insertion?
>>106976732it works but it's still slow motion
>>106976732
>compatible with ComfyUi
it works as is (didn't test, other anons did)
>it's still doing slo mo shit
yeah but it solves other issues
>>106976690light is slowmo but it shouldn't be a blurry, grainy mess. was that 1022?
>>106976734I wanted a before/after erotic effect (sfw/nsfw), one she is posing cutely, the moment after she is giving an enthusiastic blowjob; but I'll try it later.
>no anime in op
for shame
>>106976774interdesting
>>106976824
Messes with coherence a bit, but that's a lot less blur on average than I got with this prompt on regular Flash HD.
>>106976584
>>106976864sexo
>>106976824Another example, not all gone yet, but close to it.
>>106976718Even with more steps, on the non-flash model this looks like shit lmao
the new lightx2v lora is better than the lx2v + nvidia rcm combo I was using before
Imagine having the power to create any image you want and you just make boring photos of mid asian women
>>106976917notice how these statements NEVER have a creative image attached?
>>106976917not only that, but have them look like absolute garbage grainy photos shot with an old camera.
>>106976917>no gen included
here's a gen
>>106976917I don't need to imagine, I can GEN them hehe
>>106976917I don't mind asian women, but that retard is using the most retarded model (Chroma) to make these
>>106976930catbox?
>>106976958no, foxbox
the anime girl walks to the right and sits at a table with a birthday cake, as the camera tracks her.
also cute!
>>106976958its just holo, looking back, onsen, ass focus or something like that. cmon brah you can do it too
>>106976930very original anon. clearly the internet needed this pic
>>106976970i only want the style
>>106976679Dunno, hope Kijai implements it and we'll find out
>>106976973ty bro, im lucky we have you policing the thread, it would be a mess otherwise. here's another gen for your efforts
>>106976981Woops, meant for >>106976681
man that sloptwerk lora is so good but i wish it filled in details like the pussy and anus when theyre not in clear view
ffffuuckk i love wan
Running windblows. 4090 overheats and system stops. Do laptop fans degrade over time? Do I need to do maintenance on them?
>>106976864tad too strong effect
>>106977031Romulan or Vulcan?
>>106976864that's actually kinda gross instead of sexy, she needs to do some squats or something
>>106976690
light is never grainy like that though on its own, it gets just as sharp. dont use fp8, that looks like fp8's blurriness
anistudio is the most disruptive and controversial gui so it's going to win in the end
>>106976983nice
>>106976784
new light lora at 1.2 weight, 3 + 3 steps, 17 shift
https://files.catbox.moe/0kpgqe.mp4
at 832 x 832 it's about 45 seconds a gen
>>106977066
>17 shift
why
>>106977059no one knows about it (and never will) except the 10 people that post here>>106977066welp, at least it wasn't a tranny
>>106977078lower steps = more shift for better quality
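For reference, the "shift" being tuned here is the flow-matching timestep shift. A rough sketch of why raising it helps at low step counts, assuming the SD3/Wan-style shift formula (the linear sigma grid below is a simplification; real schedulers differ):

```python
# Flow-matching timestep shift: sigma' = shift * sigma / (1 + (shift - 1) * sigma).
# With only a few steps, a higher shift keeps those steps near the
# high-noise end, where composition and motion get decided.

def shift_sigma(sigma, shift):
    return shift * sigma / (1 + (shift - 1) * sigma)

def sigmas(n_steps, shift):
    # simplified linear grid from 1.0 down toward 0 (real schedulers differ)
    raw = [1 - i / n_steps for i in range(n_steps)]
    return [shift_sigma(s, shift) for s in raw]

print([round(s, 3) for s in sigmas(3, 1.0)])   # [1.0, 0.667, 0.333]
print([round(s, 3) for s in sigmas(3, 17.0)])  # [1.0, 0.971, 0.895]
```

With shift 1 the three steps are spread evenly; with shift 17 nearly all of the denoising budget sits at high noise, which is why low-step light-lora runs pair with big shift values.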
>>106976906Yeah, unfortunately Chroma HD is a bit inferior in overall coherence, whereas HD Flash would oneshot the prompt. Chroma HD could probably still do it but it might take many more seeds to get anything as coherent.
>>106977082his stars doubled and this will keep happening. the people that find it are greybeards and some anons here. if anything that is the sure sign it's autistic approved
>change my prompt just slightly on chroma
>model now stubbornly only does anime
a finetune cant come quick enough
>>106976983
>>106977127fabulous ass, uncanny face
>>106977127shiny sd 1.5 xlop style
>>106977039Lore friendly Tolkien elf
>>106976917If you're not White, lower your tone while speaking on /ldg/.
>>106976983
nice image, using it to test qwen edit (2 images, sign in image2), with the qwen clothes remover lora.
remove the clothes of the girl in image1. put the sign in image2 over her ass.
https://files.catbox.moe/ijez1d.png
>>106976774^^
>>106977188how about adding clothes or changing to lingerie?
>>106977174unemployed neckbeards don't represent the majority of white men
>>106976774
>>106977191
Had one job and I forgot the video
>>106977193qwen edit can do all that great. also if you use a photo of an outfit in a second image node, you can just say "change their outfit to the outfit in image2".
>>106977087
>>106977209
>zooms in the image
nothing personal kid
>>106977209I know it can do that for realistic images, I was just wondering if it could do that for anime ones, and without a second image as a model to use
>>106976750
yeah but that's probably the source image, which isn't anything to sneeze at
>>106977188got a link to the lora?
>>106977224
sure, example: change the clothes of the anime girl to a white one piece swimsuit.
if you dont want the ties/ribbons you can just prompt to remove them.
is this meme tired yet
>>106977031The new Lightx2v LoRA seems to increase movement quite a bit with other motion LoRAs. This one is with slop twerk at 0.5.
>>106977228
have to upload, cause you cant host it on civitai
sec
also another example: change the clothes of the anime girl to a black business suit with a black skirt.
>>106977247tensor/cuda cores are rapidly becoming more important. vram is just to fit the model
>>106977247I2V would be more interesting if we could VACE this shit (meaning adding any character we want and make them appear on the screen)
>>106977240works better than I expected, thanks anon
change the anime girl to hatsune miku. change the text "TSUKUMO" to "LDG General". Change the watermelon to a white CRT monitor.
>>106977251nice except the stiff face
>>106977310The me from 2010 would have been blown away by this. And for free too.
>>106977228
k, the site wasnt as slow this time for uploading
qwen edit 2509 remove clothes lora (just prompt "remove clothes" / "change their outfit to" / etc)
also good for clothing swaps; sometimes a gen borders on NSFW and it won't work without the lora. usually it's ok though.
https://limewire.com/d/AvpLO#Gd7AyXiz1r
>>106976718Thanks for this workflow, easily now my go-to for any realistic gens.
>>106977310
we definitely need a vae-less edit model, going through a vae destroys the colors
>>106977299there are like 5 models like that, 2 released this week alone
>>106977345
>limewire
we've come full circle
thank you anon
>>106977339
>there are like 5 models like that, 2 released this week alone
I2V?
>>106977345heh yeah, it's https://www.file.io/ but limewire own them I guess.
>>106977347im not spoon feeding, check the banodoco discord's wan channel
>>106977360I accept your concession.
>>106977363then don't use them I guess? that is all wan-chatter is full of, people replacing characters
>>106977360>check this trooncordno
>>106976667Turns out blur was a few bad seeds on regular heun sampler. The sampler is a bit schizo and messes with prompt following so I'll just go back to default.
>>106977360>discordgo fuck yourself
>>106977374>banodoco a trooncornthen forever be 1-2 months behind here
>>106977385>he's lurking a place that's supposedly "1-2 months behind"why? if you feel trooncord is so superior, why won't you stay there and leave this "inferior" place?
>>106977360buy an add nigger
>>106977360
>banodoco
This? https://banodoco.ai/
It's a company and on discord, so wouldn't it be heavily filtered?
>>106977400I haven't visited in a month and came to deliver the news of a new light lora actually worth using
>>106977336
>easily now my go-to for any realistic gens.
If you haven't tried it already, just use the default KSampler one; that was just the experimental workflow.
https://files.catbox.moe/1qe8zt.png
>>106977414>a new light lora actually worth usingthen give the fucking lora link instead of wasting everyone's time, you sound like a fucking woman
>>106977360damn haven't been there in a while thanks for reminding me about it
https://civitai.com/models/2066358/svi-unlimited-length-wan-generations?modelVersionId=2338214
https://github.com/vita-epfl/Stable-Video-Infinity
https://www.youtube.com/watch?v=p71Wp1FuqTw
New lora for long wan videos just dropped?
>>106977431
scroll up wtf?
https://huggingface.co/lightx2v/Wan2.2-Distill-Loras/tree/main
>>106977520noice
>>106977412
nah, there is a nsfw channel; no idea about the company but it's where all the actual devs like kijai are, so most news goes there first
>>106977528
>wtf?
who asked about that lora though? you were saying this >>106977339
where are those models retard?
>>106977541>its where all the actual devs like kiajii are and so most news goes there firstyou don't need to go to trooncord, r/stablediffusion provides the news pretty quickly
Greetings, hopefully someone is bored enough to help me leave the local nogen zone
What GPU is currently the best overall in cost/performance? The 5070?
I have about 2.5K to spend on a new rig
>>106977526
Nice, have they updated it so the models can run in comfy yet? I can view civitai, gotta vpn kek
There's also code and a model for rolling forcing that's been released, still needs conversion though: https://github.com/TencentARC/RollingForcing
>>106977559Im the one that posts it most of the time weeks later lol and I have not posted about any of the models that dropped lately
>>106977565
WAIT FOR 5070 TI 24GB!
that would be by far the best bang for the buck
>>106977575>only me can provide news to other spaces than trooncordso you're like the chosen one and shit? damn
>>106977591apparently no one else who keeps up, no
>>106977526still no workflow or idea how to use it or prompt that
>>106977600
>apparently no one else
you don't either, since you are unwilling to share those new models here
>>106977360
>im not spoon feeding, check the banodoco discord's wan channel
>>106977606kijai has a WF on the discord, he was showing off 20 sec clips that look good
>>106977622>discorddiscord was a mistake, what's wrong with regular boards? I don't want to go to a LGBT place to get AI news, that's ridiculous
give the anime girl a black burka covering her entire body and legs.
>the world feminists want
>>106977565
4090
is this good consistency for a character? (sdxl)
>>106977656nose bridge
>>106977647
dont do this, no fp4 support is a costly mistake for when nunchaku or dcquant comes out, nvidia is going all in on fp4
>>106977647
Actually that's not MSRP anymore, and used it goes for $2k... just get a used 3090, which is still the meta after all these years.
>>106977669
>nvidia is going all in on fp4
I don't understand why, this quant is a meme
>>106977680
no it is not, nunchaku's quants for instance are closer to fp16 than q8 is and look better than q8. They keep the important bits at fp16
>>106977574>have the updated it so the models can run in comfy yet?I don't yet, but I know comfy has a beta node called Context Windows (Manual) which might work, I'm gonna play around with it and see what happens.
>>106977685>nunchku's quants for instance are closer to fp16 than q8 is and look better than q8.
>>106977647
This one is out of stock
>>106977589
This one is not yet out, and may get scalped
Well shit, thanks for replying at least
>>106977692
have you actually used it? it legit looks exactly like fp16, just another seed
btw here is a nunchaku krea and chroma merge:
https://huggingface.co/spooknik/CenKreChro-SVDQ
>>106977685>Closer than q8Nunchaku is great but cmon anon, that is not true. Q8, assuming that is implemented right, is nearly lossless.
>>106977685I wonder how many (You) this bait is gonna get.
>>106977645lets remove that burka and give her a nice suit.
>>106977685top kek, is he serious or something?
here is a comparison, had to use catbox cause of filesize limits
https://files.catbox.moe/4o1ncy.jpg
>>106977680to get localkeks to buy overpriced underpowered garbage cards, why else?
>>106977718Too much leg. This content violates our content policies.
anyone who thinks nunchaku is lower quality than Q8 must have used the int4 version; use the fp4 R128 version
>>106977725
the difference is big, look at her legs, it's not the same anymore. do you realize that the difference between Q8 and fp16 is just a matter of a few pixels? and you're telling me that this piece of shit is better than Q8? come on dawg, you're wasting your time with your bullshit, you can't stop being delusional. you shill chroma, now you shill this, you are one of the most retarded anons on this board, it had to be said
>>106977725
>no prompt
Worthless comparison.
>>106977756nunchaku literally looks better, just like it was a different seed
>>106977766
>nunchaku literally looks better
oh now he's saying that the quality is better than the original fp16, lmaooooooo
>>106977772
for that image the left one looks like a better seed, retard
here is another comparison
>>106977688
Looks like the one on civit is just this one here: https://huggingface.co/vita-video-gen/svi-model/tree/main/version-1.0 unless this guy already converted it? 0 useful information on civit though, sadly.
A few threads back an anon mentioned something about them needing to be converted, but yeah, still going to give it a try. Yeah, context nodes are cool, however they mainly repeat the movement.
>>106977783so you lose like 2 creases in the hat on top compared to FP16, in exchange its 4x faster, worth it imo
>>106977805
also I could just do more steps with nunchaku for more detail and still be faster, but this is side by side; it is nearly identical to FP16, far closer than Q8 is
>>106977783
Can someone explain to this moron what the only purpose of a quant is supposed to be?
>>106977797
he's only going for simple prompts; if he goes for something that requires good prompt understanding, it will shit the bed. bad quants always destroy prompt understanding
>>106977805
>it is nearly identical to FP16, far closer than Q8 is
you need to get your eyes checked, are you fucking serious or something? >>106977756
>>106977816that image is like 1 year old, stop reposting that crap retard
also DCgen is looking to be even better, FP4 will be what models start being trained natively on
>>106977829
point out these differences that are worse, retard. all you could say is that the leg was positioned differently, like it was another seed
>>106977839
>that image is like 1 year old
and?
>>106977844It's a fucking baby you pedophile.
>>106977805
>it is nearly identical to FP16
>>106977841
>point out these differences that are worse retard
>>106977844they are different models, that chart is for flux idiot, not qwen, how dumb are you
>>106977860
>nunchaku only works on specific models like qwen, not flux
(You)
>>106977856you are retarded. I said from the get go that it looks closer than Q8 does to FP16 and it literally looks almost exactly the same, just small variances that would be made from a different seed, no quality loss at all
>>106977856wow what model generated this image?
>>106977526
>Yes! This is our top priority on the TODO list. We are actively working on upgrading SVI to support Wan 2.2 and higher resolutions including 720p.
>PS: We’re collecting TODO suggestions! We want to better meet real user needs—if you have ideas, please share them in our Issues. Also, if you have great reference images and text prompts but lack the resources to run inference, leave them in an Issue and we’ll run the inference for you!
>Long-form generation: We tested 10-minute talking videos with absolutely no drifting issues
>Potential Issue 1: Slight color shift. This issue has two main causes:
>VAE encoding-decoding
rip
>Q4: Did you consider building upon the Self-Forcing series of works?
>Initially, we did want to build upon Self-Forcing, but several critical issues led us to abandon this approach:
>Different objectives: The Self-Forcing series is better suited for scenarios prioritizing real-time interaction (e.g., gaming), where visual quality does not need to reach cinematic standards. In contrast, our work focuses on story content creation, requiring higher standards for both content and visual quality.
>Causal/Bidirectional considerations: Self-Forcing achieves frame-by-frame causality, whereas SVI operates at a clip-by-clip or chunk-by-chunk level of causality, with bidirectional processing within each clip. Since SVI targets film and video production, our design mirrors a director's workflow: directors repeatedly review clips in both forward and reverse to ensure quality. SVI maintains bidirectionality within each clip to emulate this process. Directors then seamlessly connect different clips along the temporal axis with causality consistency, which aligns with SVI's chunk-by-chunk causality. Intuitively, SVI's paradigm has unique advantages in end-to-end high-quality video content creation.
>>106977867
>it looks closer than Q8 does to FP16 and it literally looks almost exactly the same
it doesn't though. Q8 would change a few pixels; in your image the girl doesn't even have the same leg position anymore. please tell me this is bait or something, I have a hard time believing someone can be this retarded
I think we have copers who cant afford 5000 series cards
>>106977879
jesus christ you are retarded. Q8 has noticeably less detail in comparison to FP16, while nunchaku, since many of its weights are still FP16, looks like FP16, just with small variances in generations that look like seed variances anyway. quality was always the point
>>106977892
>while nunchaku, as many of its weights are still FP16
it's 4x smaller in size but "many of its weights are still fp16", all right, I think that's an elaborate bait at this point, take this last (You) saar
>>106977892You are arguing with disingenuous people
>>106977866
You're comparing apples to oranges, of course it's not the same; it's like comparing the q_8 from flux with WAN, they are not the same models hence not the same results. you have been posting the same chart for a year now, I mean c'mon, you have NF2 in that chart, with your retarded broken english prompt, no seed, no scheduler or sampler given
you're such a retard, stay on reddit and stop posting your crap here
>>106977892nunchaku is based on fp8
>>106977879
I'll *talk* slowly
Q8 is less accurate and so retains less detail
nunchaku is more accurate and retains more detail
nunchaku keeps at fp16 the many weights that, when changed, caused visual changes; other weights that have little or no quality impact are kept at fp4. this shift causes small variances in generations but maintains quality
Did that work for your little reptile brain?
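The general idea being argued here (keep the quality-critical weights at high precision, quantize the rest hard) can be shown with a toy sketch. This is NOT nunchaku's actual SVDQuant algorithm (which absorbs outliers into a 16-bit low-rank branch); it only illustrates why outlier-aware 4-bit has far lower error than naive uniform 4-bit:

```python
# Toy illustration of outlier-aware quantization, not a real quantizer.
import random

def quantize(w, bits):
    """Naive uniform symmetric quantization."""
    levels = 2 ** (bits - 1) - 1
    scale = max(abs(x) for x in w) / levels
    return [round(x / scale) * scale for x in w]

def quantize_keep_outliers(w, bits, keep_frac=0.05):
    """Same, but the top keep_frac magnitudes pass through unquantized."""
    cutoff = sorted(abs(x) for x in w)[int(len(w) * (1 - keep_frac))]
    body = [x for x in w if abs(x) < cutoff]
    scale = max(abs(x) for x in body) / (2 ** (bits - 1) - 1)
    return [x if abs(x) >= cutoff else round(x / scale) * scale for x in w]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

random.seed(0)
# mostly small weights plus a few large outliers, like real layers
w = [random.gauss(0, 1) for _ in range(1000)] + [random.gauss(0, 20) for _ in range(20)]
print(mse(w, quantize_keep_outliers(w, 4)) < mse(w, quantize(w, 4)))  # True
```

The outliers blow up the scale of the naive quantizer, so the bulk of the small weights round to almost nothing; sparing a few percent of weights lets the 4-bit grid fit the rest tightly. Whether that beats a well-implemented Q8 in practice is exactly what's disputed in this thread.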
>>106977906
>You're comparing apples to oranges, of course is not the same
is this guy retarded or something? why would you care which model it's being applied to? if it doesn't work on flux it won't work on wan or qwen; the nunchaku guys never said "our quant only works on one model". how do you survive with such a small head anon?
>>106977907
no it is not, stop making shit up
https://github.com/nunchaku-tech/nunchaku
>>106977906
this is not me btw
>BUY NEW CAAARD TO RUN POOCHUCKU SAAAR IS LIKE FP16
>>106977912
>it looks completely identical to fp16 anon, you wouldn't see a single difference!
>w-well, it looks completely different but the quality is here!
pick one, subhuman
>>106977920>I TOO BROWN AND POOR TO AFFORD NEW 5000 CARD SIRS! Q8 JUST AS GOOD SIRS!
>>106977922you are either too stupid to explain it to or are trolling, goodbye
>>106977935nta but you must be retarded or esl
Localkeks unironically buy ‘high end’ cards just to run shitty quants, then get defensive over their shitty purchase by calling others poorfags (yet theyre too poor to afford a h100 which btfos the entire 5000 series)
>>106977914
I dont give a fuck about nunchaku, I'm not some vramlet or idiot who cannot optimize my workflow to get a sweet balance between quality and speed. my issue is with this retarded anon who has been posting the same chart for a year; every time someone questions the quality of quants, he posts the same outdated image like it's gospel. it's annoying
>>106977935
I would accept the argument of "it looks different but it doesn't introduce any additional errors so it's fine" rather than "it looks exactly the same, pixel by pixel", that's all I wanted to say
>its another poor-off between sub-60GB VRAM gaymercard brownsKEEEEEEEEK
The fact that copecucku sissies never created a single comparison of their cope quants next to fp8 fp8_e5/e4 q4/6/8 and bf/fp16 tells you everything you need to know about their delusions.The reason they didn't do it is because a lot of them literally physically can't do it as their 1060 (3GB (laptop) edition) can't even run the fp16 model without instantly vaporising the stock rock hard termal paste from 9 years ago through osmosis.
>>106977948
>the same outdated image
an objectively right image can't be outdated. it's like saying we shouldn't use einstein's relativity equations because they are 100 years old, that's retarded as fuck. if it provides something valuable and objectively right, it doesn't matter when it was made. do I really have to explain this simple concept to you, fucking retarded fuck
fucking kek, i swear /ldg/ attracts the angriest motherfuckers on this site, can't go a single thread without arguing over literally nothing
Can a non-nunchaku shill anon try it out and make a comprehensive comparison?
>>106977953that was never the claim, just that it was better than Q8 which is true
>>106977979>>106977783
>>106977947
>btfos the entire 5000 series
Only if you buy enough for a cluster with nvlink. But (You) wouldn't know that.
>>106977989I said a non-shill
>>106977998good morning bothaar i am non-shill and this is my comparison:>>106977783
I will NEVER use speedslop. I will not run quants, hyper, turbo, light, or any other garbage that turns everything into plastic. Poorjeets shouldnt be allowed to gen locally, they give models a bad reputation by posting garbage turboquanted civitslop that they claim is “just like qwen fp16!”
>Faster workflow cancelling. by @comfyanonymous in #10301
>>106977947who the fuck would get a H100 instead of 3 RTX 9000s
>>106978027
>RTX 9000
broski is from the future
>>106978035he gend them
>>106977979>Can a non-nunchaku shill anon try it out and make a comprehensive comparison?all they can do is to compare their quant with fp16 without adding Q8 to the middle because they know it would make the quant look bad lool
>>106978024
>comfyui's #1 requested feature is faster gen canceling because local models output shit 99% of the time
KEKKYPOW
CumUI
>>106978050gottem
>>106978024
he was shit on since the beginning for it being as bad as it was, waiting literal minutes (+ tip) with videogen to cancel a workflow, and then he adds basically 2 lines worth of change to the code and suddenly it works
>why didnt u add it then????
because im not paid the millions his project got to fix basic qol for free, while he leaves open PRs like the chroma fix one unmerged because he doesnt want to admit to making an implementation mistake
>>106977972
>he thinks he is Einstein because he made a shitty chart about some image generation model in some shitty UI repo
>an objectively right image can't be outdated
lmao anon, learn some history before posting such idiotic takes, you're so dumb. the sole fact that comfyui performs differently than a year ago already makes your image obsolete
>>106978050holy MOTHER OF TRVKE
here you miserable fucks
https://files.catbox.moe/hvb5my.mp4
>>106977973
I lurk on /pol/ so I'm kinda used to that aggressiveness, feels like home kek, and desu I prefer an angry place over "omg your pronouns are xe/xir, that's awesome!" reddit forced positivity
>>106977892My tests showed that nunchaku was systematically worse at text than fp16 or Q8.
>>106978050it's because you click run too quickly while gooning and forgot to add the thing to the prompt
>>106978067oh, these are the old light loras btw
>>106978062>comfyui performs differently than a year agothat doesn't change a thing, it's the same seed so the goal is still the same: "a quant is supposed to look as close as possible to fp16". I think you're a bored troll and you just want to argue for the sake of arguing. I won't believe you can be this much of an idiot, that's just not possible
>>106978050he's out of line but he's right!
>>106978060he would never have added it if there wasn't that reddit post about a custom node making the canceling faster, btw
Whenever you see a gen here, remember it is typically the result of cherry-picking from a slew of dogshit gens.
>>106977973
>>106977979>>106978044https://imgsli.com/NDE2NzEz/1/3https://desuarchive.org/g/thread/106647201/#106649159I knew an anon tested that.
>>106978098also true
>>106978100qwen edit usually hits the first time with a well worded prompt
>>106978103is this raw T2I? if so what was the prompt?
>>106978103>nogens declare war on chroma stansbased, since chroma was created, there is a huge influx of retards in this place
>>106978113*pooma
fuck… i was hoping for a day without complete saas victory but today is not that day…
>>106978067thanks for the test anon
>>106978100>cherry-pickingThe first draft reveals the art; revision reveals the artist.
>>106978127d*bo?
>>106978111Qwen Edit 2509
>>106977973It is a sound that slowly becomes annoying.
>>106978100good artists copy, great artists cherry pick
real artists use SaaS
real artists pick up the pencil
>>106977379>>106977087I've been experimenting with Chroma LoRAs. Would you chroma anons know if I have to train on HD-Flash in order for a LoRA to be compatible with it?
real artists rape
the famous rar format
>>106977718the anime girl is sitting at a computer with a white CRT monitor and is typing.
>>106978107When wanting to see how good something is, you should use the hardest prompt possible to break the model, where you will see the biggest difference in how the quants further break it, instead of genning 1-subject closeups or, God forbid, no humans in the image at all. For example:>Detailed photograph RAW of seven smiling friends of different races are at a nightclub concert with dim lighting that is shining on their faces, behind them is a crowd of people dancing while fighting with large swords, everyone is holding a sword in their left hand and an intricate beer glass with differently colored beer in the right hand. Far behind them above the DJ there is a sign which has "Minimum drinKing age 021!" written on it in stylized cursive letters.
>>106978200The example prompt given looked hard enough, especially the text where nunchaku fails a bit : A vibrant, warm neon-lit street scene in Hong Kong at the afternoon, with a mix of colorful Chinese and English signs glowing brightly. The atmosphere is lively, cinematic, and rain-washed with reflections on the pavement. The colors are vivid, full of pink, blue, red, and green hues. Crowded buildings with overlapping neon signs. 1980s Hong Kong style. Signs include:"龍鳳冰室" "金華燒臘" "HAPPY HAIR" "鴻運茶餐廳" "EASY BAR" "永發魚蛋粉" "添記粥麵" "SUNSHINE MOTEL" "美都餐室" "富記糖水" "太平館" "雅芳髮型屋" "STAR KTV" "銀河娛樂城" "百樂門舞廳" "BUBBLE CAFE" "萬豪麻雀館" "CITY LIGHTS BAR" "瑞祥香燭莊" "文記文具" "GOLDEN JADE HOTEL" "LOVELY BEAUTY" "合興百貨" "興旺電器" And the background is warm yellow street and with all stores' lights on.
>>106978200>races areraces that are
>>106978107based, thanks anon
Art is knowing which ones to keep
>>106978231why does nunchaku look like a totally different seed?
>>106978225It was OK for testing text but that's it
>>106978231based on this, the quality looks better than Q4_K_M, which is a good thing
Anyone have tips for upscaling chroma gens?
>>106978247>local outputs are so bland and same-y that slight variations are considered “a totally different seed”grim. try using actual good saas for once
>>106978161>I have to train on HD flash in order for a Lora to be compatible with it?Train on HD and gen on HD-Flash
>>106978267meds
>>106977574>>106977606>>106977877https://github.com/kijai/ComfyUI-WanVideoWrapper/issues/1519KJ is on it and thinks it won't need any code changes, just need to figure out how to replicate their pipeline within a workflow.
>>106978267kek, he's not wrong, Qwen and Wan have so little differences between seed
>>106978231please enlighten this retardhow to xy compare in comfyui
>>106978286I'm using this scripthttps://github.com/BigStationW/Compare-pictures-and-videos
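If all you need is the side-by-side strip part of that script, a minimal Pillow sketch does it. This is a generic helper, not the linked script (which does more); the filenames in the usage comment are placeholders:

```python
from PIL import Image

def side_by_side(paths, out_path="compare.png"):
    """Paste images left-to-right on one canvas for a quick A/B strip.
    Every image is rescaled to the height of the first one."""
    imgs = [Image.open(p).convert("RGB") for p in paths]
    h = imgs[0].height
    imgs = [im.resize((im.width * h // im.height, h)) for im in imgs]
    canvas = Image.new("RGB", (sum(im.width for im in imgs), h))
    x = 0
    for im in imgs:
        canvas.paste(im, (x, 0))
        x += im.width
    canvas.save(out_path)
    return canvas

# usage (placeholder filenames for your gens):
# side_by_side(["fp16.png", "q8.png", "nunchaku.png"])
```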
>>106978189the anime girl is typing on her computer keyboard, while using a word processor on the computer.(new lightx2v loras from today)
>>106978280>https://github.com/kijai/ComfyUI-WanVideoWrapper/issues/1519Oh shit, nice find. I keep forgetting to look in the issues section. He says it's mostly for i2v and provided a workflow, but I'm going to try it with regular native
>>106978299Come up with a story. Make a cartoon. Put it on youtube.
>>106978333Why
>>106978351Logical next step. You guys should be making cartoons and bankrupting Hollywood.
>>106978364Nobody wants to see that slop. The only people eating it up is children scrolling Youtube Shorts.
>>106978280>>106978332seems like he's testing with 2.1? what would this do compared to a last frame -> first frame wan2.2 workflow? does it actually inject the latest n frames?
>>106978370lol
>>106978370do you even know what is being discussed?
>>106978370Not a clue, you're better off opening a github issue and asking the man himself. Only just discovered 5 minutes ago that it can work in comfy, apparently, kek. But yeah, looks like it's 2.1 and they're working on 2.2
>>106978402butiful pornmasterPro_noobV3
>>106978231Q4_K_M really looks bad, I'm really surprised by that, I thought it would fare well since Qwen Image is a big model and big models are less sensitive to quants overall
>>106978380>>106978382yeah long videos, but no one gives a shit about explaining what exactly is so novel here
>>106978370according to the paper, multiple frames have to be reused and a tensor applied
niggerjai says "no code changes"
>>106978462>niggerjaikek
>>106978468>1girl, looking at viewer, expressionless,they say what you gen is a reflection of your soul
>>106978332>>106978370probably need to use something like pic related for native nodes. I'm just running a test on those context window nodes without the loras because I've never used them before. It is creating 161 frames in 81-frame chunks; so far so good on my 3060 12GB. The lora is 2.5 GB in size though, so maybe it will struggle unless I lower the chunk size from 81 to about 49. I'm gonna keep testing and will report back. In any case I need to do a video without the lora and then with the lora to see if it's actually working. Fingers crossed this just works and we have something of a poor man's wan 2.5
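For anyone following along, the chunking those context-window nodes do can be sketched in plain Python. This is a guess at the scheme, not the node's actual code; `window=81` comes from the post above, and the `overlap` value is a made-up assumption:

```python
def context_windows(total_frames, window=81, overlap=16):
    """Split a frame range into overlapping fixed-size windows, the way a
    context-window node might chunk a long video. The overlap keeps some
    shared frames between chunks so motion can stay continuous."""
    windows = []
    start = 0
    step = window - overlap
    while start + window < total_frames:
        windows.append((start, start + window))
        start += step
    # final window is pinned to the end of the video
    windows.append((max(total_frames - window, 0), total_frames))
    return windows

print(context_windows(161))  # 161 frames in 81-frame chunks, as in the post
```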
>>106978416Not a lot of models do 2.5D without fucking shit up - like even v4 or v5 of this model do, for example
>>106978468lovely
>>106978462ok that's pretty cool>>106978487please test if you can prompt every 81 frames, that would be the best
>>106978462>>106978526correction: no tensor application during inference, so yes, it could work without code changes
>>106978019>I will NEVER use speedslop. I will not run quants, hyper, turbo, light, or any other garbage that turns everything into plastic.to be fair, Qwen Image even without quants and shit renders plastic shit so...
>>106978487Good. Yeah context nodes will probably help but I'm gonna test on 2.1 soon
sirs what is a tensor
>>106978567>>106978567
>>106978526>please test if you can prompt every 81 frames, that would be the bestThat would be the models in pic related, I guess. Yeah, I'll test them, but with wan2.2 first, because I want to see if they work with wan2.2 just like wan2.1 loras typically do. I guess I'm not going to sleep tonight.
Where do i get the new lightx2v
>>106978526>please test if you can prompt every 81 frames, that would be the bestI'm thinking you would have to take the last 5 or 15 frames from the first chunk, then use those to continue motion in the next chunk. I really don't know desu how it would work with multiple prompts, because the context window is doing everything all at once. Like, how would we change the prompt? Maybe it just understands the entire prompt within the context, or we prompt it like:
first 5 seconds: prompt
second 5 seconds: prompt
But I'm thinking we would have to chain samplers and continue each chunk using frames from the last.
>>106978610probably herehttps://huggingface.co/lightx2v
>>106978652>But i'm thinking we would have to chain samplers and continue each chunk using frames from the last.I should have added: obviously a different prompt node for each sampler. However, there is a custom node that treats each line (separated by pressing enter) as a separate prompt, and that can somehow be switched automatically using a loop node, but that is something I have to rip from someone else's workflow because I don't know how the loop works. Then it might be possible to feed everything through one ksampler instead of having a huge chain of them.
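That line-per-prompt idea is easy to sketch outside of comfy. `prompts_per_chunk` is a hypothetical helper, not the actual custom node; it just splits the prompt box on newlines and reuses the last line when there are more chunks than prompt lines:

```python
def prompts_per_chunk(prompt_text, num_chunks):
    """Treat each non-empty line of the prompt box as the prompt for one
    chunk/sampler in the chain. If there are fewer lines than chunks,
    the last line is reused for the remaining chunks."""
    lines = [l.strip() for l in prompt_text.splitlines() if l.strip()]
    if not lines:
        return [""] * num_chunks
    return [lines[min(i, len(lines) - 1)] for i in range(num_chunks)]

text = "the girl types on the keyboard\nthe girl picks up the monitor"
print(prompts_per_chunk(text, 3))
```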
when will API keks be able to do this, again?https://files.catbox.moe/0bv1ex.mp4
>>106978817lol
>>106978817jumpscare
>>106978817Just opened this at work and got fired
>>106978299the anime girl picks up the CRT monitor and throws it through the window behind the desk, breaking the glass.
Any ComfyUI-ers around that could please advise on how to do inpainting without it taking 7 hours? I looked into some custom workflows as well but they're ass
is there a word/token limit in the noobai positive prompt? wasn't there some limit back in the day or am I remembering it wrong? seems like there isn't one now
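For context on that limit: CLIP-based text encoders (which noobai/SDXL-era models use) read 75 tokens at a time (77 counting the special start/end tokens), and modern UIs work around it by encoding the prompt in 75-token chunks and concatenating the embeddings, which is why there's no hard visible cutoff anymore. A rough stdlib-only sketch of that chunking, with integer IDs standing in for real tokenizer output:

```python
def chunk_tokens(token_ids, chunk_size=75):
    """Split a token-id list into 75-token chunks, the way A1111-style
    UIs feed long prompts to CLIP: each chunk gets encoded separately
    and the resulting embeddings are concatenated downstream."""
    return [token_ids[i:i + chunk_size]
            for i in range(0, len(token_ids), chunk_size)]

ids = list(range(160))  # pretend tokenizer output for a long prompt
print([len(c) for c in chunk_tokens(ids)])  # → [75, 75, 10]
```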