/g/ - Technology

Discussion of Free and Open Source Text-to-Image/Video Models

Prev: >>107325191

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/musubi-tuner
https://github.com/kohya-ss/sd-scripts
https://github.com/tdrussell/diffusion-pipe
https://github.com/ostris/ai-toolkit

>WanX
https://rentry.org/wan22ldgguide
https://comfyanonymous.github.io/ComfyUI_examples/wan22/

>NetaYume
https://civitai.com/models/1790792?modelVersionId=2298660
https://nieta-art.feishu.cn/wiki/RY3GwpT59icIQlkWXEfcCqIMnQd
https://gumgum10.github.io/gumgum.github.io/
https://huggingface.co/neta-art/Neta-Lumina

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
>>
Now that Flux 2 turned out to be a nothingburger, what's our next cope?
>>
Remember do not respond to trolls
>>
>>107328526
this
>>
>>107328533
if I wanted to use API I would go for Nano Banana Pro and not this Flux 2 mid shit
>>
Blessed thread of frenship
>>
File: 1760708027751630.png (1.84 MB, 832x1248)
1.84 MB
1.84 MB PNG
>>
>>107328543
Lucky for you, ComfyUI remains the premier way to interface with the world's most powerful models: https://blog.comfy.org/p/meet-nano-banana-pro-in-comfyui
You can even mix and match them, interweaving in and out of the localspace for maximum quality generations
>>
*yawn*
>>
File: 1756972686764098.png (89 KB, 498x498)
89 KB
89 KB PNG
why not just make uncensored models instead of retarded garbage?
>>
>>107328522
Z Image, LTX 2 Video, and Qwen Image Edit 2511 are next.
>>
File: venti.webm (761 KB, 1080x1920)
761 KB
761 KB WEBM
Will local models ever be able to produce something like this?
>>
>>107328565
making kino models is not what attracts investors :(
>>
>>107328533
Can I make loli?
>>
Downloading Flux.2 Dev FP8. Are we back, bros?
>>
>>107328565
but muh visa! muh low risk investment status!

>>107328573
I keep getting hammerhead sharks. weird
>>
File: sad.png (2.08 MB, 1408x768)
2.08 MB
2.08 MB PNG
>>107328581
nope, this shit sucks
>>
Where are the jeetpics? You usually repost jeets from twitter to advertise cloud
>>
File: desd35_00008_.png (1.6 MB, 1152x896)
1.6 MB
1.6 MB PNG
any flux2 anons make some flux2 debos
>>
File: f.jpg (135 KB, 1248x832)
135 KB
135 KB JPG
>>107328581 >>107328522
seems quite decent to me so far
>>
File: 1761706914803242.png (1.49 MB, 864x1152)
1.49 MB
1.49 MB PNG
>>107328566
>Z Image
Here's another image from that model
https://xcancel.com/mugen_shuu/status/1993451446797582527#m
>>
>>107328622
That robot has a cool aesthetic to it.
>>
>>107328630
wonder if the model made those compression artifacts or if it's twitter
>>
>>107328630
>https://xcancel.com/mugen_shuu/status/1993451446797582527
>https://github.com/alibaba/Z-Image
Unconfirmed, but the stars align.
>>
>>107328649
>wonder if the model made those compression artifacts or if it's twitter
I wish they posted these with the prompt used
>>
File: QwenImg_00052_.png (1.47 MB, 1152x1440)
1.47 MB
1.47 MB PNG
>>107328606
yeah here
>>
>>107328663
>alibaba
but the chink said it wasn't supposed to be a chinese model, so alibaba is now making z-image on top of qwen image?
>>
Can anyone share a fast wan workflow that doesn't use Kijai's nodes or the AIO model? I want to be able to try nag and see if that fixes mouthflapping.

Someone shared theirs a few weeks ago but it was slow. Just to clarify; when using LoraLoaderModelOnly, the lightx2v lora should always be the first lora in the chain, right? (i'm used to the kijai workflow).
>>
>>107328663
please be good please be good please be good
>>
>>107328676
>Just to clarify; when using LoraLoaderModelOnly, the lightx2v lora should always be the first lora in the chain, right
I don't think it matters
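not ComfyUI's actual node code, but a minimal sketch of why order shouldn't matter, assuming LoraLoaderModelOnly just ends up adding a scaled weight delta per lora (the standard LoRA patch); the tensors here are made up:

import torch

torch.manual_seed(0)
W = torch.randn(8, 8)            # pretend base weight matrix
delta_light = torch.randn(8, 8)  # pretend lightx2v delta (strength * B @ A)
delta_other = torch.randn(8, 8)  # pretend delta from a second lora

# chain order 1: lightx2v first, then the other lora
w1 = W + delta_light + delta_other
# chain order 2: the other lora first, then lightx2v
w2 = W + delta_other + delta_light
print(torch.allclose(w1, w2))    # True -- addition commutes, so chain order is irrelevant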
>>
>>107328675
seems obvious that "z image" would be from z ai (glm)
>>
File: x.jpg (148 KB, 1248x832)
148 KB
148 KB JPG
>>107328633
to me it seems like many things are better than on flux.1, including less <safety>

it's maybe not going to hit as hard as wan did, but it seems interesting enough
>>
>>107328573
does she do OF
>>
>>107328779
you wanna see those flapjacks that bad?
>>
File: ComfyUI_08764_.png (1.53 MB, 1024x1024)
1.53 MB
1.53 MB PNG
>>107328508
Alright, turns out one might have to really boomer prompt this to unslop it. Dunno, maybe we could get SRPO type thing to help.
>>
>>107328761
I thought the same, but why not just use a GLM model for the text encoder? Collab maybe?
>>
>>107328787
i mean i could just train a lora but i wanted to know if the real deal existed first
>>
>>107328809
LOL @ thinking Venti is real.
>>
>>107328787
You don't know how to get OF for free? keke
>>
>>107328809
there's a venti model floating around the parody ai thread on /r/ (formerly /b/)
>>
File: ComfyUI_08766_.png (1.85 MB, 1024x1024)
1.85 MB
1.85 MB PNG
>>107328799
Alright, so this thing understands prompts in Korean, Japanese etc... I guess this could be the secret to unslopping it?
>>
>>107328878
"model" as in LoRA? Or just generated images? There was a LoRA of her on civitai, but it got deleted long ago.
>>
>>107328878
>the parody ai thread on /r/ (formerly /b/)
dang why the move? the /b/ thread looks so dead now
>>
So I googled a bit and I found I can run one of the smaller wan2.2 models on a 16GB gpu. But what I'd like to know is, how much worse is the quality gonna be? Is it even worth bothering or do I have to do all these concessions with lightning loras that kill the movement like with 2.1?
>>
File: ComfyUI_08767_.png (1.57 MB, 1024x1024)
1.57 MB
1.57 MB PNG
>>107328799
>>
previews for 5~ min gens would be really useful
>>
how long does it take to generate a flux2 image with a 4090?
>>
>>107328921
yeah lora is what i meant
>>107328927
the /b/ thread was getting too cockblocked by the janny/spammer is what I gathered. the /r/ thread has been comfy but it's starting to gather the usual types from /b/
>>
File: ComfyUI_temp_iubdp_00170_.png (3.41 MB, 1824x1248)
3.41 MB
3.41 MB PNG
https://files.catbox.moe/3x1gz4.png
>>
File: 14_test_00001_.mp4 (2 MB, 1440x1440)
2 MB
2 MB MP4
>>107328799
>herou gaijin
>>
File: ComfyUI_temp_iubdp_00058_.png (3.34 MB, 1824x1248)
3.34 MB
3.34 MB PNG
https://files.catbox.moe/ap6t1d.png
>>
File: ComfyUI_temp_xzbjk_00030_.png (2.8 MB, 1248x1824)
2.8 MB
2.8 MB PNG
https://files.catbox.moe/ensyll.png
>>
File: ComfyUI_08768_.png (1.79 MB, 1024x1024)
1.79 MB
1.79 MB PNG
>>107328917
Hard to tell right now, also what are those green dots?
>>
File: gun.webm (2.54 MB, 720x1280)
2.54 MB
2.54 MB WEBM
>>
>>107328991
Beautiful render. I would love to see some woods scenery with that light and color.
>>
File: ComfyUI_temp_iubdp_00097_.jpg (458 KB, 1824x1248)
458 KB
458 KB JPG
https://files.catbox.moe/bjtgt8.png
>>
>>107328984
I tried your workflow as it is (switched to fp6, and removed WanLightninCmp Lora), and it worked

Then I gave it a shorter prompt, and it gave me an ANIME illustration.

Did this happen to you too?
>>
File: boxer.webm (2.99 MB, 600x912)
2.99 MB
2.99 MB WEBM
>>
File: ComfyUI_temp_iubdp_00135_.jpg (542 KB, 1824x1248)
542 KB
542 KB JPG
>>107329037
Interesting little glitch! I saw this happen with Chroma, but never with WAN
https://files.catbox.moe/etoysf.png
>>
>>107329003
model?
>>
>>107329049
>>107329023
Are those AI vids? They seem way too smooth and coherent to be one genned with a local vidgenner
>>
>>107328875
>get OF for free
Free trial? I'd like to know how to get actual paid content for free, if that's possible.
>>
File: download.png (2.59 MB, 1824x1248)
2.59 MB
2.59 MB PNG
cw: suifuel (happy couple, implied interracial)
https://files.catbox.moe/2oil2u.png
>>
https://files.catbox.moe/24i01w.png
>>
>>107328930
you can run q8 on 16gb with offloading depending on how much ram you have. light loras just allow you to gen videos faster with lower step counts, but they don't lower the system requirements.
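rough back-of-envelope math for that, a sketch with assumed numbers (Q8_0 is roughly 8.5 bits per weight; the real split depends on the offload node and resolution):

params_b      = 14            # Wan 2.2 14B expert
bytes_per_w   = 8.5 / 8       # ~Q8_0
weights_gb    = params_b * bytes_per_w   # ~14.9 GB of weights per expert
vram_gb       = 16
activation_gb = 4             # guessed headroom for latents/activations/overhead

spill_gb = max(0.0, weights_gb + activation_gb - vram_gb)
print(f"weights ~{weights_gb:.1f} GB -> ~{spill_gb:.1f} GB has to sit in system RAM")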
>>
https://files.catbox.moe/573oy6.png
>>
File: ComfyUI_08772_.png (1.26 MB, 1024x1024)
1.26 MB
1.26 MB PNG
>>
>>107329119
neat
>>
File: selfie.webm (2.67 MB, 606x1080)
2.67 MB
2.67 MB WEBM
>>107329067
No. Video generation is indeed not quite there yet.
>>
File: ComfyUI_08769_.png (1.73 MB, 1024x1024)
1.73 MB
1.73 MB PNG
>>107329061
Flux.2, first testing a particular type of prompting following the official BFL guide (pic rel and the pic you quoted) and next
>>107329148
just straight up asking for amateur photograph. The prompts are written in Japanese.
>>
Can someone please explain the magic of Kijai's wan2.2 workflow and how it's able to make motion look clean with the lightx2v lora? Especially with hands. The comfyUI wan2.2 template with lightx2v always looks way worse to me.

I want to replicate it without using his nodes but I don't know how.
>>
why is buddy posting an irrelevant e-whore
>>
File: 15_test_00001_.mp4 (1.97 MB, 960x1440)
1.97 MB
1.97 MB MP4
>>107329003
>>
I actually want a picture of ebussy and the catalogue is dry, figures
anyway... what's the point of prompting from a json
>>
>>107329119
Gotham City?
>>
>>107329195
brevity..?
it still requires some degree of natural language, so I guess it's easier to tweak a prompt if it's formatted
>>
File: 1738987552091441.png (223 KB, 1055x1000)
223 KB
223 KB PNG
>>107287524
I know nobody cares, but I fixed my temps. Now running quieter and cooler. The problems:
>didn't tighten the fan mount standoff screws enough, had to remove the fan to diagnose and fix this
>fan curves
I totally redid my fan curves, putting way more voltage into the big, quiet case fans, max quiet voltage on CPU in low temps, aggressive ramp up in med-high temps.
>>
>>107329195
>what's the point of prompting from a json

it implies higher IQ
>>
>>107329203
It's a riff on a scene from Tron Ares, actually; so more sci-fi, although the city in this one came out quite gothic
>>
File: 740full-brittany-venti.jpg (204 KB, 740x991)
204 KB
204 KB JPG
Not sure if real, or AI -- seems to be missing a finger.
>>
>>107329286
are you actually retarded?
>>
>>107329286
... never mind. It's real.
>>
>>107329119
>>107329235
>>
>>107329286
anyone explain why rightoids obsess over this goblin?
>>
>>107329067
Small details are where you can still tell. Look at the ejected casings, some disappear mid-fall or go in unrealistic directions. But things are at the point where, if you aren't paying attention or haven't been genning day in and day out, it can fool the majority of people.
>>
>>107329286
Daily reminder
China won.
>>
>>107329073
coomer(dot)su
as payment you must reply with your most beautiful 1girl
>>
What should this node be set to for wan2.2?
By default it's 5, but will I get better quality if I increase it to 7?
>>
>>107329462
lots of browns orbit the AI space obsessed with making deepfakes and low-quality plastic 'porn'. you'll see them pop up here time to time spamming the same girl over and over begging for edits or loras.
>>
>>107329529
>By default it's 5, but will I get better quality if I increase it to 7?
I could not see anything special between 5 and 8
>>
i see they coated flux2 with layers of vaseline to make sure it's extra safe! every image looks like it had a blur filter applied to it
>>
File: ComfyUI_08777_.png (1.55 MB, 1024x1024)
1.55 MB
1.55 MB PNG
Alright, turns out distillation kills the skin texture too much. It really is too slopped, shame. Also if soles are at all possible then I have a skill issue with them.
>>
File: venti.jpg (187 KB, 900x1200)
187 KB
187 KB JPG
>>107329523
Does it let you view actual paid content? Will it hack my browser and install rootkits on my 'puter if I go there?
>>
>>107329551
No one needs those these days. You can run Kontext for free online, without even registering.
>>
>>107328875
I got it for free :^)
https://litter.catbox.moe/krm53rr3lrtzge4k.mp4
>>
So in addition to Flux 2, there is also the new Qwen edit, some z-image model, and a 6b realism model (possibly z-image?) coming?
>>
>experience running flux2 Q4
>16gb vram maxed out, 64gb ram maxed out, 32gb swap maxed out
99s/it and it worked. Offloaded clip and vae to CPU and it's down to 74s/it or 25 mins per image.
tranny jannies aren't letting me upload images
>>107329563
It's a big step up in quality.
>>107329681
Wan 2.5 has also been out for a while which will probably be open sourced at some point.
>>
File: ComfyUI_08782_.png (1.95 MB, 1024x1024)
1.95 MB
1.95 MB PNG
>A concept art colored sketch of a survivor camp inside the wreckage of a large space station in a forest, sharp edges, high quality
>>
>>107329694
wan 2.5 is never going to be open sourced, it's complete copium to think it is. it's like saying seedream will be released locally because it's china and bytedance released other models before.
>>
File: ComfyUI_08785_.png (1.48 MB, 1024x1024)
1.48 MB
1.48 MB PNG
>>107329694
>It's a big step up in quality.
Yes, the model has definitely seen good stuff in its training data, but to take advantage of it the distillation needs to be broken and we need a massive Chroma style finetune. But then who is up for the task? Also this time around, Chroma's only bottleneck that I could think of for learning artist styles (T5) is gone, so like Lumina it should learn everything.

>1990s, retro, vintageanime, anime(1990s), 1girl, an athletic young woman with a compound bow, the young woman is wearing cargo shorts, a pink top, black gloves, and has long blonde hair in a ponytail with an iridescent hair tie. The woman is aiming the bow at a target on a tropical beach, 1990s anime tv episode still
>>
>>107329764
Yeah. If it's too big to be used then it isn't useful. There will eventually be 4/8 step loras for it.

Chroma cost a lot to train. This will probably cost much more.
>>
>>107329694
damn you're dedicated, I would have canceled it the moment it started swapping
>>
>>107329783
>There will eventually be 4/8 step loras for it.
If only they were actually following the meta instead of being holed up in their ivory tower, they would have provided those at release.
>>
>>107329102
Thanks anon, I'll give it a shot
>>
File: ComfyUI_08787_.png (1.73 MB, 1024x1024)
1.73 MB
1.73 MB PNG
>A ethereal young woman with flowing auburn hair, standing by a misty lake at twilight, surrounded by lush foliage and ancient ruins, in the romantic Pre-Raphaelite style of John William Waterhouse, with intricate details on her gossamer dress, soft lighting casting a dreamy glow, high resolution, oil on canvas texture.

Wow... Based SD 1.5 artbros are we back?
>>
>>107329794
I offloaded clip/vae to cpu and now it isn't going above 32gb ram, no swap usage, and runs at 77s/it. 25 mins is the same as wan 2.1 Q4 for a 5 second clip.
>>
File: flux2_bf16_c_00009_.jpg (548 KB, 2048x2048)
548 KB
548 KB JPG
In case anyone wonders, flux 2 will do 2048x2048 ok enough, but breaks down fully at 3072x3072. I'm guessing the vae is a bottleneck because it takes particularly long to decode, and the output is blurred instead of repetitious.
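for a sense of scale on the decode cost, pixel count alone (which the VAE's work grows with at least linearly) blows up fast; quick check:

base = 1024 * 1024
for side in (1024, 2048, 3072):
    print(f"{side}x{side}: {side * side / base:.0f}x the pixels of 1024x1024")
# 1x, 4x, 9x -- so the 3072 decode is doing 9x the work of a 1024 gen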
>>
File: ComfyUI_08788_.png (1.66 MB, 1024x1024)
1.66 MB
1.66 MB PNG
>A elegant teenage girl with flowing long hair and flower petals swirling around her, confessing love under cherry blossoms in spring, soft romantic lighting and delicate features, in the shoujo anime style with sparkly eyes, pastel tones, and emotional close-ups, high resolution, watercolor-like.

Yeah this is much better than OG Flux. Insane.
>>
File: flux2_bf16_c_00010_.jpg (786 KB, 3072x3072)
786 KB
786 KB JPG
>>107329834
Same at 3072
>>
File: Flux2_00003_.png (1.39 MB, 1248x832)
1.39 MB
1.39 MB PNG
>>107328984
Let me remix this
>required an input image
>put a random Asuka image and said make her realistic
>Q4
>>
File: ComfyUI_08789_.png (2.37 MB, 1024x1024)
2.37 MB
2.37 MB PNG
>A pirate crew on a wooden ship sailing through stormy seas, captain with a straw hat grinning wildly, diverse character designs with unique abilities, in the mangaka style of Eiichiro Oda with intricate cross-hatching, exaggerated proportions, and adventurous storytelling, high resolution, black and white manga with color accents.

>Knows mangaka too now

Wild stuff.
>>
File: ComfyUI_08790_.png (2.57 MB, 1024x1024)
2.57 MB
2.57 MB PNG
>A swirling starry night sky over a quiet village with cypress trees in the foreground, vibrant blues and yellows swirling in expressive brushstrokes, in the post-impressionist style of Vincent van Gogh, thick impasto texture, dynamic movement in the clouds and stars, high resolution, oil painting feel.
>>
File: 1741633343173843.png (2.95 MB, 1120x1440)
2.95 MB
2.95 MB PNG
>>107329825
is that flux 2? that's pretty unslopped, better than qwen anyway.
here's spark chroma.
which one follows the style prompt better?
https://en.wikipedia.org/wiki/John_William_Waterhouse
>>
Is it not possible to get Kijai's own NAG node within his wan2.2 workflow? I can't plug model inputs from WanVideoWrapper into WanVideoNAG; the latter is designed for the built-in Comfy nodes.

Kijai's workflow produces much better quality results than any workflow that uses KSampler + lightx2v, so I would strongly prefer integration into his workflow.
>>
File: ComfyUI_08791_.png (2.04 MB, 1024x1024)
2.04 MB
2.04 MB PNG
>>107329884
It is Flux.2, very good results with artists and art mediums. Can't believe BFL has finally done it.

>A surreal landscape with melting clocks draped over barren trees and a vast desert plain, an elephant with impossibly long legs in the distance, in the surrealist style of Salvador Dalí, dreamlike precision and bizarre elements, warm earthy tones with high contrast, high resolution, as if an oil painting.
>>
File: ComfyUI_08792_.png (1.42 MB, 1024x1024)
1.42 MB
1.42 MB PNG
>A dramatic self-portrait of an elderly man in shadow, illuminated by a single light source highlighting his thoughtful expression and textured clothing, in the baroque style of Rembrandt van Rijn, rich chiaroscuro with deep browns and golds, intricate details on fabric and skin, high resolution, oil on panel.
>>
File: 1760424204085524.png (351 KB, 1572x915)
351 KB
351 KB PNG
>>107329895
>Kijai's workflow produces much better quality results than any workflow that uses KSampler + lightx2v
I think you're wrong but here you go
>>
File: ComfyUI_08793_.png (1.84 MB, 1024x1024)
1.84 MB
1.84 MB PNG
>A young girl riding a magical flying creature over lush green valleys and ancient forests, whimsical creatures peeking from the trees, in the enchanting Studio Ghibli anime style with detailed hand-drawn backgrounds, soft earthy colors, and a sense of wonder, high resolution, as if from a fantasy film.
>>
File: ComfyUI_08794_.png (2.5 MB, 1024x1024)
2.5 MB
2.5 MB PNG
>A lone warrior in massive dark armor wielding an enormous jagged sword, standing atop a battlefield strewn with fallen enemies under a blood-red eclipse, intricate Gothic architecture crumbling in the background, demonic entities emerging from shadows, in the mangaka style of Kentaro Miura with incredibly detailed cross-hatching, dramatic high-contrast black and white, muscular anatomy, baroque ornamentation, and visceral dark fantasy atmosphere, high resolution, as if from a seinen manga masterpiece.
>>
flux 2 is pretty decent!
now comes the part where finetuners only touch this shitty schnell version, de-distill it, and turn it into a meltyfried mess for 50 epochs!
>>
>>107329945
Did you ask AI to shit on flux 2?
>>
File: ComfyUI_08795_.png (2.31 MB, 1024x1024)
2.31 MB
2.31 MB PNG
>>107329945
If it's successfully de-distilled and de-slopped we will truly have the greatest model of all. The best photorealism we have ever seen, coupled with the best uncensored art or anime we have ever seen. We are this close to achieving the greatest uncensored model of all time.

>A solitary figure traversing infinite megastructure corridors with towering brutalist architecture, cybernetic threats lurking in vast darkness, in the mangaka style of Tsutomu Nihei with heavy blacks, architectural precision, oppressive scale, minimalist character design, and post-apocalyptic sci-fi atmosphere, high resolution, black and white manga with extreme perspective.
>>
File: ComfyUI_08796_.png (2.76 MB, 1024x1024)
2.76 MB
2.76 MB PNG
>A disturbing scene of spirals manifesting in human bodies and architecture, characters with wide horrified eyes and detailed grotesque transformations, in the mangaka style of Junji Ito with meticulous line work, unsettling body horror, psychological dread, and surreal cosmic horror elements, high resolution, black and white horror manga illustration.
>>
>>107329916
>I think you're wrong but here you go
I can't speak for you, but in every single KSampler workflow I have tried with lightx2v loras (the official template, workflows provided by anons here, workflows on civitAI), there is always blurriness around the hands in motion.
With KJ's workflow, that simply doesn't happen. Hands are objectively cleaner in motion.
KSampler workflows also take A LOT longer for me when using the UnetLoaderGGUFDisTorch4MultiGPU to load a GGUF, compared to just selecting a GGUF in KJ's WanVideoModelLoader and disabling quant.

Thank you for the screencap. Can you spoonfeed me on what original_text_embeds and _nag_text_embeds are supposed to be? Why isn't the chink negative prompt going in the TextEncodeSingle? Is it important that both inputs are satisfied for this node? In older workflows for 2.1 I noticed nag tends to only use the negative conditioning.
>>
File: ComfyUI_08797_.png (2.15 MB, 1024x1024)
2.15 MB
2.15 MB PNG
>A masterless samurai in weathered robes executing a precise sword strike, raindrops frozen mid-air around him, in the mangaka style of Takehiko Inoue with photorealistic ink wash techniques, dynamic brush strokes, philosophical depth, and stunning anatomical accuracy, high resolution, as if from a historical martial arts manga masterpiece.
>>
>>107329987
>Is it important that both inputs are satisfied for this node?
*forget i asked this question
>>
File: 1747656038449610.png (2.44 MB, 1440x1120)
2.44 MB
2.44 MB PNG
>>107329994
can it do a sword fight between two guys?
>>
File: Flux2_00004_.png (1.44 MB, 1248x832)
1.44 MB
1.44 MB PNG
>>107328984
This one directly copied your prompt and put it into Flux 2 Q4 with my tweaks. You can see Asuka bleed through.
>>
>>
TEN more years of illustrious. and that's a good thing.
>>
File: flux2__00022_.png (1.61 MB, 832x1216)
1.61 MB
1.61 MB PNG
frankly, this is good enough for me to start an of with
>>
>>
flux2bros can you test anime? does it have knowledge of booru artists?
>>
>>107326606
bloatmaxxers btfo
>>
File: img_00242_.jpg (875 KB, 1600x1264)
875 KB
875 KB JPG
>>107329825
It doesn't have oil on canvas texture. Chroma + lora does it almost correctly
>>
File: 1752503896428809.png (2.69 MB, 1440x1120)
2.69 MB
2.69 MB PNG
>>107329966
>the greatest model of all
the greatest model of all will be fast and not bloated with outdated architecture like CFG and VAE.
>>
File: ComfyUI_08798_.png (2 MB, 1024x1024)
2 MB
2 MB PNG
>>107330014
It can, not the best gen but here you go
>>
>>107330274
Using the edit feature with a reference painting by Adolf Hiremy-Hirschl seems to mimic it pretty well. This is the pro model though.
>>
I want to see what happens when you run AI from intel optane storage. half the latency of ram, 7gb/s max sequential read, 16gb/s pcie link, and very good random iops.
P5800X is $2,000 for 1.6TB. Still cheaper than RAM.
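quick feasibility math under a worst-case assumption that the whole quantized model gets re-read from the drive every step (model size is a guess, bandwidth is the figure quoted above):

model_gb  = 15   # assumed ~Q8 14B model
read_gbps = 7    # quoted max sequential read
print(f"~{model_gb / read_gbps:.1f} s per step just to stream the weights once")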
>>
>>107330336
better than most models, still pretty fucked though. it's funny even illustrious can do sex positions but not convincing combat.
here's qwen's attempt
>>
>>107330274
All good, since it works so well at copying individual artist styles one could just find an artist that has the style they want and try to copy that with pure T2I, but it probably won't be as good as Chroma with certain things because the distillation is still there.
>>
>>107330399
holy plastic
>fp8
>>
File: ComfyUI_08801_.png (1.57 MB, 1024x1024)
1.57 MB
1.57 MB PNG
>A solitary monk in a pristine white woolen habit kneeling in prayer within a stark stone monastery cell, hands clasped reverently holding a wooden crucifix, a single shaft of divine light streaming through a high narrow window illuminating his contemplative face and the folds of his robe, a simple wooden table with a skull and open scripture in deep shadow beside him, in the Spanish Baroque style of Francisco de Zurbarán with dramatic tenebrism, sculptural rendering of fabric with crisp angular folds, muted earth tones of ochre and umber against deep blacks, spiritual solemnity and meditative stillness, meticulous realism in textures of wool and weathered wood, high resolution, oil on canvas with monastic austerity and mystical devotion.

Okay, this thing knows thousands of artists. Will be refreshing to see one of those artist lists.
>>
>>107330422
Can you try surreal painting by Chris Mars. I'm curious
>>
>>107330161
neat
>>
Guys, that's not how it works.
The guys around Robin Rombach, BFL, formerly Stability AI, published the latent diffusion paper in 2021, then:
SD 1.5 -> NAI finetune
SDXL -> Pony, Illustrious, Noob, Juggernaut finetunes
Flux -> Krea, chroma finetunes
Flux2
I'm looking forward to when the hype around Flux2 is over and we can hate on these jerks again for everything they've done to our beloved community. Seriously, I love to hate them.
>>
Failgens are one of my favorite parts of this hobby.

https://files.catbox.moe/kx4cr9.mp4 NSFW
>>
>>107330422
>this thing knows thousands of artists
I wonder if all of them are dead.
>>
File: kj_vs_comfytemplate.webm (1.16 MB, 900x720)
1.16 MB
1.16 MB WEBM
Still trying to figure out why KJ gets so much hate here.
>>
>>107330481
This time, there's no real innovation in it, though. If someone wants to finetune and has the funds, they may as well be finetuning qwen.
>>
>>107330502
KJ gets no hate from me.
>>
>>107330502
Never seen anyone hating him in particular.
t. only uses his colormatch node.
>>
>>107330481
bigASP or that other sexo tune is more deserving of inclusion than juggernaut. and i realize your list is non exhaustive but my autism compels me to mention the non SAI/BFL models.
>>
>>107328770
>it's maybe not going to hit as hard as wan did
lel, no shit sherlock

Flux 2 devs are bragging about how they removed nipples and genitals from the model and poisoned any attempts at adding them back in, Flux 2 will have less impact than Flux 1 did, since back then there were no real alternatives
>>
>AI drive ran out of space
>turns out I had an old forge install full of duplicate models
>>
>>107330502
Can you upload both workflows?
>>
File: ComfyUI_08802_.png (2.13 MB, 1024x1024)
2.13 MB
2.13 MB PNG
>>107330438
Doesn't seem to know him
>A distorted humanoid figure with multiple melting faces emerging from cracked porcelain skin, exposed anatomical elements of muscle and bone intertwining with mechanical gears and rusted metal, surrounded by fragmented childhood toys and decaying Victorian dolls in a claustrophobic nightmarish room with peeling wallpaper, in the surrealist style of Chris Mars with grotesque bio-mechanical horror, vivid unsettling colors of sickly greens and bruised purples against pale flesh tones, psychological darkness and outsider art rawness, thick impasto texture with obsessive detail in decay and transformation, haunting commentary on mental illness and societal masks, high resolution, oil on canvas with visceral emotional intensity and dark carnival atmosphere.

This is where the edit model would come in useful I guess.
>>
>>107330527
>bigASP or that other sexo tune is more deserving of inclusion than juggernaut
I seem to remember Juggernaut was finetuned with less than 10k images. Lora territory now.
>>
>>107329664
I don't like nakedness, personally. I hope this doesn't make me gay.
>>
>>107330540
Here you go anon:

https://litter.catbox.moe/mtlrqio4qfe2n2nv.mp4
https://litter.catbox.moe/rqhbav0l6vkxyds3.mp4
>>
>>107330500
>I wonder if all of them are dead.
Nah, mimics Eiichiro Oda >>107329875
Tsutomu Nihei >>107329966 and Takehiko Inoue >>107329994 just fine. Let's see if it knows Tatsuki Fujimoto
>>
>>107330546
They might have tried to respect copyright while including pre-copyright artists. Could still be interesting, because what the model needs is to learn a concept of 'style'. Flux didn't, and that crippled Chroma pretty bad.
>>
I assume there are no light loras for anisora high and low?
>>
>>107330558
Thanks boss.
>>
>>107330590
Why not just try the regular ones?

Personally I don't like how AniSora changes my characters' eyes.
>>
>>107330502
because you're forced to use his nodes for everything. I understand why, but I'd like more freedom. Strange that you get ghosting issues, when I used wan I had no problems (I dont gen that much lately :( )
>>
File: img_00257_.jpg (1.07 MB, 1264x1696)
1.07 MB
1.07 MB JPG
>>107330546
TY for trying. Geat prompt anyways
>>
File: 1696408133997335.jpg (318 KB, 1024x1024)
318 KB
318 KB JPG
>>107329875
Flux 2 is finally a local model at dall-e 3's level
>>
>>107329468
The casings look a little weird. But I think the video is real. Local models wouldn't have produced anything this good. And non-local ones wouldn't have produced Venti's face.
>>
>>107330546
>This is where the edit model would come in useful I guess.
You'd say either "would come in handy" or "would be useful"; the former is more casual than the latter. Not to be rude, in case you are not a native speaker.
>>
>>107330599
Is the model based on wan 2.2? The usual light loras worked. But the results are still ugly plastic 3d with i2v.
>>
SDNQ for cumfart when?????
>>
>>107330055
prompt?
>>
>>107330603
>because you're forced to use his nodes for everything.
Comfy native almost always has an alternative for popular shit.
>>
File: AnimateDiff_00198.mp4 (1.51 MB, 720x720)
1.51 MB
1.51 MB MP4
Isn't anisora an i2v model?
It's still turning shit into 3d, just less. And this is the 64gb version.
>>
File: ComfyUI_08806_.png (2.31 MB, 1024x1024)
2.31 MB
2.31 MB PNG
>>107330562
>Let's see if it knows Tatsuki Fujimoto

>A man engulfed in eternal flames stumbling through a frozen post-apocalyptic wasteland, his charred regenerating flesh perpetually burning yet never consumed, carrying an unconscious companion through knee-deep snow past the skeletal remains of civilization, desperate survivors watching from makeshift shelters built from rusted vehicles and human bones, in the mangaka style of Tatsuki Fujimoto with relentless bleak nihilism, scratchy expressive linework depicting constant agony, extreme tonal whiplash between profound suffering and absurdist dark comedy, religious symbolism corrupted by cannibalism and survival horror, wide cinematic panels of desolate snowscapes contrasted with claustrophobic violence, existential questions about identity and revenge visualized through body horror, high resolution, black and white manga with arthouse experimental pacing, shocking brutality that asks if living itself is worth the pain, and deeply human moments of connection amid incomprehensible cruelty.

Not sure if it knows him, but he draws in so many different styles I'll give it a pass for not knowing this one kek.
>>
>>107330495
Thanks doc!
>>
>>107330603
>because you're forced to use his nodes for everything.
Which is why I don't use them. You can't turn off teacache. His nodes are great but you don't need them if you optimize.
>>107330617
kek
>>107330686
Well it's probably trained on 3d people.
>>
>>107330686
Disable lightx2v.
>>
File: AnimateDiff_00201.mp4 (1.01 MB, 720x720)
1.01 MB
1.01 MB MP4
>>107330758
>>107330781
Guess I'll succumb to the 9hour render without light loras.
>>
I still think qwen edit is better than flux or flux 2, edit 2509 + lightx2v lora 4 or 8 steps is fast and can take any source and make gens as if it were a lora.

you can take ANY image, real or anime, and make it do anything. and the outputs are good too.
>>
File: ComfyUI_08809_.png (1.93 MB, 1024x1024)
1.93 MB
1.93 MB PNG
>>107330617
Yes, I saw someone post Dalle-3 tier stylized anime feet.

>A distinguished middle-aged doctor in a white coat standing in a dimly lit hospital corridor, his face shadowed with moral conflict and exhaustion, clutching a patient file while rain streams down a window behind him, realistic European architecture and period details from 1990s Germany, in the mangaka style of Naoki Urasawa with photorealistic character designs, meticulous background detail and architectural accuracy, cinematic panel composition inspired by thriller films, subtle facial expressions conveying deep psychological complexity, masterful use of negative space and shadow for tension, restrained screen tone application, incredibly nuanced linework that captures every wrinkle and emotion, high resolution, black and white manga with Hitchcockian suspense and literary sophistication.

Local is eating so good.
>>
>>107330793
That's the heart of it. Qwen image is almost as fast as sdxl for me now.
When and if flux gets light loras, we'll see. But flux handles styles better atm, so it might edge qwen out.
>>
I want to give the full 14b model a try, but all I can find on these files is an obscure third party script to merge them.
There must be a way to merge them in comfyui, surely? Or even a proper download link for the merged model.
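if they're plain safetensors shards that just split the state dict by key (the usual diffusers-style layout), a minimal merge sketch with the safetensors library would look like the below; the paths are hypothetical and it assumes no key appears in two shards:

from pathlib import Path
from safetensors.torch import load_file, save_file

shards = sorted(Path("flux2-dev-shards").glob("*.safetensors"))  # hypothetical folder
merged = {}
for shard in shards:
    merged.update(load_file(shard))   # each shard holds a disjoint slice of the weights
save_file(merged, "flux2-dev-merged.safetensors")
print(f"merged {len(shards)} shards, {len(merged)} tensors")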
>>
File: 1696048947119085.jpg (224 KB, 1024x1024)
224 KB
224 KB JPG
It actually looks very similar to Dall-e's idea of manga (picrel), which is better than Flux 1 at least.
>>
File: 1750976077141353.png (1.19 MB, 666x1166)
1.19 MB
1.19 MB PNG
>>
>>107330822
https://huggingface.co/orabazes/FLUX.2-dev-GGUF/tree/main

try these, havent used them yet but apparently it's just quant models from the base files.
>>
File: Flux2_00002_.png (1.75 MB, 1024x1024)
1.75 MB
1.75 MB PNG
Using the prompt shared by anons for flux1.dev:
holographic hovers over, fluorescence, warm colors, hatsune miku illusion anime girl made of glitch effect displayed emerges into the real world. data equipment, dials, io panel, few wires. transparent in a thin translucent diy, vintage computing. interactivity.

 Requested to load Flux2
loaded partially; 27605.69 MB usable, 27585.02 MB loaded, 6228.00 MB offloaded, lowvram patches: 0
100%|| 20/20 [00:35<00:00, 1.77s/it]
Requested to load AutoencoderKL
loaded partially: 24588.00 MB loaded, lowvram patches: 0
loaded partially; 152.58 MB usable, 152.58 MB loaded, 7.73 MB offloaded, lowvram patches: 0
Prompt executed in 65.57 seconds
[MultiGPU_Memory_Monitor] CPU usage (90.0%) exceeds threshold (85.0%)
[MultiGPU_Memory_Management] Triggering PromptExecutor cache reset. Reason: cpu_threshold_exceeded
>>
so for wan, is 4 high/2 low steps ideal for lightx2v lora? or is it still 2/2 or 3/3
>>
>>107330055
i don't seem to understand your picture anon
>>
>>107330839
.8 and 1
>>
File: 1738223858303622.png (2.92 MB, 1336x2008)
2.92 MB
2.92 MB PNG
>>
>>107330838
neat glitch miku
>>
>>107330839
I like 4/4.
>>
>>107330686
booba
>>107330791
this and install the comfyui-gguf node then add ram as vram cache
https://github.com/pollockjj/ComfyUI-MultiGPU
>>107330847
AAAAAAAAA
>>
File: ComfyUI_08810_.png (1.88 MB, 1024x1024)
1.88 MB
1.88 MB PNG
>>107330800
>A weary surgeon with precisely detailed facial features and deep-set eyes sitting alone in a shadowy hospital office, single desk lamp illuminating case files, rain visible through accurately rendered window frames with German architecture beyond, in the mangaka style of Naoki Urasawa with photorealistic linework using minimal hatching, clean economical pen strokes, micro-expressions of moral exhaustion, cinematic low-angle composition, sparse strategic screen tones only for ambient shadow, no manga stylization, Western comic clarity, Hitchcockian visual tension through negative space, high resolution, black and white with film noir atmosphere.

Not as much his style at first but this is more like it.
>>
>>107330558
Looking at it, you are using the FP8 model and the FP8 clip with the Cumfart template while using the Q8 model with the Kijai workflow as well as an additional two steps. The LoRA stack is also missing BounceHighWan2_2 and the Lightx2v LoRAs are different as well. The samplers are different, shift is different, the seeds are different.
>>
File: zimage.jpg (299 KB, 2048x810)
299 KB
299 KB JPG
z image ready to mog cux2.dev with only 6b
https://x.com/dx8152/status/1993528526553931940
https://modelscope.cn/models/Tongyi-MAI/Z-Image-Turbo
>>
>>107330889
>Z image
is this from the glm guyz?
>>
>>107330889
>another chinese model promising everything
ok for sure this time
>>
File: 1754935946238999.mp4 (569 KB, 704x480)
569 KB
569 KB MP4
4 steps high, 2 steps low

2.2 MoE distil high by kijai for high, 2.2 lightning low (old one)

seems good.
>>
>>107330889
>6B
>multilingual

Did the Chinese guy on Twitter lie or does it really know multiple languages too? Kek
>>
>>107330899
>slowmo
>doesnt even properly jump off
sure buddy
sure
>>
>>107330899
the high is the fixed one that kijai made for the new 2.2 high lora that had some issues.

>Something is off about the LoRA version there when used in ComfyUI, the full model does work, so I extracted a LoRA from that which at least gives similar results than the full model:

https://huggingface.co/Kijai/WanVideo_comfy/blob/main/LoRAs/Wan22_Lightx2v/Wan_2_2_I2V_A14B_HIGH_lightx2v_MoE_distill_lora_rank_64_bf16.safetensors
>>
File: 1749339778132764.mp4 (467 KB, 704x480)
467 KB
467 KB MP4
>>107330902
this is pretty smooth, it's a pretty view after all.
>>
>>107330899
>Still using Wan

Why not Hunyuan bro?
>>
>>107330898
Well, they have a better track record than western models
>>
>>107330922
wan 2.2, qwen edit 2509, noobai/illustrious, all you really need for gens desu. how is hunyuan better than wan 2.2?
>>
>>107330838
>>
>>107330889
>Apache 2.0
I really hope this is good and releases within the next week, would be hilarious if it shat all over BFL's slop with only a fraction of the parameters
>>
File: ComfyUI_00018_.mp4 (1.55 MB, 720x976)
1.55 MB
1.55 MB MP4
>>107330884
>you are using the FP8 model and the FP8 clip with the Cumfart template while using the Q8 model with the Kijai workflow
Let me explain:
If I load the Q8 model in the Comfy Template with a UnetLoaderGGUFDisTorch2MultiGPU node, it takes forever. After 1800 seconds it only just reaches the low-noise sampling. I don't know if it requires additional configuration, but I see most people using this node to load GGUFs in a non-KJ workflow so I assume it's optimal.
Secondly, the differences between fp8 and Q8 are rather negligible. I understand Q8 is better (which is why I use it now), but it's clearly not the difference between blurry hand movement and clear hand movement. I have been using fp8_scaled on KJ's workflow since 2.2 released and the hand motion was never this bad. I could post thousands of video gens that prove this. Not sure the clips would be the difference either.
>as well as an additional two steps
This does make a difference, but again, there is more to it.
Pic related is 6 steps on the Comfy Template. Clear noticeable improvement over the default (4), but you can still see that teacachy effect on the hands, which isn't in the KJ version.
>The samplers are different
That's kind of my point. I'm not exactly sure what KJ is doing, how his nodes work under the hood or how he's configuring everything, but for me, the end result is a faster, much better quality output than the default Comfy wan2.2 template.
>different seed, loras, etc
I should have prefaced that my original webm wasn't intended to be a direct side-by-side comparison with all settings the same. I just wanted to highlight the blurry hands on the non-KJ workflow and see if anyone had an idea on exactly what was causing it.
>>
File: 1763309828924153.png (1.03 MB, 832x1248)
1.03 MB
1.03 MB PNG
>>
>>107330931
>how is hunyuan better than wan 2.2?
It isn't. Stop replying to morons.
>>
>>107330960
you're fucking retarded
>>
>>107330944
dance lora or just prompt? looks good
>>
>>107330966
>autist chimps out
Standard behavior from a mentally unhinged, socially rejected moron like yourself :)
>>
>>107330972
you dont even know how to do 1:1 comparisons and then cry like a bitch when shit gets different output and dismiss differences as 'uuuh they dont really matter'
literally kys
>>
>>107330969

Hip bump dance.

https://civitai.com/models/2120562/hip-bump-dance
>>
>>107330973
I choose not to because it didn't particularly matter you angry little idiot :)
Once again, I have thousands upon thousands of KJ gens without teacachy hands, and every single gen from the Comfy workflow has teacachy hands, even with steps at 6.
Continue being an angry little autistic chimpanzee at your leisure. I love a good meltdown from a mental freakazoid.
>>
>>107330981
kys
>>
>freakazoid
Is it the 90's again
>>
File: 1753947450915362.mp4 (456 KB, 640x640)
456 KB
456 KB MP4
4 high steps 2 low steps does seem pretty good so far, although I was doing 3/3 which isn't that different.
>>
>>107330981
>it didnt matter really!
>saying this while you were lamenting how comfyu workflow produced different/worse result then kijai's
>anons point out you're using different settings/models
>nu-uh, thats not it!
lmao
>>
>>107330966
>>107330983
>>107330973
keep it up, always entertaining to see wanschizo's feelings get hurt
>>
>>107330983
Keep crying you laughable blithering idiot.
>>
>>107330825
>better than flux 1
Anything's better than that.
>>
just stop replying to the retard its simple
>>
>>107330991
Yes, it wasn't it, because I provided an additional example showing how increasing the steps doesn't fix the problem, my dear retard.
I could very easily generate two side-by-side examples with the exact same settings, prompt and seed and the result will be the same. The problem is an angry little autistic chimp like you simply isn't worth the effort.
>>
>>107328878
I might be retarded but I don't see it in the threads.
>>
>>107330862
Can you check seed variability?
>>
>>107331014
>different sampler
>different model
>different shift
>i-it's just the steps
ok bro
>>
>>107331006
Anon, you lost the argument, and exposed yourself as mentally unhinged.
>>
File: ComfyUI_00056.jpg (1.41 MB, 2000x3008)
1.41 MB
1.41 MB JPG
I thought Flux.2 was supposed to handle larger resolutions this time around? Guess not.
>>
>>107331014
>I could very easily generate two side-by-side examples with the exact same settings
but you can't, you're already saying FP8 is the same as Q8 quality wise, which is retarded, so there's no hope of getting an objective comparison from you.
>>
>>107331022
>different sampler
Yes, this is literally the whole point you laughably stupid idiot. Kijai's sampler is better than the built-in samplers.
>>
File: bake again.png (192 KB, 866x539)
192 KB
192 KB PNG
they actually baked again
>>
File: dawn-ribbon.mp4 (412 KB, 832x480)
412 KB
412 KB MP4
>>107331036
I didn't say it's the same, my dear illiterate idiot. I said the difference is negligible, particularly in the context of the issue I'm highlighting.
Webm related is an fp8_scaled gen, and not even a spec of blurry hands. It simply doesn't happen on KJ's workflow regardless of whether it's using Q8 or fp8.
Continue being an angry little idiot at your leisure, my retarded friend :)
>>
File: 1756316115428101.mp4 (737 KB, 832x480)
737 KB
737 KB MP4
the japanese girls in dresses hold up white signs saying "LDG" and wave hello.

4 high/2 low (with lightx2v)
>>
>>107331051
fucking based
>>
File: hunyuanvid_upscale_00004.mp4 (3.74 MB, 1080x1674)
3.74 MB
3.74 MB MP4
>>107330931
I posted them yesterday but I'll post again upscaled. This is how. Perfectly smooth motion. Not slowed down.
>>
i'm surprised he has the time to post at all with the amount of bumping his own dead threads he has to do
>>
>>107330981
>I choose not to because it didn't particularly matter you angry little idiot :)
Yeah, one anon seems to have developed a sperging syndrome concerning wan workflows. Weird, that. You never know how a mental illness is going to manifest.
Keep digging.
>>
File: hunyuanvid_upscale_00006.mp4 (3.8 MB, 1080x1674)
3.8 MB
3.8 MB MP4
>>107331061
>>
>>107331040
I don't understand why some pretend as if these foundational models are the end all be all and finetunes do not exist. Surely they're simply unaware... right? I also lament the lack of artists (compared to tunes) in base models but I don't act as if it's the end of the world. Because it's not.
>>
>>107331065
It's not my problem you have high-functioning autism, my dear retard. It is highly abnormal to chimp out in the manner that you did over an absence of 1:1 parity between workflows in an AI software.
I am actually concerned for you. You clearly must've had difficulty growing up.
>>
File: 1738811092004358.png (2.71 MB, 1336x2008)
2.71 MB
2.71 MB PNG
>>
flux 1 never recovered from its lack of artists and slopstillation, as evidenced by chroma being a dead-in-the-water failbake that failed to learn a single artist tag in the 50 epochs of training. luckily it's about to be btfo by based china's 6b, and we'll get an actual good model based on that
>>
who are you talking to?
>>
File: 1760686332383715.mp4 (885 KB, 832x480)
885 KB
885 KB MP4
the japanese girls in dresses run out the door at the back of the room.

neat
>>
>>107331092
post hands
>>
>>107331092
I was sort of supporting you there, since he railed at me yesterday, too.
>>
>>107331092
thing is, you cant just say 'X is better than Y' if you can't even do proper parity testing, you dont even know what a sampler is
>>
>>107331139
he calls other illiterate but can't reason beyond "this reply to my post has a bad word in it" lol
>>
>>107331102
I thought Chroma didn't learn artist tags because they got deliberately purged from the captions before training.
>>
>>107331072
could you please post i2v with focus on face please?
reference image and maybe turning head towards the side or something?

and how long does it take to gen, 30 minutes or so is it?
>>
Just did my first Flux 2 gen
RTX 3080 (10 VRAM)
64GB system RAM
1024x1024 resolution
160 seconds
>>
>>107331145
You are illiterate though, moron. The point of the comparison is to highlight a very specific issue, and I posted another example that demonstrates this with high step count.
It doesn't even require a comparison you laughable little retard. If the base template workflow is producing substandard results, then there is clearly a problem with that workflow.
>>
>>107331145
There is such a thing as 'works out of the box', you know.
>>
>>107331150
No, you are clearly illiterate. This is one of the many symptoms of your severe autism. Your inability to comprehend a series of very simple, clearly laid out statements is a testament to your illiteracy and stupidity.
Again, I pity you anon.
>>
>YOUR THE BLOODY
>NO YOU ARE THE BLOODY!!
flux 2 must've been bad if the brown-offs are resuming so soon
>>
>>107331180
but we were discussing not WFs, but NODEs, and native comfy nodes can produce the same result as kijai's nodes, point being you have differences due to wrongly configured nodes. actually to use goofs you have to use external non comfy nodes anyway. but at least they don't lock you out of the rest of the ecosystem.
>>
>>107331184
lmao
>>
>>107331161
He claimed he didn't purge them around v28. And it didn't know any artists by then (it didn't change much). And ponyfag tested the model to see if it would learn his fakeass artist replacements, and it didn't either. Auraflow didn't either. They have T5 in common.
>>
>>107331189
>point being you have differences due to wrongly configured nodes
Not me, I use wan with native nodes because I hate bloat, and my comfy is bloated enough as it is.
But his point is valid. If a workflow with NODES doesn't work as well as some other workflow with other NODES out of the box, why should he bother plotting sigmas for a weekend to find out what's the problem?
>>
File: ComfyUI_07862_.png (2.93 MB, 2560x2560)
2.93 MB
2.93 MB PNG
>>
>>107331189
>>107331190
Good job posting from your phone to make it look like you're not samefagging, it must have been difficult with your debilitating autism. Everyone in this thread can see that you have mental retardation. I shouldn't have bothered to post my Wan question here, since clearly everyone here is a retard who probably can't even get 10 reactions per gen on Civit. Keep scrounging for (You)s while I swim in ComfyUI job offers, my illiterate friend.
>>
File: 1736326839376804.mp4 (1.39 MB, 832x480)
1.39 MB
1.39 MB MP4
lmao, holy shit

the golden retreiver on the left jumps and fires a lightning bolt at the man with glasses, hitting him in the face. The man with glasses flies very fast off his chair into the air to the right, and smashes through a white wall, flying off a mountain cliff during the day. several lightning bolts hit the man after he lands.
>>
File: 1741211075086071.mp4 (2.91 MB, 1436x1216)
2.91 MB
2.91 MB MP4
>>107330960
Not the anon that called you retarded but the anon that pointed out the settings differences here. I'm going to take a guess and say that the MultiGPU node is probably fucking up offloading for you if it jumps to 1800 seconds considering you are also using a 35 block swap in Kijai setup but hey, use what works and honestly, the Cumfag template is pretty barebones and shitty anyway. Matching more of the settings does give a better comparison between the two but I didn't notice the 2 CFG on first step for high noise from the Kijai setup until I had already run the gen. Neat way of doing it and to do that in Comfy you'd have to use a triple sampler setup or that other custom node that I can't remember off the top of my head. https://litter.catbox.moe/xa1p0kmelscpb4vu.mp4
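a toy sketch of that triple-sampler idea, just to show the step/CFG split being described: the CFG-2-on-the-first-high-noise-step part is from the comparison above, while the step counts and the CFG 1 elsewhere are illustrative assumptions, not anything official:

segments = [
    {"model": "high", "steps": range(0, 1), "cfg": 2.0},  # first high-noise step with real CFG
    {"model": "high", "steps": range(1, 4), "cfg": 1.0},  # remaining high-noise steps, assumed CFG 1 (distilled)
    {"model": "low",  "steps": range(4, 6), "cfg": 1.0},  # low-noise expert finishes the schedule
]
for seg in segments:
    for step in seg["steps"]:
        print(f"step {step}: {seg['model']}-noise model, cfg {seg['cfg']}")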
>>
>>107331027
What's with that obnoxious dithering? My outputs have that as well
>>
>>107331286

Probably their watermark. Cuckmark.
>>
File: ComfyUI_00027_.mp4 (282 KB, 832x480)
282 KB
282 KB MP4
>>107331049
Just for fun, I tried re-genning this in the comfy template with the exact same settings and models.
Wouldn't even work, I got the error
>mat1 and mat2 shapes cannot be multiplied (77x768 and 4096x5120)
so I had to change the Clip.
Output is obviously different, but the same old hand blurring from the Comfy Template is there.
>>
File: ComfyUI_00058.jpg (2.29 MB, 1600x2400)
2.29 MB
2.29 MB JPG
>>107331286
I don't know. I can't get a clean output on any Flux derivative anymore, maybe something changed.
>>
>>107331281
I take your point, and I'd be very interested to see the workflow for the left side because I'm a little stumped as to what's driving the hand problem. If I knew what it was, I could provision to always avoid using whatever is causing that problem.
and yes, I was suspecting there is some kind of big problem related to the GGUF loader. Obviously I'm not technical enough to figure out what, but the KJ loader works just fine, and lets me load sageattn_compiled right away so I'm just going to keep using that. I'm especially happy to keep using KJ now that I've learned how to integrate NAG, which I think definitely makes a difference for unwanted mouth movement (even if I get more OOM errors).
>>
File: ComfyUI_08814_.png (1.85 MB, 1024x1024)
1.85 MB
1.85 MB PNG
>A graceful peasant woman in her prime with porcelain skin and delicate features cradling a lamb in her arms, standing barefoot in a sun-dappled pastoral meadow with wildflowers, her simple rustic dress rendered in exquisite fabric folds, gazing with innocent tenderness at the viewer, soft afternoon light illuminating her face with an ethereal glow, in the academic realism style of William-Adolphe Bouguereau with flawless technique and idealized beauty, impossibly smooth skin texture achieved through invisible brushstrokes, perfectly anatomical proportions with graceful classical poses, luminous flesh tones with subtle pink and cream gradations, meticulous attention to textile draping and natural elements, sentimental romanticism with technical virtuosity, soft atmospheric perspective in the background, high resolution, oil on canvas with porcelain-like finish and timeless neoclassical elegance.

SD 1.5 Bouguereau bros... Flux.2 can do him (can't post y'know what gen here so have this instead).
>>
Why aren't the shards being read by the config file? How do you even store multiple models like this? Can you rename the files?
>>
File: 00341-3717267876.png (1.41 MB, 1024x1024)
1.41 MB
1.41 MB PNG
>>
>>107331197
Some of ponyfag's ramblings post V7 also talked about how T5 struggles to separate style and content. Maybe there is some truth to that.
At this point T5 is too outdated to be used in new models anyway.
>>
Has anyone tried to use a Lora with Flux.2? I get this error:
>"mul_cuda" not implemented for 'Float8_e4m3fn'
>>
>>107331301
If cfg is the culprit, try some cfg snake oils comfy has implemented without rhyme or reason. Cfgnorm acts like an offset for cfg 1 at ranges 0.95-1.05 without increasing actual cfg.
>>
File: ComfyUI_08815_.png (1.64 MB, 1024x1024)
1.64 MB
1.64 MB PNG
>A elegant society woman in a black silk evening gown with dramatic décolletage, standing in a opulent gilded interior with her body turned three-quarters while her face gazes confidently toward the viewer, one pale arm resting gracefully on a marble mantelpiece, candlelight and lamplight creating complex illumination, in the painterly realist style of John Singer Sargent with virtuosic loose brushwork, bold confident strokes that appear effortless yet capture exact form, mastery of fabric rendering with luminous black silk catching light, sophisticated limited palette with rich darks and luminous flesh tones, bravura technique with visible energetic brushstrokes, psychological depth and aristocratic presence, dramatic chiaroscuro with multiple light sources, high resolution, oil on canvas with Gilded Age glamour, spontaneous yet controlled execution, and capturing personality through gesture and gaze.
>>
>>107331323
I would never call the one you posted 'not clean'.
>>
>>107331370
Which lora? The model is brand new, who trained any? Do any of the trailers even support it?
>>
>>107331370
There are loras already? Some pajeet is probably grifting during the hype wave.
>>
>>107331392 >>107331402
i made one myself with ai-toolkit
>>
File: 1742823760433779.mp4 (1.4 MB, 832x480)
1.4 MB
1.4 MB MP4
sent to the abyss:
>>
>>107331286
>>107331292
Watermark can be defeated with a finetuned VAE
https://www.reddit.com/r/StableDiffusion/comments/1oytasv/get_rid_of_the_halftone_pattern_in_qwen_imageqwen/
>>
>>107331420
Assuming ai-toolkit worked properly, then my guess would be that comfy hasn't officially implemented lora support for flux 2 yet.
I think there is a custom node you can try to load loras in different ways, but the name eludes me at this point.
>>
>>107331326
The one labeled Kijai on the left was from >>107330558. I only genned the one on the right.
>>
>>107331442
I meant to say the right side, my bad.
What specifically did you change? Clip? Shift?
>>
>>107331435
>>107331027
It's not a watermark, though. This dithering appears in lots of DiTs that didn't have expected sigmas at the early steps (scanlines and moire also stem from this). They get baked in and propagated downstream. Sort of a sign of inference insufficiency. Here it's probably just a sign that it wasn't trained on those resolutions.
>>
File: ComfyUI_00002_.png (1.52 MB, 1024x1024)
1.52 MB
1.52 MB PNG
>>
File: 1757820501901206.mp4 (1.71 MB, 832x480)
1.71 MB
1.71 MB MP4
the golden retriever dog on the left jumps in the air towards the man with glasses and fires a lightning bolt at his head. The man with glasses flies very fast off his chair into the air to the right, and smashes through a white wall, flying off a mountain cliff during the day. the man with glasses lands on the ground far below and explodes in a huge explosion with fire and smoke.

karma!
>>
>>107331493
4 steps high, 2 steps low, the newer 2.2 MoE distil lora for high (kijai one), 2.2 lightning low (from before)
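for anyone confused by the split: it's the same schedule cut in two, high noise expert takes the first chunk, low noise expert the rest. toy sketch (the .step() call is a placeholder, not a real comfy API):

HIGH_STEPS, LOW_STEPS = 4, 2

def split_sample(latent, high_model, low_model, sigmas):
    # high noise expert handles large structure on the first steps
    for i in range(HIGH_STEPS):
        latent = high_model.step(latent, sigmas[i], sigmas[i + 1])
    # low noise expert refines detail on the remaining steps
    for i in range(HIGH_STEPS, HIGH_STEPS + LOW_STEPS):
        latent = low_model.step(latent, sigmas[i], sigmas[i + 1])
    return latent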
>>
>>107331165
Check
https://www.reddit.com/r/StableDiffusion/comments/1p4paqd/my_testing_of_hunyuanvideo_15_and_wan_22_on_i2v/

It does a good job too

As for how long: about 2.5 mins for a 2 second clip and about 6 mins for a 5 second one, on a 3090.
>>
>>107331502
lightning is only good for non-anime gens, right?
also kijai himself recommends using the wan2.1 480p lightx2v lora alongside his new high noise loras.
>>
>>107331536
2.1 for low?
>>
>>107331447
Besides loading Q8, FP16 text encoder, and correcting the missing LoRA, I changed step count, shift, and then CFG in that order. It's probably a combination of step count and shift being important here. The workflow should be in >>107331281 litterbox.
>>
>>107331281
at least we have 1 (one) person able to do 1:1 comparisons. thanks for your service.
>>
>>107331547
https://huggingface.co/Kijai/WanVideo_comfy/discussions/109
>>
>>107331562
ah, thanks. yeah the new 2.2 high one he posted is great, low noise was never really the issue so I guess 2.1 for low works well.
>>
>>107330502
>Still trying to figure out why KJ gets so much hate here.
no one is hating him lol, if someone doesn't want to use his nodes they just skip them and go for native comfy, that's all. this dude has done way more good things than bad, I can't hate the guy
>>
>>107330617
>Flux 2 is finally a local model at dall-e 3's level
which is terrible since we are now in the Nano Banana Level and this shit is way more capable than dalle 3
>>
>>107330889
https://xcancel.com/dx8152/status/1993537553149083807#m
So Alibaba will release the new version of QIE + that Z image model thing? damn they're cooking!
>>
File: 1762390068066336.mp4 (1.32 MB, 832x480)
1.32 MB
1.32 MB MP4
>>107331493
same setup but with the 2.1 low lora instead of 2.2 lightning

kek, seems smooth
>>
File: 1742953402723560.png (72 KB, 617x129)
72 KB
72 KB PNG
>>107331594
Oh wait nevermind it's from another company
https://xcancel.com/bdsqlsz/status/1993619020361445446#m
>>
>>107331561
No one here owes you anything.
>>
>>107331527
speed seems reasonable
kk tyvm anon
>>
>>107331608
This is the competition we need. Focus on consumer GPUs, high quality models, and open licenses. Much better than whatever cuck mechanisms BFL is doing with Flux. Hell, they even gave up on video because they got mogged by Wan and Hunyuan which they won't admit.
>>
>>107331694
>This is the competition we need. Focus on consumer GPUs, high quality models, and open licenses. Much better than whatever cuck mechanisms BFL is doing with Flux.
exactly this, you have to show you can do impressive stuff at a decent size. we're still at the beginning of this thing, there's plenty to optimize instead of just "STACK MOAR LAYERS"
>>
>>107331608
tongyi mai is another branch of alibaba, not related to the qwen team, but also releasing an open model called z-image. which, despite being called z-image, seems to have no relation to z-ai who make glm??? so confusing
>>
>>107331575
>which is terrible since we are now in the Nano Banana Level and this shit is way more capable than dalle 3
I think he meant art styles, and Nano Banana ain't uncensored anyways; whatever local doesn't know can be tuned in (Flux.2 is literally a single LoRA away from what you claim Nano Banana is better at, concept-wise).
>>
File: 38891587.mp4 (3.75 MB, 624x1408)
3.75 MB
3.75 MB MP4
>>107331051
2 high/2 low with PainterI2V motion amplitude 1.15

Have you tried it?
>>
>>107331728
whats painteri2v?
>>
File: 1740789956426358.png (296 KB, 2085x1100)
296 KB
296 KB PNG
>>107331711
>so confusing
it is confusing
https://xcancel.com/srameojin/status/1993571145380192266#m
>>
File: 00118-2649842283.png (701 KB, 800x800)
701 KB
701 KB PNG
>>
>>107331739
This https://github.com/princepainter/ComfyUI-PainterI2V
>>
File: 1747947580820236.png (2.32 MB, 960x1280)
2.32 MB
2.32 MB PNG
for a 32b model I'm really disappointed by the hands
>>
>>107331771
interesting, will check it out after
>>107331761
cute migu
>>
https://youtu.be/D0_QGrdtvEg?t=247
Has anyone tried this: give it a good style image as input 1 and ask flux 2 to follow the style of input 1? they showed you can do this kind of shit on their blog page
>>
File: 1757795242510776.mp4 (1.21 MB, 832x480)
1.21 MB
1.21 MB MP4
meme magic
>>
File: 1734168543289944.jpg (3.71 MB, 6144x1536)
3.71 MB
3.71 MB JPG
https://huggingface.co/city96/FLUX.2-dev-gguf
thank you city for this quant comparison
>>
>>107331811
down to Q5_1 her left hand keeps the right pose, if you go below that it starts to look too different from bf16
>>
>>107331811
>q4km is still 20gb
lmao bros
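the size is just what 32B weights cost though, napkin math (bits per weight are approximate llama.cpp-style averages):

def quant_size_gb(params_b, bits_per_weight):
    # on-disk size ~= params * bits / 8, ignoring metadata and the TE/VAE
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

for name, bpw in [("BF16", 16.0), ("Q8_0", 8.5), ("Q4_K_M", 4.8), ("Q3_K_M", 3.9)]:
    print(name, round(quant_size_gb(32, bpw), 1), "GB")
# BF16 64.0, Q8_0 34.0, Q4_K_M 19.2, Q3_K_M 15.6

so ~19-20GB for q4km is exactly what you'd expect from a 32b transformer.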
>>
>>107331790
I wanna make more because I don't remember where my other ones are.
>>
>forge neo bluescreens my computer trying to run flux
good stuff. just. mwah chef's kiss. cumfart users are handed one (1) "i told you so".
>>
>>107331855
i told u bro, embrace spaghettification of software NOW
>>
File: 1762840873654616.jpg (3.5 MB, 6554x1990)
3.5 MB
3.5 MB JPG
>>107331852
that's why I'm waiting for nunchaku, it has the 4bit size but way better quality than q4km
>>
File: I mean...png (1.01 MB, 1079x1074)
1.01 MB
1.01 MB PNG
>>107331855
I told you so
>>
>>107331871
also double the speed (with the unified kernels meme they do). there's SDNQ but im not installing SDNEXT tbqh. cumfart said he had something better than SDNQ in store and said to 'just wait' but MAN im starved for good 4bit quantizations, even more so because blackwell is supposedly optimized for fp4 memes so FUUCKKKKKKKKK
>>
>>107331878
>there's SDNQ but im not installing SDNEXT tbqh.
that's funny you say that because flux 2 actually has a SDNQ quant right now
https://huggingface.co/Disty0/FLUX.2-dev-SDNQ-uint4-svd-r32
>>
>>107331890
it's an SVD quant too (nunchaku method) but it lacks the fused kernels
>>
>>107331855
there's a reason we're torturing ourselves with cumfart spaghetti shit, its back end is the best and it's not even close
>>
>>107331861
>>107331877
credit where it's due, cumfart's cum a long way and used correctly it's faster than every forge iteration. it cracks 3s/it easily vs 2.7s/it on sdxl based models at 720p.
also wan. it just works.

>>107331878
coming from a niggerwell owner, don't hype up quants too much. remember they're like lobotomies that only scoop out more and more important parts of the brain, be happy with sage attention and how fast they are in general before you start giving every model a concussion.
>>
>>107331893
>it lacks the fused kernels
that's a shame :(
>>
>>107331852
>>q4km is still 20gb
>lmao bros
Insane bloat. What the fuck are these people doing
>>
>>107331901
>coming from a niggerwell owner, don't hype up quants too much. remember they're like lobtomies that only scoop out more and more important parts of the brain
we're almost in the year of our lord 2026 and no company is willing to train their model in fp8, only deepseek has done it so far
>>
File: 00176-979101957.png (1.62 MB, 1024x1024)
1.62 MB
1.62 MB PNG
NYX8G
>>
>>107331911
because FP8 is AIDS
>>
File: 1744055319742343.png (192 KB, 690x388)
192 KB
192 KB PNG
>>107331918
shut up, Bitnet is the future AND NO IM NOT COPING
https://huggingface.co/microsoft/bitnet-b1.58-2B-4T
>>
>>107331930
>bitnet
Now that's a name i haven't heard in a long time.
>>
>>107331930
>IM NOT COPING
>image depicts coping
>>
>>107331950
the image actually depicts u
>>
>>107330502
>>107330515

When something new for wan releases it's always for kj nodes first, or it doesn't come to native until months later. Also his nodes tend to eat up a lot of memory; for example I can load 250 frames with native no issues but 250 frames will OOM on kj.

Then again, his nodes seem to be improving.
>>
>>107331930
>>107331946
Damn I remember /lmg/ shilling that shit HARD.
I assume it turned out to be snakeoil like most of the next big things getting posted here.
>>
>>107331976
that's where i originally heard it from as well.
It was supposed to propel us into the promised land. Like gpt 4 at home.

kek.
>>
>>107331950
>thatsthejoke.png
>>
File: file.png (309 KB, 653x565)
309 KB
309 KB PNG
It's funny how people are rightfully complaining about how slow Flux 2 is, but you have to remember this is the distilled model; if we were using the one with CFG > 1 it would be 2 times slower
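the 2x comes straight from CFG needing two forward passes per step (cond + uncond) while the distilled model bakes guidance in and only needs one. minimal sketch, the model signature here is made up for illustration:

def cfg_step(model, x, t, cond, uncond, cfg_scale):
    # undistilled: two transformer evaluations per sampling step
    eps_c = model(x, t, cond)
    eps_u = model(x, t, uncond)
    return eps_u + cfg_scale * (eps_c - eps_u)

def distilled_step(model, x, t, cond, guidance):
    # guidance-distilled (flux-style): guidance fed in as an embedding, one pass per step
    return model(x, t, cond, guidance=guidance)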
>>
>my card can gen at the max res in flux in just a minute for 20 steps
i.. do not know where i was getting the idea i needed upscale and detailer passes from..
for reference, an sdxl model genned at 1280x720, upscaled to 1080p + detailer passes, would take between 48 and 56 seconds depending on forge's mood. Honestly a very reasonable time difference for way better quality realistic 1girls.
>>
>>107331950
you have autism but unironically btw
>>
>>107332044
i'm afraid we've all got autism haven't we?
>>
File: 1732750997843985.png (25 KB, 732x232)
25 KB
25 KB PNG
nunchakubros... i'm scared.
>>
File: 1758836837054336.png (393 KB, 500x672)
393 KB
393 KB PNG
>>107332056
kek, nvidia killed them
>>
>>107331811
Q3 looks fine.
>>
>>107331811
>basic bitch prompt used as benchmark
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
>>
File: 1761152628855572.mp4 (1.23 MB, 832x480)
1.23 MB
1.23 MB MP4
8 steps, 2.2 kijai MoE distill lora 4 steps, 2.2 lightning low 4 steps

karmic justice:
>>
File: file.jpg (2.04 MB, 7961x2897)
2.04 MB
2.04 MB JPG
>>107332086
that's why the "black migu with dreadlocks" is still the gold standard, with so much going on you can see better which quant shits the bed and which doesn't
>>
>>107332021
>>107332056
>no nunchaku
>the real model was even bigger
5090keks... how are we coping??? i thought i invested in the future??
>>
>>107332097
KAYA WINS FATALITY
>>
>>107332097
lmao, that one is kino
>>
>>107332097
also prompt: the golden retriever dog on the left fires a huge lightning bolt from their paws at the man wearing glasses. The man with glasses flies off his chair into the air to the right through a window, over the side of a mountain cliff during the day. he lands far below and a giant lightning bolt hits his body where he lands, creating lots of smoke and fire.

you can add tons of shit and wan will still work, even if it's a word salad.
>>
>>107332086
nice, you're starting to notice. next step is realizing it's intentional. 99% of local 'speedups' are snakeoil slop
>>
File: 1764078864672176.mp4 (1.58 MB, 832x480)
1.58 MB
1.58 MB MP4
lmao the motion on this one, action cam

adding more steps to high/low really did the trick, 2/2 was passable but it's far better with a few more steps. and not much longer to gen either.
>>
>>107331896
Speak for yourself. I'm torturing myself with cumfart spaghetti shit because it gives me a peek at the actual values that get passed between nodes, but not always. And because it gives me the illusion that I can automate a certain task, only to find out later that I would have been better off with a vibecoded python script all along.
>>
>>107332104
>"black migu with dreadlocks"
Lol no, resort (or was it zoot?) that's just you.
>>
>>107332141
the point is that if you want to test quants you need to go for complicated prompts that add a lot of shit, every quant is going to nail "1girl, standing", it's a useless comparison
>>
File: 1744205511258933.mp4 (1.35 MB, 832x480)
1.35 MB
1.35 MB MP4
surprise, motherfucker:
>>
>>107332161
>flies like a fucking ragdoll instantly
reminds me of AVGN episodes where he'd use a dummy in scenes where he gets tossed kek
>>
>>107332170
I wonder if any animation studios use AI for conceptualizing stuff, they could prompt something silly then animate it for real based on the outputs.
>>
File: file.png (13 KB, 644x298)
13 KB
13 KB PNG
GGUF Flux 2 workflow, no similar issues on github
>>
>>107332192
might be a VAE issue? not sure
>>
File: 2025-11-26 084809.png (97 KB, 1167x709)
97 KB
97 KB PNG
How do I disable that new UI? It's fucking broken.
>>
>>107332192
>>107332208
seems like firefox broke my mistral download, dogshit browser
>>
File: 1755100693389452.mp4 (1.23 MB, 832x480)
1.23 MB
1.23 MB MP4
KINO

the golden retriever dog on the left fires a huge blue lightning bolt from their paws at the man wearing glasses. The man with glasses flies off his chair into the air to the right through a window flipping over and over, over the side of a mountain cliff during the day. he lands far below and a giant lightning bolt hits his body where he lands, creating lots of smoke and fire.
>>
>>107332192
>>107332208
File \comfyui-videohelpersuite\videohelpersuite\latent_preview.py", line 90, in decode_latent_to_preview
latent_image = F.linear(x0.movedim(1, -1), self.latent_rgb_factors,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected size for first two dimensions of batch2 tensor to be: [48, 128] but got: [48, 32].

seems like it's a conflict between the videohelpersuite node and comfy's preview technique on flux 2
https://github.com/comfyanonymous/ComfyUI/commit/f16219e3aadcb7a301a1a313ab8989c3ebe53764
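makes sense: the preview is just a linear projection of the latent to RGB with a per-model factor matrix, and the VHS node is still carrying a matrix sized for the old latent channel count while flux 2's latent uses a different one, so the matmul shape check blows up. tiny repro of the same class of error (channel counts here are made up for illustration):

import torch
import torch.nn.functional as F

rgb_factors = torch.randn(3, 32)    # factors sized for a 32-channel latent
x0 = torch.randn(1, 128, 64, 64)    # latent with a different channel count
try:
    preview = F.linear(x0.movedim(1, -1), rgb_factors)
except RuntimeError as e:
    print(e)                        # same shape-mismatch family as the traceback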
>>
>>107332214 (me)
nevermind its >>107332227
>>
>>107332227
I can't say for sure cause flux 2 is so new, give it a day for people to work out the kinks and the workflows, could also be a buggy node
>>
File: 1752886887607220.png (211 KB, 2310x1435)
211 KB
211 KB PNG
>>107332237
>>107332238
you have to disable "Display animated..." to make it work again
>>
>>107332249
that works, thanks
>>
File: 1732996715504938.png (1.17 MB, 1080x810)
1.17 MB
1.17 MB PNG
this is a 6b model btw, and it looks better than that flux 2 32b model, the chinks are cooking
https://modelscope.cn/models/Tongyi-MAI/Z-Image-Turbo
>>
>>107332318
when i can actually use it, i will definitely check it out.
>>
File: 1733384722276040.mp4 (1.73 MB, 832x480)
1.73 MB
1.73 MB MP4
great explosion imo:
>>
>>107332379
are you using wan 2.2 or hunyuanvideo 1.5?
>>
hurry up and bake you freaking retard
>>
>>107332393
hmm..
>>
>>107332385
wan 2.2. don't see a reason to change yet as it's the standard so far (imo)
>>
flux2 recommended res?
>>
File: 1761642039782079.png (85 KB, 1390x950)
85 KB
85 KB PNG
for those who don't get why comfyui has gone black all of a sudden, you can switch back to the gray background by doing this
>>
File: file.png (940 KB, 498x818)
940 KB
940 KB PNG
>huge breasts according to flux 2
>>
Baker?
>>
What is the standard practice for leveraging the vram installed on a separate PC on the same network?

I have an RTX 5070 Ti with 64GB of system RAM and it's been mostly fine for wan2.2, but I recently added NAG to the workflow and gen time is now twice as long for 97 frames at 992x720. Can't I just offload the text encoding to some 8GB vram pc?
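there's no turnkey way to split one comfy graph across two boxes that I know of, but text conditioning is easy to precompute on the second machine and ship over as a file, since the embeddings are tiny next to the video latents. sketch with transformers, assuming a Wan-style UMT5 text encoder (model name and paths are placeholders, adjust to whatever your workflow actually loads):

# on the second box: encode once, save, copy cond.pt over the network
import torch
from transformers import AutoTokenizer, UMT5EncoderModel

enc_name = "google/umt5-xxl"  # assumption: a Wan-style UMT5 text encoder
tok = AutoTokenizer.from_pretrained(enc_name)
enc = UMT5EncoderModel.from_pretrained(enc_name)  # CPU is fine for a one-off encode

with torch.no_grad():
    ids = tok("your prompt here", return_tensors="pt")
    emb = enc(**ids).last_hidden_state

torch.save(emb, "cond.pt")

then on the gen box you'd load cond.pt in place of the text encoder output; whether that's practical depends on your nodes, and it won't help if NAG itself is what's eating the time.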
>>
File: Flux 2 dev (Q8 - RTX3090).png (2.75 MB, 2065x1605)
2.75 MB
2.75 MB PNG
I waited 13 minutes for this...
>35/35 [12:51<00:00, 22.04s/it]
>>
>>107332437
few mins
>>
>>107332452
>>107332452
>>107332452
>>107332452
>>
>>107332249
Thanks I had the same issue
32 s/it on a 4060 Ti 16GB VRAM with 32 GB RAM... twelve minutes total. I am stuck with chroma for local, but $0.07 a gen doesn't seem like much.
>>
File: 1736413817945439.png (512 KB, 875x355)
512 KB
512 KB PNG
>>107332443
qwen edit is the best for that and multi image edits.
>>
>>107332525
>qwen edit is the best for that
for cartoon yeah, but for realism not so much


