/g/ - Technology


Thread archived.




Discussion and Development of Local Image and Video Models

Previous: >>108461444

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/ostris/ai-toolkit
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/musubi-tuner
https://github.com/tdrussell/diffusion-pipe

>Z
https://huggingface.co/Tongyi-MAI/Z-Image
https://huggingface.co/Tongyi-MAI/Z-Image-Turbo

>Anima
https://huggingface.co/circlestone-labs/Anima
https://tagexplorer.github.io/

>Qwen
https://huggingface.co/collections/Qwen/qwen-image

>Klein
https://huggingface.co/collections/black-forest-labs/flux2

>LTX-2
https://huggingface.co/Lightricks/LTX-2

>Wan
https://github.com/Wan-Video/Wan2.2

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
https://rentry.org/mvu52t46

>Illustrious
https://rentry.org/comfyui_guide_1girl

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Collage: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/r/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
https://rentry.org/animanon
>>
>>108466467
gemmy
>>
>mfw Resource news

03/27/2026

>ComfyUI-DaVinci-MagiHuman
https://github.com/mjansrud/ComfyUI-DaVinci-MagiHuman

>ShotStream: Streaming Multi-Shot Video Generation for Interactive Storytelling
https://luo0207.github.io/ShotStream

>Calibri: Enhancing Diffusion Transformers via Parameter-Efficient Calibration
https://v-gen-ai.github.io/Calibri-page

>Free-Lunch Long Video Generation via Layer-Adaptive O.O.D Correction
https://github.com/Westlake-AGI-Lab/FreeLOC

>MACRO: Advancing Multi-Reference Image Generation with Structured Long-Context Data
https://macro400k.github.io

>EagleNet: Energy-Aware Fine-Grained Relationship Learning Network for Text-Video Retrieval
https://github.com/draym28/EagleNet

>PMT: Plain Mask Transformer for Image and Video Segmentation with Frozen Vision Encoders
https://github.com/tue-mps/pmt

>PixelSmile: Toward Fine-Grained Facial Expression Editing
https://ammmob.github.io/PixelSmile

>RealRestorer: Towards Generalizable Real-World Image Restoration with Large-Scale Image Editing Models
https://yfyang007.github.io/RealRestorer

>A Google AI breakthrough is pressuring memory chip stocks from Samsung to Micron
https://www.cnbc.com/2026/03/26/google-ai-turboquant-memory-chip-stocks-samsung-micron.html

>Matrix-Game 3.0: Real-Time and Streaming Interactive World Model with Long-Horizon Memory
https://huggingface.co/Skywork/Matrix-Game-3.0

>Disney's Sora Disaster Shows AI Will Not Revolutionize Hollywood
https://www.404media.co/disneys-openai-sora-disaster-shows-ai-will-not-save-hollywood

03/26/2026

>PP-OCRv5: A Specialized 5M-Parameter Model Rivaling Billion-Parameter Vision-Language Models on OCR Tasks
https://github.com/PaddlePaddle/PaddleOCR

>OpenAI puts erotic chatbot plans on hold ‘indefinitely’
https://www.ft.com/content/de9bf0af-b241-424f-8229-5870b1c0d93d

>VACE Skyreels V3 R2V Merge — GGUF
https://huggingface.co/mickmumpitz/VACE_Skyreels_V3_R2V_Merge-GGUF
>>
>mfw Research news

03/27/2026

>RefAlign: Representation Alignment for Reference-to-Video Generation
https://arxiv.org/abs/2603.25743

>AnyID: Ultra-Fidelity Universal Identity-Preserving VidGen from Any Reference
https://arxiv.org/abs/2603.25188

>Self-Corrected ImgGen with Explainable Latent Rewards
https://yinyiluo.github.io/xLARD

>InstanceAnimator: Multi-Instance Sketch Video Colorization
https://arxiv.org/abs/2603.25357

>Z-Erase: Enabling Concept Erasure in Single-Stream Diffusion Transformers
https://arxiv.org/abs/2603.25074

>PackForcing: Short Video Training Suffices for Long Video Sampling and Long Context Inference
https://arxiv.org/abs/2603.25730

>DCARL: A Divide-and-Conquer Framework for Autoregressive Long-Trajectory VidGen
https://junyiouy.github.io/projects/dcarl

>BiFM: Bidirectional Flow Matching for Few-Step Image Editing and Generation
https://arxiv.org/abs/2603.24942

>CIAR: Interval-based Collaborative Decoding for ImgGen Acceleration
https://arxiv.org/abs/2603.25463

>PSDesigner: Automated Graphic Design with a Human-Like Creative Workflow
https://henghuiding.com/PSDesigner

>Semantic-Aware Prefix Learning for Token-Efficient ImgGen
https://arxiv.org/abs/2603.25249

>Learning to Rank Caption Chains for Video-Text Alignment
https://arxiv.org/abs/2603.25145

>ReDiPrune: Relevance-Diversity Pre-Projection Token Pruning for Efficient MLLMs
https://github.com/UA-CVML/ReDiPrune

>BizGenEval: Systematic Benchmark for Commercial Visual Content Generation
https://arxiv.org/abs/2603.25732

>From Weights to Concepts: Data-Free Interpretability of CLIP via Singular Vector Decomposition
https://frangente.github.io/SITH

>MoE-GRPO: Optimizing Mixture-of-Experts via Reinforcement Learning in VLMs
https://arxiv.org/abs/2603.24984

>Wan-Weaver: Interleaved Multi-modal Generation via Decoupled Training
https://doubiiu.github.io/projects/WanWeaver

>TRACE: Object Motion Editing in Videos with First-Frame Trajectory Guidance
https://trace-motion.github.io
>>
>>108466518
>>108466523
You're not welcome
Fuck off thread schizo
>>
>>108466573
>being this angry over news post
please ran stop trying to sabotage this general just because you dislike some poster
get a job already, please
>>
File: 1652362253530.png (750 KB, 900x675)
>no blessing spam
he was an old man kek
>>
Blessed thread of frenship
>>
wtf is this roleplay
this ain't /trash/ retard
>>
cozy breas
>>
>>108466811
https://desuarchive.org/g/search/text/cozy%20breas/
>>
so many breas that are cozy
>>
>>108466467
Just bought 64GB (DDR4) RAM
What can I expect for slopping??
>>
>>108466956
enough for wan
>>
>>108466956
>he boughted
>just when ram prices are projected to fall soon
LMAO'd
>>
>>108466956
in two sticks right? so what's that total with your current ram
>>
>>108466982
I'm skipping DDR5, anon. I'll wait for DDR6 in 2030 instead.
>>
>>108466467
Thank you for baking this thread, anon
>>108466700
Thank you for blessing this thread, anon
>>
>>108466997
Yes, 2x32GB, but I have my old 16GB x2. It has a different frequency and timings though, so I'm not gonna use my old RAM... unless I get desperate enough for more RAM
>>
>>108466982
>He thinks RAM prices will immediately go down
Lol no, those Micron and Samsung jews still have reason to jew up the prices. Just look at Nvidia
>>
File: ComfyUI_0404.png (1.67 MB, 1024x1536)
>>
Anons, I want to animate hentai images on my RTX 5070 Ti. I take it that WAN 2.2 is the best for that? Which version of that should I download? I'm kinda lost. 14B? 5B? I need help.
>>
>>108467188
just don't bother with video, it's still all garbage, especially nsfw; just focus on making nicer images.
You can spend days configuring and genning for a 5s clip of 1girl wiggling a bit
>>
>>108467136
Box?
>>
File: ComfyUI_0408.png (1.63 MB, 1024x1536)
>>108463383
ran through captioner then put that caption back through qwen2512
>>
>>108467188
14b fp8 + light2v loras
>>
>>108467188
try a bunch of shit, and the GGUFs too. find the balance of quality and speed you're happy with and then get some motion loras from civitai.
it'll probably take a few gens to get a feel for prompting motion, but once you get a feel for it you should be golden (lots of "and then" and "while rhythmically").
there are generals on other boards that are anime focused so check the links in the op.
>>
>>108467188
Use LTX 2.3
It's great and better than WAN, but it's limited by the lack of LoRAs for now. BUT in 2 or 3 months it's gonna surpass WAN
>>
>>108467454
Nah, not in prompt comprehension.

In pure local, Wan 2.2 is by far the best at understanding prompts. When you want one girl to fuck off, another to just go to the right, and a center one to kneel and show her plain white cotton panties.

Or anything else you absolute degenerates use it for nowadays. The whole point: it's fucking the best, by absolutely far, for prompt comprehension in local I2V. That includes LoRAs; as it happens, LoRAs for Wan 2.2 are both extremely easy to train and permissive.

LTX 2.3 is the poor man's version of Wan 2.2. It still somewhat works, but yeah, it's basically a failure. It doesn't understand prompts as well as Wan 2.2 by quite a margin, and the videos it generates are subpar on every metric.
>>
>>108467695
Wan only lasts about 5 seconds.
>>
>>108467704
Quality over quantity.
>>
>>108467704
I'm happy that Wan has some competition; maybe it will force Alibaba to release more open-weights models after all.

But saying LTX is better than Wan is like saying Stable Diffusion 3 is better than Qwen Image. There's a surprisingly huge number of luddites among you, but it's just not true.
>>
File: Flux2Img_00011_.png (3.63 MB, 1440x1920)
flux2 using about 60gb memory on the latest comfy
can't remember if that's better than using --low-vram but it hasn't oom'd yet
>>
Hello. I am a ginormous retard. I got flux.2 working on comfyui, but I don’t exactly understand what the 2 image input nodes are for. I’ve been putting the same image in both nodes and then prompting and it’s turning out well, but I’m sure I could be doing better.
>>
>>108467780
You can have two (or more) reference images. Prompt "put the clothes from image 1 on the character in image 2", etc.
>>
https://huggingface.co/AiArtLab/sdxs-1b
BABE WAKE THE FUCK UP
brand NEW anime and illustration model, INNOVATIVE architecture, and APACHE 2 licensed!
Anima BRVTALLY mogged yet again, it cannot stop catching L's
tdrustled on suicide watch
>>
Got LORA training for Anima running, but is there anywhere I can go for a good rundown of info on training with it? Some schedulers don't seem to work with it, and I'm not sure what settings are ideal for characters/styles. Searching just brings up shit like plebbit posts with no likes or replies.

Really enjoying the natural language functionality on Anima, being able to tell it specifically what to change is fun.
>>
>>108467829
>1million image dataset
into the trash
>>
>>108467935
Careful. He'll get upset like last time and start samefagging.
>>
>>108467829
>Qwen3.5
>VAE: 32ch
Well well well.
>>
>>108467829
>>108467935
without the entirety of booru its not very interesting desu
>>
how can i create gore effects locally? i've tried all popular models, and everything is censored, except for the blood
>>
>>108467829
>Unet: 1.5b parameters
>Unet
>UNET
>UNEEEEEEEETTTTTTTT in 2026
oh come on! more stupid than SDXL!
>>
>>108467282
catbox pls
>>
File: result_grid.jpg (3.07 MB, 2777x7663)
>>108467829
This is glorified SD1.5. I don’t get the logic behind this waste of money. Is it some kind of personal challenge? I don’t understand them.
>>
How do people get buttery smooth video gens? I'm just getting started with Wan 2.2 and everything I make (with the Wan2GP 14B i2v stuff) looks kinda jittery. I'm kind of assuming it's just a config issue or "2 step vs 4 step", but want to double-check that there's not a post-gen refinement step that people are using. Or maybe it's a wan thing, I dunno.
I should probably give it a shot on Comfy instead of posting here for a spoon but also I am drunk sorry loves.
>>
File: _AnimaPreview_00042_.png (3.98 MB, 2034x1136)
>>
File: 1766755400170229.png (577 KB, 624x546)
>25% through 2026
>AI still cant do fingers
>>
>>108468089
The results look horrifying, this is the worst expo I have ever seen.
>>
>>108467813
Ohhhhhhh thank you anon
>>
File: ComfyUI_18690_.png (1.93 MB, 1328x1328)
>>108467775
Flux2 has created many new ways to be more pozzed; it's cutting edge in pozzedness. The Flux2 tokenizer will not only detect that you asked for something bad: the entire DiT model has been pozzed to ignore you if you ask for a woman. It's like a fracture running through the whole model: Flux 2 can understand men, and pants (as far as they pertain to men). It cannot, by heavily forced design, I mean post-training design, understand women.

It's not a case of Qwen Image or Z-Turbo not understanding what a breast is because they were never told. It's a case of deliberate, malicious pozzening.

It wouldn't be that much of a problem if, say, 'chest' were still understood by Flux, but because it's a no-no concept (legs too), most random poses or yoga cannot be understood correctly. Depending on your words, things like heads, feet, hands, what exactly is happening, the exact number of people in the scene, etc. can trip a Flux safeguard. The smallest word or qualifier.

It's just not worth it. You're fighting the model at this point.
>>
>>108467829
>anime model
>not a single example looks like anime

???
>>
File: SidePlates.jpg (83 KB, 319x340)
>>108468089
The guns on the bottom guy lmao.
>>
>>108468422
For whatever reason, it's the bird that does it for me.
>>
>>108468089
Dang and I thought the new Neta was bad.
>>
>>108468089
takes me back to when every retard trained on dream/niji journey
>>
>>108468594
*midjourney, not dream
>>
File: flux2styles.jpg (1.89 MB, 2048x1136)
>>108468289
I found flux2 to be better at doing style changes from an image input than at anything purely text-to-image. You can blend styles in one image: this one has a better shading style on one part and a crude one on the other, with a blend between the two, using this as a test input.
>>108468121
>>
>>108467829
>The VAE in Simple Diffusion utilizes an asymmetric VAE architecture featuring an 8x encoder and a 16x decoder. While a compression factor of 8 is maintained during training, the resolution is effectively doubled during inference through an additional upscaling block. This strategy reduces training costs by an order of magnitude and boosts inference speed without perceptual quality loss.
This looks like an interesting architectural trick but judging by the examples I don't think it worked out lol.
>>
https://files.catbox.moe/7tbnph.png
Klein 9B learns NSFW good
>>
File: 1746468381562027.jpg (624 KB, 1328x1640)
>>
>>108469020
lora with what settings?
>>
Some time ago I generated jpg image
>>
File: ComfyUI_0421.png (1.21 MB, 1024x1536)
>>
>>108469278
Can u generate jpg image
>>
>>108469292
no I generate png
>>
File: 1768626277445166.jpg (543 KB, 1840x1232)
>>
>>108469349
It's over for local
>>
File: ComfyUI_0001.gif (537 KB, 1296x896)
retvrn to gif
>>
we must strive to bitmap, as our forefathers once did.
(bitmap not attached)
>>
>>108469477
Kino
>>
>asset
>>
hello
>>
>>108468131
two seconds with iopaint and no one would've noticed baka
>>
>>108466518
>Disney's Sora Disaster Shows AI Will Not Revolutionize Hollywood
>https://www.404media.co/disneys-openai-sora-disaster-shows-ai-will-not-save-hollywood
Funny how this website run by jews has a dozen articles seething about how AI is being used by Iran to generate memes dunking on israel
>>
>>108466467
Every day I see in the news businesses buying GPUs for AI. It's the most stupid thing I've ever seen. I know the best graphics don't require GPUs and the best AI processing doesn't use them either. These scammer groups in Google claiming breakthroughs are the same group of hacks that never deliver. They also always claim ridiculous things, like the dumbass idea that GPUs are better.

If you don't program like a dumbass, then you never need a GPU. Why haven't these resource siphoners been fired? Stop the stealing cycle. Cut their funding. Prosecute them. Designing stuff to only work well on a GPU is extremely poor practice.
>>
>>108469940
wow, with all that powerful insight I am sure you are a millionaire consultant listened to by many and certainly not a literal old man yelling at cloud. Thanks for stopping by.
>>
>>
File: 1768619074319385.png (358 KB, 1024x1024)
>>
>>108469940
brief history for retards
>computers do math
>cpu is good at 2d math, not so good at 3d math
>companies start to make hardware to process 3d math
>enter the gpu with its own dedicated ram
-a few moments later-
>ai comes along and it is full of linear algebra, gpus are really good at linear algebra
>>
>>108470210
This is art.
Catjak spammer cannot accomplish anything like this.
Simple, effective, catchy.
>>
>>108468289
hauhau go back to r*ddit
>>
>>108468289
>it's not A but B
At least you replaced the em dashes.
>>
>Loading Error
>A required resource failed to load. Please reload the page.
i swear cumfart ui frontend is made by fucking monkeys
>>
>>108470609
35 stars frontend is surely better, yeah
>>
>>108470609
just make your own anon
>>
>>108470619
Thank you for this input, catjak.
>>
>>108470637
i would but cumfart wouldn't give me money
>>
>>108470609
CumUI has been accumulating tech debt since a year ago. There's no changing this. Plus the new UI competes with the backend. Plus python.
Stop updating unless you really need to, and keep a dedicated Python 3.12 for it, not your system Python.
>>
>>108470653
>not doing it for the love of the gen
NGMI
>>
>>108470657
even reddit agrees
https://www.reddit.com/r/StableDiffusion/comments/1s4xrc0/comfyui_timeline_based_on_recent_updates
but yeah, cumfart killed itself
>>
>>108470700
yeah dude sure 2mw until total collapse
retarded faggot
>>
>>108470731
AniStudio Rises!
>>
>>108470700
Yeah, python isn't a suitable software delivery platform. But it might only look like a big deal to us hobbyists.
One thing they should look at is Houdini. It was developed by geniuses and to this day its core is untouched. It's not something some hired newfag is going to ruin in one evening.
>>
any local models for making 3d models, and if so, any recommended comfy workflows?
>>
>>108470781
trellis2
>>
>>108470746
>>108470734
>>
File: ComfyUI_00066_.png (2.42 MB, 1248x1824)
>>
File: 1759131224726447.png (3.74 MB, 1792x1168)
>>
>>108470883
Thanks. Seems like your medication still isn't working. Please keep trying.
>>
hang about, does native STILL not have video previews?
II'm trying not to install any extras just like he wants, but that's a pretty basic feature
>>
>catjak
>>
>niggerjak mentioned
>no one posts for three hours
really makes you think
>>
Am I retarded or is comfyui a piece of shit?
Every workflow I download from civitai has a couple of custom nodes that it can't find in the repo, and sometimes it's already downloaded but still can't find it. Updating and restarting and shit never fixes anything. If I do get to run shit, I get extremely vague errors that are impossible to google like TypeError.
I'm still not sure which subfolder I should put my downloaded models in, I try to get the exact ones listed in the workflow just to eliminate more points of failure but I can't keep track of what all these fucking words mean.

Specifically working on WAN I2V, I had a more simple workflow that worked but I want a looping video, so I look up "WAN loop" and now I'm here
>>
>>108472093
just use the template workflow
>>
>>108467829
Yeah, their cherry-picked samples look worse than SDXL and the model was trained on a tiny dataset.
Epic fail.
>>
>>108467829
>>
Greta-anon, we are begging you to come back.
>>
>>108467829
what happened to image gen, bros... have we lost the juice?
used to be that every few weeks/months something new and exciting would drop that was a real game changer, nowadays all we're getting is just worse and worse and worse...
>>
>>108472314
it's rationing, image models hit a limit and now the objective is to shrink them while losing minimal quality
>>
>>108472314
>new pencil? when are we getting new pencil? graphite was good yesterday, but what about today? i can't draw unless i have new pencil.
>>
>>108472314
Image gen is harmful to celebrities, didn't you notice, anon?
>>
tfw Anima is never coming out
>>
Why is Flux still so bad with skin?
>>
>>108472367
but so is video gen and they're making major strides?
>>
>>108472381
Show me the local video model that's not messing up and can run on actual gaming hardware.
>>
>>108472414
lol
>>
>>108472093
not every node that exists is catalogued. sometimes you have to get them from git, and that's up to the workflow template creator to provide. there's also a learning curve because it's an AI ops platform, not a single-purpose app. other tools exist if you want simpler on-rails genning.
is there a reason you want comfy to handle the loop, or would ffmpeg be better?
ffmpeg -stream_loop <loop_count> -i input.mp4 -c copy output.mp4
xargs it to batch process your output directory, unless you meant flf2v.
>>108472314
kind of the opposite. we're all getting more comfortable and less adventurous, but there are so many different types of models and methods coming out that developers have to focus support. i clone a lot of new projects, and while some use uv, others require manual venv and package management, make assumptions about build environments, have dated dependencies, etc. it might take me a few days before i can build something to confirm whether it's shit.
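to spell out the batching idea: a rough python sketch that builds the same ffmpeg argv for every clip in a folder. the "output" directory name and the loop count of 3 are made up for the example, and with_stem needs python 3.9+:

```python
# Build the ffmpeg -stream_loop command for every .mp4 in a folder.
# Directory name and loop count are illustrative, not from the post.
from pathlib import Path

def loop_cmd(src: Path, loops: int = 3) -> list[str]:
    """Return the argv that loops one clip without re-encoding."""
    dst = src.with_stem(src.stem + "_loop")  # clip.mp4 -> clip_loop.mp4
    return ["ffmpeg", "-stream_loop", str(loops),
            "-i", str(src), "-c", "copy", str(dst)]

out_dir = Path("output")
cmds = [loop_cmd(p) for p in sorted(out_dir.glob("*.mp4"))] if out_dir.is_dir() else []
# run each entry with subprocess.run(cmd, check=True)
```

same idea as the xargs route, just easier to tweak per file.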
>>
>>108472043
rent free
>>
>>108472285
lmao'd
>>
>>108472378
BFL safetykrauts purposefully gimp anatomy in their models
>>
>>108472378
>>108472788
To add on to anon's explanation they use synthetic plastic slop data with Barbie doll genitals to poison their models. That's what gives it the plastic look.
>>
File: 1754367275623601.png (1.77 MB, 1408x768)
>>
File: 1756451832385232.png (3.13 MB, 1536x1024)
>>
ded general
thank you tran
>>
been out of the local loop for like 2 years. I see forge and comfy are the way to go these days so I'm getting into those. I have 12gb vram, is there a good image editing model that I can use, or am I stuck with inpainting?
>>
>>108473338
wai illustrious, realism will all be a waste of time with 12gb
>>
funding
>>
>>108473338
with 12gb vram you can use klein 9b fp8
I'm not sure if it's supported in forge, make sure to use forge neo, iirc it's the most up-to-date fork.
>>
File: _AnimaPreview2_00008_.jpg (441 KB, 992x1456)
>>
never say LTX 2.3 can't make anything

https://files.catbox.moe/grjigt.mp4
>>
>>108473486
>talking heads
wow!!!!!
>still melts like crazy
fantastic gen there my friend!!!!
>>
why can't Julien stop being a worthless raped retard for even a single day?
>>
>>108473501
well enjoy sora 2, oh wait, you can't, cause it's dead.
>>
>>108473220
see >>108472460
>>
I'm not sure if this is the best place to ask, or if what I'm asking for even exists.

Is there a way to classify booru tags into tag groups like pose tags, clothing tags, background tags, physique tags, etc.? Say I see a booru image I like but I want to insert another character into it; then I would want to change the clothing or physique tags to something that fits the character better (for example, the booru image has huge tits but the character is flat), but keep the pose tags unchanged. If I want to do that right now, I need to read the entire taglist and manually decide which tags to change and which to keep, which takes a long time.

Does a classifier for such a thing exist?
>>
>>108473530
waiting for wan to release their weights... any day now right? they said they commit to open sores right??? right?????
>>
>>108473559
With LLMs, but I don't know which have boorus tag knowledge.
>>
>>108473175
>big fuck tech
Noice.
>>
>>108473559
danbooru has some tags grouped already. are you looking to structure your prompt into groups of tags or something?
https://danbooru.donmai.us/wiki_pages/tag_groups
>>
>>108473338
>>108473392
I am on 12gb and bf16 works fine after dynamic memory. No need to kneecap quality.
>>
>>108473559
There is a dataset that's made available every year, but the only tag categories it has are general, artist, meta, copyright, and character

https://huggingface.co/datasets/dataproc5/metrics-danbooru2025-alltime-tag-counts
>>
>>108473579
meltie
>>
>>108473605
That looks like a good list actually. Is it fairly comprehensive?

I was thinking of either writing a python script, or if possible getting a custom node in comfyui itself, where I can paste a list of tags and it will separate them into groups such as body tags, attire tags, etc.
>>
>julien the homosexual pedophile is still seething
Comfy will never give you millions
Yup hates you
You're a lolcow
You never contributed anything of value
Ran won
>>
File: _AnimaPreview2_00024_.jpg (366 KB, 992x1456)
>>
>>108473637
I don't know how often it gets updated, but new tags are frequently added to danbooru so it's probably a bit outdated
>>
>>108473719
Well, even if it isn't completely comprehensive, it might be enough to train a classifier on (though admittedly I haven't done word classification myself, it sounds like something modern language processing should handle).

Is there a way to export the tag groups list, or do I have to write a script to do it?
>>
File: _AnimaPreview2_00031_.jpg (519 KB, 1536x920)
>>
File: _AnimaPreview2_00039_.jpg (487 KB, 1072x1376)
>>
File: _AnimaPreview2_00042_.jpg (404 KB, 1072x1376)
>>
>>108473677
see >>108472460
>>
>>108473815
that actually looks pretty good.
how annoying and autistic is the prompting? does it have to be the danbooru shit, or can you just prompt normally?
>>
>>108473942
prompt from grok, testing lora
>Film grain, deep shadows,Cinematic scene, dramatic lighting, 1girl, solo, mature woman. Short black hair floating in air and intensely glowing blue eyes floating weightlessly in the air, body slightly arched backward. Massive arcs of electricity crackling and flying left and right around her, sparks exploding from exposed wiring in her torn synthetic skin at the torso and shoulders. Her expression is focused and powerful. Human machine hybrid, female cyborg, electricity arcing wildly, floating mid-air. Safe. Indoors, dark cyberpunk laboratory with shattered equipment, broken glass and floating debris, intense blue and white electrical glow.
>>
File: _AnimaPreview2_00052_.jpg (396 KB, 992x1456)
>>
Soooo did we stagnate?

>No cool new models coming out
>General has a crumb of its former activity

Is it over?
>>
File: 1748248486610090.png (3.6 MB, 1328x1744)
>>108474038
post gens or news you fucking retard metaposting subhuman piece of shit
>>
>>108474013
Nice long arm
Nice 1.4MP gen on a 1MP model
Nice Chroma gens labelled as anima
>>
I'm new to this, so just to be sure, 99% of custom checkpoints on civitai are snake oil for retards that don't actually change the model meaningfully, right? And the model "updates" are just cheap engagement farming?
>>
>>108474056
back to sdt retard
>>
>>108474074
see >>108474056
>>
File: _AnimaPreview2_00058_.jpg (559 KB, 992x1456)
>>
>>108474084
That was a genuine question bro I'm actually new to this. Do checkpoints like moody mix or cyberrealistic actually alter the base model in any meaningful way?
>>
>>108474099
>Do checkpoints like moody mix or cyberrealistic actually alter the base model in any meaningful way?
no but yes but really no
this is my real genuine answer no sarcasm
>>
>>108474038
i gen a ton of 3D assets and other useful shit but can't really post them here. same with audio stuff.
local is thriving. ldg is just showing its age.
>>
>>108474058
lmao
>>
ltx 2.3 + klein edit 9b is such a good combo.

https://files.catbox.moe/80wftx.mp4
>>
>>108474074
>>108474099
Non-troll response: they are what we call shitmixes: random, often contradictory or poorly compatible loras get baked into the model. That's already bad, but those loras are most often poorly trained and make the shitmix run even worse. As a result shitmixes have the "plastic look", low quality, can become "rigid" (unable to comply with prompts the base model can handle), and can output body horror when you get unlucky enough.
Only idiots use them.
Use base models and apply whatever loras you need yourself.
>>
To keep the general alive let me post some news: n*gbo is still unemployed
>>
>>108474147
How much spam would it take? Wherever you're going, "catjak", you are the poison that shits up the well.
>>
>>108474147
But that's great as long as you are getting attention.
>>
File: _AnimaPreview2_00072_.jpg (455 KB, 992x1456)
>>
File: Flux2-Klein_00675_.png (1.17 MB, 1024x1024)
>>108474136
kekked
>>
Catjak truly is a piece of shit.
>>
>>108474121
why cant you post them here? benchod
>>
>>108474138
proof?
>>
File: _AnimaPreview2_00088_.jpg (298 KB, 992x1456)
Preview3 when
>>
>>108474121
there isn't any local 3d mesh generator that's worthwhile
>>
>>108474264
even meshy.ai is pretty fucking bad. best to just ai generate concept images and make your own meshes
>>
>>108474247
>Preview3 when
srsly
>>
>>108474247
Around a month, if he does a similar roll-back-to-an-early-epoch-and-retrain experiment.
He was talking about running a Qwen 3.5 2B one in parallel last time. He wasn't very impressed with it, but maybe it can come early if he decides to push that to the finish line.
>>
>>108474247
Why can't you cumbrained faggots stay on boards where you belong?
>>
File: 1770924625018336.png (2.03 MB, 1360x768)
>change the grass to a f1 racetrack.
klein edit 9b is pretty cool.
>>
File: _AnimaPreview2_00101_.jpg (493 KB, 1456x992)
>>
>>108474138
This. Any checkpoint that involves simply baking loras in is usually pure shit for retards.
>>
File: _AnimaPreview2_00132_.jpg (459 KB, 1072x1376)
>>
File: 1763669967964735.jpg (671 KB, 1840x1328)
>>
File: _AnimaPreview2_00139_.jpg (284 KB, 1072x1376)
>>
>>108474640
kek
>>
File: 1746820200457931.png (526 KB, 1114x911)
I have a question about regional prompting. In the OP's 1girl guide there's a specific warning that each region must have the 2girls prompt. However what if I want an image with 1 girl and a non-female? Would a 1girl and 1boy image need both in each prompt?

I tried having 1girl and 1boy separately in the two prompts and got some examples where it worked. Anyone else have experience with this?
>>
File: _AnimaPreview2_00146_.jpg (550 KB, 1072x1376)
>>
File: ComfyUI_18118.jpg (3.78 MB, 1500x2000)
Among Us?
>>
File: G9L5YS8WsAEI-1f.jpg (115 KB, 572x595)
>>108474688
To put it simply, it's there to make it easier for the model to digest the overall idea of what it's being forcefully guided to generate. You're just letting it know that there is something else versus not letting it know; in either case you will get results, the only difference being how hard it becomes to control the desired output in the latter case.
>>
File: _AnimaPreview2_00155_.jpg (189 KB, 1072x1376)
189 KB
189 KB JPG
>>
Are we being raided?
>>
>>108474731
no anon is just being based and kinopromptpilled
>>
>>108474731
probably just a troll
>>
>>108474728
Maybe I'll just try more examples and see how it works out.
>>
File: _AnimaPreview2_00162_.jpg (170 KB, 1376x1072)
170 KB
170 KB JPG
>>
File: _AnimaPreview2_00177_.jpg (261 KB, 1608x920)
261 KB
261 KB JPG
>>
File: _AnimaPreview2_00182_.jpg (167 KB, 1608x920)
167 KB
167 KB JPG
>>
>>108474777
>>108474804
>>108474830
Thinking about uploading it? Looks bretty good desu
>>
>>108474877
It's there!
>>
I thought it was my loras, but anima really does make inverted penises even with "pov" in the prompt. I feel less bad now
https://files.catbox.moe/ask7ye.png
>>
>>108474935
basedjak lora fixes it https://files.catbox.moe/x4lj9t.jpg
>>
>>108474923
>>108474977
based
>>
So I've been testing prompt scheduling with anima, trying to blend artist styles, and I have to conclude it just doesn't work.

[@artist1: @artist2: steps] just seems to result in artist1's style.

Curiously, [@artist1|@artist2], which is supposed to alternate the tags every step, just seems to result in artist2's style.

The only thing that reliably works is just using multiple artists every step: @artist1, @artist2, etc.

Thanks anon. That sure was an epic troll
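For reference, schedule syntax like the above is usually resolved into a plain prompt per sampling step before text encoding. A toy resolver sketch — hypothetical, not any specific UI's actual parser — showing how `[before:after:when]` and `[a|b]` are conventionally interpreted:

```python
import re

def resolve_schedule(prompt: str, step: int, total_steps: int) -> str:
    """Resolve [before:after:when] and [a|b] syntax for one sampling step."""
    # [before:after:when] -> 'before' until the switch point, then 'after'.
    # 'when' below 1 is treated as a fraction of total steps, else an absolute step.
    def swap(m):
        before, after, when = m.group(1), m.group(2), float(m.group(3))
        switch = when * total_steps if when < 1 else when
        return before if step < switch else after
    prompt = re.sub(r"\[([^\[\]:|]*):([^\[\]:|]*):([\d.]+)\]", swap, prompt)

    # [a|b] -> alternate between the options every step.
    def alternate(m):
        options = m.group(1).split("|")
        return options[step % len(options)]
    prompt = re.sub(r"\[([^\[\]:]+\|[^\[\]:]+)\]", alternate, prompt)
    return prompt.strip()

print(resolve_schedule("[@artist1:@artist2:10] 1girl", 5, 20))   # @artist1 1girl
print(resolve_schedule("[@artist1:@artist2:10] 1girl", 15, 20))  # @artist2 1girl
print(resolve_schedule("[@artist1|@artist2] 1girl", 3, 20))      # @artist2 1girl
```

If a backend re-encodes the prompt only once instead of per step, both syntaxes degenerate to whatever the step-0 resolution is, which would match the "only artist1 sticks" behavior described above.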
>>
>>108474688
normally you add a third region (at a lower strength) covering the whole image, where you define the subjects, background, style, etc., and then you add the masks for the subjects with just "1girl" or whatever in them. shit's finicky tho, but I guess you figured that out already. you can also try out the native uncomfy nodes, see https://blog.comfy.org/p/masking-and-scheduling-lora-and-model-weights
bit of a pain to wade through but actually fairly easy to set up
& don't forget you can just invert a mask with an "invert mask" node
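The three-region recipe above reduces to a masked blend: each region's conditioning applies only where its mask is set, and the inverted mask covers the complement. A toy numpy sketch of the arithmetic (illustrative stand-in values, not ComfyUI's internals):

```python
import numpy as np

H, W, D = 64, 64, 8  # toy latent height/width and embedding dim

# Stand-ins for encoded prompts (real conditioning comes from the text encoder).
base_cond = np.full((H, W, D), 0.1)   # whole-image prompt at low strength
girl_cond = np.full((H, W, D), 1.0)   # "1girl" region prompt
boy_cond  = np.full((H, W, D), 2.0)   # "1boy" region prompt

girl_mask = np.zeros((H, W, 1))
girl_mask[:, : W // 2] = 1.0          # left half of the image
boy_mask = 1.0 - girl_mask            # the "invert mask" node, as arithmetic

# Each region's prompt only applies where its mask is set; the base covers all.
blended = base_cond + girl_mask * girl_cond + boy_mask * boy_cond
```

The two masks partition the image exactly, which is why inverting one mask for the other region keeps the subjects from fighting over the same pixels.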
>>
>>108474935
anima is incredibly bad at nsfw, i still don't get why anyone would use this piece of shit instead of illustrious. money well spent comfyorg
>>
File: 1774050110127320.jpg (1.25 MB, 1248x1824)
1.25 MB
1.25 MB JPG
>>
>>108474990
Just use two artist tags normally and they'll blend.
>>
File: ShitDreamFourPointFive.jpg (3.52 MB, 3072x4096)
3.52 MB
3.52 MB JPG
Not local but relevant-ish IMO: why do people think Seedream is even good? Their 4K outputs look like absolute fucking dogshit. Look at this shit, it's SD 1.5 / SDXL tier quality that was just fucking lanczos-scaled up to 4K with no denoise. It's actually terrible
>>
File: 04084-453892596.png (1.35 MB, 896x1152)
1.35 MB
1.35 MB PNG
where are the celebrity Loras??
>>
>>108475045
also forgot to mention fucked up her hand too lol
>>
File: regex.png (148 KB, 1693x881)
148 KB
148 KB PNG
the captcha is evolving zomg
>>108474990
like, with forge? you can certainly force the model to do x amount of steps with artist1 and then x amount of steps with whatever, that's gotta work, no? like this, with regex nodes
>>
>>108475050
>>>/r/parody
>>
File: til.png (919 B, 116x39)
919 B
919 B PNG
>>108475062
TIL
>>
>>108474977
I love this stupid shit
>>
>>108475063
Thanks. Although they are somewhat limited with their content.
>>
>>108475050
udunno https://huggingface.co/malcolmrey
>>
File: 00092-1527999504.jpg (325 KB, 1344x1728)
325 KB
325 KB JPG
>>
File: 03724-4194058836.png (1.85 MB, 832x2048)
1.85 MB
1.85 MB PNG
>>108475126
Thanks.
I should probably train my own, but I can't get fucking AI toolkit to work. It'd help if those assholes updated their page every once in a while. I spent hours looking for solutions to their outdated install instructions and at the end of the day my install is still broken.
>>
>>108475198
nigga stop microwaving your gens
>>
>>108474264
>>108474269
a few are generally good enough for 3d printing after making them watertight, but yeah, could be better.
the sam3ds and derivs with text guidance have been decent.
https://ai.meta.com/research/sam3d/
mainly just using them for static assets for video, not renders.
>>
>>108475212
there are tons of loras floating around, civitarchive.com hosts some. nice gen, bit sloppy but kinda hot nonetheless. lower chick got an extra pinky and some extra flesh mass on her other hand, but that's easy to fix!
>>
>>108474977
GEMERALD
>>
lmao first/last ltx 2.3 is amazing

https://huggingface.co/RuneXX/LTX-2-Workflows/tree/main/older_comfy_pre_feb2026

https://files.catbox.moe/tycxrq.mp4
>>
>>108474994
Actually I just don't know about invert masks.
>>
>>108475324
Lol
>>
File: hmm.png (764 KB, 1037x446)
764 KB
764 KB PNG
>>108475198
nothing about her lower body makes sense lmao
>>
https://www.youtube.com/watch?v=v4P4hRvmi8A&list=RDv4P4hRvmi8A
>>
>>108474999
illustrious is EPS so its automatically unusable garbage
>>
>>108475324
this time it's funny kek
>>
>>108475396
then use the flow prediction noob
>>
File: 1753153010352392.webm (1.34 MB, 736x960)
1.34 MB
1.34 MB WEBM
>>
>>108475769
nice idle animation, when is your game coming out?
>>
Kinda ironic desu. I'm floored by the sheer volume of slop on Civit right now; they need to ban these literally-who shitmerges. Looking at the feed is surreal, like someone's botting the site with slop models.
But here's the thing: while I'm watching this dumpster fire of stagnant models and endless slop, I'm having the best creative streak in months, with new workflows, prompt creativity, kino gens, and vibecoded nodes working.
>>
>>108475842
The only reason I visit Civitai is if an author doesn't post to Huggingface. I haven't casually browsed there since 2023.
>>
>>108475769
why does this child have massive breasts
>>
>>108475786
only reposting it to see if the schizo loses their shit

>>108475882
ask ani
>>
>
>>
https://huggingface.co/RuneXX/LTX-2-Workflows/blob/main/older_comfy_pre_feb2026/LTX-2%20-%20V2V%20(extend%20any%20video).json

video extend is hilarious (it works)

https://files.catbox.moe/k0la6w.mp4
>>
>>108475965
>>
>>
>>108475882
Legal Loli age 9001 years
>>
>>108475842
Ironic is a bad word to describe this. As is evident from the staff's level of technical competency, Civitai was never supposed to last. With the censorship, the payment service providers dropping them despite it, and the new laws they comply with, that engagement metric -- how many nEw MoDeLs are uploaded there, not somewhere else -- might be the only thing that lets them stay afloat by earning them some secret investments.
>>
If someone could just train an image model for storyboarding or scene progression then you could have Seedance 2 at home
>>
>>108475966
https://files.catbox.moe/gqi7se.mp4

The man says "No one cared who I was till I put on the mask. You shitpost on 4chan about this movie, do you not? I know you do."
>>
>>108475966
just as well as it did on day1 I guess
https://files.catbox.moe/vicus0.webm
>>
>>108475882
Author has good taste in women
>>
>>108476021
which workflow are you using? curious how they differ
>>
>>108476036
wan2gp
>>
>>108473686
Ahh, a fellow aficionado of black bob cuts I see
>>
https://files.catbox.moe/g5l0jd.mp4
>>
>>108476052
>ntrmix
I knvvl
>>
lmao, working out the kinks. if you type the part of the video before the extend it works better:

https://files.catbox.moe/g2qhyc.mp4
>>
What's the Anima shills' cope for the catastrophic forgetting? You can't train new characters into the base model without murdering existing knowledge. More epochs = more damage. Every Anima finetuner is hitting this wall right now, including tdrussell himself in making Anima 2
>>
its a cozy thread anon no need to do that
>>
>>108476173
there we go, now it's almost seamless. just type "the man says (what they say)" before the extend, then add text after that.

https://litter.catbox.moe/ma5ylxs9764wkmyc.mp4
>>
>>108476181
It's not that catastrophic, seeing that it learned quite a lot of characters.

YMMV on whether it is or will be too much, BUT either way the most anyone can do ATM is train and see how far the training goes. Predicting the outcome ahead of DOING the training is not possible, and monitoring options during training are at least somewhat limited.
>>
>>108476190
deus ex 2:

https://litter.catbox.moe/015iw35wdvm03vf5.mp4
>>
>>108476201
This is bad, man. Nobody can progress with Anima, not even tdrussell himself, because every new concept erases old ones. This is a fundamental architecture limitation showing up in very early previews.
>>
last deus ex one, tried 15s extend, worked: but wtf why are there people walking in the back?

https://litter.catbox.moe/ymon1bb3m0130129.mp4
>>
>>108476201
He's b8ing you anon
>>
>>108476250
This is the preview version and it's already cooked. New characters and styles keep releasing every day but the model just forgets everything, catastrophic forgetting on steroids.
>>
>>108476304
All the new Anima knowledge is backed into the Qwen adapter :^)
>>
>>108474703
>>108474703
based jenner
>>
>>108476201
It’s decent but the memory loss during fine tuning is still a problem. Pair that with the restrictive license and there’s no reason to bother training it. Feels like it’ll be a good model out of the box, but a dead end if you actually want to push it further.
>>
>how can it be good?!? hes never even trained anything before
>well... okay hes the dev of a trainer... but look at the licence!!! no ones going to train on that crap!
>okay i get the licence just means if you release your version you have to give it out for free.... but it cant even do high res hes so retarded!
>okay well now it can do high res.... but the styles dont even mix its so ass!!
>alright the styles do actually mix.... but its not exactly the same as SDXL so it sux!
>i-it forgets everything if you train on it!!!!!
<--- YOU ARE HERE
>well... okay more good finetunes are coming out but... itll never surpass illust!!!
>so anons what is your favorite anima model? :3 i like shitmix_alpha_v44_epsilon a lot
>>
>>108476350
jenner please
>>
>>108476250
>>108476432
Crazy what jealousy does to a man.
>>
>>108476378
> the memory loss during fine tuning is still a problem
it's how it always is when training ai models? you surely don't have a way around this either, do you?

> restrictive license
basically just no commercial use of the model itself but you can make derivatives and share them and all that? not very restrictive.
>>
Anima catastrophic forgetting when training loras is no worse than SDXL (and probably better even) if you follow the official recommended advice. Which is: don't train the llm_adapter module, and use a low learning rate. The problem is retards using the default sd-scripts command, which trains llm_adapter, and using their SDXL learning rate of like 2e-4 or some shit which is nearly 10x too high. "Wow Anima learns really fast" everyone says, "shame about the catastrophic forgetting", like no dumbass it's learning so fast because your effective learning rate is giga fucking high which is also why it forgets.
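The "don't train llm_adapter" part of the advice above amounts to freezing that submodule before the optimizer is built. A generic PyTorch sketch with a toy model — the module name comes from the post, and how your actual trainer exposes this depends on the tool:

```python
import torch.nn as nn
from torch.optim import AdamW

# Toy stand-in: a main network plus the adapter module the post says to skip.
model = nn.ModuleDict({
    "unet": nn.Linear(8, 8),
    "llm_adapter": nn.Linear(8, 8),
})

# Freeze everything under llm_adapter so the optimizer never updates it.
for name, param in model.named_parameters():
    if "llm_adapter" in name:
        param.requires_grad = False

# Pass only trainable params, at a low LR (roughly 10x below the SDXL-habit 2e-4).
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = AdamW(trainable, lr=2e-5)
```

Building the optimizer from the filtered list (instead of `model.parameters()`) also keeps optimizer state like Adam moments from being allocated for the frozen weights.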
>>
File: 1774498204347447.jpg (122 KB, 442x572)
122 KB
122 KB JPG
finally ltx has better optimization than plastic wan. will use ltx a lot while waiting for davinci or something
>>
File: ComfyUI_18137.png (3.5 MB, 1500x2000)
3.5 MB
3.5 MB PNG
>LTX 2.3
>no movement and lip sync, it's seemingly one or the other
>default resize has 720px in the box
>crops to 704px anyway (this makes me irrationally angry)
>re-encodes the FLAC audio, forcing me to remux it back in if I'm going to use it
>just a hair slower on lengthy gens than Wan 2.2 doing 5-7 sec
So far, it's pretty nice. The output is a lot sharper than I thought it was going to be (which is great), but it drops the likeness harder and faster than Wan ever would. Definitely have to work on figuring out how to prompt it better though.

https://files.catbox.moe/c3j47f.webm
>>
>>108476620
if we're being genuine and not listening to schizobabble of anima hating retard who never even used anima - this
>>
It's been too long since this thread has been this cozy.
>>
>>108476620
Truth nuke
>>
File: 3.png (1.17 MB, 960x1400)
1.17 MB
1.17 MB PNG
works on my machine
>>
>AI-generated image provenance metadata (EU AI Act deepfake disclosure)
https://github.com/huggingface/diffusers/issues/13359
>>
File: ComfyUI_00139_.mp4 (479 KB, 640x640)
479 KB
479 KB MP4
>>108476989
>eu
>>
>>108476620
weird because I was referring to all loras, not just the jeet fried ones. you seem to forget that loras themselves are destructive to the base models. doesn't matter how you train it, it will forget when you apply one. this is why loras will always be a cope
>>
>>108474038
I'd be sent to /b/ for my degenerate gens so I rather not post anything.
>>
Do filesize and image size matter for datasets even if they are being resized? I still get random OOMs despite other jobs on the same setup working fine. Are the fuckhueg images, 5+MB and 3k+ pixels per side, an issue? Is dataset preprocessing a thing for imagegen?
>>
is flux2 klein still the best for image edits?
>>
>>108476999
You're supposed to subtly move the goalposts not throw them across the field.
>>
>>108477031
You will need to preprocess your stuff.
>>
>>108477035
>I've been here for years and learned nothing about how any of this shit works
do you want a reward for being a braindead jeet or something?
>>
How do I cope with APIs being infinitely better than local?
>>
what's even the difference klein 4b vs 9b? is 9b even better?
>>
>>108477101
APIs might get Sora treatment at any moment.
>>
>>108477107
9B is the best if you don't care about having a good model otherwise 4B is the best.
>>
>>108477126
in what way is it better though?
>>
>>108477107
4b is meh, but 9b is pretty damn good for what it is. download both models and a/b test them, that is the proper way to figure out the difference.
>>
>>108476989
yeah, scary. that bizarre scandal being paraded in the news here -- some washed-up celeb cunt being terrorized by her washed-up celeb husband with deepfakes of her -- makes more sense in that context. oops.
>>
>>108476707
based jenner always on some next level shit
>>
>>108476999
>>108477086
So why single out current-thing-model?
>>
>>108477140
it runs on his 8 gb of vram
>>
>>108475356
???
>>
>>108477031
> Does the filesize and imagesize matter for datasets even if they are being resized
No, just make sure your training tool actually resizes images.

> Is dataset preprocessing a thing for imagegen?
It's not necessary, but could improve the results if you sharpen the images, remove watermarks and so on.
>>
>>108477031
i resize and crop everything before hand.
don't upscale buckets.
all image dimensions should be powers of 2.
i personally try to avoid excessive bucketing, i don't use more than 2 or 3 aspect ratios in my datasets.
if your dataset is irl photos, some light processing is fine.
don't go overboard and cook the fuck out of your images with seedvr or a low denoise ksampler to "fix" them.
a bit of sharpening is fine but if the photo is out of focus, delete it.
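The crop-then-downscale flow described above can be sketched with Pillow. A minimal version — the function name and the 1024x1024 target are just for illustration, and per the post it refuses to upscale:

```python
from PIL import Image

def prep(img: Image.Image, target: tuple[int, int] = (1024, 1024)) -> Image.Image:
    """Center-crop to the target aspect ratio, then downscale. Never upscales."""
    tw, th = target
    if img.width < tw or img.height < th:
        # Post's advice: don't upscale buckets; drop too-small images instead.
        raise ValueError("image smaller than target, delete or re-source it")

    src_ratio, dst_ratio = img.width / img.height, tw / th
    if src_ratio > dst_ratio:                 # too wide: trim the sides
        new_w = round(img.height * dst_ratio)
        left = (img.width - new_w) // 2
        img = img.crop((left, 0, left + new_w, img.height))
    else:                                     # too tall: trim top and bottom
        new_h = round(img.width / dst_ratio)
        top = (img.height - new_h) // 2
        img = img.crop((0, top, img.width, top + new_h))

    # Lanczos downscale; save the result as PNG yourself to keep it lossless.
    return img.resize(target, Image.LANCZOS)

out = prep(Image.new("RGB", (2048, 1536)), (1024, 1024))
```

Doing this ahead of time, rather than letting the trainer bucket on the fly, also lets you eyeball the crops before burning GPU hours on them.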
>>
>>108477696
how to caption doe
>>
>>108477696
If i resize the "almost 4:3" dimensions to actual 4:3 are the small deformities gonna fuck something up?
>>
>>108477782
>resize
CROP nigga CROP
>>
>>108477781
depends on the model but the trigger token and then everything you don't want associated with the thing you are training.
if it's a natural language model
>sexywaifu is sitting in a chair wearing xyz in a whatever setting/background
token always the first word in the caption.
tag based models like sdxl
>sexywaifu, shirt, pants, necklace, red wall, brown tiles, window,
>>108477782
crop to the aspect ratio and then resize, png to keep it as "lossless" as possible. you can let the trainer resize for you if you want, but i would rather see what the resize looks like before training just in case.
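The tag-style caption format above (trigger token first, then everything you don't want fused into the trigger) is easy to script as sidecar .txt files, which is the layout most trainers expect. A tiny sketch; `sexywaifu` is just the post's example trigger:

```python
from pathlib import Path

def build_caption(tags: list[str], trigger: str = "sexywaifu") -> str:
    """Trigger token first, then the tags you want kept separate from it."""
    return ", ".join([trigger] + tags)

def write_caption(image_path: str, tags: list[str],
                  trigger: str = "sexywaifu") -> Path:
    """Write the caption as a sidecar .txt next to the image file."""
    out = Path(image_path).with_suffix(".txt")
    out.write_text(build_caption(tags, trigger), encoding="utf-8")
    return out
```

So `build_caption(["shirt", "pants", "red wall"])` yields "sexywaifu, shirt, pants, red wall", matching the example in the post.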
>>
>>108474056
ToT
>>
>>108477825
that's called 'regularization' if you want to look it up, >>108477781
another example is if you're training a character lora and have images of them wearing different outfits or accessories that you don't want showing up when prompting that character (but not a specific outfit or accessory), you have to caption the images so it can tell what separates those things from the character itself.
when your results have prompt bleed, that's from 'overfitting', meaning the model didn't learn, it memorized.
if a character has a sword and scabbard in most of the dataset and you don't caption those images to indicate the sword and the scabbard, the model might generate results with the belt of the scabbard around their waist or part of the hilt in their hand when you don't prompt for them because it thinks those are just features of the character.
>>
>>108477107
9B is very noticably better than 4B. No need to bother with 4B unless your setup is very weak.
>>
>>108478449
I've gotten better edit results with 4B the few times I've tried them, oddly.
>>
>>108478408
for lora training isn't regularization an additional dataset that helps reinforce the concept?
for basic lora captioning, using the above example and the sword, wouldn't it just be
>sexywaifu, shirt, pants, sword, necklace, red wall, brown tiles, window,
or is that regularization?
>>
Bready when ready
>>108478554
>>108478554
>>108478554
>>108478554
>>
it's over



All trademarks and copyrights on this page are owned by their respective parties. Images uploaded are the responsibility of the Poster. Comments are owned by the Poster.