/g/ - Technology

Discussion and Development of Local Image and Video Models

Previous: >>108483401

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/ostris/ai-toolkit
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/musubi-tuner
https://github.com/tdrussell/diffusion-pipe

>Z
https://huggingface.co/Tongyi-MAI/Z-Image
https://huggingface.co/Tongyi-MAI/Z-Image-Turbo

>Anima
https://huggingface.co/circlestone-labs/Anima
https://tagexplorer.github.io/

>Qwen
https://huggingface.co/collections/Qwen/qwen-image

>Klein
https://huggingface.co/collections/black-forest-labs/flux2

>LTX-2
https://huggingface.co/Lightricks/LTX-2

>Wan
https://github.com/Wan-Video/Wan2.2

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
https://rentry.org/mvu52t46

>Illustrious
https://rentry.org/comfyui_guide_1girl

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Collage: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/r/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
https://rentry.org/animanon
>>
another desperate early bake from the cretin ranjak. you love to see it
>>
Blessed thread of frenship
>>
heyyy i have a question! i have a ryzen 7 AI 350 AMD laptop running linux, what's the easiest way to run diffusion models on my npu? and if anyone doesnt mind, chat models as well. arigatou gozaimasu!
>>
>mfw Resource news

03/30/2026

>Gaussian Shannon: High-Precision Diffusion Model Watermarking Based on Communication
https://github.com/Rambo-Yi/Gaussian-Shannon

>From Static to Dynamic: Exploring Self-supervised Image-to-Video Representation Transfer Learning
https://github.com/yafeng19/Co-Settle

>DUET-VLM: Dual stage Unified Efficient Token reduction for VLM Training and Inference
https://github.com/AMD-AGI/DUET-VLM

>Mugen: SDXL to Flux 2 VAE conversion model
https://huggingface.co/CabalResearch/Mugen

>Z-Image-SAM-ControlNet
https://huggingface.co/neuralvfx/Z-Image-SAM-ControlNet

03/29/2026

>HybridScorer: CUDA-powered image triage tool
https://github.com/vangel76/HybridScorer

>Calgary artists debate AI's role in creativity as library launches new residency
https://calgaryjournal.ca/2026/03/12/calgary-artists-debate-ais-role-in-creativity-as-library-launches-new-residency/

03/28/2026

>Seedance 2.0 ComfyUI Nodes
https://github.com/Anil-matcha/seedance2-comfyui

>ComfyUI-DreamScene360
https://github.com/jfirma1/ComfyUI-DreamScene360

>ComfyUI-Foundation-1: Structured Text-to-Sample Diffusion for Music Production
https://github.com/Saganaki22/ComfyUI-Foundation-1

03/27/2026

>ComfyUI Enhancement Utils
https://github.com/phazei/ComfyUI-Enhancement-Utils

>SDXS - A 1B model that punches high
https://huggingface.co/AiArtLab/sdxs-1b

>ComfyUI-DaVinci-MagiHuman
https://github.com/mjansrud/ComfyUI-DaVinci-MagiHuman

>ShotStream: Streaming Multi-Shot Video Generation for Interactive Storytelling
https://luo0207.github.io/ShotStream

>Calibri: Enhancing Diffusion Transformers via Parameter-Efficient Calibration
https://v-gen-ai.github.io/Calibri-page

>Free-Lunch Long Video Generation via Layer-Adaptive O.O.D Correction
https://github.com/Westlake-AGI-Lab/FreeLOC

>MACRO: Advancing Multi-Reference Image Generation with Structured Long-Context Data
https://macro400k.github.io
>>
>>108489747
fuck off
>>
>mfw Research news

03/30/2026

>Neighbor-Aware Localized Concept Erasure in Text-to-Image Diffusion Models
https://arxiv.org/abs/2603.25994

>InstaVSR: Taming Diffusion for Efficient and Temporally Consistent Video Super-Resolution
https://arxiv.org/abs/2603.26134

>Verify Claimed Text-to-Image Models via Boundary-Aware Prompt Optimization
https://arxiv.org/abs/2603.26328

>TaxaAdapter: Vision Taxonomy Models are Key to Fine-grained Image Generation over the Tree of Life
https://arxiv.org/abs/2603.26128

>MemCam: Memory-Augmented Camera Control for Consistent Video Generation
https://arxiv.org/abs/2603.26193

>CREval: An Automated Interpretable Evaluation for Creative Image Manipulation under Complex Instructions
https://arxiv.org/abs/2603.26174

>When Identities Collapse: A Stress-Test Benchmark for Multi-Subject Personalization
https://arxiv.org/abs/2603.26078

>Generation Is Compression: Zero-Shot Video Coding via Stochastic Rectified Flow
https://arxiv.org/abs/2603.26571

>MPDiT: Multi-Patch Global-to-Local Transformer Architecture For Efficient Flow Matching and Diffusion Model
https://arxiv.org/abs/2603.26357

>Label-Free Cross-Task LoRA Merging with Null-Space Compression
https://arxiv.org/abs/2603.26317

>Preference-Aligned LoRA Merging: Preserving Subspace Coverage and Addressing Directional Anisotropy
https://arxiv.org/abs/2603.26299

>ClipTTT: CLIP-Guided Test-Time Training Helps LVLMs See Better
https://arxiv.org/abs/2603.26486

>Few Shots Text to Image Retrieval: New Benchmarking Dataset and Optimization Methods
https://arxiv.org/abs/2603.25891

>PruneFuse: Efficient Data Selection via Weight Pruning and Network Fusion
https://arxiv.org/abs/2603.26138

>Restore, Assess, Repeat: A Unified Framework for Iterative Image Restoration
https://restore-assess-repeat.github.io

>Versatile Recompression-Aware Perceptual Image Super-Resolution
https://arxiv.org/abs/2511.18090
>>
File: Anzhc anima fud.png (52 KB, 881x223)
I lost all respect I had for this faggot today. Literally trani tier fudding, and the evidence presented is "come dilate with our xisters on the 'cord:)". Enjoyed his weird experiments in the past and he was one of the main reasons I bothered to check that shithole subr-ddit. Extremely disappointing.
>>
here comes the discord faggotry
>>
File: Screenshot__DuckDuckGo.jpg (244 KB, 720x1013)
What causes this?
>>
>>108489823
You're right, anyone remember Cosmos before Anima? That model that got adopted fast by the locals? Yeah, me neither
>>
>>108489823
>>108489840
Why even make claims like this if you dont have proof. I wish I could at least see the often claimed lora related forgetting
>>
>>108489823
But he's got a point desu. Lots of people are already complaining about Anima for the same fucking errors that keep showing up again and again and are impossible to fix.
>>
>>108489858
Lora no, checkpoint finetuning
>>
>>108489858
>Why even make claim like this if you dont have proof.
To push their own agenda, simple as. They are catty like women.
>>
>>108489864
>the same fucking errors that keep showing up again and again and it's impossible to fix.
Such as?
If you are referring to fingers, those are caused by it being a 512p preview.
>>
>>108489823
Anima's perfect for Comfy Cloud free tier. Lightweight, fast on Comfy's 96GB VRAM gpu, barely uses credits from your 400 per month. No loras needed and the model is already in their repo to use.
>>
File: Preview_00130_.png (1.66 MB, 1728x1344)
trying anima, and its like gambling, you gotta keep spamming that run button until you get something that isnt horribly slopped, feels like ole 1.5 days
7.9/10
>>
>>108489881
I went from 120 seconds per gen on my beloved RTX 3060 down to 10 seconds, and on top of that it's an ultra cheap model so I can batch a lot of gens.
>>
>>108489883
People masturbate to this?
>>
>>108489883
>you gotta keep spamming that run button until you get something that isnt horribly slopped,
Just need a better prompt desu
>>
>>108489840
>8k to train sdxl noobslop with flux 2 vae and rectified flow
Local = mental illness
>>
>>108489823
>Anima architecture is dramatically flawed
>trains on SDXL with CLIP
>Anima licensing is le bad
>trains derivative of Noob, inheriting Noob license which is even more restrictive on commercial use than Anima
what did blud mean by this
>>
>>108489924
>>108489840
>>108489823
Friendly reminder that all the Noobcord troons are getting brainwashed by Ani on the server and they just keep parroting the same FUD shit like Ani says here: Anima is bad, Comfy is adware, Comfy is sabotaging local.
>>
>>108489921
>Local = mental illness
well yeah, just look at this thread
>>
>>108489934
>Anima is bad, Comfy is adware, Comfy is sabotaging local
Based. What a king.
>>
>>108489823
the banner image for the model is nothing but 1girl images lol. isn't it about time we try to advance further?
>>
>>108489934
comfy is adware and sabotaging local though, but anima isn't bad
>>
File: 1746711644526391.png (11 KB, 696x59)
>>108489823
>>108489840
THEY BLEW 8 FUCKING K ON FUCKING SDXL WITH CLIP HOLY SHIT LMAO
HAHAHAHAHA I GIVE UP EVERYONES A RETARDED MORON
>>
>>108489957
that's not bad compared to chroma's failbake which cost 25x more yet didn't learn a single character or artist. $8k is nothing really, it's a reasonable amount to spend on SDXL because it's nowhere near enough to train anything modern
>>
File: image-22.png (2.23 MB, 1159x960)
>>108489957
It's depressing to spend 8k to get this blurry slop as the result.
>>
>>108489883
User error
>>
>>108489957
>THEY BLEW 8 FUCKING K
is that a lot to you? holy has this general gone downhill
>>
>>108489823
holy fucking kek, and the best part is this "Mugen" model is completely fried
>>
>>108489971
It's pretty easy to tell SDXL or Anima or whatever other model is retarded when you need a massive amount of training, a diverse dataset, and perfect tagging just to teach it a single character properly. And even with all that effort, you still get concept bleeding. Now imagine scaling that garbage up to an entire checkpoint where it's not just one character but millions of different things.
>>
File: mugenlmao.png (2.76 MB, 1792x1152)
>>108489823
>>108489840
>left: WAI v15
>right: Mugen-Aesthetic-Anzhc
For a model advertising itself as a "modernized anime SDXL base" that is "preserving Noobai knowledge" it sure is ruining the knowledge a lot while also looking like complete ass.
>>
>>108489971
there was one image where a girl has a 6-fingered hand in front of her face. embarrassing when promo images are meant to be cherrypicked
>>
>>108489976
why dont you film yourself flushing 8k into the toilet then buddy because thats exactly what happened here
>>
>>108489964
chroma at least has its uses as the best realism model

meanwhile here's mugen using their official workflow
>>
>>108489924
Brain damage, no other explanation. These morons are going to keep "finetuning" SDXL for the next decade. The Anzhc guy also made this retarded snake oil post - https://www.reddit.com/r/StableDiffusion/comments/1revwgq/clip_is_back_on_anima_because_clip_is_eternal/

You gotta hand it to Russell, he actually knows what he is doing and doesn't care about the opinion of discord servers/jeets.
>>
>>108490013
>doesn't care about the opinion of discord servers/jeets.
the day he starts a coomcord is the day i stop caring about him
>>
>>108489998
>>108489992
>>108489971
8k for a WAI gen + some impasto oil painting lora
>>
File: Preview_00139_.png (2.5 MB, 1728x1344)
>not gambling
kekd
>>
>>108490013
Not surprised at all. Everyone in Noob cord is a complete schizo.
>>
>>108489747
thanks news guy
>>Mugen: SDXL to Flux 2 VAE conversion model
>https://huggingface.co/CabalResearch/Mugen
anyone try this yet?
>>
>>108490027
Nice clean upscale.
>>
>>108490039
Not my gen but I'm saying it again, 8k dollars for an anime model with that impasto lora slop look. I could enhance the colors with Klein or Photoshop or some meme AI program that enhances colors.
>>
im very new to all this. should i stick to forge or give comfy a try? comfy is faster but im unfamiliar with node based workflow.
>>
>>108490067
Comfy.
>>
>>108490039
its shit but im sure the general will be flooded for the next week with posts on how its way better than anima
>>
can we fucking stop with sdxl meme
and stfu about fucking glm image lmao
>>
File: Preview_00147_.png (1.38 MB, 1440x1120)
>>
>>108489964
>didn't learn a single character
It absolutely did, there's no need to make up shit.
>>
Ladies and gentlemen
Local
>>
>>108490134
BWAHAHAHA
>>
>>108490134
whats the alternative, locked down models that gen the same slop ad nauseam?
>>
File: 359.png (899 KB, 3960x3080)
>>108489944
keeeeeeeeeeeeeek, you think these retards care about literally anything else? its literally a 1girl spitting machine, look at the metrics they track, zzz, hololive and arknights character similarity, KEK
>>
>>108490067
nobody relevant uses forge anymore. you will not find any help for using forge. either you use comfy or you dont gen locally.
>>
>>108490067
troonfy, you get the hang of it and later you will thank yourself for doing it because you will have access to the latest stuff
>>
>>108490174
the alternative is you touch grass and get a real hobby
>>
>>108489712
For image gen on AMD NPUs, searching a bit turned up AMD Stable Diffusion Sandbox, but it depends on Ryzen AI Software, whose install instructions for 1.7.1 only talk about Windows. The last time there were Linux instructions was 1.4.
https://github.com/amd/sd-sandbox
https://ryzenai.docs.amd.com/en/latest/

Image gen on GPUs should have better general support via ROCm and ComfyUI. AMD has Ubuntu instructions; other distros may vary. I'm on Windows though, so I haven't tried these instructions myself:
https://rocm.docs.amd.com/projects/radeon-ryzen/en/latest/docs/install/installryz/native_linux/install-ryzen.html
https://rocm.docs.amd.com/projects/radeon-ryzen/en/latest/docs/advanced/advancedryz/linux/comfyui/installcomfyui.html

For text models on AMD NPUs, someone pointed me to FastFlowLM once. Haven't tried it myself:
https://fastflowlm.com/
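If you end up going the ROCm + ComfyUI route, the setup usually looks something like this. This is a sketch only: the rocm6.2 wheel index and the exact versions are assumptions, so check AMD's linked docs and the PyTorch install matrix for what actually matches your ROCm install:

```shell
# Install a ROCm build of PyTorch into a venv, then run ComfyUI on it.
# The rocm6.2 index is an assumption -- pick the one matching your ROCm version.
python3 -m venv comfy-env && source comfy-env/bin/activate
pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm6.2

git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI && pip install -r requirements.txt
python main.py   # try --lowvram if the GPU shares system memory
```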
>>
>>108490228
idk if i was going to get a new hobby i would probably pick up yours.
crying about open source models in /ldg/ while waiting for free credits to replenish seems pretty fun.
>>
>>108490211
>>108490214

are there any good resources out there to look into for establishing workflow nodes in comfy? average google search gets me nothing but useless reddit posts that don't tell me anything
>>
>>108490276
https://comfyanonymous.github.io/ComfyUI_examples/
>>
File: Preview_00168_.png (1.69 MB, 1440x1120)
i yearn to gamble so i gamble with gens
>>
>>108490276
took me a few days to learn it, but it was worth it
>>
anime diffusion general
>>
File: Preview_00208_.png (1.4 MB, 1440x1120)
>>
File: Preview_00213_.png (1.18 MB, 1440x1120)
found a new way to gamble
>quality tags only for positive and negative prompt
>0-0.5 cfg
>gen initial image until it looks cool and youre able to imagine something from it
>write new positive prompt and feed into upscaler with new description and denoise accordingly with normal cfg

>Highly abstract colored pencil sketched style. An alien planet with pink and tan blobs and organic structures making up its environment. Center frame is a structure with a glowing white door that is emitting white light blasting towards the viewer.
>>
>>108490441
>>108490481
based
>>
File: 1394930679128.jpg (59 KB, 449x444)
>5060Ti 16GB
>5070 back to 12GB
JENSEEEEN
>>
File: Preview_00216_.png (810 KB, 1440x1120)
>hiroshima getting nuked and a massive dust cloud towers over the cityscape.
>>
>>108489921
>>108489936
>Local = mental illness
it's local diffusion = mental illness, they seem to be sane over on /lmg/
>>
>>108490441
>>108490481
all of my own gens suddenly look like total ass
>>
>>108490503
looks more like 911
>>
>>108490628
the worse gen looks, the more sovl
didn't they teach you that?
>>
File: 1764362468087441.png (183 KB, 1996x564)
https://www.reddit.com/r/accelerate/comments/1s7twd8/byteplus_is_selling_exclusive_seedance_20_access/
Do you think they'll let you make AI coom videos if you give them those 2 million? rofl
>>
Anyone know if generating coom slop on a cloud provider like runnode will get you banned?
>>
>>108490013
Retards are forever stuck with SDXL fine tunes because the internet has become a digital favela for the global hordes and third worlders can run, train, and mess with SDXL models.

That's where the audience is. The 'community' has become a business already. Forward momentum has stopped and nothing since SDXL and Flux has ever gotten more than a couple dozen finetunes and a hundred LoRAs. It's fucking sad.
>>
File: Video_00002.mp4 (2.27 MB, 720x1072)
I think it's a resolution thing that makes the first few frames break when doing latent upscale.

According to chatgpt it's an innate issue with wan.

But with Kijai's wrapper it works, and I recall that if you don't use a particular setup you just get an error and it won't even gen.
>>
>>108490134
The only reason any of us use local models is because we have to. You get banned for shit as tame as hypnosis porn now on websites plus I don't like the idea of third parties accessing the data on some rented GPU I'm using to see me genning giant cleavage tots onto my IRLs

>>108490228
You're on /g/ too, faggot. Browsing a stable diffusion thread. You're even replying in it. You can lie to yourself but you can't lie to us.
>>
>>108490228
ladies first
>>
File: fucking jeets I swear.png (1.84 MB, 1024x1024)
https://huggingface.co/lodestones/Zeta-Chroma/discussions/15
omg guys, it can do a close up of a face, local is saved!
>>
File: 1748515880295837.png (1.49 MB, 1344x1520)
>>
>>108490785
>jpeg artifacts all over
>DOF completely fucked
wow!
>>
>>108490785
>it can do a close up of a face
you're being too nice, even that close up looks like shit
>>
>>108490802
>jpeg artifacts
I think it's gonna be amplified on pixel models like Zeta-Chroma, a vae kinda "averages" shit so it cleans up artifacts like that
>>
>>108490785
i'm more concerned about the state of closed models.
what do we have for image gen? nanobanana, not that impressive these days.
and video gen is in the dumpster. seedance 2 would have been nice, but by the time they release it it will be old news.
if they ever properly release it.
>>
File: 1758535685077169.png (2.25 MB, 1344x1520)
>>
>>108490816
>nanobanana, not that impressive these days.
how? it's still destroying the competition, let's hope that Z-image edit will be closer to it
>>
Everyone's praising me relentlessly for a longstanding bug I fixed in an open source project, but I used AI to fix it, should I tell them?
>>
>>108490886
if they couldnt tell its not ai, no. can you tell?
>>
>>108490886
tell em in 6 months
>>
>>108490828
idk everything i've seen from nano banana looks pretty run-of-the-mill. face swaps, fashion change, location change.
maybe it's just the userbase for closed models. but people always come here and hype up closed models and no one ever posts or links anything that lives up to the hype.
>>
File: 1773262747386264.png (586 KB, 1080x1506)
>>108490886
>should I tell them what tool I used to fix this bug??
yes, also tell them what IDE, operating system, CPU, GPU, RAM, motherboard, internet provider you were using to achieve this, transparency is important after all!
>>
>>108490886
use it later for a gotcha moment
>>
File: ltx2_00194.mp4 (3.28 MB, 576x1024)
>>108490886
What's the etiquette for submitting vibed bug reports?
>>
>>108490898
The main thing is that it's good enough, and squeezing more out of the rock isn't worth frontier labs' time without re-architecting and a paradigm shift; they have enough compute not to care to battle it out. Local has annoyances with the available models that no one wants to fully commit to fixing with their own finetunes. I'm sure there are some people trying, but knowledge in the space is scarce and misallocated to the wrong people, see everything with Zeta-Chroma and Mugen above.
>>
>>108490816
the chinese models are alright. western ai companies are absolute cucks and just want to redirect their focus to coding, agentic agents and clawbot clones. video and image generation seems to be an afterthought in the west at the moment. It's the chinese closed source models that i care for the most. Hopefully other chinese ai companies catch up to seedance 2.0 and also reach "sora2 killer" model status. Kling 3.0 itself is more photorealistic than seedance 2.0 and uncensored. The censorship of seedance 2.0 is aggressive and it re-writes your prompts to be more pg13 and sfw.
kling 3.0 videos
https://litter.catbox.moe/ak7lph.mp4
https://litter.catbox.moe/q0nyvv.mp4
https://litter.catbox.moe/bdfqjs.mp4
>>
>>108491048
>western ai companies are absolute cucks and just want to redirect their focus coding agentic agents and clawbot clones. video and image generation seems to be an after thought in the west at the moment.
I think the way it plays out is AI is accepted when it's invisible. And it is completely invisible for code, which is great, the west will accept it completely there.

The more image/video generation improves, the more invisible it will get, and then america will adopt it as well. The most popular critique against AI is calling it slop, and that's because it's identifiable as slop. So what happens when it's no longer identifiable as slop? All that remains are schizos talking about the SOUL in art and they won't be taken seriously for long.
>>
>>108491048
those videos don't look good. the physics are fucked, the lighting is fucked and the anatomy is fucked.
the westworld style video is the best looking one and that has fucked up facial distortion as he is moving in and out of frame.
i assume they are old ltx gens and you are false flagging saas.
>>
>>108491137
>They don't have a choice, the stock price and market conditions in China are terrible and AI has only momentarily saved them from dumping any lower in 2024. They haven't regained their market cap from before their financial group IPO, which Xi personally stopped to humiliate and pressure Jack Ma after he said negative things about the government's market governance. I do think they will have enough incentives to at least get a Qwen 4 out, but how much open sourcing happens after that remains to be seen, and we don't know if the new Qwen team will be as effective as the old one; we may end up seeing another Llama 4 situation.
is it why China won't save us anymore? :(
>>
>>108491091
I'm away from my pc but no, those are kling 3.0 videos, and yes the westworld concept one with naked hosts in the cold storage is a very hard concept for most video models at the moment. Even for image 2 video on kling 3.0 it produces morphing deformities and samey repetitive faces and body types. Would love to see how it performs on seedance 2.0 but that faggot model keeps re-writing my prompts in ridiculous ways to stay pg13.
Kling 3.0 img2vid https://gofile.io/d/x2NUjW
Seedance 2.0 text2vid censored
https://gofile.io/d/8DUl17
>>
>>108491216
put those videos there >>>/wsg/6119462 if you want us to see them without having to go through an alternative link
>>
>>108491216
idk man, closed models are fucked. and it's only going to get worse after seedance bent the knee.
hollywood has been casually writing digital likeness into contracts for over 2 decades, and they own the copyright to virtually all video media.
the future of closed models is going to be heavily censored models that are trained on synthetic and public domain data.
>>
>>108491263
It's not the end of the world anon. Chinks can focus on using training data from old b-rated movies, indie movies, forgotten straight-to-dvd/vhs movies, amateur recording footage and foreign non-Hollywood movies. The chinks can even use bollywood, nollywood, central asian, russian and south east asian movies and tv shows. There are even old movies and shows from the yugoslavia and soviet union era they could use. They have access to content from wechat and bilibili. They don't even have to rely mainly on mainstream Hollywood content for their training data. It's image 2 video and omni references that will be the true strength and horsepower moving forward for chinese ai video models. Western ai companies besides runway, luma and pixverse have given up in the ai video generation race.
>>
>>108491216
nta

pls provide the full prompts too

I wanna try out
>>
>>108491216
fuck off with your shitty api gens
>>
>>108491537
he brought some titties though
>>
>>108490202
Looool gacha girls graphics
>>
>>108491537
Why are you so sensitive to API? Comfy embraced it and encourages it and you championed him as a leader of local.
You guys don't care about actual foss and local. Stop being hypocritical and embrace what you are. A bunch of coomers. You don't give a shit about spirit of free software, you only dislike saas because it doesn't let you gen porn.
>>
>4 years and the pictures look the same
Is local imagen stagnant?
>>
>>108491582
>you only dislike saas because it doesn't let you gen porn.
the funny thing is that some API models like Grok lets you do coom AI porn kek
https://www.reddit.com/r/Grok_Porn/
>>
>>108491582
>why won't you pretend closed models are better than they are?
SD is good for 3 things. porn, memes, real world production.
most closed models are only viable for memes.
can't finetune, so no style, character, motion loras. there goes real world production.
what company is going to waste time and resources on something that a jeet shitting on the side of the street can 1:1 duplicate with ctrl+c ctrl+v in a prompt window?
and porn is out the window. even if the model allows porn, no style, character, motion loras so you are stuck with what they trained.
>>
cumfart shills stopped being in denial and full-on shill api now lmao
>>
>>108491587
exactly what kind of images are you expecting that would be better than the data used to train them?
>>
>>108491751
>what kind of images are you expecting
images that don't have plastic skin and do have normal hands, is that too much to ask for?
>>
>>108491610
that's been censored for a while
>>
>>108491953
any realism model can already do that if you know what you're doing. plastic skin hasn't been an issue since SDXL.
>>
>>108492004
proof?
>>
>>108492013
goto to civitai and look at images from any modern local model.
>>
>>108492024
the burden of proof is on you anon, show me some images
>>
File: 00020-3277381146.png (2.28 MB, 1536x1536)
>>108492013
NTA but I don't think you can make anything this realistic on cloudshit models.
>>
>>108492038
you 'avin a giggle m8?
>>
>>108492038
cool shitpost. the guy that spams this shit specifically likes his 2.5D slop style.
>>
>>108492038
Very nice.
>>
File: representative.jpg (415 KB, 1943x1314)
Babe, babe, wake up! Anime segmentation has never been this easy before!
>We’re reintroducing and open-sourcing the project “See-through.” Given a single anime illustration, our approach decomposes a single image into fully inpainted, semantically distinct layers with inferred drawing orders — up to 24 layers including hair, face, eyes, clothing, accessories, and more.

https://github.com/shitagaki-lab/see-through
>>
File: 1750877813275161.png (172 KB, 460x460)
>>108492116
usecase for cutting 1girl into dozens of organs?
>>
File: 1765306055968124.png (238 KB, 345x502)
>>108492116
mfw
>>
>>108492116
Feels like this could be incredibly useful for people that do Live2D or puppet animation.
>>
we love our puppeteers, don't we folks?
>>
File: 1762197733159242.gif (1.4 MB, 360x238)
>everyone is trying to use ai to make the next big thing that'll make them rich
>meanwhile im perfectly content with just gooning
>>
when are we getting suno 5.5 quality locally?
oh right, local gotta catch up to suno 2.0 first
>>
>>108492199
I want to make high quality memes just for the love of the game but only the combination of Nano Banana Pro + Seedance 2.0 can do the trick, I hope local will get on their level at some point...
https://files.catbox.moe/sh6hc4.webm
>>
>>108492146
that was my first thought
>>
I'm trying to get back into local videos and can only remember the insane amount of shit that was needed just for 5 second videos. What does a typical Wan2.2 workflow look like nowadays? And what's the best trainer for loras?
>>
>>108492221
kinda cool, m8
>>
I want to discover chicks that do not shave in my country. Tried grok, it cannot give me facebook/insta links of these women because it would violate their consent and privacy.
>>
>>108492248
Oh wait there's LTX2.3
Is this the go-to now?
>>
>>108492221
pack it up. local will never top this
>>
File: 93261236765275.png (1.04 MB, 832x1216)
>>
>>108492358
In a few years it might but yeah I wish I could look at current local videos without disgust. I legit don't get why would people waste so much time and money on terrible shit that local video models output.
>>
>>108492354
not exactly. I use wan to generate a video to use as an input for LTX which helps guide the motion, and LTX extends the clip and adds audio. otherwise LTX by itself is pretty bad
>>
>>108492354
No, and the audio from LTX is still bad.
>>
where is the ltx fart lora? or someone give me a fart dataset at least
>>
>>108492369
that isn't good because of an ai model, it's good because the person who made it understands filmmaking and has a good sense of humor and comedic timing.
>>
>>108492370
>>108492385
Back to wan it is. Have there been any good tips for lora baking?
>>
what was last years april fools gag? Something AI related?
>>
>>108492221
>https://files.catbox.moe/sh6hc4.webm
this is genuinely good, if he made an actual movie out of this I'd pay for it
>>
Making a nsfw blowjob lora for MMAudio is so hard..I have to watch every clip intensely to capture the perfect blowjob SFX and i always end up fapping instead.
>>
>>108492414
>the person who made it understands filmmaking and has a good sense of humor and comedic timing
that's why I'm for AI democratization, there's a lot of talented people that could produce cool shit but unfortunately don't have (((friends))) that would lend them millions of dollars to hire hollywood VFX employees
>>
>>108492414
partly, but I don't think anyone can deny that even raw outputs from sora outperform local by an incredibly huge margin. now with how restricted (and in the case of sora straight up unavailable) they are, those feel like a window into the future instead of something actually usable though.
>>
>>108492116
This feels like ADetailer on steroids, it was about time. How much longer until we have ControlNet 2.0?
>>
>>108492453
having professional tools doesn't make someone a professional.
we are surrounded by professional tools with virtually zero entry cost and 99% of people are still completely talentless and uncreative.
>>
>>108492414
no amount of good sense of humor will save ltx 2.3's atrocious outputs, you definitely need a good AI model that doesn't shit itself when you ask it to do more than just slow movements
>>
>>108492508
to be fair, humanity will need time to master AI tools. it's only been one or two years since we got models good enough to produce anything, and cinema only started to be interesting decades after the first video camera was invented
>>
>>108492038
LMAOOO you're such a blind retard, go back to do shitty plastic onsen gens fucking retard piece of shit.
>>
>>108492541
true, but sadly most people will never learn.
for example, go look at the smoldering dumpster that is /3/. every other thread is a variation of
>i have zbrush and max and maya and substance painter, why my character not look right, what do i do?!
>you need to learn anatomy, texturing and color theory.
>what button is that? i don't see that in drop down menu?
>>
File: moreanimafud.png (138 KB, 884x549)
>if you rip out part of the Anima model, the output gets worse!
>that means all the knowledge is actually in a tiny 6-layer preprocessing adapter module
>no I don't have any proof of that but just trust me it makes perfect sense
>it's also very hard to update the knowledge base
>no I've never trained it myself but JUST TRUST ME BRO it's almost impossible to finetune
There's actually random people now repeating baseless Anima FUD, probably taken from the Noobcord. Why are they so defensive of SDXL and instantly hostile to new models?
>>
when did this anima shill raid start?
>>
But I need to artist mix. SDXL is the only architecture broken enough to artist mix and 1girl. Anima is shit. Netayume is shit. Newbie is shit. Just retrain SDXL, maybe add the rectified flow. Jam a new VAE in there as well. Fuck all of you boomer prompting schizos. A true anime model relies solely on TAGS.
>>
>>108492675
Such a damn shame we will never get XL 3.5 pred.
>>
>>108492634
It's actually hilarious how obvious their raids are. No other model ever has so many staunch defenders.
>>
>>108492695
Gooks got too cocky with their 1.0 release only to proceed to absolute failure
>>
35
>>
>>108492714
It's so outdated now that I see no reason why they can't just release it instead of hoarding the model. Next time they should rethink trying to crowd fund a model knowing no one but corpo investors can realistically pay for it.
>>
>>108492625
Does autism play a role here? Asking this seriously. Autistic people are very sensitive to their routines and can react disproportionately and irrationally when those are disrupted.
These people have been slopping SDXL since 2023, I wonder if they consider it part of their daily lives at this point?
>>
>>108492766
NAI hasn't open-released v3 either, but seeing that local still hasn't gone beyond SDXL, there's no reason to.
>>
>>108492625
oof that sounds pretty bad Anima bros...
>>
>>108492769
Ad hominem fallacy
>>108492625
Not surprising, more finetuners are waking up to this.
There is a reason why NewBie does not waste compute on Anima, and neither do Noob2 or Ikena. Yume already got disappointed and dropped it, and other mixers are starting to abandon that architecture too.
>>108492793
Do you remember Cosmos before Anima? No? That is because it is a completely shit model. tdrussell is an LLM fag, trained Anima like an LLM, and selected the base model with the LLM mindset.
>>
Friendly reminder that Anima shills defend a dev that never had a CivitAi account or experience.
>>
File: 1762283904905655.png (140 KB, 1150x312)
>>108492831
>a dev that never had a CivitAi account or experience.
like Anifart?
>>
The real secret to Anima is Comfy Cloud. The day you anons finally use those 400 free monthly ComfyCloud credits and start prompting out Anima gens at 0.5 seconds while barely making a dent in your balance, that is when everyone will finally understand the meta.
>>
>>108492815
>Ikena
Who the fuck is this supposed to be?
I don't care enough to respond to the rest of the trani fud.
>>
what even is the latest Stable Diffusion? I saw they made a 3.5? how fucked is it that nobody even bothers to use it or make any finetunes etc? I'm not even seeing a category for it on civit
>>
>>108492854
Hassaku dev. He is actually finetuning Hassaku Klein 4b on a 150k image dataset right now and failing because Klein base model is basically untrainable. NewBie already tried throwing an even bigger dataset at it to tame Klein but they also completely flopped.
>>
>150k
We're never leaving sdxl huh
>>
>>108492882
Yes StabilityAI stopped making image models after 3.5. 3.5 isn't a complete trainwreck like 3 but no one gives a shit about it because 3 burned all the community interest and it was eclipsed by Flux on release.
I guess there was that one weirdo who was trying to grift off making an anime model for it, but no one gave him money (hopefully)
>>
>>108492882
SDXL was a miracle, everything after has been a flop. I think it might be due to how base models are being built. There is some new tech introduced after SDXL that makes it harder to fine tune base models with good quality and flexibility.
>>
>>108492894
Huh. I thought Hassaku was a shitmix. 150k is on the smaller end but counts as a real finetune I guess.
Anyway, did they try Z-Base?
>>
>>108492840
Ani has been lucky with FUDing Anima, because now people are actually starting to find flaws in the architecture. But Ani's FUD was always about the license, since he never bothered with fine tuning or technical aspects.
>>
>>108492625
Objectively speaking I want to thank tdrussell because he is the only one so far who went this deep taking a random model and trying to make it 100% anime. Anyone else would have left it half-baked and we would never see these flaws. Lumina would probably have the exact same problems or worse if someone finetuned it as hard as he did with Cosmos.
So big thanks to him for pushing Cosmos to the limit just so we know it is a dead end.
>>
It's not a dead end. It just needs a better license
>>
>>108492116
Oh neat. This is exactly what I needed. If it works as advertised, it will save me a great deal of time.
>>
>still zero proof
>"just trust me"
holy lel
>>
>>108493027
Proof 1 >>108487391
Proof 2 >>108489823
Proof 3 >>108492625
All of this happened in less than one week.
>>
>>108489653
>in dire need to run things with friends or colleagues but have none available and can't show vulnerability to the ones that are because they're women that rely on me
>too poor to run local
>choices are between partisan technocrats (grok and claude), the armed forces (openai) or the chinese (pretty much anything else) to get them all you have
I hate life *snort*
>sad zizek picture
>>
>>108493050
>All of this happened in less than one week.
...by three different people
>>
>>108493050
>"proof"
>no examples
>"just trust me"
again, holy lel
>>
Searching for LDG in the Noob pedocord returns some interesting results.
>>
>>108493063
You’re not going to memory hole Anima like you did with the 40 failed Chroma bakes. I’m going to remind you about the Anima flop every day.
>>
>>108493050
Do you understand that those are simply claims and do not include any actual proof whatsoever?
>>
File: 1770263598196853.png (253 KB, 500x500)
>>108492116
Thought for sure the chinese were mining bitcoins off me but after a few minutes I got a decent result. Neat.
>>
No seriously, what is up with this seethe over this one specific model?
>>
>>108493092
Can you adetailer it?
>>
>>108493110
>if i shit on comfys present he will give me millions!
Basically a loser with mental illness
>>
>>108493111
I don't use that but it outputs a psd file so, I guess?
>>
Anima and NL prompting simply can't be good. Because if it is then the members of the finetuner groomcords will start wondering why the finetuner is still sticking with SDXL. Much easier to bake gachaslop loras into an already existing model than to recaption the danbooru dataset.
>>
>>108493092
Can you test and see if there's a image dimension limit for the input file? Let's say you put in a character image that's 1536x2048, will it still work?
>>
>>108493063
>>108493085
>...by three different people
How many people do you need? Can’t you just download the Anima fine tune from the first proof and test for yourself how it forgot nearly every concept that wasn’t included in the dataset?
Can't you download AnimaYume and prompt anything outside gacha lolis to test for yourself?
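For what it's worth, the failure mode being argued about — catastrophic forgetting under a narrow fine-tune — is real and easy to show in miniature. This toy isn't Anima-specific and says nothing about whose fault it is; it's just the generic effect, demonstrated on a one-parameter regression fit on task A and then fine-tuned only on task B:

```python
# Toy illustration of catastrophic forgetting: a one-parameter model is
# fit on task A, then fine-tuned only on task B, and loses task A.
# Nothing model-specific here — it's the generic failure mode.

def sgd(w, data, lr=0.1, steps=200):
    for _ in range(steps):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # gradient of (w*x - y)^2
    return w

task_a = [(1.0, 2.0)]   # task A wants w = 2
task_b = [(1.0, -1.0)]  # task B wants w = -1

w = sgd(0.0, task_a)
err_a_before = (w * 1.0 - 2.0) ** 2
w = sgd(w, task_b)  # narrow fine-tune: only task B data
err_a_after = (w * 1.0 - 2.0) ** 2
print(err_a_before < 1e-6, err_a_after > 1.0)  # → True True
```

The open question in the thread is whether Anima forgets *more* than comparable architectures under the same dataset, which this toy obviously can't settle.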
>>
>>108493130
They/He/It's been at it ever since it dropped. Not even netayume was like this
>>
>>108493050
>>https://civitai.com/models/2495369/kirazuri-anima
>>Known Limitations & Issues: Catastrophic Forgetting Any of the base model knowledge outside of that dataset will have significant forgetting, and LoRA trained on the base model are not expected to function very well with this finetune.
>Total training dataset of 15,420 images
That's a small dataset for a full finetune but thanks for something that counts as an example.
>>
>>108493141
This image was actually 1680x2920.

In hindsight, that's probably why it took so long to process...
>>
>continues with "just trust me"
>>
>>108493146
No need to test the finetune when it was trained on Anima Preview 1 that was already stated to have issues with training and was corrected in Preview 2.
>>
ani was right again, turdrussel is a retard, anima is untrainable trash and a waste of time, it will only get worse with training. you keep seething about ani but he's always right in the end
>>
>>108493172
>>108493161
>>
>>108493162
And it gave you that 500x500 image? Or did it keep the original dimensions in the PSD?
>>
>>108493161
I don't fully trust this guy to not fuck something up or tell the full story btw but I am at least entertaining the possibility that "architectural problems" might not be a complete fud lie now.
>>
>>108493158
He's desperate and running out of options because everyone makes fun of him
He really should put the fries in the bag and stop trying to be successful in this space after all those years
>>
>>108493176
But Julien's problem was only the license, which makes it impossible for him to squeeze money out of the work of others
Greedy fucker
>>
>>108493161
You’re proving my point anon. A small dataset of only 10k images leads to massive memory loss
>>
>>108493181
Nah it output to 1280x1280, I just resized it to save 3 seconds of upload
>>
>>108493176
>Ani was right
about what? none of this is about the license LOL
>>
>>108493197
>>108493204
nah he was saying cosmos is shit and that turdrussel has zero experience in model training. he could see where this is going. anons are dumb and only after months of sucking turdrussel's dick they can finally see how shit anima really is
>>
But will anon ever post proofs tho
>>
clip = trash
thats all I care about
>>
>>108493250
Let's try to fit a better text encoder within sdxl
>>
>>108493221
You really need help but i enjoy you suffering
You really deserve everything you got
>>
>>108493232
no because he doesnt have any
>>
File: BluSlop.png (72 KB, 1091x365)
>>108493232
>>108493221
4th proof
>>
I don't get anons problem
Anima is fine? Based comfy gifting us an anime model yet anons are seething for whatever reason
>>
>>108493270
this I love anima so much, hope everyone who dislikes it fucking die. doesnt matter if its untrainable proprietary piece of trash
>>
>>108493277
35 stars status???
>>
>>108493262
>just trust my words please
>please just trust me bro
>>
>>108493277
Why are you THAT angry anon?
>>
>untrainable
but it already does everything that I want it to do?
>>
>>108493270
I don’t get it. I’m grateful that Comfy gave us an anime model, but can’t I criticize it? What’s the point of this anonymous space otherwise?
>>
>>108493262
>>108493085
>>
>>108493296
yep, it's already good enough to use on comfy cloud
>>
>>108493296
And out-of-the-box models are good for local? faggot, where is the freedom if you can't modify the model yourself to your liking
>>
>>108493298
You've been seething about this for over a month while you could've worked on your failed UI instead
>>
>>108493298
Fund a better one then? Why seethe for literal months anon? It's just a free gift comfy funded for us, nothing more nothing less
>>
>>108493314
he needs the 1mill to fund his discord grooming
>>
>>108493314
Ad hominem. Show me positive Anima examples instead. I shared four proofs, now show me four of yours from four different people that prove the opposite.
>>
>>108493308
The only thing I want it to do is to print endless cunny, and it does that. One of the few models that has the cunny seal.
I don't really give a shit about this "drama". All I know is that anima is cunny approved.
>>
>>108493327
>>108493283
>>
>>108493327
>four proofs
>three trust me claims and one model trained on an older version
one last time holy lel
>>
>>108493327
>I shared four proofs
where?
>>
>>108493260
Look in the mirror, cretin.
>>
>>108493339
>>108493283
>>
>>108493327
Sweet deflection. Am I wrong? Why haven't you touched your UI code?
>>
Let’s take the Loras out of the equation. What’s the point of a local model that can’t be trained? What excuse does /ldg/ have for this? Also, the first two Comfy Cloud tiers don’t let you use Loras. Strange coincidence.
>>
*yawn*
>>
I got Ani on the brain…
>>
>>108493353
Envy ruins your karma anon
>>
>>108493353
bro you're insufferable, go fud other discords/civitai whatever
>>
Julien should start a new career i think
How about "Julien Somali", he could be a nuisance streamer in south korea :)
>>
>>108493353
Dude it's just a free model use another one if it bothers you so much
No one is holding a gun to your head and forces you to use it
>>
>>108493262
Can someone explain this?
>>
>>108493445
He wants money
>>
>Lower amounts of activity
>No big news
>Comfy dying
>Collage filled with borderline /degen/ shit
It's so over
>>
>>108493465
hey now germ-chan is cute
>>
Looks like Chroma Kaleidoscope training got paused again, sadge:
https://huggingface.co/lodestones/Chroma2-Kaleidoscope/tree/main

It was learning the concepts absolutely fine IMO, it literally just needed a bunch more 1024x1024 training to reverse the compositional degradation versus the base model that came with the 256x256 training.
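For context on what "more 1024x1024 training" usually means: most trainers don't train on one literal square size but on a set of aspect-ratio buckets around a pixel budget. A minimal sketch of that bucketing (sizes snapped to multiples of 64; real trainers differ in the details, this is just the shape of the idea):

```python
# Sketch of aspect-ratio bucketing around a pixel budget — "1024x1024
# training" in practice is a set of buckets like these rather than
# literal squares. Step size and ratio cap here are illustrative.

def make_buckets(target_area, step=64, max_ratio=2.0):
    buckets = set()
    w = step
    while w * w <= target_area * max_ratio:
        h = int(target_area / w) // step * step  # largest height under budget
        if h >= step and max(w / h, h / w) <= max_ratio:
            buckets.add((w, h))
            buckets.add((h, w))
        w += step
    return sorted(buckets)

buckets = make_buckets(1024 * 1024)
print((1024, 1024) in buckets)  # → True
print(all(w * h <= 1024 * 1024 for w, h in buckets))  # → True
```

The same function with `target_area=256 * 256` yields the much cheaper low-res schedule being argued about above.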
>>
>>108493465
comfy shills chased out most legit posters
don't think many anons have the motivation to sit through the constant bullshit comfy cloud and anima shilling
try mentioning you use forge here. immediately you have "anons" dogpiling on you. we all know why that happens
>>
>>108493499
>chroma
Are they still going on with that charade?
>>
btw all the anti anima posting is ironic anon just gets off on seeing you take it seriously
>>
>>108493512
He's been training experimental Chroma2 on Klein 4B and Z Base; Kaleidoscope was the Klein 4B one. Both versions ATM are just massively degraded compositionally relative to the original model. Lodestones' biggest flaw is thinking that super low-res training just magically works in a way it definitely doesn't.
>>
>>108493550
who needs Anima when we have BASED Mugen, with its standard CLIP encoder and Flux2 VAE
/s
>>
Go back to Reddlt faggot
>>
>>108493613
you spend more time on reddit than any of us
>>
>>108493554
Don't know why they're shilling this model so hard in Noob cord. Also don't get why all the anime schizos moved there, it's basically /ldg/ at this point.
>>
>>108493641
Wait the noobcord is blessed and full of frenship?
>>
>>108493551
>Both versions ATM are just massively degraded
These new untrainable local models are the death of /ldg/. Now everyone's confused about who to blame. "It's CLIP's fault!" "It's my dataset!" "It's the tagging!" "The dev never finetuned a diffusion model!"
No anon, all that stuff works fine. The real problem is the architecture of these new models, they don't want (You) to finetune them.
>>
>>108493675
>The real problem is the architecture of these new models, they don't want (You) to finetune them.
Yeah it's very likely deliberate. I wonder if they regret that older models like SDXL were so trainable.
>>
>>108493262
>anima preview 2 is not a re-train, it's clearly continuation
tdrussell claimed he retrained from an earlier epoch with new methods, what evidence does he have that contradicts this claim?
>the DiT has yet to catch up to the LLMAdapter
What the hell does this mean exactly?
>>
It’s almost like SDXL was a fluke
>>
>>108493675
Yet someone thinks they can fine-tune their own "apache" version because .... they're just going to do it okay!
>>
>POST PROOF
>posts proof
>hurr I don’t understand what that means
cumshart isn’t sending his best
>>
>>108493686
that's one of the devs of the esteemed MUGEN model he knows what he's talking about
>>
>>108493707
>>108493337
>>three trust me claims and one model trained on an older version
>>
>>108493707
>screencap of some discordfag vagueposting nonsense
>OH MY SCIENCE I HECKING LOVE PROOOOOOOOOOOOOVING!
>>108493712
Can the honorable esteemed dev, his majesty and highness, deign to type an actual paragraph explaining what's wrong with it, rather than two single sentence vagueposting bullet point claims with nothing to back them up?
>>
>>108493684
As an investor, I wouldn’t invest in a model that’s easy enough to modify that people can create and distribute child pornography and copyrighted material.
>>
Is this the new RP general?
>>
>>
>>108493756
tranny general
>>
>>108493675
What? I was specifically blaming Lodestones for his insistence that training as low as 256x256 sometimes is Just Fine Guise
>>
>>108493136
that's what NL labbing was intended to explore, and it led to edit models, so not a total waste.
thankfully newer doesn't mean a replacement for local.
>>
>pretending mugen's 100GB vram requirement won't get optimized like every other model in existence
>>
>>108492116
comfy support wen
>>
>>108493802
Hawt
>>
Would TurboQuant affect local image gens?
>>
>>108493912
no, local image gen is DOA
>>
>>108493912
Diffusion also uses attention so most likely yes if someone implements it. How much will it matter? That I don't know.
>>
Imagine if someone optimizes vram usage by 6x
>>
>>108493950
i could finally run ZiT...
>>
>everyone liking the most popular UI *must* be shill because i say so
lolcow
>>
>>108493950
>6x optimization
>6x worse outputs
>>
>>108493999
MoE image gen when?
>>
>>108493979
I got a 12 month blacked subscription for shilling anima and comfy on reddit and ldg for a month
>>
File: 1746407719559697.png (68 KB, 929x420)
>>108493999
>6x worse outputs
it's really not that bad, Q8 KV cache seems fine
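"Q8 KV cache" in charts like that is just 8-bit storage with a float scale kept per row/block. A minimal sketch of the round-trip and why the error stays small (illustrative only — real kernels quantize per block or per head, not per whole row):

```python
# Symmetric absmax int8 quantization, the idea behind Q8 KV-cache
# storage: one float scale per row, int8 payload, dequantize on read.
# Illustrative sketch, not any particular library's kernel.

def quantize_q8(row):
    scale = max(abs(v) for v in row) / 127 or 1.0  # avoid zero scale
    return [round(v / scale) for v in row], scale

def dequantize_q8(q, scale):
    return [v * scale for v in q]

row = [0.12, -1.5, 0.003, 0.9, -0.42]
q, scale = quantize_q8(row)
restored = dequantize_q8(q, scale)
worst = max(abs(a - b) for a, b in zip(row, restored))
print(worst <= scale / 2)  # rounding error bounded by half a step → True
```

The worst-case error per value is half a quantization step, which is why Q8 barely moves perplexity; whether the same holds for diffusion attention would need someone to actually implement and measure it.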
>>
>>108494003
Sounds like you're a serious poorfag desu
>>
File: 1760030757982365.png (210 KB, 587x493)
wtf I love google now
>>
File: ComfyUI_18291.png (2.8 MB, 1500x2000)
>>108494171
30% is nothing when prices went up 500%. I wonder how long retailers are gonna tolerate all that memory shitting up their warehouses though?
>>
>>
File: 1765736448553243.png (3.98 MB, 2048x1216)
https://www.reddit.com/r/StableDiffusion/comments/1s8ppel/i_spent_weeks_fixing_the_plastic_look_of_ai/
I FIXED THE PLASTIC SKIN SAAR, JUST SHARPEN YOUR IMAGE TO 200% SAAR
>>
File: 1768007199787801.jpg (165 KB, 1478x1027)
165 KB
165 KB JPG
Still nothing can beat Grok imagine image edit : \
>>
When ready

>>108494530
>>108494530
>>108494530
>>108494530
>>
>>108494473
yeah klein completely undresses her :/
>>
>>108494402
I thought this was a shitpost, jesus christ AI has made some people stupid.
>>
>>108494342
based jenner always posting right before the thread dies so it's hard to reply with words of thanks and encouragement
>>
>>108490202
>every x axis has different numbers
cool comparison bro
>>
>>108493262
failed dev and failed trainer really is a dreamteam of seethe lmao
>>
>>108492126
Showing off the PSD layers to pretend you drew it by hand.



All trademarks and copyrights on this page are owned by their respective parties. Images uploaded are the responsibility of the Poster. Comments are owned by the Poster.