/g/ - Technology


Thread archived.
You cannot reply anymore.




Discussion and Development of Local Image and Video Models

Previous: >>108799954

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, & Upscalers
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/ostris/ai-toolkit
https://github.com/Nerogar/OneTrainer
https://github.com/tdrussell/diffusion-pipe
https://github.com/kohya-ss/sd-scripts
https://github.com/kohya-ss/musubi-tuner

>Z
https://huggingface.co/Tongyi-MAI/Z-Image

>Anima
https://huggingface.co/circlestone-labs/Anima
https://tagexplorer.github.io/

>Qwen
https://huggingface.co/collections/Qwen/qwen-image

>Klein
https://huggingface.co/collections/black-forest-labs/flux2

>LTX-2
https://huggingface.co/Lightricks/LTX-2

>Wan
https://github.com/Wan-Video/Wan2.2

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
https://rentry.org/mvu52t46

>Illustrious
https://rentry.org/comfyui_guide_1girl

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Collage: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/r/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
https://rentry.org/animanon
>>
>>108807440
1st for 1grunge slop.
>>
bums and tits, amazing.
>>
so when is that anima ip-adapter coming out?
>>
(I invented the genre - it's a 1girl, and she's singing grunge)
>>
Blessed thread of frenship
>>
kill ani IRL
>>
Eh~!?
https://www.comfy.org/workflows/
>>
*make changes*
*SYNTHESIZE*
>>
>>108807501
disastrous, it doesn't have a filter for local.


I mean, I haven't tried, but can you set the cfg on Grok video gen? if not, why even use comfy for api???
>>
>>108807501
Comfyanon is drowning in money
>>
>snubbed
>>
>>108807440
Please include in the OP a short description for each, their pros and cons, to help beginners know which one to choose according to what they want to do. Thanks.
>>
anyone got the anima -> zit refiner workflow? not sure why i can't pass the latent through directly when everyone else has been able to
>>
>>108807632
ask chat gpt
>>
File: GUpZ7nIWMAAMwJd.jpg (25 KB, 674x650)
25 KB JPG
Anyone has a Hidream 01 workflow?
>>
>>108807637
they can? i just re-encode the image
>>
>>108807632
You could just read the model cards desu
>>
>>108807637
>pass the latent through directly
no one does, which is why you shouldn't call it a "refiner": technically you're not using it like an actual refiner
>>
>>108807560
>why even use comfy for api???
I imagine for local processing in general. It's not like everybody that uses APIs is a gpulet/vramlet. Having API and local coexisting in one spot is quite useful for people, especially for large projects like those coke commercials and such.
>>
>>108807501
>can't open in new tab
what a faggot
>>
chads only: https://github.com/comfyanonymous/ComfyUI_examples
>>
>>108807651
>>108807669
thanks so im just retarded. what kind of denoise do you use? i always end up with a bit too much of the anima gunk
>>
>>108807727
i like hooking ksampler advanced and setting start step on that to -5 from total anima steps so it continues from there
>>
>mfw Resource news

05/12/2026

>Pixal3D: Pixel-Aligned 3D Generation from Images
https://ldyang694.github.io/projects/pixal3d

>SWIFT: Prompt-Adaptive Memory for Efficient Interactive Long Video Generation
https://github.com/ShanwenTan/SWIFT

>Forcing-KV: Hybrid KV Cache Compression for Efficient Autoregressive Video Diffusion Models
https://zju-jiyicheng.github.io/Forcing-KV-Page

>Masked Generative Transformer Is What You Need for Image Editing
https://weichow23.github.io/EditMGT

>Micro-Defects Expose Macro-Fakes: Detecting AI-Generated Images via Local Distributional Shifts
https://zbox1005.github.io/MDMF-project

>ERASE: Eliminating Redundant Visual Tokens via Adaptive Two-Stage Token Pruning
https://github.com/Tuna-Luna/ERASE

>MicroViTv2: Beyond the FLOPS for Edge Energy-Friendly Vision Transformers
https://github.com/novendrastywn/MicroViT

>Variational Inference for Lévy Process-Driven SDEs via Neural Tilting
https://circle-group.github.io/research/NeuralTilting

>Mean Mode Screaming: Mean–Variance Split Residuals for 1000-Layer Diffusion Transformers
https://erwold.github.io/mv-split

>Kijai/hidream-O1-image_comfy
https://huggingface.co/Kijai/hidream-O1-image_comfy/tree/main/loras

05/11/2026

>Causal Forcing: Autoregressive Diffusion Distillation Done Right for High-Quality Real-Time Interactive Video Generation
https://thu-ml.github.io/CausalForcing.github.io

>ReasonEdit: Towards Interpretable Image Editing Evaluation via Reinforcement Learning
https://github.com/IntMeGroup/ReasonEdit

>Object Hallucination-Free Reinforcement Unlearning for Vision-Language Models
https://github.com/XMUDeepLIT/HFRU

>SR2-LoRA: Self-Rectifying Inter-layer Relations in Low-Rank Adaptation for Class-Incremental Learning
https://github.com/FqWan24/SR-2-LoRA

>EditRefiner: A Human-Aligned Agentic Framework for Image Editing Refinement
https://github.com/IntMeGroup/EditRefiner
>>
>>108807750
so start 35, steps 50?

do you just feed the same prompt?
>>
>mfw Research news

05/12/2026

>Qwen-Image-2.0 Technical Report
https://arxiv.org/abs/2605.10730

>LimeCross: Context-Conditioned Layered Image Editing with Structural Consistency
https://arxiv.org/abs/2605.10319

>Power Reinforcement Post-Training of Text-to-Image Models with Super-Linear Advantage Shaping
https://arxiv.org/abs/2605.10937

>Noise-Started One-Step Real-World Super-Resolution via LR-Conditioned SplitMeanFlow and GAN Refinement
https://arxiv.org/abs/2605.09328

>FlashAR: Efficient Post-Training Acceleration for Autoregressive Image Generation
https://arxiv.org/abs/2605.09430

>WorldReasonBench: Human-Aligned Stress Testing of Video Generators as Future World-State Predictors
https://unix-ai-lab.github.io/WorldReasonBench

>Filtering Memorization from Parameter-Space in Diffusion Models
https://arxiv.org/abs/2605.10439

>AllocMV: Optimal Resource Allocation for Music Video Generation via Structured Persistent State
https://arxiv.org/abs/2605.10723

>Reinforce Adjoint Matching: Scaling RL Post-Training of Diffusion and Flow-Matching Models
https://arxiv.org/abs/2605.10759

>When Few Steps Are Enough: Training-Free Acceleration of Identity-Preserved Generation
https://arxiv.org/abs/2605.09460

>Dynamic Cross-Modal Prompt Generation for Multimodal Continual Instruction Tuning
https://arxiv.org/abs/2605.10765

>Progressive Photorealistic Simplification
https://arxiv.org/abs/2605.10409

>Improving Human Image Animation via Semantic Representation Alignment
https://arxiv.org/abs/2605.10523

>Towards Robust Sequential Decomposition for Complex Image Editing
https://arxiv.org/abs/2605.09233

>A Real-Calibrated Synthetic-First Data Engine
https://arxiv.org/abs/2605.09699

>Evading Visual Aphasia: Contrastive Adaptive Semantic Token Pruning for Vision-Language Models
https://arxiv.org/abs/2605.09429

>Hystar: Hypernetwork-driven Style-adaptive Retrieval via Dynamic SVD Modulation
https://arxiv.org/abs/2605.10009
>>
>mfw MORE Research news

>PermuQuant: Lowering Per-Group Quantization Error by Reordering Channels for Diffusion Models
https://arxiv.org/abs/2605.09503

>DeltaRubric: Generative Multimodal Reward Modeling via Joint Planning and Verification
https://arxiv.org/abs/2605.09269

>Discrete Langevin-Inspired Posterior Sampling
https://arxiv.org/abs/2605.09302

>Offline Preference Optimization for Rectified Flow with Noise-Tracked Pairs
https://arxiv.org/abs/2605.09433

>ExtraVAR: Stage-Aware RoPE Remapping for Resolution Extrapolation in Visual Autoregressive Models
https://arxiv.org/abs/2605.10045

>Attention Sinks in Diffusion Transformers: A Causal Analysis
https://arxiv.org/abs/2605.09313

>PGID: Progressive Guided Inversion and Denoising for Robust Watermark Detection
https://arxiv.org/abs/2605.09319

>Vocabulary Hijacking in LVLMs: Unveiling Critical Attention Heads by Excluding Inert Tokens to Mitigate Hallucination
https://arxiv.org/abs/2605.10622

>ML-CLIPSim: Multi-Layer CLIP Similarity for Machine-Oriented Image Quality
https://arxiv.org/abs/2605.09479

>LLaVA-CKD: Bottom-Up Cascaded Knowledge Distillation for Vision-Language Models
https://arxiv.org/abs/2605.10641

>Make Each Token Count: Towards Improving Long-Context Performance with KV Cache Eviction
https://arxiv.org/abs/2605.09649

>Overcoming Catastrophic Forgetting in Visual Continual Learning with Reinforcement Fine-Tuning
https://arxiv.org/abs/2605.09640

>The Alpha Blending Hypothesis: Compositing Shortcut in Deepfake Detection
https://arxiv.org/abs/2605.10334

>Perceptual Asymmetry Between Hue Categories: Evidence from Human Color Categorization
https://arxiv.org/abs/2605.09339

>Meow-Omni 1: A Multimodal Large Language Model for Feline Ethology
https://arxiv.org/abs/2605.09152

>Exploring the AI Obedience: Why is Generating a Pure Color Image Harder than CyberPunk?
https://arxiv.org/abs/2603.00166
>>
>>108807777
>>108807777
>>LimeCross: Context-Conditioned Layered Image Editing with Structural Consistency
the what
>>
File: D-OPSD ZIT anime.jpg (1.09 MB, 3909x1630)
1.09 MB JPG
I skimmed through the paper the Z-Image team released recently. As always, take it with a grain of salt due to cherry picking and p-hacking, but there seem to be some interesting new possibilities for finetuning distilled models while preserving the low-step distillation and not getting shit quality.
https://arxiv.org/abs/2605.05204
Also of note is the talk about finetuning ZIT on 25k anime images. We are most likely not getting it. Add it to the pile of MIA stuff like Z-Edit and whatever happened after they got the Noob dataset.
>>
guys why is every comfy SDXL workflow with refiner shit?
I don't want to use refiner
Does anyone here have a simple workflow with VAE?
>>
What's Chroma like nowadays, can you get decent outputs without custom node snakeoil and esoteric jeeted workflows, just prompting?
>>
>>108808044
No. Chroma is always seed lottery.
>>
File: viewing.png (1017 KB, 1160x896)
1017 KB PNG
>>
>>108808124
that's a WINNER
>>
This new video model just dropped, any good?

It claims it's uncensored, I wonder if it is and how much.

https://huggingface.co/SulphurAI/Sulphur-2-base
>>
File: 1771653099041072.png (472 KB, 1285x1134)
472 KB PNG
>>108807440
I see people swear up and down that A.I. is causing the enshittification of the internet, that bot replies are more rampant than ever, and that they CANT go or look anywhere without seeing A.I. slop shoved down their throats. Bot replies here and there I certainly see on Twitter replies. But other than that, all of my timelines: reddit, twitter, YouTube, TikTok, etc, are all basic shit I'd normally get recommended, with no forced A.I. as far as I can see. No specific algorithm settings or recommendation settings on my end to be anti A.I. I'm a local A.I. user, so if anything you'd think my TL would be FULL of it. Even on my personal Twitter account that's specifically for sharing my A.I. side projects and hobby stuff, the timeline is just nothing but A.I. thirst traps. I guess that technically counts, but none of the "DUDE CHECK OUT THIS NEW PAPER DUDE CHECK OUT THIS NEW TURBO QUANT MEME DUDE OPENCLAW OPENCLAW OPENCLAW" that people swear is all over their timelines. What gives? Are people like pic rel just exaggerating the prevalence of it?

https://xcancel.com/i/status/2054124214253174956
>>
>>108807869
what the hell? this looks amazing
>>
File: 1675439012756.webm (1.57 MB, 1280x720)
1.57 MB
1.57 MB WEBM
>>108808164
That's not AI that's real, and many such cases.
>>
>>108808164
>luigi
absolute uh pol word coded
>>
>>108808164
local diffusion?
>>
File: WTF.png (2 KB, 194x40)
2 KB PNG
Guys why is comfyui constantly "updating" its ui? is this a red flag? constant useless updates?
>>
File: inpainted with SD 1.4 .png (1.78 MB, 1536x1024)
1.78 MB PNG
>>108807869
does it work like this?
>>
File: aniem.webm (2.72 MB, 1920x1088)
2.72 MB
2.72 MB WEBM
So what's the best Comfy workflow for LTX image to video with LoRAs?
>>
>nigbo
>>
File: tmpslft5p76.png (799 KB, 768x960)
799 KB PNG
>>
>>108808124
Gem alarm!
>>108808162
Opinions are mixed. I haven't bothered with it, but from what I've heard it is shit for t2i and good for i2i.
>>
>>108808355
Nice, classy gen.
>>
>>108807440
snuck a Jenny in there, huh? nice.
>>
>>108808124
do you need his e-mail address?
>>
>>108808522
The guy kicked the bucket years ago.
>>
Is there any z-image porn model yet?
>>
>>108808540
Technically Zeta but it's broken, raped and fried.
So no.
Maybe Juggernaut guy will add some NSFW in the future iterations.
>>
>>108808570
>Technically Zeta but it's broken, raped and fried.
why can't this gigafaggot simply do a finetune just once
>>
>>108808164
people tend to over-dramatize online, and online people have an ai psychosis currently
>>
>>108808164
fake news is nothing new anon, AI is just another tool
>>
>ernie : 8B
>Hidream : 8B
>Zimage : 6B
what's the best for text between these 3?
>>
File: .png (28 KB, 133x135)
28 KB PNG
>>108808164
>>
File: 21983891.jpg (127 KB, 1728x1584)
127 KB JPG
>>108808661
>
>>
>>108808339
switch to old menu in settings then add to start args
--disable-api-nodes --enable-manager --enable-manager-legacy-ui --front-end-version Comfy-Org/ComfyUI_frontend@1.37.11

fast as fuck no bloat
>>
File: ComfyUI_00018_.png (1.51 MB, 1024x1024)
1.51 MB PNG
>>
File: ComfyUI_00019_.png (1.45 MB, 1024x1024)
1.45 MB PNG
>>
>>108808646
Hidream was supposed to be amazing for text, but from what I've seen they lied about that too, alongside everything else. It's just a worthless shit model.
Z does short text very well. It can't reliably write multiple sentences, but it's reliable enough for shorter text.
Ernie is MAYBE a bit better than Z for text, I have only tested it briefly. But I would say it's a sidegrade for text and overall a downgrade from Z.
So I would go with Z.
>>
File: ComfyUI_00020_.png (1.62 MB, 1024x1024)
1.62 MB PNG
>>
File: ComfyUI_00021_.png (1.38 MB, 1024x1024)
1.38 MB PNG
>>
File: ComfyUI_00022_.png (1.4 MB, 1024x1024)
1.4 MB PNG
>>
>>108808700
thanks anon
>>
>>108808540
have you tried snofs klein?
>>
>>108808700
I was about to ask how good HiDream and Ernie are. So ZIT is slightly better? What about the speed, are they faster than ZIT?
>>
>>108808930
Ernie runs with comparable speed.
Hidream is fast.
But it's too shit to be worth a damn.
>>
File: zImageturbo_00253_.jpg (455 KB, 1376x1920)
455 KB JPG
>>
What was the gimmick with training Z-Image? Just the shift 3?
>>
>>108809026
>not jenny
disappointed
>>
>>108808642
tool to make even worse faek news
>>
>>108808642
Is this real news?
>>
>>108808696
Godspeed, anon. They raised millions but can't even bother establishing a proper UI/UX department.
>>
I had snubbed Flux2.Klein because I had been told that it was shit compared to ZIT, but I am trying it now and it's actually very good, and so fast! I will never use Flux1 and Kontext ever again.
>>
is there anything worth running on a 3060? I've been trying to install Wan2GP because the github mentions being ideal for gpu poor, but python hasn't made it easy for me.
>>
>>108809258
It's not bad but there's some fuckery with human body proportions and how loras work
>>
File: zImageturbo_00496_.jpg (839 KB, 1376x1920)
839 KB JPG
>>
>>108807440
is there a box for that toohoo one?
>>
>>108809285
most of them if it's the 12GB 3060
>>
>>108809363
okay good to know I'm not wasting my time, the last time I tried any of this stuff was with sd models on auto1111 but I was kinda disappointed.
>>
>>108809285
Anima is lightweight
>>
Is there a template for generating two images in a sequence? Randomly generated character and consistent art style.
>>
File: zImageturbo_00070_.jpg (616 KB, 1376x1920)
616 KB JPG
>>
>>108809285
Anima
Klein 9b (not base)
Z-Image Turbo
Wan 2.2 (int8)
LTX 2.3 probably also works at int8 if you have decent system memory
>t. fellow 3060
>>
>>108809494
>>108809399
good to know, if I can ever figure out how to get flash attention to compile. I have 96gb ddr5 and a second 3060, do these have a text model that can be split out to the second GPU?
>>
File: 00354-367063815.jpg (331 KB, 1344x2016)
331 KB JPG
>>108809450
cute
>>
>>108809514
google for your os/cuda/torch combo precompiled flash wheels
>>
>>108809450
can you share prompt for this lovely beauty please...
>>
>>108809514
>how to get flash attention to compile
It will take forever. (Really, hours.) Find a prebuilt wheel for your os + torch + python + cuda version combo (Don't use google.)
If you are using Comfy, the default pytorch attention works fine usually.
> a text model that can be split out to the second GPU?
Yes, you can keep the text encoder on one gpu and the unet/vae on the other.
With 96gb of ddr5 memory though, this won't provide more than a few seconds of speedup for most models.
>>
File: zImageturbo_00089_.jpg (654 KB, 1376x1920)
654 KB JPG
>>108809536
>>108809551
>Photograph of a young woman with long, straight, dark blue hair and blue eyes, wearing a white and gold bikini top that barely covers her large breasts. She has a white flower hair accessory on the right side of her head. Her hands are positioned near her chest, with her index fingers touching her lips. She is wearing a small white choker and a white wristband on her left wrist. Her skin is fair, and she has a small mole on her right side, below her breast. She sits with her knees slightly apart, revealing white side-tie bikini bottoms. Her expression is neutral, with a slight, closed-mouth smile. The background features a dark, patterned wall with a yellow moon and abstract designs.
>>
File: zImageturbo_00105_.jpg (630 KB, 1376x1920)
630 KB JPG
>>
File: ComfyUI_00053_.png (2.09 MB, 1000x1504)
2.09 MB PNG
>>108809594
derpa herpa
>>
File: zImageturbo_00112_.jpg (608 KB, 1376x1920)
608 KB JPG
>>
File: 1758457743827838.jpg (12 KB, 234x302)
12 KB JPG
>>108808044
I would ask their discord since nobody posts here anymore. Chroma is not at all intuitive to prompt for, but I wouldn't bother with tiny speedup snake oil for it. The only optional things that helped my outputs were getting T5gner for the encoder and res4lyf samplers.
>>
>>108809780
>T5gner
This felt like snake oil to me. Like almost all te experiments.
>>
after doing another 20 captions by hand I'm ready to surrender.
what auto caption tools do I need?
>>
>>108809938
For natural language start a llama.cpp server with Gemma4/Qwen3.6 (MoE if VRAMlet, 31/27b if you have 24+gb vram), heretic variants if you need NSFW.
Then use a script like this:
import requests
import json
import os
import base64
from pathlib import Path

def encode_image_to_base64(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode('utf-8')

folder_path = "pathtofolder"
url = "http://127.0.0.1:8080/v1/chat/completions"

for filename in os.listdir(folder_path):
    if filename.endswith((".png", ".jpeg", ".jpg")):
        base64_image = encode_image_to_base64(os.path.join(folder_path, filename))
        data_url = f"data:image/jpeg;base64,{base64_image}"

        headers = {
            "Content-Type": "application/json"
        }
        payload = {
            "messages": [
                {
                    "role": "system",
                    "content": "WRITE A CAPTIONING SYS PROMPT HERE"
                },
                {
                    "role": "user",
                    "content": [
                        {
                            "type": "text",
                            "text": "Provide a caption for this image based on the guidelines."
                        },
                        {
                            "type": "image_url",
                            "image_url": {
                                "url": data_url
                            }
                        }
                    ]
                }
            ],
            "temperature": 0.65,
            "min_p": 0.01,
            "reasoning": {
                "effort": "medium"
            }
        }

        response = requests.post(url, headers=headers, json=payload)
        response = response.json()
        with open(os.path.join(folder_path, filename.rpartition('.')[0] + ".txt"), "w") as output:
            output.write(response["choices"][0]["message"]["content"])
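Two bits of that script are easy to sanity-check offline with nothing but the stdlib before pointing it at a real server: the data-URL shape the endpoint expects, and the caption-filename derivation. A minimal sketch (the helper name is my own, not from the script):

```python
import base64

def to_data_url(raw, mime="image/jpeg"):
    # Same shape the captioning script builds: data:<mime>;base64,<payload>
    return f"data:{mime};base64," + base64.b64encode(raw).decode("utf-8")

# JPEG magic bytes stand in for a real image file
sample = b"\xff\xd8\xff\xe0"
url = to_data_url(sample)
decoded = base64.b64decode(url.split(",", 1)[1])  # round-trips to the original bytes

# Caption path derivation, same rpartition trick as the script:
# only the last extension gets swapped, dots earlier in the name survive
caption_name = "foo.bar.png".rpartition(".")[0] + ".txt"
```

If the captions come back empty it's usually the server side (mmproj not loaded), not the payload shape.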
>>
>>108809977
wow, okay thanks. I haven't really used llms locally at all, but this seems cool, being able to run scripts like this.
>>
>>108809977
what about deepseek? is it good? or do I specifically need Qwen/Gemma because they can "see" images?
>>
>>108809977
how are those heretic variants compared to something like joycaption when it comes to tagging nsfw images?
>>
File: zImageturbo_00150_.jpg (845 KB, 1376x1920)
845 KB JPG
>>108809938
gemini is really good, but it's not local
>>
File: zImageturbo_00152_.jpg (650 KB, 1376x1920)
650 KB JPG
>>
>>108809780
>since nobody posts here anymore.
Source?
>>
>>108810079
Joycaption is really outdated and compared to even the 8-9B newer qwens it's fucking blind.
>>108810053
Yeah you need vision models. Don't forget to grab the mmproj file from the gguf repo as well and load it alongside the main model.
>>
>ok AI draw me the character Tomoko at age 30
lets see how this turns out
>>
File: app.png (214 KB, 1362x703)
214 KB PNG
when are we going to admit that nodes was a fucking mistake and A1111 were right
>>
>>108810267
we knew it all along, it's just that the Russian hacker known as Voldemort (or A1111) stopped developing his interface.
>>
>>108810327
forge neo continues where a1111 left off
>>
>>108810267
ani was trying to free us from nodes but all you guys did was shit on him endlessly. now he's killed himself and the only option we have is being shoveled into api nodes by comfy the tyrant
>>
>>108810345
>now he's killed himself
I wish, don't excite me like that.
>>
>>108810341
yes I know and I am grateful for it. But I miss Voldy though. He was a real pioneer in diffusion arts and a true visionary.
>>
>>108810345
>now he's killed himself
not true otherwise i'd be dumping trash on his grave everyday
>>
>>108810345
Gotta be really weak to get bent out of shape over a little 4chanz bantz. Shame.
>>
does anyone have some Video Workflows that work on MPS
>>
>>108810345
AniDiffusion could be amazing for all I know, Ani is what killed it with his(her) insane behavior.
>>
>>108810441
what insane behavior is that exactly?
>>
>>108810441
uh oh be careful hes gunna start sperging out i suggest you do not reply
>>
https://x.com/viccpoes/status/2054278218719637925
Come on, local cucks, beg on your knees, like the filthy roaches you are!
>>
I wanted to install AniStudio but I couldn't figure out how. The documentation pages were empty.
>>
>>108810489
please be SaaS! please be SaaS!
>>
>>108810053
Deepseek is fucking big; unless you have multiple 3090s or whatever, it's outside your reach.
Pretty much any major LLM release nowadays comes with vision. (Don't forget to grab mmproj)
You run llama server with something like this, minimal working example, it can be beneficial to tinker with stuff like image tokens in some models:
./llama-server -m /path/to/gguf/quant/of/model --mmproj /path/to/its/mmproj -c 16384 --n-gpu-layers -1 --parallel 1
Build instructions here:
https://github.com/ggml-org/llama.cpp/blob/master/docs/build.md
>>
>free us from nodes
You can easily vibe code a frontend for the API calls to comfy backend negating the need for noodles desu
>>
>>108810526
thanks.
I kinda thought deepseek was withIN reach since I've been low level aware that the folks over at /lmg/ keep raving about deepseek and even made a mascot for it. Do all those folks have multi-GPU setups?
Why don't we have multi-GPU setups?
>>
>>108810489
>our first foundational model
100% it's literally just a Flux2 finetune otherwise they wouldn't have called it Krea '2' lmfaoo
>>
>>108810530
maybe what we need is comfy nodes that look just like A1111 tabs. I mean like a single huge ass node that has everything in it.
>>
we can call it A1111 (pbuh) Big Ass nodes
>>
>>108810566
>Do all those folks have multi-GPU setups?
That or they rent compute or use API.
>Why don't we have multi-GPU setups?
It offers rather limited help for /ldg/ and it costs money.
>>
can't drag and drop files since the comfy update anymore, what gives?
>>
>>108810672
i know. i know. it's just denial.
>>
File: 00214-1765364476.png (1.19 MB, 1656x960)
1.19 MB PNG
>>
anyone knows if hidream team ever said they would share their 200B model?
I can rent compute to run it, so I'd really like to try it to see if it's the model actually used to make their impossible to replicate images on their page
>>
>>108810093
it's also prone to refusals or using euphemisms
>>
>>108810526
>Deepseek is fucking big, unless you have multiple 3090s or whatever that's outside your reach.
DS isn't multimodal yet.
>>
>>108809890
It's more up to date than base t5, and it varies outputs pretty dramatically. Since Chroma is so poorly documented there's no actual data behind any of this but I can find some old tests I did last year.
>>
File: 00270-120589266.png (2.6 MB, 1656x960)
2.6 MB PNG
>>
>>108811011
Ah forgot about that.
I actually attempted to test it for captioning a few weeks ago and was really surprised they didn't include the functionality.
It's cheaper than Gemini Flash, and technically "local", so possibly a competitive captioning alternative if what they release ends being a performant VLM.
>>108811028
How many seeds have you tried, ballpark figure?
I admittedly only tried a dozen maybe.
t5 is obsolete at this point, and any decent future model is unlikely to use it, so I guess it doesn't matter too much though.
>>
>>108811100
>I actually attempted to test it for captioning a few weeks ago and was really surprised they didn't include the functionality.
I think they will get there, but they seem to be a small group specialized in text to text.

>It's cheaper than Gemini Flash, and technically "local", so possibly a competitive captioning alternative if what they release ends being a performant VLM.
If you use gemma locally, it has image to text for free (with a good gpu), and with a proper jailbreak it's able to describe a lot including nsfw, though it's not perfect at that, we are one finetune away from that.
So, long term, maybe we can finally have a no nonsense proper description.
>>
File: 8216.png (3.46 MB, 2551x1951)
3.46 MB PNG
im making a thingy because comfyui sucks ass
>>
>>108811186
very cool.
btw why not fork a version from not more than a few months ago - all custom nodes come as a bonus then.
>>
>>108811227
because the code sucks too, and i need to be able to move models to and from vram between generations so i have enough vram for the llm. the nodes are very easy to reimplement anyway and i have complete flexibility over the workflow without having to worry about the bullshit implementation details
>>
>>108811186
how do you read this
>>
>>108811274
by looking at the screen
>>
>>108808540 https://civitai.red/models/2221503/zimage-turbo-nsfw-by-stable-yogi?modelVersionId=2668773
>>
My boys, has anyone ever figured out how to create consistent backgrounds with Z-Image, or any model for that matter? I am thinking of creating a LoRA for each specific room, but I am still unsure if it'll learn small details. Fucked around with ControlNet, but it's ultimately for ZIT, which is great but weaker than a base model.
>>
15.75 days of nofap.
>>
>>108811375
6.7 days of nofap.
>>
Rationalizing stage. Maybe fapping will make me more lucky. I don't want to be unlucky!
>>
>>108811379
You're nearing your first drop.
>>
>>108811335
Nothing new or better since February?
>>
File: ltx-omninft.png (1.6 MB, 2035x1552)
1.6 MB PNG
New LTX-2 RL LoRA dropped.
>https://zghhui.github.io/OmniNFT/
>https://huggingface.co/zghhui/OmniNFT
>>
>>108811186
nice Punished Miku.
Post MOAR.

post MOAR Punished Miku.

also
>s-expressions
Based Beyond Belief. This is the workflow the elders envisoned decades ago. Congratulations you are healing our timestream, God be With You.
>>
>>108811557
>I know what Americans like
>basketball
>let's generate imaginary basketball!
xi tried, at least.
>>
>>108811557
I hope this can be adapted to LTX2.3.
Though I'm not even sure what it actually changes.
>>
>>108811583
Basketball is actually quite popular in china, daft cunt.
>>
>>108811637
Do they like imaginary games?
>>
File: slop-tune.png (1.8 MB, 1501x963)
1.8 MB PNG
>>108811621
>https://zghhui.github.io/OmniNFT/
Look at the examples. It improves text alignment and AV sync which is great.
Sad part is that they preference optimized it using HPSv3 as a reward model, so it will probably slopify the gens unless you turn the weight down a bit.
>>
File: mc1 - Copy.png (2.13 MB, 1536x1536)
2.13 MB PNG
>>108811100
I left it running for a day when I was out, running each seed in t5, t5gner, v48 and HD respectively. Unfortunately I'm retarded and I only have a handful available. Presumably I have them on another hard drive somewhere but I'll post this one for now. This was originally done to show how fucked up v50 was compared to v48 so it's not the best example.
>>
>>108811698
Oh I see, thanks anon.
>>
>>108809728
try zimage
>>
File: ZIT-SWIFT-detail_00007_.png (1.54 MB, 1024x1024)
1.54 MB PNG
>>108808714
fun one to play around with thanks, z image turbo has been a blast
>>
even this foid >>108809728 is better than generic zit slop >>108811843
i'm just tired of looking at the same fucking ai image
>>
File deleted.
So, seeing that the Nunchaku project is dead, and this PR has been left unattended for over two months: https://github.com/nunchaku-ai/nunchaku/pull/928 , I decided to try something. After burning an embarrassing amount of tokens, I managed to slop in ComfyUI support for Chroma nunchaku.
The quality is... I will be blunt, ass though.
On the flip side, it runs twice as fast as q8? (Honestly expected more...)
Is there any interest? Do you want me to share this?
I honestly don't know for sure if it's the model, my slop code, or both. (I have zero faith in my slop, but the official ZIT for Nunchaku also had ass quality compared to Flux, so I am not sure what exactly the cause is.)
Very low effort comparison image with uneven resolutions attached.
>>
File: chroma 8bit vs nunchaku.jpg (1.43 MB, 5248x1344)
1.43 MB JPG
>>108811965
I even manage to fuck up the filename.
>>
damn they've already deleted playtime_ai's alt account on civitai. Why do they have such a hate boner for that guy?
>>
>>108812003
I've never heard of a site where, if you get banned, you can just make an alt and everything's fine. (Except for MMOs like WoW where you have to pay them more money for your second account because they have a financial incentive to let it slide.)
>>
>>108811978
I'd be curious to see how much it degrades complex posing or text. A black migu perhaps.
>The quality is... I will be blunt, ass though.
No free lunch.
>>
>>108811965
> Chroma
> 2026
>>
Alright fuck it I posted.
https://huggingface.co/tonera/Chroma1-HD-SVDQ/discussions/4
>>108812060
What was the full black Migu prompt?
>>108812065
I mean I was waiting for Chroma nunchaku in late 2025.
This closes a chapter if nothing else.
>>
File: 240635558320286.png (2.76 MB, 1728x960)
2.76 MB PNG
>>
if you ever try to do realism on anima you'll get how fucked up cfg really is.
is there a good way to normalize these cfg color blowups that force the model into flat-color 2d clip art?
>>
>>108812145
>tries to do realism on anime model
>it's cfg fault
>>
>>108812145
>if you ever try to do realism on anima
Anima author explicitly says the model can't do realism. There was little realism in the dataset.
>>
>>108812145
thresholding, pp samplers, etc
>>
File: Chroma_00014_.png (1.42 MB, 832x1216)
1.42 MB PNG
>>
>>108812109
>A drawing of hatsune miku with dreadlocks and light black skin skateboarding in New York at night. She is holding a smartphone on her left hand and a multicolored ball on her right hand, she has a red t-shirt with text on it that says: "MIGU". A pikachu can be seen on the top of her head. Her speech bubble says "Hard to keep me in style huh?", neons, 50's comic book style
>>
File: 415178908560391.png (1.03 MB, 1074x1160)
1.03 MB PNG
>>
>>108808164
Google took away dislike counts because government propaganda was getting ratio'd like crazy all around the world during covid time. I assume the Biden administration pressured them as they did to Twitter and Facebook and everyone else, but it was happening in every country. They had a tight grip on the mainstream online narrative but it was all falling apart when people could see that LITERALLY 99 FUCKING PERCENT of people disagree with what they are saying.
>>
File: 956020769698352.png (1.36 MB, 1472x1152)
1.36 MB PNG
>>
>>108812129
fukken saved.
>>
File: debo_cc_anima_00013_.png (3.54 MB, 1920x1047)
3.54 MB PNG
>>
File: 00150-2617378839.png (1.3 MB, 896x1152)
1.3 MB PNG
>>108812169
>>108812168
except it does alright if you use shift scheduling, cfg normalizing snake oil of your choice, and negpip to remove the score_9 bias
>>
>>108812279
generate them face first, I wanna see face details
>>
>>108812279
Everything ends up looking like a low quality video screenshot, probably because the original base model was trained on nothing but videos from what I could glean.
>>
File: Chroma_00015_.png (1.87 MB, 1024x1024)
1.87 MB PNG
>>108812196
Yeah fucking GG I guess.
This was the first attempt no cherry picking. (There are seeds where she is properly skateboarding and with proper item/speech bubble placement, but it never looks like Miku or 50s comic book.)
Honestly feels like a proper int8 checkpoint will mog this while being like 30% slower tops. (Again, I expected a 3-4x speed-up instead of 2x.)
A hypothetical Rank 128 lora might boost quality to a proper skateboarding black Miku but probably will still be largely pointless without a significant speed bump.
>>
>>108812279
>shift scheduling
?
>negpip
?
>>
>>108812221
Catbox plz anon?
>>
>>108812279
>shift scheduling
I can't even imagine how different shift curves would affect the output.
>>
>>108812313
Chun-Li, 1girl, standing,
>>
>>108812315
nta but it makes a massive difference.
>>
>>108812315
on anima shift 3 - good composition and prompt response, soft smeary look
shift 2 or lower - abstract composition with broken anatomy, increased details from more noise
>>
>>108812279
>>108812323
What node do you use? https://github.com/SparknightLLC/ComfyUI-ModelSamplingSD3Advanced
>>108812324
Right. But per timestep.
>>
>>108812344
that one but shitty vibecoded as modelsamplingauraflow instead
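For anyone confused by what these nodes actually do: a sketch of the flow-matching time shift, assuming it matches ComfyUI's ModelSamplingAuraFlow remapping (just the math, not the actual node code).

```python
# Remap a sampling timestep/sigma in [0, 1]; shift > 1 stretches the
# schedule toward the high-noise end, which is where composition is
# decided -- consistent with "shift 3 = good composition, soft look".
def time_shift(sigma: float, shift: float) -> float:
    return shift * sigma / (1 + (shift - 1) * sigma)

# shift=1 is identity; shift=3 lifts mid-schedule sigmas
print(time_shift(0.5, 1.0))  # 0.5
print(time_shift(0.5, 3.0))  # 0.75
```

Lowering shift below 1 does the opposite, spending more steps in the low-noise detail region, which tracks with the "more detail, broken anatomy" report above.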
>>
>>108812316
Proof
>>
File: debo_cc_anima_00015_.png (3.44 MB, 1920x1047)
3.44 MB PNG
>>
>>108812375
>Proof
>>108812221
>>
>>108812344
you can run a gen at low denoise to see the difference even that makes.
the low noise is when it gets pushed into cartoon.
>>
File: 938760810749943.png (2.41 MB, 1088x1600)
2.41 MB PNG
>>108812313
Using this lora https://civitai.red/models/2617626/the-ghost-in-the-shell
>masterpiece, best quality, score_7, tgits, chun-li, street fighter, 1girl, arms behind head, blue dress, bracelet, breasts, brown eyes, brown hair, china dress, chinese clothes, double bun, dress, earrings, hair bun, jewelry, large breasts, looking at viewer, multiple views, pantyhose, short hair, spiked bracelet, spikes
>>
yikes!
dead thread
dead hobby
>>
>>108812827
dead site
>>
File: 6346546544685446.gif (1.14 MB, 640x352)
1.14 MB GIF
>>108812827
>>
>>108812477
>posting anime in non anime generals
At least tell me how you would like to be reported, for off topic or for trolling outside of /b/?
>>
>>108813008
you will never be a janny faggot, doing it for free
>>
>>108813008
it's against the law to tell someone you are going to report them
>>
>>108813008
You sound like a whiny faggot anon
>>
File: ComfyUI_24493.png (3.79 MB, 1500x2000)
3.79 MB PNG
>>108812827
>>108812914
We'll all be dead soon enough.
>>
so hidream was a scam?
>>
>>108813134
Was a nothing burger like any post Klein model
>>
>>108813161
is klein still the best editing model?
>>
>>108813245
people say qwen is
>>
>>108813245
what makes klein so good huh
huh??
>>
>>108811965
>Nunchaku
there were tests here showing that while it looked better at the same quantization (various flavours of q4/fp4), it looked really bad compared to the model at q8+
>>
File: Juggernaut_Z_V1_00003_.jpg (498 KB, 1344x1728)
498 KB JPG
>>
>https://github.com/AMAP-ML/DreamX-World
>trained with a scalable data engine on Unreal Engine data, gameplay footage, and real-world videos, combined with camera estimation and strict data filtering to learn realistic dynamics and interactions
Can't wait to have interactions
>>
>>108813441
Even the biggest models google showcased had many cohesion/attention issues, so I wouldn't hold my breath on a 5B model being anything other than a 5min fun gimmick.
>>
>>108811335
>Get the GGUF versions Q4 (3.3 GB), Q5 (4.1 GB), Fp16 and ZIT Pro Workflows from my Patreon
how do I pirate these?
>>
File: Juggernaut_Z_V1_00017_.jpg (383 KB, 1344x1728)
383 KB JPG
>>
>>108813500
>he cant do his own quants
LMAOOOO
>>
>needing quants
I'm sorry
>>
jokes aside, what's the use case for a 3.3gb zit model? running it alongside a larger llm?
>>
File: ComfyUI_temp_gpctu_00003_.png (3.94 MB, 1152x1664)
3.94 MB PNG
The hidream is pretty ass...
>>
By default, Anima uses qwen_3_06b_base.safetensors, but if I were to load a larger Qwen model into Forge Neo, would I get even more precise results? I have an RTX 4090 with 24 GB of VRAM, so if a larger Qwen model gets better results I'd like to try something heavier than the 0.6B model.
>>
File: 1121292239719335.png (2.79 MB, 1728x1024)
2.79 MB PNG
>>
>>108813292
how much vram does qwen need?
>>
what's the best alternative for kontext?
>>
>>108813758
I have no idea how the previews they're using look so good compared to whatever I'm able to generate using their model
>>
>>108813887
they have a large model they didnt release
>>
File: Juggernaut_Z_V1_00039_.jpg (381 KB, 1344x1728)
381 KB JPG
>>
>>108813900
if they used the huge model to showcase the small one, it's beyond retarded
>>
>>108813886
Klein 9b
>>
>>108813839
no, it won't work
>>
>>108813902
bobs and vagene status?
>>
wtf is flux schnell good for anyway?
>>
>>108813925
I've only 6 GB VRAM, 32 GB RAM
should I go for gguf instead? 4b?
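Rule of thumb for whether a quant fits: weights take roughly params times bits-per-weight. A back-of-envelope estimator (real GGUF files add metadata and keep some tensors at higher precision, so treat this as a floor):

```python
# Rough on-disk/in-VRAM size of a quantized model in GiB.
# params_b: parameter count in billions; bits_per_weight: e.g. 8 for
# q8_0, ~4.5 for q4_K_M-style quants. Estimate only, not exact.
def quant_gib(params_b: float, bits_per_weight: float) -> float:
    return params_b * 1e9 * bits_per_weight / 8 / 2**30

print(round(quant_gib(9, 8), 1))   # a 9B model at 8 bits: ~8.4 GiB
print(round(quant_gib(4, 4.5), 1))  # a 4B model at ~q4: ~2.1 GiB
```

So on 6 GB a 9b at q8 won't fit in VRAM alone; a 4b q4 leaves room for the activations and text encoder.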
>>
>>108814009
probably

>>108813990
move on
>>
>>108813942
Is Anima optimized specifically for the 0.6B model or something like that?
>>
>>108814098
you're a retarded faggot
>>
>>108813886
>>108813925
Kontext?
Klein?
Huh.wav
>>
File: forrest.png (3.53 MB, 1248x1824)
3.53 MB PNG
>>
>>108814134
fuck off
>>
>>108813980
I like it, just painfully slow, this is with quick test lora https://litter.catbox.moe/jtsikisvjnc3irfr.jpg
Obviously doesn't do hc porn etc
>>
is civitai search broken?
It keeps loading but nothing shows up
I'm searching for a LoRA, not even porn
>>
>>108814216
>is civitai broken
always
>>
>>108814159
Be kind Anon.
>>108814216
The split to red made it very unstable when I wanted to use it.
>>
>>108814230
no i'm sick of handholding these retards fuck off
>>
Can I just install ComfyUI and it werks out of the box, or is it really complicated?
>>
>>108814237
But if I hold your hand would it make it better?
>>108814240
Basic stuff will work fine when genning but custom nodes can be trouble if you aren't prepared for some mild error searches at times.
>>
>>108814240
install ComfyUI
Install ComfyUI manager
put workflows in your tab
download the shit the manager says "not found"
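The steps above, as a minimal manual install; assumes an NVIDIA card with a working CUDA torch and a recent Python (adjust for your setup, this is the generic path, not gospel):

```shell
# Clone and set up ComfyUI in a venv
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
python -m venv venv && . venv/bin/activate
pip install -r requirements.txt

# Checkpoints go in models/checkpoints, loras in models/loras,
# text encoders in models/text_encoders, VAEs in models/vae

# Start the UI (default port 8188)
python main.py
```

After that, install ComfyUI-Manager into custom_nodes the same way (git clone) and let it resolve whatever a workflow flags as "not found".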
>>
>>108814152
When does the AI learn what pine straw is?
>>
>catbox down for good
End of an era...
>>
>>108814485
any goodbye message?
>>
>>108814201
Poast moar of the model.
>>
>>108814631
"niggers"
>>
>>108814742
we don't say that word here
it's highly illegal
>>
>>108814742
sreggin
>>
workflow says I'm missing
>flux-2-klein-base-9b-fp8.safetensors
but I've
>flux-2-klein-9b.safetensors
do I need fp8 version?
Also
I've
>qwen_3_4b.safetensors
but it asks for
>qwen_3_8b_fp8mixed.safetensors
will it not work?
Thanks
>>
>>108814823
select the models you have installed, click run, and see what happens.
if you downloaded the models and they aren't showing up in the dropdown menu, press r to rescan the folders.
>>
>>108814485
RIPs in piss
>>
>>108814485
How do you know it is down for good?
>>
>>108814823
No, for 9b you need the 8b text encoder.
Switch to a non-base workflow to use your 9b.safetensors.
>>
>>108815342
dogpark.moe will be replacing it confirmed
>>
>>108815349
it doesnt load for me
>>
>>108814823
>>108815346
Base will be quite slow unless you have a beefy GPU btw.
You can just replace the fp8 with your normal (bf16) safetensors on a non-base workflow.
>>
>>108815358
its not up yet
>>
>>108815367
But the fp8 and bf16 are the same size, so why does that matter? Took me the better part of the morning to download this
>>
>>108815358
lemonparty is the backup site
>>
>>108814641
Later, training new stuff for it
>>
>>108815460
Nice. Hope it gets a distill, never liked to fuck with lightning and such loras.
Although more direct comparisons from the devs would have been even better. The stuff showcased on HF is all artsy photography that's probably the default look rather than prompted, and it appears they brought back ZiT's DoF issue, which is one of the notable problems.
>>
Will Illustrious loras work with Anima?
>>
>>108815507
no
>>
>>108815507
probably not
>>
>>108815507
>will lora work with [completely different model that has nothing in common with the other model]
No.
>>
File: ComfyUI_02033_.jpg (1.2 MB, 2400x1020)
1.2 MB JPG
>>
File: ComfyUI_01963_.jpg (1.39 MB, 2400x1013)
1.39 MB JPG
>>
>>108815592
>>108815601
nice
which model/lora is this?
>>
File: ComfyUI_01868_.jpg (1.73 MB, 2400x1013)
1.73 MB JPG
>>
File: ComfyUI_01862_.jpg (1.81 MB, 2400x1013)
1.81 MB JPG
>>
File: ComfyUI_01975_.jpg (1.12 MB, 2400x1013)
1.12 MB JPG
>>
File: ComfyUI_01979_.jpg (1.24 MB, 2048x864)
1.24 MB JPG
>>
File: ComfyUI_01998_.jpg (1.29 MB, 2400x1013)
1.29 MB JPG
>>
File: ComfyUI_02049_.jpg (1.25 MB, 2400x1020)
1.25 MB JPG
>>
File: ComfyUI_02014_.jpg (1.39 MB, 2400x1013)
1.39 MB JPG
>>
File: ComfyUI_02011_.jpg (1.45 MB, 2400x1013)
1.45 MB JPG
>>
File: ComfyUI_01993_.jpg (1.46 MB, 2400x1013)
1.46 MB JPG
>>
>Using bots to spam a dead general
Ebin trololol namefag.
>>
>>108815479
>never liked to fuck with lightning and such loras
They could just make new Z turbo. Zit is so damn good
>>
File: ComfyUI_01838_.jpg (1.64 MB, 2400x1013)
1.64 MB JPG
>>
File: green.png (2.47 MB, 1248x1824)
2.47 MB PNG
What is the current edit model that will do as told?
>>
>dead general
Source?
>>
>>108815700
Klein 9b.
You need lora memes for NSFW though.
>>
>>108815706
It kinda is but I'm glad when it isn't.
>>
File: ComfyUI_temp_kqpac_00004_.png (1.51 MB, 1024x1024)
1.51 MB PNG
>>
cozy breas
>>
>>108815720
>>108813008
>>
>>108815885
it's /co/ actually
>>
File: 1775868044285669.jpg (545 KB, 1040x1520)
545 KB JPG
I just realized that newlines have a subtle effect on the prompt with anima-preview, is that a qwen thing?
>>
The Anima diffusers PR seems to be going smoothly (inb4 jinxing it):
https://github.com/huggingface/diffusers/pull/13732
Expect more support from lora training tools soon.
>>
>>108815896
I believe "ignore newlines" was just a CLIP thing; newlines change the output in pretty much any other text encoder.
>>
File: 1773362598805203.jpg (1.44 MB, 1248x1824)
1.44 MB JPG
>>108815914
Well shit then lmao. Gotta build a custom prompt constructor then.
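A prompt constructor for this is trivial: normalize whitespace before the string ever reaches the encoder so newlines in your notes can't leak into the tokens. Everything here is hypothetical glue code, not any existing ComfyUI node.

```python
import re

# Hypothetical prompt builder: joins fragments with ", " and collapses
# newlines/runs of whitespace to single spaces, so the text encoder
# always sees one canonical single-line string.
def build_prompt(*parts: str) -> str:
    joined = ", ".join(p.strip() for p in parts if p.strip())
    return re.sub(r"\s+", " ", joined)

print(build_prompt("1girl, hatsune miku\n", "skateboarding,\nnight city"))
# 1girl, hatsune miku, skateboarding, night city
```

Wire it in front of the CLIP Text Encode node (or do the same with a simple string-replace node) and multiline prompt boxes stop changing your gens.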
>>
>>108815902
Aww shiet OneTrainer time
>>
>>108815902
Preparing for a heavy influx of slop from braindead jeets using baby trainers
>>
>https://huggingface.co/ResembleAI/Dramabox
Oh damn
>>
Anima VAE is flux2?
>>
>>108816105
qwen image vae
>>
>>108813378
That was my experience as well. It was better than q4 but would slop everything more than even base flux. That's around the time perchance switched to Chroma v30~ and I remember using that for a while.
>>
File: Chroma_0009.jpg (1.1 MB, 1344x1632)
1.1 MB JPG
fucking chroma, turns out the fp8-final model is broken, and I was using it for months, the author is so fucking dumb and all his discord trannies that keep pumping out experiments, how many versions does chroma need to have, its all so confusing
>>
File: chroma-mess.png (243 KB, 1503x1267)
243 KB PNG
>just use chroma bro
>which one?
>>
>>108816161
Silveroxides moment
Try asking and see how mad he gets
>>
>>108816161
DC2K
>>
>>108816161
Mental illness
>>
>>108816161
Easy, just read the model card that explains what stuff does
>>
>>108816135
Could you throw a workflow with the correct model?
>>
>>108816135
it's mostly in-training checkpoints. not that you should be particularly confused about this.
>>
>>108816161
v50 = Chroma HD
v48 = Chroma Base
Radiance = the vaeless meme
2K/DC2K = some training above 1024
Everything else are just schizoversions
>>
>>108816135
It hurts that this woman wouldn't give me the time of day in a realistic situation.
>>
What's the difference between nicegirls and the lenovo version?
>>
File: Chroma_0020.jpg (1.37 MB, 1344x1632)
1.37 MB JPG
>>108816245
kek

>>108816194
bingo

>>108816324
yeah, the problem is the outputs, the final iterations from chroma are bad, he fucked it up at the end

>>108816366
nicegirl is a meme 30-image 1girl instathot dataset and the other uses some blurry, grainy dataset from the 2000s
>>
>>108816366
How are these crap loras so popular?
>>
>>108816539
downloading them is step 2 of every "make an AI influencer to get thousands of dollars a month!!!" scam
>>
>>108811375
>15 days of anal only
is yo butt ok buddy?
>>
File: pine.jpg (183 KB, 819x1024)
183 KB JPG
>>108814152
>>108814481
klein 9b is a little better maybe
>>
Fresh

>>108816650
>>108816650
>>108816650
>>108816650

Fresh
>>
>>108812477
nice one
>>
>>108809213

could be, there was recent news of a Horten-wing-like aircraft flying over israel

the Horten brothers had developed a light pressure high-altitude suit
>>
File: 8226.jpg (1.67 MB, 1752x2560)
1.67 MB JPG
>>108811570
the pipeline grows, haven't decided on a name for the project yet, but it now supports saving intermediate results, model type inference, img2img, txt2vid, img2vid, ipadapters, loras on detailers, streamed intermediate/output emission (there's a CBOR-RPC for other stuff to use it), more flexible automagic vram control (optional maximum vram usage, keep models loaded as long as there's space, unload and retry on OOM, unload to disk if not enough RAM to cache, etc), and uhh bunch of other shit
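The "unload and retry on OOM" policy mentioned there reduces to a small control loop. This is a framework-agnostic sketch of that flow only (exception type and eviction callback injected), not the anon's actual pipeline code:

```python
from typing import Callable, TypeVar

T = TypeVar("T")

# On OOM, evict one cached model at a time and retry the operation;
# re-raise once there is nothing left to evict.
def run_with_oom_retry(
    fn: Callable[[], T],
    unload_one: Callable[[], bool],  # evict one model; False if cache is empty
    oom_exc: type = MemoryError,     # e.g. torch.cuda.OutOfMemoryError in practice
) -> T:
    while True:
        try:
            return fn()
        except oom_exc:
            if not unload_one():
                raise  # nothing left to free, give up
```

With torch you'd also call `torch.cuda.empty_cache()` inside `unload_one` after moving a model off the GPU, and spill to disk if system RAM can't cache the evicted weights either.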
>>
>>108813902
hot who dis


