>>108558647
==GEMMA 4 PSA FOR LE RAM USAGE FINE WHINE==
[tldr;]
For all Gemma:
--cache-ram 0 --swa-checkpoints 0 (or 3 to keep a few checkpoints and cut some reprocessing) --parallel 1
For E2B/E4B also add this:
--override-tensor "per_layer_token_embd\.weight=CPU"
[/tldr;]
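Putting the tl;dr together, a full invocation might look like this (model path and context size are placeholders, not from the post):

```shell
# Sketch of a llama-server launch for Gemma with the PSA flags.
# --cache-ram 0        : disable the RAM prompt cache
# --swa-checkpoints 0  : keep no SWA checkpoints (use 3 to trade some RAM for less reprocessing)
# --parallel 1         : one slot, so only one SWA cache gets allocated
llama-server -m ./gemma-4.gguf -c 8192 \
  --cache-ram 0 --swa-checkpoints 0 --parallel 1
```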
https://github.com/ggml-org/llama.cpp/pull/20087
Because Qwen 3.5's linear attention makes it impossible to avoid prompt reprocessing within the current llama.cpp architecture, the devs decided to just brute-force it: 32 checkpoints, one every 8192 tokens.
This shit also nukes SWA checkpointing, because both features hang off the same flag under different aliases kek. A SWA cache is way larger than Qwen's linear-attention state, so keeping 32 copies of it around is just madness.
https://github.com/ggml-org/llama.cpp/pull/16736
Then there's the unified KV cache refactor. They bumped the default parallel slots to 4 because they assumed it would be "zero cost" for most models (shared pool, why not, right?). But Gemma's SWA cache is massive and can't live in the shared pool, so you're effectively paying 4x the SWA overhead.
They optimized for agentic workloads at the cost of the average single-prompt user.
https://ai.google.dev/gemma/docs/core/model_card_4
Lastly, the extra flag for E2B/E4B is there because the PLE (per-layer embeddings) can be safely offloaded to the CPU without any performance cost. They act like a lookup table, and they're the reason E2B and E4B have an E for Effective: with that flag, E2B and E4B occupy about as much VRAM as plain 2B and 4B models.
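For the E-series specifically, that means the tl;dr flags plus the tensor override (sketch; the model filename is a placeholder):

```shell
# Sketch for Gemma E2B/E4B: same flags as above, plus keeping the
# per-layer token-embedding table in system RAM instead of VRAM.
# The regex matches the PLE tensor name and pins it to the CPU backend.
llama-server -m ./gemma-e4b.gguf -c 8192 \
  --cache-ram 0 --swa-checkpoints 0 --parallel 1 \
  --override-tensor "per_layer_token_embd\.weight=CPU"
```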
Thank you for your attention to this matter. Donald J. Slop.