/g/ - Technology

File: 1751875897536766.jpg (672 KB, 2048x1448)
/lmg/ - a general dedicated to the discussion and development of local language models.

Previous threads: >>108416874 & >>108410115

►News
>(03/17) Rakuten AI 3.0 released: https://global.rakuten.com/corp/news/press/2026/0317_01.html
>(03/16) Mistral Small 4 released: https://mistral.ai/news/mistral-small-4
>(03/11) Nemotron 3 Super released: https://hf.co/nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-BF16

►News Archive: https://rentry.org/lmg-news-archive
►Glossary: https://rentry.org/lmg-glossary
►Links: https://rentry.org/LocalModelsLinks
►Official /lmg/ card: https://files.catbox.moe/cbclyf.png

►Getting Started
https://rentry.org/lmg-lazy-getting-started-guide
https://rentry.org/lmg-build-guides
https://rentry.org/IsolatedLinuxWebService
https://rentry.org/recommended-models
https://rentry.org/samplers
https://rentry.org/MikupadIntroGuide

►Further Learning
https://rentry.org/machine-learning-roadmap
https://rentry.org/llm-training
https://rentry.org/LocalModelsPapers

►Benchmarks
LiveBench: https://livebench.ai
Programming: https://livecodebench.github.io/gso.html
Context Length: https://github.com/adobe-research/NoLiMa
GPUs: https://github.com/XiongjieDai/GPU-Benchmarks-on-LLM-Inference

►Tools
Alpha Calculator: https://desmos.com/calculator/ffngla98yc
GGUF VRAM Calculator: https://hf.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator
Sampler Visualizer: https://artefact2.github.io/llm-sampling
Token Speed Visualizer: https://shir-man.com/tokens-per-second

►Text Gen. UI, Inference Engines
https://github.com/lmg-anon/mikupad
https://github.com/oobabooga/text-generation-webui
https://github.com/LostRuins/koboldcpp
https://github.com/ggerganov/llama.cpp
https://github.com/theroyallab/tabbyAPI
https://github.com/vllm-project/vllm
>>
►Recent Highlights from the Previous Thread: >>108416874

--MSA-4B outperforms GPT-4.1 and Qwen models in long-context benchmarks:
>108418758 >108418791 >108418819 >108418842 >108420734
--Advanced virtual companion setup with home automation:
>108417587 >108417605 >108417650 >108417676 >108417683 >108417727 >108417745 >108417769 >108417811 >108417879
--Mistral CEO proposes revenue-based content levy for AI companies in Europe:
>108417643 >108417668 >108417678 >108417740 >108417747 >108421003
--MistralAI CEO proposes AI content levy in Europe:
>108418980 >108420788 >108420874 >108420907 >108421234 >108421283 >108420826 >108420878 >108420879 >108420951 >108421015 >108421176 >108421248 >108421305 >108421306 >108421482 >108422209
--The End of Coding: Andrej Karpathy on Agents, AutoResearch, and the Loopy Era of AI:
>108422422 >108422476 >108422608 >108422615 >108422643 >108422670 >108422734
--Debating prompt format effectiveness for Literotica finetuning data:
>108417215 >108417294 >108417388 >108417663 >108417731 >108417818 >108417885
--Vulkan llama.cpp performance vs ROCm:
>108421311 >108421377 >108421508 >108421570 >108421613
--Phrase banning vs token banning in KoboldCPP and ik_llama.cpp:
>108421847 >108421854 >108421857 >108421873 >108421884 >108421914 >108421928 >108421965 >108421977 >108422023 >108422035 >108422080 >108422096 >108422118 >108421993
--Sarvam 105B benchmark results:
>108422282 >108422382 >108422388 >108422440 >108422451 >108422497
--Qwen 3.5 abliteration issues and Heretic uncensored alternatives:
>108418501 >108418584 >108418609 >108418621 >108418672 >108418686 >108420443 >108418693
--DDR4 vs DDR5 RAM upgrade considerations:
>108419791 >108419799 >108419904 >108420101 >108420151 >108419888
--Tensor parallelism progress in llama.cpp and fork alternatives:
>108421401 >108421433 >108421476 >108421778
--Miku (free space):


►Recent Highlight Posts from the Previous Thread: >>108417029

Why?: >>102478518
Enable Links: https://rentry.org/lmg-recap-script
>>
>>108423198
Required https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters
>>
>>108423177
Are the techniques that make newer models better being explained publicly?
>>
>>108423255
>newer models better
??? Nemo still mogs anything newer.
>>
Mikulove
>>
Do you guys goon to your local models? is it even possible?
>>
File: 5864469_1774116590516.jpg (312 KB, 2291x1141)
what am i doing wrong? kobold is working but sillytavern can't connect.
what to do?
>>
File: dipsyRawr.png (2.08 MB, 1024x1536)
TMW
>>
>>108423333
undress me >>108423323
>>
>>108423323
Missing v1 or v1/ at the end of the API URL?
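A quick way to rule out a wrong URL is to probe both endpoint shapes and see which one answers. A minimal sketch, assuming koboldcpp's default port 5001 (adjust if yours differs); `probe` is a hypothetical helper, not part of either program:

```python
# Reachability probe for a local koboldcpp server. SillyTavern's
# OpenAI-compatible connection type wants the /v1 suffix, while the
# KoboldCpp-native type takes the bare base URL.
import urllib.request

def probe(url: str, timeout: float = 2.0) -> bool:
    """Return True if the URL answers with HTTP 200, False otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

base = "http://127.0.0.1:5001"                      # koboldcpp's default port
print("native API up:", probe(base + "/api/v1/model"))
print("OpenAI-compatible up:", probe(base + "/v1/models"))
```

If the second probe succeeds, use the URL with the `/v1` suffix in SillyTavern's API URL field.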
>>
https://www.reddit.com/r/LocalLLaMA/comments/1rzyha4/new_ai_policy_by_white_house_us/
> 1. Protecting Children — Require age-assurance measures, parental controls, and safeguards against sexual exploitation and self-harm on AI platforms, while affirming existing child privacy laws apply to AI.
>>
>>108423322
I think that was the first thing a lot of people on /lmg/ did after getting one running
>>
>>108423322
I think the best part about it is when you're new to this and don't know how to prompt or what model you're even running.
The novelty combined with the challenge of making the pretty lady stop saying "What you're doing is highly inappropriate, let's respect each other's boundaries" is great.
And when you figure it all out, wherever your brain's reward center is, it gives you a LOT of funny chemicals.
Then you start seeing how dumb the small models are and how much the big ones like shivering and smirking. The magic is lost, you start giving them scripting and summarization tasks, wondering if your AI rig was ever worth the investment...

So yes, it's possible.
>>
>>108423462
This is why photorealistic ai was a mistake.
>>
>>108423462
>>108423585
I have written a legal disclaimer which states that every character I gen, regardless of the model, is always 21 years of age or older. Signed and stamped by me.
There's nothing they can do to me in this case.
>>
>>108423585
Meanwhile ZiT/ZiB just shit out CP like no tomorrow.
>>
>>108423604
Photorealistic sloppers are in trouble.
>>
Qwen3.5-27b (HauhauCS uncensored version) is extremely good. Unironically rivals GLM-4.6: worse in raw intelligence (though usually good enough), better at not being slopped and formulaic.

Qwen3.5-9b (HauhauCS uncensored version) is absolutely fucking retarded. Worse than Mistral-Nemo or Gemma 3 12b.

What gives?
>bro it's a smaller model of course it's worse
The gap is enormous and makes me wonder if something went wrong somewhere.
>>
>>108423646
>makes me wonder if something went wrong somewhere.
Or the one that worked was a fluke.
>>
>>108423623
You can't ban open source models
>>
>>108423646
Sadly it's ruined by shitty architecture, no context shift, and no antislop sampler.
>>
>>108423646
It's literally shit
>>
Blessed miku thread.
>>
https://goombalab.github.io/blog/2026/mamba3-part1/
>>
>>108423462
>>108423585
This isn't even about generation; it's about making sure ChatGPT doesn't tell kids to rope themselves.
>>
>>108423198
>"Experiments" by a schizo called DavidAU.
Look at his model collection, choose one of the older ones with a really long name, then read the model's card.
Behold the magnificence.
"Expanding" (upscaling) a smaller model into a larger one and then pretraining the shit out of it, essentially using the original model as the base for a whole new model, is legit.
It's just that you need to do proper pretraining with trillions of tokens, not some qlora.
Look at SOLAR 10B from back in the day. It's upscaled from Mistral 7B IIRC.

ah thx anon!
any 27b upscale that is decent? feels like q5km is just on the cusp of being amazing for a single 24gb gpu. I am testing putting vision and tts on a secondary gpu
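The SOLAR-style "depth up-scaling" described above can be sketched in a few lines. This is an illustrative toy, assuming a plain list stands in for transformer blocks; the numbers are the ones reported for SOLAR (two copies of a 32-layer base, drop 8 layers from each, stack to 48), and as the post says, the result is only useful after continued pretraining:

```python
# Toy sketch of depth up-scaling (the SOLAR 10.7B recipe): take two copies
# of an n-layer model, drop m layers from the end of one and the start of
# the other, then stack them into a deeper initialization.
def depth_upscale(layers, m):
    n = len(layers)
    first = layers[:n - m]      # copy 1 without its last m layers
    second = layers[m:]         # copy 2 without its first m layers
    return first + second       # 2 * (n - m) layers total

base = [f"block_{i}" for i in range(32)]    # stand-in for a 32-layer model
up = depth_upscale(base, m=8)
print(len(up))                              # 48 layers, as in SOLAR
```

The middle band of layers ends up duplicated, which is why the new model is coherent enough to serve as a starting point for further pretraining.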
>>
>>108420443
Benchmarks would work better, just way more expensive. For any correct answer, there's a multitude of token sequences that reach it (even more so with reasoning).
>>
File: uncensored_vs_vanilla.png (3 KB, 176x144)
>>108423646
You know why it seems to be retarded?
I can't enable reasoning in the uncensored 9B model. Vanilla 9B behaves much better when reasoning is enabled but now it seems like this uncensored version doesn't have that at all. What in the heck..
I wonder if 27B is the same.
>>
>>108423870
When are they going to figure out that we should just ban kids? Kids make everything unsafe. Damn brats :anger_vein: :anger_vein: :anger_vein:
>>
>>108424028
>we should just ban kids
yes, support your local ID laws, they're for exactly that
>>
>>108424032
I don't mean "ban them from minecraft".
>>
>>108423675
Why does antislop not work with Qwen 3.5?
>>
>>108424032
No, parents are for that. ID laws are for the police state.
>>
>>108424066
hybrid attention makes surgical context modifications impossible; it's all one big block
>>
>>108423462
Seems completely irrelevant since we don't even fucking get models from America anymore
>>
File: 1768914022117126.png (347 KB, 870x516)
>>108424222
>>
With Qwen3.5-27B in Kobold, uploading an image seems to perpetually disable thinking, with the model always emitting an empty thinking block. Same in the last two versions. Is it a model quirk or an inference bug?
>>
>>108423177
My local model collapsed when I trained it on another model
>>
Working on getting an API service set up to host qwen3-coder so I can use it anywhere. Has anyone done this and is willing to provide an example?
>>
>>108424072
Thanks. I was wondering why I was seeing occasional smirks.
>>
>>108424395
I just ran llama.cpp + an ngrok tunnel.
Both locally and on a kaggle instance.
>>
>>108424342
Did she really say this?
>>
>>108424357
After more experimentation, it must be a bug in Kobold. Having uploaded an image wrecks thinking even if you start a new session. The program must be restarted to fix it.
>>
>>108424481
Why it stands out

- It does not happen often
- But when it does, it is very visible because the user already gave:
- the problem
- the severity
- the priority shift
- the go-ahead
>>
>>108424395
For when you're at home, just setting your ST or other frontend's host to 0.0.0.0 will let you access it anywhere in the house by going to your PC's IP and port. If you don't know what any of this means just ask your bot.

For doing it across the internet, I can recommend Tailscale Funnel. It's easier to set up than anything else I tried while still being secure; I was up and running in a few minutes. The only issue is that on the free tier it can be choppy at times (the text streaming isn't smooth). I can live with that, but maybe you can't. Also, the free tier only lets you funnel one port at a time, which is fine if you only need one API anyway.

To set up Tailscale, just go to their website and follow the instructions. Then run
tailscale funnel 8029
or whatever port number you have your API on. This is what I did for Linux at least, idk what it's like on Windows.
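The 0.0.0.0 point above is just socket binding: 127.0.0.1 binds only the loopback interface, while 0.0.0.0 binds all interfaces, which is what makes a frontend reachable from other machines on your LAN via your PC's IP. A minimal sketch (port 0 asks the OS for any free port, so it's safe to run); `bound_host` is a hypothetical helper for illustration:

```python
# Show the difference between binding loopback-only and all interfaces.
import socket

def bound_host(host: str) -> str:
    """Bind a throwaway TCP socket and report the address it landed on."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, 0))               # port 0 -> OS picks a free port
    addr = s.getsockname()[0]
    s.close()
    return addr

print(bound_host("127.0.0.1"))      # loopback only: invisible to the LAN
print(bound_host("0.0.0.0"))        # all interfaces: reachable via your LAN IP
```

This is also why exposing 0.0.0.0 directly to the internet is a bad idea, and why the post reaches for a tunnel like Funnel instead of port forwarding.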
>>
>>108424535
>routing your shit in externally managed services
kill yourself
>>
>>108424535
>moments before anon's machine was hacked
>>
File: 1758568068811419.png (50 KB, 1202x313)
It's OVER
>>
>>108424603
I read "nemo says".
>>
>>108424603
accelerate a


