/g/ - Technology
File: 1751276140253030.jpg (782 KB, 2105x2963)
/lmg/ - a general dedicated to the discussion and development of local language models.

Previous threads: >>107104115 & >>107095114

►News
>(11/01) LongCat-Flash-Omni 560B-A27B released: https://hf.co/meituan-longcat/LongCat-Flash-Omni
>(10/31) Emu3.5: Native Multimodal Models are World Learners: https://github.com/baaivision/Emu3.5
>(10/30) Qwen3-VL support merged: https://github.com/ggml-org/llama.cpp/pull/16780
>(10/30) Kimi-Linear-48B-A3B released with hybrid linear attention: https://hf.co/moonshotai/Kimi-Linear-48B-A3B-Instruct
>(10/28) Brumby-14B-Base released with power retention layers: https://manifestai.com/articles/release-brumby-14b

►News Archive: https://rentry.org/lmg-news-archive
►Glossary: https://rentry.org/lmg-glossary
►Links: https://rentry.org/LocalModelsLinks
►Official /lmg/ card: https://files.catbox.moe/cbclyf.png

►Getting Started
https://rentry.org/lmg-lazy-getting-started-guide
https://rentry.org/lmg-build-guides
https://rentry.org/IsolatedLinuxWebService
https://rentry.org/recommended-models
https://rentry.org/samplers

►Further Learning
https://rentry.org/machine-learning-roadmap
https://rentry.org/llm-training
https://rentry.org/LocalModelsPapers

►Benchmarks
LiveBench: https://livebench.ai
Programming: https://livecodebench.github.io/leaderboard.html
Code Editing: https://aider.chat/docs/leaderboards
Context Length: https://github.com/adobe-research/NoLiMa
GPUs: https://github.com/XiongjieDai/GPU-Benchmarks-on-LLM-Inference

►Tools
Alpha Calculator: https://desmos.com/calculator/ffngla98yc
GGUF VRAM Calculator: https://hf.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator
Sampler Visualizer: https://artefact2.github.io/llm-sampling

►Text Gen. UI, Inference Engines
https://github.com/lmg-anon/mikupad
https://github.com/oobabooga/text-generation-webui
https://github.com/LostRuins/koboldcpp
https://github.com/ggerganov/llama.cpp
https://github.com/theroyallab/tabbyAPI
https://github.com/vllm-project/vllm
>>
File: threadrecap2.png (506 KB, 1024x1024)
►Recent Highlights from the Previous Thread: >>107104115

--IndQA benchmark and EU multilingual LLM evaluation discussions:
>107104680 >107104733 >107107367 >107107455 >107107533 >107107631
--Finetuning DeepSeek 671B with 80GB VRAM with catastrophic overtraining and context length challenges:
>107105625 >107105860 >107105896 >107106164 >107106215 >107106275 >107106297 >107106332 >107106416 >107106433 >107106446 >107106502 >107106351 >107106466 >107106181 >107105710 >107105737 >107105769 >107105765 >107105792
--RTX 6000 Workstation Edition vs Max-Q: Performance, power, and safety tradeoffs:
>107107561 >107107669 >107107690 >107107807 >107107853 >107107866 >107107837 >107107926 >107107938 >107107946
--Fedora 43 compilation issues for llama.cpp due to glibc/CUDA incompatibilities:
>107110453 >107110623 >107110723 >107110957 >107110964 >107110991 >107111240 >107111261 >107111609 >107111643 >107111712 >107111726
--Windows vs Linux CUDA/llama.cpp setup challenges:
>107110661 >107110852 >107110953 >107111011
--French LLM leaderboard criticized for flawed rankings and perceived bias:
>107107537 >107107559 >107107574 >107107562 >107107617 >107107572
--Quantization benchmarking and model performance tradeoffs in practice:
>107109145 >107109251 >107109456 >107109345 >107109466 >107109353
--Rising RAM prices linked to AI demand and HBM chip production shifts:
>107105971 >107105987 >107105997 >107106030 >107106079 >107106178 >107106242 >107106246 >107106305 >107106317 >107106488 >107106496 >107107544 >107112114
--Model comparison in D&D 3.5e one-shot roleplay scenarios:
>107112449 >107112461 >107112747 >107112761
--Critique of Meta's Agents Rule of Two security model as inconsistent risk assessment:
>107105204
--AI-driven consumer DRAM shortages:
>107106504
--Miku (free space):
>107104379 >107105550 >107106025 >107109466 >107110129

►Recent Highlight Posts from the Previous Thread: >>107104116

Why?: >>102478518
Enable Links: https://rentry.org/lmg-recap-script
>>
>>107113095
Thank you Recap Miku
>>
where's glm 4.6 air fuckers
>>
>>107113348
2 more hours
>>
>>107113348
I'm more interested in the llama.cpp MTP PR.
>>
>>107113391
vibe coding status?
>>
What is the best model I can run nowadays for programming / tech related shit? t. 12GB vramlet, 64GB RAM
>>
>>107113464
GPT-OSS-120B
>>
>>107113548
yeah ok bro
>>
I need/want a sophisticated note-taking solution that keeps reminding me of shit that I have to do - powered by a language model
what would be a privacy-safe way to do this?
>>
>>107113567
Is he wrong?
I know that the model is shit for ERP, but it should at least be good for assistant type tasks and coding right?
>>
>>107113575
>I need/want a sophisticated note taking solution
You need a note book.
>>
>>107113581
you can't run a 120B model on 12GB vram and 64GB ram lol
>>
>>107113575
Vibe code your own.
It's not that complicated a project.
I'd use Claude 4.5 via lmarena to plan the high level implementation and some guiding code snippets and use a local model as an agent to actually write the final code.

>>107113590
Of course you can. Quantized, sure, but still.
>>
>>107113590
Your NVMe SSD?
>>
File: toss120b.png (132 KB, 1566x556)
>>107113590
You can.
>>
>>107113610
Oh yeah, they even have their own 4ish bpw quantization scheme.
>>
File: filen.png (17 KB, 492x148)
>>107113348
>>
>>107113610
>>107113604
Why not run the 20b model? I get like 13 tokens per second with the 20b model, won't 120b be slow as shit even if quantized?
>>
>>107113575
Just use a calendar or todo application. You're as stupid as an LLM if you think it's a good idea to manage your agenda by having one of them guess what belongs on there and when.
>>
>>107113666
>>107113610
>>107113604
also is it reasonably smarter than the 20b version? It just sounds weird how you can make a 120b model able to run on 12GB vram without completely lobotomizing it
>>
>>107113348
hopefully, never
>>
>>107113666
I never compared them, satan. But it should absolutely be better simply by virtue of having more parameters to hold more knowledge, and having more activated params during inference.

>>107113676
Yeah. Quantization feels almost like magic, but it's just maths.
Granted, it's not going to be as good/capable as the full thing, but it should be more than usable.
To be clear, I don't know if it's any good, when I asked if anon was wrong (>>107113581) I was legitimately curious how it performs compared to, say, an equivalent Qwen model or whatever.
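To make the "just maths" bit concrete, here's a toy absmax 4-bit round-trip (nothing like llama.cpp's actual block formats beyond the general idea of per-block scales; the numbers are made up for illustration):
[code]
import numpy as np

def quantize_int4(w, block=32):
    # one float scale per block of 32 weights; int4 range is -8..7
    w = w.reshape(-1, block)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0 + 1e-12
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q, scale):
    return (q.astype(np.float32) * scale).reshape(-1)

w = np.random.randn(1024).astype(np.float32)
q, s = quantize_int4(w)
print("mean abs error:", np.abs(dequantize_int4(q, s) - w).mean())
[/code]
4 bits per weight plus a scale per block lands around 4.5 bpw, which is how a 120B's weights end up fitting in ~65GB. The error is small but nonzero and it compounds through the layers, which is the "lobotomy" people argue about.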
>>
>>107113666
>Why not run the 20b model?
It'd be even dumber.
>wont 120b be slow as shit even if quantized?
Yes.
I don't care for that model, and I don't use it. I'm just saying that you can.

>>107113676
Then try the 20b. Come back with your assessment. Both models were released with the same 4bit quant.
There's also a huge variety of qwens for you to try. Make your own assessment.
>>
qwen coder is better and can be used for FIM as well.
>>
>>107113725
Which coding client uses FIM instead of just having the model produce diffs and using git to apply these diffs?
>>
>>107113739
https://github.com/ggml-org/llama.vim
https://github.com/ggml-org/llama.vscode
>>
>>107113093
Your special interest is interesting
>>
how autistic a person has to be to obsess over the same fictional character for years and feel compelled to share their obsession with the rest of the world
>>
>>107113796
>how autistic a person has to be to obsess over the same fictional character for years and feel compelled to share their obsession with the rest of the world
>>
qwen3-coder 30b is the goat for coding
NO other model comes close for people with less than 20gb VRAM
>>
>>107113739
https://github.com/lmg-anon/mikupad
>>
>>107113796
but enough about petra
>>
>>107113739
Continue.dev uses FIM. You can use it with Qwen models for auto-complete. The latency with local models is annoying though.
>>
>>107113725
120B > 30B
>>
>>107113748
>>107113812
Alright, that's actually really dope.

>>107113840
>Continue
I've yet to fuck with Continue.
Maybe I should.
>>
>>107113811
>for people with less than 20gb VRAM
I wouldn't call them "people", but ok
>>
>>107113739
VSCode Copilot allows you to use your own model endpoints
>>
>>107113739
Cursor works well with ollama. My company PC uses that by default.
>>
>>107113796
very true, almost as weird as obsessively trying to kill a 4chan general for years
>>
File: 1747976091623646.jpg (21 KB, 374x374)
>accidentally renewed novelai
>>
>>107113811
correct that Qwen is an essential local model.
>>
>>107113899
Can I use local models with the Cursor app without paying them? My two-week free trial said I hit the limit after like four hours of messing around. They're shady so I don't want to give them money but the tool was decent.
>>
>>107113459
I could but I won't
>>
>>107114148
>subbing to novelai in the first place
lol
>>
>>107114192
roo cline is good too
>>
>>107114091
>he thinks every person who doesn't like him is samefag
who's going to tell him
>>
>>107114207
For some reason, I get much better results out of normal cline than roo.
Which is wild, cline's prompts are such a bloated mess.
>>
>>107113811
You won't be able to fit enough context on a consumer GPU for it to be useful

Start offloading to ram and it's slower than just doing the work yourself

You do know how to code right?
>>
>>107114293
imagine listening to tr00ns like this guy
>>
>>107114273
Roo allows you to override the prompts.
>>
>>107113666
120b is the only decent coding model at <=120b. You won't have a lot of space for context though if you only have 12 gb vram
>>
File: parrot.png (750 KB, 678x453)
>>107113348
glm 4.6 air?
>>
if we had a way to quant models at sub-1bit level we wouldn't need glm 4.6 air anymore.
>>
>>107114331
You can override the prompts on cline too
>>
>>107114408
fack you ungratefuls you get free and complains like idiot
>>
>>107114496
Maybe we should get a way to quant to sub-8bit without crazy brain damage first.
>>
File: thick-billedparrot7918-1.jpg (226 KB, 1169x1319)
>>107114519
Get free and complains?
>>
I take back all the nasty things I said about gpt-oss:20b. It's actually pretty nice to use with Ollama/Zed
>>
>>107114699
We need more bait. Unleash them all!
>>
File: 499683473.jpg (535 KB, 1920x1200)
>>107114519
fack you?
>>
llama 4.1 status?
>>
>>107114713
I'm being serious, it's actually decent in cases where you aren't able to use cloud models.
>>
>>107114748
ok
>>
>>107114748
If you can run ASS 20B you can most likely run Qwen3 30BA3B which is significantly better at literally everything.
>>
>>107114763
I'll give Qwen3 30b a3b a try. I do recall it being a decent writer. Thanks for your suggestion.
>>
>>107114822
Make sure it's the later versions since Qwen fucked up the post-training on the original launch of Qwen3.
>>
>>107114408
>>107114603
>>107114718
Teach the parrot to say H1B was a mistake.
>>
>>107114831
why would you ever not use latest version
>>
Sounds like gemma 4 is only getting "nano" and "small" variants. Hopefully small is at least 8B
>>
>>107114917
source my nigga?
>>
File: file.png (51 KB, 1504x35)
How do I fix this?
>>
>>107114763
Nice joke. It's an overthinking POS that produces garbage results. Qwen 32B is the only small Qwen model that produces good output from time to time.
>>
Is running qwen vl 235B at q1 worth it or should I stick with GLM air?
>>
>>107115135
Kimi Q0.1
>>
more perf improvements for macs
https://github.com/ggml-org/llama.cpp/pull/16634#issuecomment-3490125571
>>
>>107115135
q1 is probably too low, imo the main advantage of 235b over air is the intelligence and at that low of a quant idk how much it applies anymore
couldn't hurt to try and see for yourself if you can spare the bandwidth though
>>
>>107114917
What do you think the "n" in "Gemma 3n" meant?
>>
>>107115135
>Is running qwen [...] worth it
no
>>
>>107115148
What the fuck is Kimi Q0.1?
>>
Why are people shilling Epycs as the poorfag LLM driver if LGA-2011-3 systems are many times cheaper?
>>
>>107115135
I can run Qwen 235B at Q5 and it doesn't get any better. The model is very schizo.
>>
>>107115300
ddr5 and max ram capacity, limping along on slow ass ddr4 is torture
>>
>>107113093
What's the best model for ntr?
>>
>>107115327
DavidAI
>>
>>107115327
You have to find the one with the longest name containing some esoteric all-caps shit, like
>https://huggingface.co/DavidAU/Llama-3.2-8X3B-GATED-MOE-Reasoning-Dark-Champion-Instruct-uncensored-abliterated-18.4B-GGUF
>>
>>107115326
I meant DDR4 Epycs, used SP3 boards cost like $800 here and people still buy those while new chink 2011-3 Huananzhis start from $100
>>
>>107115011
update llama.cpp
>>
>>107115578
I have an X99 board with 8 memory slots, but I'm afraid to buy the whole 128 GB because I worry I'll be disappointed with the performance
>>
>>107115801
You better decide fast because RAM prices are only going to keep going up once the specs for next year's GPUs are announced.
>>
i'm warming up to iq2 r1 after using iq3 glm 4.6 for a while
it feels even more uncensored and also less parroty for a change, with the thinking prefilled
>>
>>107114831
Damn it's just a bit too big. I only have 16GB VRAM. Sucks though because the results when I run it split CPU/GPU are great, just very slow.
Maybe I'll tell it to do something and leave it overnight lol
>>
>>107116014
Are you using -ngl 99 + -ncmoe?
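i.e. something like this (filename is whatever quant you grabbed; --n-cpu-moe is the long form of -ncmoe, raise it until you stop OOMing):
[code]
llama-server -m qwen3-30b-a3b-q4_k_m.gguf -ngl 99 --n-cpu-moe 24 -c 16384
[/code]
-ngl 99 nominally puts every layer on the GPU, then --n-cpu-moe kicks the fat expert tensors of the first 24 layers back to the CPU, so the small always-active tensors stay on the card.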
>>
>>107115988
NOOOO YOU MUST USE Q5 ABOVE!
>>
File: parrots.jpg (1.69 MB, 4276x2635)
>>107116097
Q5 above?
>>
is it normal that I get way better results with whisper's large-v2 compared to large-v3 or turbo in Asian languages like Korean?
>>
>>107114408
Hi Drummer!
>>
>>107116148
Yes
https://github.com/openai/whisper/discussions/2363
>>
>Geminiposters stop right as the parrotposting GLM seething begins
I'm noooticing.
>>107116194
I don't think it's him.
>>
>>107116060
I am using Ollama saar, I couldn't get tool calling working between Zed and llama.cpp
>>
>>107113589
>>107113673
Those suggestions do not work, as they require you to write your notes in them. If an LLM does not do the thinking for me, it is completely useless.
>>
>>107116262
Is this a personal thing or for work? If for work, I suggest recording meetings, generating a transcript with faster-whisper and NeMo ASR, then building a knowledge graph from the transcript with a hybrid NL/LLM approach.
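The transcript half is only a few lines with faster-whisper (a sketch; the NeMo diarization and the knowledge graph step are separate problems, and the file name is made up):
[code]
from faster_whisper import WhisperModel

model = WhisperModel("large-v2", device="cuda", compute_type="float16")

segments, info = model.transcribe("meeting.wav", vad_filter=True)
with open("meeting.txt", "w") as f:
    for seg in segments:
        f.write(f"[{seg.start:7.1f}s] {seg.text.strip()}\n")
[/code]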
Where do the things you have to do originate from? Or the requests to do them, at least.
>>
>>107116262
What you wish for is a complete digital replacement for a personal assistant, but it's not possible with current technology, sorry you got duped by AI bubble hype grifters
>>
>>107113575
Sillytavern Lore Book
>>
File: parrotornot.png (82 KB, 1184x475)
>>107115988
Is this what u mean by parroty?
>>
>>107116493
>Unironically using DavidAU
>>
>>107113575
>note taking
Speech to text, any whisper model would do
>remind shit I have to do
even a small 3B model would do, it just needs to be able to tool call
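for the tool call part, any OpenAI-compatible local server (llama.cpp, Ollama) will fill in function arguments from a schema. rough sketch; the endpoint, model name, and add_reminder tool are placeholders you'd wire up yourself:
[code]
from openai import OpenAI
import json

client = OpenAI(base_url="http://localhost:8080/v1", api_key="local")

tools = [{
    "type": "function",
    "function": {
        "name": "add_reminder",
        "description": "Schedule a reminder for the user",
        "parameters": {
            "type": "object",
            "properties": {
                "text": {"type": "string"},
                "due": {"type": "string", "description": "ISO 8601 datetime"},
            },
            "required": ["text", "due"],
        },
    },
}]

resp = client.chat.completions.create(
    model="local",
    messages=[{"role": "user", "content": "remind me to renew the domain friday 9am"}],
    tools=tools,
)
# the model doesn't run anything itself; it hands back arguments for your code to execute
for call in resp.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
[/code]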
>>
>>107116523
He asked for a solution, not shit he needs to wire together himself.
>>
>>107116529
That would be $4K to hire me then
>>
>>107113575
There's probably something in open web UI that does that
>>
>>107113575
You're trying to body double your ADHD with AI, aren't you?
It's not possible at this time. AI can't actually think. Once we can get it to produce output without input, we can. Otherwise, it's just reminding yourself with extra steps. You can't get it to do anything other than write the notes for you. It needs to pass the Turing test.
>>
>>107116592
This is not accurate, RAG is extremely powerful. There's a lot that you can automate with rarted small models.
>>
>>107116659
Examples?
>>
>>107116592
the main thing LLM's have revealed to us is just how retarded most people are
>>
>>107114408
glm 4.6 air-chan when? Two weeks?
>>
Anyone get an agentic model to do 90% of their job for them setup?
>>
>>107116811
Yep but it's not local
>>
>>107116816
how so? My only hurdle is not breaking any company rules by handing off all my company emails to an AI (or just not give a shit)
>>
>>107116828
>My only hurdle is not breaking any company rules
Yeah that's mine as well, it's a pain. Semi-auto might be better for emails.
I automated a subset of tasks that are boring and frequent by building a RAG setup over the documentation of the product I'm interacting with plus some tools to interact with a mock of that system in Docker on an administrative/development level. This is with GPT-5 though.
I'll be building a microservice designed for giving Langchain agents access to applications running in Docker which I may open source for this next. It just uses the docker Python lib now.
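The docker lib part is basically this much (a sketch; the container name is a placeholder and you'd want a command whitelist before handing an agent the keys):
[code]
import docker
from langchain_core.tools import tool

client = docker.from_env()

@tool
def run_in_container(command: str) -> str:
    """Run a shell command inside the mock system's container, return exit code and output."""
    container = client.containers.get("mock-system")  # placeholder name
    exit_code, output = container.exec_run(["sh", "-c", command])
    return f"exit={exit_code}\n{output.decode(errors='replace')}"
[/code]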
>>
>>107116887
I'm an electrical engineer who works with electrical drawings / random forms / etc, so I've just been trying to brainstorm what pieces I could input to a model and what I could even possibly receive from it that would help me speed up the shit I do, even if I have to do some stuff manually per its findings
>>
OpenRouter now supports embedding models, cool!
>>
>>107116936
Most embedding models are <1B. Even the most impoverished of vramlets can run their own. Why would one pay for this?
>>
>>107116811
I would if not for the boss spyware.
>>
>>107116924
Do you use digital design for the drawings? It might be possible to make reader/writer tools for the files.
It depends on how low level those files are, like if it's text-like or binary.
You might have some luck with converting circuits to a graph representation that could then be converted to text, and letting the model work with that.
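Toy version of that graph-to-text idea (networkx for the graph; the node names and attributes are made up):
[code]
import networkx as nx

g = nx.DiGraph()
g.add_edge("MAIN_PANEL", "XFMR_T1", kind="feeder", rating="400A")
g.add_edge("XFMR_T1", "SUBPANEL_A", kind="feeder", rating="225A")
g.add_edge("SUBPANEL_A", "HVAC_RTU1", kind="branch", rating="60A")

# flatten to plain text the model can actually reason over
for u, v, d in g.edges(data=True):
    print(f"{u} --[{d['kind']} {d['rating']}]--> {v}")
[/code]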

Any data you have access to is gold for this stuff. Think like part databases, old design files, documents with specifications, stuff like that might be useful to build RAG / agentic search for.
>>
>>107116957
Screen capture from HDMI + Teensy USB Keyboard and Mouse emulator controlled by a personal device with a fat GPU, easy
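Host side it's just bytes down the Teensy's serial port (a sketch; the one-letter protocol here is made up, the firmware on the Teensy has to parse it and replay it as USB HID events):
[code]
import serial, time

teensy = serial.Serial("/dev/ttyACM0", 115200, timeout=1)

def type_text(text):
    teensy.write(b"T" + text.encode() + b"\n")  # "T<text>" = type this
    time.sleep(0.05 * len(text))                # don't outrun the HID typing rate

type_text("git status")
teensy.write(b"M10 -5\n")  # "M<dx> <dy>" = nudge the mouse
[/code]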
>>
>>107116991
could use mermaid diagrams, which are defined in a text-based format
>>
>>107117021
If you're lucky, GPT-5 might understand them
>>
>>107117002
What about USB device detection and software detection?
>>
>>107117057
They would see that you plugged in a personal keyboard and mouse. What of it?
>>
>>107117057
Physical device to type on the keyboard and move the mouse
Camera for the screen
Good luck anon that sounds like a pain to work with.
>>
>>107116991
My drawings are more high-level power distribution type stuff and it's more so just CAD work of lines, layout drawings and things like that. There isn't much math being done, and for any math that is needed there are electrical modeling programs.

I do import the electrical codes and standards and have it search the documents to help me quickly find sections that I can then reference, but it just feels like there are far too many isolated sources of information for me to easily connect them all for context

Maybe I'm just a brainlet
>>
>>107117071
Even GPT-5 and Claude are fucking retarded and you have to babysit them in ideal situations.
I can't imagine how bad the results would be with OCR mistakes and relying on models to move the cursor. I suppose it could work if one has an entirely terminal-based workflow, but I don't see that working with an IDE.
>>
>>107117068
Damn, okay, never thought of it like that.
>>
>>107116659
>small models.
Unfortunately there are no good enough models to do the job. It has to be an instruction fine-tuned model, and the best you can get currently is Qwen3-4B, but instead of that why not use 30B? It has similar speed but more knowledge.
>>
>>107116953
It's good to test stuff without having to download all of them
>>
>>107117072
>import the electrical codes and standards and have it search the documents
How are you doing this?
And are you doing this locally or in the cloud?

I expect anything interacting with professional CAD suites is something that would require custom models and pro AI researchers to build something for.

The documents and really anything text based though, I expect you could do a lot with those.

>>107117098
Kek yeah my response was mostly joking, that seems pretty hard to work around unless you can fake a legit HID that's permitted by policy.

>>107117107
The trick is you have to embed a lot of domain knowledge in your agent deterministically. The intelligence and context limits of the model mainly affect the scope of a task you can successfully pass off to it and expect to get good results for. So your pipeline needs to be more hardcoded with smaller models, and will be less flexible.
>>
>>107117136
the codes and standards are just PDFs, it's nothing fancy, I just upload them into ChatGPT for context and ask it questions based on the codes

Interacting with CAD I know is not going to happen, at least not on a company computer. And I'm sure Autodesk and those types of companies are looking to do their own integrations anyway
>>
>>107117160
You can get pretty advanced with document RAG. The basic idea is you convert the PDFs to text, split the text into chunks, and calculate an embedding for those chunks.
Then when you query the system, before sending the query to the LLM you calculate an embedding for the query, and use that to search through the embeddings of the chunks for the closest matching ones. Those chunks get attached to your query and sent to the model so it automatically gets some context related to your prompt.

You can really go crazy with this though. There's advanced methods for deciding how to chunk it, doing multiple levels of chunking, doing embeddings of LLM-generated summaries of chunks. All kinds of techniques there. Look into building RAG with Langchain and ChromaDB; that would be a good start.

There's also agentic search, which is building functions that expose deterministic search with filters over your data. For example, getting all documents mentioning some word that were modified between two dates, or something like that. When you prompt an agentic search system, it would call those tools with filters based on what you're looking for, and then use the results to respond.

You will definitely need to know how to run Python for this. You could write everything you need with a competent model though.
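Minimal version of that first chunk-embed-query loop (pypdf plus ChromaDB with its default embedder; the file name and chunk sizes are arbitrary, and for real use you'd swap in a better embedding model and smarter chunking):
[code]
import chromadb
from pypdf import PdfReader

text = "\n".join(page.extract_text() or "" for page in PdfReader("nec-code.pdf").pages)
chunks = [text[i:i + 1200] for i in range(0, len(text), 1000)]  # 200-char overlap

db = chromadb.PersistentClient(path="./ragdb")
coll = db.get_or_create_collection("codes")
coll.add(documents=chunks, ids=[f"chunk-{i}" for i in range(len(chunks))])

hits = coll.query(query_texts=["grounding requirements for service panels"], n_results=3)
for doc in hits["documents"][0]:
    print(doc[:120], "...")  # these are what get pasted into the LLM prompt
[/code]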
>>
I've been away for a while. Has any small model (<70B) surpassed Nemo when it comes to writing sovlful Reddit/4chan threads?
I always thought this was very fun with Nemo because you can clearly see they fine-tuned the model with human writing rather than benchmaxxing with AIslop.
>>
After using GLM 4.6 since it came out for sessions I am amazed that it's out for free. What do the companies get out of releasing these models to the public? They spent money making it, then hand it out. I know they have their online API for the memorylets, but are they really banking on the poors to give them money to recoup the cost of making it?
>>
File: thick_billed_parrot_(1).jpg (218 KB, 2000x2000)
218 KB
218 KB JPG
>>107117355
Sessions?
>>
Is there a flag to format llama.cpp's console output?
>>
>Kimi references something that happened on an entirely different session
>Kobold had been fully closed between sessions
What the fuck?
>>
>>107117450
context still sitting in ur gpu
>>
>>107117450
No one can explain with certainty why this happens, but yeah, it's a thing. I noticed it when my tribal character card's made up language jumped across to another character.
>>
>>107117454
I've had this happen before too, when I powered down the machine to switch which surge protector my setup was plugged into, but I wrote it off as me being schizo.
>>107117469
Spooky.
>>
>>107113093
rape miku
>>
>>107117509
>5:53AM in India
Good morning saar. Gemini needful today?
>>
>>107117342
Try GLM Air.