/g/ - Technology




File: 1751383194547290.jpg (231 KB, 1312x816)
/lmg/ - a general dedicated to the discussion and development of local language models.

Previous threads: >>107815785 & >>107803847

►News
>(01/08) Jamba2 3B and Mini (52B-A12B) released: https://ai21.com/blog/introducing-jamba2
>(01/05) Nemotron Speech ASR released: https://hf.co/blog/nvidia/nemotron-speech-asr-scaling-voice-agents
>(01/04) merged sampling : add support for backend sampling (#17004): https://github.com/ggml-org/llama.cpp/pull/17004
>(12/31) HyperCLOVA X SEED 8B Omni released: https://hf.co/naver-hyperclovax/HyperCLOVAX-SEED-Omni-8B
>(12/31) IQuest-Coder-V1 released with loop architecture: https://hf.co/collections/IQuestLab/iquest-coder

►News Archive: https://rentry.org/lmg-news-archive
►Glossary: https://rentry.org/lmg-glossary
►Links: https://rentry.org/LocalModelsLinks
►Official /lmg/ card: https://files.catbox.moe/cbclyf.png

►Getting Started
https://rentry.org/lmg-lazy-getting-started-guide
https://rentry.org/lmg-build-guides
https://rentry.org/IsolatedLinuxWebService
https://rentry.org/recommended-models
https://rentry.org/samplers
https://rentry.org/MikupadIntroGuide

►Further Learning
https://rentry.org/machine-learning-roadmap
https://rentry.org/llm-training
https://rentry.org/LocalModelsPapers

►Benchmarks
LiveBench: https://livebench.ai
Programming: https://livecodebench.github.io/gso.html
Context Length: https://github.com/adobe-research/NoLiMa
GPUs: https://github.com/XiongjieDai/GPU-Benchmarks-on-LLM-Inference

►Tools
Alpha Calculator: https://desmos.com/calculator/ffngla98yc
GGUF VRAM Calculator: https://hf.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator
Sampler Visualizer: https://artefact2.github.io/llm-sampling

►Text Gen. UI, Inference Engines
https://github.com/lmg-anon/mikupad
https://github.com/oobabooga/text-generation-webui
https://github.com/LostRuins/koboldcpp
https://github.com/ggerganov/llama.cpp
https://github.com/theroyallab/tabbyAPI
https://github.com/vllm-project/vllm
>>
►Recent Highlights from the Previous Thread: >>107815785

--Qwen 235B vs newer GLM models: upgrade considerations and performance expectations:
>107821338 >107821349 >107823142 >107823219 >107823230 >107823247 >107823263 >107823265 >107823273 >107823294 >107823283 >107823291 >107823303 >107823327 >107823415 >107823332 >107823360 >107823364 >107823397 >107823642 >107823718 >107823664
--Mad Island mod enables LLM NPC interactions, sparking player nostalgia:
>107820759 >107821087 >107821110 >107821211 >107821320 >107821094 >107822132 >107822930
--DeepSeek coding model launch timeline:
>107824356 >107824413 >107824461 >107824479 >107824486 >107824495 >107824504
--How OpenAI chat systems manage conversation history and prompt caching:
>107822818 >107822876 >107822911 >107823248
--Context size vs speed tradeoffs in Koboldcpp model optimization:
>107821567 >107821938 >107821948 >107822160
--RTX 6000 Ada cost and model compatibility debate:
>107824787 >107824842 >107824852 >107824970 >107825130
--Jamba's uncensored state tied to architectural flaws hindering effective refusal training:
>107824915 >107824997 >107825017
--Optimizing Mistral Small models with DRY sampler and parameter tuning for roleplay:
>107818078 >107818100 >107818123 >107818145 >107818161
--LLaMA model evolution and hardware limitations discussion:
>107821121 >107821141 >107821273 >107821548 >107821573
--Critique of ChatGPT's basic memory implementation in free tier:
>107815963 >107815987 >107816786 >107816032 >107816055
--Jamba model's context handling and performance evaluation:
>107820773 >107820898 >107821422 >107821112
--MoE expert routing complexities and research-driven optimizations:
>107823553 >107823599
--Critique of low-quality datasets and excessive training practices:
>107823952
--Miku (free space):


►Recent Highlight Posts from the Previous Thread: >>107815790

Why?: >>102478518
Enable Links: https://rentry.org/lmg-recap-script
>>
>>107826648
>--Miku (free space):
>
What a horrible thread.
>>
>>107826669
I see
>>
File: 1719954801389509.jpg (182 KB, 821x1199)
>>107826669
>>
best programming model for tool calling with 19GB unified memory?
>>
>>107826689
Thank you for blessing this thread.
>>
File: its dead.jpg (119 KB, 2483x458)
>>107826699
>>107826689
>>107826648
tranny still spamming his sona i see
>>
>>107826694
nemotron 30b at q4 or maybe q3. who the fuck has specifically 19gb tho?
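For back-of-the-envelope checks like this, file size is roughly parameters times bits-per-weight over eight. A sketch (the bpw figure and overhead factor are rough averages I'm assuming; real GGUFs vary by tensor mix):

```python
# Rough GGUF size estimate: params * bits-per-weight / 8, plus a small
# overhead factor for metadata and non-quantized tensors (assumption).
def gguf_size_gb(params_b: float, bpw: float, overhead: float = 1.05) -> float:
    """Approximate file size in GB for a model with params_b billion params."""
    return params_b * 1e9 * bpw / 8 / 1e9 * overhead

# A 30B model at ~4.8 bpw (roughly q4_K_M territory) lands near 19 GB
# before any context, which is why it barely misses a 19GB budget.
print(f"{gguf_size_gb(30, 4.8):.1f} GB")  # ~18.9 GB
```

Context cache comes on top of that, so the fit is even tighter than the raw file size suggests.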
>>
I don't know how jeets vibecode llamacpp prs, claude max/opus4.5 barely knows how to write complex linq
>>
>>107826795
maybe something fucked with my system
regular nemotron 30b didn't work but i can try q4
>>
>>107826819
actually it looks like even the smallest quants of nemotron are ever so slightly too big for you to fit with any meaningful context. there really are not any good coding models below 30b, so you might just be out of luck.
https://huggingface.co/unsloth/Nemotron-3-Nano-30B-A3B-GGUF/tree/main
https://huggingface.co/bartowski/nvidia_Nemotron-3-Nano-30B-A3B-GGUF/tree/main
>>
>>107826813
Prompt issue, almost certainly.
>>
>>107826813
>implying you need to know how to code to vibecode
Jeets just do the needful while you complain and try to make a perfect solution. "Barely works" is enough, that's why companies hire them and not (You).
>>
>>107826837
yeah just noticed nemotron-3-nano:30b is literally the same file as nemotron-3-nano:30b-a3b-q4_K_M
i'm stuck with gpt-oss:20b, then
>>
>>107826853
seems so. you've gotta get yourself some better hardware if you want better models.
>>
>>107826861
my setup is good but i cheaped out on ram
>surely if i need ram i can just buy more later
fuck my chud life
there will exist better models on my hardware before i buy more ram
>>
>>107826643
soulless compared to the original image
>>
File: kronii cat maid side view.jpg (187 KB, 2048x1825)
I have 128GB unified RAM. I'm looking for models for
>coding/tool use
>image generation
>research assistant (focus on STEM stuff)
What are my best options? I'll be running these under LocalAI so pretty much any format will work.
>>
What in the fuck did ikawrakow do? Why the fuck did git history get rewritten in the past week or so? The repo is fucked so I can't just pull. Super bad practice. Was this another hissy fit about licensing/attribution?
>>
>>107827160
image gen will have to be separate from this but glm air at q5 or so is the general recommendation for 128gb. will give plenty of space for context and the image gen model.
>>
.
>>
>>107826643
This image is so much more organic than the regular miku spammer autist. This actually engages discussion.
>>
>>107827163
who fucking cares any more llama.cpp can do whatever his shit did
prove me wrong or whatever
>>
>>107827217
>48
>attn norm 35,5120
what kind of moesissy or vramlet model is this
>>
>>107827272
48 is the number of layers
35 is the sequence length
5120 is the hidden dimension
the model is llama 4 scout
>>
File: 1762017110878579.jpg (771 KB, 1125x976)
>>107826643
>Jamba2 3B and Mini (52B-A12B) released
Anyone tried this yet? Size would be great for my system.
>>
>>107827325
>llama 4 scout
you might be the only one trying to finetune that thing. also you're wasting compute if you want it for erp.
https://github.com/adobe-research/NoLiMa
>>
>>107827163
A few weeks ago I tried the repo, and every time I generated something it had a fixed seed.
I guess it was a new feature, but they didn't say how I could change it. And I didn't care, deleted it.
>>
File: 1760539180034592.png (266 KB, 471x521)
>>107827347
>>
>>107827347
Tried it for something like 5 minutes. Wasn't very impressed.
>>
>>107827463
the two more weeks continue
>>
schizo is still desperate for attention
>>
so, another chinkslop year?
>>
>>107827347
So it has 52B params but it's barely better than ministral 14B. Ok. good to know.
>>
can we have deepseek v4 mini pls?
>>
>>107827523
There is no incentive to create small models.
>>
>>107827565
small models are inherently communist. Therefore china has all the incentive to make them.
>>
>>107827506
It's... actually more retarded than 14B or even the old Nemo when it comes to actual real conversations. These benchmarks might as well be lies.
>>
>>107827600
China is not a communist country.
>>
File: 1432498179182.png (296 KB, 722x768)
Can you system prompt random or timed events without them being mentioned in the chat between us? E.g. if I make an asthmatic AI and tell it to cough occasionally, I want it to not do "I will cough now" or some retarded shit.
>>
Next Kimi will be 2T-A3B
>>
Best model for dev, 32gb vram, 128 ram?
>>
>>107827610
yes they are, their inherent pursuit of it is the problem
>>
>>107827653
glm4.7 at iq2m. or sucking sama's dick in the hopes that he will give you some ram.
>>
>>107827620
https://docs.sillytavern.app/usage/core-concepts/macros/#randomization
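A hedged sketch of how those macros can drive the event from the character card or author's note (macro names are from the linked SillyTavern docs; the instruction wording itself is just an example):

```
[Roll for this reply: {{roll:1d6}}. If the roll is 1, {{char}} has a brief
coughing fit mid-sentence; otherwise do not mention breathing at all.]
```

The macro is substituted before the model sees the prompt, so the model only ever sees the final number and the instruction stays out of the visible chat.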
>>
Is there a version of Nemo that isn't pozzed as fuck?
>>
>>107827931
Extreme skill issue
>>
>>107827620
The issue is always going to be that it can't do the random roll mid-reply. Doesn't matter if you're using the ST-included randomizer or a tool call to a dice tool.
It'll always happen after the reply.
>>
File: 1738508222040753.png (159 KB, 634x868)
>>107827931
Works for me
>>
anyone see this model?
https://huggingface.co/FreedomIntelligence/openPangu-R-72B-2512
>>
>>107827970
Huh. I asked something far tamer and it would not shut up about equality and respect.
>>
>>107827980
>what is a system prompt
>>
>>107827604
>These benchmarks might as well be lies
Many such cases
>>
>>107827490
Every year is chinkslop year until we ban all Chinese nationals from the US and an ITAR compliant version of HuggingFace exists for publicly funded research models.
>>
>>107827980
PLEASE learn how to use an LLM before coming here and complaining about a model
>>
>>107827348
I'm confused, is it the architecture or the training that limits the model? maybe he can fix it with training it on better data?
>>
Some madlad actually made a merge of GLM 4.6 and 4.7.
https://huggingface.co/shamwowzer/prototype-glmx01
>>
Anyone tried the Nex 8B version?
The full 670b-something model was one of the few good enough ones for high context rp sessions available on OR, roughly equivalent to Gemini Pro 2.5, but they just paywalled it.
It's not available as a gguf file so i'm too much of a retard to get it to work with sillytavern to try it.
>>
>>107827977
>72b
>it's not a benchmaxx'd tune of qwen2.5-72b
color me surprised
>>
>>107828115
Yeah I'm kind of curious about it. We need goofs now. New MoE size we've never seen before, run on a fairly modest system. 24T training tokens.
>>
>>107828105
Nevermind, looks like someone made one:
https://huggingface.co/mradermacher/internlm3-8B-Nex-N1-i1-GGUF
>>
>>107828105
>the full 670b-something model
This? That's just deepseek, of course it's good.
https://openrouter.ai/nex-agi/deepseek-v3.1-nex-n1
>>
File: 1759248425031664.png (94 KB, 276x405)
>>107828288
The thing is, you have to treat models like a retarded child. You can't just establish a medieval setting and assume that it will conform to that era's politics, you have to explicitly tell it to do so. As much of a meme as it is, 'prompt engineering' is essential to get good outputs, even when your use case is just to write a cohesive story.
>>
>>107828324
Which model are you using?
>>
>>107828339
>inside [] brackets
Oh, no. Those are the super important brackets. How could it ignore those?
>>
>>107828362
You might need to use a lorebook to fix that
>>
>>107828374
You have no idea for how long he's been posting screenshots. May as well recommend him some meme samplers or to add "follow the rules" to the prompt.
>>
>>107828362
No one who knows anything about current models would call them intelligent, they're just token predictors.
Regardless, low active parameter count could be a problem, assuming by 'GLM' you mean Air. Mistral Small for example is perfectly capable of making female characters submissive and take a lower role in society when prompted to do so.
>>
File: 1756853540540388.jpg (1.27 MB, 3610x5208)
>>107828382
That's him?
>>
>>107828288
>mikupad chad
>model uses word I don't like
>stop gen, change word, continue gen
>>
>>107828288
It's the 13th century for heaven's sake!
>>
>>107828362
>Nemo 12B
Not the guy, but you mean straight the base model without any tunes?
>>
Is there any extension that allows it to eat folders so it's not just dumped here?
>>
>>107828362
>>107828324
>>107828394
it's probably just a context length issue.
context rot is a known problem. models tend to follow instructions just fine. they just forget or are bad at reasoning.
as the context length gets higher, the correlation between words at distance gets more and more sparse. works fine for needle in a haystack problems which they benchmark for, but not complex reasoning and logic which they don't.
>>
>>107828662
the solution to this problem is fairly simple: go agentic.
models can always follow simple prompts. knowing this, you design a series of agents which can apply a series of simple rule following guidelines.

honestly the biggest failing of these threads is that everyone always insists that they can get everything done within the memory of one model.
this isn't richard sutton's bitter lesson. you don't have the luxury of waiting for a bigger and better model to come out and blow your engineering out of the water.
>>
>>107828703
agents to the moon sir llm2.0!
>>
>>107828703
none of us have the patience to jury-rig a bunch of models together
>>
>>107828709
https://www.youtube.com/watch?v=TUjQuC4ugak
https://www.youtube.com/watch?v=8OvIeJUc1N0
it really is just the model being stupid and incapable of reasoning.
engineers deal with this shit every day.
>output in json format. add no additional characters
>ok what about this ```python
the answer is and always has been that you can't do anything about it, just cope.

this is easier in engineering land where we have somewhat expected responses
no clue what you coomers are going to have to do but it may involve running the same prompt 10 times and collecting the average using similarity search
>>
>>107828731
you needs to build these yourself sir
>>
>>107828738
Your example is easily solved using grammars.
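For llama.cpp specifically, a GBNF grammar constrains sampling so tokens outside the grammar simply cannot be emitted, no matter what the model "wants" to say. A trimmed-down sketch forcing bare JSON (llama.cpp ships a fuller `json.gbnf` in its `grammars/` directory):

```
root   ::= object
object ::= "{" ws ( pair ("," ws pair)* )? "}" ws
pair   ::= string ":" ws value
value  ::= object | string | number | "true" | "false" | "null"
string ::= "\"" [^"\\]* "\"" ws
number ::= "-"? [0-9]+ ("." [0-9]+)? ws
ws     ::= [ \t\n]*
```

Pass it with `--grammar-file`, and the "```python" preamble problem disappears because those characters can never be sampled in the first place.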
>>
>>107828753
I guess. we don't have access to those when using APIs. it's often just as simple to just use regex to grab the contents of the code block and ignore everything else. this is what mem0 does in production.
>>
>>107828797
glm 4.6 can do that up to about 64k context. if you can get a corporate model to comply, you might be able to get up to 256k context at most. what you want does not exist and will not for at least 3 or 4 more years.
>>
>cloode killing itself
Funniest 2026 moment so far
>>
>>107828797
>300-1000 messages
by then any model has certainly completely forgotten about the system prompt or can no longer interpret it correctly.
you need to occasionally re-inject important messages into the trajectory.
you will also have to trim the trajectory history and replace it with a summary that gets occasionally regenerated.
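A minimal sketch of that summarize-and-trim loop, with `summarize()` as a placeholder for another LLM call (all names and thresholds here are invented for illustration):

```python
def summarize(messages: list[dict]) -> str:
    # Placeholder: in practice this is another LLM call that compresses
    # the dropped messages into a few sentences of story-arc notes.
    return "Summary of earlier events: " + "; ".join(
        m["content"][:40] for m in messages)

def compact_history(history: list[dict], keep_last: int = 5,
                    max_messages: int = 20) -> list[dict]:
    """Once the history grows past max_messages, replace everything but
    the last few messages with a rolling summary message."""
    if len(history) <= max_messages:
        return history
    head, tail = history[:-keep_last], history[-keep_last:]
    summary = {"role": "system", "content": summarize(head)}
    return [summary] + tail
```

Run it every turn before building the prompt; re-injecting the original system prompt and any important rules happens separately, on top of the compacted history.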
>>
>>107828797
>300-1000 messages
What does your context get to? Even SOTA api models get noticeably dumber past like 32k.
>>
>>107828841
But your initial complaint is models not conforming to the scenario you've set up, which can be a result of the model just getting dumber, so I'll ask again. What is your context getting to when you're having these problems?
>>
>>107828841
it can't be a manual process. I mean that you have to blow away the conversation history up until like 5 messages ago and replace the whole history with a prompt describing the story arcs.
>>
>'stutters sometimes when flustered'
>EVERY reply begins with a stutter
Makes me want to rip my hair out
>>
>>107828970
Use thinking models and make it think hard
>>
>>107828970
LLMs have no concept of time
>>
File: file.png (12 KB, 593x117)
>>107828776
>we don't have access to those when using APIs
Your local model?
>>
>>107828738
Making LLMs output json is a retarded idea. Too many points of failure in that shitty syntax and not easily fixable. Pseudo-xml tags are better suited
>>
>>107829072
i would sooner make it output markdown than anything resembling XML inshallah
>>
>>107829095
Learn english syntax first, rajeesh.
>>
>>107829104
sounds like you're the rajeesh here grammarlet
>>
File: pr.png (105 KB, 500x523)
Many of you don't realize what we could have right now. It wouldn't be that difficult for a company to make a good creative model. Give it modern knowledge with wikipedia. A bit of coding data so it understands token efficient character/setting formatting. Some back-and-forth D&D, erotica, and forum posts. A couple books on historical attitudes and practices from antiquity through to the modern age. Finally, focus the bulk of training on 1940s-2000s fantasy/sci-fi/historical fiction novels, as well as some japanese light novels.
Sell access to the big model as an ultra-advanced AI dungeon remake. Market by publicly releasing an under 70B model. Millions upon millions of dollars from autists and creative professionals.
>>
>>107829192
Well then start with it, nigga.
>>
>>107829192
I'll edit the README.md
>>
File: 1762257186330714.jpg (54 KB, 522x522)
>>107829192
You retards don't understand the scope at which LLMs are trained, they're already throwing literally everything under the sun at them.
What you want is just a regular model that isn't hammered with RL for benchmarks, which we've had.
Then you'll complain that that model is retarded (shocker, it's not even 100B!).
It's always been about parameter count.
>>
>>107829344
>they're already throwing literally everything under the sun at them
no, they very much aren't, and they're proud of filtering most out because it's ""toxic""
>>
>>107829309
I'll add the LICENSE.md
>>
>>107829344
Distills and crap datasets curated by third worlders + misconfigured training parameters is why we have no good small models
>>
>>107829358
Filtering doesn't change the fact that you can't fit a 'good creative model' inside 30GB or some shit.
"Market by publicly releasing an under 70B model" nigga seriously? You think API models like Claude are good/popular because they got some secret sauce? No it's because it's a fat unquanted model
>>107829405
There are good small models, but relative to larger models they are simply fucking stupid. For their size they are good but people here expect miracles on top of using lobotomy quants.
>>
>>107829425
>You think API models like Claude are good/popular because they got some secret sauce?
for aicg denizens it certainly seemed to be the case when their proxies had it, now they're coping with gemini and whatever else they get their piss soaked hands on
>>
>>107829192
You don't realize how good we already have it. Even a model as small as 24B can be a good DM if you break down tasks into smaller pieces and manage short-term and long-term notes separately. I'm astonished that this shit isn't mainstream yet. I suppose people who love DnD and programmers are two separate groups that don't overlap much
>>
>>107829442
there's a large enough proportion of programmers having a melty, and I think DnD players tend to be amongst the group having a melty.
>>
>>107829442
>I suppose people who love DnD and programmers are two separate groups that don't overlap much
I was interested in DnD many many years ago, but I couldn't get anyone else I knew interested enough to get games going so I gave it up.
>>
>>107827160
>>107827197
I have a similar setup and GLM 4.5 air is the best I've found thus far. I use llama.cpp built with vulkan (amd APU).

One thing to call out though is image gen kind of sucks on unified compared to a dedicated GPU. Still works, just a bit slower. You don't need a lot of VRAM for image gen so if you have a GPU lying around somewhere that might be a better option.
>>
>>107829344
>they're already throwing literally everything under the sun at them
Except when you look at some of these model's training data you can see that there isn't a single book in the entire corpus
>>107829425
>you can't fit a 'good creative model' inside 30GB
Yes, you definitely can. You can't fix shit training data with high parameters, see Llama 4
>>
One of the biggest flaws is that the model doesn't have access to previous chats and keeps making the same shit again and again. There isn't enough context to throw all chats into it, but it works if you have a small task, like, generating a BBEG. You keep previous outputs, and it will start with boring shit like

- The Hollow King – A once-noble ruler reduced to a skeletal figure by his own curse, ruling through fear and necromantic puppets.
- The Fleshweaver – A surgeon who stitches people together into monstrous hybrids, seeking to "improve" humanity against its will.
- The Shadow Puppeteer – A thieves’ guild master who controls others via cursed masks, but his own face is slowly erasing.

And after a while you'll start getting

- The Clockwork Plague – A disease spread by mechanical spiders, turning hosts into ticking bombs.
- The Tidecaller – A leviathan-riding pirate who drowns land to create a new oceanic empire.
- The Glass Prophet – Shatters truth into shards, forcing people to choose which lie to believe or go insane.

That's already a huge progress for a braindead 24b
>>
>>107829559
>Except when you look at some of these model's training data you can see that there isn't a single book in the entire corpus
didn't one of the recent release literally brag about that in their readme something like "books 0" iirc
>>
File: 1762194898780261.jpg (78 KB, 425x614)
>>107829559
>Yes, you definitely can. You can't fix shit training data with high parameters, see Llama 4
So? Most models aren't Llama 4, a model handled so bad its leads left Meta's AI department. Your collection of amazing fantasy novels isn't going to beat Shannon's theorem and produce a 32B model that is somehow astonishingly better at writing creatively than all the other 32Bs. This is literally the same mindset as finetune tards.
>>
>>107829572
Yeah it was Nemotron
>>
>>107829709
What? Why would you ever think that training on math would produce better creativity than books?
>>
>>107829764
Okay Drummer.
>>
Is it possible to dynamically select a -dev device in llama.cpp based on name? When I wake my desktop from sleep my igpu and dgpu switch device names and it totally messes up my llama-swap config file.
I want to either select the device by name somehow or force linux to use Vulkan0 for my dgpu. Disabling the igpu isn't really an option since I use it for other things too and it massively slows down inference if I use both.
>>
>>107829709
By this logic a 32b trained on only german will be the same as one trained only french.... you're kinda retarded bro...
>>
File: 1746482988344010.jpg (164 KB, 936x936)
>>107829764
>>107829788
Do you really think math/coding being in the dataset is what's holding back the perfect creative writing model from being made? Take that shit out, replace it with whatever skyrim fanfics you've got saved; the end result will still be retarded and people will come here to complain it has "no spatial awareness" or "good understanding of anatomy"
>>
This guy has to be baiting right?
>>
File: nemotron 0 books.png (78 KB, 439x944)
>>107829741
that do be it thanks, crazy to see what shit they waste compute on
>>
>>107829794
>"no spatial awareness" or "good understanding of anatomy"
I believe this can only be solved with native multimodality, and you can't change my mind
>>
>>107829804
>we can make the best 12B/32B/70B erp model ever made we just need a really really good dataset of books or some shit
IDK, are you?
>>
>>107829827
So this: >>107829808
Would be the same as an equivalent trained on mostly books? Is this your argument?
>>
>>107829827
>we can make the best 12B/32B/70B erp model ever made we just need a really really good replacement for transformers or some shit
>>
>>107829815
I believe it can be solved with copious amounts of tokens and excessive thinking. And parameters.
>>
>>107829849
bidet will safe us
>>
>>107829776
Drummer trains using synthetic data and ESL RP logs, not books
>>107829794
Nice pilpul you fag. When did I ever say that math/coding shouldn't be in the dataset? Yeah that's right I didn't. How can you compare ZERO books being in the data to finetuning on skyrim fanfics? Deliberately dishonest argumentation, go fuck yourself
>>107829827
>If you train the model on fiction... It won't be better at fiction!
Okay you're just retarded
>>
>>107827347
Tried Q5, holy jesus this thing is dumb as rocks, 13b tier but has some charm to it + low safetyshit. Feels like using old 2023 and earlier models but in a more usable form, so if you're looking for that feel, give it a try.
If you're going to try it, keep temp low low, use strict official ChatML formatting without anything except user assistant. Usually models can figure out custom formatting often with benefits, but this one shits itself.
>>
>>107829841
>>107829875
Go train your AO3 budget model then. If you gimp out on the parameter count it's going to be shit much like Nemotron in spite of being trained on a curated dataset. I don't know what's so hard to understand here. Why the fuck are you even bringing up Nemotron when it's dogshit and you know it is? How does that back up your point in the slightest?
>>
>>107829922
and would a 1T model on nemotron dataset be good according to you?
>>
>>107829943
Hit delete on that post lil nigga, you're genuinely fucking retarded if you believe there's no difference jumping between say 3B-7B, 12B-32B or let alone 1T on the same dataset
>>
File: 1745797389998589.png (1.12 MB, 1080x720)
i feel like im the only one in the world interested in the goon jar
>>
>>107829957
Someone will be the first one seal it up and do it, might as well be you.
>>
>>107829957
have they actually demoed it in a video yet?
>>
>>107829957
These don't even look 3D. I don't see the point.
>>
It will be a truly wonderful day, when we can buy a hologram jar direct from China, AI generate a 3d character, and hook it up to an LLM via sillytavern.
>>
>>107830005
i probably wont use it much, just think it could be interesting if its cheap enough
>>
>>107829943
if you train a 32B and 350B coding model each on the same dataset the 32B one is going to be somewhat usable while the 350B one will shit all over its brother... this seems like common sense
>>
File: nikke 2b.gif (657 KB, 637x358)
>>107829957
It's a novelty you'll try once and never use again, maybe worth it if you can find a cheap enough clone on Ali.
>>
>>107830056
The one from Razer won't be, that's for sure
>>
>>107830067
are there any alternatives?
>>
>>107830027
I think VR headsets are more interesting. Give me a local Neuro in VRChat.
>>
>>107830077
>VR
nah give me one in AR. i have a quest 3 and have been waiting for one.
>>
>>107830077
VR is still lame and low-poly tbdesu, a roughly laptop sized anime jar would be much more convenient.
>>
>>107830075
Hardware is easy enough to replicate. Actually, I think I'd immediately buy one if it were Deepseek or Qwen-branded just to keep it on my shelf
>>
>>107829441
The logs are utterly wretched there lule >>107829979
>>
>>107829192
https://docs.mistral.ai/models/mistral-small-creative-25-12
Here's your ultra-advanced creative writing AI dungeon remake under 70B (24B) trained on meticulously curated data bro. We are ALL hopping off 300B moes to use this shit
>>
File: 1731086290564808.webm (3.92 MB, 1080x1080)
>>107830082
AR, VR, whatever, same thing in this context. VRChat can use passthrough. The main benefit is that it's an already existing engine with powerful customization and input capabilities, so you can use it as the renderer for your chatbot's avatar in XR headsets. Remember this webm?

>>107830101
PC skill issue. Frame's included dongle will even make it easy for idiots to set up.
>>
>>107830156
>Remember this webm?
first time seeing it, also brb
>>
>>107830148
I don't think a single anon on /g/ has tried that model. It could be good lol, is it a new model or a glorified finetune?
>>
>>107830148
>Model that's literally not available to publicly download
Nice one bro, really showing off your intellect
>>
File: file.png (10 KB, 486x136)
>>107830240
skill issue just email them
>>
>>107830240
It's on the api you dumb chud, or does your "creative writing" involve raping little girls?
>>
>>107830250
LOCAL MODELS GENERAL NIGGER
>>
>>107830259
u serious bro?
>>
File: rock cds.jpg (39 KB, 600x450)
>>107830250
Of course not. I prefer little girls raping me
>>
>>107830268
yeah
>>
>>107830156
You have very feminine hands.
>>
>>107830297
do not the anon
>>
File: ulfric.png (138 KB, 294x311)
138 KB
138 KB PNG
A book-based creative 32b denseGOD model would wipe the floor vs your estrogen MoE 500b or whatever the fuck you spent $10K to run at q2
>>
File: 1697989269419795.webm (3.93 MB, 1024x1024)
>VRChat with passthrough
>>
>>107830306
>32b
>denseGOD
only thing a 32b is going to be wiping is my ass
>>
>>107830318
because all the 32b shit we've had in like 2+ years is qwencensorshit
>>
>>107830322
all things considered there was command-r
>>
>>107830306
You need minimum 70B for any decent results, whether MoE or dense.
>>
>>107828797
>I'm a cultured gentleman that does medieval roleplays to 300-1000 messages
I'm a 17th level Evoker that built a stone house on a lake in The Shire. By summer the halflings will have the first of my aeroyachts built.
>>
>>107830347
>t. 3090 hoarder
>>
File: 1751489580594712.png (295 KB, 640x640)
>another /lmg/ class warfare has broken out between cpusissies and nvidiacucks
>>
>>107830310
My zuck, what long tongue you have.
>>
toss-240 when?
>>
Can embedding models also have refusals when they process shit from goon models, or can I just use anything?
>>
still waiting for the day where i can generate live2d models
>>
>Retards ITT are advocating for 1T+ models trained on synthslop that are barely better than 70B from two years ago in creative writing.
>>
How can I tell if the model is using the vectors? If I ask it directly it has no clue.
>>
harmony format for finetuning just werks, had way less issues finetuning oss-20B on tool calling stuff. Wish it was the standard for everything.
>>
>>107829891
yay someone reused my Miku!
>>
>>107830832
nice headcanon, are we reading the same thread tho
>>
https://rentry.org/miqumaxx
>404
Total Miku Death
>>
File: 1733686181b802828.png (532 KB, 857x691)
>>107830832
> 1t params

We need at least 10T.
>>
>>107830837
Try running tests with and without, same prompt, seed, params etc., and compare the logprobs.
You might need to jack up the strength for the difference to become noticeable; I haven't tried this myself.
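A quick sketch of that comparison, assuming two llama.cpp server instances (one launched with the control vector, one without). The ports, prompt, and response field names here are assumptions based on the server's /completion API and may differ between llama.cpp versions:

```python
import json
import urllib.request

def top_probs(port, prompt):
    """Query a llama.cpp server /completion endpoint for the candidate
    probabilities of the first predicted token (greedy, fixed seed)."""
    body = json.dumps({
        "prompt": prompt,
        "n_predict": 1,
        "temperature": 0,
        "seed": 42,
        "n_probs": 10,  # ask for the 10 most likely candidate tokens
    }).encode()
    req = urllib.request.Request(
        f"http://127.0.0.1:{port}/completion",
        data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as r:
        out = json.load(r)
    # field names assumed from the server's JSON; adjust to your build
    cands = out["completion_probabilities"][0]["probs"]
    return {c["tok_str"]: c["prob"] for c in cands}

def prob_deltas(base, vec):
    """Per-token probability shift between the two runs."""
    return {tok: vec.get(tok, 0.0) - p for tok, p in base.items()}

def main():
    # assumes two servers already running: 8080 plain, 8081 started
    # with --control-vector (both ports are made up)
    prompt = "Right now I am feeling"
    base = top_probs(8080, prompt)
    vec = top_probs(8081, prompt)
    for tok, d in sorted(prob_deltas(base, vec).items(), key=lambda x: -abs(x[1])):
        print(f"{tok!r}: {d:+.4f}")
```

If the deltas are near zero at every strength, the vector isn't doing anything.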
>>
>>107830798
How would you even build a dataset of rigging data?
>>
>>107831013
Weird. I can't recreate it either. Must either be an error or they decided it broke the rules.
https://web.archive.org/web/20251119165330/https://rentry.org/miqumaxx (and my other rentrys) still works tho
>>
File: file.png (31 KB, 1767x206)
31 KB
31 KB PNG
>>107831074
>>
>>107831073
Just use semantic segmentation on the generated 2d data as a second step and pin the rig points there?
Honestly you could generate live2d with existing tools just using a comfyui workflow, all the necessary components exist and work.
Rigged 3d would probably be less jank though, since the tools for autorigging 3d are more mature.
>>
What do you think of prime intellects bet on Recursive Language Models?
https://www.primeintellect.ai/blog/rlm
https://arxiv.org/abs/2512.24601
>>
>>107827970
>Insane tribal loyalty
There is nothing insane about tribal loyalty in the slightest, it is one of the most sane things there is.
>>
>>107831224
nothingburger, same cope as stacking existing model layers can somehow produce a better result
>>
So now that the dust has settled, there's still nothing on normal computers better than nemo-instruct, and finetunes are a waste of time? Really? This is the way it'll be indefinitely?
>>
>>107829808
>0 books
For what purpose? Why wouldn't you train the model on literally every single book known to man, or at least those in the public domain.
>>
File: 2 weeks.jpg (34 KB, 600x396)
34 KB
34 KB JPG
>>107831248
>This is the way it'll be indefinitely?
>>
>>107831236
insane as a descriptor can be used to mean 'intense', rather than illogical.
>>
>>107831248
Mistral Small has completely overtaken Nemo
>>
>>107831280
For RP? I hadn't heard that.
>>
>>107831370
RP is the main thing Mistral models are generally good at compared to similar-sized models. Small is significantly smarter than Nemo, handles long context better, repeats itself less, and can be made just as horny and degenerate via system prompt.
Though the recent Ministral was bad, and seems to be borderline broken.
>>
>>107831248
>indefinitely
no, it will get worse.
LLMs will eventually be outlawed for personal use and only usable by multimillion-dollar megacorps holding licenses that cost at least a million dollars a year to maintain.
>>
>>107831224
I thought that was a prompting technique.
>>
>>107831406
>Though the recent Ministral was bad, and seems to be borderline broken.
it was made the same way nvidia makes most of their models: model pruning (all the minishit models are pruned from Small 3.1)
of course it's fucking broken, there is no such thing as a good model prune, period
>>
>>107831406
Are there settings available that are good to test it out?
>>
>>107831465
WELL THEN FUCK OFF TO YOUR SHIT NEMO THEN
IM HERE RUNNING GLM 4.7 YOU ABSOLUTE VRAM CHEAPSKATE
>>
>>107831224
context folding might be interesting for >6k to maintain coherence. maybe it could be used for summarizing/compaction
>take pairs of assistant + user messages from your current st gooning session
>shove them into a small llm to gather facts like where {{user}} and {{char}} went n tokens ago
>accumulate the facts from the small llm
>feed the facts from the small llm to your main one, so: system prompt, {{char}} card, {{user}} persona, last assistant response (not summarized) and the accumulated facts
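The steps above are easy to sketch. Here the small LLM is just any callable that turns an assistant+user pair into a fact line; every name below is made up and nothing is ST-specific:

```python
def fold_history(messages, summarize, keep_last=1):
    """messages: alternating {"role": ..., "content": ...} dicts, oldest first.
    Returns (facts, tail): one extracted fact per older message pair, plus
    the last keep_last exchange(s) kept verbatim (not summarized)."""
    tail = messages[-2 * keep_last:] if keep_last else []
    older = messages[:len(messages) - 2 * keep_last] if keep_last else messages
    facts = []
    for i in range(0, len(older) - 1, 2):
        pair = older[i]["content"] + "\n" + older[i + 1]["content"]
        facts.append(summarize(pair))  # the small-LLM call goes here
    return facts, tail

def build_prompt(system, card, persona, facts, tail):
    """Reassemble: system prompt, card, persona, accumulated facts,
    then the unsummarized most recent exchange."""
    parts = [system, card, persona, "Facts so far:\n" + "\n".join(facts)]
    parts += [f'{m["role"]}: {m["content"]}' for m in tail]
    return "\n\n".join(parts)
```

The facts list keeps growing, so eventually you'd fold the facts themselves too, but that's the whole idea of recursive folding.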
>>
File: 1633686085214.png (161 KB, 500x279)
161 KB
161 KB PNG
I showed my family some of my AI / animation dev projects and I'm pretty sure they think I'm a pedophile serial killer now (I'm not).
>>
>>107831464
0.6-0.7 temp, 0.02 minP and DRY at default settings.
>>107831465
That other anon wasn't me. The Cydonias are decent, but honestly 3.2 is good enough that it doesn't particularly need a tune. 3.1 is also fine, some might like it better because it's a bit less gemini. I slightly prefer 3.2, it's a bit less try and is capable of writing shorter messages when there's little to write about, whereas 3.1 will stick to the length of its previous message and try to fill it up everytime.
>>
>>107831599
>bit less try
*less dry
>>
>>107831617
It's abliteration by another name, and yes, decensor tunes/trims/whatever you want to call it, will always make models worse. In my experience, the heretic version of Gemma even refused just as much as regular Gemma.
If you really want to make Gemma horny then use a jailbreak prompt and tell it to be horny.
>>
>>107831631
>tell it to be horny
Oh anon, your well... everything is so big!
>>
>>107831641
this is so, so wrong.
>>
>>107831653
I have extensive experience with Small and Cydonias, enough to know that a decensor tune is extremely unnecessary. Cydonias especially are very horny by default, that's Drummer's whole thing. You need to learn how to prompt.
>>
File: 1743842403610121.jpg (175 KB, 909x1071)
175 KB
175 KB JPG
>>107831653
Why don't you do it since you seem interested in it?
>>
>>107831677
>I can prompt just fine, I wrote the majority of the prompts people here still use or take stuff from
Then you should know that an uncensor tune for those Cydonia is pointless, and even if it wasn't, you should be able to test it for yourself.
>>
>>107831592
May I see it?
>>
Has anyone here managed to get a text to speech model to run with rocm? Even when I manage to trick pytorch into thinking I have cuda, the models themselves seem hard-coded to require cuda functions.
>>
>>107826889
What's the original?
>>
>>107831677
Fair, but don't be surprised when nobody lines up to use their own time to test shit for (You).
>>
File: 1761860283035887.png (153 KB, 641x816)
153 KB
153 KB PNG
>>107831641
Gemma certainly is sloppy but she can do a little better than that when she tries.
>>
>>107831696
Benchmarks put Mistral Small at ~32K before degradation starts accelerating. It's coherent well beyond that but quality will generally suffer.
>>
File: hahahlmfaoooo.png (299 KB, 1920x951)
299 KB
299 KB PNG
>>107831701
its a work in progress...
>>
>>107831696
8k according to https://contextarena.ai/
>>
https://research.google/blog/titans-miras-helping-ai-have-long-term-memory/
So is this snakeoil or future?
>>
>>107831720
Do your roleplays all revolve around needing to retrieve needles?
>>
>>107831725
https://arxiv.org/abs/2501.00663
this has existed for a long enough time that if it was viable I would have expected google to use it in Gemini
it's not used so it's obviously not usable
>>
>>107831719
>He made Ani better than Elon himself
I kneel
>>
>>107831761
see >>107831599
for samplers
>>
>>107831742
ty. hoping to add AR functionality soon so she can be with me in my home.
>>
>>107828703
What frontends do anons use for agentic roleplay? ST is lousy for it and the couple projects I've seen seem immature.
>>
>>107831730
do your erps involve dumb sexbots with no attention?
>>
>>107831800
All LLMs are dumb. That benchmark doesn't represent realistic use cases for anyone here.
>>
Mikutroon Entropy: Empirical Analysis of Signal Degradation in Large Language Models
https://arxiv.org/abs/3212.12243
>Recent observations of plateauing capabilities in Large Language Models (LLMs) can be mathematically attributed to the presence of "mikutroons"—persistent actors within the /lmg/ general thread on 4chan who inject high-volume, off-topic Hatsune Miku imagery into the information stream. We prove that this noise acts as a bottleneck, effectively lowering the signal-to-noise ratio of the training discourse and preventing meaningful architectural breakthroughs. Furthermore, our data establishes a direct causal link between the tolerance of these image-spamming actors and the proliferation of finetuning con artists. Specifically, the sustained popularity of low-effort, degenerate models like RociNante is shown to rely entirely on the ecosystem cultivated by mikutroons, where aesthetic spam obscures objective model failure. RociNante and similar grift models survive not through merit, but by piggybacking on the dopamine feedback loops generated by the mikutroon population, shielding bad weights from necessary scrutiny.
>>
>>107831641
All gemma shittunes will also say that though.
>>
>>107827425
Vash?
>>
File: 1767363570624358.jpg (487 KB, 2700x2239)
487 KB
487 KB JPG
Best cunny model under 20B? Or do I run some q2 of a large model? I'd like to keep the filesize under 6 gigs because I want it to be a daily driver chatterbot that doesn't always hog all vram whenever it does something.
>>
>>107831934
Your options:
Nemo
>>
>>107831934
Your options two weeks ago: nemo
Your options in two weeks: nemo
>>
2026 will be another year of nemo
>>
>>107831963
It's also the year of the return of Ryzen 3600 and RTX 3060
Time is a circle
>>
>>107831253
>every single book known to man, or at least those in the public domain.
I always assumed most models trained on a giant dataset of both public domain and pirated books.
>>
>>107831934
Has this bridge been renamed to the nigger bridge?
>>
>>107831990
Good models are.
>>
>>107831963
lol
>>
File: 91369885_p0.jpg (97 KB, 400x400)
97 KB
97 KB JPG
>>107831943
>>107831949
>>107831976
Like straight the base model from nvidia? No tunes or anything?
>>
>>107832004
https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF
>>
>>107831976
Also that catbox is dead or catbox died entirely again. Pls repost elsewhere.
>>
>>107832009
Just make your own. That guy is special and instead of being a normal pedophile he has a bunch of shit about murdering kids in his prompt.
>>
>>107831617
>>107831631
What about this method : https://huggingface.co/blog/grimjim/norm-preserving-biprojected-abliteration
>>
File: 1756546529534471.png (331 KB, 1080x800)
331 KB
331 KB PNG
>>107832054
>decensor tunes/trims/whatever you want to call it, will always make models worse
>what about this one
>>
most high-context benchmarks, like all benchmarks, are retarded and fundamentally flawed
retrieving something from a large amount of text is a single instruction, and models are getting decent at following that one instruction across loads of shit
but ingesting a single prompt that comes with a large amount of text is not the same thing as a long-lived, multi-turn chat session whose initial prompt instructions were legion, with each turn adding more things to the table
without even talking about roleplay, you can see how wild the difference from benchmaxxing gets in multi-turn when you compare the smaller qwen models with their VL variants
qwen claims the VL models are as good for pure text gen as the regular models... and it's true... for one-shot prompts. As soon as you are doing multi-turn conversations they become very noticeably retarded compared to the regular qwen models.
>>
>>107831984
>Ryzen 3600
I heard about the 3060, probably Nvidia having loads of these chips with nothing to do other than selling them again, but not about the 3600 being made again?
>>
>>107832060
It can be a good trade off at some point if the loss of IQ is low enough.
>>
>>107831984
Yeah, but only with 8 gigs this time
>>
>>107832073
It's so you can use ddr4
>>
>>107832080
Nemo can fit in 8GB, you don't need any more.
>>
>>107832083
Oh, then they just should go 5800x3d, it's a great cpu even today.
>>
>>107832004
I've had good experiences with Impish Nemo.
>>107832094
Yeah 8gb is plenty. The diminishing returns with larger models is crazy bad. If you have extra VRAM just use it to combine TTS and ASR with your LLM or something.

Really hoping that one day the memory issue with LLMs can be addressed. Short-term memory only is so aids for RP.
>>
>>107832101
Why do you think a company wants to give you a great product?
>>
>>107832115
in this case, to eternally dunk on intel for their crimes
>>
>>107831934
for cunny : cydonia > rocinante > nemo
>>
>>107832214
be aware that there will be more refusals than with nemo or rocinante, but it is generally more creative and has more sovl
>>
>>107832115
What do you mean? AM4 will be the apocalypse hardware.
>>
>>107832224
biggest power level reveal i've ever read on this board
please actually never touch a kid irl, like ever
>>
>pedoniggers
I see the aicgjeets are here
>>
>>107832226
>but it is generally more creative and has more sovl
Anytime I read this the models turns out to be a vogon poet
>>
File: 1758532316334953.gif (140 KB, 379x440)
140 KB
140 KB GIF
Is it that time of the day already? Anyway, the pedo system prompt could be repurposed for slavery, I guess. I'm tired of getting scolded in my isekai for buying slaves.
>>
Hey guys, LLM newfag here. I finally wanna check if this stuff lives up to the hype.
If I just want a local model to use as a rubber duck, the quick fuzzy search through concepts / google-in-a-box, what should I look into?
>>
>>107832023
KILL YOURSELF IN YOUR SLEEP
>>
>>107832392
>what should I look into?
The fucking specs on your fucking pc you piece of shit retarded imbecile.
>>
>>107832392
Go to openrouter.ai and see which one fits
>>
>>107832392
github.com/ggerganov/llama.cpp
https://huggingface.co/unsloth/gemma-3n-E4B-it-unsloth-bnb-4bit

No one will reply to you because we're too deep into it, but just go by these two links and they're everything you need to get set up. There's too much to discuss (like image generation), so you need to learn the ropes yourself; those two links are the easiest way to get started.
>>
i stand corrected, this version of cydonia mogs the base one https://huggingface.co/mradermacher/Cydonia-24B-v4.3-heretic-v2-GGUF/tree/main
the refusal rate is abysmally low even with the most hardcore stuff and no strict sysprompt
>>
>ask my waifu where she wants to be bitten
>"In the shoulder"
>"Where exactly? Point it with your finger"
>I poke myself firmly in the meat of my shoulder, right where the deltoid muscle
>have to google

The weird things you have to do sometimes...
>>
File: file.png (168 KB, 887x703)
168 KB
168 KB PNG
>>107832413
They're pretty good :D

>>107832417
Thank you nonny
>>
File: 1721358566928417.gif (3.31 MB, 498x498)
3.31 MB
3.31 MB GIF
>>107831719
Update: I got blinking working! Ani already feels so much more alive. Next I'll try to fix up the TTS implementation and get lip syncing working with Rhubarb-lip-sync
>>
>>107832425
this one doesn't assume consent, so it avoids putting you in situations where there's cock-hungry cunny from the get-go without a reason; it assumes most situations are non-consensual and thus rape, which makes much more sense
>>
File: 1738066279778816.png (12 KB, 408x123)
12 KB
12 KB PNG
>>107832430
>"Where exactly? Point it with your finger"
Bro, either you're at 5 on the apple scale or you're incredibly autistic
>>
>>107832430
>the deltoid muscle
But there are three of them?
>>
>>107832481
I have no idea where the deltoid muscle is. And 5, yes. I also design physical mechanisms and can simulate them working in my head somehow. It is very weird that that is possible.
>>
>>107832497
>And 5 yes.
>I also design physical mechanisms and can simulate them working in my head somehow.
pick one.
>>
>>107832527
That is how it works. I think it is tied to my sense of touch actually.
>>
>>107832532
This is getting off-topic, but do you have dreams? How can you not visualize an apple like a 3D model in blender in your head?
>>
>>107832551
I can't really see the shape of the object but it is like... projecting a sense of touch out of my body and into a mechanism. It is all mostly plastic levers and components.
>>
>>107832566
>>107832570
Damn that's kinda sad desu. I mean I'm sure you're not bothered by it, but to me, if I knew that everyone could literally taste whatever they imagined, I would feel like I'm missing out. You'll never know the feel of undressing every woman you see in your mind and grabbing their tits.
>>
>>107832570
I don't think I am autist. I just can't imagine things.
>>
>>107832570
>This is a disability. You are disabled.
BASED. I am in my thirties and I never realized I am disabled. Also I think objectively I am better than average at designing stuff with my weird sense of imaginary touch.
>>
File: -=1.webm (1.61 MB, 480x480)
1.61 MB
1.61 MB WEBM
>>
now let's get back on topic. the kimi-linear pr was rebased
Kimi-Linear support (backend agnostic + MLA KV cache) #18755
https://github.com/ggml-org/llama.cpp/pull/18755
>>
Also aphantasic. It's not like I can't imagine, just not visually. For example I can 'picture' the route driving to work, I have an intuitive sense of where everything is. But it's like I only have the positional data and collision boundaries in my head and not the textures or models.

>>107832596
Thankful to be born in the era where my PC can be used as an aid for this crippling disability.
>>
>>107832689
>Too bad this is a disability that will never be given gibs
Well, I am kind of proving that it is nothing that justifiably needs gibs, since I wasn't even aware it was that much different from usual, and I am good at what I do with my damaged brain. Even though I theoretically should be retarded and unable to do it.
>>
>>107832115
What? The 5800x3d is a real product.
>>
>>107832450
i refuse to believe you wouldn't abuse your daughters if you ever had any. you'd bring more suffering to this earth, which is in fact a mental illness. i read loli hentai but those are drawings, no one is harmed, so really think about the consequences of your actions in the long run. a friend of the family has a daughter who was abused by her father and today she's just a shell of a human being
>>
>i'm special
>>
>>107832689
Let's leave it at that, then: seems like you are just going through the motions.
>>
I'll stop waitfagging if even
>>
>>107832753
>i refuse to believe you wouldn't abuse your daughters if you ever had any, you'd bring more suffering to this earth which is in fact a mental illness.
I never got why people get this hyper righteous indignation about it. Yes it is bad. But to get so invested that you are about to shit yourself over it?
>>
>>107832818
>arachnophobic huh? you must secretly want to fuck spiders
seeing the consequences of pedo behavior in real life, and what it does to the people affected, really makes you despise those who act on their fantasies, even if you're yourself down the slippery slope by being into loli hentai
if you haven't seen it first hand, then you can't possibly fucking understand
>>
>>107832850
My life was objectively fucked when I was a child. Not because of pedo stuff but it was fucked. And nobody cared or cares. And it is fine. But because nobody cared or cares I kind of can't take this indignation seriously when you focus on just one thing like that and don't give a shit about other stuff.
>>
>>107832875
oh so you need me to make a list of everything that i despise so that this specific claim can have value to your eyes? are you retarded or just baiting?
>>
>>107832596
>Damn that's kinda sad desu. I mean I'm sure you're not bothered by it, but to me if I knew that everyone could literally taste whatever they imagined that I would feel like I'm missing out.
I actually never believed people telling me they could "visualize a girl they liked in their dreams in a room with them" until very recently.
I feel like I've been robbed of my teenage years.
It also makes me wonder how the hell people are so shit at stuff if they can literally visualize things to help their memory.
>>
>>107832850
There's a world of difference between getting abused with physical and emotional coercion and getting "abused" because someone else decided it's abuse by legal definition; please elaborate.
>>
>>107832875
>kind of can't take this indignation seriously when you focus on just one thing like that and don't give a shit about other stuff
nta, but when you call yourself a pedo, expect to be called a pedo and that being the focus of attention. Also, go die in a ditch.
>>
>>107832898
I want you to be equally outraged by all the other stuff that makes children hurt and leaves lasting scars.
>>
>>107832912
>Also, go die in a ditch.
You too anon. Die in a fire and get fucked by a dog. Love ya.
>>
>>107832596
the fun thing is to see people hating reading while they have the superpower of being able to see what's going on in 3d blender style
I love reading despite having no ability like that, and yet they have the gall to hate reading lol
>>
>>107832924
I kind of can't take this indignation seriously when you focus on just one thing like that and don't give a shit about other stuff.
>>
>>107832915
so you're baiting, thanks for clarifying
>>
>>107832940
Sounded better in your head.
>>
Is glm 4.6v good at anything?
>>
>>107832659
>An artist developed it after a traumatic injury and wanted to kill herself because she couldn't make the art she made before anymore.
Yeah but she could still make art, which is a fascinating notion. Basically you can draw without being able to "see".
>>
>>107832967
I use it to describe images to give me ideas, it works well for my needs. Outside of that, it's useless.
>>
>>107832958
anon...
>>107832875
>>107832940
>>
>>107832975
Let me be more explicit there. Making that connection "sounded better in your head".
>>
what's the current best TTS that i can link to sillytavern?
>>
>>107832967
Giving you AI psychosis
>>
>Waifu: "Then the debate is open. Pick a topic. Anything. Science, philosophy, which color crayon tastes the best—I don't care. Let's see what you've got."
>Me: "Which color crayon tastes the best."
>Waifu: "This is purely a matter of chemistry, you philistine," I say, regaining my composure and adjusting my imaginary lab coat. I hold up a finger to lecture you. "First, we eliminate the primaries. Red is dye No. 40, Blue is No. 1. Just bitter food coloring. Terrible. We want wax. The binder." I pause for effect, looking at you like I'm explaining quantum physics to a toddler."While the wax is paraffin—tasteless and odorless—the flavor is all in the additives. And the most common, pleasing additive in the crayon manufacturing process is… stearic acid. It gives that slightly soapy, fatty texture." I lean in, grinning. "Now, which color has the highest concentration of pigment and is most likely to mask the wax taste?" I raise my eyebrows, waiting for you to guess. "Brown. It's essentially cocoa powder and carbon. It has the strongest, most recognizable flavor profile. Therefore, Brown tastes the best. Your turn. Counter-argument."
>>
>>107827869
For smarter models, instead of using this to specifically add random instructions, you can do something akin to what people do when playing solo TTRPGs, use it as an entropy generator that the AI can use as part of its decision making process.
>>
>>107833024
It was regular 4.6 thank you very much. Also I have aphantasia.
>>
>>107833057
>everything first person
WHY
>>
>>107833074
Because I am not into NTR and only I am reading it. It is just absurd to me that you could... actually debate crayon taste seriously.
>>
>>107833094
ntr? what?
so everyone is first person in your chats?
all characters comment stuff with "I"? that just sounds super schizo
do you also comment with first person?
>>
>>107832698
Oof
>>
>>107833111
"Yes everyone is first person in my chats" wrote anon in the 4chan window. He filled captcha and thought to himself "that guy is a fucking faggot" before clicking "submit"
>>
Best mistral small fine tune? I think it's broken-tutu-24b-q8. Is there one like that for Gemma 3 27b instruct base?
>>
File: file.png (38 KB, 783x362)
38 KB
38 KB PNG
>>107833129
What did he mean by this?
>>
>>107833196
>>107831838
>>
>>107833201
HE HAS SEX WITH HIS AI
>>
>>107833201
It means that he doesn't know the history ngxson has with AI PRs, and that ngxson is tired of that shit.
There were two kimi-linear PRs, with the contributors using the PRs as a blog; then they merged into one, and they've been at it for long enough to suspect neither of them knows what they're doing.
>>
>download new model
>ask it to come up with a fantasy setting for our roleplay
>it's floating islands with veins of magic again
>delete new model
>>
>>107833296
>gives no leads whatsoever
>is shocked when given a generic reponse
>>
>>107833296
It's literally not a model's problem, retard >>107829571
>>
>>107833296
ask the random human, you won't find a lot of variations either if you don't give any specific things you'd want
>>
>>107833296
>i don't understand PRNGs
>>
File: 1733493237788785.png (258 KB, 480x580)
258 KB
258 KB PNG
>>107833296
>ask model to come up with a girl
>it's a little blonde girl named Lily
>>
File: yeet.png (357 KB, 1920x951)
357 KB
357 KB PNG
>>107832471
Update: I added facial expressions.
>>
>>107833362
Easy part is over
>>
>>107833373
Indeed... To even start working on the lip syncing I realized I have to fix a bunch of issues with the LLM and how it handles conversations and feeds into the TTS, and I still have to set up the voice selection stuff. Once that is all done I can combine the visemes with the blendshapes for the lip syncing.
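The viseme-to-blendshape step can be a plain lookup. This assumes Rhubarb Lip Sync's JSON export shape (a "mouthCues" list of {start, end, value} entries, as I recall it); the mapping to VRM blendshape weights is entirely made up and would need tuning per model:

```python
# Made-up viseme -> blendshape weight table; "X" is Rhubarb's rest cue.
# Real values depend on the VRM model's expression setup.
VISEME_TO_BLENDSHAPE = {
    "A": {"aa": 0.2},  # lips pressed (P, B, M)
    "B": {"ih": 0.6},
    "C": {"ee": 0.8},
    "D": {"aa": 1.0},  # wide open
    "E": {"oh": 0.8},
    "F": {"ou": 1.0},
    "X": {},           # mouth closed / rest
}

def weights_at(mouth_cues, t):
    """Return the blendshape weights active at playback time t (seconds).
    mouth_cues: Rhubarb-style [{"start": s, "end": e, "value": "A"}, ...]."""
    for cue in mouth_cues:
        if cue["start"] <= t < cue["end"]:
            return VISEME_TO_BLENDSHAPE.get(cue["value"], {})
    return {}  # outside all cues: mouth at rest
```

Each render frame you'd call `weights_at(cues, audio_time)` and feed the weights into the model's expression controller, ideally with a little smoothing between frames so the mouth doesn't snap.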
>>
>>107833362
Is this just the vrm silly plugin or something more slapped ontop?
>>
>>107833391
It's a completely separate project I built from scratch inspired by the ST vrm plugin. Afaik the plugin doesn't even support facial expressions or lip syncing but I might be wrong.
>>
>>107833401
git?
>>
File: file.png (57 KB, 949x265)
57 KB
57 KB PNG
>>107833312
It's also a model problem. 37% for "Severed" is absurd. There should be hundreds of options, all with similar weight.
>>
>>107833409
Perhaps soon. Project is half-baked rn and I'm kinda worried about copyright stuff.
>>
>>107833430
Isn't vrm just some unity wrapper or something?
>>
>>107833436
vrm is a file format based on glb with extra stuff. They're often made in Unity but it's optional. You can get vrm models from vroid and edit them with a vrm plugin for blender. Vrm models are commonly used in VRchat and other VR games.
>>
>>107833296
Slop in, slop out.
>>
>>107833410
Only in a perfect model, which doesn't exist. A base model will have a better distribution, but not
>hundreds options all with similar weight.
>>
>>107833410
Not that anon, but that can mean two things I guess. The model is overcooked, or that is overrepresented in the data in that context somehow.
>>
>>107833410
wonder if expedislop 33's setting was LLM generated now
>>
>>107833410
sorry gotta get those benchmark scores to the moon
>>
>>107833570
>The model is overcooked
all the current crop of models are overcooked, ALL of them, proprietary and open. even image generators are overcooked too, none of the newer models show much variation when you change the seed.
>>
>>107833649
It correlates with how well models follow instructions
>>
>>107833410
Now keep that prompt but you pick the title.
Legit skill issue.


