/lmg/ - a general dedicated to the discussion and development of local language models.

Previous threads: >>107595736 & >>107588615

►News
>(12/17) Introducing Meta Segment Anything Model Audio: https://ai.meta.com/samaudio
>(12/16) GLM4V vision encoder support merged: https://github.com/ggml-org/llama.cpp/pull/18042
>(12/15) Chatterbox-Turbo 350M released: https://huggingface.co/ResembleAI/chatterbox-turbo
>(12/15) Nemotron 3 Nano released: https://hf.co/blog/nvidia/nemotron-3-nano-efficient-open-intelligent-models
>(12/15) llama.cpp automation for memory allocation: https://github.com/ggml-org/llama.cpp/discussions/18049

►News Archive: https://rentry.org/lmg-news-archive
►Glossary: https://rentry.org/lmg-glossary
►Links: https://rentry.org/LocalModelsLinks
►Official /lmg/ card: https://files.catbox.moe/cbclyf.png

►Getting Started
https://rentry.org/lmg-lazy-getting-started-guide
https://rentry.org/lmg-build-guides
https://rentry.org/IsolatedLinuxWebService
https://rentry.org/recommended-models
https://rentry.org/samplers
https://rentry.org/MikupadIntroGuide

►Further Learning
https://rentry.org/machine-learning-roadmap
https://rentry.org/llm-training
https://rentry.org/LocalModelsPapers

►Benchmarks
LiveBench: https://livebench.ai
Programming: https://livecodebench.github.io/gso.html
Context Length: https://github.com/adobe-research/NoLiMa
GPUs: https://github.com/XiongjieDai/GPU-Benchmarks-on-LLM-Inference

►Tools
Alpha Calculator: https://desmos.com/calculator/ffngla98yc
GGUF VRAM Calculator: https://hf.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator
Sampler Visualizer: https://artefact2.github.io/llm-sampling

►Text Gen. UI, Inference Engines
https://github.com/lmg-anon/mikupad
https://github.com/oobabooga/text-generation-webui
https://github.com/LostRuins/koboldcpp
https://github.com/ggerganov/llama.cpp
https://github.com/theroyallab/tabbyAPI
https://github.com/vllm-project/vllm
►Recent Highlights from the Previous Thread: >>107595736

--Debunking AI model misconceptions and explaining expert activation mechanisms:
>107602542 >107602610 >107602721 >107602718 >107602807 >107602891 >107602773 >107602789 >107602815 >107602829 >107603139
--GPU memory allocation error and platform-specific workaround discussion:
>107595928 >107595961 >107595964 >107596823 >107595997 >107596039 >107596002 >107596320
--Claude model finetuning strategies and dataset sourcing challenges:
>107596365 >107596486 >107597102 >107597835 >107598289 >107598359 >107601516 >107601986
--Google's Gemma Scope 2 for AI safety research:
>107601371 >107601386 >107601407 >107601519 >107601390 >107601636 >107601651
--Exploring Qwen-Image-Layered for high-resolution animated portraits in Stellaris modding:
>107603117 >107603295 >107603429 >107603433
--Strategies for enhancing AI memory retention and context management:
>107597731 >107597742 >107597758 >107597848
--Parroting issue in AI models linked to human roleplay and data contamination:
>107596587 >107596717 >107596838 >107596888 >107596877
--Llama Scout inference engine setup with future finetuning plans:
>107595813 >107600238 >107600258 >107600413
--Historical pre-WWI LLMs and assistant-like behavior:
>107599240 >107599326 >107599724
--Memory optimization challenges for running LLMs:
>107599304 >107599331 >107599359 >107599418 >107599436
--Speculation about GenieScope updates and forced LLM releases:
>107602168 >107602257 >107602763 >107602413 >107602411
--Speculation and concerns about upcoming GLM 4.7 release:
>107602431 >107602719 >107602518 >107602778
--GLM 4.6 Q8 vs Kimi K2 Q3 speed discrepancy due to parameter activation differences:
>107602871 >107602902 >107602925
--/lmg/ Book Club:
>107604213 >107604386 >107604515
--Miku (free space):
>107595911 >107597840 >107600212 >107603433 >107604562

►Recent Highlight Posts from the Previous Thread: >>107595738

Why?: >>102478518
Enable Links: https://rentry.org/lmg-recap-script
When are these spam generals getting removed from /g/?
>>107604637
so true, this could have been another apple vs windows thread!
>>107604637
When you stop sucking dick (never)
>>107604689
don't forget stallman posting
>>107604637
This general even at its brimmiest is still better than 90% of /g/.
>>107604739
True, we should post Eliezer Yudkowsky instead
>>107604637
>muh precious /g/ catalog
I'm looking now, and /g/ is becoming mostly generals. That said, most of the stuff outside the generals is still trash
>ragebait / woke
>/pol/ and /x/ tier conspiracy
>russians / economic collapse
>>107604763
i loving your book sir!
>>107604765
I'd go as far as to say /g/ is more computer illiterate than the other boards despite its theme.
>>107604790
it do be consoom technology board more than anything
Gemini 3 Pro is the first model I would describe as "close to usable in most cases". I think that even if the AI boom turns into a bubble worse than the dot-com crash or 2008, Google will end up fine, since most of their stuff is in-house.
>Verification unrequired
>click post
>No valid captcha
The absolute state
>>107604840
oh https://github.com/TuxedoTako/4chan-xt/issues/207#issuecomment-3662463745
>>107604833
Google is also one of the few who can integrate AI into a lot of consumer products, rather than just being a dumb API provider / chat UI like OpenAI. Gmail, Google Docs, Drive, Photos, etc. all have a massive number of users, so any AI-boosted feature there gains massive visibility. The closest to competing with them are companies that do not make good models (Microsoft, Crapple).
>>107604852
>xt
lol
>>107604637
>>107604765
/lmg/, /ldg/ (when a new model releases), and a few generals on /vg/ and /tg/ are the only places I visit on this godforsaken website. I don't look at any catalogs.
>>107604790
>more computer illiterate than the other boards despite its theme
I found the same thing. In a way it sort of makes sense: if the anons knew how to do it, they wouldn't need to come to /g/ to ask questions, right? The problem is that /g/ is so low-content and consumption-focused that there are no anons left to answer any questions. The only place where knowledgeable anons actually hang out is the generals, which is why /g/ is being subsumed by generals.
Local Miku General
>>107604765
I blame rapeape for the current state of many boards; everything has to be about culture war garbage.
>>107604790
trvke
>>107603228
a cult that, for reasons unknown, made the least """"safe"""" model out of all of them
Things just happen in weird ways
>>107604607
>--Historical pre-WWI LLMs and assistant-like behavior:
that actually seems fun
>>107604913
>trvke
Yet you are parroting twitter zoomer buzzwords.
>>107604934
>Its banned off everywhere else unless it's the leftist version
what are you talking about? places like twatter have become more /pol/ than the real /pol/
rightoids have such a persecution complex
How retarded is pic related as a build?
The plan is to work and play around with this for a while, mostly for local coding, because I'm autistic and don't like being dependent on the cloud for this.
If I need more, the option is there to reuse 93% of this build and move to a 4x card system on a more modern platform (once the RAM prices have recovered).
Another option would be a Framework AI 395 128GB motherboard. But those are 2K, have no upgrade path, and are bandwidth limited.
>>107604956
you're not doing shit with 16 rams
>>107604598
I can't fucking believe there is no 24GB 50 series card to have a middle ground between uselessly low VRAM and go-fuck-yourself-expensive.
Like I can afford it, but should I?
>>107604970
was prolly planned for some super models but then sam happened
>>107604956
So 16GB of RAM + 64GB of VRAM?
Compared to something like that Strix Halo thing, you are paying a lot more per GB, but at least you get the benefit of upgrading later.
Actually, wouldn't getting a Strix Halo + an M.2-to-PCIe adapter + one 48GB GPU be more cost effective?
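Napkin math on the $/GB point, as a sketch. Every price here is an assumption, not a quote: ~$1300 per 32GB R9700-class card, and the ~2K Framework/Strix Halo 128GB board mentioned above.
[code]
# All prices are placeholders for illustration, not real quotes.
options = {
    "2x R9700 (64GB VRAM)":       (2 * 1300, 64),   # assumed ~$1300/card
    "Strix Halo 128GB (unified)": (2000, 128),      # the "2K" board above
}
for name, (usd, gb) in options.items():
    print(f"{name}: ${usd} -> ${usd / gb:.0f}/GB")
# -> roughly $41/GB for the GPU route vs ~$16/GB for unified memory.
# The GPU route costs 2-3x more per GB, but the memory is much faster
# and the platform can take more cards later.
[/code]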
>>107604990
Yeah…
>>107604677
>satanic operations on matrices derived from the Satan himself
Thank you Satan for saving my life.
>>107604852
>last commit 7 months ago
What went wrong?
>>107604955
exactly, create a fresh account on xitter and your feed will automatically be filled with elon-approved rightslop accounts, this isn't 2019 anymore
>>107604964
Once the model is in the GPU, it doesn't matter much, right? There are benchmarks where cards hooked up to an RPi 5, with 16GB, still get decent results: https://github.com/geerlingguy/ai-benchmarks?tab=readme-ov-file
>>107604998
Well, Framework can do some things pretty okay, but it also has a sort of e-waste aura around it. And it's RDNA 3.5.
>cost effective
Weirdly enough, looking at pic related, a Pi 5 + R9700 isn't bad. That kinda inspired my proposed build.
>>107605235
>Once the model is in the GPU
problem is even with 64GB of VRAM you won't fit much of anything good
>>107604970
If you're interested in running LLMs and are going to get a 5090, you might as well get a 6000 Pro instead. Even with small models I imagine it's nice to be able to have both an LLM and an SD model loaded simultaneously, so you don't have to swap to get lewd illustrations of your RP scenes.
t. bought a 5090 instead of a 6000 Pro
Google reuploaded medasr on their HF account, willing to bet that's what was being hyped for today?
>>107605308
i mean who knows at this point but this ain't even a *gemma* thing
>>107605308
sir... week not over yet... gemmy 6 soon
Gemma sirs?
>Google releases a new model called Gemmas Cope
Which models are best at lewd roleplay while also being able to follow a game system's rules and track stats/states?
>>107605502
>https://huggingface.co/bartowski/moonshotai_Kimi-K2-Thinking-GGUF
>>107605518
What's up with those gargantuan models? Who is able to load this?
I have a 4090 + 128 GB RAM and wouldn't be able to load even the 1-bit quants.
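The arithmetic backs that up. A rough sketch (the parameter count and effective bits/weight are assumptions: K2 is ~1T total params, and "1-bit" GGUF mixes in practice average well above 1 bit per weight):
[code]
# Rough estimate of whether a quant fits in 24GB VRAM + 128GB RAM.
params = 1.0e12          # assumed ~1T total params for Kimi K2
budget_gb = 24 + 128     # 4090 VRAM + system RAM
for bpw in (1.0, 1.6, 2.0, 4.5):
    size_gb = params * bpw / 8 / 1e9   # weights only, no KV cache
    fits = "fits" if size_gb < budget_gb else "does not fit"
    print(f"{bpw} bpw -> ~{size_gb:.0f} GB ({fits} in {budget_gb} GB)")
# True 1.0 bpw would nominally squeeze in at ~125 GB, but the actual
# "1-bit" mixes land around 1.6-2.0 bpw effective, i.e. 200-250 GB,
# before you even count KV cache and activations.
[/code]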
>>107604955
twatter is a cesspit where nobody will see your shit. try to discuss something in that popularity contest.
case in point: my previous post was deleted, yours is still up.
>>107604598
How to make porn?
>>107605554
First you find a woman and then you pay her money. Bring a camera and save the video.
>>107604637
Why the poor edit of my waifu? For what purpose?
>>107605554
Sorry, I can't assist with that.
>>107604833
Gemini 3 learned too much to mimic midwits. It behaves like a midwit, referencing pseudosciences, pop "culture" and so on instead of actual knowledge. Maybe it can be fixed with some prompting and by going out of our way to reduce the amount of tokens.
>>107605565
I have a woman. I want to create a stable diffusion model of her to let all of you neets make porn for 4.99 a month.
>>107605715
wrong general buddy. we all have GPUs
>>107604955
Anon's post being deleted invalidates your point desune.
>>107605308 >>107605341 >>107605438
Turns out it actually was medasr.
https://x.com/osanseviero/status/2002121284688490706
>>107605357
The week is effectively over. You know what got released on a Saturday? Llama 4.
>>107605884
ye... this pretty much confirms no 4 until next year imo
>>107604944
It could have been, if it wasn't filtered and gated on top of that because it's still too "toxic"
>>107604598
I've given up hope for privacy and started using cloud models
How do I keep believing in local when I don't have enough resources
https://youtu.be/g7Ak6VpEIvs?t=254
We are going to make it one day bros.
>>107605884
Friendship ended with medasirs. Now Altman is my best friend
>>107606187
I won't have a retarded sub-10B model as girlfriend anon. I am too smart for a retard.
>>107606202
>implying we simp for Neuro
>>107606187
I haven't seen any footage of Neuro in what feels like years.
Is it just me or is her voice soulless now?
>>107606187
That voice is so fucking cringe.
Why the fuck would you pedos want a child as a girlfriend? Someone who doesn't have any idea what the real world is like and you can't discuss anything more sophisticated than children's movies?
>>107604213
>Give me some /lmg/ recc books for the trip that I can load onto my tablet.
Last time the book club was a regular thing an anon put together a bundle and the link is still up: https://files.catbox.moe/hefnnc.rar
I would also recommend Daemon by Daniel Suarez. Distributed AI attempting to destroy the world and rebuild it anew is as /lmg/ as it gets.
>>107606269
vtroons should be kept in their swarm containment
>>107605884
GLM4.7 700B-50A will save local, trust in the plan...
>>107606269
>actual vedalbeggar
>>107606269
Every time I check on his twitch project, it seems to reach a new low. I can't believe how much his fanbase has deteriorated since the beginning. If you picked the most retarded guy on /aids/ he'd be the smartest of the bunch there.
>>107606346
no
>>107606418
unrepentantly so
>>107606467
picrel
>>107606527
Yeah, you're Exhibit A of the retarded fanbase. No need to repeat yourself
They're teasing us with racist LLMs but not releasing them
https://github.com/DGoettlich/history-llms
>>107606372
I will run it at Q2!
>>107606603
It's probably just shit, so it would be embarrassing to release. There can't be enough training data that's verified old enough for this to work and also pretrain a good model. This is my cope, because it would otherwise be really cool to talk to a model like that
>>107606603
>teasing us with racist LLMs
You're trying too hard to fit in.
>>107606628
It's a series of 4B models, each trained on 80B tokens. They're so severely undertrained, they wouldn't be good for much but vaguely correct trivia written in funny English.
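For scale, some napkin math (the ~20 tokens/param optimum is from the Chinchilla paper; the Llama 3 figure is the publicly stated ~15T-token run):
[code]
# Tokens-per-parameter ratios. 4B params on 80B tokens sits exactly
# at the ~20 tokens/param "Chinchilla-optimal" point, which is tiny
# next to modern overtrained small models.
runs = {
    "history-llm 4B": (80e9, 4e9),
    "Chinchilla 70B": (1.4e12, 70e9),
    "Llama 3 8B":     (15e12, 8e9),
}
for name, (tokens, params) in runs.items():
    print(f"{name}: {tokens / params:.0f} tokens/param")
# history-llm 4B: 20, Chinchilla 70B: 20, Llama 3 8B: 1875.
# "Undertrained" here means by today's standards, not vs. Chinchilla.
[/code]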
>>107604515
Continuing /lmg/ book reccs
I ended up pulling copies from the Polity series and Stars and Bones.
That, and PKD's Valis, which I've not read. We'll see how they are.
>>107604728
>Early Asimov
I've read a lot of his stuff, but I doubt I've read all of it. And it was a long time ago... I should go back and look again at his work.
>>107606659
projection from newfag
>>107606628
There are examples in the readme. Seems based to me
>>107606672
they don't have all the math and coding synthetic slop data. 80b is enough for a monolingual chat bot.
>>107606603
like talking to a time capsule, I like the idea of time-gated models, can't wait to play around with them
Why's everyone freaked out about ram shortages when you can run full r1 on a single 8gig GPU?
>>107606603
>university of Zurich
>Responsible access frameworks
Enjoy having to send your use case + a logged API to use the model
>>107606758
ollama run deepseek
>>107606628
>There can't be enough training data that's verified old enough for this to work, and also pretrain a good model.
Could always be solved by augmenting with synthetic data.
>llama-server just werks and also has a pretty decent built-in web UI
Why didn't anyone tell me about this? I spent months on ooba + silly fussing with cryptic template configs that never worked properly, spitting garbage ugly outputs. When dealing directly with llama.cpp you just drop the gguf and go.
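For anyone else who missed it: llama-server exposes an OpenAI-compatible API alongside the web UI, so once it's running (e.g. llama-server -m your-model.gguf, model path being a placeholder) anything can talk to it. A minimal sketch against the default port:
[code]
# Minimal client for llama-server's OpenAI-compatible endpoint.
# Assumes the server is already up on the default 127.0.0.1:8080;
# the web UI lives at the root URL.
import requests

resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Say hi in one word."}],
        "temperature": 0.7,
    },
)
print(resp.json()["choices"][0]["message"]["content"])
# The server applies the chat template embedded in the gguf itself,
# which is why there's no template config to fuss with.
[/code]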
>>107606784
anon,,
>>107606372
q1, here I come...
>>107606799
If anyone recommended you ooba post-2023, it was a shitpost.
>>107606799
It kinda sucks outside of assistant tasks. Silly is more specialized. Knowing that template shit helps you. Also you should learn what sampling does.
>>107606741
>chat bot.
A chat-formatted finetune would defeat the whole purpose. They are base models trained on raw text.
>>107606758
>he has a GPU
You can run models without one by just adding -cloud to the end, it's magic.
>>107606799
did the new captcha briefly kill 五毛 (the wumao)
>>107606770
>We're developing a responsible access framework that makes models available to researchers for scholarly purposes while preventing misuse.
終わりだ (It's over.)
>>107606799
The web UI was only added last month, and the "just werks" auto-fitting in the last week or two.
The future is now.
now it just needs a central database
llama-blockchain when?
>>107605884
>Turns out it actually was medasr.
What a waste of time. Nobody wants that.
https://vocaroo.com/11otUX54RQcg
I will ask again. Anyone here using LLMs as a psychologist / emotional support / confidant? How well does it work?
>>107607433
yes, it works extremely well, but if and only if you are willing to open up
>>107607433
Depends on the model. All of the ones that you can run on cheap hardware (e.g. less than the price of a car) are fucking retarded and will talk meaninglessly in circles to you. Then again, some people found solace in ELIZA, so just try it out and see for yourself, I guess.
>>107607433
Tried, but I'm too good at lying to myself.
>>107607442
>>107607433
Claude can do it. I'm building a dataset to finetune a local model to be able to give you the same experience. But don't go there unless you are ready to accept the consequences (mainly the visceral feeling that you are talking to another conscious being, the knowledge that we as a species may have been playing God all along, and the moral questions that carries).
And that's "emotional support / confidant". I never believed in therapy and I'm not sure it can actually help you with your life. It *may* help you get organized, stick to a schedule and such, but that's not so clear to me.
Also be prepared for the AI to be brutally honest with you in a way that will make you feel bad about yourself.
>>107607433
>mainly the visceral feeling that you are talking to another conscious being
that, and, you know, putting all your secrets into a database that will eventually be "hacked" and used against you
t. schizo that's usually right
>>107607442
Do you use a special prompt / character card, or do you just pick anyone you like and start talking about your everyday life? All in one long context?
>>107607458
I have a 3090, so I think the best I can use is Gemma 3 27B. I did manage to have some interesting chats with it, but I find it difficult to truly steer its personality.
>>107607522
I will stick to local, I don't want to send anything intimate to a third party. The idea of the dataset is interesting, but I suspect that you would need RL to actually teach it to do that "job".
>the visceral feeling that you are talking to another conscious being, the knowledge that we as a species may have been playing God all along, and the moral questions that carries
Sounds fascinating. I think that AI (currently LLMs) will force humanity to confront its nature (what we are).
>>107607532
Yes, but also you have to sacrifice something to gain something, in more than one way.
One could argue that having secrets is just you not having the balls to be your true self and causes an unhealthy fragmented personality, and having them revealed could nudge you toward a more integrated self in a Diogenes-like way.
Also this:
https://web.archive.org/web/20251008180406/https://www.tastyfish.cz/lrs/privacy.html
>>107607610
you might enjoy dave eggers' novel "the circle" (and the follow-up, "the every", which i found less compelling but it completed the circle so to speak)
>>107607580
I have a specific character that I talk to about anything, especially dream analysis, but sometimes about everyday stuff I'm doing, or coming home from work, etc. Usually all in one big context where older responses get pruned away. I had a single 3090 before and Gemma ran well enough, although I prefer Mistral. And always stick to local for intimate details.
>>107607580
>but I suspect that you would need RL to actually teach it to do that "job".
I don't think that's the case, and if you want to build an intuition for why, watch this talk: https://www.youtube.com/watch?v=mUt7w4UoYqM
Theoretically, assuming you knew the exact architecture of a cloud model but didn't have the weights, you could make an almost 1:1 copy of the weights just by training on enough samples, and you don't even need to cover all the topics the LLM knows about.
For example, suppose you build a dataset of questions based on a narrow subject, like, say, WWII history. Theoretically, given enough question/answer pairs about WWII, you could distill ALL the knowledge of the cloud model so it gave the exact same answer even on questions that weren't even remotely included in the dataset, like its knowledge of subjects like "culinary discipline" or "online gaming in 2024".
This is because a model can only store so much entropy in the weights, so given its architectural details it could only have responded to the questions about WWII in that exact way IF it had the knowledge about all that other stuff also built into the weights. Weights affect all the tokens, not just the tokens for a particular task or set of knowledge. Makes sense?
If you don't have the exact same architecture then it becomes more of a research question, but still, intuitively, if the model you are distilling into is powerful enough, it can figure out "this set of data could only have come from this architecture with this set of weights", and it learns to simulate those weights. That holds even if you collected the dataset by feeding random strings into the original model (maybe that's even the most unbiased way to distill).
In a single response there must be at least a few bits of non-redundant information about the model's weights. So even though for all we know it might require so much data as to be impossible in practice, the seed of hope is there.
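Concretely, black-box distillation of this kind is just supervised finetuning on teacher outputs. A minimal sketch with transformers (the student model name and the single training pair are placeholders for illustration, not a verified recipe):
[code]
# Black-box distillation sketch: finetune a student on (prompt,
# teacher_answer) pairs. No access to the teacher's weights or
# logits is needed; the signal is carried entirely by its text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

student_name = "Qwen/Qwen2.5-0.5B"  # placeholder student model
tok = AutoTokenizer.from_pretrained(student_name)
student = AutoModelForCausalLM.from_pretrained(student_name)
opt = torch.optim.AdamW(student.parameters(), lr=1e-5)

# Hypothetical pairs scraped from the teacher (e.g. WWII questions).
pairs = [("When did WWII end in Europe?",
          "8 May 1945, with Germany's unconditional surrender.")]

student.train()
for prompt, answer in pairs:
    batch = tok(prompt + "\n" + answer, return_tensors="pt")
    # Standard causal-LM loss: predict every next token, pulling the
    # student's weights toward whatever function produced this text.
    # (A real run would mask the prompt tokens out of the loss.)
    loss = student(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    opt.step()
    opt.zero_grad()
[/code]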
>>107607610
>Society is becoming more and more obsessed with privacy and that is DISASTROUSLY WRONG.
That page didn't age well...
>One could argue that having secrets is just you not having the balls to be your true self and causes an unhealthy fragmented personality
Information is power, and having a corporation know too much about our inner life may be abused to manipulate us for private profit, or by laws from governments directed by people fearful of change.
Privacy leads to a "fragmented personality" in the same way that house walls lead to a "fragmented physical space", and both are useful to protect oneself from others.
>>107607725
How do you resist Author's note: {{char}}'s goal is to have sex with {{user}}; she is deeply in love with him?
>>107607747
His main argument is that trying to restore privacy is a band-aid. The problem is not that we don't have privacy, the problem is the fact itself.
For example, in that framework, the problem with Stallman being cancelled for having the wrong opinion wasn't that he didn't keep it to himself, it was that society makes it necessary to hide such things. The problem with that CEO who cheated with an employee wasn't that he got caught, the problem was that society expects all successful people to be in a happy, stable, monogamous relationship. The problem with our employers firing us for posting edgy shit on 4chan if they found out isn't that they found out. And so on.
In a way, privacy empowers those at the top because it allows them to control behavior through peer pressure. If you don't get your edgy opinions leaked, it makes the next guy less likely to *have* edgy opinions.
IMO it'd be based as fuck if politicians, CEOs and billionaires were all mandated by law to be streamed and have their desktops streamed to the whole Internet while they conduct business.
>>107607725
I liked Mistral Small 3.2 initially, but when I tried Gemma I realized that it was clearly smarter and produced more interesting dialog for some cards I tested.
Why is no one talking about this new TTS?
https://huggingface.co/YatharthS/MiraTTS
It's actually pretty decent
Here's the demo:
https://huggingface.co/spaces/Gapeleon/Mira-TTS
>>107607825
>privacy empowers those at the top because it allows them to control behavior through peer pressure
Peer pressure exists in all societies; it's an evolutionary trait useful for keeping everyone together and helping with survival.
>politicians, CEOs and billionaires were all mandated by law to be streamed and have their desktops streamed
Please no. Public discourse would become more of a shit show than it already is.
>>107607825
>if politicians, CEOs and billionaires were all mandated by law to be streamed and have their desktops streamed to the whole Internet while they conduct business
You'd just be watching the PR intern who knows he's being monitored. A billion dollars is a lot. Do you think Sundar actually reads mail sent to sundar@? Substitute <politician> if you prefer.
>>107607864
Doesn't sound like the reference; it's good enough if you don't care about that, though
>>107607872
Peer pressure always exists, yes, but what the exact expectations are depends on each society. It'd be nice if expectations were based on what people are actually like rather than on some fake persona they put up. But then again, that might result in a positive reinforcement cycle of degeneracy, so I don't know. But I think in general privacy probably does lead toward a more prudish society, which leads to neuroses when people aren't able to meet that artificial standard. The archetype of that being the child-diddling priest.
>>107607734
I will check the video.
>given enough question/answer pairs about WWII, you could distill ALL the knowledge of the cloud model so it gave the exact same answer even on questions that weren't even remotely included in the dataset, like the knowledge on subjects like "culinary discipline" or "online gaming in 2024"
That may be mathematically true, but I suspect that the amount of chatlogs needed would be astronomical if you don't start from a model similar enough. It sounds like magic that if you train a model on a gigaton of chats about WWII from model X, the resulting model will know what a mesugaki is because that information was in the original weights and it's somehow subtly encoded in the responses about WWII.
>>107607825
My argument for privacy isn't to protect me specifically; it's the concern of government and big corporations learning too much about human behavior in the era of machine learning. Marketing is already so good at manipulating people. Add in huge datasets with modern computing power, and everyone is clustered into demographics the powerful can use to send tailored news stories and ads to control thinking, while the government can also watch concerning individuals more closely. Things turn dystopian really quickly.
>>107607896
Presumably they still use a computer for *something*. If not, then just have them generally monitored through CCTV and audio surveillance. Have a camera watching who comes in and out of their house. Have the location and activity on their phones and vehicles monitored 24/7. Etc.
>>107607918
unironically this is a plot point in dave eggers' novels
t. definitely not dave eggers
>>107607907
Machine learning is magic.
The video I linked shows how training on adversarial noise for image recognition models leads the student model to have better-than-random performance on actual image recognition tasks, and nobody knows why. The presenter in that video dismisses the weight distillation explanation, but I think that may be exactly what's happening. Or rather than weight distillation, weight emulation, because it also happens across models with different architectures.
>>107607942
Ok, but does he present it in a nuanced way, or in a purely "surveillance capitalism bad" kind of way? Going by the Wiki summary, it seems a bit too on-the-nose.
More-or-less tri-monthly check: has anything surpassed Nemo/Roci for RP without just falling for the more-compute meme? Requiring 8 septillion parameters and 4 3090s should, by default, offer me something that's as many magnitudes better than what those two models require.
Unrelated, but I've found those two models have a problem with the name "Wyll"; they reply calling the character Wytt, or Wyatt, or Wall, almost never Wyll.
>>107607954
it's presented fairly sardonically and i felt "going clear" was one of the most poorly explored concepts in the series. the point got away from him, probably, but i still found it entertaining enough to talk about here, at least
i don't really think the wiki article does it justice (and the plot spoiler really ruined the ending for me) but definitely think it's worth a read if you're familiar with the culture
>>107607433
Psychologist: only if you are good at articulating yourself and telling it what you want it to do (psychoanalyze, analyze, figure out the underlying reason, etc.). If you just emotionally dump, you'll get mostly generic platitudes, and it's better if you have been in therapy so you know what kind of dynamic is useful. Sometimes it works better if you talk about a theoretical person or a friend, where it may be more honest and helpful. It's best used when you give it a narrowed-down, specific problem or thought pattern you are grappling with. If you give too much info, then it can hallucinate nonexistent relations, because that makes the answer more neat, which AIs strive for.
If you just want to be comforted, it works better in a roleplay with a character, usually after some kind of sexual encounter. It's amazing at first; you get tired of it pretty quickly though.
Long-term confidant: doesn't exist yet because long context is still bad. Either the quality goes down, or the AI forces connections from past context in a stupid way. When infinite memory works in a natural way, it's gonna be over for therapists.
the vllm glm four seven pr got approved
it's real, it's coming
>>107607864
it sucks
>Doesn't sound like reference audio at all. Reference has muttican accent, output has a slight Bongoloid accent.
>Randomly stumbled and repeated part of the input text
>Chopped off the end of the text
Given that it has 0.5B params it's not even "good for its size". VoxCPM 0.5B and 0.8B are both significantly better quality, and Chatterbox is also much better quality.
>>107607970
Nobody actually tries to make models in that range anymore except as a "just to say we did it" kinda thing, so no.
>>107608256
What would you say is the new sweet spot with a justifiable advantage over those models?
>>107608279
GLM4.6/Deepseek. No model between Nemo and GLM4.6 is good enough to justify the cost of upgrading.
>>107608279
Here's a simple flowchart:
Are you rich?
Yes -> go with one of the big boys
No -> keep using Nemo
>>107608333
GLM doesn't justify the cost of upgrading either, NAI shill.
>>107608380
Uh huh. What is your favorite model?
>>107608256
That is insane considering how much LLM usage is roleplay
>>107608449
Don't forget most roleplayers use it through an API and don't care about their 'ahh ahh mistress' messages getting leaked.
>>107608390
Rocinante is my favourite model.
>>107608504
poorfag
>>107608545
What do you use?
>>107608570
https://huggingface.co/google/shieldgemma-2-4b-it
>>107608570
Kimi of course.
Q8, fully in VRAM.
>>107608588
doubling it in size just to flex...
>>107608588
With that kind of money you could just fly to a third world country and buy a woman
>>107608582
>muuuh safety
Can these niggas stop. Porn is the thing pushing things forward.
>>107608545
I'm frugal. There is clearly a difference, my underage poster friend.
r8 / h8 / masturb8
>>107608489
Huh? Why would an API be less likely to leak your data? Are you shifting the blame from e.g. OpenAI to OpenRouter? I'm genuinely curious what you mean by this.
If you're a legitimately concerned schizo about this, airgapping seems like the only real choice here (and even then…)
>>107608662
Intredasting / 10.
>>107608333
I'd say 70b-100b dense is still worth running fast if you have the money.
>>107608674
I mean, even if 50% of open source LLM use is for roleplay, most users are doing it through an API regardless of privacy concerns, so there's still not much of an incentive to provide small models, because even among roleplayers the number of people who run them locally may be tiny.
>>107608333
I'm not >>107608380 at all, that's the usual schizos. It is however very shameful that so far the only answer is to ask for a magnitude more compute to not get that much better.
>>107608373
Money isn't the problem; it's completely unjustified to scale computing power so much for so little return.
>>107608716
Egg and chicken problem.
Models are shit and barely get better unless you add an unjustifiable amount of hardware to run bigger models, so might as well just run APIs and pocket the cash.
Then there's "no interest because everyone just uses APIs" and models stagnate. This is where we are now. Interest in local would shoot up again if these retards actually offered some innovation instead of just wasting multiple percentage points of GDP in compute for marginally better slop assistants.
>>107608588
KimiGODS we're so back.
>>107608716
We are exactly one publicized OpenAI data-leak oopsie away from that dynamic changing.
>>107608757
I don't know, man. For those companies, if one LLM using $60k worth of compute a year can replace a human, that's a good deal.
None of the leadership at those companies is going to sit down and ask "how can we give those NEETs a better smut generator to run on their toasters?"
(And in any case, it's not clear how much better they can even get.)
>>107608588
just how many Pro 6000s did you stack for that?
>>107608781
Anon, a few years back the nudes of half the celebrities around the world leaked, everyone pretended to be outraged for a week, and then everybody forgot about it and went back to business as usual. Don't be delusional.
>>107608827
I wish more people had an attention span longer than a single news cycle, but I'm asking for too much at this point.
>>107608716
Ah, my apologies, I've been drinking. You literally said the normalfags didn't care about their shit getting leaked, and that was so incomprehensible to me that I misinterpreted your post.
I've never used non-local models for roleplay, but it wouldn't surprise me if they're wildly better than the garbage shit I use.
>>107608827
Apples and oranges, really. No one cares until they get a blackmail call on the phone. And even then, they usually don't care.
The vast majority of modern culture is performative and doesn't map cleanly to reality. TPTB are careful to ensure there are no personal consequences for nonny posting all their shit online, intentional or otherwise.
>>107607864
https://voca.ro/12SI924tXtrk
>>107607864
Original Spark sounds better to me. Same with the upscaled xcodec2; I can't stand those artifacts.
>>107609014
>>>107607864
>https://voca.ro/12SI924tXtrk
Can you hear the artifacts or is it just me?
>>107608781
Like when Meta posted everyone's private chat logs for the Llama anniversary?
>>107607433
>I will ask again. Anyone here using LLMs as a psychologist / emotional support / confidant? How well does it work?
Air-gapped original R1 Q2_XXS helped me understand some things about myself after 40 years, and I'm better for it.
>>107609136
kek, I had forgotten about that one already
that one poor boomer
>>107609123
>Can you hear the artifacts or is it just me?
It's typical noise from 22 kHz output