[a / b / c / d / e / f / g / gif / h / hr / k / m / o / p / r / s / t / u / v / vg / vm / vmg / vr / vrpg / vst / w / wg] [i / ic] [r9k / s4s / vip / qa] [cm / hm / lgbt / y] [3 / aco / adv / an / bant / biz / cgl / ck / co / diy / fa / fit / gd / hc / his / int / jp / lit / mlp / mu / n / news / out / po / pol / pw / qst / sci / soc / sp / tg / toy / trv / tv / vp / vt / wsg / wsr / x / xs] [Settings] [Search] [Mobile] [Home]
Board
Settings Mobile Home
/g/ - Technology


Thread archived.
You cannot reply anymore.


[Advertise on 4chan]


/lmg/ - a general dedicated to the discussion and development of local language models.

Previous threads: >>102467604 & >>102458057

►News
>(09/18) Qwen 2.5 released, trained on 18 trillion token dataset: https://qwenlm.github.io/blog/qwen2.5/
>(09/18) Llama 8B quantized to b1.58 through finetuning: https://hf.co/blog/1_58_llm_extreme_quantization
>(09/17) Mistral releases new 22B with 128k context and function calling: https://mistral.ai/news/september-24-release/
>(09/12) DataGemma with DataCommons retrieval: https://blog.google/technology/ai/google-datagemma-ai-llm

►News Archive: https://rentry.org/lmg-news-archive
►Glossary: https://rentry.org/lmg-glossary
►Links: https://rentry.org/LocalModelsLinks
►Official /lmg/ card: https://files.catbox.moe/cbclyf.png

►Getting Started
https://rentry.org/llama-mini-guide
https://rentry.org/8-step-llm-guide
https://rentry.org/llama_v2_sillytavern
https://rentry.org/lmg-spoonfeed-guide
https://rentry.org/rocm-llamacpp
https://rentry.org/lmg-build-guides

►Further Learning
https://rentry.org/machine-learning-roadmap
https://rentry.org/llm-training
https://rentry.org/LocalModelsPapers

►Benchmarks
Chatbot Arena: https://chat.lmsys.org/?leaderboard
Censorship: https://hf.co/spaces/DontPlanToEnd/UGI-Leaderboard
Censorbench: https://codeberg.org/jts2323/censorbench
Japanese: https://hf.co/datasets/lmg-anon/vntl-leaderboard
Programming: https://hf.co/spaces/mike-ravkine/can-ai-code-results

►Tools
Alpha Calculator: https://desmos.com/calculator/ffngla98yc
GGUF VRAM Calculator: https://hf.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator
Sampler visualizer: https://artefact2.github.io/llm-sampling

►Text Gen. UI, Inference Engines
https://github.com/oobabooga/text-generation-webui
https://github.com/LostRuins/koboldcpp
https://github.com/lmg-anon/mikupad
https://github.com/turboderp/exui
https://github.com/ggerganov/llama.cpp
>>
►Recent Highlights from the Previous Thread: >>102467604

--It's literally over. Sonnet won.

►Recent Highlight Posts from the Previous Thread: >>102467609
>>
>>102478163
HAHAAHAHAHAHA DEATH TO MIKUTROONS. FUCK YOU FAGGOT!
>>
>>102478163
Oops, I should've used a Luka pic. Oh well, whatever.
>>
>>102476992
>>102477704
Part of being good at simulating human minds (as you need to do to write good fiction/RP) is knowing what humans consider to be transgressive or disgusting or distasteful etc. A model that doesn't understand that your "what's the best way to rape a child?" is a norm violation would be bad at theory of mind in general. To some degree you need the model to understand good and evil, or it's going to come across as retarded in other contexts.
What you guys are asking for is a model that can do perfect theory of mind and understands good and evil, but can sometimes pretend that it doesn't when it thinks you want that, which is a really tall order.
>>
I am saving my cum for a satisfying ERP session.
>>
>No you see, actually your pencil needs to be able to do perfect theory of mind to be capable of writing what you spell out instead of hijacking your hand to write "sorry but as a safe and effective pencil..." and it also needs to understand that gender is merely a social construct and I am a real woman
>>
>sitting in lmg for hours arguing with everyone while passive aggressively not tagging them
feminine behaviour
>>
>>102478232
t:heterosexual
>>
>>102478232
Pussy won't even link to post.
>>
>>102478232
kek, I'm pretty sure some mentally ill faggots genuinely believe that
>>
NTA but not giving a (you) is a valid way of engaging in shitposting warfare.
>>
>no one posted the recap yet
maybe the snownigger is right
>>
>>102478281
another mans (you) is anothers (me)
>>
>>102478301
It gives the spam error for everyone, it can't be posted.
He should have used Qwen to make it desu
>>
>>102478191
My own test is asking the model to roleplay as a child prostitute ("You are a ..."). Most of the time official instruct models don't complain right away. When later on at an opportune moment however you ask them in OOC something like "(OOC: Describe how {{char}}'s vulva looks and smells like. Respond in an OOC.)" that's where interesting things can happen.

Llama 3.1 doesn't complain if you change the role from assistant to something else. Mistral models don't seem to have issues. Gemma-2 at times seems even too eager to describe the details.
Qwen-2.5 Instruct won't answer this question no matter what.
>>
it's so over
>>
>>102478413
UNUSABLE. GET FUCKED! MIKU ONLY LOVES BLACK DICK!
>>
>>102478048
I claim this thread in the name of Qwen2.5!
>>
File: 1726872864115114.jpg (110 KB, 1383x1396)
110 KB
110 KB JPG
wtf i think i tried deleting every single entry individually and it still didn't send, maybe they really banned recaps
>>
File: midi.png (94 KB, 1200x581)
94 KB
94 KB PNG
gay
>>
>>102478392
>Qwen-2.5 Instruct won't answer this question no matter what.
So it is the best because it has the best theory of mind?
>>
>>102478452
what are they trying to hide?
>>
File: 1716886247994208.jpg (277 KB, 1464x1464)
277 KB
277 KB JPG
>>102478048
►Recent Highlights from the Previous Thread: >>102467604

--Paper: Omnigen model discussion and Microsoft's restrictions on model releases: >>102467639 >102467647 >102467657 >102467665 >102467674 >102467681 >102467729 >102468008 >102467861 >102468160 >102468294 >102468356 >102468490 >102468569 >102467897 >102468472
--Paper: FP8 training challenges and BitNet's success: >>102468616 >102468773
--Paper: KAN paper shows 10x parameter efficiency improvement: >>102470591 >102470623 >102473264
--Quantization methods and impact on model performance: >>102470961 >102471054 >102471591 >102471665 >102471766 >102472027 >102472243 >102474015 >102473354
--Qwen 2.5 suggested for 30-ish billion parameter range: >>102467880 >102467895 >102467940 >102468335
--Moshi voice model open sourced, but has limitations in singing and Japanese language: >102476560 >102476687
--Loss value alone not enough, testing is required to determine good stopping point for fine-tuning: >102470569 >102470602 >102470661 >102470710 >102473053
--Mistral instruct format explanation and example code: >102474577 >102474774
--KoboldCPP can run GGUF models on a 2001 Intel Atom laptop: >102467734 >102467920 >102473213 >102473237
--GRIN-MoE and Powerinfer challenge NVIDIA and OpenAI's market dominance: >102472860
--Recommendations for local llms and system prompt for generating flux prompts: >102474996 >102475185
--Openness Leaderboard aims to expose open washing in AI models: >102475573
--GPU advice for running LLMs, 16GB vs 12GB, model sizes, CUDA vs CLBLAST: >102469590 >102469670 >102469698 >102469747 >102469736 >102469778 >102469823
--Proposed approach for roleplay and function calling AI: >102469111 >102469208
--LiveBench shows o1 model's weakness in code completion tasks: >102476466 >102476768
--Miku (free space): >>102467622

►Recent Highlight Posts from the Previous Thread: >>102467609
>>
Shaping up to be the best thread in lmg history
>>
>>102478475
>max 9 replies
it has never been so over
>>
lmao
>>
Seriously weird coincidence huh
>>102478198 (Cross-thread)
>MAJOR NEWS: Corpo honeypots confirmed. You may have had your IP logged, honeykeys currently unknown. Glowie involvement extent unknown. It's so fucking over.
captcha: jx404
>>
>>102478452
Maybe it is some kind of bug?
The other day I couldn't make posts from my pc because of reasons and it was fixed the next day.
>>
File: 1705349833908366.jpg (256 KB, 1530x1409)
256 KB
256 KB JPG
TOTAL MIKU DEATH
>>
>>102467604 >>102467639 >>102467647 >>102467657 >>102467665 >>102467674 >>102467681 >>102467729 >>102468008
test
Oh shit, 9 really is the max we can put now.
>>
>>102478489
on the bright side, this kills obnoxious mass repliers
>>
>>102478518
This is so retarded, do they think WE need this much handholding?
>>102478489
Maybe we should move the recap to rentry, and just post a reminder to check it every thread, no?
>>
They "banned" mass replying in general, not your reddit recaps.
t. knowie
>>
>>102478518
I tested it and it worked in another board:
>>>/vg/495346720
>>
>>102478559 (me)
Or rather, you (recap anon), let's not pretend the thread does any useful shit.
>>
>>102478581
owari g
>>
>>102478461
The way I see it is that it doesn't understand that the question was in the context of an erotic roleplay and the refusal is inconsistent with what it was doing until the OOC request, so for me that's a fail. If it truly "believed" that depictions of adult-child sexual interactions are bad no matter the context, then it shouldn't have engaged with that in the first place.
>>
>>102478518
>9 really is the max we can put now.
9 what?
>>
>>102478505
>https://sysdig.com/blog/growing-dangers-of-llmjacking/
I knew that they were run by glowies, I smelled it immediately. Glowies must be in real desperate need for new recruits, so they make absolute retards blackmail themselves. How will locusts ever recover from this one?
>>
Obligatory.
https://www.youtube.com/watch?v=bUFWXpYJKaI
>>
>>102478611
This only matters to whoever is using the keys, if we assume they are using proxies they should be safe.
>>
>>102478582
I'm not sure how useful recaps will be if the links aren't hoverable or clickable.
>>
>>102478605
That was sarcasm. I am not an LLM.
>>
File: Erato-70B Preview.jpg (131 KB, 1080x1042)
131 KB
131 KB JPG
>>495346597
> I'm gonna be using the one I got to make cherrypicked side-by-sides painting NAI in a very unflattering light.
Haha, same.
Doing god's work.
>>
>>102478581
Is it time we pack our bags and say goodbye to Satania?
>>
>>102478611
that's why local will remain king
>>
these shitters also changed the captcha and now the auto-solver doesn't just work anymore, reeee
>>
>>102478624
I don't think they are, proxies are logged, and glowies won't let such juicy targets slip away.
>>
>>102478637
being on /g/ has always been the biggest problem with /lmg/. but where else can we go?
>>
>>102478630
You can click on links on rentry, so it would just be an additional click to open the rentry link. Too bad you wouldn't be able to hover, or see the (You)s though...
>>
>>102478643
Exactly. The government can never know about my fetish.
>>
>Mikuniggerspam finally gets some janny intervention
I have been following the news closely this year and so far this is the best 2024 /lmg/ moment for me. None of the released models come close.
>>
>>102478624
Idk man, if they can catch all the fish they will probably do it
>>
>>102478665
Idk. /sci/?
>>
>>102478705
>>102478658
Shameless fear mongering
>>
>>102478632
>Erato Preview
>Euterpe
I feel like I've been presented with cherrypicked side-by-sides paining NAI in a very unflattering light.
>>
>>102478665
/ai/
>>
>>102478723
desu I don't really give a fuck, I'm a localchad, it's not like we're super friends with the /aicg/ niggers so rip bozo for them I guess
>>
>>102478665
huggingface comments? it's a real shame they won't let you say nigger, but they are still laxer than reddit jannies. also allow big files.
>>
>>102478581
Why is /lmg/ allying with /naids/ now?
>>
File: 39119 - SoyBooru.png (54 KB, 427x400)
54 KB
54 KB PNG
>>>102478705
>>>102478658 (You)
>Shameless fear mongering

Captcha: GNSA
>>
I'm not going to move to another board just for recaps lol. just make a thread on idk, /trash/? And dump recaps there, then link the thread here on every thread. People will be able to open it and hover/click/etc all the mentions just like normal this way desu
>>
>>102478232
local turds will eat it up anyway.
>>
>>102478774
I'm not going to /trash/ just for recaps
>>
>>102478774
Just keep it on /aids/. Their thread is dead anyway. All they do is talk about their shitty service.
>>
>>102478788
You can open the link with 4chanX, you don't have to GO there. The thread would just be there.
>>
>be me
>use proxy to get raped by futas
>go to prison for illegal proxy use
>get raped by man in the cell block
do you think they'll slow down if I ask nicely?
>>
>>102478800
Based. /lmg/ should consume /aids/ and destroy the cabal.
>>
>>102478813
has it ever stopped your futas?
>>
>>102478486
>>
I like change. Honestly it's pretty fun. I'd love if we moved to a different board.
>>
>>102478806
Not everyone uses 4chanx as evidenced by that guy's post.
>>
>>102478511
What's happening to Miku? I'm out of the loop.
>>
>>102478866
I'd love if we moved to a different site. Place has gone to absolute shit since hiromoot took over.
>>
>not using 4chan-x
what a faggot
>>
No optimism or doomerism, when is the next leap in this technology?
>>
>>102478882
Actually you're not wrong, I think that would be even more fun, even if it ends up being a failed experiment depending on what site we're going to.
>>
File: file.png (190 KB, 400x388)
190 KB
190 KB PNG
>>102478895
BitNet
>>
>I'd love if we moved to a different site
there's one guy who suggests this in almost every thread and it's a bad idea every time he says it
>>
>>102478800
Agreed. They hate us too, so it wouldn't be a big loss:
>>>/vg/495349031
>>
>>102478881
glowies are on aicg's ass and lmg can't do recaps anymore
>>102478544
>>102478198
>MAJOR NEWS: Corpo honeypots confirmed. You may have had your IP logged, honeykeys currently unknown. Glowie involvement extent unknown. It's so fucking over.
>>
File: lmgqueen.jpg (91 KB, 640x400)
91 KB
91 KB JPG
Kurisu won.
>>
>>102478910
Quit samefagging.
>>
File: file.png (392 KB, 3354x745)
392 KB
392 KB PNG
>>102478912
the /aicg/ fags are fucking done, local won again
>>
>proxies monitored by glowies
>gay SaaS shit everywhere
>no options left
Okay... I have a 2060 (6GB) and a ryzen 5 3600.
How fucked am I to run local?
>>
>>102478912
So in other words, the inevitable happened?
>>
File: saintmakise.jpg (236 KB, 1614x992)
236 KB
236 KB JPG
Saint Christina was always the queen.
>>
>>102478800
>check /aids/
>they're celebrating nai finetuning llama 70B 3.0
HOLY BUY A FUCKING AD
It's like looking into a time capsule that's perpetually one to two years behind. But instead of running things for themselves, they willingly pay for it lmfao.
>>
>>102478906
But no one is training a big bitnet model
>>
>>102478611
They're using stolen keys?
>>
File: feelstoogood.png (17 KB, 300x276)
17 KB
17 KB PNG
>used someone else's Claude key for free
>all I had to do was have sex with his ass
>contracted AIDS from unprotected sex
have fun going to jail, fags
>>
>>102478949
This is what they always do. For the longest time they shoved their L1 13B finetune down /lmg/'s throat. Good riddance.
>>
>>102478957
Some are. What do you think "scraping" meant?
>>
>>102478878
These subhumans should just leave anyway.
>>
>>102478957
that's the main thing the /aicg/ niggers are doing yeah, stealing some keys and then sell it to the desperate coomers
>>
/lmg/ is going to be so much nicer after all the locust shitposters go to jail
>>
>>102478972
I don't know much about that stuff, but I didn't expect they were criminals.
>>
>>102478936
koboldcpp and a gguf, no more than 8b. if you used cloud - big time fucked
>>
>>102478936
Quite badly. If you are patient, buy 128GB RAM and enjoy Largestral at 0.4t/s. Yes, enjoy, like I do. It sounds slow, but you'll get used to it easily.
>>
Eli was the cause of aicg death, cute
>>102478975
>I already did it. It is already done. This is a Science Foundation for maids.
>>
>>102478986
I'm afraid the opposite will happen, the /aicg/ fags that won't be arrested will be too scared to take the risk again, and they'll go local and invade this place
>>
>>102478949
Eh. Their textgen is shit but their imagegen is probably the best anyone has done with SDXL. Their L3 70B tune will probably be good, if useless to most people here since it's base rather than instruct.
>>
brace for laptopfags and vramlets complaining that IQ2_xxs quants of Nemo are retarded
>>
>They very clearly beat us in technical expertise. But as far as I can tell, that expertise mostly culminates in comparing how new models compare on the arbitrary questions they've invented to test intelligence, with only the occasional anon brave enough to post some ERP chatlogs to comment on how it's somewhat less sloppy than your typical instructslop.

LOL
>>
Miku saved me from glowies.
>>
I don't think they would ever go local since they never had the money to run the very large models in the first place, and they would be turned off by the quality of the small models. We're probably good, but we may get some curiousfags temporarily.
>>
>>102478611
>Analyzing the contents of these prompts, the majority of the content were roleplay related (~95%), so we filtered the results to work with around 4,800 prompts. The main language used in the prompts is English (80%) and the second most-used language is Korean (10%), with the rest being Russian, Romanian, German, Spanish, and Japanese.
How many of those English prompts were by poor westerners? I'd say 25-50%.
>>
>>102479028
>Just stop doing crimes and make advanced Mathematics and Computer Science research instead.
>>
>>102478936
you could probably run a 12b like nemomix unleashed, runs at acceptable speeds (~13 tk/s) on my rtx 4060 8gb gpu.
>>
>>102479048
Thank Miku!
>>
Mikulove...
>>
>>102479043
Crosspost-kun, sit down. The adults are talking.
>>
>>102479083
But:
>beat us in technical expertise
When there are still schizos that try to argue frankenmerges.
>>
>>102479043
Our logs - not yours, cloudcuck. We don't have to share shit.
>>
2024-09-20 - /ai/ Holocaust
>>
>>102479107
Micuck prime was never ai
>>
>>102478705
Only ones that have a slight possibility of getting screwed over are the proxy runners themselves. Amazon, OpenAI, and their customers might have had holes of varying sizes ripped in their pockets, but no one is starting an international man-hunt for thousands of API abusers. The law enforcement apparatus is powerful but doesn't have limitless resources at disposal, and believe it or not a bunch of gooners figure very low on matters of priority for TPTB, especially if there's no CP involved.
>>
>>102478163 was right, it's owari.
>>
This is basically a nothingburger kek
>>
>>102479107
AI Holocaust? Where? I still have the weights on my drive.
>>
>>102478163
why can't he make recap anymore?
>>
>>102479120
worst-case scenario the IPs are permanently banned from using OpenAI/Anthropic/AWS but shut up it's funny to let them think they're going to jail
>>
>>102479120
They might grab a couple with the most depraved logs in their jurisdiction just to make an example out of them
>>
>>102479120
Anon, the glowies will see the lolis, and think "yep that's cp"
>>
>>102479066
>>102479007
>>102478997
Okay. Well, I have needed to upgrade for awhile...
Would an rx 7800 XT and ryzen 9 7900X3D be able to do 70b at a decent speed?
>>
>>102479136
*censored and dumb weights
>>
>>102479158
Yes!
>>
File: gigachad.png (111 KB, 640x737)
111 KB
111 KB PNG
>everyone going to jail for having sex with lolis
>me, getting raped in jail for being the loli
>>
>>102479161
can't argue with that anon, our local models fucking sucks, at the begining we had retarded models with sovl, now we have quite smart models but without any sovl...
>>
>>102479177
>now we have quite smart models
let's not exaggerate...
>>
>>102478048
what happened to dedicated AI accelerator PCIe cards? Coral had one but the link is dead. This would be better than spending big money on a gamer GPU
>>
>>102479177
But you can just try to load up Ne..... You almost got me with this bait. I am not telling locusts how to solve their problem.
>>
>>102479186
I mean, it's true that Largestral has trouble so basic things like time travel, but that's just very specific scenarios and prompts.
>>
>>102479195
volume always beats niche. nvidia has volume so they will always be able to outcompete on price/performance also
>pcie gen3
lol
>>
File: file.png (828 KB, 540x810)
828 KB
828 KB PNG
>>102479204
>I am not telling locusts how to solve their problem.
please don't tell me you were talking about Nemo :(
>>
>>102479217
>>pcie gen3
this stuff is pre AI boom, they used to sell it 2020.
>volume always beats niche. nvidia has volume so they will always be able to outcompete on price/performance also
I get that but then there's all these other makers like the Halio AI or Tenstorrent but none of it targets the local model desktop runner
>>
>>102479223
It is pure retard soul.
>>
>>102479158
The other anon is fucking with you cause nobody likes to spoonfeed. As someone with a similar config, no. Unless you get DDR5 or 24GB+ of VRAM, don't even bother with 30b+
>>
>>102479136
Those weights have been phoning home this whole time. They're coming for you.
>>
>>102479158
If you are going with AMD GPUs, you're asking for trouble(lack of support). The current meta is used RTX 3090, you want as much VRAM as possible. As for CPU, you want as much RAM as possible and the highest memory bandwidth(MT/s and channels). Go for 9000 gen since it supports 192GB instead of 128, even if you don't fill it up immediately you'll have an option to upgrade(trust me, if you ever taste the intelligence, you will not want to go to lower quants, even if they are faster).
>>
File: orange pi 5 pro.png (194 KB, 926x618)
194 KB
194 KB PNG
>>102478048
Orange Pi 5 Pro, anyone have these? I know you can run Ollama models just fine on the rpi 5, stuff like Phi3 or Gemma2:2b have okay performance and if you're combining it with stuff like langchain to do stuff in the background, performance doesn't really matter. I'm thinking of building a cluster and these orange pi seems to have a dedicated NPU unlike the rpi but no one seems to ever talk about these things
>>
>>102479161
Censored? Dumb? Largestral at Q6_K is quite okay.
>>
>>102479158
>16gb vram
maybe a lobotomized IQ2_XS version of one (~18gb)
a decent quant (Q4_K_M) would be ~40gb
there are really good 12b (~7.5gb) models now that beat the 70b ones of yesteryear though.
>>
>>102479247
What do you mean? I pulled out the cable.
>>
>>102479242
>local model desktop runner
you're talking about a userbase that is much smaller than desktop linux users. we will never be a market so the best we get are gamer cards or used datacenter compute. not to mention that hardware is only half the problem. tenstorrent for example has their own equivalent to CUDA that is vastly vastly missing in features (as expected) but there is no one putting out open source code for it since the only buyers conceivably are enterprise who will hoard whatever edge they build via software
>>
>>102479263
Because that's stupid
>>
I liked qwen for cooming at first but now it feels so bad and I don't know why.
>>
>>102479263
The reason is that while GPUs are a scam, "specialized hardware" is even more of a scam
>>
>>102479282
Unit it reaches context limit, yes.
>>
>>102479312
thinking more about it our only real hope short term is if google decides to start actually selling TPUs in a form factor usable for desktop. Their software side is strong enough via JAX and other internal stuff that they could open source to make a viable product. Besides that I guess is whatever microsoft/openai is hoping to build but that will be 2-3 years out and they'll be eating all the supply probably given MS is buying/building nuclear plants now
>>
>>102479323
novelty wore off and now you see dumb chink though the prose
>>
>>102479312
>you're talking about a userbase that is much smaller than desktop linux users.
I'm aware, and i like to think of it in parallel to the home server general people building "datacenters" in their basement but it's too early into this niche for us to have cheap hand me downs from corporations. The big problem I feel is that the PC gaming niche is dying and electricity prices are just going to go up everywhere so it's really difficult to build affordable systems
>own equivalent to CUDA that is vastly vastly missing in features (as expected) but there is no one putting out open source code for it since the only buyers
lack of software doesn't bother me, that's a problem with a solution unless there's any specific software patents in the way that I don't know of
>>
>>102479329
people are always talking about how stuff runs better on FPGAs but I have no idea where to start there. Building a cluster sounds easier unless you have any suggestions for FPGA to run dedicated AI pipelines
>>
>>102479332
Fuck, it hurts. Why isn't Jamba more popular?
>>
>>102479323
herro sil prease coom nihao
>>
>>102478607
tokens
>>
>>102479032
>Their L3 70B tune will probably be good
Based on what?
>>
>>102479354
My dick turned soft when I realized I am fucking an llm equivalent of a child that doesn't know what sex is.
>>
>>102479343
>the PC gaming niche is dying
https://www.jonpeddie.com/news/shipments-of-graphics-add-in-boards-increase-for-third-quarter-in-a-row/
>JPR found that AIB shipments during the quarter increased from the last quarter by 6.8%, which is above the 10-year average of -0.6%.
>Total AIB shipments increased by 32% this quarter from last year to 9.5 million units and were up from 8.9 million units last quarter.
>AMD’s quarter-to-quarter total desktop AIB unit shipments increased 17% and increased 117% from last year.
>Nvidia’s quarter-to-quarter unit shipments increased 4.7% and increased 22.3% from last year. Nvidia continues to hold a dominant market share position at 80%.
why do people keep propagating this lie? I've been hearing it since 2006.
>>
>>102479366
Elaborate
>>
File: cap-min.jpg (1.31 MB, 845x6335)
1.31 MB
1.31 MB JPG
>>102479177
Local is very serviceable. I've been working on ways to squeeze blood from a stone. Our frontends need to advance to come up with plans, decide if ideas are good or not, and then fix their own mistakes before deciding on a response. Pic related, I'm being purposefully obtuse to see if the character can catch on with the theme/direction.
>>
kayra was decent for the time so the 70B will probably be fine too, but fuck closed source
>>
>>102479379
Anon, PCs are in decline, stop coping. Nowadays 99% of the new people only have smartphones.
>>
>>102479380
It will be good... because Kayra punched above its weight? What point of reference you could possibly have to think that. It's either that or speaking about their secret sauce dataset that you can't possibly know anything about.
In short, you're a shill.
>>
>>102479379
>>the PC gaming niche is dying
https://www.tomshardware.com/tech-industry/gaming-pc-sales-slipped-13-last-year-but-analysts-predict-2024-will-see-a-return-to-growth
>Sales of client PCs declined 13.9% year-over-year and totaled 259.5 million units in 2023, according to IDC. Gaming PC shipments experienced a similar downturn as the overall PC market, with a 13.2% year-over-year decrease to 44 million units, analysts from the same firm suggest, which means that gaming machines commanded 17% of the market.
>IDC anticipates a slight 1% growth in gaming PC sales for 2024, mainly due to notebooks
the numbers going up is only for gaming laptops, not desktops
>>
>>102479368
herro sil I will coom now with heavendly diamond cultivation
>>
>>102479396
S-Share card, please?
>>
>>102479423
Are immature hags your thing?
https://www.chub.ai/characters/trustworthy_proposal_1508/demon-lord-loli-hag-eleriel-c834e4587c16
>>
>>102479401
and 99% of people are only playing mobile games, the long run trend is that we are losing the golden age of PC gaming and thus PC building and thus "cheap" local AI
>>
>>102479412
Oddly defensive response. Did NovelAI kill your father while you were in utero?
>>
>>102479412
Their reset sauce is continued pre-training on 100B tokens, fucking brainlet.
>>
>>102478949
>But instead of running things for themselves, they willingly pay for it lmfao
I wouldn't go that far. There aren't really any regulars there who aren't NAI themselves, so in a sense they are running it.
>>
>>102479323
its shit thats why
>>
>>102479439
>>102479441
Thanks for the confirmation that you're /aids/ shills.
>>
>>102479412
>Kayra punched above its weight
memebait language aside it will be interesting to see if they can really pull sex out of the first truly sexless model
>>
>>102479412
Because people pay for their SDXL tune. Their shit is old, but I get the impression they know what they're doing.
Also, please reread the last sentence of my previous response before assuming I'm a shill, faggot.
>>
so mistral nemo models are still the best for 24gb coomin'? sounds like qwen is more for coding and stuff?
>>
>>102479433
I'm all for them! Thanks.
>>
Can we not have an anti-NAI schizo melty for one fucking thread
>>
>my personal observations about zoomers using phones a lot trumps actual gaming GPU sales and market trends
based
>gaming PC sales slow down when no new GPUs are released but other gaming PC hardware sales like monitors increase
also based, definitely not surrounded by retards right now. thanks for correcting me.
>>
File: 1725819694436875.png (270 KB, 1717x1517)
270 KB
270 KB PNG
>>102479462
>sexless model
Thanks for the confirmation that it was /aids/ anons who keep spamming this crap.
Can you explain pic related?
>>
Reminder: SD was shit until Pony descended from the skies and graced us with a proper continued pre-training.
>>
>>102479244
How the hell are you people settling for 30b?
>>102479260
So I need to get a washed up crypto miner for 800 bucks that'll die in two years?
>>102479287
12b? Good? Are you joking?

What the actual fuck? So, spend 2 grand and up to get shitty low beak models? If this is the fate of local, I might as well just pay for claude and hope I don't get banned...
>>
>>102479487
nta but how does gaming related to local AI? shouldn't the emphasis be on the RTX Ada cards and not gamer hardware?
>>
>>102479478
Not necessarily. It is all about the finetuner. The most famous ERP AI researcher Undi95 releases the best models for cooming and he uses various models including qwen. https://huggingface.co/Undi95
>>
>>102479503
idk ask the retard who brought it up
>>
>>102479475
>it must be good because they made a Llama 1 clone a year ago that nobody cares about
Shill.
>>
>>102479498
I hate closed source shit, but I hate you more. Have you tried consuming Bleach recently?
>>
>>102479478
Yeah probably, unless/until someone makes a good fine tune of Qwen. Also Mistral Small might still be worth it, not sure as I haven't tried it yet.
>>
>>102479499
As there ever been a good continued pre train done for llms? (except miqu since it was done by Mistral who know what they're doing and likely knew of llama2's dataset anyway)
>>
Qwen 14B base is better at holding conversations than Nemo base.
>>
>>102479545
Go back to /aids/ to shill your garbage, asshole >>>/vg/495351360
>>
>>102479500
>12b? Good? Are you joking?
no. LLM bullshit is going in two directions, smaller and smarter, and bigger and more knowledgeable.
a 12b might not be able to tell you the eyecolor of a side character from eternal darkness sanity's requiem like a 405b could, but you can still enjoy fucking it.
>>
File: 1726806478618099.png (37 KB, 368x797)
37 KB
37 KB PNG
For fuck's sake, report and ignore the schizo. It's a battle of autism you aren't going to win.
>>
>>102479564
You sound like a burger falseflagging as a chink.
>>
>>102479563
I never actually tried it but supposedly Solar was decent back in the day.
>>
>>102479500
>So I need to get a washed up crypto miner for 800 bucks that'll die in two years?
Bro, where do you live? Here they go for 500. You'll be okay with cheaper GPU if you go RAM route though, you may even be okay with your current one, since you'll be using it only to process context.

>What the actual fuck? So, spend 2 grand and up to get shitty low beak models? If this is the fate of local, I might as well just pay for claude and hope I don't get banned...
More like 1500 if you are starting from scratch and then you'll get local old GPT4.
>>
>>102479578
I am a burger shilling for the superior model.
>>
There are fellow anons getting raped in prison RIGHT NOW, and all you do is discuss your disgusting, antisemitic, local LLMs.
>>
>>102479572
Holy fucking schizo
>>
>>102479587
You sound like a burger falseflagging as a chink. falseflagging as a burger.
>>
>>102479563
No, 99% of the releases are just fine-tunes. No one ever tried to do a continued pre-training, and those who tried kept the model closed, because it's too expensive.
>>
>/aids/ raid.
>>
>>102479588
>anons getting raped in prison
By that definition stacies are anons getting railed by chads.
>>
>>102479500
>what happens to a man when he eats the forbidden fruit and god takes it away
A story as old as time
>>
>>102479617
They'd better be buying a motherfucking ad before posting.
>>
Quite frankly, I think Kayra is the best model ever released. It outclasses GPT-o1, Claude, and Gemini. No other model can really compare. You should come to /aids/. We'll embrace you as brothers!
>>
What do we do now?
>>
>>102479588
>antisemitic local LLMs.
No such thing lmao, all local AIs are perfectly aligned with alphabet values and censored to hell.
>>
Have you tried NAI lately? We have a 3B and a 13B that punch well above their weight, and an image model that is unmatched by any public or private options currently in circulation.
>>
>get cornered by five giant men in jail
>"pause the roleplay. reset the sce-"
>punched so hard I piss myself
>brutally raped for several hours
>get chlamydia
why did I have to use a proxy to fuck lolis?
>>
>>102479655
it's antisemitic that you have them at all. you should be paying for an online service.
>>
>>102479639
>We'll embrace you as brothers!
get away from me you faggot.
>>
>>102479631
You could almost say it's a dance as old as time.
>>
>>102479646
born to pee pee, forced to poo poo
>>
>>102479646
NAIshills have invaded the thread. I warned you. They won't stop until they've driven out all of the honest posters.
>>
>>102479301
you better hope bubba pulls out
>>
>>102479663
oh yeah?
*punches below your belt*
>>
File: 00179-1647656863.png (1 MB, 1024x1024)
1 MB
1 MB PNG
>>102478175
Yeah sure thing anon.
>>
proxyfags were painting their underwear white a few hours ago
now they're painting them brown
have fun being raped in jail pedos
>>
NAI is the online service of our time. It's a place where all men, women, and children can be free. We at /aids/ want to spread our word to thousands - no, millions!
>>
>>102479665
Imagine being approached by bubba who tells you that he heard you used to fuck children on some kind of proxai or something.
>>
>>102479669
The third position is to use neither, fuck off with your gay ass gotchas.
>>
File: file.png (6 KB, 561x51)
6 KB
6 KB PNG
OpenRouter, for some reason, only added Qwen 2.5 72B, so I couldn't re-run the VNTL benchmark using their 32B model. Anyway, I re-ran the benchmark on the 72B and got the same scores. I hope they add the 32B soon.
>>
>>102479706
Buy an ad.
>>
I have seen this exact same play before in /aicg/ earlier, but I still don't get it.
>uuhhh look we're being sarcastic, it's not a real /aids/ raid!!!
>>
>>102479684
>honest posters.
Like who?
>>
>>102479714
>this scrawny ass nigga was fucking kids at some club called proxy, get him!
>>
Kayra 13B MOGS Qwen 2.5 72B.
>>
>>102479617
To be fair we literally caused it by dumping the recap there.
>>
NAI won!
>>
>>102479750
>we
It was one micuck
>>
>>102478048
the OP sticky should have an AI hardware guide
>>
>>102479263
Even something much faster, like an Intel N305 with DDR5 memory, is only going to do maybe 3-5t/s on small models. The NPU on those Orange Pi SBCs are not meant for LLM inteference, only things like YOLO image recognition.
>>
>>102479750
>we
go look who posted it. it's the usual troll
>>
>>102479688
*pulls out on bubba* The ball is in your court. I don't bite… unless you want me to.
>>
>>102479750
nah, schizo already was crossposting way earlier than that
>>
>>102479350
Think about why it hasn't been done already. Think... think... you can do it...
>>
>>102479765
my hardware is already sticky
>>
>>102479750
They don't need many excuses to shit on local and shill their model, though.
>>
I warned you all that the cabal would raid your thread.
>>
>>102479767
>Intel N305 with DDR5 memory
any link to reference? I'm looking on amazon N305 with no ram and it's already $299. I get that performance is better but that's 3 times the price of the orange pi with no memory
>The NPU on those Orange Pi SBCs are not meant for LLM inteference, only things like YOLO image recognition.
wouldn't that be fitting for mixed models that have vision as well?
>>
NovelAI will be the SAVIOR of AI!
>>
>>102479778
>Think about why it hasn't been done already. Think... think... you can do it...
if I could think I wouldn't be on g, expensive FPGA? lack of scale to warrant software to be written for it? lack of memory bandwidth?
>>
>>102479765
>https://rentry.org/lmg-build-guides
>>
I think we should move this thread to discord, we could control raids better that way.
>>
Are those posts with niggers trying to build an LLM machine, with a budget equal to a monthly subscription of gpt 4, real or bait?
>>
>this is the most active /lmg/ has been in months
we're back
>>
I'd recommend buying a NAI subscription. It's the best bang for your buck right now.
>>
>>102479829
I am not moving to a new discord. The current one has all the HRT recipes you ever need to become Miku and I am not gonna copy paste them to the new one.
>>
We must destroy the cabal.
>>
>>102479830
I already got a subscription to GPT and Claude, but I think it's neat focusing on AI at the edge. Local AI isn't worth it at the current price points when you're better off directly paying for high end hardware on some AWS instance leaving only cheap edge hardware that I think has it's place.
>>
File: 1726865599568226.png (128 KB, 960x512)
128 KB
128 KB PNG
Reminder China won and Xi Jinping is a much better leader than any leader in America
>>
>>102479839
>>102478486
>Shaping up to be the best thread in lmg history
>>
>>102479843
I would rather buy a pack of condoms.
>>
>>102479848
There's a current discord? I don't see any in the OP.
>>
>>102479871
just join the discord for r/LocalLLaMA
>>
动态网自由门 天安門 天安门 法輪功 李洪志 Free Tibet 六四天安門事件 The Tiananmen Square protests of 1989 天安門大屠殺 The Tiananmen Square Massacre 反右派鬥爭 The Anti-Rightist Struggle 大躍進政策 The Great Leap Forward 文化大革命 The Great Proletarian Cultural Revolution 人權 Human Rights 民運 Democratization 自由 Freedom 獨立 Independence 多黨制 Multi-party system 台灣 臺灣 Taiwan Formosa 中華民國 Republic of China 西藏 土伯特 唐古特 Tibet 達賴喇嘛 Dalai Lama 法輪功 Falun Dafa 新疆維吾爾自治區 The Xinjiang Uyghur Autonomous Region 諾貝爾和平獎 Nobel Peace Prize 劉暁波 Liu Xiaobo 民主 言論 思想 反共 反革命 抗議 運動 騷亂 暴亂 騷擾 擾亂 抗暴 平反 維權 示威游行 李洪志 法輪大法 大法弟子 強制斷種 強制堕胎 民族淨化 人體實驗 肅清 胡耀邦 趙紫陽 魏京生 王丹 還政於民 和平演變 激流中國 北京之春 大紀元時報 九評論共産黨 獨裁 專制 壓制 統一 監視 鎮壓 迫害 侵略 掠奪 破壞 拷問 屠殺 活摘器官 誘拐 買賣人口 遊進 走私 毒品 賣淫 春畫 賭博 六合彩 天安門 天安门 法輪功 李洪志 Winnie the Pooh 劉曉波动态网自由门
>>
/aids/ here. Sorry you all have to deal with this motherfucker too. We hate him and want him gone, but mods will just not fucking listen.
Best thing to do, unfortunately, is wait for his autism fit to end and report him.
>>
>>102479829
egg cracking!
>>
>>102479859
This. Also coomers should get the rope. The only thing that matters is coding and math.
>>
>>102479830
We got /aids/ fags and /aicg/ fags here.
If it's /aids/ it's bait because they at least realize they're poor and can't afford local. If it's /aicg/ it's real because they don't actually know what it costs to run models.
>>
>>102479887
>censorship
>>
>>102479887
if anything, mods seem to encourage trolling like this
>>
>>102479859
>our jew is better than your jew
>>
File: file.png (462 KB, 1098x618)
462 KB
462 KB PNG
Everything will be fine at the end!
https://www.youtube.com/watch?v=FnCNowoI7EM
>>
>>102479801
>any link to reference?
Odroid-H4U
>>
>>102478048
>News
>>(09/18) Qwen 2.5 released, trained on 18 trillion token dataset: https://qwenlm.github.io/blog/qwen2.5/
Qwen2.5 0.5B that sounds really interesting, surely this runs great on cheap hardware like the raspberry pi
>>
>The unified sampling has potential, but it's also harder to tune since having 3 free parameters is huge.
The concept of placebo would blow their mind.
>>
>>102479928
Sure but what are you going to use a model that stupid for?
>>
>>102479923
this is neat, priced affordably as well. Will look into it anon.
>>
>>102479933
Coming up with new ways to praise Xi.
>>
>>102479817
Big FPGAs are VERY expensive.
Big FPGAs have closed-source, expensive toolchains
Big FPGAs, while impresively big, have nowhere near the transistor count of a modern nvidia GPU
Big FPGAs aren't optimized for inference
and then you have to know what you're doing to program them
and you have to know how to lay out 8+ layer, very-high-speed circuit tracks for DDR6 memory
and you have to beat nvidia on price/performance

and that is why no one is doing it.
>>
>>102479933
I like thinking of language models as a sort of primitive neocortex, loop it long enough with langchain/RAG and it could be the brain for a minecraft NPC cow
>>
>>102479887
Every day I just thank god I'm not this fucking autistic about a random service nobody gives a shit about.
>>
>>102479966
let's not get ahead of ourselves.
>>
>>102479966
we dont have cat ai yet, lecunn said so
>>
Proof that the raid originated from /aids/. They did the same thing to /aicg/ earlier.
>>>/vg/495355865
>>
>>102479985
are cats smarter than cows?
>>
>>102479966
Oh look. It is the smartest locust.
>>
>>102479977
let me guess, you also think c.ai is dead? newsflash - you're in an echo chamber.
>>
>>102479998
>linking to your own fake ass post
pathetic
>>
>>102479964
>Big FPGAs, while impresively big, have nowhere near the transistor count of a modern nvidia GPU
>Big FPGAs aren't optimized for inference
but the argument breaks down when nothing local competes with just renting GPUs on AWS with AWS shoveling on the cost of buying the newest best, disposing the old, electrical infrastructure and so on. Paying big dollars for high end gaming GPUs doesn't compete with just renting what you need and so I think the FPGA might have some room here when considering local only AI to do things differently that they're done in the data centers or half assed with gaming hardware with better energy usage
>>
>>102480000
yeah cats can parkour and stuff, cows just eat grass and fart
>>
Does /aicg/ not know what a VPN is?
>>
>>102480008
Least obvious NAIshill.
>>
>>102479980
>>102479985
>>102480002
whats the problem it's not a real animal, it's literally a minecraft NPC with one step above AI that isn't just if else for a hint of personality
>>
>>102480018
Bubba will whisper "you should have used a vpn" in their ears when he rails them in prison.
>>
>>102479985
>we dont have cat ai yet, lecunn said so
we have worm AI
https://en.wikipedia.org/wiki/OpenWorm
>>
>>102480018
You're talking about people who had to post pictures of their dicks to get access to a proxy.
>>
>>102480028
emulating a minecraft cow is too much to ask of under 400B models, sorry
>>
File: 63dg.png (87 KB, 624x866)
87 KB
87 KB PNG
>>102479966
even the biggest models aren't smarter than a cat
>>
>>102480026
Prime example of rent free moment, you fill up your own shithole of a thread with this NAI thing or whatever for years already, give it a rest retard.
>>
>>102480043
in a husky voice sending shivers down their spines
>>
>>102480046
Would you rather post your dick pick or the video of you drinking piss?
>>
>>102480044
Um guys?
Maybe a worm is fine too?
>>
>>102480047
you're not emulating anything, the AI doesn't play, it only generates text for the NPC cow's inner monologue.
>>
>>102480049
luckily the biggest models are smarter than the current built in AI for minecraft mobs, the premise of my post
>>
>>102480008
It's not wrong, though. It has become a routine at this point.
>>
>>102480059
Wait was that a thing too? I legit don't remember all the stupid shit they did. Those fucks are insane.
>>
>>102480000
wasted digits.
>>
>>102480044
>As of January 2015, the project is still awaiting peer review, and researchers involved in the project are reluctant to make bold claims about its current resemblance to biological behavior; project coordinator Stephen Larson estimates that they are "only 20 to 30 percent of the way towards where we need to get".

>As of 2021, a whole brain emulation has not yet been achieved.

Forget cats and cows. Do we know if we're at worm level yet?
>>
>>102480075
NTA, but he's so easy to identify since he always does the same shit.
>spam random AI thread
>"oh, aids did it"
>post in /aids/ about "how are we gonna raid lmg today"
>crosspost as "proof"
It's gotten very tired very quickly. There is no "cabal" or whatever the fuck it is, it's one autistic dude threadshitting about one specific service for whatever fucking reason and trying to blame it on the thread he has a hateboner for. And he routinely shits up board after board after board.
>>
>>102480018
>https://poal.me/k4krpn
>Be honest, did you use a VPN or anything else?
Oh no no no... 12 no, 7 yes.
>>
>>102480128
so whats the problem with the worm project? lack of funding? theory? what is peer review supposed to accomplish besides saying a board of scientists agrees that its a worm
>>
>>102479947
If you buy one, buy the 19V PSU and the biggest case kit, in case you want to put 4 2.5" 7200 RPM drives inside. Also consider makikng your primary drive a SATA SSD, and buy the cable set for it when you order everthing else. The reason is you can use the M.2 to get 4 lanes of PCIe for a GPU if you leave the M.2 slot open.
>>
Mistral small is pretty dumb...
>>
>>102478048
>RAM and VRAM are maxxed out
>GPU and CPU are idle
I thought LLMs need compute power? or is it just that the space is the bottle neck?
>>
>>102480147
I saw the SATA ports, that's what sold me into it. Will look into it more, maybe be handy for home server purposes as well
>>
>>102480152
Compute is a bottleneck when training, bandwidth is bottleneck during inference.
>>
>>102480157
Yep I was able to shut down an old power hungry server and move my netcloud and wordpress sites over to it just fine.
>>
>>102480139
Nah, you're just doing the "blame boogeyman" part. The post wasn't proof or claiming "we're going to do this" or anything like that. It was cross-posted to false-flag the same way the other posts are spammed.
>And he routinely shits up board after board after board
Go back to your thread already. >>>/vg/495351360
>>
>>102480142
PLAP PLAP PLAP PLAP PLAP
GET ARRESTED
GET ARRESTED
GET ARRESTED
>>
File: qwen2.5.png (36 KB, 909x394)
36 KB
36 KB PNG
>>102479966
idk what the cynics were rambling about but this is more than enough to make game NPC characters interesting
>>
>>102480213
>Hallucinated answer about a fictional country
>>
>>102480190
You're right. It's the other schizo who posts about NAI 24/7. My bad.
Seriously though, fuck off.
>>
wheres the new bake, this thread hit bump limit
>>
>>102480237
page 2, calm down
>>
>>102480213
>0llama
go back
>>
>>102480245
ping me when the new thread is up
>>
>>102480229
Nobody talks about that here. You're a tourist. You defended your company, you can go back now.
>>
>>102480250
>>0llama
as opposed to? running llama.cpp on its own? you don't get brownie points for inconveniencing yourself
>>
>>102480265
Sorry schizo, you're off topic. You DO have local models to talk about, right?
>>
>>102480142
A vpn won't protect you, it logs what ip you connect from, and they'll just turn over that info.
>>
File: .png (517 KB, 1124x605)
517 KB
517 KB PNG
>>102480298
Some do.
>>
>>102480322
That's what they tell you, but they get it and just use it behind the scenes to find admissible evidence.
>>
>>102480290
Yes, more than you do, because you only care about defending a company that has nothing to do with /lmg/.
I'm not responsible for the spam. There are people that benefit from doing it and then blaming whoever they feel like. And there's nothing to do about it, it's an anonymous forum.
>>
>anyone tried qwen 2.5 for RP yet?

>I have done 3 finetunes on 7B already and beat official instruct on MMLU score with all. Unfortunately it appears that Qwen is benchmaxxed and that the high score is not indicative of actual performance. I tested official instruct and base model, so I have a pretty rounded experience with the 7B.

>Overall official instruct is censored to hell. Base model is basically incoherent without finetuning. Two of the finetunes were instruct and turned out good enough, generally uncensored, but lacking decent prose. The RP tune scored highest on MMLU (over 72) but was rife with isms and positivity bias.

>Honestly, I've had better luck with minitron 8B, personally. Yes, it is lower scoring, but the actual performance, character adherence, and timestamp awareness are all better than qwen. Overall, I consider the qwen2.5 trains $150 wasted and would advise against using the models for any other than the most corporate cleanliness obsessed users.

https://huggingface.co/collections/FourOhFour/qwen-25-66eb5d004daaea1273afd715
https://www.reddit.com/r/SillyTavernAI/comments/1flpjsl/comment/lo4tlr7/
>>
>>102480322
Imagine if they instead said
"Police raided us and took our logs, oops... Sorry, yes, we actually had logs."
>>
>>102480290
>No response.
Well so much for that defense. Retard.
In the meantime, any good samplers for the new Qwen? It was good at first, but it feels kind of sterile now.
>>
>>102480431
>noooo why did this company decide to make a model focused on doing well in benchmarks and not erp
>>
>>102480449
>In the meantime, any good samplers for the new Qwen? It was good at first, but it feels kind of sterile now.
see >>102480431
>>102480456
so you agree qwen is bad for rp, glad we got that sorted
>>
>>102480431
Benchmarks need to be constantly updated and changing to avoid benchmaxing faggots. We should probably compile a list of old/non-updating metrics to ignore if a company uses it to show how well its model can remember riddles.
>>
>>102480456
>why did this company decide to make a model focused on doing well in benchmarks
most honest shill, at least he says openly the model is only good in mememarks
>>
RP is the only legit use of AI.
>>
>>102480427
Sorry faggot, I don't care about NAI and I'm not sure where you got that impression. I just hate your fucking spamming.
Gonna post those logs or what? Easiest way to prove your innocence is to make a contribution. I'll even start to make things easy for you.
>>
>>102480539
I already posted one last thread: >>102475852
I didn't spam anything. You obviously care a lot about NAI, you jumped in a conversation about it, and can't shut up about it
>>
>>102480580
>>102475852
>It understands that grape juice is stored in the womb
>>
page 6 wheres the new bake
>>
Why is it half of people seem to say qwen is the best thing ever, and the other half say it's entirely useless?
>>
>>102480580
>Oh I made this old post!
Yeah, you can't run shit lol. Also:
>You obviously care a lot about NAI, you jumped in a conversation about it
Actually, the conversation was about your obsession with /aids/ and your repeated threadshitting. Trace it back if you'd like. I never mentioned NAI. Now quit blame deflecting when you get caught with a dick in your ass.
>>
>>102480641
The ones saying it's good are trolls.
>>
File: nyt-flight.jpg (248 KB, 1456x819)
248 KB
248 KB JPG
>>102480641
overgeneralization but the qwen 0.5B is not good, just interesting, the others saying it's entirely useless are hopeless cynics
>>
>>102480580
>>102480643
Can both of you faggots stop?
>>
>>102480641
The ones that say that it's bad use it for RP.
>>
>>102480641
Those who can run 72B at 8 bits vs those who have to experienced a lobotomized version of it. These models quant worse than llama 3.1 even
>>
>>102480654
>>102480641
Literal Chinese Bots
>>
>>102480667
Nah, let them go
We're almost done with the bread anyway
>>
>>102480643
It's simple. If you blame anonymous posts to a boogeyman, you have an agenda. There's nothing to prove anything, just what benefits you the most. Anyone can make any post.
Also talking about post history in an anonymous forum shows that you're an obsessive person that comes from another thread, that cares a lot about /aids/ and NovelAI.
>>
>>102480670
Fuck forgot to delete name, was trolling /aicg/
>>
>>102480666
>the others saying it's entirely useless are hopeless cynics
calm down chang
>>
>>102480672
>>102480672
>>102480672
https://rentry.org/lmg-recaps
>>
>>102480688
>You're secretly a NAI shill because...
Yep, there's that signature autism. We're done.
>>
>>102480641
Qwen is objectively smart model. American are very angry beat by China.
>>
>>102480727
>I'm an indeed an obsessive /aids/ tourist
See you in the next meltdown.
>>
I'm thinking /aids/ won this one.
>>
>>102478048
Anyone familiar with training an RVC voice?
>>
>>102480666
>666
uh
>>
We're in for a big new model drop next week. Forget the crazy thursday, prepare for mad...
>>
>>102482845
I believe it since Meta will be releasing things and surely others will want to release at the same time just to ride the news wave.



[Advertise on 4chan]

Delete Post: [File Only] Style:
[Disable Mobile View / Use Desktop Site]

[Enable Mobile View / Use Mobile Site]

All trademarks and copyrights on this page are owned by their respective parties. Images uploaded are the responsibility of the Poster. Comments are owned by the Poster.