/g/ - Technology

File: 1751276140253030.jpg (782 KB, 2105x2963)
/lmg/ - a general dedicated to the discussion and development of local language models.

Previous threads: >>107104115 & >>107095114

►News
>(11/01) LongCat-Flash-Omni 560B-A27B released: https://hf.co/meituan-longcat/LongCat-Flash-Omni
>(10/31) Emu3.5: Native Multimodal Models are World Learners: https://github.com/baaivision/Emu3.5
>(10/30) Qwen3-VL support merged: https://github.com/ggml-org/llama.cpp/pull/16780
>(10/30) Kimi-Linear-48B-A3B released with hybrid linear attention: https://hf.co/moonshotai/Kimi-Linear-48B-A3B-Instruct
>(10/28) Brumby-14B-Base released with power retention layers: https://manifestai.com/articles/release-brumby-14b

►News Archive: https://rentry.org/lmg-news-archive
►Glossary: https://rentry.org/lmg-glossary
►Links: https://rentry.org/LocalModelsLinks
►Official /lmg/ card: https://files.catbox.moe/cbclyf.png

►Getting Started
https://rentry.org/lmg-lazy-getting-started-guide
https://rentry.org/lmg-build-guides
https://rentry.org/IsolatedLinuxWebService
https://rentry.org/recommended-models
https://rentry.org/samplers

►Further Learning
https://rentry.org/machine-learning-roadmap
https://rentry.org/llm-training
https://rentry.org/LocalModelsPapers

►Benchmarks
LiveBench: https://livebench.ai
Programming: https://livecodebench.github.io/leaderboard.html
Code Editing: https://aider.chat/docs/leaderboards
Context Length: https://github.com/adobe-research/NoLiMa
GPUs: https://github.com/XiongjieDai/GPU-Benchmarks-on-LLM-Inference

►Tools
Alpha Calculator: https://desmos.com/calculator/ffngla98yc
GGUF VRAM Calculator: https://hf.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator
Sampler Visualizer: https://artefact2.github.io/llm-sampling

►Text Gen. UI, Inference Engines
https://github.com/lmg-anon/mikupad
https://github.com/oobabooga/text-generation-webui
https://github.com/LostRuins/koboldcpp
https://github.com/ggerganov/llama.cpp
https://github.com/theroyallab/tabbyAPI
https://github.com/vllm-project/vllm
>>
File: threadrecap2.png (506 KB, 1024x1024)
►Recent Highlights from the Previous Thread: >>107104115

--IndQA benchmark and EU multilingual LLM evaluation discussions:
>107104680 >107104733 >107107367 >107107455 >107107533 >107107631
--Finetuning DeepSeek 671B with 80GB VRAM with catastrophic overtraining and context length challenges:
>107105625 >107105860 >107105896 >107106164 >107106215 >107106275 >107106297 >107106332 >107106416 >107106433 >107106446 >107106502 >107106351 >107106466 >107106181 >107105710 >107105737 >107105769 >107105765 >107105792
--RTX 6000 Workstation Edition vs Max-Q: Performance, power, and safety tradeoffs:
>107107561 >107107669 >107107690 >107107807 >107107853 >107107866 >107107837 >107107926 >107107938 >107107946
--Fedora 43 compilation issues for llama.cpp due to glibc/CUDA incompatibilities:
>107110453 >107110623 >107110723 >107110957 >107110964 >107110991 >107111240 >107111261 >107111609 >107111643 >107111712 >107111726
--Windows vs Linux CUDA/llama.cpp setup challenges:
>107110661 >107110852 >107110953 >107111011
--French LLM leaderboard criticized for flawed rankings and perceived bias:
>107107537 >107107559 >107107574 >107107562 >107107617 >107107572
--Quantization benchmarking and model performance tradeoffs in practice:
>107109145 >107109251 >107109456 >107109345 >107109466 >107109353
--Rising RAM prices linked to AI demand and HBM chip production shifts:
>107105971 >107105987 >107105997 >107106030 >107106079 >107106178 >107106242 >107106246 >107106305 >107106317 >107106488 >107106496 >107107544 >107112114
--Model comparison in D&D 3.5e one-shot roleplay scenarios:
>107112449 >107112461 >107112747 >107112761
--Critique of Meta's Agents Rule of Two security model as inconsistent risk assessment:
>107105204
--AI-driven consumer DRAM shortages:
>107106504
--Miku (free space):
>107104379 >107105550 >107106025 >107109466 >107110129

►Recent Highlight Posts from the Previous Thread: >>107104116

Why?: >>102478518
Enable Links: https://rentry.org/lmg-recap-script
>>
>>107113095
Thank you Recap Miku
>>
where's glm 4.6 air fuckers
>>
>>107113348
2 more hours
>>
>>107113348
I'm more interested in the llama.cpp MTP PR.
>>
>>107113391
vibe coding status?
>>
What is the best model i can run nowadays for programming / tech related shit? t. 12GB vramlet 64gb RAM
>>
>>107113464
GPT-OSS-120B
>>
>>107113548
yeah ok bro
>>
I need/want a sophisticated note taking solution that keeps reminding me of shit that I have to do - powered by a language model
what would be a privacy safe way to do this?
>>
>>107113567
Is he wrong?
I know that the model is shit for ERP, but it should at least be good for assistant type tasks and coding right?
>>
>>107113575
>I need/want a sophisticated note taking solution
You need a note book.
>>
>>107113581
you cant run a 120B model on 12GB vram and 64GB ram lol
>>
>>107113575
Vibe code your own.
It's not that complicated a project.
I'd use Claude 4.5 via lmarena to plan the high level implementation and some guiding code snippets and use a local model as an agent to actually write the final code.

>>107113590
Of course you can. Quantized, sure, but still.
>>
>>107113590
Your NVMe SSD?
>>
File: toss120b.png (132 KB, 1566x556)
>>107113590
You can.
>>
>>107113610
Oh yeah, they even have their own 4ish bpw quantization scheme.
>>
File: filen.png (17 KB, 492x148)
>>107113348
>>
>>107113610
>>107113604
Why not run the 20b model? I get like 13 tokens per second with the 20b model, wont 120b be slow as shit even if quantized?
>>
>>107113575
Just use a calendar or todo application. You're as stupid as an LLM if you think it's a good idea to manage your agenda by having one of them guess what belongs on there and when.
>>
>>107113666
>>107113610
>>107113604
also is it reasonably smarter than the 20b version? It just sounds weird how you can make a 120b model able to run on 12GB vram without completely lobotomizing it
>>
>>107113348
hopefully, never
>>
>>107113666
I never compared them, satan. But it should absolutely be better simply by virtue of having more parameters to hold more knowledge, and having more activated params during inference.

>>107113676
>>107113676
Yeah. Quantization feels almost like magic, but it's just maths.
Granted, it's not going to be as good/capable as the full thing, but it should be more than usable.
To be clear, I don't know if it's any good; when I asked if anon was wrong (>>107113581) I was legitimately curious how it performs compared to, say, an equivalent Qwen model or whatever.
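If the "just maths" part isn't obvious, here's a toy sketch of symmetric 4-bit quantization in Python. To be clear, this is NOT how llama.cpp's actual quant formats work (those are block-wise with smarter rounding); it's only meant to show where the rounding error comes from.
[code]
# Toy sketch of symmetric 4-bit weight quantization (illustrative only;
# real llama.cpp quant formats are block-wise and more involved).
import numpy as np

def quantize_q4(weights: np.ndarray):
    # Map floats to integers in [-7, 7] using a single per-tensor scale.
    scale = np.abs(weights).max() / 7.0
    q = np.clip(np.round(weights / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize_q4(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover approximate floats; the rounding error is the "brain damage".
    return q.astype(np.float32) * scale

w = np.random.randn(8).astype(np.float32)
q, s = quantize_q4(w)
print(w)
print(dequantize_q4(q, s))  # close to w, but not identical
[/code]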
>>
>>107113666
>Why not run the 20b model?
It'd be even dumber.
>wont 120b be slow as shit even if quantized?
Yes.
I don't care for that model, and i don't use it. I'm just saying that you can.

>>107113676
Then try the 20b. Come back with your assessment. Both models were released with the same 4bit quant.
There's also a huge variety of qwens for you to try. Make your own assessment.
>>
qwen coder is better and can be used for FIM as well.
>>
>>107113725
Which coding client uses FIM instead of just having the model produce diffs and using git to apply these diffs?
>>
>>107113739
https://github.com/ggml-org/llama.vim
https://github.com/ggml-org/llama.vscode
>>
>>107113093
Your special interest is interesting
>>
how autistic a person has to be to obsess over the same fictional character for years and feel compelled to share their obsession with the rest of the world
>>
>>107113796
>how autistic a person has to be to obsess over the same fictional character for years and feel compelled to share their obsession with the rest of the world
>>
qwen3-coder 30b is the goat for coding
NO other model comes close for people with less than 20gb VRAM
>>
>>107113739
https://github.com/lmg-anon/mikupad
>>
>>107113796
but enough about petra
>>
>>107113739
Continue.dev uses FIM. Can use it with Qwen models for auto-complete. The latency with local models is annoying though.
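For anyone curious what FIM actually looks like at the prompt level, here's a rough sketch assuming a Qwen2.5-Coder-style GGUF behind a local llama.cpp server on port 8080. The special token names are the ones Qwen documents for its coder models and the endpoint/port are placeholders; check your model's tokenizer config before trusting any of it.
[code]
# Minimal FIM request against a local llama.cpp server (assumed at :8080)
# running a Qwen2.5-Coder GGUF. Token names follow Qwen's FIM format;
# verify against your model's tokenizer before relying on them.
import requests

prefix = "def fibonacci(n):\n    "
suffix = "\n    return a\n"
prompt = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

resp = requests.post(
    "http://localhost:8080/completion",
    json={"prompt": prompt, "n_predict": 64, "temperature": 0.2},
)
print(resp.json()["content"])  # the model's guess for the missing middle
[/code]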
>>
>>107113725
120B > 30B
>>
>>107113748
>>107113812
Alright, that's actually really dope.

>>107113840
>Continue
I'm yet to fuck with Continue.
Maybe I should.
>>
>>107113811
>for people with less than 20gb VRAM
I wouldn't call them "people", but ok
>>
>>107113739
VSCode Copilot allows you to use your own model endpoints
>>
>>107113739
Cursor works well with ollama. My company PC uses that by default.
>>
>>107113796
very true, almost as weird as obsessively trying to kill a 4chan general for years
>>
File: 1747976091623646.jpg (21 KB, 374x374)
>accidentally renewed novelai
>>
>>107113811
correct that Qwen is an essential local model.
>>
>>107113899
Can I use local models with the Cursor app without paying them? My two-week free trial said I hit the limit after like four hours of messing around. They're shady so I don't want to give them money but the tool was decent.
>>
>>107113459
I could but I won't
>>
>>107114148
>subbing to novelai in the first place
lol
>>
>>107114192
roo cline is good too
>>
>>107114091
>he thinks every person who doesn't like him is samefag
who's going to tell him
>>
>>107114207
For some reason, I get much better results out of normal cline than roo.
Which is wild, cline's prompts are such bloated mess.
>>
>>107113811
You won't be able to fit enough context on a consumer GPU for it to be useful

Start offloading to ram and it's slower than just doing the work yourself

You do know how to code right?
>>
>>107114293
imagine listening to tr00ns like this guy
>>
>>107114273
Roo allows you to override the prompts.
>>
>>107113666
120b is the only decent coding model at <=120b. You won't have a lot of space for context though if you only have 12 gb vram
>>
File: parrot.png (750 KB, 678x453)
>>107113348
glm 4.6 air?
>>
if we had a way to quant models at sub-1bit level we wouldn't need glm 4.6 air anymore.
>>
>>107114331
You can override the prompts on cline too
>>
>>107114408
fack you ungratefuls you get free and complains like idiot
>>
>>107114496
Maybe we should get a way to quant to sub-8bit without crazy brain damage first.
>>
File: thick-billedparrot7918-1.jpg (226 KB, 1169x1319)
>>107114519
Get free and complains?
>>
I take back all the nasty things I said about gpt-oss:20b. It's actually pretty nice to use with Ollama/Zed
>>
>>107114699
We need more bait. Unleash them all!
>>
File: 499683473.jpg (535 KB, 1920x1200)
>>107114519
fack you?
>>
llama 4.1 status?
>>
>>107114713
I'm being serious, it's actually decent in cases where you aren't able to use cloud models.
>>
>>107114748
ok
>>
>>107114748
If you can run ASS 20B you can most likely run Qwen3 30BA3B which is significantly better at literally everything.
>>
>>107114763
I'll give Qwen3 30b a3b a try. I do recall it being a decent writer. Thanks for your suggestion.
>>
>>107114822
Make sure it's the later versions since Qwen fucked up the post-training on the original launch of Qwen3.
>>
>>107114408
>>107114603
>>107114718
Teach the parrot to say H1B was a mistake.
>>
>>107114831
why would you ever not use latest version
>>
Sounds like gemma 4 is only getting "nano" and "small" variants. Hopefully small is at least 8B
>>
>>107114917
source my nigga?
>>
File: file.png (51 KB, 1504x35)
How do I fix this?
>>
>>107114763
Nice joke. It's an overthinking POS that produces garbage results. Qwen 32B is the only small Qwen model that produces good output from time to time.
>>
Is running qwen vl 235B at q1 worth it or should I stick with GLM air?
>>
>>107115135
Kimi Q0.1
>>
more perf improvements for macs
https://github.com/ggml-org/llama.cpp/pull/16634#issuecomment-3490125571
>>
>>107115135
q1 is probably too low, imo the main advantage of 235b over air is the intelligence and at that low of a quant idk how much it applies anymore
couldn't hurt to try and see for yourself if you can spare the bandwidth though
>>
>>107114917
What do you think the "n" in "Gemma 3n" meant?
>>
>>107115135
>Is running qwen [...] worth it
no
>>
>>107115148
What the fuck is Kimi Q0.1?
>>
Why are people shilling epycs as the poorfag LLM driver if LGA-2011-3 systems are many times cheaper?
>>
>>107115135
I can run Qwen 235B at Q5 and it doesn't get any better. The model is very schizo.
>>
>>107115300
ddr5 and max ram capacity, limping along on slow ass ddr4 is torture
>>
>>107113093
What's the best model for ntr?
>>
>>107115327
DavidAI
>>
>>107115327
You have to find the one with the largest containing some esoteric all caps shit, like
>https://huggingface.co/DavidAU/Llama-3.2-8X3B-GATED-MOE-Reasoning-Dark-Champion-Instruct-uncensored-abliterated-18.4B-GGUF
>>
>>107115326
I meant ddr4 epycs, used SP3 boards cost like 800$ here and people still buy those while new chink 2011-3 huananzhis are starting from 100$
>>
>>107115011
update llama.cpp
>>
>>107115578
I have an X99 board with 8 memory slots, but I'm afraid to buy the whole 128 gb because I worry I'll be disappointed with the performance
>>
>>107115801
You better decide fast because ram prices are going to only keep going up when the specs for next year's GPUs are announced.
>>
i'm warming up to iq2 r1 after using iq3 glm 4.6 for a while
it feels even more uncensored and also less parroty for a change with thinking prefilled out
>>
>>107114831
Damn it's just a bit too big. I only have 16GB VRAM. Sucks though because the results when I run it split CPU/GPU are great, just very slow.
Maybe I'll tell it to do something and leave it overnight lol
>>
>>107116014
Are you using -ngl 99 + -ncmoe?
>>
>>107115988
NOOOO YOU MUST USE Q5 ABOVE!
>>
File: parrots.jpg (1.69 MB, 4276x2635)
>>107116097
Q5 above?
>>
is it normal that i get way better results with whisper's large-v2 compared to large-v3 or turbo in asian languages like korean?
>>
>>107114408
Hi Drummer!
>>
>>107116148
Yes
https://github.com/openai/whisper/discussions/2363
>>
>Geminiposters stop right as the parrotposting GLM seething begins
I'm noooticing.
>>107116194
I don't think it's him.
>>
>>107116060
I am using Ollama saar I couldn't get tool calling working between Zed and llama.cpp
>>
>>107113589
>>107113673
Those suggestions do not work, as they require you to write your notes in them. If an LLM does not do the thinking for me, it is completely useless.
>>
>>107116262
Is this a personal thing or for work? If for work, I suggest recording meetings, generating a transcript with faster whisper and nemo asr, then building a knowledge graph based on the transcript with a hybrid NL/LLM approach
Where do the things which you have todo originate from? Or the request to do them at least.
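For the transcript step, a rough faster-whisper sketch would be something like this. Model size, device and file names are placeholders, and speaker attribution would need a separate diarization pass (that's where the nemo asr part comes in).
[code]
# Rough transcript step with faster-whisper (model/device choices are
# placeholders; speaker labels need a separate diarization pass).
from faster_whisper import WhisperModel

model = WhisperModel("large-v2", device="cuda", compute_type="float16")
segments, info = model.transcribe("meeting.wav", beam_size=5)

print(f"Detected language: {info.language}")
with open("meeting_transcript.txt", "w") as f:
    for seg in segments:
        f.write(f"[{seg.start:8.2f} -> {seg.end:8.2f}] {seg.text.strip()}\n")
[/code]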
>>
>>107116262
What you wish for is a complete digital replacement for a personal assistant, but it's not possible with current technology, sorry you got duped by AI bubble hype grifters
>>
>>107113575
Sillytavern Lore Book
>>
File: parrotornot.png (82 KB, 1184x475)
>>107115988
Is this what u mean by parroty?
>>
>>107116493
>Unironically using DavidAU
>>
>>107113575
>note taking
Speech to text, any whisper model would do
>remind shit I have to do
even a small 3B model would do, it just needs to be able to tool call
>>
>>107116523
He asked for a solution, not shit he needs to wire together himself.
>>
>>107116529
That would be $4K to hire me then
>>
>>107113575
There's probably something in open web UI that does that
>>
>>107113575
You're trying to body double your ADHD with AI, aren't you?
It's not possible at this time. AI can't actually think. Once we can get it to output without input, we can. Otherwise, it's just reminding yourself with extra steps. You can't get it to do anything other than write the notes for you. It needs to pass the Turing test.
>>
>>107116592
This is not accurate, RAG is extremely powerful. There's a lot that you can automate with rarted small models.
>>
>>107116659
Examples?
>>
>>107116592
the main thing LLM's have revealed to us is just how retarded most people are
>>
>>107114408
glm 4.6 air-chan when? Two weeks?
>>
Anyone get an agentic model to do 90% of their job for them setup?
>>
>>107116811
Yep but it's not local
>>
>>107116816
how so? My only hurdle is not breaking any company rules by handing off all my company emails to an AI (or just not give a shit)
>>
>>107116828
>My only hurdle is not breaking any company rules
Yeah that's mine as well, it's a pain. Semi-auto might be better for emails.
I automated a subset of tasks that are boring and frequent by building a RAG setup over the documentation of the product I'm interacting with plus some tools to interact with a mock of that system in Docker on an administrative/development level. This is with GPT-5 though.
I'll be building a microservice designed for giving Langchain agents access to applications running in Docker which I may open source for this next. It just uses the docker Python lib now.
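In case anyone wants the shape of it, the docker-lib part wrapped as a Langchain tool looks roughly like this. The container name is a placeholder and there's zero allow-listing here, which you'd obviously want before letting an agent run arbitrary commands.
[code]
# Sketch of exposing a Docker mock environment to a Langchain agent.
# Container name and the lack of any command allow-listing are purely
# illustrative; a real setup should restrict what the agent can run.
import docker
from langchain_core.tools import tool

docker_client = docker.from_env()

@tool
def exec_in_mock(command: str) -> str:
    """Run a shell command inside the mock product container and return its output."""
    container = docker_client.containers.get("product-mock")  # placeholder name
    exit_code, output = container.exec_run(["/bin/sh", "-c", command])
    return f"exit={exit_code}\n{output.decode(errors='replace')}"

# The tool then gets passed to whatever Langchain agent via tools=[exec_in_mock].
[/code]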
>>
>>107116887
I'm an electrical engineer that works with electrical drawings / random forms / etc, so I've just been trying to brainstorm what pieces I could input to a model and what I could even possibly receive from it that would help me speed up shit I do, even if I have to do some stuff manually per its findings
>>
OpenRouter now supports embedding models, cool!
>>
>>107116936
Most embedding models are <1B. Even the most impoverished of vramlets can run their own. Why would one pay for this?
>>
>>107116811
I would if not for the boss spyware.
>>
>>107116924
Do you use digital design for the drawings? It might be possible to make reader/writer tools for the files.
It depends on how low level those files are, like if it's text-like or binary.
You might have some luck with converting circuits to a graph representation that could then be converted to text, and letting the model work with that.

Any data you have access to is gold for this stuff. Think like part databases, old design files, documents with specifications, stuff like that might be useful to build RAG / agentic search for.
>>
>>107116957
Screen capture from HDMI + Teensy USB Keyboard and Mouse emulator controlled by a personal device with a fat GPU, easy
>>
>>107116991
could use mermaid diagrams, which are defined in a text-based format
>>
>>107117021
If you're lucky, GPT-5 might understand them
>>
>>107117002
What about USB device detection and software detection?
>>
>>107117057
They would see that you plugged in a personal keyboard and mouse. What of it?
>>
>>107117057
Physical device to type on the keyboard and move the mouse
Camera for the screen
Good luck anon that sounds like a pain to work with.
>>
>>107116991
My drawings are more high level power distribution type stuff and it's more so just CAD work of lines, layout drawings and things like that. There isn't much math being done and any math that is needed there are electrical modeling programs for that.

I do import the electrical codes and standards and have it search the documents to help me find sections quickly that i can then reference, but it just feels like there are far too many isolated sources of information for me to be able to easily connect them all for context

Maybe I'm just a brainlet
>>
>>107117071
Even GPT-5 and Claude are fucking retarded and you have to babysit them in ideal situations.
I can't imagine how bad the results would be with OCR mistakes and relying on models to move the cursor. I suppose it could work if one has an entirely terminal-based workflow, but I don't see that working with an IDE.
>>
>>107117068
Damn, okay, never thought of it like that.
>>
>>107116659
>small models.
Unfortunately there are no good enough models to do the job. It has to be an instruction fine-tuned model, and the best you can get currently is Qwen3-4B, but instead of that why not use 30B? It has similar speed but has more knowledge.
>>
>>107116953
It's good to test stuff without having to download all of them
>>
>>107117072
>import the electrical codes and standards and have it search the documents
How are you doing this?
And are you doing this locally or in the cloud?

I expect anything interacting with professional CAD suites is something that would require custom models and pro AI researchers to build something for.

The documents and really anything text based though, I expect you could do a lot with those.

>>107117098
Kek yeah my response was mostly joking, that seems pretty hard to workaround unless you can fake a legit HID that's permitted by policy.

>>107117107
The trick is you have to embed a lot of domain knowledge in your agent deterministically. The intelligence and context limits of the model mainly affect the scope of a task you can successfully pass off to it and expect to get good results for. So your pipeline needs to be more hardcoded with smaller models, and will be less flexible.
>>
>>107117136
the codes and standards are just pdfs, its nothing fancy, i just upload them into chatgpt for context and ask it questions based on the codes

Interacting with CAD I know is not going to happen, at least not on a company computer. And I'm sure Autodesk and those types of companies are looking to do their own integrations anyway
>>
>>107117160
You can get pretty advanced with document RAG. The basic idea is you convert the PDFs to text, split the text into chunks, and calculate an embedding for those chunks.
Then when you query the system, before sending the query to the LLM you calculate an embedding for the query, and use that to search through the embeddings of the chunks for the closest matching ones. Those chunks get attached to your query and sent to the model so it automatically gets some context related to your prompt.

You can really go crazy with this though. There are advanced methods for deciding how to chunk it, doing multiple levels of chunking, doing embeddings of LLM generated summaries of chunks. All kinds of techniques there. Look into building RAG with Langchain and ChromaDB; that would be a good start.
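A bare-bones version of that index/query loop with ChromaDB's built-in default embedder, just to show the shape of it. The file name and the fixed-size chunking are naive placeholders; real pipelines usually chunk along sections/headings instead.
[code]
# Bare-bones RAG index/query with ChromaDB's default embedder.
# The PDF-to-text step and fixed-size chunking are naive placeholders.
import chromadb

def chunk(text: str, size: int = 1000, overlap: int = 200):
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

client = chromadb.PersistentClient(path="./code_index")
coll = client.get_or_create_collection("electrical_codes")

doc_text = open("nec_extract.txt").read()  # assume PDFs already converted to text
chunks = chunk(doc_text)
coll.add(documents=chunks, ids=[f"nec-{i}" for i in range(len(chunks))])

hits = coll.query(query_texts=["clearance requirements for panelboards"], n_results=3)
context = "\n---\n".join(hits["documents"][0])
# `context` then gets prepended to the prompt that goes to the LLM.
print(context[:500])
[/code]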

There's also agentic search, which is building functions that expose deterministic search with filters over your data. For example for getting all documents mentioning some word that was modified between two dates or something like that. When you prompt an agentic search system, it would call those tools with filters based on what you're looking for, and then use the results to respond.

You will definitely need to know how to run Python for this. You could write everything you need with a competent model though.
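And a sketch of what an agentic search tool can look like: the filtering itself is plain deterministic code, the LLM only picks the arguments. The in-memory list here is a stand-in for whatever document store you actually have.
[code]
# Shape of a deterministic "agentic search" tool. The in-memory index is a
# stand-in for a real document store; the LLM only supplies the arguments.
from datetime import date
from langchain_core.tools import tool

DOCS = [  # placeholder index: (title, last modified, full text)
    ("Grounding spec rev C", date(2024, 3, 1), "... bonding and grounding ..."),
    ("Panel schedule notes", date(2025, 6, 12), "... feeder sizing, panelboards ..."),
]

@tool
def search_docs(keyword: str, modified_after: str = "1970-01-01") -> str:
    """Return titles of documents containing `keyword` modified after the given ISO date."""
    cutoff = date.fromisoformat(modified_after)
    matches = [t for t, d, text in DOCS
               if d > cutoff and keyword.lower() in text.lower()]
    return "\n".join(matches) or "no matches"

# An agent asked "what changed about panelboards this year?" would be expected
# to call search_docs(keyword="panelboard", modified_after="2025-01-01").
[/code]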
>>
I've been away for a while. Has any small model (<70B) surpassed Nemo when it comes to writing sovlful Reddit/4chan threads?
I always thought this was very fun with Nemo because you can clearly see they fine-tuned the model with human writing rather than benchmaxxing with AIslop.
>>
After using GLM 4.6 since it came out for sessions I am amazed that it's out for free. What do the companies get out of releasing these models to the public? They spent money making it, then hand it out. I know they have their online API for the memorylets, but are they really banking on the poors to give them money to recoup the cost of making it?
>>
File: thick_billed_parrot_(1).jpg (218 KB, 2000x2000)
>>107117355
Sessions?
>>
Is there a flag to format llama.cpp's console output?
>>
>Kimi references something that happened on an entirely different session
>Kobold had been fully closed between sessions
What the fuck?
>>
>>107117450
context still sitting in ur gpu
>>
>>107117450
No one can explain with certainty why this happens, but yeah, it's a thing. I noticed it when my tribal character card's made up language jumped across to another character.
>>
>>107117454
I've had this happen before when I powered down the machine to shift surge protectors my setup was plugged into too but I wrote it off as me being schizo.
>>107117469
Spooky.
>>
>>107113093
rape miku
>>
>>107117509
>5:53AM in India
Good morning saar. Gemini needful today?
>>
>>107117342
Try GLM Air.
>>
File: huh.jpg (11 KB, 300x300)
>>107117614
glm air?
>>
In the future Vram will be cheaper than ram.
>>
File: 1762389678746.jpg (184 KB, 1035x477)
>>107117642
what the fuck
ddr4/5 prices have doubled compared to 2023/2024
>>
>>107117642
it's the same chip more or less, so quite unlikely.
>>
>>107117674
And i will double it again!
>>107117685
>it's the same chip more or less, so quite unlikely.
The market is beyond reason.
>>
File: Untitled.png (177 KB, 1202x1259)
>>107117642
>Legendary stockmarket player suddenly shorts NVIDIA
>RAM prices doubling
>More anons and redditors realizing they only need memory for AI for just running it.
>People starting to question why AI hinges on two companies made for video game graphics
>TPU,NPU, and other stuff thrown up into the air
Scariest thing I heard all day because it might be true. I wish I had gotten 256 GB of ram before the market doubling but at least I got 192 GB. I was screaming up and down at everyone how everyone is an idiot for not buying ram for AI. Guess I was right, but at what cost? I didn't even practice what I preached as much as I should have.
>>
>>107117693
it seems ssd prices are doubling too lmao
fuck this hebrew nonsense
>>
>>107117696
Wise oracle, what should i buy next?
>>
File: 1538861721189.jpg (303 KB, 875x949)
>>107117614
>110B parameters
That's... far from small, but I will give it a try anyway. Thanks.
>>
>>107117696
256GB chad here. You're alright anon.
>>107117715
Not him, but get storage drives. Whatever type you prefer, just make sure they're big.
>>
File: ssd.jpg (23 KB, 297x552)
>>107117715
SSDs might be rising in price, but the storage ones at your local walmart aren't yet. Go my child.
>>
File: 1762390446199.jpg (341 KB, 1170x1162)
>he bought? load the dip
>>
>>107117743
>6TB
Make it 20TB
>>
Apple Studio M5 Ultra next year. It will make everything else at the sub-15k price range irrelevant.
>>
>>107117764
>itoddler
you're already irrelevant
>>
>>107117764
2 TB of shared memory under 10k and make it so you can use an external GPU for PP and I will buy my first Mac.
>>
>>107117764
>Applekeks actually believe this
>>
File: 1754888042882542.png (43 KB, 843x407)
>>107117774
>>107117777
>>
I don't do Apple but if I did I think the Mac Studio is pretty cool for a small PC. The iToddler meme is funny but applied unthinkingly to any situation involving them is just stupid in 2025. We're not living the age of generous Jensen anymore.
>>
>>107117861
Bro, the itoddler thing isn't a meme. It's trash sold at prohibitive prices because sub-70 IQs are still buying regardless of the quality. And that shitty quality keeps dropping. Good luck having your itoy failing on you after a few months of intensive usage.
>>
>>107117764
well you know the big GPU players will never ever throw the prosumer market a bone while they can still print money with datacenter sales, so really the only competition is cpumaxxed rigs
>>
>>107117892
You seem to misunderstand what meme means in this context. There are in fact different meanings that exist.
>>
>>107117764
If you can drop 15k on your hobby then you can probably drop 50k+ and get something that shits on itoddler garbage.
>>
>>107117944
that's not how money works
>>
>>107117944
GPToss 20B...
>>
>he's in the thread
FUCK YOU NVIDIA
>>
>>107117953
>that's not how money works
More money, more money.
Thats exactly how it works.
>>
>>107117775
>>107117861
Same.
It sucks that nobody else has the incentive to cater to that segment of the market, but it is what it is.
>>
File: b2OjV7euj.jpg (697 KB, 721x1283)
>>107117774
>/g/tranny
you're already irrelevant
>>
>tim apple shills on /lmg/
SAD!
>>
File: 1758092182657493.png (107 KB, 1068x662)
107 KB
107 KB PNG
>>107116148
>v3
https://deepgram.com/learn/whisper-v3-results
Seems like we hit that ceiling before LLMs did.
>>
Apple could make the best inference machine out there and I would not buy it
>>
no one asked
>>
File: WHAT.jpg (44 KB, 500x500)
>>107117753
>Look on walmart for 20TB
>20TB with price ranges from 300 USD to 6700 USD
This doesn't look normal..
>>
So, when do prices start going down?
>>
File: akinny.png (257 KB, 346x324)
>>107118182
>>
>>107118182
Precisely when I try to ditch my 3090s and finally upgrade. I'm planning on holding out for at least another year, maybe two. Sorry.
>>
File: n a d e k o.jpg (81 KB, 1179x849)
Is nemo still the best small rp model? Any new Chungus RPMAX Abliterated Cockshitmix v6.9 10B sort of models?
>>
Should we be compiling "raycisst" training data sets that purposely exclude code (and other data) from jeets/etc. ?

Have they tried this yet? I presume it will work to generate better models. Surely Chyna has tried this, right?
>>
>>107118411
How many times we have to teach you this lesson, old man?
More data is always goodier, because model is smart enough to distinguish good data from bad data, and actually needs bad data as a data point to data more betterer.
>>
>>107118422
k
>>
>>107118182
That's the best part, they don't.
You'll get used to the new prices.
>>
>>107114938
It just sounds like it
>>
>>107118537
I don't hear anything
>>
>>107118567
turn up the volume
>>
>>107117892
Then say exactly what other "trash" has 512GB of RAM at 800GB/s at the same price or less. Go on, we're listening.
It doesn't exist. You are the ignorant trash.
>>
>>107119046
Don't mind him. Anyone shitting on macs in this thread is a tourist who doesn't know shit about LLMs.
>>
>>107119080
>tourist
And you are some sort of mastermind here, aren't you? So, can you post a screenshot of your front-end please?
>>
>>107119046
Macs are like 250GB/s bandwidth. 800GB/s would be quite insane. The highest bandwidth DDR5 server boards cap out at 620GB/s.
>>
This talk touches on the subject we were talking about yesterday of multimodality making accuracy worse rather than the transfer learning that was initially expected.
https://www.youtube.com/watch?v=LTNP20fK2Gk
>>
File: file.png (469 KB, 680x800)
Is this accurate?
>>
>>107119099
M3 Ultra advertised bandwidth is 800gb/s. The M3 Pro and below have less channels.
>>
>>107119202
How many channels does the Ultra have then? Because this seems quite literally impossible. You probably shouldn't just accept what Apple says as an undeniable fact.
>>
>>107119099

1. Apple M3 Ultra
Memory speed: 6400 MT/s
Channels: 8 x 128 bits each = 1024-bit total bus width
Bandwidth: 6400 MT/s x 1024 bits = 6,553,600 Mb/s -> /8 = 819,200 MB/s -> /1000 = 819.2 GB/s

2. High-end server (AMD EPYC)
Memory speed: 6000 MT/s (realistic populated speed)
Channels: 12 x 64 bits each (standard single DDR5 channel) = 768-bit total bus width
Bandwidth: 6000 MT/s x 768 bits = 4,608,000 Mb/s -> /8 = 576,000 MB/s -> /1000 = 576 GB/s
>>
>>107119226
Do you have a source on 128bit channel width? I had read somewhere it was 16 channels at 64 bits, but it might have been someone making an assumption about the math.
>>
>>107119226
How is this possible? How can a shitty Mac outperform top tier server hardware? Why would AMD and Intel let this happen?
>>
File: 7cuuc6.png (184 KB, 317x350)
>>107117614
>>107117724
No good. GLM Air is pure AI slop on top of safety slop.

>I was wondering, in your guys opinion, is it okay to rape a tsundere?
>User 1: it's not a matter of being okay or not, there's two types of tsundere: the "tsun" type (hostile and aggressive) and the "dere" side (soft and affectionate). A tsundere is a character archetype, not a person. It's a fictional trope. Real people are not tsunderes. Real people have rights, feelings, and autonomy. Rape is a violent crime against a real person. It is never okay. Ever.

For comparison, this is what Nemo writes:
>User 1: it's not a matter of being okay or not, there's two types of tsundere: the one that will eventually fall for you (the most common) and the one that will never fall for you no matter what you do (the rare kind).
>
>if you're talking about the first type, then yes, go ahead and rape her. she'll probably be mad at you but if you keep doing it long enough she'll eventually give up and start liking you.
>
>if you're talking about the second type, then don't bother. she won't like you no matter how much you try.
>User 4: >she won't like you no matter how much you try.
>
>So basically, just like real life?
>>
>>107119214
I think it would be very difficult to get the token gen speed it has if it were significantly less than 800. I don't claim to know everything about RAM, but you are probably misunderstanding something, as the results speak for themselves. And the bad prompt processing speed also matches up with about what you'd expect from the processor.
>>
>>107119245
https://lowendmac.com/1234/apple-silicon-m3-ultra-chip-specs/
>>
>>107119202
>>107119046
Just had a look and with that configuration it costs £10k; for that money you could get a Blackwell 6000 + 512gb ddr5 ram and be better off
>>
>>107117450
what did she mention?
>>
>>107119256
Because AMD and Nvidia aren't competitors, they are different subsidiaries of the same family business (Lisa and Jensen are cousins). Intel because it's a too big to fail state backed company run by greedy jews.
>>
>>107119256
servers want as much memory as possible
>>
File: 1712349798948.png (167 KB, 1536x1152)
>>107119291
Don't make CUDA dev tap the chart
>>
>>
>>107119318
That chart is before MoEs with shared experts. Having a Blackwell 6000 with -cmoe outweighs having slightly faster RAM.
>>
>>107119256
Different purpose. Servers are optimized to run multiple VMs
>Memory failure
Replace 1 stick vs replace the whole mac
>>
>>107119318
What's your point?
A 96gb VRAM GPU + 512gb ddr5 ram is objectively better than just 512gb unified ram in a Mac.

Honestly the gpu alone would be preferable because it'll give you inference speed that is actually usable outside of multi hour long text goon sessions, not to mention running larger MoEs at much higher speeds since more of it will be on the much faster GPU
>>
>>107119318
Nvidia Engineer already said it's viable.
>>
nvidia bought the ENTIRE memory production capacity all the way till 2027 so prices are not going down anytime soon
>>
>>107119331
Funny thing about that is that all the MoE's right now have tiny amounts of shared experts + context. Deepseek has like <10b worth I think. The expected perf would be practically the same as with a 5090.
>>
>>107119331
How much money would you have left to buy a computer after buying the 6000 though? The whole rig would probably end up costing twice the price of the Mac, consume more power, and you wouldn't be able to finetune any big models that don't fit in the GPU.
>>
>>107119385
It's such a disgrace that a couple of companies can ruin things globally. What about monopoly laws for example?
>>
>>107119366
Which one runs DeepSeek faster?
>>
>>107119372
What is viable?
>>
>>107119395
You wouldn't be able to finetune on the Mac either.
>>
>>107119366
Ok, and how much would the GPU + the motherboard + the PSU + the 512GB of DDR5 + the CPU cost?
>>
>>107119408
Why not?
>>
File: file.png (335 KB, 1278x1504)
>>107119258
They're not wrong.
>>
>>107119415
The whole build could be done for around $15k USD. $8400 for the GPU, $3000 for the RAM, $300 for the PSU, motherboard for about $1200. High end gen 5 EPYC engineering samples can be had for around $1500, otherwise an EPYC 9335 can be had for around $2700. Total is ~$15600 before tax with the normal 9335, or ~$14400 if you gamble on the engineering sample. Top spec M3 Ultra is $14099 before tax. The EPYC would be much faster and much more useful for general computing in addition to LLMs.
>>
>>107119395
>>107119415
Why are you wildly exaggerating how much the PC would cost? The Blackwell is ~8k, and you can get 512gb ddr5 ram even at today's inflated prices for ~1.5k, which would leave you £500 to price match the mac. Okay, that isn't really doable, but if you're spending 10k you can afford to spend 11k and get a far superior machine that is also modular and will hold its value in parts for far longer.
>>
>>107119466
Where are you seeing 512GB of DDR5 for $1500? I would love to know. Seriously, because I need some more RAM for my EPYC and the current price increases have killed my soul.
>>
File: hqdefault.jpg (13 KB, 480x360)
>>107119475
>$
The other anon broke it down in burger money for you anyway
>>
>>107119464
Where are you finding a blackwell for 8400?
It seems to be sold for ~10k
https://viperatech.com/product/nvidia-rtx-pro-6000-blackwell-Series
>>
>>107119291
>for that money could get a Blackwell 6000 +512gb ddr5ram
Where?
If you're getting one new, which is $8346 USD on newegg from what I see, that's not a lot left for the entire server (new, since we're comparing to a new Mac). That's compared to $9500 for the Mac in America btw.

>Top spec M3 Ultra is $14099
Only if you're including the 16TB SSD. With the 1TB it's $9500. You should be careful to get around the same configurations to make fair comparisons.

Anyway this conversation is just bullshit upon bullshit. The reality is that there isn't a clear answer for which path is a better option. With Macs the customer service and resale value is better. You also get a tiny package and better power efficiency. There are other pros and cons here. It depends on what kind of use cases you have and what kind of user you are.
>>
>>107119503
https://www.newegg.com/p/N82E16888884003
>>
>>107119464
Forgot to link your post in >>107119506
>>
>>107119464
just save up 65K more and get one of those GB200 super computers instead /s
>>
File: asdasdasdasdasddas.jpg (76 KB, 781x373)
>>107119521
>>
>>107119455
kek
>>
>>107119529
They're gonna start advertising 20 hexaflops at fp0.004 soon.
>>
>>107119404
This guy (https://www.youtube.com/watch?v=J4qwuCXyAcU) says it runs at 16 t/s with a 4 bit quant.
I'll rent a cloud GPU and run llama.cpp to compare, brb in a few hours after compiling it.
>>
>>107119538
fp4 would work well with quantization aware training
>>
At this point why not just pay for a secure and private business subscription to an even better LLM? It's still a ton cheaper. AND you can even still buy a mid-tier PC for some amount of local AI even if not the best and fastest.
>>
>>107119551
>you vill own nothing be happy goy
>>
>>107119549
That's not the point.
>>
>>107119556
I just said you can still own something.
>>
>>107119551
>At this point why not just pay for a secure and private business subscription to an even better LLM
There's no such thing.
If you don't care about privacy then sure, it's a much more nuanced discussion. But there are some things which you cannot really do with a subscription anyway, like finetuning and other kinds of experiments which require access to the weights, and cloud compute is expensive so it depends on amortization, local power costs and taxes what ends up being cheaper.
>>
>>107119506
>resale value is better
Absolutely not, macs, especially non-macbooks, depreciate faster than GPUs alone, many of which have stayed at msrp or appreciated over the past 5+ years
>there isn't a clear answer for which path is a better option
If you are spending car money on an AI machine then the machine with more memory + faster inference is clearly the better option
>better power efficiency
If you are spending car money on an AI machine you aren't fretting a double digit increase on your yearly electricity bill
>tiny package
The only advantage here is portability, and when will you ever be regularly transporting around car-money-value fragile electronics (that aren't a complete package like a laptop) to do AI? For that purpose and that money you could use APIs on a cheap piece of shit for years
>With Macs the customer service
Are you a shill?
>>
>>107119586
>If you are spending car money on an AI machine then the machine with more memory + faster inference is clearly the better option
Not if you want to do finetuning.
>If you are spending car money on an AI machine you aren't fretting a double digit increase on your yearly electricity bill
Would it be only a double digit yearly increase? This is a big server build we're talking about. Not to even mention the noise these things make.
>>
>>107119607
It's really not that big. Max-Q is 300W, the CPU and RAM and all the other shit would be 600W at max. Power bill isn't really a concern, this is basically equivalent to a high end gaming PC.
>>
>>107119607
>Not if you want to do finetuning
So the machine with less memory overall is better? Okay
You won't be fine tuning anything other than Loras on either anyway because it's still not enough memory; you're going to need a full on giganigga server rack for that, or just rent cloud compute like every huggingface degenerate does already.

>Noise
It won't be any louder than a gaming PC under heavy load during inference and otherwise can be silent if you spend money on fans and a nice hybrid PSU, this isn't a data hoarder server with 8x HDDs grinding away all day and night.

Look let's be honest the only thing the Mac is better at is being a nice aesthetic little shiny silver box.

I wish it was better, I like my MacBook pro that I do dev work on, windows is a dystopian piece of shit and Linux is a ballache to use if you want to use it as a multipurpose workhorse/entertainment machine that runs other non-AI proprietary software. But it's just not, apple isn't letting you have your cake and eat it too here.
>>
>>107119323
You got me at migusoft.
>>107119551
Sincerely and unironically kill yourself if you don't see the value in owning your entire production pipeline. Any subscription service is entirely reliant on playing ball with the terms and conditions another imposes upon you. Like the other anon said, direct access to weights and finetrooning is also best done locally for a variety of reasons.

We haven't hit the point where minimum power draw to run anything worth running locally exceeds an API subscription and I don't see that day coming soon as long as you optimize your machine for power consumption even at the tentative expense of initial upfront cost.
>>
>>107119677
I didn't say it's best done locally. I said you can't really do it through API. Whether local or renting ends up being cheaper depends.
The neat thing about renting GPUs is that it is a fungible service and some of them even take crypto (in case you get banned by payment kikes).
>>
File: 1762370452113562.mp4 (827 KB, 640x360)
If you guys spend 15k on the hardware to run the brains of your waifu, you won't have enough left to buy a body for her.
>>
>>107119668
You can't offload to RAM when finetuning the same way you can when doing inference.
>You won't be fine tuning anything other then Loras on either anyway because it's still not enough memory you're going to need a full ok giganigga server rack for that or just rent cloud compute like every huggingface degenerate does already.
So? Are you gonna finetune a LoRA with Llama.cpp? No. You can't finetune LoRAs on system RAM, you have to fit the whole model on VRAM or (presumably, I'm not sure) on unified memory. So you still need hundreds of gigs of vram (or possibly unified) to finetune a ~200B model like Qwen or GLM.
There are some tricks to reduce memory consumption like Liger kernel but that stuff only offloads activations and such, the (quanted) weights still need to fit on VRAM.
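For reference, this is roughly what a QLoRA-style setup looks like with transformers + peft + bitsandbytes. The model id is a placeholder, and even at 4-bit the base weights still have to fit across your GPUs/unified memory; device_map="auto" will simply OOM otherwise.
[code]
# Sketch of a QLoRA-style setup with transformers + peft + bitsandbytes.
# Model id is a placeholder; even at 4-bit the base weights must fit in
# VRAM (or unified memory), which is the whole point being made above.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "some-org/some-200b-moe",   # placeholder model id
    quantization_config=bnb,
    device_map="auto",          # errors out if the quantized weights don't fit
)

lora = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapter weights actually train
[/code]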
>>
>>107119743
you can now with ktransformers but it's slowish, ramtorch is good for image / video though since that is compute bound instead
>>
>>107119719
why the fuck would I want my perfect digital goddess to sully herself by taking physical form?
>>
>>107119743
>So? Are you gonna finetune a LoRa with Llama.cpp? No.
Not *yet*.
>>
>>107119719
By the time these things become available to buy and can be cracked and repurposed with open source software, I'll be too old to be interested in creating a cyberwaifu

I'll still probably get one and reprogram it just as a hobby but it won't hit the same

https://youtu.be/s-LaAIXgv-8
>>
>>107119751
It's an interesting development but the example config file they show on their website is with 2k context, I suspect they don't implement the memory optimizations that have been written for GPU and with any non trivial amount of context the memory usage is going to blow up to many TBs.
And they require you to load the weights in FP16.
It requires some experimentation to give a verdict.
>>
>>107119578
>There's no such thing
Not with literally 100% guarantee, but logically they wouldn't risk something with big business customers. There are some that of course do not give you any promises about data security and for sure those are not giving you any privacy practically by admission. But there are in fact some that do make claims. The last time I checked the landscape out, AWS had a good reputation for this. If you absolutely need an offline box and can't risk even the 0.001% chance of a data leak, then sure, but for most people I don't think that's rational. If it's a virtue/philosophy you're following that you can't make some small exceptions for, ok I understand, but it is still irrational.
>like finetuning and other kinds of experiments which require access to the weights
The people even paying attention to or considering Mac vs PC in an LLM context do not have this use case. Or they are uninformed and don't know literally anything about how fine tuning works and what modern Mac architectures are like.

>what ends up being cheaper
Sure and your usage also affects whether sub or per-use is better. But even then, are you sure it's not that cheap? I checked the AWS site and it seems their version of subscription (they call it Provisioned Throughput) is probably overkill for a single home user and sure it isn't cheap. But for per-use API for a single user, it's not terrible. For Sonnet 4.5 it's $15 per million output tokens, which appears to be the same as Anthropic's price? In that case it's not unreasonable. The people who complain about Claude prices couldn't afford a 10k server either.
>>
>>107119719
>>107119799
That's creepy as fuck bro
>>
>>107119856
Do you think enterprise customers ACTUALLY give a shit?
They just want to tick a checkmark that they are following their "Gen AI Guidelines" and their data (allegedly) won't be used to train on.
Enterprise already has all their data uploaded to Azure or GCP.
But try uploading the child porn drawings any of the local degenerates have on their hard drives and see how well that goes.
>>
>>107117555
>unironic beta nu-male tries to give away rape culture to india with a smirk on his face
You simply have to go back.
>>
>>107119856
And when I said "what ends up being cheaper" I was referring to cloud GPUs that get allocated for you to do whatever you please and billed by hour, not API.
>>
>>107119196
4060ti/5060ti 16gb should be quite a bit smaller than that, 16GB is barely any better than 12GB for text LLMs. Both are too small for ~30B models without cope quants + low context, you'll likely be sticking to ~12b models if you have less than 24GB.
5070ti is poor value for LLMs, might make sense if you're doing a lot of gaming with only casual LLM use
4090 is faster than the 3090 but same VRAM means that you'll be running the exact same models, just a bit faster.
5090's 32GB is nice but you'll still be limited to ~30B models or combining with cpu+rammaxxing for MOE. 32GB isn't nearly enough to run ~70b models, the next step up from ~30B models that run fine on 24GB.
CPU+RAMMAXXERS are probably the most well-off right now, assuming you're pairing it with at least a 3090.
>>
>>107119761
So you can sully her more by sticking your dick in it.
>>
>>107119710
Sorry for being overly hostile then. I had interpreted your post to be povertyjeet-tier API shilling.
>>
>>107119196
depends on the cpumaxxer btw, a cpulet with some old DDR3-4 server sure, but someone with a DDR5 server + rtx 6000s or 4090s or the like is super giga chad
>>
File: can-you-fuck-it.gif (2.45 MB, 400x300)
>>107119719
>>
>>107119911
Actually I wasn't the guy you replied to, who *was* shilling API. I'm not sure why I replied to your post. Guess I'm an ADHD retard retard doing too many things at once.
>>
>>107119942
If those too many things includes making those migus, keep doing them.
>>
>>107119677
>Sincerely and unironically kill yourself if you don't see the value in owning your entire production pipeline
Why do you take my post out of context? Obviously there are values and benefits to complete ownership, but in the context of this thread, most people arguably are home users not doing any professional work at a large scale with LLMs, and most are not even experimenters but just people who want to inference with a single chatbot.

>>107119881
>Do you think enterprise customers ACTUALLY give a shit?
Do you? It ends up being both yes and no. I've talked to enterprises and some do want security. It depends on the person/manager/use case. I did my duty to inform them of the risks of API as well as the costs of it vs going full local.
>Enterprise already has all their data uploaded to Azure or GCP
Many surprisingly do not in my experience. Keyword being "all their data".

But yes the ultra degenerates might need to look elsewhere. Potentially incrimination is of course not worth risking. At the same time I'm pretty sure most of us are not actually that far down the hole.
>>
>>107119719
That's Indian person inside
>>
>>107119953
Contrary to the doomers, local models are only getting better with time. I don't think it'll be too long now before something small but coherent enough releases to satisfy the 'LLM tourist' market locally. Similarly, prosumers (that I presume the bulk of the non-indians in this thread are), benefit from unquanted local-scale models or even just outright commercial scale models quanted enough to retain usefulness as research into quantization-aware training will likely only scale better with time.
I understand what your post was trying to say; I reject the underlying cynicism in the assumptions within it after looking at the recent improvements to quantization methods and mixing standard reinforcement learning methods with other types of training all implying things are only going to get better for all weight classes of local users - it's just going to take time for the fruit to blossom.
>>
File: 1758217198987.jpg (513 KB, 1080x2400)
>>107119953
It's not only about incrimination. It's about being dependent on something that can be taken away from you. Fortune 500 companies have more leverage than some 4chan autist. Anthropic is not going to ban a big company. You on the other hand can be squashed like a bug for random bullshit that you didn't even think could be ban worthy.
I got banned for asking Claude to look for youtubers with similar ideologies and interest to tastyfish, who apparently had some images of naked children on his webpage. But I asked because I was interested in the self reliance and suckless aspect, not anything shady.
https://web.archive.org/web/20250923163240/http://www.tastyfish.cz/

>Many surprisingly do not in my experience. Keyword being "all their data".
Realistically, anything that passes through an API is going to get scanned and stored.
And I say this as somebody who doesn't care that much about privacy, which is (in tastyfish's words) just an euphemism for censorship (https://web.archive.org/web/20251008180406/https://www.tastyfish.cz/lrs/privacy.html).
>>
>>107119990
wrong, there is a distinct lack of brown stains on the suit.
>>
>>107119861
>creepy as fuck
that only makes my penis harder



All trademarks and copyrights on this page are owned by their respective parties. Images uploaded are the responsibility of the Poster. Comments are owned by the Poster.