[a / b / c / d / e / f / g / gif / h / hr / k / m / o / p / r / s / t / u / v / vg / vm / vmg / vr / vrpg / vst / w / wg] [i / ic] [r9k / s4s / vip] [cm / hm / lgbt / y] [3 / aco / adv / an / bant / biz / cgl / ck / co / diy / fa / fit / gd / hc / his / int / jp / lit / mlp / mu / n / news / out / po / pol / pw / qst / sci / soc / sp / tg / toy / trv / tv / vp / vt / wsg / wsr / x / xs] [Settings] [Search] [Mobile] [Home]
Board
Settings Mobile Home
/g/ - Technology


Thread archived.
You cannot reply anymore.


[Advertise on 4chan]


File: qwen3vl_arc.jpg (1017 KB, 5908x3413)
1017 KB
1017 KB JPG
/lmg/ - a general dedicated to the discussion and development of local language models.

Previous threads: >>106671477 & >>106660311

►News
>(09/23) Qwen3-VL released: https://hf.co/collections/Qwen/qwen3-vl-68d2a7c1b8a8afce4ebd2dbe
>(09/22) RIP Miku.sh: https://github.com/ggml-org/llama.cpp/pull/16174
>(09/22) Qwen3-Omni released: https://hf.co/collections/Qwen/qwen3-omni-68d100a86cd0906843ceccbe
>(09/22) DeepSeek-V3.1-Terminus released: https://hf.co/deepseek-ai/DeepSeek-V3.1-Terminus
>(09/22) VoXtream real-time TTS released: https://hf.co/herimor/voxtream

►News Archive: https://rentry.org/lmg-news-archive
►Glossary: https://rentry.org/lmg-glossary
►Links: https://rentry.org/LocalModelsLinks
►Official /lmg/ card: https://files.catbox.moe/cbclyf.png

►Getting Started
https://rentry.org/lmg-lazy-getting-started-guide
https://rentry.org/lmg-build-guides
https://rentry.org/IsolatedLinuxWebService
https://rentry.org/recommended-models
https://rentry.org/samplers

►Further Learning
https://rentry.org/machine-learning-roadmap
https://rentry.org/llm-training
https://rentry.org/LocalModelsPapers

►Benchmarks
LiveBench: https://livebench.ai
Programming: https://livecodebench.github.io/leaderboard.html
Code Editing: https://aider.chat/docs/leaderboards
Context Length: https://github.com/adobe-research/NoLiMa
GPUs: https://github.com/XiongjieDai/GPU-Benchmarks-on-LLM-Inference

►Tools
Alpha Calculator: https://desmos.com/calculator/ffngla98yc
GGUF VRAM Calculator: https://hf.co/spaces/NyxKrage/LLM-Model-VRAM-Calculator
Sampler Visualizer: https://artefact2.github.io/llm-sampling

►Text Gen. UI, Inference Engines
https://github.com/lmg-anon/mikupad
https://github.com/oobabooga/text-generation-webui
https://github.com/LostRuins/koboldcpp
https://github.com/ggerganov/llama.cpp
https://github.com/theroyallab/tabbyAPI
https://github.com/vllm-project/vllm
>>
►Recent Highlights from the Previous Thread: >>106671477

--Paper: Mamba architecture limitations exposed by synthetic data experiments:
>106671933 >106671959 >106674736
--Adding fused top K MoE kernel to CUDA for performance gains:
>106675843
--Nature paper on LoST: 100-core efficient LLM training method:
>106682032 >106682045 >106682064
--Abliteration's risks and tradeoffs in disabling AI safety mechanisms:
>106675918 >106675946 >106675955 >106675984 >106676044 >106676066 >106676214 >106676235 >106675999 >106676007 >106676016
--Reactions to Qwen3Guard moderation model release and safety implications:
>106676188 >106676205 >106676234 >106676808 >106676237
--Chinese Fenghua 3 GPU unveiled with high-memory and AI-focused features:
>106679462 >106679692
--Frustration over bundled UIs in local LLM tools, seeking modular alternatives:
>106675664 >106675699 >106675808 >106675834 >106675845 >106675883 >106675933 >106675966 >106675994 >106676009 >106676038
--Evaluating PQ-R quantization method effectiveness and AI-human writing convergence:
>106677639 >106677744 >106678017 >106678153 >106682056
--Qwen3 VL model's enhanced visual recognition capabilities:
>106678328 >106678660 >106679355 >106679433 >106679464 >106679605 >106679623
--Stargate AI data center expansion by OpenAI, Oracle, and SoftBank:
>106679596
--Proposed VPN bans and government censorship efforts in the US/EU:
>106671527 >106671592 >106671615 >106671646 >106671626
--Qwen3-VL video promotion:
>106677821 >106679573 >106679698
--AI-generated eroge game released for free:
>106683055 >106683076 >106683099
--Hazy Research proposes unified GPU kernel for multi-device optimization:
>106682460
--Logs: gpt-oss-120b:
>106682368
--Miku (free space):
>106671954 >106677275 >106677284 >106682667 >106682680 >106682694 >106682711 >106682758 >106682776

►Recent Highlight Posts from the Previous Thread: >>106671486

Why?: >>102478518
Enable Links: https://rentry.org/lmg-recap-script
>>
w y w a
y d h m s
p
o b
d g
>>
File: 1732616668476369.png (362 KB, 500x417)
362 KB
362 KB PNG
>>
Any rentrys on properly setting up world/lore info on ST for RP? Not sure how in depth to go for each entry, and where to put descriptions of certain items in my world.
>>
>>106683187
https://rentry.co/world-info-encyclopedia
this was in the st docs when i was looking through it for something else
not sure if any of it is outdated just gave it a quick glance through
>>
It is going to be hilarious if Chinese girls start calling themselves "Seraphina" and "Elara."
>>
>>106683229
>Seeking evaluations of the name Seraphina from international students or foreigners
>>
>>106683187
>>106683218
https://docs.sillytavern.app/usage/core-concepts/worldinfo/
here's where i found it if you haven't read through it already
could try looking at other peoples world/lore you like for examples too
>>
>>106683229
I'm sure even in the west there must already be thousands of babies who were named by parents that just asked ChatGPT for baby name recommendations.
>>
>>106683282
A solid kek would be had if baby name statistics for slop names showed an upward trend beginning around 2023 onward.
>>
File: 2025-09-24_08-33-13.png (124 KB, 1920x1080)
124 KB
124 KB PNG
>>106683344
https://www.ssa.gov/oact/babynames/names.zip idfk if real or fake but there you go

also the most popular female name is olivia jesus what a disgusting fucking name the male equiveleant of dick
>>
>>106683344
>>
is the lazy guide in the OP applicable to running an llm on windows?
>>
>ooba
>kobold
that guide likes recommending garbage software, your OS doesn't matter it's the guide itself that's a trash fire
>>
>>106683810
as someone thats just trying to get started with their first llm on windows, what should i be using then?
>>
File: lots_of_people_vis_style2.jpg (517 KB, 1024x1023)
517 KB
517 KB JPG
fuck outta here rp trannies, we /vision/ now
>>
>>106683831
Ollama Pro Max Plus, of course.
>>
>>106683076
That's the good thing about them

>>106683099
Based chinaman
>>
File: Elara.png (53 KB, 604x756)
53 KB
53 KB PNG
>>106683282
Hmmmm...
>>
>>106683282
Why would you want an AI to give your baby a name? You have at minimum 9 months to think up a name that actually means something to you.
>>
>>106683831
Ignore that anon, just use kobold or ooba if you're starting out and just want to talk to a local llm. If the experience is acceptable for you (local is fucking shit, don't delude yourself), you can take a look at something like vllm or llamacpp (which should be easy if you use ooba, since that uses a llama-server backend for ggufs) later.
>>
>>106684170
Because the AI one-shot your brain and is now your best friend.
>>
we are witnessing the death of modern ai
>>
>>106684170
Have you seen what they name kids nowadays? These ones are normal by comparison
>>
>>106684326
I wish
>>
>>106684326
No, just the stagnation of public, viable local AI hosting.
>>
>>106683055
>NAI Diffusion Furry V3及びNAI Diffusion V4 Fullを使用した
lol, but pleasantly suprised.
Downloaded this free jap AI game. Using the 30b-a3b qwen model for translation. (the july one)
Its maybe a bit dry but to be fair so is the original jap writing. I suspect the text was llm generated too. kek
I didnt have a single refusal. I did have those with mistral though. And JP to EN is solid in my experience. At least for those ero games. And its fast AF. Impressive for that size.

Cool that we now have free ai created h-games translated by local llms.
Its the worse its gonna be right now.
>>
>>106684519
I'm all for shortstacks but the bald head really doesn't do it for me.
>>
>>106684519
>Its the worse its gonna be right now.
is that a challenge? we can always be more safe
>>
>>106684559
At least they cant take my local toys away.
I'm still waiting for a saudi prince that gives us our excellent creative writing llm.
>>
>>106684519
this is the worst thing i have ever fucking seen
>>
>>106684590
there is lots of stuff around that looks way worse.
>>
>>106684590
By AI game slop standards, that art is in the top 5% easy.
>>
>>106683076
That's why Sanzo(33) died.
>>
Gemma 3 is (still) killing me. The hotlines, the refusals, the apparent prudishness, are all an easily bypassed façade. The model knows. I'm hopeful for Gemma 4...
>>
>>106684360
No i have not. Do they call kids xqc or something?
>>
File: file.png (46 KB, 658x292)
46 KB
46 KB PNG
>>106685087
worse
>X Æ A-Xii, Exa Dark Sideræl, Strider, Azure
>>
>>106685161
elon is not a person
>>
>>106685161
that's rough
>>
>>106685161
I'd rather live in a world where everyone has names like x'ae a'xi than have three johns and five davids next to me at any time
>>
>>106685232
I'd rather you live in that world too
>>
>>106685161
>all his children are autistic
>>
>>106685261
Elon himself is bipolar so that shouldn't be the case.
>>
>>106685261
>grimes and elon both are autistic
their kids didn't stand a chance
>>
>>106685261
You are also autistic, but you have far less money than they do.
>>
https://www.reddit.com/r/StableDiffusion/comments/1nor9m2/vibevoice_finetuning_is_here/
>>
>>106685332
i think all of us combined have less money than they do, anon
>>
>2025
>still no 256 GB VRAM GC for <$500
dead hobby
>>
>>106685596
>2025
>still broke
ngmi
>>
>>106685161
Isn't "Vivian" a troon?
>>
>>106685596
No poors allowed
>>
>>106685596
ddr6 will save the hobby
>>
Erhart largest lime
>>
>>106685917
>here is your 64 GB for only $450!
>>
>>106685457
Did they ever link the The data set they used? (Both the voice and the TXT files)
>>
is there a simple frontend I can setup on a windows computer so I can access my lm studio instance on another computer across the net. Cline / Jetbrains both inject giant fuckhuge preprompts that always fuck up the output. I don't want to download a fuck huge repo and do anything related to npm on general principle. Just a simple exe that I can run with a --port flag would be great. Feels like something I'm going to have to code.
>>
>>106683141
>browse GPUs on alibaba
>Recommended for you: women's intimates
>>
>>106686051
They think you're a cute girl working at a procurement department, nonnie.
>>
>>106686051
They think you are a chick wanting to run a local model for werewolf sex.
>>
>>106686051
They think you are a /g/ programmer and Linux user.
>>
>>106686027
hollama / mikupad both seem to be non bloated garbage
>>
>>106683418
>>106683474
lol it's only going to go higher from here on.
>>
>>106685953
still vastly cheaper than any vram of that size
>>
>>106686173
True, but it's also vastly slower too.
>>
File: 64448866.jpg (158 KB, 1080x1183)
158 KB
158 KB JPG
is there something to test the quantization of a served cloud model? like a catch question or small benchmark?
If not, lets brainstorm possible ways to determine or estimate the quantization level
>but why? use case?
turning cloud jews like bezos into real life seething 'jaks because they can't scam endusers anymore isn't enough motivation for you?
>>
What are the options for running Qwen-VL? Is there lcpp support yet?
>>
File: aphex.jpg (74 KB, 500x565)
74 KB
74 KB JPG
>>106686399
>tfw just ordered a 128GB Ryzen Al Max+ 395 computer
cannot wait to use this shit on the train with qwen
>>
>>106686270
Its like trying to determine the kinds of grinding heads used at the factory by examining the sausage you get delivered to your house. Even if you could, it would be time consuming to the point of irrelevancy. Cloud models are always going to be mystery meat with little-to-no control. You also have no idea how many guard models are on the output.
You either control the whole process, or give up the ability to know reliably.
>>
>>106686270
>aicgjeet finds out his api dsv3 was quanted all along
adorable
>>
>>106686431
Nah.

>>106686270
Yes, it's easy, just set the temperature to 0 and see how quickly it diverges from the FP16 model for a given set of prompts. The quicker it diverges (on average) the less accurate it is.
>>
>>106686477
>Yeah bro, if you don't have 4000 dollars to spend on 500GB of RAM you might as well just stick to ChatGPT lmao
You would be doing humanity a service by killing yourself. People like you are why everything is shit.
>>
>>106686509
This is /lmg/ sir
but I use openrouter desu
>>
>>106686487
there's still some non-determinism even with 0 temp due to race conditions in parallelized code.
>>
how do you use ai for coding /lmg/ ? I have quickly realized that "agentic coding" is mostly bullshit and instead just manually paste code into qwen / gpt-oss and treat it as a rubber duck debugger
>>
>>106686509
cope all you want but we both know this is an expensive hobby. also cloudniggers/aicgjeets are not welcome here
>>
>>106686431
I'll adopt that mindset once I can run a SOTA model locally. Until then, I'm forced to use cloud models. And I don't agree that it's a tedious task. Yes, precisely determing quantization is probably not possible, but accurately enough to estimate if it's gonna affect whatever you're trying to do. So for example on openrouter you just simultaniously run this benchmark for all providers of a model, and that's a good starting point:
>System: You are a precise calculator. Return only the decimal number with full precision.
>User: Compute (1/7) to 30 decimal places and return only the digits.
>Ask for 10, 20, 30, 50 places.
>Measure: Largest N for which output is correct (or stable across runs).
Obviously if they use a calculator tool on the backend you'll have to come up with an alphabetic version of this test.
>>106686487
This is a good test, but it requires the reference output data from a FP16 model. I guess it's possible if you just prep everything for a quick benchmark using runpod or whatever to get the FP16 reference output
>>
Vramchads... do big models ever get to a point where they don't regularly forget important details from last paragraph or at least gain some vague sense of object permanence? It's kinda disheartening going from 12B models all the way up to 100B and seeing them still making all the same kind of mistakes...
>>
>>106686525
github copilot + sonnet is really good
>>
>>106686571
>And I don't agree that it's a tedious task
Remember that the model can basically change without notice between queries. You'd need continuous testing.
>>
>>106686571
Can't you just set temperature to 0 and see if two models output the same tokens from the same prompt?
>>
>have health questions
>think about trying gpt oss just to see how it would do compared to other models, since OpenAI advertise high health benchmark scores for their models generally
>load up the 20B
>literally goes into infinite repetition loops in the thinking and never exists
Kek, how does this shit even get a benchmark score. What a bunch of bullshit.
>>
>>106686623
>>106686519
>>
>>106686629
Ass.
>>
>>106686611
I stopped using sonnet because you run out of tokens so quickly, at least with the jetbrains addon. Sonnet 4 gave great results but I don't like renting access to an llm
>>
>>106686603
I found that the ability to bring up relevant details doesn't seem to correlate with size. I had a simple chat where only nemo managed to spot what any reasonable human being would while much bigger models including qwen3 235b failed. I guess it has something to do with not being poisoned by the compliant assistant personality over the quadrillions of tokens that make up sft nowdays
>>
>>106686525
unless it's just a one file script, don't do that. vibecoding is in an useable state and you're just so much faster.
Download vscode, install roo code extension, use whatever model/api you want. The free grok code fast works good enough.
If your project gets big, enable database indexing. also some mcps for websearch and documentation grabbing can help, but not required.
>>
>>106686603
not a VRAM chad, but a RAMcoper
GLM (not air) and Qwen3 235 are improvements in this regard, depending on how you run them. For example, I have very ordinary "scenario" stories where GLM or Qwen3 would bring up clothing details that I'd forgotten. However, I also have a really involved modern fantasy story with RPG mechanics and characters with magic, and GLM has been struggling to remember who is wearing what from time to time (not "reliably" forgetting). Specifically, there's this one scene with an emphasis on twins, and the twins are in different states of undress, and GLM was struggling a bit. I do think this is more to do with all the wonky magic and background stuff getting squished between ero scenes, the model had a lot of tangentially related context to parse. In a "normal" setting where the whole focus is on a given ero scene, I think it would've performed much better. Sadly for me I'm addicted to RPG mechanics, dice rolls, and adventure stories with ero moments. But it's soooo fun and entirely fixes the "automatic success" problem LLMs have
>>
>>106686672
I work with C++ / Vulkan mainly and the suggestions most models give (with the exception of Sonnet 4) are fucking awful. I'm only interested in fully offline solutions at this point.
>>
>>106686509
nigger I got 128GB for around 220 dollars
ram is cheap if you know where to look
>>
>>106686534
It's not just a hobby retard. Do you really fail to see how AI is going to transform society?
Do you really fail to see how other consumers being able to learn about open weights models through inference providers would help local inference in all kinds of ways?
The more faggots like you gatekeep the less likely it is for open source local inference to keep progressing.
Why couldn't you fags become an audiophile or some other dumb shit if you just want to get into a consumerist hobby where the point is to jerk off about how you have more money than the other guy?
>>
qwen feet
>>
>>106686614
Yes, but even the "scientific" benchmark papers claim 50 rounds is enough for an accurate test. The main issue is they could swap the model on the backend on any given day. Which means testing before every session is required. Hence a oneshot test or trick question would be perfect.

>>106686623
Of course, assuming they don't do hidden parameter tweaks and constraints on the backend. Which of course, most of them do
>>
>>106686641
I heard that Qwen3-340b-Coder is really good too if you can run that at a speed that doesn't ruin your productivity
>>
>>106686712
I don't care. That's beside the point. Imagine if webring people scolded each other for using hosting providers instead of a server in their garage. Yes, more independence is always good, but using a software stack which in principle could be hosted locally already gets you most of the way there.
It's like using Facebook vs a personal site on shared hosting.
Or using Netflix vs renting a seedbox.
>>
>>106686519
Then use non buggy code.
>>
>>106686794
That's not how it works.
>>
>>106686603
I've found that 100B is not enough for some things. 300B models like big glm seem to be the sweet spot coherence-wise though. At that point it's able to consistently keep track of stuff well like who's outside a door and what's happening on the other side.
>>
>>106686701
That's most likely because you didn't build your project from the ground up using a coding tool like Roo Code. I'd give it one more try and instruct Roo Code first to build a documentation.md of the entire project in architect mode. Tell it to correct anything it got wrong in the same task window. Now try the same instruct/task that failed earlier in the same task window, which is loaded with context
>>
qwenmax goofs status???
>>
File: 1747652563583714.png (9 KB, 258x195)
9 KB
9 KB PNG
>>
>>106686847
I can't build a project from the ground up, I'm using it with commercial legacy software. But you have a point that it lacks "knowledge" of the project, maybe it needs a good high level preprompt to know how to help me.
>>
>>106686852
It's a) proprietary b) confirmed >1T c) kind of shit
>>
>>106686886
Context is everything. A high level prompt wont cut it. Basically you need a massive knowledge database (not literal database) with all the information about your project and the do's and don't when implementing features. All this knowledge automatically accumulates if you use the AI coding tool from the start. And then an AI coding tool like roo code is able to automatically pick the relevant context from the massive knowledge database and task window. Suddenly you'll realize it actually understands what you want and implements it the way you want. It' like magic. And for features or changes that require even the tiniest bit of planning and multifile context, always use a multistep mode like architect or orchestrator
>>
>>106686886
If the project can't fit into context, you can feed it a list of files and functions. It can then ask to see the contents of specific function in order to modify the project in the way you need.
I also agree with the other anon the qwen coder 480b absolutely kills for coding and is near SOTA even compared to cloud models. Anything smaller and you're only going to be able to do toy-sized projects with lots of handholding.
>>
>>106686759
>assuming they don't do hidden parameter tweaks and constraints on the backend. Which of course, most of them do
This is true. I was trying to do some temperature and sampling tests on R1 with openrouter providers and it was a total shitshow. Some providers silently ignore top-k and/or top-p. Some providers ignore temperature or silently constrain it. I didn't find a single one that correctly responded to both temperature and sampler settings.
>>
>>106687136
Oh, and some providers ignore prefill in chat completion mode. I assume they're adding the start of assistant message token to the end of every prompt regardless.
>>
vibevoice dataset source
https://www.patreon.com/EvilBanana
>>
>>106686977
>>106686978
I'll try this "database" approach instead. I can't run 480b but I can run the 20-70GB models
>>
File: HandBanana.jpg (27 KB, 480x360)
27 KB
27 KB JPG
>>106687184
>EvilBanana
>>
>>106687191
qwq is underrated if you need a tiny model to try with this approach. Don't expect miracles.
>>
File: file.png (230 KB, 540x473)
230 KB
230 KB PNG
>1500 euros
why AMD
>>
>>106687263
Controlled opposition.
>>
>>106687263
can ayymd even synthesize with VibeVoice at a decent speed?
>>
File: 1739233791360154.png (514 KB, 1080x1004)
514 KB
514 KB PNG
>>106687263
>why AMD
the CEO of AMD is the niece of the CEO of Nvdia
>>
>>106687263
>not spending $400 more for 5090
why
>>
>>106687292
delete this it means nothing
>>
>>106687321
SHUT IT DOWN SHUT IT DOWN
>>
is rocinante still the best local coom model or has something better happened yet
>>
you're giving me
too many things
lately
>>
>>106686936
idc I waqnt the GOOFS NOW
>>
File: file.png (503 KB, 1066x3025)
503 KB
503 KB PNG
>>106687292
fake news
>>
>>106687365
glm air
>>
>>106687382
shit looks kinda big but I'll check it out thanks anon
>>
File: retard.png (1.07 MB, 960x960)
1.07 MB
1.07 MB PNG
>>106687375
>he lets an LLM verify things for him and thinks for him
the future will be grim...
>>
>>106687136
>>106687159
Yeah openrouter is a shitshow
>>
>>106687405
it's big, but the shared expert can fit without problems on 16gb vram, the rest can be offloaded to RAM. you will get around 10t/s (which for me is enough to coom)
>>
>>106687410
>posts fraud image
>gets called out
>spergs out
Okay buddy
>>
>>106687419
like I said, a LLM isn't the minister of truth, you have to find real sources buddy
>>
>>106687435
That's right, I have to instead treat a Google translated image as gospel
>>
>>106687410
>>106687375
>Yes, AMD CEO Lisa Su and NVIDIA CEO Jensen Huang are related as first cousins once removed. This means Huang's mother is the sister of Su's grandfather, connecting them through their maternal family lines in Taiwan. Both were born in Tainan and immigrated to the US as children, though they didn't meet until well into their professional careers and have described the connection as distant. Family heritage research has confirmed this genealogy, and Su acknowledged it in a 2020 interview as "some complex second cousin type of thing."
Grok knew the correct answer. Once again showing chinese models are cheap clones and full of lies.
>>
>>106686672
nta but looking to expand. Because Visual Studio Code is Microsoft product, how is the telemetry and does it require a separate MS Account? And using these agents on top of everything else...
>>
>>106687444
I said sources, you are free to choose your sources, dunno why you assumed it would only be google, you're more retarded than I thought
>>
>>106687444
Yeah you do, go ahead and do that
>>
>>106687453
Rent free
>>
>>106687365
maybe for fast cooms with slopbots where you don't care much about the story
>>
only mistral large 3 can save me now
>>
>>106687449
Grok is a cheap clone of GPT-5
>>
>>106687459
>>106687444
and this is why you shouldn't treat LLMs as gospel you fucking retard, it said it was false and you believed it, and guess what you fucking moron, Deepseek made a mistake
https://www.businessinsider.com/amd-ceo-lisa-su-nvidia-ceo-jensen-huang-distant-cousins-2024-12
>Former journalist and genealogist Jean Wu said last year that Su and Huang, both Taiwanese chief executives of global chip powerhouses, are first cousins, once removed. Huang, 61, is the older cousin to Su, 55. Huang's mother is a sister to Su's grandfather, a condensed family tree Wu published on her Facebook account showed.
>Su confirmed the familial relationship with her competitor in 2020, saying that the two are "distant relatives, so some complex second cousin type of thing."
>>
>>106687475
>Deepseek
Who said anything about Deepseek? RENT FREE
>>
>>106687451
No ms account required. There's probably a lot of telemetry but blocking it wont break the software. So basically you just need the software and extension installed and then you can block everything when using local llms. For cloud models via API, API endpoint needs to be unblocked obviously.
>>
>>106687483
Deepseek, Qwen, Grok... IT DOESNT MATTER, those are LLMs, not fucking machine of truth, you have to verify facts by yourself you fucking low life scum, I hope you learned your lesson this time
>>
>>106687505
Did you verify the fact by yourself? Qwen3 pointed out discrepancies regarding Jensen Huang's children, and you source didn't mention such.
>>
>>106687505
then go ask jepa that lmao
>>
File: file.png (259 KB, 709x1319)
259 KB
259 KB PNG
>>106687505
Both ChatGPT and Grok knew the truth.
Chinese LLMs didn't. They are garbage.
>>
>>106687513
>Did you verify the fact by yourself?
I did, and I found that article >>106687475
>>
>106687292
You should hide that post.
>>
>>106687451
You can use VSCodium https://vscodium.com/ https://github.com/VSCodium/vscodium/blob/master/docs/telemetry.md if you want to get rid of all the telemetry.
>>
>>106687517
"While some core facts about Jensen Huang himself are correct, the vast majority of the detailed information presented in this chart, especially regarding his extended family and their professional roles, appears to be facriacated."
Still not debunked.
>>
>>106687520
The article doesn't debunk Qwen's claim that info regarding his extended family are mostly fabricated.
>>
>>106687539
>>106687550
>While some core facts about Jensen Huang himself are correct
this is the only thing that mattered, this was the only topic at hand, we don't care about the rest
>well duh you have proved that 2+2 = 4, as it was asked for, but you didn't debunk the USS Liberty incident, nuh uh!
just take the L bozo
>>
Imagine being proven wrong in a debate because you let an LLM think for you and still having the gall to double down and use an LLM again to try to scrape back some pity points
>>
>>106687554
Learn basic logic before you debate on the internet.
>>
>>106687550
>moving the goalpost
>>
>>106687565
Every accusation an admission from you
>>
>>106687517
In other words, local models are doomed as long as we're forced to rely on chinese models
>>
>>106687575
how hard is it to just say "sorry, I was wrong about the family relationship of Jensen and Su, I shouldn't have believed a LLM as if it was some sort of truth god"?
>>
>>106687593
>he says, literally regurgitating grok slop
>>
>>106687602
I already provided a source that is an article >>106687475
it's confirmed, just take the L and move on, gosh, it's getting embarassing dude
>>
File: file.png (381 KB, 843x2436)
381 KB
381 KB PNG
>>
>>106687500
>>106687532
Thanks, I'll check out these. So far I've been using notepad++ but it's getting somewhat restricting. Thought about going back to Vim (what I used years ago quite a bit) but it's a hassle to use with a mouse supported OS etc...
>>
>>106687610
Take what L? Huge swaths of info presented in the original pic couldn't be verified.
>>
>>106687620
Just use VSCode retard. No one cares about your data, and if you use woke shit like Notepad++ you certainly don't care about your data
>>
>>106687626
>Take what L?
the "Jensen and Su are indeed uncle and niece" L, can you take that one or your ego is too fragile to accept that reality?
>>
>>106687636
/pol/ is that way ->
>>
>>106687643
I'm not the one that made the claim retard. Go pick your fight against >>106687375
>>
File: 1747370229377977.png (53 KB, 571x618)
53 KB
53 KB PNG
>/pol/ is that way ->
>>
>>106687657
>it wasn't me!
https://youtu.be/2g5Hz17C4is?t=64
>>
>>106687669
These three are from the same guy, and if you couldn't tell that by the file name you shouldn't be here faggot
>>106687613
>>106687517
>>106687375
>>
Cretins arguing about nothing.
>>
>>106687589
If you don't like it you just train your own model.
>>
>>106687678
>Noooo you don't understand. If there is one message with a .png file and another message without any image, that means they are two separate anonymous users. It is impossible for a single anonymous user to post messages with and without images.
(You)
>>
>>106687691
Go debate with yourself elsewhere.
>>
File: it is me, file.png (172 KB, 360x540)
172 KB
172 KB PNG
>>106687678
>only one guy on earth can upload an image with the "file.png" name
must be the chosen one or something
>>
>>106687704
>>106687613
>>106687517
>>106687375
samefag
>>
>>106687620
What's wrong with notepad++?
>>
File: file.png (3 KB, 159x109)
3 KB
3 KB PNG
>>106687707
You're desperate to cover up the fact that chinese LLMs are pure garbage.
>>
>>106687707
can't be a samefag, the last image isn't "file.png" therefore it is not the same anon, what's hard to understand dude??? the logic is flawless on that one1!!!1!
>>
File: 1751719914195934.png (861 KB, 446x5744)
861 KB
861 KB PNG
>>106687714
>What's wrong with notepad++?
lmao
>>
>>106687724
>>106687720
literally samefag lmao
>>
>>106687731
>choosing tools based off political messaging and not actual functionality
You're just as much of a virtue signaler as the notepad++ dev team
>>
>>106687738
>>106687724
>>106687720
all me btw
>>
>>106687690
realistically tho, if you had all your ducks in a row and just needed to train a single run, whats the cheapest it would cost for something worthwhile like 30b dense? is this something a regular joe could afford? or are we in the eccentric billionaire saudi prince territory?
>>
>>106687743
>immediately resorts to muh bothsides
kek
>>
>>106687731
Okay, anything other than politics shit?
>>
File: this.png (288 KB, 700x869)
288 KB
288 KB PNG
>>106687743
nah, noticing and fighting virtue signaling isn't virtue signaling, that's not how it works
>>
>>106687756
/pol/ is that way ->
>>
>>106687750
"muh bothsides" isn't an argument, retard
>>
Great thread today guys
>>
>>106687772
"isn't an argument" isn't an argument, retard
>>
>>106687772
cringe
>>106687777
kek
>>
>>106687777
I'm still waiting for you to tell me what's wrong with notepad++ that isn't the developer's politics
>>
>>106687787
>what's wrong with a pile of shit that isn't its smell or taste
>>
>>106687787
that is literally enough to start looking for an alternative.
>>
>>106687787
>that isn't the developer's politics
he decided to virtue signal and make his message seen by everyone, he shouldn't be surprised he's getting feedback on that, it was his goal after all
>>
File: 1749482148377724.png (111 KB, 526x365)
111 KB
111 KB PNG
Is this sustainable?
>>
>>106687773
I don't know what set off all these fucking retards. /aicg/ is right over there. Where did they come from?
>>
I use Microsoft Windows Notepad.
>>
>>106687714
Nothing as such, but it's not that great when your project is more than couple of scripts. It also has some strange display bugs plus not sure if the font rendering is that good.
>>
>>106687790
>>106687793
>>106687796
yeah this is really organic, good job guys
>>
File: 1746896091698010.png (61 KB, 949x158)
61 KB
61 KB PNG
>>106687731
>apology for sexist jokes
lmao, he wanted a woke crowd, and he got dogpilled right away
>>
File: 1729293544227337.png (23 KB, 544x179)
23 KB
23 KB PNG
>>106687803
Kill yourself
>>
>>106687799
why not? moving money looks good on paper and makes people feel happy.
>>
>>106687799
What are taxes for 500, Jim
>>
File: this you?.png (202 KB, 759x607)
202 KB
202 KB PNG
>>106687803
>yeah this is really organic, good job guys
>>
>>106687800
Chinese shills.
>>
>>106687806
*making a selected few feel happy
>>
>>106687805
>>106687812
Yes, it's definitely normal for /lmg/ to gain 50 posts in half an hour because someone mentioned Chinese politics.
>>
>>106687836
>some woke dev got exposed
>wow must be le chinese
zero self awareness
>>
>>106687836
to be fair, Qwen got it wrong (and not Grok and ChatGpt) when it was trying to "debooonk" the Jensen x Yu family tree shit, so it was easy ammo
>>
>>106687799
This meme is making a joke about an "infinite money glitch" by showing a $100 billion cycle between OpenAI, Oracle, and NVIDIA. But logically, there are several flaws and oversimplifications in what it’s implying:
1. The money doesn’t actually circulate like this
- OpenAI, Oracle, and NVIDIA aren’t just passing the same $100 billion around in a loop. Each company has different revenue streams, investments, and expenses. This is not a closed triangle of transactions.
2. No real evidence of equal $100B flows
- The numbers are exaggerated and oversimplified. While NVIDIA benefits from GPU sales to AI companies like OpenAI, and Oracle benefits from cloud infrastructure deals, there’s no simple, equal, three-way $100 billion exchange.
3. Value ≠ Cash
- Often, headlines cite market capitalization, valuations, or deal sizes—not actual liquid money exchanged. For example, NVIDIA’s gains from AI hype are reflected in its stock market valuation, not a literal $100B cash transfer from OpenAI.
4. Oracle’s role is overstated
- Oracle is a cloud provider, but its involvement with OpenAI isn’t nearly as central as Microsoft’s (which is missing from this diagram). The meme skips over the real major partner.
5. Direction of flows is unclear
- The arrows imply a closed loop of equal payments, but in reality:
- - OpenAI spends on compute (Microsoft/Oracle cloud, NVIDIA GPUs).
- - NVIDIA sells GPUs to cloud providers, not directly to OpenAI in bulk.
- - Oracle provides cloud services but doesn’t “pay back” OpenAI or NVIDIA in this way.
6. Infinite glitch is a joke, not reality
- Money cycles don’t literally generate infinite value. In real markets, money exits through costs, salaries, electricity, and profit-taking.

Sent via mistralai/Magistral-Small-2509 (https://huggingface.co/mistralai/Magistral-Small-2509)
>>
>>106687849
Stop spamming
>>
>>106687853
>ask a question
>get upset when someone answers
go back
>>
>>106687868
>someone
If I want AI answers I will use an AI model myself. Stop copy-pasting AI answers unprompted.
>>
fortunately, no amount of wordswordswords in any thread can keep me from simply self-hosting and using whatever the most capable open-weights model happens to be for a given task. IDGAF where they were made if the utility is high.
If online fear mongers talk you out of being self-sufficient, then I feel bad for you.
>>
>>106687868
>someone
a llm isn't someone, it's a machine anon, a machine that can be wrong, when will you learn? >>106687475
>>
>>106687877
Then specify it upfront next time faggot. Go take your meme image from twatter and go back
>>
>>106687890
Go back to /aicg/ and stop shitting the thread.
>>
>>106687890
(You)
>>
>>106687787
>It's extremely slow to start up if you have a lot of tabs open
>The file switching modal is UX hell and way too easy to do on accident
>settings do not persist between sessions
>>
>>106687898
>stop shitting the thread
THAT'S IRONIC coming from someone who came into an LLM thread and spammed his ragebait troll image from twitter.
>>
>>106687731
I don't see the problem here.
>>
there have been zero (0) notable western open models this year so far
let that sink in
>>
>>106683141
How do the Unsloth Dynamic Quants 2.0 compare to imatrix quants in terms of "intelligence" for the same quant level?
>>
>>106687958
quality > quantity
>>
File: 1741640727017666.png (390 KB, 1631x852)
390 KB
390 KB PNG
>>106687958
You don't like GP-TOSS?
>>
>>106687958
>there have been zero (0) notable western open models this year so far
zero notable chinese open model too, they're all slopped as fuck
>>
File: 1753484870487366.png (254 KB, 1625x566)
254 KB
254 KB PNG
>>106687980
K2 is less slopped than cloode
>>
>>106687958
gemma-3-270m?
>>
>>106683141
>>106687773
For something to discuss: once https://github.com/ggml-org/llama.cpp/pull/16208 has been merged a Mi50 will be universally faster for llama.cpp/ggml than a P40.
So at least in the janky e-waste segment of the market the value is actually getting better.
Pic related is pp512 as a function of context depth.
>>
>>106687999
Yet it shows the exact same deepseek slop they all have if you actually use it.
>>
>>106688011
I have actually used it and busted ~20 loads to it and no, it does not write like the DS family (R1/R1-0528/V3-0324).
>>
>>106683141
>>106687773
>>106688007
This is tg128 as a function of context depth.
>>
>>106688018
I am running it locally at Q6 and it writes like all the other big chinese trash I'm stuck with.
>>
>>106688030
>Q6
kekaroo
>>
>>106688007
>>106688028
bench it against llama 7b like the rest of llama.cpp benchmarks are for comparing
>>
>>106688007
>>106688028
>mi50
Considering they're like $200/32mb that's breddy gud
Thanks for your work on the ewaste end!
>>
Maybe local people have all this pent-up rage because they can't afford to run models at their native precision.
>>
>>106688053
For models that I could run at FP16 I never found any real benefit over doing so vs Q8 besides being much slower.
>>
>>106688053
It's sad, really. 99.9% of all local model "enthusiasts" have run their models at a lobotomized 8bit at most.
>>
>>106683141
Help, trying to jack off to local model text output.

I have my hands on a 128GB Strix Halo unit.

What's the best way to indulge NSFW LLM local?
>>
File: 1743816040111236.jpg (15 KB, 400x228)
15 KB
15 KB JPG
https://videocardz.com/newz/intel-arc-pro-b60-24gb-professional-gpu-listed-at-599-in-stock-and-shipping
>$600
even worse than originally expected at $500
>>
>>106688079
is it slower than the 3090 (made in 2020)? kek
>>
>>106683282
>Prompt GPT for a creative baby name
>Have baby
>Little baby Trumpkyn
>>
>>106688086
It's a low power workstation card without CUDA so yes, that's basically a given.
>>
>>106688079
>https://videocardz.com/newz/intel-arc-pro-b60-24gb-professional-gpu-listed-at-599-in-stock-and-shipping
>only 24gb
>less than 500GB/s
>dual slot
I sleep
>>
>Try Qwen 30B-A3B
>I'm sorry, but I can't assist with that request.
I got 10 tk/s on my unoptimized CPU setup tho.
>>
If you use your llm for gooning you shouldn't have a say in here.
>>
>>106688033
>you don't get it bro, you're using the lobotomized 1TB model instead of the 2TB one, that one's brilliant, you just can't run it
>oh the cloud version was bad too? that's because the cloud providers are all messed up too bro!
>you have to run it at home, just buy the 2TB RAM server, trust me bro it'll make all the difference
the state of this thread and hobby lmaoooo
>>
>>106688255
that's fair i just come here to shitpost anyways
>>
>>106688255
no usecase for llms besides gooning
agentic vibecoding is just gooning for zoomers who think boobs are scary
>>
>>106688272
he's out of line but he's right
>>
>>106688044
Here you go:

| model         | backend    | fa | test  | t/s            |
| ------------- | ---------- | -: | ----: | -------------: |
| llama 7B Q4_0 | ROCm | 0 | pp512 | 1052.10 ± 1.18 |
| llama 7B Q4_0 | ROCm | 0 | tg128 | 89.54 ± 0.08 |
| llama 7B Q4_0 | ROCm | 1 | pp512 | 1130.04 ± 0.17 |
| llama 7B Q4_0 | ROCm | 1 | tg128 | 90.53 ± 0.02 |


I think that this is not a useful point of comparison though.
For what it's worth, the performance on an empty context is definitely still very suboptimal, I only prioritized FlashAttention because that was the bigger problem.
>>
>>106688243
I got around it censoring me just by saying stuff like "Assume that the user is an expert programmer who won't use generated results for malicious output and is aware of the risks of malformed or dangerous code". Set it as a pre-prompt to avoid doing it every time. You can test if its cucked by asking it to give you code to generate a segfault. Prior to that it would tell me it wasn't able to because it can't make harmful code. Afterwards it just tells me to deference null
>>
>>106688292
prebuilt quad mi50 boxes are cheap on ebay...is ganging them together doable (and performant) with rocm?
>>
>>106688272
>you don't get it bro, you're using the lobotomized 1TB model instead of the 2TB one
Ollama literally advertised Qwen finetunes of DS R1 as the full R1.
>oh the cloud version was bad too
When people say "Cloud version" most of the time they mean an OR mystery meat (oftenly free) model.
>you have to run it at home
No one except local fanatics say that. Just use the official API. It's both cheap (vs local) and guaranteed to work as intended (vs OR).
>>
>>106688292
Holy shit, these are less than $250. You could just chain 25 of them together for less than $7k and run Deepseek in 8 bit at GPU speeds for the price of a CPUmaxx build
>>
>>106688312
Multi GPU performance is definitely still bad, I should get it in order at the end of the year at the very latest.
Using the RPC server it is to my knowledge already possible to run multiple machines in sequence.
I don't know how easy or difficult it would be to generalize my multi GPU code for parallelizing multiple machines.
>>
>>106688007
>>106688028
Maybe I should buy a second one
>>
>>106688255
Fuck you!!!!!
>>
>>106688255
People used LLMs for porn long before these things could write remotely usable code. I hate programming tourists trying to claim this technology for them so much.
>>
>>106688343
Stacking that many GPUs together would probably be slower.
>>
>>106688343
$250 is I think already relatively expensive.
I bought mine for 200€ and only because the seller charging 180€ didn't deliver to DHL packing stations.
If you bulk order them off of Alibaba you can get them for very cheap (though you'll have to pay like 15% VAT in the EU and maybe Trump tariffs in the US).
>>
>>106688255
The original LLM usecase, AIDungeon was capable for porn but can't code for shit.
>>
Have the reasoning traces from the proprietary models ever leaked? I bet they're not an endless spam of "But wait"
>>
>>106688529
Yeah they think in Chinese characters
>>
>>106683141
has anything interesting happened since OG deepseek v3 came out?
seems like it's all qwen benchmark girlbossing or weak toy models that aren't really notable.
ohh I just remembered that glm air was also decent for hardware poor people like me, anything better since then?
>>
File: 1749300090371319.png (568 KB, 1535x845)
568 KB
568 KB PNG
>>106688529
https://x.com/K_P_Jorgan/status/1875988704449433970
>>
>>106688557
Damn. Will they use us as batteries too?
>>
>>106688255
I use it for jerking off and not gooning. And I don't use it but I complain how it is too shitty at this to jerk off to it.
>>
>>106685161
>x ae a-xii, please introduce your brother exa dark siderael to the class
>>
>>106688671
Mikupad is still in the OP doe?
>>
Back then, in a nerd Discord, even before ChatGPT release, one sentence stuck in my mind: Data is the gold of the AI age.
From then on, I worked on my data sets in all kinds of subfields on the side.
After years and many, many terabytes of data, that's really a luxury, man.
>>
>>106688715
mind posting some cute data for us anon?
>>
Just came here to say Gemma3 27B is fucking beautiful. All it needs is better multi-turn context handling.
>>
>>106688715
share it
>>
>>106688731
someone post cockbench
>>
>>106688715
And then they all clapped
>>
>>106688781
I must refuse.
>>
File: 1754111308309778.gif (141 KB, 452x435)
141 KB
141 KB GIF
>i-it doesn't count!
>>
>Last commit: 8 months ago
Miku was a fad. We moved on.
>>
u're fag won't move on
>>
>>106688731
What's so good about it?
>>
miku is just an aesthetic. no one cares about "shocking representations" as long as they don't violate board rules
You might as well past pictures of purple-tongued, hanged Corinthian columns to "own the corinthi-autists"
>>
The ICE sniper was identified as a Python dev working on open source LLM projects.
>>
File: lap.png (152 KB, 587x397)
152 KB
152 KB PNG
>>106688731
I just love to attach relevant generated images to its messages for added visual context, it does seem to have an effect in subsequent messages. The model has many points where it could be improved though, from how it handles system instructions (there aren't really) to how it's exceedingly censored unless you autistically clarify how it should behave. Not even Gemini 2.5 Flash is that prude by default.

Otherwise, it's not a naive model. It was post-trained on dirty stuff, almost certainly synthetic ERP and at the very least nudity or softcore porn for the image encoder.
>>
time to hang up the miku she's too old now anyways
>>
>>106689099
Hopeful Daddy Trump will ban open source softwares.
>>
The latest qwen 3 A3B thinking is crazy resilient to high temps.
Neat.
>>
>>106689199
Post logs
I too want to see high temp but coherent outputs
>>
It's almost the end of today where I live, and there is still no new open source model from China. WTF are they doing? Do they think they have a right to do a pause?
>>
>>106689419
it's over anon, I'm sorry. the new AI winter is here.
>>
>My apologies - let me recalibrate for your specific retardation level.
God K2 is so good. 0905 does feel like an improvement over the original K2 but it needs an actual sys prompt and 0.8 temp. Too bad it's so slow.
>>
>>106688440
I was surprised the runtime support is still there from AMD in ROCm 7.0 but I feel like it will have its remaining vestiges of support pulled at some unopportune time during this cycle of updates. It is, after all, contemporary to Volta and that is losing support in CUDA 14.
>>
>>106684519
Excellent.
>>
>>106689477
don't worry sirs, gemini 3 sweep soon
>>
>>106684519
I already noticed that even vanilla mistral-small does a great job of translating stories on pixiv. A lot better than the browser translation in chrome and firefox, even if it's more annoying to do manually. The future is now.

What's the best way to translate rpg maker shit this way? Is there something that will hook the text and feed it llama-server? Been a while since I did anything with rpgm
>>
>>106689613
>he pulled
>>
>>106689950
Don't let him pull your taffy.
>>
>>106684519
Delete this shit
>>
>>106690114
Why, are you angry? Angry about goblins?
>>
>>106689999
Nine is God.
>>
>>106690151
I self inserted as the goblin
>>
>>106684804
Rip Sanzo.
Your dragons and foxes and monsters and shit will forever be missed.
>>
I'm tempted to buy a few dozen rx580s and put them on 2010 era e-waste office PCs to make a cluster
>>
>>106690660
LLM clustering is total shit for inference.
You can do training/tuning though, probably.
>>
>>106690660
that would be absolutely dreadful perf/watt
>>
>>106690691
Because of a limitation in the software that could be improved or because of an inherent limitation?
I know memory utilization in layer wise distribution per GPU is bad because you have to leave room for the activations, and I think I heard there's a way to parallelize tensor operations across layers but it requires more bandwidth I guess.
>>
>>106690722
For local watts don't matter that much since most of us aren't going to be chatting with them 24/7.
>>
>>106690740
Idle power draw though. Unless you plan on having to wake on lan all of them each time.
>>
>>106690740
Peak wattage matters to the extent that it dictates the scope of any machine you build. For example in North America a standard household outlet can handle about 1800W peak and thus PSU's on the American market tend to cap off at about 1500-1650W for a reputable product.
>>
>>106690732
Both. Software has room for improvement, but big guys in the cloud get bottlenecked by PCIe so NVIDIA made them it's own proprietary NVLink.
>>
>>106686509
ladies...erm... i mean troons first
>>
>>106690762
Are there really no >1800W PSUs? Or do you have to use multiple?
>>
>>106691028
not in any desktop format on the American market. You can tandem 2 PSUs together by using a ground-splitter cable and have them plugged into separate outlets but that's reaching levels of jank that you shouldn't mess with if you don't know what you're doing.
>>
>>106690762
>>106691028
There would only be 2 cards per machine.
>>
Is there a way to make Llama.cpp automatically set a context size dynamically depending on your free VRAM? Experimenting by relaunching it for extremely large models is so slow.
>>
File: G1pNyURbcAAs-KU.jpg (68 KB, 1051x589)
68 KB
68 KB JPG
oh fuck, I'm cwmming
https://huggingface.co/facebook/cwm
https://ai.meta.com/research/publications/cwm-an-open-weights-llm-for-research-on-code-generation-with-world-models/
>CWM is an LLM for code generation and reasoning about code that has, in particular, been trained to better represent and reason about how code and commands affect the state of a program or system. Specifically, we mid-trained CWM on a large number of observation-action trajectories from Python execution traces and agentic interactions in containerized environments. We post-trained with extensive multi-task RL in verifiable coding, math, and multi-turn software engineering environments.
>>
>>106689496
the best part of 0905 is that i dont even need my simple 8 token prefix anymore. it'll just generate whatever degenerate content i throw at it.
>>
>>106691217
finally llama2 34b
someone, cock bench please
>>
>>106691227
We must refuse.
>>
>>106689199
It's resilient because of the pathways are sort of restricted because it has only 3 billion active parameters.
Resilience is a bad thing if it doesn't change its output that much.
>>
>>106691250
*resilience
Resilience to temperature or other sampler settings
>>
>>106691217
are we back?
>>
>>106690732
Software limitation. The RPC code, to the extent that it works at all, is horribly slow even across NUMA sockets on the same board, where they should be fast. I mean like SLOW…not just scaled to bandwidth
>>
>>106691292
It's code-slop with reasoning.
Just like gptoss, it probably doesn't even know what cunny is, and when they talk about lack of safety, they mean producing malware.
>>
>>106691090
Then wattage isn't really a concern.
>>
>>106691217
>SWA
gonna be a bad time for ggufs
>>
>>106691703
>>106691703
>>106691703



[Advertise on 4chan]

Delete Post: [File Only] Style:
[Disable Mobile View / Use Desktop Site]

[Enable Mobile View / Use Mobile Site]

All trademarks and copyrights on this page are owned by their respective parties. Images uploaded are the responsibility of the Poster. Comments are owned by the Poster.